
Merge tag 'drm-misc-next-2017-07-26' of git://anongit.freedesktop.org/git/drm-misc into drm-next

drm-misc-next-2017-07-26:
Core Changes:
- A couple fixes to only opening crc when needed (Maarten)
- Change atomic helper swap_state to be interruptible (Maarten)
- fb_helper: Support waiting for an output before setting up (Daniel)
- Allow drivers supporting runtime_pm to use helper_commit_tail (Maxime)
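
The runtime_pm item above refers to the new drm_atomic_helper_commit_tail_rpm()
helper (added by the "implement drm_atomic_helper_commit_tail for runtime_pm
users" patch in the shortlog below). A minimal, hedged sketch of how a driver
would opt in; the foo_ names are hypothetical and not taken from these patches:

    /* Commit planes only after the CRTC has been enabled, so the commit
     * tail runs with the hardware powered up (runtime_pm friendly). */
    static const struct drm_mode_config_helper_funcs foo_mode_config_helpers = {
            .atomic_commit_tail = drm_atomic_helper_commit_tail_rpm,
    };

    /* In the driver's load/bind path: */
    drm->mode_config.helper_private = &foo_mode_config_helpers;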

Driver Changes:
- misc: Use %pOF to print device node names (Rob)
- Miscellaneous fixes
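
The %pOF conversion mentioned above replaces open-coded full_name printing with
the %pOF printk format specifier, for example (taken from the armada hunk
further down):

    /* Before: */
    dev_warn(dev, "parent device of %s is not available\n", remote->full_name);
    /* After: */
    dev_warn(dev, "parent device of %pOF is not available\n", remote);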

drm-misc-next-2017-07-18:
UAPI Changes:
- Fail commits which request an event without including a crtc (Andrey)

Core Changes:
- Add YCBCR 4:2:0 support (Shashank)
- s/drm_atomic_replace_property_blob/drm_property_replace_blob/ (Peter)
- Add proper base class for private objs instead of using void* (Ville)
- Remove pending_read/write_domains from drm_gem_object (Chris)
- Add async plane update support (ie: cursor) to atomic helpers (Gustavo)
- Add old state to .enable and rename to .atomic_enable (Laurent); see the sketch after this list
- Add drm_atomic_helper_wait_for_flip_done() (Boris)
- Remove drm_driver->set_busid hook (Daniel)
- Migrate vblank documentation into the source files (Daniel)
- Add fb_helper->lock instead of abusing modeset lock (Thierry/Daniel)
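
As referenced from the .atomic_enable item above, the CRTC helper hooks are
renamed and now receive the old state. A hedged sketch of the converted hook
pair; the foo_ names and foo_hw_* calls are hypothetical, but the signatures
match the driver conversions in the diffs below:

    static void foo_crtc_atomic_enable(struct drm_crtc *crtc,
                                       struct drm_crtc_state *old_state)
    {
            /* old_state is the CRTC state prior to this commit. */
            foo_hw_power_on(crtc);
            drm_crtc_vblank_on(crtc);
    }

    static void foo_crtc_atomic_disable(struct drm_crtc *crtc,
                                        struct drm_crtc_state *old_state)
    {
            drm_crtc_vblank_off(crtc);
            foo_hw_power_off(crtc);
    }

    static const struct drm_crtc_helper_funcs foo_crtc_helper_funcs = {
            .atomic_enable  = foo_crtc_atomic_enable,
            .atomic_disable = foo_crtc_atomic_disable,
    };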

Driver Changes:
- stm: Add STM32 DSI controller driver (Philippe)
- amdgpu: Numerous small/misc fixes
- bridge: Add Synopsys Designware MIPI DSI host bridge driver (Philippe)
- tinydrm: Add support for Pervasive Displays RePaper displays (Noralf)
- misc: Replace for_each_[obj]_in_state to prep for removal (Maarten)
- misc: Use .atomic_disable for atomic drivers (Laurent)
- vgem: Pin pages when mapped/exported (Chris)
- dw_hdmi: Add support for Rockchip RK3399 (Mark)
- atmel-hlcdc: Add 8-bit color look-up table format (Peter)
- vc4: Send vblank event when disabling a crtc (Boris)
- vc4: Use atomic helpers for fence waits (Eric)
- misc: drop drm_vblank_cleanup cargo-cult (Daniel)

Cc: Daniel Vetter <daniel.vetter@intel.com>
Cc: Boris Brezillon <boris.brezillon@free-electrons.com>
Cc: Eric Anholt <eric@anholt.net>
Cc: Peter Rosin <peda@axentia.se>
Cc: Mark Yao <mark.yao@rock-chips.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Andrey Grodzovsky <Andrey.Grodzovsky@amd.com>
Cc: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
Cc: Gustavo Padovan <gustavo.padovan@collabora.com>
Cc: Thierry Reding <treding@nvidia.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Peter Rosin <peda@axentia.se>
Cc: Shashank Sharma <shashank.sharma@intel.com>
Cc: Philippe CORNU <philippe.cornu@st.com>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: Rob Herring <robh@kernel.org>
Cc: Maxime Ripard <maxime.ripard@free-electrons.com>

* tag 'drm-misc-next-2017-07-26' of git://anongit.freedesktop.org/git/drm-misc: (171 commits)
  drm/hisilicon: fix build error without fbdev emulation
  drm/atomic: implement drm_atomic_helper_commit_tail for runtime_pm users
  drm: Improve kerneldoc for drm_modeset_lock
  drm/hisilicon: Remove custom FB helper deferred setup
  drm/exynos: Remove custom FB helper deferred setup
  drm/fb-helper: Support deferred setup
  dma-fence: Don't BUG_ON when not absolutely needed
  drm: Convert to using %pOF instead of full_name
  drm/syncobj: Fix kerneldoc
  drm/atomic: Allow drm_atomic_helper_swap_state to fail
  drm/atomic: Add __must_check to drm_atomic_helper_swap_state.
  drm/vc4: Handle drm_atomic_helper_swap_state failure
  drm/tilcdc: Handle drm_atomic_helper_swap_state failure
  drm/tegra: Handle drm_atomic_helper_swap_state failure
  drm/msm: Handle drm_atomic_helper_swap_state failure
  drm/mediatek: Handle drm_atomic_helper_swap_state failure
  drm/i915: Handle drm_atomic_helper_swap_state failure
  drm/atmel-hlcdc: Handle drm_atomic_helper_swap_state failure
  drm/nouveau: Handle drm_atomic_helper_swap_state failure
  drm/atomic: Change drm_atomic_helper_swap_state to return an error.
  ...
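
The drm_atomic_helper_swap_state() commits listed above make the helper
interruptible and give it a return value, so drivers with custom commit paths
now have to unwind on failure instead of assuming success. A minimal sketch of
the pattern used by the converted drivers; the foo_ names are hypothetical:

    static int foo_atomic_commit(struct drm_device *dev,
                                 struct drm_atomic_state *state, bool nonblock)
    {
            int ret;

            ret = drm_atomic_helper_prepare_planes(dev, state);
            if (ret)
                    return ret;

            /* Can now fail, e.g. when interrupted while stalling on
             * earlier commits; propagate the error. */
            ret = drm_atomic_helper_swap_state(state, true);
            if (ret) {
                    drm_atomic_helper_cleanup_planes(dev, state);
                    return ret;
            }

            drm_atomic_state_get(state);
            /* ... queue or run the commit tail as before ... */
            return 0;
    }
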
Merged by Dave Airlie, commit 542aefb5a2
100 changed files with 2949 additions and 1010 deletions
  1. Documentation/devicetree/bindings/display/bridge/dw_mipi_dsi.txt (+32, -0)
  2. Documentation/devicetree/bindings/display/repaper.txt (+52, -0)
  3. Documentation/devicetree/bindings/display/rockchip/dw_hdmi-rockchip.txt (+5, -2)
  4. Documentation/devicetree/bindings/vendor-prefixes.txt (+1, -0)
  5. Documentation/gpu/drm-internals.rst (+2, -0)
  6. Documentation/gpu/drm-kms.rst (+2, -57)
  7. Documentation/gpu/drm-mm.rst (+2, -2)
  8. Documentation/gpu/drm-uapi.rst (+2, -0)
  9. MAINTAINERS (+6, -0)
  10. drivers/dma-buf/dma-fence.c (+2, -2)
  11. drivers/dma-buf/sw_sync.c (+69, -38)
  12. drivers/dma-buf/sync_debug.c (+8, -11)
  13. drivers/dma-buf/sync_debug.h (+13, -13)
  14. drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c (+0, -1)
  15. drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c (+0, -1)
  16. drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c (+0, -1)
  17. drivers/gpu/drm/amd/amdgpu/dce_v10_0.c (+1, -1)
  18. drivers/gpu/drm/amd/amdgpu/dce_v11_0.c (+1, -1)
  19. drivers/gpu/drm/amd/amdgpu/dce_v6_0.c (+1, -1)
  20. drivers/gpu/drm/amd/amdgpu/dce_v8_0.c (+1, -1)
  21. drivers/gpu/drm/arc/arcpgu_crtc.c (+20, -21)
  22. drivers/gpu/drm/arm/hdlcd_crtc.c (+6, -4)
  23. drivers/gpu/drm/arm/hdlcd_drv.c (+0, -2)
  24. drivers/gpu/drm/arm/malidp_crtc.c (+6, -4)
  25. drivers/gpu/drm/arm/malidp_drv.c (+1, -1)
  26. drivers/gpu/drm/armada/armada_crtc.c (+4, -5)
  27. drivers/gpu/drm/armada/armada_drv.c (+2, -2)
  28. drivers/gpu/drm/armada/armada_fbdev.c (+0, -1)
  29. drivers/gpu/drm/armada/armada_overlay.c (+1, -1)
  30. drivers/gpu/drm/ast/ast_drv.c (+2, -3)
  31. drivers/gpu/drm/ast/ast_fb.c (+0, -1)
  32. drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_crtc.c (+13, -5)
  33. drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_dc.c (+22, -8)
  34. drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_dc.h (+16, -0)
  35. drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_plane.c (+29, -0)
  36. drivers/gpu/drm/bochs/bochs_drv.c (+2, -3)
  37. drivers/gpu/drm/bochs/bochs_fbdev.c (+3, -4)
  38. drivers/gpu/drm/bridge/adv7511/adv7511_drv.c (+1, -5)
  39. drivers/gpu/drm/bridge/analogix-anx78xx.c (+3, -6)
  40. drivers/gpu/drm/bridge/dumb-vga-dac.c (+3, -6)
  41. drivers/gpu/drm/bridge/nxp-ptn3460.c (+1, -5)
  42. drivers/gpu/drm/bridge/panel.c (+1, -4)
  43. drivers/gpu/drm/bridge/parade-ps8622.c (+1, -5)
  44. drivers/gpu/drm/bridge/sii902x.c (+2, -6)
  45. drivers/gpu/drm/bridge/synopsys/Kconfig (+6, -0)
  46. drivers/gpu/drm/bridge/synopsys/Makefile (+2, -0)
  47. drivers/gpu/drm/bridge/synopsys/dw-hdmi.c (+2, -7)
  48. drivers/gpu/drm/bridge/synopsys/dw-mipi-dsi.c (+981, -0)
  49. drivers/gpu/drm/bridge/tc358767.c (+1, -5)
  50. drivers/gpu/drm/bridge/ti-tfp410.c (+1, -5)
  51. drivers/gpu/drm/cirrus/cirrus_drv.c (+2, -3)
  52. drivers/gpu/drm/cirrus/cirrus_fbdev.c (+0, -1)
  53. drivers/gpu/drm/drm_atomic.c (+70, -51)
  54. drivers/gpu/drm/drm_atomic_helper.c (+271, -74)
  55. drivers/gpu/drm/drm_color_mgmt.c (+3, -0)
  56. drivers/gpu/drm/drm_debugfs_crc.c (+44, -15)
  57. drivers/gpu/drm/drm_dp_mst_topology.c (+44, -45)
  58. drivers/gpu/drm/drm_drv.c (+16, -0)
  59. drivers/gpu/drm/drm_edid.c (+431, -9)
  60. drivers/gpu/drm/drm_fb_cma_helper.c (+2, -2)
  61. drivers/gpu/drm/drm_fb_helper.c (+396, -236)
  62. drivers/gpu/drm/drm_file.c (+6, -1)
  63. drivers/gpu/drm/drm_framebuffer.c (+1, -1)
  64. drivers/gpu/drm/drm_gem.c (+6, -4)
  65. drivers/gpu/drm/drm_gem_cma_helper.c (+1, -1)
  66. drivers/gpu/drm/drm_internal.h (+10, -4)
  67. drivers/gpu/drm/drm_ioc32.c (+1, -1)
  68. drivers/gpu/drm/drm_ioctl.c (+8, -5)
  69. drivers/gpu/drm/drm_mipi_dsi.c (+3, -3)
  70. drivers/gpu/drm/drm_modes.c (+89, -2)
  71. drivers/gpu/drm/drm_modeset_lock.c (+10, -2)
  72. drivers/gpu/drm/drm_of.c (+2, -2)
  73. drivers/gpu/drm/drm_pci.c (+12, -28)
  74. drivers/gpu/drm/drm_probe_helper.c (+4, -0)
  75. drivers/gpu/drm/drm_property.c (+23, -0)
  76. drivers/gpu/drm/drm_simple_kms_helper.c (+13, -7)
  77. drivers/gpu/drm/drm_syncobj.c (+0, -2)
  78. drivers/gpu/drm/drm_vblank.c (+76, -111)
  79. drivers/gpu/drm/exynos/exynos_drm_crtc.c (+6, -4)
  80. drivers/gpu/drm/exynos/exynos_drm_drv.c (+4, -2)
  81. drivers/gpu/drm/exynos/exynos_drm_dsi.c (+1, -2)
  82. drivers/gpu/drm/exynos/exynos_drm_fb.c (+1, -26)
  83. drivers/gpu/drm/exynos/exynos_drm_fbdev.c (+1, -25)
  84. drivers/gpu/drm/exynos/exynos_hdmi.c (+1, -1)
  85. drivers/gpu/drm/fsl-dcu/fsl_dcu_drm_crtc.c (+3, -2)
  86. drivers/gpu/drm/gma500/mdfld_intel_display.c (+0, -4)
  87. drivers/gpu/drm/gma500/psb_drv.c (+2, -3)
  88. drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_de.c (+6, -4)
  89. drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_drv.c (+2, -1)
  90. drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_fbdev.c (+0, -1)
  91. drivers/gpu/drm/hisilicon/kirin/kirin_drm_ade.c (+7, -6)
  92. drivers/gpu/drm/hisilicon/kirin/kirin_drm_drv.c (+11, -17)
  93. drivers/gpu/drm/hisilicon/kirin/kirin_drm_drv.h (+0, -2)
  94. drivers/gpu/drm/i2c/tda998x_drv.c (+1, -1)
  95. drivers/gpu/drm/i810/i810_drv.c (+2, -3)
  96. drivers/gpu/drm/i915/i915_drv.c (+1, -6)
  97. drivers/gpu/drm/i915/i915_drv.h (+0, -2)
  98. drivers/gpu/drm/i915/i915_irq.c (+1, -1)
  99. drivers/gpu/drm/i915/intel_display.c (+10, -2)
  100. drivers/gpu/drm/i915/intel_dp_mst.c (+13, -34)

+ 32 - 0
Documentation/devicetree/bindings/display/bridge/dw_mipi_dsi.txt

@@ -0,0 +1,32 @@
+Synopsys DesignWare MIPI DSI host controller
+============================================
+
+This document defines device tree properties for the Synopsys DesignWare MIPI
+DSI host controller. It doesn't constitute a device tree binding specification
+by itself but is meant to be referenced by platform-specific device tree
+bindings.
+
+When referenced from platform device tree bindings the properties defined in
+this document are defined as follows. The platform device tree bindings are
+responsible for defining whether each optional property is used or not.
+
+- reg: Memory mapped base address and length of the DesignWare MIPI DSI
+  host controller registers. (mandatory)
+
+- clocks: References to all the clocks specified in the clock-names property
+  as specified in [1]. (mandatory)
+
+- clock-names:
+  - "pclk" is the peripheral clock for either AHB and APB. (mandatory)
+  - "px_clk" is the pixel clock for the DPI/RGB input. (optional)
+
+- resets: References to all the resets specified in the reset-names property
+  as specified in [2]. (optional)
+
+- reset-names: string reset name, must be "apb" if used. (optional)
+
+- panel or bridge node: see [3]. (mandatory)
+
+[1] Documentation/devicetree/bindings/clock/clock-bindings.txt
+[2] Documentation/devicetree/bindings/reset/reset.txt
+[3] Documentation/devicetree/bindings/display/mipi-dsi-bus.txt

+ 52 - 0
Documentation/devicetree/bindings/display/repaper.txt

@@ -0,0 +1,52 @@
+Pervasive Displays RePaper branded e-ink displays
+
+Required properties:
+- compatible:		"pervasive,e1144cs021" for 1.44" display
+			"pervasive,e1190cs021" for 1.9" display
+			"pervasive,e2200cs021" for 2.0" display
+			"pervasive,e2271cs021" for 2.7" display
+
+- panel-on-gpios:	Timing controller power control
+- discharge-gpios:	Discharge control
+- reset-gpios:		RESET pin
+- busy-gpios:		BUSY pin
+
+Required property for e2271cs021:
+- border-gpios:		Border control
+
+The node for this driver must be a child node of a SPI controller, hence
+all mandatory properties described in ../spi/spi-bus.txt must be specified.
+
+Optional property:
+- pervasive,thermal-zone:	name of thermometer's thermal zone
+
+Example:
+
+	display_temp: lm75@48 {
+		compatible = "lm75b";
+		reg = <0x48>;
+		#thermal-sensor-cells = <0>;
+	};
+
+	thermal-zones {
+		display {
+			polling-delay-passive = <0>;
+			polling-delay = <0>;
+			thermal-sensors = <&display_temp>;
+		};
+	};
+
+	papirus27@0{
+		compatible = "pervasive,e2271cs021";
+		reg = <0>;
+
+		spi-max-frequency = <8000000>;
+
+		panel-on-gpios = <&gpio 23 0>;
+		border-gpios = <&gpio 14 0>;
+		discharge-gpios = <&gpio 15 0>;
+		reset-gpios = <&gpio 24 0>;
+		busy-gpios = <&gpio 25 0>;
+
+		pervasive,thermal-zone = "display";
+	};

+ 5 - 2
Documentation/devicetree/bindings/display/rockchip/dw_hdmi-rockchip.txt

@@ -11,7 +11,9 @@ following device-specific properties.
 
 Required properties:
 
-- compatible: Shall contain "rockchip,rk3288-dw-hdmi".
+- compatible: should be one of the following:
+		"rockchip,rk3288-dw-hdmi"
+		"rockchip,rk3399-dw-hdmi"
 - reg: See dw_hdmi.txt.
 - reg-io-width: See dw_hdmi.txt. Shall be 4.
 - interrupts: HDMI interrupt number
@@ -30,7 +32,8 @@ Optional properties
   I2C master controller.
 - clock-names: See dw_hdmi.txt. The "cec" clock is optional.
 - clock-names: May contain "cec" as defined in dw_hdmi.txt.
-
+- clock-names: May contain "grf", power for grf io.
+- clock-names: May contain "vpll", external clock for some hdmi phy.
 
 Example:
 

+ 1 - 0
Documentation/devicetree/bindings/vendor-prefixes.txt

@@ -249,6 +249,7 @@ oxsemi	Oxford Semiconductor, Ltd.
 panasonic	Panasonic Corporation
 parade	Parade Technologies Inc.
 pericom	Pericom Technology Inc.
+pervasive	Pervasive Displays, Inc.
 phytec	PHYTEC Messtechnik GmbH
 picochip	Picochip Ltd
 pine64	Pine64

+ 2 - 0
Documentation/gpu/drm-internals.rst

@@ -201,6 +201,8 @@ drivers.
 Open/Close, File Operations and IOCTLs
 ======================================
 
+.. _drm_driver_fops:
+
 File Operations
 ---------------
 

+ 2 - 57
Documentation/gpu/drm-kms.rst

@@ -523,9 +523,6 @@ Color Management Properties
 .. kernel-doc:: drivers/gpu/drm/drm_color_mgmt.c
    :doc: overview
 
-.. kernel-doc:: include/drm/drm_color_mgmt.h
-   :internal:
-
 .. kernel-doc:: drivers/gpu/drm/drm_color_mgmt.c
    :export:
 
@@ -554,60 +551,8 @@ various modules/drivers.
 Vertical Blanking
 =================
 
-Vertical blanking plays a major role in graphics rendering. To achieve
-tear-free display, users must synchronize page flips and/or rendering to
-vertical blanking. The DRM API offers ioctls to perform page flips
-synchronized to vertical blanking and wait for vertical blanking.
-
-The DRM core handles most of the vertical blanking management logic,
-which involves filtering out spurious interrupts, keeping race-free
-blanking counters, coping with counter wrap-around and resets and
-keeping use counts. It relies on the driver to generate vertical
-blanking interrupts and optionally provide a hardware vertical blanking
-counter. Drivers must implement the following operations.
-
--  int (\*enable_vblank) (struct drm_device \*dev, int crtc); void
-   (\*disable_vblank) (struct drm_device \*dev, int crtc);
-   Enable or disable vertical blanking interrupts for the given CRTC.
-
--  u32 (\*get_vblank_counter) (struct drm_device \*dev, int crtc);
-   Retrieve the value of the vertical blanking counter for the given
-   CRTC. If the hardware maintains a vertical blanking counter its value
-   should be returned. Otherwise drivers can use the
-   :c:func:`drm_vblank_count()` helper function to handle this
-   operation.
-
-Drivers must initialize the vertical blanking handling core with a call
-to :c:func:`drm_vblank_init()` in their load operation.
-
-Vertical blanking interrupts can be enabled by the DRM core or by
-drivers themselves (for instance to handle page flipping operations).
-The DRM core maintains a vertical blanking use count to ensure that the
-interrupts are not disabled while a user still needs them. To increment
-the use count, drivers call :c:func:`drm_vblank_get()`. Upon
-return vertical blanking interrupts are guaranteed to be enabled.
-
-To decrement the use count drivers call
-:c:func:`drm_vblank_put()`. Only when the use count drops to zero
-will the DRM core disable the vertical blanking interrupts after a delay
-by scheduling a timer. The delay is accessible through the
-vblankoffdelay module parameter or the ``drm_vblank_offdelay`` global
-variable and expressed in milliseconds. Its default value is 5000 ms.
-Zero means never disable, and a negative value means disable
-immediately. Drivers may override the behaviour by setting the
-:c:type:`struct drm_device <drm_device>`
-vblank_disable_immediate flag, which when set causes vblank interrupts
-to be disabled immediately regardless of the drm_vblank_offdelay
-value. The flag should only be set if there's a properly working
-hardware vblank counter present.
-
-When a vertical blanking interrupt occurs drivers only need to call the
-:c:func:`drm_handle_vblank()` function to account for the
-interrupt.
-
-Resources allocated by :c:func:`drm_vblank_init()` must be freed
-with a call to :c:func:`drm_vblank_cleanup()` in the driver unload
-operation handler.
+.. kernel-doc:: drivers/gpu/drm/drm_vblank.c
+   :doc: vblank handling
 
 Vertical Blanking and Interrupt Handling Functions Reference
 ------------------------------------------------------------
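
The vblank documentation removed above now lives as a DOC comment in
drivers/gpu/drm/drm_vblank.c. The driver-facing flow it describes is roughly
the following; a hedged sketch using the CRTC-based helpers, not code taken
from this pull:

    /* Driver load path: tell the core how many CRTCs can generate
     * vblank interrupts. */
    ret = drm_vblank_init(dev, dev->mode_config.num_crtc);
    if (ret)
            return ret;

    /* Before relying on the vblank counter or events for a CRTC: */
    ret = drm_crtc_vblank_get(crtc);
    if (ret == 0) {
            /* ... use or wait on the vblank counter ... */
            drm_crtc_vblank_put(crtc);
    }

    /* From the vblank interrupt handler: */
    drm_crtc_handle_vblank(crtc);

Note that this series also drops the explicit drm_vblank_cleanup() calls from
drivers (the "cargo-cult" entry in the changelog above), since the core takes
care of that during device teardown.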

+ 2 - 2
Documentation/gpu/drm-mm.rst

@@ -191,7 +191,7 @@ acquired and release by :c:func:`calling drm_gem_object_get()` and
 holding the lock.
 
 When the last reference to a GEM object is released the GEM core calls
-the :c:type:`struct drm_driver <drm_driver>` gem_free_object
+the :c:type:`struct drm_driver <drm_driver>` gem_free_object_unlocked
 operation. That operation is mandatory for GEM-enabled drivers and must
 free the GEM object and all associated resources.
 
@@ -492,7 +492,7 @@ DRM Sync Objects
    :doc: Overview
 
 .. kernel-doc:: include/drm/drm_syncobj.h
-   :export:
+   :internal:
 
 .. kernel-doc:: drivers/gpu/drm/drm_syncobj.c
    :export:

+ 2 - 0
Documentation/gpu/drm-uapi.rst

@@ -160,6 +160,8 @@ other hand, a driver requires shared state between clients which is
 visible to user-space and accessible beyond open-file boundaries, they
 cannot support render nodes.
 
+.. _drm_driver_ioctl:
+
 IOCTL Support on Device Nodes
 =============================
 

+ 6 - 0
MAINTAINERS

@@ -4541,6 +4541,12 @@ M:	Dave Airlie <airlied@redhat.com>
 S:	Odd Fixes
 F:	drivers/gpu/drm/mgag200/
 
+DRM DRIVER FOR PERVASIVE DISPLAYS REPAPER PANELS
+M:	Noralf Trønnes <noralf@tronnes.org>
+S:	Maintained
+F:	drivers/gpu/drm/tinydrm/repaper.c
+F:	Documentation/devicetree/bindings/display/repaper.txt
+
 DRM DRIVER FOR RAGE 128 VIDEO CARDS
 S:	Orphan / Obsolete
 F:	drivers/gpu/drm/r128/

+ 2 - 2
drivers/dma-buf/dma-fence.c

@@ -48,7 +48,7 @@ static atomic64_t dma_fence_context_counter = ATOMIC64_INIT(0);
  */
 u64 dma_fence_context_alloc(unsigned num)
 {
-	BUG_ON(!num);
+	WARN_ON(!num);
 	return atomic64_add_return(num, &dma_fence_context_counter) - num;
 }
 EXPORT_SYMBOL(dma_fence_context_alloc);
@@ -177,7 +177,7 @@ void dma_fence_release(struct kref *kref)
 
 	trace_dma_fence_destroy(fence);
 
-	BUG_ON(!list_empty(&fence->cb_list));
+	WARN_ON(!list_empty(&fence->cb_list));
 
 	if (fence->ops->release)
 		fence->ops->release(fence);

+ 69 - 38
drivers/dma-buf/sw_sync.c

@@ -96,9 +96,9 @@ static struct sync_timeline *sync_timeline_create(const char *name)
 	obj->context = dma_fence_context_alloc(1);
 	strlcpy(obj->name, name, sizeof(obj->name));
 
-	INIT_LIST_HEAD(&obj->child_list_head);
-	INIT_LIST_HEAD(&obj->active_list_head);
-	spin_lock_init(&obj->child_list_lock);
+	obj->pt_tree = RB_ROOT;
+	INIT_LIST_HEAD(&obj->pt_list);
+	spin_lock_init(&obj->lock);
 
 	sync_timeline_debug_add(obj);
 
@@ -135,28 +135,28 @@ static void sync_timeline_put(struct sync_timeline *obj)
  */
 static void sync_timeline_signal(struct sync_timeline *obj, unsigned int inc)
 {
-	unsigned long flags;
 	struct sync_pt *pt, *next;
 
 	trace_sync_timeline(obj);
 
-	spin_lock_irqsave(&obj->child_list_lock, flags);
+	spin_lock_irq(&obj->lock);
 
 	obj->value += inc;
 
-	list_for_each_entry_safe(pt, next, &obj->active_list_head,
-				 active_list) {
-		if (dma_fence_is_signaled_locked(&pt->base))
-			list_del_init(&pt->active_list);
+	list_for_each_entry_safe(pt, next, &obj->pt_list, link) {
+		if (!dma_fence_is_signaled_locked(&pt->base))
+			break;
+
+		list_del_init(&pt->link);
+		rb_erase(&pt->node, &obj->pt_tree);
 	}
 
-	spin_unlock_irqrestore(&obj->child_list_lock, flags);
+	spin_unlock_irq(&obj->lock);
 }
 
 /**
  * sync_pt_create() - creates a sync pt
  * @parent:	fence's parent sync_timeline
- * @size:	size to allocate for this pt
  * @inc:	value of the fence
  *
  * Creates a new sync_pt as a child of @parent.  @size bytes will be
@@ -164,26 +164,55 @@ static void sync_timeline_signal(struct sync_timeline *obj, unsigned int inc)
  * the generic sync_timeline struct. Returns the sync_pt object or
  * NULL in case of error.
  */
-static struct sync_pt *sync_pt_create(struct sync_timeline *obj, int size,
-			     unsigned int value)
+static struct sync_pt *sync_pt_create(struct sync_timeline *obj,
+				      unsigned int value)
 {
-	unsigned long flags;
 	struct sync_pt *pt;
 
-	if (size < sizeof(*pt))
-		return NULL;
-
-	pt = kzalloc(size, GFP_KERNEL);
+	pt = kzalloc(sizeof(*pt), GFP_KERNEL);
 	if (!pt)
 		return NULL;
 
-	spin_lock_irqsave(&obj->child_list_lock, flags);
 	sync_timeline_get(obj);
-	dma_fence_init(&pt->base, &timeline_fence_ops, &obj->child_list_lock,
+	dma_fence_init(&pt->base, &timeline_fence_ops, &obj->lock,
 		       obj->context, value);
-	list_add_tail(&pt->child_list, &obj->child_list_head);
-	INIT_LIST_HEAD(&pt->active_list);
-	spin_unlock_irqrestore(&obj->child_list_lock, flags);
+	INIT_LIST_HEAD(&pt->link);
+
+	spin_lock_irq(&obj->lock);
+	if (!dma_fence_is_signaled_locked(&pt->base)) {
+		struct rb_node **p = &obj->pt_tree.rb_node;
+		struct rb_node *parent = NULL;
+
+		while (*p) {
+			struct sync_pt *other;
+			int cmp;
+
+			parent = *p;
+			other = rb_entry(parent, typeof(*pt), node);
+			cmp = value - other->base.seqno;
+			if (cmp > 0) {
+				p = &parent->rb_right;
+			} else if (cmp < 0) {
+				p = &parent->rb_left;
+			} else {
+				if (dma_fence_get_rcu(&other->base)) {
+					dma_fence_put(&pt->base);
+					pt = other;
+					goto unlock;
+				}
+				p = &parent->rb_left;
+			}
+		}
+		rb_link_node(&pt->node, parent, p);
+		rb_insert_color(&pt->node, &obj->pt_tree);
+
+		parent = rb_next(&pt->node);
+		list_add_tail(&pt->link,
+			      parent ? &rb_entry(parent, typeof(*pt), node)->link : &obj->pt_list);
+	}
+unlock:
+	spin_unlock_irq(&obj->lock);
+
 	return pt;
 }
 
@@ -203,13 +232,17 @@ static void timeline_fence_release(struct dma_fence *fence)
 {
 	struct sync_pt *pt = dma_fence_to_sync_pt(fence);
 	struct sync_timeline *parent = dma_fence_parent(fence);
-	unsigned long flags;
 
-	spin_lock_irqsave(fence->lock, flags);
-	list_del(&pt->child_list);
-	if (!list_empty(&pt->active_list))
-		list_del(&pt->active_list);
-	spin_unlock_irqrestore(fence->lock, flags);
+	if (!list_empty(&pt->link)) {
+		unsigned long flags;
+
+		spin_lock_irqsave(fence->lock, flags);
+		if (!list_empty(&pt->link)) {
+			list_del(&pt->link);
+			rb_erase(&pt->node, &parent->pt_tree);
+		}
+		spin_unlock_irqrestore(fence->lock, flags);
+	}
 
 	sync_timeline_put(parent);
 	dma_fence_free(fence);
@@ -219,18 +252,11 @@ static bool timeline_fence_signaled(struct dma_fence *fence)
 {
 	struct sync_timeline *parent = dma_fence_parent(fence);
 
-	return (fence->seqno > parent->value) ? false : true;
+	return !__dma_fence_is_later(fence->seqno, parent->value);
 }
 
 static bool timeline_fence_enable_signaling(struct dma_fence *fence)
 {
-	struct sync_pt *pt = dma_fence_to_sync_pt(fence);
-	struct sync_timeline *parent = dma_fence_parent(fence);
-
-	if (timeline_fence_signaled(fence))
-		return false;
-
-	list_add_tail(&pt->active_list, &parent->active_list_head);
 	return true;
 }
 
@@ -309,7 +335,7 @@ static long sw_sync_ioctl_create_fence(struct sync_timeline *obj,
 		goto err;
 	}
 
-	pt = sync_pt_create(obj, sizeof(*pt), data.value);
+	pt = sync_pt_create(obj, data.value);
 	if (!pt) {
 		err = -ENOMEM;
 		goto err;
@@ -345,6 +371,11 @@ static long sw_sync_ioctl_inc(struct sync_timeline *obj, unsigned long arg)
 	if (copy_from_user(&value, (void __user *)arg, sizeof(value)))
 		return -EFAULT;
 
+	while (value > INT_MAX)  {
+		sync_timeline_signal(obj, INT_MAX);
+		value -= INT_MAX;
+	}
+
 	sync_timeline_signal(obj, value);
 
 	return 0;

+ 8 - 11
drivers/dma-buf/sync_debug.c

@@ -116,17 +116,15 @@ static void sync_print_fence(struct seq_file *s,
 static void sync_print_obj(struct seq_file *s, struct sync_timeline *obj)
 {
 	struct list_head *pos;
-	unsigned long flags;
 
 	seq_printf(s, "%s: %d\n", obj->name, obj->value);
 
-	spin_lock_irqsave(&obj->child_list_lock, flags);
-	list_for_each(pos, &obj->child_list_head) {
-		struct sync_pt *pt =
-			container_of(pos, struct sync_pt, child_list);
+	spin_lock_irq(&obj->lock);
+	list_for_each(pos, &obj->pt_list) {
+		struct sync_pt *pt = container_of(pos, struct sync_pt, link);
 		sync_print_fence(s, &pt->base, false);
 	}
-	spin_unlock_irqrestore(&obj->child_list_lock, flags);
+	spin_unlock_irq(&obj->lock);
 }
 
 static void sync_print_sync_file(struct seq_file *s,
@@ -151,12 +149,11 @@ static void sync_print_sync_file(struct seq_file *s,
 
 static int sync_debugfs_show(struct seq_file *s, void *unused)
 {
-	unsigned long flags;
 	struct list_head *pos;
 
 	seq_puts(s, "objs:\n--------------\n");
 
-	spin_lock_irqsave(&sync_timeline_list_lock, flags);
+	spin_lock_irq(&sync_timeline_list_lock);
 	list_for_each(pos, &sync_timeline_list_head) {
 		struct sync_timeline *obj =
 			container_of(pos, struct sync_timeline,
@@ -165,11 +162,11 @@ static int sync_debugfs_show(struct seq_file *s, void *unused)
 		sync_print_obj(s, obj);
 		seq_putc(s, '\n');
 	}
-	spin_unlock_irqrestore(&sync_timeline_list_lock, flags);
+	spin_unlock_irq(&sync_timeline_list_lock);
 
 	seq_puts(s, "fences:\n--------------\n");
 
-	spin_lock_irqsave(&sync_file_list_lock, flags);
+	spin_lock_irq(&sync_file_list_lock);
 	list_for_each(pos, &sync_file_list_head) {
 		struct sync_file *sync_file =
 			container_of(pos, struct sync_file, sync_file_list);
@@ -177,7 +174,7 @@ static int sync_debugfs_show(struct seq_file *s, void *unused)
 		sync_print_sync_file(s, sync_file);
 		seq_putc(s, '\n');
 	}
-	spin_unlock_irqrestore(&sync_file_list_lock, flags);
+	spin_unlock_irq(&sync_file_list_lock);
 	return 0;
 }
 

+ 13 - 13
drivers/dma-buf/sync_debug.h

@@ -14,6 +14,7 @@
 #define _LINUX_SYNC_H
 
 #include <linux/list.h>
+#include <linux/rbtree.h>
 #include <linux/spinlock.h>
 #include <linux/dma-fence.h>
 
@@ -24,42 +25,41 @@
  * struct sync_timeline - sync object
  * @kref:		reference count on fence.
  * @name:		name of the sync_timeline. Useful for debugging
- * @child_list_head:	list of children sync_pts for this sync_timeline
- * @child_list_lock:	lock protecting @child_list_head and fence.status
- * @active_list_head:	list of active (unsignaled/errored) sync_pts
+ * @lock:		lock protecting @pt_list and @value
+ * @pt_tree:		rbtree of active (unsignaled/errored) sync_pts
+ * @pt_list:		list of active (unsignaled/errored) sync_pts
  * @sync_timeline_list:	membership in global sync_timeline_list
  */
 struct sync_timeline {
 	struct kref		kref;
 	char			name[32];
 
-	/* protected by child_list_lock */
+	/* protected by lock */
 	u64			context;
 	int			value;
 
-	struct list_head	child_list_head;
-	spinlock_t		child_list_lock;
-
-	struct list_head	active_list_head;
+	struct rb_root		pt_tree;
+	struct list_head	pt_list;
+	spinlock_t		lock;
 
 	struct list_head	sync_timeline_list;
 };
 
 static inline struct sync_timeline *dma_fence_parent(struct dma_fence *fence)
 {
-	return container_of(fence->lock, struct sync_timeline, child_list_lock);
+	return container_of(fence->lock, struct sync_timeline, lock);
 }
 
 /**
  * struct sync_pt - sync_pt object
  * @base: base fence object
- * @child_list: sync timeline child's list
- * @active_list: sync timeline active child's list
+ * @link: link on the sync timeline's list
+ * @node: node in the sync timeline's tree
  */
 struct sync_pt {
 	struct dma_fence base;
-	struct list_head child_list;
-	struct list_head active_list;
+	struct list_head link;
+	struct rb_node node;
 };
 
 #ifdef CONFIG_SW_SYNC

+ 0 - 1
drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c

@@ -803,7 +803,6 @@ static struct drm_driver kms_driver = {
 	.open = amdgpu_driver_open_kms,
 	.postclose = amdgpu_driver_postclose_kms,
 	.lastclose = amdgpu_driver_lastclose_kms,
-	.set_busid = drm_pci_set_busid,
 	.unload = amdgpu_driver_unload_kms,
 	.get_vblank_counter = amdgpu_get_vblank_counter_kms,
 	.enable_vblank = amdgpu_enable_vblank_kms,

+ 0 - 1
drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c

@@ -245,7 +245,6 @@ static int amdgpufb_create(struct drm_fb_helper *helper,
 
 	drm_fb_helper_fill_fix(info, fb->pitches[0], fb->format->depth);
 
-	info->flags = FBINFO_DEFAULT | FBINFO_CAN_FORCE_OUTPUT;
 	info->fbops = &amdgpufb_ops;
 
 	tmp = amdgpu_bo_gpu_offset(abo) - adev->mc.vram_start;

+ 0 - 1
drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c

@@ -263,7 +263,6 @@ void amdgpu_irq_fini(struct amdgpu_device *adev)
 {
 	unsigned i, j;
 
-	drm_vblank_cleanup(adev->ddev);
 	if (adev->irq.installed) {
 		drm_irq_uninstall(adev->ddev);
 		adev->irq.installed = false;

+ 1 - 1
drivers/gpu/drm/amd/amdgpu/dce_v10_0.c

@@ -1867,7 +1867,7 @@ static void dce_v10_0_afmt_setmode(struct drm_encoder *encoder,
 	dce_v10_0_audio_write_sad_regs(encoder);
 	dce_v10_0_audio_write_latency_fields(encoder, mode);
 
-	err = drm_hdmi_avi_infoframe_from_display_mode(&frame, mode);
+	err = drm_hdmi_avi_infoframe_from_display_mode(&frame, mode, false);
 	if (err < 0) {
 		DRM_ERROR("failed to setup AVI infoframe: %zd\n", err);
 		return;

+ 1 - 1
drivers/gpu/drm/amd/amdgpu/dce_v11_0.c

@@ -1851,7 +1851,7 @@ static void dce_v11_0_afmt_setmode(struct drm_encoder *encoder,
 	dce_v11_0_audio_write_sad_regs(encoder);
 	dce_v11_0_audio_write_latency_fields(encoder, mode);
 
-	err = drm_hdmi_avi_infoframe_from_display_mode(&frame, mode);
+	err = drm_hdmi_avi_infoframe_from_display_mode(&frame, mode, false);
 	if (err < 0) {
 		DRM_ERROR("failed to setup AVI infoframe: %zd\n", err);
 		return;

+ 1 - 1
drivers/gpu/drm/amd/amdgpu/dce_v6_0.c

@@ -1597,7 +1597,7 @@ static void dce_v6_0_audio_set_avi_infoframe(struct drm_encoder *encoder,
 	ssize_t err;
 	u32 tmp;
 
-	err = drm_hdmi_avi_infoframe_from_display_mode(&frame, mode);
+	err = drm_hdmi_avi_infoframe_from_display_mode(&frame, mode, false);
 	if (err < 0) {
 		DRM_ERROR("failed to setup AVI infoframe: %zd\n", err);
 		return;

+ 1 - 1
drivers/gpu/drm/amd/amdgpu/dce_v8_0.c

@@ -1750,7 +1750,7 @@ static void dce_v8_0_afmt_setmode(struct drm_encoder *encoder,
 	dce_v8_0_audio_write_sad_regs(encoder);
 	dce_v8_0_audio_write_latency_fields(encoder, mode);
 
-	err = drm_hdmi_avi_infoframe_from_display_mode(&frame, mode);
+	err = drm_hdmi_avi_infoframe_from_display_mode(&frame, mode, false);
 	if (err < 0) {
 		DRM_ERROR("failed to setup AVI infoframe: %zd\n", err);
 		return;

+ 20 - 21
drivers/gpu/drm/arc/arcpgu_crtc.c

@@ -64,6 +64,19 @@ static const struct drm_crtc_funcs arc_pgu_crtc_funcs = {
 	.atomic_destroy_state = drm_atomic_helper_crtc_destroy_state,
 };
 
+static enum drm_mode_status arc_pgu_crtc_mode_valid(struct drm_crtc *crtc,
+						    const struct drm_display_mode *mode)
+{
+	struct arcpgu_drm_private *arcpgu = crtc_to_arcpgu_priv(crtc);
+	long rate, clk_rate = mode->clock * 1000;
+
+	rate = clk_round_rate(arcpgu->clk, clk_rate);
+	if (rate != clk_rate)
+		return MODE_NOCLOCK;
+
+	return MODE_OK;
+}
+
 static void arc_pgu_crtc_mode_set_nofb(struct drm_crtc *crtc)
 {
 	struct arcpgu_drm_private *arcpgu = crtc_to_arcpgu_priv(crtc);
@@ -106,7 +119,8 @@ static void arc_pgu_crtc_mode_set_nofb(struct drm_crtc *crtc)
 	clk_set_rate(arcpgu->clk, m->crtc_clock * 1000);
 }
 
-static void arc_pgu_crtc_enable(struct drm_crtc *crtc)
+static void arc_pgu_crtc_atomic_enable(struct drm_crtc *crtc,
+				       struct drm_crtc_state *old_state)
 {
 	struct arcpgu_drm_private *arcpgu = crtc_to_arcpgu_priv(crtc);
 
@@ -116,7 +130,8 @@ static void arc_pgu_crtc_enable(struct drm_crtc *crtc)
 		      ARCPGU_CTRL_ENABLE_MASK);
 }
 
-static void arc_pgu_crtc_disable(struct drm_crtc *crtc)
+static void arc_pgu_crtc_atomic_disable(struct drm_crtc *crtc,
+					struct drm_crtc_state *old_state)
 {
 	struct arcpgu_drm_private *arcpgu = crtc_to_arcpgu_priv(crtc);
 
@@ -129,20 +144,6 @@ static void arc_pgu_crtc_disable(struct drm_crtc *crtc)
 			      ~ARCPGU_CTRL_ENABLE_MASK);
 }
 
-static int arc_pgu_crtc_atomic_check(struct drm_crtc *crtc,
-				     struct drm_crtc_state *state)
-{
-	struct arcpgu_drm_private *arcpgu = crtc_to_arcpgu_priv(crtc);
-	struct drm_display_mode *mode = &state->adjusted_mode;
-	long rate, clk_rate = mode->clock * 1000;
-
-	rate = clk_round_rate(arcpgu->clk, clk_rate);
-	if (rate != clk_rate)
-		return -EINVAL;
-
-	return 0;
-}
-
 static void arc_pgu_crtc_atomic_begin(struct drm_crtc *crtc,
 				      struct drm_crtc_state *state)
 {
@@ -158,15 +159,13 @@ static void arc_pgu_crtc_atomic_begin(struct drm_crtc *crtc,
 }
 
 static const struct drm_crtc_helper_funcs arc_pgu_crtc_helper_funcs = {
+	.mode_valid	= arc_pgu_crtc_mode_valid,
 	.mode_set	= drm_helper_crtc_mode_set,
 	.mode_set_base	= drm_helper_crtc_mode_set_base,
 	.mode_set_nofb	= arc_pgu_crtc_mode_set_nofb,
-	.enable		= arc_pgu_crtc_enable,
-	.disable	= arc_pgu_crtc_disable,
-	.prepare	= arc_pgu_crtc_disable,
-	.commit		= arc_pgu_crtc_enable,
-	.atomic_check	= arc_pgu_crtc_atomic_check,
 	.atomic_begin	= arc_pgu_crtc_atomic_begin,
+	.atomic_enable	= arc_pgu_crtc_atomic_enable,
+	.atomic_disable	= arc_pgu_crtc_atomic_disable,
 };
 
 static void arc_pgu_plane_atomic_update(struct drm_plane *plane,

+ 6 - 4
drivers/gpu/drm/arm/hdlcd_crtc.c

@@ -165,7 +165,8 @@ static void hdlcd_crtc_mode_set_nofb(struct drm_crtc *crtc)
 	clk_set_rate(hdlcd->clk, m->crtc_clock * 1000);
 }
 
-static void hdlcd_crtc_enable(struct drm_crtc *crtc)
+static void hdlcd_crtc_atomic_enable(struct drm_crtc *crtc,
+				     struct drm_crtc_state *old_state)
 {
 	struct hdlcd_drm_private *hdlcd = crtc_to_hdlcd_priv(crtc);
 
@@ -175,7 +176,8 @@ static void hdlcd_crtc_enable(struct drm_crtc *crtc)
 	drm_crtc_vblank_on(crtc);
 }
 
-static void hdlcd_crtc_disable(struct drm_crtc *crtc)
+static void hdlcd_crtc_atomic_disable(struct drm_crtc *crtc,
+				      struct drm_crtc_state *old_state)
 {
 	struct hdlcd_drm_private *hdlcd = crtc_to_hdlcd_priv(crtc);
 
@@ -218,10 +220,10 @@ static void hdlcd_crtc_atomic_begin(struct drm_crtc *crtc,
 }
 
 static const struct drm_crtc_helper_funcs hdlcd_crtc_helper_funcs = {
-	.enable		= hdlcd_crtc_enable,
-	.disable	= hdlcd_crtc_disable,
 	.atomic_check	= hdlcd_crtc_atomic_check,
 	.atomic_begin	= hdlcd_crtc_atomic_begin,
+	.atomic_enable	= hdlcd_crtc_atomic_enable,
+	.atomic_disable	= hdlcd_crtc_atomic_disable,
 };
 
 static int hdlcd_plane_atomic_check(struct drm_plane *plane,

+ 0 - 2
drivers/gpu/drm/arm/hdlcd_drv.c

@@ -343,7 +343,6 @@ err_register:
 	}
 err_fbdev:
 	drm_kms_helper_poll_fini(drm);
-	drm_vblank_cleanup(drm);
 err_vblank:
 	pm_runtime_disable(drm->dev);
 err_pm_active:
@@ -375,7 +374,6 @@ static void hdlcd_drm_unbind(struct device *dev)
 	component_unbind_all(dev, drm);
 	of_node_put(hdlcd->crtc.port);
 	hdlcd->crtc.port = NULL;
-	drm_vblank_cleanup(drm);
 	pm_runtime_get_sync(drm->dev);
 	drm_irq_uninstall(drm);
 	pm_runtime_put_sync(drm->dev);

+ 6 - 4
drivers/gpu/drm/arm/malidp_crtc.c

@@ -46,7 +46,8 @@ static enum drm_mode_status malidp_crtc_mode_valid(struct drm_crtc *crtc,
 	return MODE_OK;
 }
 
-static void malidp_crtc_enable(struct drm_crtc *crtc)
+static void malidp_crtc_atomic_enable(struct drm_crtc *crtc,
+				      struct drm_crtc_state *old_state)
 {
 	struct malidp_drm *malidp = crtc_to_malidp_device(crtc);
 	struct malidp_hw_device *hwdev = malidp->dev;
@@ -69,7 +70,8 @@ static void malidp_crtc_enable(struct drm_crtc *crtc)
 	drm_crtc_vblank_on(crtc);
 }
 
-static void malidp_crtc_disable(struct drm_crtc *crtc)
+static void malidp_crtc_atomic_disable(struct drm_crtc *crtc,
+				       struct drm_crtc_state *old_state)
 {
 	struct malidp_drm *malidp = crtc_to_malidp_device(crtc);
 	struct malidp_hw_device *hwdev = malidp->dev;
@@ -408,9 +410,9 @@ static int malidp_crtc_atomic_check(struct drm_crtc *crtc,
 
 static const struct drm_crtc_helper_funcs malidp_crtc_helper_funcs = {
 	.mode_valid = malidp_crtc_mode_valid,
-	.enable = malidp_crtc_enable,
-	.disable = malidp_crtc_disable,
 	.atomic_check = malidp_crtc_atomic_check,
+	.atomic_enable = malidp_crtc_atomic_enable,
+	.atomic_disable = malidp_crtc_atomic_disable,
 };
 
 static struct drm_crtc_state *malidp_crtc_duplicate_state(struct drm_crtc *crtc)

+ 1 - 1
drivers/gpu/drm/arm/malidp_drv.c

@@ -225,7 +225,7 @@ static void malidp_atomic_commit_tail(struct drm_atomic_state *state)
 
 	drm_atomic_helper_commit_modeset_disables(drm, state);
 
-	for_each_crtc_in_state(state, crtc, old_crtc_state, i) {
+	for_each_old_crtc_in_state(state, crtc, old_crtc_state, i) {
 		malidp_atomic_commit_update_gamma(crtc, old_crtc_state);
 		malidp_atomic_commit_update_coloradj(crtc, old_crtc_state);
 		malidp_atomic_commit_se_config(crtc, old_crtc_state);

+ 4 - 5
drivers/gpu/drm/armada/armada_crtc.c

@@ -1150,13 +1150,13 @@ int armada_drm_plane_init(struct armada_plane *plane)
 	return 0;
 }
 
-static struct drm_prop_enum_list armada_drm_csc_yuv_enum_list[] = {
+static const struct drm_prop_enum_list armada_drm_csc_yuv_enum_list[] = {
 	{ CSC_AUTO,        "Auto" },
 	{ CSC_YUV_CCIR601, "CCIR601" },
 	{ CSC_YUV_CCIR709, "CCIR709" },
 };
 
-static struct drm_prop_enum_list armada_drm_csc_rgb_enum_list[] = {
+static const struct drm_prop_enum_list armada_drm_csc_rgb_enum_list[] = {
 	{ CSC_AUTO,         "Auto" },
 	{ CSC_RGB_COMPUTER, "Computer system" },
 	{ CSC_RGB_STUDIO,   "Studio" },
@@ -1329,8 +1329,7 @@ armada_lcd_bind(struct device *dev, struct device *master, void *data)
 		port = of_get_child_by_name(parent, "port");
 		of_node_put(np);
 		if (!port) {
-			dev_err(dev, "no port node found in %s\n",
-				parent->full_name);
+			dev_err(dev, "no port node found in %pOF\n", parent);
 			return -ENXIO;
 		}
 
@@ -1364,7 +1363,7 @@ static int armada_lcd_remove(struct platform_device *pdev)
 	return 0;
 }
 
-static struct of_device_id armada_lcd_of_match[] = {
+static const struct of_device_id armada_lcd_of_match[] = {
 	{
 		.compatible	= "marvell,dove-lcd",
 		.data		= &armada510_ops,

+ 2 - 2
drivers/gpu/drm/armada/armada_drv.c

@@ -232,8 +232,8 @@ static void armada_add_endpoints(struct device *dev,
 			of_node_put(remote);
 			continue;
 		} else if (!of_device_is_available(remote->parent)) {
-			dev_warn(dev, "parent device of %s is not available\n",
-				 remote->full_name);
+			dev_warn(dev, "parent device of %pOF is not available\n",
+				 remote);
 			of_node_put(remote);
 			continue;
 		}

+ 0 - 1
drivers/gpu/drm/armada/armada_fbdev.c

@@ -81,7 +81,6 @@ static int armada_fb_create(struct drm_fb_helper *fbh,
 
 	strlcpy(info->fix.id, "armada-drmfb", sizeof(info->fix.id));
 	info->par = fbh;
-	info->flags = FBINFO_DEFAULT | FBINFO_CAN_FORCE_OUTPUT;
 	info->fbops = &armada_fb_ops;
 	info->fix.smem_start = obj->phys_addr;
 	info->fix.smem_len = obj->obj.size;

+ 1 - 1
drivers/gpu/drm/armada/armada_overlay.c

@@ -388,7 +388,7 @@ static const uint32_t armada_ovl_formats[] = {
 	DRM_FORMAT_BGR565,
 };
 
-static struct drm_prop_enum_list armada_drm_colorkey_enum_list[] = {
+static const struct drm_prop_enum_list armada_drm_colorkey_enum_list[] = {
 	{ CKMODE_DISABLE, "disabled" },
 	{ CKMODE_Y,       "Y component" },
 	{ CKMODE_U,       "U component" },

+ 2 - 3
drivers/gpu/drm/ast/ast_drv.c

@@ -197,7 +197,6 @@ static struct drm_driver driver = {
 
 	.load = ast_driver_load,
 	.unload = ast_driver_unload,
-	.set_busid = drm_pci_set_busid,
 
 	.fops = &ast_fops,
 	.name = DRIVER_NAME,
@@ -221,11 +220,11 @@ static int __init ast_init(void)
 
 	if (ast_modeset == 0)
 		return -EINVAL;
-	return drm_pci_init(&driver, &ast_pci_driver);
+	return pci_register_driver(&ast_pci_driver);
 }
 static void __exit ast_exit(void)
 {
-	drm_pci_exit(&driver, &ast_pci_driver);
+	pci_unregister_driver(&ast_pci_driver);
 }
 
 module_init(ast_init);

+ 0 - 1
drivers/gpu/drm/ast/ast_fb.c

@@ -231,7 +231,6 @@ static int astfb_create(struct drm_fb_helper *helper,
 
 	strcpy(info->fix.id, "astdrmfb");
 
-	info->flags = FBINFO_DEFAULT | FBINFO_CAN_FORCE_OUTPUT;
 	info->fbops = &astfb_ops;
 
 	info->apertures->ranges[0].base = pci_resource_start(dev->pdev, 0);

+ 13 - 5
drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_crtc.c

@@ -149,7 +149,8 @@ atmel_hlcdc_crtc_mode_valid(struct drm_crtc *c,
 	return atmel_hlcdc_dc_mode_valid(crtc->dc, mode);
 }
 
-static void atmel_hlcdc_crtc_disable(struct drm_crtc *c)
+static void atmel_hlcdc_crtc_atomic_disable(struct drm_crtc *c,
+					    struct drm_crtc_state *old_state)
 {
 	struct drm_device *dev = c->dev;
 	struct atmel_hlcdc_crtc *crtc = drm_crtc_to_atmel_hlcdc_crtc(c);
@@ -183,7 +184,8 @@ static void atmel_hlcdc_crtc_disable(struct drm_crtc *c)
 	pm_runtime_put_sync(dev->dev);
 }
 
-static void atmel_hlcdc_crtc_enable(struct drm_crtc *c)
+static void atmel_hlcdc_crtc_atomic_enable(struct drm_crtc *c,
+					   struct drm_crtc_state *old_state)
 {
 	struct drm_device *dev = c->dev;
 	struct atmel_hlcdc_crtc *crtc = drm_crtc_to_atmel_hlcdc_crtc(c);
@@ -235,7 +237,7 @@ static int atmel_hlcdc_crtc_select_output_mode(struct drm_crtc_state *state)
 
 	crtc = drm_crtc_to_atmel_hlcdc_crtc(state->crtc);
 
-	for_each_connector_in_state(state->state, connector, cstate, i) {
+	for_each_new_connector_in_state(state->state, connector, cstate, i) {
 		struct drm_display_info *info = &connector->display_info;
 		unsigned int supported_fmts = 0;
 		int j;
@@ -319,11 +321,11 @@ static const struct drm_crtc_helper_funcs lcdc_crtc_helper_funcs = {
 	.mode_set = drm_helper_crtc_mode_set,
 	.mode_set_nofb = atmel_hlcdc_crtc_mode_set_nofb,
 	.mode_set_base = drm_helper_crtc_mode_set_base,
-	.disable = atmel_hlcdc_crtc_disable,
-	.enable = atmel_hlcdc_crtc_enable,
 	.atomic_check = atmel_hlcdc_crtc_atomic_check,
 	.atomic_begin = atmel_hlcdc_crtc_atomic_begin,
 	.atomic_flush = atmel_hlcdc_crtc_atomic_flush,
+	.atomic_enable = atmel_hlcdc_crtc_atomic_enable,
+	.atomic_disable = atmel_hlcdc_crtc_atomic_disable,
 };
 
 static void atmel_hlcdc_crtc_destroy(struct drm_crtc *c)
@@ -429,6 +431,8 @@ static const struct drm_crtc_funcs atmel_hlcdc_crtc_funcs = {
 	.atomic_destroy_state = atmel_hlcdc_crtc_destroy_state,
 	.enable_vblank = atmel_hlcdc_crtc_enable_vblank,
 	.disable_vblank = atmel_hlcdc_crtc_disable_vblank,
+	.set_property = drm_atomic_helper_crtc_set_property,
+	.gamma_set = drm_atomic_helper_legacy_gamma_set,
 };
 
 int atmel_hlcdc_crtc_create(struct drm_device *dev)
@@ -484,6 +488,10 @@ int atmel_hlcdc_crtc_create(struct drm_device *dev)
 	drm_crtc_helper_add(&crtc->base, &lcdc_crtc_helper_funcs);
 	drm_crtc_vblank_reset(&crtc->base);
 
+	drm_mode_crtc_set_gamma_size(&crtc->base, ATMEL_HLCDC_CLUT_SIZE);
+	drm_crtc_enable_color_mgmt(&crtc->base, 0, false,
+				   ATMEL_HLCDC_CLUT_SIZE);
+
 	dc->crtc = &crtc->base;
 
 	return 0;

+ 22 - 8
drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_dc.c

@@ -42,6 +42,7 @@ static const struct atmel_hlcdc_layer_desc atmel_hlcdc_at91sam9n12_layers[] = {
 			.default_color = 3,
 			.general_config = 4,
 		},
+		.clut_offset = 0x400,
 	},
 };
 
@@ -73,6 +74,7 @@ static const struct atmel_hlcdc_layer_desc atmel_hlcdc_at91sam9x5_layers[] = {
 			.disc_pos = 5,
 			.disc_size = 6,
 		},
+		.clut_offset = 0x400,
 	},
 	{
 		.name = "overlay1",
@@ -91,6 +93,7 @@ static const struct atmel_hlcdc_layer_desc atmel_hlcdc_at91sam9x5_layers[] = {
 			.chroma_key_mask = 8,
 			.general_config = 9,
 		},
+		.clut_offset = 0x800,
 	},
 	{
 		.name = "high-end-overlay",
@@ -112,6 +115,7 @@ static const struct atmel_hlcdc_layer_desc atmel_hlcdc_at91sam9x5_layers[] = {
 			.scaler_config = 13,
 			.csc = 14,
 		},
+		.clut_offset = 0x1000,
 	},
 	{
 		.name = "cursor",
@@ -131,6 +135,7 @@ static const struct atmel_hlcdc_layer_desc atmel_hlcdc_at91sam9x5_layers[] = {
 			.chroma_key_mask = 8,
 			.general_config = 9,
 		},
+		.clut_offset = 0x1400,
 	},
 };
 
@@ -162,6 +167,7 @@ static const struct atmel_hlcdc_layer_desc atmel_hlcdc_sama5d3_layers[] = {
 			.disc_pos = 5,
 			.disc_size = 6,
 		},
+		.clut_offset = 0x600,
 	},
 	{
 		.name = "overlay1",
@@ -180,6 +186,7 @@ static const struct atmel_hlcdc_layer_desc atmel_hlcdc_sama5d3_layers[] = {
 			.chroma_key_mask = 8,
 			.general_config = 9,
 		},
+		.clut_offset = 0xa00,
 	},
 	{
 		.name = "overlay2",
@@ -198,6 +205,7 @@ static const struct atmel_hlcdc_layer_desc atmel_hlcdc_sama5d3_layers[] = {
 			.chroma_key_mask = 8,
 			.general_config = 9,
 		},
+		.clut_offset = 0xe00,
 	},
 	{
 		.name = "high-end-overlay",
@@ -223,6 +231,7 @@ static const struct atmel_hlcdc_layer_desc atmel_hlcdc_sama5d3_layers[] = {
 			},
 			.csc = 14,
 		},
+		.clut_offset = 0x1200,
 	},
 	{
 		.name = "cursor",
@@ -244,6 +253,7 @@ static const struct atmel_hlcdc_layer_desc atmel_hlcdc_sama5d3_layers[] = {
 			.general_config = 9,
 			.scaler_config = 13,
 		},
+		.clut_offset = 0x1600,
 	},
 };
 
@@ -275,6 +285,7 @@ static const struct atmel_hlcdc_layer_desc atmel_hlcdc_sama5d4_layers[] = {
 			.disc_pos = 5,
 			.disc_size = 6,
 		},
+		.clut_offset = 0x600,
 	},
 	{
 		.name = "overlay1",
@@ -293,6 +304,7 @@ static const struct atmel_hlcdc_layer_desc atmel_hlcdc_sama5d4_layers[] = {
 			.chroma_key_mask = 8,
 			.general_config = 9,
 		},
+		.clut_offset = 0xa00,
 	},
 	{
 		.name = "overlay2",
@@ -311,6 +323,7 @@ static const struct atmel_hlcdc_layer_desc atmel_hlcdc_sama5d4_layers[] = {
 			.chroma_key_mask = 8,
 			.general_config = 9,
 		},
+		.clut_offset = 0xe00,
 	},
 	{
 		.name = "high-end-overlay",
@@ -336,6 +349,7 @@ static const struct atmel_hlcdc_layer_desc atmel_hlcdc_sama5d4_layers[] = {
 			},
 			.csc = 14,
 		},
+		.clut_offset = 0x1200,
 	},
 };
 
@@ -451,8 +465,7 @@ static void atmel_hlcdc_fb_output_poll_changed(struct drm_device *dev)
 {
 	struct atmel_hlcdc_dc *dc = dev->dev_private;
 
-	if (dc->fbdev)
-		drm_fbdev_cma_hotplug_event(dc->fbdev);
+	drm_fbdev_cma_hotplug_event(dc->fbdev);
 }
 
 struct atmel_hlcdc_dc_commit {
@@ -526,14 +539,13 @@ static int atmel_hlcdc_dc_atomic_commit(struct drm_device *dev,
 		dc->commit.pending = true;
 	spin_unlock(&dc->commit.wait.lock);
 
-	if (ret) {
-		kfree(commit);
-		goto error;
-	}
+	if (ret)
+		goto err_free;
 
-	/* Swap the state, this is the point of no return. */
-	drm_atomic_helper_swap_state(state, true);
+	/* We have our own synchronization through the commit lock. */
+	BUG_ON(drm_atomic_helper_swap_state(state, false) < 0);
 
+	/* Swap state succeeded, this is the point of no return. */
 	drm_atomic_state_get(state);
 	if (async)
 		queue_work(dc->wq, &commit->work);
@@ -542,6 +554,8 @@ static int atmel_hlcdc_dc_atomic_commit(struct drm_device *dev,
 
 	return 0;
 
+err_free:
+	kfree(commit);
 error:
 	drm_atomic_helper_cleanup_planes(dev, state);
 	return ret;

+ 16 - 0
drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_dc.h

@@ -88,6 +88,11 @@
 #define ATMEL_HLCDC_YUV422SWP			BIT(17)
 #define ATMEL_HLCDC_DSCALEOPT			BIT(20)
 
+#define ATMEL_HLCDC_C1_MODE			ATMEL_HLCDC_CLUT_MODE(0)
+#define ATMEL_HLCDC_C2_MODE			ATMEL_HLCDC_CLUT_MODE(1)
+#define ATMEL_HLCDC_C4_MODE			ATMEL_HLCDC_CLUT_MODE(2)
+#define ATMEL_HLCDC_C8_MODE			ATMEL_HLCDC_CLUT_MODE(3)
+
 #define ATMEL_HLCDC_XRGB4444_MODE		ATMEL_HLCDC_RGB_MODE(0)
 #define ATMEL_HLCDC_ARGB4444_MODE		ATMEL_HLCDC_RGB_MODE(1)
 #define ATMEL_HLCDC_RGBA4444_MODE		ATMEL_HLCDC_RGB_MODE(2)
@@ -142,6 +147,8 @@
 #define ATMEL_HLCDC_DMA_CHANNEL_DSCR_DONE	BIT(2)
 #define ATMEL_HLCDC_DMA_CHANNEL_DSCR_OVERRUN	BIT(3)
 
+#define ATMEL_HLCDC_CLUT_SIZE			256
+
 #define ATMEL_HLCDC_MAX_LAYERS			6
 
 /**
@@ -259,6 +266,7 @@ struct atmel_hlcdc_layer_desc {
 	int id;
 	int regs_offset;
 	int cfgs_offset;
+	int clut_offset;
 	struct atmel_hlcdc_formats *formats;
 	struct atmel_hlcdc_layer_cfg_layout layout;
 	int max_width;
@@ -414,6 +422,14 @@ static inline u32 atmel_hlcdc_layer_read_cfg(struct atmel_hlcdc_layer *layer,
 					  (cfgid * sizeof(u32)));
 }
 
+static inline void atmel_hlcdc_layer_write_clut(struct atmel_hlcdc_layer *layer,
+						unsigned int c, u32 val)
+{
+	regmap_write(layer->regmap,
+		     layer->desc->clut_offset + c * sizeof(u32),
+		     val);
+}
+
 static inline void atmel_hlcdc_layer_init(struct atmel_hlcdc_layer *layer,
 				const struct atmel_hlcdc_layer_desc *desc,
 				struct regmap *regmap)

+ 29 - 0
drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_plane.c

@@ -83,6 +83,7 @@ drm_plane_state_to_atmel_hlcdc_plane_state(struct drm_plane_state *s)
 #define SUBPIXEL_MASK			0xffff
 
 static uint32_t rgb_formats[] = {
+	DRM_FORMAT_C8,
 	DRM_FORMAT_XRGB4444,
 	DRM_FORMAT_ARGB4444,
 	DRM_FORMAT_RGBA4444,
@@ -100,6 +101,7 @@ struct atmel_hlcdc_formats atmel_hlcdc_plane_rgb_formats = {
 };
 
 static uint32_t rgb_and_yuv_formats[] = {
+	DRM_FORMAT_C8,
 	DRM_FORMAT_XRGB4444,
 	DRM_FORMAT_ARGB4444,
 	DRM_FORMAT_RGBA4444,
@@ -128,6 +130,9 @@ struct atmel_hlcdc_formats atmel_hlcdc_plane_rgb_and_yuv_formats = {
 static int atmel_hlcdc_format_to_plane_mode(u32 format, u32 *mode)
 {
 	switch (format) {
+	case DRM_FORMAT_C8:
+		*mode = ATMEL_HLCDC_C8_MODE;
+		break;
 	case DRM_FORMAT_XRGB4444:
 		*mode = ATMEL_HLCDC_XRGB4444_MODE;
 		break;
@@ -424,6 +429,29 @@ static void atmel_hlcdc_plane_update_format(struct atmel_hlcdc_plane *plane,
 				    ATMEL_HLCDC_LAYER_FORMAT_CFG, cfg);
 }
 
+static void atmel_hlcdc_plane_update_clut(struct atmel_hlcdc_plane *plane)
+{
+	struct drm_crtc *crtc = plane->base.crtc;
+	struct drm_color_lut *lut;
+	int idx;
+
+	if (!crtc || !crtc->state)
+		return;
+
+	if (!crtc->state->color_mgmt_changed || !crtc->state->gamma_lut)
+		return;
+
+	lut = (struct drm_color_lut *)crtc->state->gamma_lut->data;
+
+	for (idx = 0; idx < ATMEL_HLCDC_CLUT_SIZE; idx++, lut++) {
+		u32 val = ((lut->red << 8) & 0xff0000) |
+			(lut->green & 0xff00) |
+			(lut->blue >> 8);
+
+		atmel_hlcdc_layer_write_clut(&plane->layer, idx, val);
+	}
+}
+
 static void atmel_hlcdc_plane_update_buffers(struct atmel_hlcdc_plane *plane,
 					struct atmel_hlcdc_plane_state *state)
 {
@@ -768,6 +796,7 @@ static void atmel_hlcdc_plane_atomic_update(struct drm_plane *p,
 	atmel_hlcdc_plane_update_pos_and_size(plane, state);
 	atmel_hlcdc_plane_update_general_settings(plane, state);
 	atmel_hlcdc_plane_update_format(plane, state);
+	atmel_hlcdc_plane_update_clut(plane);
 	atmel_hlcdc_plane_update_buffers(plane, state);
 	atmel_hlcdc_plane_update_disc_area(plane, state);
 

+ 2 - 3
drivers/gpu/drm/bochs/bochs_drv.c

@@ -84,7 +84,6 @@ static struct drm_driver bochs_driver = {
 	.driver_features	= DRIVER_GEM | DRIVER_MODESET,
 	.load			= bochs_load,
 	.unload			= bochs_unload,
-	.set_busid		= drm_pci_set_busid,
 	.fops			= &bochs_fops,
 	.name			= "bochs-drm",
 	.desc			= "bochs dispi vga interface (qemu stdvga)",
@@ -224,12 +223,12 @@ static int __init bochs_init(void)
 	if (bochs_modeset == 0)
 		return -EINVAL;
 
-	return drm_pci_init(&bochs_driver, &bochs_pci_driver);
+	return pci_register_driver(&bochs_pci_driver);
 }
 
 static void __exit bochs_exit(void)
 {
-	drm_pci_exit(&bochs_driver, &bochs_pci_driver);
+	pci_unregister_driver(&bochs_pci_driver);
 }
 
 module_init(bochs_init);

+ 3 - 4
drivers/gpu/drm/bochs/bochs_fbdev.c

@@ -23,9 +23,9 @@ static int bochsfb_mmap(struct fb_info *info,
 static struct fb_ops bochsfb_ops = {
 	.owner = THIS_MODULE,
 	DRM_FB_HELPER_DEFAULT_OPS,
-	.fb_fillrect = drm_fb_helper_sys_fillrect,
-	.fb_copyarea = drm_fb_helper_sys_copyarea,
-	.fb_imageblit = drm_fb_helper_sys_imageblit,
+	.fb_fillrect = drm_fb_helper_cfb_fillrect,
+	.fb_copyarea = drm_fb_helper_cfb_copyarea,
+	.fb_imageblit = drm_fb_helper_cfb_imageblit,
 	.fb_mmap = bochsfb_mmap,
 };
 
@@ -118,7 +118,6 @@ static int bochsfb_create(struct drm_fb_helper *helper,
 
 	strcpy(info->fix.id, "bochsdrmfb");
 
-	info->flags = FBINFO_DEFAULT;
 	info->fbops = &bochsfb_ops;
 
 	drm_fb_helper_fill_fix(info, fb->pitches[0], fb->format->depth);

+ 1 - 5
drivers/gpu/drm/bridge/adv7511/adv7511_drv.c

@@ -1126,11 +1126,7 @@ static int adv7511_probe(struct i2c_client *i2c, const struct i2c_device_id *id)
 	adv7511->bridge.funcs = &adv7511_bridge_funcs;
 	adv7511->bridge.of_node = dev->of_node;
 
-	ret = drm_bridge_add(&adv7511->bridge);
-	if (ret) {
-		dev_err(dev, "failed to add adv7511 bridge\n");
-		goto err_unregister_cec;
-	}
+	drm_bridge_add(&adv7511->bridge);
 
 	adv7511_audio_init(dev, adv7511);
 

+ 3 - 6
drivers/gpu/drm/bridge/analogix-anx78xx.c

@@ -1097,7 +1097,8 @@ static void anx78xx_bridge_mode_set(struct drm_bridge *bridge,
 
 	mutex_lock(&anx78xx->lock);
 
-	err = drm_hdmi_avi_infoframe_from_display_mode(&frame, adjusted_mode);
+	err = drm_hdmi_avi_infoframe_from_display_mode(&frame, adjusted_mode,
+						       false);
 	if (err) {
 		DRM_ERROR("Failed to setup AVI infoframe: %d\n", err);
 		goto unlock;
@@ -1438,11 +1439,7 @@ static int anx78xx_i2c_probe(struct i2c_client *client,
 
 	anx78xx->bridge.funcs = &anx78xx_bridge_funcs;
 
-	err = drm_bridge_add(&anx78xx->bridge);
-	if (err < 0) {
-		DRM_ERROR("Failed to add drm bridge: %d\n", err);
-		goto err_poweroff;
-	}
+	drm_bridge_add(&anx78xx->bridge);
 
 	/* If cable is pulled out, just poweroff and wait for HPD event */
 	if (!gpiod_get_value(anx78xx->pdata.gpiod_hpd))

+ 3 - 6
drivers/gpu/drm/bridge/dumb-vga-dac.c

@@ -177,7 +177,6 @@ static struct i2c_adapter *dumb_vga_retrieve_ddc(struct device *dev)
 static int dumb_vga_probe(struct platform_device *pdev)
 {
 	struct dumb_vga *vga;
-	int ret;
 
 	vga = devm_kzalloc(&pdev->dev, sizeof(*vga), GFP_KERNEL);
 	if (!vga)
@@ -186,7 +185,7 @@ static int dumb_vga_probe(struct platform_device *pdev)
 
 	vga->vdd = devm_regulator_get_optional(&pdev->dev, "vdd");
 	if (IS_ERR(vga->vdd)) {
-		ret = PTR_ERR(vga->vdd);
+		int ret = PTR_ERR(vga->vdd);
 		if (ret == -EPROBE_DEFER)
 			return -EPROBE_DEFER;
 		vga->vdd = NULL;
@@ -207,11 +206,9 @@ static int dumb_vga_probe(struct platform_device *pdev)
 	vga->bridge.funcs = &dumb_vga_bridge_funcs;
 	vga->bridge.of_node = pdev->dev.of_node;
 
-	ret = drm_bridge_add(&vga->bridge);
-	if (ret && !IS_ERR(vga->ddc))
-		i2c_put_adapter(vga->ddc);
+	drm_bridge_add(&vga->bridge);
 
-	return ret;
+	return 0;
 }
 
 static int dumb_vga_remove(struct platform_device *pdev)

+ 1 - 5
drivers/gpu/drm/bridge/nxp-ptn3460.c

@@ -332,11 +332,7 @@ static int ptn3460_probe(struct i2c_client *client,
 
 	ptn_bridge->bridge.funcs = &ptn3460_bridge_funcs;
 	ptn_bridge->bridge.of_node = dev->of_node;
-	ret = drm_bridge_add(&ptn_bridge->bridge);
-	if (ret) {
-		DRM_ERROR("Failed to add bridge\n");
-		return ret;
-	}
+	drm_bridge_add(&ptn_bridge->bridge);
 
 	i2c_set_clientdata(client, ptn_bridge);
 

+ 1 - 4
drivers/gpu/drm/bridge/panel.c

@@ -158,7 +158,6 @@ struct drm_bridge *drm_panel_bridge_add(struct drm_panel *panel,
 					u32 connector_type)
 {
 	struct panel_bridge *panel_bridge;
-	int ret;
 
 	if (!panel)
 		return ERR_PTR(-EINVAL);
@@ -176,9 +175,7 @@ struct drm_bridge *drm_panel_bridge_add(struct drm_panel *panel,
 	panel_bridge->bridge.of_node = panel->dev->of_node;
 #endif
 
-	ret = drm_bridge_add(&panel_bridge->bridge);
-	if (ret)
-		return ERR_PTR(ret);
+	drm_bridge_add(&panel_bridge->bridge);
 
 	return &panel_bridge->bridge;
 }

+ 1 - 5
drivers/gpu/drm/bridge/parade-ps8622.c

@@ -598,11 +598,7 @@ static int ps8622_probe(struct i2c_client *client,
 
 	ps8622->bridge.funcs = &ps8622_bridge_funcs;
 	ps8622->bridge.of_node = dev->of_node;
-	ret = drm_bridge_add(&ps8622->bridge);
-	if (ret) {
-		DRM_ERROR("Failed to add bridge\n");
-		return ret;
-	}
+	drm_bridge_add(&ps8622->bridge);
 
 	i2c_set_clientdata(client, ps8622);
 

+ 2 - 6
drivers/gpu/drm/bridge/sii902x.c

@@ -269,7 +269,7 @@ static void sii902x_bridge_mode_set(struct drm_bridge *bridge,
 	if (ret)
 		return;
 
-	ret = drm_hdmi_avi_infoframe_from_display_mode(&frame, adj);
+	ret = drm_hdmi_avi_infoframe_from_display_mode(&frame, adj, false);
 	if (ret < 0) {
 		DRM_ERROR("couldn't fill AVI infoframe\n");
 		return;
@@ -418,11 +418,7 @@ static int sii902x_probe(struct i2c_client *client,
 
 	sii902x->bridge.funcs = &sii902x_bridge_funcs;
 	sii902x->bridge.of_node = dev->of_node;
-	ret = drm_bridge_add(&sii902x->bridge);
-	if (ret) {
-		dev_err(dev, "Failed to add drm_bridge\n");
-		return ret;
-	}
+	drm_bridge_add(&sii902x->bridge);
 
 	i2c_set_clientdata(client, sii902x);
 

+ 6 - 0
drivers/gpu/drm/bridge/synopsys/Kconfig

@@ -22,3 +22,9 @@ config DRM_DW_HDMI_I2S_AUDIO
 	help
 	  Support the I2S Audio interface which is part of the Synopsys
 	  Designware HDMI block.
+
+config DRM_DW_MIPI_DSI
+	tristate
+	select DRM_KMS_HELPER
+	select DRM_MIPI_DSI
+	select DRM_PANEL_BRIDGE

+ 2 - 0
drivers/gpu/drm/bridge/synopsys/Makefile

@@ -3,3 +3,5 @@
 obj-$(CONFIG_DRM_DW_HDMI) += dw-hdmi.o
 obj-$(CONFIG_DRM_DW_HDMI_AHB_AUDIO) += dw-hdmi-ahb-audio.o
 obj-$(CONFIG_DRM_DW_HDMI_I2S_AUDIO) += dw-hdmi-i2s-audio.o
+
+obj-$(CONFIG_DRM_DW_MIPI_DSI) += dw-mipi-dsi.o

+ 2 - 7
drivers/gpu/drm/bridge/synopsys/dw-hdmi.c

@@ -1317,7 +1317,7 @@ static void hdmi_config_AVI(struct dw_hdmi *hdmi, struct drm_display_mode *mode)
 	u8 val;
 
 	/* Initialise info frame from DRM mode */
-	drm_hdmi_avi_infoframe_from_display_mode(&frame, mode);
+	drm_hdmi_avi_infoframe_from_display_mode(&frame, mode, false);
 
 	if (hdmi_bus_fmt_is_yuv444(hdmi->hdmi_data.enc_out_bus_format))
 		frame.colorspace = HDMI_COLORSPACE_YUV444;
@@ -2485,17 +2485,12 @@ int dw_hdmi_probe(struct platform_device *pdev,
 		  const struct dw_hdmi_plat_data *plat_data)
 {
 	struct dw_hdmi *hdmi;
-	int ret;
 
 	hdmi = __dw_hdmi_probe(pdev, plat_data);
 	if (IS_ERR(hdmi))
 		return PTR_ERR(hdmi);
 
-	ret = drm_bridge_add(&hdmi->bridge);
-	if (ret < 0) {
-		__dw_hdmi_remove(hdmi);
-		return ret;
-	}
+	drm_bridge_add(&hdmi->bridge);
 
 	return 0;
 }
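
Both sii902x and dw-hdmi above switch to the three-argument form of
drm_hdmi_avi_infoframe_from_display_mode(), passing false for the newly added
boolean parameter (its exact meaning is not visible in these hunks). A minimal,
hedged sketch of building and packing an AVI infoframe with the new signature;
foo_fill_avi_infoframe() and the buffer handling are illustrative, only the two
helper calls are existing kernel API:

#include <linux/hdmi.h>
#include <drm/drm_edid.h>
#include <drm/drm_modes.h>

/* Build and serialize an AVI infoframe for @mode into @buf. */
static ssize_t foo_fill_avi_infoframe(struct drm_display_mode *mode,
				      u8 *buf, size_t len)
{
	struct hdmi_avi_infoframe frame;
	int ret;

	/* The third argument is the flag added in this series, false here. */
	ret = drm_hdmi_avi_infoframe_from_display_mode(&frame, mode, false);
	if (ret < 0)
		return ret;

	/* Pack header, payload and checksum into the caller's buffer. */
	return hdmi_avi_infoframe_pack(&frame, buf, len);
}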

+ 981 - 0
drivers/gpu/drm/bridge/synopsys/dw-mipi-dsi.c

@@ -0,0 +1,981 @@
+/*
+ * Copyright (c) 2016, Fuzhou Rockchip Electronics Co., Ltd
+ * Copyright (C) STMicroelectronics SA 2017
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * Modified by Philippe Cornu <philippe.cornu@st.com>
+ * This generic Synopsys DesignWare MIPI DSI host driver is based on the
+ * Rockchip version from rockchip/dw-mipi-dsi.c with phy & bridge APIs.
+ */
+
+#include <linux/clk.h>
+#include <linux/component.h>
+#include <linux/iopoll.h>
+#include <linux/module.h>
+#include <linux/of_device.h>
+#include <linux/pm_runtime.h>
+#include <linux/reset.h>
+#include <drm/drmP.h>
+#include <drm/drm_atomic_helper.h>
+#include <drm/drm_bridge.h>
+#include <drm/drm_crtc.h>
+#include <drm/drm_crtc_helper.h>
+#include <drm/drm_mipi_dsi.h>
+#include <drm/drm_of.h>
+#include <drm/bridge/dw_mipi_dsi.h>
+#include <video/mipi_display.h>
+
+#define DSI_VERSION			0x00
+#define DSI_PWR_UP			0x04
+#define RESET				0
+#define POWERUP				BIT(0)
+
+#define DSI_CLKMGR_CFG			0x08
+#define TO_CLK_DIVIDSION(div)		(((div) & 0xff) << 8)
+#define TX_ESC_CLK_DIVIDSION(div)	(((div) & 0xff) << 0)
+
+#define DSI_DPI_VCID			0x0c
+#define DPI_VID(vid)			(((vid) & 0x3) << 0)
+
+#define DSI_DPI_COLOR_CODING		0x10
+#define EN18_LOOSELY			BIT(8)
+#define DPI_COLOR_CODING_16BIT_1	0x0
+#define DPI_COLOR_CODING_16BIT_2	0x1
+#define DPI_COLOR_CODING_16BIT_3	0x2
+#define DPI_COLOR_CODING_18BIT_1	0x3
+#define DPI_COLOR_CODING_18BIT_2	0x4
+#define DPI_COLOR_CODING_24BIT		0x5
+
+#define DSI_DPI_CFG_POL			0x14
+#define COLORM_ACTIVE_LOW		BIT(4)
+#define SHUTD_ACTIVE_LOW		BIT(3)
+#define HSYNC_ACTIVE_LOW		BIT(2)
+#define VSYNC_ACTIVE_LOW		BIT(1)
+#define DATAEN_ACTIVE_LOW		BIT(0)
+
+#define DSI_DPI_LP_CMD_TIM		0x18
+#define OUTVACT_LPCMD_TIME(p)		(((p) & 0xff) << 16)
+#define INVACT_LPCMD_TIME(p)		((p) & 0xff)
+
+#define DSI_DBI_CFG			0x20
+#define DSI_DBI_CMDSIZE			0x28
+
+#define DSI_PCKHDL_CFG			0x2c
+#define EN_CRC_RX			BIT(4)
+#define EN_ECC_RX			BIT(3)
+#define EN_BTA				BIT(2)
+#define EN_EOTP_RX			BIT(1)
+#define EN_EOTP_TX			BIT(0)
+
+#define DSI_MODE_CFG			0x34
+#define ENABLE_VIDEO_MODE		0
+#define ENABLE_CMD_MODE			BIT(0)
+
+#define DSI_VID_MODE_CFG		0x38
+#define FRAME_BTA_ACK			BIT(14)
+#define ENABLE_LOW_POWER		(0x3f << 8)
+#define ENABLE_LOW_POWER_MASK		(0x3f << 8)
+#define VID_MODE_TYPE_NON_BURST_SYNC_PULSES	0x0
+#define VID_MODE_TYPE_NON_BURST_SYNC_EVENTS	0x1
+#define VID_MODE_TYPE_BURST			0x2
+#define VID_MODE_TYPE_MASK			0x3
+
+#define DSI_VID_PKT_SIZE		0x3c
+#define VID_PKT_SIZE(p)			(((p) & 0x3fff) << 0)
+#define VID_PKT_MAX_SIZE		0x3fff
+
+#define DSI_VID_HSA_TIME		0x48
+#define DSI_VID_HBP_TIME		0x4c
+#define DSI_VID_HLINE_TIME		0x50
+#define DSI_VID_VSA_LINES		0x54
+#define DSI_VID_VBP_LINES		0x58
+#define DSI_VID_VFP_LINES		0x5c
+#define DSI_VID_VACTIVE_LINES		0x60
+#define DSI_CMD_MODE_CFG		0x68
+#define MAX_RD_PKT_SIZE_LP		BIT(24)
+#define DCS_LW_TX_LP			BIT(19)
+#define DCS_SR_0P_TX_LP			BIT(18)
+#define DCS_SW_1P_TX_LP			BIT(17)
+#define DCS_SW_0P_TX_LP			BIT(16)
+#define GEN_LW_TX_LP			BIT(14)
+#define GEN_SR_2P_TX_LP			BIT(13)
+#define GEN_SR_1P_TX_LP			BIT(12)
+#define GEN_SR_0P_TX_LP			BIT(11)
+#define GEN_SW_2P_TX_LP			BIT(10)
+#define GEN_SW_1P_TX_LP			BIT(9)
+#define GEN_SW_0P_TX_LP			BIT(8)
+#define EN_ACK_RQST			BIT(1)
+#define EN_TEAR_FX			BIT(0)
+
+#define CMD_MODE_ALL_LP			(MAX_RD_PKT_SIZE_LP | \
+					 DCS_LW_TX_LP | \
+					 DCS_SR_0P_TX_LP | \
+					 DCS_SW_1P_TX_LP | \
+					 DCS_SW_0P_TX_LP | \
+					 GEN_LW_TX_LP | \
+					 GEN_SR_2P_TX_LP | \
+					 GEN_SR_1P_TX_LP | \
+					 GEN_SR_0P_TX_LP | \
+					 GEN_SW_2P_TX_LP | \
+					 GEN_SW_1P_TX_LP | \
+					 GEN_SW_0P_TX_LP)
+
+#define DSI_GEN_HDR			0x6c
+#define GEN_HDATA(data)			(((data) & 0xffff) << 8)
+#define GEN_HDATA_MASK			(0xffff << 8)
+#define GEN_HTYPE(type)			(((type) & 0xff) << 0)
+#define GEN_HTYPE_MASK			0xff
+
+#define DSI_GEN_PLD_DATA		0x70
+
+#define DSI_CMD_PKT_STATUS		0x74
+#define GEN_CMD_EMPTY			BIT(0)
+#define GEN_CMD_FULL			BIT(1)
+#define GEN_PLD_W_EMPTY			BIT(2)
+#define GEN_PLD_W_FULL			BIT(3)
+#define GEN_PLD_R_EMPTY			BIT(4)
+#define GEN_PLD_R_FULL			BIT(5)
+#define GEN_RD_CMD_BUSY			BIT(6)
+
+#define DSI_TO_CNT_CFG			0x78
+#define HSTX_TO_CNT(p)			(((p) & 0xffff) << 16)
+#define LPRX_TO_CNT(p)			((p) & 0xffff)
+
+#define DSI_BTA_TO_CNT			0x8c
+#define DSI_LPCLK_CTRL			0x94
+#define AUTO_CLKLANE_CTRL		BIT(1)
+#define PHY_TXREQUESTCLKHS		BIT(0)
+
+#define DSI_PHY_TMR_LPCLK_CFG		0x98
+#define PHY_CLKHS2LP_TIME(lbcc)		(((lbcc) & 0x3ff) << 16)
+#define PHY_CLKLP2HS_TIME(lbcc)		((lbcc) & 0x3ff)
+
+#define DSI_PHY_TMR_CFG			0x9c
+#define PHY_HS2LP_TIME(lbcc)		(((lbcc) & 0xff) << 24)
+#define PHY_LP2HS_TIME(lbcc)		(((lbcc) & 0xff) << 16)
+#define MAX_RD_TIME(lbcc)		((lbcc) & 0x7fff)
+
+#define DSI_PHY_RSTZ			0xa0
+#define PHY_DISFORCEPLL			0
+#define PHY_ENFORCEPLL			BIT(3)
+#define PHY_DISABLECLK			0
+#define PHY_ENABLECLK			BIT(2)
+#define PHY_RSTZ			0
+#define PHY_UNRSTZ			BIT(1)
+#define PHY_SHUTDOWNZ			0
+#define PHY_UNSHUTDOWNZ			BIT(0)
+
+#define DSI_PHY_IF_CFG			0xa4
+#define N_LANES(n)			((((n) - 1) & 0x3) << 0)
+#define PHY_STOP_WAIT_TIME(cycle)	(((cycle) & 0xff) << 8)
+
+#define DSI_PHY_STATUS			0xb0
+#define LOCK				BIT(0)
+#define STOP_STATE_CLK_LANE		BIT(2)
+
+#define DSI_PHY_TST_CTRL0		0xb4
+#define PHY_TESTCLK			BIT(1)
+#define PHY_UNTESTCLK			0
+#define PHY_TESTCLR			BIT(0)
+#define PHY_UNTESTCLR			0
+
+#define DSI_PHY_TST_CTRL1		0xb8
+#define PHY_TESTEN			BIT(16)
+#define PHY_UNTESTEN			0
+#define PHY_TESTDOUT(n)			(((n) & 0xff) << 8)
+#define PHY_TESTDIN(n)			(((n) & 0xff) << 0)
+
+#define DSI_INT_ST0			0xbc
+#define DSI_INT_ST1			0xc0
+#define DSI_INT_MSK0			0xc4
+#define DSI_INT_MSK1			0xc8
+
+#define PHY_STATUS_TIMEOUT_US		10000
+#define CMD_PKT_STATUS_TIMEOUT_US	20000
+
+struct dw_mipi_dsi {
+	struct drm_bridge bridge;
+	struct mipi_dsi_host dsi_host;
+	struct drm_bridge *panel_bridge;
+	bool is_panel_bridge;
+	struct device *dev;
+	void __iomem *base;
+
+	struct clk *pclk;
+
+	unsigned int lane_mbps; /* per lane */
+	u32 channel;
+	u32 lanes;
+	u32 format;
+	unsigned long mode_flags;
+
+	const struct dw_mipi_dsi_plat_data *plat_data;
+};
+
+/*
+ * The controller should generate 2 frames before
+ * preparing the peripheral.
+ */
+static void dw_mipi_dsi_wait_for_two_frames(struct drm_display_mode *mode)
+{
+	int refresh, two_frames;
+
+	refresh = drm_mode_vrefresh(mode);
+	two_frames = DIV_ROUND_UP(MSEC_PER_SEC, refresh) * 2;
+	msleep(two_frames);
+}
+
+static inline struct dw_mipi_dsi *host_to_dsi(struct mipi_dsi_host *host)
+{
+	return container_of(host, struct dw_mipi_dsi, dsi_host);
+}
+
+static inline struct dw_mipi_dsi *bridge_to_dsi(struct drm_bridge *bridge)
+{
+	return container_of(bridge, struct dw_mipi_dsi, bridge);
+}
+
+static inline void dsi_write(struct dw_mipi_dsi *dsi, u32 reg, u32 val)
+{
+	writel(val, dsi->base + reg);
+}
+
+static inline u32 dsi_read(struct dw_mipi_dsi *dsi, u32 reg)
+{
+	return readl(dsi->base + reg);
+}
+
+static int dw_mipi_dsi_host_attach(struct mipi_dsi_host *host,
+				   struct mipi_dsi_device *device)
+{
+	struct dw_mipi_dsi *dsi = host_to_dsi(host);
+	struct drm_bridge *bridge;
+	struct drm_panel *panel;
+	int ret;
+
+	if (device->lanes > dsi->plat_data->max_data_lanes) {
+		dev_err(dsi->dev, "too many data lanes (%u)\n",
+			device->lanes);
+		return -EINVAL;
+	}
+
+	dsi->lanes = device->lanes;
+	dsi->channel = device->channel;
+	dsi->format = device->format;
+	dsi->mode_flags = device->mode_flags;
+
+	ret = drm_of_find_panel_or_bridge(host->dev->of_node, 1, 0,
+					  &panel, &bridge);
+	if (ret)
+		return ret;
+
+	if (panel) {
+		bridge = drm_panel_bridge_add(panel, DRM_MODE_CONNECTOR_DSI);
+		if (IS_ERR(bridge))
+			return PTR_ERR(bridge);
+		dsi->is_panel_bridge = true;
+	}
+
+	dsi->panel_bridge = bridge;
+
+	drm_bridge_add(&dsi->bridge);
+
+	return 0;
+}
+
+static int dw_mipi_dsi_host_detach(struct mipi_dsi_host *host,
+				   struct mipi_dsi_device *device)
+{
+	struct dw_mipi_dsi *dsi = host_to_dsi(host);
+
+	if (dsi->is_panel_bridge)
+		drm_panel_bridge_remove(dsi->panel_bridge);
+
+	drm_bridge_remove(&dsi->bridge);
+
+	return 0;
+}
+
+static void dw_mipi_message_config(struct dw_mipi_dsi *dsi,
+				   const struct mipi_dsi_msg *msg)
+{
+	bool lpm = msg->flags & MIPI_DSI_MSG_USE_LPM;
+	u32 val = 0;
+
+	if (msg->flags & MIPI_DSI_MSG_REQ_ACK)
+		val |= EN_ACK_RQST;
+	if (lpm)
+		val |= CMD_MODE_ALL_LP;
+
+	dsi_write(dsi, DSI_LPCLK_CTRL, lpm ? 0 : PHY_TXREQUESTCLKHS);
+	dsi_write(dsi, DSI_CMD_MODE_CFG, val);
+}
+
+static int dw_mipi_dsi_gen_pkt_hdr_write(struct dw_mipi_dsi *dsi, u32 hdr_val)
+{
+	int ret;
+	u32 val, mask;
+
+	ret = readl_poll_timeout(dsi->base + DSI_CMD_PKT_STATUS,
+				 val, !(val & GEN_CMD_FULL), 1000,
+				 CMD_PKT_STATUS_TIMEOUT_US);
+	if (ret < 0) {
+		dev_err(dsi->dev, "failed to get available command FIFO\n");
+		return ret;
+	}
+
+	dsi_write(dsi, DSI_GEN_HDR, hdr_val);
+
+	mask = GEN_CMD_EMPTY | GEN_PLD_W_EMPTY;
+	ret = readl_poll_timeout(dsi->base + DSI_CMD_PKT_STATUS,
+				 val, (val & mask) == mask,
+				 1000, CMD_PKT_STATUS_TIMEOUT_US);
+	if (ret < 0) {
+		dev_err(dsi->dev, "failed to write command FIFO\n");
+		return ret;
+	}
+
+	return 0;
+}
+
+static int dw_mipi_dsi_dcs_short_write(struct dw_mipi_dsi *dsi,
+				       const struct mipi_dsi_msg *msg)
+{
+	const u8 *tx_buf = msg->tx_buf;
+	u16 data = 0;
+	u32 val;
+
+	if (msg->tx_len > 0)
+		data |= tx_buf[0];
+	if (msg->tx_len > 1)
+		data |= tx_buf[1] << 8;
+
+	if (msg->tx_len > 2) {
+		dev_err(dsi->dev, "tx buf length %zu too long for short write\n",
+			msg->tx_len);
+		return -EINVAL;
+	}
+
+	val = GEN_HDATA(data) | GEN_HTYPE(msg->type);
+	return dw_mipi_dsi_gen_pkt_hdr_write(dsi, val);
+}
+
+static int dw_mipi_dsi_dcs_long_write(struct dw_mipi_dsi *dsi,
+				      const struct mipi_dsi_msg *msg)
+{
+	const u8 *tx_buf = msg->tx_buf;
+	int len = msg->tx_len, pld_data_bytes = sizeof(u32), ret;
+	u32 hdr_val = GEN_HDATA(msg->tx_len) | GEN_HTYPE(msg->type);
+	u32 remainder;
+	u32 val;
+
+	if (msg->tx_len < 3) {
+		dev_err(dsi->dev, "wrong tx buf length %zu for long write\n",
+			msg->tx_len);
+		return -EINVAL;
+	}
+
+	while (DIV_ROUND_UP(len, pld_data_bytes)) {
+		if (len < pld_data_bytes) {
+			remainder = 0;
+			memcpy(&remainder, tx_buf, len);
+			dsi_write(dsi, DSI_GEN_PLD_DATA, remainder);
+			len = 0;
+		} else {
+			memcpy(&remainder, tx_buf, pld_data_bytes);
+			dsi_write(dsi, DSI_GEN_PLD_DATA, remainder);
+			tx_buf += pld_data_bytes;
+			len -= pld_data_bytes;
+		}
+
+		ret = readl_poll_timeout(dsi->base + DSI_CMD_PKT_STATUS,
+					 val, !(val & GEN_PLD_W_FULL), 1000,
+					 CMD_PKT_STATUS_TIMEOUT_US);
+		if (ret < 0) {
+			dev_err(dsi->dev,
+				"failed to get available write payload FIFO\n");
+			return ret;
+		}
+	}
+
+	return dw_mipi_dsi_gen_pkt_hdr_write(dsi, hdr_val);
+}
+
+static ssize_t dw_mipi_dsi_host_transfer(struct mipi_dsi_host *host,
+					 const struct mipi_dsi_msg *msg)
+{
+	struct dw_mipi_dsi *dsi = host_to_dsi(host);
+	int ret;
+
+	/*
+	 * TODO dw drv improvements
+	 * use mipi_dsi_create_packet() instead of all following
+	 * functions and code (no switch cases, no
+	 * dw_mipi_dsi_dcs_short_write(), only the loop in long_write...)
+	 * and use packet.header...
+	 */
+	dw_mipi_message_config(dsi, msg);
+
+	switch (msg->type) {
+	case MIPI_DSI_DCS_SHORT_WRITE:
+	case MIPI_DSI_DCS_SHORT_WRITE_PARAM:
+	case MIPI_DSI_SET_MAXIMUM_RETURN_PACKET_SIZE:
+		ret = dw_mipi_dsi_dcs_short_write(dsi, msg);
+		break;
+	case MIPI_DSI_DCS_LONG_WRITE:
+		ret = dw_mipi_dsi_dcs_long_write(dsi, msg);
+		break;
+	default:
+		dev_err(dsi->dev, "unsupported message type 0x%02x\n",
+			msg->type);
+		ret = -EINVAL;
+	}
+
+	return ret;
+}
+
+static const struct mipi_dsi_host_ops dw_mipi_dsi_host_ops = {
+	.attach = dw_mipi_dsi_host_attach,
+	.detach = dw_mipi_dsi_host_detach,
+	.transfer = dw_mipi_dsi_host_transfer,
+};
+
+static void dw_mipi_dsi_video_mode_config(struct dw_mipi_dsi *dsi)
+{
+	u32 val;
+
+	/*
+	 * TODO dw drv improvements
+	 * enabling low power is panel-dependent, we should use the
+	 * panel configuration here...
+	 */
+	val = ENABLE_LOW_POWER;
+
+	if (dsi->mode_flags & MIPI_DSI_MODE_VIDEO_BURST)
+		val |= VID_MODE_TYPE_BURST;
+	else if (dsi->mode_flags & MIPI_DSI_MODE_VIDEO_SYNC_PULSE)
+		val |= VID_MODE_TYPE_NON_BURST_SYNC_PULSES;
+	else
+		val |= VID_MODE_TYPE_NON_BURST_SYNC_EVENTS;
+
+	dsi_write(dsi, DSI_VID_MODE_CFG, val);
+}
+
+static void dw_mipi_dsi_set_mode(struct dw_mipi_dsi *dsi,
+				 unsigned long mode_flags)
+{
+	dsi_write(dsi, DSI_PWR_UP, RESET);
+
+	if (mode_flags & MIPI_DSI_MODE_VIDEO) {
+		dsi_write(dsi, DSI_MODE_CFG, ENABLE_VIDEO_MODE);
+		dw_mipi_dsi_video_mode_config(dsi);
+		dsi_write(dsi, DSI_LPCLK_CTRL, PHY_TXREQUESTCLKHS);
+	} else {
+		dsi_write(dsi, DSI_MODE_CFG, ENABLE_CMD_MODE);
+	}
+
+	dsi_write(dsi, DSI_PWR_UP, POWERUP);
+}
+
+static void dw_mipi_dsi_disable(struct dw_mipi_dsi *dsi)
+{
+	dsi_write(dsi, DSI_PWR_UP, RESET);
+	dsi_write(dsi, DSI_PHY_RSTZ, PHY_RSTZ);
+}
+
+static void dw_mipi_dsi_init(struct dw_mipi_dsi *dsi)
+{
+	/*
+	 * The maximum permitted escape clock is 20MHz and it is derived from
+	 * lanebyteclk, which is running at "lane_mbps / 8".  Thus we want:
+	 *
+	 *     (lane_mbps >> 3) / esc_clk_division < 20
+	 * which is:
+	 *     (lane_mbps >> 3) / 20 > esc_clk_division
+	 */
+	u32 esc_clk_division = (dsi->lane_mbps >> 3) / 20 + 1;
+
+	dsi_write(dsi, DSI_PWR_UP, RESET);
+
+	/*
+	 * TODO dw drv improvements
+	 * timeout clock division should be computed with the
+	 * high speed transmission counter timeout and byte lane...
+	 */
+	dsi_write(dsi, DSI_CLKMGR_CFG, TO_CLK_DIVIDSION(10) |
+		  TX_ESC_CLK_DIVIDSION(esc_clk_division));
+}
+
+static void dw_mipi_dsi_dpi_config(struct dw_mipi_dsi *dsi,
+				   struct drm_display_mode *mode)
+{
+	u32 val = 0, color = 0;
+
+	switch (dsi->format) {
+	case MIPI_DSI_FMT_RGB888:
+		color = DPI_COLOR_CODING_24BIT;
+		break;
+	case MIPI_DSI_FMT_RGB666:
+		color = DPI_COLOR_CODING_18BIT_2 | EN18_LOOSELY;
+		break;
+	case MIPI_DSI_FMT_RGB666_PACKED:
+		color = DPI_COLOR_CODING_18BIT_1;
+		break;
+	case MIPI_DSI_FMT_RGB565:
+		color = DPI_COLOR_CODING_16BIT_1;
+		break;
+	}
+
+	if (mode->flags & DRM_MODE_FLAG_NVSYNC)
+		val |= VSYNC_ACTIVE_LOW;
+	if (mode->flags & DRM_MODE_FLAG_NHSYNC)
+		val |= HSYNC_ACTIVE_LOW;
+
+	dsi_write(dsi, DSI_DPI_VCID, DPI_VID(dsi->channel));
+	dsi_write(dsi, DSI_DPI_COLOR_CODING, color);
+	dsi_write(dsi, DSI_DPI_CFG_POL, val);
+	/*
+	 * TODO dw drv improvements
+	 * largest packet sizes during hfp or during vsa/vbp/vfp
+	 * should be computed according to byte lane, lane number and only
+	 * if sending lp cmds in high speed is enabled (PHY_TXREQUESTCLKHS)
+	 */
+	dsi_write(dsi, DSI_DPI_LP_CMD_TIM, OUTVACT_LPCMD_TIME(4)
+		  | INVACT_LPCMD_TIME(4));
+}
+
+static void dw_mipi_dsi_packet_handler_config(struct dw_mipi_dsi *dsi)
+{
+	dsi_write(dsi, DSI_PCKHDL_CFG, EN_CRC_RX | EN_ECC_RX | EN_BTA);
+}
+
+static void dw_mipi_dsi_video_packet_config(struct dw_mipi_dsi *dsi,
+					    struct drm_display_mode *mode)
+{
+	/*
+	 * TODO dw drv improvements
+	 * only burst mode is supported here. For non-burst video modes,
+	 * we should compute DSI_VID_PKT_SIZE, DSI_VCCR.NUMC &
+	 * DSI_VNPCR.NPSIZE... especially because this driver supports
+	 * non-burst video modes, see dw_mipi_dsi_video_mode_config()...
+	 */
+	dsi_write(dsi, DSI_VID_PKT_SIZE, VID_PKT_SIZE(mode->hdisplay));
+}
+
+static void dw_mipi_dsi_command_mode_config(struct dw_mipi_dsi *dsi)
+{
+	/*
+	 * TODO dw drv improvements
+	 * compute high speed transmission counter timeout according
+	 * to the timeout clock division (TO_CLK_DIVIDSION) and byte lane...
+	 */
+	dsi_write(dsi, DSI_TO_CNT_CFG, HSTX_TO_CNT(1000) | LPRX_TO_CNT(1000));
+	/*
+	 * TODO dw drv improvements
+	 * the Bus-Turn-Around Timeout Counter should be computed
+	 * according to byte lane...
+	 */
+	dsi_write(dsi, DSI_BTA_TO_CNT, 0xd00);
+	dsi_write(dsi, DSI_MODE_CFG, ENABLE_CMD_MODE);
+}
+
+/* Get lane byte clock cycles. */
+static u32 dw_mipi_dsi_get_hcomponent_lbcc(struct dw_mipi_dsi *dsi,
+					   struct drm_display_mode *mode,
+					   u32 hcomponent)
+{
+	u32 frac, lbcc;
+
+	lbcc = hcomponent * dsi->lane_mbps * MSEC_PER_SEC / 8;
+
+	frac = lbcc % mode->clock;
+	lbcc = lbcc / mode->clock;
+	if (frac)
+		lbcc++;
+
+	return lbcc;
+}
+
+static void dw_mipi_dsi_line_timer_config(struct dw_mipi_dsi *dsi,
+					  struct drm_display_mode *mode)
+{
+	u32 htotal, hsa, hbp, lbcc;
+
+	htotal = mode->htotal;
+	hsa = mode->hsync_end - mode->hsync_start;
+	hbp = mode->htotal - mode->hsync_end;
+
+	/*
+	 * TODO dw drv improvements
+	 * computations below may be improved...
+	 */
+	lbcc = dw_mipi_dsi_get_hcomponent_lbcc(dsi, mode, htotal);
+	dsi_write(dsi, DSI_VID_HLINE_TIME, lbcc);
+
+	lbcc = dw_mipi_dsi_get_hcomponent_lbcc(dsi, mode, hsa);
+	dsi_write(dsi, DSI_VID_HSA_TIME, lbcc);
+
+	lbcc = dw_mipi_dsi_get_hcomponent_lbcc(dsi, mode, hbp);
+	dsi_write(dsi, DSI_VID_HBP_TIME, lbcc);
+}
+
+static void dw_mipi_dsi_vertical_timing_config(struct dw_mipi_dsi *dsi,
+					       struct drm_display_mode *mode)
+{
+	u32 vactive, vsa, vfp, vbp;
+
+	vactive = mode->vdisplay;
+	vsa = mode->vsync_end - mode->vsync_start;
+	vfp = mode->vsync_start - mode->vdisplay;
+	vbp = mode->vtotal - mode->vsync_end;
+
+	dsi_write(dsi, DSI_VID_VACTIVE_LINES, vactive);
+	dsi_write(dsi, DSI_VID_VSA_LINES, vsa);
+	dsi_write(dsi, DSI_VID_VFP_LINES, vfp);
+	dsi_write(dsi, DSI_VID_VBP_LINES, vbp);
+}
+
+static void dw_mipi_dsi_dphy_timing_config(struct dw_mipi_dsi *dsi)
+{
+	/*
+	 * TODO dw drv improvements
+	 * data & clock lane timers should be computed according to panel
+	 * blankings and to the automatic clock lane control mode...
+	 * note: DSI_PHY_TMR_CFG.MAX_RD_TIME should be in line with
+	 * DSI_CMD_MODE_CFG.MAX_RD_PKT_SIZE_LP (see CMD_MODE_ALL_LP)
+	 */
+	dsi_write(dsi, DSI_PHY_TMR_CFG, PHY_HS2LP_TIME(0x40)
+		  | PHY_LP2HS_TIME(0x40) | MAX_RD_TIME(10000));
+
+	dsi_write(dsi, DSI_PHY_TMR_LPCLK_CFG, PHY_CLKHS2LP_TIME(0x40)
+		  | PHY_CLKLP2HS_TIME(0x40));
+}
+
+static void dw_mipi_dsi_dphy_interface_config(struct dw_mipi_dsi *dsi)
+{
+	/*
+	 * TODO dw drv improvements
+	 * stop wait time should be the maximum between host dsi
+	 * and panel stop wait times
+	 */
+	dsi_write(dsi, DSI_PHY_IF_CFG, PHY_STOP_WAIT_TIME(0x20) |
+		  N_LANES(dsi->lanes));
+}
+
+static void dw_mipi_dsi_dphy_init(struct dw_mipi_dsi *dsi)
+{
+	/* Clear PHY state */
+	dsi_write(dsi, DSI_PHY_RSTZ, PHY_DISFORCEPLL | PHY_DISABLECLK
+		  | PHY_RSTZ | PHY_SHUTDOWNZ);
+	dsi_write(dsi, DSI_PHY_TST_CTRL0, PHY_UNTESTCLR);
+	dsi_write(dsi, DSI_PHY_TST_CTRL0, PHY_TESTCLR);
+	dsi_write(dsi, DSI_PHY_TST_CTRL0, PHY_UNTESTCLR);
+}
+
+static void dw_mipi_dsi_dphy_enable(struct dw_mipi_dsi *dsi)
+{
+	u32 val;
+	int ret;
+
+	dsi_write(dsi, DSI_PHY_RSTZ, PHY_ENFORCEPLL | PHY_ENABLECLK |
+		  PHY_UNRSTZ | PHY_UNSHUTDOWNZ);
+
+	ret = readl_poll_timeout(dsi->base + DSI_PHY_STATUS,
+				 val, val & LOCK, 1000, PHY_STATUS_TIMEOUT_US);
+	if (ret < 0)
+		DRM_DEBUG_DRIVER("failed to wait phy lock state\n");
+
+	ret = readl_poll_timeout(dsi->base + DSI_PHY_STATUS,
+				 val, val & STOP_STATE_CLK_LANE, 1000,
+				 PHY_STATUS_TIMEOUT_US);
+	if (ret < 0)
+		DRM_DEBUG_DRIVER("failed to wait phy clk lane stop state\n");
+}
+
+static void dw_mipi_dsi_clear_err(struct dw_mipi_dsi *dsi)
+{
+	dsi_read(dsi, DSI_INT_ST0);
+	dsi_read(dsi, DSI_INT_ST1);
+	dsi_write(dsi, DSI_INT_MSK0, 0);
+	dsi_write(dsi, DSI_INT_MSK1, 0);
+}
+
+static void dw_mipi_dsi_bridge_post_disable(struct drm_bridge *bridge)
+{
+	struct dw_mipi_dsi *dsi = bridge_to_dsi(bridge);
+
+	/*
+	 * Switch to command mode before panel-bridge post_disable &
+	 * panel unprepare.
+	 * Note: panel-bridge disable & panel disable have already been
+	 * called by the drm framework.
+	 */
+	dw_mipi_dsi_set_mode(dsi, 0);
+
+	/*
+	 * TODO Only way found to call panel-bridge post_disable &
+	 * panel unprepare before the dsi "final" disable...
+	 * This needs to be fixed in the drm_bridge framework and the API
+	 * needs to be updated to manage our own call chains...
+	 */
+	dsi->panel_bridge->funcs->post_disable(dsi->panel_bridge);
+
+	dw_mipi_dsi_disable(dsi);
+	clk_disable_unprepare(dsi->pclk);
+	pm_runtime_put(dsi->dev);
+}
+
+static void dw_mipi_dsi_bridge_mode_set(struct drm_bridge *bridge,
+					struct drm_display_mode *mode,
+					struct drm_display_mode *adjusted_mode)
+{
+	struct dw_mipi_dsi *dsi = bridge_to_dsi(bridge);
+	const struct dw_mipi_dsi_phy_ops *phy_ops = dsi->plat_data->phy_ops;
+	void *priv_data = dsi->plat_data->priv_data;
+	int ret;
+
+	clk_prepare_enable(dsi->pclk);
+
+	ret = phy_ops->get_lane_mbps(priv_data, mode, dsi->mode_flags,
+				     dsi->lanes, dsi->format, &dsi->lane_mbps);
+	if (ret)
+		DRM_DEBUG_DRIVER("Phy get_lane_mbps() failed\n");
+
+	pm_runtime_get_sync(dsi->dev);
+	dw_mipi_dsi_init(dsi);
+	dw_mipi_dsi_dpi_config(dsi, mode);
+	dw_mipi_dsi_packet_handler_config(dsi);
+	dw_mipi_dsi_video_mode_config(dsi);
+	dw_mipi_dsi_video_packet_config(dsi, mode);
+	dw_mipi_dsi_command_mode_config(dsi);
+	dw_mipi_dsi_line_timer_config(dsi, mode);
+	dw_mipi_dsi_vertical_timing_config(dsi, mode);
+
+	dw_mipi_dsi_dphy_init(dsi);
+	dw_mipi_dsi_dphy_timing_config(dsi);
+	dw_mipi_dsi_dphy_interface_config(dsi);
+
+	dw_mipi_dsi_clear_err(dsi);
+
+	ret = phy_ops->init(priv_data);
+	if (ret)
+		DRM_DEBUG_DRIVER("Phy init() failed\n");
+
+	dw_mipi_dsi_dphy_enable(dsi);
+
+	dw_mipi_dsi_wait_for_two_frames(mode);
+
+	/* Switch to cmd mode for panel-bridge pre_enable & panel prepare */
+	dw_mipi_dsi_set_mode(dsi, 0);
+}
+
+static void dw_mipi_dsi_bridge_enable(struct drm_bridge *bridge)
+{
+	struct dw_mipi_dsi *dsi = bridge_to_dsi(bridge);
+
+	/* Switch to video mode for panel-bridge enable & panel enable */
+	dw_mipi_dsi_set_mode(dsi, MIPI_DSI_MODE_VIDEO);
+}
+
+static enum drm_mode_status
+dw_mipi_dsi_bridge_mode_valid(struct drm_bridge *bridge,
+			      const struct drm_display_mode *mode)
+{
+	struct dw_mipi_dsi *dsi = bridge_to_dsi(bridge);
+	const struct dw_mipi_dsi_plat_data *pdata = dsi->plat_data;
+	enum drm_mode_status mode_status = MODE_OK;
+
+	if (pdata->mode_valid)
+		mode_status = pdata->mode_valid(pdata->priv_data, mode);
+
+	return mode_status;
+}
+
+static int dw_mipi_dsi_bridge_attach(struct drm_bridge *bridge)
+{
+	struct dw_mipi_dsi *dsi = bridge_to_dsi(bridge);
+
+	if (!bridge->encoder) {
+		DRM_ERROR("Parent encoder object not found\n");
+		return -ENODEV;
+	}
+
+	/* Set the encoder type as caller does not know it */
+	bridge->encoder->encoder_type = DRM_MODE_ENCODER_DSI;
+
+	/* Attach the panel-bridge to the dsi bridge */
+	return drm_bridge_attach(bridge->encoder, dsi->panel_bridge, bridge);
+}
+
+static struct drm_bridge_funcs dw_mipi_dsi_bridge_funcs = {
+	.mode_set     = dw_mipi_dsi_bridge_mode_set,
+	.enable	      = dw_mipi_dsi_bridge_enable,
+	.post_disable = dw_mipi_dsi_bridge_post_disable,
+	.mode_valid   = dw_mipi_dsi_bridge_mode_valid,
+	.attach	      = dw_mipi_dsi_bridge_attach,
+};
+
+static struct dw_mipi_dsi *
+__dw_mipi_dsi_probe(struct platform_device *pdev,
+		    const struct dw_mipi_dsi_plat_data *plat_data)
+{
+	struct device *dev = &pdev->dev;
+	struct reset_control *apb_rst;
+	struct dw_mipi_dsi *dsi;
+	struct resource *res;
+	int ret;
+
+	dsi = devm_kzalloc(dev, sizeof(*dsi), GFP_KERNEL);
+	if (!dsi)
+		return ERR_PTR(-ENOMEM);
+
+	dsi->dev = dev;
+	dsi->plat_data = plat_data;
+
+	if (!plat_data->phy_ops->init || !plat_data->phy_ops->get_lane_mbps) {
+		DRM_ERROR("Phy not properly configured\n");
+		return ERR_PTR(-ENODEV);
+	}
+
+	if (!plat_data->base) {
+		res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+		if (!res)
+			return ERR_PTR(-ENODEV);
+
+		dsi->base = devm_ioremap_resource(dev, res);
+		if (IS_ERR(dsi->base))
+			return ERR_CAST(dsi->base);
+
+	} else {
+		dsi->base = plat_data->base;
+	}
+
+	dsi->pclk = devm_clk_get(dev, "pclk");
+	if (IS_ERR(dsi->pclk)) {
+		ret = PTR_ERR(dsi->pclk);
+		dev_err(dev, "Unable to get pclk: %d\n", ret);
+		return ERR_PTR(ret);
+	}
+
+	/*
+	 * Note that the reset was not defined in the initial device tree, so
+	 * we have to be prepared for it not being found.
+	 */
+	apb_rst = devm_reset_control_get(dev, "apb");
+	if (IS_ERR(apb_rst)) {
+		ret = PTR_ERR(apb_rst);
+		if (ret == -ENOENT) {
+			apb_rst = NULL;
+		} else {
+			dev_err(dev, "Unable to get reset control: %d\n", ret);
+			return ERR_PTR(ret);
+		}
+	}
+
+	if (apb_rst) {
+		ret = clk_prepare_enable(dsi->pclk);
+		if (ret) {
+			dev_err(dev, "%s: Failed to enable pclk\n", __func__);
+			return ERR_PTR(ret);
+		}
+
+		reset_control_assert(apb_rst);
+		usleep_range(10, 20);
+		reset_control_deassert(apb_rst);
+
+		clk_disable_unprepare(dsi->pclk);
+	}
+
+	pm_runtime_enable(dev);
+
+	dsi->dsi_host.ops = &dw_mipi_dsi_host_ops;
+	dsi->dsi_host.dev = dev;
+	ret = mipi_dsi_host_register(&dsi->dsi_host);
+	if (ret) {
+		dev_err(dev, "Failed to register MIPI host: %d\n", ret);
+		return ERR_PTR(ret);
+	}
+
+	dsi->bridge.driver_private = dsi;
+	dsi->bridge.funcs = &dw_mipi_dsi_bridge_funcs;
+#ifdef CONFIG_OF
+	dsi->bridge.of_node = pdev->dev.of_node;
+#endif
+
+	dev_set_drvdata(dev, dsi);
+
+	return dsi;
+}
+
+static void __dw_mipi_dsi_remove(struct dw_mipi_dsi *dsi)
+{
+	pm_runtime_disable(dsi->dev);
+}
+
+/*
+ * Probe/remove API, used from platforms based on the DRM bridge API.
+ */
+int dw_mipi_dsi_probe(struct platform_device *pdev,
+		      const struct dw_mipi_dsi_plat_data *plat_data)
+{
+	struct dw_mipi_dsi *dsi;
+
+	dsi = __dw_mipi_dsi_probe(pdev, plat_data);
+	if (IS_ERR(dsi))
+		return PTR_ERR(dsi);
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(dw_mipi_dsi_probe);
+
+void dw_mipi_dsi_remove(struct platform_device *pdev)
+{
+	struct dw_mipi_dsi *dsi = platform_get_drvdata(pdev);
+
+	mipi_dsi_host_unregister(&dsi->dsi_host);
+
+	__dw_mipi_dsi_remove(dsi);
+}
+EXPORT_SYMBOL_GPL(dw_mipi_dsi_remove);
+
+/*
+ * Bind/unbind API, used from platforms based on the component framework.
+ */
+int dw_mipi_dsi_bind(struct platform_device *pdev, struct drm_encoder *encoder,
+		     const struct dw_mipi_dsi_plat_data *plat_data)
+{
+	struct dw_mipi_dsi *dsi;
+	int ret;
+
+	dsi = __dw_mipi_dsi_probe(pdev, plat_data);
+	if (IS_ERR(dsi))
+		return PTR_ERR(dsi);
+
+	ret = drm_bridge_attach(encoder, &dsi->bridge, NULL);
+	if (ret) {
+		dw_mipi_dsi_remove(pdev);
+		DRM_ERROR("Failed to initialize bridge with drm\n");
+		return ret;
+	}
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(dw_mipi_dsi_bind);
+
+void dw_mipi_dsi_unbind(struct device *dev)
+{
+	struct dw_mipi_dsi *dsi = dev_get_drvdata(dev);
+
+	__dw_mipi_dsi_remove(dsi);
+}
+EXPORT_SYMBOL_GPL(dw_mipi_dsi_unbind);
+
+MODULE_AUTHOR("Chris Zhong <zyw@rock-chips.com>");
+MODULE_AUTHOR("Philippe Cornu <philippe.cornu@st.com>");
+MODULE_DESCRIPTION("DW MIPI DSI host controller driver");
+MODULE_LICENSE("GPL");
+MODULE_ALIAS("platform:dw-mipi-dsi");
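
The new generic driver above exposes both a probe/remove API and a component
bind/unbind API for platform glue. A hedged sketch of a glue driver using the
probe/remove path; every foo_* name and value is hypothetical, while the
dw_mipi_dsi_* entry points, the plat_data fields and the phy_ops callback
signatures are taken from the code above (the device still needs the register
resource and the "pclk" clock that __dw_mipi_dsi_probe() requests):

#include <linux/module.h>
#include <linux/platform_device.h>
#include <drm/drm_modes.h>
#include <drm/bridge/dw_mipi_dsi.h>

static int foo_dsi_phy_init(void *priv_data)
{
	/* Program the vendor-specific D-PHY / PLL here. */
	return 0;
}

static int foo_dsi_get_lane_mbps(void *priv_data,
				 struct drm_display_mode *mode,
				 unsigned long mode_flags, u32 lanes,
				 u32 format, unsigned int *lane_mbps)
{
	/* Derive the per-lane bit rate from the pixel clock; fixed here. */
	*lane_mbps = 500;
	return 0;
}

static const struct dw_mipi_dsi_phy_ops foo_dsi_phy_ops = {
	.init = foo_dsi_phy_init,
	.get_lane_mbps = foo_dsi_get_lane_mbps,
};

static const struct dw_mipi_dsi_plat_data foo_dsi_plat_data = {
	.max_data_lanes = 4,
	.phy_ops = &foo_dsi_phy_ops,
};

static int foo_dsi_probe(struct platform_device *pdev)
{
	return dw_mipi_dsi_probe(pdev, &foo_dsi_plat_data);
}

static int foo_dsi_remove(struct platform_device *pdev)
{
	dw_mipi_dsi_remove(pdev);
	return 0;
}

static struct platform_driver foo_dsi_driver = {
	.probe	= foo_dsi_probe,
	.remove	= foo_dsi_remove,
	.driver	= {
		.name = "foo-dsi",
	},
};
module_platform_driver(foo_dsi_driver);

MODULE_LICENSE("GPL");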

+ 1 - 5
drivers/gpu/drm/bridge/tc358767.c

@@ -1325,11 +1325,7 @@ static int tc_probe(struct i2c_client *client, const struct i2c_device_id *id)
 
 	tc->bridge.funcs = &tc_bridge_funcs;
 	tc->bridge.of_node = dev->of_node;
-	ret = drm_bridge_add(&tc->bridge);
-	if (ret) {
-		dev_err(dev, "Failed to add drm_bridge: %d\n", ret);
-		goto err_unregister_aux;
-	}
+	drm_bridge_add(&tc->bridge);
 
 	i2c_set_clientdata(client, tc);
 

+ 1 - 5
drivers/gpu/drm/bridge/ti-tfp410.c

@@ -237,11 +237,7 @@ static int tfp410_init(struct device *dev)
 		}
 	}
 
-	ret = drm_bridge_add(&dvi->bridge);
-	if (ret) {
-		dev_err(dev, "drm_bridge_add() failed: %d\n", ret);
-		goto fail;
-	}
+	drm_bridge_add(&dvi->bridge);
 
 	return 0;
 fail:

+ 2 - 3
drivers/gpu/drm/cirrus/cirrus_drv.c

@@ -132,7 +132,6 @@ static struct drm_driver driver = {
 	.driver_features = DRIVER_MODESET | DRIVER_GEM,
 	.load = cirrus_driver_load,
 	.unload = cirrus_driver_unload,
-	.set_busid = drm_pci_set_busid,
 	.fops = &cirrus_driver_fops,
 	.name = DRIVER_NAME,
 	.desc = DRIVER_DESC,
@@ -166,12 +165,12 @@ static int __init cirrus_init(void)
 
 	if (cirrus_modeset == 0)
 		return -EINVAL;
-	return drm_pci_init(&driver, &cirrus_pci_driver);
+	return pci_register_driver(&cirrus_pci_driver);
 }
 
 static void __exit cirrus_exit(void)
 {
-	drm_pci_exit(&driver, &cirrus_pci_driver);
+	pci_unregister_driver(&cirrus_pci_driver);
 }
 
 module_init(cirrus_init);

+ 0 - 1
drivers/gpu/drm/cirrus/cirrus_fbdev.c

@@ -215,7 +215,6 @@ static int cirrusfb_create(struct drm_fb_helper *helper,
 
 	strcpy(info->fix.id, "cirrusdrmfb");
 
-	info->flags = FBINFO_DEFAULT;
 	info->fbops = &cirrusfb_ops;
 
 	drm_fb_helper_fill_fix(info, fb->pitches[0], fb->format->depth);

+ 70 - 51
drivers/gpu/drm/drm_atomic.c

@@ -29,7 +29,6 @@
 #include <drm/drmP.h>
 #include <drm/drm_atomic.h>
 #include <drm/drm_mode.h>
-#include <drm/drm_plane_helper.h>
 #include <drm/drm_print.h>
 #include <linux/sync_file.h>
 
@@ -188,12 +187,15 @@ void drm_atomic_state_default_clear(struct drm_atomic_state *state)
 	}
 
 	for (i = 0; i < state->num_private_objs; i++) {
-		void *obj_state = state->private_objs[i].obj_state;
+		struct drm_private_obj *obj = state->private_objs[i].ptr;
 
-		state->private_objs[i].funcs->destroy_state(obj_state);
-		state->private_objs[i].obj = NULL;
-		state->private_objs[i].obj_state = NULL;
-		state->private_objs[i].funcs = NULL;
+		if (!obj)
+			continue;
+
+		obj->funcs->atomic_destroy_state(obj,
+						 state->private_objs[i].state);
+		state->private_objs[i].ptr = NULL;
+		state->private_objs[i].state = NULL;
 	}
 	state->num_private_objs = 0;
 
@@ -409,34 +411,6 @@ int drm_atomic_set_mode_prop_for_crtc(struct drm_crtc_state *state,
 }
 EXPORT_SYMBOL(drm_atomic_set_mode_prop_for_crtc);
 
-/**
- * drm_atomic_replace_property_blob - replace a blob property
- * @blob: a pointer to the member blob to be replaced
- * @new_blob: the new blob to replace with
- * @replaced: whether the blob has been replaced
- *
- * RETURNS:
- * Zero on success, error code on failure
- */
-static void
-drm_atomic_replace_property_blob(struct drm_property_blob **blob,
-				 struct drm_property_blob *new_blob,
-				 bool *replaced)
-{
-	struct drm_property_blob *old_blob = *blob;
-
-	if (old_blob == new_blob)
-		return;
-
-	drm_property_blob_put(old_blob);
-	if (new_blob)
-		drm_property_blob_get(new_blob);
-	*blob = new_blob;
-	*replaced = true;
-
-	return;
-}
-
 static int
 drm_atomic_replace_property_blob_from_id(struct drm_device *dev,
 					 struct drm_property_blob **blob,
@@ -457,7 +431,7 @@ drm_atomic_replace_property_blob_from_id(struct drm_device *dev,
 		}
 	}
 
-	drm_atomic_replace_property_blob(blob, new_blob, replaced);
+	*replaced |= drm_property_replace_blob(blob, new_blob);
 	drm_property_blob_put(new_blob);
 
 	return 0;
@@ -990,12 +964,45 @@ static void drm_atomic_plane_print_state(struct drm_printer *p,
 		plane->funcs->atomic_print_state(p, state);
 }
 
+/**
+ * drm_atomic_private_obj_init - initialize private object
+ * @obj: private object
+ * @state: initial private object state
+ * @funcs: pointer to the struct of function pointers that identify the object
+ * type
+ *
+ * Initialize the private object, which can be embedded into any
+ * driver private object that needs its own atomic state.
+ */
+void
+drm_atomic_private_obj_init(struct drm_private_obj *obj,
+			    struct drm_private_state *state,
+			    const struct drm_private_state_funcs *funcs)
+{
+	memset(obj, 0, sizeof(*obj));
+
+	obj->state = state;
+	obj->funcs = funcs;
+}
+EXPORT_SYMBOL(drm_atomic_private_obj_init);
+
+/**
+ * drm_atomic_private_obj_fini - finalize private object
+ * @obj: private object
+ *
+ * Finalize the private object.
+ */
+void
+drm_atomic_private_obj_fini(struct drm_private_obj *obj)
+{
+	obj->funcs->atomic_destroy_state(obj, obj->state);
+}
+EXPORT_SYMBOL(drm_atomic_private_obj_fini);
+
 /**
  * drm_atomic_get_private_obj_state - get private object state
  * @state: global atomic state
  * @obj: private object to get the state for
- * @funcs: pointer to the struct of function pointers that identify the object
- * type
  *
  * This function returns the private object state for the given private object,
  * allocating the state if needed. It does not grab any locks as the caller is
@@ -1005,18 +1012,18 @@ static void drm_atomic_plane_print_state(struct drm_printer *p,
  *
  * Either the allocated state or the error code encoded into a pointer.
  */
-void *
-drm_atomic_get_private_obj_state(struct drm_atomic_state *state, void *obj,
-			      const struct drm_private_state_funcs *funcs)
+struct drm_private_state *
+drm_atomic_get_private_obj_state(struct drm_atomic_state *state,
+				 struct drm_private_obj *obj)
 {
 	int index, num_objs, i;
 	size_t size;
 	struct __drm_private_objs_state *arr;
+	struct drm_private_state *obj_state;
 
 	for (i = 0; i < state->num_private_objs; i++)
-		if (obj == state->private_objs[i].obj &&
-		    state->private_objs[i].obj_state)
-			return state->private_objs[i].obj_state;
+		if (obj == state->private_objs[i].ptr)
+			return state->private_objs[i].state;
 
 	num_objs = state->num_private_objs + 1;
 	size = sizeof(*state->private_objs) * num_objs;
@@ -1028,18 +1035,21 @@ drm_atomic_get_private_obj_state(struct drm_atomic_state *state, void *obj,
 	index = state->num_private_objs;
 	memset(&state->private_objs[index], 0, sizeof(*state->private_objs));
 
-	state->private_objs[index].obj_state = funcs->duplicate_state(state, obj);
-	if (!state->private_objs[index].obj_state)
+	obj_state = obj->funcs->atomic_duplicate_state(obj);
+	if (!obj_state)
 		return ERR_PTR(-ENOMEM);
 
-	state->private_objs[index].obj = obj;
-	state->private_objs[index].funcs = funcs;
+	state->private_objs[index].state = obj_state;
+	state->private_objs[index].old_state = obj->state;
+	state->private_objs[index].new_state = obj_state;
+	state->private_objs[index].ptr = obj;
+
 	state->num_private_objs = num_objs;
 
-	DRM_DEBUG_ATOMIC("Added new private object state %p to %p\n",
-			 state->private_objs[index].obj_state, state);
+	DRM_DEBUG_ATOMIC("Added new private object %p state %p to %p\n",
+			 obj, obj_state, state);
 
-	return state->private_objs[index].obj_state;
+	return obj_state;
 }
 EXPORT_SYMBOL(drm_atomic_get_private_obj_state);
 
@@ -2039,7 +2049,7 @@ static int prepare_crtc_signaling(struct drm_device *dev,
 {
 	struct drm_crtc *crtc;
 	struct drm_crtc_state *crtc_state;
-	int i, ret;
+	int i, c = 0, ret;
 
 	if (arg->flags & DRM_MODE_ATOMIC_TEST_ONLY)
 		return 0;
@@ -2100,8 +2110,17 @@ static int prepare_crtc_signaling(struct drm_device *dev,
 
 			crtc_state->event->base.fence = fence;
 		}
+
+		c++;
 	}
 
+	/*
+	 * Having this flag means userspace is waiting for an event that
+	 * will never arrive, since this commit contains no CRTC to signal it.
+	 */
+	if (c == 0 && (arg->flags & DRM_MODE_PAGE_FLIP_EVENT))
+		return -EINVAL;
+
 	return 0;
 }
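
drm_atomic.c now provides a proper drm_private_obj/drm_private_state base
class in place of the old void* private-object interface. A hedged sketch of a
driver embedding it for a piece of shared state; all foo_* names are
illustrative, the init/duplicate/destroy/get calls mirror the API added above:

#include <linux/err.h>
#include <linux/slab.h>
#include <drm/drm_atomic.h>

/* Driver-private state subclassing the new base class. */
struct foo_mux_state {
	struct drm_private_state base;
	u32 active_route;
};

struct foo_device {
	struct drm_private_obj mux_obj;
	/* ... */
};

static struct drm_private_state *
foo_mux_duplicate_state(struct drm_private_obj *obj)
{
	struct foo_mux_state *state;

	/* obj->state is known to be embedded in a struct foo_mux_state. */
	state = kmemdup(obj->state, sizeof(*state), GFP_KERNEL);
	if (!state)
		return NULL;

	return &state->base;
}

static void foo_mux_destroy_state(struct drm_private_obj *obj,
				  struct drm_private_state *state)
{
	kfree(container_of(state, struct foo_mux_state, base));
}

static const struct drm_private_state_funcs foo_mux_state_funcs = {
	.atomic_duplicate_state = foo_mux_duplicate_state,
	.atomic_destroy_state = foo_mux_destroy_state,
};

/* Called once at driver init time. */
static int foo_mux_init(struct foo_device *foo)
{
	struct foo_mux_state *state;

	state = kzalloc(sizeof(*state), GFP_KERNEL);
	if (!state)
		return -ENOMEM;

	drm_atomic_private_obj_init(&foo->mux_obj, &state->base,
				    &foo_mux_state_funcs);
	return 0;
}

/* In the driver's atomic_check path. */
static int foo_mux_atomic_check(struct drm_atomic_state *state,
				struct foo_device *foo)
{
	struct drm_private_state *priv_state;
	struct foo_mux_state *mux_state;

	priv_state = drm_atomic_get_private_obj_state(state, &foo->mux_obj);
	if (IS_ERR(priv_state))
		return PTR_ERR(priv_state);

	mux_state = container_of(priv_state, struct foo_mux_state, base);
	mux_state->active_route = 1;
	return 0;
}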
 

+ 271 - 74
drivers/gpu/drm/drm_atomic_helper.c

@@ -795,6 +795,9 @@ int drm_atomic_helper_check(struct drm_device *dev,
 	if (ret)
 		return ret;
 
+	if (state->legacy_cursor_update)
+		state->async_update = !drm_atomic_helper_async_check(dev, state);
+
 	return ret;
 }
 EXPORT_SYMBOL(drm_atomic_helper_check);
@@ -1069,12 +1072,13 @@ void drm_atomic_helper_commit_modeset_enables(struct drm_device *dev,
 					      struct drm_atomic_state *old_state)
 {
 	struct drm_crtc *crtc;
+	struct drm_crtc_state *old_crtc_state;
 	struct drm_crtc_state *new_crtc_state;
 	struct drm_connector *connector;
 	struct drm_connector_state *new_conn_state;
 	int i;
 
-	for_each_new_crtc_in_state(old_state, crtc, new_crtc_state, i) {
+	for_each_oldnew_crtc_in_state(old_state, crtc, old_crtc_state, new_crtc_state, i) {
 		const struct drm_crtc_helper_funcs *funcs;
 
 		/* Need to filter out CRTCs where only planes change. */
@@ -1090,8 +1094,8 @@ void drm_atomic_helper_commit_modeset_enables(struct drm_device *dev,
 			DRM_DEBUG_ATOMIC("enabling [CRTC:%d:%s]\n",
 					 crtc->base.id, crtc->name);
 
-			if (funcs->enable)
-				funcs->enable(crtc);
+			if (funcs->atomic_enable)
+				funcs->atomic_enable(crtc, old_crtc_state);
 			else
 				funcs->commit(crtc);
 		}
@@ -1191,9 +1195,13 @@ EXPORT_SYMBOL(drm_atomic_helper_wait_for_fences);
  *
  * Helper to, after atomic commit, wait for vblanks on all effected
  * crtcs (ie. before cleaning up old framebuffers using
- * drm_atomic_helper_cleanup_planes()). It will only wait on crtcs where the
+ * drm_atomic_helper_cleanup_planes()). It will only wait on CRTCs where the
  * framebuffers have actually changed to optimize for the legacy cursor and
  * plane update use-case.
+ *
+ * Drivers using the nonblocking commit tracking support initialized by calling
+ * drm_atomic_helper_setup_commit() should look at
+ * drm_atomic_helper_wait_for_flip_done() as an alternative.
  */
 void
 drm_atomic_helper_wait_for_vblanks(struct drm_device *dev,
@@ -1240,28 +1248,55 @@ drm_atomic_helper_wait_for_vblanks(struct drm_device *dev,
 }
 EXPORT_SYMBOL(drm_atomic_helper_wait_for_vblanks);
 
+/**
+ * drm_atomic_helper_wait_for_flip_done - wait for all page flips to be done
+ * @dev: DRM device
+ * @old_state: atomic state object with old state structures
+ *
+ * Helper to, after atomic commit, wait for page flips on all affected
+ * CRTCs (i.e. before cleaning up old framebuffers using
+ * drm_atomic_helper_cleanup_planes()). Compared to
+ * drm_atomic_helper_wait_for_vblanks() this waits for the page flips to
+ * complete on all CRTCs, assuming that cursor-only updates signal their
+ * completion immediately (or via a different path).
+ *
+ * This requires that drivers use the nonblocking commit tracking support
+ * initialized using drm_atomic_helper_setup_commit().
+ */
+void drm_atomic_helper_wait_for_flip_done(struct drm_device *dev,
+					  struct drm_atomic_state *old_state)
+{
+	struct drm_crtc_state *unused;
+	struct drm_crtc *crtc;
+	int i;
+
+	for_each_crtc_in_state(old_state, crtc, unused, i) {
+		struct drm_crtc_commit *commit = old_state->crtcs[i].commit;
+		int ret;
+
+		if (!commit)
+			continue;
+
+		ret = wait_for_completion_timeout(&commit->flip_done, 10 * HZ);
+		if (ret == 0)
+			DRM_ERROR("[CRTC:%d:%s] flip_done timed out\n",
+				  crtc->base.id, crtc->name);
+	}
+}
+EXPORT_SYMBOL(drm_atomic_helper_wait_for_flip_done);
+
 /**
  * drm_atomic_helper_commit_tail - commit atomic update to hardware
  * @old_state: atomic state object with old state structures
  *
  * This is the default implementation for the
- * &drm_mode_config_helper_funcs.atomic_commit_tail hook.
+ * &drm_mode_config_helper_funcs.atomic_commit_tail hook, for drivers
+ * that do not support runtime_pm or do not need the CRTC to be
+ * enabled to perform a commit. Otherwise, see
+ * drm_atomic_helper_commit_tail_rpm().
  *
  * Note that the default ordering of how the various stages are called is to
- * match the legacy modeset helper library closest. One peculiarity of that is
- * that it doesn't mesh well with runtime PM at all.
- *
- * For drivers supporting runtime PM the recommended sequence is instead ::
- *
- *     drm_atomic_helper_commit_modeset_disables(dev, old_state);
- *
- *     drm_atomic_helper_commit_modeset_enables(dev, old_state);
- *
- *     drm_atomic_helper_commit_planes(dev, old_state,
- *                                     DRM_PLANE_COMMIT_ACTIVE_ONLY);
- *
- * for committing the atomic update to hardware.  See the kerneldoc entries for
- * these three functions for more details.
+ * match the legacy modeset helper library closest.
  */
 void drm_atomic_helper_commit_tail(struct drm_atomic_state *old_state)
 {
@@ -1281,6 +1316,35 @@ void drm_atomic_helper_commit_tail(struct drm_atomic_state *old_state)
 }
 EXPORT_SYMBOL(drm_atomic_helper_commit_tail);
 
+/**
+ * drm_atomic_helper_commit_tail_rpm - commit atomic update to hardware
+ * @old_state: atomic state object with old state structures
+ *
+ * This is an alternative implementation for the
+ * &drm_mode_config_helper_funcs.atomic_commit_tail hook, for drivers
+ * that support runtime_pm or need the CRTC to be enabled to perform a
+ * commit. Otherwise, one should use the default implementation
+ * drm_atomic_helper_commit_tail().
+ */
+void drm_atomic_helper_commit_tail_rpm(struct drm_atomic_state *old_state)
+{
+	struct drm_device *dev = old_state->dev;
+
+	drm_atomic_helper_commit_modeset_disables(dev, old_state);
+
+	drm_atomic_helper_commit_modeset_enables(dev, old_state);
+
+	drm_atomic_helper_commit_planes(dev, old_state,
+					DRM_PLANE_COMMIT_ACTIVE_ONLY);
+
+	drm_atomic_helper_commit_hw_done(old_state);
+
+	drm_atomic_helper_wait_for_vblanks(dev, old_state);
+
+	drm_atomic_helper_cleanup_planes(dev, old_state);
+}
+EXPORT_SYMBOL(drm_atomic_helper_commit_tail_rpm);
+
 static void commit_tail(struct drm_atomic_state *old_state)
 {
 	struct drm_device *dev = old_state->dev;
@@ -1310,6 +1374,114 @@ static void commit_work(struct work_struct *work)
 	commit_tail(state);
 }
 
+/**
+ * drm_atomic_helper_async_check - check if state can be committed asynchronously
+ * @dev: DRM device
+ * @state: the driver state object
+ *
+ * This helper will check if it is possible to commit the state asynchronously.
+ * Async commits are not supposed to swap the states like normal sync commits
+ * but just do in-place changes on the current state.
+ *
+ * It will return 0 if the commit can happen in an asynchronous fashion or an
+ * error if not. Note that an error only means the commit can't be done
+ * asynchronously; in that case the commit should be treated like a normal
+ * synchronous commit.
+ */
+int drm_atomic_helper_async_check(struct drm_device *dev,
+				   struct drm_atomic_state *state)
+{
+	struct drm_crtc *crtc;
+	struct drm_crtc_state *crtc_state;
+	struct drm_crtc_commit *commit;
+	struct drm_plane *__plane, *plane = NULL;
+	struct drm_plane_state *__plane_state, *plane_state = NULL;
+	const struct drm_plane_helper_funcs *funcs;
+	int i, j, n_planes = 0;
+
+	for_each_new_crtc_in_state(state, crtc, crtc_state, i) {
+		if (drm_atomic_crtc_needs_modeset(crtc_state))
+			return -EINVAL;
+	}
+
+	for_each_new_plane_in_state(state, __plane, __plane_state, i) {
+		n_planes++;
+		plane = __plane;
+		plane_state = __plane_state;
+	}
+
+	/* FIXME: we support only single plane updates for now */
+	if (!plane || n_planes != 1)
+		return -EINVAL;
+
+	if (!plane_state->crtc)
+		return -EINVAL;
+
+	funcs = plane->helper_private;
+	if (!funcs->atomic_async_update)
+		return -EINVAL;
+
+	if (plane_state->fence)
+		return -EINVAL;
+
+	/*
+	 * Don't do an async update if there is an outstanding commit modifying
+	 * the plane.  This prevents our async update's changes from getting
+	 * overridden by a previous synchronous update's state.
+	 */
+	for_each_new_crtc_in_state(state, crtc, crtc_state, i) {
+		if (plane->crtc != crtc)
+			continue;
+
+		spin_lock(&crtc->commit_lock);
+		commit = list_first_entry_or_null(&crtc->commit_list,
+						  struct drm_crtc_commit,
+						  commit_entry);
+		if (!commit) {
+			spin_unlock(&crtc->commit_lock);
+			continue;
+		}
+		spin_unlock(&crtc->commit_lock);
+
+		if (!crtc->state->state)
+			continue;
+
+		for_each_plane_in_state(crtc->state->state, __plane,
+					__plane_state, j) {
+			if (__plane == plane)
+				return -EINVAL;
+		}
+	}
+
+	return funcs->atomic_async_check(plane, plane_state);
+}
+EXPORT_SYMBOL(drm_atomic_helper_async_check);
+
+/**
+ * drm_atomic_helper_async_commit - commit state asynchronously
+ * @dev: DRM device
+ * @state: the driver state object
+ *
+ * This function commits a state asynchronously, i.e., not vblank
+ * synchronized. It should be used on a state only when
+ * drm_atomic_helper_async_check() succeeds. Async commits are not supposed
+ * to swap the states like normal sync commits, but just do in-place changes
+ * on the current state.
+ */
+void drm_atomic_helper_async_commit(struct drm_device *dev,
+				    struct drm_atomic_state *state)
+{
+	struct drm_plane *plane;
+	struct drm_plane_state *plane_state;
+	const struct drm_plane_helper_funcs *funcs;
+	int i;
+
+	for_each_new_plane_in_state(state, plane, plane_state, i) {
+		funcs = plane->helper_private;
+		funcs->atomic_async_update(plane, plane_state);
+	}
+}
+EXPORT_SYMBOL(drm_atomic_helper_async_commit);
+
 /**
  * drm_atomic_helper_commit - commit validated state object
  * @dev: DRM device
@@ -1334,6 +1506,17 @@ int drm_atomic_helper_commit(struct drm_device *dev,
 {
 	int ret;
 
+	if (state->async_update) {
+		ret = drm_atomic_helper_prepare_planes(dev, state);
+		if (ret)
+			return ret;
+
+		drm_atomic_helper_async_commit(dev, state);
+		drm_atomic_helper_cleanup_planes(dev, state);
+
+		return 0;
+	}
+
 	ret = drm_atomic_helper_setup_commit(state, nonblock);
 	if (ret)
 		return ret;
@@ -1346,10 +1529,8 @@ int drm_atomic_helper_commit(struct drm_device *dev,
 
 	if (!nonblock) {
 		ret = drm_atomic_helper_wait_for_fences(dev, state, true);
-		if (ret) {
-			drm_atomic_helper_cleanup_planes(dev, state);
-			return ret;
-		}
+		if (ret)
+			goto err;
 	}
 
 	/*
@@ -1358,7 +1539,9 @@ int drm_atomic_helper_commit(struct drm_device *dev,
 	 * the software side now.
 	 */
 
-	drm_atomic_helper_swap_state(state, true);
+	ret = drm_atomic_helper_swap_state(state, true);
+	if (ret)
+		goto err;
 
 	/*
 	 * Everything below can be run asynchronously without the need to grab
@@ -1387,6 +1570,10 @@ int drm_atomic_helper_commit(struct drm_device *dev,
 		commit_tail(state);
 
 	return 0;
+
+err:
+	drm_atomic_helper_cleanup_planes(dev, state);
+	return ret;
 }
 EXPORT_SYMBOL(drm_atomic_helper_commit);
 
@@ -1680,9 +1867,7 @@ void drm_atomic_helper_commit_hw_done(struct drm_atomic_state *old_state)
 
 		/* backend must have consumed any event by now */
 		WARN_ON(new_crtc_state->event);
-		spin_lock(&crtc->commit_lock);
 		complete_all(&commit->hw_done);
-		spin_unlock(&crtc->commit_lock);
 	}
 }
 EXPORT_SYMBOL(drm_atomic_helper_commit_hw_done);
@@ -1711,7 +1896,6 @@ void drm_atomic_helper_commit_cleanup_done(struct drm_atomic_state *old_state)
 		if (WARN_ON(!commit))
 			continue;
 
-		spin_lock(&crtc->commit_lock);
 		complete_all(&commit->cleanup_done);
 		WARN_ON(!try_wait_for_completion(&commit->hw_done));
 
@@ -1721,8 +1905,6 @@ void drm_atomic_helper_commit_cleanup_done(struct drm_atomic_state *old_state)
 		if (try_wait_for_completion(&commit->flip_done))
 			goto del_commit;
 
-		spin_unlock(&crtc->commit_lock);
-
 		/* We must wait for the vblank event to signal our completion
 		 * before releasing our reference, since the vblank work does
 		 * not hold a reference of its own. */
@@ -1732,8 +1914,8 @@ void drm_atomic_helper_commit_cleanup_done(struct drm_atomic_state *old_state)
 			DRM_ERROR("[CRTC:%d:%s] flip_done timed out\n",
 				  crtc->base.id, crtc->name);
 
-		spin_lock(&crtc->commit_lock);
 del_commit:
+		spin_lock(&crtc->commit_lock);
 		list_del(&commit->commit_entry);
 		spin_unlock(&crtc->commit_lock);
 	}
@@ -2069,14 +2251,14 @@ EXPORT_SYMBOL(drm_atomic_helper_cleanup_planes);
 /**
  * drm_atomic_helper_swap_state - store atomic state into current sw state
  * @state: atomic state
- * @stall: stall for proceeding commits
+ * @stall: stall for preceding commits
  *
  * This function stores the atomic state into the current state pointers in all
  * driver objects. It should be called after all failing steps have been done
  * and succeeded, but before the actual hardware state is committed.
  *
  * For cleanup and error recovery the current state for all changed objects will
- * be swaped into @state.
+ * be swapped into @state.
  *
  * With that sequence it fits perfectly into the plane prepare/cleanup sequence:
  *
@@ -2095,12 +2277,16 @@ EXPORT_SYMBOL(drm_atomic_helper_cleanup_planes);
  * the &drm_plane.state, &drm_crtc.state or &drm_connector.state pointer. With
  * the current atomic helpers this is almost always the case, since the helpers
  * don't pass the right state structures to the callbacks.
+ *
+ * Returns:
+ *
+ * Returns 0 on success. Can return -ERESTARTSYS when @stall is true and
+ * waiting for the preceding commits has been interrupted.
  */
-void drm_atomic_helper_swap_state(struct drm_atomic_state *state,
+int drm_atomic_helper_swap_state(struct drm_atomic_state *state,
 				  bool stall)
 {
-	int i;
-	long ret;
+	int i, ret;
 	struct drm_connector *connector;
 	struct drm_connector_state *old_conn_state, *new_conn_state;
 	struct drm_crtc *crtc;
@@ -2108,8 +2294,8 @@ void drm_atomic_helper_swap_state(struct drm_atomic_state *state,
 	struct drm_plane *plane;
 	struct drm_plane_state *old_plane_state, *new_plane_state;
 	struct drm_crtc_commit *commit;
-	void *obj, *obj_state;
-	const struct drm_private_state_funcs *funcs;
+	struct drm_private_obj *obj;
+	struct drm_private_state *old_obj_state, *new_obj_state;
 
 	if (stall) {
 		for_each_new_crtc_in_state(state, crtc, new_crtc_state, i) {
@@ -2123,12 +2309,11 @@ void drm_atomic_helper_swap_state(struct drm_atomic_state *state,
 			if (!commit)
 				continue;
 
-			ret = wait_for_completion_timeout(&commit->hw_done,
-							  10*HZ);
-			if (ret == 0)
-				DRM_ERROR("[CRTC:%d:%s] hw_done timed out\n",
-					  crtc->base.id, crtc->name);
+			ret = wait_for_completion_interruptible(&commit->hw_done);
 			drm_crtc_commit_put(commit);
+
+			if (ret)
+				return ret;
 		}
 	}
 
@@ -2171,8 +2356,17 @@ void drm_atomic_helper_swap_state(struct drm_atomic_state *state,
 		plane->state = new_plane_state;
 	}
 
-	__for_each_private_obj(state, obj, obj_state, i, funcs)
-		funcs->swap_state(obj, &state->private_objs[i].obj_state);
+	for_each_oldnew_private_obj_in_state(state, obj, old_obj_state, new_obj_state, i) {
+		WARN_ON(obj->state != old_obj_state);
+
+		old_obj_state->state = state;
+		new_obj_state->state = NULL;
+
+		state->private_objs[i].state = old_obj_state;
+		obj->state = new_obj_state;
+	}
+
+	return 0;
 }
 EXPORT_SYMBOL(drm_atomic_helper_swap_state);
 
@@ -2556,13 +2750,13 @@ int drm_atomic_helper_disable_all(struct drm_device *dev,
 			goto free;
 	}
 
-	for_each_connector_in_state(state, conn, conn_state, i) {
+	for_each_new_connector_in_state(state, conn, conn_state, i) {
 		ret = drm_atomic_set_crtc_for_connector(conn_state, NULL);
 		if (ret < 0)
 			goto free;
 	}
 
-	for_each_plane_in_state(state, plane, plane_state, i) {
+	for_each_new_plane_in_state(state, plane, plane_state, i) {
 		ret = drm_atomic_set_crtc_for_plane(plane_state, NULL);
 		if (ret < 0)
 			goto free;
@@ -2928,12 +3122,11 @@ backoff:
 }
 EXPORT_SYMBOL(drm_atomic_helper_connector_set_property);
 
-static int page_flip_common(
-				struct drm_atomic_state *state,
-				struct drm_crtc *crtc,
-				struct drm_framebuffer *fb,
-				struct drm_pending_vblank_event *event,
-				uint32_t flags)
+static int page_flip_common(struct drm_atomic_state *state,
+			    struct drm_crtc *crtc,
+			    struct drm_framebuffer *fb,
+			    struct drm_pending_vblank_event *event,
+			    uint32_t flags)
 {
 	struct drm_plane *plane = crtc->primary;
 	struct drm_plane_state *plane_state;
@@ -3027,13 +3220,12 @@ EXPORT_SYMBOL(drm_atomic_helper_page_flip);
  * Returns:
  * Returns 0 on success, negative errno numbers on failure.
  */
-int drm_atomic_helper_page_flip_target(
-				struct drm_crtc *crtc,
-				struct drm_framebuffer *fb,
-				struct drm_pending_vblank_event *event,
-				uint32_t flags,
-				uint32_t target,
-				struct drm_modeset_acquire_ctx *ctx)
+int drm_atomic_helper_page_flip_target(struct drm_crtc *crtc,
+				       struct drm_framebuffer *fb,
+				       struct drm_pending_vblank_event *event,
+				       uint32_t flags,
+				       uint32_t target,
+				       struct drm_modeset_acquire_ctx *ctx)
 {
 	struct drm_plane *plane = crtc->primary;
 	struct drm_atomic_state *state;
@@ -3612,12 +3804,12 @@ int drm_atomic_helper_legacy_gamma_set(struct drm_crtc *crtc,
 				       struct drm_modeset_acquire_ctx *ctx)
 {
 	struct drm_device *dev = crtc->dev;
-	struct drm_mode_config *config = &dev->mode_config;
 	struct drm_atomic_state *state;
 	struct drm_crtc_state *crtc_state;
 	struct drm_property_blob *blob = NULL;
 	struct drm_color_lut *blob_data;
 	int i, ret = 0;
+	bool replaced;
 
 	state = drm_atomic_state_alloc(crtc->dev);
 	if (!state)
@@ -3648,20 +3840,10 @@ int drm_atomic_helper_legacy_gamma_set(struct drm_crtc *crtc,
 	}
 
 	/* Reset DEGAMMA_LUT and CTM properties. */
-	ret = drm_atomic_crtc_set_property(crtc, crtc_state,
-			config->degamma_lut_property, 0);
-	if (ret)
-		goto fail;
-
-	ret = drm_atomic_crtc_set_property(crtc, crtc_state,
-			config->ctm_property, 0);
-	if (ret)
-		goto fail;
-
-	ret = drm_atomic_crtc_set_property(crtc, crtc_state,
-			config->gamma_lut_property, blob->base.id);
-	if (ret)
-		goto fail;
+	replaced  = drm_property_replace_blob(&crtc_state->degamma_lut, NULL);
+	replaced |= drm_property_replace_blob(&crtc_state->ctm, NULL);
+	replaced |= drm_property_replace_blob(&crtc_state->gamma_lut, blob);
+	crtc_state->color_mgmt_changed |= replaced;
 
 	ret = drm_atomic_commit(state);
 
@@ -3671,3 +3853,18 @@ fail:
 	return ret;
 }
 EXPORT_SYMBOL(drm_atomic_helper_legacy_gamma_set);
+
+/**
+ * __drm_atomic_helper_private_obj_duplicate_state - copy atomic private state
+ * @obj: private object
+ * @state: new private object state
+ *
+ * Copies the atomic state from a private object's current state and resets
+ * inferred values. This is useful for drivers that subclass the private
+ * state.
+ */
+void __drm_atomic_helper_private_obj_duplicate_state(struct drm_private_obj *obj,
+						     struct drm_private_state *state)
+{
+	memcpy(state, obj->state, sizeof(*state));
+}
+EXPORT_SYMBOL(__drm_atomic_helper_private_obj_duplicate_state);
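
The async helpers above call into two new &drm_plane_helper_funcs hooks. A
hedged sketch of how a driver might wire them up for cursor-style updates; the
foo_* functions are illustrative, only the hook names and their
(plane, new_state) signatures come from drm_atomic_helper_async_check() and
drm_atomic_helper_async_commit() above:

#include <drm/drm_plane.h>
#include <drm/drm_modeset_helper_vtables.h>

static int foo_plane_atomic_async_check(struct drm_plane *plane,
					struct drm_plane_state *new_state)
{
	/* e.g. only allow in-place updates of an already enabled plane */
	if (!plane->state || !plane->state->crtc || !plane->state->fb)
		return -EINVAL;

	return 0;
}

static void foo_plane_atomic_async_update(struct drm_plane *plane,
					  struct drm_plane_state *new_state)
{
	/*
	 * Async commits do not swap states: patch the current state in
	 * place (a real driver would also handle a changed framebuffer)
	 * and then program the hardware, e.g. the cursor registers.
	 */
	plane->state->crtc_x = new_state->crtc_x;
	plane->state->crtc_y = new_state->crtc_y;
	plane->state->src_x = new_state->src_x;
	plane->state->src_y = new_state->src_y;
}

static const struct drm_plane_helper_funcs foo_plane_helper_funcs = {
	/* the regular .atomic_check/.atomic_update hooks are omitted here */
	.atomic_async_check = foo_plane_atomic_async_check,
	.atomic_async_update = foo_plane_atomic_async_update,
};

A driver registers these with drm_plane_helper_add(); drm_atomic_helper_check()
above then marks legacy cursor updates as async whenever
drm_atomic_helper_async_check() accepts them.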

+ 3 - 0
drivers/gpu/drm/drm_color_mgmt.c

@@ -128,6 +128,9 @@ EXPORT_SYMBOL(drm_color_lut_extract);
  * optional. The gamma and degamma properties are only attached if
  * their size is not 0 and ctm_property is only attached if has_ctm is
  * true.
+ *
+ * Drivers should use drm_atomic_helper_legacy_gamma_set() to implement the
+ * legacy &drm_crtc_funcs.gamma_set callback.
  */
 void drm_crtc_enable_color_mgmt(struct drm_crtc *crtc,
 				uint degamma_lut_size,
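
As the added kerneldoc recommends, atomic drivers can route the legacy gamma
ioctl through drm_atomic_helper_legacy_gamma_set(), which now replaces the
GAMMA_LUT/CTM blobs directly via drm_property_replace_blob() (see the
drm_atomic_helper.c hunk above). A hedged sketch of the wiring; foo_crtc_funcs,
foo_crtc_color_init() and the 256-entry LUT size are illustrative:

#include <drm/drm_atomic_helper.h>
#include <drm/drm_crtc.h>
#include <drm/drm_color_mgmt.h>

static const struct drm_crtc_funcs foo_crtc_funcs = {
	.set_config		= drm_atomic_helper_set_config,
	.page_flip		= drm_atomic_helper_page_flip,
	.gamma_set		= drm_atomic_helper_legacy_gamma_set,
	.reset			= drm_atomic_helper_crtc_reset,
	.atomic_duplicate_state	= drm_atomic_helper_crtc_duplicate_state,
	.atomic_destroy_state	= drm_atomic_helper_crtc_destroy_state,
	.destroy		= drm_crtc_cleanup,
};

/* At CRTC init time, expose a gamma LUT and a CTM (sizes illustrative). */
static int foo_crtc_color_init(struct drm_crtc *crtc)
{
	int ret;

	ret = drm_mode_crtc_set_gamma_size(crtc, 256);
	if (ret)
		return ret;

	drm_crtc_enable_color_mgmt(crtc, 0, true, 256);
	return 0;
}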

+ 44 - 15
drivers/gpu/drm/drm_debugfs_crc.c

@@ -136,21 +136,51 @@ static int crtc_crc_data_count(struct drm_crtc_crc *crc)
 	return CIRC_CNT(crc->head, crc->tail, DRM_CRC_ENTRIES_NR);
 }
 
+static void crtc_crc_cleanup(struct drm_crtc_crc *crc)
+{
+	kfree(crc->entries);
+	crc->entries = NULL;
+	crc->head = 0;
+	crc->tail = 0;
+	crc->values_cnt = 0;
+	crc->opened = false;
+}
+
 static int crtc_crc_open(struct inode *inode, struct file *filep)
 {
 	struct drm_crtc *crtc = inode->i_private;
 	struct drm_crtc_crc *crc = &crtc->crc;
 	struct drm_crtc_crc_entry *entries = NULL;
 	size_t values_cnt;
-	int ret;
+	int ret = 0;
 
-	if (crc->opened)
-		return -EBUSY;
+	if (drm_drv_uses_atomic_modeset(crtc->dev)) {
+		ret = drm_modeset_lock_interruptible(&crtc->mutex, NULL);
+		if (ret)
+			return ret;
+
+		if (!crtc->state->active)
+			ret = -EIO;
+		drm_modeset_unlock(&crtc->mutex);
+
+		if (ret)
+			return ret;
+	}
+
+	spin_lock_irq(&crc->lock);
+	if (!crc->opened)
+		crc->opened = true;
+	else
+		ret = -EBUSY;
+	spin_unlock_irq(&crc->lock);
 
-	ret = crtc->funcs->set_crc_source(crtc, crc->source, &values_cnt);
 	if (ret)
 		return ret;
 
+	ret = crtc->funcs->set_crc_source(crtc, crc->source, &values_cnt);
+	if (ret)
+		goto err;
+
 	if (WARN_ON(values_cnt > DRM_MAX_CRC_NR)) {
 		ret = -EINVAL;
 		goto err_disable;
@@ -170,7 +200,6 @@ static int crtc_crc_open(struct inode *inode, struct file *filep)
 	spin_lock_irq(&crc->lock);
 	crc->entries = entries;
 	crc->values_cnt = values_cnt;
-	crc->opened = true;
 
 	/*
 	 * Only return once we got a first frame, so userspace doesn't have to
@@ -182,12 +211,17 @@ static int crtc_crc_open(struct inode *inode, struct file *filep)
 						crc->lock);
 	spin_unlock_irq(&crc->lock);
 
-	WARN_ON(ret);
+	if (ret)
+		goto err_disable;
 
 	return 0;
 
 err_disable:
 	crtc->funcs->set_crc_source(crtc, NULL, &values_cnt);
+err:
+	spin_lock_irq(&crc->lock);
+	crtc_crc_cleanup(crc);
+	spin_unlock_irq(&crc->lock);
 	return ret;
 }
 
@@ -197,17 +231,12 @@ static int crtc_crc_release(struct inode *inode, struct file *filep)
 	struct drm_crtc_crc *crc = &crtc->crc;
 	size_t values_cnt;
 
+	crtc->funcs->set_crc_source(crtc, NULL, &values_cnt);
+
 	spin_lock_irq(&crc->lock);
-	kfree(crc->entries);
-	crc->entries = NULL;
-	crc->head = 0;
-	crc->tail = 0;
-	crc->values_cnt = 0;
-	crc->opened = false;
+	crtc_crc_cleanup(crc);
 	spin_unlock_irq(&crc->lock);
 
-	crtc->funcs->set_crc_source(crtc, NULL, &values_cnt);
-
 	return 0;
 }
 
@@ -334,7 +363,7 @@ int drm_crtc_add_crc_entry(struct drm_crtc *crtc, bool has_frame,
 	spin_lock(&crc->lock);
 
 	/* Caller may not have noticed yet that userspace has stopped reading */
-	if (!crc->opened) {
+	if (!crc->entries) {
 		spin_unlock(&crc->lock);
 		return -EINVAL;
 	}
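The reworked open path claims the single-opener flag under crc->lock before touching the hardware and drops that claim again on any later failure. A minimal userspace analog of the claim-and-unwind pattern (all names here are illustrative, not part of the DRM API):

#include <errno.h>
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static bool opened;

static int fake_set_source(void)
{
	return 0;	/* pretend hardware setup succeeded */
}

static int fake_open(void)
{
	int ret = 0;

	/* Claim exclusive access first, under the lock. */
	pthread_mutex_lock(&lock);
	if (!opened)
		opened = true;
	else
		ret = -EBUSY;
	pthread_mutex_unlock(&lock);

	if (ret)
		return ret;

	ret = fake_set_source();
	if (ret) {
		/* On failure, drop the exclusive claim again. */
		pthread_mutex_lock(&lock);
		opened = false;
		pthread_mutex_unlock(&lock);
	}

	return ret;
}

int main(void)
{
	int first = fake_open();
	int second = fake_open();

	printf("first open: %d, second open: %d\n", first, second);
	return 0;
}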

+ 44 - 45
drivers/gpu/drm/drm_dp_mst_topology.c

@@ -31,6 +31,8 @@
 #include <drm/drmP.h>
 
 #include <drm/drm_fixed.h>
+#include <drm/drm_atomic.h>
+#include <drm/drm_atomic_helper.h>
 
 /**
  * DOC: dp mst helper
@@ -1335,15 +1337,17 @@ static void drm_dp_mst_link_probe_work(struct work_struct *work)
 static bool drm_dp_validate_guid(struct drm_dp_mst_topology_mgr *mgr,
 				 u8 *guid)
 {
-	static u8 zero_guid[16];
+	u64 salt;
 
-	if (!memcmp(guid, zero_guid, 16)) {
-		u64 salt = get_jiffies_64();
-		memcpy(&guid[0], &salt, sizeof(u64));
-		memcpy(&guid[8], &salt, sizeof(u64));
-		return false;
-	}
-	return true;
+	if (memchr_inv(guid, 0, 16))
+		return true;
+
+	salt = get_jiffies_64();
+
+	memcpy(&guid[0], &salt, sizeof(u64));
+	memcpy(&guid[8], &salt, sizeof(u64));
+
+	return false;
 }
 
 #if 0
@@ -2515,8 +2519,8 @@ int drm_dp_atomic_find_vcpi_slots(struct drm_atomic_state *state,
 	int req_slots;
 
 	topology_state = drm_atomic_get_mst_topology_state(state, mgr);
-	if (topology_state == NULL)
-		return -ENOMEM;
+	if (IS_ERR(topology_state))
+		return PTR_ERR(topology_state);
 
 	port = drm_dp_get_validated_port_ref(mgr, port);
 	if (port == NULL)
@@ -2555,8 +2559,8 @@ int drm_dp_atomic_release_vcpi_slots(struct drm_atomic_state *state,
 	struct drm_dp_mst_topology_state *topology_state;
 
 	topology_state = drm_atomic_get_mst_topology_state(state, mgr);
-	if (topology_state == NULL)
-		return -ENOMEM;
+	if (IS_ERR(topology_state))
+		return PTR_ERR(topology_state);
 
 	/* We cannot rely on port->vcpi.num_slots to update
 	 * topology_state->avail_slots as the port may not exist if the parent
@@ -2992,41 +2996,32 @@ static void drm_dp_destroy_connector_work(struct work_struct *work)
 		(*mgr->cbs->hotplug)(mgr);
 }
 
-void *drm_dp_mst_duplicate_state(struct drm_atomic_state *state, void *obj)
+static struct drm_private_state *
+drm_dp_mst_duplicate_state(struct drm_private_obj *obj)
 {
-	struct drm_dp_mst_topology_mgr *mgr = obj;
-	struct drm_dp_mst_topology_state *new_mst_state;
+	struct drm_dp_mst_topology_state *state;
 
-	if (WARN_ON(!mgr->state))
+	state = kmemdup(obj->state, sizeof(*state), GFP_KERNEL);
+	if (!state)
 		return NULL;
 
-	new_mst_state = kmemdup(mgr->state, sizeof(*new_mst_state), GFP_KERNEL);
-	if (new_mst_state)
-		new_mst_state->state = state;
-	return new_mst_state;
-}
-
-void drm_dp_mst_swap_state(void *obj, void **obj_state_ptr)
-{
-	struct drm_dp_mst_topology_mgr *mgr = obj;
-	struct drm_dp_mst_topology_state **topology_state_ptr;
-
-	topology_state_ptr = (struct drm_dp_mst_topology_state **)obj_state_ptr;
+	__drm_atomic_helper_private_obj_duplicate_state(obj, &state->base);
 
-	mgr->state->state = (*topology_state_ptr)->state;
-	swap(*topology_state_ptr, mgr->state);
-	mgr->state->state = NULL;
+	return &state->base;
 }
 
-void drm_dp_mst_destroy_state(void *obj_state)
+static void drm_dp_mst_destroy_state(struct drm_private_obj *obj,
+				     struct drm_private_state *state)
 {
-	kfree(obj_state);
+	struct drm_dp_mst_topology_state *mst_state =
+		to_dp_mst_topology_state(state);
+
+	kfree(mst_state);
 }
 
 static const struct drm_private_state_funcs mst_state_funcs = {
-	.duplicate_state = drm_dp_mst_duplicate_state,
-	.swap_state = drm_dp_mst_swap_state,
-	.destroy_state = drm_dp_mst_destroy_state,
+	.atomic_duplicate_state = drm_dp_mst_duplicate_state,
+	.atomic_destroy_state = drm_dp_mst_destroy_state,
 };
 
 /**
@@ -3050,8 +3045,7 @@ struct drm_dp_mst_topology_state *drm_atomic_get_mst_topology_state(struct drm_a
 	struct drm_device *dev = mgr->dev;
 
 	WARN_ON(!drm_modeset_is_locked(&dev->mode_config.connection_mutex));
-	return drm_atomic_get_private_obj_state(state, mgr,
-						&mst_state_funcs);
+	return to_dp_mst_topology_state(drm_atomic_get_private_obj_state(state, &mgr->base));
 }
 EXPORT_SYMBOL(drm_atomic_get_mst_topology_state);
 
@@ -3071,6 +3065,8 @@ int drm_dp_mst_topology_mgr_init(struct drm_dp_mst_topology_mgr *mgr,
 				 int max_dpcd_transaction_bytes,
 				 int max_payloads, int conn_base_id)
 {
+	struct drm_dp_mst_topology_state *mst_state;
+
 	mutex_init(&mgr->lock);
 	mutex_init(&mgr->qlock);
 	mutex_init(&mgr->payload_lock);
@@ -3099,14 +3095,18 @@ int drm_dp_mst_topology_mgr_init(struct drm_dp_mst_topology_mgr *mgr,
 	if (test_calc_pbn_mode() < 0)
 		DRM_ERROR("MST PBN self-test failed\n");
 
-	mgr->state = kzalloc(sizeof(*mgr->state), GFP_KERNEL);
-	if (mgr->state == NULL)
+	mst_state = kzalloc(sizeof(*mst_state), GFP_KERNEL);
+	if (mst_state == NULL)
 		return -ENOMEM;
-	mgr->state->mgr = mgr;
+
+	mst_state->mgr = mgr;
 
 	/* max. time slots - one slot for MTP header */
-	mgr->state->avail_slots = 63;
-	mgr->funcs = &mst_state_funcs;
+	mst_state->avail_slots = 63;
+
+	drm_atomic_private_obj_init(&mgr->base,
+				    &mst_state->base,
+				    &mst_state_funcs);
 
 	return 0;
 }
@@ -3128,8 +3128,7 @@ void drm_dp_mst_topology_mgr_destroy(struct drm_dp_mst_topology_mgr *mgr)
 	mutex_unlock(&mgr->payload_lock);
 	mgr->dev = NULL;
 	mgr->aux = NULL;
-	kfree(mgr->state);
-	mgr->state = NULL;
+	drm_atomic_private_obj_fini(&mgr->base);
 	mgr->funcs = NULL;
 }
 EXPORT_SYMBOL(drm_dp_mst_topology_mgr_destroy);
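For driver code that looks up the topology state itself, the switch from NULL to ERR_PTR return values changes the calling convention; a minimal sketch of the new pattern, mirroring drm_dp_atomic_find_vcpi_slots() above (the surrounding check function is hypothetical):

/* Hypothetical caller: fetch the MST topology state from a driver's
 * atomic check and propagate ERR_PTR errors, as the helpers above do. */
static int example_mst_atomic_check(struct drm_atomic_state *state,
				    struct drm_dp_mst_topology_mgr *mgr)
{
	struct drm_dp_mst_topology_state *mst_state;

	mst_state = drm_atomic_get_mst_topology_state(state, mgr);
	if (IS_ERR(mst_state))
		return PTR_ERR(mst_state);

	/* e.g. compare mst_state->avail_slots against the required slots */
	return 0;
}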

+ 16 - 0
drivers/gpu/drm/drm_drv.c

@@ -63,6 +63,15 @@ module_param_named(debug, drm_debug, int, 0600);
 static DEFINE_SPINLOCK(drm_minor_lock);
 static struct idr drm_minors_idr;
 
+/*
+ * If the drm core fails to init for whatever reason,
+ * we should prevent any drivers from registering with it.
+ * It's best to check this at drm_dev_init(), as some drivers
+ * prefer to embed struct drm_device into their own device
+ * structure and call drm_dev_init() themselves.
+ */
+static bool drm_core_init_complete = false;
+
 static struct dentry *drm_debugfs_root;
 
 #define DRM_PRINTK_FMT "[" DRM_NAME ":%s]%s %pV"
@@ -484,6 +493,11 @@ int drm_dev_init(struct drm_device *dev,
 {
 	int ret;
 
+	if (!drm_core_init_complete) {
+		DRM_ERROR("DRM core is not initialized\n");
+		return -ENODEV;
+	}
+
 	kref_init(&dev->ref);
 	dev->dev = parent;
 	dev->driver = driver;
@@ -966,6 +980,8 @@ static int __init drm_core_init(void)
 	if (ret < 0)
 		goto error;
 
+	drm_core_init_complete = true;
+
 	DRM_DEBUG("Initialized\n");
 	return 0;
 

+ 431 - 9
drivers/gpu/drm/drm_edid.c

@@ -1006,6 +1006,221 @@ static const struct drm_display_mode edid_cea_modes[] = {
 		   2492, 2640, 0, 1080, 1084, 1089, 1125, 0,
 		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
 	 .vrefresh = 100, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
+	/* 65 - 1280x720@24Hz */
+	{ DRM_MODE("1280x720", DRM_MODE_TYPE_DRIVER, 59400, 1280, 3040,
+		   3080, 3300, 0, 720, 725, 730, 750, 0,
+		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
+	  .vrefresh = 24, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, },
+	/* 66 - 1280x720@25Hz */
+	{ DRM_MODE("1280x720", DRM_MODE_TYPE_DRIVER, 74250, 1280, 3700,
+		   3740, 3960, 0, 720, 725, 730, 750, 0,
+		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
+	  .vrefresh = 25, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, },
+	/* 67 - 1280x720@30Hz */
+	{ DRM_MODE("1280x720", DRM_MODE_TYPE_DRIVER, 74250, 1280, 3040,
+		   3080, 3300, 0, 720, 725, 730, 750, 0,
+		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
+	  .vrefresh = 30, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, },
+	/* 68 - 1280x720@50Hz */
+	{ DRM_MODE("1280x720", DRM_MODE_TYPE_DRIVER, 74250, 1280, 1720,
+		   1760, 1980, 0, 720, 725, 730, 750, 0,
+		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
+	  .vrefresh = 50, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, },
+	/* 69 - 1280x720@60Hz */
+	{ DRM_MODE("1280x720", DRM_MODE_TYPE_DRIVER, 74250, 1280, 1390,
+		   1430, 1650, 0, 720, 725, 730, 750, 0,
+		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
+	  .vrefresh = 60, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, },
+	/* 70 - 1280x720@100Hz */
+	{ DRM_MODE("1280x720", DRM_MODE_TYPE_DRIVER, 148500, 1280, 1720,
+		   1760, 1980, 0, 720, 725, 730, 750, 0,
+		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
+	  .vrefresh = 100, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, },
+	/* 71 - 1280x720@120Hz */
+	{ DRM_MODE("1280x720", DRM_MODE_TYPE_DRIVER, 148500, 1280, 1390,
+		   1430, 1650, 0, 720, 725, 730, 750, 0,
+		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
+	  .vrefresh = 120, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, },
+	/* 72 - 1920x1080@24Hz */
+	{ DRM_MODE("1920x1080", DRM_MODE_TYPE_DRIVER, 74250, 1920, 2558,
+		   2602, 2750, 0, 1080, 1084, 1089, 1125, 0,
+		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
+	  .vrefresh = 24, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, },
+	/* 73 - 1920x1080@25Hz */
+	{ DRM_MODE("1920x1080", DRM_MODE_TYPE_DRIVER, 74250, 1920, 2448,
+		   2492, 2640, 0, 1080, 1084, 1089, 1125, 0,
+		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
+	  .vrefresh = 25, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, },
+	/* 74 - 1920x1080@30Hz */
+	{ DRM_MODE("1920x1080", DRM_MODE_TYPE_DRIVER, 74250, 1920, 2008,
+		   2052, 2200, 0, 1080, 1084, 1089, 1125, 0,
+		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
+	  .vrefresh = 30, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, },
+	/* 75 - 1920x1080@50Hz */
+	{ DRM_MODE("1920x1080", DRM_MODE_TYPE_DRIVER, 148500, 1920, 2448,
+		   2492, 2640, 0, 1080, 1084, 1089, 1125, 0,
+		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
+	  .vrefresh = 50, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, },
+	/* 76 - 1920x1080@60Hz */
+	{ DRM_MODE("1920x1080", DRM_MODE_TYPE_DRIVER, 148500, 1920, 2008,
+		   2052, 2200, 0, 1080, 1084, 1089, 1125, 0,
+		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
+	  .vrefresh = 60, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, },
+	/* 77 - 1920x1080@100Hz */
+	{ DRM_MODE("1920x1080", DRM_MODE_TYPE_DRIVER, 297000, 1920, 2448,
+		   2492, 2640, 0, 1080, 1084, 1089, 1125, 0,
+		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
+	  .vrefresh = 100, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, },
+	/* 78 - 1920x1080@120Hz */
+	{ DRM_MODE("1920x1080", DRM_MODE_TYPE_DRIVER, 297000, 1920, 2008,
+		   2052, 2200, 0, 1080, 1084, 1089, 1125, 0,
+		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
+	  .vrefresh = 120, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, },
+	/* 79 - 1680x720@24Hz */
+	{ DRM_MODE("1680x720", DRM_MODE_TYPE_DRIVER, 59400, 1680, 3040,
+		   3080, 3300, 0, 720, 725, 730, 750, 0,
+		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
+	  .vrefresh = 24, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, },
+	/* 80 - 1680x720@25Hz */
+	{ DRM_MODE("1680x720", DRM_MODE_TYPE_DRIVER, 59400, 1680, 2908,
+		   2948, 3168, 0, 720, 725, 730, 750, 0,
+		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
+	  .vrefresh = 25, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, },
+	/* 81 - 1680x720@30Hz */
+	{ DRM_MODE("1680x720", DRM_MODE_TYPE_DRIVER, 59400, 1680, 2380,
+		   2420, 2640, 0, 720, 725, 730, 750, 0,
+		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
+	  .vrefresh = 30, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, },
+	/* 82 - 1680x720@50Hz */
+	{ DRM_MODE("1680x720", DRM_MODE_TYPE_DRIVER, 82500, 1680, 1940,
+		   1980, 2200, 0, 720, 725, 730, 750, 0,
+		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
+	  .vrefresh = 50, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, },
+	/* 83 - 1680x720@60Hz */
+	{ DRM_MODE("1680x720", DRM_MODE_TYPE_DRIVER, 99000, 1680, 1940,
+		   1980, 2200, 0, 720, 725, 730, 750, 0,
+		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
+	  .vrefresh = 60, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, },
+	/* 84 - 1680x720@100Hz */
+	{ DRM_MODE("1680x720", DRM_MODE_TYPE_DRIVER, 165000, 1680, 1740,
+		   1780, 2000, 0, 720, 725, 730, 825, 0,
+		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
+	  .vrefresh = 100, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, },
+	/* 85 - 1680x720@120Hz */
+	{ DRM_MODE("1680x720", DRM_MODE_TYPE_DRIVER, 198000, 1680, 1740,
+		   1780, 2000, 0, 720, 725, 730, 825, 0,
+		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
+	  .vrefresh = 120, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, },
+	/* 86 - 2560x1080@24Hz */
+	{ DRM_MODE("2560x1080", DRM_MODE_TYPE_DRIVER, 99000, 2560, 3558,
+		   3602, 3750, 0, 1080, 1084, 1089, 1100, 0,
+		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
+	  .vrefresh = 24, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, },
+	/* 87 - 2560x1080@25Hz */
+	{ DRM_MODE("2560x1080", DRM_MODE_TYPE_DRIVER, 90000, 2560, 3008,
+		   3052, 3200, 0, 1080, 1084, 1089, 1125, 0,
+		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
+	  .vrefresh = 25, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, },
+	/* 88 - 2560x1080@30Hz */
+	{ DRM_MODE("2560x1080", DRM_MODE_TYPE_DRIVER, 118800, 2560, 3328,
+		   3372, 3520, 0, 1080, 1084, 1089, 1125, 0,
+		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
+	  .vrefresh = 30, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, },
+	/* 89 - 2560x1080@50Hz */
+	{ DRM_MODE("2560x1080", DRM_MODE_TYPE_DRIVER, 185625, 2560, 3108,
+		   3152, 3300, 0, 1080, 1084, 1089, 1125, 0,
+		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
+	  .vrefresh = 50, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, },
+	/* 90 - 2560x1080@60Hz */
+	{ DRM_MODE("2560x1080", DRM_MODE_TYPE_DRIVER, 198000, 2560, 2808,
+		   2852, 3000, 0, 1080, 1084, 1089, 1100, 0,
+		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
+	  .vrefresh = 60, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, },
+	/* 91 - 2560x1080@100Hz */
+	{ DRM_MODE("2560x1080", DRM_MODE_TYPE_DRIVER, 371250, 2560, 2778,
+		   2822, 2970, 0, 1080, 1084, 1089, 1250, 0,
+		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
+	  .vrefresh = 100, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, },
+	/* 92 - 2560x1080@120Hz */
+	{ DRM_MODE("2560x1080", DRM_MODE_TYPE_DRIVER, 495000, 2560, 3108,
+		   3152, 3300, 0, 1080, 1084, 1089, 1250, 0,
+		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
+	  .vrefresh = 120, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, },
+	/* 93 - 3840x2160p@24Hz 16:9 */
+	{ DRM_MODE("3840x2160", DRM_MODE_TYPE_DRIVER, 297000, 3840, 5116,
+		   5204, 5500, 0, 2160, 2168, 2178, 2250, 0,
+		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
+	  .vrefresh = 24, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
+	/* 94 - 3840x2160p@25Hz 16:9 */
+	{ DRM_MODE("3840x2160", DRM_MODE_TYPE_DRIVER, 297000, 3840, 4896,
+		   4984, 5280, 0, 2160, 2168, 2178, 2250, 0,
+		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
+	  .vrefresh = 25, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
+	/* 95 - 3840x2160p@30Hz 16:9 */
+	{ DRM_MODE("3840x2160", DRM_MODE_TYPE_DRIVER, 297000, 3840, 4016,
+		   4104, 4400, 0, 2160, 2168, 2178, 2250, 0,
+		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
+	  .vrefresh = 30, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
+	/* 96 - 3840x2160p@50Hz 16:9 */
+	{ DRM_MODE("3840x2160", DRM_MODE_TYPE_DRIVER, 594000, 3840, 4896,
+		   4984, 5280, 0, 2160, 2168, 2178, 2250, 0,
+		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
+	  .vrefresh = 50, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
+	/* 97 - 3840x2160p@60Hz 16:9 */
+	{ DRM_MODE("3840x2160", DRM_MODE_TYPE_DRIVER, 594000, 3840, 4016,
+		   4104, 4400, 0, 2160, 2168, 2178, 2250, 0,
+		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
+	  .vrefresh = 60, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
+	/* 98 - 4096x2160p@24Hz 256:135 */
+	{ DRM_MODE("4096x2160", DRM_MODE_TYPE_DRIVER, 297000, 4096, 5116,
+		   5204, 5500, 0, 2160, 2168, 2178, 2250, 0,
+		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
+	  .vrefresh = 24, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_256_135, },
+	/* 99 - 4096x2160p@25Hz 256:135 */
+	{ DRM_MODE("4096x2160", DRM_MODE_TYPE_DRIVER, 297000, 4096, 5064,
+		   5152, 5280, 0, 2160, 2168, 2178, 2250, 0,
+		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
+	  .vrefresh = 25, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_256_135, },
+	/* 100 - 4096x2160p@30Hz 256:135 */
+	{ DRM_MODE("4096x2160", DRM_MODE_TYPE_DRIVER, 297000, 4096, 4184,
+		   4272, 4400, 0, 2160, 2168, 2178, 2250, 0,
+		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
+	  .vrefresh = 30, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_256_135, },
+	/* 101 - 4096x2160p@50Hz 256:135 */
+	{ DRM_MODE("4096x2160", DRM_MODE_TYPE_DRIVER, 594000, 4096, 5064,
+		   5152, 5280, 0, 2160, 2168, 2178, 2250, 0,
+		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
+	  .vrefresh = 50, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_256_135, },
+	/* 102 - 4096x2160p@60Hz 256:135 */
+	{ DRM_MODE("4096x2160", DRM_MODE_TYPE_DRIVER, 594000, 4096, 4184,
+		   4272, 4400, 0, 2160, 2168, 2178, 2250, 0,
+		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
+	  .vrefresh = 60, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_256_135, },
+	/* 103 - 3840x2160p@24Hz 64:27 */
+	{ DRM_MODE("3840x2160", DRM_MODE_TYPE_DRIVER, 297000, 3840, 5116,
+		   5204, 5500, 0, 2160, 2168, 2178, 2250, 0,
+		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
+	  .vrefresh = 24, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, },
+	/* 104 - 3840x2160p@25Hz 64:27 */
+	{ DRM_MODE("3840x2160", DRM_MODE_TYPE_DRIVER, 297000, 3840, 4896,
+		   4984, 5280, 0, 2160, 2168, 2178, 2250, 0,
+		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
+	  .vrefresh = 25, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, },
+	/* 105 - 3840x2160p@30Hz 64:27 */
+	{ DRM_MODE("3840x2160", DRM_MODE_TYPE_DRIVER, 297000, 3840, 4016,
+		   4104, 4400, 0, 2160, 2168, 2178, 2250, 0,
+		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
+	  .vrefresh = 30, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, },
+	/* 106 - 3840x2160p@50Hz 64:27 */
+	{ DRM_MODE("3840x2160", DRM_MODE_TYPE_DRIVER, 594000, 3840, 4896,
+		   4984, 5280, 0, 2160, 2168, 2178, 2250, 0,
+		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
+	  .vrefresh = 50, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, },
+	/* 107 - 3840x2160p@60Hz 64:27 */
+	{ DRM_MODE("3840x2160", DRM_MODE_TYPE_DRIVER, 594000, 3840, 4016,
+		   4104, 4400, 0, 2160, 2168, 2178, 2250, 0,
+		   DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
+	  .vrefresh = 60, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, },
 };
 
 /*
@@ -2566,7 +2781,10 @@ add_detailed_modes(struct drm_connector *connector, struct edid *edid,
 #define VIDEO_BLOCK     0x02
 #define VENDOR_BLOCK    0x03
 #define SPEAKER_BLOCK	0x04
-#define VIDEO_CAPABILITY_BLOCK	0x07
+#define USE_EXTENDED_TAG 0x07
+#define EXT_VIDEO_CAPABILITY_BLOCK 0x00
+#define EXT_VIDEO_DATA_BLOCK_420	0x0E
+#define EXT_VIDEO_CAP_BLOCK_Y420CMDB 0x0F
 #define EDID_BASIC_AUDIO	(1 << 6)
 #define EDID_CEA_YCRCB444	(1 << 5)
 #define EDID_CEA_YCRCB422	(1 << 4)
@@ -2902,6 +3120,15 @@ add_alternate_cea_modes(struct drm_connector *connector, struct edid *edid)
 	return modes;
 }
 
+static u8 svd_to_vic(u8 svd)
+{
+	/* Bits 0-6 carry the VIC, bit 7 is the native mode indicator. */
+	if ((svd >= 1 &&  svd <= 64) || (svd >= 129 && svd <= 192))
+		return svd & 127;
+
+	return svd;
+}
+
 static struct drm_display_mode *
 drm_display_mode_from_vic_index(struct drm_connector *connector,
 				const u8 *video_db, u8 video_len,
@@ -2915,7 +3142,7 @@ drm_display_mode_from_vic_index(struct drm_connector *connector,
 		return NULL;
 
 	/* CEA modes are numbered 1..127 */
-	vic = (video_db[video_index] & 127);
+	vic = svd_to_vic(video_db[video_index]);
 	if (!drm_valid_cea_vic(vic))
 		return NULL;
 
@@ -2928,15 +3155,85 @@ drm_display_mode_from_vic_index(struct drm_connector *connector,
 	return newmode;
 }
 
+/*
+ * do_y420vdb_modes - Parse YCBCR 4:2:0-only modes
+ * @connector: connector corresponding to the HDMI sink
+ * @svds: start of the data block of the CEA YCBCR 420 VDB
+ * @svds_len: length of the CEA YCBCR 420 VDB
+ *
+ * Parse the CEA-861-F YCBCR 420 Video Data Block (Y420VDB), which
+ * lists modes that can only be supported in the YCBCR 4:2:0 output
+ * format.
+ */
+static int do_y420vdb_modes(struct drm_connector *connector,
+			    const u8 *svds, u8 svds_len)
+{
+	int modes = 0, i;
+	struct drm_device *dev = connector->dev;
+	struct drm_display_info *info = &connector->display_info;
+	struct drm_hdmi_info *hdmi = &info->hdmi;
+
+	for (i = 0; i < svds_len; i++) {
+		u8 vic = svd_to_vic(svds[i]);
+		struct drm_display_mode *newmode;
+
+		if (!drm_valid_cea_vic(vic))
+			continue;
+
+		newmode = drm_mode_duplicate(dev, &edid_cea_modes[vic]);
+		if (!newmode)
+			break;
+		bitmap_set(hdmi->y420_vdb_modes, vic, 1);
+		drm_mode_probed_add(connector, newmode);
+		modes++;
+	}
+
+	if (modes > 0)
+		info->color_formats |= DRM_COLOR_FORMAT_YCRCB420;
+	return modes;
+}
+
+/*
+ * drm_add_cmdb_modes - Add a YCBCR 420 mode to the capability bitmap
+ * @connector: connector corresponding to the HDMI sink
+ * @svd: CEA short video descriptor of the mode to be added to the map
+ *
+ * Makes an entry for the video mode in the YCBCR 420 capability bitmap.
+ */
+static void
+drm_add_cmdb_modes(struct drm_connector *connector, u8 svd)
+{
+	u8 vic = svd_to_vic(svd);
+	struct drm_hdmi_info *hdmi = &connector->display_info.hdmi;
+
+	if (!drm_valid_cea_vic(vic))
+		return;
+
+	bitmap_set(hdmi->y420_cmdb_modes, vic, 1);
+}
+
 static int
 do_cea_modes(struct drm_connector *connector, const u8 *db, u8 len)
 {
 	int i, modes = 0;
+	struct drm_hdmi_info *hdmi = &connector->display_info.hdmi;
 
 	for (i = 0; i < len; i++) {
 		struct drm_display_mode *mode;
 		mode = drm_display_mode_from_vic_index(connector, db, len, i);
 		if (mode) {
+			/*
+			 * The YCBCR 420 capability map block is a bitmap
+			 * indexed by position in the CEA VDB: if bit N is
+			 * set, the Nth mode in the VDB also supports
+			 * YCBCR 4:2:0 output (in addition to RGB, YCBCR 444
+			 * etc.). For example, if bit 0 is set, the first
+			 * mode in the VDB supports YCBCR 420 output too.
+			 * Add YCBCR 420 modes only for HDMI 2.0 capable sinks.
+			 */
+			if (i < 64 && hdmi->y420_cmdb_map & (1ULL << i))
+				drm_add_cmdb_modes(connector, db[i]);
+
 			drm_mode_probed_add(connector, mode);
 			modes++;
 		}
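To make the SVD handling above concrete, here is a small self-contained sketch of the SVD-to-VIC mapping and the CMDB bit test used in do_cea_modes(); the sample values and main() are illustrative only:

/* Standalone illustration: SVDs 1-64 and 129-192 carry the native-mode
 * flag in bit 7, everything else (CEA-861-F extended VICs) passes
 * through unchanged; bit i of the Y420CMDB map marks the i-th SVD in
 * the video data block as YCBCR 4:2:0 capable. */
#include <stdint.h>
#include <stdio.h>

static uint8_t svd_to_vic(uint8_t svd)
{
	if ((svd >= 1 && svd <= 64) || (svd >= 129 && svd <= 192))
		return svd & 127;
	return svd;
}

int main(void)
{
	uint64_t y420_cmdb_map = 0x5;	/* SVD indexes 0 and 2 */
	uint8_t svds[] = { 16 | 0x80, 95, 97 };	/* native 1080p60, 4k30, 4k60 */
	int i;

	for (i = 0; i < 3; i++)
		printf("svd 0x%02x -> vic %u, y420 capable: %s\n",
		       (unsigned int)svds[i],
		       (unsigned int)svd_to_vic(svds[i]),
		       (i < 64 && (y420_cmdb_map & (1ULL << i))) ? "yes" : "no");
	return 0;
}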
@@ -3217,6 +3514,12 @@ cea_db_payload_len(const u8 *db)
 	return db[0] & 0x1f;
 }
 
+static int
+cea_db_extended_tag(const u8 *db)
+{
+	return db[1];
+}
+
 static int
 cea_db_tag(const u8 *db)
 {
@@ -3272,9 +3575,77 @@ static bool cea_db_is_hdmi_forum_vsdb(const u8 *db)
 	return oui == HDMI_FORUM_IEEE_OUI;
 }
 
+static bool cea_db_is_y420cmdb(const u8 *db)
+{
+	if (cea_db_tag(db) != USE_EXTENDED_TAG)
+		return false;
+
+	if (!cea_db_payload_len(db))
+		return false;
+
+	if (cea_db_extended_tag(db) != EXT_VIDEO_CAP_BLOCK_Y420CMDB)
+		return false;
+
+	return true;
+}
+
+static bool cea_db_is_y420vdb(const u8 *db)
+{
+	if (cea_db_tag(db) != USE_EXTENDED_TAG)
+		return false;
+
+	if (!cea_db_payload_len(db))
+		return false;
+
+	if (cea_db_extended_tag(db) != EXT_VIDEO_DATA_BLOCK_420)
+		return false;
+
+	return true;
+}
+
 #define for_each_cea_db(cea, i, start, end) \
 	for ((i) = (start); (i) < (end) && (i) + cea_db_payload_len(&(cea)[(i)]) < (end); (i) += cea_db_payload_len(&(cea)[(i)]) + 1)
 
+static void drm_parse_y420cmdb_bitmap(struct drm_connector *connector,
+				      const u8 *db)
+{
+	struct drm_display_info *info = &connector->display_info;
+	struct drm_hdmi_info *hdmi = &info->hdmi;
+	u8 map_len = cea_db_payload_len(db) - 1;
+	u8 count;
+	u64 map = 0;
+
+	if (map_len == 0) {
+		/* All CEA modes support YCBCR 4:2:0 sampling too. */
+		hdmi->y420_cmdb_map = U64_MAX;
+		info->color_formats |= DRM_COLOR_FORMAT_YCRCB420;
+		return;
+	}
+
+	/*
+	 * This map indicates which of the existing CEA VDB modes can also
+	 * support YCBCR 420 output: if bit 0 is set, the first mode in the
+	 * VDB supports YCBCR 420 output too. Parse and keep this map before
+	 * parsing the VDB itself, to avoid walking the same block again
+	 * and again.
+	 *
+	 * The spec is not clear about the maximum possible size of this
+	 * block, so clamp the bitmap at 8 bytes. Each byte addresses 8 CEA
+	 * modes, so the map covers the first 8 * 8 = 64 SVDs.
+	 */
+	if (WARN_ON_ONCE(map_len > 8))
+		map_len = 8;
+
+	for (count = 0; count < map_len; count++)
+		map |= (u64)db[2 + count] << (8 * count);
+
+	if (map)
+		info->color_formats |= DRM_COLOR_FORMAT_YCRCB420;
+
+	hdmi->y420_cmdb_map = map;
+}
+
 static int
 add_cea_modes(struct drm_connector *connector, struct edid *edid)
 {
@@ -3297,10 +3668,16 @@ add_cea_modes(struct drm_connector *connector, struct edid *edid)
 				video = db + 1;
 				video_len = dbl;
 				modes += do_cea_modes(connector, video, dbl);
-			}
-			else if (cea_db_is_hdmi_vsdb(db)) {
+			} else if (cea_db_is_hdmi_vsdb(db)) {
 				hdmi = db;
 				hdmi_len = dbl;
+			} else if (cea_db_is_y420vdb(db)) {
+				const u8 *vdb420 = &db[2];
+
+				/* Add 4:2:0(only) modes present in EDID */
+				modes += do_y420vdb_modes(connector,
+							  vdb420,
+							  dbl - 1);
 			}
 		}
 	}
@@ -3793,8 +4170,10 @@ bool drm_rgb_quant_range_selectable(struct edid *edid)
 		return false;
 
 	for_each_cea_db(edid_ext, i, start, end) {
-		if (cea_db_tag(&edid_ext[i]) == VIDEO_CAPABILITY_BLOCK &&
-		    cea_db_payload_len(&edid_ext[i]) == 2) {
+		if (cea_db_tag(&edid_ext[i]) == USE_EXTENDED_TAG &&
+		    cea_db_payload_len(&edid_ext[i]) == 2 &&
+		    cea_db_extended_tag(&edid_ext[i]) ==
+			EXT_VIDEO_CAPABILITY_BLOCK) {
 			DRM_DEBUG_KMS("CEA VCDB 0x%02x\n", edid_ext[i + 2]);
 			return edid_ext[i + 2] & EDID_CEA_VCDB_QS;
 		}
@@ -3823,6 +4202,16 @@ drm_default_rgb_quant_range(const struct drm_display_mode *mode)
 }
 EXPORT_SYMBOL(drm_default_rgb_quant_range);
 
+static void drm_parse_ycbcr420_deep_color_info(struct drm_connector *connector,
+					       const u8 *db)
+{
+	u8 dc_mask;
+	struct drm_hdmi_info *hdmi = &connector->display_info.hdmi;
+
+	dc_mask = db[7] & DRM_EDID_YCBCR420_DC_MASK;
+	hdmi->y420_dc_modes |= dc_mask;
+}
+
 static void drm_parse_hdmi_forum_vsdb(struct drm_connector *connector,
 				 const u8 *hf_vsdb)
 {
@@ -3863,6 +4252,8 @@ static void drm_parse_hdmi_forum_vsdb(struct drm_connector *connector,
 				scdc->scrambling.low_rates = true;
 		}
 	}
+
+	drm_parse_ycbcr420_deep_color_info(connector, hf_vsdb);
 }
 
 static void drm_parse_hdmi_deep_color_info(struct drm_connector *connector,
@@ -3981,6 +4372,8 @@ static void drm_parse_cea_ext(struct drm_connector *connector,
 			drm_parse_hdmi_vsdb_video(connector, db);
 		if (cea_db_is_hdmi_forum_vsdb(db))
 			drm_parse_hdmi_forum_vsdb(connector, db);
+		if (cea_db_is_y420cmdb(db))
+			drm_parse_y420cmdb_bitmap(connector, db);
 	}
 }
 
@@ -4214,6 +4607,13 @@ int drm_add_edid_modes(struct drm_connector *connector, struct edid *edid)
 
 	quirks = edid_get_quirks(edid);
 
+	/*
+	 * CEA-861-F adds a YCBCR capability map block for HDMI 2.0 sinks.
+	 * To avoid parsing the same block twice, parse the display info
+	 * (including that map) before parsing the CEA modes.
+	 */
+	drm_add_display_info(connector, edid);
+
 	/*
 	 * EDID spec says modes should be preferred in this order:
 	 * - preferred detailed mode
@@ -4241,8 +4641,6 @@ int drm_add_edid_modes(struct drm_connector *connector, struct edid *edid)
 	if (quirks & (EDID_QUIRK_PREFER_LARGE_60 | EDID_QUIRK_PREFER_LARGE_75))
 		edid_fixup_preferred(connector, quirks);
 
-	drm_add_display_info(connector, edid);
-
 	if (quirks & EDID_QUIRK_FORCE_6BPC)
 		connector->display_info.bpc = 6;
 
@@ -4334,12 +4732,14 @@ EXPORT_SYMBOL(drm_set_preferred_mode);
  *                                              data from a DRM display mode
  * @frame: HDMI AVI infoframe
  * @mode: DRM display mode
+ * @is_hdmi2_sink: Sink is HDMI 2.0 compliant
  *
  * Return: 0 on success or a negative error code on failure.
  */
 int
 drm_hdmi_avi_infoframe_from_display_mode(struct hdmi_avi_infoframe *frame,
-					 const struct drm_display_mode *mode)
+					 const struct drm_display_mode *mode,
+					 bool is_hdmi2_sink)
 {
 	int err;
 
@@ -4355,6 +4755,28 @@ drm_hdmi_avi_infoframe_from_display_mode(struct hdmi_avi_infoframe *frame,
 
 	frame->video_code = drm_match_cea_mode(mode);
 
+	/*
+	 * The HDMI 1.4 VIC range is 1 <= VIC <= 64 (CEA-861-D), while the
+	 * HDMI 2.0 VIC range is 1 <= VIC <= 107 (CEA-861-F), so make sure
+	 * we don't break HDMI 1.4 sinks.
+	 */
+	if (!is_hdmi2_sink && frame->video_code > 64)
+		frame->video_code = 0;
+
+	/*
+	 * The HDMI spec says that if a mode is one of the HDMI 1.4b 4K
+	 * modes we should send its VIC in the vendor infoframe, otherwise
+	 * send the VIC in the AVI infoframe. Let's check whether this mode
+	 * is one of the HDMI 1.4b 4K modes.
+	 */
+	if (frame->video_code) {
+		u8 vendor_if_vic = drm_match_hdmi_mode(mode);
+		bool is_s3d = mode->flags & DRM_MODE_FLAG_3D_MASK;
+
+		if (drm_valid_hdmi_vic(vendor_if_vic) && !is_s3d)
+			frame->video_code = 0;
+	}
+
 	frame->picture_aspect = HDMI_PICTURE_ASPECT_NONE;
 
 	/*

+ 2 - 2
drivers/gpu/drm/drm_fb_cma_helper.c

@@ -640,7 +640,7 @@ EXPORT_SYMBOL_GPL(drm_fbdev_cma_hotplug_event);
  * Calls drm_fb_helper_set_suspend, which is a wrapper around
  * fb_set_suspend implemented by fbdev core.
  */
-void drm_fbdev_cma_set_suspend(struct drm_fbdev_cma *fbdev_cma, int state)
+void drm_fbdev_cma_set_suspend(struct drm_fbdev_cma *fbdev_cma, bool state)
 {
 	if (fbdev_cma)
 		drm_fb_helper_set_suspend(&fbdev_cma->fb_helper, state);
@@ -657,7 +657,7 @@ EXPORT_SYMBOL(drm_fbdev_cma_set_suspend);
  * fb_set_suspend implemented by fbdev core.
  */
 void drm_fbdev_cma_set_suspend_unlocked(struct drm_fbdev_cma *fbdev_cma,
-					int state)
+					bool state)
 {
 	if (fbdev_cma)
 		drm_fb_helper_set_suspend_unlocked(&fbdev_cma->fb_helper,

+ 396 - 236
drivers/gpu/drm/drm_fb_helper.c

@@ -106,11 +106,11 @@ static DEFINE_MUTEX(kernel_fb_helper_lock);
  */
 
 #define drm_fb_helper_for_each_connector(fbh, i__) \
-	for (({ lockdep_assert_held(&(fbh)->dev->mode_config.mutex); }), \
+	for (({ lockdep_assert_held(&(fbh)->lock); }), \
 	     i__ = 0; i__ < (fbh)->connector_count; i__++)
 
-int drm_fb_helper_add_one_connector(struct drm_fb_helper *fb_helper,
-				    struct drm_connector *connector)
+static int __drm_fb_helper_add_one_connector(struct drm_fb_helper *fb_helper,
+					     struct drm_connector *connector)
 {
 	struct drm_fb_helper_connector *fb_conn;
 	struct drm_fb_helper_connector **temp;
@@ -119,7 +119,7 @@ int drm_fb_helper_add_one_connector(struct drm_fb_helper *fb_helper,
 	if (!drm_fbdev_emulation)
 		return 0;
 
-	WARN_ON(!mutex_is_locked(&fb_helper->dev->mode_config.mutex));
+	lockdep_assert_held(&fb_helper->lock);
 
 	count = fb_helper->connector_count + 1;
 
@@ -141,8 +141,21 @@ int drm_fb_helper_add_one_connector(struct drm_fb_helper *fb_helper,
 	drm_connector_get(connector);
 	fb_conn->connector = connector;
 	fb_helper->connector_info[fb_helper->connector_count++] = fb_conn;
+
 	return 0;
 }
+
+int drm_fb_helper_add_one_connector(struct drm_fb_helper *fb_helper,
+				    struct drm_connector *connector)
+{
+	int err;
+
+	mutex_lock(&fb_helper->lock);
+	err = __drm_fb_helper_add_one_connector(fb_helper, connector);
+	mutex_unlock(&fb_helper->lock);
+
+	return err;
+}
 EXPORT_SYMBOL(drm_fb_helper_add_one_connector);
 
 /**
@@ -169,11 +182,10 @@ int drm_fb_helper_single_add_all_connectors(struct drm_fb_helper *fb_helper)
 	if (!drm_fbdev_emulation)
 		return 0;
 
-	mutex_lock(&dev->mode_config.mutex);
+	mutex_lock(&fb_helper->lock);
 	drm_connector_list_iter_begin(dev, &conn_iter);
 	drm_for_each_connector_iter(connector, &conn_iter) {
-		ret = drm_fb_helper_add_one_connector(fb_helper, connector);
-
+		ret = __drm_fb_helper_add_one_connector(fb_helper, connector);
 		if (ret)
 			goto fail;
 	}
@@ -192,14 +204,14 @@ fail:
 	fb_helper->connector_count = 0;
 out:
 	drm_connector_list_iter_end(&conn_iter);
-	mutex_unlock(&dev->mode_config.mutex);
+	mutex_unlock(&fb_helper->lock);
 
 	return ret;
 }
 EXPORT_SYMBOL(drm_fb_helper_single_add_all_connectors);
 
-int drm_fb_helper_remove_one_connector(struct drm_fb_helper *fb_helper,
-				       struct drm_connector *connector)
+static int __drm_fb_helper_remove_one_connector(struct drm_fb_helper *fb_helper,
+						struct drm_connector *connector)
 {
 	struct drm_fb_helper_connector *fb_helper_connector;
 	int i, j;
@@ -207,9 +219,9 @@ int drm_fb_helper_remove_one_connector(struct drm_fb_helper *fb_helper,
 	if (!drm_fbdev_emulation)
 		return 0;
 
-	WARN_ON(!mutex_is_locked(&fb_helper->dev->mode_config.mutex));
+	lockdep_assert_held(&fb_helper->lock);
 
-	for (i = 0; i < fb_helper->connector_count; i++) {
+	drm_fb_helper_for_each_connector(fb_helper, i) {
 		if (fb_helper->connector_info[i]->connector == connector)
 			break;
 	}
@@ -227,23 +239,19 @@ int drm_fb_helper_remove_one_connector(struct drm_fb_helper *fb_helper,
 
 	return 0;
 }
-EXPORT_SYMBOL(drm_fb_helper_remove_one_connector);
 
-static void drm_fb_helper_save_lut_atomic(struct drm_crtc *crtc, struct drm_fb_helper *helper)
+int drm_fb_helper_remove_one_connector(struct drm_fb_helper *fb_helper,
+				       struct drm_connector *connector)
 {
-	uint16_t *r_base, *g_base, *b_base;
-	int i;
+	int err;
 
-	if (helper->funcs->gamma_get == NULL)
-		return;
+	mutex_lock(&fb_helper->lock);
+	err = __drm_fb_helper_remove_one_connector(fb_helper, connector);
+	mutex_unlock(&fb_helper->lock);
 
-	r_base = crtc->gamma_store;
-	g_base = r_base + crtc->gamma_size;
-	b_base = g_base + crtc->gamma_size;
-
-	for (i = 0; i < crtc->gamma_size; i++)
-		helper->funcs->gamma_get(crtc, &r_base[i], &g_base[i], &b_base[i], i);
+	return err;
 }
+EXPORT_SYMBOL(drm_fb_helper_remove_one_connector);
 
 static void drm_fb_helper_restore_lut_atomic(struct drm_crtc *crtc)
 {
@@ -285,7 +293,6 @@ int drm_fb_helper_debug_enter(struct fb_info *info)
 			if (drm_drv_uses_atomic_modeset(mode_set->crtc->dev))
 				continue;
 
-			drm_fb_helper_save_lut_atomic(mode_set->crtc, helper);
 			funcs->mode_set_base_atomic(mode_set->crtc,
 						    mode_set->fb,
 						    mode_set->x,
@@ -298,20 +305,6 @@ int drm_fb_helper_debug_enter(struct fb_info *info)
 }
 EXPORT_SYMBOL(drm_fb_helper_debug_enter);
 
-/* Find the real fb for a given fb helper CRTC */
-static struct drm_framebuffer *drm_mode_config_fb(struct drm_crtc *crtc)
-{
-	struct drm_device *dev = crtc->dev;
-	struct drm_crtc *c;
-
-	drm_for_each_crtc(c, dev) {
-		if (crtc->base.id == c->base.id)
-			return c->primary->fb;
-	}
-
-	return NULL;
-}
-
 /**
  * drm_fb_helper_debug_leave - implementation for &fb_ops.fb_debug_leave
  * @info: fbdev registered by the helper
@@ -328,8 +321,11 @@ int drm_fb_helper_debug_leave(struct fb_info *info)
 		struct drm_mode_set *mode_set = &helper->crtc_info[i].mode_set;
 
 		crtc = mode_set->crtc;
+		if (drm_drv_uses_atomic_modeset(crtc->dev))
+			continue;
+
 		funcs = crtc->helper_private;
-		fb = drm_mode_config_fb(crtc);
+		fb = crtc->primary->fb;
 
 		if (!crtc->enabled)
 			continue;
@@ -342,9 +338,6 @@ int drm_fb_helper_debug_leave(struct fb_info *info)
 		if (funcs->mode_set_base_atomic == NULL)
 			continue;
 
-		if (drm_drv_uses_atomic_modeset(crtc->dev))
-			continue;
-
 		drm_fb_helper_restore_lut_atomic(mode_set->crtc);
 		funcs->mode_set_base_atomic(mode_set->crtc, fb, crtc->x,
 					    crtc->y, LEAVE_ATOMIC_MODE_SET);
@@ -354,19 +347,24 @@ int drm_fb_helper_debug_leave(struct fb_info *info)
 }
 EXPORT_SYMBOL(drm_fb_helper_debug_leave);
 
-static int restore_fbdev_mode_atomic(struct drm_fb_helper *fb_helper)
+static int restore_fbdev_mode_atomic(struct drm_fb_helper *fb_helper, bool active)
 {
 	struct drm_device *dev = fb_helper->dev;
 	struct drm_plane *plane;
 	struct drm_atomic_state *state;
 	int i, ret;
 	unsigned int plane_mask;
+	struct drm_modeset_acquire_ctx ctx;
+
+	drm_modeset_acquire_init(&ctx, 0);
 
 	state = drm_atomic_state_alloc(dev);
-	if (!state)
-		return -ENOMEM;
+	if (!state) {
+		ret = -ENOMEM;
+		goto out_ctx;
+	}
 
-	state->acquire_ctx = dev->mode_config.acquire_ctx;
+	state->acquire_ctx = &ctx;
 retry:
 	plane_mask = 0;
 	drm_for_each_plane(plane, dev) {
@@ -375,7 +373,7 @@ retry:
 		plane_state = drm_atomic_get_plane_state(state, plane);
 		if (IS_ERR(plane_state)) {
 			ret = PTR_ERR(plane_state);
-			goto fail;
+			goto out_state;
 		}
 
 		plane_state->rotation = DRM_MODE_ROTATE_0;
@@ -389,7 +387,7 @@ retry:
 
 		ret = __drm_atomic_helper_disable_plane(plane, plane_state);
 		if (ret != 0)
-			goto fail;
+			goto out_state;
 	}
 
 	for (i = 0; i < fb_helper->crtc_count; i++) {
@@ -397,23 +395,38 @@ retry:
 
 		ret = __drm_atomic_helper_set_config(mode_set, state);
 		if (ret != 0)
-			goto fail;
+			goto out_state;
+
+		/*
+		 * __drm_atomic_helper_set_config() sets active when a mode
+		 * is set; unconditionally clear it if we force DPMS off.
+		 */
+		if (!active) {
+			struct drm_crtc *crtc = mode_set->crtc;
+			struct drm_crtc_state *crtc_state = drm_atomic_get_new_crtc_state(state, crtc);
+
+			crtc_state->active = false;
+		}
 	}
 
 	ret = drm_atomic_commit(state);
 
-fail:
+out_state:
 	drm_atomic_clean_old_fb(dev, plane_mask, ret);
 
 	if (ret == -EDEADLK)
 		goto backoff;
 
 	drm_atomic_state_put(state);
+out_ctx:
+	drm_modeset_drop_locks(&ctx);
+	drm_modeset_acquire_fini(&ctx);
+
 	return ret;
 
 backoff:
 	drm_atomic_state_clear(state);
-	drm_atomic_legacy_backoff(state);
+	drm_modeset_backoff(&ctx);
 
 	goto retry;
 }
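The shape above (private acquire context, retry on -EDEADLK, then drop the locks) recurs in the other helpers converted in this file; a minimal sketch of that skeleton, using only the calls visible here, with do_updates() standing in for the caller's actual state changes:

/* Placeholder for the caller's real work: grabbing CRTC/plane states
 * via drm_atomic_get_*_state() and modifying them. */
static int do_updates(struct drm_atomic_state *state)
{
	return 0;
}

static int commit_with_backoff(struct drm_device *dev)
{
	struct drm_modeset_acquire_ctx ctx;
	struct drm_atomic_state *state;
	int ret;

	drm_modeset_acquire_init(&ctx, 0);

	state = drm_atomic_state_alloc(dev);
	if (!state) {
		ret = -ENOMEM;
		goto out_ctx;
	}
	state->acquire_ctx = &ctx;

retry:
	ret = do_updates(state);
	if (!ret)
		ret = drm_atomic_commit(state);

	if (ret == -EDEADLK) {
		/* Another thread holds a lock we need: drop everything,
		 * wait for it, and start over with a clean state. */
		drm_atomic_state_clear(state);
		drm_modeset_backoff(&ctx);
		goto retry;
	}

	drm_atomic_state_put(state);
out_ctx:
	drm_modeset_drop_locks(&ctx);
	drm_modeset_acquire_fini(&ctx);

	return ret;
}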
@@ -422,8 +435,9 @@ static int restore_fbdev_mode_legacy(struct drm_fb_helper *fb_helper)
 {
 	struct drm_device *dev = fb_helper->dev;
 	struct drm_plane *plane;
-	int i;
+	int i, ret = 0;
 
+	drm_modeset_lock_all(fb_helper->dev);
 	drm_for_each_plane(plane, dev) {
 		if (plane->type != DRM_PLANE_TYPE_PRIMARY)
 			drm_plane_force_disable(plane);
@@ -437,34 +451,33 @@ static int restore_fbdev_mode_legacy(struct drm_fb_helper *fb_helper)
 	for (i = 0; i < fb_helper->crtc_count; i++) {
 		struct drm_mode_set *mode_set = &fb_helper->crtc_info[i].mode_set;
 		struct drm_crtc *crtc = mode_set->crtc;
-		int ret;
 
 		if (crtc->funcs->cursor_set2) {
 			ret = crtc->funcs->cursor_set2(crtc, NULL, 0, 0, 0, 0, 0);
 			if (ret)
-				return ret;
+				goto out;
 		} else if (crtc->funcs->cursor_set) {
 			ret = crtc->funcs->cursor_set(crtc, NULL, 0, 0, 0);
 			if (ret)
-				return ret;
+				goto out;
 		}
 
 		ret = drm_mode_set_config_internal(mode_set);
 		if (ret)
-			return ret;
+			goto out;
 	}
+out:
+	drm_modeset_unlock_all(fb_helper->dev);
 
-	return 0;
+	return ret;
 }
 
 static int restore_fbdev_mode(struct drm_fb_helper *fb_helper)
 {
 	struct drm_device *dev = fb_helper->dev;
 
-	drm_warn_on_modeset_not_all_locked(dev);
-
 	if (drm_drv_uses_atomic_modeset(dev))
-		return restore_fbdev_mode_atomic(fb_helper);
+		return restore_fbdev_mode_atomic(fb_helper, true);
 	else
 		return restore_fbdev_mode_legacy(fb_helper);
 }
@@ -482,23 +495,26 @@ static int restore_fbdev_mode(struct drm_fb_helper *fb_helper)
  */
 int drm_fb_helper_restore_fbdev_mode_unlocked(struct drm_fb_helper *fb_helper)
 {
-	struct drm_device *dev = fb_helper->dev;
 	bool do_delayed;
 	int ret;
 
 	if (!drm_fbdev_emulation)
 		return -ENODEV;
 
-	drm_modeset_lock_all(dev);
+	if (READ_ONCE(fb_helper->deferred_setup))
+		return 0;
+
+	mutex_lock(&fb_helper->lock);
 	ret = restore_fbdev_mode(fb_helper);
 
 	do_delayed = fb_helper->delayed_hotplug;
 	if (do_delayed)
 		fb_helper->delayed_hotplug = false;
-	drm_modeset_unlock_all(dev);
+	mutex_unlock(&fb_helper->lock);
 
 	if (do_delayed)
 		drm_fb_helper_hotplug_event(fb_helper);
+
 	return ret;
 }
 EXPORT_SYMBOL(drm_fb_helper_restore_fbdev_mode_unlocked);
@@ -517,10 +533,12 @@ static bool drm_fb_helper_is_bound(struct drm_fb_helper *fb_helper)
 		return false;
 
 	drm_for_each_crtc(crtc, dev) {
+		drm_modeset_lock(&crtc->mutex, NULL);
 		if (crtc->primary->fb)
 			crtcs_bound++;
 		if (crtc->primary->fb == fb_helper->fb)
 			bound++;
+		drm_modeset_unlock(&crtc->mutex);
 	}
 
 	if (bound < crtcs_bound)
@@ -548,11 +566,11 @@ static bool drm_fb_helper_force_kernel_mode(void)
 		if (dev->switch_power_state == DRM_SWITCH_POWER_OFF)
 			continue;
 
-		drm_modeset_lock_all(dev);
+		mutex_lock(&helper->lock);
 		ret = restore_fbdev_mode(helper);
 		if (ret)
 			error = true;
-		drm_modeset_unlock_all(dev);
+		mutex_unlock(&helper->lock);
 	}
 	return error;
 }
@@ -581,23 +599,14 @@ static struct sysrq_key_op sysrq_drm_fb_helper_restore_op = {
 static struct sysrq_key_op sysrq_drm_fb_helper_restore_op = { };
 #endif
 
-static void drm_fb_helper_dpms(struct fb_info *info, int dpms_mode)
+static void dpms_legacy(struct drm_fb_helper *fb_helper, int dpms_mode)
 {
-	struct drm_fb_helper *fb_helper = info->par;
 	struct drm_device *dev = fb_helper->dev;
 	struct drm_crtc *crtc;
 	struct drm_connector *connector;
 	int i, j;
 
-	/*
-	 * For each CRTC in this fb, turn the connectors on/off.
-	 */
 	drm_modeset_lock_all(dev);
-	if (!drm_fb_helper_is_bound(fb_helper)) {
-		drm_modeset_unlock_all(dev);
-		return;
-	}
-
 	for (i = 0; i < fb_helper->crtc_count; i++) {
 		crtc = fb_helper->crtc_info[i].mode_set.crtc;
 
@@ -615,6 +624,26 @@ static void drm_fb_helper_dpms(struct fb_info *info, int dpms_mode)
 	drm_modeset_unlock_all(dev);
 }
 
+static void drm_fb_helper_dpms(struct fb_info *info, int dpms_mode)
+{
+	struct drm_fb_helper *fb_helper = info->par;
+
+	/*
+	 * For each CRTC in this fb, turn the connectors on/off.
+	 */
+	mutex_lock(&fb_helper->lock);
+	if (!drm_fb_helper_is_bound(fb_helper)) {
+		mutex_unlock(&fb_helper->lock);
+		return;
+	}
+
+	if (drm_drv_uses_atomic_modeset(fb_helper->dev))
+		restore_fbdev_mode_atomic(fb_helper, dpms_mode == DRM_MODE_DPMS_ON);
+	else
+		dpms_legacy(fb_helper, dpms_mode);
+	mutex_unlock(&fb_helper->lock);
+}
+
 /**
  * drm_fb_helper_blank - implementation for &fb_ops.fb_blank
  * @blank: desired blanking state
@@ -734,6 +763,7 @@ void drm_fb_helper_prepare(struct drm_device *dev, struct drm_fb_helper *helper,
 	INIT_WORK(&helper->resume_work, drm_fb_helper_resume_worker);
 	INIT_WORK(&helper->dirty_work, drm_fb_helper_dirty_work);
 	helper->dirty_clip.x1 = helper->dirty_clip.y1 = ~0;
+	mutex_init(&helper->lock);
 	helper->funcs = funcs;
 	helper->dev = dev;
 }
@@ -899,6 +929,7 @@ void drm_fb_helper_fini(struct drm_fb_helper *fb_helper)
 	}
 	mutex_unlock(&kernel_fb_helper_lock);
 
+	mutex_destroy(&fb_helper->lock);
 	drm_fb_helper_crtc_free(fb_helper);
 
 }
@@ -1167,22 +1198,23 @@ void drm_fb_helper_set_suspend_unlocked(struct drm_fb_helper *fb_helper,
 }
 EXPORT_SYMBOL(drm_fb_helper_set_suspend_unlocked);
 
-static int setcolreg(struct drm_crtc *crtc, u16 red, u16 green,
-		     u16 blue, u16 regno, struct fb_info *info)
+static int setcmap_pseudo_palette(struct fb_cmap *cmap, struct fb_info *info)
 {
-	struct drm_fb_helper *fb_helper = info->par;
-	struct drm_framebuffer *fb = fb_helper->fb;
+	u32 *palette = (u32 *)info->pseudo_palette;
+	int i;
+
+	if (cmap->start + cmap->len > 16)
+		return -EINVAL;
 
-	if (info->fix.visual == FB_VISUAL_TRUECOLOR) {
-		u32 *palette;
+	for (i = 0; i < cmap->len; ++i) {
+		u16 red = cmap->red[i];
+		u16 green = cmap->green[i];
+		u16 blue = cmap->blue[i];
 		u32 value;
-		/* place color in psuedopalette */
-		if (regno > 16)
-			return -EINVAL;
-		palette = (u32 *)info->pseudo_palette;
-		red >>= (16 - info->var.red.length);
-		green >>= (16 - info->var.green.length);
-		blue >>= (16 - info->var.blue.length);
+
+		red >>= 16 - info->var.red.length;
+		green >>= 16 - info->var.green.length;
+		blue >>= 16 - info->var.blue.length;
 		value = (red << info->var.red.offset) |
 			(green << info->var.green.offset) |
 			(blue << info->var.blue.offset);
@@ -1192,23 +1224,169 @@ static int setcolreg(struct drm_crtc *crtc, u16 red, u16 green,
 			mask <<= info->var.transp.offset;
 			value |= mask;
 		}
-		palette[regno] = value;
-		return 0;
+		palette[cmap->start + i] = value;
 	}
 
-	/*
-	 * The driver really shouldn't advertise pseudo/directcolor
-	 * visuals if it can't deal with the palette.
-	 */
-	if (WARN_ON(!fb_helper->funcs->gamma_set ||
-		    !fb_helper->funcs->gamma_get))
-		return -EINVAL;
+	return 0;
+}
 
-	WARN_ON(fb->format->cpp[0] != 1);
+static int setcmap_legacy(struct fb_cmap *cmap, struct fb_info *info)
+{
+	struct drm_fb_helper *fb_helper = info->par;
+	struct drm_crtc *crtc;
+	u16 *r, *g, *b;
+	int i, ret = 0;
 
-	fb_helper->funcs->gamma_set(crtc, red, green, blue, regno);
+	drm_modeset_lock_all(fb_helper->dev);
+	for (i = 0; i < fb_helper->crtc_count; i++) {
+		crtc = fb_helper->crtc_info[i].mode_set.crtc;
+		if (!crtc->funcs->gamma_set || !crtc->gamma_size)
+			return -EINVAL;
 
-	return 0;
+		if (cmap->start + cmap->len > crtc->gamma_size)
+			return -EINVAL;
+
+		r = crtc->gamma_store;
+		g = r + crtc->gamma_size;
+		b = g + crtc->gamma_size;
+
+		memcpy(r + cmap->start, cmap->red, cmap->len * sizeof(*r));
+		memcpy(g + cmap->start, cmap->green, cmap->len * sizeof(*g));
+		memcpy(b + cmap->start, cmap->blue, cmap->len * sizeof(*b));
+
+		ret = crtc->funcs->gamma_set(crtc, r, g, b,
+					     crtc->gamma_size, NULL);
+		if (ret)
+			return ret;
+	}
+	drm_modeset_unlock_all(fb_helper->dev);
+
+	return ret;
+}
+
+static struct drm_property_blob *setcmap_new_gamma_lut(struct drm_crtc *crtc,
+						       struct fb_cmap *cmap)
+{
+	struct drm_device *dev = crtc->dev;
+	struct drm_property_blob *gamma_lut;
+	struct drm_color_lut *lut;
+	int size = crtc->gamma_size;
+	int i;
+
+	if (!size || cmap->start + cmap->len > size)
+		return ERR_PTR(-EINVAL);
+
+	gamma_lut = drm_property_create_blob(dev, sizeof(*lut) * size, NULL);
+	if (IS_ERR(gamma_lut))
+		return gamma_lut;
+
+	lut = (struct drm_color_lut *)gamma_lut->data;
+	if (cmap->start || cmap->len != size) {
+		u16 *r = crtc->gamma_store;
+		u16 *g = r + crtc->gamma_size;
+		u16 *b = g + crtc->gamma_size;
+
+		for (i = 0; i < cmap->start; i++) {
+			lut[i].red = r[i];
+			lut[i].green = g[i];
+			lut[i].blue = b[i];
+		}
+		for (i = cmap->start + cmap->len; i < size; i++) {
+			lut[i].red = r[i];
+			lut[i].green = g[i];
+			lut[i].blue = b[i];
+		}
+	}
+
+	for (i = 0; i < cmap->len; i++) {
+		lut[cmap->start + i].red = cmap->red[i];
+		lut[cmap->start + i].green = cmap->green[i];
+		lut[cmap->start + i].blue = cmap->blue[i];
+	}
+
+	return gamma_lut;
+}
+
+static int setcmap_atomic(struct fb_cmap *cmap, struct fb_info *info)
+{
+	struct drm_fb_helper *fb_helper = info->par;
+	struct drm_device *dev = fb_helper->dev;
+	struct drm_property_blob *gamma_lut = NULL;
+	struct drm_modeset_acquire_ctx ctx;
+	struct drm_crtc_state *crtc_state;
+	struct drm_atomic_state *state;
+	struct drm_crtc *crtc;
+	u16 *r, *g, *b;
+	int i, ret = 0;
+	bool replaced;
+
+	drm_modeset_acquire_init(&ctx, 0);
+
+	state = drm_atomic_state_alloc(dev);
+	if (!state) {
+		ret = -ENOMEM;
+		goto out_ctx;
+	}
+
+	state->acquire_ctx = &ctx;
+retry:
+	for (i = 0; i < fb_helper->crtc_count; i++) {
+		crtc = fb_helper->crtc_info[i].mode_set.crtc;
+
+		if (!gamma_lut)
+			gamma_lut = setcmap_new_gamma_lut(crtc, cmap);
+		if (IS_ERR(gamma_lut)) {
+			ret = PTR_ERR(gamma_lut);
+			gamma_lut = NULL;
+			goto out_state;
+		}
+
+		crtc_state = drm_atomic_get_crtc_state(state, crtc);
+		if (IS_ERR(crtc_state)) {
+			ret = PTR_ERR(crtc_state);
+			goto out_state;
+		}
+
+		replaced  = drm_property_replace_blob(&crtc_state->degamma_lut,
+						      NULL);
+		replaced |= drm_property_replace_blob(&crtc_state->ctm, NULL);
+		replaced |= drm_property_replace_blob(&crtc_state->gamma_lut,
+						      gamma_lut);
+		crtc_state->color_mgmt_changed |= replaced;
+	}
+
+	ret = drm_atomic_commit(state);
+	if (ret)
+		goto out_state;
+
+	for (i = 0; i < fb_helper->crtc_count; i++) {
+		crtc = fb_helper->crtc_info[i].mode_set.crtc;
+
+		r = crtc->gamma_store;
+		g = r + crtc->gamma_size;
+		b = g + crtc->gamma_size;
+
+		memcpy(r + cmap->start, cmap->red, cmap->len * sizeof(*r));
+		memcpy(g + cmap->start, cmap->green, cmap->len * sizeof(*g));
+		memcpy(b + cmap->start, cmap->blue, cmap->len * sizeof(*b));
+	}
+
+out_state:
+	if (ret == -EDEADLK)
+		goto backoff;
+
+	drm_property_blob_put(gamma_lut);
+	drm_atomic_state_put(state);
+out_ctx:
+	drm_modeset_drop_locks(&ctx);
+	drm_modeset_acquire_fini(&ctx);
+
+	return ret;
+
+backoff:
+	drm_atomic_state_clear(state);
+	drm_modeset_backoff(&ctx);
+	goto retry;
 }
 
 /**
@@ -1219,52 +1397,29 @@ static int setcolreg(struct drm_crtc *crtc, u16 red, u16 green,
 int drm_fb_helper_setcmap(struct fb_cmap *cmap, struct fb_info *info)
 {
 	struct drm_fb_helper *fb_helper = info->par;
-	struct drm_device *dev = fb_helper->dev;
-	const struct drm_crtc_helper_funcs *crtc_funcs;
-	u16 *red, *green, *blue, *transp;
-	struct drm_crtc *crtc;
-	int i, j, rc = 0;
-	int start;
+	int ret;
 
 	if (oops_in_progress)
 		return -EBUSY;
 
-	drm_modeset_lock_all(dev);
+	mutex_lock(&fb_helper->lock);
+
 	if (!drm_fb_helper_is_bound(fb_helper)) {
-		drm_modeset_unlock_all(dev);
-		return -EBUSY;
+		ret = -EBUSY;
+		goto out;
 	}
 
-	for (i = 0; i < fb_helper->crtc_count; i++) {
-		crtc = fb_helper->crtc_info[i].mode_set.crtc;
-		crtc_funcs = crtc->helper_private;
-
-		red = cmap->red;
-		green = cmap->green;
-		blue = cmap->blue;
-		transp = cmap->transp;
-		start = cmap->start;
-
-		for (j = 0; j < cmap->len; j++) {
-			u16 hred, hgreen, hblue, htransp = 0xffff;
-
-			hred = *red++;
-			hgreen = *green++;
-			hblue = *blue++;
+	if (info->fix.visual == FB_VISUAL_TRUECOLOR)
+		ret = setcmap_pseudo_palette(cmap, info);
+	else if (drm_drv_uses_atomic_modeset(fb_helper->dev))
+		ret = setcmap_atomic(cmap, info);
+	else
+		ret = setcmap_legacy(cmap, info);
 
-			if (transp)
-				htransp = *transp++;
+out:
+	mutex_unlock(&fb_helper->lock);
 
-			rc = setcolreg(crtc, hred, hgreen, hblue, start++, info);
-			if (rc)
-				goto out;
-		}
-		if (crtc_funcs->load_lut)
-			crtc_funcs->load_lut(crtc);
-	}
- out:
-	drm_modeset_unlock_all(dev);
-	return rc;
+	return ret;
 }
 EXPORT_SYMBOL(drm_fb_helper_setcmap);
 
@@ -1281,12 +1436,11 @@ int drm_fb_helper_ioctl(struct fb_info *info, unsigned int cmd,
 			unsigned long arg)
 {
 	struct drm_fb_helper *fb_helper = info->par;
-	struct drm_device *dev = fb_helper->dev;
 	struct drm_mode_set *mode_set;
 	struct drm_crtc *crtc;
 	int ret = 0;
 
-	mutex_lock(&dev->mode_config.mutex);
+	mutex_lock(&fb_helper->lock);
 	if (!drm_fb_helper_is_bound(fb_helper)) {
 		ret = -EBUSY;
 		goto unlock;
@@ -1331,7 +1485,7 @@ int drm_fb_helper_ioctl(struct fb_info *info, unsigned int cmd,
 	}
 
 unlock:
-	mutex_unlock(&dev->mode_config.mutex);
+	mutex_unlock(&fb_helper->lock);
 	return ret;
 }
 EXPORT_SYMBOL(drm_fb_helper_ioctl);
@@ -1463,61 +1617,36 @@ int drm_fb_helper_set_par(struct fb_info *info)
 }
 EXPORT_SYMBOL(drm_fb_helper_set_par);
 
-static int pan_display_atomic(struct fb_var_screeninfo *var,
-			      struct fb_info *info)
+static void pan_set(struct drm_fb_helper *fb_helper, int x, int y)
 {
-	struct drm_fb_helper *fb_helper = info->par;
-	struct drm_device *dev = fb_helper->dev;
-	struct drm_atomic_state *state;
-	struct drm_plane *plane;
-	int i, ret;
-	unsigned int plane_mask;
-
-	state = drm_atomic_state_alloc(dev);
-	if (!state)
-		return -ENOMEM;
+	int i;
 
-	state->acquire_ctx = dev->mode_config.acquire_ctx;
-retry:
-	plane_mask = 0;
 	for (i = 0; i < fb_helper->crtc_count; i++) {
 		struct drm_mode_set *mode_set;
 
 		mode_set = &fb_helper->crtc_info[i].mode_set;
 
-		mode_set->x = var->xoffset;
-		mode_set->y = var->yoffset;
-
-		ret = __drm_atomic_helper_set_config(mode_set, state);
-		if (ret != 0)
-			goto fail;
-
-		plane = mode_set->crtc->primary;
-		plane_mask |= (1 << drm_plane_index(plane));
-		plane->old_fb = plane->fb;
+		mode_set->x = x;
+		mode_set->y = y;
 	}
+}
 
-	ret = drm_atomic_commit(state);
-	if (ret != 0)
-		goto fail;
-
-	info->var.xoffset = var->xoffset;
-	info->var.yoffset = var->yoffset;
+static int pan_display_atomic(struct fb_var_screeninfo *var,
+			      struct fb_info *info)
+{
+	struct drm_fb_helper *fb_helper = info->par;
+	int ret;
 
-fail:
-	drm_atomic_clean_old_fb(dev, plane_mask, ret);
+	pan_set(fb_helper, var->xoffset, var->yoffset);
 
-	if (ret == -EDEADLK)
-		goto backoff;
+	ret = restore_fbdev_mode_atomic(fb_helper, true);
+	if (!ret) {
+		info->var.xoffset = var->xoffset;
+		info->var.yoffset = var->yoffset;
+	} else
+		pan_set(fb_helper, info->var.xoffset, info->var.yoffset);
 
-	drm_atomic_state_put(state);
 	return ret;
-
-backoff:
-	drm_atomic_state_clear(state);
-	drm_atomic_legacy_backoff(state);
-
-	goto retry;
 }
 
 static int pan_display_legacy(struct fb_var_screeninfo *var,
@@ -1528,6 +1657,7 @@ static int pan_display_legacy(struct fb_var_screeninfo *var,
 	int ret = 0;
 	int i;
 
+	drm_modeset_lock_all(fb_helper->dev);
 	for (i = 0; i < fb_helper->crtc_count; i++) {
 		modeset = &fb_helper->crtc_info[i].mode_set;
 
@@ -1542,6 +1672,7 @@ static int pan_display_legacy(struct fb_var_screeninfo *var,
 			}
 		}
 	}
+	drm_modeset_unlock_all(fb_helper->dev);
 
 	return ret;
 }
@@ -1561,9 +1692,9 @@ int drm_fb_helper_pan_display(struct fb_var_screeninfo *var,
 	if (oops_in_progress)
 		return -EBUSY;
 
-	drm_modeset_lock_all(dev);
+	mutex_lock(&fb_helper->lock);
 	if (!drm_fb_helper_is_bound(fb_helper)) {
-		drm_modeset_unlock_all(dev);
+		mutex_unlock(&fb_helper->lock);
 		return -EBUSY;
 	}
 
@@ -1571,7 +1702,7 @@ int drm_fb_helper_pan_display(struct fb_var_screeninfo *var,
 		ret = pan_display_atomic(var, info);
 	else
 		ret = pan_display_legacy(var, info);
-	drm_modeset_unlock_all(dev);
+	mutex_unlock(&fb_helper->lock);
 
 	return ret;
 }
@@ -1579,8 +1710,7 @@ EXPORT_SYMBOL(drm_fb_helper_pan_display);
 
 /*
  * Allocates the backing storage and sets up the fbdev info structure through
- * the ->fb_probe callback and then registers the fbdev and sets up the panic
- * notifier.
+ * the ->fb_probe callback.
  */
 static int drm_fb_helper_single_fb_probe(struct drm_fb_helper *fb_helper,
 					 int preferred_bpp)
@@ -1678,13 +1808,8 @@ static int drm_fb_helper_single_fb_probe(struct drm_fb_helper *fb_helper,
 	}
 
 	if (crtc_count == 0 || sizes.fb_width == -1 || sizes.fb_height == -1) {
-		/*
-		 * hmm everyone went away - assume VGA cable just fell out
-		 * and will come back later.
-		 */
-		DRM_INFO("Cannot find any crtc or sizes - going 1024x768\n");
-		sizes.fb_width = sizes.surface_width = 1024;
-		sizes.fb_height = sizes.surface_height = 768;
+		DRM_INFO("Cannot find any crtc or sizes\n");
+		return -EAGAIN;
 	}
 
 	/* Handle our overallocation */
@@ -1831,12 +1956,11 @@ void drm_fb_helper_fill_var(struct fb_info *info, struct drm_fb_helper *fb_helpe
 EXPORT_SYMBOL(drm_fb_helper_fill_var);
 
 static int drm_fb_helper_probe_connector_modes(struct drm_fb_helper *fb_helper,
-					       uint32_t maxX,
-					       uint32_t maxY)
+						uint32_t maxX,
+						uint32_t maxY)
 {
 	struct drm_connector *connector;
-	int count = 0;
-	int i;
+	int i, count = 0;
 
 	drm_fb_helper_for_each_connector(fb_helper, i) {
 		connector = fb_helper->connector_info[i]->connector;
@@ -2234,11 +2358,8 @@ static void drm_setup_crtcs(struct drm_fb_helper *fb_helper,
 	int i;
 
 	DRM_DEBUG_KMS("\n");
-	if (drm_fb_helper_probe_connector_modes(fb_helper, width, height) == 0)
-		DRM_DEBUG_KMS("No connectors reported connected with modes\n");
-
 	/* prevent concurrent modification of connector_count by hotplug */
-	lockdep_assert_held(&fb_helper->dev->mode_config.mutex);
+	lockdep_assert_held(&fb_helper->lock);
 
 	crtcs = kcalloc(fb_helper->connector_count,
 			sizeof(struct drm_fb_helper_crtc *), GFP_KERNEL);
@@ -2253,6 +2374,9 @@ static void drm_setup_crtcs(struct drm_fb_helper *fb_helper,
 		goto out;
 	}
 
+	mutex_lock(&fb_helper->dev->mode_config.mutex);
+	if (drm_fb_helper_probe_connector_modes(fb_helper, width, height) == 0)
+		DRM_DEBUG_KMS("No connectors reported connected with modes\n");
 	drm_enable_connectors(fb_helper, enabled);
 
 	if (!(fb_helper->funcs->initial_config &&
@@ -2274,6 +2398,7 @@ static void drm_setup_crtcs(struct drm_fb_helper *fb_helper,
 
 		drm_pick_crtcs(fb_helper, crtcs, modes, 0, width, height);
 	}
+	mutex_unlock(&fb_helper->dev->mode_config.mutex);
 
 	/* need to set the modesets up here for use later */
 	/* fill out the connector<->crtc mappings into the modesets */
@@ -2313,6 +2438,59 @@ out:
 	kfree(enabled);
 }
 
+/* Note: Drops fb_helper->lock before returning. */
+static int
+__drm_fb_helper_initial_config_and_unlock(struct drm_fb_helper *fb_helper,
+					  int bpp_sel)
+{
+	struct drm_device *dev = fb_helper->dev;
+	struct fb_info *info;
+	unsigned int width, height;
+	int ret;
+
+	width = dev->mode_config.max_width;
+	height = dev->mode_config.max_height;
+
+	drm_setup_crtcs(fb_helper, width, height);
+	ret = drm_fb_helper_single_fb_probe(fb_helper, bpp_sel);
+	if (ret < 0) {
+		if (ret == -EAGAIN) {
+			fb_helper->preferred_bpp = bpp_sel;
+			fb_helper->deferred_setup = true;
+			ret = 0;
+		}
+		mutex_unlock(&fb_helper->lock);
+
+		return ret;
+	}
+
+	fb_helper->deferred_setup = false;
+
+	info = fb_helper->fbdev;
+	info->var.pixclock = 0;
+
+	/* Need to drop locks to avoid recursive deadlock in
+	 * register_framebuffer. This is ok because the only thing left to do is
+	 * register the fbdev emulation instance in kernel_fb_helper_list. */
+	mutex_unlock(&fb_helper->lock);
+
+	ret = register_framebuffer(info);
+	if (ret < 0)
+		return ret;
+
+	dev_info(dev->dev, "fb%d: %s frame buffer device\n",
+		 info->node, info->fix.id);
+
+	mutex_lock(&kernel_fb_helper_lock);
+	if (list_empty(&kernel_fb_helper_list))
+		register_sysrq_key('v', &sysrq_drm_fb_helper_restore_op);
+
+	list_add(&fb_helper->kernel_fb_list, &kernel_fb_helper_list);
+	mutex_unlock(&kernel_fb_helper_lock);
+
+	return 0;
+}
+
 /**
  * drm_fb_helper_initial_config - setup a sane initial connector configuration
  * @fb_helper: fb_helper device struct
@@ -2357,39 +2535,15 @@ out:
  */
 int drm_fb_helper_initial_config(struct drm_fb_helper *fb_helper, int bpp_sel)
 {
-	struct drm_device *dev = fb_helper->dev;
-	struct fb_info *info;
 	int ret;
 
 	if (!drm_fbdev_emulation)
 		return 0;
 
-	mutex_lock(&dev->mode_config.mutex);
-	drm_setup_crtcs(fb_helper,
-			dev->mode_config.max_width,
-			dev->mode_config.max_height);
-	ret = drm_fb_helper_single_fb_probe(fb_helper, bpp_sel);
-	mutex_unlock(&dev->mode_config.mutex);
-	if (ret)
-		return ret;
-
-	info = fb_helper->fbdev;
-	info->var.pixclock = 0;
-	ret = register_framebuffer(info);
-	if (ret < 0)
-		return ret;
-
-	dev_info(dev->dev, "fb%d: %s frame buffer device\n",
-		 info->node, info->fix.id);
-
-	mutex_lock(&kernel_fb_helper_lock);
-	if (list_empty(&kernel_fb_helper_list))
-		register_sysrq_key('v', &sysrq_drm_fb_helper_restore_op);
+	mutex_lock(&fb_helper->lock);
+	ret = __drm_fb_helper_initial_config_and_unlock(fb_helper, bpp_sel);
 
-	list_add(&fb_helper->kernel_fb_list, &kernel_fb_helper_list);
-	mutex_unlock(&kernel_fb_helper_lock);
-
-	return 0;
+	return ret;
 }
 EXPORT_SYMBOL(drm_fb_helper_initial_config);
 
@@ -2416,22 +2570,28 @@ EXPORT_SYMBOL(drm_fb_helper_initial_config);
  */
 int drm_fb_helper_hotplug_event(struct drm_fb_helper *fb_helper)
 {
-	struct drm_device *dev = fb_helper->dev;
+	int err = 0;
 
 	if (!drm_fbdev_emulation)
 		return 0;
 
-	mutex_lock(&dev->mode_config.mutex);
+	mutex_lock(&fb_helper->lock);
+	if (fb_helper->deferred_setup) {
+		err = __drm_fb_helper_initial_config_and_unlock(fb_helper,
+				fb_helper->preferred_bpp);
+		return err;
+	}
+
 	if (!fb_helper->fb || !drm_fb_helper_is_bound(fb_helper)) {
 		fb_helper->delayed_hotplug = true;
-		mutex_unlock(&dev->mode_config.mutex);
-		return 0;
+		mutex_unlock(&fb_helper->lock);
+		return err;
 	}
+
 	DRM_DEBUG_KMS("\n");
 
 	drm_setup_crtcs(fb_helper, fb_helper->fb->width, fb_helper->fb->height);
-
-	mutex_unlock(&dev->mode_config.mutex);
+	mutex_unlock(&fb_helper->lock);
 
 	drm_fb_helper_set_par(fb_helper->fbdev);
 

+ 6 - 1
drivers/gpu/drm/drm_file.c

@@ -75,7 +75,7 @@ DEFINE_MUTEX(drm_global_mutex);
  * for drivers which use the CMA GEM helpers it's drm_gem_cma_mmap().
  *
  * No other file operations are supported by the DRM userspace API. Overall the
- * following is an example #file_operations structure::
+ * following is an example &file_operations structure::
  *
  *     static const example_drm_fops = {
  *             .owner = THIS_MODULE,
@@ -92,6 +92,11 @@ DEFINE_MUTEX(drm_global_mutex);
  * For plain GEM based drivers there is the DEFINE_DRM_GEM_FOPS() macro, and for
  * CMA based drivers there is the DEFINE_DRM_GEM_CMA_FOPS() macro to make this
  * simpler.
+ *
+ * The driver's &file_operations must be stored in &drm_driver.fops.
+ *
+ * For driver-private IOCTL handling see the more detailed discussion in
+ * :ref:`IOCTL support in the userland interfaces chapter<drm_driver_ioctl>`.
  */
 
 static int drm_open_helper(struct file *filp, struct drm_minor *minor);
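
For plain GEM drivers the example &file_operations in the comment above reduces to the DEFINE_DRM_GEM_FOPS() macro mentioned there. A minimal sketch, with the my_driver names purely illustrative:

	#include <drm/drm_drv.h>
	#include <drm/drm_gem.h>

	/* Expands to a static const struct file_operations with the standard
	 * DRM open/release/unlocked_ioctl/mmap/poll/read hooks filled in. */
	DEFINE_DRM_GEM_FOPS(my_driver_fops);

	static struct drm_driver my_driver = {
		.driver_features = DRIVER_GEM | DRIVER_MODESET | DRIVER_ATOMIC,
		/* As the comment above notes, wire the fops up here. */
		.fops = &my_driver_fops,
		.name = "my-driver",
		.desc = "hypothetical example driver",
		.date = "20170726",
		.major = 1,
		.minor = 0,
	};
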

+ 1 - 1
drivers/gpu/drm/drm_framebuffer.c

@@ -817,7 +817,7 @@ retry:
 		plane->old_fb = plane->fb;
 	}
 
-	for_each_connector_in_state(state, conn, conn_state, i) {
+	for_each_new_connector_in_state(state, conn, conn_state, i) {
 		ret = drm_atomic_set_crtc_for_connector(conn_state, NULL);
 
 		if (ret)

+ 6 - 4
drivers/gpu/drm/drm_gem.c

@@ -826,13 +826,15 @@ drm_gem_object_put_unlocked(struct drm_gem_object *obj)
 		return;
 
 	dev = obj->dev;
-	might_lock(&dev->struct_mutex);
 
-	if (dev->driver->gem_free_object_unlocked)
+	if (dev->driver->gem_free_object_unlocked) {
 		kref_put(&obj->refcount, drm_gem_object_free);
-	else if (kref_put_mutex(&obj->refcount, drm_gem_object_free,
+	} else {
+		might_lock(&dev->struct_mutex);
+		if (kref_put_mutex(&obj->refcount, drm_gem_object_free,
 				&dev->struct_mutex))
-		mutex_unlock(&dev->struct_mutex);
+			mutex_unlock(&dev->struct_mutex);
+	}
 }
 EXPORT_SYMBOL(drm_gem_object_put_unlocked);
 

+ 1 - 1
drivers/gpu/drm/drm_gem_cma_helper.c

@@ -177,7 +177,7 @@ drm_gem_cma_create_with_handle(struct drm_file *file_priv,
  * This function frees the backing memory of the CMA GEM object, cleans up the
  * GEM object state and frees the memory used to store the object itself.
  * Drivers using the CMA helpers should set this as their
- * &drm_driver.gem_free_object callback.
+ * &drm_driver.gem_free_object_unlocked callback.
  */
 void drm_gem_cma_free_object(struct drm_gem_object *gem_obj)
 {

+ 10 - 4
drivers/gpu/drm/drm_internal.h

@@ -32,6 +32,7 @@ void drm_lastclose(struct drm_device *dev);
 int drm_irq_by_busid(struct drm_device *dev, void *data,
 		     struct drm_file *file_priv);
 void drm_pci_agp_destroy(struct drm_device *dev);
+int drm_pci_set_busid(struct drm_device *dev, struct drm_master *master);
 
 /* drm_prime.c */
 int drm_prime_handle_to_fd_ioctl(struct drm_device *dev, void *data,
@@ -56,14 +57,19 @@ int drm_gem_name_info(struct seq_file *m, void *data);
 /* drm_vblank.c */
 extern unsigned int drm_timestamp_monotonic;
 void drm_vblank_disable_and_save(struct drm_device *dev, unsigned int pipe);
+void drm_vblank_cleanup(struct drm_device *dev);
+
+/* IOCTLS */
+int drm_wait_vblank_ioctl(struct drm_device *dev, void *data,
+			  struct drm_file *filp);
+int drm_legacy_modeset_ctl_ioctl(struct drm_device *dev, void *data,
+				 struct drm_file *file_priv);
+
+/* drm_irq.c */
 
 /* IOCTLS */
-int drm_wait_vblank(struct drm_device *dev, void *data,
-		    struct drm_file *filp);
 int drm_legacy_irq_control(struct drm_device *dev, void *data,
 			   struct drm_file *file_priv);
-int drm_legacy_modeset_ctl(struct drm_device *dev, void *data,
-			   struct drm_file *file_priv);
 
 /* drm_auth.c */
 int drm_getmagic(struct drm_device *dev, void *data,

+ 1 - 1
drivers/gpu/drm/drm_ioc32.c

@@ -842,7 +842,7 @@ static int compat_drm_wait_vblank(struct file *file, unsigned int cmd,
 	req.request.type = req32.request.type;
 	req.request.sequence = req32.request.sequence;
 	req.request.signal = req32.request.signal;
-	err = drm_ioctl_kernel(file, drm_wait_vblank, &req, DRM_UNLOCKED);
+	err = drm_ioctl_kernel(file, drm_wait_vblank_ioctl, &req, DRM_UNLOCKED);
 	if (err)
 		return err;
 

+ 8 - 5
drivers/gpu/drm/drm_ioctl.c

@@ -143,8 +143,8 @@ static int drm_set_busid(struct drm_device *dev, struct drm_file *file_priv)
 	if (master->unique != NULL)
 		drm_unset_busid(dev, master);
 
-	if (dev->driver->set_busid) {
-		ret = dev->driver->set_busid(dev, master);
+	if (dev->dev && dev_is_pci(dev->dev)) {
+		ret = drm_pci_set_busid(dev, master);
 		if (ret) {
 			drm_unset_busid(dev, master);
 			return ret;
@@ -603,9 +603,9 @@ static const struct drm_ioctl_desc drm_ioctls[] = {
 	DRM_IOCTL_DEF(DRM_IOCTL_SG_ALLOC, drm_legacy_sg_alloc, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
 	DRM_IOCTL_DEF(DRM_IOCTL_SG_FREE, drm_legacy_sg_free, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
 
-	DRM_IOCTL_DEF(DRM_IOCTL_WAIT_VBLANK, drm_wait_vblank, DRM_UNLOCKED),
+	DRM_IOCTL_DEF(DRM_IOCTL_WAIT_VBLANK, drm_wait_vblank_ioctl, DRM_UNLOCKED),
 
-	DRM_IOCTL_DEF(DRM_IOCTL_MODESET_CTL, drm_legacy_modeset_ctl, 0),
+	DRM_IOCTL_DEF(DRM_IOCTL_MODESET_CTL, drm_legacy_modeset_ctl_ioctl, 0),
 
 	DRM_IOCTL_DEF(DRM_IOCTL_UPDATE_DRAW, drm_noop, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),
 
@@ -695,7 +695,7 @@ static const struct drm_ioctl_desc drm_ioctls[] = {
  * 
  * DRM driver private IOCTL must be in the range from DRM_COMMAND_BASE to
  * DRM_COMMAND_END. Finally you need an array of &struct drm_ioctl_desc to wire
- * up the handlers and set the access rights:
+ * up the handlers and set the access rights::
  *
  *     static const struct drm_ioctl_desc my_driver_ioctls[] = {
  *         DRM_IOCTL_DEF_DRV(MY_DRIVER_OPERATION, my_driver_operation,
@@ -704,6 +704,9 @@ static const struct drm_ioctl_desc drm_ioctls[] = {
  *
  * And then assign this to the &drm_driver.ioctls field in your driver
  * structure.
+ *
+ * See the separate chapter on :ref:`file operations<drm_driver_fops>` for how
+ * the driver-specific IOCTLs are wired up.
  */
 
 long drm_ioctl_kernel(struct file *file, drm_ioctl_t *func, void *kdata,
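
To make the driver-private IOCTL comment above concrete, a hedged sketch of the usual wiring; MY_DRIVER_OPERATION, its payload struct and the 0x00 command number are invented for the example:

	#include <drm/drmP.h>

	/* Normally these definitions live in the driver's uapi header. */
	struct drm_my_driver_operation {
		__u32 handle;
		__u32 flags;
	};
	#define DRM_MY_DRIVER_OPERATION		0x00
	#define DRM_IOCTL_MY_DRIVER_OPERATION \
		DRM_IOWR(DRM_COMMAND_BASE + DRM_MY_DRIVER_OPERATION, \
			 struct drm_my_driver_operation)

	static int my_driver_operation(struct drm_device *dev, void *data,
				       struct drm_file *file_priv)
	{
		struct drm_my_driver_operation *args = data;

		/* data is the kernel copy of the userspace payload. */
		if (args->flags)
			return -EINVAL;

		return 0;
	}

	static const struct drm_ioctl_desc my_driver_ioctls[] = {
		DRM_IOCTL_DEF_DRV(MY_DRIVER_OPERATION, my_driver_operation,
				  DRM_AUTH | DRM_RENDER_ALLOW),
	};

	static struct drm_driver my_driver = {
		/* ... */
		.ioctls = my_driver_ioctls,
		.num_ioctls = ARRAY_SIZE(my_driver_ioctls),
	};
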

+ 3 - 3
drivers/gpu/drm/drm_mipi_dsi.c

@@ -165,14 +165,14 @@ of_mipi_dsi_device_add(struct mipi_dsi_host *host, struct device_node *node)
 	u32 reg;
 
 	if (of_modalias_node(node, info.type, sizeof(info.type)) < 0) {
-		dev_err(dev, "modalias failure on %s\n", node->full_name);
+		dev_err(dev, "modalias failure on %pOF\n", node);
 		return ERR_PTR(-EINVAL);
 	}
 
 	ret = of_property_read_u32(node, "reg", &reg);
 	if (ret) {
-		dev_err(dev, "device node %s has no valid reg property: %d\n",
-			node->full_name, ret);
+		dev_err(dev, "device node %pOF has no valid reg property: %d\n",
+			node, ret);
 		return ERR_PTR(-EINVAL);
 	}
 

+ 89 - 2
drivers/gpu/drm/drm_modes.c

@@ -709,8 +709,8 @@ int of_get_drm_display_mode(struct device_node *np,
 	if (bus_flags)
 		drm_bus_flags_from_videomode(&vm, bus_flags);
 
-	pr_debug("%s: got %dx%d display mode from %s\n",
-		of_node_full_name(np), vm.hactive, vm.vactive, np->name);
+	pr_debug("%pOF: got %dx%d display mode from %s\n",
+		np, vm.hactive, vm.vactive, np->name);
 	drm_mode_debug_printmodeline(dmode);
 
 	return 0;
@@ -1083,6 +1083,34 @@ drm_mode_validate_size(const struct drm_display_mode *mode,
 }
 EXPORT_SYMBOL(drm_mode_validate_size);
 
+/**
+ * drm_mode_validate_ycbcr420 - add 'ycbcr420-only' modes only when allowed
+ * @mode: mode to check
+ * @connector: drm connector under action
+ *
+ * This function is a helper which can be used to filter out any YCBCR420-only
+ * mode when the source doesn't support it.
+ *
+ * Returns:
+ * The mode status
+ */
+enum drm_mode_status
+drm_mode_validate_ycbcr420(const struct drm_display_mode *mode,
+			   struct drm_connector *connector)
+{
+	u8 vic = drm_match_cea_mode(mode);
+	enum drm_mode_status status = MODE_OK;
+	struct drm_hdmi_info *hdmi = &connector->display_info.hdmi;
+
+	if (test_bit(vic, hdmi->y420_vdb_modes)) {
+		if (!connector->ycbcr_420_allowed)
+			status = MODE_NO_420;
+	}
+
+	return status;
+}
+EXPORT_SYMBOL(drm_mode_validate_ycbcr420);
+
 #define MODE_STATUS(status) [MODE_ ## status + 3] = #status
 
 static const char * const drm_mode_status_names[] = {
@@ -1122,6 +1150,7 @@ static const char * const drm_mode_status_names[] = {
 	MODE_STATUS(ONE_SIZE),
 	MODE_STATUS(NO_REDUCED),
 	MODE_STATUS(NO_STEREO),
+	MODE_STATUS(NO_420),
 	MODE_STATUS(STALE),
 	MODE_STATUS(BAD),
 	MODE_STATUS(ERROR),
@@ -1576,3 +1605,61 @@ int drm_mode_convert_umode(struct drm_display_mode *out,
 out:
 	return ret;
 }
+
+/**
+ * drm_mode_is_420_only - if a given videomode can only be supported in YCBCR420
+ * output format
+ *
+ * @display: display under action.
+ * @mode: video mode to be tested.
+ *
+ * Returns:
+ * true if the mode can be supported in YCBCR420 format
+ * false if not.
+ */
+bool drm_mode_is_420_only(const struct drm_display_info *display,
+			  const struct drm_display_mode *mode)
+{
+	u8 vic = drm_match_cea_mode(mode);
+
+	return test_bit(vic, display->hdmi.y420_vdb_modes);
+}
+EXPORT_SYMBOL(drm_mode_is_420_only);
+
+/**
+ * drm_mode_is_420_also - if a given videomode can be supported in YCBCR420
+ * output format also (along with RGB/YCBCR444/422)
+ *
+ * @display: display under action.
+ * @mode: video mode to be tested.
+ *
+ * Returns:
+ * true if the mode can also be supported in YCBCR420 format
+ * false if not.
+ */
+bool drm_mode_is_420_also(const struct drm_display_info *display,
+			  const struct drm_display_mode *mode)
+{
+	u8 vic = drm_match_cea_mode(mode);
+
+	return test_bit(vic, display->hdmi.y420_cmdb_modes);
+}
+EXPORT_SYMBOL(drm_mode_is_420_also);
+/**
+ * drm_mode_is_420 - if a given videomode can be supported in YCBCR420
+ * output format
+ *
+ * @display: display under action.
+ * @mode: video mode to be tested.
+ *
+ * Returns:
+ * true if the mode can be supported in YCBCR420 format
+ * false if not.
+ */
+bool drm_mode_is_420(const struct drm_display_info *display,
+		     const struct drm_display_mode *mode)
+{
+	return drm_mode_is_420_only(display, mode) ||
+		drm_mode_is_420_also(display, mode);
+}
+EXPORT_SYMBOL(drm_mode_is_420);
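
Together with the drm_probe_helper.c hunk further down (which calls drm_mode_validate_ycbcr420() from the probe loop), the intended driver usage looks roughly like the sketch below; the my_hdmi_* helpers are hypothetical and only show where the new predicates slot in:

	#include <drm/drm_connector.h>
	#include <drm/drm_modes.h>

	/* Opt in at connector init time; otherwise the probe helpers prune
	 * YCBCR420-only modes via drm_mode_validate_ycbcr420(). */
	static void my_hdmi_connector_init(struct drm_connector *connector)
	{
		connector->ycbcr_420_allowed = true;
	}

	/* At modeset time, decide whether the pipe must output YCBCR420. */
	static bool my_hdmi_mode_needs_ycbcr420(struct drm_connector *connector,
						const struct drm_display_mode *mode)
	{
		const struct drm_display_info *info = &connector->display_info;

		/* Modes advertised only in the Y420VDB have no RGB/4:4:4 option. */
		if (drm_mode_is_420_only(info, mode))
			return true;

		/*
		 * drm_mode_is_420_also() modes can run either way; a real
		 * driver would pick based on link bandwidth or policy.
		 */
		return false;
	}
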

+ 10 - 2
drivers/gpu/drm/drm_modeset_lock.c

@@ -52,7 +52,12 @@
  *     drm_modeset_drop_locks(&ctx);
  *     drm_modeset_acquire_fini(&ctx);
  *
- * On top of of these per-object locks using &ww_mutex there's also an overall
+ * If all that is needed is a single modeset lock, then the &struct
+ * drm_modeset_acquire_ctx is not needed and the locking can be simplified
+ * by passing a NULL instead of ctx in the drm_modeset_lock()
+ * call and, when done, by calling drm_modeset_unlock().
+ *
+ * On top of these per-object locks using &ww_mutex there's also an overall
  * &drm_mode_config.mutex, for protecting everything else. Mostly this means
  * probe state of connectors, and preventing hotplug add/removal of connectors.
  *
@@ -313,11 +318,14 @@ EXPORT_SYMBOL(drm_modeset_lock_init);
  * @lock: lock to take
  * @ctx: acquire ctx
  *
- * If ctx is not NULL, then its ww acquire context is used and the
+ * If @ctx is not NULL, then its ww acquire context is used and the
  * lock will be tracked by the context and can be released by calling
  * drm_modeset_drop_locks().  If -EDEADLK is returned, this means a
  * deadlock scenario has been detected and it is an error to attempt
  * to take any more locks without first calling drm_modeset_backoff().
+ *
+ * If @ctx is NULL then the function call behaves like a normal,
+ * non-nesting mutex_lock() call.
  */
 int drm_modeset_lock(struct drm_modeset_lock *lock,
 		struct drm_modeset_acquire_ctx *ctx)
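
A tiny sketch of the simplified single-lock pattern the new paragraph describes, using a CRTC's per-object lock purely as an example:

	#include <drm/drm_crtc.h>
	#include <drm/drm_modeset_lock.h>

	static void my_touch_one_crtc(struct drm_crtc *crtc)
	{
		/* With a NULL acquire context this behaves like a plain
		 * mutex_lock(); no ctx setup or -EDEADLK backoff needed. */
		drm_modeset_lock(&crtc->mutex, NULL);

		/* ... inspect or update state protected by this single lock ... */

		drm_modeset_unlock(&crtc->mutex);
	}
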

+ 2 - 2
drivers/gpu/drm/drm_of.c

@@ -160,8 +160,8 @@ int drm_of_component_probe(struct device *dev,
 				of_node_put(remote);
 				continue;
 			} else if (!of_device_is_available(remote->parent)) {
-				dev_warn(dev, "parent device of %s is not available\n",
-					 remote->full_name);
+				dev_warn(dev, "parent device of %pOF is not available\n",
+					 remote);
 				of_node_put(remote);
 				continue;
 			}

+ 12 - 28
drivers/gpu/drm/drm_pci.c

@@ -149,7 +149,6 @@ int drm_pci_set_busid(struct drm_device *dev, struct drm_master *master)
 	master->unique_len = strlen(master->unique);
 	return 0;
 }
-EXPORT_SYMBOL(drm_pci_set_busid);
 
 static int drm_pci_irq_by_busid(struct drm_device *dev, struct drm_irq_busid *p)
 {
@@ -281,20 +280,15 @@ err_free:
 EXPORT_SYMBOL(drm_get_pci_dev);
 
 /**
- * drm_pci_init - Register matching PCI devices with the DRM subsystem
+ * drm_legacy_pci_init - shadow-attach a legacy DRM PCI driver
  * @driver: DRM device driver
  * @pdriver: PCI device driver
  *
- * Initializes a drm_device structures, registering the stubs and initializing
- * the AGP device.
- *
- * NOTE: This function is deprecated. Modern modesetting drm drivers should use
- * pci_register_driver() directly, this function only provides shadow-binding
- * support for old legacy drivers on top of that core pci function.
+ * This is only used by legacy dri1 drivers and deprecated.
  *
  * Return: 0 on success or a negative error code on failure.
  */
-int drm_pci_init(struct drm_driver *driver, struct pci_driver *pdriver)
+int drm_legacy_pci_init(struct drm_driver *driver, struct pci_driver *pdriver)
 {
 	struct pci_dev *pdev = NULL;
 	const struct pci_device_id *pid;
@@ -302,8 +296,8 @@ int drm_pci_init(struct drm_driver *driver, struct pci_driver *pdriver)
 
 	DRM_DEBUG("\n");
 
-	if (!(driver->driver_features & DRIVER_LEGACY))
-		return pci_register_driver(pdriver);
+	if (WARN_ON(!(driver->driver_features & DRIVER_LEGACY)))
+		return -EINVAL;
 
 	/* If not using KMS, fall back to stealth mode manual scanning. */
 	INIT_LIST_HEAD(&driver->legacy_dev_list);
@@ -330,6 +324,7 @@ int drm_pci_init(struct drm_driver *driver, struct pci_driver *pdriver)
 	}
 	return 0;
 }
+EXPORT_SYMBOL(drm_legacy_pci_init);
 
 int drm_pcie_get_speed_cap_mask(struct drm_device *dev, u32 *mask)
 {
@@ -391,11 +386,6 @@ EXPORT_SYMBOL(drm_pcie_get_max_link_width);
 
 #else
 
-int drm_pci_init(struct drm_driver *driver, struct pci_driver *pdriver)
-{
-	return -1;
-}
-
 void drm_pci_agp_destroy(struct drm_device *dev) {}
 
 int drm_irq_by_busid(struct drm_device *dev, void *data,
@@ -405,27 +395,21 @@ int drm_irq_by_busid(struct drm_device *dev, void *data,
 }
 #endif
 
-EXPORT_SYMBOL(drm_pci_init);
-
 /**
- * drm_pci_exit - Unregister matching PCI devices from the DRM subsystem
+ * drm_legacy_pci_exit - unregister shadow-attach legacy DRM driver
  * @driver: DRM device driver
  * @pdriver: PCI device driver
  *
- * Unregisters one or more devices matched by a PCI driver from the DRM
- * subsystem.
- *
- * NOTE: This function is deprecated. Modern modesetting drm drivers should use
- * pci_unregister_driver() directly, this function only provides shadow-binding
- * support for old legacy drivers on top of that core pci function.
+ * Unregister a DRM driver shadow-attached through drm_legacy_pci_init(). This
+ * is deprecated and only used by dri1 drivers.
  */
-void drm_pci_exit(struct drm_driver *driver, struct pci_driver *pdriver)
+void drm_legacy_pci_exit(struct drm_driver *driver, struct pci_driver *pdriver)
 {
 	struct drm_device *dev, *tmp;
 	DRM_DEBUG("\n");
 
 	if (!(driver->driver_features & DRIVER_LEGACY)) {
-		pci_unregister_driver(pdriver);
+		WARN_ON(1);
 	} else {
 		list_for_each_entry_safe(dev, tmp, &driver->legacy_dev_list,
 					 legacy_dev_list) {
@@ -435,4 +419,4 @@ void drm_pci_exit(struct drm_driver *driver, struct pci_driver *pdriver)
 	}
 	DRM_INFO("Module unloaded\n");
 }
-EXPORT_SYMBOL(drm_pci_exit);
+EXPORT_SYMBOL(drm_legacy_pci_exit);

+ 4 - 0
drivers/gpu/drm/drm_probe_helper.c

@@ -528,6 +528,10 @@ retry:
 		if (mode->status == MODE_OK)
 			mode->status = drm_mode_validate_pipeline(mode,
 								  connector);
+
+		if (mode->status == MODE_OK)
+			mode->status = drm_mode_validate_ycbcr420(mode,
+								  connector);
 	}
 
 prune:

+ 23 - 0
drivers/gpu/drm/drm_property.c

@@ -709,6 +709,29 @@ err_created:
 }
 EXPORT_SYMBOL(drm_property_replace_global_blob);
 
+/**
+ * drm_property_replace_blob - replace a blob property
+ * @blob: a pointer to the member blob to be replaced
+ * @new_blob: the new blob to replace with
+ *
+ * Return: true if the blob was in fact replaced.
+ */
+bool drm_property_replace_blob(struct drm_property_blob **blob,
+			       struct drm_property_blob *new_blob)
+{
+	struct drm_property_blob *old_blob = *blob;
+
+	if (old_blob == new_blob)
+		return false;
+
+	drm_property_blob_put(old_blob);
+	if (new_blob)
+		drm_property_blob_get(new_blob);
+	*blob = new_blob;
+	return true;
+}
+EXPORT_SYMBOL(drm_property_replace_blob);
+
 int drm_mode_getblob_ioctl(struct drm_device *dev,
 			   void *data, struct drm_file *file_priv)
 {
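
drm_property_replace_blob() is aimed at atomic drivers juggling blob pointers in their private state; a hedged sketch of the kind of plumbing it shortens (struct my_crtc_state and its lut member are invented for the example):

	#include <drm/drm_crtc.h>
	#include <drm/drm_property.h>

	/* Hypothetical driver-private CRTC state carrying one blob property. */
	struct my_crtc_state {
		struct drm_crtc_state base;
		struct drm_property_blob *lut;
		bool lut_changed;
	};

	static int my_crtc_set_lut_blob(struct drm_device *dev,
					struct my_crtc_state *state,
					uint64_t blob_id)
	{
		struct drm_property_blob *new_blob = NULL;

		if (blob_id) {
			new_blob = drm_property_lookup_blob(dev, blob_id);
			if (!new_blob)
				return -EINVAL;
		}

		/* Grabs its own reference on new_blob, drops the old blob's
		 * reference, and reports whether anything actually changed. */
		state->lut_changed |= drm_property_replace_blob(&state->lut,
								new_blob);

		/* Drop the lookup reference; a NULL blob is handled fine. */
		drm_property_blob_put(new_blob);
		return 0;
	}
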

+ 13 - 7
drivers/gpu/drm/drm_simple_kms_helper.c

@@ -37,10 +37,18 @@ static const struct drm_encoder_funcs drm_simple_kms_encoder_funcs = {
 static int drm_simple_kms_crtc_check(struct drm_crtc *crtc,
 				     struct drm_crtc_state *state)
 {
+	bool has_primary = state->plane_mask &
+			   BIT(drm_plane_index(crtc->primary));
+
+	/* We always want to have an active plane with an active CRTC */
+	if (has_primary != state->enable)
+		return -EINVAL;
+
 	return drm_atomic_add_affected_planes(state->state, crtc);
 }
 
-static void drm_simple_kms_crtc_enable(struct drm_crtc *crtc)
+static void drm_simple_kms_crtc_enable(struct drm_crtc *crtc,
+				       struct drm_crtc_state *old_state)
 {
 	struct drm_simple_display_pipe *pipe;
 
@@ -51,7 +59,8 @@ static void drm_simple_kms_crtc_enable(struct drm_crtc *crtc)
 	pipe->funcs->enable(pipe, crtc->state);
 }
 
-static void drm_simple_kms_crtc_disable(struct drm_crtc *crtc)
+static void drm_simple_kms_crtc_disable(struct drm_crtc *crtc,
+					struct drm_crtc_state *old_state)
 {
 	struct drm_simple_display_pipe *pipe;
 
@@ -64,8 +73,8 @@ static void drm_simple_kms_crtc_disable(struct drm_crtc *crtc)
 
 static const struct drm_crtc_helper_funcs drm_simple_kms_crtc_helper_funcs = {
 	.atomic_check = drm_simple_kms_crtc_check,
-	.disable = drm_simple_kms_crtc_disable,
-	.enable = drm_simple_kms_crtc_enable,
+	.atomic_enable = drm_simple_kms_crtc_enable,
+	.atomic_disable = drm_simple_kms_crtc_disable,
 };
 
 static const struct drm_crtc_funcs drm_simple_kms_crtc_funcs = {
@@ -88,9 +97,6 @@ static int drm_simple_kms_plane_atomic_check(struct drm_plane *plane,
 	pipe = container_of(plane, struct drm_simple_display_pipe, plane);
 	crtc_state = drm_atomic_get_new_crtc_state(plane_state->state,
 						   &pipe->crtc);
-	if (crtc_state->enable != !!plane_state->crtc)
-		return -EINVAL; /* plane must match crtc enable state */
-
 	if (!crtc_state->enable)
 		return 0; /* nothing to check when disabling or disabled */
 

+ 0 - 2
drivers/gpu/drm/drm_syncobj.c

@@ -330,7 +330,6 @@ err_put_fd:
 }
 /**
 * drm_syncobj_open - initializes syncobj file-private structures at devnode open time
- * @dev: drm_device which is being opened by userspace
  * @file_private: drm file-private structure to set up
  *
  * Called at device open time, sets up the structure for handling refcounting
@@ -354,7 +353,6 @@ drm_syncobj_release_handle(int id, void *ptr, void *data)
 
 /**
  * drm_syncobj_release - release file-private sync object resources
- * @dev: drm_device which is being closed by userspace
  * @file_private: drm file-private structure to clean up
  *
  * Called at close time when the filp is going away.

+ 76 - 111
drivers/gpu/drm/drm_vblank.c

@@ -31,6 +31,41 @@
 #include "drm_trace.h"
 #include "drm_internal.h"
 
+/**
+ * DOC: vblank handling
+ *
+ * Vertical blanking plays a major role in graphics rendering. To achieve
+ * tear-free display, users must synchronize page flips and/or rendering to
+ * vertical blanking. The DRM API offers ioctls to perform page flips
+ * synchronized to vertical blanking and wait for vertical blanking.
+ *
+ * The DRM core handles most of the vertical blanking management logic, which
+ * involves filtering out spurious interrupts, keeping race-free blanking
+ * counters, coping with counter wrap-around and resets and keeping use counts.
+ * It relies on the driver to generate vertical blanking interrupts and
+ * optionally provide a hardware vertical blanking counter.
+ *
+ * Drivers must initialize the vertical blanking handling core with a call to
+ * drm_vblank_init(). Minimally, a driver needs to implement
+ * &drm_crtc_funcs.enable_vblank and &drm_crtc_funcs.disable_vblank plus call
+ * drm_crtc_handle_vblank() in its vblank interrupt handler for working vblank
+ * support.
+ *
+ * Vertical blanking interrupts can be enabled by the DRM core or by drivers
+ * themselves (for instance to handle page flipping operations).  The DRM core
+ * maintains a vertical blanking use count to ensure that the interrupts are not
+ * disabled while a user still needs them. To increment the use count, drivers
+ * call drm_crtc_vblank_get() and release the vblank reference again with
+ * drm_crtc_vblank_put(). In between these two calls vblank interrupts are
+ * guaranteed to be enabled.
+ *
+ * On many platforms disabling the vblank interrupt cannot be done in a race-free
+ * manner, see &drm_driver.vblank_disable_immediate and
+ * &drm_driver.max_vblank_count. In that case the vblank core only disables the
+ * vblanks after a timer has expired, which can be configured through the
+ * ``vblankoffdelay`` module parameter.
+ */
+
 /* Retry timestamp calculation up to 3 times to satisfy
  * drm_timestamp_precision before giving up.
  */
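
A minimal sketch of the wiring the new DOC block describes, with every my_* name hypothetical: drm_vblank_init() at load time, the two &drm_crtc_funcs hooks, and drm_crtc_handle_vblank() from the interrupt handler.

	#include <linux/interrupt.h>
	#include <drm/drm_crtc.h>
	#include <drm/drm_vblank.h>

	/* Unmask the hardware vblank interrupt for this pipe. */
	static int my_crtc_enable_vblank(struct drm_crtc *crtc)
	{
		/* write the interrupt-enable register here */
		return 0;
	}

	/* Mask it again; the core ref-counts enable/disable for us. */
	static void my_crtc_disable_vblank(struct drm_crtc *crtc)
	{
		/* clear the interrupt-enable register here */
	}

	static const struct drm_crtc_funcs my_crtc_funcs = {
		/* ... the usual set_config/page_flip/destroy hooks ... */
		.enable_vblank = my_crtc_enable_vblank,
		.disable_vblank = my_crtc_disable_vblank,
	};

	/* Feed every vblank interrupt into the core. */
	static irqreturn_t my_irq_handler(int irq, void *arg)
	{
		struct drm_crtc *crtc = arg;

		drm_crtc_handle_vblank(crtc);
		return IRQ_HANDLED;
	}

	/* One call at load time sets up the bookkeeping for all pipes. */
	static int my_init_vblank(struct drm_device *dev, unsigned int num_crtcs)
	{
		return drm_vblank_init(dev, num_crtcs);
	}
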
@@ -259,16 +294,17 @@ static u32 drm_vblank_count(struct drm_device *dev, unsigned int pipe)
 }
 
 /**
- * drm_accurate_vblank_count - retrieve the master vblank counter
+ * drm_crtc_accurate_vblank_count - retrieve the master vblank counter
  * @crtc: which counter to retrieve
  *
- * This function is similar to @drm_crtc_vblank_count but this
- * function interpolates to handle a race with vblank irq's.
+ * This function is similar to drm_crtc_vblank_count() but this function
+ * interpolates to handle a race with vblank interrupts using the high precision
+ * timestamping support.
  *
- * This is mostly useful for hardware that can obtain the scanout
- * position, but doesn't have a frame counter.
+ * This is mostly useful for hardware that can obtain the scanout position, but
+ * doesn't have a hardware frame counter.
  */
-u32 drm_accurate_vblank_count(struct drm_crtc *crtc)
+u32 drm_crtc_accurate_vblank_count(struct drm_crtc *crtc)
 {
 	struct drm_device *dev = crtc->dev;
 	unsigned int pipe = drm_crtc_index(crtc);
@@ -287,7 +323,7 @@ u32 drm_accurate_vblank_count(struct drm_crtc *crtc)
 
 	return vblank;
 }
-EXPORT_SYMBOL(drm_accurate_vblank_count);
+EXPORT_SYMBOL(drm_crtc_accurate_vblank_count);
 
 static void __disable_vblank(struct drm_device *dev, unsigned int pipe)
 {
@@ -358,15 +394,6 @@ static void vblank_disable_fn(unsigned long arg)
 	spin_unlock_irqrestore(&dev->vbl_lock, irqflags);
 }
 
-/**
- * drm_vblank_cleanup - cleanup vblank support
- * @dev: DRM device
- *
- * This function cleans up any resources allocated in drm_vblank_init.
- *
- * Drivers which don't use drm_irq_install() need to set &drm_device.irq_enabled
- * themselves, to signal to the DRM core that vblank interrupts are enabled.
- */
 void drm_vblank_cleanup(struct drm_device *dev)
 {
 	unsigned int pipe;
@@ -388,7 +415,6 @@ void drm_vblank_cleanup(struct drm_device *dev)
 
 	dev->num_crtcs = 0;
 }
-EXPORT_SYMBOL(drm_vblank_cleanup);
 
 /**
  * drm_vblank_init - initialize vblank support
@@ -396,6 +422,8 @@ EXPORT_SYMBOL(drm_vblank_cleanup);
  * @num_crtcs: number of CRTCs supported by @dev
  *
  * This function initializes vblank support for @num_crtcs display pipelines.
+ * Cleanup is handled by the DRM core, or through calling drm_dev_fini() for
+ * drivers with a &drm_driver.release callback.
  *
  * Returns:
  * Zero on success or a negative error code on failure.
@@ -468,11 +496,11 @@ EXPORT_SYMBOL(drm_crtc_vblank_waitqueue);
  * @crtc: drm_crtc whose timestamp constants should be updated.
  * @mode: display mode containing the scanout timings
  *
- * Calculate and store various constants which are later
- * needed by vblank and swap-completion timestamping, e.g,
- * by drm_calc_vbltimestamp_from_scanoutpos(). They are
- * derived from CRTC's true scanout timing, so they take
- * things like panel scaling or other adjustments into account.
+ * Calculate and store various constants which are later needed by vblank and
+ * swap-completion timestamping, e.g, by
+ * drm_calc_vbltimestamp_from_scanoutpos(). They are derived from CRTC's true
+ * scanout timing, so they take things like panel scaling or other adjustments
+ * into account.
  */
 void drm_calc_timestamping_constants(struct drm_crtc *crtc,
 				     const struct drm_display_mode *mode)
@@ -535,25 +563,14 @@ EXPORT_SYMBOL(drm_calc_timestamping_constants);
  *     if flag is set.
  *
  * Implements calculation of exact vblank timestamps from given drm_display_mode
- * timings and current video scanout position of a CRTC. This can be called from
- * within get_vblank_timestamp() implementation of a kms driver to implement the
- * actual timestamping.
- *
- * Should return timestamps conforming to the OML_sync_control OpenML
- * extension specification. The timestamp corresponds to the end of
- * the vblank interval, aka start of scanout of topmost-leftmost display
- * pixel in the following video frame.
+ * timings and current video scanout position of a CRTC. This can be directly
+ * used as the &drm_driver.get_vblank_timestamp implementation of a kms driver
+ * if &drm_driver.get_scanout_position is implemented.
  *
- * Requires support for optional dev->driver->get_scanout_position()
- * in kms driver, plus a bit of setup code to provide a drm_display_mode
- * that corresponds to the true scanout timing.
- *
- * The current implementation only handles standard video modes. It
- * returns as no operation if a doublescan or interlaced video mode is
- * active. Higher level code is expected to handle this.
- *
- * This function can be used to implement the &drm_driver.get_vblank_timestamp
- * directly, if the driver implements the &drm_driver.get_scanout_position hook.
+ * The current implementation only handles standard video modes. For double scan
+ * and interlaced modes the driver is supposed to adjust the hardware mode
+ * (taken from &drm_crtc_state.adjusted_mode for atomic modeset drivers) to
+ * match the scanout position reported.
  *
  * Note that atomic drivers must call drm_calc_timestamping_constants() before
  * enabling a CRTC. The atomic helpers already take care of that in
@@ -738,7 +755,9 @@ drm_get_last_vbltimestamp(struct drm_device *dev, unsigned int pipe,
  *
  * Fetches the "cooked" vblank count value that represents the number of
  * vblank events since the system was booted, including lost events due to
- * modesetting activity.
+ * modesetting activity. Note that this counter isn't correct against a racing
+ * vblank interrupt (since it only reports the software vblank counter), see
+ * drm_crtc_accurate_vblank_count() for such use-cases.
  *
  * Returns:
  * The software vblank counter.
@@ -749,20 +768,6 @@ u32 drm_crtc_vblank_count(struct drm_crtc *crtc)
 }
 EXPORT_SYMBOL(drm_crtc_vblank_count);
 
-/**
- * drm_vblank_count_and_time - retrieve "cooked" vblank counter value and the
- *     system timestamp corresponding to that vblank counter value.
- * @dev: DRM device
- * @pipe: index of CRTC whose counter to retrieve
- * @vblanktime: Pointer to struct timeval to receive the vblank timestamp.
- *
- * Fetches the "cooked" vblank count value that represents the number of
- * vblank events since the system was booted, including lost events due to
- * modesetting activity. Returns corresponding system timestamp of the time
- * of the vblank interval that corresponds to the current vblank counter value.
- *
- * This is the legacy version of drm_crtc_vblank_count_and_time().
- */
 static u32 drm_vblank_count_and_time(struct drm_device *dev, unsigned int pipe,
 				     struct timeval *vblanktime)
 {
@@ -831,7 +836,7 @@ static void send_vblank_event(struct drm_device *dev,
  * NOTE: Drivers using this to send out the &drm_crtc_state.event as part of an
  * atomic commit must ensure that the next vblank happens at exactly the same
  * time as the atomic commit is committed to the hardware. This function itself
- * does **not** protect again the next vblank interrupt racing with either this
+ * does **not** protect against the next vblank interrupt racing with either this
  * function call or the atomic commit operation. A possible sequence could be:
  *
  * 1. Driver commits new hardware state into vblank-synchronized registers.
@@ -852,8 +857,8 @@ static void send_vblank_event(struct drm_device *dev,
  * handler by calling drm_crtc_send_vblank_event() and make sure that there's no
  * possible race with the hardware committing the atomic update.
  *
- * Caller must hold event lock. Caller must also hold a vblank reference for
- * the event @e, which will be dropped when the next vblank arrives.
+ * Caller must hold a vblank reference for the event @e, which will be dropped
+ * when the next vblank arrives.
  */
 void drm_crtc_arm_vblank_event(struct drm_crtc *crtc,
 			       struct drm_pending_vblank_event *e)
@@ -913,14 +918,6 @@ static int __enable_vblank(struct drm_device *dev, unsigned int pipe)
 	return dev->driver->enable_vblank(dev, pipe);
 }
 
-/**
- * drm_vblank_enable - enable the vblank interrupt on a CRTC
- * @dev: DRM device
- * @pipe: CRTC index
- *
- * Returns:
- * Zero on success or a negative error code on failure.
- */
 static int drm_vblank_enable(struct drm_device *dev, unsigned int pipe)
 {
 	struct drm_vblank_crtc *vblank = &dev->vblank[pipe];
@@ -958,19 +955,6 @@ static int drm_vblank_enable(struct drm_device *dev, unsigned int pipe)
 	return ret;
 }
 
-/**
- * drm_vblank_get - get a reference count on vblank events
- * @dev: DRM device
- * @pipe: index of CRTC to own
- *
- * Acquire a reference count on vblank events to avoid having them disabled
- * while in use.
- *
- * This is the legacy version of drm_crtc_vblank_get().
- *
- * Returns:
- * Zero on success or a negative error code on failure.
- */
 static int drm_vblank_get(struct drm_device *dev, unsigned int pipe)
 {
 	struct drm_vblank_crtc *vblank = &dev->vblank[pipe];
@@ -1014,16 +998,6 @@ int drm_crtc_vblank_get(struct drm_crtc *crtc)
 }
 EXPORT_SYMBOL(drm_crtc_vblank_get);
 
-/**
- * drm_vblank_put - release ownership of vblank events
- * @dev: DRM device
- * @pipe: index of CRTC to release
- *
- * Release ownership of a given vblank counter, turning off interrupts
- * if possible. Disable interrupts after drm_vblank_offdelay milliseconds.
- *
- * This is the legacy version of drm_crtc_vblank_put().
- */
 static void drm_vblank_put(struct drm_device *dev, unsigned int pipe)
 {
 	struct drm_vblank_crtc *vblank = &dev->vblank[pipe];
@@ -1067,6 +1041,8 @@ EXPORT_SYMBOL(drm_crtc_vblank_put);
  * This waits for one vblank to pass on @pipe, using the irq driver interfaces.
  * It is a failure to call this when the vblank irq for @pipe is disabled, e.g.
  * due to lack of driver support or because the crtc is off.
+ *
+ * This is the legacy version of drm_crtc_wait_one_vblank().
  */
 void drm_wait_one_vblank(struct drm_device *dev, unsigned int pipe)
 {
@@ -1116,7 +1092,7 @@ EXPORT_SYMBOL(drm_crtc_wait_one_vblank);
  * stored so that drm_vblank_on can restore it again.
  *
  * Drivers must use this function when the hardware vblank counter can get
- * reset, e.g. when suspending.
+ * reset, e.g. when suspending or disabling the @crtc in general.
  */
 void drm_crtc_vblank_off(struct drm_crtc *crtc)
 {
@@ -1184,6 +1160,8 @@ EXPORT_SYMBOL(drm_crtc_vblank_off);
  * drm_crtc_vblank_on() functions. The difference compared to
  * drm_crtc_vblank_off() is that this function doesn't save the vblank counter
  * and hence doesn't need to call any driver hooks.
+ *
+ * This is useful for recovering driver state e.g. on driver load, or on resume.
  */
 void drm_crtc_vblank_reset(struct drm_crtc *crtc)
 {
@@ -1212,9 +1190,10 @@ EXPORT_SYMBOL(drm_crtc_vblank_reset);
  * @crtc: CRTC in question
  *
  * This functions restores the vblank interrupt state captured with
- * drm_crtc_vblank_off() again. Note that calls to drm_crtc_vblank_on() and
- * drm_crtc_vblank_off() can be unbalanced and so can also be unconditionally called
- * in driver load code to reflect the current hardware state of the crtc.
+ * drm_crtc_vblank_off() again and is generally called when enabling @crtc. Note
+ * that calls to drm_crtc_vblank_on() and drm_crtc_vblank_off() can be
+ * unbalanced and so can also be unconditionally called in driver load code to
+ * reflect the current hardware state of the crtc.
  */
 void drm_crtc_vblank_on(struct drm_crtc *crtc)
 {
@@ -1299,8 +1278,8 @@ static void drm_legacy_vblank_post_modeset(struct drm_device *dev,
 	}
 }
 
-int drm_legacy_modeset_ctl(struct drm_device *dev, void *data,
-			   struct drm_file *file_priv)
+int drm_legacy_modeset_ctl_ioctl(struct drm_device *dev, void *data,
+				 struct drm_file *file_priv)
 {
 	struct drm_modeset_ctl *modeset = data;
 	unsigned int pipe;
@@ -1419,22 +1398,8 @@ static bool drm_wait_vblank_is_query(union drm_wait_vblank *vblwait)
 					  _DRM_VBLANK_NEXTONMISS));
 }
 
-/*
- * Wait for VBLANK.
- *
- * \param inode device inode.
- * \param file_priv DRM file private.
- * \param cmd command.
- * \param data user argument, pointing to a drm_wait_vblank structure.
- * \return zero on success or a negative number on failure.
- *
- * This function enables the vblank interrupt on the pipe requested, then
- * sleeps waiting for the requested sequence number to occur, and drops
- * the vblank interrupt refcount afterwards. (vblank IRQ disable follows that
- * after a timeout with no further vblank waits scheduled).
- */
-int drm_wait_vblank(struct drm_device *dev, void *data,
-		    struct drm_file *file_priv)
+int drm_wait_vblank_ioctl(struct drm_device *dev, void *data,
+			  struct drm_file *file_priv)
 {
 	struct drm_vblank_crtc *vblank;
 	union drm_wait_vblank *vblwait = data;

+ 6 - 4
drivers/gpu/drm/exynos/exynos_drm_crtc.c

@@ -21,7 +21,8 @@
 #include "exynos_drm_drv.h"
 #include "exynos_drm_plane.h"
 
-static void exynos_drm_crtc_enable(struct drm_crtc *crtc)
+static void exynos_drm_crtc_atomic_enable(struct drm_crtc *crtc,
+					  struct drm_crtc_state *old_state)
 {
 	struct exynos_drm_crtc *exynos_crtc = to_exynos_crtc(crtc);
 
@@ -31,7 +32,8 @@ static void exynos_drm_crtc_enable(struct drm_crtc *crtc)
 	drm_crtc_vblank_on(crtc);
 }
 
-static void exynos_drm_crtc_disable(struct drm_crtc *crtc)
+static void exynos_drm_crtc_atomic_disable(struct drm_crtc *crtc,
+					   struct drm_crtc_state *old_state)
 {
 	struct exynos_drm_crtc *exynos_crtc = to_exynos_crtc(crtc);
 
@@ -82,11 +84,11 @@ static void exynos_crtc_atomic_flush(struct drm_crtc *crtc,
 }
 
 static const struct drm_crtc_helper_funcs exynos_crtc_helper_funcs = {
-	.enable		= exynos_drm_crtc_enable,
-	.disable	= exynos_drm_crtc_disable,
 	.atomic_check	= exynos_crtc_atomic_check,
 	.atomic_begin	= exynos_crtc_atomic_begin,
 	.atomic_flush	= exynos_crtc_atomic_flush,
+	.atomic_enable	= exynos_drm_crtc_atomic_enable,
+	.atomic_disable	= exynos_drm_crtc_atomic_disable,
 };
 
 void exynos_crtc_handle_event(struct exynos_drm_crtc *exynos_crtc)

+ 4 - 2
drivers/gpu/drm/exynos/exynos_drm_drv.c

@@ -395,8 +395,9 @@ static int exynos_drm_bind(struct device *dev)
 	/* init kms poll for handling hpd */
 	drm_kms_helper_poll_init(drm);
 
-	/* force connectors detection */
-	drm_helper_hpd_irq_event(drm);
+	ret = exynos_drm_fbdev_init(drm);
+	if (ret)
+		goto err_cleanup_poll;
 
 	/* register the DRM device */
 	ret = drm_dev_register(drm, 0);
@@ -407,6 +408,7 @@ static int exynos_drm_bind(struct device *dev)
 
 err_cleanup_fbdev:
 	exynos_drm_fbdev_fini(drm);
+err_cleanup_poll:
 	drm_kms_helper_poll_fini(drm);
 	exynos_drm_device_subdrv_remove(drm);
 err_unbind_all:

+ 1 - 2
drivers/gpu/drm/exynos/exynos_drm_dsi.c

@@ -1618,8 +1618,7 @@ static int exynos_dsi_of_read_u32(const struct device_node *np,
 	int ret = of_property_read_u32(np, propname, out_value);
 
 	if (ret < 0)
-		pr_err("%s: failed to get '%s' property\n", np->full_name,
-		       propname);
+		pr_err("%pOF: failed to get '%s' property\n", np, propname);
 
 	return ret;
 }

+ 1 - 26
drivers/gpu/drm/exynos/exynos_drm_fb.c

@@ -187,33 +187,8 @@ dma_addr_t exynos_drm_fb_dma_addr(struct drm_framebuffer *fb, int index)
 	return exynos_fb->dma_addr[index];
 }
 
-static void exynos_drm_atomic_commit_tail(struct drm_atomic_state *state)
-{
-	struct drm_device *dev = state->dev;
-
-	drm_atomic_helper_commit_modeset_disables(dev, state);
-
-	drm_atomic_helper_commit_modeset_enables(dev, state);
-
-	/*
-	 * Exynos can't update planes with CRTCs and encoders disabled,
-	 * its updates routines, specially for FIMD, requires the clocks
-	 * to be enabled. So it is necessary to handle the modeset operations
-	 * *before* the commit_planes() step, this way it will always
-	 * have the relevant clocks enabled to perform the update.
-	 */
-	drm_atomic_helper_commit_planes(dev, state,
-					DRM_PLANE_COMMIT_ACTIVE_ONLY);
-
-	drm_atomic_helper_commit_hw_done(state);
-
-	drm_atomic_helper_wait_for_vblanks(dev, state);
-
-	drm_atomic_helper_cleanup_planes(dev, state);
-}
-
 static struct drm_mode_config_helper_funcs exynos_drm_mode_config_helpers = {
-	.atomic_commit_tail = exynos_drm_atomic_commit_tail,
+	.atomic_commit_tail = drm_atomic_helper_commit_tail_rpm,
 };
 
 static const struct drm_mode_config_funcs exynos_drm_mode_config_funcs = {

+ 1 - 25
drivers/gpu/drm/exynos/exynos_drm_fbdev.c

@@ -183,24 +183,6 @@ static const struct drm_fb_helper_funcs exynos_drm_fb_helper_funcs = {
 	.fb_probe =	exynos_drm_fbdev_create,
 };
 
-static bool exynos_drm_fbdev_is_anything_connected(struct drm_device *dev)
-{
-	struct drm_connector *connector;
-	bool ret = false;
-
-	mutex_lock(&dev->mode_config.mutex);
-	list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
-		if (connector->status != connector_status_connected)
-			continue;
-
-		ret = true;
-		break;
-	}
-	mutex_unlock(&dev->mode_config.mutex);
-
-	return ret;
-}
-
 int exynos_drm_fbdev_init(struct drm_device *dev)
 {
 	struct exynos_drm_fbdev *fbdev;
@@ -211,9 +193,6 @@ int exynos_drm_fbdev_init(struct drm_device *dev)
 	if (!dev->mode_config.num_crtc || !dev->mode_config.num_connector)
 		return 0;
 
-	if (!exynos_drm_fbdev_is_anything_connected(dev))
-		return 0;
-
 	fbdev = kzalloc(sizeof(*fbdev), GFP_KERNEL);
 	if (!fbdev)
 		return -ENOMEM;
@@ -304,8 +283,5 @@ void exynos_drm_output_poll_changed(struct drm_device *dev)
 	struct exynos_drm_private *private = dev->dev_private;
 	struct drm_fb_helper *fb_helper = private->fb_helper;
 
-	if (fb_helper)
-		drm_fb_helper_hotplug_event(fb_helper);
-	else
-		exynos_drm_fbdev_init(dev);
+	drm_fb_helper_hotplug_event(fb_helper);
 }

+ 1 - 1
drivers/gpu/drm/exynos/exynos_hdmi.c

@@ -784,7 +784,7 @@ static void hdmi_reg_infoframes(struct hdmi_context *hdata)
 	}
 
 	ret = drm_hdmi_avi_infoframe_from_display_mode(&frm.avi,
-			&hdata->current_mode);
+			&hdata->current_mode, false);
 	if (!ret)
 		ret = hdmi_avi_infoframe_pack(&frm.avi, buf, sizeof(buf));
 	if (ret > 0) {

+ 3 - 2
drivers/gpu/drm/fsl-dcu/fsl_dcu_drm_crtc.c

@@ -63,7 +63,8 @@ static void fsl_dcu_drm_crtc_atomic_disable(struct drm_crtc *crtc,
 	clk_disable_unprepare(fsl_dev->pix_clk);
 }
 
-static void fsl_dcu_drm_crtc_enable(struct drm_crtc *crtc)
+static void fsl_dcu_drm_crtc_atomic_enable(struct drm_crtc *crtc,
+					   struct drm_crtc_state *old_state)
 {
 	struct drm_device *dev = crtc->dev;
 	struct fsl_dcu_drm_device *fsl_dev = dev->dev_private;
@@ -133,7 +134,7 @@ static void fsl_dcu_drm_crtc_mode_set_nofb(struct drm_crtc *crtc)
 static const struct drm_crtc_helper_funcs fsl_dcu_drm_crtc_helper_funcs = {
 	.atomic_disable = fsl_dcu_drm_crtc_atomic_disable,
 	.atomic_flush = fsl_dcu_drm_crtc_atomic_flush,
-	.enable = fsl_dcu_drm_crtc_enable,
+	.atomic_enable = fsl_dcu_drm_crtc_atomic_enable,
 	.mode_set_nofb = fsl_dcu_drm_crtc_mode_set_nofb,
 };
 

+ 0 - 4
drivers/gpu/drm/gma500/mdfld_intel_display.c

@@ -737,11 +737,7 @@ static int mdfld_crtc_mode_set(struct drm_crtc *crtc,
 					sizeof(struct drm_display_mode));
 
 	list_for_each_entry(connector, &mode_config->connector_list, head) {
-		if (!connector)
-			continue;
-
 		encoder = connector->encoder;
-
 		if (!encoder)
 			continue;
 

+ 2 - 3
drivers/gpu/drm/gma500/psb_drv.c

@@ -480,7 +480,6 @@ static struct drm_driver driver = {
 	.load = psb_driver_load,
 	.unload = psb_driver_unload,
 	.lastclose = psb_driver_lastclose,
-	.set_busid = drm_pci_set_busid,
 
 	.num_ioctls = ARRAY_SIZE(psb_ioctls),
 	.irq_preinstall = psb_irq_preinstall,
@@ -517,12 +516,12 @@ static struct pci_driver psb_pci_driver = {
 
 static int __init psb_init(void)
 {
-	return drm_pci_init(&driver, &psb_pci_driver);
+	return pci_register_driver(&psb_pci_driver);
 }
 
 static void __exit psb_exit(void)
 {
-	drm_pci_exit(&driver, &psb_pci_driver);
+	pci_unregister_driver(&psb_pci_driver);
 }
 
 late_initcall(psb_init);

+ 6 - 4
drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_de.c

@@ -192,7 +192,8 @@ static struct drm_plane *hibmc_plane_init(struct hibmc_drm_private *priv)
 	return plane;
 }
 
-static void hibmc_crtc_enable(struct drm_crtc *crtc)
+static void hibmc_crtc_atomic_enable(struct drm_crtc *crtc,
+				     struct drm_crtc_state *old_state)
 {
 	unsigned int reg;
 	struct hibmc_drm_private *priv = crtc->dev->dev_private;
@@ -209,7 +210,8 @@ static void hibmc_crtc_enable(struct drm_crtc *crtc)
 	drm_crtc_vblank_on(crtc);
 }
 
-static void hibmc_crtc_disable(struct drm_crtc *crtc)
+static void hibmc_crtc_atomic_disable(struct drm_crtc *crtc,
+				      struct drm_crtc_state *old_state)
 {
 	unsigned int reg;
 	struct hibmc_drm_private *priv = crtc->dev->dev_private;
@@ -453,11 +455,11 @@ static const struct drm_crtc_funcs hibmc_crtc_funcs = {
 };
 
 static const struct drm_crtc_helper_funcs hibmc_crtc_helper_funcs = {
-	.enable		= hibmc_crtc_enable,
-	.disable	= hibmc_crtc_disable,
 	.mode_set_nofb	= hibmc_crtc_mode_set_nofb,
 	.atomic_begin	= hibmc_crtc_atomic_begin,
 	.atomic_flush	= hibmc_crtc_atomic_flush,
+	.atomic_enable	= hibmc_crtc_atomic_enable,
+	.atomic_disable	= hibmc_crtc_atomic_disable,
 };
 
 int hibmc_de_init(struct hibmc_drm_private *priv)

+ 2 - 1
drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_drv.c

@@ -276,11 +276,12 @@ static int hibmc_unload(struct drm_device *dev)
 
 	hibmc_fbdev_fini(priv);
 
+	drm_atomic_helper_shutdown(dev);
+
 	if (dev->irq_enabled)
 		drm_irq_uninstall(dev);
 	if (priv->msi_enabled)
 		pci_disable_msi(dev->pdev);
-	drm_vblank_cleanup(dev);
 
 	hibmc_kms_fini(priv);
 	hibmc_mm_fini(priv);

+ 0 - 1
drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_fbdev.c

@@ -131,7 +131,6 @@ static int hibmc_drm_fb_create(struct drm_fb_helper *helper,
 
 	strcpy(info->fix.id, "hibmcdrmfb");
 
-	info->flags = FBINFO_DEFAULT;
 	info->fbops = &hibmc_drm_fb_ops;
 
 	drm_fb_helper_fill_fix(info, hi_fbdev->fb->fb.pitches[0],

+ 7 - 6
drivers/gpu/drm/hisilicon/kirin/kirin_drm_ade.c

@@ -467,7 +467,8 @@ static void ade_dump_regs(void __iomem *base)
 static void ade_dump_regs(void __iomem *base) { }
 #endif
 
-static void ade_crtc_enable(struct drm_crtc *crtc)
+static void ade_crtc_atomic_enable(struct drm_crtc *crtc,
+				   struct drm_crtc_state *old_state)
 {
 	struct ade_crtc *acrtc = to_ade_crtc(crtc);
 	struct ade_hw_ctx *ctx = acrtc->ctx;
@@ -489,7 +490,8 @@ static void ade_crtc_enable(struct drm_crtc *crtc)
 	acrtc->enable = true;
 }
 
-static void ade_crtc_disable(struct drm_crtc *crtc)
+static void ade_crtc_atomic_disable(struct drm_crtc *crtc,
+				    struct drm_crtc_state *old_state)
 {
 	struct ade_crtc *acrtc = to_ade_crtc(crtc);
 	struct ade_hw_ctx *ctx = acrtc->ctx;
@@ -553,11 +555,11 @@ static void ade_crtc_atomic_flush(struct drm_crtc *crtc,
 }
 
 static const struct drm_crtc_helper_funcs ade_crtc_helper_funcs = {
-	.enable		= ade_crtc_enable,
-	.disable	= ade_crtc_disable,
 	.mode_set_nofb	= ade_crtc_mode_set_nofb,
 	.atomic_begin	= ade_crtc_atomic_begin,
 	.atomic_flush	= ade_crtc_atomic_flush,
+	.atomic_enable	= ade_crtc_atomic_enable,
+	.atomic_disable	= ade_crtc_atomic_disable,
 };
 
 static const struct drm_crtc_funcs ade_crtc_funcs = {
@@ -583,8 +585,7 @@ static int ade_crtc_init(struct drm_device *dev, struct drm_crtc *crtc,
 	 */
 	port = of_get_child_by_name(dev->dev->of_node, "port");
 	if (!port) {
-		DRM_ERROR("no port node found in %s\n",
-			  dev->dev->of_node->full_name);
+		DRM_ERROR("no port node found in %pOF\n", dev->dev->of_node);
 		return -EINVAL;
 	}
 	of_node_put(port);

+ 11 - 17
drivers/gpu/drm/hisilicon/kirin/kirin_drm_drv.c

@@ -34,14 +34,12 @@ static int kirin_drm_kms_cleanup(struct drm_device *dev)
 {
 	struct kirin_drm_private *priv = dev->dev_private;
 
-#ifdef CONFIG_DRM_FBDEV_EMULATION
 	if (priv->fbdev) {
 		drm_fbdev_cma_fini(priv->fbdev);
 		priv->fbdev = NULL;
 	}
-#endif
+
 	drm_kms_helper_poll_fini(dev);
-	drm_vblank_cleanup(dev);
 	dc_ops->cleanup(to_platform_device(dev->dev));
 	drm_mode_config_cleanup(dev);
 	devm_kfree(dev->dev, priv);
@@ -50,27 +48,16 @@ static int kirin_drm_kms_cleanup(struct drm_device *dev)
 	return 0;
 }
 
-#ifdef CONFIG_DRM_FBDEV_EMULATION
 static void kirin_fbdev_output_poll_changed(struct drm_device *dev)
 {
 	struct kirin_drm_private *priv = dev->dev_private;
 
-	if (priv->fbdev) {
-		drm_fbdev_cma_hotplug_event(priv->fbdev);
-	} else {
-		priv->fbdev = drm_fbdev_cma_init(dev, 32,
-						 dev->mode_config.num_connector);
-		if (IS_ERR(priv->fbdev))
-			priv->fbdev = NULL;
-	}
+	drm_fbdev_cma_hotplug_event(priv->fbdev);
 }
-#endif
 
 static const struct drm_mode_config_funcs kirin_drm_mode_config_funcs = {
 	.fb_create = drm_fb_cma_create,
-#ifdef CONFIG_DRM_FBDEV_EMULATION
 	.output_poll_changed = kirin_fbdev_output_poll_changed,
-#endif
 	.atomic_check = drm_atomic_helper_check,
 	.atomic_commit = drm_atomic_helper_commit,
 };
@@ -129,11 +116,18 @@ static int kirin_drm_kms_init(struct drm_device *dev)
 	/* init kms poll for handling hpd */
 	drm_kms_helper_poll_init(dev);
 
-	/* force detection after connectors init */
-	(void)drm_helper_hpd_irq_event(dev);
+	priv->fbdev = drm_fbdev_cma_init(dev, 32,
+					 dev->mode_config.num_connector);
 
+	if (IS_ERR(priv->fbdev)) {
+		DRM_ERROR("failed to initialize fbdev.\n");
+		ret = PTR_ERR(priv->fbdev);
+		goto err_cleanup_poll;
+	}
 	return 0;
 
+err_cleanup_poll:
+	drm_kms_helper_poll_fini(dev);
 err_unbind_all:
 	component_unbind_all(dev->dev, dev);
 err_dc_cleanup:

+ 0 - 2
drivers/gpu/drm/hisilicon/kirin/kirin_drm_drv.h

@@ -20,9 +20,7 @@ struct kirin_dc_ops {
 };
 
 struct kirin_drm_private {
-#ifdef CONFIG_DRM_FBDEV_EMULATION
 	struct drm_fbdev_cma *fbdev;
-#endif
 };
 
 extern const struct kirin_dc_ops ade_dc_ops;

+ 1 - 1
drivers/gpu/drm/i2c/tda998x_drv.c

@@ -712,7 +712,7 @@ tda998x_write_avi(struct tda998x_priv *priv, struct drm_display_mode *mode)
 {
 	union hdmi_infoframe frame;
 
-	drm_hdmi_avi_infoframe_from_display_mode(&frame.avi, mode);
+	drm_hdmi_avi_infoframe_from_display_mode(&frame.avi, mode, false);
 	frame.avi.quantization_range = HDMI_QUANTIZATION_RANGE_FULL;
 
 	tda998x_write_if(priv, DIP_IF_FLAGS_IF2, REG_IF2_HB0, &frame);

+ 2 - 3
drivers/gpu/drm/i810/i810_drv.c

@@ -59,7 +59,6 @@ static struct drm_driver driver = {
 	.load = i810_driver_load,
 	.lastclose = i810_driver_lastclose,
 	.preclose = i810_driver_preclose,
-	.set_busid = drm_pci_set_busid,
 	.dma_quiescent = i810_driver_dma_quiescent,
 	.ioctls = i810_ioctls,
 	.fops = &i810_driver_fops,
@@ -83,12 +82,12 @@ static int __init i810_init(void)
 		return -EINVAL;
 	}
 	driver.num_ioctls = i810_max_ioctl;
-	return drm_pci_init(&driver, &i810_pci_driver);
+	return drm_legacy_pci_init(&driver, &i810_pci_driver);
 }
 
 static void __exit i810_exit(void)
 {
-	drm_pci_exit(&driver, &i810_pci_driver);
+	drm_legacy_pci_exit(&driver, &i810_pci_driver);
 }
 
 module_init(i810_init);

+ 1 - 6
drivers/gpu/drm/i915/i915_drv.c

@@ -1332,7 +1332,7 @@ int i915_driver_load(struct pci_dev *pdev, const struct pci_device_id *ent)
 
 	ret = i915_load_modeset_init(&dev_priv->drm);
 	if (ret < 0)
-		goto out_cleanup_vblank;
+		goto out_cleanup_hw;
 
 	i915_driver_register(dev_priv);
 
@@ -1349,8 +1349,6 @@ int i915_driver_load(struct pci_dev *pdev, const struct pci_device_id *ent)
 
 	return 0;
 
-out_cleanup_vblank:
-	drm_vblank_cleanup(&dev_priv->drm);
 out_cleanup_hw:
 	i915_driver_cleanup_hw(dev_priv);
 out_cleanup_mmio:
@@ -1386,8 +1384,6 @@ void i915_driver_unload(struct drm_device *dev)
 
 	i915_driver_unregister(dev_priv);
 
-	drm_vblank_cleanup(dev);
-
 	intel_modeset_cleanup(dev);
 
 	/*
@@ -2747,7 +2743,6 @@ static struct drm_driver driver = {
 	.open = i915_driver_open,
 	.lastclose = i915_driver_lastclose,
 	.postclose = i915_driver_postclose,
-	.set_busid = drm_pci_set_busid,
 
 	.gem_close_object = i915_gem_close_object,
 	.gem_free_object_unlocked = i915_gem_free_object,

+ 0 - 2
drivers/gpu/drm/i915/i915_drv.h

@@ -2311,11 +2311,9 @@ struct drm_i915_private {
 
 	struct drm_i915_gem_object *vlv_pctx;
 
-#ifdef CONFIG_DRM_FBDEV_EMULATION
 	/* list of fbdev register on this device */
 	struct intel_fbdev *fbdev;
 	struct work_struct fbdev_suspend_work;
-#endif
 
 	struct drm_property *broadcast_rgb_property;
 	struct drm_property *force_audio_property;

+ 1 - 1
drivers/gpu/drm/i915/i915_irq.c

@@ -1591,7 +1591,7 @@ static void display_pipe_crc_irq_handler(struct drm_i915_private *dev_priv,
 		crcs[3] = crc3;
 		crcs[4] = crc4;
 		drm_crtc_add_crc_entry(&crtc->base, true,
-				       drm_accurate_vblank_count(&crtc->base),
+				       drm_crtc_accurate_vblank_count(&crtc->base),
 				       crcs);
 	}
 }

+ 10 - 2
drivers/gpu/drm/i915/intel_display.c

@@ -12818,7 +12818,7 @@ u32 intel_crtc_get_vblank_counter(struct intel_crtc *crtc)
 	struct drm_device *dev = crtc->base.dev;
 
 	if (!dev->max_vblank_count)
-		return drm_accurate_vblank_count(&crtc->base);
+		return drm_crtc_accurate_vblank_count(&crtc->base);
 
 	return dev->driver->get_vblank_counter(dev, crtc->pipe);
 }
@@ -13286,7 +13286,15 @@ static int intel_atomic_commit(struct drm_device *dev,
 	if (INTEL_GEN(dev_priv) < 9)
 		state->legacy_cursor_update = false;
 
-	drm_atomic_helper_swap_state(state, true);
+	ret = drm_atomic_helper_swap_state(state, true);
+	if (ret) {
+		i915_sw_fence_commit(&intel_state->commit_ready);
+
+		mutex_lock(&dev->struct_mutex);
+		drm_atomic_helper_cleanup_planes(dev, state);
+		mutex_unlock(&dev->struct_mutex);
+		return ret;
+	}
 	dev_priv->wm.distrust_bios_wm = false;
 	intel_shared_dpll_swap_state(state);
 	intel_atomic_track_fbs(state);
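
Handling the now-fallible drm_atomic_helper_swap_state() is the same pattern in every driver converted by this series; stripped down to the helper calls it looks roughly like this fragment (not any particular driver's code, and the tail of the commit is elided):

	#include <drm/drm_atomic_helper.h>

	static int my_atomic_commit(struct drm_device *dev,
				    struct drm_atomic_state *state,
				    bool nonblock)
	{
		int ret;

		ret = drm_atomic_helper_prepare_planes(dev, state);
		if (ret)
			return ret;

		/*
		 * swap_state can now fail: it waits interruptibly for
		 * preceding commits, so bail out and clean up on error.
		 */
		ret = drm_atomic_helper_swap_state(state, true);
		if (ret) {
			drm_atomic_helper_cleanup_planes(dev, state);
			return ret;
		}

		/* ... queue or run the commit tail as before ... */
		return 0;
	}
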

+ 13 - 34
drivers/gpu/drm/i915/intel_dp_mst.c

@@ -443,28 +443,6 @@ static bool intel_dp_mst_get_hw_state(struct intel_connector *connector)
 	return false;
 }
 
-static void intel_connector_add_to_fbdev(struct intel_connector *connector)
-{
-#ifdef CONFIG_DRM_FBDEV_EMULATION
-	struct drm_i915_private *dev_priv = to_i915(connector->base.dev);
-
-	if (dev_priv->fbdev)
-		drm_fb_helper_add_one_connector(&dev_priv->fbdev->helper,
-						&connector->base);
-#endif
-}
-
-static void intel_connector_remove_from_fbdev(struct intel_connector *connector)
-{
-#ifdef CONFIG_DRM_FBDEV_EMULATION
-	struct drm_i915_private *dev_priv = to_i915(connector->base.dev);
-
-	if (dev_priv->fbdev)
-		drm_fb_helper_remove_one_connector(&dev_priv->fbdev->helper,
-						   &connector->base);
-#endif
-}
-
 static struct drm_connector *intel_dp_add_mst_connector(struct drm_dp_mst_topology_mgr *mgr, struct drm_dp_mst_port *port, const char *pathprop)
 {
 	struct intel_dp *intel_dp = container_of(mgr, struct intel_dp, mst_mgr);
@@ -500,31 +478,32 @@ static struct drm_connector *intel_dp_add_mst_connector(struct drm_dp_mst_topolo
 
 static void intel_dp_register_mst_connector(struct drm_connector *connector)
 {
-	struct intel_connector *intel_connector = to_intel_connector(connector);
-	struct drm_device *dev = connector->dev;
+	struct drm_i915_private *dev_priv = to_i915(connector->dev);
 
-	drm_modeset_lock_all(dev);
-	intel_connector_add_to_fbdev(intel_connector);
-	drm_modeset_unlock_all(dev);
+	if (dev_priv->fbdev)
+		drm_fb_helper_add_one_connector(&dev_priv->fbdev->helper,
+						connector);
 
-	drm_connector_register(&intel_connector->base);
+	drm_connector_register(connector);
 }
 
 static void intel_dp_destroy_mst_connector(struct drm_dp_mst_topology_mgr *mgr,
 					   struct drm_connector *connector)
 {
 	struct intel_connector *intel_connector = to_intel_connector(connector);
-	struct drm_device *dev = connector->dev;
+	struct drm_i915_private *dev_priv = to_i915(connector->dev);
 
 	drm_connector_unregister(connector);
 
-	/* need to nuke the connector */
-	drm_modeset_lock_all(dev);
-	intel_connector_remove_from_fbdev(intel_connector);
+	if (dev_priv->fbdev)
+		drm_fb_helper_remove_one_connector(&dev_priv->fbdev->helper,
+						   connector);
+	/* prevent race with the check in ->detect */
+	drm_modeset_lock(&connector->dev->mode_config.connection_mutex, NULL);
 	intel_connector->mst_port = NULL;
-	drm_modeset_unlock_all(dev);
+	drm_modeset_unlock(&connector->dev->mode_config.connection_mutex);
 
-	drm_connector_unreference(&intel_connector->base);
+	drm_connector_unreference(connector);
 	DRM_DEBUG_KMS("\n");
 }
 

Some files were not shown because too many files changed in this diff