
Merge branch 'drm-next' of git://people.freedesktop.org/~airlied/linux

Pull drm updates from Dave Airlie:
 "This is the main drm pull request for 4.5.  I don't think I've missed
  anything too major, I'm mostly back at work now but I'll probably get
  some sleep in 5 years' time.

  Summary:

  New drivers:
   - etnaviv:

     GPU driver for the 3D core on the Vivante core used in numerous
     ARM boards.

  Highlights:

  Core:
   - Atomic suspend/resume helpers
   - Move the headers to using userspace-friendlier types.
   - Documentation updates
   - Lots of struct_mutex removal.
   - Bunch of DP MST fixes from AMD.

  Panel:
   - More DSI helpers
   - Support for some new basic panels

  i915:
   - Basic Kabylake support
   - DP link training and detect code refactoring
   - fbc/psr fixes
   - FIFO underrun fixes
   - SDE interrupt handling fixes
   - dma-buf/fence support in pageflip path.
   - GPU side for MST audio support

  radeon/amdgpu:
   - Drop UMS support
   - GPUVM/Scheduler optimisations
   - Initial Powerplay support for Tonga/Fiji/CZ/ST
   - ACP audio prerequisites

  nouveau:
   - GK20a instmem improvements
   - PCIE link speed change support

  msm:
   - DSI support for msm8960/apq8064

  tegra:
   - Host1X support for Tegra210 SoC

  vc4:
   - 3D acceleration support

  armada:
   - Get rid of struct mutex

  tda998x:
   - Atomic modesetting support
   - TMDS clock limitations

  omapdrm:
   - Atomic modesetting support
   - improved TILER performance

  rockchip:
   - RK3036 VOP support
   - Atomic modesetting support
   - Synopsys DW MIPI DSI support

  exynos:
   - Runtime PM support
   - of_graph binding for DP panels
   - Cleanup of IPP code
   - Configurable plane support
   - Kernel panic fixes at release time"

* 'drm-next' of git://people.freedesktop.org/~airlied/linux: (711 commits)
  drm/fb_cma_helper: Remove implicit call to disable_unused_functions
  drm/amdgpu: add missing irq.h include
  drm/vmwgfx: Fix a width / pitch mismatch on framebuffer updates
  drm/vmwgfx: Fix an incorrect lock check
  drm: nouveau: fix nouveau_debugfs_init prototype
  drm/nouveau/pci: fix check in nvkm_pcie_set_link
  drm/amdgpu: validate duplicates first
  drm/amdgpu: move VM page tables to the LRU end on CS v2
  drm/ttm: add ttm_bo_move_to_lru_tail function v2
  drm/ttm: fix adding foreign BOs to the swap LRU
  drm/ttm: fix adding foreign BOs to the LRU during init v2
  drm/radeon: use kobj_to_dev()
  drm/amdgpu: use kobj_to_dev()
  drm/amdgpu/cz: force vce clocks when sclks are forced
  drm/amdgpu/cz: force uvd clocks when sclks are forced
  drm/amdgpu/cz: add code to enable forcing VCE clocks
  drm/amdgpu/cz: add code to enable forcing UVD clocks
  drm/amdgpu: fix lost sync_to if scheduler is enabled.
  drm/amd/powerplay: fix static checker warning for return meaningless value.
  drm/sysfs: use kobj_to_dev()
  ...
Linus Torvalds, 9 years ago
commit 984065055e
100 changed files with 10544 additions and 3223 deletions
  +88   -855   Documentation/DocBook/gpu.tmpl
  +54   -0     Documentation/devicetree/bindings/display/etnaviv/etnaviv-drm.txt
  +37   -4     Documentation/devicetree/bindings/display/exynos/exynos_dp.txt
  +8    -4     Documentation/devicetree/bindings/display/msm/dsi.txt
  +18   -8     Documentation/devicetree/bindings/display/msm/mdp.txt
  +7    -0     Documentation/devicetree/bindings/display/panel/boe,tv080wum-nl0.txt
  +7    -0     Documentation/devicetree/bindings/display/panel/innolux,g121x1-l03.txt
  +7    -0     Documentation/devicetree/bindings/display/panel/kyo,tcg121xglp.txt
  +20   -0     Documentation/devicetree/bindings/display/panel/panasonic,vvx10f034n00.txt
  +7    -0     Documentation/devicetree/bindings/display/panel/qiaodian,qd43003c0-40.txt
  +22   -0     Documentation/devicetree/bindings/display/panel/sharp,ls043t1le01.txt
  +60   -0     Documentation/devicetree/bindings/display/rockchip/dw_mipi_dsi_rockchip.txt
  +1    -0     Documentation/devicetree/bindings/display/rockchip/rockchip-vop.txt
  +4    -0     Documentation/devicetree/bindings/media/exynos5-gsc.txt
  +4    -0     Documentation/devicetree/bindings/vendor-prefixes.txt
  +9    -0     MAINTAINERS
  +14   -1     arch/arm/boot/dts/exynos5800-peach-pi.dts
  +3    -0     drivers/gpu/drm/Kconfig
  +1    -0     drivers/gpu/drm/Makefile
  +16   -4     drivers/gpu/drm/amd/amdgpu/Makefile
  +93   -47    drivers/gpu/drm/amd/amdgpu/amdgpu.h
  +1    -57    drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c
  +1    -1     drivers/gpu/drm/amd/amdgpu/amdgpu_atpx_handler.c
  +50   -8     drivers/gpu/drm/amd/amdgpu/amdgpu_bios.c
  +326  -2     drivers/gpu/drm/amd/amdgpu/amdgpu_cgs.c
  +12   -7     drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
  +27   -14    drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
  +146  -17    drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
  +8    -2     drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
  +1    -1     drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c
  +5    -8     drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
  +101  -8     drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c
  +9    -0     drivers/gpu/drm/amd/amdgpu/amdgpu_irq.h
  +1    -0     drivers/gpu/drm/amd/amdgpu/amdgpu_mode.h
  +1    -0     drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
  +151  -84    drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
  +317  -0     drivers/gpu/drm/amd/amdgpu/amdgpu_powerplay.c
  +33   -0     drivers/gpu/drm/amd/amdgpu/amdgpu_powerplay.h
  +3    -2     drivers/gpu/drm/amd/amdgpu/amdgpu_sync.c
  +75   -41    drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
  +36   -60    drivers/gpu/drm/amd/amdgpu/atombios_dp.c
  +12   -2     drivers/gpu/drm/amd/amdgpu/ci_dpm.c
  +50   -17    drivers/gpu/drm/amd/amdgpu/cik.c
  +6    -0     drivers/gpu/drm/amd/amdgpu/cik_ih.c
  +272  -1     drivers/gpu/drm/amd/amdgpu/cz_dpm.c
  +2    -0     drivers/gpu/drm/amd/amdgpu/cz_dpm.h
  +7    -0     drivers/gpu/drm/amd/amdgpu/cz_ih.c
  +7    -7     drivers/gpu/drm/amd/amdgpu/dce_v10_0.c
  +16   -14    drivers/gpu/drm/amd/amdgpu/dce_v11_0.c
  +7    -7     drivers/gpu/drm/amd/amdgpu/dce_v8_0.c
  +1    -1     drivers/gpu/drm/amd/amdgpu/fiji_dpm.c
  +0    -182   drivers/gpu/drm/amd/amdgpu/fiji_ppsmc.h
  +1    -1     drivers/gpu/drm/amd/amdgpu/fiji_smc.c
  +0    -0     drivers/gpu/drm/amd/amdgpu/fiji_smum.h
  +1539 -1448  drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
  +4    -1     drivers/gpu/drm/amd/amdgpu/gmc_v7_0.c
  +176  -1     drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c
  +7    -0     drivers/gpu/drm/amd/amdgpu/iceland_ih.c
  +118  -11    drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c
  +1    -1     drivers/gpu/drm/amd/amdgpu/tonga_dpm.c
  +7    -0     drivers/gpu/drm/amd/amdgpu/tonga_ih.c
  +0    -198   drivers/gpu/drm/amd/amdgpu/tonga_ppsmc.h
  +1    -1     drivers/gpu/drm/amd/amdgpu/tonga_smc.c
  +0    -0     drivers/gpu/drm/amd/amdgpu/tonga_smum.h
  +259  -2     drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c
  +171  -67    drivers/gpu/drm/amd/amdgpu/vce_v3_0.c
  +134  -19    drivers/gpu/drm/amd/amdgpu/vi.c
  +55   -6     drivers/gpu/drm/amd/include/amd_acpi.h
  +50   -0     drivers/gpu/drm/amd/include/amd_pcie.h
  +141  -0     drivers/gpu/drm/amd/include/amd_pcie_helpers.h
  +21   -0     drivers/gpu/drm/amd/include/amd_shared.h
  +1    -0     drivers/gpu/drm/amd/include/asic_reg/bif/bif_5_0_d.h
  +13   -0     drivers/gpu/drm/amd/include/asic_reg/gca/gfx_8_0_d.h
  +79   -0     drivers/gpu/drm/amd/include/atombios.h
  +123  -1     drivers/gpu/drm/amd/include/cgs_common.h
  +6    -0     drivers/gpu/drm/amd/powerplay/Kconfig
  +22   -0     drivers/gpu/drm/amd/powerplay/Makefile
  +660  -0     drivers/gpu/drm/amd/powerplay/amd_powerplay.c
  +11   -0     drivers/gpu/drm/amd/powerplay/eventmgr/Makefile
  +289  -0     drivers/gpu/drm/amd/powerplay/eventmgr/eventactionchains.c
  +62   -0     drivers/gpu/drm/amd/powerplay/eventmgr/eventactionchains.h
  +195  -0     drivers/gpu/drm/amd/powerplay/eventmgr/eventinit.c
  +34   -0     drivers/gpu/drm/amd/powerplay/eventmgr/eventinit.h
  +215  -0     drivers/gpu/drm/amd/powerplay/eventmgr/eventmanagement.c
  +59   -0     drivers/gpu/drm/amd/powerplay/eventmgr/eventmanagement.h
  +114  -0     drivers/gpu/drm/amd/powerplay/eventmgr/eventmgr.c
  +410  -0     drivers/gpu/drm/amd/powerplay/eventmgr/eventsubchains.c
  +100  -0     drivers/gpu/drm/amd/powerplay/eventmgr/eventsubchains.h
  +438  -0     drivers/gpu/drm/amd/powerplay/eventmgr/eventtasks.c
  +88   -0     drivers/gpu/drm/amd/powerplay/eventmgr/eventtasks.h
  +117  -0     drivers/gpu/drm/amd/powerplay/eventmgr/psm.c
  +38   -0     drivers/gpu/drm/amd/powerplay/eventmgr/psm.h
  +15   -0     drivers/gpu/drm/amd/powerplay/hwmgr/Makefile
  +252  -0     drivers/gpu/drm/amd/powerplay/hwmgr/cz_clockpowergating.c
  +37   -0     drivers/gpu/drm/amd/powerplay/hwmgr/cz_clockpowergating.h
  +1737 -0     drivers/gpu/drm/amd/powerplay/hwmgr/cz_hwmgr.c
  +326  -0     drivers/gpu/drm/amd/powerplay/hwmgr/cz_hwmgr.h
  +114  -0     drivers/gpu/drm/amd/powerplay/hwmgr/fiji_clockpowergating.c
  +35   -0     drivers/gpu/drm/amd/powerplay/hwmgr/fiji_clockpowergating.h
  +105  -0     drivers/gpu/drm/amd/powerplay/hwmgr/fiji_dyn_defaults.h

+88 -855  Documentation/DocBook/gpu.tmpl

@@ -124,6 +124,43 @@
     <para>
       [Insert diagram of typical DRM stack here]
     </para>
+  <sect1>
+    <title>Style Guidelines</title>
+    <para>
+      For consistency this documentation uses American English. Abbreviations
+      are written as all-uppercase, for example: DRM, KMS, IOCTL, CRTC, and so
+      on. To aid in reading, the documentation makes full use of the markup
+      characters kerneldoc provides: @parameter for function parameters, @member
+      for structure members, &amp;structure to reference structures and
+      function() for functions. These all get automatically hyperlinked if
+      kerneldoc for the referenced objects exists. When referencing entries in
+      function vtables please use -&gt;vfunc(). Note that kerneldoc does
+      not support referencing struct members directly, so please add a reference
+      to the vtable struct somewhere in the same paragraph or at least section.
+    </para>
+    <para>
+      Except in special situations (to separate locked from unlocked variants)
+      locking requirements for functions aren't documented in the kerneldoc.
+      Instead locking should be checked at runtime using e.g.
+      <code>WARN_ON(!mutex_is_locked(...));</code>. Since it's much easier to
+      ignore documentation than runtime noise this provides more value. And on
+      top of that runtime checks do need to be updated when the locking rules
+      change, increasing the chances that they're correct. Within the
+      documentation the locking rules should be explained in the relevant
+      structures: Either in the comment for the lock explaining what it
+      protects, or data fields need a note about which lock protects them, or
+      both.
+    </para>
+    <para>
+      Functions which have a non-<code>void</code> return value should have a
+      section called "Returns" explaining the expected return values in
+      different cases and their meanings. Currently there's no consensus whether
+      that section name should be all upper-case or not, and whether it should
+      end in a colon or not. Go with the file-local style. Other common section
+      names are "Notes" with information for dangerous or tricky corner cases,
+      and "FIXME" where the interface could be cleaned up.
+    </para>
+  </sect1>
   </chapter>
 
   <!-- Internals -->
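
The guidelines added above are easiest to see in a concrete kerneldoc comment. The following sketch is not part of the patch; the function, the lock being asserted and the vfunc it mentions are hypothetical, but the markup (@parameter, &structure, function(), ->vfunc()) and the runtime locking check follow the rules the new section lays out.

/**
 * example_crtc_enable - enable scanout on a CRTC
 * @crtc: the &drm_crtc to enable
 *
 * Enables scanout of the framebuffer currently bound to @crtc. Drivers would
 * typically call this from the ->enable() vfunc of &drm_crtc_helper_funcs,
 * after drm_crtc_init() has registered the CRTC.
 *
 * Returns:
 * 0 on success or a negative error code on failure.
 */
static int example_crtc_enable(struct drm_crtc *crtc)
{
	/* Locking rules are asserted at runtime instead of documented. */
	WARN_ON(!drm_modeset_is_locked(&crtc->mutex));

	/* ... hardware programming elided ... */
	return 0;
}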
@@ -946,12 +983,10 @@ int max_width, max_height;</synopsis>
     <sect2>
       <title>Atomic Mode Setting Function Reference</title>
 !Edrivers/gpu/drm/drm_atomic.c
+!Idrivers/gpu/drm/drm_atomic.c
     </sect2>
     <sect2>
-      <title>Frame Buffer Creation</title>
-      <synopsis>struct drm_framebuffer *(*fb_create)(struct drm_device *dev,
-				     struct drm_file *file_priv,
-				     struct drm_mode_fb_cmd2 *mode_cmd);</synopsis>
+      <title>Frame Buffer Abstraction</title>
       <para>
         Frame buffers are abstract memory objects that provide a source of
         pixels to scanout to a CRTC. Applications explicitly request the
@@ -969,73 +1004,6 @@ int max_width, max_height;</synopsis>
 	handles, e.g. vmwgfx directly exposes special TTM handles to userspace
 	and so expects TTM handles in the create ioctl and not GEM handles.
       </para>
-      <para>
-        Drivers must first validate the requested frame buffer parameters passed
-        through the mode_cmd argument. In particular this is where invalid
-        sizes, pixel formats or pitches can be caught.
-      </para>
-      <para>
-        If the parameters are deemed valid, drivers then create, initialize and
-        return an instance of struct <structname>drm_framebuffer</structname>.
-        If desired the instance can be embedded in a larger driver-specific
-	structure. Drivers must fill its <structfield>width</structfield>,
-	<structfield>height</structfield>, <structfield>pitches</structfield>,
-        <structfield>offsets</structfield>, <structfield>depth</structfield>,
-        <structfield>bits_per_pixel</structfield> and
-        <structfield>pixel_format</structfield> fields from the values passed
-        through the <parameter>drm_mode_fb_cmd2</parameter> argument. They
-        should call the <function>drm_helper_mode_fill_fb_struct</function>
-        helper function to do so.
-      </para>
-
-      <para>
-	The initialization of the new framebuffer instance is finalized with a
-	call to <function>drm_framebuffer_init</function> which takes a pointer
-	to DRM frame buffer operations (struct
-	<structname>drm_framebuffer_funcs</structname>). Note that this function
-	publishes the framebuffer and so from this point on it can be accessed
-	concurrently from other threads. Hence it must be the last step in the
-	driver's framebuffer initialization sequence. Frame buffer operations
-	are
-        <itemizedlist>
-          <listitem>
-            <synopsis>int (*create_handle)(struct drm_framebuffer *fb,
-		     struct drm_file *file_priv, unsigned int *handle);</synopsis>
-            <para>
-              Create a handle to the frame buffer underlying memory object. If
-              the frame buffer uses a multi-plane format, the handle will
-              reference the memory object associated with the first plane.
-            </para>
-            <para>
-              Drivers call <function>drm_gem_handle_create</function> to create
-              the handle.
-            </para>
-          </listitem>
-          <listitem>
-            <synopsis>void (*destroy)(struct drm_framebuffer *framebuffer);</synopsis>
-            <para>
-              Destroy the frame buffer object and frees all associated
-              resources. Drivers must call
-              <function>drm_framebuffer_cleanup</function> to free resources
-              allocated by the DRM core for the frame buffer object, and must
-              make sure to unreference all memory objects associated with the
-              frame buffer. Handles created by the
-              <methodname>create_handle</methodname> operation are released by
-              the DRM core.
-            </para>
-          </listitem>
-          <listitem>
-            <synopsis>int (*dirty)(struct drm_framebuffer *framebuffer,
-	     struct drm_file *file_priv, unsigned flags, unsigned color,
-	     struct drm_clip_rect *clips, unsigned num_clips);</synopsis>
-            <para>
-              This optional operation notifies the driver that a region of the
-              frame buffer has changed in response to a DRM_IOCTL_MODE_DIRTYFB
-              ioctl call.
-            </para>
-          </listitem>
-        </itemizedlist>
-      </para>
       <para>
 	The lifetime of a drm framebuffer is controlled with a reference count,
 	drivers can grab additional references with
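
The paragraphs removed above survive as kerneldoc, but the flow they described is worth keeping in mind: validate the parameters in mode_cmd, fill the new framebuffer with drm_helper_mode_fill_fb_struct(), and publish it with drm_framebuffer_init() as the very last step. A minimal fb_create implementation along those lines might look like the sketch below; the foo_* names are hypothetical, while the DRM calls are the real 4.5-era interfaces.

#include <drm/drmP.h>
#include <drm/drm_crtc_helper.h>

struct foo_framebuffer {
	struct drm_framebuffer base;
	struct drm_gem_object *obj;	/* backing storage of the first plane */
};

static void foo_fb_destroy(struct drm_framebuffer *fb)
{
	struct foo_framebuffer *foo_fb =
		container_of(fb, struct foo_framebuffer, base);

	drm_framebuffer_cleanup(fb);
	drm_gem_object_unreference_unlocked(foo_fb->obj);
	kfree(foo_fb);
}

static const struct drm_framebuffer_funcs foo_fb_funcs = {
	.destroy = foo_fb_destroy,
};

static struct drm_framebuffer *
foo_fb_create(struct drm_device *dev, struct drm_file *file_priv,
	      struct drm_mode_fb_cmd2 *mode_cmd)
{
	struct foo_framebuffer *fb;
	struct drm_gem_object *obj;
	int ret;

	obj = drm_gem_object_lookup(dev, file_priv, mode_cmd->handles[0]);
	if (!obj)
		return ERR_PTR(-ENOENT);

	/* Reject invalid sizes, pitches or pixel formats here. */

	fb = kzalloc(sizeof(*fb), GFP_KERNEL);
	if (!fb) {
		drm_gem_object_unreference_unlocked(obj);
		return ERR_PTR(-ENOMEM);
	}
	fb->obj = obj;

	/* Copies width/height/pitches/offsets/pixel_format from mode_cmd. */
	drm_helper_mode_fill_fb_struct(&fb->base, mode_cmd);

	/* Publishes the framebuffer: must be the last initialization step. */
	ret = drm_framebuffer_init(dev, &fb->base, &foo_fb_funcs);
	if (ret) {
		drm_gem_object_unreference_unlocked(obj);
		kfree(fb);
		return ERR_PTR(ret);
	}

	return &fb->base;
}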
@@ -1173,137 +1141,6 @@ int max_width, max_height;</synopsis>
           pointer to CRTC functions.
         </para>
       </sect3>
-      <sect3 id="drm-kms-crtcops">
-        <title>CRTC Operations</title>
-        <sect4>
-          <title>Set Configuration</title>
-          <synopsis>int (*set_config)(struct drm_mode_set *set);</synopsis>
-          <para>
-            Apply a new CRTC configuration to the device. The configuration
-            specifies a CRTC, a frame buffer to scan out from, a (x,y) position in
-            the frame buffer, a display mode and an array of connectors to drive
-            with the CRTC if possible.
-          </para>
-          <para>
-            If the frame buffer specified in the configuration is NULL, the driver
-            must detach all encoders connected to the CRTC and all connectors
-            attached to those encoders and disable them.
-          </para>
-          <para>
-            This operation is called with the mode config lock held.
-          </para>
-          <note><para>
-	    Note that the drm core has no notion of restoring the mode setting
-	    state after resume, since all resume handling is in the full
-	    responsibility of the driver. The common mode setting helper library
-	    though provides a helper which can be used for this:
-	    <function>drm_helper_resume_force_mode</function>.
-          </para></note>
-        </sect4>
-        <sect4>
-          <title>Page Flipping</title>
-          <synopsis>int (*page_flip)(struct drm_crtc *crtc, struct drm_framebuffer *fb,
-                   struct drm_pending_vblank_event *event);</synopsis>
-          <para>
-            Schedule a page flip to the given frame buffer for the CRTC. This
-            operation is called with the mode config mutex held.
-          </para>
-          <para>
-            Page flipping is a synchronization mechanism that replaces the frame
-            buffer being scanned out by the CRTC with a new frame buffer during
-            vertical blanking, avoiding tearing. When an application requests a page
-            flip the DRM core verifies that the new frame buffer is large enough to
-            be scanned out by  the CRTC in the currently configured mode and then
-            calls the CRTC <methodname>page_flip</methodname> operation with a
-            pointer to the new frame buffer.
-          </para>
-          <para>
-            The <methodname>page_flip</methodname> operation schedules a page flip.
-            Once any pending rendering targeting the new frame buffer has
-            completed, the CRTC will be reprogrammed to display that frame buffer
-            after the next vertical refresh. The operation must return immediately
-            without waiting for rendering or page flip to complete and must block
-            any new rendering to the frame buffer until the page flip completes.
-          </para>
-          <para>
-            If a page flip can be successfully scheduled the driver must set the
-            <code>drm_crtc-&gt;fb</code> field to the new framebuffer pointed to
-            by <code>fb</code>. This is important so that the reference counting
-            on framebuffers stays balanced.
-          </para>
-          <para>
-            If a page flip is already pending, the
-            <methodname>page_flip</methodname> operation must return
-            -<errorname>EBUSY</errorname>.
-          </para>
-          <para>
-            To synchronize page flip to vertical blanking the driver will likely
-            need to enable vertical blanking interrupts. It should call
-            <function>drm_vblank_get</function> for that purpose, and call
-            <function>drm_vblank_put</function> after the page flip completes.
-          </para>
-          <para>
-            If the application has requested to be notified when page flip completes
-            the <methodname>page_flip</methodname> operation will be called with a
-            non-NULL <parameter>event</parameter> argument pointing to a
-            <structname>drm_pending_vblank_event</structname> instance. Upon page
-            flip completion the driver must call <methodname>drm_send_vblank_event</methodname>
-            to fill in the event and send to wake up any waiting processes.
-            This can be performed with
-            <programlisting><![CDATA[
-            spin_lock_irqsave(&dev->event_lock, flags);
-            ...
-            drm_send_vblank_event(dev, pipe, event);
-            spin_unlock_irqrestore(&dev->event_lock, flags);
-            ]]></programlisting>
-          </para>
-          <note><para>
-            FIXME: Could drivers that don't need to wait for rendering to complete
-            just add the event to <literal>dev-&gt;vblank_event_list</literal> and
-            let the DRM core handle everything, as for "normal" vertical blanking
-            events?
-          </para></note>
-          <para>
-            While waiting for the page flip to complete, the
-            <literal>event-&gt;base.link</literal> list head can be used freely by
-            the driver to store the pending event in a driver-specific list.
-          </para>
-          <para>
-            If the file handle is closed before the event is signaled, drivers must
-            take care to destroy the event in their
-            <methodname>preclose</methodname> operation (and, if needed, call
-            <function>drm_vblank_put</function>).
-          </para>
-        </sect4>
-        <sect4>
-          <title>Miscellaneous</title>
-          <itemizedlist>
-            <listitem>
-              <synopsis>void (*set_property)(struct drm_crtc *crtc,
-                     struct drm_property *property, uint64_t value);</synopsis>
-              <para>
-                Set the value of the given CRTC property to
-                <parameter>value</parameter>. See <xref linkend="drm-kms-properties"/>
-                for more information about properties.
-              </para>
-            </listitem>
-            <listitem>
-              <synopsis>void (*gamma_set)(struct drm_crtc *crtc, u16 *r, u16 *g, u16 *b,
-                        uint32_t start, uint32_t size);</synopsis>
-              <para>
-                Apply a gamma table to the device. The operation is optional.
-              </para>
-            </listitem>
-            <listitem>
-              <synopsis>void (*destroy)(struct drm_crtc *crtc);</synopsis>
-              <para>
-                Destroy the CRTC when not needed anymore. See
-                <xref linkend="drm-kms-init"/>.
-              </para>
-            </listitem>
-          </itemizedlist>
-        </sect4>
-      </sect3>
     </sect2>
     <sect2>
       <title>Planes (struct <structname>drm_plane</structname>)</title>
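
The event-completion idiom quoted in the removed page_flip text is still the way flips signal userspace. A complete flip-done interrupt path following it could look like this sketch, where foo_crtc and foo_crtc_from_pipe() stand in for hypothetical driver-private state:

static void foo_finish_page_flip(struct drm_device *dev, unsigned int pipe)
{
	struct foo_crtc *foo_crtc = foo_crtc_from_pipe(dev, pipe);
	unsigned long flags;

	spin_lock_irqsave(&dev->event_lock, flags);
	if (foo_crtc->event) {
		/* Fills in the event and wakes any waiting process. */
		drm_send_vblank_event(dev, pipe, foo_crtc->event);
		foo_crtc->event = NULL;
	}
	spin_unlock_irqrestore(&dev->event_lock, flags);

	/* Balance the drm_vblank_get() taken when the flip was queued. */
	drm_vblank_put(dev, pipe);
}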
@@ -1320,7 +1157,7 @@ int max_width, max_height;</synopsis>
         <listitem>
         DRM_PLANE_TYPE_PRIMARY represents a "main" plane for a CRTC.  Primary
         planes are the planes operated upon by CRTC modesetting and flipping
-        operations described in <xref linkend="drm-kms-crtcops"/>.
+	operations described in the page_flip hook in <structname>drm_crtc_funcs</structname>.
         </listitem>
         <listitem>
         DRM_PLANE_TYPE_CURSOR represents a "cursor" plane for a CRTC.  Cursor
@@ -1357,52 +1194,6 @@ int max_width, max_height;</synopsis>
           primary plane with standard capabilities.
         </para>
       </sect3>
-      <sect3>
-        <title>Plane Operations</title>
-        <itemizedlist>
-          <listitem>
-            <synopsis>int (*update_plane)(struct drm_plane *plane, struct drm_crtc *crtc,
-                        struct drm_framebuffer *fb, int crtc_x, int crtc_y,
-                        unsigned int crtc_w, unsigned int crtc_h,
-                        uint32_t src_x, uint32_t src_y,
-                        uint32_t src_w, uint32_t src_h);</synopsis>
-            <para>
-              Enable and configure the plane to use the given CRTC and frame buffer.
-            </para>
-            <para>
-              The source rectangle in frame buffer memory coordinates is given by
-              the <parameter>src_x</parameter>, <parameter>src_y</parameter>,
-              <parameter>src_w</parameter> and <parameter>src_h</parameter>
-              parameters (as 16.16 fixed point values). Devices that don't support
-              subpixel plane coordinates can ignore the fractional part.
-            </para>
-            <para>
-              The destination rectangle in CRTC coordinates is given by the
-              <parameter>crtc_x</parameter>, <parameter>crtc_y</parameter>,
-              <parameter>crtc_w</parameter> and <parameter>crtc_h</parameter>
-              parameters (as integer values). Devices scale the source rectangle to
-              the destination rectangle. If scaling is not supported, and the source
-              rectangle size doesn't match the destination rectangle size, the
-              driver must return a -<errorname>EINVAL</errorname> error.
-            </para>
-          </listitem>
-          <listitem>
-            <synopsis>int (*disable_plane)(struct drm_plane *plane);</synopsis>
-            <para>
-              Disable the plane. The DRM core calls this method in response to a
-              DRM_IOCTL_MODE_SETPLANE ioctl call with the frame buffer ID set to 0.
-              Disabled planes must not be processed by the CRTC.
-            </para>
-          </listitem>
-          <listitem>
-            <synopsis>void (*destroy)(struct drm_plane *plane);</synopsis>
-            <para>
-              Destroy the plane when not needed anymore. See
-              <xref linkend="drm-kms-init"/>.
-            </para>
-          </listitem>
-        </itemizedlist>
-      </sect3>
     </sect2>
     <sect2>
       <title>Encoders (struct <structname>drm_encoder</structname>)</title>
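
For reference, the semantics spelled out in the removed plane-operation list — 16.16 fixed-point source coordinates, and -EINVAL when scaling is requested but unsupported — translate into an update_plane implementation shaped roughly like this sketch (all foo_* names hypothetical):

static int foo_update_plane(struct drm_plane *plane, struct drm_crtc *crtc,
			    struct drm_framebuffer *fb, int crtc_x, int crtc_y,
			    unsigned int crtc_w, unsigned int crtc_h,
			    uint32_t src_x, uint32_t src_y,
			    uint32_t src_w, uint32_t src_h)
{
	/* Source coordinates are 16.16 fixed point; hardware without
	 * subpixel support uses only the integer part. */
	unsigned int sx = src_x >> 16, sy = src_y >> 16;
	unsigned int sw = src_w >> 16, sh = src_h >> 16;

	/* No scaling support: source and destination sizes must match. */
	if (sw != crtc_w || sh != crtc_h)
		return -EINVAL;

	/* Hypothetical hardware programming helper. */
	foo_write_plane_regs(plane, crtc, fb, sx, sy, crtc_x, crtc_y,
			     crtc_w, crtc_h);
	return 0;
}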
@@ -1459,27 +1250,6 @@ int max_width, max_height;</synopsis>
           encoders they want to use to a CRTC.
         </para>
       </sect3>
-      <sect3>
-        <title>Encoder Operations</title>
-        <itemizedlist>
-          <listitem>
-            <synopsis>void (*destroy)(struct drm_encoder *encoder);</synopsis>
-            <para>
-              Called to destroy the encoder when not needed anymore. See
-              <xref linkend="drm-kms-init"/>.
-            </para>
-          </listitem>
-          <listitem>
-            <synopsis>void (*set_property)(struct drm_plane *plane,
-                     struct drm_property *property, uint64_t value);</synopsis>
-            <para>
-              Set the value of the given plane property to
-              <parameter>value</parameter>. See <xref linkend="drm-kms-properties"/>
-              for more information about properties.
-            </para>
-          </listitem>
-        </itemizedlist>
-      </sect3>
     </sect2>
     <sect2>
       <title>Connectors (struct <structname>drm_connector</structname>)</title>
@@ -1683,27 +1453,6 @@ int max_width, max_height;</synopsis>
             connector_status_unknown.
           </para>
         </sect4>
-        <sect4>
-          <title>Miscellaneous</title>
-          <itemizedlist>
-            <listitem>
-              <synopsis>void (*set_property)(struct drm_connector *connector,
-                     struct drm_property *property, uint64_t value);</synopsis>
-              <para>
-                Set the value of the given connector property to
-                <parameter>value</parameter>. See <xref linkend="drm-kms-properties"/>
-                for more information about properties.
-              </para>
-            </listitem>
-            <listitem>
-              <synopsis>void (*destroy)(struct drm_connector *connector);</synopsis>
-              <para>
-                Destroy the connector when not needed anymore. See
-                <xref linkend="drm-kms-init"/>.
-              </para>
-            </listitem>
-          </itemizedlist>
-        </sect4>
       </sect3>
     </sect2>
     <sect2>
@@ -1829,462 +1578,6 @@ void intel_crt_init(struct drm_device *dev)
       To use it, a driver must provide bottom functions for all of the three KMS
       entities.
     </para>
-    <sect2>
-      <title>Helper Functions</title>
-      <itemizedlist>
-        <listitem>
-          <synopsis>int drm_crtc_helper_set_config(struct drm_mode_set *set);</synopsis>
-          <para>
-            The <function>drm_crtc_helper_set_config</function> helper function
-            is a CRTC <methodname>set_config</methodname> implementation. It
-            first tries to locate the best encoder for each connector by calling
-            the connector <methodname>best_encoder</methodname> helper
-            operation.
-          </para>
-          <para>
-            After locating the appropriate encoders, the helper function will
-            call the <methodname>mode_fixup</methodname> encoder and CRTC helper
-            operations to adjust the requested mode, or reject it completely in
-            which case an error will be returned to the application. If the new
-            configuration after mode adjustment is identical to the current
-            configuration the helper function will return without performing any
-            other operation.
-          </para>
-          <para>
-            If the adjusted mode is identical to the current mode but changes to
-            the frame buffer need to be applied, the
-            <function>drm_crtc_helper_set_config</function> function will call
-            the CRTC <methodname>mode_set_base</methodname> helper operation. If
-            the adjusted mode differs from the current mode, or if the
-            <methodname>mode_set_base</methodname> helper operation is not
-            provided, the helper function performs a full mode set sequence by
-            calling the <methodname>prepare</methodname>,
-            <methodname>mode_set</methodname> and
-            <methodname>commit</methodname> CRTC and encoder helper operations,
-            in that order.
-          </para>
-        </listitem>
-        <listitem>
-          <synopsis>void drm_helper_connector_dpms(struct drm_connector *connector, int mode);</synopsis>
-          <para>
-            The <function>drm_helper_connector_dpms</function> helper function
-            is a connector <methodname>dpms</methodname> implementation that
-            tracks power state of connectors. To use the function, drivers must
-            provide <methodname>dpms</methodname> helper operations for CRTCs
-            and encoders to apply the DPMS state to the device.
-          </para>
-          <para>
-            The mid-layer doesn't track the power state of CRTCs and encoders.
-            The <methodname>dpms</methodname> helper operations can thus be
-            called with a mode identical to the currently active mode.
-          </para>
-        </listitem>
-        <listitem>
-          <synopsis>int drm_helper_probe_single_connector_modes(struct drm_connector *connector,
-                                            uint32_t maxX, uint32_t maxY);</synopsis>
-          <para>
-            The <function>drm_helper_probe_single_connector_modes</function> helper
-            function is a connector <methodname>fill_modes</methodname>
-            implementation that updates the connection status for the connector
-            and then retrieves a list of modes by calling the connector
-            <methodname>get_modes</methodname> helper operation.
-          </para>
-         <para>
-            If the helper operation returns no mode, and if the connector status
-            is connector_status_connected, standard VESA DMT modes up to
-            1024x768 are automatically added to the modes list by a call to
-            <function>drm_add_modes_noedid</function>.
-          </para>
-          <para>
-            The function then filters out modes larger than
-            <parameter>max_width</parameter> and <parameter>max_height</parameter>
-            if specified. It finally calls the optional connector
-            <methodname>mode_valid</methodname> helper operation for each mode in
-            the probed list to check whether the mode is valid for the connector.
-          </para>
-        </listitem>
-      </itemizedlist>
-    </sect2>
-    <sect2>
-      <title>CRTC Helper Operations</title>
-      <itemizedlist>
-        <listitem id="drm-helper-crtc-mode-fixup">
-          <synopsis>bool (*mode_fixup)(struct drm_crtc *crtc,
-                       const struct drm_display_mode *mode,
-                       struct drm_display_mode *adjusted_mode);</synopsis>
-          <para>
-            Let CRTCs adjust the requested mode or reject it completely. This
-            operation returns true if the mode is accepted (possibly after being
-            adjusted) or false if it is rejected.
-          </para>
-          <para>
-            The <methodname>mode_fixup</methodname> operation should reject the
-            mode if it can't reasonably use it. The definition of "reasonable"
-            is currently fuzzy in this context. One possible behaviour would be
-            to set the adjusted mode to the panel timings when a fixed-mode
-            panel is used with hardware capable of scaling. Another behaviour
-            would be to accept any input mode and adjust it to the closest mode
-            supported by the hardware (FIXME: This needs to be clarified).
-          </para>
-        </listitem>
-        <listitem>
-          <synopsis>int (*mode_set_base)(struct drm_crtc *crtc, int x, int y,
-                     struct drm_framebuffer *old_fb)</synopsis>
-          <para>
-            Move the CRTC on the current frame buffer (stored in
-            <literal>crtc-&gt;fb</literal>) to position (x,y). Any of the frame
-            buffer, x position or y position may have been modified.
-          </para>
-          <para>
-            This helper operation is optional. If not provided, the
-            <function>drm_crtc_helper_set_config</function> function will fall
-            back to the <methodname>mode_set</methodname> helper operation.
-          </para>
-          <note><para>
-            FIXME: Why are x and y passed as arguments, as they can be accessed
-            through <literal>crtc-&gt;x</literal> and
-            <literal>crtc-&gt;y</literal>?
-          </para></note>
-        </listitem>
-        <listitem>
-          <synopsis>void (*prepare)(struct drm_crtc *crtc);</synopsis>
-          <para>
-            Prepare the CRTC for mode setting. This operation is called after
-            validating the requested mode. Drivers use it to perform
-            device-specific operations required before setting the new mode.
-          </para>
-        </listitem>
-        <listitem>
-          <synopsis>int (*mode_set)(struct drm_crtc *crtc, struct drm_display_mode *mode,
-                struct drm_display_mode *adjusted_mode, int x, int y,
-                struct drm_framebuffer *old_fb);</synopsis>
-          <para>
-            Set a new mode, position and frame buffer. Depending on the device
-            requirements, the mode can be stored internally by the driver and
-            applied in the <methodname>commit</methodname> operation, or
-            programmed to the hardware immediately.
-          </para>
-          <para>
-            The <methodname>mode_set</methodname> operation returns 0 on success
-	    or a negative error code if an error occurs.
-          </para>
-        </listitem>
-        <listitem>
-          <synopsis>void (*commit)(struct drm_crtc *crtc);</synopsis>
-          <para>
-            Commit a mode. This operation is called after setting the new mode.
-            Upon return the device must use the new mode and be fully
-            operational.
-          </para>
-        </listitem>
-      </itemizedlist>
-    </sect2>
-    <sect2>
-      <title>Encoder Helper Operations</title>
-      <itemizedlist>
-        <listitem>
-          <synopsis>bool (*mode_fixup)(struct drm_encoder *encoder,
-                       const struct drm_display_mode *mode,
-                       struct drm_display_mode *adjusted_mode);</synopsis>
-          <para>
-            Let encoders adjust the requested mode or reject it completely. This
-            operation returns true if the mode is accepted (possibly after being
-            adjusted) or false if it is rejected. See the
-            <link linkend="drm-helper-crtc-mode-fixup">mode_fixup CRTC helper
-            operation</link> for an explanation of the allowed adjustments.
-          </para>
-        </listitem>
-        <listitem>
-          <synopsis>void (*prepare)(struct drm_encoder *encoder);</synopsis>
-          <para>
-            Prepare the encoder for mode setting. This operation is called after
-            validating the requested mode. Drivers use it to perform
-            device-specific operations required before setting the new mode.
-          </para>
-        </listitem>
-        <listitem>
-          <synopsis>void (*mode_set)(struct drm_encoder *encoder,
-                 struct drm_display_mode *mode,
-                 struct drm_display_mode *adjusted_mode);</synopsis>
-          <para>
-            Set a new mode. Depending on the device requirements, the mode can
-            be stored internally by the driver and applied in the
-            <methodname>commit</methodname> operation, or programmed to the
-            hardware immediately.
-          </para>
-        </listitem>
-        <listitem>
-          <synopsis>void (*commit)(struct drm_encoder *encoder);</synopsis>
-          <para>
-            Commit a mode. This operation is called after setting the new mode.
-            Upon return the device must use the new mode and be fully
-            operational.
-          </para>
-        </listitem>
-      </itemizedlist>
-    </sect2>
-    <sect2>
-      <title>Connector Helper Operations</title>
-      <itemizedlist>
-        <listitem>
-          <synopsis>struct drm_encoder *(*best_encoder)(struct drm_connector *connector);</synopsis>
-          <para>
-            Return a pointer to the best encoder for the connecter. Device that
-            map connectors to encoders 1:1 simply return the pointer to the
-            associated encoder. This operation is mandatory.
-          </para>
-        </listitem>
-        <listitem>
-          <synopsis>int (*get_modes)(struct drm_connector *connector);</synopsis>
-          <para>
-            Fill the connector's <structfield>probed_modes</structfield> list
-            by parsing EDID data with <function>drm_add_edid_modes</function>,
-            adding standard VESA DMT modes with <function>drm_add_modes_noedid</function>,
-            or calling <function>drm_mode_probed_add</function> directly for every
-            supported mode and return the number of modes it has detected. This
-            operation is mandatory.
-          </para>
-          <para>
-            Note that the caller function will automatically add standard VESA
-            DMT modes up to 1024x768 if the <methodname>get_modes</methodname>
-            helper operation returns no mode and if the connector status is
-            connector_status_connected. There is no need to call
-            <function>drm_add_edid_modes</function> manually in that case.
-          </para>
-          <para>
-            When adding modes manually the driver creates each mode with a call to
-            <function>drm_mode_create</function> and must fill the following fields.
-            <itemizedlist>
-              <listitem>
-                <synopsis>__u32 type;</synopsis>
-                <para>
-                  Mode type bitmask, a combination of
-                  <variablelist>
-                    <varlistentry>
-                      <term>DRM_MODE_TYPE_BUILTIN</term>
-                      <listitem><para>not used?</para></listitem>
-                    </varlistentry>
-                    <varlistentry>
-                      <term>DRM_MODE_TYPE_CLOCK_C</term>
-                      <listitem><para>not used?</para></listitem>
-                    </varlistentry>
-                    <varlistentry>
-                      <term>DRM_MODE_TYPE_CRTC_C</term>
-                      <listitem><para>not used?</para></listitem>
-                    </varlistentry>
-                    <varlistentry>
-                      <term>
-        DRM_MODE_TYPE_PREFERRED - The preferred mode for the connector
-                      </term>
-                      <listitem>
-                        <para>not used?</para>
-                      </listitem>
-                    </varlistentry>
-                    <varlistentry>
-                      <term>DRM_MODE_TYPE_DEFAULT</term>
-                      <listitem><para>not used?</para></listitem>
-                    </varlistentry>
-                    <varlistentry>
-                      <term>DRM_MODE_TYPE_USERDEF</term>
-                      <listitem><para>not used?</para></listitem>
-                    </varlistentry>
-                    <varlistentry>
-                      <term>DRM_MODE_TYPE_DRIVER</term>
-                      <listitem>
-                        <para>
-                          The mode has been created by the driver (as opposed to
-                          to user-created modes).
-                        </para>
-                      </listitem>
-                    </varlistentry>
-                  </variablelist>
-                  Drivers must set the DRM_MODE_TYPE_DRIVER bit for all modes they
-                  create, and set the DRM_MODE_TYPE_PREFERRED bit for the preferred
-                  mode.
-                </para>
-              </listitem>
-              <listitem>
-                <synopsis>__u32 clock;</synopsis>
-                <para>Pixel clock frequency in kHz unit</para>
-              </listitem>
-              <listitem>
-                <synopsis>__u16 hdisplay, hsync_start, hsync_end, htotal;
-    __u16 vdisplay, vsync_start, vsync_end, vtotal;</synopsis>
-                <para>Horizontal and vertical timing information</para>
-                <screen><![CDATA[
-             Active                 Front           Sync           Back
-             Region                 Porch                          Porch
-    <-----------------------><----------------><-------------><-------------->
-
-      //////////////////////|
-     ////////////////////// |
-    //////////////////////  |..................               ................
-                                               _______________
-
-    <----- [hv]display ----->
-    <------------- [hv]sync_start ------------>
-    <--------------------- [hv]sync_end --------------------->
-    <-------------------------------- [hv]total ----------------------------->
-]]></screen>
-              </listitem>
-              <listitem>
-                <synopsis>__u16 hskew;
-    __u16 vscan;</synopsis>
-                <para>Unknown</para>
-              </listitem>
-              <listitem>
-                <synopsis>__u32 flags;</synopsis>
-                <para>
-                  Mode flags, a combination of
-                  <variablelist>
-                    <varlistentry>
-                      <term>DRM_MODE_FLAG_PHSYNC</term>
-                      <listitem><para>
-                        Horizontal sync is active high
-                      </para></listitem>
-                    </varlistentry>
-                    <varlistentry>
-                      <term>DRM_MODE_FLAG_NHSYNC</term>
-                      <listitem><para>
-                        Horizontal sync is active low
-                      </para></listitem>
-                    </varlistentry>
-                    <varlistentry>
-                      <term>DRM_MODE_FLAG_PVSYNC</term>
-                      <listitem><para>
-                        Vertical sync is active high
-                      </para></listitem>
-                    </varlistentry>
-                    <varlistentry>
-                      <term>DRM_MODE_FLAG_NVSYNC</term>
-                      <listitem><para>
-                        Vertical sync is active low
-                      </para></listitem>
-                    </varlistentry>
-                    <varlistentry>
-                      <term>DRM_MODE_FLAG_INTERLACE</term>
-                      <listitem><para>
-                        Mode is interlaced
-                      </para></listitem>
-                    </varlistentry>
-                    <varlistentry>
-                      <term>DRM_MODE_FLAG_DBLSCAN</term>
-                      <listitem><para>
-                        Mode uses doublescan
-                      </para></listitem>
-                    </varlistentry>
-                    <varlistentry>
-                      <term>DRM_MODE_FLAG_CSYNC</term>
-                      <listitem><para>
-                        Mode uses composite sync
-                      </para></listitem>
-                    </varlistentry>
-                    <varlistentry>
-                      <term>DRM_MODE_FLAG_PCSYNC</term>
-                      <listitem><para>
-                        Composite sync is active high
-                      </para></listitem>
-                    </varlistentry>
-                    <varlistentry>
-                      <term>DRM_MODE_FLAG_NCSYNC</term>
-                      <listitem><para>
-                        Composite sync is active low
-                      </para></listitem>
-                    </varlistentry>
-                    <varlistentry>
-                      <term>DRM_MODE_FLAG_HSKEW</term>
-                      <listitem><para>
-                        hskew provided (not used?)
-                      </para></listitem>
-                    </varlistentry>
-                    <varlistentry>
-                      <term>DRM_MODE_FLAG_BCAST</term>
-                      <listitem><para>
-                        not used?
-                      </para></listitem>
-                    </varlistentry>
-                    <varlistentry>
-                      <term>DRM_MODE_FLAG_PIXMUX</term>
-                      <listitem><para>
-                        not used?
-                      </para></listitem>
-                    </varlistentry>
-                    <varlistentry>
-                      <term>DRM_MODE_FLAG_DBLCLK</term>
-                      <listitem><para>
-                        not used?
-                      </para></listitem>
-                    </varlistentry>
-                    <varlistentry>
-                      <term>DRM_MODE_FLAG_CLKDIV2</term>
-                      <listitem><para>
-                        ?
-                      </para></listitem>
-                    </varlistentry>
-                  </variablelist>
-                </para>
-                <para>
-                  Note that modes marked with the INTERLACE or DBLSCAN flags will be
-                  filtered out by
-                  <function>drm_helper_probe_single_connector_modes</function> if
-                  the connector's <structfield>interlace_allowed</structfield> or
-                  <structfield>doublescan_allowed</structfield> field is set to 0.
-                </para>
-              </listitem>
-              <listitem>
-                <synopsis>char name[DRM_DISPLAY_MODE_LEN];</synopsis>
-                <para>
-                  Mode name. The driver must call
-                  <function>drm_mode_set_name</function> to fill the mode name from
-                  <structfield>hdisplay</structfield>,
-                  <structfield>vdisplay</structfield> and interlace flag after
-                  filling the corresponding fields.
-                </para>
-              </listitem>
-            </itemizedlist>
-          </para>
-          <para>
-            The <structfield>vrefresh</structfield> value is computed by
-            <function>drm_helper_probe_single_connector_modes</function>.
-          </para>
-          <para>
-            When parsing EDID data, <function>drm_add_edid_modes</function> fills the
-            connector <structfield>display_info</structfield>
-            <structfield>width_mm</structfield> and
-            <structfield>height_mm</structfield> fields. When creating modes
-            manually the <methodname>get_modes</methodname> helper operation must
-            set the <structfield>display_info</structfield>
-            <structfield>width_mm</structfield> and
-            <structfield>height_mm</structfield> fields if they haven't been set
-            already (for instance at initialization time when a fixed-size panel is
-            attached to the connector). The mode <structfield>width_mm</structfield>
-            and <structfield>height_mm</structfield> fields are only used internally
-            during EDID parsing and should not be set when creating modes manually.
-          </para>
-        </listitem>
-        <listitem>
-          <synopsis>int (*mode_valid)(struct drm_connector *connector,
-		  struct drm_display_mode *mode);</synopsis>
-          <para>
-            Verify whether a mode is valid for the connector. Return MODE_OK for
-            supported modes and one of the enum drm_mode_status values (MODE_*)
-            for unsupported modes. This operation is optional.
-          </para>
-          <para>
-            As the mode rejection reason is currently not used beside for
-            immediately removing the unsupported mode, an implementation can
-            return MODE_BAD regardless of the exact reason why the mode is not
-            valid.
-          </para>
-          <note><para>
-            Note that the <methodname>mode_valid</methodname> helper operation is
-            only called for modes detected by the device, and
-            <emphasis>not</emphasis> for modes set by the user through the CRTC
-            <methodname>set_config</methodname> operation.
-          </para></note>
-        </listitem>
-      </itemizedlist>
-    </sect2>
     <sect2>
       <title>Atomic Modeset Helper Functions Reference</title>
       <sect3>
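
Most of the removed connector text concerns get_modes. Typical implementations are far shorter than the description: parse EDID when available, otherwise either return 0 and let the probe helper add the standard VESA DMT modes, or add them explicitly. A sketch, with foo_read_edid() standing in for the driver's hypothetical DDC access:

static int foo_connector_get_modes(struct drm_connector *connector)
{
	struct edid *edid = foo_read_edid(connector);
	int count;

	if (edid) {
		drm_mode_connector_update_edid_property(connector, edid);
		count = drm_add_edid_modes(connector, edid);
		kfree(edid);
		return count;
	}

	/* No EDID: add standard modes up to 1024x768, as the probe
	 * helper would do for a connected connector returning 0. */
	return drm_add_modes_noedid(connector, 1024, 768);
}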
@@ -2303,8 +1596,12 @@ void intel_crt_init(struct drm_device *dev)
 !Edrivers/gpu/drm/drm_atomic_helper.c
     </sect2>
     <sect2>
-      <title>Modeset Helper Functions Reference</title>
-!Iinclude/drm/drm_crtc_helper.h
+      <title>Modeset Helper Reference for Common Vtables</title>
+!Iinclude/drm/drm_modeset_helper_vtables.h
+!Pinclude/drm/drm_modeset_helper_vtables.h overview
+    </sect2>
+    <sect2>
+      <title>Legacy CRTC/Modeset Helper Functions Reference</title>
 !Edrivers/gpu/drm/drm_crtc_helper.c
 !Pdrivers/gpu/drm/drm_crtc_helper.c overview
     </sect2>
@@ -4015,92 +3312,6 @@ int num_ioctls;</synopsis>
       <sect2>
         <title>DPIO</title>
 !Pdrivers/gpu/drm/i915/i915_reg.h DPIO
-	<table id="dpiox2">
-	  <title>Dual channel PHY (VLV/CHV/BXT)</title>
-	  <tgroup cols="8">
-	    <colspec colname="c0" />
-	    <colspec colname="c1" />
-	    <colspec colname="c2" />
-	    <colspec colname="c3" />
-	    <colspec colname="c4" />
-	    <colspec colname="c5" />
-	    <colspec colname="c6" />
-	    <colspec colname="c7" />
-	    <spanspec spanname="ch0" namest="c0" nameend="c3" />
-	    <spanspec spanname="ch1" namest="c4" nameend="c7" />
-	    <spanspec spanname="ch0pcs01" namest="c0" nameend="c1" />
-	    <spanspec spanname="ch0pcs23" namest="c2" nameend="c3" />
-	    <spanspec spanname="ch1pcs01" namest="c4" nameend="c5" />
-	    <spanspec spanname="ch1pcs23" namest="c6" nameend="c7" />
-	    <thead>
-	      <row>
-		<entry spanname="ch0">CH0</entry>
-		<entry spanname="ch1">CH1</entry>
-	      </row>
-	    </thead>
-	    <tbody valign="top" align="center">
-	      <row>
-		<entry spanname="ch0">CMN/PLL/REF</entry>
-		<entry spanname="ch1">CMN/PLL/REF</entry>
-	      </row>
-	      <row>
-		<entry spanname="ch0pcs01">PCS01</entry>
-		<entry spanname="ch0pcs23">PCS23</entry>
-		<entry spanname="ch1pcs01">PCS01</entry>
-		<entry spanname="ch1pcs23">PCS23</entry>
-	      </row>
-	      <row>
-		<entry>TX0</entry>
-		<entry>TX1</entry>
-		<entry>TX2</entry>
-		<entry>TX3</entry>
-		<entry>TX0</entry>
-		<entry>TX1</entry>
-		<entry>TX2</entry>
-		<entry>TX3</entry>
-	      </row>
-	      <row>
-		<entry spanname="ch0">DDI0</entry>
-		<entry spanname="ch1">DDI1</entry>
-	      </row>
-	    </tbody>
-	  </tgroup>
-	</table>
-	<table id="dpiox1">
-	  <title>Single channel PHY (CHV/BXT)</title>
-	  <tgroup cols="4">
-	    <colspec colname="c0" />
-	    <colspec colname="c1" />
-	    <colspec colname="c2" />
-	    <colspec colname="c3" />
-	    <spanspec spanname="ch0" namest="c0" nameend="c3" />
-	    <spanspec spanname="ch0pcs01" namest="c0" nameend="c1" />
-	    <spanspec spanname="ch0pcs23" namest="c2" nameend="c3" />
-	    <thead>
-	      <row>
-		<entry spanname="ch0">CH0</entry>
-	      </row>
-	    </thead>
-	    <tbody valign="top" align="center">
-	      <row>
-		<entry spanname="ch0">CMN/PLL/REF</entry>
-	      </row>
-	      <row>
-		<entry spanname="ch0pcs01">PCS01</entry>
-		<entry spanname="ch0pcs23">PCS23</entry>
-	      </row>
-	      <row>
-		<entry>TX0</entry>
-		<entry>TX1</entry>
-		<entry>TX2</entry>
-		<entry>TX3</entry>
-	      </row>
-	      <row>
-		<entry spanname="ch0">DDI2</entry>
-	      </row>
-	    </tbody>
-	  </tgroup>
-	</table>
       </sect2>
 
       <sect2>
@@ -4226,41 +3437,63 @@ int num_ioctls;</synopsis>
 
 
   <chapter id="modes_of_use">
     <title>Modes of Use</title>
-  <sect1>
-    <title>Manual switching and manual power control</title>
+    <sect1>
+      <title>Manual switching and manual power control</title>
 !Pdrivers/gpu/vga/vga_switcheroo.c Manual switching and manual power control
-  </sect1>
-  <sect1>
-    <title>Driver power control</title>
+    </sect1>
+    <sect1>
+      <title>Driver power control</title>
 !Pdrivers/gpu/vga/vga_switcheroo.c Driver power control
-  </sect1>
+    </sect1>
   </chapter>
 
-  <chapter id="pubfunctions">
-    <title>Public functions</title>
+  <chapter id="api">
+    <title>API</title>
+    <sect1>
+      <title>Public functions</title>
 !Edrivers/gpu/vga/vga_switcheroo.c
-  </chapter>
-
-  <chapter id="pubstructures">
-    <title>Public structures</title>
+    </sect1>
+    <sect1>
+      <title>Public structures</title>
 !Finclude/linux/vga_switcheroo.h vga_switcheroo_handler
 !Finclude/linux/vga_switcheroo.h vga_switcheroo_client_ops
-  </chapter>
-
-  <chapter id="pubconstants">
-    <title>Public constants</title>
+    </sect1>
+    <sect1>
+      <title>Public constants</title>
 !Finclude/linux/vga_switcheroo.h vga_switcheroo_client_id
 !Finclude/linux/vga_switcheroo.h vga_switcheroo_state
-  </chapter>
-
-  <chapter id="privstructures">
-    <title>Private structures</title>
+    </sect1>
+    <sect1>
+      <title>Private structures</title>
 !Fdrivers/gpu/vga/vga_switcheroo.c vgasr_priv
 !Fdrivers/gpu/vga/vga_switcheroo.c vga_switcheroo_client
+    </sect1>
+  </chapter>
+
+  <chapter id="handlers">
+    <title>Handlers</title>
+    <sect1>
+      <title>apple-gmux Handler</title>
+!Pdrivers/platform/x86/apple-gmux.c Overview
+!Pdrivers/platform/x86/apple-gmux.c Interrupt
+      <sect2>
+        <title>Graphics mux</title>
+!Pdrivers/platform/x86/apple-gmux.c Graphics mux
+      </sect2>
+      <sect2>
+        <title>Power control</title>
+!Pdrivers/platform/x86/apple-gmux.c Power control
+      </sect2>
+      <sect2>
+        <title>Backlight control</title>
+!Pdrivers/platform/x86/apple-gmux.c Backlight control
+      </sect2>
+    </sect1>
   </chapter>

 !Cdrivers/gpu/vga/vga_switcheroo.c
 !Cinclude/linux/vga_switcheroo.h
+!Cdrivers/platform/x86/apple-gmux.c
 </part>

 </book>

+ 54 - 0
Documentation/devicetree/bindings/display/etnaviv/etnaviv-drm.txt

@@ -0,0 +1,54 @@
+Etnaviv DRM master device
+=========================
+
+The Etnaviv DRM master device is a virtual device needed to list all
+Vivante GPU cores that comprise the GPU subsystem.
+
+Required properties:
+- compatible: Should be one of
+    "fsl,imx-gpu-subsystem"
+    "marvell,dove-gpu-subsystem"
+- cores: Should contain a list of phandles pointing to Vivante GPU devices
+
+example:
+
+gpu-subsystem {
+	compatible = "fsl,imx-gpu-subsystem";
+	cores = <&gpu_2d>, <&gpu_3d>;
+};
+
+
+Vivante GPU core devices
+========================
+
+Required properties:
+- compatible: Should be "vivante,gc"
+  A more specific compatible is not needed, as the cores contain chip
+  identification registers at fixed locations, which provide all the
+  necessary information to the driver.
+- reg: should be register base and length as documented in the
+  datasheet
+- interrupts: Should contain the core's interrupt line
+- clocks: should contain one clock for each entry in clock-names
+  see Documentation/devicetree/bindings/clock/clock-bindings.txt
+- clock-names:
+   - "bus":    AXI/register clock
+   - "core":   GPU core clock
+   - "shader": Shader clock (only required if GPU has feature PIPE_3D)
+
+Optional properties:
+- power-domains: a power domain consumer specifier according to
+  Documentation/devicetree/bindings/power/power_domain.txt
+
+example:
+
+gpu_3d: gpu@00130000 {
+	compatible = "vivante,gc";
+	reg = <0x00130000 0x4000>;
+	interrupts = <0 9 IRQ_TYPE_LEVEL_HIGH>;
+	clocks = <&clks IMX6QDL_CLK_GPU3D_AXI>,
+	         <&clks IMX6QDL_CLK_GPU3D_CORE>,
+	         <&clks IMX6QDL_CLK_GPU3D_SHADER>;
+	clock-names = "bus", "core", "shader";
+	power-domains = <&gpc 1>;
+};

+ 37 - 4
Documentation/devicetree/bindings/display/exynos/exynos_dp.txt

@@ -1,3 +1,20 @@
+Device-Tree bindings for the Samsung Exynos Embedded DisplayPort Transmitter (eDP)
+
+DisplayPort is an industry standard designed to accommodate the growing broad
+adoption of digital display technology within the PC and CE industries.
+It consolidates internal and external connection methods to reduce device
+complexity and cost. It also supports the features needed by important
+cross-industry applications, and provides performance scalability to enable
+the next generation of displays with higher color depths, refresh rates, and
+display resolutions.
+
+The eDP (embedded DisplayPort) device is compliant with the Embedded
+DisplayPort standard as follows:
+- DisplayPort standard 1.1a for Exynos5250 and Exynos5260.
+- DisplayPort standard 1.3 for Exynos5422s and Exynos5800.
+
+eDP resides between FIMD and a panel, or between FIMD and a bridge such as LVDS.
+
 The Exynos display port interface should be configured based on
 the type of panel connected to it.

@@ -66,8 +83,15 @@ Optional properties for dp-controller:
 		Hotplug detect GPIO.
 			Indicates which GPIO should be used for hotplug
 			detection
-	-video interfaces: Device node can contain video interface port
-			    nodes according to [1].
+Video interfaces:
+  Device node can contain video interface port nodes according to [1].
+  The following are properties specific to those nodes:
+
+  endpoint node connected to a bridge or panel node:
+   - remote-endpoint: specifies the endpoint in the panel or bridge node.
+		      This property is required in all Exynos DP nodes to
+		      represent the connection between the DP controller and
+		      the bridge or panel.

 [1]: Documentation/devicetree/bindings/media/video-interfaces.txt

@@ -111,9 +135,18 @@ Board Specific portion:
 		};

 		ports {
-			port@0 {
+			port {
 				dp_out: endpoint {
-					remote-endpoint = <&bridge_in>;
+					remote-endpoint = <&dp_in>;
+				};
+			};
+		};
+
+		panel {
+			...
+			port {
+				dp_in: endpoint {
+					remote-endpoint = <&dp_out>;
 				};
 			};
 		};

+ 8 - 4
Documentation/devicetree/bindings/display/msm/dsi.txt

@@ -14,17 +14,20 @@ Required properties:
 - clocks: device clocks
   See Documentation/devicetree/bindings/clocks/clock-bindings.txt for details.
 - clock-names: the following clocks are required:
+  * "mdp_core_clk"
+  * "iface_clk"
   * "bus_clk"
   * "bus_clk"
-  * "byte_clk"
-  * "core_clk"
   * "core_mmss_clk"
   * "core_mmss_clk"
-  * "iface_clk"
-  * "mdp_core_clk"
+  * "byte_clk"
   * "pixel_clk"
   * "pixel_clk"
+  * "core_clk"
+  For DSIv2, we need an additional clock:
+   * "src_clk"
 - vdd-supply: phandle to vdd regulator device node
 - vddio-supply: phandle to vdd-io regulator device node
 - vdda-supply: phandle to vdda regulator device node
 - qcom,dsi-phy: phandle to DSI PHY device node
+- syscon-sfpb: A phandle to mmss_sfpb syscon node (only for DSIv2)

 Optional properties:
 - panel@0: Node of panel connected to this DSI controller.
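
For illustration, a minimal sketch of the DSIv2 clock and syscon wiring
described above; the node name, unit address and the &mmss_sfpb label are
assumptions, not taken from a real board file:

	dsi0: dsi@4700000 {
		/* compatible, reg, interrupts and supplies elided */
		clock-names = "mdp_core_clk", "iface_clk", "bus_clk",
			      "core_mmss_clk", "byte_clk", "pixel_clk",
			      "core_clk", "src_clk"; /* "src_clk" is DSIv2-only */
		syscon-sfpb = <&mmss_sfpb>;        /* DSIv2-only */
	};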
@@ -51,6 +54,7 @@ Required properties:
   * "qcom,dsi-phy-28nm-hpm"
   * "qcom,dsi-phy-28nm-hpm"
   * "qcom,dsi-phy-28nm-lp"
   * "qcom,dsi-phy-28nm-lp"
   * "qcom,dsi-phy-20nm"
   * "qcom,dsi-phy-20nm"
+  * "qcom,dsi-phy-28nm-8960"
 - reg: Physical base address and length of the registers of PLL, PHY and PHY
   regulator
 - reg-names: The names of register regions. The following regions are required:

+ 18 - 8
Documentation/devicetree/bindings/display/msm/mdp.txt

@@ -2,18 +2,28 @@ Qualcomm adreno/snapdragon display controller

 Required properties:
 - compatible:
-  * "qcom,mdp" - mdp4
+  * "qcom,mdp4" - mdp4
+  * "qcom,mdp5" - mdp5
 - reg: Physical base address and length of the controller's registers.
 - interrupts: The interrupt signal from the display controller.
 - connectors: array of phandles for output device(s)
 - clocks: device clocks
   See ../clocks/clock-bindings.txt for details.
-- clock-names: the following clocks are required:
-  * "core_clk"
-  * "iface_clk"
-  * "src_clk"
-  * "hdmi_clk"
-  * "mpd_clk"
+- clock-names: the following clocks are required.
+  For MDP4:
+   * "core_clk"
+   * "iface_clk"
+   * "lut_clk"
+   * "src_clk"
+   * "hdmi_clk"
+   * "mdp_clk"
+  For MDP5:
+   * "bus_clk"
+   * "iface_clk"
+   * "core_clk_src"
+   * "core_clk"
+   * "lut_clk" (some MDP5 versions may not need this)
+   * "vsync_clk"
 
 
 Optional properties:
 Optional properties:
 - gpus: phandle for gpu device
 - gpus: phandle for gpu device
@@ -26,7 +36,7 @@ Example:
 	...

 	mdp: qcom,mdp@5100000 {
-		compatible = "qcom,mdp";
+		compatible = "qcom,mdp4";
 		reg = <0x05100000 0xf0000>;
 		interrupts = <GIC_SPI 75 0>;
 		connectors = <&hdmi>;
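
For illustration, a hedged MDP5 counterpart to the MDP4 example above; the
unit address is an assumption and only the clock pairing is taken from the
property list:

	mdp: qcom,mdp@1a01000 {
		compatible = "qcom,mdp5";
		/* reg, interrupts and connectors as for MDP4 */
		clock-names = "bus_clk", "iface_clk", "core_clk_src",
			      "core_clk", "lut_clk", "vsync_clk";
	};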

+ 7 - 0
Documentation/devicetree/bindings/display/panel/boe,tv080wum-nl0.txt

@@ -0,0 +1,7 @@
+Boe Corporation 8.0" WUXGA TFT LCD panel
+
+Required properties:
+- compatible: should be "boe,tv080wum-nl0"
+
+This binding is compatible with the simple-panel binding, which is specified
+in simple-panel.txt in this directory.
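
For illustration, a minimal hedged node using this compatible; the regulator
and backlight labels are assumptions, the optional properties come from
simple-panel.txt, and the same pattern applies to the other simple-panel
bindings added below:

	panel {
		compatible = "boe,tv080wum-nl0";
		power-supply = <&panel_vcc>;	/* optional, per simple-panel */
		backlight = <&backlight>;	/* optional, per simple-panel */
	};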

+ 7 - 0
Documentation/devicetree/bindings/display/panel/innolux,g121x1-l03.txt

@@ -0,0 +1,7 @@
+Innolux Corporation 12.1" G121X1-L03 XGA (1024x768) TFT LCD panel
+
+Required properties:
+- compatible: should be "innolux,g121x1-l03"
+
+This binding is compatible with the simple-panel binding, which is specified
+in simple-panel.txt in this directory.

+ 7 - 0
Documentation/devicetree/bindings/display/panel/kyo,tcg121xglp.txt

@@ -0,0 +1,7 @@
+Kyocera Corporation 12.1" XGA (1024x768) TFT LCD panel
+
+Required properties:
+- compatible: should be "kyo,tcg121xglp"
+
+This binding is compatible with the simple-panel binding, which is specified
+in simple-panel.txt in this directory.

+ 20 - 0
Documentation/devicetree/bindings/display/panel/panasonic,vvx10f034n00.txt

@@ -0,0 +1,20 @@
+Panasonic 10" WUXGA TFT LCD panel
+
+Required properties:
+- compatible: should be "panasonic,vvx10f034n00"
+- reg: DSI virtual channel of the peripheral
+- power-supply: phandle of the regulator that provides the supply voltage
+
+Optional properties:
+- backlight: phandle of the backlight device attached to the panel
+
+Example:
+
+	mdss_dsi@fd922800 {
+		panel@0 {
+			compatible = "panasonic,vvx10f034n00";
+			reg = <0>;
+			power-supply = <&vreg_vsp>;
+			backlight = <&lp8566_wled>;
+		};
+	};

+ 7 - 0
Documentation/devicetree/bindings/display/panel/qiaodian,qd43003c0-40.txt

@@ -0,0 +1,7 @@
+QiaoDian XianShi Corporation 4"3 TFT LCD panel
+
+Required properties:
+- compatible: should be "qiaodian,qd43003c0-40"
+
+This binding is compatible with the simple-panel binding, which is specified
+in simple-panel.txt in this directory.

+ 22 - 0
Documentation/devicetree/bindings/display/panel/sharp,ls043t1le01.txt

@@ -0,0 +1,22 @@
+Sharp Microelectronics 4.3" qHD TFT LCD panel
+
+Required properties:
+- compatible: should be "sharp,ls043t1le01-qhd"
+- reg: DSI virtual channel of the peripheral
+- power-supply: phandle of the regulator that provides the supply voltage
+
+Optional properties:
+- backlight: phandle of the backlight device attached to the panel
+- reset-gpios: a GPIO spec for the reset pin
+
+Example:
+
+	mdss_dsi@fd922800 {
+		panel@0 {
+			compatible = "sharp,ls043t1le01-qhd";
+			reg = <0>;
+			power-supply = <&pm8941_l22>;
+			backlight = <&pm8941_wled>;
+			reset-gpios = <&pm8941_gpios 19 GPIO_ACTIVE_HIGH>;
+		};
+	};

+ 60 - 0
Documentation/devicetree/bindings/display/rockchip/dw_mipi_dsi_rockchip.txt

@@ -0,0 +1,60 @@
+Rockchip specific extensions to the Synopsys DesignWare MIPI DSI
+================================================================
+
+Required properties:
+- #address-cells: Should be <1>.
+- #size-cells: Should be <0>.
+- compatible: "rockchip,rk3288-mipi-dsi", "snps,dw-mipi-dsi".
+- reg: Represent the physical address range of the controller.
+- interrupts: Represent the controller's interrupt to the CPU(s).
+- clocks, clock-names: phandles to the controller's PLL reference
+  clock ("ref") and APB clock ("pclk"), as described in [1].
+- rockchip,grf: phandle to the GRF syscon; the GRF registers are used to
+  mux between vopb and vopl.
+- ports: should contain a port node with endpoint definitions as defined
+  in [2]. Use reg = <0> for the vopb endpoint and reg = <1> for the vopl
+  endpoint.
+
+[1] Documentation/devicetree/bindings/clock/clock-bindings.txt
+[2] Documentation/devicetree/bindings/media/video-interfaces.txt
+
+Example:
+	mipi_dsi: mipi@ff960000 {
+		#address-cells = <1>;
+		#size-cells = <0>;
+		compatible = "rockchip,rk3288-mipi-dsi", "snps,dw-mipi-dsi";
+		reg = <0xff960000 0x4000>;
+		interrupts = <GIC_SPI 83 IRQ_TYPE_LEVEL_HIGH>;
+		clocks = <&cru SCLK_MIPI_24M>, <&cru PCLK_MIPI_DSI0>;
+		clock-names = "ref", "pclk";
+		rockchip,grf = <&grf>;
+		status = "okay";
+
+		ports {
+			#address-cells = <1>;
+			#size-cells = <0>;
+			reg = <1>;
+
+			mipi_in: port {
+				#address-cells = <1>;
+				#size-cells = <0>;
+				mipi_in_vopb: endpoint@0 {
+					reg = <0>;
+					remote-endpoint = <&vopb_out_mipi>;
+				};
+				mipi_in_vopl: endpoint@1 {
+					reg = <1>;
+					remote-endpoint = <&vopl_out_mipi>;
+				};
+			};
+		};
+
+		panel {
+			compatible ="boe,tv080wum-nl0";
+			reg = <0>;
+
+			enable-gpios = <&gpio7 3 GPIO_ACTIVE_HIGH>;
+			pinctrl-names = "default";
+			pinctrl-0 = <&lcd_en>;
+			backlight = <&backlight>;
+			status = "okay";
+		};
+	};

+ 1 - 0
Documentation/devicetree/bindings/display/rockchip/rockchip-vop.txt

@@ -7,6 +7,7 @@ buffer to an external LCD interface.
 Required properties:
 - compatible: value should be one of the following
 		"rockchip,rk3288-vop";
+		"rockchip,rk3036-vop";
 
 
 - interrupts: should contain a list of all VOP IP block interrupts in the
 - interrupts: should contain a list of all VOP IP block interrupts in the
 		 order: VSYNC, LCD_SYSTEM. The interrupt specifier
 		 order: VSYNC, LCD_SYSTEM. The interrupt specifier

+ 4 - 0
Documentation/devicetree/bindings/media/exynos5-gsc.txt

@@ -7,6 +7,10 @@ Required properties:
 - reg: should contain G-Scaler physical address location and length.
 - interrupts: should contain G-Scaler interrupt number

+Optional properties:
+- samsung,sysreg: phandle to the syscon used to control the system registers
+  that set the writeback input and destination
+
 Example:

 gsc_0:  gsc@0x13e00000 {
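
The hunk ends before the body of the example, so for illustration a hedged
fragment showing the new optional property in context; the &sys_reg label is
an assumption:

	gsc_0: gsc@13e00000 {
		/* reg and interrupts as in the full example */
		samsung,sysreg = <&sys_reg>;
	};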

+ 4 - 0
Documentation/devicetree/bindings/vendor-prefixes.txt

@@ -33,6 +33,7 @@ auo	AU Optronics Corporation
 avago	Avago Technologies
 avic	Shanghai AVIC Optoelectronics Co., Ltd.
 axis	Axis Communications AB
+boe	BOE Technology Group Co., Ltd.
 bosch	Bosch Sensortec GmbH
 boundary	Boundary Devices Inc.
 brcm	Broadcom Corporation
@@ -123,6 +124,7 @@ jedec	JEDEC Solid State Technology Association
 karo	Ka-Ro electronics GmbH
 keymile	Keymile GmbH
 kinetic Kinetic Technologies
+kyo	Kyocera Corporation
 lacie	LaCie
 lantiq	Lantiq Semiconductor
 lenovo	Lenovo Group Ltd.
@@ -181,6 +183,7 @@ qca	Qualcomm Atheros, Inc.
 qcom	Qualcomm Technologies, Inc
 qemu	QEMU, a generic and open source machine emulator and virtualizer
 qi	Qi Hardware
+qiaodian	QiaoDian XianShi Corporation
 qnap	QNAP Systems, Inc.
 radxa	Radxa
 raidsonic	RaidSonic Technology GmbH
@@ -239,6 +242,7 @@ v3	V3 Semiconductor
 variscite	Variscite Ltd.
 via	VIA Technologies, Inc.
 virtio	Virtual I/O Device Specification, developed by the OASIS consortium
+vivante	Vivante Corporation
 voipac	Voipac Technologies s.r.o.
 wexler	Wexler
 winbond Winbond Electronics corp.

+ 9 - 0
MAINTAINERS

@@ -3768,6 +3768,15 @@ S:	Maintained
 F:	drivers/gpu/drm/sti
 F:	Documentation/devicetree/bindings/display/st,stih4xx.txt

+DRM DRIVERS FOR VIVANTE GPU IP
+M:	Lucas Stach <l.stach@pengutronix.de>
+R:	Russell King <linux+etnaviv@arm.linux.org.uk>
+R:	Christian Gmeiner <christian.gmeiner@gmail.com>
+L:	dri-devel@lists.freedesktop.org
+S:	Maintained
+F:	drivers/gpu/drm/etnaviv
+F:	Documentation/devicetree/bindings/display/etnaviv
+
 DSBR100 USB FM RADIO DRIVER
 M:	Alexey Klimov <klimov.linux@gmail.com>
 L:	linux-media@vger.kernel.org

+ 14 - 1
arch/arm/boot/dts/exynos5800-peach-pi.dts

@@ -122,6 +122,12 @@
 		compatible = "auo,b133htn01";
 		compatible = "auo,b133htn01";
 		power-supply = <&tps65090_fet6>;
 		power-supply = <&tps65090_fet6>;
 		backlight = <&backlight>;
 		backlight = <&backlight>;
+
+		port {
+			panel_in: endpoint {
+				remote-endpoint = <&dp_out>;
+			};
+		};
 	};

 	mmc1_pwrseq: mmc1_pwrseq {
@@ -148,7 +154,14 @@
 	samsung,link-rate = <0x0a>;
 	samsung,lane-count = <2>;
 	samsung,hpd-gpio = <&gpx2 6 GPIO_ACTIVE_HIGH>;
-	panel = <&panel>;
+
+	ports {
+		port {
+			dp_out: endpoint {
+				remote-endpoint = <&panel_in>;
+			};
+		};
+	};
 };

 &fimd {

+ 3 - 0
drivers/gpu/drm/Kconfig

@@ -160,6 +160,7 @@ config DRM_AMDGPU
 	  If M is selected, the module will be called amdgpu.

 source "drivers/gpu/drm/amd/amdgpu/Kconfig"
+source "drivers/gpu/drm/amd/powerplay/Kconfig"

 source "drivers/gpu/drm/nouveau/Kconfig"

@@ -266,3 +267,5 @@ source "drivers/gpu/drm/amd/amdkfd/Kconfig"
 source "drivers/gpu/drm/imx/Kconfig"
 source "drivers/gpu/drm/imx/Kconfig"
 
 
 source "drivers/gpu/drm/vc4/Kconfig"
 source "drivers/gpu/drm/vc4/Kconfig"
+
+source "drivers/gpu/drm/etnaviv/Kconfig"

+ 1 - 0
drivers/gpu/drm/Makefile

@@ -75,3 +75,4 @@ obj-y			+= i2c/
 obj-y			+= panel/
 obj-y			+= bridge/
 obj-$(CONFIG_DRM_FSL_DCU) += fsl-dcu/
+obj-$(CONFIG_DRM_ETNAVIV) += etnaviv/

+ 16 - 4
drivers/gpu/drm/amd/amdgpu/Makefile

@@ -2,10 +2,13 @@
 # Makefile for the drm device driver.  This driver provides support for the
 # Direct Rendering Infrastructure (DRI) in XFree86 4.1.0 and higher.

-ccflags-y := -Iinclude/drm -Idrivers/gpu/drm/amd/include/asic_reg \
-	-Idrivers/gpu/drm/amd/include \
-	-Idrivers/gpu/drm/amd/amdgpu \
-	-Idrivers/gpu/drm/amd/scheduler
+FULL_AMD_PATH=$(src)/..
+
+ccflags-y := -Iinclude/drm -I$(FULL_AMD_PATH)/include/asic_reg \
+	-I$(FULL_AMD_PATH)/include \
+	-I$(FULL_AMD_PATH)/amdgpu \
+	-I$(FULL_AMD_PATH)/scheduler \
+	-I$(FULL_AMD_PATH)/powerplay/inc

 amdgpu-y := amdgpu_drv.o

@@ -44,6 +47,7 @@ amdgpu-y += \
 # add SMC block
 amdgpu-y += \
 	amdgpu_dpm.o \
+	amdgpu_powerplay.o \
 	cz_smc.o cz_dpm.o \
 	tonga_smc.o tonga_dpm.o \
 	fiji_smc.o fiji_dpm.o \
@@ -94,6 +98,14 @@ amdgpu-$(CONFIG_VGA_SWITCHEROO) += amdgpu_atpx_handler.o
 amdgpu-$(CONFIG_ACPI) += amdgpu_acpi.o
 amdgpu-$(CONFIG_MMU_NOTIFIER) += amdgpu_mn.o

+ifneq ($(CONFIG_DRM_AMD_POWERPLAY),)
+
+include $(FULL_AMD_PATH)/powerplay/Makefile
+
+amdgpu-y += $(AMD_POWERPLAY_FILES)
+
+endif
+
 obj-$(CONFIG_DRM_AMDGPU)+= amdgpu.o

 CFLAGS_amdgpu_trace_points.o := -I$(src)

+ 93 - 47
drivers/gpu/drm/amd/amdgpu/amdgpu.h

@@ -52,6 +52,7 @@
 #include "amdgpu_irq.h"
 #include "amdgpu_irq.h"
 #include "amdgpu_ucode.h"
 #include "amdgpu_ucode.h"
 #include "amdgpu_gds.h"
 #include "amdgpu_gds.h"
+#include "amd_powerplay.h"
 
 
 #include "gpu_scheduler.h"
 #include "gpu_scheduler.h"
 
 
@@ -85,6 +86,7 @@ extern int amdgpu_enable_scheduler;
 extern int amdgpu_sched_jobs;
 extern int amdgpu_sched_hw_submission;
 extern int amdgpu_enable_semaphores;
+extern int amdgpu_powerplay;

 #define AMDGPU_WAIT_IDLE_TIMEOUT_IN_MS	        3000
 #define AMDGPU_MAX_USEC_TIMEOUT			100000	/* 100 ms */
@@ -918,8 +920,8 @@ struct amdgpu_ring {
 #define AMDGPU_VM_FAULT_STOP_ALWAYS	2

 struct amdgpu_vm_pt {
-	struct amdgpu_bo	*bo;
-	uint64_t		addr;
+	struct amdgpu_bo_list_entry	entry;
+	uint64_t			addr;
 };

 struct amdgpu_vm_id {
@@ -981,9 +983,12 @@ struct amdgpu_vm_manager {
 void amdgpu_vm_manager_fini(struct amdgpu_device *adev);
 int amdgpu_vm_init(struct amdgpu_device *adev, struct amdgpu_vm *vm);
 void amdgpu_vm_fini(struct amdgpu_device *adev, struct amdgpu_vm *vm);
-struct amdgpu_bo_list_entry *amdgpu_vm_get_bos(struct amdgpu_device *adev,
-					       struct amdgpu_vm *vm,
-					       struct list_head *head);
+void amdgpu_vm_get_pd_bo(struct amdgpu_vm *vm,
+			 struct list_head *validated,
+			 struct amdgpu_bo_list_entry *entry);
+void amdgpu_vm_get_pt_bos(struct amdgpu_vm *vm, struct list_head *duplicates);
+void amdgpu_vm_move_pt_bos_in_lru(struct amdgpu_device *adev,
+				  struct amdgpu_vm *vm);
 int amdgpu_vm_grab_id(struct amdgpu_vm *vm, struct amdgpu_ring *ring,
 		      struct amdgpu_sync *sync);
 void amdgpu_vm_flush(struct amdgpu_ring *ring,
@@ -1024,11 +1029,9 @@ int amdgpu_vm_free_job(struct amdgpu_job *job);
  * context related structures
  */

-#define AMDGPU_CTX_MAX_CS_PENDING	16
-
 struct amdgpu_ctx_ring {
 	uint64_t		sequence;
-	struct fence		*fences[AMDGPU_CTX_MAX_CS_PENDING];
+	struct fence		**fences;
 	struct amd_sched_entity	entity;
 };

@@ -1037,6 +1040,7 @@ struct amdgpu_ctx {
 	struct amdgpu_device    *adev;
 	unsigned		reset_counter;
 	spinlock_t		ring_lock;
+	struct fence            **fences;
 	struct amdgpu_ctx_ring	rings[AMDGPU_MAX_RINGS];
 };

@@ -1047,7 +1051,7 @@ struct amdgpu_ctx_mgr {
 	struct idr		ctx_handles;
 };

-int amdgpu_ctx_init(struct amdgpu_device *adev, bool kernel,
+int amdgpu_ctx_init(struct amdgpu_device *adev, enum amd_sched_priority pri,
 		    struct amdgpu_ctx *ctx);
 void amdgpu_ctx_fini(struct amdgpu_ctx *ctx);

@@ -1254,7 +1258,7 @@ struct amdgpu_cs_parser {
 	unsigned		nchunks;
 	struct amdgpu_cs_chunk	*chunks;
 	/* relocations */
-	struct amdgpu_bo_list_entry	*vm_bos;
+	struct amdgpu_bo_list_entry	vm_pd;
 	struct list_head	validated;
 	struct fence		*fence;

@@ -1301,31 +1305,7 @@ struct amdgpu_wb {
 int amdgpu_wb_get(struct amdgpu_device *adev, u32 *wb);
 void amdgpu_wb_free(struct amdgpu_device *adev, u32 wb);

-/**
- * struct amdgpu_pm - power management datas
- * It keeps track of various data needed to take powermanagement decision.
- */

-enum amdgpu_pm_state_type {
-	/* not used for dpm */
-	POWER_STATE_TYPE_DEFAULT,
-	POWER_STATE_TYPE_POWERSAVE,
-	/* user selectable states */
-	POWER_STATE_TYPE_BATTERY,
-	POWER_STATE_TYPE_BALANCED,
-	POWER_STATE_TYPE_PERFORMANCE,
-	/* internal states */
-	POWER_STATE_TYPE_INTERNAL_UVD,
-	POWER_STATE_TYPE_INTERNAL_UVD_SD,
-	POWER_STATE_TYPE_INTERNAL_UVD_HD,
-	POWER_STATE_TYPE_INTERNAL_UVD_HD2,
-	POWER_STATE_TYPE_INTERNAL_UVD_MVC,
-	POWER_STATE_TYPE_INTERNAL_BOOT,
-	POWER_STATE_TYPE_INTERNAL_THERMAL,
-	POWER_STATE_TYPE_INTERNAL_ACPI,
-	POWER_STATE_TYPE_INTERNAL_ULV,
-	POWER_STATE_TYPE_INTERNAL_3DPERF,
-};

 enum amdgpu_int_thermal_type {
 	THERMAL_TYPE_NONE,
@@ -1607,8 +1587,8 @@ struct amdgpu_dpm {
 	/* vce requirements */
 	struct amdgpu_vce_state vce_states[AMDGPU_MAX_VCE_LEVELS];
 	enum amdgpu_vce_level vce_level;
-	enum amdgpu_pm_state_type state;
-	enum amdgpu_pm_state_type user_state;
+	enum amd_pm_state_type state;
+	enum amd_pm_state_type user_state;
 	u32                     platform_caps;
 	u32                     voltage_response_time;
 	u32                     backbias_response_time;
@@ -1661,8 +1641,13 @@ struct amdgpu_pm {
 	const struct firmware	*fw;	/* SMC firmware */
 	uint32_t                fw_version;
 	const struct amdgpu_dpm_funcs *funcs;
+	uint32_t                pcie_gen_mask;
+	uint32_t                pcie_mlw_mask;
+	struct amd_pp_display_configuration pm_display_cfg;/* set by DAL */
 };

+void amdgpu_get_pcie_info(struct amdgpu_device *adev);
+
 /*
  * UVD
  */
@@ -1830,6 +1815,8 @@ struct amdgpu_cu_info {
  */
 struct amdgpu_asic_funcs {
 	bool (*read_disabled_bios)(struct amdgpu_device *adev);
+	bool (*read_bios_from_rom)(struct amdgpu_device *adev,
+				   u8 *bios, u32 length_bytes);
 	int (*read_register)(struct amdgpu_device *adev, u32 se_num,
 			     u32 sh_num, u32 reg_offset, u32 *value);
 	void (*set_vga_state)(struct amdgpu_device *adev, bool state);
@@ -2060,6 +2047,10 @@ struct amdgpu_device {
 	/* interrupts */
 	struct amdgpu_irq		irq;

+	/* powerplay */
+	struct amd_powerplay		powerplay;
+	bool				pp_enabled;
+
 	/* dpm */
 	struct amdgpu_pm		pm;
 	u32				cg_flags;
@@ -2236,6 +2227,7 @@ amdgpu_get_sdma_instance(struct amdgpu_ring *ring)
 #define amdgpu_asic_set_vce_clocks(adev, ev, ec) (adev)->asic_funcs->set_vce_clocks((adev), (ev), (ec))
 #define amdgpu_asic_get_gpu_clock_counter(adev) (adev)->asic_funcs->get_gpu_clock_counter((adev))
 #define amdgpu_asic_read_disabled_bios(adev) (adev)->asic_funcs->read_disabled_bios((adev))
+#define amdgpu_asic_read_bios_from_rom(adev, b, l) (adev)->asic_funcs->read_bios_from_rom((adev), (b), (l))
 #define amdgpu_asic_read_register(adev, se, sh, offset, v)((adev)->asic_funcs->read_register((adev), (se), (sh), (offset), (v)))
 #define amdgpu_asic_get_cu_info(adev, info) (adev)->asic_funcs->get_cu_info((adev), (info))
 #define amdgpu_gart_flush_gpu_tlb(adev, vmid) (adev)->gart.gart_funcs->flush_gpu_tlb((adev), (vmid))
@@ -2277,24 +2269,78 @@ amdgpu_get_sdma_instance(struct amdgpu_ring *ring)
 #define amdgpu_display_resume_mc_access(adev, s) (adev)->mode_info.funcs->resume_mc_access((adev), (s))
 #define amdgpu_emit_copy_buffer(adev, ib, s, d, b) (adev)->mman.buffer_funcs->emit_copy_buffer((ib),  (s), (d), (b))
 #define amdgpu_emit_fill_buffer(adev, ib, s, d, b) (adev)->mman.buffer_funcs->emit_fill_buffer((ib), (s), (d), (b))
-#define amdgpu_dpm_get_temperature(adev) (adev)->pm.funcs->get_temperature((adev))
 #define amdgpu_dpm_pre_set_power_state(adev) (adev)->pm.funcs->pre_set_power_state((adev))
 #define amdgpu_dpm_set_power_state(adev) (adev)->pm.funcs->set_power_state((adev))
 #define amdgpu_dpm_post_set_power_state(adev) (adev)->pm.funcs->post_set_power_state((adev))
 #define amdgpu_dpm_display_configuration_changed(adev) (adev)->pm.funcs->display_configuration_changed((adev))
-#define amdgpu_dpm_get_sclk(adev, l) (adev)->pm.funcs->get_sclk((adev), (l))
-#define amdgpu_dpm_get_mclk(adev, l) (adev)->pm.funcs->get_mclk((adev), (l))
 #define amdgpu_dpm_print_power_state(adev, ps) (adev)->pm.funcs->print_power_state((adev), (ps))
-#define amdgpu_dpm_debugfs_print_current_performance_level(adev, m) (adev)->pm.funcs->debugfs_print_current_performance_level((adev), (m))
-#define amdgpu_dpm_force_performance_level(adev, l) (adev)->pm.funcs->force_performance_level((adev), (l))
 #define amdgpu_dpm_vblank_too_short(adev) (adev)->pm.funcs->vblank_too_short((adev))
-#define amdgpu_dpm_powergate_uvd(adev, g) (adev)->pm.funcs->powergate_uvd((adev), (g))
-#define amdgpu_dpm_powergate_vce(adev, g) (adev)->pm.funcs->powergate_vce((adev), (g))
 #define amdgpu_dpm_enable_bapm(adev, e) (adev)->pm.funcs->enable_bapm((adev), (e))
-#define amdgpu_dpm_set_fan_control_mode(adev, m) (adev)->pm.funcs->set_fan_control_mode((adev), (m))
-#define amdgpu_dpm_get_fan_control_mode(adev) (adev)->pm.funcs->get_fan_control_mode((adev))
-#define amdgpu_dpm_set_fan_speed_percent(adev, s) (adev)->pm.funcs->set_fan_speed_percent((adev), (s))
-#define amdgpu_dpm_get_fan_speed_percent(adev, s) (adev)->pm.funcs->get_fan_speed_percent((adev), (s))
+
+#define amdgpu_dpm_get_temperature(adev) \
+	(adev)->pp_enabled ?						\
+	      (adev)->powerplay.pp_funcs->get_temperature((adev)->powerplay.pp_handle) : \
+	      (adev)->pm.funcs->get_temperature((adev))
+
+#define amdgpu_dpm_set_fan_control_mode(adev, m) \
+	(adev)->pp_enabled ?						\
+	      (adev)->powerplay.pp_funcs->set_fan_control_mode((adev)->powerplay.pp_handle, (m)) : \
+	      (adev)->pm.funcs->set_fan_control_mode((adev), (m))
+
+#define amdgpu_dpm_get_fan_control_mode(adev) \
+	(adev)->pp_enabled ?						\
+	      (adev)->powerplay.pp_funcs->get_fan_control_mode((adev)->powerplay.pp_handle) : \
+	      (adev)->pm.funcs->get_fan_control_mode((adev))
+
+#define amdgpu_dpm_set_fan_speed_percent(adev, s) \
+	(adev)->pp_enabled ?						\
+	      (adev)->powerplay.pp_funcs->set_fan_speed_percent((adev)->powerplay.pp_handle, (s)) : \
+	      (adev)->pm.funcs->set_fan_speed_percent((adev), (s))
+
+#define amdgpu_dpm_get_fan_speed_percent(adev, s) \
+	(adev)->pp_enabled ?						\
+	      (adev)->powerplay.pp_funcs->get_fan_speed_percent((adev)->powerplay.pp_handle, (s)) : \
+	      (adev)->pm.funcs->get_fan_speed_percent((adev), (s))
+
+#define amdgpu_dpm_get_sclk(adev, l) \
+	(adev)->pp_enabled ?						\
+	      (adev)->powerplay.pp_funcs->get_sclk((adev)->powerplay.pp_handle, (l)) : \
+		(adev)->pm.funcs->get_sclk((adev), (l))
+
+#define amdgpu_dpm_get_mclk(adev, l)  \
+	(adev)->pp_enabled ?						\
+	      (adev)->powerplay.pp_funcs->get_mclk((adev)->powerplay.pp_handle, (l)) : \
+	      (adev)->pm.funcs->get_mclk((adev), (l))
+
+
+#define amdgpu_dpm_force_performance_level(adev, l) \
+	(adev)->pp_enabled ?						\
+	      (adev)->powerplay.pp_funcs->force_performance_level((adev)->powerplay.pp_handle, (l)) : \
+	      (adev)->pm.funcs->force_performance_level((adev), (l))
+
+#define amdgpu_dpm_powergate_uvd(adev, g) \
+	(adev)->pp_enabled ?						\
+	      (adev)->powerplay.pp_funcs->powergate_uvd((adev)->powerplay.pp_handle, (g)) : \
+	      (adev)->pm.funcs->powergate_uvd((adev), (g))
+
+#define amdgpu_dpm_powergate_vce(adev, g) \
+	(adev)->pp_enabled ?						\
+	      (adev)->powerplay.pp_funcs->powergate_vce((adev)->powerplay.pp_handle, (g)) : \
+	      (adev)->pm.funcs->powergate_vce((adev), (g))
+
+#define amdgpu_dpm_debugfs_print_current_performance_level(adev, m) \
+	(adev)->pp_enabled ?						\
+	      (adev)->powerplay.pp_funcs->print_current_performance_level((adev)->powerplay.pp_handle, (m)) : \
+	      (adev)->pm.funcs->debugfs_print_current_performance_level((adev), (m))
+
+#define amdgpu_dpm_get_current_power_state(adev) \
+	(adev)->powerplay.pp_funcs->get_current_power_state((adev)->powerplay.pp_handle)
+
+#define amdgpu_dpm_get_performance_level(adev) \
+	(adev)->powerplay.pp_funcs->get_performance_level((adev)->powerplay.pp_handle)
+
+#define amdgpu_dpm_dispatch_task(adev, event_id, input, output)		\
+	(adev)->powerplay.pp_funcs->dispatch_tasks((adev)->powerplay.pp_handle, (event_id), (input), (output))
 
 
 #define amdgpu_gds_switch(adev, r, v, d, w, a) (adev)->gds.funcs->patch_gds_switch((r), (v), (d), (w), (a))
 #define amdgpu_gds_switch(adev, r, v, d, w, a) (adev)->gds.funcs->patch_gds_switch((r), (v), (d), (w), (a))
 
 

+ 1 - 57
drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c

@@ -29,66 +29,10 @@
 #include <drm/drmP.h>
 #include <drm/drmP.h>
 #include <drm/drm_crtc_helper.h>
 #include "amdgpu.h"
+#include "amd_acpi.h"
 #include "atom.h"
 #include "atom.h"
 
 
-#define ACPI_AC_CLASS           "ac_adapter"
-
 extern void amdgpu_pm_acpi_event_handler(struct amdgpu_device *adev);
 extern void amdgpu_pm_acpi_event_handler(struct amdgpu_device *adev);
-struct atif_verify_interface {
-	u16 size;		/* structure size in bytes (includes size field) */
-	u16 version;		/* version */
-	u32 notification_mask;	/* supported notifications mask */
-	u32 function_bits;	/* supported functions bit vector */
-} __packed;
-
-struct atif_system_params {
-	u16 size;		/* structure size in bytes (includes size field) */
-	u32 valid_mask;		/* valid flags mask */
-	u32 flags;		/* flags */
-	u8 command_code;	/* notify command code */
-} __packed;
-
-struct atif_sbios_requests {
-	u16 size;		/* structure size in bytes (includes size field) */
-	u32 pending;		/* pending sbios requests */
-	u8 panel_exp_mode;	/* panel expansion mode */
-	u8 thermal_gfx;		/* thermal state: target gfx controller */
-	u8 thermal_state;	/* thermal state: state id (0: exit state, non-0: state) */
-	u8 forced_power_gfx;	/* forced power state: target gfx controller */
-	u8 forced_power_state;	/* forced power state: state id */
-	u8 system_power_src;	/* system power source */
-	u8 backlight_level;	/* panel backlight level (0-255) */
-} __packed;
-
-#define ATIF_NOTIFY_MASK	0x3
-#define ATIF_NOTIFY_NONE	0
-#define ATIF_NOTIFY_81		1
-#define ATIF_NOTIFY_N		2
-
-struct atcs_verify_interface {
-	u16 size;		/* structure size in bytes (includes size field) */
-	u16 version;		/* version */
-	u32 function_bits;	/* supported functions bit vector */
-} __packed;
-
-#define ATCS_VALID_FLAGS_MASK	0x3
-
-struct atcs_pref_req_input {
-	u16 size;		/* structure size in bytes (includes size field) */
-	u16 client_id;		/* client id (bit 2-0: func num, 7-3: dev num, 15-8: bus num) */
-	u16 valid_flags_mask;	/* valid flags mask */
-	u16 flags;		/* flags */
-	u8 req_type;		/* request type */
-	u8 perf_req;		/* performance request */
-} __packed;
-
-struct atcs_pref_req_output {
-	u16 size;		/* structure size in bytes (includes size field) */
-	u8 ret_val;		/* return value */
-} __packed;
-
 /* Call the ATIF method
  */
 /**

+ 1 - 1
drivers/gpu/drm/amd/amdgpu/amdgpu_atpx_handler.c

@@ -11,7 +11,7 @@
 #include <linux/acpi.h>
 #include <linux/pci.h>

-#include "amdgpu_acpi.h"
+#include "amd_acpi.h"
 
 
 struct amdgpu_atpx_functions {
 struct amdgpu_atpx_functions {
 	bool px_params;
 	bool px_params;

+ 50 - 8
drivers/gpu/drm/amd/amdgpu/amdgpu_bios.c

@@ -35,6 +35,13 @@
  * BIOS.
  * BIOS.
  */
  */
 
 
+#define AMD_VBIOS_SIGNATURE " 761295520"
+#define AMD_VBIOS_SIGNATURE_OFFSET 0x30
+#define AMD_VBIOS_SIGNATURE_SIZE sizeof(AMD_VBIOS_SIGNATURE)
+#define AMD_VBIOS_SIGNATURE_END (AMD_VBIOS_SIGNATURE_OFFSET + AMD_VBIOS_SIGNATURE_SIZE)
+#define AMD_IS_VALID_VBIOS(p) ((p)[0] == 0x55 && (p)[1] == 0xAA)
+#define AMD_VBIOS_LENGTH(p) ((p)[2] << 9)
+
 /* If you boot an IGP board with a discrete card as the primary,
 /* If you boot an IGP board with a discrete card as the primary,
  * the IGP rom is not accessible via the rom bar as the IGP rom is
  * the IGP rom is not accessible via the rom bar as the IGP rom is
  * part of the system bios.  On boot, the system bios puts a
  * part of the system bios.  On boot, the system bios puts a
@@ -58,7 +65,7 @@ static bool igp_read_bios_from_vram(struct amdgpu_device *adev)
 		return false;
 		return false;
 	}
 	}
 
 
-	if (size == 0 || bios[0] != 0x55 || bios[1] != 0xaa) {
+	if (size == 0 || !AMD_IS_VALID_VBIOS(bios)) {
 		iounmap(bios);
 		return false;
 	}
@@ -74,7 +81,7 @@ static bool igp_read_bios_from_vram(struct amdgpu_device *adev)

 bool amdgpu_read_bios(struct amdgpu_device *adev)
 {
-	uint8_t __iomem *bios, val1, val2;
+	uint8_t __iomem *bios, val[2];
 	size_t size;

 	adev->bios = NULL;
@@ -84,10 +91,10 @@ bool amdgpu_read_bios(struct amdgpu_device *adev)
 		return false;
 	}

-	val1 = readb(&bios[0]);
-	val2 = readb(&bios[1]);
+	val[0] = readb(&bios[0]);
+	val[1] = readb(&bios[1]);

-	if (size == 0 || val1 != 0x55 || val2 != 0xaa) {
+	if (size == 0 || !AMD_IS_VALID_VBIOS(val)) {
 		pci_unmap_rom(adev->pdev, bios);
 		return false;
 	}
@@ -101,6 +108,38 @@ bool amdgpu_read_bios(struct amdgpu_device *adev)
 	return true;
 }

+static bool amdgpu_read_bios_from_rom(struct amdgpu_device *adev)
+{
+	u8 header[AMD_VBIOS_SIGNATURE_END+1] = {0};
+	int len;
+
+	if (!adev->asic_funcs->read_bios_from_rom)
+		return false;
+
+	/* validate VBIOS signature */
+	if (amdgpu_asic_read_bios_from_rom(adev, &header[0], sizeof(header)) == false)
+		return false;
+	header[AMD_VBIOS_SIGNATURE_END] = 0;
+
+	if ((!AMD_IS_VALID_VBIOS(header)) ||
+	    0 != memcmp((char *)&header[AMD_VBIOS_SIGNATURE_OFFSET],
+			AMD_VBIOS_SIGNATURE,
+			strlen(AMD_VBIOS_SIGNATURE)))
+		return false;
+
+	/* valid vbios, go on */
+	len = AMD_VBIOS_LENGTH(header);
+	len = ALIGN(len, 4);
+	adev->bios = kmalloc(len, GFP_KERNEL);
+	if (!adev->bios) {
+		DRM_ERROR("no memory to allocate for BIOS\n");
+		return false;
+	}
+
+	/* read complete BIOS */
+	return amdgpu_asic_read_bios_from_rom(adev, adev->bios, len);
+}
+
 static bool amdgpu_read_platform_bios(struct amdgpu_device *adev)
 {
 	uint8_t __iomem *bios;
@@ -113,7 +152,7 @@ static bool amdgpu_read_platform_bios(struct amdgpu_device *adev)
 		return false;
 	}

-	if (size == 0 || bios[0] != 0x55 || bios[1] != 0xaa) {
+	if (size == 0 || !AMD_IS_VALID_VBIOS(bios)) {
 		return false;
 	}
 	adev->bios = kmemdup(bios, size, GFP_KERNEL);
@@ -230,7 +269,7 @@ static bool amdgpu_atrm_get_bios(struct amdgpu_device *adev)
 			break;
 	}

-	if (i == 0 || adev->bios[0] != 0x55 || adev->bios[1] != 0xaa) {
+	if (i == 0 || !AMD_IS_VALID_VBIOS(adev->bios)) {
 		kfree(adev->bios);
 		return false;
 	}
@@ -319,6 +358,9 @@ bool amdgpu_get_bios(struct amdgpu_device *adev)
 		r = igp_read_bios_from_vram(adev);
 	if (r == false)
 		r = amdgpu_read_bios(adev);
+	if (r == false) {
+		r = amdgpu_read_bios_from_rom(adev);
+	}
 	if (r == false) {
 		r = amdgpu_read_disabled_bios(adev);
 	}
@@ -330,7 +372,7 @@ bool amdgpu_get_bios(struct amdgpu_device *adev)
 		adev->bios = NULL;
 		return false;
 	}
-	if (adev->bios[0] != 0x55 || adev->bios[1] != 0xaa) {
+	if (!AMD_IS_VALID_VBIOS(adev->bios)) {
 		printk("BIOS signature incorrect %x %x\n", adev->bios[0], adev->bios[1]);
 		printk("BIOS signature incorrect %x %x\n", adev->bios[0], adev->bios[1]);
 		goto free_bios;
 		goto free_bios;
 	}
 	}

+ 326 - 2
drivers/gpu/drm/amd/amdgpu/amdgpu_cgs.c

@@ -24,6 +24,7 @@
 #include <linux/list.h>
 #include <linux/slab.h>
 #include <linux/pci.h>
+#include <linux/acpi.h>
 #include <drm/drmP.h>
 #include <linux/firmware.h>
 #include <drm/amdgpu_drm.h>
@@ -32,7 +33,6 @@
 #include "atom.h"
 #include "atom.h"
 #include "amdgpu_ucode.h"
 #include "amdgpu_ucode.h"
 
 
-
 struct amdgpu_cgs_device {
 	struct cgs_device base;
 	struct amdgpu_device *adev;
@@ -398,6 +398,41 @@ static void amdgpu_cgs_write_pci_config_dword(void *cgs_device, unsigned addr,
 	WARN(ret, "pci_write_config_dword error");
 }

+
+static int amdgpu_cgs_get_pci_resource(void *cgs_device,
+				       enum cgs_resource_type resource_type,
+				       uint64_t size,
+				       uint64_t offset,
+				       uint64_t *resource_base)
+{
+	CGS_FUNC_ADEV;
+
+	if (resource_base == NULL)
+		return -EINVAL;
+
+	switch (resource_type) {
+	case CGS_RESOURCE_TYPE_MMIO:
+		if (adev->rmmio_size == 0)
+			return -ENOENT;
+		if ((offset + size) > adev->rmmio_size)
+			return -EINVAL;
+		*resource_base = adev->rmmio_base;
+		return 0;
+	case CGS_RESOURCE_TYPE_DOORBELL:
+		if (adev->doorbell.size == 0)
+			return -ENOENT;
+		if ((offset + size) > adev->doorbell.size)
+			return -EINVAL;
+		*resource_base = adev->doorbell.base;
+		return 0;
+	case CGS_RESOURCE_TYPE_FB:
+	case CGS_RESOURCE_TYPE_IO:
+	case CGS_RESOURCE_TYPE_ROM:
+	default:
+		return -EINVAL;
+	}
+}
+
 static const void *amdgpu_cgs_atom_get_data_table(void *cgs_device,
 						  unsigned table, uint16_t *size,
 						  uint8_t *frev, uint8_t *crev)
@@ -703,6 +738,9 @@ static int amdgpu_cgs_get_firmware_info(void *cgs_device,
 		case CHIP_TONGA:
 			strcpy(fw_name, "amdgpu/tonga_smc.bin");
 			break;
+		case CHIP_FIJI:
+			strcpy(fw_name, "amdgpu/fiji_smc.bin");
+			break;
 		default:
 			DRM_ERROR("SMC firmware not supported\n");
 			return -EINVAL;
@@ -736,6 +774,288 @@ static int amdgpu_cgs_get_firmware_info(void *cgs_device,
 	return 0;
 }

+static int amdgpu_cgs_query_system_info(void *cgs_device,
+				struct cgs_system_info *sys_info)
+{
+	CGS_FUNC_ADEV;
+
+	if (NULL == sys_info)
+		return -ENODEV;
+
+	if (sizeof(struct cgs_system_info) != sys_info->size)
+		return -ENODEV;
+
+	switch (sys_info->info_id) {
+	case CGS_SYSTEM_INFO_ADAPTER_BDF_ID:
+		sys_info->value = adev->pdev->devfn | (adev->pdev->bus->number << 8);
+		break;
+	case CGS_SYSTEM_INFO_PCIE_GEN_INFO:
+		sys_info->value = adev->pm.pcie_gen_mask;
+		break;
+	case CGS_SYSTEM_INFO_PCIE_MLW:
+		sys_info->value = adev->pm.pcie_mlw_mask;
+		break;
+	default:
+		return -ENODEV;
+	}
+
+	return 0;
+}
+
+static int amdgpu_cgs_get_active_displays_info(void *cgs_device,
+					  struct cgs_display_info *info)
+{
+	CGS_FUNC_ADEV;
+	struct amdgpu_crtc *amdgpu_crtc;
+	struct drm_device *ddev = adev->ddev;
+	struct drm_crtc *crtc;
+	uint32_t line_time_us, vblank_lines;
+
+	if (info == NULL)
+		return -EINVAL;
+
+	if (adev->mode_info.num_crtc && adev->mode_info.mode_config_initialized) {
+		list_for_each_entry(crtc,
+				&ddev->mode_config.crtc_list, head) {
+			amdgpu_crtc = to_amdgpu_crtc(crtc);
+			if (crtc->enabled) {
+				info->active_display_mask |= (1 << amdgpu_crtc->crtc_id);
+				info->display_count++;
+			}
+			if (info->mode_info != NULL &&
+				crtc->enabled && amdgpu_crtc->enabled &&
+				amdgpu_crtc->hw_mode.clock) {
+				line_time_us = (amdgpu_crtc->hw_mode.crtc_htotal * 1000) /
+							amdgpu_crtc->hw_mode.clock;
+				vblank_lines = amdgpu_crtc->hw_mode.crtc_vblank_end -
+							amdgpu_crtc->hw_mode.crtc_vdisplay +
+							(amdgpu_crtc->v_border * 2);
+				info->mode_info->vblank_time_us = vblank_lines * line_time_us;
+				info->mode_info->refresh_rate = drm_mode_vrefresh(&amdgpu_crtc->hw_mode);
+				info->mode_info->ref_clock = adev->clock.spll.reference_freq;
+				info->mode_info++;
+			}
+		}
+	}
+
+	return 0;
+}
+
+/** \brief evaluate acpi namespace object, handle or pathname must be valid
+ *  \param cgs_device
+ *  \param info input/output arguments for the control method
+ *  \return status
+ */
+
+#if defined(CONFIG_ACPI)
+static int amdgpu_cgs_acpi_eval_object(void *cgs_device,
+				    struct cgs_acpi_method_info *info)
+{
+	CGS_FUNC_ADEV;
+	acpi_handle handle;
+	struct acpi_object_list input;
+	struct acpi_buffer output = { ACPI_ALLOCATE_BUFFER, NULL };
+	union acpi_object *params = NULL;
+	union acpi_object *obj = NULL;
+	uint8_t name[5] = {'\0'};
+	struct cgs_acpi_method_argument *argument = NULL;
+	uint32_t i, count;
+	acpi_status status;
+	int result;
+	uint32_t func_no = 0xFFFFFFFF;
+
+	handle = ACPI_HANDLE(&adev->pdev->dev);
+	if (!handle)
+		return -ENODEV;
+
+	memset(&input, 0, sizeof(struct acpi_object_list));
+
+	/* validate input info */
+	if (info->size != sizeof(struct cgs_acpi_method_info))
+		return -EINVAL;
+
+	input.count = info->input_count;
+	if (info->input_count > 0) {
+		if (info->pinput_argument == NULL)
+			return -EINVAL;
+		argument = info->pinput_argument;
+		func_no = argument->value;
+		for (i = 0; i < info->input_count; i++) {
+			if (((argument->type == ACPI_TYPE_STRING) ||
+			     (argument->type == ACPI_TYPE_BUFFER)) &&
+			    (argument->pointer == NULL))
+				return -EINVAL;
+			argument++;
+		}
+	}
+
+	if (info->output_count > 0) {
+		if (info->poutput_argument == NULL)
+			return -EINVAL;
+		argument = info->poutput_argument;
+		for (i = 0; i < info->output_count; i++) {
+			if (((argument->type == ACPI_TYPE_STRING) ||
+				(argument->type == ACPI_TYPE_BUFFER))
+				&& (argument->pointer == NULL))
+				return -EINVAL;
+			argument++;
+		}
+	}
+
+	/* The path name passed to acpi_evaluate_object should be null terminated */
+	if ((info->field & CGS_ACPI_FIELD_METHOD_NAME) != 0) {
+		strncpy(name, (char *)&(info->name), sizeof(uint32_t));
+		name[4] = '\0';
+	}
+
+	/* parse input parameters */
+	if (input.count > 0) {
+		input.pointer = params =
+				kzalloc(sizeof(union acpi_object) * input.count, GFP_KERNEL);
+		if (params == NULL)
+			return -EINVAL;
+
+		argument = info->pinput_argument;
+
+		for (i = 0; i < input.count; i++) {
+			params->type = argument->type;
+			switch (params->type) {
+			case ACPI_TYPE_INTEGER:
+				params->integer.value = argument->value;
+				break;
+			case ACPI_TYPE_STRING:
+				params->string.length = argument->method_length;
+				params->string.pointer = argument->pointer;
+				break;
+			case ACPI_TYPE_BUFFER:
+				params->buffer.length = argument->method_length;
+				params->buffer.pointer = argument->pointer;
+				break;
+			default:
+				break;
+			}
+			params++;
+			argument++;
+		}
+	}
+
+	/* parse output info */
+	count = info->output_count;
+	argument = info->poutput_argument;
+
+	/* evaluate the acpi method */
+	status = acpi_evaluate_object(handle, name, &input, &output);
+
+	if (ACPI_FAILURE(status)) {
+		result = -EIO;
+		goto error;
+	}
+
+	/* return the output info */
+	obj = output.pointer;
+
+	if (count > 1) {
+		if ((obj->type != ACPI_TYPE_PACKAGE) ||
+			(obj->package.count != count)) {
+			result = -EIO;
+			goto error;
+		}
+		params = obj->package.elements;
+	} else
+		params = obj;
+
+	if (params == NULL) {
+		result = -EIO;
+		goto error;
+	}
+
+	for (i = 0; i < count; i++) {
+		if (argument->type != params->type) {
+			result = -EIO;
+			goto error;
+		}
+		switch (params->type) {
+		case ACPI_TYPE_INTEGER:
+			argument->value = params->integer.value;
+			break;
+		case ACPI_TYPE_STRING:
+			if ((params->string.length != argument->data_length) ||
+				(params->string.pointer == NULL)) {
+				result = -EIO;
+				goto error;
+			}
+			strncpy(argument->pointer,
+				params->string.pointer,
+				params->string.length);
+			break;
+		case ACPI_TYPE_BUFFER:
+			if (params->buffer.pointer == NULL) {
+				result = -EIO;
+				goto error;
+			}
+			memcpy(argument->pointer,
+				params->buffer.pointer,
+				argument->data_length);
+			break;
+		default:
+			break;
+		}
+		argument++;
+		params++;
+	}
+
+error:
+	if (obj != NULL)
+		kfree(obj);
+	kfree((void *)input.pointer);
+	return result;
+}
+#else
+static int amdgpu_cgs_acpi_eval_object(void *cgs_device,
+				struct cgs_acpi_method_info *info)
+{
+	return -EIO;
+}
+#endif
+
+int amdgpu_cgs_call_acpi_method(void *cgs_device,
+					uint32_t acpi_method,
+					uint32_t acpi_function,
+					void *pinput, void *poutput,
+					uint32_t output_count,
+					uint32_t input_size,
+					uint32_t output_size)
+{
+	struct cgs_acpi_method_argument acpi_input[2] = { {0}, {0} };
+	struct cgs_acpi_method_argument acpi_output = {0};
+	struct cgs_acpi_method_info info = {0};
+
+	acpi_input[0].type = CGS_ACPI_TYPE_INTEGER;
+	acpi_input[0].method_length = sizeof(uint32_t);
+	acpi_input[0].data_length = sizeof(uint32_t);
+	acpi_input[0].value = acpi_function;
+
+	acpi_input[1].type = CGS_ACPI_TYPE_BUFFER;
+	acpi_input[1].method_length = CGS_ACPI_MAX_BUFFER_SIZE;
+	acpi_input[1].data_length = input_size;
+	acpi_input[1].pointer = pinput;
+
+	acpi_output.type = CGS_ACPI_TYPE_BUFFER;
+	acpi_output.method_length = CGS_ACPI_MAX_BUFFER_SIZE;
+	acpi_output.data_length = output_size;
+	acpi_output.pointer = poutput;
+
+	info.size = sizeof(struct cgs_acpi_method_info);
+	info.field = CGS_ACPI_FIELD_METHOD_NAME | CGS_ACPI_FIELD_INPUT_ARGUMENT_COUNT;
+	info.input_count = 2;
+	info.name = acpi_method;
+	info.pinput_argument = acpi_input;
+	info.output_count = output_count;
+	info.poutput_argument = &acpi_output;
+
+	return amdgpu_cgs_acpi_eval_object(cgs_device, &info);
+}
+
 static const struct cgs_ops amdgpu_cgs_ops = {
 	amdgpu_cgs_gpu_mem_info,
 	amdgpu_cgs_gmap_kmem,
@@ -756,6 +1076,7 @@ static const struct cgs_ops amdgpu_cgs_ops = {
 	amdgpu_cgs_write_pci_config_byte,
 	amdgpu_cgs_write_pci_config_word,
 	amdgpu_cgs_write_pci_config_dword,
+	amdgpu_cgs_get_pci_resource,
 	amdgpu_cgs_atom_get_data_table,
 	amdgpu_cgs_atom_get_cmd_table_revs,
 	amdgpu_cgs_atom_exec_cmd_table,
@@ -768,7 +1089,10 @@ static const struct cgs_ops amdgpu_cgs_ops = {
 	amdgpu_cgs_set_camera_voltages,
 	amdgpu_cgs_get_firmware_info,
 	amdgpu_cgs_set_powergating_state,
-	amdgpu_cgs_set_clockgating_state
+	amdgpu_cgs_set_clockgating_state,
+	amdgpu_cgs_get_active_displays_info,
+	amdgpu_cgs_call_acpi_method,
+	amdgpu_cgs_query_system_info,
 };

 static const struct cgs_os_ops amdgpu_cgs_os_ops = {

+ 12 - 7
drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c

@@ -406,8 +406,8 @@ static int amdgpu_cs_parser_relocs(struct amdgpu_cs_parser *p)
 		amdgpu_cs_buckets_get_list(&buckets, &p->validated);
 	}

-	p->vm_bos = amdgpu_vm_get_bos(p->adev, &fpriv->vm,
-				      &p->validated);
+	INIT_LIST_HEAD(&duplicates);
+	amdgpu_vm_get_pd_bo(&fpriv->vm, &p->validated, &p->vm_pd);

 	if (p->uf.bo)
 		list_add(&p->uf_entry.tv.head, &p->validated);
@@ -415,20 +415,23 @@ static int amdgpu_cs_parser_relocs(struct amdgpu_cs_parser *p)
 	if (need_mmap_lock)
 		down_read(&current->mm->mmap_sem);

-	INIT_LIST_HEAD(&duplicates);
 	r = ttm_eu_reserve_buffers(&p->ticket, &p->validated, true, &duplicates);
 	if (unlikely(r != 0))
 		goto error_reserve;

-	r = amdgpu_cs_list_validate(p->adev, &fpriv->vm, &p->validated);
+	amdgpu_vm_get_pt_bos(&fpriv->vm, &duplicates);
+
+	r = amdgpu_cs_list_validate(p->adev, &fpriv->vm, &duplicates);
 	if (r)
 	if (r)
 		goto error_validate;
 		goto error_validate;
 
 
-	r = amdgpu_cs_list_validate(p->adev, &fpriv->vm, &duplicates);
+	r = amdgpu_cs_list_validate(p->adev, &fpriv->vm, &p->validated);
 
 
 error_validate:
 error_validate:
-	if (r)
+	if (r) {
+		amdgpu_vm_move_pt_bos_in_lru(p->adev, &fpriv->vm);
 		ttm_eu_backoff_reservation(&p->ticket, &p->validated);
 		ttm_eu_backoff_reservation(&p->ticket, &p->validated);
+	}
 
 
 error_reserve:
 error_reserve:
 	if (need_mmap_lock)
 	if (need_mmap_lock)
@@ -472,8 +475,11 @@ static int cmp_size_smaller_first(void *priv, struct list_head *a,
  **/
  **/
 static void amdgpu_cs_parser_fini(struct amdgpu_cs_parser *parser, int error, bool backoff)
 static void amdgpu_cs_parser_fini(struct amdgpu_cs_parser *parser, int error, bool backoff)
 {
 {
+	struct amdgpu_fpriv *fpriv = parser->filp->driver_priv;
 	unsigned i;
 	unsigned i;
 
 
+	amdgpu_vm_move_pt_bos_in_lru(parser->adev, &fpriv->vm);
+
 	if (!error) {
 	if (!error) {
 		/* Sort the buffer list from the smallest to largest buffer,
 		/* Sort the buffer list from the smallest to largest buffer,
 		 * which affects the order of buffers in the LRU list.
 		 * which affects the order of buffers in the LRU list.
@@ -501,7 +507,6 @@ static void amdgpu_cs_parser_fini(struct amdgpu_cs_parser *parser, int error, bo
 	if (parser->bo_list)
 	if (parser->bo_list)
 		amdgpu_bo_list_put(parser->bo_list);
 		amdgpu_bo_list_put(parser->bo_list);
 
 
-	drm_free_large(parser->vm_bos);
 	for (i = 0; i < parser->nchunks; i++)
 	for (i = 0; i < parser->nchunks; i++)
 		drm_free_large(parser->chunks[i].kdata);
 		drm_free_large(parser->chunks[i].kdata);
 	kfree(parser->chunks);
 	kfree(parser->chunks);
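
[Editor's note] The reordering in this hunk makes the CS path validate the VM page-table BOs (collected on the duplicates list) before the user buffers, and moves the PT BOs to the LRU tail on both the error path and in parser_fini. Condensed into one routine, the reserve/validate/backoff shape of the code above looks like this (a sketch, not a drop-in replacement):

/* Sketch of the reserve -> validate -> backoff pattern used above;
 * error handling trimmed to the essentials. */
static int example_reserve_and_validate(struct amdgpu_cs_parser *p,
					struct amdgpu_fpriv *fpriv)
{
	struct list_head duplicates;
	int r;

	INIT_LIST_HEAD(&duplicates);
	amdgpu_vm_get_pd_bo(&fpriv->vm, &p->validated, &p->vm_pd);

	/* reserve everything up front; duplicates are tolerated */
	r = ttm_eu_reserve_buffers(&p->ticket, &p->validated, true, &duplicates);
	if (r)
		return r;

	/* validate the page tables first, then the user buffers */
	amdgpu_vm_get_pt_bos(&fpriv->vm, &duplicates);
	r = amdgpu_cs_list_validate(p->adev, &fpriv->vm, &duplicates);
	if (!r)
		r = amdgpu_cs_list_validate(p->adev, &fpriv->vm, &p->validated);

	if (r) {
		/* keep the PT BOs at the LRU tail before backing off */
		amdgpu_vm_move_pt_bos_in_lru(p->adev, &fpriv->vm);
		ttm_eu_backoff_reservation(&p->ticket, &p->validated);
	}
	return r;
}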

+ 27 - 14
drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c

@@ -25,7 +25,7 @@
 #include <drm/drmP.h>
 #include "amdgpu.h"
 
-int amdgpu_ctx_init(struct amdgpu_device *adev, bool kernel,
+int amdgpu_ctx_init(struct amdgpu_device *adev, enum amd_sched_priority pri,
 		    struct amdgpu_ctx *ctx)
 {
 	unsigned i, j;
@@ -35,17 +35,25 @@ int amdgpu_ctx_init(struct amdgpu_device *adev, bool kernel,
 	ctx->adev = adev;
 	kref_init(&ctx->refcount);
 	spin_lock_init(&ctx->ring_lock);
-	for (i = 0; i < AMDGPU_MAX_RINGS; ++i)
-		ctx->rings[i].sequence = 1;
+	ctx->fences = kzalloc(sizeof(struct fence *) * amdgpu_sched_jobs *
+			 AMDGPU_MAX_RINGS, GFP_KERNEL);
+	if (!ctx->fences)
+		return -ENOMEM;
 
+	for (i = 0; i < AMDGPU_MAX_RINGS; ++i) {
+		ctx->rings[i].sequence = 1;
+		ctx->rings[i].fences = (void *)ctx->fences + sizeof(struct fence *) *
+			amdgpu_sched_jobs * i;
+	}
 	if (amdgpu_enable_scheduler) {
 		/* create context entity for each ring */
 		for (i = 0; i < adev->num_rings; i++) {
 			struct amd_sched_rq *rq;
-			if (kernel)
-				rq = &adev->rings[i]->sched.kernel_rq;
-			else
-				rq = &adev->rings[i]->sched.sched_rq;
+			if (pri >= AMD_SCHED_MAX_PRIORITY) {
+				kfree(ctx->fences);
+				return -EINVAL;
+			}
+			rq = &adev->rings[i]->sched.sched_rq[pri];
 			r = amd_sched_entity_init(&adev->rings[i]->sched,
 						  &ctx->rings[i].entity,
 						  rq, amdgpu_sched_jobs);
@@ -57,7 +65,7 @@ int amdgpu_ctx_init(struct amdgpu_device *adev, bool kernel,
 			for (j = 0; j < i; j++)
 				amd_sched_entity_fini(&adev->rings[j]->sched,
 						      &ctx->rings[j].entity);
-			kfree(ctx);
+			kfree(ctx->fences);
 			return r;
 		}
 	}
@@ -73,8 +81,9 @@ void amdgpu_ctx_fini(struct amdgpu_ctx *ctx)
 		return;
 
 	for (i = 0; i < AMDGPU_MAX_RINGS; ++i)
-		for (j = 0; j < AMDGPU_CTX_MAX_CS_PENDING; ++j)
+		for (j = 0; j < amdgpu_sched_jobs; ++j)
 			fence_put(ctx->rings[i].fences[j]);
+	kfree(ctx->fences);
 
 	if (amdgpu_enable_scheduler) {
 		for (i = 0; i < adev->num_rings; i++)
@@ -103,9 +112,13 @@ static int amdgpu_ctx_alloc(struct amdgpu_device *adev,
 		return r;
 	}
 	*id = (uint32_t)r;
-	r = amdgpu_ctx_init(adev, false, ctx);
+	r = amdgpu_ctx_init(adev, AMD_SCHED_PRIORITY_NORMAL, ctx);
+	if (r) {
+		idr_remove(&mgr->ctx_handles, *id);
+		*id = 0;
+		kfree(ctx);
+	}
 	mutex_unlock(&mgr->lock);
-
 	return r;
 }
 
@@ -239,7 +252,7 @@ uint64_t amdgpu_ctx_add_fence(struct amdgpu_ctx *ctx, struct amdgpu_ring *ring,
 	unsigned idx = 0;
 	struct fence *other = NULL;
 
-	idx = seq % AMDGPU_CTX_MAX_CS_PENDING;
+	idx = seq & (amdgpu_sched_jobs - 1);
 	other = cring->fences[idx];
 	if (other) {
 		signed long r;
@@ -274,12 +287,12 @@ struct fence *amdgpu_ctx_get_fence(struct amdgpu_ctx *ctx,
 	}
 
 
-	if (seq + AMDGPU_CTX_MAX_CS_PENDING < cring->sequence) {
+	if (seq + amdgpu_sched_jobs < cring->sequence) {
 		spin_unlock(&ctx->ring_lock);
 		return NULL;
 	}
 
-	fence = fence_get(cring->fences[seq % AMDGPU_CTX_MAX_CS_PENDING]);
+	fence = fence_get(cring->fences[seq & (amdgpu_sched_jobs - 1)]);
 	spin_unlock(&ctx->ring_lock);
 
 	return fence;
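
[Editor's note] The fence-slot index changes here from a modulo to an AND mask, which is only valid because amdgpu_sched_jobs is now forced to a power of two of at least 4 (see the amdgpu_check_arguments() hunk in amdgpu_device.c below). A standalone check of the equivalence:

/* For any power-of-two n, (seq & (n - 1)) == (seq % n); the AND form
 * avoids a division on the submission path.  Build as plain C99. */
#include <assert.h>
#include <stdint.h>

int main(void)
{
	const uint64_t jobs = 32;	/* the new amdgpu_sched_jobs default */

	for (uint64_t seq = 0; seq < 1000; seq++)
		assert((seq & (jobs - 1)) == (seq % jobs));
	return 0;
}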

+ 146 - 17
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c

@@ -38,6 +38,7 @@
 #include "amdgpu_i2c.h"
 #include "amdgpu_i2c.h"
 #include "atom.h"
 #include "atom.h"
 #include "amdgpu_atombios.h"
 #include "amdgpu_atombios.h"
+#include "amd_pcie.h"
 #ifdef CONFIG_DRM_AMDGPU_CIK
 #ifdef CONFIG_DRM_AMDGPU_CIK
 #include "cik.h"
 #include "cik.h"
 #endif
 #endif
@@ -949,6 +950,15 @@ static bool amdgpu_check_pot_argument(int arg)
  */
  */
 static void amdgpu_check_arguments(struct amdgpu_device *adev)
 static void amdgpu_check_arguments(struct amdgpu_device *adev)
 {
 {
+	if (amdgpu_sched_jobs < 4) {
+		dev_warn(adev->dev, "sched jobs (%d) must be at least 4\n",
+			 amdgpu_sched_jobs);
+		amdgpu_sched_jobs = 4;
+	} else if (!amdgpu_check_pot_argument(amdgpu_sched_jobs)){
+		dev_warn(adev->dev, "sched jobs (%d) must be a power of 2\n",
+			 amdgpu_sched_jobs);
+		amdgpu_sched_jobs = roundup_pow_of_two(amdgpu_sched_jobs);
+	}
 	/* vramlimit must be a power of two */
 	/* vramlimit must be a power of two */
 	if (!amdgpu_check_pot_argument(amdgpu_vram_limit)) {
 	if (!amdgpu_check_pot_argument(amdgpu_vram_limit)) {
 		dev_warn(adev->dev, "vram limit (%d) must be a power of 2\n",
 		dev_warn(adev->dev, "vram limit (%d) must be a power of 2\n",
@@ -1214,12 +1224,14 @@ static int amdgpu_early_init(struct amdgpu_device *adev)
 		} else {
 		} else {
 			if (adev->ip_blocks[i].funcs->early_init) {
 			if (adev->ip_blocks[i].funcs->early_init) {
 				r = adev->ip_blocks[i].funcs->early_init((void *)adev);
 				r = adev->ip_blocks[i].funcs->early_init((void *)adev);
-				if (r == -ENOENT)
+				if (r == -ENOENT) {
 					adev->ip_block_status[i].valid = false;
 					adev->ip_block_status[i].valid = false;
-				else if (r)
+				} else if (r) {
+					DRM_ERROR("early_init %d failed %d\n", i, r);
 					return r;
 					return r;
-				else
+				} else {
 					adev->ip_block_status[i].valid = true;
 					adev->ip_block_status[i].valid = true;
+				}
 			} else {
 			} else {
 				adev->ip_block_status[i].valid = true;
 				adev->ip_block_status[i].valid = true;
 			}
 			}
@@ -1237,20 +1249,28 @@ static int amdgpu_init(struct amdgpu_device *adev)
 		if (!adev->ip_block_status[i].valid)
 		if (!adev->ip_block_status[i].valid)
 			continue;
 			continue;
 		r = adev->ip_blocks[i].funcs->sw_init((void *)adev);
 		r = adev->ip_blocks[i].funcs->sw_init((void *)adev);
-		if (r)
+		if (r) {
+			DRM_ERROR("sw_init %d failed %d\n", i, r);
 			return r;
 			return r;
+		}
 		adev->ip_block_status[i].sw = true;
 		adev->ip_block_status[i].sw = true;
 		/* need to do gmc hw init early so we can allocate gpu mem */
 		/* need to do gmc hw init early so we can allocate gpu mem */
 		if (adev->ip_blocks[i].type == AMD_IP_BLOCK_TYPE_GMC) {
 		if (adev->ip_blocks[i].type == AMD_IP_BLOCK_TYPE_GMC) {
 			r = amdgpu_vram_scratch_init(adev);
 			r = amdgpu_vram_scratch_init(adev);
-			if (r)
+			if (r) {
+				DRM_ERROR("amdgpu_vram_scratch_init failed %d\n", r);
 				return r;
 				return r;
+			}
 			r = adev->ip_blocks[i].funcs->hw_init((void *)adev);
 			r = adev->ip_blocks[i].funcs->hw_init((void *)adev);
-			if (r)
+			if (r) {
+				DRM_ERROR("hw_init %d failed %d\n", i, r);
 				return r;
 				return r;
+			}
 			r = amdgpu_wb_init(adev);
 			r = amdgpu_wb_init(adev);
-			if (r)
+			if (r) {
+				DRM_ERROR("amdgpu_wb_init failed %d\n", r);
 				return r;
 				return r;
+			}
 			adev->ip_block_status[i].hw = true;
 			adev->ip_block_status[i].hw = true;
 		}
 		}
 	}
 	}
@@ -1262,8 +1282,10 @@ static int amdgpu_init(struct amdgpu_device *adev)
 		if (adev->ip_blocks[i].type == AMD_IP_BLOCK_TYPE_GMC)
 		if (adev->ip_blocks[i].type == AMD_IP_BLOCK_TYPE_GMC)
 			continue;
 			continue;
 		r = adev->ip_blocks[i].funcs->hw_init((void *)adev);
 		r = adev->ip_blocks[i].funcs->hw_init((void *)adev);
-		if (r)
+		if (r) {
+			DRM_ERROR("hw_init %d failed %d\n", i, r);
 			return r;
 			return r;
+		}
 		adev->ip_block_status[i].hw = true;
 		adev->ip_block_status[i].hw = true;
 	}
 	}
 
 
@@ -1280,12 +1302,16 @@ static int amdgpu_late_init(struct amdgpu_device *adev)
 		/* enable clockgating to save power */
 		/* enable clockgating to save power */
 		r = adev->ip_blocks[i].funcs->set_clockgating_state((void *)adev,
 		r = adev->ip_blocks[i].funcs->set_clockgating_state((void *)adev,
 								    AMD_CG_STATE_GATE);
 								    AMD_CG_STATE_GATE);
-		if (r)
+		if (r) {
+			DRM_ERROR("set_clockgating_state(gate) %d failed %d\n", i, r);
 			return r;
 			return r;
+		}
 		if (adev->ip_blocks[i].funcs->late_init) {
 		if (adev->ip_blocks[i].funcs->late_init) {
 			r = adev->ip_blocks[i].funcs->late_init((void *)adev);
 			r = adev->ip_blocks[i].funcs->late_init((void *)adev);
-			if (r)
+			if (r) {
+				DRM_ERROR("late_init %d failed %d\n", i, r);
 				return r;
 				return r;
+			}
 		}
 		}
 	}
 	}
 
 
@@ -1306,10 +1332,15 @@ static int amdgpu_fini(struct amdgpu_device *adev)
 		/* ungate blocks before hw fini so that we can shutdown the blocks safely */
 		/* ungate blocks before hw fini so that we can shutdown the blocks safely */
 		r = adev->ip_blocks[i].funcs->set_clockgating_state((void *)adev,
 		r = adev->ip_blocks[i].funcs->set_clockgating_state((void *)adev,
 								    AMD_CG_STATE_UNGATE);
 								    AMD_CG_STATE_UNGATE);
-		if (r)
+		if (r) {
+			DRM_ERROR("set_clockgating_state(ungate) %d failed %d\n", i, r);
 			return r;
 			return r;
+		}
 		r = adev->ip_blocks[i].funcs->hw_fini((void *)adev);
 		r = adev->ip_blocks[i].funcs->hw_fini((void *)adev);
 		/* XXX handle errors */
 		/* XXX handle errors */
+		if (r) {
+			DRM_DEBUG("hw_fini %d failed %d\n", i, r);
+		}
 		adev->ip_block_status[i].hw = false;
 		adev->ip_block_status[i].hw = false;
 	}
 	}
 
 
@@ -1318,6 +1349,9 @@ static int amdgpu_fini(struct amdgpu_device *adev)
 			continue;
 			continue;
 		r = adev->ip_blocks[i].funcs->sw_fini((void *)adev);
 		r = adev->ip_blocks[i].funcs->sw_fini((void *)adev);
 		/* XXX handle errors */
 		/* XXX handle errors */
+		if (r) {
+			DRM_DEBUG("sw_fini %d failed %d\n", i, r);
+		}
 		adev->ip_block_status[i].sw = false;
 		adev->ip_block_status[i].sw = false;
 		adev->ip_block_status[i].valid = false;
 		adev->ip_block_status[i].valid = false;
 	}
 	}
@@ -1335,9 +1369,15 @@ static int amdgpu_suspend(struct amdgpu_device *adev)
 		/* ungate blocks so that suspend can properly shut them down */
 		/* ungate blocks so that suspend can properly shut them down */
 		r = adev->ip_blocks[i].funcs->set_clockgating_state((void *)adev,
 		r = adev->ip_blocks[i].funcs->set_clockgating_state((void *)adev,
 								    AMD_CG_STATE_UNGATE);
 								    AMD_CG_STATE_UNGATE);
+		if (r) {
+			DRM_ERROR("set_clockgating_state(ungate) %d failed %d\n", i, r);
+		}
 		/* XXX handle errors */
 		/* XXX handle errors */
 		r = adev->ip_blocks[i].funcs->suspend(adev);
 		r = adev->ip_blocks[i].funcs->suspend(adev);
 		/* XXX handle errors */
 		/* XXX handle errors */
+		if (r) {
+			DRM_ERROR("suspend %d failed %d\n", i, r);
+		}
 	}
 	}
 
 
 	return 0;
 	return 0;
@@ -1351,8 +1391,10 @@ static int amdgpu_resume(struct amdgpu_device *adev)
 		if (!adev->ip_block_status[i].valid)
 		if (!adev->ip_block_status[i].valid)
 			continue;
 			continue;
 		r = adev->ip_blocks[i].funcs->resume(adev);
 		r = adev->ip_blocks[i].funcs->resume(adev);
-		if (r)
+		if (r) {
+			DRM_ERROR("resume %d failed %d\n", i, r);
 			return r;
 			return r;
+		}
 	}
 	}
 
 
 	return 0;
 	return 0;
@@ -1484,8 +1526,10 @@ int amdgpu_device_init(struct amdgpu_device *adev,
 		return -EINVAL;
 		return -EINVAL;
 	}
 	}
 	r = amdgpu_atombios_init(adev);
 	r = amdgpu_atombios_init(adev);
-	if (r)
+	if (r) {
+		dev_err(adev->dev, "amdgpu_atombios_init failed\n");
 		return r;
 		return r;
+	}
 
 
 	/* Post card if necessary */
 	/* Post card if necessary */
 	if (!amdgpu_card_posted(adev)) {
 	if (!amdgpu_card_posted(adev)) {
@@ -1499,21 +1543,26 @@ int amdgpu_device_init(struct amdgpu_device *adev,
 
 
 	/* Initialize clocks */
 	/* Initialize clocks */
 	r = amdgpu_atombios_get_clock_info(adev);
 	r = amdgpu_atombios_get_clock_info(adev);
-	if (r)
+	if (r) {
+		dev_err(adev->dev, "amdgpu_atombios_get_clock_info failed\n");
 		return r;
 		return r;
+	}
 	/* init i2c buses */
 	/* init i2c buses */
 	amdgpu_atombios_i2c_init(adev);
 	amdgpu_atombios_i2c_init(adev);
 
 
 	/* Fence driver */
 	/* Fence driver */
 	r = amdgpu_fence_driver_init(adev);
 	r = amdgpu_fence_driver_init(adev);
-	if (r)
+	if (r) {
+		dev_err(adev->dev, "amdgpu_fence_driver_init failed\n");
 		return r;
 		return r;
+	}
 
 
 	/* init the mode config */
 	/* init the mode config */
 	drm_mode_config_init(adev->ddev);
 	drm_mode_config_init(adev->ddev);
 
 
 	r = amdgpu_init(adev);
 	r = amdgpu_init(adev);
 	if (r) {
 	if (r) {
+		dev_err(adev->dev, "amdgpu_init failed\n");
 		amdgpu_fini(adev);
 		amdgpu_fini(adev);
 		return r;
 		return r;
 	}
 	}
@@ -1528,7 +1577,7 @@ int amdgpu_device_init(struct amdgpu_device *adev,
 		return r;
 		return r;
 	}
 	}
 
 
-	r = amdgpu_ctx_init(adev, true, &adev->kernel_ctx);
+	r = amdgpu_ctx_init(adev, AMD_SCHED_PRIORITY_KERNEL, &adev->kernel_ctx);
 	if (r) {
 	if (r) {
 		dev_err(adev->dev, "failed to create kernel context (%d).\n", r);
 		dev_err(adev->dev, "failed to create kernel context (%d).\n", r);
 		return r;
 		return r;
@@ -1570,8 +1619,10 @@ int amdgpu_device_init(struct amdgpu_device *adev,
 	 * explicit gating rather than handling it automatically.
 	 * explicit gating rather than handling it automatically.
 	 */
 	 */
 	r = amdgpu_late_init(adev);
 	r = amdgpu_late_init(adev);
-	if (r)
+	if (r) {
+		dev_err(adev->dev, "amdgpu_late_init failed\n");
 		return r;
 		return r;
+	}
 
 
 	return 0;
 	return 0;
 }
 }
@@ -1788,6 +1839,7 @@ int amdgpu_resume_kms(struct drm_device *dev, bool resume, bool fbcon)
 	}
 	}
 
 
 	drm_kms_helper_poll_enable(dev);
 	drm_kms_helper_poll_enable(dev);
+	drm_helper_hpd_irq_event(dev);
 
 
 	if (fbcon) {
 	if (fbcon) {
 		amdgpu_fbdev_set_suspend(adev, 0);
 		amdgpu_fbdev_set_suspend(adev, 0);
@@ -1881,6 +1933,83 @@ retry:
 	return r;
 	return r;
 }
 }
 
 
+void amdgpu_get_pcie_info(struct amdgpu_device *adev)
+{
+	u32 mask;
+	int ret;
+
+	if (pci_is_root_bus(adev->pdev->bus))
+		return;
+
+	if (amdgpu_pcie_gen2 == 0)
+		return;
+
+	if (adev->flags & AMD_IS_APU)
+		return;
+
+	ret = drm_pcie_get_speed_cap_mask(adev->ddev, &mask);
+	if (!ret) {
+		adev->pm.pcie_gen_mask = (CAIL_ASIC_PCIE_LINK_SPEED_SUPPORT_GEN1 |
+					  CAIL_ASIC_PCIE_LINK_SPEED_SUPPORT_GEN2 |
+					  CAIL_ASIC_PCIE_LINK_SPEED_SUPPORT_GEN3);
+
+		if (mask & DRM_PCIE_SPEED_25)
+			adev->pm.pcie_gen_mask |= CAIL_PCIE_LINK_SPEED_SUPPORT_GEN1;
+		if (mask & DRM_PCIE_SPEED_50)
+			adev->pm.pcie_gen_mask |= CAIL_PCIE_LINK_SPEED_SUPPORT_GEN2;
+		if (mask & DRM_PCIE_SPEED_80)
+			adev->pm.pcie_gen_mask |= CAIL_PCIE_LINK_SPEED_SUPPORT_GEN3;
+	}
+	ret = drm_pcie_get_max_link_width(adev->ddev, &mask);
+	if (!ret) {
+		switch (mask) {
+		case 32:
+			adev->pm.pcie_mlw_mask = (CAIL_PCIE_LINK_WIDTH_SUPPORT_X32 |
+						  CAIL_PCIE_LINK_WIDTH_SUPPORT_X16 |
+						  CAIL_PCIE_LINK_WIDTH_SUPPORT_X12 |
+						  CAIL_PCIE_LINK_WIDTH_SUPPORT_X8 |
+						  CAIL_PCIE_LINK_WIDTH_SUPPORT_X4 |
+						  CAIL_PCIE_LINK_WIDTH_SUPPORT_X2 |
+						  CAIL_PCIE_LINK_WIDTH_SUPPORT_X1);
+			break;
+		case 16:
+			adev->pm.pcie_mlw_mask = (CAIL_PCIE_LINK_WIDTH_SUPPORT_X16 |
+						  CAIL_PCIE_LINK_WIDTH_SUPPORT_X12 |
+						  CAIL_PCIE_LINK_WIDTH_SUPPORT_X8 |
+						  CAIL_PCIE_LINK_WIDTH_SUPPORT_X4 |
+						  CAIL_PCIE_LINK_WIDTH_SUPPORT_X2 |
+						  CAIL_PCIE_LINK_WIDTH_SUPPORT_X1);
+			break;
+		case 12:
+			adev->pm.pcie_mlw_mask = (CAIL_PCIE_LINK_WIDTH_SUPPORT_X12 |
+						  CAIL_PCIE_LINK_WIDTH_SUPPORT_X8 |
+						  CAIL_PCIE_LINK_WIDTH_SUPPORT_X4 |
+						  CAIL_PCIE_LINK_WIDTH_SUPPORT_X2 |
+						  CAIL_PCIE_LINK_WIDTH_SUPPORT_X1);
+			break;
+		case 8:
+			adev->pm.pcie_mlw_mask = (CAIL_PCIE_LINK_WIDTH_SUPPORT_X8 |
+						  CAIL_PCIE_LINK_WIDTH_SUPPORT_X4 |
+						  CAIL_PCIE_LINK_WIDTH_SUPPORT_X2 |
+						  CAIL_PCIE_LINK_WIDTH_SUPPORT_X1);
+			break;
+		case 4:
+			adev->pm.pcie_mlw_mask = (CAIL_PCIE_LINK_WIDTH_SUPPORT_X4 |
+						  CAIL_PCIE_LINK_WIDTH_SUPPORT_X2 |
+						  CAIL_PCIE_LINK_WIDTH_SUPPORT_X1);
+			break;
+		case 2:
+			adev->pm.pcie_mlw_mask = (CAIL_PCIE_LINK_WIDTH_SUPPORT_X2 |
+						  CAIL_PCIE_LINK_WIDTH_SUPPORT_X1);
+			break;
+		case 1:
+			adev->pm.pcie_mlw_mask = CAIL_PCIE_LINK_WIDTH_SUPPORT_X1;
+			break;
+		default:
+			break;
+		}
+	}
+}
 
 /*
  * Debugfs
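
[Editor's note] The link-width switch in amdgpu_get_pcie_info() above encodes one rule: advertise every width up to and including the reported maximum. An equivalent loop, with a hypothetical helper name and relying on the CAIL_* flags from amd_pcie.h, makes that rule explicit:

/* Hypothetical rewrite of the width switch: OR together the support
 * flag of every link width <= the reported maximum.  Produces the same
 * masks as the case ladder above for widths 1-32. */
static u32 example_pcie_mlw_mask(u32 max_width)
{
	static const struct { u32 width, flag; } widths[] = {
		{  1, CAIL_PCIE_LINK_WIDTH_SUPPORT_X1  },
		{  2, CAIL_PCIE_LINK_WIDTH_SUPPORT_X2  },
		{  4, CAIL_PCIE_LINK_WIDTH_SUPPORT_X4  },
		{  8, CAIL_PCIE_LINK_WIDTH_SUPPORT_X8  },
		{ 12, CAIL_PCIE_LINK_WIDTH_SUPPORT_X12 },
		{ 16, CAIL_PCIE_LINK_WIDTH_SUPPORT_X16 },
		{ 32, CAIL_PCIE_LINK_WIDTH_SUPPORT_X32 },
	};
	u32 mask = 0;
	int i;

	for (i = 0; i < ARRAY_SIZE(widths); i++)
		if (widths[i].width <= max_width)
			mask |= widths[i].flag;
	return mask;
}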

+ 8 - 2
drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c

@@ -79,9 +79,10 @@ int amdgpu_vm_fault_stop = 0;
 int amdgpu_vm_debug = 0;
 int amdgpu_exp_hw_support = 0;
 int amdgpu_enable_scheduler = 1;
-int amdgpu_sched_jobs = 16;
+int amdgpu_sched_jobs = 32;
 int amdgpu_sched_hw_submission = 2;
 int amdgpu_enable_semaphores = 0;
+int amdgpu_powerplay = -1;
 
 MODULE_PARM_DESC(vramlimit, "Restrict VRAM for testing, in megabytes");
 module_param_named(vramlimit, amdgpu_vram_limit, int, 0600);
@@ -155,7 +156,7 @@ module_param_named(exp_hw_support, amdgpu_exp_hw_support, int, 0444);
 MODULE_PARM_DESC(enable_scheduler, "enable SW GPU scheduler (1 = enable (default), 0 = disable)");
 module_param_named(enable_scheduler, amdgpu_enable_scheduler, int, 0444);
 
-MODULE_PARM_DESC(sched_jobs, "the max number of jobs supported in the sw queue (default 16)");
+MODULE_PARM_DESC(sched_jobs, "the max number of jobs supported in the sw queue (default 32)");
 module_param_named(sched_jobs, amdgpu_sched_jobs, int, 0444);
 
 MODULE_PARM_DESC(sched_hw_submission, "the max number of HW submissions (default 2)");
@@ -164,6 +165,11 @@ module_param_named(sched_hw_submission, amdgpu_sched_hw_submission, int, 0444);
 MODULE_PARM_DESC(enable_semaphores, "Enable semaphores (1 = enable, 0 = disable (default))");
 module_param_named(enable_semaphores, amdgpu_enable_semaphores, int, 0644);
 
+#ifdef CONFIG_DRM_AMD_POWERPLAY
+MODULE_PARM_DESC(powerplay, "Powerplay component (1 = enable, 0 = disable, -1 = auto (default))");
+module_param_named(powerplay, amdgpu_powerplay, int, 0444);
+#endif
+
 static struct pci_device_id pciidlist[] = {
 #ifdef CONFIG_DRM_AMDGPU_CIK
 	/* Kaveri */

+ 1 - 1
drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c

@@ -263,7 +263,7 @@ out_unref:
 
 	}
 	if (fb && ret) {
-		drm_gem_object_unreference(gobj);
+		drm_gem_object_unreference_unlocked(gobj);
 		drm_framebuffer_unregister_private(fb);
 		drm_framebuffer_cleanup(fb);
 		kfree(fb);

+ 5 - 8
drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c

@@ -448,7 +448,7 @@ static void amdgpu_gem_va_update_vm(struct amdgpu_device *adev,
 				    struct amdgpu_bo_va *bo_va, uint32_t operation)
 {
 	struct ttm_validate_buffer tv, *entry;
-	struct amdgpu_bo_list_entry *vm_bos;
+	struct amdgpu_bo_list_entry vm_pd;
 	struct ww_acquire_ctx ticket;
 	struct list_head list, duplicates;
 	unsigned domain;
@@ -461,15 +461,14 @@ static void amdgpu_gem_va_update_vm(struct amdgpu_device *adev,
 	tv.shared = true;
 	list_add(&tv.head, &list);
 
-	vm_bos = amdgpu_vm_get_bos(adev, bo_va->vm, &list);
-	if (!vm_bos)
-		return;
+	amdgpu_vm_get_pd_bo(bo_va->vm, &list, &vm_pd);
 
 	/* Provide duplicates to avoid -EALREADY */
 	r = ttm_eu_reserve_buffers(&ticket, &list, true, &duplicates);
 	if (r)
-		goto error_free;
+		goto error_print;
 
+	amdgpu_vm_get_pt_bos(bo_va->vm, &duplicates);
 	list_for_each_entry(entry, &list, head) {
 		domain = amdgpu_mem_type_to_domain(entry->bo->mem.mem_type);
 		/* if anything is swapped out don't swap it in here,
@@ -499,9 +498,7 @@ static void amdgpu_gem_va_update_vm(struct amdgpu_device *adev,
 error_unreserve:
 	ttm_eu_backoff_reservation(&ticket, &list);
 
-error_free:
-	drm_free_large(vm_bos);
-
+error_print:
 	if (r && r != -ERESTARTSYS)
 		DRM_ERROR("Couldn't update BO_VA (%d)\n", r);
 }

+ 101 - 8
drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c

@@ -25,6 +25,7 @@
  *          Alex Deucher
  *          Jerome Glisse
  */
+#include <linux/irq.h>
 #include <drm/drmP.h>
 #include <drm/drm_crtc_helper.h>
 #include <drm/amdgpu_drm.h>
@@ -312,6 +313,7 @@ int amdgpu_irq_add_id(struct amdgpu_device *adev, unsigned src_id,
 	}
 
 	adev->irq.sources[src_id] = source;
+
 	return 0;
 }
 
@@ -335,15 +337,19 @@ void amdgpu_irq_dispatch(struct amdgpu_device *adev,
 		return;
 	}
 
-	src = adev->irq.sources[src_id];
-	if (!src) {
-		DRM_DEBUG("Unhandled interrupt src_id: %d\n", src_id);
-		return;
-	}
+	if (adev->irq.virq[src_id]) {
+		generic_handle_irq(irq_find_mapping(adev->irq.domain, src_id));
+	} else {
+		src = adev->irq.sources[src_id];
+		if (!src) {
+			DRM_DEBUG("Unhandled interrupt src_id: %d\n", src_id);
+			return;
+		}
 
-	r = src->funcs->process(adev, src, entry);
-	if (r)
-		DRM_ERROR("error processing interrupt (%d)\n", r);
+		r = src->funcs->process(adev, src, entry);
+		if (r)
+			DRM_ERROR("error processing interrupt (%d)\n", r);
+	}
 }
 
 /**
@@ -461,3 +467,90 @@ bool amdgpu_irq_enabled(struct amdgpu_device *adev, struct amdgpu_irq_src *src,
 
 	return !!atomic_read(&src->enabled_types[type]);
 }
+
+/* gen irq */
+static void amdgpu_irq_mask(struct irq_data *irqd)
+{
+	/* XXX */
+}
+
+static void amdgpu_irq_unmask(struct irq_data *irqd)
+{
+	/* XXX */
+}
+
+static struct irq_chip amdgpu_irq_chip = {
+	.name = "amdgpu-ih",
+	.irq_mask = amdgpu_irq_mask,
+	.irq_unmask = amdgpu_irq_unmask,
+};
+
+static int amdgpu_irqdomain_map(struct irq_domain *d,
+				unsigned int irq, irq_hw_number_t hwirq)
+{
+	if (hwirq >= AMDGPU_MAX_IRQ_SRC_ID)
+		return -EPERM;
+
+	irq_set_chip_and_handler(irq,
+				 &amdgpu_irq_chip, handle_simple_irq);
+	return 0;
+}
+
+static struct irq_domain_ops amdgpu_hw_irqdomain_ops = {
+	.map = amdgpu_irqdomain_map,
+};
+
+/**
+ * amdgpu_irq_add_domain - create a linear irq domain
+ *
+ * @adev: amdgpu device pointer
+ *
+ * Create an irq domain for GPU interrupt sources
+ * that may be driven by another driver (e.g., ACP).
+ */
+int amdgpu_irq_add_domain(struct amdgpu_device *adev)
+{
+	adev->irq.domain = irq_domain_add_linear(NULL, AMDGPU_MAX_IRQ_SRC_ID,
+						 &amdgpu_hw_irqdomain_ops, adev);
+	if (!adev->irq.domain) {
+		DRM_ERROR("GPU irq add domain failed\n");
+		return -ENODEV;
+	}
+
+	return 0;
+}
+
+/**
+ * amdgpu_irq_remove_domain - remove the irq domain
+ *
+ * @adev: amdgpu device pointer
+ *
+ * Remove the irq domain for GPU interrupt sources
+ * that may be driven by another driver (e.g., ACP).
+ */
+void amdgpu_irq_remove_domain(struct amdgpu_device *adev)
+{
+	if (adev->irq.domain) {
+		irq_domain_remove(adev->irq.domain);
+		adev->irq.domain = NULL;
+	}
+}
+
+/**
+ * amdgpu_irq_create_mapping - create a mapping between a domain irq and a
+ *                             Linux irq
+ *
+ * @adev: amdgpu device pointer
+ * @src_id: IH source id
+ *
+ * Create a mapping between a domain irq (GPU IH src id) and a Linux irq
+ * Use this for components that generate a GPU interrupt, but are driven
+ * by a different driver (e.g., ACP).
+ * Returns the Linux irq.
+ */
+unsigned amdgpu_irq_create_mapping(struct amdgpu_device *adev, unsigned src_id)
+{
+	adev->irq.virq[src_id] = irq_create_mapping(adev->irq.domain, src_id);
+
+	return adev->irq.virq[src_id];
+}
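
[Editor's note] The kernel-doc above spells out the intended use: a component such as ACP raises a GPU interrupt but is driven by another driver, so it maps its IH source id to a Linux irq through the new domain and then uses the regular irq API. A hedged sketch of such a client (the source id, handler, and probe function are invented for illustration):

/* Hypothetical consumer, needs <linux/interrupt.h>.  Maps the made-up
 * IH source id 0xa0 through the amdgpu irq domain and registers a
 * handler with the generic kernel irq layer. */
static irqreturn_t example_acp_irq_handler(int irq, void *data)
{
	/* acknowledge and handle the component's interrupt here */
	return IRQ_HANDLED;
}

static int example_acp_probe(struct amdgpu_device *adev)
{
	unsigned virq = amdgpu_irq_create_mapping(adev, 0xa0);

	if (!virq)
		return -ENODEV;

	return request_irq(virq, example_acp_irq_handler, 0,
			   "example-acp", adev);
}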

+ 9 - 0
drivers/gpu/drm/amd/amdgpu/amdgpu_irq.h

@@ -24,6 +24,7 @@
 #ifndef __AMDGPU_IRQ_H__
 #define __AMDGPU_IRQ_H__
 
+#include <linux/irqdomain.h>
 #include "amdgpu_ih.h"
 
 #define AMDGPU_MAX_IRQ_SRC_ID	0x100
@@ -65,6 +66,10 @@ struct amdgpu_irq {
 	/* interrupt ring */
 	struct amdgpu_ih_ring		ih;
 	const struct amdgpu_ih_funcs	*ih_funcs;
+
+	/* gen irq stuff */
+	struct irq_domain		*domain; /* GPU irq controller domain */
+	unsigned			virq[AMDGPU_MAX_IRQ_SRC_ID];
 };
 
 void amdgpu_irq_preinstall(struct drm_device *dev);
@@ -90,4 +95,8 @@ int amdgpu_irq_put(struct amdgpu_device *adev, struct amdgpu_irq_src *src,
 bool amdgpu_irq_enabled(struct amdgpu_device *adev, struct amdgpu_irq_src *src,
 			unsigned type);
 
+int amdgpu_irq_add_domain(struct amdgpu_device *adev);
+void amdgpu_irq_remove_domain(struct amdgpu_device *adev);
+unsigned amdgpu_irq_create_mapping(struct amdgpu_device *adev, unsigned src_id);
+
 #endif

+ 1 - 0
drivers/gpu/drm/amd/amdgpu/amdgpu_mode.h

@@ -35,6 +35,7 @@
 #include <drm/drm_dp_helper.h>
 #include <drm/drm_fixed.h>
 #include <drm/drm_crtc_helper.h>
+#include <drm/drm_fb_helper.h>
 #include <drm/drm_plane_helper.h>
 #include <linux/i2c.h>
 #include <linux/i2c-algo-bit.h>

+ 1 - 0
drivers/gpu/drm/amd/amdgpu/amdgpu_object.h

@@ -96,6 +96,7 @@ static inline void amdgpu_bo_unreserve(struct amdgpu_bo *bo)
  */
 static inline u64 amdgpu_bo_gpu_offset(struct amdgpu_bo *bo)
 {
+	WARN_ON_ONCE(bo->tbo.mem.mem_type == TTM_PL_SYSTEM);
 	return bo->tbo.offset;
 }
 

+ 151 - 84
drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c

@@ -30,10 +30,16 @@
 #include <linux/hwmon.h>
 #include <linux/hwmon-sysfs.h>
 
+#include "amd_powerplay.h"
+
 static int amdgpu_debugfs_pm_init(struct amdgpu_device *adev);
 
 void amdgpu_pm_acpi_event_handler(struct amdgpu_device *adev)
 {
+	if (adev->pp_enabled)
+		/* TODO */
+		return;
+
 	if (adev->pm.dpm_enabled) {
 		mutex_lock(&adev->pm.mutex);
 		if (power_supply_is_system_supplied() > 0)
@@ -52,7 +58,12 @@ static ssize_t amdgpu_get_dpm_state(struct device *dev,
 {
 	struct drm_device *ddev = dev_get_drvdata(dev);
 	struct amdgpu_device *adev = ddev->dev_private;
-	enum amdgpu_pm_state_type pm = adev->pm.dpm.user_state;
+	enum amd_pm_state_type pm;
+
+	if (adev->pp_enabled) {
+		pm = amdgpu_dpm_get_current_power_state(adev);
+	} else
+		pm = adev->pm.dpm.user_state;
 
 	return snprintf(buf, PAGE_SIZE, "%s\n",
 			(pm == POWER_STATE_TYPE_BATTERY) ? "battery" :
@@ -66,40 +77,57 @@ static ssize_t amdgpu_set_dpm_state(struct device *dev,
 {
 	struct drm_device *ddev = dev_get_drvdata(dev);
 	struct amdgpu_device *adev = ddev->dev_private;
+	enum amd_pm_state_type  state;
 
-	mutex_lock(&adev->pm.mutex);
 	if (strncmp("battery", buf, strlen("battery")) == 0)
-		adev->pm.dpm.user_state = POWER_STATE_TYPE_BATTERY;
+		state = POWER_STATE_TYPE_BATTERY;
 	else if (strncmp("balanced", buf, strlen("balanced")) == 0)
-		adev->pm.dpm.user_state = POWER_STATE_TYPE_BALANCED;
+		state = POWER_STATE_TYPE_BALANCED;
 	else if (strncmp("performance", buf, strlen("performance")) == 0)
-		adev->pm.dpm.user_state = POWER_STATE_TYPE_PERFORMANCE;
+		state = POWER_STATE_TYPE_PERFORMANCE;
 	else {
-		mutex_unlock(&adev->pm.mutex);
 		count = -EINVAL;
 		goto fail;
 	}
-	mutex_unlock(&adev->pm.mutex);
 
-	/* Can't set dpm state when the card is off */
-	if (!(adev->flags & AMD_IS_PX) ||
-	    (ddev->switch_power_state == DRM_SWITCH_POWER_ON))
-		amdgpu_pm_compute_clocks(adev);
+	if (adev->pp_enabled) {
+		amdgpu_dpm_dispatch_task(adev, AMD_PP_EVENT_ENABLE_USER_STATE, &state, NULL);
+	} else {
+		mutex_lock(&adev->pm.mutex);
+		adev->pm.dpm.user_state = state;
+		mutex_unlock(&adev->pm.mutex);
+
+		/* Can't set dpm state when the card is off */
+		if (!(adev->flags & AMD_IS_PX) ||
+		    (ddev->switch_power_state == DRM_SWITCH_POWER_ON))
+			amdgpu_pm_compute_clocks(adev);
+	}
 fail:
 	return count;
 }
 
 static ssize_t amdgpu_get_dpm_forced_performance_level(struct device *dev,
-						       struct device_attribute *attr,
-						       char *buf)
+						struct device_attribute *attr,
+								char *buf)
 {
 	struct drm_device *ddev = dev_get_drvdata(dev);
 	struct amdgpu_device *adev = ddev->dev_private;
-	enum amdgpu_dpm_forced_level level = adev->pm.dpm.forced_level;
 
-	return snprintf(buf, PAGE_SIZE, "%s\n",
-			(level == AMDGPU_DPM_FORCED_LEVEL_AUTO) ? "auto" :
-			(level == AMDGPU_DPM_FORCED_LEVEL_LOW) ? "low" : "high");
+	if (adev->pp_enabled) {
+		enum amd_dpm_forced_level level;
+
+		level = amdgpu_dpm_get_performance_level(adev);
+		return snprintf(buf, PAGE_SIZE, "%s\n",
+				(level == AMD_DPM_FORCED_LEVEL_AUTO) ? "auto" :
+				(level == AMD_DPM_FORCED_LEVEL_LOW) ? "low" : "high");
+	} else {
+		enum amdgpu_dpm_forced_level level;
+
+		level = adev->pm.dpm.forced_level;
+		return snprintf(buf, PAGE_SIZE, "%s\n",
+				(level == AMDGPU_DPM_FORCED_LEVEL_AUTO) ? "auto" :
+				(level == AMDGPU_DPM_FORCED_LEVEL_LOW) ? "low" : "high");
	}
 }
 
 static ssize_t amdgpu_set_dpm_forced_performance_level(struct device *dev,
@@ -112,7 +140,6 @@ static ssize_t amdgpu_set_dpm_forced_performance_level(struct device *dev,
 	enum amdgpu_dpm_forced_level level;
 	int ret = 0;
 
-	mutex_lock(&adev->pm.mutex);
 	if (strncmp("low", buf, strlen("low")) == 0) {
 		level = AMDGPU_DPM_FORCED_LEVEL_LOW;
 	} else if (strncmp("high", buf, strlen("high")) == 0) {
@@ -123,7 +150,11 @@ static ssize_t amdgpu_set_dpm_forced_performance_level(struct device *dev,
 		count = -EINVAL;
 		goto fail;
 	}
-	if (adev->pm.funcs->force_performance_level) {
+
+	if (adev->pp_enabled)
+		amdgpu_dpm_force_performance_level(adev, level);
+	else {
+		mutex_lock(&adev->pm.mutex);
 		if (adev->pm.dpm.thermal_active) {
 			count = -EINVAL;
 			goto fail;
@@ -131,6 +162,9 @@ static ssize_t amdgpu_set_dpm_forced_performance_level(struct device *dev,
 		ret = amdgpu_dpm_force_performance_level(adev, level);
 		if (ret)
 			count = -EINVAL;
+		else
+			adev->pm.dpm.forced_level = level;
+		mutex_unlock(&adev->pm.mutex);
 	}
 fail:
 	mutex_unlock(&adev->pm.mutex);
@@ -150,10 +184,10 @@ static ssize_t amdgpu_hwmon_show_temp(struct device *dev,
 	struct amdgpu_device *adev = dev_get_drvdata(dev);
 	int temp;
 
-	if (adev->pm.funcs->get_temperature)
-		temp = amdgpu_dpm_get_temperature(adev);
-	else
+	if (!adev->pp_enabled && !adev->pm.funcs->get_temperature)
 		temp = 0;
+	else
+		temp = amdgpu_dpm_get_temperature(adev);
 
 	return snprintf(buf, PAGE_SIZE, "%d\n", temp);
 }
@@ -181,8 +215,10 @@ static ssize_t amdgpu_hwmon_get_pwm1_enable(struct device *dev,
 	struct amdgpu_device *adev = dev_get_drvdata(dev);
 	u32 pwm_mode = 0;
 
-	if (adev->pm.funcs->get_fan_control_mode)
-		pwm_mode = amdgpu_dpm_get_fan_control_mode(adev);
+	if (!adev->pp_enabled && !adev->pm.funcs->get_fan_control_mode)
+		return -EINVAL;
+
+	pwm_mode = amdgpu_dpm_get_fan_control_mode(adev);
 
 	/* never 0 (full-speed), fuse or smc-controlled always */
 	return sprintf(buf, "%i\n", pwm_mode == FDO_PWM_MODE_STATIC ? 1 : 2);
@@ -197,7 +233,7 @@ static ssize_t amdgpu_hwmon_set_pwm1_enable(struct device *dev,
 	int err;
 	int value;
 
-	if(!adev->pm.funcs->set_fan_control_mode)
+	if (!adev->pp_enabled && !adev->pm.funcs->set_fan_control_mode)
 		return -EINVAL;
 
 	err = kstrtoint(buf, 10, &value);
@@ -290,11 +326,11 @@ static struct attribute *hwmon_attributes[] = {
 static umode_t hwmon_attributes_visible(struct kobject *kobj,
 					struct attribute *attr, int index)
 {
-	struct device *dev = container_of(kobj, struct device, kobj);
+	struct device *dev = kobj_to_dev(kobj);
 	struct amdgpu_device *adev = dev_get_drvdata(dev);
 	umode_t effective_mode = attr->mode;
 
-	/* Skip attributes if DPM is not enabled */
+	/* Skip limit attributes if DPM is not enabled */
 	if (!adev->pm.dpm_enabled &&
 	    (attr == &sensor_dev_attr_temp1_crit.dev_attr.attr ||
 	     attr == &sensor_dev_attr_temp1_crit_hyst.dev_attr.attr ||
@@ -304,6 +340,9 @@ static umode_t hwmon_attributes_visible(struct kobject *kobj,
 	     attr == &sensor_dev_attr_pwm1_min.dev_attr.attr))
 		return 0;
 
+	if (adev->pp_enabled)
+		return effective_mode;
+
 	/* Skip fan attributes if fan is not present */
 	if (adev->pm.no_fan &&
 	    (attr == &sensor_dev_attr_pwm1.dev_attr.attr ||
@@ -351,7 +390,7 @@ void amdgpu_dpm_thermal_work_handler(struct work_struct *work)
 		container_of(work, struct amdgpu_device,
 			     pm.dpm.thermal.work);
 	/* switch to the thermal state */
-	enum amdgpu_pm_state_type dpm_state = POWER_STATE_TYPE_INTERNAL_THERMAL;
+	enum amd_pm_state_type dpm_state = POWER_STATE_TYPE_INTERNAL_THERMAL;
 
 	if (!adev->pm.dpm_enabled)
 		return;
@@ -379,7 +418,7 @@ void amdgpu_dpm_thermal_work_handler(struct work_struct *work)
 }
 
 static struct amdgpu_ps *amdgpu_dpm_pick_power_state(struct amdgpu_device *adev,
-						     enum amdgpu_pm_state_type dpm_state)
+						     enum amd_pm_state_type dpm_state)
 {
 	int i;
 	struct amdgpu_ps *ps;
@@ -516,7 +555,7 @@ static void amdgpu_dpm_change_power_state_locked(struct amdgpu_device *adev)
 {
 	int i;
 	struct amdgpu_ps *ps;
-	enum amdgpu_pm_state_type dpm_state;
+	enum amd_pm_state_type dpm_state;
 	int ret;
 
 	/* if dpm init failed */
@@ -635,49 +674,54 @@ done:
 
 void amdgpu_dpm_enable_uvd(struct amdgpu_device *adev, bool enable)
 {
-	if (adev->pm.funcs->powergate_uvd) {
-		mutex_lock(&adev->pm.mutex);
-		/* enable/disable UVD */
+	if (adev->pp_enabled)
 		amdgpu_dpm_powergate_uvd(adev, !enable);
-		mutex_unlock(&adev->pm.mutex);
-	} else {
-		if (enable) {
+	else {
+		if (adev->pm.funcs->powergate_uvd) {
 			mutex_lock(&adev->pm.mutex);
-			adev->pm.dpm.uvd_active = true;
-			adev->pm.dpm.state = POWER_STATE_TYPE_INTERNAL_UVD;
+			/* enable/disable UVD */
+			amdgpu_dpm_powergate_uvd(adev, !enable);
 			mutex_unlock(&adev->pm.mutex);
 		} else {
-			mutex_lock(&adev->pm.mutex);
-			adev->pm.dpm.uvd_active = false;
-			mutex_unlock(&adev->pm.mutex);
+			if (enable) {
+				mutex_lock(&adev->pm.mutex);
+				adev->pm.dpm.uvd_active = true;
+				adev->pm.dpm.state = POWER_STATE_TYPE_INTERNAL_UVD;
+				mutex_unlock(&adev->pm.mutex);
+			} else {
+				mutex_lock(&adev->pm.mutex);
+				adev->pm.dpm.uvd_active = false;
+				mutex_unlock(&adev->pm.mutex);
+			}
+			amdgpu_pm_compute_clocks(adev);
 		}
 
-		amdgpu_pm_compute_clocks(adev);
 	}
 }
 
 void amdgpu_dpm_enable_vce(struct amdgpu_device *adev, bool enable)
 {
-	if (adev->pm.funcs->powergate_vce) {
-		mutex_lock(&adev->pm.mutex);
-		/* enable/disable VCE */
+	if (adev->pp_enabled)
 		amdgpu_dpm_powergate_vce(adev, !enable);
-
-		mutex_unlock(&adev->pm.mutex);
-	} else {
-		if (enable) {
+	else {
+		if (adev->pm.funcs->powergate_vce) {
 			mutex_lock(&adev->pm.mutex);
-			adev->pm.dpm.vce_active = true;
-			/* XXX select vce level based on ring/task */
-			adev->pm.dpm.vce_level = AMDGPU_VCE_LEVEL_AC_ALL;
+			amdgpu_dpm_powergate_vce(adev, !enable);
 			mutex_unlock(&adev->pm.mutex);
 		} else {
-			mutex_lock(&adev->pm.mutex);
-			adev->pm.dpm.vce_active = false;
-			mutex_unlock(&adev->pm.mutex);
+			if (enable) {
+				mutex_lock(&adev->pm.mutex);
+				adev->pm.dpm.vce_active = true;
+				/* XXX select vce level based on ring/task */
+				adev->pm.dpm.vce_level = AMDGPU_VCE_LEVEL_AC_ALL;
+				mutex_unlock(&adev->pm.mutex);
+			} else {
+				mutex_lock(&adev->pm.mutex);
+				adev->pm.dpm.vce_active = false;
+				mutex_unlock(&adev->pm.mutex);
+			}
+			amdgpu_pm_compute_clocks(adev);
 		}
-
-	amdgpu_pm_compute_clocks(adev);
 	}
 }
 
@@ -685,10 +729,13 @@ void amdgpu_pm_print_power_states(struct amdgpu_device *adev)
 {
 	int i;
 
-	for (i = 0; i < adev->pm.dpm.num_ps; i++) {
-		printk("== power state %d ==\n", i);
+	if (adev->pp_enabled)
+		/* TO DO */
+		return;
+
+	for (i = 0; i < adev->pm.dpm.num_ps; i++)
 		amdgpu_dpm_print_power_state(adev, &adev->pm.dpm.ps[i]);
-	}
+
 }
 
 int amdgpu_pm_sysfs_init(struct amdgpu_device *adev)
@@ -698,8 +745,11 @@ int amdgpu_pm_sysfs_init(struct amdgpu_device *adev)
 	if (adev->pm.sysfs_initialized)
 		return 0;
 
-	if (adev->pm.funcs->get_temperature == NULL)
-		return 0;
+	if (!adev->pp_enabled) {
+		if (adev->pm.funcs->get_temperature == NULL)
+			return 0;
+	}
+
 	adev->pm.int_hwmon_dev = hwmon_device_register_with_groups(adev->dev,
 								   DRIVER_NAME, adev,
 								   hwmon_groups);
@@ -748,32 +798,43 @@ void amdgpu_pm_compute_clocks(struct amdgpu_device *adev)
 	if (!adev->pm.dpm_enabled)
 		return;
 
-	mutex_lock(&adev->pm.mutex);
+	if (adev->pp_enabled) {
+		int i = 0;
 
-	/* update active crtc counts */
-	adev->pm.dpm.new_active_crtcs = 0;
-	adev->pm.dpm.new_active_crtc_count = 0;
-	if (adev->mode_info.num_crtc && adev->mode_info.mode_config_initialized) {
-		list_for_each_entry(crtc,
-				    &ddev->mode_config.crtc_list, head) {
-			amdgpu_crtc = to_amdgpu_crtc(crtc);
-			if (crtc->enabled) {
-				adev->pm.dpm.new_active_crtcs |= (1 << amdgpu_crtc->crtc_id);
-				adev->pm.dpm.new_active_crtc_count++;
+		amdgpu_display_bandwidth_update(adev);
+		mutex_lock(&adev->ring_lock);
+			for (i = 0; i < AMDGPU_MAX_RINGS; i++) {
+				struct amdgpu_ring *ring = adev->rings[i];
+				if (ring && ring->ready)
+					amdgpu_fence_wait_empty(ring);
 			}
-		}
-	}
+		mutex_unlock(&adev->ring_lock);
 
-	/* update battery/ac status */
-	if (power_supply_is_system_supplied() > 0)
-		adev->pm.dpm.ac_power = true;
-	else
-		adev->pm.dpm.ac_power = false;
-
-	amdgpu_dpm_change_power_state_locked(adev);
+		amdgpu_dpm_dispatch_task(adev, AMD_PP_EVENT_DISPLAY_CONFIG_CHANGE, NULL, NULL);
+	} else {
+		mutex_lock(&adev->pm.mutex);
+		adev->pm.dpm.new_active_crtcs = 0;
+		adev->pm.dpm.new_active_crtc_count = 0;
+		if (adev->mode_info.num_crtc && adev->mode_info.mode_config_initialized) {
+			list_for_each_entry(crtc,
+					    &ddev->mode_config.crtc_list, head) {
+				amdgpu_crtc = to_amdgpu_crtc(crtc);
+				if (crtc->enabled) {
+					adev->pm.dpm.new_active_crtcs |= (1 << amdgpu_crtc->crtc_id);
+					adev->pm.dpm.new_active_crtc_count++;
+				}
+			}
+		}
+		/* update battery/ac status */
+		if (power_supply_is_system_supplied() > 0)
+			adev->pm.dpm.ac_power = true;
+		else
+			adev->pm.dpm.ac_power = false;
 
-	mutex_unlock(&adev->pm.mutex);
+		amdgpu_dpm_change_power_state_locked(adev);
 
+		mutex_unlock(&adev->pm.mutex);
+	}
 }
 
 /*
@@ -787,7 +848,13 @@ static int amdgpu_debugfs_pm_info(struct seq_file *m, void *data)
 	struct drm_device *dev = node->minor->dev;
 	struct amdgpu_device *adev = dev->dev_private;
 
-	if (adev->pm.dpm_enabled) {
+	if (!adev->pm.dpm_enabled) {
+		seq_printf(m, "dpm not enabled\n");
+		return 0;
+	}
+	if (adev->pp_enabled) {
+		amdgpu_dpm_debugfs_print_current_performance_level(adev, m);
+	} else {
 		mutex_lock(&adev->pm.mutex);
 		if (adev->pm.funcs->debugfs_print_current_performance_level)
 			amdgpu_dpm_debugfs_print_current_performance_level(adev, m);
+ 317 - 0
drivers/gpu/drm/amd/amdgpu/amdgpu_powerplay.c

@@ -0,0 +1,317 @@
+/*
+ * Copyright 2015 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: AMD
+ *
+ */
+#include "atom.h"
+#include "amdgpu.h"
+#include "amd_shared.h"
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include "amdgpu_pm.h"
+#include <drm/amdgpu_drm.h>
+#include "amdgpu_powerplay.h"
+#include "cik_dpm.h"
+#include "vi_dpm.h"
+
+static int amdgpu_powerplay_init(struct amdgpu_device *adev)
+{
+	int ret = 0;
+	struct amd_powerplay *amd_pp;
+
+	amd_pp = &(adev->powerplay);
+
+	if (adev->pp_enabled) {
+#ifdef CONFIG_DRM_AMD_POWERPLAY
+		struct amd_pp_init *pp_init;
+
+		pp_init = kzalloc(sizeof(struct amd_pp_init), GFP_KERNEL);
+
+		if (pp_init == NULL)
+			return -ENOMEM;
+
+		pp_init->chip_family = adev->family;
+		pp_init->chip_id = adev->asic_type;
+		pp_init->device = amdgpu_cgs_create_device(adev);
+
+		ret = amd_powerplay_init(pp_init, amd_pp);
+		kfree(pp_init);
+#endif
+	} else {
+		amd_pp->pp_handle = (void *)adev;
+
+		switch (adev->asic_type) {
+#ifdef CONFIG_DRM_AMDGPU_CIK
+		case CHIP_BONAIRE:
+		case CHIP_HAWAII:
+			amd_pp->ip_funcs = &ci_dpm_ip_funcs;
+			break;
+		case CHIP_KABINI:
+		case CHIP_MULLINS:
+		case CHIP_KAVERI:
+			amd_pp->ip_funcs = &kv_dpm_ip_funcs;
+			break;
+#endif
+		case CHIP_TOPAZ:
+			amd_pp->ip_funcs = &iceland_dpm_ip_funcs;
+			break;
+		case CHIP_TONGA:
+			amd_pp->ip_funcs = &tonga_dpm_ip_funcs;
+			break;
+		case CHIP_FIJI:
+			amd_pp->ip_funcs = &fiji_dpm_ip_funcs;
+			break;
+		case CHIP_CARRIZO:
+		case CHIP_STONEY:
+			amd_pp->ip_funcs = &cz_dpm_ip_funcs;
+			break;
+		default:
+			ret = -EINVAL;
+			break;
+		}
+	}
+	return ret;
+}
+
+static int amdgpu_pp_early_init(void *handle)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+	int ret = 0;
+
+#ifdef CONFIG_DRM_AMD_POWERPLAY
+	switch (adev->asic_type) {
+		case CHIP_TONGA:
+		case CHIP_FIJI:
+			adev->pp_enabled = (amdgpu_powerplay > 0) ? true : false;
+			break;
+		default:
+			adev->pp_enabled = (amdgpu_powerplay > 0) ? true : false;
+			break;
+	}
+#else
+	adev->pp_enabled = false;
+#endif
+
+	ret = amdgpu_powerplay_init(adev);
+	if (ret)
+		return ret;
+
+	if (adev->powerplay.ip_funcs->early_init)
+		ret = adev->powerplay.ip_funcs->early_init(
+					adev->powerplay.pp_handle);
+	return ret;
+}
+
+
+static int amdgpu_pp_late_init(void *handle)
+{
+	int ret = 0;
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	if (adev->powerplay.ip_funcs->late_init)
+		ret = adev->powerplay.ip_funcs->late_init(
+					adev->powerplay.pp_handle);
+
+#ifdef CONFIG_DRM_AMD_POWERPLAY
+	if (adev->pp_enabled)
+		amdgpu_pm_sysfs_init(adev);
+#endif
+	return ret;
+}
+
+static int amdgpu_pp_sw_init(void *handle)
+{
+	int ret = 0;
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	if (adev->powerplay.ip_funcs->sw_init)
+		ret = adev->powerplay.ip_funcs->sw_init(
+					adev->powerplay.pp_handle);
+
+#ifdef CONFIG_DRM_AMD_POWERPLAY
+	if (adev->pp_enabled) {
+		if (amdgpu_dpm == 0)
+			adev->pm.dpm_enabled = false;
+		else
+			adev->pm.dpm_enabled = true;
+	}
+#endif
+
+	return ret;
+}
+
+static int amdgpu_pp_sw_fini(void *handle)
+{
+	int ret = 0;
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	if (adev->powerplay.ip_funcs->sw_fini)
+		ret = adev->powerplay.ip_funcs->sw_fini(
+					adev->powerplay.pp_handle);
+	if (ret)
+		return ret;
+
+#ifdef CONFIG_DRM_AMD_POWERPLAY
+	if (adev->pp_enabled) {
+		amdgpu_pm_sysfs_fini(adev);
+		amd_powerplay_fini(adev->powerplay.pp_handle);
+	}
+#endif
+
+	return ret;
+}
+
+static int amdgpu_pp_hw_init(void *handle)
+{
+	int ret = 0;
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	if (adev->pp_enabled && adev->firmware.smu_load)
+		amdgpu_ucode_init_bo(adev);
+
+	if (adev->powerplay.ip_funcs->hw_init)
+		ret = adev->powerplay.ip_funcs->hw_init(
+					adev->powerplay.pp_handle);
+
+	return ret;
+}
+
+static int amdgpu_pp_hw_fini(void *handle)
+{
+	int ret = 0;
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	if (adev->powerplay.ip_funcs->hw_fini)
+		ret = adev->powerplay.ip_funcs->hw_fini(
+					adev->powerplay.pp_handle);
+
+	if (adev->pp_enabled && adev->firmware.smu_load)
+		amdgpu_ucode_fini_bo(adev);
+
+	return ret;
+}
+
+static int amdgpu_pp_suspend(void *handle)
+{
+	int ret = 0;
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	if (adev->powerplay.ip_funcs->suspend)
+		ret = adev->powerplay.ip_funcs->suspend(
+					 adev->powerplay.pp_handle);
+	return ret;
+}
+
+static int amdgpu_pp_resume(void *handle)
+{
+	int ret = 0;
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	if (adev->powerplay.ip_funcs->resume)
+		ret = adev->powerplay.ip_funcs->resume(
+					adev->powerplay.pp_handle);
+	return ret;
+}
+
+static int amdgpu_pp_set_clockgating_state(void *handle,
+					enum amd_clockgating_state state)
+{
+	int ret = 0;
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	if (adev->powerplay.ip_funcs->set_clockgating_state)
+		ret = adev->powerplay.ip_funcs->set_clockgating_state(
+				adev->powerplay.pp_handle, state);
+	return ret;
+}
+
+static int amdgpu_pp_set_powergating_state(void *handle,
+					enum amd_powergating_state state)
+{
+	int ret = 0;
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	if (adev->powerplay.ip_funcs->set_powergating_state)
+		ret = adev->powerplay.ip_funcs->set_powergating_state(
+				 adev->powerplay.pp_handle, state);
+	return ret;
+}
+
+
+static bool amdgpu_pp_is_idle(void *handle)
+{
+	bool ret = true;
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	if (adev->powerplay.ip_funcs->is_idle)
+		ret = adev->powerplay.ip_funcs->is_idle(
+					adev->powerplay.pp_handle);
+	return ret;
+}
+
+static int amdgpu_pp_wait_for_idle(void *handle)
+{
+	int ret = 0;
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	if (adev->powerplay.ip_funcs->wait_for_idle)
+		ret = adev->powerplay.ip_funcs->wait_for_idle(
+					adev->powerplay.pp_handle);
+	return ret;
+}
+
+static int amdgpu_pp_soft_reset(void *handle)
+{
+	int ret = 0;
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	if (adev->powerplay.ip_funcs->soft_reset)
+		ret = adev->powerplay.ip_funcs->soft_reset(
+					adev->powerplay.pp_handle);
+	return ret;
+}
+
+static void amdgpu_pp_print_status(void *handle)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	if (adev->powerplay.ip_funcs->print_status)
+		adev->powerplay.ip_funcs->print_status(
+					adev->powerplay.pp_handle);
+}
+
+const struct amd_ip_funcs amdgpu_pp_ip_funcs = {
+	.early_init = amdgpu_pp_early_init,
+	.late_init = amdgpu_pp_late_init,
+	.sw_init = amdgpu_pp_sw_init,
+	.sw_fini = amdgpu_pp_sw_fini,
+	.hw_init = amdgpu_pp_hw_init,
+	.hw_fini = amdgpu_pp_hw_fini,
+	.suspend = amdgpu_pp_suspend,
+	.resume = amdgpu_pp_resume,
+	.is_idle = amdgpu_pp_is_idle,
+	.wait_for_idle = amdgpu_pp_wait_for_idle,
+	.soft_reset = amdgpu_pp_soft_reset,
+	.print_status = amdgpu_pp_print_status,
+	.set_clockgating_state = amdgpu_pp_set_clockgating_state,
+	.set_powergating_state = amdgpu_pp_set_powergating_state,
+};

+ 33 - 0
drivers/gpu/drm/amd/amdgpu/amdgpu_powerplay.h

@@ -0,0 +1,33 @@
+/*
+ *  Copyright 2015 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: AMD
+ *
+ */
+
+#ifndef __AMDGPU_POPWERPLAY_H__
+#define __AMDGPU_POPWERPLAY_H__
+
+#include "amd_shared.h"
+
+extern const struct amd_ip_funcs amdgpu_pp_ip_funcs;
+
+#endif /* __AMDSOC_DM_H__ */

+ 3 - 2
drivers/gpu/drm/amd/amdgpu/amdgpu_sync.c

@@ -293,7 +293,8 @@ int amdgpu_sync_rings(struct amdgpu_sync *sync,
 		fence = to_amdgpu_fence(sync->sync_to[i]);
 
 		/* check if we really need to sync */
-		if (!amdgpu_fence_need_sync(fence, ring))
+		if (!amdgpu_enable_scheduler &&
+		    !amdgpu_fence_need_sync(fence, ring))
 			continue;
 
 		/* prevent GPU deadlocks */
@@ -303,7 +304,7 @@ int amdgpu_sync_rings(struct amdgpu_sync *sync,
 		}
 
 		if (amdgpu_enable_scheduler || !amdgpu_enable_semaphores) {
-			r = fence_wait(&fence->base, true);
+			r = fence_wait(sync->sync_to[i], true);
 			if (r)
 				return r;
 			continue;

+ 75 - 41
drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c

@@ -75,50 +75,77 @@ static unsigned amdgpu_vm_directory_size(struct amdgpu_device *adev)
 }
 
 /**
- * amdgpu_vm_get_bos - add the vm BOs to a validation list
+ * amdgpu_vm_get_pd_bo - add the VM PD to a validation list
  *
  * @vm: vm providing the BOs
- * @head: head of validation list
+ * @validated: head of validation list
+ * @entry: entry to add
  *
  * Add the page directory to the list of BOs to
- * validate for command submission (cayman+).
+ * validate for command submission.
  */
-struct amdgpu_bo_list_entry *amdgpu_vm_get_bos(struct amdgpu_device *adev,
-					  struct amdgpu_vm *vm,
-					  struct list_head *head)
+void amdgpu_vm_get_pd_bo(struct amdgpu_vm *vm,
+			 struct list_head *validated,
+			 struct amdgpu_bo_list_entry *entry)
 {
-	struct amdgpu_bo_list_entry *list;
-	unsigned i, idx;
+	entry->robj = vm->page_directory;
+	entry->prefered_domains = AMDGPU_GEM_DOMAIN_VRAM;
+	entry->allowed_domains = AMDGPU_GEM_DOMAIN_VRAM;
+	entry->priority = 0;
+	entry->tv.bo = &vm->page_directory->tbo;
+	entry->tv.shared = true;
+	list_add(&entry->tv.head, validated);
+}
 
-	list = drm_malloc_ab(vm->max_pde_used + 2,
-			     sizeof(struct amdgpu_bo_list_entry));
-	if (!list) {
-		return NULL;
-	}
+/**
+ * amdgpu_vm_get_pt_bos - add the vm page table BOs to a duplicates list
+ *
+ * @vm: vm providing the BOs
+ * @duplicates: head of duplicates list
+ *
+ * Add the page tables to the BO duplicates list
+ * for command submission.
+ */
+void amdgpu_vm_get_pt_bos(struct amdgpu_vm *vm, struct list_head *duplicates)
+{
+	unsigned i;
 
 	/* add the vm page table to the list */
-	list[0].robj = vm->page_directory;
-	list[0].prefered_domains = AMDGPU_GEM_DOMAIN_VRAM;
-	list[0].allowed_domains = AMDGPU_GEM_DOMAIN_VRAM;
-	list[0].priority = 0;
-	list[0].tv.bo = &vm->page_directory->tbo;
-	list[0].tv.shared = true;
-	list_add(&list[0].tv.head, head);
-
-	for (i = 0, idx = 1; i <= vm->max_pde_used; i++) {
-		if (!vm->page_tables[i].bo)
+	for (i = 0; i <= vm->max_pde_used; ++i) {
+		struct amdgpu_bo_list_entry *entry = &vm->page_tables[i].entry;
+
+		if (!entry->robj)
 			continue;
 
-		list[idx].robj = vm->page_tables[i].bo;
-		list[idx].prefered_domains = AMDGPU_GEM_DOMAIN_VRAM;
-		list[idx].allowed_domains = AMDGPU_GEM_DOMAIN_VRAM;
-		list[idx].priority = 0;
-		list[idx].tv.bo = &list[idx].robj->tbo;
-		list[idx].tv.shared = true;
-		list_add(&list[idx++].tv.head, head);
+		list_add(&entry->tv.head, duplicates);
 	}
 
-	return list;
+}
+
+/**
+ * amdgpu_vm_move_pt_bos_in_lru - move the PT BOs to the LRU tail
+ *
+ * @adev: amdgpu device instance
+ * @vm: vm providing the BOs
+ *
+ * Move the PT BOs to the tail of the LRU.
+ */
+void amdgpu_vm_move_pt_bos_in_lru(struct amdgpu_device *adev,
+				  struct amdgpu_vm *vm)
+{
+	struct ttm_bo_global *glob = adev->mman.bdev.glob;
+	unsigned i;
+
+	spin_lock(&glob->lru_lock);
+	for (i = 0; i <= vm->max_pde_used; ++i) {
+		struct amdgpu_bo_list_entry *entry = &vm->page_tables[i].entry;
+
+		if (!entry->robj)
+			continue;
+
+		ttm_bo_move_to_lru_tail(&entry->robj->tbo);
+	}
+	spin_unlock(&glob->lru_lock);
 }
 
 /**
@@ -461,7 +488,7 @@ int amdgpu_vm_update_page_directory(struct amdgpu_device *adev,
 
 	/* walk over the address space and update the page directory */
 	for (pt_idx = 0; pt_idx <= vm->max_pde_used; ++pt_idx) {
-		struct amdgpu_bo *bo = vm->page_tables[pt_idx].bo;
+		struct amdgpu_bo *bo = vm->page_tables[pt_idx].entry.robj;
 		uint64_t pde, pt;
 
 		if (bo == NULL)
@@ -638,7 +665,7 @@ static int amdgpu_vm_update_ptes(struct amdgpu_device *adev,
 	/* walk over the address space and update the page tables */
 	for (addr = start; addr < end; ) {
 		uint64_t pt_idx = addr >> amdgpu_vm_block_size;
-		struct amdgpu_bo *pt = vm->page_tables[pt_idx].bo;
+		struct amdgpu_bo *pt = vm->page_tables[pt_idx].entry.robj;
 		unsigned nptes;
 		uint64_t pte;
 		int r;
@@ -1010,13 +1037,13 @@ int amdgpu_vm_bo_map(struct amdgpu_device *adev,
 		return -EINVAL;
 
 	/* make sure object fit at this offset */
-	eaddr = saddr + size;
+	eaddr = saddr + size - 1;
 	if ((saddr >= eaddr) || (offset + size > amdgpu_bo_size(bo_va->bo)))
 		return -EINVAL;
 
 	last_pfn = eaddr / AMDGPU_GPU_PAGE_SIZE;
-	if (last_pfn > adev->vm_manager.max_pfn) {
-		dev_err(adev->dev, "va above limit (0x%08X > 0x%08X)\n",
+	if (last_pfn >= adev->vm_manager.max_pfn) {
+		dev_err(adev->dev, "va above limit (0x%08X >= 0x%08X)\n",
 			last_pfn, adev->vm_manager.max_pfn);
 		return -EINVAL;
 	}
@@ -1025,7 +1052,7 @@ int amdgpu_vm_bo_map(struct amdgpu_device *adev,
 	eaddr /= AMDGPU_GPU_PAGE_SIZE;
 
 	spin_lock(&vm->it_lock);
-	it = interval_tree_iter_first(&vm->va, saddr, eaddr - 1);
+	it = interval_tree_iter_first(&vm->va, saddr, eaddr);
 	spin_unlock(&vm->it_lock);
 	if (it) {
 		struct amdgpu_bo_va_mapping *tmp;
@@ -1046,7 +1073,7 @@ int amdgpu_vm_bo_map(struct amdgpu_device *adev,
 
 	INIT_LIST_HEAD(&mapping->list);
 	mapping->it.start = saddr;
-	mapping->it.last = eaddr - 1;
+	mapping->it.last = eaddr;
 	mapping->offset = offset;
 	mapping->flags = flags;
 
@@ -1070,9 +1097,11 @@ int amdgpu_vm_bo_map(struct amdgpu_device *adev,
 	/* walk over the address space and allocate the page tables */
 	for (pt_idx = saddr; pt_idx <= eaddr; ++pt_idx) {
 		struct reservation_object *resv = vm->page_directory->tbo.resv;
+		struct amdgpu_bo_list_entry *entry;
 		struct amdgpu_bo *pt;
 
-		if (vm->page_tables[pt_idx].bo)
+		entry = &vm->page_tables[pt_idx].entry;
+		if (entry->robj)
 			continue;
 
 		r = amdgpu_bo_create(adev, AMDGPU_VM_PTE_COUNT * 8,
@@ -1094,8 +1123,13 @@ int amdgpu_vm_bo_map(struct amdgpu_device *adev,
 			goto error_free;
 		}
 
+		entry->robj = pt;
+		entry->prefered_domains = AMDGPU_GEM_DOMAIN_VRAM;
+		entry->allowed_domains = AMDGPU_GEM_DOMAIN_VRAM;
+		entry->priority = 0;
+		entry->tv.bo = &entry->robj->tbo;
+		entry->tv.shared = true;
 		vm->page_tables[pt_idx].addr = 0;
-		vm->page_tables[pt_idx].bo = pt;
 	}
 
 	return 0;
@@ -1326,7 +1360,7 @@ void amdgpu_vm_fini(struct amdgpu_device *adev, struct amdgpu_vm *vm)
 	}
 
 	for (i = 0; i < amdgpu_vm_num_pdes(adev); i++)
-		amdgpu_bo_unref(&vm->page_tables[i].bo);
+		amdgpu_bo_unref(&vm->page_tables[i].entry.robj);
 	kfree(vm->page_tables);
 
 	amdgpu_bo_unref(&vm->page_directory);

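The off-by-one fixes above all follow from one convention change: the interval tree stores inclusive [start, last] ranges, so a mapping of size N starting at saddr now ends at saddr + N - 1 and the max_pfn test becomes >=. A standalone check of why the inclusive form keeps adjacent mappings from colliding (plain C, no kernel types):

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* inclusive-range overlap test, mirroring [start, last] interval semantics */
static int overlaps(uint64_t a_start, uint64_t a_last,
		    uint64_t b_start, uint64_t b_last)
{
	return a_start <= b_last && b_start <= a_last;
}

int main(void)
{
	/* two adjacent 0x1000-byte mappings at 0x0 and 0x1000 */
	uint64_t a_start = 0x0,    a_last = 0x0    + 0x1000 - 1; /* 0xfff */
	uint64_t b_start = 0x1000, b_last = 0x1000 + 0x1000 - 1; /* 0x1fff */

	assert(!overlaps(a_start, a_last, b_start, b_last));

	/* with the exclusive convention (last = start + size) they would
	 * falsely appear to overlap at 0x1000 */
	assert(overlaps(a_start, a_last + 1, b_start, b_last));

	printf("adjacent inclusive ranges do not collide\n");
	return 0;
}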
+ 36 - 60
drivers/gpu/drm/amd/amdgpu/atombios_dp.c

@@ -243,7 +243,7 @@ static void amdgpu_atombios_dp_get_adjust_train(const u8 link_status[DP_LINK_STA
 
 /* convert bits per color to bits per pixel */
 /* get bpc from the EDID */
-static int amdgpu_atombios_dp_convert_bpc_to_bpp(int bpc)
+static unsigned amdgpu_atombios_dp_convert_bpc_to_bpp(int bpc)
 {
 	if (bpc == 0)
 		return 24;
@@ -251,64 +251,32 @@ static int amdgpu_atombios_dp_convert_bpc_to_bpp(int bpc)
 		return bpc * 3;
 }
 
-/* get the max pix clock supported by the link rate and lane num */
-static int amdgpu_atombios_dp_get_max_dp_pix_clock(int link_rate,
-					    int lane_num,
-					    int bpp)
-{
-	return (link_rate * lane_num * 8) / bpp;
-}
-
 /***** amdgpu specific DP functions *****/
 
-/* First get the min lane# when low rate is used according to pixel clock
- * (prefer low rate), second check max lane# supported by DP panel,
- * if the max lane# < low rate lane# then use max lane# instead.
- */
-static int amdgpu_atombios_dp_get_dp_lane_number(struct drm_connector *connector,
+static int amdgpu_atombios_dp_get_dp_link_config(struct drm_connector *connector,
 						 const u8 dpcd[DP_DPCD_SIZE],
-						 int pix_clock)
-{
-	int bpp = amdgpu_atombios_dp_convert_bpc_to_bpp(amdgpu_connector_get_monitor_bpc(connector));
-	int max_link_rate = drm_dp_max_link_rate(dpcd);
-	int max_lane_num = drm_dp_max_lane_count(dpcd);
-	int lane_num;
-	int max_dp_pix_clock;
-
-	for (lane_num = 1; lane_num < max_lane_num; lane_num <<= 1) {
-		max_dp_pix_clock = amdgpu_atombios_dp_get_max_dp_pix_clock(max_link_rate, lane_num, bpp);
-		if (pix_clock <= max_dp_pix_clock)
-			break;
-	}
-
-	return lane_num;
-}
-
-static int amdgpu_atombios_dp_get_dp_link_clock(struct drm_connector *connector,
-						const u8 dpcd[DP_DPCD_SIZE],
-						int pix_clock)
+						 unsigned pix_clock,
+						 unsigned *dp_lanes, unsigned *dp_rate)
 {
-	int bpp = amdgpu_atombios_dp_convert_bpc_to_bpp(amdgpu_connector_get_monitor_bpc(connector));
-	int lane_num, max_pix_clock;
-
-	if (amdgpu_connector_encoder_get_dp_bridge_encoder_id(connector) ==
-	    ENCODER_OBJECT_ID_NUTMEG)
-		return 270000;
-
-	lane_num = amdgpu_atombios_dp_get_dp_lane_number(connector, dpcd, pix_clock);
-	max_pix_clock = amdgpu_atombios_dp_get_max_dp_pix_clock(162000, lane_num, bpp);
-	if (pix_clock <= max_pix_clock)
-		return 162000;
-	max_pix_clock = amdgpu_atombios_dp_get_max_dp_pix_clock(270000, lane_num, bpp);
-	if (pix_clock <= max_pix_clock)
-		return 270000;
-	if (amdgpu_connector_is_dp12_capable(connector)) {
-		max_pix_clock = amdgpu_atombios_dp_get_max_dp_pix_clock(540000, lane_num, bpp);
-		if (pix_clock <= max_pix_clock)
-			return 540000;
+	unsigned bpp =
+		amdgpu_atombios_dp_convert_bpc_to_bpp(amdgpu_connector_get_monitor_bpc(connector));
+	static const unsigned link_rates[3] = { 162000, 270000, 540000 };
+	unsigned max_link_rate = drm_dp_max_link_rate(dpcd);
+	unsigned max_lane_num = drm_dp_max_lane_count(dpcd);
+	unsigned lane_num, i, max_pix_clock;
+
+	for (lane_num = 1; lane_num <= max_lane_num; lane_num <<= 1) {
+		for (i = 0; i < ARRAY_SIZE(link_rates) && link_rates[i] <= max_link_rate; i++) {
+			max_pix_clock = (lane_num * link_rates[i] * 8) / bpp;
+			if (max_pix_clock >= pix_clock) {
+				*dp_lanes = lane_num;
+				*dp_rate = link_rates[i];
+				return 0;
+			}
+		}
 	}
 
-	return drm_dp_max_link_rate(dpcd);
+	return -EINVAL;
 }
 
 static u8 amdgpu_atombios_dp_encoder_service(struct amdgpu_device *adev,
@@ -422,6 +390,7 @@ void amdgpu_atombios_dp_set_link_config(struct drm_connector *connector,
 {
 	struct amdgpu_connector *amdgpu_connector = to_amdgpu_connector(connector);
 	struct amdgpu_connector_atom_dig *dig_connector;
+	int ret;
 
 	if (!amdgpu_connector->con_priv)
 		return;
@@ -429,10 +398,14 @@ void amdgpu_atombios_dp_set_link_config(struct drm_connector *connector,
 
 	if ((dig_connector->dp_sink_type == CONNECTOR_OBJECT_ID_DISPLAYPORT) ||
 	    (dig_connector->dp_sink_type == CONNECTOR_OBJECT_ID_eDP)) {
-		dig_connector->dp_clock =
-			amdgpu_atombios_dp_get_dp_link_clock(connector, dig_connector->dpcd, mode->clock);
-		dig_connector->dp_lane_count =
-			amdgpu_atombios_dp_get_dp_lane_number(connector, dig_connector->dpcd, mode->clock);
+		ret = amdgpu_atombios_dp_get_dp_link_config(connector, dig_connector->dpcd,
+							    mode->clock,
+							    &dig_connector->dp_lane_count,
+							    &dig_connector->dp_clock);
+		if (ret) {
+			dig_connector->dp_clock = 0;
+			dig_connector->dp_lane_count = 0;
+		}
 	}
 }
 
@@ -441,14 +414,17 @@ int amdgpu_atombios_dp_mode_valid_helper(struct drm_connector *connector,
 {
 	struct amdgpu_connector *amdgpu_connector = to_amdgpu_connector(connector);
 	struct amdgpu_connector_atom_dig *dig_connector;
-	int dp_clock;
+	unsigned dp_lanes, dp_clock;
+	int ret;
 
 	if (!amdgpu_connector->con_priv)
 		return MODE_CLOCK_HIGH;
 	dig_connector = amdgpu_connector->con_priv;
 
-	dp_clock =
-		amdgpu_atombios_dp_get_dp_link_clock(connector, dig_connector->dpcd, mode->clock);
+	ret = amdgpu_atombios_dp_get_dp_link_config(connector, dig_connector->dpcd,
						    mode->clock, &dp_lanes, &dp_clock);
+	if (ret)
+		return MODE_CLOCK_HIGH;
 
 	if ((dp_clock == 540000) &&
 	    (!amdgpu_connector_is_dp12_capable(connector)))

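The rewritten DP helper merges lane-count and link-rate selection into one nested search: it walks lane counts (1, 2, 4) and the three standard link rates in ascending order and takes the first combination whose bandwidth, lanes * rate * 8 / bpp, covers the requested pixel clock, so the lowest sufficient configuration wins and impossible modes now fail with -EINVAL instead of silently returning the maximum rate. A compilable sketch of the same search (rates in kHz as in the driver; everything else is illustrative):

#include <stdio.h>

#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

static int pick_dp_link(unsigned pix_clock, unsigned bpp,
			unsigned max_lanes, unsigned max_rate,
			unsigned *lanes, unsigned *rate)
{
	static const unsigned link_rates[] = { 162000, 270000, 540000 };
	unsigned lane_num, i;

	for (lane_num = 1; lane_num <= max_lanes; lane_num <<= 1)
		for (i = 0; i < ARRAY_SIZE(link_rates) &&
			    link_rates[i] <= max_rate; i++)
			if ((lane_num * link_rates[i] * 8) / bpp >= pix_clock) {
				*lanes = lane_num;
				*rate = link_rates[i];
				return 0;
			}
	return -1; /* mode exceeds the sink's capabilities */
}

int main(void)
{
	unsigned lanes, rate;

	/* 1920x1080@60 is roughly a 148500 kHz pixel clock at 24 bpp */
	if (pick_dp_link(148500, 24, 4, 540000, &lanes, &rate) == 0)
		printf("%u lane(s) at %u kHz\n", lanes, rate); /* 1 lane, 540000 */
	return 0;
}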
+ 12 - 2
drivers/gpu/drm/amd/amdgpu/ci_dpm.c

@@ -1395,7 +1395,6 @@ static void ci_thermal_stop_thermal_controller(struct amdgpu_device *adev)
 		ci_fan_ctrl_set_default_mode(adev);
 }
 
-#if 0
 static int ci_read_smc_soft_register(struct amdgpu_device *adev,
 				     u16 reg_offset, u32 *value)
 {
@@ -1405,7 +1404,6 @@ static int ci_read_smc_soft_register(struct amdgpu_device *adev,
 				      pi->soft_regs_start + reg_offset,
 				      value, pi->sram_end);
 }
-#endif
 
 static int ci_write_smc_soft_register(struct amdgpu_device *adev,
 				      u16 reg_offset, u32 value)
@@ -6084,11 +6082,23 @@ ci_dpm_debugfs_print_current_performance_level(struct amdgpu_device *adev,
 	struct amdgpu_ps *rps = &pi->current_rps;
 	u32 sclk = ci_get_average_sclk_freq(adev);
 	u32 mclk = ci_get_average_mclk_freq(adev);
+	u32 activity_percent = 50;
+	int ret;
+
+	ret = ci_read_smc_soft_register(adev, offsetof(SMU7_SoftRegisters, AverageGraphicsA),
+					&activity_percent);
+
+	if (ret == 0) {
+		activity_percent += 0x80;
+		activity_percent >>= 8;
+		activity_percent = activity_percent > 100 ? 100 : activity_percent;
+	}
 
 	seq_printf(m, "uvd %sabled\n", pi->uvd_enabled ? "en" : "dis");
 	seq_printf(m, "vce %sabled\n", rps->vce_active ? "en" : "dis");
 	seq_printf(m, "power level avg    sclk: %u mclk: %u\n",
 		   sclk, mclk);
+	seq_printf(m, "GPU load: %u %%\n", activity_percent);
 }
 
 static void ci_dpm_print_power_state(struct amdgpu_device *adev,

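The new debugfs output reads AverageGraphicsA back from the SMC and converts it with (raw + 0x80) >> 8, i.e. it treats the value as a fraction scaled by 256, rounds to the nearest percent, clamps at 100, and falls back to a flat 50 when the read fails. The conversion in isolation (plain C; the sample values are made up):

#include <stdio.h>
#include <stdint.h>

/* convert a 256-scaled activity value to a rounded, clamped percent */
static uint32_t activity_to_percent(uint32_t raw)
{
	uint32_t percent = (raw + 0x80) >> 8;	/* add half an LSB, then truncate */

	return percent > 100 ? 100 : percent;
}

int main(void)
{
	printf("%u\n", activity_to_percent(0x2A80));	/* 0x2A80/256 = 42.5 -> 43 */
	printf("%u\n", activity_to_percent(0xFFFF));	/* saturates at 100 */
	return 0;
}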
+ 50 - 17
drivers/gpu/drm/amd/amdgpu/cik.c

@@ -32,6 +32,7 @@
 #include "amdgpu_vce.h"
 #include "cikd.h"
 #include "atom.h"
+#include "amd_pcie.h"
 
 #include "cik.h"
 #include "gmc_v7_0.h"
@@ -65,6 +66,7 @@
 #include "oss/oss_2_0_sh_mask.h"
 
 #include "amdgpu_amdkfd.h"
+#include "amdgpu_powerplay.h"
 
 /*
  * Indirect registers accessor
@@ -929,6 +931,37 @@ static bool cik_read_disabled_bios(struct amdgpu_device *adev)
 	return r;
 }
 
+static bool cik_read_bios_from_rom(struct amdgpu_device *adev,
+				   u8 *bios, u32 length_bytes)
+{
+	u32 *dw_ptr;
+	unsigned long flags;
+	u32 i, length_dw;
+
+	if (bios == NULL)
+		return false;
+	if (length_bytes == 0)
+		return false;
+	/* APU vbios image is part of sbios image */
+	if (adev->flags & AMD_IS_APU)
+		return false;
+
+	dw_ptr = (u32 *)bios;
+	length_dw = ALIGN(length_bytes, 4) / 4;
+	/* take the smc lock since we are using the smc index */
+	spin_lock_irqsave(&adev->smc_idx_lock, flags);
+	/* set rom index to 0 */
+	WREG32(mmSMC_IND_INDEX_0, ixROM_INDEX);
+	WREG32(mmSMC_IND_DATA_0, 0);
+	/* set index to data for continuous read */
+	WREG32(mmSMC_IND_INDEX_0, ixROM_DATA);
+	for (i = 0; i < length_dw; i++)
+		dw_ptr[i] = RREG32(mmSMC_IND_DATA_0);
+	spin_unlock_irqrestore(&adev->smc_idx_lock, flags);
+
+	return true;
+}
+
 static struct amdgpu_allowed_register_entry cik_allowed_read_registers[] = {
 	{mmGRBM_STATUS, false},
 	{mmGB_ADDR_CONFIG, false},
@@ -1563,8 +1596,8 @@ static void cik_pcie_gen3_enable(struct amdgpu_device *adev)
 {
 	struct pci_dev *root = adev->pdev->bus->self;
 	int bridge_pos, gpu_pos;
-	u32 speed_cntl, mask, current_data_rate;
-	int ret, i;
+	u32 speed_cntl, current_data_rate;
+	int i;
 	u16 tmp16;
 
 	if (pci_is_root_bus(adev->pdev->bus))
@@ -1576,23 +1609,20 @@ static void cik_pcie_gen3_enable(struct amdgpu_device *adev)
 	if (adev->flags & AMD_IS_APU)
 		return;
 
-	ret = drm_pcie_get_speed_cap_mask(adev->ddev, &mask);
-	if (ret != 0)
-		return;
-
-	if (!(mask & (DRM_PCIE_SPEED_50 | DRM_PCIE_SPEED_80)))
+	if (!(adev->pm.pcie_gen_mask & (CAIL_PCIE_LINK_SPEED_SUPPORT_GEN2 |
+					CAIL_PCIE_LINK_SPEED_SUPPORT_GEN3)))
 		return;
 
 	speed_cntl = RREG32_PCIE(ixPCIE_LC_SPEED_CNTL);
 	current_data_rate = (speed_cntl & PCIE_LC_SPEED_CNTL__LC_CURRENT_DATA_RATE_MASK) >>
 		PCIE_LC_SPEED_CNTL__LC_CURRENT_DATA_RATE__SHIFT;
-	if (mask & DRM_PCIE_SPEED_80) {
+	if (adev->pm.pcie_gen_mask & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN3) {
 		if (current_data_rate == 2) {
 			DRM_INFO("PCIE gen 3 link speeds already enabled\n");
 			return;
 		}
 		DRM_INFO("enabling PCIE gen 3 link speeds, disable with amdgpu.pcie_gen2=0\n");
-	} else if (mask & DRM_PCIE_SPEED_50) {
+	} else if (adev->pm.pcie_gen_mask & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN2) {
 		if (current_data_rate == 1) {
 			DRM_INFO("PCIE gen 2 link speeds already enabled\n");
 			return;
@@ -1608,7 +1638,7 @@ static void cik_pcie_gen3_enable(struct amdgpu_device *adev)
 	if (!gpu_pos)
 		return;
 
-	if (mask & DRM_PCIE_SPEED_80) {
+	if (adev->pm.pcie_gen_mask & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN3) {
 		/* re-try equalization if gen3 is not already enabled */
 		if (current_data_rate != 2) {
 			u16 bridge_cfg, gpu_cfg;
@@ -1703,9 +1733,9 @@ static void cik_pcie_gen3_enable(struct amdgpu_device *adev)
 
 	pci_read_config_word(adev->pdev, gpu_pos + PCI_EXP_LNKCTL2, &tmp16);
 	tmp16 &= ~0xf;
-	if (mask & DRM_PCIE_SPEED_80)
+	if (adev->pm.pcie_gen_mask & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN3)
 		tmp16 |= 3; /* gen3 */
-	else if (mask & DRM_PCIE_SPEED_50)
+	else if (adev->pm.pcie_gen_mask & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN2)
 		tmp16 |= 2; /* gen2 */
 	else
 		tmp16 |= 1; /* gen1 */
@@ -1922,7 +1952,7 @@ static const struct amdgpu_ip_block_version bonaire_ip_blocks[] =
 		.major = 7,
 		.minor = 0,
 		.rev = 0,
-		.funcs = &ci_dpm_ip_funcs,
+		.funcs = &amdgpu_pp_ip_funcs,
 	},
 	{
 		.type = AMD_IP_BLOCK_TYPE_DCE,
@@ -1990,7 +2020,7 @@ static const struct amdgpu_ip_block_version hawaii_ip_blocks[] =
 		.major = 7,
 		.minor = 0,
 		.rev = 0,
-		.funcs = &ci_dpm_ip_funcs,
+		.funcs = &amdgpu_pp_ip_funcs,
 	},
 	{
 		.type = AMD_IP_BLOCK_TYPE_DCE,
@@ -2058,7 +2088,7 @@ static const struct amdgpu_ip_block_version kabini_ip_blocks[] =
 		.major = 7,
 		.minor = 0,
 		.rev = 0,
-		.funcs = &kv_dpm_ip_funcs,
+		.funcs = &amdgpu_pp_ip_funcs,
 	},
 	{
 		.type = AMD_IP_BLOCK_TYPE_DCE,
@@ -2126,7 +2156,7 @@ static const struct amdgpu_ip_block_version mullins_ip_blocks[] =
 		.major = 7,
 		.minor = 0,
 		.rev = 0,
-		.funcs = &kv_dpm_ip_funcs,
+		.funcs = &amdgpu_pp_ip_funcs,
 	},
 	{
 		.type = AMD_IP_BLOCK_TYPE_DCE,
@@ -2194,7 +2224,7 @@ static const struct amdgpu_ip_block_version kaveri_ip_blocks[] =
 		.major = 7,
 		.minor = 0,
 		.rev = 0,
-		.funcs = &kv_dpm_ip_funcs,
+		.funcs = &amdgpu_pp_ip_funcs,
 	},
 	{
 		.type = AMD_IP_BLOCK_TYPE_DCE,
@@ -2267,6 +2297,7 @@ int cik_set_ip_blocks(struct amdgpu_device *adev)
 static const struct amdgpu_asic_funcs cik_asic_funcs =
 {
 	.read_disabled_bios = &cik_read_disabled_bios,
+	.read_bios_from_rom = &cik_read_bios_from_rom,
 	.read_register = &cik_read_register,
 	.reset = &cik_asic_reset,
 	.set_vga_state = &cik_vga_set_state,
@@ -2417,6 +2448,8 @@ static int cik_common_early_init(void *handle)
 		return -EINVAL;
 	}
 
+	amdgpu_get_pcie_info(adev);
+
 	return 0;
 }
 

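cik_read_bios_from_rom is a textbook index/data register access: one write selects ROM_INDEX, the next selects ROM_DATA, and every subsequent read of the data register returns the next dword while the hardware advances the index, which is also why the whole loop has to hold the SMC index lock. A userspace model of that pattern against a fake register file (entirely hypothetical, for illustration only):

#include <stdio.h>
#include <stdint.h>

/* fake "hardware": a small ROM plus an auto-incrementing cursor */
static const uint32_t rom[4] = { 0xaa55aa55, 0x11111111, 0x22222222, 0x33333333 };
static unsigned cursor;

static void wreg_index(unsigned value) { cursor = value; }

static uint32_t rreg_data(void)
{
	return rom[cursor++];	/* each read advances the index */
}

/* mirror of the driver loop: copy length_bytes of ROM, dword at a time */
static void read_rom(uint8_t *dst, unsigned length_bytes)
{
	unsigned i, length_dw = (length_bytes + 3) / 4;
	uint32_t *dw_ptr = (uint32_t *)dst;

	wreg_index(0);		/* start at ROM offset 0 */
	for (i = 0; i < length_dw; i++)
		dw_ptr[i] = rreg_data();
}

int main(void)
{
	uint32_t buf[4];

	read_rom((uint8_t *)buf, sizeof(buf));
	printf("first dword: 0x%08x\n", buf[0]);
	return 0;
}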
+ 6 - 0
drivers/gpu/drm/amd/amdgpu/cik_ih.c

@@ -274,6 +274,11 @@ static void cik_ih_set_rptr(struct amdgpu_device *adev)
 static int cik_ih_early_init(void *handle)
 {
 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+	int ret;
+
+	ret = amdgpu_irq_add_domain(adev);
+	if (ret)
+		return ret;
 
 	cik_ih_set_interrupt_funcs(adev);
 
@@ -300,6 +305,7 @@ static int cik_ih_sw_fini(void *handle)
 
 	amdgpu_irq_fini(adev);
 	amdgpu_ih_ring_fini(adev);
+	amdgpu_irq_remove_domain(adev);
 
 	return 0;
 }

+ 272 - 1
drivers/gpu/drm/amd/amdgpu/cz_dpm.c

@@ -1078,6 +1078,37 @@ static uint32_t cz_get_eclk_level(struct amdgpu_device *adev,
 	return i;
 }
 
+static uint32_t cz_get_uvd_level(struct amdgpu_device *adev,
+				 uint32_t clock, uint16_t msg)
+{
+	int i = 0;
+	struct amdgpu_uvd_clock_voltage_dependency_table *table =
+		&adev->pm.dpm.dyn_state.uvd_clock_voltage_dependency_table;
+
+	switch (msg) {
+	case PPSMC_MSG_SetUvdSoftMin:
+	case PPSMC_MSG_SetUvdHardMin:
+		for (i = 0; i < table->count; i++)
+			if (clock <= table->entries[i].vclk)
+				break;
+		if (i == table->count)
+			i = table->count - 1;
+		break;
+	case PPSMC_MSG_SetUvdSoftMax:
+	case PPSMC_MSG_SetUvdHardMax:
+		for (i = table->count - 1; i >= 0; i--)
+			if (clock >= table->entries[i].vclk)
+				break;
+		if (i < 0)
+			i = 0;
+		break;
+	default:
+		break;
+	}
+
+	return i;
+}
+
 static int cz_program_bootup_state(struct amdgpu_device *adev)
 {
 	struct cz_power_info *pi = cz_get_pi(adev);
@@ -1739,6 +1770,200 @@ static int cz_dpm_unforce_dpm_levels(struct amdgpu_device *adev)
 	return 0;
 }
 
+static int cz_dpm_uvd_force_highest(struct amdgpu_device *adev)
+{
+	struct cz_power_info *pi = cz_get_pi(adev);
+	int ret = 0;
+
+	if (pi->uvd_dpm.soft_min_clk != pi->uvd_dpm.soft_max_clk) {
+		pi->uvd_dpm.soft_min_clk =
+			pi->uvd_dpm.soft_max_clk;
+		ret = cz_send_msg_to_smc_with_parameter(adev,
+				PPSMC_MSG_SetUvdSoftMin,
+				cz_get_uvd_level(adev,
+					pi->uvd_dpm.soft_min_clk,
+					PPSMC_MSG_SetUvdSoftMin));
+		if (ret)
+			return ret;
+	}
+
+	return ret;
+}
+
+static int cz_dpm_uvd_force_lowest(struct amdgpu_device *adev)
+{
+	struct cz_power_info *pi = cz_get_pi(adev);
+	int ret = 0;
+
+	if (pi->uvd_dpm.soft_max_clk != pi->uvd_dpm.soft_min_clk) {
+		pi->uvd_dpm.soft_max_clk = pi->uvd_dpm.soft_min_clk;
+		ret = cz_send_msg_to_smc_with_parameter(adev,
+				PPSMC_MSG_SetUvdSoftMax,
+				cz_get_uvd_level(adev,
+					pi->uvd_dpm.soft_max_clk,
+					PPSMC_MSG_SetUvdSoftMax));
+		if (ret)
+			return ret;
+	}
+
+	return ret;
+}
+
+static uint32_t cz_dpm_get_max_uvd_level(struct amdgpu_device *adev)
+{
+	struct cz_power_info *pi = cz_get_pi(adev);
+
+	if (!pi->max_uvd_level) {
+		cz_send_msg_to_smc(adev, PPSMC_MSG_GetMaxUvdLevel);
+		pi->max_uvd_level = cz_get_argument(adev) + 1;
+	}
+
+	if (pi->max_uvd_level > CZ_MAX_HARDWARE_POWERLEVELS) {
+		DRM_ERROR("Invalid max uvd level!\n");
+		return -EINVAL;
+	}
+
+	return pi->max_uvd_level;
+}
+
+static int cz_dpm_unforce_uvd_dpm_levels(struct amdgpu_device *adev)
+{
+	struct cz_power_info *pi = cz_get_pi(adev);
+	struct amdgpu_uvd_clock_voltage_dependency_table *dep_table =
+		&adev->pm.dpm.dyn_state.uvd_clock_voltage_dependency_table;
+	uint32_t level = 0;
+	int ret = 0;
+
+	pi->uvd_dpm.soft_min_clk = dep_table->entries[0].vclk;
+	level = cz_dpm_get_max_uvd_level(adev) - 1;
+	if (level < dep_table->count)
+		pi->uvd_dpm.soft_max_clk = dep_table->entries[level].vclk;
+	else
+		pi->uvd_dpm.soft_max_clk =
+			dep_table->entries[dep_table->count - 1].vclk;
+
+	/* get min/max uvd soft value
+	 * notify SMU to execute */
+	ret = cz_send_msg_to_smc_with_parameter(adev,
+				PPSMC_MSG_SetUvdSoftMin,
+				cz_get_uvd_level(adev,
+					pi->uvd_dpm.soft_min_clk,
+					PPSMC_MSG_SetUvdSoftMin));
+	if (ret)
+		return ret;
+
+	ret = cz_send_msg_to_smc_with_parameter(adev,
+				PPSMC_MSG_SetUvdSoftMax,
+				cz_get_uvd_level(adev,
+					pi->uvd_dpm.soft_max_clk,
+					PPSMC_MSG_SetUvdSoftMax));
+	if (ret)
+		return ret;
+
+	DRM_DEBUG("DPM uvd unforce state min=%d, max=%d.\n",
+		  pi->uvd_dpm.soft_min_clk,
+		  pi->uvd_dpm.soft_max_clk);
+
+	return 0;
+}
+
+static int cz_dpm_vce_force_highest(struct amdgpu_device *adev)
+{
+	struct cz_power_info *pi = cz_get_pi(adev);
+	int ret = 0;
+
+	if (pi->vce_dpm.soft_min_clk != pi->vce_dpm.soft_max_clk) {
+		pi->vce_dpm.soft_min_clk =
+			pi->vce_dpm.soft_max_clk;
+		ret = cz_send_msg_to_smc_with_parameter(adev,
+				PPSMC_MSG_SetEclkSoftMin,
+				cz_get_eclk_level(adev,
+					pi->vce_dpm.soft_min_clk,
+					PPSMC_MSG_SetEclkSoftMin));
+		if (ret)
+			return ret;
+	}
+
+	return ret;
+}
+
+static int cz_dpm_vce_force_lowest(struct amdgpu_device *adev)
+{
+	struct cz_power_info *pi = cz_get_pi(adev);
+	int ret = 0;
+
+	if (pi->vce_dpm.soft_max_clk != pi->vce_dpm.soft_min_clk) {
+		pi->vce_dpm.soft_max_clk = pi->vce_dpm.soft_min_clk;
+		ret = cz_send_msg_to_smc_with_parameter(adev,
+				PPSMC_MSG_SetEclkSoftMax,
+				cz_get_eclk_level(adev,
+					pi->vce_dpm.soft_max_clk,
+					PPSMC_MSG_SetEclkSoftMax));
+		if (ret)
+			return ret;
+	}
+
+	return ret;
+}
+
+static uint32_t cz_dpm_get_max_vce_level(struct amdgpu_device *adev)
+{
+	struct cz_power_info *pi = cz_get_pi(adev);
+
+	if (!pi->max_vce_level) {
+		cz_send_msg_to_smc(adev, PPSMC_MSG_GetMaxEclkLevel);
+		pi->max_vce_level = cz_get_argument(adev) + 1;
+	}
+
+	if (pi->max_vce_level > CZ_MAX_HARDWARE_POWERLEVELS) {
+		DRM_ERROR("Invalid max vce level!\n");
+		return -EINVAL;
+	}
+
+	return pi->max_vce_level;
+}
+
+static int cz_dpm_unforce_vce_dpm_levels(struct amdgpu_device *adev)
+{
+	struct cz_power_info *pi = cz_get_pi(adev);
+	struct amdgpu_vce_clock_voltage_dependency_table *dep_table =
+		&adev->pm.dpm.dyn_state.vce_clock_voltage_dependency_table;
+	uint32_t level = 0;
+	int ret = 0;
+
+	pi->vce_dpm.soft_min_clk = dep_table->entries[0].ecclk;
+	level = cz_dpm_get_max_vce_level(adev) - 1;
+	if (level < dep_table->count)
+		pi->vce_dpm.soft_max_clk = dep_table->entries[level].ecclk;
+	else
+		pi->vce_dpm.soft_max_clk =
+			dep_table->entries[dep_table->count - 1].ecclk;
+
+	/* get min/max vce soft value
+	 * notify SMU to execute */
+	ret = cz_send_msg_to_smc_with_parameter(adev,
+				PPSMC_MSG_SetEclkSoftMin,
+				cz_get_eclk_level(adev,
+					pi->vce_dpm.soft_min_clk,
+					PPSMC_MSG_SetEclkSoftMin));
+	if (ret)
+		return ret;
+
+	ret = cz_send_msg_to_smc_with_parameter(adev,
+				PPSMC_MSG_SetEclkSoftMax,
+				cz_get_eclk_level(adev,
+					pi->vce_dpm.soft_max_clk,
+					PPSMC_MSG_SetEclkSoftMax));
+	if (ret)
+		return ret;
+
+	DRM_DEBUG("DPM vce unforce state min=%d, max=%d.\n",
+		  pi->vce_dpm.soft_min_clk,
+		  pi->vce_dpm.soft_max_clk);
+
+	return 0;
+}
+
 static int cz_dpm_force_dpm_level(struct amdgpu_device *adev,
 				  enum amdgpu_dpm_forced_level level)
 {
@@ -1746,23 +1971,68 @@ static int cz_dpm_force_dpm_level(struct amdgpu_device *adev,
 
 	switch (level) {
 	case AMDGPU_DPM_FORCED_LEVEL_HIGH:
+		/* sclk */
 		ret = cz_dpm_unforce_dpm_levels(adev);
 		if (ret)
 			return ret;
 		ret = cz_dpm_force_highest(adev);
+		if (ret)
+			return ret;
+
+		/* uvd */
+		ret = cz_dpm_unforce_uvd_dpm_levels(adev);
+		if (ret)
+			return ret;
+		ret = cz_dpm_uvd_force_highest(adev);
+		if (ret)
+			return ret;
+
+		/* vce */
+		ret = cz_dpm_unforce_vce_dpm_levels(adev);
+		if (ret)
+			return ret;
+		ret = cz_dpm_vce_force_highest(adev);
 		if (ret)
 			return ret;
 		break;
 	case AMDGPU_DPM_FORCED_LEVEL_LOW:
+		/* sclk */
 		ret = cz_dpm_unforce_dpm_levels(adev);
 		if (ret)
 			return ret;
 		ret = cz_dpm_force_lowest(adev);
+		if (ret)
+			return ret;
+
+		/* uvd */
+		ret = cz_dpm_unforce_uvd_dpm_levels(adev);
+		if (ret)
+			return ret;
+		ret = cz_dpm_uvd_force_lowest(adev);
+		if (ret)
+			return ret;
+
+		/* vce */
+		ret = cz_dpm_unforce_vce_dpm_levels(adev);
+		if (ret)
+			return ret;
+		ret = cz_dpm_vce_force_lowest(adev);
 		if (ret)
 			return ret;
 		break;
 	case AMDGPU_DPM_FORCED_LEVEL_AUTO:
+		/* sclk */
 		ret = cz_dpm_unforce_dpm_levels(adev);
+		if (ret)
+			return ret;
+
+		/* uvd */
+		ret = cz_dpm_unforce_uvd_dpm_levels(adev);
+		if (ret)
+			return ret;
+
+		/* vce */
+		ret = cz_dpm_unforce_vce_dpm_levels(adev);
 		if (ret)
 			return ret;
 		break;
@@ -1905,7 +2175,8 @@ static int cz_update_vce_dpm(struct amdgpu_device *adev)
 		pi->vce_dpm.hard_min_clk = table->entries[table->count-1].ecclk;
 
 	} else { /* non-stable p-state cases. without vce.Arbiter.EcclkHardMin */
-		pi->vce_dpm.hard_min_clk = table->entries[0].ecclk;
+		/* leave it as set by user */
+		/*pi->vce_dpm.hard_min_clk = table->entries[0].ecclk;*/
 	}
 
 	cz_send_msg_to_smc_with_parameter(adev,

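cz_get_uvd_level maps a clock target onto a DPM table index: for the *Min messages it scans upward to the first entry at or above the requested clock, for the *Max messages downward to the last entry at or below it, clamping to the table edges in both directions. The same search in standalone form (the clock table below is invented):

#include <stdio.h>

enum level_bound { BOUND_MIN, BOUND_MAX };

/* pick a level index for `clock` out of an ascending clock table */
static int get_level(const unsigned *tbl, int count, unsigned clock,
		     enum level_bound bound)
{
	int i;

	if (bound == BOUND_MIN) {
		for (i = 0; i < count; i++)	/* first entry >= clock */
			if (clock <= tbl[i])
				break;
		return i == count ? count - 1 : i;
	}
	for (i = count - 1; i >= 0; i--)	/* last entry <= clock */
		if (clock >= tbl[i])
			break;
	return i < 0 ? 0 : i;
}

int main(void)
{
	static const unsigned vclk[] = { 300000, 400000, 533000, 667000 };

	printf("%d\n", get_level(vclk, 4, 450000, BOUND_MIN)); /* 2 */
	printf("%d\n", get_level(vclk, 4, 450000, BOUND_MAX)); /* 1 */
	printf("%d\n", get_level(vclk, 4, 900000, BOUND_MIN)); /* clamped to 3 */
	return 0;
}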
+ 2 - 0
drivers/gpu/drm/amd/amdgpu/cz_dpm.h

@@ -183,6 +183,8 @@ struct cz_power_info {
 	uint32_t voltage_drop_threshold;
 	uint32_t gfx_pg_threshold;
 	uint32_t max_sclk_level;
+	uint32_t max_uvd_level;
+	uint32_t max_vce_level;
 	/* flags */
 	bool didt_enabled;
 	bool video_start;

+ 7 - 0
drivers/gpu/drm/amd/amdgpu/cz_ih.c

@@ -253,8 +253,14 @@ static void cz_ih_set_rptr(struct amdgpu_device *adev)
 static int cz_ih_early_init(void *handle)
 {
 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+	int ret;
+
+	ret = amdgpu_irq_add_domain(adev);
+	if (ret)
+		return ret;
 
 	cz_ih_set_interrupt_funcs(adev);
+
 	return 0;
 }
 
@@ -278,6 +284,7 @@ static int cz_ih_sw_fini(void *handle)
 
 	amdgpu_irq_fini(adev);
 	amdgpu_ih_ring_fini(adev);
+	amdgpu_irq_remove_domain(adev);
 
 	return 0;
 }

+ 7 - 7
drivers/gpu/drm/amd/amdgpu/dce_v10_0.c

@@ -3729,7 +3729,7 @@ static void dce_v10_0_encoder_add(struct amdgpu_device *adev,
 	case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DAC1:
 	case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DAC2:
 		drm_encoder_init(dev, encoder, &dce_v10_0_encoder_funcs,
-				 DRM_MODE_ENCODER_DAC);
+				 DRM_MODE_ENCODER_DAC, NULL);
 		drm_encoder_helper_add(encoder, &dce_v10_0_dac_helper_funcs);
 		break;
 	case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DVO1:
@@ -3740,15 +3740,15 @@ static void dce_v10_0_encoder_add(struct amdgpu_device *adev,
 		if (amdgpu_encoder->devices & (ATOM_DEVICE_LCD_SUPPORT)) {
 			amdgpu_encoder->rmx_type = RMX_FULL;
 			drm_encoder_init(dev, encoder, &dce_v10_0_encoder_funcs,
-					 DRM_MODE_ENCODER_LVDS);
+					 DRM_MODE_ENCODER_LVDS, NULL);
 			amdgpu_encoder->enc_priv = amdgpu_atombios_encoder_get_lcd_info(amdgpu_encoder);
 		} else if (amdgpu_encoder->devices & (ATOM_DEVICE_CRT_SUPPORT)) {
 			drm_encoder_init(dev, encoder, &dce_v10_0_encoder_funcs,
-					 DRM_MODE_ENCODER_DAC);
+					 DRM_MODE_ENCODER_DAC, NULL);
 			amdgpu_encoder->enc_priv = amdgpu_atombios_encoder_get_dig_info(amdgpu_encoder);
 		} else {
 			drm_encoder_init(dev, encoder, &dce_v10_0_encoder_funcs,
-					 DRM_MODE_ENCODER_TMDS);
+					 DRM_MODE_ENCODER_TMDS, NULL);
 			amdgpu_encoder->enc_priv = amdgpu_atombios_encoder_get_dig_info(amdgpu_encoder);
 		}
 		drm_encoder_helper_add(encoder, &dce_v10_0_dig_helper_funcs);
@@ -3766,13 +3766,13 @@ static void dce_v10_0_encoder_add(struct amdgpu_device *adev,
 		amdgpu_encoder->is_ext_encoder = true;
 		if (amdgpu_encoder->devices & (ATOM_DEVICE_LCD_SUPPORT))
 			drm_encoder_init(dev, encoder, &dce_v10_0_encoder_funcs,
-					 DRM_MODE_ENCODER_LVDS);
+					 DRM_MODE_ENCODER_LVDS, NULL);
 		else if (amdgpu_encoder->devices & (ATOM_DEVICE_CRT_SUPPORT))
 			drm_encoder_init(dev, encoder, &dce_v10_0_encoder_funcs,
-					 DRM_MODE_ENCODER_DAC);
+					 DRM_MODE_ENCODER_DAC, NULL);
 		else
 			drm_encoder_init(dev, encoder, &dce_v10_0_encoder_funcs,
-					 DRM_MODE_ENCODER_TMDS);
+					 DRM_MODE_ENCODER_TMDS, NULL);
 		drm_encoder_helper_add(encoder, &dce_v10_0_ext_helper_funcs);
 		break;
 	}

+ 16 - 14
drivers/gpu/drm/amd/amdgpu/dce_v11_0.c

@@ -211,9 +211,9 @@ static bool dce_v11_0_is_counter_moving(struct amdgpu_device *adev, int crtc)
  */
 static void dce_v11_0_vblank_wait(struct amdgpu_device *adev, int crtc)
 {
-	unsigned i = 0;
+	unsigned i = 100;
 
-	if (crtc >= adev->mode_info.num_crtc)
+	if (crtc < 0 || crtc >= adev->mode_info.num_crtc)
 		return;
 
 	if (!(RREG32(mmCRTC_CONTROL + crtc_offsets[crtc]) & CRTC_CONTROL__CRTC_MASTER_EN_MASK))
@@ -223,14 +223,16 @@ static void dce_v11_0_vblank_wait(struct amdgpu_device *adev, int crtc)
 	 * wait for another frame.
 	 */
 	while (dce_v11_0_is_in_vblank(adev, crtc)) {
-		if (i++ % 100 == 0) {
+		if (i++ == 100) {
+			i = 0;
 			if (!dce_v11_0_is_counter_moving(adev, crtc))
 				break;
 		}
 	}
 
 	while (!dce_v11_0_is_in_vblank(adev, crtc)) {
-		if (i++ % 100 == 0) {
+		if (i++ == 100) {
+			i = 0;
 			if (!dce_v11_0_is_counter_moving(adev, crtc))
 				break;
 		}
@@ -239,7 +241,7 @@ static void dce_v11_0_vblank_wait(struct amdgpu_device *adev, int crtc)
 
 static u32 dce_v11_0_vblank_get_counter(struct amdgpu_device *adev, int crtc)
 {
-	if (crtc >= adev->mode_info.num_crtc)
+	if (crtc < 0 || crtc >= adev->mode_info.num_crtc)
 		return 0;
 	else
 		return RREG32(mmCRTC_STATUS_FRAME_COUNT + crtc_offsets[crtc]);
@@ -3384,7 +3386,7 @@ static void dce_v11_0_crtc_vblank_int_ack(struct amdgpu_device *adev,
 {
 	u32 tmp;
 
-	if (crtc >= adev->mode_info.num_crtc) {
+	if (crtc < 0 || crtc >= adev->mode_info.num_crtc) {
 		DRM_DEBUG("invalid crtc %d\n", crtc);
 		return;
 	}
@@ -3399,7 +3401,7 @@ static void dce_v11_0_crtc_vline_int_ack(struct amdgpu_device *adev,
 {
 	u32 tmp;
 
-	if (crtc >= adev->mode_info.num_crtc) {
+	if (crtc < 0 || crtc >= adev->mode_info.num_crtc) {
 		DRM_DEBUG("invalid crtc %d\n", crtc);
 		return;
 	}
@@ -3722,7 +3724,7 @@ static void dce_v11_0_encoder_add(struct amdgpu_device *adev,
 	case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DAC1:
 	case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DAC2:
 		drm_encoder_init(dev, encoder, &dce_v11_0_encoder_funcs,
-				 DRM_MODE_ENCODER_DAC);
+				 DRM_MODE_ENCODER_DAC, NULL);
 		drm_encoder_helper_add(encoder, &dce_v11_0_dac_helper_funcs);
 		break;
 	case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DVO1:
@@ -3733,15 +3735,15 @@ static void dce_v11_0_encoder_add(struct amdgpu_device *adev,
 		if (amdgpu_encoder->devices & (ATOM_DEVICE_LCD_SUPPORT)) {
 			amdgpu_encoder->rmx_type = RMX_FULL;
 			drm_encoder_init(dev, encoder, &dce_v11_0_encoder_funcs,
-					 DRM_MODE_ENCODER_LVDS);
+					 DRM_MODE_ENCODER_LVDS, NULL);
 			amdgpu_encoder->enc_priv = amdgpu_atombios_encoder_get_lcd_info(amdgpu_encoder);
 		} else if (amdgpu_encoder->devices & (ATOM_DEVICE_CRT_SUPPORT)) {
 			drm_encoder_init(dev, encoder, &dce_v11_0_encoder_funcs,
-					 DRM_MODE_ENCODER_DAC);
+					 DRM_MODE_ENCODER_DAC, NULL);
 			amdgpu_encoder->enc_priv = amdgpu_atombios_encoder_get_dig_info(amdgpu_encoder);
 		} else {
 			drm_encoder_init(dev, encoder, &dce_v11_0_encoder_funcs,
-					 DRM_MODE_ENCODER_TMDS);
+					 DRM_MODE_ENCODER_TMDS, NULL);
 			amdgpu_encoder->enc_priv = amdgpu_atombios_encoder_get_dig_info(amdgpu_encoder);
 		}
 		drm_encoder_helper_add(encoder, &dce_v11_0_dig_helper_funcs);
@@ -3759,13 +3761,13 @@ static void dce_v11_0_encoder_add(struct amdgpu_device *adev,
 		amdgpu_encoder->is_ext_encoder = true;
 		if (amdgpu_encoder->devices & (ATOM_DEVICE_LCD_SUPPORT))
 			drm_encoder_init(dev, encoder, &dce_v11_0_encoder_funcs,
-					 DRM_MODE_ENCODER_LVDS);
+					 DRM_MODE_ENCODER_LVDS, NULL);
 		else if (amdgpu_encoder->devices & (ATOM_DEVICE_CRT_SUPPORT))
 			drm_encoder_init(dev, encoder, &dce_v11_0_encoder_funcs,
-					 DRM_MODE_ENCODER_DAC);
+					 DRM_MODE_ENCODER_DAC, NULL);
 		else
 			drm_encoder_init(dev, encoder, &dce_v11_0_encoder_funcs,
-					 DRM_MODE_ENCODER_TMDS);
+					 DRM_MODE_ENCODER_TMDS, NULL);
 		drm_encoder_helper_add(encoder, &dce_v11_0_ext_helper_funcs);
 		break;
 	}

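The vblank-wait change replaces the i++ % 100 cadence with an explicit counter that starts at 100 and is reset after each check, so both polling loops verify on their first pass, and then every hundredth, that the frame counter is still moving and bail out on a dead pipe. A toy version of that bounded-poll pattern (all of the hardware here is faked):

#include <stdio.h>

static int ticks;			/* stand-in for a hardware frame counter */

static int in_vblank(void)      { return ticks++ < 5; }  /* fake status bit */
static int counter_moving(void) { return 0; }             /* fake: pipe is dead */

int main(void)
{
	unsigned i = 100;	/* force a liveness check on the first pass */

	while (in_vblank()) {
		if (i++ == 100) {
			i = 0;
			if (!counter_moving())
				break;	/* bail out instead of spinning forever */
		}
	}
	printf("gave up after %d status read(s)\n", ticks);
	return 0;
}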
+ 7 - 7
drivers/gpu/drm/amd/amdgpu/dce_v8_0.c

@@ -3659,7 +3659,7 @@ static void dce_v8_0_encoder_add(struct amdgpu_device *adev,
 	case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DAC1:
 	case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DAC2:
 		drm_encoder_init(dev, encoder, &dce_v8_0_encoder_funcs,
-				 DRM_MODE_ENCODER_DAC);
+				 DRM_MODE_ENCODER_DAC, NULL);
 		drm_encoder_helper_add(encoder, &dce_v8_0_dac_helper_funcs);
 		break;
 	case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DVO1:
@@ -3670,15 +3670,15 @@ static void dce_v8_0_encoder_add(struct amdgpu_device *adev,
 		if (amdgpu_encoder->devices & (ATOM_DEVICE_LCD_SUPPORT)) {
 			amdgpu_encoder->rmx_type = RMX_FULL;
 			drm_encoder_init(dev, encoder, &dce_v8_0_encoder_funcs,
-					 DRM_MODE_ENCODER_LVDS);
+					 DRM_MODE_ENCODER_LVDS, NULL);
 			amdgpu_encoder->enc_priv = amdgpu_atombios_encoder_get_lcd_info(amdgpu_encoder);
 		} else if (amdgpu_encoder->devices & (ATOM_DEVICE_CRT_SUPPORT)) {
 			drm_encoder_init(dev, encoder, &dce_v8_0_encoder_funcs,
-					 DRM_MODE_ENCODER_DAC);
+					 DRM_MODE_ENCODER_DAC, NULL);
 			amdgpu_encoder->enc_priv = amdgpu_atombios_encoder_get_dig_info(amdgpu_encoder);
 		} else {
 			drm_encoder_init(dev, encoder, &dce_v8_0_encoder_funcs,
-					 DRM_MODE_ENCODER_TMDS);
+					 DRM_MODE_ENCODER_TMDS, NULL);
 			amdgpu_encoder->enc_priv = amdgpu_atombios_encoder_get_dig_info(amdgpu_encoder);
 		}
 		drm_encoder_helper_add(encoder, &dce_v8_0_dig_helper_funcs);
@@ -3696,13 +3696,13 @@ static void dce_v8_0_encoder_add(struct amdgpu_device *adev,
 		amdgpu_encoder->is_ext_encoder = true;
 		if (amdgpu_encoder->devices & (ATOM_DEVICE_LCD_SUPPORT))
 			drm_encoder_init(dev, encoder, &dce_v8_0_encoder_funcs,
-					 DRM_MODE_ENCODER_LVDS);
+					 DRM_MODE_ENCODER_LVDS, NULL);
 		else if (amdgpu_encoder->devices & (ATOM_DEVICE_CRT_SUPPORT))
 			drm_encoder_init(dev, encoder, &dce_v8_0_encoder_funcs,
-					 DRM_MODE_ENCODER_DAC);
+					 DRM_MODE_ENCODER_DAC, NULL);
 		else
 			drm_encoder_init(dev, encoder, &dce_v8_0_encoder_funcs,
-					 DRM_MODE_ENCODER_TMDS);
+					 DRM_MODE_ENCODER_TMDS, NULL);
 		drm_encoder_helper_add(encoder, &dce_v8_0_ext_helper_funcs);
 		break;
 	}

+ 1 - 1
drivers/gpu/drm/amd/amdgpu/fiji_dpm.c

@@ -24,7 +24,7 @@
 #include <linux/firmware.h>
 #include "drmP.h"
 #include "amdgpu.h"
-#include "fiji_smumgr.h"
+#include "fiji_smum.h"
 
 MODULE_FIRMWARE("amdgpu/fiji_smc.bin");
 

+ 0 - 182
drivers/gpu/drm/amd/amdgpu/fiji_ppsmc.h

@@ -1,182 +0,0 @@
-/*
- * Copyright 2014 Advanced Micro Devices, Inc.
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice shall be included in
- * all copies or substantial portions of the Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
- * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
- * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
- * OTHER DEALINGS IN THE SOFTWARE.
- *
- */
-
-#ifndef FIJI_PP_SMC_H
-#define FIJI_PP_SMC_H
-
-#pragma pack(push, 1)
-
-#define PPSMC_SWSTATE_FLAG_DC                           0x01
-#define PPSMC_SWSTATE_FLAG_UVD                          0x02
-#define PPSMC_SWSTATE_FLAG_VCE                          0x04
-
-#define PPSMC_THERMAL_PROTECT_TYPE_INTERNAL             0x00
-#define PPSMC_THERMAL_PROTECT_TYPE_EXTERNAL             0x01
-#define PPSMC_THERMAL_PROTECT_TYPE_NONE                 0xff
-
-#define PPSMC_SYSTEMFLAG_GPIO_DC                        0x01
-#define PPSMC_SYSTEMFLAG_STEPVDDC                       0x02
-#define PPSMC_SYSTEMFLAG_GDDR5                          0x04
-
-#define PPSMC_SYSTEMFLAG_DISABLE_BABYSTEP               0x08
-
-#define PPSMC_SYSTEMFLAG_REGULATOR_HOT                  0x10
-#define PPSMC_SYSTEMFLAG_REGULATOR_HOT_ANALOG           0x20
-
-#define PPSMC_EXTRAFLAGS_AC2DC_ACTION_MASK              0x07
-#define PPSMC_EXTRAFLAGS_AC2DC_DONT_WAIT_FOR_VBLANK     0x08
-
-#define PPSMC_EXTRAFLAGS_AC2DC_ACTION_GOTODPMLOWSTATE   0x00
-#define PPSMC_EXTRAFLAGS_AC2DC_ACTION_GOTOINITIALSTATE  0x01
-
-#define PPSMC_DPM2FLAGS_TDPCLMP                         0x01
-#define PPSMC_DPM2FLAGS_PWRSHFT                         0x02
-#define PPSMC_DPM2FLAGS_OCP                             0x04
-
-#define PPSMC_DISPLAY_WATERMARK_LOW                     0
-#define PPSMC_DISPLAY_WATERMARK_HIGH                    1
-
-#define PPSMC_STATEFLAG_AUTO_PULSE_SKIP    		0x01
-#define PPSMC_STATEFLAG_POWERBOOST         		0x02
-#define PPSMC_STATEFLAG_PSKIP_ON_TDP_FAULT 		0x04
-#define PPSMC_STATEFLAG_POWERSHIFT         		0x08
-#define PPSMC_STATEFLAG_SLOW_READ_MARGIN   		0x10
-#define PPSMC_STATEFLAG_DEEPSLEEP_THROTTLE 		0x20
-#define PPSMC_STATEFLAG_DEEPSLEEP_BYPASS   		0x40
-
-#define FDO_MODE_HARDWARE 0
-#define FDO_MODE_PIECE_WISE_LINEAR 1
-
-enum FAN_CONTROL {
-	FAN_CONTROL_FUZZY,
-	FAN_CONTROL_TABLE
-};
-
-//Gemini Modes
-#define PPSMC_GeminiModeNone   0  //Single GPU board
-#define PPSMC_GeminiModeMaster 1  //Master GPU on a Gemini board
-#define PPSMC_GeminiModeSlave  2  //Slave GPU on a Gemini board
-
-#define PPSMC_Result_OK             			((uint16_t)0x01)
-#define PPSMC_Result_NoMore         			((uint16_t)0x02)
-#define PPSMC_Result_NotNow         			((uint16_t)0x03)
-#define PPSMC_Result_Failed         			((uint16_t)0xFF)
-#define PPSMC_Result_UnknownCmd     			((uint16_t)0xFE)
-#define PPSMC_Result_UnknownVT      			((uint16_t)0xFD)
-
-typedef uint16_t PPSMC_Result;
-
-#define PPSMC_isERROR(x) ((uint16_t)0x80 & (x))
-
-#define PPSMC_MSG_Halt                      		((uint16_t)0x10)
-#define PPSMC_MSG_Resume                    		((uint16_t)0x11)
-#define PPSMC_MSG_EnableDPMLevel            		((uint16_t)0x12)
-#define PPSMC_MSG_ZeroLevelsDisabled        		((uint16_t)0x13)
-#define PPSMC_MSG_OneLevelsDisabled         		((uint16_t)0x14)
-#define PPSMC_MSG_TwoLevelsDisabled         		((uint16_t)0x15)
-#define PPSMC_MSG_EnableThermalInterrupt    		((uint16_t)0x16)
-#define PPSMC_MSG_RunningOnAC               		((uint16_t)0x17)
-#define PPSMC_MSG_LevelUp                   		((uint16_t)0x18)
-#define PPSMC_MSG_LevelDown                 		((uint16_t)0x19)
-#define PPSMC_MSG_ResetDPMCounters          		((uint16_t)0x1a)
-#define PPSMC_MSG_SwitchToSwState           		((uint16_t)0x20)
-#define PPSMC_MSG_SwitchToSwStateLast       		((uint16_t)0x3f)
-#define PPSMC_MSG_SwitchToInitialState      		((uint16_t)0x40)
-#define PPSMC_MSG_NoForcedLevel             		((uint16_t)0x41)
-#define PPSMC_MSG_ForceHigh                 		((uint16_t)0x42)
-#define PPSMC_MSG_ForceMediumOrHigh         		((uint16_t)0x43)
-#define PPSMC_MSG_SwitchToMinimumPower      		((uint16_t)0x51)
-#define PPSMC_MSG_ResumeFromMinimumPower    		((uint16_t)0x52)
-#define PPSMC_MSG_EnableCac                 		((uint16_t)0x53)
-#define PPSMC_MSG_DisableCac                		((uint16_t)0x54)
-#define PPSMC_DPMStateHistoryStart          		((uint16_t)0x55)
-#define PPSMC_DPMStateHistoryStop           		((uint16_t)0x56)
-#define PPSMC_CACHistoryStart               		((uint16_t)0x57)
-#define PPSMC_CACHistoryStop                		((uint16_t)0x58)
-#define PPSMC_TDPClampingActive             		((uint16_t)0x59)
-#define PPSMC_TDPClampingInactive           		((uint16_t)0x5A)
-#define PPSMC_StartFanControl               		((uint16_t)0x5B)
-#define PPSMC_StopFanControl                		((uint16_t)0x5C)
-#define PPSMC_NoDisplay                     		((uint16_t)0x5D)
-#define PPSMC_HasDisplay                    		((uint16_t)0x5E)
-#define PPSMC_MSG_UVDPowerOFF               		((uint16_t)0x60)
-#define PPSMC_MSG_UVDPowerON                		((uint16_t)0x61)
-#define PPSMC_MSG_EnableULV                 		((uint16_t)0x62)
-#define PPSMC_MSG_DisableULV                		((uint16_t)0x63)
-#define PPSMC_MSG_EnterULV                  		((uint16_t)0x64)
-#define PPSMC_MSG_ExitULV                   		((uint16_t)0x65)
-#define PPSMC_PowerShiftActive              		((uint16_t)0x6A)
-#define PPSMC_PowerShiftInactive            		((uint16_t)0x6B)
-#define PPSMC_OCPActive                     		((uint16_t)0x6C)
-#define PPSMC_OCPInactive                   		((uint16_t)0x6D)
-#define PPSMC_CACLongTermAvgEnable          		((uint16_t)0x6E)
-#define PPSMC_CACLongTermAvgDisable         		((uint16_t)0x6F)
-#define PPSMC_MSG_InferredStateSweep_Start  		((uint16_t)0x70)
-#define PPSMC_MSG_InferredStateSweep_Stop   		((uint16_t)0x71)
-#define PPSMC_MSG_SwitchToLowestInfState    		((uint16_t)0x72)
-#define PPSMC_MSG_SwitchToNonInfState       		((uint16_t)0x73)
-#define PPSMC_MSG_AllStateSweep_Start       		((uint16_t)0x74)
-#define PPSMC_MSG_AllStateSweep_Stop        		((uint16_t)0x75)
-#define PPSMC_MSG_SwitchNextLowerInfState   		((uint16_t)0x76)
-#define PPSMC_MSG_SwitchNextHigherInfState  		((uint16_t)0x77)
-#define PPSMC_MSG_MclkRetrainingTest        		((uint16_t)0x78)
-#define PPSMC_MSG_ForceTDPClamping          		((uint16_t)0x79)
-#define PPSMC_MSG_CollectCAC_PowerCorreln   		((uint16_t)0x7A)
-#define PPSMC_MSG_CollectCAC_WeightCalib    		((uint16_t)0x7B)
-#define PPSMC_MSG_CollectCAC_SQonly         		((uint16_t)0x7C)
-#define PPSMC_MSG_CollectCAC_TemperaturePwr 		((uint16_t)0x7D)
-#define PPSMC_MSG_ExtremitiesTest_Start     		((uint16_t)0x7E)
-#define PPSMC_MSG_ExtremitiesTest_Stop      		((uint16_t)0x7F)
-#define PPSMC_FlushDataCache                		((uint16_t)0x80)
-#define PPSMC_FlushInstrCache               		((uint16_t)0x81)
-#define PPSMC_MSG_SetEnabledLevels          		((uint16_t)0x82)
-#define PPSMC_MSG_SetForcedLevels           		((uint16_t)0x83)
-#define PPSMC_MSG_ResetToDefaults           		((uint16_t)0x84)
-#define PPSMC_MSG_SetForcedLevelsAndJump    		((uint16_t)0x85)
-#define PPSMC_MSG_SetCACHistoryMode         		((uint16_t)0x86)
-#define PPSMC_MSG_EnableDTE                 		((uint16_t)0x87)
-#define PPSMC_MSG_DisableDTE                		((uint16_t)0x88)
-#define PPSMC_MSG_SmcSpaceSetAddress        		((uint16_t)0x89)
-#define PPSMC_MSG_SmcSpaceWriteDWordInc     		((uint16_t)0x8A)
-#define PPSMC_MSG_SmcSpaceWriteWordInc      		((uint16_t)0x8B)
-#define PPSMC_MSG_SmcSpaceWriteByteInc      		((uint16_t)0x8C)
-
-#define PPSMC_MSG_BREAK                     		((uint16_t)0xF8)
-
-#define PPSMC_MSG_Test                      		((uint16_t)0x100)
-#define PPSMC_MSG_DRV_DRAM_ADDR_HI            		((uint16_t)0x250)
-#define PPSMC_MSG_DRV_DRAM_ADDR_LO            		((uint16_t)0x251)
-#define PPSMC_MSG_SMU_DRAM_ADDR_HI            		((uint16_t)0x252)
-#define PPSMC_MSG_SMU_DRAM_ADDR_LO            		((uint16_t)0x253)
-#define PPSMC_MSG_LoadUcodes                  		((uint16_t)0x254)
-
-typedef uint16_t PPSMC_Msg;
-
-#define PPSMC_EVENT_STATUS_THERMAL          		0x00000001
-#define PPSMC_EVENT_STATUS_REGULATORHOT     		0x00000002
-#define PPSMC_EVENT_STATUS_DC               		0x00000004
-#define PPSMC_EVENT_STATUS_GPIO17           		0x00000008
-
-#pragma pack(pop)
-
-#endif
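
Note: the PPSMC_MSG_* values removed above are 16-bit message IDs that the driver posts to the SMU's message mailbox and then polls for an acknowledgement. A minimal sketch of that handshake, assuming the VI-family mmSMC_MESSAGE_0/mmSMC_RESP_0 register names; the polling loop is illustrative, not the driver's exact wait helper:

/* Hedged sketch: post a PPSMC message ID and poll the response register.
 * mmSMC_MESSAGE_0/mmSMC_RESP_0 are assumed VI-family register names and
 * the timeout loop is illustrative, not the driver's actual helper. */
static int example_send_msg_to_smc(struct amdgpu_device *adev, PPSMC_Msg msg)
{
	int i;

	WREG32(mmSMC_RESP_0, 0);	/* clear the previous response */
	WREG32(mmSMC_MESSAGE_0, msg);	/* post the 16-bit message ID */

	for (i = 0; i < adev->usec_timeout; i++) {
		if (RREG32(mmSMC_RESP_0) != 0)	/* SMU acknowledged */
			return 0;
		udelay(1);
	}
	return -ETIMEDOUT;
}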

+ 1 - 1
drivers/gpu/drm/amd/amdgpu/fiji_smc.c

@@ -25,7 +25,7 @@
 #include "drmP.h"
 #include "drmP.h"
 #include "amdgpu.h"
 #include "amdgpu.h"
 #include "fiji_ppsmc.h"
 #include "fiji_ppsmc.h"
-#include "fiji_smumgr.h"
+#include "fiji_smum.h"
 #include "smu_ucode_xfer_vi.h"
 #include "smu_ucode_xfer_vi.h"
 #include "amdgpu_ucode.h"
 #include "amdgpu_ucode.h"
 
 

+ 0 - 0
drivers/gpu/drm/amd/amdgpu/fiji_smumgr.h → drivers/gpu/drm/amd/amdgpu/fiji_smum.h
+ 1539 - 1448
drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c

@@ -66,6 +66,27 @@
 #define MACRO_TILE_ASPECT(x)				((x) << GB_MACROTILE_MODE0__MACRO_TILE_ASPECT__SHIFT)
 #define NUM_BANKS(x)					((x) << GB_MACROTILE_MODE0__NUM_BANKS__SHIFT)
 
+#define RLC_CGTT_MGCG_OVERRIDE__CPF_MASK            0x00000001L
+#define RLC_CGTT_MGCG_OVERRIDE__RLC_MASK            0x00000002L
+#define RLC_CGTT_MGCG_OVERRIDE__MGCG_MASK           0x00000004L
+#define RLC_CGTT_MGCG_OVERRIDE__CGCG_MASK           0x00000008L
+#define RLC_CGTT_MGCG_OVERRIDE__CGLS_MASK           0x00000010L
+#define RLC_CGTT_MGCG_OVERRIDE__GRBM_MASK           0x00000020L
+
+/* BPM SERDES CMD */
+#define SET_BPM_SERDES_CMD    1
+#define CLE_BPM_SERDES_CMD    0
+
+/* BPM Register Address*/
+enum {
+	BPM_REG_CGLS_EN = 0,        /* Enable/Disable CGLS */
+	BPM_REG_CGLS_ON,            /* ON/OFF CGLS: shall be controlled by RLC FW */
+	BPM_REG_CGCG_OVERRIDE,      /* Set/Clear CGCG Override */
+	BPM_REG_MGCG_OVERRIDE,      /* Set/Clear MGCG Override */
+	BPM_REG_FGCG_OVERRIDE,      /* Set/Clear FGCG Override */
+	BPM_REG_FGCG_MAX
+};
+
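
Note: the RLC_CGTT_MGCG_OVERRIDE__*_MASK bits added above are individual override bits in one RLC register. A hedged sketch of the read-modify-write pattern such masks are used with; the mmRLC_CGTT_MGCG_OVERRIDE register name and the enable flag are assumptions for illustration:

/* Hedged sketch: toggle one of the override bits defined above. The
 * register name and 'enable' parameter are illustrative assumptions. */
static void example_update_cgcg_override(struct amdgpu_device *adev, bool enable)
{
	u32 orig, data;

	orig = data = RREG32(mmRLC_CGTT_MGCG_OVERRIDE);
	if (enable)
		data &= ~RLC_CGTT_MGCG_OVERRIDE__CGCG_MASK;	/* clear override: CGCG may gate */
	else
		data |= RLC_CGTT_MGCG_OVERRIDE__CGCG_MASK;	/* set override: CGCG forced off */
	if (data != orig)
		WREG32(mmRLC_CGTT_MGCG_OVERRIDE, data);
}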
 MODULE_FIRMWARE("amdgpu/carrizo_ce.bin");
 MODULE_FIRMWARE("amdgpu/carrizo_ce.bin");
 MODULE_FIRMWARE("amdgpu/carrizo_pfp.bin");
 MODULE_FIRMWARE("amdgpu/carrizo_pfp.bin");
 MODULE_FIRMWARE("amdgpu/carrizo_me.bin");
 MODULE_FIRMWARE("amdgpu/carrizo_me.bin");
@@ -964,6 +985,322 @@ static int gfx_v8_0_mec_init(struct amdgpu_device *adev)
 	return 0;
 }
 
+static const u32 vgpr_init_compute_shader[] =
+{
+	0x7e000209, 0x7e020208,
+	0x7e040207, 0x7e060206,
+	0x7e080205, 0x7e0a0204,
+	0x7e0c0203, 0x7e0e0202,
+	0x7e100201, 0x7e120200,
+	0x7e140209, 0x7e160208,
+	0x7e180207, 0x7e1a0206,
+	0x7e1c0205, 0x7e1e0204,
+	0x7e200203, 0x7e220202,
+	0x7e240201, 0x7e260200,
+	0x7e280209, 0x7e2a0208,
+	0x7e2c0207, 0x7e2e0206,
+	0x7e300205, 0x7e320204,
+	0x7e340203, 0x7e360202,
+	0x7e380201, 0x7e3a0200,
+	0x7e3c0209, 0x7e3e0208,
+	0x7e400207, 0x7e420206,
+	0x7e440205, 0x7e460204,
+	0x7e480203, 0x7e4a0202,
+	0x7e4c0201, 0x7e4e0200,
+	0x7e500209, 0x7e520208,
+	0x7e540207, 0x7e560206,
+	0x7e580205, 0x7e5a0204,
+	0x7e5c0203, 0x7e5e0202,
+	0x7e600201, 0x7e620200,
+	0x7e640209, 0x7e660208,
+	0x7e680207, 0x7e6a0206,
+	0x7e6c0205, 0x7e6e0204,
+	0x7e700203, 0x7e720202,
+	0x7e740201, 0x7e760200,
+	0x7e780209, 0x7e7a0208,
+	0x7e7c0207, 0x7e7e0206,
+	0xbf8a0000, 0xbf810000,
+};
+
+static const u32 sgpr_init_compute_shader[] =
+{
+	0xbe8a0100, 0xbe8c0102,
+	0xbe8e0104, 0xbe900106,
+	0xbe920108, 0xbe940100,
+	0xbe960102, 0xbe980104,
+	0xbe9a0106, 0xbe9c0108,
+	0xbe9e0100, 0xbea00102,
+	0xbea20104, 0xbea40106,
+	0xbea60108, 0xbea80100,
+	0xbeaa0102, 0xbeac0104,
+	0xbeae0106, 0xbeb00108,
+	0xbeb20100, 0xbeb40102,
+	0xbeb60104, 0xbeb80106,
+	0xbeba0108, 0xbebc0100,
+	0xbebe0102, 0xbec00104,
+	0xbec20106, 0xbec40108,
+	0xbec60100, 0xbec80102,
+	0xbee60004, 0xbee70005,
+	0xbeea0006, 0xbeeb0007,
+	0xbee80008, 0xbee90009,
+	0xbefc0000, 0xbf8a0000,
+	0xbf810000, 0x00000000,
+};
+
+static const u32 vgpr_init_regs[] =
+{
+	mmCOMPUTE_STATIC_THREAD_MGMT_SE0, 0xffffffff,
+	mmCOMPUTE_RESOURCE_LIMITS, 0,
+	mmCOMPUTE_NUM_THREAD_X, 256*4,
+	mmCOMPUTE_NUM_THREAD_Y, 1,
+	mmCOMPUTE_NUM_THREAD_Z, 1,
+	mmCOMPUTE_PGM_RSRC2, 20,
+	mmCOMPUTE_USER_DATA_0, 0xedcedc00,
+	mmCOMPUTE_USER_DATA_1, 0xedcedc01,
+	mmCOMPUTE_USER_DATA_2, 0xedcedc02,
+	mmCOMPUTE_USER_DATA_3, 0xedcedc03,
+	mmCOMPUTE_USER_DATA_4, 0xedcedc04,
+	mmCOMPUTE_USER_DATA_5, 0xedcedc05,
+	mmCOMPUTE_USER_DATA_6, 0xedcedc06,
+	mmCOMPUTE_USER_DATA_7, 0xedcedc07,
+	mmCOMPUTE_USER_DATA_8, 0xedcedc08,
+	mmCOMPUTE_USER_DATA_9, 0xedcedc09,
+};
+
+static const u32 sgpr1_init_regs[] =
+{
+	mmCOMPUTE_STATIC_THREAD_MGMT_SE0, 0x0f,
+	mmCOMPUTE_RESOURCE_LIMITS, 0x1000000,
+	mmCOMPUTE_NUM_THREAD_X, 256*5,
+	mmCOMPUTE_NUM_THREAD_Y, 1,
+	mmCOMPUTE_NUM_THREAD_Z, 1,
+	mmCOMPUTE_PGM_RSRC2, 20,
+	mmCOMPUTE_USER_DATA_0, 0xedcedc00,
+	mmCOMPUTE_USER_DATA_1, 0xedcedc01,
+	mmCOMPUTE_USER_DATA_2, 0xedcedc02,
+	mmCOMPUTE_USER_DATA_3, 0xedcedc03,
+	mmCOMPUTE_USER_DATA_4, 0xedcedc04,
+	mmCOMPUTE_USER_DATA_5, 0xedcedc05,
+	mmCOMPUTE_USER_DATA_6, 0xedcedc06,
+	mmCOMPUTE_USER_DATA_7, 0xedcedc07,
+	mmCOMPUTE_USER_DATA_8, 0xedcedc08,
+	mmCOMPUTE_USER_DATA_9, 0xedcedc09,
+};
+
+static const u32 sgpr2_init_regs[] =
+{
+	mmCOMPUTE_STATIC_THREAD_MGMT_SE0, 0xf0,
+	mmCOMPUTE_RESOURCE_LIMITS, 0x1000000,
+	mmCOMPUTE_NUM_THREAD_X, 256*5,
+	mmCOMPUTE_NUM_THREAD_Y, 1,
+	mmCOMPUTE_NUM_THREAD_Z, 1,
+	mmCOMPUTE_PGM_RSRC2, 20,
+	mmCOMPUTE_USER_DATA_0, 0xedcedc00,
+	mmCOMPUTE_USER_DATA_1, 0xedcedc01,
+	mmCOMPUTE_USER_DATA_2, 0xedcedc02,
+	mmCOMPUTE_USER_DATA_3, 0xedcedc03,
+	mmCOMPUTE_USER_DATA_4, 0xedcedc04,
+	mmCOMPUTE_USER_DATA_5, 0xedcedc05,
+	mmCOMPUTE_USER_DATA_6, 0xedcedc06,
+	mmCOMPUTE_USER_DATA_7, 0xedcedc07,
+	mmCOMPUTE_USER_DATA_8, 0xedcedc08,
+	mmCOMPUTE_USER_DATA_9, 0xedcedc09,
+};
+
+static const u32 sec_ded_counter_registers[] =
+{
+	mmCPC_EDC_ATC_CNT,
+	mmCPC_EDC_SCRATCH_CNT,
+	mmCPC_EDC_UCODE_CNT,
+	mmCPF_EDC_ATC_CNT,
+	mmCPF_EDC_ROQ_CNT,
+	mmCPF_EDC_TAG_CNT,
+	mmCPG_EDC_ATC_CNT,
+	mmCPG_EDC_DMA_CNT,
+	mmCPG_EDC_TAG_CNT,
+	mmDC_EDC_CSINVOC_CNT,
+	mmDC_EDC_RESTORE_CNT,
+	mmDC_EDC_STATE_CNT,
+	mmGDS_EDC_CNT,
+	mmGDS_EDC_GRBM_CNT,
+	mmGDS_EDC_OA_DED,
+	mmSPI_EDC_CNT,
+	mmSQC_ATC_EDC_GATCL1_CNT,
+	mmSQC_EDC_CNT,
+	mmSQ_EDC_DED_CNT,
+	mmSQ_EDC_INFO,
+	mmSQ_EDC_SEC_CNT,
+	mmTCC_EDC_CNT,
+	mmTCP_ATC_EDC_GATCL1_CNT,
+	mmTCP_EDC_CNT,
+	mmTD_EDC_CNT
+};
+
+static int gfx_v8_0_do_edc_gpr_workarounds(struct amdgpu_device *adev)
+{
+	struct amdgpu_ring *ring = &adev->gfx.compute_ring[0];
+	struct amdgpu_ib ib;
+	struct fence *f = NULL;
+	int r, i;
+	u32 tmp;
+	unsigned total_size, vgpr_offset, sgpr_offset;
+	u64 gpu_addr;
+
+	/* only supported on CZ */
+	if (adev->asic_type != CHIP_CARRIZO)
+		return 0;
+
+	/* bail if the compute ring is not ready */
+	if (!ring->ready)
+		return 0;
+
+	tmp = RREG32(mmGB_EDC_MODE);
+	WREG32(mmGB_EDC_MODE, 0);
+
+	total_size =
+		(((ARRAY_SIZE(vgpr_init_regs) / 2) * 3) + 4 + 5 + 2) * 4;
+	total_size +=
+		(((ARRAY_SIZE(sgpr1_init_regs) / 2) * 3) + 4 + 5 + 2) * 4;
+	total_size +=
+		(((ARRAY_SIZE(sgpr2_init_regs) / 2) * 3) + 4 + 5 + 2) * 4;
+	total_size = ALIGN(total_size, 256);
+	vgpr_offset = total_size;
+	total_size += ALIGN(sizeof(vgpr_init_compute_shader), 256);
+	sgpr_offset = total_size;
+	total_size += sizeof(sgpr_init_compute_shader);
+
+	/* allocate an indirect buffer to put the commands in */
+	memset(&ib, 0, sizeof(ib));
+	r = amdgpu_ib_get(ring, NULL, total_size, &ib);
+	if (r) {
+		DRM_ERROR("amdgpu: failed to get ib (%d).\n", r);
+		return r;
+	}
+
+	/* load the compute shaders */
+	for (i = 0; i < ARRAY_SIZE(vgpr_init_compute_shader); i++)
+		ib.ptr[i + (vgpr_offset / 4)] = vgpr_init_compute_shader[i];
+
+	for (i = 0; i < ARRAY_SIZE(sgpr_init_compute_shader); i++)
+		ib.ptr[i + (sgpr_offset / 4)] = sgpr_init_compute_shader[i];
+
+	/* init the ib length to 0 */
+	ib.length_dw = 0;
+
+	/* VGPR */
+	/* write the register state for the compute dispatch */
+	for (i = 0; i < ARRAY_SIZE(vgpr_init_regs); i += 2) {
+		ib.ptr[ib.length_dw++] = PACKET3(PACKET3_SET_SH_REG, 1);
+		ib.ptr[ib.length_dw++] = vgpr_init_regs[i] - PACKET3_SET_SH_REG_START;
+		ib.ptr[ib.length_dw++] = vgpr_init_regs[i + 1];
+	}
+	/* write the shader start address: mmCOMPUTE_PGM_LO, mmCOMPUTE_PGM_HI */
+	gpu_addr = (ib.gpu_addr + (u64)vgpr_offset) >> 8;
+	ib.ptr[ib.length_dw++] = PACKET3(PACKET3_SET_SH_REG, 2);
+	ib.ptr[ib.length_dw++] = mmCOMPUTE_PGM_LO - PACKET3_SET_SH_REG_START;
+	ib.ptr[ib.length_dw++] = lower_32_bits(gpu_addr);
+	ib.ptr[ib.length_dw++] = upper_32_bits(gpu_addr);
+
+	/* write dispatch packet */
+	ib.ptr[ib.length_dw++] = PACKET3(PACKET3_DISPATCH_DIRECT, 3);
+	ib.ptr[ib.length_dw++] = 8; /* x */
+	ib.ptr[ib.length_dw++] = 1; /* y */
+	ib.ptr[ib.length_dw++] = 1; /* z */
+	ib.ptr[ib.length_dw++] =
+		REG_SET_FIELD(0, COMPUTE_DISPATCH_INITIATOR, COMPUTE_SHADER_EN, 1);
+
+	/* write CS partial flush packet */
+	ib.ptr[ib.length_dw++] = PACKET3(PACKET3_EVENT_WRITE, 0);
+	ib.ptr[ib.length_dw++] = EVENT_TYPE(7) | EVENT_INDEX(4);
+
+	/* SGPR1 */
+	/* write the register state for the compute dispatch */
+	for (i = 0; i < ARRAY_SIZE(sgpr1_init_regs); i += 2) {
+		ib.ptr[ib.length_dw++] = PACKET3(PACKET3_SET_SH_REG, 1);
+		ib.ptr[ib.length_dw++] = sgpr1_init_regs[i] - PACKET3_SET_SH_REG_START;
+		ib.ptr[ib.length_dw++] = sgpr1_init_regs[i + 1];
+	}
+	/* write the shader start address: mmCOMPUTE_PGM_LO, mmCOMPUTE_PGM_HI */
+	gpu_addr = (ib.gpu_addr + (u64)sgpr_offset) >> 8;
+	ib.ptr[ib.length_dw++] = PACKET3(PACKET3_SET_SH_REG, 2);
+	ib.ptr[ib.length_dw++] = mmCOMPUTE_PGM_LO - PACKET3_SET_SH_REG_START;
+	ib.ptr[ib.length_dw++] = lower_32_bits(gpu_addr);
+	ib.ptr[ib.length_dw++] = upper_32_bits(gpu_addr);
+
+	/* write dispatch packet */
+	ib.ptr[ib.length_dw++] = PACKET3(PACKET3_DISPATCH_DIRECT, 3);
+	ib.ptr[ib.length_dw++] = 8; /* x */
+	ib.ptr[ib.length_dw++] = 1; /* y */
+	ib.ptr[ib.length_dw++] = 1; /* z */
+	ib.ptr[ib.length_dw++] =
+		REG_SET_FIELD(0, COMPUTE_DISPATCH_INITIATOR, COMPUTE_SHADER_EN, 1);
+
+	/* write CS partial flush packet */
+	ib.ptr[ib.length_dw++] = PACKET3(PACKET3_EVENT_WRITE, 0);
+	ib.ptr[ib.length_dw++] = EVENT_TYPE(7) | EVENT_INDEX(4);
+
+	/* SGPR2 */
+	/* write the register state for the compute dispatch */
+	for (i = 0; i < ARRAY_SIZE(sgpr2_init_regs); i += 2) {
+		ib.ptr[ib.length_dw++] = PACKET3(PACKET3_SET_SH_REG, 1);
+		ib.ptr[ib.length_dw++] = sgpr2_init_regs[i] - PACKET3_SET_SH_REG_START;
+		ib.ptr[ib.length_dw++] = sgpr2_init_regs[i + 1];
+	}
+	/* write the shader start address: mmCOMPUTE_PGM_LO, mmCOMPUTE_PGM_HI */
+	gpu_addr = (ib.gpu_addr + (u64)sgpr_offset) >> 8;
+	ib.ptr[ib.length_dw++] = PACKET3(PACKET3_SET_SH_REG, 2);
+	ib.ptr[ib.length_dw++] = mmCOMPUTE_PGM_LO - PACKET3_SET_SH_REG_START;
+	ib.ptr[ib.length_dw++] = lower_32_bits(gpu_addr);
+	ib.ptr[ib.length_dw++] = upper_32_bits(gpu_addr);
+
+	/* write dispatch packet */
+	ib.ptr[ib.length_dw++] = PACKET3(PACKET3_DISPATCH_DIRECT, 3);
+	ib.ptr[ib.length_dw++] = 8; /* x */
+	ib.ptr[ib.length_dw++] = 1; /* y */
+	ib.ptr[ib.length_dw++] = 1; /* z */
+	ib.ptr[ib.length_dw++] =
+		REG_SET_FIELD(0, COMPUTE_DISPATCH_INITIATOR, COMPUTE_SHADER_EN, 1);
+
+	/* write CS partial flush packet */
+	ib.ptr[ib.length_dw++] = PACKET3(PACKET3_EVENT_WRITE, 0);
+	ib.ptr[ib.length_dw++] = EVENT_TYPE(7) | EVENT_INDEX(4);
+
+	/* schedule the ib on the ring */
+	r = amdgpu_sched_ib_submit_kernel_helper(adev, ring, &ib, 1, NULL,
+						 AMDGPU_FENCE_OWNER_UNDEFINED,
+						 &f);
+	if (r) {
+		DRM_ERROR("amdgpu: ib submit failed (%d).\n", r);
+		goto fail;
+	}
+
+	/* wait for the GPU to finish processing the IB */
+	r = fence_wait(f, false);
+	if (r) {
+		DRM_ERROR("amdgpu: fence wait failed (%d).\n", r);
+		goto fail;
+	}
+
+	tmp = REG_SET_FIELD(tmp, GB_EDC_MODE, DED_MODE, 2);
+	tmp = REG_SET_FIELD(tmp, GB_EDC_MODE, PROP_FED, 1);
+	WREG32(mmGB_EDC_MODE, tmp);
+
+	tmp = RREG32(mmCC_GC_EDC_CONFIG);
+	tmp = REG_SET_FIELD(tmp, CC_GC_EDC_CONFIG, DIS_EDC, 0) | 1;
+	WREG32(mmCC_GC_EDC_CONFIG, tmp);
+
+	/* read back registers to clear the counters */
+	for (i = 0; i < ARRAY_SIZE(sec_ded_counter_registers); i++)
+		RREG32(sec_ded_counter_registers[i]);
+
+fail:
+	fence_put(f);
+	amdgpu_ib_free(adev, &ib);
+
+	return r;
+}
+
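
Note: the total_size arithmetic in gfx_v8_0_do_edc_gpr_workarounds() above counts command-stream dwords per dispatch block: 3 per register/value pair (SET_SH_REG header, register offset, value), 4 for the two-register COMPUTE_PGM_LO/HI write, 5 for DISPATCH_DIRECT, and 2 for the CS-partial-flush EVENT_WRITE, all times 4 bytes per dword. A worked instance for the 16-pair vgpr_init_regs table, illustrative only:

/* Illustrative breakdown of one dispatch block's IB footprint. */
unsigned pairs  = 32 / 2;	/* ARRAY_SIZE(vgpr_init_regs) / 2 = 16 pairs */
unsigned dwords = pairs * 3	/* SET_SH_REG: header + reg offset + value */
		+ 4		/* COMPUTE_PGM_LO/HI: header + offset + 2 dwords */
		+ 5		/* DISPATCH_DIRECT: header + x + y + z + initiator */
		+ 2;		/* EVENT_WRITE: CS partial flush */
unsigned bytes  = dwords * 4;	/* 59 dwords = 236 bytes, then ALIGN(..., 256) */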
 static void gfx_v8_0_gpu_early_init(struct amdgpu_device *adev)
 {
 	u32 gb_addr_config;
@@ -1323,1418 +1660,923 @@ static int gfx_v8_0_sw_fini(void *handle)
 
 static void gfx_v8_0_tiling_mode_table_init(struct amdgpu_device *adev)
 {
-	const u32 num_tile_mode_states = 32;
-	const u32 num_secondary_tile_mode_states = 16;
-	u32 reg_offset, gb_tile_moden, split_equal_to_row_size;
+	uint32_t *modearray, *mod2array;
+	const u32 num_tile_mode_states = ARRAY_SIZE(adev->gfx.config.tile_mode_array);
+	const u32 num_secondary_tile_mode_states = ARRAY_SIZE(adev->gfx.config.macrotile_mode_array);
+	u32 reg_offset;
 
-	switch (adev->gfx.config.mem_row_size_in_kb) {
-	case 1:
-		split_equal_to_row_size = ADDR_SURF_TILE_SPLIT_1KB;
-		break;
-	case 2:
-	default:
-		split_equal_to_row_size = ADDR_SURF_TILE_SPLIT_2KB;
-		break;
-	case 4:
-		split_equal_to_row_size = ADDR_SURF_TILE_SPLIT_4KB;
-		break;
-	}
+	modearray = adev->gfx.config.tile_mode_array;
+	mod2array = adev->gfx.config.macrotile_mode_array;
+
+	for (reg_offset = 0; reg_offset < num_tile_mode_states; reg_offset++)
+		modearray[reg_offset] = 0;
+
+	for (reg_offset = 0; reg_offset < num_secondary_tile_mode_states; reg_offset++)
+		mod2array[reg_offset] = 0;
 
 	switch (adev->asic_type) {
 	case CHIP_TOPAZ:
-		for (reg_offset = 0; reg_offset < num_tile_mode_states; reg_offset++) {
-			switch (reg_offset) {
-			case 0:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P2) |
-						TILE_SPLIT(ADDR_SURF_TILE_SPLIT_64B) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_DEPTH_MICRO_TILING));
-				break;
-			case 1:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P2) |
-						TILE_SPLIT(ADDR_SURF_TILE_SPLIT_128B) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_DEPTH_MICRO_TILING));
-				break;
-			case 2:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P2) |
-						TILE_SPLIT(ADDR_SURF_TILE_SPLIT_256B) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_DEPTH_MICRO_TILING));
-				break;
-			case 3:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P2) |
-						TILE_SPLIT(ADDR_SURF_TILE_SPLIT_512B) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_DEPTH_MICRO_TILING));
-				break;
-			case 4:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P2) |
-						TILE_SPLIT(ADDR_SURF_TILE_SPLIT_2KB) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_DEPTH_MICRO_TILING));
-				break;
-			case 5:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_1D_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P2) |
-						TILE_SPLIT(ADDR_SURF_TILE_SPLIT_2KB) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_DEPTH_MICRO_TILING));
-				break;
-			case 6:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_PRT_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P2) |
-						TILE_SPLIT(ADDR_SURF_TILE_SPLIT_2KB) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_DEPTH_MICRO_TILING));
-				break;
-			case 8:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_LINEAR_ALIGNED) |
-						PIPE_CONFIG(ADDR_SURF_P2));
-				break;
-			case 9:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_1D_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P2) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_DISPLAY_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_2));
-				break;
-			case 10:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P2) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_DISPLAY_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_2));
-				break;
-			case 11:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_PRT_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P2) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_DISPLAY_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_8));
-				break;
-			case 13:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_1D_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P2) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_THIN_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_2));
-				break;
-			case 14:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P2) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_THIN_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_2));
-				break;
-			case 15:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_3D_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P2) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_THIN_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_2));
-				break;
-			case 16:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_PRT_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P2) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_THIN_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_8));
-				break;
-			case 18:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_1D_TILED_THICK) |
-						PIPE_CONFIG(ADDR_SURF_P2) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_THIN_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1));
-				break;
-			case 19:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_1D_TILED_THICK) |
-						PIPE_CONFIG(ADDR_SURF_P2) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_THICK_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1));
-				break;
-			case 20:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_2D_TILED_THICK) |
-						PIPE_CONFIG(ADDR_SURF_P2) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_THICK_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1));
-				break;
-			case 21:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_3D_TILED_THICK) |
-						PIPE_CONFIG(ADDR_SURF_P2) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_THICK_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1));
-				break;
-			case 22:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_PRT_TILED_THICK) |
-						PIPE_CONFIG(ADDR_SURF_P2) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_THICK_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1));
-				break;
-			case 24:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_2D_TILED_THICK) |
-						PIPE_CONFIG(ADDR_SURF_P2) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_THIN_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1));
-				break;
-			case 25:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_2D_TILED_XTHICK) |
-						PIPE_CONFIG(ADDR_SURF_P2) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_THICK_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1));
-				break;
-			case 26:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_3D_TILED_XTHICK) |
-						PIPE_CONFIG(ADDR_SURF_P2) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_THICK_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1));
-				break;
-			case 27:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_1D_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P2) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_ROTATED_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_2));
-				break;
-			case 28:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P2) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_ROTATED_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_2));
-				break;
-			case 29:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_PRT_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P2) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_ROTATED_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_8));
-				break;
-			case 7:
-			case 12:
-			case 17:
-			case 23:
-				/* unused idx */
-				continue;
-			default:
-				gb_tile_moden = 0;
-				break;
-			};
-			adev->gfx.config.tile_mode_array[reg_offset] = gb_tile_moden;
-			WREG32(mmGB_TILE_MODE0 + reg_offset, gb_tile_moden);
-		}
-		for (reg_offset = 0; reg_offset < num_secondary_tile_mode_states; reg_offset++) {
-			switch (reg_offset) {
-			case 0:
-				gb_tile_moden = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_4) |
-						BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_4) |
-						MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2) |
-						NUM_BANKS(ADDR_SURF_8_BANK));
-				break;
-			case 1:
-				gb_tile_moden = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_4) |
-						BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_4) |
-						MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2) |
-						NUM_BANKS(ADDR_SURF_8_BANK));
-				break;
-			case 2:
-				gb_tile_moden = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_2) |
-						BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_4) |
-						MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2) |
-						NUM_BANKS(ADDR_SURF_8_BANK));
-				break;
-			case 3:
-				gb_tile_moden = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
-						BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_4) |
-						MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_4) |
-						NUM_BANKS(ADDR_SURF_8_BANK));
-				break;
-			case 4:
-				gb_tile_moden = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
-						BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_2) |
-						MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2) |
-						NUM_BANKS(ADDR_SURF_8_BANK));
-				break;
-			case 5:
-				gb_tile_moden = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
-						BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_1) |
-						MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2) |
-						NUM_BANKS(ADDR_SURF_8_BANK));
-				break;
-			case 6:
-				gb_tile_moden = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
-						BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_1) |
-						MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2) |
-						NUM_BANKS(ADDR_SURF_8_BANK));
-				break;
-			case 8:
-				gb_tile_moden = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_4) |
-						BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_8) |
-						MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_4) |
-						NUM_BANKS(ADDR_SURF_16_BANK));
-				break;
-			case 9:
-				gb_tile_moden = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_4) |
-						BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_4) |
-						MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_4) |
-						NUM_BANKS(ADDR_SURF_16_BANK));
-				break;
-			case 10:
-				gb_tile_moden = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_2) |
-						BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_4) |
-						MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_4) |
-						NUM_BANKS(ADDR_SURF_16_BANK));
-				break;
-			case 11:
-				gb_tile_moden = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_2) |
-						BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_2) |
-						MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_4) |
-						NUM_BANKS(ADDR_SURF_16_BANK));
-				break;
-			case 12:
-				gb_tile_moden = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
-						BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_2) |
-						MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_4) |
-						NUM_BANKS(ADDR_SURF_16_BANK));
-				break;
-			case 13:
-				gb_tile_moden = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
-						BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_1) |
-						MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_4) |
-						NUM_BANKS(ADDR_SURF_16_BANK));
-				break;
-			case 14:
-				gb_tile_moden = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
-						BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_1) |
-						MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2) |
-						NUM_BANKS(ADDR_SURF_8_BANK));
-				break;
-			case 7:
-				/* unused idx */
-				continue;
-			default:
-				gb_tile_moden = 0;
-				break;
-			};
-			adev->gfx.config.macrotile_mode_array[reg_offset] = gb_tile_moden;
-			WREG32(mmGB_MACROTILE_MODE0 + reg_offset, gb_tile_moden);
-		}
+		modearray[0] = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
+				PIPE_CONFIG(ADDR_SURF_P2) |
+				TILE_SPLIT(ADDR_SURF_TILE_SPLIT_64B) |
+				MICRO_TILE_MODE_NEW(ADDR_SURF_DEPTH_MICRO_TILING));
+		modearray[1] = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
+				PIPE_CONFIG(ADDR_SURF_P2) |
+				TILE_SPLIT(ADDR_SURF_TILE_SPLIT_128B) |
+				MICRO_TILE_MODE_NEW(ADDR_SURF_DEPTH_MICRO_TILING));
+		modearray[2] = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
+				PIPE_CONFIG(ADDR_SURF_P2) |
+				TILE_SPLIT(ADDR_SURF_TILE_SPLIT_256B) |
+				MICRO_TILE_MODE_NEW(ADDR_SURF_DEPTH_MICRO_TILING));
+		modearray[3] = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
+				PIPE_CONFIG(ADDR_SURF_P2) |
+				TILE_SPLIT(ADDR_SURF_TILE_SPLIT_512B) |
+				MICRO_TILE_MODE_NEW(ADDR_SURF_DEPTH_MICRO_TILING));
+		modearray[4] = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
+				PIPE_CONFIG(ADDR_SURF_P2) |
+				TILE_SPLIT(ADDR_SURF_TILE_SPLIT_2KB) |
+				MICRO_TILE_MODE_NEW(ADDR_SURF_DEPTH_MICRO_TILING));
+		modearray[5] = (ARRAY_MODE(ARRAY_1D_TILED_THIN1) |
+				PIPE_CONFIG(ADDR_SURF_P2) |
+				TILE_SPLIT(ADDR_SURF_TILE_SPLIT_2KB) |
+				MICRO_TILE_MODE_NEW(ADDR_SURF_DEPTH_MICRO_TILING));
+		modearray[6] = (ARRAY_MODE(ARRAY_PRT_TILED_THIN1) |
+				PIPE_CONFIG(ADDR_SURF_P2) |
+				TILE_SPLIT(ADDR_SURF_TILE_SPLIT_2KB) |
+				MICRO_TILE_MODE_NEW(ADDR_SURF_DEPTH_MICRO_TILING));
+		modearray[8] = (ARRAY_MODE(ARRAY_LINEAR_ALIGNED) |
+				PIPE_CONFIG(ADDR_SURF_P2));
+		modearray[9] = (ARRAY_MODE(ARRAY_1D_TILED_THIN1) |
+				PIPE_CONFIG(ADDR_SURF_P2) |
+				MICRO_TILE_MODE_NEW(ADDR_SURF_DISPLAY_MICRO_TILING) |
+				SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_2));
+		modearray[10] = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
+				 PIPE_CONFIG(ADDR_SURF_P2) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_DISPLAY_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_2));
+		modearray[11] = (ARRAY_MODE(ARRAY_PRT_TILED_THIN1) |
+				 PIPE_CONFIG(ADDR_SURF_P2) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_DISPLAY_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_8));
+		modearray[13] = (ARRAY_MODE(ARRAY_1D_TILED_THIN1) |
+				 PIPE_CONFIG(ADDR_SURF_P2) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_THIN_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_2));
+		modearray[14] = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
+				 PIPE_CONFIG(ADDR_SURF_P2) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_THIN_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_2));
+		modearray[15] = (ARRAY_MODE(ARRAY_3D_TILED_THIN1) |
+				 PIPE_CONFIG(ADDR_SURF_P2) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_THIN_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_2));
+		modearray[16] = (ARRAY_MODE(ARRAY_PRT_TILED_THIN1) |
+				 PIPE_CONFIG(ADDR_SURF_P2) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_THIN_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_8));
+		modearray[18] = (ARRAY_MODE(ARRAY_1D_TILED_THICK) |
+				 PIPE_CONFIG(ADDR_SURF_P2) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_THIN_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1));
+		modearray[19] = (ARRAY_MODE(ARRAY_1D_TILED_THICK) |
+				 PIPE_CONFIG(ADDR_SURF_P2) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_THICK_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1));
+		modearray[20] = (ARRAY_MODE(ARRAY_2D_TILED_THICK) |
+				 PIPE_CONFIG(ADDR_SURF_P2) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_THICK_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1));
+		modearray[21] = (ARRAY_MODE(ARRAY_3D_TILED_THICK) |
+				 PIPE_CONFIG(ADDR_SURF_P2) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_THICK_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1));
+		modearray[22] = (ARRAY_MODE(ARRAY_PRT_TILED_THICK) |
+				 PIPE_CONFIG(ADDR_SURF_P2) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_THICK_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1));
+		modearray[24] = (ARRAY_MODE(ARRAY_2D_TILED_THICK) |
+				 PIPE_CONFIG(ADDR_SURF_P2) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_THIN_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1));
+		modearray[25] = (ARRAY_MODE(ARRAY_2D_TILED_XTHICK) |
+				 PIPE_CONFIG(ADDR_SURF_P2) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_THICK_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1));
+		modearray[26] = (ARRAY_MODE(ARRAY_3D_TILED_XTHICK) |
+				 PIPE_CONFIG(ADDR_SURF_P2) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_THICK_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1));
+		modearray[27] = (ARRAY_MODE(ARRAY_1D_TILED_THIN1) |
+				 PIPE_CONFIG(ADDR_SURF_P2) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_ROTATED_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_2));
+		modearray[28] = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
+				 PIPE_CONFIG(ADDR_SURF_P2) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_ROTATED_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_2));
+		modearray[29] = (ARRAY_MODE(ARRAY_PRT_TILED_THIN1) |
+				 PIPE_CONFIG(ADDR_SURF_P2) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_ROTATED_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_8));
+
+		mod2array[0] = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_4) |
+				BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_4) |
+				MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2) |
+				NUM_BANKS(ADDR_SURF_8_BANK));
+		mod2array[1] = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_4) |
+				BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_4) |
+				MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2) |
+				NUM_BANKS(ADDR_SURF_8_BANK));
+		mod2array[2] = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_2) |
+				BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_4) |
+				MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2) |
+				NUM_BANKS(ADDR_SURF_8_BANK));
+		mod2array[3] = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+				BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_4) |
+				MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_4) |
+				NUM_BANKS(ADDR_SURF_8_BANK));
+		mod2array[4] = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+				BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_2) |
+				MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2) |
+				NUM_BANKS(ADDR_SURF_8_BANK));
+		mod2array[5] = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+				BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_1) |
+				MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2) |
+				NUM_BANKS(ADDR_SURF_8_BANK));
+		mod2array[6] = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+				BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_1) |
+				MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2) |
+				NUM_BANKS(ADDR_SURF_8_BANK));
+		mod2array[8] = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_4) |
+				BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_8) |
+				MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_4) |
+				NUM_BANKS(ADDR_SURF_16_BANK));
+		mod2array[9] = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_4) |
+				BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_4) |
+				MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_4) |
+				NUM_BANKS(ADDR_SURF_16_BANK));
+		mod2array[10] = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_2) |
+				 BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_4) |
+				 MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_4) |
+				 NUM_BANKS(ADDR_SURF_16_BANK));
+		mod2array[11] = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_2) |
+				 BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_2) |
+				 MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_4) |
+				 NUM_BANKS(ADDR_SURF_16_BANK));
+		mod2array[12] = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+				 BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_2) |
+				 MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_4) |
+				 NUM_BANKS(ADDR_SURF_16_BANK));
+		mod2array[13] = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+				 BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_1) |
+				 MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_4) |
+				 NUM_BANKS(ADDR_SURF_16_BANK));
+		mod2array[14] = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+				 BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_1) |
+				 MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2) |
+				 NUM_BANKS(ADDR_SURF_8_BANK));
+
+		for (reg_offset = 0; reg_offset < num_tile_mode_states; reg_offset++)
+			if (reg_offset != 7 && reg_offset != 12 && reg_offset != 17 &&
+			    reg_offset != 23)
+				WREG32(mmGB_TILE_MODE0 + reg_offset, modearray[reg_offset]);
+
+		for (reg_offset = 0; reg_offset < num_secondary_tile_mode_states; reg_offset++)
+			if (reg_offset != 7)
+				WREG32(mmGB_MACROTILE_MODE0 + reg_offset, mod2array[reg_offset]);
+
+		break;
 	case CHIP_FIJI:
-		for (reg_offset = 0; reg_offset < num_tile_mode_states; reg_offset++) {
-			switch (reg_offset) {
-			case 0:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P16_32x32_16x16) |
-						TILE_SPLIT(ADDR_SURF_TILE_SPLIT_64B) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_DEPTH_MICRO_TILING));
-				break;
-			case 1:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P16_32x32_16x16) |
-						TILE_SPLIT(ADDR_SURF_TILE_SPLIT_128B) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_DEPTH_MICRO_TILING));
-				break;
-			case 2:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P16_32x32_16x16) |
-						TILE_SPLIT(ADDR_SURF_TILE_SPLIT_256B) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_DEPTH_MICRO_TILING));
-				break;
-			case 3:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P16_32x32_16x16) |
-						TILE_SPLIT(ADDR_SURF_TILE_SPLIT_512B) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_DEPTH_MICRO_TILING));
-				break;
-			case 4:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P16_32x32_16x16) |
-						TILE_SPLIT(ADDR_SURF_TILE_SPLIT_2KB) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_DEPTH_MICRO_TILING));
-				break;
-			case 5:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_1D_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P16_32x32_16x16) |
-						TILE_SPLIT(ADDR_SURF_TILE_SPLIT_2KB) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_DEPTH_MICRO_TILING));
-				break;
-			case 6:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_PRT_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P16_32x32_16x16) |
-						TILE_SPLIT(ADDR_SURF_TILE_SPLIT_2KB) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_DEPTH_MICRO_TILING));
-				break;
-			case 7:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_PRT_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P4_16x16) |
-						TILE_SPLIT(ADDR_SURF_TILE_SPLIT_2KB) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_DEPTH_MICRO_TILING));
-				break;
-			case 8:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_LINEAR_ALIGNED) |
-						PIPE_CONFIG(ADDR_SURF_P16_32x32_16x16));
-				break;
-			case 9:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_1D_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P16_32x32_16x16) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_DISPLAY_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_2));
-				break;
-			case 10:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P16_32x32_16x16) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_DISPLAY_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_2));
-				break;
-			case 11:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_PRT_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P16_32x32_16x16) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_DISPLAY_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_8));
-				break;
-			case 12:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_PRT_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P4_16x16) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_DISPLAY_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_8));
-				break;
-			case 13:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_1D_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P16_32x32_16x16) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_THIN_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_2));
-				break;
-			case 14:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P16_32x32_16x16) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_THIN_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_2));
-				break;
-			case 15:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_3D_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P16_32x32_16x16) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_THIN_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_2));
-				break;
-			case 16:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_PRT_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P16_32x32_16x16) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_THIN_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_8));
-				break;
-			case 17:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_PRT_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P4_16x16) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_THIN_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_8));
-				break;
-			case 18:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_1D_TILED_THICK) |
-						PIPE_CONFIG(ADDR_SURF_P16_32x32_16x16) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_THIN_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1));
-				break;
-			case 19:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_1D_TILED_THICK) |
-						PIPE_CONFIG(ADDR_SURF_P16_32x32_16x16) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_THICK_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1));
-				break;
-			case 20:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_2D_TILED_THICK) |
-						PIPE_CONFIG(ADDR_SURF_P16_32x32_16x16) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_THICK_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1));
-				break;
-			case 21:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_3D_TILED_THICK) |
-						PIPE_CONFIG(ADDR_SURF_P16_32x32_16x16) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_THICK_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1));
-				break;
-			case 22:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_PRT_TILED_THICK) |
-						PIPE_CONFIG(ADDR_SURF_P16_32x32_16x16) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_THICK_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1));
-				break;
-			case 23:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_PRT_TILED_THICK) |
-						PIPE_CONFIG(ADDR_SURF_P4_16x16) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_THICK_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1));
-				break;
-			case 24:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_2D_TILED_THICK) |
-						PIPE_CONFIG(ADDR_SURF_P16_32x32_16x16) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_THIN_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1));
-				break;
-			case 25:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_2D_TILED_XTHICK) |
-						PIPE_CONFIG(ADDR_SURF_P16_32x32_16x16) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_THICK_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1));
-				break;
-			case 26:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_3D_TILED_XTHICK) |
-						PIPE_CONFIG(ADDR_SURF_P16_32x32_16x16) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_THICK_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1));
-				break;
-			case 27:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_1D_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P16_32x32_16x16) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_ROTATED_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_2));
-				break;
-			case 28:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P16_32x32_16x16) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_ROTATED_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_2));
-				break;
-			case 29:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_PRT_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P16_32x32_16x16) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_ROTATED_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_8));
-				break;
-			case 30:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_PRT_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P4_16x16) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_ROTATED_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_8));
-				break;
-			default:
-				gb_tile_moden = 0;
-				break;
-			}
-			adev->gfx.config.tile_mode_array[reg_offset] = gb_tile_moden;
-			WREG32(mmGB_TILE_MODE0 + reg_offset, gb_tile_moden);
-		}
-		for (reg_offset = 0; reg_offset < num_secondary_tile_mode_states; reg_offset++) {
-			switch (reg_offset) {
-			case 0:
-				gb_tile_moden = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
-						BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_4) |
-						MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2) |
-						NUM_BANKS(ADDR_SURF_8_BANK));
-				break;
-			case 1:
-				gb_tile_moden = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
-						BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_4) |
-						MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2) |
-						NUM_BANKS(ADDR_SURF_8_BANK));
-				break;
-			case 2:
-				gb_tile_moden = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
-						BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_4) |
-						MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2) |
-						NUM_BANKS(ADDR_SURF_8_BANK));
-				break;
-			case 3:
-				gb_tile_moden = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
-						BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_4) |
-						MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2) |
-						NUM_BANKS(ADDR_SURF_8_BANK));
-				break;
-			case 4:
-				gb_tile_moden = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
-						BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_2) |
-						MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_1) |
-						NUM_BANKS(ADDR_SURF_8_BANK));
-				break;
-			case 5:
-				gb_tile_moden = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
-						BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_1) |
-						MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_1) |
-						NUM_BANKS(ADDR_SURF_8_BANK));
-				break;
-			case 6:
-				gb_tile_moden = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
-						BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_1) |
-						MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_1) |
-						NUM_BANKS(ADDR_SURF_8_BANK));
-				break;
-			case 8:
-				gb_tile_moden = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
-						BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_8) |
-						MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2) |
-						NUM_BANKS(ADDR_SURF_8_BANK));
-				break;
-			case 9:
-				gb_tile_moden = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
-						BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_4) |
-						MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2) |
-						NUM_BANKS(ADDR_SURF_8_BANK));
-				break;
-			case 10:
-				gb_tile_moden = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
-						BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_2) |
-						MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_1) |
-						NUM_BANKS(ADDR_SURF_8_BANK));
-				break;
-			case 11:
-				gb_tile_moden = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
-						BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_1) |
-						MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_1) |
-						NUM_BANKS(ADDR_SURF_8_BANK));
-				break;
-			case 12:
-				gb_tile_moden = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
-						BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_2) |
-						MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2) |
-						NUM_BANKS(ADDR_SURF_8_BANK));
-				break;
-			case 13:
-				gb_tile_moden = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
-						BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_1) |
-						MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2) |
-						NUM_BANKS(ADDR_SURF_8_BANK));
-				break;
-			case 14:
-				gb_tile_moden = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
-						BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_1) |
-						MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_1) |
-						NUM_BANKS(ADDR_SURF_4_BANK));
-				break;
-			case 7:
-				/* unused idx */
-				continue;
-			default:
-				gb_tile_moden = 0;
-				break;
-			}
-			adev->gfx.config.macrotile_mode_array[reg_offset] = gb_tile_moden;
-			WREG32(mmGB_MACROTILE_MODE0 + reg_offset, gb_tile_moden);
-		}
+		modearray[0] = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
+				PIPE_CONFIG(ADDR_SURF_P16_32x32_16x16) |
+				TILE_SPLIT(ADDR_SURF_TILE_SPLIT_64B) |
+				MICRO_TILE_MODE_NEW(ADDR_SURF_DEPTH_MICRO_TILING));
+		modearray[1] = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
+				PIPE_CONFIG(ADDR_SURF_P16_32x32_16x16) |
+				TILE_SPLIT(ADDR_SURF_TILE_SPLIT_128B) |
+				MICRO_TILE_MODE_NEW(ADDR_SURF_DEPTH_MICRO_TILING));
+		modearray[2] = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
+				PIPE_CONFIG(ADDR_SURF_P16_32x32_16x16) |
+				TILE_SPLIT(ADDR_SURF_TILE_SPLIT_256B) |
+				MICRO_TILE_MODE_NEW(ADDR_SURF_DEPTH_MICRO_TILING));
+		modearray[3] = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
+				PIPE_CONFIG(ADDR_SURF_P16_32x32_16x16) |
+				TILE_SPLIT(ADDR_SURF_TILE_SPLIT_512B) |
+				MICRO_TILE_MODE_NEW(ADDR_SURF_DEPTH_MICRO_TILING));
+		modearray[4] = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
+				PIPE_CONFIG(ADDR_SURF_P16_32x32_16x16) |
+				TILE_SPLIT(ADDR_SURF_TILE_SPLIT_2KB) |
+				MICRO_TILE_MODE_NEW(ADDR_SURF_DEPTH_MICRO_TILING));
+		modearray[5] = (ARRAY_MODE(ARRAY_1D_TILED_THIN1) |
+				PIPE_CONFIG(ADDR_SURF_P16_32x32_16x16) |
+				TILE_SPLIT(ADDR_SURF_TILE_SPLIT_2KB) |
+				MICRO_TILE_MODE_NEW(ADDR_SURF_DEPTH_MICRO_TILING));
+		modearray[6] = (ARRAY_MODE(ARRAY_PRT_TILED_THIN1) |
+				PIPE_CONFIG(ADDR_SURF_P16_32x32_16x16) |
+				TILE_SPLIT(ADDR_SURF_TILE_SPLIT_2KB) |
+				MICRO_TILE_MODE_NEW(ADDR_SURF_DEPTH_MICRO_TILING));
+		modearray[7] = (ARRAY_MODE(ARRAY_PRT_TILED_THIN1) |
+				PIPE_CONFIG(ADDR_SURF_P4_16x16) |
+				TILE_SPLIT(ADDR_SURF_TILE_SPLIT_2KB) |
+				MICRO_TILE_MODE_NEW(ADDR_SURF_DEPTH_MICRO_TILING));
+		modearray[8] = (ARRAY_MODE(ARRAY_LINEAR_ALIGNED) |
+				PIPE_CONFIG(ADDR_SURF_P16_32x32_16x16));
+		modearray[9] = (ARRAY_MODE(ARRAY_1D_TILED_THIN1) |
+				PIPE_CONFIG(ADDR_SURF_P16_32x32_16x16) |
+				MICRO_TILE_MODE_NEW(ADDR_SURF_DISPLAY_MICRO_TILING) |
+				SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_2));
+		modearray[10] = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
+				 PIPE_CONFIG(ADDR_SURF_P16_32x32_16x16) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_DISPLAY_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_2));
+		modearray[11] = (ARRAY_MODE(ARRAY_PRT_TILED_THIN1) |
+				 PIPE_CONFIG(ADDR_SURF_P16_32x32_16x16) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_DISPLAY_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_8));
+		modearray[12] = (ARRAY_MODE(ARRAY_PRT_TILED_THIN1) |
+				 PIPE_CONFIG(ADDR_SURF_P4_16x16) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_DISPLAY_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_8));
+		modearray[13] = (ARRAY_MODE(ARRAY_1D_TILED_THIN1) |
+				 PIPE_CONFIG(ADDR_SURF_P16_32x32_16x16) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_THIN_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_2));
+		modearray[14] = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
+				 PIPE_CONFIG(ADDR_SURF_P16_32x32_16x16) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_THIN_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_2));
+		modearray[15] = (ARRAY_MODE(ARRAY_3D_TILED_THIN1) |
+				 PIPE_CONFIG(ADDR_SURF_P16_32x32_16x16) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_THIN_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_2));
+		modearray[16] = (ARRAY_MODE(ARRAY_PRT_TILED_THIN1) |
+				 PIPE_CONFIG(ADDR_SURF_P16_32x32_16x16) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_THIN_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_8));
+		modearray[17] = (ARRAY_MODE(ARRAY_PRT_TILED_THIN1) |
+				 PIPE_CONFIG(ADDR_SURF_P4_16x16) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_THIN_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_8));
+		modearray[18] = (ARRAY_MODE(ARRAY_1D_TILED_THICK) |
+				 PIPE_CONFIG(ADDR_SURF_P16_32x32_16x16) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_THIN_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1));
+		modearray[19] = (ARRAY_MODE(ARRAY_1D_TILED_THICK) |
+				 PIPE_CONFIG(ADDR_SURF_P16_32x32_16x16) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_THICK_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1));
+		modearray[20] = (ARRAY_MODE(ARRAY_2D_TILED_THICK) |
+				 PIPE_CONFIG(ADDR_SURF_P16_32x32_16x16) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_THICK_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1));
+		modearray[21] = (ARRAY_MODE(ARRAY_3D_TILED_THICK) |
+				 PIPE_CONFIG(ADDR_SURF_P16_32x32_16x16) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_THICK_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1));
+		modearray[22] = (ARRAY_MODE(ARRAY_PRT_TILED_THICK) |
+				 PIPE_CONFIG(ADDR_SURF_P16_32x32_16x16) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_THICK_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1));
+		modearray[23] = (ARRAY_MODE(ARRAY_PRT_TILED_THICK) |
+				 PIPE_CONFIG(ADDR_SURF_P4_16x16) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_THICK_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1));
+		modearray[24] = (ARRAY_MODE(ARRAY_2D_TILED_THICK) |
+				 PIPE_CONFIG(ADDR_SURF_P16_32x32_16x16) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_THIN_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1));
+		modearray[25] = (ARRAY_MODE(ARRAY_2D_TILED_XTHICK) |
+				 PIPE_CONFIG(ADDR_SURF_P16_32x32_16x16) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_THICK_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1));
+		modearray[26] = (ARRAY_MODE(ARRAY_3D_TILED_XTHICK) |
+				 PIPE_CONFIG(ADDR_SURF_P16_32x32_16x16) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_THICK_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1));
+		modearray[27] = (ARRAY_MODE(ARRAY_1D_TILED_THIN1) |
+				 PIPE_CONFIG(ADDR_SURF_P16_32x32_16x16) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_ROTATED_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_2));
+		modearray[28] = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
+				 PIPE_CONFIG(ADDR_SURF_P16_32x32_16x16) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_ROTATED_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_2));
+		modearray[29] = (ARRAY_MODE(ARRAY_PRT_TILED_THIN1) |
+				 PIPE_CONFIG(ADDR_SURF_P16_32x32_16x16) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_ROTATED_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_8));
+		modearray[30] = (ARRAY_MODE(ARRAY_PRT_TILED_THIN1) |
+				 PIPE_CONFIG(ADDR_SURF_P4_16x16) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_ROTATED_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_8));
+
+		mod2array[0] = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+				BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_4) |
+				MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2) |
+				NUM_BANKS(ADDR_SURF_8_BANK));
+		mod2array[1] = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+				BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_4) |
+				MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2) |
+				NUM_BANKS(ADDR_SURF_8_BANK));
+		mod2array[2] = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+				BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_4) |
+				MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2) |
+				NUM_BANKS(ADDR_SURF_8_BANK));
+		mod2array[3] = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+				BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_4) |
+				MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2) |
+				NUM_BANKS(ADDR_SURF_8_BANK));
+		mod2array[4] = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+				BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_2) |
+				MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_1) |
+				NUM_BANKS(ADDR_SURF_8_BANK));
+		mod2array[5] = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+				BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_1) |
+				MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_1) |
+				NUM_BANKS(ADDR_SURF_8_BANK));
+		mod2array[6] = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+				BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_1) |
+				MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_1) |
+				NUM_BANKS(ADDR_SURF_8_BANK));
+		mod2array[8] = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+				BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_8) |
+				MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2) |
+				NUM_BANKS(ADDR_SURF_8_BANK));
+		mod2array[9] = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+				BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_4) |
+				MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2) |
+				NUM_BANKS(ADDR_SURF_8_BANK));
+		mod2array[10] = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+				 BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_2) |
+				 MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_1) |
+				 NUM_BANKS(ADDR_SURF_8_BANK));
+		mod2array[11] = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+				 BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_1) |
+				 MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_1) |
+				 NUM_BANKS(ADDR_SURF_8_BANK));
+		mod2array[12] = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+				 BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_2) |
+				 MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2) |
+				 NUM_BANKS(ADDR_SURF_8_BANK));
+		mod2array[13] = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+				 BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_1) |
+				 MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2) |
+				 NUM_BANKS(ADDR_SURF_8_BANK));
+		mod2array[14] = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+				 BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_1) |
+				 MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_1) |
+				 NUM_BANKS(ADDR_SURF_4_BANK));
+
+		for (reg_offset = 0; reg_offset < num_tile_mode_states; reg_offset++)
+			WREG32(mmGB_TILE_MODE0 + reg_offset, modearray[reg_offset]);
+
+		for (reg_offset = 0; reg_offset < num_secondary_tile_mode_states; reg_offset++)
+			if (reg_offset != 7)
+				WREG32(mmGB_MACROTILE_MODE0 + reg_offset, mod2array[reg_offset]);
+
 		break;
 	case CHIP_TONGA:
-		for (reg_offset = 0; reg_offset < num_tile_mode_states; reg_offset++) {
-			switch (reg_offset) {
-			case 0:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P8_32x32_16x16) |
-						TILE_SPLIT(ADDR_SURF_TILE_SPLIT_64B) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_DEPTH_MICRO_TILING));
-				break;
-			case 1:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P8_32x32_16x16) |
-						TILE_SPLIT(ADDR_SURF_TILE_SPLIT_128B) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_DEPTH_MICRO_TILING));
-				break;
-			case 2:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P8_32x32_16x16) |
-						TILE_SPLIT(ADDR_SURF_TILE_SPLIT_256B) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_DEPTH_MICRO_TILING));
-				break;
-			case 3:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P8_32x32_16x16) |
-						TILE_SPLIT(ADDR_SURF_TILE_SPLIT_512B) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_DEPTH_MICRO_TILING));
-				break;
-			case 4:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P8_32x32_16x16) |
-						TILE_SPLIT(ADDR_SURF_TILE_SPLIT_2KB) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_DEPTH_MICRO_TILING));
-				break;
-			case 5:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_1D_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P8_32x32_16x16) |
-						TILE_SPLIT(ADDR_SURF_TILE_SPLIT_2KB) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_DEPTH_MICRO_TILING));
-				break;
-			case 6:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_PRT_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P8_32x32_16x16) |
-						TILE_SPLIT(ADDR_SURF_TILE_SPLIT_2KB) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_DEPTH_MICRO_TILING));
-				break;
-			case 7:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_PRT_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P4_16x16) |
-						TILE_SPLIT(ADDR_SURF_TILE_SPLIT_2KB) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_DEPTH_MICRO_TILING));
-				break;
-			case 8:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_LINEAR_ALIGNED) |
-						PIPE_CONFIG(ADDR_SURF_P8_32x32_16x16));
-				break;
-			case 9:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_1D_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P8_32x32_16x16) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_DISPLAY_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_2));
-				break;
-			case 10:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P8_32x32_16x16) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_DISPLAY_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_2));
-				break;
-			case 11:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_PRT_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P8_32x32_16x16) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_DISPLAY_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_8));
-				break;
-			case 12:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_PRT_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P4_16x16) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_DISPLAY_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_8));
-				break;
-			case 13:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_1D_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P8_32x32_16x16) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_THIN_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_2));
-				break;
-			case 14:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P8_32x32_16x16) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_THIN_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_2));
-				break;
-			case 15:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_3D_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P8_32x32_16x16) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_THIN_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_2));
-				break;
-			case 16:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_PRT_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P8_32x32_16x16) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_THIN_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_8));
-				break;
-			case 17:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_PRT_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P4_16x16) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_THIN_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_8));
-				break;
-			case 18:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_1D_TILED_THICK) |
-						PIPE_CONFIG(ADDR_SURF_P8_32x32_16x16) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_THIN_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1));
-				break;
-			case 19:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_1D_TILED_THICK) |
-						PIPE_CONFIG(ADDR_SURF_P8_32x32_16x16) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_THICK_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1));
-				break;
-			case 20:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_2D_TILED_THICK) |
-						PIPE_CONFIG(ADDR_SURF_P8_32x32_16x16) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_THICK_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1));
-				break;
-			case 21:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_3D_TILED_THICK) |
-						PIPE_CONFIG(ADDR_SURF_P8_32x32_16x16) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_THICK_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1));
-				break;
-			case 22:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_PRT_TILED_THICK) |
-						PIPE_CONFIG(ADDR_SURF_P8_32x32_16x16) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_THICK_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1));
-				break;
-			case 23:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_PRT_TILED_THICK) |
-						PIPE_CONFIG(ADDR_SURF_P4_16x16) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_THICK_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1));
-				break;
-			case 24:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_2D_TILED_THICK) |
-						PIPE_CONFIG(ADDR_SURF_P8_32x32_16x16) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_THIN_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1));
-				break;
-			case 25:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_2D_TILED_XTHICK) |
-						PIPE_CONFIG(ADDR_SURF_P8_32x32_16x16) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_THICK_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1));
-				break;
-			case 26:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_3D_TILED_XTHICK) |
-						PIPE_CONFIG(ADDR_SURF_P8_32x32_16x16) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_THICK_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1));
-				break;
-			case 27:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_1D_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P8_32x32_16x16) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_ROTATED_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_2));
-				break;
-			case 28:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P8_32x32_16x16) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_ROTATED_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_2));
-				break;
-			case 29:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_PRT_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P8_32x32_16x16) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_ROTATED_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_8));
-				break;
-			case 30:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_PRT_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P4_16x16) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_ROTATED_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_8));
-				break;
-			default:
-				gb_tile_moden = 0;
-				break;
-			};
-			adev->gfx.config.tile_mode_array[reg_offset] = gb_tile_moden;
-			WREG32(mmGB_TILE_MODE0 + reg_offset, gb_tile_moden);
-		}
-		for (reg_offset = 0; reg_offset < num_secondary_tile_mode_states; reg_offset++) {
-			switch (reg_offset) {
-			case 0:
-				gb_tile_moden = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
-						BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_4) |
-						MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_4) |
-						NUM_BANKS(ADDR_SURF_16_BANK));
-				break;
-			case 1:
-				gb_tile_moden = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
-						BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_4) |
-						MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_4) |
-						NUM_BANKS(ADDR_SURF_16_BANK));
-				break;
-			case 2:
-				gb_tile_moden = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
-						BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_4) |
-						MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_4) |
-						NUM_BANKS(ADDR_SURF_16_BANK));
-				break;
-			case 3:
-				gb_tile_moden = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
-						BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_4) |
-						MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_4) |
-						NUM_BANKS(ADDR_SURF_16_BANK));
-				break;
-			case 4:
-				gb_tile_moden = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
-						BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_2) |
-						MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2) |
-						NUM_BANKS(ADDR_SURF_16_BANK));
-				break;
-			case 5:
-				gb_tile_moden = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
-						BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_1) |
-						MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_1) |
-						NUM_BANKS(ADDR_SURF_16_BANK));
-				break;
-			case 6:
-				gb_tile_moden = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
-						BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_1) |
-						MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_1) |
-						NUM_BANKS(ADDR_SURF_16_BANK));
-				break;
-			case 8:
-				gb_tile_moden = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
-						BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_8) |
-						MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_4) |
-						NUM_BANKS(ADDR_SURF_16_BANK));
-				break;
-			case 9:
-				gb_tile_moden = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
-						BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_4) |
-						MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_4) |
-						NUM_BANKS(ADDR_SURF_16_BANK));
-				break;
-			case 10:
-				gb_tile_moden = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
-						BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_2) |
-						MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2) |
-						NUM_BANKS(ADDR_SURF_16_BANK));
-				break;
-			case 11:
-				gb_tile_moden = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
-						BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_1) |
-						MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2) |
-						NUM_BANKS(ADDR_SURF_16_BANK));
-				break;
-			case 12:
-				gb_tile_moden = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
-						BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_1) |
-						MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_1) |
-						NUM_BANKS(ADDR_SURF_8_BANK));
-				break;
-			case 13:
-				gb_tile_moden = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
-						BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_1) |
-						MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_1) |
-						NUM_BANKS(ADDR_SURF_4_BANK));
-				break;
-			case 14:
-				gb_tile_moden = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
-						BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_1) |
-						MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_1) |
-						NUM_BANKS(ADDR_SURF_4_BANK));
-				break;
-			case 7:
-				/* unused idx */
-				continue;
-			default:
-				gb_tile_moden = 0;
-				break;
-			};
-			adev->gfx.config.macrotile_mode_array[reg_offset] = gb_tile_moden;
-			WREG32(mmGB_MACROTILE_MODE0 + reg_offset, gb_tile_moden);
-		}
+		modearray[0] = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
+				PIPE_CONFIG(ADDR_SURF_P8_32x32_16x16) |
+				TILE_SPLIT(ADDR_SURF_TILE_SPLIT_64B) |
+				MICRO_TILE_MODE_NEW(ADDR_SURF_DEPTH_MICRO_TILING));
+		modearray[1] = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
+				PIPE_CONFIG(ADDR_SURF_P8_32x32_16x16) |
+				TILE_SPLIT(ADDR_SURF_TILE_SPLIT_128B) |
+				MICRO_TILE_MODE_NEW(ADDR_SURF_DEPTH_MICRO_TILING));
+		modearray[2] = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
+				PIPE_CONFIG(ADDR_SURF_P8_32x32_16x16) |
+				TILE_SPLIT(ADDR_SURF_TILE_SPLIT_256B) |
+				MICRO_TILE_MODE_NEW(ADDR_SURF_DEPTH_MICRO_TILING));
+		modearray[3] = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
+				PIPE_CONFIG(ADDR_SURF_P8_32x32_16x16) |
+				TILE_SPLIT(ADDR_SURF_TILE_SPLIT_512B) |
+				MICRO_TILE_MODE_NEW(ADDR_SURF_DEPTH_MICRO_TILING));
+		modearray[4] = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
+				PIPE_CONFIG(ADDR_SURF_P8_32x32_16x16) |
+				TILE_SPLIT(ADDR_SURF_TILE_SPLIT_2KB) |
+				MICRO_TILE_MODE_NEW(ADDR_SURF_DEPTH_MICRO_TILING));
+		modearray[5] = (ARRAY_MODE(ARRAY_1D_TILED_THIN1) |
+				PIPE_CONFIG(ADDR_SURF_P8_32x32_16x16) |
+				TILE_SPLIT(ADDR_SURF_TILE_SPLIT_2KB) |
+				MICRO_TILE_MODE_NEW(ADDR_SURF_DEPTH_MICRO_TILING));
+		modearray[6] = (ARRAY_MODE(ARRAY_PRT_TILED_THIN1) |
+				PIPE_CONFIG(ADDR_SURF_P8_32x32_16x16) |
+				TILE_SPLIT(ADDR_SURF_TILE_SPLIT_2KB) |
+				MICRO_TILE_MODE_NEW(ADDR_SURF_DEPTH_MICRO_TILING));
+		modearray[7] = (ARRAY_MODE(ARRAY_PRT_TILED_THIN1) |
+				PIPE_CONFIG(ADDR_SURF_P4_16x16) |
+				TILE_SPLIT(ADDR_SURF_TILE_SPLIT_2KB) |
+				MICRO_TILE_MODE_NEW(ADDR_SURF_DEPTH_MICRO_TILING));
+		modearray[8] = (ARRAY_MODE(ARRAY_LINEAR_ALIGNED) |
+				PIPE_CONFIG(ADDR_SURF_P8_32x32_16x16));
+		modearray[9] = (ARRAY_MODE(ARRAY_1D_TILED_THIN1) |
+				PIPE_CONFIG(ADDR_SURF_P8_32x32_16x16) |
+				MICRO_TILE_MODE_NEW(ADDR_SURF_DISPLAY_MICRO_TILING) |
+				SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_2));
+		modearray[10] = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
+				 PIPE_CONFIG(ADDR_SURF_P8_32x32_16x16) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_DISPLAY_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_2));
+		modearray[11] = (ARRAY_MODE(ARRAY_PRT_TILED_THIN1) |
+				 PIPE_CONFIG(ADDR_SURF_P8_32x32_16x16) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_DISPLAY_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_8));
+		modearray[12] = (ARRAY_MODE(ARRAY_PRT_TILED_THIN1) |
+				 PIPE_CONFIG(ADDR_SURF_P4_16x16) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_DISPLAY_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_8));
+		modearray[13] = (ARRAY_MODE(ARRAY_1D_TILED_THIN1) |
+				 PIPE_CONFIG(ADDR_SURF_P8_32x32_16x16) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_THIN_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_2));
+		modearray[14] = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
+				 PIPE_CONFIG(ADDR_SURF_P8_32x32_16x16) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_THIN_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_2));
+		modearray[15] = (ARRAY_MODE(ARRAY_3D_TILED_THIN1) |
+				 PIPE_CONFIG(ADDR_SURF_P8_32x32_16x16) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_THIN_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_2));
+		modearray[16] = (ARRAY_MODE(ARRAY_PRT_TILED_THIN1) |
+				 PIPE_CONFIG(ADDR_SURF_P8_32x32_16x16) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_THIN_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_8));
+		modearray[17] = (ARRAY_MODE(ARRAY_PRT_TILED_THIN1) |
+				 PIPE_CONFIG(ADDR_SURF_P4_16x16) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_THIN_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_8));
+		modearray[18] = (ARRAY_MODE(ARRAY_1D_TILED_THICK) |
+				 PIPE_CONFIG(ADDR_SURF_P8_32x32_16x16) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_THIN_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1));
+		modearray[19] = (ARRAY_MODE(ARRAY_1D_TILED_THICK) |
+				 PIPE_CONFIG(ADDR_SURF_P8_32x32_16x16) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_THICK_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1));
+		modearray[20] = (ARRAY_MODE(ARRAY_2D_TILED_THICK) |
+				 PIPE_CONFIG(ADDR_SURF_P8_32x32_16x16) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_THICK_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1));
+		modearray[21] = (ARRAY_MODE(ARRAY_3D_TILED_THICK) |
+				 PIPE_CONFIG(ADDR_SURF_P8_32x32_16x16) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_THICK_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1));
+		modearray[22] = (ARRAY_MODE(ARRAY_PRT_TILED_THICK) |
+				 PIPE_CONFIG(ADDR_SURF_P8_32x32_16x16) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_THICK_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1));
+		modearray[23] = (ARRAY_MODE(ARRAY_PRT_TILED_THICK) |
+				 PIPE_CONFIG(ADDR_SURF_P4_16x16) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_THICK_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1));
+		modearray[24] = (ARRAY_MODE(ARRAY_2D_TILED_THICK) |
+				 PIPE_CONFIG(ADDR_SURF_P8_32x32_16x16) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_THIN_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1));
+		modearray[25] = (ARRAY_MODE(ARRAY_2D_TILED_XTHICK) |
+				 PIPE_CONFIG(ADDR_SURF_P8_32x32_16x16) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_THICK_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1));
+		modearray[26] = (ARRAY_MODE(ARRAY_3D_TILED_XTHICK) |
+				 PIPE_CONFIG(ADDR_SURF_P8_32x32_16x16) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_THICK_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1));
+		modearray[27] = (ARRAY_MODE(ARRAY_1D_TILED_THIN1) |
+				 PIPE_CONFIG(ADDR_SURF_P8_32x32_16x16) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_ROTATED_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_2));
+		modearray[28] = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
+				 PIPE_CONFIG(ADDR_SURF_P8_32x32_16x16) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_ROTATED_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_2));
+		modearray[29] = (ARRAY_MODE(ARRAY_PRT_TILED_THIN1) |
+				 PIPE_CONFIG(ADDR_SURF_P8_32x32_16x16) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_ROTATED_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_8));
+		modearray[30] = (ARRAY_MODE(ARRAY_PRT_TILED_THIN1) |
+				 PIPE_CONFIG(ADDR_SURF_P4_16x16) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_ROTATED_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_8));
+
+		mod2array[0] = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+				BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_4) |
+				MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_4) |
+				NUM_BANKS(ADDR_SURF_16_BANK));
+		mod2array[1] = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+				BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_4) |
+				MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_4) |
+				NUM_BANKS(ADDR_SURF_16_BANK));
+		mod2array[2] = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+				BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_4) |
+				MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_4) |
+				NUM_BANKS(ADDR_SURF_16_BANK));
+		mod2array[3] = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+				BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_4) |
+				MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_4) |
+				NUM_BANKS(ADDR_SURF_16_BANK));
+		mod2array[4] = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+				BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_2) |
+				MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2) |
+				NUM_BANKS(ADDR_SURF_16_BANK));
+		mod2array[5] = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+				BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_1) |
+				MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_1) |
+				NUM_BANKS(ADDR_SURF_16_BANK));
+		mod2array[6] = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+				BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_1) |
+				MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_1) |
+				NUM_BANKS(ADDR_SURF_16_BANK));
+		mod2array[8] = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+				BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_8) |
+				MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_4) |
+				NUM_BANKS(ADDR_SURF_16_BANK));
+		mod2array[9] = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+				BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_4) |
+				MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_4) |
+				NUM_BANKS(ADDR_SURF_16_BANK));
+		mod2array[10] = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+				 BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_2) |
+				 MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2) |
+				 NUM_BANKS(ADDR_SURF_16_BANK));
+		mod2array[11] = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+				 BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_1) |
+				 MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2) |
+				 NUM_BANKS(ADDR_SURF_16_BANK));
+		mod2array[12] = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+				 BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_1) |
+				 MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_1) |
+				 NUM_BANKS(ADDR_SURF_8_BANK));
+		mod2array[13] = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+				 BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_1) |
+				 MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_1) |
+				 NUM_BANKS(ADDR_SURF_4_BANK));
+		mod2array[14] = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+				 BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_1) |
+				 MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_1) |
+				 NUM_BANKS(ADDR_SURF_4_BANK));
+
+		for (reg_offset = 0; reg_offset < num_tile_mode_states; reg_offset++)
+			WREG32(mmGB_TILE_MODE0 + reg_offset, modearray[reg_offset]);
+
+		for (reg_offset = 0; reg_offset < num_secondary_tile_mode_states; reg_offset++)
+			if (reg_offset != 7)
+				WREG32(mmGB_MACROTILE_MODE0 + reg_offset, mod2array[reg_offset]);
+
 		break;
 	case CHIP_STONEY:
-		for (reg_offset = 0; reg_offset < num_tile_mode_states; reg_offset++) {
-			switch (reg_offset) {
-			case 0:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P2) |
-						TILE_SPLIT(ADDR_SURF_TILE_SPLIT_64B) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_DEPTH_MICRO_TILING));
-				break;
-			case 1:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P2) |
-						TILE_SPLIT(ADDR_SURF_TILE_SPLIT_128B) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_DEPTH_MICRO_TILING));
-				break;
-			case 2:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P2) |
-						TILE_SPLIT(ADDR_SURF_TILE_SPLIT_256B) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_DEPTH_MICRO_TILING));
-				break;
-			case 3:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P2) |
-						TILE_SPLIT(ADDR_SURF_TILE_SPLIT_512B) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_DEPTH_MICRO_TILING));
-				break;
-			case 4:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P2) |
-						TILE_SPLIT(ADDR_SURF_TILE_SPLIT_2KB) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_DEPTH_MICRO_TILING));
-				break;
-			case 5:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_1D_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P2) |
-						TILE_SPLIT(ADDR_SURF_TILE_SPLIT_2KB) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_DEPTH_MICRO_TILING));
-				break;
-			case 6:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_PRT_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P2) |
-						TILE_SPLIT(ADDR_SURF_TILE_SPLIT_2KB) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_DEPTH_MICRO_TILING));
-				break;
-			case 8:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_LINEAR_ALIGNED) |
-						PIPE_CONFIG(ADDR_SURF_P2));
-				break;
-			case 9:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_1D_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P2) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_DISPLAY_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_2));
-				break;
-			case 10:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P2) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_DISPLAY_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_2));
-				break;
-			case 11:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_PRT_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P2) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_DISPLAY_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_8));
-				break;
-			case 13:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_1D_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P2) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_THIN_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_2));
-				break;
-			case 14:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P2) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_THIN_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_2));
-				break;
-			case 15:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_3D_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P2) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_THIN_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_2));
-				break;
-			case 16:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_PRT_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P2) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_THIN_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_8));
-				break;
-			case 18:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_1D_TILED_THICK) |
-						PIPE_CONFIG(ADDR_SURF_P2) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_THIN_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1));
-				break;
-			case 19:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_1D_TILED_THICK) |
-						PIPE_CONFIG(ADDR_SURF_P2) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_THICK_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1));
-				break;
-			case 20:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_2D_TILED_THICK) |
-						PIPE_CONFIG(ADDR_SURF_P2) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_THICK_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1));
-				break;
-			case 21:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_3D_TILED_THICK) |
-						PIPE_CONFIG(ADDR_SURF_P2) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_THICK_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1));
-				break;
-			case 22:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_PRT_TILED_THICK) |
-						PIPE_CONFIG(ADDR_SURF_P2) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_THICK_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1));
-				break;
-			case 24:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_2D_TILED_THICK) |
-						PIPE_CONFIG(ADDR_SURF_P2) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_THIN_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1));
-				break;
-			case 25:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_2D_TILED_XTHICK) |
-						PIPE_CONFIG(ADDR_SURF_P2) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_THICK_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1));
-				break;
-			case 26:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_3D_TILED_XTHICK) |
-						PIPE_CONFIG(ADDR_SURF_P2) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_THICK_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1));
-				break;
-			case 27:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_1D_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P2) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_ROTATED_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_2));
-				break;
-			case 28:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P2) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_ROTATED_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_2));
-				break;
-			case 29:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_PRT_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P2) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_ROTATED_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_8));
-				break;
-			case 7:
-			case 12:
-			case 17:
-			case 23:
-				/* unused idx */
-				continue;
-			default:
-				gb_tile_moden = 0;
-				break;
-			};
-			adev->gfx.config.tile_mode_array[reg_offset] = gb_tile_moden;
-			WREG32(mmGB_TILE_MODE0 + reg_offset, gb_tile_moden);
-		}
-		for (reg_offset = 0; reg_offset < num_secondary_tile_mode_states; reg_offset++) {
-			switch (reg_offset) {
-			case 0:
-				gb_tile_moden = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
-						BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_4) |
-						MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_4) |
-						NUM_BANKS(ADDR_SURF_8_BANK));
-				break;
-			case 1:
-				gb_tile_moden = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
-						BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_2) |
-						MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_4) |
-						NUM_BANKS(ADDR_SURF_8_BANK));
-				break;
-			case 2:
-				gb_tile_moden = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
-						BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_1) |
-						MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2) |
-						NUM_BANKS(ADDR_SURF_8_BANK));
-				break;
-			case 3:
-				gb_tile_moden = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
-						BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_1) |
-						MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2) |
-						NUM_BANKS(ADDR_SURF_8_BANK));
-				break;
-			case 4:
-				gb_tile_moden = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
-						BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_1) |
-						MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2) |
-						NUM_BANKS(ADDR_SURF_8_BANK));
-				break;
-			case 5:
-				gb_tile_moden = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
-						BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_1) |
-						MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2) |
-						NUM_BANKS(ADDR_SURF_8_BANK));
-				break;
-			case 6:
-				gb_tile_moden = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
-						BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_1) |
-						MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2) |
-						NUM_BANKS(ADDR_SURF_8_BANK));
-				break;
-			case 8:
-				gb_tile_moden = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_4) |
-						BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_8) |
-						MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_4) |
-						NUM_BANKS(ADDR_SURF_16_BANK));
-				break;
-			case 9:
-				gb_tile_moden = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_4) |
-						BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_4) |
-						MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_4) |
-						NUM_BANKS(ADDR_SURF_16_BANK));
-				break;
-			case 10:
-				gb_tile_moden = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_2) |
-						BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_4) |
-						MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_4) |
-						NUM_BANKS(ADDR_SURF_16_BANK));
-				break;
-			case 11:
-				gb_tile_moden = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_2) |
-						BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_2) |
-						MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_4) |
-						NUM_BANKS(ADDR_SURF_16_BANK));
-				break;
-			case 12:
-				gb_tile_moden = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
-						BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_2) |
-						MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_4) |
-						NUM_BANKS(ADDR_SURF_16_BANK));
-				break;
-			case 13:
-				gb_tile_moden = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
-						BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_1) |
-						MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_4) |
-						NUM_BANKS(ADDR_SURF_16_BANK));
-				break;
-			case 14:
-				gb_tile_moden = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
-						BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_1) |
-						MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2) |
-						NUM_BANKS(ADDR_SURF_8_BANK));
-				break;
-			case 7:
-				/* unused idx */
-				continue;
-			default:
-				gb_tile_moden = 0;
-				break;
-			};
-			adev->gfx.config.macrotile_mode_array[reg_offset] = gb_tile_moden;
-			WREG32(mmGB_MACROTILE_MODE0 + reg_offset, gb_tile_moden);
-		}
+		modearray[0] = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
+				PIPE_CONFIG(ADDR_SURF_P2) |
+				TILE_SPLIT(ADDR_SURF_TILE_SPLIT_64B) |
+				MICRO_TILE_MODE_NEW(ADDR_SURF_DEPTH_MICRO_TILING));
+		modearray[1] = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
+				PIPE_CONFIG(ADDR_SURF_P2) |
+				TILE_SPLIT(ADDR_SURF_TILE_SPLIT_128B) |
+				MICRO_TILE_MODE_NEW(ADDR_SURF_DEPTH_MICRO_TILING));
+		modearray[2] = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
+				PIPE_CONFIG(ADDR_SURF_P2) |
+				TILE_SPLIT(ADDR_SURF_TILE_SPLIT_256B) |
+				MICRO_TILE_MODE_NEW(ADDR_SURF_DEPTH_MICRO_TILING));
+		modearray[3] = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
+				PIPE_CONFIG(ADDR_SURF_P2) |
+				TILE_SPLIT(ADDR_SURF_TILE_SPLIT_512B) |
+				MICRO_TILE_MODE_NEW(ADDR_SURF_DEPTH_MICRO_TILING));
+		modearray[4] = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
+				PIPE_CONFIG(ADDR_SURF_P2) |
+				TILE_SPLIT(ADDR_SURF_TILE_SPLIT_2KB) |
+				MICRO_TILE_MODE_NEW(ADDR_SURF_DEPTH_MICRO_TILING));
+		modearray[5] = (ARRAY_MODE(ARRAY_1D_TILED_THIN1) |
+				PIPE_CONFIG(ADDR_SURF_P2) |
+				TILE_SPLIT(ADDR_SURF_TILE_SPLIT_2KB) |
+				MICRO_TILE_MODE_NEW(ADDR_SURF_DEPTH_MICRO_TILING));
+		modearray[6] = (ARRAY_MODE(ARRAY_PRT_TILED_THIN1) |
+				PIPE_CONFIG(ADDR_SURF_P2) |
+				TILE_SPLIT(ADDR_SURF_TILE_SPLIT_2KB) |
+				MICRO_TILE_MODE_NEW(ADDR_SURF_DEPTH_MICRO_TILING));
+		modearray[8] = (ARRAY_MODE(ARRAY_LINEAR_ALIGNED) |
+				PIPE_CONFIG(ADDR_SURF_P2));
+		modearray[9] = (ARRAY_MODE(ARRAY_1D_TILED_THIN1) |
+				PIPE_CONFIG(ADDR_SURF_P2) |
+				MICRO_TILE_MODE_NEW(ADDR_SURF_DISPLAY_MICRO_TILING) |
+				SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_2));
+		modearray[10] = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
+				 PIPE_CONFIG(ADDR_SURF_P2) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_DISPLAY_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_2));
+		modearray[11] = (ARRAY_MODE(ARRAY_PRT_TILED_THIN1) |
+				 PIPE_CONFIG(ADDR_SURF_P2) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_DISPLAY_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_8));
+		modearray[13] = (ARRAY_MODE(ARRAY_1D_TILED_THIN1) |
+				 PIPE_CONFIG(ADDR_SURF_P2) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_THIN_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_2));
+		modearray[14] = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
+				 PIPE_CONFIG(ADDR_SURF_P2) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_THIN_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_2));
+		modearray[15] = (ARRAY_MODE(ARRAY_3D_TILED_THIN1) |
+				 PIPE_CONFIG(ADDR_SURF_P2) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_THIN_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_2));
+		modearray[16] = (ARRAY_MODE(ARRAY_PRT_TILED_THIN1) |
+				 PIPE_CONFIG(ADDR_SURF_P2) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_THIN_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_8));
+		modearray[18] = (ARRAY_MODE(ARRAY_1D_TILED_THICK) |
+				 PIPE_CONFIG(ADDR_SURF_P2) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_THIN_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1));
+		modearray[19] = (ARRAY_MODE(ARRAY_1D_TILED_THICK) |
+				 PIPE_CONFIG(ADDR_SURF_P2) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_THICK_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1));
+		modearray[20] = (ARRAY_MODE(ARRAY_2D_TILED_THICK) |
+				 PIPE_CONFIG(ADDR_SURF_P2) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_THICK_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1));
+		modearray[21] = (ARRAY_MODE(ARRAY_3D_TILED_THICK) |
+				 PIPE_CONFIG(ADDR_SURF_P2) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_THICK_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1));
+		modearray[22] = (ARRAY_MODE(ARRAY_PRT_TILED_THICK) |
+				 PIPE_CONFIG(ADDR_SURF_P2) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_THICK_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1));
+		modearray[24] = (ARRAY_MODE(ARRAY_2D_TILED_THICK) |
+				 PIPE_CONFIG(ADDR_SURF_P2) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_THIN_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1));
+		modearray[25] = (ARRAY_MODE(ARRAY_2D_TILED_XTHICK) |
+				 PIPE_CONFIG(ADDR_SURF_P2) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_THICK_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1));
+		modearray[26] = (ARRAY_MODE(ARRAY_3D_TILED_XTHICK) |
+				 PIPE_CONFIG(ADDR_SURF_P2) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_THICK_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1));
+		modearray[27] = (ARRAY_MODE(ARRAY_1D_TILED_THIN1) |
+				 PIPE_CONFIG(ADDR_SURF_P2) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_ROTATED_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_2));
+		modearray[28] = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
+				 PIPE_CONFIG(ADDR_SURF_P2) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_ROTATED_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_2));
+		modearray[29] = (ARRAY_MODE(ARRAY_PRT_TILED_THIN1) |
+				 PIPE_CONFIG(ADDR_SURF_P2) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_ROTATED_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_8));
+
+		mod2array[0] = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+				BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_4) |
+				MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_4) |
+				NUM_BANKS(ADDR_SURF_8_BANK));
+		mod2array[1] = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+				BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_2) |
+				MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_4) |
+				NUM_BANKS(ADDR_SURF_8_BANK));
+		mod2array[2] = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+				BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_1) |
+				MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2) |
+				NUM_BANKS(ADDR_SURF_8_BANK));
+		mod2array[3] = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+				BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_1) |
+				MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2) |
+				NUM_BANKS(ADDR_SURF_8_BANK));
+		mod2array[4] = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+				BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_1) |
+				MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2) |
+				NUM_BANKS(ADDR_SURF_8_BANK));
+		mod2array[5] = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+				BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_1) |
+				MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2) |
+				NUM_BANKS(ADDR_SURF_8_BANK));
+		mod2array[6] = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+				BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_1) |
+				MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2) |
+				NUM_BANKS(ADDR_SURF_8_BANK));
+		mod2array[8] = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_4) |
+				BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_8) |
+				MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_4) |
+				NUM_BANKS(ADDR_SURF_16_BANK));
+		mod2array[9] = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_4) |
+				BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_4) |
+				MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_4) |
+				NUM_BANKS(ADDR_SURF_16_BANK));
+		mod2array[10] = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_2) |
+				 BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_4) |
+				 MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_4) |
+				 NUM_BANKS(ADDR_SURF_16_BANK));
+		mod2array[11] = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_2) |
+				 BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_2) |
+				 MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_4) |
+				 NUM_BANKS(ADDR_SURF_16_BANK));
+		mod2array[12] = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+				 BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_2) |
+				 MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_4) |
+				 NUM_BANKS(ADDR_SURF_16_BANK));
+		mod2array[13] = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+				 BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_1) |
+				 MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_4) |
+				 NUM_BANKS(ADDR_SURF_16_BANK));
+		mod2array[14] = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+				 BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_1) |
+				 MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2) |
+				 NUM_BANKS(ADDR_SURF_8_BANK));
+
+		for (reg_offset = 0; reg_offset < num_tile_mode_states; reg_offset++)
+			if (reg_offset != 7 && reg_offset != 12 && reg_offset != 17 &&
+			    reg_offset != 23)
+				WREG32(mmGB_TILE_MODE0 + reg_offset, modearray[reg_offset]);
+
+		for (reg_offset = 0; reg_offset < num_secondary_tile_mode_states; reg_offset++)
+			if (reg_offset != 7)
+				WREG32(mmGB_MACROTILE_MODE0 + reg_offset, mod2array[reg_offset]);
+
 		break;
-	case CHIP_CARRIZO:
 	default:
-		for (reg_offset = 0; reg_offset < num_tile_mode_states; reg_offset++) {
-			switch (reg_offset) {
-			case 0:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P2) |
-						TILE_SPLIT(ADDR_SURF_TILE_SPLIT_64B) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_DEPTH_MICRO_TILING));
-				break;
-			case 1:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P2) |
-						TILE_SPLIT(ADDR_SURF_TILE_SPLIT_128B) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_DEPTH_MICRO_TILING));
-				break;
-			case 2:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P2) |
-						TILE_SPLIT(ADDR_SURF_TILE_SPLIT_256B) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_DEPTH_MICRO_TILING));
-				break;
-			case 3:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P2) |
-						TILE_SPLIT(ADDR_SURF_TILE_SPLIT_512B) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_DEPTH_MICRO_TILING));
-				break;
-			case 4:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P2) |
-						TILE_SPLIT(ADDR_SURF_TILE_SPLIT_2KB) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_DEPTH_MICRO_TILING));
-				break;
-			case 5:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_1D_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P2) |
-						TILE_SPLIT(ADDR_SURF_TILE_SPLIT_2KB) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_DEPTH_MICRO_TILING));
-				break;
-			case 6:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_PRT_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P2) |
-						TILE_SPLIT(ADDR_SURF_TILE_SPLIT_2KB) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_DEPTH_MICRO_TILING));
-				break;
-			case 8:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_LINEAR_ALIGNED) |
-						PIPE_CONFIG(ADDR_SURF_P2));
-				break;
-			case 9:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_1D_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P2) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_DISPLAY_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_2));
-				break;
-			case 10:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P2) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_DISPLAY_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_2));
-				break;
-			case 11:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_PRT_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P2) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_DISPLAY_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_8));
-				break;
-			case 13:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_1D_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P2) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_THIN_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_2));
-				break;
-			case 14:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P2) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_THIN_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_2));
-				break;
-			case 15:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_3D_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P2) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_THIN_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_2));
-				break;
-			case 16:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_PRT_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P2) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_THIN_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_8));
-				break;
-			case 18:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_1D_TILED_THICK) |
-						PIPE_CONFIG(ADDR_SURF_P2) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_THIN_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1));
-				break;
-			case 19:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_1D_TILED_THICK) |
-						PIPE_CONFIG(ADDR_SURF_P2) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_THICK_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1));
-				break;
-			case 20:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_2D_TILED_THICK) |
-						PIPE_CONFIG(ADDR_SURF_P2) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_THICK_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1));
-				break;
-			case 21:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_3D_TILED_THICK) |
-						PIPE_CONFIG(ADDR_SURF_P2) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_THICK_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1));
-				break;
-			case 22:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_PRT_TILED_THICK) |
-						PIPE_CONFIG(ADDR_SURF_P2) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_THICK_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1));
-				break;
-			case 24:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_2D_TILED_THICK) |
-						PIPE_CONFIG(ADDR_SURF_P2) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_THIN_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1));
-				break;
-			case 25:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_2D_TILED_XTHICK) |
-						PIPE_CONFIG(ADDR_SURF_P2) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_THICK_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1));
-				break;
-			case 26:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_3D_TILED_XTHICK) |
-						PIPE_CONFIG(ADDR_SURF_P2) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_THICK_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1));
-				break;
-			case 27:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_1D_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P2) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_ROTATED_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_2));
-				break;
-			case 28:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P2) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_ROTATED_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_2));
-				break;
-			case 29:
-				gb_tile_moden = (ARRAY_MODE(ARRAY_PRT_TILED_THIN1) |
-						PIPE_CONFIG(ADDR_SURF_P2) |
-						MICRO_TILE_MODE_NEW(ADDR_SURF_ROTATED_MICRO_TILING) |
-						SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_8));
-				break;
-			case 7:
-			case 12:
-			case 17:
-			case 23:
-				/* unused idx */
-				continue;
-			default:
-				gb_tile_moden = 0;
-				break;
-			};
-			adev->gfx.config.tile_mode_array[reg_offset] = gb_tile_moden;
-			WREG32(mmGB_TILE_MODE0 + reg_offset, gb_tile_moden);
-		}
-		for (reg_offset = 0; reg_offset < num_secondary_tile_mode_states; reg_offset++) {
-			switch (reg_offset) {
-			case 0:
-				gb_tile_moden = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
-						BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_4) |
-						MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_4) |
-						NUM_BANKS(ADDR_SURF_8_BANK));
-				break;
-			case 1:
-				gb_tile_moden = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
-						BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_2) |
-						MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_4) |
-						NUM_BANKS(ADDR_SURF_8_BANK));
-				break;
-			case 2:
-				gb_tile_moden = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
-						BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_1) |
-						MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2) |
-						NUM_BANKS(ADDR_SURF_8_BANK));
-				break;
-			case 3:
-				gb_tile_moden = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
-						BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_1) |
-						MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2) |
-						NUM_BANKS(ADDR_SURF_8_BANK));
-				break;
-			case 4:
-				gb_tile_moden = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
-						BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_1) |
-						MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2) |
-						NUM_BANKS(ADDR_SURF_8_BANK));
-				break;
-			case 5:
-				gb_tile_moden = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
-						BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_1) |
-						MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2) |
-						NUM_BANKS(ADDR_SURF_8_BANK));
-				break;
-			case 6:
-				gb_tile_moden = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
-						BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_1) |
-						MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2) |
-						NUM_BANKS(ADDR_SURF_8_BANK));
-				break;
-			case 8:
-				gb_tile_moden = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_4) |
-						BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_8) |
-						MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_4) |
-						NUM_BANKS(ADDR_SURF_16_BANK));
-				break;
-			case 9:
-				gb_tile_moden = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_4) |
-						BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_4) |
-						MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_4) |
-						NUM_BANKS(ADDR_SURF_16_BANK));
-				break;
-			case 10:
-				gb_tile_moden = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_2) |
-						BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_4) |
-						MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_4) |
-						NUM_BANKS(ADDR_SURF_16_BANK));
-				break;
-			case 11:
-				gb_tile_moden = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_2) |
-						BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_2) |
-						MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_4) |
-						NUM_BANKS(ADDR_SURF_16_BANK));
-				break;
-			case 12:
-				gb_tile_moden = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
-						BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_2) |
-						MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_4) |
-						NUM_BANKS(ADDR_SURF_16_BANK));
-				break;
-			case 13:
-				gb_tile_moden = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
-						BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_1) |
-						MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_4) |
-						NUM_BANKS(ADDR_SURF_16_BANK));
-				break;
-			case 14:
-				gb_tile_moden = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
-						BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_1) |
-						MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2) |
-						NUM_BANKS(ADDR_SURF_8_BANK));
-				break;
-			case 7:
-				/* unused idx */
-				continue;
-			default:
-				gb_tile_moden = 0;
-				break;
-			};
-			adev->gfx.config.macrotile_mode_array[reg_offset] = gb_tile_moden;
-			WREG32(mmGB_MACROTILE_MODE0 + reg_offset, gb_tile_moden);
-		}
+		dev_warn(adev->dev,
+			 "Unknown chip type (%d) in function gfx_v8_0_tiling_mode_table_init() falling through to CHIP_CARRIZO\n",
+			 adev->asic_type);
+
+	case CHIP_CARRIZO:
+		modearray[0] = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
+				PIPE_CONFIG(ADDR_SURF_P2) |
+				TILE_SPLIT(ADDR_SURF_TILE_SPLIT_64B) |
+				MICRO_TILE_MODE_NEW(ADDR_SURF_DEPTH_MICRO_TILING));
+		modearray[1] = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
+				PIPE_CONFIG(ADDR_SURF_P2) |
+				TILE_SPLIT(ADDR_SURF_TILE_SPLIT_128B) |
+				MICRO_TILE_MODE_NEW(ADDR_SURF_DEPTH_MICRO_TILING));
+		modearray[2] = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
+				PIPE_CONFIG(ADDR_SURF_P2) |
+				TILE_SPLIT(ADDR_SURF_TILE_SPLIT_256B) |
+				MICRO_TILE_MODE_NEW(ADDR_SURF_DEPTH_MICRO_TILING));
+		modearray[3] = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
+				PIPE_CONFIG(ADDR_SURF_P2) |
+				TILE_SPLIT(ADDR_SURF_TILE_SPLIT_512B) |
+				MICRO_TILE_MODE_NEW(ADDR_SURF_DEPTH_MICRO_TILING));
+		modearray[4] = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
+				PIPE_CONFIG(ADDR_SURF_P2) |
+				TILE_SPLIT(ADDR_SURF_TILE_SPLIT_2KB) |
+				MICRO_TILE_MODE_NEW(ADDR_SURF_DEPTH_MICRO_TILING));
+		modearray[5] = (ARRAY_MODE(ARRAY_1D_TILED_THIN1) |
+				PIPE_CONFIG(ADDR_SURF_P2) |
+				TILE_SPLIT(ADDR_SURF_TILE_SPLIT_2KB) |
+				MICRO_TILE_MODE_NEW(ADDR_SURF_DEPTH_MICRO_TILING));
+		modearray[6] = (ARRAY_MODE(ARRAY_PRT_TILED_THIN1) |
+				PIPE_CONFIG(ADDR_SURF_P2) |
+				TILE_SPLIT(ADDR_SURF_TILE_SPLIT_2KB) |
+				MICRO_TILE_MODE_NEW(ADDR_SURF_DEPTH_MICRO_TILING));
+		modearray[8] = (ARRAY_MODE(ARRAY_LINEAR_ALIGNED) |
+				PIPE_CONFIG(ADDR_SURF_P2));
+		modearray[9] = (ARRAY_MODE(ARRAY_1D_TILED_THIN1) |
+				PIPE_CONFIG(ADDR_SURF_P2) |
+				MICRO_TILE_MODE_NEW(ADDR_SURF_DISPLAY_MICRO_TILING) |
+				SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_2));
+		modearray[10] = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
+				 PIPE_CONFIG(ADDR_SURF_P2) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_DISPLAY_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_2));
+		modearray[11] = (ARRAY_MODE(ARRAY_PRT_TILED_THIN1) |
+				 PIPE_CONFIG(ADDR_SURF_P2) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_DISPLAY_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_8));
+		modearray[13] = (ARRAY_MODE(ARRAY_1D_TILED_THIN1) |
+				 PIPE_CONFIG(ADDR_SURF_P2) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_THIN_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_2));
+		modearray[14] = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
+				 PIPE_CONFIG(ADDR_SURF_P2) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_THIN_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_2));
+		modearray[15] = (ARRAY_MODE(ARRAY_3D_TILED_THIN1) |
+				 PIPE_CONFIG(ADDR_SURF_P2) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_THIN_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_2));
+		modearray[16] = (ARRAY_MODE(ARRAY_PRT_TILED_THIN1) |
+				 PIPE_CONFIG(ADDR_SURF_P2) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_THIN_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_8));
+		modearray[18] = (ARRAY_MODE(ARRAY_1D_TILED_THICK) |
+				 PIPE_CONFIG(ADDR_SURF_P2) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_THIN_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1));
+		modearray[19] = (ARRAY_MODE(ARRAY_1D_TILED_THICK) |
+				 PIPE_CONFIG(ADDR_SURF_P2) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_THICK_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1));
+		modearray[20] = (ARRAY_MODE(ARRAY_2D_TILED_THICK) |
+				 PIPE_CONFIG(ADDR_SURF_P2) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_THICK_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1));
+		modearray[21] = (ARRAY_MODE(ARRAY_3D_TILED_THICK) |
+				 PIPE_CONFIG(ADDR_SURF_P2) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_THICK_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1));
+		modearray[22] = (ARRAY_MODE(ARRAY_PRT_TILED_THICK) |
+				 PIPE_CONFIG(ADDR_SURF_P2) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_THICK_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1));
+		modearray[24] = (ARRAY_MODE(ARRAY_2D_TILED_THICK) |
+				 PIPE_CONFIG(ADDR_SURF_P2) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_THIN_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1));
+		modearray[25] = (ARRAY_MODE(ARRAY_2D_TILED_XTHICK) |
+				 PIPE_CONFIG(ADDR_SURF_P2) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_THICK_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1));
+		modearray[26] = (ARRAY_MODE(ARRAY_3D_TILED_XTHICK) |
+				 PIPE_CONFIG(ADDR_SURF_P2) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_THICK_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_1));
+		modearray[27] = (ARRAY_MODE(ARRAY_1D_TILED_THIN1) |
+				 PIPE_CONFIG(ADDR_SURF_P2) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_ROTATED_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_2));
+		modearray[28] = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
+				 PIPE_CONFIG(ADDR_SURF_P2) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_ROTATED_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_2));
+		modearray[29] = (ARRAY_MODE(ARRAY_PRT_TILED_THIN1) |
+				 PIPE_CONFIG(ADDR_SURF_P2) |
+				 MICRO_TILE_MODE_NEW(ADDR_SURF_ROTATED_MICRO_TILING) |
+				 SAMPLE_SPLIT(ADDR_SURF_SAMPLE_SPLIT_8));
+
+		mod2array[0] = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+				BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_4) |
+				MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_4) |
+				NUM_BANKS(ADDR_SURF_8_BANK));
+		mod2array[1] = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+				BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_2) |
+				MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_4) |
+				NUM_BANKS(ADDR_SURF_8_BANK));
+		mod2array[2] = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+				BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_1) |
+				MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2) |
+				NUM_BANKS(ADDR_SURF_8_BANK));
+		mod2array[3] = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+				BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_1) |
+				MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2) |
+				NUM_BANKS(ADDR_SURF_8_BANK));
+		mod2array[4] = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+				BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_1) |
+				MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2) |
+				NUM_BANKS(ADDR_SURF_8_BANK));
+		mod2array[5] = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+				BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_1) |
+				MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2) |
+				NUM_BANKS(ADDR_SURF_8_BANK));
+		mod2array[6] = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+				BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_1) |
+				MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2) |
+				NUM_BANKS(ADDR_SURF_8_BANK));
+		mod2array[8] = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_4) |
+				BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_8) |
+				MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_4) |
+				NUM_BANKS(ADDR_SURF_16_BANK));
+		mod2array[9] = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_4) |
+				BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_4) |
+				MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_4) |
+				NUM_BANKS(ADDR_SURF_16_BANK));
+		mod2array[10] = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_2) |
+				 BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_4) |
+				 MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_4) |
+				 NUM_BANKS(ADDR_SURF_16_BANK));
+		mod2array[11] = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_2) |
+				 BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_2) |
+				 MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_4) |
+				 NUM_BANKS(ADDR_SURF_16_BANK));
+		mod2array[12] = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+				 BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_2) |
+				 MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_4) |
+				 NUM_BANKS(ADDR_SURF_16_BANK));
+		mod2array[13] = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+				 BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_1) |
+				 MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_4) |
+				 NUM_BANKS(ADDR_SURF_16_BANK));
+		mod2array[14] = (BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+				 BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_1) |
+				 MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2) |
+				 NUM_BANKS(ADDR_SURF_8_BANK));
+
+		for (reg_offset = 0; reg_offset < num_tile_mode_states; reg_offset++)
+			if (reg_offset != 7 && reg_offset != 12 && reg_offset != 17 &&
+			    reg_offset != 23)
+				WREG32(mmGB_TILE_MODE0 + reg_offset, modearray[reg_offset]);
+
+		for (reg_offset = 0; reg_offset < num_secondary_tile_mode_states; reg_offset++)
+			if (reg_offset != 7)
+				WREG32(mmGB_MACROTILE_MODE0 + reg_offset, mod2array[reg_offset]);
+
+		break;
 	}
 }
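
The rework above replaces the per-index switch statements with mode tables that are filled up front and then written out in a single loop; where a part has unused indices (e.g. 7, 12, 17 and 23 in the CHIP_STONEY/CHIP_CARRIZO tile tables), the write loop skips them. The modearray/mod2array pointers presumably alias the tile_mode_array/macrotile_mode_array config fields set up earlier in the function, outside this hunk, so the copy the old code made explicitly still happens. A minimal standalone C sketch of the new shape, with a hypothetical wreg32() and made-up register offset and values:

	#include <stdint.h>
	#include <stdio.h>

	#define NUM_TILE_MODE_STATES	32
	#define GB_TILE_MODE0		0x2644	/* illustrative offset only */

	static void wreg32(uint32_t reg, uint32_t val)	/* stand-in for WREG32() */
	{
		printf("0x%04x <- 0x%08x\n", reg, val);
	}

	int main(void)
	{
		uint32_t modearray[NUM_TILE_MODE_STATES] = { 0 };
		int reg_offset;

		/* fill the table first; unused slots simply stay zero */
		modearray[0] = 0x00000011;	/* made-up values, not real encodings */
		modearray[1] = 0x00000012;

		/* then write it out in one loop, skipping the unused indices */
		for (reg_offset = 0; reg_offset < NUM_TILE_MODE_STATES; reg_offset++)
			if (reg_offset != 7 && reg_offset != 12 &&
			    reg_offset != 17 && reg_offset != 23)
				wreg32(GB_TILE_MODE0 + reg_offset, modearray[reg_offset]);
		return 0;
	}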
 
 static u32 gfx_v8_0_create_bitmask(u32 bit_width)
 {
-	u32 i, mask = 0;
-
-	for (i = 0; i < bit_width; i++) {
-		mask <<= 1;
-		mask |= 1;
-	}
-	return mask;
+	return (u32)((1ULL << bit_width) - 1);
 }
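
The rewritten gfx_v8_0_create_bitmask() computes the same value the old shift-and-or loop built one bit at a time; doing the shift on a 64-bit 1ULL keeps it well-defined even for bit_width == 32, where a 32-bit shift would be undefined behaviour. A standalone sketch with illustrative asserts:

	#include <assert.h>
	#include <stdint.h>

	static uint32_t create_bitmask(uint32_t bit_width)
	{
		/* 1ULL << 32 is fine on a 64-bit type; 1U << 32 would be UB */
		return (uint32_t)((1ULL << bit_width) - 1);
	}

	int main(void)
	{
		assert(create_bitmask(0) == 0x0);
		assert(create_bitmask(4) == 0xf);
		assert(create_bitmask(32) == 0xffffffff);
		return 0;
	}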
 
 void gfx_v8_0_select_se_sh(struct amdgpu_device *adev, u32 se_num, u32 sh_num)
@@ -2809,7 +2651,7 @@ static void gfx_v8_0_setup_rb(struct amdgpu_device *adev,
 	mutex_lock(&adev->grbm_idx_mutex);
 	for (i = 0; i < se_num; i++) {
 		gfx_v8_0_select_se_sh(adev, i, 0xffffffff);
-		data = 0;
+		data = RREG32(mmPA_SC_RASTER_CONFIG);
 		for (j = 0; j < sh_per_se; j++) {
 			switch (enabled_rbs & 3) {
 			case 0:
@@ -2997,17 +2839,11 @@ static void gfx_v8_0_enable_gui_idle_interrupt(struct amdgpu_device *adev,
 {
 	u32 tmp = RREG32(mmCP_INT_CNTL_RING0);
 
-	if (enable) {
-		tmp = REG_SET_FIELD(tmp, CP_INT_CNTL_RING0, CNTX_BUSY_INT_ENABLE, 1);
-		tmp = REG_SET_FIELD(tmp, CP_INT_CNTL_RING0, CNTX_EMPTY_INT_ENABLE, 1);
-		tmp = REG_SET_FIELD(tmp, CP_INT_CNTL_RING0, CMP_BUSY_INT_ENABLE, 1);
-		tmp = REG_SET_FIELD(tmp, CP_INT_CNTL_RING0, GFX_IDLE_INT_ENABLE, 1);
-	} else {
-		tmp = REG_SET_FIELD(tmp, CP_INT_CNTL_RING0, CNTX_BUSY_INT_ENABLE, 0);
-		tmp = REG_SET_FIELD(tmp, CP_INT_CNTL_RING0, CNTX_EMPTY_INT_ENABLE, 0);
-		tmp = REG_SET_FIELD(tmp, CP_INT_CNTL_RING0, CMP_BUSY_INT_ENABLE, 0);
-		tmp = REG_SET_FIELD(tmp, CP_INT_CNTL_RING0, GFX_IDLE_INT_ENABLE, 0);
-	}
+	tmp = REG_SET_FIELD(tmp, CP_INT_CNTL_RING0, CNTX_BUSY_INT_ENABLE, enable ? 1 : 0);
+	tmp = REG_SET_FIELD(tmp, CP_INT_CNTL_RING0, CNTX_EMPTY_INT_ENABLE, enable ? 1 : 0);
+	tmp = REG_SET_FIELD(tmp, CP_INT_CNTL_RING0, CMP_BUSY_INT_ENABLE, enable ? 1 : 0);
+	tmp = REG_SET_FIELD(tmp, CP_INT_CNTL_RING0, GFX_IDLE_INT_ENABLE, enable ? 1 : 0);
+
 	WREG32(mmCP_INT_CNTL_RING0, tmp);
 }
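
The gui-idle interrupt helper now sets each field once with the value "enable ? 1 : 0" instead of keeping two mirrored branches; the register read-modify-write sequence is unchanged. A minimal sketch of the pattern, using a hypothetical set_field() helper with made-up masks and shifts in place of the driver's REG_SET_FIELD() macro:

	#include <stdbool.h>
	#include <stdint.h>

	/* hypothetical helper: clear the field, then insert the new value */
	static uint32_t set_field(uint32_t reg, uint32_t mask,
				  unsigned int shift, uint32_t val)
	{
		return (reg & ~mask) | ((val << shift) & mask);
	}

	static uint32_t update_ring0_int_cntl(uint32_t tmp, bool enable)
	{
		uint32_t v = enable ? 1 : 0;

		/* one statement per field instead of two mirrored branches */
		tmp = set_field(tmp, 0x1 << 0, 0, v);	/* e.g. CNTX_BUSY_INT_ENABLE */
		tmp = set_field(tmp, 0x1 << 1, 1, v);	/* e.g. CNTX_EMPTY_INT_ENABLE */
		tmp = set_field(tmp, 0x1 << 2, 2, v);	/* e.g. CMP_BUSY_INT_ENABLE */
		tmp = set_field(tmp, 0x1 << 3, 3, v);	/* e.g. GFX_IDLE_INT_ENABLE */
		return tmp;
	}

	int main(void)
	{
		uint32_t tmp = update_ring0_int_cntl(0, true);	/* -> 0xf */

		return update_ring0_int_cntl(tmp, false);	/* -> 0x0 */
	}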
 
@@ -3087,16 +2923,18 @@ static int gfx_v8_0_rlc_resume(struct amdgpu_device *adev)
 
 	gfx_v8_0_rlc_reset(adev);
 
-	if (!adev->firmware.smu_load) {
-		/* legacy rlc firmware loading */
-		r = gfx_v8_0_rlc_load_microcode(adev);
-		if (r)
-			return r;
-	} else {
-		r = adev->smu.smumgr_funcs->check_fw_load_finish(adev,
-						AMDGPU_UCODE_ID_RLC_G);
-		if (r)
-			return -EINVAL;
+	if (!adev->pp_enabled) {
+		if (!adev->firmware.smu_load) {
+			/* legacy rlc firmware loading */
+			r = gfx_v8_0_rlc_load_microcode(adev);
+			if (r)
+				return r;
+		} else {
+			r = adev->smu.smumgr_funcs->check_fw_load_finish(adev,
+							AMDGPU_UCODE_ID_RLC_G);
+			if (r)
+				return -EINVAL;
+		}
 	}
 
 	gfx_v8_0_rlc_start(adev);
@@ -3941,6 +3779,11 @@ static int gfx_v8_0_cp_compute_resume(struct amdgpu_device *adev)
 		tmp = REG_SET_FIELD(tmp, CP_HQD_PERSISTENT_STATE, PRELOAD_SIZE, 0x53);
 		WREG32(mmCP_HQD_PERSISTENT_STATE, tmp);
 		mqd->cp_hqd_persistent_state = tmp;
+		if (adev->asic_type == CHIP_STONEY) {
+			tmp = RREG32(mmCP_ME1_PIPE3_INT_CNTL);
+			tmp = REG_SET_FIELD(tmp, CP_ME1_PIPE3_INT_CNTL, GENERIC2_INT_ENABLE, 1);
+			WREG32(mmCP_ME1_PIPE3_INT_CNTL, tmp);
+		}
 
 		/* activate the queue */
 		mqd->cp_hqd_active = 1;
@@ -3982,35 +3825,37 @@ static int gfx_v8_0_cp_resume(struct amdgpu_device *adev)
 	if (!(adev->flags & AMD_IS_APU))
 		gfx_v8_0_enable_gui_idle_interrupt(adev, false);
 
-	if (!adev->firmware.smu_load) {
-		/* legacy firmware loading */
-		r = gfx_v8_0_cp_gfx_load_microcode(adev);
-		if (r)
-			return r;
-
-		r = gfx_v8_0_cp_compute_load_microcode(adev);
-		if (r)
-			return r;
-	} else {
-		r = adev->smu.smumgr_funcs->check_fw_load_finish(adev,
-						AMDGPU_UCODE_ID_CP_CE);
-		if (r)
-			return -EINVAL;
-
-		r = adev->smu.smumgr_funcs->check_fw_load_finish(adev,
-						AMDGPU_UCODE_ID_CP_PFP);
-		if (r)
-			return -EINVAL;
-
-		r = adev->smu.smumgr_funcs->check_fw_load_finish(adev,
-						AMDGPU_UCODE_ID_CP_ME);
-		if (r)
-			return -EINVAL;
+	if (!adev->pp_enabled) {
+		if (!adev->firmware.smu_load) {
+			/* legacy firmware loading */
+			r = gfx_v8_0_cp_gfx_load_microcode(adev);
+			if (r)
+				return r;
 
-		r = adev->smu.smumgr_funcs->check_fw_load_finish(adev,
-						AMDGPU_UCODE_ID_CP_MEC1);
-		if (r)
-			return -EINVAL;
+			r = gfx_v8_0_cp_compute_load_microcode(adev);
+			if (r)
+				return r;
+		} else {
+			r = adev->smu.smumgr_funcs->check_fw_load_finish(adev,
+							AMDGPU_UCODE_ID_CP_CE);
+			if (r)
+				return -EINVAL;
+
+			r = adev->smu.smumgr_funcs->check_fw_load_finish(adev,
+							AMDGPU_UCODE_ID_CP_PFP);
+			if (r)
+				return -EINVAL;
+
+			r = adev->smu.smumgr_funcs->check_fw_load_finish(adev,
+							AMDGPU_UCODE_ID_CP_ME);
+			if (r)
+				return -EINVAL;
+
+			r = adev->smu.smumgr_funcs->check_fw_load_finish(adev,
+							AMDGPU_UCODE_ID_CP_MEC1);
+			if (r)
+				return -EINVAL;
+		}
 	}
 
 	r = gfx_v8_0_cp_gfx_resume(adev);
@@ -4458,15 +4303,261 @@ static int gfx_v8_0_early_init(void *handle)
 	return 0;
 }
 
+static int gfx_v8_0_late_init(void *handle)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+	int r;
+
+	/* requires IBs so do in late init after IB pool is initialized */
+	r = gfx_v8_0_do_edc_gpr_workarounds(adev);
+	if (r)
+		return r;
+
+	return 0;
+}
+
 static int gfx_v8_0_set_powergating_state(void *handle,
 					  enum amd_powergating_state state)
 {
 	return 0;
 }
 
+static void fiji_send_serdes_cmd(struct amdgpu_device *adev,
+		uint32_t reg_addr, uint32_t cmd)
+{
+	uint32_t data;
+
+	gfx_v8_0_select_se_sh(adev, 0xffffffff, 0xffffffff);
+
+	WREG32(mmRLC_SERDES_WR_CU_MASTER_MASK, 0xffffffff);
+	WREG32(mmRLC_SERDES_WR_NONCU_MASTER_MASK, 0xffffffff);
+
+	data = RREG32(mmRLC_SERDES_WR_CTRL);
+	data &= ~(RLC_SERDES_WR_CTRL__WRITE_COMMAND_MASK |
+			RLC_SERDES_WR_CTRL__READ_COMMAND_MASK |
+			RLC_SERDES_WR_CTRL__P1_SELECT_MASK |
+			RLC_SERDES_WR_CTRL__P2_SELECT_MASK |
+			RLC_SERDES_WR_CTRL__RDDATA_RESET_MASK |
+			RLC_SERDES_WR_CTRL__POWER_DOWN_MASK |
+			RLC_SERDES_WR_CTRL__POWER_UP_MASK |
+			RLC_SERDES_WR_CTRL__SHORT_FORMAT_MASK |
+			RLC_SERDES_WR_CTRL__BPM_DATA_MASK |
+			RLC_SERDES_WR_CTRL__REG_ADDR_MASK |
+			RLC_SERDES_WR_CTRL__SRBM_OVERRIDE_MASK);
+	data |= (RLC_SERDES_WR_CTRL__RSVD_BPM_ADDR_MASK |
+			(cmd << RLC_SERDES_WR_CTRL__BPM_DATA__SHIFT) |
+			(reg_addr << RLC_SERDES_WR_CTRL__REG_ADDR__SHIFT) |
+			(0xff << RLC_SERDES_WR_CTRL__BPM_ADDR__SHIFT));
+
+	WREG32(mmRLC_SERDES_WR_CTRL, data);
+}
+
+static void fiji_update_medium_grain_clock_gating(struct amdgpu_device *adev,
+		bool enable)
+{
+	uint32_t temp, data;
+
+	/* It is disabled by HW by default */
+	if (enable) {
+		/* 1 - RLC memory Light sleep */
+		temp = data = RREG32(mmRLC_MEM_SLP_CNTL);
+		data |= RLC_MEM_SLP_CNTL__RLC_MEM_LS_EN_MASK;
+		if (temp != data)
+			WREG32(mmRLC_MEM_SLP_CNTL, data);
+
+		/* 2 - CP memory Light sleep */
+		temp = data = RREG32(mmCP_MEM_SLP_CNTL);
+		data |= CP_MEM_SLP_CNTL__CP_MEM_LS_EN_MASK;
+		if (temp != data)
+			WREG32(mmCP_MEM_SLP_CNTL, data);
+
+		/* 3 - RLC_CGTT_MGCG_OVERRIDE */
+		temp = data = RREG32(mmRLC_CGTT_MGCG_OVERRIDE);
+		data &= ~(RLC_CGTT_MGCG_OVERRIDE__CPF_MASK |
+				RLC_CGTT_MGCG_OVERRIDE__RLC_MASK |
+				RLC_CGTT_MGCG_OVERRIDE__MGCG_MASK |
+				RLC_CGTT_MGCG_OVERRIDE__GRBM_MASK);
+
+		if (temp != data)
+			WREG32(mmRLC_CGTT_MGCG_OVERRIDE, data);
+
+		/* 4 - wait for RLC_SERDES_CU_MASTER & RLC_SERDES_NONCU_MASTER idle */
+		gfx_v8_0_wait_for_rlc_serdes(adev);
+
+		/* 5 - clear mgcg override */
+		fiji_send_serdes_cmd(adev, BPM_REG_MGCG_OVERRIDE, CLE_BPM_SERDES_CMD);
+
+		/* 6 - Enable CGTS(Tree Shade) MGCG /MGLS */
+		temp = data = RREG32(mmCGTS_SM_CTRL_REG);
+		data &= ~(CGTS_SM_CTRL_REG__SM_MODE_MASK);
+		data |= (0x2 << CGTS_SM_CTRL_REG__SM_MODE__SHIFT);
+		data |= CGTS_SM_CTRL_REG__SM_MODE_ENABLE_MASK;
+		data &= ~CGTS_SM_CTRL_REG__OVERRIDE_MASK;
+		data &= ~CGTS_SM_CTRL_REG__LS_OVERRIDE_MASK;
+		data |= CGTS_SM_CTRL_REG__ON_MONITOR_ADD_EN_MASK;
+		data |= (0x96 << CGTS_SM_CTRL_REG__ON_MONITOR_ADD__SHIFT);
+		if (temp != data)
+			WREG32(mmCGTS_SM_CTRL_REG, data);
+		udelay(50);
+
+		/* 7 - wait for RLC_SERDES_CU_MASTER & RLC_SERDES_NONCU_MASTER idle */
+		gfx_v8_0_wait_for_rlc_serdes(adev);
+	} else {
+		/* 1 - MGCG_OVERRIDE[0] for CP and MGCG_OVERRIDE[1] for RLC */
+		temp = data = RREG32(mmRLC_CGTT_MGCG_OVERRIDE);
+		data |= (RLC_CGTT_MGCG_OVERRIDE__CPF_MASK |
+				RLC_CGTT_MGCG_OVERRIDE__RLC_MASK |
+				RLC_CGTT_MGCG_OVERRIDE__MGCG_MASK |
+				RLC_CGTT_MGCG_OVERRIDE__GRBM_MASK);
+		if (temp != data)
+			WREG32(mmRLC_CGTT_MGCG_OVERRIDE, data);
+
+		/* 2 - disable MGLS in RLC */
+		data = RREG32(mmRLC_MEM_SLP_CNTL);
+		if (data & RLC_MEM_SLP_CNTL__RLC_MEM_LS_EN_MASK) {
+			data &= ~RLC_MEM_SLP_CNTL__RLC_MEM_LS_EN_MASK;
+			WREG32(mmRLC_MEM_SLP_CNTL, data);
+		}
+
+		/* 3 - disable MGLS in CP */
+		data = RREG32(mmCP_MEM_SLP_CNTL);
+		if (data & CP_MEM_SLP_CNTL__CP_MEM_LS_EN_MASK) {
+			data &= ~CP_MEM_SLP_CNTL__CP_MEM_LS_EN_MASK;
+			WREG32(mmCP_MEM_SLP_CNTL, data);
+		}
+
+		/* 4 - Disable CGTS(Tree Shade) MGCG and MGLS */
+		temp = data = RREG32(mmCGTS_SM_CTRL_REG);
+		data |= (CGTS_SM_CTRL_REG__OVERRIDE_MASK |
+				CGTS_SM_CTRL_REG__LS_OVERRIDE_MASK);
+		if (temp != data)
+			WREG32(mmCGTS_SM_CTRL_REG, data);
+
+		/* 5 - wait for RLC_SERDES_CU_MASTER & RLC_SERDES_NONCU_MASTER idle */
+		gfx_v8_0_wait_for_rlc_serdes(adev);
+
+		/* 6 - set mgcg override */
+		fiji_send_serdes_cmd(adev, BPM_REG_MGCG_OVERRIDE, SET_BPM_SERDES_CMD);
+
+		udelay(50);
+
+		/* 7- wait for RLC_SERDES_CU_MASTER & RLC_SERDES_NONCU_MASTER idle */
+		gfx_v8_0_wait_for_rlc_serdes(adev);
+	}
+}
+
+static void fiji_update_coarse_grain_clock_gating(struct amdgpu_device *adev,
+		bool enable)
+{
+	uint32_t temp, temp1, data, data1;
+
+	temp = data = RREG32(mmRLC_CGCG_CGLS_CTRL);
+
+	if (enable) {
+		/* 1 enable cntx_empty_int_enable/cntx_busy_int_enable/
+		 * Cmp_busy/GFX_Idle interrupts
+		 */
+		gfx_v8_0_enable_gui_idle_interrupt(adev, true);
+
+		temp1 = data1 =	RREG32(mmRLC_CGTT_MGCG_OVERRIDE);
+		data1 &= ~RLC_CGTT_MGCG_OVERRIDE__CGCG_MASK;
+		if (temp1 != data1)
+			WREG32(mmRLC_CGTT_MGCG_OVERRIDE, data1);
+
+		/* 2 wait for RLC_SERDES_CU_MASTER & RLC_SERDES_NONCU_MASTER idle */
+		gfx_v8_0_wait_for_rlc_serdes(adev);
+
+		/* 3 - clear cgcg override */
+		fiji_send_serdes_cmd(adev, BPM_REG_CGCG_OVERRIDE, CLE_BPM_SERDES_CMD);
+
+		/* wait for RLC_SERDES_CU_MASTER & RLC_SERDES_NONCU_MASTER idle */
+		gfx_v8_0_wait_for_rlc_serdes(adev);
+
+		/* 4 - write cmd to set CGLS */
+		fiji_send_serdes_cmd(adev, BPM_REG_CGLS_EN, SET_BPM_SERDES_CMD);
+
+		/* 5 - enable cgcg */
+		data |= RLC_CGCG_CGLS_CTRL__CGCG_EN_MASK;
+
+		/* enable cgls*/
+		data |= RLC_CGCG_CGLS_CTRL__CGLS_EN_MASK;
+
+		temp1 = data1 =	RREG32(mmRLC_CGTT_MGCG_OVERRIDE);
+		data1 &= ~RLC_CGTT_MGCG_OVERRIDE__CGLS_MASK;
+
+		if (temp1 != data1)
+			WREG32(mmRLC_CGTT_MGCG_OVERRIDE, data1);
+
+		if (temp != data)
+			WREG32(mmRLC_CGCG_CGLS_CTRL, data);
+	} else {
+		/* disable cntx_empty_int_enable & GFX Idle interrupt */
+		gfx_v8_0_enable_gui_idle_interrupt(adev, false);
+
+		/* TEST CGCG */
+		temp1 = data1 =	RREG32(mmRLC_CGTT_MGCG_OVERRIDE);
+		data1 |= (RLC_CGTT_MGCG_OVERRIDE__CGCG_MASK |
+				RLC_CGTT_MGCG_OVERRIDE__CGLS_MASK);
+		if (temp1 != data1)
+			WREG32(mmRLC_CGTT_MGCG_OVERRIDE, data1);
+
+		/* read gfx register to wake up cgcg */
+		RREG32(mmCB_CGTT_SCLK_CTRL);
+		RREG32(mmCB_CGTT_SCLK_CTRL);
+		RREG32(mmCB_CGTT_SCLK_CTRL);
+		RREG32(mmCB_CGTT_SCLK_CTRL);
+
+		/* wait for RLC_SERDES_CU_MASTER & RLC_SERDES_NONCU_MASTER idle */
+		gfx_v8_0_wait_for_rlc_serdes(adev);
+
+		/* write cmd to Set CGCG Overrride */
+		fiji_send_serdes_cmd(adev, BPM_REG_CGCG_OVERRIDE, SET_BPM_SERDES_CMD);
+
+		/* wait for RLC_SERDES_CU_MASTER & RLC_SERDES_NONCU_MASTER idle */
+		gfx_v8_0_wait_for_rlc_serdes(adev);
+
+		/* write cmd to Clear CGLS */
+		fiji_send_serdes_cmd(adev, BPM_REG_CGLS_EN, CLE_BPM_SERDES_CMD);
+
+		/* disable cgcg, cgls should be disabled too. */
+		data &= ~(RLC_CGCG_CGLS_CTRL__CGCG_EN_MASK |
+				RLC_CGCG_CGLS_CTRL__CGLS_EN_MASK);
+		if (temp != data)
+			WREG32(mmRLC_CGCG_CGLS_CTRL, data);
+	}
+}
+static int fiji_update_gfx_clock_gating(struct amdgpu_device *adev,
+		bool enable)
+{
+	if (enable) {
+		/* CGCG/CGLS should be enabled after MGCG/MGLS/TS(CG/LS)
+		 * ===  MGCG + MGLS + TS(CG/LS) ===
+		 */
+		fiji_update_medium_grain_clock_gating(adev, enable);
+		fiji_update_coarse_grain_clock_gating(adev, enable);
+	} else {
+		/* CGCG/CGLS should be disabled before MGCG/MGLS/TS(CG/LS)
+		 * ===  CGCG + CGLS ===
+		 */
+		fiji_update_coarse_grain_clock_gating(adev, enable);
+		fiji_update_medium_grain_clock_gating(adev, enable);
+	}
+	return 0;
+}
+
 static int gfx_v8_0_set_clockgating_state(void *handle,
 					  enum amd_clockgating_state state)
 {
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	switch (adev->asic_type) {
+	case CHIP_FIJI:
+		fiji_update_gfx_clock_gating(adev,
+				state == AMD_CG_STATE_GATE ? true : false);
+		break;
+	default:
+		break;
+	}
 	return 0;
 }
 
@@ -4627,7 +4718,7 @@ static void gfx_v8_0_ring_emit_fence_gfx(struct amdgpu_ring *ring, u64 addr,
 				 EVENT_TYPE(CACHE_FLUSH_AND_INV_TS_EVENT) |
 				 EVENT_INDEX(5)));
 	amdgpu_ring_write(ring, addr & 0xfffffffc);
-	amdgpu_ring_write(ring, (upper_32_bits(addr) & 0xffff) | 
+	amdgpu_ring_write(ring, (upper_32_bits(addr) & 0xffff) |
 			  DATA_SEL(write64bit ? 2 : 1) | INT_SEL(int_sel ? 2 : 0));
 	amdgpu_ring_write(ring, lower_32_bits(seq));
 	amdgpu_ring_write(ring, upper_32_bits(seq));
@@ -4995,7 +5086,7 @@ static int gfx_v8_0_priv_inst_irq(struct amdgpu_device *adev,
 
 const struct amd_ip_funcs gfx_v8_0_ip_funcs = {
 	.early_init = gfx_v8_0_early_init,
-	.late_init = NULL,
+	.late_init = gfx_v8_0_late_init,
 	.sw_init = gfx_v8_0_sw_init,
 	.sw_fini = gfx_v8_0_sw_fini,
 	.hw_init = gfx_v8_0_hw_init,

+ 4 - 1
drivers/gpu/drm/amd/amdgpu/gmc_v7_0.c

@@ -370,6 +370,10 @@ static int gmc_v7_0_mc_init(struct amdgpu_device *adev)
 	adev->mc.real_vram_size = RREG32(mmCONFIG_MEMSIZE) * 1024ULL * 1024ULL;
 	adev->mc.visible_vram_size = adev->mc.aper_size;
 
+	/* In case the PCI BAR is larger than the actual amount of vram */
+	if (adev->mc.visible_vram_size > adev->mc.real_vram_size)
+		adev->mc.visible_vram_size = adev->mc.real_vram_size;
+
 	/* unless the user had overridden it, set the gart
 	 * size equal to the 1024 or vram, whichever is larger.
 	 */
@@ -1012,7 +1016,6 @@ static int gmc_v7_0_suspend(void *handle)
 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
 
 	if (adev->vm_manager.enabled) {
-		amdgpu_vm_manager_fini(adev);
 		gmc_v7_0_vm_fini(adev);
 		adev->vm_manager.enabled = false;
 	}

+ 176 - 1
drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c

@@ -476,6 +476,10 @@ static int gmc_v8_0_mc_init(struct amdgpu_device *adev)
 	adev->mc.real_vram_size = RREG32(mmCONFIG_MEMSIZE) * 1024ULL * 1024ULL;
 	adev->mc.visible_vram_size = adev->mc.aper_size;
 
+	/* In case the PCI BAR is larger than the actual amount of vram */
+	if (adev->mc.visible_vram_size > adev->mc.real_vram_size)
+		adev->mc.visible_vram_size = adev->mc.real_vram_size;
+
 	/* unless the user had overridden it, set the gart
 	 * size equal to the 1024 or vram, whichever is larger.
 	 */
@@ -1033,7 +1037,6 @@ static int gmc_v8_0_suspend(void *handle)
 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
 
 	if (adev->vm_manager.enabled) {
-		amdgpu_vm_manager_fini(adev);
 		gmc_v8_0_vm_fini(adev);
 		adev->vm_manager.enabled = false;
 	}
@@ -1324,9 +1327,181 @@ static int gmc_v8_0_process_interrupt(struct amdgpu_device *adev,
 	return 0;
 }
 
+static void fiji_update_mc_medium_grain_clock_gating(struct amdgpu_device *adev,
+		bool enable)
+{
+	uint32_t data;
+
+	if (enable) {
+		data = RREG32(mmMC_HUB_MISC_HUB_CG);
+		data |= MC_HUB_MISC_HUB_CG__ENABLE_MASK;
+		WREG32(mmMC_HUB_MISC_HUB_CG, data);
+
+		data = RREG32(mmMC_HUB_MISC_SIP_CG);
+		data |= MC_HUB_MISC_SIP_CG__ENABLE_MASK;
+		WREG32(mmMC_HUB_MISC_SIP_CG, data);
+
+		data = RREG32(mmMC_HUB_MISC_VM_CG);
+		data |= MC_HUB_MISC_VM_CG__ENABLE_MASK;
+		WREG32(mmMC_HUB_MISC_VM_CG, data);
+
+		data = RREG32(mmMC_XPB_CLK_GAT);
+		data |= MC_XPB_CLK_GAT__ENABLE_MASK;
+		WREG32(mmMC_XPB_CLK_GAT, data);
+
+		data = RREG32(mmATC_MISC_CG);
+		data |= ATC_MISC_CG__ENABLE_MASK;
+		WREG32(mmATC_MISC_CG, data);
+
+		data = RREG32(mmMC_CITF_MISC_WR_CG);
+		data |= MC_CITF_MISC_WR_CG__ENABLE_MASK;
+		WREG32(mmMC_CITF_MISC_WR_CG, data);
+
+		data = RREG32(mmMC_CITF_MISC_RD_CG);
+		data |= MC_CITF_MISC_RD_CG__ENABLE_MASK;
+		WREG32(mmMC_CITF_MISC_RD_CG, data);
+
+		data = RREG32(mmMC_CITF_MISC_VM_CG);
+		data |= MC_CITF_MISC_VM_CG__ENABLE_MASK;
+		WREG32(mmMC_CITF_MISC_VM_CG, data);
+
+		data = RREG32(mmVM_L2_CG);
+		data |= VM_L2_CG__ENABLE_MASK;
+		WREG32(mmVM_L2_CG, data);
+	} else {
+		data = RREG32(mmMC_HUB_MISC_HUB_CG);
+		data &= ~MC_HUB_MISC_HUB_CG__ENABLE_MASK;
+		WREG32(mmMC_HUB_MISC_HUB_CG, data);
+
+		data = RREG32(mmMC_HUB_MISC_SIP_CG);
+		data &= ~MC_HUB_MISC_SIP_CG__ENABLE_MASK;
+		WREG32(mmMC_HUB_MISC_SIP_CG, data);
+
+		data = RREG32(mmMC_HUB_MISC_VM_CG);
+		data &= ~MC_HUB_MISC_VM_CG__ENABLE_MASK;
+		WREG32(mmMC_HUB_MISC_VM_CG, data);
+
+		data = RREG32(mmMC_XPB_CLK_GAT);
+		data &= ~MC_XPB_CLK_GAT__ENABLE_MASK;
+		WREG32(mmMC_XPB_CLK_GAT, data);
+
+		data = RREG32(mmATC_MISC_CG);
+		data &= ~ATC_MISC_CG__ENABLE_MASK;
+		WREG32(mmATC_MISC_CG, data);
+
+		data = RREG32(mmMC_CITF_MISC_WR_CG);
+		data &= ~MC_CITF_MISC_WR_CG__ENABLE_MASK;
+		WREG32(mmMC_CITF_MISC_WR_CG, data);
+
+		data = RREG32(mmMC_CITF_MISC_RD_CG);
+		data &= ~MC_CITF_MISC_RD_CG__ENABLE_MASK;
+		WREG32(mmMC_CITF_MISC_RD_CG, data);
+
+		data = RREG32(mmMC_CITF_MISC_VM_CG);
+		data &= ~MC_CITF_MISC_VM_CG__ENABLE_MASK;
+		WREG32(mmMC_CITF_MISC_VM_CG, data);
+
+		data = RREG32(mmVM_L2_CG);
+		data &= ~VM_L2_CG__ENABLE_MASK;
+		WREG32(mmVM_L2_CG, data);
+	}
+}
+
+static void fiji_update_mc_light_sleep(struct amdgpu_device *adev,
+		bool enable)
+{
+	uint32_t data;
+
+	if (enable) {
+		data = RREG32(mmMC_HUB_MISC_HUB_CG);
+		data |= MC_HUB_MISC_HUB_CG__MEM_LS_ENABLE_MASK;
+		WREG32(mmMC_HUB_MISC_HUB_CG, data);
+
+		data = RREG32(mmMC_HUB_MISC_SIP_CG);
+		data |= MC_HUB_MISC_SIP_CG__MEM_LS_ENABLE_MASK;
+		WREG32(mmMC_HUB_MISC_SIP_CG, data);
+
+		data = RREG32(mmMC_HUB_MISC_VM_CG);
+		data |= MC_HUB_MISC_VM_CG__MEM_LS_ENABLE_MASK;
+		WREG32(mmMC_HUB_MISC_VM_CG, data);
+
+		data = RREG32(mmMC_XPB_CLK_GAT);
+		data |= MC_XPB_CLK_GAT__MEM_LS_ENABLE_MASK;
+		WREG32(mmMC_XPB_CLK_GAT, data);
+
+		data = RREG32(mmATC_MISC_CG);
+		data |= ATC_MISC_CG__MEM_LS_ENABLE_MASK;
+		WREG32(mmATC_MISC_CG, data);
+
+		data = RREG32(mmMC_CITF_MISC_WR_CG);
+		data |= MC_CITF_MISC_WR_CG__MEM_LS_ENABLE_MASK;
+		WREG32(mmMC_CITF_MISC_WR_CG, data);
+
+		data = RREG32(mmMC_CITF_MISC_RD_CG);
+		data |= MC_CITF_MISC_RD_CG__MEM_LS_ENABLE_MASK;
+		WREG32(mmMC_CITF_MISC_RD_CG, data);
+
+		data = RREG32(mmMC_CITF_MISC_VM_CG);
+		data |= MC_CITF_MISC_VM_CG__MEM_LS_ENABLE_MASK;
+		WREG32(mmMC_CITF_MISC_VM_CG, data);
+
+		data = RREG32(mmVM_L2_CG);
+		data |= VM_L2_CG__MEM_LS_ENABLE_MASK;
+		WREG32(mmVM_L2_CG, data);
+	} else {
+		data = RREG32(mmMC_HUB_MISC_HUB_CG);
+		data &= ~MC_HUB_MISC_HUB_CG__MEM_LS_ENABLE_MASK;
+		WREG32(mmMC_HUB_MISC_HUB_CG, data);
+
+		data = RREG32(mmMC_HUB_MISC_SIP_CG);
+		data &= ~MC_HUB_MISC_SIP_CG__MEM_LS_ENABLE_MASK;
+		WREG32(mmMC_HUB_MISC_SIP_CG, data);
+
+		data = RREG32(mmMC_HUB_MISC_VM_CG);
+		data &= ~MC_HUB_MISC_VM_CG__MEM_LS_ENABLE_MASK;
+		WREG32(mmMC_HUB_MISC_VM_CG, data);
+
+		data = RREG32(mmMC_XPB_CLK_GAT);
+		data &= ~MC_XPB_CLK_GAT__MEM_LS_ENABLE_MASK;
+		WREG32(mmMC_XPB_CLK_GAT, data);
+
+		data = RREG32(mmATC_MISC_CG);
+		data &= ~ATC_MISC_CG__MEM_LS_ENABLE_MASK;
+		WREG32(mmATC_MISC_CG, data);
+
+		data = RREG32(mmMC_CITF_MISC_WR_CG);
+		data &= ~MC_CITF_MISC_WR_CG__MEM_LS_ENABLE_MASK;
+		WREG32(mmMC_CITF_MISC_WR_CG, data);
+
+		data = RREG32(mmMC_CITF_MISC_RD_CG);
+		data &= ~MC_CITF_MISC_RD_CG__MEM_LS_ENABLE_MASK;
+		WREG32(mmMC_CITF_MISC_RD_CG, data);
+
+		data = RREG32(mmMC_CITF_MISC_VM_CG);
+		data &= ~MC_CITF_MISC_VM_CG__MEM_LS_ENABLE_MASK;
+		WREG32(mmMC_CITF_MISC_VM_CG, data);
+
+		data = RREG32(mmVM_L2_CG);
+		data &= ~VM_L2_CG__MEM_LS_ENABLE_MASK;
+		WREG32(mmVM_L2_CG, data);
+	}
+}
+
 static int gmc_v8_0_set_clockgating_state(void *handle,
 					  enum amd_clockgating_state state)
 {
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	switch (adev->asic_type) {
+	case CHIP_FIJI:
+		fiji_update_mc_medium_grain_clock_gating(adev,
+				state == AMD_CG_STATE_GATE ? true : false);
+		fiji_update_mc_light_sleep(adev,
+				state == AMD_CG_STATE_GATE ? true : false);
+		break;
+	default:
+		break;
+	}
 	return 0;
 }
 

+ 7 - 0
drivers/gpu/drm/amd/amdgpu/iceland_ih.c

@@ -253,8 +253,14 @@ static void iceland_ih_set_rptr(struct amdgpu_device *adev)
 static int iceland_ih_early_init(void *handle)
 {
 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+	int ret;
+
+	ret = amdgpu_irq_add_domain(adev);
+	if (ret)
+		return ret;
 
 	iceland_ih_set_interrupt_funcs(adev);
+
 	return 0;
 }
 
@@ -278,6 +284,7 @@ static int iceland_ih_sw_fini(void *handle)
 
 	amdgpu_irq_fini(adev);
 	amdgpu_ih_ring_fini(adev);
+	amdgpu_irq_remove_domain(adev);
 
 	return 0;
 }

+ 118 - 11
drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c

@@ -727,18 +727,20 @@ static int sdma_v3_0_start(struct amdgpu_device *adev)
 {
 	int r, i;
 
-	if (!adev->firmware.smu_load) {
-		r = sdma_v3_0_load_microcode(adev);
-		if (r)
-			return r;
-	} else {
-		for (i = 0; i < adev->sdma.num_instances; i++) {
-			r = adev->smu.smumgr_funcs->check_fw_load_finish(adev,
-									 (i == 0) ?
-									 AMDGPU_UCODE_ID_SDMA0 :
-									 AMDGPU_UCODE_ID_SDMA1);
+	if (!adev->pp_enabled) {
+		if (!adev->firmware.smu_load) {
+			r = sdma_v3_0_load_microcode(adev);
 			if (r)
-				return -EINVAL;
+				return r;
+		} else {
+			for (i = 0; i < adev->sdma.num_instances; i++) {
+				r = adev->smu.smumgr_funcs->check_fw_load_finish(adev,
+										 (i == 0) ?
+										 AMDGPU_UCODE_ID_SDMA0 :
+										 AMDGPU_UCODE_ID_SDMA1);
+				if (r)
+					return -EINVAL;
+			}
 		}
 	}
 
@@ -1427,9 +1429,114 @@ static int sdma_v3_0_process_illegal_inst_irq(struct amdgpu_device *adev,
 	return 0;
 }
 
+static void fiji_update_sdma_medium_grain_clock_gating(
+		struct amdgpu_device *adev,
+		bool enable)
+{
+	uint32_t temp, data;
+
+	if (enable) {
+		temp = data = RREG32(mmSDMA0_CLK_CTRL);
+		data &= ~(SDMA0_CLK_CTRL__SOFT_OVERRIDE7_MASK |
+				SDMA0_CLK_CTRL__SOFT_OVERRIDE6_MASK |
+				SDMA0_CLK_CTRL__SOFT_OVERRIDE5_MASK |
+				SDMA0_CLK_CTRL__SOFT_OVERRIDE4_MASK |
+				SDMA0_CLK_CTRL__SOFT_OVERRIDE3_MASK |
+				SDMA0_CLK_CTRL__SOFT_OVERRIDE2_MASK |
+				SDMA0_CLK_CTRL__SOFT_OVERRIDE1_MASK |
+				SDMA0_CLK_CTRL__SOFT_OVERRIDE0_MASK);
+		if (data != temp)
+			WREG32(mmSDMA0_CLK_CTRL, data);
+
+		temp = data = RREG32(mmSDMA1_CLK_CTRL);
+		data &= ~(SDMA1_CLK_CTRL__SOFT_OVERRIDE7_MASK |
+				SDMA1_CLK_CTRL__SOFT_OVERRIDE6_MASK |
+				SDMA1_CLK_CTRL__SOFT_OVERRIDE5_MASK |
+				SDMA1_CLK_CTRL__SOFT_OVERRIDE4_MASK |
+				SDMA1_CLK_CTRL__SOFT_OVERRIDE3_MASK |
+				SDMA1_CLK_CTRL__SOFT_OVERRIDE2_MASK |
+				SDMA1_CLK_CTRL__SOFT_OVERRIDE1_MASK |
+				SDMA1_CLK_CTRL__SOFT_OVERRIDE0_MASK);
+
+		if (data != temp)
+			WREG32(mmSDMA1_CLK_CTRL, data);
+	} else {
+		temp = data = RREG32(mmSDMA0_CLK_CTRL);
+		data |= SDMA0_CLK_CTRL__SOFT_OVERRIDE7_MASK |
+				SDMA0_CLK_CTRL__SOFT_OVERRIDE6_MASK |
+				SDMA0_CLK_CTRL__SOFT_OVERRIDE5_MASK |
+				SDMA0_CLK_CTRL__SOFT_OVERRIDE4_MASK |
+				SDMA0_CLK_CTRL__SOFT_OVERRIDE3_MASK |
+				SDMA0_CLK_CTRL__SOFT_OVERRIDE2_MASK |
+				SDMA0_CLK_CTRL__SOFT_OVERRIDE1_MASK |
+				SDMA0_CLK_CTRL__SOFT_OVERRIDE0_MASK;
+
+		if (data != temp)
+			WREG32(mmSDMA0_CLK_CTRL, data);
+
+		temp = data = RREG32(mmSDMA1_CLK_CTRL);
+		data |= SDMA1_CLK_CTRL__SOFT_OVERRIDE7_MASK |
+				SDMA1_CLK_CTRL__SOFT_OVERRIDE6_MASK |
+				SDMA1_CLK_CTRL__SOFT_OVERRIDE5_MASK |
+				SDMA1_CLK_CTRL__SOFT_OVERRIDE4_MASK |
+				SDMA1_CLK_CTRL__SOFT_OVERRIDE3_MASK |
+				SDMA1_CLK_CTRL__SOFT_OVERRIDE2_MASK |
+				SDMA1_CLK_CTRL__SOFT_OVERRIDE1_MASK |
+				SDMA1_CLK_CTRL__SOFT_OVERRIDE0_MASK;
+
+		if (data != temp)
+			WREG32(mmSDMA1_CLK_CTRL, data);
+	}
+}
+
+static void fiji_update_sdma_medium_grain_light_sleep(
+		struct amdgpu_device *adev,
+		bool enable)
+{
+	uint32_t temp, data;
+
+	if (enable) {
+		temp = data = RREG32(mmSDMA0_POWER_CNTL);
+		data |= SDMA0_POWER_CNTL__MEM_POWER_OVERRIDE_MASK;
+
+		if (temp != data)
+			WREG32(mmSDMA0_POWER_CNTL, data);
+
+		temp = data = RREG32(mmSDMA1_POWER_CNTL);
+		data |= SDMA1_POWER_CNTL__MEM_POWER_OVERRIDE_MASK;
+
+		if (temp != data)
+			WREG32(mmSDMA1_POWER_CNTL, data);
+	} else {
+		temp = data = RREG32(mmSDMA0_POWER_CNTL);
+		data &= ~SDMA0_POWER_CNTL__MEM_POWER_OVERRIDE_MASK;
+
+		if (temp != data)
+			WREG32(mmSDMA0_POWER_CNTL, data);
+
+		temp = data = RREG32(mmSDMA1_POWER_CNTL);
+		data &= ~SDMA1_POWER_CNTL__MEM_POWER_OVERRIDE_MASK;
+
+		if (temp != data)
+			WREG32(mmSDMA1_POWER_CNTL, data);
+	}
+}
+
 static int sdma_v3_0_set_clockgating_state(void *handle,
 					  enum amd_clockgating_state state)
 {
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	switch (adev->asic_type) {
+	case CHIP_FIJI:
+		fiji_update_sdma_medium_grain_clock_gating(adev,
+				state == AMD_CG_STATE_GATE ? true : false);
+		fiji_update_sdma_medium_grain_light_sleep(adev,
+				state == AMD_CG_STATE_GATE ? true : false);
+		break;
+	default:
+		break;
+	}
 	return 0;
 }
 

+ 1 - 1
drivers/gpu/drm/amd/amdgpu/tonga_dpm.c

@@ -24,7 +24,7 @@
 #include <linux/firmware.h>
 #include "drmP.h"
 #include "amdgpu.h"
-#include "tonga_smumgr.h"
+#include "tonga_smum.h"
 
 MODULE_FIRMWARE("amdgpu/tonga_smc.bin");
 

+ 7 - 0
drivers/gpu/drm/amd/amdgpu/tonga_ih.c

@@ -273,8 +273,14 @@ static void tonga_ih_set_rptr(struct amdgpu_device *adev)
 static int tonga_ih_early_init(void *handle)
 {
 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+	int ret;
+
+	ret = amdgpu_irq_add_domain(adev);
+	if (ret)
+		return ret;
 
 	tonga_ih_set_interrupt_funcs(adev);
+
 	return 0;
 }
 
@@ -301,6 +307,7 @@ static int tonga_ih_sw_fini(void *handle)
 
 	amdgpu_irq_fini(adev);
 	amdgpu_ih_ring_fini(adev);
+	amdgpu_irq_remove_domain(adev);
 
 	return 0;
 }

+ 0 - 198
drivers/gpu/drm/amd/amdgpu/tonga_ppsmc.h

@@ -1,198 +0,0 @@
-/*
- * Copyright 2014 Advanced Micro Devices, Inc.
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice shall be included in
- * all copies or substantial portions of the Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
- * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
- * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
- * OTHER DEALINGS IN THE SOFTWARE.
- *
- */
-
-#ifndef TONGA_PP_SMC_H
-#define TONGA_PP_SMC_H
-
-#pragma pack(push, 1)
-
-#define PPSMC_SWSTATE_FLAG_DC                           0x01
-#define PPSMC_SWSTATE_FLAG_UVD                          0x02
-#define PPSMC_SWSTATE_FLAG_VCE                          0x04
-#define PPSMC_SWSTATE_FLAG_PCIE_X1                      0x08
-
-#define PPSMC_THERMAL_PROTECT_TYPE_INTERNAL             0x00
-#define PPSMC_THERMAL_PROTECT_TYPE_EXTERNAL             0x01
-#define PPSMC_THERMAL_PROTECT_TYPE_NONE                 0xff 
-
-#define PPSMC_SYSTEMFLAG_GPIO_DC                        0x01 
-#define PPSMC_SYSTEMFLAG_STEPVDDC                       0x02 
-#define PPSMC_SYSTEMFLAG_GDDR5                          0x04 
-
-#define PPSMC_SYSTEMFLAG_DISABLE_BABYSTEP               0x08 
-
-#define PPSMC_SYSTEMFLAG_REGULATOR_HOT                  0x10 
-#define PPSMC_SYSTEMFLAG_REGULATOR_HOT_ANALOG           0x20 
-#define PPSMC_SYSTEMFLAG_12CHANNEL                      0x40
-
-#define PPSMC_EXTRAFLAGS_AC2DC_ACTION_MASK              0x07
-#define PPSMC_EXTRAFLAGS_AC2DC_DONT_WAIT_FOR_VBLANK     0x08
-
-#define PPSMC_EXTRAFLAGS_AC2DC_ACTION_GOTODPMLOWSTATE   0x00
-#define PPSMC_EXTRAFLAGS_AC2DC_ACTION_GOTOINITIALSTATE  0x01
-
-#define PPSMC_EXTRAFLAGS_AC2DC_GPIO5_POLARITY_HIGH      0x10
-#define PPSMC_EXTRAFLAGS_DRIVER_TO_GPIO17               0x20
-#define PPSMC_EXTRAFLAGS_PCC_TO_GPIO17                  0x40
-
-#define PPSMC_DPM2FLAGS_TDPCLMP                         0x01 
-#define PPSMC_DPM2FLAGS_PWRSHFT                         0x02 
-#define PPSMC_DPM2FLAGS_OCP                             0x04 
-
-#define PPSMC_DISPLAY_WATERMARK_LOW                     0
-#define PPSMC_DISPLAY_WATERMARK_HIGH                    1
-
-#define PPSMC_STATEFLAG_AUTO_PULSE_SKIP    		0x01
-#define PPSMC_STATEFLAG_POWERBOOST         		0x02
-#define PPSMC_STATEFLAG_PSKIP_ON_TDP_FAULT 		0x04
-#define PPSMC_STATEFLAG_POWERSHIFT         		0x08
-#define PPSMC_STATEFLAG_SLOW_READ_MARGIN   		0x10
-#define PPSMC_STATEFLAG_DEEPSLEEP_THROTTLE 		0x20
-#define PPSMC_STATEFLAG_DEEPSLEEP_BYPASS   		0x40
-
-#define FDO_MODE_HARDWARE 0
-#define FDO_MODE_PIECE_WISE_LINEAR 1
-
-enum FAN_CONTROL {
-	FAN_CONTROL_FUZZY,
-	FAN_CONTROL_TABLE
-};
-
-#define PPSMC_Result_OK             			((uint16_t)0x01)
-#define PPSMC_Result_NoMore         			((uint16_t)0x02)
-#define PPSMC_Result_NotNow         			((uint16_t)0x03)
-#define PPSMC_Result_Failed         			((uint16_t)0xFF) 
-#define PPSMC_Result_UnknownCmd     			((uint16_t)0xFE) 
-#define PPSMC_Result_UnknownVT      			((uint16_t)0xFD) 
-
-typedef uint16_t PPSMC_Result;
-
-#define PPSMC_isERROR(x) ((uint16_t)0x80 & (x))
-
-#define PPSMC_MSG_Halt                      		((uint16_t)0x10)
-#define PPSMC_MSG_Resume                    		((uint16_t)0x11)
-#define PPSMC_MSG_EnableDPMLevel            		((uint16_t)0x12)
-#define PPSMC_MSG_ZeroLevelsDisabled        		((uint16_t)0x13)
-#define PPSMC_MSG_OneLevelsDisabled         		((uint16_t)0x14)
-#define PPSMC_MSG_TwoLevelsDisabled         		((uint16_t)0x15)
-#define PPSMC_MSG_EnableThermalInterrupt    		((uint16_t)0x16)
-#define PPSMC_MSG_RunningOnAC               		((uint16_t)0x17)
-#define PPSMC_MSG_LevelUp                   		((uint16_t)0x18)
-#define PPSMC_MSG_LevelDown                 		((uint16_t)0x19)
-#define PPSMC_MSG_ResetDPMCounters          		((uint16_t)0x1a)
-#define PPSMC_MSG_SwitchToSwState           		((uint16_t)0x20)
-#define PPSMC_MSG_SwitchToSwStateLast       		((uint16_t)0x3f)
-#define PPSMC_MSG_SwitchToInitialState      		((uint16_t)0x40)
-#define PPSMC_MSG_NoForcedLevel             		((uint16_t)0x41)
-#define PPSMC_MSG_ForceHigh                 		((uint16_t)0x42)
-#define PPSMC_MSG_ForceMediumOrHigh         		((uint16_t)0x43)
-#define PPSMC_MSG_SwitchToMinimumPower      		((uint16_t)0x51)
-#define PPSMC_MSG_ResumeFromMinimumPower    		((uint16_t)0x52)
-#define PPSMC_MSG_EnableCac                 		((uint16_t)0x53)
-#define PPSMC_MSG_DisableCac                		((uint16_t)0x54)
-#define PPSMC_DPMStateHistoryStart          		((uint16_t)0x55)
-#define PPSMC_DPMStateHistoryStop           		((uint16_t)0x56)
-#define PPSMC_CACHistoryStart               		((uint16_t)0x57)
-#define PPSMC_CACHistoryStop                		((uint16_t)0x58)
-#define PPSMC_TDPClampingActive             		((uint16_t)0x59)
-#define PPSMC_TDPClampingInactive           		((uint16_t)0x5A)
-#define PPSMC_StartFanControl               		((uint16_t)0x5B)
-#define PPSMC_StopFanControl                		((uint16_t)0x5C)
-#define PPSMC_NoDisplay                     		((uint16_t)0x5D)
-#define PPSMC_HasDisplay                    		((uint16_t)0x5E)
-#define PPSMC_MSG_UVDPowerOFF               		((uint16_t)0x60)
-#define PPSMC_MSG_UVDPowerON                		((uint16_t)0x61)
-#define PPSMC_MSG_EnableULV                 		((uint16_t)0x62)
-#define PPSMC_MSG_DisableULV                		((uint16_t)0x63)
-#define PPSMC_MSG_EnterULV                  		((uint16_t)0x64)
-#define PPSMC_MSG_ExitULV                   		((uint16_t)0x65)
-#define PPSMC_PowerShiftActive              		((uint16_t)0x6A)
-#define PPSMC_PowerShiftInactive            		((uint16_t)0x6B)
-#define PPSMC_OCPActive                     		((uint16_t)0x6C)
-#define PPSMC_OCPInactive                   		((uint16_t)0x6D)
-#define PPSMC_CACLongTermAvgEnable          		((uint16_t)0x6E)
-#define PPSMC_CACLongTermAvgDisable         		((uint16_t)0x6F)
-#define PPSMC_MSG_InferredStateSweep_Start  		((uint16_t)0x70)
-#define PPSMC_MSG_InferredStateSweep_Stop   		((uint16_t)0x71)
-#define PPSMC_MSG_SwitchToLowestInfState    		((uint16_t)0x72)
-#define PPSMC_MSG_SwitchToNonInfState       		((uint16_t)0x73)
-#define PPSMC_MSG_AllStateSweep_Start       		((uint16_t)0x74)
-#define PPSMC_MSG_AllStateSweep_Stop        		((uint16_t)0x75)
-#define PPSMC_MSG_SwitchNextLowerInfState   		((uint16_t)0x76)
-#define PPSMC_MSG_SwitchNextHigherInfState  		((uint16_t)0x77)
-#define PPSMC_MSG_MclkRetrainingTest        		((uint16_t)0x78)
-#define PPSMC_MSG_ForceTDPClamping          		((uint16_t)0x79)
-#define PPSMC_MSG_CollectCAC_PowerCorreln   		((uint16_t)0x7A)
-#define PPSMC_MSG_CollectCAC_WeightCalib    		((uint16_t)0x7B)
-#define PPSMC_MSG_CollectCAC_SQonly         		((uint16_t)0x7C)
-#define PPSMC_MSG_CollectCAC_TemperaturePwr 		((uint16_t)0x7D)
-#define PPSMC_MSG_ExtremitiesTest_Start     		((uint16_t)0x7E)
-#define PPSMC_MSG_ExtremitiesTest_Stop      		((uint16_t)0x7F)
-#define PPSMC_FlushDataCache                		((uint16_t)0x80)
-#define PPSMC_FlushInstrCache               		((uint16_t)0x81)
-#define PPSMC_MSG_SetEnabledLevels          		((uint16_t)0x82)
-#define PPSMC_MSG_SetForcedLevels           		((uint16_t)0x83)
-#define PPSMC_MSG_ResetToDefaults           		((uint16_t)0x84)
-#define PPSMC_MSG_SetForcedLevelsAndJump    		((uint16_t)0x85)
-#define PPSMC_MSG_SetCACHistoryMode         		((uint16_t)0x86)
-#define PPSMC_MSG_EnableDTE                 		((uint16_t)0x87)
-#define PPSMC_MSG_DisableDTE                		((uint16_t)0x88)
-#define PPSMC_MSG_SmcSpaceSetAddress        		((uint16_t)0x89)
-#define PPSMC_MSG_SmcSpaceWriteDWordInc     		((uint16_t)0x8A)
-#define PPSMC_MSG_SmcSpaceWriteWordInc      		((uint16_t)0x8B)
-#define PPSMC_MSG_SmcSpaceWriteByteInc      		((uint16_t)0x8C)
-#define PPSMC_MSG_ChangeNearTDPLimit        		((uint16_t)0x90)
-#define PPSMC_MSG_ChangeSafePowerLimit      		((uint16_t)0x91)
-#define PPSMC_MSG_DPMStateSweepStart        		((uint16_t)0x92)
-#define PPSMC_MSG_DPMStateSweepStop         		((uint16_t)0x93)
-#define PPSMC_MSG_OVRDDisableSCLKDS         		((uint16_t)0x94)
-#define PPSMC_MSG_CancelDisableOVRDSCLKDS   		((uint16_t)0x95)
-#define PPSMC_MSG_ThrottleOVRDSCLKDS        		((uint16_t)0x96)
-#define PPSMC_MSG_CancelThrottleOVRDSCLKDS  		((uint16_t)0x97)
-#define PPSMC_MSG_GPIO17                    		((uint16_t)0x98)
-#define PPSMC_MSG_API_SetSvi2Volt_Vddc      		((uint16_t)0x99)
-#define PPSMC_MSG_API_SetSvi2Volt_Vddci     		((uint16_t)0x9A)
-#define PPSMC_MSG_API_SetSvi2Volt_Mvdd      		((uint16_t)0x9B)
-#define PPSMC_MSG_API_GetSvi2Volt_Vddc      		((uint16_t)0x9C)
-#define PPSMC_MSG_API_GetSvi2Volt_Vddci     		((uint16_t)0x9D)
-#define PPSMC_MSG_API_GetSvi2Volt_Mvdd      		((uint16_t)0x9E)
-
-#define PPSMC_MSG_BREAK                     		((uint16_t)0xF8)
-
-#define PPSMC_MSG_Test                      		((uint16_t)0x100)
-#define PPSMC_MSG_DRV_DRAM_ADDR_HI            		((uint16_t)0x250)
-#define PPSMC_MSG_DRV_DRAM_ADDR_LO            		((uint16_t)0x251)
-#define PPSMC_MSG_SMU_DRAM_ADDR_HI            		((uint16_t)0x252)
-#define PPSMC_MSG_SMU_DRAM_ADDR_LO            		((uint16_t)0x253)
-#define PPSMC_MSG_LoadUcodes                  		((uint16_t)0x254)
-
-typedef uint16_t PPSMC_Msg;
-
-#define PPSMC_EVENT_STATUS_THERMAL          		0x00000001
-#define PPSMC_EVENT_STATUS_REGULATORHOT     		0x00000002
-#define PPSMC_EVENT_STATUS_DC               		0x00000004
-#define PPSMC_EVENT_STATUS_GPIO17           		0x00000008
-
-#pragma pack(pop)
-
-#endif

+ 1 - 1
drivers/gpu/drm/amd/amdgpu/tonga_smc.c

@@ -25,7 +25,7 @@
 #include "drmP.h"
 #include "amdgpu.h"
 #include "tonga_ppsmc.h"
-#include "tonga_smumgr.h"
+#include "tonga_smum.h"
 #include "smu_ucode_xfer_vi.h"
 #include "amdgpu_ucode.h"
 

+ 0 - 0
drivers/gpu/drm/amd/amdgpu/tonga_smumgr.h → drivers/gpu/drm/amd/amdgpu/tonga_smum.h


+ 259 - 2
drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c

@@ -279,6 +279,234 @@ static void uvd_v6_0_mc_resume(struct amdgpu_device *adev)
 	WREG32(mmUVD_VCPU_CACHE_SIZE2, size);
 }
 
+static void cz_set_uvd_clock_gating_branches(struct amdgpu_device *adev,
+		bool enable)
+{
+	u32 data, data1;
+
+	data = RREG32(mmUVD_CGC_GATE);
+	data1 = RREG32(mmUVD_SUVD_CGC_GATE);
+	if (enable) {
+		data |= UVD_CGC_GATE__SYS_MASK |
+				UVD_CGC_GATE__UDEC_MASK |
+				UVD_CGC_GATE__MPEG2_MASK |
+				UVD_CGC_GATE__RBC_MASK |
+				UVD_CGC_GATE__LMI_MC_MASK |
+				UVD_CGC_GATE__IDCT_MASK |
+				UVD_CGC_GATE__MPRD_MASK |
+				UVD_CGC_GATE__MPC_MASK |
+				UVD_CGC_GATE__LBSI_MASK |
+				UVD_CGC_GATE__LRBBM_MASK |
+				UVD_CGC_GATE__UDEC_RE_MASK |
+				UVD_CGC_GATE__UDEC_CM_MASK |
+				UVD_CGC_GATE__UDEC_IT_MASK |
+				UVD_CGC_GATE__UDEC_DB_MASK |
+				UVD_CGC_GATE__UDEC_MP_MASK |
+				UVD_CGC_GATE__WCB_MASK |
+				UVD_CGC_GATE__VCPU_MASK |
+				UVD_CGC_GATE__SCPU_MASK;
+		data1 |= UVD_SUVD_CGC_GATE__SRE_MASK |
+				UVD_SUVD_CGC_GATE__SIT_MASK |
+				UVD_SUVD_CGC_GATE__SMP_MASK |
+				UVD_SUVD_CGC_GATE__SCM_MASK |
+				UVD_SUVD_CGC_GATE__SDB_MASK |
+				UVD_SUVD_CGC_GATE__SRE_H264_MASK |
+				UVD_SUVD_CGC_GATE__SRE_HEVC_MASK |
+				UVD_SUVD_CGC_GATE__SIT_H264_MASK |
+				UVD_SUVD_CGC_GATE__SIT_HEVC_MASK |
+				UVD_SUVD_CGC_GATE__SCM_H264_MASK |
+				UVD_SUVD_CGC_GATE__SCM_HEVC_MASK |
+				UVD_SUVD_CGC_GATE__SDB_H264_MASK |
+				UVD_SUVD_CGC_GATE__SDB_HEVC_MASK;
+	} else {
+		data &= ~(UVD_CGC_GATE__SYS_MASK |
+				UVD_CGC_GATE__UDEC_MASK |
+				UVD_CGC_GATE__MPEG2_MASK |
+				UVD_CGC_GATE__RBC_MASK |
+				UVD_CGC_GATE__LMI_MC_MASK |
+				UVD_CGC_GATE__LMI_UMC_MASK |
+				UVD_CGC_GATE__IDCT_MASK |
+				UVD_CGC_GATE__MPRD_MASK |
+				UVD_CGC_GATE__MPC_MASK |
+				UVD_CGC_GATE__LBSI_MASK |
+				UVD_CGC_GATE__LRBBM_MASK |
+				UVD_CGC_GATE__UDEC_RE_MASK |
+				UVD_CGC_GATE__UDEC_CM_MASK |
+				UVD_CGC_GATE__UDEC_IT_MASK |
+				UVD_CGC_GATE__UDEC_DB_MASK |
+				UVD_CGC_GATE__UDEC_MP_MASK |
+				UVD_CGC_GATE__WCB_MASK |
+				UVD_CGC_GATE__VCPU_MASK |
+				UVD_CGC_GATE__SCPU_MASK);
+		data1 &= ~(UVD_SUVD_CGC_GATE__SRE_MASK |
+				UVD_SUVD_CGC_GATE__SIT_MASK |
+				UVD_SUVD_CGC_GATE__SMP_MASK |
+				UVD_SUVD_CGC_GATE__SCM_MASK |
+				UVD_SUVD_CGC_GATE__SDB_MASK |
+				UVD_SUVD_CGC_GATE__SRE_H264_MASK |
+				UVD_SUVD_CGC_GATE__SRE_HEVC_MASK |
+				UVD_SUVD_CGC_GATE__SIT_H264_MASK |
+				UVD_SUVD_CGC_GATE__SIT_HEVC_MASK |
+				UVD_SUVD_CGC_GATE__SCM_H264_MASK |
+				UVD_SUVD_CGC_GATE__SCM_HEVC_MASK |
+				UVD_SUVD_CGC_GATE__SDB_H264_MASK |
+				UVD_SUVD_CGC_GATE__SDB_HEVC_MASK);
+	}
+	WREG32(mmUVD_CGC_GATE, data);
+	WREG32(mmUVD_SUVD_CGC_GATE, data1);
+}
+
+static void tonga_set_uvd_clock_gating_branches(struct amdgpu_device *adev,
+		bool enable)
+{
+	u32 data, data1;
+
+	data = RREG32(mmUVD_CGC_GATE);
+	data1 = RREG32(mmUVD_SUVD_CGC_GATE);
+	if (enable) {
+		data |= UVD_CGC_GATE__SYS_MASK |
+				UVD_CGC_GATE__UDEC_MASK |
+				UVD_CGC_GATE__MPEG2_MASK |
+				UVD_CGC_GATE__RBC_MASK |
+				UVD_CGC_GATE__LMI_MC_MASK |
+				UVD_CGC_GATE__IDCT_MASK |
+				UVD_CGC_GATE__MPRD_MASK |
+				UVD_CGC_GATE__MPC_MASK |
+				UVD_CGC_GATE__LBSI_MASK |
+				UVD_CGC_GATE__LRBBM_MASK |
+				UVD_CGC_GATE__UDEC_RE_MASK |
+				UVD_CGC_GATE__UDEC_CM_MASK |
+				UVD_CGC_GATE__UDEC_IT_MASK |
+				UVD_CGC_GATE__UDEC_DB_MASK |
+				UVD_CGC_GATE__UDEC_MP_MASK |
+				UVD_CGC_GATE__WCB_MASK |
+				UVD_CGC_GATE__VCPU_MASK |
+				UVD_CGC_GATE__SCPU_MASK;
+		data1 |= UVD_SUVD_CGC_GATE__SRE_MASK |
+				UVD_SUVD_CGC_GATE__SIT_MASK |
+				UVD_SUVD_CGC_GATE__SMP_MASK |
+				UVD_SUVD_CGC_GATE__SCM_MASK |
+				UVD_SUVD_CGC_GATE__SDB_MASK;
+	} else {
+		data &= ~(UVD_CGC_GATE__SYS_MASK |
+				UVD_CGC_GATE__UDEC_MASK |
+				UVD_CGC_GATE__MPEG2_MASK |
+				UVD_CGC_GATE__RBC_MASK |
+				UVD_CGC_GATE__LMI_MC_MASK |
+				UVD_CGC_GATE__LMI_UMC_MASK |
+				UVD_CGC_GATE__IDCT_MASK |
+				UVD_CGC_GATE__MPRD_MASK |
+				UVD_CGC_GATE__MPC_MASK |
+				UVD_CGC_GATE__LBSI_MASK |
+				UVD_CGC_GATE__LRBBM_MASK |
+				UVD_CGC_GATE__UDEC_RE_MASK |
+				UVD_CGC_GATE__UDEC_CM_MASK |
+				UVD_CGC_GATE__UDEC_IT_MASK |
+				UVD_CGC_GATE__UDEC_DB_MASK |
+				UVD_CGC_GATE__UDEC_MP_MASK |
+				UVD_CGC_GATE__WCB_MASK |
+				UVD_CGC_GATE__VCPU_MASK |
+				UVD_CGC_GATE__SCPU_MASK);
+		data1 &= ~(UVD_SUVD_CGC_GATE__SRE_MASK |
+				UVD_SUVD_CGC_GATE__SIT_MASK |
+				UVD_SUVD_CGC_GATE__SMP_MASK |
+				UVD_SUVD_CGC_GATE__SCM_MASK |
+				UVD_SUVD_CGC_GATE__SDB_MASK);
+	}
+	WREG32(mmUVD_CGC_GATE, data);
+	WREG32(mmUVD_SUVD_CGC_GATE, data1);
+}
+
+static void uvd_v6_0_set_uvd_dynamic_clock_mode(struct amdgpu_device *adev,
+		bool swmode)
+{
+	u32 data, data1 = 0, data2;
+
+	/* Always un-gate UVD REGS bit */
+	data = RREG32(mmUVD_CGC_GATE);
+	data &= ~(UVD_CGC_GATE__REGS_MASK);
+	WREG32(mmUVD_CGC_GATE, data);
+
+	data = RREG32(mmUVD_CGC_CTRL);
+	data &= ~(UVD_CGC_CTRL__CLK_OFF_DELAY_MASK |
+			UVD_CGC_CTRL__CLK_GATE_DLY_TIMER_MASK);
+	data |= UVD_CGC_CTRL__DYN_CLOCK_MODE_MASK |
+			1 << REG_FIELD_SHIFT(UVD_CGC_CTRL, CLK_GATE_DLY_TIMER) |
+			4 << REG_FIELD_SHIFT(UVD_CGC_CTRL, CLK_OFF_DELAY);
+
+	data2 = RREG32(mmUVD_SUVD_CGC_CTRL);
+	if (swmode) {
+		data &= ~(UVD_CGC_CTRL__UDEC_RE_MODE_MASK |
+				UVD_CGC_CTRL__UDEC_CM_MODE_MASK |
+				UVD_CGC_CTRL__UDEC_IT_MODE_MASK |
+				UVD_CGC_CTRL__UDEC_DB_MODE_MASK |
+				UVD_CGC_CTRL__UDEC_MP_MODE_MASK |
+				UVD_CGC_CTRL__SYS_MODE_MASK |
+				UVD_CGC_CTRL__UDEC_MODE_MASK |
+				UVD_CGC_CTRL__MPEG2_MODE_MASK |
+				UVD_CGC_CTRL__REGS_MODE_MASK |
+				UVD_CGC_CTRL__RBC_MODE_MASK |
+				UVD_CGC_CTRL__LMI_MC_MODE_MASK |
+				UVD_CGC_CTRL__LMI_UMC_MODE_MASK |
+				UVD_CGC_CTRL__IDCT_MODE_MASK |
+				UVD_CGC_CTRL__MPRD_MODE_MASK |
+				UVD_CGC_CTRL__MPC_MODE_MASK |
+				UVD_CGC_CTRL__LBSI_MODE_MASK |
+				UVD_CGC_CTRL__LRBBM_MODE_MASK |
+				UVD_CGC_CTRL__WCB_MODE_MASK |
+				UVD_CGC_CTRL__VCPU_MODE_MASK |
+				UVD_CGC_CTRL__JPEG_MODE_MASK |
+				UVD_CGC_CTRL__SCPU_MODE_MASK);
+		data1 |= UVD_CGC_CTRL2__DYN_OCLK_RAMP_EN_MASK |
+				UVD_CGC_CTRL2__DYN_RCLK_RAMP_EN_MASK;
+		data1 &= ~UVD_CGC_CTRL2__GATER_DIV_ID_MASK;
+		data1 |= 7 << REG_FIELD_SHIFT(UVD_CGC_CTRL2, GATER_DIV_ID);
+		data2 &= ~(UVD_SUVD_CGC_CTRL__SRE_MODE_MASK |
+				UVD_SUVD_CGC_CTRL__SIT_MODE_MASK |
+				UVD_SUVD_CGC_CTRL__SMP_MODE_MASK |
+				UVD_SUVD_CGC_CTRL__SCM_MODE_MASK |
+				UVD_SUVD_CGC_CTRL__SDB_MODE_MASK);
+	} else {
+		data |= UVD_CGC_CTRL__UDEC_RE_MODE_MASK |
+				UVD_CGC_CTRL__UDEC_CM_MODE_MASK |
+				UVD_CGC_CTRL__UDEC_IT_MODE_MASK |
+				UVD_CGC_CTRL__UDEC_DB_MODE_MASK |
+				UVD_CGC_CTRL__UDEC_MP_MODE_MASK |
+				UVD_CGC_CTRL__SYS_MODE_MASK |
+				UVD_CGC_CTRL__UDEC_MODE_MASK |
+				UVD_CGC_CTRL__MPEG2_MODE_MASK |
+				UVD_CGC_CTRL__REGS_MODE_MASK |
+				UVD_CGC_CTRL__RBC_MODE_MASK |
+				UVD_CGC_CTRL__LMI_MC_MODE_MASK |
+				UVD_CGC_CTRL__LMI_UMC_MODE_MASK |
+				UVD_CGC_CTRL__IDCT_MODE_MASK |
+				UVD_CGC_CTRL__MPRD_MODE_MASK |
+				UVD_CGC_CTRL__MPC_MODE_MASK |
+				UVD_CGC_CTRL__LBSI_MODE_MASK |
+				UVD_CGC_CTRL__LRBBM_MODE_MASK |
+				UVD_CGC_CTRL__WCB_MODE_MASK |
+				UVD_CGC_CTRL__VCPU_MODE_MASK |
+				UVD_CGC_CTRL__SCPU_MODE_MASK;
+		data2 |= UVD_SUVD_CGC_CTRL__SRE_MODE_MASK |
+				UVD_SUVD_CGC_CTRL__SIT_MODE_MASK |
+				UVD_SUVD_CGC_CTRL__SMP_MODE_MASK |
+				UVD_SUVD_CGC_CTRL__SCM_MODE_MASK |
+				UVD_SUVD_CGC_CTRL__SDB_MODE_MASK;
+	}
+	WREG32(mmUVD_CGC_CTRL, data);
+	WREG32(mmUVD_SUVD_CGC_CTRL, data2);
+
+	data = RREG32_UVD_CTX(ixUVD_CGC_CTRL2);
+	data &= ~(REG_FIELD_MASK(UVD_CGC_CTRL2, DYN_OCLK_RAMP_EN) |
+			REG_FIELD_MASK(UVD_CGC_CTRL2, DYN_RCLK_RAMP_EN) |
+			REG_FIELD_MASK(UVD_CGC_CTRL2, GATER_DIV_ID));
+	data1 &= (REG_FIELD_MASK(UVD_CGC_CTRL2, DYN_OCLK_RAMP_EN) |
+			REG_FIELD_MASK(UVD_CGC_CTRL2, DYN_RCLK_RAMP_EN) |
+			REG_FIELD_MASK(UVD_CGC_CTRL2, GATER_DIV_ID));
+	data |= data1;
+	WREG32_UVD_CTX(ixUVD_CGC_CTRL2, data);
+}
+
 /**
  * uvd_v6_0_start - start UVD block
  *
@@ -303,8 +531,19 @@ static int uvd_v6_0_start(struct amdgpu_device *adev)
 
 	uvd_v6_0_mc_resume(adev);
 
-	/* disable clock gating */
-	WREG32(mmUVD_CGC_GATE, 0);
+	/* Set dynamic clock gating in S/W control mode */
+	if (adev->cg_flags & AMDGPU_CG_SUPPORT_UVD_MGCG) {
+		if (adev->flags & AMD_IS_APU)
+			cz_set_uvd_clock_gating_branches(adev, false);
+		else
+			tonga_set_uvd_clock_gating_branches(adev, false);
+		uvd_v6_0_set_uvd_dynamic_clock_mode(adev, true);
+	} else {
+		/* disable clock gating */
+		uint32_t data = RREG32(mmUVD_CGC_CTRL);
+		data &= ~UVD_CGC_CTRL__DYN_CLOCK_MODE_MASK;
+		WREG32(mmUVD_CGC_CTRL, data);
+	}
 
 
 	/* disable interupt */
 	WREG32_P(mmUVD_MASTINT_EN, 0, ~(1 << 1));
 static int uvd_v6_0_set_clockgating_state(void *handle,
 static int uvd_v6_0_set_clockgating_state(void *handle,
 					  enum amd_clockgating_state state)
 {
+	bool enable = (state == AMD_CG_STATE_GATE) ? true : false;
+
+	if (!(adev->cg_flags & AMDGPU_CG_SUPPORT_UVD_MGCG))
+		return 0;
+
+	if (enable) {
+		if (adev->flags & AMD_IS_APU)
+			cz_set_uvd_clock_gating_branches(adev, enable);
+		else
+			tonga_set_uvd_clock_gating_branches(adev, enable);
+		uvd_v6_0_set_uvd_dynamic_clock_mode(adev, true);
+	} else {
+		uint32_t data = RREG32(mmUVD_CGC_CTRL);
+		data &= ~UVD_CGC_CTRL__DYN_CLOCK_MODE_MASK;
+		WREG32(mmUVD_CGC_CTRL, data);
+	}
+
 	return 0;
 	return 0;
 }
 
+ 171 - 67
drivers/gpu/drm/amd/amdgpu/vce_v3_0.c

@@ -103,6 +103,108 @@ static void vce_v3_0_ring_set_wptr(struct amdgpu_ring *ring)
 		WREG32(mmVCE_RB_WPTR2, ring->wptr);
 		WREG32(mmVCE_RB_WPTR2, ring->wptr);
 }
 
+{
+	u32 tmp, data;
+
+	tmp = data = RREG32(mmVCE_RB_ARB_CTRL);
+	if (override)
+		data |= VCE_RB_ARB_CTRL__VCE_CGTT_OVERRIDE_MASK;
+	else
+		data &= ~VCE_RB_ARB_CTRL__VCE_CGTT_OVERRIDE_MASK;
+
+	if (tmp != data)
+		WREG32(mmVCE_RB_ARB_CTRL, data);
+}
+
+static void vce_v3_0_set_vce_sw_clock_gating(struct amdgpu_device *adev,
+					     bool gated)
+{
+	u32 tmp, data;
+	/* Set Override to disable Clock Gating */
+	vce_v3_0_override_vce_clock_gating(adev, true);
+
+	if (!gated) {
+		/* Force CLOCK ON for VCE_CLOCK_GATING_B,
+		 * {*_FORCE_ON, *_FORCE_OFF} = {1, 0}
+		 * VREG can be FORCE ON or set to Dynamic, but can't be OFF
+		 */
+		tmp = data = RREG32(mmVCE_CLOCK_GATING_B);
+		data |= 0x1ff;
+		data &= ~0xef0000;
+		if (tmp != data)
+			WREG32(mmVCE_CLOCK_GATING_B, data);
+
+		/* Force CLOCK ON for VCE_UENC_CLOCK_GATING,
+		 * {*_FORCE_ON, *_FORCE_OFF} = {1, 0}
+		 */
+		tmp = data = RREG32(mmVCE_UENC_CLOCK_GATING);
+		data |= 0x3ff000;
+		data &= ~0xffc00000;
+		if (tmp != data)
+			WREG32(mmVCE_UENC_CLOCK_GATING, data);
+
+		/* set VCE_UENC_CLOCK_GATING_2 */
+		tmp = data = RREG32(mmVCE_UENC_CLOCK_GATING_2);
+		data |= 0x2;
+		data &= ~0x2;
+		if (tmp != data)
+			WREG32(mmVCE_UENC_CLOCK_GATING_2, data);
+
+		/* Force CLOCK ON for VCE_UENC_REG_CLOCK_GATING */
+		tmp = data = RREG32(mmVCE_UENC_REG_CLOCK_GATING);
+		data |= 0x37f;
+		if (tmp != data)
+			WREG32(mmVCE_UENC_REG_CLOCK_GATING, data);
+
+		/* Force VCE_UENC_DMA_DCLK_CTRL Clock ON */
+		tmp = data = RREG32(mmVCE_UENC_DMA_DCLK_CTRL);
+		data |= VCE_UENC_DMA_DCLK_CTRL__WRDMCLK_FORCEON_MASK |
+				VCE_UENC_DMA_DCLK_CTRL__RDDMCLK_FORCEON_MASK |
+				VCE_UENC_DMA_DCLK_CTRL__REGCLK_FORCEON_MASK  |
+				0x8;
+		if (tmp != data)
+			WREG32(mmVCE_UENC_DMA_DCLK_CTRL, data);
+	} else {
+		/* Force CLOCK OFF for VCE_CLOCK_GATING_B,
+		 * {*, *_FORCE_OFF} = {*, 1}
+		 * set VREG to Dynamic, as it can't be OFF
+		 */
+		tmp = data = RREG32(mmVCE_CLOCK_GATING_B);
+		data &= ~0x80010;
+		data |= 0xe70008;
+		if (tmp != data)
+			WREG32(mmVCE_CLOCK_GATING_B, data);
+		/* Force CLOCK OFF for VCE_UENC_CLOCK_GATING,
+		 * Force ClOCK OFF takes precedent over Force CLOCK ON setting.
+		 * {*_FORCE_ON, *_FORCE_OFF} = {*, 1}
+		 */
+		tmp = data = RREG32(mmVCE_UENC_CLOCK_GATING);
+		data |= 0xffc00000;
+		if (tmp != data)
+			WREG32(mmVCE_UENC_CLOCK_GATING, data);
+		/* Set VCE_UENC_CLOCK_GATING_2 */
+		tmp = data = RREG32(mmVCE_UENC_CLOCK_GATING_2);
+		data |= 0x10000;
+		if (tmp != data)
+			WREG32(mmVCE_UENC_CLOCK_GATING_2, data);
+		/* Set VCE_UENC_REG_CLOCK_GATING to dynamic */
+		tmp = data = RREG32(mmVCE_UENC_REG_CLOCK_GATING);
+		data &= ~0xffc00000;
+		if (tmp != data)
+			WREG32(mmVCE_UENC_REG_CLOCK_GATING, data);
+		/* Set VCE_UENC_DMA_DCLK_CTRL CG always in dynamic mode */
+		tmp = data = RREG32(mmVCE_UENC_DMA_DCLK_CTRL);
+		data &= ~(VCE_UENC_DMA_DCLK_CTRL__WRDMCLK_FORCEON_MASK |
+				VCE_UENC_DMA_DCLK_CTRL__RDDMCLK_FORCEON_MASK |
+				VCE_UENC_DMA_DCLK_CTRL__REGCLK_FORCEON_MASK  |
+				0x8);
+		if (tmp != data)
+			WREG32(mmVCE_UENC_DMA_DCLK_CTRL, data);
+	}
+	vce_v3_0_override_vce_clock_gating(adev, false);
+}
+
 /**
  * vce_v3_0_start - start VCE block
  *
@@ -121,7 +223,7 @@ static int vce_v3_0_start(struct amdgpu_device *adev)
 		if (adev->vce.harvest_config & (1 << idx))
 			continue;
 
-		if(idx == 0)
+		if (idx == 0)
 			WREG32_P(mmGRBM_GFX_INDEX, 0,
 				~GRBM_GFX_INDEX__VCE_INSTANCE_MASK);
 		else
@@ -174,6 +276,10 @@ static int vce_v3_0_start(struct amdgpu_device *adev)
 		/* clear BUSY flag */
 		WREG32_P(mmVCE_STATUS, 0, ~1);
 
+		/* Set Clock-Gating off */
+		if (adev->cg_flags & AMDGPU_CG_SUPPORT_VCE_MGCG)
+			vce_v3_0_set_vce_sw_clock_gating(adev, false);
+
 		if (r) {
 			DRM_ERROR("VCE not responding, giving up!!!\n");
 			mutex_unlock(&adev->grbm_idx_mutex);
@@ -208,14 +314,11 @@ static int vce_v3_0_start(struct amdgpu_device *adev)
 static unsigned vce_v3_0_get_harvest_config(struct amdgpu_device *adev)
 {
 	u32 tmp;
-	unsigned ret;
 
 	/* Fiji, Stoney are single pipe */
 	if ((adev->asic_type == CHIP_FIJI) ||
-	    (adev->asic_type == CHIP_STONEY)){
-		ret = AMDGPU_VCE_HARVEST_VCE1;
-		return ret;
-	}
+	    (adev->asic_type == CHIP_STONEY))
+		return AMDGPU_VCE_HARVEST_VCE1;
 
 	/* Tonga and CZ are dual or single pipe */
 	if (adev->flags & AMD_IS_APU)
@@ -229,19 +332,14 @@ static unsigned vce_v3_0_get_harvest_config(struct amdgpu_device *adev)
 
 	switch (tmp) {
 	case 1:
-		ret = AMDGPU_VCE_HARVEST_VCE0;
-		break;
+		return AMDGPU_VCE_HARVEST_VCE0;
 	case 2:
-		ret = AMDGPU_VCE_HARVEST_VCE1;
-		break;
+		return AMDGPU_VCE_HARVEST_VCE1;
 	case 3:
-		ret = AMDGPU_VCE_HARVEST_VCE0 | AMDGPU_VCE_HARVEST_VCE1;
-		break;
+		return AMDGPU_VCE_HARVEST_VCE0 | AMDGPU_VCE_HARVEST_VCE1;
 	default:
-		ret = 0;
+		return 0;
 	}
-
-	return ret;
 }
 
 static int vce_v3_0_early_init(void *handle)
@@ -316,28 +414,22 @@ static int vce_v3_0_sw_fini(void *handle)
 
 static int vce_v3_0_hw_init(void *handle)
 {
-	struct amdgpu_ring *ring;
-	int r;
+	int r, i;
 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
 
 	r = vce_v3_0_start(adev);
 	if (r)
 		return r;
 
-	ring = &adev->vce.ring[0];
-	ring->ready = true;
-	r = amdgpu_ring_test_ring(ring);
-	if (r) {
-		ring->ready = false;
-		return r;
-	}
+	adev->vce.ring[0].ready = false;
+	adev->vce.ring[1].ready = false;
 
-	ring = &adev->vce.ring[1];
-	ring->ready = true;
-	r = amdgpu_ring_test_ring(ring);
-	if (r) {
-		ring->ready = false;
-		return r;
+	for (i = 0; i < 2; i++) {
+		r = amdgpu_ring_test_ring(&adev->vce.ring[i]);
+		if (r)
+			return r;
+		else
+			adev->vce.ring[i].ready = true;
 	}
 
 	DRM_INFO("VCE initialized successfully.\n");
@@ -437,17 +529,9 @@ static bool vce_v3_0_is_idle(void *handle)
 {
 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
 	u32 mask = 0;
-	int idx;
-
-	for (idx = 0; idx < 2; ++idx) {
-		if (adev->vce.harvest_config & (1 << idx))
-			continue;
 
-		if (idx == 0)
-			mask |= SRBM_STATUS2__VCE0_BUSY_MASK;
-		else
-			mask |= SRBM_STATUS2__VCE1_BUSY_MASK;
-	}
+	mask |= (adev->vce.harvest_config & AMDGPU_VCE_HARVEST_VCE0) ? 0 : SRBM_STATUS2__VCE0_BUSY_MASK;
+	mask |= (adev->vce.harvest_config & AMDGPU_VCE_HARVEST_VCE1) ? 0 : SRBM_STATUS2__VCE1_BUSY_MASK;
 
 	return !(RREG32(mmSRBM_STATUS2) & mask);
 }
@@ -456,23 +540,11 @@ static int vce_v3_0_wait_for_idle(void *handle)
 {
 	unsigned i;
 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
-	u32 mask = 0;
-	int idx;
-
-	for (idx = 0; idx < 2; ++idx) {
-		if (adev->vce.harvest_config & (1 << idx))
-			continue;
-
-		if (idx == 0)
-			mask |= SRBM_STATUS2__VCE0_BUSY_MASK;
-		else
-			mask |= SRBM_STATUS2__VCE1_BUSY_MASK;
-	}
 
-	for (i = 0; i < adev->usec_timeout; i++) {
-		if (!(RREG32(mmSRBM_STATUS2) & mask))
+	for (i = 0; i < adev->usec_timeout; i++)
+		if (vce_v3_0_is_idle(handle))
 			return 0;
-	}
+
 	return -ETIMEDOUT;
 }
 
@@ -480,17 +552,10 @@ static int vce_v3_0_soft_reset(void *handle)
 {
 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
 	u32 mask = 0;
-	int idx;
 
-	for (idx = 0; idx < 2; ++idx) {
-		if (adev->vce.harvest_config & (1 << idx))
-			continue;
+	mask |= (adev->vce.harvest_config & AMDGPU_VCE_HARVEST_VCE0) ? 0 : SRBM_SOFT_RESET__SOFT_RESET_VCE0_MASK;
+	mask |= (adev->vce.harvest_config & AMDGPU_VCE_HARVEST_VCE1) ? 0 : SRBM_SOFT_RESET__SOFT_RESET_VCE1_MASK;
 
-		if (idx == 0)
-			mask |= SRBM_SOFT_RESET__SOFT_RESET_VCE0_MASK;
-		else
-			mask |= SRBM_SOFT_RESET__SOFT_RESET_VCE1_MASK;
-	}
 	WREG32_P(mmSRBM_SOFT_RESET, mask,
 		 ~(SRBM_SOFT_RESET__SOFT_RESET_VCE0_MASK |
 		   SRBM_SOFT_RESET__SOFT_RESET_VCE1_MASK));
@@ -592,10 +657,8 @@ static int vce_v3_0_process_interrupt(struct amdgpu_device *adev,
 
 	switch (entry->src_data) {
 	case 0:
-		amdgpu_fence_process(&adev->vce.ring[0]);
-		break;
 	case 1:
-		amdgpu_fence_process(&adev->vce.ring[1]);
+		amdgpu_fence_process(&adev->vce.ring[entry->src_data]);
 		break;
 	default:
 		DRM_ERROR("Unhandled interrupt: %d %d\n",
@@ -609,6 +672,47 @@ static int vce_v3_0_process_interrupt(struct amdgpu_device *adev,
 static int vce_v3_0_set_clockgating_state(void *handle,
 					  enum amd_clockgating_state state)
 {
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+	bool enable = (state == AMD_CG_STATE_GATE) ? true : false;
+	int i;
+
+	if (!(adev->cg_flags & AMDGPU_CG_SUPPORT_VCE_MGCG))
+		return 0;
+
+	mutex_lock(&adev->grbm_idx_mutex);
+	for (i = 0; i < 2; i++) {
+		/* Program VCE Instance 0 or 1 if not harvested */
+		if (adev->vce.harvest_config & (1 << i))
+			continue;
+
+		if (i == 0)
+			WREG32_P(mmGRBM_GFX_INDEX, 0,
+					~GRBM_GFX_INDEX__VCE_INSTANCE_MASK);
+		else
+			WREG32_P(mmGRBM_GFX_INDEX,
+					GRBM_GFX_INDEX__VCE_INSTANCE_MASK,
+					~GRBM_GFX_INDEX__VCE_INSTANCE_MASK);
+
+		if (enable) {
+			/* initialize VCE_CLOCK_GATING_A: Clock ON/OFF delay */
+			uint32_t data = RREG32(mmVCE_CLOCK_GATING_A);
+			data &= ~(0xf | 0xff0);
+			data |= ((0x0 << 0) | (0x04 << 4));
+			WREG32(mmVCE_CLOCK_GATING_A, data);
+
+			/* initialize VCE_UENC_CLOCK_GATING: Clock ON/OFF delay */
+			data = RREG32(mmVCE_UENC_CLOCK_GATING);
+			data &= ~(0xf | 0xff0);
+			data |= ((0x0 << 0) | (0x04 << 4));
+			WREG32(mmVCE_UENC_CLOCK_GATING, data);
+		}
+
+		vce_v3_0_set_vce_sw_clock_gating(adev, enable);
+	}
+
+	WREG32_P(mmGRBM_GFX_INDEX, 0, ~GRBM_GFX_INDEX__VCE_INSTANCE_MASK);
+	mutex_unlock(&adev->grbm_idx_mutex);
+
 	return 0;
 }
 

+ 134 - 19
drivers/gpu/drm/amd/amdgpu/vi.c

@@ -31,6 +31,7 @@
 #include "amdgpu_vce.h"
 #include "amdgpu_ucode.h"
 #include "atom.h"
+#include "amd_pcie.h"
 
 #include "gmc/gmc_8_1_d.h"
 #include "gmc/gmc_8_1_sh_mask.h"
@@ -71,6 +72,7 @@
 #include "uvd_v5_0.h"
 #include "uvd_v6_0.h"
 #include "vce_v3_0.h"
+#include "amdgpu_powerplay.h"
 
 /*
  * Indirect registers accessor
@@ -376,6 +378,38 @@ static bool vi_read_disabled_bios(struct amdgpu_device *adev)
 	WREG32_SMC(ixROM_CNTL, rom_cntl);
 	return r;
 }
+
+static bool vi_read_bios_from_rom(struct amdgpu_device *adev,
+				  u8 *bios, u32 length_bytes)
+{
+	u32 *dw_ptr;
+	unsigned long flags;
+	u32 i, length_dw;
+
+	if (bios == NULL)
+		return false;
+	if (length_bytes == 0)
+		return false;
+	/* APU vbios image is part of sbios image */
+	if (adev->flags & AMD_IS_APU)
+		return false;
+
+	dw_ptr = (u32 *)bios;
+	length_dw = ALIGN(length_bytes, 4) / 4;
+	/* take the smc lock since we are using the smc index */
+	spin_lock_irqsave(&adev->smc_idx_lock, flags);
+	/* set rom index to 0 */
+	WREG32(mmSMC_IND_INDEX_0, ixROM_INDEX);
+	WREG32(mmSMC_IND_DATA_0, 0);
+	/* set index to data for continous read */
+	WREG32(mmSMC_IND_INDEX_0, ixROM_DATA);
+	for (i = 0; i < length_dw; i++)
+		dw_ptr[i] = RREG32(mmSMC_IND_DATA_0);
+	spin_unlock_irqrestore(&adev->smc_idx_lock, flags);
+
+	return true;
+}
+
 static struct amdgpu_allowed_register_entry tonga_allowed_read_registers[] = {
 	{mmGB_MACROTILE_MODE7, true},
 };
@@ -1019,9 +1053,6 @@ static int vi_set_vce_clocks(struct amdgpu_device *adev, u32 evclk, u32 ecclk)
 
 static void vi_pcie_gen3_enable(struct amdgpu_device *adev)
 {
-	u32 mask;
-	int ret;
-
 	if (pci_is_root_bus(adev->pdev->bus))
 		return;
 
@@ -1031,11 +1062,8 @@ static void vi_pcie_gen3_enable(struct amdgpu_device *adev)
 	if (adev->flags & AMD_IS_APU)
 		return;
 
-	ret = drm_pcie_get_speed_cap_mask(adev->ddev, &mask);
-	if (ret != 0)
-		return;
-
-	if (!(mask & (DRM_PCIE_SPEED_50 | DRM_PCIE_SPEED_80)))
+	if (!(adev->pm.pcie_gen_mask & (CAIL_PCIE_LINK_SPEED_SUPPORT_GEN2 |
+					CAIL_PCIE_LINK_SPEED_SUPPORT_GEN3)))
 		return;
 
 	/* todo */
@@ -1098,7 +1126,7 @@ static const struct amdgpu_ip_block_version topaz_ip_blocks[] =
 		.major = 7,
 		.minor = 1,
 		.rev = 0,
-		.funcs = &iceland_dpm_ip_funcs,
+		.funcs = &amdgpu_pp_ip_funcs,
 	},
 	{
 		.type = AMD_IP_BLOCK_TYPE_GFX,
@@ -1145,7 +1173,7 @@ static const struct amdgpu_ip_block_version tonga_ip_blocks[] =
 		.major = 7,
 		.minor = 1,
 		.rev = 0,
-		.funcs = &tonga_dpm_ip_funcs,
+		.funcs = &amdgpu_pp_ip_funcs,
 	},
 	{
 		.type = AMD_IP_BLOCK_TYPE_DCE,
@@ -1213,7 +1241,7 @@ static const struct amdgpu_ip_block_version fiji_ip_blocks[] =
 		.major = 7,
 		.minor = 1,
 		.rev = 0,
-		.funcs = &fiji_dpm_ip_funcs,
+		.funcs = &amdgpu_pp_ip_funcs,
 	},
 	{
 		.type = AMD_IP_BLOCK_TYPE_DCE,
@@ -1281,7 +1309,7 @@ static const struct amdgpu_ip_block_version cz_ip_blocks[] =
 		.major = 8,
 		.minor = 0,
 		.rev = 0,
-		.funcs = &cz_dpm_ip_funcs,
+		.funcs = &amdgpu_pp_ip_funcs
 	},
 	{
 		.type = AMD_IP_BLOCK_TYPE_DCE,
@@ -1354,20 +1382,18 @@ int vi_set_ip_blocks(struct amdgpu_device *adev)
 
 static uint32_t vi_get_rev_id(struct amdgpu_device *adev)
 {
-	if (adev->asic_type == CHIP_TOPAZ)
-		return (RREG32(mmPCIE_EFUSE4) & PCIE_EFUSE4__STRAP_BIF_ATI_REV_ID_MASK)
-			>> PCIE_EFUSE4__STRAP_BIF_ATI_REV_ID__SHIFT;
-	else if (adev->flags & AMD_IS_APU)
+	if (adev->flags & AMD_IS_APU)
 		return (RREG32_SMC(ATI_REV_ID_FUSE_MACRO__ADDRESS) & ATI_REV_ID_FUSE_MACRO__MASK)
 			>> ATI_REV_ID_FUSE_MACRO__SHIFT;
 	else
-		return (RREG32(mmCC_DRM_ID_STRAPS) & CC_DRM_ID_STRAPS__ATI_REV_ID_MASK)
-			>> CC_DRM_ID_STRAPS__ATI_REV_ID__SHIFT;
+		return (RREG32(mmPCIE_EFUSE4) & PCIE_EFUSE4__STRAP_BIF_ATI_REV_ID_MASK)
+			>> PCIE_EFUSE4__STRAP_BIF_ATI_REV_ID__SHIFT;
 }
 
 static const struct amdgpu_asic_funcs vi_asic_funcs =
 {
 	.read_disabled_bios = &vi_read_disabled_bios,
+	.read_bios_from_rom = &vi_read_bios_from_rom,
 	.read_register = &vi_read_register,
 	.reset = &vi_asic_reset,
 	.set_vga_state = &vi_vga_set_state,
@@ -1416,7 +1442,8 @@ static int vi_common_early_init(void *handle)
 		break;
 	case CHIP_FIJI:
 		adev->has_uvd = true;
-		adev->cg_flags = 0;
+		adev->cg_flags = AMDGPU_CG_SUPPORT_UVD_MGCG |
+				AMDGPU_CG_SUPPORT_VCE_MGCG;
 		adev->pg_flags = 0;
 		adev->external_rev_id = adev->rev_id + 0x3c;
 		break;
@@ -1442,6 +1469,8 @@ static int vi_common_early_init(void *handle)
 	if (amdgpu_smc_load_fw && smc_enabled)
 		adev->firmware.smu_load = true;
 
+	amdgpu_get_pcie_info(adev);
+
 	return 0;
 }
 
@@ -1515,9 +1544,95 @@ static int vi_common_soft_reset(void *handle)
 	return 0;
 }
 
+static void fiji_update_bif_medium_grain_light_sleep(struct amdgpu_device *adev,
+		bool enable)
+{
+	uint32_t temp, data;
+
+	temp = data = RREG32_PCIE(ixPCIE_CNTL2);
+
+	if (enable)
+		data |= PCIE_CNTL2__SLV_MEM_LS_EN_MASK |
+				PCIE_CNTL2__MST_MEM_LS_EN_MASK |
+				PCIE_CNTL2__REPLAY_MEM_LS_EN_MASK;
+	else
+		data &= ~(PCIE_CNTL2__SLV_MEM_LS_EN_MASK |
+				PCIE_CNTL2__MST_MEM_LS_EN_MASK |
+				PCIE_CNTL2__REPLAY_MEM_LS_EN_MASK);
+
+	if (temp != data)
+		WREG32_PCIE(ixPCIE_CNTL2, data);
+}
+
+static void fiji_update_hdp_medium_grain_clock_gating(struct amdgpu_device *adev,
+		bool enable)
+{
+	uint32_t temp, data;
+
+	temp = data = RREG32(mmHDP_HOST_PATH_CNTL);
+
+	if (enable)
+		data &= ~HDP_HOST_PATH_CNTL__CLOCK_GATING_DIS_MASK;
+	else
+		data |= HDP_HOST_PATH_CNTL__CLOCK_GATING_DIS_MASK;
+
+	if (temp != data)
+		WREG32(mmHDP_HOST_PATH_CNTL, data);
+}
+
+static void fiji_update_hdp_light_sleep(struct amdgpu_device *adev,
+		bool enable)
+{
+	uint32_t temp, data;
+
+	temp = data = RREG32(mmHDP_MEM_POWER_LS);
+
+	if (enable)
+		data |= HDP_MEM_POWER_LS__LS_ENABLE_MASK;
+	else
+		data &= ~HDP_MEM_POWER_LS__LS_ENABLE_MASK;
+
+	if (temp != data)
+		WREG32(mmHDP_MEM_POWER_LS, data);
+}
+
+static void fiji_update_rom_medium_grain_clock_gating(struct amdgpu_device *adev,
+		bool enable)
+{
+	uint32_t temp, data;
+
+	temp = data = RREG32_SMC(ixCGTT_ROM_CLK_CTRL0);
+
+	if (enable)
+		data &= ~(CGTT_ROM_CLK_CTRL0__SOFT_OVERRIDE0_MASK |
+				CGTT_ROM_CLK_CTRL0__SOFT_OVERRIDE1_MASK);
+	else
+		data |= CGTT_ROM_CLK_CTRL0__SOFT_OVERRIDE0_MASK |
+				CGTT_ROM_CLK_CTRL0__SOFT_OVERRIDE1_MASK;
+
+	if (temp != data)
+		WREG32_SMC(ixCGTT_ROM_CLK_CTRL0, data);
+}
+
 static int vi_common_set_clockgating_state(void *handle,
 					    enum amd_clockgating_state state)
 {
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	switch (adev->asic_type) {
+	case CHIP_FIJI:
+		fiji_update_bif_medium_grain_light_sleep(adev,
+				state == AMD_CG_STATE_GATE ? true : false);
+		fiji_update_hdp_medium_grain_clock_gating(adev,
+				state == AMD_CG_STATE_GATE ? true : false);
+		fiji_update_hdp_light_sleep(adev,
+				state == AMD_CG_STATE_GATE ? true : false);
+		fiji_update_rom_medium_grain_clock_gating(adev,
+				state == AMD_CG_STATE_GATE ? true : false);
+		break;
+	default:
+		break;
+	}
 	return 0;
 }
 

+ 55 - 6
drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.h → drivers/gpu/drm/amd/include/amd_acpi.h

@@ -21,14 +21,63 @@
  *
  */
 
-#ifndef AMDGPU_ACPI_H
-#define AMDGPU_ACPI_H
+#ifndef AMD_ACPI_H
+#define AMD_ACPI_H
 
-struct amdgpu_device;
-struct acpi_bus_event;
+#define ACPI_AC_CLASS           "ac_adapter"
 
-int amdgpu_atif_handler(struct amdgpu_device *adev,
-		struct acpi_bus_event *event);
+struct atif_verify_interface {
+	u16 size;		/* structure size in bytes (includes size field) */
+	u16 version;		/* version */
+	u32 notification_mask;	/* supported notifications mask */
+	u32 function_bits;	/* supported functions bit vector */
+} __packed;
+
+struct atif_system_params {
+	u16 size;		/* structure size in bytes (includes size field) */
+	u32 valid_mask;		/* valid flags mask */
+	u32 flags;		/* flags */
+	u8 command_code;	/* notify command code */
+} __packed;
+
+struct atif_sbios_requests {
+	u16 size;		/* structure size in bytes (includes size field) */
+	u32 pending;		/* pending sbios requests */
+	u8 panel_exp_mode;	/* panel expansion mode */
+	u8 thermal_gfx;		/* thermal state: target gfx controller */
+	u8 thermal_state;	/* thermal state: state id (0: exit state, non-0: state) */
+	u8 forced_power_gfx;	/* forced power state: target gfx controller */
+	u8 forced_power_state;	/* forced power state: state id */
+	u8 system_power_src;	/* system power source */
+	u8 backlight_level;	/* panel backlight level (0-255) */
+} __packed;
+
+#define ATIF_NOTIFY_MASK	0x3
+#define ATIF_NOTIFY_NONE	0
+#define ATIF_NOTIFY_81		1
+#define ATIF_NOTIFY_N		2
+
+struct atcs_verify_interface {
+	u16 size;		/* structure size in bytes (includes size field) */
+	u16 version;		/* version */
+	u32 function_bits;	/* supported functions bit vector */
+} __packed;
+
+#define ATCS_VALID_FLAGS_MASK	0x3
+
+struct atcs_pref_req_input {
+	u16 size;		/* structure size in bytes (includes size field) */
+	u16 client_id;		/* client id (bit 2-0: func num, 7-3: dev num, 15-8: bus num) */
+	u16 valid_flags_mask;	/* valid flags mask */
+	u16 flags;		/* flags */
+	u8 req_type;		/* request type */
+	u8 perf_req;		/* performance request */
+} __packed;
+
+struct atcs_pref_req_output {
+	u16 size;		/* structure size in bytes (includes size field) */
+	u8 ret_val;		/* return value */
+} __packed;
 
 /* AMD hw uses four ACPI control methods:
  * 1. ATIF
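Because these interface structs are __packed and sized by the firmware, a caller would typically zero a local copy and bound the copy by both sizes before trusting any field. A hedged sketch (the buffer and length names are hypothetical, not from this patch):

	struct atif_verify_interface output;

	memset(&output, 0, sizeof(output));
	/* copy no more than the firmware returned and no more than we hold */
	memcpy(&output, acpi_buf, min_t(size_t, sizeof(output), acpi_len));
	if (output.size < sizeof(output))
		return -EINVAL;	/* firmware reported a truncated structure */
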

+ 50 - 0
drivers/gpu/drm/amd/include/amd_pcie.h

@@ -0,0 +1,50 @@
+/*
+ * Copyright 2015 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+#ifndef __AMD_PCIE_H__
+#define __AMD_PCIE_H__
+
+/* The following flags show the PCIe link speeds supported by the driver, as decided by the chipset and ASIC */
+#define CAIL_PCIE_LINK_SPEED_SUPPORT_GEN1        0x00010000
+#define CAIL_PCIE_LINK_SPEED_SUPPORT_GEN2        0x00020000
+#define CAIL_PCIE_LINK_SPEED_SUPPORT_GEN3        0x00040000
+#define CAIL_PCIE_LINK_SPEED_SUPPORT_MASK        0xFFFF0000
+#define CAIL_PCIE_LINK_SPEED_SUPPORT_SHIFT       16
+
+/* The following flags show the PCIe link speeds supported by the ASIC hardware. */
+#define CAIL_ASIC_PCIE_LINK_SPEED_SUPPORT_GEN1   0x00000001
+#define CAIL_ASIC_PCIE_LINK_SPEED_SUPPORT_GEN2   0x00000002
+#define CAIL_ASIC_PCIE_LINK_SPEED_SUPPORT_GEN3   0x00000004
+#define CAIL_ASIC_PCIE_LINK_SPEED_SUPPORT_MASK   0x0000FFFF
+#define CAIL_ASIC_PCIE_LINK_SPEED_SUPPORT_SHIFT  0
+
+/* The following flags show the PCIe lane-width switching supported by the driver, as decided by the chipset and ASIC */
+#define CAIL_PCIE_LINK_WIDTH_SUPPORT_X1          0x00010000
+#define CAIL_PCIE_LINK_WIDTH_SUPPORT_X2          0x00020000
+#define CAIL_PCIE_LINK_WIDTH_SUPPORT_X4          0x00040000
+#define CAIL_PCIE_LINK_WIDTH_SUPPORT_X8          0x00080000
+#define CAIL_PCIE_LINK_WIDTH_SUPPORT_X12         0x00100000
+#define CAIL_PCIE_LINK_WIDTH_SUPPORT_X16         0x00200000
+#define CAIL_PCIE_LINK_WIDTH_SUPPORT_X32         0x00400000
+#define CAIL_PCIE_LINK_WIDTH_SUPPORT_SHIFT       16
+
+#endif

+ 141 - 0
drivers/gpu/drm/amd/include/amd_pcie_helpers.h

@@ -0,0 +1,141 @@
+/*
+ * Copyright 2015 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+#ifndef __AMD_PCIE_HELPERS_H__
+#define __AMD_PCIE_HELPERS_H__
+
+#include "amd_pcie.h"
+
+static inline bool is_pcie_gen3_supported(uint32_t pcie_link_speed_cap)
+{
+	if (pcie_link_speed_cap & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN3)
+		return true;
+
+	return false;
+}
+
+static inline bool is_pcie_gen2_supported(uint32_t pcie_link_speed_cap)
+{
+	if (pcie_link_speed_cap & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN2)
+		return true;
+
+	return false;
+}
+
+/* Get the new PCIE speed given the ASIC PCIE Cap and the NewState's requested PCIE speed*/
+static inline uint16_t get_pcie_gen_support(uint32_t pcie_link_speed_cap,
+					    uint16_t ns_pcie_gen)
+{
+	uint32_t asic_pcie_link_speed_cap = (pcie_link_speed_cap &
+		CAIL_ASIC_PCIE_LINK_SPEED_SUPPORT_MASK);
+	uint32_t sys_pcie_link_speed_cap  = (pcie_link_speed_cap &
+		CAIL_PCIE_LINK_SPEED_SUPPORT_MASK);
+
+	switch (asic_pcie_link_speed_cap) {
+	case CAIL_ASIC_PCIE_LINK_SPEED_SUPPORT_GEN1:
+		return PP_PCIEGen1;
+
+	case CAIL_ASIC_PCIE_LINK_SPEED_SUPPORT_GEN2:
+		return PP_PCIEGen2;
+
+	case CAIL_ASIC_PCIE_LINK_SPEED_SUPPORT_GEN3:
+		return PP_PCIEGen3;
+
+	default:
+		if (is_pcie_gen3_supported(sys_pcie_link_speed_cap) &&
+			(ns_pcie_gen == PP_PCIEGen3)) {
+			return PP_PCIEGen3;
+		} else if (is_pcie_gen2_supported(sys_pcie_link_speed_cap) &&
+			((ns_pcie_gen == PP_PCIEGen3) || (ns_pcie_gen == PP_PCIEGen2))) {
+			return PP_PCIEGen2;
+		}
+	}
+
+	return PP_PCIEGen1;
+}
+
+static inline uint16_t get_pcie_lane_support(uint32_t pcie_lane_width_cap,
+					     uint16_t ns_pcie_lanes)
+{
+	int i, j;
+	uint16_t new_pcie_lanes = ns_pcie_lanes;
+	uint16_t pcie_lanes[7] = {1, 2, 4, 8, 12, 16, 32};
+
+	switch (pcie_lane_width_cap) {
+	case 0:
+		printk(KERN_ERR "No valid PCIE lane width reported");
+		break;
+	case CAIL_PCIE_LINK_WIDTH_SUPPORT_X1:
+		new_pcie_lanes = 1;
+		break;
+	case CAIL_PCIE_LINK_WIDTH_SUPPORT_X2:
+		new_pcie_lanes = 2;
+		break;
+	case CAIL_PCIE_LINK_WIDTH_SUPPORT_X4:
+		new_pcie_lanes = 4;
+		break;
+	case CAIL_PCIE_LINK_WIDTH_SUPPORT_X8:
+		new_pcie_lanes = 8;
+		break;
+	case CAIL_PCIE_LINK_WIDTH_SUPPORT_X12:
+		new_pcie_lanes = 12;
+		break;
+	case CAIL_PCIE_LINK_WIDTH_SUPPORT_X16:
+		new_pcie_lanes = 16;
+		break;
+	case CAIL_PCIE_LINK_WIDTH_SUPPORT_X32:
+		new_pcie_lanes = 32;
+		break;
+	default:
+		for (i = 0; i < 7; i++) {
+			if (ns_pcie_lanes == pcie_lanes[i]) {
+				if (pcie_lane_width_cap & (0x10000 << i)) {
+					break;
+				} else {
+					for (j = i - 1; j >= 0; j--) {
+						if (pcie_lane_width_cap & (0x10000 << j)) {
+							new_pcie_lanes = pcie_lanes[j];
+							break;
+						}
+					}
+
+					if (j < 0) {
+						for (j = i + 1; j < 7; j++) {
+							if (pcie_lane_width_cap & (0x10000 << j)) {
+								new_pcie_lanes = pcie_lanes[j];
+								break;
+							}
+						}
+						if (j >= 7)
+							printk(KERN_ERR "Cannot find a valid PCIE lane width!");
+					}
+				}
+				break;
+			}
+		}
+		break;
+	}
+
+	return new_pcie_lanes;
+}
+
+#endif
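A hedged usage sketch for the two helpers above (the caller-side variables are assumptions; the PP_PCIEGen* constants come from the powerplay headers): clamp a power state's requested link configuration to what the platform reports before programming it.

	/* caps / lane_caps hold the CAIL-style capability words, e.g. the
	 * values the driver caches for link speed and lane width; Gen3 and
	 * 16 lanes stand in for the state's requested configuration. */
	uint16_t gen   = get_pcie_gen_support(caps, PP_PCIEGen3);
	uint16_t lanes = get_pcie_lane_support(lane_caps, 16);

Both helpers fall back toward the nearest supported value rather than failing, which is why get_pcie_lane_support scans the pcie_lanes[] table downward first and only then upward.
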

+ 21 - 0
drivers/gpu/drm/amd/include/amd_shared.h

@@ -85,6 +85,27 @@ enum amd_powergating_state {
 	AMD_PG_STATE_UNGATE,
 };
 
+enum amd_pm_state_type {
+	/* not used for dpm */
+	POWER_STATE_TYPE_DEFAULT,
+	POWER_STATE_TYPE_POWERSAVE,
+	/* user selectable states */
+	POWER_STATE_TYPE_BATTERY,
+	POWER_STATE_TYPE_BALANCED,
+	POWER_STATE_TYPE_PERFORMANCE,
+	/* internal states */
+	POWER_STATE_TYPE_INTERNAL_UVD,
+	POWER_STATE_TYPE_INTERNAL_UVD_SD,
+	POWER_STATE_TYPE_INTERNAL_UVD_HD,
+	POWER_STATE_TYPE_INTERNAL_UVD_HD2,
+	POWER_STATE_TYPE_INTERNAL_UVD_MVC,
+	POWER_STATE_TYPE_INTERNAL_BOOT,
+	POWER_STATE_TYPE_INTERNAL_THERMAL,
+	POWER_STATE_TYPE_INTERNAL_ACPI,
+	POWER_STATE_TYPE_INTERNAL_ULV,
+	POWER_STATE_TYPE_INTERNAL_3DPERF,
+};
+
 struct amd_ip_funcs {
 	/* sets up early driver state (pre sw_init), does not configure hw - Optional */
 	int (*early_init)(void *handle);

+ 1 - 0
drivers/gpu/drm/amd/include/asic_reg/bif/bif_5_0_d.h

@@ -596,6 +596,7 @@
 #define mmSWRST_EP_CONTROL_0                                                    0x14ac
 #define mmCPM_CONTROL                                                           0x14b8
 #define mmGSKT_CONTROL                                                          0x14bf
+#define ixSWRST_COMMAND_1                                                       0x1400103
 #define ixLM_CONTROL                                                            0x1400120
 #define ixLM_PCIETXMUX0                                                         0x1400121
 #define ixLM_PCIETXMUX1                                                         0x1400122

+ 13 - 0
drivers/gpu/drm/amd/include/asic_reg/gca/gfx_8_0_d.h

@@ -2807,5 +2807,18 @@
 #define ixDIDT_DBR_WEIGHT0_3                                                    0x90
 #define ixDIDT_DBR_WEIGHT4_7                                                    0x91
 #define ixDIDT_DBR_WEIGHT8_11                                                   0x92
+#define mmTD_EDC_CNT                                                            0x252e
+#define mmCPF_EDC_TAG_CNT                                                       0x3188
+#define mmCPF_EDC_ROQ_CNT                                                       0x3189
+#define mmCPF_EDC_ATC_CNT                                                       0x318a
+#define mmCPG_EDC_TAG_CNT                                                       0x318b
+#define mmCPG_EDC_ATC_CNT                                                       0x318c
+#define mmCPG_EDC_DMA_CNT                                                       0x318d
+#define mmCPC_EDC_SCRATCH_CNT                                                   0x318e
+#define mmCPC_EDC_UCODE_CNT                                                     0x318f
+#define mmCPC_EDC_ATC_CNT                                                       0x3190
+#define mmDC_EDC_STATE_CNT                                                      0x3191
+#define mmDC_EDC_CSINVOC_CNT                                                    0x3192
+#define mmDC_EDC_RESTORE_CNT                                                    0x3193
 
 #endif /* GFX_8_0_D_H */

+ 79 - 0
drivers/gpu/drm/amd/include/atombios.h

@@ -550,6 +550,13 @@ typedef struct _COMPUTE_MEMORY_CLOCK_PARAM_PARAMETERS_V2_1
 //MPLL_CNTL_FLAG_BYPASS_AD_PLL has a wrong name, should be BYPASS_DQ_PLL
 #define MPLL_CNTL_FLAG_BYPASS_AD_PLL            0x04
 
+// use for ComputeMemoryClockParamTable
+typedef struct _COMPUTE_MEMORY_CLOCK_PARAM_PARAMETERS_V2_2
+{
+  COMPUTE_MEMORY_ENGINE_PLL_PARAMETERS_V4 ulClock;
+  ULONG ulReserved;
+}COMPUTE_MEMORY_CLOCK_PARAM_PARAMETERS_V2_2;
+
 typedef struct _DYNAMICE_MEMORY_SETTINGS_PARAMETER
 {
   ATOM_COMPUTE_CLOCK_FREQ ulClock;
@@ -4988,6 +4995,78 @@ typedef struct  _ATOM_ASIC_PROFILING_INFO_V3_3
   ULONG  ulSDCMargine;
 }ATOM_ASIC_PROFILING_INFO_V3_3;
 
+// for Fiji speed EVV algorithm
+typedef struct  _ATOM_ASIC_PROFILING_INFO_V3_4
+{
+  ATOM_COMMON_TABLE_HEADER         asHeader;
+  ULONG  ulEvvLkgFactor;
+  ULONG  ulBoardCoreTemp;
+  ULONG  ulMaxVddc;
+  ULONG  ulMinVddc;
+  ULONG  ulLoadLineSlop;
+  ULONG  ulLeakageTemp;
+  ULONG  ulLeakageVoltage;
+  EFUSE_LINEAR_FUNC_PARAM sCACm;
+  EFUSE_LINEAR_FUNC_PARAM sCACb;
+  EFUSE_LOGISTIC_FUNC_PARAM sKt_b;
+  EFUSE_LOGISTIC_FUNC_PARAM sKv_m;
+  EFUSE_LOGISTIC_FUNC_PARAM sKv_b;
+  USHORT usLkgEuseIndex;
+  UCHAR  ucLkgEfuseBitLSB;
+  UCHAR  ucLkgEfuseLength;
+  ULONG  ulLkgEncodeLn_MaxDivMin;
+  ULONG  ulLkgEncodeMax;
+  ULONG  ulLkgEncodeMin;
+  ULONG  ulEfuseLogisticAlpha;
+  USHORT usPowerDpm0;
+  USHORT usPowerDpm1;
+  USHORT usPowerDpm2;
+  USHORT usPowerDpm3;
+  USHORT usPowerDpm4;
+  USHORT usPowerDpm5;
+  USHORT usPowerDpm6;
+  USHORT usPowerDpm7;
+  ULONG  ulTdpDerateDPM0;
+  ULONG  ulTdpDerateDPM1;
+  ULONG  ulTdpDerateDPM2;
+  ULONG  ulTdpDerateDPM3;
+  ULONG  ulTdpDerateDPM4;
+  ULONG  ulTdpDerateDPM5;
+  ULONG  ulTdpDerateDPM6;
+  ULONG  ulTdpDerateDPM7;
+  EFUSE_LINEAR_FUNC_PARAM sRoFuse;
+  ULONG  ulEvvDefaultVddc;
+  ULONG  ulEvvNoCalcVddc;
+  USHORT usParamNegFlag;
+  USHORT usSpeed_Model;
+  ULONG  ulSM_A0;
+  ULONG  ulSM_A1;
+  ULONG  ulSM_A2;
+  ULONG  ulSM_A3;
+  ULONG  ulSM_A4;
+  ULONG  ulSM_A5;
+  ULONG  ulSM_A6;
+  ULONG  ulSM_A7;
+  UCHAR  ucSM_A0_sign;
+  UCHAR  ucSM_A1_sign;
+  UCHAR  ucSM_A2_sign;
+  UCHAR  ucSM_A3_sign;
+  UCHAR  ucSM_A4_sign;
+  UCHAR  ucSM_A5_sign;
+  UCHAR  ucSM_A6_sign;
+  UCHAR  ucSM_A7_sign;
+  ULONG ulMargin_RO_a;
+  ULONG ulMargin_RO_b;
+  ULONG ulMargin_RO_c;
+  ULONG ulMargin_fixed;
+  ULONG ulMargin_Fmax_mean;
+  ULONG ulMargin_plat_mean;
+  ULONG ulMargin_Fmax_sigma;
+  ULONG ulMargin_plat_sigma;
+  ULONG ulMargin_DC_sigma;
+  ULONG ulReserved[8];            // Reserved for future ASIC
+}ATOM_ASIC_PROFILING_INFO_V3_4;
+
 typedef struct _ATOM_POWER_SOURCE_OBJECT
 {
    UCHAR  ucPwrSrcId;                                   // Power source

+ 123 - 1
drivers/gpu/drm/amd/include/cgs_common.h

@@ -105,6 +105,34 @@ enum cgs_ucode_id {
 	CGS_UCODE_ID_MAXIMUM,
 };
 
+enum cgs_system_info_id {
+	CGS_SYSTEM_INFO_ADAPTER_BDF_ID = 1,
+	CGS_SYSTEM_INFO_PCIE_GEN_INFO,
+	CGS_SYSTEM_INFO_PCIE_MLW,
+	CGS_SYSTEM_INFO_ID_MAXIMUM,
+};
+
+struct cgs_system_info {
+	uint64_t       size;
+	uint64_t       info_id;
+	union {
+		void           *ptr;
+		uint64_t        value;
+	};
+	uint64_t               padding[13];
+};
+
+/*
+ * enum cgs_resource_type - GPU resource type
+ */
+enum cgs_resource_type {
+	CGS_RESOURCE_TYPE_MMIO = 0,
+	CGS_RESOURCE_TYPE_FB,
+	CGS_RESOURCE_TYPE_IO,
+	CGS_RESOURCE_TYPE_DOORBELL,
+	CGS_RESOURCE_TYPE_ROM,
+};
+
 /**
  * struct cgs_clock_limits - Clock limits
  *
@@ -127,8 +155,53 @@ struct cgs_firmware_info {
 	void			*kptr;
 };
 
+struct cgs_mode_info {
+	uint32_t		refresh_rate;
+	uint32_t		ref_clock;
+	uint32_t		vblank_time_us;
+};
+
+struct cgs_display_info {
+	uint32_t		display_count;
+	uint32_t		active_display_mask;
+	struct cgs_mode_info *mode_info;
+};
+
 typedef unsigned long cgs_handle_t;
 
+#define CGS_ACPI_METHOD_ATCS          0x53435441
+#define CGS_ACPI_METHOD_ATIF          0x46495441
+#define CGS_ACPI_METHOD_ATPX          0x58505441
+#define CGS_ACPI_FIELD_METHOD_NAME                      0x00000001
+#define CGS_ACPI_FIELD_INPUT_ARGUMENT_COUNT             0x00000002
+#define CGS_ACPI_MAX_BUFFER_SIZE     256
+#define CGS_ACPI_TYPE_ANY                      0x00
+#define CGS_ACPI_TYPE_INTEGER               0x01
+#define CGS_ACPI_TYPE_STRING                0x02
+#define CGS_ACPI_TYPE_BUFFER                0x03
+#define CGS_ACPI_TYPE_PACKAGE               0x04
+
+struct cgs_acpi_method_argument {
+	uint32_t type;
+	uint32_t method_length;
+	uint32_t data_length;
+	union{
+		uint32_t value;
+		void *pointer;
+	};
+};
+
+struct cgs_acpi_method_info {
+	uint32_t size;
+	uint32_t field;
+	uint32_t input_count;
+	uint32_t name;
+	struct cgs_acpi_method_argument *pinput_argument;
+	uint32_t output_count;
+	struct cgs_acpi_method_argument *poutput_argument;
+	uint32_t padding[9];
+};
+
 /**
  * cgs_gpu_mem_info() - Return information about memory heaps
  * @cgs_device: opaque device handle
@@ -355,6 +428,23 @@ typedef void (*cgs_write_pci_config_word_t)(void *cgs_device, unsigned addr,
 typedef void (*cgs_write_pci_config_dword_t)(void *cgs_device, unsigned addr,
 					     uint32_t value);
 
+
+/**
+ * cgs_get_pci_resource() - provide access to a device resource (PCI BAR)
+ * @cgs_device:	opaque device handle
+ * @resource_type:	Type of Resource (MMIO, IO, ROM, FB, DOORBELL)
+ * @size:	size of the region
+ * @offset:	offset from the start of the region
+ * @resource_base:	base address (not including offset) returned
+ *
+ * Return: 0 on success, -errno otherwise
+ */
+typedef int (*cgs_get_pci_resource_t)(void *cgs_device,
+				      enum cgs_resource_type resource_type,
+				      uint64_t size,
+				      uint64_t offset,
+				      uint64_t *resource_base);
+
 /**
  * cgs_atom_get_data_table() - Get a pointer to an ATOM BIOS data table
  * @cgs_device:	opaque device handle
@@ -493,6 +583,21 @@ typedef int(*cgs_set_clockgating_state)(void *cgs_device,
 				  enum amd_ip_block_type block_type,
 				  enum amd_clockgating_state state);
 
+typedef int(*cgs_get_active_displays_info)(
+					void *cgs_device,
+					struct cgs_display_info *info);
+
+typedef int (*cgs_call_acpi_method)(void *cgs_device,
+					uint32_t acpi_method,
+					uint32_t acpi_function,
+					void *pinput, void *poutput,
+					uint32_t output_count,
+					uint32_t input_size,
+					uint32_t output_size);
+
+typedef int (*cgs_query_system_info)(void *cgs_device,
+				struct cgs_system_info *sys_info);
+
 struct cgs_ops {
 	/* memory management calls (similar to KFD interface) */
 	cgs_gpu_mem_info_t gpu_mem_info;
@@ -516,6 +621,8 @@ struct cgs_ops {
 	cgs_write_pci_config_byte_t write_pci_config_byte;
 	cgs_write_pci_config_word_t write_pci_config_word;
 	cgs_write_pci_config_dword_t write_pci_config_dword;
+	/* PCI resources */
+	cgs_get_pci_resource_t get_pci_resource;
 	/* ATOM BIOS */
 	cgs_atom_get_data_table_t atom_get_data_table;
 	cgs_atom_get_cmd_table_revs_t atom_get_cmd_table_revs;
@@ -533,7 +640,12 @@ struct cgs_ops {
 	/* cg pg interface*/
 	cgs_set_powergating_state set_powergating_state;
 	cgs_set_clockgating_state set_clockgating_state;
-	/* ACPI (TODO) */
+	/* display manager */
+	cgs_get_active_displays_info get_active_displays_info;
+	/* ACPI */
+	cgs_call_acpi_method call_acpi_method;
+	/* get system info */
+	cgs_query_system_info query_system_info;
 };
 
 struct cgs_os_ops; /* To be define in OS-specific CGS header */
@@ -620,5 +732,15 @@ struct cgs_device
 	CGS_CALL(set_powergating_state, dev, block_type, state)
 #define cgs_set_clockgating_state(dev, block_type, state)	\
 	CGS_CALL(set_clockgating_state, dev, block_type, state)
+#define cgs_get_active_displays_info(dev, info)	\
+	CGS_CALL(get_active_displays_info, dev, info)
+#define cgs_call_acpi_method(dev, acpi_method, acpi_function, pinput, poutput, output_count, input_size, output_size)	\
+	CGS_CALL(call_acpi_method, dev, acpi_method, acpi_function, pinput, poutput, output_count, input_size, output_size)
+#define cgs_query_system_info(dev, sys_info)	\
+	CGS_CALL(query_system_info, dev, sys_info)
+#define cgs_get_pci_resource(cgs_device, resource_type, size, offset, \
+	resource_base) \
+	CGS_CALL(get_pci_resource, cgs_device, resource_type, size, offset, \
+	resource_base)
 
 #endif /* _CGS_COMMON_H */
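A hedged sketch of calling the new query hook from a CGS client (the device handle and result variable are hypothetical); note the caller fills in size and info_id before the call, mirroring struct cgs_system_info above:

	struct cgs_system_info sys_info = {0};
	uint32_t pcie_gen_cap = 0;
	int ret;

	sys_info.size = sizeof(struct cgs_system_info);
	sys_info.info_id = CGS_SYSTEM_INFO_PCIE_GEN_INFO;
	ret = cgs_query_system_info(cgs_device, &sys_info);
	if (!ret)
		pcie_gen_cap = (uint32_t)sys_info.value;
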

+ 6 - 0
drivers/gpu/drm/amd/powerplay/Kconfig

@@ -0,0 +1,6 @@
+config DRM_AMD_POWERPLAY
+	bool  "Enable AMD powerplay component"
+	depends on DRM_AMDGPU
+	default n
+	help
+	  Selecting this option will enable the AMD PowerPlay component.

+ 22 - 0
drivers/gpu/drm/amd/powerplay/Makefile

@@ -0,0 +1,22 @@
+
+subdir-ccflags-y += -Iinclude/drm  \
+		-Idrivers/gpu/drm/amd/powerplay/inc/  \
+		-Idrivers/gpu/drm/amd/include/asic_reg  \
+		-Idrivers/gpu/drm/amd/include  \
+		-Idrivers/gpu/drm/amd/powerplay/smumgr\
+		-Idrivers/gpu/drm/amd/powerplay/hwmgr \
+		-Idrivers/gpu/drm/amd/powerplay/eventmgr
+
+AMD_PP_PATH = ../powerplay
+
+PP_LIBS = smumgr hwmgr eventmgr
+
+AMD_POWERPLAY = $(addsuffix /Makefile,$(addprefix drivers/gpu/drm/amd/powerplay/,$(PP_LIBS)))
+
+include $(AMD_POWERPLAY)
+
+POWER_MGR = amd_powerplay.o
+
+AMD_PP_POWER = $(addprefix $(AMD_PP_PATH)/,$(POWER_MGR))
+
+AMD_POWERPLAY_FILES += $(AMD_PP_POWER)

+ 660 - 0
drivers/gpu/drm/amd/powerplay/amd_powerplay.c

@@ -0,0 +1,660 @@
+/*
+ * Copyright 2015 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+#include <linux/types.h>
+#include <linux/kernel.h>
+#include <linux/gfp.h>
+#include <linux/slab.h>
+#include "amd_shared.h"
+#include "amd_powerplay.h"
+#include "pp_instance.h"
+#include "power_state.h"
+#include "eventmanager.h"
+
+#define PP_CHECK(handle)						\
+	do {								\
+		if ((handle) == NULL || (handle)->pp_valid != PP_VALID)	\
+			return -EINVAL;					\
+	} while (0)
+
+static int pp_early_init(void *handle)
+{
+	return 0;
+}
+
+static int pp_sw_init(void *handle)
+{
+	struct pp_instance *pp_handle;
+	struct pp_hwmgr  *hwmgr;
+	int ret = 0;
+
+	if (handle == NULL)
+		return -EINVAL;
+
+	pp_handle = (struct pp_instance *)handle;
+	hwmgr = pp_handle->hwmgr;
+
+	if (hwmgr == NULL || hwmgr->pptable_func == NULL ||
+	    hwmgr->hwmgr_func == NULL ||
+	    hwmgr->pptable_func->pptable_init == NULL ||
+	    hwmgr->hwmgr_func->backend_init == NULL)
+		return -EINVAL;
+
+	ret = hwmgr->pptable_func->pptable_init(hwmgr);
+
+	if (ret == 0)
+		ret = hwmgr->hwmgr_func->backend_init(hwmgr);
+
+	return ret;
+}
+
+static int pp_sw_fini(void *handle)
+{
+	struct pp_instance *pp_handle;
+	struct pp_hwmgr  *hwmgr;
+	int ret = 0;
+
+	if (handle == NULL)
+		return -EINVAL;
+
+	pp_handle = (struct pp_instance *)handle;
+	hwmgr = pp_handle->hwmgr;
+
+	if (hwmgr != NULL && hwmgr->hwmgr_func != NULL &&
+	    hwmgr->hwmgr_func->backend_fini != NULL)
+		ret = hwmgr->hwmgr_func->backend_fini(hwmgr);
+
+	return ret;
+}
+
+static int pp_hw_init(void *handle)
+{
+	struct pp_instance *pp_handle;
+	struct pp_smumgr *smumgr;
+	struct pp_eventmgr *eventmgr;
+	int ret = 0;
+
+	if (handle == NULL)
+		return -EINVAL;
+
+	pp_handle = (struct pp_instance *)handle;
+	smumgr = pp_handle->smu_mgr;
+
+	if (smumgr == NULL || smumgr->smumgr_funcs == NULL ||
+		smumgr->smumgr_funcs->smu_init == NULL ||
+		smumgr->smumgr_funcs->start_smu == NULL)
+		return -EINVAL;
+
+	ret = smumgr->smumgr_funcs->smu_init(smumgr);
+	if (ret) {
+		printk(KERN_ERR "[ powerplay ] smc initialization failed\n");
+		return ret;
+	}
+
+	ret = smumgr->smumgr_funcs->start_smu(smumgr);
+	if (ret) {
+		printk(KERN_ERR "[ powerplay ] smc start failed\n");
+		smumgr->smumgr_funcs->smu_fini(smumgr);
+		return ret;
+	}
+
+	hw_init_power_state_table(pp_handle->hwmgr);
+	eventmgr = pp_handle->eventmgr;
+
+	if (eventmgr == NULL || eventmgr->pp_eventmgr_init == NULL)
+		return -EINVAL;
+
+	ret = eventmgr->pp_eventmgr_init(eventmgr);
+	return ret;
+}
+
+static int pp_hw_fini(void *handle)
+{
+	struct pp_instance *pp_handle;
+	struct pp_smumgr *smumgr;
+	struct pp_eventmgr *eventmgr;
+
+	if (handle == NULL)
+		return -EINVAL;
+
+	pp_handle = (struct pp_instance *)handle;
+	eventmgr = pp_handle->eventmgr;
+
+	if (eventmgr != NULL && eventmgr->pp_eventmgr_fini != NULL)
+		eventmgr->pp_eventmgr_fini(eventmgr);
+
+	smumgr = pp_handle->smu_mgr;
+
+	if (smumgr != NULL && smumgr->smumgr_funcs != NULL &&
+		smumgr->smumgr_funcs->smu_fini != NULL)
+		smumgr->smumgr_funcs->smu_fini(smumgr);
+
+	return 0;
+}
+
+static bool pp_is_idle(void *handle)
+{
+	return 0;
+}
+
+static int pp_wait_for_idle(void *handle)
+{
+	return 0;
+}
+
+static int pp_sw_reset(void *handle)
+{
+	return 0;
+}
+
+static void pp_print_status(void *handle)
+{
+
+}
+
+static int pp_set_clockgating_state(void *handle,
+				    enum amd_clockgating_state state)
+{
+	return 0;
+}
+
+static int pp_set_powergating_state(void *handle,
+				    enum amd_powergating_state state)
+{
+	return 0;
+}
+
+static int pp_suspend(void *handle)
+{
+	struct pp_instance *pp_handle;
+	struct pp_eventmgr *eventmgr;
+	struct pem_event_data event_data = { {0} };
+
+	if (handle == NULL)
+		return -EINVAL;
+
+	pp_handle = (struct pp_instance *)handle;
+	eventmgr = pp_handle->eventmgr;
+	pem_handle_event(eventmgr, AMD_PP_EVENT_SUSPEND, &event_data);
+	return 0;
+}
+
+static int pp_resume(void *handle)
+{
+	struct pp_instance *pp_handle;
+	struct pp_eventmgr *eventmgr;
+	struct pem_event_data event_data = { {0} };
+	struct pp_smumgr *smumgr;
+	int ret;
+
+	if (handle == NULL)
+		return -EINVAL;
+
+	pp_handle = (struct pp_instance *)handle;
+	smumgr = pp_handle->smu_mgr;
+
+	if (smumgr == NULL || smumgr->smumgr_funcs == NULL ||
+		smumgr->smumgr_funcs->start_smu == NULL)
+		return -EINVAL;
+
+	ret = smumgr->smumgr_funcs->start_smu(smumgr);
+	if (ret) {
+		printk(KERN_ERR "[ powerplay ] smc start failed\n");
+		smumgr->smumgr_funcs->smu_fini(smumgr);
+		return ret;
+	}
+
+	eventmgr = pp_handle->eventmgr;
+	pem_handle_event(eventmgr, AMD_PP_EVENT_RESUME, &event_data);
+
+	return 0;
+}
+
+const struct amd_ip_funcs pp_ip_funcs = {
+	.early_init = pp_early_init,
+	.late_init = NULL,
+	.sw_init = pp_sw_init,
+	.sw_fini = pp_sw_fini,
+	.hw_init = pp_hw_init,
+	.hw_fini = pp_hw_fini,
+	.suspend = pp_suspend,
+	.resume = pp_resume,
+	.is_idle = pp_is_idle,
+	.wait_for_idle = pp_wait_for_idle,
+	.soft_reset = pp_sw_reset,
+	.print_status = pp_print_status,
+	.set_clockgating_state = pp_set_clockgating_state,
+	.set_powergating_state = pp_set_powergating_state,
+};
+
+static int pp_dpm_load_fw(void *handle)
+{
+	return 0;
+}
+
+static int pp_dpm_fw_loading_complete(void *handle)
+{
+	return 0;
+}
+
+static int pp_dpm_force_performance_level(void *handle,
+					enum amd_dpm_forced_level level)
+{
+	struct pp_instance *pp_handle;
+	struct pp_hwmgr  *hwmgr;
+
+	if (handle == NULL)
+		return -EINVAL;
+
+	pp_handle = (struct pp_instance *)handle;
+
+	hwmgr = pp_handle->hwmgr;
+
+	if (hwmgr == NULL || hwmgr->hwmgr_func == NULL ||
+	    hwmgr->hwmgr_func->force_dpm_level == NULL)
+		return -EINVAL;
+
+	hwmgr->hwmgr_func->force_dpm_level(hwmgr, level);
+
+	return 0;
+}
+
+static enum amd_dpm_forced_level pp_dpm_get_performance_level(
+								void *handle)
+{
+	struct pp_hwmgr  *hwmgr;
+
+	if (handle == NULL)
+		return -EINVAL;
+
+	hwmgr = ((struct pp_instance *)handle)->hwmgr;
+
+	if (hwmgr == NULL)
+		return -EINVAL;
+
+	return (((struct pp_instance *)handle)->hwmgr->dpm_level);
+}
+
+static int pp_dpm_get_sclk(void *handle, bool low)
+{
+	struct pp_hwmgr  *hwmgr;
+
+	if (handle == NULL)
+		return -EINVAL;
+
+	hwmgr = ((struct pp_instance *)handle)->hwmgr;
+
+	if (hwmgr == NULL || hwmgr->hwmgr_func == NULL ||
+	    hwmgr->hwmgr_func->get_sclk == NULL)
+		return -EINVAL;
+
+	return hwmgr->hwmgr_func->get_sclk(hwmgr, low);
+}
+
+static int pp_dpm_get_mclk(void *handle, bool low)
+{
+	struct pp_hwmgr  *hwmgr;
+
+	if (handle == NULL)
+		return -EINVAL;
+
+	hwmgr = ((struct pp_instance *)handle)->hwmgr;
+
+	if (hwmgr == NULL || hwmgr->hwmgr_func == NULL ||
+	    hwmgr->hwmgr_func->get_mclk == NULL)
+		return -EINVAL;
+
+	return hwmgr->hwmgr_func->get_mclk(hwmgr, low);
+}
+
+static int pp_dpm_powergate_vce(void *handle, bool gate)
+{
+	struct pp_hwmgr  *hwmgr;
+
+	if (handle == NULL)
+		return -EINVAL;
+
+	hwmgr = ((struct pp_instance *)handle)->hwmgr;
+
+	if (hwmgr == NULL || hwmgr->hwmgr_func == NULL ||
+	    hwmgr->hwmgr_func->powergate_vce == NULL)
+		return -EINVAL;
+
+	return hwmgr->hwmgr_func->powergate_vce(hwmgr, gate);
+}
+
+static int pp_dpm_powergate_uvd(void *handle, bool gate)
+{
+	struct pp_hwmgr  *hwmgr;
+
+	if (handle == NULL)
+		return -EINVAL;
+
+	hwmgr = ((struct pp_instance *)handle)->hwmgr;
+
+	if (hwmgr == NULL || hwmgr->hwmgr_func == NULL ||
+	    hwmgr->hwmgr_func->powergate_uvd == NULL)
+		return -EINVAL;
+
+	return hwmgr->hwmgr_func->powergate_uvd(hwmgr, gate);
+}
+
+static enum PP_StateUILabel power_state_convert(enum amd_pm_state_type  state)
+{
+	switch (state) {
+	case POWER_STATE_TYPE_BATTERY:
+		return PP_StateUILabel_Battery;
+	case POWER_STATE_TYPE_BALANCED:
+		return PP_StateUILabel_Balanced;
+	case POWER_STATE_TYPE_PERFORMANCE:
+		return PP_StateUILabel_Performance;
+	default:
+		return PP_StateUILabel_None;
+	}
+}
+
+int pp_dpm_dispatch_tasks(void *handle, enum amd_pp_event event_id, void *input, void *output)
+{
+	int ret = 0;
+	struct pp_instance *pp_handle;
+	struct pem_event_data data = { {0} };
+
+	pp_handle = (struct pp_instance *)handle;
+
+	if (pp_handle == NULL)
+		return -EINVAL;
+
+	switch (event_id) {
+	case AMD_PP_EVENT_DISPLAY_CONFIG_CHANGE:
+		ret = pem_handle_event(pp_handle->eventmgr, event_id, &data);
+		break;
+	case AMD_PP_EVENT_ENABLE_USER_STATE:
+	{
+		enum amd_pm_state_type  ps;
+
+		if (input == NULL)
+			return -EINVAL;
+		ps = *(unsigned long *)input;
+
+		data.requested_ui_label = power_state_convert(ps);
+		ret = pem_handle_event(pp_handle->eventmgr, event_id, &data);
+	}
+	break;
+	default:
+		break;
+	}
+	return ret;
+}
+
+enum amd_pm_state_type pp_dpm_get_current_power_state(void *handle)
+{
+	struct pp_hwmgr *hwmgr;
+	struct pp_power_state *state;
+
+	if (handle == NULL)
+		return -EINVAL;
+
+	hwmgr = ((struct pp_instance *)handle)->hwmgr;
+
+	if (hwmgr == NULL || hwmgr->current_ps == NULL)
+		return -EINVAL;
+
+	state = hwmgr->current_ps;
+
+	switch (state->classification.ui_label) {
+	case PP_StateUILabel_Battery:
+		return POWER_STATE_TYPE_BATTERY;
+	case PP_StateUILabel_Balanced:
+		return POWER_STATE_TYPE_BALANCED;
+	case PP_StateUILabel_Performance:
+		return POWER_STATE_TYPE_PERFORMANCE;
+	default:
+		return POWER_STATE_TYPE_DEFAULT;
+	}
+}
+
+static void
+pp_debugfs_print_current_performance_level(void *handle,
+					       struct seq_file *m)
+{
+	struct pp_hwmgr  *hwmgr;
+
+	if (handle == NULL)
+		return;
+
+	hwmgr = ((struct pp_instance *)handle)->hwmgr;
+
+	if (hwmgr == NULL || hwmgr->hwmgr_func == NULL ||
+	  hwmgr->hwmgr_func->print_current_perforce_level == NULL)
+		return;
+
+	hwmgr->hwmgr_func->print_current_perforce_level(hwmgr, m);
+}
+
+static int pp_dpm_set_fan_control_mode(void *handle, uint32_t mode)
+{
+	struct pp_hwmgr  *hwmgr;
+
+	if (handle == NULL)
+		return -EINVAL;
+
+	hwmgr = ((struct pp_instance *)handle)->hwmgr;
+
+	if (hwmgr == NULL || hwmgr->hwmgr_func == NULL ||
+	  hwmgr->hwmgr_func->set_fan_control_mode == NULL)
+		return -EINVAL;
+
+	return hwmgr->hwmgr_func->set_fan_control_mode(hwmgr, mode);
+}
+
+static int pp_dpm_get_fan_control_mode(void *handle)
+{
+	struct pp_hwmgr  *hwmgr;
+
+	if (handle == NULL)
+		return -EINVAL;
+
+	hwmgr = ((struct pp_instance *)handle)->hwmgr;
+
+	if (hwmgr == NULL || hwmgr->hwmgr_func == NULL ||
+	  hwmgr->hwmgr_func->get_fan_control_mode == NULL)
+		return -EINVAL;
+
+	return hwmgr->hwmgr_func->get_fan_control_mode(hwmgr);
+}
+
+static int pp_dpm_set_fan_speed_percent(void *handle, uint32_t percent)
+{
+	struct pp_hwmgr  *hwmgr;
+
+	if (handle == NULL)
+		return -EINVAL;
+
+	hwmgr = ((struct pp_instance *)handle)->hwmgr;
+
+	if (hwmgr == NULL || hwmgr->hwmgr_func == NULL ||
+	  hwmgr->hwmgr_func->set_fan_speed_percent == NULL)
+		return -EINVAL;
+
+	return hwmgr->hwmgr_func->set_fan_speed_percent(hwmgr, percent);
+}
+
+static int pp_dpm_get_fan_speed_percent(void *handle, uint32_t *speed)
+{
+	struct pp_hwmgr  *hwmgr;
+
+	if (handle == NULL)
+		return -EINVAL;
+
+	hwmgr = ((struct pp_instance *)handle)->hwmgr;
+
+	if (hwmgr == NULL || hwmgr->hwmgr_func == NULL ||
+	  hwmgr->hwmgr_func->get_fan_speed_percent == NULL)
+		return -EINVAL;
+
+	return hwmgr->hwmgr_func->get_fan_speed_percent(hwmgr, speed);
+}
+
+static int pp_dpm_get_temperature(void *handle)
+{
+	struct pp_hwmgr  *hwmgr;
+
+	if (handle == NULL)
+		return -EINVAL;
+
+	hwmgr = ((struct pp_instance *)handle)->hwmgr;
+
+	if (hwmgr == NULL || hwmgr->hwmgr_func == NULL ||
+	  hwmgr->hwmgr_func->get_temperature == NULL)
+		return -EINVAL;
+
+	return hwmgr->hwmgr_func->get_temperature(hwmgr);
+}
+
+const struct amd_powerplay_funcs pp_dpm_funcs = {
+	.get_temperature = pp_dpm_get_temperature,
+	.load_firmware = pp_dpm_load_fw,
+	.wait_for_fw_loading_complete = pp_dpm_fw_loading_complete,
+	.force_performance_level = pp_dpm_force_performance_level,
+	.get_performance_level = pp_dpm_get_performance_level,
+	.get_current_power_state = pp_dpm_get_current_power_state,
+	.get_sclk = pp_dpm_get_sclk,
+	.get_mclk = pp_dpm_get_mclk,
+	.powergate_vce = pp_dpm_powergate_vce,
+	.powergate_uvd = pp_dpm_powergate_uvd,
+	.dispatch_tasks = pp_dpm_dispatch_tasks,
+	.print_current_performance_level = pp_debugfs_print_current_performance_level,
+	.set_fan_control_mode = pp_dpm_set_fan_control_mode,
+	.get_fan_control_mode = pp_dpm_get_fan_control_mode,
+	.set_fan_speed_percent = pp_dpm_set_fan_speed_percent,
+	.get_fan_speed_percent = pp_dpm_get_fan_speed_percent,
+};
+
+static int amd_pp_instance_init(struct amd_pp_init *pp_init,
+				struct amd_powerplay *amd_pp)
+{
+	int ret;
+	struct pp_instance *handle;
+
+	handle = kzalloc(sizeof(struct pp_instance), GFP_KERNEL);
+	if (handle == NULL)
+		return -ENOMEM;
+
+	handle->pp_valid = PP_VALID;
+
+	ret = smum_init(pp_init, handle);
+	if (ret)
+		goto fail_smum;
+
+	ret = hwmgr_init(pp_init, handle);
+	if (ret)
+		goto fail_hwmgr;
+
+	ret = eventmgr_init(handle);
+	if (ret)
+		goto fail_eventmgr;
+
+	amd_pp->pp_handle = handle;
+	return 0;
+
+fail_eventmgr:
+	hwmgr_fini(handle->hwmgr);
+fail_hwmgr:
+	smum_fini(handle->smu_mgr);
+fail_smum:
+	kfree(handle);
+	return ret;
+}
+
+static int amd_pp_instance_fini(void *handle)
+{
+	struct pp_instance *instance = (struct pp_instance *)handle;
+
+	if (instance == NULL)
+		return -EINVAL;
+
+	eventmgr_fini(instance->eventmgr);
+
+	hwmgr_fini(instance->hwmgr);
+
+	smum_fini(instance->smu_mgr);
+
+	kfree(handle);
+	return 0;
+}
+
+int amd_powerplay_init(struct amd_pp_init *pp_init,
+		       struct amd_powerplay *amd_pp)
+{
+	int ret;
+
+	if (pp_init == NULL || amd_pp == NULL)
+		return -EINVAL;
+
+	ret = amd_pp_instance_init(pp_init, amd_pp);
+
+	if (ret)
+		return ret;
+
+	amd_pp->ip_funcs = &pp_ip_funcs;
+	amd_pp->pp_funcs = &pp_dpm_funcs;
+
+	return 0;
+}
+
+int amd_powerplay_fini(void *handle)
+{
+	amd_pp_instance_fini(handle);
+
+	return 0;
+}
+
+/* export this function to DAL */
+
+int amd_powerplay_display_configuration_change(void *handle, const void *input)
+{
+	struct pp_hwmgr  *hwmgr;
+	const struct amd_pp_display_configuration *display_config = input;
+
+	PP_CHECK((struct pp_instance *)handle);
+
+	hwmgr = ((struct pp_instance *)handle)->hwmgr;
+
+	phm_store_dal_configuration_data(hwmgr, display_config);
+
+	return 0;
+}
+
+int amd_powerplay_get_display_power_level(void *handle,
+		struct amd_pp_dal_clock_info *output)
+{
+	struct pp_hwmgr  *hwmgr;
+
+	PP_CHECK((struct pp_instance *)handle);
+
+	if (output == NULL)
+		return -EINVAL;
+
+	hwmgr = ((struct pp_instance *)handle)->hwmgr;
+
+	return phm_get_dal_power_level(hwmgr, output);
+}
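For orientation, a hedged sketch of the driver-side handshake with this file (the contents of struct amd_pp_init are defined elsewhere in the series, so the initializer here is only indicative):

	struct amd_pp_init pp_init = { 0 };	/* device/asic fields omitted */
	struct amd_powerplay amd_pp;
	int ret;

	ret = amd_powerplay_init(&pp_init, &amd_pp);
	if (ret == 0) {
		/* ip_funcs drive init/suspend/resume; pp_funcs are the dpm API */
		int temp = amd_pp.pp_funcs->get_temperature(amd_pp.pp_handle);

		(void)temp;
		amd_powerplay_fini(amd_pp.pp_handle);
	}
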

+ 11 - 0
drivers/gpu/drm/amd/powerplay/eventmgr/Makefile

@@ -0,0 +1,11 @@
+#
+# Makefile for the 'event manager' sub-component of powerplay.
+# It provides the event management services for the driver.
+
+EVENT_MGR = eventmgr.o eventinit.o eventmanagement.o  \
+		eventactionchains.o eventsubchains.o eventtasks.o psm.o
+
+AMD_PP_EVENT = $(addprefix $(AMD_PP_PATH)/eventmgr/,$(EVENT_MGR))
+
+AMD_POWERPLAY_FILES += $(AMD_PP_EVENT)
+

+ 289 - 0
drivers/gpu/drm/amd/powerplay/eventmgr/eventactionchains.c

@@ -0,0 +1,289 @@
+/*
+ * Copyright 2015 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+#include "eventmgr.h"
+#include "eventactionchains.h"
+#include "eventsubchains.h"
+
+static const pem_event_action *initialize_event[] = {
+	block_adjust_power_state_tasks,
+	power_budget_tasks,
+	system_config_tasks,
+	setup_asic_tasks,
+	enable_dynamic_state_management_tasks,
+	enable_clock_power_gatings_tasks,
+	get_2d_performance_state_tasks,
+	set_performance_state_tasks,
+	initialize_thermal_controller_tasks,
+	conditionally_force_3d_performance_state_tasks,
+	process_vbios_eventinfo_tasks,
+	broadcast_power_policy_tasks,
+	NULL
+};
+
+const struct action_chain initialize_action_chain = {
+	"Initialize",
+	initialize_event
+};
+
+static const pem_event_action *uninitialize_event[] = {
+	ungate_all_display_phys_tasks,
+	uninitialize_display_phy_access_tasks,
+	disable_gfx_voltage_island_power_gating_tasks,
+	disable_gfx_clock_gating_tasks,
+	set_boot_state_tasks,
+	adjust_power_state_tasks,
+	disable_dynamic_state_management_tasks,
+	disable_clock_power_gatings_tasks,
+	cleanup_asic_tasks,
+	prepare_for_pnp_stop_tasks,
+	NULL
+};
+
+const struct action_chain uninitialize_action_chain = {
+	"Uninitialize",
+	uninitialize_event
+};
+
+static const pem_event_action *power_source_change_event_pp_enabled[] = {
+	set_power_source_tasks,
+	set_power_saving_state_tasks,
+	adjust_power_state_tasks,
+	enable_disable_fps_tasks,
+	set_nbmcu_state_tasks,
+	broadcast_power_policy_tasks,
+	NULL
+};
+
+const struct action_chain power_source_change_action_chain_pp_enabled = {
+	"Power source change - PowerPlay enabled",
+	power_source_change_event_pp_enabled
+};
+
+static const pem_event_action *power_source_change_event_pp_disabled[] = {
+	set_power_source_tasks,
+	set_nbmcu_state_tasks,
+	NULL
+};
+
+const struct action_chain power_source_changes_action_chain_pp_disabled = {
+	"Power source change - PowerPlay disabled",
+	power_source_change_event_pp_disabled
+};
+
+static const pem_event_action *power_source_change_event_hardware_dc[] = {
+	set_power_source_tasks,
+	set_power_saving_state_tasks,
+	adjust_power_state_tasks,
+	enable_disable_fps_tasks,
+	reset_hardware_dc_notification_tasks,
+	set_nbmcu_state_tasks,
+	broadcast_power_policy_tasks,
+	NULL
+};
+
+const struct action_chain power_source_change_action_chain_hardware_dc = {
+	"Power source change - with Hardware DC switching",
+	power_source_change_event_hardware_dc
+};
+
+static const pem_event_action *suspend_event[] = {
+	reset_display_phy_access_tasks,
+	unregister_interrupt_tasks,
+	disable_gfx_voltage_island_power_gating_tasks,
+	disable_gfx_clock_gating_tasks,
+	notify_smu_suspend_tasks,
+	disable_smc_firmware_ctf_tasks,
+	set_boot_state_tasks,
+	adjust_power_state_tasks,
+	disable_fps_tasks,
+	vari_bright_suspend_tasks,
+	reset_fan_speed_to_default_tasks,
+	power_down_asic_tasks,
+	disable_stutter_mode_tasks,
+	set_connected_standby_tasks,
+	block_hw_access_tasks,
+	NULL
+};
+
+const struct action_chain suspend_action_chain = {
+	"Suspend",
+	suspend_event
+};
+
+static const pem_event_action *resume_event[] = {
+	unblock_hw_access_tasks,
+	resume_connected_standby_tasks,
+	notify_smu_resume_tasks,
+	reset_display_configCounter_tasks,
+	update_dal_configuration_tasks,
+	vari_bright_resume_tasks,
+	block_adjust_power_state_tasks,
+	setup_asic_tasks,
+	enable_stutter_mode_tasks, /*must do this in boot state and before SMC is started */
+	enable_dynamic_state_management_tasks,
+	enable_clock_power_gatings_tasks,
+	enable_disable_bapm_tasks,
+	initialize_thermal_controller_tasks,
+	reset_boot_state_tasks,
+	adjust_power_state_tasks,
+	enable_disable_fps_tasks,
+	notify_hw_power_source_tasks,
+	process_vbios_event_info_tasks,
+	enable_gfx_clock_gating_tasks,
+	enable_gfx_voltage_island_power_gating_tasks,
+	reset_clock_gating_tasks,
+	notify_smu_vpu_recovery_end_tasks,
+	disable_vpu_cap_tasks,
+	execute_escape_sequence_tasks,
+	NULL
+};
+
+
+const struct action_chain resume_action_chain = {
+	"resume",
+	resume_event
+};
+
+static const pem_event_action *complete_init_event[] = {
+	adjust_power_state_tasks,
+	enable_gfx_clock_gating_tasks,
+	enable_gfx_voltage_island_power_gating_tasks,
+	notify_power_state_change_tasks,
+	NULL
+};
+
+const struct action_chain complete_init_action_chain = {
+	"complete init",
+	complete_init_event
+};
+
+static const pem_event_action *enable_gfx_clock_gating_event[] = {
+	enable_gfx_clock_gating_tasks,
+	NULL
+};
+
+const struct action_chain enable_gfx_clock_gating_action_chain = {
+	"enable gfx clock gate",
+	enable_gfx_clock_gating_event
+};
+
+static const pem_event_action *disable_gfx_clock_gating_event[] = {
+	disable_gfx_clock_gating_tasks,
+	NULL
+};
+
+const struct action_chain disable_gfx_clock_gating_action_chain = {
+	"disable gfx clock gate",
+	disable_gfx_clock_gating_event
+};
+
+static const pem_event_action *enable_cgpg_event[] = {
+	enable_cgpg_tasks,
+	NULL
+};
+
+const struct action_chain enable_cgpg_action_chain = {
+	"enable cg pg",
+	enable_cgpg_event
+};
+
+static const pem_event_action *disable_cgpg_event[] = {
+	disable_cgpg_tasks,
+	NULL
+};
+
+const struct action_chain disable_cgpg_action_chain = {
+	"disable cg pg",
+	disable_cgpg_event
+};
+
+
+/* Enable user _2d performance and activate */
+
+static const pem_event_action *enable_user_state_event[] = {
+	create_new_user_performance_state_tasks,
+	adjust_power_state_tasks,
+	NULL
+};
+
+const struct action_chain enable_user_state_action_chain = {
+	"Enable user state",
+	enable_user_state_event
+};
+
+static const pem_event_action *enable_user_2d_performance_event[] = {
+	enable_user_2d_performance_tasks,
+	add_user_2d_performance_state_tasks,
+	set_performance_state_tasks,
+	adjust_power_state_tasks,
+	delete_user_2d_performance_state_tasks,
+	NULL
+};
+
+const struct action_chain enable_user_2d_performance_action_chain = {
+	"enable_user_2d_performance_event_activate",
+	enable_user_2d_performance_event
+};
+
+
+static const pem_event_action *disable_user_2d_performance_event[] = {
+	disable_user_2d_performance_tasks,
+	delete_user_2d_performance_state_tasks,
+	NULL
+};
+
+const struct action_chain disable_user_2d_performance_action_chain = {
+	"disable_user_2d_performance_event",
+	disable_user_2d_performance_event
+};
+
+
+static const pem_event_action *display_config_change_event[] = {
+	/* countDisplayConfigurationChangeEventTasks, */
+	unblock_adjust_power_state_tasks,
+	set_cpu_power_state,
+	notify_hw_power_source_tasks,
+	/* updateDALConfigurationTasks,
+	variBrightDisplayConfigurationChangeTasks, */
+	adjust_power_state_tasks,
+	/*enableDisableFPSTasks,
+	setNBMCUStateTasks,
+	notifyPCIEDeviceReadyTasks,*/
+	NULL
+};
+
+const struct action_chain display_config_change_action_chain = {
+	"Display configuration change",
+	display_config_change_event
+};
+
+static const pem_event_action *readjust_power_state_event[] = {
+	adjust_power_state_tasks,
+	NULL
+};
+
+const struct action_chain readjust_power_state_action_chain = {
+	"re-adjust power state",
+	readjust_power_state_event
+};
+
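Each chain above is a NULL-terminated array of task lists, so dispatch reduces to a linear walk. A hedged sketch of how the event manager presumably consumes one (the field and helper names are assumptions; the real loop lives in eventmanagement.c, which is not part of this excerpt):

	const pem_event_action * const *paction;
	int result = 0;

	for (paction = chain->action_chain; *paction != NULL; paction++) {
		/* each entry is itself a NULL-terminated list of tasks;
		 * pem_task_run_list() is a hypothetical runner */
		result = pem_task_run_list(eventmgr, *paction, event_data);
		if (result != 0)
			break;	/* abort the chain on the first failure */
	}
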

+ 62 - 0
drivers/gpu/drm/amd/powerplay/eventmgr/eventactionchains.h

@@ -0,0 +1,62 @@
+/*
+ * Copyright 2015 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+#ifndef _EVENT_ACTION_CHAINS_H_
+#define _EVENT_ACTION_CHAINS_H_
+#include "eventmgr.h"
+
+extern const struct action_chain initialize_action_chain;
+
+extern const struct action_chain uninitialize_action_chain;
+
+extern const struct action_chain power_source_change_action_chain_pp_enabled;
+
+extern const struct action_chain power_source_changes_action_chain_pp_disabled;
+
+extern const struct action_chain power_source_change_action_chain_hardware_dc;
+
+extern const struct action_chain suspend_action_chain;
+
+extern const struct action_chain resume_action_chain;
+
+extern const struct action_chain complete_init_action_chain;
+
+extern const struct action_chain enable_gfx_clock_gating_action_chain;
+
+extern const struct action_chain disable_gfx_clock_gating_action_chain;
+
+extern const struct action_chain enable_cgpg_action_chain;
+
+extern const struct action_chain disable_cgpg_action_chain;
+
+extern const struct action_chain enable_user_2d_performance_action_chain;
+
+extern const struct action_chain disable_user_2d_performance_action_chain;
+
+extern const struct action_chain enable_user_state_action_chain;
+
+extern const struct action_chain readjust_power_state_action_chain;
+
+extern const struct action_chain display_config_change_action_chain;
+
+#endif /*_EVENT_ACTION_CHAINS_H_*/
+

+ 195 - 0
drivers/gpu/drm/amd/powerplay/eventmgr/eventinit.c

@@ -0,0 +1,195 @@
+/*
+ * Copyright 2015 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+#include "eventmgr.h"
+#include "eventinit.h"
+#include "ppinterrupt.h"
+#include "hardwaremanager.h"
+
+void pem_init_feature_info(struct pp_eventmgr *eventmgr)
+{
+
+	/* PowerPlay info */
+	eventmgr->ui_state_info[PP_PowerSource_AC].default_ui_lable =
+					    PP_StateUILabel_Performance;
+
+	eventmgr->ui_state_info[PP_PowerSource_AC].current_ui_label =
+					    PP_StateUILabel_Performance;
+
+	eventmgr->ui_state_info[PP_PowerSource_DC].default_ui_lable =
+						  PP_StateUILabel_Battery;
+
+	eventmgr->ui_state_info[PP_PowerSource_DC].current_ui_label =
+						  PP_StateUILabel_Battery;
+
+	if (phm_cap_enabled(eventmgr->platform_descriptor->platformCaps, PHM_PlatformCaps_PowerPlaySupport)) {
+		eventmgr->features[PP_Feature_PowerPlay].supported = true;
+		eventmgr->features[PP_Feature_PowerPlay].version = PEM_CURRENT_POWERPLAY_FEATURE_VERSION;
+		eventmgr->features[PP_Feature_PowerPlay].enabled_default = true;
+		eventmgr->features[PP_Feature_PowerPlay].enabled = true;
+	} else {
+		eventmgr->features[PP_Feature_PowerPlay].supported = false;
+		eventmgr->features[PP_Feature_PowerPlay].enabled = false;
+		eventmgr->features[PP_Feature_PowerPlay].enabled_default = false;
+	}
+
+	eventmgr->features[PP_Feature_Force3DClock].supported = true;
+	eventmgr->features[PP_Feature_Force3DClock].enabled = false;
+	eventmgr->features[PP_Feature_Force3DClock].enabled_default = false;
+	eventmgr->features[PP_Feature_Force3DClock].version = 1;
+
+	/* over drive*/
+	eventmgr->features[PP_Feature_User2DPerformance].version = 4;
+	eventmgr->features[PP_Feature_User3DPerformance].version = 4;
+	eventmgr->features[PP_Feature_OverdriveTest].version = 4;
+
+	eventmgr->features[PP_Feature_OverDrive].version = 4;
+	eventmgr->features[PP_Feature_OverDrive].enabled = false;
+	eventmgr->features[PP_Feature_OverDrive].enabled_default = false;
+
+	eventmgr->features[PP_Feature_User2DPerformance].supported = false;
+	eventmgr->features[PP_Feature_User2DPerformance].enabled = false;
+	eventmgr->features[PP_Feature_User2DPerformance].enabled_default = false;
+
+	eventmgr->features[PP_Feature_User3DPerformance].supported = false;
+	eventmgr->features[PP_Feature_User3DPerformance].enabled = false;
+	eventmgr->features[PP_Feature_User3DPerformance].enabled_default = false;
+
+	eventmgr->features[PP_Feature_OverdriveTest].supported = false;
+	eventmgr->features[PP_Feature_OverdriveTest].enabled = false;
+	eventmgr->features[PP_Feature_OverdriveTest].enabled_default = false;
+
+	eventmgr->features[PP_Feature_OverDrive].supported = false;
+
+	eventmgr->features[PP_Feature_PowerBudgetWaiver].enabled_default = false;
+	eventmgr->features[PP_Feature_PowerBudgetWaiver].version = 1;
+	eventmgr->features[PP_Feature_PowerBudgetWaiver].supported = false;
+	eventmgr->features[PP_Feature_PowerBudgetWaiver].enabled = false;
+
+	/* Multi UVD States support */
+	eventmgr->features[PP_Feature_MultiUVDState].supported = false;
+	eventmgr->features[PP_Feature_MultiUVDState].enabled = false;
+	eventmgr->features[PP_Feature_MultiUVDState].enabled_default = false;
+
+	/* Dynamic UVD States support */
+	eventmgr->features[PP_Feature_DynamicUVDState].supported = false;
+	eventmgr->features[PP_Feature_DynamicUVDState].enabled = false;
+	eventmgr->features[PP_Feature_DynamicUVDState].enabled_default = false;
+
+	/* VCE DPM support */
+	eventmgr->features[PP_Feature_VCEDPM].supported = false;
+	eventmgr->features[PP_Feature_VCEDPM].enabled = false;
+	eventmgr->features[PP_Feature_VCEDPM].enabled_default = false;
+
+	/* ACP PowerGating support */
+	eventmgr->features[PP_Feature_ACP_POWERGATING].supported = false;
+	eventmgr->features[PP_Feature_ACP_POWERGATING].enabled = false;
+	eventmgr->features[PP_Feature_ACP_POWERGATING].enabled_default = false;
+
+	/* PPM support */
+	eventmgr->features[PP_Feature_PPM].version = 1;
+	eventmgr->features[PP_Feature_PPM].supported = false;
+	eventmgr->features[PP_Feature_PPM].enabled = false;
+
+	/* FFC support (enables fan and temp settings, Gemini needs temp settings) */
+	if (phm_cap_enabled(eventmgr->platform_descriptor->platformCaps, PHM_PlatformCaps_ODFuzzyFanControlSupport) ||
+	    phm_cap_enabled(eventmgr->platform_descriptor->platformCaps, PHM_PlatformCaps_GeminiRegulatorFanControlSupport)) {
+		eventmgr->features[PP_Feature_FFC].version = 1;
+		eventmgr->features[PP_Feature_FFC].supported = true;
+		eventmgr->features[PP_Feature_FFC].enabled = true;
+		eventmgr->features[PP_Feature_FFC].enabled_default = true;
+	} else {
+		eventmgr->features[PP_Feature_FFC].supported = false;
+		eventmgr->features[PP_Feature_FFC].enabled = false;
+		eventmgr->features[PP_Feature_FFC].enabled_default = false;
+	}
+
+	eventmgr->features[PP_Feature_VariBright].supported = false;
+	eventmgr->features[PP_Feature_VariBright].enabled = false;
+	eventmgr->features[PP_Feature_VariBright].enabled_default = false;
+
+	eventmgr->features[PP_Feature_BACO].supported = false;
+	eventmgr->features[PP_Feature_BACO].enabled = false;
+	eventmgr->features[PP_Feature_BACO].enabled_default = false;
+
+	/* PowerDown feature support */
+	eventmgr->features[PP_Feature_PowerDown].supported = false;
+	eventmgr->features[PP_Feature_PowerDown].enabled = false;
+	eventmgr->features[PP_Feature_PowerDown].enabled_default = false;
+
+	eventmgr->features[PP_Feature_FPS].version = 1;
+	eventmgr->features[PP_Feature_FPS].supported = false;
+	eventmgr->features[PP_Feature_FPS].enabled_default = false;
+	eventmgr->features[PP_Feature_FPS].enabled = false;
+
+	eventmgr->features[PP_Feature_ViPG].version = 1;
+	eventmgr->features[PP_Feature_ViPG].supported = false;
+	eventmgr->features[PP_Feature_ViPG].enabled_default = false;
+	eventmgr->features[PP_Feature_ViPG].enabled = false;
+}
+
+static int thermal_interrupt_callback(void *private_data,
+				      unsigned src_id, const uint32_t *iv_entry)
+{
+	/* TODO: handle PEM_Event_ThermalNotification via (struct pp_eventmgr *)private_data */
+	printk(KERN_WARNING "current thermal is out of range\n");
+	return 0;
+}
+
+int pem_register_interrupts(struct pp_eventmgr *eventmgr)
+{
+	int result = 0;
+	struct pp_interrupt_registration_info info;
+
+	info.call_back = thermal_interrupt_callback;
+	info.context = eventmgr;
+
+	result = phm_register_thermal_interrupt(eventmgr->hwmgr, &info);
+
+	/* TODO:
+	 * 2. Register CTF event interrupt
+	 * 3. Register for vbios events interrupt
+	 * 4. Register External Throttle Interrupt
+	 * 5. Register Smc To Host Interrupt
+	 */
+	return result;
+}
+
+
+int pem_unregister_interrupts(struct pp_eventmgr *eventmgr)
+{
+	return 0;
+}
+
+
+void pem_uninit_feature_info(struct pp_eventmgr *eventmgr)
+{
+	eventmgr->features[PP_Feature_MultiUVDState].supported = false;
+	eventmgr->features[PP_Feature_VariBright].supported = false;
+	eventmgr->features[PP_Feature_PowerBudgetWaiver].supported = false;
+	eventmgr->features[PP_Feature_OverDrive].supported = false;
+	eventmgr->features[PP_Feature_OverdriveTest].supported = false;
+	eventmgr->features[PP_Feature_User3DPerformance].supported = false;
+	eventmgr->features[PP_Feature_User2DPerformance].supported = false;
+	eventmgr->features[PP_Feature_PowerPlay].supported = false;
+	eventmgr->features[PP_Feature_Force3DClock].supported = false;
+}

+ 34 - 0
drivers/gpu/drm/amd/powerplay/eventmgr/eventinit.h

@@ -0,0 +1,34 @@
+/*
+ * Copyright 2015 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#ifndef _EVENTINIT_H_
+#define _EVENTINIT_H_
+
+#define PEM_CURRENT_POWERPLAY_FEATURE_VERSION 4
+
+void pem_init_feature_info(struct pp_eventmgr *eventmgr);
+void pem_uninit_feature_info(struct pp_eventmgr *eventmgr);
+int pem_register_interrupts(struct pp_eventmgr *eventmgr);
+int pem_unregister_interrupts(struct pp_eventmgr *eventmgr);
+
+#endif /* _EVENTINIT_H_ */

+ 215 - 0
drivers/gpu/drm/amd/powerplay/eventmgr/eventmanagement.c

@@ -0,0 +1,215 @@
+/*
+ * Copyright 2015 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+#include "eventmanagement.h"
+#include "eventmgr.h"
+#include "eventactionchains.h"
+
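+/*
+ * Build the per-event dispatch table.  Getters that return NULL leave the
+ * slot empty, so handling such an event fails with -EINVAL in
+ * pem_execute_event_chain().
+ */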
+int pem_init_event_action_chains(struct pp_eventmgr *eventmgr)
+{
+	int i;
+
+	for (i = 0; i < AMD_PP_EVENT_MAX; i++)
+		eventmgr->event_chain[i] = NULL;
+
+	eventmgr->event_chain[AMD_PP_EVENT_SUSPEND] = pem_get_suspend_action_chain(eventmgr);
+	eventmgr->event_chain[AMD_PP_EVENT_INITIALIZE] = pem_get_initialize_action_chain(eventmgr);
+	eventmgr->event_chain[AMD_PP_EVENT_UNINITIALIZE] = pem_get_uninitialize_action_chain(eventmgr);
+	eventmgr->event_chain[AMD_PP_EVENT_POWER_SOURCE_CHANGE] = pem_get_power_source_change_action_chain(eventmgr);
+	eventmgr->event_chain[AMD_PP_EVENT_HIBERNATE] = pem_get_hibernate_action_chain(eventmgr);
+	eventmgr->event_chain[AMD_PP_EVENT_RESUME] = pem_get_resume_action_chain(eventmgr);
+	eventmgr->event_chain[AMD_PP_EVENT_THERMAL_NOTIFICATION] = pem_get_thermal_notification_action_chain(eventmgr);
+	eventmgr->event_chain[AMD_PP_EVENT_VBIOS_NOTIFICATION] = pem_get_vbios_notification_action_chain(eventmgr);
+	eventmgr->event_chain[AMD_PP_EVENT_ENTER_THERMAL_STATE] = pem_get_enter_thermal_state_action_chain(eventmgr);
+	eventmgr->event_chain[AMD_PP_EVENT_EXIT_THERMAL_STATE] = pem_get_exit_thermal_state_action_chain(eventmgr);
+	eventmgr->event_chain[AMD_PP_EVENT_ENABLE_POWER_PLAY] = pem_get_enable_powerplay_action_chain(eventmgr);
+	eventmgr->event_chain[AMD_PP_EVENT_DISABLE_POWER_PLAY] = pem_get_disable_powerplay_action_chain(eventmgr);
+	eventmgr->event_chain[AMD_PP_EVENT_ENABLE_OVER_DRIVE_TEST] = pem_get_enable_overdrive_test_action_chain(eventmgr);
+	eventmgr->event_chain[AMD_PP_EVENT_DISABLE_OVER_DRIVE_TEST] = pem_get_disable_overdrive_test_action_chain(eventmgr);
+	eventmgr->event_chain[AMD_PP_EVENT_ENABLE_GFX_CLOCK_GATING] = pem_get_enable_gfx_clock_gating_action_chain(eventmgr);
+	eventmgr->event_chain[AMD_PP_EVENT_DISABLE_GFX_CLOCK_GATING] = pem_get_disable_gfx_clock_gating_action_chain(eventmgr);
+	eventmgr->event_chain[AMD_PP_EVENT_ENABLE_CGPG] = pem_get_enable_cgpg_action_chain(eventmgr);
+	eventmgr->event_chain[AMD_PP_EVENT_DISABLE_CGPG] = pem_get_disable_cgpg_action_chain(eventmgr);
+	eventmgr->event_chain[AMD_PP_EVENT_COMPLETE_INIT] = pem_get_complete_init_action_chain(eventmgr);
+	eventmgr->event_chain[AMD_PP_EVENT_SCREEN_ON] = pem_get_screen_on_action_chain(eventmgr);
+	eventmgr->event_chain[AMD_PP_EVENT_SCREEN_OFF] = pem_get_screen_off_action_chain(eventmgr);
+	eventmgr->event_chain[AMD_PP_EVENT_PRE_SUSPEND] = pem_get_pre_suspend_action_chain(eventmgr);
+	eventmgr->event_chain[AMD_PP_EVENT_PRE_RESUME] = pem_get_pre_resume_action_chain(eventmgr);
+	eventmgr->event_chain[AMD_PP_EVENT_ENABLE_USER_STATE] = pem_enable_user_state_action_chain(eventmgr);
+	eventmgr->event_chain[AMD_PP_EVENT_READJUST_POWER_STATE] = pem_readjust_power_state_action_chain(eventmgr);
+	eventmgr->event_chain[AMD_PP_EVENT_DISPLAY_CONFIG_CHANGE] = pem_display_config_change_action_chain(eventmgr);
+	return 0;
+}
+
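+/*
+ * Walk an action chain: the chain is a NULL-terminated array of sub-chains,
+ * and each sub-chain is a NULL-terminated array of task functions.  Every
+ * task of the current sub-chain still runs after a failure; the first
+ * non-zero result is kept and stops the walk before the next sub-chain.
+ */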
+int pem_execute_event_chain(struct pp_eventmgr *eventmgr, const struct action_chain *event_chain, struct pem_event_data *event_data)
+{
+	const pem_event_action **paction_chain;
+	const pem_event_action *psub_chain;
+	int tmp_result = 0;
+	int result = 0;
+
+	if (eventmgr == NULL || event_chain == NULL || event_data == NULL)
+		return -EINVAL;
+
+	for (paction_chain = event_chain->action_chain; NULL != *paction_chain; paction_chain++) {
+		if (0 != result)
+			return result;
+
+		for (psub_chain = *paction_chain; NULL != *psub_chain; psub_chain++) {
+			tmp_result = (*psub_chain)(eventmgr, event_data);
+			if (0 == result)
+				result = tmp_result;
+		}
+	}
+
+	return result;
+}
+
+const struct action_chain *pem_get_suspend_action_chain(struct pp_eventmgr *eventmgr)
+{
+	return &suspend_action_chain;
+}
+
+const struct action_chain *pem_get_initialize_action_chain(struct pp_eventmgr *eventmgr)
+{
+	return &initialize_action_chain;
+}
+
+const struct action_chain *pem_get_uninitialize_action_chain(struct pp_eventmgr *eventmgr)
+{
+	return &uninitialize_action_chain;
+}
+
+const struct action_chain *pem_get_power_source_change_action_chain(struct pp_eventmgr *eventmgr)
+{
+	return &power_source_change_action_chain_pp_enabled;  /* other cases depend on feature info */
+}
+
+const struct action_chain *pem_get_resume_action_chain(struct pp_eventmgr *eventmgr)
+{
+	return &resume_action_chain;
+}
+
+const struct action_chain *pem_get_hibernate_action_chain(struct pp_eventmgr *eventmgr)
+{
+	return NULL;
+}
+
+const struct action_chain *pem_get_thermal_notification_action_chain(struct pp_eventmgr *eventmgr)
+{
+	return NULL;
+}
+
+const struct action_chain *pem_get_vbios_notification_action_chain(struct pp_eventmgr *eventmgr)
+{
+	return NULL;
+}
+
+const struct action_chain *pem_get_enter_thermal_state_action_chain(struct pp_eventmgr *eventmgr)
+{
+	return NULL;
+}
+
+const struct action_chain *pem_get_exit_thermal_state_action_chain(struct pp_eventmgr *eventmgr)
+{
+	return NULL;
+}
+
+const struct action_chain *pem_get_enable_powerplay_action_chain(struct pp_eventmgr *eventmgr)
+{
+	return NULL;
+}
+
+const struct action_chain *pem_get_disable_powerplay_action_chain(struct pp_eventmgr *eventmgr)
+{
+	return NULL;
+}
+
+const struct action_chain *pem_get_enable_overdrive_test_action_chain(struct pp_eventmgr *eventmgr)
+{
+	return NULL;
+}
+
+const struct action_chain *pem_get_disable_overdrive_test_action_chain(struct pp_eventmgr *eventmgr)
+{
+	return NULL;
+}
+
+const struct action_chain *pem_get_enable_gfx_clock_gating_action_chain(struct pp_eventmgr *eventmgr)
+{
+	return &enable_gfx_clock_gating_action_chain;
+}
+
+const struct action_chain *pem_get_disable_gfx_clock_gating_action_chain(struct pp_eventmgr *eventmgr)
+{
+	return &disable_gfx_clock_gating_action_chain;
+}
+
+const struct action_chain *pem_get_enable_cgpg_action_chain(struct pp_eventmgr *eventmgr)
+{
+	return &enable_cgpg_action_chain;
+}
+
+const struct action_chain *pem_get_disable_cgpg_action_chain(struct pp_eventmgr *eventmgr)
+{
+	return &disable_cgpg_action_chain;
+}
+
+const struct action_chain *pem_get_complete_init_action_chain(struct pp_eventmgr *eventmgr)
+{
+	return &complete_init_action_chain;
+}
+
+const struct action_chain *pem_get_screen_on_action_chain(struct pp_eventmgr *eventmgr)
+{
+	return NULL;
+}
+
+const struct action_chain *pem_get_screen_off_action_chain(struct pp_eventmgr *eventmgr)
+{
+	return NULL;
+}
+
+const struct action_chain *pem_get_pre_suspend_action_chain(struct pp_eventmgr *eventmgr)
+{
+	return NULL;
+}
+
+const struct action_chain *pem_get_pre_resume_action_chain(struct pp_eventmgr *eventmgr)
+{
+	return NULL;
+}
+
+const struct action_chain *pem_enable_user_state_action_chain(struct pp_eventmgr *eventmgr)
+{
+	return &enable_user_state_action_chain;
+}
+
+const struct action_chain *pem_readjust_power_state_action_chain(struct pp_eventmgr *eventmgr)
+{
+	return &readjust_power_state_action_chain;
+}
+
+const struct action_chain *pem_display_config_change_action_chain(struct pp_eventmgr *eventmgr)
+{
+	return &display_config_change_action_chain;
+}

+ 59 - 0
drivers/gpu/drm/amd/powerplay/eventmgr/eventmanagement.h

@@ -0,0 +1,59 @@
+/*
+ * Copyright 2015 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+#ifndef _EVENT_MANAGEMENT_H_
+#define _EVENT_MANAGEMENT_H_
+
+#include "eventmgr.h"
+
+int pem_init_event_action_chains(struct pp_eventmgr *eventmgr);
+int pem_execute_event_chain(struct pp_eventmgr *eventmgr, const struct action_chain *event_chain, struct pem_event_data *event_data);
+const struct action_chain *pem_get_suspend_action_chain(struct pp_eventmgr *eventmgr);
+const struct action_chain *pem_get_initialize_action_chain(struct pp_eventmgr *eventmgr);
+const struct action_chain *pem_get_uninitialize_action_chain(struct pp_eventmgr *eventmgr);
+const struct action_chain *pem_get_power_source_change_action_chain(struct pp_eventmgr *eventmgr);
+const struct action_chain *pem_get_resume_action_chain(struct pp_eventmgr *eventmgr);
+const struct action_chain *pem_get_hibernate_action_chain(struct pp_eventmgr *eventmgr);
+const struct action_chain *pem_get_thermal_notification_action_chain(struct pp_eventmgr *eventmgr);
+const struct action_chain *pem_get_vbios_notification_action_chain(struct pp_eventmgr *eventmgr);
+const struct action_chain *pem_get_enter_thermal_state_action_chain(struct pp_eventmgr *eventmgr);
+const struct action_chain *pem_get_exit_thermal_state_action_chain(struct pp_eventmgr *eventmgr);
+const struct action_chain *pem_get_enable_powerplay_action_chain(struct pp_eventmgr *eventmgr);
+const struct action_chain *pem_get_disable_powerplay_action_chain(struct pp_eventmgr *eventmgr);
+const struct action_chain *pem_get_enable_overdrive_test_action_chain(struct pp_eventmgr *eventmgr);
+const struct action_chain *pem_get_disable_overdrive_test_action_chain(struct pp_eventmgr *eventmgr);
+const struct action_chain *pem_get_enable_gfx_clock_gating_action_chain(struct pp_eventmgr *eventmgr);
+const struct action_chain *pem_get_disable_gfx_clock_gating_action_chain(struct pp_eventmgr *eventmgr);
+const struct action_chain *pem_get_enable_cgpg_action_chain(struct pp_eventmgr *eventmgr);
+const struct action_chain *pem_get_disable_cgpg_action_chain(struct pp_eventmgr *eventmgr);
+const struct action_chain *pem_get_complete_init_action_chain(struct pp_eventmgr *eventmgr);
+const struct action_chain *pem_get_screen_on_action_chain(struct pp_eventmgr *eventmgr);
+const struct action_chain *pem_get_screen_off_action_chain(struct pp_eventmgr *eventmgr);
+const struct action_chain *pem_get_pre_suspend_action_chain(struct pp_eventmgr *eventmgr);
+const struct action_chain *pem_get_pre_resume_action_chain(struct pp_eventmgr *eventmgr);
+
+const struct action_chain *pem_enable_user_state_action_chain(struct pp_eventmgr *eventmgr);
+const struct action_chain *pem_readjust_power_state_action_chain(struct pp_eventmgr *eventmgr);
+const struct action_chain *pem_display_config_change_action_chain(struct pp_eventmgr *eventmgr);
+
+
+#endif /* _EVENT_MANAGEMENT_H_ */

+ 114 - 0
drivers/gpu/drm/amd/powerplay/eventmgr/eventmgr.c

@@ -0,0 +1,114 @@
+/*
+ * Copyright 2015 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+#include <linux/types.h>
+#include <linux/kernel.h>
+#include <linux/slab.h>
+#include "eventmgr.h"
+#include "hwmgr.h"
+#include "eventinit.h"
+#include "eventmanagement.h"
+
+static int pem_init(struct pp_eventmgr *eventmgr)
+{
+	int result = 0;
+	struct pem_event_data event_data = {};	/* zero valid_fields before running the chain */
+
+	/* Initialize PowerPlay feature info */
+	pem_init_feature_info(eventmgr);
+
+	/* Initialize event action chains */
+	pem_init_event_action_chains(eventmgr);
+
+	/* Call initialization event */
+	result = pem_handle_event(eventmgr, AMD_PP_EVENT_INITIALIZE, &event_data);
+
+	if (0 != result)
+		return result;
+
+	/* Register interrupt callback functions */
+	result = pem_register_interrupts(eventmgr);
+	return result;
+}
+
+static void pem_fini(struct pp_eventmgr *eventmgr)
+{
+	struct pem_event_data event_data = {};	/* zero valid_fields before running the chain */
+
+	pem_uninit_feature_info(eventmgr);
+	pem_unregister_interrupts(eventmgr);
+
+	pem_handle_event(eventmgr, AMD_PP_EVENT_UNINITIALIZE, &event_data);
+
+	kfree(eventmgr);
+}
+
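+/*
+ * eventmgr_init() only wires the event manager into the powerplay instance;
+ * pem_init()/pem_fini() are exported through the pp_eventmgr_init and
+ * pp_eventmgr_fini hooks and run later, when the caller invokes them.
+ */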
+int eventmgr_init(struct pp_instance *handle)
+{
+	int result = 0;
+	struct pp_eventmgr *eventmgr;
+
+	if (handle == NULL)
+		return -EINVAL;
+
+	eventmgr = kzalloc(sizeof(struct pp_eventmgr), GFP_KERNEL);
+	if (eventmgr == NULL)
+		return -ENOMEM;
+
+	eventmgr->hwmgr = handle->hwmgr;
+	handle->eventmgr = eventmgr;
+
+	eventmgr->platform_descriptor = &(eventmgr->hwmgr->platform_descriptor);
+	eventmgr->pp_eventmgr_init = pem_init;
+	eventmgr->pp_eventmgr_fini = pem_fini;
+
+	return result;
+}
+
+int eventmgr_fini(struct pp_eventmgr *eventmgr)
+{
+	kfree(eventmgr);
+	return 0;
+}
+
+static int pem_handle_event_unlocked(struct pp_eventmgr *eventmgr, enum amd_pp_event event, struct pem_event_data *data)
+{
+	if (eventmgr == NULL || event >= AMD_PP_EVENT_MAX || data == NULL)
+		return -EINVAL;
+
+	return pem_execute_event_chain(eventmgr, eventmgr->event_chain[event], data);
+}
+
+int pem_handle_event(struct pp_eventmgr *eventmgr, enum amd_pp_event event, struct pem_event_data *event_data)
+{
+	return pem_handle_event_unlocked(eventmgr, event, event_data);
+}
+
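+/*
+ * Usage sketch (illustration only, not part of this patch): a caller holding
+ * a struct pp_instance would dispatch an event roughly like this:
+ *
+ *	struct pem_event_data data = {};
+ *
+ *	pem_handle_event(handle->eventmgr,
+ *			 AMD_PP_EVENT_READJUST_POWER_STATE, &data);
+ */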
+bool pem_is_hw_access_blocked(struct pp_eventmgr *eventmgr)
+{
+	return (eventmgr->block_adjust_power_state || phm_is_hw_access_blocked(eventmgr->hwmgr));
+}

+ 410 - 0
drivers/gpu/drm/amd/powerplay/eventmgr/eventsubchains.c

@@ -0,0 +1,410 @@
+/*
+ * Copyright 2015 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#include "eventmgr.h"
+#include "eventsubchains.h"
+#include "eventtasks.h"
+#include "hardwaremanager.h"
+
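+/*
+ * Each sub-chain below is a NULL-terminated array of task functions.  The
+ * commented-out PEM_Task_* entries are placeholders carried over from the
+ * original event list that are not implemented yet.
+ */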
+const pem_event_action reset_display_phy_access_tasks[] = {
+	pem_task_reset_display_phys_access,
+	NULL
+};
+
+const pem_event_action broadcast_power_policy_tasks[] = {
+	/* PEM_Task_BroadcastPowerPolicyChange, */
+	NULL
+};
+
+const pem_event_action unregister_interrupt_tasks[] = {
+	pem_task_unregister_interrupts,
+	NULL
+};
+
+/* Disable GFX Voltage Islands Power Gating */
+const pem_event_action disable_gfx_voltage_island_powergating_tasks[] = {
+	pem_task_disable_voltage_island_power_gating,
+	NULL
+};
+
+const pem_event_action disable_gfx_clockgating_tasks[] = {
+	pem_task_disable_gfx_clock_gating,
+	NULL
+};
+
+const pem_event_action block_adjust_power_state_tasks[] = {
+	pem_task_block_adjust_power_state,
+	NULL
+};
+
+
+const pem_event_action unblock_adjust_power_state_tasks[] = {
+	pem_task_unblock_adjust_power_state,
+	NULL
+};
+
+const pem_event_action set_performance_state_tasks[] = {
+	pem_task_set_performance_state,
+	NULL
+};
+
+const pem_event_action get_2d_performance_state_tasks[] = {
+	pem_task_get_2D_performance_state_id,
+	NULL
+};
+
+const pem_event_action conditionally_force3D_performance_state_tasks[] = {
+	pem_task_conditionally_force_3d_performance_state,
+	NULL
+};
+
+const pem_event_action process_vbios_eventinfo_tasks[] = {
+	/* PEM_Task_ProcessVbiosEventInfo,*/
+	NULL
+};
+
+const pem_event_action enable_dynamic_state_management_tasks[] = {
+	/* PEM_Task_ResetBAPMPolicyChangedFlag,*/
+	pem_task_get_boot_state_id,
+	pem_task_enable_dynamic_state_management,
+	pem_task_register_interrupts,
+	NULL
+};
+
+const pem_event_action enable_clock_power_gatings_tasks[] = {
+	pem_task_enable_clock_power_gatings_tasks,
+	pem_task_powerdown_uvd_tasks,
+	pem_task_powerdown_vce_tasks,
+	NULL
+};
+
+const pem_event_action setup_asic_tasks[] = {
+	pem_task_setup_asic,
+	NULL
+};
+
+const pem_event_action power_budget_tasks[] = {
+	/* TODO
+	 * PEM_Task_PowerBudgetWaiverAvailable,
+	 * PEM_Task_PowerBudgetWarningMessage,
+	 * PEM_Task_PruneStatesBasedOnPowerBudget,
+	*/
+	NULL
+};
+
+const pem_event_action system_config_tasks[] = {
+	/* PEM_Task_PruneStatesBasedOnSystemConfig,*/
+	NULL
+};
+
+
+const pem_event_action conditionally_force_3d_performance_state_tasks[] = {
+	pem_task_conditionally_force_3d_performance_state,
+	NULL
+};
+
+const pem_event_action ungate_all_display_phys_tasks[] = {
+	/* PEM_Task_GetDisplayPhyAccessInfo */
+	NULL
+};
+
+const pem_event_action uninitialize_display_phy_access_tasks[] = {
+	/* PEM_Task_UninitializeDisplayPhysAccess, */
+	NULL
+};
+
+const pem_event_action disable_gfx_voltage_island_power_gating_tasks[] = {
+	/* PEM_Task_DisableVoltageIslandPowerGating, */
+	NULL
+};
+
+const pem_event_action disable_gfx_clock_gating_tasks[] = {
+	pem_task_disable_gfx_clock_gating,
+	NULL
+};
+
+const pem_event_action set_boot_state_tasks[] = {
+	pem_task_get_boot_state_id,
+	pem_task_set_boot_state,
+	NULL
+};
+
+const pem_event_action adjust_power_state_tasks[] = {
+	pem_task_notify_hw_mgr_display_configuration_change,
+	pem_task_adjust_power_state,
+	pem_task_notify_smc_display_config_after_power_state_adjustment,
+	pem_task_update_allowed_performance_levels,
+	/* TODO: pem_task_enable_disable_bapm, */
+	NULL
+};
+
+const pem_event_action disable_dynamic_state_management_tasks[] = {
+	pem_task_unregister_interrupts,
+	pem_task_get_boot_state_id,
+	pem_task_disable_dynamic_state_management,
+	NULL
+};
+
+const pem_event_action disable_clock_power_gatings_tasks[] = {
+	pem_task_disable_clock_power_gatings_tasks,
+	NULL
+};
+
+const pem_event_action cleanup_asic_tasks[] = {
+	/* PEM_Task_DisableFPS,*/
+	pem_task_cleanup_asic,
+	NULL
+};
+
+const pem_event_action prepare_for_pnp_stop_tasks[] = {
+	/* PEM_Task_PrepareForPnpStop,*/
+	NULL
+};
+
+const pem_event_action set_power_source_tasks[] = {
+	pem_task_set_power_source,
+	pem_task_notify_hw_of_power_source,
+	NULL
+};
+
+const pem_event_action set_power_saving_state_tasks[] = {
+	pem_task_reset_power_saving_state,
+	pem_task_get_power_saving_state,
+	pem_task_set_power_saving_state,
+	/* PEM_Task_ResetODDCState,
+	 * PEM_Task_GetODDCState,
+	 * PEM_Task_SetODDCState,*/
+	NULL
+};
+
+const pem_event_action enable_disable_fps_tasks[] = {
+	/* PEM_Task_EnableDisableFPS,*/
+	NULL
+};
+
+const pem_event_action set_nbmcu_state_tasks[] = {
+	/* PEM_Task_NBMCUStateChange,*/
+	NULL
+};
+
+const pem_event_action reset_hardware_dc_notification_tasks[] = {
+	/* PEM_Task_ResetHardwareDCNotification,*/
+	NULL
+};
+
+
+const pem_event_action notify_smu_suspend_tasks[] = {
+	/* PEM_Task_NotifySMUSuspend,*/
+	NULL
+};
+
+const pem_event_action disable_smc_firmware_ctf_tasks[] = {
+	/* PEM_Task_DisableSMCFirmwareCTF,*/
+	NULL
+};
+
+const pem_event_action disable_fps_tasks[] = {
+	/* PEM_Task_DisableFPS,*/
+	NULL
+};
+
+const pem_event_action vari_bright_suspend_tasks[] = {
+	/* PEM_Task_VariBright_Suspend,*/
+	NULL
+};
+
+const pem_event_action reset_fan_speed_to_default_tasks[] = {
+	/* PEM_Task_ResetFanSpeedToDefault,*/
+	NULL
+};
+
+const pem_event_action power_down_asic_tasks[] = {
+	/* PEM_Task_DisableFPS,*/
+	pem_task_power_down_asic,
+	NULL
+};
+
+const pem_event_action disable_stutter_mode_tasks[] = {
+	/* PEM_Task_DisableStutterMode,*/
+	NULL
+};
+
+const pem_event_action set_connected_standby_tasks[] = {
+	/* PEM_Task_SetConnectedStandby,*/
+	NULL
+};
+
+const pem_event_action block_hw_access_tasks[] = {
+	pem_task_block_hw_access,
+	NULL
+};
+
+const pem_event_action unblock_hw_access_tasks[] = {
+	pem_task_un_block_hw_access,
+	NULL
+};
+
+const pem_event_action resume_connected_standby_tasks[] = {
+	/* PEM_Task_ResumeConnectedStandby,*/
+	NULL
+};
+
+const pem_event_action notify_smu_resume_tasks[] = {
+	/* PEM_Task_NotifySMUResume,*/
+	NULL
+};
+
+const pem_event_action reset_display_configCounter_tasks[] = {
+	pem_task_reset_display_phys_access,
+	NULL
+};
+
+const pem_event_action update_dal_configuration_tasks[] = {
+	/* PEM_Task_CheckVBlankTime,*/
+	NULL
+};
+
+const pem_event_action vari_bright_resume_tasks[] = {
+	/* PEM_Task_VariBright_Resume,*/
+	NULL
+};
+
+const pem_event_action notify_hw_power_source_tasks[] = {
+	pem_task_notify_hw_of_power_source,
+	NULL
+};
+
+const pem_event_action process_vbios_event_info_tasks[] = {
+	/* PEM_Task_ProcessVbiosEventInfo,*/
+	NULL
+};
+
+const pem_event_action enable_gfx_clock_gating_tasks[] = {
+	pem_task_enable_gfx_clock_gating,
+	NULL
+};
+
+const pem_event_action enable_gfx_voltage_island_power_gating_tasks[] = {
+	pem_task_enable_voltage_island_power_gating,
+	NULL
+};
+
+const pem_event_action reset_clock_gating_tasks[] = {
+	/* PEM_Task_ResetClockGating*/
+	NULL
+};
+
+const pem_event_action notify_smu_vpu_recovery_end_tasks[] = {
+	/* PEM_Task_NotifySmuVPURecoveryEnd,*/
+	NULL
+};
+
+const pem_event_action disable_vpu_cap_tasks[] = {
+	/* PEM_Task_DisableVPUCap,*/
+	NULL
+};
+
+const pem_event_action execute_escape_sequence_tasks[] = {
+	/* PEM_Task_ExecuteEscapesequence,*/
+	NULL
+};
+
+const pem_event_action notify_power_state_change_tasks[] = {
+	pem_task_notify_power_state_change,
+	NULL
+};
+
+const pem_event_action enable_cgpg_tasks[] = {
+	pem_task_enable_cgpg,
+	NULL
+};
+
+const pem_event_action disable_cgpg_tasks[] = {
+	pem_task_disable_cgpg,
+	NULL
+};
+
+const pem_event_action enable_user_2d_performance_tasks[] = {
+	/* PEM_Task_SetUser2DPerformanceFlag,*/
+	/* PEM_Task_UpdateUser2DPerformanceEnableEvents,*/
+	NULL
+};
+
+const pem_event_action add_user_2d_performance_state_tasks[] = {
+	/* PEM_Task_Get2DPerformanceTemplate,*/
+	/* PEM_Task_AllocateNewPowerStateMemory,*/
+	/* PEM_Task_CopyNewPowerStateInfo,*/
+	/* PEM_Task_UpdateNewPowerStateClocks,*/
+	/* PEM_Task_UpdateNewPowerStateUser2DPerformanceFlag,*/
+	/* PEM_Task_AddPowerState,*/
+	/* PEM_Task_ReleaseNewPowerStateMemory,*/
+	NULL
+};
+
+const pem_event_action delete_user_2d_performance_state_tasks[] = {
+	/* PEM_Task_GetCurrentUser2DPerformanceStateID,*/
+	/* PEM_Task_DeletePowerState,*/
+	/* PEM_Task_SetCurrentUser2DPerformanceStateID,*/
+	NULL
+};
+
+const pem_event_action disable_user_2d_performance_tasks[] = {
+	/* PEM_Task_ResetUser2DPerformanceFlag,*/
+	/* PEM_Task_UpdateUser2DPerformanceDisableEvents,*/
+	NULL
+};
+
+const pem_event_action enable_stutter_mode_tasks[] = {
+	pem_task_enable_stutter_mode,
+	NULL
+};
+
+const pem_event_action enable_disable_bapm_tasks[] = {
+	/*PEM_Task_EnableDisableBAPM,*/
+	NULL
+};
+
+const pem_event_action reset_boot_state_tasks[] = {
+	pem_task_reset_boot_state,
+	NULL
+};
+
+const pem_event_action create_new_user_performance_state_tasks[] = {
+	pem_task_create_user_performance_state,
+	NULL
+};
+
+const pem_event_action initialize_thermal_controller_tasks[] = {
+	pem_task_initialize_thermal_controller,
+	NULL
+};
+
+const pem_event_action uninitialize_thermal_controller_tasks[] = {
+	pem_task_uninitialize_thermal_controller,
+	NULL
+};
+
+const pem_event_action set_cpu_power_state[] = {
+	pem_task_set_cpu_power_state,
+	NULL
+};

+ 100 - 0
drivers/gpu/drm/amd/powerplay/eventmgr/eventsubchains.h

@@ -0,0 +1,100 @@
+/*
+ * Copyright 2015 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#ifndef _EVENT_SUB_CHAINS_H_
+#define _EVENT_SUB_CHAINS_H_
+
+#include "eventmgr.h"
+
+extern const pem_event_action reset_display_phy_access_tasks[];
+extern const pem_event_action broadcast_power_policy_tasks[];
+extern const pem_event_action unregister_interrupt_tasks[];
+extern const pem_event_action disable_gfx_voltage_island_powergating_tasks[];
+extern const pem_event_action disable_gfx_clockgating_tasks[];
+extern const pem_event_action block_adjust_power_state_tasks[];
+extern const pem_event_action unblock_adjust_power_state_tasks[];
+extern const pem_event_action set_performance_state_tasks[];
+extern const pem_event_action conditionally_force3D_performance_state_tasks[];
+extern const pem_event_action process_vbios_eventinfo_tasks[];
+extern const pem_event_action enable_dynamic_state_management_tasks[];
+extern const pem_event_action enable_clock_power_gatings_tasks[];
+extern const pem_event_action setup_asic_tasks[];
+extern const pem_event_action power_budget_tasks[];
+extern const pem_event_action system_config_tasks[];
+extern const pem_event_action get_2d_performance_state_tasks[];
+extern const pem_event_action conditionally_force_3d_performance_state_tasks[];
+extern const pem_event_action ungate_all_display_phys_tasks[];
+extern const pem_event_action uninitialize_display_phy_access_tasks[];
+extern const pem_event_action disable_gfx_voltage_island_power_gating_tasks[];
+extern const pem_event_action disable_gfx_clock_gating_tasks[];
+extern const pem_event_action set_boot_state_tasks[];
+extern const pem_event_action adjust_power_state_tasks[];
+extern const pem_event_action disable_dynamic_state_management_tasks[];
+extern const pem_event_action disable_clock_power_gatings_tasks[];
+extern const pem_event_action cleanup_asic_tasks[];
+extern const pem_event_action prepare_for_pnp_stop_tasks[];
+extern const pem_event_action set_power_source_tasks[];
+extern const pem_event_action set_power_saving_state_tasks[];
+extern const pem_event_action enable_disable_fps_tasks[];
+extern const pem_event_action set_nbmcu_state_tasks[];
+extern const pem_event_action reset_hardware_dc_notification_tasks[];
+extern const pem_event_action notify_smu_suspend_tasks[];
+extern const pem_event_action disable_smc_firmware_ctf_tasks[];
+extern const pem_event_action disable_fps_tasks[];
+extern const pem_event_action vari_bright_suspend_tasks[];
+extern const pem_event_action reset_fan_speed_to_default_tasks[];
+extern const pem_event_action power_down_asic_tasks[];
+extern const pem_event_action disable_stutter_mode_tasks[];
+extern const pem_event_action set_connected_standby_tasks[];
+extern const pem_event_action block_hw_access_tasks[];
+extern const pem_event_action unblock_hw_access_tasks[];
+extern const pem_event_action resume_connected_standby_tasks[];
+extern const pem_event_action notify_smu_resume_tasks[];
+extern const pem_event_action reset_display_configCounter_tasks[];
+extern const pem_event_action update_dal_configuration_tasks[];
+extern const pem_event_action vari_bright_resume_tasks[];
+extern const pem_event_action notify_hw_power_source_tasks[];
+extern const pem_event_action process_vbios_event_info_tasks[];
+extern const pem_event_action enable_gfx_clock_gating_tasks[];
+extern const pem_event_action enable_gfx_voltage_island_power_gating_tasks[];
+extern const pem_event_action reset_clock_gating_tasks[];
+extern const pem_event_action notify_smu_vpu_recovery_end_tasks[];
+extern const pem_event_action disable_vpu_cap_tasks[];
+extern const pem_event_action execute_escape_sequence_tasks[];
+extern const pem_event_action notify_power_state_change_tasks[];
+extern const pem_event_action enable_cgpg_tasks[];
+extern const pem_event_action disable_cgpg_tasks[];
+extern const pem_event_action enable_user_2d_performance_tasks[];
+extern const pem_event_action add_user_2d_performance_state_tasks[];
+extern const pem_event_action delete_user_2d_performance_state_tasks[];
+extern const pem_event_action disable_user_2d_performance_tasks[];
+extern const pem_event_action enable_stutter_mode_tasks[];
+extern const pem_event_action enable_disable_bapm_tasks[];
+extern const pem_event_action reset_boot_state_tasks[];
+extern const pem_event_action create_new_user_performance_state_tasks[];
+extern const pem_event_action initialize_thermal_controller_tasks[];
+extern const pem_event_action uninitialize_thermal_controller_tasks[];
+extern const pem_event_action set_cpu_power_state[];
+#endif /* _EVENT_SUB_CHAINS_H_ */

+ 438 - 0
drivers/gpu/drm/amd/powerplay/eventmgr/eventtasks.c

@@ -0,0 +1,438 @@
+/*
+ * Copyright 2015 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#include "eventmgr.h"
+#include "eventinit.h"
+#include "eventmanagement.h"
+#include "eventmanager.h"
+#include "hardwaremanager.h"
+#include "eventtasks.h"
+#include "power_state.h"
+#include "hwmgr.h"
+#include "amd_powerplay.h"
+#include "psm.h"
+
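+/* default thermal controller range, in millidegrees Celsius (90 C to 120 C) */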
+#define TEMP_RANGE_MIN (90 * 1000)
+#define TEMP_RANGE_MAX (120 * 1000)
+
+int pem_task_update_allowed_performance_levels(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
+{
+
+	if (pem_is_hw_access_blocked(eventmgr))
+		return 0;
+
+	phm_force_dpm_levels(eventmgr->hwmgr, AMD_DPM_FORCED_LEVEL_AUTO);
+
+	return 0;
+}
+
+/* eventtasks_generic.c */
+int pem_task_adjust_power_state(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
+{
+	struct pp_hwmgr *hwmgr;
+
+	if (pem_is_hw_access_blocked(eventmgr))
+		return 0;
+
+	hwmgr = eventmgr->hwmgr;
+	if (event_data->pnew_power_state != NULL)
+		hwmgr->request_ps = event_data->pnew_power_state;
+
+	if (phm_cap_enabled(eventmgr->platform_descriptor->platformCaps, PHM_PlatformCaps_DynamicPatchPowerState))
+		psm_adjust_power_state_dynamic(eventmgr, event_data->skip_state_adjust_rules);
+	else
+		psm_adjust_power_state_static(eventmgr, event_data->skip_state_adjust_rules);
+
+	return 0;
+}
+
+int pem_task_power_down_asic(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
+{
+	return phm_power_down_asic(eventmgr->hwmgr);
+}
+
+int pem_task_set_boot_state(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
+{
+	if (pem_is_event_data_valid(event_data->valid_fields, PEM_EventDataValid_RequestedStateID))
+		return psm_set_states(eventmgr, &(event_data->requested_state_id));
+
+	return 0;
+}
+
+int pem_task_reset_boot_state(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
+{
+	/* TODO */
+	return 0;
+}
+
+int pem_task_update_new_power_state_clocks(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
+{
+	/* TODO */
+	return 0;
+}
+
+int pem_task_system_shutdown(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
+{
+	/* TODO */
+	return 0;
+}
+
+int pem_task_register_interrupts(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
+{
+	/* TODO */
+	return 0;
+}
+
+int pem_task_unregister_interrupts(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
+{
+	return pem_unregister_interrupts(eventmgr);
+}
+
+int pem_task_get_boot_state_id(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
+{
+	int result;
+
+	result = psm_get_state_by_classification(eventmgr,
+		PP_StateClassificationFlag_Boot,
+		&(event_data->requested_state_id)
+	);
+
+	if (0 == result)
+		pem_set_event_data_valid(event_data->valid_fields, PEM_EventDataValid_RequestedStateID);
+	else
+		pem_unset_event_data_valid(event_data->valid_fields, PEM_EventDataValid_RequestedStateID);
+
+	return result;
+}
+
+int pem_task_enable_dynamic_state_management(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
+{
+	return phm_enable_dynamic_state_management(eventmgr->hwmgr);
+}
+
+int pem_task_disable_dynamic_state_management(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
+{
+	/* TODO */
+	return 0;
+}
+
+int pem_task_enable_clock_power_gatings_tasks(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
+{
+	return phm_enable_clock_power_gatings(eventmgr->hwmgr);
+}
+
+int pem_task_powerdown_uvd_tasks(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
+{
+	return phm_powerdown_uvd(eventmgr->hwmgr);
+}
+
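+/* Note: despite its name, this task power-gates both UVD and VCE. */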
+int pem_task_powerdown_vce_tasks(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
+{
+	phm_powergate_uvd(eventmgr->hwmgr, true);
+	phm_powergate_vce(eventmgr->hwmgr, true);
+	return 0;
+}
+
+int pem_task_disable_clock_power_gatings_tasks(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
+{
+	/* TODO */
+	return 0;
+}
+
+int pem_task_start_asic_block_usage(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
+{
+	/* TODO */
+	return 0;
+}
+
+int pem_task_stop_asic_block_usage(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
+{
+	/* TODO */
+	return 0;
+}
+
+int pem_task_setup_asic(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
+{
+	return phm_setup_asic(eventmgr->hwmgr);
+}
+
+int pem_task_cleanup_asic(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
+{
+	/* TODO */
+	return 0;
+}
+
+int pem_task_store_dal_configuration(struct pp_eventmgr *eventmgr, const struct amd_display_configuration *display_config)
+{
+	/* TODO */
+	return 0;
+	/*phm_store_dal_configuration_data(eventmgr->hwmgr, display_config) */
+}
+
+int pem_task_notify_hw_mgr_display_configuration_change(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
+{
+	if (pem_is_hw_access_blocked(eventmgr))
+		return 0;
+
+	return phm_display_configuration_changed(eventmgr->hwmgr);
+}
+
+int pem_task_notify_hw_mgr_pre_display_configuration_change(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
+{
+	return 0;
+}
+
+int pem_task_notify_smc_display_config_after_power_state_adjustment(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
+{
+	if (pem_is_hw_access_blocked(eventmgr))
+		return 0;
+
+	return phm_notify_smc_display_config_after_ps_adjustment(eventmgr->hwmgr);
+}
+
+int pem_task_block_adjust_power_state(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
+{
+	eventmgr->block_adjust_power_state = true;
+	/* TODO: PHM_ResetIPSCounter(pEventMgr->pHwMgr); */
+	return 0;
+}
+
+int pem_task_unblock_adjust_power_state(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
+{
+	eventmgr->block_adjust_power_state = false;
+	return 0;
+}
+
+int pem_task_notify_power_state_change(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
+{
+	/* TODO */
+	return 0;
+}
+
+int pem_task_block_hw_access(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
+{
+	/* TODO */
+	return 0;
+}
+
+int pem_task_un_block_hw_access(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
+{
+	/* TODO */
+	return 0;
+}
+
+int pem_task_reset_display_phys_access(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
+{
+	/* TODO */
+	return 0;
+}
+
+int pem_task_set_cpu_power_state(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
+{
+	return phm_set_cpu_power_state(eventmgr->hwmgr);
+}
+
+/*powersaving*/
+
+int pem_task_set_power_source(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
+{
+	/* TODO */
+	return 0;
+}
+
+int pem_task_notify_hw_of_power_source(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
+{
+	/* TODO */
+	return 0;
+}
+
+int pem_task_get_power_saving_state(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
+{
+	/* TODO */
+	return 0;
+}
+
+int pem_task_reset_power_saving_state(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
+{
+	/* TODO */
+	return 0;
+}
+
+int pem_task_set_power_saving_state(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
+{
+	/* TODO */
+	return 0;
+}
+
+int pem_task_set_screen_state_on(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
+{
+	/* TODO */
+	return 0;
+}
+
+int pem_task_set_screen_state_off(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
+{
+	/* TODO */
+	return 0;
+}
+
+int pem_task_enable_voltage_island_power_gating(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
+{
+	/* TODO */
+	return 0;
+}
+
+int pem_task_disable_voltage_island_power_gating(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
+{
+	/* TODO */
+	return 0;
+}
+
+int pem_task_enable_cgpg(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
+{
+	/* TODO */
+	return 0;
+}
+
+int pem_task_disable_cgpg(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
+{
+	/* TODO */
+	return 0;
+}
+
+int pem_task_enable_clock_power_gating(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
+{
+	/* TODO */
+	return 0;
+}
+
+
+int pem_task_enable_gfx_clock_gating(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
+{
+	/* TODO */
+	return 0;
+}
+
+int pem_task_disable_gfx_clock_gating(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
+{
+	/* TODO */
+	return 0;
+}
+
+
+/* performance */
+int pem_task_set_performance_state(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
+{
+	if (pem_is_event_data_valid(event_data->valid_fields, PEM_EventDataValid_RequestedStateID))
+		return psm_set_states(eventmgr, &(event_data->requested_state_id));
+
+	return 0;
+}
+
+int pem_task_conditionally_force_3d_performance_state(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
+{
+	/* TODO */
+	return 0;
+}
+
+int pem_task_enable_stutter_mode(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
+{
+	/* TODO */
+	return 0;
+}
+
+int pem_task_get_2D_performance_state_id(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
+{
+	int result;
+
+	if (eventmgr->features[PP_Feature_PowerPlay].supported &&
+	    !(eventmgr->features[PP_Feature_PowerPlay].enabled))
+		result = psm_get_state_by_classification(eventmgr,
+				PP_StateClassificationFlag_Boot,
+				&(event_data->requested_state_id));
+	else if (eventmgr->features[PP_Feature_User2DPerformance].enabled)
+		result = psm_get_state_by_classification(eventmgr,
+				PP_StateClassificationFlag_User2DPerformance,
+				&(event_data->requested_state_id));
+	else
+		result = psm_get_ui_state(eventmgr, PP_StateUILabel_Performance,
+				&(event_data->requested_state_id));
+
+	if (0 == result)
+		pem_set_event_data_valid(event_data->valid_fields, PEM_EventDataValid_RequestedStateID);
+	else
+		pem_unset_event_data_valid(event_data->valid_fields, PEM_EventDataValid_RequestedStateID);
+
+	return result;
+}
+
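+/*
+ * Pick the first power state whose UI label matches the requested one;
+ * Battery and Balanced requests fall back to the Performance label when no
+ * matching state exists.
+ */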
+int pem_task_create_user_performance_state(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
+{
+	struct pp_power_state *state;
+	int table_entries;
+	struct pp_hwmgr *hwmgr = eventmgr->hwmgr;
+	int i;
+
+	table_entries = hwmgr->num_ps;
+
+restart_search:
+	/* restart from the first state so the fallback label rescans the table */
+	state = hwmgr->ps;
+	for (i = 0; i < table_entries; i++) {
+		if (state->classification.ui_label & event_data->requested_ui_label) {
+			event_data->pnew_power_state = state;
+			return 0;
+		}
+		state = (struct pp_power_state *)((unsigned long)state + hwmgr->ps_size);
+	}
+
+	switch (event_data->requested_ui_label) {
+	case PP_StateUILabel_Battery:
+	case PP_StateUILabel_Balanced:
+		event_data->requested_ui_label = PP_StateUILabel_Performance;
+		goto restart_search;
+	default:
+		break;
+	}
+	return -EINVAL;
+}
+
+int pem_task_initialize_thermal_controller(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
+{
+	struct PP_TemperatureRange range;
+
+	range.max = TEMP_RANGE_MAX;
+	range.min = TEMP_RANGE_MIN;
+
+	if (eventmgr == NULL || eventmgr->platform_descriptor == NULL)
+		return -EINVAL;
+
+	if (phm_cap_enabled(eventmgr->platform_descriptor->platformCaps, PHM_PlatformCaps_ThermalController))
+		return phm_start_thermal_controller(eventmgr->hwmgr, &range);
+
+	return 0;
+}
+
+int pem_task_uninitialize_thermal_controller(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
+{
+	return phm_stop_thermal_controller(eventmgr->hwmgr);
+}

+ 88 - 0
drivers/gpu/drm/amd/powerplay/eventmgr/eventtasks.h

@@ -0,0 +1,88 @@
+/*
+ * Copyright 2015 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#ifndef _EVENT_TASKS_H_
+#define _EVENT_TASKS_H_
+#include "eventmgr.h"
+
+struct amd_display_configuration;
+
+/* eventtasks_generic.c */
+int pem_task_adjust_power_state(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
+int pem_task_power_down_asic(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
+int pem_task_get_boot_state_id(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
+int pem_task_set_boot_state(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
+int pem_task_reset_boot_state(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
+int pem_task_update_new_power_state_clocks(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
+int pem_task_system_shutdown(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
+int pem_task_register_interrupts(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
+int pem_task_unregister_interrupts(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
+int pem_task_enable_dynamic_state_management(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
+int pem_task_disable_dynamic_state_management(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
+int pem_task_enable_clock_power_gatings_tasks(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
+int pem_task_powerdown_uvd_tasks(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
+int pem_task_powerdown_vce_tasks(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
+int pem_task_disable_clock_power_gatings_tasks(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
+int pem_task_start_asic_block_usage(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
+int pem_task_stop_asic_block_usage(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
+int pem_task_setup_asic(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
+int pem_task_cleanup_asic(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
+int pem_task_store_dal_configuration(struct pp_eventmgr *eventmgr, const struct amd_display_configuration *display_config);
+int pem_task_notify_hw_mgr_display_configuration_change(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
+int pem_task_notify_hw_mgr_pre_display_configuration_change(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
+int pem_task_block_adjust_power_state(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
+int pem_task_unblock_adjust_power_state(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
+int pem_task_notify_power_state_change(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
+int pem_task_block_hw_access(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
+int pem_task_un_block_hw_access(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
+int pem_task_reset_display_phys_access(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
+int pem_task_set_cpu_power_state(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
+int pem_task_notify_smc_display_config_after_power_state_adjustment(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
+/*powersaving*/
+
+int pem_task_set_power_source(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
+int pem_task_notify_hw_of_power_source(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
+int pem_task_get_power_saving_state(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
+int pem_task_reset_power_saving_state(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
+int pem_task_set_power_saving_state(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
+int pem_task_set_screen_state_on(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
+int pem_task_set_screen_state_off(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
+int pem_task_enable_voltage_island_power_gating(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
+int pem_task_disable_voltage_island_power_gating(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
+int pem_task_enable_cgpg(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
+int pem_task_disable_cgpg(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
+int pem_task_enable_gfx_clock_gating(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
+int pem_task_disable_gfx_clock_gating(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
+int pem_task_enable_stutter_mode(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
+
+/* performance */
+int pem_task_set_performance_state(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
+int pem_task_conditionally_force_3d_performance_state(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
+int pem_task_get_2D_performance_state_id(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
+int pem_task_create_user_performance_state(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
+int pem_task_update_allowed_performance_levels(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
+/*thermal */
+int pem_task_initialize_thermal_controller(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
+int pem_task_uninitialize_thermal_controller(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
+
+#endif /* _EVENT_TASKS_H_ */

+ 117 - 0
drivers/gpu/drm/amd/powerplay/eventmgr/psm.c

@@ -0,0 +1,117 @@
+/*
+ * Copyright 2015 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+#include "psm.h"
+
+int psm_get_ui_state(struct pp_eventmgr *eventmgr, enum PP_StateUILabel ui_label, unsigned long *state_id)
+{
+	struct pp_power_state *state;
+	int table_entries;
+	struct pp_hwmgr *hwmgr = eventmgr->hwmgr;
+	int i;
+
+	table_entries = hwmgr->num_ps;
+	state = hwmgr->ps;
+
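+	/*
+	 * Power states are variable-sized (hwmgr->ps_size bytes each),
+	 * so the table is walked with byte arithmetic, not array indexing.
+	 */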
+	for (i = 0; i < table_entries; i++) {
+		if (state->classification.ui_label & ui_label) {
+			*state_id = state->id;
+			return 0;
+		}
+		state = (struct pp_power_state *)((unsigned long)state + hwmgr->ps_size);
+	}
+	return -EINVAL;
+}
+
+int psm_get_state_by_classification(struct pp_eventmgr *eventmgr, enum PP_StateClassificationFlag flag, unsigned long *state_id)
+{
+	struct pp_power_state *state;
+	int table_entries;
+	struct pp_hwmgr *hwmgr = eventmgr->hwmgr;
+	int i;
+
+	table_entries = hwmgr->num_ps;
+	state = hwmgr->ps;
+
+	for (i = 0; i < table_entries; i++) {
+		if (state->classification.flags & flag) {
+			*state_id = state->id;
+			return 0;
+		}
+		state = (struct pp_power_state *)((unsigned long)state + hwmgr->ps_size);
+	}
+	return -EINVAL;
+}
+
+int psm_set_states(struct pp_eventmgr *eventmgr, unsigned long *state_id)
+{
+	struct pp_power_state *state;
+	int table_entries;
+	struct pp_hwmgr *hwmgr = eventmgr->hwmgr;
+	int i;
+
+	table_entries = hwmgr->num_ps;
+	state = hwmgr->ps;
+
+	for (i = 0; i < table_entries; i++) {
+		if (state->id == *state_id) {
+			hwmgr->request_ps = state;
+			return 0;
+		}
+		state = (struct pp_power_state *)((unsigned long)state + hwmgr->ps_size);
+	}
+	return -EINVAL;
+}
+
+int psm_adjust_power_state_dynamic(struct pp_eventmgr *eventmgr, bool skip)
+{
+
+	struct pp_power_state *pcurrent;
+	struct pp_power_state *requested;
+	struct pp_hwmgr *hwmgr;
+	bool equal;
+
+	if (skip)
+		return 0;
+
+	hwmgr = eventmgr->hwmgr;
+	pcurrent = hwmgr->current_ps;
+	requested = hwmgr->request_ps;
+
+	if (requested == NULL)
+		return 0;
+
+	if (pcurrent == NULL || (0 != phm_check_states_equal(hwmgr, &pcurrent->hardware, &requested->hardware, &equal)))
+		equal = false;
+
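+	/*
+	 * Only reprogram the hardware when the requested state differs from
+	 * the current one, or the SMC still needs a display-config update.
+	 */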
+	if (!equal || phm_check_smc_update_required_for_display_configuration(hwmgr)) {
+		phm_apply_state_adjust_rules(hwmgr, requested, pcurrent);
+		phm_set_power_state(hwmgr, &pcurrent->hardware, &requested->hardware);
+		hwmgr->current_ps = requested;
+	}
+	return 0;
+}
+
+int psm_adjust_power_state_static(struct pp_eventmgr *eventmgr, bool skip)
+{
+	return 0;
+}

+ 38 - 0
drivers/gpu/drm/amd/powerplay/eventmgr/psm.h

@@ -0,0 +1,38 @@
+/*
+ * Copyright 2015 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+#include "eventmgr.h"
+#include "eventinit.h"
+#include "eventmanagement.h"
+#include "eventmanager.h"
+#include "power_state.h"
+#include "hardwaremanager.h"
+
+int psm_get_ui_state(struct pp_eventmgr *eventmgr, enum PP_StateUILabel ui_label, unsigned long *state_id);
+
+int psm_get_state_by_classification(struct pp_eventmgr *eventmgr, enum PP_StateClassificationFlag flag, unsigned long *state_id);
+
+int psm_set_states(struct pp_eventmgr *eventmgr, unsigned long *state_id);
+
+int psm_adjust_power_state_dynamic(struct pp_eventmgr *eventmgr, bool skip);
+
+int psm_adjust_power_state_static(struct pp_eventmgr *eventmgr, bool skip);

+ 15 - 0
drivers/gpu/drm/amd/powerplay/hwmgr/Makefile

@@ -0,0 +1,15 @@
+#
+# Makefile for the 'hw manager' sub-component of powerplay.
+# It provides the hardware management services for the driver.
+
+HARDWARE_MGR = hwmgr.o processpptables.o functiontables.o \
+	       hardwaremanager.o pp_acpi.o cz_hwmgr.o \
+               cz_clockpowergating.o \
+	       tonga_processpptables.o ppatomctrl.o \
+               tonga_hwmgr.o pppcielanes.o  tonga_thermal.o\
+               fiji_powertune.o fiji_hwmgr.o tonga_clockpowergating.o \
+               fiji_clockpowergating.o fiji_thermal.o
+
+AMD_PP_HWMGR = $(addprefix $(AMD_PP_PATH)/hwmgr/,$(HARDWARE_MGR))
+
+AMD_POWERPLAY_FILES += $(AMD_PP_HWMGR)

+ 252 - 0
drivers/gpu/drm/amd/powerplay/hwmgr/cz_clockpowergating.c

@@ -0,0 +1,252 @@
+/*
+ * Copyright 2015 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#include "hwmgr.h"
+#include "cz_clockpowergating.h"
+#include "cz_ppsmc.h"
+
+/* PhyID -> Status Mapping in DDI_PHY_GEN_STATUS
+ *  0   GFX0L (3:0),  (27:24),
+ *  1   GFX0H (7:4),  (31:28),
+ *  2   GFX1L (3:0),  (19:16),
+ *  3   GFX1H (7:4),  (23:20),
+ *  4   DDIL  (3:0),  (11: 8),
+ *  5   DDIH  (7:4),  (15:12),
+ *  6   DDI2L (3:0),  ( 3: 0),
+ *  7   DDI2H (7:4),  ( 7: 4),
+ */
+#define DDI_PHY_GEN_STATUS_VAL(phyID)   (1 << ((3 - (((phyID) & 0x07)/2))*8 + ((phyID) & 0x01)*4))
+#define IS_PHY_ID_USED_BY_PLL(PhyID)    (((0xF3 & (1 << (PhyID))) & 0xFF) ? true : false)
+
+
+int cz_phm_set_asic_block_gating(struct pp_hwmgr *hwmgr, enum PHM_AsicBlock block, enum PHM_ClockGateSetting gating)
+{
+	int ret = 0;
+
+	switch (block) {
+	case PHM_AsicBlock_UVD_MVC:
+	case PHM_AsicBlock_UVD:
+	case PHM_AsicBlock_UVD_HD:
+	case PHM_AsicBlock_UVD_SD:
+		if (gating == PHM_ClockGateSetting_StaticOff)
+			ret = cz_dpm_powerdown_uvd(hwmgr);
+		else
+			ret = cz_dpm_powerup_uvd(hwmgr);
+		break;
+	case PHM_AsicBlock_GFX:
+	default:
+		break;
+	}
+
+	return ret;
+}
+
+
+bool cz_phm_is_safe_for_asic_block(struct pp_hwmgr *hwmgr, const struct pp_hw_power_state *state, enum PHM_AsicBlock block)
+{
+	return true;
+}
+
+
+int cz_phm_enable_disable_gfx_power_gating(struct pp_hwmgr *hwmgr, bool enable)
+{
+	return 0;
+}
+
+int cz_phm_smu_power_up_down_pcie(struct pp_hwmgr *hwmgr, uint32_t target, bool up, uint32_t args)
+{
+	/* TODO */
+	return 0;
+}
+
+int cz_phm_initialize_display_phy_access(struct pp_hwmgr *hwmgr, bool initialize, bool accesshw)
+{
+	/* TODO */
+	return 0;
+}
+
+int cz_phm_get_display_phy_access_info(struct pp_hwmgr *hwmgr)
+{
+	/* TODO */
+	return 0;
+}
+
+int cz_phm_gate_unused_display_phys(struct pp_hwmgr *hwmgr)
+{
+	/* TODO */
+	return 0;
+}
+
+int cz_phm_ungate_all_display_phys(struct pp_hwmgr *hwmgr)
+{
+	/* TODO */
+	return 0;
+}
+
+static int cz_tf_uvd_power_gating_initialize(struct pp_hwmgr *hwmgr, void *pInput, void *pOutput, void *pStorage, int Result)
+{
+	return 0;
+}
+
+static int cz_tf_vce_power_gating_initialize(struct pp_hwmgr *hwmgr, void *pInput, void *pOutput, void *pStorage, int Result)
+{
+	return 0;
+}
+
+int cz_enable_disable_uvd_dpm(struct pp_hwmgr *hwmgr, bool enable)
+{
+	struct cz_hwmgr *cz_hwmgr = (struct cz_hwmgr *)(hwmgr->backend);
+	uint32_t dpm_features = 0;
+
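+	/* toggle the UVD DPM feature in the SMU and mirror it in dpm_flags */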
+	if (enable &&
+		phm_cap_enabled(hwmgr->platform_descriptor.platformCaps,
+				  PHM_PlatformCaps_UVDDPM)) {
+		cz_hwmgr->dpm_flags |= DPMFlags_UVD_Enabled;
+		dpm_features |= UVD_DPM_MASK;
+		smum_send_msg_to_smc_with_parameter(hwmgr->smumgr,
+			    PPSMC_MSG_EnableAllSmuFeatures, dpm_features);
+	} else {
+		dpm_features |= UVD_DPM_MASK;
+		cz_hwmgr->dpm_flags &= ~DPMFlags_UVD_Enabled;
+		smum_send_msg_to_smc_with_parameter(hwmgr->smumgr,
+			   PPSMC_MSG_DisableAllSmuFeatures, dpm_features);
+	}
+	return 0;
+}
+
+int cz_enable_disable_vce_dpm(struct pp_hwmgr *hwmgr, bool enable)
+{
+	struct cz_hwmgr *cz_hwmgr = (struct cz_hwmgr *)(hwmgr->backend);
+	uint32_t dpm_features = 0;
+
+	if (enable && phm_cap_enabled(
+				hwmgr->platform_descriptor.platformCaps,
+				PHM_PlatformCaps_VCEDPM)) {
+		cz_hwmgr->dpm_flags |= DPMFlags_VCE_Enabled;
+		dpm_features |= VCE_DPM_MASK;
+		smum_send_msg_to_smc_with_parameter(hwmgr->smumgr,
+			    PPSMC_MSG_EnableAllSmuFeatures, dpm_features);
+	} else {
+		dpm_features |= VCE_DPM_MASK;
+		cz_hwmgr->dpm_flags &= ~DPMFlags_VCE_Enabled;
+		smum_send_msg_to_smc_with_parameter(hwmgr->smumgr,
+			   PPSMC_MSG_DisableAllSmuFeatures, dpm_features);
+	}
+
+	return 0;
+}
+
+
+int cz_dpm_powergate_uvd(struct pp_hwmgr *hwmgr, bool bgate)
+{
+	struct cz_hwmgr *cz_hwmgr = (struct cz_hwmgr *)(hwmgr->backend);
+
+	if (cz_hwmgr->uvd_power_gated == bgate)
+		return 0;
+
+	cz_hwmgr->uvd_power_gated = bgate;
+
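+	/*
+	 * Gating: ungate the clocks first, then power gate the block.
+	 * Ungating: power the block up first, then restore gating and DPM.
+	 */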
+	if (bgate) {
+		cgs_set_clockgating_state(hwmgr->device,
+						AMD_IP_BLOCK_TYPE_UVD,
+						AMD_CG_STATE_UNGATE);
+		cgs_set_powergating_state(hwmgr->device,
+						AMD_IP_BLOCK_TYPE_UVD,
+						AMD_PG_STATE_GATE);
+		cz_dpm_update_uvd_dpm(hwmgr, true);
+		cz_dpm_powerdown_uvd(hwmgr);
+	} else {
+		cz_dpm_powerup_uvd(hwmgr);
+		cgs_set_clockgating_state(hwmgr->device,
+						AMD_IP_BLOCK_TYPE_UVD,
+						AMD_CG_STATE_GATE);
+		cgs_set_powergating_state(hwmgr->device,
+						AMD_IP_BLOCK_TYPE_UVD,
+						AMD_PG_STATE_UNGATE);
+		cz_dpm_update_uvd_dpm(hwmgr, false);
+	}
+
+	return 0;
+}
+
+int cz_dpm_powergate_vce(struct pp_hwmgr *hwmgr, bool bgate)
+{
+	struct cz_hwmgr *cz_hwmgr = (struct cz_hwmgr *)(hwmgr->backend);
+
+	if (phm_cap_enabled(hwmgr->platform_descriptor.platformCaps,
+					PHM_PlatformCaps_VCEPowerGating)) {
+		if (cz_hwmgr->vce_power_gated != bgate) {
+			if (bgate) {
+				cgs_set_clockgating_state(
+							hwmgr->device,
+							AMD_IP_BLOCK_TYPE_VCE,
+							AMD_CG_STATE_UNGATE);
+				cgs_set_powergating_state(
+							hwmgr->device,
+							AMD_IP_BLOCK_TYPE_VCE,
+							AMD_PG_STATE_GATE);
+				cz_enable_disable_vce_dpm(hwmgr, false);
+				/* TODO: figure out why VCE can't be powered off */
+				cz_hwmgr->vce_power_gated = true;
+			} else {
+				cz_dpm_powerup_vce(hwmgr);
+				cz_hwmgr->vce_power_gated = false;
+				cgs_set_clockgating_state(
+							hwmgr->device,
+							AMD_IP_BLOCK_TYPE_VCE,
+							AMD_CG_STATE_GATE);
+				cgs_set_powergating_state(
+							hwmgr->device,
+							AMD_IP_BLOCK_TYPE_VCE,
+							AMD_PG_STATE_UNGATE);
+				cz_dpm_update_vce_dpm(hwmgr);
+				cz_enable_disable_vce_dpm(hwmgr, true);
+				return 0;
+			}
+		}
+	} else {
+		cz_dpm_update_vce_dpm(hwmgr);
+		cz_enable_disable_vce_dpm(hwmgr, true);
+		return 0;
+	}
+
+	if (!cz_hwmgr->vce_power_gated)
+		cz_dpm_update_vce_dpm(hwmgr);
+
+	return 0;
+}
+
+
+static struct phm_master_table_item cz_enable_clock_power_gatings_list[] = {
+	/* we don't need an exit table here, because there is only D3 cold on KV */
+	{ phm_cf_want_uvd_power_gating, cz_tf_uvd_power_gating_initialize },
+	{ phm_cf_want_vce_power_gating, cz_tf_vce_power_gating_initialize },
+	/* to do { NULL, cz_tf_xdma_power_gating_enable }, */
+	{ NULL, NULL }
+};
+
+struct phm_master_table_header cz_phm_enable_clock_power_gatings_master = {
+	0,
+	PHM_MasterTableFlag_None,
+	cz_enable_clock_power_gatings_list
+};

+ 37 - 0
drivers/gpu/drm/amd/powerplay/hwmgr/cz_clockpowergating.h

@@ -0,0 +1,37 @@
+/*
+ * Copyright 2015 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#ifndef _CZ_CLOCK_POWER_GATING_H_
+#define _CZ_CLOCK_POWER_GATING_H_
+
+#include "cz_hwmgr.h"
+#include "pp_asicblocks.h"
+
+extern int cz_phm_set_asic_block_gating(struct pp_hwmgr *hwmgr, enum PHM_AsicBlock block, enum PHM_ClockGateSetting gating);
+extern struct phm_master_table_header cz_phm_enable_clock_power_gatings_master;
+extern struct phm_master_table_header cz_phm_disable_clock_power_gatings_master;
+extern int cz_dpm_powergate_vce(struct pp_hwmgr *hwmgr, bool bgate);
+extern int cz_dpm_powergate_uvd(struct pp_hwmgr *hwmgr, bool bgate);
+extern int cz_enable_disable_vce_dpm(struct pp_hwmgr *hwmgr, bool enable);
+extern int cz_enable_disable_uvd_dpm(struct pp_hwmgr *hwmgr, bool enable);
+#endif /* _CZ_CLOCK_POWER_GATING_H_ */

+ 1737 - 0
drivers/gpu/drm/amd/powerplay/hwmgr/cz_hwmgr.c

@@ -0,0 +1,1737 @@
+/*
+ * Copyright 2015 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+#include <linux/types.h>
+#include <linux/kernel.h>
+#include <linux/slab.h>
+#include "atom-types.h"
+#include "atombios.h"
+#include "processpptables.h"
+#include "pp_debug.h"
+#include "cgs_common.h"
+#include "smu/smu_8_0_d.h"
+#include "smu8_fusion.h"
+#include "smu/smu_8_0_sh_mask.h"
+#include "smumgr.h"
+#include "hwmgr.h"
+#include "hardwaremanager.h"
+#include "cz_ppsmc.h"
+#include "cz_hwmgr.h"
+#include "power_state.h"
+#include "cz_clockpowergating.h"
+
+#define ixSMUSVI_NB_CURRENTVID 0xD8230044
+#define CURRENT_NB_VID_MASK 0xff000000
+#define CURRENT_NB_VID__SHIFT 24
+#define ixSMUSVI_GFX_CURRENTVID  0xD8230048
+#define CURRENT_GFX_VID_MASK 0xff000000
+#define CURRENT_GFX_VID__SHIFT 24
+
+static const unsigned long PhwCz_Magic = (unsigned long) PHM_Cz_Magic;
+
+static struct cz_power_state *cast_PhwCzPowerState(struct pp_hw_power_state *hw_ps)
+{
+	if (PhwCz_Magic != hw_ps->magic)
+		return NULL;
+
+	return (struct cz_power_state *)hw_ps;
+}
+
+static const struct cz_power_state *cast_const_PhwCzPowerState(
+				const struct pp_hw_power_state *hw_ps)
+{
+	if (PhwCz_Magic != hw_ps->magic)
+		return NULL;
+
+	return (const struct cz_power_state *)hw_ps;
+}
+
+uint32_t cz_get_eclk_level(struct pp_hwmgr *hwmgr,
+					uint32_t clock, uint32_t msg)
+{
+	int i = 0;
+	struct phm_vce_clock_voltage_dependency_table *ptable =
+		hwmgr->dyn_state.vce_clock_voltage_dependency_table;
+
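+	/*
+	 * For the "min" messages return the first level whose clock satisfies
+	 * the request; for the "max" messages return the highest level whose
+	 * clock does not exceed it.
+	 */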
+	switch (msg) {
+	case PPSMC_MSG_SetEclkSoftMin:
+	case PPSMC_MSG_SetEclkHardMin:
+		for (i = 0; i < (int)ptable->count; i++) {
+			if (clock <= ptable->entries[i].ecclk)
+				break;
+		}
+		break;
+
+	case PPSMC_MSG_SetEclkSoftMax:
+	case PPSMC_MSG_SetEclkHardMax:
+		for (i = ptable->count - 1; i >= 0; i--) {
+			if (clock >= ptable->entries[i].ecclk)
+				break;
+		}
+		break;
+
+	default:
+		break;
+	}
+
+	return i;
+}
+
+static uint32_t cz_get_sclk_level(struct pp_hwmgr *hwmgr,
+				uint32_t clock, uint32_t msg)
+{
+	int i = 0;
+	struct phm_clock_voltage_dependency_table *table =
+				hwmgr->dyn_state.vddc_dependency_on_sclk;
+
+	switch (msg) {
+	case PPSMC_MSG_SetSclkSoftMin:
+	case PPSMC_MSG_SetSclkHardMin:
+		for (i = 0; i < (int)table->count; i++) {
+			if (clock <= table->entries[i].clk)
+				break;
+		}
+		break;
+
+	case PPSMC_MSG_SetSclkSoftMax:
+	case PPSMC_MSG_SetSclkHardMax:
+		for (i = table->count - 1; i >= 0; i--) {
+			if (clock >= table->entries[i].clk)
+				break;
+		}
+		break;
+
+	default:
+		break;
+	}
+	return i;
+}
+
+static uint32_t cz_get_uvd_level(struct pp_hwmgr *hwmgr,
+					uint32_t clock, uint32_t msg)
+{
+	int i = 0;
+	struct phm_uvd_clock_voltage_dependency_table *ptable =
+		hwmgr->dyn_state.uvd_clock_voltage_dependency_table;
+
+	switch (msg) {
+	case PPSMC_MSG_SetUvdSoftMin:
+	case PPSMC_MSG_SetUvdHardMin:
+		for (i = 0; i < (int)ptable->count; i++) {
+			if (clock <= ptable->entries[i].vclk)
+				break;
+		}
+		break;
+
+	case PPSMC_MSG_SetUvdSoftMax:
+	case PPSMC_MSG_SetUvdHardMax:
+		for (i = ptable->count - 1; i >= 0; i--) {
+			if (clock >= ptable->entries[i].vclk)
+				break;
+		}
+		break;
+
+	default:
+		break;
+	}
+
+	return i;
+}
+
+static uint32_t cz_get_max_sclk_level(struct pp_hwmgr *hwmgr)
+{
+	struct cz_hwmgr *cz_hwmgr = (struct cz_hwmgr *)(hwmgr->backend);
+
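+	/* query the SMU once and cache the result (the SMU's answer + 1) */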
+	if (cz_hwmgr->max_sclk_level == 0) {
+		smum_send_msg_to_smc(hwmgr->smumgr, PPSMC_MSG_GetMaxSclkLevel);
+		cz_hwmgr->max_sclk_level = smum_get_argument(hwmgr->smumgr) + 1;
+	}
+
+	return cz_hwmgr->max_sclk_level;
+}
+
+static int cz_initialize_dpm_defaults(struct pp_hwmgr *hwmgr)
+{
+	struct cz_hwmgr *cz_hwmgr = (struct cz_hwmgr *)(hwmgr->backend);
+	uint32_t i;
+
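+	/* clock ramp step defaults to 25% of a 256-step range, i.e. 64 */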
+	cz_hwmgr->gfx_ramp_step = 256*25/100;
+
+	cz_hwmgr->gfx_ramp_delay = 1; /* by default, we delay 1us */
+
+	for (i = 0; i < CZ_MAX_HARDWARE_POWERLEVELS; i++)
+		cz_hwmgr->activity_target[i] = CZ_AT_DFLT;
+
+	cz_hwmgr->mgcg_cgtt_local0 = 0x00000000;
+	cz_hwmgr->mgcg_cgtt_local1 = 0x00000000;
+
+	cz_hwmgr->clock_slow_down_freq = 25000;
+
+	cz_hwmgr->skip_clock_slow_down = 1;
+
+	cz_hwmgr->enable_nb_ps_policy = 1; /* enabled by default; can be disabled until UNB is ready */
+
+	cz_hwmgr->voltage_drop_in_dce_power_gating = 0; /* disable until fully verified */
+
+	cz_hwmgr->voting_rights_clients = 0x00C00033;
+
+	cz_hwmgr->static_screen_threshold = 8;
+
+	cz_hwmgr->ddi_power_gating_disabled = 0;
+
+	cz_hwmgr->bapm_enabled = 1;
+
+	cz_hwmgr->voltage_drop_threshold = 0;
+
+	cz_hwmgr->gfx_power_gating_threshold = 500;
+
+	cz_hwmgr->vce_slow_sclk_threshold = 20000;
+
+	cz_hwmgr->dce_slow_sclk_threshold = 30000;
+
+	cz_hwmgr->disable_driver_thermal_policy = 1;
+
+	cz_hwmgr->disable_nb_ps3_in_battery = 0;
+
+	phm_cap_unset(hwmgr->platform_descriptor.platformCaps,
+							PHM_PlatformCaps_ABM);
+
+	phm_cap_set(hwmgr->platform_descriptor.platformCaps,
+				    PHM_PlatformCaps_NonABMSupportInPPLib);
+
+	phm_cap_set(hwmgr->platform_descriptor.platformCaps,
+					   PHM_PlatformCaps_SclkDeepSleep);
+
+	phm_cap_unset(hwmgr->platform_descriptor.platformCaps,
+					PHM_PlatformCaps_DynamicM3Arbiter);
+
+	cz_hwmgr->override_dynamic_mgpg = 1;
+
+	phm_cap_set(hwmgr->platform_descriptor.platformCaps,
+				  PHM_PlatformCaps_DynamicPatchPowerState);
+
+	cz_hwmgr->thermal_auto_throttling_treshold = 0;
+
+	cz_hwmgr->tdr_clock = 0;
+
+	cz_hwmgr->disable_gfx_power_gating_in_uvd = 0;
+
+	phm_cap_set(hwmgr->platform_descriptor.platformCaps,
+					PHM_PlatformCaps_DynamicUVDState);
+
+	cz_hwmgr->cc6_settings.cpu_cc6_disable = false;
+	cz_hwmgr->cc6_settings.cpu_pstate_disable = false;
+	cz_hwmgr->cc6_settings.nb_pstate_switch_disable = false;
+	cz_hwmgr->cc6_settings.cpu_pstate_separation_time = 0;
+
+	phm_cap_set(hwmgr->platform_descriptor.platformCaps,
+				   PHM_PlatformCaps_DisableVoltageIsland);
+
+	return 0;
+}
+
+static uint32_t cz_convert_8Bit_index_to_voltage(
+			struct pp_hwmgr *hwmgr, uint16_t voltage)
+{
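+	/* decode an 8-bit VID index: voltage = 6200 - 25 * index */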
+	return 6200 - (voltage * 25);
+}
+
+static int cz_construct_max_power_limits_table(struct pp_hwmgr *hwmgr,
+			struct phm_clock_and_voltage_limits *table)
+{
+	struct cz_hwmgr *cz_hwmgr = (struct cz_hwmgr *)hwmgr->backend;
+	struct cz_sys_info *sys_info = &cz_hwmgr->sys_info;
+	struct phm_clock_voltage_dependency_table *dep_table =
+				hwmgr->dyn_state.vddc_dependency_on_sclk;
+
+	if (dep_table->count > 0) {
+		table->sclk = dep_table->entries[dep_table->count-1].clk;
+		table->vddc = cz_convert_8Bit_index_to_voltage(hwmgr,
+		   (uint16_t)dep_table->entries[dep_table->count-1].v);
+	}
+	table->mclk = sys_info->nbp_memory_clock[0];
+	return 0;
+}
+
+static int cz_init_dynamic_state_adjustment_rule_settings(
+			struct pp_hwmgr *hwmgr,
+			ATOM_CLK_VOLT_CAPABILITY *disp_voltage_table)
+{
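+	/*
+	 * Room for 8 entries: presumably one record embedded in the table
+	 * struct plus the 7 appended here (count is set to 8 below).
+	 */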
+	uint32_t table_size =
+		sizeof(struct phm_clock_voltage_dependency_table) +
+		(7 * sizeof(struct phm_clock_voltage_dependency_record));
+
+	struct phm_clock_voltage_dependency_table *table_clk_vlt =
+					kzalloc(table_size, GFP_KERNEL);
+
+	if (NULL == table_clk_vlt) {
+		printk(KERN_ERR "[ powerplay ] Cannot allocate memory!\n");
+		return -ENOMEM;
+	}
+
+	table_clk_vlt->count = 8;
+	table_clk_vlt->entries[0].clk = PP_DAL_POWERLEVEL_0;
+	table_clk_vlt->entries[0].v = 0;
+	table_clk_vlt->entries[1].clk = PP_DAL_POWERLEVEL_1;
+	table_clk_vlt->entries[1].v = 1;
+	table_clk_vlt->entries[2].clk = PP_DAL_POWERLEVEL_2;
+	table_clk_vlt->entries[2].v = 2;
+	table_clk_vlt->entries[3].clk = PP_DAL_POWERLEVEL_3;
+	table_clk_vlt->entries[3].v = 3;
+	table_clk_vlt->entries[4].clk = PP_DAL_POWERLEVEL_4;
+	table_clk_vlt->entries[4].v = 4;
+	table_clk_vlt->entries[5].clk = PP_DAL_POWERLEVEL_5;
+	table_clk_vlt->entries[5].v = 5;
+	table_clk_vlt->entries[6].clk = PP_DAL_POWERLEVEL_6;
+	table_clk_vlt->entries[6].v = 6;
+	table_clk_vlt->entries[7].clk = PP_DAL_POWERLEVEL_7;
+	table_clk_vlt->entries[7].v = 7;
+	hwmgr->dyn_state.vddc_dep_on_dal_pwrl = table_clk_vlt;
+
+	return 0;
+}
+
+static int cz_get_system_info_data(struct pp_hwmgr *hwmgr)
+{
+	struct cz_hwmgr *cz_hwmgr = (struct cz_hwmgr *)hwmgr->backend;
+	ATOM_INTEGRATED_SYSTEM_INFO_V1_9 *info = NULL;
+	uint32_t i;
+	int result = 0;
+	uint8_t frev, crev;
+	uint16_t size;
+
+	info = (ATOM_INTEGRATED_SYSTEM_INFO_V1_9 *) cgs_atom_get_data_table(
+			hwmgr->device,
+			GetIndexIntoMasterTable(DATA, IntegratedSystemInfo),
+			&size, &frev, &crev);
+
+	if (info == NULL) {
+		printk(KERN_ERR "[ powerplay ] Could not retrieve the Integrated System Info Table!\n");
+		return -EINVAL;
+	}
+
+	if (crev != 9) {
+		printk(KERN_ERR "[ powerplay ] Unsupported IGP table: %d %d\n", frev, crev);
+		return -EINVAL;
+	}
+
+	cz_hwmgr->sys_info.bootup_uma_clock =
+				   le32_to_cpu(info->ulBootUpUMAClock);
+
+	cz_hwmgr->sys_info.bootup_engine_clock =
+				le32_to_cpu(info->ulBootUpEngineClock);
+
+	cz_hwmgr->sys_info.dentist_vco_freq =
+				   le32_to_cpu(info->ulDentistVCOFreq);
+
+	cz_hwmgr->sys_info.system_config =
+				     le32_to_cpu(info->ulSystemConfig);
+
+	cz_hwmgr->sys_info.bootup_nb_voltage_index =
+				  le16_to_cpu(info->usBootUpNBVoltage);
+
+	cz_hwmgr->sys_info.htc_hyst_lmt =
+			(info->ucHtcHystLmt == 0) ? 5 : info->ucHtcHystLmt;
+
+	cz_hwmgr->sys_info.htc_tmp_lmt =
+			(info->ucHtcTmpLmt == 0) ? 203 : info->ucHtcTmpLmt;
+
+	if (cz_hwmgr->sys_info.htc_tmp_lmt <=
+			cz_hwmgr->sys_info.htc_hyst_lmt) {
+		printk(KERN_ERR "[ powerplay ] The htcTmpLmt should be larger than htcHystLmt.\n");
+		return -EINVAL;
+	}
+
+	cz_hwmgr->sys_info.nb_dpm_enable =
+				cz_hwmgr->enable_nb_ps_policy &&
+				(le32_to_cpu(info->ulSystemConfig) >> 3 & 0x1);
+
+	for (i = 0; i < CZ_NUM_NBPSTATES; i++) {
+		if (i < CZ_NUM_NBPMEMORYCLOCK) {
+			cz_hwmgr->sys_info.nbp_memory_clock[i] =
+			  le32_to_cpu(info->ulNbpStateMemclkFreq[i]);
+		}
+		cz_hwmgr->sys_info.nbp_n_clock[i] =
+			    le32_to_cpu(info->ulNbpStateNClkFreq[i]);
+	}
+
+	for (i = 0; i < MAX_DISPLAY_CLOCK_LEVEL; i++) {
+		cz_hwmgr->sys_info.display_clock[i] =
+					le32_to_cpu(info->sDispClkVoltageMapping[i].ulMaximumSupportedCLK);
+	}
+
+	/* Four NB P-state levels are used here; make sure we don't exceed them */
+	for (i = 0; i < CZ_NUM_NBPSTATES; i++) {
+		cz_hwmgr->sys_info.nbp_voltage_index[i] =
+			     le16_to_cpu(info->usNBPStateVoltage[i]);
+	}
+
+	if (!cz_hwmgr->sys_info.nb_dpm_enable) {
+		for (i = 1; i < CZ_NUM_NBPSTATES; i++) {
+			if (i < CZ_NUM_NBPMEMORYCLOCK) {
+				cz_hwmgr->sys_info.nbp_memory_clock[i] =
+				    cz_hwmgr->sys_info.nbp_memory_clock[0];
+			}
+			cz_hwmgr->sys_info.nbp_n_clock[i] =
+				    cz_hwmgr->sys_info.nbp_n_clock[0];
+			cz_hwmgr->sys_info.nbp_voltage_index[i] =
+				    cz_hwmgr->sys_info.nbp_voltage_index[0];
+		}
+	}
+
+	if (le32_to_cpu(info->ulGPUCapInfo) &
+		SYS_INFO_GPUCAPS__ENABEL_DFS_BYPASS) {
+		phm_cap_set(hwmgr->platform_descriptor.platformCaps,
+				    PHM_PlatformCaps_EnableDFSBypass);
+	}
+
+	cz_hwmgr->sys_info.uma_channel_number = info->ucUMAChannelNumber;
+
+	cz_construct_max_power_limits_table(hwmgr,
+				    &hwmgr->dyn_state.max_clock_voltage_on_ac);
+
+	cz_init_dynamic_state_adjustment_rule_settings(hwmgr,
+				    &info->sDISPCLK_Voltage[0]);
+
+	return result;
+}
+
+static int cz_construct_boot_state(struct pp_hwmgr *hwmgr)
+{
+	struct cz_hwmgr *cz_hwmgr = (struct cz_hwmgr *)(hwmgr->backend);
+
+	cz_hwmgr->boot_power_level.engineClock =
+				cz_hwmgr->sys_info.bootup_engine_clock;
+
+	cz_hwmgr->boot_power_level.vddcIndex =
+			(uint8_t)cz_hwmgr->sys_info.bootup_nb_voltage_index;
+
+	cz_hwmgr->boot_power_level.dsDividerIndex = 0;
+
+	cz_hwmgr->boot_power_level.ssDividerIndex = 0;
+
+	cz_hwmgr->boot_power_level.allowGnbSlow = 1;
+
+	cz_hwmgr->boot_power_level.forceNBPstate = 0;
+
+	cz_hwmgr->boot_power_level.hysteresis_up = 0;
+
+	cz_hwmgr->boot_power_level.numSIMDToPowerDown = 0;
+
+	cz_hwmgr->boot_power_level.display_wm = 0;
+
+	cz_hwmgr->boot_power_level.vce_wm = 0;
+
+	return 0;
+}
+
+static int cz_tf_reset_active_process_mask(struct pp_hwmgr *hwmgr, void *input,
+					void *output, void *storage, int result)
+{
+	return 0;
+}
+
+static int cz_tf_upload_pptable_to_smu(struct pp_hwmgr *hwmgr, void *input,
+				       void *output, void *storage, int result)
+{
+	struct SMU8_Fusion_ClkTable *clock_table;
+	int ret;
+	uint32_t i;
+	void *table = NULL;
+	pp_atomctrl_clock_dividers_kong dividers;
+
+	struct phm_clock_voltage_dependency_table *vddc_table =
+		hwmgr->dyn_state.vddc_dependency_on_sclk;
+	struct phm_clock_voltage_dependency_table *vdd_gfx_table =
+		hwmgr->dyn_state.vdd_gfx_dependency_on_sclk;
+	struct phm_acp_clock_voltage_dependency_table *acp_table =
+		hwmgr->dyn_state.acp_clock_voltage_dependency_table;
+	struct phm_uvd_clock_voltage_dependency_table *uvd_table =
+		hwmgr->dyn_state.uvd_clock_voltage_dependency_table;
+	struct phm_vce_clock_voltage_dependency_table *vce_table =
+		hwmgr->dyn_state.vce_clock_voltage_dependency_table;
+
+	if (!hwmgr->need_pp_table_upload)
+		return 0;
+
+	ret = smum_download_powerplay_table(hwmgr->smumgr, &table);
+
+	PP_ASSERT_WITH_CODE((0 == ret && NULL != table),
+			    "Failed to get clock table from SMU!", return -EINVAL;);
+
+	clock_table = (struct SMU8_Fusion_ClkTable *)table;
+
+	/* patch clock table */
+	PP_ASSERT_WITH_CODE((vddc_table->count <= CZ_MAX_HARDWARE_POWERLEVELS),
+			    "Dependency table entry exceeds max limit!", return -EINVAL;);
+	PP_ASSERT_WITH_CODE((vdd_gfx_table->count <= CZ_MAX_HARDWARE_POWERLEVELS),
+			    "Dependency table entry exceeds max limit!", return -EINVAL;);
+	PP_ASSERT_WITH_CODE((acp_table->count <= CZ_MAX_HARDWARE_POWERLEVELS),
+			    "Dependency table entry exceeds max limit!", return -EINVAL;);
+	PP_ASSERT_WITH_CODE((uvd_table->count <= CZ_MAX_HARDWARE_POWERLEVELS),
+			    "Dependency table entry exceeds max limit!", return -EINVAL;);
+	PP_ASSERT_WITH_CODE((vce_table->count <= CZ_MAX_HARDWARE_POWERLEVELS),
+			    "Dependency table entry exceeds max limit!", return -EINVAL;);
+
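+	/*
+	 * Fill every hardware power level from the dependency tables and ask
+	 * the VBIOS for the matching engine PLL divider, then upload the
+	 * patched table back to the SMU.
+	 */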
+	for (i = 0; i < CZ_MAX_HARDWARE_POWERLEVELS; i++) {
+
+		/* vddc_sclk */
+		clock_table->SclkBreakdownTable.ClkLevel[i].GnbVid =
+			(i < vddc_table->count) ? (uint8_t)vddc_table->entries[i].v : 0;
+		clock_table->SclkBreakdownTable.ClkLevel[i].Frequency =
+			(i < vddc_table->count) ? vddc_table->entries[i].clk : 0;
+
+		atomctrl_get_engine_pll_dividers_kong(hwmgr,
+						      clock_table->SclkBreakdownTable.ClkLevel[i].Frequency,
+						      &dividers);
+
+		clock_table->SclkBreakdownTable.ClkLevel[i].DfsDid =
+			(uint8_t)dividers.pll_post_divider;
+
+		/* vddgfx_sclk */
+		clock_table->SclkBreakdownTable.ClkLevel[i].GfxVid =
+			(i < vdd_gfx_table->count) ? (uint8_t)vdd_gfx_table->entries[i].v : 0;
+
+		/* acp breakdown */
+		clock_table->AclkBreakdownTable.ClkLevel[i].GfxVid =
+			(i < acp_table->count) ? (uint8_t)acp_table->entries[i].v : 0;
+		clock_table->AclkBreakdownTable.ClkLevel[i].Frequency =
+			(i < acp_table->count) ? acp_table->entries[i].acpclk : 0;
+
+		atomctrl_get_engine_pll_dividers_kong(hwmgr,
+						      clock_table->AclkBreakdownTable.ClkLevel[i].Frequency,
+						      &dividers);
+
+		clock_table->AclkBreakdownTable.ClkLevel[i].DfsDid =
+			(uint8_t)dividers.pll_post_divider;
+
+
+		/* uvd breakdown */
+		clock_table->VclkBreakdownTable.ClkLevel[i].GfxVid =
+			(i < uvd_table->count) ? (uint8_t)uvd_table->entries[i].v : 0;
+		clock_table->VclkBreakdownTable.ClkLevel[i].Frequency =
+			(i < uvd_table->count) ? uvd_table->entries[i].vclk : 0;
+
+		atomctrl_get_engine_pll_dividers_kong(hwmgr,
+						      clock_table->VclkBreakdownTable.ClkLevel[i].Frequency,
+						      &dividers);
+
+		clock_table->VclkBreakdownTable.ClkLevel[i].DfsDid =
+			(uint8_t)dividers.pll_post_divider;
+
+		clock_table->DclkBreakdownTable.ClkLevel[i].GfxVid =
+			(i < uvd_table->count) ? (uint8_t)uvd_table->entries[i].v : 0;
+		clock_table->DclkBreakdownTable.ClkLevel[i].Frequency =
+			(i < uvd_table->count) ? uvd_table->entries[i].dclk : 0;
+
+		atomctrl_get_engine_pll_dividers_kong(hwmgr,
+						      clock_table->DclkBreakdownTable.ClkLevel[i].Frequency,
+						      &dividers);
+
+		clock_table->DclkBreakdownTable.ClkLevel[i].DfsDid =
+			(uint8_t)dividers.pll_post_divider;
+
+		/* vce breakdown */
+		clock_table->EclkBreakdownTable.ClkLevel[i].GfxVid =
+			(i < vce_table->count) ? (uint8_t)vce_table->entries[i].v : 0;
+		clock_table->EclkBreakdownTable.ClkLevel[i].Frequency =
+			(i < vce_table->count) ? vce_table->entries[i].ecclk : 0;
+
+
+		atomctrl_get_engine_pll_dividers_kong(hwmgr,
+						      clock_table->EclkBreakdownTable.ClkLevel[i].Frequency,
+						      &dividers);
+
+		clock_table->EclkBreakdownTable.ClkLevel[i].DfsDid =
+			(uint8_t)dividers.pll_post_divider;
+
+	}
+	ret = smum_upload_powerplay_table(hwmgr->smumgr);
+
+	return ret;
+}
+
+static int cz_tf_init_sclk_limit(struct pp_hwmgr *hwmgr, void *input,
+				 void *output, void *storage, int result)
+{
+	struct cz_hwmgr *cz_hwmgr = (struct cz_hwmgr *)(hwmgr->backend);
+	struct phm_clock_voltage_dependency_table *table =
+					hwmgr->dyn_state.vddc_dependency_on_sclk;
+	unsigned long clock = 0, level;
+
+	if (NULL == table || table->count <= 0)
+		return -EINVAL;
+
+	cz_hwmgr->sclk_dpm.soft_min_clk = table->entries[0].clk;
+	cz_hwmgr->sclk_dpm.hard_min_clk = table->entries[0].clk;
+
+	level = cz_get_max_sclk_level(hwmgr) - 1;
+
+	if (level < table->count)
+		clock = table->entries[level].clk;
+	else
+		clock = table->entries[table->count - 1].clk;
+
+	cz_hwmgr->sclk_dpm.soft_max_clk = clock;
+	cz_hwmgr->sclk_dpm.hard_max_clk = clock;
+
+	return 0;
+}
+
+static int cz_tf_init_uvd_limit(struct pp_hwmgr *hwmgr, void *input,
+				void *output, void *storage, int result)
+{
+	struct cz_hwmgr *cz_hwmgr = (struct cz_hwmgr *)(hwmgr->backend);
+	struct phm_uvd_clock_voltage_dependency_table *table =
+				hwmgr->dyn_state.uvd_clock_voltage_dependency_table;
+	unsigned long clock = 0, level;
+
+	if (NULL == table || table->count <= 0)
+		return -EINVAL;
+
+	cz_hwmgr->uvd_dpm.soft_min_clk = 0;
+	cz_hwmgr->uvd_dpm.hard_min_clk = 0;
+
+	smum_send_msg_to_smc(hwmgr->smumgr, PPSMC_MSG_GetMaxUvdLevel);
+	level = smum_get_argument(hwmgr->smumgr);
+
+	if (level < table->count)
+		clock = table->entries[level].vclk;
+	else
+		clock = table->entries[table->count - 1].vclk;
+
+	cz_hwmgr->uvd_dpm.soft_max_clk = clock;
+	cz_hwmgr->uvd_dpm.hard_max_clk = clock;
+
+	return 0;
+}
+
+static int cz_tf_init_vce_limit(struct pp_hwmgr *hwmgr, void *input,
+				void *output, void *storage, int result)
+{
+	struct cz_hwmgr *cz_hwmgr = (struct cz_hwmgr *)(hwmgr->backend);
+	struct phm_vce_clock_voltage_dependency_table *table =
+				hwmgr->dyn_state.vce_clock_voltage_dependency_table;
+	unsigned long clock = 0, level;
+
+	if (NULL == table || table->count <= 0)
+		return -EINVAL;
+
+	cz_hwmgr->vce_dpm.soft_min_clk = 0;
+	cz_hwmgr->vce_dpm.hard_min_clk = 0;
+
+	smum_send_msg_to_smc(hwmgr->smumgr, PPSMC_MSG_GetMaxEclkLevel);
+	level = smum_get_argument(hwmgr->smumgr);
+
+	if (level < table->count)
+		clock = table->entries[level].ecclk;
+	else
+		clock = table->entries[table->count - 1].ecclk;
+
+	cz_hwmgr->vce_dpm.soft_max_clk = clock;
+	cz_hwmgr->vce_dpm.hard_max_clk = clock;
+
+	return 0;
+}
+
+static int cz_tf_init_acp_limit(struct pp_hwmgr *hwmgr, void *input,
+				void *output, void *storage, int result)
+{
+	struct cz_hwmgr *cz_hwmgr = (struct cz_hwmgr *)(hwmgr->backend);
+	struct phm_acp_clock_voltage_dependency_table *table =
+				hwmgr->dyn_state.acp_clock_voltage_dependency_table;
+	unsigned long clock = 0, level;
+
+	if (NULL == table || table->count <= 0)
+		return -EINVAL;
+
+	cz_hwmgr->acp_dpm.soft_min_clk = 0;
+	cz_hwmgr->acp_dpm.hard_min_clk = 0;
+
+	smum_send_msg_to_smc(hwmgr->smumgr, PPSMC_MSG_GetMaxAclkLevel);
+	level = smum_get_argument(hwmgr->smumgr);
+
+	if (level < table->count)
+		clock = table->entries[level].acpclk;
+	else
+		clock = table->entries[table->count - 1].acpclk;
+
+	cz_hwmgr->acp_dpm.soft_max_clk = clock;
+	cz_hwmgr->acp_dpm.hard_max_clk = clock;
+	return 0;
+}
+
+static int cz_tf_init_power_gate_state(struct pp_hwmgr *hwmgr, void *input,
+				void *output, void *storage, int result)
+{
+	struct cz_hwmgr *cz_hwmgr = (struct cz_hwmgr *)(hwmgr->backend);
+
+	cz_hwmgr->uvd_power_gated = false;
+	cz_hwmgr->vce_power_gated = false;
+	cz_hwmgr->samu_power_gated = false;
+	cz_hwmgr->acp_power_gated = false;
+	cz_hwmgr->pgacpinit = true;
+
+	return 0;
+}
+
+static int cz_tf_init_sclk_threshold(struct pp_hwmgr *hwmgr, void *input,
+				void *output, void *storage, int result)
+{
+	struct cz_hwmgr *cz_hwmgr = (struct cz_hwmgr *)(hwmgr->backend);
+
+	cz_hwmgr->low_sclk_interrupt_threshold = 0;
+
+	return 0;
+}
+
+static int cz_tf_update_sclk_limit(struct pp_hwmgr *hwmgr,
+					void *input, void *output,
+					void *storage, int result)
+{
+	struct cz_hwmgr *cz_hwmgr = (struct cz_hwmgr *)(hwmgr->backend);
+	struct phm_clock_voltage_dependency_table *table =
+					hwmgr->dyn_state.vddc_dependency_on_sclk;
+
+	unsigned long clock = 0;
+	unsigned long level;
+	unsigned long stable_pstate_sclk;
+	struct PP_Clocks clocks = {0}; /* zeroed: the PECI query below is stubbed out */
+	unsigned long percentage;
+
+	cz_hwmgr->sclk_dpm.soft_min_clk = table->entries[0].clk;
+	level = cz_get_max_sclk_level(hwmgr) - 1;
+
+	if (level < table->count)
+		cz_hwmgr->sclk_dpm.soft_max_clk  = table->entries[level].clk;
+	else
+		cz_hwmgr->sclk_dpm.soft_max_clk  = table->entries[table->count - 1].clk;
+
+	/*PECI_GetMinClockSettings(pHwMgr->pPECI, &clocks);*/
+	clock = clocks.engineClock;
+
+	if (cz_hwmgr->sclk_dpm.hard_min_clk != clock) {
+		cz_hwmgr->sclk_dpm.hard_min_clk = clock;
+
+		smum_send_msg_to_smc_with_parameter(hwmgr->smumgr,
+						PPSMC_MSG_SetSclkHardMin,
+						 cz_get_sclk_level(hwmgr,
+					cz_hwmgr->sclk_dpm.hard_min_clk,
+					     PPSMC_MSG_SetSclkHardMin));
+	}
+
+	clock = cz_hwmgr->sclk_dpm.soft_min_clk;
+
+	/* update minimum clocks for Stable P-State feature */
+	if (phm_cap_enabled(hwmgr->platform_descriptor.platformCaps,
+				     PHM_PlatformCaps_StablePState)) {
+		percentage = 75;
+		/* Sclk - calculate sclk value based on percentage and find FLOOR sclk from VddcDependencyOnSCLK table */
+		stable_pstate_sclk = (hwmgr->dyn_state.max_clock_voltage_on_ac.sclk *
+					percentage) / 100;
+
+		if (clock < stable_pstate_sclk)
+			clock = stable_pstate_sclk;
+	} else {
+		if (clock < hwmgr->gfx_arbiter.sclk)
+			clock = hwmgr->gfx_arbiter.sclk;
+	}
+
+	if (cz_hwmgr->sclk_dpm.soft_min_clk != clock) {
+		cz_hwmgr->sclk_dpm.soft_min_clk = clock;
+		smum_send_msg_to_smc_with_parameter(hwmgr->smumgr,
+						PPSMC_MSG_SetSclkSoftMin,
+						cz_get_sclk_level(hwmgr,
+					cz_hwmgr->sclk_dpm.soft_min_clk,
+					     PPSMC_MSG_SetSclkSoftMin));
+	}
+
+	if (phm_cap_enabled(hwmgr->platform_descriptor.platformCaps,
+				    PHM_PlatformCaps_StablePState) &&
+			 cz_hwmgr->sclk_dpm.soft_max_clk != clock) {
+		cz_hwmgr->sclk_dpm.soft_max_clk = clock;
+		smum_send_msg_to_smc_with_parameter(hwmgr->smumgr,
+						PPSMC_MSG_SetSclkSoftMax,
+						cz_get_sclk_level(hwmgr,
+					cz_hwmgr->sclk_dpm.soft_max_clk,
+					PPSMC_MSG_SetSclkSoftMax));
+	}
+
+	return 0;
+}
+
+static int cz_tf_set_deep_sleep_sclk_threshold(struct pp_hwmgr *hwmgr,
+					void *input, void *output,
+					void *storage, int result)
+{
+	if (phm_cap_enabled(hwmgr->platform_descriptor.platformCaps,
+				PHM_PlatformCaps_SclkDeepSleep)) {
+		uint32_t clks = hwmgr->display_config.min_core_set_clock_in_sr;
+		if (clks == 0)
+			clks = CZ_MIN_DEEP_SLEEP_SCLK;
+
+		PP_DBG_LOG("Setting Deep Sleep Clock: %d\n", clks);
+
+		smum_send_msg_to_smc_with_parameter(hwmgr->smumgr,
+				PPSMC_MSG_SetMinDeepSleepSclk,
+				clks);
+	}
+
+	return 0;
+}
+
+static int cz_tf_set_watermark_threshold(struct pp_hwmgr *hwmgr,
+					void *input, void *output,
+					void *storage, int result)
+{
+	struct cz_hwmgr *cz_hwmgr =
+				  (struct cz_hwmgr *)(hwmgr->backend);
+
+	smum_send_msg_to_smc_with_parameter(hwmgr->smumgr,
+					PPSMC_MSG_SetWatermarkFrequency,
+				      cz_hwmgr->sclk_dpm.soft_max_clk);
+
+	return 0;
+}
+
+static int cz_tf_set_enabled_levels(struct pp_hwmgr *hwmgr,
+					void *input, void *output,
+					void *storage, int result)
+{
+	return 0;
+}
+
+
+static int cz_tf_enable_nb_dpm(struct pp_hwmgr *hwmgr,
+					void *input, void *output,
+					void *storage, int result)
+{
+	int ret = 0;
+
+	struct cz_hwmgr *cz_hwmgr = (struct cz_hwmgr *)(hwmgr->backend);
+	unsigned long dpm_features = 0;
+
+	if (!cz_hwmgr->is_nb_dpm_enabled) {
+		PP_DBG_LOG("enabling NB DPM.\n");
+		dpm_features |= NB_DPM_MASK;
+		ret = smum_send_msg_to_smc_with_parameter(
+							     hwmgr->smumgr,
+					 PPSMC_MSG_EnableAllSmuFeatures,
+							     dpm_features);
+		if (ret == 0)
+			cz_hwmgr->is_nb_dpm_enabled = true;
+	}
+
+	return ret;
+}
+
+static int cz_nbdpm_pstate_enable_disable(struct pp_hwmgr *hwmgr, bool enable, bool lock)
+{
+	struct cz_hwmgr *hw_data = (struct cz_hwmgr *)(hwmgr->backend);
+
+	if (hw_data->is_nb_dpm_enabled) {
+		if (enable) {
+			PP_DBG_LOG("enable Low Memory PState.\n");
+
+			return smum_send_msg_to_smc_with_parameter(hwmgr->smumgr,
+						PPSMC_MSG_EnableLowMemoryPstate,
+						(lock ? 1 : 0));
+		} else {
+			PP_DBG_LOG("disable Low Memory PState.\n");
+
+			return smum_send_msg_to_smc_with_parameter(hwmgr->smumgr,
+						PPSMC_MSG_DisableLowMemoryPstate,
+						(lock ? 1 : 0));
+		}
+	}
+
+	return 0;
+}
+
+static int cz_tf_update_low_mem_pstate(struct pp_hwmgr *hwmgr,
+					void *input, void *output,
+					void *storage, int result)
+{
+	bool disable_switch;
+	bool enable_low_mem_state;
+	struct cz_hwmgr *hw_data = (struct cz_hwmgr *)(hwmgr->backend);
+	const struct phm_set_power_state_input *states = (struct phm_set_power_state_input *)input;
+	const struct cz_power_state *pnew_state = cast_const_PhwCzPowerState(states->pnew_state);
+
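+	/*
+	 * Keep the NB in the low-memory P-state unless the new power state
+	 * forces (or is cancelling a forced) high state.
+	 */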
+	if (hw_data->sys_info.nb_dpm_enable) {
+		disable_switch = hw_data->cc6_settings.nb_pstate_switch_disable;
+		enable_low_mem_state = !hw_data->cc6_settings.nb_pstate_switch_disable;
+
+		if (pnew_state->action == FORCE_HIGH)
+			cz_nbdpm_pstate_enable_disable(hwmgr, false, disable_switch);
+		else if (pnew_state->action == CANCEL_FORCE_HIGH)
+			cz_nbdpm_pstate_enable_disable(hwmgr, false, disable_switch);
+		else
+			cz_nbdpm_pstate_enable_disable(hwmgr, enable_low_mem_state, disable_switch);
+	}
+	return 0;
+}
+
+static struct phm_master_table_item cz_set_power_state_list[] = {
+	{NULL, cz_tf_update_sclk_limit},
+	{NULL, cz_tf_set_deep_sleep_sclk_threshold},
+	{NULL, cz_tf_set_watermark_threshold},
+	{NULL, cz_tf_set_enabled_levels},
+	{NULL, cz_tf_enable_nb_dpm},
+	{NULL, cz_tf_update_low_mem_pstate},
+	{NULL, NULL}
+};
+
+static struct phm_master_table_header cz_set_power_state_master = {
+	0,
+	PHM_MasterTableFlag_None,
+	cz_set_power_state_list
+};
+
+static struct phm_master_table_item cz_setup_asic_list[] = {
+	{NULL, cz_tf_reset_active_process_mask},
+	{NULL, cz_tf_upload_pptable_to_smu},
+	{NULL, cz_tf_init_sclk_limit},
+	{NULL, cz_tf_init_uvd_limit},
+	{NULL, cz_tf_init_vce_limit},
+	{NULL, cz_tf_init_acp_limit},
+	{NULL, cz_tf_init_power_gate_state},
+	{NULL, cz_tf_init_sclk_threshold},
+	{NULL, NULL}
+};
+
+static struct phm_master_table_header cz_setup_asic_master = {
+	0,
+	PHM_MasterTableFlag_None,
+	cz_setup_asic_list
+};
+
+static int cz_tf_power_up_display_clock_sys_pll(struct pp_hwmgr *hwmgr,
+					void *input, void *output,
+					void *storage, int result)
+{
+	struct cz_hwmgr *hw_data = (struct cz_hwmgr *)(hwmgr->backend);
+
+	hw_data->disp_clk_bypass_pending = false;
+	hw_data->disp_clk_bypass = false;
+
+	return 0;
+}
+
+static int cz_tf_clear_nb_dpm_flag(struct pp_hwmgr *hwmgr,
+					void *input, void *output,
+					void *storage, int result)
+{
+	struct cz_hwmgr *hw_data = (struct cz_hwmgr *)(hwmgr->backend);
+
+	hw_data->is_nb_dpm_enabled = false;
+
+	return 0;
+}
+
+static int cz_tf_reset_cc6_data(struct pp_hwmgr *hwmgr,
+					void *input, void *output,
+					void *storage, int result)
+{
+	struct cz_hwmgr *hw_data = (struct cz_hwmgr *)(hwmgr->backend);
+
+	hw_data->cc6_settings.cc6_setting_changed = false;
+	hw_data->cc6_settings.cpu_pstate_separation_time = 0;
+	hw_data->cc6_settings.cpu_cc6_disable = false;
+	hw_data->cc6_settings.cpu_pstate_disable = false;
+
+	return 0;
+}
+
+static struct phm_master_table_item cz_power_down_asic_list[] = {
+	{NULL, cz_tf_power_up_display_clock_sys_pll},
+	{NULL, cz_tf_clear_nb_dpm_flag},
+	{NULL, cz_tf_reset_cc6_data},
+	{NULL, NULL}
+};
+
+static struct phm_master_table_header cz_power_down_asic_master = {
+	0,
+	PHM_MasterTableFlag_None,
+	cz_power_down_asic_list
+};
+
+static int cz_tf_program_voting_clients(struct pp_hwmgr *hwmgr, void *input,
+				void *output, void *storage, int result)
+{
+	PHMCZ_WRITE_SMC_REGISTER(hwmgr->device, CG_FREQ_TRAN_VOTING_0,
+				PPCZ_VOTINGRIGHTSCLIENTS_DFLT0);
+	return 0;
+}
+
+static int cz_tf_start_dpm(struct pp_hwmgr *hwmgr, void *input, void *output,
+			   void *storage, int result)
+{
+	int res = 0xff;
+	struct cz_hwmgr *cz_hwmgr = (struct cz_hwmgr *)(hwmgr->backend);
+	unsigned long dpm_features = 0;
+
+	cz_hwmgr->dpm_flags |= DPMFlags_SCLK_Enabled;
+	dpm_features |= SCLK_DPM_MASK;
+
+	res = smum_send_msg_to_smc_with_parameter(hwmgr->smumgr,
+				PPSMC_MSG_EnableAllSmuFeatures,
+				dpm_features);
+
+	return res;
+}
+
+static int cz_tf_program_bootup_state(struct pp_hwmgr *hwmgr, void *input,
+				void *output, void *storage, int result)
+{
+	struct cz_hwmgr *cz_hwmgr = (struct cz_hwmgr *)(hwmgr->backend);
+
+	cz_hwmgr->sclk_dpm.soft_min_clk = cz_hwmgr->sys_info.bootup_engine_clock;
+	cz_hwmgr->sclk_dpm.soft_max_clk = cz_hwmgr->sys_info.bootup_engine_clock;
+
+	smum_send_msg_to_smc_with_parameter(hwmgr->smumgr,
+				PPSMC_MSG_SetSclkSoftMin,
+				cz_get_sclk_level(hwmgr,
+				cz_hwmgr->sclk_dpm.soft_min_clk,
+				PPSMC_MSG_SetSclkSoftMin));
+
+	smum_send_msg_to_smc_with_parameter(hwmgr->smumgr,
+				PPSMC_MSG_SetSclkSoftMax,
+				cz_get_sclk_level(hwmgr,
+				cz_hwmgr->sclk_dpm.soft_max_clk,
+				PPSMC_MSG_SetSclkSoftMax));
+
+	return 0;
+}
+
+int cz_tf_reset_acp_boot_level(struct pp_hwmgr *hwmgr, void *input,
+				void *output, void *storage, int result)
+{
+	struct cz_hwmgr *cz_hwmgr = (struct cz_hwmgr *)(hwmgr->backend);
+
+	cz_hwmgr->acp_boot_level = 0xff;
+	return 0;
+}
+
+static bool cz_dpm_check_smu_features(struct pp_hwmgr *hwmgr,
+				unsigned long check_feature)
+{
+	int result;
+	unsigned long features;
+
+	result = smum_send_msg_to_smc_with_parameter(hwmgr->smumgr, PPSMC_MSG_GetFeatureStatus, 0);
+	if (result == 0) {
+		features = smum_get_argument(hwmgr->smumgr);
+		if (features & check_feature)
+			return true;
+	}
+
+	/* a failed query means we cannot confirm the feature; report it as off */
+	return false;
+}
+
+static int cz_tf_check_for_dpm_disabled(struct pp_hwmgr *hwmgr, void *input,
+				void *output, void *storage, int result)
+{
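+	/* skip the rest of the enable sequence if SCLK DPM is already running */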
+	if (cz_dpm_check_smu_features(hwmgr, SMU_EnabledFeatureScoreboard_SclkDpmOn))
+		return PP_Result_TableImmediateExit;
+	return 0;
+}
+
+static int cz_tf_enable_didt(struct pp_hwmgr *hwmgr, void *input,
+				void *output, void *storage, int result)
+{
+	/* TO DO */
+	return 0;
+}
+
+static int cz_tf_check_for_dpm_enabled(struct pp_hwmgr *hwmgr,
+						void *input, void *output,
+						void *storage, int result)
+{
+	if (!cz_dpm_check_smu_features(hwmgr,
+			     SMU_EnabledFeatureScoreboard_SclkDpmOn))
+		return PP_Result_TableImmediateExit;
+	return 0;
+}
+
+static struct phm_master_table_item cz_disable_dpm_list[] = {
+	{NULL, cz_tf_check_for_dpm_enabled},
+	{NULL, NULL},
+};
+
+
+static struct phm_master_table_header cz_disable_dpm_master = {
+	0,
+	PHM_MasterTableFlag_None,
+	cz_disable_dpm_list
+};
+
+static struct phm_master_table_item cz_enable_dpm_list[] = {
+	{NULL, cz_tf_check_for_dpm_disabled},
+	{NULL, cz_tf_program_voting_clients},
+	{NULL, cz_tf_start_dpm},
+	{NULL, cz_tf_program_bootup_state},
+	{NULL, cz_tf_enable_didt},
+	{NULL, cz_tf_reset_acp_boot_level},
+	{NULL, NULL},
+};
+
+static struct phm_master_table_header cz_enable_dpm_master = {
+	0,
+	PHM_MasterTableFlag_None,
+	cz_enable_dpm_list
+};
+
+static int cz_apply_state_adjust_rules(struct pp_hwmgr *hwmgr,
+				struct pp_power_state  *prequest_ps,
+			const struct pp_power_state *pcurrent_ps)
+{
+	struct cz_power_state *cz_ps =
+				cast_PhwCzPowerState(&prequest_ps->hardware);
+
+	const struct cz_power_state *cz_current_ps =
+				cast_const_PhwCzPowerState(&pcurrent_ps->hardware);
+
+	struct cz_hwmgr *cz_hwmgr = (struct cz_hwmgr *)(hwmgr->backend);
+	struct PP_Clocks clocks;
+	bool force_high;
+	unsigned long  num_of_active_displays = 4;
+
+	cz_ps->evclk = hwmgr->vce_arbiter.evclk;
+	cz_ps->ecclk = hwmgr->vce_arbiter.ecclk;
+
+	cz_ps->need_dfs_bypass = true;
+
+	cz_hwmgr->video_start = (hwmgr->uvd_arbiter.vclk != 0 || hwmgr->uvd_arbiter.dclk != 0 ||
+				hwmgr->vce_arbiter.evclk != 0 || hwmgr->vce_arbiter.ecclk != 0);
+
+	cz_hwmgr->battery_state = (PP_StateUILabel_Battery == prequest_ps->classification.ui_label);
+
+	/* to do PECI_GetMinClockSettings(pHwMgr->pPECI, &clocks); */
+	/* PECI_GetNumberOfActiveDisplays(pHwMgr->pPECI, &numOfActiveDisplays); */
+	if (phm_cap_enabled(hwmgr->platform_descriptor.platformCaps, PHM_PlatformCaps_StablePState))
+		clocks.memoryClock = hwmgr->dyn_state.max_clock_voltage_on_ac.mclk;
+	else
+		clocks.memoryClock = 0;
+
+	if (clocks.memoryClock < hwmgr->gfx_arbiter.mclk)
+		clocks.memoryClock = hwmgr->gfx_arbiter.mclk;
+
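+	/*
+	 * Force the high NB P-state when memory clock demand exceeds the last
+	 * NB P-state entry or three or more displays are active (the display
+	 * count query is stubbed to 4 above, so this path is taken).
+	 */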
+	force_high = (clocks.memoryClock > cz_hwmgr->sys_info.nbp_memory_clock[CZ_NUM_NBPMEMORYCLOCK - 1])
+			|| (num_of_active_displays >= 3);
+
+	cz_ps->action = cz_current_ps->action;
+
+	if (!force_high && (cz_ps->action == FORCE_HIGH))
+		cz_ps->action = CANCEL_FORCE_HIGH;
+	else if (force_high && (cz_ps->action != FORCE_HIGH))
+		cz_ps->action = FORCE_HIGH;
+	else
+		cz_ps->action = DO_NOTHING;
+
+	return 0;
+}
+
+static int cz_hwmgr_backend_init(struct pp_hwmgr *hwmgr)
+{
+	int result = 0;
+
+	result = cz_initialize_dpm_defaults(hwmgr);
+	if (result != 0) {
+		printk(KERN_ERR "[ powerplay ] cz_initialize_dpm_defaults failed\n");
+		return result;
+	}
+
+	result = cz_get_system_info_data(hwmgr);
+	if (result != 0) {
+		printk(KERN_ERR "[ powerplay ] cz_get_system_info_data failed\n");
+		return result;
+	}
+
+	cz_construct_boot_state(hwmgr);
+
+	result = phm_construct_table(hwmgr, &cz_setup_asic_master,
+				&(hwmgr->setup_asic));
+	if (result != 0) {
+		printk(KERN_ERR "[ powerplay ] Failed to construct setup ASIC\n");
+		return result;
+	}
+
+	result = phm_construct_table(hwmgr, &cz_power_down_asic_master,
+				&(hwmgr->power_down_asic));
+	if (result != 0) {
+		printk(KERN_ERR "[ powerplay ] Failed to construct power down ASIC\n");
+		return result;
+	}
+
+	result = phm_construct_table(hwmgr, &cz_disable_dpm_master,
+				&(hwmgr->disable_dynamic_state_management));
+	if (result != 0) {
+		printk(KERN_ERR "[ powerplay ] Failed to construct disable_dynamic_state\n");
+		return result;
+	}
+	result = phm_construct_table(hwmgr, &cz_enable_dpm_master,
+				&(hwmgr->enable_dynamic_state_management));
+	if (result != 0) {
+		printk(KERN_ERR "[ powerplay ] Fail to enable_dynamic_state\n");
+		return result;
+	}
+	result = phm_construct_table(hwmgr, &cz_set_power_state_master,
+				&(hwmgr->set_power_state));
+	if (result != 0) {
+		printk(KERN_ERR "[ powerplay ] Fail to construct set_power_state\n");
+		return result;
+	}
+
+	result = phm_construct_table(hwmgr, &cz_phm_enable_clock_power_gatings_master, &(hwmgr->enable_clock_power_gatings));
+	if (result != 0) {
+		printk(KERN_ERR "[ powerplay ] Fail to construct enable_clock_power_gatings\n");
+		return result;
+	}
+	return result;
+}
+
+static int cz_hwmgr_backend_fini(struct pp_hwmgr *hwmgr)
+{
+	if (hwmgr != NULL && hwmgr->backend != NULL) {
+		kfree(hwmgr->backend);
+		kfree(hwmgr);
+	}
+	return 0;
+}
+
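+/*
+ * Force the highest sclk level by raising the SMC soft minimum up to the
+ * soft maximum, leaving the SMU no lower level to pick.
+ */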
+int cz_phm_force_dpm_highest(struct pp_hwmgr *hwmgr)
+{
+	struct cz_hwmgr *cz_hwmgr = (struct cz_hwmgr *)(hwmgr->backend);
+
+	if (cz_hwmgr->sclk_dpm.soft_min_clk !=
+				cz_hwmgr->sclk_dpm.soft_max_clk)
+		smum_send_msg_to_smc_with_parameter(hwmgr->smumgr,
+						PPSMC_MSG_SetSclkSoftMin,
+						cz_get_sclk_level(hwmgr,
+						cz_hwmgr->sclk_dpm.soft_max_clk,
+						PPSMC_MSG_SetSclkSoftMin));
+	return 0;
+}
+
+int cz_phm_unforce_dpm_levels(struct pp_hwmgr *hwmgr)
+{
+	struct cz_hwmgr *cz_hwmgr = (struct cz_hwmgr *)(hwmgr->backend);
+	struct phm_clock_voltage_dependency_table *table =
+				hwmgr->dyn_state.vddc_dependency_on_sclk;
+	unsigned long clock = 0, level;
+
+	if (table == NULL || table->count <= 0)
+		return -EINVAL;
+
+	cz_hwmgr->sclk_dpm.soft_min_clk = table->entries[0].clk;
+	cz_hwmgr->sclk_dpm.hard_min_clk = table->entries[0].clk;
+
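+	/*
+	 * cz_get_max_sclk_level() is assumed to return at least 1 here;
+	 * level is unsigned, so a zero return would wrap and the range
+	 * check below would fall back to the last table entry.
+	 */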
+	level = cz_get_max_sclk_level(hwmgr) - 1;
+
+	if (level < table->count)
+		clock = table->entries[level].clk;
+	else
+		clock = table->entries[table->count - 1].clk;
+
+	cz_hwmgr->sclk_dpm.soft_max_clk = clock;
+	cz_hwmgr->sclk_dpm.hard_max_clk = clock;
+
+	smum_send_msg_to_smc_with_parameter(hwmgr->smumgr,
+				PPSMC_MSG_SetSclkSoftMin,
+				cz_get_sclk_level(hwmgr,
+				cz_hwmgr->sclk_dpm.soft_min_clk,
+				PPSMC_MSG_SetSclkSoftMin));
+
+	smum_send_msg_to_smc_with_parameter(hwmgr->smumgr,
+				PPSMC_MSG_SetSclkSoftMax,
+				cz_get_sclk_level(hwmgr,
+				cz_hwmgr->sclk_dpm.soft_max_clk,
+				PPSMC_MSG_SetSclkSoftMax));
+
+	return 0;
+}
+
+int cz_phm_force_dpm_lowest(struct pp_hwmgr *hwmgr)
+{
+	struct cz_hwmgr *cz_hwmgr = (struct cz_hwmgr *)(hwmgr->backend);
+
+	if (cz_hwmgr->sclk_dpm.soft_min_clk !=
+				cz_hwmgr->sclk_dpm.soft_max_clk) {
+		cz_hwmgr->sclk_dpm.soft_max_clk =
+			cz_hwmgr->sclk_dpm.soft_min_clk;
+
+		smum_send_msg_to_smc_with_parameter(hwmgr->smumgr,
+				PPSMC_MSG_SetSclkSoftMax,
+				cz_get_sclk_level(hwmgr,
+				cz_hwmgr->sclk_dpm.soft_max_clk,
+				PPSMC_MSG_SetSclkSoftMax));
+	}
+
+	return 0;
+}
+
+static int cz_dpm_force_dpm_level(struct pp_hwmgr *hwmgr,
+				enum amd_dpm_forced_level level)
+{
+	int ret = 0;
+
+	switch (level) {
+	case AMD_DPM_FORCED_LEVEL_HIGH:
+		ret = cz_phm_force_dpm_highest(hwmgr);
+		if (ret)
+			return ret;
+		break;
+	case AMD_DPM_FORCED_LEVEL_LOW:
+		ret = cz_phm_force_dpm_lowest(hwmgr);
+		if (ret)
+			return ret;
+		break;
+	case AMD_DPM_FORCED_LEVEL_AUTO:
+		ret = cz_phm_unforce_dpm_levels(hwmgr);
+		if (ret)
+			return ret;
+		break;
+	default:
+		break;
+	}
+
+	hwmgr->dpm_level = level;
+
+	return ret;
+}
+
+int cz_dpm_powerdown_uvd(struct pp_hwmgr *hwmgr)
+{
+	if (phm_cap_enabled(hwmgr->platform_descriptor.platformCaps,
+					 PHM_PlatformCaps_UVDPowerGating))
+		return smum_send_msg_to_smc(hwmgr->smumgr,
+						     PPSMC_MSG_UVDPowerOFF);
+	return 0;
+}
+
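+/*
+ * Power UVD back up; the message argument selects whether UVD comes up
+ * with dynamic power gating enabled (1) or fully ungated (0).
+ */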
+int cz_dpm_powerup_uvd(struct pp_hwmgr *hwmgr)
+{
+	if (phm_cap_enabled(hwmgr->platform_descriptor.platformCaps,
+					 PHM_PlatformCaps_UVDPowerGating)) {
+		if (phm_cap_enabled(hwmgr->platform_descriptor.platformCaps,
+				  PHM_PlatformCaps_UVDDynamicPowerGating)) {
+			return smum_send_msg_to_smc_with_parameter(
+								hwmgr->smumgr,
+						   PPSMC_MSG_UVDPowerON, 1);
+		} else {
+			return smum_send_msg_to_smc_with_parameter(
+								hwmgr->smumgr,
+						   PPSMC_MSG_UVDPowerON, 0);
+		}
+	}
+
+	return 0;
+}
+
+int cz_dpm_update_uvd_dpm(struct pp_hwmgr *hwmgr, bool bgate)
+{
+	struct cz_hwmgr *cz_hwmgr = (struct cz_hwmgr *)(hwmgr->backend);
+	struct phm_uvd_clock_voltage_dependency_table *ptable =
+		hwmgr->dyn_state.uvd_clock_voltage_dependency_table;
+
+	if (!bgate) {
+		/* Stable Pstate is enabled and we need to set the UVD DPM to highest level */
+		if (phm_cap_enabled(hwmgr->platform_descriptor.platformCaps,
+					PHM_PlatformCaps_StablePState)) {
+			cz_hwmgr->uvd_dpm.hard_min_clk =
+				ptable->entries[ptable->count - 1].vclk;
+
+			smum_send_msg_to_smc_with_parameter(hwmgr->smumgr,
+					PPSMC_MSG_SetUvdHardMin,
+					cz_get_uvd_level(hwmgr,
+						cz_hwmgr->uvd_dpm.hard_min_clk,
+						PPSMC_MSG_SetUvdHardMin));
+		}
+		cz_enable_disable_uvd_dpm(hwmgr, true);
+	} else
+		cz_enable_disable_uvd_dpm(hwmgr, false);
+
+	return 0;
+}
+
+int cz_dpm_update_vce_dpm(struct pp_hwmgr *hwmgr)
+{
+	struct cz_hwmgr *cz_hwmgr = (struct cz_hwmgr *)(hwmgr->backend);
+	struct phm_vce_clock_voltage_dependency_table *ptable =
+		hwmgr->dyn_state.vce_clock_voltage_dependency_table;
+
+	/* Stable Pstate is enabled and we need to set the VCE DPM to highest level */
+	if (phm_cap_enabled(hwmgr->platform_descriptor.platformCaps,
+					PHM_PlatformCaps_StablePState))
+		cz_hwmgr->vce_dpm.hard_min_clk =
+				ptable->entries[ptable->count - 1].ecclk;
+	else
+		/* EPR# 419220 - HW limitation */
+		cz_hwmgr->vce_dpm.hard_min_clk = hwmgr->vce_arbiter.ecclk;
+
+	smum_send_msg_to_smc_with_parameter(hwmgr->smumgr,
+				PPSMC_MSG_SetEclkHardMin,
+				cz_get_eclk_level(hwmgr,
+					cz_hwmgr->vce_dpm.hard_min_clk,
+					PPSMC_MSG_SetEclkHardMin));
+
+	return 0;
+}
+
+int cz_dpm_powerdown_vce(struct pp_hwmgr *hwmgr)
+{
+	if (phm_cap_enabled(hwmgr->platform_descriptor.platformCaps,
+					 PHM_PlatformCaps_VCEPowerGating))
+		return smum_send_msg_to_smc(hwmgr->smumgr,
+						     PPSMC_MSG_VCEPowerOFF);
+	return 0;
+}
+
+int cz_dpm_powerup_vce(struct pp_hwmgr *hwmgr)
+{
+	if (phm_cap_enabled(hwmgr->platform_descriptor.platformCaps,
+					 PHM_PlatformCaps_VCEPowerGating))
+		return smum_send_msg_to_smc(hwmgr->smumgr,
+						     PPSMC_MSG_VCEPowerON);
+	return 0;
+}
+
+static int cz_dpm_get_mclk(struct pp_hwmgr *hwmgr, bool low)
+{
+	struct cz_hwmgr *cz_hwmgr = (struct cz_hwmgr *)(hwmgr->backend);
+
+	return cz_hwmgr->sys_info.bootup_uma_clock;
+}
+
+static int cz_dpm_get_sclk(struct pp_hwmgr *hwmgr, bool low)
+{
+	struct pp_power_state  *ps;
+	struct cz_power_state  *cz_ps;
+
+	if (hwmgr == NULL)
+		return -EINVAL;
+
+	ps = hwmgr->request_ps;
+
+	if (ps == NULL)
+		return -EINVAL;
+
+	cz_ps = cast_PhwCzPowerState(&ps->hardware);
+
+	if (low)
+		return cz_ps->levels[0].engineClock;
+	else
+		return cz_ps->levels[cz_ps->level - 1].engineClock;
+}
+
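+/* Collapse the boot state to a single level seeded from the stored boot power level */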
+static int cz_dpm_patch_boot_state(struct pp_hwmgr *hwmgr,
+					struct pp_hw_power_state *hw_ps)
+{
+	struct cz_hwmgr *cz_hwmgr = (struct cz_hwmgr *)(hwmgr->backend);
+	struct cz_power_state *cz_ps = cast_PhwCzPowerState(hw_ps);
+
+	cz_ps->level = 1;
+	cz_ps->nbps_flags = 0;
+	cz_ps->bapm_flags = 0;
+	cz_ps->levels[0] = cz_hwmgr->boot_power_level;
+
+	return 0;
+}
+
+static int cz_dpm_get_pp_table_entry_callback(struct pp_hwmgr *hwmgr,
+					struct pp_hw_power_state *hw_ps,
+					unsigned int index,
+					const void *clock_info)
+{
+	struct cz_power_state *cz_ps = cast_PhwCzPowerState(hw_ps);
+
+	const ATOM_PPLIB_CZ_CLOCK_INFO *cz_clock_info = clock_info;
+
+	struct phm_clock_voltage_dependency_table *table =
+				    hwmgr->dyn_state.vddc_dependency_on_sclk;
+	uint8_t clock_info_index = cz_clock_info->index;
+
+	if (clock_info_index > (uint8_t)(hwmgr->platform_descriptor.hardwareActivityPerformanceLevels - 1))
+		clock_info_index = (uint8_t)(hwmgr->platform_descriptor.hardwareActivityPerformanceLevels - 1);
+
+	cz_ps->levels[index].engineClock = table->entries[clock_info_index].clk;
+	cz_ps->levels[index].vddcIndex = (uint8_t)table->entries[clock_info_index].v;
+
+	cz_ps->level = index + 1;
+
+	if (phm_cap_enabled(hwmgr->platform_descriptor.platformCaps, PHM_PlatformCaps_SclkDeepSleep)) {
+		cz_ps->levels[index].dsDividerIndex = 5;
+		cz_ps->levels[index].ssDividerIndex = 5;
+	}
+
+	return 0;
+}
+
+static int cz_dpm_get_num_of_pp_table_entries(struct pp_hwmgr *hwmgr)
+{
+	int result;
+	unsigned long ret = 0;
+
+	result = pp_tables_get_num_of_entries(hwmgr, &ret);
+
+	return result ? 0 : ret;
+}
+
+static int cz_dpm_get_pp_table_entry(struct pp_hwmgr *hwmgr,
+		    unsigned long entry, struct pp_power_state *ps)
+{
+	int result;
+	struct cz_power_state *cz_ps;
+
+	ps->hardware.magic = PhwCz_Magic;
+
+	cz_ps = cast_PhwCzPowerState(&(ps->hardware));
+
+	result = pp_tables_get_entry(hwmgr, entry, ps,
+			cz_dpm_get_pp_table_entry_callback);
+
+	cz_ps->uvd_clocks.vclk = ps->uvd_clocks.VCLK;
+	cz_ps->uvd_clocks.dclk = ps->uvd_clocks.DCLK;
+
+	return result;
+}
+
+int cz_get_power_state_size(struct pp_hwmgr *hwmgr)
+{
+	return sizeof(struct cz_power_state);
+}
+
+static void
+cz_print_current_perforce_level(struct pp_hwmgr *hwmgr, struct seq_file *m)
+{
+	struct cz_hwmgr *cz_hwmgr = (struct cz_hwmgr *)(hwmgr->backend);
+
+	struct phm_clock_voltage_dependency_table *table =
+				hwmgr->dyn_state.vddc_dependency_on_sclk;
+
+	struct phm_vce_clock_voltage_dependency_table *vce_table =
+		hwmgr->dyn_state.vce_clock_voltage_dependency_table;
+
+	struct phm_uvd_clock_voltage_dependency_table *uvd_table =
+		hwmgr->dyn_state.uvd_clock_voltage_dependency_table;
+
+	uint32_t sclk_index = PHM_GET_FIELD(cgs_read_ind_register(hwmgr->device, CGS_IND_REG__SMC, ixTARGET_AND_CURRENT_PROFILE_INDEX),
+					TARGET_AND_CURRENT_PROFILE_INDEX, CURR_SCLK_INDEX);
+	uint32_t uvd_index = PHM_GET_FIELD(cgs_read_ind_register(hwmgr->device, CGS_IND_REG__SMC, ixTARGET_AND_CURRENT_PROFILE_INDEX_2),
+					TARGET_AND_CURRENT_PROFILE_INDEX_2, CURR_UVD_INDEX);
+	uint32_t vce_index = PHM_GET_FIELD(cgs_read_ind_register(hwmgr->device, CGS_IND_REG__SMC, ixTARGET_AND_CURRENT_PROFILE_INDEX_2),
+					TARGET_AND_CURRENT_PROFILE_INDEX_2, CURR_VCE_INDEX);
+
+	uint32_t sclk, vclk, dclk, ecclk, tmp, activity_percent;
+	uint16_t vddnb, vddgfx;
+	int result;
+
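+	/* Clocks in the dependency tables are in 10 kHz units, so /100 yields MHz */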
+	if (sclk_index >= NUM_SCLK_LEVELS) {
+		seq_printf(m, "\n invalid sclk dpm profile %d\n", sclk_index);
+	} else {
+		sclk = table->entries[sclk_index].clk;
+		seq_printf(m, "\n index: %u sclk: %u MHz\n", sclk_index, sclk/100);
+	}
+
+	tmp = (cgs_read_ind_register(hwmgr->device, CGS_IND_REG__SMC, ixSMUSVI_NB_CURRENTVID) &
+		CURRENT_NB_VID_MASK) >> CURRENT_NB_VID__SHIFT;
+	vddnb = cz_convert_8Bit_index_to_voltage(hwmgr, tmp);
+	tmp = (cgs_read_ind_register(hwmgr->device, CGS_IND_REG__SMC, ixSMUSVI_GFX_CURRENTVID) &
+		CURRENT_GFX_VID_MASK) >> CURRENT_GFX_VID__SHIFT;
+	vddgfx = cz_convert_8Bit_index_to_voltage(hwmgr, (u16)tmp);
+	seq_printf(m, "\n vddnb: %u vddgfx: %u\n", vddnb, vddgfx);
+
+	seq_printf(m, "\n uvd    %sabled\n", cz_hwmgr->uvd_power_gated ? "dis" : "en");
+	if (!cz_hwmgr->uvd_power_gated) {
+		if (uvd_index >= CZ_MAX_HARDWARE_POWERLEVELS) {
+			seq_printf(m, "\n invalid uvd dpm level %d\n", uvd_index);
+		} else {
+			vclk = uvd_table->entries[uvd_index].vclk;
+			dclk = uvd_table->entries[uvd_index].dclk;
+			seq_printf(m, "\n index: %u uvd vclk: %u MHz dclk: %u MHz\n", uvd_index, vclk/100, dclk/100);
+		}
+	}
+
+	seq_printf(m, "\n vce    %sabled\n", cz_hwmgr->vce_power_gated ? "dis" : "en");
+	if (!cz_hwmgr->vce_power_gated) {
+		if (vce_index >= CZ_MAX_HARDWARE_POWERLEVELS) {
+			seq_printf(m, "\n invalid vce dpm level %d\n", vce_index);
+		} else {
+			ecclk = vce_table->entries[vce_index].ecclk;
+			seq_printf(m, "\n index: %u vce ecclk: %u MHz\n", vce_index, ecclk/100);
+		}
+	}
+
+	result = smum_send_msg_to_smc(hwmgr->smumgr, PPSMC_MSG_GetAverageGraphicsActivity);
+	if (result == 0) {
+		activity_percent = cgs_read_register(hwmgr->device, mmSMU_MP1_SRBM2P_ARG_0);
+		activity_percent = activity_percent > 100 ? 100 : activity_percent;
+	} else {
+		activity_percent = 50;
+	}
+
+	seq_printf(m, "\n [GPU load]: %u %%\n\n", activity_percent);
+}
+
+static void cz_hw_print_display_cfg(
+	const struct cc6_settings *cc6_settings)
+{
+	PP_DBG_LOG("New Display Configuration:\n");
+
+	PP_DBG_LOG("   cpu_cc6_disable: %d\n",
+			cc6_settings->cpu_cc6_disable);
+	PP_DBG_LOG("   cpu_pstate_disable: %d\n",
+			cc6_settings->cpu_pstate_disable);
+	PP_DBG_LOG("   nb_pstate_switch_disable: %d\n",
+			cc6_settings->nb_pstate_switch_disable);
+	PP_DBG_LOG("   cpu_pstate_separation_time: %d\n\n",
+			cc6_settings->cpu_pstate_separation_time);
+}
+
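+/*
+ * Push a changed CC6/pstate display configuration to the SMU as one packed
+ * word: separation time in the low field plus the CC6 and pstate disable bits.
+ */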
+static int cz_set_cpu_power_state(struct pp_hwmgr *hwmgr)
+{
+	struct cz_hwmgr *hw_data = (struct cz_hwmgr *)(hwmgr->backend);
+	uint32_t data = 0;
+
+	if (hw_data->cc6_settings.cc6_setting_changed) {
+
+		hw_data->cc6_settings.cc6_setting_changed = false;
+
+		cz_hw_print_display_cfg(&hw_data->cc6_settings);
+
+		data |= (hw_data->cc6_settings.cpu_pstate_separation_time
+			& PWRMGT_SEPARATION_TIME_MASK)
+			<< PWRMGT_SEPARATION_TIME_SHIFT;
+
+		data |= (hw_data->cc6_settings.cpu_cc6_disable ? 0x1 : 0x0)
+			<< PWRMGT_DISABLE_CPU_CSTATES_SHIFT;
+
+		data |= (hw_data->cc6_settings.cpu_pstate_disable ? 0x1 : 0x0)
+			<< PWRMGT_DISABLE_CPU_PSTATES_SHIFT;
+
+		PP_DBG_LOG("SetDisplaySizePowerParams data: 0x%X\n",
+			data);
+
+		smum_send_msg_to_smc_with_parameter(hwmgr->smumgr,
+						PPSMC_MSG_SetDisplaySizePowerParams,
+						data);
+	}
+
+	return 0;
+}
+
+
+static int cz_store_cc6_data(struct pp_hwmgr *hwmgr, uint32_t separation_time,
+			bool cc6_disable, bool pstate_disable, bool pstate_switch_disable)
+{
+	struct cz_hwmgr *hw_data = (struct cz_hwmgr *)(hwmgr->backend);
+
+	if (separation_time !=
+		hw_data->cc6_settings.cpu_pstate_separation_time
+		|| cc6_disable !=
+		hw_data->cc6_settings.cpu_cc6_disable
+		|| pstate_disable !=
+		hw_data->cc6_settings.cpu_pstate_disable
+		|| pstate_switch_disable !=
+		hw_data->cc6_settings.nb_pstate_switch_disable) {
+
+		hw_data->cc6_settings.cc6_setting_changed = true;
+
+		hw_data->cc6_settings.cpu_pstate_separation_time =
+			separation_time;
+		hw_data->cc6_settings.cpu_cc6_disable =
+			cc6_disable;
+		hw_data->cc6_settings.cpu_pstate_disable =
+			pstate_disable;
+		hw_data->cc6_settings.nb_pstate_switch_disable =
+			pstate_switch_disable;
+
+	}
+
+	return 0;
+}
+
+static int cz_get_dal_power_level(struct pp_hwmgr *hwmgr,
+		struct amd_pp_dal_clock_info *info)
+{
+	uint32_t i;
+	const struct phm_clock_voltage_dependency_table *table =
+			hwmgr->dyn_state.vddc_dep_on_dal_pwrl;
+	const struct phm_clock_and_voltage_limits *limits =
+			&hwmgr->dyn_state.max_clock_voltage_on_ac;
+
+	info->engine_max_clock = limits->sclk;
+	info->memory_max_clock = limits->mclk;
+
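+	/*
+	 * Walk the DAL dependency table from the highest entry down and pick
+	 * the first level whose voltage fits under the AC limit; note that
+	 * entry 0 is never examined, so -EINVAL is returned if nothing fits.
+	 */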
+	for (i = table->count - 1; i > 0; i--) {
+		if (limits->vddc >= table->entries[i].v) {
+			info->level = table->entries[i].clk;
+			return 0;
+		}
+	}
+	return -EINVAL;
+}
+
+static const struct pp_hwmgr_func cz_hwmgr_funcs = {
+	.backend_init = cz_hwmgr_backend_init,
+	.backend_fini = cz_hwmgr_backend_fini,
+	.asic_setup = NULL,
+	.apply_state_adjust_rules = cz_apply_state_adjust_rules,
+	.force_dpm_level = cz_dpm_force_dpm_level,
+	.get_power_state_size = cz_get_power_state_size,
+	.powerdown_uvd = cz_dpm_powerdown_uvd,
+	.powergate_uvd = cz_dpm_powergate_uvd,
+	.powergate_vce = cz_dpm_powergate_vce,
+	.get_mclk = cz_dpm_get_mclk,
+	.get_sclk = cz_dpm_get_sclk,
+	.patch_boot_state = cz_dpm_patch_boot_state,
+	.get_pp_table_entry = cz_dpm_get_pp_table_entry,
+	.get_num_of_pp_table_entries = cz_dpm_get_num_of_pp_table_entries,
+	.print_current_perforce_level = cz_print_current_perforce_level,
+	.set_cpu_power_state = cz_set_cpu_power_state,
+	.store_cc6_data = cz_store_cc6_data,
+	.get_dal_power_level = cz_get_dal_power_level,
+};
+
+int cz_hwmgr_init(struct pp_hwmgr *hwmgr)
+{
+	struct cz_hwmgr *cz_hwmgr;
+	int ret = 0;
+
+	cz_hwmgr = kzalloc(sizeof(struct cz_hwmgr), GFP_KERNEL);
+	if (cz_hwmgr == NULL)
+		return -ENOMEM;
+
+	hwmgr->backend = cz_hwmgr;
+	hwmgr->hwmgr_func = &cz_hwmgr_funcs;
+	hwmgr->pptable_func = &pptable_funcs;
+	return ret;
+}

+ 326 - 0
drivers/gpu/drm/amd/powerplay/hwmgr/cz_hwmgr.h

@@ -0,0 +1,326 @@
+/*
+ * Copyright 2015 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#ifndef _CZ_HWMGR_H_
+#define _CZ_HWMGR_H_
+
+#include "cgs_common.h"
+#include "ppatomctrl.h"
+
+#define CZ_NUM_NBPSTATES               4
+#define CZ_NUM_NBPMEMORYCLOCK          2
+#define MAX_DISPLAY_CLOCK_LEVEL        8
+#define CZ_AT_DFLT                     30
+#define CZ_MAX_HARDWARE_POWERLEVELS    8
+#define PPCZ_VOTINGRIGHTSCLIENTS_DFLT0   0x3FFFC102
+#define CZ_MIN_DEEP_SLEEP_SCLK         800
+
+/* Carrizo device IDs */
+#define DEVICE_ID_CZ_9870             0x9870
+#define DEVICE_ID_CZ_9874             0x9874
+#define DEVICE_ID_CZ_9875             0x9875
+#define DEVICE_ID_CZ_9876             0x9876
+#define DEVICE_ID_CZ_9877             0x9877
+
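+/*
+ * Write an SMC register through the indirect interface; ix##reg token-pastes
+ * the register name into its indirect-index constant.
+ */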
+#define PHMCZ_WRITE_SMC_REGISTER(device, reg, value)                            \
+		cgs_write_ind_register(device, CGS_IND_REG__SMC, ix##reg, value)
+
+struct cz_dpm_entry {
+	uint32_t soft_min_clk;
+	uint32_t hard_min_clk;
+	uint32_t soft_max_clk;
+	uint32_t hard_max_clk;
+};
+
+struct cz_sys_info {
+	uint32_t bootup_uma_clock;
+	uint32_t bootup_engine_clock;
+	uint32_t dentist_vco_freq;
+	uint32_t nb_dpm_enable;
+	uint32_t nbp_memory_clock[CZ_NUM_NBPMEMORYCLOCK];
+	uint32_t nbp_n_clock[CZ_NUM_NBPSTATES];
+	uint16_t nbp_voltage_index[CZ_NUM_NBPSTATES];
+	uint32_t display_clock[MAX_DISPLAY_CLOCK_LEVEL];
+	uint16_t bootup_nb_voltage_index;
+	uint8_t htc_tmp_lmt;
+	uint8_t htc_hyst_lmt;
+	uint32_t system_config;
+	uint32_t uma_channel_number;
+};
+
+#define MAX_DISPLAYPHY_IDS			0x8
+#define DISPLAYPHY_LANEMASK			0xF
+#define UNKNOWN_TRANSMITTER_PHY_ID		(-1)
+
+#define DISPLAYPHY_PHYID_SHIFT			24
+#define DISPLAYPHY_LANESELECT_SHIFT		16
+
+#define DISPLAYPHY_RX_SELECT			0x1
+#define DISPLAYPHY_TX_SELECT			0x2
+#define DISPLAYPHY_CORE_SELECT			0x4
+
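+/*
+ * Pack a display-PHY power-gating request: PHY id in bits 31:24, lane mask
+ * in bits 23:16, and the rx/tx/core select flags in the low bits.
+ */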
+#define DDI_POWERGATING_ARG(phyID, lanemask, rx, tx, core) \
+		(((uint32_t)(phyID))<<DISPLAYPHY_PHYID_SHIFT | \
+		((uint32_t)(lanemask))<<DISPLAYPHY_LANESELECT_SHIFT | \
+		((rx) ? DISPLAYPHY_RX_SELECT : 0) | \
+		((tx) ? DISPLAYPHY_TX_SELECT : 0) | \
+		((core) ? DISPLAYPHY_CORE_SELECT : 0))
+
+struct cz_display_phy_info_entry {
+	uint8_t phy_present;
+	uint8_t active_lane_mapping;
+	uint8_t display_config_type;
+	uint8_t active_number_of_lanes;
+};
+
+#define CZ_MAX_DISPLAYPHY_IDS			10
+
+struct cz_display_phy_info {
+	bool display_phy_access_initialized;
+	struct cz_display_phy_info_entry entries[CZ_MAX_DISPLAYPHY_IDS];
+};
+
+struct cz_power_level {
+	uint32_t engineClock;
+	uint8_t vddcIndex;
+	uint8_t dsDividerIndex;
+	uint8_t ssDividerIndex;
+	uint8_t allowGnbSlow;
+	uint8_t forceNBPstate;
+	uint8_t display_wm;
+	uint8_t vce_wm;
+	uint8_t numSIMDToPowerDown;
+	uint8_t hysteresis_up;
+	uint8_t rsv[3];
+};
+
+struct cz_uvd_clocks {
+	uint32_t vclk;
+	uint32_t dclk;
+	uint32_t vclk_low_divider;
+	uint32_t vclk_high_divider;
+	uint32_t dclk_low_divider;
+	uint32_t dclk_high_divider;
+};
+
+enum cz_pstate_previous_action {
+	DO_NOTHING = 1,
+	FORCE_HIGH,
+	CANCEL_FORCE_HIGH
+};
+
+struct pp_disable_nb_ps_flags {
+	union {
+		struct {
+			uint32_t entry : 1;
+			uint32_t display : 1;
+			uint32_t driver: 1;
+			uint32_t vce : 1;
+			uint32_t uvd : 1;
+			uint32_t acp : 1;
+			uint32_t reserved: 26;
+		} bits;
+		uint32_t u32All;
+	};
+};
+
+struct cz_power_state {
+	unsigned int magic;
+	uint32_t level;
+	struct cz_uvd_clocks uvd_clocks;
+	uint32_t evclk;
+	uint32_t ecclk;
+	uint32_t samclk;
+	uint32_t acpclk;
+	bool need_dfs_bypass;
+	uint32_t nbps_flags;
+	uint32_t bapm_flags;
+	uint8_t dpm_0_pg_nb_ps_low;
+	uint8_t dpm_0_pg_nb_ps_high;
+	uint8_t dpm_x_nb_ps_low;
+	uint8_t dpm_x_nb_ps_high;
+	enum cz_pstate_previous_action action;
+	struct cz_power_level levels[CZ_MAX_HARDWARE_POWERLEVELS];
+	struct pp_disable_nb_ps_flags disable_nb_ps_flag;
+};
+
+#define DPMFlags_SCLK_Enabled			0x00000001
+#define DPMFlags_UVD_Enabled			0x00000002
+#define DPMFlags_VCE_Enabled			0x00000004
+#define DPMFlags_ACP_Enabled			0x00000008
+#define DPMFlags_ForceHighestValid		0x40000000
+#define DPMFlags_Debug				0x80000000
+
+#define SMU_EnabledFeatureScoreboard_AcpDpmOn	0x00000001 /* bit 0 */
+#define SMU_EnabledFeatureScoreboard_SclkDpmOn	0x00200000 /* bit 21 */
+#define SMU_EnabledFeatureScoreboard_UvdDpmOn	0x00800000 /* bit 23 */
+#define SMU_EnabledFeatureScoreboard_VceDpmOn	0x01000000 /* bit 24 */
+
+struct cc6_settings {
+	bool cc6_setting_changed;
+	bool nb_pstate_switch_disable; /* controls NB PState switch */
+	bool cpu_cc6_disable; /* controls CPU CState switch (on or off) */
+	bool cpu_pstate_disable;
+	uint32_t cpu_pstate_separation_time;
+};
+
+struct cz_hwmgr {
+	uint32_t activity_target[CZ_MAX_HARDWARE_POWERLEVELS];
+	uint32_t dpm_interval;
+
+	uint32_t voltage_drop_threshold;
+
+	uint32_t voting_rights_clients;
+
+	uint32_t disable_driver_thermal_policy;
+
+	uint32_t static_screen_threshold;
+
+	uint32_t gfx_power_gating_threshold;
+
+	uint32_t activity_hysteresis;
+	uint32_t bootup_sclk_divider;
+	uint32_t gfx_ramp_step;
+	uint32_t gfx_ramp_delay; /* in micro-seconds */
+
+	uint32_t thermal_auto_throttling_treshold;
+
+	struct cz_sys_info sys_info;
+
+	struct cz_power_level boot_power_level;
+	struct cz_power_state *cz_current_ps;
+	struct cz_power_state *cz_requested_ps;
+
+	uint32_t mgcg_cgtt_local0;
+	uint32_t mgcg_cgtt_local1;
+
+	uint32_t tdr_clock; /* in 10khz unit */
+
+	uint32_t ddi_power_gating_disabled;
+	uint32_t disable_gfx_power_gating_in_uvd;
+	uint32_t disable_nb_ps3_in_battery;
+
+	uint32_t lock_nb_ps_in_uvd_play_back;
+
+	struct cz_display_phy_info display_phy_info;
+	uint32_t vce_slow_sclk_threshold; /* default 200mhz */
+	uint32_t dce_slow_sclk_threshold; /* default 300mhz */
+	uint32_t min_sclk_did;  /* minimum sclk divider */
+
+	bool disp_clk_bypass;
+	bool disp_clk_bypass_pending;
+	uint32_t bapm_enabled;
+	uint32_t clock_slow_down_freq;
+	uint32_t skip_clock_slow_down;
+	uint32_t enable_nb_ps_policy;
+	uint32_t voltage_drop_in_dce_power_gating;
+	uint32_t uvd_dpm_interval;
+	uint32_t override_dynamic_mgpg;
+	uint32_t lclk_deep_enabled;
+
+	uint32_t uvd_performance;
+
+	bool video_start;
+	bool battery_state;
+	uint32_t lowest_valid;
+	uint32_t highest_valid;
+	uint32_t high_voltage_threshold;
+	uint32_t is_nb_dpm_enabled;
+	struct cc6_settings cc6_settings;
+	uint32_t is_voltage_island_enabled;
+
+	bool pgacpinit;
+
+	uint8_t disp_config;
+
+	/* PowerTune */
+	uint32_t power_containment_features;
+	bool cac_enabled;
+	bool disable_uvd_power_tune_feature;
+	bool enable_ba_pm_feature;
+	bool enable_tdc_limit_feature;
+
+	uint32_t sram_end;
+	uint32_t dpm_table_start;
+	uint32_t soft_regs_start;
+
+	uint8_t uvd_level_count;
+	uint8_t vce_level_count;
+
+	uint8_t acp_level_count;
+	uint8_t samu_level_count;
+	uint32_t fps_high_threshold;
+	uint32_t fps_low_threshold;
+
+	uint32_t dpm_flags;
+	struct cz_dpm_entry sclk_dpm;
+	struct cz_dpm_entry uvd_dpm;
+	struct cz_dpm_entry vce_dpm;
+	struct cz_dpm_entry acp_dpm;
+
+	uint8_t uvd_boot_level;
+	uint8_t vce_boot_level;
+	uint8_t acp_boot_level;
+	uint8_t samu_boot_level;
+	uint8_t uvd_interval;
+	uint8_t vce_interval;
+	uint8_t acp_interval;
+	uint8_t samu_interval;
+
+	uint8_t graphics_interval;
+	uint8_t graphics_therm_throttle_enable;
+	uint8_t graphics_voltage_change_enable;
+
+	uint8_t graphics_clk_slow_enable;
+	uint8_t graphics_clk_slow_divider;
+
+	uint32_t display_cac;
+	uint32_t low_sclk_interrupt_threshold;
+
+	uint32_t dram_log_addr_h;
+	uint32_t dram_log_addr_l;
+	uint32_t dram_log_phy_addr_h;
+	uint32_t dram_log_phy_addr_l;
+	uint32_t dram_log_buff_size;
+
+	bool uvd_power_gated;
+	bool vce_power_gated;
+	bool samu_power_gated;
+	bool acp_power_gated;
+	bool acp_power_up_no_dsp;
+	uint32_t active_process_mask;
+
+	uint32_t max_sclk_level;
+	uint32_t num_of_clk_entries;
+};
+
+struct pp_hwmgr;
+
+int cz_hwmgr_init(struct pp_hwmgr *hwmgr);
+int cz_dpm_powerdown_uvd(struct pp_hwmgr *hwmgr);
+int cz_dpm_powerup_uvd(struct pp_hwmgr *hwmgr);
+int cz_dpm_powerdown_vce(struct pp_hwmgr *hwmgr);
+int cz_dpm_powerup_vce(struct pp_hwmgr *hwmgr);
+int cz_dpm_update_uvd_dpm(struct pp_hwmgr *hwmgr, bool bgate);
+int cz_dpm_update_vce_dpm(struct pp_hwmgr *hwmgr);
+#endif /* _CZ_HWMGR_H_ */

+ 114 - 0
drivers/gpu/drm/amd/powerplay/hwmgr/fiji_clockpowergating.c

@@ -0,0 +1,114 @@
+/*
+ * Copyright 2015 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#include "hwmgr.h"
+#include "fiji_clockpowergating.h"
+#include "fiji_ppsmc.h"
+#include "fiji_hwmgr.h"
+
+int fiji_phm_disable_clock_power_gating(struct pp_hwmgr *hwmgr)
+{
+	struct fiji_hwmgr *data = (struct fiji_hwmgr *)(hwmgr->backend);
+
+	data->uvd_power_gated = false;
+	data->vce_power_gated = false;
+	data->samu_power_gated = false;
+	data->acp_power_gated = false;
+
+	return 0;
+}
+
+int fiji_phm_powergate_uvd(struct pp_hwmgr *hwmgr, bool bgate)
+{
+	struct fiji_hwmgr *data = (struct fiji_hwmgr *)(hwmgr->backend);
+
+	if (data->uvd_power_gated == bgate)
+		return 0;
+
+	data->uvd_power_gated = bgate;
+
+	fiji_update_uvd_dpm(hwmgr, bgate);
+
+	return 0;
+}
+
+int fiji_phm_powergate_vce(struct pp_hwmgr *hwmgr, bool bgate)
+{
+	struct fiji_hwmgr *data = (struct fiji_hwmgr *)(hwmgr->backend);
+	struct phm_set_power_state_input states;
+	const struct pp_power_state  *pcurrent;
+	struct pp_power_state  *requested;
+
+	if (data->vce_power_gated == bgate)
+		return 0;
+
+	data->vce_power_gated = bgate;
+
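+	/*
+	 * Re-evaluate VCE DPM against the current and requested power states
+	 * before enabling or disabling VCE DPM to match the new gating state.
+	 */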
+	pcurrent = hwmgr->current_ps;
+	requested = hwmgr->request_ps;
+
+	states.pcurrent_state = &(pcurrent->hardware);
+	states.pnew_state = &(requested->hardware);
+
+	fiji_update_vce_dpm(hwmgr, &states);
+	fiji_enable_disable_vce_dpm(hwmgr, !bgate);
+
+	return 0;
+}
+
+int fiji_phm_powergate_samu(struct pp_hwmgr *hwmgr, bool bgate)
+{
+	struct fiji_hwmgr *data = (struct fiji_hwmgr *)(hwmgr->backend);
+
+	if (data->samu_power_gated == bgate)
+		return 0;
+
+	data->samu_power_gated = bgate;
+
+	fiji_update_samu_dpm(hwmgr, bgate);
+
+	return 0;
+}
+
+int fiji_phm_powergate_acp(struct pp_hwmgr *hwmgr, bool bgate)
+{
+	struct fiji_hwmgr *data = (struct fiji_hwmgr *)(hwmgr->backend);
+
+	if (data->acp_power_gated == bgate)
+		return 0;
+
+	data->acp_power_gated = bgate;
+
+	fiji_update_acp_dpm(hwmgr, bgate);
+
+	return 0;
+}

+ 35 - 0
drivers/gpu/drm/amd/powerplay/hwmgr/fiji_clockpowergating.h

@@ -0,0 +1,35 @@
+/*
+ * Copyright 2015 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#ifndef _FIJI_CLOCK_POWER_GATING_H_
+#define _FIJI_CLOCK_POWER_GATING_H_
+
+#include "fiji_hwmgr.h"
+#include "pp_asicblocks.h"
+
+extern int fiji_phm_powergate_vce(struct pp_hwmgr *hwmgr, bool bgate);
+extern int fiji_phm_powergate_uvd(struct pp_hwmgr *hwmgr, bool bgate);
+extern int fiji_phm_powergate_samu(struct pp_hwmgr *hwmgr, bool bgate);
+extern int fiji_phm_powergate_acp(struct pp_hwmgr *hwmgr, bool bgate);
+extern int fiji_phm_disable_clock_power_gating(struct pp_hwmgr *hwmgr);
+#endif /* _FIJI_CLOCK_POWER_GATING_H_ */

+ 105 - 0
drivers/gpu/drm/amd/powerplay/hwmgr/fiji_dyn_defaults.h

@@ -0,0 +1,105 @@
+/*
+ * Copyright 2015 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#ifndef FIJI_DYN_DEFAULTS_H
+#define FIJI_DYN_DEFAULTS_H
+
+/** \file
+* Volcanic Islands Dynamic default parameters.
+*/
+
+enum FIJIdpm_TrendDetection {
+	FIJIAdpm_TrendDetection_AUTO,
+	FIJIAdpm_TrendDetection_UP,
+	FIJIAdpm_TrendDetection_DOWN
+};
+typedef enum FIJIdpm_TrendDetection FIJIdpm_TrendDetection;
+
+/* TODO: fill in the default values. */
+
+/* Bit vector representing same fields as hardware register. */
+#define PPFIJI_VOTINGRIGHTSCLIENTS_DFLT0    0x3FFFC102  /* CP_Gfx_busy ????
+                                                         * HDP_busy
+                                                         * IH_busy
+                                                         * UVD_busy
+                                                         * VCE_busy
+                                                         * ACP_busy
+                                                         * SAMU_busy
+                                                         * SDMA enabled */
+#define PPFIJI_VOTINGRIGHTSCLIENTS_DFLT1    0x000400  /* FE_Gfx_busy  - Intended for primary usage.   Rest are for flexibility. ????
+                                                       * SH_Gfx_busy
+                                                       * RB_Gfx_busy
+                                                       * VCE_busy */
+
+#define PPFIJI_VOTINGRIGHTSCLIENTS_DFLT2    0xC00080  /* SH_Gfx_busy - Intended for primary usage.   Rest are for flexibility.
+                                                       * FE_Gfx_busy
+                                                       * RB_Gfx_busy
+                                                       * ACP_busy */
+
+#define PPFIJI_VOTINGRIGHTSCLIENTS_DFLT3    0xC00200  /* RB_Gfx_busy - Intended for primary usage.   Rest are for flexibility.
+                                                       * FE_Gfx_busy
+                                                       * SH_Gfx_busy
+                                                       * UVD_busy */
+
+#define PPFIJI_VOTINGRIGHTSCLIENTS_DFLT4    0xC01680  /* UVD_busy
+                                                       * VCE_busy
+                                                       * ACP_busy
+                                                       * SAMU_busy */
+
+#define PPFIJI_VOTINGRIGHTSCLIENTS_DFLT5    0xC00033  /* GFX, HDP */
+#define PPFIJI_VOTINGRIGHTSCLIENTS_DFLT6    0xC00033  /* GFX, HDP */
+#define PPFIJI_VOTINGRIGHTSCLIENTS_DFLT7    0x3FFFC000  /* GFX, HDP */
+
+
+/* thermal protection counter (units). */
+#define PPFIJI_THERMALPROTECTCOUNTER_DFLT            0x200 /* ~19us */
+
+/* static screen threshold unit */
+#define PPFIJI_STATICSCREENTHRESHOLDUNIT_DFLT    0
+
+/* static screen threshold */
+#define PPFIJI_STATICSCREENTHRESHOLD_DFLT        0x00C8
+
+/* gfx idle clock stop threshold */
+#define PPFIJI_GFXIDLECLOCKSTOPTHRESHOLD_DFLT        0x200 /* ~19us with static screen threshold unit of 0 */
+
+/* Fixed reference divider to use when building baby stepping tables. */
+#define PPFIJI_REFERENCEDIVIDER_DFLT                  4
+
+/* ULV voltage change delay time
+ * Used to be delay_vreg in N.I. split for S.I.
+ * Using N.I. delay_vreg value as default
+ * ReferenceClock = 2700
+ * VoltageResponseTime = 1000
+ * VDDCDelayTime = (VoltageResponseTime * ReferenceClock) / 1600 = 1687
+ */
+#define PPFIJI_ULVVOLTAGECHANGEDELAY_DFLT             1687
+
+#define PPFIJI_CGULVPARAMETER_DFLT			0x00040035
+#define PPFIJI_CGULVCONTROL_DFLT			0x00007450
+#define PPFIJI_TARGETACTIVITY_DFLT			30 /* 30% */
+#define PPFIJI_MCLK_TARGETACTIVITY_DFLT		10 /* 10% */
+
+#endif
+

Kaikkia tiedostoja ei voida näyttää, sillä liian monta tiedostoa muuttui tässä diffissä