
Merge tag 'drm-intel-next-2018-06-20' of git://anongit.freedesktop.org/drm/drm-intel into drm-next

Chris is doing many reworks that allow us to get full-ppgtt supported
on all platforms back to HSW, as well as many other fixes and improvements,
including:
- Use GEM suspend when aborting initialization (Chris)
- Change i915_gem_fault to return vm_fault_t (Chris)
- Expand VMA to non-GEM object entities (Chris)
- Improve logs for load failure, but quiet logging on fault injection to avoid noise on CI (Chris)
- Other page directory handling fixes and improvements for gen6 (Chris)
- Other gtt clean-up removing redundancies and unused checks (Chris)
- Reorder aliasing ppgtt fini (Chris)
- Refactor of unsetting obj->mm.pages (Chris)
- Apply batch location restrictions before pinning (Chris)
- Ringbuffer fixes for context restore (Chris)
- Execlist fixes on freeing error pointer on allocation error (Chris)
- Make closing request flush mandatory (Chris)
- Move GEM sanitize from resume_early to resume (Chris)
- Improve debug dumps (Chris)
- Silence the compiler for selftests (Chris)
- Other execlists changes to improve hangcheck and reset.
- Many gtt page directory fixes and improvements (Chris)
- Reorg context workarounds (Chris)
- Avoid ERR_PTR dereference on selftest (Chris)

Other GEM related work:
- Stop trying to reset GPU if reset failed (Mika)
- Add HW workaround for KBL to fix GPU reset (Mika)
- Fix context ban and hang accounting for client (Mika)
- Fixes on OA perf (Michel, Jani)
- Refactor on GuC log mechanisms (Piotr)
- Enable provoking vertex fix on Gen9 systems (Kenneth)

More ICL patches for Display enabling:
- ICL - 10-bit support for HDMI (RK)
- ICL - Start adding TBT PLL (Paulo)
- ICL - DDI HDMI level selection (Manasi)
- ICL - GMBUS GPIO pin mapping fix (Mahesh)
- ICL - Adding DP_AUX_E support (James)
- ICL - Display interrupts handling (DK)

Other display fixes and improvements:
- Fix sprite destination color keying on SKL+ (Ville)
- Fixes and improvements on PCH detection, especially for non-PCH systems (Jani)
- Document PCH_NOP (Lucas)
- Allow DBLSCAN user modes with eDP/LVDS/DSI (Ville)
- Opregion and ACPI cleanup and organization (Jani)
- Kill delays when activating PSR (Rodrigo)
- ...and a consequent fix of the PSR activation flow (DK)
- Fix HDMI infoframe setting (Imre)
- Fix Display interrupts and modes on old gens (Ville)
- Start switching to kernel unsigned int types (Jani)
- Introduction to Amber Lake and Whiskey Lake platforms (Jose)
- Audio clock fixes for HBR3 (RK)
- Standardize i915_reg.h definitions according to our doc and checkpatch (Paulo)
- Remove unused timespec_to_jiffies_timeout function (Arnd)
- Increase the scope of PSR wake fix for other VBTs out there (Vathsala)
- Improve debug msgs with prop name/id (Ville)
- Other cleanup of unnecessary cursor size defines (Ville)
- Enforce max hdisplay/hblank_start limits on HSW/BDW (Ville)
- Make ELD pointers constant (Jani)
- Fix for PSR VBT parse (Colin)
- Add warn about unsupported CDCLK rates (Imre)

Signed-off-by: Dave Airlie <airlied@redhat.com>

# gpg: Signature made Thu 21 Jun 2018 07:12:10 AM AEST
# gpg:                using RSA key FA625F640EEB13CA
# gpg: Good signature from "Rodrigo Vivi <rodrigo.vivi@intel.com>"
# gpg:                 aka "Rodrigo Vivi <rodrigo.vivi@gmail.com>"
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg:          There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 6D20 7068 EEDD 6509 1C2C  E2A3 FA62 5F64 0EEB 13CA
Link: https://patchwork.freedesktop.org/patch/msgid/20180625165622.GA21761@intel.com
Dave Airlie 7 years ago
Parent
Commit
b4d4b0b7de
100 changed files with 4453 additions and 3087 deletions
drivers/gpu/drm/i915/dvo_ch7017.c (+10 -10)
drivers/gpu/drm/i915/dvo_ch7xxx.c (+11 -11)
drivers/gpu/drm/i915/dvo_ivch.c (+13 -13)
drivers/gpu/drm/i915/dvo_ns2501.c (+22 -22)
drivers/gpu/drm/i915/dvo_sil164.c (+5 -5)
drivers/gpu/drm/i915/dvo_tfp410.c (+8 -8)
drivers/gpu/drm/i915/gvt/cmd_parser.c (+18 -25)
drivers/gpu/drm/i915/gvt/display.c (+42 -16)
drivers/gpu/drm/i915/gvt/dmabuf.c (+19 -7)
drivers/gpu/drm/i915/gvt/edid.c (+19 -1)
drivers/gpu/drm/i915/gvt/execlist.h (+5 -8)
drivers/gpu/drm/i915/gvt/fb_decoder.c (+12 -3)
drivers/gpu/drm/i915/gvt/firmware.c (+1 -1)
drivers/gpu/drm/i915/gvt/gtt.c (+3 -8)
drivers/gpu/drm/i915/gvt/gvt.c (+11 -16)
drivers/gpu/drm/i915/gvt/gvt.h (+16 -0)
drivers/gpu/drm/i915/gvt/handlers.c (+352 -47)
drivers/gpu/drm/i915/gvt/interrupt.c (+7 -10)
drivers/gpu/drm/i915/gvt/mmio.c (+6 -6)
drivers/gpu/drm/i915/gvt/mmio.h (+6 -5)
drivers/gpu/drm/i915/gvt/mmio_context.c (+11 -5)
drivers/gpu/drm/i915/gvt/page_track.c (+2 -3)
drivers/gpu/drm/i915/gvt/sched_policy.c (+32 -4)
drivers/gpu/drm/i915/gvt/scheduler.c (+14 -11)
drivers/gpu/drm/i915/gvt/vgpu.c (+30 -26)
drivers/gpu/drm/i915/i915_debugfs.c (+10 -26)
drivers/gpu/drm/i915/i915_drv.c (+26 -30)
drivers/gpu/drm/i915/i915_drv.h (+35 -21)
drivers/gpu/drm/i915/i915_gem.c (+103 -78)
drivers/gpu/drm/i915/i915_gem_context.c (+2 -9)
drivers/gpu/drm/i915/i915_gem_execbuffer.c (+29 -24)
drivers/gpu/drm/i915/i915_gem_gtt.c (+353 -351)
drivers/gpu/drm/i915/i915_gem_gtt.h (+42 -15)
drivers/gpu/drm/i915/i915_gpu_error.c (+3 -0)
drivers/gpu/drm/i915/i915_irq.c (+176 -6)
drivers/gpu/drm/i915/i915_pci.c (+4 -1)
drivers/gpu/drm/i915/i915_perf.c (+9 -9)
drivers/gpu/drm/i915/i915_pvinfo.h (+4 -1)
drivers/gpu/drm/i915/i915_reg.h (+1681 -1619)
drivers/gpu/drm/i915/i915_request.c (+4 -16)
drivers/gpu/drm/i915/i915_request.h (+4 -3)
drivers/gpu/drm/i915/i915_trace.h (+0 -33)
drivers/gpu/drm/i915/i915_vma.c (+62 -45)
drivers/gpu/drm/i915/i915_vma.h (+9 -1)
drivers/gpu/drm/i915/intel_acpi.c (+11 -16)
drivers/gpu/drm/i915/intel_atomic.c (+4 -2)
drivers/gpu/drm/i915/intel_atomic_plane.c (+4 -2)
drivers/gpu/drm/i915/intel_audio.c (+29 -19)
drivers/gpu/drm/i915/intel_bios.c (+7 -5)
drivers/gpu/drm/i915/intel_cdclk.c (+53 -3)
drivers/gpu/drm/i915/intel_crt.c (+34 -1)
drivers/gpu/drm/i915/intel_ddi.c (+36 -3)
drivers/gpu/drm/i915/intel_display.c (+104 -18)
drivers/gpu/drm/i915/intel_display.h (+2 -1)
drivers/gpu/drm/i915/intel_dp.c (+54 -24)
drivers/gpu/drm/i915/intel_dp_aux_backlight.c (+6 -6)
drivers/gpu/drm/i915/intel_dp_link_training.c (+28 -11)
drivers/gpu/drm/i915/intel_dp_mst.c (+6 -0)
drivers/gpu/drm/i915/intel_dpll_mgr.c (+16 -4)
drivers/gpu/drm/i915/intel_dpll_mgr.h (+9 -5)
drivers/gpu/drm/i915/intel_drv.h (+1 -6)
drivers/gpu/drm/i915/intel_dsi.c (+6 -0)
drivers/gpu/drm/i915/intel_dvo.c (+7 -1)
drivers/gpu/drm/i915/intel_engine_cs.c (+64 -11)
drivers/gpu/drm/i915/intel_guc.c (+85 -30)
drivers/gpu/drm/i915/intel_guc_fwif.h (+3 -17)
drivers/gpu/drm/i915/intel_guc_log.c (+37 -33)
drivers/gpu/drm/i915/intel_guc_log.h (+17 -9)
drivers/gpu/drm/i915/intel_gvt.c (+2 -0)
drivers/gpu/drm/i915/intel_hangcheck.c (+16 -1)
drivers/gpu/drm/i915/intel_hdmi.c (+74 -30)
drivers/gpu/drm/i915/intel_i2c.c (+7 -7)
drivers/gpu/drm/i915/intel_lrc.c (+62 -27)
drivers/gpu/drm/i915/intel_lspcon.c (+1 -1)
drivers/gpu/drm/i915/intel_lvds.c (+5 -0)
drivers/gpu/drm/i915/intel_opregion.c (+12 -19)
drivers/gpu/drm/i915/intel_opregion.h (+1 -0)
drivers/gpu/drm/i915/intel_panel.c (+4 -4)
drivers/gpu/drm/i915/intel_psr.c (+8 -22)
drivers/gpu/drm/i915/intel_ringbuffer.c (+157 -82)
drivers/gpu/drm/i915/intel_ringbuffer.h (+22 -17)
drivers/gpu/drm/i915/intel_runtime_pm.c (+2 -0)
drivers/gpu/drm/i915/intel_sdvo.c (+6 -0)
drivers/gpu/drm/i915/intel_sprite.c (+61 -3)
drivers/gpu/drm/i915/intel_tv.c (+10 -2)
drivers/gpu/drm/i915/intel_uc.c (+1 -1)
drivers/gpu/drm/i915/intel_uncore.c (+11 -5)
drivers/gpu/drm/i915/intel_uncore.h (+11 -11)
drivers/gpu/drm/i915/intel_vbt_defs.h (+4 -2)
drivers/gpu/drm/i915/intel_workarounds.c (+59 -17)
drivers/gpu/drm/i915/selftests/huge_pages.c (+1 -1)
drivers/gpu/drm/i915/selftests/i915_gem_coherency.c (+2 -2)
drivers/gpu/drm/i915/selftests/i915_gem_context.c (+4 -4)
drivers/gpu/drm/i915/selftests/i915_gem_gtt.c (+8 -10)
drivers/gpu/drm/i915/selftests/i915_request.c (+3 -3)
drivers/gpu/drm/i915/selftests/intel_hangcheck.c (+8 -8)
drivers/gpu/drm/i915/selftests/intel_lrc.c (+1 -1)
drivers/gpu/drm/i915/selftests/intel_workarounds.c (+1 -1)
drivers/gpu/drm/i915/selftests/mock_gtt.c (+10 -8)
include/drm/i915_pciids.h (+25 -12)

+ 10 - 10
drivers/gpu/drm/i915/dvo_ch7017.c

@@ -159,7 +159,7 @@
 #define CH7017_BANG_LIMIT_CONTROL	0x7f
 
 struct ch7017_priv {
-	uint8_t dummy;
+	u8 dummy;
 };
 
 static void ch7017_dump_regs(struct intel_dvo_device *dvo);
@@ -186,7 +186,7 @@ static bool ch7017_read(struct intel_dvo_device *dvo, u8 addr, u8 *val)
 
 static bool ch7017_write(struct intel_dvo_device *dvo, u8 addr, u8 val)
 {
-	uint8_t buf[2] = { addr, val };
+	u8 buf[2] = { addr, val };
 	struct i2c_msg msg = {
 		.addr = dvo->slave_addr,
 		.flags = 0,
@@ -258,11 +258,11 @@ static void ch7017_mode_set(struct intel_dvo_device *dvo,
 			    const struct drm_display_mode *mode,
 			    const struct drm_display_mode *adjusted_mode)
 {
-	uint8_t lvds_pll_feedback_div, lvds_pll_vco_control;
-	uint8_t outputs_enable, lvds_control_2, lvds_power_down;
-	uint8_t horizontal_active_pixel_input;
-	uint8_t horizontal_active_pixel_output, vertical_active_line_output;
-	uint8_t active_input_line_output;
+	u8 lvds_pll_feedback_div, lvds_pll_vco_control;
+	u8 outputs_enable, lvds_control_2, lvds_power_down;
+	u8 horizontal_active_pixel_input;
+	u8 horizontal_active_pixel_output, vertical_active_line_output;
+	u8 active_input_line_output;
 
 	DRM_DEBUG_KMS("Registers before mode setting\n");
 	ch7017_dump_regs(dvo);
@@ -333,7 +333,7 @@ static void ch7017_mode_set(struct intel_dvo_device *dvo,
 /* set the CH7017 power state */
 static void ch7017_dpms(struct intel_dvo_device *dvo, bool enable)
 {
-	uint8_t val;
+	u8 val;
 
 	ch7017_read(dvo, CH7017_LVDS_POWER_DOWN, &val);
 
@@ -361,7 +361,7 @@ static void ch7017_dpms(struct intel_dvo_device *dvo, bool enable)
 
 static bool ch7017_get_hw_state(struct intel_dvo_device *dvo)
 {
-	uint8_t val;
+	u8 val;
 
 	ch7017_read(dvo, CH7017_LVDS_POWER_DOWN, &val);
 
@@ -373,7 +373,7 @@ static bool ch7017_get_hw_state(struct intel_dvo_device *dvo)
 
 static void ch7017_dump_regs(struct intel_dvo_device *dvo)
 {
-	uint8_t val;
+	u8 val;
 
 #define DUMP(reg)					\
 do {							\

+ 11 - 11
drivers/gpu/drm/i915/dvo_ch7xxx.c

@@ -85,7 +85,7 @@ SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
  */
 
 static struct ch7xxx_id_struct {
-	uint8_t vid;
+	u8 vid;
 	char *name;
 } ch7xxx_ids[] = {
 	{ CH7011_VID, "CH7011" },
@@ -96,7 +96,7 @@ static struct ch7xxx_id_struct {
 };
 
 static struct ch7xxx_did_struct {
-	uint8_t did;
+	u8 did;
 	char *name;
 } ch7xxx_dids[] = {
 	{ CH7xxx_DID, "CH7XXX" },
@@ -107,7 +107,7 @@ struct ch7xxx_priv {
 	bool quiet;
 };
 
-static char *ch7xxx_get_id(uint8_t vid)
+static char *ch7xxx_get_id(u8 vid)
 {
 	int i;
 
@@ -119,7 +119,7 @@ static char *ch7xxx_get_id(uint8_t vid)
 	return NULL;
 }
 
-static char *ch7xxx_get_did(uint8_t did)
+static char *ch7xxx_get_did(u8 did)
 {
 	int i;
 
@@ -132,7 +132,7 @@ static char *ch7xxx_get_did(uint8_t did)
 }
 
 /** Reads an 8 bit register */
-static bool ch7xxx_readb(struct intel_dvo_device *dvo, int addr, uint8_t *ch)
+static bool ch7xxx_readb(struct intel_dvo_device *dvo, int addr, u8 *ch)
 {
 	struct ch7xxx_priv *ch7xxx = dvo->dev_priv;
 	struct i2c_adapter *adapter = dvo->i2c_bus;
@@ -170,11 +170,11 @@ static bool ch7xxx_readb(struct intel_dvo_device *dvo, int addr, uint8_t *ch)
 }
 
 /** Writes an 8 bit register */
-static bool ch7xxx_writeb(struct intel_dvo_device *dvo, int addr, uint8_t ch)
+static bool ch7xxx_writeb(struct intel_dvo_device *dvo, int addr, u8 ch)
 {
 	struct ch7xxx_priv *ch7xxx = dvo->dev_priv;
 	struct i2c_adapter *adapter = dvo->i2c_bus;
-	uint8_t out_buf[2];
+	u8 out_buf[2];
 	struct i2c_msg msg = {
 		.addr = dvo->slave_addr,
 		.flags = 0,
@@ -201,7 +201,7 @@ static bool ch7xxx_init(struct intel_dvo_device *dvo,
 {
 	/* this will detect the CH7xxx chip on the specified i2c bus */
 	struct ch7xxx_priv *ch7xxx;
-	uint8_t vendor, device;
+	u8 vendor, device;
 	char *name, *devid;
 
 	ch7xxx = kzalloc(sizeof(struct ch7xxx_priv), GFP_KERNEL);
@@ -244,7 +244,7 @@ out:
 
 static enum drm_connector_status ch7xxx_detect(struct intel_dvo_device *dvo)
 {
-	uint8_t cdet, orig_pm, pm;
+	u8 cdet, orig_pm, pm;
 
 	ch7xxx_readb(dvo, CH7xxx_PM, &orig_pm);
 
@@ -276,7 +276,7 @@ static void ch7xxx_mode_set(struct intel_dvo_device *dvo,
 			    const struct drm_display_mode *mode,
 			    const struct drm_display_mode *adjusted_mode)
 {
-	uint8_t tvco, tpcp, tpd, tlpf, idf;
+	u8 tvco, tpcp, tpd, tlpf, idf;
 
 	if (mode->clock <= 65000) {
 		tvco = 0x23;
@@ -336,7 +336,7 @@ static void ch7xxx_dump_regs(struct intel_dvo_device *dvo)
 	int i;
 
 	for (i = 0; i < CH7xxx_NUM_REGS; i++) {
-		uint8_t val;
+		u8 val;
 		if ((i % 8) == 0)
 			DRM_DEBUG_KMS("\n %02X: ", i);
 		ch7xxx_readb(dvo, i, &val);

+ 13 - 13
drivers/gpu/drm/i915/dvo_ivch.c

@@ -161,7 +161,7 @@
  * instead. The following list contains all registers that
  * require saving.
  */
-static const uint16_t backup_addresses[] = {
+static const u16 backup_addresses[] = {
 	0x11, 0x12,
 	0x18, 0x19, 0x1a, 0x1f,
 	0x20, 0x21, 0x22, 0x23, 0x24, 0x25, 0x26, 0x27,
@@ -174,11 +174,11 @@ static const uint16_t backup_addresses[] = {
 struct ivch_priv {
 	bool quiet;
 
-	uint16_t width, height;
+	u16 width, height;
 
 	/* Register backup */
 
-	uint16_t reg_backup[ARRAY_SIZE(backup_addresses)];
+	u16 reg_backup[ARRAY_SIZE(backup_addresses)];
 };
 
 
@@ -188,7 +188,7 @@ static void ivch_dump_regs(struct intel_dvo_device *dvo);
  *
  * Each of the 256 registers are 16 bits long.
  */
-static bool ivch_read(struct intel_dvo_device *dvo, int addr, uint16_t *data)
+static bool ivch_read(struct intel_dvo_device *dvo, int addr, u16 *data)
 {
 	struct ivch_priv *priv = dvo->dev_priv;
 	struct i2c_adapter *adapter = dvo->i2c_bus;
@@ -231,7 +231,7 @@ static bool ivch_read(struct intel_dvo_device *dvo, int addr, uint16_t *data)
 }
 
 /* Writes a 16-bit register on the ivch */
-static bool ivch_write(struct intel_dvo_device *dvo, int addr, uint16_t data)
+static bool ivch_write(struct intel_dvo_device *dvo, int addr, u16 data)
 {
 	struct ivch_priv *priv = dvo->dev_priv;
 	struct i2c_adapter *adapter = dvo->i2c_bus;
@@ -263,7 +263,7 @@ static bool ivch_init(struct intel_dvo_device *dvo,
 		      struct i2c_adapter *adapter)
 {
 	struct ivch_priv *priv;
-	uint16_t temp;
+	u16 temp;
 	int i;
 
 	priv = kzalloc(sizeof(struct ivch_priv), GFP_KERNEL);
@@ -342,7 +342,7 @@ static void ivch_reset(struct intel_dvo_device *dvo)
 static void ivch_dpms(struct intel_dvo_device *dvo, bool enable)
 {
 	int i;
-	uint16_t vr01, vr30, backlight;
+	u16 vr01, vr30, backlight;
 
 	ivch_reset(dvo);
 
@@ -379,7 +379,7 @@ static void ivch_dpms(struct intel_dvo_device *dvo, bool enable)
 
 static bool ivch_get_hw_state(struct intel_dvo_device *dvo)
 {
-	uint16_t vr01;
+	u16 vr01;
 
 	ivch_reset(dvo);
 
@@ -398,9 +398,9 @@ static void ivch_mode_set(struct intel_dvo_device *dvo,
 			  const struct drm_display_mode *adjusted_mode)
 {
 	struct ivch_priv *priv = dvo->dev_priv;
-	uint16_t vr40 = 0;
-	uint16_t vr01 = 0;
-	uint16_t vr10;
+	u16 vr40 = 0;
+	u16 vr01 = 0;
+	u16 vr10;
 
 	ivch_reset(dvo);
 
@@ -416,7 +416,7 @@ static void ivch_mode_set(struct intel_dvo_device *dvo,
 
 	if (mode->hdisplay != adjusted_mode->crtc_hdisplay ||
 	    mode->vdisplay != adjusted_mode->crtc_vdisplay) {
-		uint16_t x_ratio, y_ratio;
+		u16 x_ratio, y_ratio;
 
 		vr01 |= VR01_PANEL_FIT_ENABLE;
 		vr40 |= VR40_CLOCK_GATING_ENABLE;
@@ -438,7 +438,7 @@ static void ivch_mode_set(struct intel_dvo_device *dvo,
 
 static void ivch_dump_regs(struct intel_dvo_device *dvo)
 {
-	uint16_t val;
+	u16 val;
 
 	ivch_read(dvo, VR00, &val);
 	DRM_DEBUG_KMS("VR00: 0x%04x\n", val);

+ 22 - 22
drivers/gpu/drm/i915/dvo_ns2501.c

@@ -191,8 +191,8 @@ enum {
 };
 
 struct ns2501_reg {
-	 uint8_t offset;
-	 uint8_t value;
+	u8 offset;
+	u8 value;
 };
 
 /*
@@ -202,23 +202,23 @@ struct ns2501_reg {
  * read all this with a grain of salt.
  */
 struct ns2501_configuration {
-	uint8_t sync;		/* configuration of the C0 register */
-	uint8_t conf;		/* configuration register 8 */
-	uint8_t syncb;		/* configuration register 41 */
-	uint8_t	dither;		/* configuration of the dithering */
-	uint8_t pll_a;		/* PLL configuration, register A, 1B */
-	uint16_t pll_b;		/* PLL configuration, register B, 1C/1D */
-	uint16_t hstart;	/* horizontal start, registers C1/C2 */
-	uint16_t hstop;		/* horizontal total, registers C3/C4 */
-	uint16_t vstart;	/* vertical start, registers C5/C6 */
-	uint16_t vstop;		/* vertical total, registers C7/C8 */
-	uint16_t vsync;         /* manual vertical sync start, 80/81 */
-	uint16_t vtotal;        /* number of lines generated, 82/83 */
-	uint16_t hpos;		/* horizontal position + 256, 98/99  */
-	uint16_t vpos;		/* vertical position, 8e/8f */
-	uint16_t voffs;		/* vertical output offset, 9c/9d */
-	uint16_t hscale;	/* horizontal scaling factor, b8/b9 */
-	uint16_t vscale;	/* vertical scaling factor, 10/11 */
+	u8 sync;		/* configuration of the C0 register */
+	u8 conf;		/* configuration register 8 */
+	u8 syncb;		/* configuration register 41 */
+	u8 dither;		/* configuration of the dithering */
+	u8 pll_a;		/* PLL configuration, register A, 1B */
+	u16 pll_b;		/* PLL configuration, register B, 1C/1D */
+	u16 hstart;		/* horizontal start, registers C1/C2 */
+	u16 hstop;		/* horizontal total, registers C3/C4 */
+	u16 vstart;		/* vertical start, registers C5/C6 */
+	u16 vstop;		/* vertical total, registers C7/C8 */
+	u16 vsync;		/* manual vertical sync start, 80/81 */
+	u16 vtotal;		/* number of lines generated, 82/83 */
+	u16 hpos;		/* horizontal position + 256, 98/99  */
+	u16 vpos;		/* vertical position, 8e/8f */
+	u16 voffs;		/* vertical output offset, 9c/9d */
+	u16 hscale;		/* horizontal scaling factor, b8/b9 */
+	u16 vscale;		/* vertical scaling factor, 10/11 */
 };
 
 /*
@@ -389,7 +389,7 @@ struct ns2501_priv {
 ** If it returns false, it might be wise to enable the
 ** DVO with the above function.
 */
-static bool ns2501_readb(struct intel_dvo_device *dvo, int addr, uint8_t * ch)
+static bool ns2501_readb(struct intel_dvo_device *dvo, int addr, u8 *ch)
 {
 	struct ns2501_priv *ns = dvo->dev_priv;
 	struct i2c_adapter *adapter = dvo->i2c_bus;
@@ -434,11 +434,11 @@ static bool ns2501_readb(struct intel_dvo_device *dvo, int addr, uint8_t * ch)
 ** If it returns false, it might be wise to enable the
 ** DVO with the above function.
 */
-static bool ns2501_writeb(struct intel_dvo_device *dvo, int addr, uint8_t ch)
+static bool ns2501_writeb(struct intel_dvo_device *dvo, int addr, u8 ch)
 {
 	struct ns2501_priv *ns = dvo->dev_priv;
 	struct i2c_adapter *adapter = dvo->i2c_bus;
-	uint8_t out_buf[2];
+	u8 out_buf[2];
 
 	struct i2c_msg msg = {
 		.addr = dvo->slave_addr,

+ 5 - 5
drivers/gpu/drm/i915/dvo_sil164.c

@@ -65,7 +65,7 @@ struct sil164_priv {
 
 #define SILPTR(d) ((SIL164Ptr)(d->DriverPrivate.ptr))
 
-static bool sil164_readb(struct intel_dvo_device *dvo, int addr, uint8_t *ch)
+static bool sil164_readb(struct intel_dvo_device *dvo, int addr, u8 *ch)
 {
 	struct sil164_priv *sil = dvo->dev_priv;
 	struct i2c_adapter *adapter = dvo->i2c_bus;
@@ -102,11 +102,11 @@ static bool sil164_readb(struct intel_dvo_device *dvo, int addr, uint8_t *ch)
 	return false;
 }
 
-static bool sil164_writeb(struct intel_dvo_device *dvo, int addr, uint8_t ch)
+static bool sil164_writeb(struct intel_dvo_device *dvo, int addr, u8 ch)
 {
 	struct sil164_priv *sil = dvo->dev_priv;
 	struct i2c_adapter *adapter = dvo->i2c_bus;
-	uint8_t out_buf[2];
+	u8 out_buf[2];
 	struct i2c_msg msg = {
 		.addr = dvo->slave_addr,
 		.flags = 0,
@@ -173,7 +173,7 @@ out:
 
 static enum drm_connector_status sil164_detect(struct intel_dvo_device *dvo)
 {
-	uint8_t reg9;
+	u8 reg9;
 
 	sil164_readb(dvo, SIL164_REG9, &reg9);
 
@@ -243,7 +243,7 @@ static bool sil164_get_hw_state(struct intel_dvo_device *dvo)
 
 static void sil164_dump_regs(struct intel_dvo_device *dvo)
 {
-	uint8_t val;
+	u8 val;
 
 	sil164_readb(dvo, SIL164_FREQ_LO, &val);
 	DRM_DEBUG_KMS("SIL164_FREQ_LO: 0x%02x\n", val);

+ 8 - 8
drivers/gpu/drm/i915/dvo_tfp410.c

@@ -90,7 +90,7 @@ struct tfp410_priv {
 	bool quiet;
 };
 
-static bool tfp410_readb(struct intel_dvo_device *dvo, int addr, uint8_t *ch)
+static bool tfp410_readb(struct intel_dvo_device *dvo, int addr, u8 *ch)
 {
 	struct tfp410_priv *tfp = dvo->dev_priv;
 	struct i2c_adapter *adapter = dvo->i2c_bus;
@@ -127,11 +127,11 @@ static bool tfp410_readb(struct intel_dvo_device *dvo, int addr, uint8_t *ch)
 	return false;
 }
 
-static bool tfp410_writeb(struct intel_dvo_device *dvo, int addr, uint8_t ch)
+static bool tfp410_writeb(struct intel_dvo_device *dvo, int addr, u8 ch)
 {
 	struct tfp410_priv *tfp = dvo->dev_priv;
 	struct i2c_adapter *adapter = dvo->i2c_bus;
-	uint8_t out_buf[2];
+	u8 out_buf[2];
 	struct i2c_msg msg = {
 		.addr = dvo->slave_addr,
 		.flags = 0,
@@ -155,7 +155,7 @@ static bool tfp410_writeb(struct intel_dvo_device *dvo, int addr, uint8_t ch)
 
 static int tfp410_getid(struct intel_dvo_device *dvo, int addr)
 {
-	uint8_t ch1, ch2;
+	u8 ch1, ch2;
 
 	if (tfp410_readb(dvo, addr+0, &ch1) &&
 	    tfp410_readb(dvo, addr+1, &ch2))
@@ -203,7 +203,7 @@ out:
 static enum drm_connector_status tfp410_detect(struct intel_dvo_device *dvo)
 {
 	enum drm_connector_status ret = connector_status_disconnected;
-	uint8_t ctl2;
+	u8 ctl2;
 
 	if (tfp410_readb(dvo, TFP410_CTL_2, &ctl2)) {
 		if (ctl2 & TFP410_CTL_2_RSEN)
@@ -236,7 +236,7 @@ static void tfp410_mode_set(struct intel_dvo_device *dvo,
 /* set the tfp410 power state */
 static void tfp410_dpms(struct intel_dvo_device *dvo, bool enable)
 {
-	uint8_t ctl1;
+	u8 ctl1;
 
 	if (!tfp410_readb(dvo, TFP410_CTL_1, &ctl1))
 		return;
@@ -251,7 +251,7 @@ static void tfp410_dpms(struct intel_dvo_device *dvo, bool enable)
 
 static bool tfp410_get_hw_state(struct intel_dvo_device *dvo)
 {
-	uint8_t ctl1;
+	u8 ctl1;
 
 	if (!tfp410_readb(dvo, TFP410_CTL_1, &ctl1))
 		return false;
@@ -264,7 +264,7 @@ static bool tfp410_get_hw_state(struct intel_dvo_device *dvo)
 
 static void tfp410_dump_regs(struct intel_dvo_device *dvo)
 {
-	uint8_t val, val2;
+	u8 val, val2;
 
 	tfp410_readb(dvo, TFP410_REV, &val);
 	DRM_DEBUG_KMS("TFP410_REV: 0x%02X\n", val);

+ 18 - 25
drivers/gpu/drm/i915/gvt/cmd_parser.c

@@ -172,6 +172,7 @@ struct decode_info {
 #define OP_MEDIA_INTERFACE_DESCRIPTOR_LOAD      OP_3D_MEDIA(0x2, 0x0, 0x2)
 #define OP_MEDIA_GATEWAY_STATE                  OP_3D_MEDIA(0x2, 0x0, 0x3)
 #define OP_MEDIA_STATE_FLUSH                    OP_3D_MEDIA(0x2, 0x0, 0x4)
+#define OP_MEDIA_POOL_STATE                     OP_3D_MEDIA(0x2, 0x0, 0x5)
 
 #define OP_MEDIA_OBJECT                         OP_3D_MEDIA(0x2, 0x1, 0x0)
 #define OP_MEDIA_OBJECT_PRT                     OP_3D_MEDIA(0x2, 0x1, 0x2)
@@ -1256,7 +1257,9 @@ static int gen8_check_mi_display_flip(struct parser_exec_state *s,
 	if (!info->async_flip)
 		return 0;
 
-	if (IS_SKYLAKE(dev_priv) || IS_KABYLAKE(dev_priv)) {
+	if (IS_SKYLAKE(dev_priv)
+		|| IS_KABYLAKE(dev_priv)
+		|| IS_BROXTON(dev_priv)) {
 		stride = vgpu_vreg_t(s->vgpu, info->stride_reg) & GENMASK(9, 0);
 		tile = (vgpu_vreg_t(s->vgpu, info->ctrl_reg) &
 				GENMASK(12, 10)) >> 10;
@@ -1284,7 +1287,9 @@ static int gen8_update_plane_mmio_from_mi_display_flip(
 
 	set_mask_bits(&vgpu_vreg_t(vgpu, info->surf_reg), GENMASK(31, 12),
 		      info->surf_val << 12);
-	if (IS_SKYLAKE(dev_priv) || IS_KABYLAKE(dev_priv)) {
+	if (IS_SKYLAKE(dev_priv)
+		|| IS_KABYLAKE(dev_priv)
+		|| IS_BROXTON(dev_priv)) {
 		set_mask_bits(&vgpu_vreg_t(vgpu, info->stride_reg), GENMASK(9, 0),
 			      info->stride_val);
 		set_mask_bits(&vgpu_vreg_t(vgpu, info->ctrl_reg), GENMASK(12, 10),
@@ -1308,7 +1313,9 @@ static int decode_mi_display_flip(struct parser_exec_state *s,
 
 	if (IS_BROADWELL(dev_priv))
 		return gen8_decode_mi_display_flip(s, info);
-	if (IS_SKYLAKE(dev_priv) || IS_KABYLAKE(dev_priv))
+	if (IS_SKYLAKE(dev_priv)
+		|| IS_KABYLAKE(dev_priv)
+		|| IS_BROXTON(dev_priv))
 		return skl_decode_mi_display_flip(s, info);
 
 	return -ENODEV;
@@ -1317,26 +1324,14 @@ static int decode_mi_display_flip(struct parser_exec_state *s,
 static int check_mi_display_flip(struct parser_exec_state *s,
 		struct mi_display_flip_command_info *info)
 {
-	struct drm_i915_private *dev_priv = s->vgpu->gvt->dev_priv;
-
-	if (IS_BROADWELL(dev_priv)
-		|| IS_SKYLAKE(dev_priv)
-		|| IS_KABYLAKE(dev_priv))
-		return gen8_check_mi_display_flip(s, info);
-	return -ENODEV;
+	return gen8_check_mi_display_flip(s, info);
 }
 
 static int update_plane_mmio_from_mi_display_flip(
 		struct parser_exec_state *s,
 		struct mi_display_flip_command_info *info)
 {
-	struct drm_i915_private *dev_priv = s->vgpu->gvt->dev_priv;
-
-	if (IS_BROADWELL(dev_priv)
-		|| IS_SKYLAKE(dev_priv)
-		|| IS_KABYLAKE(dev_priv))
-		return gen8_update_plane_mmio_from_mi_display_flip(s, info);
-	return -ENODEV;
+	return gen8_update_plane_mmio_from_mi_display_flip(s, info);
 }
 
 static int cmd_handler_mi_display_flip(struct parser_exec_state *s)
@@ -1615,15 +1610,10 @@ static int copy_gma_to_hva(struct intel_vgpu *vgpu, struct intel_vgpu_mm *mm,
  */
 static int batch_buffer_needs_scan(struct parser_exec_state *s)
 {
-	struct intel_gvt *gvt = s->vgpu->gvt;
-
-	if (IS_BROADWELL(gvt->dev_priv) || IS_SKYLAKE(gvt->dev_priv)
-		|| IS_KABYLAKE(gvt->dev_priv)) {
-		/* BDW decides privilege based on address space */
-		if (cmd_val(s, 0) & (1 << 8) &&
+	/* Decide privilege based on address space */
+	if (cmd_val(s, 0) & (1 << 8) &&
 			!(s->vgpu->scan_nonprivbb & (1 << s->ring_id)))
-			return 0;
-	}
+		return 0;
 	return 1;
 }
 
@@ -2349,6 +2339,9 @@ static struct cmd_info cmd_info[] = {
 	{"MEDIA_STATE_FLUSH", OP_MEDIA_STATE_FLUSH, F_LEN_VAR, R_RCS, D_ALL,
 		0, 16, NULL},
 
+	{"MEDIA_POOL_STATE", OP_MEDIA_POOL_STATE, F_LEN_VAR, R_RCS, D_ALL,
+		0, 16, NULL},
+
 	{"MEDIA_OBJECT", OP_MEDIA_OBJECT, F_LEN_VAR, R_RCS, D_ALL, 0, 16, NULL},
 
 	{"MEDIA_CURBE_LOAD", OP_MEDIA_CURBE_LOAD, F_LEN_VAR, R_RCS, D_ALL,

+ 42 - 16
drivers/gpu/drm/i915/gvt/display.c

@@ -171,6 +171,29 @@ static void emulate_monitor_status_change(struct intel_vgpu *vgpu)
 	struct drm_i915_private *dev_priv = vgpu->gvt->dev_priv;
 	int pipe;
 
+	if (IS_BROXTON(dev_priv)) {
+		vgpu_vreg_t(vgpu, GEN8_DE_PORT_ISR) &= ~(BXT_DE_PORT_HP_DDIA |
+			BXT_DE_PORT_HP_DDIB |
+			BXT_DE_PORT_HP_DDIC);
+
+		if (intel_vgpu_has_monitor_on_port(vgpu, PORT_A)) {
+			vgpu_vreg_t(vgpu, GEN8_DE_PORT_ISR) |=
+				BXT_DE_PORT_HP_DDIA;
+		}
+
+		if (intel_vgpu_has_monitor_on_port(vgpu, PORT_B)) {
+			vgpu_vreg_t(vgpu, GEN8_DE_PORT_ISR) |=
+				BXT_DE_PORT_HP_DDIB;
+		}
+
+		if (intel_vgpu_has_monitor_on_port(vgpu, PORT_C)) {
+			vgpu_vreg_t(vgpu, GEN8_DE_PORT_ISR) |=
+				BXT_DE_PORT_HP_DDIC;
+		}
+
+		return;
+	}
+
 	vgpu_vreg_t(vgpu, SDEISR) &= ~(SDE_PORTB_HOTPLUG_CPT |
 			SDE_PORTC_HOTPLUG_CPT |
 			SDE_PORTD_HOTPLUG_CPT);
@@ -337,26 +360,28 @@ void intel_gvt_check_vblank_emulation(struct intel_gvt *gvt)
 	struct intel_gvt_irq *irq = &gvt->irq;
 	struct intel_vgpu *vgpu;
 	int pipe, id;
+	int found = false;
 
-	if (WARN_ON(!mutex_is_locked(&gvt->lock)))
-		return;
-
+	mutex_lock(&gvt->lock);
 	for_each_active_vgpu(gvt, vgpu, id) {
 		for (pipe = 0; pipe < I915_MAX_PIPES; pipe++) {
-			if (pipe_is_enabled(vgpu, pipe))
-				goto out;
+			if (pipe_is_enabled(vgpu, pipe)) {
+				found = true;
+				break;
+			}
 		}
+		if (found)
+			break;
 	}
 
 	/* all the pipes are disabled */
-	hrtimer_cancel(&irq->vblank_timer.timer);
-	return;
-
-out:
-	hrtimer_start(&irq->vblank_timer.timer,
-		ktime_add_ns(ktime_get(), irq->vblank_timer.period),
-		HRTIMER_MODE_ABS);
-
+	if (!found)
+		hrtimer_cancel(&irq->vblank_timer.timer);
+	else
+		hrtimer_start(&irq->vblank_timer.timer,
+			ktime_add_ns(ktime_get(), irq->vblank_timer.period),
+			HRTIMER_MODE_ABS);
+	mutex_unlock(&gvt->lock);
 }
 
 static void emulate_vblank_on_pipe(struct intel_vgpu *vgpu, int pipe)
@@ -393,8 +418,10 @@ static void emulate_vblank(struct intel_vgpu *vgpu)
 {
 	int pipe;
 
+	mutex_lock(&vgpu->vgpu_lock);
 	for_each_pipe(vgpu->gvt->dev_priv, pipe)
 		emulate_vblank_on_pipe(vgpu, pipe);
+	mutex_unlock(&vgpu->vgpu_lock);
 }
 
 /**
@@ -409,11 +436,10 @@ void intel_gvt_emulate_vblank(struct intel_gvt *gvt)
 	struct intel_vgpu *vgpu;
 	int id;
 
-	if (WARN_ON(!mutex_is_locked(&gvt->lock)))
-		return;
-
+	mutex_lock(&gvt->lock);
 	for_each_active_vgpu(gvt, vgpu, id)
 		emulate_vblank(vgpu);
+	mutex_unlock(&gvt->lock);
 }
 
 /**

+ 19 - 7
drivers/gpu/drm/i915/gvt/dmabuf.c

@@ -164,7 +164,9 @@ static struct drm_i915_gem_object *vgpu_create_gem(struct drm_device *dev,
 
 	obj->read_domains = I915_GEM_DOMAIN_GTT;
 	obj->write_domain = 0;
-	if (IS_SKYLAKE(dev_priv) || IS_KABYLAKE(dev_priv)) {
+	if (IS_SKYLAKE(dev_priv)
+		|| IS_KABYLAKE(dev_priv)
+		|| IS_BROXTON(dev_priv)) {
 		unsigned int tiling_mode = 0;
 		unsigned int stride = 0;
 
@@ -192,6 +194,14 @@ static struct drm_i915_gem_object *vgpu_create_gem(struct drm_device *dev,
 	return obj;
 }
 
+static bool validate_hotspot(struct intel_vgpu_cursor_plane_format *c)
+{
+	if (c && c->x_hot <= c->width && c->y_hot <= c->height)
+		return true;
+	else
+		return false;
+}
+
 static int vgpu_get_plane_info(struct drm_device *dev,
 		struct intel_vgpu *vgpu,
 		struct intel_vgpu_fb_info *info,
@@ -229,12 +239,14 @@ static int vgpu_get_plane_info(struct drm_device *dev,
 		info->x_pos = c.x_pos;
 		info->y_pos = c.y_pos;
 
-		/* The invalid cursor hotspot value is delivered to host
-		 * until we find a way to get the cursor hotspot info of
-		 * guest OS.
-		 */
-		info->x_hot = UINT_MAX;
-		info->y_hot = UINT_MAX;
+		if (validate_hotspot(&c)) {
+			info->x_hot = c.x_hot;
+			info->y_hot = c.y_hot;
+		} else {
+			info->x_hot = UINT_MAX;
+			info->y_hot = UINT_MAX;
+		}
+
 		info->size = (((info->stride * c.height * c.bpp) / 8)
 				+ (PAGE_SIZE - 1)) >> PAGE_SHIFT;
 	} else {
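
A hedged host-side sketch (not part of this patch): UINT_MAX in x_hot/y_hot is the "hotspot unknown" sentinel kept from the old code, so a consumer of the decoded fb info would fall back to a default such as (0, 0):

	/* hypothetical fallback in a host-side user of intel_vgpu_fb_info */
	u32 x_hot = (info->x_hot == UINT_MAX) ? 0 : info->x_hot;
	u32 y_hot = (info->y_hot == UINT_MAX) ? 0 : info->y_hot;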

+ 19 - 1
drivers/gpu/drm/i915/gvt/edid.c

@@ -77,6 +77,20 @@ static unsigned char edid_get_byte(struct intel_vgpu *vgpu)
 	return chr;
 }
 
+static inline int bxt_get_port_from_gmbus0(u32 gmbus0)
+{
+	int port_select = gmbus0 & _GMBUS_PIN_SEL_MASK;
+	int port = -EINVAL;
+
+	if (port_select == 1)
+		port = PORT_B;
+	else if (port_select == 2)
+		port = PORT_C;
+	else if (port_select == 3)
+		port = PORT_D;
+	return port;
+}
+
 static inline int get_port_from_gmbus0(u32 gmbus0)
 {
 	int port_select = gmbus0 & _GMBUS_PIN_SEL_MASK;
@@ -105,6 +119,7 @@ static void reset_gmbus_controller(struct intel_vgpu *vgpu)
 static int gmbus0_mmio_write(struct intel_vgpu *vgpu,
 			unsigned int offset, void *p_data, unsigned int bytes)
 {
+	struct drm_i915_private *dev_priv = vgpu->gvt->dev_priv;
 	int port, pin_select;
 
 	memcpy(&vgpu_vreg(vgpu, offset), p_data, bytes);
@@ -116,7 +131,10 @@ static int gmbus0_mmio_write(struct intel_vgpu *vgpu,
 	if (pin_select == 0)
 		return 0;
 
-	port = get_port_from_gmbus0(pin_select);
+	if (IS_BROXTON(dev_priv))
+		port = bxt_get_port_from_gmbus0(pin_select);
+	else
+		port = get_port_from_gmbus0(pin_select);
 	if (WARN_ON(port < 0))
 		return 0;
 

+ 5 - 8
drivers/gpu/drm/i915/gvt/execlist.h

@@ -146,14 +146,11 @@ struct execlist_ring_context {
 	u32 nop4;
 	u32 lri_cmd_2;
 	struct execlist_mmio_pair ctx_timestamp;
-	struct execlist_mmio_pair pdp3_UDW;
-	struct execlist_mmio_pair pdp3_LDW;
-	struct execlist_mmio_pair pdp2_UDW;
-	struct execlist_mmio_pair pdp2_LDW;
-	struct execlist_mmio_pair pdp1_UDW;
-	struct execlist_mmio_pair pdp1_LDW;
-	struct execlist_mmio_pair pdp0_UDW;
-	struct execlist_mmio_pair pdp0_LDW;
+	/*
+	 * pdps[8]={ pdp3_UDW, pdp3_LDW, pdp2_UDW, pdp2_LDW,
+	 *           pdp1_UDW, pdp1_LDW, pdp0_UDW, pdp0_LDW}
+	 */
+	struct execlist_mmio_pair pdps[8];
 };
 
 struct intel_vgpu_elsp_dwords {
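
The pdps[8] array keeps the original field order (pdp3_UDW first, pdp0_LDW last). A hypothetical accessor, purely to illustrate the index mapping; the patch itself indexes the array directly:

	static inline struct execlist_mmio_pair *
	pdp_pair(struct execlist_ring_context *ctx, int n, bool upper)
	{
		/* pdp3 is pdps[0..1], pdp2 is pdps[2..3], ..., pdp0 is pdps[6..7] */
		return &ctx->pdps[(3 - n) * 2 + (upper ? 0 : 1)];
	}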

+ 12 - 3
drivers/gpu/drm/i915/gvt/fb_decoder.c

@@ -36,6 +36,7 @@
 #include <uapi/drm/drm_fourcc.h>
 #include "i915_drv.h"
 #include "gvt.h"
+#include "i915_pvinfo.h"
 
 #define PRIMARY_FORMAT_NUM	16
 struct pixel_format {
@@ -150,7 +151,9 @@ static u32 intel_vgpu_get_stride(struct intel_vgpu *vgpu, int pipe,
 	u32 stride_reg = vgpu_vreg_t(vgpu, DSPSTRIDE(pipe)) & stride_mask;
 	u32 stride = stride_reg;
 
-	if (IS_SKYLAKE(dev_priv) || IS_KABYLAKE(dev_priv)) {
+	if (IS_SKYLAKE(dev_priv)
+		|| IS_KABYLAKE(dev_priv)
+		|| IS_BROXTON(dev_priv)) {
 		switch (tiled) {
 		case PLANE_CTL_TILED_LINEAR:
 			stride = stride_reg * 64;
@@ -214,7 +217,9 @@ int intel_vgpu_decode_primary_plane(struct intel_vgpu *vgpu,
 	if (!plane->enabled)
 		return -ENODEV;
 
-	if (IS_SKYLAKE(dev_priv) || IS_KABYLAKE(dev_priv)) {
+	if (IS_SKYLAKE(dev_priv)
+		|| IS_KABYLAKE(dev_priv)
+		|| IS_BROXTON(dev_priv)) {
 		plane->tiled = (val & PLANE_CTL_TILED_MASK) >>
 		_PLANE_CTL_TILED_SHIFT;
 		fmt = skl_format_to_drm(
@@ -256,7 +261,9 @@ int intel_vgpu_decode_primary_plane(struct intel_vgpu *vgpu,
 	}
 
 	plane->stride = intel_vgpu_get_stride(vgpu, pipe, (plane->tiled << 10),
-		(IS_SKYLAKE(dev_priv) || IS_KABYLAKE(dev_priv)) ?
+		(IS_SKYLAKE(dev_priv)
+		|| IS_KABYLAKE(dev_priv)
+		|| IS_BROXTON(dev_priv)) ?
 			(_PRI_PLANE_STRIDE_MASK >> 6) :
 				_PRI_PLANE_STRIDE_MASK, plane->bpp);
 
@@ -384,6 +391,8 @@ int intel_vgpu_decode_cursor_plane(struct intel_vgpu *vgpu,
 	plane->y_pos = (val & _CURSOR_POS_Y_MASK) >> _CURSOR_POS_Y_SHIFT;
 	plane->y_sign = (val & _CURSOR_SIGN_Y_MASK) >> _CURSOR_SIGN_Y_SHIFT;
 
+	plane->x_hot = vgpu_vreg_t(vgpu, vgtif_reg(cursor_x_hot));
+	plane->y_hot = vgpu_vreg_t(vgpu, vgtif_reg(cursor_y_hot));
 	return 0;
 }
 

+ 1 - 1
drivers/gpu/drm/i915/gvt/firmware.c

@@ -162,7 +162,7 @@ static int verify_firmware(struct intel_gvt *gvt,
 
 	h = (struct gvt_firmware_header *)fw->data;
 
-	crc32_start = offsetof(struct gvt_firmware_header, crc32) + 4;
+	crc32_start = offsetofend(struct gvt_firmware_header, crc32);
 	mem = fw->data + crc32_start;
 
 #define VERIFY(s, a, b) do { \
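
offsetofend() is the kernel helper equal to offsetof() plus the member's size, so the checksummed region now starts just past the crc32 field without a hard-coded "+ 4". A sketch of the equivalence, assuming crc32 is a u32 (which the old "+ 4" implies):

	BUILD_BUG_ON(offsetofend(struct gvt_firmware_header, crc32) !=
		     offsetof(struct gvt_firmware_header, crc32) + sizeof(u32));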

+ 3 - 8
drivers/gpu/drm/i915/gvt/gtt.c

@@ -1973,7 +1973,7 @@ static int alloc_scratch_pages(struct intel_vgpu *vgpu,
 	 * GTT_TYPE_PPGTT_PDE_PT level pt, that means this scratch_pt it self
 	 * is GTT_TYPE_PPGTT_PTE_PT, and full filled by scratch page mfn.
 	 */
-	if (type > GTT_TYPE_PPGTT_PTE_PT && type < GTT_TYPE_MAX) {
+	if (type > GTT_TYPE_PPGTT_PTE_PT) {
 		struct intel_gvt_gtt_entry se;
 
 		memset(&se, 0, sizeof(struct intel_gvt_gtt_entry));
@@ -2257,13 +2257,8 @@ int intel_gvt_init_gtt(struct intel_gvt *gvt)
 
 	gvt_dbg_core("init gtt\n");
 
-	if (IS_BROADWELL(gvt->dev_priv) || IS_SKYLAKE(gvt->dev_priv)
-		|| IS_KABYLAKE(gvt->dev_priv)) {
-		gvt->gtt.pte_ops = &gen8_gtt_pte_ops;
-		gvt->gtt.gma_ops = &gen8_gtt_gma_ops;
-	} else {
-		return -ENODEV;
-	}
+	gvt->gtt.pte_ops = &gen8_gtt_pte_ops;
+	gvt->gtt.gma_ops = &gen8_gtt_gma_ops;
 
 	page = (void *)get_zeroed_page(GFP_KERNEL);
 	if (!page) {

+ 11 - 16
drivers/gpu/drm/i915/gvt/gvt.c

@@ -238,18 +238,15 @@ static void init_device_info(struct intel_gvt *gvt)
 	struct intel_gvt_device_info *info = &gvt->device_info;
 	struct pci_dev *pdev = gvt->dev_priv->drm.pdev;
 
-	if (IS_BROADWELL(gvt->dev_priv) || IS_SKYLAKE(gvt->dev_priv)
-		|| IS_KABYLAKE(gvt->dev_priv)) {
-		info->max_support_vgpus = 8;
-		info->cfg_space_size = PCI_CFG_SPACE_EXP_SIZE;
-		info->mmio_size = 2 * 1024 * 1024;
-		info->mmio_bar = 0;
-		info->gtt_start_offset = 8 * 1024 * 1024;
-		info->gtt_entry_size = 8;
-		info->gtt_entry_size_shift = 3;
-		info->gmadr_bytes_in_cmd = 8;
-		info->max_surface_size = 36 * 1024 * 1024;
-	}
+	info->max_support_vgpus = 8;
+	info->cfg_space_size = PCI_CFG_SPACE_EXP_SIZE;
+	info->mmio_size = 2 * 1024 * 1024;
+	info->mmio_bar = 0;
+	info->gtt_start_offset = 8 * 1024 * 1024;
+	info->gtt_entry_size = 8;
+	info->gtt_entry_size_shift = 3;
+	info->gmadr_bytes_in_cmd = 8;
+	info->max_surface_size = 36 * 1024 * 1024;
 	info->msi_cap_offset = pdev->msi_cap;
 }
 
@@ -271,11 +268,8 @@ static int gvt_service_thread(void *data)
 			continue;
 
 		if (test_and_clear_bit(INTEL_GVT_REQUEST_EMULATE_VBLANK,
-					(void *)&gvt->service_request)) {
-			mutex_lock(&gvt->lock);
+					(void *)&gvt->service_request))
 			intel_gvt_emulate_vblank(gvt);
-			mutex_unlock(&gvt->lock);
-		}
 
 		if (test_bit(INTEL_GVT_REQUEST_SCHED,
 				(void *)&gvt->service_request) ||
@@ -379,6 +373,7 @@ int intel_gvt_init_device(struct drm_i915_private *dev_priv)
 	idr_init(&gvt->vgpu_idr);
 	spin_lock_init(&gvt->scheduler.mmio_context_lock);
 	mutex_init(&gvt->lock);
+	mutex_init(&gvt->sched_lock);
 	gvt->dev_priv = dev_priv;
 
 	init_device_info(gvt);

+ 16 - 0
drivers/gpu/drm/i915/gvt/gvt.h

@@ -170,12 +170,18 @@ struct intel_vgpu_submission {
 
 struct intel_vgpu {
 	struct intel_gvt *gvt;
+	struct mutex vgpu_lock;
 	int id;
 	unsigned long handle; /* vGPU handle used by hypervisor MPT modules */
 	bool active;
 	bool pv_notified;
 	bool failsafe;
 	unsigned int resetting_eng;
+
+	/* Both sched_data and sched_ctl can be seen a part of the global gvt
+	 * scheduler structure. So below 2 vgpu data are protected
+	 * by sched_lock, not vgpu_lock.
+	 */
 	void *sched_data;
 	struct vgpu_sched_ctl sched_ctl;
 
@@ -294,7 +300,13 @@ struct intel_vgpu_type {
 };
 
 struct intel_gvt {
+	/* GVT scope lock, protect GVT itself, and all resource currently
+	 * not yet protected by special locks(vgpu and scheduler lock).
+	 */
 	struct mutex lock;
+	/* scheduler scope lock, protect gvt and vgpu schedule related data */
+	struct mutex sched_lock;
+
 	struct drm_i915_private *dev_priv;
 	struct idr vgpu_idr;	/* vGPU IDR pool */
 
@@ -314,6 +326,10 @@ struct intel_gvt {
 
 	struct task_struct *service_thread;
 	wait_queue_head_t service_thread_wq;
+
+	/* service_request is always used in bit operation, we should always
+	 * use it with atomic bit ops so that no need to use gvt big lock.
+	 */
 	unsigned long service_request;
 
 	struct {
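
A usage sketch of the locking scheme described in the comments above (illustrative, not from this patch): because service_request is only touched with atomic bitops, requesting service needs no lock at all:

	static void example_request_vblank_emulation(struct intel_gvt *gvt)
	{
		/* lock-free: atomic bitop on service_request */
		set_bit(INTEL_GVT_REQUEST_EMULATE_VBLANK,
			(void *)&gvt->service_request);
		wake_up(&gvt->service_thread_wq);
	}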

+ 352 - 47
drivers/gpu/drm/i915/gvt/handlers.c

@@ -55,6 +55,8 @@ unsigned long intel_gvt_get_device_type(struct intel_gvt *gvt)
 		return D_SKL;
 	else if (IS_KABYLAKE(gvt->dev_priv))
 		return D_KBL;
+	else if (IS_BROXTON(gvt->dev_priv))
+		return D_BXT;
 
 	return 0;
 }
@@ -255,7 +257,8 @@ static int mul_force_wake_write(struct intel_vgpu *vgpu,
 	new = CALC_MODE_MASK_REG(old, *(u32 *)p_data);
 
 	if (IS_SKYLAKE(vgpu->gvt->dev_priv)
-		|| IS_KABYLAKE(vgpu->gvt->dev_priv)) {
+		|| IS_KABYLAKE(vgpu->gvt->dev_priv)
+		|| IS_BROXTON(vgpu->gvt->dev_priv)) {
 		switch (offset) {
 		case FORCEWAKE_RENDER_GEN9_REG:
 			ack_reg_offset = FORCEWAKE_ACK_RENDER_GEN9_REG;
@@ -316,6 +319,7 @@ static int gdrst_mmio_write(struct intel_vgpu *vgpu, unsigned int offset,
 		}
 	}
 
+	/* vgpu_lock already hold by emulate mmio r/w */
 	intel_gvt_reset_vgpu_locked(vgpu, false, engine_mask);
 
 	/* sw will wait for the device to ack the reset request */
@@ -420,7 +424,10 @@ static int pipeconf_mmio_write(struct intel_vgpu *vgpu, unsigned int offset,
 		vgpu_vreg(vgpu, offset) |= I965_PIPECONF_ACTIVE;
 	else
 		vgpu_vreg(vgpu, offset) &= ~I965_PIPECONF_ACTIVE;
+	/* vgpu_lock already hold by emulate mmio r/w */
+	mutex_unlock(&vgpu->vgpu_lock);
 	intel_gvt_check_vblank_emulation(vgpu->gvt);
+	mutex_lock(&vgpu->vgpu_lock);
 	return 0;
 }
 
@@ -857,7 +864,8 @@ static int dp_aux_ch_ctl_mmio_write(struct intel_vgpu *vgpu,
 	data = vgpu_vreg(vgpu, offset);
 
 	if ((IS_SKYLAKE(vgpu->gvt->dev_priv)
-		|| IS_KABYLAKE(vgpu->gvt->dev_priv))
+		|| IS_KABYLAKE(vgpu->gvt->dev_priv)
+		|| IS_BROXTON(vgpu->gvt->dev_priv))
 		&& offset != _REG_SKL_DP_AUX_CH_CTL(port_index)) {
 		/* SKL DPB/C/D aux ctl register changed */
 		return 0;
@@ -1209,8 +1217,8 @@ static int pvinfo_mmio_write(struct intel_vgpu *vgpu, unsigned int offset,
 		ret = handle_g2v_notification(vgpu, data);
 		break;
 	/* add xhot and yhot to handled list to avoid error log */
-	case 0x78830:
-	case 0x78834:
+	case _vgtif_reg(cursor_x_hot):
+	case _vgtif_reg(cursor_y_hot):
 	case _vgtif_reg(pdp[0].lo):
 	case _vgtif_reg(pdp[0].hi):
 	case _vgtif_reg(pdp[1].lo):
@@ -1369,6 +1377,16 @@ static int mailbox_write(struct intel_vgpu *vgpu, unsigned int offset,
 				*data0 = 0x1e1a1100;
 			else
 				*data0 = 0x61514b3d;
+		} else if (IS_BROXTON(vgpu->gvt->dev_priv)) {
+			/**
+			 * "Read memory latency" command on gen9.
+			 * Below memory latency values are read
+			 * from Broxton MRB.
+			 */
+			if (!*data0)
+				*data0 = 0x16080707;
+			else
+				*data0 = 0x16161616;
 		}
 		break;
 	case SKL_PCODE_CDCLK_CONTROL:
@@ -1426,8 +1444,11 @@ static int skl_power_well_ctl_write(struct intel_vgpu *vgpu,
 {
 	u32 v = *(u32 *)p_data;
 
-	v &= (1 << 31) | (1 << 29) | (1 << 9) |
-	     (1 << 7) | (1 << 5) | (1 << 3) | (1 << 1);
+	if (IS_BROXTON(vgpu->gvt->dev_priv))
+		v &= (1 << 31) | (1 << 29);
+	else
+		v &= (1 << 31) | (1 << 29) | (1 << 9) |
+			(1 << 7) | (1 << 5) | (1 << 3) | (1 << 1);
 	v |= (v >> 1);
 
 	return intel_vgpu_default_mmio_write(vgpu, offset, &v, bytes);
@@ -1447,6 +1468,102 @@ static int skl_lcpll_write(struct intel_vgpu *vgpu, unsigned int offset,
 	return 0;
 }
 
+static int bxt_de_pll_enable_write(struct intel_vgpu *vgpu,
+		unsigned int offset, void *p_data, unsigned int bytes)
+{
+	u32 v = *(u32 *)p_data;
+
+	if (v & BXT_DE_PLL_PLL_ENABLE)
+		v |= BXT_DE_PLL_LOCK;
+
+	vgpu_vreg(vgpu, offset) = v;
+
+	return 0;
+}
+
+static int bxt_port_pll_enable_write(struct intel_vgpu *vgpu,
+		unsigned int offset, void *p_data, unsigned int bytes)
+{
+	u32 v = *(u32 *)p_data;
+
+	if (v & PORT_PLL_ENABLE)
+		v |= PORT_PLL_LOCK;
+
+	vgpu_vreg(vgpu, offset) = v;
+
+	return 0;
+}
+
+static int bxt_phy_ctl_family_write(struct intel_vgpu *vgpu,
+		unsigned int offset, void *p_data, unsigned int bytes)
+{
+	u32 v = *(u32 *)p_data;
+	u32 data = v & COMMON_RESET_DIS ? BXT_PHY_LANE_ENABLED : 0;
+
+	vgpu_vreg(vgpu, _BXT_PHY_CTL_DDI_A) = data;
+	vgpu_vreg(vgpu, _BXT_PHY_CTL_DDI_B) = data;
+	vgpu_vreg(vgpu, _BXT_PHY_CTL_DDI_C) = data;
+
+	vgpu_vreg(vgpu, offset) = v;
+
+	return 0;
+}
+
+static int bxt_port_tx_dw3_read(struct intel_vgpu *vgpu,
+		unsigned int offset, void *p_data, unsigned int bytes)
+{
+	u32 v = vgpu_vreg(vgpu, offset);
+
+	v &= ~UNIQUE_TRANGE_EN_METHOD;
+
+	vgpu_vreg(vgpu, offset) = v;
+
+	return intel_vgpu_default_mmio_read(vgpu, offset, p_data, bytes);
+}
+
+static int bxt_pcs_dw12_grp_write(struct intel_vgpu *vgpu,
+		unsigned int offset, void *p_data, unsigned int bytes)
+{
+	u32 v = *(u32 *)p_data;
+
+	if (offset == _PORT_PCS_DW12_GRP_A || offset == _PORT_PCS_DW12_GRP_B) {
+		vgpu_vreg(vgpu, offset - 0x600) = v;
+		vgpu_vreg(vgpu, offset - 0x800) = v;
+	} else {
+		vgpu_vreg(vgpu, offset - 0x400) = v;
+		vgpu_vreg(vgpu, offset - 0x600) = v;
+	}
+
+	vgpu_vreg(vgpu, offset) = v;
+
+	return 0;
+}
+
+static int bxt_gt_disp_pwron_write(struct intel_vgpu *vgpu,
+		unsigned int offset, void *p_data, unsigned int bytes)
+{
+	u32 v = *(u32 *)p_data;
+
+	if (v & BIT(0)) {
+		vgpu_vreg_t(vgpu, BXT_PORT_CL1CM_DW0(DPIO_PHY0)) &=
+			~PHY_RESERVED;
+		vgpu_vreg_t(vgpu, BXT_PORT_CL1CM_DW0(DPIO_PHY0)) |=
+			PHY_POWER_GOOD;
+	}
+
+	if (v & BIT(1)) {
+		vgpu_vreg_t(vgpu, BXT_PORT_CL1CM_DW0(DPIO_PHY1)) &=
+			~PHY_RESERVED;
+		vgpu_vreg_t(vgpu, BXT_PORT_CL1CM_DW0(DPIO_PHY1)) |=
+			PHY_POWER_GOOD;
+	}
+
+
+	vgpu_vreg(vgpu, offset) = v;
+
+	return 0;
+}
+
 static int mmio_read_from_hw(struct intel_vgpu *vgpu,
 		unsigned int offset, void *p_data, unsigned int bytes)
 {
@@ -2670,17 +2787,17 @@ static int init_skl_mmio_info(struct intel_gvt *gvt)
 	MMIO_D(_MMIO(0x45504), D_SKL_PLUS);
 	MMIO_D(_MMIO(0x45520), D_SKL_PLUS);
 	MMIO_D(_MMIO(0x46000), D_SKL_PLUS);
-	MMIO_DH(_MMIO(0x46010), D_SKL | D_KBL, NULL, skl_lcpll_write);
-	MMIO_DH(_MMIO(0x46014), D_SKL | D_KBL, NULL, skl_lcpll_write);
-	MMIO_D(_MMIO(0x6C040), D_SKL | D_KBL);
-	MMIO_D(_MMIO(0x6C048), D_SKL | D_KBL);
-	MMIO_D(_MMIO(0x6C050), D_SKL | D_KBL);
-	MMIO_D(_MMIO(0x6C044), D_SKL | D_KBL);
-	MMIO_D(_MMIO(0x6C04C), D_SKL | D_KBL);
-	MMIO_D(_MMIO(0x6C054), D_SKL | D_KBL);
-	MMIO_D(_MMIO(0x6c058), D_SKL | D_KBL);
-	MMIO_D(_MMIO(0x6c05c), D_SKL | D_KBL);
-	MMIO_DH(_MMIO(0x6c060), D_SKL | D_KBL, dpll_status_read, NULL);
+	MMIO_DH(_MMIO(0x46010), D_SKL_PLUS, NULL, skl_lcpll_write);
+	MMIO_DH(_MMIO(0x46014), D_SKL_PLUS, NULL, skl_lcpll_write);
+	MMIO_D(_MMIO(0x6C040), D_SKL_PLUS);
+	MMIO_D(_MMIO(0x6C048), D_SKL_PLUS);
+	MMIO_D(_MMIO(0x6C050), D_SKL_PLUS);
+	MMIO_D(_MMIO(0x6C044), D_SKL_PLUS);
+	MMIO_D(_MMIO(0x6C04C), D_SKL_PLUS);
+	MMIO_D(_MMIO(0x6C054), D_SKL_PLUS);
+	MMIO_D(_MMIO(0x6c058), D_SKL_PLUS);
+	MMIO_D(_MMIO(0x6c05c), D_SKL_PLUS);
+	MMIO_DH(_MMIO(0x6c060), D_SKL_PLUS, dpll_status_read, NULL);
 
 	MMIO_DH(SKL_PS_WIN_POS(PIPE_A, 0), D_SKL_PLUS, NULL, pf_write);
 	MMIO_DH(SKL_PS_WIN_POS(PIPE_A, 1), D_SKL_PLUS, NULL, pf_write);
@@ -2805,53 +2922,57 @@ static int init_skl_mmio_info(struct intel_gvt *gvt)
 	MMIO_D(_MMIO(0x7239c), D_SKL_PLUS);
 	MMIO_D(_MMIO(0x7039c), D_SKL_PLUS);
 
-	MMIO_D(_MMIO(0x8f074), D_SKL | D_KBL);
-	MMIO_D(_MMIO(0x8f004), D_SKL | D_KBL);
-	MMIO_D(_MMIO(0x8f034), D_SKL | D_KBL);
+	MMIO_D(_MMIO(0x8f074), D_SKL_PLUS);
+	MMIO_D(_MMIO(0x8f004), D_SKL_PLUS);
+	MMIO_D(_MMIO(0x8f034), D_SKL_PLUS);
 
-	MMIO_D(_MMIO(0xb11c), D_SKL | D_KBL);
+	MMIO_D(_MMIO(0xb11c), D_SKL_PLUS);
 
-	MMIO_D(_MMIO(0x51000), D_SKL | D_KBL);
+	MMIO_D(_MMIO(0x51000), D_SKL_PLUS);
 	MMIO_D(_MMIO(0x6c00c), D_SKL_PLUS);
 
-	MMIO_F(_MMIO(0xc800), 0x7f8, F_CMD_ACCESS, 0, 0, D_SKL | D_KBL, NULL, NULL);
-	MMIO_F(_MMIO(0xb020), 0x80, F_CMD_ACCESS, 0, 0, D_SKL | D_KBL, NULL, NULL);
+	MMIO_F(_MMIO(0xc800), 0x7f8, F_CMD_ACCESS, 0, 0, D_SKL_PLUS,
+		NULL, NULL);
+	MMIO_F(_MMIO(0xb020), 0x80, F_CMD_ACCESS, 0, 0, D_SKL_PLUS,
+		NULL, NULL);
 
 	MMIO_D(RPM_CONFIG0, D_SKL_PLUS);
 	MMIO_D(_MMIO(0xd08), D_SKL_PLUS);
 	MMIO_D(RC6_LOCATION, D_SKL_PLUS);
 	MMIO_DFH(_MMIO(0x20e0), D_SKL_PLUS, F_MODE_MASK, NULL, NULL);
-	MMIO_DFH(_MMIO(0x20ec), D_SKL_PLUS, F_MODE_MASK | F_CMD_ACCESS, NULL, NULL);
+	MMIO_DFH(_MMIO(0x20ec), D_SKL_PLUS, F_MODE_MASK | F_CMD_ACCESS,
+		NULL, NULL);
 
 	/* TRTT */
-	MMIO_DFH(_MMIO(0x4de0), D_SKL | D_KBL, F_CMD_ACCESS, NULL, NULL);
-	MMIO_DFH(_MMIO(0x4de4), D_SKL | D_KBL, F_CMD_ACCESS, NULL, NULL);
-	MMIO_DFH(_MMIO(0x4de8), D_SKL | D_KBL, F_CMD_ACCESS, NULL, NULL);
-	MMIO_DFH(_MMIO(0x4dec), D_SKL | D_KBL, F_CMD_ACCESS, NULL, NULL);
-	MMIO_DFH(_MMIO(0x4df0), D_SKL | D_KBL, F_CMD_ACCESS, NULL, NULL);
-	MMIO_DFH(_MMIO(0x4df4), D_SKL | D_KBL, F_CMD_ACCESS, NULL, gen9_trtte_write);
-	MMIO_DH(_MMIO(0x4dfc), D_SKL | D_KBL, NULL, gen9_trtt_chicken_write);
+	MMIO_DFH(_MMIO(0x4de0), D_SKL_PLUS, F_CMD_ACCESS, NULL, NULL);
+	MMIO_DFH(_MMIO(0x4de4), D_SKL_PLUS, F_CMD_ACCESS, NULL, NULL);
+	MMIO_DFH(_MMIO(0x4de8), D_SKL_PLUS, F_CMD_ACCESS, NULL, NULL);
+	MMIO_DFH(_MMIO(0x4dec), D_SKL_PLUS, F_CMD_ACCESS, NULL, NULL);
+	MMIO_DFH(_MMIO(0x4df0), D_SKL_PLUS, F_CMD_ACCESS, NULL, NULL);
+	MMIO_DFH(_MMIO(0x4df4), D_SKL_PLUS, F_CMD_ACCESS,
+		NULL, gen9_trtte_write);
+	MMIO_DH(_MMIO(0x4dfc), D_SKL_PLUS, NULL, gen9_trtt_chicken_write);
 
-	MMIO_D(_MMIO(0x45008), D_SKL | D_KBL);
+	MMIO_D(_MMIO(0x45008), D_SKL_PLUS);
 
-	MMIO_D(_MMIO(0x46430), D_SKL | D_KBL);
+	MMIO_D(_MMIO(0x46430), D_SKL_PLUS);
 
-	MMIO_D(_MMIO(0x46520), D_SKL | D_KBL);
+	MMIO_D(_MMIO(0x46520), D_SKL_PLUS);
 
-	MMIO_D(_MMIO(0xc403c), D_SKL | D_KBL);
+	MMIO_D(_MMIO(0xc403c), D_SKL_PLUS);
 	MMIO_D(_MMIO(0xb004), D_SKL_PLUS);
 	MMIO_DH(DMA_CTRL, D_SKL_PLUS, NULL, dma_ctrl_write);
 
 	MMIO_D(_MMIO(0x65900), D_SKL_PLUS);
-	MMIO_D(_MMIO(0x1082c0), D_SKL | D_KBL);
-	MMIO_D(_MMIO(0x4068), D_SKL | D_KBL);
-	MMIO_D(_MMIO(0x67054), D_SKL | D_KBL);
-	MMIO_D(_MMIO(0x6e560), D_SKL | D_KBL);
-	MMIO_D(_MMIO(0x6e554), D_SKL | D_KBL);
-	MMIO_D(_MMIO(0x2b20), D_SKL | D_KBL);
-	MMIO_D(_MMIO(0x65f00), D_SKL | D_KBL);
-	MMIO_D(_MMIO(0x65f08), D_SKL | D_KBL);
-	MMIO_D(_MMIO(0x320f0), D_SKL | D_KBL);
+	MMIO_D(_MMIO(0x1082c0), D_SKL_PLUS);
+	MMIO_D(_MMIO(0x4068), D_SKL_PLUS);
+	MMIO_D(_MMIO(0x67054), D_SKL_PLUS);
+	MMIO_D(_MMIO(0x6e560), D_SKL_PLUS);
+	MMIO_D(_MMIO(0x6e554), D_SKL_PLUS);
+	MMIO_D(_MMIO(0x2b20), D_SKL_PLUS);
+	MMIO_D(_MMIO(0x65f00), D_SKL_PLUS);
+	MMIO_D(_MMIO(0x65f08), D_SKL_PLUS);
+	MMIO_D(_MMIO(0x320f0), D_SKL_PLUS);
 
 	MMIO_D(_MMIO(0x70034), D_SKL_PLUS);
 	MMIO_D(_MMIO(0x71034), D_SKL_PLUS);
@@ -2869,11 +2990,185 @@ static int init_skl_mmio_info(struct intel_gvt *gvt)
 
 	MMIO_D(_MMIO(0x44500), D_SKL_PLUS);
 	MMIO_DFH(GEN9_CSFE_CHICKEN1_RCS, D_SKL_PLUS, F_CMD_ACCESS, NULL, NULL);
-	MMIO_DFH(GEN8_HDC_CHICKEN1, D_SKL | D_KBL, F_MODE_MASK | F_CMD_ACCESS,
+	MMIO_DFH(GEN8_HDC_CHICKEN1, D_SKL_PLUS, F_MODE_MASK | F_CMD_ACCESS,
 		NULL, NULL);
 
 	MMIO_D(_MMIO(0x4ab8), D_KBL);
-	MMIO_D(_MMIO(0x2248), D_SKL_PLUS | D_KBL);
+	MMIO_D(_MMIO(0x2248), D_KBL | D_SKL);
+
+	return 0;
+}
+
+static int init_bxt_mmio_info(struct intel_gvt *gvt)
+{
+	struct drm_i915_private *dev_priv = gvt->dev_priv;
+	int ret;
+
+	MMIO_F(_MMIO(0x80000), 0x3000, 0, 0, 0, D_BXT, NULL, NULL);
+
+	MMIO_D(GEN7_SAMPLER_INSTDONE, D_BXT);
+	MMIO_D(GEN7_ROW_INSTDONE, D_BXT);
+	MMIO_D(GEN8_FAULT_TLB_DATA0, D_BXT);
+	MMIO_D(GEN8_FAULT_TLB_DATA1, D_BXT);
+	MMIO_D(ERROR_GEN6, D_BXT);
+	MMIO_D(DONE_REG, D_BXT);
+	MMIO_D(EIR, D_BXT);
+	MMIO_D(PGTBL_ER, D_BXT);
+	MMIO_D(_MMIO(0x4194), D_BXT);
+	MMIO_D(_MMIO(0x4294), D_BXT);
+	MMIO_D(_MMIO(0x4494), D_BXT);
+
+	MMIO_RING_D(RING_PSMI_CTL, D_BXT);
+	MMIO_RING_D(RING_DMA_FADD, D_BXT);
+	MMIO_RING_D(RING_DMA_FADD_UDW, D_BXT);
+	MMIO_RING_D(RING_IPEHR, D_BXT);
+	MMIO_RING_D(RING_INSTPS, D_BXT);
+	MMIO_RING_D(RING_BBADDR_UDW, D_BXT);
+	MMIO_RING_D(RING_BBSTATE, D_BXT);
+	MMIO_RING_D(RING_IPEIR, D_BXT);
+
+	MMIO_F(SOFT_SCRATCH(0), 16 * 4, 0, 0, 0, D_BXT, NULL, NULL);
+
+	MMIO_DH(BXT_P_CR_GT_DISP_PWRON, D_BXT, NULL, bxt_gt_disp_pwron_write);
+	MMIO_D(BXT_RP_STATE_CAP, D_BXT);
+	MMIO_DH(BXT_PHY_CTL_FAMILY(DPIO_PHY0), D_BXT,
+		NULL, bxt_phy_ctl_family_write);
+	MMIO_DH(BXT_PHY_CTL_FAMILY(DPIO_PHY1), D_BXT,
+		NULL, bxt_phy_ctl_family_write);
+	MMIO_D(BXT_PHY_CTL(PORT_A), D_BXT);
+	MMIO_D(BXT_PHY_CTL(PORT_B), D_BXT);
+	MMIO_D(BXT_PHY_CTL(PORT_C), D_BXT);
+	MMIO_DH(BXT_PORT_PLL_ENABLE(PORT_A), D_BXT,
+		NULL, bxt_port_pll_enable_write);
+	MMIO_DH(BXT_PORT_PLL_ENABLE(PORT_B), D_BXT,
+		NULL, bxt_port_pll_enable_write);
+	MMIO_DH(BXT_PORT_PLL_ENABLE(PORT_C), D_BXT, NULL,
+		bxt_port_pll_enable_write);
+
+	MMIO_D(BXT_PORT_CL1CM_DW0(DPIO_PHY0), D_BXT);
+	MMIO_D(BXT_PORT_CL1CM_DW9(DPIO_PHY0), D_BXT);
+	MMIO_D(BXT_PORT_CL1CM_DW10(DPIO_PHY0), D_BXT);
+	MMIO_D(BXT_PORT_CL1CM_DW28(DPIO_PHY0), D_BXT);
+	MMIO_D(BXT_PORT_CL1CM_DW30(DPIO_PHY0), D_BXT);
+	MMIO_D(BXT_PORT_CL2CM_DW6(DPIO_PHY0), D_BXT);
+	MMIO_D(BXT_PORT_REF_DW3(DPIO_PHY0), D_BXT);
+	MMIO_D(BXT_PORT_REF_DW6(DPIO_PHY0), D_BXT);
+	MMIO_D(BXT_PORT_REF_DW8(DPIO_PHY0), D_BXT);
+
+	MMIO_D(BXT_PORT_CL1CM_DW0(DPIO_PHY1), D_BXT);
+	MMIO_D(BXT_PORT_CL1CM_DW9(DPIO_PHY1), D_BXT);
+	MMIO_D(BXT_PORT_CL1CM_DW10(DPIO_PHY1), D_BXT);
+	MMIO_D(BXT_PORT_CL1CM_DW28(DPIO_PHY1), D_BXT);
+	MMIO_D(BXT_PORT_CL1CM_DW30(DPIO_PHY1), D_BXT);
+	MMIO_D(BXT_PORT_CL2CM_DW6(DPIO_PHY1), D_BXT);
+	MMIO_D(BXT_PORT_REF_DW3(DPIO_PHY1), D_BXT);
+	MMIO_D(BXT_PORT_REF_DW6(DPIO_PHY1), D_BXT);
+	MMIO_D(BXT_PORT_REF_DW8(DPIO_PHY1), D_BXT);
+
+	MMIO_D(BXT_PORT_PLL_EBB_0(DPIO_PHY0, DPIO_CH0), D_BXT);
+	MMIO_D(BXT_PORT_PLL_EBB_4(DPIO_PHY0, DPIO_CH0), D_BXT);
+	MMIO_D(BXT_PORT_PCS_DW10_LN01(DPIO_PHY0, DPIO_CH0), D_BXT);
+	MMIO_D(BXT_PORT_PCS_DW10_GRP(DPIO_PHY0, DPIO_CH0), D_BXT);
+	MMIO_D(BXT_PORT_PCS_DW12_LN01(DPIO_PHY0, DPIO_CH0), D_BXT);
+	MMIO_D(BXT_PORT_PCS_DW12_LN23(DPIO_PHY0, DPIO_CH0), D_BXT);
+	MMIO_DH(BXT_PORT_PCS_DW12_GRP(DPIO_PHY0, DPIO_CH0), D_BXT,
+		NULL, bxt_pcs_dw12_grp_write);
+	MMIO_D(BXT_PORT_TX_DW2_LN0(DPIO_PHY0, DPIO_CH0), D_BXT);
+	MMIO_D(BXT_PORT_TX_DW2_GRP(DPIO_PHY0, DPIO_CH0), D_BXT);
+	MMIO_DH(BXT_PORT_TX_DW3_LN0(DPIO_PHY0, DPIO_CH0), D_BXT,
+		bxt_port_tx_dw3_read, NULL);
+	MMIO_D(BXT_PORT_TX_DW3_GRP(DPIO_PHY0, DPIO_CH0), D_BXT);
+	MMIO_D(BXT_PORT_TX_DW4_LN0(DPIO_PHY0, DPIO_CH0), D_BXT);
+	MMIO_D(BXT_PORT_TX_DW4_GRP(DPIO_PHY0, DPIO_CH0), D_BXT);
+	MMIO_D(BXT_PORT_TX_DW14_LN(DPIO_PHY0, DPIO_CH0, 0), D_BXT);
+	MMIO_D(BXT_PORT_TX_DW14_LN(DPIO_PHY0, DPIO_CH0, 1), D_BXT);
+	MMIO_D(BXT_PORT_TX_DW14_LN(DPIO_PHY0, DPIO_CH0, 2), D_BXT);
+	MMIO_D(BXT_PORT_TX_DW14_LN(DPIO_PHY0, DPIO_CH0, 3), D_BXT);
+	MMIO_D(BXT_PORT_PLL(DPIO_PHY0, DPIO_CH0, 0), D_BXT);
+	MMIO_D(BXT_PORT_PLL(DPIO_PHY0, DPIO_CH0, 1), D_BXT);
+	MMIO_D(BXT_PORT_PLL(DPIO_PHY0, DPIO_CH0, 2), D_BXT);
+	MMIO_D(BXT_PORT_PLL(DPIO_PHY0, DPIO_CH0, 3), D_BXT);
+	MMIO_D(BXT_PORT_PLL(DPIO_PHY0, DPIO_CH0, 6), D_BXT);
+	MMIO_D(BXT_PORT_PLL(DPIO_PHY0, DPIO_CH0, 8), D_BXT);
+	MMIO_D(BXT_PORT_PLL(DPIO_PHY0, DPIO_CH0, 9), D_BXT);
+	MMIO_D(BXT_PORT_PLL(DPIO_PHY0, DPIO_CH0, 10), D_BXT);
+
+	MMIO_D(BXT_PORT_PLL_EBB_0(DPIO_PHY0, DPIO_CH1), D_BXT);
+	MMIO_D(BXT_PORT_PLL_EBB_4(DPIO_PHY0, DPIO_CH1), D_BXT);
+	MMIO_D(BXT_PORT_PCS_DW10_LN01(DPIO_PHY0, DPIO_CH1), D_BXT);
+	MMIO_D(BXT_PORT_PCS_DW10_GRP(DPIO_PHY0, DPIO_CH1), D_BXT);
+	MMIO_D(BXT_PORT_PCS_DW12_LN01(DPIO_PHY0, DPIO_CH1), D_BXT);
+	MMIO_D(BXT_PORT_PCS_DW12_LN23(DPIO_PHY0, DPIO_CH1), D_BXT);
+	MMIO_DH(BXT_PORT_PCS_DW12_GRP(DPIO_PHY0, DPIO_CH1), D_BXT,
+		NULL, bxt_pcs_dw12_grp_write);
+	MMIO_D(BXT_PORT_TX_DW2_LN0(DPIO_PHY0, DPIO_CH1), D_BXT);
+	MMIO_D(BXT_PORT_TX_DW2_GRP(DPIO_PHY0, DPIO_CH1), D_BXT);
+	MMIO_DH(BXT_PORT_TX_DW3_LN0(DPIO_PHY0, DPIO_CH1), D_BXT,
+		bxt_port_tx_dw3_read, NULL);
+	MMIO_D(BXT_PORT_TX_DW3_GRP(DPIO_PHY0, DPIO_CH1), D_BXT);
+	MMIO_D(BXT_PORT_TX_DW4_LN0(DPIO_PHY0, DPIO_CH1), D_BXT);
+	MMIO_D(BXT_PORT_TX_DW4_GRP(DPIO_PHY0, DPIO_CH1), D_BXT);
+	MMIO_D(BXT_PORT_TX_DW14_LN(DPIO_PHY0, DPIO_CH1, 0), D_BXT);
+	MMIO_D(BXT_PORT_TX_DW14_LN(DPIO_PHY0, DPIO_CH1, 1), D_BXT);
+	MMIO_D(BXT_PORT_TX_DW14_LN(DPIO_PHY0, DPIO_CH1, 2), D_BXT);
+	MMIO_D(BXT_PORT_TX_DW14_LN(DPIO_PHY0, DPIO_CH1, 3), D_BXT);
+	MMIO_D(BXT_PORT_PLL(DPIO_PHY0, DPIO_CH1, 0), D_BXT);
+	MMIO_D(BXT_PORT_PLL(DPIO_PHY0, DPIO_CH1, 1), D_BXT);
+	MMIO_D(BXT_PORT_PLL(DPIO_PHY0, DPIO_CH1, 2), D_BXT);
+	MMIO_D(BXT_PORT_PLL(DPIO_PHY0, DPIO_CH1, 3), D_BXT);
+	MMIO_D(BXT_PORT_PLL(DPIO_PHY0, DPIO_CH1, 6), D_BXT);
+	MMIO_D(BXT_PORT_PLL(DPIO_PHY0, DPIO_CH1, 8), D_BXT);
+	MMIO_D(BXT_PORT_PLL(DPIO_PHY0, DPIO_CH1, 9), D_BXT);
+	MMIO_D(BXT_PORT_PLL(DPIO_PHY0, DPIO_CH1, 10), D_BXT);
+
+	MMIO_D(BXT_PORT_PLL_EBB_0(DPIO_PHY1, DPIO_CH0), D_BXT);
+	MMIO_D(BXT_PORT_PLL_EBB_4(DPIO_PHY1, DPIO_CH0), D_BXT);
+	MMIO_D(BXT_PORT_PCS_DW10_LN01(DPIO_PHY1, DPIO_CH0), D_BXT);
+	MMIO_D(BXT_PORT_PCS_DW10_GRP(DPIO_PHY1, DPIO_CH0), D_BXT);
+	MMIO_D(BXT_PORT_PCS_DW12_LN01(DPIO_PHY1, DPIO_CH0), D_BXT);
+	MMIO_D(BXT_PORT_PCS_DW12_LN23(DPIO_PHY1, DPIO_CH0), D_BXT);
+	MMIO_DH(BXT_PORT_PCS_DW12_GRP(DPIO_PHY1, DPIO_CH0), D_BXT,
+		NULL, bxt_pcs_dw12_grp_write);
+	MMIO_D(BXT_PORT_TX_DW2_LN0(DPIO_PHY1, DPIO_CH0), D_BXT);
+	MMIO_D(BXT_PORT_TX_DW2_GRP(DPIO_PHY1, DPIO_CH0), D_BXT);
+	MMIO_DH(BXT_PORT_TX_DW3_LN0(DPIO_PHY1, DPIO_CH0), D_BXT,
+		bxt_port_tx_dw3_read, NULL);
+	MMIO_D(BXT_PORT_TX_DW3_GRP(DPIO_PHY1, DPIO_CH0), D_BXT);
+	MMIO_D(BXT_PORT_TX_DW4_LN0(DPIO_PHY1, DPIO_CH0), D_BXT);
+	MMIO_D(BXT_PORT_TX_DW4_GRP(DPIO_PHY1, DPIO_CH0), D_BXT);
+	MMIO_D(BXT_PORT_TX_DW14_LN(DPIO_PHY1, DPIO_CH0, 0), D_BXT);
+	MMIO_D(BXT_PORT_TX_DW14_LN(DPIO_PHY1, DPIO_CH0, 1), D_BXT);
+	MMIO_D(BXT_PORT_TX_DW14_LN(DPIO_PHY1, DPIO_CH0, 2), D_BXT);
+	MMIO_D(BXT_PORT_TX_DW14_LN(DPIO_PHY1, DPIO_CH0, 3), D_BXT);
+	MMIO_D(BXT_PORT_PLL(DPIO_PHY1, DPIO_CH0, 0), D_BXT);
+	MMIO_D(BXT_PORT_PLL(DPIO_PHY1, DPIO_CH0, 1), D_BXT);
+	MMIO_D(BXT_PORT_PLL(DPIO_PHY1, DPIO_CH0, 2), D_BXT);
+	MMIO_D(BXT_PORT_PLL(DPIO_PHY1, DPIO_CH0, 3), D_BXT);
+	MMIO_D(BXT_PORT_PLL(DPIO_PHY1, DPIO_CH0, 6), D_BXT);
+	MMIO_D(BXT_PORT_PLL(DPIO_PHY1, DPIO_CH0, 8), D_BXT);
+	MMIO_D(BXT_PORT_PLL(DPIO_PHY1, DPIO_CH0, 9), D_BXT);
+	MMIO_D(BXT_PORT_PLL(DPIO_PHY1, DPIO_CH0, 10), D_BXT);
+
+	MMIO_D(BXT_DE_PLL_CTL, D_BXT);
+	MMIO_DH(BXT_DE_PLL_ENABLE, D_BXT, NULL, bxt_de_pll_enable_write);
+	MMIO_D(BXT_DSI_PLL_CTL, D_BXT);
+	MMIO_D(BXT_DSI_PLL_ENABLE, D_BXT);
+
+	MMIO_D(GEN9_CLKGATE_DIS_0, D_BXT);
+
+	MMIO_D(HSW_TVIDEO_DIP_GCP(TRANSCODER_A), D_BXT);
+	MMIO_D(HSW_TVIDEO_DIP_GCP(TRANSCODER_B), D_BXT);
+	MMIO_D(HSW_TVIDEO_DIP_GCP(TRANSCODER_C), D_BXT);
+
+	MMIO_D(RC6_CTX_BASE, D_BXT);
+
+	MMIO_D(GEN8_PUSHBUS_CONTROL, D_BXT);
+	MMIO_D(GEN8_PUSHBUS_ENABLE, D_BXT);
+	MMIO_D(GEN8_PUSHBUS_SHIFT, D_BXT);
+	MMIO_D(GEN6_GFXPAUSE, D_BXT);
+	MMIO_D(GEN8_L3SQCREG1, D_BXT);
+
+	MMIO_DFH(GEN9_CTX_PREEMPT_REG, D_BXT, F_CMD_ACCESS, NULL, NULL);
 
 	return 0;
 }
@@ -2965,6 +3260,16 @@ int intel_gvt_setup_mmio_info(struct intel_gvt *gvt)
 		ret = init_skl_mmio_info(gvt);
 		if (ret)
 			goto err;
+	} else if (IS_BROXTON(dev_priv)) {
+		ret = init_broadwell_mmio_info(gvt);
+		if (ret)
+			goto err;
+		ret = init_skl_mmio_info(gvt);
+		if (ret)
+			goto err;
+		ret = init_bxt_mmio_info(gvt);
+		if (ret)
+			goto err;
 	}
 
 	gvt->mmio.mmio_block = mmio_blocks;
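The long runs of MMIO_D()/MMIO_DH() calls above populate GVT-g's MMIO tracking table for the Broxton PHY, PLL and clock registers: MMIO_D registers a register for default emulation, while MMIO_DH attaches custom read/write hooks (e.g. bxt_pcs_dw12_grp_write). Below is a minimal, self-contained sketch of that table-driven pattern; the struct layout, offsets and hook behaviour are invented for illustration, not the actual GVT-g definitions.

#include <stddef.h>
#include <stdio.h>

#define D_BXT (1u << 3)                      /* device bitmask, as in mmio.h */

typedef int (*mmio_hook)(unsigned int offset, unsigned int *val);

struct mmio_entry {                          /* hypothetical layout */
	unsigned int offset;
	unsigned int device;
	mmio_hook write;                     /* NULL => default emulation */
};

/* A custom hook can emulate a hardware side effect, e.g. reporting the
 * PLL as locked once the guest sets the enable bit (offsets invented). */
static int bxt_pll_enable_write(unsigned int offset, unsigned int *val)
{
	(void)offset;
	if (*val & (1u << 31))
		*val |= (1u << 30);
	return 0;
}

static const struct mmio_entry table[] = {
	{ 0x46010, D_BXT, NULL },                 /* like MMIO_D  */
	{ 0x46014, D_BXT, bxt_pll_enable_write }, /* like MMIO_DH */
};

int main(void)
{
	unsigned int val = 1u << 31;
	size_t i;

	for (i = 0; i < sizeof(table) / sizeof(table[0]); i++)
		if (table[i].offset == 0x46014 && table[i].write)
			table[i].write(table[i].offset, &val);
	printf("emulated PLL reg: 0x%08x\n", val);
	return 0;
}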

+ 7 - 10
drivers/gpu/drm/i915/gvt/interrupt.c

@@ -350,7 +350,8 @@ static void update_upstream_irq(struct intel_vgpu *vgpu,
 			clear_bits |= (1 << bit);
 	}
 
-	WARN_ON(!up_irq_info);
+	if (WARN_ON(!up_irq_info))
+		return;
 
 	if (up_irq_info->group == INTEL_GVT_IRQ_INFO_MASTER) {
 		u32 isr = i915_mmio_reg_offset(up_irq_info->reg_base);
@@ -580,7 +581,9 @@ static void gen8_init_irq(
 
 		SET_BIT_INFO(irq, 4, PRIMARY_C_FLIP_DONE, INTEL_GVT_IRQ_INFO_DE_PIPE_C);
 		SET_BIT_INFO(irq, 5, SPRITE_C_FLIP_DONE, INTEL_GVT_IRQ_INFO_DE_PIPE_C);
-	} else if (IS_SKYLAKE(gvt->dev_priv) || IS_KABYLAKE(gvt->dev_priv)) {
+	} else if (IS_SKYLAKE(gvt->dev_priv)
+			|| IS_KABYLAKE(gvt->dev_priv)
+			|| IS_BROXTON(gvt->dev_priv)) {
 		SET_BIT_INFO(irq, 25, AUX_CHANNEL_B, INTEL_GVT_IRQ_INFO_DE_PORT);
 		SET_BIT_INFO(irq, 26, AUX_CHANNEL_C, INTEL_GVT_IRQ_INFO_DE_PORT);
 		SET_BIT_INFO(irq, 27, AUX_CHANNEL_D, INTEL_GVT_IRQ_INFO_DE_PORT);
@@ -690,14 +693,8 @@ int intel_gvt_init_irq(struct intel_gvt *gvt)
 
 	gvt_dbg_core("init irq framework\n");
 
-	if (IS_BROADWELL(gvt->dev_priv) || IS_SKYLAKE(gvt->dev_priv)
-		|| IS_KABYLAKE(gvt->dev_priv)) {
-		irq->ops = &gen8_irq_ops;
-		irq->irq_map = gen8_irq_map;
-	} else {
-		WARN_ON(1);
-		return -ENODEV;
-	}
+	irq->ops = &gen8_irq_ops;
+	irq->irq_map = gen8_irq_map;
 
 	/* common event initialization */
 	init_events(irq);
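The first hunk in this file is a defensive fix: a bare WARN_ON(!up_irq_info) logs the problem but falls through, and the very next statement would dereference the NULL pointer. Wrapping it as if (WARN_ON(...)) return; keeps the diagnostic and bails out. A trivial user-space stand-in illustrates the difference:

#include <stdio.h>

/* stand-in for the kernel's WARN_ON(): logs the condition and
 * returns its truth value so it can drive a branch */
#define WARN_ON(cond) \
	((cond) ? (fprintf(stderr, "WARN_ON(%s)\n", #cond), 1) : 0)

struct irq_info { int group; };

static void update_irq(struct irq_info *info)
{
	if (WARN_ON(!info))
		return;          /* bail out instead of dereferencing NULL */
	printf("group=%d\n", info->group);
}

int main(void)
{
	struct irq_info info = { .group = 1 };

	update_irq(&info);       /* normal path */
	update_irq(NULL);        /* warns, then returns safely */
	return 0;
}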

+ 6 - 6
drivers/gpu/drm/i915/gvt/mmio.c

@@ -67,7 +67,7 @@ static void failsafe_emulate_mmio_rw(struct intel_vgpu *vgpu, uint64_t pa,
 		return;
 
 	gvt = vgpu->gvt;
-	mutex_lock(&gvt->lock);
+	mutex_lock(&vgpu->vgpu_lock);
 	offset = intel_vgpu_gpa_to_mmio_offset(vgpu, pa);
 	if (reg_is_mmio(gvt, offset)) {
 		if (read)
@@ -85,7 +85,7 @@ static void failsafe_emulate_mmio_rw(struct intel_vgpu *vgpu, uint64_t pa,
 			memcpy(pt, p_data, bytes);
 
 	}
-	mutex_unlock(&gvt->lock);
+	mutex_unlock(&vgpu->vgpu_lock);
 }
 
 /**
@@ -109,7 +109,7 @@ int intel_vgpu_emulate_mmio_read(struct intel_vgpu *vgpu, uint64_t pa,
 		failsafe_emulate_mmio_rw(vgpu, pa, p_data, bytes, true);
 		return 0;
 	}
-	mutex_lock(&gvt->lock);
+	mutex_lock(&vgpu->vgpu_lock);
 
 	offset = intel_vgpu_gpa_to_mmio_offset(vgpu, pa);
 
@@ -156,7 +156,7 @@ err:
 	gvt_vgpu_err("fail to emulate MMIO read %08x len %d\n",
 			offset, bytes);
 out:
-	mutex_unlock(&gvt->lock);
+	mutex_unlock(&vgpu->vgpu_lock);
 	return ret;
 }
 
@@ -182,7 +182,7 @@ int intel_vgpu_emulate_mmio_write(struct intel_vgpu *vgpu, uint64_t pa,
 		return 0;
 	}
 
-	mutex_lock(&gvt->lock);
+	mutex_lock(&vgpu->vgpu_lock);
 
 	offset = intel_vgpu_gpa_to_mmio_offset(vgpu, pa);
 
@@ -220,7 +220,7 @@ err:
 	gvt_vgpu_err("fail to emulate MMIO write %08x len %d\n", offset,
 		     bytes);
 out:
-	mutex_unlock(&gvt->lock);
+	mutex_unlock(&vgpu->vgpu_lock);
 	return ret;
 }
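Every hunk in this file swaps the device-global gvt->lock for the instance-local vgpu->vgpu_lock, so MMIO emulation for one vGPU no longer serializes against every other vGPU on the host. A minimal sketch of the pattern, with pthread mutexes standing in for kernel mutexes and an invented struct:

#include <pthread.h>
#include <stdio.h>

struct vgpu {                        /* illustrative, not the GVT-g struct */
	int id;
	pthread_mutex_t vgpu_lock;   /* protects this instance's state */
	unsigned int vreg;
};

static void emulate_mmio_write(struct vgpu *vgpu, unsigned int val)
{
	pthread_mutex_lock(&vgpu->vgpu_lock);   /* was: global gvt->lock */
	vgpu->vreg = val;                        /* per-instance state only */
	pthread_mutex_unlock(&vgpu->vgpu_lock);
}

int main(void)
{
	struct vgpu a = { 1, PTHREAD_MUTEX_INITIALIZER, 0 };
	struct vgpu b = { 2, PTHREAD_MUTEX_INITIALIZER, 0 };

	/* writes to different vGPUs can now proceed concurrently */
	emulate_mmio_write(&a, 0x1);
	emulate_mmio_write(&b, 0x2);
	printf("vgpu%d=0x%x vgpu%d=0x%x\n", a.id, a.vreg, b.id, b.vreg);
	return 0;
}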
 

+ 6 - 5
drivers/gpu/drm/i915/gvt/mmio.h

@@ -42,15 +42,16 @@ struct intel_vgpu;
 #define D_BDW   (1 << 0)
 #define D_SKL	(1 << 1)
 #define D_KBL	(1 << 2)
+#define D_BXT	(1 << 3)
 
-#define D_GEN9PLUS	(D_SKL | D_KBL)
-#define D_GEN8PLUS	(D_BDW | D_SKL | D_KBL)
+#define D_GEN9PLUS	(D_SKL | D_KBL | D_BXT)
+#define D_GEN8PLUS	(D_BDW | D_SKL | D_KBL | D_BXT)
 
-#define D_SKL_PLUS	(D_SKL | D_KBL)
-#define D_BDW_PLUS	(D_BDW | D_SKL | D_KBL)
+#define D_SKL_PLUS	(D_SKL | D_KBL | D_BXT)
+#define D_BDW_PLUS	(D_BDW | D_SKL | D_KBL | D_BXT)
 
 #define D_PRE_SKL	(D_BDW)
-#define D_ALL		(D_BDW | D_SKL | D_KBL)
+#define D_ALL		(D_BDW | D_SKL | D_KBL | D_BXT)
 
 typedef int (*gvt_mmio_func)(struct intel_vgpu *, unsigned int, void *,
 			     unsigned int);
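The hunk above simply ORs the new D_BXT bit into every aggregate mask, so each existing MMIO entry tagged D_SKL_PLUS, D_BDW_PLUS or D_ALL automatically applies to Broxton as well. The runtime check is a plain bitwise AND, as this sketch shows (the entry_applies() helper is invented for illustration):

#include <stdbool.h>
#include <stdio.h>

#define D_BDW (1u << 0)
#define D_SKL (1u << 1)
#define D_KBL (1u << 2)
#define D_BXT (1u << 3)

#define D_SKL_PLUS (D_SKL | D_KBL | D_BXT)   /* after this patch */

/* hypothetical: does an entry tagged 'mask' apply to 'platform'? */
static bool entry_applies(unsigned int mask, unsigned int platform)
{
	return (mask & platform) != 0;
}

int main(void)
{
	printf("D_SKL_PLUS entry on BXT: %s\n",
	       entry_applies(D_SKL_PLUS, D_BXT) ? "yes" : "no");  /* yes */
	printf("D_BDW-only entry on BXT: %s\n",
	       entry_applies(D_BDW, D_BXT) ? "yes" : "no");        /* no */
	return 0;
}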

+ 11 - 5
drivers/gpu/drm/i915/gvt/mmio_context.c

@@ -364,7 +364,8 @@ static void handle_tlb_pending_event(struct intel_vgpu *vgpu, int ring_id)
 	 */
 	fw = intel_uncore_forcewake_for_reg(dev_priv, reg,
 					    FW_REG_READ | FW_REG_WRITE);
-	if (ring_id == RCS && (IS_SKYLAKE(dev_priv) || IS_KABYLAKE(dev_priv)))
+	if (ring_id == RCS && (IS_SKYLAKE(dev_priv) ||
+			IS_KABYLAKE(dev_priv) || IS_BROXTON(dev_priv)))
 		fw |= FORCEWAKE_RENDER;
 
 	intel_uncore_forcewake_get(dev_priv, fw);
@@ -401,7 +402,7 @@ static void switch_mocs(struct intel_vgpu *pre, struct intel_vgpu *next,
 	if (WARN_ON(ring_id >= ARRAY_SIZE(regs)))
 		return;
 
-	if (IS_KABYLAKE(dev_priv) && ring_id == RCS)
+	if ((IS_KABYLAKE(dev_priv) || IS_BROXTON(dev_priv)) && ring_id == RCS)
 		return;
 
 	if (!pre && !gen9_render_mocs.initialized)
@@ -467,7 +468,9 @@ static void switch_mmio(struct intel_vgpu *pre,
 	u32 old_v, new_v;
 
 	dev_priv = pre ? pre->gvt->dev_priv : next->gvt->dev_priv;
-	if (IS_SKYLAKE(dev_priv) || IS_KABYLAKE(dev_priv))
+	if (IS_SKYLAKE(dev_priv)
+		|| IS_KABYLAKE(dev_priv)
+		|| IS_BROXTON(dev_priv))
 		switch_mocs(pre, next, ring_id);
 
 	for (mmio = dev_priv->gvt->engine_mmio_list.mmio;
@@ -479,7 +482,8 @@ static void switch_mmio(struct intel_vgpu *pre,
 		 * state image on kabylake; it is initialized by an LRI command
 		 * and saved or restored together with the context.
 		 */
-		if (IS_KABYLAKE(dev_priv) && mmio->in_context)
+		if ((IS_KABYLAKE(dev_priv) || IS_BROXTON(dev_priv))
+			&& mmio->in_context)
 			continue;
 
 		// save
@@ -574,7 +578,9 @@ void intel_gvt_init_engine_mmio_context(struct intel_gvt *gvt)
 {
 	struct engine_mmio *mmio;
 
-	if (IS_SKYLAKE(gvt->dev_priv) || IS_KABYLAKE(gvt->dev_priv))
+	if (IS_SKYLAKE(gvt->dev_priv) ||
+		IS_KABYLAKE(gvt->dev_priv) ||
+		IS_BROXTON(gvt->dev_priv))
 		gvt->engine_mmio_list.mmio = gen9_engine_mmio_list;
 	else
 		gvt->engine_mmio_list.mmio = gen8_engine_mmio_list;
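All four hunks here extend the same IS_SKYLAKE(...) || IS_KABYLAKE(...) predicate with IS_BROXTON(...). When a condition like this repeats across a file, it can be worth folding into one helper so the next platform only touches a single place; a hypothetical sketch of that consolidation (not how i915 actually spells it):

#include <stdbool.h>
#include <stdio.h>

enum platform { PLAT_BDW, PLAT_SKL, PLAT_KBL, PLAT_BXT };

/* hypothetical: IS_SKYLAKE || IS_KABYLAKE || IS_BROXTON in one place */
static bool uses_gen9_engine_mmio(enum platform p)
{
	return p == PLAT_SKL || p == PLAT_KBL || p == PLAT_BXT;
}

int main(void)
{
	printf("BXT uses gen9 list: %s\n",
	       uses_gen9_engine_mmio(PLAT_BXT) ? "yes" : "no");
	printf("BDW uses gen9 list: %s\n",
	       uses_gen9_engine_mmio(PLAT_BDW) ? "yes" : "no");
	return 0;
}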

+ 2 - 3
drivers/gpu/drm/i915/gvt/page_track.c

@@ -157,11 +157,10 @@ int intel_vgpu_disable_page_track(struct intel_vgpu *vgpu, unsigned long gfn)
 int intel_vgpu_page_track_handler(struct intel_vgpu *vgpu, u64 gpa,
 		void *data, unsigned int bytes)
 {
-	struct intel_gvt *gvt = vgpu->gvt;
 	struct intel_vgpu_page_track *page_track;
 	int ret = 0;
 
-	mutex_lock(&gvt->lock);
+	mutex_lock(&vgpu->vgpu_lock);
 
 	page_track = intel_vgpu_find_page_track(vgpu, gpa >> PAGE_SHIFT);
 	if (!page_track) {
@@ -179,6 +178,6 @@ int intel_vgpu_page_track_handler(struct intel_vgpu *vgpu, u64 gpa,
 	}
 
 out:
-	mutex_unlock(&gvt->lock);
+	mutex_unlock(&vgpu->vgpu_lock);
 	return ret;
 }

+ 32 - 4
drivers/gpu/drm/i915/gvt/sched_policy.c

@@ -228,7 +228,7 @@ void intel_gvt_schedule(struct intel_gvt *gvt)
 	struct gvt_sched_data *sched_data = gvt->scheduler.sched_data;
 	ktime_t cur_time;
 
-	mutex_lock(&gvt->lock);
+	mutex_lock(&gvt->sched_lock);
 	cur_time = ktime_get();
 
 	if (test_and_clear_bit(INTEL_GVT_REQUEST_SCHED,
@@ -244,7 +244,7 @@ void intel_gvt_schedule(struct intel_gvt *gvt)
 	vgpu_update_timeslice(gvt->scheduler.current_vgpu, cur_time);
 	tbs_sched_func(sched_data);
 
-	mutex_unlock(&gvt->lock);
+	mutex_unlock(&gvt->sched_lock);
 }
 
 static enum hrtimer_restart tbs_timer_fn(struct hrtimer *timer_data)
@@ -359,39 +359,65 @@ static struct intel_gvt_sched_policy_ops tbs_schedule_ops = {
 
 int intel_gvt_init_sched_policy(struct intel_gvt *gvt)
 {
+	int ret;
+
+	mutex_lock(&gvt->sched_lock);
 	gvt->scheduler.sched_ops = &tbs_schedule_ops;
+	ret = gvt->scheduler.sched_ops->init(gvt);
+	mutex_unlock(&gvt->sched_lock);
 
-	return gvt->scheduler.sched_ops->init(gvt);
+	return ret;
 }
 
 void intel_gvt_clean_sched_policy(struct intel_gvt *gvt)
 {
+	mutex_lock(&gvt->sched_lock);
 	gvt->scheduler.sched_ops->clean(gvt);
+	mutex_unlock(&gvt->sched_lock);
 }
 
+/* For the per-vgpu scheduler policy there are two pieces of per-vgpu
+ * data: sched_data and sched_ctl. We treat them as part of the
+ * global scheduler, protected by gvt->sched_lock.
+ * Callers must decide for themselves whether the vgpu_lock should
+ * also be held outside.
+ */
+
 int intel_vgpu_init_sched_policy(struct intel_vgpu *vgpu)
 {
-	return vgpu->gvt->scheduler.sched_ops->init_vgpu(vgpu);
+	int ret;
+
+	mutex_lock(&vgpu->gvt->sched_lock);
+	ret = vgpu->gvt->scheduler.sched_ops->init_vgpu(vgpu);
+	mutex_unlock(&vgpu->gvt->sched_lock);
+
+	return ret;
 }
 
 void intel_vgpu_clean_sched_policy(struct intel_vgpu *vgpu)
 {
+	mutex_lock(&vgpu->gvt->sched_lock);
 	vgpu->gvt->scheduler.sched_ops->clean_vgpu(vgpu);
+	mutex_unlock(&vgpu->gvt->sched_lock);
 }
 
 void intel_vgpu_start_schedule(struct intel_vgpu *vgpu)
 {
 	struct vgpu_sched_data *vgpu_data = vgpu->sched_data;
 
+	mutex_lock(&vgpu->gvt->sched_lock);
 	if (!vgpu_data->active) {
 		gvt_dbg_core("vgpu%d: start schedule\n", vgpu->id);
 		vgpu->gvt->scheduler.sched_ops->start_schedule(vgpu);
 	}
+	mutex_unlock(&vgpu->gvt->sched_lock);
 }
 
 void intel_gvt_kick_schedule(struct intel_gvt *gvt)
 {
+	mutex_lock(&gvt->sched_lock);
 	intel_gvt_request_service(gvt, INTEL_GVT_REQUEST_EVENT_SCHED);
+	mutex_unlock(&gvt->sched_lock);
 }
 
 void intel_vgpu_stop_schedule(struct intel_vgpu *vgpu)
@@ -406,6 +432,7 @@ void intel_vgpu_stop_schedule(struct intel_vgpu *vgpu)
 
 	gvt_dbg_core("vgpu%d: stop schedule\n", vgpu->id);
 
+	mutex_lock(&vgpu->gvt->sched_lock);
 	scheduler->sched_ops->stop_schedule(vgpu);
 
 	if (scheduler->next_vgpu == vgpu)
@@ -425,4 +452,5 @@ void intel_vgpu_stop_schedule(struct intel_vgpu *vgpu)
 		}
 	}
 	spin_unlock_bh(&scheduler->mmio_context_lock);
+	mutex_unlock(&vgpu->gvt->sched_lock);
 }
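With the new sched_lock in place alongside gvt->lock and vgpu->vgpu_lock, deadlock avoidance depends on a fixed acquisition order; complete_current_workload() in scheduler.c below takes vgpu_lock before sched_lock, for example. A compact sketch of that discipline, with pthread mutexes standing in:

#include <pthread.h>
#include <stdio.h>

/* Illustrative lock ordering only: vgpu_lock (outer) is always taken
 * before sched_lock (inner) and released in reverse, mirroring how
 * the workload completion path nests the two. */
static pthread_mutex_t vgpu_lock  = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t sched_lock = PTHREAD_MUTEX_INITIALIZER;

static void complete_workload(void)
{
	pthread_mutex_lock(&vgpu_lock);
	pthread_mutex_lock(&sched_lock);
	printf("workload completed under both locks\n");
	pthread_mutex_unlock(&sched_lock);
	pthread_mutex_unlock(&vgpu_lock);
}

int main(void)
{
	complete_workload();
	return 0;
}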

+ 14 - 11
drivers/gpu/drm/i915/gvt/scheduler.c

@@ -45,11 +45,10 @@ static void set_context_pdp_root_pointer(
 		struct execlist_ring_context *ring_context,
 		u32 pdp[8])
 {
-	struct execlist_mmio_pair *pdp_pair = &ring_context->pdp3_UDW;
 	int i;
 
 	for (i = 0; i < 8; i++)
-		pdp_pair[i].val = pdp[7 - i];
+		ring_context->pdps[i].val = pdp[7 - i];
 }
 
 static void update_shadow_pdps(struct intel_vgpu_workload *workload)
@@ -298,7 +297,8 @@ static int copy_workload_to_ring_buffer(struct intel_vgpu_workload *workload)
 	void *shadow_ring_buffer_va;
 	u32 *cs;
 
-	if (IS_KABYLAKE(req->i915) && is_inhibit_context(req->hw_context))
+	if ((IS_KABYLAKE(req->i915) || IS_BROXTON(req->i915))
+		&& is_inhibit_context(req->hw_context))
 		intel_vgpu_restore_inhibit_context(vgpu, req);
 
 	/* allocate shadow ring buffer */
@@ -634,6 +634,7 @@ static int dispatch_workload(struct intel_vgpu_workload *workload)
 	gvt_dbg_sched("ring id %d prepare to dispatch workload %p\n",
 		ring_id, workload);
 
+	mutex_lock(&vgpu->vgpu_lock);
 	mutex_lock(&dev_priv->drm.struct_mutex);
 
 	ret = intel_gvt_scan_and_shadow_workload(workload);
@@ -654,6 +655,7 @@ out:
 	}
 
 	mutex_unlock(&dev_priv->drm.struct_mutex);
+	mutex_unlock(&vgpu->vgpu_lock);
 	return ret;
 }
 
@@ -663,7 +665,7 @@ static struct intel_vgpu_workload *pick_next_workload(
 	struct intel_gvt_workload_scheduler *scheduler = &gvt->scheduler;
 	struct intel_vgpu_workload *workload = NULL;
 
-	mutex_lock(&gvt->lock);
+	mutex_lock(&gvt->sched_lock);
 
 	/*
 	 * no current vgpu / will be scheduled out / no workload
@@ -709,7 +711,7 @@ static struct intel_vgpu_workload *pick_next_workload(
 
 	atomic_inc(&workload->vgpu->submission.running_workload_num);
 out:
-	mutex_unlock(&gvt->lock);
+	mutex_unlock(&gvt->sched_lock);
 	return workload;
 }
 
@@ -807,7 +809,8 @@ static void complete_current_workload(struct intel_gvt *gvt, int ring_id)
 	struct i915_request *rq = workload->req;
 	int event;
 
-	mutex_lock(&gvt->lock);
+	mutex_lock(&vgpu->vgpu_lock);
+	mutex_lock(&gvt->sched_lock);
 
 	/* For the workload w/ request, needs to wait for the context
 	 * switch to make sure request is completed.
@@ -883,7 +886,8 @@ static void complete_current_workload(struct intel_gvt *gvt, int ring_id)
 	if (gvt->scheduler.need_reschedule)
 		intel_gvt_request_service(gvt, INTEL_GVT_REQUEST_EVENT_SCHED);
 
-	mutex_unlock(&gvt->lock);
+	mutex_unlock(&gvt->sched_lock);
+	mutex_unlock(&vgpu->vgpu_lock);
 }
 
 struct workload_thread_param {
@@ -901,7 +905,8 @@ static int workload_thread(void *priv)
 	struct intel_vgpu *vgpu = NULL;
 	int ret;
 	bool need_force_wake = IS_SKYLAKE(gvt->dev_priv)
-			|| IS_KABYLAKE(gvt->dev_priv);
+			|| IS_KABYLAKE(gvt->dev_priv)
+			|| IS_BROXTON(gvt->dev_priv);
 	DEFINE_WAIT_FUNC(wait, woken_wake_function);
 
 	kfree(p);
@@ -935,9 +940,7 @@ static int workload_thread(void *priv)
 			intel_uncore_forcewake_get(gvt->dev_priv,
 					FORCEWAKE_ALL);
 
-		mutex_lock(&gvt->lock);
 		ret = dispatch_workload(workload);
-		mutex_unlock(&gvt->lock);
 
 		if (ret) {
 			vgpu = workload->vgpu;
@@ -1228,7 +1231,7 @@ static void read_guest_pdps(struct intel_vgpu *vgpu,
 	u64 gpa;
 	int i;
 
-	gpa = ring_context_gpa + RING_CTX_OFF(pdp3_UDW.val);
+	gpa = ring_context_gpa + RING_CTX_OFF(pdps[0].val);
 
 	for (i = 0; i < 8; i++)
 		intel_gvt_hypervisor_read_gpa(vgpu,
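The set_context_pdp_root_pointer() hunk above replaces indexing off the pdp3_UDW member with a proper pdps[] array: the context image stores the four PDP entries as {UDW, LDW} register pairs highest-first, so slot i takes pdp[7 - i]. A self-contained sketch of that reversal (struct names invented):

#include <stdio.h>

struct mmio_pair { unsigned int addr, val; };   /* invented layout */

struct ring_context {
	struct mmio_pair pdps[8];   /* PDP3_UDW .. PDP0_LDW, highest first */
};

static void set_pdp_root_pointer(struct ring_context *ctx,
				 const unsigned int pdp[8])
{
	int i;

	for (i = 0; i < 8; i++)
		ctx->pdps[i].val = pdp[7 - i];   /* reverse the order */
}

int main(void)
{
	struct ring_context ctx;
	unsigned int pdp[8] = { 0, 1, 2, 3, 4, 5, 6, 7 };
	int i;

	set_pdp_root_pointer(&ctx, pdp);
	for (i = 0; i < 8; i++)
		printf("pdps[%d].val = %u\n", i, ctx.pdps[i].val);
	return 0;
}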

+ 30 - 26
drivers/gpu/drm/i915/gvt/vgpu.c

@@ -58,6 +58,9 @@ void populate_pvinfo_page(struct intel_vgpu *vgpu)
 
 	vgpu_vreg_t(vgpu, vgtif_reg(avail_rs.fence_num)) = vgpu_fence_sz(vgpu);
 
+	vgpu_vreg_t(vgpu, vgtif_reg(cursor_x_hot)) = UINT_MAX;
+	vgpu_vreg_t(vgpu, vgtif_reg(cursor_y_hot)) = UINT_MAX;
+
 	gvt_dbg_core("Populate PVINFO PAGE for vGPU %d\n", vgpu->id);
 	gvt_dbg_core("aperture base [GMADR] 0x%llx size 0x%llx\n",
 		vgpu_aperture_gmadr_base(vgpu), vgpu_aperture_sz(vgpu));
@@ -223,22 +226,20 @@ void intel_gvt_activate_vgpu(struct intel_vgpu *vgpu)
  */
 void intel_gvt_deactivate_vgpu(struct intel_vgpu *vgpu)
 {
-	struct intel_gvt *gvt = vgpu->gvt;
-
-	mutex_lock(&gvt->lock);
+	mutex_lock(&vgpu->vgpu_lock);
 
 	vgpu->active = false;
 
 	if (atomic_read(&vgpu->submission.running_workload_num)) {
-		mutex_unlock(&gvt->lock);
+		mutex_unlock(&vgpu->vgpu_lock);
 		intel_gvt_wait_vgpu_idle(vgpu);
-		mutex_lock(&gvt->lock);
+		mutex_lock(&vgpu->vgpu_lock);
 	}
 
 	intel_vgpu_stop_schedule(vgpu);
 	intel_vgpu_dmabuf_cleanup(vgpu);
 
-	mutex_unlock(&gvt->lock);
+	mutex_unlock(&vgpu->vgpu_lock);
 }
 
 /**
@@ -252,14 +253,11 @@ void intel_gvt_destroy_vgpu(struct intel_vgpu *vgpu)
 {
 	struct intel_gvt *gvt = vgpu->gvt;
 
-	mutex_lock(&gvt->lock);
+	mutex_lock(&vgpu->vgpu_lock);
 
 	WARN(vgpu->active, "vGPU is still active!\n");
 
 	intel_gvt_debugfs_remove_vgpu(vgpu);
-	idr_remove(&gvt->vgpu_idr, vgpu->id);
-	if (idr_is_empty(&gvt->vgpu_idr))
-		intel_gvt_clean_irq(gvt);
 	intel_vgpu_clean_sched_policy(vgpu);
 	intel_vgpu_clean_submission(vgpu);
 	intel_vgpu_clean_display(vgpu);
@@ -269,10 +267,16 @@ void intel_gvt_destroy_vgpu(struct intel_vgpu *vgpu)
 	intel_vgpu_free_resource(vgpu);
 	intel_vgpu_clean_mmio(vgpu);
 	intel_vgpu_dmabuf_cleanup(vgpu);
-	vfree(vgpu);
+	mutex_unlock(&vgpu->vgpu_lock);
 
+	mutex_lock(&gvt->lock);
+	idr_remove(&gvt->vgpu_idr, vgpu->id);
+	if (idr_is_empty(&gvt->vgpu_idr))
+		intel_gvt_clean_irq(gvt);
 	intel_gvt_update_vgpu_types(gvt);
 	mutex_unlock(&gvt->lock);
+
+	vfree(vgpu);
 }
 
 #define IDLE_VGPU_IDR 0
@@ -298,6 +302,7 @@ struct intel_vgpu *intel_gvt_create_idle_vgpu(struct intel_gvt *gvt)
 
 	vgpu->id = IDLE_VGPU_IDR;
 	vgpu->gvt = gvt;
+	mutex_init(&vgpu->vgpu_lock);
 
 	for (i = 0; i < I915_NUM_ENGINES; i++)
 		INIT_LIST_HEAD(&vgpu->submission.workload_q_head[i]);
@@ -324,7 +329,10 @@ out_free_vgpu:
  */
 void intel_gvt_destroy_idle_vgpu(struct intel_vgpu *vgpu)
 {
+	mutex_lock(&vgpu->vgpu_lock);
 	intel_vgpu_clean_sched_policy(vgpu);
+	mutex_unlock(&vgpu->vgpu_lock);
+
 	vfree(vgpu);
 }
 
@@ -342,8 +350,6 @@ static struct intel_vgpu *__intel_gvt_create_vgpu(struct intel_gvt *gvt,
 	if (!vgpu)
 		return ERR_PTR(-ENOMEM);
 
-	mutex_lock(&gvt->lock);
-
 	ret = idr_alloc(&gvt->vgpu_idr, vgpu, IDLE_VGPU_IDR + 1, GVT_MAX_VGPU,
 		GFP_KERNEL);
 	if (ret < 0)
@@ -353,6 +359,7 @@ static struct intel_vgpu *__intel_gvt_create_vgpu(struct intel_gvt *gvt,
 	vgpu->handle = param->handle;
 	vgpu->gvt = gvt;
 	vgpu->sched_ctl.weight = param->weight;
+	mutex_init(&vgpu->vgpu_lock);
 	INIT_LIST_HEAD(&vgpu->dmabuf_obj_list_head);
 	INIT_RADIX_TREE(&vgpu->page_track_tree, GFP_KERNEL);
 	idr_init(&vgpu->object_idr);
@@ -400,8 +407,6 @@ static struct intel_vgpu *__intel_gvt_create_vgpu(struct intel_gvt *gvt,
 	if (ret)
 		goto out_clean_sched_policy;
 
-	mutex_unlock(&gvt->lock);
-
 	return vgpu;
 
 out_clean_sched_policy:
@@ -424,7 +429,6 @@ out_clean_idr:
 	idr_remove(&gvt->vgpu_idr, vgpu->id);
 out_free_vgpu:
 	vfree(vgpu);
-	mutex_unlock(&gvt->lock);
 	return ERR_PTR(ret);
 }
 
@@ -456,12 +460,12 @@ struct intel_vgpu *intel_gvt_create_vgpu(struct intel_gvt *gvt,
 	param.low_gm_sz = BYTES_TO_MB(param.low_gm_sz);
 	param.high_gm_sz = BYTES_TO_MB(param.high_gm_sz);
 
+	mutex_lock(&gvt->lock);
 	vgpu = __intel_gvt_create_vgpu(gvt, &param);
-	if (IS_ERR(vgpu))
-		return vgpu;
-
-	/* calculate left instance change for types */
-	intel_gvt_update_vgpu_types(gvt);
+	if (!IS_ERR(vgpu))
+		/* calculate left instance change for types */
+		intel_gvt_update_vgpu_types(gvt);
+	mutex_unlock(&gvt->lock);
 
 	return vgpu;
 }
@@ -473,7 +477,7 @@ struct intel_vgpu *intel_gvt_create_vgpu(struct intel_gvt *gvt,
  * @engine_mask: engines to reset for GT reset
  *
  * This function is called when user wants to reset a virtual GPU through
- * device model reset or GT reset. The caller should hold the gvt lock.
+ * device model reset or GT reset. The caller should hold the vgpu lock.
  *
  * vGPU Device Model Level Reset (DMLR) simulates the PCI level reset to reset
  * the whole vGPU to default state as when it is created. This vGPU function
@@ -513,9 +517,9 @@ void intel_gvt_reset_vgpu_locked(struct intel_vgpu *vgpu, bool dmlr,
 	 * scheduler when the reset is triggered by current vgpu.
 	 */
 	if (scheduler->current_vgpu == NULL) {
-		mutex_unlock(&gvt->lock);
+		mutex_unlock(&vgpu->vgpu_lock);
 		intel_gvt_wait_vgpu_idle(vgpu);
-		mutex_lock(&gvt->lock);
+		mutex_lock(&vgpu->vgpu_lock);
 	}
 
 	intel_vgpu_reset_submission(vgpu, resetting_eng);
@@ -555,7 +559,7 @@ void intel_gvt_reset_vgpu_locked(struct intel_vgpu *vgpu, bool dmlr,
  */
 void intel_gvt_reset_vgpu(struct intel_vgpu *vgpu)
 {
-	mutex_lock(&vgpu->gvt->lock);
+	mutex_lock(&vgpu->vgpu_lock);
 	intel_gvt_reset_vgpu_locked(vgpu, true, 0);
-	mutex_unlock(&vgpu->gvt->lock);
+	mutex_unlock(&vgpu->vgpu_lock);
 }
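Note the ordering intel_gvt_destroy_vgpu() ends up with: all per-instance teardown runs under vgpu_lock, the lock is dropped, the global IDR bookkeeping then runs under gvt->lock, and vfree(vgpu) comes last so the embedded mutex is never freed while held. A heavily simplified, pthread-based sketch of that shape:

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct vgpu {
	pthread_mutex_t vgpu_lock;
	int id;
};

static pthread_mutex_t gvt_lock = PTHREAD_MUTEX_INITIALIZER;

static void destroy_vgpu(struct vgpu *vgpu)
{
	pthread_mutex_lock(&vgpu->vgpu_lock);
	printf("per-instance cleanup for vgpu%d\n", vgpu->id);
	pthread_mutex_unlock(&vgpu->vgpu_lock);  /* drop before global work */

	pthread_mutex_lock(&gvt_lock);
	printf("remove vgpu%d from global table\n", vgpu->id);
	pthread_mutex_unlock(&gvt_lock);

	free(vgpu);   /* only after its embedded lock is released */
}

int main(void)
{
	struct vgpu *v = malloc(sizeof(*v));

	if (!v)
		return 1;
	pthread_mutex_init(&v->vgpu_lock, NULL);
	v->id = 1;
	destroy_vgpu(v);
	return 0;
}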

+ 10 - 26
drivers/gpu/drm/i915/i915_debugfs.c

@@ -1359,11 +1359,12 @@ static int i915_hangcheck_info(struct seq_file *m, void *unused)
 		seq_printf(m, "\tseqno = %x [current %x, last %x]\n",
 			   engine->hangcheck.seqno, seqno[id],
 			   intel_engine_last_submit(engine));
-		seq_printf(m, "\twaiters? %s, fake irq active? %s, stalled? %s\n",
+		seq_printf(m, "\twaiters? %s, fake irq active? %s, stalled? %s, wedged? %s\n",
 			   yesno(intel_engine_has_waiter(engine)),
 			   yesno(test_bit(engine->id,
 					  &dev_priv->gpu_error.missed_irq_rings)),
-			   yesno(engine->hangcheck.stalled));
+			   yesno(engine->hangcheck.stalled),
+			   yesno(engine->hangcheck.wedged));
 
 		spin_lock_irq(&b->rb_lock);
 		for (rb = rb_first(&b->waiters); rb; rb = rb_next(rb)) {
@@ -2536,7 +2537,7 @@ static int i915_guc_log_level_get(void *data, u64 *val)
 	if (!USES_GUC(dev_priv))
 		return -ENODEV;
 
-	*val = intel_guc_log_level_get(&dev_priv->guc.log);
+	*val = intel_guc_log_get_level(&dev_priv->guc.log);
 
 	return 0;
 }
@@ -2548,7 +2549,7 @@ static int i915_guc_log_level_set(void *data, u64 val)
 	if (!USES_GUC(dev_priv))
 		return -ENODEV;
 
-	return intel_guc_log_level_set(&dev_priv->guc.log, val);
+	return intel_guc_log_set_level(&dev_priv->guc.log, val);
 }
 
 DEFINE_SIMPLE_ATTRIBUTE(i915_guc_log_level_fops,
@@ -2660,8 +2661,6 @@ static int i915_edp_psr_status(struct seq_file *m, void *data)
 	seq_printf(m, "Enabled: %s\n", yesno((bool)dev_priv->psr.enabled));
 	seq_printf(m, "Busy frontbuffer bits: 0x%03x\n",
 		   dev_priv->psr.busy_frontbuffer_bits);
-	seq_printf(m, "Re-enable work scheduled: %s\n",
-		   yesno(work_busy(&dev_priv->psr.work.work)));
 
 	if (dev_priv->psr.psr2_enabled)
 		enabled = I915_READ(EDP_PSR2_CTL) & EDP_PSR2_ENABLE;
@@ -3379,28 +3378,13 @@ static int i915_shared_dplls_info(struct seq_file *m, void *unused)
 
 static int i915_wa_registers(struct seq_file *m, void *unused)
 {
-	struct drm_i915_private *dev_priv = node_to_i915(m->private);
-	struct i915_workarounds *workarounds = &dev_priv->workarounds;
+	struct i915_workarounds *wa = &node_to_i915(m->private)->workarounds;
 	int i;
 
-	intel_runtime_pm_get(dev_priv);
-
-	seq_printf(m, "Workarounds applied: %d\n", workarounds->count);
-	for (i = 0; i < workarounds->count; ++i) {
-		i915_reg_t addr;
-		u32 mask, value, read;
-		bool ok;
-
-		addr = workarounds->reg[i].addr;
-		mask = workarounds->reg[i].mask;
-		value = workarounds->reg[i].value;
-		read = I915_READ(addr);
-		ok = (value & mask) == (read & mask);
-		seq_printf(m, "0x%X: 0x%08X, mask: 0x%08X, read: 0x%08x, status: %s\n",
-			   i915_mmio_reg_offset(addr), value, mask, read, ok ? "OK" : "FAIL");
-	}
-
-	intel_runtime_pm_put(dev_priv);
+	seq_printf(m, "Workarounds applied: %d\n", wa->count);
+	for (i = 0; i < wa->count; ++i)
+		seq_printf(m, "0x%X: 0x%08X, mask: 0x%08X\n",
+			   wa->reg[i].addr, wa->reg[i].value, wa->reg[i].mask);
 
 	return 0;
 }

+ 26 - 30
drivers/gpu/drm/i915/i915_drv.c

@@ -73,6 +73,12 @@ bool __i915_inject_load_failure(const char *func, int line)
 
 	return false;
 }
+
+bool i915_error_injected(void)
+{
+	return i915_load_fail_count && !i915_modparams.inject_load_failure;
+}
+
 #endif
 
 #define FDO_BUG_URL "https://bugs.freedesktop.org/enter_bug.cgi?product=DRI"
@@ -115,20 +121,6 @@ __i915_printk(struct drm_i915_private *dev_priv, const char *level,
 	va_end(args);
 }
 
-static bool i915_error_injected(struct drm_i915_private *dev_priv)
-{
-#if IS_ENABLED(CONFIG_DRM_I915_DEBUG)
-	return i915_load_fail_count && !i915_modparams.inject_load_failure;
-#else
-	return false;
-#endif
-}
-
-#define i915_load_error(i915, fmt, ...)					 \
-	__i915_printk(i915,						 \
-		      i915_error_injected(i915) ? KERN_DEBUG : KERN_ERR, \
-		      fmt, ##__VA_ARGS__)
-
 /* Map PCH device id to PCH type, or PCH_NONE if unknown. */
 static enum intel_pch
 intel_pch_type(const struct drm_i915_private *dev_priv, unsigned short id)
@@ -248,14 +240,6 @@ static void intel_detect_pch(struct drm_i915_private *dev_priv)
 {
 	struct pci_dev *pch = NULL;
 
-	/* In all current cases, num_pipes is equivalent to the PCH_NOP setting
-	 * (which really amounts to a PCH but no South Display).
-	 */
-	if (INTEL_INFO(dev_priv)->num_pipes == 0) {
-		dev_priv->pch_type = PCH_NOP;
-		return;
-	}
-
 	/*
 	 * The reason to probe ISA bridge instead of Dev31:Fun0 is to
 	 * make graphics device passthrough work easy for VMM, that only
@@ -284,18 +268,28 @@ static void intel_detect_pch(struct drm_i915_private *dev_priv)
 		} else if (intel_is_virt_pch(id, pch->subsystem_vendor,
 					 pch->subsystem_device)) {
 			id = intel_virt_detect_pch(dev_priv);
-			if (id) {
-				pch_type = intel_pch_type(dev_priv, id);
-				if (WARN_ON(pch_type == PCH_NONE))
-					pch_type = PCH_NOP;
-			} else {
-				pch_type = PCH_NOP;
-			}
+			pch_type = intel_pch_type(dev_priv, id);
+
+			/* Sanity check virtual PCH id */
+			if (WARN_ON(id && pch_type == PCH_NONE))
+				id = 0;
+
 			dev_priv->pch_type = pch_type;
 			dev_priv->pch_id = id;
 			break;
 		}
 	}
+
+	/*
+	 * Use PCH_NOP (PCH but no South Display) for PCH platforms without
+	 * display.
+	 */
+	if (pch && INTEL_INFO(dev_priv)->num_pipes == 0) {
+		DRM_DEBUG_KMS("Display disabled, reverting to NOP PCH\n");
+		dev_priv->pch_type = PCH_NOP;
+		dev_priv->pch_id = 0;
+	}
+
 	if (!pch)
 		DRM_DEBUG_KMS("No PCH found.\n");
 
@@ -1704,6 +1698,8 @@ static int i915_drm_resume(struct drm_device *dev)
 	disable_rpm_wakeref_asserts(dev_priv);
 	intel_sanitize_gt_powersave(dev_priv);
 
+	i915_gem_sanitize(dev_priv);
+
 	ret = i915_ggtt_enable_hw(dev_priv);
 	if (ret)
 		DRM_ERROR("failed to re-enable GGTT\n");
@@ -1845,7 +1841,7 @@ static int i915_drm_resume_early(struct drm_device *dev)
 	else
 		intel_display_set_init_power(dev_priv, true);
 
-	i915_gem_sanitize(dev_priv);
+	intel_engines_sanitize(dev_priv);
 
 	enable_rpm_wakeref_asserts(dev_priv);
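i915_error_injected() moves out of line here so the i915_load_error() macro (relocated to i915_drv.h in the next file) can demote load errors to debug severity when the failure was injected on purpose, which keeps CI logs quiet on fault-injection runs. A stand-alone sketch of the idea (user-space stand-ins; uses the GNU ## __VA_ARGS__ extension, as the kernel itself does):

#include <stdbool.h>
#include <stdio.h>

static int load_fail_count;       /* stand-ins for the module state */
static int inject_load_failure;

/* true once the requested failure has actually been injected */
static bool error_injected(void)
{
	return load_fail_count && !inject_load_failure;
}

/* injected failures log at DEBUG, real ones at ERROR */
#define load_error(fmt, ...) \
	fprintf(stderr, "%s: " fmt, \
		error_injected() ? "DEBUG" : "ERROR", ##__VA_ARGS__)

int main(void)
{
	load_error("GPU init failed (real)\n");

	load_fail_count = 1;          /* the scheduled failure fired... */
	inject_load_failure = 0;      /* ...and no more are pending */
	load_error("GPU init failed (injected)\n");
	return 0;
}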
 

+ 35 - 21
drivers/gpu/drm/i915/i915_drv.h

@@ -40,6 +40,7 @@
 #include <linux/hash.h>
 #include <linux/intel-iommu.h>
 #include <linux/kref.h>
+#include <linux/mm_types.h>
 #include <linux/perf_event.h>
 #include <linux/pm_qos.h>
 #include <linux/reservation.h>
@@ -85,8 +86,8 @@
 
 #define DRIVER_NAME		"i915"
 #define DRIVER_DESC		"Intel Graphics"
-#define DRIVER_DATE		"20180606"
-#define DRIVER_TIMESTAMP	1528323047
+#define DRIVER_DATE		"20180620"
+#define DRIVER_TIMESTAMP	1529529048
 
 /* Use I915_STATE_WARN(x) and I915_STATE_WARN_ON() (rather than WARN() and
  * WARN_ON()) for hw state sanity checks to check for unexpected conditions
@@ -107,13 +108,24 @@
 	I915_STATE_WARN((x), "%s", "WARN_ON(" __stringify(x) ")")
 
 #if IS_ENABLED(CONFIG_DRM_I915_DEBUG)
+
 bool __i915_inject_load_failure(const char *func, int line);
 #define i915_inject_load_failure() \
 	__i915_inject_load_failure(__func__, __LINE__)
+
+bool i915_error_injected(void);
+
 #else
+
 #define i915_inject_load_failure() false
+#define i915_error_injected() false
+
 #endif
 
+#define i915_load_error(i915, fmt, ...)					 \
+	__i915_printk(i915, i915_error_injected() ? KERN_DEBUG : KERN_ERR, \
+		      fmt, ##__VA_ARGS__)
+
 typedef struct {
 	uint32_t val;
 } uint_fixed_16_16_t;
@@ -340,14 +352,21 @@ struct drm_i915_file_private {
 
 	unsigned int bsd_engine;
 
-/* Client can have a maximum of 3 contexts banned before
- * it is denied of creating new contexts. As one context
- * ban needs 4 consecutive hangs, and more if there is
- * progress in between, this is a last resort stop gap measure
- * to limit the badly behaving clients access to gpu.
+/*
+ * Every context ban increments the per-client ban score, and so do
+ * hangs in short succession. Once the ban threshold is reached, the
+ * client is considered banned and submitting more work will fail.
+ * This is a stopgap measure to limit a badly behaving client's
+ * access to the GPU. Note that unbannable contexts never increment
+ * the client ban score.
+ */
-#define I915_MAX_CLIENT_CONTEXT_BANS 3
-	atomic_t context_bans;
+#define I915_CLIENT_SCORE_HANG_FAST	1
+#define   I915_CLIENT_FAST_HANG_JIFFIES (60 * HZ)
+#define I915_CLIENT_SCORE_CONTEXT_BAN   3
+#define I915_CLIENT_SCORE_BANNED	9
+	/** ban_score: Accumulated score of all ctx bans and fast hangs. */
+	atomic_t ban_score;
+	unsigned long hang_timestamp;
 };
 
 /* Interface history:
@@ -601,7 +620,7 @@ struct i915_psr {
 	bool sink_support;
 	struct intel_dp *enabled;
 	bool active;
-	struct delayed_work work;
+	struct work_struct work;
 	unsigned busy_frontbuffer_bits;
 	bool sink_psr2_support;
 	bool link_standby;
@@ -631,7 +650,7 @@ enum intel_pch {
 	PCH_KBP,        /* Kaby Lake PCH */
 	PCH_CNP,        /* Cannon Lake PCH */
 	PCH_ICP,	/* Ice Lake PCH */
-	PCH_NOP,
+	PCH_NOP,	/* PCH without south display */
 };
 
 enum intel_sbi_destination {
@@ -994,6 +1013,8 @@ struct i915_gem_mm {
 #define I915_ENGINE_DEAD_TIMEOUT  (4 * HZ)  /* Seqno, head and subunits dead */
 #define I915_SEQNO_DEAD_TIMEOUT   (12 * HZ) /* Seqno dead with active head */
 
+#define I915_ENGINE_WEDGED_TIMEOUT  (60 * HZ)  /* Reset but no recovery? */
+
 enum modeset_restore {
 	MODESET_ON_LID_OPEN,
 	MODESET_DONE,
@@ -1004,6 +1025,7 @@ enum modeset_restore {
 #define DP_AUX_B 0x10
 #define DP_AUX_C 0x20
 #define DP_AUX_D 0x30
+#define DP_AUX_E 0x50
 #define DP_AUX_F 0x60
 
 #define DDC_PIN_B  0x05
@@ -1290,7 +1312,7 @@ struct i915_frontbuffer_tracking {
 };
 
 struct i915_wa_reg {
-	i915_reg_t addr;
+	u32 addr;
 	u32 value;
 	/* bitmask representing WA bits */
 	u32 mask;
@@ -3174,7 +3196,7 @@ int i915_gem_wait_for_idle(struct drm_i915_private *dev_priv,
 int __must_check i915_gem_suspend(struct drm_i915_private *dev_priv);
 void i915_gem_suspend_late(struct drm_i915_private *dev_priv);
 void i915_gem_resume(struct drm_i915_private *dev_priv);
-int i915_gem_fault(struct vm_fault *vmf);
+vm_fault_t i915_gem_fault(struct vm_fault *vmf);
 int i915_gem_object_wait(struct drm_i915_gem_object *obj,
 			 unsigned int flags,
 			 long timeout,
@@ -3678,14 +3700,6 @@ static inline unsigned long nsecs_to_jiffies_timeout(const u64 n)
         return min_t(u64, MAX_JIFFY_OFFSET, nsecs_to_jiffies64(n) + 1);
 }
 
-static inline unsigned long
-timespec_to_jiffies_timeout(const struct timespec *value)
-{
-	unsigned long j = timespec_to_jiffies(value);
-
-	return min_t(unsigned long, MAX_JIFFY_OFFSET, j + 1);
-}
-
 /*
  * If you need to wait X milliseconds between events A and B, but event B
  * doesn't happen exactly after event A, you record the timestamp (jiffies) of
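The new I915_CLIENT_SCORE_* constants above encode the accounting: a banned context contributes 3 to its client's score, a hang within 60 seconds of the previous one adds 1 more, and the client is refused new contexts once the score reaches 9, so three quick context bans are enough. A tiny worked example of the arithmetic (simplified; the real accumulation happens in i915_gem_client_mark_guilty()):

#include <stdbool.h>
#include <stdio.h>

#define CLIENT_SCORE_HANG_FAST   1
#define CLIENT_SCORE_CONTEXT_BAN 3
#define CLIENT_SCORE_BANNED      9

static bool client_is_banned(unsigned int ban_score)
{
	return ban_score >= CLIENT_SCORE_BANNED;
}

int main(void)
{
	unsigned int score = 0;
	int i;

	/* three context bans, each within the 60s fast-hang window */
	for (i = 0; i < 3; i++) {
		score += CLIENT_SCORE_CONTEXT_BAN;
		score += CLIENT_SCORE_HANG_FAST;
		printf("after ban %d: score=%u banned=%s\n", i + 1, score,
		       client_is_banned(score) ? "yes" : "no");
	}
	return 0;
}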

+ 103 - 78
drivers/gpu/drm/i915/i915_gem.c

@@ -1995,7 +1995,7 @@ compute_partial_view(struct drm_i915_gem_object *obj,
  * The current feature set supported by i915_gem_fault() and thus GTT mmaps
  * is exposed via I915_PARAM_MMAP_GTT_VERSION (see i915_gem_mmap_gtt_version).
  */
-int i915_gem_fault(struct vm_fault *vmf)
+vm_fault_t i915_gem_fault(struct vm_fault *vmf)
 {
 #define MIN_CHUNK_PAGES (SZ_1M >> PAGE_SHIFT)
 	struct vm_area_struct *area = vmf->vma;
@@ -2112,10 +2112,8 @@ err:
 		 * fail). But any other -EIO isn't ours (e.g. swap in failure)
 		 * and so needs to be reported.
 		 */
-		if (!i915_terminally_wedged(&dev_priv->gpu_error)) {
-			ret = VM_FAULT_SIGBUS;
-			break;
-		}
+		if (!i915_terminally_wedged(&dev_priv->gpu_error))
+			return VM_FAULT_SIGBUS;
 	case -EAGAIN:
 		/*
 		 * EAGAIN means the gpu is hung and we'll wait for the error
@@ -2130,21 +2128,16 @@ err:
 		 * EBUSY is ok: this just means that another thread
 		 * already did the job.
 		 */
-		ret = VM_FAULT_NOPAGE;
-		break;
+		return VM_FAULT_NOPAGE;
 	case -ENOMEM:
-		ret = VM_FAULT_OOM;
-		break;
+		return VM_FAULT_OOM;
 	case -ENOSPC:
 	case -EFAULT:
-		ret = VM_FAULT_SIGBUS;
-		break;
+		return VM_FAULT_SIGBUS;
 	default:
 		WARN_ONCE(ret, "unhandled error in i915_gem_fault: %i\n", ret);
-		ret = VM_FAULT_SIGBUS;
-		break;
+		return VM_FAULT_SIGBUS;
 	}
-	return ret;
 }
 
 static void __i915_gem_object_release_mmap(struct drm_i915_gem_object *obj)
@@ -2408,29 +2401,15 @@ static void __i915_gem_object_reset_page_iter(struct drm_i915_gem_object *obj)
 	rcu_read_unlock();
 }
 
-void __i915_gem_object_put_pages(struct drm_i915_gem_object *obj,
-				 enum i915_mm_subclass subclass)
+static struct sg_table *
+__i915_gem_object_unset_pages(struct drm_i915_gem_object *obj)
 {
 	struct drm_i915_private *i915 = to_i915(obj->base.dev);
 	struct sg_table *pages;
 
-	if (i915_gem_object_has_pinned_pages(obj))
-		return;
-
-	GEM_BUG_ON(obj->bind_count);
-	if (!i915_gem_object_has_pages(obj))
-		return;
-
-	/* May be called by shrinker from within get_pages() (on another bo) */
-	mutex_lock_nested(&obj->mm.lock, subclass);
-	if (unlikely(atomic_read(&obj->mm.pages_pin_count)))
-		goto unlock;
-
-	/* ->put_pages might need to allocate memory for the bit17 swizzle
-	 * array, hence protect them from being reaped by removing them from gtt
-	 * lists early. */
 	pages = fetch_and_zero(&obj->mm.pages);
-	GEM_BUG_ON(!pages);
+	if (!pages)
+		return NULL;
 
 	spin_lock(&i915->mm.obj_lock);
 	list_del(&obj->mm.link);
@@ -2449,12 +2428,37 @@ void __i915_gem_object_put_pages(struct drm_i915_gem_object *obj,
 	}
 
 	__i915_gem_object_reset_page_iter(obj);
+	obj->mm.page_sizes.phys = obj->mm.page_sizes.sg = 0;
+
+	return pages;
+}
+
+void __i915_gem_object_put_pages(struct drm_i915_gem_object *obj,
+				 enum i915_mm_subclass subclass)
+{
+	struct sg_table *pages;
+
+	if (i915_gem_object_has_pinned_pages(obj))
+		return;
+
+	GEM_BUG_ON(obj->bind_count);
+	if (!i915_gem_object_has_pages(obj))
+		return;
 
+	/* May be called by shrinker from within get_pages() (on another bo) */
+	mutex_lock_nested(&obj->mm.lock, subclass);
+	if (unlikely(atomic_read(&obj->mm.pages_pin_count)))
+		goto unlock;
+
+	/*
+	 * ->put_pages might need to allocate memory for the bit17 swizzle
+	 * array, hence protect them from being reaped by removing them from gtt
+	 * lists early.
+	 */
+	pages = __i915_gem_object_unset_pages(obj);
 	if (!IS_ERR(pages))
 		obj->ops->put_pages(obj, pages);
 
-	obj->mm.page_sizes.phys = obj->mm.page_sizes.sg = 0;
-
 unlock:
 	mutex_unlock(&obj->mm.lock);
 }
@@ -2937,32 +2941,54 @@ i915_gem_object_pwrite_gtt(struct drm_i915_gem_object *obj,
 	return 0;
 }
 
-static void i915_gem_context_mark_guilty(struct i915_gem_context *ctx)
+static void i915_gem_client_mark_guilty(struct drm_i915_file_private *file_priv,
+					const struct i915_gem_context *ctx)
 {
-	bool banned;
+	unsigned int score;
+	unsigned long prev_hang;
 
-	atomic_inc(&ctx->guilty_count);
+	if (i915_gem_context_is_banned(ctx))
+		score = I915_CLIENT_SCORE_CONTEXT_BAN;
+	else
+		score = 0;
 
-	banned = false;
-	if (i915_gem_context_is_bannable(ctx)) {
-		unsigned int score;
+	prev_hang = xchg(&file_priv->hang_timestamp, jiffies);
+	if (time_before(jiffies, prev_hang + I915_CLIENT_FAST_HANG_JIFFIES))
+		score += I915_CLIENT_SCORE_HANG_FAST;
 
-		score = atomic_add_return(CONTEXT_SCORE_GUILTY,
-					  &ctx->ban_score);
-		banned = score >= CONTEXT_SCORE_BAN_THRESHOLD;
+	if (score) {
+		atomic_add(score, &file_priv->ban_score);
 
-		DRM_DEBUG_DRIVER("context %s marked guilty (score %d) banned? %s\n",
-				 ctx->name, score, yesno(banned));
+		DRM_DEBUG_DRIVER("client %s: gained %u ban score, now %u\n",
+				 ctx->name, score,
+				 atomic_read(&file_priv->ban_score));
 	}
-	if (!banned)
+}
+
+static void i915_gem_context_mark_guilty(struct i915_gem_context *ctx)
+{
+	unsigned int score;
+	bool banned, bannable;
+
+	atomic_inc(&ctx->guilty_count);
+
+	bannable = i915_gem_context_is_bannable(ctx);
+	score = atomic_add_return(CONTEXT_SCORE_GUILTY, &ctx->ban_score);
+	banned = score >= CONTEXT_SCORE_BAN_THRESHOLD;
+
+	/* Cool contexts don't accumulate client ban score */
+	if (!bannable)
 		return;
 
-	i915_gem_context_set_banned(ctx);
-	if (!IS_ERR_OR_NULL(ctx->file_priv)) {
-		atomic_inc(&ctx->file_priv->context_bans);
-		DRM_DEBUG_DRIVER("client %s has had %d context banned\n",
-				 ctx->name, atomic_read(&ctx->file_priv->context_bans));
+	if (banned) {
+		DRM_DEBUG_DRIVER("context %s: guilty %d, score %u, banned\n",
+				 ctx->name, atomic_read(&ctx->guilty_count),
+				 score);
+		i915_gem_context_set_banned(ctx);
 	}
+
+	if (!IS_ERR_OR_NULL(ctx->file_priv))
+		i915_gem_client_mark_guilty(ctx->file_priv, ctx);
 }
 
 static void i915_gem_context_mark_innocent(struct i915_gem_context *ctx)
@@ -3140,15 +3166,17 @@ i915_gem_reset_request(struct intel_engine_cs *engine,
 		 */
 		request = i915_gem_find_active_request(engine);
 		if (request) {
+			unsigned long flags;
+
 			i915_gem_context_mark_innocent(request->gem_context);
 			dma_fence_set_error(&request->fence, -EAGAIN);
 
 			/* Rewind the engine to replay the incomplete rq */
-			spin_lock_irq(&engine->timeline.lock);
+			spin_lock_irqsave(&engine->timeline.lock, flags);
 			request = list_prev_entry(request, link);
 			if (&request->link == &engine->timeline.requests)
 				request = NULL;
-			spin_unlock_irq(&engine->timeline.lock);
+			spin_unlock_irqrestore(&engine->timeline.lock, flags);
 		}
 	}
 
@@ -3209,7 +3237,7 @@ void i915_gem_reset(struct drm_i915_private *dev_priv,
 			rq = i915_request_alloc(engine,
 						dev_priv->kernel_context);
 			if (!IS_ERR(rq))
-				__i915_request_add(rq, false);
+				i915_request_add(rq);
 		}
 	}
 
@@ -4962,8 +4990,7 @@ void __i915_gem_object_release_unless_active(struct drm_i915_gem_object *obj)
 
 void i915_gem_sanitize(struct drm_i915_private *i915)
 {
-	struct intel_engine_cs *engine;
-	enum intel_engine_id id;
+	int err;
 
 	GEM_TRACE("\n");
 
@@ -4989,14 +5016,11 @@ void i915_gem_sanitize(struct drm_i915_private *i915)
 	 * it may impact the display and we are uncertain about the stability
 	 * of the reset, so this could be applied to even earlier gen.
 	 */
+	err = -ENODEV;
 	if (INTEL_GEN(i915) >= 5 && intel_has_gpu_reset(i915))
-		WARN_ON(intel_gpu_reset(i915, ALL_ENGINES));
-
-	/* Reset the submission backend after resume as well as the GPU reset */
-	for_each_engine(engine, i915, id) {
-		if (engine->reset.reset)
-			engine->reset.reset(engine, NULL);
-	}
+		err = WARN_ON(intel_gpu_reset(i915, ALL_ENGINES));
+	if (!err)
+		intel_engines_sanitize(i915);
 
 	intel_uncore_forcewake_put(i915, FORCEWAKE_ALL);
 	intel_runtime_pm_put(i915);
@@ -5328,7 +5352,7 @@ static int __intel_engines_record_defaults(struct drm_i915_private *i915)
 		if (engine->init_context)
 			err = engine->init_context(rq);
 
-		__i915_request_add(rq, true);
+		i915_request_add(rq);
 		if (err)
 			goto err_active;
 	}
@@ -5514,8 +5538,12 @@ int i915_gem_init(struct drm_i915_private *dev_priv)
 	 * driver doesn't explode during runtime.
 	 */
 err_init_hw:
-	i915_gem_wait_for_idle(dev_priv, I915_WAIT_LOCKED);
-	i915_gem_contexts_lost(dev_priv);
+	mutex_unlock(&dev_priv->drm.struct_mutex);
+
+	WARN_ON(i915_gem_suspend(dev_priv));
+	i915_gem_suspend_late(dev_priv);
+
+	mutex_lock(&dev_priv->drm.struct_mutex);
 	intel_uc_fini_hw(dev_priv);
 err_uc_init:
 	intel_uc_fini(dev_priv);
@@ -5544,7 +5572,8 @@ err_unlock:
 		 * for all other failure, such as an allocation failure, bail.
 		 */
 		if (!i915_terminally_wedged(&dev_priv->gpu_error)) {
-			DRM_ERROR("Failed to initialize GPU, declaring it wedged\n");
+			i915_load_error(dev_priv,
+					"Failed to initialize GPU, declaring it wedged!\n");
 			i915_gem_set_wedged(dev_priv);
 		}
 		ret = 0;
@@ -5811,6 +5840,7 @@ int i915_gem_open(struct drm_i915_private *i915, struct drm_file *file)
 	INIT_LIST_HEAD(&file_priv->mm.request_list);
 
 	file_priv->bsd_engine = -1;
+	file_priv->hang_timestamp = jiffies;
 
 	ret = i915_gem_context_open(i915, file);
 	if (ret)
@@ -6091,16 +6121,7 @@ int i915_gem_object_attach_phys(struct drm_i915_gem_object *obj, int align)
 		goto err_unlock;
 	}
 
-	pages = fetch_and_zero(&obj->mm.pages);
-	if (pages) {
-		struct drm_i915_private *i915 = to_i915(obj->base.dev);
-
-		__i915_gem_object_reset_page_iter(obj);
-
-		spin_lock(&i915->mm.obj_lock);
-		list_del(&obj->mm.link);
-		spin_unlock(&i915->mm.obj_lock);
-	}
+	pages = __i915_gem_object_unset_pages(obj);
 
 	obj->ops = &i915_gem_phys_ops;
 
@@ -6118,7 +6139,11 @@ int i915_gem_object_attach_phys(struct drm_i915_gem_object *obj, int align)
 
 err_xfer:
 	obj->ops = &i915_gem_object_ops;
-	obj->mm.pages = pages;
+	if (!IS_ERR_OR_NULL(pages)) {
+		unsigned int sg_page_sizes = i915_sg_page_sizes(pages->sgl);
+
+		__i915_gem_object_set_pages(obj, pages, sg_page_sizes);
+	}
 err_unlock:
 	mutex_unlock(&obj->mm.lock);
 	return err;

+ 2 - 9
drivers/gpu/drm/i915/i915_gem_context.c

@@ -700,14 +700,7 @@ int i915_gem_switch_to_kernel_context(struct drm_i915_private *i915)
 			i915_timeline_sync_set(rq->timeline, &prev->fence);
 		}
 
-		/*
-		 * Force a flush after the switch to ensure that all rendering
-		 * and operations prior to switching to the kernel context hits
-		 * memory. This should be guaranteed by the previous request,
-		 * but an extra layer of paranoia before we declare the system
-		 * idle (on suspend etc) is advisable!
-		 */
-		__i915_request_add(rq, true);
+		i915_request_add(rq);
 	}
 
 	return 0;
@@ -715,7 +708,7 @@ int i915_gem_switch_to_kernel_context(struct drm_i915_private *i915)
 
 static bool client_is_banned(struct drm_i915_file_private *file_priv)
 {
-	return atomic_read(&file_priv->context_bans) > I915_MAX_CLIENT_CONTEXT_BANS;
+	return atomic_read(&file_priv->ban_score) >= I915_CLIENT_SCORE_BANNED;
 }
 
 int i915_gem_context_create_ioctl(struct drm_device *dev, void *data,

+ 29 - 24
drivers/gpu/drm/i915/i915_gem_execbuffer.c

@@ -489,7 +489,9 @@ eb_validate_vma(struct i915_execbuffer *eb,
 }
 
 static int
-eb_add_vma(struct i915_execbuffer *eb, unsigned int i, struct i915_vma *vma)
+eb_add_vma(struct i915_execbuffer *eb,
+	   unsigned int i, unsigned batch_idx,
+	   struct i915_vma *vma)
 {
 	struct drm_i915_gem_exec_object2 *entry = &eb->exec[i];
 	int err;
@@ -522,6 +524,24 @@ eb_add_vma(struct i915_execbuffer *eb, unsigned int i, struct i915_vma *vma)
 	eb->flags[i] = entry->flags;
 	vma->exec_flags = &eb->flags[i];
 
+	/*
+	 * SNA is doing fancy tricks with compressing batch buffers, which leads
+	 * to negative relocation deltas. Usually that works out ok since the
+	 * relocate address is still positive, except when the batch is placed
+	 * very low in the GTT. Ensure this doesn't happen.
+	 *
+	 * Note that actual hangs have only been observed on gen7, but for
+	 * paranoia do it everywhere.
+	 */
+	if (i == batch_idx) {
+		if (!(eb->flags[i] & EXEC_OBJECT_PINNED))
+			eb->flags[i] |= __EXEC_OBJECT_NEEDS_BIAS;
+		if (eb->reloc_cache.has_fence)
+			eb->flags[i] |= EXEC_OBJECT_NEEDS_FENCE;
+
+		eb->batch = vma;
+	}
+
 	err = 0;
 	if (eb_pin_vma(eb, entry, vma)) {
 		if (entry->offset != vma->node.start) {
@@ -716,7 +736,7 @@ static int eb_lookup_vmas(struct i915_execbuffer *eb)
 {
 	struct radix_tree_root *handles_vma = &eb->ctx->handles_vma;
 	struct drm_i915_gem_object *obj;
-	unsigned int i;
+	unsigned int i, batch;
 	int err;
 
 	if (unlikely(i915_gem_context_is_closed(eb->ctx)))
@@ -728,6 +748,8 @@ static int eb_lookup_vmas(struct i915_execbuffer *eb)
 	INIT_LIST_HEAD(&eb->relocs);
 	INIT_LIST_HEAD(&eb->unbound);
 
+	batch = eb_batch_index(eb);
+
 	for (i = 0; i < eb->buffer_count; i++) {
 		u32 handle = eb->exec[i].handle;
 		struct i915_lut_handle *lut;
@@ -770,33 +792,16 @@ static int eb_lookup_vmas(struct i915_execbuffer *eb)
 		lut->handle = handle;
 
 add_vma:
-		err = eb_add_vma(eb, i, vma);
+		err = eb_add_vma(eb, i, batch, vma);
 		if (unlikely(err))
 			goto err_vma;
 
 		GEM_BUG_ON(vma != eb->vma[i]);
 		GEM_BUG_ON(vma->exec_flags != &eb->flags[i]);
+		GEM_BUG_ON(drm_mm_node_allocated(&vma->node) &&
+			   eb_vma_misplaced(&eb->exec[i], vma, eb->flags[i]));
 	}
 
-	/* take note of the batch buffer before we might reorder the lists */
-	i = eb_batch_index(eb);
-	eb->batch = eb->vma[i];
-	GEM_BUG_ON(eb->batch->exec_flags != &eb->flags[i]);
-
-	/*
-	 * SNA is doing fancy tricks with compressing batch buffers, which leads
-	 * to negative relocation deltas. Usually that works out ok since the
-	 * relocate address is still positive, except when the batch is placed
-	 * very low in the GTT. Ensure this doesn't happen.
-	 *
-	 * Note that actual hangs have only been observed on gen7, but for
-	 * paranoia do it everywhere.
-	 */
-	if (!(eb->flags[i] & EXEC_OBJECT_PINNED))
-		eb->flags[i] |= __EXEC_OBJECT_NEEDS_BIAS;
-	if (eb->reloc_cache.has_fence)
-		eb->flags[i] |= EXEC_OBJECT_NEEDS_FENCE;
-
 	eb->args->flags |= __EXEC_VALIDATED;
 	return eb_reserve(eb);
 
@@ -916,7 +921,7 @@ static void reloc_gpu_flush(struct reloc_cache *cache)
 	i915_gem_object_unpin_map(cache->rq->batch->obj);
 	i915_gem_chipset_flush(cache->rq->i915);
 
-	__i915_request_add(cache->rq, true);
+	i915_request_add(cache->rq);
 	cache->rq = NULL;
 }
 
@@ -2433,7 +2438,7 @@ i915_gem_do_execbuffer(struct drm_device *dev,
 	trace_i915_request_queue(eb.request, eb.batch_flags);
 	err = eb_submit(&eb);
 err_request:
-	__i915_request_add(eb.request, err == 0);
+	i915_request_add(eb.request);
 	add_to_client(eb.request, file);
 
 	if (fences)
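eb_add_vma() now learns the batch index up front, so the bias and fence restrictions land on the batch buffer before eb_pin_vma() runs, rather than being patched in afterwards when the vma may already be pinned somewhere unsuitable. The shape of the change, heavily simplified (the real code also gates the flags on EXEC_OBJECT_PINNED and reloc_cache.has_fence):

#include <stdio.h>

#define NEEDS_BIAS  (1u << 0)   /* keep the batch out of the low GTT */
#define NEEDS_FENCE (1u << 1)

/* simplified: decide placement flags *before* pinning */
static unsigned int add_vma_flags(unsigned int i, unsigned int batch_idx,
				  unsigned int flags)
{
	if (i == batch_idx)
		flags |= NEEDS_BIAS | NEEDS_FENCE;
	return flags;   /* pin_vma() would run after this */
}

int main(void)
{
	unsigned int i, batch_idx = 2;

	for (i = 0; i < 3; i++)
		printf("vma %u flags=0x%x\n", i,
		       add_vma_flags(i, batch_idx, 0));
	return 0;
}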

+ 353 - 351
drivers/gpu/drm/i915/i915_gem_gtt.c

@@ -190,11 +190,19 @@ int intel_sanitize_enable_ppgtt(struct drm_i915_private *dev_priv,
 	return 1;
 }
 
-static int gen6_ppgtt_bind_vma(struct i915_vma *vma,
-			       enum i915_cache_level cache_level,
-			       u32 unused)
+static int ppgtt_bind_vma(struct i915_vma *vma,
+			  enum i915_cache_level cache_level,
+			  u32 unused)
 {
 	u32 pte_flags;
+	int err;
+
+	if (!(vma->flags & I915_VMA_LOCAL_BIND)) {
+		err = vma->vm->allocate_va_range(vma->vm,
+						 vma->node.start, vma->size);
+		if (err)
+			return err;
+	}
 
 	/* Currently applicable only to VLV */
 	pte_flags = 0;
@@ -206,22 +214,6 @@ static int gen6_ppgtt_bind_vma(struct i915_vma *vma,
 	return 0;
 }
 
-static int gen8_ppgtt_bind_vma(struct i915_vma *vma,
-			       enum i915_cache_level cache_level,
-			       u32 unused)
-{
-	int ret;
-
-	if (!(vma->flags & I915_VMA_LOCAL_BIND)) {
-		ret = vma->vm->allocate_va_range(vma->vm,
-						 vma->node.start, vma->size);
-		if (ret)
-			return ret;
-	}
-
-	return gen6_ppgtt_bind_vma(vma, cache_level, unused);
-}
-
 static void ppgtt_unbind_vma(struct i915_vma *vma)
 {
 	vma->vm->clear_range(vma->vm, vma->node.start, vma->size);
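This hunk folds gen6_ppgtt_bind_vma() and gen8_ppgtt_bind_vma() into a single ppgtt_bind_vma() that allocates the VA range on the first non-local bind, part of what lets full-ppgtt behave uniformly back to HSW. The allocate-on-bind shape, reduced to a runnable sketch (flag handling simplified; in the real code the caller manages I915_VMA_LOCAL_BIND):

#include <stdbool.h>
#include <stdio.h>

#define VMA_LOCAL_BIND (1u << 0)    /* simplified flag */

struct vma {
	unsigned int flags;
	unsigned long start, size;
	bool va_allocated;          /* stand-in for built page tables */
};

static int allocate_va_range(struct vma *vma)
{
	vma->va_allocated = true;   /* real code builds page tables here */
	return 0;
}

static int ppgtt_bind_vma(struct vma *vma)
{
	int err;

	/* first bind: carve out the VA range before inserting PTEs */
	if (!(vma->flags & VMA_LOCAL_BIND)) {
		err = allocate_va_range(vma);
		if (err)
			return err;
		vma->flags |= VMA_LOCAL_BIND;
	}
	return 0;                   /* insert_entries() would follow */
}

int main(void)
{
	struct vma v = { 0, 0x10000, 0x4000, false };

	ppgtt_bind_vma(&v);
	printf("va allocated: %s\n", v.va_allocated ? "yes" : "no");
	return 0;
}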
@@ -648,11 +640,10 @@ static void gen8_initialize_pt(struct i915_address_space *vm,
 		gen8_pte_encode(vm->scratch_page.daddr, I915_CACHE_LLC));
 }
 
-static void gen6_initialize_pt(struct i915_address_space *vm,
+static void gen6_initialize_pt(struct gen6_hw_ppgtt *ppgtt,
 			       struct i915_page_table *pt)
 {
-	fill32_px(vm, pt,
-		  vm->pte_encode(vm->scratch_page.daddr, I915_CACHE_LLC, 0));
+	fill32_px(&ppgtt->base.vm, pt, ppgtt->scratch_pte);
 }
 
 static struct i915_page_directory *alloc_pd(struct i915_address_space *vm)
@@ -1562,32 +1553,36 @@ unwind:
  * space.
  *
  */
-static int gen8_ppgtt_init(struct i915_hw_ppgtt *ppgtt)
+static struct i915_hw_ppgtt *gen8_ppgtt_create(struct drm_i915_private *i915)
 {
-	struct i915_address_space *vm = &ppgtt->vm;
-	struct drm_i915_private *dev_priv = vm->i915;
-	int ret;
+	struct i915_hw_ppgtt *ppgtt;
+	int err;
 
-	ppgtt->vm.total = USES_FULL_48BIT_PPGTT(dev_priv) ?
+	ppgtt = kzalloc(sizeof(*ppgtt), GFP_KERNEL);
+	if (!ppgtt)
+		return ERR_PTR(-ENOMEM);
+
+	ppgtt->vm.i915 = i915;
+	ppgtt->vm.dma = &i915->drm.pdev->dev;
+
+	ppgtt->vm.total = USES_FULL_48BIT_PPGTT(i915) ?
 		1ULL << 48 :
 		1ULL << 32;
 
 	/* There are only a few exceptions for gen >= 6: chv and bxt.
 	 * And we are not sure about the latter, so play safe for now.
 	 */
-	if (IS_CHERRYVIEW(dev_priv) || IS_BROXTON(dev_priv))
+	if (IS_CHERRYVIEW(i915) || IS_BROXTON(i915))
 		ppgtt->vm.pt_kmap_wc = true;
 
-	ret = gen8_init_scratch(&ppgtt->vm);
-	if (ret) {
-		ppgtt->vm.total = 0;
-		return ret;
-	}
+	err = gen8_init_scratch(&ppgtt->vm);
+	if (err)
+		goto err_free;
 
-	if (use_4lvl(vm)) {
-		ret = setup_px(&ppgtt->vm, &ppgtt->pml4);
-		if (ret)
-			goto free_scratch;
+	if (use_4lvl(&ppgtt->vm)) {
+		err = setup_px(&ppgtt->vm, &ppgtt->pml4);
+		if (err)
+			goto err_scratch;
 
 		gen8_initialize_pml4(&ppgtt->vm, &ppgtt->pml4);
 
@@ -1595,15 +1590,15 @@ static int gen8_ppgtt_init(struct i915_hw_ppgtt *ppgtt)
 		ppgtt->vm.insert_entries = gen8_ppgtt_insert_4lvl;
 		ppgtt->vm.clear_range = gen8_ppgtt_clear_4lvl;
 	} else {
-		ret = __pdp_init(&ppgtt->vm, &ppgtt->pdp);
-		if (ret)
-			goto free_scratch;
+		err = __pdp_init(&ppgtt->vm, &ppgtt->pdp);
+		if (err)
+			goto err_scratch;
 
-		if (intel_vgpu_active(dev_priv)) {
-			ret = gen8_preallocate_top_level_pdp(ppgtt);
-			if (ret) {
+		if (intel_vgpu_active(i915)) {
+			err = gen8_preallocate_top_level_pdp(ppgtt);
+			if (err) {
 				__pdp_fini(&ppgtt->pdp);
-				goto free_scratch;
+				goto err_scratch;
 			}
 		}
 
@@ -1612,159 +1607,88 @@ static int gen8_ppgtt_init(struct i915_hw_ppgtt *ppgtt)
 		ppgtt->vm.clear_range = gen8_ppgtt_clear_3lvl;
 	}
 
-	if (intel_vgpu_active(dev_priv))
+	if (intel_vgpu_active(i915))
 		gen8_ppgtt_notify_vgt(ppgtt, true);
 
 	ppgtt->vm.cleanup = gen8_ppgtt_cleanup;
-	ppgtt->vm.bind_vma = gen8_ppgtt_bind_vma;
-	ppgtt->vm.unbind_vma = ppgtt_unbind_vma;
-	ppgtt->vm.set_pages = ppgtt_set_pages;
-	ppgtt->vm.clear_pages = clear_pages;
 	ppgtt->debug_dump = gen8_dump_ppgtt;
 
-	return 0;
+	ppgtt->vm.vma_ops.bind_vma    = ppgtt_bind_vma;
+	ppgtt->vm.vma_ops.unbind_vma  = ppgtt_unbind_vma;
+	ppgtt->vm.vma_ops.set_pages   = ppgtt_set_pages;
+	ppgtt->vm.vma_ops.clear_pages = clear_pages;
 
-free_scratch:
+	return ppgtt;
+
+err_scratch:
 	gen8_free_scratch(&ppgtt->vm);
-	return ret;
+err_free:
+	kfree(ppgtt);
+	return ERR_PTR(err);
 }
 
-static void gen6_dump_ppgtt(struct i915_hw_ppgtt *ppgtt, struct seq_file *m)
+static void gen6_dump_ppgtt(struct i915_hw_ppgtt *base, struct seq_file *m)
 {
-	struct i915_address_space *vm = &ppgtt->vm;
-	struct i915_page_table *unused;
-	gen6_pte_t scratch_pte;
-	u32 pd_entry, pte, pde;
-	u32 start = 0, length = ppgtt->vm.total;
+	struct gen6_hw_ppgtt *ppgtt = to_gen6_ppgtt(base);
+	const gen6_pte_t scratch_pte = ppgtt->scratch_pte;
+	struct i915_page_table *pt;
+	u32 pte, pde;
 
-	scratch_pte = vm->pte_encode(vm->scratch_page.daddr,
-				     I915_CACHE_LLC, 0);
+	gen6_for_all_pdes(pt, &base->pd, pde) {
+		gen6_pte_t *vaddr;
+
+		if (pt == base->vm.scratch_pt)
+			continue;
+
+		if (i915_vma_is_bound(ppgtt->vma, I915_VMA_GLOBAL_BIND)) {
+			u32 expected =
+				GEN6_PDE_ADDR_ENCODE(px_dma(pt)) |
+				GEN6_PDE_VALID;
+			u32 pd_entry = readl(ppgtt->pd_addr + pde);
+
+			if (pd_entry != expected)
+				seq_printf(m,
+					   "\tPDE #%d mismatch: Actual PDE: %x Expected PDE: %x\n",
+					   pde,
+					   pd_entry,
+					   expected);
 
-	gen6_for_each_pde(unused, &ppgtt->pd, start, length, pde) {
-		u32 expected;
-		gen6_pte_t *pt_vaddr;
-		const dma_addr_t pt_addr = px_dma(ppgtt->pd.page_table[pde]);
-		pd_entry = readl(ppgtt->pd_addr + pde);
-		expected = (GEN6_PDE_ADDR_ENCODE(pt_addr) | GEN6_PDE_VALID);
-
-		if (pd_entry != expected)
-			seq_printf(m, "\tPDE #%d mismatch: Actual PDE: %x Expected PDE: %x\n",
-				   pde,
-				   pd_entry,
-				   expected);
-		seq_printf(m, "\tPDE: %x\n", pd_entry);
-
-		pt_vaddr = kmap_atomic_px(ppgtt->pd.page_table[pde]);
-
-		for (pte = 0; pte < GEN6_PTES; pte+=4) {
-			unsigned long va =
-				(pde * PAGE_SIZE * GEN6_PTES) +
-				(pte * PAGE_SIZE);
+			seq_printf(m, "\tPDE: %x\n", pd_entry);
+		}
+
+		vaddr = kmap_atomic_px(base->pd.page_table[pde]);
+		for (pte = 0; pte < GEN6_PTES; pte += 4) {
 			int i;
-			bool found = false;
+
 			for (i = 0; i < 4; i++)
-				if (pt_vaddr[pte + i] != scratch_pte)
-					found = true;
-			if (!found)
+				if (vaddr[pte + i] != scratch_pte)
+					break;
+			if (i == 4)
 				continue;
 
-			seq_printf(m, "\t\t0x%lx [%03d,%04d]: =", va, pde, pte);
+			seq_printf(m, "\t\t(%03d, %04d) %08lx: ",
+				   pde, pte,
+				   (pde * GEN6_PTES + pte) * PAGE_SIZE);
 			for (i = 0; i < 4; i++) {
-				if (pt_vaddr[pte + i] != scratch_pte)
-					seq_printf(m, " %08x", pt_vaddr[pte + i]);
+				if (vaddr[pte + i] != scratch_pte)
+					seq_printf(m, " %08x", vaddr[pte + i]);
 				else
-					seq_puts(m, "  SCRATCH ");
+					seq_puts(m, "  SCRATCH");
 			}
 			seq_puts(m, "\n");
 		}
-		kunmap_atomic(pt_vaddr);
+		kunmap_atomic(vaddr);
 	}
 }
 
 /* Write pde (index) from the page directory @pd to the page table @pt */
-static inline void gen6_write_pde(const struct i915_hw_ppgtt *ppgtt,
+static inline void gen6_write_pde(const struct gen6_hw_ppgtt *ppgtt,
 				  const unsigned int pde,
 				  const struct i915_page_table *pt)
 {
 	/* Caller needs to make sure the write completes if necessary */
-	writel_relaxed(GEN6_PDE_ADDR_ENCODE(px_dma(pt)) | GEN6_PDE_VALID,
-		       ppgtt->pd_addr + pde);
-}
-
-/* Write all the page tables found in the ppgtt structure to incrementing page
- * directories. */
-static void gen6_write_page_range(struct i915_hw_ppgtt *ppgtt,
-				  u32 start, u32 length)
-{
-	struct i915_page_table *pt;
-	unsigned int pde;
-
-	gen6_for_each_pde(pt, &ppgtt->pd, start, length, pde)
-		gen6_write_pde(ppgtt, pde, pt);
-
-	mark_tlbs_dirty(ppgtt);
-	wmb();
-}
-
-static inline u32 get_pd_offset(struct i915_hw_ppgtt *ppgtt)
-{
-	GEM_BUG_ON(ppgtt->pd.base.ggtt_offset & 0x3f);
-	return ppgtt->pd.base.ggtt_offset << 10;
-}
-
-static int hsw_mm_switch(struct i915_hw_ppgtt *ppgtt,
-			 struct i915_request *rq)
-{
-	struct intel_engine_cs *engine = rq->engine;
-	u32 *cs;
-
-	/* NB: TLBs must be flushed and invalidated before a switch */
-	cs = intel_ring_begin(rq, 6);
-	if (IS_ERR(cs))
-		return PTR_ERR(cs);
-
-	*cs++ = MI_LOAD_REGISTER_IMM(2);
-	*cs++ = i915_mmio_reg_offset(RING_PP_DIR_DCLV(engine));
-	*cs++ = PP_DIR_DCLV_2G;
-	*cs++ = i915_mmio_reg_offset(RING_PP_DIR_BASE(engine));
-	*cs++ = get_pd_offset(ppgtt);
-	*cs++ = MI_NOOP;
-	intel_ring_advance(rq, cs);
-
-	return 0;
-}
-
-static int gen7_mm_switch(struct i915_hw_ppgtt *ppgtt,
-			  struct i915_request *rq)
-{
-	struct intel_engine_cs *engine = rq->engine;
-	u32 *cs;
-
-	/* NB: TLBs must be flushed and invalidated before a switch */
-	cs = intel_ring_begin(rq, 6);
-	if (IS_ERR(cs))
-		return PTR_ERR(cs);
-
-	*cs++ = MI_LOAD_REGISTER_IMM(2);
-	*cs++ = i915_mmio_reg_offset(RING_PP_DIR_DCLV(engine));
-	*cs++ = PP_DIR_DCLV_2G;
-	*cs++ = i915_mmio_reg_offset(RING_PP_DIR_BASE(engine));
-	*cs++ = get_pd_offset(ppgtt);
-	*cs++ = MI_NOOP;
-	intel_ring_advance(rq, cs);
-
-	return 0;
-}
-
-static int gen6_mm_switch(struct i915_hw_ppgtt *ppgtt,
-			  struct i915_request *rq)
-{
-	struct intel_engine_cs *engine = rq->engine;
-	struct drm_i915_private *dev_priv = rq->i915;
-
-	I915_WRITE(RING_PP_DIR_DCLV(engine), PP_DIR_DCLV_2G);
-	I915_WRITE(RING_PP_DIR_BASE(engine), get_pd_offset(ppgtt));
-	return 0;
+	iowrite32(GEN6_PDE_ADDR_ENCODE(px_dma(pt)) | GEN6_PDE_VALID,
+		  ppgtt->pd_addr + pde);
 }
 
 static void gen8_ppgtt_enable(struct drm_i915_private *dev_priv)
@@ -1826,22 +1750,30 @@ static void gen6_ppgtt_enable(struct drm_i915_private *dev_priv)
 static void gen6_ppgtt_clear_range(struct i915_address_space *vm,
 				   u64 start, u64 length)
 {
-	struct i915_hw_ppgtt *ppgtt = i915_vm_to_ppgtt(vm);
+	struct gen6_hw_ppgtt *ppgtt = to_gen6_ppgtt(i915_vm_to_ppgtt(vm));
 	unsigned int first_entry = start >> PAGE_SHIFT;
 	unsigned int pde = first_entry / GEN6_PTES;
 	unsigned int pte = first_entry % GEN6_PTES;
 	unsigned int num_entries = length >> PAGE_SHIFT;
-	gen6_pte_t scratch_pte =
-		vm->pte_encode(vm->scratch_page.daddr, I915_CACHE_LLC, 0);
+	const gen6_pte_t scratch_pte = ppgtt->scratch_pte;
 
 	while (num_entries) {
-		struct i915_page_table *pt = ppgtt->pd.page_table[pde++];
-		unsigned int end = min(pte + num_entries, GEN6_PTES);
+		struct i915_page_table *pt = ppgtt->base.pd.page_table[pde++];
+		const unsigned int end = min(pte + num_entries, GEN6_PTES);
+		const unsigned int count = end - pte;
 		gen6_pte_t *vaddr;
 
-		num_entries -= end - pte;
+		GEM_BUG_ON(pt == vm->scratch_pt);
+
+		num_entries -= count;
 
-		/* Note that the hw doesn't support removing PDE on the fly
+		GEM_BUG_ON(count > pt->used_ptes);
+		pt->used_ptes -= count;
+		if (!pt->used_ptes)
+			ppgtt->scan_for_unused_pt = true;
+
+		/*
+		 * Note that the hw doesn't support removing PDE on the fly
 		 * (they are cached inside the context with no means to
 		 * invalidate the cache), so we can only reset the PTE
 		 * entries back to scratch.
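gen6_ppgtt_clear_range() now keeps a per-page-table used_ptes count and flags scan_for_unused_pt once a table empties, so fully scratch-backed tables can later be reclaimed. The bookkeeping in isolation (a sketch; the flag handling is the caller's job in the real code):

#include <stdbool.h>
#include <stdio.h>

#define GEN6_PTES 1024   /* 4K table / 4-byte PTEs on gen6 */

struct page_table {
	unsigned int used_ptes;
};

static bool clear_range(struct page_table *pt, unsigned int count)
{
	/* count PTEs being pointed back at scratch */
	pt->used_ptes -= count;
	return pt->used_ptes == 0;   /* caller sets scan_for_unused_pt */
}

int main(void)
{
	struct page_table pt = { .used_ptes = GEN6_PTES };

	clear_range(&pt, 1000);
	printf("empty after second clear: %s\n",
	       clear_range(&pt, 24) ? "yes" : "no");   /* yes: 0 left */
	return 0;
}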
@@ -1870,6 +1802,8 @@ static void gen6_ppgtt_insert_entries(struct i915_address_space *vm,
 	struct sgt_dma iter = sgt_dma(vma);
 	gen6_pte_t *vaddr;
 
+	GEM_BUG_ON(ppgtt->pd.page_table[act_pt] == vm->scratch_pt);
+
 	vaddr = kmap_atomic_px(ppgtt->pd.page_table[act_pt]);
 	do {
 		vaddr[act_pte] = pte_encode | GEN6_PTE_ADDR_ENCODE(iter.dma);
@@ -1898,194 +1832,277 @@ static void gen6_ppgtt_insert_entries(struct i915_address_space *vm,
 static int gen6_alloc_va_range(struct i915_address_space *vm,
 			       u64 start, u64 length)
 {
-	struct i915_hw_ppgtt *ppgtt = i915_vm_to_ppgtt(vm);
+	struct gen6_hw_ppgtt *ppgtt = to_gen6_ppgtt(i915_vm_to_ppgtt(vm));
 	struct i915_page_table *pt;
 	u64 from = start;
 	unsigned int pde;
 	bool flush = false;
 
-	gen6_for_each_pde(pt, &ppgtt->pd, start, length, pde) {
+	gen6_for_each_pde(pt, &ppgtt->base.pd, start, length, pde) {
+		const unsigned int count = gen6_pte_count(start, length);
+
 		if (pt == vm->scratch_pt) {
 			pt = alloc_pt(vm);
 			if (IS_ERR(pt))
 				goto unwind_out;
 
-			gen6_initialize_pt(vm, pt);
-			ppgtt->pd.page_table[pde] = pt;
-			gen6_write_pde(ppgtt, pde, pt);
-			flush = true;
+			gen6_initialize_pt(ppgtt, pt);
+			ppgtt->base.pd.page_table[pde] = pt;
+
+			if (i915_vma_is_bound(ppgtt->vma,
+					      I915_VMA_GLOBAL_BIND)) {
+				gen6_write_pde(ppgtt, pde, pt);
+				flush = true;
+			}
+
+			GEM_BUG_ON(pt->used_ptes);
 		}
+
+		pt->used_ptes += count;
 	}
 
 	if (flush) {
-		mark_tlbs_dirty(ppgtt);
-		wmb();
+		mark_tlbs_dirty(&ppgtt->base);
+		gen6_ggtt_invalidate(ppgtt->base.vm.i915);
 	}
 
 	return 0;
 
 unwind_out:
-	gen6_ppgtt_clear_range(vm, from, start);
+	gen6_ppgtt_clear_range(vm, from, start - from);
 	return -ENOMEM;
 }
 
-static int gen6_init_scratch(struct i915_address_space *vm)
+static int gen6_ppgtt_init_scratch(struct gen6_hw_ppgtt *ppgtt)
 {
+	struct i915_address_space * const vm = &ppgtt->base.vm;
+	struct i915_page_table *unused;
+	u32 pde;
 	int ret;
 
 	ret = setup_scratch_page(vm, __GFP_HIGHMEM);
 	if (ret)
 		return ret;
 
+	ppgtt->scratch_pte =
+		vm->pte_encode(vm->scratch_page.daddr,
+			       I915_CACHE_NONE, PTE_READ_ONLY);
+
 	vm->scratch_pt = alloc_pt(vm);
 	if (IS_ERR(vm->scratch_pt)) {
 		cleanup_scratch_page(vm);
 		return PTR_ERR(vm->scratch_pt);
 	}
 
-	gen6_initialize_pt(vm, vm->scratch_pt);
+	gen6_initialize_pt(ppgtt, vm->scratch_pt);
+	gen6_for_all_pdes(unused, &ppgtt->base.pd, pde)
+		ppgtt->base.pd.page_table[pde] = vm->scratch_pt;
 
 	return 0;
 }
 
-static void gen6_free_scratch(struct i915_address_space *vm)
+static void gen6_ppgtt_free_scratch(struct i915_address_space *vm)
 {
 	free_pt(vm, vm->scratch_pt);
 	cleanup_scratch_page(vm);
 }
 
-static void gen6_ppgtt_cleanup(struct i915_address_space *vm)
+static void gen6_ppgtt_free_pd(struct gen6_hw_ppgtt *ppgtt)
 {
-	struct i915_hw_ppgtt *ppgtt = i915_vm_to_ppgtt(vm);
-	struct i915_page_directory *pd = &ppgtt->pd;
 	struct i915_page_table *pt;
 	u32 pde;
 
-	drm_mm_remove_node(&ppgtt->node);
+	gen6_for_all_pdes(pt, &ppgtt->base.pd, pde)
+		if (pt != ppgtt->base.vm.scratch_pt)
+			free_pt(&ppgtt->base.vm, pt);
+}
 
-	gen6_for_all_pdes(pt, pd, pde)
-		if (pt != vm->scratch_pt)
-			free_pt(vm, pt);
+static void gen6_ppgtt_cleanup(struct i915_address_space *vm)
+{
+	struct gen6_hw_ppgtt *ppgtt = to_gen6_ppgtt(i915_vm_to_ppgtt(vm));
+
+	i915_vma_destroy(ppgtt->vma);
 
-	gen6_free_scratch(vm);
+	gen6_ppgtt_free_pd(ppgtt);
+	gen6_ppgtt_free_scratch(vm);
 }
 
-static int gen6_ppgtt_allocate_page_directories(struct i915_hw_ppgtt *ppgtt)
+static int pd_vma_set_pages(struct i915_vma *vma)
 {
-	struct i915_address_space *vm = &ppgtt->vm;
-	struct drm_i915_private *dev_priv = ppgtt->vm.i915;
-	struct i915_ggtt *ggtt = &dev_priv->ggtt;
-	int ret;
+	vma->pages = ERR_PTR(-ENODEV);
+	return 0;
+}
 
-	/* PPGTT PDEs reside in the GGTT and consists of 512 entries. The
-	 * allocator works in address space sizes, so it's multiplied by page
-	 * size. We allocate at the top of the GTT to avoid fragmentation.
-	 */
-	BUG_ON(!drm_mm_initialized(&ggtt->vm.mm));
+static void pd_vma_clear_pages(struct i915_vma *vma)
+{
+	GEM_BUG_ON(!vma->pages);
 
-	ret = gen6_init_scratch(vm);
-	if (ret)
-		return ret;
+	vma->pages = NULL;
+}
 
-	ret = i915_gem_gtt_insert(&ggtt->vm, &ppgtt->node,
-				  GEN6_PD_SIZE, GEN6_PD_ALIGN,
-				  I915_COLOR_UNEVICTABLE,
-				  0, ggtt->vm.total,
-				  PIN_HIGH);
-	if (ret)
-		goto err_out;
+static int pd_vma_bind(struct i915_vma *vma,
+		       enum i915_cache_level cache_level,
+		       u32 unused)
+{
+	struct i915_ggtt *ggtt = i915_vm_to_ggtt(vma->vm);
+	struct gen6_hw_ppgtt *ppgtt = vma->private;
+	u32 ggtt_offset = i915_ggtt_offset(vma) / PAGE_SIZE;
+	struct i915_page_table *pt;
+	unsigned int pde;
 
-	if (ppgtt->node.start < ggtt->mappable_end)
-		DRM_DEBUG("Forced to use aperture for PDEs\n");
+	ppgtt->base.pd.base.ggtt_offset = ggtt_offset * sizeof(gen6_pte_t);
+	ppgtt->pd_addr = (gen6_pte_t __iomem *)ggtt->gsm + ggtt_offset;
 
-	ppgtt->pd.base.ggtt_offset =
-		ppgtt->node.start / PAGE_SIZE * sizeof(gen6_pte_t);
+	gen6_for_all_pdes(pt, &ppgtt->base.pd, pde)
+		gen6_write_pde(ppgtt, pde, pt);
 
-	ppgtt->pd_addr = (gen6_pte_t __iomem *)ggtt->gsm +
-		ppgtt->pd.base.ggtt_offset / sizeof(gen6_pte_t);
+	mark_tlbs_dirty(&ppgtt->base);
+	gen6_ggtt_invalidate(ppgtt->base.vm.i915);
 
 	return 0;
+}
 
-err_out:
-	gen6_free_scratch(vm);
-	return ret;
+static void pd_vma_unbind(struct i915_vma *vma)
+{
+	struct gen6_hw_ppgtt *ppgtt = vma->private;
+	struct i915_page_table * const scratch_pt = ppgtt->base.vm.scratch_pt;
+	struct i915_page_table *pt;
+	unsigned int pde;
+
+	if (!ppgtt->scan_for_unused_pt)
+		return;
+
+	/* Free all no longer used page tables */
+	gen6_for_all_pdes(pt, &ppgtt->base.pd, pde) {
+		if (pt->used_ptes || pt == scratch_pt)
+			continue;
+
+		free_pt(&ppgtt->base.vm, pt);
+		ppgtt->base.pd.page_table[pde] = scratch_pt;
+	}
+
+	ppgtt->scan_for_unused_pt = false;
 }
 
-static int gen6_ppgtt_alloc(struct i915_hw_ppgtt *ppgtt)
+static const struct i915_vma_ops pd_vma_ops = {
+	.set_pages = pd_vma_set_pages,
+	.clear_pages = pd_vma_clear_pages,
+	.bind_vma = pd_vma_bind,
+	.unbind_vma = pd_vma_unbind,
+};
+
+static struct i915_vma *pd_vma_create(struct gen6_hw_ppgtt *ppgtt, int size)
 {
-	return gen6_ppgtt_allocate_page_directories(ppgtt);
+	struct drm_i915_private *i915 = ppgtt->base.vm.i915;
+	struct i915_ggtt *ggtt = &i915->ggtt;
+	struct i915_vma *vma;
+	int i;
+
+	GEM_BUG_ON(!IS_ALIGNED(size, I915_GTT_PAGE_SIZE));
+	GEM_BUG_ON(size > ggtt->vm.total);
+
+	vma = kmem_cache_zalloc(i915->vmas, GFP_KERNEL);
+	if (!vma)
+		return ERR_PTR(-ENOMEM);
+
+	for (i = 0; i < ARRAY_SIZE(vma->last_read); i++)
+		init_request_active(&vma->last_read[i], NULL);
+	init_request_active(&vma->last_fence, NULL);
+
+	vma->vm = &ggtt->vm;
+	vma->ops = &pd_vma_ops;
+	vma->private = ppgtt;
+
+	vma->size = size;
+	vma->fence_size = size;
+	vma->flags = I915_VMA_GGTT;
+	vma->ggtt_view.type = I915_GGTT_VIEW_ROTATED; /* prevent fencing */
+
+	INIT_LIST_HEAD(&vma->obj_link);
+	list_add(&vma->vm_link, &vma->vm->unbound_list);
+
+	return vma;
 }
 
-static void gen6_scratch_va_range(struct i915_hw_ppgtt *ppgtt,
-				  u64 start, u64 length)
+int gen6_ppgtt_pin(struct i915_hw_ppgtt *base)
 {
-	struct i915_page_table *unused;
-	u32 pde;
+	struct gen6_hw_ppgtt *ppgtt = to_gen6_ppgtt(base);
 
-	gen6_for_each_pde(unused, &ppgtt->pd, start, length, pde)
-		ppgtt->pd.page_table[pde] = ppgtt->vm.scratch_pt;
+	/*
+	 * Work around the limited maximum vma->pin_count and the
+	 * aliasing_ppgtt, which will be pinned into every active context.
+	 * (When vma->pin_count becomes atomic, I expect we will naturally
+	 * need a larger, unpacked, type and kill this redundancy.)
+	 */
+	if (ppgtt->pin_count++)
+		return 0;
+
+	/*
+	 * PPGTT PDEs reside in the GGTT and consist of 512 entries. The
+	 * allocator works in address space sizes, so it's multiplied by page
+	 * size. We allocate at the top of the GTT to avoid fragmentation.
+	 */
+	return i915_vma_pin(ppgtt->vma,
+			    0, GEN6_PD_ALIGN,
+			    PIN_GLOBAL | PIN_HIGH);
 }
 
-static int gen6_ppgtt_init(struct i915_hw_ppgtt *ppgtt)
+void gen6_ppgtt_unpin(struct i915_hw_ppgtt *base)
 {
-	struct drm_i915_private *dev_priv = ppgtt->vm.i915;
-	struct i915_ggtt *ggtt = &dev_priv->ggtt;
-	int ret;
+	struct gen6_hw_ppgtt *ppgtt = to_gen6_ppgtt(base);
 
-	ppgtt->vm.pte_encode = ggtt->vm.pte_encode;
-	if (intel_vgpu_active(dev_priv) || IS_GEN6(dev_priv))
-		ppgtt->switch_mm = gen6_mm_switch;
-	else if (IS_HASWELL(dev_priv))
-		ppgtt->switch_mm = hsw_mm_switch;
-	else if (IS_GEN7(dev_priv))
-		ppgtt->switch_mm = gen7_mm_switch;
-	else
-		BUG();
+	GEM_BUG_ON(!ppgtt->pin_count);
+	if (--ppgtt->pin_count)
+		return;
 
-	ret = gen6_ppgtt_alloc(ppgtt);
-	if (ret)
-		return ret;
+	i915_vma_unpin(ppgtt->vma);
+}
 
-	ppgtt->vm.total = I915_PDES * GEN6_PTES * PAGE_SIZE;
+static struct i915_hw_ppgtt *gen6_ppgtt_create(struct drm_i915_private *i915)
+{
+	struct i915_ggtt * const ggtt = &i915->ggtt;
+	struct gen6_hw_ppgtt *ppgtt;
+	int err;
 
-	gen6_scratch_va_range(ppgtt, 0, ppgtt->vm.total);
-	gen6_write_page_range(ppgtt, 0, ppgtt->vm.total);
+	ppgtt = kzalloc(sizeof(*ppgtt), GFP_KERNEL);
+	if (!ppgtt)
+		return ERR_PTR(-ENOMEM);
 
-	ret = gen6_alloc_va_range(&ppgtt->vm, 0, ppgtt->vm.total);
-	if (ret) {
-		gen6_ppgtt_cleanup(&ppgtt->vm);
-		return ret;
-	}
+	ppgtt->base.vm.i915 = i915;
+	ppgtt->base.vm.dma = &i915->drm.pdev->dev;
 
-	ppgtt->vm.clear_range = gen6_ppgtt_clear_range;
-	ppgtt->vm.insert_entries = gen6_ppgtt_insert_entries;
-	ppgtt->vm.bind_vma = gen6_ppgtt_bind_vma;
-	ppgtt->vm.unbind_vma = ppgtt_unbind_vma;
-	ppgtt->vm.set_pages = ppgtt_set_pages;
-	ppgtt->vm.clear_pages = clear_pages;
-	ppgtt->vm.cleanup = gen6_ppgtt_cleanup;
-	ppgtt->debug_dump = gen6_dump_ppgtt;
+	ppgtt->base.vm.total = I915_PDES * GEN6_PTES * PAGE_SIZE;
 
-	DRM_DEBUG_DRIVER("Allocated pde space (%lldM) at GTT entry: %llx\n",
-			 ppgtt->node.size >> 20,
-			 ppgtt->node.start / PAGE_SIZE);
+	ppgtt->base.vm.allocate_va_range = gen6_alloc_va_range;
+	ppgtt->base.vm.clear_range = gen6_ppgtt_clear_range;
+	ppgtt->base.vm.insert_entries = gen6_ppgtt_insert_entries;
+	ppgtt->base.vm.cleanup = gen6_ppgtt_cleanup;
+	ppgtt->base.debug_dump = gen6_dump_ppgtt;
 
-	DRM_DEBUG_DRIVER("Adding PPGTT at offset %x\n",
-			 ppgtt->pd.base.ggtt_offset << 10);
+	ppgtt->base.vm.vma_ops.bind_vma    = ppgtt_bind_vma;
+	ppgtt->base.vm.vma_ops.unbind_vma  = ppgtt_unbind_vma;
+	ppgtt->base.vm.vma_ops.set_pages   = ppgtt_set_pages;
+	ppgtt->base.vm.vma_ops.clear_pages = clear_pages;
 
-	return 0;
-}
+	ppgtt->base.vm.pte_encode = ggtt->vm.pte_encode;
 
-static int __hw_ppgtt_init(struct i915_hw_ppgtt *ppgtt,
-			   struct drm_i915_private *dev_priv)
-{
-	ppgtt->vm.i915 = dev_priv;
-	ppgtt->vm.dma = &dev_priv->drm.pdev->dev;
+	err = gen6_ppgtt_init_scratch(ppgtt);
+	if (err)
+		goto err_free;
 
-	if (INTEL_GEN(dev_priv) < 8)
-		return gen6_ppgtt_init(ppgtt);
-	else
-		return gen8_ppgtt_init(ppgtt);
+	ppgtt->vma = pd_vma_create(ppgtt, GEN6_PD_SIZE);
+	if (IS_ERR(ppgtt->vma)) {
+		err = PTR_ERR(ppgtt->vma);
+		goto err_scratch;
+	}
+
+	return &ppgtt->base;
+
+err_scratch:
+	gen6_ppgtt_free_scratch(&ppgtt->base.vm);
+err_free:
+	kfree(ppgtt);
+	return ERR_PTR(err);
 }
 
 static void i915_address_space_init(struct i915_address_space *vm,
@@ -2171,26 +2188,28 @@ int i915_ppgtt_init_hw(struct drm_i915_private *dev_priv)
 	return 0;
 }
 
+static struct i915_hw_ppgtt *
+__hw_ppgtt_create(struct drm_i915_private *i915)
+{
+	if (INTEL_GEN(i915) < 8)
+		return gen6_ppgtt_create(i915);
+	else
+		return gen8_ppgtt_create(i915);
+}
+
 struct i915_hw_ppgtt *
-i915_ppgtt_create(struct drm_i915_private *dev_priv,
+i915_ppgtt_create(struct drm_i915_private *i915,
 		  struct drm_i915_file_private *fpriv,
 		  const char *name)
 {
 	struct i915_hw_ppgtt *ppgtt;
-	int ret;
-
-	ppgtt = kzalloc(sizeof(*ppgtt), GFP_KERNEL);
-	if (!ppgtt)
-		return ERR_PTR(-ENOMEM);
 
-	ret = __hw_ppgtt_init(ppgtt, dev_priv);
-	if (ret) {
-		kfree(ppgtt);
-		return ERR_PTR(ret);
-	}
+	ppgtt = __hw_ppgtt_create(i915);
+	if (IS_ERR(ppgtt))
+		return ppgtt;
 
 	kref_init(&ppgtt->ref);
-	i915_address_space_init(&ppgtt->vm, dev_priv, name);
+	i915_address_space_init(&ppgtt->vm, i915, name);
 	ppgtt->vm.file = fpriv;
 
 	trace_i915_ppgtt_create(&ppgtt->vm);
@@ -2674,8 +2693,7 @@ static int aliasing_gtt_bind_vma(struct i915_vma *vma,
 	if (flags & I915_VMA_LOCAL_BIND) {
 		struct i915_hw_ppgtt *appgtt = i915->mm.aliasing_ppgtt;
 
-		if (!(vma->flags & I915_VMA_LOCAL_BIND) &&
-		    appgtt->vm.allocate_va_range) {
+		if (!(vma->flags & I915_VMA_LOCAL_BIND)) {
 			ret = appgtt->vm.allocate_va_range(&appgtt->vm,
 							   vma->node.start,
 							   vma->size);
@@ -2774,30 +2792,28 @@ int i915_gem_init_aliasing_ppgtt(struct drm_i915_private *i915)
 	if (IS_ERR(ppgtt))
 		return PTR_ERR(ppgtt);
 
-	if (WARN_ON(ppgtt->vm.total < ggtt->vm.total)) {
+	if (GEM_WARN_ON(ppgtt->vm.total < ggtt->vm.total)) {
 		err = -ENODEV;
 		goto err_ppgtt;
 	}
 
-	if (ppgtt->vm.allocate_va_range) {
-		/* Note we only pre-allocate as far as the end of the global
-		 * GTT. On 48b / 4-level page-tables, the difference is very,
-		 * very significant! We have to preallocate as GVT/vgpu does
-		 * not like the page directory disappearing.
-		 */
-		err = ppgtt->vm.allocate_va_range(&ppgtt->vm,
-						  0, ggtt->vm.total);
-		if (err)
-			goto err_ppgtt;
-	}
+	/*
+	 * Note we only pre-allocate as far as the end of the global
+	 * GTT. On 48b / 4-level page-tables, the difference is very,
+	 * very significant! We have to preallocate as GVT/vgpu does
+	 * not like the page directory disappearing.
+	 */
+	err = ppgtt->vm.allocate_va_range(&ppgtt->vm, 0, ggtt->vm.total);
+	if (err)
+		goto err_ppgtt;
 
 	i915->mm.aliasing_ppgtt = ppgtt;
 
-	GEM_BUG_ON(ggtt->vm.bind_vma != ggtt_bind_vma);
-	ggtt->vm.bind_vma = aliasing_gtt_bind_vma;
+	GEM_BUG_ON(ggtt->vm.vma_ops.bind_vma != ggtt_bind_vma);
+	ggtt->vm.vma_ops.bind_vma = aliasing_gtt_bind_vma;
 
-	GEM_BUG_ON(ggtt->vm.unbind_vma != ggtt_unbind_vma);
-	ggtt->vm.unbind_vma = aliasing_gtt_unbind_vma;
+	GEM_BUG_ON(ggtt->vm.vma_ops.unbind_vma != ggtt_unbind_vma);
+	ggtt->vm.vma_ops.unbind_vma = aliasing_gtt_unbind_vma;
 
 	return 0;
 
@@ -2817,8 +2833,8 @@ void i915_gem_fini_aliasing_ppgtt(struct drm_i915_private *i915)
 
 	i915_ppgtt_put(ppgtt);
 
-	ggtt->vm.bind_vma = ggtt_bind_vma;
-	ggtt->vm.unbind_vma = ggtt_unbind_vma;
+	ggtt->vm.vma_ops.bind_vma   = ggtt_bind_vma;
+	ggtt->vm.vma_ops.unbind_vma = ggtt_unbind_vma;
 }
 
 int i915_gem_init_ggtt(struct drm_i915_private *dev_priv)
@@ -2886,15 +2902,11 @@ void i915_ggtt_cleanup_hw(struct drm_i915_private *dev_priv)
 	ggtt->vm.closed = true;
 
 	mutex_lock(&dev_priv->drm.struct_mutex);
+	i915_gem_fini_aliasing_ppgtt(dev_priv);
+
 	GEM_BUG_ON(!list_empty(&ggtt->vm.active_list));
 	list_for_each_entry_safe(vma, vn, &ggtt->vm.inactive_list, vm_link)
 		WARN_ON(i915_vma_unbind(vma));
-	mutex_unlock(&dev_priv->drm.struct_mutex);
-
-	i915_gem_cleanup_stolen(&dev_priv->drm);
-
-	mutex_lock(&dev_priv->drm.struct_mutex);
-	i915_gem_fini_aliasing_ppgtt(dev_priv);
 
 	if (drm_mm_node_allocated(&ggtt->error_capture))
 		drm_mm_remove_node(&ggtt->error_capture);
@@ -2916,6 +2928,8 @@ void i915_ggtt_cleanup_hw(struct drm_i915_private *dev_priv)
 
 	arch_phys_wc_del(ggtt->mtrr);
 	io_mapping_fini(&ggtt->iomap);
+
+	i915_gem_cleanup_stolen(&dev_priv->drm);
 }
 
 static unsigned int gen6_get_total_gtt_size(u16 snb_gmch_ctl)
@@ -3310,10 +3324,6 @@ static int gen8_gmch_probe(struct i915_ggtt *ggtt)
 
 	ggtt->vm.total = (size / sizeof(gen8_pte_t)) << PAGE_SHIFT;
 	ggtt->vm.cleanup = gen6_gmch_remove;
-	ggtt->vm.bind_vma = ggtt_bind_vma;
-	ggtt->vm.unbind_vma = ggtt_unbind_vma;
-	ggtt->vm.set_pages = ggtt_set_pages;
-	ggtt->vm.clear_pages = clear_pages;
 	ggtt->vm.insert_page = gen8_ggtt_insert_page;
 	ggtt->vm.clear_range = nop_clear_range;
 	if (!USES_FULL_PPGTT(dev_priv) || intel_scanout_needs_vtd_wa(dev_priv))
@@ -3331,6 +3341,11 @@ static int gen8_gmch_probe(struct i915_ggtt *ggtt)
 
 	ggtt->invalidate = gen6_ggtt_invalidate;
 
+	ggtt->vm.vma_ops.bind_vma    = ggtt_bind_vma;
+	ggtt->vm.vma_ops.unbind_vma  = ggtt_unbind_vma;
+	ggtt->vm.vma_ops.set_pages   = ggtt_set_pages;
+	ggtt->vm.vma_ops.clear_pages = clear_pages;
+
 	setup_private_pat(dev_priv);
 
 	return ggtt_probe_common(ggtt, size);
@@ -3370,10 +3385,6 @@ static int gen6_gmch_probe(struct i915_ggtt *ggtt)
 	ggtt->vm.clear_range = gen6_ggtt_clear_range;
 	ggtt->vm.insert_page = gen6_ggtt_insert_page;
 	ggtt->vm.insert_entries = gen6_ggtt_insert_entries;
-	ggtt->vm.bind_vma = ggtt_bind_vma;
-	ggtt->vm.unbind_vma = ggtt_unbind_vma;
-	ggtt->vm.set_pages = ggtt_set_pages;
-	ggtt->vm.clear_pages = clear_pages;
 	ggtt->vm.cleanup = gen6_gmch_remove;
 
 	ggtt->invalidate = gen6_ggtt_invalidate;
@@ -3389,6 +3400,11 @@ static int gen6_gmch_probe(struct i915_ggtt *ggtt)
 	else
 		ggtt->vm.pte_encode = snb_pte_encode;
 
+	ggtt->vm.vma_ops.bind_vma    = ggtt_bind_vma;
+	ggtt->vm.vma_ops.unbind_vma  = ggtt_unbind_vma;
+	ggtt->vm.vma_ops.set_pages   = ggtt_set_pages;
+	ggtt->vm.vma_ops.clear_pages = clear_pages;
+
 	return ggtt_probe_common(ggtt, size);
 }
 
@@ -3419,14 +3435,15 @@ static int i915_gmch_probe(struct i915_ggtt *ggtt)
 	ggtt->vm.insert_page = i915_ggtt_insert_page;
 	ggtt->vm.insert_entries = i915_ggtt_insert_entries;
 	ggtt->vm.clear_range = i915_ggtt_clear_range;
-	ggtt->vm.bind_vma = ggtt_bind_vma;
-	ggtt->vm.unbind_vma = ggtt_unbind_vma;
-	ggtt->vm.set_pages = ggtt_set_pages;
-	ggtt->vm.clear_pages = clear_pages;
 	ggtt->vm.cleanup = i915_gmch_remove;
 
 	ggtt->invalidate = gmch_ggtt_invalidate;
 
+	ggtt->vm.vma_ops.bind_vma    = ggtt_bind_vma;
+	ggtt->vm.vma_ops.unbind_vma  = ggtt_unbind_vma;
+	ggtt->vm.vma_ops.set_pages   = ggtt_set_pages;
+	ggtt->vm.vma_ops.clear_pages = clear_pages;
+
 	if (unlikely(ggtt->do_idle_maps))
 		DRM_INFO("applying Ironlake quirks for intel_iommu\n");
 
@@ -3588,11 +3605,15 @@ void i915_gem_restore_gtt_mappings(struct drm_i915_private *dev_priv)
 		if (!i915_vma_unbind(vma))
 			continue;
 
-		WARN_ON(i915_vma_bind(vma, obj->cache_level, PIN_UPDATE));
-		WARN_ON(i915_gem_object_set_to_gtt_domain(obj, false));
+		WARN_ON(i915_vma_bind(vma,
+				      obj ? obj->cache_level : 0,
+				      PIN_UPDATE));
+		if (obj)
+			WARN_ON(i915_gem_object_set_to_gtt_domain(obj, false));
 	}
 
 	ggtt->vm.closed = false;
+	i915_ggtt_invalidate(dev_priv);
 
 	if (INTEL_GEN(dev_priv) >= 8) {
 		struct intel_ppat *ppat = &dev_priv->ppat;
@@ -3601,25 +3622,6 @@ void i915_gem_restore_gtt_mappings(struct drm_i915_private *dev_priv)
 		dev_priv->ppat.update_hw(dev_priv);
 		return;
 	}
-
-	if (USES_PPGTT(dev_priv)) {
-		struct i915_address_space *vm;
-
-		list_for_each_entry(vm, &dev_priv->vm_list, global_link) {
-			struct i915_hw_ppgtt *ppgtt;
-
-			if (i915_is_ggtt(vm))
-				ppgtt = dev_priv->mm.aliasing_ppgtt;
-			else
-				ppgtt = i915_vm_to_ppgtt(vm);
-			if (!ppgtt)
-				continue;
-
-			gen6_write_page_range(ppgtt, 0, ppgtt->vm.total);
-		}
-	}
-
-	i915_ggtt_invalidate(dev_priv);
 }
 
 static struct scatterlist *

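The net effect of the i915_gem_gtt.c changes above: the gen6 page directory is no longer carved out of the GGTT at creation time, but lives behind a dedicated VMA that is bound on first use. A minimal caller-side sketch, assuming the kernel context of this patch (example_use_gen6_ppgtt itself is hypothetical; gen6_ppgtt_pin/unpin are the functions added above):

/* The first gen6_ppgtt_pin() binds the page-directory VMA into the
 * GGTT via pd_vma_bind(); nested pins only bump pin_count, and the
 * final gen6_ppgtt_unpin() drops the binding again. */
static int example_use_gen6_ppgtt(struct i915_hw_ppgtt *ppgtt)
{
	int err;

	err = gen6_ppgtt_pin(ppgtt);
	if (err)
		return err;

	/* ... work that needs the PDEs resident in the GGTT ... */

	gen6_ppgtt_unpin(ppgtt);
	return 0;
}
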
drivers/gpu/drm/i915/i915_gem_gtt.h | +42 -15

@@ -58,6 +58,7 @@
 
 struct drm_i915_file_private;
 struct drm_i915_fence_reg;
+struct i915_vma;
 
 typedef u32 gen6_pte_t;
 typedef u64 gen8_pte_t;
@@ -254,6 +255,21 @@ struct i915_pml4 {
 	struct i915_page_directory_pointer *pdps[GEN8_PML4ES_PER_PML4];
 };
 
+struct i915_vma_ops {
+	/* Map an object into an address space with the given cache flags. */
+	int (*bind_vma)(struct i915_vma *vma,
+			enum i915_cache_level cache_level,
+			u32 flags);
+	/*
+	 * Unmap an object from an address space. This usually consists of
+	 * setting the valid PTE entries to a reserved scratch page.
+	 */
+	void (*unbind_vma)(struct i915_vma *vma);
+
+	int (*set_pages)(struct i915_vma *vma);
+	void (*clear_pages)(struct i915_vma *vma);
+};
+
 struct i915_address_space {
 	struct drm_mm mm;
 	struct drm_i915_private *i915;
@@ -331,15 +347,8 @@ struct i915_address_space {
 			       enum i915_cache_level cache_level,
 			       u32 flags);
 	void (*cleanup)(struct i915_address_space *vm);
-	/** Unmap an object from an address space. This usually consists of
-	 * setting the valid PTE entries to a reserved scratch page. */
-	void (*unbind_vma)(struct i915_vma *vma);
-	/* Map an object into an address space with the given cache flags. */
-	int (*bind_vma)(struct i915_vma *vma,
-			enum i915_cache_level cache_level,
-			u32 flags);
-	int (*set_pages)(struct i915_vma *vma);
-	void (*clear_pages)(struct i915_vma *vma);
+
+	struct i915_vma_ops vma_ops;
 
 	I915_SELFTEST_DECLARE(struct fault_attr fault_attr);
 	I915_SELFTEST_DECLARE(bool scrub_64K);
@@ -387,7 +396,7 @@ struct i915_ggtt {
 struct i915_hw_ppgtt {
 	struct i915_address_space vm;
 	struct kref ref;
-	struct drm_mm_node node;
+
 	unsigned long pd_dirty_rings;
 	union {
 		struct i915_pml4 pml4;		/* GEN8+ & 48b PPGTT */
@@ -395,13 +404,28 @@ struct i915_hw_ppgtt {
 		struct i915_page_directory pd;		/* GEN6-7 */
 	};
 
+	void (*debug_dump)(struct i915_hw_ppgtt *ppgtt, struct seq_file *m);
+};
+
+struct gen6_hw_ppgtt {
+	struct i915_hw_ppgtt base;
+
+	struct i915_vma *vma;
 	gen6_pte_t __iomem *pd_addr;
+	gen6_pte_t scratch_pte;
 
-	int (*switch_mm)(struct i915_hw_ppgtt *ppgtt,
-			 struct i915_request *rq);
-	void (*debug_dump)(struct i915_hw_ppgtt *ppgtt, struct seq_file *m);
+	unsigned int pin_count;
+	bool scan_for_unused_pt;
 };
 
+#define __to_gen6_ppgtt(base) container_of(base, struct gen6_hw_ppgtt, base)
+
+static inline struct gen6_hw_ppgtt *to_gen6_ppgtt(struct i915_hw_ppgtt *base)
+{
+	BUILD_BUG_ON(offsetof(struct gen6_hw_ppgtt, base));
+	return __to_gen6_ppgtt(base);
+}
+
 /*
  * gen6_for_each_pde() iterates over every pde from start until start+length.
  * If start and start+length are not perfectly divisible, the macro will round
@@ -440,8 +464,8 @@ static inline u32 i915_pte_count(u64 addr, u64 length, unsigned int pde_shift)
 	const u64 mask = ~((1ULL << pde_shift) - 1);
 	u64 end;
 
-	WARN_ON(length == 0);
-	WARN_ON(offset_in_page(addr|length));
+	GEM_BUG_ON(length == 0);
+	GEM_BUG_ON(offset_in_page(addr | length));
 
 	end = addr + length;
 
@@ -605,6 +629,9 @@ static inline void i915_ppgtt_put(struct i915_hw_ppgtt *ppgtt)
 		kref_put(&ppgtt->ref, i915_ppgtt_release);
 }
 
+int gen6_ppgtt_pin(struct i915_hw_ppgtt *base);
+void gen6_ppgtt_unpin(struct i915_hw_ppgtt *base);
+
 void i915_check_and_clear_faults(struct drm_i915_private *dev_priv);
 void i915_gem_suspend_gtt_mappings(struct drm_i915_private *dev_priv);
 void i915_gem_restore_gtt_mappings(struct drm_i915_private *dev_priv);

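to_gen6_ppgtt() above is the classic container_of() downcast; it is only a plain pointer cast because struct i915_hw_ppgtt is the first member of struct gen6_hw_ppgtt, which the BUILD_BUG_ON enforces at compile time. A standalone sketch of the pattern (the names base/derived are illustrative, not from the patch):

#include <stddef.h>

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct base { int id; };

struct derived {
	struct base base;	/* must remain the first member */
	int extra;
};

static struct derived *to_derived(struct base *b)
{
	/* offsetof(struct derived, base) == 0, so this compiles down to
	 * a pure cast, exactly like __to_gen6_ppgtt(). */
	return container_of(b, struct derived, base);
}
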
drivers/gpu/drm/i915/i915_gpu_error.c | +3 -0

@@ -1050,6 +1050,9 @@ static u32 capture_error_bo(struct drm_i915_error_buffer *err,
 	int i = 0;
 
 	list_for_each_entry(vma, head, vm_link) {
+		if (!vma->obj)
+			continue;
+
 		if (pinned_only && !i915_vma_is_pinned(vma))
 			continue;
 

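The capture_error_bo() guard is needed because pd_vma_create() now puts VMAs with no backing GEM object on the vm lists, so any walker that dereferences vma->obj must skip them. The guard in isolation (a sketch; capture_object_state() is a stand-in, not a real helper):

list_for_each_entry(vma, head, vm_link) {
	if (!vma->obj)	/* page-directory VMAs carry no GEM object */
		continue;

	capture_object_state(vma->obj);	/* stand-in for the real capture */
}
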
drivers/gpu/drm/i915/i915_irq.c | +176 -6

@@ -115,6 +115,13 @@ static const u32 hpd_bxt[HPD_NUM_PINS] = {
 	[HPD_PORT_C] = BXT_DE_PORT_HP_DDIC
 };
 
+static const u32 hpd_gen11[HPD_NUM_PINS] = {
+	[HPD_PORT_C] = GEN11_TC1_HOTPLUG | GEN11_TBT1_HOTPLUG,
+	[HPD_PORT_D] = GEN11_TC2_HOTPLUG | GEN11_TBT2_HOTPLUG,
+	[HPD_PORT_E] = GEN11_TC3_HOTPLUG | GEN11_TBT3_HOTPLUG,
+	[HPD_PORT_F] = GEN11_TC4_HOTPLUG | GEN11_TBT4_HOTPLUG
+};
+
 /* IIR can theoretically queue up two events. Be paranoid. */
 #define GEN8_IRQ_RESET_NDX(type, which) do { \
 	I915_WRITE(GEN8_##type##_IMR(which), 0xffffffff); \
@@ -1549,6 +1556,22 @@ static void gen8_gt_irq_handler(struct drm_i915_private *i915,
 	}
 }
 
+static bool gen11_port_hotplug_long_detect(enum port port, u32 val)
+{
+	switch (port) {
+	case PORT_C:
+		return val & GEN11_HOTPLUG_CTL_LONG_DETECT(PORT_TC1);
+	case PORT_D:
+		return val & GEN11_HOTPLUG_CTL_LONG_DETECT(PORT_TC2);
+	case PORT_E:
+		return val & GEN11_HOTPLUG_CTL_LONG_DETECT(PORT_TC3);
+	case PORT_F:
+		return val & GEN11_HOTPLUG_CTL_LONG_DETECT(PORT_TC4);
+	default:
+		return false;
+	}
+}
+
 static bool bxt_port_hotplug_long_detect(enum port port, u32 val)
 {
 	switch (port) {
@@ -1893,9 +1916,17 @@ static void i9xx_pipestat_irq_ack(struct drm_i915_private *dev_priv,
 
 		/*
 		 * Clear the PIPE*STAT regs before the IIR
+		 *
+		 * Toggle the enable bits to make sure we get an
+		 * edge in the ISR pipe event bit if we don't clear
+		 * all the enabled status bits. Otherwise the edge
+		 * triggered IIR on i965/g4x wouldn't notice that
+		 * an interrupt is still pending.
 		 */
-		if (pipe_stats[pipe])
-			I915_WRITE(reg, enable_mask | pipe_stats[pipe]);
+		if (pipe_stats[pipe]) {
+			I915_WRITE(reg, pipe_stats[pipe]);
+			I915_WRITE(reg, enable_mask);
+		}
 	}
 	spin_unlock(&dev_priv->irq_lock);
 }
@@ -2590,6 +2621,40 @@ static void bxt_hpd_irq_handler(struct drm_i915_private *dev_priv,
 	intel_hpd_irq_handler(dev_priv, pin_mask, long_mask);
 }
 
+static void gen11_hpd_irq_handler(struct drm_i915_private *dev_priv, u32 iir)
+{
+	u32 pin_mask = 0, long_mask = 0;
+	u32 trigger_tc = iir & GEN11_DE_TC_HOTPLUG_MASK;
+	u32 trigger_tbt = iir & GEN11_DE_TBT_HOTPLUG_MASK;
+
+	if (trigger_tc) {
+		u32 dig_hotplug_reg;
+
+		dig_hotplug_reg = I915_READ(GEN11_TC_HOTPLUG_CTL);
+		I915_WRITE(GEN11_TC_HOTPLUG_CTL, dig_hotplug_reg);
+
+		intel_get_hpd_pins(dev_priv, &pin_mask, &long_mask, trigger_tc,
+				   dig_hotplug_reg, hpd_gen11,
+				   gen11_port_hotplug_long_detect);
+	}
+
+	if (trigger_tbt) {
+		u32 dig_hotplug_reg;
+
+		dig_hotplug_reg = I915_READ(GEN11_TBT_HOTPLUG_CTL);
+		I915_WRITE(GEN11_TBT_HOTPLUG_CTL, dig_hotplug_reg);
+
+		intel_get_hpd_pins(dev_priv, &pin_mask, &long_mask, trigger_tbt,
+				   dig_hotplug_reg, hpd_gen11,
+				   gen11_port_hotplug_long_detect);
+	}
+
+	if (pin_mask)
+		intel_hpd_irq_handler(dev_priv, pin_mask, long_mask);
+	else
+		DRM_ERROR("Unexpected DE HPD interrupt 0x%08x\n", iir);
+}
+
 static irqreturn_t
 gen8_de_irq_handler(struct drm_i915_private *dev_priv, u32 master_ctl)
 {
@@ -2625,6 +2690,17 @@ gen8_de_irq_handler(struct drm_i915_private *dev_priv, u32 master_ctl)
 			DRM_ERROR("The master control interrupt lied (DE MISC)!\n");
 	}
 
+	if (INTEL_GEN(dev_priv) >= 11 && (master_ctl & GEN11_DE_HPD_IRQ)) {
+		iir = I915_READ(GEN11_DE_HPD_IIR);
+		if (iir) {
+			I915_WRITE(GEN11_DE_HPD_IIR, iir);
+			ret = IRQ_HANDLED;
+			gen11_hpd_irq_handler(dev_priv, iir);
+		} else {
+			DRM_ERROR("The master control interrupt lied (DE HPD)!\n");
+		}
+	}
+
 	if (master_ctl & GEN8_DE_PORT_IRQ) {
 		iir = I915_READ(GEN8_DE_PORT_IIR);
 		if (iir) {
@@ -2640,6 +2716,9 @@ gen8_de_irq_handler(struct drm_i915_private *dev_priv, u32 master_ctl)
 					    GEN9_AUX_CHANNEL_C |
 					    GEN9_AUX_CHANNEL_D;
 
+			if (INTEL_GEN(dev_priv) >= 11)
+				tmp_mask |= ICL_AUX_CHANNEL_E;
+
 			if (IS_CNL_WITH_PORT_F(dev_priv) ||
 			    INTEL_GEN(dev_priv) >= 11)
 				tmp_mask |= CNL_AUX_CHANNEL_F;
@@ -2943,11 +3022,44 @@ gen11_gt_irq_handler(struct drm_i915_private * const i915,
 	spin_unlock(&i915->irq_lock);
 }
 
+static void
+gen11_gu_misc_irq_ack(struct drm_i915_private *dev_priv, const u32 master_ctl,
+		      u32 *iir)
+{
+	void __iomem * const regs = dev_priv->regs;
+
+	if (!(master_ctl & GEN11_GU_MISC_IRQ))
+		return;
+
+	*iir = raw_reg_read(regs, GEN11_GU_MISC_IIR);
+	if (likely(*iir))
+		raw_reg_write(regs, GEN11_GU_MISC_IIR, *iir);
+}
+
+static void
+gen11_gu_misc_irq_handler(struct drm_i915_private *dev_priv,
+			  const u32 master_ctl, const u32 iir)
+{
+	if (!(master_ctl & GEN11_GU_MISC_IRQ))
+		return;
+
+	if (unlikely(!iir)) {
+		DRM_ERROR("GU_MISC iir blank!\n");
+		return;
+	}
+
+	if (iir & GEN11_GU_MISC_GSE)
+		intel_opregion_asle_intr(dev_priv);
+	else
+		DRM_ERROR("Unexpected GU_MISC interrupt 0x%x\n", iir);
+}
+
 static irqreturn_t gen11_irq_handler(int irq, void *arg)
 {
 	struct drm_i915_private * const i915 = to_i915(arg);
 	void __iomem * const regs = i915->regs;
 	u32 master_ctl;
+	u32 gu_misc_iir;
 
 	if (!intel_irqs_enabled(i915))
 		return IRQ_NONE;
@@ -2976,9 +3088,13 @@ static irqreturn_t gen11_irq_handler(int irq, void *arg)
 		enable_rpm_wakeref_asserts(i915);
 	}
 
+	gen11_gu_misc_irq_ack(i915, master_ctl, &gu_misc_iir);
+
 	/* Acknowledge and enable interrupts. */
 	raw_reg_write(regs, GEN11_GFX_MSTR_IRQ, GEN11_MASTER_IRQ | master_ctl);
 
+	gen11_gu_misc_irq_handler(i915, master_ctl, gu_misc_iir);
+
 	return IRQ_HANDLED;
 }
 
@@ -3465,6 +3581,8 @@ static void gen11_irq_reset(struct drm_device *dev)
 
 	GEN3_IRQ_RESET(GEN8_DE_PORT_);
 	GEN3_IRQ_RESET(GEN8_DE_MISC_);
+	GEN3_IRQ_RESET(GEN11_DE_HPD_);
+	GEN3_IRQ_RESET(GEN11_GU_MISC_);
 	GEN3_IRQ_RESET(GEN8_PCU_);
 }
 
@@ -3582,6 +3700,41 @@ static void ibx_hpd_irq_setup(struct drm_i915_private *dev_priv)
 	ibx_hpd_detection_setup(dev_priv);
 }
 
+static void gen11_hpd_detection_setup(struct drm_i915_private *dev_priv)
+{
+	u32 hotplug;
+
+	hotplug = I915_READ(GEN11_TC_HOTPLUG_CTL);
+	hotplug |= GEN11_HOTPLUG_CTL_ENABLE(PORT_TC1) |
+		   GEN11_HOTPLUG_CTL_ENABLE(PORT_TC2) |
+		   GEN11_HOTPLUG_CTL_ENABLE(PORT_TC3) |
+		   GEN11_HOTPLUG_CTL_ENABLE(PORT_TC4);
+	I915_WRITE(GEN11_TC_HOTPLUG_CTL, hotplug);
+
+	hotplug = I915_READ(GEN11_TBT_HOTPLUG_CTL);
+	hotplug |= GEN11_HOTPLUG_CTL_ENABLE(PORT_TC1) |
+		   GEN11_HOTPLUG_CTL_ENABLE(PORT_TC2) |
+		   GEN11_HOTPLUG_CTL_ENABLE(PORT_TC3) |
+		   GEN11_HOTPLUG_CTL_ENABLE(PORT_TC4);
+	I915_WRITE(GEN11_TBT_HOTPLUG_CTL, hotplug);
+}
+
+static void gen11_hpd_irq_setup(struct drm_i915_private *dev_priv)
+{
+	u32 hotplug_irqs, enabled_irqs;
+	u32 val;
+
+	enabled_irqs = intel_hpd_enabled_irqs(dev_priv, hpd_gen11);
+	hotplug_irqs = GEN11_DE_TC_HOTPLUG_MASK | GEN11_DE_TBT_HOTPLUG_MASK;
+
+	val = I915_READ(GEN11_DE_HPD_IMR);
+	val &= ~hotplug_irqs;
+	I915_WRITE(GEN11_DE_HPD_IMR, val);
+	POSTING_READ(GEN11_DE_HPD_IMR);
+
+	gen11_hpd_detection_setup(dev_priv);
+}
+
 static void spt_hpd_detection_setup(struct drm_i915_private *dev_priv)
 {
 	u32 val, hotplug;
@@ -3908,9 +4061,12 @@ static void gen8_de_irq_postinstall(struct drm_i915_private *dev_priv)
 	uint32_t de_pipe_enables;
 	u32 de_port_masked = GEN8_AUX_CHANNEL_A;
 	u32 de_port_enables;
-	u32 de_misc_masked = GEN8_DE_MISC_GSE | GEN8_DE_EDP_PSR;
+	u32 de_misc_masked = GEN8_DE_EDP_PSR;
 	enum pipe pipe;
 
+	if (INTEL_GEN(dev_priv) <= 10)
+		de_misc_masked |= GEN8_DE_MISC_GSE;
+
 	if (INTEL_GEN(dev_priv) >= 9) {
 		de_pipe_masked |= GEN9_DE_PIPE_IRQ_FAULT_ERRORS;
 		de_port_masked |= GEN9_AUX_CHANNEL_B | GEN9_AUX_CHANNEL_C |
@@ -3921,6 +4077,9 @@ static void gen8_de_irq_postinstall(struct drm_i915_private *dev_priv)
 		de_pipe_masked |= GEN8_DE_PIPE_IRQ_FAULT_ERRORS;
 	}
 
+	if (INTEL_GEN(dev_priv) >= 11)
+		de_port_masked |= ICL_AUX_CHANNEL_E;
+
 	if (IS_CNL_WITH_PORT_F(dev_priv) || INTEL_GEN(dev_priv) >= 11)
 		de_port_masked |= CNL_AUX_CHANNEL_F;
 
@@ -3949,10 +4108,18 @@ static void gen8_de_irq_postinstall(struct drm_i915_private *dev_priv)
 	GEN3_IRQ_INIT(GEN8_DE_PORT_, ~de_port_masked, de_port_enables);
 	GEN3_IRQ_INIT(GEN8_DE_MISC_, ~de_misc_masked, de_misc_masked);
 
-	if (IS_GEN9_LP(dev_priv))
+	if (INTEL_GEN(dev_priv) >= 11) {
+		u32 de_hpd_masked = 0;
+		u32 de_hpd_enables = GEN11_DE_TC_HOTPLUG_MASK |
+				     GEN11_DE_TBT_HOTPLUG_MASK;
+
+		GEN3_IRQ_INIT(GEN11_DE_HPD_, ~de_hpd_masked, de_hpd_enables);
+		gen11_hpd_detection_setup(dev_priv);
+	} else if (IS_GEN9_LP(dev_priv)) {
 		bxt_hpd_detection_setup(dev_priv);
-	else if (IS_BROADWELL(dev_priv))
+	} else if (IS_BROADWELL(dev_priv)) {
 		ilk_hpd_detection_setup(dev_priv);
+	}
 }
 
 static int gen8_irq_postinstall(struct drm_device *dev)
@@ -4004,10 +4171,13 @@ static void gen11_gt_irq_postinstall(struct drm_i915_private *dev_priv)
 static int gen11_irq_postinstall(struct drm_device *dev)
 {
 	struct drm_i915_private *dev_priv = dev->dev_private;
+	u32 gu_misc_masked = GEN11_GU_MISC_GSE;
 
 	gen11_gt_irq_postinstall(dev_priv);
 	gen8_de_irq_postinstall(dev_priv);
 
+	GEN3_IRQ_INIT(GEN11_GU_MISC_, ~gu_misc_masked, gu_misc_masked);
+
 	I915_WRITE(GEN11_DISPLAY_INT_CTL, GEN11_DISPLAY_IRQ_ENABLE);
 
 	I915_WRITE(GEN11_GFX_MSTR_IRQ, GEN11_MASTER_IRQ);
@@ -4471,7 +4641,7 @@ void intel_irq_init(struct drm_i915_private *dev_priv)
 		dev->driver->irq_uninstall = gen11_irq_reset;
 		dev->driver->enable_vblank = gen8_enable_vblank;
 		dev->driver->disable_vblank = gen8_disable_vblank;
-		dev_priv->display.hpd_irq_setup = spt_hpd_irq_setup;
+		dev_priv->display.hpd_irq_setup = gen11_hpd_irq_setup;
 	} else if (INTEL_GEN(dev_priv) >= 8) {
 		dev->driver->irq_handler = gen8_irq_handler;
 		dev->driver->irq_preinstall = gen8_irq_reset;

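Note the ordering in gen11_irq_handler() above: GU_MISC is acked (its IIR read and cleared) before the master interrupt is re-enabled, but handled only afterwards, so the slow path runs with interrupts re-armed. A sketch of that split, assuming kernel context (all register offsets and bit names here are placeholders, not the real GEN11 layout):

#define MASTER_IRQ_REG	0x0010		/* placeholder offsets */
#define MISC_IIR_REG	0x0014
#define MISC_IRQ	(1u << 29)	/* placeholder bits */
#define MISC_EVENT	(1u << 27)

static irqreturn_t example_irq_handler(void __iomem *regs)
{
	u32 master_ctl, misc_iir = 0;

	master_ctl = readl(regs + MASTER_IRQ_REG);
	if (!master_ctl)
		return IRQ_NONE;

	/* Ack: latch and clear the source IIR while still masked. */
	if (master_ctl & MISC_IRQ) {
		misc_iir = readl(regs + MISC_IIR_REG);
		if (misc_iir)
			writel(misc_iir, regs + MISC_IIR_REG);
	}

	/* Re-arm the master interrupt first... */
	writel(master_ctl, regs + MASTER_IRQ_REG);

	/* ...then do the potentially slow handling. */
	if (misc_iir & MISC_EVENT)
		handle_event();	/* handle_event(): stand-in */

	return IRQ_HANDLED;
}
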
drivers/gpu/drm/i915/i915_pci.c | +4 -1

@@ -657,12 +657,15 @@ static const struct pci_device_id pciidlist[] = {
 	INTEL_KBL_GT2_IDS(&intel_kabylake_gt2_info),
 	INTEL_KBL_GT3_IDS(&intel_kabylake_gt3_info),
 	INTEL_KBL_GT4_IDS(&intel_kabylake_gt3_info),
+	INTEL_AML_GT2_IDS(&intel_kabylake_gt2_info),
 	INTEL_CFL_S_GT1_IDS(&intel_coffeelake_gt1_info),
 	INTEL_CFL_S_GT2_IDS(&intel_coffeelake_gt2_info),
 	INTEL_CFL_H_GT2_IDS(&intel_coffeelake_gt2_info),
-	INTEL_CFL_U_GT1_IDS(&intel_coffeelake_gt1_info),
 	INTEL_CFL_U_GT2_IDS(&intel_coffeelake_gt2_info),
 	INTEL_CFL_U_GT3_IDS(&intel_coffeelake_gt3_info),
+	INTEL_WHL_U_GT1_IDS(&intel_coffeelake_gt1_info),
+	INTEL_WHL_U_GT2_IDS(&intel_coffeelake_gt2_info),
+	INTEL_WHL_U_GT3_IDS(&intel_coffeelake_gt3_info),
 	INTEL_CNL_IDS(&intel_cannonlake_info),
 	INTEL_ICL_11_IDS(&intel_icelake_11_info),
 	{0, 0, 0}

drivers/gpu/drm/i915/i915_perf.c | +9 -9

@@ -315,7 +315,7 @@ static u32 i915_oa_max_sample_rate = 100000;
  * code assumes all reports have a power-of-two size and ~(size - 1) can
  * be used as a mask to align the OA tail pointer.
  */
-static struct i915_oa_format hsw_oa_formats[I915_OA_FORMAT_MAX] = {
+static const struct i915_oa_format hsw_oa_formats[I915_OA_FORMAT_MAX] = {
 	[I915_OA_FORMAT_A13]	    = { 0, 64 },
 	[I915_OA_FORMAT_A29]	    = { 1, 128 },
 	[I915_OA_FORMAT_A13_B8_C8]  = { 2, 128 },
@@ -326,7 +326,7 @@ static struct i915_oa_format hsw_oa_formats[I915_OA_FORMAT_MAX] = {
 	[I915_OA_FORMAT_C4_B8]	    = { 7, 64 },
 };
 
-static struct i915_oa_format gen8_plus_oa_formats[I915_OA_FORMAT_MAX] = {
+static const struct i915_oa_format gen8_plus_oa_formats[I915_OA_FORMAT_MAX] = {
 	[I915_OA_FORMAT_A12]		    = { 0, 64 },
 	[I915_OA_FORMAT_A12_B8_C8]	    = { 2, 128 },
 	[I915_OA_FORMAT_A32u40_A4u32_B8_C8] = { 5, 256 },
@@ -1279,23 +1279,23 @@ static int oa_get_render_ctx_id(struct i915_perf_stream *stream)
 			i915->perf.oa.specific_ctx_id_mask =
 				(1U << (GEN8_CTX_ID_WIDTH - 1)) - 1;
 		} else {
-			i915->perf.oa.specific_ctx_id = stream->ctx->hw_id;
 			i915->perf.oa.specific_ctx_id_mask =
 				(1U << GEN8_CTX_ID_WIDTH) - 1;
+			i915->perf.oa.specific_ctx_id =
+				upper_32_bits(ce->lrc_desc);
+			i915->perf.oa.specific_ctx_id &=
+				i915->perf.oa.specific_ctx_id_mask;
 		}
 		break;
 
 	case 11: {
-		struct intel_engine_cs *engine = i915->engine[RCS];
-
-		i915->perf.oa.specific_ctx_id =
-			stream->ctx->hw_id << (GEN11_SW_CTX_ID_SHIFT - 32) |
-			engine->instance << (GEN11_ENGINE_INSTANCE_SHIFT - 32) |
-			engine->class << (GEN11_ENGINE_INSTANCE_SHIFT - 32);
 		i915->perf.oa.specific_ctx_id_mask =
 			((1U << GEN11_SW_CTX_ID_WIDTH) - 1) << (GEN11_SW_CTX_ID_SHIFT - 32) |
 			((1U << GEN11_ENGINE_INSTANCE_WIDTH) - 1) << (GEN11_ENGINE_INSTANCE_SHIFT - 32) |
 			((1 << GEN11_ENGINE_CLASS_WIDTH) - 1) << (GEN11_ENGINE_CLASS_SHIFT - 32);
+		i915->perf.oa.specific_ctx_id = upper_32_bits(ce->lrc_desc);
+		i915->perf.oa.specific_ctx_id &=
+			i915->perf.oa.specific_ctx_id_mask;
 		break;
 	}
 

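The i915_perf.c hunk switches the OA context ID from ctx->hw_id to the upper 32 bits of the pinned context's lrc_desc, masked down to the bits the OA unit actually compares. The extraction step in isolation (a standalone sketch; the mask layouts are the ones quoted in the diff):

#include <stdint.h>

static uint32_t oa_specific_ctx_id(uint64_t lrc_desc, uint32_t ctx_id_mask)
{
	/* upper_32_bits(lrc_desc) & mask, as in oa_get_render_ctx_id() */
	return (uint32_t)(lrc_desc >> 32) & ctx_id_mask;
}
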
drivers/gpu/drm/i915/i915_pvinfo.h | +4 -1

@@ -94,7 +94,10 @@ struct vgt_if {
 	u32 rsv5[4];
 
 	u32 g2v_notify;
-	u32 rsv6[7];
+	u32 rsv6[5];
+
+	u32 cursor_x_hot;
+	u32 cursor_y_hot;
 
 	struct {
 		u32 lo;

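The vgt_if change steals two u32s from a reserved block; this is layout-safe only because rsv6[5] plus the two hotspot fields occupy exactly the space rsv6[7] did. A standalone sketch with a compile-time guard making the invariant explicit (example_vgt_if is illustrative):

#include <stdint.h>

struct example_vgt_if {
	uint32_t rsv6[5];	/* was rsv6[7] */
	uint32_t cursor_x_hot;
	uint32_t cursor_y_hot;
};

/* Splitting a reserved array must not move anything that follows it
 * in the page shared with the hypervisor. */
_Static_assert(sizeof(struct example_vgt_if) == 7 * sizeof(uint32_t),
	       "reserved-field split changed the layout");
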
drivers/gpu/drm/i915/i915_reg.h | +1681 -1619

@@ -141,21 +141,22 @@ static inline bool i915_mmio_reg_valid(i915_reg_t reg)
 
 #define _PICK(__index, ...) (((const u32 []){ __VA_ARGS__ })[__index])
 
-#define _PIPE(pipe, a, b) ((a) + (pipe)*((b)-(a)))
+#define _PIPE(pipe, a, b) ((a) + (pipe) * ((b) - (a)))
 #define _MMIO_PIPE(pipe, a, b) _MMIO(_PIPE(pipe, a, b))
 #define _PLANE(plane, a, b) _PIPE(plane, a, b)
 #define _MMIO_PLANE(plane, a, b) _MMIO_PIPE(plane, a, b)
-#define _TRANS(tran, a, b) ((a) + (tran)*((b)-(a)))
+#define _TRANS(tran, a, b) ((a) + (tran) * ((b) - (a)))
 #define _MMIO_TRANS(tran, a, b) _MMIO(_TRANS(tran, a, b))
-#define _PORT(port, a, b) ((a) + (port)*((b)-(a)))
+#define _PORT(port, a, b) ((a) + (port) * ((b) - (a)))
 #define _MMIO_PORT(port, a, b) _MMIO(_PORT(port, a, b))
 #define _MMIO_PIPE3(pipe, a, b, c) _MMIO(_PICK(pipe, a, b, c))
 #define _MMIO_PORT3(pipe, a, b, c) _MMIO(_PICK(pipe, a, b, c))
-#define _PLL(pll, a, b) ((a) + (pll)*((b)-(a)))
+#define _PLL(pll, a, b) ((a) + (pll) * ((b) - (a)))
 #define _MMIO_PLL(pll, a, b) _MMIO(_PLL(pll, a, b))
 #define _PHY3(phy, ...) _PICK(phy, __VA_ARGS__)
 #define _MMIO_PHY3(phy, a, b, c) _MMIO(_PHY3(phy, a, b, c))
 
+#define __MASKED_FIELD(mask, value) ((mask) << 16 | (value))
 #define _MASKED_FIELD(mask, value) ({					   \
 	if (__builtin_constant_p(mask))					   \
 		BUILD_BUG_ON_MSG(((mask) & 0xffff0000), "Incorrect mask"); \
@@ -164,7 +165,7 @@ static inline bool i915_mmio_reg_valid(i915_reg_t reg)
 	if (__builtin_constant_p(mask) && __builtin_constant_p(value))	   \
 		BUILD_BUG_ON_MSG((value) & ~(mask),			   \
 				 "Incorrect value for mask");		   \
-	(mask) << 16 | (value); })
+	__MASKED_FIELD(mask, value); })
 #define _MASKED_BIT_ENABLE(a)	({ typeof(a) _a = (a); _MASKED_FIELD(_a, _a); })
 #define _MASKED_BIT_DISABLE(a)	(_MASKED_FIELD((a), 0))
 
@@ -270,19 +271,19 @@ static inline bool i915_mmio_reg_valid(i915_reg_t reg)
 
 
 #define ILK_GDSR _MMIO(MCHBAR_MIRROR_BASE + 0x2ca4)
-#define  ILK_GRDOM_FULL		(0<<1)
-#define  ILK_GRDOM_RENDER	(1<<1)
-#define  ILK_GRDOM_MEDIA	(3<<1)
-#define  ILK_GRDOM_MASK		(3<<1)
-#define  ILK_GRDOM_RESET_ENABLE (1<<0)
+#define  ILK_GRDOM_FULL		(0 << 1)
+#define  ILK_GRDOM_RENDER	(1 << 1)
+#define  ILK_GRDOM_MEDIA	(3 << 1)
+#define  ILK_GRDOM_MASK		(3 << 1)
+#define  ILK_GRDOM_RESET_ENABLE (1 << 0)
 
 #define GEN6_MBCUNIT_SNPCR	_MMIO(0x900c) /* for LLC config */
 #define   GEN6_MBC_SNPCR_SHIFT	21
-#define   GEN6_MBC_SNPCR_MASK	(3<<21)
-#define   GEN6_MBC_SNPCR_MAX	(0<<21)
-#define   GEN6_MBC_SNPCR_MED	(1<<21)
-#define   GEN6_MBC_SNPCR_LOW	(2<<21)
-#define   GEN6_MBC_SNPCR_MIN	(3<<21) /* only 1/16th of the cache is shared */
+#define   GEN6_MBC_SNPCR_MASK	(3 << 21)
+#define   GEN6_MBC_SNPCR_MAX	(0 << 21)
+#define   GEN6_MBC_SNPCR_MED	(1 << 21)
+#define   GEN6_MBC_SNPCR_LOW	(2 << 21)
+#define   GEN6_MBC_SNPCR_MIN	(3 << 21) /* only 1/16th of the cache is shared */
 
 #define VLV_G3DCTL		_MMIO(0x9024)
 #define VLV_GSCKGCTL		_MMIO(0x9028)
@@ -314,13 +315,13 @@ static inline bool i915_mmio_reg_valid(i915_reg_t reg)
 #define  GEN11_GRDOM_VECS		(1 << 13)
 #define  GEN11_GRDOM_VECS2		(1 << 14)
 
-#define RING_PP_DIR_BASE(engine)	_MMIO((engine)->mmio_base+0x228)
-#define RING_PP_DIR_BASE_READ(engine)	_MMIO((engine)->mmio_base+0x518)
-#define RING_PP_DIR_DCLV(engine)	_MMIO((engine)->mmio_base+0x220)
+#define RING_PP_DIR_BASE(engine)	_MMIO((engine)->mmio_base + 0x228)
+#define RING_PP_DIR_BASE_READ(engine)	_MMIO((engine)->mmio_base + 0x518)
+#define RING_PP_DIR_DCLV(engine)	_MMIO((engine)->mmio_base + 0x220)
 #define   PP_DIR_DCLV_2G		0xffffffff
 
-#define GEN8_RING_PDP_UDW(engine, n)	_MMIO((engine)->mmio_base+0x270 + (n) * 8 + 4)
-#define GEN8_RING_PDP_LDW(engine, n)	_MMIO((engine)->mmio_base+0x270 + (n) * 8)
+#define GEN8_RING_PDP_UDW(engine, n)	_MMIO((engine)->mmio_base + 0x270 + (n) * 8 + 4)
+#define GEN8_RING_PDP_LDW(engine, n)	_MMIO((engine)->mmio_base + 0x270 + (n) * 8)
 
 #define GEN8_R_PWR_CLK_STATE		_MMIO(0x20C8)
 #define   GEN8_RPCS_ENABLE		(1 << 31)
@@ -358,25 +359,25 @@ static inline bool i915_mmio_reg_valid(i915_reg_t reg)
 #define   GEN8_SELECTIVE_READ_ADDRESSING_ENABLE         (1 << 13)
 
 #define GAM_ECOCHK			_MMIO(0x4090)
-#define   BDW_DISABLE_HDC_INVALIDATION	(1<<25)
-#define   ECOCHK_SNB_BIT		(1<<10)
-#define   ECOCHK_DIS_TLB		(1<<8)
-#define   HSW_ECOCHK_ARB_PRIO_SOL	(1<<6)
-#define   ECOCHK_PPGTT_CACHE64B		(0x3<<3)
-#define   ECOCHK_PPGTT_CACHE4B		(0x0<<3)
-#define   ECOCHK_PPGTT_GFDT_IVB		(0x1<<4)
-#define   ECOCHK_PPGTT_LLC_IVB		(0x1<<3)
-#define   ECOCHK_PPGTT_UC_HSW		(0x1<<3)
-#define   ECOCHK_PPGTT_WT_HSW		(0x2<<3)
-#define   ECOCHK_PPGTT_WB_HSW		(0x3<<3)
+#define   BDW_DISABLE_HDC_INVALIDATION	(1 << 25)
+#define   ECOCHK_SNB_BIT		(1 << 10)
+#define   ECOCHK_DIS_TLB		(1 << 8)
+#define   HSW_ECOCHK_ARB_PRIO_SOL	(1 << 6)
+#define   ECOCHK_PPGTT_CACHE64B		(0x3 << 3)
+#define   ECOCHK_PPGTT_CACHE4B		(0x0 << 3)
+#define   ECOCHK_PPGTT_GFDT_IVB		(0x1 << 4)
+#define   ECOCHK_PPGTT_LLC_IVB		(0x1 << 3)
+#define   ECOCHK_PPGTT_UC_HSW		(0x1 << 3)
+#define   ECOCHK_PPGTT_WT_HSW		(0x2 << 3)
+#define   ECOCHK_PPGTT_WB_HSW		(0x3 << 3)
 
 #define GAC_ECO_BITS			_MMIO(0x14090)
-#define   ECOBITS_SNB_BIT		(1<<13)
-#define   ECOBITS_PPGTT_CACHE64B	(3<<8)
-#define   ECOBITS_PPGTT_CACHE4B		(0<<8)
+#define   ECOBITS_SNB_BIT		(1 << 13)
+#define   ECOBITS_PPGTT_CACHE64B	(3 << 8)
+#define   ECOBITS_PPGTT_CACHE4B		(0 << 8)
 
 #define GAB_CTL				_MMIO(0x24000)
-#define   GAB_CTL_CONT_AFTER_PAGEFAULT	(1<<8)
+#define   GAB_CTL_CONT_AFTER_PAGEFAULT	(1 << 8)
 
 #define GEN6_STOLEN_RESERVED		_MMIO(0x1082C0)
 #define GEN6_STOLEN_RESERVED_ADDR_MASK	(0xFFF << 20)
@@ -404,15 +405,15 @@ static inline bool i915_mmio_reg_valid(i915_reg_t reg)
 #define _VGA_MSR_WRITE _MMIO(0x3c2)
 #define VGA_MSR_WRITE 0x3c2
 #define VGA_MSR_READ 0x3cc
-#define   VGA_MSR_MEM_EN (1<<1)
-#define   VGA_MSR_CGA_MODE (1<<0)
+#define   VGA_MSR_MEM_EN (1 << 1)
+#define   VGA_MSR_CGA_MODE (1 << 0)
 
 #define VGA_SR_INDEX 0x3c4
 #define SR01			1
 #define VGA_SR_DATA 0x3c5
 
 #define VGA_AR_INDEX 0x3c0
-#define   VGA_AR_VID_EN (1<<5)
+#define   VGA_AR_VID_EN (1 << 5)
 #define VGA_AR_DATA_WRITE 0x3c0
 #define VGA_AR_DATA_READ 0x3c1
 
@@ -445,8 +446,8 @@ static inline bool i915_mmio_reg_valid(i915_reg_t reg)
 #define MI_PREDICATE_SRC1_UDW	_MMIO(0x2408 + 4)
 
 #define MI_PREDICATE_RESULT_2	_MMIO(0x2214)
-#define  LOWER_SLICE_ENABLED	(1<<0)
-#define  LOWER_SLICE_DISABLED	(0<<0)
+#define  LOWER_SLICE_ENABLED	(1 << 0)
+#define  LOWER_SLICE_DISABLED	(0 << 0)
 
 /*
  * Registers used only by the command parser
@@ -504,47 +505,47 @@ static inline bool i915_mmio_reg_valid(i915_reg_t reg)
 #define  GEN7_OACONTROL_CTX_MASK	    0xFFFFF000
 #define  GEN7_OACONTROL_TIMER_PERIOD_MASK   0x3F
 #define  GEN7_OACONTROL_TIMER_PERIOD_SHIFT  6
-#define  GEN7_OACONTROL_TIMER_ENABLE	    (1<<5)
-#define  GEN7_OACONTROL_FORMAT_A13	    (0<<2)
-#define  GEN7_OACONTROL_FORMAT_A29	    (1<<2)
-#define  GEN7_OACONTROL_FORMAT_A13_B8_C8    (2<<2)
-#define  GEN7_OACONTROL_FORMAT_A29_B8_C8    (3<<2)
-#define  GEN7_OACONTROL_FORMAT_B4_C8	    (4<<2)
-#define  GEN7_OACONTROL_FORMAT_A45_B8_C8    (5<<2)
-#define  GEN7_OACONTROL_FORMAT_B4_C8_A16    (6<<2)
-#define  GEN7_OACONTROL_FORMAT_C4_B8	    (7<<2)
+#define  GEN7_OACONTROL_TIMER_ENABLE	    (1 << 5)
+#define  GEN7_OACONTROL_FORMAT_A13	    (0 << 2)
+#define  GEN7_OACONTROL_FORMAT_A29	    (1 << 2)
+#define  GEN7_OACONTROL_FORMAT_A13_B8_C8    (2 << 2)
+#define  GEN7_OACONTROL_FORMAT_A29_B8_C8    (3 << 2)
+#define  GEN7_OACONTROL_FORMAT_B4_C8	    (4 << 2)
+#define  GEN7_OACONTROL_FORMAT_A45_B8_C8    (5 << 2)
+#define  GEN7_OACONTROL_FORMAT_B4_C8_A16    (6 << 2)
+#define  GEN7_OACONTROL_FORMAT_C4_B8	    (7 << 2)
 #define  GEN7_OACONTROL_FORMAT_SHIFT	    2
-#define  GEN7_OACONTROL_PER_CTX_ENABLE	    (1<<1)
-#define  GEN7_OACONTROL_ENABLE		    (1<<0)
+#define  GEN7_OACONTROL_PER_CTX_ENABLE	    (1 << 1)
+#define  GEN7_OACONTROL_ENABLE		    (1 << 0)
 
 #define GEN8_OACTXID _MMIO(0x2364)
 
 #define GEN8_OA_DEBUG _MMIO(0x2B04)
-#define  GEN9_OA_DEBUG_DISABLE_CLK_RATIO_REPORTS    (1<<5)
-#define  GEN9_OA_DEBUG_INCLUDE_CLK_RATIO	    (1<<6)
-#define  GEN9_OA_DEBUG_DISABLE_GO_1_0_REPORTS	    (1<<2)
-#define  GEN9_OA_DEBUG_DISABLE_CTX_SWITCH_REPORTS   (1<<1)
+#define  GEN9_OA_DEBUG_DISABLE_CLK_RATIO_REPORTS    (1 << 5)
+#define  GEN9_OA_DEBUG_INCLUDE_CLK_RATIO	    (1 << 6)
+#define  GEN9_OA_DEBUG_DISABLE_GO_1_0_REPORTS	    (1 << 2)
+#define  GEN9_OA_DEBUG_DISABLE_CTX_SWITCH_REPORTS   (1 << 1)
 
 #define GEN8_OACONTROL _MMIO(0x2B00)
-#define  GEN8_OA_REPORT_FORMAT_A12	    (0<<2)
-#define  GEN8_OA_REPORT_FORMAT_A12_B8_C8    (2<<2)
-#define  GEN8_OA_REPORT_FORMAT_A36_B8_C8    (5<<2)
-#define  GEN8_OA_REPORT_FORMAT_C4_B8	    (7<<2)
+#define  GEN8_OA_REPORT_FORMAT_A12	    (0 << 2)
+#define  GEN8_OA_REPORT_FORMAT_A12_B8_C8    (2 << 2)
+#define  GEN8_OA_REPORT_FORMAT_A36_B8_C8    (5 << 2)
+#define  GEN8_OA_REPORT_FORMAT_C4_B8	    (7 << 2)
 #define  GEN8_OA_REPORT_FORMAT_SHIFT	    2
-#define  GEN8_OA_SPECIFIC_CONTEXT_ENABLE    (1<<1)
-#define  GEN8_OA_COUNTER_ENABLE             (1<<0)
+#define  GEN8_OA_SPECIFIC_CONTEXT_ENABLE    (1 << 1)
+#define  GEN8_OA_COUNTER_ENABLE             (1 << 0)
 
 #define GEN8_OACTXCONTROL _MMIO(0x2360)
 #define  GEN8_OA_TIMER_PERIOD_MASK	    0x3F
 #define  GEN8_OA_TIMER_PERIOD_SHIFT	    2
-#define  GEN8_OA_TIMER_ENABLE		    (1<<1)
-#define  GEN8_OA_COUNTER_RESUME		    (1<<0)
+#define  GEN8_OA_TIMER_ENABLE		    (1 << 1)
+#define  GEN8_OA_COUNTER_RESUME		    (1 << 0)
 
 #define GEN7_OABUFFER _MMIO(0x23B0) /* R/W */
-#define  GEN7_OABUFFER_OVERRUN_DISABLE	    (1<<3)
-#define  GEN7_OABUFFER_EDGE_TRIGGER	    (1<<2)
-#define  GEN7_OABUFFER_STOP_RESUME_ENABLE   (1<<1)
-#define  GEN7_OABUFFER_RESUME		    (1<<0)
+#define  GEN7_OABUFFER_OVERRUN_DISABLE	    (1 << 3)
+#define  GEN7_OABUFFER_EDGE_TRIGGER	    (1 << 2)
+#define  GEN7_OABUFFER_STOP_RESUME_ENABLE   (1 << 1)
+#define  GEN7_OABUFFER_RESUME		    (1 << 0)
 
 #define GEN8_OABUFFER_UDW _MMIO(0x23b4)
 #define GEN8_OABUFFER _MMIO(0x2b14)
@@ -552,33 +553,33 @@ static inline bool i915_mmio_reg_valid(i915_reg_t reg)
 
 #define GEN7_OASTATUS1 _MMIO(0x2364)
 #define  GEN7_OASTATUS1_TAIL_MASK	    0xffffffc0
-#define  GEN7_OASTATUS1_COUNTER_OVERFLOW    (1<<2)
-#define  GEN7_OASTATUS1_OABUFFER_OVERFLOW   (1<<1)
-#define  GEN7_OASTATUS1_REPORT_LOST	    (1<<0)
+#define  GEN7_OASTATUS1_COUNTER_OVERFLOW    (1 << 2)
+#define  GEN7_OASTATUS1_OABUFFER_OVERFLOW   (1 << 1)
+#define  GEN7_OASTATUS1_REPORT_LOST	    (1 << 0)
 
 #define GEN7_OASTATUS2 _MMIO(0x2368)
 #define  GEN7_OASTATUS2_HEAD_MASK           0xffffffc0
 #define  GEN7_OASTATUS2_MEM_SELECT_GGTT     (1 << 0) /* 0: PPGTT, 1: GGTT */
 
 #define GEN8_OASTATUS _MMIO(0x2b08)
-#define  GEN8_OASTATUS_OVERRUN_STATUS	    (1<<3)
-#define  GEN8_OASTATUS_COUNTER_OVERFLOW     (1<<2)
-#define  GEN8_OASTATUS_OABUFFER_OVERFLOW    (1<<1)
-#define  GEN8_OASTATUS_REPORT_LOST	    (1<<0)
+#define  GEN8_OASTATUS_OVERRUN_STATUS	    (1 << 3)
+#define  GEN8_OASTATUS_COUNTER_OVERFLOW     (1 << 2)
+#define  GEN8_OASTATUS_OABUFFER_OVERFLOW    (1 << 1)
+#define  GEN8_OASTATUS_REPORT_LOST	    (1 << 0)
 
 #define GEN8_OAHEADPTR _MMIO(0x2B0C)
 #define GEN8_OAHEADPTR_MASK    0xffffffc0
 #define GEN8_OATAILPTR _MMIO(0x2B10)
 #define GEN8_OATAILPTR_MASK    0xffffffc0
 
-#define OABUFFER_SIZE_128K  (0<<3)
-#define OABUFFER_SIZE_256K  (1<<3)
-#define OABUFFER_SIZE_512K  (2<<3)
-#define OABUFFER_SIZE_1M    (3<<3)
-#define OABUFFER_SIZE_2M    (4<<3)
-#define OABUFFER_SIZE_4M    (5<<3)
-#define OABUFFER_SIZE_8M    (6<<3)
-#define OABUFFER_SIZE_16M   (7<<3)
+#define OABUFFER_SIZE_128K  (0 << 3)
+#define OABUFFER_SIZE_256K  (1 << 3)
+#define OABUFFER_SIZE_512K  (2 << 3)
+#define OABUFFER_SIZE_1M    (3 << 3)
+#define OABUFFER_SIZE_2M    (4 << 3)
+#define OABUFFER_SIZE_4M    (5 << 3)
+#define OABUFFER_SIZE_8M    (6 << 3)
+#define OABUFFER_SIZE_16M   (7 << 3)
 
 /*
  * Flexible, Aggregate EU Counter Registers.
@@ -601,35 +602,35 @@ static inline bool i915_mmio_reg_valid(i915_reg_t reg)
 #define OASTARTTRIG1_THRESHOLD_MASK	      0xffff
 
 #define OASTARTTRIG2 _MMIO(0x2714)
-#define OASTARTTRIG2_INVERT_A_0 (1<<0)
-#define OASTARTTRIG2_INVERT_A_1 (1<<1)
-#define OASTARTTRIG2_INVERT_A_2 (1<<2)
-#define OASTARTTRIG2_INVERT_A_3 (1<<3)
-#define OASTARTTRIG2_INVERT_A_4 (1<<4)
-#define OASTARTTRIG2_INVERT_A_5 (1<<5)
-#define OASTARTTRIG2_INVERT_A_6 (1<<6)
-#define OASTARTTRIG2_INVERT_A_7 (1<<7)
-#define OASTARTTRIG2_INVERT_A_8 (1<<8)
-#define OASTARTTRIG2_INVERT_A_9 (1<<9)
-#define OASTARTTRIG2_INVERT_A_10 (1<<10)
-#define OASTARTTRIG2_INVERT_A_11 (1<<11)
-#define OASTARTTRIG2_INVERT_A_12 (1<<12)
-#define OASTARTTRIG2_INVERT_A_13 (1<<13)
-#define OASTARTTRIG2_INVERT_A_14 (1<<14)
-#define OASTARTTRIG2_INVERT_A_15 (1<<15)
-#define OASTARTTRIG2_INVERT_B_0 (1<<16)
-#define OASTARTTRIG2_INVERT_B_1 (1<<17)
-#define OASTARTTRIG2_INVERT_B_2 (1<<18)
-#define OASTARTTRIG2_INVERT_B_3 (1<<19)
-#define OASTARTTRIG2_INVERT_C_0 (1<<20)
-#define OASTARTTRIG2_INVERT_C_1 (1<<21)
-#define OASTARTTRIG2_INVERT_D_0 (1<<22)
-#define OASTARTTRIG2_THRESHOLD_ENABLE	    (1<<23)
-#define OASTARTTRIG2_START_TRIG_FLAG_MBZ    (1<<24)
-#define OASTARTTRIG2_EVENT_SELECT_0  (1<<28)
-#define OASTARTTRIG2_EVENT_SELECT_1  (1<<29)
-#define OASTARTTRIG2_EVENT_SELECT_2  (1<<30)
-#define OASTARTTRIG2_EVENT_SELECT_3  (1<<31)
+#define OASTARTTRIG2_INVERT_A_0 (1 << 0)
+#define OASTARTTRIG2_INVERT_A_1 (1 << 1)
+#define OASTARTTRIG2_INVERT_A_2 (1 << 2)
+#define OASTARTTRIG2_INVERT_A_3 (1 << 3)
+#define OASTARTTRIG2_INVERT_A_4 (1 << 4)
+#define OASTARTTRIG2_INVERT_A_5 (1 << 5)
+#define OASTARTTRIG2_INVERT_A_6 (1 << 6)
+#define OASTARTTRIG2_INVERT_A_7 (1 << 7)
+#define OASTARTTRIG2_INVERT_A_8 (1 << 8)
+#define OASTARTTRIG2_INVERT_A_9 (1 << 9)
+#define OASTARTTRIG2_INVERT_A_10 (1 << 10)
+#define OASTARTTRIG2_INVERT_A_11 (1 << 11)
+#define OASTARTTRIG2_INVERT_A_12 (1 << 12)
+#define OASTARTTRIG2_INVERT_A_13 (1 << 13)
+#define OASTARTTRIG2_INVERT_A_14 (1 << 14)
+#define OASTARTTRIG2_INVERT_A_15 (1 << 15)
+#define OASTARTTRIG2_INVERT_B_0 (1 << 16)
+#define OASTARTTRIG2_INVERT_B_1 (1 << 17)
+#define OASTARTTRIG2_INVERT_B_2 (1 << 18)
+#define OASTARTTRIG2_INVERT_B_3 (1 << 19)
+#define OASTARTTRIG2_INVERT_C_0 (1 << 20)
+#define OASTARTTRIG2_INVERT_C_1 (1 << 21)
+#define OASTARTTRIG2_INVERT_D_0 (1 << 22)
+#define OASTARTTRIG2_THRESHOLD_ENABLE	    (1 << 23)
+#define OASTARTTRIG2_START_TRIG_FLAG_MBZ    (1 << 24)
+#define OASTARTTRIG2_EVENT_SELECT_0  (1 << 28)
+#define OASTARTTRIG2_EVENT_SELECT_1  (1 << 29)
+#define OASTARTTRIG2_EVENT_SELECT_2  (1 << 30)
+#define OASTARTTRIG2_EVENT_SELECT_3  (1 << 31)
 
 #define OASTARTTRIG3 _MMIO(0x2718)
 #define OASTARTTRIG3_NOA_SELECT_MASK	   0xf
@@ -658,35 +659,35 @@ static inline bool i915_mmio_reg_valid(i915_reg_t reg)
 #define OASTARTTRIG5_THRESHOLD_MASK	      0xffff
 
 #define OASTARTTRIG6 _MMIO(0x2724)
-#define OASTARTTRIG6_INVERT_A_0 (1<<0)
-#define OASTARTTRIG6_INVERT_A_1 (1<<1)
-#define OASTARTTRIG6_INVERT_A_2 (1<<2)
-#define OASTARTTRIG6_INVERT_A_3 (1<<3)
-#define OASTARTTRIG6_INVERT_A_4 (1<<4)
-#define OASTARTTRIG6_INVERT_A_5 (1<<5)
-#define OASTARTTRIG6_INVERT_A_6 (1<<6)
-#define OASTARTTRIG6_INVERT_A_7 (1<<7)
-#define OASTARTTRIG6_INVERT_A_8 (1<<8)
-#define OASTARTTRIG6_INVERT_A_9 (1<<9)
-#define OASTARTTRIG6_INVERT_A_10 (1<<10)
-#define OASTARTTRIG6_INVERT_A_11 (1<<11)
-#define OASTARTTRIG6_INVERT_A_12 (1<<12)
-#define OASTARTTRIG6_INVERT_A_13 (1<<13)
-#define OASTARTTRIG6_INVERT_A_14 (1<<14)
-#define OASTARTTRIG6_INVERT_A_15 (1<<15)
-#define OASTARTTRIG6_INVERT_B_0 (1<<16)
-#define OASTARTTRIG6_INVERT_B_1 (1<<17)
-#define OASTARTTRIG6_INVERT_B_2 (1<<18)
-#define OASTARTTRIG6_INVERT_B_3 (1<<19)
-#define OASTARTTRIG6_INVERT_C_0 (1<<20)
-#define OASTARTTRIG6_INVERT_C_1 (1<<21)
-#define OASTARTTRIG6_INVERT_D_0 (1<<22)
-#define OASTARTTRIG6_THRESHOLD_ENABLE	    (1<<23)
-#define OASTARTTRIG6_START_TRIG_FLAG_MBZ    (1<<24)
-#define OASTARTTRIG6_EVENT_SELECT_4  (1<<28)
-#define OASTARTTRIG6_EVENT_SELECT_5  (1<<29)
-#define OASTARTTRIG6_EVENT_SELECT_6  (1<<30)
-#define OASTARTTRIG6_EVENT_SELECT_7  (1<<31)
+#define OASTARTTRIG6_INVERT_A_0 (1 << 0)
+#define OASTARTTRIG6_INVERT_A_1 (1 << 1)
+#define OASTARTTRIG6_INVERT_A_2 (1 << 2)
+#define OASTARTTRIG6_INVERT_A_3 (1 << 3)
+#define OASTARTTRIG6_INVERT_A_4 (1 << 4)
+#define OASTARTTRIG6_INVERT_A_5 (1 << 5)
+#define OASTARTTRIG6_INVERT_A_6 (1 << 6)
+#define OASTARTTRIG6_INVERT_A_7 (1 << 7)
+#define OASTARTTRIG6_INVERT_A_8 (1 << 8)
+#define OASTARTTRIG6_INVERT_A_9 (1 << 9)
+#define OASTARTTRIG6_INVERT_A_10 (1 << 10)
+#define OASTARTTRIG6_INVERT_A_11 (1 << 11)
+#define OASTARTTRIG6_INVERT_A_12 (1 << 12)
+#define OASTARTTRIG6_INVERT_A_13 (1 << 13)
+#define OASTARTTRIG6_INVERT_A_14 (1 << 14)
+#define OASTARTTRIG6_INVERT_A_15 (1 << 15)
+#define OASTARTTRIG6_INVERT_B_0 (1 << 16)
+#define OASTARTTRIG6_INVERT_B_1 (1 << 17)
+#define OASTARTTRIG6_INVERT_B_2 (1 << 18)
+#define OASTARTTRIG6_INVERT_B_3 (1 << 19)
+#define OASTARTTRIG6_INVERT_C_0 (1 << 20)
+#define OASTARTTRIG6_INVERT_C_1 (1 << 21)
+#define OASTARTTRIG6_INVERT_D_0 (1 << 22)
+#define OASTARTTRIG6_THRESHOLD_ENABLE	    (1 << 23)
+#define OASTARTTRIG6_START_TRIG_FLAG_MBZ    (1 << 24)
+#define OASTARTTRIG6_EVENT_SELECT_4  (1 << 28)
+#define OASTARTTRIG6_EVENT_SELECT_5  (1 << 29)
+#define OASTARTTRIG6_EVENT_SELECT_6  (1 << 30)
+#define OASTARTTRIG6_EVENT_SELECT_7  (1 << 31)
 
 #define OASTARTTRIG7 _MMIO(0x2728)
 #define OASTARTTRIG7_NOA_SELECT_MASK	   0xf
@@ -715,31 +716,31 @@ static inline bool i915_mmio_reg_valid(i915_reg_t reg)
 #define OAREPORTTRIG1_EDGE_LEVEL_TRIGER_SELECT_MASK 0xffff0000 /* 0=level */
 
 #define OAREPORTTRIG2 _MMIO(0x2744)
-#define OAREPORTTRIG2_INVERT_A_0  (1<<0)
-#define OAREPORTTRIG2_INVERT_A_1  (1<<1)
-#define OAREPORTTRIG2_INVERT_A_2  (1<<2)
-#define OAREPORTTRIG2_INVERT_A_3  (1<<3)
-#define OAREPORTTRIG2_INVERT_A_4  (1<<4)
-#define OAREPORTTRIG2_INVERT_A_5  (1<<5)
-#define OAREPORTTRIG2_INVERT_A_6  (1<<6)
-#define OAREPORTTRIG2_INVERT_A_7  (1<<7)
-#define OAREPORTTRIG2_INVERT_A_8  (1<<8)
-#define OAREPORTTRIG2_INVERT_A_9  (1<<9)
-#define OAREPORTTRIG2_INVERT_A_10 (1<<10)
-#define OAREPORTTRIG2_INVERT_A_11 (1<<11)
-#define OAREPORTTRIG2_INVERT_A_12 (1<<12)
-#define OAREPORTTRIG2_INVERT_A_13 (1<<13)
-#define OAREPORTTRIG2_INVERT_A_14 (1<<14)
-#define OAREPORTTRIG2_INVERT_A_15 (1<<15)
-#define OAREPORTTRIG2_INVERT_B_0  (1<<16)
-#define OAREPORTTRIG2_INVERT_B_1  (1<<17)
-#define OAREPORTTRIG2_INVERT_B_2  (1<<18)
-#define OAREPORTTRIG2_INVERT_B_3  (1<<19)
-#define OAREPORTTRIG2_INVERT_C_0  (1<<20)
-#define OAREPORTTRIG2_INVERT_C_1  (1<<21)
-#define OAREPORTTRIG2_INVERT_D_0  (1<<22)
-#define OAREPORTTRIG2_THRESHOLD_ENABLE	    (1<<23)
-#define OAREPORTTRIG2_REPORT_TRIGGER_ENABLE (1<<31)
+#define OAREPORTTRIG2_INVERT_A_0  (1 << 0)
+#define OAREPORTTRIG2_INVERT_A_1  (1 << 1)
+#define OAREPORTTRIG2_INVERT_A_2  (1 << 2)
+#define OAREPORTTRIG2_INVERT_A_3  (1 << 3)
+#define OAREPORTTRIG2_INVERT_A_4  (1 << 4)
+#define OAREPORTTRIG2_INVERT_A_5  (1 << 5)
+#define OAREPORTTRIG2_INVERT_A_6  (1 << 6)
+#define OAREPORTTRIG2_INVERT_A_7  (1 << 7)
+#define OAREPORTTRIG2_INVERT_A_8  (1 << 8)
+#define OAREPORTTRIG2_INVERT_A_9  (1 << 9)
+#define OAREPORTTRIG2_INVERT_A_10 (1 << 10)
+#define OAREPORTTRIG2_INVERT_A_11 (1 << 11)
+#define OAREPORTTRIG2_INVERT_A_12 (1 << 12)
+#define OAREPORTTRIG2_INVERT_A_13 (1 << 13)
+#define OAREPORTTRIG2_INVERT_A_14 (1 << 14)
+#define OAREPORTTRIG2_INVERT_A_15 (1 << 15)
+#define OAREPORTTRIG2_INVERT_B_0  (1 << 16)
+#define OAREPORTTRIG2_INVERT_B_1  (1 << 17)
+#define OAREPORTTRIG2_INVERT_B_2  (1 << 18)
+#define OAREPORTTRIG2_INVERT_B_3  (1 << 19)
+#define OAREPORTTRIG2_INVERT_C_0  (1 << 20)
+#define OAREPORTTRIG2_INVERT_C_1  (1 << 21)
+#define OAREPORTTRIG2_INVERT_D_0  (1 << 22)
+#define OAREPORTTRIG2_THRESHOLD_ENABLE	    (1 << 23)
+#define OAREPORTTRIG2_REPORT_TRIGGER_ENABLE (1 << 31)
 
 #define OAREPORTTRIG3 _MMIO(0x2748)
 #define OAREPORTTRIG3_NOA_SELECT_MASK	    0xf
@@ -768,31 +769,31 @@ static inline bool i915_mmio_reg_valid(i915_reg_t reg)
 #define OAREPORTTRIG5_EDGE_LEVEL_TRIGER_SELECT_MASK 0xffff0000 /* 0=level */
 
 #define OAREPORTTRIG6 _MMIO(0x2754)
-#define OAREPORTTRIG6_INVERT_A_0  (1<<0)
-#define OAREPORTTRIG6_INVERT_A_1  (1<<1)
-#define OAREPORTTRIG6_INVERT_A_2  (1<<2)
-#define OAREPORTTRIG6_INVERT_A_3  (1<<3)
-#define OAREPORTTRIG6_INVERT_A_4  (1<<4)
-#define OAREPORTTRIG6_INVERT_A_5  (1<<5)
-#define OAREPORTTRIG6_INVERT_A_6  (1<<6)
-#define OAREPORTTRIG6_INVERT_A_7  (1<<7)
-#define OAREPORTTRIG6_INVERT_A_8  (1<<8)
-#define OAREPORTTRIG6_INVERT_A_9  (1<<9)
-#define OAREPORTTRIG6_INVERT_A_10 (1<<10)
-#define OAREPORTTRIG6_INVERT_A_11 (1<<11)
-#define OAREPORTTRIG6_INVERT_A_12 (1<<12)
-#define OAREPORTTRIG6_INVERT_A_13 (1<<13)
-#define OAREPORTTRIG6_INVERT_A_14 (1<<14)
-#define OAREPORTTRIG6_INVERT_A_15 (1<<15)
-#define OAREPORTTRIG6_INVERT_B_0  (1<<16)
-#define OAREPORTTRIG6_INVERT_B_1  (1<<17)
-#define OAREPORTTRIG6_INVERT_B_2  (1<<18)
-#define OAREPORTTRIG6_INVERT_B_3  (1<<19)
-#define OAREPORTTRIG6_INVERT_C_0  (1<<20)
-#define OAREPORTTRIG6_INVERT_C_1  (1<<21)
-#define OAREPORTTRIG6_INVERT_D_0  (1<<22)
-#define OAREPORTTRIG6_THRESHOLD_ENABLE	    (1<<23)
-#define OAREPORTTRIG6_REPORT_TRIGGER_ENABLE (1<<31)
+#define OAREPORTTRIG6_INVERT_A_0  (1 << 0)
+#define OAREPORTTRIG6_INVERT_A_1  (1 << 1)
+#define OAREPORTTRIG6_INVERT_A_2  (1 << 2)
+#define OAREPORTTRIG6_INVERT_A_3  (1 << 3)
+#define OAREPORTTRIG6_INVERT_A_4  (1 << 4)
+#define OAREPORTTRIG6_INVERT_A_5  (1 << 5)
+#define OAREPORTTRIG6_INVERT_A_6  (1 << 6)
+#define OAREPORTTRIG6_INVERT_A_7  (1 << 7)
+#define OAREPORTTRIG6_INVERT_A_8  (1 << 8)
+#define OAREPORTTRIG6_INVERT_A_9  (1 << 9)
+#define OAREPORTTRIG6_INVERT_A_10 (1 << 10)
+#define OAREPORTTRIG6_INVERT_A_11 (1 << 11)
+#define OAREPORTTRIG6_INVERT_A_12 (1 << 12)
+#define OAREPORTTRIG6_INVERT_A_13 (1 << 13)
+#define OAREPORTTRIG6_INVERT_A_14 (1 << 14)
+#define OAREPORTTRIG6_INVERT_A_15 (1 << 15)
+#define OAREPORTTRIG6_INVERT_B_0  (1 << 16)
+#define OAREPORTTRIG6_INVERT_B_1  (1 << 17)
+#define OAREPORTTRIG6_INVERT_B_2  (1 << 18)
+#define OAREPORTTRIG6_INVERT_B_3  (1 << 19)
+#define OAREPORTTRIG6_INVERT_C_0  (1 << 20)
+#define OAREPORTTRIG6_INVERT_C_1  (1 << 21)
+#define OAREPORTTRIG6_INVERT_D_0  (1 << 22)
+#define OAREPORTTRIG6_THRESHOLD_ENABLE	    (1 << 23)
+#define OAREPORTTRIG6_REPORT_TRIGGER_ENABLE (1 << 31)
 
 #define OAREPORTTRIG7 _MMIO(0x2758)
 #define OAREPORTTRIG7_NOA_SELECT_MASK	    0xf
@@ -828,9 +829,9 @@ static inline bool i915_mmio_reg_valid(i915_reg_t reg)
 #define OACEC_COMPARE_VALUE_MASK    0xffff
 #define OACEC_COMPARE_VALUE_SHIFT   3
 
-#define OACEC_SELECT_NOA	(0<<19)
-#define OACEC_SELECT_PREV	(1<<19)
-#define OACEC_SELECT_BOOLEAN	(2<<19)
+#define OACEC_SELECT_NOA	(0 << 19)
+#define OACEC_SELECT_PREV	(1 << 19)
+#define OACEC_SELECT_BOOLEAN	(2 << 19)
 
 /* CECX_1 */
 #define OACEC_MASK_MASK		    0xffff
@@ -948,9 +949,9 @@ static inline bool i915_mmio_reg_valid(i915_reg_t reg)
  * Reset registers
  */
 #define DEBUG_RESET_I830		_MMIO(0x6070)
-#define  DEBUG_RESET_FULL		(1<<7)
-#define  DEBUG_RESET_RENDER		(1<<8)
-#define  DEBUG_RESET_DISPLAY		(1<<9)
+#define  DEBUG_RESET_FULL		(1 << 7)
+#define  DEBUG_RESET_RENDER		(1 << 8)
+#define  DEBUG_RESET_DISPLAY		(1 << 9)
 
 /*
  * IOSF sideband
@@ -961,7 +962,7 @@ static inline bool i915_mmio_reg_valid(i915_reg_t reg)
 #define   IOSF_PORT_SHIFT			8
 #define   IOSF_BYTE_ENABLES_SHIFT		4
 #define   IOSF_BAR_SHIFT			1
-#define   IOSF_SB_BUSY				(1<<0)
+#define   IOSF_SB_BUSY				(1 << 0)
 #define   IOSF_PORT_BUNIT			0x03
 #define   IOSF_PORT_PUNIT			0x04
 #define   IOSF_PORT_NC				0x11
@@ -1098,8 +1099,8 @@ enum i915_power_well_id {
 #define PUNIT_REG_GPU_LFM			0xd3
 #define PUNIT_REG_GPU_FREQ_REQ			0xd4
 #define PUNIT_REG_GPU_FREQ_STS			0xd8
-#define   GPLLENABLE				(1<<4)
-#define   GENFREQSTATUS				(1<<0)
+#define   GPLLENABLE				(1 << 4)
+#define   GENFREQSTATUS				(1 << 0)
 #define PUNIT_REG_MEDIA_TURBO_FREQ_REQ		0xdc
 #define PUNIT_REG_CZ_TIMESTAMP			0xce
 
@@ -1141,11 +1142,11 @@ enum i915_power_well_id {
 #define   FB_FMAX_VMIN_FREQ_LO_SHIFT		27
 #define   FB_FMAX_VMIN_FREQ_LO_MASK		0xf8000000
 
-#define VLV_TURBO_SOC_OVERRIDE	0x04
-#define 	VLV_OVERRIDE_EN	1
-#define 	VLV_SOC_TDP_EN	(1 << 1)
-#define 	VLV_BIAS_CPU_125_SOC_875 (6 << 2)
-#define 	CHV_BIAS_CPU_50_SOC_50 (3 << 2)
+#define VLV_TURBO_SOC_OVERRIDE		0x04
+#define   VLV_OVERRIDE_EN		1
+#define   VLV_SOC_TDP_EN		(1 << 1)
+#define   VLV_BIAS_CPU_125_SOC_875	(6 << 2)
+#define   CHV_BIAS_CPU_50_SOC_50	(3 << 2)
 
 /* vlv2 north clock */
 #define CCK_FUSE_REG				0x8
@@ -1194,10 +1195,10 @@ enum i915_power_well_id {
 #define DPIO_DEVFN			0
 
 #define DPIO_CTL			_MMIO(VLV_DISPLAY_BASE + 0x2110)
-#define  DPIO_MODSEL1			(1<<3) /* if ref clk b == 27 */
-#define  DPIO_MODSEL0			(1<<2) /* if ref clk a == 27 */
-#define  DPIO_SFR_BYPASS		(1<<1)
-#define  DPIO_CMNRST			(1<<0)
+#define  DPIO_MODSEL1			(1 << 3) /* if ref clk b == 27 */
+#define  DPIO_MODSEL0			(1 << 2) /* if ref clk a == 27 */
+#define  DPIO_SFR_BYPASS		(1 << 1)
+#define  DPIO_CMNRST			(1 << 0)
 
 #define DPIO_PHY(pipe)			((pipe) >> 1)
 #define DPIO_PHY_IOSF_PORT(phy)		(dev_priv->dpio_phy_iosf_port[phy])
@@ -1215,7 +1216,7 @@ enum i915_power_well_id {
 #define   DPIO_P1_SHIFT			(21) /* 3 bits */
 #define   DPIO_P2_SHIFT			(16) /* 5 bits */
 #define   DPIO_N_SHIFT			(12) /* 4 bits */
-#define   DPIO_ENABLE_CALIBRATION	(1<<11)
+#define   DPIO_ENABLE_CALIBRATION	(1 << 11)
 #define   DPIO_M1DIV_SHIFT		(8) /* 3 bits */
 #define   DPIO_M2DIV_MASK		0xff
 #define _VLV_PLL_DW3_CH1		0x802c
@@ -1264,10 +1265,10 @@ enum i915_power_well_id {
 
 #define _VLV_PCS_DW0_CH0		0x8200
 #define _VLV_PCS_DW0_CH1		0x8400
-#define   DPIO_PCS_TX_LANE2_RESET	(1<<16)
-#define   DPIO_PCS_TX_LANE1_RESET	(1<<7)
-#define   DPIO_LEFT_TXFIFO_RST_MASTER2	(1<<4)
-#define   DPIO_RIGHT_TXFIFO_RST_MASTER2	(1<<3)
+#define   DPIO_PCS_TX_LANE2_RESET	(1 << 16)
+#define   DPIO_PCS_TX_LANE1_RESET	(1 << 7)
+#define   DPIO_LEFT_TXFIFO_RST_MASTER2	(1 << 4)
+#define   DPIO_RIGHT_TXFIFO_RST_MASTER2	(1 << 3)
 #define VLV_PCS_DW0(ch) _PORT(ch, _VLV_PCS_DW0_CH0, _VLV_PCS_DW0_CH1)
 
 #define _VLV_PCS01_DW0_CH0		0x200
@@ -1279,11 +1280,11 @@ enum i915_power_well_id {
 
 #define _VLV_PCS_DW1_CH0		0x8204
 #define _VLV_PCS_DW1_CH1		0x8404
-#define   CHV_PCS_REQ_SOFTRESET_EN	(1<<23)
-#define   DPIO_PCS_CLK_CRI_RXEB_EIOS_EN	(1<<22)
-#define   DPIO_PCS_CLK_CRI_RXDIGFILTSG_EN (1<<21)
+#define   CHV_PCS_REQ_SOFTRESET_EN	(1 << 23)
+#define   DPIO_PCS_CLK_CRI_RXEB_EIOS_EN	(1 << 22)
+#define   DPIO_PCS_CLK_CRI_RXDIGFILTSG_EN (1 << 21)
 #define   DPIO_PCS_CLK_DATAWIDTH_SHIFT	(6)
-#define   DPIO_PCS_CLK_SOFT_RESET	(1<<5)
+#define   DPIO_PCS_CLK_SOFT_RESET	(1 << 5)
 #define VLV_PCS_DW1(ch) _PORT(ch, _VLV_PCS_DW1_CH0, _VLV_PCS_DW1_CH1)
 
 #define _VLV_PCS01_DW1_CH0		0x204
@@ -1308,12 +1309,12 @@ enum i915_power_well_id {
 
 #define _VLV_PCS_DW9_CH0		0x8224
 #define _VLV_PCS_DW9_CH1		0x8424
-#define   DPIO_PCS_TX2MARGIN_MASK	(0x7<<13)
-#define   DPIO_PCS_TX2MARGIN_000	(0<<13)
-#define   DPIO_PCS_TX2MARGIN_101	(1<<13)
-#define   DPIO_PCS_TX1MARGIN_MASK	(0x7<<10)
-#define   DPIO_PCS_TX1MARGIN_000	(0<<10)
-#define   DPIO_PCS_TX1MARGIN_101	(1<<10)
+#define   DPIO_PCS_TX2MARGIN_MASK	(0x7 << 13)
+#define   DPIO_PCS_TX2MARGIN_000	(0 << 13)
+#define   DPIO_PCS_TX2MARGIN_101	(1 << 13)
+#define   DPIO_PCS_TX1MARGIN_MASK	(0x7 << 10)
+#define   DPIO_PCS_TX1MARGIN_000	(0 << 10)
+#define   DPIO_PCS_TX1MARGIN_101	(1 << 10)
 #define	VLV_PCS_DW9(ch) _PORT(ch, _VLV_PCS_DW9_CH0, _VLV_PCS_DW9_CH1)
 
 #define _VLV_PCS01_DW9_CH0		0x224
@@ -1325,14 +1326,14 @@ enum i915_power_well_id {
 
 #define _CHV_PCS_DW10_CH0		0x8228
 #define _CHV_PCS_DW10_CH1		0x8428
-#define   DPIO_PCS_SWING_CALC_TX0_TX2	(1<<30)
-#define   DPIO_PCS_SWING_CALC_TX1_TX3	(1<<31)
-#define   DPIO_PCS_TX2DEEMP_MASK	(0xf<<24)
-#define   DPIO_PCS_TX2DEEMP_9P5		(0<<24)
-#define   DPIO_PCS_TX2DEEMP_6P0		(2<<24)
-#define   DPIO_PCS_TX1DEEMP_MASK	(0xf<<16)
-#define   DPIO_PCS_TX1DEEMP_9P5		(0<<16)
-#define   DPIO_PCS_TX1DEEMP_6P0		(2<<16)
+#define   DPIO_PCS_SWING_CALC_TX0_TX2	(1 << 30)
+#define   DPIO_PCS_SWING_CALC_TX1_TX3	(1 << 31)
+#define   DPIO_PCS_TX2DEEMP_MASK	(0xf << 24)
+#define   DPIO_PCS_TX2DEEMP_9P5		(0 << 24)
+#define   DPIO_PCS_TX2DEEMP_6P0		(2 << 24)
+#define   DPIO_PCS_TX1DEEMP_MASK	(0xf << 16)
+#define   DPIO_PCS_TX1DEEMP_9P5		(0 << 16)
+#define   DPIO_PCS_TX1DEEMP_6P0		(2 << 16)
 #define CHV_PCS_DW10(ch) _PORT(ch, _CHV_PCS_DW10_CH0, _CHV_PCS_DW10_CH1)
 
 #define _VLV_PCS01_DW10_CH0		0x0228
@@ -1344,10 +1345,10 @@ enum i915_power_well_id {
 
 #define _VLV_PCS_DW11_CH0		0x822c
 #define _VLV_PCS_DW11_CH1		0x842c
-#define   DPIO_TX2_STAGGER_MASK(x)	((x)<<24)
-#define   DPIO_LANEDESKEW_STRAP_OVRD	(1<<3)
-#define   DPIO_LEFT_TXFIFO_RST_MASTER	(1<<1)
-#define   DPIO_RIGHT_TXFIFO_RST_MASTER	(1<<0)
+#define   DPIO_TX2_STAGGER_MASK(x)	((x) << 24)
+#define   DPIO_LANEDESKEW_STRAP_OVRD	(1 << 3)
+#define   DPIO_LEFT_TXFIFO_RST_MASTER	(1 << 1)
+#define   DPIO_RIGHT_TXFIFO_RST_MASTER	(1 << 0)
 #define VLV_PCS_DW11(ch) _PORT(ch, _VLV_PCS_DW11_CH0, _VLV_PCS_DW11_CH1)
 
 #define _VLV_PCS01_DW11_CH0		0x022c
@@ -1366,11 +1367,11 @@ enum i915_power_well_id {
 
 #define _VLV_PCS_DW12_CH0		0x8230
 #define _VLV_PCS_DW12_CH1		0x8430
-#define   DPIO_TX2_STAGGER_MULT(x)	((x)<<20)
-#define   DPIO_TX1_STAGGER_MULT(x)	((x)<<16)
-#define   DPIO_TX1_STAGGER_MASK(x)	((x)<<8)
-#define   DPIO_LANESTAGGER_STRAP_OVRD	(1<<6)
-#define   DPIO_LANESTAGGER_STRAP(x)	((x)<<0)
+#define   DPIO_TX2_STAGGER_MULT(x)	((x) << 20)
+#define   DPIO_TX1_STAGGER_MULT(x)	((x) << 16)
+#define   DPIO_TX1_STAGGER_MASK(x)	((x) << 8)
+#define   DPIO_LANESTAGGER_STRAP_OVRD	(1 << 6)
+#define   DPIO_LANESTAGGER_STRAP(x)	((x) << 0)
 #define VLV_PCS_DW12(ch) _PORT(ch, _VLV_PCS_DW12_CH0, _VLV_PCS_DW12_CH1)
 
 #define _VLV_PCS_DW14_CH0		0x8238
@@ -1391,7 +1392,7 @@ enum i915_power_well_id {
 #define _VLV_TX_DW3_CH0			0x828c
 #define _VLV_TX_DW3_CH1			0x848c
 /* The following bit is for the CHV phy */
-#define   DPIO_TX_UNIQ_TRANS_SCALE_EN	(1<<27)
+#define   DPIO_TX_UNIQ_TRANS_SCALE_EN	(1 << 27)
 #define   DPIO_SWING_MARGIN101_SHIFT	16
 #define   DPIO_SWING_MARGIN101_MASK	(0xff << DPIO_SWING_MARGIN101_SHIFT)
 #define VLV_TX_DW3(ch) _PORT(ch, _VLV_TX_DW3_CH0, _VLV_TX_DW3_CH1)
@@ -1410,7 +1411,7 @@ enum i915_power_well_id {
 
 #define _VLV_TX_DW5_CH0			0x8294
 #define _VLV_TX_DW5_CH1			0x8494
-#define   DPIO_TX_OCALINIT_EN		(1<<31)
+#define   DPIO_TX_OCALINIT_EN		(1 << 31)
 #define VLV_TX_DW5(ch) _PORT(ch, _VLV_TX_DW5_CH0, _VLV_TX_DW5_CH1)
 
 #define _VLV_TX_DW11_CH0		0x82ac
@@ -1640,10 +1641,10 @@ enum i915_power_well_id {
 #define  PORT_PLL_LOCK_THRESHOLD_SHIFT	1
 #define  PORT_PLL_LOCK_THRESHOLD_MASK	(0x7 << PORT_PLL_LOCK_THRESHOLD_SHIFT)
 /* PORT_PLL_10_A */
-#define  PORT_PLL_DCO_AMP_OVR_EN_H	(1<<27)
+#define  PORT_PLL_DCO_AMP_OVR_EN_H	(1 << 27)
 #define  PORT_PLL_DCO_AMP_DEFAULT	15
 #define  PORT_PLL_DCO_AMP_MASK		0x3c00
-#define  PORT_PLL_DCO_AMP(x)		((x)<<10)
+#define  PORT_PLL_DCO_AMP(x)		((x) << 10)
 #define _PORT_PLL_BASE(phy, ch)		_BXT_PHY_CH(phy, ch, \
 						    _PORT_PLL_0_B, \
 						    _PORT_PLL_0_C)
@@ -1745,7 +1746,7 @@ enum i915_power_well_id {
 					       _CNL_PORT_TX_D_GRP_OFFSET, \
 					       _CNL_PORT_TX_AE_GRP_OFFSET, \
 					       _CNL_PORT_TX_F_GRP_OFFSET) + \
-					       4*(dw))
+					       4 * (dw))
 #define _CNL_PORT_TX_DW_LN0(port, dw)	(_PICK((port), \
 					       _CNL_PORT_TX_AE_LN0_OFFSET, \
 					       _CNL_PORT_TX_B_LN0_OFFSET, \
@@ -1753,7 +1754,7 @@ enum i915_power_well_id {
 					       _CNL_PORT_TX_D_LN0_OFFSET, \
 					       _CNL_PORT_TX_AE_LN0_OFFSET, \
 					       _CNL_PORT_TX_F_LN0_OFFSET) + \
-					       4*(dw))
+					       4 * (dw))
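/*
 * Illustrative expansion (not part of this patch): _CNL_PORT_TX_DW_LN0(PORT_B, 2)
 * selects the port B LN0 base via _PICK() and then adds 4 * 2, so successive
 * dwords of one lane's TX block sit 4 bytes apart.
 */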
 
 #define CNL_PORT_TX_DW2_GRP(port)	_MMIO(_CNL_PORT_TX_DW_GRP((port), 2))
 #define CNL_PORT_TX_DW2_LN0(port)	_MMIO(_CNL_PORT_TX_DW_LN0((port), 2))
@@ -1779,7 +1780,7 @@ enum i915_power_well_id {
 #define CNL_PORT_TX_DW4_GRP(port)	_MMIO(_CNL_PORT_TX_DW_GRP((port), 4))
 #define CNL_PORT_TX_DW4_LN0(port)	_MMIO(_CNL_PORT_TX_DW_LN0((port), 4))
 #define CNL_PORT_TX_DW4_LN(port, ln)   _MMIO(_CNL_PORT_TX_DW_LN0((port), 4) + \
-					     (ln * (_CNL_PORT_TX_DW4_LN1_AE - \
+					   ((ln) * (_CNL_PORT_TX_DW4_LN1_AE - \
 						    _CNL_PORT_TX_DW4_LN0_AE)))
 #define _ICL_PORT_TX_DW4_GRP_A		0x162690
 #define _ICL_PORT_TX_DW4_GRP_B		0x6C690
@@ -1792,8 +1793,8 @@ enum i915_power_well_id {
 #define ICL_PORT_TX_DW4_LN(port, ln)	_MMIO(_PORT(port, \
 						   _ICL_PORT_TX_DW4_LN0_A, \
 						   _ICL_PORT_TX_DW4_LN0_B) + \
-					      (ln * (_ICL_PORT_TX_DW4_LN1_A - \
-						     _ICL_PORT_TX_DW4_LN0_A)))
+					     ((ln) * (_ICL_PORT_TX_DW4_LN1_A - \
+						      _ICL_PORT_TX_DW4_LN0_A)))
 #define   LOADGEN_SELECT		(1 << 31)
 #define   POST_CURSOR_1(x)		((x) << 12)
 #define   POST_CURSOR_1_MASK		(0x3F << 12)
@@ -2139,8 +2140,8 @@ enum i915_power_well_id {
 /* SKL balance leg register */
 #define DISPIO_CR_TX_BMU_CR0		_MMIO(0x6C00C)
 /* I_boost values */
-#define BALANCE_LEG_SHIFT(port)		(8+3*(port))
-#define BALANCE_LEG_MASK(port)		(7<<(8+3*(port)))
+#define BALANCE_LEG_SHIFT(port)		(8 + 3 * (port))
+#define BALANCE_LEG_MASK(port)		(7 << (8 + 3 * (port)))
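/*
 * Worked example (illustrative): for port B (port == 1), BALANCE_LEG_SHIFT(1)
 * is 8 + 3 * 1 = 11 and BALANCE_LEG_MASK(1) is 7 << 11, i.e. each port's
 * 3-bit I_boost field starts at bit 8 + 3 * port of DISPIO_CR_TX_BMU_CR0.
 */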
 /* Balance leg disable bits */
 #define BALANCE_LEG_DISABLE_SHIFT	23
 #define BALANCE_LEG_DISABLE(port)	(1 << (23 + (port)))
@@ -2160,10 +2161,10 @@ enum i915_power_well_id {
 #define   I830_FENCE_TILING_Y_SHIFT	12
 #define   I830_FENCE_SIZE_BITS(size)	((ffs((size) >> 19) - 1) << 8)
 #define   I830_FENCE_PITCH_SHIFT	4
-#define   I830_FENCE_REG_VALID		(1<<0)
+#define   I830_FENCE_REG_VALID		(1 << 0)
 #define   I915_FENCE_MAX_PITCH_VAL	4
 #define   I830_FENCE_MAX_PITCH_VAL	6
-#define   I830_FENCE_MAX_SIZE_VAL	(1<<8)
+#define   I830_FENCE_MAX_SIZE_VAL	(1 << 8)
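/*
 * Worked example (illustrative): for a 1 MiB fence, I830_FENCE_SIZE_BITS
 * computes ffs((1 << 20) >> 19) - 1 == 1 and places it in bits 15:8, so the
 * field effectively encodes log2(size) - 19.
 */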
 
 #define   I915_FENCE_START_MASK		0x0ff00000
 #define   I915_FENCE_SIZE_BITS(size)	((ffs((size) >> 20) - 1) << 8)
@@ -2172,7 +2173,7 @@ enum i915_power_well_id {
 #define FENCE_REG_965_HI(i)		_MMIO(0x03000 + (i) * 8 + 4)
 #define   I965_FENCE_PITCH_SHIFT	2
 #define   I965_FENCE_TILING_Y_SHIFT	1
-#define   I965_FENCE_REG_VALID		(1<<0)
+#define   I965_FENCE_REG_VALID		(1 << 0)
 #define   I965_FENCE_MAX_PITCH_VAL	0x0400
 
 #define FENCE_REG_GEN6_LO(i)		_MMIO(0x100000 + (i) * 8)
@@ -2195,13 +2196,13 @@ enum i915_power_well_id {
 #define   PGTBL_ADDRESS_LO_MASK	0xfffff000 /* bits [31:12] */
 #define   PGTBL_ADDRESS_HI_MASK	0x000000f0 /* bits [35:32] (gen4) */
 #define PGTBL_ER	_MMIO(0x02024)
-#define PRB0_BASE	(0x2030-0x30)
-#define PRB1_BASE	(0x2040-0x30) /* 830,gen3 */
-#define PRB2_BASE	(0x2050-0x30) /* gen3 */
-#define SRB0_BASE	(0x2100-0x30) /* gen2 */
-#define SRB1_BASE	(0x2110-0x30) /* gen2 */
-#define SRB2_BASE	(0x2120-0x30) /* 830 */
-#define SRB3_BASE	(0x2130-0x30) /* 830 */
+#define PRB0_BASE	(0x2030 - 0x30)
+#define PRB1_BASE	(0x2040 - 0x30) /* 830,gen3 */
+#define PRB2_BASE	(0x2050 - 0x30) /* gen3 */
+#define SRB0_BASE	(0x2100 - 0x30) /* gen2 */
+#define SRB1_BASE	(0x2110 - 0x30) /* gen2 */
+#define SRB2_BASE	(0x2120 - 0x30) /* 830 */
+#define SRB3_BASE	(0x2130 - 0x30) /* 830 */
 #define RENDER_RING_BASE	0x02000
 #define BSD_RING_BASE		0x04000
 #define GEN6_BSD_RING_BASE	0x12000
@@ -2214,14 +2215,14 @@ enum i915_power_well_id {
 #define GEN11_VEBOX_RING_BASE		0x1c8000
 #define GEN11_VEBOX2_RING_BASE		0x1d8000
 #define BLT_RING_BASE		0x22000
-#define RING_TAIL(base)		_MMIO((base)+0x30)
-#define RING_HEAD(base)		_MMIO((base)+0x34)
-#define RING_START(base)	_MMIO((base)+0x38)
-#define RING_CTL(base)		_MMIO((base)+0x3c)
+#define RING_TAIL(base)		_MMIO((base) + 0x30)
+#define RING_HEAD(base)		_MMIO((base) + 0x34)
+#define RING_START(base)	_MMIO((base) + 0x38)
+#define RING_CTL(base)		_MMIO((base) + 0x3c)
 #define   RING_CTL_SIZE(size)	((size) - PAGE_SIZE) /* in bytes -> pages */
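/*
 * Usage sketch (illustrative): the RING_* accessors take an engine's mmio
 * base, so RING_TAIL(RENDER_RING_BASE) resolves to _MMIO(0x2000 + 0x30), the
 * same 0x2030 offset the legacy PRB0 ring tail lived at.
 */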
-#define RING_SYNC_0(base)	_MMIO((base)+0x40)
-#define RING_SYNC_1(base)	_MMIO((base)+0x44)
-#define RING_SYNC_2(base)	_MMIO((base)+0x48)
+#define RING_SYNC_0(base)	_MMIO((base) + 0x40)
+#define RING_SYNC_1(base)	_MMIO((base) + 0x44)
+#define RING_SYNC_2(base)	_MMIO((base) + 0x48)
 #define GEN6_RVSYNC	(RING_SYNC_0(RENDER_RING_BASE))
 #define GEN6_RBSYNC	(RING_SYNC_1(RENDER_RING_BASE))
 #define GEN6_RVESYNC	(RING_SYNC_2(RENDER_RING_BASE))
@@ -2235,21 +2236,22 @@ enum i915_power_well_id {
 #define GEN6_VERSYNC	(RING_SYNC_1(VEBOX_RING_BASE))
 #define GEN6_VEVSYNC	(RING_SYNC_2(VEBOX_RING_BASE))
 #define GEN6_NOSYNC	INVALID_MMIO_REG
-#define RING_PSMI_CTL(base)	_MMIO((base)+0x50)
-#define RING_MAX_IDLE(base)	_MMIO((base)+0x54)
-#define RING_HWS_PGA(base)	_MMIO((base)+0x80)
-#define RING_HWS_PGA_GEN6(base)	_MMIO((base)+0x2080)
-#define RING_RESET_CTL(base)	_MMIO((base)+0xd0)
+#define RING_PSMI_CTL(base)	_MMIO((base) + 0x50)
+#define RING_MAX_IDLE(base)	_MMIO((base) + 0x54)
+#define RING_HWS_PGA(base)	_MMIO((base) + 0x80)
+#define RING_HWS_PGA_GEN6(base)	_MMIO((base) + 0x2080)
+#define RING_RESET_CTL(base)	_MMIO((base) + 0xd0)
 #define   RESET_CTL_REQUEST_RESET  (1 << 0)
 #define   RESET_CTL_READY_TO_RESET (1 << 1)
+#define RING_SEMA_WAIT_POLL(base) _MMIO((base) + 0x24c)
 
 #define HSW_GTT_CACHE_EN	_MMIO(0x4024)
 #define   GTT_CACHE_EN_ALL	0xF0007FFF
 #define GEN7_WR_WATERMARK	_MMIO(0x4028)
 #define GEN7_GFX_PRIO_CTRL	_MMIO(0x402C)
 #define ARB_MODE		_MMIO(0x4030)
-#define   ARB_MODE_SWIZZLE_SNB	(1<<4)
-#define   ARB_MODE_SWIZZLE_IVB	(1<<5)
+#define   ARB_MODE_SWIZZLE_SNB	(1 << 4)
+#define   ARB_MODE_SWIZZLE_IVB	(1 << 5)
 #define GEN7_GFX_PEND_TLB0	_MMIO(0x4034)
 #define GEN7_GFX_PEND_TLB1	_MMIO(0x4038)
 /* L3, CVS, ZTLB, RCC, CASC LRA min, max values */
@@ -2259,30 +2261,30 @@ enum i915_power_well_id {
 #define GEN7_GFX_MAX_REQ_COUNT		_MMIO(0x4074)
 
 #define GAMTARBMODE		_MMIO(0x04a08)
-#define   ARB_MODE_BWGTLB_DISABLE (1<<9)
-#define   ARB_MODE_SWIZZLE_BDW	(1<<1)
+#define   ARB_MODE_BWGTLB_DISABLE (1 << 9)
+#define   ARB_MODE_SWIZZLE_BDW	(1 << 1)
 #define RENDER_HWS_PGA_GEN7	_MMIO(0x04080)
-#define RING_FAULT_REG(engine)	_MMIO(0x4094 + 0x100*(engine)->hw_id)
+#define RING_FAULT_REG(engine)	_MMIO(0x4094 + 0x100 * (engine)->hw_id)
 #define GEN8_RING_FAULT_REG	_MMIO(0x4094)
 #define   GEN8_RING_FAULT_ENGINE_ID(x)	(((x) >> 12) & 0x7)
-#define   RING_FAULT_GTTSEL_MASK (1<<11)
+#define   RING_FAULT_GTTSEL_MASK (1 << 11)
 #define   RING_FAULT_SRCID(x)	(((x) >> 3) & 0xff)
 #define   RING_FAULT_FAULT_TYPE(x) (((x) >> 1) & 0x3)
-#define   RING_FAULT_VALID	(1<<0)
+#define   RING_FAULT_VALID	(1 << 0)
 #define DONE_REG		_MMIO(0x40b0)
 #define GEN8_PRIVATE_PAT_LO	_MMIO(0x40e0)
 #define GEN8_PRIVATE_PAT_HI	_MMIO(0x40e0 + 4)
-#define GEN10_PAT_INDEX(index)	_MMIO(0x40e0 + (index)*4)
+#define GEN10_PAT_INDEX(index)	_MMIO(0x40e0 + (index) * 4)
 #define BSD_HWS_PGA_GEN7	_MMIO(0x04180)
 #define BLT_HWS_PGA_GEN7	_MMIO(0x04280)
 #define VEBOX_HWS_PGA_GEN7	_MMIO(0x04380)
-#define RING_ACTHD(base)	_MMIO((base)+0x74)
-#define RING_ACTHD_UDW(base)	_MMIO((base)+0x5c)
-#define RING_NOPID(base)	_MMIO((base)+0x94)
-#define RING_IMR(base)		_MMIO((base)+0xa8)
-#define RING_HWSTAM(base)	_MMIO((base)+0x98)
-#define RING_TIMESTAMP(base)		_MMIO((base)+0x358)
-#define RING_TIMESTAMP_UDW(base)	_MMIO((base)+0x358 + 4)
+#define RING_ACTHD(base)	_MMIO((base) + 0x74)
+#define RING_ACTHD_UDW(base)	_MMIO((base) + 0x5c)
+#define RING_NOPID(base)	_MMIO((base) + 0x94)
+#define RING_IMR(base)		_MMIO((base) + 0xa8)
+#define RING_HWSTAM(base)	_MMIO((base) + 0x98)
+#define RING_TIMESTAMP(base)		_MMIO((base) + 0x358)
+#define RING_TIMESTAMP_UDW(base)	_MMIO((base) + 0x358 + 4)
 #define   TAIL_ADDR		0x001FFFF8
 #define   HEAD_WRAP_COUNT	0xFFE00000
 #define   HEAD_WRAP_ONE		0x00200000
@@ -2295,17 +2297,17 @@ enum i915_power_well_id {
 #define   RING_VALID_MASK	0x00000001
 #define   RING_VALID		0x00000001
 #define   RING_INVALID		0x00000000
-#define   RING_WAIT_I8XX	(1<<0) /* gen2, PRBx_HEAD */
-#define   RING_WAIT		(1<<11) /* gen3+, PRBx_CTL */
-#define   RING_WAIT_SEMAPHORE	(1<<10) /* gen6+ */
+#define   RING_WAIT_I8XX	(1 << 0) /* gen2, PRBx_HEAD */
+#define   RING_WAIT		(1 << 11) /* gen3+, PRBx_CTL */
+#define   RING_WAIT_SEMAPHORE	(1 << 10) /* gen6+ */
 
-#define RING_FORCE_TO_NONPRIV(base, i) _MMIO(((base)+0x4D0) + (i)*4)
+#define RING_FORCE_TO_NONPRIV(base, i) _MMIO(((base) + 0x4D0) + (i) * 4)
 #define   RING_MAX_NONPRIV_SLOTS  12
 
 #define GEN7_TLB_RD_ADDR	_MMIO(0x4700)
 
 #define GEN9_GAMT_ECO_REG_RW_IA _MMIO(0x4ab0)
-#define   GAMT_ECO_ENABLE_IN_PLACE_DECOMPRESS	(1<<18)
+#define   GAMT_ECO_ENABLE_IN_PLACE_DECOMPRESS	(1 << 18)
 
 #define GEN8_GAMW_ECO_DEV_RW_IA _MMIO(0x4080)
 #define   GAMW_ECO_ENABLE_64K_IPS_FIELD 0xF
@@ -2339,19 +2341,19 @@ enum i915_power_well_id {
 #define   GEN11_MCR_SLICE_MASK		GEN11_MCR_SLICE(0xf)
 #define   GEN11_MCR_SUBSLICE(subslice)	(((subslice) & 0x7) << 24)
 #define   GEN11_MCR_SUBSLICE_MASK	GEN11_MCR_SUBSLICE(0x7)
-#define RING_IPEIR(base)	_MMIO((base)+0x64)
-#define RING_IPEHR(base)	_MMIO((base)+0x68)
+#define RING_IPEIR(base)	_MMIO((base) + 0x64)
+#define RING_IPEHR(base)	_MMIO((base) + 0x68)
 /*
  * On GEN4, only the render ring INSTDONE exists and has a different
  * layout than the GEN7+ version.
  * The GEN2 counterpart of this register is GEN2_INSTDONE.
  */
-#define RING_INSTDONE(base)	_MMIO((base)+0x6c)
-#define RING_INSTPS(base)	_MMIO((base)+0x70)
-#define RING_DMA_FADD(base)	_MMIO((base)+0x78)
-#define RING_DMA_FADD_UDW(base)	_MMIO((base)+0x60) /* gen8+ */
-#define RING_INSTPM(base)	_MMIO((base)+0xc0)
-#define RING_MI_MODE(base)	_MMIO((base)+0x9c)
+#define RING_INSTDONE(base)	_MMIO((base) + 0x6c)
+#define RING_INSTPS(base)	_MMIO((base) + 0x70)
+#define RING_DMA_FADD(base)	_MMIO((base) + 0x78)
+#define RING_DMA_FADD_UDW(base)	_MMIO((base) + 0x60) /* gen8+ */
+#define RING_INSTPM(base)	_MMIO((base) + 0xc0)
+#define RING_MI_MODE(base)	_MMIO((base) + 0x9c)
 #define INSTPS		_MMIO(0x2070) /* 965+ only */
 #define GEN4_INSTDONE1	_MMIO(0x207c) /* 965+ only, aka INSTDONE_2 on SNB */
 #define ACTHD_I965	_MMIO(0x2074)
@@ -2359,37 +2361,37 @@ enum i915_power_well_id {
 #define HWS_ADDRESS_MASK	0xfffff000
 #define HWS_START_ADDRESS_SHIFT	4
 #define PWRCTXA		_MMIO(0x2088) /* 965GM+ only */
-#define   PWRCTX_EN	(1<<0)
+#define   PWRCTX_EN	(1 << 0)
 #define IPEIR		_MMIO(0x2088)
 #define IPEHR		_MMIO(0x208c)
 #define GEN2_INSTDONE	_MMIO(0x2090)
 #define NOPID		_MMIO(0x2094)
 #define HWSTAM		_MMIO(0x2098)
 #define DMA_FADD_I8XX	_MMIO(0x20d0)
-#define RING_BBSTATE(base)	_MMIO((base)+0x110)
+#define RING_BBSTATE(base)	_MMIO((base) + 0x110)
 #define   RING_BB_PPGTT		(1 << 5)
-#define RING_SBBADDR(base)	_MMIO((base)+0x114) /* hsw+ */
-#define RING_SBBSTATE(base)	_MMIO((base)+0x118) /* hsw+ */
-#define RING_SBBADDR_UDW(base)	_MMIO((base)+0x11c) /* gen8+ */
-#define RING_BBADDR(base)	_MMIO((base)+0x140)
-#define RING_BBADDR_UDW(base)	_MMIO((base)+0x168) /* gen8+ */
-#define RING_BB_PER_CTX_PTR(base)	_MMIO((base)+0x1c0) /* gen8+ */
-#define RING_INDIRECT_CTX(base)		_MMIO((base)+0x1c4) /* gen8+ */
-#define RING_INDIRECT_CTX_OFFSET(base)	_MMIO((base)+0x1c8) /* gen8+ */
-#define RING_CTX_TIMESTAMP(base)	_MMIO((base)+0x3a8) /* gen8+ */
+#define RING_SBBADDR(base)	_MMIO((base) + 0x114) /* hsw+ */
+#define RING_SBBSTATE(base)	_MMIO((base) + 0x118) /* hsw+ */
+#define RING_SBBADDR_UDW(base)	_MMIO((base) + 0x11c) /* gen8+ */
+#define RING_BBADDR(base)	_MMIO((base) + 0x140)
+#define RING_BBADDR_UDW(base)	_MMIO((base) + 0x168) /* gen8+ */
+#define RING_BB_PER_CTX_PTR(base)	_MMIO((base) + 0x1c0) /* gen8+ */
+#define RING_INDIRECT_CTX(base)		_MMIO((base) + 0x1c4) /* gen8+ */
+#define RING_INDIRECT_CTX_OFFSET(base)	_MMIO((base) + 0x1c8) /* gen8+ */
+#define RING_CTX_TIMESTAMP(base)	_MMIO((base) + 0x3a8) /* gen8+ */
 
 #define ERROR_GEN6	_MMIO(0x40a0)
 #define GEN7_ERR_INT	_MMIO(0x44040)
-#define   ERR_INT_POISON		(1<<31)
-#define   ERR_INT_MMIO_UNCLAIMED	(1<<13)
-#define   ERR_INT_PIPE_CRC_DONE_C	(1<<8)
-#define   ERR_INT_FIFO_UNDERRUN_C	(1<<6)
-#define   ERR_INT_PIPE_CRC_DONE_B	(1<<5)
-#define   ERR_INT_FIFO_UNDERRUN_B	(1<<3)
-#define   ERR_INT_PIPE_CRC_DONE_A	(1<<2)
-#define   ERR_INT_PIPE_CRC_DONE(pipe)	(1<<(2 + (pipe)*3))
-#define   ERR_INT_FIFO_UNDERRUN_A	(1<<0)
-#define   ERR_INT_FIFO_UNDERRUN(pipe)	(1<<((pipe)*3))
+#define   ERR_INT_POISON		(1 << 31)
+#define   ERR_INT_MMIO_UNCLAIMED	(1 << 13)
+#define   ERR_INT_PIPE_CRC_DONE_C	(1 << 8)
+#define   ERR_INT_FIFO_UNDERRUN_C	(1 << 6)
+#define   ERR_INT_PIPE_CRC_DONE_B	(1 << 5)
+#define   ERR_INT_FIFO_UNDERRUN_B	(1 << 3)
+#define   ERR_INT_PIPE_CRC_DONE_A	(1 << 2)
+#define   ERR_INT_PIPE_CRC_DONE(pipe)	(1 << (2 + (pipe) * 3))
+#define   ERR_INT_FIFO_UNDERRUN_A	(1 << 0)
+#define   ERR_INT_FIFO_UNDERRUN(pipe)	(1 << ((pipe) * 3))
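/*
 * Worked example (illustrative): ERR_INT_FIFO_UNDERRUN(PIPE_B) evaluates to
 * 1 << (1 * 3), matching ERR_INT_FIFO_UNDERRUN_B above; the parameterized
 * forms reproduce the per-pipe bits at a 3-bit stride.
 */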
 
 #define GEN8_FAULT_TLB_DATA0		_MMIO(0x4b10)
 #define GEN8_FAULT_TLB_DATA1		_MMIO(0x4b14)
@@ -2397,7 +2399,7 @@ enum i915_power_well_id {
 #define   FAULT_GTT_SEL			(1 << 4)
 
 #define FPGA_DBG		_MMIO(0x42300)
-#define   FPGA_DBG_RM_NOCLAIM	(1<<31)
+#define   FPGA_DBG_RM_NOCLAIM	(1 << 31)
 
 #define CLAIM_ER		_MMIO(VLV_DISPLAY_BASE + 0x2028)
 #define   CLAIM_ER_CLR		(1 << 31)
@@ -2406,22 +2408,22 @@ enum i915_power_well_id {
 
 #define DERRMR		_MMIO(0x44050)
 /* Note that HBLANK events are reserved on bdw+ */
-#define   DERRMR_PIPEA_SCANLINE		(1<<0)
-#define   DERRMR_PIPEA_PRI_FLIP_DONE	(1<<1)
-#define   DERRMR_PIPEA_SPR_FLIP_DONE	(1<<2)
-#define   DERRMR_PIPEA_VBLANK		(1<<3)
-#define   DERRMR_PIPEA_HBLANK		(1<<5)
-#define   DERRMR_PIPEB_SCANLINE 	(1<<8)
-#define   DERRMR_PIPEB_PRI_FLIP_DONE	(1<<9)
-#define   DERRMR_PIPEB_SPR_FLIP_DONE	(1<<10)
-#define   DERRMR_PIPEB_VBLANK		(1<<11)
-#define   DERRMR_PIPEB_HBLANK		(1<<13)
+#define   DERRMR_PIPEA_SCANLINE		(1 << 0)
+#define   DERRMR_PIPEA_PRI_FLIP_DONE	(1 << 1)
+#define   DERRMR_PIPEA_SPR_FLIP_DONE	(1 << 2)
+#define   DERRMR_PIPEA_VBLANK		(1 << 3)
+#define   DERRMR_PIPEA_HBLANK		(1 << 5)
+#define   DERRMR_PIPEB_SCANLINE		(1 << 8)
+#define   DERRMR_PIPEB_PRI_FLIP_DONE	(1 << 9)
+#define   DERRMR_PIPEB_SPR_FLIP_DONE	(1 << 10)
+#define   DERRMR_PIPEB_VBLANK		(1 << 11)
+#define   DERRMR_PIPEB_HBLANK		(1 << 13)
 /* Note that PIPEC is not a simple translation of PIPEA/PIPEB */
-#define   DERRMR_PIPEC_SCANLINE		(1<<14)
-#define   DERRMR_PIPEC_PRI_FLIP_DONE	(1<<15)
-#define   DERRMR_PIPEC_SPR_FLIP_DONE	(1<<20)
-#define   DERRMR_PIPEC_VBLANK		(1<<21)
-#define   DERRMR_PIPEC_HBLANK		(1<<22)
+#define   DERRMR_PIPEC_SCANLINE		(1 << 14)
+#define   DERRMR_PIPEC_PRI_FLIP_DONE	(1 << 15)
+#define   DERRMR_PIPEC_SPR_FLIP_DONE	(1 << 20)
+#define   DERRMR_PIPEC_VBLANK		(1 << 21)
+#define   DERRMR_PIPEC_HBLANK		(1 << 22)
 
 
 /* GM45+ chicken bits -- debug workaround bits that may be required
@@ -2431,16 +2433,21 @@ enum i915_power_well_id {
 #define _3D_CHICKEN	_MMIO(0x2084)
 #define  _3D_CHICKEN_HIZ_PLANE_DISABLE_MSAA_4X_SNB	(1 << 10)
 #define _3D_CHICKEN2	_MMIO(0x208c)
+
+#define FF_SLICE_CHICKEN	_MMIO(0x2088)
+#define  FF_SLICE_CHICKEN_CL_PROVOKING_VERTEX_FIX	(1 << 1)
+
 /* Disables pipelining of read flushes past the SF-WIZ interface.
  * Required on all Ironlake steppings according to the B-Spec, but the
  * particular danger of not doing so is not specified.
  */
 # define _3D_CHICKEN2_WM_READ_PIPELINED			(1 << 14)
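/*
 * Illustrative usage (assumed, mirroring the driver's clock-gating setup):
 * chicken bits like this are written through the masked-bit helpers, where
 * the upper 16 bits select which of the lower 16 bits take effect, e.g.
 *
 *	I915_WRITE(_3D_CHICKEN2,
 *		   _MASKED_BIT_ENABLE(_3D_CHICKEN2_WM_READ_PIPELINED));
 */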
 #define _3D_CHICKEN3	_MMIO(0x2090)
+#define  _3D_CHICKEN_SF_PROVOKING_VERTEX_FIX		(1 << 12)
 #define  _3D_CHICKEN_SF_DISABLE_OBJEND_CULL		(1 << 10)
 #define  _3D_CHICKEN3_AA_LINE_QUALITY_FIX_ENABLE	(1 << 5)
 #define  _3D_CHICKEN3_SF_DISABLE_FASTCLIP_CULL		(1 << 5)
-#define  _3D_CHICKEN_SDE_LIMIT_FIFO_POLY_DEPTH(x)	((x)<<1) /* gen8+ */
+#define  _3D_CHICKEN_SDE_LIMIT_FIFO_POLY_DEPTH(x)	((x) << 1) /* gen8+ */
 #define  _3D_CHICKEN3_SF_DISABLE_PIPELINED_ATTR_FETCH	(1 << 1) /* gen6 */
 
 #define MI_MODE		_MMIO(0x209c)
@@ -2479,22 +2486,22 @@ enum i915_power_well_id {
 
 #define GFX_MODE	_MMIO(0x2520)
 #define GFX_MODE_GEN7	_MMIO(0x229c)
-#define RING_MODE_GEN7(engine)	_MMIO((engine)->mmio_base+0x29c)
-#define   GFX_RUN_LIST_ENABLE		(1<<15)
-#define   GFX_INTERRUPT_STEERING	(1<<14)
-#define   GFX_TLB_INVALIDATE_EXPLICIT	(1<<13)
-#define   GFX_SURFACE_FAULT_ENABLE	(1<<12)
-#define   GFX_REPLAY_MODE		(1<<11)
-#define   GFX_PSMI_GRANULARITY		(1<<10)
-#define   GFX_PPGTT_ENABLE		(1<<9)
-#define   GEN8_GFX_PPGTT_48B		(1<<7)
-
-#define   GFX_FORWARD_VBLANK_MASK	(3<<5)
-#define   GFX_FORWARD_VBLANK_NEVER	(0<<5)
-#define   GFX_FORWARD_VBLANK_ALWAYS	(1<<5)
-#define   GFX_FORWARD_VBLANK_COND	(2<<5)
-
-#define   GEN11_GFX_DISABLE_LEGACY_MODE	(1<<3)
+#define RING_MODE_GEN7(engine)	_MMIO((engine)->mmio_base + 0x29c)
+#define   GFX_RUN_LIST_ENABLE		(1 << 15)
+#define   GFX_INTERRUPT_STEERING	(1 << 14)
+#define   GFX_TLB_INVALIDATE_EXPLICIT	(1 << 13)
+#define   GFX_SURFACE_FAULT_ENABLE	(1 << 12)
+#define   GFX_REPLAY_MODE		(1 << 11)
+#define   GFX_PSMI_GRANULARITY		(1 << 10)
+#define   GFX_PPGTT_ENABLE		(1 << 9)
+#define   GEN8_GFX_PPGTT_48B		(1 << 7)
+
+#define   GFX_FORWARD_VBLANK_MASK	(3 << 5)
+#define   GFX_FORWARD_VBLANK_NEVER	(0 << 5)
+#define   GFX_FORWARD_VBLANK_ALWAYS	(1 << 5)
+#define   GFX_FORWARD_VBLANK_COND	(2 << 5)
+
+#define   GEN11_GFX_DISABLE_LEGACY_MODE	(1 << 3)
 
 #define VLV_DISPLAY_BASE 0x180000
 #define VLV_MIPI_BASE VLV_DISPLAY_BASE
@@ -2508,8 +2515,8 @@ enum i915_power_well_id {
 #define IMR		_MMIO(0x20a8)
 #define ISR		_MMIO(0x20ac)
 #define VLV_GUNIT_CLOCK_GATE	_MMIO(VLV_DISPLAY_BASE + 0x2060)
-#define   GINT_DIS		(1<<22)
-#define   GCFG_DIS		(1<<8)
+#define   GINT_DIS		(1 << 22)
+#define   GCFG_DIS		(1 << 8)
 #define VLV_GUNIT_CLOCK_GATE2	_MMIO(VLV_DISPLAY_BASE + 0x2064)
 #define VLV_IIR_RW	_MMIO(VLV_DISPLAY_BASE + 0x2084)
 #define VLV_IER		_MMIO(VLV_DISPLAY_BASE + 0x20a0)
@@ -2519,35 +2526,35 @@ enum i915_power_well_id {
 #define VLV_PCBR	_MMIO(VLV_DISPLAY_BASE + 0x2120)
 #define VLV_PCBR_ADDR_SHIFT	12
 
-#define   DISPLAY_PLANE_FLIP_PENDING(plane) (1<<(11-(plane))) /* A and B only */
+#define   DISPLAY_PLANE_FLIP_PENDING(plane) (1 << (11 - (plane))) /* A and B only */
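/*
 * Worked example (illustrative): plane A (0) maps to bit 11 and plane B (1)
 * to bit 10, hence the "A and B only" note above.
 */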
 #define EIR		_MMIO(0x20b0)
 #define EMR		_MMIO(0x20b4)
 #define ESR		_MMIO(0x20b8)
-#define   GM45_ERROR_PAGE_TABLE				(1<<5)
-#define   GM45_ERROR_MEM_PRIV				(1<<4)
-#define   I915_ERROR_PAGE_TABLE				(1<<4)
-#define   GM45_ERROR_CP_PRIV				(1<<3)
-#define   I915_ERROR_MEMORY_REFRESH			(1<<1)
-#define   I915_ERROR_INSTRUCTION			(1<<0)
+#define   GM45_ERROR_PAGE_TABLE				(1 << 5)
+#define   GM45_ERROR_MEM_PRIV				(1 << 4)
+#define   I915_ERROR_PAGE_TABLE				(1 << 4)
+#define   GM45_ERROR_CP_PRIV				(1 << 3)
+#define   I915_ERROR_MEMORY_REFRESH			(1 << 1)
+#define   I915_ERROR_INSTRUCTION			(1 << 0)
 #define INSTPM	        _MMIO(0x20c0)
-#define   INSTPM_SELF_EN (1<<12) /* 915GM only */
-#define   INSTPM_AGPBUSY_INT_EN (1<<11) /* gen3: when disabled, pending interrupts
+#define   INSTPM_SELF_EN (1 << 12) /* 915GM only */
+#define   INSTPM_AGPBUSY_INT_EN (1 << 11) /* gen3: when disabled, pending interrupts
 					will not assert AGPBUSY# and will only
 					be delivered when out of C3. */
-#define   INSTPM_FORCE_ORDERING				(1<<7) /* GEN6+ */
-#define   INSTPM_TLB_INVALIDATE	(1<<9)
-#define   INSTPM_SYNC_FLUSH	(1<<5)
+#define   INSTPM_FORCE_ORDERING				(1 << 7) /* GEN6+ */
+#define   INSTPM_TLB_INVALIDATE	(1 << 9)
+#define   INSTPM_SYNC_FLUSH	(1 << 5)
 #define ACTHD	        _MMIO(0x20c8)
 #define MEM_MODE	_MMIO(0x20cc)
-#define   MEM_DISPLAY_B_TRICKLE_FEED_DISABLE (1<<3) /* 830 only */
-#define   MEM_DISPLAY_A_TRICKLE_FEED_DISABLE (1<<2) /* 830/845 only */
-#define   MEM_DISPLAY_TRICKLE_FEED_DISABLE (1<<2) /* 85x only */
+#define   MEM_DISPLAY_B_TRICKLE_FEED_DISABLE (1 << 3) /* 830 only */
+#define   MEM_DISPLAY_A_TRICKLE_FEED_DISABLE (1 << 2) /* 830/845 only */
+#define   MEM_DISPLAY_TRICKLE_FEED_DISABLE (1 << 2) /* 85x only */
 #define FW_BLC		_MMIO(0x20d8)
 #define FW_BLC2		_MMIO(0x20dc)
 #define FW_BLC_SELF	_MMIO(0x20e0) /* 915+ only */
-#define   FW_BLC_SELF_EN_MASK      (1<<31)
-#define   FW_BLC_SELF_FIFO_MASK    (1<<16) /* 945 only */
-#define   FW_BLC_SELF_EN           (1<<15) /* 945 only */
+#define   FW_BLC_SELF_EN_MASK      (1 << 31)
+#define   FW_BLC_SELF_FIFO_MASK    (1 << 16) /* 945 only */
+#define   FW_BLC_SELF_EN           (1 << 15) /* 945 only */
 #define MM_BURST_LENGTH     0x00700000
 #define MM_FIFO_WATERMARK   0x0001F000
 #define LM_BURST_LENGTH     0x00000700
@@ -2646,40 +2653,40 @@ enum i915_power_well_id {
 #define   MI_AGPBUSY_830_MODE			(1 << 0) /* 85x only */
 
 #define CACHE_MODE_0	_MMIO(0x2120) /* 915+ only */
-#define   CM0_PIPELINED_RENDER_FLUSH_DISABLE (1<<8)
-#define   CM0_IZ_OPT_DISABLE      (1<<6)
-#define   CM0_ZR_OPT_DISABLE      (1<<5)
-#define	  CM0_STC_EVICT_DISABLE_LRA_SNB	(1<<5)
-#define   CM0_DEPTH_EVICT_DISABLE (1<<4)
-#define   CM0_COLOR_EVICT_DISABLE (1<<3)
-#define   CM0_DEPTH_WRITE_DISABLE (1<<1)
-#define   CM0_RC_OP_FLUSH_DISABLE (1<<0)
+#define   CM0_PIPELINED_RENDER_FLUSH_DISABLE (1 << 8)
+#define   CM0_IZ_OPT_DISABLE      (1 << 6)
+#define   CM0_ZR_OPT_DISABLE      (1 << 5)
+#define	  CM0_STC_EVICT_DISABLE_LRA_SNB	(1 << 5)
+#define   CM0_DEPTH_EVICT_DISABLE (1 << 4)
+#define   CM0_COLOR_EVICT_DISABLE (1 << 3)
+#define   CM0_DEPTH_WRITE_DISABLE (1 << 1)
+#define   CM0_RC_OP_FLUSH_DISABLE (1 << 0)
 #define GFX_FLSH_CNTL	_MMIO(0x2170) /* 915+ only */
 #define GFX_FLSH_CNTL_GEN6	_MMIO(0x101008)
-#define   GFX_FLSH_CNTL_EN	(1<<0)
+#define   GFX_FLSH_CNTL_EN	(1 << 0)
 #define ECOSKPD		_MMIO(0x21d0)
-#define   ECO_GATING_CX_ONLY	(1<<3)
-#define   ECO_FLIP_DONE		(1<<0)
+#define   ECO_GATING_CX_ONLY	(1 << 3)
+#define   ECO_FLIP_DONE		(1 << 0)
 
 #define CACHE_MODE_0_GEN7	_MMIO(0x7000) /* IVB+ */
-#define RC_OP_FLUSH_ENABLE (1<<0)
-#define   HIZ_RAW_STALL_OPT_DISABLE (1<<2)
+#define RC_OP_FLUSH_ENABLE (1 << 0)
+#define   HIZ_RAW_STALL_OPT_DISABLE (1 << 2)
 #define CACHE_MODE_1		_MMIO(0x7004) /* IVB+ */
-#define   PIXEL_SUBSPAN_COLLECT_OPT_DISABLE	(1<<6)
-#define   GEN8_4x4_STC_OPTIMIZATION_DISABLE	(1<<6)
-#define   GEN9_PARTIAL_RESOLVE_IN_VC_DISABLE	(1<<1)
+#define   PIXEL_SUBSPAN_COLLECT_OPT_DISABLE	(1 << 6)
+#define   GEN8_4x4_STC_OPTIMIZATION_DISABLE	(1 << 6)
+#define   GEN9_PARTIAL_RESOLVE_IN_VC_DISABLE	(1 << 1)
 
 #define GEN10_CACHE_MODE_SS			_MMIO(0xe420)
 #define   FLOAT_BLEND_OPTIMIZATION_ENABLE	(1 << 4)
 
 #define GEN6_BLITTER_ECOSKPD	_MMIO(0x221d0)
 #define   GEN6_BLITTER_LOCK_SHIFT			16
-#define   GEN6_BLITTER_FBC_NOTIFY			(1<<3)
+#define   GEN6_BLITTER_FBC_NOTIFY			(1 << 3)
 
 #define GEN6_RC_SLEEP_PSMI_CONTROL	_MMIO(0x2050)
 #define   GEN6_PSMI_SLEEP_MSG_DISABLE	(1 << 0)
 #define   GEN8_RC_SEMA_IDLE_MSG_DISABLE	(1 << 12)
-#define   GEN8_FF_DOP_CLOCK_GATE_DISABLE	(1<<10)
+#define   GEN8_FF_DOP_CLOCK_GATE_DISABLE	(1 << 10)
 
 #define GEN6_RCS_PWR_FSM _MMIO(0x22ac)
 #define GEN9_RCS_FE_FSM2 _MMIO(0x22a4)
@@ -2735,7 +2742,7 @@ enum i915_power_well_id {
 #define GEN8_EU_DISABLE2		_MMIO(0x913c)
 #define   GEN8_EU_DIS2_S2_MASK		0xff
 
-#define GEN9_EU_DISABLE(slice)		_MMIO(0x9134 + (slice)*0x4)
+#define GEN9_EU_DISABLE(slice)		_MMIO(0x9134 + (slice) * 0x4)
 
 #define GEN10_EU_DISABLE3		_MMIO(0x9140)
 #define   GEN10_EU_DIS_SS_MASK		0xff
@@ -2792,44 +2799,44 @@ enum i915_power_well_id {
 	 (IS_HASWELL(dev_priv) ? GT_RENDER_L3_PARITY_ERROR_INTERRUPT_S1 : 0))
 
 /* These are all the "old" interrupts */
-#define ILK_BSD_USER_INTERRUPT				(1<<5)
-
-#define I915_PM_INTERRUPT				(1<<31)
-#define I915_ISP_INTERRUPT				(1<<22)
-#define I915_LPE_PIPE_B_INTERRUPT			(1<<21)
-#define I915_LPE_PIPE_A_INTERRUPT			(1<<20)
-#define I915_MIPIC_INTERRUPT				(1<<19)
-#define I915_MIPIA_INTERRUPT				(1<<18)
-#define I915_PIPE_CONTROL_NOTIFY_INTERRUPT		(1<<18)
-#define I915_DISPLAY_PORT_INTERRUPT			(1<<17)
-#define I915_DISPLAY_PIPE_C_HBLANK_INTERRUPT		(1<<16)
-#define I915_MASTER_ERROR_INTERRUPT			(1<<15)
-#define I915_RENDER_COMMAND_PARSER_ERROR_INTERRUPT	(1<<15)
-#define I915_DISPLAY_PIPE_B_HBLANK_INTERRUPT		(1<<14)
-#define I915_GMCH_THERMAL_SENSOR_EVENT_INTERRUPT	(1<<14) /* p-state */
-#define I915_DISPLAY_PIPE_A_HBLANK_INTERRUPT		(1<<13)
-#define I915_HWB_OOM_INTERRUPT				(1<<13)
-#define I915_LPE_PIPE_C_INTERRUPT			(1<<12)
-#define I915_SYNC_STATUS_INTERRUPT			(1<<12)
-#define I915_MISC_INTERRUPT				(1<<11)
-#define I915_DISPLAY_PLANE_A_FLIP_PENDING_INTERRUPT	(1<<11)
-#define I915_DISPLAY_PIPE_C_VBLANK_INTERRUPT		(1<<10)
-#define I915_DISPLAY_PLANE_B_FLIP_PENDING_INTERRUPT	(1<<10)
-#define I915_DISPLAY_PIPE_C_EVENT_INTERRUPT		(1<<9)
-#define I915_OVERLAY_PLANE_FLIP_PENDING_INTERRUPT	(1<<9)
-#define I915_DISPLAY_PIPE_C_DPBM_INTERRUPT		(1<<8)
-#define I915_DISPLAY_PLANE_C_FLIP_PENDING_INTERRUPT	(1<<8)
-#define I915_DISPLAY_PIPE_A_VBLANK_INTERRUPT		(1<<7)
-#define I915_DISPLAY_PIPE_A_EVENT_INTERRUPT		(1<<6)
-#define I915_DISPLAY_PIPE_B_VBLANK_INTERRUPT		(1<<5)
-#define I915_DISPLAY_PIPE_B_EVENT_INTERRUPT		(1<<4)
-#define I915_DISPLAY_PIPE_A_DPBM_INTERRUPT		(1<<3)
-#define I915_DISPLAY_PIPE_B_DPBM_INTERRUPT		(1<<2)
-#define I915_DEBUG_INTERRUPT				(1<<2)
-#define I915_WINVALID_INTERRUPT				(1<<1)
-#define I915_USER_INTERRUPT				(1<<1)
-#define I915_ASLE_INTERRUPT				(1<<0)
-#define I915_BSD_USER_INTERRUPT				(1<<25)
+#define ILK_BSD_USER_INTERRUPT				(1 << 5)
+
+#define I915_PM_INTERRUPT				(1 << 31)
+#define I915_ISP_INTERRUPT				(1 << 22)
+#define I915_LPE_PIPE_B_INTERRUPT			(1 << 21)
+#define I915_LPE_PIPE_A_INTERRUPT			(1 << 20)
+#define I915_MIPIC_INTERRUPT				(1 << 19)
+#define I915_MIPIA_INTERRUPT				(1 << 18)
+#define I915_PIPE_CONTROL_NOTIFY_INTERRUPT		(1 << 18)
+#define I915_DISPLAY_PORT_INTERRUPT			(1 << 17)
+#define I915_DISPLAY_PIPE_C_HBLANK_INTERRUPT		(1 << 16)
+#define I915_MASTER_ERROR_INTERRUPT			(1 << 15)
+#define I915_RENDER_COMMAND_PARSER_ERROR_INTERRUPT	(1 << 15)
+#define I915_DISPLAY_PIPE_B_HBLANK_INTERRUPT		(1 << 14)
+#define I915_GMCH_THERMAL_SENSOR_EVENT_INTERRUPT	(1 << 14) /* p-state */
+#define I915_DISPLAY_PIPE_A_HBLANK_INTERRUPT		(1 << 13)
+#define I915_HWB_OOM_INTERRUPT				(1 << 13)
+#define I915_LPE_PIPE_C_INTERRUPT			(1 << 12)
+#define I915_SYNC_STATUS_INTERRUPT			(1 << 12)
+#define I915_MISC_INTERRUPT				(1 << 11)
+#define I915_DISPLAY_PLANE_A_FLIP_PENDING_INTERRUPT	(1 << 11)
+#define I915_DISPLAY_PIPE_C_VBLANK_INTERRUPT		(1 << 10)
+#define I915_DISPLAY_PLANE_B_FLIP_PENDING_INTERRUPT	(1 << 10)
+#define I915_DISPLAY_PIPE_C_EVENT_INTERRUPT		(1 << 9)
+#define I915_OVERLAY_PLANE_FLIP_PENDING_INTERRUPT	(1 << 9)
+#define I915_DISPLAY_PIPE_C_DPBM_INTERRUPT		(1 << 8)
+#define I915_DISPLAY_PLANE_C_FLIP_PENDING_INTERRUPT	(1 << 8)
+#define I915_DISPLAY_PIPE_A_VBLANK_INTERRUPT		(1 << 7)
+#define I915_DISPLAY_PIPE_A_EVENT_INTERRUPT		(1 << 6)
+#define I915_DISPLAY_PIPE_B_VBLANK_INTERRUPT		(1 << 5)
+#define I915_DISPLAY_PIPE_B_EVENT_INTERRUPT		(1 << 4)
+#define I915_DISPLAY_PIPE_A_DPBM_INTERRUPT		(1 << 3)
+#define I915_DISPLAY_PIPE_B_DPBM_INTERRUPT		(1 << 2)
+#define I915_DEBUG_INTERRUPT				(1 << 2)
+#define I915_WINVALID_INTERRUPT				(1 << 1)
+#define I915_USER_INTERRUPT				(1 << 1)
+#define I915_ASLE_INTERRUPT				(1 << 0)
+#define I915_BSD_USER_INTERRUPT				(1 << 25)
 
 #define I915_HDMI_LPE_AUDIO_BASE	(VLV_DISPLAY_BASE + 0x65000)
 #define I915_HDMI_LPE_AUDIO_SIZE	0x1000
@@ -2852,19 +2859,19 @@ enum i915_power_well_id {
 #define GEN7_FF_THREAD_MODE		_MMIO(0x20a0)
 #define   GEN7_FF_SCHED_MASK		0x0077070
 #define   GEN8_FF_DS_REF_CNT_FFME	(1 << 19)
-#define   GEN7_FF_TS_SCHED_HS1		(0x5<<16)
-#define   GEN7_FF_TS_SCHED_HS0		(0x3<<16)
-#define   GEN7_FF_TS_SCHED_LOAD_BALANCE	(0x1<<16)
-#define   GEN7_FF_TS_SCHED_HW		(0x0<<16) /* Default */
+#define   GEN7_FF_TS_SCHED_HS1		(0x5 << 16)
+#define   GEN7_FF_TS_SCHED_HS0		(0x3 << 16)
+#define   GEN7_FF_TS_SCHED_LOAD_BALANCE	(0x1 << 16)
+#define   GEN7_FF_TS_SCHED_HW		(0x0 << 16) /* Default */
 #define   GEN7_FF_VS_REF_CNT_FFME	(1 << 15)
-#define   GEN7_FF_VS_SCHED_HS1		(0x5<<12)
-#define   GEN7_FF_VS_SCHED_HS0		(0x3<<12)
-#define   GEN7_FF_VS_SCHED_LOAD_BALANCE	(0x1<<12) /* Default */
-#define   GEN7_FF_VS_SCHED_HW		(0x0<<12)
-#define   GEN7_FF_DS_SCHED_HS1		(0x5<<4)
-#define   GEN7_FF_DS_SCHED_HS0		(0x3<<4)
-#define   GEN7_FF_DS_SCHED_LOAD_BALANCE	(0x1<<4)  /* Default */
-#define   GEN7_FF_DS_SCHED_HW		(0x0<<4)
+#define   GEN7_FF_VS_SCHED_HS1		(0x5 << 12)
+#define   GEN7_FF_VS_SCHED_HS0		(0x3 << 12)
+#define   GEN7_FF_VS_SCHED_LOAD_BALANCE	(0x1 << 12) /* Default */
+#define   GEN7_FF_VS_SCHED_HW		(0x0 << 12)
+#define   GEN7_FF_DS_SCHED_HS1		(0x5 << 4)
+#define   GEN7_FF_DS_SCHED_HS0		(0x3 << 4)
+#define   GEN7_FF_DS_SCHED_LOAD_BALANCE	(0x1 << 4)  /* Default */
+#define   GEN7_FF_DS_SCHED_HW		(0x0 << 4)
 
 /*
  * Framebuffer compression (915+ only)
@@ -2873,51 +2880,51 @@ enum i915_power_well_id {
 #define FBC_CFB_BASE		_MMIO(0x3200) /* 4k page aligned */
 #define FBC_LL_BASE		_MMIO(0x3204) /* 4k page aligned */
 #define FBC_CONTROL		_MMIO(0x3208)
-#define   FBC_CTL_EN		(1<<31)
-#define   FBC_CTL_PERIODIC	(1<<30)
+#define   FBC_CTL_EN		(1 << 31)
+#define   FBC_CTL_PERIODIC	(1 << 30)
 #define   FBC_CTL_INTERVAL_SHIFT (16)
-#define   FBC_CTL_UNCOMPRESSIBLE (1<<14)
-#define   FBC_CTL_C3_IDLE	(1<<13)
+#define   FBC_CTL_UNCOMPRESSIBLE (1 << 14)
+#define   FBC_CTL_C3_IDLE	(1 << 13)
 #define   FBC_CTL_STRIDE_SHIFT	(5)
 #define   FBC_CTL_FENCENO_SHIFT	(0)
 #define FBC_COMMAND		_MMIO(0x320c)
-#define   FBC_CMD_COMPRESS	(1<<0)
+#define   FBC_CMD_COMPRESS	(1 << 0)
 #define FBC_STATUS		_MMIO(0x3210)
-#define   FBC_STAT_COMPRESSING	(1<<31)
-#define   FBC_STAT_COMPRESSED	(1<<30)
-#define   FBC_STAT_MODIFIED	(1<<29)
+#define   FBC_STAT_COMPRESSING	(1 << 31)
+#define   FBC_STAT_COMPRESSED	(1 << 30)
+#define   FBC_STAT_MODIFIED	(1 << 29)
 #define   FBC_STAT_CURRENT_LINE_SHIFT	(0)
 #define FBC_CONTROL2		_MMIO(0x3214)
-#define   FBC_CTL_FENCE_DBL	(0<<4)
-#define   FBC_CTL_IDLE_IMM	(0<<2)
-#define   FBC_CTL_IDLE_FULL	(1<<2)
-#define   FBC_CTL_IDLE_LINE	(2<<2)
-#define   FBC_CTL_IDLE_DEBUG	(3<<2)
-#define   FBC_CTL_CPU_FENCE	(1<<1)
-#define   FBC_CTL_PLANE(plane)	((plane)<<0)
+#define   FBC_CTL_FENCE_DBL	(0 << 4)
+#define   FBC_CTL_IDLE_IMM	(0 << 2)
+#define   FBC_CTL_IDLE_FULL	(1 << 2)
+#define   FBC_CTL_IDLE_LINE	(2 << 2)
+#define   FBC_CTL_IDLE_DEBUG	(3 << 2)
+#define   FBC_CTL_CPU_FENCE	(1 << 1)
+#define   FBC_CTL_PLANE(plane)	((plane) << 0)
 #define FBC_FENCE_OFF		_MMIO(0x3218) /* BSpec typo has 321Bh */
 #define FBC_TAG(i)		_MMIO(0x3300 + (i) * 4)
 
 #define FBC_LL_SIZE		(1536)
 
 #define FBC_LLC_READ_CTRL	_MMIO(0x9044)
-#define   FBC_LLC_FULLY_OPEN	(1<<30)
+#define   FBC_LLC_FULLY_OPEN	(1 << 30)
 
 /* Framebuffer compression for GM45+ */
 #define DPFC_CB_BASE		_MMIO(0x3200)
 #define DPFC_CONTROL		_MMIO(0x3208)
-#define   DPFC_CTL_EN		(1<<31)
-#define   DPFC_CTL_PLANE(plane)	((plane)<<30)
-#define   IVB_DPFC_CTL_PLANE(plane)	((plane)<<29)
-#define   DPFC_CTL_FENCE_EN	(1<<29)
-#define   IVB_DPFC_CTL_FENCE_EN	(1<<28)
-#define   DPFC_CTL_PERSISTENT_MODE	(1<<25)
-#define   DPFC_SR_EN		(1<<10)
-#define   DPFC_CTL_LIMIT_1X	(0<<6)
-#define   DPFC_CTL_LIMIT_2X	(1<<6)
-#define   DPFC_CTL_LIMIT_4X	(2<<6)
+#define   DPFC_CTL_EN		(1 << 31)
+#define   DPFC_CTL_PLANE(plane)	((plane) << 30)
+#define   IVB_DPFC_CTL_PLANE(plane)	((plane) << 29)
+#define   DPFC_CTL_FENCE_EN	(1 << 29)
+#define   IVB_DPFC_CTL_FENCE_EN	(1 << 28)
+#define   DPFC_CTL_PERSISTENT_MODE	(1 << 25)
+#define   DPFC_SR_EN		(1 << 10)
+#define   DPFC_CTL_LIMIT_1X	(0 << 6)
+#define   DPFC_CTL_LIMIT_2X	(1 << 6)
+#define   DPFC_CTL_LIMIT_4X	(2 << 6)
 #define DPFC_RECOMP_CTL		_MMIO(0x320c)
-#define   DPFC_RECOMP_STALL_EN	(1<<27)
+#define   DPFC_RECOMP_STALL_EN	(1 << 27)
 #define   DPFC_RECOMP_STALL_WM_SHIFT (16)
 #define   DPFC_RECOMP_STALL_WM_MASK (0x07ff0000)
 #define   DPFC_RECOMP_TIMER_COUNT_SHIFT (0)
@@ -2930,12 +2937,12 @@ enum i915_power_well_id {
 #define DPFC_STATUS2		_MMIO(0x3214)
 #define DPFC_FENCE_YOFF		_MMIO(0x3218)
 #define DPFC_CHICKEN		_MMIO(0x3224)
-#define   DPFC_HT_MODIFY	(1<<31)
+#define   DPFC_HT_MODIFY	(1 << 31)
 
 /* Framebuffer compression for Ironlake */
 #define ILK_DPFC_CB_BASE	_MMIO(0x43200)
 #define ILK_DPFC_CONTROL	_MMIO(0x43208)
-#define   FBC_CTL_FALSE_COLOR	(1<<10)
+#define   FBC_CTL_FALSE_COLOR	(1 << 10)
 /* Bits 28:8 are reserved */
 #define   DPFC_RESERVED		(0x1FFFFF00)
 #define ILK_DPFC_RECOMP_CTL	_MMIO(0x4320c)
@@ -2946,15 +2953,15 @@ enum i915_power_well_id {
 #define  BDW_FBC_COMP_SEG_MASK	0xfff
 #define ILK_DPFC_FENCE_YOFF	_MMIO(0x43218)
 #define ILK_DPFC_CHICKEN	_MMIO(0x43224)
-#define   ILK_DPFC_DISABLE_DUMMY0 (1<<8)
-#define   ILK_DPFC_NUKE_ON_ANY_MODIFICATION	(1<<23)
+#define   ILK_DPFC_DISABLE_DUMMY0 (1 << 8)
+#define   ILK_DPFC_NUKE_ON_ANY_MODIFICATION	(1 << 23)
 #define ILK_FBC_RT_BASE		_MMIO(0x2128)
-#define   ILK_FBC_RT_VALID	(1<<0)
-#define   SNB_FBC_FRONT_BUFFER	(1<<1)
+#define   ILK_FBC_RT_VALID	(1 << 0)
+#define   SNB_FBC_FRONT_BUFFER	(1 << 1)
 
 #define ILK_DISPLAY_CHICKEN1	_MMIO(0x42000)
-#define   ILK_FBCQ_DIS		(1<<22)
-#define	  ILK_PABSTRETCH_DIS	(1<<21)
+#define   ILK_FBCQ_DIS		(1 << 22)
+#define	  ILK_PABSTRETCH_DIS	(1 << 21)
 
 
 /*
@@ -2963,7 +2970,7 @@ enum i915_power_well_id {
  * The following two registers are of type GTTMMADR
  */
 #define SNB_DPFC_CTL_SA		_MMIO(0x100100)
-#define   SNB_CPU_FENCE_ENABLE	(1<<29)
+#define   SNB_CPU_FENCE_ENABLE	(1 << 29)
 #define DPFC_CPU_FENCE_OFFSET	_MMIO(0x100104)
 
 /* Framebuffer compression for Ivybridge */
@@ -2973,8 +2980,8 @@ enum i915_power_well_id {
 #define   IPS_ENABLE	(1 << 31)
 
 #define MSG_FBC_REND_STATE	_MMIO(0x50380)
-#define   FBC_REND_NUKE		(1<<2)
-#define   FBC_REND_CACHE_CLEAN	(1<<1)
+#define   FBC_REND_NUKE		(1 << 2)
+#define   FBC_REND_CACHE_CLEAN	(1 << 1)
 
 /*
  * GPIO regs
@@ -2987,6 +2994,10 @@ enum i915_power_well_id {
 #define GPIOF			_MMIO(0x5024)
 #define GPIOG			_MMIO(0x5028)
 #define GPIOH			_MMIO(0x502c)
+#define GPIOJ			_MMIO(0x5034)
+#define GPIOK			_MMIO(0x5038)
+#define GPIOL			_MMIO(0x503C)
+#define GPIOM			_MMIO(0x5040)
 # define GPIO_CLOCK_DIR_MASK		(1 << 0)
 # define GPIO_CLOCK_DIR_IN		(0 << 1)
 # define GPIO_CLOCK_DIR_OUT		(1 << 1)
@@ -3003,12 +3014,12 @@ enum i915_power_well_id {
 # define GPIO_DATA_PULLUP_DISABLE	(1 << 13)
 
 #define GMBUS0			_MMIO(dev_priv->gpio_mmio_base + 0x5100) /* clock/port select */
-#define   GMBUS_AKSV_SELECT	(1<<11)
-#define   GMBUS_RATE_100KHZ	(0<<8)
-#define   GMBUS_RATE_50KHZ	(1<<8)
-#define   GMBUS_RATE_400KHZ	(2<<8) /* reserved on Pineview */
-#define   GMBUS_RATE_1MHZ	(3<<8) /* reserved on Pineview */
-#define   GMBUS_HOLD_EXT	(1<<7) /* 300ns hold time, rsvd on Pineview */
+#define   GMBUS_AKSV_SELECT	(1 << 11)
+#define   GMBUS_RATE_100KHZ	(0 << 8)
+#define   GMBUS_RATE_50KHZ	(1 << 8)
+#define   GMBUS_RATE_400KHZ	(2 << 8) /* reserved on Pineview */
+#define   GMBUS_RATE_1MHZ	(3 << 8) /* reserved on Pineview */
+#define   GMBUS_HOLD_EXT	(1 << 7) /* 300ns hold time, rsvd on Pineview */
 #define   GMBUS_PIN_DISABLED	0
 #define   GMBUS_PIN_SSC		1
 #define   GMBUS_PIN_VGADDC	2
@@ -3029,36 +3040,36 @@ enum i915_power_well_id {
 
 #define   GMBUS_NUM_PINS	13 /* including 0 */
 #define GMBUS1			_MMIO(dev_priv->gpio_mmio_base + 0x5104) /* command/status */
-#define   GMBUS_SW_CLR_INT	(1<<31)
-#define   GMBUS_SW_RDY		(1<<30)
-#define   GMBUS_ENT		(1<<29) /* enable timeout */
-#define   GMBUS_CYCLE_NONE	(0<<25)
-#define   GMBUS_CYCLE_WAIT	(1<<25)
-#define   GMBUS_CYCLE_INDEX	(2<<25)
-#define   GMBUS_CYCLE_STOP	(4<<25)
+#define   GMBUS_SW_CLR_INT	(1 << 31)
+#define   GMBUS_SW_RDY		(1 << 30)
+#define   GMBUS_ENT		(1 << 29) /* enable timeout */
+#define   GMBUS_CYCLE_NONE	(0 << 25)
+#define   GMBUS_CYCLE_WAIT	(1 << 25)
+#define   GMBUS_CYCLE_INDEX	(2 << 25)
+#define   GMBUS_CYCLE_STOP	(4 << 25)
 #define   GMBUS_BYTE_COUNT_SHIFT 16
 #define   GMBUS_BYTE_COUNT_MAX   256U
 #define   GMBUS_SLAVE_INDEX_SHIFT 8
 #define   GMBUS_SLAVE_ADDR_SHIFT 1
-#define   GMBUS_SLAVE_READ	(1<<0)
-#define   GMBUS_SLAVE_WRITE	(0<<0)
+#define   GMBUS_SLAVE_READ	(1 << 0)
+#define   GMBUS_SLAVE_WRITE	(0 << 0)
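/*
 * Illustrative GMBUS1 command composition (assumed, mirroring the driver's
 * gmbus transfer path): a read of len bytes from 7-bit address addr would be
 * kicked off with something like
 *
 *	I915_WRITE(GMBUS1, GMBUS_SW_RDY | GMBUS_CYCLE_WAIT |
 *		   (len << GMBUS_BYTE_COUNT_SHIFT) |
 *		   (addr << GMBUS_SLAVE_ADDR_SHIFT) | GMBUS_SLAVE_READ);
 */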
 #define GMBUS2			_MMIO(dev_priv->gpio_mmio_base + 0x5108) /* status */
-#define   GMBUS_INUSE		(1<<15)
-#define   GMBUS_HW_WAIT_PHASE	(1<<14)
-#define   GMBUS_STALL_TIMEOUT	(1<<13)
-#define   GMBUS_INT		(1<<12)
-#define   GMBUS_HW_RDY		(1<<11)
-#define   GMBUS_SATOER		(1<<10)
-#define   GMBUS_ACTIVE		(1<<9)
+#define   GMBUS_INUSE		(1 << 15)
+#define   GMBUS_HW_WAIT_PHASE	(1 << 14)
+#define   GMBUS_STALL_TIMEOUT	(1 << 13)
+#define   GMBUS_INT		(1 << 12)
+#define   GMBUS_HW_RDY		(1 << 11)
+#define   GMBUS_SATOER		(1 << 10)
+#define   GMBUS_ACTIVE		(1 << 9)
 #define GMBUS3			_MMIO(dev_priv->gpio_mmio_base + 0x510c) /* data buffer bytes 3-0 */
 #define GMBUS4			_MMIO(dev_priv->gpio_mmio_base + 0x5110) /* interrupt mask (Pineview+) */
-#define   GMBUS_SLAVE_TIMEOUT_EN (1<<4)
-#define   GMBUS_NAK_EN		(1<<3)
-#define   GMBUS_IDLE_EN		(1<<2)
-#define   GMBUS_HW_WAIT_EN	(1<<1)
-#define   GMBUS_HW_RDY_EN	(1<<0)
+#define   GMBUS_SLAVE_TIMEOUT_EN (1 << 4)
+#define   GMBUS_NAK_EN		(1 << 3)
+#define   GMBUS_IDLE_EN		(1 << 2)
+#define   GMBUS_HW_WAIT_EN	(1 << 1)
+#define   GMBUS_HW_RDY_EN	(1 << 0)
 #define GMBUS5			_MMIO(dev_priv->gpio_mmio_base + 0x5120) /* byte index */
-#define   GMBUS_2BYTE_INDEX_EN	(1<<31)
+#define   GMBUS_2BYTE_INDEX_EN	(1 << 31)
 
 /*
  * Clock control & power management
@@ -3096,10 +3107,10 @@ enum i915_power_well_id {
 #define   DPLL_P2_CLOCK_DIV_MASK	0x03000000 /* i915 */
 #define   DPLL_FPA01_P1_POST_DIV_MASK	0x00ff0000 /* i915 */
 #define   DPLL_FPA01_P1_POST_DIV_MASK_PINEVIEW	0x00ff8000 /* Pineview */
-#define   DPLL_LOCK_VLV			(1<<15)
-#define   DPLL_INTEGRATED_CRI_CLK_VLV	(1<<14)
-#define   DPLL_INTEGRATED_REF_CLK_VLV	(1<<13)
-#define   DPLL_SSC_REF_CLK_CHV		(1<<13)
+#define   DPLL_LOCK_VLV			(1 << 15)
+#define   DPLL_INTEGRATED_CRI_CLK_VLV	(1 << 14)
+#define   DPLL_INTEGRATED_REF_CLK_VLV	(1 << 13)
+#define   DPLL_SSC_REF_CLK_CHV		(1 << 13)
 #define   DPLL_PORTC_READY_MASK		(0xf << 4)
 #define   DPLL_PORTB_READY_MASK		(0xf)
 
@@ -3109,20 +3120,20 @@ enum i915_power_well_id {
 #define DPIO_PHY_STATUS			_MMIO(VLV_DISPLAY_BASE + 0x6240)
 #define   DPLL_PORTD_READY_MASK		(0xf)
 #define DISPLAY_PHY_CONTROL _MMIO(VLV_DISPLAY_BASE + 0x60100)
-#define   PHY_CH_POWER_DOWN_OVRD_EN(phy, ch)	(1 << (2*(phy)+(ch)+27))
+#define   PHY_CH_POWER_DOWN_OVRD_EN(phy, ch)	(1 << (2 * (phy) + (ch) + 27))
 #define   PHY_LDO_DELAY_0NS			0x0
 #define   PHY_LDO_DELAY_200NS			0x1
 #define   PHY_LDO_DELAY_600NS			0x2
-#define   PHY_LDO_SEQ_DELAY(delay, phy)		((delay) << (2*(phy)+23))
-#define   PHY_CH_POWER_DOWN_OVRD(mask, phy, ch)	((mask) << (8*(phy)+4*(ch)+11))
+#define   PHY_LDO_SEQ_DELAY(delay, phy)		((delay) << (2 * (phy) + 23))
+#define   PHY_CH_POWER_DOWN_OVRD(mask, phy, ch)	((mask) << (8 * (phy) + 4 * (ch) + 11))
 #define   PHY_CH_SU_PSR				0x1
 #define   PHY_CH_DEEP_PSR			0x7
-#define   PHY_CH_POWER_MODE(mode, phy, ch)	((mode) << (6*(phy)+3*(ch)+2))
+#define   PHY_CH_POWER_MODE(mode, phy, ch)	((mode) << (6 * (phy) + 3 * (ch) + 2))
 #define   PHY_COM_LANE_RESET_DEASSERT(phy)	(1 << (phy))
 #define DISPLAY_PHY_STATUS _MMIO(VLV_DISPLAY_BASE + 0x60104)
-#define   PHY_POWERGOOD(phy)	(((phy) == DPIO_PHY0) ? (1<<31) : (1<<30))
-#define   PHY_STATUS_CMN_LDO(phy, ch)                   (1 << (6-(6*(phy)+3*(ch))))
-#define   PHY_STATUS_SPLINE_LDO(phy, ch, spline)        (1 << (8-(6*(phy)+3*(ch)+(spline))))
+#define   PHY_POWERGOOD(phy)	(((phy) == DPIO_PHY0) ? (1 << 31) : (1 << 30))
+#define   PHY_STATUS_CMN_LDO(phy, ch)                   (1 << (6 - (6 * (phy) + 3 * (ch))))
+#define   PHY_STATUS_SPLINE_LDO(phy, ch, spline)        (1 << (8 - (6 * (phy) + 3 * (ch) + (spline))))
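/*
 * Worked example (illustrative): PHY_CH_POWER_DOWN_OVRD_EN(DPIO_PHY0, DPIO_CH1)
 * is 1 << (2 * 0 + 1 + 27) = bit 28, so each PHY/channel pair owns one
 * override-enable bit starting at bit 27 of DISPLAY_PHY_CONTROL.
 */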
 
 /*
  * The i830 generation, in LVDS mode, defines P1 as the bit number set within
@@ -3143,7 +3154,7 @@ enum i915_power_well_id {
 /* Ironlake */
 # define PLL_REF_SDVO_HDMI_MULTIPLIER_SHIFT     9
 # define PLL_REF_SDVO_HDMI_MULTIPLIER_MASK      (7 << 9)
-# define PLL_REF_SDVO_HDMI_MULTIPLIER(x)	(((x)-1) << 9)
+# define PLL_REF_SDVO_HDMI_MULTIPLIER(x)	(((x) - 1) << 9)
 # define DPLL_FPA1_P1_POST_DIV_SHIFT            0
 # define DPLL_FPA1_P1_POST_DIV_MASK             0xff
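/*
 * Worked example (illustrative): a 2x SDVO/HDMI multiplier is programmed as
 * PLL_REF_SDVO_HDMI_MULTIPLIER(2) == (2 - 1) << 9; the field stores the
 * multiplier minus one.
 */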
 
@@ -3232,10 +3243,10 @@ enum i915_power_well_id {
 #define   DPLLA_TEST_M_BYPASS		(1 << 2)
 #define   DPLLA_INPUT_BUFFER_ENABLE	(1 << 0)
 #define D_STATE		_MMIO(0x6104)
-#define  DSTATE_GFX_RESET_I830			(1<<6)
-#define  DSTATE_PLL_D3_OFF			(1<<3)
-#define  DSTATE_GFX_CLOCK_GATING		(1<<1)
-#define  DSTATE_DOT_CLOCK_GATING		(1<<0)
+#define  DSTATE_GFX_RESET_I830			(1 << 6)
+#define  DSTATE_PLL_D3_OFF			(1 << 3)
+#define  DSTATE_GFX_CLOCK_GATING		(1 << 1)
+#define  DSTATE_DOT_CLOCK_GATING		(1 << 0)
 #define DSPCLK_GATE_D	_MMIO(dev_priv->info.display_mmio_offset + 0x6200)
 # define DPUNIT_B_CLOCK_GATE_DISABLE		(1 << 30) /* 965 */
 # define VSUNIT_CLOCK_GATE_DISABLE		(1 << 29) /* 965 */
@@ -3352,7 +3363,7 @@ enum i915_power_well_id {
 #define DEUC			_MMIO(0x6214)          /* CRL only */
 
 #define FW_BLC_SELF_VLV		_MMIO(VLV_DISPLAY_BASE + 0x6500)
-#define  FW_CSPWRDWNEN		(1<<15)
+#define  FW_CSPWRDWNEN		(1 << 15)
 
 #define MI_ARB_VLV		_MMIO(VLV_DISPLAY_BASE + 0x6504)
 
@@ -3477,7 +3488,7 @@ enum i915_power_well_id {
 #define HPLLVCO_MOBILE          _MMIO(MCHBAR_MIRROR_BASE + 0xc0f)
 
 #define TSC1			_MMIO(0x11001)
-#define   TSE			(1<<0)
+#define   TSE			(1 << 0)
 #define TR1			_MMIO(0x11006)
 #define TSFS			_MMIO(0x11020)
 #define   TSFS_SLOPE_MASK	0x0000ff00
@@ -3523,23 +3534,23 @@ enum i915_power_well_id {
 #define   MEMCTL_CMD_CHVID	3
 #define   MEMCTL_CMD_VMMOFF	4
 #define   MEMCTL_CMD_VMMON	5
-#define   MEMCTL_CMD_STS	(1<<12) /* write 1 triggers command, clears
+#define   MEMCTL_CMD_STS	(1 << 12) /* write 1 triggers command, clears
 					   when command complete */
 #define   MEMCTL_FREQ_MASK	0x0f00 /* jitter, from 0-15 */
 #define   MEMCTL_FREQ_SHIFT	8
-#define   MEMCTL_SFCAVM		(1<<7)
+#define   MEMCTL_SFCAVM		(1 << 7)
 #define   MEMCTL_TGT_VID_MASK	0x007f
 #define MEMIHYST		_MMIO(0x1117c)
 #define MEMINTREN		_MMIO(0x11180) /* 16 bits */
-#define   MEMINT_RSEXIT_EN	(1<<8)
-#define   MEMINT_CX_SUPR_EN	(1<<7)
-#define   MEMINT_CONT_BUSY_EN	(1<<6)
-#define   MEMINT_AVG_BUSY_EN	(1<<5)
-#define   MEMINT_EVAL_CHG_EN	(1<<4)
-#define   MEMINT_MON_IDLE_EN	(1<<3)
-#define   MEMINT_UP_EVAL_EN	(1<<2)
-#define   MEMINT_DOWN_EVAL_EN	(1<<1)
-#define   MEMINT_SW_CMD_EN	(1<<0)
+#define   MEMINT_RSEXIT_EN	(1 << 8)
+#define   MEMINT_CX_SUPR_EN	(1 << 7)
+#define   MEMINT_CONT_BUSY_EN	(1 << 6)
+#define   MEMINT_AVG_BUSY_EN	(1 << 5)
+#define   MEMINT_EVAL_CHG_EN	(1 << 4)
+#define   MEMINT_MON_IDLE_EN	(1 << 3)
+#define   MEMINT_UP_EVAL_EN	(1 << 2)
+#define   MEMINT_DOWN_EVAL_EN	(1 << 1)
+#define   MEMINT_SW_CMD_EN	(1 << 0)
 #define MEMINTRSTR		_MMIO(0x11182) /* 16 bits */
 #define   MEM_RSEXIT_MASK	0xc000
 #define   MEM_RSEXIT_SHIFT	14
@@ -3561,26 +3572,26 @@ enum i915_power_well_id {
 #define   MEM_INT_STEER_SMI	2
 #define   MEM_INT_STEER_SCI	3
 #define MEMINTRSTS		_MMIO(0x11184)
-#define   MEMINT_RSEXIT		(1<<7)
-#define   MEMINT_CONT_BUSY	(1<<6)
-#define   MEMINT_AVG_BUSY	(1<<5)
-#define   MEMINT_EVAL_CHG	(1<<4)
-#define   MEMINT_MON_IDLE	(1<<3)
-#define   MEMINT_UP_EVAL	(1<<2)
-#define   MEMINT_DOWN_EVAL	(1<<1)
-#define   MEMINT_SW_CMD		(1<<0)
+#define   MEMINT_RSEXIT		(1 << 7)
+#define   MEMINT_CONT_BUSY	(1 << 6)
+#define   MEMINT_AVG_BUSY	(1 << 5)
+#define   MEMINT_EVAL_CHG	(1 << 4)
+#define   MEMINT_MON_IDLE	(1 << 3)
+#define   MEMINT_UP_EVAL	(1 << 2)
+#define   MEMINT_DOWN_EVAL	(1 << 1)
+#define   MEMINT_SW_CMD		(1 << 0)
 #define MEMMODECTL		_MMIO(0x11190)
-#define   MEMMODE_BOOST_EN	(1<<31)
+#define   MEMMODE_BOOST_EN	(1 << 31)
 #define   MEMMODE_BOOST_FREQ_MASK 0x0f000000 /* jitter for boost, 0-15 */
 #define   MEMMODE_BOOST_FREQ_SHIFT 24
 #define   MEMMODE_IDLE_MODE_MASK 0x00030000
 #define   MEMMODE_IDLE_MODE_SHIFT 16
 #define   MEMMODE_IDLE_MODE_EVAL 0
 #define   MEMMODE_IDLE_MODE_CONT 1
-#define   MEMMODE_HWIDLE_EN	(1<<15)
-#define   MEMMODE_SWMODE_EN	(1<<14)
-#define   MEMMODE_RCLK_GATE	(1<<13)
-#define   MEMMODE_HW_UPDATE	(1<<12)
+#define   MEMMODE_HWIDLE_EN	(1 << 15)
+#define   MEMMODE_SWMODE_EN	(1 << 14)
+#define   MEMMODE_RCLK_GATE	(1 << 13)
+#define   MEMMODE_HW_UPDATE	(1 << 12)
 #define   MEMMODE_FSTART_MASK	0x00000f00 /* starting jitter, 0-15 */
 #define   MEMMODE_FSTART_SHIFT	8
 #define   MEMMODE_FMAX_MASK	0x000000f0 /* max jitter, 0-15 */
@@ -3594,8 +3605,8 @@ enum i915_power_well_id {
 #define   SWMEMCMD_TARVID	(3 << 13)
 #define   SWMEMCMD_VRM_OFF	(4 << 13)
 #define   SWMEMCMD_VRM_ON	(5 << 13)
-#define   CMDSTS		(1<<12)
-#define   SFCAVM		(1<<11)
+#define   CMDSTS		(1 << 12)
+#define   SFCAVM		(1 << 11)
 #define   SWFREQ_MASK		0x0380 /* P0-7 */
 #define   SWFREQ_SHIFT		7
 #define   TARVID_MASK		0x001f
@@ -3604,49 +3615,49 @@ enum i915_power_well_id {
 #define RCUPEI			_MMIO(0x111b0)
 #define RCDNEI			_MMIO(0x111b4)
 #define RSTDBYCTL		_MMIO(0x111b8)
-#define   RS1EN			(1<<31)
-#define   RS2EN			(1<<30)
-#define   RS3EN			(1<<29)
-#define   D3RS3EN		(1<<28) /* Display D3 imlies RS3 */
-#define   SWPROMORSX		(1<<27) /* RSx promotion timers ignored */
-#define   RCWAKERW		(1<<26) /* Resetwarn from PCH causes wakeup */
-#define   DPRSLPVREN		(1<<25) /* Fast voltage ramp enable */
-#define   GFXTGHYST		(1<<24) /* Hysteresis to allow trunk gating */
-#define   RCX_SW_EXIT		(1<<23) /* Leave RSx and prevent re-entry */
-#define   RSX_STATUS_MASK	(7<<20)
-#define   RSX_STATUS_ON		(0<<20)
-#define   RSX_STATUS_RC1	(1<<20)
-#define   RSX_STATUS_RC1E	(2<<20)
-#define   RSX_STATUS_RS1	(3<<20)
-#define   RSX_STATUS_RS2	(4<<20) /* aka rc6 */
-#define   RSX_STATUS_RSVD	(5<<20) /* deep rc6 unsupported on ilk */
-#define   RSX_STATUS_RS3	(6<<20) /* rs3 unsupported on ilk */
-#define   RSX_STATUS_RSVD2	(7<<20)
-#define   UWRCRSXE		(1<<19) /* wake counter limit prevents rsx */
-#define   RSCRP			(1<<18) /* rs requests control on rs1/2 reqs */
-#define   JRSC			(1<<17) /* rsx coupled to cpu c-state */
-#define   RS2INC0		(1<<16) /* allow rs2 in cpu c0 */
-#define   RS1CONTSAV_MASK	(3<<14)
-#define   RS1CONTSAV_NO_RS1	(0<<14) /* rs1 doesn't save/restore context */
-#define   RS1CONTSAV_RSVD	(1<<14)
-#define   RS1CONTSAV_SAVE_RS1	(2<<14) /* rs1 saves context */
-#define   RS1CONTSAV_FULL_RS1	(3<<14) /* rs1 saves and restores context */
-#define   NORMSLEXLAT_MASK	(3<<12)
-#define   SLOW_RS123		(0<<12)
-#define   SLOW_RS23		(1<<12)
-#define   SLOW_RS3		(2<<12)
-#define   NORMAL_RS123		(3<<12)
-#define   RCMODE_TIMEOUT	(1<<11) /* 0 is eval interval method */
-#define   IMPROMOEN		(1<<10) /* promo is immediate or delayed until next idle interval (only for timeout method above) */
-#define   RCENTSYNC		(1<<9) /* rs coupled to cpu c-state (3/6/7) */
-#define   STATELOCK		(1<<7) /* locked to rs_cstate if 0 */
-#define   RS_CSTATE_MASK	(3<<4)
-#define   RS_CSTATE_C367_RS1	(0<<4)
-#define   RS_CSTATE_C36_RS1_C7_RS2 (1<<4)
-#define   RS_CSTATE_RSVD	(2<<4)
-#define   RS_CSTATE_C367_RS2	(3<<4)
-#define   REDSAVES		(1<<3) /* no context save if was idle during rs0 */
-#define   REDRESTORES		(1<<2) /* no restore if was idle during rs0 */
+#define   RS1EN			(1 << 31)
+#define   RS2EN			(1 << 30)
+#define   RS3EN			(1 << 29)
+#define   D3RS3EN		(1 << 28) /* Display D3 implies RS3 */
+#define   SWPROMORSX		(1 << 27) /* RSx promotion timers ignored */
+#define   RCWAKERW		(1 << 26) /* Resetwarn from PCH causes wakeup */
+#define   DPRSLPVREN		(1 << 25) /* Fast voltage ramp enable */
+#define   GFXTGHYST		(1 << 24) /* Hysteresis to allow trunk gating */
+#define   RCX_SW_EXIT		(1 << 23) /* Leave RSx and prevent re-entry */
+#define   RSX_STATUS_MASK	(7 << 20)
+#define   RSX_STATUS_ON		(0 << 20)
+#define   RSX_STATUS_RC1	(1 << 20)
+#define   RSX_STATUS_RC1E	(2 << 20)
+#define   RSX_STATUS_RS1	(3 << 20)
+#define   RSX_STATUS_RS2	(4 << 20) /* aka rc6 */
+#define   RSX_STATUS_RSVD	(5 << 20) /* deep rc6 unsupported on ilk */
+#define   RSX_STATUS_RS3	(6 << 20) /* rs3 unsupported on ilk */
+#define   RSX_STATUS_RSVD2	(7 << 20)
+#define   UWRCRSXE		(1 << 19) /* wake counter limit prevents rsx */
+#define   RSCRP			(1 << 18) /* rs requests control on rs1/2 reqs */
+#define   JRSC			(1 << 17) /* rsx coupled to cpu c-state */
+#define   RS2INC0		(1 << 16) /* allow rs2 in cpu c0 */
+#define   RS1CONTSAV_MASK	(3 << 14)
+#define   RS1CONTSAV_NO_RS1	(0 << 14) /* rs1 doesn't save/restore context */
+#define   RS1CONTSAV_RSVD	(1 << 14)
+#define   RS1CONTSAV_SAVE_RS1	(2 << 14) /* rs1 saves context */
+#define   RS1CONTSAV_FULL_RS1	(3 << 14) /* rs1 saves and restores context */
+#define   NORMSLEXLAT_MASK	(3 << 12)
+#define   SLOW_RS123		(0 << 12)
+#define   SLOW_RS23		(1 << 12)
+#define   SLOW_RS3		(2 << 12)
+#define   NORMAL_RS123		(3 << 12)
+#define   RCMODE_TIMEOUT	(1 << 11) /* 0 is eval interval method */
+#define   IMPROMOEN		(1 << 10) /* promo is immediate or delayed until next idle interval (only for timeout method above) */
+#define   RCENTSYNC		(1 << 9) /* rs coupled to cpu c-state (3/6/7) */
+#define   STATELOCK		(1 << 7) /* locked to rs_cstate if 0 */
+#define   RS_CSTATE_MASK	(3 << 4)
+#define   RS_CSTATE_C367_RS1	(0 << 4)
+#define   RS_CSTATE_C36_RS1_C7_RS2 (1 << 4)
+#define   RS_CSTATE_RSVD	(2 << 4)
+#define   RS_CSTATE_C367_RS2	(3 << 4)
+#define   REDSAVES		(1 << 3) /* no context save if was idle during rs0 */
+#define   REDRESTORES		(1 << 2) /* no restore if was idle during rs0 */
 #define VIDCTL			_MMIO(0x111c0)
 #define VIDSTS			_MMIO(0x111c8)
 #define VIDSTART		_MMIO(0x111cc) /* 8 bits */
@@ -3655,7 +3666,7 @@ enum i915_power_well_id {
 #define   MEMSTAT_VID_SHIFT	8
 #define   MEMSTAT_PSTATE_MASK	0x00f8
 #define   MEMSTAT_PSTATE_SHIFT  3
-#define   MEMSTAT_MON_ACTV	(1<<2)
+#define   MEMSTAT_MON_ACTV	(1 << 2)
 #define   MEMSTAT_SRC_CTL_MASK	0x0003
 #define   MEMSTAT_SRC_CTL_CORE	0
 #define   MEMSTAT_SRC_CTL_TRB	1
@@ -3664,7 +3675,7 @@ enum i915_power_well_id {
 #define RCPREVBSYTUPAVG		_MMIO(0x113b8)
 #define RCPREVBSYTDNAVG		_MMIO(0x113bc)
 #define PMMISC			_MMIO(0x11214)
-#define   MCPPCE_EN		(1<<0) /* enable PM_MSG from PCH->MPC */
+#define   MCPPCE_EN		(1 << 0) /* enable PM_MSG from PCH->MPC */
 #define SDEW			_MMIO(0x1124c)
 #define CSIEW0			_MMIO(0x11250)
 #define CSIEW1			_MMIO(0x11254)
@@ -3681,8 +3692,8 @@ enum i915_power_well_id {
 #define RPPREVBSYTUPAVG		_MMIO(0x113b8)
 #define RPPREVBSYTDNAVG		_MMIO(0x113bc)
 #define ECR			_MMIO(0x11600)
-#define   ECR_GPFE		(1<<31)
-#define   ECR_IMONE		(1<<30)
+#define   ECR_GPFE		(1 << 31)
+#define   ECR_IMONE		(1 << 30)
 #define   ECR_CAP_MASK		0x0000001f /* Event range, 0-31 */
 #define OGW0			_MMIO(0x11608)
 #define OGW1			_MMIO(0x1160c)
@@ -3789,11 +3800,11 @@ enum {
 	FAULT_AND_CONTINUE /* Unsupported */
 };
 
-#define GEN8_CTX_VALID (1<<0)
-#define GEN8_CTX_FORCE_PD_RESTORE (1<<1)
-#define GEN8_CTX_FORCE_RESTORE (1<<2)
-#define GEN8_CTX_L3LLC_COHERENT (1<<5)
-#define GEN8_CTX_PRIVILEGE (1<<8)
+#define GEN8_CTX_VALID (1 << 0)
+#define GEN8_CTX_FORCE_PD_RESTORE (1 << 1)
+#define GEN8_CTX_FORCE_RESTORE (1 << 2)
+#define GEN8_CTX_L3LLC_COHERENT (1 << 5)
+#define GEN8_CTX_PRIVILEGE (1 << 8)
 #define GEN8_CTX_ADDRESSING_MODE_SHIFT 3
 
 #define GEN8_CTX_ID_SHIFT 32
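
The GEN8_CTX_* bits above are flag bits in the low dword of an execlists context descriptor, with the addressing mode just above them (shift 3) and the context ID filling the upper dword (shift 32). A minimal userspace sketch of how such a 64-bit descriptor is assembled from these definitions; the addressing-mode value and context ID below are made up for illustration, and the driver's real descriptor also folds in the context's GTT address:

#include <stdint.h>
#include <stdio.h>

#define GEN8_CTX_VALID			(1 << 0)
#define GEN8_CTX_L3LLC_COHERENT		(1 << 5)
#define GEN8_CTX_PRIVILEGE		(1 << 8)
#define GEN8_CTX_ADDRESSING_MODE_SHIFT	3
#define GEN8_CTX_ID_SHIFT		32

#define FAKE_ADDRESSING_MODE	3	/* hypothetical mode value, illustration only */
#define FAKE_CTX_ID		0x2a	/* hypothetical context ID */

int main(void)
{
	uint64_t desc = GEN8_CTX_VALID | GEN8_CTX_L3LLC_COHERENT | GEN8_CTX_PRIVILEGE;

	/* the addressing mode sits just above the flag bits */
	desc |= (uint64_t)FAKE_ADDRESSING_MODE << GEN8_CTX_ADDRESSING_MODE_SHIFT;
	/* the context ID occupies the upper 32 bits */
	desc |= (uint64_t)FAKE_CTX_ID << GEN8_CTX_ID_SHIFT;

	printf("descriptor = 0x%016llx\n", (unsigned long long)desc);
	return 0;
}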
@@ -3815,7 +3826,7 @@ enum {
 
 #define OVADD			_MMIO(0x30000)
 #define DOVSTA			_MMIO(0x30008)
-#define OC_BUF			(0x3<<20)
+#define OC_BUF			(0x3 << 20)
 #define OGAMC5			_MMIO(0x30010)
 #define OGAMC4			_MMIO(0x30014)
 #define OGAMC3			_MMIO(0x30018)
@@ -3983,64 +3994,64 @@ enum {
 /* VLV eDP PSR registers */
 #define _PSRCTLA				(VLV_DISPLAY_BASE + 0x60090)
 #define _PSRCTLB				(VLV_DISPLAY_BASE + 0x61090)
-#define  VLV_EDP_PSR_ENABLE			(1<<0)
-#define  VLV_EDP_PSR_RESET			(1<<1)
-#define  VLV_EDP_PSR_MODE_MASK			(7<<2)
-#define  VLV_EDP_PSR_MODE_HW_TIMER		(1<<3)
-#define  VLV_EDP_PSR_MODE_SW_TIMER		(1<<2)
-#define  VLV_EDP_PSR_SINGLE_FRAME_UPDATE	(1<<7)
-#define  VLV_EDP_PSR_ACTIVE_ENTRY		(1<<8)
-#define  VLV_EDP_PSR_SRC_TRANSMITTER_STATE	(1<<9)
-#define  VLV_EDP_PSR_DBL_FRAME			(1<<10)
-#define  VLV_EDP_PSR_FRAME_COUNT_MASK		(0xff<<16)
+#define  VLV_EDP_PSR_ENABLE			(1 << 0)
+#define  VLV_EDP_PSR_RESET			(1 << 1)
+#define  VLV_EDP_PSR_MODE_MASK			(7 << 2)
+#define  VLV_EDP_PSR_MODE_HW_TIMER		(1 << 3)
+#define  VLV_EDP_PSR_MODE_SW_TIMER		(1 << 2)
+#define  VLV_EDP_PSR_SINGLE_FRAME_UPDATE	(1 << 7)
+#define  VLV_EDP_PSR_ACTIVE_ENTRY		(1 << 8)
+#define  VLV_EDP_PSR_SRC_TRANSMITTER_STATE	(1 << 9)
+#define  VLV_EDP_PSR_DBL_FRAME			(1 << 10)
+#define  VLV_EDP_PSR_FRAME_COUNT_MASK		(0xff << 16)
 #define  VLV_EDP_PSR_IDLE_FRAME_SHIFT		16
 #define VLV_PSRCTL(pipe)	_MMIO_PIPE(pipe, _PSRCTLA, _PSRCTLB)
 
 #define _VSCSDPA			(VLV_DISPLAY_BASE + 0x600a0)
 #define _VSCSDPB			(VLV_DISPLAY_BASE + 0x610a0)
-#define  VLV_EDP_PSR_SDP_FREQ_MASK	(3<<30)
-#define  VLV_EDP_PSR_SDP_FREQ_ONCE	(1<<31)
-#define  VLV_EDP_PSR_SDP_FREQ_EVFRAME	(1<<30)
+#define  VLV_EDP_PSR_SDP_FREQ_MASK	(3 << 30)
+#define  VLV_EDP_PSR_SDP_FREQ_ONCE	(1 << 31)
+#define  VLV_EDP_PSR_SDP_FREQ_EVFRAME	(1 << 30)
 #define VLV_VSCSDP(pipe)	_MMIO_PIPE(pipe, _VSCSDPA, _VSCSDPB)
 
 #define _PSRSTATA			(VLV_DISPLAY_BASE + 0x60094)
 #define _PSRSTATB			(VLV_DISPLAY_BASE + 0x61094)
-#define  VLV_EDP_PSR_LAST_STATE_MASK	(7<<3)
+#define  VLV_EDP_PSR_LAST_STATE_MASK	(7 << 3)
 #define  VLV_EDP_PSR_CURR_STATE_MASK	7
-#define  VLV_EDP_PSR_DISABLED		(0<<0)
-#define  VLV_EDP_PSR_INACTIVE		(1<<0)
-#define  VLV_EDP_PSR_IN_TRANS_TO_ACTIVE	(2<<0)
-#define  VLV_EDP_PSR_ACTIVE_NORFB_UP	(3<<0)
-#define  VLV_EDP_PSR_ACTIVE_SF_UPDATE	(4<<0)
-#define  VLV_EDP_PSR_EXIT		(5<<0)
-#define  VLV_EDP_PSR_IN_TRANS		(1<<7)
+#define  VLV_EDP_PSR_DISABLED		(0 << 0)
+#define  VLV_EDP_PSR_INACTIVE		(1 << 0)
+#define  VLV_EDP_PSR_IN_TRANS_TO_ACTIVE	(2 << 0)
+#define  VLV_EDP_PSR_ACTIVE_NORFB_UP	(3 << 0)
+#define  VLV_EDP_PSR_ACTIVE_SF_UPDATE	(4 << 0)
+#define  VLV_EDP_PSR_EXIT		(5 << 0)
+#define  VLV_EDP_PSR_IN_TRANS		(1 << 7)
 #define VLV_PSRSTAT(pipe)	_MMIO_PIPE(pipe, _PSRSTATA, _PSRSTATB)
 
 /* HSW+ eDP PSR registers */
 #define HSW_EDP_PSR_BASE	0x64800
 #define BDW_EDP_PSR_BASE	0x6f800
 #define EDP_PSR_CTL				_MMIO(dev_priv->psr_mmio_base + 0)
-#define   EDP_PSR_ENABLE			(1<<31)
-#define   BDW_PSR_SINGLE_FRAME			(1<<30)
-#define   EDP_PSR_RESTORE_PSR_ACTIVE_CTX_MASK	(1<<29) /* SW can't modify */
-#define   EDP_PSR_LINK_STANDBY			(1<<27)
-#define   EDP_PSR_MIN_LINK_ENTRY_TIME_MASK	(3<<25)
-#define   EDP_PSR_MIN_LINK_ENTRY_TIME_8_LINES	(0<<25)
-#define   EDP_PSR_MIN_LINK_ENTRY_TIME_4_LINES	(1<<25)
-#define   EDP_PSR_MIN_LINK_ENTRY_TIME_2_LINES	(2<<25)
-#define   EDP_PSR_MIN_LINK_ENTRY_TIME_0_LINES	(3<<25)
+#define   EDP_PSR_ENABLE			(1 << 31)
+#define   BDW_PSR_SINGLE_FRAME			(1 << 30)
+#define   EDP_PSR_RESTORE_PSR_ACTIVE_CTX_MASK	(1 << 29) /* SW can't modify */
+#define   EDP_PSR_LINK_STANDBY			(1 << 27)
+#define   EDP_PSR_MIN_LINK_ENTRY_TIME_MASK	(3 << 25)
+#define   EDP_PSR_MIN_LINK_ENTRY_TIME_8_LINES	(0 << 25)
+#define   EDP_PSR_MIN_LINK_ENTRY_TIME_4_LINES	(1 << 25)
+#define   EDP_PSR_MIN_LINK_ENTRY_TIME_2_LINES	(2 << 25)
+#define   EDP_PSR_MIN_LINK_ENTRY_TIME_0_LINES	(3 << 25)
 #define   EDP_PSR_MAX_SLEEP_TIME_SHIFT		20
-#define   EDP_PSR_SKIP_AUX_EXIT			(1<<12)
-#define   EDP_PSR_TP1_TP2_SEL			(0<<11)
-#define   EDP_PSR_TP1_TP3_SEL			(1<<11)
-#define   EDP_PSR_TP2_TP3_TIME_500us		(0<<8)
-#define   EDP_PSR_TP2_TP3_TIME_100us		(1<<8)
-#define   EDP_PSR_TP2_TP3_TIME_2500us		(2<<8)
-#define   EDP_PSR_TP2_TP3_TIME_0us		(3<<8)
-#define   EDP_PSR_TP1_TIME_500us		(0<<4)
-#define   EDP_PSR_TP1_TIME_100us		(1<<4)
-#define   EDP_PSR_TP1_TIME_2500us		(2<<4)
-#define   EDP_PSR_TP1_TIME_0us			(3<<4)
+#define   EDP_PSR_SKIP_AUX_EXIT			(1 << 12)
+#define   EDP_PSR_TP1_TP2_SEL			(0 << 11)
+#define   EDP_PSR_TP1_TP3_SEL			(1 << 11)
+#define   EDP_PSR_TP2_TP3_TIME_500us		(0 << 8)
+#define   EDP_PSR_TP2_TP3_TIME_100us		(1 << 8)
+#define   EDP_PSR_TP2_TP3_TIME_2500us		(2 << 8)
+#define   EDP_PSR_TP2_TP3_TIME_0us		(3 << 8)
+#define   EDP_PSR_TP1_TIME_500us		(0 << 4)
+#define   EDP_PSR_TP1_TIME_100us		(1 << 4)
+#define   EDP_PSR_TP1_TIME_2500us		(2 << 4)
+#define   EDP_PSR_TP1_TIME_0us			(3 << 4)
 #define   EDP_PSR_IDLE_FRAME_SHIFT		0
 
 /* Bspec claims those aren't shifted but stay at 0x64800 */
@@ -4060,55 +4071,55 @@ enum {
 #define EDP_PSR_AUX_DATA(i)			_MMIO(dev_priv->psr_mmio_base + 0x14 + (i) * 4) /* 5 registers */
 
 #define EDP_PSR_STATUS				_MMIO(dev_priv->psr_mmio_base + 0x40)
-#define   EDP_PSR_STATUS_STATE_MASK		(7<<29)
-#define   EDP_PSR_STATUS_STATE_IDLE		(0<<29)
-#define   EDP_PSR_STATUS_STATE_SRDONACK		(1<<29)
-#define   EDP_PSR_STATUS_STATE_SRDENT		(2<<29)
-#define   EDP_PSR_STATUS_STATE_BUFOFF		(3<<29)
-#define   EDP_PSR_STATUS_STATE_BUFON		(4<<29)
-#define   EDP_PSR_STATUS_STATE_AUXACK		(5<<29)
-#define   EDP_PSR_STATUS_STATE_SRDOFFACK	(6<<29)
-#define   EDP_PSR_STATUS_LINK_MASK		(3<<26)
-#define   EDP_PSR_STATUS_LINK_FULL_OFF		(0<<26)
-#define   EDP_PSR_STATUS_LINK_FULL_ON		(1<<26)
-#define   EDP_PSR_STATUS_LINK_STANDBY		(2<<26)
+#define   EDP_PSR_STATUS_STATE_MASK		(7 << 29)
+#define   EDP_PSR_STATUS_STATE_IDLE		(0 << 29)
+#define   EDP_PSR_STATUS_STATE_SRDONACK		(1 << 29)
+#define   EDP_PSR_STATUS_STATE_SRDENT		(2 << 29)
+#define   EDP_PSR_STATUS_STATE_BUFOFF		(3 << 29)
+#define   EDP_PSR_STATUS_STATE_BUFON		(4 << 29)
+#define   EDP_PSR_STATUS_STATE_AUXACK		(5 << 29)
+#define   EDP_PSR_STATUS_STATE_SRDOFFACK	(6 << 29)
+#define   EDP_PSR_STATUS_LINK_MASK		(3 << 26)
+#define   EDP_PSR_STATUS_LINK_FULL_OFF		(0 << 26)
+#define   EDP_PSR_STATUS_LINK_FULL_ON		(1 << 26)
+#define   EDP_PSR_STATUS_LINK_STANDBY		(2 << 26)
 #define   EDP_PSR_STATUS_MAX_SLEEP_TIMER_SHIFT	20
 #define   EDP_PSR_STATUS_MAX_SLEEP_TIMER_MASK	0x1f
 #define   EDP_PSR_STATUS_COUNT_SHIFT		16
 #define   EDP_PSR_STATUS_COUNT_MASK		0xf
-#define   EDP_PSR_STATUS_AUX_ERROR		(1<<15)
-#define   EDP_PSR_STATUS_AUX_SENDING		(1<<12)
-#define   EDP_PSR_STATUS_SENDING_IDLE		(1<<9)
-#define   EDP_PSR_STATUS_SENDING_TP2_TP3	(1<<8)
-#define   EDP_PSR_STATUS_SENDING_TP1		(1<<4)
+#define   EDP_PSR_STATUS_AUX_ERROR		(1 << 15)
+#define   EDP_PSR_STATUS_AUX_SENDING		(1 << 12)
+#define   EDP_PSR_STATUS_SENDING_IDLE		(1 << 9)
+#define   EDP_PSR_STATUS_SENDING_TP2_TP3	(1 << 8)
+#define   EDP_PSR_STATUS_SENDING_TP1		(1 << 4)
 #define   EDP_PSR_STATUS_IDLE_MASK		0xf
 
 #define EDP_PSR_PERF_CNT		_MMIO(dev_priv->psr_mmio_base + 0x44)
 #define   EDP_PSR_PERF_CNT_MASK		0xffffff
 
 #define EDP_PSR_DEBUG				_MMIO(dev_priv->psr_mmio_base + 0x60) /* PSR_MASK on SKL+ */
-#define   EDP_PSR_DEBUG_MASK_MAX_SLEEP         (1<<28)
-#define   EDP_PSR_DEBUG_MASK_LPSP              (1<<27)
-#define   EDP_PSR_DEBUG_MASK_MEMUP             (1<<26)
-#define   EDP_PSR_DEBUG_MASK_HPD               (1<<25)
-#define   EDP_PSR_DEBUG_MASK_DISP_REG_WRITE    (1<<16)
-#define   EDP_PSR_DEBUG_EXIT_ON_PIXEL_UNDERRUN (1<<15) /* SKL+ */
+#define   EDP_PSR_DEBUG_MASK_MAX_SLEEP         (1 << 28)
+#define   EDP_PSR_DEBUG_MASK_LPSP              (1 << 27)
+#define   EDP_PSR_DEBUG_MASK_MEMUP             (1 << 26)
+#define   EDP_PSR_DEBUG_MASK_HPD               (1 << 25)
+#define   EDP_PSR_DEBUG_MASK_DISP_REG_WRITE    (1 << 16)
+#define   EDP_PSR_DEBUG_EXIT_ON_PIXEL_UNDERRUN (1 << 15) /* SKL+ */
 
 #define EDP_PSR2_CTL			_MMIO(0x6f900)
-#define   EDP_PSR2_ENABLE		(1<<31)
-#define   EDP_SU_TRACK_ENABLE		(1<<30)
-#define   EDP_Y_COORDINATE_VALID	(1<<26) /* GLK and CNL+ */
-#define   EDP_Y_COORDINATE_ENABLE	(1<<25) /* GLK and CNL+ */
-#define   EDP_MAX_SU_DISABLE_TIME(t)	((t)<<20)
-#define   EDP_MAX_SU_DISABLE_TIME_MASK	(0x1f<<20)
-#define   EDP_PSR2_TP2_TIME_500us	(0<<8)
-#define   EDP_PSR2_TP2_TIME_100us	(1<<8)
-#define   EDP_PSR2_TP2_TIME_2500us	(2<<8)
-#define   EDP_PSR2_TP2_TIME_50us	(3<<8)
-#define   EDP_PSR2_TP2_TIME_MASK	(3<<8)
+#define   EDP_PSR2_ENABLE		(1 << 31)
+#define   EDP_SU_TRACK_ENABLE		(1 << 30)
+#define   EDP_Y_COORDINATE_VALID	(1 << 26) /* GLK and CNL+ */
+#define   EDP_Y_COORDINATE_ENABLE	(1 << 25) /* GLK and CNL+ */
+#define   EDP_MAX_SU_DISABLE_TIME(t)	((t) << 20)
+#define   EDP_MAX_SU_DISABLE_TIME_MASK	(0x1f << 20)
+#define   EDP_PSR2_TP2_TIME_500us	(0 << 8)
+#define   EDP_PSR2_TP2_TIME_100us	(1 << 8)
+#define   EDP_PSR2_TP2_TIME_2500us	(2 << 8)
+#define   EDP_PSR2_TP2_TIME_50us	(3 << 8)
+#define   EDP_PSR2_TP2_TIME_MASK	(3 << 8)
 #define   EDP_PSR2_FRAME_BEFORE_SU_SHIFT 4
-#define   EDP_PSR2_FRAME_BEFORE_SU_MASK	(0xf<<4)
-#define   EDP_PSR2_FRAME_BEFORE_SU(a)	((a)<<4)
+#define   EDP_PSR2_FRAME_BEFORE_SU_MASK	(0xf << 4)
+#define   EDP_PSR2_FRAME_BEFORE_SU(a)	((a) << 4)
 #define   EDP_PSR2_IDLE_FRAME_MASK	0xf
 #define   EDP_PSR2_IDLE_FRAME_SHIFT	0
 
@@ -4136,7 +4147,7 @@ enum {
 #define  PSR_EVENT_PSR_DISABLE			(1 << 0)
 
 #define EDP_PSR2_STATUS			_MMIO(0x6f940)
-#define EDP_PSR2_STATUS_STATE_MASK     (0xf<<28)
+#define EDP_PSR2_STATUS_STATE_MASK     (0xf << 28)
 #define EDP_PSR2_STATUS_STATE_SHIFT    28
 
 /* VGA port control */
@@ -4144,48 +4155,48 @@ enum {
 #define PCH_ADPA                _MMIO(0xe1100)
 #define VLV_ADPA		_MMIO(VLV_DISPLAY_BASE + 0x61100)
 
-#define   ADPA_DAC_ENABLE	(1<<31)
+#define   ADPA_DAC_ENABLE	(1 << 31)
 #define   ADPA_DAC_DISABLE	0
 #define   ADPA_PIPE_SEL_SHIFT		30
-#define   ADPA_PIPE_SEL_MASK		(1<<30)
+#define   ADPA_PIPE_SEL_MASK		(1 << 30)
 #define   ADPA_PIPE_SEL(pipe)		((pipe) << 30)
 #define   ADPA_PIPE_SEL_SHIFT_CPT	29
-#define   ADPA_PIPE_SEL_MASK_CPT	(3<<29)
+#define   ADPA_PIPE_SEL_MASK_CPT	(3 << 29)
 #define   ADPA_PIPE_SEL_CPT(pipe)	((pipe) << 29)
 #define   ADPA_CRT_HOTPLUG_MASK  0x03ff0000 /* bit 25-16 */
-#define   ADPA_CRT_HOTPLUG_MONITOR_NONE  (0<<24)
-#define   ADPA_CRT_HOTPLUG_MONITOR_MASK  (3<<24)
-#define   ADPA_CRT_HOTPLUG_MONITOR_COLOR (3<<24)
-#define   ADPA_CRT_HOTPLUG_MONITOR_MONO  (2<<24)
-#define   ADPA_CRT_HOTPLUG_ENABLE        (1<<23)
-#define   ADPA_CRT_HOTPLUG_PERIOD_64     (0<<22)
-#define   ADPA_CRT_HOTPLUG_PERIOD_128    (1<<22)
-#define   ADPA_CRT_HOTPLUG_WARMUP_5MS    (0<<21)
-#define   ADPA_CRT_HOTPLUG_WARMUP_10MS   (1<<21)
-#define   ADPA_CRT_HOTPLUG_SAMPLE_2S     (0<<20)
-#define   ADPA_CRT_HOTPLUG_SAMPLE_4S     (1<<20)
-#define   ADPA_CRT_HOTPLUG_VOLTAGE_40    (0<<18)
-#define   ADPA_CRT_HOTPLUG_VOLTAGE_50    (1<<18)
-#define   ADPA_CRT_HOTPLUG_VOLTAGE_60    (2<<18)
-#define   ADPA_CRT_HOTPLUG_VOLTAGE_70    (3<<18)
-#define   ADPA_CRT_HOTPLUG_VOLREF_325MV  (0<<17)
-#define   ADPA_CRT_HOTPLUG_VOLREF_475MV  (1<<17)
-#define   ADPA_CRT_HOTPLUG_FORCE_TRIGGER (1<<16)
-#define   ADPA_USE_VGA_HVPOLARITY (1<<15)
+#define   ADPA_CRT_HOTPLUG_MONITOR_NONE  (0 << 24)
+#define   ADPA_CRT_HOTPLUG_MONITOR_MASK  (3 << 24)
+#define   ADPA_CRT_HOTPLUG_MONITOR_COLOR (3 << 24)
+#define   ADPA_CRT_HOTPLUG_MONITOR_MONO  (2 << 24)
+#define   ADPA_CRT_HOTPLUG_ENABLE        (1 << 23)
+#define   ADPA_CRT_HOTPLUG_PERIOD_64     (0 << 22)
+#define   ADPA_CRT_HOTPLUG_PERIOD_128    (1 << 22)
+#define   ADPA_CRT_HOTPLUG_WARMUP_5MS    (0 << 21)
+#define   ADPA_CRT_HOTPLUG_WARMUP_10MS   (1 << 21)
+#define   ADPA_CRT_HOTPLUG_SAMPLE_2S     (0 << 20)
+#define   ADPA_CRT_HOTPLUG_SAMPLE_4S     (1 << 20)
+#define   ADPA_CRT_HOTPLUG_VOLTAGE_40    (0 << 18)
+#define   ADPA_CRT_HOTPLUG_VOLTAGE_50    (1 << 18)
+#define   ADPA_CRT_HOTPLUG_VOLTAGE_60    (2 << 18)
+#define   ADPA_CRT_HOTPLUG_VOLTAGE_70    (3 << 18)
+#define   ADPA_CRT_HOTPLUG_VOLREF_325MV  (0 << 17)
+#define   ADPA_CRT_HOTPLUG_VOLREF_475MV  (1 << 17)
+#define   ADPA_CRT_HOTPLUG_FORCE_TRIGGER (1 << 16)
+#define   ADPA_USE_VGA_HVPOLARITY (1 << 15)
 #define   ADPA_SETS_HVPOLARITY	0
-#define   ADPA_VSYNC_CNTL_DISABLE (1<<10)
+#define   ADPA_VSYNC_CNTL_DISABLE (1 << 10)
 #define   ADPA_VSYNC_CNTL_ENABLE 0
-#define   ADPA_HSYNC_CNTL_DISABLE (1<<11)
+#define   ADPA_HSYNC_CNTL_DISABLE (1 << 11)
 #define   ADPA_HSYNC_CNTL_ENABLE 0
-#define   ADPA_VSYNC_ACTIVE_HIGH (1<<4)
+#define   ADPA_VSYNC_ACTIVE_HIGH (1 << 4)
 #define   ADPA_VSYNC_ACTIVE_LOW	0
-#define   ADPA_HSYNC_ACTIVE_HIGH (1<<3)
+#define   ADPA_HSYNC_ACTIVE_HIGH (1 << 3)
 #define   ADPA_HSYNC_ACTIVE_LOW	0
-#define   ADPA_DPMS_MASK	(~(3<<10))
-#define   ADPA_DPMS_ON		(0<<10)
-#define   ADPA_DPMS_SUSPEND	(1<<10)
-#define   ADPA_DPMS_STANDBY	(2<<10)
-#define   ADPA_DPMS_OFF		(3<<10)
+#define   ADPA_DPMS_MASK	(~(3 << 10))
+#define   ADPA_DPMS_ON		(0 << 10)
+#define   ADPA_DPMS_SUSPEND	(1 << 10)
+#define   ADPA_DPMS_STANDBY	(2 << 10)
+#define   ADPA_DPMS_OFF		(3 << 10)
 
 
 /* Hotplug control (945+ only) */
@@ -4394,7 +4405,7 @@ enum {
 #define   DVO_BLANK_ACTIVE_HIGH		(1 << 2)
 #define   DVO_OUTPUT_CSTATE_PIXELS	(1 << 1)	/* SDG only */
 #define   DVO_OUTPUT_SOURCE_SIZE_PIXELS	(1 << 0)	/* SDG only */
-#define   DVO_PRESERVE_MASK		(0x7<<24)
+#define   DVO_PRESERVE_MASK		(0x7 << 24)
 #define DVOA_SRCDIM		_MMIO(0x61124)
 #define DVOB_SRCDIM		_MMIO(0x61144)
 #define DVOC_SRCDIM		_MMIO(0x61164)
@@ -5310,6 +5321,13 @@ enum {
 #define _DPD_AUX_CH_DATA4	(dev_priv->info.display_mmio_offset + 0x64320)
 #define _DPD_AUX_CH_DATA5	(dev_priv->info.display_mmio_offset + 0x64324)
 
+#define _DPE_AUX_CH_CTL		(dev_priv->info.display_mmio_offset + 0x64410)
+#define _DPE_AUX_CH_DATA1	(dev_priv->info.display_mmio_offset + 0x64414)
+#define _DPE_AUX_CH_DATA2	(dev_priv->info.display_mmio_offset + 0x64418)
+#define _DPE_AUX_CH_DATA3	(dev_priv->info.display_mmio_offset + 0x6441c)
+#define _DPE_AUX_CH_DATA4	(dev_priv->info.display_mmio_offset + 0x64420)
+#define _DPE_AUX_CH_DATA5	(dev_priv->info.display_mmio_offset + 0x64424)
+
 #define _DPF_AUX_CH_CTL		(dev_priv->info.display_mmio_offset + 0x64510)
 #define _DPF_AUX_CH_DATA1	(dev_priv->info.display_mmio_offset + 0x64514)
 #define _DPF_AUX_CH_DATA2	(dev_priv->info.display_mmio_offset + 0x64518)
@@ -5365,7 +5383,7 @@ enum {
 #define _PIPEB_DATA_M_G4X	0x71050
 
 /* Transfer unit size for display port - 1, default is 0x3f (for TU size 64) */
-#define  TU_SIZE(x)             (((x)-1) << 25) /* default size 64 */
+#define  TU_SIZE(x)             (((x) - 1) << 25) /* default size 64 */
 #define  TU_SIZE_SHIFT		25
 #define  TU_SIZE_MASK           (0x3f << 25)
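
TU_SIZE() stores size-minus-one, which is why the comment calls 0x3f the default encoding for TU size 64. A quick standalone check of the encoding and its inverse, using only the definitions above:

#include <assert.h>

#define TU_SIZE(x)	(((x) - 1) << 25)
#define TU_SIZE_SHIFT	25
#define TU_SIZE_MASK	(0x3f << 25)

int main(void)
{
	/* TU size 64 encodes as the all-ones field value 0x3f */
	assert(TU_SIZE(64) == (0x3f << 25));
	/* decoding adds the bias back */
	assert(((TU_SIZE(64) & TU_SIZE_MASK) >> TU_SIZE_SHIFT) + 1 == 64);
	return 0;
}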
 
@@ -5407,18 +5425,18 @@ enum {
 #define   DSL_LINEMASK_GEN2	0x00000fff
 #define   DSL_LINEMASK_GEN3	0x00001fff
 #define _PIPEACONF		0x70008
-#define   PIPECONF_ENABLE	(1<<31)
+#define   PIPECONF_ENABLE	(1 << 31)
 #define   PIPECONF_DISABLE	0
-#define   PIPECONF_DOUBLE_WIDE	(1<<30)
-#define   I965_PIPECONF_ACTIVE	(1<<30)
-#define   PIPECONF_DSI_PLL_LOCKED	(1<<29) /* vlv & pipe A only */
-#define   PIPECONF_FRAME_START_DELAY_MASK (3<<27)
+#define   PIPECONF_DOUBLE_WIDE	(1 << 30)
+#define   I965_PIPECONF_ACTIVE	(1 << 30)
+#define   PIPECONF_DSI_PLL_LOCKED	(1 << 29) /* vlv & pipe A only */
+#define   PIPECONF_FRAME_START_DELAY_MASK (3 << 27)
 #define   PIPECONF_SINGLE_WIDE	0
 #define   PIPECONF_PIPE_UNLOCKED 0
-#define   PIPECONF_PIPE_LOCKED	(1<<25)
+#define   PIPECONF_PIPE_LOCKED	(1 << 25)
 #define   PIPECONF_PALETTE	0
-#define   PIPECONF_GAMMA		(1<<24)
-#define   PIPECONF_FORCE_BORDER	(1<<25)
+#define   PIPECONF_GAMMA		(1 << 24)
+#define   PIPECONF_FORCE_BORDER	(1 << 25)
 #define   PIPECONF_INTERLACE_MASK	(7 << 21)
 #define   PIPECONF_INTERLACE_MASK_HSW	(3 << 21)
 /* Note that pre-gen3 does not support interlaced display directly. Panel
@@ -5437,67 +5455,67 @@ enum {
 #define   PIPECONF_PFIT_PF_INTERLACED_DBL_ILK	(5 << 21) /* ilk/snb only */
 #define   PIPECONF_INTERLACE_MODE_MASK		(7 << 21)
 #define   PIPECONF_EDP_RR_MODE_SWITCH		(1 << 20)
-#define   PIPECONF_CXSR_DOWNCLOCK	(1<<16)
+#define   PIPECONF_CXSR_DOWNCLOCK	(1 << 16)
 #define   PIPECONF_EDP_RR_MODE_SWITCH_VLV	(1 << 14)
 #define   PIPECONF_COLOR_RANGE_SELECT	(1 << 13)
 #define   PIPECONF_BPC_MASK	(0x7 << 5)
-#define   PIPECONF_8BPC		(0<<5)
-#define   PIPECONF_10BPC	(1<<5)
-#define   PIPECONF_6BPC		(2<<5)
-#define   PIPECONF_12BPC	(3<<5)
-#define   PIPECONF_DITHER_EN	(1<<4)
+#define   PIPECONF_8BPC		(0 << 5)
+#define   PIPECONF_10BPC	(1 << 5)
+#define   PIPECONF_6BPC		(2 << 5)
+#define   PIPECONF_12BPC	(3 << 5)
+#define   PIPECONF_DITHER_EN	(1 << 4)
 #define   PIPECONF_DITHER_TYPE_MASK (0x0000000c)
-#define   PIPECONF_DITHER_TYPE_SP (0<<2)
-#define   PIPECONF_DITHER_TYPE_ST1 (1<<2)
-#define   PIPECONF_DITHER_TYPE_ST2 (2<<2)
-#define   PIPECONF_DITHER_TYPE_TEMP (3<<2)
+#define   PIPECONF_DITHER_TYPE_SP (0 << 2)
+#define   PIPECONF_DITHER_TYPE_ST1 (1 << 2)
+#define   PIPECONF_DITHER_TYPE_ST2 (2 << 2)
+#define   PIPECONF_DITHER_TYPE_TEMP (3 << 2)
 #define _PIPEASTAT		0x70024
-#define   PIPE_FIFO_UNDERRUN_STATUS		(1UL<<31)
-#define   SPRITE1_FLIP_DONE_INT_EN_VLV		(1UL<<30)
-#define   PIPE_CRC_ERROR_ENABLE			(1UL<<29)
-#define   PIPE_CRC_DONE_ENABLE			(1UL<<28)
-#define   PERF_COUNTER2_INTERRUPT_EN		(1UL<<27)
-#define   PIPE_GMBUS_EVENT_ENABLE		(1UL<<27)
-#define   PLANE_FLIP_DONE_INT_EN_VLV		(1UL<<26)
-#define   PIPE_HOTPLUG_INTERRUPT_ENABLE		(1UL<<26)
-#define   PIPE_VSYNC_INTERRUPT_ENABLE		(1UL<<25)
-#define   PIPE_DISPLAY_LINE_COMPARE_ENABLE	(1UL<<24)
-#define   PIPE_DPST_EVENT_ENABLE		(1UL<<23)
-#define   SPRITE0_FLIP_DONE_INT_EN_VLV		(1UL<<22)
-#define   PIPE_LEGACY_BLC_EVENT_ENABLE		(1UL<<22)
-#define   PIPE_ODD_FIELD_INTERRUPT_ENABLE	(1UL<<21)
-#define   PIPE_EVEN_FIELD_INTERRUPT_ENABLE	(1UL<<20)
-#define   PIPE_B_PSR_INTERRUPT_ENABLE_VLV	(1UL<<19)
-#define   PERF_COUNTER_INTERRUPT_EN		(1UL<<19)
-#define   PIPE_HOTPLUG_TV_INTERRUPT_ENABLE	(1UL<<18) /* pre-965 */
-#define   PIPE_START_VBLANK_INTERRUPT_ENABLE	(1UL<<18) /* 965 or later */
-#define   PIPE_FRAMESTART_INTERRUPT_ENABLE	(1UL<<17)
-#define   PIPE_VBLANK_INTERRUPT_ENABLE		(1UL<<17)
-#define   PIPEA_HBLANK_INT_EN_VLV		(1UL<<16)
-#define   PIPE_OVERLAY_UPDATED_ENABLE		(1UL<<16)
-#define   SPRITE1_FLIP_DONE_INT_STATUS_VLV	(1UL<<15)
-#define   SPRITE0_FLIP_DONE_INT_STATUS_VLV	(1UL<<14)
-#define   PIPE_CRC_ERROR_INTERRUPT_STATUS	(1UL<<13)
-#define   PIPE_CRC_DONE_INTERRUPT_STATUS	(1UL<<12)
-#define   PERF_COUNTER2_INTERRUPT_STATUS	(1UL<<11)
-#define   PIPE_GMBUS_INTERRUPT_STATUS		(1UL<<11)
-#define   PLANE_FLIP_DONE_INT_STATUS_VLV	(1UL<<10)
-#define   PIPE_HOTPLUG_INTERRUPT_STATUS		(1UL<<10)
-#define   PIPE_VSYNC_INTERRUPT_STATUS		(1UL<<9)
-#define   PIPE_DISPLAY_LINE_COMPARE_STATUS	(1UL<<8)
-#define   PIPE_DPST_EVENT_STATUS		(1UL<<7)
-#define   PIPE_A_PSR_STATUS_VLV			(1UL<<6)
-#define   PIPE_LEGACY_BLC_EVENT_STATUS		(1UL<<6)
-#define   PIPE_ODD_FIELD_INTERRUPT_STATUS	(1UL<<5)
-#define   PIPE_EVEN_FIELD_INTERRUPT_STATUS	(1UL<<4)
-#define   PIPE_B_PSR_STATUS_VLV			(1UL<<3)
-#define   PERF_COUNTER_INTERRUPT_STATUS		(1UL<<3)
-#define   PIPE_HOTPLUG_TV_INTERRUPT_STATUS	(1UL<<2) /* pre-965 */
-#define   PIPE_START_VBLANK_INTERRUPT_STATUS	(1UL<<2) /* 965 or later */
-#define   PIPE_FRAMESTART_INTERRUPT_STATUS	(1UL<<1)
-#define   PIPE_VBLANK_INTERRUPT_STATUS		(1UL<<1)
-#define   PIPE_HBLANK_INT_STATUS		(1UL<<0)
-#define   PIPE_OVERLAY_UPDATED_STATUS		(1UL<<0)
+#define   PIPE_FIFO_UNDERRUN_STATUS		(1UL << 31)
+#define   SPRITE1_FLIP_DONE_INT_EN_VLV		(1UL << 30)
+#define   PIPE_CRC_ERROR_ENABLE			(1UL << 29)
+#define   PIPE_CRC_DONE_ENABLE			(1UL << 28)
+#define   PERF_COUNTER2_INTERRUPT_EN		(1UL << 27)
+#define   PIPE_GMBUS_EVENT_ENABLE		(1UL << 27)
+#define   PLANE_FLIP_DONE_INT_EN_VLV		(1UL << 26)
+#define   PIPE_HOTPLUG_INTERRUPT_ENABLE		(1UL << 26)
+#define   PIPE_VSYNC_INTERRUPT_ENABLE		(1UL << 25)
+#define   PIPE_DISPLAY_LINE_COMPARE_ENABLE	(1UL << 24)
+#define   PIPE_DPST_EVENT_ENABLE		(1UL << 23)
+#define   SPRITE0_FLIP_DONE_INT_EN_VLV		(1UL << 22)
+#define   PIPE_LEGACY_BLC_EVENT_ENABLE		(1UL << 22)
+#define   PIPE_ODD_FIELD_INTERRUPT_ENABLE	(1UL << 21)
+#define   PIPE_EVEN_FIELD_INTERRUPT_ENABLE	(1UL << 20)
+#define   PIPE_B_PSR_INTERRUPT_ENABLE_VLV	(1UL << 19)
+#define   PERF_COUNTER_INTERRUPT_EN		(1UL << 19)
+#define   PIPE_HOTPLUG_TV_INTERRUPT_ENABLE	(1UL << 18) /* pre-965 */
+#define   PIPE_START_VBLANK_INTERRUPT_ENABLE	(1UL << 18) /* 965 or later */
+#define   PIPE_FRAMESTART_INTERRUPT_ENABLE	(1UL << 17)
+#define   PIPE_VBLANK_INTERRUPT_ENABLE		(1UL << 17)
+#define   PIPEA_HBLANK_INT_EN_VLV		(1UL << 16)
+#define   PIPE_OVERLAY_UPDATED_ENABLE		(1UL << 16)
+#define   SPRITE1_FLIP_DONE_INT_STATUS_VLV	(1UL << 15)
+#define   SPRITE0_FLIP_DONE_INT_STATUS_VLV	(1UL << 14)
+#define   PIPE_CRC_ERROR_INTERRUPT_STATUS	(1UL << 13)
+#define   PIPE_CRC_DONE_INTERRUPT_STATUS	(1UL << 12)
+#define   PERF_COUNTER2_INTERRUPT_STATUS	(1UL << 11)
+#define   PIPE_GMBUS_INTERRUPT_STATUS		(1UL << 11)
+#define   PLANE_FLIP_DONE_INT_STATUS_VLV	(1UL << 10)
+#define   PIPE_HOTPLUG_INTERRUPT_STATUS		(1UL << 10)
+#define   PIPE_VSYNC_INTERRUPT_STATUS		(1UL << 9)
+#define   PIPE_DISPLAY_LINE_COMPARE_STATUS	(1UL << 8)
+#define   PIPE_DPST_EVENT_STATUS		(1UL << 7)
+#define   PIPE_A_PSR_STATUS_VLV			(1UL << 6)
+#define   PIPE_LEGACY_BLC_EVENT_STATUS		(1UL << 6)
+#define   PIPE_ODD_FIELD_INTERRUPT_STATUS	(1UL << 5)
+#define   PIPE_EVEN_FIELD_INTERRUPT_STATUS	(1UL << 4)
+#define   PIPE_B_PSR_STATUS_VLV			(1UL << 3)
+#define   PERF_COUNTER_INTERRUPT_STATUS		(1UL << 3)
+#define   PIPE_HOTPLUG_TV_INTERRUPT_STATUS	(1UL << 2) /* pre-965 */
+#define   PIPE_START_VBLANK_INTERRUPT_STATUS	(1UL << 2) /* 965 or later */
+#define   PIPE_FRAMESTART_INTERRUPT_STATUS	(1UL << 1)
+#define   PIPE_VBLANK_INTERRUPT_STATUS		(1UL << 1)
+#define   PIPE_HBLANK_INT_STATUS		(1UL << 0)
+#define   PIPE_OVERLAY_UPDATED_STATUS		(1UL << 0)
 
 #define PIPESTAT_INT_ENABLE_MASK		0x7fff0000
 #define PIPESTAT_INT_STATUS_MASK		0x0000ffff
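
PIPESTAT's layout is implicit in the two masks: status bits live in the low word and each enable bit sits exactly 16 above its status bit, with the FIFO underrun status at bit 31 the odd one out, covered by neither mask. A standalone check of that pairing for vblank, using only definitions from this block:

#include <assert.h>

#define PIPE_VBLANK_INTERRUPT_ENABLE	(1UL << 17)
#define PIPE_VBLANK_INTERRUPT_STATUS	(1UL << 1)
#define PIPESTAT_INT_ENABLE_MASK	0x7fff0000
#define PIPESTAT_INT_STATUS_MASK	0x0000ffff

int main(void)
{
	/* the enable bit sits exactly 16 above its status bit */
	assert((PIPE_VBLANK_INTERRUPT_STATUS << 16) == PIPE_VBLANK_INTERRUPT_ENABLE);
	assert(PIPE_VBLANK_INTERRUPT_ENABLE & PIPESTAT_INT_ENABLE_MASK);
	assert(PIPE_VBLANK_INTERRUPT_STATUS & PIPESTAT_INT_STATUS_MASK);
	return 0;
}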
@@ -5526,67 +5544,67 @@ enum {
 
 #define _PIPE_MISC_A			0x70030
 #define _PIPE_MISC_B			0x71030
-#define   PIPEMISC_YUV420_ENABLE	(1<<27)
-#define   PIPEMISC_YUV420_MODE_FULL_BLEND (1<<26)
-#define   PIPEMISC_OUTPUT_COLORSPACE_YUV  (1<<11)
-#define   PIPEMISC_DITHER_BPC_MASK	(7<<5)
-#define   PIPEMISC_DITHER_8_BPC		(0<<5)
-#define   PIPEMISC_DITHER_10_BPC	(1<<5)
-#define   PIPEMISC_DITHER_6_BPC		(2<<5)
-#define   PIPEMISC_DITHER_12_BPC	(3<<5)
-#define   PIPEMISC_DITHER_ENABLE	(1<<4)
-#define   PIPEMISC_DITHER_TYPE_MASK	(3<<2)
-#define   PIPEMISC_DITHER_TYPE_SP	(0<<2)
+#define   PIPEMISC_YUV420_ENABLE	(1 << 27)
+#define   PIPEMISC_YUV420_MODE_FULL_BLEND (1 << 26)
+#define   PIPEMISC_OUTPUT_COLORSPACE_YUV  (1 << 11)
+#define   PIPEMISC_DITHER_BPC_MASK	(7 << 5)
+#define   PIPEMISC_DITHER_8_BPC		(0 << 5)
+#define   PIPEMISC_DITHER_10_BPC	(1 << 5)
+#define   PIPEMISC_DITHER_6_BPC		(2 << 5)
+#define   PIPEMISC_DITHER_12_BPC	(3 << 5)
+#define   PIPEMISC_DITHER_ENABLE	(1 << 4)
+#define   PIPEMISC_DITHER_TYPE_MASK	(3 << 2)
+#define   PIPEMISC_DITHER_TYPE_SP	(0 << 2)
 #define PIPEMISC(pipe)			_MMIO_PIPE2(pipe, _PIPE_MISC_A)
 
 #define VLV_DPFLIPSTAT				_MMIO(VLV_DISPLAY_BASE + 0x70028)
-#define   PIPEB_LINE_COMPARE_INT_EN		(1<<29)
-#define   PIPEB_HLINE_INT_EN			(1<<28)
-#define   PIPEB_VBLANK_INT_EN			(1<<27)
-#define   SPRITED_FLIP_DONE_INT_EN		(1<<26)
-#define   SPRITEC_FLIP_DONE_INT_EN		(1<<25)
-#define   PLANEB_FLIP_DONE_INT_EN		(1<<24)
-#define   PIPE_PSR_INT_EN			(1<<22)
-#define   PIPEA_LINE_COMPARE_INT_EN		(1<<21)
-#define   PIPEA_HLINE_INT_EN			(1<<20)
-#define   PIPEA_VBLANK_INT_EN			(1<<19)
-#define   SPRITEB_FLIP_DONE_INT_EN		(1<<18)
-#define   SPRITEA_FLIP_DONE_INT_EN		(1<<17)
-#define   PLANEA_FLIPDONE_INT_EN		(1<<16)
-#define   PIPEC_LINE_COMPARE_INT_EN		(1<<13)
-#define   PIPEC_HLINE_INT_EN			(1<<12)
-#define   PIPEC_VBLANK_INT_EN			(1<<11)
-#define   SPRITEF_FLIPDONE_INT_EN		(1<<10)
-#define   SPRITEE_FLIPDONE_INT_EN		(1<<9)
-#define   PLANEC_FLIPDONE_INT_EN		(1<<8)
+#define   PIPEB_LINE_COMPARE_INT_EN		(1 << 29)
+#define   PIPEB_HLINE_INT_EN			(1 << 28)
+#define   PIPEB_VBLANK_INT_EN			(1 << 27)
+#define   SPRITED_FLIP_DONE_INT_EN		(1 << 26)
+#define   SPRITEC_FLIP_DONE_INT_EN		(1 << 25)
+#define   PLANEB_FLIP_DONE_INT_EN		(1 << 24)
+#define   PIPE_PSR_INT_EN			(1 << 22)
+#define   PIPEA_LINE_COMPARE_INT_EN		(1 << 21)
+#define   PIPEA_HLINE_INT_EN			(1 << 20)
+#define   PIPEA_VBLANK_INT_EN			(1 << 19)
+#define   SPRITEB_FLIP_DONE_INT_EN		(1 << 18)
+#define   SPRITEA_FLIP_DONE_INT_EN		(1 << 17)
+#define   PLANEA_FLIPDONE_INT_EN		(1 << 16)
+#define   PIPEC_LINE_COMPARE_INT_EN		(1 << 13)
+#define   PIPEC_HLINE_INT_EN			(1 << 12)
+#define   PIPEC_VBLANK_INT_EN			(1 << 11)
+#define   SPRITEF_FLIPDONE_INT_EN		(1 << 10)
+#define   SPRITEE_FLIPDONE_INT_EN		(1 << 9)
+#define   PLANEC_FLIPDONE_INT_EN		(1 << 8)
 
 #define DPINVGTT				_MMIO(VLV_DISPLAY_BASE + 0x7002c) /* VLV/CHV only */
-#define   SPRITEF_INVALID_GTT_INT_EN		(1<<27)
-#define   SPRITEE_INVALID_GTT_INT_EN		(1<<26)
-#define   PLANEC_INVALID_GTT_INT_EN		(1<<25)
-#define   CURSORC_INVALID_GTT_INT_EN		(1<<24)
-#define   CURSORB_INVALID_GTT_INT_EN		(1<<23)
-#define   CURSORA_INVALID_GTT_INT_EN		(1<<22)
-#define   SPRITED_INVALID_GTT_INT_EN		(1<<21)
-#define   SPRITEC_INVALID_GTT_INT_EN		(1<<20)
-#define   PLANEB_INVALID_GTT_INT_EN		(1<<19)
-#define   SPRITEB_INVALID_GTT_INT_EN		(1<<18)
-#define   SPRITEA_INVALID_GTT_INT_EN		(1<<17)
-#define   PLANEA_INVALID_GTT_INT_EN		(1<<16)
+#define   SPRITEF_INVALID_GTT_INT_EN		(1 << 27)
+#define   SPRITEE_INVALID_GTT_INT_EN		(1 << 26)
+#define   PLANEC_INVALID_GTT_INT_EN		(1 << 25)
+#define   CURSORC_INVALID_GTT_INT_EN		(1 << 24)
+#define   CURSORB_INVALID_GTT_INT_EN		(1 << 23)
+#define   CURSORA_INVALID_GTT_INT_EN		(1 << 22)
+#define   SPRITED_INVALID_GTT_INT_EN		(1 << 21)
+#define   SPRITEC_INVALID_GTT_INT_EN		(1 << 20)
+#define   PLANEB_INVALID_GTT_INT_EN		(1 << 19)
+#define   SPRITEB_INVALID_GTT_INT_EN		(1 << 18)
+#define   SPRITEA_INVALID_GTT_INT_EN		(1 << 17)
+#define   PLANEA_INVALID_GTT_INT_EN		(1 << 16)
 #define   DPINVGTT_EN_MASK			0xff0000
 #define   DPINVGTT_EN_MASK_CHV			0xfff0000
-#define   SPRITEF_INVALID_GTT_STATUS		(1<<11)
-#define   SPRITEE_INVALID_GTT_STATUS		(1<<10)
-#define   PLANEC_INVALID_GTT_STATUS		(1<<9)
-#define   CURSORC_INVALID_GTT_STATUS		(1<<8)
-#define   CURSORB_INVALID_GTT_STATUS		(1<<7)
-#define   CURSORA_INVALID_GTT_STATUS		(1<<6)
-#define   SPRITED_INVALID_GTT_STATUS		(1<<5)
-#define   SPRITEC_INVALID_GTT_STATUS		(1<<4)
-#define   PLANEB_INVALID_GTT_STATUS		(1<<3)
-#define   SPRITEB_INVALID_GTT_STATUS		(1<<2)
-#define   SPRITEA_INVALID_GTT_STATUS		(1<<1)
-#define   PLANEA_INVALID_GTT_STATUS		(1<<0)
+#define   SPRITEF_INVALID_GTT_STATUS		(1 << 11)
+#define   SPRITEE_INVALID_GTT_STATUS		(1 << 10)
+#define   PLANEC_INVALID_GTT_STATUS		(1 << 9)
+#define   CURSORC_INVALID_GTT_STATUS		(1 << 8)
+#define   CURSORB_INVALID_GTT_STATUS		(1 << 7)
+#define   CURSORA_INVALID_GTT_STATUS		(1 << 6)
+#define   SPRITED_INVALID_GTT_STATUS		(1 << 5)
+#define   SPRITEC_INVALID_GTT_STATUS		(1 << 4)
+#define   PLANEB_INVALID_GTT_STATUS		(1 << 3)
+#define   SPRITEB_INVALID_GTT_STATUS		(1 << 2)
+#define   SPRITEA_INVALID_GTT_STATUS		(1 << 1)
+#define   PLANEA_INVALID_GTT_STATUS		(1 << 0)
 #define   DPINVGTT_STATUS_MASK			0xff
 #define   DPINVGTT_STATUS_MASK_CHV		0xfff
 
@@ -5627,149 +5645,149 @@ enum {
 /* pnv/gen4/g4x/vlv/chv */
 #define DSPFW1		_MMIO(dev_priv->info.display_mmio_offset + 0x70034)
 #define   DSPFW_SR_SHIFT		23
-#define   DSPFW_SR_MASK			(0x1ff<<23)
+#define   DSPFW_SR_MASK			(0x1ff << 23)
 #define   DSPFW_CURSORB_SHIFT		16
-#define   DSPFW_CURSORB_MASK		(0x3f<<16)
+#define   DSPFW_CURSORB_MASK		(0x3f << 16)
 #define   DSPFW_PLANEB_SHIFT		8
-#define   DSPFW_PLANEB_MASK		(0x7f<<8)
-#define   DSPFW_PLANEB_MASK_VLV		(0xff<<8) /* vlv/chv */
+#define   DSPFW_PLANEB_MASK		(0x7f << 8)
+#define   DSPFW_PLANEB_MASK_VLV		(0xff << 8) /* vlv/chv */
 #define   DSPFW_PLANEA_SHIFT		0
-#define   DSPFW_PLANEA_MASK		(0x7f<<0)
-#define   DSPFW_PLANEA_MASK_VLV		(0xff<<0) /* vlv/chv */
+#define   DSPFW_PLANEA_MASK		(0x7f << 0)
+#define   DSPFW_PLANEA_MASK_VLV		(0xff << 0) /* vlv/chv */
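
These watermark fields use pre-shifted masks, so extraction is mask-then-shift. A sketch of pulling the cursor B watermark out of a DSPFW1 value (the register value below is fabricated):

#include <assert.h>

#define DSPFW_CURSORB_SHIFT	16
#define DSPFW_CURSORB_MASK	(0x3f << 16)

static unsigned int dspfw1_get_cursorb(unsigned int dspfw1)
{
	/* pre-shifted mask: mask first, then shift down */
	return (dspfw1 & DSPFW_CURSORB_MASK) >> DSPFW_CURSORB_SHIFT;
}

int main(void)
{
	unsigned int dspfw1 = 0x2a << DSPFW_CURSORB_SHIFT;	/* fabricated register value */

	assert(dspfw1_get_cursorb(dspfw1) == 0x2a);
	return 0;
}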
 #define DSPFW2		_MMIO(dev_priv->info.display_mmio_offset + 0x70038)
-#define   DSPFW_FBC_SR_EN		(1<<31)	  /* g4x */
+#define   DSPFW_FBC_SR_EN		(1 << 31)	  /* g4x */
 #define   DSPFW_FBC_SR_SHIFT		28
-#define   DSPFW_FBC_SR_MASK		(0x7<<28) /* g4x */
+#define   DSPFW_FBC_SR_MASK		(0x7 << 28) /* g4x */
 #define   DSPFW_FBC_HPLL_SR_SHIFT	24
-#define   DSPFW_FBC_HPLL_SR_MASK	(0xf<<24) /* g4x */
+#define   DSPFW_FBC_HPLL_SR_MASK	(0xf << 24) /* g4x */
 #define   DSPFW_SPRITEB_SHIFT		(16)
-#define   DSPFW_SPRITEB_MASK		(0x7f<<16) /* g4x */
-#define   DSPFW_SPRITEB_MASK_VLV	(0xff<<16) /* vlv/chv */
+#define   DSPFW_SPRITEB_MASK		(0x7f << 16) /* g4x */
+#define   DSPFW_SPRITEB_MASK_VLV	(0xff << 16) /* vlv/chv */
 #define   DSPFW_CURSORA_SHIFT		8
-#define   DSPFW_CURSORA_MASK		(0x3f<<8)
+#define   DSPFW_CURSORA_MASK		(0x3f << 8)
 #define   DSPFW_PLANEC_OLD_SHIFT	0
-#define   DSPFW_PLANEC_OLD_MASK		(0x7f<<0) /* pre-gen4 sprite C */
+#define   DSPFW_PLANEC_OLD_MASK		(0x7f << 0) /* pre-gen4 sprite C */
 #define   DSPFW_SPRITEA_SHIFT		0
-#define   DSPFW_SPRITEA_MASK		(0x7f<<0) /* g4x */
-#define   DSPFW_SPRITEA_MASK_VLV	(0xff<<0) /* vlv/chv */
+#define   DSPFW_SPRITEA_MASK		(0x7f << 0) /* g4x */
+#define   DSPFW_SPRITEA_MASK_VLV	(0xff << 0) /* vlv/chv */
 #define DSPFW3		_MMIO(dev_priv->info.display_mmio_offset + 0x7003c)
-#define   DSPFW_HPLL_SR_EN		(1<<31)
-#define   PINEVIEW_SELF_REFRESH_EN	(1<<30)
+#define   DSPFW_HPLL_SR_EN		(1 << 31)
+#define   PINEVIEW_SELF_REFRESH_EN	(1 << 30)
 #define   DSPFW_CURSOR_SR_SHIFT		24
-#define   DSPFW_CURSOR_SR_MASK		(0x3f<<24)
+#define   DSPFW_CURSOR_SR_MASK		(0x3f << 24)
 #define   DSPFW_HPLL_CURSOR_SHIFT	16
-#define   DSPFW_HPLL_CURSOR_MASK	(0x3f<<16)
+#define   DSPFW_HPLL_CURSOR_MASK	(0x3f << 16)
 #define   DSPFW_HPLL_SR_SHIFT		0
-#define   DSPFW_HPLL_SR_MASK		(0x1ff<<0)
+#define   DSPFW_HPLL_SR_MASK		(0x1ff << 0)
 
 /* vlv/chv */
 #define DSPFW4		_MMIO(VLV_DISPLAY_BASE + 0x70070)
 #define   DSPFW_SPRITEB_WM1_SHIFT	16
-#define   DSPFW_SPRITEB_WM1_MASK	(0xff<<16)
+#define   DSPFW_SPRITEB_WM1_MASK	(0xff << 16)
 #define   DSPFW_CURSORA_WM1_SHIFT	8
-#define   DSPFW_CURSORA_WM1_MASK	(0x3f<<8)
+#define   DSPFW_CURSORA_WM1_MASK	(0x3f << 8)
 #define   DSPFW_SPRITEA_WM1_SHIFT	0
-#define   DSPFW_SPRITEA_WM1_MASK	(0xff<<0)
+#define   DSPFW_SPRITEA_WM1_MASK	(0xff << 0)
 #define DSPFW5		_MMIO(VLV_DISPLAY_BASE + 0x70074)
 #define   DSPFW_PLANEB_WM1_SHIFT	24
-#define   DSPFW_PLANEB_WM1_MASK		(0xff<<24)
+#define   DSPFW_PLANEB_WM1_MASK		(0xff << 24)
 #define   DSPFW_PLANEA_WM1_SHIFT	16
-#define   DSPFW_PLANEA_WM1_MASK		(0xff<<16)
+#define   DSPFW_PLANEA_WM1_MASK		(0xff << 16)
 #define   DSPFW_CURSORB_WM1_SHIFT	8
-#define   DSPFW_CURSORB_WM1_MASK	(0x3f<<8)
+#define   DSPFW_CURSORB_WM1_MASK	(0x3f << 8)
 #define   DSPFW_CURSOR_SR_WM1_SHIFT	0
-#define   DSPFW_CURSOR_SR_WM1_MASK	(0x3f<<0)
+#define   DSPFW_CURSOR_SR_WM1_MASK	(0x3f << 0)
 #define DSPFW6		_MMIO(VLV_DISPLAY_BASE + 0x70078)
 #define   DSPFW_SR_WM1_SHIFT		0
-#define   DSPFW_SR_WM1_MASK		(0x1ff<<0)
+#define   DSPFW_SR_WM1_MASK		(0x1ff << 0)
 #define DSPFW7		_MMIO(VLV_DISPLAY_BASE + 0x7007c)
 #define DSPFW7_CHV	_MMIO(VLV_DISPLAY_BASE + 0x700b4) /* wtf #1? */
 #define   DSPFW_SPRITED_WM1_SHIFT	24
-#define   DSPFW_SPRITED_WM1_MASK	(0xff<<24)
+#define   DSPFW_SPRITED_WM1_MASK	(0xff << 24)
 #define   DSPFW_SPRITED_SHIFT		16
-#define   DSPFW_SPRITED_MASK_VLV	(0xff<<16)
+#define   DSPFW_SPRITED_MASK_VLV	(0xff << 16)
 #define   DSPFW_SPRITEC_WM1_SHIFT	8
-#define   DSPFW_SPRITEC_WM1_MASK	(0xff<<8)
+#define   DSPFW_SPRITEC_WM1_MASK	(0xff << 8)
 #define   DSPFW_SPRITEC_SHIFT		0
-#define   DSPFW_SPRITEC_MASK_VLV	(0xff<<0)
+#define   DSPFW_SPRITEC_MASK_VLV	(0xff << 0)
 #define DSPFW8_CHV	_MMIO(VLV_DISPLAY_BASE + 0x700b8)
 #define   DSPFW_SPRITEF_WM1_SHIFT	24
-#define   DSPFW_SPRITEF_WM1_MASK	(0xff<<24)
+#define   DSPFW_SPRITEF_WM1_MASK	(0xff << 24)
 #define   DSPFW_SPRITEF_SHIFT		16
-#define   DSPFW_SPRITEF_MASK_VLV	(0xff<<16)
+#define   DSPFW_SPRITEF_MASK_VLV	(0xff << 16)
 #define   DSPFW_SPRITEE_WM1_SHIFT	8
-#define   DSPFW_SPRITEE_WM1_MASK	(0xff<<8)
+#define   DSPFW_SPRITEE_WM1_MASK	(0xff << 8)
 #define   DSPFW_SPRITEE_SHIFT		0
-#define   DSPFW_SPRITEE_MASK_VLV	(0xff<<0)
+#define   DSPFW_SPRITEE_MASK_VLV	(0xff << 0)
 #define DSPFW9_CHV	_MMIO(VLV_DISPLAY_BASE + 0x7007c) /* wtf #2? */
 #define   DSPFW_PLANEC_WM1_SHIFT	24
-#define   DSPFW_PLANEC_WM1_MASK		(0xff<<24)
+#define   DSPFW_PLANEC_WM1_MASK		(0xff << 24)
 #define   DSPFW_PLANEC_SHIFT		16
-#define   DSPFW_PLANEC_MASK_VLV		(0xff<<16)
+#define   DSPFW_PLANEC_MASK_VLV		(0xff << 16)
 #define   DSPFW_CURSORC_WM1_SHIFT	8
-#define   DSPFW_CURSORC_WM1_MASK	(0x3f<<16)
+#define   DSPFW_CURSORC_WM1_MASK	(0x3f << 16)
 #define   DSPFW_CURSORC_SHIFT		0
-#define   DSPFW_CURSORC_MASK		(0x3f<<0)
+#define   DSPFW_CURSORC_MASK		(0x3f << 0)
 
 /* vlv/chv high order bits */
 #define DSPHOWM		_MMIO(VLV_DISPLAY_BASE + 0x70064)
 #define   DSPFW_SR_HI_SHIFT		24
-#define   DSPFW_SR_HI_MASK		(3<<24) /* 2 bits for chv, 1 for vlv */
+#define   DSPFW_SR_HI_MASK		(3 << 24) /* 2 bits for chv, 1 for vlv */
 #define   DSPFW_SPRITEF_HI_SHIFT	23
-#define   DSPFW_SPRITEF_HI_MASK		(1<<23)
+#define   DSPFW_SPRITEF_HI_MASK		(1 << 23)
 #define   DSPFW_SPRITEE_HI_SHIFT	22
-#define   DSPFW_SPRITEE_HI_MASK		(1<<22)
+#define   DSPFW_SPRITEE_HI_MASK		(1 << 22)
 #define   DSPFW_PLANEC_HI_SHIFT		21
-#define   DSPFW_PLANEC_HI_MASK		(1<<21)
+#define   DSPFW_PLANEC_HI_MASK		(1 << 21)
 #define   DSPFW_SPRITED_HI_SHIFT	20
-#define   DSPFW_SPRITED_HI_MASK		(1<<20)
+#define   DSPFW_SPRITED_HI_MASK		(1 << 20)
 #define   DSPFW_SPRITEC_HI_SHIFT	16
-#define   DSPFW_SPRITEC_HI_MASK		(1<<16)
+#define   DSPFW_SPRITEC_HI_MASK		(1 << 16)
 #define   DSPFW_PLANEB_HI_SHIFT		12
-#define   DSPFW_PLANEB_HI_MASK		(1<<12)
+#define   DSPFW_PLANEB_HI_MASK		(1 << 12)
 #define   DSPFW_SPRITEB_HI_SHIFT	8
-#define   DSPFW_SPRITEB_HI_MASK		(1<<8)
+#define   DSPFW_SPRITEB_HI_MASK		(1 << 8)
 #define   DSPFW_SPRITEA_HI_SHIFT	4
-#define   DSPFW_SPRITEA_HI_MASK		(1<<4)
+#define   DSPFW_SPRITEA_HI_MASK		(1 << 4)
 #define   DSPFW_PLANEA_HI_SHIFT		0
-#define   DSPFW_PLANEA_HI_MASK		(1<<0)
+#define   DSPFW_PLANEA_HI_MASK		(1 << 0)
 #define DSPHOWM1	_MMIO(VLV_DISPLAY_BASE + 0x70068)
 #define   DSPFW_SR_WM1_HI_SHIFT		24
-#define   DSPFW_SR_WM1_HI_MASK		(3<<24) /* 2 bits for chv, 1 for vlv */
+#define   DSPFW_SR_WM1_HI_MASK		(3 << 24) /* 2 bits for chv, 1 for vlv */
 #define   DSPFW_SPRITEF_WM1_HI_SHIFT	23
-#define   DSPFW_SPRITEF_WM1_HI_MASK	(1<<23)
+#define   DSPFW_SPRITEF_WM1_HI_MASK	(1 << 23)
 #define   DSPFW_SPRITEE_WM1_HI_SHIFT	22
-#define   DSPFW_SPRITEE_WM1_HI_MASK	(1<<22)
+#define   DSPFW_SPRITEE_WM1_HI_MASK	(1 << 22)
 #define   DSPFW_PLANEC_WM1_HI_SHIFT	21
-#define   DSPFW_PLANEC_WM1_HI_MASK	(1<<21)
+#define   DSPFW_PLANEC_WM1_HI_MASK	(1 << 21)
 #define   DSPFW_SPRITED_WM1_HI_SHIFT	20
-#define   DSPFW_SPRITED_WM1_HI_MASK	(1<<20)
+#define   DSPFW_SPRITED_WM1_HI_MASK	(1 << 20)
 #define   DSPFW_SPRITEC_WM1_HI_SHIFT	16
-#define   DSPFW_SPRITEC_WM1_HI_MASK	(1<<16)
+#define   DSPFW_SPRITEC_WM1_HI_MASK	(1 << 16)
 #define   DSPFW_PLANEB_WM1_HI_SHIFT	12
-#define   DSPFW_PLANEB_WM1_HI_MASK	(1<<12)
+#define   DSPFW_PLANEB_WM1_HI_MASK	(1 << 12)
 #define   DSPFW_SPRITEB_WM1_HI_SHIFT	8
-#define   DSPFW_SPRITEB_WM1_HI_MASK	(1<<8)
+#define   DSPFW_SPRITEB_WM1_HI_MASK	(1 << 8)
 #define   DSPFW_SPRITEA_WM1_HI_SHIFT	4
-#define   DSPFW_SPRITEA_WM1_HI_MASK	(1<<4)
+#define   DSPFW_SPRITEA_WM1_HI_MASK	(1 << 4)
 #define   DSPFW_PLANEA_WM1_HI_SHIFT	0
-#define   DSPFW_PLANEA_WM1_HI_MASK	(1<<0)
+#define   DSPFW_PLANEA_WM1_HI_MASK	(1 << 0)
 
 /* drain latency register values */
 #define VLV_DDL(pipe)			_MMIO(VLV_DISPLAY_BASE + 0x70050 + 4 * (pipe))
 #define DDL_CURSOR_SHIFT		24
-#define DDL_SPRITE_SHIFT(sprite)	(8+8*(sprite))
+#define DDL_SPRITE_SHIFT(sprite)	(8 + 8 * (sprite))
 #define DDL_PLANE_SHIFT			0
-#define DDL_PRECISION_HIGH		(1<<7)
-#define DDL_PRECISION_LOW		(0<<7)
+#define DDL_PRECISION_HIGH		(1 << 7)
+#define DDL_PRECISION_LOW		(0 << 7)
 #define DRAIN_LATENCY_MASK		0x7f
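
DDL_SPRITE_SHIFT() stacks the 8-bit drain-latency fields: plane at bits 7:0, sprite 0 at 15:8, sprite 1 at 23:16, cursor from bit 24, each field being a 7-bit latency plus the precision bit. A standalone sketch composing one sprite field on that reading:

#include <assert.h>

#define DDL_CURSOR_SHIFT		24
#define DDL_SPRITE_SHIFT(sprite)	(8 + 8 * (sprite))
#define DDL_PLANE_SHIFT			0
#define DDL_PRECISION_HIGH		(1 << 7)
#define DRAIN_LATENCY_MASK		0x7f

int main(void)
{
	unsigned int ddl;

	/* sprite 0 field starts at bit 8, sprite 1 at bit 16 */
	assert(DDL_SPRITE_SHIFT(0) == 8 && DDL_SPRITE_SHIFT(1) == 16);

	/* illustrative: high-precision drain latency of 42 for sprite 0 */
	ddl = (DDL_PRECISION_HIGH | (42 & DRAIN_LATENCY_MASK)) << DDL_SPRITE_SHIFT(0);
	assert(ddl == 0xaa00);
	return 0;
}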
 
 #define CBR1_VLV			_MMIO(VLV_DISPLAY_BASE + 0x70400)
-#define  CBR_PND_DEADLINE_DISABLE	(1<<31)
-#define  CBR_PWM_CLOCK_MUX_SELECT	(1<<30)
+#define  CBR_PND_DEADLINE_DISABLE	(1 << 31)
+#define  CBR_PWM_CLOCK_MUX_SELECT	(1 << 30)
 
 #define CBR4_VLV			_MMIO(VLV_DISPLAY_BASE + 0x70450)
-#define  CBR_DPLLBMD_PIPE(pipe)		(1<<(7+(pipe)*11)) /* pipes B and C */
+#define  CBR_DPLLBMD_PIPE(pipe)		(1 << (7 + (pipe) * 11)) /* pipes B and C */
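
CBR_DPLLBMD_PIPE() strides 11 bits per pipe from bit 7, so with the driver's pipe numbering (A = 0, B = 1, C = 2) the two pipes the comment allows land on bits 18 and 29. A quick check:

#include <assert.h>

#define CBR_DPLLBMD_PIPE(pipe)	(1 << (7 + (pipe) * 11))

int main(void)
{
	assert(CBR_DPLLBMD_PIPE(1) == (1 << 18));	/* pipe B */
	assert(CBR_DPLLBMD_PIPE(2) == (1 << 29));	/* pipe C */
	return 0;
}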
 
 /* FIFO watermark sizes etc */
 #define G4X_FIFO_LINE_SIZE	64
@@ -5841,32 +5859,32 @@ enum {
 
 /* define the Watermark register on Ironlake */
 #define WM0_PIPEA_ILK		_MMIO(0x45100)
-#define  WM0_PIPE_PLANE_MASK	(0xffff<<16)
+#define  WM0_PIPE_PLANE_MASK	(0xffff << 16)
 #define  WM0_PIPE_PLANE_SHIFT	16
-#define  WM0_PIPE_SPRITE_MASK	(0xff<<8)
+#define  WM0_PIPE_SPRITE_MASK	(0xff << 8)
 #define  WM0_PIPE_SPRITE_SHIFT	8
 #define  WM0_PIPE_CURSOR_MASK	(0xff)
 
 #define WM0_PIPEB_ILK		_MMIO(0x45104)
 #define WM0_PIPEC_IVB		_MMIO(0x45200)
 #define WM1_LP_ILK		_MMIO(0x45108)
-#define  WM1_LP_SR_EN		(1<<31)
+#define  WM1_LP_SR_EN		(1 << 31)
 #define  WM1_LP_LATENCY_SHIFT	24
-#define  WM1_LP_LATENCY_MASK	(0x7f<<24)
-#define  WM1_LP_FBC_MASK	(0xf<<20)
+#define  WM1_LP_LATENCY_MASK	(0x7f << 24)
+#define  WM1_LP_FBC_MASK	(0xf << 20)
 #define  WM1_LP_FBC_SHIFT	20
 #define  WM1_LP_FBC_SHIFT_BDW	19
-#define  WM1_LP_SR_MASK		(0x7ff<<8)
+#define  WM1_LP_SR_MASK		(0x7ff << 8)
 #define  WM1_LP_SR_SHIFT	8
 #define  WM1_LP_CURSOR_MASK	(0xff)
 #define WM2_LP_ILK		_MMIO(0x4510c)
-#define  WM2_LP_EN		(1<<31)
+#define  WM2_LP_EN		(1 << 31)
 #define WM3_LP_ILK		_MMIO(0x45110)
-#define  WM3_LP_EN		(1<<31)
+#define  WM3_LP_EN		(1 << 31)
 #define WM1S_LP_ILK		_MMIO(0x45120)
 #define WM2S_LP_IVB		_MMIO(0x45124)
 #define WM3S_LP_IVB		_MMIO(0x45128)
-#define  WM1S_LP_EN		(1<<31)
+#define  WM1S_LP_EN		(1 << 31)
 
 #define HSW_WM_LP_VAL(lat, fbc, pri, cur) \
 	(WM3_LP_EN | ((lat) << WM1_LP_LATENCY_SHIFT) | \
@@ -5923,7 +5941,7 @@ enum {
 #define   CURSOR_ENABLE		0x80000000
 #define   CURSOR_GAMMA_ENABLE	0x40000000
 #define   CURSOR_STRIDE_SHIFT	28
-#define   CURSOR_STRIDE(x)	((ffs(x)-9) << CURSOR_STRIDE_SHIFT) /* 256,512,1k,2k */
+#define   CURSOR_STRIDE(x)	((ffs(x) - 9) << CURSOR_STRIDE_SHIFT) /* 256,512,1k,2k */
 #define   CURSOR_FORMAT_SHIFT	24
 #define   CURSOR_FORMAT_MASK	(0x07 << CURSOR_FORMAT_SHIFT)
 #define   CURSOR_FORMAT_2C	(0x00 << CURSOR_FORMAT_SHIFT)
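
CURSOR_STRIDE() leans on ffs(): for the power-of-two strides in its comment, ffs(x) - 9 maps 256/512/1k/2k onto field values 0..3. A standalone check with the POSIX ffs() from <strings.h>, which agrees with the kernel's ffs() for these inputs:

#include <assert.h>
#include <strings.h>	/* ffs() */

#define CURSOR_STRIDE_SHIFT	28
#define CURSOR_STRIDE(x)	((ffs(x) - 9) << CURSOR_STRIDE_SHIFT)

int main(void)
{
	assert(CURSOR_STRIDE(256)  == (0 << CURSOR_STRIDE_SHIFT));
	assert(CURSOR_STRIDE(512)  == (1 << CURSOR_STRIDE_SHIFT));
	assert(CURSOR_STRIDE(1024) == (2 << CURSOR_STRIDE_SHIFT));
	assert(CURSOR_STRIDE(2048) == (3 << CURSOR_STRIDE_SHIFT));
	return 0;
}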
@@ -5944,8 +5962,8 @@ enum {
 #define   MCURSOR_PIPE_SELECT_SHIFT	28
 #define   MCURSOR_PIPE_SELECT(pipe)	((pipe) << 28)
 #define   MCURSOR_GAMMA_ENABLE  (1 << 26)
-#define   MCURSOR_PIPE_CSC_ENABLE (1<<24)
-#define   MCURSOR_ROTATE_180	(1<<15)
+#define   MCURSOR_PIPE_CSC_ENABLE (1 << 24)
+#define   MCURSOR_ROTATE_180	(1 << 15)
 #define   MCURSOR_TRICKLE_FEED_DISABLE	(1 << 14)
 #define _CURABASE		0x70084
 #define _CURAPOS		0x70088
@@ -5983,41 +6001,41 @@ enum {
 
 /* Display A control */
 #define _DSPACNTR				0x70180
-#define   DISPLAY_PLANE_ENABLE			(1<<31)
+#define   DISPLAY_PLANE_ENABLE			(1 << 31)
 #define   DISPLAY_PLANE_DISABLE			0
-#define   DISPPLANE_GAMMA_ENABLE		(1<<30)
+#define   DISPPLANE_GAMMA_ENABLE		(1 << 30)
 #define   DISPPLANE_GAMMA_DISABLE		0
-#define   DISPPLANE_PIXFORMAT_MASK		(0xf<<26)
-#define   DISPPLANE_YUV422			(0x0<<26)
-#define   DISPPLANE_8BPP			(0x2<<26)
-#define   DISPPLANE_BGRA555			(0x3<<26)
-#define   DISPPLANE_BGRX555			(0x4<<26)
-#define   DISPPLANE_BGRX565			(0x5<<26)
-#define   DISPPLANE_BGRX888			(0x6<<26)
-#define   DISPPLANE_BGRA888			(0x7<<26)
-#define   DISPPLANE_RGBX101010			(0x8<<26)
-#define   DISPPLANE_RGBA101010			(0x9<<26)
-#define   DISPPLANE_BGRX101010			(0xa<<26)
-#define   DISPPLANE_RGBX161616			(0xc<<26)
-#define   DISPPLANE_RGBX888			(0xe<<26)
-#define   DISPPLANE_RGBA888			(0xf<<26)
-#define   DISPPLANE_STEREO_ENABLE		(1<<25)
+#define   DISPPLANE_PIXFORMAT_MASK		(0xf << 26)
+#define   DISPPLANE_YUV422			(0x0 << 26)
+#define   DISPPLANE_8BPP			(0x2 << 26)
+#define   DISPPLANE_BGRA555			(0x3 << 26)
+#define   DISPPLANE_BGRX555			(0x4 << 26)
+#define   DISPPLANE_BGRX565			(0x5 << 26)
+#define   DISPPLANE_BGRX888			(0x6 << 26)
+#define   DISPPLANE_BGRA888			(0x7 << 26)
+#define   DISPPLANE_RGBX101010			(0x8 << 26)
+#define   DISPPLANE_RGBA101010			(0x9 << 26)
+#define   DISPPLANE_BGRX101010			(0xa << 26)
+#define   DISPPLANE_RGBX161616			(0xc << 26)
+#define   DISPPLANE_RGBX888			(0xe << 26)
+#define   DISPPLANE_RGBA888			(0xf << 26)
+#define   DISPPLANE_STEREO_ENABLE		(1 << 25)
 #define   DISPPLANE_STEREO_DISABLE		0
-#define   DISPPLANE_PIPE_CSC_ENABLE		(1<<24)
+#define   DISPPLANE_PIPE_CSC_ENABLE		(1 << 24)
 #define   DISPPLANE_SEL_PIPE_SHIFT		24
-#define   DISPPLANE_SEL_PIPE_MASK		(3<<DISPPLANE_SEL_PIPE_SHIFT)
-#define   DISPPLANE_SEL_PIPE(pipe)		((pipe)<<DISPPLANE_SEL_PIPE_SHIFT)
-#define   DISPPLANE_SRC_KEY_ENABLE		(1<<22)
+#define   DISPPLANE_SEL_PIPE_MASK		(3 << DISPPLANE_SEL_PIPE_SHIFT)
+#define   DISPPLANE_SEL_PIPE(pipe)		((pipe) << DISPPLANE_SEL_PIPE_SHIFT)
+#define   DISPPLANE_SRC_KEY_ENABLE		(1 << 22)
 #define   DISPPLANE_SRC_KEY_DISABLE		0
-#define   DISPPLANE_LINE_DOUBLE			(1<<20)
+#define   DISPPLANE_LINE_DOUBLE			(1 << 20)
 #define   DISPPLANE_NO_LINE_DOUBLE		0
 #define   DISPPLANE_STEREO_POLARITY_FIRST	0
-#define   DISPPLANE_STEREO_POLARITY_SECOND	(1<<18)
-#define   DISPPLANE_ALPHA_PREMULTIPLY		(1<<16) /* CHV pipe B */
-#define   DISPPLANE_ROTATE_180			(1<<15)
-#define   DISPPLANE_TRICKLE_FEED_DISABLE	(1<<14) /* Ironlake */
-#define   DISPPLANE_TILED			(1<<10)
-#define   DISPPLANE_MIRROR			(1<<8) /* CHV pipe B */
+#define   DISPPLANE_STEREO_POLARITY_SECOND	(1 << 18)
+#define   DISPPLANE_ALPHA_PREMULTIPLY		(1 << 16) /* CHV pipe B */
+#define   DISPPLANE_ROTATE_180			(1 << 15)
+#define   DISPPLANE_TRICKLE_FEED_DISABLE	(1 << 14) /* Ironlake */
+#define   DISPPLANE_TILED			(1 << 10)
+#define   DISPPLANE_MIRROR			(1 << 8) /* CHV pipe B */
 #define _DSPAADDR				0x70184
 #define _DSPASTRIDE				0x70188
 #define _DSPAPOS				0x7018C /* reserved */
@@ -6040,15 +6058,15 @@ enum {
 
 /* CHV pipe B blender and primary plane */
 #define _CHV_BLEND_A		0x60a00
-#define   CHV_BLEND_LEGACY		(0<<30)
-#define   CHV_BLEND_ANDROID		(1<<30)
-#define   CHV_BLEND_MPO			(2<<30)
-#define   CHV_BLEND_MASK		(3<<30)
+#define   CHV_BLEND_LEGACY		(0 << 30)
+#define   CHV_BLEND_ANDROID		(1 << 30)
+#define   CHV_BLEND_MPO			(2 << 30)
+#define   CHV_BLEND_MASK		(3 << 30)
 #define _CHV_CANVAS_A		0x60a04
 #define _PRIMPOS_A		0x60a08
 #define _PRIMSIZE_A		0x60a0c
 #define _PRIMCNSTALPHA_A	0x60a10
-#define   PRIM_CONST_ALPHA_ENABLE	(1<<31)
+#define   PRIM_CONST_ALPHA_ENABLE	(1 << 31)
 
 #define CHV_BLEND(pipe)		_MMIO_TRANS2(pipe, _CHV_BLEND_A)
 #define CHV_CANVAS(pipe)	_MMIO_TRANS2(pipe, _CHV_CANVAS_A)
@@ -6058,8 +6076,8 @@ enum {
 
 /* Display/Sprite base address macros */
 #define DISP_BASEADDR_MASK	(0xfffff000)
-#define I915_LO_DISPBASE(val)	(val & ~DISP_BASEADDR_MASK)
-#define I915_HI_DISPBASE(val)	(val & DISP_BASEADDR_MASK)
+#define I915_LO_DISPBASE(val)	((val) & ~DISP_BASEADDR_MASK)
+#define I915_HI_DISPBASE(val)	((val) & DISP_BASEADDR_MASK)
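
I915_HI_DISPBASE()/I915_LO_DISPBASE() split a surface address at the 4 KiB boundary set by DISP_BASEADDR_MASK, and OR-ing the halves back together is lossless. A quick check with an arbitrary address:

#include <assert.h>

#define DISP_BASEADDR_MASK	(0xfffff000)
#define I915_LO_DISPBASE(val)	((val) & ~DISP_BASEADDR_MASK)
#define I915_HI_DISPBASE(val)	((val) & DISP_BASEADDR_MASK)

int main(void)
{
	unsigned int addr = 0x12345678;	/* arbitrary, for illustration */

	assert(I915_HI_DISPBASE(addr) == 0x12345000);	/* page-aligned base */
	assert(I915_LO_DISPBASE(addr) == 0x678);	/* offset within the page */
	assert((I915_HI_DISPBASE(addr) | I915_LO_DISPBASE(addr)) == addr);
	return 0;
}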
 
 /*
  * VBIOS flags
@@ -6089,7 +6107,7 @@ enum {
 
 /* Display B control */
 #define _DSPBCNTR		(dev_priv->info.display_mmio_offset + 0x71180)
-#define   DISPPLANE_ALPHA_TRANS_ENABLE		(1<<15)
+#define   DISPPLANE_ALPHA_TRANS_ENABLE		(1 << 15)
 #define   DISPPLANE_ALPHA_TRANS_DISABLE		0
 #define   DISPPLANE_SPRITE_ABOVE_DISPLAY	0
 #define   DISPPLANE_SPRITE_ABOVE_OVERLAY	(1)
@@ -6104,27 +6122,27 @@ enum {
 
 /* Sprite A control */
 #define _DVSACNTR		0x72180
-#define   DVS_ENABLE		(1<<31)
-#define   DVS_GAMMA_ENABLE	(1<<30)
-#define   DVS_YUV_RANGE_CORRECTION_DISABLE	(1<<27)
-#define   DVS_PIXFORMAT_MASK	(3<<25)
-#define   DVS_FORMAT_YUV422	(0<<25)
-#define   DVS_FORMAT_RGBX101010	(1<<25)
-#define   DVS_FORMAT_RGBX888	(2<<25)
-#define   DVS_FORMAT_RGBX161616	(3<<25)
-#define   DVS_PIPE_CSC_ENABLE   (1<<24)
-#define   DVS_SOURCE_KEY	(1<<22)
-#define   DVS_RGB_ORDER_XBGR	(1<<20)
-#define   DVS_YUV_FORMAT_BT709	(1<<18)
-#define   DVS_YUV_BYTE_ORDER_MASK (3<<16)
-#define   DVS_YUV_ORDER_YUYV	(0<<16)
-#define   DVS_YUV_ORDER_UYVY	(1<<16)
-#define   DVS_YUV_ORDER_YVYU	(2<<16)
-#define   DVS_YUV_ORDER_VYUY	(3<<16)
-#define   DVS_ROTATE_180	(1<<15)
-#define   DVS_DEST_KEY		(1<<2)
-#define   DVS_TRICKLE_FEED_DISABLE (1<<14)
-#define   DVS_TILED		(1<<10)
+#define   DVS_ENABLE		(1 << 31)
+#define   DVS_GAMMA_ENABLE	(1 << 30)
+#define   DVS_YUV_RANGE_CORRECTION_DISABLE	(1 << 27)
+#define   DVS_PIXFORMAT_MASK	(3 << 25)
+#define   DVS_FORMAT_YUV422	(0 << 25)
+#define   DVS_FORMAT_RGBX101010	(1 << 25)
+#define   DVS_FORMAT_RGBX888	(2 << 25)
+#define   DVS_FORMAT_RGBX161616	(3 << 25)
+#define   DVS_PIPE_CSC_ENABLE   (1 << 24)
+#define   DVS_SOURCE_KEY	(1 << 22)
+#define   DVS_RGB_ORDER_XBGR	(1 << 20)
+#define   DVS_YUV_FORMAT_BT709	(1 << 18)
+#define   DVS_YUV_BYTE_ORDER_MASK (3 << 16)
+#define   DVS_YUV_ORDER_YUYV	(0 << 16)
+#define   DVS_YUV_ORDER_UYVY	(1 << 16)
+#define   DVS_YUV_ORDER_YVYU	(2 << 16)
+#define   DVS_YUV_ORDER_VYUY	(3 << 16)
+#define   DVS_ROTATE_180	(1 << 15)
+#define   DVS_DEST_KEY		(1 << 2)
+#define   DVS_TRICKLE_FEED_DISABLE (1 << 14)
+#define   DVS_TILED		(1 << 10)
 #define _DVSALINOFF		0x72184
 #define _DVSASTRIDE		0x72188
 #define _DVSAPOS		0x7218c
@@ -6136,13 +6154,13 @@ enum {
 #define _DVSATILEOFF		0x721a4
 #define _DVSASURFLIVE		0x721ac
 #define _DVSASCALE		0x72204
-#define   DVS_SCALE_ENABLE	(1<<31)
-#define   DVS_FILTER_MASK	(3<<29)
-#define   DVS_FILTER_MEDIUM	(0<<29)
-#define   DVS_FILTER_ENHANCING	(1<<29)
-#define   DVS_FILTER_SOFTENING	(2<<29)
-#define   DVS_VERTICAL_OFFSET_HALF (1<<28) /* must be enabled below */
-#define   DVS_VERTICAL_OFFSET_ENABLE (1<<27)
+#define   DVS_SCALE_ENABLE	(1 << 31)
+#define   DVS_FILTER_MASK	(3 << 29)
+#define   DVS_FILTER_MEDIUM	(0 << 29)
+#define   DVS_FILTER_ENHANCING	(1 << 29)
+#define   DVS_FILTER_SOFTENING	(2 << 29)
+#define   DVS_VERTICAL_OFFSET_HALF (1 << 28) /* must be enabled below */
+#define   DVS_VERTICAL_OFFSET_ENABLE (1 << 27)
 #define _DVSAGAMC		0x72300
 
 #define _DVSBCNTR		0x73180
@@ -6173,31 +6191,31 @@ enum {
 #define DVSSURFLIVE(pipe) _MMIO_PIPE(pipe, _DVSASURFLIVE, _DVSBSURFLIVE)
 
 #define _SPRA_CTL		0x70280
-#define   SPRITE_ENABLE			(1<<31)
-#define   SPRITE_GAMMA_ENABLE		(1<<30)
-#define   SPRITE_YUV_RANGE_CORRECTION_DISABLE	(1<<28)
-#define   SPRITE_PIXFORMAT_MASK		(7<<25)
-#define   SPRITE_FORMAT_YUV422		(0<<25)
-#define   SPRITE_FORMAT_RGBX101010	(1<<25)
-#define   SPRITE_FORMAT_RGBX888		(2<<25)
-#define   SPRITE_FORMAT_RGBX161616	(3<<25)
-#define   SPRITE_FORMAT_YUV444		(4<<25)
-#define   SPRITE_FORMAT_XR_BGR101010	(5<<25) /* Extended range */
-#define   SPRITE_PIPE_CSC_ENABLE	(1<<24)
-#define   SPRITE_SOURCE_KEY		(1<<22)
-#define   SPRITE_RGB_ORDER_RGBX		(1<<20) /* only for 888 and 161616 */
-#define   SPRITE_YUV_TO_RGB_CSC_DISABLE	(1<<19)
-#define   SPRITE_YUV_TO_RGB_CSC_FORMAT_BT709	(1<<18) /* 0 is BT601 */
-#define   SPRITE_YUV_BYTE_ORDER_MASK	(3<<16)
-#define   SPRITE_YUV_ORDER_YUYV		(0<<16)
-#define   SPRITE_YUV_ORDER_UYVY		(1<<16)
-#define   SPRITE_YUV_ORDER_YVYU		(2<<16)
-#define   SPRITE_YUV_ORDER_VYUY		(3<<16)
-#define   SPRITE_ROTATE_180		(1<<15)
-#define   SPRITE_TRICKLE_FEED_DISABLE	(1<<14)
-#define   SPRITE_INT_GAMMA_ENABLE	(1<<13)
-#define   SPRITE_TILED			(1<<10)
-#define   SPRITE_DEST_KEY		(1<<2)
+#define   SPRITE_ENABLE			(1 << 31)
+#define   SPRITE_GAMMA_ENABLE		(1 << 30)
+#define   SPRITE_YUV_RANGE_CORRECTION_DISABLE	(1 << 28)
+#define   SPRITE_PIXFORMAT_MASK		(7 << 25)
+#define   SPRITE_FORMAT_YUV422		(0 << 25)
+#define   SPRITE_FORMAT_RGBX101010	(1 << 25)
+#define   SPRITE_FORMAT_RGBX888		(2 << 25)
+#define   SPRITE_FORMAT_RGBX161616	(3 << 25)
+#define   SPRITE_FORMAT_YUV444		(4 << 25)
+#define   SPRITE_FORMAT_XR_BGR101010	(5 << 25) /* Extended range */
+#define   SPRITE_PIPE_CSC_ENABLE	(1 << 24)
+#define   SPRITE_SOURCE_KEY		(1 << 22)
+#define   SPRITE_RGB_ORDER_RGBX		(1 << 20) /* only for 888 and 161616 */
+#define   SPRITE_YUV_TO_RGB_CSC_DISABLE	(1 << 19)
+#define   SPRITE_YUV_TO_RGB_CSC_FORMAT_BT709	(1 << 18) /* 0 is BT601 */
+#define   SPRITE_YUV_BYTE_ORDER_MASK	(3 << 16)
+#define   SPRITE_YUV_ORDER_YUYV		(0 << 16)
+#define   SPRITE_YUV_ORDER_UYVY		(1 << 16)
+#define   SPRITE_YUV_ORDER_YVYU		(2 << 16)
+#define   SPRITE_YUV_ORDER_VYUY		(3 << 16)
+#define   SPRITE_ROTATE_180		(1 << 15)
+#define   SPRITE_TRICKLE_FEED_DISABLE	(1 << 14)
+#define   SPRITE_INT_GAMMA_ENABLE	(1 << 13)
+#define   SPRITE_TILED			(1 << 10)
+#define   SPRITE_DEST_KEY		(1 << 2)
 #define _SPRA_LINOFF		0x70284
 #define _SPRA_STRIDE		0x70288
 #define _SPRA_POS		0x7028c
@@ -6210,13 +6228,13 @@ enum {
 #define _SPRA_OFFSET		0x702a4
 #define _SPRA_SURFLIVE		0x702ac
 #define _SPRA_SCALE		0x70304
-#define   SPRITE_SCALE_ENABLE	(1<<31)
-#define   SPRITE_FILTER_MASK	(3<<29)
-#define   SPRITE_FILTER_MEDIUM	(0<<29)
-#define   SPRITE_FILTER_ENHANCING	(1<<29)
-#define   SPRITE_FILTER_SOFTENING	(2<<29)
-#define   SPRITE_VERTICAL_OFFSET_HALF	(1<<28) /* must be enabled below */
-#define   SPRITE_VERTICAL_OFFSET_ENABLE	(1<<27)
+#define   SPRITE_SCALE_ENABLE	(1 << 31)
+#define   SPRITE_FILTER_MASK	(3 << 29)
+#define   SPRITE_FILTER_MEDIUM	(0 << 29)
+#define   SPRITE_FILTER_ENHANCING	(1 << 29)
+#define   SPRITE_FILTER_SOFTENING	(2 << 29)
+#define   SPRITE_VERTICAL_OFFSET_HALF	(1 << 28) /* must be enabled below */
+#define   SPRITE_VERTICAL_OFFSET_ENABLE	(1 << 27)
 #define _SPRA_GAMC		0x70400
 
 #define _SPRB_CTL		0x71280
@@ -6250,28 +6268,28 @@ enum {
 #define SPRSURFLIVE(pipe) _MMIO_PIPE(pipe, _SPRA_SURFLIVE, _SPRB_SURFLIVE)
 
 #define _SPACNTR		(VLV_DISPLAY_BASE + 0x72180)
-#define   SP_ENABLE			(1<<31)
-#define   SP_GAMMA_ENABLE		(1<<30)
-#define   SP_PIXFORMAT_MASK		(0xf<<26)
-#define   SP_FORMAT_YUV422		(0<<26)
-#define   SP_FORMAT_BGR565		(5<<26)
-#define   SP_FORMAT_BGRX8888		(6<<26)
-#define   SP_FORMAT_BGRA8888		(7<<26)
-#define   SP_FORMAT_RGBX1010102		(8<<26)
-#define   SP_FORMAT_RGBA1010102		(9<<26)
-#define   SP_FORMAT_RGBX8888		(0xe<<26)
-#define   SP_FORMAT_RGBA8888		(0xf<<26)
-#define   SP_ALPHA_PREMULTIPLY		(1<<23) /* CHV pipe B */
-#define   SP_SOURCE_KEY			(1<<22)
-#define   SP_YUV_FORMAT_BT709		(1<<18)
-#define   SP_YUV_BYTE_ORDER_MASK	(3<<16)
-#define   SP_YUV_ORDER_YUYV		(0<<16)
-#define   SP_YUV_ORDER_UYVY		(1<<16)
-#define   SP_YUV_ORDER_YVYU		(2<<16)
-#define   SP_YUV_ORDER_VYUY		(3<<16)
-#define   SP_ROTATE_180			(1<<15)
-#define   SP_TILED			(1<<10)
-#define   SP_MIRROR			(1<<8) /* CHV pipe B */
+#define   SP_ENABLE			(1 << 31)
+#define   SP_GAMMA_ENABLE		(1 << 30)
+#define   SP_PIXFORMAT_MASK		(0xf << 26)
+#define   SP_FORMAT_YUV422		(0 << 26)
+#define   SP_FORMAT_BGR565		(5 << 26)
+#define   SP_FORMAT_BGRX8888		(6 << 26)
+#define   SP_FORMAT_BGRA8888		(7 << 26)
+#define   SP_FORMAT_RGBX1010102		(8 << 26)
+#define   SP_FORMAT_RGBA1010102		(9 << 26)
+#define   SP_FORMAT_RGBX8888		(0xe << 26)
+#define   SP_FORMAT_RGBA8888		(0xf << 26)
+#define   SP_ALPHA_PREMULTIPLY		(1 << 23) /* CHV pipe B */
+#define   SP_SOURCE_KEY			(1 << 22)
+#define   SP_YUV_FORMAT_BT709		(1 << 18)
+#define   SP_YUV_BYTE_ORDER_MASK	(3 << 16)
+#define   SP_YUV_ORDER_YUYV		(0 << 16)
+#define   SP_YUV_ORDER_UYVY		(1 << 16)
+#define   SP_YUV_ORDER_YVYU		(2 << 16)
+#define   SP_YUV_ORDER_VYUY		(3 << 16)
+#define   SP_ROTATE_180			(1 << 15)
+#define   SP_TILED			(1 << 10)
+#define   SP_MIRROR			(1 << 8) /* CHV pipe B */
 #define _SPALINOFF		(VLV_DISPLAY_BASE + 0x72184)
 #define _SPASTRIDE		(VLV_DISPLAY_BASE + 0x72188)
 #define _SPAPOS			(VLV_DISPLAY_BASE + 0x7218c)
@@ -6282,7 +6300,7 @@ enum {
 #define _SPAKEYMAXVAL		(VLV_DISPLAY_BASE + 0x721a0)
 #define _SPATILEOFF		(VLV_DISPLAY_BASE + 0x721a4)
 #define _SPACONSTALPHA		(VLV_DISPLAY_BASE + 0x721a8)
-#define   SP_CONST_ALPHA_ENABLE		(1<<31)
+#define   SP_CONST_ALPHA_ENABLE		(1 << 31)
 #define _SPACLRC0		(VLV_DISPLAY_BASE + 0x721d0)
 #define   SP_CONTRAST(x)		((x) << 18) /* u3.6 */
 #define   SP_BRIGHTNESS(x)		((x) & 0xff) /* s8 */
@@ -6374,40 +6392,40 @@ enum {
  * correctly map to the same formats in ICL, as long as bit 23 is set to 0
  */
 #define   PLANE_CTL_FORMAT_MASK			(0xf << 24)
-#define   PLANE_CTL_FORMAT_YUV422		(  0 << 24)
-#define   PLANE_CTL_FORMAT_NV12			(  1 << 24)
-#define   PLANE_CTL_FORMAT_XRGB_2101010		(  2 << 24)
-#define   PLANE_CTL_FORMAT_XRGB_8888		(  4 << 24)
-#define   PLANE_CTL_FORMAT_XRGB_16161616F	(  6 << 24)
-#define   PLANE_CTL_FORMAT_AYUV			(  8 << 24)
-#define   PLANE_CTL_FORMAT_INDEXED		( 12 << 24)
-#define   PLANE_CTL_FORMAT_RGB_565		( 14 << 24)
+#define   PLANE_CTL_FORMAT_YUV422		(0 << 24)
+#define   PLANE_CTL_FORMAT_NV12			(1 << 24)
+#define   PLANE_CTL_FORMAT_XRGB_2101010		(2 << 24)
+#define   PLANE_CTL_FORMAT_XRGB_8888		(4 << 24)
+#define   PLANE_CTL_FORMAT_XRGB_16161616F	(6 << 24)
+#define   PLANE_CTL_FORMAT_AYUV			(8 << 24)
+#define   PLANE_CTL_FORMAT_INDEXED		(12 << 24)
+#define   PLANE_CTL_FORMAT_RGB_565		(14 << 24)
 #define   ICL_PLANE_CTL_FORMAT_MASK		(0x1f << 23)
 #define   PLANE_CTL_PIPE_CSC_ENABLE		(1 << 23) /* Pre-GLK */
 #define   PLANE_CTL_KEY_ENABLE_MASK		(0x3 << 21)
-#define   PLANE_CTL_KEY_ENABLE_SOURCE		(  1 << 21)
-#define   PLANE_CTL_KEY_ENABLE_DESTINATION	(  2 << 21)
+#define   PLANE_CTL_KEY_ENABLE_SOURCE		(1 << 21)
+#define   PLANE_CTL_KEY_ENABLE_DESTINATION	(2 << 21)
 #define   PLANE_CTL_ORDER_BGRX			(0 << 20)
 #define   PLANE_CTL_ORDER_RGBX			(1 << 20)
 #define   PLANE_CTL_YUV_TO_RGB_CSC_FORMAT_BT709	(1 << 18)
 #define   PLANE_CTL_YUV422_ORDER_MASK		(0x3 << 16)
-#define   PLANE_CTL_YUV422_YUYV			(  0 << 16)
-#define   PLANE_CTL_YUV422_UYVY			(  1 << 16)
-#define   PLANE_CTL_YUV422_YVYU			(  2 << 16)
-#define   PLANE_CTL_YUV422_VYUY			(  3 << 16)
+#define   PLANE_CTL_YUV422_YUYV			(0 << 16)
+#define   PLANE_CTL_YUV422_UYVY			(1 << 16)
+#define   PLANE_CTL_YUV422_YVYU			(2 << 16)
+#define   PLANE_CTL_YUV422_VYUY			(3 << 16)
 #define   PLANE_CTL_DECOMPRESSION_ENABLE	(1 << 15)
 #define   PLANE_CTL_TRICKLE_FEED_DISABLE	(1 << 14)
 #define   PLANE_CTL_PLANE_GAMMA_DISABLE		(1 << 13) /* Pre-GLK */
 #define   PLANE_CTL_TILED_MASK			(0x7 << 10)
-#define   PLANE_CTL_TILED_LINEAR		(  0 << 10)
-#define   PLANE_CTL_TILED_X			(  1 << 10)
-#define   PLANE_CTL_TILED_Y			(  4 << 10)
-#define   PLANE_CTL_TILED_YF			(  5 << 10)
-#define   PLANE_CTL_FLIP_HORIZONTAL		(  1 << 8)
+#define   PLANE_CTL_TILED_LINEAR		(0 << 10)
+#define   PLANE_CTL_TILED_X			(1 << 10)
+#define   PLANE_CTL_TILED_Y			(4 << 10)
+#define   PLANE_CTL_TILED_YF			(5 << 10)
+#define   PLANE_CTL_FLIP_HORIZONTAL		(1 << 8)
 #define   PLANE_CTL_ALPHA_MASK			(0x3 << 4) /* Pre-GLK */
-#define   PLANE_CTL_ALPHA_DISABLE		(  0 << 4)
-#define   PLANE_CTL_ALPHA_SW_PREMULTIPLY	(  2 << 4)
-#define   PLANE_CTL_ALPHA_HW_PREMULTIPLY	(  3 << 4)
+#define   PLANE_CTL_ALPHA_DISABLE		(0 << 4)
+#define   PLANE_CTL_ALPHA_SW_PREMULTIPLY	(2 << 4)
+#define   PLANE_CTL_ALPHA_HW_PREMULTIPLY	(3 << 4)
 #define   PLANE_CTL_ROTATE_MASK			0x3
 #define   PLANE_CTL_ROTATE_0			0x0
 #define   PLANE_CTL_ROTATE_90			0x1
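The two format masks near the top of this hunk encode what the comment there says: ICL widens the format field to five bits by absorbing bit 23, which pre-GLK hardware used for pipe CSC enable. A self-checking sketch of that relationship:

    #include <assert.h>

    #define PLANE_CTL_FORMAT_MASK      (0xfu << 24)
    #define ICL_PLANE_CTL_FORMAT_MASK  (0x1fu << 23)
    #define PLANE_CTL_PIPE_CSC_ENABLE  (1u << 23) /* Pre-GLK */

    int main(void)
    {
        /* The ICL mask is exactly the old mask plus the old CSC bit, so
         * pre-ICL format values keep their meaning on ICL as long as
         * bit 23 stays zero. */
        assert(ICL_PLANE_CTL_FORMAT_MASK ==
               (PLANE_CTL_FORMAT_MASK | PLANE_CTL_PIPE_CSC_ENABLE));
        return 0;
    }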
@@ -6635,7 +6653,7 @@ enum {
 # define VFMUNIT_CLOCK_GATE_DISABLE		(1 << 11)
 
 #define FDI_PLL_FREQ_CTL        _MMIO(0x46030)
-#define  FDI_PLL_FREQ_CHANGE_REQUEST    (1<<24)
+#define  FDI_PLL_FREQ_CHANGE_REQUEST    (1 << 24)
 #define  FDI_PLL_FREQ_LOCK_LIMIT_MASK   0xfff00
 #define  FDI_PLL_FREQ_DISABLE_COUNT_LIMIT_MASK  0xff
 
@@ -6684,14 +6702,14 @@ enum {
 /* IVB+ has 3 fitters, 0 is 7x5 capable, the other two only 3x3 */
 #define _PFA_CTL_1               0x68080
 #define _PFB_CTL_1               0x68880
-#define  PF_ENABLE              (1<<31)
-#define  PF_PIPE_SEL_MASK_IVB	(3<<29)
-#define  PF_PIPE_SEL_IVB(pipe)	((pipe)<<29)
-#define  PF_FILTER_MASK		(3<<23)
-#define  PF_FILTER_PROGRAMMED	(0<<23)
-#define  PF_FILTER_MED_3x3	(1<<23)
-#define  PF_FILTER_EDGE_ENHANCE	(2<<23)
-#define  PF_FILTER_EDGE_SOFTEN	(3<<23)
+#define  PF_ENABLE              (1 << 31)
+#define  PF_PIPE_SEL_MASK_IVB	(3 << 29)
+#define  PF_PIPE_SEL_IVB(pipe)	((pipe) << 29)
+#define  PF_FILTER_MASK		(3 << 23)
+#define  PF_FILTER_PROGRAMMED	(0 << 23)
+#define  PF_FILTER_MED_3x3	(1 << 23)
+#define  PF_FILTER_EDGE_ENHANCE	(2 << 23)
+#define  PF_FILTER_EDGE_SOFTEN	(3 << 23)
 #define _PFA_WIN_SZ		0x68074
 #define _PFB_WIN_SZ		0x68874
 #define _PFA_WIN_POS		0x68070
@@ -6709,7 +6727,7 @@ enum {
 
 #define _PSA_CTL		0x68180
 #define _PSB_CTL		0x68980
-#define PS_ENABLE		(1<<31)
+#define PS_ENABLE		(1 << 31)
 #define _PSA_WIN_SZ		0x68174
 #define _PSB_WIN_SZ		0x68974
 #define _PSA_WIN_POS		0x68170
@@ -6811,7 +6829,7 @@ enum {
 #define _PS_ECC_STAT_2B     0x68AD0
 #define _PS_ECC_STAT_1C     0x691D0
 
-#define _ID(id, a, b) ((a) + (id)*((b)-(a)))
+#define _ID(id, a, b) ((a) + (id) * ((b) - (a)))
 #define SKL_PS_CTRL(pipe, id) _MMIO_PIPE(pipe,        \
 			_ID(id, _PS_1A_CTRL, _PS_2A_CTRL),       \
 			_ID(id, _PS_1B_CTRL, _PS_2B_CTRL))
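SKL_PS_CTRL pairs _MMIO_PIPE with _ID so one macro reaches either scaler on either pipe: _ID(id, a, b) linearly interpolates between the two instance addresses. A tiny self-check (the address pair is illustrative — two instances 0x100 apart):

    #include <assert.h>

    #define _ID(id, a, b) ((a) + (id) * ((b) - (a)))

    int main(void)
    {
        enum { SCALER_0 = 0x68180, SCALER_1 = 0x68280 };

        /* id = 0 picks the first instance, id = 1 the second. */
        assert(_ID(0, SCALER_0, SCALER_1) == SCALER_0);
        assert(_ID(1, SCALER_0, SCALER_1) == SCALER_1);
        return 0;
    }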
@@ -6892,37 +6910,37 @@ enum {
 #define DE_PIPEB_CRC_DONE	(1 << 10)
 #define DE_PIPEB_FIFO_UNDERRUN  (1 << 8)
 #define DE_PIPEA_VBLANK         (1 << 7)
-#define DE_PIPE_VBLANK(pipe)    (1 << (7 + 8*(pipe)))
+#define DE_PIPE_VBLANK(pipe)    (1 << (7 + 8 * (pipe)))
 #define DE_PIPEA_EVEN_FIELD     (1 << 6)
 #define DE_PIPEA_ODD_FIELD      (1 << 5)
 #define DE_PIPEA_LINE_COMPARE   (1 << 4)
 #define DE_PIPEA_VSYNC          (1 << 3)
 #define DE_PIPEA_CRC_DONE	(1 << 2)
-#define DE_PIPE_CRC_DONE(pipe)	(1 << (2 + 8*(pipe)))
+#define DE_PIPE_CRC_DONE(pipe)	(1 << (2 + 8 * (pipe)))
 #define DE_PIPEA_FIFO_UNDERRUN  (1 << 0)
-#define DE_PIPE_FIFO_UNDERRUN(pipe)  (1 << (8*(pipe)))
+#define DE_PIPE_FIFO_UNDERRUN(pipe)  (1 << (8 * (pipe)))
 
 /* More Ivybridge lolz */
-#define DE_ERR_INT_IVB			(1<<30)
-#define DE_GSE_IVB			(1<<29)
-#define DE_PCH_EVENT_IVB		(1<<28)
-#define DE_DP_A_HOTPLUG_IVB		(1<<27)
-#define DE_AUX_CHANNEL_A_IVB		(1<<26)
-#define DE_EDP_PSR_INT_HSW		(1<<19)
-#define DE_SPRITEC_FLIP_DONE_IVB	(1<<14)
-#define DE_PLANEC_FLIP_DONE_IVB		(1<<13)
-#define DE_PIPEC_VBLANK_IVB		(1<<10)
-#define DE_SPRITEB_FLIP_DONE_IVB	(1<<9)
-#define DE_PLANEB_FLIP_DONE_IVB		(1<<8)
-#define DE_PIPEB_VBLANK_IVB		(1<<5)
-#define DE_SPRITEA_FLIP_DONE_IVB	(1<<4)
-#define DE_PLANEA_FLIP_DONE_IVB		(1<<3)
-#define DE_PLANE_FLIP_DONE_IVB(plane)	(1<< (3 + 5*(plane)))
-#define DE_PIPEA_VBLANK_IVB		(1<<0)
+#define DE_ERR_INT_IVB			(1 << 30)
+#define DE_GSE_IVB			(1 << 29)
+#define DE_PCH_EVENT_IVB		(1 << 28)
+#define DE_DP_A_HOTPLUG_IVB		(1 << 27)
+#define DE_AUX_CHANNEL_A_IVB		(1 << 26)
+#define DE_EDP_PSR_INT_HSW		(1 << 19)
+#define DE_SPRITEC_FLIP_DONE_IVB	(1 << 14)
+#define DE_PLANEC_FLIP_DONE_IVB		(1 << 13)
+#define DE_PIPEC_VBLANK_IVB		(1 << 10)
+#define DE_SPRITEB_FLIP_DONE_IVB	(1 << 9)
+#define DE_PLANEB_FLIP_DONE_IVB		(1 << 8)
+#define DE_PIPEB_VBLANK_IVB		(1 << 5)
+#define DE_SPRITEA_FLIP_DONE_IVB	(1 << 4)
+#define DE_PLANEA_FLIP_DONE_IVB		(1 << 3)
+#define DE_PLANE_FLIP_DONE_IVB(plane)	(1 << (3 + 5 * (plane)))
+#define DE_PIPEA_VBLANK_IVB		(1 << 0)
 #define DE_PIPE_VBLANK_IVB(pipe)	(1 << ((pipe) * 5))
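The per-pipe DE_* helpers replace three spelled-out A/B/C variants with a stride of 8 bits per pipe. A sketch verifying the stride against the fixed pipe bits defined above:

    #include <assert.h>

    #define DE_PIPEB_CRC_DONE       (1u << 10)
    #define DE_PIPEA_VBLANK         (1u << 7)
    #define DE_PIPEA_CRC_DONE       (1u << 2)
    #define DE_PIPE_VBLANK(pipe)    (1u << (7 + 8 * (pipe)))
    #define DE_PIPE_CRC_DONE(pipe)  (1u << (2 + 8 * (pipe)))

    int main(void)
    {
        /* Pipe 0 lines up with the spelled-out pipe A bits... */
        assert(DE_PIPE_VBLANK(0) == DE_PIPEA_VBLANK);
        assert(DE_PIPE_CRC_DONE(0) == DE_PIPEA_CRC_DONE);
        /* ...and pipe 1 lands 8 bits higher, on the pipe B bit. */
        assert(DE_PIPE_CRC_DONE(1) == DE_PIPEB_CRC_DONE);
        return 0;
    }

The IVB variant at the end of the block uses the same idea with a 5-bit stride.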
 
 #define VLV_MASTER_IER			_MMIO(0x4400c) /* Gunit master IER */
-#define   MASTER_INTERRUPT_ENABLE	(1<<31)
+#define   MASTER_INTERRUPT_ENABLE	(1 << 31)
 
 #define DEISR   _MMIO(0x44000)
 #define DEIMR   _MMIO(0x44004)
@@ -6935,37 +6953,37 @@ enum {
 #define GTIER   _MMIO(0x4401c)
 
 #define GEN8_MASTER_IRQ			_MMIO(0x44200)
-#define  GEN8_MASTER_IRQ_CONTROL	(1<<31)
-#define  GEN8_PCU_IRQ			(1<<30)
-#define  GEN8_DE_PCH_IRQ		(1<<23)
-#define  GEN8_DE_MISC_IRQ		(1<<22)
-#define  GEN8_DE_PORT_IRQ		(1<<20)
-#define  GEN8_DE_PIPE_C_IRQ		(1<<18)
-#define  GEN8_DE_PIPE_B_IRQ		(1<<17)
-#define  GEN8_DE_PIPE_A_IRQ		(1<<16)
-#define  GEN8_DE_PIPE_IRQ(pipe)		(1<<(16+(pipe)))
-#define  GEN8_GT_VECS_IRQ		(1<<6)
-#define  GEN8_GT_GUC_IRQ		(1<<5)
-#define  GEN8_GT_PM_IRQ			(1<<4)
-#define  GEN8_GT_VCS2_IRQ		(1<<3)
-#define  GEN8_GT_VCS1_IRQ		(1<<2)
-#define  GEN8_GT_BCS_IRQ		(1<<1)
-#define  GEN8_GT_RCS_IRQ		(1<<0)
+#define  GEN8_MASTER_IRQ_CONTROL	(1 << 31)
+#define  GEN8_PCU_IRQ			(1 << 30)
+#define  GEN8_DE_PCH_IRQ		(1 << 23)
+#define  GEN8_DE_MISC_IRQ		(1 << 22)
+#define  GEN8_DE_PORT_IRQ		(1 << 20)
+#define  GEN8_DE_PIPE_C_IRQ		(1 << 18)
+#define  GEN8_DE_PIPE_B_IRQ		(1 << 17)
+#define  GEN8_DE_PIPE_A_IRQ		(1 << 16)
+#define  GEN8_DE_PIPE_IRQ(pipe)		(1 << (16 + (pipe)))
+#define  GEN8_GT_VECS_IRQ		(1 << 6)
+#define  GEN8_GT_GUC_IRQ		(1 << 5)
+#define  GEN8_GT_PM_IRQ			(1 << 4)
+#define  GEN8_GT_VCS2_IRQ		(1 << 3)
+#define  GEN8_GT_VCS1_IRQ		(1 << 2)
+#define  GEN8_GT_BCS_IRQ		(1 << 1)
+#define  GEN8_GT_RCS_IRQ		(1 << 0)
 
 #define GEN8_GT_ISR(which) _MMIO(0x44300 + (0x10 * (which)))
 #define GEN8_GT_IMR(which) _MMIO(0x44304 + (0x10 * (which)))
 #define GEN8_GT_IIR(which) _MMIO(0x44308 + (0x10 * (which)))
 #define GEN8_GT_IER(which) _MMIO(0x4430c + (0x10 * (which)))
 
-#define GEN9_GUC_TO_HOST_INT_EVENT	(1<<31)
-#define GEN9_GUC_EXEC_ERROR_EVENT	(1<<30)
-#define GEN9_GUC_DISPLAY_EVENT		(1<<29)
-#define GEN9_GUC_SEMA_SIGNAL_EVENT	(1<<28)
-#define GEN9_GUC_IOMMU_MSG_EVENT	(1<<27)
-#define GEN9_GUC_DB_RING_EVENT		(1<<26)
-#define GEN9_GUC_DMA_DONE_EVENT		(1<<25)
-#define GEN9_GUC_FATAL_ERROR_EVENT	(1<<24)
-#define GEN9_GUC_NOTIFICATION_EVENT	(1<<23)
+#define GEN9_GUC_TO_HOST_INT_EVENT	(1 << 31)
+#define GEN9_GUC_EXEC_ERROR_EVENT	(1 << 30)
+#define GEN9_GUC_DISPLAY_EVENT		(1 << 29)
+#define GEN9_GUC_SEMA_SIGNAL_EVENT	(1 << 28)
+#define GEN9_GUC_IOMMU_MSG_EVENT	(1 << 27)
+#define GEN9_GUC_DB_RING_EVENT		(1 << 26)
+#define GEN9_GUC_DMA_DONE_EVENT		(1 << 25)
+#define GEN9_GUC_FATAL_ERROR_EVENT	(1 << 24)
+#define GEN9_GUC_NOTIFICATION_EVENT	(1 << 23)
 
 #define GEN8_RCS_IRQ_SHIFT 0
 #define GEN8_BCS_IRQ_SHIFT 16
@@ -7014,6 +7032,7 @@ enum {
 #define GEN8_DE_PORT_IMR _MMIO(0x44444)
 #define GEN8_DE_PORT_IIR _MMIO(0x44448)
 #define GEN8_DE_PORT_IER _MMIO(0x4444c)
+#define  ICL_AUX_CHANNEL_E		(1 << 29)
 #define  CNL_AUX_CHANNEL_F		(1 << 28)
 #define  GEN9_AUX_CHANNEL_D		(1 << 27)
 #define  GEN9_AUX_CHANNEL_C		(1 << 26)
@@ -7040,9 +7059,16 @@ enum {
 #define GEN8_PCU_IIR _MMIO(0x444e8)
 #define GEN8_PCU_IER _MMIO(0x444ec)
 
+#define GEN11_GU_MISC_ISR	_MMIO(0x444f0)
+#define GEN11_GU_MISC_IMR	_MMIO(0x444f4)
+#define GEN11_GU_MISC_IIR	_MMIO(0x444f8)
+#define GEN11_GU_MISC_IER	_MMIO(0x444fc)
+#define  GEN11_GU_MISC_GSE	(1 << 27)
+
 #define GEN11_GFX_MSTR_IRQ		_MMIO(0x190010)
 #define  GEN11_MASTER_IRQ		(1 << 31)
 #define  GEN11_PCU_IRQ			(1 << 30)
+#define  GEN11_GU_MISC_IRQ		(1 << 29)
 #define  GEN11_DISPLAY_IRQ		(1 << 16)
 #define  GEN11_GT_DW_IRQ(x)		(1 << (x))
 #define  GEN11_GT_DW1_IRQ		(1 << 1)
@@ -7053,11 +7079,40 @@ enum {
 #define  GEN11_AUDIO_CODEC_IRQ		(1 << 24)
 #define  GEN11_DE_PCH_IRQ		(1 << 23)
 #define  GEN11_DE_MISC_IRQ		(1 << 22)
+#define  GEN11_DE_HPD_IRQ		(1 << 21)
 #define  GEN11_DE_PORT_IRQ		(1 << 20)
 #define  GEN11_DE_PIPE_C		(1 << 18)
 #define  GEN11_DE_PIPE_B		(1 << 17)
 #define  GEN11_DE_PIPE_A		(1 << 16)
 
+#define GEN11_DE_HPD_ISR		_MMIO(0x44470)
+#define GEN11_DE_HPD_IMR		_MMIO(0x44474)
+#define GEN11_DE_HPD_IIR		_MMIO(0x44478)
+#define GEN11_DE_HPD_IER		_MMIO(0x4447c)
+#define  GEN11_TC4_HOTPLUG			(1 << 19)
+#define  GEN11_TC3_HOTPLUG			(1 << 18)
+#define  GEN11_TC2_HOTPLUG			(1 << 17)
+#define  GEN11_TC1_HOTPLUG			(1 << 16)
+#define  GEN11_DE_TC_HOTPLUG_MASK		(GEN11_TC4_HOTPLUG | \
+						 GEN11_TC3_HOTPLUG | \
+						 GEN11_TC2_HOTPLUG | \
+						 GEN11_TC1_HOTPLUG)
+#define  GEN11_TBT4_HOTPLUG			(1 << 3)
+#define  GEN11_TBT3_HOTPLUG			(1 << 2)
+#define  GEN11_TBT2_HOTPLUG			(1 << 1)
+#define  GEN11_TBT1_HOTPLUG			(1 << 0)
+#define  GEN11_DE_TBT_HOTPLUG_MASK		(GEN11_TBT4_HOTPLUG | \
+						 GEN11_TBT3_HOTPLUG | \
+						 GEN11_TBT2_HOTPLUG | \
+						 GEN11_TBT1_HOTPLUG)
+
+#define GEN11_TBT_HOTPLUG_CTL				_MMIO(0x44030)
+#define GEN11_TC_HOTPLUG_CTL				_MMIO(0x44038)
+#define  GEN11_HOTPLUG_CTL_ENABLE(tc_port)		(8 << (tc_port) * 4)
+#define  GEN11_HOTPLUG_CTL_LONG_DETECT(tc_port)		(2 << (tc_port) * 4)
+#define  GEN11_HOTPLUG_CTL_SHORT_DETECT(tc_port)	(1 << (tc_port) * 4)
+#define  GEN11_HOTPLUG_CTL_NO_DETECT(tc_port)		(0 << (tc_port) * 4)
+
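Each Type-C port owns one 4-bit nibble of GEN11_TC_HOTPLUG_CTL: bit 3 of the nibble enables detection and bits 1:0 latch long/short pulses. The shift count needs no inner parentheses because * binds tighter than <<, so 8 << (tc_port) * 4 already parses as 8 << ((tc_port) * 4). A sketch of the layout:

    #include <assert.h>

    #define GEN11_HOTPLUG_CTL_ENABLE(tc_port)        (8u << (tc_port) * 4)
    #define GEN11_HOTPLUG_CTL_LONG_DETECT(tc_port)   (2u << (tc_port) * 4)
    #define GEN11_HOTPLUG_CTL_SHORT_DETECT(tc_port)  (1u << (tc_port) * 4)

    int main(void)
    {
        /* Port 0 occupies the low nibble, port 1 the next one up. */
        assert(GEN11_HOTPLUG_CTL_ENABLE(0) == 0x8);
        assert(GEN11_HOTPLUG_CTL_ENABLE(1) == 0x80);
        assert(GEN11_HOTPLUG_CTL_SHORT_DETECT(1) == 0x10);

        /* Enabling all four ports is an OR across the nibbles. */
        assert((GEN11_HOTPLUG_CTL_ENABLE(0) |
                GEN11_HOTPLUG_CTL_ENABLE(1) |
                GEN11_HOTPLUG_CTL_ENABLE(2) |
                GEN11_HOTPLUG_CTL_ENABLE(3)) == 0x8888);
        return 0;
    }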
 #define GEN11_GT_INTR_DW0		_MMIO(0x190018)
 #define  GEN11_CSME			(31)
 #define  GEN11_GUNIT			(28)
@@ -7072,7 +7127,7 @@ enum {
 #define  GEN11_VECS(x)			(31 - (x))
 #define  GEN11_VCS(x)			(x)
 
-#define GEN11_GT_INTR_DW(x)		_MMIO(0x190018 + (x * 4))
+#define GEN11_GT_INTR_DW(x)		_MMIO(0x190018 + ((x) * 4))
 
 #define GEN11_INTR_IDENTITY_REG0	_MMIO(0x190060)
 #define GEN11_INTR_IDENTITY_REG1	_MMIO(0x190064)
@@ -7081,12 +7136,12 @@ enum {
 #define  GEN11_INTR_ENGINE_INSTANCE(x)	(((x) & GENMASK(25, 20)) >> 20)
 #define  GEN11_INTR_ENGINE_INTR(x)	((x) & 0xffff)
 
-#define GEN11_INTR_IDENTITY_REG(x)	_MMIO(0x190060 + (x * 4))
+#define GEN11_INTR_IDENTITY_REG(x)	_MMIO(0x190060 + ((x) * 4))
 
 #define GEN11_IIR_REG0_SELECTOR		_MMIO(0x190070)
 #define GEN11_IIR_REG1_SELECTOR		_MMIO(0x190074)
 
-#define GEN11_IIR_REG_SELECTOR(x)	_MMIO(0x190070 + (x * 4))
+#define GEN11_IIR_REG_SELECTOR(x)	_MMIO(0x190070 + ((x) * 4))
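The parentheses added around (x) in these three GEN11 selector macros matter as soon as a caller passes an expression rather than a plain variable. A minimal before/after sketch with the _MMIO wrapper dropped:

    #include <assert.h>

    #define REG_SELECTOR(x)      (0x190070 + ((x) * 4))  /* as fixed here */
    #define REG_SELECTOR_OLD(x)  (0x190070 + (x * 4))    /* previous body */

    int main(void)
    {
        int bank = 2;

        /* The fixed body multiplies the whole argument by 4... */
        assert(REG_SELECTOR(bank + 1) == 0x190070 + 12);
        /* ...while the old one expands to 0x190070 + (bank + 1 * 4)
         * and silently lands on the wrong register. */
        assert(REG_SELECTOR_OLD(bank + 1) == 0x190070 + 6);
        return 0;
    }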
 
 #define GEN11_RENDER_COPY_INTR_ENABLE	_MMIO(0x190030)
 #define GEN11_VCS_VECS_INTR_ENABLE	_MMIO(0x190034)
@@ -7108,8 +7163,8 @@ enum {
 #define ILK_DISPLAY_CHICKEN2	_MMIO(0x42004)
 /* Required on all Ironlake and Sandybridge according to the B-Spec. */
 #define  ILK_ELPIN_409_SELECT	(1 << 25)
-#define  ILK_DPARB_GATE	(1<<22)
-#define  ILK_VSDPFD_FULL	(1<<21)
+#define  ILK_DPARB_GATE	(1 << 22)
+#define  ILK_VSDPFD_FULL	(1 << 21)
 #define FUSE_STRAP			_MMIO(0x42014)
 #define  ILK_INTERNAL_GRAPHICS_DISABLE	(1 << 31)
 #define  ILK_INTERNAL_DISPLAY_DISABLE	(1 << 30)
@@ -7159,31 +7214,31 @@ enum {
 #define CHICKEN_TRANS_A         0x420c0
 #define CHICKEN_TRANS_B         0x420c4
 #define CHICKEN_TRANS(trans) _MMIO_TRANS(trans, CHICKEN_TRANS_A, CHICKEN_TRANS_B)
-#define  VSC_DATA_SEL_SOFTWARE_CONTROL	(1<<25) /* GLK and CNL+ */
-#define  DDI_TRAINING_OVERRIDE_ENABLE	(1<<19)
-#define  DDI_TRAINING_OVERRIDE_VALUE	(1<<18)
-#define  DDIE_TRAINING_OVERRIDE_ENABLE	(1<<17) /* CHICKEN_TRANS_A only */
-#define  DDIE_TRAINING_OVERRIDE_VALUE	(1<<16) /* CHICKEN_TRANS_A only */
-#define  PSR2_ADD_VERTICAL_LINE_COUNT   (1<<15)
-#define  PSR2_VSC_ENABLE_PROG_HEADER    (1<<12)
+#define  VSC_DATA_SEL_SOFTWARE_CONTROL	(1 << 25) /* GLK and CNL+ */
+#define  DDI_TRAINING_OVERRIDE_ENABLE	(1 << 19)
+#define  DDI_TRAINING_OVERRIDE_VALUE	(1 << 18)
+#define  DDIE_TRAINING_OVERRIDE_ENABLE	(1 << 17) /* CHICKEN_TRANS_A only */
+#define  DDIE_TRAINING_OVERRIDE_VALUE	(1 << 16) /* CHICKEN_TRANS_A only */
+#define  PSR2_ADD_VERTICAL_LINE_COUNT   (1 << 15)
+#define  PSR2_VSC_ENABLE_PROG_HEADER    (1 << 12)
 
 #define DISP_ARB_CTL	_MMIO(0x45000)
-#define  DISP_FBC_MEMORY_WAKE		(1<<31)
-#define  DISP_TILE_SURFACE_SWIZZLING	(1<<13)
-#define  DISP_FBC_WM_DIS		(1<<15)
+#define  DISP_FBC_MEMORY_WAKE		(1 << 31)
+#define  DISP_TILE_SURFACE_SWIZZLING	(1 << 13)
+#define  DISP_FBC_WM_DIS		(1 << 15)
 #define DISP_ARB_CTL2	_MMIO(0x45004)
-#define  DISP_DATA_PARTITION_5_6	(1<<6)
-#define  DISP_IPC_ENABLE		(1<<3)
+#define  DISP_DATA_PARTITION_5_6	(1 << 6)
+#define  DISP_IPC_ENABLE		(1 << 3)
 #define DBUF_CTL	_MMIO(0x45008)
 #define DBUF_CTL_S1	_MMIO(0x45008)
 #define DBUF_CTL_S2	_MMIO(0x44FE8)
-#define  DBUF_POWER_REQUEST		(1<<31)
-#define  DBUF_POWER_STATE		(1<<30)
+#define  DBUF_POWER_REQUEST		(1 << 31)
+#define  DBUF_POWER_STATE		(1 << 30)
 #define GEN7_MSG_CTL	_MMIO(0x45010)
-#define  WAIT_FOR_PCH_RESET_ACK		(1<<1)
-#define  WAIT_FOR_PCH_FLR_ACK		(1<<0)
+#define  WAIT_FOR_PCH_RESET_ACK		(1 << 1)
+#define  WAIT_FOR_PCH_FLR_ACK		(1 << 0)
 #define HSW_NDE_RSTWRN_OPT	_MMIO(0x46408)
-#define  RESET_PCH_HANDSHAKE_ENABLE	(1<<4)
+#define  RESET_PCH_HANDSHAKE_ENABLE	(1 << 4)
 
 #define GEN8_CHICKEN_DCPR_1		_MMIO(0x46430)
 #define   SKL_SELECT_ALTERNATE_DC_EXIT	(1 << 30)
@@ -7208,16 +7263,16 @@ enum {
 #define ICL_DSSM_CDCLK_PLL_REFCLK_38_4MHz	(2 << 29)
 
 #define GEN7_FF_SLICE_CS_CHICKEN1	_MMIO(0x20e0)
-#define   GEN9_FFSC_PERCTX_PREEMPT_CTRL	(1<<14)
+#define   GEN9_FFSC_PERCTX_PREEMPT_CTRL	(1 << 14)
 
 #define FF_SLICE_CS_CHICKEN2			_MMIO(0x20e4)
-#define  GEN9_TSG_BARRIER_ACK_DISABLE		(1<<8)
-#define  GEN9_POOLED_EU_LOAD_BALANCING_FIX_DISABLE  (1<<10)
+#define  GEN9_TSG_BARRIER_ACK_DISABLE		(1 << 8)
+#define  GEN9_POOLED_EU_LOAD_BALANCING_FIX_DISABLE  (1 << 10)
 
 #define GEN9_CS_DEBUG_MODE1		_MMIO(0x20ec)
 #define GEN9_CTX_PREEMPT_REG		_MMIO(0x2248)
 #define GEN8_CS_CHICKEN1		_MMIO(0x2580)
-#define GEN9_PREEMPT_3D_OBJECT_LEVEL		(1<<0)
+#define GEN9_PREEMPT_3D_OBJECT_LEVEL		(1 << 0)
 #define GEN9_PREEMPT_GPGPU_LEVEL(hi, lo)	(((hi) << 2) | ((lo) << 1))
 #define GEN9_PREEMPT_GPGPU_MID_THREAD_LEVEL	GEN9_PREEMPT_GPGPU_LEVEL(0, 0)
 #define GEN9_PREEMPT_GPGPU_THREAD_GROUP_LEVEL	GEN9_PREEMPT_GPGPU_LEVEL(0, 1)
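GEN9_PREEMPT_GPGPU_LEVEL packs the preemption granularity into bits 2:1, and the named levels are just fixed (hi, lo) pairs. A sketch of the encoding; the (1, 0) combination checked last is the remaining one, not part of this hunk:

    #include <assert.h>

    #define GEN9_PREEMPT_GPGPU_LEVEL(hi, lo)       (((hi) << 2) | ((lo) << 1))
    #define GEN9_PREEMPT_GPGPU_MID_THREAD_LEVEL    GEN9_PREEMPT_GPGPU_LEVEL(0, 0)
    #define GEN9_PREEMPT_GPGPU_THREAD_GROUP_LEVEL  GEN9_PREEMPT_GPGPU_LEVEL(0, 1)

    int main(void)
    {
        assert(GEN9_PREEMPT_GPGPU_MID_THREAD_LEVEL == 0x0);
        assert(GEN9_PREEMPT_GPGPU_THREAD_GROUP_LEVEL == 0x2);
        assert(GEN9_PREEMPT_GPGPU_LEVEL(1, 0) == 0x4);
        return 0;
    }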
@@ -7239,11 +7294,11 @@ enum {
   #define GEN11_BLEND_EMB_FIX_DISABLE_IN_RCC	(1 << 11)
 
 #define HIZ_CHICKEN					_MMIO(0x7018)
-# define CHV_HZ_8X8_MODE_IN_1X				(1<<15)
-# define BDW_HIZ_POWER_COMPILER_CLOCK_GATING_DISABLE	(1<<3)
+# define CHV_HZ_8X8_MODE_IN_1X				(1 << 15)
+# define BDW_HIZ_POWER_COMPILER_CLOCK_GATING_DISABLE	(1 << 3)
 
 #define GEN9_SLICE_COMMON_ECO_CHICKEN0		_MMIO(0x7308)
-#define  DISABLE_PIXEL_MASK_CAMMING		(1<<14)
+#define  DISABLE_PIXEL_MASK_CAMMING		(1 << 14)
 
 #define GEN9_SLICE_COMMON_ECO_CHICKEN1		_MMIO(0x731c)
 #define   GEN11_STATE_CACHE_REDIRECT_TO_CS	(1 << 11)
@@ -7264,7 +7319,7 @@ enum {
 
 #define GEN7_L3CNTLREG1				_MMIO(0xB01C)
 #define  GEN7_WA_FOR_GEN7_L3_CONTROL			0x3C47FF8C
-#define  GEN7_L3AGDIS				(1<<19)
+#define  GEN7_L3AGDIS				(1 << 19)
 #define GEN7_L3CNTLREG2				_MMIO(0xB020)
 #define GEN7_L3CNTLREG3				_MMIO(0xB024)
 
@@ -7274,7 +7329,7 @@ enum {
 #define   GEN11_I2M_WRITE_DISABLE		(1 << 28)
 
 #define GEN7_L3SQCREG4				_MMIO(0xb034)
-#define  L3SQ_URB_READ_CAM_MATCH_DISABLE	(1<<27)
+#define  L3SQ_URB_READ_CAM_MATCH_DISABLE	(1 << 27)
 
 #define GEN8_L3SQCREG4				_MMIO(0xb118)
 #define  GEN11_LQSC_CLEAN_EVICT_DISABLE		(1 << 6)
@@ -7285,12 +7340,12 @@ enum {
 #define HDC_CHICKEN0				_MMIO(0x7300)
 #define CNL_HDC_CHICKEN0			_MMIO(0xE5F0)
 #define ICL_HDC_MODE				_MMIO(0xE5F4)
-#define  HDC_FORCE_CSR_NON_COHERENT_OVR_DISABLE	(1<<15)
-#define  HDC_FENCE_DEST_SLM_DISABLE		(1<<14)
-#define  HDC_DONOT_FETCH_MEM_WHEN_MASKED	(1<<11)
-#define  HDC_FORCE_CONTEXT_SAVE_RESTORE_NON_COHERENT	(1<<5)
-#define  HDC_FORCE_NON_COHERENT			(1<<4)
-#define  HDC_BARRIER_PERFORMANCE_DISABLE	(1<<10)
+#define  HDC_FORCE_CSR_NON_COHERENT_OVR_DISABLE	(1 << 15)
+#define  HDC_FENCE_DEST_SLM_DISABLE		(1 << 14)
+#define  HDC_DONOT_FETCH_MEM_WHEN_MASKED	(1 << 11)
+#define  HDC_FORCE_CONTEXT_SAVE_RESTORE_NON_COHERENT	(1 << 5)
+#define  HDC_FORCE_NON_COHERENT			(1 << 4)
+#define  HDC_BARRIER_PERFORMANCE_DISABLE	(1 << 10)
 
 #define GEN8_HDC_CHICKEN1			_MMIO(0x7304)
 
@@ -7303,13 +7358,13 @@ enum {
 
 /* WaCatErrorRejectionIssue */
 #define GEN7_SQ_CHICKEN_MBCUNIT_CONFIG		_MMIO(0x9030)
-#define  GEN7_SQ_CHICKEN_MBCUNIT_SQINTMOB	(1<<11)
+#define  GEN7_SQ_CHICKEN_MBCUNIT_SQINTMOB	(1 << 11)
 
 #define HSW_SCRATCH1				_MMIO(0xb038)
-#define  HSW_SCRATCH1_L3_DATA_ATOMICS_DISABLE	(1<<27)
+#define  HSW_SCRATCH1_L3_DATA_ATOMICS_DISABLE	(1 << 27)
 
 #define BDW_SCRATCH1					_MMIO(0xb11c)
-#define  GEN9_LBS_SLA_RETRY_TIMER_DECREMENT_ENABLE	(1<<2)
+#define  GEN9_LBS_SLA_RETRY_TIMER_DECREMENT_ENABLE	(1 << 2)
 
 /* PCH */
 
@@ -7408,8 +7463,8 @@ enum {
 #define SDEIER  _MMIO(0xc400c)
 
 #define SERR_INT			_MMIO(0xc4040)
-#define  SERR_INT_POISON		(1<<31)
-#define  SERR_INT_TRANS_FIFO_UNDERRUN(pipe)	(1<<((pipe)*3))
+#define  SERR_INT_POISON		(1 << 31)
+#define  SERR_INT_TRANS_FIFO_UNDERRUN(pipe)	(1 << ((pipe) * 3))
 
 /* digital port hotplug */
 #define PCH_PORT_HOTPLUG		_MMIO(0xc4030)	/* SHOTPLUG_CTL */
@@ -7478,46 +7533,46 @@ enum {
 
 #define _PCH_DPLL_A              0xc6014
 #define _PCH_DPLL_B              0xc6018
-#define PCH_DPLL(pll) _MMIO(pll == 0 ? _PCH_DPLL_A : _PCH_DPLL_B)
+#define PCH_DPLL(pll) _MMIO((pll) == 0 ? _PCH_DPLL_A : _PCH_DPLL_B)
 
 #define _PCH_FPA0                0xc6040
-#define  FP_CB_TUNE		(0x3<<22)
+#define  FP_CB_TUNE		(0x3 << 22)
 #define _PCH_FPA1                0xc6044
 #define _PCH_FPB0                0xc6048
 #define _PCH_FPB1                0xc604c
-#define PCH_FP0(pll) _MMIO(pll == 0 ? _PCH_FPA0 : _PCH_FPB0)
-#define PCH_FP1(pll) _MMIO(pll == 0 ? _PCH_FPA1 : _PCH_FPB1)
+#define PCH_FP0(pll) _MMIO((pll) == 0 ? _PCH_FPA0 : _PCH_FPB0)
+#define PCH_FP1(pll) _MMIO((pll) == 0 ? _PCH_FPA1 : _PCH_FPB1)
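Wrapping pll before the == 0 comparison protects these selectors from expression arguments: & binds more loosely than ==, so the old body would quietly turn pll & 1 == 0 into pll & 0. A before/after sketch with _MMIO dropped:

    #include <assert.h>

    enum { DPLL_A = 0xc6014, DPLL_B = 0xc6018 };

    #define PCH_DPLL(pll)      ((pll) == 0 ? DPLL_A : DPLL_B)  /* as fixed */
    #define PCH_DPLL_OLD(pll)  (pll == 0 ? DPLL_A : DPLL_B)    /* old body */

    int main(void)
    {
        int id = 2;

        /* (2 & 1) == 0, so the fixed macro picks DPLL A... */
        assert(PCH_DPLL(id & 1) == DPLL_A);
        /* ...but the old body parses as id & (1 == 0), i.e. id & 0,
         * which is false for every id, so it always returns DPLL B. */
        assert(PCH_DPLL_OLD(id & 1) == DPLL_B);
        return 0;
    }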
 
 #define PCH_DPLL_TEST           _MMIO(0xc606c)
 
 #define PCH_DREF_CONTROL        _MMIO(0xC6200)
 #define  DREF_CONTROL_MASK      0x7fc3
-#define  DREF_CPU_SOURCE_OUTPUT_DISABLE         (0<<13)
-#define  DREF_CPU_SOURCE_OUTPUT_DOWNSPREAD      (2<<13)
-#define  DREF_CPU_SOURCE_OUTPUT_NONSPREAD       (3<<13)
-#define  DREF_CPU_SOURCE_OUTPUT_MASK		(3<<13)
-#define  DREF_SSC_SOURCE_DISABLE                (0<<11)
-#define  DREF_SSC_SOURCE_ENABLE                 (2<<11)
-#define  DREF_SSC_SOURCE_MASK			(3<<11)
-#define  DREF_NONSPREAD_SOURCE_DISABLE          (0<<9)
-#define  DREF_NONSPREAD_CK505_ENABLE		(1<<9)
-#define  DREF_NONSPREAD_SOURCE_ENABLE           (2<<9)
-#define  DREF_NONSPREAD_SOURCE_MASK		(3<<9)
-#define  DREF_SUPERSPREAD_SOURCE_DISABLE        (0<<7)
-#define  DREF_SUPERSPREAD_SOURCE_ENABLE         (2<<7)
-#define  DREF_SUPERSPREAD_SOURCE_MASK		(3<<7)
-#define  DREF_SSC4_DOWNSPREAD                   (0<<6)
-#define  DREF_SSC4_CENTERSPREAD                 (1<<6)
-#define  DREF_SSC1_DISABLE                      (0<<1)
-#define  DREF_SSC1_ENABLE                       (1<<1)
+#define  DREF_CPU_SOURCE_OUTPUT_DISABLE         (0 << 13)
+#define  DREF_CPU_SOURCE_OUTPUT_DOWNSPREAD      (2 << 13)
+#define  DREF_CPU_SOURCE_OUTPUT_NONSPREAD       (3 << 13)
+#define  DREF_CPU_SOURCE_OUTPUT_MASK		(3 << 13)
+#define  DREF_SSC_SOURCE_DISABLE                (0 << 11)
+#define  DREF_SSC_SOURCE_ENABLE                 (2 << 11)
+#define  DREF_SSC_SOURCE_MASK			(3 << 11)
+#define  DREF_NONSPREAD_SOURCE_DISABLE          (0 << 9)
+#define  DREF_NONSPREAD_CK505_ENABLE		(1 << 9)
+#define  DREF_NONSPREAD_SOURCE_ENABLE           (2 << 9)
+#define  DREF_NONSPREAD_SOURCE_MASK		(3 << 9)
+#define  DREF_SUPERSPREAD_SOURCE_DISABLE        (0 << 7)
+#define  DREF_SUPERSPREAD_SOURCE_ENABLE         (2 << 7)
+#define  DREF_SUPERSPREAD_SOURCE_MASK		(3 << 7)
+#define  DREF_SSC4_DOWNSPREAD                   (0 << 6)
+#define  DREF_SSC4_CENTERSPREAD                 (1 << 6)
+#define  DREF_SSC1_DISABLE                      (0 << 1)
+#define  DREF_SSC1_ENABLE                       (1 << 1)
 #define  DREF_SSC4_DISABLE                      (0)
 #define  DREF_SSC4_ENABLE                       (1)
 
 #define PCH_RAWCLK_FREQ         _MMIO(0xc6204)
 #define  FDL_TP1_TIMER_SHIFT    12
-#define  FDL_TP1_TIMER_MASK     (3<<12)
+#define  FDL_TP1_TIMER_MASK     (3 << 12)
 #define  FDL_TP2_TIMER_SHIFT    10
-#define  FDL_TP2_TIMER_MASK     (3<<10)
+#define  FDL_TP2_TIMER_MASK     (3 << 10)
 #define  RAWCLK_FREQ_MASK       0x3ff
 #define  CNP_RAWCLK_DIV_MASK	(0x3ff << 16)
 #define  CNP_RAWCLK_DIV(div)	((div) << 16)
@@ -7554,7 +7609,7 @@ enum {
 #define  TRANS_VBLANK_END_SHIFT		16
 #define  TRANS_VBLANK_START_SHIFT	0
 #define _PCH_TRANS_VSYNC_A		0xe0014
-#define  TRANS_VSYNC_END_SHIFT	 	16
+#define  TRANS_VSYNC_END_SHIFT		16
 #define  TRANS_VSYNC_START_SHIFT	0
 #define _PCH_TRANS_VSYNCSHIFT_A		0xe0028
 
@@ -7642,7 +7697,7 @@ enum {
 #define HSW_TVIDEO_DIP_VSC_DATA(trans, i)	_MMIO_TRANS2(trans, _HSW_VIDEO_DIP_VSC_DATA_A + (i) * 4)
 
 #define _HSW_STEREO_3D_CTL_A		0x70020
-#define   S3D_ENABLE			(1<<31)
+#define   S3D_ENABLE			(1 << 31)
 #define _HSW_STEREO_3D_CTL_B		0x71020
 
 #define HSW_STEREO_3D_CTL(trans)	_MMIO_PIPE2(trans, _HSW_STEREO_3D_CTL_A)
@@ -7685,156 +7740,156 @@ enum {
 #define _PCH_TRANSBCONF              0xf1008
 #define PCH_TRANSCONF(pipe)	_MMIO_PIPE(pipe, _PCH_TRANSACONF, _PCH_TRANSBCONF)
 #define LPT_TRANSCONF		PCH_TRANSCONF(PIPE_A) /* lpt has only one transcoder */
-#define  TRANS_DISABLE          (0<<31)
-#define  TRANS_ENABLE           (1<<31)
-#define  TRANS_STATE_MASK       (1<<30)
-#define  TRANS_STATE_DISABLE    (0<<30)
-#define  TRANS_STATE_ENABLE     (1<<30)
-#define  TRANS_FSYNC_DELAY_HB1  (0<<27)
-#define  TRANS_FSYNC_DELAY_HB2  (1<<27)
-#define  TRANS_FSYNC_DELAY_HB3  (2<<27)
-#define  TRANS_FSYNC_DELAY_HB4  (3<<27)
-#define  TRANS_INTERLACE_MASK   (7<<21)
-#define  TRANS_PROGRESSIVE      (0<<21)
-#define  TRANS_INTERLACED       (3<<21)
-#define  TRANS_LEGACY_INTERLACED_ILK (2<<21)
-#define  TRANS_8BPC             (0<<5)
-#define  TRANS_10BPC            (1<<5)
-#define  TRANS_6BPC             (2<<5)
-#define  TRANS_12BPC            (3<<5)
+#define  TRANS_DISABLE          (0 << 31)
+#define  TRANS_ENABLE           (1 << 31)
+#define  TRANS_STATE_MASK       (1 << 30)
+#define  TRANS_STATE_DISABLE    (0 << 30)
+#define  TRANS_STATE_ENABLE     (1 << 30)
+#define  TRANS_FSYNC_DELAY_HB1  (0 << 27)
+#define  TRANS_FSYNC_DELAY_HB2  (1 << 27)
+#define  TRANS_FSYNC_DELAY_HB3  (2 << 27)
+#define  TRANS_FSYNC_DELAY_HB4  (3 << 27)
+#define  TRANS_INTERLACE_MASK   (7 << 21)
+#define  TRANS_PROGRESSIVE      (0 << 21)
+#define  TRANS_INTERLACED       (3 << 21)
+#define  TRANS_LEGACY_INTERLACED_ILK (2 << 21)
+#define  TRANS_8BPC             (0 << 5)
+#define  TRANS_10BPC            (1 << 5)
+#define  TRANS_6BPC             (2 << 5)
+#define  TRANS_12BPC            (3 << 5)
 
 #define _TRANSA_CHICKEN1	 0xf0060
 #define _TRANSB_CHICKEN1	 0xf1060
 #define TRANS_CHICKEN1(pipe)	_MMIO_PIPE(pipe, _TRANSA_CHICKEN1, _TRANSB_CHICKEN1)
-#define  TRANS_CHICKEN1_HDMIUNIT_GC_DISABLE	(1<<10)
-#define  TRANS_CHICKEN1_DP0UNIT_GC_DISABLE	(1<<4)
+#define  TRANS_CHICKEN1_HDMIUNIT_GC_DISABLE	(1 << 10)
+#define  TRANS_CHICKEN1_DP0UNIT_GC_DISABLE	(1 << 4)
 #define _TRANSA_CHICKEN2	 0xf0064
 #define _TRANSB_CHICKEN2	 0xf1064
 #define TRANS_CHICKEN2(pipe)	_MMIO_PIPE(pipe, _TRANSA_CHICKEN2, _TRANSB_CHICKEN2)
-#define  TRANS_CHICKEN2_TIMING_OVERRIDE			(1<<31)
-#define  TRANS_CHICKEN2_FDI_POLARITY_REVERSED		(1<<29)
-#define  TRANS_CHICKEN2_FRAME_START_DELAY_MASK		(3<<27)
-#define  TRANS_CHICKEN2_DISABLE_DEEP_COLOR_COUNTER	(1<<26)
-#define  TRANS_CHICKEN2_DISABLE_DEEP_COLOR_MODESWITCH	(1<<25)
+#define  TRANS_CHICKEN2_TIMING_OVERRIDE			(1 << 31)
+#define  TRANS_CHICKEN2_FDI_POLARITY_REVERSED		(1 << 29)
+#define  TRANS_CHICKEN2_FRAME_START_DELAY_MASK		(3 << 27)
+#define  TRANS_CHICKEN2_DISABLE_DEEP_COLOR_COUNTER	(1 << 26)
+#define  TRANS_CHICKEN2_DISABLE_DEEP_COLOR_MODESWITCH	(1 << 25)
 
 #define SOUTH_CHICKEN1		_MMIO(0xc2000)
 #define  FDIA_PHASE_SYNC_SHIFT_OVR	19
 #define  FDIA_PHASE_SYNC_SHIFT_EN	18
-#define  FDI_PHASE_SYNC_OVR(pipe) (1<<(FDIA_PHASE_SYNC_SHIFT_OVR - ((pipe) * 2)))
-#define  FDI_PHASE_SYNC_EN(pipe) (1<<(FDIA_PHASE_SYNC_SHIFT_EN - ((pipe) * 2)))
+#define  FDI_PHASE_SYNC_OVR(pipe) (1 << (FDIA_PHASE_SYNC_SHIFT_OVR - ((pipe) * 2)))
+#define  FDI_PHASE_SYNC_EN(pipe) (1 << (FDIA_PHASE_SYNC_SHIFT_EN - ((pipe) * 2)))
 #define  FDI_BC_BIFURCATION_SELECT	(1 << 12)
 #define  CHASSIS_CLK_REQ_DURATION_MASK	(0xf << 8)
 #define  CHASSIS_CLK_REQ_DURATION(x)	((x) << 8)
-#define  SPT_PWM_GRANULARITY		(1<<0)
+#define  SPT_PWM_GRANULARITY		(1 << 0)
 #define SOUTH_CHICKEN2		_MMIO(0xc2004)
-#define  FDI_MPHY_IOSFSB_RESET_STATUS	(1<<13)
-#define  FDI_MPHY_IOSFSB_RESET_CTL	(1<<12)
-#define  LPT_PWM_GRANULARITY		(1<<5)
-#define  DPLS_EDP_PPS_FIX_DIS		(1<<0)
+#define  FDI_MPHY_IOSFSB_RESET_STATUS	(1 << 13)
+#define  FDI_MPHY_IOSFSB_RESET_CTL	(1 << 12)
+#define  LPT_PWM_GRANULARITY		(1 << 5)
+#define  DPLS_EDP_PPS_FIX_DIS		(1 << 0)
 
 #define _FDI_RXA_CHICKEN        0xc200c
 #define _FDI_RXB_CHICKEN        0xc2010
-#define  FDI_RX_PHASE_SYNC_POINTER_OVR	(1<<1)
-#define  FDI_RX_PHASE_SYNC_POINTER_EN	(1<<0)
+#define  FDI_RX_PHASE_SYNC_POINTER_OVR	(1 << 1)
+#define  FDI_RX_PHASE_SYNC_POINTER_EN	(1 << 0)
 #define FDI_RX_CHICKEN(pipe)	_MMIO_PIPE(pipe, _FDI_RXA_CHICKEN, _FDI_RXB_CHICKEN)
 
 #define SOUTH_DSPCLK_GATE_D	_MMIO(0xc2020)
-#define  PCH_GMBUSUNIT_CLOCK_GATE_DISABLE (1<<31)
-#define  PCH_DPLUNIT_CLOCK_GATE_DISABLE (1<<30)
-#define  PCH_DPLSUNIT_CLOCK_GATE_DISABLE (1<<29)
-#define  PCH_CPUNIT_CLOCK_GATE_DISABLE (1<<14)
-#define  CNP_PWM_CGE_GATING_DISABLE (1<<13)
-#define  PCH_LP_PARTITION_LEVEL_DISABLE  (1<<12)
+#define  PCH_GMBUSUNIT_CLOCK_GATE_DISABLE (1 << 31)
+#define  PCH_DPLUNIT_CLOCK_GATE_DISABLE (1 << 30)
+#define  PCH_DPLSUNIT_CLOCK_GATE_DISABLE (1 << 29)
+#define  PCH_CPUNIT_CLOCK_GATE_DISABLE (1 << 14)
+#define  CNP_PWM_CGE_GATING_DISABLE (1 << 13)
+#define  PCH_LP_PARTITION_LEVEL_DISABLE  (1 << 12)
 
 /* CPU: FDI_TX */
 #define _FDI_TXA_CTL            0x60100
 #define _FDI_TXB_CTL            0x61100
 #define FDI_TX_CTL(pipe)	_MMIO_PIPE(pipe, _FDI_TXA_CTL, _FDI_TXB_CTL)
-#define  FDI_TX_DISABLE         (0<<31)
-#define  FDI_TX_ENABLE          (1<<31)
-#define  FDI_LINK_TRAIN_PATTERN_1       (0<<28)
-#define  FDI_LINK_TRAIN_PATTERN_2       (1<<28)
-#define  FDI_LINK_TRAIN_PATTERN_IDLE    (2<<28)
-#define  FDI_LINK_TRAIN_NONE            (3<<28)
-#define  FDI_LINK_TRAIN_VOLTAGE_0_4V    (0<<25)
-#define  FDI_LINK_TRAIN_VOLTAGE_0_6V    (1<<25)
-#define  FDI_LINK_TRAIN_VOLTAGE_0_8V    (2<<25)
-#define  FDI_LINK_TRAIN_VOLTAGE_1_2V    (3<<25)
-#define  FDI_LINK_TRAIN_PRE_EMPHASIS_NONE (0<<22)
-#define  FDI_LINK_TRAIN_PRE_EMPHASIS_1_5X (1<<22)
-#define  FDI_LINK_TRAIN_PRE_EMPHASIS_2X   (2<<22)
-#define  FDI_LINK_TRAIN_PRE_EMPHASIS_3X   (3<<22)
+#define  FDI_TX_DISABLE         (0 << 31)
+#define  FDI_TX_ENABLE          (1 << 31)
+#define  FDI_LINK_TRAIN_PATTERN_1       (0 << 28)
+#define  FDI_LINK_TRAIN_PATTERN_2       (1 << 28)
+#define  FDI_LINK_TRAIN_PATTERN_IDLE    (2 << 28)
+#define  FDI_LINK_TRAIN_NONE            (3 << 28)
+#define  FDI_LINK_TRAIN_VOLTAGE_0_4V    (0 << 25)
+#define  FDI_LINK_TRAIN_VOLTAGE_0_6V    (1 << 25)
+#define  FDI_LINK_TRAIN_VOLTAGE_0_8V    (2 << 25)
+#define  FDI_LINK_TRAIN_VOLTAGE_1_2V    (3 << 25)
+#define  FDI_LINK_TRAIN_PRE_EMPHASIS_NONE (0 << 22)
+#define  FDI_LINK_TRAIN_PRE_EMPHASIS_1_5X (1 << 22)
+#define  FDI_LINK_TRAIN_PRE_EMPHASIS_2X   (2 << 22)
+#define  FDI_LINK_TRAIN_PRE_EMPHASIS_3X   (3 << 22)
 /* ILK always use 400mV 0dB for voltage swing and pre-emphasis level.
    SNB has different settings. */
 /* SNB A-stepping */
-#define  FDI_LINK_TRAIN_400MV_0DB_SNB_A		(0x38<<22)
-#define  FDI_LINK_TRAIN_400MV_6DB_SNB_A		(0x02<<22)
-#define  FDI_LINK_TRAIN_600MV_3_5DB_SNB_A	(0x01<<22)
-#define  FDI_LINK_TRAIN_800MV_0DB_SNB_A		(0x0<<22)
+#define  FDI_LINK_TRAIN_400MV_0DB_SNB_A		(0x38 << 22)
+#define  FDI_LINK_TRAIN_400MV_6DB_SNB_A		(0x02 << 22)
+#define  FDI_LINK_TRAIN_600MV_3_5DB_SNB_A	(0x01 << 22)
+#define  FDI_LINK_TRAIN_800MV_0DB_SNB_A		(0x0 << 22)
 /* SNB B-stepping */
-#define  FDI_LINK_TRAIN_400MV_0DB_SNB_B		(0x0<<22)
-#define  FDI_LINK_TRAIN_400MV_6DB_SNB_B		(0x3a<<22)
-#define  FDI_LINK_TRAIN_600MV_3_5DB_SNB_B	(0x39<<22)
-#define  FDI_LINK_TRAIN_800MV_0DB_SNB_B		(0x38<<22)
-#define  FDI_LINK_TRAIN_VOL_EMP_MASK		(0x3f<<22)
+#define  FDI_LINK_TRAIN_400MV_0DB_SNB_B		(0x0 << 22)
+#define  FDI_LINK_TRAIN_400MV_6DB_SNB_B		(0x3a << 22)
+#define  FDI_LINK_TRAIN_600MV_3_5DB_SNB_B	(0x39 << 22)
+#define  FDI_LINK_TRAIN_800MV_0DB_SNB_B		(0x38 << 22)
+#define  FDI_LINK_TRAIN_VOL_EMP_MASK		(0x3f << 22)
 #define  FDI_DP_PORT_WIDTH_SHIFT		19
 #define  FDI_DP_PORT_WIDTH_MASK			(7 << FDI_DP_PORT_WIDTH_SHIFT)
 #define  FDI_DP_PORT_WIDTH(width)           (((width) - 1) << FDI_DP_PORT_WIDTH_SHIFT)
-#define  FDI_TX_ENHANCE_FRAME_ENABLE    (1<<18)
+#define  FDI_TX_ENHANCE_FRAME_ENABLE    (1 << 18)
 /* Ironlake: hardwired to 1 */
-#define  FDI_TX_PLL_ENABLE              (1<<14)
+#define  FDI_TX_PLL_ENABLE              (1 << 14)
 
 /* Ivybridge has different bits for lolz */
-#define  FDI_LINK_TRAIN_PATTERN_1_IVB       (0<<8)
-#define  FDI_LINK_TRAIN_PATTERN_2_IVB       (1<<8)
-#define  FDI_LINK_TRAIN_PATTERN_IDLE_IVB    (2<<8)
-#define  FDI_LINK_TRAIN_NONE_IVB            (3<<8)
+#define  FDI_LINK_TRAIN_PATTERN_1_IVB       (0 << 8)
+#define  FDI_LINK_TRAIN_PATTERN_2_IVB       (1 << 8)
+#define  FDI_LINK_TRAIN_PATTERN_IDLE_IVB    (2 << 8)
+#define  FDI_LINK_TRAIN_NONE_IVB            (3 << 8)
 
 /* both Tx and Rx */
-#define  FDI_COMPOSITE_SYNC		(1<<11)
-#define  FDI_LINK_TRAIN_AUTO		(1<<10)
-#define  FDI_SCRAMBLING_ENABLE          (0<<7)
-#define  FDI_SCRAMBLING_DISABLE         (1<<7)
+#define  FDI_COMPOSITE_SYNC		(1 << 11)
+#define  FDI_LINK_TRAIN_AUTO		(1 << 10)
+#define  FDI_SCRAMBLING_ENABLE          (0 << 7)
+#define  FDI_SCRAMBLING_DISABLE         (1 << 7)
 
 /* FDI_RX, FDI_X is hard-wired to Transcoder_X */
 #define _FDI_RXA_CTL             0xf000c
 #define _FDI_RXB_CTL             0xf100c
 #define FDI_RX_CTL(pipe)	_MMIO_PIPE(pipe, _FDI_RXA_CTL, _FDI_RXB_CTL)
-#define  FDI_RX_ENABLE          (1<<31)
+#define  FDI_RX_ENABLE          (1 << 31)
 /* train, dp width same as FDI_TX */
-#define  FDI_FS_ERRC_ENABLE		(1<<27)
-#define  FDI_FE_ERRC_ENABLE		(1<<26)
-#define  FDI_RX_POLARITY_REVERSED_LPT	(1<<16)
-#define  FDI_8BPC                       (0<<16)
-#define  FDI_10BPC                      (1<<16)
-#define  FDI_6BPC                       (2<<16)
-#define  FDI_12BPC                      (3<<16)
-#define  FDI_RX_LINK_REVERSAL_OVERRIDE  (1<<15)
-#define  FDI_DMI_LINK_REVERSE_MASK      (1<<14)
-#define  FDI_RX_PLL_ENABLE              (1<<13)
-#define  FDI_FS_ERR_CORRECT_ENABLE      (1<<11)
-#define  FDI_FE_ERR_CORRECT_ENABLE      (1<<10)
-#define  FDI_FS_ERR_REPORT_ENABLE       (1<<9)
-#define  FDI_FE_ERR_REPORT_ENABLE       (1<<8)
-#define  FDI_RX_ENHANCE_FRAME_ENABLE    (1<<6)
-#define  FDI_PCDCLK	                (1<<4)
+#define  FDI_FS_ERRC_ENABLE		(1 << 27)
+#define  FDI_FE_ERRC_ENABLE		(1 << 26)
+#define  FDI_RX_POLARITY_REVERSED_LPT	(1 << 16)
+#define  FDI_8BPC                       (0 << 16)
+#define  FDI_10BPC                      (1 << 16)
+#define  FDI_6BPC                       (2 << 16)
+#define  FDI_12BPC                      (3 << 16)
+#define  FDI_RX_LINK_REVERSAL_OVERRIDE  (1 << 15)
+#define  FDI_DMI_LINK_REVERSE_MASK      (1 << 14)
+#define  FDI_RX_PLL_ENABLE              (1 << 13)
+#define  FDI_FS_ERR_CORRECT_ENABLE      (1 << 11)
+#define  FDI_FE_ERR_CORRECT_ENABLE      (1 << 10)
+#define  FDI_FS_ERR_REPORT_ENABLE       (1 << 9)
+#define  FDI_FE_ERR_REPORT_ENABLE       (1 << 8)
+#define  FDI_RX_ENHANCE_FRAME_ENABLE    (1 << 6)
+#define  FDI_PCDCLK	                (1 << 4)
 /* CPT */
-#define  FDI_AUTO_TRAINING			(1<<10)
-#define  FDI_LINK_TRAIN_PATTERN_1_CPT		(0<<8)
-#define  FDI_LINK_TRAIN_PATTERN_2_CPT		(1<<8)
-#define  FDI_LINK_TRAIN_PATTERN_IDLE_CPT	(2<<8)
-#define  FDI_LINK_TRAIN_NORMAL_CPT		(3<<8)
-#define  FDI_LINK_TRAIN_PATTERN_MASK_CPT	(3<<8)
+#define  FDI_AUTO_TRAINING			(1 << 10)
+#define  FDI_LINK_TRAIN_PATTERN_1_CPT		(0 << 8)
+#define  FDI_LINK_TRAIN_PATTERN_2_CPT		(1 << 8)
+#define  FDI_LINK_TRAIN_PATTERN_IDLE_CPT	(2 << 8)
+#define  FDI_LINK_TRAIN_NORMAL_CPT		(3 << 8)
+#define  FDI_LINK_TRAIN_PATTERN_MASK_CPT	(3 << 8)
 
 #define _FDI_RXA_MISC			0xf0010
 #define _FDI_RXB_MISC			0xf1010
-#define  FDI_RX_PWRDN_LANE1_MASK	(3<<26)
-#define  FDI_RX_PWRDN_LANE1_VAL(x)	((x)<<26)
-#define  FDI_RX_PWRDN_LANE0_MASK	(3<<24)
-#define  FDI_RX_PWRDN_LANE0_VAL(x)	((x)<<24)
-#define  FDI_RX_TP1_TO_TP2_48		(2<<20)
-#define  FDI_RX_TP1_TO_TP2_64		(3<<20)
-#define  FDI_RX_FDI_DELAY_90		(0x90<<0)
+#define  FDI_RX_PWRDN_LANE1_MASK	(3 << 26)
+#define  FDI_RX_PWRDN_LANE1_VAL(x)	((x) << 26)
+#define  FDI_RX_PWRDN_LANE0_MASK	(3 << 24)
+#define  FDI_RX_PWRDN_LANE0_VAL(x)	((x) << 24)
+#define  FDI_RX_TP1_TO_TP2_48		(2 << 20)
+#define  FDI_RX_TP1_TO_TP2_64		(3 << 20)
+#define  FDI_RX_FDI_DELAY_90		(0x90 << 0)
 #define FDI_RX_MISC(pipe)	_MMIO_PIPE(pipe, _FDI_RXA_MISC, _FDI_RXB_MISC)
 
 #define _FDI_RXA_TUSIZE1        0xf0030
@@ -7845,17 +7900,17 @@ enum {
 #define FDI_RX_TUSIZE2(pipe)	_MMIO_PIPE(pipe, _FDI_RXA_TUSIZE2, _FDI_RXB_TUSIZE2)
 
 /* FDI_RX interrupt register format */
-#define FDI_RX_INTER_LANE_ALIGN         (1<<10)
-#define FDI_RX_SYMBOL_LOCK              (1<<9) /* train 2 */
-#define FDI_RX_BIT_LOCK                 (1<<8) /* train 1 */
-#define FDI_RX_TRAIN_PATTERN_2_FAIL     (1<<7)
-#define FDI_RX_FS_CODE_ERR              (1<<6)
-#define FDI_RX_FE_CODE_ERR              (1<<5)
-#define FDI_RX_SYMBOL_ERR_RATE_ABOVE    (1<<4)
-#define FDI_RX_HDCP_LINK_FAIL           (1<<3)
-#define FDI_RX_PIXEL_FIFO_OVERFLOW      (1<<2)
-#define FDI_RX_CROSS_CLOCK_OVERFLOW     (1<<1)
-#define FDI_RX_SYMBOL_QUEUE_OVERFLOW    (1<<0)
+#define FDI_RX_INTER_LANE_ALIGN         (1 << 10)
+#define FDI_RX_SYMBOL_LOCK              (1 << 9) /* train 2 */
+#define FDI_RX_BIT_LOCK                 (1 << 8) /* train 1 */
+#define FDI_RX_TRAIN_PATTERN_2_FAIL     (1 << 7)
+#define FDI_RX_FS_CODE_ERR              (1 << 6)
+#define FDI_RX_FE_CODE_ERR              (1 << 5)
+#define FDI_RX_SYMBOL_ERR_RATE_ABOVE    (1 << 4)
+#define FDI_RX_HDCP_LINK_FAIL           (1 << 3)
+#define FDI_RX_PIXEL_FIFO_OVERFLOW      (1 << 2)
+#define FDI_RX_CROSS_CLOCK_OVERFLOW     (1 << 1)
+#define FDI_RX_SYMBOL_QUEUE_OVERFLOW    (1 << 0)
 
 #define _FDI_RXA_IIR            0xf0014
 #define _FDI_RXA_IMR            0xf0018
@@ -7905,54 +7960,54 @@ enum {
 #define _TRANS_DP_CTL_B		0xe1300
 #define _TRANS_DP_CTL_C		0xe2300
 #define TRANS_DP_CTL(pipe)	_MMIO_PIPE(pipe, _TRANS_DP_CTL_A, _TRANS_DP_CTL_B)
-#define  TRANS_DP_OUTPUT_ENABLE	(1<<31)
+#define  TRANS_DP_OUTPUT_ENABLE	(1 << 31)
 #define  TRANS_DP_PORT_SEL_MASK		(3 << 29)
 #define  TRANS_DP_PORT_SEL_NONE		(3 << 29)
 #define  TRANS_DP_PORT_SEL(port)	(((port) - PORT_B) << 29)
-#define  TRANS_DP_AUDIO_ONLY	(1<<26)
-#define  TRANS_DP_ENH_FRAMING	(1<<18)
-#define  TRANS_DP_8BPC		(0<<9)
-#define  TRANS_DP_10BPC		(1<<9)
-#define  TRANS_DP_6BPC		(2<<9)
-#define  TRANS_DP_12BPC		(3<<9)
-#define  TRANS_DP_BPC_MASK	(3<<9)
-#define  TRANS_DP_VSYNC_ACTIVE_HIGH	(1<<4)
+#define  TRANS_DP_AUDIO_ONLY	(1 << 26)
+#define  TRANS_DP_ENH_FRAMING	(1 << 18)
+#define  TRANS_DP_8BPC		(0 << 9)
+#define  TRANS_DP_10BPC		(1 << 9)
+#define  TRANS_DP_6BPC		(2 << 9)
+#define  TRANS_DP_12BPC		(3 << 9)
+#define  TRANS_DP_BPC_MASK	(3 << 9)
+#define  TRANS_DP_VSYNC_ACTIVE_HIGH	(1 << 4)
 #define  TRANS_DP_VSYNC_ACTIVE_LOW	0
-#define  TRANS_DP_HSYNC_ACTIVE_HIGH	(1<<3)
+#define  TRANS_DP_HSYNC_ACTIVE_HIGH	(1 << 3)
 #define  TRANS_DP_HSYNC_ACTIVE_LOW	0
-#define  TRANS_DP_SYNC_MASK	(3<<3)
+#define  TRANS_DP_SYNC_MASK	(3 << 3)
 
 /* SNB eDP training params */
 /* SNB A-stepping */
-#define  EDP_LINK_TRAIN_400MV_0DB_SNB_A		(0x38<<22)
-#define  EDP_LINK_TRAIN_400MV_6DB_SNB_A		(0x02<<22)
-#define  EDP_LINK_TRAIN_600MV_3_5DB_SNB_A	(0x01<<22)
-#define  EDP_LINK_TRAIN_800MV_0DB_SNB_A		(0x0<<22)
+#define  EDP_LINK_TRAIN_400MV_0DB_SNB_A		(0x38 << 22)
+#define  EDP_LINK_TRAIN_400MV_6DB_SNB_A		(0x02 << 22)
+#define  EDP_LINK_TRAIN_600MV_3_5DB_SNB_A	(0x01 << 22)
+#define  EDP_LINK_TRAIN_800MV_0DB_SNB_A		(0x0 << 22)
 /* SNB B-stepping */
-#define  EDP_LINK_TRAIN_400_600MV_0DB_SNB_B	(0x0<<22)
-#define  EDP_LINK_TRAIN_400MV_3_5DB_SNB_B	(0x1<<22)
-#define  EDP_LINK_TRAIN_400_600MV_6DB_SNB_B	(0x3a<<22)
-#define  EDP_LINK_TRAIN_600_800MV_3_5DB_SNB_B	(0x39<<22)
-#define  EDP_LINK_TRAIN_800_1200MV_0DB_SNB_B	(0x38<<22)
-#define  EDP_LINK_TRAIN_VOL_EMP_MASK_SNB	(0x3f<<22)
+#define  EDP_LINK_TRAIN_400_600MV_0DB_SNB_B	(0x0 << 22)
+#define  EDP_LINK_TRAIN_400MV_3_5DB_SNB_B	(0x1 << 22)
+#define  EDP_LINK_TRAIN_400_600MV_6DB_SNB_B	(0x3a << 22)
+#define  EDP_LINK_TRAIN_600_800MV_3_5DB_SNB_B	(0x39 << 22)
+#define  EDP_LINK_TRAIN_800_1200MV_0DB_SNB_B	(0x38 << 22)
+#define  EDP_LINK_TRAIN_VOL_EMP_MASK_SNB	(0x3f << 22)
 
 /* IVB */
-#define EDP_LINK_TRAIN_400MV_0DB_IVB		(0x24 <<22)
-#define EDP_LINK_TRAIN_400MV_3_5DB_IVB		(0x2a <<22)
-#define EDP_LINK_TRAIN_400MV_6DB_IVB		(0x2f <<22)
-#define EDP_LINK_TRAIN_600MV_0DB_IVB		(0x30 <<22)
-#define EDP_LINK_TRAIN_600MV_3_5DB_IVB		(0x36 <<22)
-#define EDP_LINK_TRAIN_800MV_0DB_IVB		(0x38 <<22)
-#define EDP_LINK_TRAIN_800MV_3_5DB_IVB		(0x3e <<22)
+#define EDP_LINK_TRAIN_400MV_0DB_IVB		(0x24 << 22)
+#define EDP_LINK_TRAIN_400MV_3_5DB_IVB		(0x2a << 22)
+#define EDP_LINK_TRAIN_400MV_6DB_IVB		(0x2f << 22)
+#define EDP_LINK_TRAIN_600MV_0DB_IVB		(0x30 << 22)
+#define EDP_LINK_TRAIN_600MV_3_5DB_IVB		(0x36 << 22)
+#define EDP_LINK_TRAIN_800MV_0DB_IVB		(0x38 << 22)
+#define EDP_LINK_TRAIN_800MV_3_5DB_IVB		(0x3e << 22)
 
 /* legacy values */
-#define EDP_LINK_TRAIN_500MV_0DB_IVB		(0x00 <<22)
-#define EDP_LINK_TRAIN_1000MV_0DB_IVB		(0x20 <<22)
-#define EDP_LINK_TRAIN_500MV_3_5DB_IVB		(0x02 <<22)
-#define EDP_LINK_TRAIN_1000MV_3_5DB_IVB		(0x22 <<22)
-#define EDP_LINK_TRAIN_1000MV_6DB_IVB		(0x23 <<22)
+#define EDP_LINK_TRAIN_500MV_0DB_IVB		(0x00 << 22)
+#define EDP_LINK_TRAIN_1000MV_0DB_IVB		(0x20 << 22)
+#define EDP_LINK_TRAIN_500MV_3_5DB_IVB		(0x02 << 22)
+#define EDP_LINK_TRAIN_1000MV_3_5DB_IVB		(0x22 << 22)
+#define EDP_LINK_TRAIN_1000MV_6DB_IVB		(0x23 << 22)
 
-#define  EDP_LINK_TRAIN_VOL_EMP_MASK_IVB	(0x3f<<22)
+#define  EDP_LINK_TRAIN_VOL_EMP_MASK_IVB	(0x3f << 22)
 
 #define  VLV_PMWGICZ				_MMIO(0x1300a4)
 
@@ -7999,7 +8054,7 @@ enum {
 #define   FORCEWAKE_KERNEL_FALLBACK		BIT(15)
 #define  FORCEWAKE_MT_ACK			_MMIO(0x130040)
 #define  ECOBUS					_MMIO(0xa180)
-#define    FORCEWAKE_MT_ENABLE			(1<<5)
+#define    FORCEWAKE_MT_ENABLE			(1 << 5)
 #define  VLV_SPAREG2H				_MMIO(0xA194)
 #define  GEN9_PWRGT_DOMAIN_STATUS		_MMIO(0xA2A0)
 #define   GEN9_PWRGT_MEDIA_STATUS_MASK		(1 << 0)
@@ -8008,13 +8063,13 @@ enum {
 #define  GTFIFODBG				_MMIO(0x120000)
 #define    GT_FIFO_SBDEDICATE_FREE_ENTRY_CHV	(0x1f << 20)
 #define    GT_FIFO_FREE_ENTRIES_CHV		(0x7f << 13)
-#define    GT_FIFO_SBDROPERR			(1<<6)
-#define    GT_FIFO_BLOBDROPERR			(1<<5)
-#define    GT_FIFO_SB_READ_ABORTERR		(1<<4)
-#define    GT_FIFO_DROPERR			(1<<3)
-#define    GT_FIFO_OVFERR			(1<<2)
-#define    GT_FIFO_IAWRERR			(1<<1)
-#define    GT_FIFO_IARDERR			(1<<0)
+#define    GT_FIFO_SBDROPERR			(1 << 6)
+#define    GT_FIFO_BLOBDROPERR			(1 << 5)
+#define    GT_FIFO_SB_READ_ABORTERR		(1 << 4)
+#define    GT_FIFO_DROPERR			(1 << 3)
+#define    GT_FIFO_OVFERR			(1 << 2)
+#define    GT_FIFO_IAWRERR			(1 << 1)
+#define    GT_FIFO_IARDERR			(1 << 0)
 
 #define  GTFIFOCTL				_MMIO(0x120008)
 #define    GT_FIFO_FREE_ENTRIES_MASK		0x7f
@@ -8048,37 +8103,37 @@ enum {
 # define GEN6_OACSUNIT_CLOCK_GATE_DISABLE		(1 << 20)
 
 #define GEN7_UCGCTL4				_MMIO(0x940c)
-#define  GEN7_L3BANK2X_CLOCK_GATE_DISABLE	(1<<25)
-#define  GEN8_EU_GAUNIT_CLOCK_GATE_DISABLE	(1<<14)
+#define  GEN7_L3BANK2X_CLOCK_GATE_DISABLE	(1 << 25)
+#define  GEN8_EU_GAUNIT_CLOCK_GATE_DISABLE	(1 << 14)
 
 #define GEN6_RCGCTL1				_MMIO(0x9410)
 #define GEN6_RCGCTL2				_MMIO(0x9414)
 #define GEN6_RSTCTL				_MMIO(0x9420)
 
 #define GEN8_UCGCTL6				_MMIO(0x9430)
-#define   GEN8_GAPSUNIT_CLOCK_GATE_DISABLE	(1<<24)
-#define   GEN8_SDEUNIT_CLOCK_GATE_DISABLE	(1<<14)
-#define   GEN8_HDCUNIT_CLOCK_GATE_DISABLE_HDCREQ (1<<28)
+#define   GEN8_GAPSUNIT_CLOCK_GATE_DISABLE	(1 << 24)
+#define   GEN8_SDEUNIT_CLOCK_GATE_DISABLE	(1 << 14)
+#define   GEN8_HDCUNIT_CLOCK_GATE_DISABLE_HDCREQ (1 << 28)
 
 #define GEN6_GFXPAUSE				_MMIO(0xA000)
 #define GEN6_RPNSWREQ				_MMIO(0xA008)
-#define   GEN6_TURBO_DISABLE			(1<<31)
-#define   GEN6_FREQUENCY(x)			((x)<<25)
-#define   HSW_FREQUENCY(x)			((x)<<24)
-#define   GEN9_FREQUENCY(x)			((x)<<23)
-#define   GEN6_OFFSET(x)			((x)<<19)
-#define   GEN6_AGGRESSIVE_TURBO			(0<<15)
+#define   GEN6_TURBO_DISABLE			(1 << 31)
+#define   GEN6_FREQUENCY(x)			((x) << 25)
+#define   HSW_FREQUENCY(x)			((x) << 24)
+#define   GEN9_FREQUENCY(x)			((x) << 23)
+#define   GEN6_OFFSET(x)			((x) << 19)
+#define   GEN6_AGGRESSIVE_TURBO			(0 << 15)
 #define GEN6_RC_VIDEO_FREQ			_MMIO(0xA00C)
 #define GEN6_RC_CONTROL				_MMIO(0xA090)
-#define   GEN6_RC_CTL_RC6pp_ENABLE		(1<<16)
-#define   GEN6_RC_CTL_RC6p_ENABLE		(1<<17)
-#define   GEN6_RC_CTL_RC6_ENABLE		(1<<18)
-#define   GEN6_RC_CTL_RC1e_ENABLE		(1<<20)
-#define   GEN6_RC_CTL_RC7_ENABLE		(1<<22)
-#define   VLV_RC_CTL_CTX_RST_PARALLEL		(1<<24)
-#define   GEN7_RC_CTL_TO_MODE			(1<<28)
-#define   GEN6_RC_CTL_EI_MODE(x)		((x)<<27)
-#define   GEN6_RC_CTL_HW_ENABLE			(1<<31)
+#define   GEN6_RC_CTL_RC6pp_ENABLE		(1 << 16)
+#define   GEN6_RC_CTL_RC6p_ENABLE		(1 << 17)
+#define   GEN6_RC_CTL_RC6_ENABLE		(1 << 18)
+#define   GEN6_RC_CTL_RC1e_ENABLE		(1 << 20)
+#define   GEN6_RC_CTL_RC7_ENABLE		(1 << 22)
+#define   VLV_RC_CTL_CTX_RST_PARALLEL		(1 << 24)
+#define   GEN7_RC_CTL_TO_MODE			(1 << 28)
+#define   GEN6_RC_CTL_EI_MODE(x)		((x) << 27)
+#define   GEN6_RC_CTL_HW_ENABLE			(1 << 31)
 #define GEN6_RP_DOWN_TIMEOUT			_MMIO(0xA010)
 #define GEN6_RP_INTERRUPT_LIMITS		_MMIO(0xA014)
 #define GEN6_RPSTAT1				_MMIO(0xA01C)
@@ -8089,19 +8144,19 @@ enum {
 #define   HSW_CAGF_MASK				(0x7f << HSW_CAGF_SHIFT)
 #define   GEN9_CAGF_MASK			(0x1ff << GEN9_CAGF_SHIFT)
 #define GEN6_RP_CONTROL				_MMIO(0xA024)
-#define   GEN6_RP_MEDIA_TURBO			(1<<11)
-#define   GEN6_RP_MEDIA_MODE_MASK		(3<<9)
-#define   GEN6_RP_MEDIA_HW_TURBO_MODE		(3<<9)
-#define   GEN6_RP_MEDIA_HW_NORMAL_MODE		(2<<9)
-#define   GEN6_RP_MEDIA_HW_MODE			(1<<9)
-#define   GEN6_RP_MEDIA_SW_MODE			(0<<9)
-#define   GEN6_RP_MEDIA_IS_GFX			(1<<8)
-#define   GEN6_RP_ENABLE			(1<<7)
-#define   GEN6_RP_UP_IDLE_MIN			(0x1<<3)
-#define   GEN6_RP_UP_BUSY_AVG			(0x2<<3)
-#define   GEN6_RP_UP_BUSY_CONT			(0x4<<3)
-#define   GEN6_RP_DOWN_IDLE_AVG			(0x2<<0)
-#define   GEN6_RP_DOWN_IDLE_CONT		(0x1<<0)
+#define   GEN6_RP_MEDIA_TURBO			(1 << 11)
+#define   GEN6_RP_MEDIA_MODE_MASK		(3 << 9)
+#define   GEN6_RP_MEDIA_HW_TURBO_MODE		(3 << 9)
+#define   GEN6_RP_MEDIA_HW_NORMAL_MODE		(2 << 9)
+#define   GEN6_RP_MEDIA_HW_MODE			(1 << 9)
+#define   GEN6_RP_MEDIA_SW_MODE			(0 << 9)
+#define   GEN6_RP_MEDIA_IS_GFX			(1 << 8)
+#define   GEN6_RP_ENABLE			(1 << 7)
+#define   GEN6_RP_UP_IDLE_MIN			(0x1 << 3)
+#define   GEN6_RP_UP_BUSY_AVG			(0x2 << 3)
+#define   GEN6_RP_UP_BUSY_CONT			(0x4 << 3)
+#define   GEN6_RP_DOWN_IDLE_AVG			(0x2 << 0)
+#define   GEN6_RP_DOWN_IDLE_CONT		(0x1 << 0)
 #define GEN6_RP_UP_THRESHOLD			_MMIO(0xA02C)
 #define GEN6_RP_DOWN_THRESHOLD			_MMIO(0xA030)
 #define GEN6_RP_CUR_UP_EI			_MMIO(0xA050)
@@ -8137,15 +8192,15 @@ enum {
 #define VLV_RCEDATA				_MMIO(0xA0BC)
 #define GEN6_RC6pp_THRESHOLD			_MMIO(0xA0C0)
 #define GEN6_PMINTRMSK				_MMIO(0xA168)
-#define   GEN8_PMINTR_DISABLE_REDIRECT_TO_GUC	(1<<31)
-#define   ARAT_EXPIRED_INTRMSK			(1<<9)
+#define   GEN8_PMINTR_DISABLE_REDIRECT_TO_GUC	(1 << 31)
+#define   ARAT_EXPIRED_INTRMSK			(1 << 9)
 #define GEN8_MISC_CTRL0				_MMIO(0xA180)
 #define VLV_PWRDWNUPCTL				_MMIO(0xA294)
 #define GEN9_MEDIA_PG_IDLE_HYSTERESIS		_MMIO(0xA0C4)
 #define GEN9_RENDER_PG_IDLE_HYSTERESIS		_MMIO(0xA0C8)
 #define GEN9_PG_ENABLE				_MMIO(0xA210)
-#define GEN9_RENDER_PG_ENABLE			(1<<0)
-#define GEN9_MEDIA_PG_ENABLE			(1<<1)
+#define GEN9_RENDER_PG_ENABLE			(1 << 0)
+#define GEN9_MEDIA_PG_ENABLE			(1 << 1)
 #define GEN8_PUSHBUS_CONTROL			_MMIO(0xA248)
 #define GEN8_PUSHBUS_ENABLE			_MMIO(0xA250)
 #define GEN8_PUSHBUS_SHIFT			_MMIO(0xA25C)
@@ -8158,13 +8213,13 @@ enum {
 #define GEN6_PMIMR				_MMIO(0x44024) /* rps_lock */
 #define GEN6_PMIIR				_MMIO(0x44028)
 #define GEN6_PMIER				_MMIO(0x4402C)
-#define  GEN6_PM_MBOX_EVENT			(1<<25)
-#define  GEN6_PM_THERMAL_EVENT			(1<<24)
-#define  GEN6_PM_RP_DOWN_TIMEOUT		(1<<6)
-#define  GEN6_PM_RP_UP_THRESHOLD		(1<<5)
-#define  GEN6_PM_RP_DOWN_THRESHOLD		(1<<4)
-#define  GEN6_PM_RP_UP_EI_EXPIRED		(1<<2)
-#define  GEN6_PM_RP_DOWN_EI_EXPIRED		(1<<1)
+#define  GEN6_PM_MBOX_EVENT			(1 << 25)
+#define  GEN6_PM_THERMAL_EVENT			(1 << 24)
+#define  GEN6_PM_RP_DOWN_TIMEOUT		(1 << 6)
+#define  GEN6_PM_RP_UP_THRESHOLD		(1 << 5)
+#define  GEN6_PM_RP_DOWN_THRESHOLD		(1 << 4)
+#define  GEN6_PM_RP_UP_EI_EXPIRED		(1 << 2)
+#define  GEN6_PM_RP_DOWN_EI_EXPIRED		(1 << 1)
 #define  GEN6_PM_RPS_EVENTS			(GEN6_PM_RP_UP_THRESHOLD | \
 						 GEN6_PM_RP_DOWN_THRESHOLD | \
 						 GEN6_PM_RP_DOWN_TIMEOUT)
@@ -8173,16 +8228,16 @@ enum {
 #define GEN7_GT_SCRATCH_REG_NUM			8
 
 #define VLV_GTLC_SURVIVABILITY_REG              _MMIO(0x130098)
-#define VLV_GFX_CLK_STATUS_BIT			(1<<3)
-#define VLV_GFX_CLK_FORCE_ON_BIT		(1<<2)
+#define VLV_GFX_CLK_STATUS_BIT			(1 << 3)
+#define VLV_GFX_CLK_FORCE_ON_BIT		(1 << 2)
 
 #define GEN6_GT_GFX_RC6_LOCKED			_MMIO(0x138104)
 #define VLV_COUNTER_CONTROL			_MMIO(0x138104)
-#define   VLV_COUNT_RANGE_HIGH			(1<<15)
-#define   VLV_MEDIA_RC0_COUNT_EN		(1<<5)
-#define   VLV_RENDER_RC0_COUNT_EN		(1<<4)
-#define   VLV_MEDIA_RC6_COUNT_EN		(1<<1)
-#define   VLV_RENDER_RC6_COUNT_EN		(1<<0)
+#define   VLV_COUNT_RANGE_HIGH			(1 << 15)
+#define   VLV_MEDIA_RC0_COUNT_EN		(1 << 5)
+#define   VLV_RENDER_RC0_COUNT_EN		(1 << 4)
+#define   VLV_MEDIA_RC6_COUNT_EN		(1 << 1)
+#define   VLV_RENDER_RC6_COUNT_EN		(1 << 0)
 #define GEN6_GT_GFX_RC6				_MMIO(0x138108)
 #define VLV_GT_RENDER_RC6			_MMIO(0x138108)
 #define VLV_GT_MEDIA_RC6			_MMIO(0x13810C)
@@ -8193,7 +8248,7 @@ enum {
 #define VLV_MEDIA_C0_COUNT			_MMIO(0x13811C)
 
 #define GEN6_PCODE_MAILBOX			_MMIO(0x138124)
-#define   GEN6_PCODE_READY			(1<<31)
+#define   GEN6_PCODE_READY			(1 << 31)
 #define   GEN6_PCODE_ERROR_MASK			0xFF
 #define     GEN6_PCODE_SUCCESS			0x0
 #define     GEN6_PCODE_ILLEGAL_CMD		0x1
@@ -8237,7 +8292,7 @@ enum {
 #define GEN6_PCODE_DATA1			_MMIO(0x13812C)
 
 #define GEN6_GT_CORE_STATUS		_MMIO(0x138060)
-#define   GEN6_CORE_CPD_STATE_MASK	(7<<4)
+#define   GEN6_CORE_CPD_STATE_MASK	(7 << 4)
 #define   GEN6_RCn_MASK			7
 #define   GEN6_RC0			0
 #define   GEN6_RC3			2
@@ -8249,26 +8304,26 @@ enum {
 
 #define CHV_POWER_SS0_SIG1		_MMIO(0xa720)
 #define CHV_POWER_SS1_SIG1		_MMIO(0xa728)
-#define   CHV_SS_PG_ENABLE		(1<<1)
-#define   CHV_EU08_PG_ENABLE		(1<<9)
-#define   CHV_EU19_PG_ENABLE		(1<<17)
-#define   CHV_EU210_PG_ENABLE		(1<<25)
+#define   CHV_SS_PG_ENABLE		(1 << 1)
+#define   CHV_EU08_PG_ENABLE		(1 << 9)
+#define   CHV_EU19_PG_ENABLE		(1 << 17)
+#define   CHV_EU210_PG_ENABLE		(1 << 25)
 
 #define CHV_POWER_SS0_SIG2		_MMIO(0xa724)
 #define CHV_POWER_SS1_SIG2		_MMIO(0xa72c)
-#define   CHV_EU311_PG_ENABLE		(1<<1)
+#define   CHV_EU311_PG_ENABLE		(1 << 1)
 
-#define GEN9_SLICE_PGCTL_ACK(slice)	_MMIO(0x804c + (slice)*0x4)
+#define GEN9_SLICE_PGCTL_ACK(slice)	_MMIO(0x804c + (slice) * 0x4)
 #define GEN10_SLICE_PGCTL_ACK(slice)	_MMIO(0x804c + ((slice) / 3) * 0x34 + \
 					      ((slice) % 3) * 0x4)
 #define   GEN9_PGCTL_SLICE_ACK		(1 << 0)
-#define   GEN9_PGCTL_SS_ACK(subslice)	(1 << (2 + (subslice)*2))
+#define   GEN9_PGCTL_SS_ACK(subslice)	(1 << (2 + (subslice) * 2))
 #define   GEN10_PGCTL_VALID_SS_MASK(slice) ((slice) == 0 ? 0x7F : 0x1F)
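On GEN10 the per-slice ack registers are no longer one flat array: slices are grouped in threes, with a 0x4 stride inside a group and a 0x34 stride between groups, hence the / 3 and % 3 split. A sketch comparing it with the flat GEN9 layout (_MMIO dropped):

    #include <assert.h>

    #define GEN9_SLICE_PGCTL_ACK(slice)   (0x804c + (slice) * 0x4)
    #define GEN10_SLICE_PGCTL_ACK(slice)  (0x804c + ((slice) / 3) * 0x34 + \
                                           ((slice) % 3) * 0x4)

    int main(void)
    {
        /* Inside the first group of three, the two layouts agree... */
        assert(GEN10_SLICE_PGCTL_ACK(0) == GEN9_SLICE_PGCTL_ACK(0));
        assert(GEN10_SLICE_PGCTL_ACK(2) == GEN9_SLICE_PGCTL_ACK(2));
        /* ...but slice 3 jumps to the start of the next group. */
        assert(GEN10_SLICE_PGCTL_ACK(3) == 0x804c + 0x34);
        return 0;
    }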
 
-#define GEN9_SS01_EU_PGCTL_ACK(slice)	_MMIO(0x805c + (slice)*0x8)
+#define GEN9_SS01_EU_PGCTL_ACK(slice)	_MMIO(0x805c + (slice) * 0x8)
 #define GEN10_SS01_EU_PGCTL_ACK(slice)	_MMIO(0x805c + ((slice) / 3) * 0x30 + \
 					      ((slice) % 3) * 0x8)
-#define GEN9_SS23_EU_PGCTL_ACK(slice)	_MMIO(0x8060 + (slice)*0x8)
+#define GEN9_SS23_EU_PGCTL_ACK(slice)	_MMIO(0x8060 + (slice) * 0x8)
 #define GEN10_SS23_EU_PGCTL_ACK(slice)	_MMIO(0x8060 + ((slice) / 3) * 0x30 + \
 					      ((slice) % 3) * 0x8)
 #define   GEN9_PGCTL_SSA_EU08_ACK	(1 << 0)
@@ -8281,10 +8336,10 @@ enum {
 #define   GEN9_PGCTL_SSB_EU311_ACK	(1 << 14)
 
 #define GEN7_MISCCPCTL				_MMIO(0x9424)
-#define   GEN7_DOP_CLOCK_GATE_ENABLE		(1<<0)
-#define   GEN8_DOP_CLOCK_GATE_CFCLK_ENABLE	(1<<2)
-#define   GEN8_DOP_CLOCK_GATE_GUC_ENABLE	(1<<4)
-#define   GEN8_DOP_CLOCK_GATE_MEDIA_ENABLE     (1<<6)
+#define   GEN7_DOP_CLOCK_GATE_ENABLE		(1 << 0)
+#define   GEN8_DOP_CLOCK_GATE_CFCLK_ENABLE	(1 << 2)
+#define   GEN8_DOP_CLOCK_GATE_GUC_ENABLE	(1 << 4)
+#define   GEN8_DOP_CLOCK_GATE_MEDIA_ENABLE     (1 << 6)
 
 #define GEN8_GARBCNTL				_MMIO(0xB004)
 #define   GEN9_GAPS_TSV_CREDIT_DISABLE		(1 << 7)
@@ -8313,38 +8368,38 @@ enum {
 
 /* IVYBRIDGE DPF */
 #define GEN7_L3CDERRST1(slice)		_MMIO(0xB008 + (slice) * 0x200) /* L3CD Error Status 1 */
-#define   GEN7_L3CDERRST1_ROW_MASK	(0x7ff<<14)
-#define   GEN7_PARITY_ERROR_VALID	(1<<13)
-#define   GEN7_L3CDERRST1_BANK_MASK	(3<<11)
-#define   GEN7_L3CDERRST1_SUBBANK_MASK	(7<<8)
+#define   GEN7_L3CDERRST1_ROW_MASK	(0x7ff << 14)
+#define   GEN7_PARITY_ERROR_VALID	(1 << 13)
+#define   GEN7_L3CDERRST1_BANK_MASK	(3 << 11)
+#define   GEN7_L3CDERRST1_SUBBANK_MASK	(7 << 8)
 #define GEN7_PARITY_ERROR_ROW(reg) \
-		((reg & GEN7_L3CDERRST1_ROW_MASK) >> 14)
+		(((reg) & GEN7_L3CDERRST1_ROW_MASK) >> 14)
 #define GEN7_PARITY_ERROR_BANK(reg) \
-		((reg & GEN7_L3CDERRST1_BANK_MASK) >> 11)
+		(((reg) & GEN7_L3CDERRST1_BANK_MASK) >> 11)
 #define GEN7_PARITY_ERROR_SUBBANK(reg) \
-		((reg & GEN7_L3CDERRST1_SUBBANK_MASK) >> 8)
-#define   GEN7_L3CDERRST1_ENABLE	(1<<7)
+		(((reg) & GEN7_L3CDERRST1_SUBBANK_MASK) >> 8)
+#define   GEN7_L3CDERRST1_ENABLE	(1 << 7)
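The row/bank/subbank accessors are mask-and-shift field extraction, and the parentheses added around (reg) keep them safe for expression arguments. A decoding sketch against a synthesized status word:

    #include <assert.h>
    #include <stdint.h>

    #define GEN7_L3CDERRST1_ROW_MASK      (0x7ffu << 14)
    #define GEN7_L3CDERRST1_BANK_MASK     (3u << 11)
    #define GEN7_L3CDERRST1_SUBBANK_MASK  (7u << 8)
    #define GEN7_PARITY_ERROR_ROW(reg) \
            (((reg) & GEN7_L3CDERRST1_ROW_MASK) >> 14)
    #define GEN7_PARITY_ERROR_BANK(reg) \
            (((reg) & GEN7_L3CDERRST1_BANK_MASK) >> 11)
    #define GEN7_PARITY_ERROR_SUBBANK(reg) \
            (((reg) & GEN7_L3CDERRST1_SUBBANK_MASK) >> 8)

    int main(void)
    {
        /* Synthesize a status word: row 0x123, bank 2, subbank 5. */
        uint32_t reg = (0x123u << 14) | (2u << 11) | (5u << 8);

        assert(GEN7_PARITY_ERROR_ROW(reg) == 0x123);
        assert(GEN7_PARITY_ERROR_BANK(reg) == 2);
        assert(GEN7_PARITY_ERROR_SUBBANK(reg) == 5);
        return 0;
    }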
 
 #define GEN7_L3LOG(slice, i)		_MMIO(0xB070 + (slice) * 0x200 + (i) * 4)
 #define GEN7_L3LOG_SIZE			0x80
 
 #define GEN7_HALF_SLICE_CHICKEN1	_MMIO(0xe100) /* IVB GT1 + VLV */
 #define GEN7_HALF_SLICE_CHICKEN1_GT2	_MMIO(0xf100)
-#define   GEN7_MAX_PS_THREAD_DEP		(8<<12)
-#define   GEN7_SINGLE_SUBSCAN_DISPATCH_ENABLE	(1<<10)
-#define   GEN7_SBE_SS_CACHE_DISPATCH_PORT_SHARING_DISABLE	(1<<4)
-#define   GEN7_PSD_SINGLE_PORT_DISPATCH_ENABLE	(1<<3)
+#define   GEN7_MAX_PS_THREAD_DEP		(8 << 12)
+#define   GEN7_SINGLE_SUBSCAN_DISPATCH_ENABLE	(1 << 10)
+#define   GEN7_SBE_SS_CACHE_DISPATCH_PORT_SHARING_DISABLE	(1 << 4)
+#define   GEN7_PSD_SINGLE_PORT_DISPATCH_ENABLE	(1 << 3)
 
 #define GEN9_HALF_SLICE_CHICKEN5	_MMIO(0xe188)
-#define   GEN9_DG_MIRROR_FIX_ENABLE	(1<<5)
-#define   GEN9_CCS_TLB_PREFETCH_ENABLE	(1<<3)
+#define   GEN9_DG_MIRROR_FIX_ENABLE	(1 << 5)
+#define   GEN9_CCS_TLB_PREFETCH_ENABLE	(1 << 3)
 
 #define GEN8_ROW_CHICKEN		_MMIO(0xe4f0)
-#define   FLOW_CONTROL_ENABLE		(1<<15)
-#define   PARTIAL_INSTRUCTION_SHOOTDOWN_DISABLE	(1<<8)
-#define   STALL_DOP_GATING_DISABLE		(1<<5)
-#define   THROTTLE_12_5				(7<<2)
-#define   DISABLE_EARLY_EOT			(1<<1)
+#define   FLOW_CONTROL_ENABLE		(1 << 15)
+#define   PARTIAL_INSTRUCTION_SHOOTDOWN_DISABLE	(1 << 8)
+#define   STALL_DOP_GATING_DISABLE		(1 << 5)
+#define   THROTTLE_12_5				(7 << 2)
+#define   DISABLE_EARLY_EOT			(1 << 1)
 
 #define GEN7_ROW_CHICKEN2		_MMIO(0xe4f4)
 #define GEN7_ROW_CHICKEN2_GT2		_MMIO(0xf4f4)
@@ -8356,19 +8411,19 @@ enum {
 #define  HSW_ROW_CHICKEN3_L3_GLOBAL_ATOMICS_DISABLE    (1 << 6)
 
 #define HALF_SLICE_CHICKEN2		_MMIO(0xe180)
-#define   GEN8_ST_PO_DISABLE		(1<<13)
+#define   GEN8_ST_PO_DISABLE		(1 << 13)
 
 #define HALF_SLICE_CHICKEN3		_MMIO(0xe184)
-#define   HSW_SAMPLE_C_PERFORMANCE	(1<<9)
-#define   GEN8_CENTROID_PIXEL_OPT_DIS	(1<<8)
-#define   GEN9_DISABLE_OCL_OOB_SUPPRESS_LOGIC	(1<<5)
-#define   CNL_FAST_ANISO_L1_BANKING_FIX	(1<<4)
-#define   GEN8_SAMPLER_POWER_BYPASS_DIS	(1<<1)
+#define   HSW_SAMPLE_C_PERFORMANCE	(1 << 9)
+#define   GEN8_CENTROID_PIXEL_OPT_DIS	(1 << 8)
+#define   GEN9_DISABLE_OCL_OOB_SUPPRESS_LOGIC	(1 << 5)
+#define   CNL_FAST_ANISO_L1_BANKING_FIX	(1 << 4)
+#define   GEN8_SAMPLER_POWER_BYPASS_DIS	(1 << 1)
 
 #define GEN9_HALF_SLICE_CHICKEN7	_MMIO(0xe194)
-#define   GEN9_SAMPLER_HASH_COMPRESSED_READ_ADDR	(1<<8)
-#define   GEN9_ENABLE_YV12_BUGFIX	(1<<4)
-#define   GEN9_ENABLE_GPGPU_PREEMPTION	(1<<2)
+#define   GEN9_SAMPLER_HASH_COMPRESSED_READ_ADDR	(1 << 8)
+#define   GEN9_ENABLE_YV12_BUGFIX	(1 << 4)
+#define   GEN9_ENABLE_GPGPU_PREEMPTION	(1 << 2)
 
 /* Audio */
 #define G4X_AUD_VID_DID			_MMIO(dev_priv->info.display_mmio_offset + 0x62020)
@@ -8521,9 +8576,9 @@ enum {
 #define   HSW_PWR_WELL_CTL_REQ(pw)		(1 << (_HSW_PW_SHIFT(pw) + 1))
 #define   HSW_PWR_WELL_CTL_STATE(pw)		(1 << _HSW_PW_SHIFT(pw))
 #define HSW_PWR_WELL_CTL5			_MMIO(0x45410)
-#define   HSW_PWR_WELL_ENABLE_SINGLE_STEP	(1<<31)
-#define   HSW_PWR_WELL_PWR_GATE_OVERRIDE	(1<<20)
-#define   HSW_PWR_WELL_FORCE_ON			(1<<19)
+#define   HSW_PWR_WELL_ENABLE_SINGLE_STEP	(1 << 31)
+#define   HSW_PWR_WELL_PWR_GATE_OVERRIDE	(1 << 20)
+#define   HSW_PWR_WELL_FORCE_ON			(1 << 19)
 #define HSW_PWR_WELL_CTL6			_MMIO(0x45414)
 
 /* SKL Fuse Status */
@@ -8534,7 +8589,7 @@ enum skl_power_gate {
 };
 
 #define SKL_FUSE_STATUS				_MMIO(0x42000)
-#define  SKL_FUSE_DOWNLOAD_STATUS		(1<<31)
+#define  SKL_FUSE_DOWNLOAD_STATUS		(1 << 31)
 /* PG0 (HW control->no power well ID), PG1..PG2 (SKL_DISP_PW1..SKL_DISP_PW2) */
 #define  SKL_PW_TO_PG(pw)			((pw) - SKL_DISP_PW_1 + SKL_PG1)
 #define  SKL_FUSE_PG_DIST_STATUS(pg)		(1 << (27 - (pg)))
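The fuse distribution-status bits count down from bit 27, one bit per power gate, which is why the shift is 27 - (pg). A sketch with plain indices standing in for the SKL_PG* enumerators (assumed here to start at 0):

    #include <assert.h>

    #define SKL_FUSE_PG_DIST_STATUS(pg)  (1u << (27 - (pg)))

    int main(void)
    {
        /* PG0 sits at bit 27 and each further gate one bit lower. */
        assert(SKL_FUSE_PG_DIST_STATUS(0) == (1u << 27));
        assert(SKL_FUSE_PG_DIST_STATUS(1) == (1u << 26));
        assert(SKL_FUSE_PG_DIST_STATUS(2) == (1u << 25));
        return 0;
    }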
@@ -8549,8 +8604,8 @@ enum skl_power_gate {
 						    _CNL_AUX_ANAOVRD1_C, \
 						    _CNL_AUX_ANAOVRD1_D, \
 						    _CNL_AUX_ANAOVRD1_F))
-#define   CNL_AUX_ANAOVRD1_ENABLE	(1<<16)
-#define   CNL_AUX_ANAOVRD1_LDO_BYPASS	(1<<23)
+#define   CNL_AUX_ANAOVRD1_ENABLE	(1 << 16)
+#define   CNL_AUX_ANAOVRD1_LDO_BYPASS	(1 << 23)
 
 /* HDCP Key Registers */
 #define HDCP_KEY_CONF			_MMIO(0x66c00)
@@ -8595,7 +8650,7 @@ enum skl_power_gate {
 #define HDCP_SHA_V_PRIME_H2		_MMIO(0x66d0C)
 #define HDCP_SHA_V_PRIME_H3		_MMIO(0x66d10)
 #define HDCP_SHA_V_PRIME_H4		_MMIO(0x66d14)
-#define HDCP_SHA_V_PRIME(h)		_MMIO((0x66d04 + h * 4))
+#define HDCP_SHA_V_PRIME(h)		_MMIO((0x66d04 + (h) * 4))
 #define HDCP_SHA_TEXT			_MMIO(0x66d18)
 
 /* HDCP Auth Registers */
@@ -8611,7 +8666,7 @@ enum skl_power_gate {
 					  _PORTC_HDCP_AUTHENC, \
 					  _PORTD_HDCP_AUTHENC, \
 					  _PORTE_HDCP_AUTHENC, \
-					  _PORTF_HDCP_AUTHENC) + x)
+					  _PORTF_HDCP_AUTHENC) + (x))
 #define PORT_HDCP_CONF(port)		_PORT_HDCP_AUTHENC(port, 0x0)
 #define  HDCP_CONF_CAPTURE_AN		BIT(0)
 #define  HDCP_CONF_AUTH_AND_ENC		(BIT(1) | BIT(0))
@@ -8632,7 +8687,7 @@ enum skl_power_gate {
 #define  HDCP_STATUS_R0_READY		BIT(18)
 #define  HDCP_STATUS_AN_READY		BIT(17)
 #define  HDCP_STATUS_CIPHER		BIT(16)
-#define  HDCP_STATUS_FRAME_CNT(x)	((x >> 8) & 0xff)
+#define  HDCP_STATUS_FRAME_CNT(x)	(((x) >> 8) & 0xff)
 
 /* Per-pipe DDI Function Control */
 #define _TRANS_DDI_FUNC_CTL_A		0x60400
@@ -8641,37 +8696,37 @@ enum skl_power_gate {
 #define _TRANS_DDI_FUNC_CTL_EDP		0x6F400
 #define TRANS_DDI_FUNC_CTL(tran) _MMIO_TRANS2(tran, _TRANS_DDI_FUNC_CTL_A)
 
-#define  TRANS_DDI_FUNC_ENABLE		(1<<31)
+#define  TRANS_DDI_FUNC_ENABLE		(1 << 31)
 /* Those bits are ignored by pipe EDP since it can only connect to DDI A */
-#define  TRANS_DDI_PORT_MASK		(7<<28)
+#define  TRANS_DDI_PORT_MASK		(7 << 28)
 #define  TRANS_DDI_PORT_SHIFT		28
-#define  TRANS_DDI_SELECT_PORT(x)	((x)<<28)
-#define  TRANS_DDI_PORT_NONE		(0<<28)
-#define  TRANS_DDI_MODE_SELECT_MASK	(7<<24)
-#define  TRANS_DDI_MODE_SELECT_HDMI	(0<<24)
-#define  TRANS_DDI_MODE_SELECT_DVI	(1<<24)
-#define  TRANS_DDI_MODE_SELECT_DP_SST	(2<<24)
-#define  TRANS_DDI_MODE_SELECT_DP_MST	(3<<24)
-#define  TRANS_DDI_MODE_SELECT_FDI	(4<<24)
-#define  TRANS_DDI_BPC_MASK		(7<<20)
-#define  TRANS_DDI_BPC_8		(0<<20)
-#define  TRANS_DDI_BPC_10		(1<<20)
-#define  TRANS_DDI_BPC_6		(2<<20)
-#define  TRANS_DDI_BPC_12		(3<<20)
-#define  TRANS_DDI_PVSYNC		(1<<17)
-#define  TRANS_DDI_PHSYNC		(1<<16)
-#define  TRANS_DDI_EDP_INPUT_MASK	(7<<12)
-#define  TRANS_DDI_EDP_INPUT_A_ON	(0<<12)
-#define  TRANS_DDI_EDP_INPUT_A_ONOFF	(4<<12)
-#define  TRANS_DDI_EDP_INPUT_B_ONOFF	(5<<12)
-#define  TRANS_DDI_EDP_INPUT_C_ONOFF	(6<<12)
-#define  TRANS_DDI_HDCP_SIGNALLING	(1<<9)
-#define  TRANS_DDI_DP_VC_PAYLOAD_ALLOC	(1<<8)
-#define  TRANS_DDI_HDMI_SCRAMBLER_CTS_ENABLE (1<<7)
-#define  TRANS_DDI_HDMI_SCRAMBLER_RESET_FREQ (1<<6)
-#define  TRANS_DDI_BFI_ENABLE		(1<<4)
-#define  TRANS_DDI_HIGH_TMDS_CHAR_RATE	(1<<4)
-#define  TRANS_DDI_HDMI_SCRAMBLING	(1<<0)
+#define  TRANS_DDI_SELECT_PORT(x)	((x) << 28)
+#define  TRANS_DDI_PORT_NONE		(0 << 28)
+#define  TRANS_DDI_MODE_SELECT_MASK	(7 << 24)
+#define  TRANS_DDI_MODE_SELECT_HDMI	(0 << 24)
+#define  TRANS_DDI_MODE_SELECT_DVI	(1 << 24)
+#define  TRANS_DDI_MODE_SELECT_DP_SST	(2 << 24)
+#define  TRANS_DDI_MODE_SELECT_DP_MST	(3 << 24)
+#define  TRANS_DDI_MODE_SELECT_FDI	(4 << 24)
+#define  TRANS_DDI_BPC_MASK		(7 << 20)
+#define  TRANS_DDI_BPC_8		(0 << 20)
+#define  TRANS_DDI_BPC_10		(1 << 20)
+#define  TRANS_DDI_BPC_6		(2 << 20)
+#define  TRANS_DDI_BPC_12		(3 << 20)
+#define  TRANS_DDI_PVSYNC		(1 << 17)
+#define  TRANS_DDI_PHSYNC		(1 << 16)
+#define  TRANS_DDI_EDP_INPUT_MASK	(7 << 12)
+#define  TRANS_DDI_EDP_INPUT_A_ON	(0 << 12)
+#define  TRANS_DDI_EDP_INPUT_A_ONOFF	(4 << 12)
+#define  TRANS_DDI_EDP_INPUT_B_ONOFF	(5 << 12)
+#define  TRANS_DDI_EDP_INPUT_C_ONOFF	(6 << 12)
+#define  TRANS_DDI_HDCP_SIGNALLING	(1 << 9)
+#define  TRANS_DDI_DP_VC_PAYLOAD_ALLOC	(1 << 8)
+#define  TRANS_DDI_HDMI_SCRAMBLER_CTS_ENABLE (1 << 7)
+#define  TRANS_DDI_HDMI_SCRAMBLER_RESET_FREQ (1 << 6)
+#define  TRANS_DDI_BFI_ENABLE		(1 << 4)
+#define  TRANS_DDI_HIGH_TMDS_CHAR_RATE	(1 << 4)
+#define  TRANS_DDI_HDMI_SCRAMBLING	(1 << 0)
 #define  TRANS_DDI_HDMI_SCRAMBLING_MASK (TRANS_DDI_HDMI_SCRAMBLER_CTS_ENABLE \
 					| TRANS_DDI_HDMI_SCRAMBLER_RESET_FREQ \
 					| TRANS_DDI_HDMI_SCRAMBLING)
@@ -8680,28 +8735,29 @@ enum skl_power_gate {
 #define _DP_TP_CTL_A			0x64040
 #define _DP_TP_CTL_B			0x64140
 #define DP_TP_CTL(port) _MMIO_PORT(port, _DP_TP_CTL_A, _DP_TP_CTL_B)
-#define  DP_TP_CTL_ENABLE			(1<<31)
-#define  DP_TP_CTL_MODE_SST			(0<<27)
-#define  DP_TP_CTL_MODE_MST			(1<<27)
-#define  DP_TP_CTL_FORCE_ACT			(1<<25)
-#define  DP_TP_CTL_ENHANCED_FRAME_ENABLE	(1<<18)
-#define  DP_TP_CTL_FDI_AUTOTRAIN		(1<<15)
-#define  DP_TP_CTL_LINK_TRAIN_MASK		(7<<8)
-#define  DP_TP_CTL_LINK_TRAIN_PAT1		(0<<8)
-#define  DP_TP_CTL_LINK_TRAIN_PAT2		(1<<8)
-#define  DP_TP_CTL_LINK_TRAIN_PAT3		(4<<8)
-#define  DP_TP_CTL_LINK_TRAIN_IDLE		(2<<8)
-#define  DP_TP_CTL_LINK_TRAIN_NORMAL		(3<<8)
-#define  DP_TP_CTL_SCRAMBLE_DISABLE		(1<<7)
+#define  DP_TP_CTL_ENABLE			(1 << 31)
+#define  DP_TP_CTL_MODE_SST			(0 << 27)
+#define  DP_TP_CTL_MODE_MST			(1 << 27)
+#define  DP_TP_CTL_FORCE_ACT			(1 << 25)
+#define  DP_TP_CTL_ENHANCED_FRAME_ENABLE	(1 << 18)
+#define  DP_TP_CTL_FDI_AUTOTRAIN		(1 << 15)
+#define  DP_TP_CTL_LINK_TRAIN_MASK		(7 << 8)
+#define  DP_TP_CTL_LINK_TRAIN_PAT1		(0 << 8)
+#define  DP_TP_CTL_LINK_TRAIN_PAT2		(1 << 8)
+#define  DP_TP_CTL_LINK_TRAIN_PAT3		(4 << 8)
+#define  DP_TP_CTL_LINK_TRAIN_PAT4		(5 << 8)
+#define  DP_TP_CTL_LINK_TRAIN_IDLE		(2 << 8)
+#define  DP_TP_CTL_LINK_TRAIN_NORMAL		(3 << 8)
+#define  DP_TP_CTL_SCRAMBLE_DISABLE		(1 << 7)
 
 /* DisplayPort Transport Status */
 #define _DP_TP_STATUS_A			0x64044
 #define _DP_TP_STATUS_B			0x64144
 #define DP_TP_STATUS(port) _MMIO_PORT(port, _DP_TP_STATUS_A, _DP_TP_STATUS_B)
-#define  DP_TP_STATUS_IDLE_DONE			(1<<25)
-#define  DP_TP_STATUS_ACT_SENT			(1<<24)
-#define  DP_TP_STATUS_MODE_STATUS_MST		(1<<23)
-#define  DP_TP_STATUS_AUTOTRAIN_DONE		(1<<12)
+#define  DP_TP_STATUS_IDLE_DONE			(1 << 25)
+#define  DP_TP_STATUS_ACT_SENT			(1 << 24)
+#define  DP_TP_STATUS_MODE_STATUS_MST		(1 << 23)
+#define  DP_TP_STATUS_AUTOTRAIN_DONE		(1 << 12)
 #define  DP_TP_STATUS_PAYLOAD_MAPPING_VC2	(3 << 8)
 #define  DP_TP_STATUS_PAYLOAD_MAPPING_VC1	(3 << 4)
 #define  DP_TP_STATUS_PAYLOAD_MAPPING_VC0	(3 << 0)
@@ -8710,16 +8766,16 @@ enum skl_power_gate {
 #define _DDI_BUF_CTL_A				0x64000
 #define _DDI_BUF_CTL_B				0x64100
 #define DDI_BUF_CTL(port) _MMIO_PORT(port, _DDI_BUF_CTL_A, _DDI_BUF_CTL_B)
-#define  DDI_BUF_CTL_ENABLE			(1<<31)
+#define  DDI_BUF_CTL_ENABLE			(1 << 31)
 #define  DDI_BUF_TRANS_SELECT(n)	((n) << 24)
-#define  DDI_BUF_EMP_MASK			(0xf<<24)
-#define  DDI_BUF_PORT_REVERSAL			(1<<16)
-#define  DDI_BUF_IS_IDLE			(1<<7)
-#define  DDI_A_4_LANES				(1<<4)
+#define  DDI_BUF_EMP_MASK			(0xf << 24)
+#define  DDI_BUF_PORT_REVERSAL			(1 << 16)
+#define  DDI_BUF_IS_IDLE			(1 << 7)
+#define  DDI_A_4_LANES				(1 << 4)
 #define  DDI_PORT_WIDTH(width)			(((width) - 1) << 1)
 #define  DDI_PORT_WIDTH_MASK			(7 << 1)
 #define  DDI_PORT_WIDTH_SHIFT			1
-#define  DDI_INIT_DISPLAY_DETECTED		(1<<0)
+#define  DDI_INIT_DISPLAY_DETECTED		(1 << 0)
 
 /* DDI Buffer Translations */
 #define _DDI_BUF_TRANS_A		0x64E00
@@ -8734,95 +8790,99 @@ enum skl_power_gate {
 #define SBI_ADDR			_MMIO(0xC6000)
 #define SBI_DATA			_MMIO(0xC6004)
 #define SBI_CTL_STAT			_MMIO(0xC6008)
-#define  SBI_CTL_DEST_ICLK		(0x0<<16)
-#define  SBI_CTL_DEST_MPHY		(0x1<<16)
-#define  SBI_CTL_OP_IORD		(0x2<<8)
-#define  SBI_CTL_OP_IOWR		(0x3<<8)
-#define  SBI_CTL_OP_CRRD		(0x6<<8)
-#define  SBI_CTL_OP_CRWR		(0x7<<8)
-#define  SBI_RESPONSE_FAIL		(0x1<<1)
-#define  SBI_RESPONSE_SUCCESS		(0x0<<1)
-#define  SBI_BUSY			(0x1<<0)
-#define  SBI_READY			(0x0<<0)
+#define  SBI_CTL_DEST_ICLK		(0x0 << 16)
+#define  SBI_CTL_DEST_MPHY		(0x1 << 16)
+#define  SBI_CTL_OP_IORD		(0x2 << 8)
+#define  SBI_CTL_OP_IOWR		(0x3 << 8)
+#define  SBI_CTL_OP_CRRD		(0x6 << 8)
+#define  SBI_CTL_OP_CRWR		(0x7 << 8)
+#define  SBI_RESPONSE_FAIL		(0x1 << 1)
+#define  SBI_RESPONSE_SUCCESS		(0x0 << 1)
+#define  SBI_BUSY			(0x1 << 0)
+#define  SBI_READY			(0x0 << 0)
 
 /* SBI offsets */
 #define  SBI_SSCDIVINTPHASE			0x0200
 #define  SBI_SSCDIVINTPHASE6			0x0600
 #define   SBI_SSCDIVINTPHASE_DIVSEL_SHIFT	1
-#define   SBI_SSCDIVINTPHASE_DIVSEL_MASK	(0x7f<<1)
-#define   SBI_SSCDIVINTPHASE_DIVSEL(x)		((x)<<1)
+#define   SBI_SSCDIVINTPHASE_DIVSEL_MASK	(0x7f << 1)
+#define   SBI_SSCDIVINTPHASE_DIVSEL(x)		((x) << 1)
 #define   SBI_SSCDIVINTPHASE_INCVAL_SHIFT	8
-#define   SBI_SSCDIVINTPHASE_INCVAL_MASK	(0x7f<<8)
-#define   SBI_SSCDIVINTPHASE_INCVAL(x)		((x)<<8)
-#define   SBI_SSCDIVINTPHASE_DIR(x)		((x)<<15)
-#define   SBI_SSCDIVINTPHASE_PROPAGATE		(1<<0)
+#define   SBI_SSCDIVINTPHASE_INCVAL_MASK	(0x7f << 8)
+#define   SBI_SSCDIVINTPHASE_INCVAL(x)		((x) << 8)
+#define   SBI_SSCDIVINTPHASE_DIR(x)		((x) << 15)
+#define   SBI_SSCDIVINTPHASE_PROPAGATE		(1 << 0)
 #define  SBI_SSCDITHPHASE			0x0204
 #define  SBI_SSCCTL				0x020c
 #define  SBI_SSCCTL6				0x060C
-#define   SBI_SSCCTL_PATHALT			(1<<3)
-#define   SBI_SSCCTL_DISABLE			(1<<0)
+#define   SBI_SSCCTL_PATHALT			(1 << 3)
+#define   SBI_SSCCTL_DISABLE			(1 << 0)
 #define  SBI_SSCAUXDIV6				0x0610
 #define   SBI_SSCAUXDIV_FINALDIV2SEL_SHIFT	4
-#define   SBI_SSCAUXDIV_FINALDIV2SEL_MASK	(1<<4)
-#define   SBI_SSCAUXDIV_FINALDIV2SEL(x)		((x)<<4)
+#define   SBI_SSCAUXDIV_FINALDIV2SEL_MASK	(1 << 4)
+#define   SBI_SSCAUXDIV_FINALDIV2SEL(x)		((x) << 4)
 #define  SBI_DBUFF0				0x2a00
 #define  SBI_GEN0				0x1f00
-#define   SBI_GEN0_CFG_BUFFENABLE_DISABLE	(1<<0)
+#define   SBI_GEN0_CFG_BUFFENABLE_DISABLE	(1 << 0)
 
 /* LPT PIXCLK_GATE */
 #define PIXCLK_GATE			_MMIO(0xC6020)
-#define  PIXCLK_GATE_UNGATE		(1<<0)
-#define  PIXCLK_GATE_GATE		(0<<0)
+#define  PIXCLK_GATE_UNGATE		(1 << 0)
+#define  PIXCLK_GATE_GATE		(0 << 0)
 
 /* SPLL */
 #define SPLL_CTL			_MMIO(0x46020)
-#define  SPLL_PLL_ENABLE		(1<<31)
-#define  SPLL_PLL_SSC			(1<<28)
-#define  SPLL_PLL_NON_SSC		(2<<28)
-#define  SPLL_PLL_LCPLL			(3<<28)
-#define  SPLL_PLL_REF_MASK		(3<<28)
-#define  SPLL_PLL_FREQ_810MHz		(0<<26)
-#define  SPLL_PLL_FREQ_1350MHz		(1<<26)
-#define  SPLL_PLL_FREQ_2700MHz		(2<<26)
-#define  SPLL_PLL_FREQ_MASK		(3<<26)
+#define  SPLL_PLL_ENABLE		(1 << 31)
+#define  SPLL_PLL_SSC			(1 << 28)
+#define  SPLL_PLL_NON_SSC		(2 << 28)
+#define  SPLL_PLL_LCPLL			(3 << 28)
+#define  SPLL_PLL_REF_MASK		(3 << 28)
+#define  SPLL_PLL_FREQ_810MHz		(0 << 26)
+#define  SPLL_PLL_FREQ_1350MHz		(1 << 26)
+#define  SPLL_PLL_FREQ_2700MHz		(2 << 26)
+#define  SPLL_PLL_FREQ_MASK		(3 << 26)
 
 /* WRPLL */
 #define _WRPLL_CTL1			0x46040
 #define _WRPLL_CTL2			0x46060
 #define WRPLL_CTL(pll)			_MMIO_PIPE(pll, _WRPLL_CTL1, _WRPLL_CTL2)
-#define  WRPLL_PLL_ENABLE		(1<<31)
-#define  WRPLL_PLL_SSC			(1<<28)
-#define  WRPLL_PLL_NON_SSC		(2<<28)
-#define  WRPLL_PLL_LCPLL		(3<<28)
-#define  WRPLL_PLL_REF_MASK		(3<<28)
+#define  WRPLL_PLL_ENABLE		(1 << 31)
+#define  WRPLL_PLL_SSC			(1 << 28)
+#define  WRPLL_PLL_NON_SSC		(2 << 28)
+#define  WRPLL_PLL_LCPLL		(3 << 28)
+#define  WRPLL_PLL_REF_MASK		(3 << 28)
 /* WRPLL divider programming */
-#define  WRPLL_DIVIDER_REFERENCE(x)	((x)<<0)
+#define  WRPLL_DIVIDER_REFERENCE(x)	((x) << 0)
 #define  WRPLL_DIVIDER_REF_MASK		(0xff)
-#define  WRPLL_DIVIDER_POST(x)		((x)<<8)
-#define  WRPLL_DIVIDER_POST_MASK	(0x3f<<8)
+#define  WRPLL_DIVIDER_POST(x)		((x) << 8)
+#define  WRPLL_DIVIDER_POST_MASK	(0x3f << 8)
 #define  WRPLL_DIVIDER_POST_SHIFT	8
-#define  WRPLL_DIVIDER_FEEDBACK(x)	((x)<<16)
+#define  WRPLL_DIVIDER_FEEDBACK(x)	((x) << 16)
 #define  WRPLL_DIVIDER_FB_SHIFT		16
-#define  WRPLL_DIVIDER_FB_MASK		(0xff<<16)
+#define  WRPLL_DIVIDER_FB_MASK		(0xff << 16)
 
 /* Port clock selection */
 #define _PORT_CLK_SEL_A			0x46100
 #define _PORT_CLK_SEL_B			0x46104
 #define PORT_CLK_SEL(port) _MMIO_PORT(port, _PORT_CLK_SEL_A, _PORT_CLK_SEL_B)
-#define  PORT_CLK_SEL_LCPLL_2700	(0<<29)
-#define  PORT_CLK_SEL_LCPLL_1350	(1<<29)
-#define  PORT_CLK_SEL_LCPLL_810		(2<<29)
-#define  PORT_CLK_SEL_SPLL		(3<<29)
-#define  PORT_CLK_SEL_WRPLL(pll)	(((pll)+4)<<29)
-#define  PORT_CLK_SEL_WRPLL1		(4<<29)
-#define  PORT_CLK_SEL_WRPLL2		(5<<29)
-#define  PORT_CLK_SEL_NONE		(7<<29)
-#define  PORT_CLK_SEL_MASK		(7<<29)
+#define  PORT_CLK_SEL_LCPLL_2700	(0 << 29)
+#define  PORT_CLK_SEL_LCPLL_1350	(1 << 29)
+#define  PORT_CLK_SEL_LCPLL_810		(2 << 29)
+#define  PORT_CLK_SEL_SPLL		(3 << 29)
+#define  PORT_CLK_SEL_WRPLL(pll)	(((pll) + 4) << 29)
+#define  PORT_CLK_SEL_WRPLL1		(4 << 29)
+#define  PORT_CLK_SEL_WRPLL2		(5 << 29)
+#define  PORT_CLK_SEL_NONE		(7 << 29)
+#define  PORT_CLK_SEL_MASK		(7 << 29)
 
 /* On ICL+ this is the same as PORT_CLK_SEL, but all bits change. */
 #define DDI_CLK_SEL(port)		PORT_CLK_SEL(port)
 #define  DDI_CLK_SEL_NONE		(0x0 << 28)
 #define  DDI_CLK_SEL_MG			(0x8 << 28)
+#define  DDI_CLK_SEL_TBT_162		(0xC << 28)
+#define  DDI_CLK_SEL_TBT_270		(0xD << 28)
+#define  DDI_CLK_SEL_TBT_540		(0xE << 28)
+#define  DDI_CLK_SEL_TBT_810		(0xF << 28)
 #define  DDI_CLK_SEL_MASK		(0xF << 28)
 
 /* Transcoder clock selection */
@@ -8830,8 +8890,8 @@ enum skl_power_gate {
 #define _TRANS_CLK_SEL_B		0x46144
 #define TRANS_CLK_SEL(tran) _MMIO_TRANS(tran, _TRANS_CLK_SEL_A, _TRANS_CLK_SEL_B)
 /* For each transcoder, we need to select the corresponding port clock */
-#define  TRANS_CLK_SEL_DISABLED		(0x0<<29)
-#define  TRANS_CLK_SEL_PORT(x)		(((x)+1)<<29)
+#define  TRANS_CLK_SEL_DISABLED		(0x0 << 29)
+#define  TRANS_CLK_SEL_PORT(x)		(((x) + 1) << 29)
 
 #define CDCLK_FREQ			_MMIO(0x46200)
 
@@ -8841,28 +8901,28 @@ enum skl_power_gate {
 #define _TRANS_EDP_MSA_MISC		0x6f410
 #define TRANS_MSA_MISC(tran) _MMIO_TRANS2(tran, _TRANSA_MSA_MISC)
 
-#define  TRANS_MSA_SYNC_CLK		(1<<0)
-#define  TRANS_MSA_6_BPC		(0<<5)
-#define  TRANS_MSA_8_BPC		(1<<5)
-#define  TRANS_MSA_10_BPC		(2<<5)
-#define  TRANS_MSA_12_BPC		(3<<5)
-#define  TRANS_MSA_16_BPC		(4<<5)
+#define  TRANS_MSA_SYNC_CLK		(1 << 0)
+#define  TRANS_MSA_6_BPC		(0 << 5)
+#define  TRANS_MSA_8_BPC		(1 << 5)
+#define  TRANS_MSA_10_BPC		(2 << 5)
+#define  TRANS_MSA_12_BPC		(3 << 5)
+#define  TRANS_MSA_16_BPC		(4 << 5)
 
 /* LCPLL Control */
 #define LCPLL_CTL			_MMIO(0x130040)
-#define  LCPLL_PLL_DISABLE		(1<<31)
-#define  LCPLL_PLL_LOCK			(1<<30)
-#define  LCPLL_CLK_FREQ_MASK		(3<<26)
-#define  LCPLL_CLK_FREQ_450		(0<<26)
-#define  LCPLL_CLK_FREQ_54O_BDW		(1<<26)
-#define  LCPLL_CLK_FREQ_337_5_BDW	(2<<26)
-#define  LCPLL_CLK_FREQ_675_BDW		(3<<26)
-#define  LCPLL_CD_CLOCK_DISABLE		(1<<25)
-#define  LCPLL_ROOT_CD_CLOCK_DISABLE	(1<<24)
-#define  LCPLL_CD2X_CLOCK_DISABLE	(1<<23)
-#define  LCPLL_POWER_DOWN_ALLOW		(1<<22)
-#define  LCPLL_CD_SOURCE_FCLK		(1<<21)
-#define  LCPLL_CD_SOURCE_FCLK_DONE	(1<<19)
+#define  LCPLL_PLL_DISABLE		(1 << 31)
+#define  LCPLL_PLL_LOCK			(1 << 30)
+#define  LCPLL_CLK_FREQ_MASK		(3 << 26)
+#define  LCPLL_CLK_FREQ_450		(0 << 26)
+#define  LCPLL_CLK_FREQ_54O_BDW		(1 << 26)
+#define  LCPLL_CLK_FREQ_337_5_BDW	(2 << 26)
+#define  LCPLL_CLK_FREQ_675_BDW		(3 << 26)
+#define  LCPLL_CD_CLOCK_DISABLE		(1 << 25)
+#define  LCPLL_ROOT_CD_CLOCK_DISABLE	(1 << 24)
+#define  LCPLL_CD2X_CLOCK_DISABLE	(1 << 23)
+#define  LCPLL_POWER_DOWN_ALLOW		(1 << 22)
+#define  LCPLL_CD_SOURCE_FCLK		(1 << 21)
+#define  LCPLL_CD_SOURCE_FCLK_DONE	(1 << 19)
 
 /*
  * SKL Clocks
@@ -8890,16 +8950,16 @@ enum skl_power_gate {
 /* LCPLL_CTL */
 #define LCPLL1_CTL		_MMIO(0x46010)
 #define LCPLL2_CTL		_MMIO(0x46014)
-#define  LCPLL_PLL_ENABLE	(1<<31)
+#define  LCPLL_PLL_ENABLE	(1 << 31)
 
 /* DPLL control1 */
 #define DPLL_CTRL1		_MMIO(0x6C058)
-#define  DPLL_CTRL1_HDMI_MODE(id)		(1<<((id)*6+5))
-#define  DPLL_CTRL1_SSC(id)			(1<<((id)*6+4))
-#define  DPLL_CTRL1_LINK_RATE_MASK(id)		(7<<((id)*6+1))
-#define  DPLL_CTRL1_LINK_RATE_SHIFT(id)		((id)*6+1)
-#define  DPLL_CTRL1_LINK_RATE(linkrate, id)	((linkrate)<<((id)*6+1))
-#define  DPLL_CTRL1_OVERRIDE(id)		(1<<((id)*6))
+#define  DPLL_CTRL1_HDMI_MODE(id)		(1 << ((id) * 6 + 5))
+#define  DPLL_CTRL1_SSC(id)			(1 << ((id) * 6 + 4))
+#define  DPLL_CTRL1_LINK_RATE_MASK(id)		(7 << ((id) * 6 + 1))
+#define  DPLL_CTRL1_LINK_RATE_SHIFT(id)		((id) * 6 + 1)
+#define  DPLL_CTRL1_LINK_RATE(linkrate, id)	((linkrate) << ((id) * 6 + 1))
+#define  DPLL_CTRL1_OVERRIDE(id)		(1 << ((id) * 6))
 #define  DPLL_CTRL1_LINK_RATE_2700		0
 #define  DPLL_CTRL1_LINK_RATE_1350		1
 #define  DPLL_CTRL1_LINK_RATE_810		2
@@ -8909,43 +8969,43 @@ enum skl_power_gate {
 
 /* DPLL control2 */
 #define DPLL_CTRL2				_MMIO(0x6C05C)
-#define  DPLL_CTRL2_DDI_CLK_OFF(port)		(1<<((port)+15))
-#define  DPLL_CTRL2_DDI_CLK_SEL_MASK(port)	(3<<((port)*3+1))
-#define  DPLL_CTRL2_DDI_CLK_SEL_SHIFT(port)    ((port)*3+1)
-#define  DPLL_CTRL2_DDI_CLK_SEL(clk, port)	((clk)<<((port)*3+1))
-#define  DPLL_CTRL2_DDI_SEL_OVERRIDE(port)     (1<<((port)*3))
+#define  DPLL_CTRL2_DDI_CLK_OFF(port)		(1 << ((port) + 15))
+#define  DPLL_CTRL2_DDI_CLK_SEL_MASK(port)	(3 << ((port) * 3 + 1))
+#define  DPLL_CTRL2_DDI_CLK_SEL_SHIFT(port)    ((port) * 3 + 1)
+#define  DPLL_CTRL2_DDI_CLK_SEL(clk, port)	((clk) << ((port) * 3 + 1))
+#define  DPLL_CTRL2_DDI_SEL_OVERRIDE(port)     (1 << ((port) * 3))
 
 /* DPLL Status */
 #define DPLL_STATUS	_MMIO(0x6C060)
-#define  DPLL_LOCK(id) (1<<((id)*8))
+#define  DPLL_LOCK(id) (1 << ((id) * 8))
 
 /* DPLL cfg */
 #define _DPLL1_CFGCR1	0x6C040
 #define _DPLL2_CFGCR1	0x6C048
 #define _DPLL3_CFGCR1	0x6C050
-#define  DPLL_CFGCR1_FREQ_ENABLE	(1<<31)
-#define  DPLL_CFGCR1_DCO_FRACTION_MASK	(0x7fff<<9)
-#define  DPLL_CFGCR1_DCO_FRACTION(x)	((x)<<9)
+#define  DPLL_CFGCR1_FREQ_ENABLE	(1 << 31)
+#define  DPLL_CFGCR1_DCO_FRACTION_MASK	(0x7fff << 9)
+#define  DPLL_CFGCR1_DCO_FRACTION(x)	((x) << 9)
 #define  DPLL_CFGCR1_DCO_INTEGER_MASK	(0x1ff)
 
 #define _DPLL1_CFGCR2	0x6C044
 #define _DPLL2_CFGCR2	0x6C04C
 #define _DPLL3_CFGCR2	0x6C054
-#define  DPLL_CFGCR2_QDIV_RATIO_MASK	(0xff<<8)
-#define  DPLL_CFGCR2_QDIV_RATIO(x)	((x)<<8)
-#define  DPLL_CFGCR2_QDIV_MODE(x)	((x)<<7)
-#define  DPLL_CFGCR2_KDIV_MASK		(3<<5)
-#define  DPLL_CFGCR2_KDIV(x)		((x)<<5)
-#define  DPLL_CFGCR2_KDIV_5 (0<<5)
-#define  DPLL_CFGCR2_KDIV_2 (1<<5)
-#define  DPLL_CFGCR2_KDIV_3 (2<<5)
-#define  DPLL_CFGCR2_KDIV_1 (3<<5)
-#define  DPLL_CFGCR2_PDIV_MASK		(7<<2)
-#define  DPLL_CFGCR2_PDIV(x)		((x)<<2)
-#define  DPLL_CFGCR2_PDIV_1 (0<<2)
-#define  DPLL_CFGCR2_PDIV_2 (1<<2)
-#define  DPLL_CFGCR2_PDIV_3 (2<<2)
-#define  DPLL_CFGCR2_PDIV_7 (4<<2)
+#define  DPLL_CFGCR2_QDIV_RATIO_MASK	(0xff << 8)
+#define  DPLL_CFGCR2_QDIV_RATIO(x)	((x) << 8)
+#define  DPLL_CFGCR2_QDIV_MODE(x)	((x) << 7)
+#define  DPLL_CFGCR2_KDIV_MASK		(3 << 5)
+#define  DPLL_CFGCR2_KDIV(x)		((x) << 5)
+#define  DPLL_CFGCR2_KDIV_5 (0 << 5)
+#define  DPLL_CFGCR2_KDIV_2 (1 << 5)
+#define  DPLL_CFGCR2_KDIV_3 (2 << 5)
+#define  DPLL_CFGCR2_KDIV_1 (3 << 5)
+#define  DPLL_CFGCR2_PDIV_MASK		(7 << 2)
+#define  DPLL_CFGCR2_PDIV(x)		((x) << 2)
+#define  DPLL_CFGCR2_PDIV_1 (0 << 2)
+#define  DPLL_CFGCR2_PDIV_2 (1 << 2)
+#define  DPLL_CFGCR2_PDIV_3 (2 << 2)
+#define  DPLL_CFGCR2_PDIV_7 (4 << 2)
 #define  DPLL_CFGCR2_CENTRAL_FREQ_MASK	(3)
 
 #define DPLL_CFGCR1(id)	_MMIO_PIPE((id) - SKL_DPLL1, _DPLL1_CFGCR1, _DPLL2_CFGCR1)
@@ -8957,9 +9017,9 @@ enum skl_power_gate {
 #define DPCLKA_CFGCR0				_MMIO(0x6C200)
 #define DPCLKA_CFGCR0_ICL			_MMIO(0x164280)
 #define  DPCLKA_CFGCR0_DDI_CLK_OFF(port)	(1 << ((port) ==  PORT_F ? 23 : \
-						      (port)+10))
+						      (port) + 10))
 #define  DPCLKA_CFGCR0_DDI_CLK_SEL_SHIFT(port)	((port) == PORT_F ? 21 : \
-						(port)*2)
+						(port) * 2)
 #define  DPCLKA_CFGCR0_DDI_CLK_SEL_MASK(port)	(3 << DPCLKA_CFGCR0_DDI_CLK_SEL_SHIFT(port))
 #define  DPCLKA_CFGCR0_DDI_CLK_SEL(pll, port)	((pll) << DPCLKA_CFGCR0_DDI_CLK_SEL_SHIFT(port))
 
@@ -8972,6 +9032,8 @@ enum skl_power_gate {
 #define  PLL_POWER_STATE	(1 << 26)
 #define CNL_DPLL_ENABLE(pll)	_MMIO_PLL(pll, DPLL0_ENABLE, DPLL1_ENABLE)
 
+#define TBT_PLL_ENABLE		_MMIO(0x46020)
+
 #define _MG_PLL1_ENABLE		0x46030
 #define _MG_PLL2_ENABLE		0x46034
 #define _MG_PLL3_ENABLE		0x46038
@@ -9170,22 +9232,22 @@ enum skl_power_gate {
 /* GEN9 DC */
 #define DC_STATE_EN			_MMIO(0x45504)
 #define  DC_STATE_DISABLE		0
-#define  DC_STATE_EN_UPTO_DC5		(1<<0)
-#define  DC_STATE_EN_DC9		(1<<3)
-#define  DC_STATE_EN_UPTO_DC6		(2<<0)
+#define  DC_STATE_EN_UPTO_DC5		(1 << 0)
+#define  DC_STATE_EN_DC9		(1 << 3)
+#define  DC_STATE_EN_UPTO_DC6		(2 << 0)
 #define  DC_STATE_EN_UPTO_DC5_DC6_MASK   0x3
 
 #define  DC_STATE_DEBUG                  _MMIO(0x45520)
-#define  DC_STATE_DEBUG_MASK_CORES	(1<<0)
-#define  DC_STATE_DEBUG_MASK_MEMORY_UP	(1<<1)
+#define  DC_STATE_DEBUG_MASK_CORES	(1 << 0)
+#define  DC_STATE_DEBUG_MASK_MEMORY_UP	(1 << 1)
 
 /* Please see hsw_read_dcomp() and hsw_write_dcomp() before using this register,
  * since on HSW we can't write to it using I915_WRITE. */
 #define D_COMP_HSW			_MMIO(MCHBAR_MIRROR_BASE_SNB + 0x5F0C)
 #define D_COMP_BDW			_MMIO(0x138144)
-#define  D_COMP_RCOMP_IN_PROGRESS	(1<<9)
-#define  D_COMP_COMP_FORCE		(1<<8)
-#define  D_COMP_COMP_DISABLE		(1<<0)
+#define  D_COMP_RCOMP_IN_PROGRESS	(1 << 9)
+#define  D_COMP_COMP_FORCE		(1 << 8)
+#define  D_COMP_COMP_DISABLE		(1 << 0)
 
 /* Pipe WM_LINETIME - watermark line time */
 #define _PIPE_WM_LINETIME_A		0x45270
@@ -9193,27 +9255,27 @@ enum skl_power_gate {
 #define PIPE_WM_LINETIME(pipe) _MMIO_PIPE(pipe, _PIPE_WM_LINETIME_A, _PIPE_WM_LINETIME_B)
 #define   PIPE_WM_LINETIME_MASK			(0x1ff)
 #define   PIPE_WM_LINETIME_TIME(x)		((x))
-#define   PIPE_WM_LINETIME_IPS_LINETIME_MASK	(0x1ff<<16)
-#define   PIPE_WM_LINETIME_IPS_LINETIME(x)	((x)<<16)
+#define   PIPE_WM_LINETIME_IPS_LINETIME_MASK	(0x1ff << 16)
+#define   PIPE_WM_LINETIME_IPS_LINETIME(x)	((x) << 16)
 
 /* SFUSE_STRAP */
 #define SFUSE_STRAP			_MMIO(0xc2014)
-#define  SFUSE_STRAP_FUSE_LOCK		(1<<13)
-#define  SFUSE_STRAP_RAW_FREQUENCY	(1<<8)
-#define  SFUSE_STRAP_DISPLAY_DISABLED	(1<<7)
-#define  SFUSE_STRAP_CRT_DISABLED	(1<<6)
-#define  SFUSE_STRAP_DDIF_DETECTED	(1<<3)
-#define  SFUSE_STRAP_DDIB_DETECTED	(1<<2)
-#define  SFUSE_STRAP_DDIC_DETECTED	(1<<1)
-#define  SFUSE_STRAP_DDID_DETECTED	(1<<0)
+#define  SFUSE_STRAP_FUSE_LOCK		(1 << 13)
+#define  SFUSE_STRAP_RAW_FREQUENCY	(1 << 8)
+#define  SFUSE_STRAP_DISPLAY_DISABLED	(1 << 7)
+#define  SFUSE_STRAP_CRT_DISABLED	(1 << 6)
+#define  SFUSE_STRAP_DDIF_DETECTED	(1 << 3)
+#define  SFUSE_STRAP_DDIB_DETECTED	(1 << 2)
+#define  SFUSE_STRAP_DDIC_DETECTED	(1 << 1)
+#define  SFUSE_STRAP_DDID_DETECTED	(1 << 0)
 
 #define WM_MISC				_MMIO(0x45260)
 #define  WM_MISC_DATA_PARTITION_5_6	(1 << 0)
 
 #define WM_DBG				_MMIO(0x45280)
-#define  WM_DBG_DISALLOW_MULTIPLE_LP	(1<<0)
-#define  WM_DBG_DISALLOW_MAXFIFO	(1<<1)
-#define  WM_DBG_DISALLOW_SPRITE		(1<<2)
+#define  WM_DBG_DISALLOW_MULTIPLE_LP	(1 << 0)
+#define  WM_DBG_DISALLOW_MAXFIFO	(1 << 1)
+#define  WM_DBG_DISALLOW_SPRITE		(1 << 2)
 
 /* pipe CSC */
 #define _PIPE_A_CSC_COEFF_RY_GY	0x49010
@@ -9376,7 +9438,7 @@ enum skl_power_gate {
 			_MIPI_PORT(port, BXT_MIPI1_TX_ESCLK_FIXDIV_MASK, \
 					BXT_MIPI2_TX_ESCLK_FIXDIV_MASK)
 #define  BXT_MIPI_TX_ESCLK_DIVIDER(port, val)	\
-		((val & 0x3F) << BXT_MIPI_TX_ESCLK_SHIFT(port))
+		(((val) & 0x3F) << BXT_MIPI_TX_ESCLK_SHIFT(port))
 /* RX upper control divider to select actual RX clock output from 8x */
 #define  BXT_MIPI1_RX_ESCLK_UPPER_SHIFT		21
 #define  BXT_MIPI2_RX_ESCLK_UPPER_SHIFT		5
@@ -9389,7 +9451,7 @@ enum skl_power_gate {
 			_MIPI_PORT(port, BXT_MIPI1_RX_ESCLK_UPPER_FIXDIV_MASK, \
 					BXT_MIPI2_RX_ESCLK_UPPER_FIXDIV_MASK)
 #define  BXT_MIPI_RX_ESCLK_UPPER_DIVIDER(port, val)	\
-		((val & 3) << BXT_MIPI_RX_ESCLK_UPPER_SHIFT(port))
+		(((val) & 3) << BXT_MIPI_RX_ESCLK_UPPER_SHIFT(port))
 /* 8/3X divider to select the actual 8/3X clock output from 8x */
 #define  BXT_MIPI1_8X_BY3_SHIFT                19
 #define  BXT_MIPI2_8X_BY3_SHIFT                3
@@ -9402,7 +9464,7 @@ enum skl_power_gate {
 			_MIPI_PORT(port, BXT_MIPI1_8X_BY3_DIVIDER_MASK, \
 						BXT_MIPI2_8X_BY3_DIVIDER_MASK)
 #define  BXT_MIPI_8X_BY3_DIVIDER(port, val)    \
-			((val & 3) << BXT_MIPI_8X_BY3_SHIFT(port))
+			(((val) & 3) << BXT_MIPI_8X_BY3_SHIFT(port))
 /* RX lower control divider to select actual RX clock output from 8x */
 #define  BXT_MIPI1_RX_ESCLK_LOWER_SHIFT		16
 #define  BXT_MIPI2_RX_ESCLK_LOWER_SHIFT		0
@@ -9415,7 +9477,7 @@ enum skl_power_gate {
 			_MIPI_PORT(port, BXT_MIPI1_RX_ESCLK_LOWER_FIXDIV_MASK, \
 					BXT_MIPI2_RX_ESCLK_LOWER_FIXDIV_MASK)
 #define  BXT_MIPI_RX_ESCLK_LOWER_DIVIDER(port, val)	\
-		((val & 3) << BXT_MIPI_RX_ESCLK_LOWER_SHIFT(port))
+		(((val) & 3) << BXT_MIPI_RX_ESCLK_LOWER_SHIFT(port))
 
 #define RX_DIVIDER_BIT_1_2                     0x3
 #define RX_DIVIDER_BIT_3_4                     0xC

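All of the i915_reg.h churn above is one mechanical fix: spaces around shift
operators, and parentheses around macro operands. The parentheses are not
cosmetic. A stand-alone illustration of the failure mode they prevent
(hypothetical macros, invented for this sketch, not taken from the file):

	#include <stdio.h>

	/* Old style: the operand expands textually into the shift. */
	#define BAD_SELECT(x)	(x << 28)
	/* New style, as standardized above. */
	#define GOOD_SELECT(x)	((x) << 28)

	int main(void)
	{
		unsigned int a = 1, b = 2;

		/* BAD_SELECT(a | b) expands to (a | b << 28); '|' binds
		 * more loosely than '<<', so only b gets shifted. */
		printf("bad:  0x%08x\n", BAD_SELECT(a | b));	/* 0x20000001 */
		printf("good: 0x%08x\n", GOOD_SELECT(a | b));	/* 0x30000000 */
		return 0;
	}
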
+ 4 - 16
drivers/gpu/drm/i915/i915_request.c

@@ -817,6 +817,8 @@ i915_request_alloc(struct intel_engine_cs *engine, struct i915_gem_context *ctx)
 	/* Keep a second pin for the dual retirement along engine and ring */
 	__intel_context_pin(ce);
 
+	rq->infix = rq->ring->emit; /* end of header; start of user payload */
+
 	/* Check that we didn't interrupt ourselves with a new request */
 	GEM_BUG_ON(rq->timeline->seqno != rq->fence.seqno);
 	return rq;
@@ -1016,14 +1018,13 @@ i915_request_await_object(struct i915_request *to,
  * request is not being tracked for completion but the work itself is
  * going to happen on the hardware. This would be a Bad Thing(tm).
  */
-void __i915_request_add(struct i915_request *request, bool flush_caches)
+void i915_request_add(struct i915_request *request)
 {
 	struct intel_engine_cs *engine = request->engine;
 	struct i915_timeline *timeline = request->timeline;
 	struct intel_ring *ring = request->ring;
 	struct i915_request *prev;
 	u32 *cs;
-	int err;
 
 	GEM_TRACE("%s fence %llx:%d\n",
 		  engine->name, request->fence.context, request->fence.seqno);
@@ -1044,20 +1045,7 @@ void __i915_request_add(struct i915_request *request, bool flush_caches)
 	 * know that it is time to use that space up.
 	 */
 	request->reserved_space = 0;
-
-	/*
-	 * Emit any outstanding flushes - execbuf can fail to emit the flush
-	 * after having emitted the batchbuffer command. Hence we need to fix
-	 * things up similar to emitting the lazy request. The difference here
-	 * is that the flush _must_ happen before the next request, no matter
-	 * what.
-	 */
-	if (flush_caches) {
-		err = engine->emit_flush(request, EMIT_FLUSH);
-
-		/* Not allowed to fail! */
-		WARN(err, "engine->emit_flush() failed: %d!\n", err);
-	}
+	engine->emit_flush(request, EMIT_FLUSH);
 
 	/*
 	 * Record the position of the start of the breadcrumb so that

+ 4 - 3
drivers/gpu/drm/i915/i915_request.h

@@ -134,6 +134,9 @@ struct i915_request {
 	/** Position in the ring of the start of the request */
 	u32 head;
 
+	/** Position in the ring of the start of the user packets */
+	u32 infix;
+
 	/**
 	 * Position in the ring of the start of the postfix.
 	 * This is required to calculate the maximum available ring space
@@ -250,9 +253,7 @@ int i915_request_await_object(struct i915_request *to,
 int i915_request_await_dma_fence(struct i915_request *rq,
 				 struct dma_fence *fence);
 
-void __i915_request_add(struct i915_request *rq, bool flush_caches);
-#define i915_request_add(rq) \
-	__i915_request_add(rq, false)
+void i915_request_add(struct i915_request *rq);
 
 void __i915_request_submit(struct i915_request *request);
 void i915_request_submit(struct i915_request *request);

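The i915_request.c and i915_request.h hunks above are one API change: the
closing flush is now emitted unconditionally inside i915_request_add(), and
the __i915_request_add()/wrapper-macro split disappears. A before/after
sketch of a caller (kernel context assumed, not compilable on its own):

	/* Before: each caller chose whether to flush; the wrapper macro
	 * defaulted to false, so a missing flush was easy to introduce. */
	__i915_request_add(rq, true);

	/* After: one entry point, and EMIT_FLUSH is always emitted
	 * before the request is closed. */
	i915_request_add(rq);
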
+ 0 - 33
drivers/gpu/drm/i915/i915_trace.h

@@ -973,39 +973,6 @@ DEFINE_EVENT(i915_context, i915_context_free,
 	TP_ARGS(ctx)
 );
 
-/**
- * DOC: switch_mm tracepoint
- *
- * This tracepoint allows tracking of the mm switch, which is an important point
- * in the lifetime of the vm in the legacy submission path. This tracepoint is
- * called only if full ppgtt is enabled.
- */
-TRACE_EVENT(switch_mm,
-	TP_PROTO(struct intel_engine_cs *engine, struct i915_gem_context *to),
-
-	TP_ARGS(engine, to),
-
-	TP_STRUCT__entry(
-			__field(u16, class)
-			__field(u16, instance)
-			__field(struct i915_gem_context *, to)
-			__field(struct i915_address_space *, vm)
-			__field(u32, dev)
-	),
-
-	TP_fast_assign(
-			__entry->class = engine->uabi_class;
-			__entry->instance = engine->instance;
-			__entry->to = to;
-			__entry->vm = to->ppgtt ? &to->ppgtt->vm : NULL;
-			__entry->dev = engine->i915->drm.primary->index;
-	),
-
-	TP_printk("dev=%u, engine=%u:%u, ctx=%p, ctx_vm=%p",
-		  __entry->dev, __entry->class, __entry->instance, __entry->to,
-		  __entry->vm)
-);
-
 #endif /* _I915_TRACE_H_ */
 
 /* This part must be outside protection */

+ 62 - 45
drivers/gpu/drm/i915/i915_vma.c

@@ -95,6 +95,7 @@ vma_create(struct drm_i915_gem_object *obj,
 		init_request_active(&vma->last_read[i], i915_vma_retire);
 	init_request_active(&vma->last_fence, NULL);
 	vma->vm = vm;
+	vma->ops = &vm->vma_ops;
 	vma->obj = obj;
 	vma->resv = obj->resv;
 	vma->size = obj->base.size;
@@ -280,7 +281,7 @@ int i915_vma_bind(struct i915_vma *vma, enum i915_cache_level cache_level,
 	GEM_BUG_ON(!vma->pages);
 
 	trace_i915_vma_bind(vma, bind_flags);
-	ret = vma->vm->bind_vma(vma, cache_level, bind_flags);
+	ret = vma->ops->bind_vma(vma, cache_level, bind_flags);
 	if (ret)
 		return ret;
 
@@ -345,7 +346,7 @@ void i915_vma_flush_writes(struct i915_vma *vma)
 
 void i915_vma_unpin_iomap(struct i915_vma *vma)
 {
-	lockdep_assert_held(&vma->obj->base.dev->struct_mutex);
+	lockdep_assert_held(&vma->vm->i915->drm.struct_mutex);
 
 	GEM_BUG_ON(vma->iomap == NULL);
 
@@ -365,6 +366,7 @@ void i915_vma_unpin_and_release(struct i915_vma **p_vma)
 		return;
 
 	obj = vma->obj;
+	GEM_BUG_ON(!obj);
 
 	i915_vma_unpin(vma);
 	i915_vma_close(vma);
@@ -489,7 +491,7 @@ static int
 i915_vma_insert(struct i915_vma *vma, u64 size, u64 alignment, u64 flags)
 {
 	struct drm_i915_private *dev_priv = vma->vm->i915;
-	struct drm_i915_gem_object *obj = vma->obj;
+	unsigned int cache_level;
 	u64 start, end;
 	int ret;
 
@@ -524,20 +526,25 @@ i915_vma_insert(struct i915_vma *vma, u64 size, u64 alignment, u64 flags)
 	 * attempt to find space.
 	 */
 	if (size > end) {
-		DRM_DEBUG("Attempting to bind an object larger than the aperture: request=%llu [object=%zd] > %s aperture=%llu\n",
-			  size, obj->base.size,
-			  flags & PIN_MAPPABLE ? "mappable" : "total",
+		DRM_DEBUG("Attempting to bind an object larger than the aperture: request=%llu > %s aperture=%llu\n",
+			  size, flags & PIN_MAPPABLE ? "mappable" : "total",
 			  end);
 		return -ENOSPC;
 	}
 
-	ret = i915_gem_object_pin_pages(obj);
-	if (ret)
-		return ret;
+	if (vma->obj) {
+		ret = i915_gem_object_pin_pages(vma->obj);
+		if (ret)
+			return ret;
+
+		cache_level = vma->obj->cache_level;
+	} else {
+		cache_level = 0;
+	}
 
 	GEM_BUG_ON(vma->pages);
 
-	ret = vma->vm->set_pages(vma);
+	ret = vma->ops->set_pages(vma);
 	if (ret)
 		goto err_unpin;
 
@@ -550,7 +557,7 @@ i915_vma_insert(struct i915_vma *vma, u64 size, u64 alignment, u64 flags)
 		}
 
 		ret = i915_gem_gtt_reserve(vma->vm, &vma->node,
-					   size, offset, obj->cache_level,
+					   size, offset, cache_level,
 					   flags);
 		if (ret)
 			goto err_clear;
@@ -589,7 +596,7 @@ i915_vma_insert(struct i915_vma *vma, u64 size, u64 alignment, u64 flags)
 		}
 
 		ret = i915_gem_gtt_insert(vma->vm, &vma->node,
-					  size, alignment, obj->cache_level,
+					  size, alignment, cache_level,
 					  start, end, flags);
 		if (ret)
 			goto err_clear;
@@ -598,23 +605,28 @@ i915_vma_insert(struct i915_vma *vma, u64 size, u64 alignment, u64 flags)
 		GEM_BUG_ON(vma->node.start + vma->node.size > end);
 	}
 	GEM_BUG_ON(!drm_mm_node_allocated(&vma->node));
-	GEM_BUG_ON(!i915_gem_valid_gtt_space(vma, obj->cache_level));
+	GEM_BUG_ON(!i915_gem_valid_gtt_space(vma, cache_level));
 
 	list_move_tail(&vma->vm_link, &vma->vm->inactive_list);
 
-	spin_lock(&dev_priv->mm.obj_lock);
-	list_move_tail(&obj->mm.link, &dev_priv->mm.bound_list);
-	obj->bind_count++;
-	spin_unlock(&dev_priv->mm.obj_lock);
+	if (vma->obj) {
+		struct drm_i915_gem_object *obj = vma->obj;
+
+		spin_lock(&dev_priv->mm.obj_lock);
+		list_move_tail(&obj->mm.link, &dev_priv->mm.bound_list);
+		obj->bind_count++;
+		spin_unlock(&dev_priv->mm.obj_lock);
 
-	assert_bind_count(obj);
+		assert_bind_count(obj);
+	}
 
 	return 0;
 
 err_clear:
-	vma->vm->clear_pages(vma);
+	vma->ops->clear_pages(vma);
 err_unpin:
-	i915_gem_object_unpin_pages(obj);
+	if (vma->obj)
+		i915_gem_object_unpin_pages(vma->obj);
 	return ret;
 }
 
@@ -622,30 +634,35 @@ static void
 i915_vma_remove(struct i915_vma *vma)
 {
 	struct drm_i915_private *i915 = vma->vm->i915;
-	struct drm_i915_gem_object *obj = vma->obj;
 
 	GEM_BUG_ON(!drm_mm_node_allocated(&vma->node));
 	GEM_BUG_ON(vma->flags & (I915_VMA_GLOBAL_BIND | I915_VMA_LOCAL_BIND));
 
-	vma->vm->clear_pages(vma);
+	vma->ops->clear_pages(vma);
 
 	drm_mm_remove_node(&vma->node);
 	list_move_tail(&vma->vm_link, &vma->vm->unbound_list);
 
-	/* Since the unbound list is global, only move to that list if
+	/*
+	 * Since the unbound list is global, only move to that list if
 	 * no more VMAs exist.
 	 */
-	spin_lock(&i915->mm.obj_lock);
-	if (--obj->bind_count == 0)
-		list_move_tail(&obj->mm.link, &i915->mm.unbound_list);
-	spin_unlock(&i915->mm.obj_lock);
-
-	/* And finally now the object is completely decoupled from this vma,
-	 * we can drop its hold on the backing storage and allow it to be
-	 * reaped by the shrinker.
-	 */
-	i915_gem_object_unpin_pages(obj);
-	assert_bind_count(obj);
+	if (vma->obj) {
+		struct drm_i915_gem_object *obj = vma->obj;
+
+		spin_lock(&i915->mm.obj_lock);
+		if (--obj->bind_count == 0)
+			list_move_tail(&obj->mm.link, &i915->mm.unbound_list);
+		spin_unlock(&i915->mm.obj_lock);
+
+		/*
+		 * And finally now the object is completely decoupled from this
+		 * vma, we can drop its hold on the backing storage and allow
+		 * it to be reaped by the shrinker.
+		 */
+		i915_gem_object_unpin_pages(obj);
+		assert_bind_count(obj);
+	}
 }
 
 int __i915_vma_do_pin(struct i915_vma *vma,
@@ -670,7 +687,7 @@ int __i915_vma_do_pin(struct i915_vma *vma,
 	}
 	GEM_BUG_ON(!drm_mm_node_allocated(&vma->node));
 
-	ret = i915_vma_bind(vma, vma->obj->cache_level, flags);
+	ret = i915_vma_bind(vma, vma->obj ? vma->obj->cache_level : 0, flags);
 	if (ret)
 		goto err_remove;
 
@@ -727,6 +744,7 @@ void i915_vma_reopen(struct i915_vma *vma)
 
 static void __i915_vma_destroy(struct i915_vma *vma)
 {
+	struct drm_i915_private *i915 = vma->vm->i915;
 	int i;
 
 	GEM_BUG_ON(vma->node.allocated);
@@ -738,12 +756,13 @@ static void __i915_vma_destroy(struct i915_vma *vma)
 
 	list_del(&vma->obj_link);
 	list_del(&vma->vm_link);
-	rb_erase(&vma->obj_node, &vma->obj->vma_tree);
+	if (vma->obj)
+		rb_erase(&vma->obj_node, &vma->obj->vma_tree);
 
 	if (!i915_vma_is_ggtt(vma))
 		i915_ppgtt_put(i915_vm_to_ppgtt(vma->vm));
 
-	kmem_cache_free(to_i915(vma->obj->base.dev)->vmas, vma);
+	kmem_cache_free(i915->vmas, vma);
 }
 
 void i915_vma_destroy(struct i915_vma *vma)
@@ -809,13 +828,13 @@ void i915_vma_revoke_mmap(struct i915_vma *vma)
 
 int i915_vma_unbind(struct i915_vma *vma)
 {
-	struct drm_i915_gem_object *obj = vma->obj;
 	unsigned long active;
 	int ret;
 
-	lockdep_assert_held(&obj->base.dev->struct_mutex);
+	lockdep_assert_held(&vma->vm->i915->drm.struct_mutex);
 
-	/* First wait upon any activity as retiring the request may
+	/*
+	 * First wait upon any activity as retiring the request may
 	 * have side-effects such as unpinning or even unbinding this vma.
 	 */
 	might_sleep();
@@ -823,7 +842,8 @@ int i915_vma_unbind(struct i915_vma *vma)
 	if (active) {
 		int idx;
 
-		/* When a closed VMA is retired, it is unbound - eek.
+		/*
+		 * When a closed VMA is retired, it is unbound - eek.
 		 * In order to prevent it from being recursively closed,
 		 * take a pin on the vma so that the second unbind is
 		 * aborted.
@@ -861,9 +881,6 @@ int i915_vma_unbind(struct i915_vma *vma)
 	if (!drm_mm_node_allocated(&vma->node))
 		return 0;
 
-	GEM_BUG_ON(obj->bind_count == 0);
-	GEM_BUG_ON(!i915_gem_object_has_pinned_pages(obj));
-
 	if (i915_vma_is_map_and_fenceable(vma)) {
 		/*
 		 * Check that we have flushed all writes through the GGTT
@@ -890,7 +907,7 @@ int i915_vma_unbind(struct i915_vma *vma)
 
 	if (likely(!vma->vm->closed)) {
 		trace_i915_vma_unbind(vma);
-		vma->vm->unbind_vma(vma);
+		vma->ops->unbind_vma(vma);
 	}
 	vma->flags &= ~(I915_VMA_GLOBAL_BIND | I915_VMA_LOCAL_BIND);
 

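The recurring edit above is vma->vm->bind_vma(...) becoming
vma->ops->bind_vma(...), with the table taken from vm->vma_ops at creation
time. From the four call sites in this file, the ops table presumably looks
roughly like this (a sketch inferred from usage, not copied from the patch):

	struct i915_vma_ops {
		int (*bind_vma)(struct i915_vma *vma,
				enum i915_cache_level cache_level,
				u32 flags);
		void (*unbind_vma)(struct i915_vma *vma);
		int (*set_pages)(struct i915_vma *vma);
		void (*clear_pages)(struct i915_vma *vma);
	};

This indirection is what lets a VMA exist without a GEM object behind it:
the address space decides how its VMAs bind, so the object-specific paths
above can all become conditional on vma->obj.
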
+ 9 - 1
drivers/gpu/drm/i915/i915_vma.h

@@ -49,10 +49,12 @@ struct i915_vma {
 	struct drm_mm_node node;
 	struct drm_i915_gem_object *obj;
 	struct i915_address_space *vm;
+	const struct i915_vma_ops *ops;
 	struct drm_i915_fence_reg *fence;
 	struct reservation_object *resv; /** Alias of obj->resv */
 	struct sg_table *pages;
 	void __iomem *iomap;
+	void *private; /* owned by creator */
 	u64 size;
 	u64 display_alignment;
 	struct i915_page_sizes page_sizes;
@@ -339,6 +341,12 @@ static inline void i915_vma_unpin(struct i915_vma *vma)
 	__i915_vma_unpin(vma);
 }
 
+static inline bool i915_vma_is_bound(const struct i915_vma *vma,
+				     unsigned int where)
+{
+	return vma->flags & where;
+}
+
 /**
  * i915_vma_pin_iomap - calls ioremap_wc to map the GGTT VMA via the aperture
  * @vma: VMA to iomap
@@ -407,7 +415,7 @@ static inline void __i915_vma_unpin_fence(struct i915_vma *vma)
 static inline void
 i915_vma_unpin_fence(struct i915_vma *vma)
 {
-	lockdep_assert_held(&vma->obj->base.dev->struct_mutex);
+	/* lockdep_assert_held(&vma->vm->i915->drm.struct_mutex); */
 	if (vma->fence)
 		__i915_vma_unpin_fence(vma);
 }

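A hypothetical caller of the new i915_vma_is_bound() helper, using the bind
flags that appear in i915_vma.c above (illustrative only):

	/* Skip the expensive rebind when the VMA already carries a
	 * binding of the kind we need. */
	if (i915_vma_is_bound(vma, I915_VMA_GLOBAL_BIND | I915_VMA_LOCAL_BIND))
		return 0;
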
+ 11 - 16
drivers/gpu/drm/i915/intel_acpi.c

@@ -12,10 +12,6 @@
 #define INTEL_DSM_REVISION_ID 1 /* For Calpella anyway... */
 #define INTEL_DSM_FN_PLATFORM_MUX_INFO 1 /* No args */
 
-static struct intel_dsm_priv {
-	acpi_handle dhandle;
-} intel_dsm_priv;
-
 static const guid_t intel_dsm_guid =
 	GUID_INIT(0x7ed873d3, 0xc2d0, 0x4e4f,
 		  0xa8, 0x54, 0x0f, 0x13, 0x17, 0xb0, 0x1c, 0x2c);
@@ -72,12 +68,12 @@ static char *intel_dsm_mux_type(u8 type)
 	}
 }
 
-static void intel_dsm_platform_mux_info(void)
+static void intel_dsm_platform_mux_info(acpi_handle dhandle)
 {
 	int i;
 	union acpi_object *pkg, *connector_count;
 
-	pkg = acpi_evaluate_dsm_typed(intel_dsm_priv.dhandle, &intel_dsm_guid,
+	pkg = acpi_evaluate_dsm_typed(dhandle, &intel_dsm_guid,
 			INTEL_DSM_REVISION_ID, INTEL_DSM_FN_PLATFORM_MUX_INFO,
 			NULL, ACPI_TYPE_PACKAGE);
 	if (!pkg) {
@@ -107,41 +103,40 @@ static void intel_dsm_platform_mux_info(void)
 	ACPI_FREE(pkg);
 }
 
-static bool intel_dsm_pci_probe(struct pci_dev *pdev)
+static acpi_handle intel_dsm_pci_probe(struct pci_dev *pdev)
 {
 	acpi_handle dhandle;
 
 	dhandle = ACPI_HANDLE(&pdev->dev);
 	if (!dhandle)
-		return false;
+		return NULL;
 
 	if (!acpi_check_dsm(dhandle, &intel_dsm_guid, INTEL_DSM_REVISION_ID,
 			    1 << INTEL_DSM_FN_PLATFORM_MUX_INFO)) {
 		DRM_DEBUG_KMS("no _DSM method for intel device\n");
-		return false;
+		return NULL;
 	}
 
-	intel_dsm_priv.dhandle = dhandle;
-	intel_dsm_platform_mux_info();
+	intel_dsm_platform_mux_info(dhandle);
 
-	return true;
+	return dhandle;
 }
 
 static bool intel_dsm_detect(void)
 {
+	acpi_handle dhandle = NULL;
 	char acpi_method_name[255] = { 0 };
 	struct acpi_buffer buffer = {sizeof(acpi_method_name), acpi_method_name};
 	struct pci_dev *pdev = NULL;
-	bool has_dsm = false;
 	int vga_count = 0;
 
 	while ((pdev = pci_get_class(PCI_CLASS_DISPLAY_VGA << 8, pdev)) != NULL) {
 		vga_count++;
-		has_dsm |= intel_dsm_pci_probe(pdev);
+		dhandle = intel_dsm_pci_probe(pdev) ?: dhandle;
 	}
 
-	if (vga_count == 2 && has_dsm) {
-		acpi_get_name(intel_dsm_priv.dhandle, ACPI_FULL_PATHNAME, &buffer);
+	if (vga_count == 2 && dhandle) {
+		acpi_get_name(dhandle, ACPI_FULL_PATHNAME, &buffer);
 		DRM_DEBUG_DRIVER("vga_switcheroo: detected DSM switching method %s handle\n",
 				 acpi_method_name);
 		return true;

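Besides dropping the intel_dsm_priv singleton, the detect loop above leans
on the GNU "elvis" operator: dhandle keeps the last non-NULL probe result
without evaluating the probe twice. A stand-alone demonstration of that
semantics (gcc/clang extension; names invented for this sketch):

	#include <stdio.h>

	static const char *probe(int i)
	{
		return (i == 2) ? "handle-from-dev2" : NULL;
	}

	int main(void)
	{
		const char *handle = NULL;
		int i;

		for (i = 0; i < 4; i++)
			handle = probe(i) ?: handle;	/* keep last non-NULL */
		printf("%s\n", handle);			/* handle-from-dev2 */
		return 0;
	}
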
+ 4 - 2
drivers/gpu/drm/i915/intel_atomic.c

@@ -59,7 +59,8 @@ int intel_digital_connector_atomic_get_property(struct drm_connector *connector,
 	else if (property == dev_priv->broadcast_rgb_property)
 		*val = intel_conn_state->broadcast_rgb;
 	else {
-		DRM_DEBUG_ATOMIC("Unknown property %s\n", property->name);
+		DRM_DEBUG_ATOMIC("Unknown property [PROP:%d:%s]\n",
+				 property->base.id, property->name);
 		return -EINVAL;
 	}
 
@@ -95,7 +96,8 @@ int intel_digital_connector_atomic_set_property(struct drm_connector *connector,
 		return 0;
 	}
 
-	DRM_DEBUG_ATOMIC("Unknown property %s\n", property->name);
+	DRM_DEBUG_ATOMIC("Unknown property [PROP:%d:%s]\n",
+			 property->base.id, property->name);
 	return -EINVAL;
 }
 

+ 4 - 2
drivers/gpu/drm/i915/intel_atomic_plane.c

@@ -265,7 +265,8 @@ intel_plane_atomic_get_property(struct drm_plane *plane,
 				struct drm_property *property,
 				uint64_t *val)
 {
-	DRM_DEBUG_KMS("Unknown plane property '%s'\n", property->name);
+	DRM_DEBUG_KMS("Unknown property [PROP:%d:%s]\n",
+		      property->base.id, property->name);
 	return -EINVAL;
 }
 
@@ -287,6 +288,7 @@ intel_plane_atomic_set_property(struct drm_plane *plane,
 				struct drm_property *property,
 				uint64_t val)
 {
-	DRM_DEBUG_KMS("Unknown plane property '%s'\n", property->name);
+	DRM_DEBUG_KMS("Unknown property [PROP:%d:%s]\n",
+		      property->base.id, property->name);
 	return -EINVAL;
 }

+ 29 - 19
drivers/gpu/drm/i915/intel_audio.c

@@ -59,6 +59,7 @@
  */
 
 /* DP N/M table */
+#define LC_810M	810000
 #define LC_540M	540000
 #define LC_270M	270000
 #define LC_162M	162000
@@ -99,6 +100,15 @@ static const struct dp_aud_n_m dp_aud_n_m[] = {
 	{ 128000, LC_540M, 4096, 33750 },
 	{ 176400, LC_540M, 3136, 18750 },
 	{ 192000, LC_540M, 2048, 11250 },
+	{ 32000, LC_810M, 1024, 50625 },
+	{ 44100, LC_810M, 784, 28125 },
+	{ 48000, LC_810M, 512, 16875 },
+	{ 64000, LC_810M, 2048, 50625 },
+	{ 88200, LC_810M, 1568, 28125 },
+	{ 96000, LC_810M, 1024, 16875 },
+	{ 128000, LC_810M, 4096, 50625 },
+	{ 176400, LC_810M, 3136, 28125 },
+	{ 192000, LC_810M, 2048, 16875 },
 };
 
 static const struct dp_aud_n_m *
@@ -198,13 +208,13 @@ static int audio_config_hdmi_get_n(const struct intel_crtc_state *crtc_state,
 }
 
 static bool intel_eld_uptodate(struct drm_connector *connector,
-			       i915_reg_t reg_eldv, uint32_t bits_eldv,
-			       i915_reg_t reg_elda, uint32_t bits_elda,
+			       i915_reg_t reg_eldv, u32 bits_eldv,
+			       i915_reg_t reg_elda, u32 bits_elda,
 			       i915_reg_t reg_edid)
 {
 	struct drm_i915_private *dev_priv = to_i915(connector->dev);
-	uint8_t *eld = connector->eld;
-	uint32_t tmp;
+	const u8 *eld = connector->eld;
+	u32 tmp;
 	int i;
 
 	tmp = I915_READ(reg_eldv);
@@ -218,7 +228,7 @@ static bool intel_eld_uptodate(struct drm_connector *connector,
 	I915_WRITE(reg_elda, tmp);
 
 	for (i = 0; i < drm_eld_size(eld) / 4; i++)
-		if (I915_READ(reg_edid) != *((uint32_t *)eld + i))
+		if (I915_READ(reg_edid) != *((const u32 *)eld + i))
 			return false;
 
 	return true;
@@ -229,7 +239,7 @@ static void g4x_audio_codec_disable(struct intel_encoder *encoder,
 				    const struct drm_connector_state *old_conn_state)
 {
 	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
-	uint32_t eldv, tmp;
+	u32 eldv, tmp;
 
 	DRM_DEBUG_KMS("Disable audio codec\n");
 
@@ -251,12 +261,12 @@ static void g4x_audio_codec_enable(struct intel_encoder *encoder,
 {
 	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
 	struct drm_connector *connector = conn_state->connector;
-	uint8_t *eld = connector->eld;
-	uint32_t eldv;
-	uint32_t tmp;
+	const u8 *eld = connector->eld;
+	u32 eldv;
+	u32 tmp;
 	int len, i;
 
-	DRM_DEBUG_KMS("Enable audio codec, %u bytes ELD\n", eld[2]);
+	DRM_DEBUG_KMS("Enable audio codec, %u bytes ELD\n", drm_eld_size(eld));
 
 	tmp = I915_READ(G4X_AUD_VID_DID);
 	if (tmp == INTEL_AUDIO_DEVBLC || tmp == INTEL_AUDIO_DEVCL)
@@ -278,7 +288,7 @@ static void g4x_audio_codec_enable(struct intel_encoder *encoder,
 	len = min(drm_eld_size(eld) / 4, len);
 	DRM_DEBUG_DRIVER("ELD size %d\n", len);
 	for (i = 0; i < len; i++)
-		I915_WRITE(G4X_HDMIW_HDMIEDID, *((uint32_t *)eld + i));
+		I915_WRITE(G4X_HDMIW_HDMIEDID, *((const u32 *)eld + i));
 
 	tmp = I915_READ(G4X_AUD_CNTL_ST);
 	tmp |= eldv;
@@ -393,7 +403,7 @@ static void hsw_audio_codec_disable(struct intel_encoder *encoder,
 	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
 	struct intel_crtc *crtc = to_intel_crtc(old_crtc_state->base.crtc);
 	enum pipe pipe = crtc->pipe;
-	uint32_t tmp;
+	u32 tmp;
 
 	DRM_DEBUG_KMS("Disable audio codec on pipe %c\n", pipe_name(pipe));
 
@@ -426,8 +436,8 @@ static void hsw_audio_codec_enable(struct intel_encoder *encoder,
 	struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc);
 	struct drm_connector *connector = conn_state->connector;
 	enum pipe pipe = crtc->pipe;
-	const uint8_t *eld = connector->eld;
-	uint32_t tmp;
+	const u8 *eld = connector->eld;
+	u32 tmp;
 	int len, i;
 
 	DRM_DEBUG_KMS("Enable audio codec on pipe %c, %u bytes ELD\n",
@@ -456,7 +466,7 @@ static void hsw_audio_codec_enable(struct intel_encoder *encoder,
 	/* Up to 84 bytes of hw ELD buffer */
 	len = min(drm_eld_size(eld), 84);
 	for (i = 0; i < len / 4; i++)
-		I915_WRITE(HSW_AUD_EDID_DATA(pipe), *((uint32_t *)eld + i));
+		I915_WRITE(HSW_AUD_EDID_DATA(pipe), *((const u32 *)eld + i));
 
 	/* ELD valid */
 	tmp = I915_READ(HSW_AUD_PIN_ELD_CP_VLD);
@@ -477,7 +487,7 @@ static void ilk_audio_codec_disable(struct intel_encoder *encoder,
 	struct intel_crtc *crtc = to_intel_crtc(old_crtc_state->base.crtc);
 	enum pipe pipe = crtc->pipe;
 	enum port port = encoder->port;
-	uint32_t tmp, eldv;
+	u32 tmp, eldv;
 	i915_reg_t aud_config, aud_cntrl_st2;
 
 	DRM_DEBUG_KMS("Disable audio codec on port %c, pipe %c\n",
@@ -524,8 +534,8 @@ static void ilk_audio_codec_enable(struct intel_encoder *encoder,
 	struct drm_connector *connector = conn_state->connector;
 	enum pipe pipe = crtc->pipe;
 	enum port port = encoder->port;
-	uint8_t *eld = connector->eld;
-	uint32_t tmp, eldv;
+	const u8 *eld = connector->eld;
+	u32 tmp, eldv;
 	int len, i;
 	i915_reg_t hdmiw_hdmiedid, aud_config, aud_cntl_st, aud_cntrl_st2;
 
@@ -575,7 +585,7 @@ static void ilk_audio_codec_enable(struct intel_encoder *encoder,
 	/* Up to 84 bytes of hw ELD buffer */
 	len = min(drm_eld_size(eld), 84);
 	for (i = 0; i < len / 4; i++)
-		I915_WRITE(hdmiw_hdmiedid, *((uint32_t *)eld + i));
+		I915_WRITE(hdmiw_hdmiedid, *((const u32 *)eld + i));
 
 	/* ELD valid */
 	tmp = I915_READ(aud_cntrl_st2);

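The new LC_810M (HBR3) rows follow the same DisplayPort relation as the rest
of the table: Maud / Naud = 512 * fs / f_LS_Clk. A quick integer check of
the added entries -- assuming the row layout is { sample rate, link clock in
kHz, Maud, Naud }, matching the existing LC_540M rows:

	#include <stdio.h>

	int main(void)
	{
		/* { fs, link clock in kHz, Maud, Naud } - rows added above */
		static const long long rows[][4] = {
			{  32000, 810000, 1024, 50625 },
			{  48000, 810000,  512, 16875 },
			{ 192000, 810000, 2048, 16875 },
		};
		unsigned int i;

		for (i = 0; i < sizeof(rows) / sizeof(rows[0]); i++) {
			long long fs = rows[i][0], ls_clk = rows[i][1] * 1000;
			long long m = rows[i][2], n = rows[i][3];

			/* cross-multiplied to stay in integers */
			printf("fs=%6lld: %s\n", fs,
			       512 * fs * n == m * ls_clk ? "ok" : "MISMATCH");
		}
		return 0;
	}

All three (and the remaining six) satisfy the relation exactly, which is why
e.g. 48 kHz at the 810 MHz link clock gets the compact 512/16875 pair.
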
+ 7 - 5
drivers/gpu/drm/i915/intel_bios.c

@@ -652,7 +652,7 @@ parse_edp(struct drm_i915_private *dev_priv, const struct bdb_header *bdb)
 	}
 
 	if (bdb->version >= 173) {
-		uint8_t vswing;
+		u8 vswing;
 
 		/* Don't read from VBT if module parameter has valid value*/
 		if (i915_modparams.edp_vswing) {
@@ -710,7 +710,9 @@ parse_psr(struct drm_i915_private *dev_priv, const struct bdb_header *bdb)
 	 * New psr options 0=500us, 1=100us, 2=2500us, 3=0us
 	 * Old decimal value is wake up time in multiples of 100 us.
 	 */
-	if (bdb->version >= 209 && IS_GEN9_BC(dev_priv)) {
+	if (bdb->version >= 205 &&
+	    (IS_GEN9_BC(dev_priv) || IS_GEMINILAKE(dev_priv) ||
+	     INTEL_GEN(dev_priv) >= 10)) {
 		switch (psr_table->tp1_wakeup_time) {
 		case 0:
 			dev_priv->vbt.psr.tp1_wakeup_time_us = 500;
@@ -738,7 +740,7 @@ parse_psr(struct drm_i915_private *dev_priv, const struct bdb_header *bdb)
 			dev_priv->vbt.psr.tp2_tp3_wakeup_time_us = 100;
 			break;
 		case 3:
-			dev_priv->vbt.psr.tp1_wakeup_time_us = 0;
+			dev_priv->vbt.psr.tp2_tp3_wakeup_time_us = 0;
 			break;
 		default:
 			DRM_DEBUG_KMS("VBT tp2_tp3 wakeup time value %d is outside range[0-3], defaulting to max value 2500us\n",
@@ -964,7 +966,7 @@ static int goto_next_sequence_v3(const u8 *data, int index, int total)
 	 * includes MIPI_SEQ_ELEM_END byte, excludes the final MIPI_SEQ_END
 	 * byte.
 	 */
-	size_of_sequence = *((const uint32_t *)(data + index));
+	size_of_sequence = *((const u32 *)(data + index));
 	index += 4;
 
 	seq_end = index + size_of_sequence;
@@ -1719,7 +1721,7 @@ void intel_bios_init(struct drm_i915_private *dev_priv)
 	const struct bdb_header *bdb;
 	u8 __iomem *bios = NULL;
 
-	if (HAS_PCH_NOP(dev_priv)) {
+	if (INTEL_INFO(dev_priv)->num_pipes == 0) {
 		DRM_DEBUG_KMS("Skipping VBT init due to disabled display.\n");
 		return;
 	}

+ 53 - 3
drivers/gpu/drm/i915/intel_cdclk.c

@@ -991,6 +991,16 @@ static void skl_set_cdclk(struct drm_i915_private *dev_priv,
 	u32 freq_select, cdclk_ctl;
 	int ret;
 
+	/*
+	 * Based on WA#1183, the 308 and 617 MHz CDCLK rates are
+	 * unsupported on SKL. In theory this should never happen since only
+	 * the eDP1.4 2.16 and 4.32Gbps rates require it, but eDP1.4 is not
+	 * supported on SKL either, see the above WA. WARN whenever trying to
+	 * use the corresponding VCO freq as that always leads to using the
+	 * minimum 308MHz CDCLK.
+	 */
+	WARN_ON_ONCE(IS_SKYLAKE(dev_priv) && vco == 8640000);
+
 	mutex_lock(&dev_priv->pcu_lock);
 	ret = skl_pcode_request(dev_priv, SKL_PCODE_CDCLK_CONTROL,
 				SKL_CDCLK_PREPARE_FOR_CHANGE,
@@ -1861,11 +1871,35 @@ static void icl_set_cdclk(struct drm_i915_private *dev_priv,
 			      skl_cdclk_decimal(cdclk));
 
 	mutex_lock(&dev_priv->pcu_lock);
-	/* TODO: add proper DVFS support. */
-	sandybridge_pcode_write(dev_priv, SKL_PCODE_CDCLK_CONTROL, 2);
+	sandybridge_pcode_write(dev_priv, SKL_PCODE_CDCLK_CONTROL,
+				cdclk_state->voltage_level);
 	mutex_unlock(&dev_priv->pcu_lock);
 
 	intel_update_cdclk(dev_priv);
+
+	/*
+	 * Can't read out the voltage level :(
+	 * Let's just assume everything is as expected.
+	 */
+	dev_priv->cdclk.hw.voltage_level = cdclk_state->voltage_level;
+}
+
+static u8 icl_calc_voltage_level(int cdclk)
+{
+	switch (cdclk) {
+	case 50000:
+	case 307200:
+	case 312000:
+		return 0;
+	case 556800:
+	case 552000:
+		return 1;
+	default:
+		MISSING_CASE(cdclk);
+	case 652800:
+	case 648000:
+		return 2;
+	}
 }
 
 static void icl_get_cdclk(struct drm_i915_private *dev_priv,
@@ -1899,7 +1933,7 @@ static void icl_get_cdclk(struct drm_i915_private *dev_priv,
 		 */
 		cdclk_state->vco = 0;
 		cdclk_state->cdclk = cdclk_state->bypass;
-		return;
+		goto out;
 	}
 
 	cdclk_state->vco = (val & BXT_DE_PLL_RATIO_MASK) * cdclk_state->ref;
@@ -1908,6 +1942,14 @@ static void icl_get_cdclk(struct drm_i915_private *dev_priv,
 	WARN_ON((val & BXT_CDCLK_CD2X_DIV_SEL_MASK) != 0);
 
 	cdclk_state->cdclk = cdclk_state->vco / 2;
+
+out:
+	/*
+	 * Can't read this out :( Let's assume it's
+	 * at least what the CDCLK frequency requires.
+	 */
+	cdclk_state->voltage_level =
+		icl_calc_voltage_level(cdclk_state->cdclk);
 }
 
 /**
@@ -1950,6 +1992,8 @@ sanitize:
 	sanitized_state.cdclk = icl_calc_cdclk(0, sanitized_state.ref);
 	sanitized_state.vco = icl_calc_cdclk_pll_vco(dev_priv,
 						     sanitized_state.cdclk);
+	sanitized_state.voltage_level =
+				icl_calc_voltage_level(sanitized_state.cdclk);
 
 	icl_set_cdclk(dev_priv, &sanitized_state);
 }
@@ -1967,6 +2011,7 @@ void icl_uninit_cdclk(struct drm_i915_private *dev_priv)
 
 	cdclk_state.cdclk = cdclk_state.bypass;
 	cdclk_state.vco = 0;
+	cdclk_state.voltage_level = icl_calc_voltage_level(cdclk_state.cdclk);
 
 	icl_set_cdclk(dev_priv, &cdclk_state);
 }
@@ -2470,6 +2515,9 @@ static int icl_modeset_calc_cdclk(struct drm_atomic_state *state)
 
 	intel_state->cdclk.logical.vco = vco;
 	intel_state->cdclk.logical.cdclk = cdclk;
+	intel_state->cdclk.logical.voltage_level =
+		max(icl_calc_voltage_level(cdclk),
+		    cnl_compute_min_voltage_level(intel_state));
 
 	if (!intel_state->active_crtcs) {
 		cdclk = icl_calc_cdclk(0, ref);
@@ -2477,6 +2525,8 @@ static int icl_modeset_calc_cdclk(struct drm_atomic_state *state)
 
 		intel_state->cdclk.actual.vco = vco;
 		intel_state->cdclk.actual.cdclk = cdclk;
+		intel_state->cdclk.actual.voltage_level =
+			icl_calc_voltage_level(cdclk);
 	} else {
 		intel_state->cdclk.actual = intel_state->cdclk.logical;
 	}

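One subtlety in icl_calc_voltage_level() above: the default: label sits
before the last pair of cases on purpose, so an unexpected frequency warns
once via MISSING_CASE() and then falls through to the most conservative
(highest) voltage level. A self-contained analogue of the pattern
(user-space stand-in for MISSING_CASE, values reused for illustration):

	#include <stdio.h>

	static unsigned char calc_level(int cdclk)
	{
		switch (cdclk) {
		case 307200:
		case 312000:
			return 0;
		case 552000:
		case 556800:
			return 1;
		default:
			fprintf(stderr, "missing case (%d)\n", cdclk);
			/* fall through to the safe maximum */
		case 648000:
		case 652800:
			return 2;
		}
	}

	int main(void)
	{
		/* prints "0 2 2" plus one warning for the bogus input */
		printf("%u %u %u\n", calc_level(312000),
		       calc_level(648000), calc_level(123));
		return 0;
	}
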
+ 34 - 1
drivers/gpu/drm/i915/intel_crt.c

@@ -232,6 +232,8 @@ static void hsw_post_disable_crt(struct intel_encoder *encoder,
 {
 	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
 
+	intel_ddi_disable_pipe_clock(old_crtc_state);
+
 	pch_post_disable_crt(encoder, old_crtc_state, old_conn_state);
 
 	lpt_disable_pch_transcoder(dev_priv);
@@ -268,6 +270,8 @@ static void hsw_pre_enable_crt(struct intel_encoder *encoder,
 	intel_set_cpu_fifo_underrun_reporting(dev_priv, pipe, false);
 
 	dev_priv->display.fdi_link_train(crtc, crtc_state);
+
+	intel_ddi_enable_pipe_clock(crtc_state);
 }
 
 static void hsw_enable_crt(struct intel_encoder *encoder,
@@ -304,6 +308,9 @@ intel_crt_mode_valid(struct drm_connector *connector,
 	int max_dotclk = dev_priv->max_dotclk_freq;
 	int max_clock;
 
+	if (mode->flags & DRM_MODE_FLAG_DBLSCAN)
+		return MODE_NO_DBLESCAN;
+
 	if (mode->clock < 25000)
 		return MODE_CLOCK_LOW;
 
@@ -330,6 +337,10 @@ intel_crt_mode_valid(struct drm_connector *connector,
 	    (ironlake_get_lanes_required(mode->clock, 270000, 24) > 2))
 		return MODE_CLOCK_HIGH;
 
+	/* HSW/BDW FDI limited to 4k */
+	if (mode->hdisplay > 4096)
+		return MODE_H_ILLEGAL;
+
 	return MODE_OK;
 }
 
@@ -337,6 +348,12 @@ static bool intel_crt_compute_config(struct intel_encoder *encoder,
 				     struct intel_crtc_state *pipe_config,
 				     struct drm_connector_state *conn_state)
 {
+	struct drm_display_mode *adjusted_mode =
+		&pipe_config->base.adjusted_mode;
+
+	if (adjusted_mode->flags & DRM_MODE_FLAG_DBLSCAN)
+		return false;
+
 	return true;
 }
 
@@ -344,6 +361,12 @@ static bool pch_crt_compute_config(struct intel_encoder *encoder,
 				   struct intel_crtc_state *pipe_config,
 				   struct drm_connector_state *conn_state)
 {
+	struct drm_display_mode *adjusted_mode =
+		&pipe_config->base.adjusted_mode;
+
+	if (adjusted_mode->flags & DRM_MODE_FLAG_DBLSCAN)
+		return false;
+
 	pipe_config->has_pch_encoder = true;
 
 	return true;
@@ -354,6 +377,16 @@ static bool hsw_crt_compute_config(struct intel_encoder *encoder,
 				   struct drm_connector_state *conn_state)
 {
 	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
+	struct drm_display_mode *adjusted_mode =
+		&pipe_config->base.adjusted_mode;
+
+	if (adjusted_mode->flags & DRM_MODE_FLAG_DBLSCAN)
+		return false;
+
+	/* HSW/BDW FDI limited to 4k */
+	if (adjusted_mode->crtc_hdisplay > 4096 ||
+	    adjusted_mode->crtc_hblank_start > 4096)
+		return false;
 
 	pipe_config->has_pch_encoder = true;
 
@@ -493,7 +526,7 @@ static bool intel_crt_detect_hotplug(struct drm_connector *connector)
 	 * to get a reliable result.
 	 */
 
-	if (IS_G4X(dev_priv) && !IS_GM45(dev_priv))
+	if (IS_G45(dev_priv))
 		tries = 2;
 	else
 		tries = 1;

+ 36 - 3
drivers/gpu/drm/i915/intel_ddi.c

@@ -915,7 +915,14 @@ static int intel_ddi_hdmi_level(struct drm_i915_private *dev_priv, enum port por
 
 	level = dev_priv->vbt.ddi_port_info[port].hdmi_level_shift;
 
-	if (IS_CANNONLAKE(dev_priv)) {
+	if (IS_ICELAKE(dev_priv)) {
+		if (port == PORT_A || port == PORT_B)
+			icl_get_combo_buf_trans(dev_priv, port,
+						INTEL_OUTPUT_HDMI, &n_entries);
+		else
+			n_entries = ARRAY_SIZE(icl_mg_phy_ddi_translations);
+		default_entry = n_entries - 1;
+	} else if (IS_CANNONLAKE(dev_priv)) {
 		cnl_get_buf_trans_hdmi(dev_priv, &n_entries);
 		default_entry = n_entries - 1;
 	} else if (IS_GEN9_LP(dev_priv)) {
@@ -1055,6 +1062,8 @@ static uint32_t hsw_pll_to_ddi_pll_sel(const struct intel_shared_dpll *pll)
 static uint32_t icl_pll_to_ddi_pll_sel(struct intel_encoder *encoder,
 				       const struct intel_shared_dpll *pll)
 {
+	struct intel_crtc *crtc = to_intel_crtc(encoder->base.crtc);
+	int clock = crtc->config->port_clock;
 	const enum intel_dpll_id id = pll->info->id;
 
 	switch (id) {
@@ -1063,6 +1072,20 @@ static uint32_t icl_pll_to_ddi_pll_sel(struct intel_encoder *encoder,
 	case DPLL_ID_ICL_DPLL0:
 	case DPLL_ID_ICL_DPLL1:
 		return DDI_CLK_SEL_NONE;
+	case DPLL_ID_ICL_TBTPLL:
+		switch (clock) {
+		case 162000:
+			return DDI_CLK_SEL_TBT_162;
+		case 270000:
+			return DDI_CLK_SEL_TBT_270;
+		case 540000:
+			return DDI_CLK_SEL_TBT_540;
+		case 810000:
+			return DDI_CLK_SEL_TBT_810;
+		default:
+			MISSING_CASE(clock);
+			break;
+		}
 	case DPLL_ID_ICL_MGPLL1:
 	case DPLL_ID_ICL_MGPLL2:
 	case DPLL_ID_ICL_MGPLL3:
@@ -2632,6 +2655,8 @@ static void intel_ddi_pre_enable_dp(struct intel_encoder *encoder,
 	intel_dp_start_link_train(intel_dp);
 	if (port != PORT_A || INTEL_GEN(dev_priv) >= 9)
 		intel_dp_stop_link_train(intel_dp);
+
+	intel_ddi_enable_pipe_clock(crtc_state);
 }
 
 static void intel_ddi_pre_enable_hdmi(struct intel_encoder *encoder,
@@ -2662,6 +2687,8 @@ static void intel_ddi_pre_enable_hdmi(struct intel_encoder *encoder,
 	if (IS_GEN9_BC(dev_priv))
 		skl_ddi_set_iboost(encoder, level, INTEL_OUTPUT_HDMI);
 
+	intel_ddi_enable_pipe_clock(crtc_state);
+
 	intel_dig_port->set_infoframes(&encoder->base,
 				       crtc_state->has_infoframe,
 				       crtc_state, conn_state);
@@ -2731,6 +2758,8 @@ static void intel_ddi_post_disable_dp(struct intel_encoder *encoder,
 	bool is_mst = intel_crtc_has_type(old_crtc_state,
 					  INTEL_OUTPUT_DP_MST);
 
+	intel_ddi_disable_pipe_clock(old_crtc_state);
+
 	/*
 	 * Power down sink before disabling the port, otherwise we end
 	 * up getting interrupts from the sink on detecting link loss.
@@ -2756,11 +2785,13 @@ static void intel_ddi_post_disable_hdmi(struct intel_encoder *encoder,
 	struct intel_digital_port *dig_port = enc_to_dig_port(&encoder->base);
 	struct intel_hdmi *intel_hdmi = &dig_port->hdmi;
 
-	intel_disable_ddi_buf(encoder);
-
 	dig_port->set_infoframes(&encoder->base, false,
 				 old_crtc_state, old_conn_state);
 
+	intel_ddi_disable_pipe_clock(old_crtc_state);
+
+	intel_disable_ddi_buf(encoder);
+
 	intel_display_power_put(dev_priv, dig_port->ddi_io_power_domain);
 
 	intel_ddi_clk_disable(encoder);
@@ -3047,6 +3078,8 @@ void intel_ddi_compute_min_voltage_level(struct drm_i915_private *dev_priv,
 {
 	if (IS_CANNONLAKE(dev_priv) && crtc_state->port_clock > 594000)
 		crtc_state->min_voltage_level = 2;
+	else if (IS_ICELAKE(dev_priv) && crtc_state->port_clock > 594000)
+		crtc_state->min_voltage_level = 1;
 }
 
 void intel_ddi_get_config(struct intel_encoder *encoder,

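The icl_pll_to_ddi_pll_sel() hunk above wires the TBT PLL case to the
DDI_CLK_SEL_TBT_* values added to i915_reg.h earlier in this diff, one per
standard DP link rate. The same mapping as a stand-alone lookup, with the
raw field values spelled out (values from the defines above):

	#include <stdio.h>
	#include <stdint.h>

	static uint32_t tbt_clk_sel(int port_clock)
	{
		switch (port_clock) {
		case 162000: return 0xCu << 28;	/* DDI_CLK_SEL_TBT_162 */
		case 270000: return 0xDu << 28;	/* DDI_CLK_SEL_TBT_270 */
		case 540000: return 0xEu << 28;	/* DDI_CLK_SEL_TBT_540 */
		case 810000: return 0xFu << 28;	/* DDI_CLK_SEL_TBT_810 */
		default:     return 0x0u << 28;	/* DDI_CLK_SEL_NONE */
		}
	}

	int main(void)
	{
		printf("0x%08x\n", tbt_clk_sel(270000));  /* 0xd0000000 */
		return 0;
	}
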
+ 104 - 18
drivers/gpu/drm/i915/intel_display.c

@@ -5646,9 +5646,6 @@ static void haswell_crtc_enable(struct intel_crtc_state *pipe_config,
 
 	intel_encoders_pre_enable(crtc, pipe_config, old_state);
 
-	if (!transcoder_is_dsi(cpu_transcoder))
-		intel_ddi_enable_pipe_clock(pipe_config);
-
 	if (intel_crtc_has_dp_encoder(intel_crtc->config))
 		intel_dp_set_m_n(intel_crtc, M1_N1);
 
@@ -5813,7 +5810,7 @@ static void haswell_crtc_disable(struct intel_crtc_state *old_crtc_state,
 	struct drm_crtc *crtc = old_crtc_state->base.crtc;
 	struct drm_i915_private *dev_priv = to_i915(crtc->dev);
 	struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
-	enum transcoder cpu_transcoder = intel_crtc->config->cpu_transcoder;
+	enum transcoder cpu_transcoder = old_crtc_state->cpu_transcoder;
 
 	intel_encoders_disable(crtc, old_crtc_state, old_state);
 
@@ -5824,8 +5821,8 @@ static void haswell_crtc_disable(struct intel_crtc_state *old_crtc_state,
 	if (!transcoder_is_dsi(cpu_transcoder))
 		intel_disable_pipe(old_crtc_state);
 
-	if (intel_crtc_has_type(intel_crtc->config, INTEL_OUTPUT_DP_MST))
-		intel_ddi_set_vc_payload_alloc(intel_crtc->config, false);
+	if (intel_crtc_has_type(old_crtc_state, INTEL_OUTPUT_DP_MST))
+		intel_ddi_set_vc_payload_alloc(old_crtc_state, false);
 
 	if (!transcoder_is_dsi(cpu_transcoder))
 		intel_ddi_disable_transcoder_func(dev_priv, cpu_transcoder);
@@ -5835,9 +5832,6 @@ static void haswell_crtc_disable(struct intel_crtc_state *old_crtc_state,
 	else
 		ironlake_pfit_disable(intel_crtc, false);
 
-	if (!transcoder_is_dsi(cpu_transcoder))
-		intel_ddi_disable_pipe_clock(intel_crtc->config);
-
 	intel_encoders_post_disable(crtc, old_crtc_state, old_state);
 
 	if (INTEL_GEN(dev_priv) >= 11)
@@ -9214,6 +9208,44 @@ static void cannonlake_get_ddi_pll(struct drm_i915_private *dev_priv,
 	pipe_config->shared_dpll = intel_get_shared_dpll_by_id(dev_priv, id);
 }
 
+static void icelake_get_ddi_pll(struct drm_i915_private *dev_priv,
+				enum port port,
+				struct intel_crtc_state *pipe_config)
+{
+	enum intel_dpll_id id;
+	u32 temp;
+
+	/* TODO: TBT PLL not implemented. */
+	switch (port) {
+	case PORT_A:
+	case PORT_B:
+		temp = I915_READ(DPCLKA_CFGCR0_ICL) &
+		       DPCLKA_CFGCR0_DDI_CLK_SEL_MASK(port);
+		id = temp >> DPCLKA_CFGCR0_DDI_CLK_SEL_SHIFT(port);
+
+		if (WARN_ON(id != DPLL_ID_ICL_DPLL0 && id != DPLL_ID_ICL_DPLL1))
+			return;
+		break;
+	case PORT_C:
+		id = DPLL_ID_ICL_MGPLL1;
+		break;
+	case PORT_D:
+		id = DPLL_ID_ICL_MGPLL2;
+		break;
+	case PORT_E:
+		id = DPLL_ID_ICL_MGPLL3;
+		break;
+	case PORT_F:
+		id = DPLL_ID_ICL_MGPLL4;
+		break;
+	default:
+		MISSING_CASE(port);
+		return;
+	}
+
+	pipe_config->shared_dpll = intel_get_shared_dpll_by_id(dev_priv, id);
+}
+
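
For ports A and B the readout above extracts the PLL id from the per-port clock-select field of DPCLKA_CFGCR0 with a mask/shift pair. The same bitfield-readout pattern in isolation, using an illustrative 2-bits-per-port layout rather than the actual ICL register encoding:

#include <stdint.h>
#include <stdio.h>

/* Illustrative layout: 2 bits of clock select per port, packed together. */
#define CLK_SEL_SHIFT(port)	((port) * 2)
#define CLK_SEL_MASK(port)	(0x3u << CLK_SEL_SHIFT(port))

static unsigned int get_pll_id(uint32_t cfgcr0, int port)
{
	return (cfgcr0 & CLK_SEL_MASK(port)) >> CLK_SEL_SHIFT(port);
}

int main(void)
{
	uint32_t cfgcr0 = 0x4;	/* port 1 field holds 1 -> DPLL1 */

	printf("port B uses DPLL%u\n", get_pll_id(cfgcr0, 1));
	return 0;
}
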
 static void bxt_get_ddi_pll(struct drm_i915_private *dev_priv,
 				enum port port,
 				struct intel_crtc_state *pipe_config)
@@ -9401,7 +9433,9 @@ static void haswell_get_ddi_port_state(struct intel_crtc *crtc,
 
 	port = (tmp & TRANS_DDI_PORT_MASK) >> TRANS_DDI_PORT_SHIFT;
 
-	if (IS_CANNONLAKE(dev_priv))
+	if (IS_ICELAKE(dev_priv))
+		icelake_get_ddi_pll(dev_priv, port, pipe_config);
+	else if (IS_CANNONLAKE(dev_priv))
 		cannonlake_get_ddi_pll(dev_priv, port, pipe_config);
 	else if (IS_GEN9_BC(dev_priv))
 		skylake_get_ddi_pll(dev_priv, port, pipe_config);
@@ -14054,7 +14088,14 @@ static void intel_setup_outputs(struct drm_i915_private *dev_priv)
 	if (intel_crt_present(dev_priv))
 		intel_crt_init(dev_priv);
 
-	if (IS_GEN9_LP(dev_priv)) {
+	if (IS_ICELAKE(dev_priv)) {
+		intel_ddi_init(dev_priv, PORT_A);
+		intel_ddi_init(dev_priv, PORT_B);
+		intel_ddi_init(dev_priv, PORT_C);
+		intel_ddi_init(dev_priv, PORT_D);
+		intel_ddi_init(dev_priv, PORT_E);
+		intel_ddi_init(dev_priv, PORT_F);
+	} else if (IS_GEN9_LP(dev_priv)) {
 		/*
 		 * FIXME: Broxton doesn't support port detection via the
 		 * DDI_BUF_CTL_A or SFUSE_STRAP registers, find another way to
@@ -14572,12 +14613,26 @@ static enum drm_mode_status
 intel_mode_valid(struct drm_device *dev,
 		 const struct drm_display_mode *mode)
 {
+	struct drm_i915_private *dev_priv = to_i915(dev);
+	int hdisplay_max, htotal_max;
+	int vdisplay_max, vtotal_max;
+
+	/*
+	 * Can't reject DBLSCAN here because Xorg ddxen can add piles
+	 * of DBLSCAN modes to the output's mode list when they detect
+	 * the scaling mode property on the connector. And they don't
+	 * ask the kernel to validate those modes in any way until
+	 * modeset time at which point the client gets a protocol error.
+	 * So in order to not upset those clients we silently ignore the
+	 * DBLSCAN flag on such connectors. For other connectors we will
+	 * reject modes with the DBLSCAN flag in encoder->compute_config().
+	 * And we always reject DBLSCAN modes in connector->mode_valid()
+	 * as we never want such modes on the connector's mode list.
+	 */
+
 	if (mode->vscan > 1)
 		return MODE_NO_VSCAN;
 
-	if (mode->flags & DRM_MODE_FLAG_DBLSCAN)
-		return MODE_NO_DBLESCAN;
-
 	if (mode->flags & DRM_MODE_FLAG_HSKEW)
 		return MODE_H_ILLEGAL;
 
@@ -14591,6 +14646,36 @@ intel_mode_valid(struct drm_device *dev,
 			   DRM_MODE_FLAG_CLKDIV2))
 		return MODE_BAD;
 
+	if (INTEL_GEN(dev_priv) >= 9 ||
+	    IS_BROADWELL(dev_priv) || IS_HASWELL(dev_priv)) {
+		hdisplay_max = 8192; /* FDI max 4096 handled elsewhere */
+		vdisplay_max = 4096;
+		htotal_max = 8192;
+		vtotal_max = 8192;
+	} else if (INTEL_GEN(dev_priv) >= 3) {
+		hdisplay_max = 4096;
+		vdisplay_max = 4096;
+		htotal_max = 8192;
+		vtotal_max = 8192;
+	} else {
+		hdisplay_max = 2048;
+		vdisplay_max = 2048;
+		htotal_max = 4096;
+		vtotal_max = 4096;
+	}
+
+	if (mode->hdisplay > hdisplay_max ||
+	    mode->hsync_start > htotal_max ||
+	    mode->hsync_end > htotal_max ||
+	    mode->htotal > htotal_max)
+		return MODE_H_ILLEGAL;
+
+	if (mode->vdisplay > vdisplay_max ||
+	    mode->vsync_start > vtotal_max ||
+	    mode->vsync_end > vtotal_max ||
+	    mode->vtotal > vtotal_max)
+		return MODE_V_ILLEGAL;
+
 	return MODE_OK;
 }
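
The new limit checks validate every horizontal and vertical timing value, not just the active size, against per-generation maxima. A self-contained sketch of the same validator, with the HSW/BDW special case folded into a plain generation number (an assumption for brevity):

#include <stdbool.h>
#include <stdio.h>

struct mode {
	int hdisplay, hsync_start, hsync_end, htotal;
	int vdisplay, vsync_start, vsync_end, vtotal;
};

/* Hypothetical reduction of the intel_mode_valid() limit checks. */
static bool mode_timings_valid(int gen, const struct mode *m)
{
	int hdisplay_max, htotal_max, vdisplay_max, vtotal_max;

	if (gen >= 9) {		/* also HSW/BDW in the real code */
		hdisplay_max = 8192;
		vdisplay_max = 4096;
		htotal_max = 8192;
		vtotal_max = 8192;
	} else if (gen >= 3) {
		hdisplay_max = 4096;
		vdisplay_max = 4096;
		htotal_max = 8192;
		vtotal_max = 8192;
	} else {
		hdisplay_max = 2048;
		vdisplay_max = 2048;
		htotal_max = 4096;
		vtotal_max = 4096;
	}

	if (m->hdisplay > hdisplay_max || m->hsync_start > htotal_max ||
	    m->hsync_end > htotal_max || m->htotal > htotal_max)
		return false;

	return m->vdisplay <= vdisplay_max && m->vsync_start <= vtotal_max &&
	       m->vsync_end <= vtotal_max && m->vtotal <= vtotal_max;
}

int main(void)
{
	struct mode m = { 4096, 4200, 4300, 4400, 2160, 2200, 2220, 2250 };

	printf("gen9: %s, gen2: %s\n",
	       mode_timings_valid(9, &m) ? "ok" : "rejected",
	       mode_timings_valid(2, &m) ? "ok" : "rejected");
	return 0;
}
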
 
@@ -15029,6 +15114,7 @@ int intel_modeset_init(struct drm_device *dev)
 		}
 	}
 
+	/* maximum framebuffer dimensions */
 	if (IS_GEN2(dev_priv)) {
 		dev->mode_config.max_width = 2048;
 		dev->mode_config.max_height = 2048;
@@ -15044,11 +15130,11 @@ int intel_modeset_init(struct drm_device *dev)
 		dev->mode_config.cursor_width = IS_I845G(dev_priv) ? 64 : 512;
 		dev->mode_config.cursor_height = 1023;
 	} else if (IS_GEN2(dev_priv)) {
-		dev->mode_config.cursor_width = GEN2_CURSOR_WIDTH;
-		dev->mode_config.cursor_height = GEN2_CURSOR_HEIGHT;
+		dev->mode_config.cursor_width = 64;
+		dev->mode_config.cursor_height = 64;
 	} else {
-		dev->mode_config.cursor_width = MAX_CURSOR_WIDTH;
-		dev->mode_config.cursor_height = MAX_CURSOR_HEIGHT;
+		dev->mode_config.cursor_width = 256;
+		dev->mode_config.cursor_height = 256;
 	}
 
 	dev->mode_config.fb_base = ggtt->gmadr.start;

+ 2 - 1
drivers/gpu/drm/i915/intel_display.h

@@ -155,7 +155,7 @@ enum aux_ch {
 	AUX_CH_B,
 	AUX_CH_C,
 	AUX_CH_D,
-	_AUX_CH_E, /* does not exist */
+	AUX_CH_E, /* ICL+ */
 	AUX_CH_F,
 };
 
@@ -196,6 +196,7 @@ enum intel_display_power_domain {
 	POWER_DOMAIN_AUX_B,
 	POWER_DOMAIN_AUX_C,
 	POWER_DOMAIN_AUX_D,
+	POWER_DOMAIN_AUX_E,
 	POWER_DOMAIN_AUX_F,
 	POWER_DOMAIN_AUX_IO_A,
 	POWER_DOMAIN_GMBUS,

+ 54 - 24
drivers/gpu/drm/i915/intel_dp.c

@@ -256,6 +256,17 @@ static int cnl_max_source_rate(struct intel_dp *intel_dp)
 	return 810000;
 }
 
+static int icl_max_source_rate(struct intel_dp *intel_dp)
+{
+	struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp);
+	enum port port = dig_port->base.port;
+
+	if (port == PORT_B)
+		return 540000;
+
+	return 810000;
+}
+
 static void
 intel_dp_set_source_rates(struct intel_dp *intel_dp)
 {
@@ -285,10 +296,13 @@ intel_dp_set_source_rates(struct intel_dp *intel_dp)
 	/* This should only be done once */
 	WARN_ON(intel_dp->source_rates || intel_dp->num_source_rates);
 
-	if (IS_CANNONLAKE(dev_priv)) {
+	if (INTEL_GEN(dev_priv) >= 10) {
 		source_rates = cnl_rates;
 		size = ARRAY_SIZE(cnl_rates);
-		max_rate = cnl_max_source_rate(intel_dp);
+		if (INTEL_GEN(dev_priv) == 10)
+			max_rate = cnl_max_source_rate(intel_dp);
+		else
+			max_rate = icl_max_source_rate(intel_dp);
 	} else if (IS_GEN9_LP(dev_priv)) {
 		source_rates = bxt_rates;
 		size = ARRAY_SIZE(bxt_rates);
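
With ICL folded into the gen10+ branch, CNL and ICL share one rate table and differ only in the cap: cnl_max_source_rate() stays SKU/port dependent, while icl_max_source_rate() allows HBR3 on every port except port B, which is capped at HBR2. A sketch of trimming a sorted rate table against such a cap (table contents recalled from intel_dp.c, not shown in this hunk):

#include <stdio.h>

/* Shared CNL/ICL source rate table (kHz), ascending. */
static const int cnl_rates[] = {
	162000, 216000, 270000, 324000, 432000, 540000, 648000, 810000,
};

/* Count the leading entries that do not exceed max_rate. */
static int num_usable_rates(const int *rates, int count, int max_rate)
{
	int i;

	for (i = 0; i < count; i++)
		if (rates[i] > max_rate)
			break;
	return i;
}

int main(void)
{
	int n = sizeof(cnl_rates) / sizeof(cnl_rates[0]);

	/* ICL port B tops out at HBR2, the other ports allow HBR3. */
	printf("port B: %d rates, others: %d rates\n",
	       num_usable_rates(cnl_rates, n, 540000),	/* 6 */
	       num_usable_rates(cnl_rates, n, 810000));	/* 8 */
	return 0;
}
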
@@ -420,6 +434,9 @@ intel_dp_mode_valid(struct drm_connector *connector,
 	int max_rate, mode_rate, max_lanes, max_link_clock;
 	int max_dotclk;
 
+	if (mode->flags & DRM_MODE_FLAG_DBLSCAN)
+		return MODE_NO_DBLESCAN;
+
 	max_dotclk = intel_dp_downstream_max_dotclock(intel_dp);
 
 	if (intel_dp_is_edp(intel_dp) && fixed_mode) {
@@ -1347,6 +1364,9 @@ static enum aux_ch intel_aux_ch(struct intel_dp *intel_dp)
 	case DP_AUX_D:
 		aux_ch = AUX_CH_D;
 		break;
+	case DP_AUX_E:
+		aux_ch = AUX_CH_E;
+		break;
 	case DP_AUX_F:
 		aux_ch = AUX_CH_F;
 		break;
@@ -1374,6 +1394,8 @@ intel_aux_power_domain(struct intel_dp *intel_dp)
 		return POWER_DOMAIN_AUX_C;
 	case AUX_CH_D:
 		return POWER_DOMAIN_AUX_D;
+	case AUX_CH_E:
+		return POWER_DOMAIN_AUX_E;
 	case AUX_CH_F:
 		return POWER_DOMAIN_AUX_F;
 	default:
@@ -1460,6 +1482,7 @@ static i915_reg_t skl_aux_ctl_reg(struct intel_dp *intel_dp)
 	case AUX_CH_B:
 	case AUX_CH_C:
 	case AUX_CH_D:
+	case AUX_CH_E:
 	case AUX_CH_F:
 		return DP_AUX_CH_CTL(aux_ch);
 	default:
@@ -1478,6 +1501,7 @@ static i915_reg_t skl_aux_data_reg(struct intel_dp *intel_dp, int index)
 	case AUX_CH_B:
 	case AUX_CH_C:
 	case AUX_CH_D:
+	case AUX_CH_E:
 	case AUX_CH_F:
 		return DP_AUX_CH_DATA(aux_ch, index);
 	default:
@@ -1541,6 +1565,13 @@ bool intel_dp_source_supports_hbr2(struct intel_dp *intel_dp)
 	return max_rate >= 540000;
 }
 
+bool intel_dp_source_supports_hbr3(struct intel_dp *intel_dp)
+{
+	int max_rate = intel_dp->source_rates[intel_dp->num_source_rates - 1];
+
+	return max_rate >= 810000;
+}
+
 static void
 intel_dp_set_clock(struct intel_encoder *encoder,
 		   struct intel_crtc_state *pipe_config)
@@ -1862,7 +1893,10 @@ intel_dp_compute_config(struct intel_encoder *encoder,
 						conn_state->scaling_mode);
 	}
 
-	if ((IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv)) &&
+	if (adjusted_mode->flags & DRM_MODE_FLAG_DBLSCAN)
+		return false;
+
+	if (HAS_GMCH_DISPLAY(dev_priv) &&
 	    adjusted_mode->flags & DRM_MODE_FLAG_INTERLACE)
 		return false;
 
@@ -2796,16 +2830,6 @@ static void intel_disable_dp(struct intel_encoder *encoder,
 static void g4x_disable_dp(struct intel_encoder *encoder,
 			   const struct intel_crtc_state *old_crtc_state,
 			   const struct drm_connector_state *old_conn_state)
-{
-	intel_disable_dp(encoder, old_crtc_state, old_conn_state);
-
-	/* disable the port before the pipe on g4x */
-	intel_dp_link_down(encoder, old_crtc_state);
-}
-
-static void ilk_disable_dp(struct intel_encoder *encoder,
-			   const struct intel_crtc_state *old_crtc_state,
-			   const struct drm_connector_state *old_conn_state)
 {
 	intel_disable_dp(encoder, old_crtc_state, old_conn_state);
 }
@@ -2821,13 +2845,19 @@ static void vlv_disable_dp(struct intel_encoder *encoder,
 	intel_disable_dp(encoder, old_crtc_state, old_conn_state);
 }
 
-static void ilk_post_disable_dp(struct intel_encoder *encoder,
+static void g4x_post_disable_dp(struct intel_encoder *encoder,
 				const struct intel_crtc_state *old_crtc_state,
 				const struct drm_connector_state *old_conn_state)
 {
 	struct intel_dp *intel_dp = enc_to_intel_dp(&encoder->base);
 	enum port port = encoder->port;
 
+	/*
+	 * Bspec does not list a specific disable sequence for g4x DP.
+	 * Follow the ilk+ sequence (disable pipe before the port) for
+	 * g4x DP as it does not suffer from underruns like the normal
+	 * g4x modeset sequence (disable pipe after the port).
+	 */
 	intel_dp_link_down(encoder, old_crtc_state);
 
 	/* Only ilk+ has port A */
@@ -2866,10 +2896,11 @@ _intel_dp_set_link_train(struct intel_dp *intel_dp,
 	struct drm_i915_private *dev_priv = to_i915(intel_dp_to_dev(intel_dp));
 	struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp);
 	enum port port = intel_dig_port->base.port;
+	uint8_t train_pat_mask = drm_dp_training_pattern_mask(intel_dp->dpcd);
 
-	if (dp_train_pat & DP_TRAINING_PATTERN_MASK)
+	if (dp_train_pat & train_pat_mask)
 		DRM_DEBUG_KMS("Using DP training pattern TPS%d\n",
-			      dp_train_pat & DP_TRAINING_PATTERN_MASK);
+			      dp_train_pat & train_pat_mask);
 
 	if (HAS_DDI(dev_priv)) {
 		uint32_t temp = I915_READ(DP_TP_CTL(port));
@@ -2880,7 +2911,7 @@ _intel_dp_set_link_train(struct intel_dp *intel_dp,
 			temp &= ~DP_TP_CTL_SCRAMBLE_DISABLE;
 
 		temp &= ~DP_TP_CTL_LINK_TRAIN_MASK;
-		switch (dp_train_pat & DP_TRAINING_PATTERN_MASK) {
+		switch (dp_train_pat & train_pat_mask) {
 		case DP_TRAINING_PATTERN_DISABLE:
 			temp |= DP_TP_CTL_LINK_TRAIN_NORMAL;
 
@@ -2894,6 +2925,9 @@ _intel_dp_set_link_train(struct intel_dp *intel_dp,
 		case DP_TRAINING_PATTERN_3:
 			temp |= DP_TP_CTL_LINK_TRAIN_PAT3;
 			break;
+		case DP_TRAINING_PATTERN_4:
+			temp |= DP_TP_CTL_LINK_TRAIN_PAT4;
+			break;
 		}
 		I915_WRITE(DP_TP_CTL(port), temp);
 
@@ -6344,7 +6378,7 @@ intel_dp_init_connector(struct intel_digital_port *intel_dig_port,
 	drm_connector_init(dev, connector, &intel_dp_connector_funcs, type);
 	drm_connector_helper_add(connector, &intel_dp_connector_helper_funcs);
 
-	if (!IS_VALLEYVIEW(dev_priv) && !IS_CHERRYVIEW(dev_priv))
+	if (!HAS_GMCH_DISPLAY(dev_priv))
 		connector->interlace_allowed = true;
 	connector->doublescan_allowed = 0;
 
@@ -6387,7 +6421,7 @@ intel_dp_init_connector(struct intel_digital_port *intel_dig_port,
 	 * 0xd.  Failure to do so will result in spurious interrupts being
 	 * generated on the port when a cable is not attached.
 	 */
-	if (IS_G4X(dev_priv) && !IS_GM45(dev_priv)) {
+	if (IS_G45(dev_priv)) {
 		u32 temp = I915_READ(PEG_BAND_GAP_DATA);
 		I915_WRITE(PEG_BAND_GAP_DATA, (temp & ~0xf) | 0xd);
 	}
@@ -6443,15 +6477,11 @@ bool intel_dp_init(struct drm_i915_private *dev_priv,
 		intel_encoder->enable = vlv_enable_dp;
 		intel_encoder->disable = vlv_disable_dp;
 		intel_encoder->post_disable = vlv_post_disable_dp;
-	} else if (INTEL_GEN(dev_priv) >= 5) {
-		intel_encoder->pre_enable = g4x_pre_enable_dp;
-		intel_encoder->enable = g4x_enable_dp;
-		intel_encoder->disable = ilk_disable_dp;
-		intel_encoder->post_disable = ilk_post_disable_dp;
 	} else {
 		intel_encoder->pre_enable = g4x_pre_enable_dp;
 		intel_encoder->enable = g4x_enable_dp;
 		intel_encoder->disable = g4x_disable_dp;
+		intel_encoder->post_disable = g4x_post_disable_dp;
 	}
 
 	intel_dig_port->dp.output_reg = output_reg;

+ 6 - 6
drivers/gpu/drm/i915/intel_dp_aux_backlight.c

@@ -26,7 +26,7 @@
 
 static void set_aux_backlight_enable(struct intel_dp *intel_dp, bool enable)
 {
-	uint8_t reg_val = 0;
+	u8 reg_val = 0;
 
 	/* Early return when the display uses another mechanism to enable backlight. */
 	if (!(intel_dp->edp_dpcd[1] & DP_EDP_BACKLIGHT_AUX_ENABLE_CAP))
@@ -54,11 +54,11 @@ static void set_aux_backlight_enable(struct intel_dp *intel_dp, bool enable)
  * Read the current backlight value from DPCD register(s) based
  * on if 8-bit(MSB) or 16-bit(MSB and LSB) values are supported
  */
-static uint32_t intel_dp_aux_get_backlight(struct intel_connector *connector)
+static u32 intel_dp_aux_get_backlight(struct intel_connector *connector)
 {
 	struct intel_dp *intel_dp = enc_to_intel_dp(&connector->encoder->base);
-	uint8_t read_val[2] = { 0x0 };
-	uint16_t level = 0;
+	u8 read_val[2] = { 0x0 };
+	u16 level = 0;
 
 	if (drm_dp_dpcd_read(&intel_dp->aux, DP_EDP_BACKLIGHT_BRIGHTNESS_MSB,
 			     &read_val, sizeof(read_val)) < 0) {
@@ -82,7 +82,7 @@ intel_dp_aux_set_backlight(const struct drm_connector_state *conn_state, u32 lev
 {
 	struct intel_connector *connector = to_intel_connector(conn_state->connector);
 	struct intel_dp *intel_dp = enc_to_intel_dp(&connector->encoder->base);
-	uint8_t vals[2] = { 0x0 };
+	u8 vals[2] = { 0x0 };
 
 	vals[0] = level;
 
@@ -178,7 +178,7 @@ static void intel_dp_aux_enable_backlight(const struct intel_crtc_state *crtc_st
 {
 	struct intel_connector *connector = to_intel_connector(conn_state->connector);
 	struct intel_dp *intel_dp = enc_to_intel_dp(&connector->encoder->base);
-	uint8_t dpcd_buf, new_dpcd_buf, edp_backlight_mode;
+	u8 dpcd_buf, new_dpcd_buf, edp_backlight_mode;
 
 	if (drm_dp_dpcd_readb(&intel_dp->aux,
 			DP_EDP_BACKLIGHT_MODE_SET_REGISTER, &dpcd_buf) != 1) {

+ 28 - 11
drivers/gpu/drm/i915/intel_dp_link_training.c

@@ -219,14 +219,30 @@ intel_dp_link_training_clock_recovery(struct intel_dp *intel_dp)
 }
 
 /*
- * Pick training pattern for channel equalization. Training Pattern 3 for HBR2
+ * Pick training pattern for channel equalization. Training pattern 4 for HBR3
+ * or for 1.4 devices that support it, training Pattern 3 for HBR2
  * or 1.2 devices that support it, Training Pattern 2 otherwise.
  */
 static u32 intel_dp_training_pattern(struct intel_dp *intel_dp)
 {
-	u32 training_pattern = DP_TRAINING_PATTERN_2;
-	bool source_tps3, sink_tps3;
+	bool source_tps3, sink_tps3, source_tps4, sink_tps4;
 
+	/*
+	 * Intel platforms that support HBR3 also support TPS4. It is mandatory
+	 * for all downstream devices that support HBR3. There are no known eDP
+	 * panels that support TPS4 as of Feb 2018 as per VESA eDP_v1.4b_E1
+	 * specification.
+	 */
+	source_tps4 = intel_dp_source_supports_hbr3(intel_dp);
+	sink_tps4 = drm_dp_tps4_supported(intel_dp->dpcd);
+	if (source_tps4 && sink_tps4) {
+		return DP_TRAINING_PATTERN_4;
+	} else if (intel_dp->link_rate == 810000) {
+		if (!source_tps4)
+			DRM_DEBUG_KMS("8.1 Gbps link rate without source HBR3/TPS4 support\n");
+		if (!sink_tps4)
+			DRM_DEBUG_KMS("8.1 Gbps link rate without sink TPS4 support\n");
+	}
 	/*
 	 * Intel platforms that support HBR2 also support TPS3. TPS3 support is
 	 * also mandatory for downstream devices that support HBR2. However, not
@@ -234,17 +250,16 @@ static u32 intel_dp_training_pattern(struct intel_dp *intel_dp)
 	 */
 	source_tps3 = intel_dp_source_supports_hbr2(intel_dp);
 	sink_tps3 = drm_dp_tps3_supported(intel_dp->dpcd);
-
 	if (source_tps3 && sink_tps3) {
-		training_pattern = DP_TRAINING_PATTERN_3;
-	} else if (intel_dp->link_rate == 540000) {
+		return DP_TRAINING_PATTERN_3;
+	} else if (intel_dp->link_rate >= 540000) {
 		if (!source_tps3)
-			DRM_DEBUG_KMS("5.4 Gbps link rate without source HBR2/TPS3 support\n");
+			DRM_DEBUG_KMS(">=5.4/6.48 Gbps link rate without source HBR2/TPS3 support\n");
 		if (!sink_tps3)
-			DRM_DEBUG_KMS("5.4 Gbps link rate without sink TPS3 support\n");
+			DRM_DEBUG_KMS(">=5.4/6.48 Gbps link rate without sink TPS3 support\n");
 	}
 
-	return training_pattern;
+	return DP_TRAINING_PATTERN_2;
 }
 
 static bool
@@ -256,11 +271,13 @@ intel_dp_link_training_channel_equalization(struct intel_dp *intel_dp)
 	bool channel_eq = false;
 
 	training_pattern = intel_dp_training_pattern(intel_dp);
+	/* Scrambling is disabled for TPS2/3 and enabled for TPS4 */
+	if (training_pattern != DP_TRAINING_PATTERN_4)
+		training_pattern |= DP_LINK_SCRAMBLING_DISABLE;
 
 	/* channel equalization */
 	if (!intel_dp_set_link_train(intel_dp,
-				     training_pattern |
-				     DP_LINK_SCRAMBLING_DISABLE)) {
+				     training_pattern)) {
 		DRM_ERROR("failed to start channel equalization\n");
 		return false;
 	}
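
Taken together, the two hunks above make the selection strictly descending: TPS4 when both source and sink support it (scrambling stays enabled), else TPS3, else TPS2, with scrambling disabled for the latter two. A standalone sketch using the DPCD training-pattern encodings from the DP specification:

#include <stdbool.h>
#include <stdio.h>

#define DP_TRAINING_PATTERN_2		2
#define DP_TRAINING_PATTERN_3		3
#define DP_TRAINING_PATTERN_4		7	/* DP 1.4 encoding */
#define DP_LINK_SCRAMBLING_DISABLE	0x20

/* Hypothetical reduction of intel_dp_training_pattern(). */
static unsigned int pick_eq_pattern(bool src_tps4, bool sink_tps4,
				    bool src_tps3, bool sink_tps3)
{
	unsigned int pattern;

	if (src_tps4 && sink_tps4)
		pattern = DP_TRAINING_PATTERN_4;
	else if (src_tps3 && sink_tps3)
		pattern = DP_TRAINING_PATTERN_3;
	else
		pattern = DP_TRAINING_PATTERN_2;

	/* Scrambling is disabled for TPS2/3 and enabled for TPS4. */
	if (pattern != DP_TRAINING_PATTERN_4)
		pattern |= DP_LINK_SCRAMBLING_DISABLE;

	return pattern;
}

int main(void)
{
	/* 0x7 (TPS4), 0x23 (TPS3 | scrambling disable) */
	printf("HBR3 link: 0x%x, HBR2 link: 0x%x\n",
	       pick_eq_pattern(true, true, true, true),
	       pick_eq_pattern(false, false, true, true));
	return 0;
}
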

+ 6 - 0
drivers/gpu/drm/i915/intel_dp_mst.c

@@ -48,6 +48,9 @@ static bool intel_dp_mst_compute_config(struct intel_encoder *encoder,
 	bool reduce_m_n = drm_dp_has_quirk(&intel_dp->desc,
 					   DP_DPCD_QUIRK_LIMITED_M_N);
 
+	if (adjusted_mode->flags & DRM_MODE_FLAG_DBLSCAN)
+		return false;
+
 	pipe_config->has_pch_encoder = false;
 	bpp = 24;
 	if (intel_dp->compliance.test_data.bpc) {
@@ -366,6 +369,9 @@ intel_dp_mst_mode_valid(struct drm_connector *connector,
 	if (!intel_dp)
 		return MODE_ERROR;
 
+	if (mode->flags & DRM_MODE_FLAG_DBLSCAN)
+		return MODE_NO_DBLESCAN;
+
 	max_link_clock = intel_dp_max_link_rate(intel_dp);
 	max_lanes = intel_dp_max_lane_count(intel_dp);
 

+ 16 - 4
drivers/gpu/drm/i915/intel_dpll_mgr.c

@@ -2857,10 +2857,17 @@ icl_get_dpll(struct intel_crtc *crtc, struct intel_crtc_state *crtc_state,
 	case PORT_D:
 	case PORT_E:
 	case PORT_F:
-		min = icl_port_to_mg_pll_id(port);
-		max = min;
-		ret = icl_calc_mg_pll_state(crtc_state, encoder, clock,
-					    &pll_state);
+		if (0 /* TODO: TBT PLLs */) {
+			min = DPLL_ID_ICL_TBTPLL;
+			max = min;
+			ret = icl_calc_dpll_state(crtc_state, encoder, clock,
+						  &pll_state);
+		} else {
+			min = icl_port_to_mg_pll_id(port);
+			max = min;
+			ret = icl_calc_mg_pll_state(crtc_state, encoder, clock,
+						    &pll_state);
+		}
 		break;
 	default:
 		MISSING_CASE(port);
@@ -2893,6 +2900,8 @@ static i915_reg_t icl_pll_id_to_enable_reg(enum intel_dpll_id id)
 	case DPLL_ID_ICL_DPLL0:
 	case DPLL_ID_ICL_DPLL1:
 		return CNL_DPLL_ENABLE(id);
+	case DPLL_ID_ICL_TBTPLL:
+		return TBT_PLL_ENABLE;
 	case DPLL_ID_ICL_MGPLL1:
 	case DPLL_ID_ICL_MGPLL2:
 	case DPLL_ID_ICL_MGPLL3:
@@ -2920,6 +2929,7 @@ static bool icl_pll_get_hw_state(struct drm_i915_private *dev_priv,
 	switch (id) {
 	case DPLL_ID_ICL_DPLL0:
 	case DPLL_ID_ICL_DPLL1:
+	case DPLL_ID_ICL_TBTPLL:
 		hw_state->cfgcr0 = I915_READ(ICL_DPLL_CFGCR0(id));
 		hw_state->cfgcr1 = I915_READ(ICL_DPLL_CFGCR1(id));
 		break;
@@ -3006,6 +3016,7 @@ static void icl_pll_enable(struct drm_i915_private *dev_priv,
 	switch (id) {
 	case DPLL_ID_ICL_DPLL0:
 	case DPLL_ID_ICL_DPLL1:
+	case DPLL_ID_ICL_TBTPLL:
 		icl_dpll_write(dev_priv, pll);
 		break;
 	case DPLL_ID_ICL_MGPLL1:
@@ -3104,6 +3115,7 @@ static const struct intel_shared_dpll_funcs icl_pll_funcs = {
 static const struct dpll_info icl_plls[] = {
 	{ "DPLL 0",   &icl_pll_funcs, DPLL_ID_ICL_DPLL0,  0 },
 	{ "DPLL 1",   &icl_pll_funcs, DPLL_ID_ICL_DPLL1,  0 },
+	{ "TBT PLL",  &icl_pll_funcs, DPLL_ID_ICL_TBTPLL, 0 },
 	{ "MG PLL 1", &icl_pll_funcs, DPLL_ID_ICL_MGPLL1, 0 },
 	{ "MG PLL 2", &icl_pll_funcs, DPLL_ID_ICL_MGPLL2, 0 },
 	{ "MG PLL 3", &icl_pll_funcs, DPLL_ID_ICL_MGPLL3, 0 },

+ 9 - 5
drivers/gpu/drm/i915/intel_dpll_mgr.h

@@ -113,24 +113,28 @@ enum intel_dpll_id {
 	 * @DPLL_ID_ICL_DPLL1: ICL combo PHY DPLL1
 	 */
 	DPLL_ID_ICL_DPLL1 = 1,
+	/**
+	 * @DPLL_ID_ICL_TBTPLL: ICL TBT PLL
+	 */
+	DPLL_ID_ICL_TBTPLL = 2,
 	/**
 	 * @DPLL_ID_ICL_MGPLL1: ICL MG PLL 1 port 1 (C)
 	 */
-	DPLL_ID_ICL_MGPLL1 = 2,
+	DPLL_ID_ICL_MGPLL1 = 3,
 	/**
 	 * @DPLL_ID_ICL_MGPLL2: ICL MG PLL 1 port 2 (D)
 	 */
-	DPLL_ID_ICL_MGPLL2 = 3,
+	DPLL_ID_ICL_MGPLL2 = 4,
 	/**
 	 * @DPLL_ID_ICL_MGPLL3: ICL MG PLL 1 port 3 (E)
 	 */
-	DPLL_ID_ICL_MGPLL3 = 4,
+	DPLL_ID_ICL_MGPLL3 = 5,
 	/**
 	 * @DPLL_ID_ICL_MGPLL4: ICL MG PLL 1 port 4 (F)
 	 */
-	DPLL_ID_ICL_MGPLL4 = 5,
+	DPLL_ID_ICL_MGPLL4 = 6,
 };
-#define I915_NUM_PLLS 6
+#define I915_NUM_PLLS 7
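
Inserting the TBT PLL at id 2 renumbers every MG PLL and bumps I915_NUM_PLLS to 7; the count must always be the last id plus one. A sketch of a compile-time guard that would catch a future drift (not part of this patch; kernel code would use the real BUILD_BUG_ON macro):

enum intel_dpll_id_sketch {
	SKETCH_DPLL0,
	SKETCH_DPLL1,
	SKETCH_TBTPLL,
	SKETCH_MGPLL1,
	SKETCH_MGPLL2,
	SKETCH_MGPLL3,
	SKETCH_MGPLL4,
	SKETCH_NUM_PLLS,	/* stays correct as ids are inserted */
};

/* Poor man's BUILD_BUG_ON: fails to compile if the count drifts. */
typedef char assert_num_plls[(SKETCH_NUM_PLLS == 7) ? 1 : -1];

int main(void) { return 0; }
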
 
 struct intel_dpll_hw_state {
 	/* i9xx, pch plls */

+ 1 - 6
drivers/gpu/drm/i915/intel_drv.h

@@ -158,12 +158,6 @@
 #define MAX_OUTPUTS 6
 /* maximum connectors per crtcs in the mode set */
 
-/* Maximum cursor sizes */
-#define GEN2_CURSOR_WIDTH 64
-#define GEN2_CURSOR_HEIGHT 64
-#define MAX_CURSOR_WIDTH 256
-#define MAX_CURSOR_HEIGHT 256
-
 #define INTEL_I2C_BUS_DVO 1
 #define INTEL_I2C_BUS_SDVO 2
 
@@ -1716,6 +1710,7 @@ intel_dp_pre_emphasis_max(struct intel_dp *intel_dp, uint8_t voltage_swing);
 void intel_dp_compute_rate(struct intel_dp *intel_dp, int port_clock,
 			   uint8_t *link_bw, uint8_t *rate_select);
 bool intel_dp_source_supports_hbr2(struct intel_dp *intel_dp);
+bool intel_dp_source_supports_hbr3(struct intel_dp *intel_dp);
 bool
 intel_dp_get_link_status(struct intel_dp *intel_dp, uint8_t link_status[DP_LINK_STATUS_SIZE]);
 

+ 6 - 0
drivers/gpu/drm/i915/intel_dsi.c

@@ -326,6 +326,9 @@ static bool intel_dsi_compute_config(struct intel_encoder *encoder,
 						conn_state->scaling_mode);
 	}
 
+	if (adjusted_mode->flags & DRM_MODE_FLAG_DBLSCAN)
+		return false;
+
 	/* DSI uses short packets for sync events, so clear mode flags for DSI */
 	adjusted_mode->flags = 0;
 
@@ -1266,6 +1269,9 @@ intel_dsi_mode_valid(struct drm_connector *connector,
 
 	DRM_DEBUG_KMS("\n");
 
+	if (mode->flags & DRM_MODE_FLAG_DBLSCAN)
+		return MODE_NO_DBLESCAN;
+
 	if (fixed_mode) {
 		if (mode->hdisplay > fixed_mode->hdisplay)
 			return MODE_PANEL;

+ 7 - 1
drivers/gpu/drm/i915/intel_dvo.c

@@ -215,6 +215,9 @@ intel_dvo_mode_valid(struct drm_connector *connector,
 	int max_dotclk = to_i915(connector->dev)->max_dotclk_freq;
 	int target_clock = mode->clock;
 
+	if (mode->flags & DRM_MODE_FLAG_DBLSCAN)
+		return MODE_NO_DBLESCAN;
+
 	/* XXX: Validate clock range */
 
 	if (fixed_mode) {
@@ -250,6 +253,9 @@ static bool intel_dvo_compute_config(struct intel_encoder *encoder,
 	if (fixed_mode)
 		intel_fixed_panel_mode(fixed_mode, adjusted_mode);
 
+	if (adjusted_mode->flags & DRM_MODE_FLAG_DBLSCAN)
+		return false;
+
 	return true;
 }
 
@@ -432,7 +438,7 @@ void intel_dvo_init(struct drm_i915_private *dev_priv)
 		int gpio;
 		bool dvoinit;
 		enum pipe pipe;
-		uint32_t dpll[I915_MAX_PIPES];
+		u32 dpll[I915_MAX_PIPES];
 		enum port port;
 
 		/*

+ 64 - 11
drivers/gpu/drm/i915/intel_engine_cs.c

@@ -499,7 +499,8 @@ void intel_engine_setup_common(struct intel_engine_cs *engine)
 	intel_engine_init_cmd_parser(engine);
 }
 
-int intel_engine_create_scratch(struct intel_engine_cs *engine, int size)
+int intel_engine_create_scratch(struct intel_engine_cs *engine,
+				unsigned int size)
 {
 	struct drm_i915_gem_object *obj;
 	struct i915_vma *vma;
@@ -533,7 +534,7 @@ err_unref:
 	return ret;
 }
 
-static void intel_engine_cleanup_scratch(struct intel_engine_cs *engine)
+void intel_engine_cleanup_scratch(struct intel_engine_cs *engine)
 {
 	i915_vma_unpin_and_release(&engine->scratch);
 }
@@ -1076,6 +1077,28 @@ void intel_engines_reset_default_submission(struct drm_i915_private *i915)
 		engine->set_default_submission(engine);
 }
 
+/**
+ * intel_engines_sanitize: called after the GPU has lost power
+ * @i915: the i915 device
+ *
+ * Anytime we reset the GPU, either with an explicit GPU reset or through a
+ * PCI power cycle, the GPU loses state and we must reset our state tracking
+ * to match. Note that calling intel_engines_sanitize() if the GPU has not
+ * been reset results in much confusion!
+ */
+void intel_engines_sanitize(struct drm_i915_private *i915)
+{
+	struct intel_engine_cs *engine;
+	enum intel_engine_id id;
+
+	GEM_TRACE("\n");
+
+	for_each_engine(engine, i915, id) {
+		if (engine->reset.reset)
+			engine->reset.reset(engine, NULL);
+	}
+}
+
 /**
  * intel_engines_park: called when the GT is transitioning from busy->idle
  * @i915: the i915 device
@@ -1168,9 +1191,6 @@ void intel_engine_lost_context(struct intel_engine_cs *engine)
 
 	lockdep_assert_held(&engine->i915->drm.struct_mutex);
 
-	engine->legacy_active_context = NULL;
-	engine->legacy_active_ppgtt = NULL;
-
 	ce = fetch_and_zero(&engine->last_retired_context);
 	if (ce)
 		intel_context_unpin(ce);
@@ -1260,7 +1280,7 @@ static void hexdump(struct drm_printer *m, const void *buf, size_t len)
 						rowsize, sizeof(u32),
 						line, sizeof(line),
 						false) >= sizeof(line));
-		drm_printf(m, "%08zx %s\n", pos, line);
+		drm_printf(m, "[%04zx] %s\n", pos, line);
 
 		prev = buf + pos;
 		skip = false;
@@ -1275,6 +1295,8 @@ static void intel_engine_print_registers(const struct intel_engine_cs *engine,
 		&engine->execlists;
 	u64 addr;
 
+	if (engine->id == RCS && IS_GEN(dev_priv, 4, 7))
+		drm_printf(m, "\tCCID: 0x%08x\n", I915_READ(CCID));
 	drm_printf(m, "\tRING_START: 0x%08x\n",
 		   I915_READ(RING_START(engine->mmio_base)));
 	drm_printf(m, "\tRING_HEAD:  0x%08x\n",
@@ -1396,6 +1418,39 @@ static void intel_engine_print_registers(const struct intel_engine_cs *engine,
 	}
 }
 
+static void print_request_ring(struct drm_printer *m, struct i915_request *rq)
+{
+	void *ring;
+	int size;
+
+	drm_printf(m,
+		   "[head %04x, postfix %04x, tail %04x, batch 0x%08x_%08x]:\n",
+		   rq->head, rq->postfix, rq->tail,
+		   rq->batch ? upper_32_bits(rq->batch->node.start) : ~0u,
+		   rq->batch ? lower_32_bits(rq->batch->node.start) : ~0u);
+
+	size = rq->tail - rq->head;
+	if (rq->tail < rq->head)
+		size += rq->ring->size;
+
+	ring = kmalloc(size, GFP_ATOMIC);
+	if (ring) {
+		const void *vaddr = rq->ring->vaddr;
+		unsigned int head = rq->head;
+		unsigned int len = 0;
+
+		if (rq->tail < head) {
+			len = rq->ring->size - head;
+			memcpy(ring, vaddr + head, len);
+			head = 0;
+		}
+		memcpy(ring + len, vaddr + head, size - len);
+
+		hexdump(m, ring, size);
+		kfree(ring);
+	}
+}
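
print_request_ring() above copies the request's payload out of the ring before hexdumping it; because the tail may have wrapped past the end of the buffer, the copy is done in up to two pieces. The same two-memcpy pattern in isolation:

#include <stdio.h>
#include <string.h>

/* Copy [head, tail) out of a ring buffer, handling wrap-around. */
static int ring_snapshot(const char *ring, int ring_size,
			 int head, int tail, char *out)
{
	int size = tail - head;
	int len = 0;

	if (tail < head) {	/* wrapped: copy the top piece first */
		size += ring_size;
		len = ring_size - head;
		memcpy(out, ring + head, len);
		head = 0;
	}
	memcpy(out + len, ring + head, size - len);

	return size;
}

int main(void)
{
	/* Logical payload "ABCDEF" starts at offset 4 and wraps at 8. */
	const char ring[8] = { 'E', 'F', 0, 0, 'A', 'B', 'C', 'D' };
	char out[8];
	int n = ring_snapshot(ring, 8, 4, 2, out);

	printf("%.*s\n", n, out);	/* ABCDEF */
	return 0;
}
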
+
 void intel_engine_dump(struct intel_engine_cs *engine,
 		       struct drm_printer *m,
 		       const char *header, ...)
@@ -1446,11 +1501,7 @@ void intel_engine_dump(struct intel_engine_cs *engine,
 	rq = i915_gem_find_active_request(engine);
 	if (rq) {
 		print_request(m, rq, "\t\tactive ");
-		drm_printf(m,
-			   "\t\t[head %04x, postfix %04x, tail %04x, batch 0x%08x_%08x]\n",
-			   rq->head, rq->postfix, rq->tail,
-			   rq->batch ? upper_32_bits(rq->batch->node.start) : ~0u,
-			   rq->batch ? lower_32_bits(rq->batch->node.start) : ~0u);
+
 		drm_printf(m, "\t\tring->start:  0x%08x\n",
 			   i915_ggtt_offset(rq->ring->vma));
 		drm_printf(m, "\t\tring->head:   0x%08x\n",
@@ -1461,6 +1512,8 @@ void intel_engine_dump(struct intel_engine_cs *engine,
 			   rq->ring->emit);
 		drm_printf(m, "\t\tring->space:  0x%08x\n",
 			   rq->ring->space);
+
+		print_request_ring(m, rq);
 	}
 
 	rcu_read_unlock();

+ 85 - 30
drivers/gpu/drm/i915/intel_guc.c

@@ -203,13 +203,11 @@ void intel_guc_fini(struct intel_guc *guc)
 	guc_shared_data_destroy(guc);
 }
 
-static u32 get_log_control_flags(void)
+static u32 guc_ctl_debug_flags(struct intel_guc *guc)
 {
-	u32 level = i915_modparams.guc_log_level;
+	u32 level = intel_guc_log_get_level(&guc->log);
 	u32 flags = 0;
 
-	GEM_BUG_ON(level < 0);
-
 	if (!GUC_LOG_LEVEL_IS_ENABLED(level))
 		flags |= GUC_LOG_DEFAULT_DISABLED;
 
@@ -219,6 +217,85 @@ static u32 get_log_control_flags(void)
 		flags |= GUC_LOG_LEVEL_TO_VERBOSITY(level) <<
 			 GUC_LOG_VERBOSITY_SHIFT;
 
+	if (USES_GUC_SUBMISSION(guc_to_i915(guc))) {
+		u32 ads = intel_guc_ggtt_offset(guc, guc->ads_vma)
+			>> PAGE_SHIFT;
+
+		flags |= ads << GUC_ADS_ADDR_SHIFT | GUC_ADS_ENABLED;
+	}
+
+	return flags;
+}
+
+static u32 guc_ctl_feature_flags(struct intel_guc *guc)
+{
+	u32 flags = 0;
+
+	flags |=  GUC_CTL_VCS2_ENABLED;
+
+	if (USES_GUC_SUBMISSION(guc_to_i915(guc)))
+		flags |= GUC_CTL_KERNEL_SUBMISSIONS;
+	else
+		flags |= GUC_CTL_DISABLE_SCHEDULER;
+
+	return flags;
+}
+
+static u32 guc_ctl_ctxinfo_flags(struct intel_guc *guc)
+{
+	u32 flags = 0;
+
+	if (USES_GUC_SUBMISSION(guc_to_i915(guc))) {
+		u32 ctxnum, base;
+
+		base = intel_guc_ggtt_offset(guc, guc->stage_desc_pool);
+		ctxnum = GUC_MAX_STAGE_DESCRIPTORS / 16;
+
+		base >>= PAGE_SHIFT;
+		flags |= (base << GUC_CTL_BASE_ADDR_SHIFT) |
+			(ctxnum << GUC_CTL_CTXNUM_IN16_SHIFT);
+	}
+	return flags;
+}
+
+static u32 guc_ctl_log_params_flags(struct intel_guc *guc)
+{
+	u32 offset = intel_guc_ggtt_offset(guc, guc->log.vma) >> PAGE_SHIFT;
+	u32 flags;
+
+	#if (((CRASH_BUFFER_SIZE) % SZ_1M) == 0)
+	#define UNIT SZ_1M
+	#define FLAG GUC_LOG_ALLOC_IN_MEGABYTE
+	#else
+	#define UNIT SZ_4K
+	#define FLAG 0
+	#endif
+
+	BUILD_BUG_ON(!CRASH_BUFFER_SIZE);
+	BUILD_BUG_ON(!IS_ALIGNED(CRASH_BUFFER_SIZE, UNIT));
+	BUILD_BUG_ON(!DPC_BUFFER_SIZE);
+	BUILD_BUG_ON(!IS_ALIGNED(DPC_BUFFER_SIZE, UNIT));
+	BUILD_BUG_ON(!ISR_BUFFER_SIZE);
+	BUILD_BUG_ON(!IS_ALIGNED(ISR_BUFFER_SIZE, UNIT));
+
+	BUILD_BUG_ON((CRASH_BUFFER_SIZE / UNIT - 1) >
+			(GUC_LOG_CRASH_MASK >> GUC_LOG_CRASH_SHIFT));
+	BUILD_BUG_ON((DPC_BUFFER_SIZE / UNIT - 1) >
+			(GUC_LOG_DPC_MASK >> GUC_LOG_DPC_SHIFT));
+	BUILD_BUG_ON((ISR_BUFFER_SIZE / UNIT - 1) >
+			(GUC_LOG_ISR_MASK >> GUC_LOG_ISR_SHIFT));
+
+	flags = GUC_LOG_VALID |
+		GUC_LOG_NOTIFY_ON_HALF_FULL |
+		FLAG |
+		((CRASH_BUFFER_SIZE / UNIT - 1) << GUC_LOG_CRASH_SHIFT) |
+		((DPC_BUFFER_SIZE / UNIT - 1) << GUC_LOG_DPC_SHIFT) |
+		((ISR_BUFFER_SIZE / UNIT - 1) << GUC_LOG_ISR_SHIFT) |
+		(offset << GUC_LOG_BUF_ADDR_SHIFT);
+
+	#undef UNIT
+	#undef FLAG
+
 	return flags;
 }
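
Each buffer size is encoded as a unit count minus one in a narrow field, with the unit (4K pages or 1M blocks) picked at compile time by whether CRASH_BUFFER_SIZE is 1M-aligned; the BUILD_BUG_ONs reject sizes that are zero, unaligned, or too big for their field. A worked sketch packing the default non-debug sizes with the 4K unit (the valid/notify/address bits are left out for brevity):

#include <stdint.h>
#include <stdio.h>

#define SZ_4K			0x1000u
#define GUC_LOG_CRASH_SHIFT	4
#define GUC_LOG_DPC_SHIFT	6
#define GUC_LOG_ISR_SHIFT	9

int main(void)
{
	/* Default sizes from intel_guc_log.h: 8K crash, 32K DPC, 32K ISR. */
	uint32_t crash = 8 * 1024, dpc = 32 * 1024, isr = 32 * 1024;
	uint32_t flags;

	/* "Units minus one" encoding, 4K units since nothing is 1M-aligned. */
	flags = ((crash / SZ_4K - 1) << GUC_LOG_CRASH_SHIFT) |
		((dpc / SZ_4K - 1) << GUC_LOG_DPC_SHIFT) |
		((isr / SZ_4K - 1) << GUC_LOG_ISR_SHIFT);

	printf("flags = 0x%x\n", flags);	/* 0xfd0 */
	return 0;
}
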
 
@@ -245,32 +322,10 @@ void intel_guc_init_params(struct intel_guc *guc)
 
 	params[GUC_CTL_WA] |= GUC_CTL_WA_UK_BY_DRIVER;
 
-	params[GUC_CTL_FEATURE] |= GUC_CTL_DISABLE_SCHEDULER |
-			GUC_CTL_VCS2_ENABLED;
-
-	params[GUC_CTL_LOG_PARAMS] = guc->log.flags;
-
-	params[GUC_CTL_DEBUG] = get_log_control_flags();
-
-	/* If GuC submission is enabled, set up additional parameters here */
-	if (USES_GUC_SUBMISSION(dev_priv)) {
-		u32 ads = intel_guc_ggtt_offset(guc,
-						guc->ads_vma) >> PAGE_SHIFT;
-		u32 pgs = intel_guc_ggtt_offset(guc, guc->stage_desc_pool);
-		u32 ctx_in_16 = GUC_MAX_STAGE_DESCRIPTORS / 16;
-
-		params[GUC_CTL_DEBUG] |= ads << GUC_ADS_ADDR_SHIFT;
-		params[GUC_CTL_DEBUG] |= GUC_ADS_ENABLED;
-
-		pgs >>= PAGE_SHIFT;
-		params[GUC_CTL_CTXINFO] = (pgs << GUC_CTL_BASE_ADDR_SHIFT) |
-			(ctx_in_16 << GUC_CTL_CTXNUM_IN16_SHIFT);
-
-		params[GUC_CTL_FEATURE] |= GUC_CTL_KERNEL_SUBMISSIONS;
-
-		/* Unmask this bit to enable the GuC's internal scheduler */
-		params[GUC_CTL_FEATURE] &= ~GUC_CTL_DISABLE_SCHEDULER;
-	}
+	params[GUC_CTL_FEATURE] = guc_ctl_feature_flags(guc);
+	params[GUC_CTL_LOG_PARAMS]  = guc_ctl_log_params_flags(guc);
+	params[GUC_CTL_DEBUG] = guc_ctl_debug_flags(guc);
+	params[GUC_CTL_CTXINFO] = guc_ctl_ctxinfo_flags(guc);
 
 	/*
 	 * All SOFT_SCRATCH registers are in FORCEWAKE_BLITTER domain and

+ 3 - 17
drivers/gpu/drm/i915/intel_guc_fwif.h

@@ -84,12 +84,12 @@
 #define   GUC_LOG_VALID			(1 << 0)
 #define   GUC_LOG_NOTIFY_ON_HALF_FULL	(1 << 1)
 #define   GUC_LOG_ALLOC_IN_MEGABYTE	(1 << 3)
-#define   GUC_LOG_CRASH_PAGES		1
 #define   GUC_LOG_CRASH_SHIFT		4
-#define   GUC_LOG_DPC_PAGES		7
+#define   GUC_LOG_CRASH_MASK		(0x1 << GUC_LOG_CRASH_SHIFT)
 #define   GUC_LOG_DPC_SHIFT		6
-#define   GUC_LOG_ISR_PAGES		7
+#define   GUC_LOG_DPC_MASK		(0x7 << GUC_LOG_DPC_SHIFT)
 #define   GUC_LOG_ISR_SHIFT		9
+#define   GUC_LOG_ISR_MASK		(0x7 << GUC_LOG_ISR_SHIFT)
 #define   GUC_LOG_BUF_ADDR_SHIFT	12
 
 #define GUC_CTL_PAGE_FAULT_CONTROL	5
@@ -532,20 +532,6 @@ enum guc_log_buffer_type {
 };
 
 /**
- * DOC: GuC Log buffer Layout
- *
- * Page0  +-------------------------------+
- *        |   ISR state header (32 bytes) |
- *        |      DPC state header         |
- *        |   Crash dump state header     |
- * Page1  +-------------------------------+
- *        |           ISR logs            |
- * Page9  +-------------------------------+
- *        |           DPC logs            |
- * Page17 +-------------------------------+
- *        |         Crash Dump logs       |
- *        +-------------------------------+
- *
  * The state structure below is used to coordinate retrieval of GuC firmware
  * logs. Separate state is maintained for each log buffer type.
  * read_ptr points to the location where i915 read last in log buffer and

+ 37 - 33
drivers/gpu/drm/i915/intel_guc_log.c

@@ -215,11 +215,11 @@ static unsigned int guc_get_log_buffer_size(enum guc_log_buffer_type type)
 {
 	switch (type) {
 	case GUC_ISR_LOG_BUFFER:
-		return (GUC_LOG_ISR_PAGES + 1) * PAGE_SIZE;
+		return ISR_BUFFER_SIZE;
 	case GUC_DPC_LOG_BUFFER:
-		return (GUC_LOG_DPC_PAGES + 1) * PAGE_SIZE;
+		return DPC_BUFFER_SIZE;
 	case GUC_CRASH_DUMP_LOG_BUFFER:
-		return (GUC_LOG_CRASH_PAGES + 1) * PAGE_SIZE;
+		return CRASH_BUFFER_SIZE;
 	default:
 		MISSING_CASE(type);
 	}
@@ -397,7 +397,7 @@ static int guc_log_relay_create(struct intel_guc_log *log)
 	lockdep_assert_held(&log->relay.lock);
 
 	/* Keep the size of sub-buffers the same as the shared log buffer */
-	subbuf_size = GUC_LOG_SIZE;
+	subbuf_size = log->vma->size;
 
 	/*
 	 * Store up to 8 snapshots, which is large enough to buffer sufficient
@@ -452,13 +452,34 @@ int intel_guc_log_create(struct intel_guc_log *log)
 {
 	struct intel_guc *guc = log_to_guc(log);
 	struct i915_vma *vma;
-	unsigned long offset;
-	u32 flags;
+	u32 guc_log_size;
 	int ret;
 
 	GEM_BUG_ON(log->vma);
 
-	vma = intel_guc_allocate_vma(guc, GUC_LOG_SIZE);
+	/*
+	 *  GuC Log buffer Layout
+	 *
+	 *  +===============================+ 00B
+	 *  |    Crash dump state header    |
+	 *  +-------------------------------+ 32B
+	 *  |       DPC state header        |
+	 *  +-------------------------------+ 64B
+	 *  |       ISR state header        |
+	 *  +-------------------------------+ 96B
+	 *  |                               |
+	 *  +===============================+ PAGE_SIZE (4KB)
+	 *  |        Crash Dump logs        |
+	 *  +===============================+ + CRASH_SIZE
+	 *  |           DPC logs            |
+	 *  +===============================+ + DPC_SIZE
+	 *  |           ISR logs            |
+	 *  +===============================+ + ISR_SIZE
+	 */
+	guc_log_size = PAGE_SIZE + CRASH_BUFFER_SIZE + DPC_BUFFER_SIZE +
+			ISR_BUFFER_SIZE;
+
+	vma = intel_guc_allocate_vma(guc, guc_log_size);
 	if (IS_ERR(vma)) {
 		ret = PTR_ERR(vma);
 		goto err;
@@ -466,20 +487,12 @@ int intel_guc_log_create(struct intel_guc_log *log)
 
 	log->vma = vma;
 
-	/* each allocated unit is a page */
-	flags = GUC_LOG_VALID | GUC_LOG_NOTIFY_ON_HALF_FULL |
-		(GUC_LOG_DPC_PAGES << GUC_LOG_DPC_SHIFT) |
-		(GUC_LOG_ISR_PAGES << GUC_LOG_ISR_SHIFT) |
-		(GUC_LOG_CRASH_PAGES << GUC_LOG_CRASH_SHIFT);
-
-	offset = intel_guc_ggtt_offset(guc, vma) >> PAGE_SHIFT;
-	log->flags = (offset << GUC_LOG_BUF_ADDR_SHIFT) | flags;
+	log->level = i915_modparams.guc_log_level;
 
 	return 0;
 
 err:
-	/* logging will be off */
-	i915_modparams.guc_log_level = 0;
+	DRM_ERROR("Failed to allocate GuC log buffer. %d\n", ret);
 	return ret;
 }
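
Per the layout comment above, the allocation is one header page followed by the three capture regions, so the default configuration needs 4K + 8K + 32K + 32K and the CONFIG_DRM_I915_DEBUG_GUC configuration 4K + 2M + 8M + 8M. A quick check of both sums:

#include <stdio.h>

#define SZ_1K	1024u
#define SZ_1M	(1024u * 1024u)

int main(void)
{
	unsigned int page = 4 * SZ_1K;
	unsigned int def = page + 8 * SZ_1K + 32 * SZ_1K + 32 * SZ_1K;
	unsigned int dbg = page + 2 * SZ_1M + 8 * SZ_1M + 8 * SZ_1M;

	printf("default: %u KiB, debug: %u KiB\n",
	       def / SZ_1K, dbg / SZ_1K);	/* 76 KiB, 18436 KiB */
	return 0;
}
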
 
@@ -488,15 +501,7 @@ void intel_guc_log_destroy(struct intel_guc_log *log)
 	i915_vma_unpin_and_release(&log->vma);
 }
 
-int intel_guc_log_level_get(struct intel_guc_log *log)
-{
-	GEM_BUG_ON(!log->vma);
-	GEM_BUG_ON(i915_modparams.guc_log_level < 0);
-
-	return i915_modparams.guc_log_level;
-}
-
-int intel_guc_log_level_set(struct intel_guc_log *log, u64 val)
+int intel_guc_log_set_level(struct intel_guc_log *log, u32 level)
 {
 	struct intel_guc *guc = log_to_guc(log);
 	struct drm_i915_private *dev_priv = guc_to_i915(guc);
@@ -504,33 +509,32 @@ int intel_guc_log_level_set(struct intel_guc_log *log, u64 val)
 
 	BUILD_BUG_ON(GUC_LOG_VERBOSITY_MIN != 0);
 	GEM_BUG_ON(!log->vma);
-	GEM_BUG_ON(i915_modparams.guc_log_level < 0);
 
 	/*
 	 * GuC recognizes log levels from 0 to max; we use 0 to indicate
 	 * that logging should be disabled.
 	 */
-	if (val < GUC_LOG_LEVEL_DISABLED || val > GUC_LOG_LEVEL_MAX)
+	if (level < GUC_LOG_LEVEL_DISABLED || level > GUC_LOG_LEVEL_MAX)
 		return -EINVAL;
 
 	mutex_lock(&dev_priv->drm.struct_mutex);
 
-	if (i915_modparams.guc_log_level == val) {
+	if (log->level == level) {
 		ret = 0;
 		goto out_unlock;
 	}
 
 	intel_runtime_pm_get(dev_priv);
-	ret = guc_action_control_log(guc, GUC_LOG_LEVEL_IS_VERBOSE(val),
-				     GUC_LOG_LEVEL_IS_ENABLED(val),
-				     GUC_LOG_LEVEL_TO_VERBOSITY(val));
+	ret = guc_action_control_log(guc, GUC_LOG_LEVEL_IS_VERBOSE(level),
+				     GUC_LOG_LEVEL_IS_ENABLED(level),
+				     GUC_LOG_LEVEL_TO_VERBOSITY(level));
 	intel_runtime_pm_put(dev_priv);
 	if (ret) {
 		DRM_DEBUG_DRIVER("guc_log_control action failed %d\n", ret);
 		goto out_unlock;
 	}
 
-	i915_modparams.guc_log_level = val;
+	log->level = level;
 
 out_unlock:
 	mutex_unlock(&dev_priv->drm.struct_mutex);

+ 17 - 9
drivers/gpu/drm/i915/intel_guc_log.h

@@ -30,15 +30,19 @@
 #include <linux/workqueue.h>
 
 #include "intel_guc_fwif.h"
+#include "i915_gem.h"
 
 struct intel_guc;
 
-/*
- * The first page is to save log buffer state. Allocate one
- * extra page for others in case for overlap
- */
-#define GUC_LOG_SIZE	((1 + GUC_LOG_DPC_PAGES + 1 + GUC_LOG_ISR_PAGES + \
-			  1 + GUC_LOG_CRASH_PAGES + 1) << PAGE_SHIFT)
+#ifdef CONFIG_DRM_I915_DEBUG_GUC
+#define CRASH_BUFFER_SIZE	SZ_2M
+#define DPC_BUFFER_SIZE		SZ_8M
+#define ISR_BUFFER_SIZE		SZ_8M
+#else
+#define CRASH_BUFFER_SIZE	SZ_8K
+#define DPC_BUFFER_SIZE		SZ_32K
+#define ISR_BUFFER_SIZE		SZ_32K
+#endif
 
 /*
  * While we're using plain log level in i915, GuC controls are much more...
@@ -58,7 +62,7 @@ struct intel_guc;
 #define GUC_LOG_LEVEL_MAX GUC_VERBOSITY_TO_LOG_LEVEL(GUC_LOG_VERBOSITY_MAX)
 
 struct intel_guc_log {
-	u32 flags;
+	u32 level;
 	struct i915_vma *vma;
 	struct {
 		void *buf_addr;
@@ -80,8 +84,7 @@ void intel_guc_log_init_early(struct intel_guc_log *log);
 int intel_guc_log_create(struct intel_guc_log *log);
 void intel_guc_log_destroy(struct intel_guc_log *log);
 
-int intel_guc_log_level_get(struct intel_guc_log *log);
-int intel_guc_log_level_set(struct intel_guc_log *log, u64 control_val);
+int intel_guc_log_set_level(struct intel_guc_log *log, u32 level);
 bool intel_guc_log_relay_enabled(const struct intel_guc_log *log);
 int intel_guc_log_relay_open(struct intel_guc_log *log);
 void intel_guc_log_relay_flush(struct intel_guc_log *log);
@@ -89,4 +92,9 @@ void intel_guc_log_relay_close(struct intel_guc_log *log);
 
 void intel_guc_log_handle_flush_event(struct intel_guc_log *log);
 
+static inline u32 intel_guc_log_get_level(struct intel_guc_log *log)
+{
+	return log->level;
+}
+
 #endif

+ 2 - 0
drivers/gpu/drm/i915/intel_gvt.c

@@ -47,6 +47,8 @@ static bool is_supported_device(struct drm_i915_private *dev_priv)
 		return true;
 	if (IS_KABYLAKE(dev_priv))
 		return true;
+	if (IS_BROXTON(dev_priv))
+		return true;
 	return false;
 }
 

+ 16 - 1
drivers/gpu/drm/i915/intel_hangcheck.c

@@ -294,6 +294,7 @@ static void hangcheck_store_sample(struct intel_engine_cs *engine,
 	engine->hangcheck.seqno = hc->seqno;
 	engine->hangcheck.action = hc->action;
 	engine->hangcheck.stalled = hc->stalled;
+	engine->hangcheck.wedged = hc->wedged;
 }
 
 static enum intel_engine_hangcheck_action
@@ -368,6 +369,9 @@ static void hangcheck_accumulate_sample(struct intel_engine_cs *engine,
 
 	hc->stalled = time_after(jiffies,
 				 engine->hangcheck.action_timestamp + timeout);
+	hc->wedged = time_after(jiffies,
+				 engine->hangcheck.action_timestamp +
+				 I915_ENGINE_WEDGED_TIMEOUT);
 }
 
 static void hangcheck_declare_hang(struct drm_i915_private *i915,
@@ -409,7 +413,7 @@ static void i915_hangcheck_elapsed(struct work_struct *work)
 			     gpu_error.hangcheck_work.work);
 	struct intel_engine_cs *engine;
 	enum intel_engine_id id;
-	unsigned int hung = 0, stuck = 0;
+	unsigned int hung = 0, stuck = 0, wedged = 0;
 
 	if (!i915_modparams.enable_hangcheck)
 		return;
@@ -440,6 +444,17 @@ static void i915_hangcheck_elapsed(struct work_struct *work)
 			if (hc.action != ENGINE_DEAD)
 				stuck |= intel_engine_flag(engine);
 		}
+
+		if (engine->hangcheck.wedged)
+			wedged |= intel_engine_flag(engine);
+	}
+
+	if (wedged) {
+		dev_err(dev_priv->drm.dev,
+			"GPU recovery timed out,"
+			" cancelling all in-flight rendering.\n");
+		GEM_TRACE_DUMP();
+		i915_gem_set_wedged(dev_priv);
 	}
 
 	if (hung)
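
An engine is now classified against two thresholds from the same progress timestamp: past the hangcheck timeout it is merely stalled, and past the much longer I915_ENGINE_WEDGED_TIMEOUT the recovery attempt itself is deemed failed and the whole GPU is wedged. A sketch of that two-threshold classification with plain millisecond timestamps (the kernel uses jiffies and time_after() to stay safe across counter wrap; the timeout values below are illustrative):

#include <stdbool.h>
#include <stdio.h>

#define HANGCHECK_TIMEOUT_MS	 6000	/* illustrative */
#define WEDGED_TIMEOUT_MS	30000	/* illustrative */

struct engine_state {
	unsigned long action_timestamp_ms;	/* last observed progress */
	bool stalled;
	bool wedged;
};

static void classify(struct engine_state *e, unsigned long now_ms)
{
	e->stalled = now_ms > e->action_timestamp_ms + HANGCHECK_TIMEOUT_MS;
	e->wedged = now_ms > e->action_timestamp_ms + WEDGED_TIMEOUT_MS;
}

int main(void)
{
	struct engine_state e = { .action_timestamp_ms = 0 };

	classify(&e, 10000);
	printf("t=10s: stalled=%d wedged=%d\n", e.stalled, e.wedged);
	classify(&e, 40000);
	printf("t=40s: stalled=%d wedged=%d\n", e.stalled, e.wedged);
	return 0;
}
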

+ 74 - 30
drivers/gpu/drm/i915/intel_hdmi.c

@@ -51,7 +51,7 @@ assert_hdmi_port_disabled(struct intel_hdmi *intel_hdmi)
 {
 	struct drm_device *dev = intel_hdmi_to_dev(intel_hdmi);
 	struct drm_i915_private *dev_priv = to_i915(dev);
-	uint32_t enabled_bits;
+	u32 enabled_bits;
 
 	enabled_bits = HAS_DDI(dev_priv) ? DDI_BUF_CTL_ENABLE : SDVO_ENABLE;
 
@@ -59,6 +59,15 @@ assert_hdmi_port_disabled(struct intel_hdmi *intel_hdmi)
 	     "HDMI port enabled, expecting disabled\n");
 }
 
+static void
+assert_hdmi_transcoder_func_disabled(struct drm_i915_private *dev_priv,
+				     enum transcoder cpu_transcoder)
+{
+	WARN(I915_READ(TRANS_DDI_FUNC_CTL(cpu_transcoder)) &
+	     TRANS_DDI_FUNC_ENABLE,
+	     "HDMI transcoder function enabled, expecting disabled\n");
+}
+
 struct intel_hdmi *enc_to_intel_hdmi(struct drm_encoder *encoder)
 {
 	struct intel_digital_port *intel_dig_port =
@@ -144,7 +153,7 @@ static void g4x_write_infoframe(struct drm_encoder *encoder,
 				unsigned int type,
 				const void *frame, ssize_t len)
 {
-	const uint32_t *data = frame;
+	const u32 *data = frame;
 	struct drm_device *dev = encoder->dev;
 	struct drm_i915_private *dev_priv = to_i915(dev);
 	u32 val = I915_READ(VIDEO_DIP_CTL);
@@ -199,7 +208,7 @@ static void ibx_write_infoframe(struct drm_encoder *encoder,
 				unsigned int type,
 				const void *frame, ssize_t len)
 {
-	const uint32_t *data = frame;
+	const u32 *data = frame;
 	struct drm_device *dev = encoder->dev;
 	struct drm_i915_private *dev_priv = to_i915(dev);
 	struct intel_crtc *intel_crtc = to_intel_crtc(crtc_state->base.crtc);
@@ -259,7 +268,7 @@ static void cpt_write_infoframe(struct drm_encoder *encoder,
 				unsigned int type,
 				const void *frame, ssize_t len)
 {
-	const uint32_t *data = frame;
+	const u32 *data = frame;
 	struct drm_device *dev = encoder->dev;
 	struct drm_i915_private *dev_priv = to_i915(dev);
 	struct intel_crtc *intel_crtc = to_intel_crtc(crtc_state->base.crtc);
@@ -317,7 +326,7 @@ static void vlv_write_infoframe(struct drm_encoder *encoder,
 				unsigned int type,
 				const void *frame, ssize_t len)
 {
-	const uint32_t *data = frame;
+	const u32 *data = frame;
 	struct drm_device *dev = encoder->dev;
 	struct drm_i915_private *dev_priv = to_i915(dev);
 	struct intel_crtc *intel_crtc = to_intel_crtc(crtc_state->base.crtc);
@@ -376,19 +385,16 @@ static void hsw_write_infoframe(struct drm_encoder *encoder,
 				unsigned int type,
 				const void *frame, ssize_t len)
 {
-	const uint32_t *data = frame;
+	const u32 *data = frame;
 	struct drm_device *dev = encoder->dev;
 	struct drm_i915_private *dev_priv = to_i915(dev);
 	enum transcoder cpu_transcoder = crtc_state->cpu_transcoder;
 	i915_reg_t ctl_reg = HSW_TVIDEO_DIP_CTL(cpu_transcoder);
-	i915_reg_t data_reg;
 	int data_size = type == DP_SDP_VSC ?
 		VIDEO_DIP_VSC_DATA_SIZE : VIDEO_DIP_DATA_SIZE;
 	int i;
 	u32 val = I915_READ(ctl_reg);
 
-	data_reg = hsw_dip_data_reg(dev_priv, cpu_transcoder, type, 0);
-
 	val &= ~hsw_infoframe_enable(type);
 	I915_WRITE(ctl_reg, val);
 
@@ -442,7 +448,7 @@ static void intel_write_infoframe(struct drm_encoder *encoder,
 				  union hdmi_infoframe *frame)
 {
 	struct intel_digital_port *intel_dig_port = enc_to_dig_port(encoder);
-	uint8_t buffer[VIDEO_DIP_DATA_SIZE];
+	u8 buffer[VIDEO_DIP_DATA_SIZE];
 	ssize_t len;
 
 	/* see comment above for the reason for this offset */
@@ -838,11 +844,11 @@ static void hsw_set_infoframes(struct drm_encoder *encoder,
 			       const struct drm_connector_state *conn_state)
 {
 	struct drm_i915_private *dev_priv = to_i915(encoder->dev);
-	struct intel_hdmi *intel_hdmi = enc_to_intel_hdmi(encoder);
 	i915_reg_t reg = HSW_TVIDEO_DIP_CTL(crtc_state->cpu_transcoder);
 	u32 val = I915_READ(reg);
 
-	assert_hdmi_port_disabled(intel_hdmi);
+	assert_hdmi_transcoder_func_disabled(dev_priv,
+					     crtc_state->cpu_transcoder);
 
 	val &= ~(VIDEO_DIP_ENABLE_VSC_HSW | VIDEO_DIP_ENABLE_AVI_HSW |
 		 VIDEO_DIP_ENABLE_GCP_HSW | VIDEO_DIP_ENABLE_VS_HSW |
@@ -1544,6 +1550,9 @@ intel_hdmi_mode_valid(struct drm_connector *connector,
 	bool force_dvi =
 		READ_ONCE(to_intel_digital_connector_state(connector->state)->force_audio) == HDMI_AUDIO_OFF_DVI;
 
+	if (mode->flags & DRM_MODE_FLAG_DBLSCAN)
+		return MODE_NO_DBLESCAN;
+
 	clock = mode->clock;
 
 	if ((mode->flags & DRM_MODE_FLAG_3D_MASK) == DRM_MODE_FLAG_3D_FRAME_PACKING)
@@ -1561,14 +1570,23 @@ intel_hdmi_mode_valid(struct drm_connector *connector,
 	/* check if we can do 8bpc */
 	status = hdmi_port_clock_valid(hdmi, clock, true, force_dvi);
 
-	/* if we can't do 8bpc we may still be able to do 12bpc */
-	if (!HAS_GMCH_DISPLAY(dev_priv) && status != MODE_OK && hdmi->has_hdmi_sink && !force_dvi)
-		status = hdmi_port_clock_valid(hdmi, clock * 3 / 2, true, force_dvi);
+	if (hdmi->has_hdmi_sink && !force_dvi) {
+		/* if we can't do 8bpc we may still be able to do 12bpc */
+		if (status != MODE_OK && !HAS_GMCH_DISPLAY(dev_priv))
+			status = hdmi_port_clock_valid(hdmi, clock * 3 / 2,
+						       true, force_dvi);
+
+		/* if we can't do 8,12bpc we may still be able to do 10bpc */
+		if (status != MODE_OK && INTEL_GEN(dev_priv) >= 11)
+			status = hdmi_port_clock_valid(hdmi, clock * 5 / 4,
+						       true, force_dvi);
+	}
 
 	return status;
 }
 
-static bool hdmi_12bpc_possible(const struct intel_crtc_state *crtc_state)
+static bool hdmi_deep_color_possible(const struct intel_crtc_state *crtc_state,
+				     int bpc)
 {
 	struct drm_i915_private *dev_priv =
 		to_i915(crtc_state->base.crtc->dev);
@@ -1580,6 +1598,9 @@ static bool hdmi_12bpc_possible(const struct intel_crtc_state *crtc_state)
 	if (HAS_GMCH_DISPLAY(dev_priv))
 		return false;
 
+	if (bpc == 10 && INTEL_GEN(dev_priv) < 11)
+		return false;
+
 	if (crtc_state->pipe_bpp <= 8*3)
 		return false;
 
@@ -1587,7 +1608,7 @@ static bool hdmi_12bpc_possible(const struct intel_crtc_state *crtc_state)
 		return false;
 
 	/*
-	 * HDMI 12bpc affects the clocks, so it's only possible
+	 * HDMI deep color affects the clocks, so it's only possible
 	 * when not cloning with other encoder types.
 	 */
 	if (crtc_state->output_types != 1 << INTEL_OUTPUT_HDMI)
@@ -1602,16 +1623,24 @@ static bool hdmi_12bpc_possible(const struct intel_crtc_state *crtc_state)
 		if (crtc_state->ycbcr420) {
 			const struct drm_hdmi_info *hdmi = &info->hdmi;
 
-			if (!(hdmi->y420_dc_modes & DRM_EDID_YCBCR420_DC_36))
+			if (bpc == 12 && !(hdmi->y420_dc_modes &
+					   DRM_EDID_YCBCR420_DC_36))
+				return false;
+			else if (bpc == 10 && !(hdmi->y420_dc_modes &
+						DRM_EDID_YCBCR420_DC_30))
 				return false;
 		} else {
-			if (!(info->edid_hdmi_dc_modes & DRM_EDID_HDMI_DC_36))
+			if (bpc == 12 && !(info->edid_hdmi_dc_modes &
+					   DRM_EDID_HDMI_DC_36))
+				return false;
+			else if (bpc == 10 && !(info->edid_hdmi_dc_modes &
+						DRM_EDID_HDMI_DC_30))
 				return false;
 		}
 	}
 
 	/* Display WA #1139: glk */
-	if (IS_GLK_REVID(dev_priv, 0, GLK_REVID_A1) &&
+	if (bpc == 12 && IS_GLK_REVID(dev_priv, 0, GLK_REVID_A1) &&
 	    crtc_state->base.adjusted_mode.htotal > 5460)
 		return false;
 
@@ -1621,7 +1650,8 @@ static bool hdmi_12bpc_possible(const struct intel_crtc_state *crtc_state)
 static bool
 intel_hdmi_ycbcr420_config(struct drm_connector *connector,
 			   struct intel_crtc_state *config,
-			   int *clock_12bpc, int *clock_8bpc)
+			   int *clock_12bpc, int *clock_10bpc,
+			   int *clock_8bpc)
 {
 	struct intel_crtc *intel_crtc = to_intel_crtc(config->base.crtc);
 
@@ -1633,6 +1663,7 @@ intel_hdmi_ycbcr420_config(struct drm_connector *connector,
 	/* YCBCR420 TMDS rate requirement is half the pixel clock */
 	config->port_clock /= 2;
 	*clock_12bpc /= 2;
+	*clock_10bpc /= 2;
 	*clock_8bpc /= 2;
 	config->ycbcr420 = true;
 
@@ -1660,10 +1691,14 @@ bool intel_hdmi_compute_config(struct intel_encoder *encoder,
 	struct intel_digital_connector_state *intel_conn_state =
 		to_intel_digital_connector_state(conn_state);
 	int clock_8bpc = pipe_config->base.adjusted_mode.crtc_clock;
+	int clock_10bpc = clock_8bpc * 5 / 4;
 	int clock_12bpc = clock_8bpc * 3 / 2;
 	int desired_bpp;
 	bool force_dvi = intel_conn_state->force_audio == HDMI_AUDIO_OFF_DVI;
 
+	if (adjusted_mode->flags & DRM_MODE_FLAG_DBLSCAN)
+		return false;
+
 	pipe_config->has_hdmi_sink = !force_dvi && intel_hdmi->has_hdmi_sink;
 
 	if (pipe_config->has_hdmi_sink)
@@ -1683,12 +1718,14 @@ bool intel_hdmi_compute_config(struct intel_encoder *encoder,
 	if (adjusted_mode->flags & DRM_MODE_FLAG_DBLCLK) {
 		pipe_config->pixel_multiplier = 2;
 		clock_8bpc *= 2;
+		clock_10bpc *= 2;
 		clock_12bpc *= 2;
 	}
 
 	if (drm_mode_is_420_only(&connector->display_info, adjusted_mode)) {
 		if (!intel_hdmi_ycbcr420_config(connector, pipe_config,
-						&clock_12bpc, &clock_8bpc)) {
+						&clock_12bpc, &clock_10bpc,
+						&clock_8bpc)) {
 			DRM_ERROR("Can't support YCBCR420 output\n");
 			return false;
 		}
@@ -1706,18 +1743,25 @@ bool intel_hdmi_compute_config(struct intel_encoder *encoder,
 	}
 
 	/*
-	 * HDMI is either 12 or 8, so if the display lets 10bpc sneak
-	 * through, clamp it down. Note that g4x/vlv don't support 12bpc hdmi
-	 * outputs. We also need to check that the higher clock still fits
-	 * within limits.
+	 * Note that g4x/vlv don't support 12bpc hdmi outputs. We also need
+	 * to check that the higher clock still fits within limits.
 	 */
-	if (hdmi_12bpc_possible(pipe_config) &&
-	    hdmi_port_clock_valid(intel_hdmi, clock_12bpc, true, force_dvi) == MODE_OK) {
+	if (hdmi_deep_color_possible(pipe_config, 12) &&
+	    hdmi_port_clock_valid(intel_hdmi, clock_12bpc,
+				  true, force_dvi) == MODE_OK) {
 		DRM_DEBUG_KMS("picking bpc to 12 for HDMI output\n");
 		desired_bpp = 12*3;
 
 		/* Need to adjust the port link by 1.5x for 12bpc. */
 		pipe_config->port_clock = clock_12bpc;
+	} else if (hdmi_deep_color_possible(pipe_config, 10) &&
+		   hdmi_port_clock_valid(intel_hdmi, clock_10bpc,
+					 true, force_dvi) == MODE_OK) {
+		DRM_DEBUG_KMS("picking bpc to 10 for HDMI output\n");
+		desired_bpp = 10 * 3;
+
+		/* Need to adjust the port link by 1.25x for 10bpc. */
+		pipe_config->port_clock = clock_10bpc;
 	} else {
 		DRM_DEBUG_KMS("picking bpc to 8 for HDMI output\n");
 		desired_bpp = 8*3;
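
The candidate port clocks scale with the bits per component: 12bpc costs 1.5x the 8bpc TMDS clock, 10bpc costs 1.25x, and YCbCr 4:2:0 halves all of them; compute_config tries 12, then 10, then falls back to 8. Worked numbers for a 297 MHz mode:

#include <stdio.h>

int main(void)
{
	int clock_8bpc = 297000;	/* kHz */
	int clock_10bpc = clock_8bpc * 5 / 4;
	int clock_12bpc = clock_8bpc * 3 / 2;

	/* 297000, 371250, 445500: on a 340 MHz (HDMI 1.4) sink both deep
	 * color clocks fail hdmi_port_clock_valid() and 8bpc is chosen. */
	printf("8bpc: %d, 10bpc: %d, 12bpc: %d kHz\n",
	       clock_8bpc, clock_10bpc, clock_12bpc);
	return 0;
}
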
@@ -2239,7 +2283,7 @@ static u8 intel_hdmi_ddc_pin(struct drm_i915_private *dev_priv,
 		ddc_pin = bxt_port_to_ddc_pin(dev_priv, port);
 	else if (HAS_PCH_CNP(dev_priv))
 		ddc_pin = cnp_port_to_ddc_pin(dev_priv, port);
-	else if (IS_ICELAKE(dev_priv))
+	else if (HAS_PCH_ICP(dev_priv))
 		ddc_pin = icl_port_to_ddc_pin(dev_priv, port);
 	else
 		ddc_pin = g4x_port_to_ddc_pin(dev_priv, port);
@@ -2334,7 +2378,7 @@ void intel_hdmi_init_connector(struct intel_digital_port *intel_dig_port,
 	 * 0xd.  Failure to do so will result in spurious interrupts being
 	 * generated on the port when a cable is not attached.
 	 */
-	if (IS_G4X(dev_priv) && !IS_GM45(dev_priv)) {
+	if (IS_G45(dev_priv)) {
 		u32 temp = I915_READ(PEG_BAND_GAP_DATA);
 		I915_WRITE(PEG_BAND_GAP_DATA, (temp & ~0xf) | 0xd);
 	}

+ 7 - 7
drivers/gpu/drm/i915/intel_i2c.c

@@ -77,12 +77,12 @@ static const struct gmbus_pin gmbus_pins_cnp[] = {
 };
 
 static const struct gmbus_pin gmbus_pins_icp[] = {
-	[GMBUS_PIN_1_BXT] = { "dpa", GPIOA },
-	[GMBUS_PIN_2_BXT] = { "dpb", GPIOB },
-	[GMBUS_PIN_9_TC1_ICP] = { "tc1", GPIOC },
-	[GMBUS_PIN_10_TC2_ICP] = { "tc2", GPIOD },
-	[GMBUS_PIN_11_TC3_ICP] = { "tc3", GPIOE },
-	[GMBUS_PIN_12_TC4_ICP] = { "tc4", GPIOF },
+	[GMBUS_PIN_1_BXT] = { "dpa", GPIOB },
+	[GMBUS_PIN_2_BXT] = { "dpb", GPIOC },
+	[GMBUS_PIN_9_TC1_ICP] = { "tc1", GPIOJ },
+	[GMBUS_PIN_10_TC2_ICP] = { "tc2", GPIOK },
+	[GMBUS_PIN_11_TC3_ICP] = { "tc3", GPIOL },
+	[GMBUS_PIN_12_TC4_ICP] = { "tc4", GPIOM },
 };
 
 /* pin is expected to be valid */
@@ -771,7 +771,7 @@ int intel_setup_gmbus(struct drm_i915_private *dev_priv)
 	unsigned int pin;
 	int ret;
 
-	if (HAS_PCH_NOP(dev_priv))
+	if (INTEL_INFO(dev_priv)->num_pipes == 0)
 		return 0;
 
 	if (IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv))

+ 62 - 27
drivers/gpu/drm/i915/intel_lrc.c

@@ -970,22 +970,19 @@ static void process_csb(struct intel_engine_cs *engine)
 			&engine->status_page.page_addr[I915_HWS_CSB_BUF0_INDEX];
 		unsigned int head, tail;
 
-		if (unlikely(execlists->csb_use_mmio)) {
-			buf = (u32 * __force)
-				(i915->regs + i915_mmio_reg_offset(RING_CONTEXT_STATUS_BUF_LO(engine, 0)));
-			execlists->csb_head = -1; /* force mmio read of CSB */
-		}
-
 		/* Clear before reading to catch new interrupts */
 		clear_bit(ENGINE_IRQ_EXECLIST, &engine->irq_posted);
 		smp_mb__after_atomic();
 
-		if (unlikely(execlists->csb_head == -1)) { /* after a reset */
+		if (unlikely(execlists->csb_use_mmio)) {
 			if (!fw) {
 				intel_uncore_forcewake_get(i915, execlists->fw_domains);
 				fw = true;
 			}
 
+			buf = (u32 * __force)
+				(i915->regs + i915_mmio_reg_offset(RING_CONTEXT_STATUS_BUF_LO(engine, 0)));
+
 			head = readl(i915->regs + i915_mmio_reg_offset(RING_CONTEXT_STATUS_PTR(engine)));
 			tail = GEN8_CSB_WRITE_PTR(head);
 			head = GEN8_CSB_READ_PTR(head);
@@ -1413,6 +1410,7 @@ __execlists_context_pin(struct intel_engine_cs *engine,
 	ce->lrc_reg_state = vaddr + LRC_STATE_PN * PAGE_SIZE;
 	ce->lrc_reg_state[CTX_RING_BUFFER_START+1] =
 		i915_ggtt_offset(ce->ring->vma);
+	GEM_BUG_ON(!intel_ring_offset_valid(ce->ring, ce->ring->head));
 	ce->lrc_reg_state[CTX_RING_HEAD+1] = ce->ring->head;
 
 	ce->state->obj->pin_global++;
@@ -1567,19 +1565,56 @@ static u32 *gen8_init_indirectctx_bb(struct intel_engine_cs *engine, u32 *batch)
 	return batch;
 }
 
+struct lri {
+	i915_reg_t reg;
+	u32 value;
+};
+
+static u32 *emit_lri(u32 *batch, const struct lri *lri, unsigned int count)
+{
+	GEM_BUG_ON(!count || count > 63);
+
+	*batch++ = MI_LOAD_REGISTER_IMM(count);
+	do {
+		*batch++ = i915_mmio_reg_offset(lri->reg);
+		*batch++ = lri->value;
+	} while (lri++, --count);
+	*batch++ = MI_NOOP;
+
+	return batch;
+}
+
 static u32 *gen9_init_indirectctx_bb(struct intel_engine_cs *engine, u32 *batch)
 {
+	static const struct lri lri[] = {
+		/* WaDisableGatherAtSetShaderCommonSlice:skl,bxt,kbl,glk */
+		{
+			COMMON_SLICE_CHICKEN2,
+			__MASKED_FIELD(GEN9_DISABLE_GATHER_AT_SET_SHADER_COMMON_SLICE,
+				       0),
+		},
+
+		/* BSpec: 11391 */
+		{
+			FF_SLICE_CHICKEN,
+			__MASKED_FIELD(FF_SLICE_CHICKEN_CL_PROVOKING_VERTEX_FIX,
+				       FF_SLICE_CHICKEN_CL_PROVOKING_VERTEX_FIX),
+		},
+
+		/* BSpec: 11299 */
+		{
+			_3D_CHICKEN3,
+			__MASKED_FIELD(_3D_CHICKEN_SF_PROVOKING_VERTEX_FIX,
+				       _3D_CHICKEN_SF_PROVOKING_VERTEX_FIX),
+		}
+	};
+
 	*batch++ = MI_ARB_ON_OFF | MI_ARB_DISABLE;
 
 	/* WaFlushCoherentL3CacheLinesAtContextSwitch:skl,bxt,glk */
 	batch = gen8_emit_flush_coherentl3_wa(engine, batch);
 
-	/* WaDisableGatherAtSetShaderCommonSlice:skl,bxt,kbl,glk */
-	*batch++ = MI_LOAD_REGISTER_IMM(1);
-	*batch++ = i915_mmio_reg_offset(COMMON_SLICE_CHICKEN2);
-	*batch++ = _MASKED_BIT_DISABLE(
-			GEN9_DISABLE_GATHER_AT_SET_SHADER_COMMON_SLICE);
-	*batch++ = MI_NOOP;
+	batch = emit_lri(batch, lri, ARRAY_SIZE(lri));
 
 	/* WaClearSlmSpaceAtContextSwitch:kbl */
 	/* Actual scratch location is at 128 bytes offset */
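
emit_lri() above turns the register/value table into one MI_LOAD_REGISTER_IMM(count) packet: a header dword, an offset/value pair per entry, and an MI_NOOP pad. A host-side sketch that emits the same dword layout into an array (the register offsets and values here are hypothetical):

#include <stdint.h>
#include <stdio.h>

#define MI_INSTR(opcode, flags)	(((opcode) << 23) | (flags))
#define MI_LOAD_REGISTER_IMM(x)	MI_INSTR(0x22, 2 * (x) - 1)
#define MI_NOOP			MI_INSTR(0, 0)

struct lri {
	uint32_t reg;	/* register offset */
	uint32_t value;
};

/* Emit one LRI packet covering all entries, as emit_lri() does. */
static uint32_t *emit_lri(uint32_t *batch, const struct lri *lri,
			  unsigned int count)
{
	*batch++ = MI_LOAD_REGISTER_IMM(count);
	do {
		*batch++ = lri->reg;
		*batch++ = lri->value;
	} while (lri++, --count);
	*batch++ = MI_NOOP;

	return batch;
}

int main(void)
{
	/* Masked-style writes: high half selects bits, low half sets them. */
	static const struct lri lri[] = {
		{ 0x7014, 0x80008000 },
		{ 0x2090, 0x00100010 },
	};
	uint32_t batch[8], *end;
	unsigned int i;

	end = emit_lri(batch, lri, 2);
	for (i = 0; batch + i < end; i++)
		printf("dw[%u] = 0x%08x\n", i, batch[i]);
	return 0;
}
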
@@ -1809,7 +1844,6 @@ static bool unexpected_starting_state(struct intel_engine_cs *engine)
 
 static int gen8_init_common_ring(struct intel_engine_cs *engine)
 {
-	struct intel_engine_execlists * const execlists = &engine->execlists;
 	int ret;
 
 	ret = intel_mocs_init_engine(engine);
@@ -1827,10 +1861,6 @@ static int gen8_init_common_ring(struct intel_engine_cs *engine)
 
 	enable_execlists(engine);
 
-	/* After a GPU reset, we may have requests to replay */
-	if (execlists->first)
-		tasklet_schedule(&execlists->tasklet);
-
 	return 0;
 }
 
@@ -1964,7 +1994,7 @@ static void execlists_reset(struct intel_engine_cs *engine,
 	spin_unlock(&engine->timeline.lock);
 
 	/* Following the reset, we need to reload the CSB read/write pointers */
-	engine->execlists.csb_head = -1;
+	engine->execlists.csb_head = GEN8_CSB_ENTRIES - 1;
 
 	local_irq_restore(flags);
 
@@ -2001,9 +2031,10 @@ static void execlists_reset(struct intel_engine_cs *engine,
 
 	/* Move the RING_HEAD onto the breadcrumb, past the hanging batch */
 	regs[CTX_RING_BUFFER_START + 1] = i915_ggtt_offset(request->ring->vma);
-	regs[CTX_RING_HEAD + 1] = request->postfix;
 
-	request->ring->head = request->postfix;
+	request->ring->head = intel_ring_wrap(request->ring, request->postfix);
+	regs[CTX_RING_HEAD + 1] = request->ring->head;
+
 	intel_ring_update_space(request->ring);
 
 	/* Reset WaIdleLiteRestore:bdw,skl as well */
@@ -2012,6 +2043,12 @@ static void execlists_reset(struct intel_engine_cs *engine,
 
 static void execlists_reset_finish(struct intel_engine_cs *engine)
 {
+	struct intel_engine_execlists * const execlists = &engine->execlists;
+
+	/* After a GPU reset, we may have requests to replay */
+	if (execlists->first)
+		tasklet_schedule(&execlists->tasklet);
+
 	/*
 	 * Flush the tasklet while we still have the forcewake to be sure
 	 * that it is not allowed to sleep before we restart and reload a
@@ -2021,7 +2058,7 @@ static void execlists_reset_finish(struct intel_engine_cs *engine)
 	 * serialising multiple attempts to reset so that we know that we
 	 * are the only one manipulating tasklet state.
 	 */
-	__tasklet_enable_sync_once(&engine->execlists.tasklet);
+	__tasklet_enable_sync_once(&execlists->tasklet);
 
 	GEM_TRACE("%s\n", engine->name);
 }
@@ -2465,7 +2502,7 @@ static int logical_ring_init(struct intel_engine_cs *engine)
 			upper_32_bits(ce->lrc_desc);
 	}
 
-	engine->execlists.csb_head = -1;
+	engine->execlists.csb_head = GEN8_CSB_ENTRIES - 1;
 
 	return 0;
 
@@ -2768,10 +2805,8 @@ static int execlists_context_deferred_alloc(struct i915_gem_context *ctx,
 	context_size += LRC_HEADER_PAGES * PAGE_SIZE;
 
 	ctx_obj = i915_gem_object_create(ctx->i915, context_size);
-	if (IS_ERR(ctx_obj)) {
-		ret = PTR_ERR(ctx_obj);
-		goto error_deref_obj;
-	}
+	if (IS_ERR(ctx_obj))
+		return PTR_ERR(ctx_obj);
 
 	vma = i915_vma_instance(ctx_obj, &ctx->i915->ggtt.vm, NULL);
 	if (IS_ERR(vma)) {

+ 1 - 1
drivers/gpu/drm/i915/intel_lspcon.c

@@ -116,7 +116,7 @@ static int lspcon_change_mode(struct intel_lspcon *lspcon,
 
 static bool lspcon_wake_native_aux_ch(struct intel_lspcon *lspcon)
 {
-	uint8_t rev;
+	u8 rev;
 
 	if (drm_dp_dpcd_readb(&lspcon_to_intel_dp(lspcon)->aux, DP_DPCD_REV,
 			      &rev) != 1) {

+ 5 - 0
drivers/gpu/drm/i915/intel_lvds.c

@@ -378,6 +378,8 @@ intel_lvds_mode_valid(struct drm_connector *connector,
 	struct drm_display_mode *fixed_mode = intel_connector->panel.fixed_mode;
 	int max_pixclk = to_i915(connector->dev)->max_dotclk_freq;
 
+	if (mode->flags & DRM_MODE_FLAG_DBLSCAN)
+		return MODE_NO_DBLESCAN;
 	if (mode->hdisplay > fixed_mode->hdisplay)
 		return MODE_PANEL;
 	if (mode->vdisplay > fixed_mode->vdisplay)
@@ -427,6 +429,9 @@ static bool intel_lvds_compute_config(struct intel_encoder *intel_encoder,
 	intel_fixed_panel_mode(intel_connector->panel.fixed_mode,
 			       adjusted_mode);
 
+	if (adjusted_mode->flags & DRM_MODE_FLAG_DBLSCAN)
+		return false;
+
 	if (HAS_PCH_SPLIT(dev_priv)) {
 		pipe_config->has_pch_encoder = true;
 

+ 12 - 19
drivers/gpu/drm/i915/intel_opregion.c

@@ -608,16 +608,16 @@ void intel_opregion_asle_intr(struct drm_i915_private *dev_priv)
 #define ACPI_EV_LID            (1<<1)
 #define ACPI_EV_DOCK           (1<<2)
 
-static struct intel_opregion *system_opregion;
-
+/*
+ * The only video events relevant to opregion are of type 0x80. These indicate either a
+ * docking event, lid switch or display switch request. In Linux, these are
+ * handled by the dock, button and video drivers.
+ */
 static int intel_opregion_video_event(struct notifier_block *nb,
 				      unsigned long val, void *data)
 {
-	/* The only video events relevant to opregion are 0x80. These indicate
-	   either a docking event, lid switch or display switch request. In
-	   Linux, these are handled by the dock, button and video drivers.
-	*/
-
+	struct intel_opregion *opregion = container_of(nb, struct intel_opregion,
+						       acpi_notifier);
 	struct acpi_bus_event *event = data;
 	struct opregion_acpi *acpi;
 	int ret = NOTIFY_OK;
@@ -625,10 +625,7 @@ static int intel_opregion_video_event(struct notifier_block *nb,
 	if (strcmp(event->device_class, ACPI_VIDEO_CLASS) != 0)
 		return NOTIFY_DONE;
 
-	if (!system_opregion)
-		return NOTIFY_DONE;
-
-	acpi = system_opregion->acpi;
+	acpi = opregion->acpi;
 
 	if (event->type == 0x80 && ((acpi->cevt & 1) == 0))
 		ret = NOTIFY_BAD;
@@ -638,10 +635,6 @@ static int intel_opregion_video_event(struct notifier_block *nb,
 	return ret;
 }
 
-static struct notifier_block intel_opregion_notifier = {
-	.notifier_call = intel_opregion_video_event,
-};
-
 /*
  * Initialise the DIDL field in opregion. This passes a list of devices to
  * the firmware. Values are defined by section B.4.2 of the ACPI specification
@@ -797,8 +790,8 @@ void intel_opregion_register(struct drm_i915_private *dev_priv)
 		opregion->acpi->csts = 0;
 		opregion->acpi->drdy = 1;
 
-		system_opregion = opregion;
-		register_acpi_notifier(&intel_opregion_notifier);
+		opregion->acpi_notifier.notifier_call = intel_opregion_video_event;
+		register_acpi_notifier(&opregion->acpi_notifier);
 	}
 
 	if (opregion->asle) {
@@ -822,8 +815,8 @@ void intel_opregion_unregister(struct drm_i915_private *dev_priv)
 	if (opregion->acpi) {
 		opregion->acpi->drdy = 0;
 
-		system_opregion = NULL;
-		unregister_acpi_notifier(&intel_opregion_notifier);
+		unregister_acpi_notifier(&opregion->acpi_notifier);
+		opregion->acpi_notifier.notifier_call = NULL;
 	}
 
 	/* just clear all opregion memory pointers now */
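
The rework above drops the file-scope system_opregion pointer: the notifier_block now lives inside struct intel_opregion, and the callback recovers its opregion with container_of(). A stand-alone sketch of that pattern, with stand-in types rather than the kernel's:

	#include <stddef.h>

	#define container_of(ptr, type, member) \
		((type *)((char *)(ptr) - offsetof(type, member)))

	struct notifier_block { void *cb; };
	struct opregion {
		int acpi_state;
		struct notifier_block acpi_notifier;
	};

	/* Subtracting the member offset turns the embedded notifier's
	 * address back into the address of the containing opregion. */
	static struct opregion *to_opregion(struct notifier_block *nb)
	{
		return container_of(nb, struct opregion, acpi_notifier);
	}

Embedding the notifier also removes the single-instance assumption that the static pointer baked in.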

+ 1 - 0
drivers/gpu/drm/i915/intel_opregion.h

@@ -49,6 +49,7 @@ struct intel_opregion {
 	u32 vbt_size;
 	u32 *lid_state;
 	struct work_struct asle_work;
+	struct notifier_block acpi_notifier;
 };
 
 #define OPREGION_SIZE            (8 * 1024)

+ 4 - 4
drivers/gpu/drm/i915/intel_panel.c

@@ -406,11 +406,11 @@ intel_panel_detect(struct drm_i915_private *dev_priv)
  * Return @source_val in range [@source_min..@source_max] scaled to range
  * [@target_min..@target_max].
  */
-static uint32_t scale(uint32_t source_val,
-		      uint32_t source_min, uint32_t source_max,
-		      uint32_t target_min, uint32_t target_max)
+static u32 scale(u32 source_val,
+		 u32 source_min, u32 source_max,
+		 u32 target_min, u32 target_max)
 {
-	uint64_t target_val;
+	u64 target_val;
 
 	WARN_ON(source_min > source_max);
 	WARN_ON(target_min > target_max);
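
The uint32_t to u32 rename above leaves the arithmetic untouched: scale() is a linear rescale done with a 64-bit intermediate so the multiply cannot overflow a u32. A self-contained sketch of the mapping the comment describes, assuming source_min < source_max as the WARN_ONs require:

	#include <stdint.h>

	static uint32_t rescale(uint32_t v,
				uint32_t smin, uint32_t smax,
				uint32_t tmin, uint32_t tmax)
	{
		uint64_t t;

		if (v < smin)		/* clamp into the source range */
			v = smin;
		if (v > smax)
			v = smax;

		/* widen, then divide rounded to nearest */
		t = (uint64_t)(v - smin) * (tmax - tmin);
		t = (t + (smax - smin) / 2) / (smax - smin);

		return tmin + (uint32_t)t;
	}

For example, rescale(50, 0, 100, 0, 65535) yields 32768, i.e. mid-scale.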

+ 8 - 22
drivers/gpu/drm/i915/intel_psr.c

@@ -671,21 +671,7 @@ void intel_psr_enable(struct intel_dp *intel_dp,
 	dev_priv->psr.enable_source(intel_dp, crtc_state);
 	dev_priv->psr.enabled = intel_dp;
 
-	if (INTEL_GEN(dev_priv) >= 9) {
-		intel_psr_activate(intel_dp);
-	} else {
-		/*
-		 * FIXME: Activation should happen immediately since this
-		 * function is just called after pipe is fully trained and
-		 * enabled.
-		 * However on some platforms we face issues when first
-		 * activation follows a modeset so quickly.
-		 *     - On HSW/BDW we get a recoverable frozen screen until
-		 *       next exit-activate sequence.
-		 */
-		schedule_delayed_work(&dev_priv->psr.work,
-				      msecs_to_jiffies(intel_dp->panel_power_cycle_delay * 5));
-	}
+	intel_psr_activate(intel_dp);
 
 unlock:
 	mutex_unlock(&dev_priv->psr.lock);
@@ -768,8 +754,7 @@ void intel_psr_disable(struct intel_dp *intel_dp,
 
 	dev_priv->psr.enabled = NULL;
 	mutex_unlock(&dev_priv->psr.lock);
-
-	cancel_delayed_work_sync(&dev_priv->psr.work);
+	cancel_work_sync(&dev_priv->psr.work);
 }
 
 static bool psr_wait_for_idle(struct drm_i915_private *dev_priv)
@@ -805,10 +790,13 @@ static bool psr_wait_for_idle(struct drm_i915_private *dev_priv)
 static void intel_psr_work(struct work_struct *work)
 {
 	struct drm_i915_private *dev_priv =
-		container_of(work, typeof(*dev_priv), psr.work.work);
+		container_of(work, typeof(*dev_priv), psr.work);
 
 	mutex_lock(&dev_priv->psr.lock);
 
+	if (!dev_priv->psr.enabled)
+		goto unlock;
+
 	/*
 	 * We have to make sure PSR is ready for re-enable
 	 * otherwise it keeps disabled until next full enable/disable cycle.
@@ -949,9 +937,7 @@ void intel_psr_flush(struct drm_i915_private *dev_priv,
 	}
 
 	if (!dev_priv->psr.active && !dev_priv->psr.busy_frontbuffer_bits)
-		if (!work_busy(&dev_priv->psr.work.work))
-			schedule_delayed_work(&dev_priv->psr.work,
-					      msecs_to_jiffies(100));
+		schedule_work(&dev_priv->psr.work);
 	mutex_unlock(&dev_priv->psr.lock);
 }
 
@@ -998,7 +984,7 @@ void intel_psr_init(struct drm_i915_private *dev_priv)
 		dev_priv->psr.link_standby = false;
 	}
 
-	INIT_DELAYED_WORK(&dev_priv->psr.work, intel_psr_work);
+	INIT_WORK(&dev_priv->psr.work, intel_psr_work);
 	mutex_init(&dev_priv->psr.lock);
 
 	dev_priv->psr.enable_source = hsw_psr_enable_source;

+ 157 - 82
drivers/gpu/drm/i915/intel_ringbuffer.c

@@ -496,6 +496,10 @@ static int init_ring_common(struct intel_engine_cs *engine)
 		DRM_DEBUG_DRIVER("%s initialization failed [head=%08x], fudging\n",
 				 engine->name, I915_READ_HEAD(engine));
 
+	/* Check that the ring offsets point within the ring! */
+	GEM_BUG_ON(!intel_ring_offset_valid(ring, ring->head));
+	GEM_BUG_ON(!intel_ring_offset_valid(ring, ring->tail));
+
 	intel_ring_update_space(ring);
 	I915_WRITE_HEAD(engine, ring->head);
 	I915_WRITE_TAIL(engine, ring->tail);
@@ -541,19 +545,23 @@ static struct i915_request *reset_prepare(struct intel_engine_cs *engine)
 	return i915_gem_find_active_request(engine);
 }
 
-static void reset_ring(struct intel_engine_cs *engine,
-		       struct i915_request *request)
+static void skip_request(struct i915_request *rq)
 {
-	GEM_TRACE("%s seqno=%x\n",
-		  engine->name, request ? request->global_seqno : 0);
+	void *vaddr = rq->ring->vaddr;
+	u32 head;
 
-	/*
-	 * RC6 must be prevented until the reset is complete and the engine
-	 * reinitialised. If it occurs in the middle of this sequence, the
-	 * state written to/loaded from the power context is ill-defined (e.g.
-	 * the PP_BASE_DIR may be lost).
-	 */
-	assert_forcewakes_active(engine->i915, FORCEWAKE_ALL);
+	head = rq->infix;
+	if (rq->postfix < head) {
+		memset32(vaddr + head, MI_NOOP,
+			 (rq->ring->size - head) / sizeof(u32));
+		head = 0;
+	}
+	memset32(vaddr + head, MI_NOOP, (rq->postfix - head) / sizeof(u32));
+}
+
+static void reset_ring(struct intel_engine_cs *engine, struct i915_request *rq)
+{
+	GEM_TRACE("%s seqno=%x\n", engine->name, rq ? rq->global_seqno : 0);
 
 	/*
 	 * Try to restore the logical GPU state to match the continuation
@@ -569,43 +577,11 @@ static void reset_ring(struct intel_engine_cs *engine,
 	 * If the request was innocent, we try to replay the request with
 	 * the restored context.
 	 */
-	if (request) {
-		struct drm_i915_private *dev_priv = request->i915;
-		struct intel_context *ce = request->hw_context;
-		struct i915_hw_ppgtt *ppgtt;
-
-		if (ce->state) {
-			I915_WRITE(CCID,
-				   i915_ggtt_offset(ce->state) |
-				   BIT(8) /* must be set! */ |
-				   CCID_EXTENDED_STATE_SAVE |
-				   CCID_EXTENDED_STATE_RESTORE |
-				   CCID_EN);
-		}
-
-		ppgtt = request->gem_context->ppgtt ?: engine->i915->mm.aliasing_ppgtt;
-		if (ppgtt) {
-			u32 pd_offset = ppgtt->pd.base.ggtt_offset << 10;
-
-			I915_WRITE(RING_PP_DIR_DCLV(engine), PP_DIR_DCLV_2G);
-			I915_WRITE(RING_PP_DIR_BASE(engine), pd_offset);
-
-			/* Wait for the PD reload to complete */
-			if (intel_wait_for_register(dev_priv,
-						    RING_PP_DIR_BASE(engine),
-						    BIT(0), 0,
-						    10))
-				DRM_ERROR("Wait for reload of ppgtt page-directory timed out\n");
-
-			ppgtt->pd_dirty_rings &= ~intel_engine_flag(engine);
-		}
-
+	if (rq) {
 		/* If the rq hung, jump to its breadcrumb and skip the batch */
-		if (request->fence.error == -EIO)
-			request->ring->head = request->postfix;
-	} else {
-		engine->legacy_active_context = NULL;
-		engine->legacy_active_ppgtt = NULL;
+		rq->ring->head = intel_ring_wrap(rq->ring, rq->head);
+		if (rq->fence.error == -EIO)
+			skip_request(rq);
 	}
 }
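
For reference, skip_request() above erases the hung request's payload with MI_NOOPs in at most two runs, because the payload may wrap past the end of the ring. Worked through on concrete offsets, as a sketch with a stand-in helper rather than the driver's memset32():

	#include <stdint.h>

	#define MI_NOOP 0u	/* assumed encoding of the no-op dword */

	static void fill_noops(uint32_t *ring, uint32_t from, uint32_t to)
	{
		for (uint32_t i = from / 4; i < to / 4; i++)
			ring[i] = MI_NOOP;	/* from/to are byte offsets */
	}

	/* size 0x1000, head (rq->infix) 0x0ff0, postfix 0x0020: the
	 * payload wraps, so erase [0x0ff0, 0x1000) and then [0, 0x0020). */
	static void skip(uint32_t *ring, uint32_t size,
			 uint32_t head, uint32_t postfix)
	{
		if (postfix < head) {
			fill_noops(ring, head, size);
			head = 0;
		}
		fill_noops(ring, head, postfix);
	}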
 
@@ -1084,6 +1060,8 @@ err:
 
 void intel_ring_reset(struct intel_ring *ring, u32 tail)
 {
+	GEM_BUG_ON(!intel_ring_offset_valid(ring, tail));
+
 	ring->tail = tail;
 	ring->head = tail;
 	ring->emit = tail;
@@ -1195,6 +1173,27 @@ static void intel_ring_context_destroy(struct intel_context *ce)
 		__i915_gem_object_release_unless_active(ce->state->obj);
 }
 
+static int __context_pin_ppgtt(struct i915_gem_context *ctx)
+{
+	struct i915_hw_ppgtt *ppgtt;
+	int err = 0;
+
+	ppgtt = ctx->ppgtt ?: ctx->i915->mm.aliasing_ppgtt;
+	if (ppgtt)
+		err = gen6_ppgtt_pin(ppgtt);
+
+	return err;
+}
+
+static void __context_unpin_ppgtt(struct i915_gem_context *ctx)
+{
+	struct i915_hw_ppgtt *ppgtt;
+
+	ppgtt = ctx->ppgtt ?: ctx->i915->mm.aliasing_ppgtt;
+	if (ppgtt)
+		gen6_ppgtt_unpin(ppgtt);
+}
+
 static int __context_pin(struct intel_context *ce)
 {
 	struct i915_vma *vma;
@@ -1243,6 +1242,7 @@ static void __context_unpin(struct intel_context *ce)
 
 static void intel_ring_context_unpin(struct intel_context *ce)
 {
+	__context_unpin_ppgtt(ce->gem_context);
 	__context_unpin(ce);
 
 	i915_gem_context_put(ce->gem_context);
@@ -1340,6 +1340,10 @@ __ring_context_pin(struct intel_engine_cs *engine,
 	if (err)
 		goto err;
 
+	err = __context_pin_ppgtt(ce->gem_context);
+	if (err)
+		goto err_unpin;
+
 	i915_gem_context_get(ctx);
 
 	/* One ringbuffer to rule them all */
@@ -1348,6 +1352,8 @@ __ring_context_pin(struct intel_engine_cs *engine,
 
 	return ce;
 
+err_unpin:
+	__context_unpin(ce);
 err:
 	ce->pin_count = 0;
 	return ERR_PTR(err);
@@ -1377,8 +1383,9 @@ intel_ring_context_pin(struct intel_engine_cs *engine,
 
 static int intel_init_ring_buffer(struct intel_engine_cs *engine)
 {
-	struct intel_ring *ring;
 	struct i915_timeline *timeline;
+	struct intel_ring *ring;
+	unsigned int size;
 	int err;
 
 	intel_engine_setup_common(engine);
@@ -1404,12 +1411,21 @@ static int intel_init_ring_buffer(struct intel_engine_cs *engine)
 	GEM_BUG_ON(engine->buffer);
 	engine->buffer = ring;
 
-	err = intel_engine_init_common(engine);
+	size = PAGE_SIZE;
+	if (HAS_BROKEN_CS_TLB(engine->i915))
+		size = I830_WA_SIZE;
+	err = intel_engine_create_scratch(engine, size);
 	if (err)
 		goto err_unpin;
 
+	err = intel_engine_init_common(engine);
+	if (err)
+		goto err_scratch;
+
 	return 0;
 
+err_scratch:
+	intel_engine_cleanup_scratch(engine);
 err_unpin:
 	intel_ring_unpin(ring);
 err_ring:
@@ -1448,6 +1464,48 @@ void intel_legacy_submission_resume(struct drm_i915_private *dev_priv)
 		intel_ring_reset(engine->buffer, 0);
 }
 
+static int load_pd_dir(struct i915_request *rq,
+		       const struct i915_hw_ppgtt *ppgtt)
+{
+	const struct intel_engine_cs * const engine = rq->engine;
+	u32 *cs;
+
+	cs = intel_ring_begin(rq, 6);
+	if (IS_ERR(cs))
+		return PTR_ERR(cs);
+
+	*cs++ = MI_LOAD_REGISTER_IMM(1);
+	*cs++ = i915_mmio_reg_offset(RING_PP_DIR_DCLV(engine));
+	*cs++ = PP_DIR_DCLV_2G;
+
+	*cs++ = MI_LOAD_REGISTER_IMM(1);
+	*cs++ = i915_mmio_reg_offset(RING_PP_DIR_BASE(engine));
+	*cs++ = ppgtt->pd.base.ggtt_offset << 10;
+
+	intel_ring_advance(rq, cs);
+
+	return 0;
+}
+
+static int flush_pd_dir(struct i915_request *rq)
+{
+	const struct intel_engine_cs * const engine = rq->engine;
+	u32 *cs;
+
+	cs = intel_ring_begin(rq, 4);
+	if (IS_ERR(cs))
+		return PTR_ERR(cs);
+
+	/* Stall until the page table load is complete */
+	*cs++ = MI_STORE_REGISTER_MEM | MI_SRM_LRM_GLOBAL_GTT;
+	*cs++ = i915_mmio_reg_offset(RING_PP_DIR_BASE(engine));
+	*cs++ = i915_ggtt_offset(engine->scratch);
+	*cs++ = MI_NOOP;
+
+	intel_ring_advance(rq, cs);
+	return 0;
+}
+
 static inline int mi_set_context(struct i915_request *rq, u32 flags)
 {
 	struct drm_i915_private *i915 = rq->i915;
@@ -1458,6 +1516,7 @@ static inline int mi_set_context(struct i915_request *rq, u32 flags)
 		(HAS_LEGACY_SEMAPHORES(i915) && IS_GEN7(i915)) ?
 		INTEL_INFO(i915)->num_rings - 1 :
 		0;
+	bool force_restore = false;
 	int len;
 	u32 *cs;
 
@@ -1471,6 +1530,12 @@ static inline int mi_set_context(struct i915_request *rq, u32 flags)
 	len = 4;
 	if (IS_GEN7(i915))
 		len += 2 + (num_rings ? 4*num_rings + 6 : 0);
+	if (flags & MI_FORCE_RESTORE) {
+		GEM_BUG_ON(flags & MI_RESTORE_INHIBIT);
+		flags &= ~MI_FORCE_RESTORE;
+		force_restore = true;
+		len += 2;
+	}
 
 	cs = intel_ring_begin(rq, len);
 	if (IS_ERR(cs))
@@ -1495,6 +1560,26 @@ static inline int mi_set_context(struct i915_request *rq, u32 flags)
 		}
 	}
 
+	if (force_restore) {
+		/*
+		 * The HW doesn't handle being told to restore the current
+		 * context very well. Quite often it likes to go off and
+		 * sulk, especially when it is meant to be reloading PP_DIR.
+		 * A very simple fix to force the reload is to switch
+		 * away from the current context and back again.
+		 *
+		 * Note that the kernel_context will contain random state
+		 * following the INHIBIT_RESTORE. We accept this since we
+		 * never use the kernel_context state; it is merely a
+		 * placeholder we use to flush other contexts.
+		 */
+		*cs++ = MI_SET_CONTEXT;
+		*cs++ = i915_ggtt_offset(to_intel_context(i915->kernel_context,
+							  engine)->state) |
+			MI_MM_SPACE_GTT |
+			MI_RESTORE_INHIBIT;
+	}
+
 	*cs++ = MI_NOOP;
 	*cs++ = MI_SET_CONTEXT;
 	*cs++ = i915_ggtt_offset(rq->hw_context->state) | flags;
@@ -1565,31 +1650,28 @@ static int remap_l3(struct i915_request *rq, int slice)
 static int switch_context(struct i915_request *rq)
 {
 	struct intel_engine_cs *engine = rq->engine;
-	struct i915_gem_context *to_ctx = rq->gem_context;
-	struct i915_hw_ppgtt *to_mm =
-		to_ctx->ppgtt ?: rq->i915->mm.aliasing_ppgtt;
-	struct i915_gem_context *from_ctx = engine->legacy_active_context;
-	struct i915_hw_ppgtt *from_mm = engine->legacy_active_ppgtt;
+	struct i915_gem_context *ctx = rq->gem_context;
+	struct i915_hw_ppgtt *ppgtt = ctx->ppgtt ?: rq->i915->mm.aliasing_ppgtt;
+	unsigned int unwind_mm = 0;
 	u32 hw_flags = 0;
 	int ret, i;
 
 	lockdep_assert_held(&rq->i915->drm.struct_mutex);
 	GEM_BUG_ON(HAS_EXECLISTS(rq->i915));
 
-	if (to_mm != from_mm ||
-	    (to_mm && intel_engine_flag(engine) & to_mm->pd_dirty_rings)) {
-		trace_switch_mm(engine, to_ctx);
-		ret = to_mm->switch_mm(to_mm, rq);
+	if (ppgtt) {
+		ret = load_pd_dir(rq, ppgtt);
 		if (ret)
 			goto err;
 
-		to_mm->pd_dirty_rings &= ~intel_engine_flag(engine);
-		engine->legacy_active_ppgtt = to_mm;
-		hw_flags = MI_FORCE_RESTORE;
+		if (intel_engine_flag(engine) & ppgtt->pd_dirty_rings) {
+			unwind_mm = intel_engine_flag(engine);
+			ppgtt->pd_dirty_rings &= ~unwind_mm;
+			hw_flags = MI_FORCE_RESTORE;
+		}
 	}
 
-	if (rq->hw_context->state &&
-	    (to_ctx != from_ctx || hw_flags & MI_FORCE_RESTORE)) {
+	if (rq->hw_context->state) {
 		GEM_BUG_ON(engine->id != RCS);
 
 		/*
@@ -1599,35 +1681,38 @@ static int switch_context(struct i915_request *rq)
 		 * as nothing actually executes using the kernel context; it
 		 * is purely used for flushing user contexts.
 		 */
-		if (i915_gem_context_is_kernel(to_ctx))
+		if (i915_gem_context_is_kernel(ctx))
 			hw_flags = MI_RESTORE_INHIBIT;
 
 		ret = mi_set_context(rq, hw_flags);
 		if (ret)
 			goto err_mm;
+	}
 
-		engine->legacy_active_context = to_ctx;
+	if (ppgtt) {
+		ret = flush_pd_dir(rq);
+		if (ret)
+			goto err_mm;
 	}
 
-	if (to_ctx->remap_slice) {
+	if (ctx->remap_slice) {
 		for (i = 0; i < MAX_L3_SLICES; i++) {
-			if (!(to_ctx->remap_slice & BIT(i)))
+			if (!(ctx->remap_slice & BIT(i)))
 				continue;
 
 			ret = remap_l3(rq, i);
 			if (ret)
-				goto err_ctx;
+				goto err_mm;
 		}
 
-		to_ctx->remap_slice = 0;
+		ctx->remap_slice = 0;
 	}
 
 	return 0;
 
-err_ctx:
-	engine->legacy_active_context = from_ctx;
 err_mm:
-	engine->legacy_active_ppgtt = from_mm;
+	if (unwind_mm)
+		ppgtt->pd_dirty_rings |= unwind_mm;
 err:
 	return ret;
 }
@@ -2130,16 +2215,6 @@ int intel_init_render_ring_buffer(struct intel_engine_cs *engine)
 	if (ret)
 		return ret;
 
-	if (INTEL_GEN(dev_priv) >= 6) {
-		ret = intel_engine_create_scratch(engine, PAGE_SIZE);
-		if (ret)
-			return ret;
-	} else if (HAS_BROKEN_CS_TLB(dev_priv)) {
-		ret = intel_engine_create_scratch(engine, I830_WA_SIZE);
-		if (ret)
-			return ret;
-	}
-
 	return 0;
 }
 

+ 22 - 17
drivers/gpu/drm/i915/intel_ringbuffer.h

@@ -122,7 +122,8 @@ struct intel_engine_hangcheck {
 	int deadlock;
 	struct intel_instdone instdone;
 	struct i915_request *active_request;
-	bool stalled;
+	bool stalled:1;
+	bool wedged:1;
 };
 
 struct intel_ring {
@@ -557,15 +558,6 @@ struct intel_engine_cs {
 	 */
 	struct intel_context *last_retired_context;
 
-	/* We track the current MI_SET_CONTEXT in order to eliminate
-	 * redudant context switches. This presumes that requests are not
-	 * reordered! Or when they are the tracking is updated along with
-	 * the emission of individual requests into the legacy command
-	 * stream (ring).
-	 */
-	struct i915_gem_context *legacy_active_context;
-	struct i915_hw_ppgtt *legacy_active_ppgtt;
-
 	/* status_notifier: list of callbacks for context-switch changes */
 	struct atomic_notifier_head context_status_notifier;
 
@@ -814,6 +806,19 @@ static inline u32 intel_ring_wrap(const struct intel_ring *ring, u32 pos)
 	return pos & (ring->size - 1);
 }
 
+static inline bool
+intel_ring_offset_valid(const struct intel_ring *ring,
+			unsigned int pos)
+{
+	if (pos & -ring->size) /* must be strictly within the ring */
+		return false;
+
+	if (!IS_ALIGNED(pos, 8)) /* must be qword aligned */
+		return false;
+
+	return true;
+}
+
 static inline u32 intel_ring_offset(const struct i915_request *rq, void *addr)
 {
 	/* Don't write ring->size (equivalent to 0) as that hangs some GPUs. */
@@ -825,12 +830,7 @@ static inline u32 intel_ring_offset(const struct i915_request *rq, void *addr)
 static inline void
 assert_ring_tail_valid(const struct intel_ring *ring, unsigned int tail)
 {
-	/* We could combine these into a single tail operation, but keeping
-	 * them as seperate tests will help identify the cause should one
-	 * ever fire.
-	 */
-	GEM_BUG_ON(!IS_ALIGNED(tail, 8));
-	GEM_BUG_ON(tail >= ring->size);
+	GEM_BUG_ON(!intel_ring_offset_valid(ring, tail));
 
 	/*
 	 * "Ring Buffer Use"
@@ -870,9 +870,12 @@ void intel_engine_init_global_seqno(struct intel_engine_cs *engine, u32 seqno);
 
 void intel_engine_setup_common(struct intel_engine_cs *engine);
 int intel_engine_init_common(struct intel_engine_cs *engine);
-int intel_engine_create_scratch(struct intel_engine_cs *engine, int size);
 void intel_engine_cleanup_common(struct intel_engine_cs *engine);
 
+int intel_engine_create_scratch(struct intel_engine_cs *engine,
+				unsigned int size);
+void intel_engine_cleanup_scratch(struct intel_engine_cs *engine);
+
 int intel_init_render_ring_buffer(struct intel_engine_cs *engine);
 int intel_init_bsd_ring_buffer(struct intel_engine_cs *engine);
 int intel_init_blt_ring_buffer(struct intel_engine_cs *engine);
@@ -1049,6 +1052,8 @@ gen8_emit_ggtt_write(u32 *cs, u32 value, u32 gtt_offset)
 	return cs;
 }
 
+void intel_engines_sanitize(struct drm_i915_private *i915);
+
 bool intel_engine_is_idle(struct intel_engine_cs *engine);
 bool intel_engines_are_idle(struct drm_i915_private *dev_priv);
 

+ 2 - 0
drivers/gpu/drm/i915/intel_runtime_pm.c

@@ -128,6 +128,8 @@ intel_display_power_domain_str(enum intel_display_power_domain domain)
 		return "AUX_C";
 	case POWER_DOMAIN_AUX_D:
 		return "AUX_D";
+	case POWER_DOMAIN_AUX_E:
+		return "AUX_E";
 	case POWER_DOMAIN_AUX_F:
 		return "AUX_F";
 	case POWER_DOMAIN_AUX_IO_A:

+ 6 - 0
drivers/gpu/drm/i915/intel_sdvo.c

@@ -1160,6 +1160,9 @@ static bool intel_sdvo_compute_config(struct intel_encoder *encoder,
 							   adjusted_mode);
 	}
 
+	if (adjusted_mode->flags & DRM_MODE_FLAG_DBLSCAN)
+		return false;
+
 	/*
 	 * Make the CRTC code factor in the SDVO pixel multiplier.  The
 	 * SDVO device will factor out the multiplier during mode_set.
@@ -1631,6 +1634,9 @@ intel_sdvo_mode_valid(struct drm_connector *connector,
 	struct intel_sdvo *intel_sdvo = intel_attached_sdvo(connector);
 	int max_dotclk = to_i915(connector->dev)->max_dotclk_freq;
 
+	if (mode->flags & DRM_MODE_FLAG_DBLSCAN)
+		return MODE_NO_DBLESCAN;
+
 	if (intel_sdvo->pixel_clock_min > mode->clock)
 		return MODE_CLOCK_LOW;
 

+ 61 - 3
drivers/gpu/drm/i915/intel_sprite.c

@@ -1101,6 +1101,37 @@ intel_check_sprite_plane(struct intel_plane *plane,
 	return 0;
 }
 
+static bool has_dst_key_in_primary_plane(struct drm_i915_private *dev_priv)
+{
+	return INTEL_GEN(dev_priv) >= 9;
+}
+
+static void intel_plane_set_ckey(struct intel_plane_state *plane_state,
+				 const struct drm_intel_sprite_colorkey *set)
+{
+	struct intel_plane *plane = to_intel_plane(plane_state->base.plane);
+	struct drm_i915_private *dev_priv = to_i915(plane->base.dev);
+	struct drm_intel_sprite_colorkey *key = &plane_state->ckey;
+
+	*key = *set;
+
+	/*
+	 * We want src key enabled on the
+	 * sprite and not on the primary.
+	 */
+	if (plane->id == PLANE_PRIMARY &&
+	    set->flags & I915_SET_COLORKEY_SOURCE)
+		key->flags = 0;
+
+	/*
+	 * On SKL+ we want dst key enabled on
+	 * the primary and not on the sprite.
+	 */
+	if (INTEL_GEN(dev_priv) >= 9 && plane->id != PLANE_PRIMARY &&
+	    set->flags & I915_SET_COLORKEY_DESTINATION)
+		key->flags = 0;
+}
+
 int intel_sprite_set_colorkey_ioctl(struct drm_device *dev, void *data,
 				    struct drm_file *file_priv)
 {
@@ -1130,6 +1161,16 @@ int intel_sprite_set_colorkey_ioctl(struct drm_device *dev, void *data,
 	if (!plane || plane->type != DRM_PLANE_TYPE_OVERLAY)
 		return -ENOENT;
 
+	/*
+	 * On SKL+ only plane 2 can do destination keying against plane 1.
+	 * Also multiple planes can't do destination keying on the same
+	 * pipe simultaneously.
+	 */
+	if (INTEL_GEN(dev_priv) >= 9 &&
+	    to_intel_plane(plane)->id >= PLANE_SPRITE1 &&
+	    set->flags & I915_SET_COLORKEY_DESTINATION)
+		return -EINVAL;
+
 	drm_modeset_acquire_init(&ctx, 0);
 
 	state = drm_atomic_state_alloc(plane->dev);
@@ -1142,11 +1183,28 @@ int intel_sprite_set_colorkey_ioctl(struct drm_device *dev, void *data,
 	while (1) {
 		plane_state = drm_atomic_get_plane_state(state, plane);
 		ret = PTR_ERR_OR_ZERO(plane_state);
-		if (!ret) {
-			to_intel_plane_state(plane_state)->ckey = *set;
-			ret = drm_atomic_commit(state);
+		if (!ret)
+			intel_plane_set_ckey(to_intel_plane_state(plane_state), set);
+
+		/*
+		 * On some platforms we have to configure
+		 * the dst colorkey on the primary plane.
+		 */
+		if (!ret && has_dst_key_in_primary_plane(dev_priv)) {
+			struct intel_crtc *crtc =
+				intel_get_crtc_for_pipe(dev_priv,
+							to_intel_plane(plane)->pipe);
+
+			plane_state = drm_atomic_get_plane_state(state,
+								 crtc->base.primary);
+			ret = PTR_ERR_OR_ZERO(plane_state);
+			if (!ret)
+				intel_plane_set_ckey(to_intel_plane_state(plane_state), set);
 		}
 
+		if (!ret)
+			ret = drm_atomic_commit(state);
+
 		if (ret != -EDEADLK)
 			break;
 

+ 10 - 2
drivers/gpu/drm/i915/intel_tv.c

@@ -846,6 +846,9 @@ intel_tv_mode_valid(struct drm_connector *connector,
 	const struct tv_mode *tv_mode = intel_tv_mode_find(connector->state);
 	int max_dotclk = to_i915(connector->dev)->max_dotclk_freq;
 
+	if (mode->flags & DRM_MODE_FLAG_DBLSCAN)
+		return MODE_NO_DBLESCAN;
+
 	if (mode->clock > max_dotclk)
 		return MODE_CLOCK_HIGH;
 
@@ -873,16 +876,21 @@ intel_tv_compute_config(struct intel_encoder *encoder,
 			struct drm_connector_state *conn_state)
 {
 	const struct tv_mode *tv_mode = intel_tv_mode_find(conn_state);
+	struct drm_display_mode *adjusted_mode =
+		&pipe_config->base.adjusted_mode;
 
 	if (!tv_mode)
 		return false;
 
-	pipe_config->base.adjusted_mode.crtc_clock = tv_mode->clock;
+	if (adjusted_mode->flags & DRM_MODE_FLAG_DBLSCAN)
+		return false;
+
+	adjusted_mode->crtc_clock = tv_mode->clock;
 	DRM_DEBUG_KMS("forcing bpc to 8 for TV\n");
 	pipe_config->pipe_bpp = 8*3;
 
 	/* TV has its own notion of sync and other mode flags, so clear them. */
-	pipe_config->base.adjusted_mode.flags = 0;
+	adjusted_mode->flags = 0;
 
 	/*
 	 * FIXME: We don't check whether the input mode is actually what we want

+ 1 - 1
drivers/gpu/drm/i915/intel_uc.c

@@ -207,7 +207,7 @@ void intel_uc_init_mmio(struct drm_i915_private *i915)
 
 static void guc_capture_load_err_log(struct intel_guc *guc)
 {
-	if (!guc->log.vma || !i915_modparams.guc_log_level)
+	if (!guc->log.vma || !intel_guc_log_get_level(&guc->log))
 		return;
 
 	if (!guc->load_err_log)

+ 11 - 5
drivers/gpu/drm/i915/intel_uncore.c

@@ -2093,21 +2093,25 @@ static int gen8_reset_engines(struct drm_i915_private *dev_priv,
 {
 	struct intel_engine_cs *engine;
 	unsigned int tmp;
+	int ret;
 
-	for_each_engine_masked(engine, dev_priv, engine_mask, tmp)
-		if (gen8_reset_engine_start(engine))
+	for_each_engine_masked(engine, dev_priv, engine_mask, tmp) {
+		if (gen8_reset_engine_start(engine)) {
+			ret = -EIO;
 			goto not_ready;
+		}
+	}
 
 	if (INTEL_GEN(dev_priv) >= 11)
-		return gen11_reset_engines(dev_priv, engine_mask);
+		ret = gen11_reset_engines(dev_priv, engine_mask);
 	else
-		return gen6_reset_engines(dev_priv, engine_mask);
+		ret = gen6_reset_engines(dev_priv, engine_mask);
 
 not_ready:
 	for_each_engine_masked(engine, dev_priv, engine_mask, tmp)
 		gen8_reset_engine_cancel(engine);
 
-	return -EIO;
+	return ret;
 }
 
 typedef int (*reset_func)(struct drm_i915_private *, unsigned engine_mask);
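
The gen8_reset_engines() change above is purely about error propagation: a gen6/gen11 reset failure used to return directly, skipping the cancel loop for engines that had already been told to halt. The reworked control flow, sketched with hypothetical stand-ins for the per-engine loops:

	#include <errno.h>

	/* Sketch only: start_all()/reset_all()/cancel_all() stand in
	 * for the per-engine gen8 start/reset/cancel loops. */
	static int start_all(void) { return 0; }
	static int reset_all(void) { return 0; }
	static void cancel_all(void) { }

	static int reset_engines(void)
	{
		int ret;

		if (start_all()) {
			ret = -EIO;
			goto not_ready;
		}

		ret = reset_all();	/* may fail, but still falls through */

	not_ready:
		cancel_all();		/* always undo the halt requests */
		return ret;
	}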
@@ -2170,6 +2174,8 @@ int intel_gpu_reset(struct drm_i915_private *dev_priv, unsigned engine_mask)
 		 * Thus assume it is best to stop engines on all gens
 		 * where we have a gpu reset.
 		 *
+		 * WaKBLVECSSemaphoreWaitPoll:kbl (on ALL_ENGINES)
+		 *
 		 * WaMediaResetMainRingCleanup:ctg,elk (presumably)
 		 *
 		 * FIXME: Wa for more modern gens needs to be validated

+ 11 - 11
drivers/gpu/drm/i915/intel_uncore.h

@@ -67,21 +67,21 @@ struct intel_uncore_funcs {
 	void (*force_wake_put)(struct drm_i915_private *dev_priv,
 			       enum forcewake_domains domains);
 
-	uint8_t  (*mmio_readb)(struct drm_i915_private *dev_priv,
-			       i915_reg_t r, bool trace);
-	uint16_t (*mmio_readw)(struct drm_i915_private *dev_priv,
-			       i915_reg_t r, bool trace);
-	uint32_t (*mmio_readl)(struct drm_i915_private *dev_priv,
-			       i915_reg_t r, bool trace);
-	uint64_t (*mmio_readq)(struct drm_i915_private *dev_priv,
-			       i915_reg_t r, bool trace);
+	u8 (*mmio_readb)(struct drm_i915_private *dev_priv,
+			 i915_reg_t r, bool trace);
+	u16 (*mmio_readw)(struct drm_i915_private *dev_priv,
+			  i915_reg_t r, bool trace);
+	u32 (*mmio_readl)(struct drm_i915_private *dev_priv,
+			  i915_reg_t r, bool trace);
+	u64 (*mmio_readq)(struct drm_i915_private *dev_priv,
+			  i915_reg_t r, bool trace);
 
 	void (*mmio_writeb)(struct drm_i915_private *dev_priv,
-			    i915_reg_t r, uint8_t val, bool trace);
+			    i915_reg_t r, u8 val, bool trace);
 	void (*mmio_writew)(struct drm_i915_private *dev_priv,
-			    i915_reg_t r, uint16_t val, bool trace);
+			    i915_reg_t r, u16 val, bool trace);
 	void (*mmio_writel)(struct drm_i915_private *dev_priv,
-			    i915_reg_t r, uint32_t val, bool trace);
+			    i915_reg_t r, u32 val, bool trace);
 };
 
 struct intel_forcewake_range {

+ 4 - 2
drivers/gpu/drm/i915/intel_vbt_defs.h

@@ -420,7 +420,9 @@ struct child_device_config {
 	u16 extended_type;
 	u8 dvo_function;
 	u8 dp_usb_type_c:1;					/* 195 */
-	u8 flags2_reserved:7;					/* 195 */
+	u8 tbt:1;						/* 209 */
+	u8 flags2_reserved:2;					/* 195 */
+	u8 dp_port_trace_length:4;				/* 209 */
 	u8 dp_gpio_index;					/* 195 */
 	u16 dp_gpio_pin_num;					/* 195 */
 	u8 dp_iboost_level:4;					/* 196 */
@@ -454,7 +456,7 @@ struct bdb_general_definitions {
 	 * number = (block_size - sizeof(bdb_general_definitions))/
 	 *	     defs->child_dev_size;
 	 */
-	uint8_t devices[0];
+	u8 devices[0];
 } __packed;
 
 /* Mask for DRRS / Panel Channel / SSC / BLT control bits extraction */

+ 59 - 17
drivers/gpu/drm/i915/intel_workarounds.c

@@ -48,29 +48,58 @@
  * - Public functions to init or apply the given workaround type.
  */
 
-static int wa_add(struct drm_i915_private *dev_priv,
-		  i915_reg_t addr,
-		  const u32 mask, const u32 val)
+static void wa_add(struct drm_i915_private *i915,
+		   i915_reg_t reg, const u32 mask, const u32 val)
 {
-	const unsigned int idx = dev_priv->workarounds.count;
+	struct i915_workarounds *wa = &i915->workarounds;
+	unsigned int start = 0, end = wa->count;
+	unsigned int addr = i915_mmio_reg_offset(reg);
+	struct i915_wa_reg *r;
+
+	while (start < end) {
+		unsigned int mid = start + (end - start) / 2;
+
+		if (wa->reg[mid].addr < addr) {
+			start = mid + 1;
+		} else if (wa->reg[mid].addr > addr) {
+			end = mid;
+		} else {
+			r = &wa->reg[mid];
+
+			if ((mask & ~r->mask) == 0) {
+				DRM_ERROR("Discarding overwritten w/a for reg %04x (mask: %08x, value: %08x)\n",
+					  addr, r->mask, r->value);
+
+				r->value &= ~mask;
+			}
+
+			r->value |= val;
+			r->mask  |= mask;
+			return;
+		}
+	}
 
-	if (WARN_ON(idx >= I915_MAX_WA_REGS))
-		return -ENOSPC;
+	if (WARN_ON_ONCE(wa->count >= I915_MAX_WA_REGS)) {
+		DRM_ERROR("Dropping w/a for reg %04x (mask: %08x, value: %08x)\n",
+			  addr, mask, val);
+		return;
+	}
 
-	dev_priv->workarounds.reg[idx].addr = addr;
-	dev_priv->workarounds.reg[idx].value = val;
-	dev_priv->workarounds.reg[idx].mask = mask;
+	r = &wa->reg[wa->count++];
+	r->addr  = addr;
+	r->value = val;
+	r->mask  = mask;
 
-	dev_priv->workarounds.count++;
+	while (r-- > wa->reg) {
+		GEM_BUG_ON(r[0].addr == r[1].addr);
+		if (r[1].addr > r[0].addr)
+			break;
 
-	return 0;
+		swap(r[1], r[0]);
+	}
 }
 
-#define WA_REG(addr, mask, val) do { \
-		const int r = wa_add(dev_priv, (addr), (mask), (val)); \
-		if (r) \
-			return r; \
-	} while (0)
+#define WA_REG(addr, mask, val) wa_add(dev_priv, (addr), (mask), (val))
 
 #define WA_SET_BIT_MASKED(addr, mask) \
 	WA_REG(addr, (mask), _MASKED_BIT_ENABLE(mask))
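
The rewritten wa_add() above keeps workarounds.reg[] sorted by mmio offset, so two workarounds touching the same register merge into one entry (masks and values OR'd together) instead of emitting two conflicting LRI writes; the driver additionally warns when a later value would overwrite an earlier one. The same keep-sorted insert, as a stand-alone sketch without that diagnostic:

	struct wa { unsigned int addr, mask, value; };

	static void wa_insert(struct wa *tab, unsigned int *count,
			      unsigned int addr, unsigned int mask,
			      unsigned int value)
	{
		unsigned int start = 0, end = *count;

		/* binary search for an existing entry for this register */
		while (start < end) {
			unsigned int mid = start + (end - start) / 2;

			if (tab[mid].addr < addr) {
				start = mid + 1;
			} else if (tab[mid].addr > addr) {
				end = mid;
			} else {
				tab[mid].mask |= mask;	/* merge */
				tab[mid].value |= value;
				return;
			}
		}

		/* append, then bubble down into sorted position */
		tab[*count] = (struct wa){ addr, mask, value };
		for (unsigned int i = (*count)++; i > 0; i--) {
			if (tab[i - 1].addr < tab[i].addr)
				break;
			struct wa tmp = tab[i - 1];
			tab[i - 1] = tab[i];
			tab[i] = tmp;
		}
	}

Storing the offset as a plain u32 is also why intel_ctx_workarounds_emit() below can now write w->reg[i].addr directly instead of going through i915_mmio_reg_offset().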
@@ -540,7 +569,7 @@ int intel_ctx_workarounds_emit(struct i915_request *rq)
 
 	*cs++ = MI_LOAD_REGISTER_IMM(w->count);
 	for (i = 0; i < w->count; i++) {
-		*cs++ = i915_mmio_reg_offset(w->reg[i].addr);
+		*cs++ = w->reg[i].addr;
 		*cs++ = w->reg[i].value;
 	}
 	*cs++ = MI_NOOP;
@@ -666,6 +695,19 @@ static void kbl_gt_workarounds_apply(struct drm_i915_private *dev_priv)
 	I915_WRITE(GEN9_GAMT_ECO_REG_RW_IA,
 		   I915_READ(GEN9_GAMT_ECO_REG_RW_IA) |
 		   GAMT_ECO_ENABLE_IN_PLACE_DECOMPRESS);
+
+	/* WaKBLVECSSemaphoreWaitPoll:kbl */
+	if (IS_KBL_REVID(dev_priv, KBL_REVID_A0, KBL_REVID_E0)) {
+		struct intel_engine_cs *engine;
+		unsigned int tmp;
+
+		for_each_engine(engine, dev_priv, tmp) {
+			if (engine->id == RCS)
+				continue;
+
+			I915_WRITE(RING_SEMA_WAIT_POLL(engine->mmio_base), 1);
+		}
+	}
 }
 
 static void glk_gt_workarounds_apply(struct drm_i915_private *dev_priv)

+ 1 - 1
drivers/gpu/drm/i915/selftests/huge_pages.c

@@ -1003,7 +1003,7 @@ static int gpu_write(struct i915_vma *vma,
 	reservation_object_unlock(vma->resv);
 
 err_request:
-	__i915_request_add(rq, err == 0);
+	i915_request_add(rq);
 
 	return err;
 }

+ 2 - 2
drivers/gpu/drm/i915/selftests/i915_gem_coherency.c

@@ -199,7 +199,7 @@ static int gpu_set(struct drm_i915_gem_object *obj,
 
 	cs = intel_ring_begin(rq, 4);
 	if (IS_ERR(cs)) {
-		__i915_request_add(rq, false);
+		i915_request_add(rq);
 		i915_vma_unpin(vma);
 		return PTR_ERR(cs);
 	}
@@ -229,7 +229,7 @@ static int gpu_set(struct drm_i915_gem_object *obj,
 	reservation_object_add_excl_fence(obj->resv, &rq->fence);
 	reservation_object_unlock(obj->resv);
 
-	__i915_request_add(rq, true);
+	i915_request_add(rq);
 
 	return 0;
 }

+ 4 - 4
drivers/gpu/drm/i915/selftests/i915_gem_context.c

@@ -182,12 +182,12 @@ static int gpu_fill(struct drm_i915_gem_object *obj,
 	reservation_object_add_excl_fence(obj->resv, &rq->fence);
 	reservation_object_unlock(obj->resv);
 
-	__i915_request_add(rq, true);
+	i915_request_add(rq);
 
 	return 0;
 
 err_request:
-	__i915_request_add(rq, false);
+	i915_request_add(rq);
 err_batch:
 	i915_vma_unpin(batch);
 err_vma:
@@ -519,8 +519,8 @@ static int igt_switch_to_kernel_context(void *arg)
 	mutex_lock(&i915->drm.struct_mutex);
 	ctx = kernel_context(i915);
 	if (IS_ERR(ctx)) {
-		err = PTR_ERR(ctx);
-		goto out_unlock;
+		mutex_unlock(&i915->drm.struct_mutex);
+		return PTR_ERR(ctx);
 	}
 
 	/* First check idling each individual engine */

+ 8 - 10
drivers/gpu/drm/i915/selftests/i915_gem_gtt.c

@@ -135,21 +135,19 @@ static int igt_ppgtt_alloc(void *arg)
 	struct drm_i915_private *dev_priv = arg;
 	struct i915_hw_ppgtt *ppgtt;
 	u64 size, last;
-	int err;
+	int err = 0;
 
 	/* Allocate a ppgtt and try to fill the entire range */
 
 	if (!USES_PPGTT(dev_priv))
 		return 0;
 
-	ppgtt = kzalloc(sizeof(*ppgtt), GFP_KERNEL);
-	if (!ppgtt)
-		return -ENOMEM;
-
 	mutex_lock(&dev_priv->drm.struct_mutex);
-	err = __hw_ppgtt_init(ppgtt, dev_priv);
-	if (err)
-		goto err_ppgtt;
+	ppgtt = __hw_ppgtt_create(dev_priv);
+	if (IS_ERR(ppgtt)) {
+		err = PTR_ERR(ppgtt);
+		goto err_unlock;
+	}
 
 	if (!ppgtt->vm.allocate_va_range)
 		goto err_ppgtt_cleanup;
@@ -189,9 +187,9 @@ static int igt_ppgtt_alloc(void *arg)
 
 err_ppgtt_cleanup:
 	ppgtt->vm.cleanup(&ppgtt->vm);
-err_ppgtt:
-	mutex_unlock(&dev_priv->drm.struct_mutex);
 	kfree(ppgtt);
+err_unlock:
+	mutex_unlock(&dev_priv->drm.struct_mutex);
 	return err;
 }
 

+ 3 - 3
drivers/gpu/drm/i915/selftests/i915_request.c

@@ -342,9 +342,9 @@ static int live_nop_request(void *arg)
 	mutex_lock(&i915->drm.struct_mutex);
 
 	for_each_engine(engine, i915, id) {
-		IGT_TIMEOUT(end_time);
-		struct i915_request *request;
+		struct i915_request *request = NULL;
 		unsigned long n, prime;
+		IGT_TIMEOUT(end_time);
 		ktime_t times[2] = {};
 
 		err = begin_live_test(&t, i915, __func__, engine->name);
@@ -466,7 +466,7 @@ empty_request(struct intel_engine_cs *engine,
 		goto out_request;
 
 out_request:
-	__i915_request_add(request, err == 0);
+	i915_request_add(request);
 	return err ? ERR_PTR(err) : request;
 }
 

+ 8 - 8
drivers/gpu/drm/i915/selftests/intel_hangcheck.c

@@ -245,7 +245,7 @@ hang_create_request(struct hang *h, struct intel_engine_cs *engine)
 
 	err = emit_recurse_batch(h, rq);
 	if (err) {
-		__i915_request_add(rq, false);
+		i915_request_add(rq);
 		return ERR_PTR(err);
 	}
 
@@ -318,7 +318,7 @@ static int igt_hang_sanitycheck(void *arg)
 		*h.batch = MI_BATCH_BUFFER_END;
 		i915_gem_chipset_flush(i915);
 
-		__i915_request_add(rq, true);
+		i915_request_add(rq);
 
 		timeout = i915_request_wait(rq,
 					    I915_WAIT_LOCKED,
@@ -464,7 +464,7 @@ static int __igt_reset_engine(struct drm_i915_private *i915, bool active)
 				}
 
 				i915_request_get(rq);
-				__i915_request_add(rq, true);
+				i915_request_add(rq);
 				mutex_unlock(&i915->drm.struct_mutex);
 
 				if (!wait_until_running(&h, rq)) {
@@ -742,7 +742,7 @@ static int __igt_reset_engines(struct drm_i915_private *i915,
 				}
 
 				i915_request_get(rq);
-				__i915_request_add(rq, true);
+				i915_request_add(rq);
 				mutex_unlock(&i915->drm.struct_mutex);
 
 				if (!wait_until_running(&h, rq)) {
@@ -942,7 +942,7 @@ static int igt_wait_reset(void *arg)
 	}
 
 	i915_request_get(rq);
-	__i915_request_add(rq, true);
+	i915_request_add(rq);
 
 	if (!wait_until_running(&h, rq)) {
 		struct drm_printer p = drm_info_printer(i915->drm.dev);
@@ -1037,7 +1037,7 @@ static int igt_reset_queue(void *arg)
 		}
 
 		i915_request_get(prev);
-		__i915_request_add(prev, true);
+		i915_request_add(prev);
 
 		count = 0;
 		do {
@@ -1051,7 +1051,7 @@ static int igt_reset_queue(void *arg)
 			}
 
 			i915_request_get(rq);
-			__i915_request_add(rq, true);
+			i915_request_add(rq);
 
 			/*
 			 * XXX We don't handle resetting the kernel context
@@ -1184,7 +1184,7 @@ static int igt_handle_error(void *arg)
 	}
 
 	i915_request_get(rq);
-	__i915_request_add(rq, true);
+	i915_request_add(rq);
 
 	if (!wait_until_running(&h, rq)) {
 		struct drm_printer p = drm_info_printer(i915->drm.dev);

+ 1 - 1
drivers/gpu/drm/i915/selftests/intel_lrc.c

@@ -155,7 +155,7 @@ spinner_create_request(struct spinner *spin,
 
 	err = emit_recurse_batch(spin, rq, arbitration_command);
 	if (err) {
-		__i915_request_add(rq, false);
+		i915_request_add(rq);
 		return ERR_PTR(err);
 	}
 

+ 1 - 1
drivers/gpu/drm/i915/selftests/intel_workarounds.c

@@ -75,7 +75,7 @@ read_nonprivs(struct i915_gem_context *ctx, struct intel_engine_cs *engine)
 	i915_gem_object_get(result);
 	i915_gem_object_set_active_reference(result);
 
-	__i915_request_add(rq, true);
+	i915_request_add(rq);
 	i915_vma_unpin(vma);
 
 	return result;

+ 10 - 8
drivers/gpu/drm/i915/selftests/mock_gtt.c

@@ -80,12 +80,13 @@ mock_ppgtt(struct drm_i915_private *i915,
 	ppgtt->vm.clear_range = nop_clear_range;
 	ppgtt->vm.insert_page = mock_insert_page;
 	ppgtt->vm.insert_entries = mock_insert_entries;
-	ppgtt->vm.bind_vma = mock_bind_ppgtt;
-	ppgtt->vm.unbind_vma = mock_unbind_ppgtt;
-	ppgtt->vm.set_pages = ppgtt_set_pages;
-	ppgtt->vm.clear_pages = clear_pages;
 	ppgtt->vm.cleanup = mock_cleanup;
 
+	ppgtt->vm.vma_ops.bind_vma    = mock_bind_ppgtt;
+	ppgtt->vm.vma_ops.unbind_vma  = mock_unbind_ppgtt;
+	ppgtt->vm.vma_ops.set_pages   = ppgtt_set_pages;
+	ppgtt->vm.vma_ops.clear_pages = clear_pages;
+
 	return ppgtt;
 }
 
@@ -116,12 +117,13 @@ void mock_init_ggtt(struct drm_i915_private *i915)
 	ggtt->vm.clear_range = nop_clear_range;
 	ggtt->vm.insert_page = mock_insert_page;
 	ggtt->vm.insert_entries = mock_insert_entries;
-	ggtt->vm.bind_vma = mock_bind_ggtt;
-	ggtt->vm.unbind_vma = mock_unbind_ggtt;
-	ggtt->vm.set_pages = ggtt_set_pages;
-	ggtt->vm.clear_pages = clear_pages;
 	ggtt->vm.cleanup = mock_cleanup;
 
+	ggtt->vm.vma_ops.bind_vma    = mock_bind_ggtt;
+	ggtt->vm.vma_ops.unbind_vma  = mock_unbind_ggtt;
+	ggtt->vm.vma_ops.set_pages   = ggtt_set_pages;
+	ggtt->vm.vma_ops.clear_pages = clear_pages;
+
 	i915_address_space_init(&ggtt->vm, i915, "global");
 }
 

+ 25 - 12
include/drm/i915_pciids.h

@@ -349,7 +349,6 @@
 #define INTEL_KBL_GT2_IDS(info)	\
 	INTEL_VGA_DEVICE(0x5916, info), /* ULT GT2 */ \
 	INTEL_VGA_DEVICE(0x5917, info), /* Mobile GT2 */ \
-	INTEL_VGA_DEVICE(0x591C, info), /* ULX GT2 */ \
 	INTEL_VGA_DEVICE(0x5921, info), /* ULT GT2F */ \
 	INTEL_VGA_DEVICE(0x591E, info), /* ULX GT2 */ \
 	INTEL_VGA_DEVICE(0x5912, info), /* DT  GT2 */ \
@@ -365,11 +364,17 @@
 #define INTEL_KBL_GT4_IDS(info) \
 	INTEL_VGA_DEVICE(0x593B, info) /* Halo GT4 */
 
+/* AML/KBL Y GT2 */
+#define INTEL_AML_GT2_IDS(info) \
+	INTEL_VGA_DEVICE(0x591C, info),  /* ULX GT2 */ \
+	INTEL_VGA_DEVICE(0x87C0, info) /* ULX GT2 */
+
 #define INTEL_KBL_IDS(info) \
 	INTEL_KBL_GT1_IDS(info), \
 	INTEL_KBL_GT2_IDS(info), \
 	INTEL_KBL_GT3_IDS(info), \
-	INTEL_KBL_GT4_IDS(info)
+	INTEL_KBL_GT4_IDS(info), \
+	INTEL_AML_GT2_IDS(info)
 
 /* CFL S */
 #define INTEL_CFL_S_GT1_IDS(info) \
@@ -388,32 +393,40 @@
 	INTEL_VGA_DEVICE(0x3E9B, info), /* Halo GT2 */ \
 	INTEL_VGA_DEVICE(0x3E94, info)  /* Halo GT2 */
 
-/* CFL U GT1 */
-#define INTEL_CFL_U_GT1_IDS(info) \
-	INTEL_VGA_DEVICE(0x3EA1, info), \
-	INTEL_VGA_DEVICE(0x3EA4, info)
-
 /* CFL U GT2 */
 #define INTEL_CFL_U_GT2_IDS(info) \
-	INTEL_VGA_DEVICE(0x3EA0, info), \
-	INTEL_VGA_DEVICE(0x3EA3, info), \
 	INTEL_VGA_DEVICE(0x3EA9, info)
 
 /* CFL U GT3 */
 #define INTEL_CFL_U_GT3_IDS(info) \
-	INTEL_VGA_DEVICE(0x3EA2, info), /* ULT GT3 */ \
 	INTEL_VGA_DEVICE(0x3EA5, info), /* ULT GT3 */ \
 	INTEL_VGA_DEVICE(0x3EA6, info), /* ULT GT3 */ \
 	INTEL_VGA_DEVICE(0x3EA7, info), /* ULT GT3 */ \
 	INTEL_VGA_DEVICE(0x3EA8, info)  /* ULT GT3 */
 
+/* WHL/CFL U GT1 */
+#define INTEL_WHL_U_GT1_IDS(info) \
+	INTEL_VGA_DEVICE(0x3EA1, info)
+
+/* WHL/CFL U GT2 */
+#define INTEL_WHL_U_GT2_IDS(info) \
+	INTEL_VGA_DEVICE(0x3EA0, info)
+
+/* WHL/CFL U GT3 */
+#define INTEL_WHL_U_GT3_IDS(info) \
+	INTEL_VGA_DEVICE(0x3EA2, info), \
+	INTEL_VGA_DEVICE(0x3EA3, info), \
+	INTEL_VGA_DEVICE(0x3EA4, info)
+
 #define INTEL_CFL_IDS(info)	   \
 	INTEL_CFL_S_GT1_IDS(info), \
 	INTEL_CFL_S_GT2_IDS(info), \
 	INTEL_CFL_H_GT2_IDS(info), \
-	INTEL_CFL_U_GT1_IDS(info), \
 	INTEL_CFL_U_GT2_IDS(info), \
-	INTEL_CFL_U_GT3_IDS(info)
+	INTEL_CFL_U_GT3_IDS(info), \
+	INTEL_WHL_U_GT1_IDS(info), \
+	INTEL_WHL_U_GT2_IDS(info), \
+	INTEL_WHL_U_GT3_IDS(info)
 
 /* CNL */
 #define INTEL_CNL_IDS(info) \