
drm/i915/gtt: Trust the uncached store to flush wcb

Not all architectures guarantee that an uncached read will
flush the write combining buffer, so flushing it explicitly
is recommended [1].

However, we know the architecture we are operating on and can
avoid the wmb() as the UC store will flush the WCB [2].

Omit the wmb() before the invalidate as redundant.

v2: squash combining and removal (Chris)
v3: remove obsolete comments about posting reads (Chris)

References: http://yarchive.net/comp/linux/write_combining.html [1]
References: http://download.intel.com/design/PentiumII/applnots/24442201.pdf [2]
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Matthew Auld <matthew.auld@intel.com>
Signed-off-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20180508124154.14586-1-mika.kuoppala@linux.intel.com
Mika Kuoppala committed 7 years ago · commit ca6acc2525
1 changed file, 8 insertions(+), 10 deletions(-)

drivers/gpu/drm/i915/i915_gem_gtt.c | +8 -10

@@ -110,7 +110,8 @@ i915_get_ggtt_vma_pages(struct i915_vma *vma);
 
 static void gen6_ggtt_invalidate(struct drm_i915_private *dev_priv)
 {
-	/* Note that as an uncached mmio write, this should flush the
+	/*
+	 * Note that as an uncached mmio write, this will flush the
 	 * WCB of the writes into the GGTT before it triggers the invalidate.
 	 */
 	I915_WRITE(GFX_FLSH_CNTL_GEN6, GFX_FLSH_CNTL_EN);
@@ -2418,11 +2419,9 @@ static void gen8_ggtt_insert_entries(struct i915_address_space *vm,
 	for_each_sgt_dma(addr, sgt_iter, vma->pages)
 		gen8_set_pte(gtt_entries++, pte_encode | addr);
 
-	wmb();
-
-	/* This next bit makes the above posting read even more important. We
-	 * want to flush the TLBs only after we're certain all the PTE updates
-	 * have finished.
+	/*
+	 * We want to flush the TLBs only after we're certain all the PTE
+	 * updates have finished.
 	 */
 	ggtt->invalidate(vm->i915);
 }
@@ -2460,11 +2459,10 @@ static void gen6_ggtt_insert_entries(struct i915_address_space *vm,
 	dma_addr_t addr;
 	for_each_sgt_dma(addr, iter, vma->pages)
 		iowrite32(vm->pte_encode(addr, level, flags), &entries[i++]);
-	wmb();
 
-	/* This next bit makes the above posting read even more important. We
-	 * want to flush the TLBs only after we're certain all the PTE updates
-	 * have finished.
+	/*
+	 * We want to flush the TLBs only after we're certain all the PTE
+	 * updates have finished.
 	 */
 	ggtt->invalidate(vm->i915);
 }
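
A minimal, self-contained sketch of the ordering pattern the patch relies on is below. The names wc_pte_slot, uc_flush_reg and update_and_invalidate are hypothetical stand-ins for the WC-mapped GGTT PTE array and the uncached GFX_FLSH_CNTL_GEN6 register; the ordering argument assumes x86 semantics as described in [2], so treat this as an illustration, not driver code.

/*
 * Sketch only: hypothetical helpers standing in for the WC-mapped
 * GGTT PTE array and the uncached (UC) flush/invalidate register.
 */
#include <linux/io.h>
#include <linux/types.h>

static u32 __iomem *wc_pte_slot;	/* e.g. from ioremap_wc(): write-combined */
static u32 __iomem *uc_flush_reg;	/* e.g. from ioremap(): uncached */

static void update_and_invalidate(u32 pte)
{
	/* WC store: may linger in the CPU's write-combining buffer (WCB). */
	iowrite32(pte, wc_pte_slot);

	/*
	 * On x86 an uncached store is strongly ordered and drains any
	 * pending WC buffers before it is issued, so no explicit wmb()
	 * is needed here: the UC write below both flushes the WCB and
	 * triggers the invalidate.
	 */
	iowrite32(1, uc_flush_reg);
}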