
drm/amdgpu: Skip drm_sched_entity related ops for KIQ ring.

Following change 75fbed2 we never initialize or use the GPU
scheduler for KIQ, so the KIQ ring has to be skipped when iterating
over amdgpu_ctx's scheduler entities.

Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Andrey Grodzovsky, 7 years ago
Commit 20b6b7885d
1 file changed, 18 insertions(+), 3 deletions(-)
    drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
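For reference, the guard added below is a plain pointer comparison against the device's single KIQ ring, repeated in each loop over adev->rings[]. A minimal sketch of that skip-by-pointer pattern follows; it uses made-up userspace stand-ins (struct ring, struct device, fini_entity) rather than the real amdgpu structures, and is only meant to illustrate the idea:

#include <stdio.h>

/* Simplified stand-ins for amdgpu_ring / amdgpu_device; purely illustrative. */
struct ring {
	const char *name;
};

struct device {
	struct ring storage[3];
	struct ring *rings[3];   /* analogous to adev->rings[] */
	struct ring *kiq_ring;   /* analogous to &adev->gfx.kiq.ring */
	int num_rings;
};

static void fini_entity(struct ring *r)
{
	/* Placeholder for drm_sched_entity_fini()/cleanup() on that ring. */
	printf("finalizing scheduler entity on %s\n", r->name);
}

int main(void)
{
	struct device dev = {
		.storage = { { "gfx" }, { "kiq" }, { "sdma0" } },
		.num_rings = 3,
	};

	for (int i = 0; i < dev.num_rings; i++)
		dev.rings[i] = &dev.storage[i];
	dev.kiq_ring = &dev.storage[1];

	for (int i = 0; i < dev.num_rings; i++) {
		/* KIQ never had a scheduler entity set up, so skip it by pointer identity. */
		if (dev.rings[i] == dev.kiq_ring)
			continue;
		fini_entity(dev.rings[i]);
	}
	return 0;
}

In the actual patch the same comparison guards drm_sched_entity_fini(), drm_sched_entity_do_release() and drm_sched_entity_cleanup(), since no scheduler entity is ever created for the KIQ ring.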

drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c (+18, -3)

@@ -173,9 +173,14 @@ static void amdgpu_ctx_do_release(struct kref *ref)
 
 	ctx = container_of(ref, struct amdgpu_ctx, refcount);
 
-	for (i = 0; i < ctx->adev->num_rings; i++)
+	for (i = 0; i < ctx->adev->num_rings; i++) {
+
+		if (ctx->adev->rings[i] == &ctx->adev->gfx.kiq.ring)
+			continue;
+
 		drm_sched_entity_fini(&ctx->adev->rings[i]->sched,
 			&ctx->rings[i].entity);
+	}
 
 	amdgpu_ctx_fini(ref);
 }
@@ -452,12 +457,17 @@ void amdgpu_ctx_mgr_entity_fini(struct amdgpu_ctx_mgr *mgr)
 		if (!ctx->adev)
 			return;
 
-		for (i = 0; i < ctx->adev->num_rings; i++)
+		for (i = 0; i < ctx->adev->num_rings; i++) {
+
+			if (ctx->adev->rings[i] == &ctx->adev->gfx.kiq.ring)
+				continue;
+
 			if (kref_read(&ctx->refcount) == 1)
 				drm_sched_entity_do_release(&ctx->adev->rings[i]->sched,
 						  &ctx->rings[i].entity);
 			else
 				DRM_ERROR("ctx %p is still alive\n", ctx);
+		}
 	}
 }
 
@@ -474,12 +484,17 @@ void amdgpu_ctx_mgr_entity_cleanup(struct amdgpu_ctx_mgr *mgr)
 		if (!ctx->adev)
 			return;
 
-		for (i = 0; i < ctx->adev->num_rings; i++)
+		for (i = 0; i < ctx->adev->num_rings; i++) {
+
+			if (ctx->adev->rings[i] == &ctx->adev->gfx.kiq.ring)
+				continue;
+
 			if (kref_read(&ctx->refcount) == 1)
 				drm_sched_entity_cleanup(&ctx->adev->rings[i]->sched,
 					&ctx->rings[i].entity);
 			else
 				DRM_ERROR("ctx %p is still alive\n", ctx);
+		}
 	}
 }