
fm10k: comment next_vf_mbx flow

Add a header comment explaining why we have the somewhat crazy mailbox
flow. This flow is necessary because it prevents the PF<->SM mailbox from
being flooded by VF messages, each of which normally triggers a request
from the PF on the SM mailbox. This helps prevent the case where we see a
PF mailbox timeout.

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Acked-by: Matthew Vick <matthew.vick@intel.com>
Tested-by: Krishneil Singh <krishneil.k.singh@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Jeff Kirsher committed 10 years ago
Commit: ada2411d4f

1 file changed, 11 insertions(+), 0 deletions(-)

 drivers/net/ethernet/intel/fm10k/fm10k_iov.c | 11 +++++++++++

@@ -113,6 +113,13 @@ s32 fm10k_iov_mbx(struct fm10k_intfc *interface)
 	/* lock the mailbox for transmit and receive */
 	fm10k_mbx_lock(interface);
 
+	/* Most VF messages sent to the PF cause the PF to respond by
+	 * requesting from the SM mailbox. This means that too many VF
+	 * messages processed at once could cause a mailbox timeout on the PF.
+	 * To prevent this, store a pointer to the next VF mbx to process. Use
+	 * that as the start of the loop so that we don't starve whichever VF
+	 * got ignored on the previous run.
+	 */
 process_mbx:
 	for (i = iov_data->next_vf_mbx ? : iov_data->num_vfs; i--;) {
 		struct fm10k_vf_info *vf_info = &iov_data->vf_info[i];
@@ -137,6 +144,10 @@ process_mbx:
 		mbx->ops.process(hw, mbx);
 	}
 
+	/* if we stopped processing mailboxes early, update next_vf_mbx.
+	 * Otherwise, reset next_vf_mbx, and restart loop so that we process
+	 * the remaining mailboxes we skipped at the start.
+	 */
 	if (i >= 0) {
 		iov_data->next_vf_mbx = i + 1;
 	} else if (iov_data->next_vf_mbx) {
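
The hunk above is cut off before the resume branch completes, but the new comment already
states its intent: if we stopped early, remember where; otherwise clear next_vf_mbx and
restart the loop to cover the mailboxes skipped at the start. Below is a minimal,
self-contained userspace sketch of that resume pattern, not the driver's actual code: the
demo_iov_data structure, the pending[] array and the 'budget' limit are hypothetical
stand-ins (in the driver the loop stops early when the SM mailbox cannot accept more
transmit traffic, a check not shown in the hunk). The driver also writes the loop start as
"iov_data->next_vf_mbx ? : iov_data->num_vfs" using GCC's conditional shorthand; the sketch
spells the ternary out for portability.

/* demo_resume_mbx.c - sketch of the next_vf_mbx resume pattern (illustrative only) */
#include <stdio.h>

#define DEMO_NUM_VFS 8

struct demo_iov_data {
	unsigned int next_vf_mbx;	/* index to resume from on the next pass */
	unsigned int num_vfs;
	int pending[DEMO_NUM_VFS];	/* nonzero if VF i has a message queued */
};

/* Service at most 'budget' VF mailboxes per call, remembering where we
 * stopped so the VFs skipped this time are serviced first next time.
 */
static void demo_process_vf_mbx(struct demo_iov_data *iov, int budget)
{
	int i;

process_mbx:
	/* start below the saved index if one is set, otherwise from the top */
	for (i = iov->next_vf_mbx ? iov->next_vf_mbx : iov->num_vfs; i--;) {
		if (!budget)
			break;	/* stand-in for "no room left to transmit" */

		if (iov->pending[i]) {
			printf("servicing VF %d\n", i);
			iov->pending[i] = 0;
			budget--;
		}
	}

	if (i >= 0) {
		/* stopped early: resume at this VF on the next call */
		iov->next_vf_mbx = i + 1;
	} else if (iov->next_vf_mbx) {
		/* finished the lower VFs; clear the marker and sweep the
		 * ones that were skipped at the start of this pass
		 */
		iov->next_vf_mbx = 0;
		goto process_mbx;
	}
}

int main(void)
{
	struct demo_iov_data iov = { .num_vfs = DEMO_NUM_VFS };
	int vf;

	for (vf = 0; vf < DEMO_NUM_VFS; vf++)
		iov.pending[vf] = 1;

	demo_process_vf_mbx(&iov, 3);	/* services VFs 7, 6, 5; saves next_vf_mbx = 5 */
	demo_process_vf_mbx(&iov, 8);	/* resumes at VF 4, then sweeps the rest */
	return 0;
}

The key property the sketch demonstrates is the same one the new comment describes: a VF
whose mailbox was skipped because processing stopped early is the first one serviced on the
following pass, so no VF is starved while the PF<->SM mailbox is protected from flooding.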