
net: stmmac: use correct barrier between coherent memory and MMIO

The last memory barrier in stmmac_xmit()/stmmac_tso_xmit() is placed
between a coherent memory write and an MMIO write:

The own bit is written in First Desc (TSO: MSS desc or First Desc).
<barrier>
The DMA engine is started by a write to the tx desc tail pointer /
enable dma transmission register, i.e. an MMIO write.
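
Sketched in code (illustrative only; the descriptor field, OWN bit macro and
tail pointer register below are placeholder names, not the exact stmmac
identifiers):

	/* coherent DMA memory: hand the first descriptor over to the hardware */
	first->ctrl |= cpu_to_le32(OWN_BIT);

	<barrier>	/* the barrier this patch changes */

	/* MMIO: start/resume the DMA engine via the tx tail pointer register */
	writel(tail_addr, ioaddr + TX_TAIL_PTR);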

This barrier cannot be a simple dma_wmb(), since dma_wmb() only
guarantees the ordering of writes to cache coherent DMA memory
with respect to other such writes.

To guarantee that the cache coherent memory writes have completed
before we attempt to write to the cache incoherent MMIO region,
we need to use the more heavyweight barrier wmb().
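
To illustrate the difference between the two barriers (again a sketch with
placeholder names, not the exact stmmac code):

	/* dma_wmb() is enough when both writes target cache coherent DMA
	 * memory, e.g. filling in a descriptor before flipping its own bit:
	 */
	desc->buf = cpu_to_le32(dma_addr);	/* coherent memory */
	dma_wmb();
	desc->ctrl |= cpu_to_le32(OWN_BIT);	/* coherent memory */

	/* Here the second write is MMIO, so the heavier wmb() is needed: */
	first->ctrl |= cpu_to_le32(OWN_BIT);	/* coherent memory */
	wmb();
	writel(tail_addr, ioaddr + TX_TAIL_PTR);	/* MMIO */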

Signed-off-by: Niklas Cassel <niklas.cassel@axis.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Niklas Cassel, 7 years ago
commit 95eb930a40
1 changed file with 2 additions and 2 deletions:
  drivers/net/ethernet/stmicro/stmmac/stmmac_main.c

drivers/net/ethernet/stmicro/stmmac/stmmac_main.c  (+2, -2)

@@ -2997,7 +2997,7 @@ static netdev_tx_t stmmac_tso_xmit(struct sk_buff *skb, struct net_device *dev)
 	 * descriptor and then barrier is needed to make sure that
 	 * all is coherent before granting the DMA engine.
 	 */
-	dma_wmb();
+	wmb();
 
 	if (netif_msg_pktdata(priv)) {
 		pr_info("%s: curr=%d dirty=%d f=%d, e=%d, f_p=%p, nfrags %d\n",
@@ -3221,7 +3221,7 @@ static netdev_tx_t stmmac_xmit(struct sk_buff *skb, struct net_device *dev)
 		 * descriptor and then barrier is needed to make sure that
 		 * all is coherent before granting the DMA engine.
 		 */
-		dma_wmb();
+		wmb();
 	}
 
 	netdev_tx_sent_queue(netdev_get_tx_queue(dev, queue), skb->len);