Merge branch 'mvneta-hwbm'

Gregory CLEMENT says:

====================
API set for HW Buffer management

This is the sixth version of the API set for HW Buffer management (that was
initially submitted here:
http://thread.gmane.org/gmane.linux.kernel/2125152).

This version is just a rebase onto the latest net-next. I also added
the Tested-by tag from Sebastian Careba: "The patch set applies
successfully and it works well, no more Samba issues any longer".

For the record, in the previous versions I made the following changes:
v4 -> v5:
- Added a field with the size of the pool's buffer. It then allows
  fixing some misused sizes in the mvneta_bm code when using the new
  framework.

- Added a new patch from Marcin for sram, allowing non-bufferable
  access to the memory to be requested. It is needed for the hardware
  buffer management of the mvneta.

- Fixed the build issue reported by the 0-day builder when building the
  drivers as modules.

v3 -> v4
- Fix build issue when HWBM is not selected

v2 -> v3
- Make a HWBM and a SWBM version of the mvneta_rx() function in order
  to reduce the conditional code. Kept a condition inside mvneta_poll()
  because specializing this function would have meant duplicating 95%
  of the code.

- Put back the register_netdev() call at the end of the mvneta_probe()
  function. In order to have a unique ID for each port, just used a
  global variable in the driver.

- Added a fix from Marcin in the "net: mvneta: bm: add support for
  hardware buffer management" patch: "when dropping packets, only
  buffer pointers passed from BM to descriptors have to be returned to
  the pool. In submitted version after closing the port and
  mvneta_rxq_deinit(), it was very likely that a lot of fake buffers
  are added to the pool, because all descriptors took part in
  iteration."

- Removed the select MVNETA_BM from the Kconfig; this leaves users the
  choice of whether or not to use it.

v1 -> v2
- The hardware buffer management helpers are no longer built by default
  and now depend on a hidden config symbol which has to be selected
  by the driver if needed
- The hwbm_pool_refill() and hwbm_pool_add() now receive a gfp_t as
  argument, allowing the caller to specify the flags it needs (a minimal
  usage sketch follows this list).
- buf_num is now tested to ensure there is no wrapping
- A spinlock has been added to protect the hwbm_pool_add() function in
  SMP or irq context.
- Used pr_warn instead of pr_debug in case of errors.
- Fixed the mvneta implementation by returning the buffer to the pool
  at various places instead of ignoring it.
- Squashed "bus: mvenus-mbus: Fix size test for
   mvebu_mbus_get_dram_win_info" into "bus: mvebu-mbus: provide api for
   obtaining IO and DRAM window information".
- Added my Signed-off-by on all the patches as submitter of the series.
- Renamed the dts patches with the pattern "ARM: dts: platform:"
- Removed the patch "ARM: mvebu: enable SRAM support in
  mvebu_v7_defconfig" from this series, as it has already been applied.
- Modified the order of the patches.
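
For reviewers' convenience, here is a minimal sketch of how a driver is
meant to use these helpers. It is based on the calls made in the mvneta
changes below and on <net/hwbm.h>; my_construct() and my_pool_setup()
are made-up names for illustration and are not part of the series:

#include <linux/skbuff.h>
#include <net/hwbm.h>

/* Illustrative only: my_construct()/my_pool_setup() are made-up names. */

/* Called by the framework for each buffer it allocates; the driver maps
 * the buffer for DMA and hands it over to its hardware pool.
 */
static int my_construct(struct hwbm_pool *hwbm_pool, void *buf)
{
	/* dma_map_single() the buffer, then write its address to the HW pool */
	return 0;
}

static int my_pool_setup(struct hwbm_pool *hwbm_pool, int pkt_size)
{
	int num;

	hwbm_pool->size = 2048;
	hwbm_pool->frag_size = SKB_DATA_ALIGN(pkt_size) +
			SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
	hwbm_pool->construct = my_construct;

	/* The caller picks the GFP flags; hwbm_pool_add() itself is
	 * protected by a spinlock for SMP/irq context (see above).
	 */
	num = hwbm_pool_add(hwbm_pool, hwbm_pool->size, GFP_KERNEL);
	if (num != hwbm_pool->size)
		return -ENOMEM;

	/* On the RX path a consumed buffer is put back with:
	 *	hwbm_pool_refill(hwbm_pool, GFP_ATOMIC);
	 */
	return 0;
}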

To ease testing, the branch mvneta-BM-framework-v6 is
available at git@github.com:MISL-EBU-System-SW/mainline-public.git.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller authored 9 years ago
commit c9214f50a2

+ 17 - 2
Documentation/devicetree/bindings/net/marvell-armada-370-neta.txt

@@ -18,15 +18,30 @@ Optional properties:
   "core" for core clock and "bus" for the optional bus clock.
 
 
+Optional properties (valid only for Armada XP/38x):
+
+- buffer-manager: a phandle to a buffer manager node. Please refer to
+  Documentation/devicetree/bindings/net/marvell-neta-bm.txt
+- bm,pool-long: ID of a pool, that will accept all packets of a size
+  higher than 'short' pool's threshold (if set) and up to MTU value.
+  Obligatory, when the port is supposed to use hardware
+  buffer management.
+- bm,pool-short: ID of a pool, that will be used for accepting
+  packets of a size lower than given threshold. If not set, the port
+  will use a single 'long' pool for all packets, as defined above.
+
 Example:
 
-ethernet@d0070000 {
+ethernet@70000 {
 	compatible = "marvell,armada-370-neta";
-	reg = <0xd0070000 0x2500>;
+	reg = <0x70000 0x2500>;
 	interrupts = <8>;
 	clocks = <&gate_clk 4>;
 	tx-csum-limit = <9800>
 	status = "okay";
 	phy = <&phy0>;
 	phy-mode = "rgmii-id";
+	buffer-manager = <&bm>;
+	bm,pool-long = <0>;
+	bm,pool-short = <1>;
 };

+ 49 - 0
Documentation/devicetree/bindings/net/marvell-neta-bm.txt

@@ -0,0 +1,49 @@
+* Marvell Armada 380/XP Buffer Manager driver (BM)
+
+Required properties:
+
+- compatible: should be "marvell,armada-380-neta-bm".
+- reg: address and length of the register set for the device.
+- clocks: a pointer to the reference clock for this device.
+- internal-mem: a phandle to BM internal SRAM definition.
+
+Optional properties (port):
+
+- pool<0 : 3>,capacity: size of external buffer pointers' ring maintained
+  in DRAM. Can be set for each pool (id 0 : 3) separately. The value has
+  to be chosen between 128 and 16352 and it also has to be aligned to 32.
+  Otherwise the driver would adjust a given number or choose default if
+  not set.
+- pool<0 : 3>,pkt-size: maximum size of a packet accepted by a given buffer
+  pointers' pool (id 0 : 3). It will be taken into consideration only when pool
+  type is 'short'. For 'long' ones it would be overridden by port's MTU.
+  If not set a driver will choose a default value.
+
+In order to see how to hook the BM to a given ethernet port, please
+refer to Documentation/devicetree/bindings/net/marvell-armada-370-neta.txt.
+
+Example:
+
+- main node:
+
+bm: bm@c8000 {
+	compatible = "marvell,armada-380-neta-bm";
+	reg = <0xc8000 0xac>;
+	clocks = <&gateclk 13>;
+	internal-mem = <&bm_bppi>;
+	status = "okay";
+	pool2,capacity = <4096>;
+	pool1,pkt-size = <512>;
+};
+
+- internal SRAM node:
+
+bm_bppi: bm-bppi {
+	compatible = "mmio-sram";
+	reg = <MBUS_ID(0x0c, 0x04) 0 0x100000>;
+	ranges = <0 MBUS_ID(0x0c, 0x04) 0 0x100000>;
+	#address-cells = <1>;
+	#size-cells = <1>;
+	clocks = <&gateclk 13>;
+	status = "okay";
+};

+ 5 - 0
Documentation/devicetree/bindings/sram/sram.txt

@@ -25,6 +25,11 @@ Required properties in the sram node:
 - ranges : standard definition, should translate from local addresses
            within the sram to bus addresses
 
+Optional properties in the sram node:
+
+- no-memory-wc : the flag indicating, that SRAM memory region has not to
+                 be remapped as write combining. WC is used by default.
+
 Required properties in the area nodes:
 
 - reg : iomem address range, relative to the SRAM range
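
For illustration only (this node and its addresses are made up, not part
of this series), an SRAM node opting out of write combining would look
like:

	sram: sram@40000 {
		/* example only: addresses are made up */
		compatible = "mmio-sram";
		reg = <0x40000 0x1000>;
		#address-cells = <1>;
		#size-cells = <1>;
		ranges = <0 0x40000 0x1000>;
		no-memory-wc;
	};

The armada-38x.dtsi and armada-xp.dtsi hunks below set it on the BM
internal SRAM (bm-bppi), which the buffer manager hardware needs to
access without write combining.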

+ 19 - 1
arch/arm/boot/dts/armada-385-db-ap.dts

@@ -61,7 +61,8 @@
 		ranges = <MBUS_ID(0xf0, 0x01) 0 0xf1000000 0x100000
 			  MBUS_ID(0x01, 0x1d) 0 0xfff00000 0x100000
 			  MBUS_ID(0x09, 0x19) 0 0xf1100000 0x10000
-			  MBUS_ID(0x09, 0x15) 0 0xf1110000 0x10000>;
+			  MBUS_ID(0x09, 0x15) 0 0xf1110000 0x10000
+			  MBUS_ID(0x0c, 0x04) 0 0xf1200000 0x100000>;
 
 		internal-regs {
 			spi1: spi@10680 {
@@ -138,12 +139,18 @@
 				status = "okay";
 				phy = <&phy2>;
 				phy-mode = "sgmii";
+				buffer-manager = <&bm>;
+				bm,pool-long = <1>;
+				bm,pool-short = <3>;
 			};
 
 			ethernet@34000 {
 				status = "okay";
 				phy = <&phy1>;
 				phy-mode = "sgmii";
+				buffer-manager = <&bm>;
+				bm,pool-long = <2>;
+				bm,pool-short = <3>;
 			};
 
 			ethernet@70000 {
@@ -157,6 +164,13 @@
 				status = "okay";
 				phy = <&phy0>;
 				phy-mode = "rgmii-id";
+				buffer-manager = <&bm>;
+				bm,pool-long = <0>;
+				bm,pool-short = <3>;
+			};
+
+			bm@c8000 {
+				status = "okay";
 			};
 
 			nfc: flash@d0000 {
@@ -178,6 +192,10 @@
 			};
 		};
 
+		bm-bppi {
+			status = "okay";
+		};
+
 		pcie-controller {
 			status = "okay";
 

+ 6 - 0
arch/arm/boot/dts/armada-388-clearfog.dts

@@ -78,6 +78,9 @@
 		internal-regs {
 			ethernet@30000 {
 				phy-mode = "sgmii";
+				buffer-manager = <&bm>;
+				bm,pool-long = <2>;
+				bm,pool-short = <1>;
 				status = "okay";
 
 				fixed-link {
@@ -88,6 +91,9 @@
 
 			ethernet@34000 {
 				phy-mode = "sgmii";
+				buffer-manager = <&bm>;
+				bm,pool-long = <3>;
+				bm,pool-short = <1>;
 				status = "okay";
 
 				fixed-link {

+ 16 - 1
arch/arm/boot/dts/armada-388-db.dts

@@ -66,7 +66,8 @@
 		ranges = <MBUS_ID(0xf0, 0x01) 0 0xf1000000 0x100000
 			  MBUS_ID(0x01, 0x1d) 0 0xfff00000 0x100000
 			  MBUS_ID(0x09, 0x19) 0 0xf1100000 0x10000
-			  MBUS_ID(0x09, 0x15) 0 0xf1110000 0x10000>;
+			  MBUS_ID(0x09, 0x15) 0 0xf1110000 0x10000
+			  MBUS_ID(0x0c, 0x04) 0 0xf1200000 0x100000>;
 
 		internal-regs {
 			spi@10600 {
@@ -99,6 +100,9 @@
 				status = "okay";
 				phy = <&phy1>;
 				phy-mode = "rgmii-id";
+				buffer-manager = <&bm>;
+				bm,pool-long = <2>;
+				bm,pool-short = <3>;
 			};
 
 			usb@58000 {
@@ -109,6 +113,9 @@
 				status = "okay";
 				phy = <&phy0>;
 				phy-mode = "rgmii-id";
+				buffer-manager = <&bm>;
+				bm,pool-long = <0>;
+				bm,pool-short = <1>;
 			};
 
 			mdio@72004 {
@@ -129,6 +136,10 @@
 				status = "okay";
 			};
 
+			bm@c8000 {
+				status = "okay";
+			};
+
 			flash@d0000 {
 				status = "okay";
 				num-cs = <1>;
@@ -169,6 +180,10 @@
 			};
 		};
 
+		bm-bppi {
+			status = "okay";
+		};
+
 		pcie-controller {
 			status = "okay";
 			/*

+ 16 - 1
arch/arm/boot/dts/armada-388-gp.dts

@@ -60,7 +60,8 @@
 		ranges = <MBUS_ID(0xf0, 0x01) 0 0xf1000000 0x100000
 			  MBUS_ID(0x01, 0x1d) 0 0xfff00000 0x100000
 			  MBUS_ID(0x09, 0x19) 0 0xf1100000 0x10000
-			  MBUS_ID(0x09, 0x15) 0 0xf1110000 0x10000>;
+			  MBUS_ID(0x09, 0x15) 0 0xf1110000 0x10000
+			  MBUS_ID(0x0c, 0x04) 0 0xf1200000 0x100000>;
 
 		internal-regs {
 			spi@10600 {
@@ -133,6 +134,9 @@
 				status = "okay";
 				phy = <&phy1>;
 				phy-mode = "rgmii-id";
+				buffer-manager = <&bm>;
+				bm,pool-long = <2>;
+				bm,pool-short = <3>;
 			};
 
 			/* CON4 */
@@ -152,6 +156,9 @@
 				status = "okay";
 				phy = <&phy0>;
 				phy-mode = "rgmii-id";
+				buffer-manager = <&bm>;
+				bm,pool-long = <0>;
+				bm,pool-short = <1>;
 			};
 
 
@@ -186,6 +193,10 @@
 				};
 			};
 
+			bm@c8000 {
+				status = "okay";
+			};
+
 			sata@e0000 {
 				pinctrl-names = "default";
 				pinctrl-0 = <&sata2_pins>, <&sata3_pins>;
@@ -240,6 +251,10 @@
 			};
 		};
 
+		bm-bppi {
+			status = "okay";
+		};
+
 		pcie-controller {
 			status = "okay";
 			/*

+ 14 - 1
arch/arm/boot/dts/armada-38x-solidrun-microsom.dtsi

@@ -58,7 +58,8 @@
 		ranges = <MBUS_ID(0xf0, 0x01) 0 0xf1000000 0x100000
 			  MBUS_ID(0x01, 0x1d) 0 0xfff00000 0x100000
 			  MBUS_ID(0x09, 0x19) 0 0xf1100000 0x10000
-			  MBUS_ID(0x09, 0x15) 0 0xf1110000 0x10000>;
+			  MBUS_ID(0x09, 0x15) 0 0xf1110000 0x10000
+			  MBUS_ID(0x0c, 0x04) 0 0xf1200000 0x100000>;
 
 		internal-regs {
 			ethernet@70000 {
@@ -66,6 +67,9 @@
 				pinctrl-names = "default";
 				phy = <&phy_dedicated>;
 				phy-mode = "rgmii-id";
+				buffer-manager = <&bm>;
+				bm,pool-long = <0>;
+				bm,pool-short = <1>;
 				status = "okay";
 			};
 
@@ -110,6 +114,15 @@
 				pinctrl-names = "default";
 				status = "okay";
 			};
+
+			bm@c8000 {
+				status = "okay";
+			};
 		};
+
+		bm-bppi {
+			status = "okay";
+		};
+
 	};
 };

+ 19 - 0
arch/arm/boot/dts/armada-38x.dtsi

@@ -540,6 +540,14 @@
 				status = "disabled";
 			};
 
+			bm: bm@c8000 {
+				compatible = "marvell,armada-380-neta-bm";
+				reg = <0xc8000 0xac>;
+				clocks = <&gateclk 13>;
+				internal-mem = <&bm_bppi>;
+				status = "disabled";
+			};
+
 			sata@e0000 {
 				compatible = "marvell,armada-380-ahci";
 				reg = <0xe0000 0x2000>;
@@ -618,6 +626,17 @@
 			#size-cells = <1>;
 			ranges = <0 MBUS_ID(0x09, 0x15) 0 0x800>;
 		};
+
+		bm_bppi: bm-bppi {
+			compatible = "mmio-sram";
+			reg = <MBUS_ID(0x0c, 0x04) 0 0x100000>;
+			ranges = <0 MBUS_ID(0x0c, 0x04) 0 0x100000>;
+			#address-cells = <1>;
+			#size-cells = <1>;
+			clocks = <&gateclk 13>;
+			no-memory-wc;
+			status = "disabled";
+		};
 	};
 
 	clocks {

+ 18 - 1
arch/arm/boot/dts/armada-xp-db.dts

@@ -77,7 +77,8 @@
 			  MBUS_ID(0x01, 0x1d) 0 0 0xfff00000 0x100000
 			  MBUS_ID(0x01, 0x2f) 0 0 0xf0000000 0x1000000
 			  MBUS_ID(0x09, 0x09) 0 0 0xf8100000 0x10000
-			  MBUS_ID(0x09, 0x05) 0 0 0xf8110000 0x10000>;
+			  MBUS_ID(0x09, 0x05) 0 0 0xf8110000 0x10000
+			  MBUS_ID(0x0c, 0x04) 0 0 0xf1200000 0x100000>;
 
 		devbus-bootcs {
 			status = "okay";
@@ -181,21 +182,33 @@
 				status = "okay";
 				phy = <&phy0>;
 				phy-mode = "rgmii-id";
+				buffer-manager = <&bm>;
+				bm,pool-long = <0>;
 			};
 			ethernet@74000 {
 				status = "okay";
 				phy = <&phy1>;
 				phy-mode = "rgmii-id";
+				buffer-manager = <&bm>;
+				bm,pool-long = <1>;
 			};
 			ethernet@30000 {
 				status = "okay";
 				phy = <&phy2>;
 				phy-mode = "sgmii";
+				buffer-manager = <&bm>;
+				bm,pool-long = <2>;
 			};
 			ethernet@34000 {
 				status = "okay";
 				phy = <&phy3>;
 				phy-mode = "sgmii";
+				buffer-manager = <&bm>;
+				bm,pool-long = <3>;
+			};
+
+			bm@c0000 {
+				status = "okay";
 			};
 
 			mvsdio@d4000 {
@@ -230,5 +243,9 @@
 				};
 			};
 		};
+
+		bm-bppi {
+			status = "okay";
+		};
 	};
 };

+ 18 - 1
arch/arm/boot/dts/armada-xp-gp.dts

@@ -96,7 +96,8 @@
 			  MBUS_ID(0x01, 0x1d) 0 0 0xfff00000 0x100000
 			  MBUS_ID(0x01, 0x2f) 0 0 0xf0000000 0x1000000
 			  MBUS_ID(0x09, 0x09) 0 0 0xf8100000 0x10000
-			  MBUS_ID(0x09, 0x05) 0 0 0xf8110000 0x10000>;
+			  MBUS_ID(0x09, 0x05) 0 0 0xf8110000 0x10000
+			  MBUS_ID(0x0c, 0x04) 0 0 0xf1200000 0x100000>;
 
 		devbus-bootcs {
 			status = "okay";
@@ -196,21 +197,29 @@
 				status = "okay";
 				phy = <&phy0>;
 				phy-mode = "qsgmii";
+				buffer-manager = <&bm>;
+				bm,pool-long = <0>;
 			};
 			ethernet@74000 {
 				status = "okay";
 				phy = <&phy1>;
 				phy-mode = "qsgmii";
+				buffer-manager = <&bm>;
+				bm,pool-long = <1>;
 			};
 			ethernet@30000 {
 				status = "okay";
 				phy = <&phy2>;
 				phy-mode = "qsgmii";
+				buffer-manager = <&bm>;
+				bm,pool-long = <2>;
 			};
 			ethernet@34000 {
 				status = "okay";
 				phy = <&phy3>;
 				phy-mode = "qsgmii";
+				buffer-manager = <&bm>;
+				bm,pool-long = <3>;
 			};
 
 			/* Front-side USB slot */
@@ -235,6 +244,10 @@
 				};
 			};
 
+			bm@c0000 {
+				status = "okay";
+			};
+
 			nand@d0000 {
 				status = "okay";
 				num-cs = <1>;
@@ -243,5 +256,9 @@
 				nand-on-flash-bbt;
 			};
 		};
+
+		bm-bppi {
+			status = "okay";
+		};
 	};
 };

+ 18 - 1
arch/arm/boot/dts/armada-xp-openblocks-ax3-4.dts

@@ -67,7 +67,8 @@
 			  MBUS_ID(0x01, 0x1d) 0 0 0xfff00000 0x100000
 			  MBUS_ID(0x01, 0x2f) 0 0 0xf0000000 0x8000000
 			  MBUS_ID(0x09, 0x09) 0 0 0xf8100000 0x10000
-			  MBUS_ID(0x09, 0x05) 0 0 0xf8110000 0x10000>;
+			  MBUS_ID(0x09, 0x05) 0 0 0xf8110000 0x10000
+			  MBUS_ID(0x0c, 0x04) 0 0 0xd1200000 0x100000>;
 
 		devbus-bootcs {
 			status = "okay";
@@ -176,21 +177,29 @@
 				status = "okay";
 				phy = <&phy0>;
 				phy-mode = "sgmii";
+				buffer-manager = <&bm>;
+				bm,pool-long = <0>;
 			};
 			ethernet@74000 {
 				status = "okay";
 				phy = <&phy1>;
 				phy-mode = "sgmii";
+				buffer-manager = <&bm>;
+				bm,pool-long = <1>;
 			};
 			ethernet@30000 {
 				status = "okay";
 				phy = <&phy2>;
 				phy-mode = "sgmii";
+				buffer-manager = <&bm>;
+				bm,pool-long = <2>;
 			};
 			ethernet@34000 {
 				status = "okay";
 				phy = <&phy3>;
 				phy-mode = "sgmii";
+				buffer-manager = <&bm>;
+				bm,pool-long = <3>;
 			};
 			i2c@11000 {
 				status = "okay";
@@ -219,6 +228,14 @@
 			usb@51000 {
 				status = "okay";
 			};
+
+			bm@c0000 {
+				status = "okay";
+			};
+		};
+
+		bm-bppi {
+			status = "okay";
 		};
 	};
 };

+ 19 - 0
arch/arm/boot/dts/armada-xp.dtsi

@@ -253,6 +253,14 @@
 				marvell,crypto-sram-size = <0x800>;
 			};
 
+			bm: bm@c0000 {
+				compatible = "marvell,armada-380-neta-bm";
+				reg = <0xc0000 0xac>;
+				clocks = <&gateclk 13>;
+				internal-mem = <&bm_bppi>;
+				status = "disabled";
+			};
+
 			xor@f0900 {
 				compatible = "marvell,orion-xor";
 				reg = <0xF0900 0x100
@@ -291,6 +299,17 @@
 			#size-cells = <1>;
 			ranges = <0 MBUS_ID(0x09, 0x05) 0 0x800>;
 		};
+
+		bm_bppi: bm-bppi {
+			compatible = "mmio-sram";
+			reg = <MBUS_ID(0x0c, 0x04) 0 0x100000>;
+			ranges = <0 MBUS_ID(0x0c, 0x04) 0 0x100000>;
+			#address-cells = <1>;
+			#size-cells = <1>;
+			clocks = <&gateclk 13>;
+			no-memory-wc;
+			status = "disabled";
+		};
 	};
 
 	clocks {

+ 52 - 0
drivers/bus/mvebu-mbus.c

@@ -948,6 +948,58 @@ void mvebu_mbus_get_pcie_io_aperture(struct resource *res)
 	*res = mbus_state.pcie_io_aperture;
 }
 
+int mvebu_mbus_get_dram_win_info(phys_addr_t phyaddr, u8 *target, u8 *attr)
+{
+	const struct mbus_dram_target_info *dram;
+	int i;
+
+	/* Get dram info */
+	dram = mv_mbus_dram_info();
+	if (!dram) {
+		pr_err("missing DRAM information\n");
+		return -ENODEV;
+	}
+
+	/* Try to find matching DRAM window for phyaddr */
+	for (i = 0; i < dram->num_cs; i++) {
+		const struct mbus_dram_window *cs = dram->cs + i;
+
+		if (cs->base <= phyaddr &&
+			phyaddr <= (cs->base + cs->size - 1)) {
+			*target = dram->mbus_dram_target_id;
+			*attr = cs->mbus_attr;
+			return 0;
+		}
+	}
+
+	pr_err("invalid dram address 0x%x\n", phyaddr);
+	return -EINVAL;
+}
+EXPORT_SYMBOL_GPL(mvebu_mbus_get_dram_win_info);
+
+int mvebu_mbus_get_io_win_info(phys_addr_t phyaddr, u32 *size, u8 *target,
+			       u8 *attr)
+{
+	int win;
+
+	for (win = 0; win < mbus_state.soc->num_wins; win++) {
+		u64 wbase;
+		int enabled;
+
+		mvebu_mbus_read_window(&mbus_state, win, &enabled, &wbase,
+				       size, target, attr, NULL);
+
+		if (!enabled)
+			continue;
+
+		if (wbase <= phyaddr && phyaddr <= wbase + *size)
+			return win;
+	}
+
+	return -EINVAL;
+}
+EXPORT_SYMBOL_GPL(mvebu_mbus_get_io_win_info);
+
 static __init int mvebu_mbus_debugfs_init(void)
 {
 	struct mvebu_mbus_state *s = &mbus_state;

+ 4 - 1
drivers/misc/sram.c

@@ -360,7 +360,10 @@ static int sram_probe(struct platform_device *pdev)
 		return -EBUSY;
 	}
 
-	sram->virt_base = devm_ioremap_wc(sram->dev, res->start, size);
+	if (of_property_read_bool(pdev->dev.of_node, "no-memory-wc"))
+		sram->virt_base = devm_ioremap(sram->dev, res->start, size);
+	else
+		sram->virt_base = devm_ioremap_wc(sram->dev, res->start, size);
 	if (IS_ERR(sram->virt_base))
 		return PTR_ERR(sram->virt_base);
 

+ 14 - 0
drivers/net/ethernet/marvell/Kconfig

@@ -40,6 +40,20 @@ config MVMDIO
 
 	  This driver is used by the MV643XX_ETH and MVNETA drivers.
 
+config MVNETA_BM
+	tristate "Marvell Armada 38x/XP network interface BM support"
+	depends on MVNETA
+	select HWBM
+	---help---
+	  This driver supports auxiliary block of the network
+	  interface units in the Marvell ARMADA XP and ARMADA 38x SoC
+	  family, which is called buffer manager.
+
+	  This driver, when enabled, strictly cooperates with mvneta
+	  driver and is common for all network ports of the devices,
+	  even for Armada 370 SoC, which doesn't support hardware
+	  buffer management.
+
 config MVNETA
 	tristate "Marvell Armada 370/38x/XP network interface support"
 	depends on PLAT_ORION

+ 1 - 0
drivers/net/ethernet/marvell/Makefile

@@ -4,6 +4,7 @@
 
 obj-$(CONFIG_MVMDIO) += mvmdio.o
 obj-$(CONFIG_MV643XX_ETH) += mv643xx_eth.o
+obj-$(CONFIG_MVNETA_BM) += mvneta_bm.o
 obj-$(CONFIG_MVNETA) += mvneta.o
 obj-$(CONFIG_MVPP2) += mvpp2.o
 obj-$(CONFIG_PXA168_ETH) += pxa168_eth.o

+ 472 - 37
drivers/net/ethernet/marvell/mvneta.c

@@ -30,6 +30,8 @@
 #include <linux/phy.h>
 #include <linux/platform_device.h>
 #include <linux/skbuff.h>
+#include <net/hwbm.h>
+#include "mvneta_bm.h"
 #include <net/ip.h>
 #include <net/ipv6.h>
 #include <net/tso.h>
@@ -37,6 +39,10 @@
 /* Registers */
 #define MVNETA_RXQ_CONFIG_REG(q)                (0x1400 + ((q) << 2))
 #define      MVNETA_RXQ_HW_BUF_ALLOC            BIT(0)
+#define      MVNETA_RXQ_SHORT_POOL_ID_SHIFT	4
+#define      MVNETA_RXQ_SHORT_POOL_ID_MASK	0x30
+#define      MVNETA_RXQ_LONG_POOL_ID_SHIFT	6
+#define      MVNETA_RXQ_LONG_POOL_ID_MASK	0xc0
 #define      MVNETA_RXQ_PKT_OFFSET_ALL_MASK     (0xf    << 8)
 #define      MVNETA_RXQ_PKT_OFFSET_MASK(offs)   ((offs) << 8)
 #define MVNETA_RXQ_THRESHOLD_REG(q)             (0x14c0 + ((q) << 2))
@@ -50,6 +56,9 @@
 #define MVNETA_RXQ_STATUS_UPDATE_REG(q)         (0x1500 + ((q) << 2))
 #define      MVNETA_RXQ_ADD_NON_OCCUPIED_SHIFT  16
 #define      MVNETA_RXQ_ADD_NON_OCCUPIED_MAX    255
+#define MVNETA_PORT_POOL_BUFFER_SZ_REG(pool)	(0x1700 + ((pool) << 2))
+#define      MVNETA_PORT_POOL_BUFFER_SZ_SHIFT	3
+#define      MVNETA_PORT_POOL_BUFFER_SZ_MASK	0xfff8
 #define MVNETA_PORT_RX_RESET                    0x1cc0
 #define      MVNETA_PORT_RX_DMA_RESET           BIT(0)
 #define MVNETA_PHY_ADDR                         0x2000
@@ -107,6 +116,7 @@
 #define MVNETA_GMAC_CLOCK_DIVIDER                0x24f4
 #define      MVNETA_GMAC_1MS_CLOCK_ENABLE        BIT(31)
 #define MVNETA_ACC_MODE                          0x2500
+#define MVNETA_BM_ADDRESS                        0x2504
 #define MVNETA_CPU_MAP(cpu)                      (0x2540 + ((cpu) << 2))
 #define      MVNETA_CPU_RXQ_ACCESS_ALL_MASK      0x000000ff
 #define      MVNETA_CPU_TXQ_ACCESS_ALL_MASK      0x0000ff00
@@ -253,7 +263,10 @@
 #define MVNETA_CPU_D_CACHE_LINE_SIZE    32
 #define MVNETA_TX_CSUM_DEF_SIZE		1600
 #define MVNETA_TX_CSUM_MAX_SIZE		9800
-#define MVNETA_ACC_MODE_EXT		1
+#define MVNETA_ACC_MODE_EXT1		1
+#define MVNETA_ACC_MODE_EXT2		2
+
+#define MVNETA_MAX_DECODE_WIN		6
 
 /* Timeout constants */
 #define MVNETA_TX_DISABLE_TIMEOUT_MSEC	1000
@@ -293,7 +306,8 @@
 	((addr >= txq->tso_hdrs_phys) && \
 	 (addr < txq->tso_hdrs_phys + txq->size * TSO_HEADER_SIZE))
 
-#define MVNETA_RX_BUF_SIZE(pkt_size)   ((pkt_size) + NET_SKB_PAD)
+#define MVNETA_RX_GET_BM_POOL_ID(rxd) \
+	(((rxd)->status & MVNETA_RXD_BM_POOL_MASK) >> MVNETA_RXD_BM_POOL_SHIFT)
 
 struct mvneta_statistic {
 	unsigned short offset;
@@ -359,6 +373,7 @@ struct mvneta_pcpu_port {
 };
 
 struct mvneta_port {
+	u8 id;
 	struct mvneta_pcpu_port __percpu	*ports;
 	struct mvneta_pcpu_stats __percpu	*stats;
 
@@ -394,6 +409,11 @@ struct mvneta_port {
 	unsigned int tx_csum_limit;
 	unsigned int use_inband_status:1;
 
+	struct mvneta_bm *bm_priv;
+	struct mvneta_bm_pool *pool_long;
+	struct mvneta_bm_pool *pool_short;
+	int bm_win_id;
+
 	u64 ethtool_stats[ARRAY_SIZE(mvneta_statistics)];
 
 	u32 indir[MVNETA_RSS_LU_TABLE_SIZE];
@@ -419,6 +439,8 @@ struct mvneta_port {
 #define MVNETA_TX_L4_CSUM_NOT	BIT(31)
 
 #define MVNETA_RXD_ERR_CRC		0x0
+#define MVNETA_RXD_BM_POOL_SHIFT	13
+#define MVNETA_RXD_BM_POOL_MASK		(BIT(13) | BIT(14))
 #define MVNETA_RXD_ERR_SUMMARY		BIT(16)
 #define MVNETA_RXD_ERR_OVERRUN		BIT(17)
 #define MVNETA_RXD_ERR_LEN		BIT(18)
@@ -563,6 +585,9 @@ static int rxq_def;
 
 static int rx_copybreak __read_mostly = 256;
 
+/* HW BM need that each port be identify by a unique ID */
+static int global_port_id;
+
 #define MVNETA_DRIVER_NAME "mvneta"
 #define MVNETA_DRIVER_VERSION "1.0"
 
@@ -829,6 +854,215 @@ static void mvneta_rxq_bm_disable(struct mvneta_port *pp,
 	mvreg_write(pp, MVNETA_RXQ_CONFIG_REG(rxq->id), val);
 }
 
+/* Enable buffer management (BM) */
+static void mvneta_rxq_bm_enable(struct mvneta_port *pp,
+				 struct mvneta_rx_queue *rxq)
+{
+	u32 val;
+
+	val = mvreg_read(pp, MVNETA_RXQ_CONFIG_REG(rxq->id));
+	val |= MVNETA_RXQ_HW_BUF_ALLOC;
+	mvreg_write(pp, MVNETA_RXQ_CONFIG_REG(rxq->id), val);
+}
+
+/* Notify HW about port's assignment of pool for bigger packets */
+static void mvneta_rxq_long_pool_set(struct mvneta_port *pp,
+				     struct mvneta_rx_queue *rxq)
+{
+	u32 val;
+
+	val = mvreg_read(pp, MVNETA_RXQ_CONFIG_REG(rxq->id));
+	val &= ~MVNETA_RXQ_LONG_POOL_ID_MASK;
+	val |= (pp->pool_long->id << MVNETA_RXQ_LONG_POOL_ID_SHIFT);
+
+	mvreg_write(pp, MVNETA_RXQ_CONFIG_REG(rxq->id), val);
+}
+
+/* Notify HW about port's assignment of pool for smaller packets */
+static void mvneta_rxq_short_pool_set(struct mvneta_port *pp,
+				      struct mvneta_rx_queue *rxq)
+{
+	u32 val;
+
+	val = mvreg_read(pp, MVNETA_RXQ_CONFIG_REG(rxq->id));
+	val &= ~MVNETA_RXQ_SHORT_POOL_ID_MASK;
+	val |= (pp->pool_short->id << MVNETA_RXQ_SHORT_POOL_ID_SHIFT);
+
+	mvreg_write(pp, MVNETA_RXQ_CONFIG_REG(rxq->id), val);
+}
+
+/* Set port's receive buffer size for assigned BM pool */
+static inline void mvneta_bm_pool_bufsize_set(struct mvneta_port *pp,
+					      int buf_size,
+					      u8 pool_id)
+{
+	u32 val;
+
+	if (!IS_ALIGNED(buf_size, 8)) {
+		dev_warn(pp->dev->dev.parent,
+			 "illegal buf_size value %d, round to %d\n",
+			 buf_size, ALIGN(buf_size, 8));
+		buf_size = ALIGN(buf_size, 8);
+	}
+
+	val = mvreg_read(pp, MVNETA_PORT_POOL_BUFFER_SZ_REG(pool_id));
+	val |= buf_size & MVNETA_PORT_POOL_BUFFER_SZ_MASK;
+	mvreg_write(pp, MVNETA_PORT_POOL_BUFFER_SZ_REG(pool_id), val);
+}
+
+/* Configure MBUS window in order to enable access BM internal SRAM */
+static int mvneta_mbus_io_win_set(struct mvneta_port *pp, u32 base, u32 wsize,
+				  u8 target, u8 attr)
+{
+	u32 win_enable, win_protect;
+	int i;
+
+	win_enable = mvreg_read(pp, MVNETA_BASE_ADDR_ENABLE);
+
+	if (pp->bm_win_id < 0) {
+		/* Find first not occupied window */
+		for (i = 0; i < MVNETA_MAX_DECODE_WIN; i++) {
+			if (win_enable & (1 << i)) {
+				pp->bm_win_id = i;
+				break;
+			}
+		}
+		if (i == MVNETA_MAX_DECODE_WIN)
+			return -ENOMEM;
+	} else {
+		i = pp->bm_win_id;
+	}
+
+	mvreg_write(pp, MVNETA_WIN_BASE(i), 0);
+	mvreg_write(pp, MVNETA_WIN_SIZE(i), 0);
+
+	if (i < 4)
+		mvreg_write(pp, MVNETA_WIN_REMAP(i), 0);
+
+	mvreg_write(pp, MVNETA_WIN_BASE(i), (base & 0xffff0000) |
+		    (attr << 8) | target);
+
+	mvreg_write(pp, MVNETA_WIN_SIZE(i), (wsize - 1) & 0xffff0000);
+
+	win_protect = mvreg_read(pp, MVNETA_ACCESS_PROTECT_ENABLE);
+	win_protect |= 3 << (2 * i);
+	mvreg_write(pp, MVNETA_ACCESS_PROTECT_ENABLE, win_protect);
+
+	win_enable &= ~(1 << i);
+	mvreg_write(pp, MVNETA_BASE_ADDR_ENABLE, win_enable);
+
+	return 0;
+}
+
+/* Assign and initialize pools for port. In case of fail
+ * buffer manager will remain disabled for current port.
+ */
+static int mvneta_bm_port_init(struct platform_device *pdev,
+			       struct mvneta_port *pp)
+{
+	struct device_node *dn = pdev->dev.of_node;
+	u32 long_pool_id, short_pool_id, wsize;
+	u8 target, attr;
+	int err;
+
+	/* Get BM window information */
+	err = mvebu_mbus_get_io_win_info(pp->bm_priv->bppi_phys_addr, &wsize,
+					 &target, &attr);
+	if (err < 0)
+		return err;
+
+	pp->bm_win_id = -1;
+
+	/* Open NETA -> BM window */
+	err = mvneta_mbus_io_win_set(pp, pp->bm_priv->bppi_phys_addr, wsize,
+				     target, attr);
+	if (err < 0) {
+		netdev_info(pp->dev, "fail to configure mbus window to BM\n");
+		return err;
+	}
+
+	if (of_property_read_u32(dn, "bm,pool-long", &long_pool_id)) {
+		netdev_info(pp->dev, "missing long pool id\n");
+		return -EINVAL;
+	}
+
+	/* Create port's long pool depending on mtu */
+	pp->pool_long = mvneta_bm_pool_use(pp->bm_priv, long_pool_id,
+					   MVNETA_BM_LONG, pp->id,
+					   MVNETA_RX_PKT_SIZE(pp->dev->mtu));
+	if (!pp->pool_long) {
+		netdev_info(pp->dev, "fail to obtain long pool for port\n");
+		return -ENOMEM;
+	}
+
+	pp->pool_long->port_map |= 1 << pp->id;
+
+	mvneta_bm_pool_bufsize_set(pp, pp->pool_long->buf_size,
+				   pp->pool_long->id);
+
+	/* If short pool id is not defined, assume using single pool */
+	if (of_property_read_u32(dn, "bm,pool-short", &short_pool_id))
+		short_pool_id = long_pool_id;
+
+	/* Create port's short pool */
+	pp->pool_short = mvneta_bm_pool_use(pp->bm_priv, short_pool_id,
+					    MVNETA_BM_SHORT, pp->id,
+					    MVNETA_BM_SHORT_PKT_SIZE);
+	if (!pp->pool_short) {
+		netdev_info(pp->dev, "fail to obtain short pool for port\n");
+		mvneta_bm_pool_destroy(pp->bm_priv, pp->pool_long, 1 << pp->id);
+		return -ENOMEM;
+	}
+
+	if (short_pool_id != long_pool_id) {
+		pp->pool_short->port_map |= 1 << pp->id;
+		mvneta_bm_pool_bufsize_set(pp, pp->pool_short->buf_size,
+					   pp->pool_short->id);
+	}
+
+	return 0;
+}
+
+/* Update settings of a pool for bigger packets */
+static void mvneta_bm_update_mtu(struct mvneta_port *pp, int mtu)
+{
+	struct mvneta_bm_pool *bm_pool = pp->pool_long;
+	struct hwbm_pool *hwbm_pool = &bm_pool->hwbm_pool;
+	int num;
+
+	/* Release all buffers from long pool */
+	mvneta_bm_bufs_free(pp->bm_priv, bm_pool, 1 << pp->id);
+	if (hwbm_pool->buf_num) {
+		WARN(1, "cannot free all buffers in pool %d\n",
+		     bm_pool->id);
+		goto bm_mtu_err;
+	}
+
+	bm_pool->pkt_size = MVNETA_RX_PKT_SIZE(mtu);
+	bm_pool->buf_size = MVNETA_RX_BUF_SIZE(bm_pool->pkt_size);
+	hwbm_pool->frag_size = SKB_DATA_ALIGN(sizeof(struct skb_shared_info)) +
+			SKB_DATA_ALIGN(MVNETA_RX_BUF_SIZE(bm_pool->pkt_size));
+
+	/* Fill entire long pool */
+	num = hwbm_pool_add(hwbm_pool, hwbm_pool->size, GFP_ATOMIC);
+	if (num != hwbm_pool->size) {
+		WARN(1, "pool %d: %d of %d allocated\n",
+		     bm_pool->id, num, hwbm_pool->size);
+		goto bm_mtu_err;
+	}
+	mvneta_bm_pool_bufsize_set(pp, bm_pool->buf_size, bm_pool->id);
+
+	return;
+
+bm_mtu_err:
+	mvneta_bm_pool_destroy(pp->bm_priv, pp->pool_long, 1 << pp->id);
+	mvneta_bm_pool_destroy(pp->bm_priv, pp->pool_short, 1 << pp->id);
+
+	pp->bm_priv = NULL;
+	mvreg_write(pp, MVNETA_ACC_MODE, MVNETA_ACC_MODE_EXT1);
+	netdev_info(pp->dev, "fail to update MTU, fall back to software BM\n");
+}
+
 /* Start the Ethernet port RX and TX activity */
 static void mvneta_port_up(struct mvneta_port *pp)
 {
@@ -1149,9 +1383,17 @@ static void mvneta_defaults_set(struct mvneta_port *pp)
 	mvreg_write(pp, MVNETA_PORT_RX_RESET, 0);
 
 	/* Set Port Acceleration Mode */
-	val = MVNETA_ACC_MODE_EXT;
+	if (pp->bm_priv)
+		/* HW buffer management + legacy parser */
+		val = MVNETA_ACC_MODE_EXT2;
+	else
+		/* SW buffer management + legacy parser */
+		val = MVNETA_ACC_MODE_EXT1;
 	mvreg_write(pp, MVNETA_ACC_MODE, val);
 
+	if (pp->bm_priv)
+		mvreg_write(pp, MVNETA_BM_ADDRESS, pp->bm_priv->bppi_phys_addr);
+
 	/* Update val of portCfg register accordingly with all RxQueue types */
 	val = MVNETA_PORT_CONFIG_DEFL_VALUE(pp->rxq_def);
 	mvreg_write(pp, MVNETA_PORT_CONFIG, val);
@@ -1518,23 +1760,25 @@ static void mvneta_txq_done(struct mvneta_port *pp,
 	}
 }
 
-static void *mvneta_frag_alloc(const struct mvneta_port *pp)
+void *mvneta_frag_alloc(unsigned int frag_size)
 {
-	if (likely(pp->frag_size <= PAGE_SIZE))
-		return netdev_alloc_frag(pp->frag_size);
+	if (likely(frag_size <= PAGE_SIZE))
+		return netdev_alloc_frag(frag_size);
 	else
-		return kmalloc(pp->frag_size, GFP_ATOMIC);
+		return kmalloc(frag_size, GFP_ATOMIC);
 }
+EXPORT_SYMBOL_GPL(mvneta_frag_alloc);
 
-static void mvneta_frag_free(const struct mvneta_port *pp, void *data)
+void mvneta_frag_free(unsigned int frag_size, void *data)
 {
-	if (likely(pp->frag_size <= PAGE_SIZE))
+	if (likely(frag_size <= PAGE_SIZE))
 		skb_free_frag(data);
 	else
 		kfree(data);
 }
+EXPORT_SYMBOL_GPL(mvneta_frag_free);
 
-/* Refill processing */
+/* Refill processing for SW buffer management */
 static int mvneta_rx_refill(struct mvneta_port *pp,
 			    struct mvneta_rx_desc *rx_desc)
 
@@ -1542,7 +1786,7 @@ static int mvneta_rx_refill(struct mvneta_port *pp,
 	dma_addr_t phys_addr;
 	void *data;
 
-	data = mvneta_frag_alloc(pp);
+	data = mvneta_frag_alloc(pp->frag_size);
 	if (!data)
 		return -ENOMEM;
 
@@ -1550,7 +1794,7 @@ static int mvneta_rx_refill(struct mvneta_port *pp,
 				   MVNETA_RX_BUF_SIZE(pp->pkt_size),
 				   DMA_FROM_DEVICE);
 	if (unlikely(dma_mapping_error(pp->dev->dev.parent, phys_addr))) {
-		mvneta_frag_free(pp, data);
+		mvneta_frag_free(pp->frag_size, data);
 		return -ENOMEM;
 	}
 
@@ -1596,22 +1840,156 @@ static void mvneta_rxq_drop_pkts(struct mvneta_port *pp,
 	int rx_done, i;
 
 	rx_done = mvneta_rxq_busy_desc_num_get(pp, rxq);
+	if (rx_done)
+		mvneta_rxq_desc_num_update(pp, rxq, rx_done, rx_done);
+
+	if (pp->bm_priv) {
+		for (i = 0; i < rx_done; i++) {
+			struct mvneta_rx_desc *rx_desc =
+						  mvneta_rxq_next_desc_get(rxq);
+			u8 pool_id = MVNETA_RX_GET_BM_POOL_ID(rx_desc);
+			struct mvneta_bm_pool *bm_pool;
+
+			bm_pool = &pp->bm_priv->bm_pools[pool_id];
+			/* Return dropped buffer to the pool */
+			mvneta_bm_pool_put_bp(pp->bm_priv, bm_pool,
+					      rx_desc->buf_phys_addr);
+		}
+		return;
+	}
+
 	for (i = 0; i < rxq->size; i++) {
 		struct mvneta_rx_desc *rx_desc = rxq->descs + i;
 		void *data = (void *)rx_desc->buf_cookie;
 
 		dma_unmap_single(pp->dev->dev.parent, rx_desc->buf_phys_addr,
 				 MVNETA_RX_BUF_SIZE(pp->pkt_size), DMA_FROM_DEVICE);
-		mvneta_frag_free(pp, data);
+		mvneta_frag_free(pp->frag_size, data);
 	}
+}
 
-	if (rx_done)
-		mvneta_rxq_desc_num_update(pp, rxq, rx_done, rx_done);
+/* Main rx processing when using software buffer management */
+static int mvneta_rx_swbm(struct mvneta_port *pp, int rx_todo,
+			  struct mvneta_rx_queue *rxq)
+{
+	struct mvneta_pcpu_port *port = this_cpu_ptr(pp->ports);
+	struct net_device *dev = pp->dev;
+	int rx_done;
+	u32 rcvd_pkts = 0;
+	u32 rcvd_bytes = 0;
+
+	/* Get number of received packets */
+	rx_done = mvneta_rxq_busy_desc_num_get(pp, rxq);
+
+	if (rx_todo > rx_done)
+		rx_todo = rx_done;
+
+	rx_done = 0;
+
+	/* Fairness NAPI loop */
+	while (rx_done < rx_todo) {
+		struct mvneta_rx_desc *rx_desc = mvneta_rxq_next_desc_get(rxq);
+		struct sk_buff *skb;
+		unsigned char *data;
+		dma_addr_t phys_addr;
+		u32 rx_status, frag_size;
+		int rx_bytes, err;
+
+		rx_done++;
+		rx_status = rx_desc->status;
+		rx_bytes = rx_desc->data_size - (ETH_FCS_LEN + MVNETA_MH_SIZE);
+		data = (unsigned char *)rx_desc->buf_cookie;
+		phys_addr = rx_desc->buf_phys_addr;
+
+		if (!mvneta_rxq_desc_is_first_last(rx_status) ||
+		    (rx_status & MVNETA_RXD_ERR_SUMMARY)) {
+err_drop_frame:
+			dev->stats.rx_errors++;
+			mvneta_rx_error(pp, rx_desc);
+			/* leave the descriptor untouched */
+			continue;
+		}
+
+		if (rx_bytes <= rx_copybreak) {
+		/* better copy a small frame and not unmap the DMA region */
+			skb = netdev_alloc_skb_ip_align(dev, rx_bytes);
+			if (unlikely(!skb))
+				goto err_drop_frame;
+
+			dma_sync_single_range_for_cpu(dev->dev.parent,
+						      rx_desc->buf_phys_addr,
+						      MVNETA_MH_SIZE + NET_SKB_PAD,
+						      rx_bytes,
+						      DMA_FROM_DEVICE);
+			memcpy(skb_put(skb, rx_bytes),
+			       data + MVNETA_MH_SIZE + NET_SKB_PAD,
+			       rx_bytes);
+
+			skb->protocol = eth_type_trans(skb, dev);
+			mvneta_rx_csum(pp, rx_status, skb);
+			napi_gro_receive(&port->napi, skb);
+
+			rcvd_pkts++;
+			rcvd_bytes += rx_bytes;
+
+			/* leave the descriptor and buffer untouched */
+			continue;
+		}
+
+		/* Refill processing */
+		err = mvneta_rx_refill(pp, rx_desc);
+		if (err) {
+			netdev_err(dev, "Linux processing - Can't refill\n");
+			rxq->missed++;
+			goto err_drop_frame;
+		}
+
+		frag_size = pp->frag_size;
+
+		skb = build_skb(data, frag_size > PAGE_SIZE ? 0 : frag_size);
+
+		/* After refill old buffer has to be unmapped regardless
+		 * the skb is successfully built or not.
+		 */
+		dma_unmap_single(dev->dev.parent, phys_addr,
+				 MVNETA_RX_BUF_SIZE(pp->pkt_size),
+				 DMA_FROM_DEVICE);
+
+		if (!skb)
+			goto err_drop_frame;
+
+		rcvd_pkts++;
+		rcvd_bytes += rx_bytes;
+
+		/* Linux processing */
+		skb_reserve(skb, MVNETA_MH_SIZE + NET_SKB_PAD);
+		skb_put(skb, rx_bytes);
+
+		skb->protocol = eth_type_trans(skb, dev);
+
+		mvneta_rx_csum(pp, rx_status, skb);
+
+		napi_gro_receive(&port->napi, skb);
+	}
+
+	if (rcvd_pkts) {
+		struct mvneta_pcpu_stats *stats = this_cpu_ptr(pp->stats);
+
+		u64_stats_update_begin(&stats->syncp);
+		stats->rx_packets += rcvd_pkts;
+		stats->rx_bytes   += rcvd_bytes;
+		u64_stats_update_end(&stats->syncp);
+	}
+
+	/* Update rxq management counters */
+	mvneta_rxq_desc_num_update(pp, rxq, rx_done, rx_done);
+
+	return rx_done;
 }
 
-/* Main rx processing */
-static int mvneta_rx(struct mvneta_port *pp, int rx_todo,
-		     struct mvneta_rx_queue *rxq)
+/* Main rx processing when using hardware buffer management */
+static int mvneta_rx_hwbm(struct mvneta_port *pp, int rx_todo,
+			  struct mvneta_rx_queue *rxq)
 {
 	struct mvneta_pcpu_port *port = this_cpu_ptr(pp->ports);
 	struct net_device *dev = pp->dev;
@@ -1630,21 +2008,29 @@ static int mvneta_rx(struct mvneta_port *pp, int rx_todo,
 	/* Fairness NAPI loop */
 	while (rx_done < rx_todo) {
 		struct mvneta_rx_desc *rx_desc = mvneta_rxq_next_desc_get(rxq);
+		struct mvneta_bm_pool *bm_pool = NULL;
 		struct sk_buff *skb;
 		unsigned char *data;
 		dma_addr_t phys_addr;
-		u32 rx_status;
+		u32 rx_status, frag_size;
 		int rx_bytes, err;
+		u8 pool_id;
 
 		rx_done++;
 		rx_status = rx_desc->status;
 		rx_bytes = rx_desc->data_size - (ETH_FCS_LEN + MVNETA_MH_SIZE);
 		data = (unsigned char *)rx_desc->buf_cookie;
 		phys_addr = rx_desc->buf_phys_addr;
+		pool_id = MVNETA_RX_GET_BM_POOL_ID(rx_desc);
+		bm_pool = &pp->bm_priv->bm_pools[pool_id];
 
 		if (!mvneta_rxq_desc_is_first_last(rx_status) ||
 		    (rx_status & MVNETA_RXD_ERR_SUMMARY)) {
-		err_drop_frame:
+err_drop_frame_ret_pool:
+			/* Return the buffer to the pool */
+			mvneta_bm_pool_put_bp(pp->bm_priv, bm_pool,
+					      rx_desc->buf_phys_addr);
+err_drop_frame:
 			dev->stats.rx_errors++;
 			mvneta_rx_error(pp, rx_desc);
 			/* leave the descriptor untouched */
@@ -1655,7 +2041,7 @@ static int mvneta_rx(struct mvneta_port *pp, int rx_todo,
 			/* better copy a small frame and not unmap the DMA region */
 			skb = netdev_alloc_skb_ip_align(dev, rx_bytes);
 			if (unlikely(!skb))
-				goto err_drop_frame;
+				goto err_drop_frame_ret_pool;
 
 			dma_sync_single_range_for_cpu(dev->dev.parent,
 			                              rx_desc->buf_phys_addr,
@@ -1673,26 +2059,31 @@ static int mvneta_rx(struct mvneta_port *pp, int rx_todo,
 			rcvd_pkts++;
 			rcvd_bytes += rx_bytes;
 
+			/* Return the buffer to the pool */
+			mvneta_bm_pool_put_bp(pp->bm_priv, bm_pool,
+					      rx_desc->buf_phys_addr);
+
 			/* leave the descriptor and buffer untouched */
 			continue;
 		}
 
 		/* Refill processing */
-		err = mvneta_rx_refill(pp, rx_desc);
+		err = hwbm_pool_refill(&bm_pool->hwbm_pool, GFP_ATOMIC);
 		if (err) {
 			netdev_err(dev, "Linux processing - Can't refill\n");
 			rxq->missed++;
-			goto err_drop_frame;
+			goto err_drop_frame_ret_pool;
 		}
 
-		skb = build_skb(data, pp->frag_size > PAGE_SIZE ? 0 : pp->frag_size);
+		frag_size = bm_pool->hwbm_pool.frag_size;
+
+		skb = build_skb(data, frag_size > PAGE_SIZE ? 0 : frag_size);
 
 		/* After refill old buffer has to be unmapped regardless
 		 * the skb is successfully built or not.
 		 */
-		dma_unmap_single(dev->dev.parent, phys_addr,
-				 MVNETA_RX_BUF_SIZE(pp->pkt_size), DMA_FROM_DEVICE);
-
+		dma_unmap_single(&pp->bm_priv->pdev->dev, phys_addr,
+				 bm_pool->buf_size, DMA_FROM_DEVICE);
 		if (!skb)
 			goto err_drop_frame;
 
@@ -2297,7 +2688,10 @@ static int mvneta_poll(struct napi_struct *napi, int budget)
 
 	if (rx_queue) {
 		rx_queue = rx_queue - 1;
-		rx_done = mvneta_rx(pp, budget, &pp->rxqs[rx_queue]);
+		if (pp->bm_priv)
+			rx_done = mvneta_rx_hwbm(pp, budget, &pp->rxqs[rx_queue]);
+		else
+			rx_done = mvneta_rx_swbm(pp, budget, &pp->rxqs[rx_queue]);
 	}
 
 	budget -= rx_done;
@@ -2386,9 +2780,17 @@ static int mvneta_rxq_init(struct mvneta_port *pp,
 	mvneta_rx_pkts_coal_set(pp, rxq, rxq->pkts_coal);
 	mvneta_rx_time_coal_set(pp, rxq, rxq->time_coal);
 
-	/* Fill RXQ with buffers from RX pool */
-	mvneta_rxq_buf_size_set(pp, rxq, MVNETA_RX_BUF_SIZE(pp->pkt_size));
-	mvneta_rxq_bm_disable(pp, rxq);
+	if (!pp->bm_priv) {
+		/* Fill RXQ with buffers from RX pool */
+		mvneta_rxq_buf_size_set(pp, rxq,
+					MVNETA_RX_BUF_SIZE(pp->pkt_size));
+		mvneta_rxq_bm_disable(pp, rxq);
+	} else {
+		mvneta_rxq_bm_enable(pp, rxq);
+		mvneta_rxq_long_pool_set(pp, rxq);
+		mvneta_rxq_short_pool_set(pp, rxq);
+	}
+
 	mvneta_rxq_fill(pp, rxq, rxq->size);
 
 	return 0;
@@ -2661,6 +3063,9 @@ static int mvneta_change_mtu(struct net_device *dev, int mtu)
 	dev->mtu = mtu;
 
 	if (!netif_running(dev)) {
+		if (pp->bm_priv)
+			mvneta_bm_update_mtu(pp, mtu);
+
 		netdev_update_features(dev);
 		return 0;
 	}
@@ -2673,6 +3078,9 @@ static int mvneta_change_mtu(struct net_device *dev, int mtu)
 	mvneta_cleanup_txqs(pp);
 	mvneta_cleanup_rxqs(pp);
 
+	if (pp->bm_priv)
+		mvneta_bm_update_mtu(pp, mtu);
+
 	pp->pkt_size = MVNETA_RX_PKT_SIZE(dev->mtu);
 	pp->frag_size = SKB_DATA_ALIGN(MVNETA_RX_BUF_SIZE(pp->pkt_size)) +
 	                SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
@@ -3557,6 +3965,7 @@ static int mvneta_probe(struct platform_device *pdev)
 	struct resource *res;
 	struct device_node *dn = pdev->dev.of_node;
 	struct device_node *phy_node;
+	struct device_node *bm_node;
 	struct mvneta_port *pp;
 	struct net_device *dev;
 	const char *dt_mac_addr;
@@ -3690,26 +4099,39 @@ static int mvneta_probe(struct platform_device *pdev)
 
 	pp->tx_csum_limit = tx_csum_limit;
 
+	dram_target_info = mv_mbus_dram_info();
+	if (dram_target_info)
+		mvneta_conf_mbus_windows(pp, dram_target_info);
+
 	pp->tx_ring_size = MVNETA_MAX_TXD;
 	pp->rx_ring_size = MVNETA_MAX_RXD;
 
 	pp->dev = dev;
 	SET_NETDEV_DEV(dev, &pdev->dev);
 
+	pp->id = global_port_id++;
+
+	/* Obtain access to BM resources if enabled and already initialized */
+	bm_node = of_parse_phandle(dn, "buffer-manager", 0);
+	if (bm_node && bm_node->data) {
+		pp->bm_priv = bm_node->data;
+		err = mvneta_bm_port_init(pdev, pp);
+		if (err < 0) {
+			dev_info(&pdev->dev, "use SW buffer management\n");
+			pp->bm_priv = NULL;
+		}
+	}
+
 	err = mvneta_init(&pdev->dev, pp);
 	if (err < 0)
-		goto err_free_stats;
+		goto err_netdev;
 
 	err = mvneta_port_power_up(pp, phy_mode);
 	if (err < 0) {
 		dev_err(&pdev->dev, "can't power up port\n");
-		goto err_free_stats;
+		goto err_netdev;
 	}
 
-	dram_target_info = mv_mbus_dram_info();
-	if (dram_target_info)
-		mvneta_conf_mbus_windows(pp, dram_target_info);
-
 	for_each_present_cpu(cpu) {
 		struct mvneta_pcpu_port *port = per_cpu_ptr(pp->ports, cpu);
 
@@ -3744,6 +4166,13 @@ static int mvneta_probe(struct platform_device *pdev)
 
 	return 0;
 
+err_netdev:
+	unregister_netdev(dev);
+	if (pp->bm_priv) {
+		mvneta_bm_pool_destroy(pp->bm_priv, pp->pool_long, 1 << pp->id);
+		mvneta_bm_pool_destroy(pp->bm_priv, pp->pool_short,
+				       1 << pp->id);
+	}
 err_free_stats:
 	free_percpu(pp->stats);
 err_free_ports:
@@ -3775,6 +4204,12 @@ static int mvneta_remove(struct platform_device *pdev)
 	of_node_put(pp->phy_node);
 	free_netdev(dev);
 
+	if (pp->bm_priv) {
+		mvneta_bm_pool_destroy(pp->bm_priv, pp->pool_long, 1 << pp->id);
+		mvneta_bm_pool_destroy(pp->bm_priv, pp->pool_short,
+				       1 << pp->id);
+	}
+
 	return 0;
 }
 

+ 487 - 0
drivers/net/ethernet/marvell/mvneta_bm.c

@@ -0,0 +1,487 @@
+/*
+ * Driver for Marvell NETA network controller Buffer Manager.
+ *
+ * Copyright (C) 2015 Marvell
+ *
+ * Marcin Wojtas <mw@semihalf.com>
+ *
+ * This file is licensed under the terms of the GNU General Public
+ * License version 2. This program is licensed "as is" without any
+ * warranty of any kind, whether express or implied.
+ */
+
+#include <linux/clk.h>
+#include <linux/genalloc.h>
+#include <linux/io.h>
+#include <linux/kernel.h>
+#include <linux/mbus.h>
+#include <linux/module.h>
+#include <linux/netdevice.h>
+#include <linux/of.h>
+#include <linux/platform_device.h>
+#include <linux/skbuff.h>
+#include <net/hwbm.h>
+#include "mvneta_bm.h"
+
+#define MVNETA_BM_DRIVER_NAME "mvneta_bm"
+#define MVNETA_BM_DRIVER_VERSION "1.0"
+
+static void mvneta_bm_write(struct mvneta_bm *priv, u32 offset, u32 data)
+{
+	writel(data, priv->reg_base + offset);
+}
+
+static u32 mvneta_bm_read(struct mvneta_bm *priv, u32 offset)
+{
+	return readl(priv->reg_base + offset);
+}
+
+static void mvneta_bm_pool_enable(struct mvneta_bm *priv, int pool_id)
+{
+	u32 val;
+
+	val = mvneta_bm_read(priv, MVNETA_BM_POOL_BASE_REG(pool_id));
+	val |= MVNETA_BM_POOL_ENABLE_MASK;
+	mvneta_bm_write(priv, MVNETA_BM_POOL_BASE_REG(pool_id), val);
+
+	/* Clear BM cause register */
+	mvneta_bm_write(priv, MVNETA_BM_INTR_CAUSE_REG, 0);
+}
+
+static void mvneta_bm_pool_disable(struct mvneta_bm *priv, int pool_id)
+{
+	u32 val;
+
+	val = mvneta_bm_read(priv, MVNETA_BM_POOL_BASE_REG(pool_id));
+	val &= ~MVNETA_BM_POOL_ENABLE_MASK;
+	mvneta_bm_write(priv, MVNETA_BM_POOL_BASE_REG(pool_id), val);
+}
+
+static inline void mvneta_bm_config_set(struct mvneta_bm *priv, u32 mask)
+{
+	u32 val;
+
+	val = mvneta_bm_read(priv, MVNETA_BM_CONFIG_REG);
+	val |= mask;
+	mvneta_bm_write(priv, MVNETA_BM_CONFIG_REG, val);
+}
+
+static inline void mvneta_bm_config_clear(struct mvneta_bm *priv, u32 mask)
+{
+	u32 val;
+
+	val = mvneta_bm_read(priv, MVNETA_BM_CONFIG_REG);
+	val &= ~mask;
+	mvneta_bm_write(priv, MVNETA_BM_CONFIG_REG, val);
+}
+
+static void mvneta_bm_pool_target_set(struct mvneta_bm *priv, int pool_id,
+				      u8 target_id, u8 attr)
+{
+	u32 val;
+
+	val = mvneta_bm_read(priv, MVNETA_BM_XBAR_POOL_REG(pool_id));
+	val &= ~MVNETA_BM_TARGET_ID_MASK(pool_id);
+	val &= ~MVNETA_BM_XBAR_ATTR_MASK(pool_id);
+	val |= MVNETA_BM_TARGET_ID_VAL(pool_id, target_id);
+	val |= MVNETA_BM_XBAR_ATTR_VAL(pool_id, attr);
+
+	mvneta_bm_write(priv, MVNETA_BM_XBAR_POOL_REG(pool_id), val);
+}
+
+int mvneta_bm_construct(struct hwbm_pool *hwbm_pool, void *buf)
+{
+	struct mvneta_bm_pool *bm_pool =
+		(struct mvneta_bm_pool *)hwbm_pool->priv;
+	struct mvneta_bm *priv = bm_pool->priv;
+	dma_addr_t phys_addr;
+
+	/* In order to update buf_cookie field of RX descriptor properly,
+	 * BM hardware expects buf virtual address to be placed in the
+	 * first four bytes of mapped buffer.
+	 */
+	*(u32 *)buf = (u32)buf;
+	phys_addr = dma_map_single(&priv->pdev->dev, buf, bm_pool->buf_size,
+				   DMA_FROM_DEVICE);
+	if (unlikely(dma_mapping_error(&priv->pdev->dev, phys_addr)))
+		return -ENOMEM;
+
+	mvneta_bm_pool_put_bp(priv, bm_pool, phys_addr);
+	return 0;
+}
+EXPORT_SYMBOL_GPL(mvneta_bm_construct);
+
+/* Create pool */
+static int mvneta_bm_pool_create(struct mvneta_bm *priv,
+				 struct mvneta_bm_pool *bm_pool)
+{
+	struct platform_device *pdev = priv->pdev;
+	u8 target_id, attr;
+	int size_bytes, err;
+	size_bytes = sizeof(u32) * bm_pool->hwbm_pool.size;
+	bm_pool->virt_addr = dma_alloc_coherent(&pdev->dev, size_bytes,
+						&bm_pool->phys_addr,
+						GFP_KERNEL);
+	if (!bm_pool->virt_addr)
+		return -ENOMEM;
+
+	if (!IS_ALIGNED((u32)bm_pool->virt_addr, MVNETA_BM_POOL_PTR_ALIGN)) {
+		dma_free_coherent(&pdev->dev, size_bytes, bm_pool->virt_addr,
+				  bm_pool->phys_addr);
+		dev_err(&pdev->dev, "BM pool %d is not %d bytes aligned\n",
+			bm_pool->id, MVNETA_BM_POOL_PTR_ALIGN);
+		return -ENOMEM;
+	}
+
+	err = mvebu_mbus_get_dram_win_info(bm_pool->phys_addr, &target_id,
+					   &attr);
+	if (err < 0) {
+		dma_free_coherent(&pdev->dev, size_bytes, bm_pool->virt_addr,
+				  bm_pool->phys_addr);
+		return err;
+	}
+
+	/* Set pool address */
+	mvneta_bm_write(priv, MVNETA_BM_POOL_BASE_REG(bm_pool->id),
+			bm_pool->phys_addr);
+
+	mvneta_bm_pool_target_set(priv, bm_pool->id, target_id,  attr);
+	mvneta_bm_pool_enable(priv, bm_pool->id);
+
+	return 0;
+}
+
+/* Notify the driver that BM pool is being used as specific type and return the
+ * pool pointer on success
+ */
+struct mvneta_bm_pool *mvneta_bm_pool_use(struct mvneta_bm *priv, u8 pool_id,
+					  enum mvneta_bm_type type, u8 port_id,
+					  int pkt_size)
+{
+	struct mvneta_bm_pool *new_pool = &priv->bm_pools[pool_id];
+	int num, err;
+
+	if (new_pool->type == MVNETA_BM_LONG &&
+	    new_pool->port_map != 1 << port_id) {
+		dev_err(&priv->pdev->dev,
+			"long pool cannot be shared by the ports\n");
+		return NULL;
+	}
+
+	if (new_pool->type == MVNETA_BM_SHORT && new_pool->type != type) {
+		dev_err(&priv->pdev->dev,
+			"mixing pools' types between the ports is forbidden\n");
+		return NULL;
+	}
+
+	if (new_pool->pkt_size == 0 || type != MVNETA_BM_SHORT)
+		new_pool->pkt_size = pkt_size;
+
+	/* Allocate buffers in case BM pool hasn't been used yet */
+	if (new_pool->type == MVNETA_BM_FREE) {
+		struct hwbm_pool *hwbm_pool = &new_pool->hwbm_pool;
+
+		new_pool->priv = priv;
+		new_pool->type = type;
+		new_pool->buf_size = MVNETA_RX_BUF_SIZE(new_pool->pkt_size);
+		hwbm_pool->frag_size =
+			SKB_DATA_ALIGN(MVNETA_RX_BUF_SIZE(new_pool->pkt_size)) +
+			SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
+		hwbm_pool->construct = mvneta_bm_construct;
+		hwbm_pool->priv = new_pool;
+
+		/* Create new pool */
+		err = mvneta_bm_pool_create(priv, new_pool);
+		if (err) {
+			dev_err(&priv->pdev->dev, "fail to create pool %d\n",
+				new_pool->id);
+			return NULL;
+		}
+
+		/* Allocate buffers for this pool */
+		num = hwbm_pool_add(hwbm_pool, hwbm_pool->size, GFP_ATOMIC);
+		if (num != hwbm_pool->size) {
+			WARN(1, "pool %d: %d of %d allocated\n",
+			     new_pool->id, num, hwbm_pool->size);
+			return NULL;
+		}
+	}
+
+	return new_pool;
+}
+EXPORT_SYMBOL_GPL(mvneta_bm_pool_use);
+
+/* Free all buffers from the pool */
+void mvneta_bm_bufs_free(struct mvneta_bm *priv, struct mvneta_bm_pool *bm_pool,
+			 u8 port_map)
+{
+	int i;
+
+	bm_pool->port_map &= ~port_map;
+	if (bm_pool->port_map)
+		return;
+
+	mvneta_bm_config_set(priv, MVNETA_BM_EMPTY_LIMIT_MASK);
+
+	for (i = 0; i < bm_pool->hwbm_pool.buf_num; i++) {
+		dma_addr_t buf_phys_addr;
+		u32 *vaddr;
+
+		/* Get buffer physical address (indirect access) */
+		buf_phys_addr = mvneta_bm_pool_get_bp(priv, bm_pool);
+
+		/* Work-around to the problems when destroying the pool,
+		 * when it occurs that a read access to BPPI returns 0.
+		 */
+		if (buf_phys_addr == 0)
+			continue;
+
+		vaddr = phys_to_virt(buf_phys_addr);
+		if (!vaddr)
+			break;
+
+		dma_unmap_single(&priv->pdev->dev, buf_phys_addr,
+				 bm_pool->buf_size, DMA_FROM_DEVICE);
+		hwbm_buf_free(&bm_pool->hwbm_pool, vaddr);
+	}
+
+	mvneta_bm_config_clear(priv, MVNETA_BM_EMPTY_LIMIT_MASK);
+
+	/* Update BM driver with number of buffers removed from pool */
+	bm_pool->hwbm_pool.buf_num -= i;
+}
+EXPORT_SYMBOL_GPL(mvneta_bm_bufs_free);
+
+/* Cleanup pool */
+void mvneta_bm_pool_destroy(struct mvneta_bm *priv,
+			    struct mvneta_bm_pool *bm_pool, u8 port_map)
+{
+	struct hwbm_pool *hwbm_pool = &bm_pool->hwbm_pool;
+	bm_pool->port_map &= ~port_map;
+	if (bm_pool->port_map)
+		return;
+
+	bm_pool->type = MVNETA_BM_FREE;
+
+	mvneta_bm_bufs_free(priv, bm_pool, port_map);
+	if (hwbm_pool->buf_num)
+		WARN(1, "cannot free all buffers in pool %d\n", bm_pool->id);
+
+	if (bm_pool->virt_addr) {
+		dma_free_coherent(&priv->pdev->dev,
+				  sizeof(u32) * hwbm_pool->size,
+				  bm_pool->virt_addr, bm_pool->phys_addr);
+		bm_pool->virt_addr = NULL;
+	}
+
+	mvneta_bm_pool_disable(priv, bm_pool->id);
+}
+EXPORT_SYMBOL_GPL(mvneta_bm_pool_destroy);
+
+static void mvneta_bm_pools_init(struct mvneta_bm *priv)
+{
+	struct device_node *dn = priv->pdev->dev.of_node;
+	struct mvneta_bm_pool *bm_pool;
+	char prop[15];
+	u32 size;
+	int i;
+
+	/* Activate BM unit */
+	mvneta_bm_write(priv, MVNETA_BM_COMMAND_REG, MVNETA_BM_START_MASK);
+
+	/* Create all pools with maximum size */
+	for (i = 0; i < MVNETA_BM_POOLS_NUM; i++) {
+		bm_pool = &priv->bm_pools[i];
+		bm_pool->id = i;
+		bm_pool->type = MVNETA_BM_FREE;
+
+		/* Reset read pointer */
+		mvneta_bm_write(priv, MVNETA_BM_POOL_READ_PTR_REG(i), 0);
+
+		/* Reset write pointer */
+		mvneta_bm_write(priv, MVNETA_BM_POOL_WRITE_PTR_REG(i), 0);
+
+		/* Configure pool size according to DT or use default value */
+		sprintf(prop, "pool%d,capacity", i);
+		if (of_property_read_u32(dn, prop, &size)) {
+			size = MVNETA_BM_POOL_CAP_DEF;
+		} else if (size > MVNETA_BM_POOL_CAP_MAX) {
+			dev_warn(&priv->pdev->dev,
+				 "Illegal pool %d capacity %d, set to %d\n",
+				 i, size, MVNETA_BM_POOL_CAP_MAX);
+			size = MVNETA_BM_POOL_CAP_MAX;
+		} else if (size < MVNETA_BM_POOL_CAP_MIN) {
+			dev_warn(&priv->pdev->dev,
+				 "Illegal pool %d capacity %d, set to %d\n",
+				 i, size, MVNETA_BM_POOL_CAP_MIN);
+			size = MVNETA_BM_POOL_CAP_MIN;
+		} else if (!IS_ALIGNED(size, MVNETA_BM_POOL_CAP_ALIGN)) {
+			dev_warn(&priv->pdev->dev,
+				 "Illegal pool %d capacity %d, round to %d\n",
+				 i, size, ALIGN(size,
+				 MVNETA_BM_POOL_CAP_ALIGN));
+			size = ALIGN(size, MVNETA_BM_POOL_CAP_ALIGN);
+		}
+		bm_pool->hwbm_pool.size = size;
+
+		mvneta_bm_write(priv, MVNETA_BM_POOL_SIZE_REG(i),
+				bm_pool->hwbm_pool.size);
+
+		/* Obtain custom pkt_size from DT */
+		sprintf(prop, "pool%d,pkt-size", i);
+		if (of_property_read_u32(dn, prop, &bm_pool->pkt_size))
+			bm_pool->pkt_size = 0;
+	}
+}
+
+static void mvneta_bm_default_set(struct mvneta_bm *priv)
+{
+	u32 val;
+
+	/* Mask BM all interrupts */
+	mvneta_bm_write(priv, MVNETA_BM_INTR_MASK_REG, 0);
+
+	/* Clear BM cause register */
+	mvneta_bm_write(priv, MVNETA_BM_INTR_CAUSE_REG, 0);
+
+	/* Set BM configuration register */
+	val = mvneta_bm_read(priv, MVNETA_BM_CONFIG_REG);
+
+	/* Reduce MaxInBurstSize from 32 BPs to 16 BPs */
+	val &= ~MVNETA_BM_MAX_IN_BURST_SIZE_MASK;
+	val |= MVNETA_BM_MAX_IN_BURST_SIZE_16BP;
+	mvneta_bm_write(priv, MVNETA_BM_CONFIG_REG, val);
+}
+
+static int mvneta_bm_init(struct mvneta_bm *priv)
+{
+	mvneta_bm_default_set(priv);
+
+	/* Allocate and initialize BM pools structures */
+	priv->bm_pools = devm_kcalloc(&priv->pdev->dev, MVNETA_BM_POOLS_NUM,
+				      sizeof(struct mvneta_bm_pool),
+				      GFP_KERNEL);
+	if (!priv->bm_pools)
+		return -ENOMEM;
+
+	mvneta_bm_pools_init(priv);
+
+	return 0;
+}
+
+static int mvneta_bm_get_sram(struct device_node *dn,
+			      struct mvneta_bm *priv)
+{
+	priv->bppi_pool = of_gen_pool_get(dn, "internal-mem", 0);
+	if (!priv->bppi_pool)
+		return -ENOMEM;
+
+	priv->bppi_virt_addr = gen_pool_dma_alloc(priv->bppi_pool,
+						  MVNETA_BM_BPPI_SIZE,
+						  &priv->bppi_phys_addr);
+	if (!priv->bppi_virt_addr)
+		return -ENOMEM;
+
+	return 0;
+}
+
+static void mvneta_bm_put_sram(struct mvneta_bm *priv)
+{
+	gen_pool_free(priv->bppi_pool, priv->bppi_phys_addr,
+		      MVNETA_BM_BPPI_SIZE);
+}
+
+static int mvneta_bm_probe(struct platform_device *pdev)
+{
+	struct device_node *dn = pdev->dev.of_node;
+	struct mvneta_bm *priv;
+	struct resource *res;
+	int err;
+
+	priv = devm_kzalloc(&pdev->dev, sizeof(struct mvneta_bm), GFP_KERNEL);
+	if (!priv)
+		return -ENOMEM;
+
+	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	priv->reg_base = devm_ioremap_resource(&pdev->dev, res);
+	if (IS_ERR(priv->reg_base))
+		return PTR_ERR(priv->reg_base);
+
+	priv->clk = devm_clk_get(&pdev->dev, NULL);
+	if (IS_ERR(priv->clk))
+		return PTR_ERR(priv->clk);
+	err = clk_prepare_enable(priv->clk);
+	if (err < 0)
+		return err;
+
+	err = mvneta_bm_get_sram(dn, priv);
+	if (err < 0) {
+		dev_err(&pdev->dev, "failed to allocate internal memory\n");
+		goto err_clk;
+	}
+
+	priv->pdev = pdev;
+
+	/* Initialize buffer manager internals */
+	err = mvneta_bm_init(priv);
+	if (err < 0) {
+		dev_err(&pdev->dev, "failed to initialize controller\n");
+		goto err_sram;
+	}
+
+	dn->data = priv;
+	platform_set_drvdata(pdev, priv);
+
+	dev_info(&pdev->dev, "Buffer Manager for network controller enabled\n");
+
+	return 0;
+
+err_sram:
+	mvneta_bm_put_sram(priv);
+err_clk:
+	clk_disable_unprepare(priv->clk);
+	return err;
+}
+
+static int mvneta_bm_remove(struct platform_device *pdev)
+{
+	struct mvneta_bm *priv = platform_get_drvdata(pdev);
+	u8 all_ports_map = 0xff;
+	int i = 0;
+
+	for (i = 0; i < MVNETA_BM_POOLS_NUM; i++) {
+		struct mvneta_bm_pool *bm_pool = &priv->bm_pools[i];
+
+		mvneta_bm_pool_destroy(priv, bm_pool, all_ports_map);
+	}
+
+	mvneta_bm_put_sram(priv);
+
+	/* Dectivate BM unit */
+	mvneta_bm_write(priv, MVNETA_BM_COMMAND_REG, MVNETA_BM_STOP_MASK);
+
+	clk_disable_unprepare(priv->clk);
+
+	return 0;
+}
+
+static const struct of_device_id mvneta_bm_match[] = {
+	{ .compatible = "marvell,armada-380-neta-bm" },
+	{ }
+};
+MODULE_DEVICE_TABLE(of, mvneta_bm_match);
+
+static struct platform_driver mvneta_bm_driver = {
+	.probe = mvneta_bm_probe,
+	.remove = mvneta_bm_remove,
+	.driver = {
+		.name = MVNETA_BM_DRIVER_NAME,
+		.of_match_table = mvneta_bm_match,
+	},
+};
+
+module_platform_driver(mvneta_bm_driver);
+
+MODULE_DESCRIPTION("Marvell NETA Buffer Manager Driver - www.marvell.com");
+MODULE_AUTHOR("Marcin Wojtas <mw@semihalf.com>");
+MODULE_LICENSE("GPL v2");

+ 182 - 0
drivers/net/ethernet/marvell/mvneta_bm.h

@@ -0,0 +1,182 @@
+/*
+ * Driver for Marvell NETA network controller Buffer Manager.
+ *
+ * Copyright (C) 2015 Marvell
+ *
+ * Marcin Wojtas <mw@semihalf.com>
+ *
+ * This file is licensed under the terms of the GNU General Public
+ * License version 2. This program is licensed "as is" without any
+ * warranty of any kind, whether express or implied.
+ */
+
+#ifndef _MVNETA_BM_H_
+#define _MVNETA_BM_H_
+
+/* BM Configuration Register */
+#define MVNETA_BM_CONFIG_REG			0x0
+#define    MVNETA_BM_STATUS_MASK		0x30
+#define    MVNETA_BM_ACTIVE_MASK		BIT(4)
+#define    MVNETA_BM_MAX_IN_BURST_SIZE_MASK	0x60000
+#define    MVNETA_BM_MAX_IN_BURST_SIZE_16BP	BIT(18)
+#define    MVNETA_BM_EMPTY_LIMIT_MASK		BIT(19)
+
+/* BM Activation Register */
+#define MVNETA_BM_COMMAND_REG			0x4
+#define    MVNETA_BM_START_MASK			BIT(0)
+#define    MVNETA_BM_STOP_MASK			BIT(1)
+#define    MVNETA_BM_PAUSE_MASK			BIT(2)
+
+/* BM Xbar interface Register */
+#define MVNETA_BM_XBAR_01_REG			0x8
+#define MVNETA_BM_XBAR_23_REG			0xc
+#define MVNETA_BM_XBAR_POOL_REG(pool)		\
+		(((pool) < 2) ? MVNETA_BM_XBAR_01_REG : MVNETA_BM_XBAR_23_REG)
+#define     MVNETA_BM_TARGET_ID_OFFS(pool)	(((pool) & 1) ? 16 : 0)
+#define     MVNETA_BM_TARGET_ID_MASK(pool)	\
+		(0xf << MVNETA_BM_TARGET_ID_OFFS(pool))
+#define     MVNETA_BM_TARGET_ID_VAL(pool, id)	\
+		((id) << MVNETA_BM_TARGET_ID_OFFS(pool))
+#define     MVNETA_BM_XBAR_ATTR_OFFS(pool)	(((pool) & 1) ? 20 : 4)
+#define     MVNETA_BM_XBAR_ATTR_MASK(pool)	\
+		(0xff << MVNETA_BM_XBAR_ATTR_OFFS(pool))
+#define     MVNETA_BM_XBAR_ATTR_VAL(pool, attr)	\
+		((attr) << MVNETA_BM_XBAR_ATTR_OFFS(pool))
+
+/* Address of External Buffer Pointers Pool Register */
+#define MVNETA_BM_POOL_BASE_REG(pool)		(0x10 + ((pool) << 4))
+#define     MVNETA_BM_POOL_ENABLE_MASK		BIT(0)
+
+/* External Buffer Pointers Pool RD pointer Register */
+#define MVNETA_BM_POOL_READ_PTR_REG(pool)	(0x14 + ((pool) << 4))
+#define     MVNETA_BM_POOL_SET_READ_PTR_MASK	0xfffc
+#define     MVNETA_BM_POOL_GET_READ_PTR_OFFS	16
+#define     MVNETA_BM_POOL_GET_READ_PTR_MASK	0xfffc0000
+
+/* External Buffer Pointers Pool WR pointer */
+#define MVNETA_BM_POOL_WRITE_PTR_REG(pool)	(0x18 + ((pool) << 4))
+#define     MVNETA_BM_POOL_SET_WRITE_PTR_OFFS	0
+#define     MVNETA_BM_POOL_SET_WRITE_PTR_MASK	0xfffc
+#define     MVNETA_BM_POOL_GET_WRITE_PTR_OFFS	16
+#define     MVNETA_BM_POOL_GET_WRITE_PTR_MASK	0xfffc0000
+
+/* External Buffer Pointers Pool Size Register */
+#define MVNETA_BM_POOL_SIZE_REG(pool)		(0x1c + ((pool) << 4))
+#define     MVNETA_BM_POOL_SIZE_MASK		0x3fff
+
+/* BM Interrupt Cause Register */
+#define MVNETA_BM_INTR_CAUSE_REG		(0x50)
+
+/* BM interrupt Mask Register */
+#define MVNETA_BM_INTR_MASK_REG			(0x54)
+
+/* Other definitions */
+#define MVNETA_BM_SHORT_PKT_SIZE		256
+#define MVNETA_BM_POOLS_NUM			4
+#define MVNETA_BM_POOL_CAP_MIN			128
+#define MVNETA_BM_POOL_CAP_DEF			2048
+#define MVNETA_BM_POOL_CAP_MAX			\
+		(16 * 1024 - MVNETA_BM_POOL_CAP_ALIGN)
+#define MVNETA_BM_POOL_CAP_ALIGN		32
+#define MVNETA_BM_POOL_PTR_ALIGN		32
+
+#define MVNETA_BM_POOL_ACCESS_OFFS		8
+
+#define MVNETA_BM_BPPI_SIZE			0x100000
+
+#define MVNETA_RX_BUF_SIZE(pkt_size)   ((pkt_size) + NET_SKB_PAD)
+
+enum mvneta_bm_type {
+	MVNETA_BM_FREE,
+	MVNETA_BM_LONG,
+	MVNETA_BM_SHORT
+};
+
+struct mvneta_bm {
+	void __iomem *reg_base;
+	struct clk *clk;
+	struct platform_device *pdev;
+
+	struct gen_pool *bppi_pool;
+	/* BPPI virtual base address */
+	void __iomem *bppi_virt_addr;
+	/* BPPI physical base address */
+	dma_addr_t bppi_phys_addr;
+
+	/* BM pools */
+	struct mvneta_bm_pool *bm_pools;
+};
+
+struct mvneta_bm_pool {
+	struct hwbm_pool hwbm_pool;
+	/* Pool number in the range 0-3 */
+	u8 id;
+	enum mvneta_bm_type type;
+
+	/* Packet size */
+	int pkt_size;
+	/* Size of the buffer accessed through DMA */
+	u32 buf_size;
+
+	/* BPPE virtual base address */
+	u32 *virt_addr;
+	/* BPPE physical base address */
+	dma_addr_t phys_addr;
+
+	/* Ports using BM pool */
+	u8 port_map;
+
+	struct mvneta_bm *priv;
+};
+
+/* Declarations and definitions */
+void *mvneta_frag_alloc(unsigned int frag_size);
+void mvneta_frag_free(unsigned int frag_size, void *data);
+
+#if defined(CONFIG_MVNETA_BM) || defined(CONFIG_MVNETA_BM_MODULE)
+void mvneta_bm_pool_destroy(struct mvneta_bm *priv,
+			    struct mvneta_bm_pool *bm_pool, u8 port_map);
+void mvneta_bm_bufs_free(struct mvneta_bm *priv, struct mvneta_bm_pool *bm_pool,
+			 u8 port_map);
+int mvneta_bm_construct(struct hwbm_pool *hwbm_pool, void *buf);
+int mvneta_bm_pool_refill(struct mvneta_bm *priv,
+			  struct mvneta_bm_pool *bm_pool);
+struct mvneta_bm_pool *mvneta_bm_pool_use(struct mvneta_bm *priv, u8 pool_id,
+					  enum mvneta_bm_type type, u8 port_id,
+					  int pkt_size);
+
+static inline void mvneta_bm_pool_put_bp(struct mvneta_bm *priv,
+					 struct mvneta_bm_pool *bm_pool,
+					 dma_addr_t buf_phys_addr)
+{
+	writel_relaxed(buf_phys_addr, priv->bppi_virt_addr +
+		       (bm_pool->id << MVNETA_BM_POOL_ACCESS_OFFS));
+}
+
+static inline u32 mvneta_bm_pool_get_bp(struct mvneta_bm *priv,
+					struct mvneta_bm_pool *bm_pool)
+{
+	return readl_relaxed(priv->bppi_virt_addr +
+			     (bm_pool->id << MVNETA_BM_POOL_ACCESS_OFFS));
+}
+#else
+static inline void mvneta_bm_pool_destroy(struct mvneta_bm *priv,
+					   struct mvneta_bm_pool *bm_pool,
+					   u8 port_map) {}
+static inline void mvneta_bm_bufs_free(struct mvneta_bm *priv,
+					struct mvneta_bm_pool *bm_pool,
+					u8 port_map) {}
+static inline int mvneta_bm_construct(struct hwbm_pool *hwbm_pool, void *buf)
+{ return 0; }
+static inline int mvneta_bm_pool_refill(struct mvneta_bm *priv,
+					 struct mvneta_bm_pool *bm_pool)
+{ return 0; }
+static inline struct mvneta_bm_pool *mvneta_bm_pool_use(struct mvneta_bm *priv,
+							 u8 pool_id,
+							 enum mvneta_bm_type type,
+							 u8 port_id,
+							 int pkt_size)
+{ return NULL; }
+
+static inline void mvneta_bm_pool_put_bp(struct mvneta_bm *priv,
+					 struct mvneta_bm_pool *bm_pool,
+					 dma_addr_t buf_phys_addr) {}
+
+static inline u32 mvneta_bm_pool_get_bp(struct mvneta_bm *priv,
+					struct mvneta_bm_pool *bm_pool)
+{ return 0; }
+#endif /* CONFIG_MVNETA_BM */
+#endif
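
The declarations above form the pool API the mvneta port driver builds on: a port claims a pool with mvneta_bm_pool_use() and hands buffer pointers back with mvneta_bm_pool_put_bp(). A hedged sketch of the expected call sequence, with bm_priv, port_id, pkt_size and rx_buf_phys_addr as placeholders:

/* Sketch only: claim a long-packet pool (pool id 0 chosen arbitrarily). */
pool = mvneta_bm_pool_use(bm_priv, 0, MVNETA_BM_LONG, port_id, pkt_size);
if (!pool)
	return -ENOMEM;

/* On an RX drop, return the buffer's DMA address to the hardware pool
 * rather than letting the hardware lose track of it. */
mvneta_bm_pool_put_bp(bm_priv, pool, rx_buf_phys_addr);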

+ 3 - 0
include/linux/mbus.h

@@ -69,6 +69,9 @@ static inline const struct mbus_dram_target_info *mv_mbus_dram_info_nooverlap(vo
 int mvebu_mbus_save_cpu_target(u32 *store_addr);
 void mvebu_mbus_get_pcie_mem_aperture(struct resource *res);
 void mvebu_mbus_get_pcie_io_aperture(struct resource *res);
+int mvebu_mbus_get_dram_win_info(phys_addr_t phyaddr, u8 *target, u8 *attr);
+int mvebu_mbus_get_io_win_info(phys_addr_t phyaddr, u32 *size, u8 *target,
+			       u8 *attr);
 int mvebu_mbus_add_window_remap_by_id(unsigned int target,
 				      unsigned int attribute,
 				      phys_addr_t base, size_t size,

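mvebu_mbus_get_dram_win_info() and mvebu_mbus_get_io_win_info() translate a physical address into the MBus target/attribute pair that, for instance, the BM crossbar registers (MVNETA_BM_XBAR_POOL_REG and friends) expect. A minimal sketch, assuming bm_pool->phys_addr points into DRAM:

/* Sketch only: resolve the DRAM window covering a pool's BPPE area. */
u8 target, attr;
int err;

err = mvebu_mbus_get_dram_win_info(bm_pool->phys_addr, &target, &attr);
if (err < 0)
	return err;
/* target and attr can now be programmed into the pool's crossbar mapping */
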
+ 28 - 0
include/net/hwbm.h

@@ -0,0 +1,28 @@
+#ifndef _HWBM_H
+#define _HWBM_H
+
+struct hwbm_pool {
+	/* Capacity of the pool */
+	int size;
+	/* Size of the buffers managed */
+	int frag_size;
+	/* Number of buffers currently used by this pool */
+	int buf_num;
+	/* constructor called during allocation */
+	int (*construct)(struct hwbm_pool *bm_pool, void *buf);
+	/* protect access to the buffer counter */
+	spinlock_t lock;
+	/* private data */
+	void *priv;
+};
+#ifdef CONFIG_HWBM
+void hwbm_buf_free(struct hwbm_pool *bm_pool, void *buf);
+int hwbm_pool_refill(struct hwbm_pool *bm_pool, gfp_t gfp);
+int hwbm_pool_add(struct hwbm_pool *bm_pool, unsigned int buf_num, gfp_t gfp);
+#else
+static inline void hwbm_buf_free(struct hwbm_pool *bm_pool, void *buf) {}
+static inline int hwbm_pool_refill(struct hwbm_pool *bm_pool, gfp_t gfp)
+{ return 0; }
+static inline int hwbm_pool_add(struct hwbm_pool *bm_pool,
+				unsigned int buf_num, gfp_t gfp)
+{ return 0; }
+#endif /* CONFIG_HWBM */
+#endif /* _HWBM_H */
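
struct hwbm_pool is the whole contract between a driver and the generic helpers: the driver fills in size, frag_size and the construct() callback, initialises the lock, then asks the core to pre-fill the pool. A hedged sketch follows; my_construct() and my_pool_setup() are illustrative names, not part of this series.

/* Sketch only: per-buffer hook; a real driver would map the buffer and
 * hand its DMA address to the hardware here. */
static int my_construct(struct hwbm_pool *bm_pool, void *buf)
{
	return 0;
}

static int my_pool_setup(struct hwbm_pool *pool, int capacity, int frag_size)
{
	pool->size = capacity;
	pool->frag_size = frag_size;
	pool->construct = my_construct;
	spin_lock_init(&pool->lock);

	/* hwbm_pool_add() returns the number of buffers actually added */
	return hwbm_pool_add(pool, capacity, GFP_KERNEL);
}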

+ 3 - 0
net/Kconfig

@@ -253,6 +253,9 @@ config XPS
 	depends on SMP
 	default y
 
+config HWBM
+       bool
+
 config SOCK_CGROUP_DATA
 	bool
 	default n

+ 1 - 0
net/core/Makefile

@@ -25,4 +25,5 @@ obj-$(CONFIG_CGROUP_NET_PRIO) += netprio_cgroup.o
 obj-$(CONFIG_CGROUP_NET_CLASSID) += netclassid_cgroup.o
 obj-$(CONFIG_LWTUNNEL) += lwtunnel.o
 obj-$(CONFIG_DST_CACHE) += dst_cache.o
+obj-$(CONFIG_HWBM) += hwbm.o
 obj-$(CONFIG_NET_DEVLINK) += devlink.o

+ 87 - 0
net/core/hwbm.c

@@ -0,0 +1,87 @@
+/* Support for hardware buffer manager.
+ *
+ * Copyright (C) 2016 Marvell
+ *
+ * Gregory CLEMENT <gregory.clement@free-electrons.com>
+ *
+ *  This program is free software; you can redistribute it and/or modify
+ *  it under the terms of the GNU General Public License as published by
+ *  the Free Software Foundation; either version 2 of the License, or
+ *  (at your option) any later version.
+ */
+#include <linux/kernel.h>
+#include <linux/printk.h>
+#include <linux/skbuff.h>
+#include <net/hwbm.h>
+
+void hwbm_buf_free(struct hwbm_pool *bm_pool, void *buf)
+{
+	if (likely(bm_pool->frag_size <= PAGE_SIZE))
+		skb_free_frag(buf);
+	else
+		kfree(buf);
+}
+EXPORT_SYMBOL_GPL(hwbm_buf_free);
+
+/* Refill processing for HW buffer management */
+int hwbm_pool_refill(struct hwbm_pool *bm_pool, gfp_t gfp)
+{
+	int frag_size = bm_pool->frag_size;
+	void *buf;
+
+	if (likely(frag_size <= PAGE_SIZE))
+		buf = netdev_alloc_frag(frag_size);
+	else
+		buf = kmalloc(frag_size, gfp);
+
+	if (!buf)
+		return -ENOMEM;
+
+	if (bm_pool->construct)
+		if (bm_pool->construct(bm_pool, buf)) {
+			hwbm_buf_free(bm_pool, buf);
+			return -ENOMEM;
+		}
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(hwbm_pool_refill);
+
+int hwbm_pool_add(struct hwbm_pool *bm_pool, unsigned int buf_num, gfp_t gfp)
+{
+	int err, i;
+	unsigned long flags;
+
+	spin_lock_irqsave(&bm_pool->lock, flags);
+	if (bm_pool->buf_num == bm_pool->size) {
+		pr_warn("pool already filled\n");
+		spin_unlock_irqrestore(&bm_pool->lock, flags);
+		return bm_pool->buf_num;
+	}
+
+	if (buf_num + bm_pool->buf_num > bm_pool->size) {
+		pr_warn("cannot allocate %d buffers for pool\n",
+			buf_num);
+		spin_unlock_irqrestore(&bm_pool->lock, flags);
+		return 0;
+	}
+
+	if ((buf_num + bm_pool->buf_num) < bm_pool->buf_num) {
+		pr_warn("Adding %d buffers to the %d current buffers will overflow\n",
+			buf_num, bm_pool->buf_num);
+		spin_unlock_irqrestore(&bm_pool->lock, flags);
+		return 0;
+	}
+
+	for (i = 0; i < buf_num; i++) {
+		err = hwbm_pool_refill(bm_pool, gfp);
+		if (err < 0)
+			break;
+	}
+
+	/* Update BM driver with number of buffers added to pool */
+	bm_pool->buf_num += i;
+
+	pr_debug("hwbm pool: %d of %d buffers added\n", i, buf_num);
+	spin_unlock_irqrestore(&bm_pool->lock, flags);
+
+	return i;
+}
+EXPORT_SYMBOL_GPL(hwbm_pool_add);
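
hwbm_pool_refill() allocates and constructs a single buffer, so an RX path that consumes a buffer can top the pool back up on the spot; with GFP_ATOMIC it is usable from softirq context. A hedged sketch, with illustrative names:

/* Sketch only: replace one consumed buffer from an RX handler. */
static void my_rx_replenish(struct hwbm_pool *pool)
{
	if (hwbm_pool_refill(pool, GFP_ATOMIC) < 0)
		pr_warn("failed to refill hardware buffer pool\n");
}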