
Merge tag 'nand/for-4.13' into MTD

From Boris:
"""
This pull request contains the following core changes:

* addition of on-die ECC support to the Micron driver
* addition of helpers to help drivers choose the most appropriate ECC
  settings
* deletion of dead code (cached programming and the ->errstat() hook)
* make drivers that do not support the SET/GET FEATURES command use a
  dummy ->set/get_features implementation returning -ENOTSUPP (required
  for Micron on-die ECC)
* change the semantics of ecc->write_page() for drivers setting the
  NAND_ECC_CUSTOM_PAGE_ACCESS flag
* support for exiting the 'GET STATUS' command in the default ->cmdfunc()
  implementations
* change the prototype of ->setup_data_interface()

A bunch of driver related changes:

* various cleanups, fixes and improvements of the MTK driver
* OMAP DT bindings fixes
* support for ->setup_data_interface() in the fsmc driver
* support for imx7 in the gpmi driver
* finalization of the denali driver rework (thanks to Masahiro for the
  work he's done on this driver)
* fix "bitflips in erased pages" handling in the ifc driver
* addition of PM ops and dynamic timing configuration to the atmel
  driver

And, as usual, we also have a few minor cleanup/fix/improvement patches
across the subsystem.
"""
Brian Norris, 8 years ago
Parent
Current commit
ef32476f26
45 files changed, 2546 insertions and 1628 deletions
  1. Documentation/devicetree/bindings/mtd/denali-nand.txt (+13, -0)
  2. Documentation/devicetree/bindings/mtd/elm.txt (+1, -1)
  3. Documentation/devicetree/bindings/mtd/gpmc-nand.txt (+1, -1)
  4. Documentation/devicetree/bindings/mtd/gpmc-nor.txt (+2, -2)
  5. Documentation/devicetree/bindings/mtd/gpmc-onenand.txt (+1, -1)
  6. Documentation/devicetree/bindings/mtd/gpmi-nand.txt (+13, -1)
  7. Documentation/devicetree/bindings/mtd/mtk-nand.txt (+3, -2)
  8. Documentation/devicetree/bindings/mtd/nand.txt (+1, -1)
  9. Documentation/devicetree/bindings/net/gpmc-eth.txt (+2, -2)
  10. MAINTAINERS (+6, -0)
  11. drivers/mtd/nand/Kconfig (+3, -0)
  12. drivers/mtd/nand/atmel/nand-controller.c (+350, -4)
  13. drivers/mtd/nand/bcm47xxnflash/ops_bcm4706.c (+2, -0)
  14. drivers/mtd/nand/cafe_nand.c (+2, -0)
  15. drivers/mtd/nand/davinci_nand.c (+3, -0)
  16. drivers/mtd/nand/denali.c (+812, -1019)
  17. drivers/mtd/nand/denali.h (+150, -165)
  18. drivers/mtd/nand/denali_dt.c (+41, -12)
  19. drivers/mtd/nand/denali_pci.c (+16, -10)
  20. drivers/mtd/nand/docg4.c (+2, -0)
  21. drivers/mtd/nand/fsl_elbc_nand.c (+2, -0)
  22. drivers/mtd/nand/fsl_ifc_nand.c (+45, -36)
  23. drivers/mtd/nand/fsmc_nand.c (+96, -26)
  24. drivers/mtd/nand/gpmi-nand/gpmi-lib.c (+3, -3)
  25. drivers/mtd/nand/gpmi-nand/gpmi-nand.c (+41, -34)
  26. drivers/mtd/nand/gpmi-nand/gpmi-nand.h (+9, -4)
  27. drivers/mtd/nand/hisi504_nand.c (+2, -0)
  28. drivers/mtd/nand/jz4780_nand.c (+1, -1)
  29. drivers/mtd/nand/mpc5121_nfc.c (+2, -0)
  30. drivers/mtd/nand/mtk_ecc.c (+115, -113)
  31. drivers/mtd/nand/mtk_ecc.h (+1, -1)
  32. drivers/mtd/nand/mtk_nand.c (+168, -111)
  33. drivers/mtd/nand/mxc_nand.c (+5, -7)
  34. drivers/mtd/nand/nand_base.c (+303, -46)
  35. drivers/mtd/nand/nand_micron.c (+222, -0)
  36. drivers/mtd/nand/orion_nand.c (+5, -1)
  37. drivers/mtd/nand/pxa3xx_nand.c (+2, -0)
  38. drivers/mtd/nand/qcom_nandc.c (+2, -0)
  39. drivers/mtd/nand/s3c2410.c (+2, -3)
  40. drivers/mtd/nand/sh_flctl.c (+2, -0)
  41. drivers/mtd/nand/sunxi_nand.c (+3, -6)
  42. drivers/mtd/nand/tango_nand.c (+16, -6)
  43. drivers/mtd/nand/vf610_nfc.c (+2, -0)
  44. drivers/staging/mt29f_spinand/mt29f_spinand.c (+2, -0)
  45. include/linux/mtd/nand.h (+71, -9)

+ 13 - 0
Documentation/devicetree/bindings/mtd/denali-nand.txt

@@ -3,10 +3,23 @@
 Required properties:
   - compatible : should be one of the following:
       "altr,socfpga-denali-nand"            - for Altera SOCFPGA
+      "socionext,uniphier-denali-nand-v5a"  - for Socionext UniPhier (v5a)
+      "socionext,uniphier-denali-nand-v5b"  - for Socionext UniPhier (v5b)
   - reg : should contain registers location and length for data and reg.
   - reg-names: Should contain the reg names "nand_data" and "denali_reg"
   - interrupts : The interrupt number.
 
+Optional properties:
+  - nand-ecc-step-size: see nand.txt for details.  If present, the value must be
+      512        for "altr,socfpga-denali-nand"
+      1024       for "socionext,uniphier-denali-nand-v5a"
+      1024       for "socionext,uniphier-denali-nand-v5b"
+  - nand-ecc-strength: see nand.txt for details.  Valid values are:
+      8, 15      for "altr,socfpga-denali-nand"
+      8, 16, 24  for "socionext,uniphier-denali-nand-v5a"
+      8, 16      for "socionext,uniphier-denali-nand-v5b"
+  - nand-ecc-maximize: see nand.txt for details
+
 The device tree may optionally contain sub-nodes describing partitions of the
 address space. See partition.txt for more detail.
 

+ 1 - 1
Documentation/devicetree/bindings/mtd/elm.txt

@@ -1,7 +1,7 @@
 Error location module
 
 Required properties:
-- compatible: Must be "ti,am33xx-elm"
+- compatible: Must be "ti,am3352-elm"
 - reg: physical base address and size of the registers map.
 - interrupts: Interrupt number for the elm.
 

+ 1 - 1
Documentation/devicetree/bindings/mtd/gpmc-nand.txt

@@ -5,7 +5,7 @@ the GPMC controller with a name of "nand".
 
 All timing relevant properties as well as generic gpmc child properties are
 explained in a separate documents - please refer to
-Documentation/devicetree/bindings/bus/ti-gpmc.txt
+Documentation/devicetree/bindings/memory-controllers/omap-gpmc.txt
 
 For NAND specific properties such as ECC modes or bus width, please refer to
 Documentation/devicetree/bindings/mtd/nand.txt

+ 2 - 2
Documentation/devicetree/bindings/mtd/gpmc-nor.txt

@@ -5,7 +5,7 @@ child nodes of the GPMC controller with a name of "nor".
 
 All timing relevant properties as well as generic GPMC child properties are
 explained in a separate documents. Please refer to
-Documentation/devicetree/bindings/bus/ti-gpmc.txt
+Documentation/devicetree/bindings/memory-controllers/omap-gpmc.txt
 
 Required properties:
 - bank-width: 		Width of NOR flash in bytes. GPMC supports 8-bit and
@@ -28,7 +28,7 @@ Required properties:
 
 Optional properties:
 - gpmc,XXX		Additional GPMC timings and settings parameters. See
-			Documentation/devicetree/bindings/bus/ti-gpmc.txt
+			Documentation/devicetree/bindings/memory-controllers/omap-gpmc.txt
 
 Optional properties for partition table parsing:
 - #address-cells: should be set to 1

+ 1 - 1
Documentation/devicetree/bindings/mtd/gpmc-onenand.txt

@@ -5,7 +5,7 @@ the GPMC controller with a name of "onenand".
 
 All timing relevant properties as well as generic gpmc child properties are
 explained in a separate documents - please refer to
-Documentation/devicetree/bindings/bus/ti-gpmc.txt
+Documentation/devicetree/bindings/memory-controllers/omap-gpmc.txt
 
 Required properties:
 

+ 13 - 1
Documentation/devicetree/bindings/mtd/gpmi-nand.txt

@@ -4,7 +4,12 @@ The GPMI nand controller provides an interface to control the
 NAND flash chips.
 
 Required properties:
-  - compatible : should be "fsl,<chip>-gpmi-nand"
+  - compatible : should be "fsl,<chip>-gpmi-nand", chip can be:
+    * imx23
+    * imx28
+    * imx6q
+    * imx6sx
+    * imx7d
   - reg : should contain registers location and length for gpmi and bch.
   - reg-names: Should contain the reg names "gpmi-nand" and "bch"
   - interrupts : BCH interrupt number.
@@ -13,6 +18,13 @@ Required properties:
     and GPMI DMA channel ID.
     Refer to dma.txt and fsl-mxs-dma.txt for details.
   - dma-names: Must be "rx-tx".
+  - clocks : clocks phandle and clock specifier corresponding to each clock
+    specified in clock-names.
+  - clock-names : The "gpmi_io" clock is always required. Which clocks are
+    exactly required depends on chip:
+    * imx23/imx28 : "gpmi_io"
+    * imx6q/sx : "gpmi_io", "gpmi_apb", "gpmi_bch", "gpmi_bch_apb", "per1_bch"
+    * imx7d : "gpmi_io", "gpmi_bch_apb"
 
 Optional properties:
   - nand-on-flash-bbt: boolean to enable on flash bbt option if not

+ 3 - 2
Documentation/devicetree/bindings/mtd/mtk-nand.txt

@@ -12,7 +12,8 @@ tree nodes.
 
 The first part of NFC is NAND Controller Interface (NFI) HW.
 Required NFI properties:
-- compatible:			Should be "mediatek,mtxxxx-nfc".
+- compatible:			Should be one of "mediatek,mt2701-nfc",
+				"mediatek,mt2712-nfc".
 - reg:				Base physical address and size of NFI.
 - interrupts:			Interrupts of NFI.
 - clocks:			NFI required clocks.
@@ -141,7 +142,7 @@ Example:
 ==============
 
 Required BCH properties:
-- compatible:	Should be "mediatek,mtxxxx-ecc".
+- compatible:	Should be one of "mediatek,mt2701-ecc", "mediatek,mt2712-ecc".
 - reg:		Base physical address and size of ECC.
 - interrupts:	Interrupts of ECC.
 - clocks:	ECC required clocks.

+ 1 - 1
Documentation/devicetree/bindings/mtd/nand.txt

@@ -21,7 +21,7 @@ Optional NAND chip properties:
 
 - nand-ecc-mode : String, operation mode of the NAND ecc mode.
 		  Supported values are: "none", "soft", "hw", "hw_syndrome",
-		  "hw_oob_first".
+		  "hw_oob_first", "on-die".
 		  Deprecated values:
 		  "soft_bch": use "soft" and nand-ecc-algo instead
 - nand-ecc-algo: string, algorithm of NAND ECC.

+ 2 - 2
Documentation/devicetree/bindings/net/gpmc-eth.txt

@@ -9,7 +9,7 @@ the GPMC controller with an "ethernet" name.
 
 All timing relevant properties as well as generic GPMC child properties are
 explained in a separate documents. Please refer to
-Documentation/devicetree/bindings/bus/ti-gpmc.txt
+Documentation/devicetree/bindings/memory-controllers/omap-gpmc.txt
 
 For the properties relevant to the ethernet controller connected to the GPMC
 refer to the binding documentation of the device. For example, the documentation
@@ -43,7 +43,7 @@ Required properties:
 
 Optional properties:
 - gpmc,XXX		Additional GPMC timings and settings parameters. See
-			Documentation/devicetree/bindings/bus/ti-gpmc.txt
+			Documentation/devicetree/bindings/memory-controllers/omap-gpmc.txt
 
 Example:
 

+ 6 - 0
MAINTAINERS

@@ -3895,6 +3895,12 @@ M:	Pali Rohár <pali.rohar@gmail.com>
 S:	Maintained
 F:	drivers/platform/x86/dell-wmi.c
 
+DENALI NAND DRIVER
+M:	Masahiro Yamada <yamada.masahiro@socionext.com>
+L:	linux-mtd@lists.infradead.org
+S:	Supported
+F:	drivers/mtd/nand/denali*
+
 DESIGNWARE USB2 DRD IP DRIVER
 M:	John Youn <johnyoun@synopsys.com>
 L:	linux-usb@vger.kernel.org

+ 3 - 0
drivers/mtd/nand/Kconfig

@@ -308,6 +308,7 @@ config MTD_NAND_CS553X
 config MTD_NAND_ATMEL
 	tristate "Support for NAND Flash / SmartMedia on AT91"
 	depends on ARCH_AT91
+	select MFD_ATMEL_SMC
 	help
 	  Enables support for NAND Flash / Smart Media Card interface
 	  on Atmel AT91 processors.
@@ -542,6 +543,7 @@ config MTD_NAND_SUNXI
 
 config MTD_NAND_HISI504
 	tristate "Support for NAND controller on Hisilicon SoC Hip04"
+	depends on ARCH_HISI || COMPILE_TEST
 	depends on HAS_DMA
 	help
 	  Enables support for NAND controller on Hisilicon SoC Hip04.
@@ -555,6 +557,7 @@ config MTD_NAND_QCOM
 
 config MTD_NAND_MTK
 	tristate "Support for NAND controller on MTK SoCs"
+	depends on ARCH_MEDIATEK || COMPILE_TEST
 	depends on HAS_DMA
 	help
 	  Enables support for NAND controller on MTK SoCs.

+ 350 - 4
drivers/mtd/nand/atmel/nand-controller.c

@@ -57,6 +57,7 @@
 #include <linux/interrupt.h>
 #include <linux/mfd/syscon.h>
 #include <linux/mfd/syscon/atmel-matrix.h>
+#include <linux/mfd/syscon/atmel-smc.h>
 #include <linux/module.h>
 #include <linux/mtd/nand.h>
 #include <linux/of_address.h>
@@ -64,7 +65,6 @@
 #include <linux/of_platform.h>
 #include <linux/iopoll.h>
 #include <linux/platform_device.h>
-#include <linux/platform_data/atmel.h>
 #include <linux/regmap.h>
 
 #include "pmecc.h"
@@ -151,6 +151,8 @@ struct atmel_nand_cs {
 		void __iomem *virt;
 		dma_addr_t dma;
 	} io;
+
+	struct atmel_smc_cs_conf smcconf;
 };
 
 struct atmel_nand {
@@ -196,6 +198,8 @@ struct atmel_nand_controller_ops {
 	void (*nand_init)(struct atmel_nand_controller *nc,
 			  struct atmel_nand *nand);
 	int (*ecc_init)(struct atmel_nand *nand);
+	int (*setup_data_interface)(struct atmel_nand *nand, int csline,
+				    const struct nand_data_interface *conf);
 };
 
 struct atmel_nand_controller_caps {
@@ -912,7 +916,7 @@ static int atmel_hsmc_nand_pmecc_write_pg(struct nand_chip *chip,
 	struct mtd_info *mtd = nand_to_mtd(chip);
 	struct atmel_nand *nand = to_atmel_nand(chip);
 	struct atmel_hsmc_nand_controller *nc;
-	int ret;
+	int ret, status;
 
 	nc = to_hsmc_nand_controller(chip->controller);
 
@@ -954,6 +958,10 @@ static int atmel_hsmc_nand_pmecc_write_pg(struct nand_chip *chip,
 		dev_err(nc->base.dev, "Failed to program NAND page (err = %d)\n",
 			ret);
 
+	status = chip->waitfunc(mtd, chip);
+	if (status & NAND_STATUS_FAIL)
+		return -EIO;
+
 	return ret;
 }
 
@@ -1175,6 +1183,295 @@ static int atmel_hsmc_nand_ecc_init(struct atmel_nand *nand)
 	return 0;
 }
 
+static int atmel_smc_nand_prepare_smcconf(struct atmel_nand *nand,
+					const struct nand_data_interface *conf,
+					struct atmel_smc_cs_conf *smcconf)
+{
+	u32 ncycles, totalcycles, timeps, mckperiodps;
+	struct atmel_nand_controller *nc;
+	int ret;
+
+	nc = to_nand_controller(nand->base.controller);
+
+	/* DDR interface not supported. */
+	if (conf->type != NAND_SDR_IFACE)
+		return -ENOTSUPP;
+
+	/*
+	 * tRC < 30ns implies EDO mode. This controller does not support this
+	 * mode.
+	 */
+	if (conf->timings.sdr.tRC_min < 30)
+		return -ENOTSUPP;
+
+	atmel_smc_cs_conf_init(smcconf);
+
+	mckperiodps = NSEC_PER_SEC / clk_get_rate(nc->mck);
+	mckperiodps *= 1000;
+
+	/*
+	 * Set write pulse timing. This one is easy to extract:
+	 *
+	 * NWE_PULSE = tWP
+	 */
+	ncycles = DIV_ROUND_UP(conf->timings.sdr.tWP_min, mckperiodps);
+	totalcycles = ncycles;
+	ret = atmel_smc_cs_conf_set_pulse(smcconf, ATMEL_SMC_NWE_SHIFT,
+					  ncycles);
+	if (ret)
+		return ret;
+
+	/*
+	 * The write setup timing depends on the operation done on the NAND.
+	 * All operations goes through the same data bus, but the operation
+	 * type depends on the address we are writing to (ALE/CLE address
+	 * lines).
+	 * Since we have no way to differentiate the different operations at
+	 * the SMC level, we must consider the worst case (the biggest setup
+	 * time among all operation types):
+	 *
+	 * NWE_SETUP = max(tCLS, tCS, tALS, tDS) - NWE_PULSE
+	 */
+	timeps = max3(conf->timings.sdr.tCLS_min, conf->timings.sdr.tCS_min,
+		      conf->timings.sdr.tALS_min);
+	timeps = max(timeps, conf->timings.sdr.tDS_min);
+	ncycles = DIV_ROUND_UP(timeps, mckperiodps);
+	ncycles = ncycles > totalcycles ? ncycles - totalcycles : 0;
+	totalcycles += ncycles;
+	ret = atmel_smc_cs_conf_set_setup(smcconf, ATMEL_SMC_NWE_SHIFT,
+					  ncycles);
+	if (ret)
+		return ret;
+
+	/*
+	 * As for the write setup timing, the write hold timing depends on the
+	 * operation done on the NAND:
+	 *
+	 * NWE_HOLD = max(tCLH, tCH, tALH, tDH, tWH)
+	 */
+	timeps = max3(conf->timings.sdr.tCLH_min, conf->timings.sdr.tCH_min,
+		      conf->timings.sdr.tALH_min);
+	timeps = max3(timeps, conf->timings.sdr.tDH_min,
+		      conf->timings.sdr.tWH_min);
+	ncycles = DIV_ROUND_UP(timeps, mckperiodps);
+	totalcycles += ncycles;
+
+	/*
+	 * The write cycle timing is directly matching tWC, but is also
+	 * dependent on the other timings on the setup and hold timings we
+	 * calculated earlier, which gives:
+	 *
+	 * NWE_CYCLE = max(tWC, NWE_SETUP + NWE_PULSE + NWE_HOLD)
+	 */
+	ncycles = DIV_ROUND_UP(conf->timings.sdr.tWC_min, mckperiodps);
+	ncycles = max(totalcycles, ncycles);
+	ret = atmel_smc_cs_conf_set_cycle(smcconf, ATMEL_SMC_NWE_SHIFT,
+					  ncycles);
+	if (ret)
+		return ret;
+
+	/*
+	 * We don't want the CS line to be toggled between each byte/word
+	 * transfer to the NAND. The only way to guarantee that is to have the
+	 * NCS_{WR,RD}_{SETUP,HOLD} timings set to 0, which in turn means:
+	 *
+	 * NCS_WR_PULSE = NWE_CYCLE
+	 */
+	ret = atmel_smc_cs_conf_set_pulse(smcconf, ATMEL_SMC_NCS_WR_SHIFT,
+					  ncycles);
+	if (ret)
+		return ret;
+
+	/*
+	 * As for the write setup timing, the read hold timing depends on the
+	 * operation done on the NAND:
+	 *
+	 * NRD_HOLD = max(tREH, tRHOH)
+	 */
+	timeps = max(conf->timings.sdr.tREH_min, conf->timings.sdr.tRHOH_min);
+	ncycles = DIV_ROUND_UP(timeps, mckperiodps);
+	totalcycles = ncycles;
+
+	/*
+	 * TDF = tRHZ - NRD_HOLD
+	 */
+	ncycles = DIV_ROUND_UP(conf->timings.sdr.tRHZ_max, mckperiodps);
+	ncycles -= totalcycles;
+
+	/*
+	 * In ONFI 4.0 specs, tRHZ has been increased to support EDO NANDs and
+	 * we might end up with a config that does not fit in the TDF field.
+	 * Just take the max value in this case and hope that the NAND is more
+	 * tolerant than advertised.
+	 */
+	if (ncycles > ATMEL_SMC_MODE_TDF_MAX)
+		ncycles = ATMEL_SMC_MODE_TDF_MAX;
+	else if (ncycles < ATMEL_SMC_MODE_TDF_MIN)
+		ncycles = ATMEL_SMC_MODE_TDF_MIN;
+
+	smcconf->mode |= ATMEL_SMC_MODE_TDF(ncycles) |
+			 ATMEL_SMC_MODE_TDFMODE_OPTIMIZED;
+
+	/*
+	 * Read pulse timing directly matches tRP:
+	 *
+	 * NRD_PULSE = tRP
+	 */
+	ncycles = DIV_ROUND_UP(conf->timings.sdr.tRP_min, mckperiodps);
+	totalcycles += ncycles;
+	ret = atmel_smc_cs_conf_set_pulse(smcconf, ATMEL_SMC_NRD_SHIFT,
+					  ncycles);
+	if (ret)
+		return ret;
+
+	/*
+	 * The write cycle timing is directly matching tWC, but is also
+	 * dependent on the setup and hold timings we calculated earlier,
+	 * which gives:
+	 *
+	 * NRD_CYCLE = max(tRC, NRD_PULSE + NRD_HOLD)
+	 *
+	 * NRD_SETUP is always 0.
+	 */
+	ncycles = DIV_ROUND_UP(conf->timings.sdr.tRC_min, mckperiodps);
+	ncycles = max(totalcycles, ncycles);
+	ret = atmel_smc_cs_conf_set_cycle(smcconf, ATMEL_SMC_NRD_SHIFT,
+					  ncycles);
+	if (ret)
+		return ret;
+
+	/*
+	 * We don't want the CS line to be toggled between each byte/word
+	 * transfer from the NAND. The only way to guarantee that is to have
+	 * the NCS_{WR,RD}_{SETUP,HOLD} timings set to 0, which in turn means:
+	 *
+	 * NCS_RD_PULSE = NRD_CYCLE
+	 */
+	ret = atmel_smc_cs_conf_set_pulse(smcconf, ATMEL_SMC_NCS_RD_SHIFT,
+					  ncycles);
+	if (ret)
+		return ret;
+
+	/* Txxx timings are directly matching tXXX ones. */
+	ncycles = DIV_ROUND_UP(conf->timings.sdr.tCLR_min, mckperiodps);
+	ret = atmel_smc_cs_conf_set_timing(smcconf,
+					   ATMEL_HSMC_TIMINGS_TCLR_SHIFT,
+					   ncycles);
+	if (ret)
+		return ret;
+
+	ncycles = DIV_ROUND_UP(conf->timings.sdr.tADL_min, mckperiodps);
+	ret = atmel_smc_cs_conf_set_timing(smcconf,
+					   ATMEL_HSMC_TIMINGS_TADL_SHIFT,
+					   ncycles);
+	if (ret)
+		return ret;
+
+	ncycles = DIV_ROUND_UP(conf->timings.sdr.tAR_min, mckperiodps);
+	ret = atmel_smc_cs_conf_set_timing(smcconf,
+					   ATMEL_HSMC_TIMINGS_TAR_SHIFT,
+					   ncycles);
+	if (ret)
+		return ret;
+
+	ncycles = DIV_ROUND_UP(conf->timings.sdr.tRR_min, mckperiodps);
+	ret = atmel_smc_cs_conf_set_timing(smcconf,
+					   ATMEL_HSMC_TIMINGS_TRR_SHIFT,
+					   ncycles);
+	if (ret)
+		return ret;
+
+	ncycles = DIV_ROUND_UP(conf->timings.sdr.tWB_max, mckperiodps);
+	ret = atmel_smc_cs_conf_set_timing(smcconf,
+					   ATMEL_HSMC_TIMINGS_TWB_SHIFT,
+					   ncycles);
+	if (ret)
+		return ret;
+
+	/* Attach the CS line to the NFC logic. */
+	smcconf->timings |= ATMEL_HSMC_TIMINGS_NFSEL;
+
+	/* Set the appropriate data bus width. */
+	if (nand->base.options & NAND_BUSWIDTH_16)
+		smcconf->mode |= ATMEL_SMC_MODE_DBW_16;
+
+	/* Operate in NRD/NWE READ/WRITEMODE. */
+	smcconf->mode |= ATMEL_SMC_MODE_READMODE_NRD |
+			 ATMEL_SMC_MODE_WRITEMODE_NWE;
+
+	return 0;
+}
+
+static int atmel_smc_nand_setup_data_interface(struct atmel_nand *nand,
+					int csline,
+					const struct nand_data_interface *conf)
+{
+	struct atmel_nand_controller *nc;
+	struct atmel_smc_cs_conf smcconf;
+	struct atmel_nand_cs *cs;
+	int ret;
+
+	nc = to_nand_controller(nand->base.controller);
+
+	ret = atmel_smc_nand_prepare_smcconf(nand, conf, &smcconf);
+	if (ret)
+		return ret;
+
+	if (csline == NAND_DATA_IFACE_CHECK_ONLY)
+		return 0;
+
+	cs = &nand->cs[csline];
+	cs->smcconf = smcconf;
+	atmel_smc_cs_conf_apply(nc->smc, cs->id, &cs->smcconf);
+
+	return 0;
+}
+
+static int atmel_hsmc_nand_setup_data_interface(struct atmel_nand *nand,
+					int csline,
+					const struct nand_data_interface *conf)
+{
+	struct atmel_nand_controller *nc;
+	struct atmel_smc_cs_conf smcconf;
+	struct atmel_nand_cs *cs;
+	int ret;
+
+	nc = to_nand_controller(nand->base.controller);
+
+	ret = atmel_smc_nand_prepare_smcconf(nand, conf, &smcconf);
+	if (ret)
+		return ret;
+
+	if (csline == NAND_DATA_IFACE_CHECK_ONLY)
+		return 0;
+
+	cs = &nand->cs[csline];
+	cs->smcconf = smcconf;
+
+	if (cs->rb.type == ATMEL_NAND_NATIVE_RB)
+		cs->smcconf.timings |= ATMEL_HSMC_TIMINGS_RBNSEL(cs->rb.id);
+
+	atmel_hsmc_cs_conf_apply(nc->smc, cs->id, &cs->smcconf);
+
+	return 0;
+}
+
+static int atmel_nand_setup_data_interface(struct mtd_info *mtd, int csline,
+					const struct nand_data_interface *conf)
+{
+	struct nand_chip *chip = mtd_to_nand(mtd);
+	struct atmel_nand *nand = to_atmel_nand(chip);
+	struct atmel_nand_controller *nc;
+
+	nc = to_nand_controller(nand->base.controller);
+
+	if (csline >= nand->numcs ||
+	    (csline < 0 && csline != NAND_DATA_IFACE_CHECK_ONLY))
+		return -EINVAL;
+
+	return nc->caps->ops->setup_data_interface(nand, csline, conf);
+}
+
 static void atmel_nand_init(struct atmel_nand_controller *nc,
 			    struct atmel_nand *nand)
 {
@@ -1192,6 +1489,9 @@ static void atmel_nand_init(struct atmel_nand_controller *nc,
 	chip->write_buf = atmel_nand_write_buf;
 	chip->select_chip = atmel_nand_select_chip;
 
+	if (nc->mck && nc->caps->ops->setup_data_interface)
+		chip->setup_data_interface = atmel_nand_setup_data_interface;
+
 	/* Some NANDs require a longer delay than the default one (20us). */
 	chip->chip_delay = 40;
 
@@ -1677,6 +1977,12 @@ static int atmel_nand_controller_init(struct atmel_nand_controller *nc,
 	if (nc->caps->legacy_of_bindings)
 		return 0;
 
+	nc->mck = of_clk_get(dev->parent->of_node, 0);
+	if (IS_ERR(nc->mck)) {
+		dev_err(dev, "Failed to retrieve MCK clk\n");
+		return PTR_ERR(nc->mck);
+	}
+
 	np = of_parse_phandle(dev->parent->of_node, "atmel,smc", 0);
 	if (!np) {
 		dev_err(dev, "Missing or invalid atmel,smc property\n");
@@ -1983,6 +2289,7 @@ static const struct atmel_nand_controller_ops atmel_hsmc_nc_ops = {
 	.remove = atmel_hsmc_nand_controller_remove,
 	.ecc_init = atmel_hsmc_nand_ecc_init,
 	.nand_init = atmel_hsmc_nand_init,
+	.setup_data_interface = atmel_hsmc_nand_setup_data_interface,
 };
 
 static const struct atmel_nand_controller_caps atmel_sama5_nc_caps = {
@@ -2037,7 +2344,14 @@ atmel_smc_nand_controller_remove(struct atmel_nand_controller *nc)
 	return 0;
 }
 
-static const struct atmel_nand_controller_ops atmel_smc_nc_ops = {
+/*
+ * The SMC reg layout of at91rm9200 is completely different which prevents us
+ * from re-using atmel_smc_nand_setup_data_interface() for the
+ * ->setup_data_interface() hook.
+ * At this point, there's no support for the at91rm9200 SMC IP, so we leave
+ * ->setup_data_interface() unassigned.
+ */
+static const struct atmel_nand_controller_ops at91rm9200_nc_ops = {
 	.probe = atmel_smc_nand_controller_probe,
 	.remove = atmel_smc_nand_controller_remove,
 	.ecc_init = atmel_nand_ecc_init,
@@ -2045,6 +2359,20 @@ static const struct atmel_nand_controller_ops atmel_smc_nc_ops = {
 };
 
 static const struct atmel_nand_controller_caps atmel_rm9200_nc_caps = {
+	.ale_offs = BIT(21),
+	.cle_offs = BIT(22),
+	.ops = &at91rm9200_nc_ops,
+};
+
+static const struct atmel_nand_controller_ops atmel_smc_nc_ops = {
+	.probe = atmel_smc_nand_controller_probe,
+	.remove = atmel_smc_nand_controller_remove,
+	.ecc_init = atmel_nand_ecc_init,
+	.nand_init = atmel_smc_nand_init,
+	.setup_data_interface = atmel_smc_nand_setup_data_interface,
+};
+
+static const struct atmel_nand_controller_caps atmel_sam9260_nc_caps = {
 	.ale_offs = BIT(21),
 	.cle_offs = BIT(22),
 	.ops = &atmel_smc_nc_ops,
@@ -2093,7 +2421,7 @@ static const struct of_device_id atmel_nand_controller_of_ids[] = {
 	},
 	{
 		.compatible = "atmel,at91sam9260-nand-controller",
-		.data = &atmel_rm9200_nc_caps,
+		.data = &atmel_sam9260_nc_caps,
 	},
 	{
 		.compatible = "atmel,at91sam9261-nand-controller",
@@ -2181,6 +2509,24 @@ static int atmel_nand_controller_remove(struct platform_device *pdev)
 	return nc->caps->ops->remove(nc);
 }
 
+static __maybe_unused int atmel_nand_controller_resume(struct device *dev)
+{
+	struct atmel_nand_controller *nc = dev_get_drvdata(dev);
+	struct atmel_nand *nand;
+
+	list_for_each_entry(nand, &nc->chips, node) {
+		int i;
+
+		for (i = 0; i < nand->numcs; i++)
+			nand_reset(&nand->base, i);
+	}
+
+	return 0;
+}
+
+static SIMPLE_DEV_PM_OPS(atmel_nand_controller_pm_ops, NULL,
+			 atmel_nand_controller_resume);
+
 static struct platform_driver atmel_nand_controller_driver = {
 	.driver = {
 		.name = "atmel-nand-controller",

+ 2 - 0
drivers/mtd/nand/bcm47xxnflash/ops_bcm4706.c

@@ -392,6 +392,8 @@ int bcm47xxnflash_ops_bcm4706_init(struct bcm47xxnflash *b47n)
 	b47n->nand_chip.read_byte = bcm47xxnflash_ops_bcm4706_read_byte;
 	b47n->nand_chip.read_buf = bcm47xxnflash_ops_bcm4706_read_buf;
 	b47n->nand_chip.write_buf = bcm47xxnflash_ops_bcm4706_write_buf;
+	b47n->nand_chip.onfi_set_features = nand_onfi_get_set_features_notsupp;
+	b47n->nand_chip.onfi_get_features = nand_onfi_get_set_features_notsupp;
 
 	nand_chip->chip_delay = 50;
 	b47n->nand_chip.bbt_options = NAND_BBT_USE_FLASH;

+ 2 - 0
drivers/mtd/nand/cafe_nand.c

@@ -654,6 +654,8 @@ static int cafe_nand_probe(struct pci_dev *pdev,
 	cafe->nand.read_buf = cafe_read_buf;
 	cafe->nand.write_buf = cafe_write_buf;
 	cafe->nand.select_chip = cafe_select_chip;
+	cafe->nand.onfi_set_features = nand_onfi_get_set_features_notsupp;
+	cafe->nand.onfi_get_features = nand_onfi_get_set_features_notsupp;
 
 	cafe->nand.chip_delay = 0;
 

+ 3 - 0
drivers/mtd/nand/davinci_nand.c

@@ -771,11 +771,14 @@ static int nand_davinci_probe(struct platform_device *pdev)
 			info->chip.ecc.hwctl = nand_davinci_hwctl_4bit;
 			info->chip.ecc.bytes = 10;
 			info->chip.ecc.options = NAND_ECC_GENERIC_ERASED_CHECK;
+			info->chip.ecc.algo = NAND_ECC_BCH;
 		} else {
+			/* 1bit ecc hamming */
 			info->chip.ecc.calculate = nand_davinci_calculate_1bit;
 			info->chip.ecc.correct = nand_davinci_correct_1bit;
 			info->chip.ecc.hwctl = nand_davinci_hwctl_1bit;
 			info->chip.ecc.bytes = 3;
+			info->chip.ecc.algo = NAND_ECC_HAMMING;
 		}
 		info->chip.ecc.size = 512;
 		info->chip.ecc.strength = pdata->ecc_bits;

+ 812 - 1019
drivers/mtd/nand/denali.c

@@ -23,50 +23,43 @@
 #include <linux/mutex.h>
 #include <linux/mtd/mtd.h>
 #include <linux/module.h>
+#include <linux/slab.h>
 
 #include "denali.h"
 
 MODULE_LICENSE("GPL");
 
-/*
- * We define a module parameter that allows the user to override
- * the hardware and decide what timing mode should be used.
- */
-#define NAND_DEFAULT_TIMINGS	-1
+#define DENALI_NAND_NAME    "denali-nand"
 
-static int onfi_timing_mode = NAND_DEFAULT_TIMINGS;
-module_param(onfi_timing_mode, int, S_IRUGO);
-MODULE_PARM_DESC(onfi_timing_mode,
-	   "Overrides default ONFI setting. -1 indicates use default timings");
+/* Host Data/Command Interface */
+#define DENALI_HOST_ADDR	0x00
+#define DENALI_HOST_DATA	0x10
 
-#define DENALI_NAND_NAME    "denali-nand"
+#define DENALI_MAP00		(0 << 26)	/* direct access to buffer */
+#define DENALI_MAP01		(1 << 26)	/* read/write pages in PIO */
+#define DENALI_MAP10		(2 << 26)	/* high-level control plane */
+#define DENALI_MAP11		(3 << 26)	/* direct controller access */
 
-/*
- * We define a macro here that combines all interrupts this driver uses into
- * a single constant value, for convenience.
- */
-#define DENALI_IRQ_ALL	(INTR__DMA_CMD_COMP | \
-			INTR__ECC_TRANSACTION_DONE | \
-			INTR__ECC_ERR | \
-			INTR__PROGRAM_FAIL | \
-			INTR__LOAD_COMP | \
-			INTR__PROGRAM_COMP | \
-			INTR__TIME_OUT | \
-			INTR__ERASE_FAIL | \
-			INTR__RST_COMP | \
-			INTR__ERASE_COMP)
+/* MAP11 access cycle type */
+#define DENALI_MAP11_CMD	((DENALI_MAP11) | 0)	/* command cycle */
+#define DENALI_MAP11_ADDR	((DENALI_MAP11) | 1)	/* address cycle */
+#define DENALI_MAP11_DATA	((DENALI_MAP11) | 2)	/* data cycle */
 
-/*
- * indicates whether or not the internal value for the flash bank is
- * valid or not
- */
-#define CHIP_SELECT_INVALID	-1
+/* MAP10 commands */
+#define DENALI_ERASE		0x01
+
+#define DENALI_BANK(denali)	((denali)->active_bank << 24)
+
+#define DENALI_INVALID_BANK	-1
+#define DENALI_NR_BANKS		4
 
 /*
- * This macro divides two integers and rounds fractional values up
- * to the nearest integer value.
+ * The bus interface clock, clk_x, is phase aligned with the core clock.  The
+ * clk_x is an integral multiple N of the core clk.  The value N is configured
+ * at IP delivery time, and its available value is 4, 5, or 6.  We need to align
+ * to the largest value to make it work with any possible configuration.
  */
-#define CEIL_DIV(X, Y) (((X)%(Y)) ? ((X)/(Y)+1) : ((X)/(Y)))
+#define DENALI_CLK_X_MULT	6
 
 /*
  * this macro allows us to convert from an MTD structure to our own
@@ -77,339 +70,11 @@ static inline struct denali_nand_info *mtd_to_denali(struct mtd_info *mtd)
 	return container_of(mtd_to_nand(mtd), struct denali_nand_info, nand);
 }
 
-/*
- * These constants are defined by the driver to enable common driver
- * configuration options.
- */
-#define SPARE_ACCESS		0x41
-#define MAIN_ACCESS		0x42
-#define MAIN_SPARE_ACCESS	0x43
-
-#define DENALI_READ	0
-#define DENALI_WRITE	0x100
-
-/*
- * this is a helper macro that allows us to
- * format the bank into the proper bits for the controller
- */
-#define BANK(x) ((x) << 24)
-
-/* forward declarations */
-static void clear_interrupts(struct denali_nand_info *denali);
-static uint32_t wait_for_irq(struct denali_nand_info *denali,
-							uint32_t irq_mask);
-static void denali_irq_enable(struct denali_nand_info *denali,
-							uint32_t int_mask);
-static uint32_t read_interrupt_status(struct denali_nand_info *denali);
-
-/*
- * Certain operations for the denali NAND controller use an indexed mode to
- * read/write data. The operation is performed by writing the address value
- * of the command to the device memory followed by the data. This function
- * abstracts this common operation.
- */
-static void index_addr(struct denali_nand_info *denali,
-				uint32_t address, uint32_t data)
-{
-	iowrite32(address, denali->flash_mem);
-	iowrite32(data, denali->flash_mem + 0x10);
-}
-
-/* Perform an indexed read of the device */
-static void index_addr_read_data(struct denali_nand_info *denali,
-				 uint32_t address, uint32_t *pdata)
-{
-	iowrite32(address, denali->flash_mem);
-	*pdata = ioread32(denali->flash_mem + 0x10);
-}
-
-/*
- * We need to buffer some data for some of the NAND core routines.
- * The operations manage buffering that data.
- */
-static void reset_buf(struct denali_nand_info *denali)
-{
-	denali->buf.head = denali->buf.tail = 0;
-}
-
-static void write_byte_to_buf(struct denali_nand_info *denali, uint8_t byte)
-{
-	denali->buf.buf[denali->buf.tail++] = byte;
-}
-
-/* reads the status of the device */
-static void read_status(struct denali_nand_info *denali)
-{
-	uint32_t cmd;
-
-	/* initialize the data buffer to store status */
-	reset_buf(denali);
-
-	cmd = ioread32(denali->flash_reg + WRITE_PROTECT);
-	if (cmd)
-		write_byte_to_buf(denali, NAND_STATUS_WP);
-	else
-		write_byte_to_buf(denali, 0);
-}
-
-/* resets a specific device connected to the core */
-static void reset_bank(struct denali_nand_info *denali)
-{
-	uint32_t irq_status;
-	uint32_t irq_mask = INTR__RST_COMP | INTR__TIME_OUT;
-
-	clear_interrupts(denali);
-
-	iowrite32(1 << denali->flash_bank, denali->flash_reg + DEVICE_RESET);
-
-	irq_status = wait_for_irq(denali, irq_mask);
-
-	if (irq_status & INTR__TIME_OUT)
-		dev_err(denali->dev, "reset bank failed.\n");
-}
-
-/* Reset the flash controller */
-static uint16_t denali_nand_reset(struct denali_nand_info *denali)
-{
-	int i;
-
-	for (i = 0; i < denali->max_banks; i++)
-		iowrite32(INTR__RST_COMP | INTR__TIME_OUT,
-		denali->flash_reg + INTR_STATUS(i));
-
-	for (i = 0; i < denali->max_banks; i++) {
-		iowrite32(1 << i, denali->flash_reg + DEVICE_RESET);
-		while (!(ioread32(denali->flash_reg + INTR_STATUS(i)) &
-			(INTR__RST_COMP | INTR__TIME_OUT)))
-			cpu_relax();
-		if (ioread32(denali->flash_reg + INTR_STATUS(i)) &
-			INTR__TIME_OUT)
-			dev_dbg(denali->dev,
-			"NAND Reset operation timed out on bank %d\n", i);
-	}
-
-	for (i = 0; i < denali->max_banks; i++)
-		iowrite32(INTR__RST_COMP | INTR__TIME_OUT,
-			  denali->flash_reg + INTR_STATUS(i));
-
-	return PASS;
-}
-
-/*
- * this routine calculates the ONFI timing values for a given mode and
- * programs the clocking register accordingly. The mode is determined by
- * the get_onfi_nand_para routine.
- */
-static void nand_onfi_timing_set(struct denali_nand_info *denali,
-								uint16_t mode)
-{
-	uint16_t Trea[6] = {40, 30, 25, 20, 20, 16};
-	uint16_t Trp[6] = {50, 25, 17, 15, 12, 10};
-	uint16_t Treh[6] = {30, 15, 15, 10, 10, 7};
-	uint16_t Trc[6] = {100, 50, 35, 30, 25, 20};
-	uint16_t Trhoh[6] = {0, 15, 15, 15, 15, 15};
-	uint16_t Trloh[6] = {0, 0, 0, 0, 5, 5};
-	uint16_t Tcea[6] = {100, 45, 30, 25, 25, 25};
-	uint16_t Tadl[6] = {200, 100, 100, 100, 70, 70};
-	uint16_t Trhw[6] = {200, 100, 100, 100, 100, 100};
-	uint16_t Trhz[6] = {200, 100, 100, 100, 100, 100};
-	uint16_t Twhr[6] = {120, 80, 80, 60, 60, 60};
-	uint16_t Tcs[6] = {70, 35, 25, 25, 20, 15};
-
-	uint16_t data_invalid_rhoh, data_invalid_rloh, data_invalid;
-	uint16_t dv_window = 0;
-	uint16_t en_lo, en_hi;
-	uint16_t acc_clks;
-	uint16_t addr_2_data, re_2_we, re_2_re, we_2_re, cs_cnt;
-
-	en_lo = CEIL_DIV(Trp[mode], CLK_X);
-	en_hi = CEIL_DIV(Treh[mode], CLK_X);
-#if ONFI_BLOOM_TIME
-	if ((en_hi * CLK_X) < (Treh[mode] + 2))
-		en_hi++;
-#endif
-
-	if ((en_lo + en_hi) * CLK_X < Trc[mode])
-		en_lo += CEIL_DIV((Trc[mode] - (en_lo + en_hi) * CLK_X), CLK_X);
-
-	if ((en_lo + en_hi) < CLK_MULTI)
-		en_lo += CLK_MULTI - en_lo - en_hi;
-
-	while (dv_window < 8) {
-		data_invalid_rhoh = en_lo * CLK_X + Trhoh[mode];
-
-		data_invalid_rloh = (en_lo + en_hi) * CLK_X + Trloh[mode];
-
-		data_invalid = data_invalid_rhoh < data_invalid_rloh ?
-					data_invalid_rhoh : data_invalid_rloh;
-
-		dv_window = data_invalid - Trea[mode];
-
-		if (dv_window < 8)
-			en_lo++;
-	}
-
-	acc_clks = CEIL_DIV(Trea[mode], CLK_X);
-
-	while (acc_clks * CLK_X - Trea[mode] < 3)
-		acc_clks++;
-
-	if (data_invalid - acc_clks * CLK_X < 2)
-		dev_warn(denali->dev, "%s, Line %d: Warning!\n",
-			 __FILE__, __LINE__);
-
-	addr_2_data = CEIL_DIV(Tadl[mode], CLK_X);
-	re_2_we = CEIL_DIV(Trhw[mode], CLK_X);
-	re_2_re = CEIL_DIV(Trhz[mode], CLK_X);
-	we_2_re = CEIL_DIV(Twhr[mode], CLK_X);
-	cs_cnt = CEIL_DIV((Tcs[mode] - Trp[mode]), CLK_X);
-	if (cs_cnt == 0)
-		cs_cnt = 1;
-
-	if (Tcea[mode]) {
-		while (cs_cnt * CLK_X + Trea[mode] < Tcea[mode])
-			cs_cnt++;
-	}
-
-#if MODE5_WORKAROUND
-	if (mode == 5)
-		acc_clks = 5;
-#endif
-
-	/* Sighting 3462430: Temporary hack for MT29F128G08CJABAWP:B */
-	if (ioread32(denali->flash_reg + MANUFACTURER_ID) == 0 &&
-		ioread32(denali->flash_reg + DEVICE_ID) == 0x88)
-		acc_clks = 6;
-
-	iowrite32(acc_clks, denali->flash_reg + ACC_CLKS);
-	iowrite32(re_2_we, denali->flash_reg + RE_2_WE);
-	iowrite32(re_2_re, denali->flash_reg + RE_2_RE);
-	iowrite32(we_2_re, denali->flash_reg + WE_2_RE);
-	iowrite32(addr_2_data, denali->flash_reg + ADDR_2_DATA);
-	iowrite32(en_lo, denali->flash_reg + RDWR_EN_LO_CNT);
-	iowrite32(en_hi, denali->flash_reg + RDWR_EN_HI_CNT);
-	iowrite32(cs_cnt, denali->flash_reg + CS_SETUP_CNT);
-}
-
-/* queries the NAND device to see what ONFI modes it supports. */
-static uint16_t get_onfi_nand_para(struct denali_nand_info *denali)
+static void denali_host_write(struct denali_nand_info *denali,
+			      uint32_t addr, uint32_t data)
 {
-	int i;
-
-	/*
-	 * we needn't to do a reset here because driver has already
-	 * reset all the banks before
-	 */
-	if (!(ioread32(denali->flash_reg + ONFI_TIMING_MODE) &
-		ONFI_TIMING_MODE__VALUE))
-		return FAIL;
-
-	for (i = 5; i > 0; i--) {
-		if (ioread32(denali->flash_reg + ONFI_TIMING_MODE) &
-			(0x01 << i))
-			break;
-	}
-
-	nand_onfi_timing_set(denali, i);
-
-	/*
-	 * By now, all the ONFI devices we know support the page cache
-	 * rw feature. So here we enable the pipeline_rw_ahead feature
-	 */
-	/* iowrite32(1, denali->flash_reg + CACHE_WRITE_ENABLE); */
-	/* iowrite32(1, denali->flash_reg + CACHE_READ_ENABLE);  */
-
-	return PASS;
-}
-
-static void get_samsung_nand_para(struct denali_nand_info *denali,
-							uint8_t device_id)
-{
-	if (device_id == 0xd3) { /* Samsung K9WAG08U1A */
-		/* Set timing register values according to datasheet */
-		iowrite32(5, denali->flash_reg + ACC_CLKS);
-		iowrite32(20, denali->flash_reg + RE_2_WE);
-		iowrite32(12, denali->flash_reg + WE_2_RE);
-		iowrite32(14, denali->flash_reg + ADDR_2_DATA);
-		iowrite32(3, denali->flash_reg + RDWR_EN_LO_CNT);
-		iowrite32(2, denali->flash_reg + RDWR_EN_HI_CNT);
-		iowrite32(2, denali->flash_reg + CS_SETUP_CNT);
-	}
-}
-
-static void get_toshiba_nand_para(struct denali_nand_info *denali)
-{
-	/*
-	 * Workaround to fix a controller bug which reports a wrong
-	 * spare area size for some kind of Toshiba NAND device
-	 */
-	if ((ioread32(denali->flash_reg + DEVICE_MAIN_AREA_SIZE) == 4096) &&
-		(ioread32(denali->flash_reg + DEVICE_SPARE_AREA_SIZE) == 64))
-		iowrite32(216, denali->flash_reg + DEVICE_SPARE_AREA_SIZE);
-}
-
-static void get_hynix_nand_para(struct denali_nand_info *denali,
-							uint8_t device_id)
-{
-	switch (device_id) {
-	case 0xD5: /* Hynix H27UAG8T2A, H27UBG8U5A or H27UCG8VFA */
-	case 0xD7: /* Hynix H27UDG8VEM, H27UCG8UDM or H27UCG8V5A */
-		iowrite32(128, denali->flash_reg + PAGES_PER_BLOCK);
-		iowrite32(4096, denali->flash_reg + DEVICE_MAIN_AREA_SIZE);
-		iowrite32(224, denali->flash_reg + DEVICE_SPARE_AREA_SIZE);
-		iowrite32(0, denali->flash_reg + DEVICE_WIDTH);
-		break;
-	default:
-		dev_warn(denali->dev,
-			 "Unknown Hynix NAND (Device ID: 0x%x).\n"
-			 "Will use default parameter values instead.\n",
-			 device_id);
-	}
-}
-
-/*
- * determines how many NAND chips are connected to the controller. Note for
- * Intel CE4100 devices we don't support more than one device.
- */
-static void find_valid_banks(struct denali_nand_info *denali)
-{
-	uint32_t id[denali->max_banks];
-	int i;
-
-	denali->total_used_banks = 1;
-	for (i = 0; i < denali->max_banks; i++) {
-		index_addr(denali, MODE_11 | (i << 24) | 0, 0x90);
-		index_addr(denali, MODE_11 | (i << 24) | 1, 0);
-		index_addr_read_data(denali, MODE_11 | (i << 24) | 2, &id[i]);
-
-		dev_dbg(denali->dev,
-			"Return 1st ID for bank[%d]: %x\n", i, id[i]);
-
-		if (i == 0) {
-			if (!(id[i] & 0x0ff))
-				break; /* WTF? */
-		} else {
-			if ((id[i] & 0x0ff) == (id[0] & 0x0ff))
-				denali->total_used_banks++;
-			else
-				break;
-		}
-	}
-
-	if (denali->platform == INTEL_CE4100) {
-		/*
-		 * Platform limitations of the CE4100 device limit
-		 * users to a single chip solution for NAND.
-		 * Multichip support is not enabled.
-		 */
-		if (denali->total_used_banks != 1) {
-			dev_err(denali->dev,
-				"Sorry, Intel CE4100 only supports a single NAND device.\n");
-			BUG();
-		}
-	}
-	dev_dbg(denali->dev,
-		"denali->total_used_banks: %d\n", denali->total_used_banks);
+	iowrite32(addr, denali->host + DENALI_HOST_ADDR);
+	iowrite32(data, denali->host + DENALI_HOST_DATA);
 }
 
 /*
@@ -418,7 +83,7 @@ static void find_valid_banks(struct denali_nand_info *denali)
  */
 static void detect_max_banks(struct denali_nand_info *denali)
 {
-	uint32_t features = ioread32(denali->flash_reg + FEATURES);
+	uint32_t features = ioread32(denali->reg + FEATURES);
 
 	denali->max_banks = 1 << (features & FEATURES__N_BANKS);
 
@@ -427,227 +92,120 @@ static void detect_max_banks(struct denali_nand_info *denali)
 		denali->max_banks <<= 1;
 }
 
-static uint16_t denali_nand_timing_set(struct denali_nand_info *denali)
+static void denali_enable_irq(struct denali_nand_info *denali)
 {
-	uint16_t status = PASS;
-	uint32_t id_bytes[8], addr;
-	uint8_t maf_id, device_id;
 	int i;
 
-	/*
-	 * Use read id method to get device ID and other params.
-	 * For some NAND chips, controller can't report the correct
-	 * device ID by reading from DEVICE_ID register
-	 */
-	addr = MODE_11 | BANK(denali->flash_bank);
-	index_addr(denali, addr | 0, 0x90);
-	index_addr(denali, addr | 1, 0);
-	for (i = 0; i < 8; i++)
-		index_addr_read_data(denali, addr | 2, &id_bytes[i]);
-	maf_id = id_bytes[0];
-	device_id = id_bytes[1];
-
-	if (ioread32(denali->flash_reg + ONFI_DEVICE_NO_OF_LUNS) &
-		ONFI_DEVICE_NO_OF_LUNS__ONFI_DEVICE) { /* ONFI 1.0 NAND */
-		if (FAIL == get_onfi_nand_para(denali))
-			return FAIL;
-	} else if (maf_id == 0xEC) { /* Samsung NAND */
-		get_samsung_nand_para(denali, device_id);
-	} else if (maf_id == 0x98) { /* Toshiba NAND */
-		get_toshiba_nand_para(denali);
-	} else if (maf_id == 0xAD) { /* Hynix NAND */
-		get_hynix_nand_para(denali, device_id);
-	}
-
-	dev_info(denali->dev,
-			"Dump timing register values:\n"
-			"acc_clks: %d, re_2_we: %d, re_2_re: %d\n"
-			"we_2_re: %d, addr_2_data: %d, rdwr_en_lo_cnt: %d\n"
-			"rdwr_en_hi_cnt: %d, cs_setup_cnt: %d\n",
-			ioread32(denali->flash_reg + ACC_CLKS),
-			ioread32(denali->flash_reg + RE_2_WE),
-			ioread32(denali->flash_reg + RE_2_RE),
-			ioread32(denali->flash_reg + WE_2_RE),
-			ioread32(denali->flash_reg + ADDR_2_DATA),
-			ioread32(denali->flash_reg + RDWR_EN_LO_CNT),
-			ioread32(denali->flash_reg + RDWR_EN_HI_CNT),
-			ioread32(denali->flash_reg + CS_SETUP_CNT));
-
-	find_valid_banks(denali);
-
-	/*
-	 * If the user specified to override the default timings
-	 * with a specific ONFI mode, we apply those changes here.
-	 */
-	if (onfi_timing_mode != NAND_DEFAULT_TIMINGS)
-		nand_onfi_timing_set(denali, onfi_timing_mode);
-
-	return status;
+	for (i = 0; i < DENALI_NR_BANKS; i++)
+		iowrite32(U32_MAX, denali->reg + INTR_EN(i));
+	iowrite32(GLOBAL_INT_EN_FLAG, denali->reg + GLOBAL_INT_ENABLE);
 }
 
-static void denali_set_intr_modes(struct denali_nand_info *denali,
-					uint16_t INT_ENABLE)
+static void denali_disable_irq(struct denali_nand_info *denali)
 {
-	if (INT_ENABLE)
-		iowrite32(1, denali->flash_reg + GLOBAL_INT_ENABLE);
-	else
-		iowrite32(0, denali->flash_reg + GLOBAL_INT_ENABLE);
-}
-
-/*
- * validation function to verify that the controlling software is making
- * a valid request
- */
-static inline bool is_flash_bank_valid(int flash_bank)
-{
-	return flash_bank >= 0 && flash_bank < 4;
-}
-
-static void denali_irq_init(struct denali_nand_info *denali)
-{
-	uint32_t int_mask;
 	int i;
 
-	/* Disable global interrupts */
-	denali_set_intr_modes(denali, false);
-
-	int_mask = DENALI_IRQ_ALL;
-
-	/* Clear all status bits */
-	for (i = 0; i < denali->max_banks; ++i)
-		iowrite32(0xFFFF, denali->flash_reg + INTR_STATUS(i));
-
-	denali_irq_enable(denali, int_mask);
+	for (i = 0; i < DENALI_NR_BANKS; i++)
+		iowrite32(0, denali->reg + INTR_EN(i));
+	iowrite32(0, denali->reg + GLOBAL_INT_ENABLE);
 }
 
-static void denali_irq_cleanup(int irqnum, struct denali_nand_info *denali)
+static void denali_clear_irq(struct denali_nand_info *denali,
+			     int bank, uint32_t irq_status)
 {
-	denali_set_intr_modes(denali, false);
+	/* write one to clear bits */
+	iowrite32(irq_status, denali->reg + INTR_STATUS(bank));
 }
 
-static void denali_irq_enable(struct denali_nand_info *denali,
-							uint32_t int_mask)
+static void denali_clear_irq_all(struct denali_nand_info *denali)
 {
 	int i;
 
-	for (i = 0; i < denali->max_banks; ++i)
-		iowrite32(int_mask, denali->flash_reg + INTR_EN(i));
+	for (i = 0; i < DENALI_NR_BANKS; i++)
+		denali_clear_irq(denali, i, U32_MAX);
 }
 
-/*
- * This function only returns when an interrupt that this driver cares about
- * occurs. This is to reduce the overhead of servicing interrupts
- */
-static inline uint32_t denali_irq_detected(struct denali_nand_info *denali)
+static irqreturn_t denali_isr(int irq, void *dev_id)
 {
-	return read_interrupt_status(denali) & DENALI_IRQ_ALL;
-}
+	struct denali_nand_info *denali = dev_id;
+	irqreturn_t ret = IRQ_NONE;
+	uint32_t irq_status;
+	int i;
 
-/* Interrupts are cleared by writing a 1 to the appropriate status bit */
-static inline void clear_interrupt(struct denali_nand_info *denali,
-							uint32_t irq_mask)
-{
-	uint32_t intr_status_reg;
+	spin_lock(&denali->irq_lock);
 
-	intr_status_reg = INTR_STATUS(denali->flash_bank);
+	for (i = 0; i < DENALI_NR_BANKS; i++) {
+		irq_status = ioread32(denali->reg + INTR_STATUS(i));
+		if (irq_status)
+			ret = IRQ_HANDLED;
 
-	iowrite32(irq_mask, denali->flash_reg + intr_status_reg);
-}
+		denali_clear_irq(denali, i, irq_status);
 
-static void clear_interrupts(struct denali_nand_info *denali)
-{
-	uint32_t status;
+		if (i != denali->active_bank)
+			continue;
 
-	spin_lock_irq(&denali->irq_lock);
+		denali->irq_status |= irq_status;
 
-	status = read_interrupt_status(denali);
-	clear_interrupt(denali, status);
+		if (denali->irq_status & denali->irq_mask)
+			complete(&denali->complete);
+	}
+
+	spin_unlock(&denali->irq_lock);
 
-	denali->irq_status = 0x0;
-	spin_unlock_irq(&denali->irq_lock);
+	return ret;
 }
 
-static uint32_t read_interrupt_status(struct denali_nand_info *denali)
+static void denali_reset_irq(struct denali_nand_info *denali)
 {
-	uint32_t intr_status_reg;
-
-	intr_status_reg = INTR_STATUS(denali->flash_bank);
+	unsigned long flags;
 
-	return ioread32(denali->flash_reg + intr_status_reg);
+	spin_lock_irqsave(&denali->irq_lock, flags);
+	denali->irq_status = 0;
+	denali->irq_mask = 0;
+	spin_unlock_irqrestore(&denali->irq_lock, flags);
 }
 
-/*
- * This is the interrupt service routine. It handles all interrupts
- * sent to this device. Note that on CE4100, this is a shared interrupt.
- */
-static irqreturn_t denali_isr(int irq, void *dev_id)
+static uint32_t denali_wait_for_irq(struct denali_nand_info *denali,
+				    uint32_t irq_mask)
 {
-	struct denali_nand_info *denali = dev_id;
+	unsigned long time_left, flags;
 	uint32_t irq_status;
-	irqreturn_t result = IRQ_NONE;
 
-	spin_lock(&denali->irq_lock);
+	spin_lock_irqsave(&denali->irq_lock, flags);
 
-	/* check to see if a valid NAND chip has been selected. */
-	if (is_flash_bank_valid(denali->flash_bank)) {
-		/*
-		 * check to see if controller generated the interrupt,
-		 * since this is a shared interrupt
-		 */
-		irq_status = denali_irq_detected(denali);
-		if (irq_status != 0) {
-			/* handle interrupt */
-			/* first acknowledge it */
-			clear_interrupt(denali, irq_status);
-			/*
-			 * store the status in the device context for someone
-			 * to read
-			 */
-			denali->irq_status |= irq_status;
-			/* notify anyone who cares that it happened */
-			complete(&denali->complete);
-			/* tell the OS that we've handled this */
-			result = IRQ_HANDLED;
-		}
+	irq_status = denali->irq_status;
+
+	if (irq_mask & irq_status) {
+		/* return immediately if the IRQ has already happened. */
+		spin_unlock_irqrestore(&denali->irq_lock, flags);
+		return irq_status;
 	}
-	spin_unlock(&denali->irq_lock);
-	return result;
-}
 
-static uint32_t wait_for_irq(struct denali_nand_info *denali, uint32_t irq_mask)
-{
-	unsigned long comp_res;
-	uint32_t intr_status;
-	unsigned long timeout = msecs_to_jiffies(1000);
+	denali->irq_mask = irq_mask;
+	reinit_completion(&denali->complete);
+	spin_unlock_irqrestore(&denali->irq_lock, flags);
 
-	do {
-		comp_res =
-			wait_for_completion_timeout(&denali->complete, timeout);
-		spin_lock_irq(&denali->irq_lock);
-		intr_status = denali->irq_status;
-
-		if (intr_status & irq_mask) {
-			denali->irq_status &= ~irq_mask;
-			spin_unlock_irq(&denali->irq_lock);
-			/* our interrupt was detected */
-			break;
-		}
+	time_left = wait_for_completion_timeout(&denali->complete,
+						msecs_to_jiffies(1000));
+	if (!time_left) {
+		dev_err(denali->dev, "timeout while waiting for irq 0x%x\n",
+			denali->irq_mask);
+		return 0;
+	}
 
-		/*
-		 * these are not the interrupts you are looking for -
-		 * need to wait again
-		 */
-		spin_unlock_irq(&denali->irq_lock);
-	} while (comp_res != 0);
+	return denali->irq_status;
+}
+
+static uint32_t denali_check_irq(struct denali_nand_info *denali)
+{
+	unsigned long flags;
+	uint32_t irq_status;
 
-	if (comp_res == 0) {
-		/* timeout */
-		pr_err("timeout occurred, status = 0x%x, mask = 0x%x\n",
-				intr_status, irq_mask);
+	spin_lock_irqsave(&denali->irq_lock, flags);
+	irq_status = denali->irq_status;
+	spin_unlock_irqrestore(&denali->irq_lock, flags);
 
-		intr_status = 0;
-	}
-	return intr_status;
+	return irq_status;
 }
 
 /*
@@ -664,153 +222,111 @@ static void setup_ecc_for_xfer(struct denali_nand_info *denali, bool ecc_en,
 	transfer_spare_flag = transfer_spare ? TRANSFER_SPARE_REG__FLAG : 0;
 
 	/* Enable spare area/ECC per user's request. */
-	iowrite32(ecc_en_flag, denali->flash_reg + ECC_ENABLE);
-	iowrite32(transfer_spare_flag, denali->flash_reg + TRANSFER_SPARE_REG);
+	iowrite32(ecc_en_flag, denali->reg + ECC_ENABLE);
+	iowrite32(transfer_spare_flag, denali->reg + TRANSFER_SPARE_REG);
 }
 
-/*
- * sends a pipeline command operation to the controller. See the Denali NAND
- * controller's user guide for more information (section 4.2.3.6).
- */
-static int denali_send_pipeline_cmd(struct denali_nand_info *denali,
-				    bool ecc_en, bool transfer_spare,
-				    int access_type, int op)
+static void denali_read_buf(struct mtd_info *mtd, uint8_t *buf, int len)
 {
 {
-	int status = PASS;
-	uint32_t addr, cmd;
-
-	setup_ecc_for_xfer(denali, ecc_en, transfer_spare);
+	struct denali_nand_info *denali = mtd_to_denali(mtd);
+	int i;
 
 
-	clear_interrupts(denali);
+	iowrite32(DENALI_MAP11_DATA | DENALI_BANK(denali),
+		  denali->host + DENALI_HOST_ADDR);
 
 
-	addr = BANK(denali->flash_bank) | denali->page;
+	for (i = 0; i < len; i++)
+		buf[i] = ioread32(denali->host + DENALI_HOST_DATA);
+}
 
 
-	if (op == DENALI_WRITE && access_type != SPARE_ACCESS) {
-		cmd = MODE_01 | addr;
-		iowrite32(cmd, denali->flash_mem);
-	} else if (op == DENALI_WRITE && access_type == SPARE_ACCESS) {
-		/* read spare area */
-		cmd = MODE_10 | addr;
-		index_addr(denali, cmd, access_type);
+static void denali_write_buf(struct mtd_info *mtd, const uint8_t *buf, int len)
+{
+	struct denali_nand_info *denali = mtd_to_denali(mtd);
+	int i;
 
 
-		cmd = MODE_01 | addr;
-		iowrite32(cmd, denali->flash_mem);
-	} else if (op == DENALI_READ) {
-		/* setup page read request for access type */
-		cmd = MODE_10 | addr;
-		index_addr(denali, cmd, access_type);
+	iowrite32(DENALI_MAP11_DATA | DENALI_BANK(denali),
+		  denali->host + DENALI_HOST_ADDR);
 
 
-		cmd = MODE_01 | addr;
-		iowrite32(cmd, denali->flash_mem);
-	}
-	return status;
+	for (i = 0; i < len; i++)
+		iowrite32(buf[i], denali->host + DENALI_HOST_DATA);
 }
 }
 
 
-/* helper function that simply writes a buffer to the flash */
-static int write_data_to_flash_mem(struct denali_nand_info *denali,
-				   const uint8_t *buf, int len)
+static void denali_read_buf16(struct mtd_info *mtd, uint8_t *buf, int len)
 {
 {
-	uint32_t *buf32;
+	struct denali_nand_info *denali = mtd_to_denali(mtd);
+	uint16_t *buf16 = (uint16_t *)buf;
 	int i;
 
-	/*
-	 * verify that the len is a multiple of 4.
-	 * see comment in read_data_from_flash_mem()
-	 */
-	BUG_ON((len % 4) != 0);
+	iowrite32(DENALI_MAP11_DATA | DENALI_BANK(denali),
+		  denali->host + DENALI_HOST_ADDR);
 
 
-	/* write the data to the flash memory */
-	buf32 = (uint32_t *)buf;
-	for (i = 0; i < len / 4; i++)
-		iowrite32(*buf32++, denali->flash_mem + 0x10);
-	return i * 4; /* intent is to return the number of bytes read */
+	for (i = 0; i < len / 2; i++)
+		buf16[i] = ioread32(denali->host + DENALI_HOST_DATA);
 }
 }
 
 
-/* helper function that simply reads a buffer from the flash */
-static int read_data_from_flash_mem(struct denali_nand_info *denali,
-				    uint8_t *buf, int len)
+static void denali_write_buf16(struct mtd_info *mtd, const uint8_t *buf,
+			       int len)
 {
 {
-	uint32_t *buf32;
+	struct denali_nand_info *denali = mtd_to_denali(mtd);
+	const uint16_t *buf16 = (const uint16_t *)buf;
 	int i;
 
-	/*
-	 * we assume that len will be a multiple of 4, if not it would be nice
-	 * to know about it ASAP rather than have random failures...
-	 * This assumption is based on the fact that this function is designed
-	 * to be used to read flash pages, which are typically multiples of 4.
-	 */
-	BUG_ON((len % 4) != 0);
+	iowrite32(DENALI_MAP11_DATA | DENALI_BANK(denali),
+		  denali->host + DENALI_HOST_ADDR);
 
 
-	/* transfer the data from the flash */
-	buf32 = (uint32_t *)buf;
-	for (i = 0; i < len / 4; i++)
-		*buf32++ = ioread32(denali->flash_mem + 0x10);
-	return i * 4; /* intent is to return the number of bytes read */
+	for (i = 0; i < len / 2; i++)
+		iowrite32(buf16[i], denali->host + DENALI_HOST_DATA);
 }
 }
 
 
-/* writes OOB data to the device */
-static int write_oob_data(struct mtd_info *mtd, uint8_t *buf, int page)
+static uint8_t denali_read_byte(struct mtd_info *mtd)
 {
 {
-	struct denali_nand_info *denali = mtd_to_denali(mtd);
-	uint32_t irq_status;
-	uint32_t irq_mask = INTR__PROGRAM_COMP | INTR__PROGRAM_FAIL;
-	int status = 0;
+	uint8_t byte;
 
 
-	denali->page = page;
+	denali_read_buf(mtd, &byte, 1);
 
 
-	if (denali_send_pipeline_cmd(denali, false, false, SPARE_ACCESS,
-							DENALI_WRITE) == PASS) {
-		write_data_to_flash_mem(denali, buf, mtd->oobsize);
+	return byte;
+}
 
 
-		/* wait for operation to complete */
-		irq_status = wait_for_irq(denali, irq_mask);
+static void denali_write_byte(struct mtd_info *mtd, uint8_t byte)
+{
+	denali_write_buf(mtd, &byte, 1);
+}
 
 
-		if (irq_status == 0) {
-			dev_err(denali->dev, "OOB write failed\n");
-			status = -EIO;
-		}
-	} else {
-		dev_err(denali->dev, "unable to send pipeline command\n");
-		status = -EIO;
-	}
-	return status;
+static uint16_t denali_read_word(struct mtd_info *mtd)
+{
+	uint16_t word;
+
+	denali_read_buf16(mtd, (uint8_t *)&word, 2);
+
+	return word;
 }
 }
 
 
-/* reads OOB data from the device */
-static void read_oob_data(struct mtd_info *mtd, uint8_t *buf, int page)
+static void denali_cmd_ctrl(struct mtd_info *mtd, int dat, unsigned int ctrl)
 {
 {
 	struct denali_nand_info *denali = mtd_to_denali(mtd);
-	uint32_t irq_mask = INTR__LOAD_COMP;
-	uint32_t irq_status, addr, cmd;
+	uint32_t type;
 
 
-	denali->page = page;
+	if (ctrl & NAND_CLE)
+		type = DENALI_MAP11_CMD;
+	else if (ctrl & NAND_ALE)
+		type = DENALI_MAP11_ADDR;
+	else
+		return;
 
 
-	if (denali_send_pipeline_cmd(denali, false, true, SPARE_ACCESS,
-							DENALI_READ) == PASS) {
-		read_data_from_flash_mem(denali, buf, mtd->oobsize);
+	/*
+	 * Some commands are followed by chip->dev_ready or chip->waitfunc.
+	 * irq_status must be cleared here to catch the R/B# interrupt later.
+	 */
+	if (ctrl & NAND_CTRL_CHANGE)
+		denali_reset_irq(denali);
 
 
-		/*
-		 * wait for command to be accepted
-		 * can always use status0 bit as the
-		 * mask is identical for each bank.
-		 */
-		irq_status = wait_for_irq(denali, irq_mask);
+	denali_host_write(denali, DENALI_BANK(denali) | type, dat);
+}
 
 
-		if (irq_status == 0)
-			dev_err(denali->dev, "page on OOB timeout %d\n",
-					denali->page);
+static int denali_dev_ready(struct mtd_info *mtd)
+{
+	struct denali_nand_info *denali = mtd_to_denali(mtd);
 
 
-		/*
-		 * We set the device back to MAIN_ACCESS here as I observed
-		 * instability with the controller if you do a block erase
-		 * and the last transaction was a SPARE_ACCESS. Block erase
-		 * is reliable (according to the MTD test infrastructure)
-		 * if you are in MAIN_ACCESS.
-		 */
-		addr = BANK(denali->flash_bank) | denali->page;
-		cmd = MODE_10 | addr;
-		index_addr(denali, cmd, MAIN_ACCESS);
-	}
+	return !!(denali_check_irq(denali) & INTR__INT_ACT);
 }
 }
 
 
 static int denali_check_erased_page(struct mtd_info *mtd,
@@ -856,11 +372,11 @@ static int denali_hw_ecc_fixup(struct mtd_info *mtd,
 			       unsigned long *uncor_ecc_flags)
 {
 	struct nand_chip *chip = mtd_to_nand(mtd);
-	int bank = denali->flash_bank;
+	int bank = denali->active_bank;
 	uint32_t ecc_cor;
 	unsigned int max_bitflips;
 
-	ecc_cor = ioread32(denali->flash_reg + ECC_COR_INFO(bank));
+	ecc_cor = ioread32(denali->reg + ECC_COR_INFO(bank));
 	ecc_cor >>= ECC_COR_INFO__SHIFT(bank);
 
 	if (ecc_cor & ECC_COR_INFO__UNCOR_ERR) {
@@ -886,8 +402,6 @@ static int denali_hw_ecc_fixup(struct mtd_info *mtd,
 	return max_bitflips;
 }
 
-#define ECC_SECTOR_SIZE 512
-
 #define ECC_SECTOR(x)	(((x) & ECC_ERROR_ADDRESS__SECTOR_NR) >> 12)
 #define ECC_BYTE(x)	(((x) & ECC_ERROR_ADDRESS__OFFSET))
 #define ECC_CORRECTION_VALUE(x) ((x) & ERR_CORRECTION_INFO__BYTEMASK)
@@ -899,22 +413,23 @@ static int denali_sw_ecc_fixup(struct mtd_info *mtd,
 			       struct denali_nand_info *denali,
 			       unsigned long *uncor_ecc_flags, uint8_t *buf)
 {
+	unsigned int ecc_size = denali->nand.ecc.size;
 	unsigned int bitflips = 0;
 	unsigned int max_bitflips = 0;
 	uint32_t err_addr, err_cor_info;
 	unsigned int err_byte, err_sector, err_device;
 	uint8_t err_cor_value;
 	unsigned int prev_sector = 0;
+	uint32_t irq_status;
 
 
-	/* read the ECC errors. we'll ignore them for now */
-	denali_set_intr_modes(denali, false);
+	denali_reset_irq(denali);
 
 
 	do {
-		err_addr = ioread32(denali->flash_reg + ECC_ERROR_ADDRESS);
+		err_addr = ioread32(denali->reg + ECC_ERROR_ADDRESS);
 		err_sector = ECC_SECTOR(err_addr);
 		err_byte = ECC_BYTE(err_addr);
 
 
-		err_cor_info = ioread32(denali->flash_reg + ERR_CORRECTION_INFO);
+		err_cor_info = ioread32(denali->reg + ERR_CORRECTION_INFO);
 		err_cor_value = ECC_CORRECTION_VALUE(err_cor_info);
 		err_device = ECC_ERR_DEVICE(err_cor_info);
 
 
@@ -928,9 +443,9 @@ static int denali_sw_ecc_fixup(struct mtd_info *mtd,
 			 * an erased sector.
 			 */
 			*uncor_ecc_flags |= BIT(err_sector);
-		} else if (err_byte < ECC_SECTOR_SIZE) {
+		} else if (err_byte < ecc_size) {
 			/*
 			/*
-			 * If err_byte is larger than ECC_SECTOR_SIZE, means error
+			 * If err_byte is larger than ecc_size, means error
 			 * happened in OOB, so we ignore it. It's no need for
 			 * us to correct it err_device is represented the NAND
 			 * error bits are happened in if there are more than
@@ -939,8 +454,8 @@ static int denali_sw_ecc_fixup(struct mtd_info *mtd,
 			int offset;
 			unsigned int flips_in_byte;
 
 
-			offset = (err_sector * ECC_SECTOR_SIZE + err_byte) *
-						denali->devnum + err_device;
+			offset = (err_sector * ecc_size + err_byte) *
+					denali->devs_per_cs + err_device;
 
 
 			/* correct the ECC error */
 			flips_in_byte = hweight8(buf[offset] ^ err_cor_value);
@@ -959,10 +474,9 @@ static int denali_sw_ecc_fixup(struct mtd_info *mtd,
 	 * ECC_TRANSACTION_DONE interrupt, so here just wait for
 	 * a while for this interrupt
 	 */
-	while (!(read_interrupt_status(denali) & INTR__ECC_TRANSACTION_DONE))
-		cpu_relax();
-	clear_interrupts(denali);
-	denali_set_intr_modes(denali, true);
+	irq_status = denali_wait_for_irq(denali, INTR__ECC_TRANSACTION_DONE);
+	if (!(irq_status & INTR__ECC_TRANSACTION_DONE))
+		return -EIO;
 
 
 	return max_bitflips;
 }
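A quick worked example of the offset arithmetic above (the numbers are illustrative, not taken from the patch): with ecc_size = 512 and devs_per_cs = 2, an error reported at err_sector = 3, err_byte = 100, err_device = 1 lands at

	offset = (3 * 512 + 100) * 2 + 1;	/* = 3273 into the page buffer */

i.e. byte offsets from the devices sharing one chip select are interleaved, which is why err_device is folded into the final position.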
@@ -970,17 +484,17 @@ static int denali_sw_ecc_fixup(struct mtd_info *mtd,
 /* programs the controller to either enable/disable DMA transfers */
 static void denali_enable_dma(struct denali_nand_info *denali, bool en)
 {
-	iowrite32(en ? DMA_ENABLE__FLAG : 0, denali->flash_reg + DMA_ENABLE);
-	ioread32(denali->flash_reg + DMA_ENABLE);
+	iowrite32(en ? DMA_ENABLE__FLAG : 0, denali->reg + DMA_ENABLE);
+	ioread32(denali->reg + DMA_ENABLE);
 }
 }
 
 
-static void denali_setup_dma64(struct denali_nand_info *denali, int op)
+static void denali_setup_dma64(struct denali_nand_info *denali,
+			       dma_addr_t dma_addr, int page, int write)
 {
 {
 	uint32_t mode;
 	uint32_t mode;
 	const int page_count = 1;
 
 
-	mode = MODE_10 | BANK(denali->flash_bank) | denali->page;
+	mode = DENALI_MAP10 | DENALI_BANK(denali) | page;
 
 
 	/* DMA is a three step process */
 	/* DMA is a three step process */
 
 
@@ -988,191 +502,354 @@ static void denali_setup_dma64(struct denali_nand_info *denali, int op)
 	 * 1. setup transfer type, interrupt when complete,
 	 *    burst len = 64 bytes, the number of pages
 	 */
-	index_addr(denali, mode, 0x01002000 | (64 << 16) | op | page_count);
+	denali_host_write(denali, mode,
+			  0x01002000 | (64 << 16) | (write << 8) | page_count);
 
 
 	/* 2. set memory low address */
 	/* 2. set memory low address */
-	index_addr(denali, mode, addr);
+	denali_host_write(denali, mode, dma_addr);
 
 
 	/* 3. set memory high address */
 	/* 3. set memory high address */
-	index_addr(denali, mode, addr >> 32);
+	denali_host_write(denali, mode, (uint64_t)dma_addr >> 32);
 }
 }
 
 
-static void denali_setup_dma32(struct denali_nand_info *denali, int op)
+static void denali_setup_dma32(struct denali_nand_info *denali,
+			       dma_addr_t dma_addr, int page, int write)
 {
 {
 	uint32_t mode;
 	const int page_count = 1;
-	uint32_t addr = denali->buf.dma_buf;
 
 
-	mode = MODE_10 | BANK(denali->flash_bank);
+	mode = DENALI_MAP10 | DENALI_BANK(denali);
 
 
 	/* DMA is a four step process */
 
 	/* 1. setup transfer type and # of pages */
-	index_addr(denali, mode | denali->page, 0x2000 | op | page_count);
+	denali_host_write(denali, mode | page,
+			  0x2000 | (write << 8) | page_count);
 
 
 	/* 2. set memory high address bits 23:8 */
 	/* 2. set memory high address bits 23:8 */
-	index_addr(denali, mode | ((addr >> 16) << 8), 0x2200);
+	denali_host_write(denali, mode | ((dma_addr >> 16) << 8), 0x2200);
 
 
 	/* 3. set memory low address bits 23:8 */
 	/* 3. set memory low address bits 23:8 */
-	index_addr(denali, mode | ((addr & 0xffff) << 8), 0x2300);
+	denali_host_write(denali, mode | ((dma_addr & 0xffff) << 8), 0x2300);
 
 
 	/* 4. interrupt when complete, burst len = 64 bytes */
 	/* 4. interrupt when complete, burst len = 64 bytes */
-	index_addr(denali, mode | 0x14000, 0x2400);
+	denali_host_write(denali, mode | 0x14000, 0x2400);
 }
 }
 
 
-static void denali_setup_dma(struct denali_nand_info *denali, int op)
+static void denali_setup_dma(struct denali_nand_info *denali,
+			     dma_addr_t dma_addr, int page, int write)
 {
 {
 	if (denali->caps & DENALI_CAP_DMA_64BIT)
 	if (denali->caps & DENALI_CAP_DMA_64BIT)
-		denali_setup_dma64(denali, op);
+		denali_setup_dma64(denali, dma_addr, page, write);
 	else
 	else
-		denali_setup_dma32(denali, op);
+		denali_setup_dma32(denali, dma_addr, page, write);
 }
 }
 
 
-/*
- * writes a page. user specifies type, and this function handles the
- * configuration details.
- */
-static int write_page(struct mtd_info *mtd, struct nand_chip *chip,
-			const uint8_t *buf, bool raw_xfer)
+static int denali_pio_read(struct denali_nand_info *denali, void *buf,
+			   size_t size, int page, int raw)
 {
 {
-	struct denali_nand_info *denali = mtd_to_denali(mtd);
-	dma_addr_t addr = denali->buf.dma_buf;
-	size_t size = mtd->writesize + mtd->oobsize;
+	uint32_t addr = DENALI_BANK(denali) | page;
+	uint32_t *buf32 = (uint32_t *)buf;
+	uint32_t irq_status, ecc_err_mask;
+	int i;
+
+	if (denali->caps & DENALI_CAP_HW_ECC_FIXUP)
+		ecc_err_mask = INTR__ECC_UNCOR_ERR;
+	else
+		ecc_err_mask = INTR__ECC_ERR;
+
+	denali_reset_irq(denali);
+
+	iowrite32(DENALI_MAP01 | addr, denali->host + DENALI_HOST_ADDR);
+	for (i = 0; i < size / 4; i++)
+		*buf32++ = ioread32(denali->host + DENALI_HOST_DATA);
+
+	irq_status = denali_wait_for_irq(denali, INTR__PAGE_XFER_INC);
+	if (!(irq_status & INTR__PAGE_XFER_INC))
+		return -EIO;
+
+	if (irq_status & INTR__ERASED_PAGE)
+		memset(buf, 0xff, size);
+
+	return irq_status & ecc_err_mask ? -EBADMSG : 0;
+}
+
+static int denali_pio_write(struct denali_nand_info *denali,
+			    const void *buf, size_t size, int page, int raw)
+{
+	uint32_t addr = DENALI_BANK(denali) | page;
+	const uint32_t *buf32 = (uint32_t *)buf;
 	uint32_t irq_status;
-	uint32_t irq_mask = INTR__DMA_CMD_COMP | INTR__PROGRAM_FAIL;
+	int i;
 
 
-	/*
-	 * if it is a raw xfer, we want to disable ecc and send the spare area.
-	 * !raw_xfer - enable ecc
-	 * raw_xfer - transfer spare
-	 */
-	setup_ecc_for_xfer(denali, !raw_xfer, raw_xfer);
+	denali_reset_irq(denali);
 
 
-	/* copy buffer into DMA buffer */
-	memcpy(denali->buf.buf, buf, mtd->writesize);
+	iowrite32(DENALI_MAP01 | addr, denali->host + DENALI_HOST_ADDR);
+	for (i = 0; i < size / 4; i++)
+		iowrite32(*buf32++, denali->host + DENALI_HOST_DATA);
 
 
-	if (raw_xfer) {
-		/* transfer the data to the spare area */
-		memcpy(denali->buf.buf + mtd->writesize,
-			chip->oob_poi,
-			mtd->oobsize);
+	irq_status = denali_wait_for_irq(denali,
+				INTR__PROGRAM_COMP | INTR__PROGRAM_FAIL);
+	if (!(irq_status & INTR__PROGRAM_COMP))
+		return -EIO;
+
+	return 0;
+}
+
+static int denali_pio_xfer(struct denali_nand_info *denali, void *buf,
+			   size_t size, int page, int raw, int write)
+{
+	if (write)
+		return denali_pio_write(denali, buf, size, page, raw);
+	else
+		return denali_pio_read(denali, buf, size, page, raw);
+}
+
+static int denali_dma_xfer(struct denali_nand_info *denali, void *buf,
+			   size_t size, int page, int raw, int write)
+{
+	dma_addr_t dma_addr;
+	uint32_t irq_mask, irq_status, ecc_err_mask;
+	enum dma_data_direction dir = write ? DMA_TO_DEVICE : DMA_FROM_DEVICE;
+	int ret = 0;
+
+	dma_addr = dma_map_single(denali->dev, buf, size, dir);
+	if (dma_mapping_error(denali->dev, dma_addr)) {
+		dev_dbg(denali->dev, "Failed to DMA-map buffer. Trying PIO.\n");
+		return denali_pio_xfer(denali, buf, size, page, raw, write);
 	}
 	}
 
 
-	dma_sync_single_for_device(denali->dev, addr, size, DMA_TO_DEVICE);
+	if (write) {
+		/*
+		 * INTR__PROGRAM_COMP is never asserted for the DMA transfer.
+		 * We can use INTR__DMA_CMD_COMP instead.  This flag is asserted
+		 * when the page program is completed.
+		 */
+		irq_mask = INTR__DMA_CMD_COMP | INTR__PROGRAM_FAIL;
+		ecc_err_mask = 0;
+	} else if (denali->caps & DENALI_CAP_HW_ECC_FIXUP) {
+		irq_mask = INTR__DMA_CMD_COMP;
+		ecc_err_mask = INTR__ECC_UNCOR_ERR;
+	} else {
+		irq_mask = INTR__DMA_CMD_COMP;
+		ecc_err_mask = INTR__ECC_ERR;
+	}
 
 
-	clear_interrupts(denali);
 	denali_enable_dma(denali, true);
 
 
-	denali_setup_dma(denali, DENALI_WRITE);
+	denali_reset_irq(denali);
+	denali_setup_dma(denali, dma_addr, page, write);
 
 
 	/* wait for operation to complete */
-	irq_status = wait_for_irq(denali, irq_mask);
-
-	if (irq_status == 0) {
-		dev_err(denali->dev, "timeout on write_page (type = %d)\n",
-			raw_xfer);
-		denali->status = NAND_STATUS_FAIL;
-	}
+	irq_status = denali_wait_for_irq(denali, irq_mask);
+	if (!(irq_status & INTR__DMA_CMD_COMP))
+		ret = -EIO;
+	else if (irq_status & ecc_err_mask)
+		ret = -EBADMSG;
 
 
 	denali_enable_dma(denali, false);
-	dma_sync_single_for_cpu(denali->dev, addr, size, DMA_TO_DEVICE);
+	dma_unmap_single(denali->dev, dma_addr, size, dir);
 
 
-	return 0;
-}
+	if (irq_status & INTR__ERASED_PAGE)
+		memset(buf, 0xff, size);
 
 
-/* NAND core entry points */
+	return ret;
+}
 
 
-/*
- * this is the callback that the NAND core calls to write a page. Since
- * writing a page with ECC or without is similar, all the work is done
- * by write_page above.
- */
-static int denali_write_page(struct mtd_info *mtd, struct nand_chip *chip,
-				const uint8_t *buf, int oob_required, int page)
+static int denali_data_xfer(struct denali_nand_info *denali, void *buf,
+			    size_t size, int page, int raw, int write)
 {
 {
-	/*
-	 * for regular page writes, we let HW handle all the ECC
-	 * data written to the device.
-	 */
-	return write_page(mtd, chip, buf, false);
+	setup_ecc_for_xfer(denali, !raw, raw);
+
+	if (denali->dma_avail)
+		return denali_dma_xfer(denali, buf, size, page, raw, write);
+	else
+		return denali_pio_xfer(denali, buf, size, page, raw, write);
 }
 }
 
 
-/*
- * This is the callback that the NAND core calls to write a page without ECC.
- * raw access is similar to ECC page writes, so all the work is done in the
- * write_page() function above.
- */
-static int denali_write_page_raw(struct mtd_info *mtd, struct nand_chip *chip,
-				 const uint8_t *buf, int oob_required,
-				 int page)
+static void denali_oob_xfer(struct mtd_info *mtd, struct nand_chip *chip,
+			    int page, int write)
 {
 {
-	/*
-	 * for raw page writes, we want to disable ECC and simply write
-	 * whatever data is in the buffer.
-	 */
-	return write_page(mtd, chip, buf, true);
+	struct denali_nand_info *denali = mtd_to_denali(mtd);
+	unsigned int start_cmd = write ? NAND_CMD_SEQIN : NAND_CMD_READ0;
+	unsigned int rnd_cmd = write ? NAND_CMD_RNDIN : NAND_CMD_RNDOUT;
+	int writesize = mtd->writesize;
+	int oobsize = mtd->oobsize;
+	uint8_t *bufpoi = chip->oob_poi;
+	int ecc_steps = chip->ecc.steps;
+	int ecc_size = chip->ecc.size;
+	int ecc_bytes = chip->ecc.bytes;
+	int oob_skip = denali->oob_skip_bytes;
+	size_t size = writesize + oobsize;
+	int i, pos, len;
+
+	/* BBM at the beginning of the OOB area */
+	chip->cmdfunc(mtd, start_cmd, writesize, page);
+	if (write)
+		chip->write_buf(mtd, bufpoi, oob_skip);
+	else
+		chip->read_buf(mtd, bufpoi, oob_skip);
+	bufpoi += oob_skip;
+
+	/* OOB ECC */
+	for (i = 0; i < ecc_steps; i++) {
+		pos = ecc_size + i * (ecc_size + ecc_bytes);
+		len = ecc_bytes;
+
+		if (pos >= writesize)
+			pos += oob_skip;
+		else if (pos + len > writesize)
+			len = writesize - pos;
+
+		chip->cmdfunc(mtd, rnd_cmd, pos, -1);
+		if (write)
+			chip->write_buf(mtd, bufpoi, len);
+		else
+			chip->read_buf(mtd, bufpoi, len);
+		bufpoi += len;
+		if (len < ecc_bytes) {
+			len = ecc_bytes - len;
+			chip->cmdfunc(mtd, rnd_cmd, writesize + oob_skip, -1);
+			if (write)
+				chip->write_buf(mtd, bufpoi, len);
+			else
+				chip->read_buf(mtd, bufpoi, len);
+			bufpoi += len;
+		}
+	}
+
+	/* OOB free */
+	len = oobsize - (bufpoi - chip->oob_poi);
+	chip->cmdfunc(mtd, rnd_cmd, size - len, -1);
+	if (write)
+		chip->write_buf(mtd, bufpoi, len);
+	else
+		chip->read_buf(mtd, bufpoi, len);
 }
 }
 
 
-static int denali_write_oob(struct mtd_info *mtd, struct nand_chip *chip,
-			    int page)
+static int denali_read_page_raw(struct mtd_info *mtd, struct nand_chip *chip,
+				uint8_t *buf, int oob_required, int page)
 {
 {
-	return write_oob_data(mtd, chip->oob_poi, page);
+	struct denali_nand_info *denali = mtd_to_denali(mtd);
+	int writesize = mtd->writesize;
+	int oobsize = mtd->oobsize;
+	int ecc_steps = chip->ecc.steps;
+	int ecc_size = chip->ecc.size;
+	int ecc_bytes = chip->ecc.bytes;
+	void *dma_buf = denali->buf;
+	int oob_skip = denali->oob_skip_bytes;
+	size_t size = writesize + oobsize;
+	int ret, i, pos, len;
+
+	ret = denali_data_xfer(denali, dma_buf, size, page, 1, 0);
+	if (ret)
+		return ret;
+
+	/* Arrange the buffer for syndrome payload/ecc layout */
+	if (buf) {
+		for (i = 0; i < ecc_steps; i++) {
+			pos = i * (ecc_size + ecc_bytes);
+			len = ecc_size;
+
+			if (pos >= writesize)
+				pos += oob_skip;
+			else if (pos + len > writesize)
+				len = writesize - pos;
+
+			memcpy(buf, dma_buf + pos, len);
+			buf += len;
+			if (len < ecc_size) {
+				len = ecc_size - len;
+				memcpy(buf, dma_buf + writesize + oob_skip,
+				       len);
+				buf += len;
+			}
+		}
+	}
+
+	if (oob_required) {
+		uint8_t *oob = chip->oob_poi;
+
+		/* BBM at the beginning of the OOB area */
+		memcpy(oob, dma_buf + writesize, oob_skip);
+		oob += oob_skip;
+
+		/* OOB ECC */
+		for (i = 0; i < ecc_steps; i++) {
+			pos = ecc_size + i * (ecc_size + ecc_bytes);
+			len = ecc_bytes;
+
+			if (pos >= writesize)
+				pos += oob_skip;
+			else if (pos + len > writesize)
+				len = writesize - pos;
+
+			memcpy(oob, dma_buf + pos, len);
+			oob += len;
+			if (len < ecc_bytes) {
+				len = ecc_bytes - len;
+				memcpy(oob, dma_buf + writesize + oob_skip,
+				       len);
+				oob += len;
+			}
+		}
+
+		/* OOB free */
+		len = oobsize - (oob - chip->oob_poi);
+		memcpy(oob, dma_buf + size - len, len);
+	}
+
+	return 0;
 }
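The raw and OOB helpers above all juggle the same physical page image; a sketch of that layout, with illustrative sizes that the patch itself does not mandate (2048-byte page, 512-byte ECC step, oob_skip_bytes = 8):

	/*
	 *   | 512 data | ECC | 512 data | ECC | ... | BBM skip | spill | OOB free |
	 *   0                                        ^ writesize        ^ writesize + oobsize
	 *
	 * A data/ECC chunk that would straddle the writesize boundary is split
	 * around the skipped BBM bytes and continues right after them; whatever
	 * is left at the very end is exposed as free OOB.
	 */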
 
 
 static int denali_read_oob(struct mtd_info *mtd, struct nand_chip *chip,
 			   int page)
 {
-	read_oob_data(mtd, chip->oob_poi, page);
+	denali_oob_xfer(mtd, chip, page, 0);
 
 
 	return 0;
 	return 0;
 }
 }
 
 
-static int denali_read_page(struct mtd_info *mtd, struct nand_chip *chip,
-			    uint8_t *buf, int oob_required, int page)
+static int denali_write_oob(struct mtd_info *mtd, struct nand_chip *chip,
+			    int page)
 {
 {
 	struct denali_nand_info *denali = mtd_to_denali(mtd);
 	struct denali_nand_info *denali = mtd_to_denali(mtd);
-	dma_addr_t addr = denali->buf.dma_buf;
-	size_t size = mtd->writesize + mtd->oobsize;
-	uint32_t irq_status;
-	uint32_t irq_mask = denali->caps & DENALI_CAP_HW_ECC_FIXUP ?
-				INTR__DMA_CMD_COMP | INTR__ECC_UNCOR_ERR :
-				INTR__ECC_TRANSACTION_DONE | INTR__ECC_ERR;
-	unsigned long uncor_ecc_flags = 0;
-	int stat = 0;
+	int status;
 
 
-	if (page != denali->page) {
-		dev_err(denali->dev,
-			"IN %s: page %d is not equal to denali->page %d",
-			__func__, page, denali->page);
-		BUG();
-	}
+	denali_reset_irq(denali);
 
 
-	setup_ecc_for_xfer(denali, true, false);
+	denali_oob_xfer(mtd, chip, page, 1);
 
 
-	denali_enable_dma(denali, true);
-	dma_sync_single_for_device(denali->dev, addr, size, DMA_FROM_DEVICE);
+	chip->cmdfunc(mtd, NAND_CMD_PAGEPROG, -1, -1);
+	status = chip->waitfunc(mtd, chip);
 
 
-	clear_interrupts(denali);
-	denali_setup_dma(denali, DENALI_READ);
-
-	/* wait for operation to complete */
-	irq_status = wait_for_irq(denali, irq_mask);
+	return status & NAND_STATUS_FAIL ? -EIO : 0;
+}
 
 
-	dma_sync_single_for_cpu(denali->dev, addr, size, DMA_FROM_DEVICE);
+static int denali_read_page(struct mtd_info *mtd, struct nand_chip *chip,
+			    uint8_t *buf, int oob_required, int page)
+{
+	struct denali_nand_info *denali = mtd_to_denali(mtd);
+	unsigned long uncor_ecc_flags = 0;
+	int stat = 0;
+	int ret;
 
 
-	memcpy(buf, denali->buf.buf, mtd->writesize);
+	ret = denali_data_xfer(denali, buf, mtd->writesize, page, 0, 0);
+	if (ret && ret != -EBADMSG)
+		return ret;
 
 
 	if (denali->caps & DENALI_CAP_HW_ECC_FIXUP)
 		stat = denali_hw_ecc_fixup(mtd, denali, &uncor_ecc_flags);
-	else if (irq_status & INTR__ECC_ERR)
+	else if (ret == -EBADMSG)
 		stat = denali_sw_ecc_fixup(mtd, denali, &uncor_ecc_flags, buf);
-	denali_enable_dma(denali, false);
 
 
 	if (stat < 0)
 	if (stat < 0)
 		return stat;
 		return stat;
 
 
 	if (uncor_ecc_flags) {
 	if (uncor_ecc_flags) {
-		read_oob_data(mtd, chip->oob_poi, denali->page);
+		ret = denali_read_oob(mtd, chip, page);
+		if (ret)
+			return ret;
 
 
 		stat = denali_check_erased_page(mtd, chip, buf,
 						uncor_ecc_flags, stat);
@@ -1181,137 +858,266 @@ static int denali_read_page(struct mtd_info *mtd, struct nand_chip *chip,
 	return stat;
 }
 
 
-static int denali_read_page_raw(struct mtd_info *mtd, struct nand_chip *chip,
-				uint8_t *buf, int oob_required, int page)
+static int denali_write_page_raw(struct mtd_info *mtd, struct nand_chip *chip,
+				 const uint8_t *buf, int oob_required, int page)
 {
 {
 	struct denali_nand_info *denali = mtd_to_denali(mtd);
 	struct denali_nand_info *denali = mtd_to_denali(mtd);
-	dma_addr_t addr = denali->buf.dma_buf;
-	size_t size = mtd->writesize + mtd->oobsize;
-	uint32_t irq_mask = INTR__DMA_CMD_COMP;
-
-	if (page != denali->page) {
-		dev_err(denali->dev,
-			"IN %s: page %d is not equal to denali->page %d",
-			__func__, page, denali->page);
-		BUG();
-	}
-
-	setup_ecc_for_xfer(denali, false, true);
-	denali_enable_dma(denali, true);
-
-	dma_sync_single_for_device(denali->dev, addr, size, DMA_FROM_DEVICE);
-
-	clear_interrupts(denali);
-	denali_setup_dma(denali, DENALI_READ);
-
-	/* wait for operation to complete */
-	wait_for_irq(denali, irq_mask);
+	int writesize = mtd->writesize;
+	int oobsize = mtd->oobsize;
+	int ecc_steps = chip->ecc.steps;
+	int ecc_size = chip->ecc.size;
+	int ecc_bytes = chip->ecc.bytes;
+	void *dma_buf = denali->buf;
+	int oob_skip = denali->oob_skip_bytes;
+	size_t size = writesize + oobsize;
+	int i, pos, len;
 
 
-	dma_sync_single_for_cpu(denali->dev, addr, size, DMA_FROM_DEVICE);
+	/*
+	 * Fill the buffer with 0xff first except the full page transfer.
+	 * This simplifies the logic.
+	 */
+	if (!buf || !oob_required)
+		memset(dma_buf, 0xff, size);
+
+	/* Arrange the buffer for syndrome payload/ecc layout */
+	if (buf) {
+		for (i = 0; i < ecc_steps; i++) {
+			pos = i * (ecc_size + ecc_bytes);
+			len = ecc_size;
+
+			if (pos >= writesize)
+				pos += oob_skip;
+			else if (pos + len > writesize)
+				len = writesize - pos;
+
+			memcpy(dma_buf + pos, buf, len);
+			buf += len;
+			if (len < ecc_size) {
+				len = ecc_size - len;
+				memcpy(dma_buf + writesize + oob_skip, buf,
+				       len);
+				buf += len;
+			}
+		}
+	}
 
 
-	denali_enable_dma(denali, false);
+	if (oob_required) {
+		const uint8_t *oob = chip->oob_poi;
+
+		/* BBM at the beginning of the OOB area */
+		memcpy(dma_buf + writesize, oob, oob_skip);
+		oob += oob_skip;
+
+		/* OOB ECC */
+		for (i = 0; i < ecc_steps; i++) {
+			pos = ecc_size + i * (ecc_size + ecc_bytes);
+			len = ecc_bytes;
+
+			if (pos >= writesize)
+				pos += oob_skip;
+			else if (pos + len > writesize)
+				len = writesize - pos;
+
+			memcpy(dma_buf + pos, oob, len);
+			oob += len;
+			if (len < ecc_bytes) {
+				len = ecc_bytes - len;
+				memcpy(dma_buf + writesize + oob_skip, oob,
+				       len);
+				oob += len;
+			}
+		}
 
 
-	memcpy(buf, denali->buf.buf, mtd->writesize);
-	memcpy(chip->oob_poi, denali->buf.buf + mtd->writesize, mtd->oobsize);
+		/* OOB free */
+		len = oobsize - (oob - chip->oob_poi);
+		memcpy(dma_buf + size - len, oob, len);
+	}
 
 
-	return 0;
+	return denali_data_xfer(denali, dma_buf, size, page, 1, 1);
 }
 }
 
 
-static uint8_t denali_read_byte(struct mtd_info *mtd)
+static int denali_write_page(struct mtd_info *mtd, struct nand_chip *chip,
+			     const uint8_t *buf, int oob_required, int page)
 {
 {
 	struct denali_nand_info *denali = mtd_to_denali(mtd);
 	struct denali_nand_info *denali = mtd_to_denali(mtd);
-	uint8_t result = 0xff;
-
-	if (denali->buf.head < denali->buf.tail)
-		result = denali->buf.buf[denali->buf.head++];
 
 
-	return result;
+	return denali_data_xfer(denali, (void *)buf, mtd->writesize,
+				page, 0, 1);
 }
 }
 
 
 static void denali_select_chip(struct mtd_info *mtd, int chip)
 {
 	struct denali_nand_info *denali = mtd_to_denali(mtd);
 
 
-	spin_lock_irq(&denali->irq_lock);
-	denali->flash_bank = chip;
-	spin_unlock_irq(&denali->irq_lock);
+	denali->active_bank = chip;
 }
 }
 
 
 static int denali_waitfunc(struct mtd_info *mtd, struct nand_chip *chip)
 {
 	struct denali_nand_info *denali = mtd_to_denali(mtd);
-	int status = denali->status;
+	uint32_t irq_status;
 
 
-	denali->status = 0;
+	/* R/B# pin transitioned from low to high? */
+	irq_status = denali_wait_for_irq(denali, INTR__INT_ACT);
 
 
-	return status;
+	return irq_status & INTR__INT_ACT ? 0 : NAND_STATUS_FAIL;
 }
 }
 
 
 static int denali_erase(struct mtd_info *mtd, int page)
 {
 	struct denali_nand_info *denali = mtd_to_denali(mtd);
+	uint32_t irq_status;
 
 
-	uint32_t cmd, irq_status;
-
-	clear_interrupts(denali);
+	denali_reset_irq(denali);
 
 
-	/* setup page read request for access type */
-	cmd = MODE_10 | BANK(denali->flash_bank) | page;
-	index_addr(denali, cmd, 0x1);
+	denali_host_write(denali, DENALI_MAP10 | DENALI_BANK(denali) | page,
+			  DENALI_ERASE);
 
 
 	/* wait for erase to complete or failure to occur */
 	/* wait for erase to complete or failure to occur */
-	irq_status = wait_for_irq(denali, INTR__ERASE_COMP | INTR__ERASE_FAIL);
+	irq_status = denali_wait_for_irq(denali,
+					 INTR__ERASE_COMP | INTR__ERASE_FAIL);
 
 
-	return irq_status & INTR__ERASE_FAIL ? NAND_STATUS_FAIL : PASS;
+	return irq_status & INTR__ERASE_COMP ? 0 : NAND_STATUS_FAIL;
 }
 }
 
 
-static void denali_cmdfunc(struct mtd_info *mtd, unsigned int cmd, int col,
-			   int page)
+#define DIV_ROUND_DOWN_ULL(ll, d) \
+	({ unsigned long long _tmp = (ll); do_div(_tmp, d); _tmp; })
+
+static int denali_setup_data_interface(struct mtd_info *mtd, int chipnr,
+				       const struct nand_data_interface *conf)
 {
 {
 	struct denali_nand_info *denali = mtd_to_denali(mtd);
 	struct denali_nand_info *denali = mtd_to_denali(mtd);
-	uint32_t addr, id;
+	const struct nand_sdr_timings *timings;
+	unsigned long t_clk;
+	int acc_clks, re_2_we, re_2_re, we_2_re, addr_2_data;
+	int rdwr_en_lo, rdwr_en_hi, rdwr_en_lo_hi, cs_setup;
+	int addr_2_data_mask;
+	uint32_t tmp;
+
+	timings = nand_get_sdr_timings(conf);
+	if (IS_ERR(timings))
+		return PTR_ERR(timings);
+
+	/* clk_x period in picoseconds */
+	t_clk = DIV_ROUND_DOWN_ULL(1000000000000ULL, denali->clk_x_rate);
+	if (!t_clk)
+		return -EINVAL;
+
+	if (chipnr == NAND_DATA_IFACE_CHECK_ONLY)
+		return 0;
+
+	/* tREA -> ACC_CLKS */
+	acc_clks = DIV_ROUND_UP(timings->tREA_max, t_clk);
+	acc_clks = min_t(int, acc_clks, ACC_CLKS__VALUE);
+
+	tmp = ioread32(denali->reg + ACC_CLKS);
+	tmp &= ~ACC_CLKS__VALUE;
+	tmp |= acc_clks;
+	iowrite32(tmp, denali->reg + ACC_CLKS);
+
+	/* tRWH -> RE_2_WE */
+	re_2_we = DIV_ROUND_UP(timings->tRHW_min, t_clk);
+	re_2_we = min_t(int, re_2_we, RE_2_WE__VALUE);
+
+	tmp = ioread32(denali->reg + RE_2_WE);
+	tmp &= ~RE_2_WE__VALUE;
+	tmp |= re_2_we;
+	iowrite32(tmp, denali->reg + RE_2_WE);
+
+	/* tRHZ -> RE_2_RE */
+	re_2_re = DIV_ROUND_UP(timings->tRHZ_max, t_clk);
+	re_2_re = min_t(int, re_2_re, RE_2_RE__VALUE);
+
+	tmp = ioread32(denali->reg + RE_2_RE);
+	tmp &= ~RE_2_RE__VALUE;
+	tmp |= re_2_re;
+	iowrite32(tmp, denali->reg + RE_2_RE);
+
+	/* tWHR -> WE_2_RE */
+	we_2_re = DIV_ROUND_UP(timings->tWHR_min, t_clk);
+	we_2_re = min_t(int, we_2_re, TWHR2_AND_WE_2_RE__WE_2_RE);
+
+	tmp = ioread32(denali->reg + TWHR2_AND_WE_2_RE);
+	tmp &= ~TWHR2_AND_WE_2_RE__WE_2_RE;
+	tmp |= we_2_re;
+	iowrite32(tmp, denali->reg + TWHR2_AND_WE_2_RE);
+
+	/* tADL -> ADDR_2_DATA */
+
+	/* for older versions, ADDR_2_DATA is only 6 bit wide */
+	addr_2_data_mask = TCWAW_AND_ADDR_2_DATA__ADDR_2_DATA;
+	if (denali->revision < 0x0501)
+		addr_2_data_mask >>= 1;
+
+	addr_2_data = DIV_ROUND_UP(timings->tADL_min, t_clk);
+	addr_2_data = min_t(int, addr_2_data, addr_2_data_mask);
+
+	tmp = ioread32(denali->reg + TCWAW_AND_ADDR_2_DATA);
+	tmp &= ~addr_2_data_mask;
+	tmp |= addr_2_data;
+	iowrite32(tmp, denali->reg + TCWAW_AND_ADDR_2_DATA);
+
+	/* tREH, tWH -> RDWR_EN_HI_CNT */
+	rdwr_en_hi = DIV_ROUND_UP(max(timings->tREH_min, timings->tWH_min),
+				  t_clk);
+	rdwr_en_hi = min_t(int, rdwr_en_hi, RDWR_EN_HI_CNT__VALUE);
+
+	tmp = ioread32(denali->reg + RDWR_EN_HI_CNT);
+	tmp &= ~RDWR_EN_HI_CNT__VALUE;
+	tmp |= rdwr_en_hi;
+	iowrite32(tmp, denali->reg + RDWR_EN_HI_CNT);
+
+	/* tRP, tWP -> RDWR_EN_LO_CNT */
+	rdwr_en_lo = DIV_ROUND_UP(max(timings->tRP_min, timings->tWP_min),
+				  t_clk);
+	rdwr_en_lo_hi = DIV_ROUND_UP(max(timings->tRC_min, timings->tWC_min),
+				     t_clk);
+	rdwr_en_lo_hi = max(rdwr_en_lo_hi, DENALI_CLK_X_MULT);
+	rdwr_en_lo = max(rdwr_en_lo, rdwr_en_lo_hi - rdwr_en_hi);
+	rdwr_en_lo = min_t(int, rdwr_en_lo, RDWR_EN_LO_CNT__VALUE);
+
+	tmp = ioread32(denali->reg + RDWR_EN_LO_CNT);
+	tmp &= ~RDWR_EN_LO_CNT__VALUE;
+	tmp |= rdwr_en_lo;
+	iowrite32(tmp, denali->reg + RDWR_EN_LO_CNT);
+
+	/* tCS, tCEA -> CS_SETUP_CNT */
+	cs_setup = max3((int)DIV_ROUND_UP(timings->tCS_min, t_clk) - rdwr_en_lo,
+			(int)DIV_ROUND_UP(timings->tCEA_max, t_clk) - acc_clks,
+			0);
+	cs_setup = min_t(int, cs_setup, CS_SETUP_CNT__VALUE);
+
+	tmp = ioread32(denali->reg + CS_SETUP_CNT);
+	tmp &= ~CS_SETUP_CNT__VALUE;
+	tmp |= cs_setup;
+	iowrite32(tmp, denali->reg + CS_SETUP_CNT);
+
+	return 0;
+}
+
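A sanity check on the unit handling above, with assumed numbers that are not part of the patch: for clk_x_rate = 200 MHz,

	t_clk    = 1000000000000ULL / 200000000;	/* 5000 ps per clk_x cycle */
	acc_clks = DIV_ROUND_UP(20000, 5000);		/* tREA_max = 20 ns  -> 4  */
	we_2_re  = DIV_ROUND_UP(120000, 5000);		/* tWHR_min = 120 ns -> 24 */

and both results fit comfortably in the 4-bit ACC_CLKS and 6-bit WE_2_RE register fields defined in denali.h below.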
+static void denali_reset_banks(struct denali_nand_info *denali)
+{
+	u32 irq_status;
 	int i;
 	int i;
 
 
-	switch (cmd) {
-	case NAND_CMD_PAGEPROG:
-		break;
-	case NAND_CMD_STATUS:
-		read_status(denali);
-		break;
-	case NAND_CMD_READID:
-	case NAND_CMD_PARAM:
-		reset_buf(denali);
-		/*
-		 * sometimes ManufactureId read from register is not right
-		 * e.g. some of Micron MT29F32G08QAA MLC NAND chips
-		 * So here we send READID cmd to NAND insteand
-		 */
-		addr = MODE_11 | BANK(denali->flash_bank);
-		index_addr(denali, addr | 0, 0x90);
-		index_addr(denali, addr | 1, col);
-		for (i = 0; i < 8; i++) {
-			index_addr_read_data(denali, addr | 2, &id);
-			write_byte_to_buf(denali, id);
-		}
-		break;
-	case NAND_CMD_READ0:
-	case NAND_CMD_SEQIN:
-		denali->page = page;
-		break;
-	case NAND_CMD_RESET:
-		reset_bank(denali);
-		break;
-	case NAND_CMD_READOOB:
-		/* TODO: Read OOB data */
-		break;
-	default:
-		pr_err(": unsupported command received 0x%x\n", cmd);
-		break;
+	for (i = 0; i < denali->max_banks; i++) {
+		denali->active_bank = i;
+
+		denali_reset_irq(denali);
+
+		iowrite32(DEVICE_RESET__BANK(i),
+			  denali->reg + DEVICE_RESET);
+
+		irq_status = denali_wait_for_irq(denali,
+			INTR__RST_COMP | INTR__INT_ACT | INTR__TIME_OUT);
+		if (!(irq_status & INTR__INT_ACT))
+			break;
 	}
 	}
+
+	dev_dbg(denali->dev, "%d chips connected\n", i);
+	denali->max_banks = i;
 }
 }
-/* end NAND core entry points */
 
 
-/* Initialization code to bring the device up to a known good state */
 static void denali_hw_init(struct denali_nand_info *denali)
 {
 	/*
@@ -1319,8 +1125,7 @@ static void denali_hw_init(struct denali_nand_info *denali)
 	 * override it.
 	 */
 	if (!denali->revision)
-		denali->revision =
-				swab16(ioread32(denali->flash_reg + REVISION));
+		denali->revision = swab16(ioread32(denali->reg + REVISION));
 
 
 	/*
 	/*
 	 * tell driver how many bit controller will skip before
 	 * tell driver how many bit controller will skip before
@@ -1328,30 +1133,51 @@ static void denali_hw_init(struct denali_nand_info *denali)
 	 * set by firmware. So we read this value out.
 	 * if this value is 0, just let it be.
 	 */
-	denali->bbtskipbytes = ioread32(denali->flash_reg +
-						SPARE_AREA_SKIP_BYTES);
+	denali->oob_skip_bytes = ioread32(denali->reg + SPARE_AREA_SKIP_BYTES);
 	detect_max_banks(denali);
 	detect_max_banks(denali);
-	denali_nand_reset(denali);
-	iowrite32(0x0F, denali->flash_reg + RB_PIN_ENABLED);
-	iowrite32(CHIP_EN_DONT_CARE__FLAG,
-			denali->flash_reg + CHIP_ENABLE_DONT_CARE);
+	iowrite32(0x0F, denali->reg + RB_PIN_ENABLED);
+	iowrite32(CHIP_EN_DONT_CARE__FLAG, denali->reg + CHIP_ENABLE_DONT_CARE);
 
 
-	iowrite32(0xffff, denali->flash_reg + SPARE_AREA_MARKER);
+	iowrite32(0xffff, denali->reg + SPARE_AREA_MARKER);
 
 
 	/* Should set value for these registers when init */
 	/* Should set value for these registers when init */
-	iowrite32(0, denali->flash_reg + TWO_ROW_ADDR_CYCLES);
-	iowrite32(1, denali->flash_reg + ECC_ENABLE);
-	denali_nand_timing_set(denali);
-	denali_irq_init(denali);
+	iowrite32(0, denali->reg + TWO_ROW_ADDR_CYCLES);
+	iowrite32(1, denali->reg + ECC_ENABLE);
 }
 }
 
 
-/*
- * Althogh controller spec said SLC ECC is forceb to be 4bit,
- * but denali controller in MRST only support 15bit and 8bit ECC
- * correction
- */
-#define ECC_8BITS	14
-#define ECC_15BITS	26
+int denali_calc_ecc_bytes(int step_size, int strength)
+{
+	/* BCH code.  Denali requires ecc.bytes to be multiple of 2 */
+	return DIV_ROUND_UP(strength * fls(step_size * 8), 16) * 2;
+}
+EXPORT_SYMBOL(denali_calc_ecc_bytes);
+
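Plugging the two configurations the driver used to hard-code into the new helper shows the formula reproduces them exactly (worked example for 512-byte steps, where fls(512 * 8) = fls(4096) = 13):

	denali_calc_ecc_bytes(512, 8);	/* DIV_ROUND_UP(8 * 13, 16) * 2  = 14, the old ECC_8BITS  */
	denali_calc_ecc_bytes(512, 15);	/* DIV_ROUND_UP(15 * 13, 16) * 2 = 26, the old ECC_15BITS */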
+static int denali_ecc_setup(struct mtd_info *mtd, struct nand_chip *chip,
+			    struct denali_nand_info *denali)
+{
+	int oobavail = mtd->oobsize - denali->oob_skip_bytes;
+	int ret;
+
+	/*
+	 * If .size and .strength are already set (usually by DT),
+	 * check if they are supported by this controller.
+	 */
+	if (chip->ecc.size && chip->ecc.strength)
+		return nand_check_ecc_caps(chip, denali->ecc_caps, oobavail);
+
+	/*
+	 * We want .size and .strength closest to the chip's requirement
+	 * unless NAND_ECC_MAXIMIZE is requested.
+	 */
+	if (!(chip->ecc.options & NAND_ECC_MAXIMIZE)) {
+		ret = nand_match_ecc_req(chip, denali->ecc_caps, oobavail);
+		if (!ret)
+			return 0;
+	}
+
+	/* Max ECC strength is the last thing we can do */
+	return nand_maximize_ecc(chip, denali->ecc_caps, oobavail);
+}
 
 
 static int denali_ooblayout_ecc(struct mtd_info *mtd, int section,
 static int denali_ooblayout_ecc(struct mtd_info *mtd, int section,
 				struct mtd_oob_region *oobregion)
 				struct mtd_oob_region *oobregion)
@@ -1362,7 +1188,7 @@ static int denali_ooblayout_ecc(struct mtd_info *mtd, int section,
 	if (section)
 	if (section)
 		return -ERANGE;
 		return -ERANGE;
 
 
-	oobregion->offset = denali->bbtskipbytes;
+	oobregion->offset = denali->oob_skip_bytes;
 	oobregion->length = chip->ecc.total;
 	oobregion->length = chip->ecc.total;
 
 
 	return 0;
 	return 0;
@@ -1377,7 +1203,7 @@ static int denali_ooblayout_free(struct mtd_info *mtd, int section,
 	if (section)
 	if (section)
 		return -ERANGE;
 		return -ERANGE;
 
 
-	oobregion->offset = chip->ecc.total + denali->bbtskipbytes;
+	oobregion->offset = chip->ecc.total + denali->oob_skip_bytes;
 	oobregion->length = mtd->oobsize - oobregion->offset;
 	oobregion->length = mtd->oobsize - oobregion->offset;
 
 
 	return 0;
 	return 0;
@@ -1388,29 +1214,6 @@ static const struct mtd_ooblayout_ops denali_ooblayout_ops = {
 	.free = denali_ooblayout_free,
 	.free = denali_ooblayout_free,
 };
 };
 
 
-static uint8_t bbt_pattern[] = {'B', 'b', 't', '0' };
-static uint8_t mirror_pattern[] = {'1', 't', 'b', 'B' };
-
-static struct nand_bbt_descr bbt_main_descr = {
-	.options = NAND_BBT_LASTBLOCK | NAND_BBT_CREATE | NAND_BBT_WRITE
-		| NAND_BBT_2BIT | NAND_BBT_VERSION | NAND_BBT_PERCHIP,
-	.offs =	8,
-	.len = 4,
-	.veroffs = 12,
-	.maxblocks = 4,
-	.pattern = bbt_pattern,
-};
-
-static struct nand_bbt_descr bbt_mirror_descr = {
-	.options = NAND_BBT_LASTBLOCK | NAND_BBT_CREATE | NAND_BBT_WRITE
-		| NAND_BBT_2BIT | NAND_BBT_VERSION | NAND_BBT_PERCHIP,
-	.offs =	8,
-	.len = 4,
-	.veroffs = 12,
-	.maxblocks = 4,
-	.pattern = mirror_pattern,
-};
-
 /* initialize driver data structures */
 static void denali_drv_init(struct denali_nand_info *denali)
 {
@@ -1425,12 +1228,6 @@ static void denali_drv_init(struct denali_nand_info *denali)
 	 * element that might be access shared data (interrupt status)
 	 */
 	spin_lock_init(&denali->irq_lock);
-
-	/* indicate that MTD has not selected a valid bank yet */
-	denali->flash_bank = CHIP_SELECT_INVALID;
-
-	/* initialize our irq_status variable to indicate no interrupts */
-	denali->irq_status = 0;
 }
 }
 
 
 static int denali_multidev_fixup(struct denali_nand_info *denali)
 static int denali_multidev_fixup(struct denali_nand_info *denali)
@@ -1445,23 +1242,23 @@ static int denali_multidev_fixup(struct denali_nand_info *denali)
 	 * In this case, the core framework knows nothing about this fact,
 	 * so we should tell it the _logical_ pagesize and anything necessary.
 	 */
-	denali->devnum = ioread32(denali->flash_reg + DEVICES_CONNECTED);
+	denali->devs_per_cs = ioread32(denali->reg + DEVICES_CONNECTED);
 
 
 	/*
 	 * On some SoCs, DEVICES_CONNECTED is not auto-detected.
 	 * For those, DEVICES_CONNECTED is left to 0.  Set 1 if it is the case.
 	 */
-	if (denali->devnum == 0) {
-		denali->devnum = 1;
-		iowrite32(1, denali->flash_reg + DEVICES_CONNECTED);
+	if (denali->devs_per_cs == 0) {
+		denali->devs_per_cs = 1;
+		iowrite32(1, denali->reg + DEVICES_CONNECTED);
 	}
 	}
 
 
-	if (denali->devnum == 1)
+	if (denali->devs_per_cs == 1)
 		return 0;
 		return 0;
 
 
-	if (denali->devnum != 2) {
+	if (denali->devs_per_cs != 2) {
 		dev_err(denali->dev, "unsupported number of devices %d\n",
 		dev_err(denali->dev, "unsupported number of devices %d\n",
-			denali->devnum);
+			denali->devs_per_cs);
 		return -EINVAL;
 		return -EINVAL;
 	}
 	}
 
 
@@ -1479,7 +1276,7 @@ static int denali_multidev_fixup(struct denali_nand_info *denali)
 	chip->ecc.size <<= 1;
 	chip->ecc.bytes <<= 1;
 	chip->ecc.strength <<= 1;
-	denali->bbtskipbytes <<= 1;
+	denali->oob_skip_bytes <<= 1;
 
 
 	return 0;
 }
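To make the fixup concrete, a worked example with assumed values that are not taken from the patch:

	/*
	 * Example only: two x8 devices behind one chip select.
	 *   per-device : ecc.size 512,  ecc.strength 8,  ecc.bytes 14, oob_skip 8
	 *   logical    : ecc.size 1024, ecc.strength 16, ecc.bytes 28, oob_skip 16
	 * The NAND core only ever sees the combined, logical geometry.
	 */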
@@ -1490,27 +1287,12 @@ int denali_init(struct denali_nand_info *denali)
 	struct mtd_info *mtd = nand_to_mtd(chip);
 	struct mtd_info *mtd = nand_to_mtd(chip);
 	int ret;
 	int ret;
 
 
-	if (denali->platform == INTEL_CE4100) {
-		/*
-		 * Due to a silicon limitation, we can only support
-		 * ONFI timing mode 1 and below.
-		 */
-		if (onfi_timing_mode < -1 || onfi_timing_mode > 1) {
-			pr_err("Intel CE4100 only supports ONFI timing mode 1 or below\n");
-			return -EINVAL;
-		}
-	}
-
-	/* allocate a temporary buffer for nand_scan_ident() */
-	denali->buf.buf = devm_kzalloc(denali->dev, PAGE_SIZE,
-					GFP_DMA | GFP_KERNEL);
-	if (!denali->buf.buf)
-		return -ENOMEM;
-
 	mtd->dev.parent = denali->dev;
 	denali_hw_init(denali);
 	denali_drv_init(denali);
 
 
+	denali_clear_irq_all(denali);
+
 	/* Request IRQ after all the hardware initialization is finished */
 	/* Request IRQ after all the hardware initialization is finished */
 	ret = devm_request_irq(denali->dev, denali->irq, denali_isr,
 	ret = devm_request_irq(denali->dev, denali->irq, denali_isr,
 			       IRQF_SHARED, DENALI_NAND_NAME, denali);
 			       IRQF_SHARED, DENALI_NAND_NAME, denali);
@@ -1519,8 +1301,11 @@ int denali_init(struct denali_nand_info *denali)
 		return ret;
 		return ret;
 	}
 	}
 
 
-	/* now that our ISR is registered, we can enable interrupts */
-	denali_set_intr_modes(denali, true);
+	denali_enable_irq(denali);
+	denali_reset_banks(denali);
+
+	denali->active_bank = DENALI_INVALID_BANK;
+
 	nand_set_flash_node(chip, denali->dev->of_node);
 	/* Fallback to the default name if DT did not give "label" property */
 	if (!mtd->name)
@@ -1528,10 +1313,17 @@ int denali_init(struct denali_nand_info *denali)
 
 
 	/* register the driver with the NAND core subsystem */
 	/* register the driver with the NAND core subsystem */
 	chip->select_chip = denali_select_chip;
 	chip->select_chip = denali_select_chip;
-	chip->cmdfunc = denali_cmdfunc;
 	chip->read_byte = denali_read_byte;
 	chip->read_byte = denali_read_byte;
+	chip->write_byte = denali_write_byte;
+	chip->read_word = denali_read_word;
+	chip->cmd_ctrl = denali_cmd_ctrl;
+	chip->dev_ready = denali_dev_ready;
 	chip->waitfunc = denali_waitfunc;
 	chip->waitfunc = denali_waitfunc;
 
 
+	/* clk rate info is needed for setup_data_interface */
+	if (denali->clk_x_rate)
+		chip->setup_data_interface = denali_setup_data_interface;
+
 	/*
 	 * scan for NAND devices attached to the controller
 	 * this is the first stage in a two step process to register
@@ -1539,33 +1331,25 @@ int denali_init(struct denali_nand_info *denali)
 	 */
 	ret = nand_scan_ident(mtd, denali->max_banks, NULL);
 	if (ret)
-		goto failed_req_irq;
-
-	/* allocate the right size buffer now */
-	devm_kfree(denali->dev, denali->buf.buf);
-	denali->buf.buf = devm_kzalloc(denali->dev,
-			     mtd->writesize + mtd->oobsize,
-			     GFP_KERNEL);
-	if (!denali->buf.buf) {
-		ret = -ENOMEM;
-		goto failed_req_irq;
-	}
+		goto disable_irq;
 
 
-	ret = dma_set_mask(denali->dev,
-			   DMA_BIT_MASK(denali->caps & DENALI_CAP_DMA_64BIT ?
-					64 : 32));
-	if (ret) {
-		dev_err(denali->dev, "No usable DMA configuration\n");
-		goto failed_req_irq;
+	if (ioread32(denali->reg + FEATURES) & FEATURES__DMA)
+		denali->dma_avail = 1;
+
+	if (denali->dma_avail) {
+		int dma_bit = denali->caps & DENALI_CAP_DMA_64BIT ? 64 : 32;
+
+		ret = dma_set_mask(denali->dev, DMA_BIT_MASK(dma_bit));
+		if (ret) {
+			dev_info(denali->dev,
+				 "Failed to set DMA mask. Disabling DMA.\n");
+			denali->dma_avail = 0;
+		}
 	}
 	}
 
 
-	denali->buf.dma_buf = dma_map_single(denali->dev, denali->buf.buf,
-			     mtd->writesize + mtd->oobsize,
-			     DMA_BIDIRECTIONAL);
-	if (dma_mapping_error(denali->dev, denali->buf.dma_buf)) {
-		dev_err(denali->dev, "Failed to map DMA buffer\n");
-		ret = -EIO;
-		goto failed_req_irq;
+	if (denali->dma_avail) {
+		chip->options |= NAND_USE_BOUNCE_BUFFER;
+		chip->buf_align = 16;
 	}
 	}
 
 
 	/*
 	/*
@@ -1574,46 +1358,49 @@ int denali_init(struct denali_nand_info *denali)
 	 * bad block management.
 	 * bad block management.
 	 */
 	 */
 
 
-	/* Bad block management */
-	chip->bbt_td = &bbt_main_descr;
-	chip->bbt_md = &bbt_mirror_descr;
-
-	/* skip the scan for now until we have OOB read and write support */
 	chip->bbt_options |= NAND_BBT_USE_FLASH;
 	chip->bbt_options |= NAND_BBT_USE_FLASH;
-	chip->options |= NAND_SKIP_BBTSCAN;
+	chip->bbt_options |= NAND_BBT_NO_OOB;
+
 	chip->ecc.mode = NAND_ECC_HW_SYNDROME;
 	chip->ecc.mode = NAND_ECC_HW_SYNDROME;
 
 
 	/* no subpage writes on denali */
 	/* no subpage writes on denali */
 	chip->options |= NAND_NO_SUBPAGE_WRITE;
 	chip->options |= NAND_NO_SUBPAGE_WRITE;
 
 
-	/*
-	 * Denali Controller only support 15bit and 8bit ECC in MRST,
-	 * so just let controller do 15bit ECC for MLC and 8bit ECC for
-	 * SLC if possible.
-	 * */
-	if (!nand_is_slc(chip) &&
-			(mtd->oobsize > (denali->bbtskipbytes +
-			ECC_15BITS * (mtd->writesize /
-			ECC_SECTOR_SIZE)))) {
-		/* if MLC OOB size is large enough, use 15bit ECC*/
-		chip->ecc.strength = 15;
-		chip->ecc.bytes = ECC_15BITS;
-		iowrite32(15, denali->flash_reg + ECC_CORRECTION);
-	} else if (mtd->oobsize < (denali->bbtskipbytes +
-			ECC_8BITS * (mtd->writesize /
-			ECC_SECTOR_SIZE))) {
-		pr_err("Your NAND chip OOB is not large enough to contain 8bit ECC correction codes");
-		goto failed_req_irq;
-	} else {
-		chip->ecc.strength = 8;
-		chip->ecc.bytes = ECC_8BITS;
-		iowrite32(8, denali->flash_reg + ECC_CORRECTION);
+	ret = denali_ecc_setup(mtd, chip, denali);
+	if (ret) {
+		dev_err(denali->dev, "Failed to setup ECC settings.\n");
+		goto disable_irq;
 	}
 	}
 
 
+	dev_dbg(denali->dev,
+		"chosen ECC settings: step=%d, strength=%d, bytes=%d\n",
+		chip->ecc.size, chip->ecc.strength, chip->ecc.bytes);
+
+	iowrite32(MAKE_ECC_CORRECTION(chip->ecc.strength, 1),
+		  denali->reg + ECC_CORRECTION);
+	iowrite32(mtd->erasesize / mtd->writesize,
+		  denali->reg + PAGES_PER_BLOCK);
+	iowrite32(chip->options & NAND_BUSWIDTH_16 ? 1 : 0,
+		  denali->reg + DEVICE_WIDTH);
+	iowrite32(mtd->writesize, denali->reg + DEVICE_MAIN_AREA_SIZE);
+	iowrite32(mtd->oobsize, denali->reg + DEVICE_SPARE_AREA_SIZE);
+
+	iowrite32(chip->ecc.size, denali->reg + CFG_DATA_BLOCK_SIZE);
+	iowrite32(chip->ecc.size, denali->reg + CFG_LAST_DATA_BLOCK_SIZE);
+	/* chip->ecc.steps is set by nand_scan_tail(); not available here */
+	iowrite32(mtd->writesize / chip->ecc.size,
+		  denali->reg + CFG_NUM_DATA_BLOCKS);
+
 	mtd_set_ooblayout(mtd, &denali_ooblayout_ops);
 	mtd_set_ooblayout(mtd, &denali_ooblayout_ops);
 
 
-	/* override the default read operations */
-	chip->ecc.size = ECC_SECTOR_SIZE;
+	if (chip->options & NAND_BUSWIDTH_16) {
+		chip->read_buf = denali_read_buf16;
+		chip->write_buf = denali_write_buf16;
+	} else {
+		chip->read_buf = denali_read_buf;
+		chip->write_buf = denali_write_buf;
+	}
+	chip->ecc.options |= NAND_ECC_CUSTOM_PAGE_ACCESS;
 	chip->ecc.read_page = denali_read_page;
 	chip->ecc.read_page_raw = denali_read_page_raw;
 	chip->ecc.write_page = denali_write_page;
@@ -1624,21 +1411,34 @@ int denali_init(struct denali_nand_info *denali)
 
 
 	ret = denali_multidev_fixup(denali);
 	ret = denali_multidev_fixup(denali);
 	if (ret)
 	if (ret)
-		goto failed_req_irq;
+		goto disable_irq;
+
+	/*
+	 * This buffer is DMA-mapped by denali_{read,write}_page_raw.  Do not
+	 * use devm_kmalloc() because the memory allocated by devm_ does not
+	 * guarantee DMA-safe alignment.
+	 */
+	denali->buf = kmalloc(mtd->writesize + mtd->oobsize, GFP_KERNEL);
+	if (!denali->buf) {
+		ret = -ENOMEM;
+		goto disable_irq;
+	}
 
 
 	ret = nand_scan_tail(mtd);
 	ret = nand_scan_tail(mtd);
 	if (ret)
 	if (ret)
-		goto failed_req_irq;
+		goto free_buf;
 
 
 	ret = mtd_device_register(mtd, NULL, 0);
 	if (ret) {
 		dev_err(denali->dev, "Failed to register MTD: %d\n", ret);
-		goto failed_req_irq;
+		goto free_buf;
 	}
 	}
 	return 0;
 	return 0;
 
 
-failed_req_irq:
-	denali_irq_cleanup(denali->irq, denali);
+free_buf:
+	kfree(denali->buf);
+disable_irq:
+	denali_disable_irq(denali);
 
 
 	return ret;
 	return ret;
 }
 }
@@ -1648,16 +1448,9 @@ EXPORT_SYMBOL(denali_init);
 void denali_remove(struct denali_nand_info *denali)
 void denali_remove(struct denali_nand_info *denali)
 {
 	struct mtd_info *mtd = nand_to_mtd(&denali->nand);
-	 * Pre-compute DMA buffer size to avoid any problems in case
-	 * nand_release() ever changes in a way that mtd->writesize and
-	 * mtd->oobsize are not reliable after this call.
-	 */
-	int bufsize = mtd->writesize + mtd->oobsize;
 
 
 	nand_release(mtd);
 	nand_release(mtd);
-	denali_irq_cleanup(denali->irq, denali);
-	dma_unmap_single(denali->dev, denali->buf.dma_buf, bufsize,
-			 DMA_BIDIRECTIONAL);
+	kfree(denali->buf);
+	denali_disable_irq(denali);
 }
 EXPORT_SYMBOL(denali_remove);

+ 150 - 165
drivers/mtd/nand/denali.h

@@ -24,330 +24,315 @@
 #include <linux/mtd/nand.h>
 
 
 #define DEVICE_RESET				0x0
-#define     DEVICE_RESET__BANK0				0x0001
-#define     DEVICE_RESET__BANK1				0x0002
-#define     DEVICE_RESET__BANK2				0x0004
-#define     DEVICE_RESET__BANK3				0x0008
+#define     DEVICE_RESET__BANK(bank)			BIT(bank)
 
 
 #define TRANSFER_SPARE_REG			0x10
-#define     TRANSFER_SPARE_REG__FLAG			0x0001
+#define     TRANSFER_SPARE_REG__FLAG			BIT(0)
 
 
 #define LOAD_WAIT_CNT				0x20
 #define LOAD_WAIT_CNT				0x20
-#define     LOAD_WAIT_CNT__VALUE			0xffff
+#define     LOAD_WAIT_CNT__VALUE			GENMASK(15, 0)
 
 
 #define PROGRAM_WAIT_CNT			0x30
 #define PROGRAM_WAIT_CNT			0x30
-#define     PROGRAM_WAIT_CNT__VALUE			0xffff
+#define     PROGRAM_WAIT_CNT__VALUE			GENMASK(15, 0)
 
 
 #define ERASE_WAIT_CNT				0x40
 #define ERASE_WAIT_CNT				0x40
-#define     ERASE_WAIT_CNT__VALUE			0xffff
+#define     ERASE_WAIT_CNT__VALUE			GENMASK(15, 0)
 
 
 #define INT_MON_CYCCNT				0x50
 #define INT_MON_CYCCNT				0x50
-#define     INT_MON_CYCCNT__VALUE			0xffff
+#define     INT_MON_CYCCNT__VALUE			GENMASK(15, 0)
 
 
 #define RB_PIN_ENABLED				0x60
 #define RB_PIN_ENABLED				0x60
-#define     RB_PIN_ENABLED__BANK0			0x0001
-#define     RB_PIN_ENABLED__BANK1			0x0002
-#define     RB_PIN_ENABLED__BANK2			0x0004
-#define     RB_PIN_ENABLED__BANK3			0x0008
+#define     RB_PIN_ENABLED__BANK(bank)			BIT(bank)
 
 
 #define MULTIPLANE_OPERATION			0x70
 #define MULTIPLANE_OPERATION			0x70
-#define     MULTIPLANE_OPERATION__FLAG			0x0001
+#define     MULTIPLANE_OPERATION__FLAG			BIT(0)
 
 
 #define MULTIPLANE_READ_ENABLE			0x80
 #define MULTIPLANE_READ_ENABLE			0x80
-#define     MULTIPLANE_READ_ENABLE__FLAG		0x0001
+#define     MULTIPLANE_READ_ENABLE__FLAG		BIT(0)
 
 
 #define COPYBACK_DISABLE			0x90
 #define COPYBACK_DISABLE			0x90
-#define     COPYBACK_DISABLE__FLAG			0x0001
+#define     COPYBACK_DISABLE__FLAG			BIT(0)
 
 
 #define CACHE_WRITE_ENABLE			0xa0
 #define CACHE_WRITE_ENABLE			0xa0
-#define     CACHE_WRITE_ENABLE__FLAG			0x0001
+#define     CACHE_WRITE_ENABLE__FLAG			BIT(0)
 
 
 #define CACHE_READ_ENABLE			0xb0
 #define CACHE_READ_ENABLE			0xb0
-#define     CACHE_READ_ENABLE__FLAG			0x0001
+#define     CACHE_READ_ENABLE__FLAG			BIT(0)
 
 
 #define PREFETCH_MODE				0xc0
 #define PREFETCH_MODE				0xc0
-#define     PREFETCH_MODE__PREFETCH_EN			0x0001
-#define     PREFETCH_MODE__PREFETCH_BURST_LENGTH	0xfff0
+#define     PREFETCH_MODE__PREFETCH_EN			BIT(0)
+#define     PREFETCH_MODE__PREFETCH_BURST_LENGTH	GENMASK(15, 4)
 
 
 #define CHIP_ENABLE_DONT_CARE			0xd0
 #define CHIP_ENABLE_DONT_CARE			0xd0
-#define     CHIP_EN_DONT_CARE__FLAG			0x01
+#define     CHIP_EN_DONT_CARE__FLAG			BIT(0)
 
 
 #define ECC_ENABLE				0xe0
 #define ECC_ENABLE				0xe0
-#define     ECC_ENABLE__FLAG				0x0001
+#define     ECC_ENABLE__FLAG				BIT(0)
 
 
 #define GLOBAL_INT_ENABLE			0xf0
 #define GLOBAL_INT_ENABLE			0xf0
-#define     GLOBAL_INT_EN_FLAG				0x01
+#define     GLOBAL_INT_EN_FLAG				BIT(0)
 
 
-#define WE_2_RE					0x100
-#define     WE_2_RE__VALUE				0x003f
+#define TWHR2_AND_WE_2_RE			0x100
+#define     TWHR2_AND_WE_2_RE__WE_2_RE			GENMASK(5, 0)
+#define     TWHR2_AND_WE_2_RE__TWHR2			GENMASK(13, 8)
 
 
-#define ADDR_2_DATA				0x110
-#define     ADDR_2_DATA__VALUE				0x003f
+#define TCWAW_AND_ADDR_2_DATA			0x110
+/* The width of ADDR_2_DATA is 6 bit for old IP, 7 bit for new IP */
+#define     TCWAW_AND_ADDR_2_DATA__ADDR_2_DATA		GENMASK(6, 0)
+#define     TCWAW_AND_ADDR_2_DATA__TCWAW		GENMASK(13, 8)
 
 
 #define RE_2_WE					0x120
 #define RE_2_WE					0x120
-#define     RE_2_WE__VALUE				0x003f
+#define     RE_2_WE__VALUE				GENMASK(5, 0)
 
 
 #define ACC_CLKS				0x130
 #define ACC_CLKS				0x130
-#define     ACC_CLKS__VALUE				0x000f
+#define     ACC_CLKS__VALUE				GENMASK(3, 0)
 
 
 #define NUMBER_OF_PLANES			0x140
 #define NUMBER_OF_PLANES			0x140
-#define     NUMBER_OF_PLANES__VALUE			0x0007
+#define     NUMBER_OF_PLANES__VALUE			GENMASK(2, 0)
 
 
 #define PAGES_PER_BLOCK				0x150
 #define PAGES_PER_BLOCK				0x150
-#define     PAGES_PER_BLOCK__VALUE			0xffff
+#define     PAGES_PER_BLOCK__VALUE			GENMASK(15, 0)
 
 
 #define DEVICE_WIDTH				0x160
 #define DEVICE_WIDTH				0x160
-#define     DEVICE_WIDTH__VALUE				0x0003
+#define     DEVICE_WIDTH__VALUE				GENMASK(1, 0)
 
 
 #define DEVICE_MAIN_AREA_SIZE			0x170
 #define DEVICE_MAIN_AREA_SIZE			0x170
-#define     DEVICE_MAIN_AREA_SIZE__VALUE		0xffff
+#define     DEVICE_MAIN_AREA_SIZE__VALUE		GENMASK(15, 0)
 
 
 #define DEVICE_SPARE_AREA_SIZE			0x180
 #define DEVICE_SPARE_AREA_SIZE			0x180
-#define     DEVICE_SPARE_AREA_SIZE__VALUE		0xffff
+#define     DEVICE_SPARE_AREA_SIZE__VALUE		GENMASK(15, 0)
 
 
 #define TWO_ROW_ADDR_CYCLES			0x190
 #define TWO_ROW_ADDR_CYCLES			0x190
-#define     TWO_ROW_ADDR_CYCLES__FLAG			0x0001
+#define     TWO_ROW_ADDR_CYCLES__FLAG			BIT(0)
 
 
 #define MULTIPLANE_ADDR_RESTRICT		0x1a0
 #define MULTIPLANE_ADDR_RESTRICT		0x1a0
-#define     MULTIPLANE_ADDR_RESTRICT__FLAG		0x0001
+#define     MULTIPLANE_ADDR_RESTRICT__FLAG		BIT(0)
 
 
 #define ECC_CORRECTION				0x1b0
 #define ECC_CORRECTION				0x1b0
-#define     ECC_CORRECTION__VALUE			0x001f
+#define     ECC_CORRECTION__VALUE			GENMASK(4, 0)
+#define     ECC_CORRECTION__ERASE_THRESHOLD		GENMASK(31, 16)
+#define     MAKE_ECC_CORRECTION(val, thresh)		\
+			(((val) & (ECC_CORRECTION__VALUE)) | \
+			(((thresh) << 16) & (ECC_CORRECTION__ERASE_THRESHOLD)))
 
 
 #define READ_MODE				0x1c0
 #define READ_MODE				0x1c0
-#define     READ_MODE__VALUE				0x000f
+#define     READ_MODE__VALUE				GENMASK(3, 0)
 
 
 #define WRITE_MODE				0x1d0
 #define WRITE_MODE				0x1d0
-#define     WRITE_MODE__VALUE				0x000f
+#define     WRITE_MODE__VALUE				GENMASK(3, 0)
 
 
 #define COPYBACK_MODE				0x1e0
 #define COPYBACK_MODE				0x1e0
-#define     COPYBACK_MODE__VALUE			0x000f
+#define     COPYBACK_MODE__VALUE			GENMASK(3, 0)
 
 
 #define RDWR_EN_LO_CNT				0x1f0
 #define RDWR_EN_LO_CNT				0x1f0
-#define     RDWR_EN_LO_CNT__VALUE			0x001f
+#define     RDWR_EN_LO_CNT__VALUE			GENMASK(4, 0)
 
 
 #define RDWR_EN_HI_CNT				0x200
 #define RDWR_EN_HI_CNT				0x200
-#define     RDWR_EN_HI_CNT__VALUE			0x001f
+#define     RDWR_EN_HI_CNT__VALUE			GENMASK(4, 0)
 
 
 #define MAX_RD_DELAY				0x210
 #define MAX_RD_DELAY				0x210
-#define     MAX_RD_DELAY__VALUE				0x000f
+#define     MAX_RD_DELAY__VALUE				GENMASK(3, 0)
 
 
 #define CS_SETUP_CNT				0x220
 #define CS_SETUP_CNT				0x220
-#define     CS_SETUP_CNT__VALUE				0x001f
+#define     CS_SETUP_CNT__VALUE				GENMASK(4, 0)
+#define     CS_SETUP_CNT__TWB				GENMASK(17, 12)
 
 
 #define SPARE_AREA_SKIP_BYTES			0x230
 #define SPARE_AREA_SKIP_BYTES			0x230
-#define     SPARE_AREA_SKIP_BYTES__VALUE		0x003f
+#define     SPARE_AREA_SKIP_BYTES__VALUE		GENMASK(5, 0)
 
 
 #define SPARE_AREA_MARKER			0x240
 #define SPARE_AREA_MARKER			0x240
-#define     SPARE_AREA_MARKER__VALUE			0xffff
+#define     SPARE_AREA_MARKER__VALUE			GENMASK(15, 0)
 
 
 #define DEVICES_CONNECTED			0x250
 #define DEVICES_CONNECTED			0x250
-#define     DEVICES_CONNECTED__VALUE			0x0007
+#define     DEVICES_CONNECTED__VALUE			GENMASK(2, 0)
 
 
 #define DIE_MASK				0x260
 #define DIE_MASK				0x260
-#define     DIE_MASK__VALUE				0x00ff
+#define     DIE_MASK__VALUE				GENMASK(7, 0)
 
 
 #define FIRST_BLOCK_OF_NEXT_PLANE		0x270
 #define FIRST_BLOCK_OF_NEXT_PLANE		0x270
-#define     FIRST_BLOCK_OF_NEXT_PLANE__VALUE		0xffff
+#define     FIRST_BLOCK_OF_NEXT_PLANE__VALUE		GENMASK(15, 0)
 
 
 #define WRITE_PROTECT				0x280
 #define WRITE_PROTECT				0x280
-#define     WRITE_PROTECT__FLAG				0x0001
+#define     WRITE_PROTECT__FLAG				BIT(0)
 
 
 #define RE_2_RE					0x290
 #define RE_2_RE					0x290
-#define     RE_2_RE__VALUE				0x003f
+#define     RE_2_RE__VALUE				GENMASK(5, 0)
 
 
 #define MANUFACTURER_ID				0x300
 #define MANUFACTURER_ID				0x300
-#define     MANUFACTURER_ID__VALUE			0x00ff
+#define     MANUFACTURER_ID__VALUE			GENMASK(7, 0)
 
 
 #define DEVICE_ID				0x310
 #define DEVICE_ID				0x310
-#define     DEVICE_ID__VALUE				0x00ff
+#define     DEVICE_ID__VALUE				GENMASK(7, 0)
 
 
 #define DEVICE_PARAM_0				0x320
 #define DEVICE_PARAM_0				0x320
-#define     DEVICE_PARAM_0__VALUE			0x00ff
+#define     DEVICE_PARAM_0__VALUE			GENMASK(7, 0)
 
 
 #define DEVICE_PARAM_1				0x330
 #define DEVICE_PARAM_1				0x330
-#define     DEVICE_PARAM_1__VALUE			0x00ff
+#define     DEVICE_PARAM_1__VALUE			GENMASK(7, 0)
 
 
 #define DEVICE_PARAM_2				0x340
 #define DEVICE_PARAM_2				0x340
-#define     DEVICE_PARAM_2__VALUE			0x00ff
+#define     DEVICE_PARAM_2__VALUE			GENMASK(7, 0)
 
 
 #define LOGICAL_PAGE_DATA_SIZE			0x350
 #define LOGICAL_PAGE_DATA_SIZE			0x350
-#define     LOGICAL_PAGE_DATA_SIZE__VALUE		0xffff
+#define     LOGICAL_PAGE_DATA_SIZE__VALUE		GENMASK(15, 0)
 
 
 #define LOGICAL_PAGE_SPARE_SIZE			0x360
 #define LOGICAL_PAGE_SPARE_SIZE			0x360
-#define     LOGICAL_PAGE_SPARE_SIZE__VALUE		0xffff
+#define     LOGICAL_PAGE_SPARE_SIZE__VALUE		GENMASK(15, 0)
 
 
 #define REVISION				0x370
 #define REVISION				0x370
-#define     REVISION__VALUE				0xffff
+#define     REVISION__VALUE				GENMASK(15, 0)
 
 
 #define ONFI_DEVICE_FEATURES			0x380
 #define ONFI_DEVICE_FEATURES			0x380
-#define     ONFI_DEVICE_FEATURES__VALUE			0x003f
+#define     ONFI_DEVICE_FEATURES__VALUE			GENMASK(5, 0)
 
 
 #define ONFI_OPTIONAL_COMMANDS			0x390
 #define ONFI_OPTIONAL_COMMANDS			0x390
-#define     ONFI_OPTIONAL_COMMANDS__VALUE		0x003f
+#define     ONFI_OPTIONAL_COMMANDS__VALUE		GENMASK(5, 0)
 
 
 #define ONFI_TIMING_MODE			0x3a0
 #define ONFI_TIMING_MODE			0x3a0
-#define     ONFI_TIMING_MODE__VALUE			0x003f
+#define     ONFI_TIMING_MODE__VALUE			GENMASK(5, 0)
 
 
 #define ONFI_PGM_CACHE_TIMING_MODE		0x3b0
 #define ONFI_PGM_CACHE_TIMING_MODE		0x3b0
-#define     ONFI_PGM_CACHE_TIMING_MODE__VALUE		0x003f
+#define     ONFI_PGM_CACHE_TIMING_MODE__VALUE		GENMASK(5, 0)
 
 
 #define ONFI_DEVICE_NO_OF_LUNS			0x3c0
 #define ONFI_DEVICE_NO_OF_LUNS			0x3c0
-#define     ONFI_DEVICE_NO_OF_LUNS__NO_OF_LUNS		0x00ff
-#define     ONFI_DEVICE_NO_OF_LUNS__ONFI_DEVICE		0x0100
+#define     ONFI_DEVICE_NO_OF_LUNS__NO_OF_LUNS		GENMASK(7, 0)
+#define     ONFI_DEVICE_NO_OF_LUNS__ONFI_DEVICE		BIT(8)
 
 
 #define ONFI_DEVICE_NO_OF_BLOCKS_PER_LUN_L	0x3d0
 #define ONFI_DEVICE_NO_OF_BLOCKS_PER_LUN_L	0x3d0
-#define     ONFI_DEVICE_NO_OF_BLOCKS_PER_LUN_L__VALUE	0xffff
+#define     ONFI_DEVICE_NO_OF_BLOCKS_PER_LUN_L__VALUE	GENMASK(15, 0)
 
 
 #define ONFI_DEVICE_NO_OF_BLOCKS_PER_LUN_U	0x3e0
 #define ONFI_DEVICE_NO_OF_BLOCKS_PER_LUN_U	0x3e0
-#define     ONFI_DEVICE_NO_OF_BLOCKS_PER_LUN_U__VALUE	0xffff
-
-#define FEATURES					0x3f0
-#define     FEATURES__N_BANKS				0x0003
-#define     FEATURES__ECC_MAX_ERR			0x003c
-#define     FEATURES__DMA				0x0040
-#define     FEATURES__CMD_DMA				0x0080
-#define     FEATURES__PARTITION				0x0100
-#define     FEATURES__XDMA_SIDEBAND			0x0200
-#define     FEATURES__GPREG				0x0400
-#define     FEATURES__INDEX_ADDR			0x0800
+#define     ONFI_DEVICE_NO_OF_BLOCKS_PER_LUN_U__VALUE	GENMASK(15, 0)
+
+#define FEATURES				0x3f0
+#define     FEATURES__N_BANKS				GENMASK(1, 0)
+#define     FEATURES__ECC_MAX_ERR			GENMASK(5, 2)
+#define     FEATURES__DMA				BIT(6)
+#define     FEATURES__CMD_DMA				BIT(7)
+#define     FEATURES__PARTITION				BIT(8)
+#define     FEATURES__XDMA_SIDEBAND			BIT(9)
+#define     FEATURES__GPREG				BIT(10)
+#define     FEATURES__INDEX_ADDR			BIT(11)
 
 
 #define TRANSFER_MODE				0x400
 #define TRANSFER_MODE				0x400
-#define     TRANSFER_MODE__VALUE			0x0003
+#define     TRANSFER_MODE__VALUE			GENMASK(1, 0)
 
 
-#define INTR_STATUS(__bank)	(0x410 + ((__bank) * 0x50))
-#define INTR_EN(__bank)		(0x420 + ((__bank) * 0x50))
+#define INTR_STATUS(bank)			(0x410 + (bank) * 0x50)
+#define INTR_EN(bank)				(0x420 + (bank) * 0x50)
 /* bit[1:0] is used differently depending on IP version */
-#define     INTR__ECC_UNCOR_ERR				0x0001	/* new IP */
-#define     INTR__ECC_TRANSACTION_DONE			0x0001	/* old IP */
-#define     INTR__ECC_ERR				0x0002	/* old IP */
-#define     INTR__DMA_CMD_COMP				0x0004
-#define     INTR__TIME_OUT				0x0008
-#define     INTR__PROGRAM_FAIL				0x0010
-#define     INTR__ERASE_FAIL				0x0020
-#define     INTR__LOAD_COMP				0x0040
-#define     INTR__PROGRAM_COMP				0x0080
-#define     INTR__ERASE_COMP				0x0100
-#define     INTR__PIPE_CPYBCK_CMD_COMP			0x0200
-#define     INTR__LOCKED_BLK				0x0400
-#define     INTR__UNSUP_CMD				0x0800
-#define     INTR__INT_ACT				0x1000
-#define     INTR__RST_COMP				0x2000
-#define     INTR__PIPE_CMD_ERR				0x4000
-#define     INTR__PAGE_XFER_INC				0x8000
-
-#define PAGE_CNT(__bank)	(0x430 + ((__bank) * 0x50))
-#define ERR_PAGE_ADDR(__bank)	(0x440 + ((__bank) * 0x50))
-#define ERR_BLOCK_ADDR(__bank)	(0x450 + ((__bank) * 0x50))
+#define     INTR__ECC_UNCOR_ERR				BIT(0)	/* new IP */
+#define     INTR__ECC_TRANSACTION_DONE			BIT(0)	/* old IP */
+#define     INTR__ECC_ERR				BIT(1)	/* old IP */
+#define     INTR__DMA_CMD_COMP				BIT(2)
+#define     INTR__TIME_OUT				BIT(3)
+#define     INTR__PROGRAM_FAIL				BIT(4)
+#define     INTR__ERASE_FAIL				BIT(5)
+#define     INTR__LOAD_COMP				BIT(6)
+#define     INTR__PROGRAM_COMP				BIT(7)
+#define     INTR__ERASE_COMP				BIT(8)
+#define     INTR__PIPE_CPYBCK_CMD_COMP			BIT(9)
+#define     INTR__LOCKED_BLK				BIT(10)
+#define     INTR__UNSUP_CMD				BIT(11)
+#define     INTR__INT_ACT				BIT(12)
+#define     INTR__RST_COMP				BIT(13)
+#define     INTR__PIPE_CMD_ERR				BIT(14)
+#define     INTR__PAGE_XFER_INC				BIT(15)
+#define     INTR__ERASED_PAGE				BIT(16)
+
+#define PAGE_CNT(bank)				(0x430 + (bank) * 0x50)
+#define ERR_PAGE_ADDR(bank)			(0x440 + (bank) * 0x50)
+#define ERR_BLOCK_ADDR(bank)			(0x450 + (bank) * 0x50)
 
 
 #define ECC_THRESHOLD				0x600
 #define ECC_THRESHOLD				0x600
-#define     ECC_THRESHOLD__VALUE			0x03ff
+#define     ECC_THRESHOLD__VALUE			GENMASK(9, 0)
 
 
 #define ECC_ERROR_BLOCK_ADDRESS			0x610
 #define ECC_ERROR_BLOCK_ADDRESS			0x610
-#define     ECC_ERROR_BLOCK_ADDRESS__VALUE		0xffff
+#define     ECC_ERROR_BLOCK_ADDRESS__VALUE		GENMASK(15, 0)
 
 
 #define ECC_ERROR_PAGE_ADDRESS			0x620
 #define ECC_ERROR_PAGE_ADDRESS			0x620
-#define     ECC_ERROR_PAGE_ADDRESS__VALUE		0x0fff
-#define     ECC_ERROR_PAGE_ADDRESS__BANK		0xf000
+#define     ECC_ERROR_PAGE_ADDRESS__VALUE		GENMASK(11, 0)
+#define     ECC_ERROR_PAGE_ADDRESS__BANK		GENMASK(15, 12)
 
 
 #define ECC_ERROR_ADDRESS			0x630
 #define ECC_ERROR_ADDRESS			0x630
-#define     ECC_ERROR_ADDRESS__OFFSET			0x0fff
-#define     ECC_ERROR_ADDRESS__SECTOR_NR		0xf000
+#define     ECC_ERROR_ADDRESS__OFFSET			GENMASK(11, 0)
+#define     ECC_ERROR_ADDRESS__SECTOR_NR		GENMASK(15, 12)
 
 
 #define ERR_CORRECTION_INFO			0x640
 #define ERR_CORRECTION_INFO			0x640
-#define     ERR_CORRECTION_INFO__BYTEMASK		0x00ff
-#define     ERR_CORRECTION_INFO__DEVICE_NR		0x0f00
-#define     ERR_CORRECTION_INFO__ERROR_TYPE		0x4000
-#define     ERR_CORRECTION_INFO__LAST_ERR_INFO		0x8000
+#define     ERR_CORRECTION_INFO__BYTEMASK		GENMASK(7, 0)
+#define     ERR_CORRECTION_INFO__DEVICE_NR		GENMASK(11, 8)
+#define     ERR_CORRECTION_INFO__ERROR_TYPE		BIT(14)
+#define     ERR_CORRECTION_INFO__LAST_ERR_INFO		BIT(15)
 
 
 #define ECC_COR_INFO(bank)			(0x650 + (bank) / 2 * 0x10)
 #define     ECC_COR_INFO__SHIFT(bank)			((bank) % 2 * 8)
-#define     ECC_COR_INFO__MAX_ERRORS			0x007f
-#define     ECC_COR_INFO__UNCOR_ERR			0x0080
+#define     ECC_COR_INFO__MAX_ERRORS			GENMASK(6, 0)
+#define     ECC_COR_INFO__UNCOR_ERR			BIT(7)
+
+#define CFG_DATA_BLOCK_SIZE			0x6b0
+
+#define CFG_LAST_DATA_BLOCK_SIZE		0x6c0
+
+#define CFG_NUM_DATA_BLOCKS			0x6d0
+
+#define CFG_META_DATA_SIZE			0x6e0
 
 
 #define DMA_ENABLE				0x700
 #define DMA_ENABLE				0x700
-#define     DMA_ENABLE__FLAG				0x0001
+#define     DMA_ENABLE__FLAG				BIT(0)
 
 
 #define IGNORE_ECC_DONE				0x710
 #define IGNORE_ECC_DONE				0x710
-#define     IGNORE_ECC_DONE__FLAG			0x0001
+#define     IGNORE_ECC_DONE__FLAG			BIT(0)
 
 
 #define DMA_INTR				0x720
 #define DMA_INTR				0x720
 #define DMA_INTR_EN				0x730
 #define DMA_INTR_EN				0x730
-#define     DMA_INTR__TARGET_ERROR			0x0001
-#define     DMA_INTR__DESC_COMP_CHANNEL0		0x0002
-#define     DMA_INTR__DESC_COMP_CHANNEL1		0x0004
-#define     DMA_INTR__DESC_COMP_CHANNEL2		0x0008
-#define     DMA_INTR__DESC_COMP_CHANNEL3		0x0010
-#define     DMA_INTR__MEMCOPY_DESC_COMP			0x0020
+#define     DMA_INTR__TARGET_ERROR			BIT(0)
+#define     DMA_INTR__DESC_COMP_CHANNEL0		BIT(1)
+#define     DMA_INTR__DESC_COMP_CHANNEL1		BIT(2)
+#define     DMA_INTR__DESC_COMP_CHANNEL2		BIT(3)
+#define     DMA_INTR__DESC_COMP_CHANNEL3		BIT(4)
+#define     DMA_INTR__MEMCOPY_DESC_COMP			BIT(5)
 
 
 #define TARGET_ERR_ADDR_LO			0x740
 #define TARGET_ERR_ADDR_LO			0x740
-#define     TARGET_ERR_ADDR_LO__VALUE			0xffff
+#define     TARGET_ERR_ADDR_LO__VALUE			GENMASK(15, 0)
 
 
 #define TARGET_ERR_ADDR_HI			0x750
 #define TARGET_ERR_ADDR_HI			0x750
-#define     TARGET_ERR_ADDR_HI__VALUE			0xffff
+#define     TARGET_ERR_ADDR_HI__VALUE			GENMASK(15, 0)
 
 
 #define CHNL_ACTIVE				0x760
 #define CHNL_ACTIVE				0x760
-#define     CHNL_ACTIVE__CHANNEL0			0x0001
-#define     CHNL_ACTIVE__CHANNEL1			0x0002
-#define     CHNL_ACTIVE__CHANNEL2			0x0004
-#define     CHNL_ACTIVE__CHANNEL3			0x0008
-
-#define FAIL 1                  /*failed flag*/
-#define PASS 0                  /*success flag*/
-
-#define CLK_X  5
-#define CLK_MULTI 4
-
-#define ONFI_BLOOM_TIME         1
-#define MODE5_WORKAROUND        0
-
-
-#define MODE_00    0x00000000
-#define MODE_01    0x04000000
-#define MODE_10    0x08000000
-#define MODE_11    0x0C000000
-
-#define ECC_SECTOR_SIZE     512
-
-struct nand_buf {
-	int head;
-	int tail;
-	uint8_t *buf;
-	dma_addr_t dma_buf;
-};
-
-#define INTEL_CE4100	1
-#define INTEL_MRST	2
-#define DT		3
+#define     CHNL_ACTIVE__CHANNEL0			BIT(0)
+#define     CHNL_ACTIVE__CHANNEL1			BIT(1)
+#define     CHNL_ACTIVE__CHANNEL2			BIT(2)
+#define     CHNL_ACTIVE__CHANNEL3			BIT(3)
 
 
 struct denali_nand_info {
 	struct nand_chip nand;
-	int flash_bank; /* currently selected chip */
-	int status;
-	int platform;
-	struct nand_buf buf;
+	unsigned long clk_x_rate;	/* bus interface clock rate */
+	int active_bank;		/* currently selected bank */
 	struct device *dev;
-	int total_used_banks;
-	int page;
-	void __iomem *flash_reg;	/* Register Interface */
-	void __iomem *flash_mem;	/* Host Data/Command Interface */
+	void __iomem *reg;		/* Register Interface */
+	void __iomem *host;		/* Host Data/Command Interface */
 
 
 	/* elements used by ISR */
 	struct completion complete;
 	spinlock_t irq_lock;
+	uint32_t irq_mask;
 	uint32_t irq_status;
 	int irq;
 
 
-	int devnum;	/* represent how many nands connected */
-	int bbtskipbytes;
+	void *buf;
+	dma_addr_t dma_addr;
+	int dma_avail;
+	int devs_per_cs;		/* devices connected in parallel */
+	int oob_skip_bytes;
 	int max_banks;
 	unsigned int revision;
 	unsigned int caps;
+	const struct nand_ecc_caps *ecc_caps;
 };
 
 
 #define DENALI_CAP_HW_ECC_FIXUP			BIT(0)
 #define DENALI_CAP_DMA_64BIT			BIT(1)
 
+int denali_calc_ecc_bytes(int step_size, int strength);
 extern int denali_init(struct denali_nand_info *denali);
 extern void denali_remove(struct denali_nand_info *denali);
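
The register layout above now spells every field out with BIT()/GENMASK() instead of magic masks (GENMASK(15, 0) is 0xffff, BIT(8) is 0x100), which makes the field widths self-documenting. A hedged sketch of how such masks are typically consumed, using FIELD_GET() from <linux/bitfield.h> (illustration only, not necessarily how denali.c reads these registers):

	u32 features = ioread32(denali->reg + FEATURES);
	unsigned int n_banks_field = FIELD_GET(FEATURES__N_BANKS, features);
	bool has_dma = features & FEATURES__DMA;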
 
 

+ 41 - 12
drivers/mtd/nand/denali_dt.c

@@ -32,10 +32,31 @@ struct denali_dt {
 struct denali_dt_data {
 	unsigned int revision;
 	unsigned int caps;
+	const struct nand_ecc_caps *ecc_caps;
 };
 
 
+NAND_ECC_CAPS_SINGLE(denali_socfpga_ecc_caps, denali_calc_ecc_bytes,
+		     512, 8, 15);
 static const struct denali_dt_data denali_socfpga_data = {
 	.caps = DENALI_CAP_HW_ECC_FIXUP,
+	.ecc_caps = &denali_socfpga_ecc_caps,
+};
+
+NAND_ECC_CAPS_SINGLE(denali_uniphier_v5a_ecc_caps, denali_calc_ecc_bytes,
+		     1024, 8, 16, 24);
+static const struct denali_dt_data denali_uniphier_v5a_data = {
+	.caps = DENALI_CAP_HW_ECC_FIXUP |
+		DENALI_CAP_DMA_64BIT,
+	.ecc_caps = &denali_uniphier_v5a_ecc_caps,
+};
+
+NAND_ECC_CAPS_SINGLE(denali_uniphier_v5b_ecc_caps, denali_calc_ecc_bytes,
+		     1024, 8, 16);
+static const struct denali_dt_data denali_uniphier_v5b_data = {
+	.revision = 0x0501,
+	.caps = DENALI_CAP_HW_ECC_FIXUP |
+		DENALI_CAP_DMA_64BIT,
+	.ecc_caps = &denali_uniphier_v5b_ecc_caps,
 };
 };
 
 
 static const struct of_device_id denali_nand_dt_ids[] = {
 static const struct of_device_id denali_nand_dt_ids[] = {
@@ -43,13 +64,21 @@ static const struct of_device_id denali_nand_dt_ids[] = {
 		.compatible = "altr,socfpga-denali-nand",
 		.compatible = "altr,socfpga-denali-nand",
 		.data = &denali_socfpga_data,
 		.data = &denali_socfpga_data,
 	},
 	},
+	{
+		.compatible = "socionext,uniphier-denali-nand-v5a",
+		.data = &denali_uniphier_v5a_data,
+	},
+	{
+		.compatible = "socionext,uniphier-denali-nand-v5b",
+		.data = &denali_uniphier_v5b_data,
+	},
 	{ /* sentinel */ }
 };
 MODULE_DEVICE_TABLE(of, denali_nand_dt_ids);
 
 static int denali_dt_probe(struct platform_device *pdev)
 {
-	struct resource *denali_reg, *nand_data;
+	struct resource *res;
 	struct denali_dt *dt;
 	const struct denali_dt_data *data;
 	struct denali_nand_info *denali;
@@ -64,9 +93,9 @@ static int denali_dt_probe(struct platform_device *pdev)
 	if (data) {
 		denali->revision = data->revision;
 		denali->caps = data->caps;
+		denali->ecc_caps = data->ecc_caps;
 	}
 
 
-	denali->platform = DT;
 	denali->dev = &pdev->dev;
 	denali->irq = platform_get_irq(pdev, 0);
 	if (denali->irq < 0) {
@@ -74,17 +103,15 @@ static int denali_dt_probe(struct platform_device *pdev)
 		return denali->irq;
 		return denali->irq;
 	}
 	}
 
 
-	denali_reg = platform_get_resource_byname(pdev, IORESOURCE_MEM,
-						  "denali_reg");
-	denali->flash_reg = devm_ioremap_resource(&pdev->dev, denali_reg);
-	if (IS_ERR(denali->flash_reg))
-		return PTR_ERR(denali->flash_reg);
+	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "denali_reg");
+	denali->reg = devm_ioremap_resource(&pdev->dev, res);
+	if (IS_ERR(denali->reg))
+		return PTR_ERR(denali->reg);
 
 
-	nand_data = platform_get_resource_byname(pdev, IORESOURCE_MEM,
-						 "nand_data");
-	denali->flash_mem = devm_ioremap_resource(&pdev->dev, nand_data);
-	if (IS_ERR(denali->flash_mem))
-		return PTR_ERR(denali->flash_mem);
+	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "nand_data");
+	denali->host = devm_ioremap_resource(&pdev->dev, res);
+	if (IS_ERR(denali->host))
+		return PTR_ERR(denali->host);
 
 
 	dt->clk = devm_clk_get(&pdev->dev, NULL);
 	dt->clk = devm_clk_get(&pdev->dev, NULL);
 	if (IS_ERR(dt->clk)) {
 	if (IS_ERR(dt->clk)) {
@@ -93,6 +120,8 @@ static int denali_dt_probe(struct platform_device *pdev)
 	}
 	}
 	clk_prepare_enable(dt->clk);
 	clk_prepare_enable(dt->clk);
 
 
+	denali->clk_x_rate = clk_get_rate(dt->clk);
+
 	ret = denali_init(denali);
 	if (ret)
 		goto out_disable_clk;

+ 16 - 10
drivers/mtd/nand/denali_pci.c

@@ -19,6 +19,9 @@
 
 
 #define DENALI_NAND_NAME    "denali-nand-pci"
 #define DENALI_NAND_NAME    "denali-nand-pci"
 
 
+#define INTEL_CE4100	1
+#define INTEL_MRST	2
+
 /* List of platforms this NAND controller has be integrated into */
 static const struct pci_device_id denali_pci_ids[] = {
 	{ PCI_VDEVICE(INTEL, 0x0701), INTEL_CE4100 },
@@ -27,6 +30,8 @@ static const struct pci_device_id denali_pci_ids[] = {
 };
 };
 MODULE_DEVICE_TABLE(pci, denali_pci_ids);
 MODULE_DEVICE_TABLE(pci, denali_pci_ids);
 
 
+NAND_ECC_CAPS_SINGLE(denali_pci_ecc_caps, denali_calc_ecc_bytes, 512, 8, 15);
+
 static int denali_pci_probe(struct pci_dev *dev, const struct pci_device_id *id)
 {
 	int ret;
@@ -45,13 +50,11 @@ static int denali_pci_probe(struct pci_dev *dev, const struct pci_device_id *id)
 	}
 	}
 
 
 	if (id->driver_data == INTEL_CE4100) {
 	if (id->driver_data == INTEL_CE4100) {
-		denali->platform = INTEL_CE4100;
 		mem_base = pci_resource_start(dev, 0);
 		mem_len = pci_resource_len(dev, 1);
 		csr_base = pci_resource_start(dev, 1);
 		csr_len = pci_resource_len(dev, 1);
 	} else {
-		denali->platform = INTEL_MRST;
 		csr_base = pci_resource_start(dev, 0);
 		csr_base = pci_resource_start(dev, 0);
 		csr_len = pci_resource_len(dev, 0);
 		csr_len = pci_resource_len(dev, 0);
 		mem_base = pci_resource_start(dev, 1);
 		mem_base = pci_resource_start(dev, 1);
@@ -65,6 +68,9 @@ static int denali_pci_probe(struct pci_dev *dev, const struct pci_device_id *id)
 	pci_set_master(dev);
 	denali->dev = &dev->dev;
 	denali->irq = dev->irq;
+	denali->ecc_caps = &denali_pci_ecc_caps;
+	denali->nand.ecc.options |= NAND_ECC_MAXIMIZE;
+	denali->clk_x_rate = 200000000;		/* 200 MHz */
 
 
 	ret = pci_request_regions(dev, DENALI_NAND_NAME);
 	ret = pci_request_regions(dev, DENALI_NAND_NAME);
 	if (ret) {
 	if (ret) {
@@ -72,14 +78,14 @@ static int denali_pci_probe(struct pci_dev *dev, const struct pci_device_id *id)
 		return ret;
 		return ret;
 	}
 	}
 
 
-	denali->flash_reg = ioremap_nocache(csr_base, csr_len);
-	if (!denali->flash_reg) {
+	denali->reg = ioremap_nocache(csr_base, csr_len);
+	if (!denali->reg) {
 		dev_err(&dev->dev, "Spectra: Unable to remap memory region\n");
 		dev_err(&dev->dev, "Spectra: Unable to remap memory region\n");
 		return -ENOMEM;
 		return -ENOMEM;
 	}
 	}
 
 
-	denali->flash_mem = ioremap_nocache(mem_base, mem_len);
-	if (!denali->flash_mem) {
+	denali->host = ioremap_nocache(mem_base, mem_len);
+	if (!denali->host) {
 		dev_err(&dev->dev, "Spectra: ioremap_nocache failed!");
 		dev_err(&dev->dev, "Spectra: ioremap_nocache failed!");
 		ret = -ENOMEM;
 		ret = -ENOMEM;
 		goto failed_remap_reg;
 		goto failed_remap_reg;
@@ -94,9 +100,9 @@ static int denali_pci_probe(struct pci_dev *dev, const struct pci_device_id *id)
 	return 0;
 	return 0;
 
 
 failed_remap_mem:
 failed_remap_mem:
-	iounmap(denali->flash_mem);
+	iounmap(denali->host);
 failed_remap_reg:
 failed_remap_reg:
-	iounmap(denali->flash_reg);
+	iounmap(denali->reg);
 	return ret;
 	return ret;
 }
 }
 
 
@@ -106,8 +112,8 @@ static void denali_pci_remove(struct pci_dev *dev)
 	struct denali_nand_info *denali = pci_get_drvdata(dev);
 
 	denali_remove(denali);
-	iounmap(denali->flash_reg);
-	iounmap(denali->flash_mem);
+	iounmap(denali->reg);
+	iounmap(denali->host);
 }
 }
 
 
 static struct pci_driver denali_pci_driver = {
 static struct pci_driver denali_pci_driver = {

+ 2 - 0
drivers/mtd/nand/docg4.c

@@ -1260,6 +1260,8 @@ static void __init init_mtd_structs(struct mtd_info *mtd)
 	nand->read_buf = docg4_read_buf;
 	nand->write_buf = docg4_write_buf16;
 	nand->erase = docg4_erase_block;
+	nand->onfi_set_features = nand_onfi_get_set_features_notsupp;
+	nand->onfi_get_features = nand_onfi_get_set_features_notsupp;
 	nand->ecc.read_page = docg4_read_page;
 	nand->ecc.write_page = docg4_write_page;
 	nand->ecc.read_page_raw = docg4_read_page_raw;

+ 2 - 0
drivers/mtd/nand/fsl_elbc_nand.c

@@ -775,6 +775,8 @@ static int fsl_elbc_chip_init(struct fsl_elbc_mtd *priv)
 	chip->select_chip = fsl_elbc_select_chip;
 	chip->cmdfunc = fsl_elbc_cmdfunc;
 	chip->waitfunc = fsl_elbc_wait;
+	chip->onfi_set_features = nand_onfi_get_set_features_notsupp;
+	chip->onfi_get_features = nand_onfi_get_set_features_notsupp;
 
 
 	chip->bbt_td = &bbt_main_descr;
 	chip->bbt_md = &bbt_mirror_descr;

+ 45 - 36
drivers/mtd/nand/fsl_ifc_nand.c

@@ -171,34 +171,6 @@ static void set_addr(struct mtd_info *mtd, int column, int page_addr, int oob)
 		ifc_nand_ctrl->index += mtd->writesize;
 		ifc_nand_ctrl->index += mtd->writesize;
 }
 }
 
 
-static int is_blank(struct mtd_info *mtd, unsigned int bufnum)
-{
-	struct nand_chip *chip = mtd_to_nand(mtd);
-	struct fsl_ifc_mtd *priv = nand_get_controller_data(chip);
-	u8 __iomem *addr = priv->vbase + bufnum * (mtd->writesize * 2);
-	u32 __iomem *mainarea = (u32 __iomem *)addr;
-	u8 __iomem *oob = addr + mtd->writesize;
-	struct mtd_oob_region oobregion = { };
-	int i, section = 0;
-
-	for (i = 0; i < mtd->writesize / 4; i++) {
-		if (__raw_readl(&mainarea[i]) != 0xffffffff)
-			return 0;
-	}
-
-	mtd_ooblayout_ecc(mtd, section++, &oobregion);
-	while (oobregion.length) {
-		for (i = 0; i < oobregion.length; i++) {
-			if (__raw_readb(&oob[oobregion.offset + i]) != 0xff)
-				return 0;
-		}
-
-		mtd_ooblayout_ecc(mtd, section++, &oobregion);
-	}
-
-	return 1;
-}
-
 /* returns nonzero if entire page is blank */
 static int check_read_ecc(struct mtd_info *mtd, struct fsl_ifc_ctrl *ctrl,
 			  u32 *eccstat, unsigned int bufnum)
@@ -274,16 +246,14 @@ static void fsl_ifc_run_command(struct mtd_info *mtd)
 			if (errors == 15) {
 				/*
 				 * Uncorrectable error.
-				 * OK only if the whole page is blank.
+				 * We'll check for blank pages later.
 				 *
 				 * We disable ECCER reporting due to...
 				 * erratum IFC-A002770 -- so report it now if we
 				 * see an uncorrectable error in ECCSTAT.
 				 */
-				if (!is_blank(mtd, bufnum))
-					ctrl->nand_stat |=
-						IFC_NAND_EVTER_STAT_ECCER;
-				break;
+				ctrl->nand_stat |= IFC_NAND_EVTER_STAT_ECCER;
+				continue;
 			}
 			}
 
 
 			mtd->ecc_stats.corrected += errors;
 			mtd->ecc_stats.corrected += errors;
@@ -678,6 +648,39 @@ static int fsl_ifc_wait(struct mtd_info *mtd, struct nand_chip *chip)
 	return nand_fsr | NAND_STATUS_WP;
 }
 
+/*
+ * The controller does not check for bitflips in erased pages,
+ * therefore software must check instead.
+ */
+static int check_erased_page(struct nand_chip *chip, u8 *buf)
+{
+	struct mtd_info *mtd = nand_to_mtd(chip);
+	u8 *ecc = chip->oob_poi;
+	const int ecc_size = chip->ecc.bytes;
+	const int pkt_size = chip->ecc.size;
+	int i, res, bitflips = 0;
+	struct mtd_oob_region oobregion = { };
+
+	mtd_ooblayout_ecc(mtd, 0, &oobregion);
+	ecc += oobregion.offset;
+
+	for (i = 0; i < chip->ecc.steps; ++i) {
+		res = nand_check_erased_ecc_chunk(buf, pkt_size, ecc, ecc_size,
+						  NULL, 0,
+						  chip->ecc.strength);
+		if (res < 0)
+			mtd->ecc_stats.failed++;
+		else
+			mtd->ecc_stats.corrected += res;
+
+		bitflips = max(res, bitflips);
+		buf += pkt_size;
+		ecc += ecc_size;
+	}
+
+	return bitflips;
+}
+
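check_erased_page() above delegates the per-chunk decision to the NAND core's nand_check_erased_ecc_chunk(); as a reading aid, its contract (see nand_base.c) is roughly:

	/*
	 * res = nand_check_erased_ecc_chunk(data, datalen, ecc, ecclen,
	 *				     extraoob, extraooblen, threshold);
	 * res >= 0: the chunk is an (almost) erased chunk with 'res' bitflips,
	 *           and the data/ECC buffers are rewritten to all 0xff;
	 * res < 0:  more bitflips than 'threshold'; treat it as an ECC failure,
	 *           which is why the loop above bumps ecc_stats.failed.
	 */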
 static int fsl_ifc_read_page(struct mtd_info *mtd, struct nand_chip *chip,
 			     uint8_t *buf, int oob_required, int page)
 {
@@ -689,8 +692,12 @@ static int fsl_ifc_read_page(struct mtd_info *mtd, struct nand_chip *chip,
 	if (oob_required)
 		fsl_ifc_read_buf(mtd, chip->oob_poi, mtd->oobsize);
 
-	if (ctrl->nand_stat & IFC_NAND_EVTER_STAT_ECCER)
-		dev_err(priv->dev, "NAND Flash ECC Uncorrectable Error\n");
+	if (ctrl->nand_stat & IFC_NAND_EVTER_STAT_ECCER) {
+		if (!oob_required)
+			fsl_ifc_read_buf(mtd, chip->oob_poi, mtd->oobsize);
+
+		return check_erased_page(chip, buf);
+	}
 
 
 	if (ctrl->nand_stat != IFC_NAND_EVTER_STAT_OPC)
 	if (ctrl->nand_stat != IFC_NAND_EVTER_STAT_OPC)
 		mtd->ecc_stats.failed++;
 		mtd->ecc_stats.failed++;
@@ -831,6 +838,8 @@ static int fsl_ifc_chip_init(struct fsl_ifc_mtd *priv)
 	chip->select_chip = fsl_ifc_select_chip;
 	chip->cmdfunc = fsl_ifc_cmdfunc;
 	chip->waitfunc = fsl_ifc_wait;
+	chip->onfi_set_features = nand_onfi_get_set_features_notsupp;
+	chip->onfi_get_features = nand_onfi_get_set_features_notsupp;
 
 
 	chip->bbt_td = &bbt_main_descr;
 	chip->bbt_td = &bbt_main_descr;
 	chip->bbt_md = &bbt_mirror_descr;
 	chip->bbt_md = &bbt_mirror_descr;
@@ -904,7 +913,7 @@ static int fsl_ifc_chip_init(struct fsl_ifc_mtd *priv)
 		chip->ecc.algo = NAND_ECC_HAMMING;
 		chip->ecc.algo = NAND_ECC_HAMMING;
 	}
 	}
 
 
-	if (ctrl->version == FSL_IFC_VERSION_1_1_0)
+	if (ctrl->version >= FSL_IFC_VERSION_1_1_0)
 		fsl_ifc_sram_init(priv);
 
 	return 0;

+ 96 - 26
drivers/mtd/nand/fsmc_nand.c

@@ -302,25 +302,13 @@ static void fsmc_cmd_ctrl(struct mtd_info *mtd, int cmd, unsigned int ctrl)
  * This routine initializes timing parameters related to NAND memory access in
  * FSMC registers
  */
-static void fsmc_nand_setup(void __iomem *regs, uint32_t bank,
-			   uint32_t busw, struct fsmc_nand_timings *timings)
+static void fsmc_nand_setup(struct fsmc_nand_data *host,
+			    struct fsmc_nand_timings *tims)
 {
 	uint32_t value = FSMC_DEVTYPE_NAND | FSMC_ENABLE | FSMC_WAITON;
 	uint32_t tclr, tar, thiz, thold, twait, tset;
-	struct fsmc_nand_timings *tims;
-	struct fsmc_nand_timings default_timings = {
-		.tclr	= FSMC_TCLR_1,
-		.tar	= FSMC_TAR_1,
-		.thiz	= FSMC_THIZ_1,
-		.thold	= FSMC_THOLD_4,
-		.twait	= FSMC_TWAIT_6,
-		.tset	= FSMC_TSET_0,
-	};
-
-	if (timings)
-		tims = timings;
-	else
-		tims = &default_timings;
+	unsigned int bank = host->bank;
+	void __iomem *regs = host->regs_va;
 
 
 	tclr = (tims->tclr & FSMC_TCLR_MASK) << FSMC_TCLR_SHIFT;
 	tclr = (tims->tclr & FSMC_TCLR_MASK) << FSMC_TCLR_SHIFT;
 	tar = (tims->tar & FSMC_TAR_MASK) << FSMC_TAR_SHIFT;
 	tar = (tims->tar & FSMC_TAR_MASK) << FSMC_TAR_SHIFT;
@@ -329,7 +317,7 @@ static void fsmc_nand_setup(void __iomem *regs, uint32_t bank,
 	twait = (tims->twait & FSMC_TWAIT_MASK) << FSMC_TWAIT_SHIFT;
 	twait = (tims->twait & FSMC_TWAIT_MASK) << FSMC_TWAIT_SHIFT;
 	tset = (tims->tset & FSMC_TSET_MASK) << FSMC_TSET_SHIFT;
 	tset = (tims->tset & FSMC_TSET_MASK) << FSMC_TSET_SHIFT;
 
 
-	if (busw)
+	if (host->nand.options & NAND_BUSWIDTH_16)
 		writel_relaxed(value | FSMC_DEVWID_16,
 				FSMC_NAND_REG(regs, bank, PC));
 	else
@@ -344,6 +332,87 @@ static void fsmc_nand_setup(void __iomem *regs, uint32_t bank,
 			FSMC_NAND_REG(regs, bank, ATTRIB));
 			FSMC_NAND_REG(regs, bank, ATTRIB));
 }
 }
 
 
+static int fsmc_calc_timings(struct fsmc_nand_data *host,
+			     const struct nand_sdr_timings *sdrt,
+			     struct fsmc_nand_timings *tims)
+{
+	unsigned long hclk = clk_get_rate(host->clk);
+	unsigned long hclkn = NSEC_PER_SEC / hclk;
+	uint32_t thiz, thold, twait, tset;
+
+	if (sdrt->tRC_min < 30000)
+		return -EOPNOTSUPP;
+
+	tims->tar = DIV_ROUND_UP(sdrt->tAR_min / 1000, hclkn) - 1;
+	if (tims->tar > FSMC_TAR_MASK)
+		tims->tar = FSMC_TAR_MASK;
+	tims->tclr = DIV_ROUND_UP(sdrt->tCLR_min / 1000, hclkn) - 1;
+	if (tims->tclr > FSMC_TCLR_MASK)
+		tims->tclr = FSMC_TCLR_MASK;
+
+	thiz = sdrt->tCS_min - sdrt->tWP_min;
+	tims->thiz = DIV_ROUND_UP(thiz / 1000, hclkn);
+
+	thold = sdrt->tDH_min;
+	if (thold < sdrt->tCH_min)
+		thold = sdrt->tCH_min;
+	if (thold < sdrt->tCLH_min)
+		thold = sdrt->tCLH_min;
+	if (thold < sdrt->tWH_min)
+		thold = sdrt->tWH_min;
+	if (thold < sdrt->tALH_min)
+		thold = sdrt->tALH_min;
+	if (thold < sdrt->tREH_min)
+		thold = sdrt->tREH_min;
+	tims->thold = DIV_ROUND_UP(thold / 1000, hclkn);
+	if (tims->thold == 0)
+		tims->thold = 1;
+	else if (tims->thold > FSMC_THOLD_MASK)
+		tims->thold = FSMC_THOLD_MASK;
+
+	twait = max(sdrt->tRP_min, sdrt->tWP_min);
+	tims->twait = DIV_ROUND_UP(twait / 1000, hclkn) - 1;
+	if (tims->twait == 0)
+		tims->twait = 1;
+	else if (tims->twait > FSMC_TWAIT_MASK)
+		tims->twait = FSMC_TWAIT_MASK;
+
+	tset = max(sdrt->tCS_min - sdrt->tWP_min,
+		   sdrt->tCEA_max - sdrt->tREA_max);
+	tims->tset = DIV_ROUND_UP(tset / 1000, hclkn) - 1;
+	if (tims->tset == 0)
+		tims->tset = 1;
+	else if (tims->tset > FSMC_TSET_MASK)
+		tims->tset = FSMC_TSET_MASK;
+
+	return 0;
+}
+
+static int fsmc_setup_data_interface(struct mtd_info *mtd, int csline,
+				     const struct nand_data_interface *conf)
+{
+	struct nand_chip *nand = mtd_to_nand(mtd);
+	struct fsmc_nand_data *host = nand_get_controller_data(nand);
+	struct fsmc_nand_timings tims;
+	const struct nand_sdr_timings *sdrt;
+	int ret;
+
+	sdrt = nand_get_sdr_timings(conf);
+	if (IS_ERR(sdrt))
+		return PTR_ERR(sdrt);
+
+	ret = fsmc_calc_timings(host, sdrt, &tims);
+	if (ret)
+		return ret;
+
+	if (csline == NAND_DATA_IFACE_CHECK_ONLY)
+		return 0;
+
+	fsmc_nand_setup(host, &tims);
+
+	return 0;
+}
+
 /*
  * fsmc_enable_hwecc - Enables Hardware ECC through FSMC registers
  */
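
To make the conversions in fsmc_calc_timings() above concrete, a worked example with illustrative numbers (not taken from any real board configuration):

	/*
	 * Assume hclk = 166 MHz, so hclkn = NSEC_PER_SEC / hclk = 6 ns
	 * (integer division), and SDR timings with tRP_min = tWP_min = 25000 ps:
	 *
	 *   twait       = max(25000 ps, 25000 ps) = 25 ns
	 *   tims->twait = DIV_ROUND_UP(25, 6) - 1 = 5 - 1 = 4 HCLK cycles
	 *
	 * A result of 0 is bumped to 1, and anything above FSMC_TWAIT_MASK is
	 * clamped to the mask, exactly as the code above does for every field.
	 */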
@@ -796,10 +865,8 @@ static int fsmc_nand_probe_config_dt(struct platform_device *pdev,
 		return -ENOMEM;
 	ret = of_property_read_u8_array(np, "timings", (u8 *)host->dev_timings,
 						sizeof(*host->dev_timings));
-	if (ret) {
-		dev_info(&pdev->dev, "No timings in dts specified, using default timings!\n");
+	if (ret)
 		host->dev_timings = NULL;
-	}
 
 	/* Set default NAND bank to 0 */
 	host->bank = 0;
@@ -933,9 +1000,10 @@ static int __init fsmc_nand_probe(struct platform_device *pdev)
 		break;
 		break;
 	}
 	}
 
 
-	fsmc_nand_setup(host->regs_va, host->bank,
-			nand->options & NAND_BUSWIDTH_16,
-			host->dev_timings);
+	if (host->dev_timings)
+		fsmc_nand_setup(host, host->dev_timings);
+	else
+		nand->setup_data_interface = fsmc_setup_data_interface;
 
 
 	if (AMBA_REV_BITS(host->pid) >= 8) {
 		nand->ecc.read_page = fsmc_read_page_hwecc;
@@ -986,6 +1054,9 @@ static int __init fsmc_nand_probe(struct platform_device *pdev)
 				break;
 				break;
 			}
 			}
 
 
+		case NAND_ECC_ON_DIE:
+			break;
+
 		default:
 			dev_err(&pdev->dev, "Unsupported ECC mode!\n");
 			goto err_probe;
@@ -1073,9 +1144,8 @@ static int fsmc_nand_resume(struct device *dev)
 	struct fsmc_nand_data *host = dev_get_drvdata(dev);
 	if (host) {
 		clk_prepare_enable(host->clk);
-		fsmc_nand_setup(host->regs_va, host->bank,
-				host->nand.options & NAND_BUSWIDTH_16,
-				host->dev_timings);
+		if (host->dev_timings)
+			fsmc_nand_setup(host, host->dev_timings);
 	}
 	}
 	return 0;
 	return 0;
 }
 }

+ 3 - 3
drivers/mtd/nand/gpmi-nand/gpmi-lib.c

@@ -26,7 +26,7 @@
 #include "gpmi-regs.h"
 #include "gpmi-regs.h"
 #include "bch-regs.h"
 #include "bch-regs.h"
 
 
-static struct timing_threshod timing_default_threshold = {
+static struct timing_threshold timing_default_threshold = {
 	.max_data_setup_cycles       = (BM_GPMI_TIMING0_DATA_SETUP >>
 	.max_data_setup_cycles       = (BM_GPMI_TIMING0_DATA_SETUP >>
 						BP_GPMI_TIMING0_DATA_SETUP),
 						BP_GPMI_TIMING0_DATA_SETUP),
 	.internal_data_setup_in_ns   = 0,
 	.internal_data_setup_in_ns   = 0,
@@ -329,7 +329,7 @@ static unsigned int ns_to_cycles(unsigned int time,
 static int gpmi_nfc_compute_hardware_timing(struct gpmi_nand_data *this,
 					struct gpmi_nfc_hardware_timing *hw)
 {
-	struct timing_threshod *nfc = &timing_default_threshold;
+	struct timing_threshold *nfc = &timing_default_threshold;
 	struct resources *r = &this->resources;
 	struct resources *r = &this->resources;
 	struct nand_chip *nand = &this->nand;
 	struct nand_chip *nand = &this->nand;
 	struct nand_timing target = this->timing;
 	struct nand_timing target = this->timing;
@@ -932,7 +932,7 @@ static int enable_edo_mode(struct gpmi_nand_data *this, int mode)
 
 
 	nand->select_chip(mtd, 0);
 	nand->select_chip(mtd, 0);
 
 
-	/* [1] send SET FEATURE commond to NAND */
+	/* [1] send SET FEATURE command to NAND */
 	feature[0] = mode;
 	ret = nand->onfi_set_features(mtd, nand,
 				ONFI_FEATURE_ADDR_TIMING_MODE, feature);

+ 41 - 34
drivers/mtd/nand/gpmi-nand/gpmi-nand.c

@@ -82,6 +82,10 @@ static int gpmi_ooblayout_free(struct mtd_info *mtd, int section,
 	return 0;
 	return 0;
 }
 }
 
 
+static const char * const gpmi_clks_for_mx2x[] = {
+	"gpmi_io",
+};
+
 static const struct mtd_ooblayout_ops gpmi_ooblayout_ops = {
 	.ecc = gpmi_ooblayout_ecc,
 	.free = gpmi_ooblayout_free,
@@ -91,24 +95,48 @@ static const struct gpmi_devdata gpmi_devdata_imx23 = {
 	.type = IS_MX23,
 	.type = IS_MX23,
 	.bch_max_ecc_strength = 20,
 	.bch_max_ecc_strength = 20,
 	.max_chain_delay = 16,
 	.max_chain_delay = 16,
+	.clks = gpmi_clks_for_mx2x,
+	.clks_count = ARRAY_SIZE(gpmi_clks_for_mx2x),
 };
 };
 
 
 static const struct gpmi_devdata gpmi_devdata_imx28 = {
 static const struct gpmi_devdata gpmi_devdata_imx28 = {
 	.type = IS_MX28,
 	.type = IS_MX28,
 	.bch_max_ecc_strength = 20,
 	.bch_max_ecc_strength = 20,
 	.max_chain_delay = 16,
 	.max_chain_delay = 16,
+	.clks = gpmi_clks_for_mx2x,
+	.clks_count = ARRAY_SIZE(gpmi_clks_for_mx2x),
+};
+
+static const char * const gpmi_clks_for_mx6[] = {
+	"gpmi_io", "gpmi_apb", "gpmi_bch", "gpmi_bch_apb", "per1_bch",
 };
 };
 
 
 static const struct gpmi_devdata gpmi_devdata_imx6q = {
 static const struct gpmi_devdata gpmi_devdata_imx6q = {
 	.type = IS_MX6Q,
 	.type = IS_MX6Q,
 	.bch_max_ecc_strength = 40,
 	.bch_max_ecc_strength = 40,
 	.max_chain_delay = 12,
 	.max_chain_delay = 12,
+	.clks = gpmi_clks_for_mx6,
+	.clks_count = ARRAY_SIZE(gpmi_clks_for_mx6),
 };
 };
 
 
 static const struct gpmi_devdata gpmi_devdata_imx6sx = {
 static const struct gpmi_devdata gpmi_devdata_imx6sx = {
 	.type = IS_MX6SX,
 	.type = IS_MX6SX,
 	.bch_max_ecc_strength = 62,
 	.bch_max_ecc_strength = 62,
 	.max_chain_delay = 12,
 	.max_chain_delay = 12,
+	.clks = gpmi_clks_for_mx6,
+	.clks_count = ARRAY_SIZE(gpmi_clks_for_mx6),
+};
+
+static const char * const gpmi_clks_for_mx7d[] = {
+	"gpmi_io", "gpmi_bch_apb",
+};
+
+static const struct gpmi_devdata gpmi_devdata_imx7d = {
+	.type = IS_MX7D,
+	.bch_max_ecc_strength = 62,
+	.max_chain_delay = 12,
+	.clks = gpmi_clks_for_mx7d,
+	.clks_count = ARRAY_SIZE(gpmi_clks_for_mx7d),
 };
 };
 
 
 static irqreturn_t bch_irq(int irq, void *cookie)
 static irqreturn_t bch_irq(int irq, void *cookie)
@@ -599,35 +627,14 @@ acquire_err:
 	return -EINVAL;
 	return -EINVAL;
 }
 }
 
 
-static char *extra_clks_for_mx6q[GPMI_CLK_MAX] = {
-	"gpmi_apb", "gpmi_bch", "gpmi_bch_apb", "per1_bch",
-};
-
 static int gpmi_get_clks(struct gpmi_nand_data *this)
 {
 	struct resources *r = &this->resources;
-	char **extra_clks = NULL;
 	struct clk *clk;
 	struct clk *clk;
 	int err, i;
 	int err, i;
 
 
-	/* The main clock is stored in the first. */
-	r->clock[0] = devm_clk_get(this->dev, "gpmi_io");
-	if (IS_ERR(r->clock[0])) {
-		err = PTR_ERR(r->clock[0]);
-		goto err_clock;
-	}
-
-	/* Get extra clocks */
-	if (GPMI_IS_MX6(this))
-		extra_clks = extra_clks_for_mx6q;
-	if (!extra_clks)
-		return 0;
-
-	for (i = 1; i < GPMI_CLK_MAX; i++) {
-		if (extra_clks[i - 1] == NULL)
-			break;
-
-		clk = devm_clk_get(this->dev, extra_clks[i - 1]);
+	for (i = 0; i < this->devdata->clks_count; i++) {
+		clk = devm_clk_get(this->dev, this->devdata->clks[i]);
 		if (IS_ERR(clk)) {
 		if (IS_ERR(clk)) {
 			err = PTR_ERR(clk);
 			err = PTR_ERR(clk);
 			goto err_clock;
 			goto err_clock;
@@ -1929,12 +1936,6 @@ static int gpmi_set_geometry(struct gpmi_nand_data *this)
 	return gpmi_alloc_dma_buffer(this);
 	return gpmi_alloc_dma_buffer(this);
 }
 }
 
 
-static void gpmi_nand_exit(struct gpmi_nand_data *this)
-{
-	nand_release(nand_to_mtd(&this->nand));
-	gpmi_free_dma_buffer(this);
-}
-
 static int gpmi_init_last(struct gpmi_nand_data *this)
 {
 	struct nand_chip *chip = &this->nand;
@@ -2048,18 +2049,20 @@ static int gpmi_nand_init(struct gpmi_nand_data *this)
 
 
 	ret = nand_boot_init(this);
 	ret = nand_boot_init(this);
 	if (ret)
 	if (ret)
-		goto err_out;
+		goto err_nand_cleanup;
 	ret = chip->scan_bbt(mtd);
 	ret = chip->scan_bbt(mtd);
 	if (ret)
 	if (ret)
-		goto err_out;
+		goto err_nand_cleanup;
 
 
 	ret = mtd_device_register(mtd, NULL, 0);
 	ret = mtd_device_register(mtd, NULL, 0);
 	if (ret)
 	if (ret)
-		goto err_out;
+		goto err_nand_cleanup;
 	return 0;
 	return 0;
 
 
+err_nand_cleanup:
+	nand_cleanup(chip);
 err_out:
 err_out:
-	gpmi_nand_exit(this);
+	gpmi_free_dma_buffer(this);
 	return ret;
 	return ret;
 }
 }
 
 
@@ -2076,6 +2079,9 @@ static const struct of_device_id gpmi_nand_id_table[] = {
 	}, {
 	}, {
 		.compatible = "fsl,imx6sx-gpmi-nand",
 		.compatible = "fsl,imx6sx-gpmi-nand",
 		.data = &gpmi_devdata_imx6sx,
 		.data = &gpmi_devdata_imx6sx,
+	}, {
+		.compatible = "fsl,imx7d-gpmi-nand",
+		.data = &gpmi_devdata_imx7d,
 	}, {}
 	}, {}
 };
 };
 MODULE_DEVICE_TABLE(of, gpmi_nand_id_table);
 MODULE_DEVICE_TABLE(of, gpmi_nand_id_table);
@@ -2129,7 +2135,8 @@ static int gpmi_nand_remove(struct platform_device *pdev)
 {
 {
 	struct gpmi_nand_data *this = platform_get_drvdata(pdev);
 	struct gpmi_nand_data *this = platform_get_drvdata(pdev);
 
 
-	gpmi_nand_exit(this);
+	nand_release(nand_to_mtd(&this->nand));
+	gpmi_free_dma_buffer(this);
 	release_resources(this);
 	return 0;
 }

+ 9 - 4
drivers/mtd/nand/gpmi-nand/gpmi-nand.h

@@ -123,13 +123,16 @@ enum gpmi_type {
 	IS_MX23,
 	IS_MX28,
 	IS_MX6Q,
-	IS_MX6SX
+	IS_MX6SX,
+	IS_MX7D,
 };
 };
 
 
 struct gpmi_devdata {
 struct gpmi_devdata {
 	enum gpmi_type type;
 	enum gpmi_type type;
 	int bch_max_ecc_strength;
 	int bch_max_ecc_strength;
 	int max_chain_delay; /* See the async EDO mode */
 	int max_chain_delay; /* See the async EDO mode */
+	const char * const *clks;
+	const int clks_count;
 };
 };
 
 
 struct gpmi_nand_data {
 struct gpmi_nand_data {
@@ -231,7 +234,7 @@ struct gpmi_nfc_hardware_timing {
 };
 };
 
 
 /**
 /**
- * struct timing_threshod - Timing threshold
+ * struct timing_threshold - Timing threshold
  * @max_data_setup_cycles:       The maximum number of data setup cycles that
  * @max_data_setup_cycles:       The maximum number of data setup cycles that
  *                               can be expressed in the hardware.
  *                               can be expressed in the hardware.
  * @internal_data_setup_in_ns:   The time, in ns, that the NFC hardware requires
  * @internal_data_setup_in_ns:   The time, in ns, that the NFC hardware requires
@@ -253,7 +256,7 @@ struct gpmi_nfc_hardware_timing {
  *                               progress, this is the clock frequency during
  *                               progress, this is the clock frequency during
  *                               the most recent I/O transaction.
  *                               the most recent I/O transaction.
  */
  */
-struct timing_threshod {
+struct timing_threshold {
 	const unsigned int      max_chip_count;
 	const unsigned int      max_chip_count;
 	const unsigned int      max_data_setup_cycles;
 	const unsigned int      max_data_setup_cycles;
 	const unsigned int      internal_data_setup_in_ns;
 	const unsigned int      internal_data_setup_in_ns;
@@ -305,6 +308,8 @@ void gpmi_copy_bits(u8 *dst, size_t dst_bit_off,
 #define GPMI_IS_MX28(x)		((x)->devdata->type == IS_MX28)
 #define GPMI_IS_MX6Q(x)		((x)->devdata->type == IS_MX6Q)
 #define GPMI_IS_MX6SX(x)	((x)->devdata->type == IS_MX6SX)
+#define GPMI_IS_MX7D(x)		((x)->devdata->type == IS_MX7D)
 
 
-#define GPMI_IS_MX6(x)		(GPMI_IS_MX6Q(x) || GPMI_IS_MX6SX(x))
+#define GPMI_IS_MX6(x)		(GPMI_IS_MX6Q(x) || GPMI_IS_MX6SX(x) || \
+				 GPMI_IS_MX7D(x))
 #endif
 #endif

+ 2 - 0
drivers/mtd/nand/hisi504_nand.c

@@ -764,6 +764,8 @@ static int hisi_nfc_probe(struct platform_device *pdev)
 	chip->write_buf		= hisi_nfc_write_buf;
 	chip->read_buf		= hisi_nfc_read_buf;
 	chip->chip_delay	= HINFC504_CHIP_DELAY;
+	chip->onfi_set_features	= nand_onfi_get_set_features_notsupp;
+	chip->onfi_get_features	= nand_onfi_get_set_features_notsupp;
 
 
 	hisi_nfc_host_init(host);
 	hisi_nfc_host_init(host);
 
 

+ 1 - 1
drivers/mtd/nand/jz4780_nand.c

@@ -205,7 +205,7 @@ static int jz4780_nand_init_ecc(struct jz4780_nand_chip *nand, struct device *de
 		return -EINVAL;
 	}
 
-	mtd->ooblayout = &nand_ooblayout_lp_ops;
+	mtd_set_ooblayout(mtd, &nand_ooblayout_lp_ops);
 
 
 	return 0;
 	return 0;
 }
 }

+ 2 - 0
drivers/mtd/nand/mpc5121_nfc.c

@@ -708,6 +708,8 @@ static int mpc5121_nfc_probe(struct platform_device *op)
 	chip->read_buf = mpc5121_nfc_read_buf;
 	chip->write_buf = mpc5121_nfc_write_buf;
 	chip->select_chip = mpc5121_nfc_select_chip;
+	chip->onfi_set_features	= nand_onfi_get_set_features_notsupp;
+	chip->onfi_get_features	= nand_onfi_get_set_features_notsupp;
 	chip->bbt_options = NAND_BBT_USE_FLASH;
 	chip->bbt_options = NAND_BBT_USE_FLASH;
 	chip->ecc.mode = NAND_ECC_SOFT;
 	chip->ecc.mode = NAND_ECC_SOFT;
 	chip->ecc.algo = NAND_ECC_HAMMING;
 	chip->ecc.algo = NAND_ECC_HAMMING;

+ 115 - 113
drivers/mtd/nand/mtk_ecc.c

@@ -28,36 +28,16 @@
 
 
 #define ECC_IDLE_MASK		BIT(0)
 #define ECC_IDLE_MASK		BIT(0)
 #define ECC_IRQ_EN		BIT(0)
 #define ECC_IRQ_EN		BIT(0)
+#define ECC_PG_IRQ_SEL		BIT(1)
 #define ECC_OP_ENABLE		(1)
 #define ECC_OP_ENABLE		(1)
 #define ECC_OP_DISABLE		(0)
 #define ECC_OP_DISABLE		(0)
 
 
 #define ECC_ENCCON		(0x00)
 #define ECC_ENCCON		(0x00)
 #define ECC_ENCCNFG		(0x04)
 #define ECC_ENCCNFG		(0x04)
-#define		ECC_CNFG_4BIT		(0)
-#define		ECC_CNFG_6BIT		(1)
-#define		ECC_CNFG_8BIT		(2)
-#define		ECC_CNFG_10BIT		(3)
-#define		ECC_CNFG_12BIT		(4)
-#define		ECC_CNFG_14BIT		(5)
-#define		ECC_CNFG_16BIT		(6)
-#define		ECC_CNFG_18BIT		(7)
-#define		ECC_CNFG_20BIT		(8)
-#define		ECC_CNFG_22BIT		(9)
-#define		ECC_CNFG_24BIT		(0xa)
-#define		ECC_CNFG_28BIT		(0xb)
-#define		ECC_CNFG_32BIT		(0xc)
-#define		ECC_CNFG_36BIT		(0xd)
-#define		ECC_CNFG_40BIT		(0xe)
-#define		ECC_CNFG_44BIT		(0xf)
-#define		ECC_CNFG_48BIT		(0x10)
-#define		ECC_CNFG_52BIT		(0x11)
-#define		ECC_CNFG_56BIT		(0x12)
-#define		ECC_CNFG_60BIT		(0x13)
 #define		ECC_MODE_SHIFT		(5)
 #define		ECC_MODE_SHIFT		(5)
 #define		ECC_MS_SHIFT		(16)
 #define		ECC_MS_SHIFT		(16)
 #define ECC_ENCDIADDR		(0x08)
 #define ECC_ENCDIADDR		(0x08)
 #define ECC_ENCIDLE		(0x0C)
 #define ECC_ENCIDLE		(0x0C)
-#define ECC_ENCPAR(x)		(0x10 + (x) * sizeof(u32))
 #define ECC_ENCIRQ_EN		(0x80)
 #define ECC_ENCIRQ_EN		(0x80)
 #define ECC_ENCIRQ_STA		(0x84)
 #define ECC_ENCIRQ_STA		(0x84)
 #define ECC_DECCON		(0x100)
 #define ECC_DECCON		(0x100)
@@ -66,7 +46,6 @@
 #define		DEC_CNFG_CORRECT	(0x3 << 12)
 #define		DEC_CNFG_CORRECT	(0x3 << 12)
 #define ECC_DECIDLE		(0x10C)
 #define ECC_DECIDLE		(0x10C)
 #define ECC_DECENUM0		(0x114)
 #define ECC_DECENUM0		(0x114)
-#define		ERR_MASK		(0x3f)
 #define ECC_DECDONE		(0x124)
 #define ECC_DECDONE		(0x124)
 #define ECC_DECIRQ_EN		(0x200)
 #define ECC_DECIRQ_EN		(0x200)
 #define ECC_DECIRQ_STA		(0x204)
 #define ECC_DECIRQ_STA		(0x204)
@@ -78,8 +57,17 @@
 #define ECC_IRQ_REG(op)		((op) == ECC_ENCODE ? \
 					ECC_ENCIRQ_EN : ECC_DECIRQ_EN)
 
+struct mtk_ecc_caps {
+	u32 err_mask;
+	const u8 *ecc_strength;
+	u8 num_ecc_strength;
+	u32 encode_parity_reg0;
+	int pg_irq_sel;
+};
+
 struct mtk_ecc {
 	struct device *dev;
+	const struct mtk_ecc_caps *caps;
 	void __iomem *regs;
 	struct clk *clk;
 
@@ -87,7 +75,18 @@ struct mtk_ecc {
 	struct mutex lock;
 	struct mutex lock;
 	u32 sectors;
 	u32 sectors;
 
 
-	u8 eccdata[112];
+	u8 *eccdata;
+};
+
+/* ecc strength that each IP supports */
+static const u8 ecc_strength_mt2701[] = {
+	4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 28, 32, 36,
+	40, 44, 48, 52, 56, 60
+};
+
+static const u8 ecc_strength_mt2712[] = {
+	4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 28, 32, 36,
+	40, 44, 48, 52, 56, 60, 68, 72, 80
 };
 };
 
 
 static inline void mtk_ecc_wait_idle(struct mtk_ecc *ecc,
 static inline void mtk_ecc_wait_idle(struct mtk_ecc *ecc,
@@ -136,77 +135,24 @@ static irqreturn_t mtk_ecc_irq(int irq, void *id)
 	return IRQ_HANDLED;
 	return IRQ_HANDLED;
 }
 }
 
 
-static void mtk_ecc_config(struct mtk_ecc *ecc, struct mtk_ecc_config *config)
+static int mtk_ecc_config(struct mtk_ecc *ecc, struct mtk_ecc_config *config)
 {
 {
-	u32 ecc_bit = ECC_CNFG_4BIT, dec_sz, enc_sz;
-	u32 reg;
-
-	switch (config->strength) {
-	case 4:
-		ecc_bit = ECC_CNFG_4BIT;
-		break;
-	case 6:
-		ecc_bit = ECC_CNFG_6BIT;
-		break;
-	case 8:
-		ecc_bit = ECC_CNFG_8BIT;
-		break;
-	case 10:
-		ecc_bit = ECC_CNFG_10BIT;
-		break;
-	case 12:
-		ecc_bit = ECC_CNFG_12BIT;
-		break;
-	case 14:
-		ecc_bit = ECC_CNFG_14BIT;
-		break;
-	case 16:
-		ecc_bit = ECC_CNFG_16BIT;
-		break;
-	case 18:
-		ecc_bit = ECC_CNFG_18BIT;
-		break;
-	case 20:
-		ecc_bit = ECC_CNFG_20BIT;
-		break;
-	case 22:
-		ecc_bit = ECC_CNFG_22BIT;
-		break;
-	case 24:
-		ecc_bit = ECC_CNFG_24BIT;
-		break;
-	case 28:
-		ecc_bit = ECC_CNFG_28BIT;
-		break;
-	case 32:
-		ecc_bit = ECC_CNFG_32BIT;
-		break;
-	case 36:
-		ecc_bit = ECC_CNFG_36BIT;
-		break;
-	case 40:
-		ecc_bit = ECC_CNFG_40BIT;
-		break;
-	case 44:
-		ecc_bit = ECC_CNFG_44BIT;
-		break;
-	case 48:
-		ecc_bit = ECC_CNFG_48BIT;
-		break;
-	case 52:
-		ecc_bit = ECC_CNFG_52BIT;
-		break;
-	case 56:
-		ecc_bit = ECC_CNFG_56BIT;
-		break;
-	case 60:
-		ecc_bit = ECC_CNFG_60BIT;
-		break;
-	default:
-		dev_err(ecc->dev, "invalid strength %d, default to 4 bits\n",
+	u32 ecc_bit, dec_sz, enc_sz;
+	u32 reg, i;
+
+	for (i = 0; i < ecc->caps->num_ecc_strength; i++) {
+		if (ecc->caps->ecc_strength[i] == config->strength)
+			break;
+	}
+
+	if (i == ecc->caps->num_ecc_strength) {
+		dev_err(ecc->dev, "invalid ecc strength %d\n",
 			config->strength);
 			config->strength);
+		return -EINVAL;
 	}
 	}
 
 
+	ecc_bit = i;
+
 	if (config->op == ECC_ENCODE) {
 		/* configure ECC encoder (in bits) */
 		enc_sz = config->len << 3;
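
The lookup above works because the per-SoC ecc_strength tables are ordered exactly like the removed ECC_CNFG_* encoding, so the array index doubles as the register value: for example, a requested strength of 12 sits at index 4 in ecc_strength_mt2701[], the same value the old ECC_CNFG_12BIT constant (0x4) programmed into the ECC_ENCCNFG/ECC_DECCNFG registers.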
@@ -232,6 +178,8 @@ static void mtk_ecc_config(struct mtk_ecc *ecc, struct mtk_ecc_config *config)
 		if (config->sectors)
 			ecc->sectors = 1 << (config->sectors - 1);
 	}
+
+	return 0;
 }

 void mtk_ecc_get_stats(struct mtk_ecc *ecc, struct mtk_ecc_stats *stats,
@@ -247,8 +195,8 @@ void mtk_ecc_get_stats(struct mtk_ecc *ecc, struct mtk_ecc_stats *stats,
 		offset = (i >> 2) << 2;
 		err = readl(ecc->regs + ECC_DECENUM0 + offset);
 		err = err >> ((i % 4) * 8);
-		err &= ERR_MASK;
-		if (err == ERR_MASK) {
+		err &= ecc->caps->err_mask;
+		if (err == ecc->caps->err_mask) {
 			/* uncorrectable errors */
 			stats->failed++;
 			continue;
@@ -313,6 +261,7 @@ EXPORT_SYMBOL(of_mtk_ecc_get);
 int mtk_ecc_enable(struct mtk_ecc *ecc, struct mtk_ecc_config *config)
 {
 	enum mtk_ecc_operation op = config->op;
+	u16 reg_val;
 	int ret;

 	ret = mutex_lock_interruptible(&ecc->lock);
@@ -322,11 +271,27 @@ int mtk_ecc_enable(struct mtk_ecc *ecc, struct mtk_ecc_config *config)
 	}

 	mtk_ecc_wait_idle(ecc, op);
-	mtk_ecc_config(ecc, config);
-	writew(ECC_OP_ENABLE, ecc->regs + ECC_CTL_REG(op));
 
-	init_completion(&ecc->done);
-	writew(ECC_IRQ_EN, ecc->regs + ECC_IRQ_REG(op));
+	ret = mtk_ecc_config(ecc, config);
+	if (ret) {
+		mutex_unlock(&ecc->lock);
+		return ret;
+	}
+
+	if (config->mode != ECC_NFI_MODE || op != ECC_ENCODE) {
+		init_completion(&ecc->done);
+		reg_val = ECC_IRQ_EN;
+		/*
+		 * For ECC_NFI_MODE, if ecc->caps->pg_irq_sel is 1, the chip
+		 * can only generate one ecc irq during a page read / write.
+		 * If it is 0, one ecc irq is generated for each ecc step.
+		 */
+		if (ecc->caps->pg_irq_sel && config->mode == ECC_NFI_MODE)
+			reg_val |= ECC_PG_IRQ_SEL;
+		writew(reg_val, ecc->regs + ECC_IRQ_REG(op));
+	}
+
+	writew(ECC_OP_ENABLE, ecc->regs + ECC_CTL_REG(op));

 	return 0;
 }
@@ -396,7 +361,9 @@ int mtk_ecc_encode(struct mtk_ecc *ecc, struct mtk_ecc_config *config,
 	len = (config->strength * ECC_PARITY_BITS + 7) >> 3;

 	/* write the parity bytes generated by the ECC back to temp buffer */
-	__ioread32_copy(ecc->eccdata, ecc->regs + ECC_ENCPAR(0), round_up(len, 4));
+	__ioread32_copy(ecc->eccdata,
+			ecc->regs + ecc->caps->encode_parity_reg0,
+			round_up(len, 4));

 	/* copy into possibly unaligned OOB region with actual length */
 	memcpy(data + bytes, ecc->eccdata, len);
@@ -409,37 +376,79 @@ timeout:
 }
 EXPORT_SYMBOL(mtk_ecc_encode);

-void mtk_ecc_adjust_strength(u32 *p)
+void mtk_ecc_adjust_strength(struct mtk_ecc *ecc, u32 *p)
 {
-	u32 ecc[] = {4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 28, 32, 36,
-			40, 44, 48, 52, 56, 60};
+	const u8 *ecc_strength = ecc->caps->ecc_strength;
 	int i;

-	for (i = 0; i < ARRAY_SIZE(ecc); i++) {
-		if (*p <= ecc[i]) {
+	for (i = 0; i < ecc->caps->num_ecc_strength; i++) {
+		if (*p <= ecc_strength[i]) {
 			if (!i)
-				*p = ecc[i];
-			else if (*p != ecc[i])
-				*p = ecc[i - 1];
+				*p = ecc_strength[i];
+			else if (*p != ecc_strength[i])
+				*p = ecc_strength[i - 1];
 			return;
 		}
 	}

-	*p = ecc[ARRAY_SIZE(ecc) - 1];
+	*p = ecc_strength[ecc->caps->num_ecc_strength - 1];
 }
 EXPORT_SYMBOL(mtk_ecc_adjust_strength);

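With the capability tables in place, mtk_ecc_adjust_strength() rounds a requested strength to the closest supported value: a request for 30 bits becomes 28 on either IP, a request below the smallest entry is bumped up to 4, and anything above the last entry is clamped to 60 on mt2701 or 80 on mt2712.
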
+static const struct mtk_ecc_caps mtk_ecc_caps_mt2701 = {
+	.err_mask = 0x3f,
+	.ecc_strength = ecc_strength_mt2701,
+	.num_ecc_strength = 20,
+	.encode_parity_reg0 = 0x10,
+	.pg_irq_sel = 0,
+};
+
+static const struct mtk_ecc_caps mtk_ecc_caps_mt2712 = {
+	.err_mask = 0x7f,
+	.ecc_strength = ecc_strength_mt2712,
+	.num_ecc_strength = 23,
+	.encode_parity_reg0 = 0x300,
+	.pg_irq_sel = 1,
+};
+
+static const struct of_device_id mtk_ecc_dt_match[] = {
+	{
+		.compatible = "mediatek,mt2701-ecc",
+		.data = &mtk_ecc_caps_mt2701,
+	}, {
+		.compatible = "mediatek,mt2712-ecc",
+		.data = &mtk_ecc_caps_mt2712,
+	},
+	{},
+};
+
 static int mtk_ecc_probe(struct platform_device *pdev)
 {
 	struct device *dev = &pdev->dev;
 	struct mtk_ecc *ecc;
 	struct resource *res;
+	const struct of_device_id *of_ecc_id = NULL;
+	u32 max_eccdata_size;
 	int irq, ret;

 	ecc = devm_kzalloc(dev, sizeof(*ecc), GFP_KERNEL);
 	if (!ecc)
 		return -ENOMEM;

+	of_ecc_id = of_match_device(mtk_ecc_dt_match, &pdev->dev);
+	if (!of_ecc_id)
+		return -ENODEV;
+
+	ecc->caps = of_ecc_id->data;
+
+	max_eccdata_size = ecc->caps->num_ecc_strength - 1;
+	max_eccdata_size = ecc->caps->ecc_strength[max_eccdata_size];
+	max_eccdata_size = (max_eccdata_size * ECC_PARITY_BITS + 7) >> 3;
+	max_eccdata_size = round_up(max_eccdata_size, 4);
+	ecc->eccdata = devm_kzalloc(dev, max_eccdata_size, GFP_KERNEL);
+	if (!ecc->eccdata)
+		return -ENOMEM;
+
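The parity buffer is now sized for the strongest setting the IP supports instead of the old fixed 112-byte array. Assuming ECC_PARITY_BITS is 14 parity bits per bit of correction (the constant is defined in mtk_ecc.h and not shown in this diff), the mt2712 worst case works out to:

    80 * 14 = 1120 parity bits -> (1120 + 7) / 8 = 140 bytes
    round_up(140, 4) = 140 bytes allocated for ecc->eccdata
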
 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
 	ecc->regs = devm_ioremap_resource(dev, res);
 	if (IS_ERR(ecc->regs)) {
@@ -500,19 +509,12 @@ static int mtk_ecc_resume(struct device *dev)
 		return ret;
 	}

-	mtk_ecc_hw_init(ecc);
-
 	return 0;
 }

 static SIMPLE_DEV_PM_OPS(mtk_ecc_pm_ops, mtk_ecc_suspend, mtk_ecc_resume);
 #endif

-static const struct of_device_id mtk_ecc_dt_match[] = {
-	{ .compatible = "mediatek,mt2701-ecc" },
-	{},
-};
-
 MODULE_DEVICE_TABLE(of, mtk_ecc_dt_match);

 static struct platform_driver mtk_ecc_driver = {

+ 1 - 1
drivers/mtd/nand/mtk_ecc.h

@@ -42,7 +42,7 @@ void mtk_ecc_get_stats(struct mtk_ecc *, struct mtk_ecc_stats *, int);
 int mtk_ecc_wait_done(struct mtk_ecc *, enum mtk_ecc_operation);
 int mtk_ecc_enable(struct mtk_ecc *, struct mtk_ecc_config *);
 void mtk_ecc_disable(struct mtk_ecc *);
-void mtk_ecc_adjust_strength(u32 *);
+void mtk_ecc_adjust_strength(struct mtk_ecc *ecc, u32 *p);

 struct mtk_ecc *of_mtk_ecc_get(struct device_node *);
 void mtk_ecc_release(struct mtk_ecc *);

+ 168 - 111
drivers/mtd/nand/mtk_nand.c

@@ -24,6 +24,7 @@
 #include <linux/module.h>
 #include <linux/iopoll.h>
 #include <linux/of.h>
+#include <linux/of_device.h>
 #include "mtk_ecc.h"

 /* NAND controller register definition */
@@ -38,23 +39,6 @@
 #define NFI_PAGEFMT		(0x04)
 #define		PAGEFMT_FDM_ECC_SHIFT	(12)
 #define		PAGEFMT_FDM_SHIFT	(8)
-#define		PAGEFMT_SPARE_16	(0)
-#define		PAGEFMT_SPARE_26	(1)
-#define		PAGEFMT_SPARE_27	(2)
-#define		PAGEFMT_SPARE_28	(3)
-#define		PAGEFMT_SPARE_32	(4)
-#define		PAGEFMT_SPARE_36	(5)
-#define		PAGEFMT_SPARE_40	(6)
-#define		PAGEFMT_SPARE_44	(7)
-#define		PAGEFMT_SPARE_48	(8)
-#define		PAGEFMT_SPARE_49	(9)
-#define		PAGEFMT_SPARE_50	(0xa)
-#define		PAGEFMT_SPARE_51	(0xb)
-#define		PAGEFMT_SPARE_52	(0xc)
-#define		PAGEFMT_SPARE_62	(0xd)
-#define		PAGEFMT_SPARE_63	(0xe)
-#define		PAGEFMT_SPARE_64	(0xf)
-#define		PAGEFMT_SPARE_SHIFT	(4)
 #define		PAGEFMT_SEC_SEL_512	BIT(2)
 #define		PAGEFMT_512_2K		(0)
 #define		PAGEFMT_2K_4K		(1)
@@ -115,6 +99,17 @@
 #define MTK_RESET_TIMEOUT	(1000000)
 #define MTK_MAX_SECTOR		(16)
 #define MTK_NAND_MAX_NSELS	(2)
+#define MTK_NFC_MIN_SPARE	(16)
+#define ACCTIMING(tpoecs, tprecs, tc2r, tw2r, twh, twst, trlt) \
+	((tpoecs) << 28 | (tprecs) << 22 | (tc2r) << 16 | \
+	(tw2r) << 12 | (twh) << 8 | (twst) << 4 | (trlt))
+
+struct mtk_nfc_caps {
+	const u8 *spare_size;
+	u8 num_spare_size;
+	u8 pageformat_spare_shift;
+	u8 nfi_clk_div;
+};
 
 
 struct mtk_nfc_bad_mark_ctl {
 struct mtk_nfc_bad_mark_ctl {
 	void (*bm_swap)(struct mtd_info *, u8 *buf, int raw);
 	void (*bm_swap)(struct mtd_info *, u8 *buf, int raw);
@@ -155,6 +150,7 @@ struct mtk_nfc {
 	struct mtk_ecc *ecc;
 	struct mtk_ecc *ecc;
 
 
 	struct device *dev;
 	struct device *dev;
+	const struct mtk_nfc_caps *caps;
 	void __iomem *regs;
 	void __iomem *regs;
 
 
 	struct completion done;
 	struct completion done;
@@ -163,6 +159,20 @@ struct mtk_nfc {
 	u8 *buffer;
 	u8 *buffer;
 };
 };
 
 
+/*
+ * Supported spare sizes for each IP. The order must match the spare size
+ * bit field definition of the NFI_PAGEFMT register.
+ */
+static const u8 spare_size_mt2701[] = {
+	16, 26, 27, 28, 32, 36, 40, 44,	48, 49, 50, 51, 52, 62, 63, 64
+};
+
+static const u8 spare_size_mt2712[] = {
+	16, 26, 27, 28, 32, 36, 40, 44, 48, 49, 50, 51, 52, 62, 61, 63, 64, 67,
+	74
+};
+
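As with the ECC strength tables, it is the position of an entry in these arrays that ends up in hardware: mtk_nfc_hw_runtime_config() below writes the index of the matching spare size into the spare field of NFI_PAGEFMT, shifted by caps->pageformat_spare_shift (4 on mt2701, 16 on mt2712), which is why the entries must follow the register definition order.
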
 static inline struct mtk_nfc_nand_chip *to_mtk_nand(struct nand_chip *nand)
 static inline struct mtk_nfc_nand_chip *to_mtk_nand(struct nand_chip *nand)
 {
 {
 	return container_of(nand, struct mtk_nfc_nand_chip, nand);
 	return container_of(nand, struct mtk_nfc_nand_chip, nand);
@@ -308,7 +318,7 @@ static int mtk_nfc_hw_runtime_config(struct mtd_info *mtd)
 	struct nand_chip *chip = mtd_to_nand(mtd);
 	struct nand_chip *chip = mtd_to_nand(mtd);
 	struct mtk_nfc_nand_chip *mtk_nand = to_mtk_nand(chip);
 	struct mtk_nfc_nand_chip *mtk_nand = to_mtk_nand(chip);
 	struct mtk_nfc *nfc = nand_get_controller_data(chip);
 	struct mtk_nfc *nfc = nand_get_controller_data(chip);
-	u32 fmt, spare;
+	u32 fmt, spare, i;
 
 
 	if (!mtd->writesize)
 	if (!mtd->writesize)
 		return 0;
 		return 0;
@@ -352,63 +362,21 @@ static int mtk_nfc_hw_runtime_config(struct mtd_info *mtd)
 	if (chip->ecc.size == 1024)
 	if (chip->ecc.size == 1024)
 		spare >>= 1;
 		spare >>= 1;
 
 
-	switch (spare) {
-	case 16:
-		fmt |= (PAGEFMT_SPARE_16 << PAGEFMT_SPARE_SHIFT);
-		break;
-	case 26:
-		fmt |= (PAGEFMT_SPARE_26 << PAGEFMT_SPARE_SHIFT);
-		break;
-	case 27:
-		fmt |= (PAGEFMT_SPARE_27 << PAGEFMT_SPARE_SHIFT);
-		break;
-	case 28:
-		fmt |= (PAGEFMT_SPARE_28 << PAGEFMT_SPARE_SHIFT);
-		break;
-	case 32:
-		fmt |= (PAGEFMT_SPARE_32 << PAGEFMT_SPARE_SHIFT);
-		break;
-	case 36:
-		fmt |= (PAGEFMT_SPARE_36 << PAGEFMT_SPARE_SHIFT);
-		break;
-	case 40:
-		fmt |= (PAGEFMT_SPARE_40 << PAGEFMT_SPARE_SHIFT);
-		break;
-	case 44:
-		fmt |= (PAGEFMT_SPARE_44 << PAGEFMT_SPARE_SHIFT);
-		break;
-	case 48:
-		fmt |= (PAGEFMT_SPARE_48 << PAGEFMT_SPARE_SHIFT);
-		break;
-	case 49:
-		fmt |= (PAGEFMT_SPARE_49 << PAGEFMT_SPARE_SHIFT);
-		break;
-	case 50:
-		fmt |= (PAGEFMT_SPARE_50 << PAGEFMT_SPARE_SHIFT);
-		break;
-	case 51:
-		fmt |= (PAGEFMT_SPARE_51 << PAGEFMT_SPARE_SHIFT);
-		break;
-	case 52:
-		fmt |= (PAGEFMT_SPARE_52 << PAGEFMT_SPARE_SHIFT);
-		break;
-	case 62:
-		fmt |= (PAGEFMT_SPARE_62 << PAGEFMT_SPARE_SHIFT);
-		break;
-	case 63:
-		fmt |= (PAGEFMT_SPARE_63 << PAGEFMT_SPARE_SHIFT);
-		break;
-	case 64:
-		fmt |= (PAGEFMT_SPARE_64 << PAGEFMT_SPARE_SHIFT);
-		break;
-	default:
-		dev_err(nfc->dev, "invalid spare per sector %d\n", spare);
+	for (i = 0; i < nfc->caps->num_spare_size; i++) {
+		if (nfc->caps->spare_size[i] == spare)
+			break;
+	}
+
+	if (i == nfc->caps->num_spare_size) {
+		dev_err(nfc->dev, "invalid spare size %d\n", spare);
 		return -EINVAL;
 		return -EINVAL;
 	}
 	}
 
 
+	fmt |= i << nfc->caps->pageformat_spare_shift;
+
 	fmt |= mtk_nand->fdm.reg_size << PAGEFMT_FDM_SHIFT;
 	fmt |= mtk_nand->fdm.reg_size << PAGEFMT_FDM_SHIFT;
 	fmt |= mtk_nand->fdm.ecc_size << PAGEFMT_FDM_ECC_SHIFT;
 	fmt |= mtk_nand->fdm.ecc_size << PAGEFMT_FDM_ECC_SHIFT;
-	nfi_writew(nfc, fmt, NFI_PAGEFMT);
+	nfi_writel(nfc, fmt, NFI_PAGEFMT);
 
 
 	nfc->ecc_cfg.strength = chip->ecc.strength;
 	nfc->ecc_cfg.strength = chip->ecc.strength;
 	nfc->ecc_cfg.len = chip->ecc.size + mtk_nand->fdm.ecc_size;
 	nfc->ecc_cfg.len = chip->ecc.size + mtk_nand->fdm.ecc_size;
@@ -531,6 +499,74 @@ static void mtk_nfc_write_buf(struct mtd_info *mtd, const u8 *buf, int len)
 		mtk_nfc_write_byte(mtd, buf[i]);
 		mtk_nfc_write_byte(mtd, buf[i]);
 }
 }
 
 
+static int mtk_nfc_setup_data_interface(struct mtd_info *mtd, int csline,
+					const struct nand_data_interface *conf)
+{
+	struct mtk_nfc *nfc = nand_get_controller_data(mtd_to_nand(mtd));
+	const struct nand_sdr_timings *timings;
+	u32 rate, tpoecs, tprecs, tc2r, tw2r, twh, twst, trlt;
+
+	timings = nand_get_sdr_timings(conf);
+	if (IS_ERR(timings))
+		return -ENOTSUPP;
+
+	if (csline == NAND_DATA_IFACE_CHECK_ONLY)
+		return 0;
+
+	rate = clk_get_rate(nfc->clk.nfi_clk);
+	/* There is a frequency divider in some IPs */
+	rate /= nfc->caps->nfi_clk_div;
+
+	/* turn the clock rate into kHz */
+	rate /= 1000;
+
+	tpoecs = max(timings->tALH_min, timings->tCLH_min) / 1000;
+	tpoecs = DIV_ROUND_UP(tpoecs * rate, 1000000);
+	tpoecs &= 0xf;
+
+	tprecs = max(timings->tCLS_min, timings->tALS_min) / 1000;
+	tprecs = DIV_ROUND_UP(tprecs * rate, 1000000);
+	tprecs &= 0x3f;
+
+	/* sdr interface has no tCR which means CE# low to RE# low */
+	tc2r = 0;
+
+	tw2r = timings->tWHR_min / 1000;
+	tw2r = DIV_ROUND_UP(tw2r * rate, 1000000);
+	tw2r = DIV_ROUND_UP(tw2r - 1, 2);
+	tw2r &= 0xf;
+
+	twh = max(timings->tREH_min, timings->tWH_min) / 1000;
+	twh = DIV_ROUND_UP(twh * rate, 1000000) - 1;
+	twh &= 0xf;
+
+	twst = timings->tWP_min / 1000;
+	twst = DIV_ROUND_UP(twst * rate, 1000000) - 1;
+	twst &= 0xf;
+
+	trlt = max(timings->tREA_max, timings->tRP_min) / 1000;
+	trlt = DIV_ROUND_UP(trlt * rate, 1000000) - 1;
+	trlt &= 0xf;
+
+	/*
+	 * ACCON: access timing control register
+	 * -------------------------------------
+	 * 31:28: tpoecs, minimum required time for CS post pulling down after
+	 *        accessing the device
+	 * 27:22: tprecs, minimum required time for CS pre pulling down before
+	 *        accessing the device
+	 * 21:16: tc2r, minimum required time from NCEB low to NREB low
+	 * 15:12: tw2r, minimum required time from NWEB high to NREB low.
+	 * 11:08: twh, write enable hold time
+	 * 07:04: twst, write wait states
+	 * 03:00: trlt, read wait states
+	 */
+	trlt = ACCTIMING(tpoecs, tprecs, tc2r, tw2r, twh, twst, trlt);
+	nfi_writel(nfc, trlt, NFI_ACCCON);
+
+	return 0;
+}
+
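All of the fields above follow the same conversion pattern: take the SDR timing in picoseconds, turn it into nanoseconds, scale by the (divided) NFI clock in kHz, and round up to whole cycles. A worked example, assuming an illustrative 124 MHz parent clock on an IP with nfi_clk_div = 2 and tWP_min = 15000 ps:

    rate = 124000000 / 2 / 1000 = 62000 kHz
    twst = DIV_ROUND_UP((15000 / 1000) * 62000, 1000000) - 1
         = DIV_ROUND_UP(930000, 1000000) - 1 = 0 write wait states
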
 static int mtk_nfc_sector_encode(struct nand_chip *chip, u8 *data)
 static int mtk_nfc_sector_encode(struct nand_chip *chip, u8 *data)
 {
 {
 	struct mtk_nfc *nfc = nand_get_controller_data(chip);
 	struct mtk_nfc *nfc = nand_get_controller_data(chip);
@@ -987,21 +1023,6 @@ static int mtk_nfc_read_oob_std(struct mtd_info *mtd, struct nand_chip *chip,
 
 
 static inline void mtk_nfc_hw_init(struct mtk_nfc *nfc)
 static inline void mtk_nfc_hw_init(struct mtk_nfc *nfc)
 {
 {
-	/*
-	 * ACCON: access timing control register
-	 * -------------------------------------
-	 * 31:28: minimum required time for CS post pulling down after accessing
-	 *	the device
-	 * 27:22: minimum required time for CS pre pulling down before accessing
-	 *	the device
-	 * 21:16: minimum required time from NCEB low to NREB low
-	 * 15:12: minimum required time from NWEB high to NREB low.
-	 * 11:08: write enable hold time
-	 * 07:04: write wait states
-	 * 03:00: read wait states
-	 */
-	nfi_writel(nfc, 0x10804211, NFI_ACCCON);
-
 	/*
 	/*
 	 * CNRNB: nand ready/busy register
 	 * CNRNB: nand ready/busy register
 	 * -------------------------------
 	 * -------------------------------
@@ -1009,7 +1030,7 @@ static inline void mtk_nfc_hw_init(struct mtk_nfc *nfc)
 	 * 0  : poll the status of the busy/ready signal after [7:4]*16 cycles.
 	 * 0  : poll the status of the busy/ready signal after [7:4]*16 cycles.
 	 */
 	 */
 	nfi_writew(nfc, 0xf1, NFI_CNRNB);
 	nfi_writew(nfc, 0xf1, NFI_CNRNB);
-	nfi_writew(nfc, PAGEFMT_8K_16K, NFI_PAGEFMT);
+	nfi_writel(nfc, PAGEFMT_8K_16K, NFI_PAGEFMT);
 
 
 	mtk_nfc_hw_reset(nfc);
 	mtk_nfc_hw_reset(nfc);
 
 
@@ -1131,12 +1152,12 @@ static void mtk_nfc_set_bad_mark_ctl(struct mtk_nfc_bad_mark_ctl *bm_ctl,
 	}
 	}
 }
 }
 
 
-static void mtk_nfc_set_spare_per_sector(u32 *sps, struct mtd_info *mtd)
+static int mtk_nfc_set_spare_per_sector(u32 *sps, struct mtd_info *mtd)
 {
 {
 	struct nand_chip *nand = mtd_to_nand(mtd);
 	struct nand_chip *nand = mtd_to_nand(mtd);
-	u32 spare[] = {16, 26, 27, 28, 32, 36, 40, 44,
-			48, 49, 50, 51, 52, 62, 63, 64};
-	u32 eccsteps, i;
+	struct mtk_nfc *nfc = nand_get_controller_data(nand);
+	const u8 *spare = nfc->caps->spare_size;
+	u32 eccsteps, i, closest_spare = 0;
 
 
 	eccsteps = mtd->writesize / nand->ecc.size;
 	eccsteps = mtd->writesize / nand->ecc.size;
 	*sps = mtd->oobsize / eccsteps;
 	*sps = mtd->oobsize / eccsteps;
@@ -1144,28 +1165,31 @@ static void mtk_nfc_set_spare_per_sector(u32 *sps, struct mtd_info *mtd)
 	if (nand->ecc.size == 1024)
 	if (nand->ecc.size == 1024)
 		*sps >>= 1;
 		*sps >>= 1;
 
 
-	for (i = 0; i < ARRAY_SIZE(spare); i++) {
-		if (*sps <= spare[i]) {
-			if (!i)
-				*sps = spare[i];
-			else if (*sps != spare[i])
-				*sps = spare[i - 1];
-			break;
+	if (*sps < MTK_NFC_MIN_SPARE)
+		return -EINVAL;
+
+	for (i = 0; i < nfc->caps->num_spare_size; i++) {
+		if (*sps >= spare[i] && spare[i] >= spare[closest_spare]) {
+			closest_spare = i;
+			if (*sps == spare[i])
+				break;
 		}
 		}
 	}
 	}
 
 
-	if (i >= ARRAY_SIZE(spare))
-		*sps = spare[ARRAY_SIZE(spare) - 1];
+	*sps = spare[closest_spare];
 
 
 	if (nand->ecc.size == 1024)
 	if (nand->ecc.size == 1024)
 		*sps <<= 1;
 		*sps <<= 1;
+
+	return 0;
 }
 }
 
 
 static int mtk_nfc_ecc_init(struct device *dev, struct mtd_info *mtd)
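The loop above picks the largest supported spare size that still fits in the available OOB instead of rounding through a fixed list. For example, with a 4 KiB page, 128 bytes of OOB and 1 KiB ECC sectors: eccsteps = 4 gives 32 OOB bytes per step, halved to 16 for the 1024-byte sector encoding; 16 is a supported entry, so spare_per_sector becomes 32 again once it is scaled back up for the 1024-byte sector.
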
 static int mtk_nfc_ecc_init(struct device *dev, struct mtd_info *mtd)
 {
 {
 	struct nand_chip *nand = mtd_to_nand(mtd);
 	struct nand_chip *nand = mtd_to_nand(mtd);
+	struct mtk_nfc *nfc = nand_get_controller_data(nand);
 	u32 spare;
 	u32 spare;
-	int free;
+	int free, ret;
 
 
 	/* support only ecc hw mode */
 	/* support only ecc hw mode */
 	if (nand->ecc.mode != NAND_ECC_HW) {
 	if (nand->ecc.mode != NAND_ECC_HW) {
@@ -1194,7 +1218,9 @@ static int mtk_nfc_ecc_init(struct device *dev, struct mtd_info *mtd)
 			nand->ecc.size = 1024;
 			nand->ecc.size = 1024;
 		}
 		}
 
 
-		mtk_nfc_set_spare_per_sector(&spare, mtd);
+		ret = mtk_nfc_set_spare_per_sector(&spare, mtd);
+		if (ret)
+			return ret;
 
 
 		/* calculate oob bytes except ecc parity data */
 		/* calculate oob bytes except ecc parity data */
 		free = ((nand->ecc.strength * ECC_PARITY_BITS) + 7) >> 3;
 		free = ((nand->ecc.strength * ECC_PARITY_BITS) + 7) >> 3;
@@ -1214,7 +1240,7 @@ static int mtk_nfc_ecc_init(struct device *dev, struct mtd_info *mtd)
 		}
 		}
 	}
 	}
 
 
-	mtk_ecc_adjust_strength(&nand->ecc.strength);
+	mtk_ecc_adjust_strength(nfc->ecc, &nand->ecc.strength);
 
 
 	dev_info(dev, "eccsize %d eccstrength %d\n",
 	dev_info(dev, "eccsize %d eccstrength %d\n",
 		 nand->ecc.size, nand->ecc.strength);
 		 nand->ecc.size, nand->ecc.strength);
@@ -1271,6 +1297,7 @@ static int mtk_nfc_nand_chip_init(struct device *dev, struct mtk_nfc *nfc,
 	nand->read_byte = mtk_nfc_read_byte;
 	nand->read_byte = mtk_nfc_read_byte;
 	nand->read_buf = mtk_nfc_read_buf;
 	nand->read_buf = mtk_nfc_read_buf;
 	nand->cmd_ctrl = mtk_nfc_cmd_ctrl;
 	nand->cmd_ctrl = mtk_nfc_cmd_ctrl;
+	nand->setup_data_interface = mtk_nfc_setup_data_interface;
 
 
 	/* set default mode in case dt entry is missing */
 	/* set default mode in case dt entry is missing */
 	nand->ecc.mode = NAND_ECC_HW;
 	nand->ecc.mode = NAND_ECC_HW;
@@ -1312,7 +1339,10 @@ static int mtk_nfc_nand_chip_init(struct device *dev, struct mtk_nfc *nfc,
 		return -EINVAL;
 		return -EINVAL;
 	}
 	}
 
 
-	mtk_nfc_set_spare_per_sector(&chip->spare_per_sector, mtd);
+	ret = mtk_nfc_set_spare_per_sector(&chip->spare_per_sector, mtd);
+	if (ret)
+		return ret;
+
 	mtk_nfc_set_fdm(&chip->fdm, mtd);
 	mtk_nfc_set_fdm(&chip->fdm, mtd);
 	mtk_nfc_set_bad_mark_ctl(&chip->bad_mark, mtd);
 	mtk_nfc_set_bad_mark_ctl(&chip->bad_mark, mtd);
 
 
@@ -1354,12 +1384,39 @@ static int mtk_nfc_nand_chips_init(struct device *dev, struct mtk_nfc *nfc)
 	return 0;
 	return 0;
 }
 }
 
 
+static const struct mtk_nfc_caps mtk_nfc_caps_mt2701 = {
+	.spare_size = spare_size_mt2701,
+	.num_spare_size = 16,
+	.pageformat_spare_shift = 4,
+	.nfi_clk_div = 1,
+};
+
+static const struct mtk_nfc_caps mtk_nfc_caps_mt2712 = {
+	.spare_size = spare_size_mt2712,
+	.num_spare_size = 19,
+	.pageformat_spare_shift = 16,
+	.nfi_clk_div = 2,
+};
+
+static const struct of_device_id mtk_nfc_id_table[] = {
+	{
+		.compatible = "mediatek,mt2701-nfc",
+		.data = &mtk_nfc_caps_mt2701,
+	}, {
+		.compatible = "mediatek,mt2712-nfc",
+		.data = &mtk_nfc_caps_mt2712,
+	},
+	{}
+};
+MODULE_DEVICE_TABLE(of, mtk_nfc_id_table);
+
 static int mtk_nfc_probe(struct platform_device *pdev)
 static int mtk_nfc_probe(struct platform_device *pdev)
 {
 {
 	struct device *dev = &pdev->dev;
 	struct device *dev = &pdev->dev;
 	struct device_node *np = dev->of_node;
 	struct device_node *np = dev->of_node;
 	struct mtk_nfc *nfc;
 	struct mtk_nfc *nfc;
 	struct resource *res;
 	struct resource *res;
+	const struct of_device_id *of_nfc_id = NULL;
 	int ret, irq;
 	int ret, irq;
 
 
 	nfc = devm_kzalloc(dev, sizeof(*nfc), GFP_KERNEL);
 	nfc = devm_kzalloc(dev, sizeof(*nfc), GFP_KERNEL);
@@ -1423,6 +1480,14 @@ static int mtk_nfc_probe(struct platform_device *pdev)
 		goto clk_disable;
 		goto clk_disable;
 	}
 	}
 
 
+	of_nfc_id = of_match_device(mtk_nfc_id_table, &pdev->dev);
+	if (!of_nfc_id) {
+		ret = -ENODEV;
+		goto clk_disable;
+	}
+
+	nfc->caps = of_nfc_id->data;
+
 	platform_set_drvdata(pdev, nfc);
 	platform_set_drvdata(pdev, nfc);
 
 
 	ret = mtk_nfc_nand_chips_init(dev, nfc);
 	ret = mtk_nfc_nand_chips_init(dev, nfc);
@@ -1485,8 +1550,6 @@ static int mtk_nfc_resume(struct device *dev)
 	if (ret)
 	if (ret)
 		return ret;
 		return ret;
 
 
-	mtk_nfc_hw_init(nfc);
-
 	/* reset NAND chip if VCC was powered off */
 	/* reset NAND chip if VCC was powered off */
 	list_for_each_entry(chip, &nfc->chips, node) {
 	list_for_each_entry(chip, &nfc->chips, node) {
 		nand = &chip->nand;
 		nand = &chip->nand;
@@ -1503,12 +1566,6 @@ static int mtk_nfc_resume(struct device *dev)
 static SIMPLE_DEV_PM_OPS(mtk_nfc_pm_ops, mtk_nfc_suspend, mtk_nfc_resume);
 static SIMPLE_DEV_PM_OPS(mtk_nfc_pm_ops, mtk_nfc_suspend, mtk_nfc_resume);
 #endif
 #endif
 
 
-static const struct of_device_id mtk_nfc_id_table[] = {
-	{ .compatible = "mediatek,mt2701-nfc" },
-	{}
-};
-MODULE_DEVICE_TABLE(of, mtk_nfc_id_table);
-
 static struct platform_driver mtk_nfc_driver = {
 static struct platform_driver mtk_nfc_driver = {
 	.probe  = mtk_nfc_probe,
 	.probe  = mtk_nfc_probe,
 	.remove = mtk_nfc_remove,
 	.remove = mtk_nfc_remove,

+ 5 - 7
drivers/mtd/nand/mxc_nand.c

@@ -152,9 +152,8 @@ struct mxc_nand_devtype_data {
 	void (*select_chip)(struct mtd_info *mtd, int chip);
 	void (*select_chip)(struct mtd_info *mtd, int chip);
 	int (*correct_data)(struct mtd_info *mtd, u_char *dat,
 	int (*correct_data)(struct mtd_info *mtd, u_char *dat,
 			u_char *read_ecc, u_char *calc_ecc);
 			u_char *read_ecc, u_char *calc_ecc);
-	int (*setup_data_interface)(struct mtd_info *mtd,
-				    const struct nand_data_interface *conf,
-				    bool check_only);
+	int (*setup_data_interface)(struct mtd_info *mtd, int csline,
+				    const struct nand_data_interface *conf);
 
 
 	/*
 	/*
 	 * On i.MX21 the CONFIG2:INT bit cannot be read if interrupts are masked
 	 * On i.MX21 the CONFIG2:INT bit cannot be read if interrupts are masked
@@ -1015,9 +1014,8 @@ static void preset_v1(struct mtd_info *mtd)
 	writew(0x4, NFC_V1_V2_WRPROT);
 	writew(0x4, NFC_V1_V2_WRPROT);
 }
 }
 
 
-static int mxc_nand_v2_setup_data_interface(struct mtd_info *mtd,
-					const struct nand_data_interface *conf,
-					bool check_only)
+static int mxc_nand_v2_setup_data_interface(struct mtd_info *mtd, int csline,
+					const struct nand_data_interface *conf)
 {
 {
 	struct nand_chip *nand_chip = mtd_to_nand(mtd);
 	struct nand_chip *nand_chip = mtd_to_nand(mtd);
 	struct mxc_nand_host *host = nand_get_controller_data(nand_chip);
 	struct mxc_nand_host *host = nand_get_controller_data(nand_chip);
@@ -1075,7 +1073,7 @@ static int mxc_nand_v2_setup_data_interface(struct mtd_info *mtd,
 		return -EINVAL;
 		return -EINVAL;
 	}
 	}
 
 
-	if (check_only)
+	if (csline == NAND_DATA_IFACE_CHECK_ONLY)
 		return 0;
 		return 0;
 
 
 	ret = clk_set_rate(host->clk, rate);
 	ret = clk_set_rate(host->clk, rate);

+ 303 - 46
drivers/mtd/nand/nand_base.c

@@ -753,6 +753,16 @@ static void nand_command(struct mtd_info *mtd, unsigned int command,
 		return;
 		return;
 
 
 		/* This applies to read commands */
 		/* This applies to read commands */
+	case NAND_CMD_READ0:
+		/*
+		 * READ0 is sometimes used to exit GET STATUS mode. When this
+		 * is the case no address cycles are requested, and we can use
+		 * this information to detect that we should not wait for the
+		 * device to be ready.
+		 */
+		if (column == -1 && page_addr == -1)
+			return;
+
 	default:
 	default:
 		/*
 		/*
 		 * If we don't have access to the busy pin, we apply the given
 		 * If we don't have access to the busy pin, we apply the given
@@ -887,6 +897,15 @@ static void nand_command_lp(struct mtd_info *mtd, unsigned int command,
 		return;
 		return;
 
 
 	case NAND_CMD_READ0:
 	case NAND_CMD_READ0:
+		/*
+		 * READ0 is sometimes used to exit GET STATUS mode. When this
+		 * is the case no address cycles are requested, and we can use
+		 * this information to detect that READSTART should not be
+		 * issued.
+		 */
+		if (column == -1 && page_addr == -1)
+			return;
+
 		chip->cmd_ctrl(mtd, NAND_CMD_READSTART,
 		chip->cmd_ctrl(mtd, NAND_CMD_READSTART,
 			       NAND_NCE | NAND_CLE | NAND_CTRL_CHANGE);
 			       NAND_NCE | NAND_CLE | NAND_CTRL_CHANGE);
 		chip->cmd_ctrl(mtd, NAND_CMD_NONE,
 		chip->cmd_ctrl(mtd, NAND_CMD_NONE,
@@ -1042,12 +1061,13 @@ static int nand_wait(struct mtd_info *mtd, struct nand_chip *chip)
 /**
 /**
  * nand_reset_data_interface - Reset data interface and timings
  * nand_reset_data_interface - Reset data interface and timings
  * @chip: The NAND chip
  * @chip: The NAND chip
+ * @chipnr: Internal die id
  *
  *
  * Reset the Data interface and timings to ONFI mode 0.
  * Reset the Data interface and timings to ONFI mode 0.
  *
  *
  * Returns 0 for success or negative error code otherwise.
  * Returns 0 for success or negative error code otherwise.
  */
  */
-static int nand_reset_data_interface(struct nand_chip *chip)
+static int nand_reset_data_interface(struct nand_chip *chip, int chipnr)
 {
 {
 	struct mtd_info *mtd = nand_to_mtd(chip);
 	struct mtd_info *mtd = nand_to_mtd(chip);
 	const struct nand_data_interface *conf;
 	const struct nand_data_interface *conf;
@@ -1071,7 +1091,7 @@ static int nand_reset_data_interface(struct nand_chip *chip)
 	 */
 	 */
 
 
 	conf = nand_get_default_data_interface();
 	conf = nand_get_default_data_interface();
-	ret = chip->setup_data_interface(mtd, conf, false);
+	ret = chip->setup_data_interface(mtd, chipnr, conf);
 	if (ret)
 	if (ret)
 		pr_err("Failed to configure data interface to SDR timing mode 0\n");
 		pr_err("Failed to configure data interface to SDR timing mode 0\n");
 
 
@@ -1081,6 +1101,7 @@ static int nand_reset_data_interface(struct nand_chip *chip)
 /**
 /**
  * nand_setup_data_interface - Setup the best data interface and timings
  * nand_setup_data_interface - Setup the best data interface and timings
  * @chip: The NAND chip
  * @chip: The NAND chip
+ * @chipnr: Internal die id
  *
  *
  * Find and configure the best data interface and NAND timings supported by
  * Find and configure the best data interface and NAND timings supported by
  * the chip and the driver.
  * the chip and the driver.
@@ -1090,7 +1111,7 @@ static int nand_reset_data_interface(struct nand_chip *chip)
  *
  *
  * Returns 0 for success or negative error code otherwise.
  * Returns 0 for success or negative error code otherwise.
  */
  */
-static int nand_setup_data_interface(struct nand_chip *chip)
+static int nand_setup_data_interface(struct nand_chip *chip, int chipnr)
 {
 {
 	struct mtd_info *mtd = nand_to_mtd(chip);
 	struct mtd_info *mtd = nand_to_mtd(chip);
 	int ret;
 	int ret;
@@ -1114,7 +1135,7 @@ static int nand_setup_data_interface(struct nand_chip *chip)
 			goto err;
 			goto err;
 	}
 	}
 
 
-	ret = chip->setup_data_interface(mtd, chip->data_interface, false);
+	ret = chip->setup_data_interface(mtd, chipnr, chip->data_interface);
 err:
 err:
 	return ret;
 	return ret;
 }
 }
@@ -1165,8 +1186,10 @@ static int nand_init_data_interface(struct nand_chip *chip)
 		if (ret)
 		if (ret)
 			continue;
 			continue;
 
 
-		ret = chip->setup_data_interface(mtd, chip->data_interface,
-						 true);
+		/* Pass NAND_DATA_IFACE_CHECK_ONLY to only check the timings */
+		ret = chip->setup_data_interface(mtd,
+						 NAND_DATA_IFACE_CHECK_ONLY,
+						 chip->data_interface);
 		if (!ret) {
 		if (!ret) {
 			chip->onfi_timing_mode_default = mode;
 			chip->onfi_timing_mode_default = mode;
 			break;
 			break;
@@ -1193,7 +1216,7 @@ int nand_reset(struct nand_chip *chip, int chipnr)
 	struct mtd_info *mtd = nand_to_mtd(chip);
 	struct mtd_info *mtd = nand_to_mtd(chip);
 	int ret;
 	int ret;
 
 
-	ret = nand_reset_data_interface(chip);
+	ret = nand_reset_data_interface(chip, chipnr);
 	if (ret)
 	if (ret)
 		return ret;
 		return ret;
 
 
@@ -1206,7 +1229,7 @@ int nand_reset(struct nand_chip *chip, int chipnr)
 	chip->select_chip(mtd, -1);
 	chip->select_chip(mtd, -1);
 
 
 	chip->select_chip(mtd, chipnr);
 	chip->select_chip(mtd, chipnr);
-	ret = nand_setup_data_interface(chip);
+	ret = nand_setup_data_interface(chip, chipnr);
 	chip->select_chip(mtd, -1);
 	chip->select_chip(mtd, -1);
 	if (ret)
 	if (ret)
 		return ret;
 		return ret;
@@ -1421,7 +1444,10 @@ static int nand_check_erased_buf(void *buf, int len, int bitflips_threshold)
 
 
 	for (; len >= sizeof(long);
 	for (; len >= sizeof(long);
 	     len -= sizeof(long), bitmap += sizeof(long)) {
 	     len -= sizeof(long), bitmap += sizeof(long)) {
-		weight = hweight_long(*((unsigned long *)bitmap));
+		unsigned long d = *((unsigned long *)bitmap);
+		if (d == ~0UL)
+			continue;
+		weight = hweight_long(d);
 		bitflips += BITS_PER_LONG - weight;
 		bitflips += BITS_PER_LONG - weight;
 		if (unlikely(bitflips > bitflips_threshold))
 		if (unlikely(bitflips > bitflips_threshold))
 			return -EBADMSG;
 			return -EBADMSG;
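Skipping words that are still all-0xFF before calling hweight_long() avoids the popcount in the common case of a clean erased page; only words that actually contain zero bits are counted against the bitflip threshold.
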
@@ -1524,14 +1550,15 @@ EXPORT_SYMBOL(nand_check_erased_ecc_chunk);
  *
  *
  * Not for syndrome calculating ECC controllers, which use a special oob layout.
  * Not for syndrome calculating ECC controllers, which use a special oob layout.
  */
  */
-static int nand_read_page_raw(struct mtd_info *mtd, struct nand_chip *chip,
-			      uint8_t *buf, int oob_required, int page)
+int nand_read_page_raw(struct mtd_info *mtd, struct nand_chip *chip,
+		       uint8_t *buf, int oob_required, int page)
 {
 {
 	chip->read_buf(mtd, buf, mtd->writesize);
 	chip->read_buf(mtd, buf, mtd->writesize);
 	if (oob_required)
 	if (oob_required)
 		chip->read_buf(mtd, chip->oob_poi, mtd->oobsize);
 		chip->read_buf(mtd, chip->oob_poi, mtd->oobsize);
 	return 0;
 	return 0;
 }
 }
+EXPORT_SYMBOL(nand_read_page_raw);
 
 
 /**
 /**
  * nand_read_page_raw_syndrome - [INTERN] read raw page data without ecc
  * nand_read_page_raw_syndrome - [INTERN] read raw page data without ecc
@@ -2469,8 +2496,8 @@ static int nand_read_oob(struct mtd_info *mtd, loff_t from,
  *
  *
  * Not for syndrome calculating ECC controllers, which use a special oob layout.
  * Not for syndrome calculating ECC controllers, which use a special oob layout.
  */
  */
-static int nand_write_page_raw(struct mtd_info *mtd, struct nand_chip *chip,
-			       const uint8_t *buf, int oob_required, int page)
+int nand_write_page_raw(struct mtd_info *mtd, struct nand_chip *chip,
+			const uint8_t *buf, int oob_required, int page)
 {
 {
 	chip->write_buf(mtd, buf, mtd->writesize);
 	chip->write_buf(mtd, buf, mtd->writesize);
 	if (oob_required)
 	if (oob_required)
@@ -2478,6 +2505,7 @@ static int nand_write_page_raw(struct mtd_info *mtd, struct nand_chip *chip,
 
 
 	return 0;
 	return 0;
 }
 }
+EXPORT_SYMBOL(nand_write_page_raw);
 
 
 /**
 /**
  * nand_write_page_raw_syndrome - [INTERN] raw page write function
  * nand_write_page_raw_syndrome - [INTERN] raw page write function
@@ -2715,7 +2743,7 @@ static int nand_write_page_syndrome(struct mtd_info *mtd,
  */
  */
 static int nand_write_page(struct mtd_info *mtd, struct nand_chip *chip,
 static int nand_write_page(struct mtd_info *mtd, struct nand_chip *chip,
 		uint32_t offset, int data_len, const uint8_t *buf,
 		uint32_t offset, int data_len, const uint8_t *buf,
-		int oob_required, int page, int cached, int raw)
+		int oob_required, int page, int raw)
 {
 {
 	int status, subpage;
 	int status, subpage;
 
 
@@ -2741,30 +2769,12 @@ static int nand_write_page(struct mtd_info *mtd, struct nand_chip *chip,
 	if (status < 0)
 	if (status < 0)
 		return status;
 		return status;
 
 
-	/*
-	 * Cached progamming disabled for now. Not sure if it's worth the
-	 * trouble. The speed gain is not very impressive. (2.3->2.6Mib/s).
-	 */
-	cached = 0;
+	if (nand_standard_page_accessors(&chip->ecc)) {
+		chip->cmdfunc(mtd, NAND_CMD_PAGEPROG, -1, -1);
 
 
-	if (!cached || !NAND_HAS_CACHEPROG(chip)) {
-
-		if (nand_standard_page_accessors(&chip->ecc))
-			chip->cmdfunc(mtd, NAND_CMD_PAGEPROG, -1, -1);
 		status = chip->waitfunc(mtd, chip);
 		status = chip->waitfunc(mtd, chip);
-		/*
-		 * See if operation failed and additional status checks are
-		 * available.
-		 */
-		if ((status & NAND_STATUS_FAIL) && (chip->errstat))
-			status = chip->errstat(mtd, chip, FL_WRITING, status,
-					       page);
-
 		if (status & NAND_STATUS_FAIL)
 		if (status & NAND_STATUS_FAIL)
 			return -EIO;
 			return -EIO;
-	} else {
-		chip->cmdfunc(mtd, NAND_CMD_CACHEDPROG, -1, -1);
-		status = chip->waitfunc(mtd, chip);
 	}
 	}
 
 
 	return 0;
 	return 0;
@@ -2872,7 +2882,6 @@ static int nand_do_write_ops(struct mtd_info *mtd, loff_t to,
 
 
 	while (1) {
 	while (1) {
 		int bytes = mtd->writesize;
 		int bytes = mtd->writesize;
-		int cached = writelen > bytes && page != blockmask;
 		uint8_t *wbuf = buf;
 		uint8_t *wbuf = buf;
 		int use_bufpoi;
 		int use_bufpoi;
 		int part_pagewr = (column || writelen < mtd->writesize);
 		int part_pagewr = (column || writelen < mtd->writesize);
@@ -2890,7 +2899,6 @@ static int nand_do_write_ops(struct mtd_info *mtd, loff_t to,
 		if (use_bufpoi) {
 		if (use_bufpoi) {
 			pr_debug("%s: using write bounce buffer for buf@%p\n",
 			pr_debug("%s: using write bounce buffer for buf@%p\n",
 					 __func__, buf);
 					 __func__, buf);
-			cached = 0;
 			if (part_pagewr)
 			if (part_pagewr)
 				bytes = min_t(int, bytes - column, writelen);
 				bytes = min_t(int, bytes - column, writelen);
 			chip->pagebuf = -1;
 			chip->pagebuf = -1;
@@ -2909,7 +2917,7 @@ static int nand_do_write_ops(struct mtd_info *mtd, loff_t to,
 		}
 		}
 
 
 		ret = nand_write_page(mtd, chip, column, bytes, wbuf,
 		ret = nand_write_page(mtd, chip, column, bytes, wbuf,
-				      oob_required, page, cached,
+				      oob_required, page,
 				      (ops->mode == MTD_OPS_RAW));
 				      (ops->mode == MTD_OPS_RAW));
 		if (ret)
 		if (ret)
 			break;
 			break;
@@ -3225,14 +3233,6 @@ int nand_erase_nand(struct mtd_info *mtd, struct erase_info *instr,
 
 
 		status = chip->erase(mtd, page & chip->pagemask);
 		status = chip->erase(mtd, page & chip->pagemask);
 
 
-		/*
-		 * See if operation failed and additional status checks are
-		 * available
-		 */
-		if ((status & NAND_STATUS_FAIL) && (chip->errstat))
-			status = chip->errstat(mtd, chip, FL_ERASING,
-					       status, page);
-
 		/* See if block erase succeeded */
 		/* See if block erase succeeded */
 		if (status & NAND_STATUS_FAIL) {
 		if (status & NAND_STATUS_FAIL) {
 			pr_debug("%s: failed erase, page 0x%08x\n",
 			pr_debug("%s: failed erase, page 0x%08x\n",
@@ -3418,6 +3418,25 @@ static int nand_onfi_get_features(struct mtd_info *mtd, struct nand_chip *chip,
 	return 0;
 	return 0;
 }
 }
 
 
+/**
+ * nand_onfi_get_set_features_notsupp - set/get features stub returning
+ *					-ENOTSUPP
+ * @mtd: MTD device structure
+ * @chip: nand chip info structure
+ * @addr: feature address.
+ * @subfeature_param: the subfeature parameters, a four bytes array.
+ *
+ * Should be used by NAND controller drivers that do not support the SET/GET
+ * FEATURES operations.
+ */
+int nand_onfi_get_set_features_notsupp(struct mtd_info *mtd,
+				       struct nand_chip *chip, int addr,
+				       u8 *subfeature_param)
+{
+	return -ENOTSUPP;
+}
+EXPORT_SYMBOL(nand_onfi_get_set_features_notsupp);
+
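Controller drivers that cannot issue SET/GET FEATURES are now expected to hook this stub up explicitly rather than inherit the default implementation, as the pxa3xx, qcom and sh_flctl hunks below do:

    chip->onfi_set_features = nand_onfi_get_set_features_notsupp;
    chip->onfi_get_features = nand_onfi_get_set_features_notsupp;
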
 /**
 /**
  * nand_suspend - [MTD Interface] Suspend the NAND flash
  * nand_suspend - [MTD Interface] Suspend the NAND flash
  * @mtd: MTD device structure
  * @mtd: MTD device structure
@@ -4177,6 +4196,7 @@ static const char * const nand_ecc_modes[] = {
 	[NAND_ECC_HW]		= "hw",
 	[NAND_ECC_HW]		= "hw",
 	[NAND_ECC_HW_SYNDROME]	= "hw_syndrome",
 	[NAND_ECC_HW_SYNDROME]	= "hw_syndrome",
 	[NAND_ECC_HW_OOB_FIRST]	= "hw_oob_first",
 	[NAND_ECC_HW_OOB_FIRST]	= "hw_oob_first",
+	[NAND_ECC_ON_DIE]	= "on-die",
 };
 };
 
 
 static int of_get_nand_ecc_mode(struct device_node *np)
 static int of_get_nand_ecc_mode(struct device_node *np)
@@ -4371,7 +4391,7 @@ int nand_scan_ident(struct mtd_info *mtd, int maxchips,
 	 * For the other dies, nand_reset() will automatically switch to the
 	 * For the other dies, nand_reset() will automatically switch to the
 	 * best mode for us.
 	 * best mode for us.
 	 */
 	 */
-	ret = nand_setup_data_interface(chip);
+	ret = nand_setup_data_interface(chip, 0);
 	if (ret)
 	if (ret)
 		goto err_nand_init;
 		goto err_nand_init;
 
 
@@ -4509,6 +4529,226 @@ static int nand_set_ecc_soft_ops(struct mtd_info *mtd)
 	}
 	}
 }
 }
 
 
+/**
+ * nand_check_ecc_caps - check the sanity of preset ECC settings
+ * @chip: nand chip info structure
+ * @caps: ECC caps info structure
+ * @oobavail: OOB size that the ECC engine can use
+ *
+ * When ECC step size and strength are already set, check if they are supported
+ * by the controller and the calculated ECC bytes fit within the chip's OOB.
+ * On success, the calculated ECC bytes is set.
+ */
+int nand_check_ecc_caps(struct nand_chip *chip,
+			const struct nand_ecc_caps *caps, int oobavail)
+{
+	struct mtd_info *mtd = nand_to_mtd(chip);
+	const struct nand_ecc_step_info *stepinfo;
+	int preset_step = chip->ecc.size;
+	int preset_strength = chip->ecc.strength;
+	int nsteps, ecc_bytes;
+	int i, j;
+
+	if (WARN_ON(oobavail < 0))
+		return -EINVAL;
+
+	if (!preset_step || !preset_strength)
+		return -ENODATA;
+
+	nsteps = mtd->writesize / preset_step;
+
+	for (i = 0; i < caps->nstepinfos; i++) {
+		stepinfo = &caps->stepinfos[i];
+
+		if (stepinfo->stepsize != preset_step)
+			continue;
+
+		for (j = 0; j < stepinfo->nstrengths; j++) {
+			if (stepinfo->strengths[j] != preset_strength)
+				continue;
+
+			ecc_bytes = caps->calc_ecc_bytes(preset_step,
+							 preset_strength);
+			if (WARN_ON_ONCE(ecc_bytes < 0))
+				return ecc_bytes;
+
+			if (ecc_bytes * nsteps > oobavail) {
+				pr_err("ECC (step, strength) = (%d, %d) does not fit in OOB",
+				       preset_step, preset_strength);
+				return -ENOSPC;
+			}
+
+			chip->ecc.bytes = ecc_bytes;
+
+			return 0;
+		}
+	}
+
+	pr_err("ECC (step, strength) = (%d, %d) not supported on this controller",
+	       preset_step, preset_strength);
+
+	return -ENOTSUPP;
+}
+EXPORT_SYMBOL_GPL(nand_check_ecc_caps);
+
+/**
+ * nand_match_ecc_req - meet the chip's requirement with least ECC bytes
+ * @chip: nand chip info structure
+ * @caps: ECC engine caps info structure
+ * @oobavail: OOB size that the ECC engine can use
+ *
+ * If a chip's ECC requirement is provided, try to meet it with the least
+ * number of ECC bytes (i.e. with the largest number of OOB-free bytes).
+ * On success, the chosen ECC settings are set.
+ */
+int nand_match_ecc_req(struct nand_chip *chip,
+		       const struct nand_ecc_caps *caps, int oobavail)
+{
+	struct mtd_info *mtd = nand_to_mtd(chip);
+	const struct nand_ecc_step_info *stepinfo;
+	int req_step = chip->ecc_step_ds;
+	int req_strength = chip->ecc_strength_ds;
+	int req_corr, step_size, strength, nsteps, ecc_bytes, ecc_bytes_total;
+	int best_step, best_strength, best_ecc_bytes;
+	int best_ecc_bytes_total = INT_MAX;
+	int i, j;
+
+	if (WARN_ON(oobavail < 0))
+		return -EINVAL;
+
+	/* No information provided by the NAND chip */
+	if (!req_step || !req_strength)
+		return -ENOTSUPP;
+
+	/* number of correctable bits the chip requires in a page */
+	req_corr = mtd->writesize / req_step * req_strength;
+
+	for (i = 0; i < caps->nstepinfos; i++) {
+		stepinfo = &caps->stepinfos[i];
+		step_size = stepinfo->stepsize;
+
+		for (j = 0; j < stepinfo->nstrengths; j++) {
+			strength = stepinfo->strengths[j];
+
+			/*
+			 * If both step size and strength are smaller than the
+			 * chip's requirement, it is not easy to compare the
+			 * resulting reliability.
+			 */
+			if (step_size < req_step && strength < req_strength)
+				continue;
+
+			if (mtd->writesize % step_size)
+				continue;
+
+			nsteps = mtd->writesize / step_size;
+
+			ecc_bytes = caps->calc_ecc_bytes(step_size, strength);
+			if (WARN_ON_ONCE(ecc_bytes < 0))
+				continue;
+			ecc_bytes_total = ecc_bytes * nsteps;
+
+			if (ecc_bytes_total > oobavail ||
+			    strength * nsteps < req_corr)
+				continue;
+
+			/*
+			 * We assume the best is to meet the chip's requirement
+			 * with the least number of ECC bytes.
+			 */
+			if (ecc_bytes_total < best_ecc_bytes_total) {
+				best_ecc_bytes_total = ecc_bytes_total;
+				best_step = step_size;
+				best_strength = strength;
+				best_ecc_bytes = ecc_bytes;
+			}
+		}
+	}
+
+	if (best_ecc_bytes_total == INT_MAX)
+		return -ENOTSUPP;
+
+	chip->ecc.size = best_step;
+	chip->ecc.strength = best_strength;
+	chip->ecc.bytes = best_ecc_bytes;
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(nand_match_ecc_req);
+
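As a worked example: a 4096-byte page chip reporting 8 bits per 512 bytes needs 64 correctable bits per page. A controller offering 1024-byte steps at strengths 8, 16 and 24 would be matched at (1024, 16): strength 8 only yields 4 * 8 = 32 correctable bits and is rejected by the req_corr check, while 24 also qualifies but costs more ECC bytes (assuming the ECC-byte cost grows with strength), so 16 wins.
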
+/**
+ * nand_maximize_ecc - choose the max ECC strength available
+ * @chip: nand chip info structure
+ * @caps: ECC engine caps info structure
+ * @oobavail: OOB size that the ECC engine can use
+ *
+ * Choose the max ECC strength that is supported on the controller, and can fit
+ * within the chip's OOB.  On success, the chosen ECC settings are set.
+ */
+int nand_maximize_ecc(struct nand_chip *chip,
+		      const struct nand_ecc_caps *caps, int oobavail)
+{
+	struct mtd_info *mtd = nand_to_mtd(chip);
+	const struct nand_ecc_step_info *stepinfo;
+	int step_size, strength, nsteps, ecc_bytes, corr;
+	int best_corr = 0;
+	int best_step = 0;
+	int best_strength, best_ecc_bytes;
+	int i, j;
+
+	if (WARN_ON(oobavail < 0))
+		return -EINVAL;
+
+	for (i = 0; i < caps->nstepinfos; i++) {
+		stepinfo = &caps->stepinfos[i];
+		step_size = stepinfo->stepsize;
+
+		/* If chip->ecc.size is already set, respect it */
+		if (chip->ecc.size && step_size != chip->ecc.size)
+			continue;
+
+		for (j = 0; j < stepinfo->nstrengths; j++) {
+			strength = stepinfo->strengths[j];
+
+			if (mtd->writesize % step_size)
+				continue;
+
+			nsteps = mtd->writesize / step_size;
+
+			ecc_bytes = caps->calc_ecc_bytes(step_size, strength);
+			if (WARN_ON_ONCE(ecc_bytes < 0))
+				continue;
+
+			if (ecc_bytes * nsteps > oobavail)
+				continue;
+
+			corr = strength * nsteps;
+
+			/*
+			 * If the number of correctable bits is the same,
+			 * bigger step_size has more reliability.
+			 */
+			if (corr > best_corr ||
+			    (corr == best_corr && step_size > best_step)) {
+				best_corr = corr;
+				best_step = step_size;
+				best_strength = strength;
+				best_ecc_bytes = ecc_bytes;
+			}
+		}
+	}
+
+	if (!best_corr)
+		return -ENOTSUPP;
+
+	chip->ecc.size = best_step;
+	chip->ecc.strength = best_strength;
+	chip->ecc.bytes = best_ecc_bytes;
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(nand_maximize_ecc);
+
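All three helpers work from the same controller description. A rough sketch of how a driver might wire them up, assuming a hypothetical engine with 512-byte steps, strengths of 8 or 16 bits and 13 bytes of parity per 8 bits of correction (names and numbers are illustrative, not an existing driver):

    static int foo_calc_ecc_bytes(int step_size, int strength)
    {
    	/* illustrative cost model: 13 parity bytes per 8 bits of correction */
    	return DIV_ROUND_UP(strength * 13, 8);
    }

    static const int foo_strengths[] = { 8, 16 };
    static const struct nand_ecc_step_info foo_stepinfo = {
    	.stepsize = 512,
    	.strengths = foo_strengths,
    	.nstrengths = ARRAY_SIZE(foo_strengths),
    };
    static const struct nand_ecc_caps foo_ecc_caps = {
    	.stepinfos = &foo_stepinfo,
    	.nstepinfos = 1,
    	.calc_ecc_bytes = foo_calc_ecc_bytes,
    };

    static int foo_ecc_init(struct nand_chip *chip, int oobavail)
    {
    	/* honour a preset step/strength if given, else match the chip's requirement */
    	int ret = nand_check_ecc_caps(chip, &foo_ecc_caps, oobavail);

    	if (ret == -ENODATA)
    		ret = nand_match_ecc_req(chip, &foo_ecc_caps, oobavail);
    	return ret;
    }
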
 /*
 /*
  * Check if the chip configuration meet the datasheet requirements.
  * Check if the chip configuration meet the datasheet requirements.
 
 
@@ -4730,6 +4970,18 @@ int nand_scan_tail(struct mtd_info *mtd)
 		}
 		}
 		break;
 		break;
 
 
+	case NAND_ECC_ON_DIE:
+		if (!ecc->read_page || !ecc->write_page) {
+			WARN(1, "No ECC functions supplied; on-die ECC not possible\n");
+			ret = -EINVAL;
+			goto err_free;
+		}
+		if (!ecc->read_oob)
+			ecc->read_oob = nand_read_oob_std;
+		if (!ecc->write_oob)
+			ecc->write_oob = nand_write_oob_std;
+		break;
+
 	case NAND_ECC_NONE:
 	case NAND_ECC_NONE:
 		pr_warn("NAND_ECC_NONE selected by board driver. This is not recommended!\n");
 		pr_warn("NAND_ECC_NONE selected by board driver. This is not recommended!\n");
 		ecc->read_page = nand_read_page_raw;
 		ecc->read_page = nand_read_page_raw;
@@ -4770,6 +5022,11 @@ int nand_scan_tail(struct mtd_info *mtd)
 		goto err_free;
 		goto err_free;
 	}
 	}
 	ecc->total = ecc->steps * ecc->bytes;
 	ecc->total = ecc->steps * ecc->bytes;
+	if (ecc->total > mtd->oobsize) {
+		WARN(1, "Total number of ECC bytes exceeded oobsize\n");
+		ret = -EINVAL;
+		goto err_free;
+	}
 
 
 	/*
 	/*
 	 * The number of bytes available for a client to place data into
 	 * The number of bytes available for a client to place data into

+ 222 - 0
drivers/mtd/nand/nand_micron.c

@@ -17,6 +17,12 @@
 
 
 #include <linux/mtd/nand.h>
 #include <linux/mtd/nand.h>
 
 
+/*
+ * Special Micron status bit that indicates when the block has been
+ * corrected by on-die ECC and should be rewritten
+ */
+#define NAND_STATUS_WRITE_RECOMMENDED	BIT(3)
+
 struct nand_onfi_vendor_micron {
 struct nand_onfi_vendor_micron {
 	u8 two_plane_read;
 	u8 two_plane_read;
 	u8 read_cache;
 	u8 read_cache;
@@ -66,9 +72,197 @@ static int micron_nand_onfi_init(struct nand_chip *chip)
 	return 0;
 	return 0;
 }
 }
 
 
+static int micron_nand_on_die_ooblayout_ecc(struct mtd_info *mtd, int section,
+					    struct mtd_oob_region *oobregion)
+{
+	if (section >= 4)
+		return -ERANGE;
+
+	oobregion->offset = (section * 16) + 8;
+	oobregion->length = 8;
+
+	return 0;
+}
+
+static int micron_nand_on_die_ooblayout_free(struct mtd_info *mtd, int section,
+					     struct mtd_oob_region *oobregion)
+{
+	if (section >= 4)
+		return -ERANGE;
+
+	oobregion->offset = (section * 16) + 2;
+	oobregion->length = 6;
+
+	return 0;
+}
+
+static const struct mtd_ooblayout_ops micron_nand_on_die_ooblayout_ops = {
+	.ecc = micron_nand_on_die_ooblayout_ecc,
+	.free = micron_nand_on_die_ooblayout_free,
+};
+
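On a typical 2 KiB page with 64 bytes of OOB this layout describes four 16-byte sections: bytes 0-1 of each section are left untouched (the factory bad-block marker sits at the start of the OOB), bytes 2-7 are reported as free for the upper layers, and bytes 8-15 carry the 8 on-die ECC bytes protecting that 512-byte step.
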
+static int micron_nand_on_die_ecc_setup(struct nand_chip *chip, bool enable)
+{
+	u8 feature[ONFI_SUBFEATURE_PARAM_LEN] = { 0, };
+
+	if (enable)
+		feature[0] |= ONFI_FEATURE_ON_DIE_ECC_EN;
+
+	return chip->onfi_set_features(nand_to_mtd(chip), chip,
+				       ONFI_FEATURE_ON_DIE_ECC, feature);
+}
+
+static int
+micron_nand_read_page_on_die_ecc(struct mtd_info *mtd, struct nand_chip *chip,
+				 uint8_t *buf, int oob_required,
+				 int page)
+{
+	int status;
+	int max_bitflips = 0;
+
+	micron_nand_on_die_ecc_setup(chip, true);
+
+	chip->cmdfunc(mtd, NAND_CMD_READ0, 0x00, page);
+	chip->cmdfunc(mtd, NAND_CMD_STATUS, -1, -1);
+	status = chip->read_byte(mtd);
+	if (status & NAND_STATUS_FAIL)
+		mtd->ecc_stats.failed++;
+	/*
+	 * The internal ECC doesn't tell us the number of bitflips that
+	 * have been corrected, only whether it recommends rewriting the
+	 * block. If that is the case, we pretend the number of bitflips
+	 * equals the ECC strength, which hints the NAND core to rewrite
+	 * the block.
+	 */
+	else if (status & NAND_STATUS_WRITE_RECOMMENDED)
+		max_bitflips = chip->ecc.strength;
+
+	chip->cmdfunc(mtd, NAND_CMD_READ0, -1, -1);
+
+	nand_read_page_raw(mtd, chip, buf, oob_required, page);
+
+	micron_nand_on_die_ecc_setup(chip, false);
+
+	return max_bitflips;
+}
+
+static int
+micron_nand_write_page_on_die_ecc(struct mtd_info *mtd, struct nand_chip *chip,
+				  const uint8_t *buf, int oob_required,
+				  int page)
+{
+	int status;
+
+	micron_nand_on_die_ecc_setup(chip, true);
+
+	chip->cmdfunc(mtd, NAND_CMD_SEQIN, 0x00, page);
+	nand_write_page_raw(mtd, chip, buf, oob_required, page);
+	chip->cmdfunc(mtd, NAND_CMD_PAGEPROG, -1, -1);
+	status = chip->waitfunc(mtd, chip);
+
+	micron_nand_on_die_ecc_setup(chip, false);
+
+	return status & NAND_STATUS_FAIL ? -EIO : 0;
+}
+
+static int
+micron_nand_read_page_raw_on_die_ecc(struct mtd_info *mtd,
+				     struct nand_chip *chip,
+				     uint8_t *buf, int oob_required,
+				     int page)
+{
+	chip->cmdfunc(mtd, NAND_CMD_READ0, 0x00, page);
+	nand_read_page_raw(mtd, chip, buf, oob_required, page);
+
+	return 0;
+}
+
+static int
+micron_nand_write_page_raw_on_die_ecc(struct mtd_info *mtd,
+				      struct nand_chip *chip,
+				      const uint8_t *buf, int oob_required,
+				      int page)
+{
+	int status;
+
+	chip->cmdfunc(mtd, NAND_CMD_SEQIN, 0x00, page);
+	nand_write_page_raw(mtd, chip, buf, oob_required, page);
+	chip->cmdfunc(mtd, NAND_CMD_PAGEPROG, -1, -1);
+	status = chip->waitfunc(mtd, chip);
+
+	return status & NAND_STATUS_FAIL ? -EIO : 0;
+}
+
+enum {
+	/* The NAND flash doesn't support on-die ECC */
+	MICRON_ON_DIE_UNSUPPORTED,
+
+	/*
+	 * The NAND flash supports on-die ECC and it can be
+	 * enabled/disabled by a set features command.
+	 */
+	MICRON_ON_DIE_SUPPORTED,
+
+	/*
+	 * The NAND flash supports on-die ECC, and it cannot be
+	 * disabled.
+	 */
+	MICRON_ON_DIE_MANDATORY,
+};
+
+/*
+ * Try to detect if the NAND supports on-die ECC. To do this, we enable
+ * the feature, and read back if it has been enabled as expected. We
+ * also check if it can be disabled, because some Micron NANDs do not
+ * allow disabling the on-die ECC and we don't support such NANDs for
+ * now.
+ *
+ * This function also has the side effect of disabling on-die ECC if
+ * it had been left enabled by the firmware/bootloader.
+ */
+static int micron_supports_on_die_ecc(struct nand_chip *chip)
+{
+	u8 feature[ONFI_SUBFEATURE_PARAM_LEN] = { 0, };
+	int ret;
+
+	if (chip->onfi_version == 0)
+		return MICRON_ON_DIE_UNSUPPORTED;
+
+	if (chip->bits_per_cell != 1)
+		return MICRON_ON_DIE_UNSUPPORTED;
+
+	ret = micron_nand_on_die_ecc_setup(chip, true);
+	if (ret)
+		return MICRON_ON_DIE_UNSUPPORTED;
+
+	chip->onfi_get_features(nand_to_mtd(chip), chip,
+				ONFI_FEATURE_ON_DIE_ECC, feature);
+	if ((feature[0] & ONFI_FEATURE_ON_DIE_ECC_EN) == 0)
+		return MICRON_ON_DIE_UNSUPPORTED;
+
+	ret = micron_nand_on_die_ecc_setup(chip, false);
+	if (ret)
+		return MICRON_ON_DIE_UNSUPPORTED;
+
+	chip->onfi_get_features(nand_to_mtd(chip), chip,
+				ONFI_FEATURE_ON_DIE_ECC, feature);
+	if (feature[0] & ONFI_FEATURE_ON_DIE_ECC_EN)
+		return MICRON_ON_DIE_MANDATORY;
+
+	/*
+	 * Some Micron NANDs have an on-die ECC of 4/512, some other
+	 * 8/512. We only support the former.
+	 */
+	if (chip->onfi_params.ecc_bits != 4)
+		return MICRON_ON_DIE_UNSUPPORTED;
+
+	return MICRON_ON_DIE_SUPPORTED;
+}
+
 static int micron_nand_init(struct nand_chip *chip)
 static int micron_nand_init(struct nand_chip *chip)
 {
 {
 	struct mtd_info *mtd = nand_to_mtd(chip);
 	struct mtd_info *mtd = nand_to_mtd(chip);
+	int ondie;
 	int ret;
 	int ret;
 
 
 	ret = micron_nand_onfi_init(chip);
 	ret = micron_nand_onfi_init(chip);
@@ -78,6 +272,34 @@ static int micron_nand_init(struct nand_chip *chip)
 	if (mtd->writesize == 2048)
 	if (mtd->writesize == 2048)
 		chip->bbt_options |= NAND_BBT_SCAN2NDPAGE;
 		chip->bbt_options |= NAND_BBT_SCAN2NDPAGE;
 
 
+	ondie = micron_supports_on_die_ecc(chip);
+
+	if (ondie == MICRON_ON_DIE_MANDATORY) {
+		pr_err("On-die ECC forcefully enabled, not supported\n");
+		return -EINVAL;
+	}
+
+	if (chip->ecc.mode == NAND_ECC_ON_DIE) {
+		if (ondie == MICRON_ON_DIE_UNSUPPORTED) {
+			pr_err("On-die ECC selected but not supported\n");
+			return -EINVAL;
+		}
+
+		chip->ecc.options = NAND_ECC_CUSTOM_PAGE_ACCESS;
+		chip->ecc.bytes = 8;
+		chip->ecc.size = 512;
+		chip->ecc.strength = 4;
+		chip->ecc.algo = NAND_ECC_BCH;
+		chip->ecc.read_page = micron_nand_read_page_on_die_ecc;
+		chip->ecc.write_page = micron_nand_write_page_on_die_ecc;
+		chip->ecc.read_page_raw =
+			micron_nand_read_page_raw_on_die_ecc;
+		chip->ecc.write_page_raw =
+			micron_nand_write_page_raw_on_die_ecc;
+
+		mtd_set_ooblayout(mtd, &micron_nand_on_die_ooblayout_ops);
+	}
+
 	return 0;
 	return 0;
 }
 }
 
 

+ 5 - 1
drivers/mtd/nand/orion_nand.c

@@ -166,7 +166,11 @@ static int __init orion_nand_probe(struct platform_device *pdev)
 		}
 		}
 	}
 	}
 
 
-	clk_prepare_enable(info->clk);
+	ret = clk_prepare_enable(info->clk);
+	if (ret) {
+		dev_err(&pdev->dev, "failed to prepare clock!\n");
+		return ret;
+	}
 
 
 	ret = nand_scan(mtd, 1);
 	ret = nand_scan(mtd, 1);
 	if (ret)
 	if (ret)

+ 2 - 0
drivers/mtd/nand/pxa3xx_nand.c

@@ -1812,6 +1812,8 @@ static int alloc_nand_resource(struct platform_device *pdev)
 		chip->write_buf		= pxa3xx_nand_write_buf;
 		chip->write_buf		= pxa3xx_nand_write_buf;
 		chip->options		|= NAND_NO_SUBPAGE_WRITE;
 		chip->options		|= NAND_NO_SUBPAGE_WRITE;
 		chip->cmdfunc		= nand_cmdfunc;
 		chip->cmdfunc		= nand_cmdfunc;
+		chip->onfi_set_features	= nand_onfi_get_set_features_notsupp;
+		chip->onfi_get_features	= nand_onfi_get_set_features_notsupp;
 	}
 	}
 
 
 	nand_hw_control_init(chip->controller);
 	nand_hw_control_init(chip->controller);

+ 2 - 0
drivers/mtd/nand/qcom_nandc.c

@@ -2008,6 +2008,8 @@ static int qcom_nand_host_init(struct qcom_nand_controller *nandc,
 	chip->read_byte		= qcom_nandc_read_byte;
 	chip->read_byte		= qcom_nandc_read_byte;
 	chip->read_buf		= qcom_nandc_read_buf;
 	chip->read_buf		= qcom_nandc_read_buf;
 	chip->write_buf		= qcom_nandc_write_buf;
 	chip->write_buf		= qcom_nandc_write_buf;
+	chip->onfi_set_features	= nand_onfi_get_set_features_notsupp;
+	chip->onfi_get_features	= nand_onfi_get_set_features_notsupp;
 
 
 	/*
 	/*
 	 * the bad block marker is readable only when we read the last codeword
 	 * the bad block marker is readable only when we read the last codeword

+ 2 - 3
drivers/mtd/nand/s3c2410.c

@@ -812,9 +812,8 @@ static int s3c2410_nand_add_partition(struct s3c2410_nand_info *info,
 	return -ENODEV;
 	return -ENODEV;
 }
 }
 
 
-static int s3c2410_nand_setup_data_interface(struct mtd_info *mtd,
-					const struct nand_data_interface *conf,
-					bool check_only)
+static int s3c2410_nand_setup_data_interface(struct mtd_info *mtd, int csline,
+					const struct nand_data_interface *conf)
 {
 {
 	struct s3c2410_nand_info *info = s3c2410_nand_mtd_toinfo(mtd);
 	struct s3c2410_nand_info *info = s3c2410_nand_mtd_toinfo(mtd);
 	struct s3c2410_platform_nand *pdata = info->platform;
 	struct s3c2410_platform_nand *pdata = info->platform;

+ 2 - 0
drivers/mtd/nand/sh_flctl.c

@@ -1183,6 +1183,8 @@ static int flctl_probe(struct platform_device *pdev)
 	nand->read_buf = flctl_read_buf;
 	nand->read_buf = flctl_read_buf;
 	nand->select_chip = flctl_select_chip;
 	nand->select_chip = flctl_select_chip;
 	nand->cmdfunc = flctl_cmdfunc;
 	nand->cmdfunc = flctl_cmdfunc;
+	nand->onfi_set_features = nand_onfi_get_set_features_notsupp;
+	nand->onfi_get_features = nand_onfi_get_set_features_notsupp;
 
 
 	if (pdata->flcmncr_val & SEL_16BIT)
 	if (pdata->flcmncr_val & SEL_16BIT)
 		nand->options |= NAND_BUSWIDTH_16;
 		nand->options |= NAND_BUSWIDTH_16;

+ 3 - 6
drivers/mtd/nand/sunxi_nand.c

@@ -1301,7 +1301,6 @@ static int sunxi_nfc_hw_ecc_read_subpage(struct mtd_info *mtd,
 
 	sunxi_nfc_hw_ecc_enable(mtd);
 
-	chip->cmdfunc(mtd, NAND_CMD_READ0, 0, page);
 	for (i = data_offs / ecc->size;
 	     i < DIV_ROUND_UP(data_offs + readlen, ecc->size); i++) {
 		int data_off = i * ecc->size;
@@ -1592,9 +1591,8 @@ static int _sunxi_nand_lookup_timing(const s32 *lut, int lut_size, u32 duration,
 #define sunxi_nand_lookup_timing(l, p, c) \
 			_sunxi_nand_lookup_timing(l, ARRAY_SIZE(l), p, c)
 
-static int sunxi_nfc_setup_data_interface(struct mtd_info *mtd,
-					const struct nand_data_interface *conf,
-					bool check_only)
+static int sunxi_nfc_setup_data_interface(struct mtd_info *mtd, int csline,
+					const struct nand_data_interface *conf)
 {
 	struct nand_chip *nand = mtd_to_nand(mtd);
 	struct sunxi_nand_chip *chip = to_sunxi_nand(nand);
@@ -1707,7 +1705,7 @@ static int sunxi_nfc_setup_data_interface(struct mtd_info *mtd,
 		return tRHW;
 	}
 
-	if (check_only)
+	if (csline == NAND_DATA_IFACE_CHECK_ONLY)
 		return 0;
 
 	/*
@@ -1922,7 +1920,6 @@ static int sunxi_nand_hw_ecc_ctrl_init(struct mtd_info *mtd,
 	ecc->write_subpage = sunxi_nfc_hw_ecc_write_subpage;
 	ecc->read_oob_raw = nand_read_oob_std;
 	ecc->write_oob_raw = nand_write_oob_std;
-	ecc->read_subpage = sunxi_nfc_hw_ecc_read_subpage;
 
 	return 0;
 }

+ 16 - 6
drivers/mtd/nand/tango_nand.c

@@ -303,7 +303,7 @@ static int tango_write_page(struct mtd_info *mtd, struct nand_chip *chip,
 			    const u8 *buf, int oob_required, int page)
 {
 	struct tango_nfc *nfc = to_tango_nfc(chip->controller);
-	int err, len = mtd->writesize;
+	int err, status, len = mtd->writesize;
 
 	/* Calling tango_write_oob() would send PAGEPROG twice */
 	if (oob_required)
@@ -314,6 +314,10 @@ static int tango_write_page(struct mtd_info *mtd, struct nand_chip *chip,
 	if (err)
 		return err;
 
+	status = chip->waitfunc(mtd, chip);
+	if (status & NAND_STATUS_FAIL)
+		return -EIO;
+
 	return 0;
 }
 
@@ -340,7 +344,7 @@ static void aux_write(struct nand_chip *chip, const u8 **buf, int len, int *pos)
 
 	if (!*buf) {
 		/* skip over "len" bytes */
-		chip->cmdfunc(mtd, NAND_CMD_SEQIN, *pos, -1);
+		chip->cmdfunc(mtd, NAND_CMD_RNDIN, *pos, -1);
 	} else {
 		tango_write_buf(mtd, *buf, len);
 		*buf += len;
@@ -431,9 +435,16 @@ static int tango_read_page_raw(struct mtd_info *mtd, struct nand_chip *chip,
 static int tango_write_page_raw(struct mtd_info *mtd, struct nand_chip *chip,
 				const u8 *buf, int oob_required, int page)
 {
+	int status;
+
 	chip->cmdfunc(mtd, NAND_CMD_SEQIN, 0, page);
 	raw_write(chip, buf, chip->oob_poi);
 	chip->cmdfunc(mtd, NAND_CMD_PAGEPROG, -1, -1);
+
+	status = chip->waitfunc(mtd, chip);
+	if (status & NAND_STATUS_FAIL)
+		return -EIO;
+
 	return 0;
 }
 
@@ -484,9 +495,8 @@ static u32 to_ticks(int kHz, int ps)
 	return DIV_ROUND_UP_ULL((u64)kHz * ps, NSEC_PER_SEC);
 }
 
-static int tango_set_timings(struct mtd_info *mtd,
-			     const struct nand_data_interface *conf,
-			     bool check_only)
+static int tango_set_timings(struct mtd_info *mtd, int csline,
+			     const struct nand_data_interface *conf)
 {
 	const struct nand_sdr_timings *sdr = nand_get_sdr_timings(conf);
 	struct nand_chip *chip = mtd_to_nand(mtd);
@@ -498,7 +508,7 @@ static int tango_set_timings(struct mtd_info *mtd,
 	if (IS_ERR(sdr))
 		return PTR_ERR(sdr);
 
-	if (check_only)
+	if (csline == NAND_DATA_IFACE_CHECK_ONLY)
 		return 0;
 
 	Trdy = to_ticks(kHz, sdr->tCEA_max - sdr->tREA_max);

+ 2 - 0
drivers/mtd/nand/vf610_nfc.c

@@ -703,6 +703,8 @@ static int vf610_nfc_probe(struct platform_device *pdev)
 	chip->read_buf = vf610_nfc_read_buf;
 	chip->write_buf = vf610_nfc_write_buf;
 	chip->select_chip = vf610_nfc_select_chip;
+	chip->onfi_set_features = nand_onfi_get_set_features_notsupp;
+	chip->onfi_get_features = nand_onfi_get_set_features_notsupp;
 
 	chip->options |= NAND_NO_SUBPAGE_WRITE;
 

+ 2 - 0
drivers/staging/mt29f_spinand/mt29f_spinand.c

@@ -915,6 +915,8 @@ static int spinand_probe(struct spi_device *spi_nand)
 	chip->waitfunc	= spinand_wait;
 	chip->options	|= NAND_CACHEPRG;
 	chip->select_chip = spinand_select_chip;
+	chip->onfi_set_features = nand_onfi_get_set_features_notsupp;
+	chip->onfi_get_features = nand_onfi_get_set_features_notsupp;
 
 	mtd = nand_to_mtd(chip);
 

+ 71 - 9
include/linux/mtd/nand.h

@@ -107,6 +107,8 @@ int nand_unlock(struct mtd_info *mtd, loff_t ofs, uint64_t len);
 #define NAND_STATUS_READY	0x40
 #define NAND_STATUS_WP		0x80
 
+#define NAND_DATA_IFACE_CHECK_ONLY	-1
+
 /*
  * Constants for ECC_MODES
  */
@@ -116,6 +118,7 @@ typedef enum {
 	NAND_ECC_HW,
 	NAND_ECC_HW_SYNDROME,
 	NAND_ECC_HW_OOB_FIRST,
+	NAND_ECC_ON_DIE,
 } nand_ecc_modes_t;
 
 enum nand_ecc_algo {
@@ -257,6 +260,8 @@ struct nand_chip;
 
 /* Vendor-specific feature address (Micron) */
 #define ONFI_FEATURE_ADDR_READ_RETRY	0x89
+#define ONFI_FEATURE_ON_DIE_ECC		0x90
+#define   ONFI_FEATURE_ON_DIE_ECC_EN	BIT(3)
 
 /* ONFI subfeature parameters length */
 #define ONFI_SUBFEATURE_PARAM_LEN	4
@@ -476,6 +481,44 @@ static inline void nand_hw_control_init(struct nand_hw_control *nfc)
 	init_waitqueue_head(&nfc->wq);
 }
 
+/**
+ * struct nand_ecc_step_info - ECC step information of ECC engine
+ * @stepsize: data bytes per ECC step
+ * @strengths: array of supported strengths
+ * @nstrengths: number of supported strengths
+ */
+struct nand_ecc_step_info {
+	int stepsize;
+	const int *strengths;
+	int nstrengths;
+};
+
+/**
+ * struct nand_ecc_caps - capability of ECC engine
+ * @stepinfos: array of ECC step information
+ * @nstepinfos: number of ECC step information
+ * @calc_ecc_bytes: driver's hook to calculate ECC bytes per step
+ */
+struct nand_ecc_caps {
+	const struct nand_ecc_step_info *stepinfos;
+	int nstepinfos;
+	int (*calc_ecc_bytes)(int step_size, int strength);
+};
+
+/* a shorthand to generate struct nand_ecc_caps with only one ECC stepsize */
+#define NAND_ECC_CAPS_SINGLE(__name, __calc, __step, ...)	\
+static const int __name##_strengths[] = { __VA_ARGS__ };	\
+static const struct nand_ecc_step_info __name##_stepinfo = {	\
+	.stepsize = __step,					\
+	.strengths = __name##_strengths,			\
+	.nstrengths = ARRAY_SIZE(__name##_strengths),		\
+};								\
+static const struct nand_ecc_caps __name = {			\
+	.stepinfos = &__name##_stepinfo,			\
+	.nstepinfos = 1,					\
+	.calc_ecc_bytes = __calc,				\
+}
+
 /**
  * struct nand_ecc_ctrl - Control structure for ECC
  * @mode:	ECC mode
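The NAND_ECC_CAPS_SINGLE() shorthand above targets controllers whose ECC engine supports only one step size. As a hedged illustration (the driver prefix, strengths and byte-count formula below are assumptions, not taken from this series), a driver could describe an engine handling 512-byte steps at 8 or 16 bits of strength with:

/* Illustrative BCH byte count per step: strength * m bits, m = fls(8 * step_size) */
static int foo_calc_ecc_bytes(int step_size, int strength)
{
	return DIV_ROUND_UP(strength * fls(8 * step_size), 8);
}

NAND_ECC_CAPS_SINGLE(foo_ecc_caps, foo_calc_ecc_bytes, 512, 8, 16);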
@@ -815,7 +858,10 @@ struct nand_manufacturer_ops {
  * @read_retries:	[INTERN] the number of read retry modes supported
  * @onfi_set_features:	[REPLACEABLE] set the features for ONFI nand
  * @onfi_get_features:	[REPLACEABLE] get the features for ONFI nand
- * @setup_data_interface: [OPTIONAL] setup the data interface and timing
+ * @setup_data_interface: [OPTIONAL] setup the data interface and timing. If
+ *			  chipnr is set to %NAND_DATA_IFACE_CHECK_ONLY this
+ *			  means the configuration should not be applied but
+ *			  only checked.
  * @bbt:		[INTERN] bad block table pointer
  * @bbt_td:		[REPLACEABLE] bad block table descriptor for flash
  *			lookup.
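With the new prototype, a driver distinguishes a dry-run check from an actual configuration by testing the chip-select argument, as the sunxi and tango conversions above do. A minimal sketch of the contract (the foo_ prefix and the register-programming step are illustrative):

static int foo_setup_data_interface(struct mtd_info *mtd, int csline,
				    const struct nand_data_interface *conf)
{
	const struct nand_sdr_timings *sdr = nand_get_sdr_timings(conf);

	/* reject timings the controller cannot express */
	if (IS_ERR(sdr))
		return PTR_ERR(sdr);

	/* only validate the configuration, do not touch the hardware */
	if (csline == NAND_DATA_IFACE_CHECK_ONLY)
		return 0;

	/* ... program the timing registers for chip select 'csline' ... */
	return 0;
}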
@@ -826,9 +872,6 @@ struct nand_manufacturer_ops {
  *			structure which is shared among multiple independent
  *			devices.
  * @priv:		[OPTIONAL] pointer to private chip data
- * @errstat:		[OPTIONAL] hardware specific function to perform
- *			additional error status checks (determine if errors are
- *			correctable).
  * @manufacturer:	[INTERN] Contains manufacturer information
  */
 
@@ -852,16 +895,13 @@ struct nand_chip {
 	int(*waitfunc)(struct mtd_info *mtd, struct nand_chip *this);
 	int (*erase)(struct mtd_info *mtd, int page);
 	int (*scan_bbt)(struct mtd_info *mtd);
-	int (*errstat)(struct mtd_info *mtd, struct nand_chip *this, int state,
-			int status, int page);
 	int (*onfi_set_features)(struct mtd_info *mtd, struct nand_chip *chip,
 			int feature_addr, uint8_t *subfeature_para);
 	int (*onfi_get_features)(struct mtd_info *mtd, struct nand_chip *chip,
 			int feature_addr, uint8_t *subfeature_para);
 	int (*setup_read_retry)(struct mtd_info *mtd, int retry_mode);
-	int (*setup_data_interface)(struct mtd_info *mtd,
-				    const struct nand_data_interface *conf,
-				    bool check_only);
+	int (*setup_data_interface)(struct mtd_info *mtd, int chipnr,
+				    const struct nand_data_interface *conf);
 
 
 	int chip_delay;
@@ -1244,6 +1284,15 @@ int nand_check_erased_ecc_chunk(void *data, int datalen,
 				void *extraoob, int extraooblen,
 				int threshold);
 
+int nand_check_ecc_caps(struct nand_chip *chip,
+			const struct nand_ecc_caps *caps, int oobavail);
+
+int nand_match_ecc_req(struct nand_chip *chip,
+		       const struct nand_ecc_caps *caps,  int oobavail);
+
+int nand_maximize_ecc(struct nand_chip *chip,
+		      const struct nand_ecc_caps *caps, int oobavail);
+
 /* Default write_oob implementation */
 int nand_write_oob_std(struct mtd_info *mtd, struct nand_chip *chip, int page);
 
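These three helpers cover the usual ways a controller driver can pick ECC settings from its capabilities: validating explicit settings (nand_check_ecc_caps), matching the chip's advertised requirement (nand_match_ecc_req), or taking the strongest configuration that fits in the available OOB (nand_maximize_ecc). A hedged sketch of the expected call pattern, reusing the illustrative foo_ecc_caps from the earlier sketch (the selection policy shown is an assumption, not a rule imposed by the core):

static int foo_ecc_setup(struct nand_chip *chip, int oobavail)
{
	/* explicit settings (e.g. from DT): just validate them against the engine */
	if (chip->ecc.size && chip->ecc.strength)
		return nand_check_ecc_caps(chip, &foo_ecc_caps, oobavail);

	/* otherwise try to satisfy the requirement advertised by the chip ... */
	if (!nand_match_ecc_req(chip, &foo_ecc_caps, oobavail))
		return 0;

	/* ... and fall back to the strongest setting that fits in the OOB area */
	return nand_maximize_ecc(chip, &foo_ecc_caps, oobavail);
}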
@@ -1258,6 +1307,19 @@ int nand_read_oob_std(struct mtd_info *mtd, struct nand_chip *chip, int page);
 int nand_read_oob_syndrome(struct mtd_info *mtd, struct nand_chip *chip,
 			   int page);
 
+/* Stub used by drivers that do not support GET/SET FEATURES operations */
+int nand_onfi_get_set_features_notsupp(struct mtd_info *mtd,
+				       struct nand_chip *chip, int addr,
+				       u8 *subfeature_param);
+
+/* Default read_page_raw implementation */
+int nand_read_page_raw(struct mtd_info *mtd, struct nand_chip *chip,
+		       uint8_t *buf, int oob_required, int page);
+
+/* Default write_page_raw implementation */
+int nand_write_page_raw(struct mtd_info *mtd, struct nand_chip *chip,
+			const uint8_t *buf, int oob_required, int page);
+
 /* Reset and initialize a NAND device */
 int nand_reset(struct nand_chip *chip, int chipnr);