Merge branch 'pci/msi' into next

* pci/msi:
  PCI/MSI: Update MSI/MSI-X bits in PCIEBUS-HOWTO
  PCI/MSI: Document pci_alloc_irq_vectors(), deprecate pci_enable_msi()
  PCI/MSI: Return -ENOSPC if pci_enable_msi_range() can't get enough vectors
  PCI/portdrv: Use pci_irq_alloc_vectors()
  PCI/MSI: Check that we have a legacy interrupt line before using it
  PCI/MSI: Remove pci_msi_domain_{alloc,free}_irqs()
  PCI/MSI: Remove unused pci_msi_create_default_irq_domain()
  PCI/MSI: Return failure when msix_setup_entries() fails
  PCI/MSI: Remove pci_enable_msi_{exact,range}()
  amd-xgbe: Update PCI support to use new IRQ functions
  [media] cobalt: use pci_irq_allocate_vectors()
  PCI/MSI: Fix msi_capability_init() kernel-doc warnings
Bjorn Helgaas committed 8 years ago
commit 3ec2574e31

+ 2 - 4
Documentation/PCI/MSI-HOWTO.txt

@@ -162,8 +162,6 @@ The following old APIs to enable and disable MSI or MSI-X interrupts should
 not be used in new code:

   pci_enable_msi()		/* deprecated */
-  pci_enable_msi_range()	/* deprecated */
-  pci_enable_msi_exact()	/* deprecated */
   pci_disable_msi()		/* deprecated */
   pci_enable_msix_range()	/* deprecated */
   pci_enable_msix_exact()	/* deprecated */
@@ -268,5 +266,5 @@ or disabled (0).  If 0 is found in any of the msi_bus files belonging
 to bridges between the PCI root and the device, MSIs are disabled.

 It is also worth checking the device driver to see whether it supports MSIs.
-For example, it may contain calls to pci_enable_msi_range() or
-pci_enable_msix_range().
+For example, it may contain calls to pci_irq_alloc_vectors() with the
+PCI_IRQ_MSI or PCI_IRQ_MSIX flags.
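
A rough sketch of the replacement pattern (not taken from the tree; the
foo_* names and the 1..8 vector range are made up) is a single call to
pci_alloc_irq_vectors() paired with pci_free_irq_vectors() on teardown:

#include <linux/pci.h>

static int foo_init_irqs(struct pci_dev *pdev)
{
	int nvec;

	/* ask for between 1 and 8 vectors, MSI or MSI-X only */
	nvec = pci_alloc_irq_vectors(pdev, 1, 8, PCI_IRQ_MSI | PCI_IRQ_MSIX);
	if (nvec < 0)
		return nvec;	/* e.g. -ENOSPC if not even one is available */

	/* the Linux IRQ number of vector i is pci_irq_vector(pdev, i) */
	return nvec;
}

static void foo_free_irqs(struct pci_dev *pdev)
{
	pci_free_irq_vectors(pdev);
}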

+ 7 - 26
Documentation/PCI/PCIEBUS-HOWTO.txt

@@ -161,21 +161,13 @@ Since all service drivers of a PCI-PCI Bridge Port device are
 allowed to run simultaneously, below lists a few of possible resource
 conflicts with proposed solutions.

-6.1 MSI Vector Resource
-
-The MSI capability structure enables a device software driver to call
-pci_enable_msi to request MSI based interrupts. Once MSI interrupts
-are enabled on a device, it stays in this mode until a device driver
-calls pci_disable_msi to disable MSI interrupts and revert back to
-INTx emulation mode. Since service drivers of the same PCI-PCI Bridge
-port share the same physical device, if an individual service driver
-calls pci_enable_msi/pci_disable_msi it may result unpredictable
-behavior. For example, two service drivers run simultaneously on the
-same physical Root Port. Both service drivers call pci_enable_msi to
-request MSI based interrupts. A service driver may not know whether
-any other service drivers have run on this Root Port. If either one
-of them calls pci_disable_msi, it puts the other service driver
-in a wrong interrupt mode.
+6.1 MSI and MSI-X Vector Resource
+
+Once MSI or MSI-X interrupts are enabled on a device, it stays in this
+mode until they are disabled again.  Since service drivers of the same
+PCI-PCI Bridge port share the same physical device, if an individual
+service driver enables or disables MSI/MSI-X mode it may result
+unpredictable behavior.

 To avoid this situation all service drivers are not permitted to
 switch interrupt mode on its device. The PCI Express Port Bus driver
@@ -187,17 +179,6 @@ driver. Service drivers should use (struct pcie_device*)dev->irq to
 call request_irq/free_irq. In addition, the interrupt mode is stored
 in the field interrupt_mode of struct pcie_device.
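
A minimal sketch of that usage from the service driver's side (the foo_*
names are hypothetical, not an existing port service driver):

#include <linux/interrupt.h>
#include <linux/pcieport_if.h>

static irqreturn_t foo_service_isr(int irq, void *data)
{
	/* acknowledge and handle the port service event here */
	return IRQ_HANDLED;
}

static int foo_service_probe(struct pcie_device *dev)
{
	/* reuse the vector the Port Bus driver already set up */
	return request_irq(dev->irq, foo_service_isr, IRQF_SHARED,
			   "foo_service", dev);
}

static void foo_service_remove(struct pcie_device *dev)
{
	free_irq(dev->irq, dev);
}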
 
 
-6.2 MSI-X Vector Resources
-
-Similar to the MSI a device driver for an MSI-X capable device can
-call pci_enable_msix to request MSI-X interrupts. All service drivers
-are not permitted to switch interrupt mode on its device. The PCI
-Express Port Bus driver is responsible for determining the interrupt
-mode and this should be transparent to service drivers. Any attempt
-by service driver to call pci_enable_msix/pci_disable_msix may
-result unpredictable behavior. Service drivers should use
-(struct pcie_device*)dev->irq and call request_irq/free_irq.
-
 6.3 PCI Memory/IO Mapped Regions

 Service drivers for PCI Express Power Management (PME), Advanced

+ 12 - 12
Documentation/PCI/pci.txt

@@ -382,18 +382,18 @@ The fundamental difference between MSI and MSI-X is how multiple
 "vectors" get allocated. MSI requires contiguous blocks of vectors
 while MSI-X can allocate several individual ones.

-MSI capability can be enabled by calling pci_enable_msi() or
-pci_enable_msix() before calling request_irq(). This causes
-the PCI support to program CPU vector data into the PCI device
-capability registers.
-
-If your PCI device supports both, try to enable MSI-X first.
-Only one can be enabled at a time.  Many architectures, chip-sets,
-or BIOSes do NOT support MSI or MSI-X and the call to pci_enable_msi/msix
-will fail. This is important to note since many drivers have
-two (or more) interrupt handlers: one for MSI/MSI-X and another for IRQs.
-They choose which handler to register with request_irq() based on the
-return value from pci_enable_msi/msix().
+MSI capability can be enabled by calling pci_alloc_irq_vectors() with the
+PCI_IRQ_MSI and/or PCI_IRQ_MSIX flags before calling request_irq(). This
+causes the PCI support to program CPU vector data into the PCI device
+capability registers. Many architectures, chip-sets, or BIOSes do NOT
+support MSI or MSI-X and a call to pci_alloc_irq_vectors with just
+the PCI_IRQ_MSI and PCI_IRQ_MSIX flags will fail, so try to always
+specify PCI_IRQ_LEGACY as well.
+
+Drivers that have different interrupt handlers for MSI/MSI-X and
+legacy INTx should chose the right one based on the msi_enabled
+and msix_enabled flags in the pci_dev structure after calling
+pci_alloc_irq_vectors.
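
A short sketch of that pattern for a single vector (hypothetical foo_*
names, not from any in-tree driver):

#include <linux/interrupt.h>
#include <linux/pci.h>

static irqreturn_t foo_msi_handler(int irq, void *data)
{
	return IRQ_HANDLED;
}

static irqreturn_t foo_intx_handler(int irq, void *data)
{
	/* with INTx the line may be shared, so check the device first */
	return IRQ_HANDLED;
}

static int foo_request_irq(struct pci_dev *pdev, void *priv)
{
	irq_handler_t handler;
	unsigned long flags = 0;
	int ret;

	/* try MSI-X, then MSI, then fall back to a legacy INTx vector */
	ret = pci_alloc_irq_vectors(pdev, 1, 1,
			PCI_IRQ_LEGACY | PCI_IRQ_MSI | PCI_IRQ_MSIX);
	if (ret < 0)
		return ret;

	if (pdev->msi_enabled || pdev->msix_enabled) {
		handler = foo_msi_handler;
	} else {
		handler = foo_intx_handler;
		flags = IRQF_SHARED;
	}

	return request_irq(pci_irq_vector(pdev, 0), handler, flags,
			   "foo", priv);
}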
 
 
 There are (at least) two really good reasons for using MSI:
 1) MSI is an exclusive interrupt vector by definition.

+ 1 - 1
arch/x86/kernel/apic/msi.c

@@ -82,7 +82,7 @@ int native_setup_msi_irqs(struct pci_dev *dev, int nvec, int type)
 	if (domain == NULL)
 		return -ENOSYS;

-	return pci_msi_domain_alloc_irqs(domain, dev, nvec, type);
+	return msi_domain_alloc_irqs(domain, &dev->dev, nvec);
 }

 void native_teardown_msi_irq(unsigned int irq)

+ 2 - 6
drivers/media/pci/cobalt/cobalt-driver.c

@@ -308,9 +308,7 @@ static void cobalt_pci_iounmap(struct cobalt *cobalt, struct pci_dev *pci_dev)
 static void cobalt_free_msi(struct cobalt *cobalt, struct pci_dev *pci_dev)
 {
 	free_irq(pci_dev->irq, (void *)cobalt);
-
-	if (cobalt->msi_enabled)
-		pci_disable_msi(pci_dev);
+	pci_free_irq_vectors(pci_dev);
 }

 static int cobalt_setup_pci(struct cobalt *cobalt, struct pci_dev *pci_dev,
@@ -387,14 +385,12 @@ static int cobalt_setup_pci(struct cobalt *cobalt, struct pci_dev *pci_dev,
 	   from being generated. */
 	cobalt_set_interrupt(cobalt, false);

-	if (pci_enable_msi_range(pci_dev, 1, 1) < 1) {
+	if (pci_alloc_irq_vectors(pci_dev, 1, 1, PCI_IRQ_MSI) < 1) {
 		cobalt_err("Could not enable MSI\n");
-		cobalt->msi_enabled = false;
 		ret = -EIO;
 		goto err_release;
 	}
 	msi_config_show(cobalt, pci_dev);
-	cobalt->msi_enabled = true;

 	/* Register IRQ */
 	if (request_irq(pci_dev->irq, cobalt_irq_handler, IRQF_SHARED,

+ 0 - 2
drivers/media/pci/cobalt/cobalt-driver.h

@@ -287,8 +287,6 @@ struct cobalt {
 	u32 irq_none;
 	u32 irq_full_fifo;

-	bool msi_enabled;
-
 	/* omnitek dma */
 	int dma_channels;
 	int first_fifo_channel;

+ 38 - 90
drivers/net/ethernet/amd/xgbe/xgbe-pci.c

@@ -122,104 +122,40 @@
 #include "xgbe.h"
 #include "xgbe-common.h"

-static int xgbe_config_msi(struct xgbe_prv_data *pdata)
+static int xgbe_config_multi_msi(struct xgbe_prv_data *pdata)
 {
-	unsigned int msi_count;
+	unsigned int vector_count;
 	unsigned int i, j;
 	int ret;

-	msi_count = XGBE_MSIX_BASE_COUNT;
-	msi_count += max(pdata->rx_ring_count,
-			 pdata->tx_ring_count);
-	msi_count = roundup_pow_of_two(msi_count);
+	vector_count = XGBE_MSI_BASE_COUNT;
+	vector_count += max(pdata->rx_ring_count,
+			    pdata->tx_ring_count);

-	ret = pci_enable_msi_exact(pdata->pcidev, msi_count);
+	ret = pci_alloc_irq_vectors(pdata->pcidev, XGBE_MSI_MIN_COUNT,
+				    vector_count, PCI_IRQ_MSI | PCI_IRQ_MSIX);
 	if (ret < 0) {
-		dev_info(pdata->dev, "MSI request for %u interrupts failed\n",
-			 msi_count);
-
-		ret = pci_enable_msi(pdata->pcidev);
-		if (ret < 0) {
-			dev_info(pdata->dev, "MSI enablement failed\n");
-			return ret;
-		}
-
-		msi_count = 1;
-	}
-
-	pdata->irq_count = msi_count;
-
-	pdata->dev_irq = pdata->pcidev->irq;
-
-	if (msi_count > 1) {
-		pdata->ecc_irq = pdata->pcidev->irq + 1;
-		pdata->i2c_irq = pdata->pcidev->irq + 2;
-		pdata->an_irq = pdata->pcidev->irq + 3;
-
-		for (i = XGBE_MSIX_BASE_COUNT, j = 0;
-		     (i < msi_count) && (j < XGBE_MAX_DMA_CHANNELS);
-		     i++, j++)
-			pdata->channel_irq[j] = pdata->pcidev->irq + i;
-		pdata->channel_irq_count = j;
-
-		pdata->per_channel_irq = 1;
-		pdata->channel_irq_mode = XGBE_IRQ_MODE_LEVEL;
-	} else {
-		pdata->ecc_irq = pdata->pcidev->irq;
-		pdata->i2c_irq = pdata->pcidev->irq;
-		pdata->an_irq = pdata->pcidev->irq;
-	}
-
-	if (netif_msg_probe(pdata))
-		dev_dbg(pdata->dev, "MSI interrupts enabled\n");
-
-	return 0;
-}
-
-static int xgbe_config_msix(struct xgbe_prv_data *pdata)
-{
-	unsigned int msix_count;
-	unsigned int i, j;
-	int ret;
-
-	msix_count = XGBE_MSIX_BASE_COUNT;
-	msix_count += max(pdata->rx_ring_count,
-			  pdata->tx_ring_count);
-
-	pdata->msix_entries = devm_kcalloc(pdata->dev, msix_count,
-					   sizeof(struct msix_entry),
-					   GFP_KERNEL);
-	if (!pdata->msix_entries)
-		return -ENOMEM;
-
-	for (i = 0; i < msix_count; i++)
-		pdata->msix_entries[i].entry = i;
-
-	ret = pci_enable_msix_range(pdata->pcidev, pdata->msix_entries,
-				    XGBE_MSIX_MIN_COUNT, msix_count);
-	if (ret < 0) {
-		dev_info(pdata->dev, "MSI-X enablement failed\n");
-		devm_kfree(pdata->dev, pdata->msix_entries);
-		pdata->msix_entries = NULL;
+		dev_info(pdata->dev, "multi MSI/MSI-X enablement failed\n");
 		return ret;
 	}

 	pdata->irq_count = ret;

-	pdata->dev_irq = pdata->msix_entries[0].vector;
-	pdata->ecc_irq = pdata->msix_entries[1].vector;
-	pdata->i2c_irq = pdata->msix_entries[2].vector;
-	pdata->an_irq = pdata->msix_entries[3].vector;
+	pdata->dev_irq = pci_irq_vector(pdata->pcidev, 0);
+	pdata->ecc_irq = pci_irq_vector(pdata->pcidev, 1);
+	pdata->i2c_irq = pci_irq_vector(pdata->pcidev, 2);
+	pdata->an_irq = pci_irq_vector(pdata->pcidev, 3);

-	for (i = XGBE_MSIX_BASE_COUNT, j = 0; i < ret; i++, j++)
-		pdata->channel_irq[j] = pdata->msix_entries[i].vector;
+	for (i = XGBE_MSI_BASE_COUNT, j = 0; i < ret; i++, j++)
+		pdata->channel_irq[j] = pci_irq_vector(pdata->pcidev, i);
 	pdata->channel_irq_count = j;

 	pdata->per_channel_irq = 1;
 	pdata->channel_irq_mode = XGBE_IRQ_MODE_LEVEL;

 	if (netif_msg_probe(pdata))
-		dev_dbg(pdata->dev, "MSI-X interrupts enabled\n");
+		dev_dbg(pdata->dev, "multi %s interrupts enabled\n",
+			pdata->pcidev->msix_enabled ? "MSI-X" : "MSI");

 	return 0;
 }
@@ -228,21 +164,28 @@ static int xgbe_config_irqs(struct xgbe_prv_data *pdata)
 {
 	int ret;

-	ret = xgbe_config_msix(pdata);
+	ret = xgbe_config_multi_msi(pdata);
 	if (!ret)
 		goto out;

-	ret = xgbe_config_msi(pdata);
-	if (!ret)
-		goto out;
+	ret = pci_alloc_irq_vectors(pdata->pcidev, 1, 1,
+				    PCI_IRQ_LEGACY | PCI_IRQ_MSI);
+	if (ret < 0) {
+		dev_info(pdata->dev, "single IRQ enablement failed\n");
+		return ret;
+	}

 	pdata->irq_count = 1;
-	pdata->irq_shared = 1;
+	pdata->channel_irq_count = 1;
+
+	pdata->dev_irq = pci_irq_vector(pdata->pcidev, 0);
+	pdata->ecc_irq = pci_irq_vector(pdata->pcidev, 0);
+	pdata->i2c_irq = pci_irq_vector(pdata->pcidev, 0);
+	pdata->an_irq = pci_irq_vector(pdata->pcidev, 0);

-	pdata->dev_irq = pdata->pcidev->irq;
-	pdata->ecc_irq = pdata->pcidev->irq;
-	pdata->i2c_irq = pdata->pcidev->irq;
-	pdata->an_irq = pdata->pcidev->irq;
+	if (netif_msg_probe(pdata))
+		dev_dbg(pdata->dev, "single %s interrupt enabled\n",
+			pdata->pcidev->msi_enabled ?  "MSI" : "legacy");

 out:
 	if (netif_msg_probe(pdata)) {
@@ -412,12 +355,15 @@ static int xgbe_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 	/* Configure the netdev resource */
 	ret = xgbe_config_netdev(pdata);
 	if (ret)
-		goto err_pci_enable;
+		goto err_irq_vectors;

 	netdev_notice(pdata->netdev, "net device enabled\n");

 	return 0;

+err_irq_vectors:
+	pci_free_irq_vectors(pdata->pcidev);
+
 err_pci_enable:
 	xgbe_free_pdata(pdata);

@@ -433,6 +379,8 @@ static void xgbe_pci_remove(struct pci_dev *pdev)

 	xgbe_deconfig_netdev(pdata);

+	pci_free_irq_vectors(pdata->pcidev);
+
 	xgbe_free_pdata(pdata);
 }


+ 3 - 5
drivers/net/ethernet/amd/xgbe/xgbe.h

@@ -211,9 +211,9 @@
 #define XGBE_MAC_PROP_OFFSET	0x1d000
 #define XGBE_I2C_CTRL_OFFSET	0x1e000

-/* PCI MSIx support */
-#define XGBE_MSIX_BASE_COUNT	4
-#define XGBE_MSIX_MIN_COUNT	(XGBE_MSIX_BASE_COUNT + 1)
+/* PCI MSI/MSIx support */
+#define XGBE_MSI_BASE_COUNT	4
+#define XGBE_MSI_MIN_COUNT	(XGBE_MSI_BASE_COUNT + 1)

 /* PCI clock frequencies */
 #define XGBE_V2_DMA_CLOCK_FREQ	500000000	/* 500 MHz */
@@ -980,14 +980,12 @@ struct xgbe_prv_data {
 	unsigned int desc_ded_count;
 	unsigned int desc_sec_count;

-	struct msix_entry *msix_entries;
 	int dev_irq;
 	int ecc_irq;
 	int i2c_irq;
 	int channel_irq[XGBE_MAX_DMA_CHANNELS];

 	unsigned int per_channel_irq;
-	unsigned int irq_shared;
 	unsigned int irq_count;
 	unsigned int channel_irq_count;
 	unsigned int channel_irq_mode;

+ 21 - 99
drivers/pci/msi.c

@@ -32,32 +32,13 @@ int pci_msi_ignore_mask;
 #define msix_table_size(flags)	((flags & PCI_MSIX_FLAGS_QSIZE) + 1)

 #ifdef CONFIG_PCI_MSI_IRQ_DOMAIN
-static struct irq_domain *pci_msi_default_domain;
-static DEFINE_MUTEX(pci_msi_domain_lock);
-
-struct irq_domain * __weak arch_get_pci_msi_domain(struct pci_dev *dev)
-{
-	return pci_msi_default_domain;
-}
-
-static struct irq_domain *pci_msi_get_domain(struct pci_dev *dev)
-{
-	struct irq_domain *domain;
-
-	domain = dev_get_msi_domain(&dev->dev);
-	if (domain)
-		return domain;
-
-	return arch_get_pci_msi_domain(dev);
-}
-
 static int pci_msi_setup_msi_irqs(struct pci_dev *dev, int nvec, int type)
 {
 	struct irq_domain *domain;

-	domain = pci_msi_get_domain(dev);
+	domain = dev_get_msi_domain(&dev->dev);
 	if (domain && irq_domain_is_hierarchy(domain))
-		return pci_msi_domain_alloc_irqs(domain, dev, nvec, type);
+		return msi_domain_alloc_irqs(domain, &dev->dev, nvec);

 	return arch_setup_msi_irqs(dev, nvec, type);
 }
@@ -66,9 +47,9 @@ static void pci_msi_teardown_msi_irqs(struct pci_dev *dev)
 {
 	struct irq_domain *domain;

-	domain = pci_msi_get_domain(dev);
+	domain = dev_get_msi_domain(&dev->dev);
 	if (domain && irq_domain_is_hierarchy(domain))
-		pci_msi_domain_free_irqs(domain, dev);
+		msi_domain_free_irqs(domain, &dev->dev);
 	else
 		arch_teardown_msi_irqs(dev);
 }
@@ -610,7 +591,7 @@ static int msi_verify_entries(struct pci_dev *dev)
  * msi_capability_init - configure device's MSI capability structure
  * @dev: pointer to the pci_dev data structure of MSI device function
  * @nvec: number of interrupts to allocate
- * @affinity: flag to indicate cpu irq affinity mask should be set
+ * @affd: description of automatic irq affinity assignments (may be %NULL)
  *
  * Setup the MSI capability structure of the device with the requested
  * number of interrupts.  A return value of zero indicates the successful
@@ -731,7 +712,7 @@ static int msix_setup_entries(struct pci_dev *dev, void __iomem *base,
 	ret = 0;
 out:
 	kfree(masks);
-	return 0;
+	return ret;
 }

 static void msix_program_entries(struct pci_dev *dev,
@@ -1084,7 +1065,7 @@ static int __pci_enable_msi_range(struct pci_dev *dev, int minvec, int maxvec,
 	if (nvec < 0)
 		return nvec;
 	if (nvec < minvec)
-		return -EINVAL;
+		return -ENOSPC;

 	if (nvec > maxvec)
 		nvec = maxvec;
@@ -1109,23 +1090,15 @@ static int __pci_enable_msi_range(struct pci_dev *dev, int minvec, int maxvec,
 	}
 }

-/**
- * pci_enable_msi_range - configure device's MSI capability structure
- * @dev: device to configure
- * @minvec: minimal number of interrupts to configure
- * @maxvec: maximum number of interrupts to configure
- *
- * This function tries to allocate a maximum possible number of interrupts in a
- * range between @minvec and @maxvec. It returns a negative errno if an error
- * occurs. If it succeeds, it returns the actual number of interrupts allocated
- * and updates the @dev's irq member to the lowest new interrupt number;
- * the other interrupt numbers allocated to this device are consecutive.
- **/
-int pci_enable_msi_range(struct pci_dev *dev, int minvec, int maxvec)
+/* deprecated, don't use */
+int pci_enable_msi(struct pci_dev *dev)
 {
-	return __pci_enable_msi_range(dev, minvec, maxvec, NULL);
+	int rc = __pci_enable_msi_range(dev, 1, 1, NULL);
+	if (rc < 0)
+		return rc;
+	return 0;
 }
-EXPORT_SYMBOL(pci_enable_msi_range);
+EXPORT_SYMBOL(pci_enable_msi);

 static int __pci_enable_msix_range(struct pci_dev *dev,
 				   struct msix_entry *entries, int minvec,
@@ -1225,9 +1198,11 @@ int pci_alloc_irq_vectors_affinity(struct pci_dev *dev, unsigned int min_vecs,
 	}

 	/* use legacy irq if allowed */
-	if ((flags & PCI_IRQ_LEGACY) && min_vecs == 1) {
-		pci_intx(dev, 1);
-		return 1;
+	if (flags & PCI_IRQ_LEGACY) {
+		if (min_vecs == 1 && dev->irq) {
+			pci_intx(dev, 1);
+			return 1;
+		}
 	}

 	return vecs;
@@ -1381,7 +1356,7 @@ int pci_msi_domain_check_cap(struct irq_domain *domain,
 {
 	struct msi_desc *desc = first_pci_msi_entry(to_pci_dev(dev));

-	/* Special handling to support pci_enable_msi_range() */
+	/* Special handling to support __pci_enable_msi_range() */
 	if (pci_msi_desc_is_multi_msi(desc) &&
 	    !(info->flags & MSI_FLAG_MULTI_PCI_MSI))
 		return 1;
@@ -1394,7 +1369,7 @@ int pci_msi_domain_check_cap(struct irq_domain *domain,
 static int pci_msi_domain_handle_error(struct irq_domain *domain,
 				       struct msi_desc *desc, int error)
 {
-	/* Special handling to support pci_enable_msi_range() */
+	/* Special handling to support __pci_enable_msi_range() */
 	if (pci_msi_desc_is_multi_msi(desc) && error == -ENOSPC)
 		return 1;

@@ -1481,59 +1456,6 @@ struct irq_domain *pci_msi_create_irq_domain(struct fwnode_handle *fwnode,
 }
 EXPORT_SYMBOL_GPL(pci_msi_create_irq_domain);

-/**
- * pci_msi_domain_alloc_irqs - Allocate interrupts for @dev in @domain
- * @domain:	The interrupt domain to allocate from
- * @dev:	The device for which to allocate
- * @nvec:	The number of interrupts to allocate
- * @type:	Unused to allow simpler migration from the arch_XXX interfaces
- *
- * Returns:
- * A virtual interrupt number or an error code in case of failure
- */
-int pci_msi_domain_alloc_irqs(struct irq_domain *domain, struct pci_dev *dev,
-			      int nvec, int type)
-{
-	return msi_domain_alloc_irqs(domain, &dev->dev, nvec);
-}
-
-/**
- * pci_msi_domain_free_irqs - Free interrupts for @dev in @domain
- * @domain:	The interrupt domain
- * @dev:	The device for which to free interrupts
- */
-void pci_msi_domain_free_irqs(struct irq_domain *domain, struct pci_dev *dev)
-{
-	msi_domain_free_irqs(domain, &dev->dev);
-}
-
-/**
- * pci_msi_create_default_irq_domain - Create a default MSI interrupt domain
- * @fwnode:	Optional fwnode of the interrupt controller
- * @info:	MSI domain info
- * @parent:	Parent irq domain
- *
- * Returns: A domain pointer or NULL in case of failure. If successful
- * the default PCI/MSI irqdomain pointer is updated.
- */
-struct irq_domain *pci_msi_create_default_irq_domain(struct fwnode_handle *fwnode,
-		struct msi_domain_info *info, struct irq_domain *parent)
-{
-	struct irq_domain *domain;
-
-	mutex_lock(&pci_msi_domain_lock);
-	if (pci_msi_default_domain) {
-		pr_err("PCI: default irq domain for PCI MSI has already been created.\n");
-		domain = NULL;
-	} else {
-		domain = pci_msi_create_irq_domain(fwnode, info, parent);
-		pci_msi_default_domain = domain;
-	}
-	mutex_unlock(&pci_msi_domain_lock);
-
-	return domain;
-}
-
 static int get_msi_id_cb(struct pci_dev *pdev, u16 alias, void *data)
 {
 	u32 *pa = data;

+ 48 - 113
drivers/pci/pcie/portdrv_core.c

@@ -43,53 +43,17 @@ static void release_pcie_device(struct device *dev)
 	kfree(to_pcie_device(dev));
 }

-/**
- * pcie_port_msix_add_entry - add entry to given array of MSI-X entries
- * @entries: Array of MSI-X entries
- * @new_entry: Index of the entry to add to the array
- * @nr_entries: Number of entries already in the array
- *
- * Return value: Position of the added entry in the array
- */
-static int pcie_port_msix_add_entry(
-	struct msix_entry *entries, int new_entry, int nr_entries)
-{
-	int j;
-
-	for (j = 0; j < nr_entries; j++)
-		if (entries[j].entry == new_entry)
-			return j;
-
-	entries[j].entry = new_entry;
-	return j;
-}
-
 /**
  * pcie_port_enable_msix - try to set up MSI-X as interrupt mode for given port
  * @dev: PCI Express port to handle
- * @vectors: Array of interrupt vectors to populate
+ * @irqs: Array of interrupt vectors to populate
  * @mask: Bitmask of port capabilities returned by get_port_device_capability()
  *
  * Return value: 0 on success, error code on failure
  */
-static int pcie_port_enable_msix(struct pci_dev *dev, int *vectors, int mask)
+static int pcie_port_enable_msix(struct pci_dev *dev, int *irqs, int mask)
 {
-	struct msix_entry *msix_entries;
-	int idx[PCIE_PORT_DEVICE_MAXSERVICES];
-	int nr_entries, status, pos, i, nvec;
-	u16 reg16;
-	u32 reg32;
-
-	nr_entries = pci_msix_vec_count(dev);
-	if (nr_entries < 0)
-		return nr_entries;
-	BUG_ON(!nr_entries);
-	if (nr_entries > PCIE_PORT_MAX_MSIX_ENTRIES)
-		nr_entries = PCIE_PORT_MAX_MSIX_ENTRIES;
-
-	msix_entries = kzalloc(sizeof(*msix_entries) * nr_entries, GFP_KERNEL);
-	if (!msix_entries)
-		return -ENOMEM;
+	int nr_entries, entry, nvec = 0;

 	/*
 	 * Allocate as many entries as the port wants, so that we can check
@@ -97,20 +61,13 @@ static int pcie_port_enable_msix(struct pci_dev *dev, int *vectors, int mask)
 	 * equal to the number of entries this port actually uses, we'll happily
 	 * go through without any tricks.
 	 */
-	for (i = 0; i < nr_entries; i++)
-		msix_entries[i].entry = i;
-
-	status = pci_enable_msix_exact(dev, msix_entries, nr_entries);
-	if (status)
-		goto Exit;
-
-	for (i = 0; i < PCIE_PORT_DEVICE_MAXSERVICES; i++)
-		idx[i] = -1;
-	status = -EIO;
-	nvec = 0;
+	nr_entries = pci_alloc_irq_vectors(dev, 1, PCIE_PORT_MAX_MSIX_ENTRIES,
+			PCI_IRQ_MSIX);
+	if (nr_entries < 0)
+		return nr_entries;

 	if (mask & (PCIE_PORT_SERVICE_PME | PCIE_PORT_SERVICE_HP)) {
-		int entry;
+		u16 reg16;

 		/*
 		 * The code below follows the PCI Express Base Specification 2.0
@@ -125,18 +82,16 @@ static int pcie_port_enable_msix(struct pci_dev *dev, int *vectors, int mask)
 		pcie_capability_read_word(dev, PCI_EXP_FLAGS, &reg16);
 		entry = (reg16 & PCI_EXP_FLAGS_IRQ) >> 9;
 		if (entry >= nr_entries)
-			goto Error;
+			goto out_free_irqs;

-		i = pcie_port_msix_add_entry(msix_entries, entry, nvec);
-		if (i == nvec)
-			nvec++;
+		irqs[PCIE_PORT_SERVICE_PME_SHIFT] = pci_irq_vector(dev, entry);
+		irqs[PCIE_PORT_SERVICE_HP_SHIFT] = pci_irq_vector(dev, entry);

-		idx[PCIE_PORT_SERVICE_PME_SHIFT] = i;
-		idx[PCIE_PORT_SERVICE_HP_SHIFT] = i;
+		nvec = max(nvec, entry + 1);
 	}

 	if (mask & PCIE_PORT_SERVICE_AER) {
-		int entry;
+		u32 reg32, pos;

 		/*
 		 * The code below follows Section 7.10.10 of the PCI Express
@@ -151,13 +106,11 @@ static int pcie_port_enable_msix(struct pci_dev *dev, int *vectors, int mask)
 		pci_read_config_dword(dev, pos + PCI_ERR_ROOT_STATUS, &reg32);
 		entry = reg32 >> 27;
 		if (entry >= nr_entries)
-			goto Error;
+			goto out_free_irqs;

-		i = pcie_port_msix_add_entry(msix_entries, entry, nvec);
-		if (i == nvec)
-			nvec++;
+		irqs[PCIE_PORT_SERVICE_AER_SHIFT] = pci_irq_vector(dev, entry);

-		idx[PCIE_PORT_SERVICE_AER_SHIFT] = i;
+		nvec = max(nvec, entry + 1);
 	}

 	/*
@@ -165,41 +118,39 @@ static int pcie_port_enable_msix(struct pci_dev *dev, int *vectors, int mask)
 	 * what we have.  Otherwise, the port has some extra entries not for the
 	 * services we know and we need to work around that.
 	 */
-	if (nvec == nr_entries) {
-		status = 0;
-	} else {
+	if (nvec != nr_entries) {
 		/* Drop the temporary MSI-X setup */
-		pci_disable_msix(dev);
+		pci_free_irq_vectors(dev);

 		/* Now allocate the MSI-X vectors for real */
-		status = pci_enable_msix_exact(dev, msix_entries, nvec);
-		if (status)
-			goto Exit;
+		nr_entries = pci_alloc_irq_vectors(dev, nvec, nvec,
+				PCI_IRQ_MSIX);
+		if (nr_entries < 0)
+			return nr_entries;
 	}

-	for (i = 0; i < PCIE_PORT_DEVICE_MAXSERVICES; i++)
-		vectors[i] = idx[i] >= 0 ? msix_entries[idx[i]].vector : -1;
-
- Exit:
-	kfree(msix_entries);
-	return status;
+	return 0;

- Error:
-	pci_disable_msix(dev);
-	goto Exit;
+out_free_irqs:
+	pci_free_irq_vectors(dev);
+	return -EIO;
 }

 /**
- * init_service_irqs - initialize irqs for PCI Express port services
+ * pcie_init_service_irqs - initialize irqs for PCI Express port services
  * @dev: PCI Express port to handle
  * @irqs: Array of irqs to populate
  * @mask: Bitmask of port capabilities returned by get_port_device_capability()
  *
  * Return value: Interrupt mode associated with the port
  */
-static int init_service_irqs(struct pci_dev *dev, int *irqs, int mask)
+static int pcie_init_service_irqs(struct pci_dev *dev, int *irqs, int mask)
 {
-	int i, irq = -1;
+	unsigned flags = PCI_IRQ_LEGACY | PCI_IRQ_MSI;
+	int ret, i;
+
+	for (i = 0; i < PCIE_PORT_DEVICE_MAXSERVICES; i++)
+		irqs[i] = -1;

 	/*
 	 * If MSI cannot be used for PCIe PME or hotplug, we have to use
@@ -207,41 +158,25 @@ static int init_service_irqs(struct pci_dev *dev, int *irqs, int mask)
 	 */
 	if (((mask & PCIE_PORT_SERVICE_PME) && pcie_pme_no_msi()) ||
 	    ((mask & PCIE_PORT_SERVICE_HP) && pciehp_no_msi())) {
-		if (dev->irq)
-			irq = dev->irq;
-		goto no_msi;
+		flags &= ~PCI_IRQ_MSI;
+	} else {
+		/* Try to use MSI-X if supported */
+		if (!pcie_port_enable_msix(dev, irqs, mask))
+			return 0;
 	}

-	/* Try to use MSI-X if supported */
-	if (!pcie_port_enable_msix(dev, irqs, mask))
-		return 0;
-
-	/*
-	 * We're not going to use MSI-X, so try MSI and fall back to INTx.
-	 * If neither MSI/MSI-X nor INTx available, try other interrupt.  On
-	 * some platforms, root port doesn't support MSI/MSI-X/INTx in RC mode.
-	 */
-	if (!pci_enable_msi(dev) || dev->irq)
-		irq = dev->irq;
+	ret = pci_alloc_irq_vectors(dev, 1, 1, flags);
+	if (ret < 0)
+		return -ENODEV;

- no_msi:
-	for (i = 0; i < PCIE_PORT_DEVICE_MAXSERVICES; i++)
-		irqs[i] = irq;
-	irqs[PCIE_PORT_SERVICE_VC_SHIFT] = -1;
+	for (i = 0; i < PCIE_PORT_DEVICE_MAXSERVICES; i++) {
+		if (i != PCIE_PORT_SERVICE_VC_SHIFT)
+			irqs[i] = pci_irq_vector(dev, 0);
+	}

-	if (irq < 0)
-		return -ENODEV;
 	return 0;
 }

-static void cleanup_service_irqs(struct pci_dev *dev)
-{
-	if (dev->msix_enabled)
-		pci_disable_msix(dev);
-	else if (dev->msi_enabled)
-		pci_disable_msi(dev);
-}
-
 /**
  * get_port_device_capability - discover capabilities of a PCI Express port
  * @dev: PCI Express port to examine
@@ -378,7 +313,7 @@ int pcie_port_device_register(struct pci_dev *dev)
 	 * that can be used in the absence of irqs.  Allow them to determine
 	 * if that is to be used.
 	 */
-	status = init_service_irqs(dev, irqs, capabilities);
+	status = pcie_init_service_irqs(dev, irqs, capabilities);
 	if (status) {
 		capabilities &= PCIE_PORT_SERVICE_VC | PCIE_PORT_SERVICE_HP;
 		if (!capabilities)
@@ -401,7 +336,7 @@ int pcie_port_device_register(struct pci_dev *dev)
 	return 0;

 error_cleanup_irqs:
-	cleanup_service_irqs(dev);
+	pci_free_irq_vectors(dev);
 error_disable:
 	pci_disable_device(dev);
 	return status;
@@ -469,7 +404,7 @@ static int remove_iter(struct device *dev, void *data)
 void pcie_port_device_remove(struct pci_dev *dev)
 {
 	device_for_each_child(&dev->dev, NULL, remove_iter);
-	cleanup_service_irqs(dev);
+	pci_free_irq_vectors(dev);
 	pci_disable_device(dev);
 }


+ 0 - 6
include/linux/msi.h

@@ -316,12 +316,6 @@ void pci_msi_domain_write_msg(struct irq_data *irq_data, struct msi_msg *msg);
 struct irq_domain *pci_msi_create_irq_domain(struct fwnode_handle *fwnode,
 					     struct msi_domain_info *info,
 					     struct irq_domain *parent);
-int pci_msi_domain_alloc_irqs(struct irq_domain *domain, struct pci_dev *dev,
-			      int nvec, int type);
-void pci_msi_domain_free_irqs(struct irq_domain *domain, struct pci_dev *dev);
-struct irq_domain *pci_msi_create_default_irq_domain(struct fwnode_handle *fwnode,
-		 struct msi_domain_info *info, struct irq_domain *parent);
-
 irq_hw_number_t pci_msi_domain_calc_hwirq(struct pci_dev *dev,
 					  struct msi_desc *desc);
 int pci_msi_domain_check_cap(struct irq_domain *domain,

+ 2 - 14
include/linux/pci.h

@@ -1306,14 +1306,7 @@ void pci_msix_shutdown(struct pci_dev *dev);
 void pci_disable_msix(struct pci_dev *dev);
 void pci_restore_msi_state(struct pci_dev *dev);
 int pci_msi_enabled(void);
-int pci_enable_msi_range(struct pci_dev *dev, int minvec, int maxvec);
-static inline int pci_enable_msi_exact(struct pci_dev *dev, int nvec)
-{
-	int rc = pci_enable_msi_range(dev, nvec, nvec);
-	if (rc < 0)
-		return rc;
-	return 0;
-}
+int pci_enable_msi(struct pci_dev *dev);
 int pci_enable_msix_range(struct pci_dev *dev, struct msix_entry *entries,
 			  int minvec, int maxvec);
 static inline int pci_enable_msix_exact(struct pci_dev *dev,
@@ -1344,10 +1337,7 @@ static inline void pci_msix_shutdown(struct pci_dev *dev) { }
 static inline void pci_disable_msix(struct pci_dev *dev) { }
 static inline void pci_restore_msi_state(struct pci_dev *dev) { }
 static inline int pci_msi_enabled(void) { return 0; }
-static inline int pci_enable_msi_range(struct pci_dev *dev, int minvec,
-				       int maxvec)
-{ return -ENOSYS; }
-static inline int pci_enable_msi_exact(struct pci_dev *dev, int nvec)
+static inline int pci_enable_msi(struct pci_dev *dev)
 { return -ENOSYS; }
 static inline int pci_enable_msix_range(struct pci_dev *dev,
 		      struct msix_entry *entries, int minvec, int maxvec)
@@ -1423,8 +1413,6 @@ static inline void pcie_set_ecrc_checking(struct pci_dev *dev) { }
 static inline void pcie_ecrc_get_policy(char *str) { }
 #endif

-#define pci_enable_msi(pdev)	pci_enable_msi_exact(pdev, 1)
-
 #ifdef CONFIG_HT_IRQ
 /* The functions a driver should call */
 int  ht_create_irq(struct pci_dev *dev, int idx);