Commit message (Author, Age; Files, Lines)

* iommu/amd: Add amd_iommu_device_info() function (Joerg Roedel, 2011-12-15; 2 files, -0/+69)

This function can be used to find out which features necessary for IOMMUv2
usage are available on a given device.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>

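A rough sketch of how a consumer might gate its IOMMUv2 path on this call;
the struct and flag names are assumed from the interface this patch
describes, not quoted from it:

    #include <linux/pci.h>
    #include <linux/amd-iommu.h>

    /* Sketch: check whether a device has everything IOMMUv2 needs.
     * Flag names are assumptions based on the patch description. */
    static bool pdev_ready_for_iommuv2(struct pci_dev *pdev)
    {
            struct amd_iommu_device_info info;

            if (amd_iommu_device_info(pdev, &info))
                    return false;   /* no AMD IOMMU behind this device */

            /* ATS, PRI and PASID are all needed for demand paging */
            return (info.flags & AMD_IOMMU_DEVICE_FLAG_ATS_SUP) &&
                   (info.flags & AMD_IOMMU_DEVICE_FLAG_PRI_SUP) &&
                   (info.flags & AMD_IOMMU_DEVICE_FLAG_PASID_SUP);
    }
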
* iommu/amd: Adapt IOMMU driver to PCI register name changes (Joerg Roedel, 2011-12-15; 1 file, -8/+8)

The symbolic register names for PCI and PASID changed in PCI code. This
patch adapts the AMD IOMMU driver to these changes.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>

* Merge remote-tracking branch 'pci/pri-changes' into x86/amd (Joerg Roedel, 2011-12-15; 16 files, -105/+269)

| * PCI: More PRI/PASID cleanup (Alex Williamson, 2011-12-05; 2 files, -48/+51)

More consistency cleanups. Drop the _OFF suffix, and separate and indent the
CTRL/CAP/STATUS bit definitions. This helped find the previous misuse of
bit 0 in the PASID capability register.

Reviewed-by: Joerg Roedel <joerg.roedel@amd.com>
Tested-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>

| * PCI: Enable is not exposed as a PASID capability (Alex Williamson, 2011-12-05; 1 file, -3/+2)

The PASID ECN indicates that bit 0 is reserved in the capability register.
Switch pci_enable_pasid() to return an error if PASID is already enabled,
and don't expose enable as a feature in pci_pasid_features().

Reviewed-by: Joerg Roedel <joerg.roedel@amd.com>
Tested-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>

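A minimal sketch of the resulting calling convention, pairing
pci_pasid_features() with pci_enable_pasid():

    #include <linux/pci.h>
    #include <linux/pci-ats.h>

    /* Sketch: enable PASID with whatever optional features the
     * capability advertises; after this patch "enable" itself is no
     * longer among them, and double-enabling returns an error. */
    static int enable_pasid_for(struct pci_dev *pdev)
    {
            int features = pci_pasid_features(pdev);

            if (features < 0)
                    return features;        /* no PASID capability */

            return pci_enable_pasid(pdev, features);
    }
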
| * PCI: msi: Disable msi interrupts when we initialize a pci device (Eric W. Biederman, 2011-12-05; 1 file, -0/+10)

I traced a nasty kexec-on-panic boot failure to the fact that we had
screaming MSI interrupts and we were not disabling the MSI messages at
kernel startup. The booting kernel had not enabled those interrupts, so it
was not prepared to handle them.

I can see no reason why we would ever want to leave the MSI interrupts
enabled at boot if something else has enabled those interrupts. The PCI
spec specifies that MSI interrupts should be off by default. Drivers are
expected to enable the MSI interrupts if they want to use them. Our
interrupt handling code reprograms the interrupt handlers at boot and will
not be able to do anything useful with an unexpected interrupt.

This patch applies cleanly all of the way back to 2.6.32, where I noticed
the problem.

Cc: stable@kernel.org
Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>

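A sketch of the underlying idea; the real patch wires this into PCI device
initialization and covers MSI-X as well, only the plain-MSI half is shown
here:

    #include <linux/pci.h>

    /* Sketch: clear the MSI enable bit in config space during early
     * device init, so a kexec'd kernel never sees a screaming vector
     * that nobody has claimed yet. */
    static void msi_quiesce(struct pci_dev *dev)
    {
            int pos = pci_find_capability(dev, PCI_CAP_ID_MSI);
            u16 ctrl;

            if (!pos)
                    return;

            pci_read_config_word(dev, pos + PCI_MSI_FLAGS, &ctrl);
            ctrl &= ~PCI_MSI_FLAGS_ENABLE;
            pci_write_config_word(dev, pos + PCI_MSI_FLAGS, ctrl);
    }
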
| * PCI/ACPI/PM: Avoid resuming devices that don't signal PME (Rafael J. Wysocki, 2011-12-05; 1 file, -4/+8)

Modify pci_acpi_wake_dev() to avoid resuming PME-capable devices whose PME
Status bits are not set, which may happen currently if several devices are
associated with the same wakeup GPE and all of them are notified whenever
at least one of them signals PME.

Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>

| * PCI/ACPI: Make acpiphp ignore root bridges using SHPC native hotplug (Rafael J. Wysocki, 2011-12-05; 2 files, -2/+6)

If the kernel has requested control of the SHPC native hotplug feature for
a given root bridge, the acpiphp driver should not try to handle that root
bridge and it should leave it to shpchp. Failing to do so causes problems
if shpchp is loaded and unloaded before loading acpiphp (ACPI-based hotplug
won't work in that case anyway).

To address this issue, make find_root_bridges() ignore PCI root bridges
with SHPC native hotplug enabled and make add_bridge() return an error code
if SHPC native hotplug is enabled for the given root bridge. This causes
acpiphp to refuse to load if SHPC native hotplug is enabled for all root
bridges, and to refuse binding to the root bridges with SHPC native hotplug
enabled.

Reviewed-by: Kenji Kaneshige <kaneshige.kenji@jp.fujitsu.com>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>

| * PCI: pciehp: Handle push button event asynchronously (Kenji Kaneshige, 2011-12-05; 4 files, -13/+2)

Use a non-ordered workqueue for attention button events. Attention button
events on each slot can be handled asynchronously, so we should use a
non-ordered workqueue. This patch also removes the ordered workqueue in
pciehp as a result.

Signed-off-by: Kenji Kaneshige <kaneshige.kenji@jp.fujitsu.com>
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>

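A sketch of what the non-ordered queue allocation looks like; the queue and
function names here are illustrative, not pciehp's own:

    #include <linux/workqueue.h>

    /* Sketch: a plain alloc_workqueue() gives a non-ordered queue, so
     * button events from different slots can run concurrently. */
    static struct workqueue_struct *pciehp_wq;

    static int __init example_init(void)
    {
            pciehp_wq = alloc_workqueue("pciehp", 0, 0);
            return pciehp_wq ? 0 : -ENOMEM;
    }
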
| * PCI: pciehp: Fix wrong workqueue cleanup (Kenji Kaneshige, 2011-12-05; 1 file, -1/+1)

Fix improper workqueue cleanup. In the current pciehp, pcied_cleanup()
calls destroy_workqueue() before calling pcie_port_service_unregister().
This causes a kernel oops because flush_workqueue() is called in the
pcie_port_service_unregister() code path after the workqueue has been
destroyed. So pcied_cleanup() must call pcie_port_service_unregister()
before calling destroy_workqueue().

Signed-off-by: Kenji Kaneshige <kaneshige.kenji@jp.fujitsu.com>
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>

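The corrected teardown order, sketched; it reuses the pciehp_wq variable
from the sketch above, and the port-service driver object name is an
assumption:

    /* Sketch: unregister first (its code path may still flush the
     * queue), destroy the queue only afterwards. */
    static void __exit pcied_cleanup(void)
    {
            pcie_port_service_unregister(&hpdriver_portdrv); /* may flush */
            destroy_workqueue(pciehp_wq);                    /* now safe  */
    }
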
| * PCI: Rework ASPM disable code (Matthew Garrett, 2011-12-05; 4 files, -24/+46)

Right now we forcibly clear ASPM state on all devices if the BIOS indicates
that the feature isn't supported. Based on the Microsoft presentation "PCI
Express In Depth for Windows Vista and Beyond", I'm starting to think that
this may be an error. The implication is that unless the platform grants
full control via _OSC, Windows will not touch any PCIe features - including
ASPM. In that case clearing ASPM state would be an error unless the
platform has granted us that control.

This patch reworks the ASPM disabling code such that the actual clearing of
state is triggered by a successful handoff of PCIe control to the OS. The
general ASPM code undergoes some changes in order to ensure that the
ability to clear the bits isn't overridden by ASPM having already been
disabled. Further, this theoretically now allows for situations where only
a subset of PCIe roots hand over control, leaving the others in the BIOS
state.

It's difficult to know for sure that this is the right thing to do -
there's zero public documentation on the interaction between all of these
components. But enough vendors enable ASPM on platforms and then set this
bit that it seems likely that they're expecting the OS to leave them alone.

Measured to save around 5W on an idle Thinkpad X220.

Signed-off-by: Matthew Garrett <mjg@redhat.com>
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>

| * PCI: Fix PRI and PASID consistency (Alex Williamson, 2011-12-05; 2 files, -12/+12)

These are extended capabilities, rename and move to proper group for
consistency.

Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>

| * PCI/sysfs: add per pci device msi[x] irq listing (v5) (Neil Horman, 2011-12-05; 4 files, -0/+133)

This patch adds a per-pci-device subdirectory in sysfs called:
/sys/bus/pci/devices/<device>/msi_irqs

This sub-directory exports the set of msi vectors allocated by a given pci
device, by creating a numbered sub-directory for each vector beneath
msi_irqs. For each vector various attributes can be exported. Currently the
only attribute is called mode, which tracks the operational mode of that
vector (msi vs. msix).

Acked-by: Greg Kroah-Hartman <gregkh@suse.de>
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>

* | Merge branch 'iommu/page-sizes' into x86/amd (Joerg Roedel, 2011-12-14; 8 files, -70/+205)

Conflicts:
	drivers/iommu/amd_iommu.c

| * | iommu/core: remove the temporary pgsize settings (Ohad Ben-Cohen, 2011-11-10; 1 file, -10/+0)

Now that all IOMMU drivers are exporting their supported pgsizes, we can
remove the default pgsize settings in register_iommu().

Signed-off-by: Ohad Ben-Cohen <ohad@wizery.com>
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>

| * | iommu/intel: announce supported page sizes (Ohad Ben-Cohen, 2011-11-10; 1 file, -0/+19)

Let the IOMMU core know we support arbitrary page sizes (as long as they're
an order of 4KiB). This way the IOMMU core will retain the existing
behavior we're used to; it will let us map regions that:

- their size is an order of 4KiB
- they are naturally aligned

Note: Intel IOMMU hardware doesn't support arbitrary page sizes, but the
driver does (it splits arbitrary-sized mappings into the pages supported by
the hardware).

To make everything simpler for now, though, this patch effectively tells
the IOMMU core to keep giving this driver the same memory regions it did
before, so nothing is changed as far as it's concerned.

At this point, the page sizes announced remain static within the IOMMU
core. To correctly utilize the pgsize-splitting of the IOMMU core by this
driver, it seems that some core changes should still be done, because
Intel's IOMMU page size capabilities seem to have the potential to be
different between different DMA remapping devices.

Signed-off-by: Ohad Ben-Cohen <ohad@wizery.com>
Cc: David Woodhouse <dwmw2@infradead.org>
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>

| * | iommu/amd: announce supported page sizes (Ohad Ben-Cohen, 2011-11-10; 1 file, -0/+19)

Let the IOMMU core know we support arbitrary page sizes (as long as they're
an order of 4KiB). This way the IOMMU core will retain the existing
behavior we're used to; it will let us map regions that:

- their size is an order of 4KiB
- they are naturally aligned

Signed-off-by: Ohad Ben-Cohen <ohad@wizery.com>
Cc: Joerg Roedel <Joerg.Roedel@amd.com>
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>

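Advertising "any power-of-two multiple of 4KiB, naturally aligned" amounts
to a bitmap with every bit from 4KiB upward set. A sketch of what such an
announcement looks like, assuming the pgsize_bitmap member this series adds
to struct iommu_ops:

    #include <linux/iommu.h>

    /* every power-of-two page size from 4KiB upward */
    #define DEMO_IOMMU_PGSIZES      (~0xFFFUL)

    static struct iommu_ops demo_ops = {
            /* .map, .unmap, .attach_dev, ... elided */
            .pgsize_bitmap = DEMO_IOMMU_PGSIZES,
    };
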
| * | iommu/msm: announce supported page sizes (Ohad Ben-Cohen, 2011-11-10; 1 file, -1/+5)

Let the IOMMU core know we support 4KiB, 64KiB, 1MiB and 16MiB page sizes.
This way the IOMMU core can split any arbitrary-sized physically contiguous
regions (that it needs to map) as needed.

Signed-off-by: Ohad Ben-Cohen <ohad@wizery.com>
Acked-by: David Brown <davidb@codeaurora.org>
Cc: Stepan Moskovchenko <stepanm@codeaurora.org>
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>

| * | iommu/omap: announce supported page sizes (Ohad Ben-Cohen, 2011-11-10; 1 file, -0/+4)

Let the IOMMU core know we support 4KiB, 64KiB, 1MiB and 16MiB page sizes.
This way the IOMMU core can split any arbitrary-sized physically contiguous
regions (that it needs to map) as needed.

Signed-off-by: Ohad Ben-Cohen <ohad@wizery.com>
Cc: Hiroshi DOYU <hdoyu@nvidia.com>
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>

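Hardware with a fixed page-size set simply ORs the sizes together. A sketch
matching the 4KiB/64KiB/1MiB/16MiB set these two patches announce, with the
values spelled out numerically:

    /* 4KiB | 64KiB | 1MiB | 16MiB */
    #define DEMO_PGSIZES    (0x1000UL | 0x10000UL | 0x100000UL | 0x1000000UL)

    /* in the driver's iommu_ops: .pgsize_bitmap = DEMO_PGSIZES, */
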
| * | iommu/core: split mapping to page sizes as supported by the hardware (Ohad Ben-Cohen, 2011-11-10; 4 files, -32/+144)

When mapping a memory region, split it to page sizes as supported by the
iommu hardware. Always prefer bigger pages, when possible, in order to
reduce the TLB pressure. The logic to do that is now added to the IOMMU
core, so neither the iommu drivers themselves nor users of the IOMMU API
have to duplicate it.

This allows a more lenient granularity of mappings; traditionally the IOMMU
API took 'order' (of a page) as a mapping size, and directly let the low
level iommu drivers handle the mapping, but now that the IOMMU core can
split arbitrary memory regions into pages, we can remove this limitation,
so users don't have to split those regions by themselves.

Currently the supported page sizes are advertised once and they then remain
static. That works well for OMAP and MSM but it would probably not fly well
with Intel's hardware, where the page size capabilities seem to have the
potential to be different between several DMA remapping devices.

register_iommu() currently sets a default pgsize behavior, so we can
convert the IOMMU drivers in subsequent patches. After all the drivers are
converted, the temporary default settings will be removed.

Mainline users of the IOMMU API (kvm and omap-iovmm) are adapted to deal
with bytes instead of page order.

Many thanks to Joerg Roedel <Joerg.Roedel@amd.com> for significant review!

Signed-off-by: Ohad Ben-Cohen <ohad@wizery.com>
Cc: David Brown <davidb@codeaurora.org>
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: Joerg Roedel <Joerg.Roedel@amd.com>
Cc: Stepan Moskovchenko <stepanm@codeaurora.org>
Cc: KyongHo Cho <pullip.cho@samsung.com>
Cc: Hiroshi DOYU <hdoyu@nvidia.com>
Cc: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Cc: kvm@vger.kernel.org
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>

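One way to write the size-picking step at the heart of the splitting loop;
this is a sketch of the idea, not the patch's exact code:

    #include <linux/bitops.h>
    #include <linux/bug.h>

    /* Sketch: pick the largest advertised page size that (a) fits in
     * the remaining length and (b) matches the current alignment of
     * both iova and paddr. Caller guarantees everything is at least
     * aligned to the smallest advertised size. */
    static unsigned long pick_pgsize(unsigned long pgsize_bitmap,
                                     unsigned long iova,
                                     unsigned long paddr, size_t size)
    {
            unsigned long addr_merge = iova | paddr;

            /* sizes no larger than what is left to map */
            unsigned long mask = pgsize_bitmap & (BIT(__fls(size) + 1) - 1);

            /* sizes that both addresses are aligned to */
            if (addr_merge)
                    mask &= BIT(__ffs(addr_merge) + 1) - 1;

            BUG_ON(!mask);                  /* min page size must fit */
            return 1UL << __fls(mask);      /* biggest candidate wins */
    }

The caller then loops: map one chunk of pick_pgsize() bytes, advance iova
and paddr, subtract from size, and repeat until the region is fully mapped.
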
| * | iommu/core: stop converting bytes to page order back and forth (Ohad Ben-Cohen, 2011-11-10; 6 files, -42/+29)

Express sizes in bytes rather than in page order, to eliminate the
size->order->size conversions we have whenever the IOMMU API is calling the
low level drivers' map/unmap methods. Adopt all existing drivers.

Signed-off-by: Ohad Ben-Cohen <ohad@wizery.com>
Cc: David Brown <davidb@codeaurora.org>
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: Joerg Roedel <Joerg.Roedel@amd.com>
Cc: Stepan Moskovchenko <stepanm@codeaurora.org>
Cc: KyongHo Cho <pullip.cho@samsung.com>
Cc: Hiroshi DOYU <hdoyu@nvidia.com>
Cc: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>

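The shape of the change at a driver's ->map method, sketched with
hypothetical before/after structs:

    #include <linux/types.h>

    struct iommu_domain;

    /* old style: size expressed as a page order */
    struct demo_iommu_ops_old {
            int (*map)(struct iommu_domain *domain, unsigned long iova,
                       phys_addr_t paddr, int gfp_order, int prot);
    };

    /* new style: size expressed in bytes, no order<->size round trips */
    struct demo_iommu_ops_new {
            int (*map)(struct iommu_domain *domain, unsigned long iova,
                       phys_addr_t paddr, size_t size, int prot);
    };
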
* | | iommu/amd: Add invalid_ppr callback (Joerg Roedel, 2011-12-14; 2 files, -5/+86)

This callback can be used to change the PRI response code sent to a device
when a PPR fault fails.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>

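A sketch of such a callback; the callback shape and the response-code
constant are assumptions based on this description, not quotes from the
patch:

    #include <linux/amd-iommu.h>

    /* Sketch: report "invalid" rather than a generic failure for
     * faults this driver cannot resolve. */
    static int demo_invalid_ppr_cb(struct pci_dev *pdev, int pasid,
                                   unsigned long address, u16 flags)
    {
            return AMD_IOMMU_INV_PRI_RSP_INVALID;
    }

    /* wired up once per device with something like:
     *   amd_iommu_set_invalid_ppr_cb(pdev, demo_invalid_ppr_cb);     */
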
* | | iommu/amd: Implement notifiers for IOMMUv2 (Joerg Roedel, 2011-12-14; 2 files, -11/+178)

Since pages are not pinned anymore, we need notifications when the VMM
changes the page-tables. Use mmu_notifiers for that. Also use the task_exit
notifier from the profiling subsystem to shut down all contexts related to
this task.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>

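A sketch of the mmu_notifier side; the callback shown (invalidate_page, one
of the 3.2-era hooks) is used purely for illustration, the patch's actual
callback set is not quoted here:

    #include <linux/mmu_notifier.h>
    #include <linux/sched.h>

    /* Sketch: with no page pinning, subscribe to the bound mm and
     * flush the IOTLB whenever the CPU page tables change. */
    static void demo_invalidate_page(struct mmu_notifier *mn,
                                     struct mm_struct *mm,
                                     unsigned long address)
    {
            /* flush the IOMMU TLB and the device ATC for 'address' */
    }

    static const struct mmu_notifier_ops demo_mn_ops = {
            .invalidate_page = demo_invalidate_page,
    };

    static struct mmu_notifier demo_mn = { .ops = &demo_mn_ops };

    /* bound to a task's address space with:
     *   mmu_notifier_register(&demo_mn, current->mm);               */
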
* | | iommu/amd: Implement IO page-fault handler (Joerg Roedel, 2011-12-12; 1 file, -8/+196)

Register the notifier for PPR faults and handle them as necessary.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>

* | | iommu/amd: Add routines to bind/unbind a pasid (Joerg Roedel, 2011-12-12; 2 files, -0/+332)

This patch adds routines to bind a specific process address-space to a
given PASID.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>

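A usage sketch, assuming the routines take a PCI device, a PASID and a task
whose address space is to be bound:

    #include <linux/pci.h>
    #include <linux/sched.h>
    #include <linux/amd-iommu.h>

    /* Sketch: attach PASID 1 on this device to the current task's
     * address space, and detach it again when done. */
    static int demo_bind(struct pci_dev *pdev)
    {
            return amd_iommu_bind_pasid(pdev, 1 /* pasid */, current);
    }

    static void demo_unbind(struct pci_dev *pdev)
    {
            amd_iommu_unbind_pasid(pdev, 1 /* pasid */);
    }
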
* | | iommu/amd: Implement device acquisition code for IOMMUv2 (Joerg Roedel, 2011-12-12; 2 files, -1/+232)

This patch adds the amd_iommu_init_device() and amd_iommu_free_device()
functions which make a device and the IOMMU ready for IOMMUv2 usage.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>

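A sketch of the surrounding device lifecycle, bracketing the bind/unbind
calls above; the PASID budget of 16 is arbitrary:

    #include <linux/pci.h>
    #include <linux/amd-iommu.h>

    /* Sketch: reserve PASID room at probe, tear down on remove. */
    static int demo_probe(struct pci_dev *pdev)
    {
            return amd_iommu_init_device(pdev, 16 /* max pasids */);
    }

    static void demo_remove(struct pci_dev *pdev)
    {
            amd_iommu_free_device(pdev);
    }
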
* | | iommu/amd: Add driver stub for AMD IOMMUv2 support (Joerg Roedel, 2011-12-12; 3 files, -0/+45)

Add a Kconfig option for the optional driver. Since it is optional it can
be compiled as a module and will only be loaded when required by another
driver.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>

* | | iommu/amd: Add stat counter for IOMMUv2 events (Joerg Roedel, 2011-12-12; 1 file, -0/+17)

Add some interesting statistic counters for events when IOMMUv2 is active.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>

* | | iommu/amd: Add device errata handling (Joerg Roedel, 2011-12-12; 3 files, -3/+73)

Add infrastructure for errata-handling and handle two known errata in the
IOMMUv2 code.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>

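A sketch of how a driver might opt a device into a workaround before
initializing it for IOMMUv2; the function and erratum constant names are
assumptions, not quoted from the patch:

    #include <linux/pci.h>
    #include <linux/amd-iommu.h>

    /* Sketch: flag a known-broken device so the IOMMUv2 code applies
     * its workaround; call this before amd_iommu_init_device(). */
    static void demo_apply_errata(struct pci_dev *pdev)
    {
            amd_iommu_enable_device_erratum(pdev,
                            AMD_PRI_DEV_ERRATUM_LIMIT_REQ_ONE);
    }
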
* | | iommu/amd: Add function to get IOMMUv2 domain for pdev (Joerg Roedel, 2011-12-12; 3 files, -0/+23)

The AMD IOMMUv2 driver needs to get the IOMMUv2 domain associated with a
particular device. This patch adds a function to get this information.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>

* | | iommu/amd: Implement function to send PPR completions (Joerg Roedel, 2011-12-12; 3 files, -0/+63)

To send completions for PPR requests this patch adds a function which can
be used by the IOMMUv2 driver.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>

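A sketch of the completion call; the status constant is an assumption, the
real driver defines its own success/failure response codes:

    #include <linux/pci.h>
    #include <linux/amd-iommu.h>

    /* Sketch: answer a serviced page request so the device knows it
     * may retry the faulting access. */
    static void demo_finish_fault(struct pci_dev *pdev, int pasid, int tag)
    {
            amd_iommu_complete_ppr(pdev, pasid, PPR_SUCCESS, tag);
    }
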
* | | iommu/amd: Implement functions to manage GCR3 table (Joerg Roedel, 2011-12-12; 3 files, -0/+135)

This patch adds functions necessary to set and clear the GCR3 values
associated with a particular PASID in an IOMMUv2 domain.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>

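A usage sketch with a fixed PASID of 1; the signatures are assumed from the
description (domain, PASID, page-table root):

    #include <linux/iommu.h>
    #include <linux/amd-iommu.h>

    /* Sketch: point PASID 1 of an IOMMUv2 domain at a page-table
     * root, and clear the entry again on teardown. */
    static int demo_set_gcr3(struct iommu_domain *dom, u64 cr3_root)
    {
            return amd_iommu_domain_set_gcr3(dom, 1 /* pasid */, cr3_root);
    }

    static void demo_clear_gcr3(struct iommu_domain *dom)
    {
            amd_iommu_domain_clear_gcr3(dom, 1 /* pasid */);
    }
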
* | | iommu/amd: Implement IOMMUv2 TLB flushing routines (Joerg Roedel, 2011-12-12; 3 files, -0/+140)

The functions added with this patch make it possible to manage the IOMMU
and the device TLBs for all devices in an IOMMUv2 domain.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>

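A usage sketch of the two granularities; names follow the description and
are taken on trust:

    #include <linux/iommu.h>
    #include <linux/amd-iommu.h>

    /* Sketch: invalidate one translation after a single PTE change,
     * or the whole per-PASID address space after a larger one. */
    static void demo_flush(struct iommu_domain *dom, int pasid, u64 address)
    {
            amd_iommu_flush_page(dom, pasid, address); /* one page   */
            amd_iommu_flush_tlb(dom, pasid);           /* everything */
    }
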
* | | iommu/amd: Add support for IOMMUv2 domain mode (Joerg Roedel, 2011-12-12; 5 files, -5/+180)

This patch adds support for protection domains that implement two-level
paging for devices.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>

* | | iommu/amd: Add amd_iommu_domain_direct_map function (Joerg Roedel, 2011-12-12; 2 files, -2/+39)

This function can be used to switch a domain into paging-mode 0. In this
mode all devices can access physical system memory directly without any
remapping.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>

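A sketch combining this with amd_iommu_domain_enable_v2() from the IOMMUv2
domain-mode patch above; the enable_v2 signature and the PASID budget are
assumptions:

    #include <linux/iommu.h>
    #include <linux/amd-iommu.h>

    /* Sketch: drop the domain to paging mode 0 (direct physical
     * access), then hand it v2 (two-level) tables for 16 PASIDs. */
    static int demo_setup_v2(struct iommu_domain *dom)
    {
            amd_iommu_domain_direct_map(dom);
            return amd_iommu_domain_enable_v2(dom, 16 /* pasids */);
    }
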
* | | iommu/amd: Implement notifier for PPR faults (Joerg Roedel, 2011-12-12; 3 files, -2/+125)

Add a notifier to which a module can attach to get informed about incoming
PPR faults.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>

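A sketch of the subscriber side, using the kernel's generic notifier_block;
the registration function name follows the description and the fault
details are assumed to travel in the notifier's data pointer:

    #include <linux/notifier.h>

    /* Sketch: subscribe to PPR faults; hand each fault to a worker
     * thread rather than sleeping in the notifier itself. */
    static int demo_ppr_cb(struct notifier_block *nb,
                           unsigned long action, void *data)
    {
            /* queue the fault descriptor for later processing */
            return NOTIFY_OK;
    }

    static struct notifier_block demo_ppr_nb = {
            .notifier_call = demo_ppr_cb,
    };

    /* registered with: amd_iommu_register_ppr_notifier(&demo_ppr_nb); */
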
* | | iommu/amd: Put IOMMUv2 capable devices in pt_domain (Joerg Roedel, 2011-12-12; 4 files, -16/+91)

If the device starts to use IOMMUv2 features, the DMA handles need to stay
valid. The only sane way to do this is to use an identity mapping for the
device and not translate it by the IOMMU. This is implemented in this
patch. Since this lifts the device isolation, there is also a new kernel
parameter which allows disabling that feature.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>

* | | iommu/amd: Add iommuv2 flag to struct amd_iommu (Joerg Roedel, 2011-12-12; 3 files, -0/+23)

In mixed IOMMU setups this flag indicates whether an IOMMU supports the v2
features or not. This patch also adds a global flag, together with a
function to query that flag from other code. The flag shows if at least one
IOMMUv2 is in the system.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>

* | | iommu/amd: Enable GT mode when supported by IOMMU (Joerg Roedel, 2011-12-12; 2 files, -0/+10)

This feature needs to be enabled before IOMMUv2 DTEs can be set up.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>

* | | iommu/amd: Setup PPR log when supported by IOMMU (Joerg Roedel, 2011-12-12; 2 files, -0/+65)

Allocate and enable a log buffer for peripheral page faults when the IOMMU
supports this feature.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>

* | | iommu/amd: Get the maximum number of PASIDs supported (Joerg Roedel, 2011-12-12; 2 files, -0/+19)

Read the number of PASIDs supported by each IOMMU in the system and take
the smallest number as the maximum value supported by the IOMMU driver.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>

* | | iommu/amd: Convert dev_table_entry to u64 (Joerg Roedel, 2011-12-12; 3 files, -16/+18)

Convert the contents of 'struct dev_table_entry' to u64 to allow updating
the DTE with 64-bit writes as required by the spec.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>

* | | Linux 3.2-rc5 (tag: v3.2-rc5) (Linus Torvalds, 2011-12-10; 1 file, -1/+1)

* | | Merge git://git.samba.org/sfrench/cifs-2.6 (Linus Torvalds, 2011-12-09; 4 files, -5/+39)

* git://git.samba.org/sfrench/cifs-2.6:
  cifs: check for NULL last_entry before calling cifs_save_resume_key
  cifs: attempt to freeze while looping on a receive attempt
  cifs: Fix sparse warning when calling cifs_strtoUCS
  CIFS: Add descriptions to the brlock cache functions

| * | | cifs: check for NULL last_entry before calling cifs_save_resume_key (Jeff Layton, 2011-12-09; 1 file, -2/+8)

Prior to commit eaf35b1, cifs_save_resume_key had some NULL pointer checks
at the top. It turns out that at least one of those NULL pointer checks is
needed after all.

When the LastNameOffset in a FIND reply appears to be beyond the end of the
buffer, CIFSFindFirst and CIFSFindNext will set srch_inf.last_entry to
NULL. Since eaf35b1, the code will now oops in this situation.

Fix this by having the callers check for a NULL last entry pointer before
calling cifs_save_resume_key. No change is needed for the call site in
cifs_readdir as it's not reachable with a NULL current_entry pointer.

This should fix:
https://bugzilla.redhat.com/show_bug.cgi?id=750247

Cc: stable@vger.kernel.org
Cc: Christoph Hellwig <hch@infradead.org>
Reported-by: Adam G. Metzler <adamgmetzler@gmail.com>
Signed-off-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Steve French <smfrench@gmail.com>

| * | | cifs: attempt to freeze while looping on a receive attempt (Jeff Layton, 2011-12-09; 1 file, -0/+2)

In the recent overhaul of the demultiplex thread receive path, I neglected
to ensure that we attempt to freeze on each pass through the receive loop.

Reported-and-Tested-by: Woody Suwalski <terraluna977@gmail.com>
Reported-and-Tested-by: Adam Williamson <awilliam@redhat.com>
Signed-off-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Steve French <smfrench@gmail.com>

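A sketch of the idea with generic kthread names rather than cifs's own
receive loop:

    #include <linux/freezer.h>
    #include <linux/kthread.h>

    /* Sketch: offer the freezer a chance on every pass through a
     * kthread's receive loop, so suspend isn't blocked by a thread
     * stuck retrying a receive. */
    static int demo_demultiplex_thread(void *unused)
    {
            while (!kthread_should_stop()) {
                    if (try_to_freeze())
                            continue;
                    /* ... attempt to receive one message ... */
            }
            return 0;
    }
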
| * | | cifs: Fix sparse warning when calling cifs_strtoUCS (Steve French, 2011-12-09; 1 file, -3/+3)

Fix sparse endian check warning while calling cifs_strtoUCS:

CHECK fs/cifs/smbencrypt.c
fs/cifs/smbencrypt.c:216:37: warning: incorrect type in argument 1 (different base types)
fs/cifs/smbencrypt.c:216:37:    expected restricted __le16 [usertype] *<noident>
fs/cifs/smbencrypt.c:216:37:    got unsigned short *<noident>

Signed-off-by: Steve French <smfrench@gmail.com>
Acked-by: Shirish Pargaonkar <shirishpargaonkar@gmail.com>

| * | | CIFS: Add descriptions to the brlock cache functions (Pavel Shilovsky, 2011-12-09; 1 file, -0/+26)

Signed-off-by: Pavel Shilovsky <piastry@etersoft.ru>
Signed-off-by: Steve French <sfrench@us.ibm.com>

* | | | Merge branch 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip (Linus Torvalds, 2011-12-09; 8 files, -44/+68)

* 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86, efi: Calling __pa() with an ioremap()ed address is invalid
  x86, hpet: Immediately disable HPET timer 1 if rtc irq is masked
  x86/intel_mid: Kconfig select fix
  x86/intel_mid: Fix the Kconfig for MID selection

| * | | | x86, efi: Calling __pa() with an ioremap()ed address is invalid (Matt Fleming, 2011-12-09; 6 files, -35/+48)

If we encounter an efi_memory_desc_t without EFI_MEMORY_WB set in
->attribute we currently call set_memory_uc(), which in turn calls __pa()
on a potentially ioremap'd address. On CONFIG_X86_32 this is invalid,
resulting in the following oops on some machines:

  BUG: unable to handle kernel paging request at f7f22280
  IP: [<c10257b9>] reserve_ram_pages_type+0x89/0x210
  [...]
  Call Trace:
   [<c104f8ca>] ? page_is_ram+0x1a/0x40
   [<c1025aff>] reserve_memtype+0xdf/0x2f0
   [<c1024dc9>] set_memory_uc+0x49/0xa0
   [<c19334d0>] efi_enter_virtual_mode+0x1c2/0x3aa
   [<c19216d4>] start_kernel+0x291/0x2f2
   [<c19211c7>] ? loglevel+0x1b/0x1b
   [<c19210bf>] i386_start_kernel+0xbf/0xc8

A better approach to this problem is to map the memory region with the
correct attributes from the start, instead of modifying it after the fact.
The uncached case can be handled by ioremap_nocache() and the cached by
ioremap_cache().

Despite first impressions, it's not possible to use ioremap_cache() to map
all cached memory regions on CONFIG_X86_64 because EFI_RUNTIME_SERVICES_DATA
regions really don't like being mapped into the vmalloc space, as detailed
in the following bug report:
https://bugzilla.redhat.com/show_bug.cgi?id=748516

Therefore, we need to ensure that any EFI_RUNTIME_SERVICES_DATA regions are
covered by the direct kernel mapping table on CONFIG_X86_64. To accomplish
this we now map E820_RESERVED_EFI regions via the direct kernel mapping
with the initial call to init_memory_mapping() in setup_arch(), whereas
previously these regions wouldn't be mapped if they were after the last
E820_RAM region until efi_ioremap() was called. Doing it this way allows us
to delete efi_ioremap() completely.

Signed-off-by: Matt Fleming <matt.fleming@intel.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Matthew Garrett <mjg@redhat.com>
Cc: Zhang Rui <rui.zhang@intel.com>
Cc: Huang Ying <huang.ying.caritas@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Link: http://lkml.kernel.org/r/1321621751-3650-1-git-send-email-matt@console-pimps.org
Signed-off-by: Ingo Molnar <mingo@elte.hu>

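A sketch of the "map with the right attributes from the start" approach;
the helper name is hypothetical:

    #include <linux/efi.h>
    #include <linux/io.h>

    /* Sketch: choose the mapping type from the descriptor's
     * attributes up front, instead of ioremapping and then calling
     * set_memory_uc() after the fact. */
    static void __iomem *demo_efi_map_region(efi_memory_desc_t *md)
    {
            unsigned long size = md->num_pages << EFI_PAGE_SHIFT;

            if (md->attribute & EFI_MEMORY_WB)
                    return ioremap_cache(md->phys_addr, size);

            return ioremap_nocache(md->phys_addr, size);
    }
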