path: root/drivers
Commit message (Author, Date: files changed, lines -/+)
* dm snapshot: split out exception store implementations (Alasdair G Kergon, 2009-01-06: 6 files changed, -737/+833)
  Move the existing snapshot exception store implementations out into separate files. Later patches will place these behind a new interface in preparation for alternative implementations.
  Signed-off-by: Alasdair G Kergon <agk@redhat.com>
* dm snapshot: rename struct exception_store (Jonathan Brassow, 2009-01-06: 3 files changed, -24/+25)
  Rename struct exception_store to dm_exception_store.
  Signed-off-by: Jonathan Brassow <jbrassow@redhat.com>
  Signed-off-by: Alasdair G Kergon <agk@redhat.com>
* dm snapshot: separate out exception store interface (Jonathan Brassow, 2009-01-06: 4 files changed, -120/+134)
  Pull the structures that bridge the gap between snapshot and exception store out of dm-snap.h and put them in a new .h file, dm-exception-store.h. This file will define the API for new exception stores. Ultimately, dm-snap.h is unnecessary, since only dm-snap.c should be using it.
  Signed-off-by: Jonathan Brassow <jbrassow@redhat.com>
  Signed-off-by: Alasdair G Kergon <agk@redhat.com>
* dm mpath: move trigger_event to system workqueue (Alasdair G Kergon, 2009-01-06: 1 file changed, -4/+4)
  The same workqueue is used both for sending uevents and for processing queued I/O. Deadlock has been reported in RHEL5 when sending a uevent was blocked waiting for the queued I/O to be processed. Use schedule_work() for the asynchronous uevents instead.
  Signed-off-by: Alasdair G Kergon <agk@redhat.com>
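  A rough sketch of the pattern in C (illustrative names, not the actual dm-mpath code): the event is queued on the shared system workqueue via schedule_work() rather than on the driver's private I/O workqueue, so it can never end up waiting behind queued I/O.

      #include <linux/workqueue.h>

      /* Illustrative structure; the real code hangs the work off a priority group. */
      struct my_path_group {
              struct work_struct trigger_event;
      };

      static void trigger_event_fn(struct work_struct *work)
      {
              struct my_path_group *pg =
                      container_of(work, struct my_path_group, trigger_event);

              /* send the uevent here; nothing in this path blocks on queued I/O */
              (void)pg;
      }

      static void my_init(struct my_path_group *pg)
      {
              INIT_WORK(&pg->trigger_event, trigger_event_fn);
      }

      static void my_request_event(struct my_path_group *pg)
      {
              schedule_work(&pg->trigger_event);      /* system workqueue, not a private one */
      }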
* dm: add name and uuid to sysfs (Milan Broz, 2009-01-06: 4 files changed, -2/+136)
  Implement a simple read-only sysfs entry for device-mapper block devices.
  This patch adds a simple sysfs directory named "dm" under the block device properties and implements
  - a name attribute (string containing the mapped device name)
  - a uuid attribute (string containing the UUID, or an empty string if not set)
  The kobject is embedded in the mapped_device struct, so no additional memory allocation is needed for initializing the sysfs entry.
  During the processing of a sysfs attribute we need to lock the mapped device, which is done by a new function, dm_get_from_kobject, which returns the md associated with the kobject and increases the usage count. Each 'show attribute' function is responsible for its own locking.
  Signed-off-by: Milan Broz <mbroz@redhat.com>
  Signed-off-by: Alasdair G Kergon <agk@redhat.com>
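  A minimal sketch of the embedded-kobject idea described above (the names are made up, and the real dm code also takes a reference on the mapped device before reading from it):

      #include <linux/kernel.h>
      #include <linux/kobject.h>
      #include <linux/sysfs.h>

      /* The kobject lives inside the per-device structure, so no extra allocation. */
      struct my_mapped_device {
              struct kobject kobj;
              char name[64];
      };

      static ssize_t name_show(struct kobject *kobj, struct kobj_attribute *attr,
                               char *buf)
      {
              struct my_mapped_device *md =
                      container_of(kobj, struct my_mapped_device, kobj);

              /* a real 'show' would lock/get the device here, as the commit notes */
              return scnprintf(buf, PAGE_SIZE, "%s\n", md->name);
      }

      static struct kobj_attribute name_attr = __ATTR_RO(name);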
* dm table: rework reference counting (Mikulas Patocka, 2009-01-06: 4 files changed, -20/+33)
  Rework table reference counting.
  The existing code uses a reference counter. When the last reference is dropped and the counter reaches zero, the table destructor is called. Table reference counters are acquired/released from upcalls from other kernel code (dm_any_congested, dm_merge_bvec, dm_unplug_all). If the reference counter reaches zero in one of the upcalls, the table destructor is called from almost random kernel code.
  This leads to various problems:
  * dm_any_congested being called under a spinlock, which calls the destructor, which calls some sleeping function.
  * the destructor attempting to take a lock that is already taken by the same process.
  * a stale reference from some other kernel code keeps the table constructed, which keeps some devices open, even after a successful return from "dmsetup remove". This can confuse lvm and prevent closing of underlying devices or reusing device minor numbers.
  The patch changes reference counting so that the table destructor can be called only at predetermined places. The table always has exactly one reference, from either mapped_device->map or hash_cell->new_map. After this patch, this reference is not counted in table->holders. A pair of dm_table_create()/dm_table_destroy() functions is used for table creation/destruction. Temporary references from the other code increase table->holders. A pair of dm_table_get()/dm_table_put() functions is used to manipulate it.
  When the table is about to be destroyed, we wait for table->holders to reach 0. Then, we call the table destructor. We use active waiting with msleep(1), because the situation happens rarely (to one user in 5 years) and removing the device isn't a performance-critical task: the user doesn't care whether it takes one tick more or not.
  This way, the destructor is called only at specific points (the dm_table_destroy function) and the above problems associated with lazy destruction can't happen.
  Finally, remove the temporary protection added to dm_any_congested().
  Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
  Signed-off-by: Alasdair G Kergon <agk@redhat.com>
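  A simplified sketch of the holder-count scheme in C (names and layout are illustrative, not the dm-table code):

      #include <linux/atomic.h>
      #include <linux/delay.h>
      #include <linux/slab.h>

      struct my_table {
              atomic_t holders;       /* temporary references only */
              /* ... table contents ... */
      };

      static void my_table_get(struct my_table *t)
      {
              atomic_inc(&t->holders);
      }

      static void my_table_put(struct my_table *t)
      {
              /* dropping a temporary reference never frees the table */
              atomic_dec(&t->holders);
      }

      static void my_table_destroy(struct my_table *t)
      {
              /*
               * Called only from the well-defined places that own the single
               * long-lived reference.  Wait for transient holders to drain;
               * this is rare and not performance-critical, so polling with
               * msleep(1) is acceptable.
               */
              while (atomic_read(&t->holders))
                      msleep(1);

              kfree(t);
      }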
* dm: support barriers on simple devices (Andi Kleen, 2009-01-06: 4 files changed, -10/+26)
  Implement barrier support for single-device DM devices.
  This patch implements barrier support in DM for the common case of dm linear just remapping a single underlying device. In this case we can safely pass the barrier through, because there can be no reordering between devices.
  NB. Any DM device might cease to support barriers if it gets reconfigured, so code must continue to allow for a possible -EOPNOTSUPP on every barrier bio submitted. - agk
  Signed-off-by: Andi Kleen <ak@suse.de>
  Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
  Signed-off-by: Alasdair G Kergon <agk@redhat.com>
* dm request: add caches (Kiyoshi Ueda, 2009-01-06: 1 file changed, -1/+40)
  This patch prepares some kmem_caches for request-based dm.
  Signed-off-by: Kiyoshi Ueda <k-ueda@ct.jp.nec.com>
  Signed-off-by: Jun'ichi Nomura <j-nomura@ce.jp.nec.com>
  Signed-off-by: Alasdair G Kergon <agk@redhat.com>
* dm ioctl: allow dm_copy_name_and_uuid to return only one field (Milan Broz, 2009-01-06: 1 file changed, -2/+4)
  Allow a NULL buffer in dm_copy_name_and_uuid if you only want to return one of the fields.
  (Required by a following patch that adds these fields to sysfs.)
  Signed-off-by: Milan Broz <mbroz@redhat.com>
  Reviewed-by: Alasdair G Kergon <agk@redhat.com>
  Signed-off-by: Alasdair G Kergon <agk@redhat.com>
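  In outline, the change amounts to making each copy conditional on its destination pointer; a hedged sketch with illustrative names and buffer sizes:

      #include <linux/string.h>

      static int my_copy_name_and_uuid(const char *src_name, const char *src_uuid,
                                       char *name, char *uuid)
      {
              if (name)                       /* caller may pass NULL to skip this field */
                      strlcpy(name, src_name, 128);
              if (uuid)
                      strlcpy(uuid, src_uuid, 129);
              return 0;
      }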
* dm log: ensure log bitmap fits on log device (Milan Broz, 2009-01-06: 1 file changed, -0/+8)
  Check that the log bitmap will fit within the log device.
  Signed-off-by: Milan Broz <mbroz@redhat.com>
  Signed-off-by: Alasdair G Kergon <agk@redhat.com>
* dm log: move region_size validation (Milan Broz, 2009-01-06: 2 files changed, -14/+14)
  Move log size validation from the mirror target to the log constructor.
  Remove the PAGE_SIZE restriction we no longer think necessary.
  Signed-off-by: Milan Broz <mbroz@redhat.com>
  Signed-off-by: Alasdair G Kergon <agk@redhat.com>
* dm log: avoid reinitialising io_req on every operation (Takahiro Yasui, 2009-01-06: 1 file changed, -10/+7)
  The rw_header function updates three members of the io_req data every time I/O is processed. bi_rw and notify.fn are never modified once they get initialized, so they can be set in advance.
  header_to_disk() can also be pulled out of write_header(), since only one caller needs it, and write_header() can be replaced by rw_header() directly.
  Signed-off-by: Takahiro Yasui <tyasui@redhat.com>
  Signed-off-by: Alasdair G Kergon <agk@redhat.com>
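  The shape of the change is the classic "hoist loop-invariant setup" pattern; a generic sketch (field names are illustrative, not the dm-log io_req layout):

      /* Fields that never vary per operation are filled in once at setup time. */
      struct my_io_req {
              int rw;                                                 /* varies per call */
              unsigned long bi_rw_flags;                              /* fixed after init */
              void (*notify_fn)(unsigned long error, void *context);  /* fixed after init */
              void *notify_context;                                   /* fixed after init */
      };

      static void my_log_ctr(struct my_io_req *req,
                             void (*fn)(unsigned long, void *), void *ctx)
      {
              req->bi_rw_flags = 0;
              req->notify_fn = fn;
              req->notify_context = ctx;
      }

      static int my_rw_header(struct my_io_req *req, int rw)
      {
              req->rw = rw;           /* only the direction changes per operation */
              return 0;               /* submit the I/O here in the real code */
      }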
* dm: consolidate target deregistration error handling (Mikulas Patocka, 2009-01-06: 9 files changed, -48/+16)
  Change dm_unregister_target to return void and use BUG() for error reporting.
  dm_unregister_target can only fail because of a programming bug in the target driver. It can't fail because of the user's behavior or disk errors. This patch changes unregister_target to return void and use BUG() if someone tries to unregister a non-registered target or a target that is still in use.
  This patch removes code duplication (testing of error codes in all dm targets) and reports bugs in just one place, in dm_unregister_target. In some target drivers these return codes were ignored, which could lead to a situation where bugs could be missed.
  Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
  Signed-off-by: Alasdair G Kergon <agk@redhat.com>
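  The convention can be sketched as follows (illustrative structure; the real dm_unregister_target checks its registration list under a lock):

      #include <linux/bug.h>

      struct my_target_type {
              int registered;
              unsigned int use_count;
      };

      static void my_unregister_target(struct my_target_type *tt)
      {
              /* Failure here can only mean a programming bug in the target driver. */
              BUG_ON(!tt->registered);        /* never registered */
              BUG_ON(tt->use_count);          /* still in use */

              tt->registered = 0;
      }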
* dm raid1: fix error count (Jonathan Brassow, 2009-01-06: 1 file changed, -3/+3)
  Always increase the error count when I/O on a leg of a mirror fails.
  The error count is used to decide whether to select an alternative mirror leg. If the target doesn't use the "handle_errors" feature, the error count is not updated and the bio can get requeued forever by the read callback.
  Fix it by increasing error_count before the handle_errors feature check.
  Cc: stable@kernel.org
  Signed-off-by: Milan Broz <mbroz@redhat.com>
  Signed-off-by: Jonathan Brassow <jbrassow@redhat.com>
  Signed-off-by: Alasdair G Kergon <agk@redhat.com>
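  The ordering matters: the count must be bumped before the feature test, roughly like this hedged sketch (illustrative names, not the dm-raid1 code):

      #include <linux/atomic.h>
      #include <linux/bitops.h>

      struct my_mirror {
              atomic_t error_count;
              unsigned long flags;
      };

      #define MY_HANDLE_ERRORS 0

      static void my_fail_mirror(struct my_mirror *m)
      {
              /*
               * Count the failure first, so read retries skip this leg even
               * when the "handle_errors" feature is not enabled.
               */
              atomic_inc(&m->error_count);

              if (!test_bit(MY_HANDLE_ERRORS, &m->flags))
                      return;         /* feature disabled: nothing more to do */

              /* ... full error handling / mirror reconfiguration ... */
      }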
* dm log: fix dm_io_client leak on error paths (Takahiro Yasui, 2009-01-06: 1 file changed, -0/+5)
  In the create_log_context function, dm_io_client_destroy needs to be called when the memory allocation of disk_header, sync_bits or recovering_bits fails, but that call was missing.
  Cc: stable@kernel.org
  Signed-off-by: Takahiro Yasui <tyasui@redhat.com>
  Acked-by: Jonathan Brassow <jbrassow@redhat.com>
  Signed-off-by: Alasdair G Kergon <agk@redhat.com>
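  The fix follows the usual error-unwind rule: once the io client exists, every later failure path must destroy it before returning. A hedged sketch (the surrounding log-context details are invented, and dm_io_client_create() is shown in its current no-argument form, which differs from the 2009 signature):

      #include <linux/dm-io.h>
      #include <linux/err.h>
      #include <linux/vmalloc.h>

      static int my_create_log_context(struct dm_io_client **io_client,
                                       void **header, void **sync_bits, size_t size)
      {
              struct dm_io_client *client;

              client = dm_io_client_create();
              if (IS_ERR(client))
                      return PTR_ERR(client);

              *header = vmalloc(size);
              if (!*header)
                      goto bad;

              *sync_bits = vmalloc(size);
              if (!*sync_bits) {
                      vfree(*header);
                      goto bad;
              }

              *io_client = client;
              return 0;

      bad:
              dm_io_client_destroy(client);   /* the cleanup that was missing */
              return -ENOMEM;
      }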
* dm snapshot: change yield to msleep (Mikulas Patocka, 2009-01-06: 1 file changed, -3/+4)
  Change yield() to msleep(1). If the thread had realtime priority, yield() doesn't really yield, so the yielding process would loop indefinitely and cause a machine lockup.
  Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
  Signed-off-by: Alasdair G Kergon <agk@redhat.com>
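  In sketch form, the waiting loop becomes:

      #include <linux/delay.h>

      /*
       * With a realtime-priority caller, yield() may return immediately and
       * the loop spins forever; msleep(1) always lets other threads run.
       */
      static void wait_for_condition(int (*done)(void *data), void *data)
      {
              while (!done(data))
                      msleep(1);      /* previously: yield(); */
      }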
* dm table: drop reference at unbind (Mikulas Patocka, 2009-01-06: 1 file changed, -1/+1)
  Move one dm_table_put() so that the last reference in the thread gets dropped in __unbind().
  This is required for a following patch, dm-table-rework-reference-counting.patch, which will change the logic in such a way that the table destructor is called only at specific points in the code.
  Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
  Signed-off-by: Alasdair G Kergon <agk@redhat.com>
* rtc: add alarm/update irq interfaces (Alessandro Zummo, 2009-01-04: 3 files changed, -17/+94)
  Add standard interfaces for enabling alarm/update irqs. Drivers are no longer required to implement equivalent ioctl code, as rtc-dev will provide it.
  UIE emulation should now be handled correctly and will work even for those RTC drivers that cannot be configured to do both UIE and AIE.
  Signed-off-by: Alessandro Zummo <a.zummo@towertech.it>
  Cc: David Brownell <david-b@pacbell.net>
  Cc: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
  Cc: Ralf Baechle <ralf@linux-mips.org>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
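  A hedged sketch of a driver hooking the alarm callback through rtc_class_ops (the driver names are invented; the companion update_irq_enable hook added by this patch was later removed from the kernel, so only the alarm one is shown):

      #include <linux/rtc.h>

      static int my_rtc_alarm_irq_enable(struct device *dev, unsigned int enabled)
      {
              /* program the RTC hardware's alarm-interrupt enable bit here */
              return 0;
      }

      static const struct rtc_class_ops my_rtc_ops = {
              .alarm_irq_enable = my_rtc_alarm_irq_enable,
              /* .read_time, .set_time, .read_alarm, .set_alarm, ... */
      };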
* viafb: fix crashes due to 4k stack overflow (Bruno Prémont, 2009-01-04: 1 file changed, -119/+129)
  The function viafb_cursor() uses two stack variables of CURSOR_SIZE bits; CURSOR_SIZE is defined as (8 * 1024). Using 1k of stack twice is too much for a 4k stack (though it works with 8k stacks).
  Make those two variables kzalloc'ed to preserve stack space.
  Also merge the whole lot of local structs in viafb_ioctl into a union so the stack usage gets minimized there as well. (The structs are only accessed in their individual IOCTL cases.)
  This second part is only compile-tested as I know of no userspace app using the IOCTLs.
  Signed-off-by: Bruno Prémont <bonbons@linux-vserver.org>
  Cc: <JosephChan@via.com.tw>
  Cc: Krzysztof Helt <krzysztof.h1@poczta.fm>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
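  The first part of the fix is the standard "move big buffers off the stack" pattern; a sketch with illustrative names:

      #include <linux/errno.h>
      #include <linux/slab.h>
      #include <linux/types.h>

      #define MY_CURSOR_SIZE (8 * 1024)       /* bits, as in the driver */

      static int my_load_cursor(void)
      {
              u32 *data, *mask;
              int ret = 0;

              /* two ~1 kB bitmaps now come from the heap, not the 4 kB stack */
              data = kzalloc(MY_CURSOR_SIZE / 8, GFP_KERNEL);
              mask = kzalloc(MY_CURSOR_SIZE / 8, GFP_KERNEL);
              if (!data || !mask) {
                      ret = -ENOMEM;
                      goto out;
              }

              /* ... build the cursor image in data/mask and write it to hardware ... */

      out:
              kfree(data);            /* kfree(NULL) is a no-op */
              kfree(mask);
              return ret;
      }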
* Merge branch 'cpus4096-for-linus-3' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip (Linus Torvalds, 2009-01-03: 7 files changed, -58/+135)
  * 'cpus4096-for-linus-3' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip: (77 commits)
    x86: setup_per_cpu_areas() cleanup
    cpumask: fix compile error when CONFIG_NR_CPUS is not defined
    cpumask: use alloc_cpumask_var_node where appropriate
    cpumask: convert shared_cpu_map in acpi_processor* structs to cpumask_var_t
    x86: use cpumask_var_t in acpi/boot.c
    x86: cleanup some remaining usages of NR_CPUS where s/b nr_cpu_ids
    sched: put back some stack hog changes that were undone in kernel/sched.c
    x86: enable cpus display of kernel_max and offlined cpus
    ia64: cpumask fix for is_affinity_mask_valid()
    cpumask: convert RCU implementations, fix
    xtensa: define __fls
    mn10300: define __fls
    m32r: define __fls
    h8300: define __fls
    frv: define __fls
    cris: define __fls
    cpumask: CONFIG_DISABLE_OBSOLETE_CPUMASK_FUNCTIONS
    cpumask: zero extra bits in alloc_cpumask_var_node
    cpumask: replace for_each_cpu_mask_nr with for_each_cpu in kernel/time/
    cpumask: convert mm/
    ...
| * cpumask: fix compile error when CONFIG_NR_CPUS is not defined (Mike Travis, 2009-01-03: 1 file changed, -1/+1)
  CONFIG_NR_CPUS will be defined for all arches whether SMP or not, but it may not have made it into all arches yet.
  Signed-off-by: Mike Travis <travis@sgi.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
| * cpumask: convert shared_cpu_map in acpi_processor* structs to cpumask_var_t (Rusty Russell, 2009-01-03: 3 files changed, -44/+78)
  Impact: Reduce memory usage, use new API.
  This is part of an effort to reduce structure sizes for machines configured with large NR_CPUS. cpumask_t gets replaced by cpumask_var_t, which is either struct cpumask[1] (small NR_CPUS) or struct cpumask * (large NR_CPUS).
  (Changes to powernow-k* by <travis>.)
  Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
  Signed-off-by: Mike Travis <travis@sgi.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
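  A minimal sketch of the cpumask_var_t usage pattern the series converts to (the function itself is illustrative):

      #include <linux/cpumask.h>
      #include <linux/slab.h>

      static int my_use_cpumask(void)
      {
              cpumask_var_t mask;

              /* a no-op for small NR_CPUS, a real allocation (which can fail) for large NR_CPUS */
              if (!alloc_cpumask_var(&mask, GFP_KERNEL))
                      return -ENOMEM;

              cpumask_copy(mask, cpu_online_mask);
              /* ... operate on mask ... */

              free_cpumask_var(mask);
              return 0;
      }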
| * Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/rusty/linux-2.6-cpumask into merge-rr-cpumask (Mike Travis, 2009-01-03: 1616 files changed, -61223/+145230)
  Conflicts:
    arch/x86/kernel/io_apic.c
    kernel/rcuclassic.c
    kernel/sched.c
    kernel/time/tick-sched.c
  Signed-off-by: Mike Travis <travis@sgi.com>
  [ mingo@elte.hu: backmerged typo fix for io_apic.c ]
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
| | * percpu: fix percpu accessors to potentially !cpu_possible() cpus: pnpbios (Rusty Russell, 2009-01-01: 1 file changed, -1/+1)
  Impact: CPU iterator bugfixes
  Percpu areas are only allocated for possible cpus. In general, you shouldn't access a random cpu's percpu area.
  Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
  Signed-off-by: Mike Travis <travis@sgi.com>
  Acked-by: Ingo Molnar <mingo@elte.hu>
  Cc: Adam Belay <ambx1@neo.rr.com>
| | * Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6 (Rusty Russell, 2008-12-31: 657 files changed, -19874/+82905)
  Conflicts:
    arch/x86/kernel/io_apic.c
| | * | cpumask: use new cpumask API in drivers/infiniband/hw/ipath (Rusty Russell, 2008-12-29: 1 file changed, -4/+4)
  Impact: cleanup
  We're moving from handing around cpumask_t's to handing around struct cpumask *'s. cpus_*, cpumask_t and cpu_*_map are deprecated: convert to cpumask_*, cpu_*_mask.
  Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
  Cc: Ralph Campbell <infinipath@qlogic.com>
| | * | cpumask: use new cpumask API in drivers/infiniband/hw/ehca (Rusty Russell, 2008-12-29: 1 file changed, -5/+5)
  Impact: cleanup
  We're moving from handing around cpumask_t's to handing around struct cpumask *'s. cpus_*, cpumask_t and cpu_*_map are deprecated: convert to cpumask_*, cpu_*_mask.
  Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
  Acked-by: Hoang-Nam Nguyen <hnguyen@de.ibm.com>
  Tested-by: Hoang-Nam Nguyen <hnguyen@de.ibm.com>
  Cc: Christoph Raisch <raisch@de.ibm.com>
| | * | cpumask: use for_each_online_cpu() in drivers/infiniband/hw/ehca/ehca_irq.c (Rusty Russell, 2008-12-29: 1 file changed, -4/+3)
  Impact: cleanup
  In future, accessing cpu numbers beyond nr_cpu_ids (the runtime limit) will be undefined. We can avoid future problems by using for_each_online_cpu() here.
  Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
  Acked-by: Hoang-Nam Nguyen <hnguyen@de.ibm.com>
  Tested-by: Hoang-Nam Nguyen <hnguyen@de.ibm.com>
  Cc: Christoph Raisch <raisch@de.ibm.com>
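  In sketch form, the conversion replaces a raw 0..NR_CPUS-1 scan with the online-cpu iterator:

      #include <linux/cpumask.h>

      static unsigned int my_count_online_cpus(void)
      {
              unsigned int cpu, n = 0;

              /* never indexes past nr_cpu_ids, unlike a fixed NR_CPUS loop */
              for_each_online_cpu(cpu)
                      n++;

              return n;
      }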
| | * | Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6 (Rusty Russell, 2008-12-29: 1042 files changed, -41588/+62727)
| | * | | cpumask: add sysfs displays for configured and disabled cpu maps (Mike Travis, 2008-12-19: 1 file changed, -0/+44)
  Impact: add new sysfs files.
  Add sysfs files "kernel_max" and "offline" to display the max CPU index allowed (NR_CPUS-1), and the map of cpus that are offline.
  Cpus can be offlined via HOTPLUG, disabled by the BIOS ACPI tables, or if they exceed the number of cpus allowed by the NR_CPUS config option, or the "maxcpus=NUM" kernel start parameter.
  The "possible_cpus=NUM" parameter can also extend the number of possible cpus allowed, in which case the cpus not present at startup will be in the offline state. (These cpus can be HOTPLUGGED ON after system startup [pending a follow-on patch to provide the capability via the /sys/devices/system/cpu/cpuN/online mechanism to bring them online.])
  By design, the "offlined cpus > possible cpus" display will always use the following formats:
  * all possible cpus online: "x$" or "x-y$"
  * some possible cpus offline: ".*,x$" or ".*,x-y$"
  where:
    x == number of possible cpus (nr_cpu_ids); and
    y == number of cpus >= NR_CPUS or maxcpus (if y > x).
  One use of this feature is for distros to select (or configure) the appropriate kernel to install for the resident system.
  Notes:
  * cpus offlined <= possible cpus will be printed for all architectures.
  * cpus offlined > possible cpus will only be printed for arches that set 'total_cpus' [X86 only in this patch].
  Based on tip/cpus4096 + .../rusty/linux-2.6-for-ingo.git/master + x86-only-patches sent 12/15.
  Signed-off-by: Mike Travis <travis@sgi.com>
  Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
* | | | | | Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/linux-2.6-iommu (Linus Torvalds, 2009-01-03: 4 files changed, -144/+947)
  * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/linux-2.6-iommu: (89 commits)
    AMD IOMMU: remove now unnecessary #ifdefs
    AMD IOMMU: prealloc_protection_domains should be static
    kvm/iommu: fix compile warning
    AMD IOMMU: add statistics about total number of map requests
    AMD IOMMU: add statistics about allocated io memory
    AMD IOMMU: add stats counter for domain tlb flushes
    AMD IOMMU: add stats counter for single iommu domain tlb flushes
    AMD IOMMU: add stats counter for cross-page request
    AMD IOMMU: add stats counter for free_coherent requests
    AMD IOMMU: add stats counter for alloc_coherent requests
    AMD IOMMU: add stats counter for unmap_sg requests
    AMD IOMMU: add stats counter for map_sg requests
    AMD IOMMU: add stats counter for unmap_single requests
    AMD IOMMU: add stats counter for map_single requests
    AMD IOMMU: add stats counter for completion wait events
    AMD IOMMU: add init code for statistic collection
    AMD IOMMU: add necessary header defines for stats counting
    AMD IOMMU: add Kconfig entry for statistic collection code
    AMD IOMMU: use dev_name in iommu_enable function
    AMD IOMMU: use calc_devid in prealloc_protection_domains
    ...
| * | | | | intel-iommu: fix bit shift at DOMAIN_FLAG_P2P_MULTIPLE_DEVICES (Mike Day, 2009-01-03: 1 file changed, -1/+1)
  Signed-off-by: Mike Day <ncmike@ncultra.org>
  Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
| * | | | | VT-d: remove now unused intel_iommu_found function (Joerg Roedel, 2009-01-03: 1 file changed, -6/+0)
  Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
| * | | | | VT-d: register functions for the IOMMU API (Joerg Roedel, 2009-01-03: 1 file changed, -0/+15)
  Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
| * | | | | VT-d: adapt domain iova_to_phys function for IOMMU API (Joerg Roedel, 2009-01-03: 1 file changed, -3/+4)
  Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
| * | | | | VT-d: adapt domain map and unmap functions for IOMMU API (Joerg Roedel, 2009-01-03: 1 file changed, -13/+20)
  Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
| * | | | | VT-d: adapt device attach and detach functions for IOMMU API (Joerg Roedel, 2009-01-03: 1 file changed, -12/+15)
  Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
| * | | | | VT-d: adapt domain init and destroy functions for IOMMU API (Joerg Roedel, 2009-01-03: 1 file changed, -15/+18)
  Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
| * | | | | select IOMMU_API when DMAR and/or AMD_IOMMU is selected (Joerg Roedel, 2009-01-03: 1 file changed, -0/+1)
  These two IOMMUs can implement the current version of this API. So select the API if one or both of these IOMMU drivers is selected.
  Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
| * | | | | add frontend implementation for the IOMMU API (Joerg Roedel, 2009-01-03: 1 file changed, -0/+100)
  This API can be used by KVM for accessing different types of IOMMUs to do device passthrough to guests. Besides that, this API can also be used by device drivers to map non-linear host memory into dma-linear addresses to prevent scatter-gather DMA. UIO may be another user for this API.
  Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
  Acked-by: Greg Kroah-Hartman <gregkh@suse.de>
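  A consumer of this API (KVM or a driver) roughly allocates a domain, attaches a device, and maps memory into it. The sketch below uses the function names from the API as originally merged (iommu_map_range/iommu_unmap_range were later renamed, and several prototypes have since changed), so treat it as an outline rather than code for any particular kernel version:

      #include <linux/iommu.h>

      static int my_passthrough_setup(struct device *dev, unsigned long iova,
                                      phys_addr_t paddr, size_t size)
      {
              struct iommu_domain *domain;
              int ret;

              domain = iommu_domain_alloc();          /* later kernels take a bus argument */
              if (!domain)
                      return -ENOMEM;

              ret = iommu_attach_device(domain, dev);
              if (ret)
                      goto out_free;

              ret = iommu_map_range(domain, iova, paddr, size,
                                    IOMMU_READ | IOMMU_WRITE);
              if (ret)
                      goto out_detach;

              return 0;

      out_detach:
              iommu_detach_device(domain, dev);
      out_free:
              iommu_domain_free(domain);
              return ret;
      }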
| * | | | | Check agaw is sufficient for mapped memory (Weidong Han, 2009-01-03: 1 file changed, -0/+61)
  When a domain is related to multiple iommus, we need to check whether the minimum agaw is sufficient for the mapped memory.
  Signed-off-by: Weidong Han <weidong.han@intel.com>
  Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
| * | | | | Change intel iommu APIs of virtual machine domain (Weidong Han, 2009-01-03: 1 file changed, -70/+59)
  These APIs are used by KVM to use VT-d.
  Signed-off-by: Weidong Han <weidong.han@intel.com>
  Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
| * | | | | Change domain_context_mapping_one for virtual machine domain (Weidong Han, 2009-01-03: 1 file changed, -3/+52)
  vm_domid won't be set in the context, so find an available domain id for a device from its iommu. For a virtual machine domain, a default agaw will be set, and the top levels of page tables are skipped for an iommu that has a smaller agaw than the default.
  Signed-off-by: Weidong Han <weidong.han@intel.com>
  Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
| * | | | | Allocation and free functions of virtual machine domain (Weidong Han, 2009-01-03: 1 file changed, -2/+105)
  A virtual machine domain is different from a native DMA-API domain, so implement separate allocation and free functions for virtual machine domains.
  Signed-off-by: Weidong Han <weidong.han@intel.com>
  Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
| * | | | | Add domain_flush_cache (Weidong Han, 2009-01-03: 1 file changed, -17/+26)
  Because a virtual machine domain may have multiple devices from different iommus, it cannot use __iommu_flush_cache. In some common low-level functions, use domain_flush_cache instead of __iommu_flush_cache. On the other hand, in some functions the iommu is specified or the domain cannot be obtained, so __iommu_flush_cache is still used there.
  Signed-off-by: Weidong Han <weidong.han@intel.com>
  Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
| * | | | | Add/remove domain device info for virtual machine domain (Weidong Han, 2009-01-03: 1 file changed, -5/+166)
  Add an iommu reference count in the domain, and add a lock to protect the iommu settings, including iommu_bmp, iommu_count and iommu_coherency.
  A virtual machine domain may have multiple devices from different iommus, so it needs to do more work when adding/removing domain device info. Thus implement these functions separately for virtual machine domains.
  Signed-off-by: Weidong Han <weidong.han@intel.com>
  Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
| * | | | | Add domain flag DOMAIN_FLAG_VIRTUAL_MACHINE (Weidong Han, 2009-01-03: 1 file changed, -0/+7)
  Add this flag for VT-d used in virtual machines, like KVM.
  Signed-off-by: Weidong Han <weidong.han@intel.com>
  Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
| * | | | | iommu coherency (Weidong Han, 2009-01-03: 1 file changed, -0/+24)
  In a dmar_domain, more than one iommu may be included in iommu_bmp. Because the "Coherency" capability may differ across iommus, add this variable to indicate whether iommu access is coherent or not. Only when all related iommus in a dmar_domain are coherent is iommu access of this domain coherent.
  Signed-off-by: Weidong Han <weidong.han@intel.com>
  Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
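  The rule ("coherent only if every backing iommu is coherent") reduces to an AND over the bitmap; a hedged sketch with invented names:

      #include <linux/bitops.h>
      #include <linux/types.h>

      #define MY_MAX_IOMMUS 64

      struct my_domain {
              unsigned long iommu_bmp[BITS_TO_LONGS(MY_MAX_IOMMUS)];
              int iommu_coherency;
      };

      static void my_update_coherency(struct my_domain *d, bool (*coherent)(int idx))
      {
              int i;

              d->iommu_coherency = 1;
              for_each_set_bit(i, d->iommu_bmp, MY_MAX_IOMMUS) {
                      if (!coherent(i)) {     /* one incoherent iommu spoils the domain */
                              d->iommu_coherency = 0;
                              break;
                      }
              }
      }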
| * | | | | calculate agaw for each iommu (Weidong Han, 2009-01-03: 2 files changed, -0/+32)
  The "SAGAW" capability may be different across iommus. Use a default agaw, but if the default agaw is not supported in some iommus, choose a smaller supported agaw.
  Signed-off-by: Weidong Han <weidong.han@intel.com>
  Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
| * | | | | iommu bitmap instead of iommu pointer in dmar_domain (Weidong Han, 2009-01-03: 1 file changed, -30/+67)
  In order to support assigning multiple devices from different iommus to a domain, an iommu bitmap is used to keep track of all the iommus the domain is related to.
  Signed-off-by: Weidong Han <weidong.han@intel.com>
  Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>