path: root/drivers/vhost/vhost.c
Commit message | Author | Age | Files | Lines
* vhost: switch vhost get_indirect() to iov_iter, kill memcpy_fromiovec() | Al Viro | 2015-02-04 | 1 | -2/+4
  Cc: Michael S. Tsirkin <mst@redhat.com>
  Cc: kvm@vger.kernel.org
  Cc: virtualization@lists.linux-foundation.org
  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* vhost: relax used address alignment | Michael S. Tsirkin | 2014-12-29 | 1 | -3/+7
  virtio 1.0 only requires the used address to be 4 byte aligned, while vhost required 8 bytes (the size of vring_used_elem). Fix up vhost to match that. Additionally, while vhost correctly requires 8 byte alignment for the log, that is unconnected to the used ring: it is a consequence of the log having u64 entries. Tweak the code to make that clearer.
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
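  A minimal userspace C sketch of the two rules described above; the helper names and the VRING_USED_ALIGN_SIZE constant are illustrative here, not the actual vhost code:

      #include <stdbool.h>
      #include <stdint.h>

      #define VRING_USED_ALIGN_SIZE 4   /* virtio 1.0 minimum for the used ring */

      /* 4-byte alignment is enough for the u16 flags/idx and the u32 fields of
       * each used element; 8 bytes (sizeof(struct vring_used_elem)) is not
       * required by the spec. */
      static bool used_addr_aligned(uint64_t used_addr)
      {
              return (used_addr % VRING_USED_ALIGN_SIZE) == 0;
      }

      /* The log keeps its 8-byte requirement because its entries are u64;
       * this is independent of the used ring layout. */
      static bool log_addr_aligned(uint64_t log_addr)
      {
              return (log_addr % sizeof(uint64_t)) == 0;
      }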
* vhost: virtio 1.0 endian-ness support | Michael S. Tsirkin | 2014-12-09 | 1 | -37/+49
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
* vhost: switch to __get/__put_user exclusively | Michael S. Tsirkin | 2014-12-09 | 1 | -3/+10
  Most places in vhost can use __get/__put_user rather than get/put_user since addresses are pre-validated. This should be good for performance, but it also helps make the code sparse-clean: the get/put_user macros don't play well with __virtioXX bitwise tags. Switch get/put_user to the __ variants everywhere in vhost. There's one exception; for consistency switch that as well, and add an explicit access_ok check.
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
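  A simplified kernel-style sketch of the pattern, not the vhost code itself; note that access_ok() took a VERIFY_* argument in kernels of this era:

      #include <linux/errno.h>
      #include <linux/types.h>
      #include <linux/uaccess.h>

      /* Validate the user range once, at setup time ... */
      static int demo_validate_avail_idx(u16 __user *avail_idx)
      {
              return access_ok(VERIFY_READ, avail_idx, sizeof(*avail_idx)) ? 0 : -EFAULT;
      }

      /* ... so the data path can use the unchecked accessor afterwards. */
      static int demo_read_avail_idx(u16 __user *avail_idx, u16 *out)
      {
              return __get_user(*out, avail_idx);
      }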
* vhost: move memory pointer to VQs | Michael S. Tsirkin | 2014-06-09 | 1 | -32/+25
  Commit 2ae76693b8bcabf370b981cd00c36cd41d33fabc ("vhost: replace rcu with mutex") replaced rcu sync for memory accesses with VQ mutex lock/unlock. This is correct since all accesses are under VQ mutex, but incomplete: we still do useless rcu lock/unlock operations, and someone might copy this code into some other context where this won't be right. This use of RCU is also non-standard and hard to understand. Let's copy the pointer to each VQ structure; this way the access rules become straightforward, and there's no need for RCU anymore.
  Reported-by: Eric Dumazet <eric.dumazet@gmail.com>
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
* vhost: move acked_features to VQs | Michael S. Tsirkin | 2014-06-09 | 1 | -17/+19
  Refactor code to make sure features are only accessed under VQ mutex. This makes everything simpler, no need for RCU here anymore.
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
* vhost: replace rcu with mutex | Michael S. Tsirkin | 2014-06-09 | 1 | -1/+9
  All memory accesses are done under some VQ mutex, so locking and unlocking all VQs is a faster equivalent of synchronize_rcu() for memory access changes. Some guests cause a lot of these changes, so it's helpful to make them faster.
  Reported-by: "Gonglei (Arei)" <arei.gonglei@huawei.com>
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
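  A hedged sketch of the idea (illustrative structures, not the vhost ones): because every reader holds some VQ mutex, taking and releasing each one in turn guarantees no reader still sees the old memory table:

      #include <linux/mutex.h>

      struct demo_vq  { struct mutex mutex; };
      struct demo_dev { struct demo_vq *vqs; int nvqs; };

      /* Cheaper stand-in for synchronize_rcu(): once we have cycled every
       * VQ mutex, no worker can still be using the previous memory table. */
      static void demo_flush_memory_readers(struct demo_dev *d)
      {
              int i;

              for (i = 0; i < d->nvqs; ++i) {
                      mutex_lock(&d->vqs[i].mutex);
                      mutex_unlock(&d->vqs[i].mutex);
              }
      }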
* vhost: remove the dead branch | Zhi Yong Wu | 2013-12-06 | 1 | -3/+1
  Since vhost_dev_init() always returns 0, some branches are never run and can therefore be removed.
  Signed-off-by: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>
  Acked-by: Michael S. Tsirkin <mst@redhat.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* vhost: wake up worker outside spin_lock | Qin Chuanyu | 2013-09-17 | 1 | -1/+3
  The wake_up_process call sits inside the spin_lock/unlock section in vhost_work_queue, but it could be done outside the spin_lock. I tested this with kernel 3.0.27 and a suse11-sp2 guest using iperf; the numbers are below.
      thread_num | original: tp(Gbps) vhost(%) | modified: tp(Gbps) vhost(%)
      1          | 9.59   28.82                | 9.59   27.49
      8          | 9.61   32.92                | 9.62   26.77
      64         | 9.58   46.48                | 9.55   38.99
      256        | 9.6    63.7                 | 9.6    52.59
  Signed-off-by: Chuanyu Qin <qinchuanyu@huawei.com>
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
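  An illustrative sketch of the pattern (hypothetical names, not the exact vhost code): queue the work under the lock, but call wake_up_process() only after dropping it, so the woken worker does not immediately contend on the same spinlock:

      #include <linux/list.h>
      #include <linux/sched.h>
      #include <linux/spinlock.h>
      #include <linux/types.h>

      struct demo_work { struct list_head node; };
      struct demo_dev {
              spinlock_t work_lock;
              struct list_head work_list;
              struct task_struct *worker;
      };

      static void demo_work_queue(struct demo_dev *dev, struct demo_work *work)
      {
              unsigned long flags;
              bool queued = false;

              spin_lock_irqsave(&dev->work_lock, flags);
              if (list_empty(&work->node)) {
                      list_add_tail(&work->node, &dev->work_list);
                      queued = true;
              }
              spin_unlock_irqrestore(&dev->work_lock, flags);

              if (queued)
                      wake_up_process(dev->worker);   /* now outside the lock */
      }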
* vhost: switch to use vhost_add_used_n() | Jason Wang | 2013-09-04 | 1 | -42/+12
  Let vhost_add_used() use vhost_add_used_n() to reduce code duplication. To avoid the overhead brought by __copy_to_user(), use put_user() when only one used element needs to be added.
  Signed-off-by: Jason Wang <jasowang@redhat.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* vhost: Include linux/uio.h instead of linux/socket.h | Asias He | 2013-08-21 | 1 | -1/+1
  memcpy_fromiovec is moved from net/core/iovec.c to lib/iovec.c. linux/uio.h provides the declaration for memcpy_fromiovec. Include linux/uio.h instead of linux/socket.h for it.
  Signed-off-by: Asias He <asias@redhat.com>
  Acked-by: Michael S. Tsirkin <mst@redhat.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* vhost: Make vhost a separate module | Asias He | 2013-07-07 | 1 | -1/+50
  Currently, vhost-net and vhost-scsi are sharing the vhost core code. However, vhost-scsi shares the code by including the vhost.c file directly. Making vhost a separate module makes it easier to share code with other vhost devices.
  Signed-off-by: Asias He <asias@redhat.com>
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
* vhost: Simplify dev->vqs[i] access | Asias He | 2013-07-07 | 1 | -17/+18
  Signed-off-by: Asias He <asias@redhat.com>
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
* vhost: check owner before we overwrite ubuf_info | Michael S. Tsirkin | 2013-06-11 | 1 | -1/+7
  If device has an owner, we shouldn't touch ubuf_info since it might be in use.
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* vhost: drop virtio_net.h dependency | Michael S. Tsirkin | 2013-05-06 | 1 | -1/+1
  There's no net specific code in vhost.c anymore, don't include the virtio_net.h header.
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
* vhost: Export vhost_dev_set_owner | Asias He | 2013-05-06 | 1 | -1/+1
  Signed-off-by: Asias He <asias@redhat.com>
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
* vhost: fix error handling in RESET_OWNER ioctl | Michael S. Tsirkin | 2013-05-01 | 1 | -9/+7
  RESET_OWNER ioctl would leave the fd in a bad state if memory allocation failed: device is stopped but owner is not reset. Make state changes after allocating memory, such that a failed ioctl has no effect.
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
* vhost: move per-vq net specific fields out to net | Michael S. Tsirkin | 2013-05-01 | 1 | -2/+0
  This will remove the need for vhost scsi to pull in virtio-net.h.
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
* vhost: move vhost-net zerocopy fields to net.c | Asias He | 2013-05-01 | 1 | -56/+1
  On top of 'vhost: Allow device specific fields per vq', we can move device specific fields to device virt queue from vhost virt queue.
  Signed-off-by: Asias He <asias@redhat.com>
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
* vhost: Allow device specific fields per vq | Asias He | 2013-05-01 | 1 | -44/+44
  This is useful for any device that wants device specific fields per vq. For example, tcm_vhost wants a per-vq field to track requests which are in flight on the vq. Also, on top of this we can add patches to move things like ubufs from vhost.h out to net.c.
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
  Signed-off-by: Asias He <asias@redhat.com>
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
* vhost_net: remove tx polling state | Jason Wang | 2013-04-11 | 1 | -0/+3
  After commit 2b8b328b61c799957a456a5a8dab8cc7dea68575 ("vhost_net: handle polling errors when setting backend"), we in fact track the polling state through poll->wqh, so there's no need to duplicate the work with an extra vhost_net_polling_state. This patch removes it and makes the code simpler. It also removes all the tx starting/stopping code in the tx path, according to Michael's suggestion. A netperf test shows almost the same result in the stream test, but gets improvements on TCP_RR tests (both zerocopy and copy), especially in low load cases. Tested between a multiqueue kvm guest and an external host with two directly connected 82599s.
  zerocopy disabled (sessions | transaction rates before/after/+improvement | normalized before/after/+improvement):
      1  | 9510.24/11727.29/+23.3%    | 693.54/887.68/+28.0%
      25 | 192931.50/241729.87/+25.3% | 2376.80/2771.70/+16.6%
      50 | 277634.64/291905.76/+5%    | 3118.36/3230.11/+3.6%
  zerocopy enabled (sessions | transaction rates before/after/+improvement | normalized before/after/+improvement):
      1  | 7318.33/11929.76/+63.0%    | 521.86/843.30/+61.6%
      25 | 167264.88/242422.15/+44.9% | 2181.60/2788.16/+27.8%
      50 | 272181.02/294347.04/+8.1%  | 3071.56/3257.85/+6.1%
  Signed-off-by: Jason Wang <jasowang@redhat.com>
  Acked-by: Michael S. Tsirkin <mst@redhat.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* vhost_net: handle polling errors when setting backend | Jason Wang | 2013-01-29 | 1 | -3/+15
  Currently, polling errors were ignored, which can lead to the following issues:
  - vhost removes itself unconditionally from the waitqueue when stopping the poll; this may crash the kernel, since a previous attempt at starting may have failed to add itself to the waitqueue
  - userspace may think the backend was successfully set even when the polling failed
  Solve this by:
  - checking poll->wqh before trying to remove from the waitqueue
  - reporting polling errors in vhost_poll_start() and tx_poll_start(); the return value will be checked and returned when userspace wants to set the backend
  After this fix, there could still be a polling failure after the backend is set; that will be addressed by the next patch.
  Signed-off-by: Jason Wang <jasowang@redhat.com>
  Acked-by: Michael S. Tsirkin <mst@redhat.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
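  A sketch of the stop-side guard described above, using the waitqueue types of that era and hypothetical names rather than the actual vhost symbols:

      #include <linux/wait.h>

      struct demo_poll {
              wait_queue_head_t *wqh;   /* NULL if start failed or never ran */
              wait_queue_t       wait;
      };

      static void demo_poll_stop(struct demo_poll *poll)
      {
              /* Only detach if a previous start actually attached us. */
              if (poll->wqh) {
                      remove_wait_queue(poll->wqh, &poll->wait);
                      poll->wqh = NULL;
              }
      }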
* vhost: avoid backend flush on vring ops | Michael S. Tsirkin | 2012-12-06 | 1 | -4/+3
  vring changes already do a flush internally where appropriate, so we do not need a second flush. It's currently not very expensive but a follow-up patch makes flush more heavy-weight, so remove the extra flush here to avoid regressing performance if call or kick fds are changed on data path.
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
* Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net | David S. Miller | 2012-11-29 | 1 | -1/+1
  Signed-off-by: David S. Miller <davem@davemloft.net>
* vhost: fix length for cross region descriptor | Michael S. Tsirkin | 2012-11-28 | 1 | -1/+1
  If a single descriptor crosses a region, the second chunk length should be decremented by the size translated so far; instead, it currently includes the full descriptor length.
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
  Acked-by: Jason Wang <jasowang@redhat.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
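  A userspace C sketch of the length rule described above (the struct and helper are hypothetical, not the vhost translation code):

      #include <stdint.h>

      struct demo_region { uint64_t guest_phys_addr; uint64_t memory_size; };

      /* How many bytes of the descriptor fit in this region.  The caller must
       * pass the *remaining* length (total minus what earlier regions already
       * covered), not the full descriptor length; that was the bug. */
      static uint64_t demo_chunk_len(const struct demo_region *reg,
                                     uint64_t addr, uint64_t remaining)
      {
              uint64_t in_region = reg->guest_phys_addr + reg->memory_size - addr;

              return remaining < in_region ? remaining : in_region;
      }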
* vhost: move -net specific code out | Michael S. Tsirkin | 2012-11-03 | 1 | -46/+7
  Zerocopy handling code is vhost-net specific. Move it from vhost.c/vhost.h out to net.c.
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* vhost: track zero copy failures using DMA length | Michael S. Tsirkin | 2012-11-03 | 1 | -3/+4
  This will be used to disable zerocopy when error rate is high.
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* vhost-net: cleanup macros for DMA status tracking | Michael S. Tsirkin | 2012-11-03 | 1 | -1/+1
  Better document macros for DMA tracking. Add an explicit one for DMA in progress instead of relying on user supplying len != 1.
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* skb: report completion status for zero copy skbs | Michael S. Tsirkin | 2012-11-03 | 1 | -1/+1
  Even if skb is marked for zero copy, net core might still decide to copy it later which is somewhat slower than a copy in user context: besides copying the data we need to pin/unpin the pages. Add a parameter reporting such cases through zero copy callback: if this happens a lot, device can take this into account and switch to copying in user context. This patch updates all users but ignores the passed value for now: it will be used by follow-up patches.
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* vhost_set_vring(): turn pollstart/pollstop into bool | Al Viro | 2012-09-27 | 1 | -4/+4
  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* vhost: make vhost work queue visible | Stefan Hajnoczi | 2012-07-22 | 1 | -3/+2
  The vhost work queue allows processing to be done in vhost worker thread context, which uses the owner process mm. Access to the vring and guest memory is typically only possible from vhost worker context so it is useful to allow work to be queued directly by users. Currently vhost_net only uses the poll wrappers which do not expose the work queue functions. However, for tcm_vhost (vhost_scsi) it will be necessary to queue custom work.
  Signed-off-by: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
  Cc: Zhi Yong Wu <wuzhy@cn.ibm.com>
  Cc: Michael S. Tsirkin <mst@redhat.com>
  Cc: Paolo Bonzini <pbonzini@redhat.com>
  Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
* vhost: use USER_DS in vhost_worker thread | Jens Freimann | 2012-06-27 | 1 | -0/+3
  On some architectures address spaces are set up in a way that makes this unnecessary for correct operation, but on others (like s390) it is needed. Make sure we operate on the user address space, to allow copy_xxx_user() from the vhost_worker() thread, by setting it explicitly before calling use_mm() and reverting it after unuse_mm().
  Signed-off-by: Jens Freimann <jfrei@linux.vnet.ibm.com>
  Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
  Acked-by: Michael S. Tsirkin <mst@redhat.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
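  A sketch of the pattern using the kernel APIs of that era (set_fs()/use_mm() have since been reworked); the function name is illustrative, not the vhost code:

      #include <linux/mm_types.h>
      #include <linux/mmu_context.h>
      #include <linux/uaccess.h>

      static void demo_worker_enter_user_context(struct mm_struct *owner_mm)
      {
              mm_segment_t oldfs = get_fs();

              set_fs(USER_DS);     /* copy_*_user() must target user addresses */
              use_mm(owner_mm);    /* adopt the vhost owner's address space */

              /* ... copy_from_user()/copy_to_user() against guest memory ... */

              unuse_mm(owner_mm);
              set_fs(oldfs);
      }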
* vhost: zerocopy: poll vq in zerocopy callback | Jason Wang | 2012-05-02 | 1 | -0/+1
  We add used entries and signal the guest in the worker thread, but did not poll the virtqueue during the zero copy callback. This can cause the adding and signalling to be missed during zerocopy. Solve this by polling the virtqueue and letting it wake up the worker during the callback.
  Signed-off-by: Jason Wang <jasowang@redhat.com>
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
* skbuff: struct ubuf_info callback type safety | Michael S. Tsirkin | 2012-04-13 | 1 | -3/+2
  The skb struct ubuf_info callback gets passed struct ubuf_info itself, not the arg value as the field name and the function signature seem to imply. Rename the arg field to ctx to match usage, add documentation and change the callback argument type to make usage clear and to have compiler check correctness.
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* Merge branch 'vhost-net' of git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost | David S. Miller | 2012-03-23 | 1 | -4/+7
* vhost: fix release path lockdep checks | Michael S. Tsirkin | 2012-02-28 | 1 | -4/+5
  We shouldn't hold any locks on release path. Pass a flag to vhost_dev_cleanup to use the lockdep info correctly.
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
  Tested-by: Sasha Levin <levinsasha928@gmail.com>
* vhost: don't forget to schedule() | Nadav Har'El | 2012-02-28 | 1 | -0/+2
  This is a tiny, but important, patch to vhost. Vhost's worker thread only called schedule() when it had no work to do and it wanted to go to sleep. But if there's always work to do, e.g. the guest is running a network-intensive program like netperf with small message sizes, schedule() was *never* called. This had several negative implications (on non-preemptive kernels):
  1. Passing time was not properly accounted to the "vhost" process (ps and top would wrongly show it using zero CPU time).
  2. Sometimes error messages about RCU timeouts would be printed, if the core running the vhost thread didn't schedule() for a very long time.
  3. Worst of all, a vhost thread would "hog" the core. If several vhost threads need to share the same core, typically one would get most of the CPU time (and its associated guest most of the performance), while the others hardly get any work done.
  The trivial solution is to add "if (need_resched()) schedule();" after doing every piece of work. This will not do the heavy schedule() all the time, just when the timer interrupt decided a reschedule is warranted (so need_resched returns true). Thanks to Abel Gordon for this patch.
  Signed-off-by: Nadav Har'El <nyh@il.ibm.com>
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
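  A minimal sketch of where the check lands in a worker loop (the loop structure is illustrative, not the actual vhost_worker code):

      #include <linux/sched.h>

      static int demo_worker(void *data)
      {
              for (;;) {
                      /* ... dequeue and run one piece of work, or sleep if
                       * the work list is empty ... */

                      if (need_resched())
                              schedule();   /* yield only when the scheduler asks */
              }
              return 0;
      }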
* vhost: remove the second argument of k[un]map_atomic() | Cong Wang | 2012-03-20 | 1 | -2/+2
  Signed-off-by: Cong Wang <amwang@redhat.com>
* vhost: optimize interrupt enable/disable | Michael S. Tsirkin | 2011-07-19 | 1 | -2/+2
  As we now only update used ring after enabling the backend, we can write flags with __put_user: as that's done on data path, it matters.
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
* vhost: fix zcopy reference counting | Michael S. Tsirkin | 2011-07-19 | 1 | -1/+0
  Fix get/put refcount imbalance with zero copy, which caused qemu to hang forever on guest driver unload.
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
* vhost: set log when updating used flags or avail event | Jason Wang | 2011-07-19 | 1 | -30/+54
  We need to log writes when updating used flags and avail event fields. Otherwise the guest may see a stale value after migration and miss notifying the host.
  Signed-off-by: Jason Wang <jasowang@redhat.com>
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
* vhost: init used ring after backend was set | Jason Wang | 2011-07-19 | 1 | -8/+6
  Move the used ring initialization after backend was set. This makes it possible to disable the backend and tweak the used ring, then restart. This will also make it possible to log the used ring write correctly.
  Signed-off-by: Jason Wang <jasowang@redhat.com>
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
* vhost: vhost TX zero-copy support | Michael S. Tsirkin | 2011-07-18 | 1 | -15/+113
  From: Shirley Ma <mashirle@us.ibm.com>
  This adds experimental zero copy support in vhost-net, disabled by default. To enable, set the experimental_zcopytx module option to 1. This patch maintains the outstanding userspace buffers in the sequence they are delivered to vhost. The outstanding userspace buffers will be marked as done once the lower device's DMA on the buffers has finished. This is monitored through the last reference of the kfree_skb callback. Two buffer indices are used for this purpose. The vhost-net device passes the userspace buffer info to the lower device skb through message control. DMA done status checks and guest notification are handled by handle_tx: in the worst case all buffers in the vq are in pending/done status, so we need to notify the guest to release DMA done buffers first before we get any new buffers from the vq. One known problem is that if the guest stops submitting buffers, buffers might never get used until some further action, e.g. device reset. This does not seem to affect linux guests.
  Signed-off-by: Shirley <xma@us.ibm.com>
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* vhost: support event index | Michael S. Tsirkin | 2011-05-30 | 1 | -34/+104
  Support the new event index feature. When acked, utilize it to reduce the # of interrupts sent to the guest.
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
  Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
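  A userspace sketch of the event-index decision (it mirrors the vring_need_event() logic from the virtio ring headers): the host only interrupts the guest when the new used index has moved past the event index the guest published:

      #include <stdbool.h>
      #include <stdint.h>

      static bool demo_need_event(uint16_t event_idx, uint16_t new_idx, uint16_t old_idx)
      {
              /* All arithmetic is modulo 2^16, so this works across wrap-around. */
              return (uint16_t)(new_idx - event_idx - 1) < (uint16_t)(new_idx - old_idx);
      }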
* Correct occurrences of | Rob Landley | 2011-05-06 | 1 | -1/+1
  - Documentation/kvm/ to Documentation/virtual/kvm
  - Documentation/uml/ to Documentation/virtual/uml
  - Documentation/lguest/ to Documentation/virtual/lguest
  throughout the kernel source tree.
  Signed-off-by: Rob Landley <rob@landley.net>
  Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
* vhost: copy_from_user -> __copy_from_user | Michael S. Tsirkin | 2011-03-08 | 1 | -1/+1
  copy_from_user is pretty high on perf top profile, replacing it with __copy_from_user helps. It's also safe because we do access_ok checks during setup.
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
* vhost: Cleanup vhost.c and net.c | Krishna Kumar | 2011-03-08 | 1 | -16/+37
  Minor cleanup of vhost.c and net.c to match coding style.
  Signed-off-by: Krishna Kumar <krkumar2@in.ibm.com>
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
* vhost: fix signed/unsigned comparison | Michael S. Tsirkin | 2011-01-10 | 1 | -7/+11
  To detect that a sequence number is done, we are doing math on unsigned integers, so the result is unsigned too; not what was intended for the <= comparison. The result is that userspace gets stuck forever in the flush call. Convert to int to fix this. Further, get rid of ({}) to make the code clearer.
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
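  A userspace sketch of the bug class described above (hypothetical names): subtracting unsigned sequence counters yields an unsigned value, so "<= 0" degenerates to "== 0"; casting through a signed type restores the intended ordering test:

      #include <stdbool.h>

      static bool seq_done_buggy(unsigned int seq, unsigned int done_seq)
      {
              return seq - done_seq <= 0;        /* only true when exactly equal */
      }

      static bool seq_done_fixed(unsigned int seq, unsigned int done_seq)
      {
              return (int)(seq - done_seq) <= 0; /* true once done_seq catches up */
      }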
* vhost: better variable name in logging | Michael S. Tsirkin | 2010-12-09 | 1 | -4/+4
  We really store a page offset in write_address, so rename it write_page to avoid confusion.
  Signed-off-by: Jason Wang <jasowang@redhat.com>
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
* vhost: correctly set bits of dirty pages | Michael S. Tsirkin | 2010-12-09 | 1 | -1/+2
  Fix two bugs in dirty page logging: When counting pages we should increase address by 1 instead of VHOST_PAGE_SIZE. Make log_write() correctly process requests that cross pages with write_address not starting at page boundary.
  Reported-by: Jason Wang <jasowang@redhat.com>
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
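  A userspace sketch of the corrected walk (the names and bitmap layout are illustrative, not the vhost log_write code): advance page by page, setting one bit per dirty page, even when the write does not start on a page boundary:

      #include <stdint.h>

      #define DEMO_PAGE_SIZE 4096ULL

      static void demo_log_write(uint8_t *bitmap, uint64_t write_address,
                                 uint64_t write_length)
      {
              uint64_t page = write_address / DEMO_PAGE_SIZE;

              while (write_length) {
                      uint64_t page_off = write_address % DEMO_PAGE_SIZE;
                      uint64_t in_page  = DEMO_PAGE_SIZE - page_off;
                      uint64_t chunk    = write_length < in_page ? write_length : in_page;

                      bitmap[page / 8] |= (uint8_t)(1u << (page % 8)); /* mark page dirty */

                      write_address += chunk;   /* lands on the next page boundary */
                      write_length  -= chunk;
                      page          += 1;       /* count pages by one, not by PAGE_SIZE */
              }
      }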