Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/sage/ceph-client

Pull Ceph updates from Sage Weil:
"This has a mix of bug fixes and cleanups.
Alex's patch fixes a rare race in RBD. Ilya's patches fix an ENOENT
check when a second rbd image is mapped and a couple memory leaks.
Zheng fixes several issues with fragmented directories and multiple
MDSs. Josh fixes a spin/sleep issue, and Josh and Guangliang's
patches fix setting and unsetting RBD images read-only.
Naturally there are several other cleanups mixed in for good measure"
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/sage/ceph-client: (23 commits)
rbd: only set disk to read-only once
rbd: move calls that may sleep out of spin lock range
rbd: add ioctl for rbd
ceph: use truncate_pagecache() instead of truncate_inode_pages()
ceph: include time stamp in every MDS request
rbd: fix ida/idr memory leak
rbd: use reference counts for image requests
rbd: fix osd_request memory leak in __rbd_dev_header_watch_sync()
rbd: make sure we have latest osdmap on 'rbd map'
libceph: add ceph_monc_wait_osdmap()
libceph: mon_get_version request infrastructure
libceph: recognize poolop requests in debugfs
ceph: refactor readpage_nounlock() to make the logic clearer
mds: check cap ID when handling cap export message
ceph: remember subtree root dirfrag's auth MDS
ceph: introduce ceph_fill_fragtree()
ceph: handle cap import atomically
ceph: pre-allocate ceph_cap struct for ceph_add_cap()
ceph: update inode fields according to issued caps
rbd: replace IS_ERR and PTR_ERR with PTR_ERR_OR_ZERO
...

rbd: only set disk to read-only once

rbd_open(), called every time the device is opened, calls
set_device_ro(). There's no reason to set the device read-only or
read-write every time it is opened. Just do this once during device
setup, using set_disk_ro() instead, because the struct block_device
isn't available to us there.
Signed-off-by: Josh Durgin <josh.durgin@inktank.com>
Reviewed-by: Alex Elder <elder@linaro.org>
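
A minimal sketch of the idea (not the verbatim patch; where rbd wires
this into its setup path is an assumption): apply the read-only flag
to the gendisk once, at setup time, instead of on every open().

static int rbd_init_disk(struct rbd_device *rbd_dev)
{
        /* ... gendisk allocation and queue setup elided ... */

        /* set the flag once here; at setup time we only have the
         * gendisk, so set_disk_ro() is the right call, not
         * set_device_ro() */
        set_disk_ro(rbd_dev->disk, rbd_dev->mapping.read_only);

        return 0;
}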

rbd: move calls that may sleep out of spin lock range

get_user() and set_disk_ro() may allocate memory, leading to a
potential deadlock if they are called while a spin lock is held.
Move the acquisition and release of rbd_dev->lock from rbd_ioctl()
into rbd_ioctl_set_ro(), so it can occur between get_user() and
set_disk_ro().
Signed-off-by: Josh Durgin <josh.durgin@inktank.com>
Reviewed-by: Alex Elder <elder@linaro.org>
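
A sketch of the resulting shape (names follow the rbd driver; the
exact body of rbd_ioctl_set_ro() is an assumption): both sleepable
calls sit outside the spinlocked region.

static int rbd_ioctl_set_ro(struct rbd_device *rbd_dev, unsigned long arg)
{
        int ro;

        /* get_user() may fault in pages: keep it outside the lock */
        if (get_user(ro, (int __user *)arg))
                return -EFAULT;

        spin_lock_irq(&rbd_dev->lock);
        /* ... validate device state under the lock ... */
        rbd_dev->mapping.read_only = !!ro;
        spin_unlock_irq(&rbd_dev->lock);

        /* set_disk_ro() may allocate: call it after unlocking */
        set_disk_ro(rbd_dev->disk, ro);
        return 0;
}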

rbd: add ioctl for rbd

When running the following commands:
[root@ceph0 mnt]# blockdev --setro /dev/rbd1
[root@ceph0 mnt]# blockdev --getro /dev/rbd1
0
The setro didn't take effect because rbd does not implement the
block driver's ioctl interface.
This resolves:
http://tracker.ceph.com/issues/6265
Signed-off-by: Guangliang Zhao <guangliang@unitedstack.com>
Reviewed-by: Alex Elder <elder@linaro.org>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
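
A sketch of the wiring (close to what such a patch typically looks
like; the compat handler and extra validation are elided): BLKROSET
is routed through block_device_operations.ioctl.

static int rbd_ioctl(struct block_device *bdev, fmode_t mode,
                     unsigned int cmd, unsigned long arg)
{
        struct rbd_device *rbd_dev = bdev->bd_disk->private_data;

        switch (cmd) {
        case BLKROSET:
                return rbd_ioctl_set_ro(rbd_dev, arg);
        default:
                return -ENOTTY;
        }
}

static const struct block_device_operations rbd_bd_ops = {
        .owner   = THIS_MODULE,
        .open    = rbd_open,
        .release = rbd_release,
        .ioctl   = rbd_ioctl,
};

With this in place, the blockdev --setro/--getro pair above behaves
as expected.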

ceph: use truncate_pagecache() instead of truncate_inode_pages()

Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>

ceph: include time stamp in every MDS request

We recently modified the client/MDS protocol to include a timestamp in the
client request. This allows ctime updates to follow the client's clock
in most cases, which avoids subtle problems when clocks are out of sync
and timestamps are updated sometimes by the MDS clock (for most requests)
and sometimes by the client clock (for cap writeback).
Signed-off-by: Sage Weil <sage@inktank.com>
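
A sketch of the client-side half, assuming the request struct grew a
struct timespec r_stamp field (the helper name here is illustrative):
the stamp is taken when the request is created and later encoded into
the message sent to the MDS.

static struct ceph_mds_request *
ceph_mdsc_create_request_sketch(int op)
{
        struct ceph_mds_request *req;

        req = kzalloc(sizeof(*req), GFP_NOFS);
        if (!req)
                return ERR_PTR(-ENOMEM);

        req->r_op = op;
        /* record the client's view of "now"; the MDS can use it for
         * ctime updates instead of its own clock */
        req->r_stamp = CURRENT_TIME;

        return req;
}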

rbd: fix ida/idr memory leak

ida_destroy() needs to be called on module exit to release ida caches.
Signed-off-by: Ilya Dryomov <ilya.dryomov@inktank.com>
Reviewed-by: Alex Elder <elder@linaro.org>
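
A sketch of the fix, assuming a module-global IDA for device ids (the
variable name is illustrative):

static DEFINE_IDA(rbd_dev_id_ida);

static void __exit rbd_exit(void)
{
        /* ... sysfs/bus teardown elided ... */

        /* release the IDA's internal caches, which otherwise leak
         * when the module is unloaded */
        ida_destroy(&rbd_dev_id_ida);
}
module_exit(rbd_exit);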

rbd: use reference counts for image requests

Each image request contains a reference count, but to date it has
not actually been used. (I think this was just an oversight.) A
recent report involving rbd failing an assertion shed light on why
and where we need to use these reference counts.
Every OSD request associated with an object request uses
rbd_osd_req_callback() as its callback function. That function will
call a helper function (dependent on the type of OSD request) that
will set the object request's "done" flag if appropriate. If that
"done" flag is set, the object request is
passed to rbd_obj_request_complete().
In rbd_obj_request_complete(), requests are processed in sequential
order. So if an object request completes before one of its
predecessors in the image request, the completion is deferred.
Otherwise, if it's a completing object's "turn" to be completed, it
is passed to rbd_img_obj_end_request(), which records the result of
the operation, accumulates transferred bytes, and so on. Next, the
successor to this request is checked and if it is marked "done",
(deferred) completion processing is performed on that request, and
so on. If the last object request in an image request is completed,
rbd_img_request_complete() is called, which (typically) destroys
the image request.
There is a race here, however. The instant an object request is
marked "done" it can be provided (by a thread handling completion of
one of its predecessor operations) to rbd_img_obj_end_request(),
which (for the last request) can then lead to the image request
getting torn down. And this can happen *before* that object has
itself entered rbd_img_obj_end_request(). As a result, once it
*does* enter that function, the image request (and even the object
request itself) may have been freed and become invalid.
All that's necessary to avoid this is to properly count references
to the image requests. We tear down an image request's object
requests all at once--only when the entire image request has
completed. So there's no need for an image request to count
references for its object requests. However, we don't want an
image request to go away until the last of its object requests
has passed through rbd_img_obj_callback(). In other words,
we don't want rbd_img_request_complete() to necessarily
result in the image request being destroyed, because it may
get called before we've finished processing on all of its
object requests.
So the fix is to add a reference to an image request for
each of its object requests. The reference can be viewed
as representing an object request that has not yet finished
its call to rbd_img_obj_callback(). That is emphasized by
getting the reference right after assigning that as the image
object's callback function. The corresponding release of that
reference is done at the end of rbd_img_obj_callback(), which
every image object request passes through exactly once.
Cc: stable@vger.kernel.org
Signed-off-by: Alex Elder <elder@linaro.org>
Reviewed-by: Ilya Dryomov <ilya.dryomov@inktank.com>
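
The shape of the fix, as a sketch (helper names mirror the rbd
driver; rbd_img_request_destroy() as the kref release function is an
assumption): one image-request reference per object request, taken
where the callback is installed and dropped at the end of the
callback.

static void rbd_img_obj_callback(struct rbd_obj_request *obj_request);

static void rbd_img_request_get(struct rbd_img_request *img_request)
{
        kref_get(&img_request->kref);
}

static void rbd_img_request_put(struct rbd_img_request *img_request)
{
        kref_put(&img_request->kref, rbd_img_request_destroy);
}

static void rbd_img_obj_request_add(struct rbd_img_request *img_request,
                                    struct rbd_obj_request *obj_request)
{
        /* take the ref right where the callback is assigned */
        obj_request->callback = rbd_img_obj_callback;
        rbd_img_request_get(img_request);
        /* ... link obj_request onto the image request ... */
}

static void rbd_img_obj_callback(struct rbd_obj_request *obj_request)
{
        struct rbd_img_request *img_request = obj_request->img_request;

        /* ... ordered/deferred completion processing as described
         * above, possibly completing successors too ... */

        /* every object request passes through here exactly once, so
         * the image request cannot be torn down before this point */
        rbd_img_request_put(img_request);
}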

rbd: fix osd_request memory leak in __rbd_dev_header_watch_sync()

The osd_request, along with the r_request and r_reply messages
attached to it, is leaked in __rbd_dev_header_watch_sync() if the
requested image doesn't exist. This is because lingering requests are
special and get an extra ref in the reply path. Fix it by
unregistering the linger request on the error path, and split
__rbd_dev_header_watch_sync() into two functions to keep it
maintainable.
Signed-off-by: Ilya Dryomov <ilya.dryomov@inktank.com>
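
An excerpt-style sketch of the error path (the libceph calls are
real; the condensed flow around them is an assumption):

        ret = ceph_osdc_start_request(osdc, obj_request->osd_req, false);
        if (ret)
                goto out;

        ret = rbd_obj_request_wait(obj_request);
        if (ret)
                goto out;

        ret = obj_request->result;
        if (ret) {
                /* the watch was registered as a linger request,
                 * which holds an extra ref; drop the registration
                 * here or the osd_request (and its r_request/r_reply
                 * messages) leaks */
                ceph_osdc_unregister_linger_request(osdc,
                                                    obj_request->osd_req);
                goto out;
        }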

rbd: make sure we have latest osdmap on 'rbd map'

Given an existing idle mapping (img1), mapping an image (img2) in
a newly created pool (pool2) fails:
$ ceph osd pool create pool1 8 8
$ rbd create --size 1000 pool1/img1
$ sudo rbd map pool1/img1
$ ceph osd pool create pool2 8 8
$ rbd create --size 1000 pool2/img2
$ sudo rbd map pool2/img2
rbd: sysfs write failed
rbd: map failed: (2) No such file or directory
This is because client instances are shared by default and we don't
request an osdmap update when bumping a ref on an existing client. The
fix is to use the mon_get_version request to see if the osdmap we have
is the latest, and block until the requested update is received if it's
not.
Fixes: http://tracker.ceph.com/issues/8184
Signed-off-by: Ilya Dryomov <ilya.dryomov@inktank.com>
Reviewed-by: Sage Weil <sage@inktank.com>
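
A sketch of the resulting flow (function names follow this series;
the timeout value is illustrative):

static int rbd_wait_for_latest_osdmap(struct ceph_client *client)
{
        u64 newest_epoch;
        unsigned long timeout = 60 * HZ;        /* illustrative */
        int ret;

        /* ask a monitor for the cluster's current osdmap epoch */
        ret = ceph_monc_do_get_version(&client->monc, "osdmap",
                                       &newest_epoch);
        if (ret)
                return ret;

        if (client->osdc.osdmap->epoch >= newest_epoch)
                return 0;

        /* our map is stale: subscribe to the next one and block */
        ceph_monc_request_next_osdmap(&client->monc);
        return ceph_monc_wait_osdmap(&client->monc, newest_epoch,
                                     timeout);
}

With something like this called from 'rbd map', the ENOENT above goes
away: the image lookup runs against an osdmap that already knows
about the new pool.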

libceph: add ceph_monc_wait_osdmap()

Add ceph_monc_wait_osdmap(), which will block until the osdmap with the
specified epoch is received or timeout occurs.
Export both of these as they are going to be needed by rbd.
Signed-off-by: Ilya Dryomov <ilya.dryomov@inktank.com>
Reviewed-by: Sage Weil <sage@inktank.com>

libceph: mon_get_version request infrastructure

Add support for mon_get_version requests to libceph. This reuses much
of the ceph_mon_generic_request infrastructure, with one exception.
Older monitors don't set mon_get_version reply hdr->tid even if the
original request had a non-zero tid, which makes it impossible to
lookup ceph_mon_generic_request contexts by tid in get_generic_reply()
for such replies. As a workaround, we allocate a reply message on the
reply path. This can probably interfere with revoke, but I don't see
a better way.
Signed-off-by: Ilya Dryomov <ilya.dryomov@inktank.com>
Reviewed-by: Sage Weil <sage@inktank.com>

libceph: recognize poolop requests in debugfs

Recognize poolop requests in the debugfs monc dump and fix the
printk format specifiers - tid is unsigned.
Signed-off-by: Ilya Dryomov <ilya.dryomov@inktank.com>
Reviewed-by: Sage Weil <sage@inktank.com>
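
A sketch of the dump loop after the change (structure follows
libceph's debugfs code, abbreviated; CEPH_MSG_MON_GET_VERSION is the
message type this series introduces). The tid is a u64, hence the
unsigned %llu specifier:

static int monc_show(struct seq_file *s, void *p)
{
        struct ceph_mon_client *monc = s->private;
        struct ceph_mon_generic_request *req;
        struct rb_node *rp;

        mutex_lock(&monc->mutex);
        for (rp = rb_first(&monc->generic_request_tree); rp;
             rp = rb_next(rp)) {
                __u16 op;

                req = rb_entry(rp, struct ceph_mon_generic_request, node);
                op = le16_to_cpu(req->request->hdr.type);
                if (op == CEPH_MSG_STATFS)
                        seq_printf(s, "%llu statfs\n", req->tid);
                else if (op == CEPH_MSG_POOLOP)
                        seq_printf(s, "%llu poolop\n", req->tid);
                else if (op == CEPH_MSG_MON_GET_VERSION)
                        seq_printf(s, "%llu mon_get_version\n", req->tid);
                else
                        seq_printf(s, "%llu unknown\n", req->tid);
        }
        mutex_unlock(&monc->mutex);

        return 0;
}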

ceph: refactor readpage_nounlock() to make the logic clearer

If the return value of ceph_osdc_readpages() is not negative, it is
necessarily greater than or equal to zero, making the extra check
redundant. Remove the useless condition check and the redundant
braces.
Signed-off-by: Zhang Zhen <zhenzhang.zhang@huawei.com>
Reviewed-by: Yan, Zheng <zheng.z.yan@intel.com>
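
A sketch of the reshaped tail of the function (names from fs/ceph,
condensed into a helper for illustration): with the tautological
"else if (err >= 0)" level gone, the logic reads straight down.

static int finish_read_sketch(struct page *page, int err)
{
        /* err < 0 is the only failure case ... */
        if (err < 0) {
                SetPageError(page);
                return err;
        }

        /* ... anything else is a byte count, necessarily >= 0 */
        if (err < PAGE_CACHE_SIZE)
                zero_user_segment(page, err, PAGE_CACHE_SIZE);

        SetPageUptodate(page);
        return 0;
}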

mds: check cap ID when handling cap export message

Handle the following sequence of events:
- mds0 exports an inode to mds1. client receives the cap import
message from mds1. caps from mds0 are removed while handling
the cap import message.
- mds1 exports an inode to mds0. client receives the cap export
message from mds1. handle_cap_export() adds placeholder caps
for mds0
- client receives the first cap export message (for exporting
inode from mds0 to mds1)
Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>

ceph: remember subtree root dirfrag's auth MDS

Remember a dirfrag's auth MDS when it differs from its parent
inode's auth MDS.
Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>

ceph: introduce ceph_fill_fragtree()

Move the code that updates the i_fragtree into a separate function.
Also add a simple probabilistic test to decide whether the i_fragtree
should be updated.
Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>

ceph: handle cap import atomically

Cap import messages are processed by both handle_cap_import() and
handle_cap_grant(). These two functions are not executed in the same
atomic context, so they can race with cap release.
The fix is to make handle_cap_import() not release the i_ceph_lock
when it returns, and to let handle_cap_grant() release the lock after
it finishes its job.
Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>

ceph: pre-allocate ceph_cap struct for ceph_add_cap()

So that ceph_add_cap() can be used while i_ceph_lock is held. This
simplifies the code that handles cap import/export.
Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>

ceph: update inode fields according to issued caps

Cap messages and request replies from a non-auth MDS may carry stale
information (the corresponding locks are in LOCK states) even if they
have the newest inode version, so the client should update inode
fields according to the issued caps.
Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>

rbd: replace IS_ERR and PTR_ERR with PTR_ERR_OR_ZERO

This patch fixes a coccinelle error regarding the use of IS_ERR and
PTR_ERR where PTR_ERR_OR_ZERO suffices.
Signed-off-by: Duan Jiong <duanj.fnst@cn.fujitsu.com>
Reviewed-by: Yan, Zheng <zheng.z.yan@intel.com>
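
The transformation, shown on a generic error pointer (helper names
are illustrative):

/* before */
static inline int check_before(struct ceph_osd_request *req)
{
        if (IS_ERR(req))
                return PTR_ERR(req);
        return 0;
}

/* after: one call from <linux/err.h> */
static inline int check_after(struct ceph_osd_request *req)
{
        return PTR_ERR_OR_ZERO(req);
}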
A cap grant/revoke message from a non-auth MDS can update the
inode's size and truncate_seq/truncate_size (when the message arrives
before the auth MDS's cap trunc message).
Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
posix_acl_xattr_set() already does the check, and it's the only
way to feed in an ACL from userspace, so the check here is redundant;
remove it.
Signed-off-by: zhang zhen <zhenzhang.zhang@huawei.com>
Reviewed-by: Yan, Zheng <zheng.z.yan@intel.com>
Signed-off-by: Fengguang Wu <fengguang.wu@intel.com>
Reviewed-by: Yan, Zheng <zheng.z.yan@intel.com>

Merge branch 'topic/omap3isp' of git://git.kernel.org/pub/scm/linux/kernel/git/mchehab/linux-media

Pull OMAP3 updates from Mauro Carvalho Chehab:
"Some driver improvements on OMAP3. This series depend on some iommu
patches already merged"
* 'topic/omap3isp' of git://git.kernel.org/pub/scm/linux/kernel/git/mchehab/linux-media: (26 commits)
[media] omap3isp: Rename isp_buffer isp_addr field to dma
[media] omap3isp: Move to videobuf2
[media] v4l: vb2: Add a function to discard all DONE buffers
[media] omap3isp: Cancel all queued buffers when stopping the video stream
[media] omap3isp: Move buffer irqlist to isp_buffer structure
[media] omap3isp: Move queue irqlock to isp_video structure
[media] omap3isp: Move queue mutex to isp_video structure
[media] omap3isp: queue: Don't build scatterlist for kernel buffer
[media] omap3isp: Use the ARM DMA IOMMU-aware operations
[media] omap3isp: queue: Use sg_alloc_table_from_pages()
[media] omap3isp: queue: Map PFNMAP buffers to device
[media] omap3isp: queue: Fix the dma_map_sg() return value check
[media] omap3isp: queue: Allocate kernel buffers with dma_alloc_coherent
[media] omap3isp: queue: Inline the ispmmu_v(un)map functions
[media] omap3isp: queue: Merge the prepare and sglist functions
[media] omap3isp: queue: Use sg_table structure
[media] omap3isp: queue: Move IOMMU handling code to the queue
[media] omap3isp: video: Set the buffer bytesused field at completion time
[media] omap3isp: ccdc: Use the DMA API for FPC
[media] omap3isp: ccdc: Use the DMA API for LSC
...

[media] omap3isp: Rename isp_buffer isp_addr field to dma

No functional changes.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Acked-by: Sakari Ailus <sakari.ailus@iki.fi>
Signed-off-by: Mauro Carvalho Chehab <m.chehab@samsung.com>

[media] omap3isp: Move to videobuf2

Replace the custom buffers queue implementation with a videobuf2 queue.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Acked-by: Sakari Ailus <sakari.ailus@iki.fi>
Signed-off-by: Mauro Carvalho Chehab <m.chehab@samsung.com>

[media] v4l: vb2: Add a function to discard all DONE buffers

When suspending a device while a video stream is active all buffers
marked as done but not dequeued yet will be kept across suspend and
given back to userspace after resume. This will result in outdated
buffers being dequeued.
Introduce a new vb2 function to mark all done buffers as erroneous
instead, to be used by drivers at resume time.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Acked-by: Hans Verkuil <hans.verkuil@cisco.com>
Acked-by: Sakari Ailus <sakari.ailus@iki.fi>
Signed-off-by: Mauro Carvalho Chehab <m.chehab@samsung.com>
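
A sketch of the intended use at resume time (vb2_discard_done() is
the helper this patch adds; the driver-side function is
hypothetical):

static int isp_video_resume_sketch(struct isp_video *video)
{
        /* buffers completed before suspend hold stale frames; hand
         * them back flagged as erroneous so userspace drops them */
        vb2_discard_done(&video->queue);

        /* ... restart DMA with the remaining queued buffers ... */
        return 0;
}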

[media] omap3isp: Cancel all queued buffers when stopping the video stream

When stopping a video stream the driver waits for ongoing DMA
operations to complete for the currently active buffer, but doesn't
release the non-active queued buffers. This isn't a problem in most
cases, as the video device is usually closed after the stream is
stopped, which releases all the buffers. However, the problem would
generate a warning when switching to videobuf2. Fix it by cancelling
all buffers after DMA operations have completed.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Acked-by: Sakari Ailus <sakari.ailus@iki.fi>
Signed-off-by: Mauro Carvalho Chehab <m.chehab@samsung.com>

[media] omap3isp: Move buffer irqlist to isp_buffer structure

This prepares for the move to videobuf2.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Acked-by: Sakari Ailus <sakari.ailus@iki.fi>
Signed-off-by: Mauro Carvalho Chehab <m.chehab@samsung.com>

[media] omap3isp: Move queue irqlock to isp_video structure

This prepares for the move to videobuf2.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Acked-by: Sakari Ailus <sakari.ailus@iki.fi>
Signed-off-by: Mauro Carvalho Chehab <m.chehab@samsung.com>

[media] omap3isp: Move queue mutex to isp_video structure

This prepares for the move to videobuf2.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Acked-by: Sakari Ailus <sakari.ailus@iki.fi>
Signed-off-by: Mauro Carvalho Chehab <m.chehab@samsung.com>

[media] omap3isp: queue: Don't build scatterlist for kernel buffer

The scatterlist is not needed for those buffers, don't build it.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Acked-by: Sakari Ailus <sakari.ailus@iki.fi>
Signed-off-by: Mauro Carvalho Chehab <m.chehab@samsung.com>

[media] omap3isp: Use the ARM DMA IOMMU-aware operations

Attach an ARM DMA I/O virtual address space to the ISP device. This
switches to the IOMMU-aware ARM DMA backend, so we can remove the
explicit calls to the OMAP IOMMU map and unmap functions.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Acked-by: Sakari Ailus <sakari.ailus@iki.fi>
Signed-off-by: Mauro Carvalho Chehab <m.chehab@samsung.com>

[media] omap3isp: queue: Use sg_alloc_table_from_pages()

Replace the custom implementation with a call to the scatterlist helper
function.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Acked-by: Sakari Ailus <sakari.ailus@iki.fi>
Signed-off-by: Mauro Carvalho Chehab <m.chehab@samsung.com>
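
A sketch of the replacement (wrapper name illustrative; the helper's
signature is the real one from lib/scatterlist.c):

static int isp_queue_build_sgt(struct sg_table *sgt, struct page **pages,
                               unsigned int n_pages, unsigned int offset,
                               unsigned long size)
{
        /* one helper call replaces the hand-rolled sglist loop */
        return sg_alloc_table_from_pages(sgt, pages, n_pages, offset,
                                         size, GFP_KERNEL);
}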

[media] omap3isp: queue: Map PFNMAP buffers to device

Userspace PFNMAP buffers need to be mapped to the device like the
userspace non-PFNMAP buffers, in order for the DMA mapping
implementation to create IOMMU mappings when we switch to the
IOMMU-aware DMA mapping backend.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Acked-by: Sakari Ailus <sakari.ailus@iki.fi>
Signed-off-by: Mauro Carvalho Chehab <m.chehab@samsung.com>

[media] omap3isp: queue: Fix the dma_map_sg() return value check

dma_map_sg() can merge sglist entries, and can thus return a number
of mapped entries different from the original value. Don't consider
this an error.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Acked-by: Sakari Ailus <sakari.ailus@iki.fi>
Signed-off-by: Mauro Carvalho Chehab <m.chehab@samsung.com>
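
A sketch of the corrected check (names illustrative): only a zero
return means failure; otherwise remember the mapped count, which may
be smaller than orig_nents after merging.

static int isp_map_sgt(struct device *dev, struct sg_table *sgt,
                       enum dma_data_direction dir)
{
        int nents;

        nents = dma_map_sg(dev, sgt->sgl, sgt->orig_nents, dir);
        if (nents <= 0)
                return -EIO;

        /* iterate mapped segments using nents; dma_unmap_sg() still
         * takes orig_nents */
        sgt->nents = nents;
        return 0;
}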

[media] omap3isp: queue: Allocate kernel buffers with dma_alloc_coherent

And retrieve the related sg table using dma_get_sgtable().
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Acked-by: Sakari Ailus <sakari.ailus@iki.fi>
Signed-off-by: Mauro Carvalho Chehab <m.chehab@samsung.com>

[media] omap3isp: queue: Inline the ispmmu_v(un)map functions

The ispmmu_vmap() and ispmmu_vunmap() functions are just wrappers around
omap_iommu_vmap() and omap_iommu_vunmap(). Inline them.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Acked-by: Sakari Ailus <sakari.ailus@iki.fi>
Signed-off-by: Mauro Carvalho Chehab <m.chehab@samsung.com>

[media] omap3isp: queue: Merge the prepare and sglist functions

In preparation for the switch to the DMA API merge the two functions
that handle buffer preparation for the USERPTR cases (both page-backed
and non page-backed memory).
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Acked-by: Sakari Ailus <sakari.ailus@iki.fi>
Signed-off-by: Mauro Carvalho Chehab <m.chehab@samsung.com>

[media] omap3isp: queue: Use sg_table structure

Replace the sglen and sglist fields stored in the buffer structure with
an sg_table. This allows using the sg table allocation helper function.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Acked-by: Sakari Ailus <sakari.ailus@iki.fi>
Signed-off-by: Mauro Carvalho Chehab <m.chehab@samsung.com>

[media] omap3isp: queue: Move IOMMU handling code to the queue

As a preparation for the switch from the OMAP IOMMU API to the DMA API
move all IOMMU handling code from the video node implementation to the
buffers queue implementation.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Acked-by: Sakari Ailus <sakari.ailus@iki.fi>
Signed-off-by: Mauro Carvalho Chehab <m.chehab@samsung.com>

[media] omap3isp: video: Set the buffer bytesused field at completion time

The v4l buffer bytesused field is a value that will be returned to
userspace when the buffer gets dequeued. As such it doesn't need to be
set early at buffer queue time. Move the assignment to buffer completion
in the omap3isp_video_buffer_next() function to prepare for the video
buffers queue refactoring.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Acked-by: Sakari Ailus <sakari.ailus@iki.fi>
Signed-off-by: Mauro Carvalho Chehab <m.chehab@samsung.com>

[media] omap3isp: ccdc: Use the DMA API for FPC

Replace the OMAP-specific IOMMU API usage by the DMA API for FPC. The
table is now allocated using dma_alloc_coherent() and the related sg
table is retrieved using dma_get_sgtable() for sync operations.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Acked-by: Sakari Ailus <sakari.ailus@iki.fi>
Signed-off-by: Mauro Carvalho Chehab <m.chehab@samsung.com>

[media] omap3isp: ccdc: Use the DMA API for LSC

Replace the OMAP-specific IOMMU API usage by the DMA API for LSC. The
table is now allocated using dma_alloc_coherent() and the related sg
table is retrieved using dma_get_sgtable() for sync operations.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Acked-by: Sakari Ailus <sakari.ailus@iki.fi>
Signed-off-by: Mauro Carvalho Chehab <m.chehab@samsung.com>
Replace the OMAP-specific IOMMU API usage by the DMA API. All buffers
are now allocated using dma_alloc_coherent() and the related sg table is
retrieved using dma_get_sgtable() for sync operations.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Acked-by: Sakari Ailus <sakari.ailus@iki.fi>
Signed-off-by: Mauro Carvalho Chehab <m.chehab@samsung.com>
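
The pattern shared by these conversions, as a sketch (function name
illustrative): allocate coherently, then recover an sg table for the
later sync operations.

static void *isp_alloc_coherent_sketch(struct device *dev, size_t size,
                                       dma_addr_t *dma,
                                       struct sg_table *sgt)
{
        void *va;

        va = dma_alloc_coherent(dev, size, dma, GFP_KERNEL);
        if (!va)
                return NULL;

        /* describe the coherent buffer as an sg table so the driver
         * can still do explicit sync operations on it */
        if (dma_get_sgtable(dev, sgt, va, *dma, size) < 0) {
                dma_free_coherent(dev, size, va, *dma);
                return NULL;
        }

        return va;
}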
The driver stores the IOMMU mapped iovm struct pointer in the buffer
structure but only needs the iovm sg table. Store the sg table instead
to prepare the migration to the DMA API.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Acked-by: Sakari Ailus <sakari.ailus@iki.fi>
Signed-off-by: Mauro Carvalho Chehab <m.chehab@samsung.com>
The fields store buffer addresses as seen from the device. The first one
is used with an external DMA engine while the second one is used with
the ISP DMA engine. As they're never used together, merge them.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Acked-by: Sakari Ailus <sakari.ailus@iki.fi>
Signed-off-by: Mauro Carvalho Chehab <m.chehab@samsung.com>
Move code common between the isp_stat_bufs_alloc_dma() and
isp_stat_bufs_alloc_iommu() functions to isp_stat_bufs_alloc().
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Acked-by: Sakari Ailus <sakari.ailus@iki.fi>
Signed-off-by: Mauro Carvalho Chehab <m.chehab@samsung.com>
The WARN_ON statements in the buffer allocation functions try to catch
conditions where buffers would have already been allocated. As the
buffers are explicitly freed right before being allocated, this can't
happen.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Acked-by: Sakari Ailus <sakari.ailus@iki.fi>
Signed-off-by: Mauro Carvalho Chehab <m.chehab@samsung.com>