path: root/mm/iov_iter.c
Commit message | Author | Age | Files | Lines
* switch iov_iter_get_pages() to passing maximal number of pages | Al Viro | 2014-08-07 | 1 | -9/+8
... instead of maximal size. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
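
A rough caller-side sketch of the convention after this change; it assumes the post-change form in which the caller also passes the number of free slots in its pages[] array (as fs/direct-io.c does with DIO_PAGES). The helper name and PAGELIST_SIZE are illustrative, not from the commit:

    #include <linux/kernel.h>
    #include <linux/mm.h>
    #include <linux/uio.h>

    #define PAGELIST_SIZE 64  /* illustrative: slots in the caller's pages[] array */

    static ssize_t pin_pages_sketch(struct iov_iter *iter,
                                    struct page **pages, size_t *start)
    {
            /* the cap is now expressed as a number of page slots rather
             * than as a byte count derived from the array size */
            return iov_iter_get_pages(iter, pages, LONG_MAX,
                                      PAGELIST_SIZE, start);
    }
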
* bio_vec-backed iov_iter | Al Viro | 2014-05-06 | 1 | -32/+358
New variant of iov_iter - ITER_BVEC in iter->type, backed by a bio_vec array instead of an iovec one. Primitives taught to deal with such beasts; __swap_write() switched to using that kind of iov_iter. Note that bio_vec is just a <page, offset, length> triple - there's nothing block-specific about it. I've left the definition where it was, but took it out from under ifdef CONFIG_BLOCK. Next target: ->splice_write()... Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
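
For reference, the triple in question, and how an ITER_BVEC iterator refers to it; the struct layout below matches the long-standing bio_vec definition, while the field names in the comment assume the 3.16-era iov_iter:

    struct bio_vec {                /* just <page, offset, length> */
            struct page     *bv_page;
            unsigned int    bv_len;
            unsigned int    bv_offset;
    };

    /* An ITER_BVEC iov_iter sets ITER_BVEC in iter->type and points
     * iter->bvec at an array of these instead of iter->iov at iovecs;
     * count, iov_offset and nr_segs keep their usual meaning. */
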
* optimize copy_page_{to,from}_iter() | Al Viro | 2014-05-06 | 1 | -0/+8
if we'd ended up at the end of a segment, jump to the beginning of the next one (iov_offset = 0, iov++), rather than having the next primitive deal with that. Ought to be folded back... Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
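
The idea, expressed as a self-contained helper rather than the open-coded form the patch actually uses; field accesses assume the pre-ITER_BVEC iov_iter layout and the helper name is illustrative:

    #include <linux/uio.h>

    /* illustrative: eagerly step past a fully-consumed segment instead
     * of leaving that work to the next primitive */
    static void advance_past_full_segment(struct iov_iter *i)
    {
            if (i->iov_offset == i->iov->iov_len) {
                    i->iov++;
                    i->nr_segs--;
                    i->iov_offset = 0;
            }
    }
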
* new helper: copy_page_from_iter() | Al Viro | 2014-05-06 | 1 | -0/+78
parallel to copy_page_to_iter(). pipe_write() switched to it (and became ->write_iter()). Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
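
A minimal usage sketch, assuming the copy_page_from_iter(page, offset, bytes, iter) signature; the wrapper name is illustrative:

    #include <linux/kernel.h>
    #include <linux/mm.h>
    #include <linux/uio.h>

    /* illustrative: copy the next chunk of a write into one page,
     * advancing the iterator; may copy less than asked for */
    static size_t fill_page_sketch(struct page *page, struct iov_iter *from)
    {
            size_t want = min_t(size_t, PAGE_SIZE, iov_iter_count(from));

            return copy_page_from_iter(page, 0, want, from);
    }
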
* new helper: iov_iter_get_pages_alloc() | Al Viro | 2014-05-06 | 1 | -0/+40
same as iov_iter_get_pages(), except that the pages array is allocated (kmalloc if possible, vmalloc if that fails) and left for the caller to free. Lustre and NFS ->direct_IO() switched to it. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
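
A sketch of the calling convention, assuming the four-argument form (iter, &pages, maxsize, &start) and that the allocated array is released with kvfree(); the helper name and error handling are illustrative:

    #include <linux/kernel.h>
    #include <linux/mm.h>
    #include <linux/uio.h>

    static ssize_t pin_run_sketch(struct iov_iter *iter)
    {
            struct page **pages;
            size_t start;
            ssize_t bytes;
            int n;

            bytes = iov_iter_get_pages_alloc(iter, &pages, LONG_MAX, &start);
            if (bytes <= 0)
                    return bytes;   /* no array to free on failure */
            /* ... do I/O on the pinned range, which begins at offset
             * @start in pages[0] ... then drop refs and the array: */
            n = DIV_ROUND_UP(bytes + start, PAGE_SIZE);
            while (n--)
                    put_page(pages[n]);
            kvfree(pages);
            return bytes;
    }
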
* new helper: iov_iter_npages() | Al Viro | 2014-05-06 | 1 | -0/+27
counts the pages covered by the iov_iter, up to a given limit. do_block_direct_io() and fuse_iter_npages() switched to it. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
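
A one-call sketch of the kind of place this gets used, assuming the (iter, maxpages) signature; BIO_MAX_PAGES stands in for whatever limit the real caller has:

    #include <linux/bio.h>
    #include <linux/uio.h>

    /* illustrative: how many page slots the data remaining in @iter
     * would need, capped at the bio limit */
    static int pages_needed_sketch(const struct iov_iter *iter)
    {
            return iov_iter_npages(iter, BIO_MAX_PAGES);
    }
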
* new helper: iov_iter_get_pages() | Al Viro | 2014-05-06 | 1 | -0/+27
iov_iter_get_pages(iter, pages, maxsize, &start) grabs references pinning the pages of up to maxsize bytes of (contiguous) data from iter. Returns the amount of memory grabbed or -error. On success, the requested area begins at offset start in pages[0] and runs through pages[1], etc. Less than the requested amount might be returned, either because the contiguous area at the beginning of the iterator is smaller than requested, or because the kernel failed to pin that many pages. direct-io.c switched to using iov_iter_get_pages(). Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
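
A sketch of the calling convention as introduced here (the later commit at the top of this log adds a maxpages argument); the helper name is illustrative:

    #include <linux/kernel.h>
    #include <linux/mm.h>
    #include <linux/uio.h>

    static ssize_t grab_pages_sketch(struct iov_iter *iter,
                                     struct page **pages, size_t maxsize)
    {
            size_t start;
            ssize_t bytes = iov_iter_get_pages(iter, pages, maxsize, &start);

            if (bytes <= 0)
                    return bytes;           /* error, or nothing left */
            /* data begins at offset @start within pages[0] and spans
             * DIV_ROUND_UP(bytes + start, PAGE_SIZE) pages; each holds a
             * reference to be dropped with put_page() when done */
            return bytes;
    }
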
* start adding the tag to iov_iter | Al Viro | 2014-05-06 | 1 | -0/+15
For now, just use the same thing we pass to ->direct_IO() - it's all iovec-based at the moment. Pass it explicitly to iov_iter_init() and account for kvec vs. iovec in there, by the same kludge NFS ->direct_IO() uses. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
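
A sketch of initializing an iterator with the tag, assuming it is the READ/WRITE direction value that ->direct_IO() takes and that iov_iter_init() now carries it as its second argument; the wrapper is illustrative:

    #include <linux/fs.h>       /* READ / WRITE */
    #include <linux/uio.h>

    /* illustrative: the direction now travels with the iterator */
    static void init_read_iter_sketch(struct iov_iter *iter,
                                      const struct iovec *iov,
                                      unsigned long nr_segs, size_t count)
    {
            iov_iter_init(iter, READ, iov, nr_segs, count);
    }
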
* new primitive: iov_iter_alignment() | Al Viro | 2014-05-06 | 1 | -0/+25
returns a value aligned as badly as the worst remaining segment in the iov_iter is. Use it instead of open-coded equivalents. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
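
A sketch of the kind of open-coded check this replaces, assuming a direct-I/O style caller rejecting an iterator whose worst segment is not block-aligned; blkbits and the helper name are illustrative:

    #include <linux/types.h>
    #include <linux/uio.h>

    static bool iter_block_aligned_sketch(const struct iov_iter *iter,
                                          unsigned int blkbits)
    {
            unsigned long mask = (1UL << blkbits) - 1;

            /* nonzero low bits mean some segment's base or length is
             * misaligned with respect to the block size */
            return (iov_iter_alignment(iter) & mask) == 0;
    }
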
* kill iov_iter_copy_from_user() | Al Viro | 2014-05-06 | 1 | -27/+0
all callers can use copy_page_from_iter() and it actually simplifies them. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* take iov_iter stuff to mm/iov_iter.c | Al Viro | 2014-04-02 | 1 | -0/+224
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>