Signed-off-by: Cong Wang <amwang@redhat.com>

Found by Coverity software (http://scan.coverity.com).
Signed-off-by: Anton Altaparmakov <anton@tuxera.com>

Fixes generated by 'codespell' and manually reviewed.
Signed-off-by: Lucas De Marchi <lucas.demarchi@profusion.mobi>

implicit slab.h inclusion from percpu.h
percpu.h is included by sched.h and module.h and thus ends up being
included when building most .c files. percpu.h includes slab.h which
in turn includes gfp.h, making everything defined by the two files
universally available and complicating inclusion dependencies.
The percpu.h -> slab.h dependency is about to be removed. Prepare for
this change by updating users of gfp and slab facilities to include
those headers directly instead of assuming their availability. As this
conversion needs to touch a large number of source files, the
following script was used as the basis of the conversion.
http://userweb.kernel.org/~tj/misc/slabh-sweep.py
The script does the following (see the sketch after this list).
* Scan files for gfp and slab usages and update includes such that
only the necessary includes are there, i.e. gfp.h if only gfp is
used, slab.h if slab is used.
* When the script inserts a new include, it looks at the include
blocks and tries to place the new include so that its order conforms
to its surroundings. It is put in the include block which contains
core kernel includes, in the same order that the rest are ordered:
alphabetical, Christmas tree, reverse Christmas tree, or at the end
if there doesn't seem to be any matching order.
* If the script can't find a place to put a new include (mostly
because the file doesn't have a fitting include block), it prints out
an error message indicating which .h file needs to be added to the
file.
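For illustration, a converted .c file ends up looking something like
this (a hedged sketch: the module and its buffer are hypothetical, and
only the include style is the point).

#include <linux/module.h>
#include <linux/gfp.h>	/* GFP_KERNEL - previously only implicit */
#include <linux/slab.h>	/* kmalloc(), kfree() - added by the sweep */

static void *example_buf;	/* hypothetical module state */

static int __init example_init(void)
{
	/*
	 * This kmalloc() used to compile only because module.h pulled
	 * in percpu.h -> slab.h -> gfp.h; the includes above now make
	 * the dependency explicit.
	 */
	example_buf = kmalloc(128, GFP_KERNEL);
	return example_buf ? 0 : -ENOMEM;
}

static void __exit example_exit(void)
{
	kfree(example_buf);
}

module_init(example_init);
module_exit(example_exit);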
The conversion was done in the following steps.
1. The initial automatic conversion of all .c files updated slightly
over 4000 files, deleting around 700 includes and adding ~480 gfp.h
and ~3000 slab.h inclusions. The script emitted errors for ~400
files.
2. Each error was manually checked. Some didn't need the inclusion;
some needed manual addition, while for others adding it to an
implementation .h or embedding .c file was more appropriate. This
step added inclusions to around 150 files.
3. The script was run again and the output was compared to the edits
from #2 to make sure no file was left behind.
4. Several build tests were done and a couple of problems were fixed.
e.g. lib/decompress_*.c used malloc/free() wrappers around slab
APIs requiring slab.h to be added manually.
5. The script was run on all .h files but without automatically
editing them as sprinkling gfp.h and slab.h inclusions around .h
files could easily lead to inclusion dependency hell. Most gfp.h
inclusion directives were ignored as stuff from gfp.h was usually
widely available and often used in preprocessor macros. Each
slab.h inclusion directive was examined and added manually as
necessary.
6. percpu.h was updated not to include slab.h.
7. Build tests were done on the following configurations and failures
were fixed. CONFIG_GCOV_KERNEL was turned off for all tests (as my
distributed build env didn't work with gcov compiles) and a few
more options had to be turned off depending on archs to make things
build (like ipr on powerpc/64 which failed due to missing writeq).
* x86 and x86_64 UP and SMP allmodconfig and a custom test config.
* powerpc and powerpc64 SMP allmodconfig
* sparc and sparc64 SMP allmodconfig
* ia64 SMP allmodconfig
* s390 SMP allmodconfig
* alpha SMP allmodconfig
* um on x86_64 SMP allmodconfig
8. percpu.h modifications were reverted so that it could be applied as
a separate patch and serve as bisection point.
Given the fact that I had only a couple of failures from the build
tests in step 7, I'm fairly confident about the coverage of this
conversion patch. If there is a breakage, it's likely to be something
in one of the arch headers, which should be easily discoverable on
most builds of the specific arch.
Signed-off-by: Tejun Heo <tj@kernel.org>
Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>

The regression was caused by commit
a32ea1e1f925399e0d81ca3f7394a44a6dafa12c ("Fix read/truncate race").
This caused ntfs_readpage() to be called for a zero i_size inode, which
failed when the file was compressed and non-resident.
Thanks a lot to Mike Galbraith for reporting the issue and tracking down
the commit that caused the regression.
Looking into it I found three bugs which the patch fixes.
Signed-off-by: Anton Altaparmakov <aia21@cantab.net>
Tested-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

Big thanks go to Mathias Kolehmainen for reporting the bug, providing
debug output and testing the patches I sent him to get it working.
The fix was to stop calling ntfs_attr_set() at mount time, as that
causes balance_dirty_pages_ratelimited() to be called, which on systems
with little memory actually tries to balance the dirty pages. That in
turn tries to take the s_umount semaphore, but because we are still in
fill_super(), across which the VFS holds s_umount for writing, this
results in a deadlock.
We now do the dirty work by hand by submitting individual buffers. This
has the annoying "feature" that mounting can take a few seconds if the
journal is large, as we have to clear it all. One day someone should
improve on this by deferring the journal clearing to a helper kernel
thread so it can be done in the background, but I don't have time for
this at the moment and the current solution works fine, so I am leaving
it like this for now.
Signed-off-by: Anton Altaparmakov <aia21@cantab.net>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

Ensure pages are uptodate after returning from read_cache_page, which allows
us to cut out most of the filesystem-internal PageUptodate calls.
I didn't have a great look down the call chains, but this appears to
fix 7 possible use-before-uptodate bugs in hfs, 2 in hfsplus, 1 in jfs,
a few in ecryptfs, 1 in jffs2, and a possible case of cleared data
being overwritten with readpage in block2mtd, all depending on whether
the filler is async and/or can return with a !uptodate page.
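For context, a hedged sketch of the calling pattern this cleans up
(mapping, index, filler and data stand for a typical caller's values;
page_cache_release() was the contemporary name for what is now
put_page()).

	struct page *page;

	page = read_cache_page(mapping, index, filler, data);
	if (IS_ERR(page))
		return PTR_ERR(page);
	/*
	 * Before this change a careful caller also needed the check
	 * below, because an async filler could hand back a page that
	 * was not yet up to date; afterwards read_cache_page()
	 * guarantees an uptodate page (or an error) and the check can
	 * go away.
	 */
	if (!PageUptodate(page)) {
		page_cache_release(page);
		return -EIO;
	}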
Signed-off-by: Nick Piggin <npiggin@suse.de>
Cc: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

Replace the incorrect debugging check of "#ifdef NTFS_DEBUG" with
just "#ifdef DEBUG".
Signed-off-by: Robert P. J. Day <rpjday@mindspring.com>
Acked-by: Anton Altaparmakov <aia21@cantab.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

SLAB_NOFS is an alias of GFP_NOFS.
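The resulting mechanical change in a caller (a sketch; the cache
variable is illustrative):

	/* Before: the deprecated SLAB_* alias. */
	ctx = kmem_cache_alloc(ntfs_attr_ctx_cache, SLAB_NOFS);

	/* After: the plain gfp flag the alias expanded to. */
	ctx = kmem_cache_alloc(ntfs_attr_ctx_cache, GFP_NOFS);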
Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>

Conversion of booleans as per generic-boolean.patch (2006-08-23).
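In practice the conversion swaps the ntfs-private BOOL/TRUE/FALSE for
the kernel-wide type from <linux/types.h>, e.g. (illustrative
variable):

	/* Before: */
	BOOL is_retry = FALSE;

	/* After: */
	bool is_retry = false;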
Signed-off-by: Richard Knutsson <ricknu-0@student.ltu.se>
Signed-off-by: Anton Altaparmakov <aia21@cantab.net>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>

Add read_mapping_page() which is used for callers that pass
mapping->a_ops->readpage as the filler for read_cache_page. This removes
some duplication from filesystem code.
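The helper is roughly the following wrapper (a sketch of its
contemporary form in include/linux/pagemap.h; the exact signature may
differ):

static inline struct page *read_mapping_page(struct address_space *mapping,
					     unsigned long index, void *data)
{
	/* Use the mapping's own readpage implementation as the filler. */
	filler_t *filler = (filler_t *)mapping->a_ops->readpage;

	return read_cache_page(mapping, index, filler, data);
}

A filesystem then calls read_mapping_page(mapping, index, NULL) instead
of spelling the ->readpage filler out itself.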
Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>

from read inode and new inode code paths.
Signed-off-by: Anton Altaparmakov <aia21@cantab.net>

Signed-off-by: Anton Altaparmakov <aia21@cantab.net>

continued the attribute lookup loop instead of aborting it.
Signed-off-by: Anton Altaparmakov <aia21@cantab.net>

This patch converts the inode semaphore to a mutex. I have tested it on
XFS and compiled as much as one can consider on an ia64. Anyway your
luck with it might be different.
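The mechanical substitution in filesystem code, using the field names
of the era (a sketch):

	/* Before: the inode semaphore. */
	down(&inode->i_sem);
	/* ... operate on the inode ... */
	up(&inode->i_sem);

	/* After: the inode mutex. */
	mutex_lock(&inode->i_mutex);
	/* ... operate on the inode ... */
	mutex_unlock(&inode->i_mutex);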
Modified-by: Ingo Molnar <mingo@elte.hu>
(finished the conversion)
Signed-off-by: Jes Sorensen <jes@sgi.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>

Signed-off-by: Anton Altaparmakov <aia21@cantab.net>

Minor tidying.
Signed-off-by: Anton Altaparmakov <aia21@cantab.net>

and cond_resched() in the main loop as we could be dirtying a lot of
pages and this ensures we play nice with the VM and the system as a
whole.
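A minimal sketch of the loop shape this describes (the per-page helper
is hypothetical, and pairing cond_resched() with
balance_dirty_pages_ratelimited() is my reading of the elided subject
line):

	pgoff_t idx;

	for (idx = start; idx < end; idx++) {
		dirty_one_page(mapping, idx);	/* hypothetical work */
		balance_dirty_pages_ratelimited(mapping);
		cond_resched();	/* give other tasks a chance to run */
	}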
Signed-off-by: Anton Altaparmakov <aia21@cantab.net>

extend the allocation of an attribute. Optionally, the data size, but
not the initialized size, can be extended, too.
Signed-off-by: Anton Altaparmakov <aia21@cantab.net>

which is zero for a resident attribute but should no longer be zero
once the attribute is non-resident as it then has real clusters
allocated.
Signed-off-by: Anton Altaparmakov <aia21@cantab.net>

as an extra parameter. This is needed since we need to know the size
before we can map the mft record and our callers always know it. The
reason we cannot simply read the size from the vfs inode i_size is
that this is not necessarily uptodate. This happens when
ntfs_attr_make_non_resident() is called in the ->truncate call path.
Signed-off-by: Anton Altaparmakov <aia21@cantab.net>

specifying whether the clusters are being allocated to extend an
attribute or to fill a hole.
- Change ntfs_attr_make_non_resident() to call ntfs_cluster_alloc()
with @is_extension set to TRUE and remove the runlist terminator
fixup code as this is now done by ntfs_cluster_alloc().
Signed-off-by: Anton Altaparmakov <aia21@cantab.net>

search context as an argument. This allows calling it with the mft
record mapped. Update all callers.
Signed-off-by: Anton Altaparmakov <aia21@cantab.net>

search context. This allows calling it with the mft record mapped.
Signed-off-by: Anton Altaparmakov <aia21@cantab.net>

Also, add BUG() checks to ntfs_attr_make_non_resident() and
ntfs_attr_set() to ensure that these functions are never called
for compressed or encrypted attributes.
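The guards presumably take this shape, using the ntfs inode flag-test
macros (a hedged sketch):

	/*
	 * Neither function can handle compressed or encrypted
	 * attributes, so reaching this point with either flag set is
	 * a programming error:
	 */
	BUG_ON(NInoCompressed(ni));
	BUG_ON(NInoEncrypted(ni));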
Signed-off-by: Anton Altaparmakov <aia21@cantab.net>

- Fix a bug in ntfs_map_runlist_nolock() where we forgot to protect
access to the allocated size in the ntfs inode with the size lock.
- Fix ntfs_attr_vcn_to_lcn_nolock() and ntfs_attr_find_vcn_nolock() to
return LCN_ENOENT when there is no runlist and the allocated size is
zero.
- Fix load_attribute_list() to handle the case of a NULL runlist.
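The first fix follows the usual ntfs idiom of sampling the sizes under
the size lock, roughly (a sketch; ni->size_lock is the rwlock in the
ntfs inode):

	unsigned long flags;
	s64 allocated_size;

	read_lock_irqsave(&ni->size_lock, flags);
	allocated_size = ni->allocated_size;
	read_unlock_irqrestore(&ni->size_lock, flags);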
Signed-off-by: Anton Altaparmakov <aia21@cantab.net>

Signed-off-by: Anton Altaparmakov <aia21@cantab.net>

Signed-off-by: Anton Altaparmakov <aia21@cantab.net>

if the requested vcn is inside it. Otherwise we get into problems when
we try to map an out-of-bounds vcn: we then try to map the already
mapped runlist fragment, which causes ntfs_mapping_pairs_decompress()
to fail and return an error. Update ntfs_attr_find_vcn_nolock()
accordingly.
Signed-off-by: Anton Altaparmakov <aia21@cantab.net>

and ntfs_mapping_pairs_build() to allow the runlist encoding to be
partial which is desirable when filling holes in sparse attributes.
Update all callers.
Signed-off-by: Anton Altaparmakov <aia21@cantab.net>

LCN_ENOENT in ntfs_attr_make_non_resident(). Otherwise the runlist
code gets confused.
Signed-off-by: Anton Altaparmakov <aia21@cantab.net>

possible (fs/ntfs/{attrib.c,index.c,super.c}). Thanks to Al Viro and
Pekka Enberg.
Signed-off-by: Anton Altaparmakov <aia21@cantab.net>

Signed-off-by: Anton Altaparmakov <aia21@cantab.net>

- Add #ifdef NTFS_RW around write-specific code in fs/ntfs/runlist.[hc]
and fs/ntfs/attrib.[hc].
- Minor bugfix to fs/ntfs/attrib.c::ntfs_attr_make_non_resident() where the
runlist was not freed in all error cases.
- Add fs/ntfs/runlist.[hc]::ntfs_rl_find_vcn_nolock().
Signed-off-by: Anton Altaparmakov <aia21@cantab.net>

Signed-off-by: Anton Altaparmakov <aia21@cantab.net>

and handle the case where an attribute is converted from resident
to non-resident by a concurrent file write.
- Reorder some operations when converting an attribute from resident
to non-resident (fs/ntfs/attrib.c) so it is safe wrt concurrent
->readpage and ->writepage.
Signed-off-by: Anton Altaparmakov <aia21@cantab.net>

Signed-off-by: Anton Altaparmakov <aia21@cantab.net>

dropping the read lock and taking the write lock we were not checking
whether someone else had already done the work we wanted to do.
- Rename ntfs_find_vcn_nolock() to ntfs_attr_find_vcn_nolock().
- Tidy up some comments in fs/ntfs/runlist.c.
- Add LCN_ENOMEM and LCN_EIO definitions to fs/ntfs/runlist.h.
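The race fix is the classic re-check after a lock upgrade, roughly (a
hedged sketch, not the literal patch):

	up_read(&ni->runlist.lock);
	down_write(&ni->runlist.lock);
	/*
	 * Re-check under the write lock: another task may already have
	 * mapped this runlist fragment in the window where we held no
	 * lock at all.
	 */
	if (likely(ntfs_rl_vcn_to_lcn(ni->runlist.rl, vcn) <=
			LCN_RL_NOT_MAPPED))
		err = ntfs_map_runlist_nolock(ni, vcn);
	up_write(&ni->runlist.lock);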
Signed-off-by: Anton Altaparmakov <aia21@cantab.net>

write code.
Signed-off-by: Anton Altaparmakov <aia21@cantab.net>

non-resident in fs/ntfs/attrib.c::ntfs_attr_can_be_non_resident().
Signed-off-by: Anton Altaparmakov <aia21@cantab.net>

helper ntfs_map_runlist_nolock() which is used by ntfs_map_runlist().
This allows us to map runlist fragments with the runlist lock already
held without having to drop and reacquire it around the call. Adapt
all callers.
- Change ntfs_find_vcn() to ntfs_find_vcn_nolock() which takes a locked
runlist. This allows us to find runlist elements with the runlist
lock already held without having to drop and reacquire it around the
call. Adapt all callers.
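The resulting split looks roughly like this (a sketch close to, but not
necessarily identical with, the final code):

int ntfs_map_runlist(ntfs_inode *ni, VCN vcn)
{
	int err;

	/* Take the lock, let the _nolock worker do the real work. */
	down_write(&ni->runlist.lock);
	err = ntfs_map_runlist_nolock(ni, vcn);
	up_write(&ni->runlist.lock);
	return err;
}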
Signed-off-by: Anton Altaparmakov <aia21@cantab.net>

Signed-off-by: Anton Altaparmakov <aia21@cantab.net>

Initial git repository build. I'm not bothering with the full history,
even though we have it. We can create a separate "historical" git
archive of that later if we want to, and in the meantime it's about
3.2GB when imported into git - space that would just make the early
git days unnecessarily complicated, when we don't have a lot of good
infrastructure for it.
Let it rip!