author     Benjamin Marzinski <bmarzins@redhat.com>  2007-03-23 21:51:56 +0100
committer  Steven Whitehouse <swhiteho@redhat.com>   2007-05-01 10:10:52 +0200
commit     172e045a7fcc3ee647fa70dbd585a3c247b49cb2
parent     [GFS2] Fix log entry list corruption
[GFS2] flush the log if a transaction can't allocate space
This is a fix for bz #208514. When GFS2 frees up space, the freed blocks
aren't available for reuse until the resource group is successfully written
to the on-disk journal. So in rare cases, GFS2 operations will fail with an
out-of-space error when, in reality, the filesystem is just waiting for a
log flush. For instance, on a 1 GB filesystem, if I continually write 10 MB
to a file and then truncate it, after a hundred iterations the write will
fail with -ENOSPC, even though the filesystem is only 1% full.
The attached patch calls a log flush in these cases. I tested this patch
fairly heavily to check for any locking issues that I missed, and it seems
to work just fine. Also, this patch only does the log flush if
get_local_rgrp makes a complete loop of the resource groups without skipping
any due to locking issues. The code would be slightly simpler if it always
did the log flush after the first failed pass, in which case you could only
ever go through the loop twice, instead of up to three times. However, I
guessed that failing to find a resource group simply due to locking issues
would be common enough to make skipping the log flush in that case
worthwhile, though I'm not certain this is the right way to go. Either way,
I don't expect this code path to be hit all that often.
Signed-off-by: Benjamin E. Marzinski <bmarzins@redhat.com>
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>