| Commit message | Author | Age | Files | Lines |

Merge branch 'writeback-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/wfg/linux

* 'writeback-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/wfg/linux:
writeback: Add a 'reason' to wb_writeback_work
writeback: send work item to queue_io, move_expired_inodes
writeback: trace event balance_dirty_pages
writeback: trace event bdi_dirty_ratelimit
writeback: fix ppc compile warnings on do_div(long long, unsigned long)
writeback: per-bdi background threshold
writeback: dirty position control - bdi reserve area
writeback: control dirty pause time
writeback: limit max dirty pause time
writeback: IO-less balance_dirty_pages()
writeback: per task dirty rate limit
writeback: stabilize bdi->dirty_ratelimit
writeback: dirty rate control
writeback: add bg_threshold parameter to __bdi_update_bandwidth()
writeback: dirty position control
writeback: account per-bdi accumulated dirtied pages

writeback: Add a 'reason' to wb_writeback_work

This creates a new 'reason' field in the wb_writeback_work
structure, which unambiguously identifies who initiated the
writeback activity. A 'wb_reason' enumeration has been added to
writeback.h to enumerate the possible reasons.
The 'writeback_work_class' tracepoint event class and the
'writeback_queue_io' tracepoint are updated to include the
symbolic 'reason' in all trace events.
The 'writeback_inodes_sbXXX' family of routines has had a
writeback reason parameter added to it, so callers can specify
why writeback is being started.
Acked-by: Jan Kara <jack@suse.cz>
Signed-off-by: Curt Wohlgemuth <curtw@google.com>
Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
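
For orientation, a minimal sketch of the shape of this change; the structure
layout and the particular enum values shown here are illustrative, not the
authoritative kernel definitions:

    /* Illustrative model only; the real enum wb_reason and struct
     * wb_writeback_work live in writeback.h and fs/fs-writeback.c. */
    enum wb_reason {
            WB_REASON_BACKGROUND,   /* background threshold exceeded */
            WB_REASON_SYNC,         /* sync()/fsync-style writeback */
            WB_REASON_PERIODIC,     /* periodic kupdate-style flush */
            /* ... one value per distinct initiator ... */
    };

    struct wb_writeback_work {
            long            nr_pages;
            int             for_background;
            enum wb_reason  reason;  /* who kicked off this writeback */
    };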

writeback: send work item to queue_io, move_expired_inodes

Instead of sending ->older_than_this to queue_io() and
move_expired_inodes(), send the entire wb_writeback_work
structure. There are other fields of a work item that are
useful in these routines and in tracepoints.
Acked-by: Jan Kara <jack@suse.cz>
Signed-off-by: Curt Wohlgemuth <curtw@google.com>
Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
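
A sketch of the resulting interface change (prototypes simplified and types
forward-declared purely for illustration):

    struct list_head;
    struct wb_writeback_work;

    /*
     * before: only the expiry cutoff was visible to the queueing code
     *
     *   void move_expired_inodes(struct list_head *delaying_queue,
     *                            struct list_head *dispatch_queue,
     *                            unsigned long *older_than_this);
     */

    /* after: the whole work item is passed, so these routines and their
     * tracepoints can look at any field of the work item */
    void move_expired_inodes(struct list_head *delaying_queue,
                             struct list_head *dispatch_queue,
                             struct wb_writeback_work *work);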

writeback: trace event balance_dirty_pages

Useful for analyzing the dynamics of the throttling algorithms and for
debugging user-reported problems.
Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>

writeback: trace event bdi_dirty_ratelimit

It helps in understanding how the various throttle bandwidths are updated.
Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>

writeback: fix ppc compile warnings on do_div(long long, unsigned long)

Fix powerpc compile warnings:
mm/page-writeback.c: In function 'bdi_position_ratio':
mm/page-writeback.c:622:3: warning: comparison of distinct pointer types lacks a cast [enabled by default]
mm/page-writeback.c:635:4: warning: comparison of distinct pointer types lacks a cast [enabled by default]
Also fix gcc "uninitialized var" warnings.
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>

writeback: per-bdi background threshold

One thing that puzzled me is that in the JBOD case, the per-disk writeout
performance is smaller than in the corresponding single-disk case, even
when they have comparable bdi_thresh. Tracing shows that in the single-disk
case bdi_writeback is always kept high, while in the JBOD case it can drop
low from time to time, with bdi_reclaimable correspondingly rushing high.
The fix is to watch bdi_reclaimable and kick background writeback as
soon as it goes high. This resembles the global background threshold,
but in a per-bdi manner. The trick is that, as long as bdi_reclaimable
does not go high, bdi_writeback naturally won't go low, because
bdi_reclaimable + bdi_writeback ~= bdi_thresh.
With less fluctuation in the writeback pages, JBOD performance is
observed to increase noticeably in various cases.
vmstat:nr_written values before/after patch:
3.1.0-rc4-wo-underrun+ 3.1.0-rc4-bgthresh3+
------------------------ ------------------------
125596480 +25.9% 158179363 JBOD-10HDD-16G/ext4-100dd-1M-24p-16384M-20:10-X
61790815 +110.4% 130032231 JBOD-10HDD-16G/ext4-10dd-1M-24p-16384M-20:10-X
58853546 -0.1% 58823828 JBOD-10HDD-16G/ext4-1dd-1M-24p-16384M-20:10-X
110159811 +24.7% 137355377 JBOD-10HDD-16G/xfs-100dd-1M-24p-16384M-20:10-X
69544762 +10.8% 77080047 JBOD-10HDD-16G/xfs-10dd-1M-24p-16384M-20:10-X
50644862 +0.5% 50890006 JBOD-10HDD-16G/xfs-1dd-1M-24p-16384M-20:10-X
42677090 +28.0% 54643527 JBOD-10HDD-thresh=100M/ext4-100dd-1M-24p-16384M-100M:10-X
47491324 +13.3% 53785605 JBOD-10HDD-thresh=100M/ext4-10dd-1M-24p-16384M-100M:10-X
52548986 +0.9% 53001031 JBOD-10HDD-thresh=100M/ext4-1dd-1M-24p-16384M-100M:10-X
26783091 +36.8% 36650248 JBOD-10HDD-thresh=100M/xfs-100dd-1M-24p-16384M-100M:10-X
35526347 +14.0% 40492312 JBOD-10HDD-thresh=100M/xfs-10dd-1M-24p-16384M-100M:10-X
44670723 -1.1% 44177606 JBOD-10HDD-thresh=100M/xfs-1dd-1M-24p-16384M-100M:10-X
127996037 +22.4% 156719990 JBOD-10HDD-thresh=2G/ext4-100dd-1M-24p-16384M-2048M:10-X
57518856 +3.8% 59677625 JBOD-10HDD-thresh=2G/ext4-10dd-1M-24p-16384M-2048M:10-X
51919909 +12.2% 58269894 JBOD-10HDD-thresh=2G/ext4-1dd-1M-24p-16384M-2048M:10-X
86410514 +79.0% 154660433 JBOD-10HDD-thresh=2G/xfs-100dd-1M-24p-16384M-2048M:10-X
40132519 +38.6% 55617893 JBOD-10HDD-thresh=2G/xfs-10dd-1M-24p-16384M-2048M:10-X
48423248 +7.5% 52042927 JBOD-10HDD-thresh=2G/xfs-1dd-1M-24p-16384M-2048M:10-X
206041046 +44.1% 296846536 JBOD-10HDD-thresh=4G/xfs-100dd-1M-24p-16384M-4096M:10-X
72312903 -19.4% 58272885 JBOD-10HDD-thresh=4G/xfs-10dd-1M-24p-16384M-4096M:10-X
50635672 -0.5% 50384787 JBOD-10HDD-thresh=4G/xfs-1dd-1M-24p-16384M-4096M:10-X
68308534 +115.7% 147324758 JBOD-10HDD-thresh=800M/ext4-100dd-1M-24p-16384M-800M:10-X
57882933 +14.5% 66269621 JBOD-10HDD-thresh=800M/ext4-10dd-1M-24p-16384M-800M:10-X
52183472 +12.8% 58855181 JBOD-10HDD-thresh=800M/ext4-1dd-1M-24p-16384M-800M:10-X
53788956 +94.2% 104460352 JBOD-10HDD-thresh=800M/xfs-100dd-1M-24p-16384M-800M:10-X
44493342 +35.5% 60298210 JBOD-10HDD-thresh=800M/xfs-10dd-1M-24p-16384M-800M:10-X
42641209 +18.9% 50681038 JBOD-10HDD-thresh=800M/xfs-1dd-1M-24p-16384M-800M:10-X
Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>

writeback: dirty position control - bdi reserve area

Keep a minimal pool of dirty pages for each bdi, so that the disk IO
queues won't underrun. Also gently increase a small bdi_thresh to avoid
it getting stuck at 0 for a lightly dirtied bdi.
This is particularly useful for JBOD and small-memory systems.
It may result in (pos_ratio > 1) at the setpoint and push the dirty
pages high. This is more or less intended, because the bdi is in
danger of IO queue underflow.
Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
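
A toy model of the intended effect; the boost formula, names and numbers
below are purely illustrative and not taken from the kernel:

    #include <stdio.h>

    /* Purely illustrative: lift pos_ratio above 1.0 while bdi_dirty sits
     * below a small per-bdi reserve, so throttled tasks are allowed to
     * dirty faster and the disk IO queue does not run dry. */
    static double reserve_boost(double pos_ratio, double bdi_dirty,
                                double bdi_reserve)
    {
            if (bdi_dirty < bdi_reserve)
                    pos_ratio *= 2.0 - bdi_dirty / bdi_reserve;
            return pos_ratio;
    }

    int main(void)
    {
            /* 25 dirty pages against a 100-page reserve => pos_ratio 1.75 */
            printf("%.2f\n", reserve_boost(1.0, 25.0, 100.0));
            return 0;
    }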

writeback: control dirty pause time

The dirty pause time shall ultimately be controlled by adjusting
nr_dirtied_pause, since there is the relationship
pause = pages_dirtied / task_ratelimit
Assuming
pages_dirtied ~= nr_dirtied_pause
task_ratelimit ~= dirty_ratelimit
we get
nr_dirtied_pause ~= dirty_ratelimit * desired_pause
Here dirty_ratelimit is preferred over task_ratelimit because it's
more stable.
It's also important to limit possible large transitional errors:
- bw is changing quickly
- pages_dirtied << nr_dirtied_pause on entering the dirty exceeded area
- pages_dirtied >> nr_dirtied_pause on btrfs (to be improved by a
separate fix, but still expect non-trivial errors)
So we end up using the above formula inside clamp_val().
The best test case for this code is to run 100 "dd bs=4M" tasks on
btrfs and check its pause time distribution.
Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
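
A small standalone model of the formula above with the clamp applied; the
clamp bounds and numbers are illustrative, not the kernel's constants:

    #include <stdio.h>

    #define clamp_val(v, lo, hi)  ((v) < (lo) ? (lo) : (v) > (hi) ? (hi) : (v))

    /* nr_dirtied_pause ~= dirty_ratelimit * desired_pause, clamped so that
     * large transitional errors stay bounded. */
    static long nr_dirtied_pause(long dirty_ratelimit /* pages/s */,
                                 double desired_pause  /* seconds */)
    {
            long pages = (long)(dirty_ratelimit * desired_pause);

            return clamp_val(pages, 32, 65536);     /* illustrative bounds */
    }

    int main(void)
    {
            /* e.g. ~100MB/s (25600 pages/s) and a 10ms target pause: 256 pages */
            printf("%ld pages\n", nr_dirtied_pause(25600, 0.01));
            return 0;
    }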

writeback: limit max dirty pause time

Apply two policies to scale down the max pause time for
1) a small number of concurrent dirtiers
2) small memory systems (compared to the storage bandwidth)
MAX_PAUSE=200ms may only be suitable for high-end servers with lots of
concurrent dirtiers, where the large pause time can amortize much of the
overhead. Otherwise, a smaller pause time is desirable whenever possible,
so as to get good responsiveness and a smooth user experience. It's
actually required for good disk utilization in the case where all the
dirty pages can be synced to disk within MAX_PAUSE=200ms.
Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>

writeback: IO-less balance_dirty_pages()

As proposed by Chris, Dave and Jan, don't start foreground writeback IO
inside balance_dirty_pages(). Instead, simply let it sleep for some
time to throttle the dirtying task. Meanwhile, kick off the per-bdi
flusher thread to do background writeback IO.
RATIONALE
=========
- disk seeks on concurrent writeback of multiple inodes (Dave Chinner)
If every thread doing writes and being throttled starts foreground
writeback, we end up with N IO submitters working on at least N
different inodes at the same time. The N different sets of IO are
issued with potentially zero locality to each other, resulting in
much lower elevator sort/merge efficiency, and hence the disk seeks
all over the place to service the different sets of IO.
OTOH, if there is only one submission thread, it doesn't jump between
inodes in the same way when congestion clears - it keeps writing to
the same inode, resulting in large related chunks of sequential IOs
being issued to the disk. This is more efficient than the above
foreground writeback because the elevator works better and the disk
seeks less.
- lock contention and cache bouncing on concurrent IO submitters (Dave Chinner)
With this patchset, the fs_mark benchmark on a 12-drive software RAID0 goes
from CPU bound to IO bound, freeing "3-4 CPUs worth of spinlock contention".
* "CPU usage has dropped by ~55%", "it certainly appears that most of
the CPU time saving comes from the removal of contention on the
inode_wb_list_lock" (IMHO at least 10% comes from the reduction of
cacheline bouncing, because the new code is able to call much less
frequently into balance_dirty_pages() and hence access the global
page states)
* the user space "App overhead" is reduced by 20%, by avoiding the
cacheline pollution by the complex writeback code path
* "for a ~5% throughput reduction", "the number of write IOs have
dropped by ~25%", and the elapsed time reduced from 41:42.17 to
40:53.23.
* On a simple test of 100 dd, it reduces the CPU %system time from 30% to 3%,
and improves IO throughput from 38MB/s to 42MB/s.
- IO size too small for fast arrays and too large for slow USB sticks
The write_chunk used by the current balance_dirty_pages() cannot be
directly set to some large value (eg. 128MB) for better IO efficiency,
because that could lead to user-perceivable stalls of more than one
second. Even the current 4MB write size may be too large for slow USB
sticks. The fact that balance_dirty_pages() starts IO on its own
couples the IO size to the wait time, which makes it hard to pick a
suitable IO size while keeping the wait time under control.
Now it's possible to increase the writeback chunk size proportionally to the
disk bandwidth. In a simple test of 50 dd's on XFS, 1-HDD, 3GB ram,
the larger writeback size dramatically reduces the seek count to 1/10
(far beyond my expectation) and improves the write throughput by 24%.
- long block time in balance_dirty_pages() hurts desktop responsiveness
Many of us have had the experience: it often takes a couple of seconds,
or even longer, to stop a heavily writing dd/cp/tar command with
Ctrl-C or "kill -9".
- IO pipeline broken by bumpy write() progress
There is a broad class of "loop {read(buf); write(buf);}" applications
whose read() pipeline will be under-utilized or even come to a stop if
the write()s have long latencies _or_ don't progress at a constant rate.
The current threshold-based throttling inherently transfers the large
low-level IO completion fluctuations to bumpy application write()s,
and deteriorates further with an increasing number of dirtiers and/or bdi's.
For example, when doing 50 dd's + 1 remote rsync to an XFS partition,
the rsync progresses very bumpily on the legacy kernel; this patchset
improves its throughput by 67% (93% together with the larger write
chunk size).
The new rate based throttling can support 1000+ dd's with excellent
smoothness, low latency and low overheads.
For the above reasons, it's much better to do IO-less, low-latency
pauses in balance_dirty_pages().
Jan Kara, Dave Chinner and I explored a scheme to let
balance_dirty_pages() wait for enough writeback IO completions to
safeguard the dirty limit. However, it was found to have two problems:
- in large NUMA systems, the per-cpu counters may have big accounting
errors, leading to big throttle wait times and jitter.
- NFS may clear a large number of unstable pages with one single COMMIT.
Because the NFS server serves COMMIT with expensive fsync() IOs, it is
desirable to delay COMMITs and reduce their number. So such bursty IO
completions are unlikely to be optimized away, nor are the resulting
large (and tiny) stall times in IO-completion-based throttling.
So here is a pause-time-oriented approach, which tries to control the
pause time of each balance_dirty_pages() invocation by controlling
the number of pages dirtied before calling balance_dirty_pages(), for
smooth and efficient dirty throttling:
- avoid useless (eg. zero pause time) balance_dirty_pages() calls
- avoid too small pause time (less than 4ms, which burns CPU power)
- avoid too large pause time (more than 200ms, which hurts responsiveness)
- avoid big fluctuations of pause times
It can control pause times at will. The default policy (in a followup
patch) will be to do ~10ms pauses in 1-dd case, and increase to ~100ms
in 1000-dd case.
BEHAVIOR CHANGE
===============
(1) dirty threshold
Users will notice that applications get throttled once they cross the
global (background + dirty)/2 = 15% threshold, and are then balanced
around 17.5%. Before this patch, the behavior was to just throttle at
20% of dirtyable memory in the 1-dd case.
Since tasks will be soft throttled earlier than before, end users may
perceive this as a performance "slowdown" if their application happens
to dirty more than 15% of dirtyable memory.
(2) smoothness/responsiveness
Users will notice a more responsive system during heavy writeback.
"killall dd" will take effect instantly.
Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
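
The essence of the new scheme fits in a few lines; this is a userspace model
of the idea only, not the kernel implementation:

    #include <stdio.h>
    #include <unistd.h>

    /* IO-less throttling: instead of submitting writeback itself, the
     * dirtier just sleeps for pause = pages_dirtied / task_ratelimit while
     * the per-bdi flusher thread does the IO in the background. */
    static void balance_dirty_pages_model(unsigned long pages_dirtied,
                                          double task_ratelimit /* pages/s */)
    {
            double pause = pages_dirtied / task_ratelimit;

            /* (the real code also kicks background writeback here) */
            usleep((useconds_t)(pause * 1e6));
    }

    int main(void)
    {
            balance_dirty_pages_model(256, 25600.0);  /* ~10ms pause */
            printf("throttled\n");
            return 0;
    }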

writeback: per task dirty rate limit

Add two fields to task_struct:
1) account dirtied pages in the individual tasks, for accuracy
2) per-task balance_dirty_pages() call intervals, for flexibility
The balance_dirty_pages() call interval (ie. nr_dirtied_pause) scales
roughly as the square root of the safety gap between the dirty pages
and the threshold.
The main problem with per-task nr_dirtied is that if 1k+ tasks start
dirtying pages at exactly the same time, each task will be assigned a
large initial nr_dirtied_pause, so the dirty threshold will be exceeded
long before each task reaches its nr_dirtied_pause and hence calls
balance_dirty_pages().
The solution is to watch the number of pages dirtied on each CPU in
between the calls into balance_dirty_pages(). If it exceeds ratelimit_pages
(3% of the dirty threshold), force a call to balance_dirty_pages() for a
chance to set bdi->dirty_exceeded. In normal situations, this safeguarding
condition is not expected to trigger at all.
On the sqrt in dirty_poll_interval():
It will serve as an initial guess when dirty pages are still in the
freerun area.
When dirty pages are floating inside the dirty control scope [freerun,
limit], a followup patch will use some refined dirty poll interval to
get the desired pause time.
thresh-dirty (MB) sqrt
1 16
2 22
4 32
8 45
16 64
32 90
64 128
128 181
256 256
512 362
1024 512
The above table means that, given a 1MB (or 1GB) gap and the dd tasks
polling balance_dirty_pages() every 16 (or 512) pages, the dirty limit
won't be exceeded as long as there are fewer than 16 (or 512) concurrent
dd's. So sqrt naturally leads to less overhead and more safe concurrent
tasks for large-memory servers, which have large (thresh - freerun) gaps.
peter: keep the per-CPU ratelimit for safeguarding the 1k+ tasks case
CC: Peter Zijlstra <a.p.zijlstra@chello.nl>
Reviewed-by: Andrea Righi <andrea@betterlinux.com>
Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
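
The near-sqrt scaling in the table can be reproduced with a few lines of
arithmetic; this is an illustrative model, the kernel uses its own integer
square-root helper:

    #include <stdio.h>
    #include <math.h>

    /* Poll interval (in pages) scales roughly as the square root of the
     * gap between the dirty threshold and the current dirty count. */
    static unsigned long dirty_poll_interval(unsigned long dirty,
                                             unsigned long thresh)
    {
            if (thresh > dirty)
                    return (unsigned long)sqrt((double)(thresh - dirty));
            return 1;
    }

    int main(void)
    {
            /* 1MB gap = 256 4KB pages -> 16; 1GB gap = 262144 pages -> 512 */
            printf("%lu %lu\n", dirty_poll_interval(0, 256),
                   dirty_poll_interval(0, 262144));
            return 0;
    }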

writeback: stabilize bdi->dirty_ratelimit

There are some imperfections in balanced_dirty_ratelimit.
1) large fluctuations
The dirty_rate used for computing balanced_dirty_ratelimit is merely
averaged over the past 200ms (very small compared to the 3s estimation
period for write_bw), which makes for a rather dispersed distribution of
balanced_dirty_ratelimit.
It's pretty hard to average out the singular points by increasing the
estimation period. Considering that the averaging technique would
introduce very undesirable time lags, I gave it up entirely. (btw, the 3s
write_bw averaging time lag is much more acceptable because its impact
is one-way and therefore won't lead to oscillations.)
The more practical way is filtering -- most singular
balanced_dirty_ratelimit points can be filtered out by remembering some
prev_balanced_rate and prev_prev_balanced_rate. However, the more
reliable way is to guard balanced_dirty_ratelimit with task_ratelimit.
2) due to truncates and fs redirties, the (write_bw <=> dirty_rate)
match could become unbalanced, which may lead to large systematic
errors in balanced_dirty_ratelimit. Truncates, due to their possibly
bumpy nature, can hardly be compensated for smoothly. So let's face it. When
some over-estimated balanced_dirty_ratelimit brings dirty_ratelimit
high, dirty pages will go higher than the setpoint. task_ratelimit will
in turn become lower than dirty_ratelimit. So if we consider both
balanced_dirty_ratelimit and task_ratelimit and update dirty_ratelimit
only when they are on the same side of dirty_ratelimit, the systematic
errors in balanced_dirty_ratelimit won't be able to drag
dirty_ratelimit far away.
The balanced_dirty_ratelimit estimation may also be inaccurate near
@limit or @freerun; however, that is less of an issue.
3) since we ultimately want to
- keep the fluctuations of the task ratelimit as small as possible
- keep the dirty pages around the setpoint for as long as possible
the update policy used for (2) also serves the above goals nicely:
if for some reason the dirty pages are high (task_ratelimit < dirty_ratelimit),
and dirty_ratelimit is low (dirty_ratelimit < balanced_dirty_ratelimit),
there is no point in bringing up dirty_ratelimit in a hurry only to hurt
both of the above goals.
So, we make use of task_ratelimit to limit the update of dirty_ratelimit
in two ways:
1) avoid changing the dirty rate when it's against the position control
target (the adjusted rate would slow down the progress of the dirty
pages going back to the setpoint).
2) limit the step size. task_ratelimit changes values step by step,
leaving a consistent trace compared to the randomly jumping
balanced_dirty_ratelimit. task_ratelimit also has nicely small
errors in the stable state and typically larger errors when there are
big errors in rate. So it's a pretty good limiting factor for the step
size of dirty_ratelimit.
Note that bdi->dirty_ratelimit is always tracking balanced_dirty_ratelimit.
task_ratelimit is merely used as a limiting factor.
Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
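
The guarded update can be summarized in a small sketch; the variable names
follow the description above, but this is a model of the policy rather than
the kernel code:

    /* Move dirty_ratelimit towards balanced_rate only when task_ratelimit
     * is on the same side, and never step past the nearer of the two. */
    unsigned long update_dirty_ratelimit(unsigned long dirty_ratelimit,
                                         unsigned long balanced_rate,
                                         unsigned long task_ratelimit)
    {
            unsigned long lo = balanced_rate < task_ratelimit ? balanced_rate
                                                              : task_ratelimit;
            unsigned long hi = balanced_rate > task_ratelimit ? balanced_rate
                                                              : task_ratelimit;

            if (dirty_ratelimit < lo)       /* both say "too low": raise */
                    return lo;
            if (dirty_ratelimit > hi)       /* both say "too high": lower */
                    return hi;
            return dirty_ratelimit;         /* they disagree: hold steady */
    }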

writeback: dirty rate control

It's all about bdi->dirty_ratelimit, which aims to be (write_bw / N)
when there are N dd tasks.
On write() syscall, use bdi->dirty_ratelimit
============================================
balance_dirty_pages(pages_dirtied)
{
task_ratelimit = bdi->dirty_ratelimit * bdi_position_ratio();
pause = pages_dirtied / task_ratelimit;
sleep(pause);
}
On every 200ms, update bdi->dirty_ratelimit
===========================================
bdi_update_dirty_ratelimit()
{
task_ratelimit = bdi->dirty_ratelimit * bdi_position_ratio();
balanced_dirty_ratelimit = task_ratelimit * write_bw / dirty_rate;
bdi->dirty_ratelimit = balanced_dirty_ratelimit
}
Estimation of balanced bdi->dirty_ratelimit
===========================================
balanced task_ratelimit
-----------------------
balance_dirty_pages() needs to throttle tasks dirtying pages such that
the total amount of dirty pages stays below the specified dirty limit in
order to avoid memory deadlocks. Furthermore we desire fairness in that
tasks get throttled proportionally to the amount of pages they dirty.
IOW we want to throttle tasks such that we match the dirty rate to the
writeout bandwidth; this yields a stable amount of dirty pages:
dirty_rate == write_bw (1)
The fairness requirement gives us:
task_ratelimit = balanced_dirty_ratelimit
== write_bw / N (2)
where N is the number of dd tasks. We don't know N beforehand, but
still can estimate balanced_dirty_ratelimit within 200ms.
Start by throttling each dd task at rate
task_ratelimit = task_ratelimit_0 (3)
(any non-zero initial value is OK)
After 200ms, we measured
dirty_rate = # of pages dirtied by all dd's / 200ms
write_bw = # of pages written to the disk / 200ms
For the aggressive dd dirtiers, the equality holds
dirty_rate == N * task_rate
== N * task_ratelimit_0 (4)
Or
task_ratelimit_0 == dirty_rate / N (5)
Now we conclude that the balanced task ratelimit can be estimated by
balanced_dirty_ratelimit = task_ratelimit_0 * (write_bw / dirty_rate)        (6)
Because with (4) and (5) we can get the desired equality (1):
balanced_dirty_ratelimit == (dirty_rate / N) * (write_bw / dirty_rate)
                         == write_bw / N
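As a quick sanity check of (6) with made-up numbers: suppose N = 2 dd tasks
are each being throttled at task_ratelimit_0 = 100 MB/s, so the dirty_rate
measured over 200ms is 200 MB/s, while the measured write_bw is 80 MB/s.
Then balanced_dirty_ratelimit = 100 * 80 / 200 = 40 MB/s, which is exactly
write_bw / N = 80 / 2.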
Then using the balanced task ratelimit we can compute task pause times like:
task_pause = task->nr_dirtied / task_ratelimit
task_ratelimit with position control
------------------------------------
However, while the above gives us means of matching the dirty rate to
the writeout bandwidth, it at best provides us with a stable dirty page
count (assuming a static system). In order to control the dirty page
count such that it is high enough to provide performance, but does not
exceed the specified limit we need another control.
The dirty position control works by extending (2) to
task_ratelimit = balanced_dirty_ratelimit * pos_ratio (7)
where pos_ratio is a negative feedback function that is subject to
1) f(setpoint) = 1.0
2) df/dx < 0
That is, if the dirty pages are ABOVE the setpoint, we throttle each
task a bit more HEAVILY than balanced_dirty_ratelimit, so that the dirty
pages are created more slowly than they are cleaned and thus DROP towards
the setpoint (and vice versa).
Based on (7) and the assumption that both dirty_ratelimit and pos_ratio
remain CONSTANT over the past 200ms, we get
task_ratelimit_0 = balanced_dirty_ratelimit * pos_ratio (8)
Putting (8) into (6), we get the formula used in
bdi_update_dirty_ratelimit():
balanced_dirty_ratelimit *= pos_ratio * (write_bw / dirty_rate)        (9)
Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>

writeback: add bg_threshold parameter to __bdi_update_bandwidth()

No behavior change.
Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
bdi_position_ratio() provides a scale factor to bdi->dirty_ratelimit, so
that the resulting task rate limit can drive the dirty pages back to the
global/bdi setpoints.
Old scheme is,

                free run area            |          throttle area
  ---------------------------------------+---------------------------->
                                   thresh^                 dirty pages

New scheme is,

  ^ task rate limit
  |
  |            *
  |             *
  |              *
  |[free run]      *      [smooth throttled]
  |                  *
  |                     *
  |                         *
  ..bdi->dirty_ratelimit..........*
  |                               .     *
  |                               .          *
  |                               .              *
  |                               .                 *
  |                               .                    *
  +-------------------------------.-----------------------*------------>
                          setpoint^                  limit^  dirty pages
The slope of the bdi control line should be
1) large enough to pull the dirty pages to setpoint reasonably fast
2) small enough to avoid big fluctuations in the resulting pos_ratio and
hence task ratelimit
Since the fluctuation range of the bdi dirty pages is typically observed
to be within one second's worth of data, the bdi control line's slope is
selected to be a linear function of the bdi write bandwidth, so that it
can adapt well to slow/fast storage devices.
Assume the bdi control line
pos_ratio = 1.0 + k * (dirty - bdi_setpoint)
where k is the negative slope.
If we target a 12.5% fluctuation range in pos_ratio when the dirty pages
are fluctuating in the range
[bdi_setpoint - write_bw/2, bdi_setpoint + write_bw/2],
we get the slope
k = - 1 / (8 * write_bw)
Letting pos_ratio(x_intercept) = 0, we get the parameter used in the code:
x_intercept = bdi_setpoint + 8 * write_bw
The global/bdi slopes are nicely complementing each other when the
system has only one major bdi (indicated by bdi_thresh ~= thresh):
1) slope of global control line => scaling to the control scope size
2) slope of main bdi control line => scaling to the writeout bandwidth
so that
- in memory tight systems, (1) becomes strong enough to squeeze dirty
pages inside the control scope
- in large memory systems where the "gravity" of (1) for pulling the
dirty pages to setpoint is too weak, (2) can back (1) up and drive
dirty pages to bdi_setpoint ~= setpoint reasonably fast.
Unfortunately, in JBOD setups the fluctuation range of the bdi threshold
is related to the memory size due to the interference between disks. In
this case, the bdi slope will be a weighted sum of write_bw and bdi_thresh.
Given the equations
span = x_intercept - bdi_setpoint
k = df/dx = - 1 / span
and the extremum values
span = bdi_thresh
dx = bdi_thresh
we get
df = - dx / span = - 1.0
That means that when bdi_dirty deviates upward by bdi_thresh, pos_ratio
and hence the task ratelimit will fluctuate by -100%.
peter: use 3rd order polynomial for the global control line
CC: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Jan Kara <jack@suse.cz>
Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
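
The bdi control line above reduces to two lines of arithmetic; the following
is a standalone model with illustrative units, not the kernel's fixed-point
implementation:

    #include <stdio.h>

    /* pos_ratio = 1.0 + k * (bdi_dirty - bdi_setpoint) with slope
     * k = -1 / (8 * write_bw), so pos_ratio reaches 0 at
     * x_intercept = bdi_setpoint + 8 * write_bw. */
    static double bdi_pos_ratio(double bdi_dirty, double bdi_setpoint,
                                double write_bw)
    {
            double k = -1.0 / (8.0 * write_bw);

            return 1.0 + k * (bdi_dirty - bdi_setpoint);
    }

    int main(void)
    {
            /* dirty swinging +/- write_bw/2 around the setpoint gives the
             * 12.5% pos_ratio fluctuation range mentioned above */
            printf("%.4f %.4f\n",
                   bdi_pos_ratio(1000.0 - 50.0, 1000.0, 100.0),
                   bdi_pos_ratio(1000.0 + 50.0, 1000.0, 100.0));
            return 0;
    }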

writeback: account per-bdi accumulated dirtied pages

Introduce the BDI_DIRTIED counter. It will be used for estimating the
bdi's dirty bandwidth.
CC: Jan Kara <jack@suse.cz>
CC: Michael Rubin <mrubin@google.com>
CC: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>

Merge branch 'for-next' of git://git.kernel.org/pub/scm/linux/kernel/git/nab/target-pending

* 'for-next' of git://git.kernel.org/pub/scm/linux/kernel/git/nab/target-pending:
target: use ->exectute_task for all CDB emulation
target: remove SCF_EMULATE_CDB_ASYNC
target: refactor transport_emulate_control_cdb
target: pass the se_task to the CDB emulation callback
target: split core_scsi3_emulate_pr
target: split core_scsi2_emulate_crh
target: Add generic active I/O shutdown logic
target: add back error handling in transport_complete_task
target/pscsi: blk_make_request() returns an ERR_PTR()
target: Remove core TRANSPORT_FREE_CMD_INTR usage
target: Make TFO->check_stop_free return free status
iscsi-target: Fix non-immediate TMR handling
iscsi-target: Add missing CMDSN_LOWER_THAN_EXP check in iscsit_handle_scsi_cmd
target: Avoid double list_del for aborted se_tmr_req
target: Minor cleanups to core_tmr_drain_tmr_list
target: Fix wrong se_tmr being added to drain_tmr_list
target: Fix incorrect se_cmd assignment in core_tmr_drain_tmr_list
target: Check -ENOMEM to signal QUEUE_FULL from fabric callbacks
tcm_loop: Add explict read buffer memset for SCF_SCSI_CONTROL_SG_IO_CDB
target: Fix compile warning w/ missing module.h include

target: use ->exectute_task for all CDB emulation

Instead of calling into transport_emulate_control_cdb from
__transport_execute_tasks for some CDBs, always set up ->execute_task
in the command sequence and use it uniformly.
(nab: Add default passthrough break for SERVICE_ACTION_IN)
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>

target: remove SCF_EMULATE_CDB_ASYNC

All ->execute_task instances now need to complete the I/O explicitly,
which can either happen synchronously or asynchronously.
Note that a lot of the CDB emulations appear to return success even if
some low-level operations failed. Given that this is an existing issue,
this patch doesn't change that fact.
(nab: Adding missing switch breaks in PR-IN + PR_OUT)
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>

target: refactor transport_emulate_control_cdb

Encapsulate each CDB emulation into a function of its own, to prepare
for setting ->execute_task to these routines.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>

target: pass the se_task to the CDB emulation callback

We want to be able to handle all CDBs through it and remove hacks like
always using the first task in a CDB in target_report_luns.
Also rename the callback to ->execute_task to better describe its use.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>

target: split core_scsi3_emulate_pr

Split core_scsi3_emulate_pr into one routine each for the
PERSISTENT_RESERVE_IN and PERSISTENT_RESERVE_OUT side.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>

target: split core_scsi2_emulate_crh

Split core_scsi2_emulate_crh into one routine each for the reserve and
release side. The common code now is in a helper called by both
routines.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>

target: Add generic active I/O shutdown logic

This patch adds the initial pieces of generic active I/O shutdown logic.
This is intended to be an 'opt-in' feature for fabric modules; it
includes the following functions to provide a mechanism for fabric
modules to track se_cmd via se_session->sess_cmd_list:
*) target_get_sess_cmd() - Add se_cmd to sess->sess_cmd_list, called
from fabric module incoming I/O path.
*) target_put_sess_cmd() - Check for completion or drop se_cmd from
->sess_cmd_list
*) target_splice_sess_cmd_list() - Splice the active I/O list from
->sess_cmd_list to ->sess_wait_list; can be called with the HW fabric
lock held.
*) target_wait_for_sess_cmds() - Walk ->sess_wait_list waiting on
individual ->cmd_wait_comp. Optional transport_wait_for_tasks()
call.
target_splice_sess_cmd_list() is allowed to be called under the HW fabric
lock; it performs the splice into se_sess->sess_wait_list and sets
se_cmd->cmd_wait_set. Then target_wait_for_sess_cmds() walks the list,
waiting for the individual target_put_sess_cmd() fabric callbacks to
complete.
It also adds TFO->check_release_cmd() to split the completion and memory
release calls, where a fabric module uses target_put_sess_cmd() to check
for I/O completion during session shutdown. This is currently pushed out
into fabric modules because current fabric code may sleep here waiting for
TFO->check_stop_free() to complete in the main response path, and because
target_wait_for_sess_cmds() calls TFO->release_cmd() to free fabric
descriptor memory directly.
Cc: Christoph Hellwig <hch@lst.de>
Cc: Roland Dreier <roland@purestorage.com>
Signed-off-by: Nicholas A. Bellinger <nab@linux-iscsi.org>

target: add back error handling in transport_complete_task

The commit "target: use a workqueue for I/O completions" accidentally
removed setting t_tasks_failed in transport_complete_task.
Add it back in a slightly cleaner way; now it is set for every failed task
instead of special-casing the last one to complete, by using the success
argument directly for it.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Nicholas A. Bellinger <nab@linux-iscsi.org>

target/pscsi: blk_make_request() returns an ERR_PTR()

The check is wrong here because blk_make_request() returns an
ERR_PTR(), not NULL.
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Nicholas A. Bellinger <nab@linux-iscsi.org>
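
For reference, the convention being fixed: a pointer-encoded error must be
tested with IS_ERR(), since an ERR_PTR() value is non-NULL. A minimal
userspace re-implementation of the pattern, for illustration only:

    #include <stdio.h>
    #include <errno.h>

    #define MAX_ERRNO       4095
    #define ERR_PTR(err)    ((void *)(long)(err))
    #define PTR_ERR(ptr)    ((long)(ptr))
    #define IS_ERR(ptr)     ((unsigned long)(ptr) >= (unsigned long)-MAX_ERRNO)

    static void *make_request(int fail)
    {
            return fail ? ERR_PTR(-ENOMEM) : (void *)"ok";
    }

    int main(void)
    {
            void *req = make_request(1);

            if (IS_ERR(req))        /* a NULL check here would wrongly pass */
                    printf("error %ld\n", PTR_ERR(req));
            return 0;
    }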

target: Remove core TRANSPORT_FREE_CMD_INTR usage

This patch drops TRANSPORT_FREE_CMD_INTR usage from target core, which
includes the removal of transport_generic_free_cmd_intr() symbol,
TRANSPORT_FREE_CMD_INTR usage in transport_processing_thread(), and
special case LUN_RESET handling to skip TRANSPORT_FREE_CMD_INTR processing
in core_tmr_drain_cmd_list(). We now expect that fabric modules will
use an internal workqueue to provide process context when releasing
se_cmd descriptor resources via transport_generic_free_cmd().
Reported-by: Christoph Hellwig <hch@lst.de>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Roland Dreier <roland@purestorage.com>
Cc: Madhuranath Iyengar <mni@risingtidesystems.com>
Signed-off-by: Nicholas Bellinger <nab@risingtidesystems.com>

target: Make TFO->check_stop_free return free status

This patch converts target_core_fabric_ops->check_stop_free() usage in
transport_cmd_check_stop() and associated fabric module usage to
return '1' when the passed se_cmd has been released directly within
->check_stop_free(), or return '0' when the passed se_cmd has not
been released.
This addresses an issue where transport_cmd_finish_abort() ->
transport_cmd_check_stop_to_fabric() was leaking descriptors during
LUN_RESET for modules using ->check_stop_free(), but not directly
releasing se_cmd in all cases.
Cc: stable@kernel.org
Signed-off-by: Nicholas Bellinger <nab@risingtidesystems.com>

iscsi-target: Fix non-immediate TMR handling

This patch addresses two issues with non-immediate TMR handling in
iscsit_handle_task_mgt_cmd(). The first involves breakage due to the
v3.1-rc conversion of iscsit_sequence_cmd(), which upon good status
would hit the iscsit_add_reject_from_cmd() block of code. This patch
adds an explicit check for CMDSN_ERROR_CANNOT_RECOVER.
The second adds a check to return when a non-immediate TMR operation is
detected after iscsit_ack_from_expstatsn(), as iscsit_sequence_cmd()
-> iscsit_execute_cmd() will have already called
transport_generic_handle_tmr() for the non-immediate TMR case.
Cc: Andy Grover <agrover@redhat.com>
Cc: stable@kernel.org
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>

iscsi-target: Add missing CMDSN_LOWER_THAN_EXP check in iscsit_handle_scsi_cmd

This patch adds a missing CMDSN_LOWER_THAN_EXP return check for
iscsit_sequence_cmd() in iscsit_handle_scsi_cmd() that was incorrectly
dropped during the v3.1-rc cleanups to use iscsit_sequence_cmd().
Cc: Andy Grover <agrover@redhat.com>
Cc: stable@kernel.org
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>

target: Avoid double list_del for aborted se_tmr_req

After the list_del() in core_tmr_drain_tmr_list(),
core_tmr_release_req() would list_del() the same object again.
Call graph:
core_tmr_drain_tmr_list
transport_cmd_finish_abort_tmr
transport_generic_remove
transport_free_se_cmd
core_tmr_release_req
So use list_del_init(), as list_del() of an initialized list_head is
safe and essentially a nop. In the CONFIG_DEBUG_LIST case, list_del()
actually poisons the list_head, but that is fine as we free the object
directly afterwards.
Signed-off-by: Joern Engel <joern@logfs.org>
Cc: stable@kernel.org
Signed-off-by: Nicholas Bellinger <nab@risingtidesystems.com>
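
A tiny standalone illustration of the difference, using a minimal userspace
model of the kernel's list API: list_del_init() leaves the entry pointing at
itself, so deleting it a second time is harmless, whereas a plain list_del()
would leave poisoned pointers behind.

    #include <stdio.h>

    struct list_head { struct list_head *next, *prev; };

    static void INIT_LIST_HEAD(struct list_head *h) { h->next = h->prev = h; }

    static void __list_del(struct list_head *prev, struct list_head *next)
    {
            next->prev = prev;
            prev->next = next;
    }

    /* list_del() would poison ->next/->prev, so a second deletion follows
     * the poison pointers; list_del_init() re-initializes the entry so a
     * later deletion just relinks it to itself. */
    static void list_del_init(struct list_head *entry)
    {
            __list_del(entry->prev, entry->next);
            INIT_LIST_HEAD(entry);
    }

    int main(void)
    {
            struct list_head head, node;

            INIT_LIST_HEAD(&head);
            node.prev = &head;  node.next = &head;
            head.next = &node;  head.prev = &node;

            list_del_init(&node);
            list_del_init(&node);   /* second delete: a harmless no-op */
            printf("list still sane: %d\n", head.next == &head);
            return 0;
    }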

target: Minor cleanups to core_tmr_drain_tmr_list

This patch adds a handful of minor cleanups to core_tmr_drain_tmr_list()
that remove an unnecessary NULL check, use list_for_each_entry_safe()
instead of list_entry(), and make the drain_tmr_list walk use *tmr_p
instead of directly referencing the passed *tmr function parameter.
Signed-off-by: Joern Engel <joern@logfs.org>
Cc: Joern Engel <joern@logfs.org>
Reviewed-by: Roland Dreier <roland@purestorage.com>
Cc: Roland Dreier <roland@purestorage.com>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>

target: Fix wrong se_tmr being added to drain_tmr_list

This patch fixes another bug from LUN_RESET re-org fallout in
core_tmr_drain_tmr_list() that was adding the wrong se_tmr_req
into the local drain_tmr_list to be walked + released.
Signed-off-by: Joern Engel <joern@logfs.org>
Cc: Joern Engel <joern@logfs.org>
Reviewed-by: Roland Dreier <roland@purestorage.com>
Cc: Roland Dreier <roland@purestorage.com>
Cc: stable@kernel.org
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>

target: Fix incorrect se_cmd assignment in core_tmr_drain_tmr_list

This patch fixes a bug in core_tmr_drain_tmr_list() where drain_tmr_list
was using the wrong se_tmr_req for cmd assignment due to a typo during the
LUN_RESET re-org. This was resulting in general protection faults while
using the leftover bogus *tmr_p pointer from list_for_each_entry_safe().
Signed-off-by: Joern Engel <joern@logfs.org>
Cc: Joern Engel <joern@logfs.org>
Cc: stable@kernel.org
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>

target: Check -ENOMEM to signal QUEUE_FULL from fabric callbacks

This patch changes target core to also check for -ENOMEM from fabric callbacks
to signal QUEUE_FULL status, instead of just -EAGAIN in order to catch a
larger set of fabric failure cases that want to trigger QUEUE_FULL logic.
This includes the callbacks for ->write_pending(), ->queue_data_in() and
->queue_status().
It also makes transport_generic_write_pending() return zero upon QUEUE_FULL,
and removes two unnecessary -EAGAIN checks to catch write pending QUEUE_FULL
cases from transport_generic_new_cmd() failures in transport_handle_cdb_direct()
and transport_processing_thread():TRANSPORT_NEW_CMD_MAP state.
Reported-by: Bart Van Assche <bvanassche@acm.org>
Cc: Bart Van Assche <bvanassche@acm.org>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Roland Dreier <roland@purestorage.com>
Signed-off-by: Nicholas A. Bellinger <nab@linux-iscsi.org>

tcm_loop: Add explict read buffer memset for SCF_SCSI_CONTROL_SG_IO_CDB

This patch addresses an issue with buggy userspace code sending I/O
via scsi-generic that does not explicitly clear its associated read
buffers. It adds an explicit memset of the first SGL entry within
tcm_loop_new_cmd_map() for SCF_SCSI_CONTROL_SG_IO_CDB payloads, which
are currently guaranteed to be a single SGL by target-core code.
This issue is a side effect of the v3.1-rc1 merge to remove the
extra memcpy between certain control CDB types using a contiguous
+ cleared buffer in target-core, and performing a memcpy into the
SGL list within tcm_loop.
It originally manifested itself as udev + scsi_id + scsi-generic
not properly setting up the expected /dev/disk/by-id/ symlinks, because
the INQUIRY payload contained extra bogus data preventing the
proper NAA IEEE WWN from being parsed by userspace.
Cc: Christoph Hellwig <hch@lst.de>
Cc: Andy Grover <agrover@redhat.com>
Cc: stable@kernel.org
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
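
A toy model of the failure mode; buffer sizes and payload bytes are made up,
the point is only that the explicit clear keeps stale userspace bytes out of
the response:

    #include <stdio.h>
    #include <string.h>

    /* Without the memset, any bytes the emulation does not overwrite keep
     * whatever garbage userspace left in the buffer. */
    static void emulate_control_cdb(unsigned char *buf, size_t len)
    {
            memset(buf, 0, len);            /* the added explicit clear */
            memcpy(buf, "LIO", 3);          /* illustrative payload prefix */
    }

    int main(void)
    {
            unsigned char buf[16];

            memset(buf, 0xff, sizeof(buf)); /* "dirty" scsi-generic buffer */
            emulate_control_cdb(buf, sizeof(buf));
            printf("trailing byte: %02x\n", buf[15]);       /* 00, not ff */
            return 0;
    }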

target: Fix compile warning w/ missing module.h include

This patch fixes the following compile warning in target_core_cdb.c in
recent linux-next code due to the new use of EXPORT_SYMBOL() for
target_get_task_cdb().
drivers/target/target_core_cdb.c:1316: warning: data definition has no type or storage class
drivers/target/target_core_cdb.c:1316: warning: type defaults to ‘int’ in declaration of ‘EXPORT_SYMBOL’
drivers/target/target_core_cdb.c:1316: warning: parameter names (without types) in function declaration
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>

Merge branch 'trivial' of git://git.kernel.org/pub/scm/linux/kernel/git/mmarek/kbuild

* 'trivial' of git://git.kernel.org/pub/scm/linux/kernel/git/mmarek/kbuild:
scsi: drop unused Kconfig symbol
pci: drop unused Kconfig symbol
stmmac: drop unused Kconfig symbol
x86: drop unused Kconfig symbol
powerpc: drop unused Kconfig symbols
powerpc: 40x: drop unused Kconfig symbol
mips: drop unused Kconfig symbols
openrisc: drop unused Kconfig symbols
arm: at91: drop unused Kconfig symbol
samples: drop unused Kconfig symbol
m32r: drop unused Kconfig symbol
score: drop unused Kconfig symbols
sh: drop unused Kconfig symbol
um: drop unused Kconfig symbol
sparc: drop unused Kconfig symbol
alpha: drop unused Kconfig symbol
Fix up trivial conflict in drivers/net/ethernet/stmicro/stmmac/Kconfig
as per Michal: the STMMAC_DUAL_MAC config variable is still unused and
should be deleted.

scsi: drop unused Kconfig symbol

Signed-off-by: Paul Bolle <pebolle@tiscali.nl>
Signed-off-by: Michal Marek <mmarek@suse.cz>

pci: drop unused Kconfig symbol

There's no other Kconfig symbol that depends on XEN_PCIDEV_FE_DEBUG.
Neither is there anything that uses CONFIG_XEN_PCIDEV_FE_DEBUG.
Signed-off-by: Paul Bolle <pebolle@tiscali.nl>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Michal Marek <mmarek@suse.cz>

stmmac: drop unused Kconfig symbol

Signed-off-by: Paul Bolle <pebolle@tiscali.nl>
Acked-by: Giuseppe Cavallaro <peppe.cavallaro@st.com>
Signed-off-by: Michal Marek <mmarek@suse.cz>

x86: drop unused Kconfig symbol

Signed-off-by: Paul Bolle <pebolle@tiscali.nl>
Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: Michal Marek <mmarek@suse.cz>

powerpc: drop unused Kconfig symbols

Signed-off-by: Paul Bolle <pebolle@tiscali.nl>
Signed-off-by: Michal Marek <mmarek@suse.cz>

powerpc: 40x: drop unused Kconfig symbol

Signed-off-by: Paul Bolle <pebolle@tiscali.nl>
Acked-by: Josh Boyer <jwboyer@gmail.com>
Signed-off-by: Michal Marek <mmarek@suse.cz>

mips: drop unused Kconfig symbols

Signed-off-by: Paul Bolle <pebolle@tiscali.nl>
Signed-off-by: Michal Marek <mmarek@suse.cz>

openrisc: drop unused Kconfig symbols

Signed-off-by: Paul Bolle <pebolle@tiscali.nl>
Signed-off-by: Michal Marek <mmarek@suse.cz>

arm: at91: drop unused Kconfig symbol

Signed-off-by: Paul Bolle <pebolle@tiscali.nl>
Acked-by: Nicolas Ferre <nicolas.ferre@atmel.com>
Signed-off-by: Michal Marek <mmarek@suse.cz>

samples: drop unused Kconfig symbol

Signed-off-by: Paul Bolle <pebolle@tiscali.nl>
Signed-off-by: Michal Marek <mmarek@suse.cz>

m32r: drop unused Kconfig symbol

Signed-off-by: Paul Bolle <pebolle@tiscali.nl>
Signed-off-by: Michal Marek <mmarek@suse.cz>