Commit messages

Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
There are several failure paths that share common code before returning, so simplify them by moving the common code to the end of the function and jumping to it with "goto out" when a failure occurs.
Reviewed-by: NeilBrown <neilb@suse.com>
Signed-off-by: Guoqing Jiang <gqjiang@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
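
A minimal sketch of the cleanup pattern described above, with an invented function and resources rather than mdadm's actual code:

    #include <stdio.h>
    #include <stdlib.h>

    /* All failure paths funnel through the single "out" label,
     * which releases whatever was acquired so far. */
    static int setup(void)
    {
        char *buf = NULL;
        FILE *f = NULL;
        int ret = -1;

        buf = malloc(4096);
        if (!buf)
            goto out;
        f = fopen("/tmp/example", "w");
        if (!f)
            goto out;
        ret = 0;        /* success */
    out:
        if (f)
            fclose(f);
        free(buf);      /* free(NULL) is a no-op */
        return ret;
    }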
The previous patch provides protection for the other modes such as CREATE, MANAGE, GROW and INCREMENTAL. ASSEMBLE mode also needs protection while a clustered raid is being assembled. However, we can only tell whether the array is clustered once the metadata is ready, so lock_cluster is called after select_devices(). And since the metadata may be re-read during auto-assembly, refresh the lock in that case.
Reviewed-by: NeilBrown <neilb@suse.com>
Signed-off-by: Guoqing Jiang <gqjiang@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
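
A sketch of the ordering constraint, using hypothetical wrappers (lock_cluster, unlock_cluster) and a reduced supertype struct, not mdadm's real API:

    struct supertype { int is_clustered; const char *cluster_name; };

    extern struct supertype *select_devices(void);   /* reads the metadata */
    extern int lock_cluster(const char *name);       /* hypothetical wrapper */
    extern void unlock_cluster(const char *name);    /* hypothetical wrapper */

    static int assemble_one(void)
    {
        /* The cluster lock can only be taken once the metadata tells us
         * whether the array is clustered at all. */
        struct supertype *st = select_devices();

        if (st && st->is_clustered && lock_cluster(st->cluster_name) < 0)
            return -1;
        /* ... assemble; if auto-assembly re-reads the metadata, drop and
         * re-take the lock against the refreshed view ... */
        if (st && st->is_clustered)
            unlock_cluster(st->cluster_name);
        return 0;
    }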
Previously, dlm locking only protected a few functions that write to the superblock (update_super, add_to_super and store_super), and we missed others such as add_internal_bitmap. The functions that read the superblock also need to run under the lock to avoid consistency issues. So remove the dlm code from super1.c and provide the locking mechanism in main() for every mode except assemble, which is handled in the next commit. Since we can identify whether a raid is clustered from the conditions checked in each mode, the change should not affect native arrays.
The existing locking code is also improved as follows:
1. Replace ls_unlock with ls_unlock_wait, since we should only return once the unlock operation has completed.
2. Inspired by lvm, try to reuse an existing lockspace first (in case it was not released for some reason) before blindly creating a new one.
3. Retry several times instead of quitting immediately when locking fails with EAGAIN.
Note: in MANAGE mode, we must not take the lock when a node merely wants to confirm a device change; otherwise a disk could never be added to the cluster because all nodes would compete for the lock.
Reviewed-by: NeilBrown <neilb@suse.com>
Signed-off-by: Guoqing Jiang <gqjiang@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
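
An illustrative sketch of the retry behaviour in point 3; cluster_lock() and cluster_unlock_wait() are hypothetical wrappers assumed to return negative errno values, not mdadm's real functions:

    #include <errno.h>
    #include <unistd.h>

    #define LOCK_RETRIES 10

    extern int cluster_lock(const char *name);         /* assumed wrapper */
    extern int cluster_unlock_wait(const char *name);  /* blocks until unlock completes */

    static int lock_with_retry(const char *name)
    {
        int i, ret = -1;

        for (i = 0; i < LOCK_RETRIES; i++) {
            ret = cluster_lock(name);
            if (ret != -EAGAIN)
                break;          /* success or a hard error */
            sleep(1);           /* try again instead of quitting */
        }
        return ret;
    }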
Signed-off-by: Xiao Ni <xni@redhat.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
With "--force", we can assemble the array even if some superblocks
appear out-of-date. But their data layout is regarded to make sense.
In reshape cases, if two devices claims different reshape progresses,
we cannot forcely assemble them back to array. Kernel will treat only
one of them as reshape progress. However, their data is still laid on
different layouts. It may lead to disaster if reshape goes on.
Reproducible Steps:
mdadm -C /dev/md0 --assume-clean -l5 -n3 /dev/loop[012]
mdadm -a /dev/md0 /dev/loop3
mdadm -G /dev/md0 -n4
mdadm -f /dev/md0 /dev/loop0 # after a period
mdadm -S /dev/md0 # after another period
mdadm -E /dev/loop[01] # make sure that they claims different ones
mdadm -Af -R /dev/md0 /dev/loop[023] # give no enough devices for
force_array() to pick non-fresh devices
cat /sys/block/md0/md/reshape_position # You can see that Kernel resume
reshape the from any progress of them.
Note: The unit of mdadm -E is KB, but reshape_position's is sector.
In order to prevent disaster, we add logics to prevent devices with
different reshape progress from being added into the array.
Reported-by: Allen Peng <allenpeng@synology.com>
Reviewed-by: Alex Wu <alexwu@synology.com>
Signed-off-by: BingJing Chang <bingjingc@synology.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
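
An illustrative consistency check, assuming per-device reshape positions (in sectors) have already been read from the superblocks; the function and field names are hypothetical:

    #include <stdio.h>

    /* 1 KB == 2 sectors, which is why the mdadm -E value (KB) must be
     * doubled before comparing with reshape_position (sectors). */
    static int reshape_positions_consistent(const unsigned long long *pos,
                                            int ndevs)
    {
        int i;

        for (i = 1; i < ndevs; i++) {
            if (pos[i] != pos[0]) {
                fprintf(stderr,
                        "device %d claims reshape position %llu, device 0 claims %llu\n",
                        i, pos[i], pos[0]);
                return 0;   /* refuse to force-assemble */
            }
        }
        return 1;
    }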
This commit extends ab0c6bb ("imsm: update name in --detail-platform") and refers the user to the RSTe/VROC manual when needed.
Signed-off-by: Michal Zylowski <michal.zylowski@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
These udev rules attempt to set a safe kernel controller timeout for disks containing RAID level 1 or higher partitions, targeting commodity disks that either lack SCTERC capability or have it disabled. No attempt is made to change the SCTERC settings on devices that support it.
This mitigates the problem described here:
https://raid.wiki.kernel.org/index.php/Timeout_Mismatch
http://strugglers.net/~andy/blog/2015/11/09/linux-software-raid-and-drive-timeouts/
where the kernel controller may time out on a read from a disk after the default 30 seconds and consequently cause mdraid to regard the disk as dead and eject it from the RAID array.
The mitigation is to set the timeout to 180 seconds for disks that contain a RAID level 1 or higher partition.
Signed-off-by: Jonathan G. Underwood <jonathan.underwood@gmail.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
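
What the rule effectively does, sketched in C rather than udev syntax: write 180 to the block device's SCSI command timeout knob (the disk name here is an example):

    #include <stdio.h>

    static int set_disk_timeout(const char *disk, int seconds)
    {
        char path[256];
        FILE *f;

        snprintf(path, sizeof(path),
                 "/sys/block/%s/device/timeout", disk);
        f = fopen(path, "w");
        if (!f)
            return -1;
        fprintf(f, "%d\n", seconds);
        return fclose(f);
    }

    /* e.g. set_disk_timeout("sda", 180); */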
Signed-off-by: Xiao Ni <xni@redhat.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Mixing level and chunk-size changes in one grow operation is not supported. Mdadm performs the level migration correctly and ignores the new chunk size, but after the migration it tries to write that chunk size to the sysfs properties. This is dangerous and can cause unexpected behaviour, so block the combination before the level migration starts.
Signed-off-by: Mariusz Tkaczyk <mariusz.tkaczyk@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
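
An illustrative guard; the request struct and the UNSET sentinel are reduced stand-ins defined here hypothetically, not mdadm's real option handling:

    #include <stdio.h>

    #define UNSET (-1)

    struct grow_request {
        int new_level;   /* UNSET if no level change requested */
        int new_chunk;   /* UNSET if no chunk change requested */
    };

    static int validate_grow(const struct grow_request *r)
    {
        if (r->new_level != UNSET && r->new_chunk != UNSET) {
            fprintf(stderr,
                    "cannot change level and chunk size in one grow operation\n");
            return -1;   /* refuse before the migration starts */
        }
        return 0;
    }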
I was able to trigger this curious problem that seems to happen only on one of our servers:
Segmentation fault
This md volume is a raid1 volume made of 2 device-mapper (dm-multipath) devices, and the underlying LUNs are imported via iSCSI. Applying the following patch (see below) seems to fix the problem:
mdadm: /dev/md/10.4.237.12-volume has been started with 2 drives.
But I'm not sure if it's the right fix or if there are other problems that I'm missing.
More details about the md superblocks that might help to better understand the nature of the problem:
dev: 36001405a04ed0c104881100000000000p2
/dev/mapper/36001405a04ed0c104881100000000000p2:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 5f3e8283:7f831b85:bc1958b9:6f2787a4
Name : 10.4.237.12-volume
Creation Time : Thu Jul 27 14:43:16 2017
Raid Level : raid1
Raid Devices : 2
Avail Dev Size : 1073729503 (511.99 GiB 549.75 GB)
Array Size : 536864704 (511.99 GiB 549.75 GB)
Used Dev Size : 1073729408 (511.99 GiB 549.75 GB)
Data Offset : 8192 sectors
Super Offset : 8 sectors
Unused Space : before=8104 sectors, after=95 sectors
State : clean
Device UUID : 16dae7e3:42f3487f:fbeac43a:71cf1f63
Internal Bitmap : 8 sectors from superblock
Update Time : Tue Aug 8 11:12:22 2017
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : 518c443e - correct
Events : 167
Device Role : Active device 0
Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
dev: 36001405a04ed0c104881200000000000p2
/dev/mapper/36001405a04ed0c104881200000000000p2:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 5f3e8283:7f831b85:bc1958b9:6f2787a4
Name : 10.4.237.12-volume
Creation Time : Thu Jul 27 14:43:16 2017
Raid Level : raid1
Raid Devices : 2
Avail Dev Size : 1073729503 (511.99 GiB 549.75 GB)
Array Size : 536864704 (511.99 GiB 549.75 GB)
Used Dev Size : 1073729408 (511.99 GiB 549.75 GB)
Data Offset : 8192 sectors
Super Offset : 8 sectors
Unused Space : before=8104 sectors, after=95 sectors
State : clean
Device UUID : ef612bdd:e475fe02:5d3fc55e:53612f34
Internal Bitmap : 8 sectors from superblock
Update Time : Tue Aug 8 11:12:22 2017
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : c39534fd - correct
Events : 167
Device Role : Active device 1
Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
dev: 36001405a04ed0c104881100000000000p2
00001000 fc 4e 2b a9 01 00 00 00 01 00 00 00 00 00 00 00 |.N+.............|
00001010 5f 3e 82 83 7f 83 1b 85 bc 19 58 b9 6f 27 87 a4 |_>........X.o'..|
00001020 31 30 2e 34 2e 32 33 37 2e 31 32 2d 76 6f 6c 75 |10.4.237.12-volu|
00001030 6d 65 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |me..............|
00001040 64 50 7a 59 00 00 00 00 01 00 00 00 00 00 00 00 |dPzY............|
00001050 80 cf ff 3f 00 00 00 00 00 00 00 00 02 00 00 00 |...?............|
00001060 08 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
00001070 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
00001080 00 20 00 00 00 00 00 00 df cf ff 3f 00 00 00 00 |. .........?....|
00001090 08 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
000010a0 00 00 00 00 00 00 00 00 16 da e7 e3 42 f3 48 7f |............B.H.|
000010b0 fb ea c4 3a 71 cf 1f 63 00 00 08 00 48 00 00 00 |...:q..c....H...|
000010c0 54 f0 89 59 00 00 00 00 a7 00 00 00 00 00 00 00 |T..Y............|
000010d0 ff ff ff ff ff ff ff ff 9c 43 8c 51 80 00 00 00 |.........C.Q....|
000010e0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
*
00001100 00 00 01 00 fe ff fe ff fe ff fe ff fe ff fe ff |................|
00001110 fe ff fe ff fe ff fe ff fe ff fe ff fe ff fe ff |................|
*
00001200 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
*
00002000 62 69 74 6d 04 00 00 00 5f 3e 82 83 7f 83 1b 85 |bitm...._>......|
00002010 bc 19 58 b9 6f 27 87 a4 a7 00 00 00 00 00 00 00 |..X.o'..........|
00002020 a7 00 00 00 00 00 00 00 80 cf ff 3f 00 00 00 00 |...........?....|
00002030 00 00 00 00 00 00 00 01 05 00 00 00 00 00 00 00 |................|
00002040 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
*
00003100 ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff |................|
*
00004000 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
*
003ffe00
dev: 36001405a04ed0c104881200000000000p2
00001000 fc 4e 2b a9 01 00 00 00 01 00 00 00 00 00 00 00 |.N+.............|
00001010 5f 3e 82 83 7f 83 1b 85 bc 19 58 b9 6f 27 87 a4 |_>........X.o'..|
00001020 31 30 2e 34 2e 32 33 37 2e 31 32 2d 76 6f 6c 75 |10.4.237.12-volu|
00001030 6d 65 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |me..............|
00001040 64 50 7a 59 00 00 00 00 01 00 00 00 00 00 00 00 |dPzY............|
00001050 80 cf ff 3f 00 00 00 00 00 00 00 00 02 00 00 00 |...?............|
00001060 08 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
00001070 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
00001080 00 20 00 00 00 00 00 00 df cf ff 3f 00 00 00 00 |. .........?....|
00001090 08 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
000010a0 01 00 00 00 00 00 00 00 ef 61 2b dd e4 75 fe 02 |.........a+..u..|
000010b0 5d 3f c5 5e 53 61 2f 34 00 00 08 00 48 00 00 00 |]?.^Sa/4....H...|
000010c0 54 f0 89 59 00 00 00 00 a7 00 00 00 00 00 00 00 |T..Y............|
000010d0 ff ff ff ff ff ff ff ff 5b 34 95 c3 80 00 00 00 |........[4......|
000010e0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
*
00001100 00 00 01 00 fe ff fe ff fe ff fe ff fe ff fe ff |................|
00001110 fe ff fe ff fe ff fe ff fe ff fe ff fe ff fe ff |................|
*
00001200 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
*
00002000 62 69 74 6d 04 00 00 00 5f 3e 82 83 7f 83 1b 85 |bitm...._>......|
00002010 bc 19 58 b9 6f 27 87 a4 a7 00 00 00 00 00 00 00 |..X.o'..........|
00002020 a7 00 00 00 00 00 00 00 80 cf ff 3f 00 00 00 00 |...........?....|
00002030 00 00 00 00 00 00 00 01 05 00 00 00 00 00 00 00 |................|
00002040 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
*
00003100 ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff |................|
*
00004000 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
*
003ffe00
Assemble: prevent segfault with faulty "best" devices
In Assemble(), after the context reload, best[i] can be -1 in some cases, and this value is used to access devices[j].i.disk.raid_disk before it is checked for being negative, potentially causing a segfault. Check that best[i] is not negative before using it.
Signed-off-by: Andrea Righi <andrea@betterlinux.com>
Fixes: 69a481166be6 ("Assemble array with write journal")
Reviewed-by: NeilBrown <neilb@suse.com>
Signed-off-by: Robert LeBlanc <robert@leblancnet.us>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
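
A minimal sketch of the guard; the struct below is a reduced stand-in for mdadm's real device info type:

    struct dinfo { struct { struct { int raid_disk; } disk; } i; };

    static void scan_best(const int *best, int bestcnt, struct dinfo *devices)
    {
        int i;

        for (i = 0; i < bestcnt; i++) {
            int j = best[i];

            if (j < 0)      /* slot never filled: check before indexing */
                continue;
            if (devices[j].i.disk.raid_disk == i) {
                /* ... safe to use devices[j] here ... */
            }
        }
    }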
01r10_Grow_resize:
1. Create a clustered raid10 with a smaller size, resize the mddev to the max size, and finally change back to the smaller size.
2. Create a clustered raid10 with a smaller chunk size, then resize it to a larger one and trigger a reshape.
Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
00r10_Create: contains 4 scenarios for creating a clustered raid10.
1. General creation: the master node does the resync and the slave node shows Pending.
2. Creating a clustered raid10 with --assume-clean.
3. Creating a clustered raid10 with a spare disk.
4. Creating a clustered raid10 with --name.
Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
01r1_Grow_resize: create a clustered raid1 with a smaller size, resize the mddev to the max size, and finally change back to the smaller size.
Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
00r1_Create: contains 4 scenarios for creating a clustered raid1.
1. General creation: the master node does the resync and the slave node shows Pending.
2. Creating a clustered raid1 with the --assume-clean parameter.
3. Creating a clustered raid1 with a spare disk.
4. Creating a clustered raid1 with --name.
Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
mdadm now has two test suites, covering traditional soft-raid testing and clustermd testing; the '--testdir=' option selects which suite to run, tests/ or clustermd_tests/.
Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
For clustermd testing, the user has to deploy the basic cluster manually; the test scripts don't cover automatic cluster deployment because Linux distributions differ too much. Then complete the configuration in cluster_conf; please refer to the detailed comments in 'cluster_conf'.
1. 'func.sh': source file implementing the feature functions for clustermd testing.
2. 'cluster_conf': configuration file whose two parts serve as the input for testing.
Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
To keep the 'test' file concise, move some functions into the new file tests/func.sh and leave only the core functions in the 'test' file.
Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
1. Delete the 'mdadm -As' call to keep the original testing scene intact.
2. Move some actions inside the 'array' test; 'mdadm -D $array' would complain with errors if $array is null.
Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Only the imsm get_disk_controller_domain returns a disk controller domain for each disk. This causes mdadm to automatically create a disk controller domain policy for imsm metadata, so that imsm containers in the same disk controller domain can take a spare for recovery.
Ignore spares if only one imsm domain is matched.
Signed-off-by: Mariusz Tkaczyk <mariusz.tkaczyk@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
For IMSM enterprise firmware starting with major version 6, present the
platform name as Intel VROC.
Signed-off-by: Artur Paszkiewicz <artur.paszkiewicz@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Since the default layout of raid10 is n2, we should allow this behavior.
Signed-off-by: Guoqing Jiang <gqjiang@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
If a disk disappears from the system and appears again, it is added back to the corresponding container as long as the metadata matches and the disk number is set. This code had no effect on imsm until commit 20dc76d15b40 ("imsm: Set disk slot number"). Now the disk is added to the container but not to the array, which is correct because the disk is out of sync. A rebuild should start for the disk, but it doesn't. The behaviour is the same for both imsm and ddf metadata.
There is no point in handling an out-of-sync disk as a "good member of the array", so remove that part of the code. There is no scenario in which the monitor is already running and the disk can be safely added to the array. Just write initial metadata to the disk so that it is picked up for rebuild.
Signed-off-by: Tomasz Majchrzak <tomasz.majchrzak@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
s->size > 1: s->size is '1' when the '--grow --size max' parameter is specified, so correct this test accordingly.
Fixes: 1b21c449e6f2 ("mdadm/grow: adding a test to ensure resize was required")
Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
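
An illustrative version of the corrected test; '1' is the sentinel left in s->size by "--grow --size max", per the description above, and the struct is a reduced stand-in:

    struct shape { unsigned long long size; };   /* component size in KB */

    static int resize_requested(const struct shape *s)
    {
        /* size == 0: no resize asked for; size == 1: "max" was requested,
         * so it must not be treated as a literal component size. */
        return s->size > 1;
    }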
If RAID10 becomes degraded during a resync and is stopped, it doesn't continue the resync after automatic assembly and is reported to be in sync. The resync is blocked because a disk is missing; this should not happen for RAID10, which can still continue with 3 disks.
Count the missing disks and block the resync only if their number exceeds the limit for the given RAID level (the limit only differs for RAID10). Also check whether the disk under recovery is present; if it is not, the resync should be allowed to run.
Signed-off-by: Maksymilian Kunt <maksymilian.kunt@intel.com>
Signed-off-by: Tomasz Majchrzak <tomasz.majchrzak@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
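
A sketch of the policy idea; the exact limits here are illustrative, not imsm's precise degradation rules:

    static int resync_blocked(int level, int raid_disks, int missing)
    {
        int limit;

        if (level == 10)
            limit = raid_disks / 2;  /* e.g. a 4-disk n2 array can lose one disk per mirror */
        else
            limit = 0;               /* other levels: any missing disk blocks resync */
        return missing > limit;
    }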
Commit 4515fb28a53a ("Add detail information when can not connect monitor") added a warning about a failed connection to the monitor, used in the WaitClean function (see the link below). Mdmon only runs for IMSM containers that hold an array with redundancy, so when mdmon isn't running, mdadm prints this error even where it is misleading and unnecessary. Print it only in the WaitClean function.
The sock in WaitClean is deprecated, so remove it.
Link: https://bugzilla.redhat.com/show_bug.cgi?id=1375002
Signed-off-by: Mariusz Tkaczyk <mariusz.tkaczyk@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
When a disk fails, it goes into the faulty state first and is removed from the array a while later. This gives the mdadm monitor a chance to see that the disk has failed and to notify about an event (e.g. FailSpare). That doesn't work when sysfs is used to get the number of disks in the array, because the sysfs path skips faulty disks while the ioctl implementation doesn't differentiate between active and faulty ones. Make sysfs behave the same. It should not matter that the number of disks reported is then greater than the list of disk structures returned by the call, because the same approach is already taken for offline disks.
Signed-off-by: Mariusz Tkaczyk <mariusz.tkaczyk@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
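
A sketch of the counting rule, assuming the comma-separated flags exposed in md's per-device sysfs "state" file; the helper itself is hypothetical:

    #include <string.h>

    /* Count a disk whose sysfs state reports "faulty" the same way the
     * ioctl path does, i.e. as still part of the array. */
    static int counts_as_present(const char *state)
    {
        return strstr(state, "in_sync") != NULL ||
               strstr(state, "faulty") != NULL;
    }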
When a RAID is created between VMD and SATA disks, the printed message is "Mixing devices attached to different VMD domains is not allowed". This message is unclear and misleading, because creating spanned containers between different VMD domains is allowed. Make the error message more precise.
Signed-off-by: Michal Zylowski <michal.zylowski@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
We are now considering extending clustered raid to support raid10, but only the near layout is supported, so add the check when creating the array or when switching the bitmap from internal to clustered.
Signed-off-by: Guoqing Jiang <gqjiang@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
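
An illustrative decoding of md's raid10 layout word (near copies in the low byte, far copies in the next byte, offset mode in bit 16); the helper itself is hypothetical:

    static int is_near_layout(int layout)
    {
        int near = layout & 0xff;
        int far = (layout >> 8) & 0xff;
        int offset = layout & 0x10000;

        /* the default "n2" layout is near=2, far=1, no offset */
        return near > 1 && far == 1 && !offset;
    }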
Improve error detection after the SG_IO ioctl. Checking only the return value and the response length is insufficient and leads to anomalies if a drive does not have a serial number.
Reported-by: NeilBrown <neilb@suse.com>
Tested-by: NeilBrown <neilb@suse.com>
Signed-off-by: Artur Paszkiewicz <artur.paszkiewicz@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
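
A sketch of the fuller status check, assuming an sg_io_hdr already set up for the command; the point is that a zero ioctl return alone is not enough:

    #include <scsi/sg.h>
    #include <sys/ioctl.h>

    static int sg_io_ok(int fd, struct sg_io_hdr *io)
    {
        if (ioctl(fd, SG_IO, io) < 0)
            return 0;
        /* the transport can succeed while the command itself failed */
        return io->status == 0 &&
               io->host_status == 0 &&
               io->driver_status == 0;
    }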
Since mdadm 3.3 it has not been correct to call ->avail_size if metadata hasn't been read from the device; ->validate_geometry should be used instead. Unfortunately array_try_spare() didn't get the memo, and it can crash when adding a spare with no metadata. So change it to use ->validate_geometry(). Only one place remains that uses ->avail_size(), and that one is safe.
Also fix a comment with a typo.
Reported-and-tested-by: Bjørnar Ness <bjornar.ness@gmail.com>
Fixes: 641da7459192 ("super1: separate to version of _avail_space1().")
Signed-off-by: NeilBrown <neilb@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
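
A conceptual sketch only: the real validate_geometry takes many more parameters; the shapes below are hypothetical and just contrast the two calls:

    struct supertype_ops {
        unsigned long long (*avail_size)(void *st, unsigned long long size);
        int (*validate_geometry)(void *st, int level, int raiddisks,
                                 unsigned long long *freesize);
    };

    static int try_spare(struct supertype_ops *ops, void *st,
                         int level, int raiddisks)
    {
        unsigned long long freesize = 0;

        /* ->avail_size assumes metadata was already read and may crash on
         * a blank spare; ->validate_geometry copes with missing metadata. */
        return ops->validate_geometry(st, level, raiddisks, &freesize);
    }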
Just like the other template services, include the instance name (%I) in the descriptions of
mdadm-last-resort@.service
mdadm-last-resort@.timer
so that it is clear from the logs which array is affected.
Reported-by: Andrei Borzenkov <arvidjaar@gmail.com>
Link: http://bugzilla.opensuse.org/show_bug.cgi?id=1064915
Signed-off-by: NeilBrown <neilb@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Split the 'write to new_array' code out into a function named create_named_array, and fix a trivial 'warn_unused_result' compiler warning introduced by commit fdbf7aaa1956 ("mdopen: call "modprobe md_mod" if it might be needed.").
Suggested-by: NeilBrown <neilb@suse.com>
Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Fix commit 4b74a905a67e ("mdadm/grow: Component size must be larger than chunk size"):
array.level > 1: the check only applies to RAID levels for which chunk_size is meaningful.
s->size > 0: ensure that a component-size change was actually requested.
array.chunk_size / 1024 > s->size: the component size must always be >= the current chunk size when a resize is requested; otherwise mddev->pers->resize would set mddev->dev_sectors to '0'.
Reported-by: Tomasz Majchrzak <tomasz.majchrzak@intel.com>
Suggested-by: NeilBrown <neilb@suse.com>
Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
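
An illustrative combination of the three conditions, with the structs reduced to the fields the description mentions:

    struct array_info { int level; int chunk_size; };   /* chunk in bytes */
    struct shape { unsigned long long size; };          /* size in KB */

    static int size_too_small(const struct array_info *a,
                              const struct shape *s)
    {
        return a->level > 1 &&             /* chunk size is meaningful */
               s->size > 0 &&              /* a resize was requested */
               (unsigned long long)a->chunk_size / 1024 > s->size;
    }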
The systemd developers like to keep control of the lib/systemd namespace and haven't approved the use of lib/systemd/scripts, so we should stop using it. Move the mdadm_env.sh script, optionally sourced by mdmonitor.service, to a new directory, /usr/lib/mdadm.
Signed-off-by: NeilBrown <neilb@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
We should remove the tmp file on signals as well as on exit.
Signed-off-by: NeilBrown <neilb@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Use 'logger' to report when mdcheck starts, stops, or continues
the check on an array.
Signed-off-by: NeilBrown <neilb@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
mdstat: the strncmp length should be corrected to 6 when matching 'resync'.
Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
This commit doesn't change any code; it just tidies up the formatting.
Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
This case makes a raid5 reshape run in the backwards direction. It changes the chunk size after the reshape and stops the raid, then starts the raid again.
Signed-off-by: Xiao Ni <xni@redhat.com>
Suggested-by: Jes Sorensen <jes.sorensen@gmail.com>
Suggested-by: Zhilong Liu <zlliu@suse.com>
Suggested-by: Paul Menzel <pmenzel@molgen.mpg.de>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
After switch-root, a new mdmon is started and sends the initrd mdmon a signal to terminate. The initrd mdmon receives it and switches the safe-mode delay to 1 ms in order to get the array to a clean state and flush the last version of the metadata. The problem is that the sysfs filesystem is no longer available to the initrd mdmon after switch-root, so the original safe-mode delay of a few seconds remains unchanged; if there is a lot of traffic on the filesystem, the initrd mdmon doesn't terminate for a long time (no clean state). There are then 2 instances of mdmon; the initrd mdmon flushes metadata when the array goes clean, but that metadata may already be outdated.
Use the file descriptor obtained at mdmon start to change the safe-mode delay.
Signed-off-by: Tomasz Majchrzak <tomasz.majchrzak@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
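
A sketch of the fix's mechanism: a file descriptor opened before switch-root keeps working even after the path itself is gone (the md device name is an example):

    #include <fcntl.h>
    #include <unistd.h>

    static int safe_mode_fd = -1;

    static void remember_safe_mode_fd(void)
    {
        safe_mode_fd = open("/sys/block/md127/md/safe_mode_delay", O_WRONLY);
    }

    static void shorten_safe_mode_delay(void)
    {
        if (safe_mode_fd >= 0)
            write(safe_mode_fd, "0.001", 5);   /* 1 ms, as described above */
    }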
If the first disk of an IMSM RAID1 is failed but still present in the system, the array is not auto-assembled. Auto-assembly uses the raid disk slot from the metadata to index disks; as it is not set, the valid disk is seen as a replacement disk and its metadata is ignored. The problem is not observed for other RAID levels, as they have more than 2 disks: replacement disks are only stored under odd indexes, so the third disk's metadata is used in such a scenario.
Signed-off-by: Mariusz Tkaczyk <mariusz.tkaczyk@intel.com>
Reviewed-by: Tomasz Majchrzak <tomasz.majchrzak@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Try to use the full line length and avoid breaking up lines excessively.
Equally break up lines that are too long for no reason.
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
When a rebuild is initiated by the UEFI driver, it is possible that the new disk will not contain a valid ppl header. Just write the initial ppl and don't abort the assembly.
Signed-off-by: Artur Paszkiewicz <artur.paszkiewicz@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Use the first map to get the correct disk when rebuilding and not the
failed disk from the second map.
Signed-off-by: Artur Paszkiewicz <artur.paszkiewicz@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Set resync_start to 0 when starting a rebuilding array to make the
kernel perform ppl recovery before the rebuild.
Signed-off-by: Artur Paszkiewicz <artur.paszkiewicz@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>