Need help with RAID: Linux doesn't see my RAID or drivers
iwantlinuxgames Aug 3, 2020
Quoting: F.UltraBootable raid-0 sounds more like a grub2 problem than md since md does not handle boots. I have installed tons of servers on raid-1 using md so don't know what your issues have been there (dead simple setup in both Debian and Ubuntu, no need for chroot or anything like that). Raid-0 I'm unsure of since I have never had any reason to use that for boot, I usually go with 1+0 instead when that kind of performance is needed for storage if I have the OS on the same drive as the data.

You are indeed correct that it is a grub2 issue... the RAID support would need to be available inside GRUB so that the array can be assembled prior to OS boot, since GRUB has to know where the /boot partition and linux-image are. That's why booting from an mdadm RAID 0 is such a problem: at the current time the md driver lives in the kernel image (mdadm itself is just the userspace tool), and with RAID 0 the kernel image would be scattered across the various block devices. An mdadm RAID 1 array is basically just a "mirror" copy (hence RAID mirroring) of block devices, so the kernel image isn't scattered across multiple block devices and GRUB can find the /boot partition and load the kernel image.

Ubuntu Server install media will load mdadm and even install mdadm to your target devices. Ubuntu Desktop, on the other hand... "Desktop users don't run RAID" seems to pretty much be the consensus in the Ubuntu community, so the desktop installer requires a chroot, even for a RAID 1 array, though with RAID 1 you generally don't have to worry about not getting into the OS. I have a bigger need for MORE SPACE than I have for data integrity. Even still, I'd prefer a hardware controller even for RAID 1, as I could just move the entire batch (card and attached disks) to new hardware and boot (and have done so in the past, but using RAID 0 on the controller).
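
A rough sketch of the RAID 1 case described above, assuming a Debian/Ubuntu system, BIOS boot, and placeholder names /dev/sda2, /dev/sdb2 and /dev/md0 (not the poster's exact setup):

    # create a two-disk RAID 1 array (metadata 1.2 is mdadm's default)
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
    # record the array so the initramfs can assemble it at boot
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf
    update-initramfs -u
    # install GRUB to both disks so either one can still boot if the other dies
    grub-install /dev/sda
    grub-install /dev/sdb
    update-grub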

QuoteSorry about the Areca-1230, just assumed that it was the name of your mobo and not of your controller :), not a manufacturer that is available over here. That board runs an embedded web server, telnet+v100 terminal, a snmp daemon and is using an Xscale cpu so it is running "something", either BSD or some proprietary realtime os (don't think it is running Linux since they don't carry any license details in their documentation or firmware blob).

Linux kernels since around 2.6 have supported Areca hardware via the arcmsr kernel module. In fact, where Linux really shines is its support for these types of cards: Areca, HighPoint, Promise... high-end controller cards (as I said, mine cost me over $850 and I used it for several years; I passed it on to a friend who is now using it in one of his systems to learn RAID. He's a Windows user.) Windows requires loading drivers most of the time for such cards.
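
For anyone wanting to check whether their kernel already carries the Areca driver, something along these lines works on most distros (arcmsr is the in-tree module for Areca SATA/SAS RAID controllers):

    # show the controller on the PCI bus
    lspci | grep -i areca
    # confirm the in-tree driver exists and is loaded
    modinfo arcmsr | head -n 5
    lsmod | grep arcmsr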

QuoteSo like most hardware today, it's basically a small computer running software raid, but one could argue that this is just me being semantically pedantic and your main point seems to be the "ease of use" angle anyway and not what it technically is "behind the scene" which is the angle that I'm more interested in so I think we can just ignore that :)

Believe me, I'm probably as pedantic as you, or more so, LOL... as I stated earlier, if you want to understand the processes a hardware controller handles for you, mdadm is the way to go, unless you need a bootable RAID 0.
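
As a concrete example of the kind of work a hardware controller normally hides from you, this is roughly how you watch md do its job (array and partition names below are placeholders):

    # live view of array state and rebuild/resync progress
    cat /proc/mdstat
    # detailed status: member disks, sync state, failure counts
    mdadm --detail /dev/md0
    # fail a disk, pull it, and add a replacement by hand
    mdadm /dev/md0 --fail /dev/sdb2
    mdadm /dev/md0 --remove /dev/sdb2
    mdadm /dev/md0 --add /dev/sdc2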

QuoteAlso it perhaps should be said that on the server that I posted the 24 drives from I use btrfs in raid1 which gives me advantages that no hardware raid can give me.

I prefer XFS (according to benchmarks it's the fastest-performing filesystem around, although a bit fragile; it used to be terrible with power outages, though lately it seems to have improved). I haven't bothered to check the performance of btrfs. I also haven't used LVM in many years, so I can't speak to its performance nowadays. When I was experimenting with it years ago, the performance was utterly horrible, even with an XFS filesystem on top. I understand its striping has improved over the last several years, but I've become too spoiled by hardware and FakeRAID controllers.
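
For comparison, striping with LVM these days is just a couple of flags on lvcreate; the volume group and device names below are only placeholders, not a recommendation:

    # two-disk striped logical volume, roughly the LVM equivalent of RAID 0
    pvcreate /dev/sdb /dev/sdc
    vgcreate vg_data /dev/sdb /dev/sdc
    lvcreate --type striped --stripes 2 --stripesize 64k -l 100%FREE -n lv_data vg_data
    mkfs.xfs /dev/vg_data/lv_data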

Addendum: I just checked on btrfs performance (source: Phoronix benchmark published 3 July 2020). Yep, looks like I'll be using XFS for quite some time, although F2FS is quite the contender.

Last edited by iwantlinuxgames on 3 August 2020 at 10:57 pm UTC
F.Ultra Aug 4, 2020
Quoting: iwantlinuxgames[...] I prefer XFS (according to benchmarks it's the fastest-performing filesystem around, although a bit fragile). I haven't bothered to check the performance of btrfs. [...] Addendum: I just checked on btrfs performance (source: Phoronix benchmark published 3 July 2020). Yep, looks like I'll be using XFS for quite some time, although F2FS is quite the contender.

I don't use btrfs for the performance, I use it for the "avoid silent bitrot/disk errors" advantage. And I think there is an alternate installer for Ubuntu Desktop (I'm writing this from an Ubuntu RAID 5 desktop at home and I had to do zero shenanigans to get it to work, but even though it's a 20.04 now it started out as an 8.04, so things may have changed since then with the desktop installer; my desktop at work is also a RAID 5 that I think started as a 14.04 LTS).
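
For anyone curious, that self-healing boils down to something like the following (device and mountpoint names are placeholders):

    # two-copy btrfs: checksums let it detect corruption and repair from the good mirror
    mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc
    mount /dev/sdb /mnt/data
    # a periodic scrub re-reads everything and fixes bad copies from the good one
    btrfs scrub start /mnt/data
    btrfs scrub status /mnt/data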
iwantlinuxgames Aug 4, 2020
Quoting: F.Ultra[...] I don't use btrfs for the performance, I use it for the "avoid silent bitrot/disk errors" advantage. And I think there is an alternate installer for Ubuntu Desktop (I'm writing this from an Ubuntu RAID 5 desktop at home and I had to do zero shenanigans to get it to work, but even though it's a 20.04 now it started out as an 8.04, so things may have changed since then with the desktop installer; my desktop at work is also a RAID 5 that I think started as a 14.04 LTS).

Yeah... I'm too impatient LOL... the little bit of important data I have, I back up fairly regularly... the rest can easily be replaced (although the 600+ kung fu movie collection I have isn't easily replaced; some go back as far as the 60s and tracking them down would be nigh impossible). I gotta say though... the 2+ TB Steam library takes a couple of days to reinstall.

I'm on Kubuntu 20.04 (GNOME desktop drives me nuts!) and the installer requires that kind of tinkering with mdadm. I "HAD" to make sure I could boot and mount the array since it was mounted on /home and I have settings in there I've been carrying since 12.04. I think after the upgrade to the HighPoint 7303 with 4x 2TB Sabrent Rocket Qs I'm going to start a fresh /home and just copy out the stuff I want and start with new settings for things (ugh... means I have to try and remember what-all Chrome tabs I have open (2 windows with 20+ tabs each... it's convenient to just have the cache and settings for Chrome already there on a fresh install)). I'm so ready to get rid of this 6-disk hotswap bay and all that friggin' cabling; I have a 5.25in 7-port USB bay to go in there. For the life of me, I can't figure out why they put the majority of USB ports on the rear of the mobo... terribly inconvenient if you ask me. Plus, in a year or two when I upgrade the mobo and CPU to a PCIe gen 4 3rd-gen Threadripper, I can just pop the 7303 into a PCIe slot and boot. In theory anyway... I had the same thought about the last upgrade from an AMD 8220(?... can't remember, but I know it was in the 8xxx series of CPUs, 8-core) to the Threadripper 1950X and it didn't boot; the kernel panicked with something about "SEV" or some such and I had to do a fresh install.
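
Roughly what picking an existing /home array back up on a fresh install looks like (the UUID below is a placeholder for whatever blkid reports):

    # find and assemble any md arrays already on the attached disks
    mdadm --assemble --scan
    # persist the array definition so the initramfs knows about it
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf
    update-initramfs -u
    # mount it on /home via fstab using the filesystem UUID from blkid
    echo 'UUID=xxxx-xxxx  /home  xfs  defaults  0  2' >> /etc/fstab
    mount /home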
crt0mega Aug 4, 2020
I've got a (grub-) bootable md raid0 with four pieces of spinning rust.
iwantlinuxgames Aug 4, 2020
Quoting: crt0megaI've got a (grub-) bootable md raid0 with four pieces of spinning rust.

could you point me to some documentation on how you achieved this?
crt0mega Aug 4, 2020
Quoting: iwantlinuxgamescould you point me to some documentation on how you achieved this?
I wish I could, but basically I googled a lot last year and got it working somehow. I could try to reproduce the hoops I had to jump through later this week in a VM and put them together in a short guide.
iwantlinuxgames Aug 5, 2020
Quoting: crt0mega
Quoting: iwantlinuxgamescould you point me to some documentation on how you achieved this?
I wish I could, but basically I googled a lot last year and got it working somehow. I could try to reproduce the hoops I had to jump through later this week in a VM and put them together in a short guide.

That would be awesome, thanks!!! I've spent the last year+, off and on, looking into a bootable md RAID 0 and have had pretty much zero luck.

Last edited by iwantlinuxgames on 5 August 2020 at 1:05 am UTC
crt0mega Aug 8, 2020
It's too warm here to do anything other than eat/drink/sleep. I'm pretty sure I mostly followed the instructions found here. I did not create a boot partition on every HDD; I left the space empty on 3 of the 4.
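
Roughly what that layout amounts to, for anyone else attempting it (device names are placeholders, and this assumes BIOS/MBR boot with GRUB's mdraid support handling the assembly):

    # RAID 0 across all four disks; metadata 1.x so GRUB's mdraid1x module can read it
    mdadm --create /dev/md0 --level=0 --raid-devices=4 --metadata=1.2 /dev/sd[abcd]2
    # make GRUB pull the raid modules into its core image
    echo 'GRUB_PRELOAD_MODULES="mdraid1x"' >> /etc/default/grub
    # only the disk with the boot partition needs the boot code installed
    grub-install /dev/sda
    update-grub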
F.Ultra Feb 7, 2021
Necroposting, but I stumbled upon this old video from Linus Tech Tips that demonstrates exactly what I was talking about earlier on the dangers of HW RAID vs SW RAID: in short, their RAID cards died, and due to the proprietary on-disk format of the HW RAID they could not recover the data using other cards.

https://www.youtube.com/watch?v=gSrnXgAmK8k
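
That portability is exactly what md's standardized on-disk metadata buys you; on any Linux box the array can be inspected and reassembled with nothing more than:

    # inspect the md superblock on each member disk
    mdadm --examine /dev/sd[bcde]1
    # let mdadm find and assemble every array it recognizes
    mdadm --assemble --scan
    cat /proc/mdstat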