Need help with RAID: Linux doesn't see my RAID or drivers
fires Jul 20, 2020
Need help with RAID: Linux doesn't see my RAID or drivers

OK, I went into the BIOS and set up my RAID, then came back out, but while installing Linux there is no HDD or any RAID shown.

So this must be a driver issue, but how do I fix it and which drivers do I need?

Many thanks.

Last edited by fires on 20 July 2020 at 6:42 pm UTC
damarrin Jul 20, 2020
As a rule of thumb, you create your RAID from Linux, not from the BIOS. In the BIOS you should set all your drives to AHCI, or whatever the option is called, and create the RAID from a live Linux environment using something like mdadm.

There is little chance that the mobo manufacturer provides Linux drivers for RAID, unless perhaps it's a server mobo or controller, in which case the drivers will be for something like RHEL 6 and useless anyway. Linux's software RAID is very good and is what you should be using.
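
For example, a minimal sketch from the live session (assuming two placeholder disks, /dev/sda and /dev/sdb, and the Debian/Ubuntu location for mdadm.conf):

[code]
# create a two-disk RAID 1 array (adjust level and devices to your setup)
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
cat /proc/mdstat                        # watch the initial sync
sudo mkfs.ext4 /dev/md0                 # put a filesystem on the array
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf   # remember the array
[/code]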
fires Jul 25, 2020
Quoting: damarrin
As a rule of thumb, you create your RAID from Linux, not from the BIOS. […]

Many thanks.

I will look into mdadm.

But I am able to do RAID from the mobo. I did not know I should keep the drives in AHCI; they are in AHCI now, so I will see how to build the RAID from Linux.
iwantlinuxgames Jul 26, 2020
Quoting: fires
Quoting: damarrin
[…]

As a "general rule of thumb", no, you do not. You only use mdadm if your mobo's RAID chipset isn't supported. The majority of RAID chipsets are supported by the dmraid package, which configures mapped devices. X399 and B450 based AMD mobos are not supported if they have an AMD-RAID chipset that you are trying to use for NVMe RAID.

We need more info about your board and its chipset. Generally you will find the mapped device under /dev/mapper; it will often have a name something like pdc_abcdefg or nvidia_abcdefg depending on your chipset. You will need to manually partition this device: sudo fdisk /dev/mapper/pdc_abcdefg.

Create your partition and write the changes to the device. You will now have something like /dev/mapper/pdc_abcdefg1; this will be your partition.

I don't know which Linux distro you are attempting to install, but with Ubuntu the dmraid driver modules are generally loaded in the live installer. If not, run sudo apt install dmraid and then partition the mapped device.
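
Putting that together, it's roughly the following from the live session (just a sketch; pdc_abcdefg is a placeholder name):

[code]
sudo apt install dmraid               # if the tools aren't already in the live image
sudo dmraid -r                        # list the RAID sets described by the chipset metadata
sudo dmraid -ay                       # activate them as mapped devices
ls /dev/mapper/                       # the assembled set shows up here
sudo fdisk /dev/mapper/pdc_abcdefg    # partition the mapped device manually
[/code]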

During the Ubuntu install, if you choose manual installation as opposed to guided, you can choose your RAID array. When you get to the partitioning utility, set up your / (root) and other filesystems. Where it asks you to install the bootloader, be sure to select the whole mapped device (/dev/mapper/pdc_abcdefg) and not the partition.

Linux software RAID is not preferred, because it doesn't do a bootable RAID 0 config.

In 15+ years I have never had to use mdadm with onboard or add-in card RAID arrays. For a mass storage enthusiast, the order of preference is:

Hardware RAID controller (with its own IO processor and RAM)
FakeRAID controller (add-in card or onboard chipset)
Software RAID (as a very last resort)

If you can supply some more info about your setup (mobo model, choice of distro, possibly even RAID chipset), I should be able to provide you with a little better info.

Addendum: /dev/mapper devices won't be shown via fdisk -l.

Addendum II: "FakeRAID" is preferable to mdadm software RAID because the mobo/add-in card chipset handles the LVM bits. Other than being aware of how dmraid works and interfaces with mapped block devices, installing Linux is a fairly simple and straightforward process.

mdadm, on the other hand, often requires the following process (roughly sketched below):

install and load the mdadm modules and utilities
create and assemble the array
proceed with the install
after the install, mount the array and chroot into the environment the array will be used in (if you're doing RAID 0 this will be the block device you set up to boot from and load the kernel image, since mdadm cannot do bootable RAID 0)
in the chroot, install mdadm, assemble and activate the array, and edit fstab to ensure the array will be mounted
you may or may not need to edit /etc/default/grub (depends on the kernel version you are using; after 5.2/5.3(?), there were some changes made regarding mdadm and GRUB interaction)
since you're already there, you might as well run updates in the chroot and make any config adjustments if needed (better install that proprietary NVIDIA driver while you're there)

Once you're positive your array won't blow up when you reboot, reboot.
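
Roughly, that chroot dance looks something like this (a sketch only; it assumes the new root lives on /dev/md0, gets mounted at /mnt, and the distro is Debian/Ubuntu-based):

[code]
sudo mount /dev/md0 /mnt                               # the freshly installed root on the array
for d in dev proc sys; do sudo mount --bind /$d /mnt/$d; done
sudo chroot /mnt
apt install mdadm                                      # inside the chroot
mdadm --detail --scan >> /etc/mdadm/mdadm.conf         # so the array is known at boot
update-initramfs -u                                    # rebuild the initramfs with mdadm in it
update-grub
exit
[/code]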

If you're lucky, your array will be mounted and you can log into your fresh Linux install. Most likely it'll fail, and you'll need to boot from the live installer again, reinstall mdadm, re-assemble the array (if it's a RAID 1 and bootable), and look through the system boot logs to see where it failed and adjust config files accordingly. And reboot.

The only "good" thing about mdadm is that it gives you an appreciation for all of the underlying processes a FakeRAID card/chipset handles for you.

Hardware RAID controllers are the preferred means of running RAID. They possess their own IO processors and often have onboard RAM. Most have a RAID utility accessible via a POST hotkey as well as via an Ethernet interface. LVM on the controller presents arrays as a single block device to the OS. Many of these controllers support hot-swap, allowing you to yank and replace failed storage devices and rebuild arrays without having to shut the system down (this is supported by pretty much all SATA RAID interfaces, including onboard mobo chips).

mdadm has its place. It's great if you need to move some data somewhere to make partition/filesystem changes and you don't have a single storage device that is big enough, but do have several USB storage devices of approximately the same size. They can be pooled to create a storage area for those files.
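
For instance, something along these lines (a sketch; device names are placeholders, and RAID 0 means nothing is redundant, so treat it only as a temporary dumping ground):

[code]
sudo mdadm --create /dev/md1 --level=0 --raid-devices=3 /dev/sdc /dev/sdd /dev/sde
sudo mkfs.ext4 /dev/md1
sudo mkdir -p /mnt/scratch
sudo mount /dev/md1 /mnt/scratch
[/code]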

mdadm can also be used to create nested RAID levels, if you're so inclined. For instance, if your mobo has ports for 6 disks, you can use the onboard RAID utility to create 3x RAID 0 arrays, and then in Linux create another RAID 0 array from the 3 arrays exposed to the OS. There's no advantage to this, and it just creates a layer of unnecessary complexity. But, in the vein of Mythbusters, "If it's worth doing, it's worth overdoing". Throw in multiple add-in FakeRAID cards, apply some mdadm, and polish it off with LVM2 and you've got an entirely over-engineered storage array. And an appreciation for what hardware controllers turn into an easy click-through process.
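
If you insist, the nesting itself is a one-liner (set names are placeholders, and again, there is no real benefit to doing this):

[code]
# stripe across three FakeRAID sets that dmraid has already exposed under /dev/mapper
sudo mdadm --create /dev/md0 --level=0 --raid-devices=3 \
    /dev/mapper/pdc_aaaa /dev/mapper/pdc_bbbb /dev/mapper/pdc_cccc
[/code]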

Last edited by iwantlinuxgames on 27 July 2020 at 1:13 am UTC
damarrin Jul 27, 2020
I don't agree, but whatever. It looks like the OP has no idea what they're doing, and mdadm lets you get a RAID up and running with a couple of commands using standard Linux /dev/sdX devices on any hardware, without trying to figure out what their controller is, whether it is supported, and what precisely needs to be done to get it all up and running on their specific hardware.
fires Jul 27, 2020
Quoting: iwantlinuxgames
[…]


Sorry for the delay.

I just read your post; I will post back with all the info.

Many thanks.
F.Ultra Aug 2, 2020
Quoting: iwantlinuxgames
[…]

Somewhat disagree with your preference list. md is superior to both FakeRAID and "hardware RAID" (hardware in quotes, since close to all of those are just small RTOSes running their own software RAID, and often they are Linux running md) in one major way: when your card or mobo breaks, with md you can replace your mobo with whatever brand you want and things will still work, while with the other solutions you must have the exact same FakeRAID or RAID card.
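
On the replacement board, that usually amounts to nothing more than (a rough sketch):

[code]
# md rescans the superblocks on the moved disks and brings the array back up
sudo mdadm --assemble --scan
cat /proc/mdstat
[/code]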
iwantlinuxgames Aug 3, 2020
Quoting: F.Ultra
[…]

Actually, mdadm is capable of assembling arrays from FakeRAID cards (as long as the chipsets fall within Silicon Image, Promise, Nvidia and one other that escapes me). As for my Areca 1230-ML, it had an Intel IO processor, and the BIOS on it was much too small to be running anything more than the firmware; there was no embedded Linux involved. And I also had no issues getting mdadm to re-assemble those arrays. Of course, if you're concerned about data loss, then you should be making regular backups. Hardware controllers are preferred since they present only a single block device to the OS, instead of spamming fdisk -l with a ton of block devices, which can become a bit confusing if you have over 6 disks in your RAID (my Areca had 12 disks attached, plus 8 disks attached via onboard RAID).
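
If you want to see what metadata mdadm recognises on a set of member disks, a quick check looks something like this (placeholder devices):

[code]
# print whatever RAID superblock/metadata mdadm finds on each member disk
sudo mdadm --examine /dev/sda /dev/sdb
[/code]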

Of course, if you just prefer tinkering around, mdadm is great for that. Myself, I've become too lazy to do all that tinkering anymore and prefer "the easy path". And hardware controllers are the easiest path.
F.Ultra Aug 3, 2020
Quoting: iwantlinuxgames
[…]

FakeRAID does not use any embedded Linux, since FakeRAID is just software RAID (hence why you have to use mdadm). Actually, I struggle to find any real benefit in the FakeRAID cards at all. My mention of embedded Linux was about the hardware RAID cards.

Yes, backups should always be done, but if your mobo dies or you want to upgrade to a different brand/chipset, there is a world less hurt if you use plain md, since all you have to do is plug in the drives, while with the other forms of RAID you have to bring everything back from backups.

Btw, if you feel that 6+ disks showing up as separate drives is scary, then Halloween comes early for you, my friend :-). Here is an excerpt from one of my servers:

 
[code]
[email protected]:~# fdisk -l | grep Disk
Disk /dev/sdb: 9.1 TiB, 10000831348736 bytes, 19532873728 sectors
Disk /dev/sda: 9.1 TiB, 10000831348736 bytes, 19532873728 sectors
Disk /dev/sdd: 9.1 TiB, 10000831348736 bytes, 19532873728 sectors
Disk /dev/sdc: 9.1 TiB, 10000831348736 bytes, 19532873728 sectors
Disk /dev/sdv: 9.1 TiB, 10000831348736 bytes, 19532873728 sectors
Disk /dev/sdf: 9.1 TiB, 10000831348736 bytes, 19532873728 sectors
Disk /dev/sdh: 9.1 TiB, 10000831348736 bytes, 19532873728 sectors
Disk /dev/sdw: 9.1 TiB, 10000831348736 bytes, 19532873728 sectors
Disk /dev/sdu: 9.1 TiB, 10000831348736 bytes, 19532873728 sectors
Disk /dev/sdx: 9.1 TiB, 10000831348736 bytes, 19532873728 sectors
Disk /dev/sdm: 9.1 TiB, 10000831348736 bytes, 19532873728 sectors
Disk /dev/sdl: 9.1 TiB, 10000831348736 bytes, 19532873728 sectors
Disk /dev/sdg: 9.1 TiB, 10000831348736 bytes, 19532873728 sectors
Disk /dev/sdk: 9.1 TiB, 10000831348736 bytes, 19532873728 sectors
Disk /dev/sdi: 9.1 TiB, 10000831348736 bytes, 19532873728 sectors
Disk /dev/sde: 9.1 TiB, 10000831348736 bytes, 19532873728 sectors
Disk /dev/sdo: 9.1 TiB, 10000831348736 bytes, 19532873728 sectors
Disk /dev/sdj: 9.1 TiB, 10000831348736 bytes, 19532873728 sectors
Disk /dev/sdn: 9.1 TiB, 10000831348736 bytes, 19532873728 sectors
Disk /dev/sdr: 9.1 TiB, 10000831348736 bytes, 19532873728 sectors
Disk /dev/sds: 9.1 TiB, 10000831348736 bytes, 19532873728 sectors
Disk /dev/sdp: 9.1 TiB, 10000831348736 bytes, 19532873728 sectors
Disk /dev/sdt: 9.1 TiB, 10000831348736 bytes, 19532873728 sectors
Disk /dev/sdq: 9.1 TiB, 10000831348736 bytes, 19532873728 sectors
Disk /dev/sdy: 118 GiB, 126701535232 bytes, 247463936 sectors
Disk /dev/sdz: 118 GiB, 126701535232 bytes, 247463936 sectors
[/code]
iwantlinuxgames Aug 3, 2020
Quoting: F.Ultra
[…]

Yeah, that's too much to deal with... and an Areca 1230 isn't a FakeRAID card; it was an $850 hardware controller. With no embedded Linux. With FakeRAID cards you still have to deal with fdisk being spammed, but at least dmraid (for FakeRAID cards) gives you a single block device to deal with under /dev/mapper. Yes, I realise md creates dm-0 (I'm using mdadm right now because there are no Linux drivers for PCIe NVMe RAID on an X399 mobo). It's still not going to change my mind about a single block device being presented to the OS by a hardware controller. And I'd much rather deal with a FakeRAID card than md. I'm currently eyeing a HighPoint SSD7103 bootable 4x M.2 NVMe RAID controller so I can finally ditch these last 8 2.5in SSDs and the hot-swap bay. And get rid of all that damn cabling clutter. And double my storage space. If the 4TB Sabrent Rocket Q NVMe drives weren't $1500 each, I'd go for those. But alas... :/

Quoting: F.Ultra
Backups should always be done, but if your mobo dies or you want to upgrade to a different brand/chipset, there is a world less hurt if you use plain md, since all you have to do is plug in the drives, while with the other forms of RAID you have to bring everything back from backups.

Which is what I'd have to do with md if I add brand-new disks in a brand-new array. Or, as in my recent experience last year, a new mobo (the aforementioned X399 board, a Phantom Gaming 6) and CPU (Threadripper 1950X). I had to make backups, tear down the old rig, install the new bits, remove the Areca, create new arrays with md (since the onboard controller had performance issues with SATA RAID under the proprietary Linux driver), and restore from backup.

Plus, as I mentioned in one of my previous replies, mdadm does not support bootable RAID 0, so I have to boot from a 1TB NVMe, with almost 95% of the disk going to waste for the / partition. A setup I am most displeased with.
[code]
df -h
Filesystem      Size  Used Avail Use% Mounted on
udev             48G     0   48G   0% /dev
tmpfs           9.5G  1.9M  9.5G   1% /run
/dev/nvme2n1p1  954G   52G  902G   6% /
tmpfs            48G  1.2G   47G   3% /dev/shm
tmpfs           5.0M  8.0K  5.0M   1% /run/lock
tmpfs            48G     0   48G   0% /sys/fs/cgroup
/dev/md0        5.6T  3.6T  2.1T  64% /home
tmpfs           9.5G   16K  9.5G   1% /run/user/1000
[/code]

You can see it there for yourself on /dev/nvme2n1p1. So I have almost another TB going to waste that could be used in an array.

Addendum (I had to leave for work), so let's review:

Hardware controller card:
"portable", i.e. the card can be moved across multiple hardware refreshes
presents a single block device to the OS
"reliable": most controllers can last a decade or better
supports hot-swap
bootable RAID 0

Cons: expensive, but that expense means you have a piece of hardware with a decent warranty (often 5 years, sometimes 7) that lasts well beyond being outdated (my Areca was several years old before I decommissioned it, and was still working when I did so; I replaced it because it was SATA II, and the 12-disk array was slower than a single one of my NVMe drives: the Areca bursts up to 1 GB/s, while the NVMe is a pretty consistent 2 GB+/s). Thus, a hardware controller is preferred.

FakeRAID:
presents a single block device to the OS, but doesn't mask the individual block devices from the OS
"portable": a card can be moved across multiple hardware refreshes
online expansion
supports hot-swap
bootable RAID 0

Cons: not very reliable, because they are cheap. If your card or onboard chipset fails, you'll need to get a card with a chipset from the same manufacturer line (mdadm can be used to re-assemble these and rescue the data).

mdadm:
"portable": disks can be moved from one system to another and the array re-assembled
can re-assemble arrays from FakeRAID controllers in the Promise, Silicon Image, and Nvidia chipset lines
online expansion
great for RAID 1, 4, 5, 6, 10
supports hot-swap (via the chipset)

Cons: no bootable RAID 0. Installing the OS to a RAID 1 array generally requires a chroot into the OS install on the array from the installer media, installing mdadm, configuring mdadm.conf, and sacrificing your first-born to appease the data gods and grant you luck that it boots on the first go-around. Usually it doesn't. No masking of individual devices from the OS.

No, I think I still prefer hardware controllers over the other two options. Nothing is easier than creating the arrays in the BIOS utility, booting your install media, installing to a single block device, not having to chroot, and then rebooting. Done... As I said, I'm done with all that tinkering shit. I'll pay more for the speed, reliability, and convenience offered by a hardware controller.

Last edited by iwantlinuxgames on 3 August 2020 at 6:39 pm UTC
F.Ultra Aug 3, 2020
Quoting: iwantlinuxgames
Quoting: F.Ultra
Quoting: iwantlinuxgames
Quoting: F.Ultra
Quoting: iwantlinuxgames
Quoting: fires
Quoting: damarrinAs a rule of thumb, you create your raid from linux, not from BIOS. In the bios you should set all your drives to AHCI or whatever the option is and create the raid from a live Linux environment using something like mdadm.

There is little chance that the mobo manufacturer provides Linux drivers for raid, unless perhaps it’s a server mobo or controller, in which case the drivers will be for something like rhel 6 and useless anyway. Linux’s software raid is very good anyway and what you should be using.


Many thanks

i will look into mdadm

but i am ABLE TO DO RAID FROM MOBO

but i did not know i should keep them AHCI now they are in AHCI so i will see how to build the raid fron linux

as a "general rule of thumb" no you do not. you ONLY use mdadm IF you're mobo's raid chipset isn't supported. The MAJORITY of RAID chipsets are supported by the DMRAID package, which configures mapped devices. x399 and B450 based AMD mobos are NOT supported if they have an AMD-RAID chipset that you are trying to utilize for NVME RAID.

We need more info about your board and it's chipset. Generally you will find the mapped device under /dev/mapper...it will often have a name something like pdc_abcdefg or nvidia_abcdefg depending on your chipset. you will need to MANUALLY partition this device: sudo fdisk /dev/mapper/pdc_abcdefg.

create your partition amd write the changes to the device. you will now have something like /dev/mapper/pdc_abcdefg1, this will be your partition.

I don't know what distro of Linux your are attemprting to install, but with Ubuntu, "GENERALLY" the dmraid driver modules are loaded in the live installer. If not, sudo apt install dmraid and then partition the mapped device.

During the install for Ubuntu, if you choose manual installation as opposed to guided, you can choose your raid array. when you get to the partitioning utility, set your / (root) and filesystems. Where it asks you to install the bootloader, be sure to select "/dev/pdc_abcdefg" AND NOT the partition.

Linux software is NOT preferred BECAUSE it doesn't do a bootable RAID 0 config.

In 15+ years i have NEVER had to use mdadm with onboard and add-in raid card RAID arrays. For a mass storage enthusiast, the preference is:

Hardware RAID controller(with it's own IO processor and RAM)
FakeRAID controller(add-in card or onboard chipset)
Software RAID(AS a very last resort)

If you can supply some more info about your setup: mobo model, choice of distro, possibly even RAID chipset, I should be able to provide you with a little better info.

addendum: /dev/mapper devices won't be shown via fdisk -l

addendum II: "FakeRAID" is preferrable to mdadm software RAID because of the mobo/add-in card chipset handling the LVM bits. Other than being aware of how dmraid works and interfaces with mapped block devices, installing Linux is a fairly simple and straightforward process.

mdadm ON THE OTHER HAND, often requires the following process:

install and load the mdadm modules and utilities
create and assemble the array
proceed with the install
after the install you need to mount the array and chroot to environment the array will be utilised in(if you're doing RAID 0 this will be the block device you set up to boot from and load the kernel image, since mdadm CANNOT do bootable RAID 0)
in the chroot, you need to install mdadm, assemble and activate the array, edit fstab to ensure the array will be mounted
you may or may not need to edit /etc/default/grub(depends on the kernel version you are using. after 5.2/5.3(?), there were some changes made regarding mdadm and grub interaction)
since you're already there, might as well run updates in the chroot and make any config adjustments if needed(better install that proprietary NVIDIA driver while you're there)

once you're positive your array won't blow up when you reboot, reboot.

If you're lucky, you're array will be mounted and you can log into your fresh Linux install. Most likely, it'll fail, and you'll need to boot from the live installer again, reinstall mdadm, re-assemble the array(if it's a RAID1 and bootable), and look thru system boot logs to see where it failed and adjust config files accordingly. And reboot.

the only "good" thing about mdadm is that it gives you an appreciation for all of the underlying processes a FakeRAID card/chipset handles for you.

Hardware RAID controllers are the preferred means of running RAID. They possess their own IO processors and often have onboard RAM. Most have a RAID utility accessible via a POST hotkey as well as over an ethernet interface. Logical volume management on the controller presents arrays as a single block device to the OS. Many of these controllers support hot-swap, allowing you to yank and replace failed storage devices and rebuild arrays without having to shut the system down (this is supported by pretty much ALL SATA RAID interfaces, including onboard mobo chips).

mdadm has its place. It's great if you need to move some data somewhere to make partition/filesystem changes but don't have a single storage device big enough, only several USB storage devices of approximately the same size; they can be pooled to create a storage area for those files.
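
A minimal sketch of that kind of ad-hoc pooling, assuming three spare USB sticks of roughly equal size show up as /dev/sdx, /dev/sdy and /dev/sdz (hypothetical names, always check lsblk first):

[code]
# Concatenate three USB sticks into one temporary volume for shuffling data around.
sudo mdadm --create /dev/md1 --level=linear --raid-devices=3 /dev/sdx /dev/sdy /dev/sdz
sudo mkfs.ext4 /dev/md1
sudo mkdir -p /mnt/scratch
sudo mount /dev/md1 /mnt/scratch
[/code]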

mdadm can also be used to create nested RAID levels, if you're so inclined. For instance, if your mobo has ports for 6 disks, you can use the onboard RAID utility to create 3x RAID 0 arrays, and then in Linux create another RAID 0 array out of the 3 arrays exposed to the OS. There's no advantage to this, and it just adds a layer of unnecessary complexity. BUT, in the vein of Mythbusters, "if it's worth doing, it's worth overdoing": throw in multiple add-in FakeRAID cards, apply some mdadm, and polish it off with LVM2, and you've got an entirely over-engineered storage array. And an appreciation for what hardware controllers turn into an easy click-through process.
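
Purely for illustration, the over-the-top nesting described above would look something like this, assuming the onboard FakeRAID already exposes three stripes as /dev/mapper/pdc_aaa, pdc_bbb and pdc_ccc (made-up names in the style of the earlier examples):

[code]
# Stripe the three FakeRAID stripes together one more time. No practical benefit,
# just the over-engineered setup described above.
sudo mdadm --create /dev/md2 --level=0 --raid-devices=3 \
    /dev/mapper/pdc_aaa /dev/mapper/pdc_bbb /dev/mapper/pdc_ccc
sudo mkfs.ext4 /dev/md2
[/code]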

Somewhat disagree with your preference list. md is superior to both FakeRAID and "hardware RAID" (hardware in quotes, since close to all of those are just small RTOSes running their own software RAID, and often they are Linux running md) in one major way: when your card or mobo breaks, with md you can replace your mobo with whatever brand you want and things will still work, while with the other solutions you must have the exact same FakeRAID chipset or RAID card.

Actually, mdadm is capable of assembling arrays from FakeRAID cards (as long as the chipsets fall within Silicon Image, Promise, Nvidia and one other chipset that escapes me). As for my Areca 1230-ML, it had an Intel IO processor, and the BIOS on it was much too small to be running anything more than the firmware; there was no embedded Linux involved. And I ALSO had no issues getting mdadm to re-assemble those arrays. Of course, if you're concerned about data loss, you should be making regular backups. Hardware controllers are preferred since they present only a single block device to the OS, instead of spamming fdisk -l with a ton of block devices, which can become a bit confusing if you have over 6 disks in your RAID (my Areca had 12 disks attached, plus 8 more attached via onboard RAID).
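
If you ever do need to try re-assembling an existing array from a live environment, the usual starting point looks like this (a generic sketch; whether mdadm recognises a given FakeRAID metadata format depends on the chipset):

[code]
sudo mdadm --examine --scan     # look for RAID metadata on the attached disks
sudo mdadm --assemble --scan    # assemble any arrays it recognises
cat /proc/mdstat                # check what got assembled and its sync state
[/code]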

Of course, if you just prefer tinkering around, mdadm is great for that. Myself, I've become too lazy to do all that tinkering anymore and prefer "the easy path". And hardware controllers are the easiest path.

FakeRAID does not use any embedded Linux, since FakeRAID is just software RAID anyway (hence why you have to use mdadm); actually, I struggle to find any real benefit in FakeRAID cards at all. My mention of embedded Linux was about the hardware RAID cards.

Yes, backups should always be done, but if your mobo dies or you want to upgrade to a different brand/chipset, there is a world of less hurt if you use plain md: all you have to do is plug the drives into the new board, while with other forms of RAID you have to bring stuff back from backups.

Btw, if you feel like 6+ disks showing up as separate drives is scary, then Halloween comes early for you my friend :-). Here is an excerpt from one of my servers:

 
[code]
fdisk -l | grep Disk
Disk /dev/sdb: 9,1 TiB, 10000831348736 bytes, 19532873728 sectors
Disk /dev/sda: 9,1 TiB, 10000831348736 bytes, 19532873728 sectors
Disk /dev/sdd: 9,1 TiB, 10000831348736 bytes, 19532873728 sectors
Disk /dev/sdc: 9,1 TiB, 10000831348736 bytes, 19532873728 sectors
Disk /dev/sdv: 9,1 TiB, 10000831348736 bytes, 19532873728 sectors
Disk /dev/sdf: 9,1 TiB, 10000831348736 bytes, 19532873728 sectors
Disk /dev/sdh: 9,1 TiB, 10000831348736 bytes, 19532873728 sectors
Disk /dev/sdw: 9,1 TiB, 10000831348736 bytes, 19532873728 sectors
Disk /dev/sdu: 9,1 TiB, 10000831348736 bytes, 19532873728 sectors
Disk /dev/sdx: 9,1 TiB, 10000831348736 bytes, 19532873728 sectors
Disk /dev/sdm: 9,1 TiB, 10000831348736 bytes, 19532873728 sectors
Disk /dev/sdl: 9,1 TiB, 10000831348736 bytes, 19532873728 sectors
Disk /dev/sdg: 9,1 TiB, 10000831348736 bytes, 19532873728 sectors
Disk /dev/sdk: 9,1 TiB, 10000831348736 bytes, 19532873728 sectors
Disk /dev/sdi: 9,1 TiB, 10000831348736 bytes, 19532873728 sectors
Disk /dev/sde: 9,1 TiB, 10000831348736 bytes, 19532873728 sectors
Disk /dev/sdo: 9,1 TiB, 10000831348736 bytes, 19532873728 sectors
Disk /dev/sdj: 9,1 TiB, 10000831348736 bytes, 19532873728 sectors
Disk /dev/sdn: 9,1 TiB, 10000831348736 bytes, 19532873728 sectors
Disk /dev/sdr: 9,1 TiB, 10000831348736 bytes, 19532873728 sectors
Disk /dev/sds: 9,1 TiB, 10000831348736 bytes, 19532873728 sectors
Disk /dev/sdp: 9,1 TiB, 10000831348736 bytes, 19532873728 sectors
Disk /dev/sdt: 9,1 TiB, 10000831348736 bytes, 19532873728 sectors
Disk /dev/sdq: 9,1 TiB, 10000831348736 bytes, 19532873728 sectors
Disk /dev/sdy: 118 GiB, 126701535232 bytes, 247463936 sectors
Disk /dev/sdz: 118 GiB, 126701535232 bytes, 247463936 sectors
[/code]

Yeah, that's too much to deal with... and an Areca-1230 isn't a FakeRAID card, it was an $850 hardware controller. With no embedded Linux. With FakeRAID cards you still have to deal with fdisk being spammed, but at least dmraid (for FakeRAID cards) gives you a single block device to deal with under /dev/mapper. Yes, I realise md creates dm-0 (I'm using mdadm right now because there are no Linux drivers for PCIe NVMe RAID on an X399 mobo). It's STILL NOT going to change my mind about a single block device being presented to the OS by a hardware controller. And I'd much rather deal with a FakeRAID card than md. I'm currently eyeing a HighPoint SSD7103 bootable 4x M.2 NVMe RAID controller so I can finally ditch these last 8 2.5in SSDs and the hotswap bay, get rid of all that damn cabling clutter, AND double my storage space. If the 4TB Sabrent Rocket Q NVMes weren't $1500 each, I'd go for those. But alas.. :/

Quote: Backups should always be done, but if your mobo dies or you want to upgrade to a different brand/chipset, there is a world of less hurt if you use plain md: all you have to do is plug the drives into the new board, while with other forms of RAID you have to bring stuff back from backups.

Which is what I'd have to do with md if I add brand new disks in a brand new array. Or, as in my experience last year with a new mobo (the aforementioned X399 Phantom Gaming 6) and CPU (Threadripper 1950X): I had to make backups, tear down the old rig, install the new bits, remove the Areca, create new arrays with md (since the onboard controller had performance issues with SATA RAID under the proprietary Linux driver), and restore from backup.

PLUS, as I mentioned in one of my previous replies, mdadm DOES NOT support bootable RAID 0, so I have to boot from a 1TB NVMe, with almost 95% of the disk going to waste on the / partition. A setup I am most displeased with.
[code]
df -h
Filesystem      Size  Used Avail Use% Mounted on
udev             48G     0   48G   0% /dev
tmpfs           9.5G  1.9M  9.5G   1% /run
/dev/nvme2n1p1  954G   52G  902G   6% /
tmpfs            48G  1.2G   47G   3% /dev/shm
tmpfs           5.0M  8.0K  5.0M   1% /run/lock
tmpfs            48G     0   48G   0% /sys/fs/cgroup
/dev/md0        5.6T  3.6T  2.1T  64% /home
tmpfs           9.5G   16K  9.5G   1% /run/user/1000
[/code]

You can see it there for yourself on /dev/nvme2n1p1: I have almost another TB going to waste that could be used in an array.

addendum: (I had to leave for work) so let's review:

Hardware Controller card:
"portable", ie the card can be moved across multiple hardware refreshes
presents a single block device to the OS
"reliable", most controllers can last a decade or better
supports hotswap
BOOTABLE RAID 0

Cons: expensive, but that expense means you get a piece of hardware with a decent warranty (often 5 years, sometimes 7) that keeps working long after it's outdated (my Areca was several years old before I decommissioned it, and was still working when I did so; I only replaced it because it was SATA II, and the 12-disk array was slower than a single one of my NVMes: the Areca bursts up to 1GB/s, while an NVMe holds a pretty consistent 2GB+/s). Thus, a hardware controller is preferred.

FakeRAID:
presents a single block device to the OS but doesn't mask the individual block devices from the OS
"portable", can move a card across multiple hardware refreshes
Online Expansion
supports hotswap
BOOTABLE RAID 0

Cons: not very reliable, because they are cheap. If your card or onboard chipset fails, you'll need a replacement with a chipset from the same manufacturer line (mdadm can be used to re-assemble these and rescue data).

mdadm:
"portable", disks can be moved from one system to another and the array re-assembled.
can re-assemble arrays from FakeRAID controllers in the Promise, Silicon Image, and Nvidia chipset lines
Online expansion
Great for RAID 1,4,5,6,10
supports hotswap (via the chipset)

Cons: NO BOOTABLE RAID 0. Installing the OS to a RAID 1 array generally requires a chroot into the OS install on the array from the installer media, installing mdadm, configuring mdadm.conf, and sacrificing your first-born to appease the data gods and grant you luck that it boots on the first go-around. Usually it doesn't. No masking of individual devices from the OS.

No, I think I still prefer hardware controllers over the other 2 options. Nothing is easier than creating the arrays in the BIOS utility, booting your install media, installing to a SINGLE block device, not having to chroot, and then rebooting. Done. As I said, I'm done with all that tinkering shit. I'll pay more for the speed, reliability, and convenience offered by a hardware controller.

Bootable RAID 0 sounds more like a grub2 problem than an md one, since md does not handle booting. I have installed tons of servers on RAID 1 using md, so I don't know what your issues have been there (dead simple setup in both Debian and Ubuntu, no need for a chroot or anything like that). RAID 0 I'm unsure of, since I have never had any reason to boot from it; I usually go with 1+0 instead when that kind of performance is needed for storage and the OS sits on the same drives as the data.

Sorry about the Areca-1230, I just assumed it was the name of your mobo and not of your controller :), not a manufacturer that is available over here. That board runs an embedded web server, a telnet + VT100 terminal and an SNMP daemon, and uses an XScale CPU, so it is running "something", either BSD or some proprietary realtime OS (I don't think it is running Linux, since they don't carry any license details in their documentation or firmware blob).

So like most hardware today, it's basically a small computer running software RAID. But one could argue that this is just me being semantically pedantic; your main point seems to be the "ease of use" angle anyway, not what it technically is "behind the scenes", which is the angle I'm more interested in, so I think we can just ignore that :)

Also, it should perhaps be said that on the server I posted those 24 drives from, I use btrfs in RAID 1, which gives me advantages that no hardware RAID can give me.
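
For anyone curious what that looks like, a minimal two-device btrfs RAID 1 sketch (device names are placeholders, not taken from the server above):

[code]
# Mirror both data and metadata across two drives; btrfs checksums every block,
# so a scrub can detect silent corruption and repair it from the good copy.
sudo mkfs.btrfs -d raid1 -m raid1 /dev/sdX /dev/sdY
sudo mkdir -p /srv/storage
sudo mount /dev/sdX /srv/storage     # mounting either member mounts the whole filesystem
sudo btrfs scrub start /srv/storage
[/code]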

Last edited by F.Ultra on 3 August 2020 at 9:10 pm UTC