OK, I went into the BIOS and set up my RAID, but while installing Linux there is no HDD or any RAID showing up.
So this must be drivers, but how and which ones do I need?
Many thanks
Last edited by fires on 20 Jul 2020 at 6:42 pm UTC
There is little chance that the mobo manufacturer provides Linux drivers for RAID, unless perhaps it's a server mobo or controller, in which case the drivers will be for something like RHEL 6 and useless anyway. Linux's software RAID is very good and is what you should be using.
I will look into mdadm,
but I am able to do RAID from the mobo.
I did not know I should keep the drives in AHCI mode. They are in AHCI now, so I will see how to build the RAID from Linux.
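For reference, a minimal mdadm sketch for building a two-drive RAID 1 from within Linux might look like this (assuming a Debian/Ubuntu-family distro; /dev/sdb and /dev/sdc are placeholder device names, not anything from this thread):
[code]
# install the mdadm tools
sudo apt install mdadm

# create a two-drive RAID 1 array from whole disks
# (replace /dev/sdb and /dev/sdc with your actual drives -- this wipes them)
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

# put a filesystem on the array and mount it
sudo mkfs.ext4 /dev/md0
sudo mount /dev/md0 /mnt

# check the array status
cat /proc/mdstat
sudo mdadm --detail /dev/md0
[/code]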
We need more info about your board and its chipset. Generally you will find the mapped device under /dev/mapper... it will often have a name something like pdc_abcdefg or nvidia_abcdefg depending on your chipset. You will need to MANUALLY partition this device: sudo fdisk /dev/mapper/pdc_abcdefg.
Create your partition and write the changes to the device. You will then have something like /dev/mapper/pdc_abcdefg1, and this will be your partition.
I don't know which distro of Linux you are attempting to install, but with Ubuntu, "GENERALLY" the dmraid driver modules are loaded in the live installer. If not, sudo apt install dmraid and then partition the mapped device.
During the install for Ubuntu, if you choose manual installation as opposed to guided, you can choose your RAID array. When you get to the partitioning utility, set your / (root) and filesystems. Where it asks you to install the bootloader, be sure to select "/dev/mapper/pdc_abcdefg" AND NOT the partition.
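A rough sketch of that dmraid flow from the live session (pdc_abcdefg is just the placeholder name used above; yours will reflect your actual chipset):
[code]
# load the FakeRAID mapping if the live installer hasn't already
sudo apt install dmraid
sudo dmraid -ay                    # activate all detected RAID sets

# the array should now show up as a mapped block device
ls /dev/mapper/                    # e.g. pdc_abcdefg

# partition the mapped device, NOT the individual member disks
sudo fdisk /dev/mapper/pdc_abcdefg
[/code]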
Linux software RAID is NOT preferred BECAUSE it doesn't do a bootable RAID 0 config.
In 15+ years I have NEVER had to use mdadm with onboard and add-in card RAID arrays. For a mass storage enthusiast, the preference is:
Hardware RAID controller (with its own IO processor and RAM)
FakeRAID controller (add-in card or onboard chipset)
Software RAID (as a very last resort)
If you can supply some more info about your setup: mobo model, choice of distro, possibly even RAID chipset, I should be able to provide you with a little better info.
addendum: /dev/mapper devices won't be shown via fdisk -l
addendum II: "FakeRAID" is preferable to mdadm software RAID because the mobo/add-in card chipset handles the LVM bits. Other than being aware of how dmraid works and interfaces with mapped block devices, installing Linux is a fairly simple and straightforward process.
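Since fdisk -l skips them, a few stock tools will show the mapped devices instead (nothing here is specific to any particular board; it's just the usual device-mapper tooling):
[code]
# list all block devices, including device-mapper entries
lsblk

# show the RAID sets dmraid has discovered and their member disks
sudo dmraid -r
sudo dmraid -s

# or query device-mapper directly
sudo dmsetup ls
ls -l /dev/mapper/
[/code]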
mdadm, ON THE OTHER HAND, often requires the following process (roughly sketched in the commands after these steps):
install and load the mdadm modules and utilities
create and assemble the array
proceed with the install
after the install you need to mount the array and chroot into the environment the array will be used in (if you're doing RAID 0, this will be the block device you set up to boot from and load the kernel image, since mdadm CANNOT do bootable RAID 0)
in the chroot, you need to install mdadm, assemble and activate the array, and edit fstab to ensure the array will be mounted
you may or may not need to edit /etc/default/grub (depends on the kernel version you are using; after 5.2/5.3(?), there were some changes made regarding mdadm and grub interaction)
since you're already there, might as well run updates in the chroot and make any config adjustments if needed (better install that proprietary NVIDIA driver while you're there)
once you're positive your array won't blow up when you reboot, reboot.
If you're lucky, your array will be mounted and you can log into your fresh Linux install. Most likely, it'll fail, and you'll need to boot from the live installer again, reinstall mdadm, re-assemble the array (if it's a RAID 1 and bootable), and look through the system boot logs to see where it failed and adjust config files accordingly. And reboot.
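For what it's worth, a condensed sketch of that chroot dance on an Ubuntu-family live installer, assuming a RAID 1 array at /dev/md0 that holds the root filesystem (device names and paths are placeholders, not taken from anyone's actual setup):
[code]
# from the live session, after the installer has copied the OS onto the array
sudo apt install mdadm
sudo mdadm --assemble --scan            # bring up /dev/md0

# mount the installed system and chroot into it
sudo mount /dev/md0 /mnt
sudo mount --bind /dev  /mnt/dev
sudo mount --bind /proc /mnt/proc
sudo mount --bind /sys  /mnt/sys
sudo chroot /mnt

# inside the chroot: install mdadm and record the array layout
apt install mdadm
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u                     # so the initrd can assemble the array at boot

# double-check fstab points at the array, then leave the chroot and reboot
nano /etc/fstab
exit
[/code]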
the only "good" thing about mdadm is that it gives you an appreciation for all of the underlying processes a FakeRAID card/chipset handles for you.
Hardware RAID controllers are the preferred means of running RAID. They possess their own IO processors and many times have onboard RAM. Most have a RAID util accessible via POST hotkey as well as via an ethernet interface. LVM on the controller presents arrays as a single block device to the OS. Many times these controllers support hot-swap, allowing you to yank and replace failed storage devices and rebuild arrays without having to shut the system down (this is supported by pretty much ALL SATA RAID interfaces, including onboard mobo chips).
mdadm has its place. It's great if you need to move some data somewhere to make some partition/filesystem changes when you may not have a big enough single storage device, but several USB storage devices of approximately the same size. They can be pooled to create a storage area for those files.
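A quick sketch of that pooling trick with a linear array (the device names and mount point are made up for illustration):
[code]
# concatenate three roughly equal USB sticks into one temporary volume
sudo mdadm --create /dev/md9 --level=linear --raid-devices=3 /dev/sdx /dev/sdy /dev/sdz
sudo mkfs.ext4 /dev/md9
sudo mount /dev/md9 /mnt/scratch

# ...shuffle your data around, do the partition surgery, copy it back...

# tear the pool down again when you're done
sudo umount /mnt/scratch
sudo mdadm --stop /dev/md9
sudo mdadm --zero-superblock /dev/sdx /dev/sdy /dev/sdz
[/code]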
mdadm can also be used to create nested RAID levels, if you're so inclined. For instance, if your mobo has ports for 6 disks, you can use the onboard RAID util and create 3x RAID 0 arrays, and then in Linux you could create another RAID 0 array with the 3 arrays exposed to the OS. There's no advantage to this, and it just creates a layer of unnecessary complexity. BUT, in the vein of Mythbusters, "If it's worth doing, it's worth overdoing". Throw in multiple add-in FakeRAID cards, apply some mdadm, and polish it off with LVM2 and you've got an entirely over-engineered storage array. And an appreciation for what hardware controllers turn into an easy click-thru process.
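Purely for the over-engineering fun of it, the nested version would look roughly like this (assuming the three onboard RAID 0 sets appear as dmraid-mapped devices; all the names here are placeholders):
[code]
# stripe a RAID 0 across the three FakeRAID sets the mobo already built
sudo mdadm --create /dev/md10 --level=0 --raid-devices=3 \
    /dev/mapper/pdc_set1 /dev/mapper/pdc_set2 /dev/mapper/pdc_set3

# and, why not, finish it off with LVM2 on top of the whole contraption
sudo pvcreate /dev/md10
sudo vgcreate overkill_vg /dev/md10
sudo lvcreate -l 100%FREE -n overkill_lv overkill_vg
sudo mkfs.xfs /dev/overkill_vg/overkill_lv
[/code]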
Last edited by iwantlinuxgames on 27 Jul 2020 at 1:13 am UTC
I just read your post. I will post back with all the info.
Many thanks
Of course, if you just prefer tinkering around, mdadm is great for that. Myself, I've become too lazy to be doing all that tinkering anymore and prefer "the easy path". And hardware controllers are the easiest path.
Yes, backups should always be done, but if your mobo dies or you want to upgrade to a different brand/chipset, there is a world less hurt if you use plain md, since all you have to do is plug the drives into the new system, while with other forms of RAID you have to bring everything back from backups.
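That "just plug in the drives" step really is about this short, since the array layout lives in the superblocks on the disks themselves (the mount point below is only an example):
[code]
# on the new machine, scan the attached disks for md superblocks
# and assemble any arrays they describe
sudo mdadm --assemble --scan

# verify the array came up, then mount it as usual
cat /proc/mdstat
sudo mount /dev/md0 /srv/data
[/code]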
Btw, if you feel like 6+ disks as separate drives is scary, then Halloween comes early for you my friend :-). Here is an excerpt from one of my servers:
# fdisk -l | grep Disk
Disk /dev/sdb: 9.1 TiB, 10000831348736 bytes, 19532873728 sectors
Disk /dev/sda: 9.1 TiB, 10000831348736 bytes, 19532873728 sectors
Disk /dev/sdd: 9.1 TiB, 10000831348736 bytes, 19532873728 sectors
Disk /dev/sdc: 9.1 TiB, 10000831348736 bytes, 19532873728 sectors
Disk /dev/sdv: 9.1 TiB, 10000831348736 bytes, 19532873728 sectors
Disk /dev/sdf: 9.1 TiB, 10000831348736 bytes, 19532873728 sectors
Disk /dev/sdh: 9.1 TiB, 10000831348736 bytes, 19532873728 sectors
Disk /dev/sdw: 9.1 TiB, 10000831348736 bytes, 19532873728 sectors
Disk /dev/sdu: 9.1 TiB, 10000831348736 bytes, 19532873728 sectors
Disk /dev/sdx: 9.1 TiB, 10000831348736 bytes, 19532873728 sectors
Disk /dev/sdm: 9.1 TiB, 10000831348736 bytes, 19532873728 sectors
Disk /dev/sdl: 9.1 TiB, 10000831348736 bytes, 19532873728 sectors
Disk /dev/sdg: 9.1 TiB, 10000831348736 bytes, 19532873728 sectors
Disk /dev/sdk: 9.1 TiB, 10000831348736 bytes, 19532873728 sectors
Disk /dev/sdi: 9.1 TiB, 10000831348736 bytes, 19532873728 sectors
Disk /dev/sde: 9.1 TiB, 10000831348736 bytes, 19532873728 sectors
Disk /dev/sdo: 9.1 TiB, 10000831348736 bytes, 19532873728 sectors
Disk /dev/sdj: 9.1 TiB, 10000831348736 bytes, 19532873728 sectors
Disk /dev/sdn: 9.1 TiB, 10000831348736 bytes, 19532873728 sectors
Disk /dev/sdr: 9.1 TiB, 10000831348736 bytes, 19532873728 sectors
Disk /dev/sds: 9.1 TiB, 10000831348736 bytes, 19532873728 sectors
Disk /dev/sdp: 9.1 TiB, 10000831348736 bytes, 19532873728 sectors
Disk /dev/sdt: 9.1 TiB, 10000831348736 bytes, 19532873728 sectors
Disk /dev/sdq: 9.1 TiB, 10000831348736 bytes, 19532873728 sectors
Disk /dev/sdy: 118 GiB, 126701535232 bytes, 247463936 sectors
Disk /dev/sdz: 118 GiB, 126701535232 bytes, 247463936 sectors
Which is what I'd have to do with md if I add brand new disks in a brand new array. Or, as in my recent experience last year, a new mobo (the aforementioned X399 (Phantom Gaming 6)) and CPU (Threadripper 1950X). I had to make backups, tear down the old rig, install the new bits, remove the Areca, create new arrays with md (since the onboard controller had performance issues with SATA RAID with the proprietary Linux driver), and restore from backup.
PLUS, as I mentioned in one of my previous replies, mdadm DOES NOT support bootable RAID 0, so I have to boot from a 1TB NVMe, with almost 95% of the disk going to waste for the / partition. A setup I am most displeased with.
[code]
df -h
Filesystem      Size  Used Avail Use% Mounted on
udev             48G     0   48G   0% /dev
tmpfs           9.5G  1.9M  9.5G   1% /run
/dev/nvme2n1p1  954G   52G  902G   6% /
tmpfs            48G  1.2G   47G   3% /dev/shm
tmpfs           5.0M  8.0K  5.0M   1% /run/lock
tmpfs            48G     0   48G   0% /sys/fs/cgroup
/dev/md0        5.6T  3.6T  2.1T  64% /home
tmpfs           9.5G   16K  9.5G   1% /run/user/1000
[/code]
You can see it there for yourself on /dev/nvme2n1p1. So I have almost another TB going to waste that could be used in an array.
addendum: (I had to leave for work) So let's review:
Hardware Controller card:
"portable", ie the card can be moved across multiple hardware refreshes
presents a single block device to the OS
"reliable", most controllers can last a decade or better
supports hotswap
BOOTABLE RAID 0
Cons: expensive, but that expense means you have a piece of hardware with a decent warranty (often 5 years, sometimes 7), as well as lasting beyond being outdated (my Areca was several years old before I decommissioned it, and was still working when I did so. I replaced it because it was SATA II, and the 12-disk array was slower than a single one of my NVMEs (Areca: bursts up to 1GB/s, NVME pretty consistent 2GB+/s)). Thus, a hardware controller is preferred.
FakeRAID:
presents a single block device to the OS but doesn't mask the individual block devices from the OS
"portable", can move a card across multiple hardware refreshes
Online Expansion
supports hotswap
BOOTABLE RAID 0
Cons: not very reliable, because they are cheap. If your card or onboard chip fails, you'll need to get a card with a chipset in the same mfg line (mdadm can be used to re-assemble these and rescue data).
mdadm:
"portable", disks can be moved from one system to another and the array re-assembled.
can re-assemble arrays from FakeRAID controllers in the Promise, Silicon Image, and Nvidia chipset lines
Online expansion
Great for RAID 1,4,5,6,10
supports hotswap (via the chipset)
Cons: NO BOOTABLE RAID 0. Installing the OS to a RAID 1 array generally requires a chroot into the OS install on the array from the installer media, installing mdadm, configuring mdadm.conf, and sacrificing your first-born to appease the data gods and grant you luck that it boots on the first go-around. Usually it doesn't. No masking of individual devices from the OS.
No, I think I still prefer Hardware Controllers over the other 2 options. Nothing easier than creating the arrays in the BIOS utility, booting your install media, installing to a SINGLE block device, not having to chroot, and then rebooting. Done.... As I said, I'm done with all that tinkering shit. I'll pay more for the speed, reliability, and convenience offered by a hardware controller.
Last edited by iwantlinuxgames on 3 Aug 2020 at 6:39 pm UTC
Sorry about the Areca-1230, I just assumed that it was the name of your mobo and not of your controller :), not a manufacturer that is available over here. That board runs an embedded web server, telnet + VT100 terminal, and an SNMP daemon, and is using an XScale CPU, so it is running "something", either BSD or some proprietary realtime OS (I don't think it is running Linux, since they don't carry any license details in their documentation or firmware blob).
So like most hardware today, it's basically a small computer running software RAID. One could argue that this is just me being semantically pedantic, and your main point seems to be the "ease of use" angle anyway, not what it technically is "behind the scenes", which is the angle I'm more interested in, so I think we can just ignore that :)
Also, it should perhaps be said that on the server I posted the 24 drives from, I use btrfs in RAID 1, which gives me advantages that no hardware RAID can give me.
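For comparison, the btrfs version is handled at the filesystem level rather than by a separate md or controller layer (a sketch with placeholder device names):
[code]
# create a two-device btrfs filesystem with RAID 1 for both data and metadata
sudo mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc
sudo mount /dev/sdb /mnt/pool     # mounting either member brings up the whole filesystem

# scrub verifies checksums and repairs bad blocks from the good copy --
# the self-healing bit that plain block-level RAID can't do
sudo btrfs scrub start /mnt/pool
sudo btrfs filesystem usage /mnt/pool
[/code]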
Last edited by F.Ultra on 3 Aug 2020 at 9:10 pm UTC
Linux kernels since like 2.6-ish have supported Areca hardware via the arc kernel module. In fact, where Linux really shines is its support for such types of cards: Areca, High Point, Promise... high-end controller cards (as I said, mine cost me over $850 and I used it for several years. I passed it on to a friend who is now using it in one of his systems to learn RAID. He's a Windows user.) Windows requires loading drivers most of the time for such cards.
Believe me, I'm probably as pedantic as you, or more so, LOL.... As I stated earlier, if you want to understand the processes a hardware controller handles for you, mdadm is the way to go, unless you need a bootable RAID 0.
I prefer XFS (according to benchmarks, it's the fastest performing filesystem around, although a bit fragile; it's terrible with power outages, though lately it seems to have improved). I haven't bothered to check the performance of btrfs. I have also not used LVM in many years either, so I can't speak to its performance nowadays. When I was experimenting with it years ago, the performance was utterly horrible, even with an XFS filesystem. I understand its striping has improved over the last several years, but I've become too spoiled by hardware and FakeRAID controllers.
addendum: I just checked on btrfs performance (source: Phoronix benchmark test published 3 July 2020). Yep, looks like I'll be using XFS for quite some time, although F2FS is quite the contender.
Last edited by iwantlinuxgames on 3 Aug 2020 at 10:57 pm UTC
I'm on Kubuntu 20.04 (the GNOME desktop drives me nuts!) and the installer takes that kind of tinkering with mdadm. I "HAD" to make sure I could boot and mount the array, since it was mounted on /home and I have settings in there I've been carrying since 12.04. I think after the upgrade to the High Point 7303 w/ 4x 2TB Sabrent Rocket Qs I'm going to start a fresh /home and just copy out the stuff I want and start with some new settings for things (ughh... means I have to try and remember what-all Chrome tabs I have open (2 windows with about 20+ tabs in each... it's convenient to just have the cache and settings already there for Chrome on a fresh install)).

I'm so ready to get rid of this 6-disk hotswap bay and all that friggin cabling. I have a 5.25in 7-port USB bay to go in there. For the life of me, I can't figure out why they put the majority of USB slots on the rear side of the mobo... Terribly inconvenient if you ask me.

Plus, in a year or two when I upgrade the mobo and CPU to a PCIe gen 4 3rd gen Threadripper, I can just pop the 7303 in a PCIe slot and boot. In theory anyway... I had the same thought about the last upgrade from an AMD 8220 (?.. can't remember, but I know it was in the 8xxx series of CPUs.. 8-core) to the Threadripper 1950X and it didn't boot. The kernel panicked with something about "SEV" or some such and I had to do a fresh install.
Last edited by iwantlinuxgames on 5 Aug 2020 at 1:05 am UTC
