Kubuntu 19.04, X399 chipset, AMD-RAID, and pcie nvme RAID
iwantlinuxgames Jul 6, 2019
My recent upgrade path has come to a standstill. I recently upgraded to a Threadripper 1950X, which is mounted on an Asrock Phantom Gaming 6 mobo. This mobo has support for PCIe NVMe RAID. RAID configuration is handled through the UEFI interface via the RAIDXpert2 configuration utility.

I have already experimented with this utility, configuring a small RAID 0 array consisting of 2x 2.5in 60GB SSDs and loading the rcraid.ko driver (see the rcraid-dkms GitHub repo for a little more information). This small array appeared to the OS as a single /dev/sdX SCSI device, so I thought all was good.
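
For anyone wanting to repeat this, the build-and-load sequence is roughly the following. It's only a sketch: the source path and the rcraid/<version> name are placeholders, so take the real values from the dkms.conf shipped in the rcraid-dkms repo.

# register, build and install the out-of-tree module via DKMS
# (source path and rcraid/8.1.0 are placeholders; check the repo's dkms.conf)
sudo dkms add ./rcraid-dkms/src
sudo dkms build rcraid/8.1.0
sudo dkms install rcraid/8.1.0
# load it and check whether the array shows up as a plain SCSI disk
sudo modprobe rcraid
lsblk -o NAME,SIZE,MODEL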

4 days ago, I purchased 2x Samsung 970 EVO Plus PCIe NVMe M.2 sticks and performed the same actions as I did for the 2x 2.5in SSDs. I am unable to get the array recognized by the system, or by the Kubuntu USB install flash drive I made. I have spent the last few days googling and reading guides over and over, and I am getting nowhere. I have already filed an issue report on the rcraid-dkms GitHub page, but I was hoping "possibly" someone here may already be using an X399 chipset based board, has successfully created an NVMe RAID array, and is using it as a bootable drive.

The following is some dmesg output after inserting the rcraid.ko module while booted into the Kubuntu Live "CD" environment via UEFI boot:

[ 859.325295] <5>AMD, Inc. rcraid raid driver version 8.1.0 build_number 8.1.0-00039 built Jul 05 2019
[ 859.325297] <5>rcraid built on kubuntu by root on Fri 05 Jul 2019 07:53:18 PM UTC
[ 859.325299] <5>rcraid: cmd_q_depth 512, tag_q_depth 16, max_xfer 448, use_swl 0xffffffff
[ 859.325420] <5>rcraid_probe_one: vendor = 0x1022 device 0x43bd
[ 859.325429] <5>rcraid_probe_one: Total adapters matched 1
[ 859.325847] <5>rcraid: rc_init_adapter 64 bit DMA enabled
[ 859.326014] <6>### rc_init_adapter(): RC_EnableZPODD = 0
[ 859.326096] <3>rcraid:0 request_threaded_irq irq 160
[ 859.326101] <5>rcraid: card 0: AMD, Inc. AHCI
[ 860.400327] <6>rcraid: rc_event: config change detected on bus 0
[ 860.400867] scsi host4: AMD, Inc. AMD-RAID
[ 860.406385] scsi 4:0:24:0: Processor AMD-RAID Configuration V1.2 PQ: 0 ANSI: 5
[ 860.411387] scsi 4:0:24:0: Attached scsi generic sg3 type 3

As you can see, the OS recognizes the RAID controller and even sees that a device is available; however, it fails to create any kind of /dev/sdX block device.
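
For reference, a few quick checks along these lines will show what the driver actually exposes (lsscsi may need a quick sudo apt install lsscsi on the live session):

# list what the rcraid SCSI host presents (only the "Processor" config device appears)
lsscsi
# look for any block device backed by the array
lsblk -o NAME,SIZE,TYPE,MODEL
ls -l /dev/sd*
# watch for array/logical-disk messages from the driver
dmesg | grep -i rcraid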

The output of sudo fdisk -l while booted into the Kubuntu Live "CD" environment:

Disk /dev/loop0: 1.7 GiB, 1845854208 bytes, 3605184 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/sda: 5.6 TiB, 6143996854272 bytes, 11999993856 sectors
Disk model: ARC-1231-VOL#01
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 97E0EB8F-E0B4-4355-B889-DCDF8E6D8322

Device Start End Sectors Size Type
/dev/sda1 2048 11999991807 11999989760 5.6T Linux filesystem

Disk /dev/nvme0n1: 953.9 GiB, 1024209543168 bytes, 2000409264 sectors
Disk model: INTEL SSDPEKKW010T8
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: F95B1B91-01E0-4AD3-A9A5-C68E08FFCA97

Device Start End Sectors Size Type
/dev/nvme0n1p1 2048 2000408575 2000406528 953.9G Linux filesystem

Disk /dev/nvme1n1: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: Samsung SSD 970 EVO Plus 1TB
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x8fa8c2d5

Device Boot Start End Sectors Size Id Type
/dev/nvme1n1p1 63 1953497150 1953497088 931.5G 82 Linux swap / Solaris

Disk /dev/nvme2n1: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: Samsung SSD 970 EVO Plus 1TB
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x3313112f

Device Boot Start End Sectors Size Id Type
/dev/nvme2n1p1 63 1953497150 1953497088 931.5G 82 Linux swap / Solaris

Disk /dev/sdb: 7.5 GiB, 8004829184 bytes, 15634432 sectors
Disk model: USB Flash Drive
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x4b9627aa

Device Boot Start End Sectors Size Id Type
/dev/sdb1 * 0 3742559 3742560 1.8G 0 Empty
/dev/sdb2 3711844 3719331 7488 3.7M ef EFI (FAT-12/16/32)

As you can see, the NVMe devices are recognized by the system individually; however, they "should" be showing up as a single block device, in a manner similar to my Areca 1231 RAID host bus adapter (ARC-1231-VOL#01 as /dev/sda). In fact, it is this HBA that I am trying to phase out. It is several years old now and I'm not sure how much more life I'm going to get out of it, not to mention I have a 6TB RAID 0 array consisting of 12x 512GB 2.5in SSDs housed in 2x 6-tray hotswap bays, and the nest of cables is unwieldy. I wish to replace this monstrosity with 7x 1TB NVMes (3 on the mobo and 4 housed in an Asrock Ultra Quad M.2 HBA, which has no RAID capability).

I hope someone here can maybe advise me on whether PCIe NVMe RAID is possible under Linux or not. If it isn't currently possible, then my upgrade path may have to take a slight detour :(.

Thanks for any help anyone can offer.
Liam Dawe Jul 7, 2019
Quoting: iwantlinuxgames
I hope someone here can maybe advise me on whether PCIe NVMe RAID is possible under Linux or not. If it isn't currently possible, then my upgrade path may have to take a slight detour :(.
Well, Phoronix seemed able to.
iwantlinuxgames Jul 7, 2019
Quoting: liamdawe
Quoting: iwantlinuxgames
I hope someone here can maybe advise me on whether PCIe NVMe RAID is possible under Linux or not. If it isn't currently possible, then my upgrade path may have to take a slight detour :(.
Well, Phoronix seemed able to.

Aye, they did, but using btrfs or mdadm, with no mention of AMD-RAID... I'm really trying to avoid those methods if I can help it... sorry, I'm a bit spoiled after years of FakeRAID and hardware HBAs.

I'm thinking I may have to consider assembling a 4TB array of 2.5in SSDs via the onboard SATA controllers (dmraid) and probably go with mdadm for the NVMes until such time as the rcraid driver can be utilised in the manner I'm hoping for.
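
If it does come to mdadm for the two 970 EVO Plus sticks, the sketch would be something like the following (device names taken from the fdisk output above; double-check yours before running anything destructive):

# stripe the two NVMe sticks into a single md RAID 0 device
sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/nvme1n1 /dev/nvme2n1
# put a filesystem on it, mount it, and verify
sudo mkfs.ext4 /dev/md0
sudo mount /dev/md0 /mnt
cat /proc/mdstat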
iwantlinuxgames Jul 9, 2019
So it looks like, at the current time, PCIe NVMe "FakeRAID" via the AMD-RAID controller chipset isn't possible with the AMD-provided drivers. It seems I'm going to have to tolerate my Areca hardware RAID HBA for a while longer, until I have all the NVMe sticks I need and can configure my system with a single boot stick and / on an md RAID array (10TB should suffice for a spell, I think).

:(

Hopefully sometime in the near future dmraid will support the AMD-RAID chipset and NVMe RAID features.
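
In the meantime, the rough plan for the boot-stick-plus-md-root layout looks something like this (standard Ubuntu/Kubuntu steps, nothing AMD-RAID specific; paths are examples only):

# record the array so the initramfs can assemble it at boot
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u
# then mount / by UUID in the target system's /etc/fstab
sudo blkid /dev/md0
# e.g.  UUID=<uuid-from-blkid>  /  ext4  defaults  0  1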
iwantlinuxgames Jul 15, 2019
So after some futzing about, experimentation, and encountering roadblock after roadblock, I've managed to arrive at a solution that is workable but isn't necessarily to my satisfaction (as I stated to Liam in another post in this thread, I have been greatly spoiled by FakeRAID and hardware RAID host bus adapters). This has truly been a most aggravating and disappointing week.

A little back history: several years ago I purchased an Areca 1231-ML 12-port SATA II SAS/SATA RAID HBA. This HBA featured:
PCIe 2.0 x8 interface
800MHz I/O processor
1GB DDR2 RAM (which I bumped to 2GB, having an extra 2GB module laying about)
10Mbps RJ45 Ethernet port
a BIOS-loadable configuration utility
a web-based config utility
online RAID level migration and expansion
pin headers for a battery backup unit
pin headers for an HDD activity LED strip

And best of all, it was supported by the Linux kernel. No need to go messing with compiling the driver.

For the last several years this HBA has served me well, but she has the unpleasant side effect of being a many-tentacled beast. Surely PCIe NVMe RAID would be my saviour from this cabling hell.

She's also now several years old, and I'm not sure how much more life I'll get out of her. I'd like to decommission her while she still has the dignity of being one of those devices in the box of old hardware that still works great. As I said, she has served me well and she deserves to be decommissioned so.

So now we move to the present situation. If you have, or are planning to upgrade to, an X399 chipset board, be advised that the onboard RAID absolutely sucks. I fought with it for 3 days, only to discover, as I posted earlier, that PCIe NVMe RAID is not possible with the AMD-provided drivers, which are closed source blobs. I was able to make the SATA RAID bits work with some SSDs I removed from the Areca after shuffling some data about. The performance was absolute shit. There's also no support for online RAID level migration, and no support for online RAID expansion.
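
If you want to check the throughput for yourself, a quick sequential read test along these lines will show the gap (fio needs installing, and /dev/sdX is a placeholder for whatever device the rcraid array shows up as; a read test is non-destructive):

# 30-second sequential read test, 1MiB blocks, direct I/O
sudo fio --name=seqread --filename=/dev/sdX --rw=read --bs=1M \
    --ioengine=libaio --iodepth=32 --direct=1 --runtime=30 --time_based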

So, I've ended up going with md RAID. Not at all an option I'm happy about (no support for a bootable RAID 0 array, meaning I have to have a separate boot disk, and no support for online expansion with RAID 0). It is, however, only a temporary solution as I research bootable PCIe NVMe RAID host bus adapters, preferably with their own I/O processor and RAM. Most NVMe HBAs I've looked at so far seem to be FakeRAID types of cards. My current Asrock Ultra Quad M.2 adapter card lacks any type of RAID features and relies on the host system to provide said functionality. But without that functionality being provided by the current Linux driver, the card is useless.

So I've had to alter my upgrade path once more. That is, if I can find just the right HBAs. I've seen some that offer the ability to be "paired" and control the NVMe sticks via one HBA RAID utility. So, that's what I'm hoping will be possible:

2x NVMe RAID HBA
8x 2TB PCIe 3.0 NVMe SSD

This increases my costs and timeline significantly. :(

To say the least, this whole experience has left a bad taste in my mouth, with the AMD-provided driver being, for lack of a better description, utter shit.

Can anyone recommend some NVMe RAID host bus adapters?