edzieba - Tuesday, January 16, 2018 - link
Oh no, the terrible return of manual IRQ conflict resolution!

Lord of the Bored - Wednesday, January 17, 2018 - link
Funny, my first thought was "DIP switches... how nostalgic."

bronan - Thursday, May 16, 2019 - link
These switches are needed if you add more cards :D. If you have 4 PCIe x16 slots, you could add 3 more cards with 4 NVMe drives each. What would happen if you put all of those in a RAID-0 setup? :D I would not even try to boot from it, but it would be interesting to see how fast it would run.
However, I am not sure how many motherboards really have more than 2 or 3 REAL PCIe x16 slots. As far as I know, most are limited to 1 or 2 real x16 slots, and the rest are often shared or electrically limited. I have looked at many boards but found only a few with 2 real x16 slots.
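To put rough numbers on that speculation, here is a minimal back-of-the-envelope sketch of the theoretical link ceiling for four x4 PCIe 3.0 NVMe drives striped in RAID-0. These are bus maximums only; real drives, file systems, and CPU overhead would land well below them.

```python
# Theoretical throughput ceiling for a bifurcated x16 slot (PCIe 3.0).
# Link maximums only -- not real-world SSD benchmark numbers.

GT_PER_SEC = 8e9          # PCIe 3.0 raw transfer rate per lane (8 GT/s)
ENCODING = 128 / 130      # 128b/130b line-encoding overhead

def lane_bandwidth_gbps():
    """Usable bytes per second per lane (one direction), in GB/s."""
    return GT_PER_SEC * ENCODING / 8 / 1e9

def raid0_ceiling(drives=4, lanes_per_drive=4):
    """Aggregate link ceiling for a striped array of x4 NVMe drives."""
    return drives * lanes_per_drive * lane_bandwidth_gbps()

print(f"Per x4 drive:          {4 * lane_bandwidth_gbps():.2f} GB/s")
print(f"4-drive RAID-0 ceiling: {raid0_ceiling():.2f} GB/s")
```

So a single x4 Gen3 drive tops out near 3.94 GB/s on the link, and four of them behind one bifurcated x16 slot cannot exceed roughly 15.75 GB/s combined, no matter how the array is configured.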
bronan - Thursday, May 16, 2019 - link
Lol, it really could be an issue if you have too many devices enabled in the machine. We are supposed to be able to run them all, and people claim we no longer have this issue, but if you do not have the very latest hardware it actually still exists.
I never believed a Gigabyte Z170X Gaming 7 motherboard would give issues, but it actually did. The culprits: 2 NICs, 3 NVMe SSDs, 4 SSDs for storage, and the list goes on. It eventually produced stutter, sound issues, and some artifacts on my screen. I kept switching and resetting BIOS settings until it finally ran a bit more stable, but I am fairly sure the problem still exists.
rhysiam - Tuesday, January 16, 2018 - link
Does this have specific requirements of the system to work? I thought certain chipsets explicitly do *not* allow splitting the CPU PCIe lanes to multiple devices. That's why B250 boards (for example) cannot split the 16 CPU PCIe lanes to multiple graphics cards. Isn't this similar? Or is there a PCIe switch on the board? Would that have a tangible latency impact?

Just a few questions!
supremelaw - Thursday, May 10, 2018 - link
The motherboard BIOS needs to support bifurcation. In the ASRock UEFI, it shows up as the "4x4" option. In the ASUS UEFI, it shows up as the "x4/x4/x4/x4" option. These AICs do NOT need a PLX chip, as long as the motherboard can split an x16 PCIe slot into 4 @ x4 PCIe 3.0 lanes, hence 4 x NVMe SSDs. In practice, idle CPU cores (of which are are now many) can now do the job formerly done by dedicated Input-Output Processors on hardware RAID cards. For those who desire to do a fresh install of Windows 10 on a RAID-0 array of 4 x NVMe SSDs, the ASRock approach can be done easily by following the directions which you can request from their Tech Support group. When we requested same, ASRock replied with those directions the next day. The Intel approach currently requires a VROC "dongle", and some reports have claimed that VROC requires Intel SSDs. Because we are currently busy designing a system with the ASRock X399M motherboard, we are not thoroughly familiar with all of the details of Intel's implementation. Hope this helps.

supremelaw - Thursday, May 10, 2018 - link
EDIT: "(of which are are now many)" should be "(of which there are now many)". Sorry for the typo.
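To make concrete what a RAID-0 array over those 4 NVMe SSDs actually does with data, here is a minimal sketch of the striping address map. The 128 KiB stripe size and the 4-drive count are illustrative assumptions, not values tied to any particular BIOS or RAID implementation.

```python
# Minimal sketch of RAID-0 address mapping: a logical byte offset is
# striped across member drives in fixed-size chunks. Stripe size and
# drive count here are illustrative assumptions.

STRIPE_SIZE = 128 * 1024   # 128 KiB chunks, a common default
DRIVES = 4                 # e.g. 4 NVMe SSDs on one bifurcated x16 card

def raid0_map(logical_offset):
    """Map a logical byte offset to (drive index, offset on that drive)."""
    stripe = logical_offset // STRIPE_SIZE
    drive = stripe % DRIVES
    drive_offset = (stripe // DRIVES) * STRIPE_SIZE + logical_offset % STRIPE_SIZE
    return drive, drive_offset

# The first four 128 KiB chunks land on drives 0..3; the fifth wraps
# back to drive 0, which is why sequential throughput scales with the
# drive count while a single drive failure loses the whole array.
for chunk in range(5):
    d, off = raid0_map(chunk * STRIPE_SIZE)
    print(f"chunk {chunk} -> drive {d}, offset {off}")
```

Because consecutive chunks always go to different drives, a large sequential read keeps all four SSDs busy at once; that is where the (theoretical) 4x speedup comes from.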
bronan - Thursday, May 16, 2019 - link
Intel has made the drivers Intel SSD only, and removed the old driver quickly, before I could get my hands on it. At this point I am pretty sure they are never going to allow other brands to make use of the driver.
Maybe some hardware driver guru will be able to make it work, but my guess is it is going to stay the way it is now.