Hi everyone,
I’ve been trying for quite some time to troubleshoot PCI BAR errors in Unraid that only manifest when connecting my U.2 NVMe drives via Oculink to the Supermicro X11SPH-nCTPF in my HL15.
From the research I’ve done, the only way I’ve been able to make the drives usable in Unraid is to enable Intel VMD (Volume Management Device) in the BIOS for PStack0 and then also set NVMe0 and NVMe1 to Enabled.
However, when doing this, Unraid then throws errors of the following type: `kernel: pci 10000:00:00.0: BAR 13: failed to assign [io size 0x1000]`
Reports of similar errors from other users on the Unraid forums indicate this has to do with PCI reallocation failing, and some responses there suggest turning PCI reallocation off in Unraid via a boot parameter modification.
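For reference, that suggested workaround amounts to adding a kernel parameter to the append line in Unraid’s /boot/syslinux/syslinux.cfg (editable from the web UI by clicking the flash device on the Main tab). A sketch of what it would look like, assuming the stock Unraid boot entry, is below; pci=realloc=off is a standard Linux kernel parameter, so the edit to the append line is the only Unraid-specific part:

```
label Unraid OS
  menu default
  kernel /bzimage
  append pci=realloc=off initrd=/bzroot
```

I haven’t actually applied this, for the reasons below.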
It isn’t clear to me why these errors are manifesting when enabling VMD or why I need to enable VMD for the U.2 NVMe drives when connected via Oculink.
It also doesn’t feel like disabling PCI reallocation is the correct course of action.
In searching through the forums here, I did not come across others reporting similar issues.
Any feedback, help or ideas are welcomed.
If more information or more robust details of my BIOS settings/configuration are needed, please let me know.
P.S. I’m still learning, so go easy on me.
I’m not sure. This seems like it may be a driver issue.
Have you tried updating the motherboard BIOS?
What is the physical setup … make/model of cables/sleds/drives?
Hi David, yes, I did update the BIOS, but that did not resolve the issue.
The setup is as follows:
- 2x Samsung 980 Pro M.2 2280 NVMe SSDs
- 2x Vantec M.2 NVMe to U.2 SFF-8639 Adapters
- 2x Chenyang Oculink SFF-8611 to U.2 SFF-8639 Cables
FWIW, all of the PCI-E slots are populated, as is the single M.2 NVMe slot on the motherboard, though I have tried removing all expansion cards and the M.2 NVMe drive from the motherboard.
I can’t confirm it, but I suspect that adapting an M.2 NVMe drive to U.2 over Oculink may be compounding the issue due to how the connection protocol is negotiated (or something to that effect). However, I have not been able to find any documented instances of this not being a viable method.
One more thing regarding the driver-issue possibility: it is my understanding that Intel VMD isn’t typically desirable and that drivers for it on Linux systems are not well supported or available.
I’m still not clear on why I even need to enable Intel VMD in order for the drives to be usable.
With VMD completely disabled, the BIOS and Unraid do not see the Oculink drives at all. However, with VMD enabled but each of the Oculink NVMe drives disabled, Unraid sees them but cannot manage/mount them.
Only with VMD enabled and its sub-settings for NVMe0 and NVMe1 also enabled are the BIOS and Unraid both able to see and manage the drives.
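For anyone following along, a couple of read-only commands from the Unraid console show whether the drives are actually sitting behind the VMD controller; this is just standard lspci/sysfs behaviour, nothing Unraid-specific, and the synthetic 10000: PCI domain in the second command is the same prefix that appears in the BAR errors above:

```
# The VMD controller itself shows up on the normal PCI bus as a RAID bus controller:
lspci -nn | grep -i 'volume management device'

# NVMe drives claimed by VMD are re-parented into a synthetic PCI domain (10000:),
# matching the "pci 10000:00:00.0" prefix in the kernel errors:
ls /sys/bus/pci/devices/ | grep '^10000'
```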
I keep coming back to the question of why Intel VMD needs to be enabled at all.
Thanks for engaging with me!
LMK if you need anything else or have any other questions.
One thing you could try is to get a U.2 drive (it could just be a cheap, used, low-capacity one off eBay) and connect that via the Chenyang cables to see what the results are. That would eliminate the Vantec adapters and the M.2-to-U.2 conversion as potential issues.
If that still fails, or if you want to go in a different order, you could also get a Supermicro CBL-SAST-0956 cable and try that instead of the Chenyang cable. It seems like the Oculink standard isn’t particularly well defined or consistently implemented, and it has been complicated by the different PCIe generations, so cables can actually matter. Cables branded by the motherboard manufacturer might have more success.
If no one else responds here, you could try cross-posting in the Level1Techs forums: https://forum.level1techs.com/. Level1Techs has posted a number of YouTube videos about adapting this sort of “PCIe in a different form factor” port.
I’m not sure, but I think the original purpose of VMD was related to RAIDing NVMe SSDs or Optane drives: a bit like using a hardware RAID controller to create a RAID pool from a group of disks and then passing that group to the NAS software as one logical disk, rather than passing the drives to the software directly. That isn’t recommended, and it does mean the drives get set up by the OS via a different driver path.
Thanks for the advice/guidance.
Will try a legit U.2 drive and the Supermicro cable.
Posting in the Level1Techs forum is a good idea too.
I’ll follow up once I get all this tested. It’d be good to document it in case anyone else encounters these issues.
It does seem like I shouldn’t have to enable VMD to use a U.2 NVMe drive. So fingers crossed it’s just the cable!
Nothing in the motherboard manual indicates to me that VMD is required, or that the Oculink ports can’t be used in conjunction with the M.2 slot and all PCI-E slots populated.
Agreed. The block diagram on page 18 of the manual shows them connected directly to the CPU, which is consistent with LGA 3647 CPUs having 48 external PCIe lanes available. The only CPU-related overlap on the X11SPH should be with Slots 5 and 6.
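If you want to double-check the topology from the OS side rather than the manual, lspci can draw the device tree; with VMD disabled, the Oculink-attached drives should hang directly off a CPU root port rather than behind a bridge or the VMD controller (generic lspci usage, nothing board-specific):

```
# Print the PCIe device tree with vendor/device IDs; look for the NVMe
# controllers sitting directly under a root port of the CPU's PCIe stacks.
lspci -tvnn
```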
UPDATE: I obtained both a true U.2 NVMe drive and Supermicro CBL-SAST-0956 cables, disabled the Intel VMD settings, and can now confirm that the issue was with the Chenyang cables.
Both the true U.2 NVMe drive and my two M.2 NVMe drives in their Vantec adapters are functioning as expected in Unraid, without any errors reported!
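In case it helps anyone who hits the same errors later, here is a quick sanity check from the Unraid console that the drives now enumerate as ordinary PCIe NVMe devices with no VMD involvement (plain lspci and /dev checks, nothing exotic):

```
# With VMD disabled, the drives show normal 0000:-domain PCI addresses
# (no synthetic 10000: VMD domain) and the standard NVMe class:
lspci -nn | grep -i 'non-volatile memory'

# And the NVMe block devices are visible to the OS:
ls /dev/nvme*
```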
I’m relieved to have resolved the issue and appreciate the feedback, perspective, and advice you provided, @DigitalGarden. Thank you!
It is frustrating to know that the various brands of Oculink cables are hit or miss. I suspect the issue with the Chenyang cables may be related to their use of SATA power connectors, whereas the Supermicro cable uses a Molex power connector. But it’s unlikely that I’ll test additional cables, since I now have a working configuration.
Thanks again @DigitalGarden 
I’m an Unraid user if you need any advice.
Hi @iownyrface, the assistance @DigitalGarden provided helped me resolve the issue I encountered (noted in my previous post), but if you have any additional feedback, comments or opinions you want to share, please do 
I’m not sure what your goals with Unraid are, but there are some less obvious settings you should consider:
Make sure the appdata and system shares live only on flash storage (this is where the Docker data and the Docker image live), for best performance and ease of use.
In the Docker settings, make sure you’re not using the Docker directory option; it can have issues depending on the filesystem, and if your appdata is on ZFS avoiding it is mandatory. Also, regardless of filesystem, macvlan is not recommended; use ipvlan instead.
In Global Share Settings, make sure you have exclusive shares enabled; this allows more direct access to the disks and reduces I/O wait for shares that live only on flash storage. Hardlinks are also recommended. I use Direct IO as well, but that’s more your choice.
If you want better write performance to the array (assuming you’re using the array), you may opt to change the Tunable (md_write_method) setting, found under Settings → Disk Settings, to reconstruct write, but read the description and make the call for yourself.
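If you ever want to flip that tunable from the command line instead of the UI, something like the following is what gets passed around on the Unraid forums; treat it as a sketch and double-check it against your Unraid version before relying on it:

```
# Reportedly switches the array to reconstruct write ("turbo write");
# the change does not persist across reboots unless set in Disk Settings.
mdcmd set md_write_method 1
```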
When setting up a new share, most people also use High Water as the allocation method, which disperses data more evenly, though some prefer to completely fill each disk first.
I could ramble on more, but I don’t know what specifically you already know.
Thanks for the tips, tricks, and optimization points @iownyrface. I have most of those recommendations implemented (I’ve been an Unraid user for some time). I transitioned to an HL15 Fully Built system to improve stability and always-on performance after coming from typical consumer-grade hardware.
This thread in particular was primarily focused on issues encountered when attempting to use NVMe SSDs connected through the Oculink interface on the Supermicro X11SPH-nCTPF motherboard that 45Homelab (45Drives) supplies in the fully built HL15.
I’m very appreciative of all the guides, advice, tips, tricks, opinions and general knowledge that the Unraid community provides and shares. It truly has made my Unraid experience amazing!
Thanks again for your input @iownyrface 