Problem with 2xNVMe HAT in Pironman 5 MAX

Hi all,

I am testing a ZFS mirror pool on the Raspberry Pi 5 (16 GB of RAM) with a Pironman 5 MAX case.

I have been doing a lot of testing and found a repeatable, consistent problem that completely hangs Raspberry Pi OS (Debian 13 "trixie", the terminal-only Lite version).

With the two drives in the ZFS pool, all online, if I hammer the pool with fio as follows:
fio --name=zfs-stability --directory=/mnt --rw=randrw --rwmixread=70 --bs=64k --ioengine=libaio --iodepth=32 --size=2G --numjobs=4 --runtime=3600s --time_based --group_reporting
Some 2-3 minutes in, the Pi completely freezes, and I see the following in dmesg:

[  431.584404] nvme nvme0: controller is down; will reset: CSTS=0xffffffff, PCI_STATUS=0x10
[  431.584414] nvme nvme0: Does your device have a faulty power saving mode enabled?
[  431.584417] nvme nvme0: Try "nvme_core.default_ps_max_latency_us=0 pcie_aspm=off pcie_port_pm=off" and report a bug
[  431.925272] nvme nvme1: controller is down; will reset: CSTS=0xffffffff, PCI_STATUS=0xffff
[  431.925275] nvme nvme1: Does your device have a faulty power saving mode enabled?
[  431.925277] nvme nvme1: Try "nvme_core.default_ps_max_latency_us=0 pcie_aspm=off pcie_port_pm=off" and report a bug
[  432.072403] nvme 0001:03:00.0: enabling device (0000 -> 0002)
[  432.072420] nvme nvme0: Disabling device after reset failure: -19

Note that I am using a 5.1 V, 5 A power supply;
`dmesg | grep -i "voltage"`
shows nothing, and
`vcgencmd get_throttled`
returns 0x0.
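
For reference, the throttled value is a bitfield; here is a quick sketch to decode it (bit meanings per the Raspberry Pi documentation, value hard-coded to the 0x0 my Pi reports):

```shell
# Decode the vcgencmd throttled bitfield (bit meanings per the Raspberry Pi
# documentation). Substitute the value your Pi reports for `flags`.
flags=0x0                # output of: vcgencmd get_throttled
val=$((flags))
[ $((val & 0x1)) -ne 0 ]     && echo "under-voltage detected now"
[ $((val & 0x4)) -ne 0 ]     && echo "currently throttled"
[ $((val & 0x10000)) -ne 0 ] && echo "under-voltage has occurred since boot"
[ $((val & 0x40000)) -ne 0 ] && echo "throttling has occurred since boot"
[ "$val" -eq 0 ]             && echo "no power or thermal events recorded"
```

In my case it reports no events at all, which is why I doubt the supply side.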

After the Pi hangs, I have to physically remove the USB-C power, wait a few minutes, and power back on. If I just cycle the power, the two orange drive-activity LEDs on the HAT blink very faintly (not at full brightness) and the disks are not recognized, so I have to unplug again.

I have been doing repeated tests for days with all kinds of commands (including adding `nvme_core.default_ps_max_latency_us=0 pcie_aspm=off pcie_port_pm=off` to /boot/firmware/cmdline.txt, as the dmesg message suggests).
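
For reference, the cmdline.txt edit can be sketched like this. It runs on a scratch copy so it is safe to try anywhere; point the path at /boot/firmware/cmdline.txt (as root) to apply it for real. The parameters must stay on the single existing line:

```shell
# Demonstrate appending the suggested parameters while keeping cmdline.txt a
# single line. The sample content below is a stand-in, not my real cmdline.
CMDLINE=$(mktemp)
echo "console=serial0,115200 root=PARTUUID=deadbeef-02 rootwait" > "$CMDLINE"
sed -i '1 s/$/ nvme_core.default_ps_max_latency_us=0 pcie_aspm=off pcie_port_pm=off/' "$CMDLINE"
cat "$CMDLINE"   # still one line, now ending with the three new parameters
```

After a reboot, `cat /proc/cmdline` should show the new parameters.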

Nothing works.

Now… if I PHYSICALLY remove one of the disks from the HAT, the ZFS pool keeps working (a degraded mirror), and the fio test chugs along with no problem. It has been running for almost an hour now, with very healthy 400 MiB/s reads and 180 MiB/s writes.

Both disks are Kingston Technology Company, Inc. KC3000/FURY Renegade NVMe SSD [E18] (rev 01) (prog-if 02 [NVM Express]) 1 TB drives, which are specifically listed as compatible in "Compatible NVMe SSDs" in the SunFounder Pironman 5 documentation. During the test I monitor both drives with `watch` running `smartctl -a /dev/nvme0n1 | grep -i 'Temperature'; smartctl -a /dev/nvme1n1 | grep -i 'Temperature'`, and the drives stay cool at under 50 ºC the whole time (I am using heatsinks on both of them, and the Pironman 5 MAX's two small case fans are on).

This is very frustrating.

It seems that when both disks are in use, the ASMedia switch simply resets after 1-2 minutes and drops off the bus completely, and the Pi hangs. This makes my use case of a ZFS mirror for redundancy non-viable. Given that fio and the ZFS pool chug along fine with just one disk, I do not think this is a ZFS issue.

Is this a limitation of the ASM1182e 2-Port PCIe x1 Gen2 Packet Switch that the HAT has?
I have done LOTS of things:

  1. Tried the other flat PCIe cable that comes with the Pironman 5 MAX case. Same results.
  2. I even purchased and tried ANOTHER 2xNVMe HAT, from Freenove (it uses the same approach of taking 5 V from the Raspberry Pi, and the same ASMedia chip). It also comes with its own flat cable, which I used. Same results.

I understand this is a very hard one to crack.
Is there something that I can do?
If this is a limitation of the HAT, where is this stated? The use case of mirroring drives, with ZFS or any other software RAID, should be pretty common, right?

Sorry for the poor user experience you encountered.

Both the Dual NVMe PIP and the two NVMe SSDs are powered via the Raspberry Pi’s USB‑C port (5V supply). Although the 5A power adapter provides sufficient total power, the power distribution circuit on the Dual NVMe PIP may not meet the instantaneous current demand when both drives are under high load simultaneously (NVMe peak power can reach 5–7W per drive; 10–14W for two drives combined, potentially exceeding the rated load of the PIP’s power delivery chip).

As a temporary workaround, you may try the following solution to see if it works for your setup:

If your ZFS mirror pool requires sustained high-load operation, keep one NVMe on the HAT and connect the second SSD externally using a USB 3.0 hard-drive enclosure (USB 3.0 delivers roughly 450 MB/s in practice, sufficient for most use cases). This avoids the ASM1182e chip on the Dual NVMe PIP, and ZFS can still seamlessly recognize the drives as a mirror pool, greatly improving stability.

We take this issue very seriously and have already prioritized it for further investigation. In the future, we plan to upgrade the Dual NVMe PIP module—including increasing its current‑handling capacity—so that dual SSDs can operate stably under all typical workloads.


Thank you very much for your answer.

Are you sure this is a power issue? As stated, there are no low-voltage warnings in `dmesg`. I have used a USB power meter that gives live readings on the USB-C power port, and it never went over 4 A.

Aren’t the USB 3.0 ports also powered by the Raspberry Pi? Or do you mean a USB 3.0 enclosure that is externally powered?
I can try that setup to narrow down the problem. However, it completely defeats the purpose of having a Pironman 5 MAX case for me: the dual NVMe HAT is its main attraction.

Also, I need the other two USB 3.0 ports (which share bandwidth, so that ~450 MB/s is split between them) for Ethernet dongles, as this machine is also acting as a router.

Would powering the HAT externally through its 5 V pins help? Or is the limitation in the HAT's own power circuitry?

Thank you!

This issue is unlikely to be caused by insufficient power adapter capacity.

Our assessment is that the Dual NVMe PIP module's circuit cannot supply high current to both SSDs simultaneously, especially under sustained high load. This limitation may lead to insufficient current delivery, causing SSD instability or failure.

Therefore, we will need time to upgrade the dual NVMe PIP module by increasing its current‑handling capability in order to address this problem.


Makes sense that the power circuitry of the HAT is the culprit here.

I think it makes no sense to sell a HAT that supports two NVMe drives if they cannot be used simultaneously.

To be fair, I’ve been reading forums, and other users have reported similar problems with competing products like the Geekworm X1004. I was able to test another similar HAT from Freenove myself and hit exactly the same problem, so this is not unique to SunFounder.

Oh well… I’ll see what I can do. Thanks anyway!


Sorry for the inconvenience and sincerely thank you for your understanding and support.

We will continue to seek better solutions to address this issue in the future.

OK, I got this working, but not with the Pironman 5 MAX 2xNVMe HAT.

I purchased a second 5.1 V, 5 A power supply. Then I bought a Freenove 4xNVMe HAT. It has a different PCIe switch (ASMedia Technology Inc. ASM1184e 4-Port PCIe x1 Gen2 Packet Switch). Most importantly, this HAT has a USB-C port, to which I connected the second power supply. This way, there is plenty of power for the two NVMe drives. Link here: Freenove Quad / Dual M.2 NVMe Adapter for Raspberry Pi 5, SSD HAT AI H.

I have been running continuous 5-hour intensive fio tests, and it is rock solid.

Maybe this gives you a hint about how to fix your HAT. Of course, this other HAT, together with the USB-C input cable, does not fit inside the Pironman 5 MAX case.

Thank you for sharing your ideas. We will review and consider all valuable suggestions thoroughly, aiming to identify and implement the optimal solution.


Hi. Thanks for investigating this. I have had the case in my cart for some time. Before I buy it, have you by any chance mounted the quad NVMe HAT inside your MAX case? I'm curious to see if it would work. I suppose one would just have to drill an extra hole for the second power cord, and hope the HAT can mount on some of the standoffs in the case… If you have any photos of your setup with the new HAT, I'd appreciate it if you could share them.

The 4x HAT from Freenove (the only one I found that has a separate USB-C power input) does not fit inside the Pironman 5 MAX case. It is a tad wider and longer, and even if the HAT physically fit inside the case in the same orientation as the 2xNVMe HAT that comes with it, there would still be the issue of the USB-C power input: the USB-C port is on the side, and once you connect the cable, the connector protrudes several centimetres. Another issue is the length of the flat flexible 16-pin FPC/FFC cable for the PCIe connector. The one that comes with the case (they are so kind as to give us two) is fairly long (70 mm), but it really limits the options.

I ended up putting the 4xNVMe HAT on the upper side of the case (top left, to the left of the small screen), mainly because the flat cable could not reach anywhere else. Some Kapton tape, some double-sided tape, some zip ties. Janky as heck. I put a 40 mm Noctua 5 V USB fan on top of some heatsinks.
It works: the drives stay very cool and it passes the several-hour fio torture test, but it looks very ugly.

I have ordered some longer flat 16-pin FPC/FFC cables. They are out of spec; actually, the ones that come with the Pironman 5 MAX case are also out of spec, since according to Raspberry Pi the maximum length for those flat cables is 50 mm. I intend to see if the longer cables work, and if they do, to mount the 4xNVMe HAT on top of the case. That is more secure, and I would be able to do without the zip ties.


I am no USB-C expert, but some thoughts:

SunFounder has to clearly state on the website that two drives DO NOT work simultaneously. They list drives as compatible; I have two of the listed drives (Kingston KC3000), and SunFounder states the HAT is good for RAID0/RAID1 setups, but I do not see how this can work. Getting power from the 5 V pins on the Pi GPIO does not provide enough current. IIRC the pogo pins are rated 5 V, 2 A, which gives us 10 W. Most NVMe drives are rated 3.3 V, 2.5 A, which is 8.25 W, so how on earth can this support two drives? The most efficient DRAM-less drives I have found are 3.3 V, 1.8 A, almost 6 W each, so even two efficient drives would already be nearly 2 W (~20%) over the 5 V / 2 A limit.
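
The arithmetic above can be checked with a quick back-of-envelope script (all figures are the label ratings quoted in this post, not measured draw; the 5 V / 2 A pogo-pin limit is my recollection):

```shell
# Power budget for two NVMe drives fed from a 5 V / 2 A rail.
awk 'BEGIN {
  budget    = 5.0 * 2.0   # 10 W available from the GPIO/pogo-pin rail
  typical   = 3.3 * 2.5   # 8.25 W: a typical NVMe drive label rating
  efficient = 3.3 * 1.8   # ~5.94 W: an efficient DRAM-less drive
  printf "budget:               %.2f W\n", budget
  printf "one typical drive:    %.2f W\n", typical
  printf "two efficient drives: %.2f W (%.0f%% over budget)\n",
         2 * efficient, 100 * (2 * efficient - budget) / budget
}'
```

Even the most efficient pair overshoots the rail's rating before counting any switch or controller overhead.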

The list of compatible drives should be more comprehensively tested. Maybe lower-power, DRAM-less drives work, but I am not going to spend hundreds of euros buying pairs of drives to test myself.

What I would like to see in a HAT revision by Sunfounder:

  1. Get the power directly from the USB-C power input. I think the 5.1 V, 5 A power supplies should be enough. If I am not mistaken, this would need some USB-C circuitry to split one supply between the Pi and the HAT. Having to use two separate USB-C power supplies is ugly and expensive.

  2. It would be even nicer, since the HAT has to be reworked anyway, to use a PCIe switch that is Gen 3 capable. There is the ASM2806 PCIe 3.0 switch chip, and there are already HATs on the market built around it (interestingly, with the same power limitations): PCIe3.0 to Dual M.2 HAT+ for Raspberry Pi 5 features ASMedia ASM2806 PCIe 3.0 switch - CNX Software. I am not sure about the implications for the flat-cable length.

Just trying to help here!


FWIW: I am running two of the Raspberry Pi 1 TB drives, M.2 2230. I have seen it stated that 2230 drives draw less power than 2280 drives, but it is almost impossible to find actual figures; the trade-off may be heat. I run my fans pretty fast and rarely see over 40 °C on anything. I am running an OpenMediaVault server with an mdadm mirror RAID. I don't stress it as much as you do, but I can stream movies and such; I mostly use it as an rsync server. Probably need to knock on wood.

1 Like