2012 Mac Pro (GTX980Ti) [1st,12C,E] + GTX 780 Ti x3 @ 64Gbps-PCIe (Cyclone Backplane) + macOS 10.11.6 [hoeveler]
Just wanted to share my experience with this setup.
I am working with a Mac Pro 5.1, dual X5690, 96GB RAM, 512GB AHCI SSD, with 3 monitors attached. My internal GPU is a flashed GTX 780. I'm using the Mac Pro for graphic design and Blender 3D rendering, running Mavericks and El Capitan. I needed more GPU power for 3D rendering, so I decided to build this expansion box according to hoeveler's instructions. I used a Cyclone host bus adapter card 426, a Cyclone backplane 427, a 2 m Molex cable, a BeQuiet case, and a BeQuiet 1200W PSU.
1. PCIe slot No. 1 vs. slot No. 2
When I put the internal GPU in PCIe slot No. 1 (lowest) and the HBA card in slot No. 2, the system shuts down with a kernel panic, 100% of the time, within 5 to 10 minutes of graphic design work or 3D rendering. It only works when I put the HBA card in PCIe slot No. 1, which is normally reserved for the internal GPU. I don't know whether this is specific to my Mac Pro or whether others face the same problem, so I wanted to share this information.
2. Maximum number of GPUs
These were the GPU setups I tested running 10.11. It didn't make a difference which of the cards were internal to the Mac Pro and which were attached to the Cyclone backplane.
GTX 780 + 2 x GTX 980 Ti + GT 120 = Boot
GTX 780 + 2 x GTX 980 Ti + 2 x GT 120 = No Boot
3 x GTX 980 Ti = Boot
3 x GTX 980 Ti + GT 120 = No Boot
3 x GTX 980 Ti + ATI HD 5870 = No Boot
2 x GTX 780 + 2 x GTX 980 Ti = Boot
2 x GTX 780 + 2 x GTX 980 Ti seems to be the »best« setup I could get, given that I didn't want to upgrade past 10.11.
Hope this information helps others build their Cyclone GPU expansion systems. 3D rendering is really fast for me now, although I have to admit that adding the second 780 didn't make much of a difference.
Mac Pro 5.1 Mac OS 10.11
Thanks for your info! I'm so glad to see that others here have built a version of my backplane setup. I too had several issues when I had this system running on a classic Mac Pro, and I think it was mostly due to Apple's poor support for Nvidia. I have since switched my graphics work to Windows 10 and took my backplane system along with it. It's been much more stable under Windows 10. That being said, I just went through a somewhat labored upgrade to macOS Mojave by purchasing an officially recommended AMD card from the Apple site. I haven't bothered trying this setup on my Mac Pro now that it's running Mojave (10.14), but I wonder whether that would be more stable.
I'm reading this with interest, even though this post is from some time ago. Do you, or anyone else who has done this, know whether the motherboard and BIOS need to support PCIe bifurcation for this to work? (I'm considering this solution to expand a mini-ITX H81i motherboard that doesn't support bifurcation.) Any views appreciated; I would love to do this!
@natlu_twoheightohnine, it doesn't use PCI bifurcation. The HBA connects to the adapter on the backplane. The backplane has a PCIe switch to support all the downstream PCIe slots.
@joevt, yes, this is precisely how I understand it to work. In fact, the only time I had issues with the system was when I tried setting it up with a "Workstation" (WS) series Asus motherboard that had a PLX chip onboard to handle active switching of the PCIe "traffic" (my layman's understanding of it). The Cyclone backplane has a similar hardware "traffic controller" that seemed to be interfering. So a motherboard that does NOT have some kind of active PCIe switching would be best.
@hoeveler, You should be able to chain multiple PLX chips together. Maybe a BIOS setting could have fixed your motherboard.
PCIe is a tree structure. The PLX chips add multiple branches. The MacPro7,1 has a PLX chip (96 lanes) with two upstream links and multiple downstream links.
Thunderbolt uses PCIe so it has a tree structure too. A Thunderbolt controller is like a PLX chip - it can have multiple downstream devices (up to 4, plus its own devices - USB and NHI). Some Thunderbolt devices have a PLX chip to provide multiple slots (e.g. Sonnet Echo Express III-D). Some PCIe cards have PLX chips to connect multiple devices (SATA + USB, or USB x2, or NVMe x2 or x4 or x6 or x8).
The OWC Thunderbolt Hub has three downstream Thunderbolt ports.
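To make the tree picture concrete, here is a tiny, purely illustrative Python sketch of the topology described in this thread. Nothing here is a real PCIe API; the class, the names, and the number of backplane slots are made up for the example.

```python
# Toy model of the PCIe tree described above (illustrative only, not a real API).
# A switch (e.g. a PLX chip, or the switch on the Cyclone backplane) has one
# upstream link and several downstream links, so the whole fabric stays a tree.

class Node:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []

    def show(self, depth=0):
        print("  " * depth + self.name)
        for child in self.children:
            child.show(depth + 1)

# Hypothetical layout of the setup in this thread: the host sees only the HBA
# in its slot; everything behind the backplane switch hangs off that one link.
root = Node("Root complex (Mac Pro PCIe slot 1)", [
    Node("Cyclone HBA -> backplane adapter link", [
        Node("Backplane PCIe switch", [
            Node("GPU in backplane slot 1"),
            Node("GPU in backplane slot 2"),
            Node("GPU in backplane slot 3"),
        ]),
    ]),
])

root.show()
```

This is also why bifurcation on the host board doesn't matter: the host only ever negotiates one link to the HBA, and the backplane's switch fans that link out to the downstream slots.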
@hoeveler, Wow, this is a really cool setup that is right up my alley. I did not know that PCIe expansion backplanes like this existed, much less ones in an ATX form factor. I did an external build with my Lenovo Tiny, but I am actually thinking this would be of good use for my server, to add a GPU and space for other PCIe cards. It's a 1U Dell R430, so it only has two low-profile slots that are very limited on space and thermals. Currently I have a dual 10G and 4-port gigabit card in there, but I really wanted to add a GPU to run a remote-play server on a virtual machine: install Windows in a VM, pass the GPU through to it, and play remotely with Nvidia's GameStream technology. With this external enclosure idea you gave me, I could also run a second, weaker GPU for Plex and have some room to grow for other uses as well.
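For reference, if that server ends up running a Linux hypervisor with KVM/libvirt, the passthrough step can be scripted with the libvirt Python bindings. The sketch below is only an outline under that assumption: the VM name, the PCI address, and the prerequisite that the GPU is already bound to vfio-pci with the IOMMU enabled are placeholders, not details from this thread.

```python
# Sketch: attach a GPU to a Windows VM's persistent definition via VFIO
# passthrough, using the libvirt Python bindings. The VM name ("win10-stream")
# and the PCI address (0000:0b:00.0) are hypothetical placeholders.

import libvirt

GPU_HOSTDEV_XML = """
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x0b' slot='0x00' function='0x0'/>
  </source>
</hostdev>
"""

conn = libvirt.open("qemu:///system")      # connect to the local hypervisor
dom = conn.lookupByName("win10-stream")    # the (hypothetical) Windows VM

# Add the device to the persistent config so it is present on the next boot.
dom.attachDeviceFlags(GPU_HOSTDEV_XML, libvirt.VIR_DOMAIN_AFFECT_CONFIG)
conn.close()
```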
I had a pair of X5675 chips in my last server, and they are great little powerhouses for their age, cost, and TDP. They lasted a very long time as budget gaming CPUs via X58 chipset boards.
@merritt_bishop, Yeah, it's a very versatile setup. Unfortunately, after I completed a render project that was heavy on the GPUs, I think the load put a bit too much wear on the PC board: it developed intermittent connection issues. I contacted Cyclone Microsystems and they can fix it for $800, which actually isn't a bad price for it. I was planning on holding off on the repair until the next GPU render project that I need it for. These boards are hard to find on eBay.
I got the same board and it is running perfectly; it is not causing any shutdowns. I only purchased the board with the NVIDIA cable and the adapter card, and mounted it in an old IBM case with a 1600W Rosewill PSU.