Two eGPUs (on the same controller)?
This has been bothering me for a very long time. What happens if you plug two eGPUs into the two ports of, say, a PC (or into the same controller on a Mac; Macs have two controllers)? How does the system determine bandwidth allocation?
Lenovo ThinkPad 25 -- GALAX SNPR TB3 1060 -- Lenovo Graphics Dock -- Benq BL2411PT - - two PackedPixels - Dasung not-eReader backer
My personal experience is:
1) macOS: no issues, works like a charm.
2) Windows: impossible to run two eGPUs on one controller at once.
Are you running the eGPUs on the Mac from the same controller? Four ports would require two controllers, no doubt.
Yes, in another post somewhere he said two eGPUs seem to work fine on the same controller under macOS, but not so much in Windows. I used to be interested in whether this was possible in Linux, but I've given up hope of getting Linux to run on my MacBook Pro.
Not sure how the system determines bandwidth allocation, but it certainly can't exceed 40 Gbps. Here are some screen captures I took this afternoon using a 2018 Mac mini paired with a Razer Core X Chroma + RX Vega 64 LC and a Gigabyte Gaming Box + RX Vega 56 Nano. Interpret the GPGPU numbers as you will; the total is just under 40 Gbps when both GPUs ran concurrently. When they ran individually, each could use the maximum TB3 connection bandwidth.
I have not tested on a Windows PC; my experience with dual and multiple eGPUs is with Macs only. In macOS the system can handle up to four eGPUs since 10.14 Mojave. In Windows via Boot Camp it's a huge challenge due to error 12. To get the two AMD eGPUs going with this 2018 Mac mini, I used @goalque's automate-eGPU EFI as well as disabling several PCI Express root ports. I was able to do a similar setup using the AKiTiO Node Duo hosting two RX 580 cards paired with a 2016 15" MacBook Pro [link to build].
Two or more cards are beneficial for compute tasks only. Games would use whichever eGPU Windows initialized first, or the one powering the primary display. Here are some numbers in Luxmark 3.1.
40 Gbps is the link rate of a single port. A controller has two ports, so the naive maximum would be 80 Gbps. But we know the maximum PCIe traffic per port is about 22 Gbps, so two ports would give 44 Gbps; however, the Thunderbolt controller's upstream link is PCIe 3.0 x4, which tops out around 31.5 Gbps.
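The stacking of limits above can be sketched as a quick calculation. This is a minimal sanity check, assuming the commonly cited figures (40 Gbps TB3 link, ~22 Gbps usable PCIe per port, ~31.5 Gbps PCIe 3.0 x4 uplink); the function name is mine, not from any tool.

```python
# Hypothetical model of TB3 bandwidth ceilings discussed above.
# Constants are the commonly cited figures, not measured values.

TB3_LINK_GBPS = 40.0          # raw Thunderbolt 3 link rate per port
PCIE_PER_PORT_GBPS = 22.0     # usable PCIe data bandwidth per TB3 port
UPSTREAM_PCIE_GBPS = 31.5     # PCIe 3.0 x4 uplink of the controller

def max_combined_pcie_gbps(num_ports: int) -> float:
    """Combined PCIe traffic is capped by the controller's upstream link."""
    return min(num_ports * PCIE_PER_PORT_GBPS, UPSTREAM_PCIE_GBPS)

print(max_combined_pcie_gbps(1))  # 22.0 - one eGPU gets the full per-port cap
print(max_combined_pcie_gbps(2))  # 31.5 - two eGPUs hit the uplink, not 44
```

So two eGPUs on one controller contend for roughly 31.5 Gbps of PCIe bandwidth, not 2 × 22 Gbps.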
Are the two rates (one per GPU) that are measured by AIDA64 using the same time interval or are they measured with different time intervals?
A) same time interval: GPU1 bytes + GPU2 bytes between time 1 and time 2.
B) different time interval: GPU1 bytes between time 1 and time 2, GPU2 bytes between time 3 and time 4.
It must be the latter, because sustaining 2362 MB/s and 2668 MB/s simultaneously would total 5030 MB/s, roughly 40.2 Gbps, which is greater than the 31.5 Gbps upstream limit.
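The unit conversion behind that conclusion, as a quick check (assuming AIDA64 reports decimal MB/s, so 1 MB/s = 0.008 Gbps):

```python
# Convert the reported AIDA64 readings (MB/s) to Gbps to check whether
# both GPUs could have been measured over the same time interval.

def mbps_to_gbps(mb_per_s: float) -> float:
    # 1 MB/s = 8 Mbit/s = 0.008 Gbit/s (decimal units assumed)
    return mb_per_s * 8 / 1000

combined = mbps_to_gbps(2362 + 2668)
print(round(combined, 1))  # 40.2 -- exceeds the ~31.5 Gbps PCIe 3.0 x4 uplink
```

Since 40.2 Gbps is impossible through a 31.5 Gbps uplink, the two rates cannot have been sampled concurrently.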
I've tried RAIDing two NVMe drives, one per Thunderbolt port of the same controller, and could only get 23 Gbps, only slightly over the 22 Gbps allowed by a single port.
Unless you connected the two GPUs to different controllers? No, your hwinfo screenshot shows they are on the same controller. I guess the slight dip you see (e.g. 2675 down to 2362) comes from simply having two GPUs connected; the benchmark is not testing them at the same time.
Pending: Add my system information and expected eGPU configuration to my signature to give context to my posts