Poor CPU performance with eGPU connected?
Has anyone experienced poor CPU performance when an eGPU is connected?
I've just picked up a TB3 enclosure and, for the most part, everything on the GPU side looks good (compute performance, TB speed, etc.). Where things get strange is that with the enclosure connected, CPU performance seems to drop, or stays pegged in a low-performance/thermal mode. This is most visible in games - there is a clear CPU bottleneck and sub-standard performance.
DOOM 2016, with its built-in performance metrics, provides a good way to illustrate this: with the eGPU disconnected I see CPU frame times in the realm of 8-16ms (with variation, but enough to deliver 60FPS), but with the enclosure connected the value is pegged above 30ms, effectively capping my frame rate at roughly 30FPS or below.
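(To put those frame times in FPS terms: the conversion is just 1000 divided by the frame time in ms, so a quick sanity check looks like this.)

```python
# Frame-time (ms) to FPS, to put the numbers above in context
def fps(ms):
    return 1000.0 / ms

for ms in (8, 16, 16.7, 30, 33):
    print(f"{ms:>4} ms/frame -> {fps(ms):5.1f} FPS")
# 8ms is 125 FPS, 16.7ms is the 60FPS budget, anything much above 33ms drops below 30FPS
```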
- 2017 MacBook Pro 13 (non-Touch Bar), i7-7660U, 16GB RAM
- HP Omen Enclosure connected to the lower-left TB3 port
- AMD R9 290X
- Windows 10 Pro 1803 running under Boot Camp
- Latest AMD Drivers
- Latest Intel iGPU drivers
- Booting via rEFInd with a flag in the config to keep the iGPU alive (config sketch after this list)
- Disabled the iSight PCIe device to free up bandwidth
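For anyone replicating the rEFInd part: the usual flag for keeping the iGPU alive is the macOS version spoof in refind.conf, which makes the firmware leave the Intel graphics enabled when booting a non-Apple OS. A minimal excerpt looks something like this (the value shown is just the one commonly used for this; yours may differ):

```
# refind.conf (excerpt)
# Report a macOS version to the firmware so it leaves the Intel iGPU
# enabled when booting a non-Apple OS (typical value, adjust as needed)
spoof_osx_version 10.9
```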
What I've attempted:
- Removed both AMD and Intel drivers with DDU and re-installed
- Used the other Thunderbolt port
- Used ThrottleStop to force Turbo mode on the CPU
- Ensured power management is set to the High Performance plan (quick check after this list)
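(For the power-plan item: one quick way to double-check that the High Performance plan is actually active is powercfg. A rough sketch, run from an elevated prompt - the GUID is the stock Windows High Performance scheme:)

```python
import subprocess

# Stock Windows GUID for the built-in "High performance" power scheme
HIGH_PERF_GUID = "8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c"

# Show the currently active scheme
print(subprocess.run(["powercfg", "/getactivescheme"],
                     capture_output=True, text=True).stdout.strip())

# Switch to High Performance if it isn't already the active one
subprocess.run(["powercfg", "/setactive", HIGH_PERF_GUID], check=True)
```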
Short of reinstalling Windows 10 (a PITA on a MacBook) I'm stumped. On paper my old desktop machine (i5-750) should have similar CPU performance to the newer i7-7660U, but in gaming (my use case for the dock) it's miles off...
My guess is the problem is not with the CPU, it's with the GPU. Modern games lean mostly on graphics. On my setup the MacBook doesn't even turn on its fans when I play games (since switching to the eGPU).
Try turning off the Intel graphics in Device Manager and using an external display.
I know this is a very late bump, but I'm having the same issues and I can't figure out why. I'm using a MacBook Pro 16" with the i9-9880H and the 5500M with 4GB of VRAM, along with a Razer Core V2 and a Radeon VII. I get way, way better performance in DOOM 2016 without the eGPU, and terrible performance with it. In fact, even if I limit the game to using fewer cores, the performance is exactly the same. Is there any way to improve CPU performance? I've tried disabling the Intel graphics in Device Manager, but that didn't help at all. Any help would be appreciated!
Basically any resolution is CPU-limited, at least within reason. Quality settings don't matter either; low and ultra are the same. Both internal and external monitors are affected. Both OpenGL and Vulkan are affected too, though OpenGL is worse. And what is asynchronous compute?
DOOM 2016 runs at comparatively high frame rates - even at 4K ultra settings. A high frame rate is what bottlenecks TB3 (rough numbers below).
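To put rough numbers on that (assumed figures, not measurements): TB3 leaves the GPU roughly 22Gb/s of usable PCIe bandwidth, about 2.75GB/s, and per-frame traffic scales with frame rate - especially when the finished frame has to travel back over the same link to the internal display:

```python
# Back-of-envelope: how much of the (assumed) ~2.75 GB/s TB3 PCIe budget is
# eaten just by copying finished frames back for the internal display.
TB3_USABLE_GB_S = 22 / 8  # ~22 Gb/s of PCIe data -> ~2.75 GB/s

def readback_gb_s(width, height, fps, bytes_per_pixel=4):
    """GB/s needed to send rendered frames back over the link."""
    return width * height * bytes_per_pixel * fps / 1e9

for label, (w, h) in {"1080p": (1920, 1080), "1440p": (2560, 1440), "4K": (3840, 2160)}.items():
    for fps in (60, 120):
        print(f"{label} @ {fps:>3} FPS: {readback_gb_s(w, h, fps):.2f} GB/s "
              f"of ~{TB3_USABLE_GB_S:.2f} GB/s")
```

And that's only the return path; draw calls, dynamic buffers and texture streaming ride on the same link, which is why a high frame rate hurts more than a heavy 4K/ultra GPU workload does.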
Try testing the VII by running the Superposition 4K benchmark with the external display connected to the Razer Core V2.
I'll test Superposition in a minute, but using Doom 2016's built-in performance statistics with the eGPU plugged in, the GPU is never the issue, it's always the CPU unless I'm looking at the corner of a wall or something. Without the eGPU, I'm usually able to get about 120FPS, but with the eGPU, I can't get above 60 unless I'm looking in a corner.
I also gave Titanfall 2 a shot, and I had some of the same issues. In the opening area, looking away from a hallway, I can get 300FPS, but looking at the hallway, I get 30FPS or worse. I don't think this is normal, is it?
More info if it helps: I'm also using apple_set_os v0.5 in order to make my eGPU work at all in Bootcamp. Could that be causing some bottlenecking issues? https://egpu.io/forums/mac-setup/macbook-pro-16-egpu-hardlock-and-bootloop-issues/#post-72736
The Superposition benchmark gets you away from all the potential variables introduced by the graphics workloads of specific scenes in your game collection.
Basically, the Superposition score will tell you whether your overall setup is running properly. So it's a good step-back-and-look-at-the-big-picture kind of check.
Great, so no problems with your overall setup.
Gaming can require a higher level of real-time data feeding than benchmarking does. As you've found, specific scenes' real-time data needs can saturate the available bandwidth of TB3.
Which is likely why you're seeing better performance with the internal 5500M.
There’s some detailed explanation of this by @p-mac in another thread: