
[Build+Benchmark] MacBook Pro 15 2017 + Custom Aorus VEGA NANO Gaming Box for macOS Machine Learning  


New Member
Joined: 2 years ago

Hi everyone,

I just want to share my experience building a custom eGPU box for doing machine learning on macOS. I previously had an AORUS 1070 Gaming Box and it worked quite well pre-Mojave; after upgrading to Mojave everything broke, so I decided to buy a PowerColor Vega 56 Nano card from Newegg and ship it all the way to Australia. As with many custom PowerColor Vega 56 builds here, it requires an 8-pin PCIe GPU power to dual 8-pin (6+2) cable. Interestingly, it was faster for Newegg to ship the card from the US to AU than to get the cable from interstate... Anyway.

I had my concerns prior to custom-fitting this into the Aorus Gaming Box because of the PSU. The PSU is a Gold-rated 450W unit, so I did a bit of calculation:
- The Vega Nano's TDP is 210W.
- The box's laptop-charging passthrough maxes out at 100W.
- It has a Qualcomm Quick Charge 3.0 port, which per spec can max out at 20W.
- The 3 fans (2 on the sides) + the PSU fan = 15W.
- All other USB peripherals (3 ports) ~= 15W.

That totals 360W; adding 20% headroom gives 432W. This is under 450W, so it might work.
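The budget above can be sanity-checked in a few lines (the wattages are the estimates from this post, not measured draws):

```python
# Back-of-envelope power budget for the Gaming Box's 450W Gold PSU.
# All figures are estimates, not measurements.
loads_w = {
    "Vega 56 Nano TDP": 210,
    "laptop charging passthrough (max)": 100,
    "Quick Charge 3.0 port (max)": 20,
    "box fans + PSU fan": 15,
    "other USB ports (3x)": 15,
}
total_w = sum(loads_w.values())      # 360 W nominal
budget_w = total_w + total_w // 5    # +20% headroom -> 432 W
print(f"{total_w} W nominal, {budget_w} W with headroom")
assert budget_w <= 450, "over the PSU rating"
```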

Photo of the build: [image attached]. As you can see, the card fits perfectly. No other modifications are required.


The laptop recognises the box instantly. It's quite refreshing to see that no 'hacks' or other scripts are required. eGPU disconnection works too.

I gave it some machine learning tests. I chose `plaidml` because it already works with Keras, and it supports both OpenCL and *Metal*, Apple's new GPU API stack that also exposes compute capability. I ran `plaidbench` on the `mobilenet` and `vgg19` models across the three compute engines connected to the laptop:
- The built-in Intel HD Graphics 630.
- The dGPU: AMD Radeon Pro 560 4GB.
- The eGPU: PowerColor Vega 56 Nano 8GB in the AORUS Gaming Box.
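For anyone wanting to reproduce this, a sketch of the workflow I used (device names and the `plaidml-setup` prompts will differ per machine; Metal devices appear as experimental entries on my setup):

```shell
# Install the PlaidML Keras backend and the benchmark harness
pip install plaidml-keras plaidbench

# Pick the compute device interactively; run this once per device
# you want to benchmark (iGPU, dGPU, eGPU)
plaidml-setup

# Benchmark the two models; plaidbench reports compile time,
# total execution time, and time per sample
plaidbench keras mobilenet
plaidbench keras vgg19
```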

The results (all in seconds, smaller is better):

For the MobileNet model (the simpler of the two):

| Compute engine | Compile time (s) | Execution time (s) | Per sample (s) |
| --- | --- | --- | --- |
| Intel HD Graphics 630 | 3.6319 | 39.5003 | 0.039 |
| dGPU: Radeon Pro 560 4GB | 4.3957 | 9.4924 | 0.009 |
| eGPU: Vega 56 Nano 8GB | 4.2442 | 8.5127 | 0.008 |

The eGPU is about 12% faster than the dGPU and roughly 4.6x as fast as the built-in Intel card.

For the VGG19 model (much bulkier):

| Compute engine | Compile time (s) | Execution time (s) | Per sample (s) |
| --- | --- | --- | --- |
| Intel HD Graphics 630 | 9.5966 | 396.0498 | 0.3867 |
| dGPU: Radeon Pro 560 4GB | 10.4601 | 73.3705 | 0.0716 |
| eGPU: Vega 56 Nano 8GB | 8.4916 | 21.3637 | 0.0208 |

This is the real deal: the eGPU is about 3.4x as fast as the dGPU and roughly 18.5x as fast as the built-in Intel card. Part of this gap comes from the eGPU having twice the graphics memory of the dGPU, and HBM2 memory at that.
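Those ratios come straight out of the execution-time tables above (the totals are consistent with plaidbench's 1024-sample default, which matches the per-sample column):

```python
# Speedups recomputed from the plaidbench execution times quoted above.
times_s = {
    "mobilenet": {"intel_hd_630": 39.5003, "radeon_pro_560": 9.4924, "vega_56": 8.5127},
    "vgg19":     {"intel_hd_630": 396.0498, "radeon_pro_560": 73.3705, "vega_56": 21.3637},
}
for model, t in times_s.items():
    vs_dgpu = t["radeon_pro_560"] / t["vega_56"]  # eGPU vs dGPU
    vs_igpu = t["intel_hd_630"] / t["vega_56"]    # eGPU vs integrated GPU
    print(f"{model}: {vs_dgpu:.2f}x the dGPU, {vs_igpu:.1f}x the iGPU")
# mobilenet: 1.12x the dGPU, 4.6x the iGPU
# vgg19: 3.43x the dGPU, 18.5x the iGPU
```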

Doing another test with CL!ng for raw compute power, I got (results in GFLOPS):

| Compute engine | Scalar | Vector |
| --- | --- | --- |
| Intel HD Graphics 630 | 211.84 | 207.96 |
| dGPU: Radeon Pro 560 4GB | 573.28 | 664.72 |
| eGPU: Vega 56 Nano 8GB | 1793.73 | 1817.02 |

This confirms that the eGPU has roughly 3x the raw compute of the dGPU.


I'm pretty happy. This is going to cut down my model training and inference time a lot. But why stop there? I'm not a gamer, but it's interesting to see how the eGPU handles graphics rendering. I ran the Unigine Valley benchmark (Extreme HD preset, 8xAA) at 1080p and 1440p. Results:

1440p: Min 20.4, Max 64.1, Avg 36.4. Score: 1522.
1080p: Min 28.8, Max 103.9, Avg 56.0. Score: 2343.


I can also confirm that Adobe Photoshop CC 2019 recognises the eGPU and also uses it for acceleration.

During all of these benchmarks (machine learning and Unigine), the eGPU temperature hovered around 56°C (~133°F), so it's not alarming at all. The noise level is great; I find the Vega produces less coil whine than the 1070 that came with the box.

Verdict? Do it. This is pretty much the same setup Apple is selling for AU$1,999, and I got it all for ~AU$1,400 (and I still have the original mini 1070 that came with the box). If your workload has OpenCL/Metal acceleration, this is probably the best setup that gives you both portability and power.

P.S. ML on macOS is getting interesting. I love how PlaidML just works out of the box without a difficult driver installation, and it already supports Metal (I also benchmarked Metal vs OpenCL, and Metal consistently yields 5-15% better performance). Hopefully TensorFlow will get Metal support soon.

MacBook Pro (15-inch, 2017)
- Mojave 10.14.4 (18E174f)
- Intel(R) Core(TM) i7-7820HQ CPU @ 2.90GHz
- 16GB RAM
- dGPU: Radeon Pro 560 4GB

- eGPU: Radeon RX Vega 56 8GB in Aorus Gaming Box.

Famed Member Admin
Joined: 3 years ago

What an excellent build! Thank you for sharing your experience.

Best ultrabooks for eGPU use | eGPU enclosure buying guide

Estimable Member
Joined: 1 year ago


Excellent idea. 

The RX 580 Gaming Box generates, in OpenCL:
- Scalar 2517, Vector 2448
- Scalar 1794, Vector 2135

and the Intel Iris Plus 655 in my MBP 13 2018 gets Scalar 211, Vector 302 in OpenCL.

Meanwhile, Valley Extreme HD at 1080p is Min 24, Max 75, Avg 38, Score 1601.

So your Valley results are more in line with a Vega 56, while the CL!ng numbers look strange. Did you experience any crashes or overheating?

And Valley Extreme HD at 1440p: Min 15.5, Max 46.3, Avg 25.5, Score 1068.


A. 2.7 GHz i7 4-core, 16GB, 1TB MBP 13 2018 TB3, eGPU Gigabyte Gaming Box RX580 8GB

B. 3.1 GHz i7, 16GB, 1TB MBP 13 2015 TB2, eGPU Gigabyte Gaming Box RX580 8GB