
2018 15" MacBook Pro (RP555X) [8th,6C,H] + GTX 1080 x2 @ 32Gbps-TB3 (AORUS Gaming Box) + macOS 10.13.6 [DrEGPU]  


Estimable Member
Joined: 2 years ago

Edit: Added some more pics, benchmarks, and thoughts. Fixed errors.

Topic title = 2018 15" MacBook Pro RP555X + 2x Aorus Gaming Box 1080 @ 32Gbps + macOS 10.13.6 + DrEGPU

System specs
-MacBook Pro 15 inch (mid-2018), i7-8850H 6-core, 32GB RAM, 4TB SSD
-iGPU Intel UHD Graphics 630
-dGPU AMD Radeon Pro 555X
-External Monitor: various, but mainly an old Acer
-macOS 10.13.6 (High Sierra)

eGPU hardware
-Aorus Gaming Box 1080
-Aorus Gaming Box 1080
-Aorus Gaming Box 2070 (Win10 only!)

Hardware pictures



You can see the eGPUs are working hard and each core is being used (the unlabeled window with the bars). 
The second GPU is behind the laptop. 
Please excuse the mess!

Another pic showing About This Mac --> System Report for the TB3 connections:

opposite side

Installation steps

I followed the steps for installing and running purge-wrangler:

With the laptop OFF, I plugged an eGPU into each side of the laptop, i.e. one on the left and one on the right.

I plugged a single monitor into one of them via a DVI-to-DP adapter.
I started up the laptop and then ran purge-wrangler to install drivers and enable Nvidia GPUs.
I went to System Preferences --> Energy Saver and toggled "Automatic graphics switching" off and back on.
The laptop screen will look scrambled for a few seconds, and you can log in shortly thereafter.


*Unfortunately, PyTorch does not ship with GPU support for macOS, so you have to compile it from source. 
*Also, unfortunately, you need Xcode versions that Apple no longer lets you download and install for free. 

Install Xcode 9.4.1     <--- you may have to cough up $99 to be a paid developer to get this, or your university/work may already have a site license.
Install the Command Line Tools (CLT) for Xcode 9.4.1     <--- same thing for the CLT. You might have to pay to be a paid Apple Developer.

Install CUDA for MacOS (v10.1)

Install cuDNN (v7.5)

Install Anaconda 3

Open a MacOS terminal window.
type: nano .bash_profile     <--- Make it so you don't have to export PATHs every time
Copy and paste the following:     <--- change info in [brackets] to be specific for your system (without the brackets!)
export PATH=/usr/local/bin:$PATH
export PATH=/Developer/NVIDIA/CUDA-[10.1]/bin${PATH:+:${PATH}}
export DYLD_LIBRARY_PATH=/Developer/NVIDIA/CUDA-[10.1]/lib${DYLD_LIBRARY_PATH:+:${DYLD_LIBRARY_PATH}}
export CUDA_HOME=/[wherever cuDNN installed to, for example mine is /users/dregpu/cuda]
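After reopening the terminal (or running source ~/.bash_profile), a quick sanity check that the exports took effect — a minimal sketch using only the standard library, nothing CUDA-specific:

```python
# Minimal sketch: confirm the exports from .bash_profile are visible to new
# processes. CUDA_HOME / DYLD_LIBRARY_PATH will show as unset if the profile
# wasn't sourced in this shell.
import os

def env_status(var):
    """Return (variable name, whether it is set in the environment)."""
    return var, var in os.environ

for var in ("PATH", "DYLD_LIBRARY_PATH", "CUDA_HOME"):
    print(env_status(var))
```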

Install PyTorch:
cd [wherever you want to install pytorch. mine is /users/dregpu. the install script will create another sub-directory called pytorch]
export CMAKE_PREFIX_PATH=/anaconda3     <--- or wherever you installed anaconda
git clone --recursive [email protected]:pytorch/pytorch.git
cd pytorch
pip install cmake
MACOSX_DEPLOYMENT_TARGET=10.13 CC=clang CXX=clang++ python setup.py install     <--- takes a long time!
pip install torchvision

You now have PyTorch working with multiple GPU support! Well, sort of... (see below)
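A quick way to confirm the build actually sees the GPUs — a sketch guarded so it also runs where torch isn't installed (torch.cuda.is_available() and torch.cuda.device_count() are standard PyTorch calls):

```python
# Sketch: verify the source-built PyTorch sees the eGPUs.
def cuda_summary():
    """Return (cuda_available, device_count); (False, 0) if torch is missing."""
    try:
        import torch
    except ImportError:
        return (False, 0)
    if not torch.cuda.is_available():
        return (False, 0)
    return (True, torch.cuda.device_count())

available, n_gpus = cuda_summary()
print("CUDA available:", available, "| GPUs:", n_gpus)  # expect 2 with both boxes attached
```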


On a dataset with 40,000 JPGs, it takes about 4-5 minutes per epoch using AlexNet
It took 5.5 hours to go through it with VGG19
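For scale, the back-of-the-envelope throughput those timings imply (40,000 JPGs per pass):

```python
# Rough throughput from the epoch times above (40,000 JPGs per pass).
def images_per_second(n_images, seconds):
    return n_images / seconds

alexnet = images_per_second(40_000, 4.5 * 60)    # ~4-5 min per epoch
vgg19 = images_per_second(40_000, 5.5 * 3600)    # 5.5 h per pass
print(f"AlexNet: ~{alexnet:.0f} img/s, VGG19: ~{vgg19:.1f} img/s")
# -> AlexNet: ~148 img/s, VGG19: ~2.0 img/s
```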

I ran Valley and Heaven and confirmed it was using the 1080 via Activity Monitor's GPU history. Geekbench requires money to test anything other than OpenCL, so if someone wants to send some money my way... 😉 
For the machine learning benchmarks, I installed PlaidML as the backend to make everything as similar as possible between GPUs. If you go to the PlaidML GitHub page, there is some sample code at the bottom using some CIFAR data. I just copied and pasted that into a Python script file, but changed range(10) to range(100) so the tests wouldn't finish instantly. Since macOS doesn't support Nvidia RTX cards, I booted into Windows (Boot Camp) and ran the PlaidML stuff there, so take that comparison with a grain of salt.



You get lots and lots of warnings during pytorch compile, but it shouldn't give you errors and stop.
The 2x eGPU setup is obviously much, much faster than just using the CPU. 
You can plug peripherals into the gaming boxes and it doesn't seem to wreck things. I plugged in an external SSD.
Plugging things into the laptop's free TB3 ports seems to lock up the system and require a hard reboot. I guess the initialization disrupts the eGPUs, causing a kernel panic or something. 
I think some of the warnings during the pytorch compile are relevant. Whenever I start my script and it engages the GPUs, the laptop screen goes blank and the user interface becomes sluggish. It's as if the system is using the hardworking eGPUs to drive the display and the iGPU is deactivated.
I didn't do anything like try to disable the Radeon 555X.
According to a post (can't find the link again) about compiling PyTorch from source on Linux, the pytorch installer script doesn't include some header files necessary for gcc.
I might try to recompile again according to this: https://docs.nvidia.com/cuda/cuda-installation-guide-mac-os-x/index.html
Cuda 10.1 is supposedly compatible with Xcode 10, so maybe that paid apple developer account isn't necessary after all!
Under Boot Camp Win10, the Radeon 555X doesn't seem to run OpenCL, as PlaidML couldn't see the GPU. The Radeon drivers from AMD's website don't seem to want to install, as there's some kind of block set up. I'm getting really annoyed at Apple lately. They make it soooo hard to get any serious work done!

DO NOT UPGRADE TO MOJAVE (10.14)!!! It will deactivate the Nvidia drivers, so no more eGPU support!

2019 Mac Pro (RP580X) [10th,16C,W] + RTX 3090 @ 32Gbps-TB3 (Razer Core X) + Win10 20H2 [build link]  

Noble Member
Joined: 2 years ago

I still don't get how you could run two Nvidia eGPUs in macOS while everybody else can't. @itsage, do you know why?

About this note: "PyTorch is kind of working. The laptop screen turns off and the remaining monitor is running slowly."
Running "sudo pmset -a gpuswitch 0" solves this problem.

And about this note: "Under Boot Camp Win10, the Radeon 555X doesn't seem to run OpenCL,"
this is a bug in the newest Nvidia drivers. They won't do OpenCL when AMD drivers are present. This is a known issue as well.


2018 15" MBP & 2015 13" MBP connected to RTX2080Ti GTX1080Ti GTX1080 Vega56 RX580 R9-290 GTX680

2018 15" MacBook Pro (RP560X) [8th,6C,H] + RX 5700 XT @ 32Gbps-TB3 (ASUS XG Station Pro) + Win10 & macOS 10.15.4 // Navi vs Radeon VII vs GTX 1080 Ti [build link]