Graphics card shows as "NVIDIA Chip Model" and Sierra 10.12.6 cannot boot with the eGPU plugged in
I have bought my first eGPU setup and followed the Beginner's Guide. Many thanks for the step-by-step tutorials and resources!!
My laptop is a 13-inch MacBook Pro (early 2015) running Sierra 10.12.6; the enclosure is a Sonnet eGFX Breakaway 550W with an NVIDIA Titan X inside. To connect them I use a Thunderbolt 3 to Thunderbolt 2 adapter and a 2 m Thunderbolt 2 cable, both official Apple products.
I disabled SIP and ran the automate-eGPU.sh installation. It recognized my card and installed the appropriate web drivers. If I run it again, it says the eGPU is enabled and the web drivers are up to date.
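For anyone following along, the two checks described above can be scripted. This is a minimal sketch using the real macOS command-line tools `csrutil` (SIP status) and `system_profiler` (GPU inventory) from Python; it is guarded so it just prints a note instead of crashing on systems where those tools don't exist:

```python
import shutil
import subprocess

def run_if_available(cmd):
    """Run a command if its binary is on PATH; return stdout or a note."""
    if shutil.which(cmd[0]) is None:
        return cmd[0] + " not available (not running on macOS?)"
    result = subprocess.run(cmd, capture_output=True, text=True)
    return result.stdout

# SIP status -- should report "disabled" after running `csrutil disable`
# from Recovery mode
print(run_if_available(["csrutil", "status"]))

# GPU/display inventory -- this is where "NVIDIA Chip Model" vs. the
# proper card name shows up
print(run_if_available(["system_profiler", "SPDisplaysDataType"]))
```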
However, I am confused: I read that the recommended way to boot is with the eGPU already plugged in (cold plug). For me that does not work; the screen remains black at startup. If I boot without the eGPU and plug it in afterwards, the system does show the Thunderbolt device (the Sonnet box), but the graphics card is listed as "NVIDIA Chip Model" and its type is GPU instead of External GPU...
Is there anything I can or should do about that? From what I read it should be the other way around: it should only work with cold plug/unplug... (if I hot-unplug, it does crash to a black screen, as "expected").
I will try to continue setting up with the CUDA and cuDNN installation.
I only intend to use the eGPU for deep learning, not for gaming, displays, or video rendering.
On that note, there are some points I would like to ask for clarification about, as I am confused.
I don't have an HDMI headless display adapter and did not follow the step
"Set the eGPU display/ghost display as the primary"
Is that an issue for using the eGPU for deep learning on my laptop? Does it matter whether an external display (e.g. HDMI) is plugged in or not? And there is an HDMI output on the card mounted in the eGPU enclosure... should that be used rather than the MBP's HDMI port?
Many thanks for all these resources and for any hints/clarifications!
@adrien_chatton As you can see in my build guide, it's not easy to run a (non-Kepler) NVIDIA card with this particular MacBook model. This "timed plug" is very annoying.
My tip is to switch to AMD for this MacBook; it will work flawlessly with it.
That is a brilliant trick 🙂 no idea how you found it out, thank you!!
I did the timed plug-in and now it is recognized as a Titan X.
It's fine for me to have to do that kind of trick, and I need CUDA / cuDNN for my purpose so I have to go with NVIDIA.
I would like to ask you a couple of last things, as there are some steps I did not really understand or am not sure whether they apply to me...
This is for deep learning GPU use on Sierra 10.12.6.
_ I did not perform the ghost display step, nor did I set any particular display parameters... does it matter?
_ I sometimes use my MBP with an HDMI external display and sometimes without; does that affect how the eGPU is used for computations? And can I use either the MBP's HDMI port or the one directly on the GPU?
_ I started the installation by disabling SIP and running automate-eGPU.sh; I did not use purge-wrangler.sh or purge-nvda.sh, which you refer to, @mac_editor... are they only for High Sierra, or should I also run them on my setup?
Thanks for your help and to the community !!
For instance, if I check the graphics, it shows the NVIDIA GTX Titan X GPU along with the default Intel Iris GPU.
But if I check the monitors with an HDMI external display plugged in, both are driven by the Intel Iris 6100. Would this matter only if I wanted to use the eGPU for video and games, or does it also affect CUDA computations?
Just to let the community know that the setup works perfectly with the timed-plug boot.
I did not follow the display/ghost display setup step.
I am using the eGPU only for running deep learning models. With the installation steps I followed, I got CUDA 9.0 and cuDNN 7.0 working, supported by a PyTorch build from source.
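To confirm that stack end-to-end, a small smoke test like the following is handy. This is a sketch assuming the PyTorch install described above; it degrades to a plain message if torch or a CUDA device is missing, so it is safe to run anywhere:

```python
def cuda_report():
    """Report whether PyTorch can see and use a CUDA device."""
    try:
        import torch
    except ImportError:
        return "torch not installed"
    if not torch.cuda.is_available():
        return "torch installed, but no CUDA device visible"
    x = torch.randn(1024, 1024, device="cuda")
    y = x @ x  # this matmul runs on the eGPU
    torch.cuda.synchronize()
    return ("CUDA OK on " + torch.cuda.get_device_name(0)
            + ", result shape " + str(tuple(y.shape)))

print(cuda_report())
```

If the timed-plug boot worked, the device name reported should be the Titan X rather than a generic entry.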
I did not benchmark heavy computations, but the speed is really good: even with the memory transfers going over Thunderbolt 2, it compares well to the cards on our server.
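For anyone wanting to put a rough number on that, here is a hypothetical micro-benchmark; the matrix size and repeat count are arbitrary, and it falls back to CPU (or skips entirely) when CUDA or torch is unavailable:

```python
import time

def bench_matmul(n=2048, repeats=10):
    """Average wall time of an n x n matmul; None if torch is unavailable."""
    try:
        import torch
    except ImportError:
        return None
    device = "cuda" if torch.cuda.is_available() else "cpu"
    x = torch.randn(n, n, device=device)
    _ = x @ x  # warm-up: the first GPU call pays a one-time setup cost
    if device == "cuda":
        torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(repeats):
        _ = x @ x
    if device == "cuda":
        torch.cuda.synchronize()  # wait for queued kernels before timing
    return (time.perf_counter() - start) / repeats

avg = bench_matmul()
print("avg matmul time: %s s" % avg if avg is not None else "torch not installed")
```

Note that a benchmark dominated by compute (large matmuls) will hide the Thunderbolt 2 bottleneck; workloads with frequent host-device transfers will show it more.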
Thanks for the help !