Mid 2015 15-inch MacBook Pro eGPU Master Thread
Thanks to @nando4's and @itsage's earlier implementations, I was interested in trying out the direct M.2 PCIe 3.0 x4 interface of my Mid 2015 15" MBP (M370X) for an eGPU. This is now my primary Mac, as it has a brand new battery and a new top case with keyboard. The official battery replacement was not as expensive as I thought, and the cycle count was 0! This model supports "silent clicking", a nice feature that turns off the sound of the trackpad's click; the new MBPs do not have it. There are so many things we miss from this machine. The most interesting thing is, of course, PCIe 3.0 x4: it more than doubles the transfer speed of TB2 and leaves TB3 behind as well, given TB3's ~22 Gbps PCIe limit and more complex design.
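A quick back-of-the-envelope check on those link rates (my own arithmetic; the TB2/TB3 payload figures are the commonly cited approximations, not vendor specs):

```shell
# Raw per-direction PCIe line rate of the M.2 slot: 4 lanes x 8 GT/s
# with 128b/130b encoding overhead
awk 'BEGIN { printf "PCIe 3.0 x4: %.1f Gbps\n", 4 * 8 * 128 / 130 }'
# versus roughly 16 Gbps of usable PCIe payload over TB2's 20 Gbps link,
# and the ~22 Gbps PCIe tunnel ceiling of TB3
```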
As we know, Apple uses a proprietary interface, but luckily there exist M.2 NVMe adapters for upgrading the SSD in 2013-2015 MBPs. So I ordered both the short and the longer adapter from Sintech:
It turned out that the socket of the longer adapter was too tall: during reassembly of the lower case, one of the plastic clips in the middle would not snap in. For this reason I recommend using the short Sintech adapter, which did not have this issue.
The root issue with Thunderbolt is its too-narrow PCIe bridge windows. This is a big obstacle in Windows, resulting in error code 12. I decided to put my theory into practice and enlarged the problematic memory area that the eGPU requests. To my knowledge, this method is not described anywhere else. I originally got the idea some years ago when someone on this forum asked how to fix error code 12 on the nMP. The memory areas presented here are machine-specific and not applicable to other systems as such; however, if you choose the right values, there is a high chance it will work.
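Before enlarging anything, it helps to see where the bridge windows currently sit. A sketch from the EFI shell follows; the bus/device numbers are placeholders, not values from my machine:

```
# List every PCI device and bridge so you can locate the eGPU's upstream bridge
pci
# Dump (interpreted) config space of one bridge -- bus 06, dev 00, func 00
# here are placeholders; substitute the bridge in front of your eGPU
pci 06 00 00 -i
# In a type-1 (bridge) header, the non-prefetchable memory window is defined
# by Memory Base (offset 0x20) and Memory Limit (offset 0x22); that window is
# what has to grow for a demanding eGPU to escape error code 12
```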
The iGPU activation also differs from previous implementations: we now control the gmux directly instead of going through NVRAM variables. It follows the same memory address sequences that the Linux driver uses:
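As a concrete sketch, the indexed gmux writes look like this in the EFI shell. This mirrors the port numbers in Linux's apple-gmux driver (gmux at I/O base 0x700, value register 0xC2, write trigger 0xD4). Note that the Linux driver also polls a ready bit between writes, which this sketch omits, so treat it as an illustration rather than a drop-in script:

```
# Indexed write protocol: put the value at 0x7C2, then the gmux port at 0x7D4
mm 07C2 1 ;IO
mm 07D4 28 ;IO    # port 0x28 SWITCH_DDC = 1 -> route DDC to the iGPU
mm 07C2 2 ;IO
mm 07D4 10 ;IO    # port 0x10 SWITCH_DISPLAY = 2 -> internal panel to the iGPU
mm 07C2 2 ;IO
mm 07D4 40 ;IO    # port 0x40 SWITCH_EXTERNAL = 2 -> external output to the iGPU
mm 07C2 0 ;IO
mm 07D4 50 ;IO    # port 0x50 DISCRETE_POWER = 0 -> power down the dGPU
```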
The fastest way to clone your internal disk is the dd command; I am using a Corsair MP510 960GB (be careful with disk identifiers):
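If you want to sanity-check the dd invocation before pointing it at real disks, the same command works on plain files first (disk identifiers such as /dev/rdisk0 are examples; always verify yours with `diskutil list`, because dd overwrites the target without asking):

```shell
# Dry run on image files to verify the dd flags; on real hardware the
# if=/of= targets would be raw disk devices like /dev/rdisk0 and /dev/rdisk2
printf 'dummy disk contents' > /tmp/source.img
dd if=/tmp/source.img of=/tmp/clone.img bs=4096 2>/dev/null
cmp /tmp/source.img /tmp/clone.img && echo "clone verified"
```

The real clone is then of the form `sudo dd if=/dev/rdiskX of=/dev/rdiskY bs=1m` with your own identifiers; on macOS the raw `rdisk` devices are noticeably faster than `disk`.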
Cloning works well between two NVMe Boot Camp installations, but Windows does not like it if you clone from the original AHCI SSD, or from the NVMe SSD to USB. Use Boot Camp Assistant with your new NVMe SSD instead. My shortcut was Winclone, but that also failed when the destination was a USB drive. The final fix was sysprep; without it, the boot got stuck at a blue screen:
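For reference, the standard generalize-and-shut-down sysprep invocation is below. These are Microsoft's documented flags, not necessarily the exact command line used here, so adjust as needed:

```
:: Run from an elevated prompt inside the Windows installation to be cloned
%WINDIR%\System32\Sysprep\sysprep.exe /generalize /oobe /shutdown
```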
With NVMeFix (https://github.com/acidanthera/NVMeFix) I saw a reduction in power consumption: not quite down to the original Apple SSD readings, but a very welcome improvement. NVMe SSDs are a whole other story, though; let's focus on the eGPU now.
A TB2 disk is possible, but it consumes more PCIe resources due to the extra bridges and shows up inconsistently in the startup manager. For these reasons, I recommend an external USB disk for Boot Camp. I chose a Samsung Portable SSD T5, attached to the right-hand USB 3 port (up to 5 Gbps).
1) Identify your external disk.
2) sudo mkdir /Volumes/EFI
3) sudo mount -t msdos /dev/disk2s1 /Volumes/EFI (where disk2s1 is your EFI disk identifier)
4) Download EFI Dev Kit: https://sourceforge.net/projects/efidevkit/files/latest/download
5) Create startup.nsh EFI Shell script
Rows 4-6 are only necessary if your eGPU occupies the TB2 port (top left) and you are using a Radeon VII, because it requires twice as much non-prefetchable memory in 32-bit space as an RX 580 does. The eGPU enclosures tested via TB2 were the Netstor HL23T and the Asus XG Station Pro. You might need to disable and re-enable the PCI Express Root Port as shown in the image below:
Uncomment those rows (remove the # character) in that case. The R43SG does not need rows 4-6, so you can use the script as shown. The rest of the commands activate the iGPU and completely switch off the dGPU, making it possible to install the latest official AMD drivers.
6) Your folder structure in Finder should look as follows:
where bootx64.efi is the renamed Shell_Full.efi file from EFI Dev Kit's /Edk/Other/Maintained/Application/UefiShell/bin/x64/ folder.
apple_set_os.efi from here:
and Microsoft's bootx64.efi renamed as bootx64-original.efi
7) Not sure if this step is necessary, but disable SIP and run:
8) Shut down your Mac, then turn it on with the Option key held down, and select "EFI Boot".
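For orientation, the overall shape of startup.nsh is roughly the following. This is a skeleton only: the mm rows are machine-specific, so placeholders stand in for the actual values, which you must derive for your own machine:

```
# startup.nsh -- skeleton, not literal values
apple_set_os.efi        # identify as Mac OS X so the firmware keeps the iGPU alive
#mm ... ;PCI            # rows 4-6: root port toggle / wider memory window,
#mm ... ;PCI            # uncomment only for a Radeon VII behind the TB2 port
#mm ... ;PCI
mm ... ;IO              # remaining rows: gmux writes that switch the displays
mm ... ;IO              # to the iGPU and power off the dGPU
bootx64-original.efi    # finally chain-load the real Windows boot loader
```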
Benchmarks (RX 580)
| | M.2 external | TB2 external | difference | M.2 internal | TB2 internal | difference |
| --- | --- | --- | --- | --- | --- | --- |
| Forza Horizon 4 (fps) | 94.9 | 49.5 | 47.4% | 90.6 | 37 | 59.2% |
| 3DMark 11 Extr. (score) | 4769 | 4612 | 3.3% | 4776 | 4575 | 4.2% |
Benchmarks (Radeon VII)
| | M.2 external | TB2 external | difference | M.2 internal | TB2 internal | difference |
| --- | --- | --- | --- | --- | --- | --- |
| Forza Horizon 4 (fps) | 129.3 | 70.3 | 45.6% | 117.8 | 55.6 | 52.8% |
| 3DMark 11 Extr. (score) | 9609 | 9338 | 2.8% | 9654 | 8848 | 8.3% |
An RX 580 on the M.2 interface beats a Radeon VII over TB2 in Forza Horizon 4. Amazing. For some reason, Time Spy produced an error when I tried to use the internal screen. Nowadays, setting the GPU preference is easy in Windows 10:
On the macOS side, you don't need any workarounds with the ADT-Link R43SG. The eGPU works as if it were a built-in GPU.
@goalque, thank you for this first M.2 eGPU implementation on a MacBook. Excellent results on a 32Gbps M.2 port compared to the 16Gbps TB2 port.
A) 2020 MacBook Pro, i7-1068NG7, 32GB RAM, 1TB; eGPU: Razer Core X with Gigabyte OC 3080 10GB; display: Samsung 49" 1440p ultrawide C49RG
macOS Catalina 10.15.7; internal Boot Camp Windows 10, latest update (previously W10 2004 with the pci.sys swap)
B) MBP 13" 2018 TB3, 2.7 GHz i7 (4 cores), 16GB, 1TB; eGPU: Razer Core X with Nitro+ RX 5700 XT 8GB; display: LG 32UK550
macOS Catalina 10.15.2; external SSD with Windows 10 1903 v1 .295
I don't know, but it's very possible; I don't have an nMP. It depends on the card. Some old cards, such as the Quadro NVS 295, have low base address register requirements and should be plug and play in Windows Boot Camp. Modern cards aren't, and over TB2 you have to resize the Thunderbolt bridge windows to get around error code 12.
The R43SG attaches itself directly to a PCI Express Root Port and has a higher chance of working without workarounds.
@tsakal In terms of hardware, the SSD slot in the 2013 Mac Pro is routed through the Graphics B card and runs at PCIe 2.0 x4 to the PCH. The Thunderbolt 2 ports have direct CPU access and run at PCIe 2.0 x4. Another physical challenge is a magnet placed near the power button on the outer shell, without which the computer won't turn on.
In terms of software, I think it's possible and would be nice to have a solid solution to error 12. This of course depends on the connected eGPU.
Here's the script for iGPU-only 2015 15" MBP if you are going to try a TB3 enclosure & Apple TB2 to TB3 adapter:
It should, as long as the PCIe root port of the eGPU is located at the same address; be aware that the address might differ.
It doesn't matter if you boot up from an internal SSD or external USB. The internal SSD slot can be empty.
However, do not try to boot with multiple Boot Camp installations present (for example, another one on the internal SSD); always make sure that Apple's startup manager shows a single "EFI Boot".
Use an Amfeltec Angelshark carrier board to connect two GPUs and keep the SSD.
@joevt I think the PCIe drive slot in the nMP is not as good as the Thunderbolt 2 ports. It provides no more bandwidth and goes through the PCH, whereas the TB2 ports have direct access to the CPU through an x8 PCIe 3.0 switch.
But the PCIe slot won't have the latency of Thunderbolt. You've shown gaming benchmarks before where PCIe 3.0 x1 can beat Thunderbolt 3.
In this case we are comparing PCIe 2.0 x4 and Thunderbolt 2 which should have a greater difference since PCIe 2.0 x4 is twice as fast as PCIe 3.0 x1, and Thunderbolt 2 is slower than Thunderbolt 3.
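The encoding arithmetic behind that comparison, as a quick sanity check (my own numbers, not from a spec sheet):

```shell
# Effective per-direction data rates after encoding overhead
awk 'BEGIN { printf "PCIe 2.0 x4: %.2f Gbps\n", 4 * 5 * 8 / 10 }'     # 8b/10b
awk 'BEGIN { printf "PCIe 3.0 x1: %.2f Gbps\n", 1 * 8 * 128 / 130 }'  # 128b/130b
```

16 divided by 7.88 is roughly 2, matching the "twice as fast" claim above.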
I don't think going through the PCH will reduce the PCIe 2.0 x4 bandwidth of the SSD slot as much as going through Thunderbolt 2 does (also limited to PCIe 2.0 x4). Even though the PCH is shared with WiFi, SSD, SMC, HD Audio, and Ethernet, those are not all used at the same time.
@joevt I agree that Thunderbolt encoding/decoding overhead would make the Thunderbolt 2 connection slower than PCIe 2.0 through the PCH. Using an Amfeltec Angelshark carrier board to host the stock SSD plus one or two eGPUs would be quite a bit more convoluted than my test of a dGPU running at PCIe 3.0 x1 straight to the CPU.