2017 15" Alienware 15 R3 (GTX1070) [7th,4C,H] + GTX 1080 Ti @ 32Gbps-TB3 (ASUS XG Station Pro) + Win10 1803 [itsage] // TB3 vs Alienware Graphics Amplifier
I have tested the performance difference between the AGA and a TB3 eGPU several times, and without a doubt the Graphics Amplifier is better. This build is for those who want to maximize GPU expansion with an Alienware laptop. The discrete Nvidia card can work alongside two Nvidia eGPUs, one through the AGA port and another through the TB3 port.
2017 Alienware 15 R3 - i7-7700HQ/GTX 1070 dGPU/HD Graphics 630 iGPU/32GB RAM/512GB NVMe SSD + 1TB HDD
Both the AGA port and the Thunderbolt 3 port have a 4-lane PCIe connection directly to the CPU. By default, connecting the Alienware Graphics Amplifier requires a reboot to get the AGA eGPU going, and it also disables the dGPU. In order to keep the GTX 1070 dGPU enabled, I uninstalled the AGA software in Windows and ran DDU to remove all graphics drivers. Once this was done, I connected everything prior to boot, then manually installed the latest Nvidia graphics drivers.
I also ran some gaming benchmarks. The external monitor was a Samsung CHG90 running at 3840 x 1080 @ 100Hz (limited by the HDMI cable connection).
| Alienware 15 R3 | 1070 dGPU | 1080 Ti TB3 eGPU | 2080 Ti AGA eGPU |
| --- | --- | --- | --- |
| 3DMark Time Spy | 5,744 | 8,487 | 13,873 |
| 3DMark Fire Strike | 17,719 | 21,216 | 32,746 |
| Tomb Raider 2013 | 90.5 FPS | 129.0 FPS | 191.3 FPS |
| Dirt Rally | 73.1 FPS | 80.4 FPS | 87.4 FPS |
| Shadow of Mordor | 54.4 FPS | 72.6 FPS | 108.4 FPS |
| Hitman | 46.9 FPS | 62.9 FPS | 65.7 FPS |
| Strange Brigade | 55.0 FPS | 84.0 FPS | 118.0 FPS |
| The Division 2 | 41.0 FPS | 44.0 FPS | 72.0 FPS |
| Assassin's Creed Odyssey | 36.0 FPS | 32.0 FPS | 54.0 FPS |
| Ghost Recon | 33.2 FPS | 30.6 FPS | 53.8 FPS |
| Forza Horizon 4 | 77.0 FPS | 61.0 FPS | 108.0 FPS |
This build would make an excellent setup for computing and machine learning tasks. It can get loud when everything gets going at full speed. While the Thunderbolt 3 connection starts showing its limits, the Alienware Graphics Amplifier performs similarly to an internal x4 PCIe slot connection.
@itsage Looking at the last three results, it seems like a bandwidth limit, given that the laptop 1070 does better than a 1080 Ti over TB3? If not, what do you think accounts for the difference between these two GPUs in those games? Could you also add 2080 Ti over TB3 results to the table?
@mac_editor I think the 22Gbps bandwidth might not be the issue but rather the Thunderbolt 3 controller speed. Certain games stress the end-to-end encoding and decoding performed by the TB3 controllers. I ran the RTX 2080 Ti and Radeon VII recently to compare dGPU vs eGPU. You'd see very similar limits in games such as Forza Horizon 4, Assassin's Creed Odyssey, and Ghost Recon.
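To put the bandwidth figures mentioned in this thread in perspective, here is a quick arithmetic sketch. It assumes the ~22Gbps usable payload commonly measured for TB3 eGPU enclosures out of the 32Gbps the link reserves for PCIe traffic, and derives the x4 PCIe 3.0 figure from the 128b/130b line encoding:

```python
# Rough bandwidth comparison: TB3 eGPU link vs. an internal x4 PCIe 3.0 slot.
# Assumptions: ~22 Gbps usable PCIe payload over TB3 (commonly measured),
# out of 32 Gbps reserved for PCIe traffic on the link.

PCIE3_GT_PER_LANE = 8.0      # GT/s per PCIe 3.0 lane
ENCODING = 128 / 130         # 128b/130b line-encoding efficiency
LANES = 4

pcie3_x4_gbps = PCIE3_GT_PER_LANE * LANES * ENCODING  # ~31.5 Gbps
tb3_usable_gbps = 22.0                                # assumed usable payload

print(f"Internal x4 PCIe 3.0: {pcie3_x4_gbps:.1f} Gbps")
print(f"TB3 usable payload:   {tb3_usable_gbps:.1f} Gbps "
      f"({tb3_usable_gbps / pcie3_x4_gbps:.0%} of an internal x4 slot)")
```

So even before any controller overhead, a TB3 eGPU sees roughly 70% of the bandwidth an AGA-style direct x4 slot provides, which is consistent with the gaps in the tables.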
This is completely different to your results, where the RTX 2080 Ti wins handily.
How can this be explained? Could the difference come down to the brand and model of the card?
Ah, it's not TB3, it's the AGA; that explains it, of course.
| AW15R3 + RTX2080 | AGA FHD | TB3 FHD | AGA QHD | TB3 QHD | AGA 4K | TB3 4K | AGA 5K | TB3 5K |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 3DMark Time Spy | 15,846 | 13,564 | 10,545 | 9,416 | 5,167 | 4,879 | 3,029 | 2,946 |
| 3DMark Fire Strike | 25,428 | 20,452 | 18,342 | 13,675 | 9,898 | 7,694 | 5,129 | 4,785 |
| Tomb Raider 2013 | 228.5 FPS | 187.7 FPS | 152.7 FPS | 135.1 FPS | 77.2 FPS | 74.7 FPS | 44.2 FPS | 43.9 FPS |
| Shadow of Mordor | 161.6 FPS | 127.3 FPS | 121.7 FPS | 99.4 FPS | 71.0 FPS | 62.6 FPS | 45.7 FPS | 42.7 FPS |
| Dirt Rally | 100.3 FPS | 119.25 FPS | 99.3 FPS | 106.0 FPS | 73.9 FPS | 74.9 FPS | 52.5 FPS | 53.3 FPS |
| Hitman | 86.3 FPS | 81.7 FPS | 77.1 FPS | 68.5 FPS | 38.6 FPS | 35.8 FPS | 21.5 FPS | 21.1 FPS |
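The resolution scaling in the table above can be summarized numerically. This sketch uses the 3DMark Time Spy row (values copied from the table) to show how the AGA advantage over TB3 shrinks as resolution rises and the workload becomes GPU-bound rather than link-bound:

```python
# AGA vs TB3 lead per resolution, using the 3DMark Time Spy row above.
time_spy = {
    "FHD": (15846, 13564),
    "QHD": (10545, 9416),
    "4K":  (5167, 4879),
    "5K":  (3029, 2946),
}

for res, (aga, tb3) in time_spy.items():
    lead_pct = (aga - tb3) / tb3 * 100
    print(f"{res}: AGA leads TB3 by {lead_pct:.0f}%")
```

At FHD the AGA lead is around 17%, falling to roughly 3% at 5K, which supports the controller/bandwidth-bottleneck reading: once the GPU itself is the limit, the interface barely matters.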
@itsage: Great table as always.
Question: Did you merge my two posts before or is this done automatically?
I merged them manually. We asked the forum software developer for an auto-merge feature, but it's not available yet.
@itsage thank you for the insight. Okay, so TB3 encode/decode is the bottleneck for those games. But what influences this encode/decode? In the three games in question, why is the encode/decode process slower than in other games? If we can understand exactly which properties affect performance (specific game settings, textures, etc.), it would be an important step forward in understanding TB3 and its limits.
Such differences make it hard to compare correctly.
Another example is the comparison with the Radeon VII: my GTX 1080 Ti was 10% better than the Radeon VII on average, but of course the result would be different with another GTX 1080 Ti sample that is 10% slower...