Best Laptops for eGPU - June 2020 Thunderbolt 3 Laptop Buyer's Guide
I am still searching for a laptop for my new setup, and there are lots of questions I still have and things I would like to understand better than I currently do. Thanks to anyone who can clear some things up for me (sorry, this will be a long post):
Just a small note before I begin: @4chip4, you mentioned the HP Spectre 13t above as a replacement for the x360. From everything I could find, the Spectre 13t costs the same (in Europe!), has a smaller battery, a screen that tilts back less, and seemingly more chassis flex; the trade-off is a few mm less thickness. Worth it? For most, I'd guess not. I really like the idea of putting the ports at the back and getting a thinner laptop that way. Sadly, doing that, plus relatively thick top and bottom bezels, makes the Spectre 13 quite large, notably deeper than the Dell XPS 13 at only 2 mm less thickness, with lower performance (28 W CPU in the XPS 13), weaker thermals, a smaller battery, lower build quality, and the same price tag (again, I only know the EU, and HP seems to add more of a premium than other companies) 🙁 Kind of sad, as I like the idea 🙁
Thanks @itsage. You say the dGPU model does; does that imply the other one has 4 lanes? To my knowledge that one also has only 2 lanes ;/ (From everything I read and saw in benchmarks, it seems that unless you go into loopback mode, the loss on an x2 link is only about 5% higher than the loss on an x4 link. Is this correct?)
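To put rough numbers on x2 vs x4 (my own back-of-the-envelope arithmetic, not figures from any of the linked tests): PCIe 3.0 runs each lane at 8 GT/s with 128b/130b encoding, so raw link bandwidth scales linearly with lane count, while the in-game loss clearly does not.

```python
# Rough PCIe 3.0 bandwidth per lane count (encoding overhead only;
# real links lose a further chunk to protocol/TLP overhead).
GT_PER_S = 8.0          # PCIe 3.0 raw rate per lane (gigatransfers/s)
ENCODING = 128 / 130    # 128b/130b line-encoding efficiency

def pcie3_bandwidth_gbps(lanes: int) -> float:
    """Approximate one-direction bandwidth in GB/s for a PCIe 3.0 link."""
    return lanes * GT_PER_S * ENCODING / 8  # /8: gigabits -> gigabytes

x2 = pcie3_bandwidth_gbps(2)  # typical "nerfed" TB3 laptop wiring
x4 = pcie3_bandwidth_gbps(4)  # full TB3 implementation
print(f"x2: {x2:.2f} GB/s, x4: {x4:.2f} GB/s")
```

Note that Thunderbolt 3 itself reportedly caps PCIe tunneling at around 22 Gb/s (~2.75 GB/s) regardless, so even an "x4" TB3 port never delivers true desktop x4 numbers; that is presumably part of why the x2 vs x4 gap in games is much smaller than the 2:1 raw-bandwidth ratio suggests.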
Tied to this question: I still have very little reliable information on the overall performance difference between a laptop setup and a desktop.
Here https://egpu.io/forums/mac-setup/pcie-slot-dgpu-vs-thunderbolt-3-egpu-internal-display-test/ it is suggested that the overall loss is about 20% at 1080p. The Hardware Unboxed video, on the other hand, paints a very different picture, with loss rates from 20% ("best case scenario") to over 50%. Could someone also address the mentioned latency issue and give some details on this problem?
Sadly, the former tested only with lower-end GPUs, while the latter (the video) doesn't really go into the spec details of the setups used.
itsage suggested in the mentioned egpu.io post that "benchmarks for 1080p show 15% performance drop, 1440p 8% performance drop and 4k 5% performance drop with the same CPU in the same Desktop PC".
Again, these tests were done with GPUs that are not really high-end (a 980 Ti); does this matter, or does it hold true for any GPU?
In this discussion https://egpu.io/forums/which-gear-should-i-buy/i7-7700hq-vs-i7-8550u/#post-38998 a comment from @4chip4 suggests (to me at least) that this lower performance drop at higher resolutions exists solely because of lower absolute frame rates: the CPU bottleneck limits the GPU and gives diminishing returns above that level. That would imply that if high frame rates are attempted, the drop would be higher no matter the resolution; since bottlenecking is the issue here, the CPU should be the main factor. Either way, can I expect around 20%? Is this a correct conclusion, or am I missing something?
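One way to make that point concrete (my own sketch with invented numbers, not data from the linked thread): model the TB3 penalty not as a fixed percentage but as a fixed chunk of extra time per frame (transfer/copy-back overhead). The same overhead then eats a bigger share of each frame at high frame rates than at low ones, which is exactly the "smaller drop at lower absolute FPS" pattern, regardless of resolution.

```python
def egpu_fps(desktop_fps: float, per_frame_overhead_ms: float) -> float:
    """Model the TB3 loss as a fixed per-frame transfer overhead added to frame time."""
    frame_time_ms = 1000.0 / desktop_fps
    return 1000.0 / (frame_time_ms + per_frame_overhead_ms)

def relative_drop(desktop_fps: float, overhead_ms: float) -> float:
    """Fractional performance drop vs. the desktop baseline."""
    return 1 - egpu_fps(desktop_fps, overhead_ms) / desktop_fps

# Hypothetical 2 ms of per-frame TB3/copy-back overhead:
for fps in (200, 120, 60):  # think 1080p / 1440p / 4K frame rates
    print(fps, f"{relative_drop(fps, 2.0):.0%}")
```

With these made-up numbers the drop shrinks from ~29% at 200 FPS to ~11% at 60 FPS, from the very same overhead, so chasing high frame rates at any resolution would expose the full penalty.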
I noticed that the benchmarks posted in the PCIe vs TB3 post vary quite drastically, and more importantly there seems to be a pattern: while most tests are in the low-loss range, two titles stand out, Tom Clancy's Ghost Recon and Shadow of Mordor, consistently showing the highest loss. Both of those titles are heavily CPU-limited, and Ghost Recon in particular is notorious for allowing zero play in levelling out CPU/GPU usage. What confuses me is that those tests were done using comparable CPUs, right? So can anyone explain why this is?
So let's talk about CPUs:
itsage mentioned in the original post that it may be worth waiting for the H CPUs. Now that we have a number of good laptops with them and have had a bit of time, are there any conclusions or results someone can share about using one with an eGPU? Is the 8750H actually better than the 8550U? (This discussion also lives here: https://egpu.io/forums/which-gear-should-i-buy/i7-7700hq-vs-i7-8550u/#post-38998 ; I just thought the buyer's guide may be a better place to discuss or note whether these CPUs actually do offer an advantage.)
@4chip4 mentioned (talking mainly about VR) in this discussion about CPUs that "going from 7500U to 8550U was definitely a big step up" and "7700HQ and while I did no benchmarks, the subjective performance felt largely on-par with the 8550U". This signals quite a big leap with the i7-8550U to me, an unusually big leap actually, considering the increase in cores and clock speed compared to the other chips; more on this later.
You would assume that a better CPU leads to better performance, but a number of reports I saw do not really support this. For example, here https://www.ultrabookreview.com/20435-xps-15-9570-review-live/ we can see that the new XPS 13 outperforms the XPS 15 (slightly in total, but very clearly in graphics! The physics score (CPU-based??) seems to barely balance it out), even though the latter has the far better CPU. Any suggestion as to why this may be?
Seemingly, manufacturers don't think an 8750H chip for eGPU use is beneficial or worth the effort either, as every single laptop with an 8750H on the market also has a dGPU (isn't that odd? Is it really that niche?). You could build more portable devices without the dGPU and even manage the crazy heat of the 45 W series far better, with two fans and full-length heatpipes to both sides of the CPU, so why isn't this done?
Earlier last year, reports of Intel planning to integrate TB3 into their upcoming CPUs were all over the place ( https://www.tomshardware.com/news/intel-integrates-thunderbolt-to-cpu,34501.html ), and the previously mentioned article states the following: "my current theory is that where the XPS 13 9370's TB3 signal feeds directly to the CPU, the XPS 15 9570 is using an additional daughterboard (Alpine Ridge) to provide TB3, which may induce some performance degradation in comparison." What are your thoughts on this? Why can I not find any reports of such TB3 integration online, while many users are seeing this kind of result in testing?
Edit: I found this today: "2Q19: Comet Lake-U (4 core, 14++nm) + ICL-PCH (14nm, TB3 integration)" ( https://www.reddit.com/r/hardware/comments/8q8987/intel_cpu_roadmap_8core_coffee_lake_refresh_in/ )
Here is another post strongly suggesting something is going on with the 8th-gen CPUs: https://www.reddit.com/r/Huawei/comments/8niq84/matebook_x_pro_i7_3d_mark_scores_razer_core/
The MateBook is a 2-lane PCIe laptop and also uses the weaker CPU. The performance drop is quite drastic overall, and the score with a 1080 Ti (Fire Strike 12500) is similar to what we saw in the article on the XPS 13 using a 1070 eGPU (12175), so I really don't know what is going on there!?
But it gets crazier: this system still outperforms the Razer, which has a 7th-gen HQ processor (equivalent to or better than the 8550U), and the Razer certainly also has 4-lane PCIe versus the 2 lanes of the MateBook X Pro.
My explanation for this is that the 8th-gen U chip has some kind of optimization or integration for TB3 and is for that reason far superior (and also that you shouldn't run 1080 Tis over x2 TB3 :D). As for the negative Hardware Unboxed video on eGPUs, I assume it is related to the poor results he got, which are so far from what others, especially itsage, report.
Now lastly: ten months ago the 8550U was released; when do you think we will see 9th-gen U chips? Whiskey Lake is the name I heard rumored, along with a double-digit improvement. Are there any other large improvements you see coming in the near future? I found nothing about TB4 yet, and no mention of the integration from Intel in over a year. The 8750H seems, for some reason, not to be superior to the 8550U for eGPUs, and from what I read about higher resolutions it only depends on the frame rate, which means running an 1180 on an 8550U would lead to similar performance drops at 1440p to what we see at 1080p with current 1080 (Ti)s.
As my goal is UWQHD @ 110+ FPS, that means there is no way I could do that on current tech, and I will have to wait for, hopefully, a good 9th-gen U chip that is able to give better support to a GTX 1180.
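To sanity-check my own target (my arithmetic with assumed numbers, not benchmarks): if the eGPU penalty at high frame rates is a roughly constant fraction, the card would need to hit target/(1 - loss) FPS in a desktop for the eGPU setup to still reach the target.

```python
def required_desktop_fps(target_fps: float, tb3_loss: float) -> float:
    """Desktop FPS a GPU must reach so an eGPU still hits the target,
    assuming a simple fractional TB3 penalty (hypothetical model)."""
    return target_fps / (1 - tb3_loss)

# Assumed 20% loss at high frame rates; 110 FPS UWQHD target:
print(round(required_desktop_fps(110, 0.20), 1))
```

So with a 20% penalty the GPU would need to manage about 137-138 FPS at UWQHD in a desktop, which is why I don't see current hardware getting there.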
Please let me know what you think about the points I made and the questions I asked, and please correct me if I made mistakes (which I likely did, as I am not an expert on any of this, just an AI student with some interest in hardware).
Thanks in advance!
There are two more that might need consideration.
1. Samsung Notebook 9 15.
You might think this would not be a contender, but a Windows Central reviewer stated it has 4-lane Thunderbolt 3 here:
Even though it has an MX150 discrete GPU. I did a lot of investigating to try to get some clarification about this, and I found this nugget in the Notebookcheck review ( https://www.notebookcheck.net/Samsung-Notebook-9-NP900X5T-i7-8550U-GeForce-MX150-Laptop-Review.287284.0.html ). You would think that with an MX150 dGPU it would have a 4-lane implementation, since that's the physical wiring; but on inspection of this review it looks like the MX150 is configured as 2-lane, possibly lending credence to the idea that they saved the 4 lanes for the TB3 port. Here is proof that the MX150 is configured in an x2 PCIe fashion:
There is some solid hope here that this svelte 2.8 lb notebook has a proper TB3 implementation as stated. I noticed that the 1080 in my current notebook doesn't cut the lane width in power saving, only the data rate, so hopefully this MX150 really is hard-wired to 2 lanes. Also, since it has a dGPU in there, there's a pretty beefy shared cooling solution, as the GPU and CPU are on the same heatpipe, fins, and fans. So if only the CPU is being stressed, it has plenty of cooling to keep the clocks up. A repaste would probably help it maintain some pretty impressive boost clocks while only the CPU is stressed.
2. LG Gram 15 (this was previously mentioned in the thread).
It is proven that the TB3 on the more expensive model with an i7-8550U is 4-lane. The problem with this unit is that the cooling is extremely poor, which can hamper its use for gaming with an eGPU, as seen here:
There is hope, however, as one user did a TIM reapplication to theirs and it no longer throttled. Seen here:
No idea what TIM was used; probably Grizzly Kryonaut. If this truly fixes the throttling, it might make the LG Gram 15 a real contender.
I'm leaning towards the Samsung, as it would be kind of nice to have the MX150 on the go, as long as we can definitively prove it's using 4 lanes. But it's really nice to see that these two sub-3 lb ~15" ultrabooks are possible contenders.
Looking at the connections on a ThinkPad X1 Tablet 3rd Gen: the root port looks like a proper 8 GT/s, as does the NVMe, but the card (a GTX 750 in an Aorus 1070 Box) looks like it's connected at 2.5 GT/s.
Could the TB3 on these tablets be a bit nerfed? Or does the current card/box just not need the full speed?
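One thing worth noting (general PCIe behavior, not specific to this tablet): links commonly downtrain to 2.5 GT/s at idle to save power and renegotiate back up under load, so a 2.5 GT/s reading taken at the desktop isn't conclusive on its own. For scale, here is the usable per-lane throughput at each link speed (my quick sketch; 2.5 and 5 GT/s use 8b/10b encoding, 8 GT/s uses 128b/130b):

```python
# Per-lane usable bandwidth for common PCIe link speeds (encoding overhead only).
ENCODINGS = {
    2.5: 8 / 10,     # PCIe 1.x: 8b/10b encoding
    5.0: 8 / 10,     # PCIe 2.x: 8b/10b encoding
    8.0: 128 / 130,  # PCIe 3.x: 128b/130b encoding
}

def per_lane_mbps(gts: float) -> float:
    """Usable MB/s per lane at a given raw link rate in GT/s."""
    return gts * ENCODINGS[gts] * 1000 / 8  # Gb/s -> MB/s

for speed in (2.5, 5.0, 8.0):
    print(f"{speed} GT/s -> {per_lane_mbps(speed):.0f} MB/s per lane")
```

So a link stuck at 2.5 GT/s would cap each lane at roughly a quarter of the PCIe 3.0 rate; the real test is whether the reported speed climbs to 8 GT/s while the eGPU is under load.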
Lenovo Thinkpad X1 Tablet 3rd Gen and 2070 RTX Aorus Gaming Box (Custom)
Web development, Video Editing, 2D + 3D animation