2013 14" Lenovo ThinkPad L440 + [email protected] (EXP GDC 8.4d) + Win10 [knon]
Update 20-June: a new EC cable resolved the reliability issues; the set-up can now run a Gen2 link.
Since I am new here, I want to start by mentioning that this forum has helped tremendously with some of the problems that have occurred with my set-up.
I have a Palit GeForce GTX 1050 and an EXP GDC Beast v8.4d over ExpressCard. I'm not sure about the specifics of the PSU, as I bought the eGPU set-up as a complete solution: it is part of a small case that houses the entire thing.
First of all I should mention that I installed the 1050 in my desktop computer (an AMD-based system with Windows 7 64-bit) on which it ran fine with the latest nVidia drivers (378.92). So that makes me think that there is nothing wrong with the card itself.
I have the following two laptops available:
Lenovo ThinkPad L440:
- Intel Core i5-4330M (with HD Graphics 4600)
- 8 GB RAM
- No dedicated GPU
- Windows 7 (64-bit) (I've also tried Windows 8.1 and 10, but this is my "target" OS.)
Lenovo ThinkPad T60:
- Intel Core Duo processor (not sure exactly which one right now)
- 2 GB RAM
- Dedicated GPU: 64 MB ATI Mobility Radeon X1300 (or the X1400; not sure, but I think it doesn't matter)
- Windows 7 (32-bit)
I will try to explain my experiences more or less chronologically, although I'm sure I won't remember everything that I have done in the past 20-or-so days.
First, connecting the eGPU to the L440 before turning the machine on resulted in a black screen and the laptop not booting. Waiting for the laptop to boot past the BIOS and then connecting the eGPU had one of the following effects:
- The eGPU would not get recognized at all; nothing seemed to happen.
- The laptop would get weird lag spikes: processing would hang for a couple of seconds (disabling even the mouse and keyboard), do something for a second, hang for a couple of seconds again, and so on. This would not stop until I unplugged the eGPU; then the computer would function completely normally again. The task manager showed that the CPUs were not busy at all, and I noticed the exact same behavior when booting Ubuntu, which indicates some low-level hardware issue.
After switching the PWR switch to ON on the eGPU adapter, the lag spikes stopped when connecting the ExpressCard to the laptop, and the GPU was recognized as a VGA device.
Then I tried to install the latest nVidia drivers (378.92) and got the Code 12 error. After applying the DSDT override (first without Setup 1.x, later with it), I got the Large Memory region, and the Code 12 was replaced with a Code 43. I confirmed that the laptop supports hot-plugging on the PCI-E port, but the Code 43 persisted.
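For anyone retracing these steps, the Device Manager codes can also be checked from the command line by querying `Win32_PnPEntity`. This is only a sketch under my own assumptions (Windows with the legacy `wmic` tool on PATH; the helper names are mine, not from any tool mentioned in this thread):

```python
# Sketch: flag PnP devices reporting a non-zero Device Manager error code.
# Code 12 = not enough resources (what the DSDT override addresses),
# Code 43 = the driver reported a failure. Windows-only at the wmic call;
# the parsing helper itself is plain, portable Python.
import subprocess

WMIC_CMD = [
    "wmic", "path", "Win32_PnPEntity",
    "get", "Name,ConfigManagerErrorCode", "/format:csv",
]

def parse_wmic_csv(text):
    """Return (error_code, device_name) pairs for devices whose
    ConfigManagerErrorCode is non-zero, from wmic /format:csv output."""
    rows = [line.strip() for line in text.splitlines() if line.strip()]
    header = rows[0].split(",")
    code_idx = header.index("ConfigManagerErrorCode")
    name_idx = header.index("Name")
    flagged = []
    for row in rows[1:]:
        cols = row.split(",")
        if len(cols) <= max(code_idx, name_idx):
            continue
        try:
            code = int(cols[code_idx])
        except ValueError:
            continue
        if code != 0:
            flagged.append((code, cols[name_idx]))
    return flagged

def scan_devices():
    """Run wmic (Windows only) and print every device with a problem code."""
    output = subprocess.run(WMIC_CMD, capture_output=True, text=True).stdout
    for code, name in parse_wmic_csv(output):
        print(f"Code {code}: {name}")
```

In the state described above, `scan_devices()` would list the GTX 1050 with code 43; after a successful driver install the card should drop off the list entirely.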
It turns out that the only driver that does not produce a Code 43 is the very first one that supports this card (375.63). I tried all the other available drivers, and it is always a Code 43.
With those drivers installed the card gets detected, the nVidia settings show the spinning 3D logo, and Optimus also works. GPU-Z showed sane values for all the current indicators. However, when starting a 3D application that demands a little more, I got severe stability problems. I tried StarCraft II and Helldivers, and both crashed in one way or another (BSOD, the same lag spikes as mentioned before, or Windows reporting that the graphics driver had crashed). Running 3DMark 11 resulted in either a BSOD or a white/black/grey screen with everything hanging.
It wasn't until I changed the PCI Express version in the BIOS from 2.0 to 1.0 that I got some form of stability: StarCraft II and Helldivers start successfully and run without any problem, as long as the graphics settings aren't pushed too high. However, 3DMark still crashed as before. At this point I also started trying the newer 3DMark (Fire Strike), which exhibited the same behavior.
After that I tried using the PSU from my desktop computer (which had previously run the GTX 1050 briefly without problems) to power the dock, and the solution finally started running with more stability. 3DMark Fire Strike runs all the way through without crashing, and I also started some of the other tests. After concluding these tests, I switched to the PSU from an old desktop PC that I still had in my basement, and the solution has been working well ever since. The only crash I have experienced since then was when I put StarCraft II on very high settings, and even then the crash only happened after more than an hour of playing.
So right now I have the following open questions:
- Has anyone been able to get the GTX 1050 to work with this kind of set-up and more current drivers? I read some of the release notes for the drivers released since then, and they mention BSOD fixes and other driver-crash fixes for certain set-ups, specifically with the 1050 Ti.
- Any other ideas to try to get PCI Express v2.0 working? I already tried a dedicated power outlet and wrapping the ExpressCard cable in aluminium, with no change.
I forgot to mention that the Lenovo T60 basically had the same problems, both with the driver version limitation and the 3DMark crashes.
- Be sure to power the Ti correctly like you did at the end: if you use a normal power supply, use the 4-pin CPU connector to feed the GDC, and the dedicated PCI-E 6-pin from the power supply for the card if needed. The 1050 does not draw a lot, so if you have a decent brand-name power supply it should be OK.
- It seems the Pascal cards have some trouble getting installed: you need, like you said, hot-plug enabled in the BIOS, and then the driver trick specified in the thread that can be found in this section. Basically: install a driver, cancel the forced reboot, and straight away install another version of the driver. Like this it should work and not be the source of your instability. Unfortunately it works only with a few specific drivers, not all of them, if I got it right.
- If you can't get a decent 2.0 link with either laptop, I would suspect the GDC itself more than the cable. If you wrap the cable in aluminium, you need to connect a small piece of it to a GND, i.e. the metal part of the HDMI connector or any screw/metal piece of the laptop, otherwise the shielding is not effective. The current induced in the aluminium shield by electromagnetic fields needs somewhere to flow; if you don't ground the shield, there is nowhere for the current to go, so it propagates the noise instead of removing it, if my memory serves. PCI-E 2.0 should usually work fine on the GDC v8.4d, as long as the cable/GDC are not defective and you don't live under high-voltage power lines.
- Please try plugging the HDMI connector only partway into the GDC, not all the way in. I got my GDC recognized booting along with the laptop by covering the GDC 3.3 V pin, coupled with a ThinkPad T530i. The shortest pin on the GDC HDMI connector is the 3.3 V one, so plugging it in just a bit should exclude the 3.3 V and maybe let the card/GDC/EC boot with the laptop.
- You could mail the guy who sold it and ask for an mPCI-e connector instead of the EC one; maybe you would get a stable 2.0 link with it. But I would definitely send a mail to your re-seller saying that the issue is present on different laptops, so it is not the laptop's PCB/traces.
- Try disabling, in the BIOS, all the other devices connected to your chipset via PCI-E, and check whether the 2.0 link is stable after that.
- Unmount your GDC; there is a third switch on it called slim line. Switch it to turbo instead of normal, and check if the link is stable.
- Check your CPU/chipset temperatures; if your chipset is overheating, that could explain the difficulties in getting a decent link.
- Clean your EC port, and check whether the connector is dirty when you plug it in and out.
The difference between PCI-E 1.0 and 2.0 is mainly the link speed: a 2.0 link needs a decent cable/adapter to support the 5 GT/s signalling rate instead of the 2.5 GT/s of a 1.0 link.
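To put numbers on that (my own back-of-the-envelope figures, not from this thread): both generations use 8b/10b line coding, so only 8 of every 10 bits on the wire are payload, and doubling the signalling rate doubles the usable bandwidth.

```python
# Effective payload bandwidth of a single PCI-E lane.
# Gen1 signals at 2.5 GT/s, Gen2 at 5 GT/s; both use 8b/10b line
# coding, so 20% of the raw bit rate is coding overhead.
def lane_bandwidth_mb_s(gigatransfers_per_s):
    raw_bits_per_s = gigatransfers_per_s * 1e9    # one bit per transfer per lane
    payload_bits_per_s = raw_bits_per_s * 8 / 10  # strip 8b/10b overhead
    return payload_bits_per_s / 8 / 1e6           # bits -> bytes -> MB/s

print(lane_bandwidth_mb_s(2.5))  # Gen1 x1: 250.0 MB/s
print(lane_bandwidth_mb_s(5.0))  # Gen2 x1: 500.0 MB/s
```

So falling back from Gen2 to Gen1 halves the x1 link to roughly 250 MB/s, which fits the experience above: games still run, but there is bandwidth left on the table.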
- To be clear: I have a 1050, not a 1050 Ti. The 1050 does not need a dedicated power connector. Other than that, I am connecting the PSU the only way possible (via the 20-pin and 4-pin connectors).
- I tried installing the 375.63 and then a newer version, and even before the reboot I can see the Code 43 as soon as the driver is installed. After a reboot the Code 43 persists. I also tried installing the notebook drivers (instead of the desktop ones, which I had been trying before), but I think the desktop and notebook driver packages are currently identical (apart from their name). It really appears as if nVidia made some change at some point that invalidates this set-up. Everything other than 375.63 is a dead end. I suppose I'm lucky to still have "a" version that works with this set-up. =)
- The T60 does not support PCIe 2.0, I believe; at least it has no switch for it in the BIOS, and it doesn't have the BSOD and crashing issues that I have on the L440. I suppose it could also be the cable, although I somewhat doubt it, since it functions consistently in 1.1 mode. I will try the grounding plus aluminium-wrap combination; I suppose I could also ground the aluminium on the screws of the PSU case.
- I will try this.
- The mpci-e is not really a solution for me, since this is my main working laptop and I would like to keep it mobile. However, if the above attempts don't show promising results, I will contact my re-seller with all this information.
- Already tried disabling the other devices. This has not really fixed anything in the past.
- Are you sure there is such a switch? I didn't see it and none of the reference documents seem to mention it. There is the PWR switch and the delay switch. I have the PWR to ON and delay turned off.
- From my sensor monitoring I have never seen anything get excessively warm. This includes both the CPU and GPU.
- I'm going to assume that it is not the connector on the laptop, since the same behavior occurs on both laptops. The EC card itself seems clean to me.
- It is OK to connect it like that. The only thing is to be sure to have a decent brand-name power supply with at least 15 A on the 12 V rail, even if the card has low power consumption. It is the stability of the 12 V rail that should be assessed; so if you had trouble with the first power supply, I can only suppose it was a cheap no-name 500 W unit with less than 15 A on 12 V.
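As a sanity check on that 15 A figure (my own arithmetic; the ~75 W board power is the published GTX 1050 spec, everything else here is assumption):

```python
# Headroom left on the 12 V rail after powering the card through the dock.
# A GTX 1050 has a board power of about 75 W, drawn entirely from 12 V
# here, since the card has no external PCI-E power connector.
def rail_headroom_w(rail_amps, card_draw_w=75, rail_volts=12):
    """Watts of margin on the 12 V rail after the card's draw."""
    return rail_volts * rail_amps - card_draw_w

print(rail_headroom_w(15))  # suggested 15 A minimum: 105 W of margin
print(rail_headroom_w(20))  # the desktop PSU in this thread: 165 W of margin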
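As a sanity check on that 15 A figure (my own arithmetic; the ~75 W board power is the published GTX 1050 spec, everything else here is assumption):

```python
# Headroom left on the 12 V rail after powering the card through the dock.
# A GTX 1050 has a board power of about 75 W, drawn entirely from 12 V
# here, since the card has no external PCI-E power connector.
def rail_headroom_w(rail_amps, card_draw_w=75, rail_volts=12):
    """Watts of margin on the 12 V rail after the card's draw."""
    return rail_volts * rail_amps - card_draw_w

print(rail_headroom_w(15))  # suggested 15 A minimum: 105 W of margin
print(rail_headroom_w(20))  # the desktop PSU in this thread: 165 W of margin
```

The nominal margin is generous either way; the point of the 15 A advice is really rail quality, since a cheap unit can sag under transient load even when its label amperage looks sufficient.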
- For the driver issues, ask in the right thread that can be found in this section, because unfortunately I can't help you further on that; I have an AMD card in my set-up.
- Yes, any ground is OK, just be sure it really is a ground! Like you said, the external power supply case/screws are grounded, same on your laptop. Do you live in a place with a lot of EM noise? High-voltage stuff, a room full of electronics, etc.? As I explained about the difference between 2.0 and 1.0, there is not a lot you can do. Again, with a good cable, GDC, laptop wiring and GPU, the 2.0 link should be stable.
- I understand you, but unscrewing a couple of screws under the laptop to swap with your wifi card should not be a problem if you don't need the eGPU on the fly and it stays on a desk.
- Obviously yes; please refer to my thread or implementation guide. I have unmounted the whole GDC many times.
1. Indeed, the PSU is a bit of a crappy one. But then I tried connecting the PSU from my desktop (which had no problem with the GTX 1050 when I tried the card out) to the eGPU set-up. The PSU itself specifies 20 A on 12 V. Even though it ran a little more stably, the video driver still crashed the moment the GPU needed to do something (even without any 3D rendering) at Generation 2.
7. I found the switch, switched it, and tried with both the old and the new PSU; nothing runs Gen2 stable.
At this point I am just going to accept the situation and run with the Gen1 limitation for now, especially since it is enough for my current purposes. Also, I remember seeing a chart somewhere (on one of the sites that sells the GDC) that puts this generation of GPUs in the "too fast" area. Whatever that means...
Thank you for all the help and may the next "me" find this topic enlightening. =)
Well, at this point I would suspect first the EC cable and connector.
For example, on the Dell E6320 I'm playing with at the moment, the EC port is connected to the motherboard via a riser; that's not optimal for link speeds, since you get two connections in this case (the eGPU's EC connector, then the motherboard's), which could kill the Gen 2.0 signal stability.
So I would suggest you order an mPCI-e connector, less than $20, and try with it; it is not bad to have a spare one anyway.
Apart from the cable, I see the GDC as a less likely culprit.
The cable seemed to be getting more and more unstable, resulting in BSODs when touching it and sometimes even when not touching it at all, all at PCI Express v1.1. So I ordered a new PCI Express cable for the dock, and from the first tests I have done, it seems to work wonders. I was even able to run the set-up at PCI Express v2.0; both 3DMark 11 and StarCraft II at extreme settings worked without crashing the system (which is a first). Considering that this was actually one of the hottest days of the year, I am extra surprised by this result, as I expected the extra heat to have a negative impact. But that is also the reason I turned it off after it effectively turned the room into a sauna, so I don't have very "stable" results yet.
If I get more instability in the future, I will write another post here, but for now my set-up seems to be working fine.