Ubuntu 19.04 - Easy-to-use setup script for your EGPU
I have created a script that automatically detects your (e)GPUs and creates the needed X server configuration files.
You won't have to mess around with finding the correct bus IDs and converting them from hex to decimal or anything like that; the script takes care of it.
Just execute the setup command and choose which GPU is the internal one and which is the external one.
After that, your computer will automatically detect on startup whether your eGPU is connected or not, and decide which X server configuration to use.
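The boot-time switch can be sketched roughly like this. This is a simplified illustration, not the actual script; the function name `pick_config` and the example bus ID are made up:

```shell
#!/bin/sh
# Simplified sketch of the boot-time detection: read lspci-style output
# on stdin and print which xorg.conf variant should be used. The bus ID
# format ("0a:00.0") and the function name are illustrative only.
pick_config() {
    egpu_bus="$1"
    if grep -q "^$egpu_bus "; then
        echo "xorg.conf.egpu"
    else
        echo "xorg.conf.internal"
    fi
}

# Hypothetical usage at boot: lspci | pick_config "0a:00.0"
```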
For further information, please refer to the README in my GitHub repository.
You'll also find all the source code there.
Connect your EGPU to your computer and make sure the Thunderbolt connection is authorized. Then execute the following commands.
$ sudo add-apt-repository ppa:hertg/egpu-switcher
$ sudo apt update
$ sudo apt install egpu-switcher
$ sudo egpu-switcher setup
No more steps needed, your computer will automatically select the correct X-Server configuration on startup.
I am using this script with my Lenovo Thinkpad X1 Extreme (Hybrid graphics with a Nvidia GTX 1050 Ti). My EGPU is a GTX 1080 in a Mantiz MZ-02 VENUS enclosure.
This was only tested in Ubuntu 19.04, but it might work in other distros / versions too.
As I have no AMD GPU, this was only tested with Nvidia, but in theory AMD GPUs could work too. It would be great if someone could test that and report back. 🙂 Update: @itsage successfully tested it, and it does work with AMD GPUs as well. There seem to be issues with 5K+ displays, but those are unlikely to be directly related to the script.
Why yet another script
I was initially quite overwhelmed by the steps I had to take to make my eGPU work with Ubuntu, as I had no knowledge of X server or why I needed to tamper with it. I created this project mainly to learn more about X server, GPUs on Linux, and how to publish packages for Ubuntu.
I am by no means an expert, and there certainly are some bugs, but I've tried my best and hope that someone may find it useful.
If this script doesn't work for you
Please let me know or feel free to create a pull request.
Also, the whole setup process can be reverted by executing egpu-switcher cleanup, or by removing the package completely with apt remove --purge egpu-switcher. Either way, your previous xorg.conf file will even be restored, if you had one.
Please also refer to these other great projects if mine doesn't work for you:
Thanks, looking forward to it!
I hope that "amdgpu" is the correct driver name to use in the xorg.conf, because that's what I currently write into it if the GPU has "AMD" in its name.
Super-happy to see eGPU on Linux get easier and easier. Wish I had a bare metal Linux machine to experiment + a bunch of time.
@hertg I have great news to report. It worked first try on my Alienware 15 R3! First test was with the Razer Core + WX 9100. I connected a 5K monitor HP Z27q but the drivers couldn’t combine the two DisplayPort streams to produce 5K resolutions. Even a single DisplayPort connection didn’t work well because instead of going to 4K, it was a vertical half of 5K.
I then tried another eGPU setup [Razer Core X Chroma + Radeon VII]. It worked too, and the connected monitor this time was a Samsung 49″. It’s a single DisplayPort connection and I got full resolution, 3840 x 1080 at 144Hz. I fired up Steam Play and got Age of Empires II running nicely on the external monitor.
There’s a strange cursor jump on the internal display when the AMD eGPU is connected. As you may have seen in the first photo, the system couldn’t detect the Intel iGPU because the internal display is directly attached to the GTX 1070 dGPU. Perhaps I need to install a different set of Nvidia drivers?
@itsage Great to hear that it (somewhat) works with AMD GPUs.
That's a very strange behaviour indeed. I never tested it with a 4k+ monitor, but it works for me with the monitors below.
Unfortunately, I don't think I can do much about that issue in my script, since all it does is create an xorg.conf file with the following contents:
Section "Module"
    Load "modesetting"
EndSection

Section "Device"
    Identifier "Device0"
    Driver "<your-driver>"
    BusID "<your-bus-id>"
    Option "AllowEmptyInitialConfiguration"
    Option "AllowExternalGpus" "True"
EndSection
For the integrated graphics: did you disable it in the BIOS? That is what I had to do in order to install Ubuntu 19.04; I re-enabled it later, after the installation was complete. If you connected your internal display directly to the dedicated GPU, I think you should still see the integrated graphics, but I might be wrong about that (?).
It's also kind of strange, why your Wireless Network Adapter shows up in the list, would you mind posting the output of the following command: lspci | grep -Ei "3d|vga"
Also try executing the lspci command without grep, and see if your integrated graphics shows up in this list.
Which nvidia drivers do you have installed currently? I am on nvidia-418.
I will post the results of my system below, maybe it helps you in finding a possible issue.
sudo apt list --installed *nvidia*
lspci | grep -Ei "3d|vga"
I made the switch from using a desktop to using a notebook + eGPU, so the GTX 1080 is the one I previously had in my desktop.
Yes, as far as I know, AMD publishes their drivers as open source, so they can be better integrated into Linux. I'm still hoping Nvidia might do the same someday.
@hertg Thank you for the advice. I was on nvidia-390. The AMD eGPU is running Mesa 19.0.2 and Nvidia dGPU is running 418 now.
I was also able to replicate this success on a 2019 Razer Blade Stealth. Intel iGPU, MX150 dGPU, and Radeon VII eGPU all showed up. There’s some “invalid number” message after I set the GPU preferences but things seem to work fine.
Did using the Nvidia 418 driver resolve any of your issues?
Thanks for the feedback, I think there's a bug in the part of my script that detects whether the eGPU is connected (it probably doesn't understand the bus ID "PCI:10:0:0").
I will try to fix that soon, when I've got time. Until then, the automatic detection probably won't work on your system.
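For context, this is likely where the double-digit confusion comes from: lspci prints bus numbers in hexadecimal ("0a:00.0"), while xorg.conf expects decimal ("PCI:10:0:0"). A minimal conversion sketch (the helper name is made up, not taken from egpu-switcher):

```shell
#!/bin/sh
# Convert an lspci-style slot ("0a:00.0", hex) into the decimal
# "PCI:bus:device:function" form that xorg.conf expects.
# Illustrative helper, not the actual egpu-switcher code.
to_xorg_busid() {
    slot="$1"                # e.g. "0a:00.0"
    bus=${slot%%:*}          # "0a"
    rest=${slot#*:}          # "00.0"
    dev=${rest%%.*}          # "00"
    fn=${rest#*.}            # "0"
    # printf %d accepts C-style 0x-prefixed hex constants
    printf 'PCI:%d:%d:%d\n' "0x$bus" "0x$dev" "0x$fn"
}

# to_xorg_busid "0a:00.0"  →  PCI:10:0:0
```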
I added this thread to the Top Nav Menu just now.
This is great! Thank you.
Perhaps, if we make some changes, I would suggest altering this guide a little. To me it looks a bit too complicated and by now somewhat outdated. This may be just my opinion; perhaps somebody else shares it too? @mac_editor?
I just released version 0.9.0, you can update with the following command.
sudo apt update
sudo apt --only-upgrade install egpu-switcher
I fixed the issue where eGPUs on double-digit PCI buses weren't detected properly.
Also added a little 5s delay to the detection on startup. I realized that the script sometimes ran before the eGPU could even connect to the computer; I hope that doesn't happen anymore.
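If the fixed delay ever proves too short, a polling loop is one alternative. This is only a sketch; the function name and the injectable probe parameter are invented for illustration (the probe makes the logic testable without real hardware):

```shell
#!/bin/sh
# Poll for the eGPU instead of sleeping a fixed 5 seconds: retry once
# per second, up to $2 seconds. $3 is the command used to list PCI
# devices (defaults to lspci); it is a parameter purely so the logic
# can be exercised without an actual eGPU attached.
wait_for_egpu() {
    bus="$1"
    max="${2:-5}"
    probe="${3:-lspci}"
    i=0
    while [ "$i" -lt "$max" ]; do
        if "$probe" | grep -q "^$bus "; then
            return 0        # eGPU showed up
        fi
        sleep 1
        i=$((i + 1))
    done
    return 1                # gave up; boot with the internal config
}
```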
This is really fantastic, thank you!
I've taken some of the other scripts that have been posted here and modified them with some extra features. One thing that I found really useful was affecting the /usr/share/vulkan/icd.d files (used by vulkan applications).
My problem is that I have a Dell XPS 9575. This system has an Intel iGPU, a dedicated AMD GPU (Vega M) and then when I have my eGPU enabled (Nvidia RTX 2070) I have the following files in that directory:
And many vulkan applications (think games running via DXVK) don't allow you to manually select which vulkan icd to use. Lutris now has this feature, but for those of you that want to guarantee that a specific GPU is used, the best approach is to move the other files to another temporary directory. In my case I move them to a directory called /usr/share/vulkan/icd.d/egpu-helper-backup
If I don't do this, most applications tend to pick the AMD GPU (which is discrete, but certainly not as fast as my eGPU).
I've integrated this into this script: https://gitlab.com/rstrube/linux_mercury/blob/master/supporting/egpu-helper/egpu-helper
You can see that if the user passes in 'egpu' as a parameter, in addition to doing the xorg symlinking, it will also move any Intel and AMD icd files to a backup directory (and move any nvidia icd files from the backup directory back to the main icd.d directory).
Perhaps you could integrate this into your script? I think the complicated part is that the files you move will vary based on an individual's setup. For me, I move the Intel and AMD icd files when running my eGPU, but for others this might not be the case. In your case you have two nvidia GPUs (the laptop's discrete one and the eGPU), so you would need to keep the icd file for nvidia and just move the Intel one (I believe, although I'm not sure?).
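For what it's worth, the move described above could be sketched like this. The intel/radeon/nvidia filename patterns are assumptions about a typical icd.d layout; the real egpu-helper script is linked above:

```shell
#!/bin/sh
# Sketch of the ICD shuffle: hide the iGPU/dGPU Vulkan ICD files so
# applications can only pick the eGPU, and restore any previously
# hidden nvidia ICD. The directory is a parameter for testability;
# in practice it would be /usr/share/vulkan/icd.d.
enable_egpu_icds() {
    icd_dir="$1"
    backup_dir="$icd_dir/egpu-helper-backup"
    mkdir -p "$backup_dir"
    # Hide Intel/AMD ICDs (filename patterns are assumptions).
    for f in "$icd_dir"/intel_icd*.json "$icd_dir"/radeon_icd*.json; do
        if [ -e "$f" ]; then mv "$f" "$backup_dir/"; fi
    done
    # Bring back any nvidia ICD that was hidden earlier.
    for f in "$backup_dir"/nvidia_icd*.json; do
        if [ -e "$f" ]; then mv "$f" "$icd_dir/"; fi
    done
}
```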
I'd also be happy to try to come up with a nice solution and open a PR.
Let me know your thoughts!
Edit: One other thing I was curious about: when the eGPU is not connected, you're specifically using an xorg.conf.internal file that would use (for example) the "intel" driver, and modesetting for the other GPUs (if you picked the Intel iGPU as your INTERNAL GPU). I believe it's now recommended to just use modesetting for Intel iGPUs. See this excerpt from the Arch wiki:
Note: Some distributions (Debian & Ubuntu, Fedora, KDE) recommend not installing the Intel driver, and instead falling back on the modesetting driver for fourth-generation and newer GPUs (see Xorg#Installation). However, the modesetting driver can cause problems such as Chromium Issue 370022, and it does not benefit from Intel GuC/HuC/DMC firmware.
Perhaps the best course of action would just be to not have any xorg.conf in the event that the user does not have their eGPU connected and let Xorg figure things out?
That sounds really interesting, although I do not have any experience with Vulkan whatsoever.
I would be happy to integrate this, but I need to know whether this is a very specific setup you have, or if there are other people with the same kind of setup and use case.
It's important to me that the script stays as generic as possible and, most importantly, as simple as possible.
That being said, if we can define how the script detects whether the user has a Vulkan setup (maybe by asking during the setup process),
and define a generic naming convention for the files (maybe by adding the bus ID to the filename?), it might really be a useful extension.
Is the setup you have, with the *.json files for your different GPUs, a standard type of setup, or did you come up with it yourself?
Does Vulkan automatically select a json file from /usr/share/vulkan/icd.d/, no matter what its name is?
Regarding the modesetting config, good point.
As you may have seen, I'm using the same xorg.conf template for the external and the internal GPU, and just replace the driver and bus-id.
Did I understand correctly that for Intel integrated graphics it is recommended not to specify a driver at all, and to just use the modesetting driver instead?
Your suggestion to not create an xorg.conf.internal file could be a good idea.
But there may be people who want to add custom xorg configurations when running on the internal GPU, so we would still need some kind of xorg.conf.internal.
Perhaps we could leave the xorg.conf.internal file empty, if the user selects the Intel integrated graphics as internal GPU.
What are your thoughts about that?
You seem to be well informed about the topic, so thanks a lot for the input!
How do you get the internal display to work in addition to the external displays connected to the eGPU? Right now I can get either the internal display working with the discrete GPU, or the external ones working with the eGPU, but not all three.
Thanks for the report, I have noticed this issue recently as well.
In my case, though, the internal screen sometimes works and sometimes doesn't...
I've tried to narrow down the cause of the problem over the last two hours, but without any luck.
While tracking it down, I realized that on the login screen all my monitors worked, and after login the internal one stopped working.
I also somehow managed to invert the problem, so that the external monitors stopped working after login instead of the internal one.
My current workaround is that i switched from using gdm3 to using lightdm.
It seems that all my monitors are working right now, on login and after login.
(see: https://askubuntu.com/a/1049669/952594 )
Would be great if anyone with more experience in this topic could have a short look at it.
I'm sorry I couldn't be of more help.
Most display managers run one session for both the login screen and the desktop, but I think gdm3 is unique in that it uses two different sessions, one for the login screen and one for the desktop, with both running at the same time when you're logged in. Combined with gdm3 using Wayland by default(?), I think this makes things difficult when using scripts like this to change the X server, so I'd recommend sticking with a DM like lightdm to make things easiest.
To get all my monitors to work I usually just have to mess around with xrandr until everything shows up. Since you guys have Nvidia dGPUs, it might take more trickery to get those working. If you do find an xrandr configuration that works and want it to load automatically, check out autorandr.
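As a starting point for the xrandr fiddling, the usual approach for showing eGPU-rendered output on another GPU's display is "reverse PRIME". The provider names below ("modesetting", "NVIDIA-0") vary per system, so treat them as placeholders and check the provider list first:

```shell
# List render/output providers; the names differ from system to system.
xrandr --listproviders

# Reverse PRIME: let the iGPU's outputs (e.g. the internal panel)
# display the image rendered by the eGPU. Provider names are examples.
xrandr --setprovideroutputsource modesetting NVIDIA-0
xrandr --auto
```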
Question: I used this on my Lenovo Yoga 730 13". It works great when an external display is also connected; however, when I am just using the internal screen, mouse clicks and typing lag horribly. Plug the external display in and the lag disappears immediately. Any troubleshooting advice?
I don't know if you are having the exact same issues, but maybe the following can help you.
I've had similar issues with unbearable input lag and sudden crashes of the USB ports from my Mantiz Venus.
My mouse would disconnect after a few seconds and my keyboard would sometimes press the sammmmmmmmmmmmme key for several seconds.
The same symptoms were described in the Arch Linux wiki on the Dell TB16 article.
Going into the BIOS and changing Thunderbolt Security from "User Authorization" to "No Security" fixed that problem for me.
Now my peripherals are all working properly.
If you handle very sensitive data on your computer, this may not be an option.
For more information on the possible vulnerability see http://thunderclap.io/
But to be better protected against that, you would have had to enable at least "Secure Connect" and lock your UEFI/BIOS with a strong password in the first place.
I am using a Dell XPS with a 1070 Ti eGPU, connected via a Thunderbolt 3 port and a PCI Express 3.0 graphics card dock.
Will this even work or do I need to purchase a new gpu docking station?
Hey, just wanted to say thanks for making this script, it works great!
I can confirm that it works with an Alienware m15 and the Alienware Graphics Amplifier. The only issue I've had: sometimes after using it as a laptop and then going back to the desktop, I have had to run the setup again, as if it had forgotten the settings. But once I re-run the setup, it works again!
Thanks again for this!! You rock.
Great to hear that it works for your setup!
Does it still display the same PCI bus for your eGPU when you have to re-run the setup?
I plan on refactoring the way the configuration gets saved; it is a somewhat dirty solution right now.
But I'm pretty busy over the next few weeks, so this will take some time.
I'm not posting from my original account since the website is giving errors over and over on log in (no idea why).
I've tested a couple of the main guides on how to set up the eGPU, but so far yours was the only one that provided a very clean install experience and very smooth usage without any glitches (I had a few with the other setup methods from the forum).
System: Manjaro Linux, Xfce
i7-8550U, 8GB RAM, Mantiz Venus with a Sapphire Nitro AMD RX 580 4GB
I have a couple of questions regarding the setup; could you please help me understand a few things?
1. I'm on the Manjaro distro with the Xfce desktop, and once I boot (without the eGPU) the RAM usage is around 600MB. When booted with the eGPU, the RAM usage is around 950MB. Is it normal to have an increase of 300-350MB after boot without launching anything? (In the task manager there's nothing pointing to the usage, though.)
2. For the first time with the eGPU I managed to install and play quite well the only game I've played in recent years, Heroes of the Storm. While playing, the eGPU makes the usual fan noise, nothing overwhelming, but the laptop CPU (i7-8550U with active cooling, one fan) must be working super intensively, since its fan is always at maximum (no dust accumulated, relatively recent laptop). I expected that most of the workload would be done on the eGPU side and CPU usage would decrease, right? I changed all video settings to low in the game options (by default everything was on ultra, but the gameplay was kind of mediocre).
3. After playing for a while, or in very intensive moments, the gameplay image stutters a bit, even with all settings at their lowest. Is the GPU too weak for this kind of task? How can I test whether the GPU is taking on all the workload it should, and not offloading too much to the CPU or the integrated Intel GPU?
Thank you so much for publishing your setup guide and files. It was super easy to install and get working. For the first time I managed to play this game and have smooth eGPU usage for regular use. Was it all bash/shell programming? Please let me pay you a few coffees as a token of gratitude!!
I'm not posting from my original account since the website is giving errors over and over on log in (no idea why).
Maybe try to delete cache and cookies for the eGPU.io website. I've had to do this a couple times in the past.
Safari -> Preferences -> Privacy -> Manage Website Data -> egpu.io