Enclosure & Hardware Discussions
AMD Navi & PCIe 4.0 eGPU?
 


Phoenix2063
(@phoenix2063)
Eminent Member
Joined: 1 year ago
 

Thought I'd start a thread ahead of the June 10th unveiling of AMD Navi at E3, which also brings PCIe 4.0 support. From the Computex demo we should be looking at RTX 2070 performance, perhaps 10% faster in AMD-tailored games. That demo was on a PCIe 4.0 motherboard, though, and I'm unsure how (or whether) PCIe 4.0 would factor into eGPUs at all?

Navi is likely to launch alongside the new Ryzen chips on 7th July.

In addition to this summer's Navi launch, AMD's roadmap has 'Navi plus' arriving early next year. Given that the Computex demo targeted the RTX 2070, those will likely be the higher-end models going after the 2080 and maybe even the 2080 Ti.



Defoler
(@defoler)
Eminent Member
Joined: 7 months ago
 

Let's be honest.
Even PCIe 3.0 in a current eGPU system isn't being fully utilized.
Our current problem is not PCIe but TB3 bandwidth.
USB 4, which will replace TB3, is set to be 40Gb/s, just like TB3. So bandwidth-wise, it isn't going to let us utilize PCIe 4.0 either. And since PCIe 4.0 will be backward compatible, we should be able to put PCIe 4.0 cards in current PCIe 3.0 enclosures.
Whether we will get new enclosures with PCIe 4.0 depends on whether we even need them. PCIe 4.0 vs PCIe 3.0 should give you exactly the same results, as you are limited by the connector.



OliverB
(@oliverb)
Noble Member
Joined: 10 months ago
 

@defoler
(you are not @defiler, right? We had this confusion already)
40 Gb/s is supposed to be enough if the application/game is well programmed. I don't see the point of USB 4 if it's the same bandwidth. What exactly is the gain?

2018 15" MBP & 2015 13" MBP connected to RTX2080Ti GTX1080Ti GTX1080 Vega56 RX580 R9-290 GTX680


Defiler
(@defiler)
Eminent Member
Joined: 6 months ago
 
Posted by: OliverB

@defoler
(you are not @defiler, right? We had this confusion already)
40 Gb/s is supposed to be enough if the application/game is well programmed. I don't see the point of USB 4 if it's the same bandwidth. What exactly is the gain?

No, he's not, but I keep getting notifications that I'm being mentioned in posts. 🙂

Dell Precision 5530 + EVGA GeForce GTX 1080 Ti FTW3 HYBRID crammed in an HP OMEN Accelerator


Defoler
(@defoler)
Eminent Member
Joined: 7 months ago
 
Posted by: OliverB

@defoler
(you are not @defiler, right? We had this confusion already)
40 Gb/s is supposed to be enough if the application/game is well programmed. I don't see the point of USB 4 if it's the same bandwidth. What exactly is the gain?

No, not the same person 🙂 It's been asked several times by now.

And look at the proposed USB 4.0 specs.

USB 4.0 is basically an update to USB 3.2 with the now royalty-free TB3 folded into the mix, making TB3 and USB 3.2 one more coherent standard. After all, 3.0, 3.1, 3.2 is way too confusing for 99% of people, so moving to USB 4.0 is just making sense of the standards.

USB 3.2 is limited to 20Gb/s (driven from TB3), and so USB 4.0 is going to be an upgraded version of that, allowing 40Gb/s with certified cables (just like you need today for TB3 to reach 40Gb/s). Also note the small b.

Anyway, PCIe 3.0 is limited to 15.7GB/s at full x16 lanes, and PCIe 4.0 is going to double that.
But note the B, not b. Meaning PCIe 3.0 x16 is roughly 125.6Gb/s (while TB3 is just 5GB/s), which is already about 3 times TB3. So PCIe 4.0 x16 will be about 6 times faster than USB 4.0 or TB3.
Trying to push PCIe 3.0's full data rate through TB3/USB 4.0 is like trying to cram an elephant into a 2x2m shop. Trying to do that with PCIe 4.0 is like cramming a T-rex into the same space and expecting not to be eaten in the process.
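To make the bit/byte arithmetic concrete, here is a minimal back-of-the-envelope sketch (plain Python, using the rounded numbers above):

```python
# Rough link comparison, using the rounded figures quoted above.
pcie3_x16_gbps = 15.7 * 8           # 15.7 GB/s -> ~125.6 Gb/s
pcie4_x16_gbps = pcie3_x16_gbps * 2
tb3_gbps = 40                       # TB3 / USB 4.0 headline rate

print(f"PCIe 3.0 x16: {pcie3_x16_gbps:.1f} Gb/s "
      f"({pcie3_x16_gbps / tb3_gbps:.1f}x TB3)")
print(f"PCIe 4.0 x16: {pcie4_x16_gbps:.1f} Gb/s "
      f"({pcie4_x16_gbps / tb3_gbps:.1f}x TB3)")
# PCIe 3.0 x16: 125.6 Gb/s (3.1x TB3)
# PCIe 4.0 x16: 251.2 Gb/s (6.3x TB3)
```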

You understand now the difference, and why USB4, PCIe 3.0 and PCIe 4.0 aren't going to matter for us?
We are already at the peak of data transfer over the external PCIe connection. Even if you have the best software running the full 40Gb/s, you are not even tickling PCIe 3.0, let alone PCIe 4.0 (or PCIe 5.0, which is expected not long after).

And in the other direction, PCIe 4.0 is bringing its own special PCIe connector, OCuLink-2, a standard which is an alternative to TB3 and runs at 8GB/s (or 64Gb/s, 1.6x TB3/USB 4.0). Even that isn't nearly enough to saturate PCIe 3.0, let alone the PCIe 4.0 it's built on.

So TL;DR:
No, PCIe 4.0 and USB 4.0 are going to do jack shit for us. A 2080 Ti will still be bottlenecked as an eGPU; even the Radeon VII and 2080 are on that verge.
And the new cards will be just as bottlenecked, meaning we are already near the max we are going to get. If you get a card 2x faster than a 2080 Ti, running it through PCIe 4.0 x4 will not be enough to reach its full performance. Most likely you will hinder it by at least 30%. The faster the card, the more you lose to the bottleneck.

Until we get a faster connector (at least 2x TB3/USB 4.0), within a few years dGPUs inside laptops will simply be faster than any eGPU setup you can use.
Even OCuLink-2 isn't going to cut it.
And let's be honest, I don't see Apple adding a special OCuLink-2 connector to their laptops (maybe a couple of laptop makers at most, on special models), so Apple at least will be limited, and so will most other laptop makers.




Phoenix2063
(@phoenix2063)
Eminent Member
Joined: 1 year ago
 

Third-party Navi prototypes on show at Computex have ditched the blower-style cooler:
https://www.google.com/amp/s/amp.tomshardware.com/news/navi-prototype-cards,39528.html



Phoenix2063
(@phoenix2063)
Eminent Member
Joined: 1 year ago
 

Here we are talking about PCIe 4.0 when PCIe 5.0 has already been announced at 128GB/s transfer speeds, double PCIe 4.0! 😯
https://www.google.com/amp/s/www.techspot.com/amp/news/80310-pcie-50-specification-announced-offer-128gbs-transfer-speeds.html



joevt
(@joevt)
Reputable Member
Joined: 2 years ago
 
Posted by: Defoler
Posted by: OliverB

40 Gb/s is supposed to be enough if the application/game is well programmed. I don't see the point of USB 4 if it's the same bandwidth. What exactly is the gain?

look at the proposed USB 4.0 specs.

USB 4.0 is basically an update to USB 3.2 with the now royalty-free TB3 folded into the mix, making TB3 and USB 3.2 one more coherent standard. After all, 3.0, 3.1, 3.2 is way too confusing for 99% of people, so moving to USB 4.0 is just making sense of the standards.

It's called USB4. They removed the space and the decimal point. Yeah, the only gain I see is that they added TB3. The USB spec is open (freely available), so maybe this will make the TB3 spec open as well (or maybe they'll continue to hide it like they currently do, and like the VESA DisplayPort 1.4 spec etc.).

Posted by: Defoler

USB 3.2 is limited to 20Gb/s (driven from TB3)

USB 3.2 is limited to 10 Gb/s x2 (two lanes, 20 Gb/s total) (driven from USB 3.1). It also adds 5 Gb/s x2 (two lanes, 10 Gb/s total) (driven from USB 3.0). Thunderbolt is actually 10.3125 Gb/s x2 (two lanes, 20.625 Gb/s total) or 20.625 Gb/s x2 (two lanes, 41.25 Gb/s total). They are all two lane, full-duplex protocols (separate lanes for send and receive, four lanes total). All full-duplex two lane protocols require a USB-C connector at both ends because USB-A can only do one full-duplex one lane protocol (USB 3.2 gen 2 x1 max).

USB-C has a DisplayPort alt mode that uses four lanes in one direction (the receive lanes are used as transmit lanes) (USB 2.0 is still available as separate data lines). Another DisplayPort alt mode combines two lanes of DisplayPort and one lane of USB 3.1. VirtualLink is another USB-C alt mode similar to DisplayPort alt mode, except the two USB 2.0 data lines can support USB 3.1 gen 2 (requires a special cable).

The alt modes are separate from the USB spec. Thunderbolt is also an alt mode. But with USB4, is it not an alt mode anymore? Instead, I guess it becomes the main mode and works as Thunderbolt always did (it includes PCIe and DisplayPort packets and allows for custom packets such as Thunderbolt IP networking, Thunderbolt Target Display Mode, and Thunderbolt Target Disk Mode; the last two are currently macOS-only and the protocol is unknown, so you can't create a driver for Windows or Linux). USB4 traffic for USB4 devices can be a new packet type. Older Thunderbolt devices can ignore packets that they don't understand - the packets are not addressed to them. This is similar to network packets you would see from an Ethernet port, except this is a USB4 port.

Posted by: Defoler

and so USB 4.0 is going to be an upgraded version of that, allowing 40Gb/s with certified cables (just like you need today for TB3 to reach 40Gb/s).

USB4 combines USB 3.2 and Thunderbolt. I haven't read anything about what's been added on top of those.

Posted by: Defoler

Anyway, PCIe 3.0 is limited to 15.7GB/s at full x16 lanes, and PCIe 4.0 is going to double that.
But note the B, not b. Meaning PCIe 3.0 x16 is roughly 125.6Gb/s (while TB3 is just 5GB/s), which is already about 3 times TB3. So PCIe 4.0 x16 will be about 6 times faster than USB 4.0 or TB3.

PCIe 3.0 is 126.03 Gb/s (after accounting for the 128b/130b encoding).
Thunderbolt 3 is 40 Gb/s (after accounting for the 64b/66b encoding). Same for USB4, I guess.
USB 3.2x2 is 19.39 Gb/s (after accounting for the 128b/132b encoding).

But the difference between Thunderbolt and PCIe is worse than what you stated because PCIe traffic over Thunderbolt is limited to 22 Gb/s so PCIe 3.0 x16 is 5.73 times greater and PCIe 4.0 x16 is 11.46 times greater. Even PCIe 1.0 x16 is faster than Thunderbolt. Maybe USB4 will allow the entire Thunderbolt bandwidth to be used for PCIe traffic when no other type of traffic exists instead of just 22 Gb/s? That would be nice but it won't affect existing Thunderbolt devices (unless 40 Gb/s was always an option and only a firmware update is required to unlock it).
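These effective rates are just raw line rate times encoding efficiency; a quick sketch of the arithmetic (plain Python, values from the figures above):

```python
# Effective data rate = raw line rate x encoding efficiency.
def effective_gbps(line_rate_gbps: float, payload_bits: int, total_bits: int) -> float:
    return line_rate_gbps * payload_bits / total_bits

links = {
    "PCIe 3.0 x16 (128b/130b)": effective_gbps(8 * 16, 128, 130),
    "Thunderbolt 3 (64b/66b)":  effective_gbps(41.25, 64, 66),
    "USB 3.2 x2 (128b/132b)":   effective_gbps(20, 128, 132),
}
for name, gbps in links.items():
    print(f"{name}: {gbps:.2f} Gb/s")
# PCIe 3.0 x16 (128b/130b): 126.03 Gb/s
# Thunderbolt 3 (64b/66b): 40.00 Gb/s
# USB 3.2 x2 (128b/132b): 19.39 Gb/s
```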

Posted by: Defoler

Trying to push PCIe 3.0's full data rate through TB3/USB 4.0 is like trying to cram an elephant into a 2x2m shop. Trying to do that with PCIe 4.0 is like cramming a T-rex into the same space and expecting not to be eaten in the process.

You understand now the difference, and why USB4, PCIe 3.0 and PCIe 4.0 aren't going to matter for us?
We are already at the peak of data transfer over the external PCIe connection. Even if you have the best software running the full 40Gb/s, you are not even tickling PCIe 3.0, let alone PCIe 4.0 (or PCIe 5.0, which is expected not long after).

And in the other direction, PCIe 4.0 is bringing its own special PCIe connector, OCuLink-2, a standard which is an alternative to TB3 and runs at 8GB/s (or 64Gb/s, 1.6x TB3/USB 4.0). Even that isn't nearly enough to saturate PCIe 3.0, let alone the PCIe 4.0 it's built on.

OCuLink-2 is 63.02 Gb/s (7.88 GB/s), 2.86 times more than 22 Gb/s Thunderbolt or 1.58 times more than 40 Gb/s Thunderbolt (if it ever exists).



OliverB
(@oliverb)
Noble Member
Joined: 10 months ago
 

@defiler and @defoler
you guys really rule! You are the reason this community lives with humor and good information.
Thank you very much, sirs.

2018 15" MBP & 2015 13" MBP connected to RTX2080Ti GTX1080Ti GTX1080 Vega56 RX580 R9-290 GTX680


Defoler
(@defoler)
Eminent Member
Joined: 7 months ago
 

@joevt
A few things.

It's called USB4

Even if it's called USB4, I have a feeling we will get USB 4.1 and 4.2 at some point. They can never make up their minds.

USB4 combines USB 3.2 and Thunderbolt.

That is what I said. They even said so in the drafts and the final spec.

It also adds 5 Gb/s x2 (two lanes, 10 Gb/s total) (driven from USB 3.0).

That is not correct. USB 3.0 does not have 5Gb/s x2. It is still based on and locked, in terms of the standard, to the old Type-A connector, which only allowed 5Gb/s on a single data lane.
USB 3.1 was also locked to the A connector (by locked I mean they were forced to also allow those speeds on the A connector), and because of that 3.1 allowed only 10Gb/s with the compression, and only in gen 2. Gen 1 was locked to 5Gb/s just like 3.0.

USB 3.2 allows two lanes through USB-C and compression, but the use of two lanes is thanks to TB3. The TB3 standard came out in 2016 from Apple and Intel, and the USB-C standard that came after was enabled through support from Intel. So USB 3.2 was driven not by USB 3.1 but by TB3 from Intel and Apple, who wanted USB-C hardware to be more common for Apple.

But the difference between Thunderbolt and PCIe is worse than what you stated because PCIe traffic over Thunderbolt is limited to 22 Gb/s so PCIe 3.0 x16 is 5.73 times greater and PCIe 4.0 x16 is 11.46 times greater. Even PCIe 1.0 x16 is faster than Thunderbolt. Maybe USB4 will allow the entire Thunderbolt bandwidth to be used for PCIe traffic when no other type of traffic exists instead of just 22 Gb/s? That would be nice but it won't affect existing Thunderbolt devices (unless 40 Gb/s was always an option and only a firmware update is required to unlock it

What you are saying there is correct.
TB3 has 40Gb/s if you also account for the USB-C data reserves. It does indeed have 22Gb/s if you split it and allow USB to transfer through it (which some of the cases do). But the standard allows non-split data transfer, which gives a full 40Gb/s connection.
That is part of the difference, for example, between the Razer Core Chroma and the Core X. The X has 40Gb/s hardware (I have seen a couple of YouTubers actually test it). It does not split, and uses the full data lanes, so you are more limited by the data cables. The Chroma splits into data and USB in order to drive the Ethernet and 4x USB connectors separately from the PCIe data stream.



joevt
(@joevt)
Reputable Member
Joined: 2 years ago
 
Posted by: Defoler

Even if it's called USB4, I have a feeling we will get USB 4.1 and 4.2 at some point. They can never make up their minds.

I'm guessing they made up their minds to stop using point releases, so the next version will be USB5, but we'll have to wait and see since I can't find clear confirmation of the reasoning behind the new name (other than to simplify it).

Posted by: Defoler

USB4 combines USB 3.2 and Thunderbolt.

That is what I said. They even said so in the drafts and the final spec.

You said it was an upgrade of 3.2 as if 3.2 was getting new features. I called it a combination of 3.2 and Thunderbolt because the new features are from Thunderbolt and don't affect the 3.2 part.

Posted by: Defoler

It also adds 5 Gb/s x2 (two lanes, 10 Gb/s total) (driven from USB 3.0).

That is not correct. USB 3.0 does not have 5Gb/s x2.

I meant that the 5 Gb/s speed is from USB 3.0. Using dual lane (x2) is new to USB 3.2. I am confused by what you mean when you said "driven from TB3". Maybe you mean that the higher speed (dual lane) modes of USB 3.2 were created to compete with TB3?

Posted by: Defoler

It is still based on and locked, in terms of the standard, to the old Type-A connector, which only allowed 5Gb/s on a single data lane.
USB 3.1 was also locked to the A connector (by locked I mean they were forced to also allow those speeds on the A connector), and because of that 3.1 allowed only 10Gb/s with the compression, and only in gen 2. Gen 1 was locked to 5Gb/s just like 3.0.

What compression? USB doesn't have compression. The USB 3.x Type-A connector adds four pins to the USB 2.x Type-A connector to accommodate a single super-speed bi-directional (full-duplex) data lane. There are two pins for send and two pins for receive (shielded twisted-pair differential signalling). The Type-A connector can support 5Gb/s and 10Gb/s.

Posted by: Defoler

USB 3.2 allows two lanes through USB-C and compression, but the use of two lanes is thanks to TB3. The TB3 standard came out in 2016 from Apple and Intel, and the USB-C standard that came after was enabled through support from Intel. So USB 3.2 was driven not by USB 3.1 but by TB3 from Intel and Apple, who wanted USB-C hardware to be more common for Apple.

When I said USB 3.2 is driven by USB 3.1, I meant that it uses the same 10 Gb/s speed but adds another lane. The USB-C spec was published in August 2014. It had extra pins to support alternate modes such as DisplayPort alternate mode (published September 2014). Thunderbolt 3 alternate mode was announced in 2015. Therefore, the use of two lanes is thanks mostly to USB-C for supporting enough bi-directional data lines for future alternate modes, and partly to Thunderbolt for being an example of a two-lane bi-directional link.

Posted by: Defoler

But the difference between Thunderbolt and PCIe is worse than what you stated because PCIe traffic over Thunderbolt is limited to 22 Gb/s so PCIe 3.0 x16 is 5.73 times greater and PCIe 4.0 x16 is 11.46 times greater. Even PCIe 1.0 x16 is faster than Thunderbolt. Maybe USB4 will allow the entire Thunderbolt bandwidth to be used for PCIe traffic when no other type of traffic exists instead of just 22 Gb/s? That would be nice but it won't affect existing Thunderbolt devices (unless 40 Gb/s was always an option and only a firmware update is required to unlock it).

What you are saying there is correct.
TB3 has 40Gb/s if you also account for the USB-C data reserves. It does indeed have 22Gb/s if you split it and allow USB to transfer through it (which some of the cases do). But the standard allows non-split data transfer, which gives a full 40Gb/s connection.
That is part of the difference, for example, between the Razer Core Chroma and the Core X. The X has 40Gb/s hardware (I have seen a couple of YouTubers actually test it). It does not split, and uses the full data lanes, so you are more limited by the data cables. The Chroma splits into data and USB in order to drive the Ethernet and 4x USB connectors separately from the PCIe data stream.

What's a USB-C data reserve? Do you mean USB 3.1 gen 2 or DisplayPort? I have seen no benchmarks showing greater than 22 Gb/s from a Thunderbolt port. Please post a link to an example showing otherwise. A cable either allows 20 Gb/s or 40 Gb/s - there is no other limitation caused by a cable (except PD). The Chroma uses a second Thunderbolt controller to handle USB and Ethernet, but that controller is connected to the first controller, so they must share bandwidth to the computer.

We can try to get more than 22 Gb/s from a Thunderbolt controller by striping a RAID across two drives (one on each Thunderbolt port). When I tried that, I only got 23 Gb/s. Another option would be to RAID a Thunderbolt drive with a USB 3.1 gen 2 drive, each connected to a different Thunderbolt port of the Thunderbolt controller. The RAID would consist of three partitions on the Thunderbolt drive and one partition on the USB drive because Thunderbolt is over twice as fast as USB.
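Relatedly, if you want to see what PCIe link an eGPU actually negotiated, a Linux host exposes it through sysfs. A minimal sketch; the device address below is hypothetical and will differ per setup (find yours with lspci):

```python
# Read the negotiated PCIe link speed/width of a device from sysfs (Linux).
from pathlib import Path

# Hypothetical PCI address of the eGPU; substitute the one `lspci` reports.
dev = Path("/sys/bus/pci/devices/0000:05:00.0")

speed = (dev / "current_link_speed").read_text().strip()  # e.g. "8.0 GT/s PCIe"
width = (dev / "current_link_width").read_text().strip()  # e.g. "4"

print(f"Negotiated link: {speed} x{width}")
```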



Phoenix2063
(@phoenix2063)
Eminent Member
Joined: 1 year ago
 

So with USB4 out in 2020, any sign of Thunderbolt 4? If so, what's the expected bandwidth?



Defoler
(@defoler)
Eminent Member
Joined: 7 months ago
 
Posted by: joevt

You said it was an upgrade of 3.2 as if 3.2 was getting new features. I called it a combination of 3.2 and Thunderbolt because the new features are from Thunderbolt and don't affect the 3.2 part.

At what exact point did I point to, or even hint at, new features?? In my first post I said the exact thing you are now saying while claiming to contradict me: that USB 3.2 is upgraded with TB3, which is the added lanes.

I meant that the 5 Gb/s speed is from USB 3.0. Using dual lane (x2) is new to USB 3.2. I am confused by what you mean when you said "driven from TB3". Maybe you mean that the higher speed (dual lane) modes of USB 3.2 were created to compete with TB3?

Again, USB-C was developed for Intel/Apple TB3. The dual-lane ability came from that connector along with the TB3 specs. USB 3.1 was developed alongside it, and with the help of the compression (or encoding, because it is effectively data compression), it got the SuperSpeed. 3.2 added the dual lane later on, which was the base of TB3. So USB 3.1 and 3.2 were both spec-driven by TB3.

What compression? USB doesn't have compression. The USB 3.x Type-A connector adds four pins to the USB 2.x Type-A connector to accommodate a single super-speed bi-directional (full-duplex) data lane. There are two pins for send and two pins for receive (shielded twisted-pair differential signalling). The Type-A connector can support 5Gb/s and 10Gb/s.

Encoding (though it is a compression). USB 3.1 allowed different encoding on the same data paths, which allowed 10Gb/s. USB-C added the second lane that 3.2 uses for its dual-lane mode. This was built first specifically for TB3. USB 3.1 was developed alongside USB-C, but they did not add the dual lane to it.

When I said USB 3.2 is driven by USB 3.1, I meant that it uses the same 10 Gb/s speed but adds another lane. The USB-C spec was published in August 2014. It had extra pins to support alternate modes such as DisplayPort alternate mode (published September 2014). Thunderbolt 3 alternate mode was announced in 2015. Therefore, the use of two lanes is thanks mostly to USB-C for supporting enough bi-directional data lines for future alternate modes, and partly to Thunderbolt for being an example of a two-lane bi-directional link.

Using the same speed does not mean it was driven by USB 3.1. That standard was a result of encoding (or compression) on the same data lines. USB 3.2 was driven by TB3 because the extra data lanes added to USB-C were needed for TB3, which was developed alongside USB 3.1 (which had neither alternate modes nor dual lanes) and alongside TB3 (for the 2015 MacBook Pro). Alternate mode was not part of USB 3.1 at all; USB 3.2, on the other hand, has alternate mode because TB3 had it. You are correct on publishing dates, but incorrect on usage, development timing, and purpose.

What's a USB-C data reserve? Do you mean USB 3.1 gen 2 or DisplayPort? I have seen no benchmarks showing greater than 22 Gb/s from a Thunderbolt port. Please post a link to an example showing otherwise. A cable either allows 20 Gb/s or 40 Gb/s - there is no other limitation caused by a cable (except PD). The Chroma uses a second Thunderbolt controller to handle USB and Ethernet, but that controller is connected to the first controller, so they must share bandwidth to the computer.

I would recommend reading the TB3 spec sheet; it's easy to find. TB3 in data-only mode (aka USB mode) is locked to 22Gb/s, yes. But in PCIe 3.0 data mode (not USB mode) it supports more. From the datasheet:

Thunderbolt 3 uses PCIe x4 gen 3 data rate with 128kB header sizes. For a single Thunderbolt chip with two ports, the x4 PCIe interface data rate is shared across the ports. Two links of (4 lane) DisplayPort 1.2 consume 2x (4 x 5.4 Gbps) or 43.2 Gbps.

While the Chroma uses a second controller for USB, it runs on a single TB3 connector, so it splits the TB3 data lanes into 18Gb/s and 22Gb/s, and because of that, unlike the Core X, it limits you.

We can try to get more than 22 Gb/s from a Thunderbolt controller by striping a RAID across two drives (one on each Thunderbolt port). When I tried that, I only got 23 Gb/s. Another option would be to RAID a Thunderbolt drive with a USB 3.1 gen 2 drive, each connected to a different Thunderbolt port of the Thunderbolt controller. The RAID would consist of three partitions on the Thunderbolt drive and one partition on the USB drive because Thunderbolt is over twice as fast as USB.

What you are talking about is trying to run a USB data stream on the TB3 controller, which is the wrong way to do it. To run the full 40Gb/s, you need to connect it to a PCIe hub and put the drive on that. If you just connect data drives directly, they will be locked to 22Gb/s. But that is not the case with an eGPU, since an eGPU runs over PCIe, not just a data stream.



Defoler
(@defoler)
Eminent Member
Joined: 7 months ago
 
Posted by: Phoenix2063

So with USB 4 out in 2020, any sign of Thunderbolt 4? If so, expected bandwidth?

That depends on whether Intel moves to PCIe 4.0 in their next iteration.
If they don't, there will be no TB4. Thunderbolt is based mainly on PCIe speeds and is connected to the CPU through PCIe lanes, so unless those get faster, and since Intel isn't adding more PCIe lanes, it doesn't look good for TB4 coming out next year.

Intel's Ice Lake-SP (servers) is expected to have PCIe 4.0 in 2020, but the mobile versions announced recently do not; they have TB3 only and PCIe 3.0.
Going by Intel's history, that most likely means we won't see the refresh and PCIe 4.0 until at least late 2020.
AMD might push Intel to move their PCIe 4.0 plans up. Hopefully. But even that might not mean TB4. And if Apple decides to shift to USB4 on their next iteration, we might be looking at maybe 2021 for TB4.



wimpzilla
(@wimpzilla)
Honorable Member
Joined: 2 years ago
 

To be honest, I don't think the next iteration of TB has much to do with PCIe speeds.

If I recall correctly, the main challenge for any wired connectivity is the wires themselves.
It is far more difficult to get reliable high-speed communication between two devices linked by wires than it is to just upgrade the TB3 controller die to the new PCIe gen 4 speeds.


2012 13-inch Dell Latitude E6320 + R9 [email protected] (EXP GDC 8.4) + Win10
E=Mc²


Eightarmedpet
(@eightarmedpet)
Noble Member
Joined: 3 years ago
 

So Navi is announced... and @phoenix2063's prediction was pretty accurate.
I'm not overwhelmed: it looks to be a 10% performance increase at a high TDP, no ray tracing (arguably not currently useful anyway), and only about 50 USD cheaper (with Nvidia set to announce some weird update/price drop).

Anyone planning on getting one? I can't imagine Apple including them in their lineup for about a year, so macOS driver support will be an issue, I'd guess?

2017 13" MacBook Pro Touch Bar
GTX1060 + AKiTiO Thunder3 + Win10
GTX1070 + Sonnet Breakaway Box + Win10
GTX1070 + Razer Core V1 + Win10
Vega 56 + Razer Core V1 + macOS + Win10
Vega 56 + Mantiz Venus + macOS + W10

---

LG 5K Ultrafine flickering issue fix


Phoenix2063
(@phoenix2063)
Eminent Member
Joined: 1 year ago
 

Thing to keep in mind is that enabling ray tracing on Nvidia cards cripples frame rates, even at 1080p, which defeats the point of buying a current-generation card over a 10-series. Current-gen cards just can't take advantage of it, so it doesn't seem worth it for gaming imho, especially as we're still struggling to get decent frames at 4K.
As such, a Navi 5700 XT with +10% performance over a 2070 is very relevant. AMD also seem to be looking at software options for ray tracing, so they may get there to a degree down the line.

Best to wait for independent reviews and overclocked third-party cards to land in August/September. I have a FreeSync monitor that's not currently supported by Nvidia, so I would be interested in it.



wimpzilla
(@wimpzilla)
Honorable Member
Joined: 2 years ago
 

Nvidia supports VESA Adaptive-Sync just fine, if I recall properly.
FreeSync is just a brand AMD added on top of VESA Adaptive-Sync, so I would expect your monitor to work fine with Nvidia GPUs.


2012 13-inch Dell Latitude E6320 + R9 [email protected] (EXP GDC 8.4) + Win10
E=Mc²


Phoenix2063
(@phoenix2063)
Eminent Member
Joined: 1 year ago
 

Haven't checked on it in a while; from my understanding there's a limited selection of supported monitors, but it should expand over time. Last I checked my Asus XG32VQ wasn't listed, but it may be by now.

I've been waiting for Navi before dropping money on the eGPU, and the 5700 XT is pretty much on budget for me; being AMD, it will give me fewer headaches than Nvidia with my MacBook. I'm also mulling over building an AMD Ryzen gaming PC next year, depending on how the eGPU setup pans out, so in the long run an AMD card is my preferred choice.

Side note on the AMD roadmap: it specifically stated we'd get 7nm+ Navi cards next year, so I expect that's when they'll drop a Radeon VII replacement. Potentially a dual-GPU card, given the recently revealed Mac Pro card, which I'd assume will help them reach 2080 and even 2080 Ti performance (the Radeon VII does 'compete' with the 2080 in AMD-friendly titles).



wimpzilla
(@wimpzilla)
Honorable Member
Joined: 2 years ago
 

If I recall correctly, the feature is supported by Nvidia GPUs and drivers, since G-Sync is based on the same technology.

Nvidia then launched a campaign to validate top-end Adaptive-Sync monitors beyond the G-Sync branded ones.
But that doesn't mean an unvalidated monitor won't work; it only means the variable refresh rate gaming experience hasn't been validated, if I understood things correctly.


2012 13-inch Dell Latitude E6320 + R9 [email protected] (EXP GDC 8.4) + Win10
E=Mc²


OliverB
(@oliverb)
Noble Member
Joined: 10 months ago
 

I have a FreeSync monitor and have been able to use G-Sync with a GTX 1080 Ti for some time now.

2018 15" MBP & 2015 13" MBP connected to RTX2080Ti GTX1080Ti GTX1080 Vega56 RX580 R9-290 GTX680

