US$349 Mantiz Venus TB3 enclosure/dock
 

nando4
(@nando4)
Noble Member Admin
Joined: 4 years ago
 

Posted by: Mymantiz_John

3. Regarding the performance: it is not related to the XPS or the NVIDIA GPU; it is an Intel firmware issue. For the time being, Intel suggests locking the PCIe bandwidth to 22Gbps. We'll send our device to the lab with the suggested firmware, and after that we will adjust the setting from 22Gbps to 32Gbps. The total bandwidth of TB3 is 40Gbps; we can afford 32Gbps for PCIe + 5Gbps for USB 3.0 + 1Gbps for LAN. The rest can go to SATA.

John, as I understand it, the USB 3.0, LAN and SATA controllers all hang off a TB3 PCIe bridge in the Mantiz enclosure. They then communicate back to the host TB3 controller (attached at x4 3.0, 32Gbps). May I suggest running CUDA-Z tests with your firmware bandwidth parameters to maximize the eGPU PCIe bandwidth.
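
A rough way to run that kind of check yourself (this is just an illustrative host-to-device copy-bandwidth sketch, not CUDA-Z itself; it assumes a CUDA-capable eGPU and the PyTorch package):

import time
import torch

# Pinned host buffer so the copy reflects DMA bandwidth rather than paging.
size_mib = 256
buf = torch.empty(size_mib * 1024 * 1024, dtype=torch.uint8, pin_memory=True)

buf.to("cuda")               # warm-up copy (driver / lazy initialisation)
torch.cuda.synchronize()

reps = 20
start = time.perf_counter()
for _ in range(reps):
    buf.to("cuda")
torch.cuda.synchronize()
elapsed = time.perf_counter() - start

# Bytes copied -> gigabits per second, roughly comparable to CUDA-Z's H2D figure.
gbps = buf.numel() * reps * 8 / elapsed / 1e9
print(f"Host-to-device: ~{gbps:.1f} Gbps")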

This architecture is consistent across these enclosures with additional ports. The sole exception so far is the Asus XG Station 2, which has a USB switch and can send additional port traffic down an alternate USB-B cable pathway when connected, as discussed here. That has a bandwidth-sparing effect on the 32Gbps TB3 PCIe channel.

eGPU Setup 1.35    •    eGPU Port Bandwidth Reference Table

 
2015 15" Dell Precision 7510 (Q M1000M) [6th,4C,H] + GTX 1080 Ti @32Gbps-M2 (ADT-Link R43SG) + Win10 1803 [build link]  


Mymantiz_John
(@mymantiz_john)
Vendor
Joined: 3 years ago
 
Posted by: nando4

 

Posted by: Mymantiz_John

3. Regarding the performance: it is not related to the XPS or the NVIDIA GPU; it is an Intel firmware issue. For the time being, Intel suggests locking the PCIe bandwidth to 22Gbps. We'll send our device to the lab with the suggested firmware, and after that we will adjust the setting from 22Gbps to 32Gbps. The total bandwidth of TB3 is 40Gbps; we can afford 32Gbps for PCIe + 5Gbps for USB 3.0 + 1Gbps for LAN. The rest can go to SATA.

John, as I understand it, the USB 3.0, LAN and SATA controllers all hang off a TB3 PCIe bridge in the Mantiz enclosure. They then communicate back to the host TB3 controller (attached at x4 3.0, 32Gbps). May I suggest running CUDA-Z tests with your firmware bandwidth parameters to maximize the eGPU PCIe bandwidth.

This architecture is consistent across these enclosures with additional ports. The sole exception so far is the Asus XG Station 2, which has a USB switch and can send additional port traffic down an alternate USB-B cable pathway when connected, as discussed here. That has a bandwidth-sparing effect on the 32Gbps TB3 PCIe channel.

   

Hi:

Following your suggestion, we have asked Intel to raise the limit from 22Gbps to 32Gbps. Also, in the current design the AR (Alpine Ridge) chip supports USB without going through PCIe; the AR chip drives the USB interface directly. Our I/O board uses a separate USB hub for the Type-A ports and Gigabit LAN, so it won't affect the PCIe bandwidth. We will test it after getting Intel's approval. Thank you.

 

Mantiz: ● ●


nando4
(@nando4)
Noble Member Admin
Joined: 4 years ago
 
Posted by: Mymantiz_John  

Hi:

Following your suggestion, we have asked Intel to raise the limit from 22Gbps to 32Gbps. Also, in the current design the AR (Alpine Ridge) chip supports USB without going through PCIe; the AR chip drives the USB interface directly. Our I/O board uses a separate USB hub for the Type-A ports and Gigabit LAN, so it won't affect the PCIe bandwidth. We will test it after getting Intel's approval. Thank you.

John, the 22Gbps parameter may set the minimum PCIe slot bandwidth, leaving up to 10Gbps for the TB3 chip's USB 3.1 controller. With no additional devices attached, you'd probably get the maximum 32Gbps.

Any enclosure device (SATA, USB 3.0, LAN) attached via the TB3 controller's PCIe bridge or its USB 3.1 controller (which hangs off the PCIe bridge) will then use some of the 32Gbps PCIe bandwidth. As there is no additional TB3 daisy-chaining port to attach a DP LCD, I believe it's not possible to get 40Gbps from the Mantiz enclosures.
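
To put rough numbers on that sharing (the per-device figures below are illustrative assumptions, not measurements):

PCIE_UPLINK_GBPS = 32.0  # x4 PCIe 3.0 uplink from the enclosure back to the host

def egpu_headroom(active_gbps):
    """PCIe bandwidth left for the eGPU while other enclosure devices are busy."""
    return max(PCIE_UPLINK_GBPS - sum(active_gbps.values()), 0.0)

# e.g. a SATA SSD copy, a USB 3.0 drive and Gigabit LAN all active at once:
print(round(egpu_headroom({"sata": 4.8, "usb3": 3.2, "lan": 0.9}), 1))  # -> 23.1 Gbps left for the eGPU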

Would you mind posting a HWiNFO64 pic of your hardware layout like the one shown here?

eGPU Setup 1.35    •    eGPU Port Bandwidth Reference Table

 
2015 15" Dell Precision 7510 (Q M1000M) [6th,4C,H] + GTX 1080 Ti @32Gbps-M2 (ADT-Link R43SG) + Win10 1803 [build link]  


Mymantiz_John
(@mymantiz_john)
Vendor
Joined: 3 years ago
 
Posted by: nando4

 

Posted by: Mymantiz_John  

Hi:

Following your suggestion, we have asked Intel to raise the limit from 22Gbps to 32Gbps. Also, in the current design the AR (Alpine Ridge) chip supports USB without going through PCIe; the AR chip drives the USB interface directly. Our I/O board uses a separate USB hub for the Type-A ports and Gigabit LAN, so it won't affect the PCIe bandwidth. We will test it after getting Intel's approval. Thank you.

John, the 22Gbps parameter may set the minimum PCIe slot bandwidth, leaving up to 10Gbps for the TB3 chip's USB 3.1 controller. With no additional devices attached, you'd probably get the maximum 32Gbps.

Any enclosure device (SATA, USB 3.0, LAN) attached via the TB3 controller's PCIe bridge or its USB 3.1 controller (which hangs off the PCIe bridge) will then use some of the 32Gbps PCIe bandwidth. As there is no additional TB3 daisy-chaining port to attach a DP LCD, I believe it's not possible to get 40Gbps from the Mantiz enclosures.

I assume your SATA, LAN, USB 3.0 ports

   

Hi:

I think you were misled by HWiNFO64. Please check my attachment, a screenshot taken directly from the Windows 10 Device Manager.

Host Thunderbolt

B: Mantiz DSL6540 Lane 1

C: Mantiz DSL6540 Lane 2

You can see the USB controller is not under B, and B & C are at the same level. B + C is 40Gbps in total, so it has nothing to do with the Lane 1 PCIe x4.

Mantiz: ● ●


Mymantiz_John
(@mymantiz_john)
Vendor
Joined: 3 years ago
 
Posted by: Mymantiz_John

 

Posted by: nando4

 

Posted by: Mymantiz_John  

Hi:

Following your suggestion, we have asked Intel to raise the limit from 22Gbps to 32Gbps. Also, in the current design the AR (Alpine Ridge) chip supports USB without going through PCIe; the AR chip drives the USB interface directly. Our I/O board uses a separate USB hub for the Type-A ports and Gigabit LAN, so it won't affect the PCIe bandwidth. We will test it after getting Intel's approval. Thank you.

John, the 22Gbps parameter may set the minimum PCIe slot bandwidth, leaving up to 10Gbps for the TB3 chip's USB 3.1 controller. With no additional devices attached, you'd probably get the maximum 32Gbps.

Any enclosure device (SATA, USB 3.0, LAN) attached via the TB3 controller's PCIe bridge or its USB 3.1 controller (which hangs off the PCIe bridge) will then use some of the 32Gbps PCIe bandwidth. As there is no additional TB3 daisy-chaining port to attach a DP LCD, I believe it's not possible to get 40Gbps from the Mantiz enclosures.

I assume your SATA, LAN, USB 3.0 ports

   

Hi:

I think you were misled by HWiNFO64. Please check my attachment, a screenshot taken directly from the Windows 10 Device Manager.

Host Thunderbolt

B: Mantiz DSL6540 Lane 1

C: Mantiz DSL6540 Lane 2

You can see the USB controller is not under B, and B & C are at the same level. B + C is 40Gbps in total, so it has nothing to do with the Lane 1 PCIe x4.

   

Mantiz: ● ●


nando4
(@nando4)
Noble Member Admin
Joined: 4 years ago
 
Posted by: Mymantiz_John

Hi:

I think you were misled by HWiNFO64. Please check my attachment, a screenshot taken directly from the Windows 10 Device Manager.

Host Thunderbolt

B: Mantiz DSL6540 Lane 1

C: Mantiz DSL6540 Lane 2

You can see the USB controller is not under B, and B & C are at the same level. B + C is 40Gbps in total, so it has nothing to do with the Lane 1 PCIe x4.

   

I can see LAN and USB are coming off the TB3 USB controller, which itself goes back to the TB3 PCIe bridge. The PCIe bridge communicates back with the host, whose root PCIe port runs at 32Gbps.

So yes, use of your additional ports will share the 32Gbps-TB3 PCIe bandwidth with the PCIe video card. That's to be expected. The only way around that would be to install a USB hub in the enclosure and run a separate USB cable back to the host, like the XG Station 2 does.

Don't worry, the Venus/Saturn is not a $699 device like the XG Station 2. Our users are looking at it for its benefits over the Node: 60W charging, additional ports, and a smaller chassis for a small price bump.

Though you may wish to note in the fine print that use of those devices will take some eGPU bandwidth, so you don't get returns.

If the enclosure had a TB3 daisy-chain port with a DP LCD attached, it could carry 40Gbps of combined PCIe + DP traffic across the TB3 link. So any TB3 enclosure without a daisy-chaining port is a 32Gbps-TB3 device. I bet someone will return one when they find out it is not 40Gbps as advertised.
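
As a back-of-envelope sketch of that arithmetic (figures are approximate):

TB3_LINK_GBPS = 40.0         # advertised Thunderbolt 3 link rate
PCIE_TUNNEL_CAP_GBPS = 32.0  # practical ceiling of the PCIe tunnel

def tb3_link_traffic(pcie_gbps, dp_gbps=0.0):
    """Combined tunneled traffic on the TB3 link, clamped to the link rate."""
    return min(min(pcie_gbps, PCIE_TUNNEL_CAP_GBPS) + dp_gbps, TB3_LINK_GBPS)

print(tb3_link_traffic(pcie_gbps=40.0))               # 32.0 - eGPU only, no daisy-chain port
print(tb3_link_traffic(pcie_gbps=40.0, dp_gbps=8.0))  # 40.0 - with a DP display tunneled as well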

eGPU Setup 1.35    •    eGPU Port Bandwidth Reference Table

 
2015 15" Dell Precision 7510 (Q M1000M) [6th,4C,H] + GTX 1080 Ti @32Gbps-M2 (ADT-Link R43SG) + Win10 1803 [build link]  


ikir liked
rhx123
(@rhx123)
Eminent Member Moderator
Joined: 4 years ago
 

Nando, I think you are mistaken here: TB3 does not carry USB 3 over PCIe; it can take bandwidth from the remaining 8Gbps.

The USB data is sent directly over TB3 packets. If you look at the PCB you will see the PCIe lanes come directly from the TB3 chip, which does not have a PCIe switch in it; just look at the cost of PLX chips...

This is why the USB/LAN ports worked on hosts that did not support the eGPU protocol on the old firmware version.

XPS 13 9360 + Acer Graphics Dock


nando4
(@nando4)
Noble Member Admin
Joined: 4 years ago
 
Posted by: Richard

 

Nando, I think you are mistaken here: TB3 does not carry USB 3 over PCIe; it can take bandwidth from the remaining 8Gbps.

The USB data is sent directly over TB3 packets. If you look at the PCB you will see the PCIe lanes come directly from the TB3 chip, which does not have a PCIe switch in it; just look at the cost of PLX chips...

This is why the USB/LAN ports worked on hosts that did not support the eGPU protocol on the old firmware version.

   

No mistake at all. The layout is best seen and explained here, copied below (just ignore the annotation). TB3 does carry USB 3.1, per Wikipedia.

The TB3 USB 3.1 controller sits under a PCIe bridge. They all communicate back down via the host's PCIe root port, which for the XPS shown below is a x2 3.0 (16Gbps max) link, but for 2016 MBPs is a x4 3.0 (32Gbps) link.
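
For reference, the 16Gbps and 32Gbps figures fall straight out of the PCIe 3.0 link math:

GT_PER_LANE = 8.0         # PCIe 3.0 signalling rate: 8 GT/s per lane
ENCODING = 128.0 / 130.0  # 128b/130b line-encoding overhead

def pcie3_gbps(lanes):
    """Usable payload bandwidth of a PCIe 3.0 link with the given lane count."""
    return lanes * GT_PER_LANE * ENCODING

print(round(pcie3_gbps(2), 2))  # ~15.75 Gbps -> the "16Gbps" x2 XPS link
print(round(pcie3_gbps(4), 2))  # ~31.51 Gbps -> the "32Gbps" x4 MBP link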

eGPU Setup 1.35    •    eGPU Port Bandwidth Reference Table

 
2015 15" Dell Precision 7510 (Q M1000M) [6th,4C,H] + GTX 1080 Ti @32Gbps-M2 (ADT-Link R43SG) + Win10 1803 [build link]  


rhx123
(@rhx123)
Eminent Member Moderator
Joined: 4 years ago
 

Yes, the point is kind of moot because of the link between the TB3 controller and the host CPU. Not forgetting that the limitation comes from the TB3 controller hanging off the chipset and not the CPU: DMI 3.0 limits EVERY PCIe device hung off the chipset to effectively PCIe 3.0 x4 speeds, shared. This includes M.2 SSDs. See just over halfway down this page:

http://www.anandtech.com/show/10343/the-intel-skull-canyon-nuc6i7kyk-minipc-review

XPS 13 9360 + Acer Graphics Dock


nando4 liked
ReplyQuote
jconly
(@jconly)
Eminent Member
Joined: 3 years ago
 

Sorry guys, I got a little lost here with this latest chatter. 
Please correct me if I'm wrong, but what I'm taking away from this is that:

The PCIe portion of the TB3 link is used for the GPU itself, running at 32Gbps.
The remaining 8Gbps of the 40Gbps spec is used as a USB channel, providing bandwidth to the other items on board (network, USB, SATA).

If this is the case, then no GPU performance is lost by utilizing the extra ports, compared to running a GPU on the Node without additional ports.

 

To do: Create my signature with system and expected eGPU configuration information to give context to my posts. I have no builds.

.
