ALERT: half H2D performance issue on TI83 TB3 enclosures (Node, Venus, Devil Box, XG Station 2)  

Page 2 / 8
 

Jaye
(@jaye)
Eminent Member
Joined:11 months  ago
Posts: 38
March 30, 2017 3:35 pm  

I’m kind of confused. AKiTiO reported here that they’ve received updated firmware from Intel that fixes the issue, correct?

On the other hand, Mr. Mymantiz_John says otherwise, and there is no confirmation of a firmware fix from Intel.

I would be thankful if someone could clarify this for me.

2017 13" HP Spectre 13" x360 + GTX1080Ti@32Gbps-TB3 (AKiTiO Node) + Win10
2016 15" Asus UX501VW GTX960M + GTX1080Ti@32Gbps-TB3 (AKiTiO Node) + Win10


ReplyQuote
Mymantiz_John
(@mymantiz_john)
Vendor
Joined:11 months  ago
Posts: 425
March 30, 2017 3:48 pm  
Posted by: Jaye

 

I’m kind of confused. AKiTiO reported here that they’ve received updated firmware from Intel that fixes the issue, correct?

On the other hand, Mr. Mymantiz_John says otherwise, and there is no confirmation of a firmware fix from Intel.

I would be thankful if someone could clarify this for me.

   

There is currently no update for the eGPU FW. The CUDA-Z and MATLAB values have been reported to Intel. Again, I have both firmware versions and still cannot see any significant performance difference in my tests. If the value is only half, why is the impact so hard to see in benchmarks and games?

I have also sent 2 PCBAs with different FW to a professional lab to measure the H2D PCIe bandwidth; I’ll have results soon.
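For context on what such a measurement involves: a bandwidth figure like CUDA-Z's H2D number is just bytes transferred divided by elapsed time. Below is a host-only sketch of that arithmetic (no GPU or PCIe link involved, purely illustrative; the in-memory copy stands in for a real host-to-device transfer):

```python
import time

# A bandwidth figure is just bytes moved over elapsed time. This times a
# plain in-memory copy as a stand-in for a host-to-device transfer.
def throughput_mib_s(nbytes, copy):
    t0 = time.perf_counter()
    copy()
    elapsed = time.perf_counter() - t0
    return nbytes / elapsed / 1048576  # bytes/s -> MiB/s

src = bytearray(64 * 1024 * 1024)  # a 64 MiB buffer
rate = throughput_mib_s(len(src), lambda: bytes(src))
print(f"{rate:.0f} MiB/s")
```

A real H2D test does the same thing, only the copy crosses the PCIe/TB3 link, which is exactly where the 1100 vs 2200 MiB/s difference shows up.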

 

 

Mantiz: ShopFacebookTwitter


Jaye, theitsage and nando4 liked
ReplyQuote
Jaye
(@jaye)
Eminent Member
Joined:11 months  ago
Posts: 38
March 30, 2017 4:20 pm  

Okay, from my experience with this issue:

My laptop: ASUS UX501VW (i7-6700HQ) – should be enough CPU power for anything.

Please check my screenshots.

Internal Display (Akitio Node + STRIX 1070)

External Display(Same hardware)


 

There is an obvious fps difference on the external display, but I believe even the external display should perform better, no?

There are desktop GTX 1070 results on the internet showing 200 fps (max value) in the same benchmark.

I know there is a slowdown compared to desktop performance, but should it be that big?

 

Anyway, even if those results are completely correct, I think I have one of the cases where H2D bandwidth matters:

I’m running World of Warcraft: Legion at 1920×1080, Ultra settings, with the view distance set to 7 (out of a possible 10), and getting 60 fps. When I change the distance value to 8 or higher (up to 10), fps drops to 40-45 (various YouTube videos still show 60+ fps). I don’t think this slowdown is related to slower TB3 speeds as such, but to the overall bandwidth available. In my opinion, the slowdowns are caused not by the complexity of visual effects/particles/etc., but by the overall data rate available.

Another case in WoW Legion: when I suddenly change the camera’s direction (a completely new scene on the display), a slowdown happens. CPU? I don’t think so; I don’t have this issue with the internal 960M.

I hope my info will help with this issue.

 

Edited: 11 months  ago

2017 13" HP Spectre 13" x360 + GTX1080Ti@32Gbps-TB3 (AKiTiO Node) + Win10
2016 15" Asus UX501VW GTX960M + GTX1080Ti@32Gbps-TB3 (AKiTiO Node) + Win10


ReplyQuote
goalque
(@goalque)
Honorable Member Admin
Joined:1 year  ago
Posts: 779
March 30, 2017 4:42 pm  
Posted by: DanKnight

 

Update March 29, 2017:

Intel has responded to us with a beta firmware. We’re currently running tests for stability and seeing if it comes with any other issues. Intel has also reached out to the other companies (according to them).

   

But isn’t the Node also eGFX certified?

I was wondering: does the H2D traffic go through the DP protocol lanes, so that the PCIe data bandwidth reduction is not visible in gaming?

Notice the single-precision GPU core performance: 9114 Gflop/s vs 5047 Gflop/s (a Dell Precision M7510 – TI82 Razer Core vs TI83 AKiTiO Node).

On the other hand, the ~500MiB/s H2D difference between the Node & the Devil Box (TI82) did not make any difference in CUDA-Z’s GPU core performance numbers. I will conduct some more research.

 

automate-eGPU.shapple_set_os.efi
--
late-2016 13" Macbook Pro nTB + Vega64@32Gbps-TB3 (Netstor HL23T) + macOS & Win10
late-2016 13" Macbook Pro nTB + GTX980/RX580@32Gbps-TB3 (Netstor HL23T) + macOS10.13 & Win10


ReplyQuote
Mymantiz_John
(@mymantiz_john)
Vendor
Joined:11 months  ago
Posts: 425
March 30, 2017 4:44 pm  
Posted by: Jaye

 

Okay, from my experience with this issue:

..

I hope my info will help with this issue.

 

Hi,

1. From your screenshots, your external FPS is better than the internal, so what’s the issue? Also, the OS should be Win10 (recommended by Intel); your config shows Win8.

2. My results, desktop (Core i7-6700) vs Razer Blade (Core i7-6500U) vs Intel NUC (Core i7-6770HQ) vs Lenovo X1 (Core i7-7600U), all on Win10, driver version 376.9:

Benchmark for F1

Benchmark for F2

All tests were run with the eGPU FW.

 

Edited: 11 months  ago

Mantiz: ShopFacebookTwitter


ReplyQuote
nando4
(@nando4)
Noble Member Admin
Joined:1 year  ago
Posts: 1578
March 30, 2017 4:59 pm  
Posted by: goalque

 But isn’t the Node also eGFX certified?

..

   

@Goalque, those GPU computation results sure are strange. It was a GTX 1070 in both cases. Also, these H2D numbers are all with NVIDIA cards. Do AMD cards also see a reduction in H2D bandwidth? AMD’s PCIeSpeedTest tool, which shows CPU->GPU (H2D) bandwidth, last worked in Win8 for me. It can be downloaded here: https://egpu.io/wp-content/uploads/2017/03/PCIeSpeedTest_v0.2.zip

Testing that, or some OpenCL bandwidth tool, would give a better indicator of whether this is a TB3 controller issue affecting AMD and NVIDIA equally, or some anomaly with NVIDIA card handshaking on a TB3 PCIe port.

Edited: 11 months  ago

eGPU Port Bandwidth Reference TableeGPU Setup 1.35


goalque liked
ReplyQuote
goalque
(@goalque)
Honorable Member Admin
Joined:1 year  ago
Posts: 779
March 30, 2017 5:15 pm  

The automate-eGPU.sh -clpeak option yields some OpenCL numbers. I am not sure how accurate they are, but they may indicate something.

automate-eGPU.shapple_set_os.efi
--
late-2016 13" Macbook Pro nTB + Vega64@32Gbps-TB3 (Netstor HL23T) + macOS & Win10
late-2016 13" Macbook Pro nTB + GTX980/RX580@32Gbps-TB3 (Netstor HL23T) + macOS10.13 & Win10


nando4 liked
ReplyQuote
PFCBarefoot
(@pfcbarefoot)
Active Member
Joined:11 months  ago
Posts: 14
March 30, 2017 10:19 pm  

So I guess in layman’s terms, y’all are saying that the current “issue” of 1,100MiB/s is not actually an issue, and that 1,100MiB/s vs 2,200MiB/s perform pretty much equally?? I don’t have another enclosure to test this, but if that is indeed the case, then I will need to get rid of my Node, because the 1060 outperforms the 1080. Or did this just go completely over my head??


ReplyQuote
goalque
(@goalque)
Honorable Member Admin
Joined:1 year  ago
Posts: 779
March 30, 2017 10:48 pm  

Mid 2015 15″ MBP (M370X) with Apple TB2-to-TB3 adapter vs Late 2016 13″ MBP (non-touch)

Devil Box (TI82), reference GTX 980, external DP monitor (3840 x 2160)

Belkin 0.5m TB3 40Gbps cable

Windows 10 Boot Camp

External screen, 2016 13″ MBP

Internal screen, 2016 13″ MBP

External screen, 2015 15″ MBP

Internal screen, 2015 15″ MBP

CUDA-Z (Mid 2015 15″ MBP, Apple TB2-to-TB3 adapter)

Valley detects Windows 10 as 8 because Windows 10 is not listed under system requirements on their web site.

Thunderbolt 2 is very competitive, at least with the quad-core CPU + external monitor. It beats direct TB3-TB3.

Edited: 11 months  ago

automate-eGPU.shapple_set_os.efi
--
late-2016 13" Macbook Pro nTB + Vega64@32Gbps-TB3 (Netstor HL23T) + macOS & Win10
late-2016 13" Macbook Pro nTB + GTX980/RX580@32Gbps-TB3 (Netstor HL23T) + macOS10.13 & Win10


ReplyQuote
ddqp
(@ddqp)
Eminent Member
Joined:11 months  ago
Posts: 31
March 31, 2017 12:21 am  

So there’s no point in buying a TB3 enclosure for now, from both a price and a performance point of view.

Edited: 11 months  ago

Late Macbook Pro 2016 13' touch bar + AKITIO node + GTX 1050TI 4G Windows


ReplyQuote
nando4
(@nando4)
Noble Member Admin
Joined:1 year  ago
Posts: 1578
March 31, 2017 12:34 am  
Posted by: PFCBarefoot

 

So I guess in layman’s terms, y’all are saying that the current “issue” of 1,100MiB/s is not actually an issue, and that 1,100MiB/s vs 2,200MiB/s perform pretty much equally?? I don’t have another enclosure to test this, but if that is indeed the case, then I will need to get rid of my Node, because the 1060 outperforms the 1080. Or did this just go completely over my head?

Until proven otherwise, the issue remains as per the opening post.

That is: we are seeing 9.22Gbps (1100MiB/s) H2D performance rather than the 22Gbps that would match Intel’s TB3 spec when testing TI83-based TB3 enclosures.

We require this observation to be analysed and corrected (or refuted) by the enclosure vendors and/or Intel. If it is left unanswered, then under consumer law you are entitled to a refund for your TI83-based enclosure due to misadvertising, and can purchase a TI82 Razer Core instead, for which we have no example of underperformance yet. The opening post serves as evidence of the TI83-based TB3 enclosure underperformance.
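The 9.22Gbps figure follows directly from the unit conversion; a quick sketch of the binary-prefix arithmetic:

```python
# Convert a measured transfer rate in MiB/s to Gbps (decimal gigabits).
def mib_s_to_gbps(mib_s):
    return mib_s * 1048576 * 8 / 1e9  # 1 MiB = 1048576 bytes, 8 bits/byte

print(f"{mib_s_to_gbps(1100):.2f} Gbps")  # TI83 H2D readings -> 9.23 Gbps
print(f"{mib_s_to_gbps(2200):.2f} Gbps")  # TI82 Razer Core readings -> 18.45 Gbps
print(f"{22e9 / 8 / 1048576:.0f} MiB/s")  # rate that would saturate 22 Gbps -> 2623 MiB/s
```

So even the 2200MiB/s enclosures sit below the 22Gbps PCIe data budget; the TI83 enclosures sit at less than half of it.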

 

Posted by: goalque

 Thunderbolt 2 is very competitive, at least with the quad core CPU + external monitor. Beats direct TB3-TB3.   

Beats direct TB3-TB3 – absolutely. Your result matches others we’ve seen: a TI83 enclosure attached to a TB2 MacBook via an Apple TB3-to-TB2 adapter gets better H2D results than a native TB3 notebook with the same enclosure. The results run up to 1400MiB/s for a TB2 MacBook vs 1100MiB/s for a native 4-lane TB3 MacBook/notebook.

Edited: 11 months  ago

eGPU Port Bandwidth Reference TableeGPU Setup 1.35


ReplyQuote
Mymantiz_John
(@mymantiz_john)
Vendor
Joined:11 months  ago
Posts: 425
March 31, 2017 12:44 am  

@Goalque, what is the CPU in each model? I know the CPU matters for those benchmarks. Can you test a TB2 and a TB3 box with the MBP 2013 non-touch, just via the adapter?

 

Edited: 11 months  ago

Mantiz: ShopFacebookTwitter


ReplyQuote
Mymantiz_John
(@mymantiz_john)
Vendor
Joined:11 months  ago
Posts: 425
March 31, 2017 4:27 am  
Posted by: nando4

 

Until proven otherwise, the issue remains as per the opening post.

..

Beats direct TB3-TB3 – absolutely.

   

Hi,

The comparison is based on different configurations.

MBP late-2016 13″ non-touch (dual-core i5-6360U) vs MBP 2015 (quad-core i7 2.5GHz), testing a TB3 box.

Without seeing the results, I knew the quad-core would be better; see my results.

I compared a Skylake NUC quad-core (6770HQ) vs a Skylake Razer Blade dual-core (6500U), and I can see the gap; the gap is caused by the CPU. If anyone can put a TB2 and a TB3 box on the same TB3 host, the benchmark scores become meaningful.

NUC VS RAZER

And @nando, previously you didn’t rely on Valley, but now you do. With different hosts and different CPUs (dual vs quad), what is the actual proof that TB2 is better than TB3?

Mantiz: ShopFacebookTwitter


ReplyQuote
Mymantiz_John
(@mymantiz_john)
Vendor
Joined:11 months  ago
Posts: 425
March 31, 2017 4:40 am  

https://egpu.io/forums/thunderbolt-enclosures/mantis-venus/paged/5/#post-5797

It shows no difference between 2200MiB/s and 1100MiB/s CUDA-Z H2D when tested on the same host.

So, on the same host, whether CUDA-Z reports 2200MiB/s or 1100MiB/s, Unigine Heaven & Valley show no difference.

Mantiz: ShopFacebookTwitter


ReplyQuote
ddqp
(@ddqp)
Eminent Member
Joined:11 months  ago
Posts: 31
March 31, 2017 4:58 am  
Posted by: Mymantiz_John

 

https://egpu.io/forums/thunderbolt-enclosures/mantis-venus/paged/5/#post-5797

It shows no difference between 2200MiB/s and 1100MiB/s CUDA-Z H2D when tested on the same host.

So, on the same host, whether CUDA-Z reports 2200MiB/s or 1100MiB/s, Unigine Heaven & Valley show no difference.

   

People believe what they want to believe; even when arguing with data, the data can be whatever people want to show.

Anyway, the MATLAB value should be meaningful, and until the bandwidth can be verified in a solid way, I do believe there is something wrong with the TI83.

I hope @DanKnight can give us some more professional information.

Late Macbook Pro 2016 13' touch bar + AKITIO node + GTX 1050TI 4G Windows


nando4 liked
ReplyQuote
Jaye
(@jaye)
Eminent Member
Joined:11 months  ago
Posts: 38
March 31, 2017 5:18 am  

Despite the benchmark results, I’m still getting slowdowns in WoW.

One more example: in Assassin’s Creed: Black Flag I get 60 fps when the character stands in one place or walks slowly. If I run in one direction, drops to around 40 fps appear from time to time.

It is definitely not an SSD or CPU bottleneck.

 

P.S. About Windows 8: it is actually Win10; the benchmark just reports the wrong name.

2017 13" HP Spectre 13" x360 + GTX1080Ti@32Gbps-TB3 (AKiTiO Node) + Win10
2016 15" Asus UX501VW GTX960M + GTX1080Ti@32Gbps-TB3 (AKiTiO Node) + Win10


ReplyQuote
goalque
(@goalque)
Honorable Member Admin
Joined:1 year  ago
Posts: 779
March 31, 2017 5:52 am  
Posted by: Mymantiz_John

 

@Goalque, what is the CPU in each model? I know the CPU matters for those benchmarks. Can you test a TB2 and a TB3 box with the MBP 2013 non-touch, just via the adapter?

I agree. TB2 + a quad-core CPU performs better than TB3 + a dual-core CPU when running Valley on the eGPU monitor.

automate-eGPU.shapple_set_os.efi
--
late-2016 13" Macbook Pro nTB + Vega64@32Gbps-TB3 (Netstor HL23T) + macOS & Win10
late-2016 13" Macbook Pro nTB + GTX980/RX580@32Gbps-TB3 (Netstor HL23T) + macOS10.13 & Win10


ReplyQuote
ikir
(@ikir)
Vendor
Joined:1 year  ago
Posts: 583
March 31, 2017 6:31 am  

My Thunder3 reaches 1670 MiB/s connected via Thunderbolt 3 on a MacBook Pro 13″ touchbar, macOS 10.12.4.

@jaye

It is normal; I play Elder Scrolls Online in 4K and go from 60fps to 30fps depending on the scene and the number of elements on screen.

Edited: 11 months  ago

eGPU.it | LG 29" curved ultrawide display
MacBook Pro 2017 touchbar i7 3.5Ghz - 16GB RAM - 512GB PCIe SSD + Mantiz Venus with AMD Radeon RX 580


ReplyQuote
wimpzilla
(@wimpzilla)
Reputable Member
Joined:1 year  ago
Posts: 333
March 31, 2017 10:17 am  

It really amuses me to see that nobody has put together a decent in-game benchmark, instead of throwing around useless Valley scores or other basic benchmarks. In the review world, synthetic benchmarks mean little and are not representative of GPU compute power or of real in-game performance. It feels like the tests were done lightly, just to show what you wanted to see!

In a decent review, you put the hardware under stress and use the output to offer the consumer a conclusion and an opinion about the hardware.

There is no mystery: hardware bandwidth matters in games. People noticed a big difference and reported it; it is unlikely that this difference has no impact on performance somewhere.

So I kindly ask you to stop forming opinions without solid data; the current data is too weak to build an opinion on. As a long-time hardware tester, I could not find a single decent, well-executed test that gives us a clear result on TB2/TB3 performance comparing the firmwares. Then we could close this topic by pinpointing the problem and move on to a new one, looking for a solution!

Edited: 11 months  ago

2012 13-inch Dell Latitude E6320 + R9 270X@4Gbps-mPCIe (EXP GDC 8.4) + Win10
E=Mc²


nando4 and Mymantiz_John liked
ReplyQuote
Jetcopter
(@jetcopter)
Active Member
Joined:11 months  ago
Posts: 5
March 31, 2017 1:00 pm  

Here is my 3DMark Fire Strike score of 12,872 with a GTX 1080 in the Node: http://www.3dmark.com/3dm/18965848?

Without the Node, that card scores over 22,000.

 


ReplyQuote
kotlos
(@kotlos)
Trusted Member
Joined:11 months  ago
Posts: 83
March 31, 2017 1:45 pm  

I think the best test would be comparing real-world performance (game performance or a professional application) with the same card and computer on a TI82 and a TI83 case.

From what I have seen so far, nobody has shown a significant difference in a real-world application in such a test.

If there is no difference, then either the Host-to-Device speed metric is wrong, or none of the real-world applications actually depend on this metric, or there is another bottleneck that dominates before this one becomes a significant factor. If it is the latter, a firmware update (from Intel?) should address the issue.

 

Edited: 11 months  ago

ReplyQuote
Jetcopter
(@jetcopter)
Active Member
Joined:11 months  ago
Posts: 5
March 31, 2017 1:50 pm  

I have seen a difference in the games I have played. But something like 3DMark is generally accepted as a good (less subjective) method to compare performance between setups and cards. It is essentially measuring FPS, which is a good indicator of performance.


ReplyQuote
Mymantiz_John
(@mymantiz_john)
Vendor
Joined:11 months  ago
Posts: 425
March 31, 2017 1:57 pm  
Posted by: Jetcopter

 

I have seen a difference in the games I have played. But something like 3DMark is generally accepted as a good (less subjective) method to compare performance between setups and cards. It is essentially measuring FPS, which is a good indicator of performance.

   

Hi

I have done the test in 3DMark; I saw no difference between 2200MiB/s and 1100MiB/s.

Mantiz: ShopFacebookTwitter


ReplyQuote
kotlos
(@kotlos)
Trusted Member
Joined:11 months  ago
Posts: 83
March 31, 2017 1:58 pm  
Posted by: Jetcopter

 

I have seen a difference in the games I have played. But something like 3DMark is generally accepted as a good (less subjective) method to compare performance between setups and cards. It is essentially measuring FPS, which is a good indicator of performance.

 

I guess 3DMark could be used as well, but then the comparison should be between a TI82 and a TI83 case (which has been done and showed no difference), not between an internal and an external configuration.

Edited: 11 months  ago

ReplyQuote
theitsage
(@itsage)
Noble Member Admin
Joined:1 year  ago
Posts: 2009
March 31, 2017 2:06 pm  

I have an AKiTiO Thunder2 and an AKiTiO Node, which I can use to test the difference between Thunderbolt 2 and Thunderbolt 3 speeds. Anyone with TI82 and TI83 enclosures willing to do comparable tests?

Edited: 11 months  ago

Numerous implementation guides


ReplyQuote
Mymantiz_John
(@mymantiz_john)
Vendor
Joined:11 months  ago
Posts: 425
March 31, 2017 2:24 pm  
Posted by: kotlos

 

..

I guess 3DMark could be used as well, but then the comparison should be between a TI82 and a TI83 case, not between an internal and an external configuration. 

   

My comparison was based on the Razer Core TI82 (2200MiB/s) vs the Mantiz Venus TI83 (1100MiB/s).

Mantiz: ShopFacebookTwitter


kotlos liked
ReplyQuote
goalque
(@goalque)
Honorable Member Admin
Joined:1 year  ago
Posts: 779
March 31, 2017 4:25 pm  
Posted by: wimpzilla

 

It really amuses me to see that nobody has put together a decent in-game benchmark, instead of throwing around useless Valley scores or other basic benchmarks.

Would you explain why the Valley scores are useless? We need a tool to prove whether H2D bandwidth matters or not. None of the benchmarks in this thread were any better, including 3DMark and CompuBench. @Mymantiz_John did not see any performance gap between 2200MiB/s and 1100MiB/s.

The MATLAB and CUDA-Z tests are correct. I always get roughly a ~500MiB/s difference in bandwidth (Node vs Devil Box), but no difference at all in OpenCL, CUDA or DirectX 11 benchmarks.

A German review of the Devil Box and their numbers confirm the point I made in a quick Valley bench: a dual-core CPU is a handicap in gaming if the aim is to maximize performance:

https://www.computerbase.de/2016-10/powercolor-devil-box-test

Modern GPUs don’t require much bandwidth:

http://www.tested.com/tech/457440-theoretical-vs-actual-bandwidth-pci-express-and-thunderbolt/

“Everything down to x16 1.1 and its equivalents (x8 2.0, x4 3.0) provides sufficient gaming performance even with the latest graphics hardware, losing only 5% average in worst-case”.

“Last year’s most powerful graphics cards perform just fine at PCIe 2.0 x8 or even PCIe 3.0 x4”
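The equivalence cited in that article (x16 1.1 ≈ x8 2.0 ≈ x4 3.0) follows directly from the raw link parameters; a quick sketch, ignoring protocol overhead, so real throughput is somewhat lower:

```python
# Theoretical PCIe data rate per link: transfer rate x encoding efficiency x lanes.
GT_S = {1: 2.5, 2: 5.0, 3: 8.0}                  # GT/s per lane, per PCIe generation
ENCODING = {1: 8 / 10, 2: 8 / 10, 3: 128 / 130}  # line-code efficiency (8b/10b, 128b/130b)

def pcie_gbytes_s(gen, lanes):
    return GT_S[gen] * ENCODING[gen] * lanes / 8  # GB/s

for gen, lanes in [(1, 16), (2, 8), (3, 4)]:
    print(f"PCIe {gen}.0 x{lanes}: {pcie_gbytes_s(gen, lanes):.2f} GB/s")
```

All three links land at roughly 4 GB/s of raw data rate, which is why a TB3 eGPU link (around a PCIe 3.0 x4 budget) can still feed a modern GPU reasonably well.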

The CPU/PCH makes the real difference. I tried to measure PCIe traffic but unfortunately my CPU (i5-6360U) is not supported:

 

./pcm-pcie.x

 Processor Counter Monitor: PCIe Bandwidth Monitoring Utility 
 This utility measures PCIe bandwidth in real-time

 PCIe event definitions (each event counts as a transfer): 
   PCIe read events (PCI devices reading from memory - application writes to disk/network/PCIe device):
     PCIePRd   - PCIe UC read transfer (partial cache line)
     PCIeRdCur* - PCIe read current transfer (full cache line)
         On Haswell Server PCIeRdCur counts both full/partial cache lines
     RFO*      - Demand Data RFO
     CRd*      - Demand Code Read
     DRd       - Demand Data Read
     PCIeNSWr  - PCIe Non-snoop write transfer (partial cache line)
   PCIe write events (PCI devices writing to memory - application reads from disk/network/PCIe device):
     PCIeWiLF  - PCIe Write transfer (non-allocating) (full cache line)
     PCIeItoM  - PCIe Write transfer (allocating) (full cache line)
     PCIeNSWr  - PCIe Non-snoop write transfer (partial cache line)
     PCIeNSWrF - PCIe Non-snoop write transfer (full cache line)
     ItoM      - PCIe write full cache line
     RFO       - PCIe parial Write
   CPU MMIO events (CPU reading/writing to PCIe devices):
     PRd       - MMIO Read [Haswell Server only] (Partial Cache Line)
     WiL       - MMIO Write (Full/Partial)

 * - NOTE: Depending on the configuration of your BIOS, this tool may report '0' if the message
           has not been selected.

Number of physical cores: 4
Number of logical cores: 4
Number of online logical cores: 4
Threads (logical cores) per physical core: 1
Num sockets: 3
Physical cores per socket: 1
Core PMU (perfmon) version: 4
Number of core PMU generic (programmable) counters: 4
Width of generic (programmable) counters: 48 bits
Number of core PMU fixed counters: 3
Width of fixed counters: 48 bits
Nominal core frequency: 2000000000 Hz
Package thermal spec power: 15 Watt; Package minimum power: 0 Watt; Package maximum power: 0 Watt; 

Detected Intel(R) Core(TM) i5-6360U CPU @ 2.00GHz "Intel(r) microarchitecture codename Skylake"
Jaketown, Ivytown, Haswell, Broadwell-DE Server CPU is required for this tool! Program aborted
Cleaning up
 Zeroed PMU registers

The core count is incorrect. i5-6360U has only 2 cores:

http://ark.intel.com/products/91156/Intel-Core-i5-6360U-Processor-4M-Cache-up-to-3_10-GHz

Edited: 11 months  ago

automate-eGPU.shapple_set_os.efi
--
late-2016 13" Macbook Pro nTB + Vega64@32Gbps-TB3 (Netstor HL23T) + macOS & Win10
late-2016 13" Macbook Pro nTB + GTX980/RX580@32Gbps-TB3 (Netstor HL23T) + macOS10.13 & Win10


ikir liked
ReplyQuote
wimpzilla
(@wimpzilla)
Reputable Member
Joined:1 year  ago
Posts: 333
March 31, 2017 5:59 pm  

The answer is simple, but I didn’t want to start an off-topic debate on benchmarks vs in-game performance. I will simply point out one thing: in the reviews on the top hardware sites for a CPU or GPU, which part of the test shows more time spent testing: the 2 graphs showing the 3DMark/Valley results, or the 15 other graphs showing in-game results at different resolutions, with different games, etc.?

Well, you have your answer, I think. This is also the difference between a cheap review and a well-done one, where you can form an opinion and decide whether or not to buy!

Test protocol:

-Same laptop, possibly with a true quad. Same bios, ME firmware, same windows, mac install, same drivers.

-Same cable.

-Same video adapter, same clocks.

-Same power supply.

-Both T82/T83 TBE box adapters, each test should refer to a precise firmware, both with the eGPU one both with the pci-e one.

-Same games with a built-in benchmark, like Tomb Raider, The Division, R6: Siege, Shadow of Mordor, GTA V, GRID Autosport, Grid Rally, GRID 2, DOOM, Hitman, METRO, etc.

-Same resolution 1080p, 1440p, 4K.

-Repeat each test a minimum of 3 times, recording low/max/avg fps; if you can, also record the frametimes, temperatures, usage, etc.

 

=> This protocol should be run twice: once on a TB2-only laptop, then again on a TB3-only laptop.

Hope it is clear. 🙂
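For the "record low/max/avg fps" step of the protocol above, a minimal sketch of the bookkeeping (the frame-time numbers are made up for illustration; a real run would log thousands of frames):

```python
# Summarize one benchmark run from per-frame times in milliseconds
# (hypothetical sample data).
frame_times_ms = [16.7, 16.9, 17.1, 33.4, 16.8, 16.6, 17.0, 50.2, 16.7, 16.9]

fps = sorted(1000.0 / t for t in frame_times_ms)              # per-frame fps, worst first
avg_fps = len(frame_times_ms) * 1000.0 / sum(frame_times_ms)  # time-weighted average
worst_1pct = fps[: max(1, len(fps) // 100)]                   # the slowest 1% of frames

print(f"min {fps[0]:.1f} / avg {avg_fps:.1f} / max {fps[-1]:.1f} fps")
print(f"1% low: {sum(worst_1pct) / len(worst_1pct):.1f} fps")
```

The 1% low is what captures the "150fps but constant stutter" case: two enclosures can share the same average fps while one has far worse frame-time spikes.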

What makes the real difference in games is the CPU speed/clock and how much cache it has. What allows an i7 dual-core to perform not so badly anyway is its high clock when boosting under turbo. The best example of that, if you are interested, is the Intel i3-7350K: it performs extremely well when multithreading is not the main focus.

 

 

 

Edited: 11 months  ago

2012 13-inch Dell Latitude E6320 + R9 270X@4Gbps-mPCIe (EXP GDC 8.4) + Win10
E=Mc²


ReplyQuote
Poblopuablo
(@poblopuablo)
Eminent Member
Joined:12 months  ago
Posts: 27
March 31, 2017 6:10 pm  
Posted by: wimpzilla

 

The answer is simple but didn’t want to begin an off topic on benchmark vs games performances.

..

Hope it is clear. 🙂

   

I thought the clarification was not about TB3 vs TB2. You would have to use different computers, which will likely not have the same specs, because it’s unlikely that the only difference between two computers is a single port (as far as I know). For example, a MacBook with TB2 won’t have the same CPU (and other specs) as a MacBook with TB3.

 

Posted by: kotlos

 

I think the best test would be comparing real-world performance (game performance or a professional application) with the same card and computer on a TI82 and a TI83 case.

 

I understood it as TI82 vs TI83 (you just switch out the GPU docks, so all the computer specs stay the same), not the actual TB2 vs TB3 ports, but rather the different speeds: the TI82 resembling TB3 and the TI83 resembling TB2, even though both actually use TB3 yet produce different results.

 

Edited: 11 months  ago

ReplyQuote
wimpzilla
(@wimpzilla)
Reputable Member
Joined:1 year  ago
Posts: 333
March 31, 2017 6:20 pm  

You can use both boxes on a laptop that is TB2 or TB3, I hope! After all, it was posted that a TB3 box on a TB2 laptop gets better results than a TB3 box on a TB3 laptop.

-TB3 laptop + TB3/TB2 box

-TB2 laptop + TB3/TB2 box

=> Compare results!

Edited: 11 months  ago

2012 13-inch Dell Latitude E6320 + R9 270X@4Gbps-mPCIe (EXP GDC 8.4) + Win10
E=Mc²


ReplyQuote
Poblopuablo
(@poblopuablo)
Eminent Member
Joined:12 months  ago
Posts: 27
March 31, 2017 6:25 pm  

Those laptops won’t have the same specs (the CPU, I can almost guarantee, is different), and that matters, along with the other specs. Two different PCs will have different BIOSes and drivers.


ReplyQuote
Poblopuablo
(@poblopuablo)
Eminent Member
Joined:12 months  ago
Posts: 27
March 31, 2017 6:27 pm  

I see what you’re saying, but can somebody go from a TB2 eGPU to a TB3 laptop?


ReplyQuote
wimpzilla
(@wimpzilla)
Reputable Member
Joined:1 year  ago
Posts: 333
March 31, 2017 6:29 pm  

It is not hard to swap the CPU/RAM/SSD between laptops, for the sake of keeping everything the same!

2012 13-inch Dell Latitude E6320 + R9 270X@4Gbps-mPCIe (EXP GDC 8.4) + Win10
E=Mc²


ReplyQuote
Poblopuablo
(@poblopuablo)
Eminent Member
Joined:12 months  ago
Posts: 27
March 31, 2017 6:33 pm  

Are there laptops that can:

(1) switch CPUs?

(2) have the same generation of CPU?

(3) have the same BIOS/mobo?

And if so, does one have TB2 and the other TB3?

I thought TB2 and TB3 were far enough apart that the CPU generations (such as 4700HQ vs 6700HQ) would differ, thus making CPU swaps incompatible, along with the mobo (different chipset) and RAM (DDR3 vs DDR4).

 

I might be wrong, though, but the chances seem slim.

 

Edited: 11 months  ago

ReplyQuote
wimpzilla
(@wimpzilla)
Reputable Member
Joined:1 year  ago
Posts: 333
March 31, 2017 6:40 pm  

I’m really not into the TB enclosure stuff; I don’t know anything about it and don’t want to for now. I would never choose a TB enclosure first for an eGPU; for me, only real PCIe lanes matter. I’m just trying to help you guys, who deal with it every day, to build a decent in-game test protocol.

Test, show graphs, give a feeling for the graphs (like: I get 150fps but it is unplayable due to constant stutter), and give a conclusion (like: yes, under METRO I lose 2 fps at 1080p, 5 fps at 1440p, etc.), so there either is or is not a difference in game.

If there is a problem, move on to a solution; if not, move on to the next show!

If you can’t change the CPU, just bring both to the same performance by clocking them to the same effective throughput, e.g. the 4700HQ to 4GHz and the 6700HQ to 3.7GHz; you can use Cinebench R15 or CPU-Z to evaluate the IPC performance.
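The clock-matching trick in that last paragraph can be written out as a one-liner; the single-thread scores below are hypothetical stand-ins for a Cinebench R15 run, not real measurements:

```python
# Match two CPUs on effective single-thread throughput (ipc * clock).
# Scores and stock clocks are hypothetical, for illustration only.
ipc_4700hq = 100 / 3.4  # per-GHz score: hypothetical ST score / stock GHz
ipc_6700hq = 112 / 3.5

clock_a = 4.0                                # run the 4700HQ at 4.0 GHz
clock_b = ipc_4700hq * clock_a / ipc_6700hq  # equivalent clock for the 6700HQ
print(f"6700HQ target clock: {clock_b:.2f} GHz")
```

With these made-up scores the 6700HQ would need about 3.7GHz to match the 4700HQ at 4.0GHz, which is the kind of pairing suggested above.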

Edited: 11 months  ago

2012 13-inch Dell Latitude E6320 + R9 270X@4Gbps-mPCIe (EXP GDC 8.4) + Win10
E=Mc²


ReplyQuote