https://egpu.io/forums/thunderbolt-enclosures/mantis-venus/paged/5/#post-5797
It shows no difference between 2200 MB/s and 1100 MB/s CUDA-Z H/D (host-to-device) results tested on the same host.
So, on the same host, whether CUDA-Z reports 2200 MB/s or 1100 MB/s, Unigine Heaven & Valley showed no difference.
People believe what they want to believe; even when speaking with data, the data can be just the data people want to show.
But anyway, the MATLAB value should be meaningful, and until the bandwidth can be verified in a solid way, I do believe there is something wrong with the TI83.
I hope @DanKnight could give some more professional information.
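For anyone who wants to cross-check the CUDA-Z H/D number outside of CUDA-Z or MATLAB, a minimal sketch along these lines should reproduce the same host-to-device figure using the CUDA runtime directly, with pinned memory and CUDA events. The 256 MiB buffer size and 20 iterations are arbitrary choices of mine, not settings taken from either tool, and error checking is omitted for brevity.

```cuda
// bandwidth_h2d.cu -- minimal host-to-device bandwidth check
// build: nvcc bandwidth_h2d.cu -o bandwidth_h2d
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    const size_t bytes = 256ull << 20;   // 256 MiB per copy (arbitrary)
    const int    iters = 20;             // number of timed copies (arbitrary)

    void *host = nullptr, *dev = nullptr;
    cudaMallocHost(&host, bytes);        // pinned host buffer, as CUDA-Z uses
    cudaMalloc(&dev, bytes);

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    // warm-up copy so the timed loop sees a steady state
    cudaMemcpy(dev, host, bytes, cudaMemcpyHostToDevice);

    cudaEventRecord(start);
    for (int i = 0; i < iters; ++i)
        cudaMemcpy(dev, host, bytes, cudaMemcpyHostToDevice);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);          // elapsed time in ms
    double mib_per_s = (double)bytes * iters / (ms / 1000.0) / (1 << 20);
    printf("Host-to-device: %.0f MiB/s\n", mib_per_s);

    cudaFree(dev);
    cudaFreeHost(host);
    return 0;
}
```

If this agrees with the MATLAB number but not with CUDA-Z, that would at least tell us which tool to distrust.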
Late 2016 MacBook Pro 13" Touch Bar + AKiTiO Node + GTX 1050 Ti 4GB + Windows
Despite benchmark results, I'm still getting slowdowns in WoW.
One more example: in Assassin's Creed: Black Flag I get 60 fps when the character stands in one place or walks slowly, but if I run in one direction, drops to around 40 fps appear from time to time.
It is definitely not an SSD or CPU bottleneck.
P.S. About Windows 8: it is actually Win10; the benchmark just reports the wrong name.
2017 13" HP Spectre 13" x360 + GTX1080Ti@32Gbps-TB3 (AKiTiO Node) + Win10
2016 13" HP Spectre X360 [7th,2C,U] + GTX 1080 Ti @ 32Gbps-TB3 (AKiTiO Node) + Win10 [build link]
@Goalque, what is the CPU in both models? I know the CPU does matter for those benchmarks. Can you test the TB2 and TB3 boxes with the MBP 2013 (non-Touch Bar), just via an adapter?
I agree. The TB2 + quad-core CPU performs better than TB3 + dual-core CPU when running Valley on the eGPU monitor.
automate-eGPU EFI ● apple_set_os.efi
Mid 2015 15-inch MacBook Pro eGPU Master Thread
2018 13" MacBook Pro [8th,4C,U] + Radeon VII @ 32Gbps-TB3 (ASUS XG Station Pro) + Win10 1809 [build link]
My Thunder3 reaches 1670 MiB/s connected via Thunderbolt 3 on a MacBook Pro 13" Touch Bar, macOS 10.12.4.
That is normal. I play Elder Scrolls Online in 4K and go from 60 fps to 30 fps depending on the scene and the elements on screen.
MacBook Pro 13" 2020 Touch Bar M1 8-core CPU 8-core GPU - 16GB unified memory - 512GB PCIe SSD
MacBook Pro 13" 2020 Touch Bar i7 quad-core 2.3Ghz - 16GB RAM - 1TB PCIe SSD
my awesome Radeon VII eGPU
my Mantiz Venus extreme mod with Sapphire Nitro+ RX Vega 64
2018 13" MacBook Pro [8th,4C,U] + Radeon VII @ 32Gbps-TB3 (Mantiz Venus) + macOS 10.15 [build link]
@ikir: Your CPU is bottlenecking.
automate-eGPU EFI ● apple_set_os.efi
Mid 2015 15-inch MacBook Pro eGPU Master Thread
2018 13" MacBook Pro [8th,4C,U] + Radeon VII @ 32Gbps-TB3 (ASUS XG Station Pro) + Win10 1809 [build link]
It really amuses me to see that nobody has been able to put together and run a decent in-game benchmark instead of throwing around useless Valley scores or basic benchmarks. In the review world, synthetic benchmarks mean nothing and are not representative of GPU compute power, nor of real in-game performance! It feels like the test was done lightly just to point out what you wanted!
In a decent review and test build, you put the hardware under stress and check the output in order to offer the consumer a conclusion and an opinion about the hardware.
There is no mystery: hardware bandwidth matters in games. People noticed a big difference and reported it; it is unlikely that this difference will not impact performance somewhere.
So I kindly ask you to really stop trying to give an opinion without solid data; it is too weak a basis to build an opinion on. Speaking as a long-time hardware tester, IMO I could not find a single decent, well-done test that finally gives us a clear result on TB2/TB3 performance comparing the firmware, so that we could close this topic by pinpointing the problem and move on to a new one looking for a solution!
2012 13-inch Dell Latitude E6320 + R9 270X@4Gbps-mPCIe (EXP GDC 8.4) + Win10
E=mc²
2012 15" Lenovo ThinkPad T530 [2nd,4C,Q] + R9 270X @ 4Gbps-mPCIe2 (EXP GDC 8.4) + Win10 [build link]
Here is my 3DMark FireStrike score of 12,872 with a GTX 1080 in the Node: http://www.3dmark.com/3dm/18965848?
Without the Node, that card is over 22,000.
To do: Create my signature with system and expected eGPU configuration information to give context to my posts. I have no builds.
I think the best test would be to compare real-world performance (game performance or a professional application) with the same card and computer in a TB2/TI82 case and a TB3/TI83 case.
From what I have seen so far, nobody has shown a significant difference in a real-world application in such a test.
If there is no difference, then either the DeviceToHost speed metric is wrong, or none of the real-world applications actually depend on this metric, or there is another bottleneck that becomes significant before this one does. If it is the latter, then a firmware update (from Intel?) should address this issue.
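To make the DeviceToHost metric concrete: a sketch like the one below isolates just that one number with the CUDA runtime (pinned host buffer, CUDA events). The buffer size and iteration count are my own assumptions; the idea is simply that if the TI82/TI83 difference only shows up in this direction, a targeted measurement like this should catch it even when game FPS does not.

```cuda
// bandwidth_d2h.cu -- isolate the device-to-host metric only
// build: nvcc bandwidth_d2h.cu -o bandwidth_d2h
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    const size_t bytes = 256ull << 20;   // 256 MiB per copy (arbitrary)
    const int    iters = 20;             // number of timed copies (arbitrary)

    void *host = nullptr, *dev = nullptr;
    cudaMallocHost(&host, bytes);        // pinned host buffer
    cudaMalloc(&dev, bytes);

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    // warm-up copy before timing
    cudaMemcpy(host, dev, bytes, cudaMemcpyDeviceToHost);

    cudaEventRecord(start);
    for (int i = 0; i < iters; ++i)
        cudaMemcpy(host, dev, bytes, cudaMemcpyDeviceToHost);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);          // elapsed time in ms
    printf("Device-to-host: %.0f MiB/s\n",
           (double)bytes * iters / (ms / 1000.0) / (1 << 20));

    cudaFree(dev);
    cudaFreeHost(host);
    return 0;
}
```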
To do: Create my signature with system and expected eGPU configuration information to give context to my posts. I have no builds.
I have seen a difference in the games I have played. But using something like 3DMark is generally accepted as a good (less subjective) method to compare performance between setups and cards. It is essentially measuring FPS, which is a good indicator of performance.
To do: Create my signature with system and expected eGPU configuration information to give context to my posts. I have no builds.
Hi
I did the 3DMark test and saw no difference between 2200 MB/s and 1100 MB/s.
Mantiz: ● ●