Intel killing off the competition
As you all probably already know, Intel is one of the main giants behind Thunderbolt technology. However, it has become clear these past years that Intel was also the main culprit hindering the technology. From huge upfront licensing fees to months of delay waiting on Intel's approval and certification of a product, Intel has always been one of the major reasons there isn't a more diverse and affordable range of Thunderbolt products, including eGPUs. Thanks to them, we only have a handful of eGPU solutions available as of now, and even then, most of them have yet to be fully released to the public.
With all that said, I would like to know why Intel is doing this. If anything, it seems to me that they are doing themselves a disservice. Also, why hasn't some no-name Chinese manufacturer reverse-engineered the Thunderbolt specification to make whatever product they want without Intel's approval?
Current Setup: Acer Aspire VN7-592G / i7-6700HQ / 16GB DDR4 2133 MHz / Samsung 960 EVO M.2 + Samsung 850 EVO SATA III
Nvidia 960M 4GB dGPU / Nvidia 1060 6GB eGPU (Akitio Node)
Intel is not killing the competition: there is none to be had. Thunderbolt is owned and guided by Intel, and they are the only ones making the controllers. I do not know whether Intel is limiting the technology as a means to make a lot of money, or simply because it is a complicated flagship technology with their name on it, and as a result they want to make sure implementations work properly before they ship. But if I had to guess, I would place my money on the latter. Intel has no interest in limiting Thunderbolt; they have an interest in it being as widespread as possible. And Intel does not compete with its own customers (who are the OEMs, not the end users). It is not like they are making their own enclosures to compete with Akitio/Razer/PowerColor.
In any case, reverse-engineering Thunderbolt is not exactly an easy task. It is not as simple as hacking a few cables together (as is the case with ExpressCard/mPCIe/M.2): not only are the protocols themselves complex, they also support hotplug. Finally, making a (cheap) 20/40Gbps PHY is not trivial, either. Remember, Intel is not the only one who tried to design consumer PCIe-over-a-cable, but they are the only ones who actually managed to get it working at a relatively low price. That suggests the problem is more complicated than we can spot from here.
EDIT: In the Thunderbolt 1/2 days, I suspect the main limiting factor to Thunderbolt acceptance was Apple, rather than Intel. Intel designed Thunderbolt for Apple, hence the use of the mDP connector format at the time. I have no idea what the deal between Apple and Intel was back then, but it might be that Apple was also getting a cut of the cake, as well as being able to call some shots (for example, not allowing eGPU use, which would harm Apple's profits on higher-end systems), which may explain why PCs with Thunderbolt were rare. Once TB3 rolled around, it looks like the Apple/eGPU veil was lifted, and all that remains now is getting these complex systems actually working.
Which, as we know, isn't easy: even with modern enclosures, people still experience a lot of issues, even in supposedly supported configurations.
"Always listen to experts. They'll tell you what can't be done, and why. Then do it." - Robert A. Heinlein, "Time Enough for Love"
Intel creates the CPU, the host PCIe bridge, and the TB controller that attaches to it. It would make business sense, then, for them to steer the technology however best suits them, including preventing bolt-on CUDA/OpenCL eGPU processing units, a direct competitor to Intel's important CPU market.
We've seen Intel locking down more and more chipset features per generation that were previously used for eGPU tweaking, e.g. PCIe port lane width and enabling/disabling ports are all in ME FW, requiring Intel tools to change and dangerous BIOS flashing.