How video cards are made, or why those who criticize the 20X0 series of video cards are not quite right
Unfortunately, not all sites run synthetic tests, but they are nevertheless very important for understanding what is happening in the video card market. Let us prove this.
Consider the first graph:
We see that in this synthetic test the RTX 2080 ti is leading, followed by the Titan XP, RTX 2080, GTX 1080 ti, RX Vega 64, GTX 1080 and GTX 1070.
Consider the second graph:
We see that the RX Vega 64 is leading in the second synthetic test, followed by the RTX 2080 ti, Titan XP, GTX 1080 ti, RTX 2080, GTX 1080 and GTX 1070.
So what, you ask? The point is that the first graph is led by the 2080 ti, while on the second the RX Vega 64 literally tears its competitors apart. This means that in the 2080 ti, non-graphical computation was cut to a minimum to save die space, while in the RX Vega 64 a significant part of the chip area is given over to compute rather than to graphics. The RX Vega 64 is a server card that was thrown onto the gaming market. I think it is now unnecessary to explain why the RX Vega 64 overclocks so poorly, why it has such a high TDP and such a big chip. And why it performed so modestly. I think AMD released this card fully aware of what was happening, and that it consciously sacrificed consumer cards for the sake of the server and professional compute market.
In fact, it is even worse (this is my opinion, which I do not impose on anyone): the whole line of AMD cards starting from the 7000 series and GCN is server-oriented. Therefore, when many journalists enthusiastically praised the merits of the GCN architecture, I wrote back then that it was the sunset of AMD's graphics division, and I was right. The RX Vega 64 is the cherry on the cake of this madness and the last straw for me; I was very disappointed in the company's actions.
The proof is another graph:
Now there is a lot of talk that NVIDIA is bad, that the prices are high, that there is no competition. Well, who is to blame for the fact that there is no competition?
There is also a lot of talk that AMD supposedly did not invest much in R&D, and that once AMD moves to 7 nm it will show everyone. I declare with full responsibility: all this is nonsense. AMD's R&D investments have nothing to do with the current situation in the consumer market; the same Vega had enough money behind it and took a very long time to make, and in the end it fizzled. As for 7 nm: suppose AMD moves to 7 nm with this approach, and then what? NVIDIA cannot move to 7 nm as well? And everything will repeat itself. So 7 nm gives nothing without a change of philosophy.
Can AMD make a normal gaming card right now? Yes: it would be enough to throw the maximum of non-graphical computation out of GCN and get a decent card. Maybe it would not be able to fully compete with the GTX line, but it would be a good card, though this recipe completely contradicts AMD's philosophy. Moreover, if AMD took the 5000 or 6000 line and added DX12 support to it, it would be a leader. The fact is that by my calculations VLIW4 is about 30% more efficient than GCN per area × frequency. And there is nothing surprising here, as a superscalar architecture always loses in efficiency to a vector one. The secret of VLIW's retirement is simple: it is somewhat worse at non-graphical computation, so it was closed and replaced with Bulldozer-GCN. AMD is interested in the fat pro market, not in beggars who cannot pay 5,000 bucks for a graphics card.
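To make the "efficiency per area × frequency" comparison concrete, here is a minimal sketch of that metric. The numbers plugged in are made-up placeholders chosen only to illustrate the shape of the argument; they are not real benchmark scores or die measurements.

```python
# Hypothetical illustration of the "performance per (area * frequency)" metric
# used in the text to compare VLIW4 and GCN. All inputs are assumed
# placeholder values, NOT real chip data.

def efficiency(perf, area_mm2, freq_ghz):
    """Performance normalized by die area times clock frequency."""
    return perf / (area_mm2 * freq_ghz)

# Assumed example values (purely illustrative):
vliw4 = efficiency(perf=100, area_mm2=389, freq_ghz=0.88)   # smaller VLIW4-style chip
gcn   = efficiency(perf=110, area_mm2=550, freq_ghz=0.925)  # larger GCN-style chip

print(f"VLIW4-style efficiency: {vliw4:.4f}")
print(f"GCN-style efficiency:   {gcn:.4f}")
print(f"VLIW4 advantage: {vliw4 / gcn - 1:.0%}")
```

With these placeholder inputs the smaller chip comes out roughly a third more efficient per mm²·GHz, which is the kind of gap the article is describing.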
Having dealt with the efficiency of the architectures and the causes of the current situation in the video card market, let's move on to the announcement of the 20X0 video cards, on which tons of dirt were poured by some people on the forums. In general, everything revolves around accusations of high prices and a small performance increase compared to the 10X0 series. The "unnecessary" raytracing got its share too, while DLSS was hushed up in every possible way.
The reason for the small increase in performance is banal: about 1/4 of the chip is occupied by RT cores and about 1/4 by tensor cores, which leaves only about 1/2 of the area for the graphics part itself. Hence the high cost: the chip turned out very big, almost like a Titan.
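The die-area argument above is simple arithmetic; here is a back-of-the-envelope sketch. The area fractions come from the article's own estimate, while the absolute die size is an assumed round figure close to published numbers for the big Turing chip, not an official one.

```python
# Back-of-the-envelope sketch of the die-area budget described above.
# Fractions are the article's own estimates; die_area is an assumption.

die_area = 754  # mm^2, roughly the size quoted for the top Turing chip (assumed)

rt_fraction       = 0.25
tensor_fraction   = 0.25
graphics_fraction = 1.0 - rt_fraction - tensor_fraction

print(f"Area left for classic shading: {graphics_fraction:.0%} "
      f"(~{die_area * graphics_fraction:.0f} mm^2 of {die_area} mm^2)")
```

Under these assumptions only about half the silicon you pay for is doing traditional rasterization work, which is exactly why price went up faster than frame rates.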
To this we should add that the 12-nm process is practically the same 16-nm in terms of density. The RTX announcement to some extent strongly resembles the 6000-series announcement from AMD: maximum optimization and a small increase in performance with a small increase in computing resources.
As for raytracing, in my opinion 4K at 60 fps cannot be expected for another two generations. Judging by the available information, with ray-traced shadows you can play at no higher than 60 fps at 1080p, and with shadows + reflections at about 30 fps at maximum settings. The fact is that in practice the raytracing workload is shaded on the same cores as the shaders, and until either the number of transistors increases significantly or a separate hardware unit appears, the performance situation will not change, although there is hope for optimizations, various photon maps, etc. Contrary to the statements about its uselessness, in my opinion raytracing is at least as big a step as the one from DX8 to DX9.
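A rough ray-budget calculation shows why 4K60 is so far away. The 10 gigarays/s figure below is the advertised peak for the top Turing card; treat it as an optimistic upper bound, since real per-ray cost (shading, incoherent memory access) is much higher.

```python
# Rough ray-budget arithmetic behind the "4K60 can't be expected" claim.
# 10 gigarays/s is the vendor's advertised peak; real throughput is lower.

def rays_per_pixel(gigarays_per_s, width, height, fps):
    """How many rays per pixel per frame the stated ray rate allows."""
    return gigarays_per_s * 1e9 / (width * height * fps)

print(f"1080p60: {rays_per_pixel(10, 1920, 1080, 60):.0f} rays/pixel")
print(f"4K60:    {rays_per_pixel(10, 3840, 2160, 60):.0f} rays/pixel")
```

Even at the advertised peak, 4K60 leaves only on the order of twenty rays per pixel per frame, while convincing reflections plus shadows plus denoising eat that budget quickly; hence shadows-only at 1080p60 and shadows + reflections at about 30 fps.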
Why is everyone silent about DLSS?
As you can see, enabling DLSS mode leaves the cards of the previous generation far behind. Yes, the 2080 is quite playable in 4K.
What is DLSS? It is, in a sense, hardware anti-aliasing, where the smoothing is done with little or no participation of the GPU's shader cores, in a separate block. That is, the GPU runs approximately as fast as if it were working without anti-aliasing at all. The difference against the old anti-aliasing is large, but without any anti-aliasing the gap shrinks to the difference in "raw" performance.
DLSS has two drawbacks. First, DLSS requires support from the game (at least as far as is clear now); it cannot simply be enabled for any game, so the average performance of the 20X0 now depends on how many games NV gets DLSS into, and this is a big drawback. Secondly, anti-aliasing could have been implemented with less effort via a separate dedicated hardware unit, which would be more compact and would not require support from games. Why did NV implement hardware anti-aliasing through the less efficient tensor cores? Because NV wants to sell these cards both to pros and to gamers, and a universal solution is almost always less effective than a specialized one.
Is the release of the 20X0 a complete catastrophe, as the forum "analysts" shout? No. Firstly, much will depend on how many games, including already released ones, get DLSS and what its average efficiency turns out to be.
Secondly, it would be very good if raytracing could be combined with super resolution on the tensor cores, from 1080p to 4K. (Super resolution on NV's tensor cores has already been demonstrated; it only remains to find out whether it is compatible with raytracing.)
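The appeal of the 1080p-to-4K super-resolution idea mentioned above is plain pixel arithmetic; here is a minimal sketch. This is just the pixel-count ratio, not a model of the actual upscaling network.

```python
# Pixel arithmetic behind super resolution from 1080p to 4K: shade few
# pixels, display many. Resolutions are the ones named in the text.

internal = (1920, 1080)   # internal render resolution
target   = (3840, 2160)   # displayed (4K) resolution

shaded    = internal[0] * internal[1]
displayed = target[0] * target[1]

print(f"Shaded pixels per frame:    {shaded:,}")
print(f"Displayed pixels per frame: {displayed:,}")
print(f"Shading work saved by rendering at 1080p: {1 - shaded / displayed:.0%}")
```

Shading a quarter of the pixels and letting the tensor cores fill in the rest is what would make ray-traced effects affordable at 4K, if the two features prove compatible.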
I also want to add that there is no competition, and there will be none at least until the release of the new architecture from AMD, and what kind of architecture that will be is still a big question.
Should you buy a cheap 10X0? It depends on whether you believe in the new technologies. If not, then buy without a doubt; but if, say, NV gets DLSS into many games, then you may be left the fool, having counted on the "raw" power of the 10X0.