I know you are talking graphics
Nope, CPUs. Ryzen 5.
Are Intel processors still better than AMD? I switched to Intel years ago and haven't looked back.
In terms of performance per dollar, AMD is out in front.
Intel maintains a SLIGHT advantage in raw gaming if you have more money than brains, but for normal people that money would be better put into a better video card than the CPU.
In terms of general compute, AMD is WAY out in front, as Zen 2 -- the entire Ryzen 3600-and-up line -- now has higher IPC (instructions per clock) than Intel, ships with better clocks at each price point, and comes with more cores.
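To put rough numbers on why IPC matters as much as clock speed, here's a toy model. The IPC and clock figures below are made-up illustrative values, not benchmarks of any real chip:

```python
# Toy model: relative single-thread performance is roughly IPC x clock.
# The numbers below are hypothetical, chosen only to illustrate the tradeoff.

def single_thread_perf(ipc, clock_ghz):
    """Relative performance score: instructions per clock times clocks per second."""
    return ipc * clock_ghz

# Hypothetical chips: B has ~10% higher IPC, A tries to compensate with clocks.
chip_a = single_thread_perf(ipc=1.00, clock_ghz=5.0)   # high clock, lower IPC
chip_b = single_thread_perf(ipc=1.10, clock_ghz=4.6)   # lower clock, higher IPC

print(f"chip A: {chip_a:.2f}, chip B: {chip_b:.2f}")
# The higher-IPC part keeps pace despite running 400 MHz slower.
```

That's why "5 GHz!" on a spec sheet tells you nothing by itself.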
In any heavily multithreaded task, any Zen 2 chip -- 3600, 3600X, 3700X, 3800X, 3900X, etc. -- is blowing Intel out of the water, for the simple fact that you get more threads, and the "Infinity Fabric" (AMD's term for its die-to-die interconnect) lets the cores share memory and cache more efficiently than Intel's parts do.
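The "more threads wins" point can be sketched with Amdahl's law, which says how much extra cores actually buy you once you account for the part of a job that can't be parallelized. The 95%-parallel figure below is a hypothetical workload, roughly what you'd see in rendering or video encoding:

```python
# Amdahl's law: speedup from n hardware threads when a fraction p of the
# workload is parallelizable. p = 0.95 is a made-up render/encode-style job.

def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

for threads in (8, 12, 16, 24):
    print(f"{threads:>2} threads -> {amdahl_speedup(0.95, threads):.2f}x speedup")
```

The curve flattens, but for thread-heavy work the extra cores keep paying off well past what Intel was shipping at the same price.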
The entire Zen 2 launch last year left Intel flat-footed, with their new 10th-gen parts feeling like nothing more than Intel desperately throwing higher clocks and more voltage at the problem. It's the Athlon XP era all over again, when a 2.2 GHz Athlon XP delivered performance equal to a 3 GHz P4.
Much of this comes from AMD's new "chiplet" design, where they manufacture the cores as small discrete dies -- on the consumer parts, an 8-core chiplet -- and then tie them together on the package substrate through a separate I/O die, with the wiring between the chiplets and that die again being called "Infinity Fabric".
The chiplet design also drastically reduces waste, lowering their manufacturing costs. In CPU fabrication a LOT of silicon usually just isn't viable and gets sent to the dump. By making small multi-core chiplets, if one or even two cores are bad you can still use the rest of the perfectly good silicon; with the old monolithic-die method the entire thing would have had to be pitched. This is part of why Intel's yields of late are causing supply-chain problems: their struggle to move past 14 nm, mated to the fact that if any part of a monolithic die is dead the whole chip is junk, results in a much higher waste rate -- particularly when the manufacturing process itself is flawed. Intel is still struggling to get its 10 nm process working in its own fabs, while AMD is on TSMC's 7 nm.
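You can see the yield math with a first-order defect model, where the chance of a die coming out with zero defects falls off exponentially with its area. The defect density and die areas below are made-up round numbers for illustration, not TSMC's or Intel's actual figures:

```python
import math

# Crude yield sketch: P(zero defects) = exp(-D * A), a standard first-order
# Poisson yield model. D and the areas are hypothetical illustrative values.

def die_yield(defect_per_mm2, area_mm2):
    return math.exp(-defect_per_mm2 * area_mm2)

D = 0.002                        # hypothetical defects per mm^2
big_die = die_yield(D, 250)      # one big monolithic die
chiplet = die_yield(D, 75)       # one small chiplet (use several per CPU)

print(f"250 mm^2 monolithic die, fully working: {big_die:.1%}")
print(f" 75 mm^2 chiplet, fully working:        {chiplet:.1%}")
# And a chiplet with a dead core isn't scrap -- it can still be binned into
# a lower-core-count part, so effective chiplet yield is higher still.
```

Smaller dies win twice: fewer defects land on each one, and the ones that do get hit can often still be sold.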
Because this "fabric" is designed to be scalable, the same chiplets can be used across ALL designs for ALL tasks: you just bin them based on what performance the silicon can handle, and change the number of chiplets for however many cores you want for the job.
So the bottom of the Zen 2 line -- the 3600 -- is a single chiplet delivering 6 cores and 12 threads: an 8-core die with two cores disabled (often due to manufacturing defects), plus the I/O die.
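The binning described above amounts to a simple sort on how many cores passed testing. The SKU names here are real Ryzen parts, but this mapping is my own simplification, not AMD's actual binning rules:

```python
# Toy binning sketch: one 8-core chiplet design, different SKUs depending on
# how many cores tested good. Illustrative only -- not AMD's real criteria.

def bin_chiplet(good_cores):
    if good_cores == 8:
        return "8-core part (e.g. Ryzen 7 3700X)"
    if good_cores >= 6:
        return "6-core part (e.g. Ryzen 5 3600, two cores fused off)"
    return "scrap / low-end salvage"

print(bin_chiplet(8))
print(bin_chiplet(6))
```

Same silicon, same production line, several price points.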
What gets crazy is the high end, where AMD now has the $750 US Ryzen 9 3950X delivering 16 cores and 32 threads at 3.5 GHz base, 4.7 GHz boost... and that's just the consumer line on socket AM4, which is fun since that one socket has carried all the AMD consumer CPUs that matter for the past couple of generations -- unlike Intel, which changes the socket every time the wind blows.
It's also fun that AMD beat Intel to market with PCIe Gen 4, doubling again the bandwidth available to external hardware, and provides 24 PCIe lanes across the entire product line. That 24 might sound inferior to some Intel offerings, but one needs to remember that we really don't do SLI anymore, and the chips also provide dedicated SATA and NVMe lanes in ADDITION to the general-purpose ones. The X570 chipset then takes four of those Gen 4 lanes and multiplexes them out into a pile of additional lanes for on-board devices. Between the on-CPU lanes and the chipset you get a lot more bang for your buck on device speeds.
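If you want to see what "doubling the bandwidth" works out to per lane, the arithmetic is simple: PCIe Gen 4 runs 16 GT/s against Gen 3's 8 GT/s, and both use 128b/130b encoding:

```python
# PCIe per-lane bandwidth. Gen 4 doubles Gen 3's transfer rate
# (16 GT/s vs 8 GT/s); both generations use 128b/130b encoding.

def lane_bandwidth_gbs(gigatransfers, encoding=128 / 130):
    """Usable GB/s per lane: GT/s x encoding efficiency / 8 bits per byte."""
    return gigatransfers * encoding / 8

gen3 = lane_bandwidth_gbs(8)    # ~0.985 GB/s per lane
gen4 = lane_bandwidth_gbs(16)   # ~1.969 GB/s per lane

print(f"Gen3 x16: {gen3 * 16:.1f} GB/s, Gen4 x16: {gen4 * 16:.1f} GB/s")
```

So a Gen 4 x16 slot moves roughly 31.5 GB/s each way -- or put another way, a Gen 4 x8 link matches a Gen 3 x16, which is part of why the lane count matters less than it used to.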
... and that's just the consumer line. There's also Threadripper, for ultra-high-end workstations, running circles around anything Intel offers, and the Epyc data-center chips. The Threadripper 3990X is nearly identical in general specs to the Epyc 7742: both are 64 cores and 128 threads, though the Threadripper runs a 2.9 GHz base and boosts to 4.3 GHz while the Epyc runs a 2.25 GHz base and boosts to 3.4 GHz. The lower clocks come from Epyc being expected to go into 2U rackmounts, while Threadripper goes into more spacious desktop-style cases.
The big difference between Epyc and Threadripper, though, is maximum supported memory. Threadripper "only" supports up to 1 TB, while Epyc can support an utterly insane 4 TB of RAM. For comparison, the Zen 2 Ryzen 5 through 9 parts top out at 128 GB.
... and again, Intel's best high-end data-center/server/compute options cost more and can't even touch those core counts, even with multiple CPUs. And if that somehow isn't enough, you can go dual Epyc.
Though for a dual Epyc Rome setup with max RAM you could instead buy a fully loaded BMW X5 -- we're talking $75K+ by the time you get the motherboard, both CPUs, and all that memory.
But the kicker is that the $7,000 Epyc Rome server CPU is built up from the exact same chiplets and architecture as the $175 Ryzen 5 3600!
Commonality of parts is smart logistics -- another place where Intel's "let's change everything because we can" habit is biting them.
So in terms of CPUs, AMD is kicking Intel's ass up, down, left, right, and sideways... and that's just Zen 2 -- Zen 3 is supposed to drop later this year! As it is, for the first time EVER, AMD has real momentum: at some major DIY retailers they've actually been outselling Intel 3 to 1. That's a crazy shift. If you care about building "the best gaming rig" and money is no object, Intel MIGHT still be your choice, but if you care about much of anything else, you'd be a moron not to buy a 3rd-gen Ryzen.
AMD's also doing better on the graphics side, though nVidia is still way out in front on overall performance. The AMD Radeon 5700 XT competes well on price-performance with the mid-range nVidia RTX 2070 cards, but they don't have anything out right now that comes close to the 2080s. The old Vega architecture was a dead end for graphics -- even though it delivers some of the best compute per clock and compute per dollar of any video card -- but the new Navi architecture is a big step forward. Problem is they didn't release a high-end card first... though we should be hearing about the rumored 5900 XT in the coming months.
Problem is, nVidia IS supposedly announcing the RTX 30x0 series in March, which is rumored to be another generational shift like the 10xx series was. I'm probably gonna pick up an RTX 3070 once the price lines up with my wallet and the first-gen releases have all been patched up and fixed by the early-adoption suckers.
Big advice with new hardware? Let other chumps and rubes try it on release.