Difference between ps5 and xbox is less than 20%

CPU: less than 10% clockspeed difference
GPU: less than 20% tflop difference
RAM: different but overall very similar configurations
SSD: faster on PS5 but both are fast enough
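Quick sanity check on those figures (a rough sketch using the publicly reported specs; the PS5 clocks are variable peaks, so treat its numbers as an upper bound):

def tflops(cus, ghz):
    # FP32 TFLOPS = CUs * 64 shader lanes * 2 FLOPs per lane (FMA) * clock in GHz / 1000
    return cus * 64 * 2 * ghz / 1000

xsx_tf = tflops(52, 1.825)   # ~12.15 TF at a fixed clock
ps5_tf = tflops(36, 2.23)    # ~10.28 TF at the peak variable clock
print(f"GPU gap: {100 * (xsx_tf / ps5_tf - 1):.1f}%")   # ~18%, under 20
print(f"CPU gap: {100 * (3.8 / 3.5 - 1):.1f}%")         # ~8.6%, under 10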

Attached: 1565393988513.jpg (1920x1080, 135.2K)

Other urls found in this thread:

forum.beyond3d.com/threads/gcn-and-mixed-wavefronts.59610/
twitter.com/SFWRedditVideos

cope

Attached: Screenshot_20200316-182149.jpg (1440x2042, 276.87K)

can someone explain to me? cerny said the advantage of higher clocks over more compute units is that higher clocks make everything, like rasterization, that much faster. but doesn't having more compute units also make that stuff faster?

Bargaining

Just bullshit to hide the fact their console is considerably weaker.

I mean I understand how it would be faster if your task can only be split across the same number of compute units the ps5 has, or fewer, since then the xbox would have idling CUs. but I don't know how that applies to shit like rasterization, don't you have millions of triangles saturating your compute units constantly? idk

reminder that xboneX is much more powerful than the ps4 but ps4 games generally still look better.

this tflop meme needs to fucking stop.
it's literally the "bit wars" all over again.

especially this time, since the difference is so fucking small compared to ps4 vs xbone, which also wasn't that big of a difference

Rasterisation isn't a volume-intensive process: it doesn't require a lot of bandwidth on the motherboard as such, because you're not going to be rasterising to incredibly large resolutions.

yeah but if you have like 5 million triangles on screen to rasterize then you're going to have stuff for even 10 000 compute units to do right?

Dude he's only 20% taller

oh yeah your xbox or playstation is your poor mans lamborghini or ferrari lmao

ROPs crunch pixels per clock. If each system has the same ROP count, the higher clocks would give the PS5 more Gpixel/s.
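Rough numbers for that, assuming both back ends have the same 64 ROPs (not something either company has confirmed at this point):

ROPS = 64                    # assumption: identical ROP count on both GPUs
ps5_fill = ROPS * 2.23       # ~142.7 Gpixel/s at the PS5's 2.23 GHz peak clock
xsx_fill = ROPS * 1.825      # ~116.8 Gpixel/s at the XSX's fixed 1.825 GHz
# one pixel write per ROP per clock, so fill rate scales directly with clock speed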

not in multiplats

aren't rops tied to compute units though? like his point was that fewer compute units clocked higher is what he supposedly prefers over more compute units clocked lower (assuming equal tflops in both scenarios)

It will be a 20% difference in price then.
That will dictate what consumers will want to buy in the end.

>aren't rops tied to compute units though
The ROPs are GPU back end, not directly tied to the CU.

alright, I thought they were just hardware inside the compute units

I CAN PLAY EVERY SINGLE ONE OF MY OLD XBOX GAMES!

Attached: 1571720715622.jpg (464x636, 67.87K)

Measuring the power of a console should be the sum of all its parts, but that's not how we measure consoles. We almost always use the GPU to measure consoles. In this case the XBSX is over 20% better than the PS5. PS5's specs are theoretical maxes from variable clocks, aka boost mode, so it's more than that. We almost never use the CPU to measure a console's performance unless it's PS3 CELL 12-bit precision bullshit theoretical FLOPS, which should never have mattered anyways.

Wasn't the difference between Xbone and PS4 even smaller, but Xbone games still ran noticeably worse?

Attached: huh....jpg (510x346, 27.05K)

If sony brings the ps5 in at 400 eurodollars with a decent lineup then the next xbox will have no chance of winning.

PS4 was 1.84TF, xbone original 1.24TF, this is a 50% difference

Nope, they're not part of the CU at all.
The other half of Cerney's point was regarding programming for the wavefront to keep the ALU utilization high. Less CU with higher execution speed is theoretically easier to program for and extrapolate the most performance from even if more CU would provide higher potential performance. Its peak sperg shit that only a handful of programmers ever deal with.
forum.beyond3d.com/threads/gcn-and-mixed-wavefronts.59610/
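Toy illustration of the utilization point (made-up dispatch size, nothing to do with how real drivers actually schedule work): if the work doesn't split evenly across the CUs, the wider GPU leaves more of itself idle.

import math

def utilization(workgroups, cus):
    # each CU takes one workgroup per "round"; a ragged last round leaves CUs idle
    rounds = math.ceil(workgroups / cus)
    return workgroups / (rounds * cus)

print(utilization(72, 36))  # narrow-and-fast layout: 1.00, both rounds full
print(utilization(72, 52))  # wide-and-slow layout: ~0.69, second round half empty
# with huge dispatches (millions of fragments) both converge toward 1.0,
# which is why this mostly matters for small or mixed compute workloads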

Cerny has legit actual autism and can’t speak in layman’s terms. In this case he’s full of shit and trying to tout the super special sauce magical fairy dust SSD that’s faster than my work’s enterprise SAN array of $2000 SSDs in IOPS. Which is also bullshit, it’s faster than what’s in the XBSX but nobody will ever notice at these speeds

Lmao
RDR2 on Xbox One X absolutely dabs on every PS4 game ever released

Attached: seethestation5.jpg (894x1093, 118.13K)

this ignores that if they're tied to the compute units then they should get clockspeed benefits on ps5, right?

like MS is saying it's 12TF but 25TF with ray tracing, so is it then 10.28TF PS5 but like 21TF with ray tracing?

according to that thread it seems to legit make a difference on GCN but does it do anything on RDNA2?

Yeah, no matter how you cut it, the difference isn't that great.
It's like PS4 Pro vs Xbox One X, just with newer hardware.

it's not even close, and that's with Sony dishonestly inflating the GPU numbers

Attached: consoles.png (1392x816, 66.94K)

RDNA is just an evolution of GCN's big SIMD engine design. The primary difference is that the GCN family arch uses 4x16 SIMD lanes per CU, while the RDNA family arch uses 2x32 SIMD lanes, which is more forgiving for graphics workloads right off the bat. Wavefront considerations are still a thing.
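Back-of-the-envelope version of that width difference (simplified; ignores dual compute units, scheduling, and wave64 support on RDNA):

# GCN: a 64-wide wavefront runs on a 16-lane SIMD, so one instruction
# takes 4 clocks to issue across the whole wave
gcn_clocks_per_instruction = 64 / 16   # 4

# RDNA: a native wave32 on a 32-lane SIMD issues in a single clock
rdna_clocks_per_instruction = 32 / 32  # 1

# lower issue latency per wave means fewer tricks are needed to hide stalls,
# hence "more forgiving for graphics workloads"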

okay so for the layman, the excuse of having higher clocks and fewer compute units is not total bullshit and will maybe allow a bit easier utilization of the PS5 hardware?

you can derive the figures in the OP from that image

>20% difference in the most liberal interpretation
Uh oh

A higher clock is cheaper than more compute units because it's a smaller die.
But they'd better cool the thing well or else.

well yeah it seems to be about a 20% number-crunching difference overall on the GPU

The differences are larger than the difference between base Xbox One and PS4 and we know how that went. Begin coping now Sonybros

on the other hand the die has to be able to handle higher voltages because of the increased frequency, making it potentially not that much cheaper

ROPs are the guys that actually draw the triangles.
They get the pixel position, UVs etc. (based on previous work done on the CUs), then they ask a CU to calculate the pixel color, then they plot the pixel color.
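A tiny toy version of that flow, just to show the split between the fixed-function steps and the programmable pixel shading (a sketch of the general idea, not how any real GPU or driver is written):

def pixel_shader(x, y):
    # the programmable bit that would run on a CU
    return (x * 16 % 256, y * 16 % 256, 128)

def inside(v0, v1, v2, p):
    # sign test against the three triangle edges
    def edge(a, b):
        return (p[0] - a[0]) * (b[1] - a[1]) - (p[1] - a[1]) * (b[0] - a[0])
    e = [edge(v0, v1), edge(v1, v2), edge(v2, v0)]
    return all(x >= 0 for x in e) or all(x <= 0 for x in e)

def rasterize(v0, v1, v2):
    # crude bounding-box scan standing in for the fixed-function rasterizer
    for y in range(min(v0[1], v1[1], v2[1]), max(v0[1], v1[1], v2[1]) + 1):
        for x in range(min(v0[0], v1[0], v2[0]), max(v0[0], v1[0], v2[0]) + 1):
            if inside(v0, v1, v2, (x, y)):
                yield x, y

framebuffer = {}
for x, y in rasterize((0, 0), (8, 0), (0, 8)):
    framebuffer[(x, y)] = pixel_shader(x, y)   # the ROP-ish step: write the final color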

VARIABLE
CLOCK
RATES

I probably won't buy either of them but fuck Sony dropped the ball.

1.84tf PS4 vs 1.24tf Xbone is a 50% difference so umm idk

they wanted to be in double digits, 12 vs 9 is a bad look

>sony
>inflating anything
This is all AMD hardware regardless of whose name is on it. There's no reason to doubt Zen2 CPU cores hitting 3.5ghz, or RDNA2 arch hitting 2.2ghz. Navi 10 aka the 5700XT has no trouble overclocking to 2.1ghz, and RDNA2 is supposed to bring a 50% improvement in perf/watt over RDNA1.

It's not total bullshit, but it is very in-depth technical autism. I seriously doubt that most developers ever do this extensive optimization to minimize wasted cycles and get the absolute most performance out of the limited GPU in these consoles. Potentially the PS5 will be slightly easier for an autist savant dev to reach their performance targets on when dealing with a mixed graphics and compute workload.

$81,000 is less than a 20% decrease of $100,000 but I still know which one I'm picking.

>CPU: less than 10% clockspeed difference
>GPU: less than 20% tflop difference
>RAM: different but overall very similar configurations
>SSD: more than 200% faster on PS5
t.phil

Indeed.
So I suppose sony and microsoft are betting on the process node here.

I think it's a bit fishy that the max is 3.5GHz for the CPU and 2.2GHz for the GPU, making me assume it's never going to be both at once, and that having both under load means compromising on both

NOOOOOOOOOOOO

>I think it's a bit fishy that the max is 3.5GHz for the CPU and 2.2GHz for the GPU, making me assume it's never going to be both at once, and that having both under load means compromising on both
That'd be a pretty good guess, because every APU thus far has used this load balancing behavior between the CPU and GPU.
The PS5 probably draws less power than the new Xboner, but won't have as much CPU performance when the GPU is being highly taxed.
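Toy model of that shared budget (all the wattage numbers and the back-off behaviour here are made up; the actual balancing algorithm isn't public):

POWER_BUDGET_W = 200.0   # hypothetical total SoC budget

def clocks(cpu_load, gpu_load):
    # hypothetical full-power draw at peak clocks (made-up numbers)
    cpu_demand = 60.0 * cpu_load    # watts the CPU would want at 3.5 GHz
    gpu_demand = 180.0 * gpu_load   # watts the GPU would want at 2.23 GHz
    demand = cpu_demand + gpu_demand
    scale = 1.0 if demand == 0 else min(1.0, POWER_BUDGET_W / demand)
    return 3.5 * scale, 2.23 * scale  # both clocks back off together when over budget

print(clocks(0.3, 1.0))  # light CPU load: 198 W demanded, both blocks hold peak clocks
print(clocks(1.0, 1.0))  # both pegged: 240 W demanded, clocks drop roughly 17%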

It's 1.4TF for the Xbox One S and its CPU is better, yet the S still looks and performs markedly worse than the base PS4.

it was 1.24tf on release, and the console used ddr3 with way lower bandwidth on top of that, though it has some esram

20% is a lot when we're talking 30fps games. A playable 30fps game on XBSX would be an unplayable 24fps on PS5 at the same resolution, 20% being 6 frames when talking 30fps. It's more than 20% too, because the PS5 specs are boost mode maximums. PS5 is still gonna need checkerboard upscaling to 4K, but from 1800p
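The frame-time arithmetic behind that, assuming performance scales one-to-one with the TFLOP gap (which it rarely does exactly):

target_fps = 30
gap = 0.20                             # the ~20% GPU shortfall being argued about
slower_fps = target_fps * (1 - gap)    # 24 fps
frame_time_fast = 1000 / target_fps    # ~33.3 ms per frame
frame_time_slow = 1000 / slower_fps    # ~41.7 ms per frame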

Snoybois trying to damage control

Irrelevant.

oh okay

>turns shadow resolution from ultra to very high
>gains that 6fps
lmao

instead of variable wattage, the wattage is fixed, so they know exactly how hot it can possibly get

Raytraced shadows are probably either making this worse or better.
Depth-based shadows are such a huge piece of shit that I bet raytraced shadows will be faster than depthshit on the geforce 30xx

I'm assuming the PS5 will be like 20% behind and the visuals will be toned back that much

Both have ray-tracing so the same logic applies.
Also this is like saying the PS4 Pro was 8.4TF with FP16, more of a half truth in real world terms.

Honestly, it's not that great. But it's definitely not an Xbone vs PS4 scenario like back in 2013. What was that, a 35 to 40% difference?

I'd wager we are looking at 1800p vs 2160p at the same framerate, or 60 fps vs 75 fps at the same resolution.
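Raw ratios behind that wager, if anyone wants to check the scaling themselves:

pixels_2160p = 3840 * 2160            # 8,294,400 pixels per frame
pixels_1800p = 3200 * 1800            # 5,760,000 pixels per frame
print(pixels_2160p / pixels_1800p)    # 1.44x the pixels at the same framerate
print(75 / 60)                        # 1.25x the frames at the same resolution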