Both Sony and MS use AMD tech so why didn't Sony just copy what MS were doing instead of releasing some gimped console with an SSD nobody cares about? Why would anyone want to settle for worse performance and/or resolution in their video games?
PS5 BAD SPECS
I only play on consoles with 9000 teraflops
Because sony is a poor company with no leverage.
>the senior editor of the "technology" website that couldn't even mount a PC
ps5 is gonna be underpowered in comparison with the sexb, but posting the Verge shit should be illegal.
It's hilarious that console manufacturers have invented a term for the sole purpose of obscuring actual performance measurements while still letting them use it as a measure of performance, all so they don't have to compete with PC specs. They're literally flaunting the fact that they lie to consumers' faces, and the console babies not only lap it up but evangelize this shit. What a fucking joke, a company could take a shit on a plate and console fanboys would ask for seconds.
>they don’t have to compete with PC specs
PC is so irrelevant that it's not even considered competition. Rockstar releases its games 1 year later on Windows, and the ports are completely broken at launch.
9.2 TF confirmed, and with the recent AMD hack it has also been confirmed that the PS5 is going with RDNA 1.5 as well.
Why did Sony decide to make such a weak console anyway?
Teraflops? That wasn't a term invented by console manufacturers. It's a standard measurement of floating-point throughput. All GPUs have TFLOP ratings for FP32, FP64 and sometimes FP16.
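For what it's worth, the TF number is just arithmetic on published specs. Here's a minimal sketch in Python, using the publicly quoted CU counts and clocks for each machine (treat the exact figures as approximate):

```python
# Theoretical FP32 rate: CUs * 64 shaders per CU * 2 FLOPs per cycle (FMA) * clock
def tflops(cus, clock_ghz):
    return cus * 64 * 2 * clock_ghz / 1000  # result in TFLOPS

print(tflops(36, 2.23))    # PS5: 36 CUs at up to 2.23 GHz      -> ~10.3
print(tflops(52, 1.825))   # Series X: 52 CUs at 1.825 GHz      -> ~12.15
print(tflops(12, 0.853))   # original Xbox One: 12 CUs, 853 MHz -> ~1.31
```

Which is also why the number only means much when you compare GPUs of the same architecture; the FLOPs-per-cycle figure hides everything else about how the chip actually performs.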
One GPU has 36 CUs, the other one 56 (52 active). I don't think these consoles will launch at the same price unless MS is planning on selling at a loss. Cheaper for virtually the same performance would be a good deal for most people.
So why flops and not, you know, the ACTUAL performance metrics the hardware is measured by
You do know GPUs aren't just for games, right? Most GPUs that come off the production lines at AMD and NVidia go into compute-focused cards which are sold to big companies to use in their servers. The TF figure of a GPU literally tells you how it will perform relative to another GPU within the same architecture family. Both PS5 and XBSX are based on RDNA, so the TF numbers are representative in giving us an early look at how game performance will be. Xbox has more TF, so games will run better on the xbox, whether that's higher resolution, higher graphical settings, or the same res and settings as PS5 but with better performance since there is additional headroom there.
All consoles are sold at an initial loss and they make the money back later by selling games, peripherals, online subscriptions etc. Microsoft is a far bigger company than Sony and if they play aggressively they can take a much bigger loss per sold console. I think that's what's happening. Why should Microsoft's shareholders care anyway if this small Xbox division gives up some profit when the CEO sells it as a long-term strategy to crush the competition? Meanwhile, if Sony, which mainly sells PlayStations nowadays, says they will only make $50 over the lifespan of a console, their shareholders will shit their beds. Just my thoughts.
>all so they don’t have to compete with PC specs
Why the fuck would they be competing on something completely arbitrary based on how much money an individual is able to spend?
PS5 has just doomed the entire generation: all games will be held back by it in some fashion. Xbox and PC lose either way. Thank you, Sony.
the tflop figure doesn't scale linearly with performance even on the same architecture, you can look at some GCN family cards, one with 7tf, another with 10tf, and see the 10tf one is only 20% faster for example
same goes with, say, RTX Turing cards
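To put numbers on that, here's a toy comparison; the cards and benchmark figures below are made up purely to illustrate how paper TFLOPS and measured performance can diverge within one family:

```python
# Two hypothetical cards from the same architecture family.
card_a = {"tflops": 7.0,  "avg_fps": 60.0}   # made-up numbers
card_b = {"tflops": 10.0, "avg_fps": 72.0}   # made-up numbers

paper_gain    = card_b["tflops"]  / card_a["tflops"]  - 1   # ~0.43 -> 43% more TFLOPS
measured_gain = card_b["avg_fps"] / card_a["avg_fps"] - 1   # ~0.20 -> 20% more FPS

print(f"paper: +{paper_gain:.0%}, measured: +{measured_gain:.0%}")
```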
>mount a PC
What exactly do you mean by this?
Why would console specs matter if they both always get the same shitty games and you can't run anything that isn't approved by daddy Microsoft/Sony?
Nintendo is the exception for obvious reasons.
Didn't say it does, but it's a good indicator of where GPUs stack up against each other. A higher-TF GPU within the same architecture family will always perform better in identical scenarios assuming there isn't some major bottleneck somewhere. There was only a 1.8tf difference between the ps4 pro and xbox one x but games overwhelmingly ran better on the xbox due to the more powerful GPU combined with higher memory bandwidth. Expect the same with the upcoming consoles because it's basically the same story. The PS5 GPU is weaker and has roughly 100GB/s lower memory bandwidth. Bandwidth is absolutely crucial for pushing higher resolutions such as 4K, which these consoles are being advertised for.
>There was only a 1.8tf difference between the ps4 pro and xbox one x but games overwhelmingly ran better on the xbox due to the more powerful GPU combined with higher memory bandwidth
c'mon now, if you do the math it had ~43% more tflops (6.0 vs 4.2)
Percentages don't matter when it comes to tflops because they're unrepresentative. The actual tflop difference is what matters. This is the biggest gap in gflops/tflops I think there's ever been between the two consoles. Not that tflops is the be-all and end-all, but all the numbers are overwhelmingly in xbox's favor, like I said about memory bandwidth.
The 10.2 TF performance is highly theoretical. It can't maintain both CPU and GPU at those clock rates but will have to downclock one for the other.
The absurd GPU overclocking will also put huge demands on cooling so expect another PS4 Pro jet engine taking off.
Expect the CPU to sit more in the 3.2 GHz range to keep the GPU clocks up. SeX has a fixed CPU speed of 3.8 GHz.
Variable clock rate is also to blame for poor PS4 BC as PS4 games are intended for a fixed clock speed.
The SSD has impressive r/w speeds but the RAM is slower on a smaller bus. And total SSD storage is 175GB less than SeX (825GB vs 1TB).
If you think by now that Sony wasn't overclocking their 9.2TF to inch closer to MS, you're delusional. GitHub leak was real. 9.2TF was the initial goal but later last-minute overclocking was made to inch closer to a much stronger SeX. It is an afterthought and it shows. This is also why they can't show us the PS5 box yet. They're still trying to find optimal cooling for it which impacts console size.
>why didn't Sony just copy what MS were doing
to not make the ps5 cost $100-200 more, like the SX will
Sony wants to keep the price around the miracle $399 number, at least with the first version of the ps5 anyway
having a good custom SSD doesn't raise the price that much, yet it still gives the ps5 a unique feature the first and second party devs can make use of.
you're legit retarded
do you not understand that adding 2tf to a 2tf card is double the tflops performance but adding that 2tf to say 10tf is not double
the percentage difference gives a way better indication of the actual difference than saying one has only 1tf more or some shit, that 1tf can literally be double the performance or only 10%
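Both framings are just arithmetic on the same two numbers; a quick sketch showing what each one captures (values picked to match the examples being argued about):

```python
# Same absolute gap, very different relative gap.
def gap(a, b):
    return b - a, (b - a) / a   # (absolute TF difference, relative increase)

print(gap(2.0, 4.0))    # (2.0, 1.0)  -> +2 TF is a 100% jump on a 2 TF card
print(gap(10.0, 12.0))  # (2.0, 0.2)  -> the same +2 TF is a 20% jump on a 10 TF card
```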
>Variable clock rate is also to blame for poor PS4 BC as PS4 games are intended for a fixed clock speed.
this is wrong, the only variability in the clock speeds is how the developers choose to distribute power consumption between the GPU and CPU. There is no thermal throttling like in PCs on EITHER console.
The SSD in the ps5 is more expensive than any other component in either console, according to some analysts. You can't even get 5.5GB/s SSDs now, and the ones we do have at around 3.5GB/s cost $400 for 1TB. There's a reason why Sony had to give us a weird amount like 825GB instead of a full 1TB. They wanted to hit that $499 mark.
Sony knows they are a netflix and FIFA machine, why bother with better specs?
Why does the metric used for performance change every gen? In the PS2/Xbox era, it was CPU MEGAHERTZ. Then with the 360/PS3 it was MUH CORES. With the XBone/PS4 it was MUH VRAM. Now it's MUH TERAFLOPS.
PS4 games are coded around a fixed clock speed. This is why they can't promise full BC yet. Even in the 100 games they tested, some of them ran into issues.
Are you pretending to be stupid? Please stop talking about things you clearly don't understand. Percentages mean jack shit. The absolute increase is what matters. A 2-3TF difference is the biggest there's ever been between the two consoles. A 3TF difference will have a much bigger effect on how games perform than a 1TF difference, even though going from 1TF to 2TF is a 100% gain. 3 > 1. Only people who use percentages to compare are fanboys on maximum cope trying to downplay the difference between the consoles, because they don't understand that having more TF is what matters, not bigger percentages.
If you're really desperate to talk percentages the xbox has 44% more CUs than the PS5.
Because PS5 and Xbone use the same GPU architecture and they're similar in pretty much everything but GPU and memory.
it's pretty clear that they are waiting for MS to reveal the SX's price first so they can undercut it
If MS announces that the SX will cost $600 then Sony will go with $500
if the SX is $500 then Sony will very likely take the early losses and sell it for $400
>no games
>no performance
>no players
Is anything going right for sony or is this the end?
literally explain to me then how a 2tf to 4tf increase is the same as a 10tf to 12tf increase
is this some autistic bait?
Those all still matter. Tflops is just specifically for the GPU. The xbox has faster VRAM as well and faster CPU clockspeed.
Yes, but sony actually invested in R&D and probably got a lot of patents for the new SSD controller they're using, which they'll probably sell to other manufacturers to make up for the cost.
I think most games will run at top GPU clock and low CPU clock the whole time, especially Sony's movie games.
look at this retard here thinking that adding 20% more floating point capabilities to the GPU will somehow matter more than adding 100% more floating point capabilities
Because console fanboys are tech illiterate. Although now that everyone is buying stuff from AMD with the same architecture, we have an apples-to-apples comparison, so comparing TFLOPs makes a ton of sense: it's an actual theoretical performance metric as opposed to nonsense. Before this current generation, consoles were so different that you could latch onto literally any arbitrary feature (bits in a word, BLAST PROCESSING, Emotion Engine, whatever) and proclaim that it instantly turned everything into gold and that everything without that feature was shit. Now you can't really do that because the only real difference in the consoles is the number on the BOM.
A 2TF difference is a 2TF difference, brainlet. If you have 2 chocolates and I have 4 chocolates or if you had 10 chocolates and I had 12 chocolates I still have 2 whole chocolates more than you. Those 2 chocolates don't suddenly have less value just because of some retard who doesn't understand how to apply percentages.
>retarded food analogies
it's like having two cars, one drives at 50km/h and the other 100km/h
then the other scenario is 350km/h and 400km/h
those two extra chocolate pieces don't get you twice the performance (or chocolate), goddamn you're retarded
I never understood this argument. Obviously 1TF vs 2TF has a bigger impact than 9 vs 12. It entirely relies on the fact that games are being made to run on both systems, and across different generations.
1tf vs 2tf is like a game running at 30fps vs 60fps. 9 vs 12 is like 45fps vs 60fps, but with modern technology you will probably end up with 60fps at 900p vs 60fps at 1080p.
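Rough back-of-the-envelope for that last claim, assuming (and it's a big assumption) that achievable pixel count scales about linearly with compute at a fixed frame rate:

```python
import math

# If 12 TF drives native 1920x1080 at some frame rate, scale the pixel count
# by 9/12 for the weaker machine and keep a 16:9 aspect ratio.
pixels_1080p  = 1920 * 1080
scaled_pixels = pixels_1080p * 9 / 12
height        = math.sqrt(scaled_pixels * 9 / 16)
width         = height * 16 / 9

print(round(width), round(height))   # ~1663 x 935 -> roughly "900p-ish"
```

Real games don't scale this cleanly, but it shows why a ~25-33% compute gap tends to show up as a resolution step rather than a frame-rate halving.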
So Sony is retarded and doesn't know how to design a console?
Nobody said it does though, you clinical retard. A 2TF difference is a 2TF difference regardless of whether it's 2 vs 4 or 10 vs 12. That is an extra 2TF of compute capability developers can tap into, and no amount of damage control and downplaying will change that. If TF didn't matter, why did Sony have to implement variable clocks to push their TF from 9.2 up to 10.2? We already know for a fact the PS5 had a 9.2TF base clock just half a year ago, and now suddenly they use AMD SmartShift, which is fucking laptop tech, to attempt to make the PS5 a bit faster.
You're not going to stop arguing anyway. I don't need to damage control because I'll just buy the more powerful console since I'm not aligned with either brand (I had a ps4 this gen). The performance analysis at the end of the year will reveal all so I have nothing much extra to say other than I told you so. Feel free to screenshot this :)
well you're the one saying it will matter much more magically
there's literally no downplaying in stating the fact that going from 10tf to 12tf is a 20% increase, i.e. 20% more compute capability. yes it's 2tf, and that 2tf amounts to 20%
in the other scenario going from 2tf to 4tf is literally twice the capability, and you think that pointing this out is somehow downplaying it, it's just honesty
Literally this. We are looking at 20-25% worse performance on the ps5. A fast ssd will do nothing other than boot your ps5 faster, and maybe some loading gimmick on 1st party titles; you are still playing at 720p.
Mark Cerny is retarded and can't design a console or OS. Remember when boost mode on ps4 pro was optional because it could break games? That's basically the same situation as this. Microsoft runs all their games and the OS in a hypervisor, so emulating games on new hardware is a matter of just remapping the hypervisor to the new hardware. The game can't tell the difference, e.g. 360 games emulated on xbox one all think they're running on a 360.
I refuse to believe that either of these consoles will be under 500, because I will get one if that is the case
the clockspeed is variable insofar as the developers set it. they can set it to whatever clockspeed they want and have it stay there. they can have it fixed as far as any ps4 game is concerned
It's too early to say, but based on what we know of each console's capabilities that number could rise much higher due to ML upscaling and VRS on the xbox side, both of which the PS5 can't do.
If there is something I learnt from being an idort, it's that even though PC can technically have the best specs, devs will always make games according to the lowest common denominator.
So even if the Xbox SeX was twice as powerful as the PS5, any multiplat (or any game, really) will be made with the lowest specs in mind.
All this sperging and autism over console performance is just retarded.
After the xbox one x was outperforming the ps4 pro in rdr2 I am probably going to switch this gen because it's looking like the same thing is going to happen. I'm still waiting for the series S or whatever the fuck they come up with because launch consoles are always trash
Because MS helped engineer RDNA2's VRS implementation and then got a patent on it. They also baked DX12/DXR1.1 into the GPU as part of making it DX12U certified. All that shit is MS IP that Sony cannot use
Additionally, Sony's virtualization tech is garbage and can only disable parts of the GPU at the Shader Engine level, which meant they needed either 36 or 72 active CUs to maintain back compat. 72 would be too hot/expensive so they had to go with 36 + a hybrid GCN/RDNA architecture, and then did cope overclocking to try and close the gap.
MS on the other hand uses fucking Hyper-V and can use whatever the fuck number of CU and architectures they want and still keep back compat + massive enhancements for all back compat games
ps5 should go with 1440p instead of the 4k meme, then it would whoop xbox ass even though it's weaker because xbox would be wasting that power to run current gen games at 4k lmao
I know both will chase this fucking 4k bullshit though and I hate it
>muh percentage sonygher is shitting up another thread
friendly reminder that this is the guy who made the following comparison
xbox one is 50% weaker than ps4 (1.2 tf compared to 1.8 tf)
xbox one x is only 25% stronger than the ps5 (12 tf compared to 9 tf)
btw if you cant see whats wrong here, you're a lowIQ subhuman
I've looked at what VRS is and I don't understand why it can't be done in software, what are the limitations?
I predict people will say there'd be a huge overhead, okay but why?
>xbox one x
series x*
DX12 too, Sony really needed better hardware, worst case scenario the PS5 is 50% slower.
>I've looked at what VRS is and I don't understand why it can't be done in software, what are the limitations?
It can (there's one FPS game on current consoles that does it in SW, can't remember which one), but like any SW vs HW implementation, the HW acceleration is orders of magnitude faster
Not remotely true. Most multiplats are made with the stronger console in mind and then downscale the resolution to match the lower end systems. Expect Ps5 to run at 720p 60fps.
he should've said ps4 is 50% stronger than xbox one and then the figures would make sense
and yes base ps4 is 50% stronger than the base xbox one
and yes xbox series x is 25% stronger than the ps5
in terms of tflops that is
DirectML and VRS are both part of DX12. MS actually holds a patent on the specific VRS method used in DX12. I doubt Sony would have any issues with their API, but the reality is the xbox has more and better features whilst also being more powerful on the GPU, CPU and VRAM side. It's looking like a landslide win in performance for the xbox.
The PS4 is 33% stronger than the XboxOne you pathetic retard who can't into basic math
but why? what is the overhead with rendering certain parts of the screen at a lower resolution? I want some answer that's more than just "oh it's slower", why is it slower exactly?
What the fuck will Sony do if (however unlikely) MS decides to subsidize and price SeX at $400? After all, they are buying in much larger bulk than Sony since xCloud blades are dual-purpose for Azure when they are idling. It's possible MS got their shit for a lot cheaper and the BOM ends up similar to the PS5's $450
You do know that percentage increase and decrease aren't the same, right?
Xbox has both the "deluxe" console and a cheap console, they don't have to do shit
Depending on what kind of SW VRS you're using there could be a shit ton of CPU overhead. And regardless of what kind of VRS, the CPU has to make a bunch of separate draw calls to the GPU because that's the only way to get it to shade each area at a reduced rate
1.8 is 50% higher than 1.2
I don't understand the SSD thing. It seems totally retarded to me.
The storage hierarchy works on orders of magnitude. Your cache is one to two orders of magnitude faster than your memory which is one to two orders of magnitude faster than your garbage bottom tier shit storage. This disparity makes a huge difference in what kinds of problems can be solved by your cache, your memory, and your shitty disk.
The reason why SSDs in general were such a big deal compared to HDDs is because SSDs are an order of magnitude faster than HDDs. The reason why people were so excited for 3D Xpoint before Intel botched everything about it was because it was claiming to be an order of magnitude faster than a NAND SSD.
Now Sony is trying to make a big deal about their SSD compared to another SSD, but their SSD isn't an order of magnitude faster. So it means fuck all, because any smaller improvement is marginal - you can't really do anything new, you just do what you were already doing faster.
It seems so desperate to try to steer the conversation towards this metric when the improvement is so tiny.
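Crude illustration of the orders-of-magnitude point: time to stream a fixed chunk of game data at some typical/quoted raw speeds. The figures below are approximate or vendor-quoted and ignore compression, decompression hardware and I/O overhead entirely:

```python
# Seconds to read 10 GB of game data at various raw sequential speeds.
# Speeds are rough/typical or vendor-quoted figures, not measurements.
speeds_gb_per_s = {
    "7200rpm HDD (last gen)":  0.1,   # ~100 MB/s
    "SATA SSD":                0.5,
    "Series X NVMe (quoted)":  2.4,
    "PS5 NVMe (quoted)":       5.5,
}

chunk_gb = 10
for name, speed in speeds_gb_per_s.items():
    print(f"{name:24s} {chunk_gb / speed:6.1f} s")
```

HDD to any SSD is a 5-20x jump; one SSD to a faster SSD is roughly a 2x jump, which is the whole point being made above.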
i think that bothers me the most about this gen, the SSD stuff. microsoft has their own proprietary expansion card but at least it'll be available. whereas Sony is allowing any SSD with speed comparable to theirs, but there is also some red tape on how they perform, and they won't be the same or even offer the same load times according to Cerny. they both support external drives for older games which is nice i guess.
Already been debunked
Troll somewhere else
again it's just you saying it's slower oh it's overhead oh it's slower, but what about it is slower?
typically I've seen VRS has 3 or 4 different resolutions it renders at, does this really make drawcalls an issue? I don't think so
I am not at all trying to say it isn't slower or some shit but I want to understand where the limitations come into play exactly
They probably had a certain budget in mind, along with power draw and heat, noise, etc. I also wonder if maybe they assumed the mid-gen upgrade shit wasn't going to be repeated and based their jump in power off the base PS4. I don't really blame them, to be honest.
yes which is why I said he should've said base ps4 is 50% stronger, not that xbox one is 50% weaker
X1 GPU:
>12 CU of GCN2
>914MHz clock speed
12 CU * (64 shader/CU) * (2 FLOP/cycle) * 914MHz / 1 trillion = 1.4TF
1.8 is not 50% more than 1.4
AhahahahahAHAHAHAHAHAHAHHAHAHA
HOLY FUCKNG SHIT HE EVEN USES ONLINE CALCULATORS AND STILL FUCKS UP
X IS "STRONGER" THAN Y
X IS THE "FIRST VALUE" SO WHY ARE YOU USING THE WEAKER XBOX LOL
WHAT THIRD WORLD SHITHOLE ARE YOU FROM POST IT RIGHT NOW
Rumors suggest they were initially gunning for a 2019 release.
I think you're talking about the xbox one s or something
I take the weaker value and see how much of an increase the higher value is
>all this seething
PS5 will fuck xbox into oblivion just like playstation 1 fucked sega and nintendo into oblivion.
Xbots, enjoy your 20 million sales and no support from any publishers whatsoever.
No xbox has 1.2tf wtf are you on?
>sonyghers are subhumans who cant into BASIC math
AHAHAHAHAHAHHAAHHAHA HOLY FUCKING SHIT I CANT EVN
THIS EXPLAINS EVERYTHING
I just used the values in the original comparison post I saw, iirc the actual values are 1.3-something tf and 1.84tf, anyway really close to 50% in the end
Ok, let's say for example you're using 2x2 VRS (i.e. you shade groups of 4 pixels in one shader pass)
Because the non-VRS GPU only knows how to either shade 1 pixel or not shade 1 pixel, the CPU has to send a draw call for each tiny little 2x2 pixel area with the shader for those 4 pixels. Then the GPU can run it and the result is VRS in SW. Whereas a GPU that supports HW VRS can be sent "draw this polygon, with this shader and this level of VRS" one time and the GPU does the rest of the work
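Back-of-the-envelope for why that naive software path is so expensive; the one-draw-per-tile model below is a simplification of the approach described above, not how any shipping game actually implements it:

```python
# Naive SW "VRS": one draw per 2x2 tile of a 4K frame vs. HW VRS where the
# shading rate rides along with the normal per-object draw calls.
width, height = 3840, 2160
tile = 2

sw_tile_draws = (width // tile) * (height // tile)
print(sw_tile_draws)   # 2,073,600 -> millions of extra draws for the CPU to issue
# With HW VRS the draw-call count stays in the usual hundreds-to-thousands range,
# because the reduced shading rate is just extra state on the draws you already make.
```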
Even the VCR isn't 1.2TF
12 CU * (64 shader/CU) * (2 FLOP/cycle) * 853MHz / 1 trillion = 1.3TF
you're a legitimate retard thanks for outing yourself
always knew you had to be fucked in the head to be a sonyroach, nigger you need online calculators to help with percentages
shit eating retard
You know Sony fans are insufferable when they take an objective fact like the PS5 being weaker and try to damage control and deflect to other platforms subjectively being worse based on their own tastes. Normal people would take the L and accept that their console has worse hardware. Not Sony fans. I expect the next 7 years of 9th gen to go very badly for Sony fans, especially when digital foundry starts putting out comparison videos.
Explains a lot about them if I'm honest.
>i dont care that the ps5 runs at 720p i cant even notice resolution anyway
10.28tf to 12.1tf = 17.7% increase
9.2tf to 12.1tf = 31.52% increase
1.3tf to 1.84tf = 41.5% increase
well here are the numbers
Going from 1.3 to 1.8 isn't even a 40% increase, so how can you say it's 50% and really close?
oh nvm, the 12.1tf figure I used is probably BS because I can't see it in the digital foundry article
PS5 will have Bloodborne 2 as exclusive, what then Xbones?
You cannot sustain 2.23GHz on the PS5 GPU without it melting. GitHub wasn't wrong, the max sustained clock of the PS5 GPU is 2000MHz, giving 9.2TF compute
plus RT scales with TMUs in RDNA2 which means SeX will curbstomp PS5 in RT workloads