Sony screwed up hard. The Cell processor is kind of a cross between a CPU and a GPU; originally the PS3 was meant to have only Cells on it. They were expecting that the same approach that made the PS2 both powerful and cheap could be done in a single chip that could then just be stacked together for different purposes while working together. Did I also mention it was supposed to be very cheap? However, the style of hardware used in PC GPUs had both caught up in price and exceeded it in performance... and it was much easier to use, while the Cell processor was having trouble hitting the price and performance they wanted. With Microsoft literally going from "dude, let's make a console" to production in less than 6 months (which eventually led to the RROD), with a last-minute doubling of RAM on top, Sony was caught with their pants down and slapped a "not quite what we wanted" Cell together with a "whatever you had at hand" GPU they got from Nvidia, and the PS3 was born. Sony ironically had the same experience with the 360 that Sega had with the PS1, and had to cobble something together with what they had at hand to match the performance. It's not that the 360 was the best thing ever, it wasn't. Most of the same problems the PS3 had also plagued the 360. But it was sure cheaper to make compared to the PS3 at first
@@khhnator The GPU in the PS3 is from Nvidia. Also I think the inclusion of the PS2 hardware on the PS3 motherboard sure didn't help for the cost of production of the PS3. That's why they slowly removed the PS2 back-compatibility on further revisions.
@@johnsams1388 the inclusion of the EE on early PS3s wasn't what made it expensive. It was the Blu-ray drive, followed by the RSX supplied by Nvidia. All in all, the PS3 cost around $800 to make; Sony was taking a $200 loss on each 60GB launch model sold despite the infamous $599.99 price tag. Edit: the inclusion of the EE on launch model PS3s is estimated to have cost about $30 back in 2006. Not much compared to the $125 Blu-ray drive and $130 RSX chip at the time. As for why Sony ditched BC on the PS3: I think mainly to lower power consumption, heat, and noise
I remember the devs confirming in the "making of" documentary that was included in the EU version of MGS2 that they actually had a different character model for Snake for the intro part of the game, because it was too complex to code the rain splashing on his body. The rain was basically part of the model.
As cool as making individual rain particles detect character model collisions sounds, it makes much more sense for each character model to emit its own rain splash effect.
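The per-model approach described above can be sketched roughly like this. All names here are hypothetical, just to illustrate the idea: instead of testing every raindrop against character geometry, each character carries a few pre-authored points on its upward-facing surfaces and spawns splash sprites at a handful of them each frame.

```python
import random

class RainSplashEmitter:
    """Hypothetical sketch: each character spawns its own splashes
    instead of colliding every raindrop against its mesh."""

    def __init__(self, surface_points, splashes_per_frame=4):
        # Pre-authored points on the model's upward-facing surfaces.
        self.surface_points = surface_points
        self.splashes_per_frame = splashes_per_frame

    def update(self, world_position):
        # Pick a few random surface points and emit a splash at each,
        # offset by the character's current world position.
        splashes = []
        for _ in range(self.splashes_per_frame):
            px, py, pz = random.choice(self.surface_points)
            wx, wy, wz = world_position
            splashes.append((px + wx, py + wy, pz + wz))
        return splashes

# One emitter per character: the cost is constant per character,
# independent of how many raindrops are falling.
emitter = RainSplashEmitter([(0.0, 1.8, 0.0), (0.2, 1.5, 0.1)])
frame_splashes = emitter.update((10.0, 0.0, 5.0))
```

The design win is that the splash cost scales with the number of characters, not with the number of raindrops, which is presumably why it made sense on PS2-class hardware.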
I'm glad that they pushed for expressive high-poly models over static cell-shaded ones. 'Shinkawa touch' would've been cool, but it just wouldn't be as immersive as what they settled on.
Mrgrumpy that sounds unlikely. The PS2 had massive fill rate, and effects like this are what it excelled at. That's why all ports of MGS2 and Silent Hill 2 suffer. The rain and fog effects were actually extremely efficient on the PS2
PCSX2 still needs improvements in their emulator. I had to downgrade to an older version of PCSX2 to play games, so it still needs improvements, performance-wise and compatibility-wise too.
@@cluesagi no no no, I'm not talking about that. I'm talking about the things in PCSX2 that need to be improved. Emulation-wise it's way behind and has outdated plugin settings, and they also need to add Vulkan GS compatibility to PCSX2. That would be a treat for lower-end CPUs and would help a lot performance-wise
Imagine how much it must suck to learn some obscure hardware for years and just as you finally get it, just as you finally feel like you can get 99% or more performance of out the thing, just as your in-house library reaches maturity, it becomes obsolete and all your knowledge is suddenly worthless.
This is why most game consoles aren't built this way anymore. Being able to Stream Absolutely Everything into 40 different coprocessors is cool, but it also means years of developer training before games can actually take advantage of your hardware.
You are not devs, are you? All that knowledge isn't magically lost. Learning to optimize on very constrained hardware can help you be a better dev. All consoles are computers. Now consoles use PC hardware because it's way more performant than anything Sony, Microsoft, or Nintendo can build on their own for a very competitive price; nowadays virtually everything runs on x64 or ARM.
Games consoles provided a lot of incentive to publishers to push the hardware to the limits. When you get to the end of the console's lifespan, games start incorporating hardware tricks that do things that previously were thought to be impossible. That is the advantage of consoles versus the PC - developers know that there are millions of almost identical hardware platforms, so they do not need to worry about compatibility issues.
It's also worth noting that the EE has an additional proprietary set of SIMD integer instructions called MMI (Multimedia Instructions) which act on the 128-bit registers, treating them as vectors of 64-bit, 32-bit, 16-bit or 8-bit elements. The problem is that you'd need to write your own assembly code to use these instructions, as the compiler included in the SDK did not have auto-vectorization (and to be honest, just wasn't very good at optimization at all, since it was very early GCC). Oh, and there's also the fact that the EE has TWO ALUs, but once again, you have to write your own assembly code to use the second ALU... This is a huge part of why writing code for the PS2 is such a journey :/
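To illustrate what those MMI instructions actually did, here is a rough behavioural model (in Python, not real EE assembly) of a parallel 16-bit add across a 128-bit register. The EE's MMI `paddh` instruction works roughly like this: eight independent 16-bit lanes added in a single instruction, with wraparound on overflow.

```python
def paddh(a, b):
    """Model a 128-bit 'parallel add halfword': treat a and b as eight
    independent 16-bit lanes and add lane-by-lane with wraparound,
    which is roughly what the EE's MMI paddh does in one instruction."""
    result = 0
    for lane in range(8):
        shift = lane * 16
        x = (a >> shift) & 0xFFFF
        y = (b >> shift) & 0xFFFF
        result |= ((x + y) & 0xFFFF) << shift
    return result

# Eight lanes of 0x0001 plus eight lanes of 0x0002 -> eight lanes of 0x0003.
a = int("0001" * 8, 16)
b = int("0002" * 8, 16)
assert paddh(a, b) == int("0003" * 8, 16)
```

Without auto-vectorization, a compiler would emit eight separate scalar adds for this; hand-written MMI assembly collapses it into one, which is exactly the kind of win devs had to chase by hand.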
@@tinto278 It is interesting going back and running old pre-MMX/pre-SSE supported games on modern hardware. They are amazingly fast but nowhere near as fast as you would imagine. I remember finding this out when I was running Turok 2 on a Core 2 Duo and it was barely pulling 200fps. Still fast but not optimized fast.
It wasn't a "very early gcc". It was originally EGCS, a hostile fork of GCC, which was pretty awesome for the time, considering the overall compiler landscape. I mean, their competition was Metrowerks, so... easy wins! Even the numerous x86-specific compilers out there had very limited optimisation capabilities compared to what we're used to nowadays.
@@SianaGearz Hey, thanks for the clarification! I wasn't aware of the EGCS SDK, my only contact with the official SDK is the version with GCC 3.3. And yeah, I know that the landscape was far from ideal back then, and I recognize the GCC devs were doing all they could. Sorry if I sounded too hateful or anything like that!
I remember in an old interview long after the fact, one of the hardware engineers mentioned that they wanted it to be hard to develop for so devs wouldn't be able to easily make multiplatform version of their games. It's a good thing that carrying that mentality into the next console generation didn't completely and totally backfire Oh wait
@@rabbitcreative actually things have gotten way easier the last decade, remember when anyone enforced ie6 compatibility and wanted the latest stuff modern browsers supported, but it had to run on ie6. Sort of like the days of Ps2 and PS3 development in the games industry!
@@werpu12 I used to do web dev and IE6 was a pain, because working in the MS software ecosystem can be archaic. Web dev now, I don't know. But as far as I can make out it's still CSS and HTML with JavaScript plugged in for the fun stuff. Google made an animation standard; I'm not sure why that was necessary, as CSS already animates.

But HTML and CSS are mature, so they become less complicated. That allows you to concentrate on being fast, and the designers know what to do and can concentrate on developing style and sophistication in their designs. So you get a website that animates as you scroll in fun ways, but is focused on accommodating that technique and visual language because it is expected by your managers, who want to appear modern and cutting-edge to customers who might not be sophisticated in their understanding of communication design and marketing.

What this means is that creative control is being led by the platform provider, as it introduces something new to promote the platform's hardware and technology. But the core HTML and CSS tools for building websites are still enough to make ever more sophisticated designs and user experiences, indefinitely and easily, using human thought and problem solving with simple information hierarchies and visual language. So your time is not spent learning things that look eye-catching but are limited in application and time-consuming to execute, preventing real solutions to the problem of how do I make a _good_ website or game, not a _new_ one.
Developers complaining about PS2s complexity, all the while Sony was having a Judas Priest moment with the PS3... "You've got another thing coming...."
Both Silent Hill 2 and Luigi's Mansion had insanely good realtime shadow effects, considering this was 20 years ago! I was disappointed to see that Condemned (2005, Xbox 360 and PC) didn't even have flashlight shadows, despite running on much more advanced hardware.
I just love how "we're NOT skimping on an FPU this time" informed a part of the way the hardware was marketed. It just all sounded like mind-blowing mystical gobbledygook to teenage me.
Gotta say, as far as that generation of hardware goes, MGS2 still holds up pretty well in the visual department. They chose a good aesthetic that didn't diminish too horribly with age.
True, I gotta say the comparatively lo-fi PS2 graphics are such a unique vibe, I still like it. We are super spoiled today with free Android games with graphics on par with or surpassing the PS2's.
@@DFisher-de1dw MGS 3 looks really good. I think MGS 2's environments hold up a bit better, as the clean, industrial aesthetic better suited the texture limitations at the time. A lot of textures in that game are essentially pixel art images where all the details are aligned to the pixel grid, so they get smooth, sharp edges even when the textures are 128 or 256px big. MGS 3 uses more photo-sourced textures, so it has a grungier but also more obviously blurry appearance. The closer over-the-shoulder camera doesn't help.
No wonder Sony just put PS2 hardware in the early fat PS3s for backward compatibility rather than emulating the Emotion Engine on the Cell processor; the PS2's CPU was so hard to emulate until the early 2010s
even today, they rely on the cloud to play those titles (PS Now). -PS2 emulation would have significant stutters if they did it directly on last-gen hardware.- EDIT: I stand corrected regarding PS2 emulation on PS4.
The GPU in the PS2 was a pure fill-rate monster -- a last foray of the classic 3D hardware accelerators, just at the dawn of programmable pixel shaders. Since it didn't have programmable features, each pixel was attached to a texture operation and all effects were achieved through register combining and blending. To mitigate the heavy overdraw of that approach, the embedded DRAM buffer had a very wide multi-ported interface to the graphics core, so all hidden surfaces were just brute-forced at zero performance overhead without the need to spend transistor budget on occlusion rejection. The drawback was the small size of the eDRAM buffer, and that is why game developers had to learn how to use the powerful DMAC engine to shuffle data in and out at run-time.
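A quick back-of-the-envelope calculation shows how much overdraw that design bought you. Using the commonly cited GS peak fill rate of 2.4 Gpixels/s (untextured) against a typical 640x448 NTSC frame at 60 fps (treat the result as a theoretical upper bound, not what real games achieved):

```python
# Rough overdraw budget for the PS2's Graphics Synthesizer.
# Figures are the commonly cited peak specs, so this is an
# upper bound, not what real games actually hit.
fill_rate = 2.4e9         # pixels/s, untextured peak
width, height = 640, 448  # typical NTSC framebuffer
fps = 60

pixels_per_frame_per_second = width * height * fps
overdraw_budget = fill_rate / pixels_per_frame_per_second
print(f"~{overdraw_budget:.0f} full-screen passes per frame at peak")
# With textured fill (often quoted at 1.2 Gpixels/s) the budget
# halves, which is still dozens of full-screen layers -- plenty for
# the multi-pass blending tricks games like MGS2 relied on.
```

That enormous layer budget is why effects like the MGS2 rain or the Silent Hill 2 fog, which are basically many blended passes, were cheap on the PS2 and expensive everywhere else.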
Did they? I don't doubt Sony were well aware of how much trouble developers were having with the Saturn but did they actually publicly say it anywhere themselves?
@@yugoprowers They couldn't really at the time they took the Dreamcast from the market. Shenmue I and II cost so much in development, EA wasn't interested, consumers and developers alike were still shaken by the aftershocks caused by the Saturn, and of course, Sony had their ugly ass DVD player that also played videogames eventually. As much as I love the Dreamcast myself, times were rough for SEGA.
I respect the developers of these great PS2 games even more now, knowing the difficulty behind their work. Today, we have it very easy with PC hardware being used in consoles and easy to use engines such as Unity and UE.
Always thought dev issues on Sony's consoles started with the PS3, so this was very interesting. Explains why PS2 emulation is so difficult and why, even to this day, some games don't work at all or just have a ton of issues.
'The Emotion Engine, named as such due to its high requirement for emotion streamed directly from the developers using the platform. Generally leaving them void and in a zombie-like state' - Anonymous Sony Exec
3:30 Sega did something very similar with the Mega Drive / Genesis. It includes the same Z80 processor as on their previous Master System console, for backwards compatibility - but was otherwise used (usually) as a dedicated sound & music controller.
They should have ditched the Master System backwards compatibility on the Genesis during the design phase, and put the money and effort towards a better VDP. What was the point in even including it when you still needed to buy a 70 dollar add-on to even access the backwards compatibility? The Master System sold very poorly; did they really need to make the Genesis backwards compatible with a system that barely anyone even knew existed? They could have included a better VDP that allowed for more than 64 onscreen colors, could display more sprites, or even included some hardware sprite scaling capabilities. Had they done that instead, would they have even needed to make a Sega CD or a 32X to stave off the visually superior SNES? Would they have avoided burning their bridges with consumers over the two expensive failed add-ons? Would they have felt the need to surprise-release the Saturn and burn bridges with consumers, retailers, and developers? Maybe the PlayStation wouldn't have eaten their lunch in that case. Who knows, maybe Sega was forced to leave the console business entirely because in the late 80's they made the choice to include Master System BC on the Genesis in lieu of better specs that would keep the Genesis competitive. It set off a whole chain reaction of bad decisions that almost killed the company :)
@@homiedclown Master System was incredibly popular in Europe + South America. Sega made plenty of mistakes over the years but MS backwards compatibility wasn't one of them...
@@homiedclown Also, the Power Base Converter was only $35, iirc. It was quite affordable. (I had one.) Plus, you're underestimating how useful that Z80 was. Being able to offload all sound processing onto a coprocessor took a lot of load off the main CPU. It allowed sound / music with nearly unlimited technical complexity, since the sound team wasn't competing for cycles. Besides, seriously, in 1988-89 the Mega Drive/Genesis was already the most powerful and graphically impressive console on the market, and yet had an affordable price. The high-end features you're talking about, like sprite scaling, were only in arcade games. That would have driven the price WAY up. Like Neo Geo prices.
I shipped a few PS2 games. "Load balancing" the 7 processors (!) (EE, VU0, VU1, GS, SPU, IOP, and IPU) was a complete PITA but it was a beautiful sight to have everything running. I'm just thankful I didn't have to do any Saturn games! That thing was even more insane.
@@SerBallister That's actually the one combination where you don't have to be as careful but it still required some caution of synchronization -- or at least in my case. I remember having to hack around the sample decoder playback demo as it had a tendency to hang. I never did track down what the cause was or why my hack worked.
Well Saturn also comes with much lower expectations that you will actually achieve something extraordinary, and it's not like the bulk of its library were masterpieces of hardware utilisation, by far not! And the hardware that was there wasn't full of errata, so it could have been worse :D
You consider different functional ALUs in a CPU to be different processors? So does a basic single-core out-of-order x86 CPU have 3 processors if it has two integer units and one FPU?
The idea behind the PS2 and PS3 was ahead of its time. Instead of building large and complex chips with lots of memory which would do things for developers almost by themselves and ease up their coding, Ken Kutaragi made it the other way around. He and his team designed chips which needed to be unlocked and understood to unleash their true power, and what is most amazing about them is that they were ultra efficient and tiny in die size, not to mention power consumption, compared to what they could deliver on the screen. The key to success was to master the silicon and find a way to unlock the maximum of its performance.

However, the problem was that gaming became a business, and business had no patience and wasn't willing to pay developers to play with the HW they received, so in the end the battle was won by simpler and easier-to-code chip architectures and designs which helped businesses save time and costs on developing the games. On one hand it's quite logical, but on the other the industry lost diversity and the challenge of unlocking the true potential of the chips which came with the consoles.

This is why it is quite amazing that the PS3, with its unique processor and 512 MB of total memory, could run such amazing games, especially in the latter stages of the console's lifecycle, because developers could squeeze every single bit out of what it was intended to do. Consoles like the PS2 or PS3 had a different approach to how games should be computed, and the idea was quite revolutionary if you look at what you could get from these machines. That is why I will always value all consoles from before they became more or less slightly custom PCs with x86 CPUs. I think Nintendo with its move to the ARM architecture has found a sweet spot for the future, and I think it will benefit from this decision in the long run, especially now that Nvidia is in the process of acquiring ARM.
ARM is not a custom architecture like Cell or the PS2's EE, but at least it's a much more code- and performance-per-watt-efficient architecture design in comparison to x86. I think PCs will eventually need to include ARM in their ecosystem and push to create apps natively, including games. I am quite positive that one day in the future AMD will either need to switch to a new ARM-like design (probably licensing it from Nvidia LOL ;) or develop its own ARM-like processor, as processor dies keep getting larger and larger. The multi-chiplet solution is going to work for a while, but even that won't be a silver bullet for overcoming the main hurdles and obstacles of large chips. Just look at the most powerful HPC computer in the world actually coming from Japan ;) (maybe Ken Kutaragi helped a little bit? :)

Also I am quite surprised that Apple hasn't released its own console yet, or a gaming TV box or something like that, now that they are creating a new ecosystem for their own ARM-based M1 chips. It will be quite interesting to watch which architecture will keep kicking ass and which will be replaced. I think the future is quite bright for some very custom types of architectures that developers know quite a lot about and can deliver amazing stuff on.

Sadly the good old days, when new HW or a console was released and nobody knew how to develop for it, are gone, along with the magical "Eureka!" moments when developers figured out something amazing and made it real. Unfortunately everything, even in life, gets past its "rock n roll" first stage, and excitement gives way to routine and stereotype, which is secure and comfortable; but the true and best developers out there will always like to challenge the norms and start a new "revolution" to unlock unknown hidden performance in chips which are considered "too difficult" to master and code for. That's why I salute and respect the PS2, PS3 and all the "complicated" designs, because they were truly revolutionary!
Good read, but in retrospect what examples do we have of what the PS2 allowed for that couldn't be done on Xbox, some fog shaders? More of a magic trick than a reason to reinvent the box.
@@Kakerate2 Indeed, magic tricks which were only possible on unique HW with one-of-a-kind chip architectures and features, which, once mastered, were much more efficient than the same results achieved on general-purpose, widely known chips with plenty of dev tools. The difference in what was achieved on both the HW & SW side between the PS1/N64 gen and the PS2/Xbox/GC gen, and then the PS3/X360 gen, was much more revolutionary and noticeable, especially on the efficiency side, than between the PS3/X360 and PS4/XOne gens and the current PS5/XbX gen. PC-like x86-based consoles will never be what consoles were before, when they could achieve better results and challenge even the best PCs while being much more efficient. It all comes down to HW and chip architectures targeting code quality and efficiency first of all, so devs could squeeze every single drop of performance with a much lower efficiency penalty given a bit more effort, while with even more hard work they could introduce new, never-before-seen revolutionary features.
That show was a complete waste of car, hardware, and paint. Oh thanks for the fish tank, Xhibit. That’ll be fun to clean after the first day parked in the sun boils the occupants. Too bad I have to unbolt the trunk TV to empty it.
@@nickwallette6201 They basically made full scale hotwheels that were cool to look at as a kid. As an adult it's obvious that putting 10 flatscreens, a minibar and a PS2 in a car with a broken transmission doesn't exactly help the owner.
@@vasopel Helps when your console is also a cheap Blu-ray player that doesn't require a subscription for Netflix & YouTube... Imo the PS3 was less of a console and more an excellent media box that also did a decent enough job of running games
I always thought the PS2 was pretty easy to develop for, I guess I was wrong. Thank you for the clarifications ! EDIT : Wait, is this why RE : Code Veronica looked bland AF compared to the Dreamcast version ?
That was more due to differences between the two consoles. The PS2 lacks texture compression support that the Dreamcast has. Textures on PS2 are less detailed and colorful because of that. Also, Code Veronica for PS2 was a port of the Dreamcast version. Were it made from scratch for the PS2, it could look better.
@@MadCapybaraRX Agreed. Devil May Cry is a better demonstration of the ps2's capabilities, doubt that would run at such a high frame rate on the DC and have so many particle effects
Another great video! :) This familiar topic of "being hard to develop for" seems to be getting more popular now. The detail you provided on some of the design philosophies was really great and fascinating. Looking forward to the next!
When my 2 favorite console hardware channels collide! i know we shouldn't choose favorites but you've covered so many consoles I think it's only fair ;)
It's more about the color palette, I guess. Gran Turismo 4's color palette is still impressive even compared to modern games. It captures the 2000s digital camera aesthetic perfectly.
When people these days make jokes about how the PS2 pales in hardware compared to the GameCube and Xbox, I just say Gran Turismo 4, MGS3, Jak 3, Black. Developer skill > more powerful hardware
Gran Turismo 4 (and its spin-off Tourist Trophy) takes the spotlight: 1080i resolution and a constant 60 fps with over 600 cars. That's really impressive for a 2000s console
@@JinteiModding It wasn't true 1080i, since the PS2 normally renders only around 448 vertical lines and GT4's 1080i mode is upscaled from a lower internal resolution (while the GC, Xbox and even the Dreamcast could reach 480p or higher)
@@sebastiankulche I don't even dislike GT4, I just don't see how. Sure the cars are nicely modeled and shaded, but it's still a racing game, meaning no particle effects, no bloom, no changing light sources, barely any animation work -- no fancy eye candy. Texture work isn't even that good, incomparable to games like Silent Hill 3 or 4. Yes, those are slower games, but still have large areas to navigate, and PS2 has no problem loading geometry. Burnout 3 is a prettier racing game.
I love these super in-depth videos about older hardware. So cool to hear a developers perspective on a console. I know how nostalgic it is for me as a gamer, but this is a really fun insight into development.
I highly recommend the video "How Crash Bandicoot Hacked The Original Playstation" after you watch this. It's half an hour long but one of the original devs goes into great detail on all the hoops to jump through and hacks put in place in order to get it to even run.
It certainly simplified development. I remember the performance for Renderware wasn't that great compared to custom engines, but you would expect that.
@@SerBallister I watched more videos in the following days. I learned that performance wasn't good at all, but most studios preferred to use RenderWare so they didn't have to bother with the complex hardware. But the graphics were still impressive; compare a PS2 to an OG Xbox or a GameCube. The PS2 is behind, but there's little difference considering how MUCH more powerful the other consoles were.
The limited amount of ram also meant that audio quality was limited. Take for example SMT nocturne with its terribly compressed music or the KH games that had to use MIDI tracks to keep a decent quality. I wonder if the developers could've done any better in these cases by using DMA more efficiently
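To put rough numbers on that limitation: the SPU2 has 2 MB of dedicated sound RAM, and its native ADPCM format packs 28 samples into each 16-byte block. A quick sketch of how little CD-quality audio that actually holds:

```python
# How much 44.1 kHz mono audio fits in the SPU2's 2 MB sound RAM,
# using its native ADPCM format (16-byte blocks of 28 samples each).
sound_ram = 2 * 1024 * 1024               # bytes of SPU2 RAM
sample_rate = 44100                       # Hz, CD-quality mono
bytes_per_second = sample_rate / 28 * 16  # ~25.2 KB/s per channel

seconds = sound_ram / bytes_per_second
print(f"~{seconds:.0f} seconds of mono 44.1 kHz ADPCM audio")
# Roughly 83 seconds -- and that same RAM also has to hold every
# sound effect, so streaming music chunks in via DMA (or falling
# back to MIDI-style sequenced audio) was often the only option.
```

Which is consistent with the comment above: a full-length music track simply cannot sit in sound RAM, so it either gets compressed harder, sequenced, or streamed.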
MVG - thanks for actually narrating your videos from a script. You leave natural pauses which allows viewers to properly absorb the subject. Unlike so many shitty channels where they jump cut every sentence causing a non-stop unending barrage of words.
I'm such a huge MVG fan - seriously love how technically in-depth you go on subjects like these while still keeping it understandable. Also, I've been using your homebrew on the original Xbox since the early days of xbins. So much respect for you! Thanks for everything.
With the input, you can run all the game logic. Just reuse the code from the psx version. The other processor is only for the looks. People want pretty games, not good games.
It'd be interesting to see developers releasing games on the ps2, now that we have a better understanding of how to harness the power. The games released for the ps2 were amazing. That and I love when devs get the most out of hardware with significantly tighter limitations than the cutting edge stuff available now. Breathing new life into the hardware from my teen years.
PS2 had an absolutely beastly library. I still go back to find new gems every now and again. The library was so incredibly diverse with different genres. Horror, JRPGs, Shooters, puzzle games, racing sims, and quirky indie games. It had them all.
It was an interesting system to program for. Our lead didn’t like it because it wasn’t like a PC. I liked it. However, we ported two PC games on the Quake 2 engine and we found that our biggest slowdown was the instruction and data caches constantly stalling the graphics DMA when we sent it off to Sony for profiling. As it was a port, we couldn’t afford to change to code much, but I saw that games written from the ground up with the PS2 in mind ran wonderfully. Getting all the processors balanced must have been a beautiful thing.
Hey dude, I'm in the process of learning assembly, starting with the 6502 processor. I'm also completely new to coding. Before this journey I usually skipped these videos of yours, but now they make so much more sense to me. I'm very much looking forward to enjoying them retrospectively now. Cheers!
I remember the government restrictions on the ps2 since it was so powerful it could be used for missile guidance. I’d love to see you do a video on that!
I miss the days when console makers created systems powered by unique hardware. Sure, everything is easier now that things are more standardized, but console hardware is definitely not as fascinating to me as it once was.
I'd argue that it's a good thing that things are more standardized. I'd rather get more optimized games and backwards compatibility than know my console has quirky hardware that I will never benefit from.
@@jayt1077 Yeah, I see where you're coming from. There's just something interesting about learning what made these older consoles and PCs tick. It seemed like they had to learn how to do more with less, and in the end that made them better!
It would be cool to see more examples of developers learning to take advantage of the PS2's abilities in a future video. The two examples you showed were really interesting.
Checkout Transformers (not the movie based games), Downhill Domination, Dragon Quest VIII, Ace Combat series all 60fps games with large areas. Also Valkyrie profile 2, Shadow Hearts Covenant, Genji Dawn of the Samurai, Silent Hill 3, all really great looking games.
@@retrosoul8770 adding to the list: Peter Jackson's King Kong, Cold Winter, Cold Fear, True Crime NYC, Reservoir Dogs, Flatout 2, Black, Mercenaries, Destroy all Humans 1&2, Ford Racing 3, WRC Rally Evolved, 24 The Game, Rise to Honour, Ghosthunter, Primal, Project Zero 3, Rule of Rose, Alfa Romeo, LA Rush, Test Drive Unlimited, Driver 3 & Parallel Lines, Scarface, The Godfather, RE Outbreak 1&2, The Getaway Black Monday, Hitman Contracts & Blood Money, Splinter Cell Chaos Theory & Double Agent, Onimusha 3 & 4, Van Helsing, Darkwatch, Fahrenheit, Tourist Trophy, Crush N Burn, Haunting Ground, Shadow of Rome, Batman Begins, Catwoman, Stolen, Without Warning, Conflict Global Terror, Brothers in Arms 1&2, Alone in the Dark 5, Forbidden Siren 1&2, TimeSplitters 3, Second Sight, Psi-Ops, Rogue Ops, Alias, The Matrix: Path of Neo
Shinji Mikami is a goddamn legend along with Inafune. They were not afraid to call out the BS from Japanese console makers despite the Japanese always having massive ego and being unable to recognize that a foreigner could be capable of doing something better than them. They both called out japan for being outdated and complicated in vidya
and ps2 being the best selling console *ever* was quite a good incentive. still, it was a gamble on sony's side. they pulled the same trick with ps3 and xbox 360 kind of beat it in some major markets. sony has learned it very well ever since.
@@ryobibattery what's the relationship between frame rate and game details? In MGS3 you get to approach the enemy and interact with the environment in a way I haven't seen even in current-gen games. Damn, if you don't eat your food for several days (our real days), it will go rotten. You can even catch the flu. Every playthrough I discover new things. By the way, in the HD collection you can play MGS3 at 60 fps, so yeah
CELL was definitely a Sony ego trip. Considering how it was configured, it's a miracle it ran as well as it did; a silly CPU/GPU jack-of-all-trades, master-of-none uArch, no wonder it went the way of the dodo. Oh, and all the silly myths about it: the bus to the RSX means that no, the SPEs didn't directly enhance GPU abilities, they just reduced CPU bottlenecking on the main PPE thread. Kinda like how old soundcards could take load off the CPU, as could those old PhysX cards or dedicated Nvidia GPUs used for PhysX.
@@anasevi9456 Fully programmable pixel shaders were able to achieve far more beneficial performance enhancements in just the following generation of GPUs, too. The GeForce 8 rendered the SPEs obsolete.
If anything, Sony always tried to "bet on the future" with their consoles. The PSX used a CD drive instead of cartridges, the PS2 had the two VUs for parallel processing (which *was* the way of the future, only with multiple CPU cores as opposed to add-on chips), and the PS3 had the CELL, which pushed this idea further, only it didn't really work this time. By the time the PS4 came, PC hardware had leapfrogged consoles, and had MS or Sony chosen anything else, it'd have been laughably weak compared to a mid-range gaming PC. And of course neither Intel nor AMD allows you to modify their designs the way the EE and the CELL did. This was still true for the PS5, but the landscape *is* changing.
@@EvanOfTheDarkness It's more that non-x86 architectures in general were nearly abandoned, aside from ARM, which was still very low performance at the time. A flagship phone from 2015 would probably outperform the PS4 and Xbox One CPUs, but a 2013 ARM design would've been worse than what they got from AMD. Going with AMD had two main positives: it was cheaper and easier to develop for. The Jaguar cores were absolutely pathetic, but by slapping enough of them together they hoped that parallelism would once again save them. It sort of did, but Cyberpunk shows what happens when a game isn't heavily optimized to keep CPU utilization to a minimum.
Not in this case. The EE is just a MIPS CPU, not much different from the easy and familiar PS1 CPU. The complexity came from two things: the market was consolidating around desktop x86 CPUs, and vector coprocessors were new. This meant that if developers used the EE the traditional way, like they did with the PS1, only part of its compute power got used. The industry wasn't used to dealing with parallel processing or vector calculations (IBM's AltiVec was niche and AVX was years in the future). Instead of saying that Sony liked to use "difficult" technology, it's more precise to say that they liked to use new technology, pushing for what they thought would become standard in the future, and that will always demand a learning curve.
Eh, I feel differently, as it makes multiplatform development way easier on the devs if the platforms share as much in common as they do now. I think we've just found our standard. That's not to say that no consoles deviate from the norm, like the Switch, which uses an ARM CPU.
Totally agreed. It also made games exciting, because even multiplatform releases had unique differences in every version. It was much crazier on the Super Nintendo and Mega Drive, as well as the Saturn, PS1 and Nintendo 64, but the 6th gen was crazy in that regard too. Now it's... you buy the same game on whatever set-top box or PC you have at home.
@@waititstuesdaygod I miss the platform exclusives. If you were a 1-Console family, you chose a platform, a few franchises, a mascot, and a controller, and that became your identity. Everything now is so generic, it makes having competing systems kind of redundant, save for the market pressure economics.
It's interesting that the PS2's parallelism, despite a slower clock than contemporary processors of other architectures, could spit out more processing per clock than any of them.
Absolutely brilliant. The Playstation 2 was such a gem. So many classics, and gorgeous exclusives that really withstand the test of time. Granted, late PS2 games are much more advanced than early titles for the most part, but it was a generation that truly helped shape the modern gaming landscape.
I remember having wrapped an OG Xbox game, and our next game was PS2 exclusive. Our engineer told me that all of my art and textures had to be reduced in color... to 16 colors total. Those old Apple IIGS palette management skills came in handy.
@@Clay3613 Probably because the PS2 was the main platform for developers, and since the Xbox had an almost totally different architecture, games worked and looked different without a rework for that specific console. It's similar to how Bayonetta looks on PS3 compared to Xbox 360 (that's one of the worst ports ever). I'm not really an expert, but I think it's the most logical answer here; I could be wrong.
@@salvatronprime9882 Bullshit. PS2 games have better lighting than any Xbox title, where characters look like cardboard cutouts against the environment. The Xbox has nothing on shading and colour depth with its weakling DirectX 8.1 API.
Mark Cerny really deserves the credit for turning this trend around with the PS4, as he was adamant from the start that he wanted an x86 architecture, and now many devs say the PS5 is one of the easiest consoles to work on.
It was also much more straightforward than the Xbox One. Arguably MS had the stronger dev kit, but it still had multi-tiered GPU buffering and several co-processors, whereas the PS4 was more about brute horsepower. Strangely, it kind of went the opposite way this gen: MS went wide, while Sony went for speed. The Series X is certainly more powerful, but I've heard the PS5 makes it easier to utilize what it has, even though they're functionally identical.
That's sort of like comparing whether apples or oranges are easier to squeeze. If you were to develop a game for "an x86 processor and a Y GPU" today, that would be quite a challenge. But that's simply not how it works anymore, no matter what CPU architecture is in the box. Hence the near absence of PS2 documentation and SDKs is the key point here. Just translate that situation to any modern x86 hardware, and you will probably not have a very good time. ;) At its core, the PS2 way of doing things was still the way of the future, even on PS4. Parallelism.
@@matthias6933 Not really. The X360 went for programmable unified shaders and a heavily multithreaded CPU. It had eDRAM too. No such things were on PCs at the time. Maybe on a dual-core CPU the second core was used to keep high scores, if you were lucky. 😉
Kinda wish Sony would've stuck to their original intended strategy of integrating the hardware of each console into the next machine, primarily for better backwards compatibility, but also because both the PS2 and PS3 hardware had unique capabilities that are difficult to recreate or emulate even on current top of the line hardware.
For me, the most important feature of old consoles is the fact that you actually owned the copy of the game that you bought! You could bring it along to your friend’s house or sell it after you’ve had enough of it. Nowadays you just pay for a stupid _license_ to play a game but you don’t own a copy of it! Damn the corporates!
Nintendo made the most impressive hardware of that generation: compact, cheap, yet quite powerful. They probably should have gone with full-size DVDs, though.
It was also about reusing what was in the small caches. What was in the cache could practically just be fed to the GS without any toll at all while you were calculating the next batch of data to feed it. The GS was a beast; it almost felt like it never stopped as long as you told it to do something :)
Very good episode. I didn't follow everything, but it's fun learning that some older consoles really were different from PCs hardware-wise. Nowadays it feels like all consoles are just a complex cross between a laptop and a PC. But I could very well be wrong on that.
@@wishusknight3009 At least the 2600 was a success for a while. There are many other consoles that never had even a whiff of success, such as the Sega Saturn.
The PS1 was an easy console to develop for, which played a part in its success by attracting the third party developers that Sony, a newcomer, really needed.
Outstanding quality content, really enjoyed it! Unfortunately there was no mention of Jak and Daxter, also a technical marvel of its time. I would really appreciate a video where you explain how that game works on the PS2, with its custom engine and programming language. :)
While the PS2 could produce an impressive number of polygons and visual effects, the textures were often quite muddy compared to the competition. MGS2 is a good example. Even the older Dreamcast seemed much better in this respect (e.g. Sonic Adventure).
The problem is that creating powerful proprietary specs designed for gaming, based on high-end desktops, would have cost Sony a lot of money. They would have faced more criticism for having an overpriced console than for having an underpowered one.
Some PS2 games, like God of War II, that have no load times despite streaming from DVD are really impressive. And with the RenderWare engine there was a good solution available for third-party developers. It also produced some technically impressive games like Burnout 3 and Black. Need for Speed: Undercover and SSX 3 didn't look too bad either. You could really see how developers tamed the beast over the years. :-)
@@Myndale Sony's broken IEEE-754 implementation. Also, "doubles" had to be emulated in software. **OUCH.** I remember scanning the "map" file to verify we didn't have any double usage in our code. Something C++ has never solved: **Turn OFF stupid automatic upcasts.**
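A tiny C sketch of the trap being described (hypothetical code, not from any PS2 SDK): one unsuffixed floating-point literal silently promotes the whole expression to double, which on the EE meant a slow software-emulated operation.

```c
/* Mixing a float with the double literal 2.0 promotes the whole
   expression to double via C's usual arithmetic conversions --
   exactly the kind of thing that "map file" scan had to catch. */
int promotes_to_double(void) {
    float f = 0.5f;
    return sizeof(f * 2.0) == sizeof(double);
}

/* Suffixing every literal with 'f' keeps the math in single precision. */
int stays_float(void) {
    float f = 0.5f;
    return sizeof(f * 2.0f) == sizeof(float);
}
```

The language offers no switch to disable these promotions, which is why teams resorted to auditing the linker map instead.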
That might not be super interesting; the only criticism I ever heard of the GameCube was that the memory controller was over-engineered. The only other interesting tidbit I can think of is that it had a fairly programmable TEV that could perform functions similar to the pixel shaders in other GPUs at the time. Technically it wasn't a huge deal, but it was an interesting design choice.
@@bombkangaroo I think it had a lot of capabilities that didn't get much use. I always found it amusing that even Nintendo wasn't great at getting performance out of it. Compare Wind Waker (mostly flat shading and simple lighting, 30 FPS) with Star Fox Adventures (complex shading and texturing, dynamic lighting, realistic fur/grass effects, 60 FPS). One thing that probably got on a lot of nerves is that the CPU is big endian, MSB first. Completely backward from most. It usually just means changing some tools to export in the opposite byte order, but it can really bite you in the behind when you're not used to it.
@@renakunisaki And Star Wars Rebel Strike, which pushes the GameCube to its full capacity and allegedly has the highest polygon count of that generation.
@@demonsty Agreed. Honestly, Sony was lucky people were buying the PlayStation 2 despite the weak launch lineup, as the install base pushed developers to stick with the hardware. Atari (with the Jaguar) and Sega (with the Saturn) weren't so lucky. I really would've liked to see the Jaguar and Saturn get a fair shake with developers, though the tools were largely to blame in both cases, and they simply didn't improve at the rate that PlayStation 2 and 3 tools did over time. Frankly, this ended up biting Sony with the PlayStation 3 when it initially didn't sell very well due to its high price, as there are numerous examples of devs simply outsourcing PlayStation 3 ports because they had no interest in tying up their main teams with the ordeal. I can't remember for sure, but I recall devs being very quick to drop the PlayStation 3 when the PlayStation 4 came out, relative to the Xbox 360: some devs were still making Xbox 360 versions of games while not bothering with a PlayStation 3 version.
The PS2 is still my favorite 3D platformer console of all time. Jak and Daxter, Ratchet and Clank, Sly Cooper and God of War (to some extent a platformer) all started on PS2. I have so many great memories of this console. It's sad that it gave developers such a hard time, though.
The art style that devs were going for is even more pronounced on a proper CRT via component or at least S-Video. The PS2's interlaced, analog graphics get dimmed, softened, and motion-blurred to hell on most LCDs, leading many people to misunderstand and underrate the quality of its graphics.
@@Grandmaster-Kush I hear ya. It's still worth it for me personally, because on my 20-inch Trinitron most games still look better than on my LCD monitor upscaled via PCSX2 (plus no motion blur). (Granted, my LCD monitor is kind of old, so maybe games would look better on a more modern panel.) Moreover, PCSX2 struggles with some of my favorites, like Ace Combat and Zone of the Enders: The 2nd Runner, with more slowdown than on original hardware. Plus I have a huge collection; idk if it's possible to run game DVDs via PCSX2... but it'd be nice.
The Sony PlayStation 2 was wildly popular when it came out. I remember not being impressed by it at launch, but developers learned the hardware, and software started looking, sounding, and playing brilliantly. I miss the days of truly bespoke hardware; developers always found ways to push such devices beyond what was thought possible.
The small cache size was something developers were already used to. PS1, N64, and Dreamcast had even smaller on chip caches. The PS2's cache was easily in the good enough range - even considering that the more detailed models required more vertices.
Very fascinating that a multimedia-heavy system like the PS2 and a bare-bones console like the Wii sold about the same. Two very different approaches both met with huge sales; it just shows the gaming landscape is constantly changing.
The philosophy behind the PS2's DMAC, and later the PS3's SPU memory system, sounds a lot like the PS5's IO Complex. Instead of developers constantly streaming data from memory to caches they've moved to streaming data from storage to RAM.
Sounds like Sony was way ahead of its time in training the efficient devs of today. This is ALL we talk about in programming now, ain't it? Cache locality is the number one question in how fast your program can be. Essentially we have tiny, tiny caches compared to how fast the processors are, and every cache miss on data is a massive slowdown. And perhaps not coincidentally, parallelism/concurrency is also the number one headache today. Programming languages are racing to figure out constructs that let the dev easily run things in parallel while staying effective and bug-free. Same for trying to take advantage of SIMD. The PS2 forced them to think about that back then already. I'm not saying it was a good thing for the PS2, or that the devs enjoyed it, but I swear they carried some of today's insight with them as a consequence.
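A minimal C illustration of the cache-locality point (generic, nothing PS2-specific): both functions compute the same sum, but the first walks memory in the order it is laid out, while the second strides across rows, missing cache far more often on large arrays.

```c
#define N 256

/* Row-major walk: consecutive iterations touch adjacent ints,
   so most accesses hit an already-loaded cache line. */
long sum_row_major(int a[N][N]) {
    long s = 0;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            s += a[i][j];            /* stride: 1 int */
    return s;
}

/* Column-major walk: each iteration jumps N ints ahead, pulling in
   a fresh cache line almost every access once the array is big enough. */
long sum_col_major(int a[N][N]) {
    long s = 0;
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            s += a[i][j];            /* stride: N ints */
    return s;
}
```

Same result, very different memory traffic; on big arrays the row-major version is typically several times faster.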
@Modern Vintage Gamer Am I imagining things, or does the PS2 hardware seem a little like a simplified SGI Indigo or Octane workstation in how it is configured?
Excellent video! I would love to see your take on the Sega Saturn's architecture next! Low Score Boy's video on the Saturn was an absolute delight to watch, because I could finally understand many of the complexities developers had to consider when developing for Sega's pretty grey/white box of fun. Especially considering how most of them never knew how to do parallel programming with dual CPUs, let alone two graphics chips!
Hats off to all the PlayStation 2 developers who had to work with the Emotion Engine, and to all the great games we got out of it! This was EXTREMELY educational and technical, a far cry from the visual programming you'd do in Unity and the like today!
I think I understand: with traditional PC development you create a stack of tasks, then the whole stack is fed through the processing units designated to handle the different kinds of tasks. With the PS2, the units at your disposal can't handle a huge workload dumped on them all at once, but they're flexible, and there's one unit whose specialty is parsing and feeding tasks through the different units at the correct intervals, as they become available. This could conceivably mean higher throughput, since the units work in tandem and the load on any single unit decreases. Of course, I'm an idiot, so that could all be completely wrong.
Fortunately, Sony would wise up with their next console, the PlayStation 3, and its easy-to-develop-for CELL... Wait a minute.
Sony screwed up hard. The Cell processor is kind of a cross between a CPU and a GPU. Originally the PS3 was meant to have only Cells in it. They were expecting that the same approach that made the PS2 both powerful and cheap could be done in a single chip, which could then be stacked together in different configurations for different purposes, all working in concert. Did I also mention that it was supposed to be very cheap?
However, the style of hardware used in PC GPUs had caught up in price and exceeded it in performance... and it was much easier to use, while the Cell processor was having trouble reaching the price and performance Sony wanted.
With Microsoft literally going from "dude, let's make a console" to production in less than six months (which eventually led to the RROD), with a last-minute doubling of RAM on top, Sony was caught with their pants down. They slapped a "not quite what we wanted" Cell together with a "whatever you had at hand" GPU they got from Nvidia, and the PS3 was born.
Sony ironically had the same experience with the 360 that Sega had with the PS1, and had to cobble something together with what they had at hand to match its performance.
It's not that the 360 was the best thing ever; it wasn't. Most of the same problems the PS3 had also plagued the 360. But it was sure cheaper to make than the PS3, at first.
@@khhnator The GPU in the PS3 is from Nvidia. Also, I think the inclusion of the PS2 hardware on the PS3 motherboard sure didn't help the production cost of the PS3. That's why they gradually removed PS2 backwards compatibility in later revisions.
@@johnsams1388 Oh true, it was the 360 that used ATI. I mixed them up.
@@johnsams1388 The inclusion of the EE on early PS3s wasn't what made it expensive. It was the Blu-ray drive, followed by the RSX supplied by Nvidia. All in all, the PS3 cost around $800 to make. Sony was taking a $200 loss on each 60GB launch model sold, despite the infamous $599.99 price tag.
Edit: the inclusion of the EE on launch-model PS3s is estimated to have cost about $30 back in 2006. Not much compared to the $125 Blu-ray drive and $130 RSX chip at the time. Why did Sony ditch BC on the PS3? I think mainly to lower power consumption, heat, and noise.
No
I remember the devs confirming, in the "making of" documentary included in the EU version of MGS2, that they actually used a different character model for Snake in the intro of the game, because it was too complex to code the rain splashing on his body. The rain was basically part of the model.
Interesting.
As cool as making individual rain particles detect character-model collisions sounds, it makes much more sense for each character model to emit its own rain splash effect.
I'm glad that they pushed for expressive high-poly models over static cel-shaded ones. The 'Shinkawa touch' would've been cool, but it just wouldn't be as immersive as what they settled on.
Mrgrumpy, that sounds unlikely. The PS2 had massive fill rate, and effects like this are what it excelled at. That's why all ports of MGS2 and Silent Hill 2 suffer: the rain and fog effects were actually extremely efficient on the PS2.
@@alanlee67 ruclips.net/video/NkQC6sSK7HI/видео.html It's actually mentioned here at around 10:45
All things considered, the PCSX2 team did an amazing job emulating the PS2.
Yes, and if I remember correctly, they said that the PS2 hardware is a 'beast' in terms of emulating it on a PC.
Still needs to be improved
PCSX2 still needs improvements; I had to downgrade to an older version to play games. So it still needs work, performance-wise and compatibility-wise too.
Which is good, since there's really no other convenient way to play PS2 games nowadays
@@cluesagi No, no, no, that's not what I'm talking about. I'm talking about the things PCSX2 needs to improve. Emulation-wise it lags behind and has outdated plugin settings, and they also need to add Vulkan support to the GS plugin; that would be a treat for lower-end CPUs and would help a lot performance-wise.
Must be called the Emotion Engine because it left developers with many emotions.
@Endless Sporadic Oh most definitely.
Good one!
cue *Rage Awakened*
Emotions of frustration
lol
Imagine how much it must suck to spend years learning some obscure hardware, and just as you finally get it, just as you finally feel like you can squeeze 99% or more of the performance out of the thing, just as your in-house library reaches maturity, it becomes obsolete and all your knowledge is suddenly worthless.
This is why most game consoles aren't built this way anymore. Being able to Stream Absolutely Everything into 40 different coprocessors is cool, but it also means years of developer training before games can actually take advantage of your hardware.
This is why I love how the PS3 and 360 had such a long lifespan. Games continued to get better as devs built more efficient code for the hardware.
@@itchyisvegeta By then the dominant console was already a PC.
You're not devs, are you? Knowledge isn't magically lost. Learning to optimize on very constrained hardware can help you become a better dev. All consoles are computers. Consoles now use PC hardware because it's way more performant than anything Sony, Microsoft, or Nintendo could build on their own at a competitive price; nowadays virtually everything runs on x64 or ARM.
Games consoles provided a lot of incentive to publishers to push the hardware to the limits. When you get to the end of the console's lifespan, games start incorporating hardware tricks that do things that previously were thought to be impossible. That is the advantage of consoles versus the PC - developers know that there are millions of almost identical hardware platforms, so they do not need to worry about compatibility issues.
It's also worth noting that the EE has an additional proprietary set of SIMD integer instructions called MMI (Multimedia Instructions), which act on the 128-bit registers, treating them as vectors of 64-bit, 32-bit, 16-bit or 8-bit elements. The problem is that you'd need to write your own assembly code to use these instructions, as the compiler included in the SDK did not have auto-vectorization (and to be honest, it just wasn't very good at optimization at all, since it was a very early GCC). Oh, and there's also the fact that the EE has TWO ALUs, but once again, you have to write your own assembly code to make use of the second ALU... This is a huge part of why writing code for the PS2 is such a journey :/
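To give a rough idea of what one of those MMI ops buys you, here is plain C spelling out what a parallel saturating 16-bit add across a 128-bit register does in a single instruction (in the spirit of the EE's PADDSH; treat the mnemonic as an assumption from memory). Without auto-vectorization, the SDK compiler would emit eight scalar adds like this loop.

```c
#include <stdint.h>

/* One 128-bit register holds eight signed 16-bit lanes; the hardware
   adds all eight pairs at once, clamping each result into int16 range. */
void padd_sat16(int16_t dst[8], const int16_t a[8], const int16_t b[8]) {
    for (int i = 0; i < 8; i++) {
        int32_t s = (int32_t)a[i] + (int32_t)b[i];
        if (s >  32767) s =  32767;   /* saturate high */
        if (s < -32768) s = -32768;   /* saturate low  */
        dst[i] = (int16_t)s;
    }
}
```

Saturation matters for audio and color math: on overflow you want a clamped value, not the wraparound that a plain 16-bit add would give you.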
The CPU in my computer at the time had MMX. Doom and Quake 1 never supported MMX or 3DNow! :(
@@tinto278 It's interesting going back and running old games from before MMX/SSE support on modern hardware. They are amazingly fast, but nowhere near as fast as you would imagine. I remember finding this out when I was running Turok 2 on a Core 2 Duo and it was barely pulling 200fps. Still fast, but not optimized fast.
Wait, a second ALU? The PS2 was single-core right? What a design Oo
It wasn't a "very early GCC". It was originally EGCS, a hostile fork of GCC, which was pretty awesome for the time, considering the overall compiler landscape. I mean, their competition was Metrowerks, so... easy win! Even the numerous x86-specific compilers out there had very limited optimisation capabilities compared to what we're used to nowadays.
@@SianaGearz Hey, thanks for the clarification! I wasn't aware of the EGCS SDK, my only contact with the official SDK is the version with GCC 3.3. And yeah, I know that the landscape was far from ideal back then, and I recognize the GCC devs were doing all they could. Sorry if I sounded too hateful or anything like that!
I remember in an old interview, long after the fact, one of the hardware engineers mentioned that they wanted it to be hard to develop for, so devs wouldn't be able to easily make multiplatform versions of their games.
It's a good thing that carrying that mentality into the next console generation didn't completely and totally backfire
Oh wait
> they wanted it to be hard to develop for so devs wouldn't be able to easily
Welcome to web-development in 2022.
@@rabbitcreative Actually things have gotten way easier over the last decade. Remember when everyone enforced IE6 compatibility and wanted the latest stuff modern browsers supported, but it had to run on IE6? Sort of like the days of PS2 and PS3 development in the games industry!
@@werpu12 I used to do web dev, and IE6 was a pain, because working in the MS software ecosystem can be archaic. Web dev now, I don't know, but as far as I can make out it's still CSS and HTML with JavaScript plugged in for the fun stuff. Google made an animation standard; I'm not sure why that was necessary, as CSS already animates.
But HTML and CSS are mature, so they become less complicated. That allows you to concentrate on being fast, and the designers know what to do and can concentrate on developing style and sophistication in their designs.
So you get a website that animates as you scroll in fun ways, built around accommodating that technique and visual language because it's expected by managers who want to appear modern and cutting-edge to customers who might not be sophisticated in their understanding of communication design and marketing.
What this means is that creative control is being led by the platform provider as it introduces something new to promote its hardware and technology. But the core HTML and CSS tools for building websites are still enough to keep making ever more sophisticated designs and user experiences, indefinitely and easily, using human thought and problem-solving with simple information hierarchies and visual language.
So your time is not spent learning things that look eye-catching but are limited in application and time-consuming to execute, preventing real solutions to the problem of how to make a _good_ website or game, not just a _new_ one.
Developers complaining about PS2s complexity, all the while Sony was having a Judas Priest moment with the PS3... "You've got another thing coming...."
@@vlc-cosplayer 😂 GG
@@vlc-cosplayer Just when you thought it was defeated... (Screen begins to shake...)
@@vlc-cosplayer That was the boss's true form.
The PS3 was the Sega Saturn of its generation, for its complex architecture and because it had games for everyone.
I just got a Prey flashback 😂
The lighting from the flashlights in Silent Hill 2 & 3 is still incredibly detailed by 2021 standards.
Both Silent Hill 2 and Luigi's Mansion had insanely good realtime shadow effects, considering this was 20 years ago!
I was disappointed to see that Condemned (2005, Xbox 360 and PC) didn't even have flashlight shadows, despite running on much more advanced hardware.
That's because the PS2 had 48 GB/s of bandwidth, and its real-world fill rate was 1200 Mpixels/sec.
@3dmarth Sorry, but Doom 3 on Xbox also has realistic shadows, and don't forget about Half-Life. (Btw, I'm a PlayStation fan.)
The Emotion Engine might be my favorite name ever for a processor
ThreadRipper
@@byronjoel1400 Nah, Emotion Engine is still a better name, although Threadripper is a cool name
Voodoo Graphics is probably the coolest GPU name
I just love how "we're NOT skimping on an FPU this time" informed a part of the way the hardware was marketed.
It just all sounded like mind-blowing mystical gobbledygook to teenage me.
"Blast Processing" by Sega is a good one too.
Gotta say, as far as that generation of hardware goes, MGS2 still holds up pretty well in the visual department. They chose a good aesthetic that didn't diminish too horribly with age.
3
True. I gotta say the comparatively lo-fi PS2 graphics have such a unique vibe, I still like them. We're super spoiled today, with free Android games sporting graphics on par with or surpassing the PS2.
Gran Turismo 4 in 1080i still looks amazing.
@poof69420 Snake Eater aged just as well, what have you been smoking?
@@DFisher-de1dw MGS3 looks really good. I think MGS2's environments hold up a bit better, as the clean, industrial aesthetic better suited the texture limitations of the time. A lot of textures in that game are essentially pixel-art images where all the details are aligned to the pixel grid, so they have smooth, sharp edges even when the textures are only 128 or 256 px. MGS3 uses more photo-sourced textures, so it has a grungier but also more obviously blurry appearance. The closer over-the-shoulder camera doesn't help.
No wonder Sony just put PS2 hardware in early fat PS3s for backwards compatibility rather than emulating the Emotion Engine on the Cell processor; the PS2's CPU was very hard to emulate until the early 2010s.
Even today, they rely on the cloud to play those titles (PS Now). -PS2 emulation would have significant stutters if they did it directly on last-gen hardware.-
EDIT: I stand corrected regarding ps2 emulation on ps4.
Thanks for the links. 👍
The second PS3 iteration removed the PS2 hardware and used PS2 emulation for backwards compatibility.
@@proCaylak Simply not true
@@erikheijden9828 It is true, and Sony's PS2 Classics emulator on both the later PS3 and the PS4 uses PCSX2's code base and has the same glitches.
There’s no morning like an MVG morning
Based
Facts 👍🏼
True
Or evening
MVG Mondays baby!
The GPU in the PS2 was a pure fill-rate monster: a last foray of the classic 3D hardware accelerators, right at the dawn of programmable pixel shaders. Since it didn't have programmable features, each pixel was attached to a texture operation, and all effects were achieved through register combining and blending. To mitigate the 100% overdraw of that approach, the embedded DRAM buffer had a very wide, multi-ported interface to the graphics core, so all hidden surfaces were just brute-forced at zero performance overhead, without the need to spend transistor budget on occlusion rejection. The drawback was the small size of the eDRAM buffer, which is why game developers had to learn how to use the powerful DMAC engine to shuffle data in and out at run time.
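The "shuffle data in and out at run time" part boils down to double buffering: while one chunk is being consumed, the next is being transferred in behind it. A hedged C sketch of the pattern, with memcpy standing in for a real DMA transfer and all names invented for illustration:

```c
#include <string.h>
#include <stdint.h>

#define CHUNK 64                /* stand-in for a slice of the eDRAM */

typedef struct {
    uint8_t buf[2][CHUNK];
    int     front;              /* index of the buffer being consumed */
} DoubleBuffer;

/* Fill the back buffer while the front one is still in use
   (on real hardware this would be an asynchronous DMAC transfer). */
void stream_in(DoubleBuffer *db, const uint8_t *src) {
    memcpy(db->buf[db->front ^ 1], src, CHUNK);
}

/* Flip the buffers: the freshly filled one becomes the front, and the
   old front is free to be overwritten by the next transfer. */
const uint8_t *swap_and_consume(DoubleBuffer *db) {
    db->front ^= 1;
    return db->buf[db->front];
}
```

The win is overlap: the transfer and the consumption happen at the same time, so the consumer never sits idle waiting for data.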
I understand some of those words
It's funny how Sony mocked Sega for how hard the Saturn was to program for, only to do the same thing with their next console.
And the console after that, too.
I loved the PS3 for its exclusives but I have to say, devs had a much better time with the 360.
Twice, haha... the PS2 and PS3.
Did they? I don't doubt Sony were well aware of how much trouble developers were having with the Saturn but did they actually publicly say it anywhere themselves?
SEGA's problem was not sticking with a console long enough. Well, that, and how SEGA of Japan wasn't working well with SEGA of America.
@@yugoprowers They couldn't really, by the time they took the Dreamcast off the market.
Shenmue I and II cost so much to develop, EA wasn't interested, consumers and developers alike were still shaken by the aftershocks caused by the Saturn, and of course, Sony had their ugly-ass DVD player that also played video games, eventually.
As much as I love the Dreamcast myself, times were rough for SEGA.
I respect the developers of these great PS2 games even more now, knowing the difficulty behind their work. Today we have it very easy, with PC hardware being used in consoles and easy-to-use engines such as Unity and UE.
Unity isn't as easy to use as it likes to pretend. There are just lots of beginner resources for game development in general.
I think the best game engine at the moment is CryEngine.
@@cyrusxover766 Just because UE5 is the most famous doesn't mean it's the best.
I always thought dev issues on Sony's consoles started with the PS3, so this was very interesting. It explains why PS2 emulation is so difficult and why, even to this day, some games don't work at all or just have a ton of issues.
'The Emotion Engine, named as such due to its high requirement for emotion streamed directly from the developers using the platform, generally leaving them void and in a zombie-like state' - Anonymous Sony Exec
Small caches just mean she's holding you closer to her Emotion Engine.
Gold
My PS2 can’t possibly be this cute.
Best comment in YouTube history!
Why do I have this warm fuzzy feeling after reading this?
Unfortunately, she is a sellout.
3:30 Sega did something very similar with the Mega Drive / Genesis. It includes the same Z80 processor as on their previous Master System console, for backwards compatibility - but was otherwise used (usually) as a dedicated sound & music controller.
They should have ditched Master System backwards compatibility on the Genesis during the design phase and put the money and effort toward a better VDP. What was the point in even including it when you still needed to buy a $70 add-on to access the backwards compatibility? The Master System sold very poorly; did they really need to make the Genesis backwards compatible with a system that barely anyone knew existed? They could have included a better VDP that allowed more than 64 colors onscreen, or could display more sprites, or even had some hardware sprite-scaling capability. Had they done that instead, would they have even needed to make a Sega CD or a 32X to stave off the visually superior SNES? Would they have avoided burning bridges with consumers over the two expensive failed add-ons? Would they have felt the need to surprise-release the Saturn and burn bridges with consumers, retailers, and developers? Maybe the PlayStation wouldn't have eaten their lunch in that case. Who knows, maybe Sega was forced to leave the console business entirely because in the late '80s they chose to include Master System BC on the Genesis in lieu of better specs that would have kept the Genesis competitive. It set off a whole chain reaction of bad decisions that almost killed the company :)
@@homiedclown Master System was incredibly popular in Europe + South America. Sega made plenty of mistakes over the years but MS backwards compatibility wasn't one of them...
@@homiedclown Also, the Power Base Converter was only $35, iirc. It was quite affordable. (I had one.) Plus, you're underestimating how useful that Z80 was. Being able to offload all sound processing onto a coprocessor took a lot of load off the main CPU. It allowed sound / music with nearly unlimited technical complexity, since the sound team wasn't competing for cycles.
Besides, seriously, in 1988-89 the Mega Drive/Genesis was already the most powerful and graphically impressive console on the market, and yet had an affordable price. The high end features you're talking about, like sprite scaling, were only in arcade games. That would have driven the price WAY up. Like Neo Geo prices.
@@homiedclown You're thinking of only the american market. Sega had to think of the world-wide market.
I shipped a few PS2 games. "Load balancing" the 7 processors (!) (EE, VU0, VU1, GS, SPU, IOP, and IPU) was a complete PITA but it was a beautiful sight to have everything running.
I'm just thankful I didn't have to do any Saturn games! That thing was even more insane.
IPU & VU at the same time? Some kind of movie decoding on top of 3D?
@@SerBallister That's actually the one combination where you don't have to be as careful, but it still required some careful synchronization -- at least in my case. I remember having to hack around the sample decoder playback demo as it had a tendency to hang. I never did track down what the cause was or why my hack worked.
Well, the Saturn also came with much lower expectations that you'd actually achieve something extraordinary, and it's not like the bulk of its library were masterpieces of hardware utilisation, far from it! And the hardware that was there wasn't full of errata, so it could have been worse :D
I think Final Fantasy 10 had some scenes like that.
Obviously the FF games on the PS1 used the MDEC, GTE, R3000, GPU and SPU all at the same time
You consider different functional ALUs in a CPU to be different processors? So does a basic single-core out-of-order x86 CPU have 3 processors if it has two integer units and one FPU?
The idea behind the PS2 and PS3 was ahead of its time. Instead of building large and complex chips with lots of memory which would do things for developers almost by themselves and ease up their coding, Ken Kutaragi did it the other way around. He and his team designed chips which needed to be unlocked and understood to unleash their true power, and the most amazing thing is that they were ultra efficient and tiny in die size, not to mention power consumption, compared to what they could deliver on the screen. The key to success was to master the silicon and find a way to unlock the maximum of its performance.
However, the problem was that gaming became a business, and business had no patience and wasn't willing to pay developers to play with the HW they received, so in the end the battle was won by simpler and easier-to-code chip architectures and designs which helped businesses save time and costs on developing the games. On one hand it's quite logical, but on the other the industry lost the diversity and the challenge of unlocking the true potential of the chips which came with the consoles. This is why it is quite amazing that the PS3, with its unique processor and 512 MB of memory in total, could run such amazing games, especially in the latter stages of the console's lifecycle - because developers could squeeze every single bit out of what it was intended to do.
Consoles like the PS2 or PS3 had a different approach to how games should be computed, and the idea was quite revolutionary if you look at what you could get from these machines. That is why I will always value all the consoles from before they became more or less slightly custom PCs with x86 CPUs.
I think Nintendo with its move to the ARM architecture has found a sweet spot for the future and will benefit from this decision in the long run, especially now that Nvidia is in the process of acquiring ARM. ARM is not a custom architecture like Cell or the PS2's EE, but at least it's a much more code-efficient and performance-per-watt-efficient architecture design compared to x86.
I think PCs will eventually need to include ARM in their ecosystem and push to create apps, including games, natively. I am quite positive that one day in the future AMD will either need to switch to a new ARM-like design (probably licensing it from Nvidia LOL ;) or develop its own ARM-like processor, as processor dies keep getting larger and larger. Multi-chiplet solutions are going to work for a while, but even that won't be a silver bullet for overcoming the main hurdles and obstacles of large chips.
Just look at the most powerful HPC computer in the world, actually coming from Japan ;) (maybe Ken Kutaragi helped a little bit? :)
Also, I am quite surprised that Apple hasn't released its own console yet, or a gaming TV box or something like that, now that they are creating a new ecosystem for their own ARM-based M1 chips. It will be quite interesting to watch which architectures keep kicking ass and which get replaced. I think the future is quite bright for some very custom types of architectures that developers know a lot about and can deliver amazing stuff on. Sadly the good old days, when new HW or a console was released and nobody knew how to develop for it, are gone, along with the magical "Eureka!" moments when developers figured out something amazing and made it real.
Unfortunately everything, even in life, gets past its "rock n roll" first stage, and excitement gives way to routine and stereotype, which is secure and comfortable, but the true and best developers out there will always like to challenge the norms and start a new "revolution" to unlock unknown hidden performance in the chips which are considered "too difficult" to master and code for. That's why I salute and respect the PS2, PS3 and all the "complicated" designs, because they were truly revolutionary!
Good read, but in retrospect what examples do we have of what the PS2 allowed for that couldn't be done on Xbox, some fog shaders? More of a magic trick than a reason to reinvent the box.
@@Kakerate2 Indeed, magic tricks which were only possible on unique HW with one-of-a-kind chip architectures and features which, once mastered, were much more efficient than the same results achieved on general-purpose and widely known chips with plenty of dev tools. The difference in what was achieved on both the HW & SW side between the PS1/N64 and PS2/Xbox/GCN gens, and then the PS3/X360 gen, was much more revolutionary and noticeable, especially on the efficiency side, than between the PS3/X360 and PS4/XOne gens and the current PS5/XbX gen. PC-like x86-based consoles will never be what consoles were before, when they could achieve better results and challenge even the best PCs while being much more efficient. It all comes down to general HW and chip architectures targeting first of all code quality and efficiency, so devs can squeeze out every single drop of performance with a much lower efficiency penalty for a bit more effort, while with even more hard work they can introduce new, never-seen revolutionary features.
Half of the 150,000,000 PS2 consoles sold probably went to Pimp My Ride, which put at least 10 of them in every car.😂
Yooo lol
What you made me remember! It was fun to see them putting a PS2/PS3 in almost every build they did.
That show was a complete waste of car, hardware, and paint. Oh thanks for the fish tank, Xzibit. That'll be fun to clean after the first day parked in the sun boils the occupants. Too bad I have to unbolt the trunk TV to empty it.
@@nickwallette6201 They basically made full scale hotwheels that were cool to look at as a kid.
As an adult it's obvious that putting 10 flatscreens, a minibar and a PS2 in a car with a broken transmission doesn't exactly help the owner.
“Written off as too complicated.”
SpongeBob narrator: *Two thousand games later.*
most were ports anyway (I think)
@@vasopel still 2000+ games and 150+ million consoles sold.
@@ps5hasnogames55 so? most ps3 games were ports too, and games were better on xbox 360. but at the end of the day the ps3 sold more consoles than the 360. :-)
*over 4,200 games
@@vasopel Helps when your console is also a cheap bluray player that doesn't require a subscription for Netflix & YouTube....
Imo the PS3 was less of a console and was more an excellent media box that also did a decent enough job of running games
This needs an entire series, great video!
Maybe expand on the parallelism on PS2 vs PS3 and why it is good/bad on each.
I always thought the PS2 was pretty easy to develop for, I guess I was wrong. Thank you for the clarifications !
EDIT : Wait, is this why RE : Code Veronica looked bland AF compared to the Dreamcast version ?
That was more due to differences between the two consoles. The PS2 lacks texture compression support that the Dreamcast has. Textures on PS2 are less detailed and colorful because of that.
Also, Code Veronica for PS2 was a port of the Dreamcast version. Were it made from scratch for the PS2, it could look better.
@@MadCapybaraRX Agreed. Devil May Cry is a better demonstration of the ps2's capabilities, doubt that would run at such a high frame rate on the DC and have so many particle effects
Another great video! :)
This familiar topic of "being hard to develop for" seems to be getting more popular now.
The detail you provided on some of the design philosophies was really great and fascinating. Looking forward to the next!
When my 2 favorite console hardware channels collide! I know we shouldn't choose favorites, but you've covered so many consoles I think it's only fair ;)
Both of you need to collab. It's fascinating to hear from both of you
Funny how the Gran Turismo games can look photo-realistic from certain angles!
i think it's recorded from an emulator
It's more about Color Palette I guess. Gran Turismo 4's Color Palette is still impressive even compared to modern games. It captures 2000s digital camera's aesthetics perfectly.
Yeah its crazy how good it looks with high resolution and 60 fps.
GT4 is a technical masterpiece, and really shows what skilled developers could do on the PS2
When people these days make jokes about how the PS2 pales in comparison to the Gamecube and Xbox in hardware, I just say Gran Turismo 4, MGS3, Jak 3, Black.
Developer skill > more powerful hardware
The fact that early games looked amazing despite these complications holy cow
Gran Turismo 4 (and its spin-off Tourist Trophy) takes the spotlight: 1080i resolution and a constant 60 fps with over 600 cars. That's really impressive for a 2000s console
@@JinteiModding Wasn't true 1080i, since the PS2 can only output a 440-line vertical resolution (while the GC, Xbox and even the Dreamcast were able to reach 480p or higher)
@@JinteiModding Everyone overrating GT4, completely overlooking games like DMC3 which looked great AND ran at 60 FPS.
@@VergilHiltsLT It's ok if you didn't like the game, but at least you have to recognize that it looks amazing.
@@sebastiankulche I don't even dislike GT4, I just don't see how. Sure the cars are nicely modeled and shaded, but it's still a racing game, meaning no particle effects, no bloom, no changing light sources, barely any animation work -- no fancy eye candy. Texture work isn't even that good, incomparable to games like Silent Hill 3 or 4. Yes, those are slower games, but still have large areas to navigate, and PS2 has no problem loading geometry.
Burnout 3 is a prettier racing game.
Dude, I love how you're able to break this stuff down in a way those of us with little to no technical knowledge can understand. Great video.
I love these super in-depth videos about older hardware. So cool to hear a developers perspective on a console. I know how nostalgic it is for me as a gamer, but this is a really fun insight into development.
I highly recommend the video "How Crash Bandicoot Hacked The Original Playstation" after you watch this. It's half an hour long but one of the original devs goes into great detail on all the hoops to jump through and hacks put in place in order to get it to even run.
Great video, but no mention of how RenderWare helped developers to improve PS2 game development?
Thisssssssss! RenderWare did a brilliant job with GTA SA and the Burnout games! Absolutely astonishing running on the fairly limited hardware
It certainly simplified development. I remember the performance for Renderware wasn't that great compared to custom engines, but you would expect that.
@@SerBallister I saw more videos in the following days. I learned that performance wasn't good at all, but most studios preferred to use RenderWare so they didn't have to bother with the complex hardware. But the graphics were still impressive; compare a PS2 to an OG Xbox or a Gamecube. The PS2 is behind, but by a small margin compared to the MUCH more powerful other consoles.
The limited amount of RAM also meant that audio quality was limited. Take for example SMT Nocturne with its terribly compressed music, or the KH games that had to use MIDI tracks to keep a decent quality. I wonder if the developers could've done any better in these cases by using DMA more efficiently
Some (probably most) games would stream directly from disc to get around the RAM limitations.
Resident evil 4 is a good example.
MVG - thanks for actually narrating your videos from a script. You leave natural pauses which allows viewers to properly absorb the subject. Unlike so many shitty channels where they jump cut every sentence causing a non-stop unending barrage of words.
I'm such a huge MVG fan - seriously love how technically in-depth you go on subjects like these while still keeping it understandable.
Also, I've been using your homebrew on the original Xbox since the early days of xbins. So much respect for you! Thanks for everything.
Love how the whole PS1 processor is just in charge of input. Fantastic use of resources and also jawdroppingly overkill.
With the input, you can run all the game logic. Just reuse the code from the psx version. The other processor is only for the looks. People want pretty games, not good games.
It'd be interesting to see developers releasing games on the ps2, now that we have a better understanding of how to harness the power. The games released for the ps2 were amazing. That and I love when devs get the most out of hardware with significantly tighter limitations than the cutting edge stuff available now. Breathing new life into the hardware from my teen years.
I love my PS2 and I keep coming back to playing it. There are so many great games I still haven't played yet
PS2 had an absolutely beastly library. I still go back to find new gems every now and again. The library was so incredibly diverse with different genres. Horror, JRPGs, Shooters, puzzle games, racing sims, and quirky indie games. It had them all.
I can normally understand most of these videos, but this one just went over my head. No wonder it was so hard!
It was an interesting system to program for. Our lead didn't like it because it wasn't like a PC. I liked it. However, we ported two PC games on the Quake 2 engine, and when we sent them off to Sony for profiling we found that our biggest slowdown was the instruction and data caches constantly stalling the graphics DMA. As it was a port, we couldn't afford to change the code much, but I saw that games written from the ground up with the PS2 in mind ran wonderfully. Getting all the processors balanced must have been a beautiful thing.
Hey dude, I'm in the process of learning assembly, starting with the 6502 processor. I'm also completely new to coding. I usually used to skip these videos of yours, but since starting my journey they're making so much more sense to me. I'm very much looking forward to enjoying them retrospectively now. Cheers!
I remember the government restrictions on the ps2 since it was so powerful it could be used for missile guidance. I’d love to see you do a video on that!
I miss the days when console makers created systems powered by unique hardware. Sure, everything is easier now that things are more standardized, but console hardware is definitely not as fascinating to me as it once was.
you're as heartless and blockheaded as the company.
I'd argue that it's a good thing that things are more standardized. I'd rather get more optimized games and backwards compatibility than know my console has quirky hardware that I will never benefit from.
@@FraserSouris I agree, standardized hardware is far more practical. However, it's not as interesting.
True
@@jayt1077 Yeah, I see where you're coming from. There's just something interesting about learning what made these older consoles and PCs tick. It seemed like they had to learn how to do more with less, and in the end it made them better!
It would be cool to see more examples of developers learning to take advantage of the PS2's abilities in a future video. The two examples you showed were really interesting.
Checkout Transformers (not the movie based games), Downhill Domination, Dragon Quest VIII, Ace Combat series all 60fps games with large areas.
Also Valkyrie profile 2, Shadow Hearts Covenant, Genji Dawn of the Samurai, Silent Hill 3, all really great looking games.
@@retrosoul8770 Adding to the list: Peter Jackson's King Kong, Cold Winter, Cold Fear, True Crime NYC, Reservoir Dogs, Flatout 2, Black, Mercenaries, Destroy all Humans 1&2, Ford Racing 3, WRC Rally Evolved, 24 The Game, Rise to Honour, Ghosthunter, Primal, Project Zero 3, Rule of Rose, Alfa Romeo, LA Rush, Test Drive Unlimited, Driver 3 & Parallel Lines, Scarface, The Godfather, RE Outbreak 1&2, The Getaway Black Monday, Hitman Contracts & Blood Money, Splinter Cell Chaos Theory & Double Agent, Onimusha 3 & 4, Van Helsing, Darkwatch, Fahrenheit, Tourist Trophy, Crash 'N' Burn, Haunting Ground, Shadow of Rome, Batman Begins, Catwoman, Stolen, Without Warning, Conflict Global Terror, Brother's in Arms 1&2, Alone in the Dark 5, Forbidden Siren 1&2, Time Splitters 3, Second Sight, Psi-Ops, Rogue Ops, Alias, The Matrix: Path of Neo
Shinji Mikami is a goddamn legend, along with Inafune. They were not afraid to call out the BS from Japanese console makers, despite the Japanese always having massive egos and being unable to recognize that a foreigner could be capable of doing something better than them. They both called out Japan for being outdated and complicated in vidya
Shinji get in to the emotion engine
Inafune...oh how the mighty have fallen.
@@goldman77700 Prease bow your head when addressing the honorable businessman inafune. It's better than nothing.
@@UZUK143 SHINJI GET INTO THE FUCKING SURVIVAL HORROR GENRE AGAIN!
@@andree1991 No😎
Despite being a difficult system, the PS2 delivered many classics still worth playing today :)
and the ps2 being the best selling console *ever* was quite a good incentive. still, it was a gamble on sony's side. they pulled the same trick with the ps3 and the xbox 360 kind of beat it in some major markets. sony has learned that lesson well ever since.
@@proCaylak yeah
Nope, it sucked and had shitty framerates :)
Man, i still can't play God Hand. ;_;
@@mikeuk66 Cry some more
Haunting Ground and Silent Hill 2 and 3 look amazing for PS2 games
The PS2 has so many games with great graphics
Gran Turismo 2
DMC3 from 2005 runs at goddamn 60 FPS and looks gorgeous. Unlike the horrid HD re-releases.
It's impressive how they managed to create MGS3 on the PS2, one of the most detailed games.
30 fps tho
@@ryobibattery What's the relationship between frame rate and game detail?
In MGS3 you get to approach the enemy and interact with the environment in a way I haven't seen even in current-gen games.
Damn, if you don't eat your food for several days (our real days), it will get rotten.
You can even get the flu.
Every playthrough I discover new things.
By the way, in the HD collection you can play MGS3 in 60 fps, so yeah
Devil May Cry 3 is 60 FPS. On a PS2. And it's a looker on real hardware, too bad HD versions completely butchered the VFX.
Tenchu games 60 fps mate
@@ryobibattery What old game had 60 fps anyway... you are nitpicking a lot...
Because Sony has a history of picking difficult to develop for CPUs in the name of raw power.
Fortunately they wised up by the PS4.
CELL was definitely a Sony Ego trip.
Considering how it was configured, it is a miracle it ran as well as it did. A silly CPU/GPU jack-of-all-trades-master-of-none uArch; no wonder it went the way of the dodo.
Oh, and all the silly myths about it: because of the bus to the RSX, no, the SPEs didn't directly enhance GPU abilities, they just reduced CPU bottlenecking on the main PPE thread.
Kinda like how old soundcards could take load off the CPU, and like those old PhysX cards or dedicated Nvidia GPUs used for PhysX could.
@@anasevi9456 Fully programmable pixel shaders were able to achieve far more beneficial performance enhancements in just the following generation of GPUs too. The GeForce 8 rendered the SPEs obsolete.
If anything, Sony always tried to "bet on the future" with their consoles. The PSX used a CD drive instead of cartridges, the PS2 had the two VUs for parallel processing (which *was* the way of the future, only with multiple CPU cores as opposed to add-on chips), and the PS3 had the CELL which pushed this idea further, only it didn't really work this time.
By the time the PS4 came, PC hardware had leapfrogged consoles, and had MS or Sony chosen anything else, it would have been laughably weak compared to a mid-range gaming PC. And of course neither Intel nor AMD allows you to modify their designs in the way the EE and the CELL allowed. This was still true for the PS5, but the landscape *is* changing.
@@EvanOfTheDarkness It's more that non-x86 architectures in general were nearly abandoned aside from ARM which was still very low performance at the time. A flagship phone from 2015 would probably outperform the PS4 and Xbox One CPU, but a 2013 ARM design would've been worse than what they got from AMD.
Going with AMD had two main positives: it was cheaper and easier to develop for. The Jaguar cores were absolutely pathetic, but by slapping enough of them together they hoped that parallelism would once again save them. It sort of did, but Cyberpunk shows what happens when a game isn't heavily optimized to keep CPU utilization to a minimum.
Not in this case.
The EE is just a MIPS CPU, not much different from the easy and familiar PS1 CPU. The complexity was there for two reasons: the market was consolidating onto desktop x86 CPUs, and vector coprocessors were new.
This meant that when developers used the EE the traditional way, like they did with the PS1, only a part of the compute power was used. The industry wasn't used to dealing with parallel processing or vector calculations (IBM's AltiVec was niche and AVX was years in the future).
Instead of saying that Sony liked to use "difficult" technology, it is more precise to say that they liked to use new technology, pushing for what they thought would become standard in the future, and that will always demand a learning curve.
I still prefer the times when consoles had a completely different architecture than a PC. Exciting tech with pros and cons, that's what it's all about
Aye, consoles were consoles. I think the PS3 was the last "true" console. Before they just became glorified PCs.
Eh I feel differently as it makes multi platform development way easier on the devs if they all share so much in common as they do now, I think we've just found our standard now. That's not to say that no consoles deviate from the norm like the switch which uses an arm CPU
Totally agreed. It also made games exciting, even the ones considered multiplat, because every version had unique differences.
It was much crazier on the Super Nintendo and Mega Drive, as well as the Saturn, PS1 and Nintendo 64, but the 6th gen was crazy in that regard too.
Now... you buy the same game on whatever set-top box or PC you have at home.
@@DukeDudeston PS4 and Xb1 are shitty PCs
@@waititstuesdaygod I miss the platform exclusives. If you were a 1-Console family, you chose a platform, a few franchises, a mascot, and a controller, and that became your identity. Everything now is so generic, it makes having competing systems kind of redundant, save for the market pressure economics.
It's interesting that the PS2's parallelism, despite its clock being slower than processors of other architectures at the time, let it spit out more processing per clock than any of them.
*looks at his dreamcast
you were the chosen one, little disc grinding noise friend
Get yourself an optical disc emulator, pal
The tile-based translucent-effect-rendering friend!
DC was very developer friendly, but the hardware itself was nowhere near as powerful as any of the consoles that came after.
@@RetrOrigin It was basically a Naomi arcade mobo compressed into a home console, like the Neo Geo was based on the SNK arcade cabinet mobos.
I still believe there is one timeline where gaming took the right path and both the Dreamcast and GameCube kicked Sony out of the console business.
Yes! This is the video I’ve been wanting to see for months. Thanks for breaking down the architecture
Absolutely brilliant. The Playstation 2 was such a gem. So many classics, and gorgeous exclusives that really withstand the test of time. Granted, late PS2 games are much more advanced than early titles for the most part, but it was a generation that truly helped shape the modern gaming landscape.
I remember having wrapped an OG Xbox game, and our next game was PS2 exclusive. Our engineer told me that all of my art and textures had to be reduced in color... to 16 colors total. Those old Apple IIGS palette management skills came in handy.
This is why I can't stand to look at PS2 games. They look worse than Voodoo1 color palette.
Then why did many big multi-plat titles look worse on Xbox?
@@Clay3613 Probably because the PS2 was the main platform for developers, and since the Xbox had an almost totally different architecture, games worked and looked different without a rework for that specific console.
It's similar to how Bayonetta looks on PS3 compared to Xbox 360 (that one is among the worst ports ever).
I am not really an expert, but I think it's the most logical answer here. I could be wrong.
@@salvatronprime9882 Bullshit. PS2 games have better lighting than any Xbox title, where characters look like cardboard cutouts against the environment.
The Xbox has nothing on shading and colour depth with its weakling DirectX 8.1 API.
@@VergilHiltsLT ROFL, that's the 12-year old child understanding of technology that I've come to expect from Sony fanboys. Bravo.
Mark Cerny really deserves the credit for turning this trend around with the PS4, as he was adamant from the start that he wanted an x86 architecture, and now many devs say the PS5 is one of the easiest consoles to work on.
It also makes porting to and from PC much easier
PS5 is an absolute breeze.
It was also much more straightforward than the Xbox One. Arguably MS had the stronger dev kit, but it still had multi-tiered GPU buffering and several co-processors, whereas the PS4 was more about brute horsepower. Strangely it kind of went the opposite way this gen: MS went wide, while Sony went for speed. The Series X is certainly more powerful, but I have heard the PS5 is easier to utilize fully, even though they are functionally identical.
That is sort of like asking whether apples or oranges are easier to squeeze.
If you were to develop a game for "an x86 processor and a Y GPU" today, that would be quite a challenge. But it is simply not how it works anymore, no matter what CPU architecture is in the box. Hence the near absence of PS2 documentation and SDKs being the key point here. Just translate that situation to any modern x86 hardware, and you will probably not have a very good time. ;)
At core, the PS2 way of doing things was still the way of the future. Even on PS4. Parallelism.
@@matthias6933 Not really. The X360 went for programmable unified shaders and a heavily multithreaded CPU. It also had eDRAM; no such things were on PCs at the time. Maybe on a dual core CPU the 2nd one was used to keep high scores, if you were lucky. 😉
My favorite of all time. I was absolutely blown away by it, and it brought me and my friends much joy during our teens
Great video - I'd love to see an example of how to code the PS2 and keep all of the chips fed with data.
Kinda wish Sony would've stuck to their original intended strategy of integrating the hardware of each console into the next machine, primarily for better backwards compatibility, but also because both the PS2 and PS3 hardware had unique capabilities that are difficult to recreate or emulate even on current top of the line hardware.
How do you manage to talk about these things without getting boring? You're so good omg
For me, the most important feature of old consoles is the fact that you actually owned the copy of the game that you bought! You could bring it along to your friend’s house or sell it after you’ve had enough of it.
Nowadays you just pay for a stupid _license_ to play a game but you don’t own a copy of it! Damn the corporates!
PS2 is my number one gaming platform and I feel like you just scratched the surface of this awesome device :)
Love watching MVG videos!!! Thanks man! Love you in-depth analysis
Man, I love me some buttery-smooth 60 FPS PS2 game footage!
Such mesmerizing information on the PS2! Thank you MVG for being so awesome!!
Nintendo made the most impressive hardware of that generation. Compact, cheap, yet quite powerful. They probably should have gone with full size DVDs though.
It was also about reuse of what was in the small caches. What was in the cache could practically just be fed to the GS without any toll at all while you were calculating the next data batch to feed it. The GS was a beast that almost felt like it never stopped as long as you told it to do something :)
i miss gt4. damn that "friend" that borrowed it.
Very good episode. I didn't follow everything, but it's fun learning that some older consoles really were different from PCs hardware-wise.
Nowadays it feels like all consoles are just a complex cross between a laptop and a PC. But I could very well be wrong on that.
At this point I feel like every console was hard to develop games for
Almost none were worse than the 2600
@@wishusknight3009 At least the 2600 was a success for a while. There are many other consoles that didn't even have a whiff of success, such as the Sega Saturn.
The PS1 was an easy console to develop for, which played a part in its success by attracting the third party developers that Sony, a newcomer, really needed.
2600, saturn, ps2 and ps3 are the infamous ones I know
@@abhitron And it was an excellent newcomer compared to its complicated-AF rivals, the Sega Saturn and Nintendo 64.
Outstanding quality content, really enjoyed it!
Unfortunately there was no mention of Jak and Daxter; it's also a technical marvel of its time.
I would really appreciate a video where you explain how that game works on the PS2, with its custom engine and programming language. :)
While the PS2 could produce an impressive number of polygons and visual effects, the textures were often quite muddy compared to the competition. MGS2 is a good example. Even the older Dreamcast seemed much better in this respect (e.g. Sonic Adventure).
Fantastic explanation. The production value of your content is beyond professional.
Seeing all the Ridge Racer footage makes me happy
All things aside, it was amazing that this old console could render millions of polygons and run smoothly, when even now most mobile devices can't.
The PS2 was ahead of its time with its complexities same as the PS3. Nowadays most consoles are small form factor PCs.
The problem is that creating powerful proprietary specs designed for gaming, based on high end desktops, would have cost Sony a lot of money.
They would have faced more criticism for having an overpriced console than for having an underpowered one
Some PS2 games, like God of War II, that have no load times despite streaming from DVD are really impressive. And with the RenderWare engine there was a good solution available for third-party developers. It also powered some technically impressive games like Burnout 3 and Black.
Need for Speed: Undercover and SSX 3 didn't look too bad either. You could really see how developers tamed the beast over the years. :-)
no mention of Sony's hack-y floating point precision shenanigans?
Or the broken Z compare on the GS?
Oh god, I'd forgotten about that. It was because they packed VIF tags alongside 24-bit floats or something, wasn't it?
@@Myndale Sony's broken IEEE-754 implementation.
Also, "doubles" had to be emulated in software. **OUCH.** I remember scanning the "map" file to verify we didn't have any double usage in our code. Something C++ has never solved: **turn OFF stupid automatic upcasts.**
Another great video. Would love to watch one like this for the GameCube.
That might not be super interesting, the only criticism I ever heard of the GameCube was that the memory controller was over-engineered. The only other interesting tidbit I can think of is that it had a fairly-programmable TEV that could perform similar functions to the pixel shaders in other gpus at the time. Technically, it wasn't a huge deal, but it was an interesting design choice.
@@bombkangaroo I think it had a lot of capabilities that didn't get much use. I always found it amusing that even Nintendo wasn't great at getting performance out of it. Compare Wind Waker (mostly flat shading and simple lighting, 30 FPS) with Star Fox Adventures (complex shading and texturing, dynamic lighting, realistic fur/grass effects, 60 FPS).
One thing that probably got on a lot of nerves is that the CPU is big endian, MSB first. Completely backward from most. It usually just means changing some tools to export in the opposite byte order, but it can really bite you in the behind when you're not used to it.
@@renakunisaki and Star Wars Rebel Strike, which pushes the GameCube to its full capacity and allegedly has the highest polygon count of that generation.
Interesting. Can't wait for you to do an Atari Jaguar one.
oh yeah, that'd be even more interesting than the PS2.
@@demonsty Agreed. Honestly, Sony was lucky people were buying PlayStation 2 despite the weak launch lineup, as the install base pushed developers to stick with the hardware. Atari (with the Jaguar) and Sega (with the Saturn) weren't so lucky. I really would've liked to see Jaguar and Saturn get a fair shake with developers, though the tools were largely to blame in both cases, and the tools simply didn't improve at the rate that PlayStation 2 and 3 tools did over time.
Frankly, this ended up biting Sony with PlayStation 3 when it initially didn't sell very well due to its high price, as there're numerous examples of devs simply outsourcing the development of PlayStation 3 ports of games because they had no interest in tying up their main teams with the ordeal. I can't remember for sure, but I recall devs being very quick to drop PlayStation 3 when PlayStation 4 came out relative to Xbox 360, as some devs were still making Xbox 360 versions of games while not bothering with a PlayStation 3 version.
I appreciate MVG so much. I was a huge fan of emulation in the 90’s and am happy this channel exists.
Sounds like the Emotion Engine was inspired by the Crash Bandicoot creator's design hacks.
The PS2 is still to this day my favorite 3D platformer console of all time. Jak and Daxter, Ratchet and Clank, Sly Cooper, and God of War (to some extent a platformer) all started on PS2. I have so many great memories of this console. It's sad that it gave developers such a hard time, though.
I miss the art style of PS2 games; I dislike the ultra-realistic 8K-texture RTX games that are all the rage now.
The art style that devs were going for is even more pronounced on a proper CRT via component or at least S-Video. The PS2's interlaced, analog graphics get dimmed, softened, and motion-blurred to hell on most LCDs, leading many people to misunderstand and underrate the quality of its graphics.
@@retrosoul8770 Definitley, used to have a CRT just for my PS2, nowadays I find it easier to emulate tough
@@Grandmaster-Kush I hear ya. It's still worth it for me personally because on my 20in Trinitron, most games still look better than on my lcd monitor upscaled via pcsx2 (plus no motion blur). (Granted my lcd monitor is kinda old so maybe games would look better on a more modern panel.)
Moreover pcsx2 struggles with some of my favorites, like Ace Combat, Zone of the Enders 2nd runner for example, more slowdown than what's present on original hardware. Plus I have a huge collection, idk if it's possible to run game DVDs via pcsx2...but it'd be nice.
@@retrosoul8770 You can run CD versions on PCSX2 but it's recommended to use ISO rips for better load times, I played MGS2 on disc just a while ago
@@Grandmaster-Kush darn, no dvd huh. Ok thanks. I'll give it a try someday.
The Sony PlayStation 2 was wildly popular when it came out. I remember not being impressed by it at launch. But developers learned the hardware and software started looking, sounding, and playing brilliantly.
I miss the days of truly bespoke hardware. Developers always found ways to push such devices beyond what was thought possible.
I love when developers talk about the Emotion Engine!
The small cache size was something developers were already used to. PS1, N64, and Dreamcast had even smaller on chip caches. The PS2's cache was easily in the good enough range - even considering that the more detailed models required more vertices.
The PS2 started a lot of franchises. Thanks, PS2.
Very fascinating that a multimedia-heavy system like the PS2 sold as much as it did, and a bare-bones console like the Wii sold about as much. Two very different approaches, equally met with huge sales; it just shows the gaming landscape is constantly changing.
The philosophy behind the PS2's DMAC, and later the PS3's SPU memory system, sounds a lot like the PS5's IO Complex. Instead of developers constantly streaming data from memory to caches they've moved to streaming data from storage to RAM.
Sounds like Sony was way, way ahead of its time in teaching devs the efficiency habits of today. This is ALL we talk about in programming now, isn't it? Cache locality is the number-one question in how fast your program can be. Essentially, we have tiny, tiny caches compared to how fast the processors are, and every cache miss on data is a massive slowdown.
And perhaps not coincidentally, parallelism/concurrency is also the number-one headache today. Programming languages are racing to figure out constructs that let the dev easily run code in parallel/concurrently while keeping it effective and bug-free. Same for trying to take advantage of SIMD.
The PS2 forced them to think about that back then already. I'm not saying it was a good thing for the PS2, or that the devs enjoyed it, but I swear they brought some of that insight with them to today as a consequence.
@Modern Vintage Gamer Am I imagining things, or does the PS2 hardware seem a little like a simplified SGI Indigo or Octane workstation in how it is configured?
Excellent video! I would love to see your take on the Sega Saturn's architecture next! Low Score Boy's video on the Saturn was an absolute delight to watch because I could finally understand many of the complexities developers had to consider when developing for Sega's pretty grey/white box of fun. Especially considering how most of them never knew how to do parallel programming with dual CPUs, let alone two graphics chips!
Hats off to all the PlayStation 2 developers who had to work with the Emotion Engine, and to all the great games we got out of it! This was EXTREMELY educational and technical, a far cry from the visual programming you'd do in Unity and the like today!
I think I understand: with traditional PC development you create a stack of tasks, then the whole stack is fed through the proper processing units designated to handle the different qualities of the tasks. With the PS2, the units at your disposal can't handle a huge workload dumped on them, but they're flexible, and there's one unit whose specialty is parsing and feeding the tasks through the different units at the correct intervals, as they're able. This could conceivably mean higher processing ability and speed, as the units work in tandem and the load on singular units decreases.
of course I'm an idiot so that could all be completely wrong
A "stack" in computer science is something else; you mean "batch". And no idea what "quality" is supposed to be. "Type"?
PS2, the most exciting video game console ever!
Thanks for the awesome video! I love PS2 and PS3 explorations, both consoles seem like they were designed to go on forever.
Where is the steam deck reaction and possibility video?
I still love the distinct look of ps2 games, seeing the games used in this video gets me nostalgic