Yay, in between Christmas and new year computerphile!
Are you talking about the commercial Christmas, or the one that ends in January?
8:56 One of those “couple of other things” in the Amiga was the Copper. This implemented display lists, which allowed it to do fancy animation effects without actually needing to blit large arrays of pixels around. Now *that* was impressive.
My MegaST4 has a Blitter chip. Mad props for mentioning it. Cheers!
Back when generating fractals on the computer was a new thing to do in the 80s, there was a DOS program called FRACTINT where the “INT” part was because it could do fractal calculations relatively quickly without a coprocessor, using just integer maths. Having an FPU was a luxury.
Ooh, yeah I was randomly zooming into fractals without any special reason with that program. But I think I either ran it on an Amiga (don't know if it was available for that, can't remember) or a Pentium 60 otherwise. ALL THAT COMPUTING POWER!
Yep, flashback from the 1980s!
@@superdau Fractal generation was an entire genre of software. While I'm not sure that Fractint itself was available, there were certainly equivalents. Vista and Mandelbrot, for example.
Took a course in chaos theory at Uni, and when my father heard about it he proudly handed me a book on fractals written very much in the '80s. "Obviously, graphics like these couldn't be produced on a home computer, but require large amounts of time on a university computer cluster" *(128x128 pixel Koch snowflake)*
David Gustavsson FRACTINT didn’t come out until the very late 80s, so that would have been absolutely true at the time. Any chance the book you’re referring to was the James Gleick one?
I clearly remember using the Intel 8087 co-processor in the 1980s, and it made a huge difference to the speed of floating point arithmetic. It was quite an expensive add on...
It's interesting, because it doesn't really act like a co-processor so much as an extension to the actual CPU.
The Intel 486 was the first to have a built-in co-processor. Not all chips had this feature: in the SX version it was either absent or disabled after a manufacturing failure. All subsequent Intel chips (Pentium onwards) have a built-in FPU.
@@gordonrichardson2972 The 486SX was a completely cynical marketing ploy. It lacked the (functioning) floating-point hardware, but you could buy a 487SX chip to add this. However, the 487SX was more than just a floating-point coprocessor: it was actually a complete functioning CPU. Plugging it in caused the 486SX to turn itself off and hand over all functions to the 487SX!
Long before the Spectre security vulnerability, there was this terrible fear that the Pentium FDIV bug could produce a wrong answer in your calculations (maybe a one in a million chance if you were a serious number cruncher). The shock and horror caused Intel a pre-tax charge of $475 million!? Edit: only very early versions of the Pentium; the P4 is fine. It's kinda confusing that we had the 486, then the Pentium, then the P4...
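For reference, the widely circulated FDIV check boiled down to one division that an affected Pentium got slightly wrong. A minimal C sketch using the classic published test values (illustrative only, not a rigorous diagnostic):

```c
#include <stdio.h>

int main(void) {
    /* Classic FDIV test values: 4195835 / 3145727 hits the flawed
       lookup-table entries on an affected Pentium. */
    double x = 4195835.0, y = 3145727.0;
    double residue = x - (x / y) * y;  /* ~0 on a good FPU, ~256 on a buggy one */
    printf("residue = %g\n", residue);
    return 0;
}
```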
It was expensive, but leaving the FPU off also reduced the number of transistors on the chip, so the yield was much better and the customer got a better price. As transistor counts increased, adding the FPU to the chip became relatively cheap.
You know what would be cool? A high-level look at some CPU architectures. Like, even today if you want to license an ARM core, you still get asked whether you need floating-point operations to be done in one clock cycle or if you can get away with them taking longer.
In the context of this channel they should do an abstract overview of all the different MPU/MCU/CPU/PLD stuff used in hardware design courses over the years. That would be really interesting for the plebs that haven't attended, and could be nostalgic for those that have.
Assuming there are, of course, any hardware-focused teachers. I don't know how Nottingham Uni is structured; it may be all history/software for all I know. Also, MCUs aren't exactly what we imagine a computer to be, but they're extremely interesting little devices. And firmware engineers are few and sought after. There are so many great topics to cover.
@@hrnekbezucha Yeah, I know I'm mixing interests. I am currently goofing around with vintage MPUs for the first time but my primary interests are MCUs and PLDs :)
3:00 I love the way you say "pretty much" :D
I remember my dad fitting a maths co-processor to our 486 SX25. I think I remember him saying that this made it a DX2 50. I was 10 at the time so didn't really understand.
That high society "prrretty much" at 3:00
6:09 Actually that was done using fixed-point (scaled-integer) calculations. Which were quite a pain to work with, but saved on floating-point hardware.
As the hardware cost dropped, eventually it got to the point where the saving from leaving floating point out was negligible compared to the cost of programming without it. This was about the early-to-mid-1990s.
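To make that concrete, here is a minimal sketch (not from any particular program of the era) of 16.16 fixed-point arithmetic in C: values are stored as integers scaled by 65536, so multiplication and division just need a widening shift to stay in range.

```c
/* Illustrative 16.16 fixed-point arithmetic of the kind games used
   before FPUs were common: 16 integer bits, 16 fractional bits. */
#include <stdint.h>
#include <stdio.h>

typedef int32_t fix16;                        /* stored value = real value * 65536 */

#define FIX(x)  ((fix16)((x) * 65536.0))      /* convert a constant to fixed point */

static fix16 fix_mul(fix16 a, fix16 b) {
    return (fix16)(((int64_t)a * b) >> 16);   /* widen, multiply, rescale */
}

static fix16 fix_div(fix16 a, fix16 b) {
    return (fix16)(((int64_t)a << 16) / b);   /* pre-scale, then divide */
}

int main(void) {
    fix16 x = FIX(3.25), y = FIX(1.5);
    printf("3.25 * 1.5 = %f\n", fix_mul(x, y) / 65536.0);  /* 4.875   */
    printf("3.25 / 1.5 = %f\n", fix_div(x, y) / 65536.0);  /* ~2.1667 */
    return 0;
}
```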
The first Intel 8087 co-processor had a total of 45,000 transistors, whereas CPUs from the Pentium onwards have millions (and today's have billions), which shifts the cost ratio considerably.
It was never about "savings". Intel simply liked charging double. You have to remember their games with the first 486SXs being locked DXs, or the 487SX. Competition forced them to end it.
@@rasz It was never just about Intel. Unix workstations used quite a wide variety of chips then -- MIPS, PowerPC, Alpha, SPARC, HP-PA etc.
Sean is such a great videographer. I hope Brady is proud.
"I am using my keyboards CPU to gain a 5% boost in bitcoin mining speed"
- Some dood from the '90s, if bitcoin had been a thing back then.
Anybody remember the Weitek processors for PCs? The early versions of 3D Studio had support for them. Always wondered what sort of performance boost they could deliver (relative to the x87).
8:24 The trouble with the hardware blitters in the Amiga and Atari ST is that they were never as powerful or flexible as software. The Macintosh had its QuickDraw graphics routines, which implemented sophisticated drawing modes and nonrectangular clipping. When you tried to build similar functionality on top of a more limited hardware blitter, the performance would often end up worse than if it was all done in software.
A little bummed that he didn't even mention any modern co-processors/accelerators.
*cough* red rocket X *cough*
awe I was hoping for an episode on co-processors and accelerators
With some Amiga models (not all of them) it was possible to add in an expansion board with a new processor that effectively took over as the main CPU... so you could get an accelerator board for the Amiga 1200, for instance, with a 68040 or 68060 processor that took over from the main 68EC020 chip.
Only the PC rivaled the Amiga in expandability. People didn't really go all out on expanding Amigas until after 1992.
There were ways developed to hardcore upgrade most Amigas, even the 600, a feat which involved fitting a PLCC socket upside down to the surface mount 68000 and basically bolting it to prevent it coming loose.
Aw, I was hoping it was going to look at the "sideways" CPU expansions for the BBC Micro too. 6502, Z80, 68000, 32016, ARM...
Amiga! The best computers I've used.
These are (were) called co-processors. Amiga has lots of them! Amiga rules (well, used to rule :-/ )
Excellent description!!
Any chance that we'll ever get a video on management engines and how we can safely use our computers without a management engine? Seems like all the major CPUs have them, but there's zero point in having them.
This is a _very_ serious topic and one I'd love to see Computerphile tackle.
The Intel Management Engine (Intel ME) is a very insidious piece of hardware and very difficult to extricate from your (Intel) computer -- and AMD has an equivalent beast, so no easy out there. Maybe computers built around ARM chips are an alternative? There is ME Cleaner for the Intel situation -- maybe you guys could do a demo of that.
@@modolief ARM has a similar engine called TrustZone, but that's for their chips with speculative execution. Here's to future success of RISC-V and the Mill processor.
@@SimGunther Wow, the Mill processor! I thought nobody knew about that.
Would there be any benefits to moving some of the more bloaty/specialized CPU instructions to a separate accelerator chip/card? For instance, I'm guessing that normal consumer PCs do not necessarily need HW-accelerated hashing. Maybe they don't even really need AVX. Would removing such functions free up chip area? And could that chip area be used to somehow speed up the more commonly used instructions?
I was hoping for mention of machines such as the Commodore 128 (6502/Z80), or machines such as the BBC Micro or Amiga/Macintosh with x86 hardware add-ons (so 680x0/80x86), where there was more than one general purpose CPU in the system that could operate simultaneously, and thus the issues that could come up in such a system, such as bus contention, and how this problem was dealt with in hardware or software on those systems. Any chance of a video about this in the future please?
C128 was a trash fire, you couldn't run both CPUs simultaneously. The C128D was even worse: you paid almost full Atari ST price for a system with 3 CPUs, but only 1 ever working at any given moment.
@@rasz Ahh, but even if you can't use more than one CPU at a time in the 128, it's still an interesting case to discuss, because of the difference in opcodes and the method of switching control, to name just two. In the case of the Amiga with the Janus library, that offered some very interesting possibilities that I don't think were ever really explored as much as they could have been. And I understand some Apple machines were sold with a full 80486 - the Power Macintosh 6100 DOS Compatible is one of them, and those are interesting machines for such a discussion as well.
I remember being in awe of the NeXT workstations with a 68030 CPU and a 56001 DSP.
I now maintain legacy systems that combine a 68060 derivative with an ADSP-2181 for I/O. Seriously idiosyncratic assembly language on those things...
A modern example I have played around with for a while is the Nintendo DS. It has both an ARM7 and an ARM9, each with different speeds and capabilities. You mainly program on the ARM9, while the ARM7 acts like a coprocessor for things like wifi, etc.
IIRC it was actually there for GBA compatibility. So, in design it's actually something like the Z80 in the Megadrive, which was there partially for backwards compatibility with the Master System.
@@TheTurnipKing pretty much. Smart design if you ask me.
Are GPUs considered as additional processors?
absolutely
Are they like FPUs, except doing matrix arithmetic instead? Effectively an MAU, a Matrix Arithmetic Unit.
I don't know whether graphics cards do more than matrices (what a torturous life that must be).
Honestly, they're more like complete, dedicated sub-computers.
GPUs mostly do both floating-point and integer matrix-vector arithmetic of small fixed arities, plus a couple of specialized perspective rasterization pipelines.
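As a rough illustration of the "small fixed arities" point, the bread-and-butter operation is something like a 4x4 matrix times a 4-component vector, done millions of times per frame. A plain-C sketch of just the arithmetic (this says nothing about how a GPU actually schedules thousands of these in parallel):

```c
#include <stdio.h>

/* One vertex transform: a 4x4 matrix times a 4-component vector. */
typedef struct { float m[4][4]; } mat4;
typedef struct { float v[4]; }    vec4;

static vec4 mat4_mul_vec4(const mat4 *a, vec4 x) {
    vec4 r;
    for (int i = 0; i < 4; i++) {
        r.v[i] = 0.0f;
        for (int j = 0; j < 4; j++)
            r.v[i] += a->m[i][j] * x.v[j];   /* row i dot x */
    }
    return r;
}

int main(void) {
    /* Translate the point (5,5,5) by (1,2,3) using a homogeneous matrix. */
    mat4 t = {{{1,0,0,1}, {0,1,0,2}, {0,0,1,3}, {0,0,0,1}}};
    vec4 p = {{5, 5, 5, 1}};
    vec4 q = mat4_mul_vec4(&t, p);
    printf("(%g, %g, %g, %g)\n", q.v[0], q.v[1], q.v[2], q.v[3]);  /* 6 7 8 1 */
    return 0;
}
```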
Modern GPUs are, for all practical purposes, CPUs. The difference between a CPU and a GPU today is that CPUs are optimized for running code on a small number of threads (tens), while GPUs are optimized for running code on a very large number of threads (hundreds of thousands).
The reason they're separate is that these uses require very different microarchitecture under the hood to run fast. A modern CPU throws huge amounts of hardware resources at making a single thread run 10% faster. This is extremely wasteful if you're concerned with running code with many, many threads.
GPUs also tend to have more "exotic" specialized hardware units internally than CPUs do: things like triangle rasterizers, texture units, ROPs, and very recently even specialized BVH traversal units for ray tracing. These jobs could have been done fast enough in shader software, but developers keep throwing more and more work at them as graphics move forward.
CPUs tend to have much larger caches than GPUs. This is largely because cache misses hurt CPUs much more than GPUs, since an access all the way to RAM incurs hundreds of clock cycles of latency before the results come back. A GPU will just switch to another thread while waiting. A CPU often doesn't have this luxury because there might not even BE another thread to switch to.
The NeXTstation had a Digital Signal Processor (DSP) for handling multimedia functions like voicemail attachments in an e-mail (if you had too many ahhs and umms in your voicemail, you could edit it before you sent it!). That, and 32-bit color (8 bits each for red, green and blue + 8 bits for transparency), made it really hard to go back to DOS and CGA graphics.
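For what it's worth, that 32-bit color layout is just four 8-bit channels packed into one word. A small illustrative C helper (the R-G-B-A byte order here is an assumption for illustration; real frame buffers may order the channels differently):

```c
#include <stdint.h>
#include <stdio.h>

/* Pack 8-bit red, green, blue and alpha into one 32-bit pixel. */
static uint32_t pack_rgba(uint8_t r, uint8_t g, uint8_t b, uint8_t a) {
    return ((uint32_t)r << 24) | ((uint32_t)g << 16) |
           ((uint32_t)b << 8)  |  (uint32_t)a;
}

int main(void) {
    printf("0x%08X\n", (unsigned)pack_rgba(255, 128, 0, 255)); /* 0xFF8000FF */
    return 0;
}
```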
Fun fact: if you're doing CG rendering on an AMD Threadripper and your graphics card is average (an AMD RX 560, i.e. average low-end), using the CPU to render actually yields a faster result than the GPU. The CPU in this case is astoundingly fast, and any decent rendering or baking software can use every thread handed to it. It illustrates the point made here: handing work to dedicated hardware becomes redundant when the CPU can do it faster. The advantage of handing it to the GPU is that the machine stays usable while it renders, albeit a little slower as it communicates with the GPU periodically; if you set the CPU to process it, the job finishes faster, but the system feels very, very sluggish because most of its cycles go to that one piece of software. I just thought it was an interesting point that even in the modern era this can happen if your system hardware isn't balanced and one chip is way faster than the other. (This machine's primary job is crunching numbers and audio processing, so the GPU is almost an afterthought.)
But how many flops did that early flop unit flop if it could flop in flops?
Early co-processors like the 8087 worked in milliflops (joke) compared to today's CPUs.
A couple hundred kiloflops or low megaflops. With software emulation, only a few kiloflops.
Next, GPGPU videos please!
I would love to see Computerphile do an episode about computer/internet operating speeds and the relationship between hardware and software. And I've never quite understood what firmware is. Why do store-bought PCs slow down so soon after buying?
When you buy a PC new it's going to be running as pure and untouched as possible, aside from any bloat that the manufacturer has added. You then get it home and add all of your peripheral drivers and software, install the dozens of software packages that you use, download a couple of browsers and then a handful of games. All of these things take up the resources (CPU/RAM) that were previously free when you initially bought the machine. Modern computers are abundant in resources and timescales may only be milliseconds, but each one of these things takes time and effort to 'do something', which is most likely the slowdown that you see.
3:00 lol what was that !
Yay A cuber
And a similar thing for "blitter chip" at 07:05, even though we hear the same word at 07:03 just fine.
Prrrety
Only Dave from EEVblog has an old laptop with a POPULATED FPU socket. (Of all the videos I have seen.)
All of the collectors put one in, just because.
In reality only specialized software used the FPU pre-~1994. The first non-scientific/business "program" I encountered that used one was Magic Carpet (a game), and then Quake dropped in 1996.
Uniform memory access architecture should be seen as an ideal, along with the generalization of multi-processing parallel computing. Ideally, general-purpose programs could send tasks to both the CPU and GPU as resources become available. One issue with this is the need for just-in-time compiling for all programs running on the system. Games built around shader programs simply send those programs to run on the GPU, while the OS could recompile and move a program to whichever processor is the better fit, or to offload the CPU. If any part of the GPU still needs dedicated memory, that part should be treated as a sub-processor with highly specific functions and infrequent reprogramming. The rest is unified, and general-purpose programs can then be parallelized between processors since they share memory.
Some processors are interdependent. The keyboard CPU, for example, is the only one that can get interrupts for the physical keys, and unified memory wouldn't make much sense for a peripheral anyway. In this sense the CPU on the device is really a sub-processor, which might be general purpose for development convenience since microcontrollers are so accessible, but its functionality is subordinate to the system: it doesn't expose general-purpose processing or programmability. This is also true of much older GPUs, and of the display controller hardware on dedicated cards or as part of GPU chipsets.
7:35 Presenter points finger at one of an array of a dozen nondescript-looking chips. Audience nods sagely.
I was hoping you would get into something more interesting like computers that actually had 2 CPUs, like the Fujitsu FM-7 with 2 MC6809s or the Sega Genesis/MegaDrive with its MC68000 and Z80. Perhaps in another video.
or C64 + Commodore 1541 combo used as a dual CPU system.
The Intel 286/386/486 used to have a coprocessor called -87. What was that about?
This *was* the FPU (floating-point coprocessor). The 80486 was the first generation IIRC which had the FPU integrated on a single die, but only for the "DX" version. There was also an "SX" version without built-in floating-point support, and a 80487SX coprocessor.
With the Pentium, Intel finally chose to make floating-point support a full, on-die feature of every new processor.
@Jeff Jibson Interesting, thanks… I never even heard of Weitek before. No wonder that company went under if the sales were that bad.
Math co-processors, remember them well... installed several CAD labs with them... those were the days.
Why not just call them what most people called them back then? Co-Processors ? (I might have the spelling and hyphen wrong.)
Didn't expect an upload, but that's okay
could you make a converter for the FPU slot to accept an 8088?
that'd be cool
Not really. It's designed specifically to accept the matching FPU. It's best to think of the 8088 and FPU combo as one processor cut into two parts.
If you wanted to multiprocessor an XT class machine, you'd probably want to design a card to fit into the cpu socket... or on the PC bus itself.
@@TheTurnipKing well, unfortunately I'm not smart enough for PCB design XD.
YOU MUST CONSTRUCT ADDITIONAL -PYLONS- PROCESSORS.
How about a video on fixed-point arithmetic as a poor-man's alternative to floating-point?
Why do Computerphile videos start off with an out of context clip from the middle of the video?
Thanks for great videos.
I thought this was going to be about the mixed performance of various tasks on multi core computers, not a history of sort of multi processing.
Dr Steve Bagley takes us on a trip into an FPU, next video please ?
You must construct additional processors!
What about i860?
The i860 was not very successful by most accounts, except for very specialised applications, such as signal processing and aerospace.
You should invest in a stand for the camera; the wobbling and swaying of the camera is annoying.
put subtitles please
YouTube community subtitles are switched on to allow the community to help subtitle the films. Sadly this means the automatic subs don't show. Perhaps go into community subs and look there? >Sean Once someone starts work on this the subtitles will be here: ruclips.net/user/timedtext_video?v=CDpL9wOQcus&ref=share
Dr Steve needs a bigger office!
Blitter always seemed to me like a rudimentary GPU
Most home computers employed a rudimentary GPU to free up the processor from the quite onerous task of generating a video signal. Some of the very early, cheaper systems, like the ZX80, did not.
The problem is that generating a video signal is not a task that can be interrupted, so ideally you need something working in parallel with the CPU, leaving the CPU free to do other work.
The fun thing is that this, not the CPU speed, is ultimately responsible for most of the distinctive video limitations of a system.
the camera angle is awfully low
Happy 7E3 to you all, or should I say happy 11111100011?
FE3? Did you switch the first bit?
What would he do without his two hands to help him speak?
Sounds like techies always knew the workload would always increase. Cool
Is this channel for native English speakers only?
Russian here, have no problems
It's for any person who loves computers. I believe it is filmed at a university in Great Britain. Not sure where though.
Nottingham
I love this channel, but sometimes I have problems with the very fast speaking and with no subtitles. The start of the video is really too fast for an average non-native English speaker.
Christmas Pudding Mug Mix
When I was a kid, I learned how to program the ANTIC, which is the graphics processor of the Atari 65XE.
Me too 😃
His shirt looks like Minecraft bedrock.
These days you can do those calculations on ya watch :)
A CPU of that subsystem... yeah, no. It's not a CPU then.
Amiga!!
try making a game with only 40 kilobytes, like an NES game
Built like a tank or built properly........... ;)
2:24
*_ThAt's a LoT Of DAmAGe_*
Is this English?
Kubilay Yazoğlu He's English anyway
I hope all of these videos were done on the same day, either that or Steve has a very boring wardrobe. Nice shirt though.
You guys picked a bad upload time, between hundreds of CCC videos
Is Steve diabetic, or does he just genuinely like Diet Coke? (I'm both, just for reference.)
Phosphoric acid dissolves your teeth!
You don't have to be a diabetic to drink diet drinks. A few milligrams of Aspartame is a far better option than a few teaspoons of sugar, even for a non-diabetic.
@@mandolinic You could also drink tea… without sugar. :-)
@@TruthNerds Oh sure. I don't take sugar or any sweeteners in my tea or coffee, and I probably drink more of those than soft drinks. But as a non (alcohol) drinker, if I go in a bar or pub, then tea or coffee isn't always on offer, or if it is then it's some gunk from a machine, so I need to know there's a soft drink option for me.
Swag
What's MAFFS? Is that an acronym?
...Oh... You mean math :P
Your speech is really hard to understand :ccc
chains00 no it ain’t
@@simatbirch It really is for a non-native English speaker (at least for me). It would be amazing if they enabled automatically created subtitles.
Of course it's not if your native language is English. Mine is Hungarian, and I have a B2 English language certificate. Another reason is the specialized vocabulary he uses. Although I read a lot about computer science, there are always quite a lot of unknown words for me in his videos.
chains00 try slowing it down just a little. It will help.
First 😁
Benjamin Buter Petersen
Actually Gareth Ellis is...