I could listen to this man talk about computers for years
Yep, me too. Technically I have little interest in this history but every time I hear him I end up utterly fascinated and attentive. It's like he hypnotises me or something!
@@arcanics1971 I swear it helps me relax , and after he is done speaking I just zzzzzzzz
@@mohamedhabas7391 Glad I'm not the only one. His voice is pure ASMR.
Bested only by Sir David Attenborough
The way his videos start and stop I think he actually has been sitting in that room talking for years and they just turn the camera on whenever they need a new video.
These videos are fascinating as they delve into the history of our profession. For example, I remember a BYTE magazine editorial from 1977 in which an electrical engineer couldn't understand what all the fuss was about denser memory since "nobody could ever possibly need to use more than 1K."
When I learned Pascal in 1980, we were taught about p-code. The idea we were taught was that standard Pascal was so basic in its functionality (it was created as a teaching tool for structured programming, the buzzword du jour then) that industrial use would inevitably lead to many different versions of extended compilers. So if you used an extended compiler on one computer, it would generate p-code that you could then port to another computer and run through its p-code interpreter. The idea never really caught on for Pascal, but the Internet made Java bytecode a viable option.
Admittedly, since I spent my career in embedded programming, I didn't get much exposure to other forms of intermediate code.
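Since everything here hinges on what a p-code interpreter actually does, here is a toy sketch in C (nothing like real UCSD p-code, just the principle): a compiler on one machine emits a stream of simple stack-machine opcodes, and any machine with this little interpreter can run them unchanged.

    /* Toy "p-code" interpreter: illustrative only, not real UCSD p-code. */
    #include <stdio.h>

    enum { PUSH, ADD, MUL, PRINT, HALT };

    int main(void) {
        /* the portable part: "p-code" for  print (2 + 3) * 4  */
        int code[] = { PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL, PRINT, HALT };
        int stack[16], sp = 0, pc = 0;

        for (;;) {                        /* the per-machine part */
            switch (code[pc++]) {
            case PUSH:  stack[sp++] = code[pc++];       break;
            case ADD:   sp--; stack[sp-1] += stack[sp]; break;
            case MUL:   sp--; stack[sp-1] *= stack[sp]; break;
            case PRINT: printf("%d\n", stack[--sp]);    break;
            case HALT:  return 0;
            }
        }
    }

The opcode array is the only thing that has to travel between machines; each target just needs its own copy of the loop.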
Ah. UCSD Pascal. How I remember hating how type-tight it was. I remember writing a lot of stuff in FORTH and thought p-code would be an extension of its philosophy. I went the other way, into mainframes. P-code, provided it had been designed to take advantage of common ISA functionality that wasn't yet available (64-bit addressing and virtual memory being big ones), would have been one way to address the portability issue. Unfortunately, at the time, portability wasn't seen as that much of an issue. How differently it might have gone had p-code arrived just 10-12 years later. Or not. :-)
Structured code never really disappeared, a basic version of it just became so standard that the alternative was mostly forgotten, and when someone wanted a more advanced version of it they started raving about "objects", which themselves are really just an extension (if even that much) of structured programming.
I remember seeing a diagram of p-code on a whiteboard. This was when I was working for David Joslin (formerly on the Pascal committee). I once asked him about Algol-68 and he said "The trouble is that I can't really think in Algol-68 the way I can in Pascal". I was a very junior programmer who felt he had a long way to go!
Prof. Brailsford really reminds me of my grandad. My grandad was on the HMS Belfast (chief comms engineer), and when he was telling me stuff he had the exact same demeanour when telling stories. I love how the people from their era are so passionate about their field.
Same here.
2k of RAM... Riches beyond the dream!!!
I can see chrome crying in the background. Poor browser is hungry
I worked on computers with ROM that, if added up, was 12k, and the RAM was made from a pair of Mostek MK4007 serial shift registers, giving you 512 bits of RAM in the 2 shift registers. Note bits not bytes.
Looked up the last memory there: Mostek MK1002 shift registers, used to act as a serial store. 256 bits, in 2 128-bit shift registers, in a TO100 can. A whole 512 bits again. They were made from unobtanium then; only now do I see the datasheets on Archive.org.
@@FoxDren sysctl vm.swappiness = 2000000
Early home computers didn't have a lot more than that actually
Please do a show on LLVM
I would love to watch one on LLVM, especially since we're talking about UNCOL. But I think Professor Brailsford's priority should stay on these fascinating historical stories that few from the younger generations have heard. They are rare on the internet, especially in video form, and especially from an excellent storyteller like our professor.
They really should get Chris Lattner for that
might be too specific for a cs channel
@@FrankHarwald we are talking about cross compilers no?
My thought exactly that he was describing llvm, lol
Professor Brailsford is my favourite speaker. Could listen to him all day talk about computers. Such an interesting character, I look up to him.
I’ve been listening to this man for years, and it is wonderful 🤗
We have come a long way, haven't we? I have very fond memories of learning Algol 68R and assembly coding the Z80 at the University of Nottingham in the late 1970s.
Z80 is where it's at ^~^
1:42 "... was a typical electronics engineer, 'You shouldn't be messing about with C, everything can be done in Assembly. I'm being very generous with you, I'm giving you a 2K ROM to put your program text in, and if you can generate that out of C and not overflow, fine.'"
@@epsi well at least C got compiled into assembly and therefore was still very fast. Python is just slow and bad as a standalone language
@@DVSProductions that's from a very machine-centric point of view. Languages are meant to be writable and readable for human beings, and let the machine itself, or rather ever-faster machines, achieve efficiencies. In a way, your statement is like the electrical engineer's in Fernando's quote. You're arguing that we should write code in service of the machine, when computer languages should be in service to human programmers. Because in that sense, Python is a very excellent language! The next generation will learn computer languages that are even less efficient than Python, and your "compiled into assembly" will seem as quaint as coding for a machine like Pascal's calculator seems now.
LLVM is damn close to that universal intermediate language. It's gotten to the point where the maintainers are rejecting front-ends like Pascal because it is just too much hassle for them to keep everything in sync.
I just now realized that Serial means in series instead of parallel!
LLVM finally made the architecture robust enough that you only need a frontend (like Clang) to compile for basically anything, as it uses an intermediate language
Ehhh... LLVM IR isn't necessarily portable; clang still needs to know about different platforms/architectures, if only so all the macros and constants can be properly defined.
GCC was there first. An awesome cross-compiler system that works _if_ you understand how to build one. It's not simple. I've had seasoned IT pros end up completely confounded when I describe the process.
Yeah, LLVM IR is definitely the closest thing to UNCOL we've seen yet, but there are still some prevalent issues. The front end still needs to know some specific details about the target architecture, such as register width and some details of the target ABI (such as preferred alignments). You can get this information from LLVM, but you must do it at the time the IR is generated, not when the machine code is generated, in order for things like type sizes and field-offset constants to be accurate. It does seem like this could be partially abstracted away with some sort of IR-level constant variables that work like C #defines, but I don't think it'll ever go away entirely as long as variations in register width and ABI still exist.
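A small C illustration of that point (my own example, not LLVM API code): both printed values are lowered to literal constants when the IR is generated, so the front end must already know the target's type sizes and alignments.

    #include <stdio.h>
    #include <stddef.h>

    struct packet { char tag; long value; };

    int main(void) {
        /* 8 on x86-64 Linux, 4 on 32-bit ARM: fixed at IR-generation time */
        printf("sizeof(long)            = %zu\n", sizeof(long));
        /* offset depends on the target's preferred alignment for long */
        printf("offsetof(packet, value) = %zu\n", offsetof(struct packet, value));
        return 0;
    }

Compile the same source for two targets and the emitted IR already contains different constants, which is why the IR itself isn't portable.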
@@8a41jt building gcc is not that hard, c'mon. It may seem so at first, but if you are used to programming in C, using build systems, or installing software you have to build from source, it is definitely not hard.
@@marcossidoruk8033 All true, Marcos. All depends on your understanding of the big picture. I don't think it's hard, but I know many people who do.
There is no way I'm Googling "naked mini"!
Thank you Professor Brailsford I really enjoyed this.
Brady, you should legitimately sit this man down and just let him talk. He can go on about some very interesting things, and I'm sure he has great stories.
UNCOL sounds a lot like LLVM, and the principle of an intermediate API drives projects such as SDL and JAVA. The concept of a universal binary is arguably what javascript-driven webpages have become too - our web browsers are Z-code interpreters with fonts ;)
Except that the concept doesn't really apply to scripting languages, since there will always be at least one layer lower down that interprets the script. Sure it's possible to compile javascript to machine code, but it's super obtuse.
Java itself was founded on the idea of a 'universal bytecode'.
(not javascript, JAVA - these are not the same thing.)
And, while that does kinda work, it's only possible by taking some extreme hardware abstractions and cutting people off from doing tasks unless there's an API for it. (the API cannot in itself be written in Java, which shows up the limitations of the concept pretty well.)
Of course, what happened to Java, since it had bytecode that behaved like a very generic kind of computer is...
We eventually got hardware implementations of Java Bytecode interpreters... XD
@@KuraIthys Having a hardware implementation of the JVM or a J-instruction set is pretty hilarious.
You can argue whether the JVM solves UNCOL, but I'd say what actually does solve it (to an extent) is ART (Android Runtime) ^^
@@gizmoguyar - I believe it does - any Javascript engine worth its salt will JIT compile the code. Certainly the block that runs? has been changed from what was provided. Yes, it's not literally a binary - but as far as writing the code once and running it everywhere? it's functionally the same thing.
That's why I mention APIs under the same umbrella - in terms of providing the write once, run anywhere goal? different CPUs are actually the easy part.
You could perfectly translate every instruction inside a PS3 PowerPC binary but there are so many higher level specifics - the GPU, API - how the controllers work etc
I propose that the intent of a 'universal binary' is to run software on any architecture and to this end? API's must be part of the conversation.
The actual mode of execution, be it native binary, emulated/interpreted machine code, JIT compiled code or interpreted script? small fries given how high level modern programs are.
You know - for somebody that really hates JAVA and the terribly slow virtual machines called web browsers?
I constantly make a case for them ;)
The E in EPROM stands for Erasable. You can erase an EPROM by exposing it to UV light.
True. EEPROM is Electrically Erasable Programmable Read Only Memory.
“4K might be enough for an electronics engineer...” lol
I managed to get a version of space invaders, including the display file in 1k on the ZX81. You just need to be efficient.
I feel so ashamed that I missed the opportunity to be a great electronics engineer and leaned towards software more than hardware. These things seem so fascinating, these people, the true pioneers and heroes of modern computers. I wish I had been one of them or worked in this industry. Hope I'll overcome this feeling with exceptional work in software.
Don't be ashamed: you can always pick up electronics as a hobby, provided you can afford a half-decent lab.
The Z80 was my first computer, in the disguise of a NASCOM with 64K DRAM and 32K SRAM (I did extend the standard version soon after I got it). Mostly I programmed in assembler; at that time using C or any other high-level language had a rather high overhead. I did also wire up some other Z80 computers. I did not need much RAM, but mostly used EPROM (when writing in assembler you have very tight control of the resources used).
Later on I also got disc drives and used CP/M on it, but it was fairly slow compared to my SRAM. Even today a PC cannot match the initialization speed of the NASCOM. From the moment I turned on the computer or pressed reset, it took 1 or 2 seconds before I was in my development environment (selected from a menu and then copied from SRAM to DRAM). When needing to read or write stuff to external storage, the NASCOM was very slow with an audio tape.
Yeah, some processors are worse than others for compiling.
I recall a discussion about the lack of 6502 or even 65816 based compilers for most languages...
And someone pointed out that their custom purpose-built language taking into account the quirks of this processor family was 'only' about 10 times slower than well-written assembly...
Yeah. If THAT's your best case, then no wonder nobody wanted to use high level languages back then. XD
Of course, you could ask what makes the 6502 so bad specifically, well it's the lack of functionality for using the stack.
Most high level languages are built around the concept of a 'stack frame'. Which means that stuff like function calls, parameter passing and concepts like local variables all exist on the stack;
A stack frame typically being an address for a return location (if it's a function call), and all the parameters and local variables for that function all pushed onto the stack in a specific order, with the running program referencing all these variables as an offset into the stack.
The 65816 contains a lot of extra features specifically for this (though still not as many as a compiler designer might want), but the 6502 can't work directly on the stack this way, (no stack relative offsets, few stack manipulation tools besides the basic push and pop, etc).
So it becomes really difficult to write a 6502 based compiler for anything that works well with the concept of a stack frame...
And alternate solutions tend to be a lot more complicated to implement.
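To make the stack-frame idea concrete, a minimal C sketch (hypothetical function, any compiler): the parameters, the local, and the return address all live in one frame, and a frame-friendly CPU reaches each of them as a small fixed offset from the stack or frame pointer - exactly the addressing mode the 6502 lacks.

    #include <stdio.h>

    /* A compiler typically emits "load [sp + k]" for each name below;
       the 6502 has no stack-relative addressing, so each access needs a
       workaround (e.g. a separate software stack kept in zero page). */
    int sum_scaled(int a, int b, int scale) {  /* parameters: in the frame  */
        int total = a + b;                     /* local: also in the frame  */
        return total * scale;                  /* frame discarded on return */
    }

    int main(void) {
        printf("%d\n", sum_scaled(2, 3, 4));   /* prints 20 */
        return 0;
    }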
I preferred the relative simplicity of 8080 assembler over Z80 assembler, but of course the Z80 was much more popular at the time.
I much preferred the Z80 with X/Y registers, alternate register set (Very useful for high speed interrupt) and block moves.
I liked the Z80 because of the index registers, and because most if not all Z80 CPUs had the undocumented instructions which would allow you to use each 16-bit index register as two 8-bit registers, if you had the memory space for the larger instruction length.
Also, the Z80 natively supported dynamic RAM refreshes, which had the incidental effect that someone using a high-level language could use the refresh register contents to initialise the pseudo-random number generator to a largely unpredictable starting value.
The problem of only a 64K memory space was overcome in some computers by "memory banking," where an I/O port would control which physical bank of memory was "mapped into" the logical memory space.
I can remember working with one machine which ran MP/M (a multi-user version of CP/M) and which had around 112K of RAM. 16K was fixed at the top of the memory space, while the remaining 96K was divided into 6 x 16K banks and each bank could be "mapped" into one of the three 16k memory spaces in the lower 48K of memory space.
The MP/M Operating System ran in the top 16k and would keep track of which programs were loaded into which memory banks. Interrupts would allow it to switch between programs.
Even so, "shoehorning" programs into limited memory space was almost an art form.
I find it really funny how you talk about Z-code, and it's probably not related at all, but there was something called the Z-machine that was made as a virtual machine to interpret text adventure games by a company named Infocom. It basically did the same thing, but just interpreted instead of compiling to assembly. In this case, the Z stands for "Zork".
I was hoping for a mention of the Z-machine too.
My first experience with embedded systems was with Boston System Office's attempt at an 8086 cross compiler running on a Micro VAX. The less I recall about that nightmare the better I feel!
Compiler Writing was my favorite course that I took at University. We did C to ML.
An electrical engineer builds someone a board with 1K of ROM and 128 bytes of RAM. Two months later, the person returns to the electrical engineer, missing half their hair, bags under their eyes so heavy it looks like they've been hit in both eyes with a bowling ball, and smelling like they haven't showered since the electrical engineer last saw them. They place the board on the electrical engineer's table and say, "Well, I finally did it. I got the software to work."
The electrical engineer thinks to himself “well then, I guess this is all I’ll ever need to give anyone from now on!”
And now, 45 years later, he still uses 1k/128b as his reference point of how much people need with all the new products he develops, only ever making concessions for when something is actually cheaper to just use more.
You're all now thinking of a product you think this theoretical engineer built; I'm curious what you're thinking of.
Programmable remote control
Calculator
UNCOL sound a lot like LLVM
Don’t you mean LLVM sounds like UNCOL?
Sharp of mind and sharp of shirt. What's not to like?
I don't get what this man says but I love it.
Maybe it wouldn't be a cross compiler, if it was not so annoyed.
It would be useful to provide links to the previous videos that he mentions in this video.
Interesting seeing this just as webassembly is starting to lift off
Dude, sometimes I find myself thinking "I wish I was around back then, I could make some revolutionary discoveries," but then again I realize that all of those discoveries I would have to make would be made by people back then, and I wouldn't be able to study them.
How do you mention BCPL in a video that mentions ZedCode, without also mentioning BCPL's own intermediate code, OCODE? Where is Dr. Martin Richards' interview to tell us his point of view? The more I learn about computing history, the more I see Bell Labs as historical revisionists. Don't get me wrong: Bell Labs was influential and important in the history of computing for sure. But, they always seem to take the view that they invented everything. They didn't.
Don't be salty, they made most of the important stuff.
In particular, Bell Labs bridged CompSci in America and Britain, thus this particular viewpoint. These are first-person narratives, not meant to reflect a complete history of computers. Your statement strikes me as too defensive. Having been a young adult in the Boston area at the time, my hackles are sometimes raised when Southern California takes too much credit for microcomputer and internet advances, when Boston (around Route 128 in particular), IBM in New York, and even St. Louis made important advances, too. But I don't expect greybeards from Silicon Valley, or even Redmond, to talk about all those things. At best, they acknowledge MIT, and everything else is the computer revolution in the Valley.
@@squirlmy Couldn't agree more, everyone seems to think everything technological comes from California.
UNCOL is the present-day Philosopher's Stone. It converts any program to gold and gives it eternal life.
I like the random cigar on the Naked Mini ad.
Perhaps the box of cigars also cost a thousand dollars, thus highlighting the mini's relative affordability?
Try "Computer Automation Naked Mini". Might get you closer to the answer. Of course, "Computer Automation Alpha 16" might get close to the proper item you are searching for.
Yes, I have written code for an Alpha 16, it was a long time ago.
I love computer grandpa❤
In the serial-to-parallel interface board, it's unclear to me why the error messages had to be in RAM. If they existed in RAM, wouldn't they have had to originate from ROM? And if this is so, why not use the version that's already in ROM?
I guessed that they meant error output, otherwise, yeah, that doesn't make much sense. Then the next optimisation is to just use codes rather than messages.
I do have to say, this all seems way too complex for something that's just doing a serial to parallel transform, I wonder what else it was doing. Probably a lot of flow control, that's always tricky to figure out.
He's talking about logging the errors in RAM, not about generating the errors. Obviously the ROM would detect the errors and generate the error messages, then those messages would be stored in RAM to be examined later.
WOOOHOOO a new video with The Professor! :D:D:D
Also, what the duck is that ROM doing in the 2K-4K slot on that z80 board? How does one boot such a board?
@BOOZE & METAL Yes, i'm very well aware of that, hence my question.
And for anyone wondering, you sometimes "don't just put ROM there" because you genuinely do need to change values at the entry point for some reason, such as so that you have an easier time developing the ROM itself. Nowadays we would just program an EEPROM with something else, but it wasn't always practical: for those situations, they would bootstrap with techniques like what I described, and once they got that working would transform it into an actual ROM, and either use it to develop an even better ROM, or use it to load other programs.
Also, now you all know what those switches on the front of the Altair & such were actually there for!
@@absalomdraconis Yeah you're right; i remember DIY articles in electronics magazines from the 80s that described so-called ROM emulators, which were basically a RAM chip with a ribbon cable connecting the RAM chip to a separate computer which could then dump a block of binary code/data into the RAM. The target computer would only be able to read from the RAM chip, effectively making it look like a ROM chip.
Generally, there was a bit of hardware which disabled all memory accesses on reset, forcing them to return 0 (NOP in Z80 language) until a particular memory location was accessed, which would make the hardware-disable deactivate until the next reset.
The result is that the Z80 would scream through 4096 (in this case) NOP instructions, ignoring actual memory contents until location 4096 was accessed, at which point the initialisation hardware would disable itself and allow memory accesses so the ROM could do its job.
Another method, as mentioned here, was to start the ROM at location 0 and then use bank-switching at some point to "move" the ROM to later in the memory space and substitute RAM in the space previously used by the ROM.
It's hilarious hearing him talk about arguing with electronics engineers about memory limits, it makes me think of people arguing about C and other more highly abstracted languages nowadays.
No mention of LLVM?
Yeah IL is huge there. It finally arrived.
Today UNCOL is pretty much LLVM.
"It emits ZED-Code"
The Z80 supports bank switching, 100k is not a hard limit.
Yes, but it requires added software support, and the code then becomes very bound to the particular machine type, as there are so many ways to do a bank switch - the minimum bank size, and which location you swap out. The bare Z80 being limited to 64K, along with the I/O space limits, was the issue until enhanced Z80 processors came out that were supersets of it.
Still, IIRC those were made by Sharp and Casio, and were most common in cash registers as the majority of the processing power in the machine, though there they also integrated a whole raft of peripheral devices as well, along with assorted display-driver solutions and printer drivers on the same chip.
I'm sorry, but the Z80 does NOT support banking; that is something that can only be done with extra hardware.
While later Z80 derivatives have included bank switching, it wasn't native in the original CPU. And there was no standard MMU chipset to drive the RAM and provide paging back then, either.
Several companies built bank-switching support into their hardware, largely to support MP/M (essentially multi-user CP/M) and its later version CCP/M, but such implementations were all proprietary - there was no standard.
The Z80 does not support bank switching; it has no banking registers or MMU.
However, it can be implemented externally. I think I had 512KB in my Z80, running CP/M 3, which DID support bank switching.
Most of the RAM was used as a disk.
I love the shirt!
Bit of a change from previous videos where he's had one side of his shirt collar above his sweater, the other below
Did this man just cast a spell?
Computerphile and Numberphile on the same day ? woohoo
I hope Dr. Brailsford has a follow-up to this.
The Z80 computer I built has 32K of RAM and 16K of ROM. Is the Naked Mini the effective predecessor to the Raspberry Pi Compute Module?
Not an immediate predecessor: the time gap is too big.
Conceptually: yes, any of the chips that were used as "hobby" boards were aiming at the same target as the Pi later went for.
I'm working on an ARM STM32F103, not many bits to flip
still pretty much for a microcontroller
i go for some attiny (mostly attiny84)
most of the time
@@beep_doop Cool, pls keep us up to date ;-)
it's 32-bit, what a luxury
@@naaj100 all we want is this kind of space to store crypto keys safely... (a key the same size as the message)
MeCrisp Stellaris for you then : )
I enjoy numberphile, love deepskyvideos, adore objectivity, but computerphile is all absolute Chinese to my ears.
Electronics engineers and computer scientists somehow managed to trick a piece of rock into thinking, and then spawned an entire field of witchcraft around that. If they didn't speak some bizarre language in their incantations, I'd be worried for humanity ;)
"What we're going to look at today..."
I'm sorry, Professor, but all I can see is your shirt!
It's crazy how little memory there was back then.
You couldn't do a serial to parallel converter in 2K of RAM? How complicated was this proprietary parallel protocol?
This sounds like a job for a PIC16F84 with its 68 *bytes* of RAM. Even that's overkill.
I think the 2K of memory was required to store a complete page, not just a single word
@@sunnymishra1057 Oh... so it has to buffer large amounts of data, not just the 8 bits for a serial-to-parallel conversion. That makes more sense.
Alec Dacyczyn That is precisely why they were told 2K was a very generous amount of RAM.
Alec Dacyczyn it was probably a little more than serial to parallel conversion, most likely they had to do some protocol conversion too.
I'm sure this guy has even better stories while drunk.
videos about programming languages and programs
me: meh
videos about assembly language, binary, and EDSAC
me: glued to screen
2K of ram on the bottom sounds like you needed some kind of external bootloader chip? afaik the Z80 always initializes the PC at 0? or did you just hope the ram cleared on power up and wait for 2K NOPs until you reached the rom?
A common technique is to have a latch which disables RAM at address zero until code has done something to indicate that it has taken control (e.g. perform any I/O operation). This will make it possible to put a JP in ROM at address zero which jumps to a mirror of the ROM at some other address. That can in turn perform an I/O operation and then store whatever needs to be written to RAM at address zero.
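The same latch trick in a plain-C simulation of the address decoding (my own sketch with made-up addresses and helper names, not any specific board): while the reset latch is set, reads in the low 2K are steered to a mirror of the ROM at 0x0800; the ROM's first I/O access clears the latch and uncovers the RAM underneath.

    #include <stdio.h>
    #include <stdint.h>

    static uint8_t rom[0x0800] = { 0xC3, 0x00, 0x08 }; /* Z80 "JP 0x0800"  */
    static uint8_t ram[0x0800];
    static int boot_latch = 1;                  /* set by the reset line   */

    /* toy decoder: only addresses 0x0000-0x0FFF are modelled here */
    static uint8_t read_mem(uint16_t addr) {
        if (addr < 0x0800 && boot_latch)        /* latch mirrors ROM at 0  */
            return rom[addr];
        return (addr < 0x0800) ? ram[addr] : rom[addr - 0x0800];
    }

    static void io_access(void) { boot_latch = 0; } /* any IN/OUT clears it */

    int main(void) {
        printf("at reset, 0x0000 reads %02X (ROM)\n", read_mem(0x0000));
        io_access();                            /* done by code now in ROM */
        ram[0x0000] = 0x42;
        printf("after I/O, 0x0000 reads %02X (RAM)\n", read_mem(0x0000));
        return 0;
    }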
Will the next video include a description of threaded code? i.e. generated code that's only subroutine calls
FORTH
Why are there 2 radios behind Raspi Bear?
What an odd and complicated memory map for an embedded Z80 system! By default the Z80 boots from address 0000 which can be serviced by a ROM.
So Z-code and UNCOL are great grandfathers to LLVM? Is LLVM fated to fail over peculiarities of different hardware, or did something new emerge to make it viable?
From a brief skimming of the Wikipedia article, it looks like LLVM started life as a replacement for GCC's intermediate representation that ended up taking on a life of its own.
GCC through GIMPLE > LLVM
The birth of the first all-soft driver?
The "Naked Mini" sounds a lot like an early type of Raspberry Pi...
Except the opposite, in that RPis are meant to be cheap and accessible for academic purposes. Components like the "Naked Mini" were purposely designed to be proprietary and create reliance on the company for maintenance and updates. Some of the shortcomings of the Pi are from needing to use components that electronics companies are not, or are no longer, interested in exploiting for IP. Actually, we can see here how from the early days, software was used to "get around" problems with incompatible hardware, and "Open Source" is simply this approach codified in legal language and made explicit. In that sense, this video fits in with other FOSS videos on Computerphile.
VDU here in Mzansi too
How could the ROM split the RAM? I ask since the Z80 starts at address 0x0000, so the ROM "must" be at 0x0000 - 0x03ff (I could have missed something)
I was wondering about that too.
Well if you had RAM at $0000, the Z80 would simply execute a bunch of NOPs until it reached the address the ROM was mapped to.
@@lister_of_smeg6545 The problem with that idea is that uninitialised RAM can, and often will, have random values in each location, which would almost certainly lead to a crash.
@D.O.A. on power-on, or after a reset, a Z80 will always begin executing code from address $0000.
EDIT: By the way, i think the 6502 and family also uses a similar scheme to the 6809, i.e. having those crucial vectors at the top of the memory map.
It's possible that after a power-on reset, the ROM is actually mapped at both address 0x0000 and address 0x0800, but soon after startup, the software in ROM jumps to a "real" ROM address in the 0x0800 - 0x0FFF range, then switches an external latch that re-routes addresses 0x0000 - 0x07FF to the lower RAM.
I guess the advantage of that might be that the non-maskable interrupt routine (at location 0x0066), and the mode-0 & mode-1 maskable interrupt routines & destinations of the RST instruction (at locations 0x0000 - 0x0038) would then be in RAM, and thus modifiable.
4k of ram and 2k of rom...you're not all that far from my first computer, a TRS-80. I think I had 16k of each, and the incredible ROM basic too!
Think of how different the computing world would look if businesses, and the internet itself, had rallied around the Tandy instead of the IBM PC (well, clones as much as IBM's products themselves). Microsoft would never have gotten far, but Linux and FOSS might not have gotten far either.
I’m an EE and I still use assembly 😀 2k is enough! Jk great video!
I feel like we are working our way towards java :)
Richard Smith Or UCSD p-code, or FORTH or .NET. In the end it's easier to reconfigure your compiler into a cross compiler (by writing the high level code that will be in the native compiler), debug it on the working system, then cross compile the finished compiler with itself.
40 bytes short.
Ooof!
Even worse is being just 10 bytes short, and you have to decide that most error codes will be cryptic numbers, needing a look up table for the operator to read out what exactly the error is.
I once spent a week saving 4 bytes of program memory - it was either do that so it fitted into 2K ROM or we didn't have a product. It took over a year to write the program. It was the only electronics in a payphone (apart from a time/calendar chip and a DTMF generator), so it had to control the coin mechanism, read the keypad, calculate the call cost (including time and day), send 10PPS dialling (complicated by having a processor clock that might run at anywhere from 30kHz to 150kHz, when dialling had to be +/-10%) and several other minor tasks.

And the icing on the cake was that the memory was split into 64-byte pages; when you got to the end of one you had to long-jump to the next, which meant nearly every time I changed a function it would no longer fit its page and I had to juggle every function into a different arrangement in the pages. At the end of all that we sent the code to Japan and waited 6 weeks for the ROM to be programmed on the microcontroller and shipped back to us. Prof. Brailsford had it easy!
Who else uses these to go to sleep?
Soon it'll all be ARM cores anyway.
What about Java
It may well appear in a future video. Java wasn't even thought about until around 10 years after the time Prof B is talking about, and wasn't commercially available for another 5 or so years after that.
Isn't Java interpreted rather than compiled? Or at least compiled on-the-fly?
I think Java is compiled twice: once from the developer-facing Java code into bytecode, then again from bytecode to machine code at runtime.
@@Cookie_Wookie_7 That's close to what I imagined, thanks.
I’m about to start my degree in computer science. I know very little about coding, any tips?
Learn to code
Pluto : thanks for the help. Any tips how I should start?
@@alexpent1482 start a degree in computer science.
Pluto : thanks I appreciate your help
@@alexpent1482 but jokes aside: look to see what programming languages you would use in class or for work, and start a YouTube tutorial or something. Or read the documentation and example code. Then code it yourself.
hum, JAVA comes to mind
But isn't JAVA interpreted rather than compiled?
@@melkiorwiseman5234 No.
@@melkiorwiseman5234 Java code is translated into machine-independent bytecode, which is, in turn, run by a Java Virtual Machine (which is a piece of software). So Java is intermediate between a purely compiled language (source code->binary executable) and an interpreted language.
@@luckyluckydog123 That's kind of what I thought. It's called a "byte-code compiled" language. It's how Liberty BASIC works, and is one step past the original BASIC language, although even Microsoft BASIC had a kind of pre-compile built into it which turned "key words" into single-byte codes. If Java uses more than 8 bits for its codes then you might call it "word-code compiled" instead, but the principle stays the same. Anyway, thanks for the information.
That's what Forth came for.
...and stayed for... : )
Use the Forth, Luke
Writing a custom IDE at the moment, with the goal of cross-compiling the client. Just the client, though - keeping all the dev tools in Java.
Writing new software in java in 2019? How sad
@@noxabellus writing software in Java... how sad.
@@edgeeffect existing in 2022..sad
Please get a tripod.
everyone use x86/x64 and no one gets hurt!!!
i use cortex-a arm64-v8a but you - wherever you go
🎶Ay bed ced ded ed ef ged, aych ai jay kay, el em en o ped, qyu ar ess, ted yu ved, double-yu ex, why and zed, Now I know my ay-bed-ceds, next time won't you sing with med🎶
No no no
Zed comes last, so has a name that sounds different so you know you got to the last character in the alphabet. Stops overflow attacks on the alphabet register
In the thumbnail he looked like Warren Buffet.
Lovely britisch chaps, dang I spelled british wrong...
ffs Zestrix
sorry mate)
First
Zed code... that I go for 100%. (French Canadian here)... Americans are too lazy to say zed... like aluminium (aluminum in spoken US)... the "i" is too hard to pronounce for them, like the zed. But I do not worry, they still have the ARMY...
I'm currently heavily under the influence of LSD.
First!)
Not first!
I can't follow these videos. Too slow, too rambling, and pretty meh
I wouldn't describe it as "meh" but I agree this one's kind of rambley..