I love that Ben plays dumb and goes through all the mistakes and issues a beginner might have, showing us how we would troubleshoot something like that. It makes it much more informative, engaging, and relatable than if he just told us exactly how to do it perfectly first try.
Mister Rogers Neighborhood did the same thing: plans would "change" as "unexpected" events happened on the show. All of course scripted out to the individual video frame.
I always feel stupid when I forget semicolons on the end of Arduino code or some other really stupid issue that I shouldn't forget when I compile. Ben makes me feel like that's okay, and that I can still be a good coder even when I make stupid mistakes
Absolutely love how you go through the process, failing a few times, and "discovering" why. It makes learning a lot smoother than just being told "you must do X, Y, and Z", and never find out what might happen if you only "do X and Y".
Neuropsychologically, that technique will trigger the learning response, assuming that the viewer is engaged. Unexpected results trigger the part of the brain that formulates and integrates process corrections. (See "orienting reflex")
I love how he works this in too. You can tell he knows the right answer, but he wants to teach WHY it's the right answer. he shows the shortcuts you can get away with in a pinch, because you know the limits of them.
failing is part of the process of what it means to be a programmer. The 6502 is a relatively simple processor. Modern processors are much more complex and there is a lot to know about the individual subsystems working together. It is not uncommon to not know all the low level nuts and bolts one needs to know in order not to fail. The datasheet of the µC I am working with - a STM32F7 - has 1200 pages. Needless to say, that it is impossible to grasp all of the information just by reading it front to back.
Ben, I'm a firmware engineer and I love your videos, they are relaxing and easy to digest. What you do is a public service, your videos are a perfect starting point for so many students and enthusiasts. Hats off to you, keep it up!
I do enjoy this very much too, as it brings me back to where I started 36 years ago on the Apple II in school learning assembly and understanding 6502 with its surrounding hardware. Todays work for me is embedded stuff. Having seen the basics is really useful, especially when not being a firmware engineer. So, thanks Ben for your videos!
yup... there are a few others on youtube I feel the same, 3Blue 1 Brown, javidx9 (OneLoneCoder), and a few others. Sentdex with his Python series for building Neural Nets, Machine Learning from scratch series is high on the list too... Jason Tuner is another with his C++ series... there's only a few that have that effect but these are just some of them... BlackPenRedPen is another good one.
I love how Ben manages to sound momentarily perplexed about why it's not working (when he knows _exactly_ why it's not working and he knew it was going to do that from the start).
What I really love about the videos is that you go over failure scenarios. That helps much more than just "Here's how to setup an NMI interrupt, like and subscribe". That makes it so much easier to troubleshoot when stuff doesn't work when I recreate things.
One thing that's worth noting: whenever the IRQ line is pulled low, it sets a flag inside the CPU. Whenever the CPU is finished an instruction, if that flag is set and interrupts are enabled, it services the interrupt. So, disabling interrupts doesn't actually _prevent_ them, but _delays_ them. Once you enable them again, any pending interrupts are serviced. Of course, it's only a flag, so even if 1000 interrupt requests happen while disabled, it will only run the handler once. It is also possible to manually clear that flag without actually handling the interrupt.
Exactly. Otherwise, a 'low' IRQ line would try to re-enter the interrupt handler over and over and over and over. It would keep pushing PC onto stack and ever execute a single instruction inside the IRQ handler :) By 'blocking' further interrupts until the RTI instruction is executed, the CPU can do something to 'handle' the event. And yes, if you expect additional interrupts as quickly as that, you could lose track of some events. So different schemes to 'count' the interrupts as quickly as you can, then use some other code to service the device until the 'interrupt count' is back to zero. Tricky stuff, but can be done.
In the 6502, the NMI (non-maskable interrupt) is latched, but the IRQ isn't. It's up to the device or support chips to hold the request until it's cancelled from the IRQ handler. By the way, if anyone is interested, to handle an IRQ or NMI, the CPU runs a modified BRK (break) instruction. The same happens on reset to load the reset vector, but it holds the read pin high to avoid writing the program counter and status flags to the stack. It's a nifty hack to save transistors. This explains those initial 7 cycles, and why a reset decrements the stack pointer by 3 and sets the interrupt disable flag without disturbing memory or the other registers.
@@JB52520 Hi Johnny how does the CPU know what address is inside the IRQ vector at FFFA, i understand that what ever device triggering the request will be inside the IRQ vector the the iRQ subroutine will begin after placing the last address it was reading from on the stack etc- but how does the cpu know what device( address) is triggering the low signal. Cheers Franky
Nintendo used NMI for timing as a sort of watchdog or heartbeat. You didn't want a hard to debug console to crash suddenly in production. A general purpose computer is significantly easier to debug than a running game console, so timers can be maskable with no loss in integrity.
@@mdahsankhan4361 I wasn't getting all of the items from the videos, so it totals around 1300SEK. The most expensive single item (excluding a new programmer) was actually breadboards at 100SEK each. I was seriously considering getting the kits from eater.net but those got a lot more expensive with VAT and shipping and I can't do that right now. Not going to follow the series exactly for now but if this hobby becomes a more regular activity I will and then at least starting over with the kits will make more sense to me.
So that's how it works! When I was 15, I was a wizard with Commodore Basic, and I had a bit of a go ar machine code/ assembly programming, but never got very far because the learning resources available at the time were nowhere near as good as Ben Eater videos. All these years later, and suddenly Interrupts make perfect sense. Thanks Ben. 😃
I just discovered Ben Eater's RUclips videos and electronic kits today. I quickly ordered the 6502 kit and several others. Having cut my teeth on 6502 programming back in the day on my Apple //e and later a IIgs (both of which I still own), I love the purity of 8-bit assembly language (it just makes sense; everything is clean). Kudos to Ben for his absorbing teaching style. I look forward to many adventures with his videos and kits.
You can actually make the counter go up only once just from software: in_irq = $020c ; 1 byte ... reset: lda #0 sta in_irq ... irq: pha ; save accu bit in_irq beq not_in_irq lda #0 ; we were already in the IRQ handler here pla ; remove accu from stack (original is still on previous stack) pla ; remove flags from stack pla ; remove return address from stack pla ; which is 2 bytes jmp await_irq_off_loop not_in_irq: ; actual handler goes here inc counter bne no_overflow inc counter + 1 no_overflow: ; end of actual handler lda #1 sta in_irq await_irq_off_loop: cli ; if the IRQ is still active, this will immediately recursively cause another interrupt
sti ; prevent a second interrupt from interrupting the following code lda #0 ; if we've reached this instruction, the IRQ is off (pin high) sta in_irq pla ; restore accu rti Basically, when we re-enable interrupts inside the interrupt handler, the interrupt handler will get called again. By knowing when this occurs (flag in_irq), we can remove the stuff from the stack and continue on with the awareness the IRQ is still active. I'm not familiar with the 6502 so this code may not work or it may be more succinctly written.
Wouldn't it be simpler to have the IRQ handler just set a flag, and the main loop increment the counter whenever that flag is set, then clear it? So no matter how many times the handler runs, the counter will still only increase once per main loop iteration. (Of course that can still be very fast, but you can add a delay.) Or if you're concerned about the handler blocking the main loop, you can have it end with rts instead of rti. rti is basically "rts and cli at the same time". So you can leave interrupts disabled after serving one until you're ready for the next one.
@@renakunisaki yes, it's usually recommended to keep interrupt service routines as minimal as possible to avoid slowdowns. Often this involves merely copying data somewhere and setting a flag that data needs to be processed by the super loop/main loop.
The 6502 code he used doesn't re-enable interrupts inside the handler (although I know some systems do this). Ben's current problem is that as soon as the RTI is executed, leaving the handler successfully, the CPU re-triggers and re-enters the handler again. If the handler was written to take a really long time, Ben could get his finger back off the button fast enough. But yes, if your system can 'interrupt' the 'interrupt handler', then you need something like what you have. One system I wrote device-drivers for had this. Used an interrupt counter and the 'masked' part of the handler would increment for each hardware event very quickly before enabling new interrupts so it could accurately count them.
@@mikefochtman7164 The way the processor "knows" the interrupt handler shouldn't be interrupted or that it's still in the handler is by the interrupt flag. rti pops the flags as well as the return address. By popping the flags, the interrupt flag is restored. That's not the only way to restore the flags, however. A simple cli works as well, allowing the "interrupt handler" to be interrupted.
What I love so much about your videos is that you take the time to discuss all the faults and misconceptions one could encounter. This builds a deep understanding of the matter and enables us to think the concept further ourselves.
Hi I am a soon to graduate Computer Engineer and your channel is literally everything that I have ever wanted to learn in school but still haven't. I am so beyond excited to binge through all your content. Thank you so much for sharing this information!
I didn't learn much of use in CE. Part of it was that I was a bad student and part was that the curriculum was out of touch. Most of my useful knowledge I either learned on my own or from sources like Ben Eater. Great stuff.
YES BEN NEW VID!!!!!!!! I'm in withdrawal from having too few Ben Eater vids. (PS. We still. all want you to connect your breadboard video card to your breadboard cpu and show some form of video output from it.)
Breadboard CPU could have some problems because it has almost no memory. Even my own version modified to support 4kB of memory would struggle with that. I'm currently working on connecting Ben's graphics card to my Z80 system. And in theory, it should also work with the 6502 design presented in the video. If you wanted to run such VGA generator with breadboard CPU, you should modify it to support at least 64k of memory, if you want full 64 colors on every pixel.
@@eanerickson8915 Exactly. There are at least two ways to accomplish this. If you want to write individual pixels to the screen with no color limitations (besides 64 colors the thing can produce), then it's best to use two RAM chips. One would be connected to VGA generator, and the second one to the CPU bus. Then you would need a circuit that will "swap them around" on CPUs request. This has some downsides though, for example that means you most likely have to redraw every frame from scratch. The second way involves dual-port RAM. These behave like regular RAM chips, but they have two access ports, eg. two sets or address, data and control lines. This allows the CPU to write image data independently from VGA hardware. The downside is that dual-port RAM is more expensive, but more importantly comes in smaller sizes than regular RAM, so some cuts have to be made in order to fit an entire screen in a smaller space. I went with dual-port RAM, and I already have four IDT7132 chips on my desk. That adds up to 8k. My VGA timing hardware is modified for 256x192 resolution (that's important later). To fit every pixel with its color (one byte per pixel), I would need 48k. That obviously won't fit in my 8k of RAM. So what I did, was to divide my available RAM into two sections: image data (6k) and color data (8k). If you do the math, you can see that a black and white image fits perfectly in 6k I gave it. So how does the color work? I divided the entire screen into 8x8 pixel sections. Each section is given two bytes in color RAM, foreground and background color. When a pixel is on in imge RAM, it gets the foreground color of its section. With some clever connections between RAM chips and timing hardware everything can happen in real time, without the need for any additional CPU. This solution is not ideal for graphics, but it adds some color to otherwise black and white display.
Just today a saw a video about the Atari 2600. It also used the 6502, just in a smaller package. Considering it does not need all those address lines since the machine uses only very little ram or rom it makes sense as a cost cutting strategy. However those madmen also removed the interrupt lines! Both of them! Now this would not have mattered too much if not for the fact that the video chip required excellent timing from the cpu every time a line was printed on the screen. The programmer had to make sure of that. It would have been so easy using interrupts, but no... It is only one aspect that made that machine a nightmare to program for. Really, I understand that every cent counts. But deactivating such a convenient tool?
The 2600 was certainly a quirky beast, but it did have ways to deal with the tv scan line. It might be best to think of it backwards, not a cpu with a chip to draw video, but it's a video chip having an assessory (cpu) to feed it instructions. So instead of an interrupt, you can ask the TIA to let the CPU not run until one of the video related timing events (hblank or vblank usually) So your code can setup a scan line, and ask the TIA to 'wake' the cpu when done. Then the very next opcode in your program runs potentially many clock cycles later. This works out better in the end because the video chip runs 3 times faster than the CPU. Even a NOP instruction (one 6502 clock cycle) means the video chip has already just drawn 3 pixels on the screen in that time
@@lorddissy A NOP instruction takes 2 cycles; because the way the 6502 is designed breaks an instruction into T-states, which each last one clock cycle, means that a T-state cannot be both the first and the last one of an instruction. Even if the answer (if there even was one!) was ready on the first tock, there needs to be another T-state to allow the program counter to increase ready for the next instruction. This only affects single-byte instructions. Any instruction 2 bytes long or more needs more than one T-state to read the whole instruction. So something like LDA#&00 (A9 00) reads A9 from memory in the first T-state, reads 00 from memory as the clock ticks (goes low) to begin the second T-state, the 00 will be latched into A when the clock tocks (goes high); and this is _not_ the first T-state so it _can_ be the last. The program counter increases, and the (first byte of the) next instruction is read on the next tick.
It's a weird design in general. Born almost entirely out of a period of time when RAM was unbelievably expensive. It's interesting to see the logical evolution of this design though. The Atari microcomputers. (also the 5200 console, but that was ultimately an afterthought in the end - the chipset was designed to be a game console, but instead was turned into a microcomputer first, and a console after the fact) The Atari 2600 is built around it's graphics chip, the TIA. (Television Interface Adapter) The microcomputer range is built around a chip called GTIA (Gary's Television Interface Adapter). And when you look at what it can do, it's very much like an upgraded TIA. Like TIA it essentially only creates one line of graphics; 8 hardware sprites that have to have their data altered every scanline (they can be scrolled horizontally but not vertically), and a bunch of data that forms a background. It has better colour capabilities and more detail is possible (up to 384 pixels on one line). A minor design change that speaks to how the new system works though - it doesn't have any kind of internal register for background graphics. Instead it has a 3 line data port, and interprets the values on this port in realtime. The chip has 4 modes, which interpret this in different ways. The standard one the systems use most of the time interpet the input as 5 different types of pixel, and several types of blanking instructions (including a switch to or from high resolution mode; standard resolution interprets the input as one of the 5 background palette registers, high resolution effectively interprets it as bit patterns using only 2 of the palette registers - or really, only part of two registers; specifically the luma part of two registers, and the chroma part of one of them. - There doesn't seem to be a good reason for this in terms of circuit complexity, but rather it seems to be intended to reduce the chance of colour artifacts in high resolution graphics.) The other 3 modes all read the data port twice, which means they use 6 bits of data, and thus halve the transmission rate (and thus also resolution). But, in return these modes can specify 16 unique colours; Each of these 3 modes interpret these 16 values differently. (one interprets them as 16 shades of monochrome - one colour, 16 brightnesses), another as all 16 possible chroma values but at the same brightness, while the last is basically 16 colour palette mode; But since the chip only has 5 background and 4 sprite palette registers, in reality this mode only allows 9 simultaneous colours. (though a minor upgrade to the chip could have certainly given a full 16 colour mode this way.) So... Aside from having an external data port and generally better capabilities, this very much is in the same lineage as TIA... But when designing it they quickly realised that repeating the 2600's design made little sense; It was awkward and hard to work with, and RAM was no longer that expensive. So they could've designed a more conventional graphics chip that had video ram and dealt with frames and so on... Instead, they created a chip called ANTIC. In their own words this is a CPU specifically to handle graphics, leaving the 6502 main CPU free for other tasks. And to a point, this IS accurate. though to call ANTIC a CPU is being very generous. ANTIC has a display list, which is a set of special purpose instructions that run every time a new frame is drawn. This display list contains various definitions of different more conventional graphics modes, such as a bitmap mode, or a text mode or the like in different resolutions. What distinguishes ANTIC + GTIA from a regular graphics chip is that ANTIC specifies graphics modes per line, not based on the whole screen. Indeed, why not? ANTIC works by writing data in realtime to those 3 GTIA pins. What kind of data it can write is limited by how ANTIC is designed; You could swap out this chip for something else and radically alter the behaviour of machines built with this setup, even though the GTIA chip that actually does the job of producing onscreen graphics is unchanged. All the text and graphics modes the system supports are dictated by the design of ANTIC (even if some of their limitations, such as colour and resolution are constrained by GTIA) In effect, ANTIC is the result of looking at what kind of graphics capabilities a computer would typically need, getting a 2600, then swapping out the 6502 for a special processor that mimics the behaviour expected of common graphics modes. ANTIC reads data from system memory through DMA, processes it according to what the display list says, then feeds background data through the 3 pin port directly to GTIA, while using further DMA to update the GTIA single line sprites, which in combination gives the illusion that the sprites span the entire height of the screen without the CPU having to do the work of changing it every line. Since it's capabilities are still relatively restricted though, it has the ability to create CPU interrupts at specific scanlines, so that you can trigger a more complex effect with precise timing that DOES use the CPU. A rather roundabout way of solving the problem, but one, it turns out that has some very interesting implications. While no longer a direct descendant of this design, the lessons learnt with these two systems were then used to create the Amiga. the Amiga, like the 8 bit ataris before it, has a graphics chip, and then a co-processor to help the CPU with the heavy lifting. This co-processor is called COPPER, and has been massively generalised and simplified vs ANTIC. Rather than a display list which deals in graphics modes for the screen on a line by line basis with special instructions... COPPER is much simpler in concept. You have a bunch of registers in the system that control features of the graphics hardware (though technically COPPER can be used outside of graphics tasks by writing to other registers). You then have a list of instructions that state to DMA a value from a location in memory to a specified register, then wait X pixels before processing the next instruction. That means it can make changes not just per scanline, but mid-scanline as well. (There is a lower limit to how frequently you can make a change though. No more than about once every 4 pixels drawn) Same basic idea, but much more generalised. Other systems have had features that clearly take inspiration from these. Though rarely quite like COPPER. One example is the SNES. It has a feature called HDMA. What is HDMA? It is a list of instructions in memory. Every H-blank period (eg. Once per scanline), the graphics chip reads any HDMA commands if any are enabled, and copies a value from memory to the register specified by the HDMA command. In effect it splits the difference between the generalised flexibility of the Amiga COPPER chip, and the more restricted scanline based technique of ANTIC. And all of this derives from the weirdness of the 2600, with things built on top of it and generalised bit by bit...
@@lorddissy thinking of the 2600 as more of a "programmable video generator" rather than a "computer with video output" really helps to understand how it works!
@@lorddissy I think you can only ask the TIA to make the CPU sleep until the next hblank. The vertical timing is not handled by the TIA but by the CPU itself! See ruclips.net/video/TRU33ZpY_z8/видео.html Different games could have different numbers of lines per frame, or even the same game could jump around between different numbers of lines per frame!
One other use case for the NMI is synchronizing your program with a timer of some kind - it's perfect for highly timing critical code! One example of this in practice is the NES, which used the NMI to detect and react to the start of the vblank interval of the TV. As you showed, it's very easy to run into race conditions if you're not careful about how you access memory between the nmi interrupt handler and the rest of the code. The easiest solution is to avoid shared memory access between the rest of the program and the nmi to the greatest extent possible, and to really consider all cases where an NMI could occur when shared memory access is required.
Level interrupts are handy for cases when several devices request irqs simultaneously. Service one and rti. The IRQ service is immediately triggered again; but the first device is no longer active, so you service the second. The third entry services the third, etc. You can even check the high-priority devices first.
Agree. Archaic system I worked with had separate interrupt lines, each with separate vector address, for each device. And they were prioritized such that lower level devices couldn't interrupt the higher priority interrupt handlers. Sending all device signals through separate logic into one IRQ line has some advantages. But now the IRQ service has to spend a few instructions just figuring out WHICH device caused the interrupt and then JMP to code for servicing that device. As so often the case, 'do it in hardware' or 'do it in software'.
@@mikefochtman7164 In the mid 80's MOS Tech released a priority interrupt controller for the 6502 that could extend the three vectors to 16 vectors. I don't recall the IC number.
@@byronwatkins2565 Interesting. I also seem to recall some members of the Z80 would do a special 'read' of the data-bus as part of the interrupt service. Then the interrupting device could actually transmit a byte to the CPU that would indicate a particular vector to load from a dedicated page. Or maybe I'm hallucinating, it's been a long time since I touched a Z80. lol
@@mikefochtman7164 My exposure to the Z80 and 6800 was limited. I remember that the 6800 pushed all registers onto the stack when servicing IRQs making them take longer than the 6502, which only pushed status and program counter. Z80 has a request/acknowledge protocol for some reason, but I have never programmed them at low level.
Sounds like CTRL-ALT-DEL would be triggering a non maskable interrupt. I’ve been programming for just over 30 years and have never really been interested in how assembly really works but I have to say that these videos are EXTREMELY interesting. I really appreciate the amount of time you spend on explaining these concepts, GREAT job!
So glad to see you taking this series further Ben! It's especially cool for me to see since I started working on my 65c02 computer a few months before you started this series XD Small world.
With some changes it's possible to easily extend it to other 8-bit memory chips. My version is built around Arduino Micro. It supports 27C64, 128, 256, 512, 010, 020, 040 and 801 EPROMs, 28C64 and 256 EEPROMs and 29C010, 020 and 040 FLASH, with support for 27C16 and 28C16 coming shortly.
Nice random number generator. You can also press the button and release it at the correct time, if the counter rolls over when you release the button, you can stop it at 0. keep going, this videos are a gold mine. Nice work.
Z80 was my baby. My 1991 final year project used two Z80's and a dual port RAM. ultrasonic steered vehicle. In the lab we made a successive approximation ADC from 7400 and 741. Great video. now we have teensy 4.0. lol
I have been using computers since the C64 and working with them for 30+ years. I've never completely understood why a chip would use little-endian design until seeing the code at 6:35. It's so elegant to remember that the result of the overflow goes into the NEXT memory location. No mental gymnastics required! I'll be getting my full set of kits in 2 days, and I can't wait to start building them.
137k of his subs have watched this video. He has 500k something subs. Thats a crazy amount of people active on this channel. Congrats Ben! I look forward to your next video and projects
It depends on the design of the processor. Say you have an interrupt coming in at cycle 7 on a processor with a 6-stage pipeline (FETCH, DECODE, REGISTER READ, ALU, MEMORY ACCESS, REGISTER WRITE). You could have a processor that schedules disabling interrupts and issuing a read of the interrupt vector location from RAM at cycle 7, with all 6 pipeline slots filled; you could have another that schedules disabling interrupts and setting the Program Counter to a fixed vector location at cycle 7, also with all 6 pipeline slots filled. The former is a bit slower, because the pipeline stalls on an immediately-needed load (jump to needs the memory to be loaded). Some architectures do the former (x86, ARM...); some do the latter (MIPS...). You could also have processors that start doing those things only at cycle 13, flushing the pipeline, with only 1 pipeline slot filled at cycle 12. This way lets all instructions resolve with interrupts disabled before the jump; the Program Counter after the interrupt handler is done is 6 instructions later. Because most pipelined architectures have REGISTER WRITE and MEMORY ACCESS stages at the end, you can also have processors that drain the pipeline, but cancel the REGISTER WRITE and MEMORY ACCESS stages of all instructions before the jump to the interrupt handler. Those processors might be able to start the jump to the interrupt vector at cycle 11 instead of 13. The instructions still run, but because their effects cannot be written anymore, it's as if they weren't executed. Then the whole pipeline is re-enabled so that the interrupt handler's effects apply. The Program Counter after the interrupt handler is done is then set to the first cancelled instruction.
I used to play a similar game with stopwatches, like I assume millions of other bored teenagers did. I discovered weird anomalies in the timings of stopwatches that were stopped extremely early. For instance, on more than one stopwatch, a very wide range of times seemed to result in 0.09 s being displayed, while 0.08 s was quite rare. On one stopwatch, I got every time between 0.04 s and 0.10 s, but 0.09 was an extreme outlier, even compared to 0.08 and 0.10. I was never able to record a time less than 0.04 s, even though I got it over a hundred times (compared to tens of thousands for 0.09)
@@lexus4tw There was something promoting 90 ms over other results, and it wasn't just me. My brother got the same result. Since it was possible to get a time as low as 40 ms, this wasn't a human limitation. Something else was going on in the way it calculated very short time spans.
@@EebstertheGreat I had a discussion over this regarding another video, I believe your answer is there: ruclips.net/user/attribution_link?a=lEgwH-YHFVYbtSuT&u=/watch%3Fv%3Dw0VA9kbIL3g%26lc%3DUgywk94UyVoF-oNAq5t4AaABAg.9Au5p7w88jz9AuS5qY5aZY%26feature%3Dem-comments
@@mumiemonstret Yeah that seems plausible. I don't think in this case that the stopwatch was picking from a limited set of values but just that some spanned a much longer time than others. Like, imagine if every time between 0.081 and 0.099 came out as 0.09, whereas to get a 0.08, the time had to be between 0.075 and 0.80 seconds or something. It was actually even more extreme than that, but you get the idea. Somewhere between the timing circuit, the circuit registering the button press, and the chip controlling the LCD display, there was a weird timing phenomenon going on.
Now I understand how the interrupts work on the Saturn 5 Computer Guidance control from SMARTER EVERYDAY AND LINUS TECH TIPS TOUR. Earned my subs. Keep going. Im gonna buy 6502 tommorow.
I am watching this series because I have some 6502 based projects. By the way, IRQs get more complicated once you use the BRK instruction, as you have to do some stack crap to check if the BRK bit is set. And you have to use the copy of the processor status register that was pushed onto the stack during the interrupt process. Also, someone on 6502.org made a circuit using logic chips that gives you 8 interrupt levels.
wow, this is so meaningful to me after you explained it. I had to deal with interupts before, as a programmer, but did not really got to know what really happens at hardware level, now it is crystal clear, and the data race you shown is awesome ! great example
Wouldn't it be necessary to have a sei instruction at the start and cli at the end of the lcd_instruction function to be sure the timing for the LCD doesn't get messed up? This should ensure that the current LCD instruction is always sent correctly. Or is there a better way?
I don't think that's necessary. The LCD timing is handled by the CPU sending a specific signal to the LCD and reading the response to see if it is ready for the next instruction. The LCD controller's documentation specified this signal is the only signal that can be sent as often as you want. So when the CPU wants to send something to the LCD it first enters a loop where it repeatedly sends that signal and reads the response, waiting for the confirmation that the LCD controller is ready for the next instruction. If the CPU gets interrupted while in this waiting loop either the LCD would have said it was ready and when it comes back from the interrupt (unless the interrupt handler sent a signal to the LCD) more time will have past since the LCD said it was ready, so it should still be ready. Or before the interrupt the LCD said it wasn't ready, in which case the CPU would return to the loop of sending the "ready?" signal and continue waiting for the response. So as long as you don't add lcd_instruction calls to the interrupt handlers, I'm pretty sure the loop included in every lcd_instruction call in the main program will maintain the timing as normal.
You usually only SEI for time sensitive processes, where you do not want an interrupt to abandon your time sensitive execution, like bus operations or display processes without buffer.
Those LCD have minimum timing requirements. That means, pulses (eg. the E line) cannot be shorter then that minimum. But they can be longer. That's not a problem. You see, on the LCD there is a processor in its own which handles the individual lines. This processor needs some time to poll all the lines and perform some action on what it sees on the lines. Thus there are minimum requirements, eg. the E-Line pulse cannot be shorter then x microseconds or otherwiese the LCD processor might not recognize the pulse. But it can be as long as you want. You can pull the E-Line low wait half an hour and let it come back to high. This would be perfectly fine for the LCD processor to accept it as "ok, there was a pulse" There are other issues with his programm but really this is not one of them.
@@_nikeee then you've done something wrong. You shouldn't call a potentially long-running routine from an interrupt. Usually, an interrupt should only do something simple like set a flag or increase a counter to tell the main loop that it needs to deal with this event next chance it gets.
What a flashback! I spent hours (days, weeks, months...) swimming in IRQs and NMIs on the 6502 (and Display List Interrupts on the ANTIC chip) in my old Atari800. That's where all the multitasking magic lived. Jumping between the CPU and ANTIC and back and doing as much extra processing as you could get away with before the other noticed was the real trick. 5 cycles here, maybe only 3 cycles or as many as 7 cycles there... Ah, those were the days...
this series is one of the reasons i got into programming and i've learned so many things thanks to you ! you keep doing a really good job at explaining how it works :)
Now if only he had a Makefile. :D [I presume he deliberately chooses not to, for pedagogical purposes... but hey, maybe it's an opportunity to teach Make at some point? ;)]
The part that would drive me crazy is the need to remove the eprom and put back everytime. I wonder how complex it would be to allow in circuit programming ?
@@enjibkk6850 you are missing the point of his vidoes. Its abkit the simplicity of how computers actually work, not about software development, ease of programming etc. Its about wires and hardwire and how things work.
Thanks, Ben! I look forward to these videos. I wish I could support you monetarily but that just isn't in the cards at the moment, so hopefully a large thanks for all of the hard work is all right!
There are a few fairly common uses, besides the power down detection as mentioned. Another example is a watchdog. There's a resettable circuit outside the CPU that generates an NMI after a delay (starting over whenever it's reset), and the code is supposed to send the reset signal to this watchdog circuit every now and again. If the code gets stuck, the watchdog triggers the NMI and the system can do some sort of a partial reboot to get unstuck. This can be pretty important for systems where a crash would be inconvenient or dangerous: better to have the elevator controller get it's mind back together, so to speak, when things go wrong, even if it means stopping at a floor where nobody wants to get on or off, than to stop completely and strand people in between floors...or send them careening up or down forever. Sometimes the watchdog might just trigger a full reset, too. Yet another common use, when developing and debugging low-level or embedded code, is to have the non-maskable interrupt break into a monitor or debugging program of some sort that can do things like display the contents of registers (saved from when the NMI happened), examine or change bytes in memory, etc. Pressing a button or whatever connected to the NMI signal then lets you get a better idea of what is going on when that's clearly not quite what you thought should be going on.
Well. One of my current projects is done on an STM32F7. Part of the design is that I am able to store information in a non voltile way. In order to do that I have a Flash ROM chip on board and use a file system to manage it. The file system is based on ELM-chans well known FAT filesystem. We all know that FAT filesystems are a pain in the ass when something goes wrong, eg. the file allocation table is not written correctly to the medium. Something that actually might happen, if the power goes away unexpectedly and the Filesystem still has the allocation table in ints internal cache and not yet written to the medium. And yes. This is exactly what happens sometimes. Part of my testing precedure of course is how the µC-card handles power outages. And sometimes it happened that the filesystem was no longer usable afterwards. So what I did was to monitor the power lines and if I detect a possible power loss I use an interrupt which immediately closes the filesystem and locks it. The power might come back (it might have just been a small glitch in the power supply input) in which case the filesystem is opened again but if it eventually goes away at least my file system is safe and stays usable. My hardware designer used enough capacity in the power supply such that the processor is able to continue for 10ms after the power input is missing. Enough time to drive the whole system into a safe state.
Hey Ben, with no knowledge that you were going to put out a video, I obviously opened up my PC to do other things this morning but that work is now interrupted by your video and I have to prioritise this 27mins video now.
Perfect timing I am in midst of starting my first Arduino project and I found out about the Interrupt function. I started using it because I wanted to have a code be more efficient. 5 hours and a couple forum pages later I stopped using the interrupt for the same reasons you gave. Also I have some buttons that bounce quite a bit (5 changes detected in one press with 16MHz). So far I didn't know the name for this phenomenon and now I can check your video and the rest of the internet on how to deal with that properly :) Thank you
Use the interrupts but only to set flags to poll later. Ex: Ethernet chip signals a packet has arrived via hardware interrupt so set the packet arrived flag and then exit the interrupt routine. Later, check the flag when convenient and deal with the packet then but not in the interrupt itself.
Switch bouncing is the term you need to search for. Bouncing really is what that phenomenon is called, but the term switch is far more common than button in this context.
As for debouncing, I just do whatever code was supposed to execute when the button is pressed, then loop until the button is sensed to be released then tack on a 150ms delay before proceeding. Adjust delay value as desired.
For Arduino the library Bounce2 is a godsend if you just want to not have to deal with debouncing and process a lot of button inputs, but if you're watching a video like this I encourage you to craft your own solution 😉
Thank you for all the replies. The idea with the flags sounds really cool and I will just do a bit of coding before I really try to implement it. My project is a Kerbal Space Program Control Hub as a present for someone. His birthday is in three months and I figured that is about the time I will need to learn all the necessary skills.
RUclips gives me push notifications when your videos post, and I had to stop what I was doing to watch this one. Now I'm acknowledging and it's time for me to RTI.
Big fan. As a self-taught software engineer, I have some substantial gaps in my knowledge once you get lower-level than C or so. Happy to say this has taught me a ton!
your videos are very intuitive. also learning about real low level programming makes you realise just how much we take easy libraries for granted. awesome stuff
Quick hack : Use an accumulator in int handler setting some variable to 1. In the main loop just increase a global accumulator by that value and then set back the accumulator used in interrupts to zero. Ex: .byte int_flag .byte pad .word accum Int: lda #1 Sta int_flag RTI Main: Load int_flag Load accum Add int_flag , accum Set int_flag , 0. ..
Seeing the display in the thumbnail, before I even watch (which I will): reminds me of OS/2 kernel errors from the 90s. When it would panic, it would print a "TRAP" message, and a bunch of data. If it was interrupted while doing that? All we got was "TRA" on the screen, and no other data to debug it with. The dreaded "TRA"...
I wrote an RTE (real-time executive) and associated tasks for a 6502 on a custom board (DSP) many years ago for a customer. Great fun. I believe the NMI was used for a watchdog timer.
Ah yes, brings back fond memories. Was writing code for device drivers on archaic system. This was a cpu that actually existed of a lot of ECL logic gates on three large boards. And it had a large number (I think it was 16) interrupt lines. So one I/O interrupt could actually interrupt a lower priority one. One particular instruction it had was for interrupt handling were it would 'decrement global interrupt counter'. It had a special side effect of blocking any/all interrupts for one additional instruction. So the standard way of exiting an interrupt handler was 'decrement the global interrupt counter' followed by 'return from interrupt'. Anyway, yeah, takes me back. Interrupt handlers have to be careful to disable, perform some short task, then re-enable interrupts in order to not 'lose' events.
On the old Atari 8-bits, the NMI was used to invoke a Vertical Blank Interrupt, as well as so-called Display List Interrupts (aka Horiztonal Blank Interrupts). Timing for these (controlled by the ANTIC coprocessor) is important to keep the big boss happy (the TV or monitor the Atari's connected to). Understanding some of this at a lower (CPU) level via your videos helps my ageing brain understand a bit more of the concepts and goings-on under the hood of my favorite computer platform. Thanks!
Very good video! You are bring to me a whole new perspective on computers. You might know this, but for anyone who is not aware: in Vim to replicate a series of keystrokes press . So when Ben in the video (19:38) is creating the code for the nmi he manually select the text irq with visual mode (keystroke v), then deleted the text (keystroke x), then enter insert mode (keystroke i), next typed nmi, and then quit insert mode (keystroke esc) and proceedingly repeated the exact same set of actions two lines up for the bne jump comparison. Instead of repeating that complex series of keystroke he could have just pressed . (keystroke period) to repeat the last set of actions he did. So he could have typed . , l, and 3x (erase three characters) to insert nmi and remove the irq characters. While this seems like a nitpicky thing, I have found little tricks like this to greatly improve my time while using Vim.
Hello Ben! You probably won't see this but thanks a lot for all the tutorials. your videos have helped me understand CPU's and now I'm even thinking about starting to develop processors for special cases, like very specific things as mining or certain calculations. I even got inspired enough to consider specializing in semiconductors in a few years after finishing highschool. I have had a lot of fun with your videos and can't thank you more than with this comment currently. Thanks a lot!
Ben is the only creator on RUclips I slowdown playback to 0.75 for haha. Not that I can;t follow the flow but he's definitely processing a few Ghz ahead. Give it atry ;)
Great video as always. Even though I know most of this I love it and always get something new or useful from it. My only complaint would be the frequency ... We need videos more often 😃👍
In all honesty Ben. I think you have potential in this project to add expansion kits. If things go well for my projects I am thinking about providing kits myself and then calling each kit a chapter.Of course I am doing something very different then this. I am building a cluster computer for taking a keyboard controller and then controlling an entire synth cabinet. Your kit and the tutorials has been incredibly helpful in learning the 6502 Processor and getting started in ASM. Keep up the good work.
7:20 - Just noted that you have no "overflow" protection on the string entered before the counter. Yes, the code doesn't allow for any decimal higher than 65536 to be stored (5 characters + \0 terminator) since the 16 bit value operated on cannot go any higher, but just wanted to point it out. Probably not bad practice at all in 8-bit programming, but I'd personally spare some bytes for a "potential" change in code that allowed a 32-bit integer (10 + 1 chars) as decimal to be stored there, since you're not low on memory space. Maybe even space for a 64-bit one (20 chars + the \0 terminator).
The Sega Master System used a Z-80 CPU, but it has a similar arrangement with two different interrupts. Interesting, they actually did attach a button to the NMI line. That's how the pause button on the console works, it triggers the NMI handler.
I was thinking of that while watching the video. The Z-80 used a single-byte instruction called "restart" (RST) to handle interrupts. The first instruction fetch cycle following an interrupt acknowledgement was intended to be a "restart" instruction containing a built-in number indicating where in memory the CPU should restart execution from in order to service the interrupt. The restart numbers ranged from 0 to 7 and restart 0 was identical to a reset instruction since the Z-80 always started execution from memory location 0 following a reset. If I remember correctly, each number above zero was 8 bytes further into the memory, so RST 1 would start the interrupt handler at memory location 8, RST 2 would start the handler at location 16 (10H) and so on. It's not as versatile in some ways as the 6502's way of handling interrupts, but programmers and hardware builders found ingenious ways around its limits. It's also more versatile in another way because you can incorporate RST instructions into your program (using interrupt numbers which you're never going to use as an actual interrupt) and use them as single-byte subroutine calls. In the days when memory was expensive and its space limited to 65536 bytes, every byte saved could be important. Some systems could use "bank switching" to expand the memory, especially on multi-user systems, but that was complicated and home systems almost never used it, at least until the IBM PC era.
Being able to manipulate technology is awesome, and there is something to be said about it, but the true genius shines through when someone can create their own. Nice job!
I absolutely love these videos! Part of my summer project has been doing something similar! I decided to emulate my own cpu like the 6502 using an arduino! It's been fun!
I love that Ben plays dumb and goes through all the mistakes and issues a beginner might have, showing us how we would troubleshoot something like that. It makes it much more informative, engaging, and relatable than if he just told us exactly how to do it perfectly first try.
That's because he has excellent engineering skills and knows how to reflect them through a solid presentation.
@@skilz8098 For real, definitely my favorite channel on youtube. Whenever he releases a new video I get unreasonably excited
Mister Rogers Neighborhood did the same thing: plans would "change" as "unexpected" events happened on the show. All of course scripted out to the individual video frame.
I always feel stupid when I forget semicolons on the end of Arduino code or some other really stupid issue that I shouldn't forget when I compile. Ben makes me feel like that's okay, and that I can still be a good coder even when I make stupid mistakes
@@AlanCanon2222 Ben is the Mr. Rogers of digital logic.
"Apologies for the INTERRUPTION..." I'll let it slide this once.
That was Smooth!
until your IRQ counter is reset 😏
Yeah, that was well played meta. :)
LOL !! Good one !!
I'm sorry, you did not file the appropriate interrupt permission form, thus I will ignore your interruption.
Absolutely love how you go through the process, failing a few times, and "discovering" why. It makes learning a lot smoother than just being told "you must do X, Y, and Z", and never find out what might happen if you only "do X and Y".
Yes, I noticed that too, I really like that.
Neuropsychologically, that technique will trigger the learning response, assuming that the viewer is engaged.
Unexpected results trigger the part of the brain that formulates and integrates process corrections. (See "orienting reflex")
It's a good technique. "What happens if you don't do something?" is always a useful question to ask.
I love how he works this in too.
You can tell he knows the right answer, but he wants to teach WHY it's the right answer.
he shows the shortcuts you can get away with in a pinch, because you know the limits of them.
failing is part of the process of what it means to be a programmer.
The 6502 is a relatively simple processor. Modern processors are much more complex and there is a lot to know about the individual subsystems working together. It is not uncommon to not know all the low level nuts and bolts one needs to know in order not to fail. The datasheet of the µC I am working with - a STM32F7 - has 1200 pages. Needless to say, that it is impossible to grasp all of the information just by reading it front to back.
Ben, I'm a firmware engineer and I love your videos, they are relaxing and easy to digest. What you do is a public service, your videos are a perfect starting point for so many students and enthusiasts. Hats off to you, keep it up!
Okay, so I am not the only firmware engineer who watches these videos and finds them relaxing ;)
Me too. I deal with computer architecture, Ben is real deal.
I do enjoy this very much too, as it brings me back to where I started 36 years ago on the Apple II in school learning assembly and understanding 6502 with its surrounding hardware. Todays work for me is embedded stuff. Having seen the basics is really useful, especially when not being a firmware engineer. So, thanks Ben for your videos!
Videos from Ben Eater are always a NMI - drop everything, watch video, continue with what I was doing before.
yup... there are a few others on youtube I feel the same, 3Blue 1 Brown, javidx9 (OneLoneCoder), and a few others. Sentdex with his Python series for building Neural Nets, Machine Learning from scratch series is high on the list too... Jason Tuner is another with his C++ series... there's only a few that have that effect but these are just some of them... BlackPenRedPen is another good one.
@@skilz8098 Sebastian Lague's coding adventures also come to mind for me.
@@skilz8098 Hey, another bprp viewer! Nice to see ya
Make the interrupt shock you with some voltage so you release the button quicker
This isn't the channel of Michael Reeves though.
Would love a collab tho...
Actualy, make cpu decide if it wants you to be shocked, this is the intended way to use irq.
Electroboom collaboration needed now.
Ben is too educational for micheal
Alternating current only!
"Apologies for the interruption..."
Interrupt acknowledged.
@J Hemphill Yep. This interrupt is routine.
I'm not programmed to handle this interruption so I fast forwarded it
@@HDestroyer787 ERROR PROGRAM FALLTHROUGH
@J Hemphill nope, interrupt routine crashed
@J Hemphill Sloan handled it. Just didn’t:
RTI
I love how Ben manages to sound momentarily perplexed about why it's not working (when he knows _exactly_ why it's not working and he knew it was going to do that from the start).
Joe E / And that's the "art" ( the correct way ) of teaching...🙂
Congrats on 555k subscribers! Nice timing :)
Nice _timing_ you say? lol. Indeed! And congrats indeed! Perhaps that's an interrupt, Ben, for a different kind of video? ;)
Why... Did you do this... You didn't have to... This hurts...😂
HAHAHAHAHA
lmaoo
Fantastic
What I really love about the videos is that you go over failure scenarios. That helps much more than just "Here's how to setup an NMI interrupt, like and subscribe". That makes it so much easier to troubleshoot when stuff doesn't work when I recreate things.
"and hit that bell icons, so Ben's videos can push IRQs to your phone!"
Lesson 1: Hello world. Lesson 2: Interrupts.
Lesson 3: The world!!!!!!
@@nockieboy yes yes yes yes yes
Lesson 4: Fusion Reactor Bootup!
Jajajajajaja
One thing that's worth noting: whenever the IRQ line is pulled low, it sets a flag inside the CPU. Whenever the CPU is finished an instruction, if that flag is set and interrupts are enabled, it services the interrupt. So, disabling interrupts doesn't actually _prevent_ them, but _delays_ them. Once you enable them again, any pending interrupts are serviced.
Of course, it's only a flag, so even if 1000 interrupt requests happen while disabled, it will only run the handler once. It is also possible to manually clear that flag without actually handling the interrupt.
Exactly. Otherwise, a 'low' IRQ line would try to re-enter the interrupt handler over and over and over and over. It would keep pushing PC onto stack and ever execute a single instruction inside the IRQ handler :) By 'blocking' further interrupts until the RTI instruction is executed, the CPU can do something to 'handle' the event.
And yes, if you expect additional interrupts as quickly as that, you could lose track of some events. So different schemes to 'count' the interrupts as quickly as you can, then use some other code to service the device until the 'interrupt count' is back to zero. Tricky stuff, but can be done.
In the 6502, the NMI (non-maskable interrupt) is latched, but the IRQ isn't. It's up to the device or support chips to hold the request until it's cancelled from the IRQ handler.
By the way, if anyone is interested, to handle an IRQ or NMI, the CPU runs a modified BRK (break) instruction. The same happens on reset to load the reset vector, but it holds the read pin high to avoid writing the program counter and status flags to the stack. It's a nifty hack to save transistors. This explains those initial 7 cycles, and why a reset decrements the stack pointer by 3 and sets the interrupt disable flag without disturbing memory or the other registers.
@@JB52520 Hi Johnny how does the CPU know what address is inside the IRQ vector at FFFA, i understand that what ever device triggering the request will be inside the IRQ vector the the iRQ subroutine will begin after placing the last address it was reading from on the stack etc- but how does the cpu know what device( address) is triggering the low signal.
Cheers Franky
"You don't want to use NMI in normal operation." Nintendo: "How about an NMI on every VSYNC?"
At least IBM used a _maskable_ interrupt on a timer
Nintendo used NMI for timing as a sort of watchdog or heartbeat. You didn't want a hard to debug console to crash suddenly in production.
A general purpose computer is significantly easier to debug than a running game console, so timers can be maskable with no loss in integrity.
price?
Perfectly timed, as all things should be
@@mdahsankhan4361 I wasn't getting all of the items from the videos, so it totals around 1300SEK. The most expensive single item (excluding a new programmer) was actually breadboards at 100SEK each. I was seriously considering getting the kits from eater.net but those got a lot more expensive with VAT and shipping and I can't do that right now. Not going to follow the series exactly for now but if this hobby becomes a more regular activity I will and then at least starting over with the kits will make more sense to me.
@@repinuj so long as an interrupt doesn't throw off the timing.
So that's how it works! When I was 15, I was a wizard with Commodore Basic, and I had a bit of a go ar machine code/ assembly programming, but never got very far because the learning resources available at the time were nowhere near as good as Ben Eater videos. All these years later, and suddenly Interrupts make perfect sense. Thanks Ben. 😃
Ben: Uploads
Me: 🥰
RetroGameMechanics also uploads
Me: :o THE TIMING (I gotta sequence those interrupts in turn)
I just discovered Ben Eater's RUclips videos and electronic kits today. I quickly ordered the 6502 kit and several others. Having cut my teeth on 6502 programming back in the day on my Apple //e and later a IIgs (both of which I still own), I love the purity of 8-bit assembly language (it just makes sense; everything is clean). Kudos to Ben for his absorbing teaching style. I look forward to many adventures with his videos and kits.
You can actually make the counter go up only once just from software:
in_irq = $020c ; 1 byte
...
reset:
lda #0
sta in_irq
...
irq:
pha ; save accu
bit in_irq
beq not_in_irq
lda #0
; we were already in the IRQ handler here
pla ; remove accu from stack (original is still on previous stack)
pla ; remove flags from stack
pla ; remove return address from stack
pla ; which is 2 bytes
jmp await_irq_off_loop
not_in_irq:
; actual handler goes here
inc counter
bne no_overflow
inc counter + 1
no_overflow:
; end of actual handler
lda #1
sta in_irq
await_irq_off_loop:
cli ; if the IRQ is still active, this will immediately recursively cause another interrupt
sti ; prevent a second interrupt from interrupting the following code
lda #0 ; if we've reached this instruction, the IRQ is off (pin high)
sta in_irq
pla ; restore accu
rti
Basically, when we re-enable interrupts inside the interrupt handler, the interrupt handler will get called again. By knowing when this occurs (flag in_irq), we can remove the stuff from the stack and continue on with the awareness the IRQ is still active. I'm not familiar with the 6502 so this code may not work or it may be more succinctly written.
Wouldn't it be simpler to have the IRQ handler just set a flag, and the main loop increment the counter whenever that flag is set, then clear it? So no matter how many times the handler runs, the counter will still only increase once per main loop iteration. (Of course that can still be very fast, but you can add a delay.)
Or if you're concerned about the handler blocking the main loop, you can have it end with rts instead of rti. rti is basically "rts and cli at the same time". So you can leave interrupts disabled after serving one until you're ready for the next one.
@@renakunisaki The problem here is that the counter will increase once per main loop iteration, rather than once per button press.
@@renakunisaki yes, it's usually recommended to keep interrupt service routines as minimal as possible to avoid slowdowns. Often this involves merely copying data somewhere and setting a flag that data needs to be processed by the super loop/main loop.
The 6502 code he used doesn't re-enable interrupts inside the handler (although I know some systems do this). Ben's current problem is that as soon as the RTI is executed, leaving the handler successfully, the CPU re-triggers and re-enters the handler again. If the handler was written to take a really long time, Ben could get his finger back off the button fast enough.
But yes, if your system can 'interrupt' the 'interrupt handler', then you need something like what you have. One system I wrote device-drivers for had this. Used an interrupt counter and the 'masked' part of the handler would increment for each hardware event very quickly before enabling new interrupts so it could accurately count them.
@@mikefochtman7164 The way the processor "knows" the interrupt handler shouldn't be interrupted or that it's still in the handler is by the interrupt flag. rti pops the flags as well as the return address. By popping the flags, the interrupt flag is restored. That's not the only way to restore the flags, however. A simple cli works as well, allowing the "interrupt handler" to be interrupted.
What I love so much about your videos is that you take the time to discuss all the faults and misconceptions one could encounter. This builds a deep understanding of the matter and enables us to think the concept further ourselves.
Hi I am a soon to graduate Computer Engineer and your channel is literally everything that I have ever wanted to learn in school but still haven't. I am so beyond excited to binge through all your content. Thank you so much for sharing this information!
I didn't learn much of use in CE. Part of it was that I was a bad student and part was that the curriculum was out of touch. Most of my useful knowledge I either learned on my own or from sources like Ben Eater. Great stuff.
YES BEN NEW VID!!!!!!!! I'm in withdrawal from having too few Ben Eater vids. (PS. We still. all want you to connect your breadboard video card to your breadboard cpu and show some form of video output from it.)
We definitely want that
Breadboard CPU could have some problems because it has almost no memory. Even my own version modified to support 4kB of memory would struggle with that. I'm currently working on connecting Ben's graphics card to my Z80 system. And in theory, it should also work with the 6502 design presented in the video. If you wanted to run such VGA generator with breadboard CPU, you should modify it to support at least 64k of memory, if you want full 64 colors on every pixel.
That would be cool, you would need to use ram instead of eprom?
@@eanerickson8915 Exactly. There are at least two ways to accomplish this. If you want to write individual pixels to the screen with no color limitations (besides 64 colors the thing can produce), then it's best to use two RAM chips. One would be connected to VGA generator, and the second one to the CPU bus. Then you would need a circuit that will "swap them around" on CPUs request. This has some downsides though, for example that means you most likely have to redraw every frame from scratch. The second way involves dual-port RAM. These behave like regular RAM chips, but they have two access ports, eg. two sets or address, data and control lines. This allows the CPU to write image data independently from VGA hardware. The downside is that dual-port RAM is more expensive, but more importantly comes in smaller sizes than regular RAM, so some cuts have to be made in order to fit an entire screen in a smaller space. I went with dual-port RAM, and I already have four IDT7132 chips on my desk. That adds up to 8k. My VGA timing hardware is modified for 256x192 resolution (that's important later). To fit every pixel with its color (one byte per pixel), I would need 48k. That obviously won't fit in my 8k of RAM. So what I did, was to divide my available RAM into two sections: image data (6k) and color data (8k). If you do the math, you can see that a black and white image fits perfectly in 6k I gave it. So how does the color work? I divided the entire screen into 8x8 pixel sections. Each section is given two bytes in color RAM, foreground and background color. When a pixel is on in imge RAM, it gets the foreground color of its section. With some clever connections between RAM chips and timing hardware everything can happen in real time, without the need for any additional CPU. This solution is not ideal for graphics, but it adds some color to otherwise black and white display.
@@k4ktus That's basically what a ZX Spectrum does, right?
Just today a saw a video about the Atari 2600. It also used the 6502, just in a smaller package. Considering it does not need all those address lines since the machine uses only very little ram or rom it makes sense as a cost cutting strategy.
However those madmen also removed the interrupt lines! Both of them!
Now this would not have mattered too much if not for the fact that the video chip required excellent timing from the cpu every time a line was printed on the screen. The programmer had to make sure of that. It would have been so easy using interrupts, but no...
It is only one aspect that made that machine a nightmare to program for.
Really, I understand that every cent counts. But deactivating such a convenient tool?
The 2600 was certainly a quirky beast, but it did have ways to deal with the tv scan line.
It might be best to think of it backwards, not a cpu with a chip to draw video, but it's a video chip having an assessory (cpu) to feed it instructions.
So instead of an interrupt, you can ask the TIA to let the CPU not run until one of the video related timing events (hblank or vblank usually)
So your code can setup a scan line, and ask the TIA to 'wake' the cpu when done. Then the very next opcode in your program runs potentially many clock cycles later.
This works out better in the end because the video chip runs 3 times faster than the CPU. Even a NOP instruction (one 6502 clock cycle) means the video chip has already just drawn 3 pixels on the screen in that time
@@lorddissy A NOP instruction takes 2 cycles; because the way the 6502 is designed breaks an instruction into T-states, which each last one clock cycle, means that a T-state cannot be both the first and the last one of an instruction. Even if the answer (if there even was one!) was ready on the first tock, there needs to be another T-state to allow the program counter to increase ready for the next instruction. This only affects single-byte instructions. Any instruction 2 bytes long or more needs more than one T-state to read the whole instruction. So something like LDA#&00 (A9 00) reads A9 from memory in the first T-state, reads 00 from memory as the clock ticks (goes low) to begin the second T-state, the 00 will be latched into A when the clock tocks (goes high); and this is _not_ the first T-state so it _can_ be the last. The program counter increases, and the (first byte of the) next instruction is read on the next tick.
It's a weird design in general. Born almost entirely out of a period of time when RAM was unbelievably expensive.
It's interesting to see the logical evolution of this design though.
The Atari microcomputers. (also the 5200 console, but that was ultimately an afterthought in the end - the chipset was designed to be a game console, but instead was turned into a microcomputer first, and a console after the fact)
The Atari 2600 is built around it's graphics chip, the TIA. (Television Interface Adapter)
The microcomputer range is built around a chip called GTIA (Gary's Television Interface Adapter).
And when you look at what it can do, it's very much like an upgraded TIA.
Like TIA it essentially only creates one line of graphics; 8 hardware sprites that have to have their data altered every scanline (they can be scrolled horizontally but not vertically), and a bunch of data that forms a background.
It has better colour capabilities and more detail is possible (up to 384 pixels on one line).
A minor design change that speaks to how the new system works though - it doesn't have any kind of internal register for background graphics.
Instead it has a 3 line data port, and interprets the values on this port in realtime. The chip has 4 modes, which interpret this in different ways. The standard one the systems use most of the time interpet the input as 5 different types of pixel, and several types of blanking instructions (including a switch to or from high resolution mode; standard resolution interprets the input as one of the 5 background palette registers, high resolution effectively interprets it as bit patterns using only 2 of the palette registers - or really, only part of two registers; specifically the luma part of two registers, and the chroma part of one of them. - There doesn't seem to be a good reason for this in terms of circuit complexity, but rather it seems to be intended to reduce the chance of colour artifacts in high resolution graphics.)
The other 3 modes all read the data port twice, which means they use 6 bits of data, and thus halve the transmission rate (and thus also resolution). But, in return these modes can specify 16 unique colours; Each of these 3 modes interpret these 16 values differently. (one interprets them as 16 shades of monochrome - one colour, 16 brightnesses), another as all 16 possible chroma values but at the same brightness, while the last is basically 16 colour palette mode; But since the chip only has 5 background and 4 sprite palette registers, in reality this mode only allows 9 simultaneous colours. (though a minor upgrade to the chip could have certainly given a full 16 colour mode this way.)
So... Aside from having an external data port and generally better capabilities, this very much is in the same lineage as TIA...
But when designing it they quickly realised that repeating the 2600's design made little sense; It was awkward and hard to work with, and RAM was no longer that expensive.
So they could've designed a more conventional graphics chip that had video ram and dealt with frames and so on...
Instead, they created a chip called ANTIC.
In their own words this is a CPU specifically to handle graphics, leaving the 6502 main CPU free for other tasks.
And to a point, this IS accurate.
though to call ANTIC a CPU is being very generous.
ANTIC has a display list, which is a set of special purpose instructions that run every time a new frame is drawn.
This display list contains various definitions of different more conventional graphics modes, such as a bitmap mode, or a text mode or the like in different resolutions. What distinguishes ANTIC + GTIA from a regular graphics chip is that ANTIC specifies graphics modes per line, not based on the whole screen.
Indeed, why not? ANTIC works by writing data in realtime to those 3 GTIA pins.
What kind of data it can write is limited by how ANTIC is designed; You could swap out this chip for something else and radically alter the behaviour of machines built with this setup, even though the GTIA chip that actually does the job of producing onscreen graphics is unchanged.
All the text and graphics modes the system supports are dictated by the design of ANTIC (even if some of their limitations, such as colour and resolution are constrained by GTIA)
In effect, ANTIC is the result of looking at what kind of graphics capabilities a computer would typically need, getting a 2600, then swapping out the 6502 for a special processor that mimics the behaviour expected of common graphics modes.
ANTIC reads data from system memory through DMA, processes it according to what the display list says, then feeds background data through the 3 pin port directly to GTIA, while using further DMA to update the GTIA single line sprites, which in combination gives the illusion that the sprites span the entire height of the screen without the CPU having to do the work of changing it every line.
Since it's capabilities are still relatively restricted though, it has the ability to create CPU interrupts at specific scanlines, so that you can trigger a more complex effect with precise timing that DOES use the CPU.
A rather roundabout way of solving the problem, but one, it turns out that has some very interesting implications.
While no longer a direct descendant of this design, the lessons learnt with these two systems were then used to create the Amiga.
the Amiga, like the 8 bit ataris before it, has a graphics chip, and then a co-processor to help the CPU with the heavy lifting.
This co-processor is called COPPER, and has been massively generalised and simplified vs ANTIC.
Rather than a display list which deals in graphics modes for the screen on a line by line basis with special instructions...
COPPER is much simpler in concept. You have a bunch of registers in the system that control features of the graphics hardware (though technically COPPER can be used outside of graphics tasks by writing to other registers).
You then have a list of instructions that state to DMA a value from a location in memory to a specified register, then wait X pixels before processing the next instruction.
That means it can make changes not just per scanline, but mid-scanline as well.
(There is a lower limit to how frequently you can make a change though. No more than about once every 4 pixels drawn)
Same basic idea, but much more generalised.
Other systems have had features that clearly take inspiration from these. Though rarely quite like COPPER.
One example is the SNES.
It has a feature called HDMA.
What is HDMA? It is a list of instructions in memory. Every H-blank period (eg. Once per scanline), the graphics chip reads any HDMA commands if any are enabled, and copies a value from memory to the register specified by the HDMA command.
In effect it splits the difference between the generalised flexibility of the Amiga COPPER chip, and the more restricted scanline based technique of ANTIC.
And all of this derives from the weirdness of the 2600, with things built on top of it and generalised bit by bit...
@@lorddissy thinking of the 2600 as more of a "programmable video generator" rather than a "computer with video output" really helps to understand how it works!
@@lorddissy I think you can only ask the TIA to make the CPU sleep until the next hblank. The vertical timing is not handled by the TIA but by the CPU itself! See ruclips.net/video/TRU33ZpY_z8/видео.html Different games could have different numbers of lines per frame, or even the same game could jump around between different numbers of lines per frame!
NMIs are often used for debugging logic. Like you have a push button you can press to show a debug console on a computer where you can do disassembly.
They're also sometimes used for a soft-reset button, or a warning "heads up, resetting in 1 second".
I have an old iMac with that feature. Does that use a NMI?
One other use case for the NMI is synchronizing your program with a timer of some kind - it's perfect for highly timing critical code! One example of this in practice is the NES, which used the NMI to detect and react to the start of the vblank interval of the TV. As you showed, it's very easy to run into race conditions if you're not careful about how you access memory between the nmi interrupt handler and the rest of the code. The easiest solution is to avoid shared memory access between the rest of the program and the nmi to the greatest extent possible, and to really consider all cases where an NMI could occur when shared memory access is required.
You interrupted my boring afternoon. Thanks
Love this series! Thank you for your work that you put into this series.
Thanks for you wonderful videos. You have inspired me to go ahead and finally learn a programing language at 57.
Oooh, nice! Kudos for putting the effort in! You can do it!!
Cool.... and Assembler too!
You don't want any of that nasty "high level" stuff.
Level interrupts are handy for cases when several devices request irqs simultaneously. Service one and rti. The IRQ service is immediately triggered again; but the first device is no longer active, so you service the second. The third entry services the third, etc. You can even check the high-priority devices first.
Agree. Archaic system I worked with had separate interrupt lines, each with separate vector address, for each device. And they were prioritized such that lower level devices couldn't interrupt the higher priority interrupt handlers. Sending all device signals through separate logic into one IRQ line has some advantages.
But now the IRQ service has to spend a few instructions just figuring out WHICH device caused the interrupt and then JMP to code for servicing that device.
As so often the case, 'do it in hardware' or 'do it in software'.
@@mikefochtman7164 In the mid 80's MOS Tech released a priority interrupt controller for the 6502 that could extend the three vectors to 16 vectors. I don't recall the IC number.
@@byronwatkins2565 Interesting. I also seem to recall some members of the Z80 would do a special 'read' of the data-bus as part of the interrupt service. Then the interrupting device could actually transmit a byte to the CPU that would indicate a particular vector to load from a dedicated page. Or maybe I'm hallucinating, it's been a long time since I touched a Z80. lol
@@mikefochtman7164 My exposure to the Z80 and 6800 was limited. I remember that the 6800 pushed all registers onto the stack when servicing IRQs making them take longer than the 6502, which only pushed status and program counter. Z80 has a request/acknowledge protocol for some reason, but I have never programmed them at low level.
Sounds like CTRL-ALT-DEL would be triggering a non maskable interrupt.
I’ve been programming for just over 30 years and have never really been interested in how assembly really works but I have to say that these videos are EXTREMELY interesting. I really appreciate the amount of time you spend on explaining these concepts, GREAT job!
Sometimes, I just listen to his soothing voice. It's ASMR!
Putting the ASMR into ASSEMBLER
lol me too but I get to learn interesting things simultaneously.
So glad to see you taking this series further Ben! It's especially cool for me to see since I started working on my 65c02 computer a few months before you started this series XD Small world.
Happy to see the uploads. Working on your 8 bit EEPROM just now
With some changes it's possible to easily extend it to other 8-bit memory chips. My version is built around Arduino Micro. It supports 27C64, 128, 256, 512, 010, 020, 040 and 801 EPROMs, 28C64 and 256 EEPROMs and 29C010, 020 and 040 FLASH, with support for 27C16 and 28C16 coming shortly.
Congrats on the 555k subscribers. Let's get you to 6502k soon.
Just to say, this video is excelent. Explanations, visual demonstrations, audio quality, everything is great! Thank you for this!
Nice random number generator. You can also press the button and release it at the correct time, if the counter rolls over when you release the button, you can stop it at 0.
keep going, this videos are a gold mine. Nice work.
Z80 was my baby. My 1991 final year project used two Z80's and a dual port RAM. ultrasonic steered vehicle. In the lab we made a successive approximation ADC from 7400 and 741. Great video. now we have teensy 4.0. lol
I have been using computers since the C64 and working with them for 30+ years. I've never completely understood why a chip would use little-endian design until seeing the code at 6:35. It's so elegant to remember that the result of the overflow goes into the NEXT memory location. No mental gymnastics required! I'll be getting my full set of kits in 2 days, and I can't wait to start building them.
If you hold the button down long enough that the counter increments to 6502, do you win?
yes
Ben will come to your house and personally deliver your prize
It's almost as bad as Scott Manley's "game" in Shenzhen I/O
137k of his subs have watched this video. He has 500k something subs. Thats a crazy amount of people active on this channel. Congrats Ben! I look forward to your next video and projects
How are interrupts implemented on pipelined processors?
It depends on the design of the processor. Say you have an interrupt coming in at cycle 7 on a processor with a 6-stage pipeline (FETCH, DECODE, REGISTER READ, ALU, MEMORY ACCESS, REGISTER WRITE).
You could have a processor that schedules disabling interrupts and issuing a read of the interrupt vector location from RAM at cycle 7, with all 6 pipeline slots filled; you could have another that schedules disabling interrupts and setting the Program Counter to a fixed vector location at cycle 7, also with all 6 pipeline slots filled. The former is a bit slower, because the pipeline stalls on an immediately-needed load (jump to needs the memory to be loaded). Some architectures do the former (x86, ARM...); some do the latter (MIPS...).
You could also have processors that start doing those things only at cycle 13, flushing the pipeline, with only 1 pipeline slot filled at cycle 12. This way lets all instructions resolve with interrupts disabled before the jump; the Program Counter after the interrupt handler is done is 6 instructions later.
Because most pipelined architectures have REGISTER WRITE and MEMORY ACCESS stages at the end, you can also have processors that drain the pipeline, but cancel the REGISTER WRITE and MEMORY ACCESS stages of all instructions before the jump to the interrupt handler. Those processors might be able to start the jump to the interrupt vector at cycle 11 instead of 13. The instructions still run, but because their effects cannot be written anymore, it's as if they weren't executed. Then the whole pipeline is re-enabled so that the interrupt handler's effects apply. The Program Counter after the interrupt handler is done is then set to the first cancelled instruction.
crazy answer, thanks!
I used to play a similar game with stopwatches, like I assume millions of other bored teenagers did. I discovered weird anomalies in the timings of stopwatches that were stopped extremely early. For instance, on more than one stopwatch, a very wide range of times seemed to result in 0.09 s being displayed, while 0.08 s was quite rare. On one stopwatch, I got every time between 0.04 s and 0.10 s, but 0.09 was an extreme outlier, even compared to 0.08 and 0.10. I was never able to record a time less than 0.04 s, even though I got it over a hundred times (compared to tens of thousands for 0.09)
This is basically based on mechanical and human limitations and not really about the electronics. Interrupts execute in micro and nano seconds
@@lexus4tw There was something promoting 90 ms over other results, and it wasn't just me. My brother got the same result. Since it was possible to get a time as low as 40 ms, this wasn't a human limitation. Something else was going on in the way it calculated very short time spans.
@@EebstertheGreat I had a discussion over this regarding another video, I believe your answer is there: ruclips.net/user/attribution_link?a=lEgwH-YHFVYbtSuT&u=/watch%3Fv%3Dw0VA9kbIL3g%26lc%3DUgywk94UyVoF-oNAq5t4AaABAg.9Au5p7w88jz9AuS5qY5aZY%26feature%3Dem-comments
@@mumiemonstret Yeah that seems plausible. I don't think in this case that the stopwatch was picking from a limited set of values but just that some spanned a much longer time than others. Like, imagine if every time between 0.081 and 0.099 came out as 0.09, whereas to get a 0.08, the time had to be between 0.075 and 0.80 seconds or something. It was actually even more extreme than that, but you get the idea. Somewhere between the timing circuit, the circuit registering the button press, and the chip controlling the LCD display, there was a weird timing phenomenon going on.
Now I understand how the interrupts work on the Saturn 5 Computer Guidance control from SMARTER EVERYDAY AND LINUS TECH TIPS TOUR.
Earned my subs. Keep going. Im gonna buy 6502 tommorow.
Great video, all the assembly awoke some bad bad memmories of having to learn and use it for a whole semester.
He knows like the basics to everything of assembly programming .... Unbelievable Bravo
Bravo for basics🤔
@@damiengates7581 bravo for basics to everything
I am watching this series because I have some 6502 based projects.
By the way, IRQs get more complicated once you use the BRK instruction, as you have to do some stack crap to check if the BRK bit is set. And you have to use the copy of the processor status register that was pushed onto the stack during the interrupt process.
Also, someone on 6502.org made a circuit using logic chips that gives you 8 interrupt levels.
Damn. Now I can't wait for the next video! Thanks so much Ben.
wow, this is so meaningful to me after you explained it. I had to deal with interupts before, as a programmer, but did not really got to know what really happens at hardware level, now it is crystal clear, and the data race you shown is awesome ! great example
Wouldn't it be necessary to have a sei instruction at the start and cli at the end of the lcd_instruction function to be sure the timing for the LCD doesn't get messed up? This should ensure that the current LCD instruction is always sent correctly. Or is there a better way?
I don't think that's necessary.
The LCD timing is handled by the CPU sending a specific signal to the LCD and reading the response to see if it is ready for the next instruction. The LCD controller's documentation specified this signal is the only signal that can be sent as often as you want. So when the CPU wants to send something to the LCD it first enters a loop where it repeatedly sends that signal and reads the response, waiting for the confirmation that the LCD controller is ready for the next instruction.
If the CPU gets interrupted while in this waiting loop either the LCD would have said it was ready and when it comes back from the interrupt (unless the interrupt handler sent a signal to the LCD) more time will have past since the LCD said it was ready, so it should still be ready. Or before the interrupt the LCD said it wasn't ready, in which case the CPU would return to the loop of sending the "ready?" signal and continue waiting for the response.
So as long as you don't add lcd_instruction calls to the interrupt handlers, I'm pretty sure the loop included in every lcd_instruction call in the main program will maintain the timing as normal.
You usually only SEI for time sensitive processes, where you do not want an interrupt to abandon your time sensitive execution, like bus operations or display processes without buffer.
Those LCD have minimum timing requirements. That means, pulses (eg. the E line) cannot be shorter then that minimum. But they can be longer. That's not a problem.
You see, on the LCD there is a processor in its own which handles the individual lines. This processor needs some time to poll all the lines and perform some action on what it sees on the lines. Thus there are minimum requirements, eg. the E-Line pulse cannot be shorter then x microseconds or otherwiese the LCD processor might not recognize the pulse. But it can be as long as you want. You can pull the E-Line low wait half an hour and let it come back to high. This would be perfectly fine for the LCD processor to accept it as "ok, there was a pulse"
There are other issues with his programm but really this is not one of them.
@@kallewirsch2263 Yeah, but what if lcd_instruction gets called from an interrupt?
@@_nikeee then you've done something wrong. You shouldn't call a potentially long-running routine from an interrupt. Usually, an interrupt should only do something simple like set a flag or increase a counter to tell the main loop that it needs to deal with this event next chance it gets.
You are the best teacher in RUclips. All your videos are fantastic ! Many thanks !!
What a flashback!
I spent hours (days, weeks, months...) swimming in IRQs and NMIs on the 6502 (and Display List Interrupts on the ANTIC chip) in my old Atari800. That's where all the multitasking magic lived. Jumping between the CPU and ANTIC and back and doing as much extra processing as you could get away with before the other noticed was the real trick. 5 cycles here, maybe only 3 cycles or as many as 7 cycles there... Ah, those were the days...
this series is one of the reasons i got into programming and i've learned so many things thanks to you ! you keep doing a really good job at explaining how it works :)
Absolutely fascinating. I love how compiling (or assembling technically I suppose) is instantaneous.
Now if only he had a Makefile. :D [I presume he deliberately chooses not to, for pedagogical purposes... but hey, maybe it's an opportunity to teach Make at some point? ;)]
The part that would drive me crazy is the need to remove the eprom and put back everytime. I wonder how complex it would be to allow in circuit programming ?
Enji Bkk yeah... that too!
@@enjibkk6850 you are missing the point of his vidoes. Its abkit the simplicity of how computers actually work, not about software development, ease of programming etc. Its about wires and hardwire and how things work.
Thanks, Ben! I look forward to these videos. I wish I could support you monetarily but that just isn't in the cards at the moment, so hopefully a large thanks for all of the hard work is all right!
I always wondered what purpose an interrupt that you could never stop would serve. Now I have some vague ideas. Thanks!
As an old PC tech, most of my interaction with an NMI is a RAM error. “Hardware state is bad! Stop everything!”
There are a few fairly common uses, besides the power down detection as mentioned.
Another example is a watchdog. There's a resettable circuit outside the CPU that generates an NMI after a delay (starting over whenever it's reset), and the code is supposed to send the reset signal to this watchdog circuit every now and again. If the code gets stuck, the watchdog triggers the NMI and the system can do some sort of a partial reboot to get unstuck. This can be pretty important for systems where a crash would be inconvenient or dangerous: better to have the elevator controller get it's mind back together, so to speak, when things go wrong, even if it means stopping at a floor where nobody wants to get on or off, than to stop completely and strand people in between floors...or send them careening up or down forever. Sometimes the watchdog might just trigger a full reset, too.
Yet another common use, when developing and debugging low-level or embedded code, is to have the non-maskable interrupt break into a monitor or debugging program of some sort that can do things like display the contents of registers (saved from when the NMI happened), examine or change bytes in memory, etc. Pressing a button or whatever connected to the NMI signal then lets you get a better idea of what is going on when that's clearly not quite what you thought should be going on.
Another use for the NMI is to trigger it on vblank if you have a DMA controller, as was the case on the NES.
Well. One of my current projects is done on an STM32F7. Part of the design is that I am able to store information in a non voltile way. In order to do that I have a Flash ROM chip on board and use a file system to manage it.
The file system is based on ELM-chans well known FAT filesystem.
We all know that FAT filesystems are a pain in the ass when something goes wrong, eg. the file allocation table is not written correctly to the medium.
Something that actually might happen, if the power goes away unexpectedly and the Filesystem still has the allocation table in ints internal cache and not yet written to the medium.
And yes. This is exactly what happens sometimes. Part of my testing precedure of course is how the µC-card handles power outages. And sometimes it happened that the filesystem was no longer usable afterwards.
So what I did was to monitor the power lines and if I detect a possible power loss I use an interrupt which immediately closes the filesystem and locks it. The power might come back (it might have just been a small glitch in the power supply input) in which case the filesystem is opened again but if it eventually goes away at least my file system is safe and stays usable. My hardware designer used enough capacity in the power supply such that the processor is able to continue for 10ms after the power input is missing. Enough time to drive the whole system into a safe state.
I really enjoy this series. Thank you
Hey Ben, with no knowledge that you were going to put out a video, I obviously opened up my PC to do other things this morning but that work is now interrupted by your video and I have to prioritise this 27mins video now.
It's always a good day when Ben posts
Perfect timing I am in midst of starting my first Arduino project and I found out about the Interrupt function. I started using it because I wanted to have a code be more efficient. 5 hours and a couple forum pages later I stopped using the interrupt for the same reasons you gave.
Also I have some buttons that bounce quite a bit (5 changes detected in one press with 16MHz). So far I didn't know the name for this phenomenon and now I can check your video and the rest of the internet on how to deal with that properly :) Thank you
Use the interrupts but only to set flags to poll later. Ex: Ethernet chip signals a packet has arrived via hardware interrupt so set the packet arrived flag and then exit the interrupt routine. Later, check the flag when convenient and deal with the packet then but not in the interrupt itself.
Switch bouncing is the term you need to search for. Bouncing really is what that phenomenon is called, but the term switch is far more common than button in this context.
As for debouncing, I just do whatever code was supposed to execute when the button is pressed, then loop until the button is sensed to be released then tack on a 150ms delay before proceeding. Adjust delay value as desired.
For Arduino the library Bounce2 is a godsend if you just want to not have to deal with debouncing and process a lot of button inputs, but if you're watching a video like this I encourage you to craft your own solution 😉
Thank you for all the replies. The idea with the flags sounds really cool and I will just do a bit of coding before I really try to implement it.
My project is a Kerbal Space Program Control Hub as a present for someone. His birthday is in three months and I figured that is about the time I will need to learn all the necessary skills.
RUclips gives me push notifications when your videos post, and I had to stop what I was doing to watch this one. Now I'm acknowledging and it's time for me to RTI.
Big fan. As a self-taught software engineer, I have some substantial gaps in my knowledge once you get lower-level than C or so.
Happy to say this has taught me a ton!
Congratulations on 555k subscribers! Just went to the 555 timer video to comment the same thing:D
Ben eater literally eats Binaries ! Best in business . Thanks mate
your videos are very intuitive. also learning about real low level programming makes you realise just how much we take easy libraries for granted. awesome stuff
Quick hack :
Use an accumulator in int handler setting some variable to 1.
In the main loop just increase a global accumulator by that value and then set back the accumulator used in interrupts to zero.
Ex:
.byte int_flag
.byte pad
.word accum
Int:
lda #1
Sta int_flag
RTI
Main:
Load int_flag
Load accum
Add int_flag , accum
Set int_flag , 0.
..
Seeing the display in the thumbnail, before I even watch (which I will): reminds me of OS/2 kernel errors from the 90s. When it would panic, it would print a "TRAP" message, and a bunch of data. If it was interrupted while doing that? All we got was "TRA" on the screen, and no other data to debug it with. The dreaded "TRA"...
I wrote an RTE (real-time executive) and associated tasks for a 6502 on a custom board (DSP) many years ago for a customer. Great fun. I believe the NMI was used for a watchdog timer.
Great break down of debouncing, the parallels with WAI Aria still has me a little stunned. Thanks.
Isn’t the JMP loop missing? Am I missing something? The code shouldn’t be working as far as I can tell...
He removes the JMP loop, but not the BEQ loop
There's a BEQ loop in the print method that jump to loop when the entire string has been printed.
i ordered the kit today! can't wait to build it
Prefect explaination, just began learning this to write my own Kernel in C, and this video definitely helped consolidate those principals.
Ah yes, brings back fond memories. Was writing code for device drivers on archaic system. This was a cpu that actually existed of a lot of ECL logic gates on three large boards. And it had a large number (I think it was 16) interrupt lines. So one I/O interrupt could actually interrupt a lower priority one. One particular instruction it had was for interrupt handling were it would 'decrement global interrupt counter'. It had a special side effect of blocking any/all interrupts for one additional instruction. So the standard way of exiting an interrupt handler was 'decrement the global interrupt counter' followed by 'return from interrupt'.
Anyway, yeah, takes me back. Interrupt handlers have to be careful to disable, perform some short task, then re-enable interrupts in order to not 'lose' events.
You Sir, are simply just great.
On the old Atari 8-bits, the NMI was used to invoke a Vertical Blank Interrupt, as well as so-called Display List Interrupts (aka Horiztonal Blank Interrupts). Timing for these (controlled by the ANTIC coprocessor) is important to keep the big boss happy (the TV or monitor the Atari's connected to).
Understanding some of this at a lower (CPU) level via your videos helps my ageing brain understand a bit more of the concepts and goings-on under the hood of my favorite computer platform. Thanks!
Very good video! You are bring to me a whole new perspective on computers.
You might know this, but for anyone who is not aware: in Vim to replicate a series of keystrokes press . So when Ben in the video (19:38) is creating the code for the nmi he manually select the text irq with visual mode (keystroke v), then deleted the text (keystroke x), then enter insert mode (keystroke i), next typed nmi, and then quit insert mode (keystroke esc) and proceedingly repeated the exact same set of actions two lines up for the bne jump comparison. Instead of repeating that complex series of keystroke he could have just pressed . (keystroke period) to repeat the last set of actions he did. So he could have typed . , l, and 3x (erase three characters) to insert nmi and remove the irq characters. While this seems like a nitpicky thing, I have found little tricks like this to greatly improve my time while using Vim.
I think that was c for change, not xi. . repeats one command, which could handily be e.g. cw for change to end of word.
@@0LoneTech That actually works better!
I only knew about ciw (change inner word) but would change the entier word underscores and all. Thank you!
@@Chris-on5bt You can combine all sorts of operators and motions, as well as apply counts to them. vimhelp.org/motion.txt.html#operator
Your channel is the only one I have the bell turned on for.
servicing interrupts about interrupts, nice!
Hello Ben!
You probably won't see this but thanks a lot for all the tutorials. your videos have helped me understand CPU's and now I'm even thinking about starting to develop processors for special cases, like very specific things as mining or certain calculations. I even got inspired enough to consider specializing in semiconductors in a few years after finishing highschool. I have had a lot of fun with your videos and can't thank you more than with this comment currently.
Thanks a lot!
Ben is the only creator on RUclips I slowdown playback to 0.75 for haha. Not that I can;t follow the flow but he's definitely processing a few Ghz ahead. Give it atry ;)
Great video as always. Even though I know most of this I love it and always get something new or useful from it. My only complaint would be the frequency ... We need videos more often 😃👍
In all honesty Ben. I think you have potential in this project to add expansion kits. If things go well for my projects I am thinking about providing kits myself and then calling each kit a chapter.Of course I am doing something very different then this. I am building a cluster computer for taking a keyboard controller and then controlling an entire synth cabinet. Your kit and the tutorials has been incredibly helpful in learning the 6502 Processor and getting started in ASM. Keep up the good work.
There's 555k subscribers atm. Good thing that I'm subscribed already, otherwise I'd be really conflicted about subscribing.
16bit counter... foreshadowing. At the edge of my seat.
Right? When he showed what happened when holding the counter down for a split second, I was like, "Ben, you sonofa..."
555K Subs, reminds me of the days when we played around with 555 timer on our 8-bit computer.
Excellent video as always, I wish the channel got a couple of million more subs, you deserve it!
7:20 - Just noted that you have no "overflow" protection on the string entered before the counter. Yes, the code doesn't allow for any decimal higher than 65536 to be stored (5 characters + \0 terminator) since the 16 bit value operated on cannot go any higher, but just wanted to point it out.
Probably not bad practice at all in 8-bit programming, but I'd personally spare some bytes for a "potential" change in code that allowed a 32-bit integer (10 + 1 chars) as decimal to be stored there, since you're not low on memory space. Maybe even space for a 64-bit one (20 chars + the \0 terminator).
You never fail to disappoint. Exceptional.
I learn so much from these videos, thank you.
The Sega Master System used a Z-80 CPU, but it has a similar arrangement with two different interrupts. Interesting, they actually did attach a button to the NMI line. That's how the pause button on the console works, it triggers the NMI handler.
I was thinking of that while watching the video. The Z-80 used a single-byte instruction called "restart" (RST) to handle interrupts. The first instruction fetch cycle following an interrupt acknowledgement was intended to be a "restart" instruction containing a built-in number indicating where in memory the CPU should restart execution from in order to service the interrupt. The restart numbers ranged from 0 to 7 and restart 0 was identical to a reset instruction since the Z-80 always started execution from memory location 0 following a reset. If I remember correctly, each number above zero was 8 bytes further into the memory, so RST 1 would start the interrupt handler at memory location 8, RST 2 would start the handler at location 16 (10H) and so on.
It's not as versatile in some ways as the 6502's way of handling interrupts, but programmers and hardware builders found ingenious ways around its limits. It's also more versatile in another way because you can incorporate RST instructions into your program (using interrupt numbers which you're never going to use as an actual interrupt) and use them as single-byte subroutine calls. In the days when memory was expensive and its space limited to 65536 bytes, every byte saved could be important. Some systems could use "bank switching" to expand the memory, especially on multi-user systems, but that was complicated and home systems almost never used it, at least until the IBM PC era.
Excellent video as always Ben!
Keyboard: Hey OS, I have a key! It's "A"!
Kernel: 0x20
This is what I call professional game development right here
game development for professional geeks? :)
Hello, wINTorld!
excited to watch as always
Ben’s videos are always the best. What I dont understand is who dislikes them ? Can they find better in all of youtube?
Thank you for your amazing and inspiring content!
I ordered this kit last night at like midnight, super pumped to get it in
Being able to manipulate technology is awesome, and there is something to be said about it, but the true genius shines through when someone can create their own.
Nice job!
Another great addition to the series.
this was exactly what i was looking into the last couple of days and wondering how to handle it into detail!
I absolutely love these videos! Part of my summer project has been doing something similar! I decided to emulate my own cpu like the 6502 using an arduino! It's been fun!
0:30 sorry my IMR is set.
Another great video, Your definitely skilled in logic and the ability to teach what you know😀 Hats off to you.