Nice, I have been using something similar myself for quite a while, but with quad rendering as the atom. Nice tip about the EBO not being worth it in these cases - I'll be using that info :)
I don't think that the EBO tip is entirely accurate. It makes sense if you have very little vertex data, but once you start adding texture coordinates, texture ids, colors, the vertex reuse saves data on quads for sure.
@@springogeek Yeah, I don't know where he got that from; it'd be nice if he clarified.
This is pretty cool! I'll consider using this if I do more 2D rendering in the future. It's a little too barebones for my 3D needs (I'm trying to support GLTF loading), but this is perfect for a 2D game.
Nice video, I've used completely custom "game engines" for game jams etc. before, but mostly for 2D stuff.
However, the way you handle textures is very OpenGL 2. In more modern OpenGL you can use GL_TEXTURE_2D_ARRAY: it is basically what you've manually implemented, except you give it 3 coordinates when sampling (u, v, and the "texture id", aka "layer" or "w"). It's core since OpenGL 3.0, and WebGL2 supports it too :)
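For reference, sampling from a texture array on the shader side looks roughly like this (a sketch with made-up varying names, not the video's actual shader; the array itself is allocated with glTexImage3D or glTexStorage3D on the GL_TEXTURE_2D_ARRAY target):

```glsl
#version 330 core
// One sampler2DArray replaces the array-of-sampler2D /
// per-fragment texture-id-switch approach.
uniform sampler2DArray u_textures;

in vec2 v_uv;
in float v_tex_index; // the "texture id" passed up from the vertex data

out vec4 o_color;

void main() {
    // The third coordinate selects the layer; layers are never
    // filtered/blended between each other.
    o_color = texture(u_textures, vec3(v_uv, v_tex_index));
}
```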
Thank you for taking the time to make these videos! I think I understand the concept of a memory arena - I like to think of it as a stage where the memory lives for a certain amount of time. What I don't understand is how do you load specific data into them? Do you just use void pointers everywhere and cast them whenever you need to use them? I think I am on the edge of fully understanding this but I'm not there yet.
Yeah, I implement them as linear allocators, holding an offset of where the next allocation will go with respect to a base pointer. If you haven't seen it already, I made a video about arenas that goes in depth into the semantics of it all (as you said, the allocations living a certain amount of time).
I think I'm grasping what you are doing. I want to develop a 3D renderer, and this seems extendable to 3D, but I'll need to study it to make sure. Do you think the concepts work and are transferable? It seems so to me, but I need to think on it further.
Thanks a lot for this video. It’s very helpful since I’m learning OpenGL right now. But I’m wondering why you aren’t using the GL types, like GLuint? What are those u32 types?
They are typedef'd to the same thing.
The u prefix stands for unsigned, the i prefix for signed, and the f prefix for floating point; the number is how many bits the type takes up. The only reason for using them is shorter type names. (Also, since I'm doing OpenGL function loading myself, I'd have to typedef the GL types myself, so using them isn't really a win.)
GLuint is just a 32-bit unsigned int anyway, so it's the same thing. I typedef'd it to "gl_id" in my codebase to shorten the name, but "u32" works too.
This is a good idea, but I think it should be written more portably. Something that might help is downloading a copy of MinGW or Cygwin. When I was still using Windows, they made it a lot easier to target POSIX as a platform, and both include a port of bash. For getting input, you might consider using SDL, and for simple GUI construction consider GTK. I think I'm going to see what it takes to port this by getting your tetris clone working.
Oh definitely. The main point of the repository was JUST the renderer c file. The rest of the scaffolding was just my codebase which I'm in the process of making more cross platform at the moment.
GLFW also works and gets you both the rendering context and input. I've been using it with CMake, and my code is portable across MinGW, MSVC, and GCC on Linux.
@@torphedo6286 Yep, another excellent choice.
things look to have improved since azurite :D
Improved AND simplified a lot indeed!
I like the colorscheme and font, names?
Font is Inconsolata; the colorscheme is Ryan Fleury's, which is modified from the theme Casey uses for Handmade Hero. I don't think it has a name.
quads do save space with indices
How would I use render textures?
What do you mean by render textures? I cover rendering simple textures in the video
@@voxelrifts like rendering a texture onto a texture
@@Shadowblitz16 That's something you would want to use a framebuffer for separately; it's not something the renderer itself is concerned with. You make a separate framebuffer with a texture attachment. To render to it, bind the framebuffer and use the renderer like normal (begin frame, push geometry, end frame), then swap back to the screen framebuffer and use the renderer again to draw the framebuffer's texture.
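Roughly, the setup described above looks like this in raw OpenGL (a sketch, not the repo's code: it assumes a live GL context plus a function loader, and width/height are placeholders):

```c
// One-time setup: a framebuffer with a color texture attachment.
GLuint fbo, color_tex;
glGenFramebuffers(1, &fbo);
glGenTextures(1, &color_tex);

glBindTexture(GL_TEXTURE_2D, color_tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, color_tex, 0);

// Per frame, pass 1: draw the scene into the texture.
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
// ... begin frame, push geometry, end frame ...

// Pass 2: back to the screen, draw a quad sampling color_tex.
glBindFramebuffer(GL_FRAMEBUFFER, 0);
// ... begin frame, push a quad textured with color_tex, end frame ...
```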