TL;DR at the bottom.

C being close to "the hardware" is a misconception. C has no idea what "the hardware" is supposed to be; it comes with its own set of assumptions, rules, and operations, collectively known as the C abstract machine. The C abstract machine is what is specified and standardised in the ISO and ANSI C standards. Verbatim, the C standard states: "The semantic descriptions in this International Standard describe the behavior of an abstract machine in which issues of optimization are irrelevant." Your C compiler's job is to take a valid standard C program, determine what abstract machine operations it performs, and then translate those to the given compilation target. Compilers assume your program is valid, i.e. that all operations it performs exist in the C abstract machine (otherwise known as not containing undefined behaviour), and optimise based on this assumption.

C inherited much of its worldview/memory model from B and BCPL; ultimately, most types are thin wrappers over "int", which is essentially the everything-type. If you look at old pre-standard C (otherwise known as K&R C) you'll find types are completely optional, just like in B and BCPL. The automatically assumed type was always "int", because that was almost certainly what you wanted; otherwise you'd only really want "char", for strings.

As long as your program is valid and the compiler doesn't contain any bugs, it is free to "mutilate" your program in any which way it pleases to get you the best runtime performance, as long as the observable outcome is the same. Assembly is not safe here either; the CPU itself does the same to help you out even further. Modern CPUs turn your code into something completely unrecognisable before actually executing anything. For example:

For compilers:
1. Redundant operations are removed
2. Load/store pairs are removed
3. Compilers are capable of somewhat primitive malloc elision and optimisations around dynamic memory allocation
4. Compilers can inline functions into each other, outline common code snippets into functions, roll or unroll loops, and autovectorise loops
5. Functions can be prevented from or allowed to cross page boundaries depending on how often they are called
6. Compilers can use profiling information to derive hot/cold or likely/unlikely functions or branches
7. Cold branches can be tossed to the bottom of functions to keep hot branches in instruction cache near the branch instruction
8. The "strength" of operations can be reduced by a compiler, transforming a more expensive operation into a cheaper one, such as expressing multiplication or division by constants through bit-shifts
9. Compilers can move code out of a loop if it produces the same value on each iteration
10. Compilers can use constant folding and eliminate redundant calculations
11. Compilers can replace expressions with their results evaluated at compile time
12. Compilers with target CPU information can reorder instructions to suit how that CPU's pipeline is designed
13. Compilers can swap or eliminate nested loops to reduce the branching the CPU has to keep track of and produce a more predictable memory access pattern

For CPUs:
1. Handling of instructions is pipelined: fetching, decoding, and executing are interleaved.
2. Instructions are actually macro-operations, and certain macro-operations that fit a pattern can be fused or transformed into something more efficient.
3. Macro-operations are converted to micro-operations, which actually direct the CPU to perform things.
4. Certain micro-op patterns are also detected and converted to more efficient ones.
5. Micro-ops are reordered and scheduled to better utilise hardware and expose inherent parallelism; modern CPUs can have reorder buffers as big as 512 micro-ops. There is duplication of hardware such as ALUs, often with different capabilities to save on silicon (not all ALUs are created equal and support the same operations), and micro-ops are dispatched to execution ports.
6. Small, tight loops are detected and stored in a special cache buffer so that the micro-ops for the loop are always quickly in reach.
7. Data dependencies and hazards are identified, and the CPU maps architectural registers to actual physical registers, which vastly outnumber the architectural ones, to better parallelise the workload. This is called register renaming, and the mapping is kept in a register alias table.
8. Branch history allows the CPU to detect branch patterns and make predictions, executing a branch speculatively before it is actually reached and prefetching all the data it requires. If that branch is taken, the speculatively executed instructions are "retired" (committed); otherwise the results are discarded and, through the branch order buffer, the CPU state is rolled back. Branches that have not yet been retired are known as "in-flight", and modern CPUs can support tens of them.
9. Memory accesses go through load buffers and store buffers so they can be merged and use the bus less.
10. Load/store pairs can be directly forwarded, so a datum that is stored and visibly loaded later can be transferred entirely in-CPU without the access ever touching memory at all.
11. Data dependencies can be resolved in-pipeline to bypass what would normally be a pipeline hazard; for example, an add instruction's result can be forwarded directly to a conditional jump that follows it and has just been decoded, essentially passing the data forward in time.
12. Although Intel is phasing this out because it is somewhat stall (wait) dependent, CPUs can perform simultaneous multithreading, issuing (allocating) instructions from more than one thread in one cycle, where the two threads execute on the same engine (ports, ALUs, etc.) but hold entirely independent state (register file, etc.).
13. Data is prefetched for predictable memory access patterns to make sure it is available.

Note: The mentioned techniques are actually really old, and the list is not comprehensive. IBM has been using them since the 60s in its large machines (a giant mainframe rack, probably liquid cooled; add the punchcard reader, numerous tape drives, teletypewriters, etc. and it is truly a room-scale setup), and they began coming to microprocessors in the 90s.

TL;DR:
Nobody has really been close to the hardware on high-performance CPUs since the early 90s, not even in assembly.
The compiler (if any) and the CPU are mutilating your code beyond recognition no matter what you use.
The final "program" that executes is not necessarily performing the same operations, at the same time, in the same order, on the same data, as what you wrote; it only has to appear that way. And when it doesn't appear that way and the difference is observable, you get things like Spectre and Meltdown.
You are probably right, and thank you for the very detailed comment. Though I still believe a person will learn slightly more, and be more inclined to continue to lower levels, if they learn C. To clarify: I'm unsure how literally stack and heap memory translate to locations in physical hardware, but I have a feeling that allocating memory with C will incline a person to head to "lower levels". This was definitely a quicker video; I wrote the script and edited it in a single day, nearly a single morning, so I didn't consider many of the things I said. This has me thinking of posting on places like Stack Overflow, and how the quickest way to get the most detailed and correct answer is posting a blatantly wrong answer! Not to say I am "blatantly wrong", but it's not exactly correct to say a person even well versed in writing C will have knowledge of the many hardware concepts you have mentioned.
While your points are true (except for mixing up concurrency with parallelism), C is still a low-level language and close to the hardware. The CPU is seen as one unit; its inner workings are not relevant to being low level. The CPU is meant as a unit accepting instructions and executing them. What you describe on the CPU side are rather execution side effects: the CPU is optimized to execute the instructions it gets efficiently, but the execution stays semantically the same. For compilers: you as a programmer decide what optimizations the compiler uses; if you want these kinds of optimizations, go for it, but they are not enforced like you claim. It is highly recommendable to activate compiler optimizations, but actively understanding code runtime for non-optimized builds is much more important in practice. But also for optimized code, it stays semantically the same (the logic stays the same, not the exact execution). You could add scheduling (context switches), virtual memory (MMU, TLB, cache lines) and cache invalidations on top. But it still wouldn't change that C is a low-level language; just the execution environment changed. C is used in many examples of how computers work, from virtual memory, scheduling, and compiler optimization to assembly programming and even CPU instruction execution. These things are mainly explained in correlation to C. In addition, C is often taught in a performance-minded fashion, from stack vs heap, to memory sharing through pointers, to call by value vs call by reference (pointer). And except for structs, unions and arrays, you are on your own defining data structures. You learn what arrays, structs and unions look like in memory.
It's also easier to write cache-aware algorithms, whereas higher-level languages make no memory-layout promises, or wrap a lot of data with metadata. The same reasoning arises for batching in I/O tasks, where knowing the exact memory layout is beneficial. In a lot of higher-level languages you have to use quirky language features or domain-specific concepts that the language designer found suitable. But I also want to add: while I like C a lot more than C++, I think C++ has a lot of nice features, from the STL, templates, and RAII to namespaces, constexpr, smart pointers and a lot more. But a lot of C++ codebases are simply written in an awful C++ style.
I carry a torch for C as well, but I would caution you against drawing these conclusions based on the textbook. There are many bad C textbooks as well. C++, like C, also has many books from more direct sources. In my experience, these better motivate the use of various C++ features. They also tend to keep up better with current features. Anyway, C is a joy for me. Thank you for sharing your project.
To be honest, this video was put together in a single morning, so I didn't rethink everything or go super deep. It was more of a short opinion video and a "here's the cool thing I'm building". Thanks for watching!
5:35 "for x in y" is the whole reason I used python for so long, and why I now use rust. C++ OOP style may have been what put me off systems languages for so long, if only I had learned good ol' C first.
@furry_onko C (pronounced /ˈsiː/, like the letter c) is a general-purpose programming language. It was created in the 1970s by Dennis Ritchie and remains very widely used and influential. By design, C's features cleanly reflect the capabilities of the targeted CPUs. It has found lasting use in operating systems code (especially in kernels), device drivers, and protocol stacks, but its use in application software has been decreasing. C is commonly used on computer architectures that range from the largest supercomputers to the smallest microcontrollers and embedded systems. A successor to the programming language B, C was originally developed at Bell Labs by Ritchie between 1972 and 1973 to construct utilities running on Unix. It was applied to re-implementing the kernel of the Unix operating system. During the 1980s, C gradually gained popularity. It has become one of the most widely used programming languages, with C compilers available for practically all modern computer architectures and operating systems. The book The C Programming Language, co-authored by the original language designer, served for many years as the de facto standard for the language. C has been standardized since 1989 by the American National Standards Institute (ANSI) and, subsequently, jointly by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC).
When I was writing C++ for a project, it felt the same. There are a bunch of different ways to do the same thing, and it made me waste time trying to find out "what is the best way". Eventually I gave up, just used the one which looked most intuitive to me, and then waited for others to criticize/fix it.
Awesome video, but I think "you know how the computer works if you know C" is very misleading. Sure, C is closer to the hardware. But it takes more than just knowing a language to know how a computer works. You'd need to dive into operating systems and the kernel, via something like xv6 etc., to understand how computers work, and learn assembly. Knowing C alone doesn't mean you know how a computer works, far from it.
Yeah, assembly definitely is useful to learn. I've heard people who know assembly well can write C code and just imagine what the assembly looks like underneath. You're right that it takes more than just a language, but my point is that C is more useful for learning computers than any other language. Of course you would have to learn it in addition to the kernel, assembly, etc. to be well rounded.
@@joebulfer It's not about assembly either; those are "just" machine instructions. If you want to know how a modern computer works, you want to look into virtual memory: how it works, how a virtual address is translated into a physical one, realizing that in such an environment all memory accesses are indirect; how the TLB speeds up that indirect access (the virtual address must be resolved into a physical one); how memory is "allocated" in blocks called "pages" (usually 4096 bytes); how access within these pages is split into cache lines; how the cache works; how cache associativity works; and why the memory layout of your data is these days probably more important than the code that gets translated into instructions. A lot of this stuff is the same old story regardless of asm, C or C++. Once you know what's going on, the language is less important than your intent; the programming language is just a means to express the intent. Before you know how the machine REALLY works, you can't have intent; you treat the language as an abstract state machine and prefer one over another for no other reason than taste.
3:09 your dynamic array is good, but it can be better: allocations are expensive, and you should avoid doing lots of them frequently, so instead of reallocating for each new element, try allocating multiple slots in advance when there aren't enough, and when there are, simply use the memory already allocated. This is what std::vector in C++ does usually, it has a size and a capacity, with size being how many active elements are in the vector, and capacity being how many slots for elements are in that vector, which is usually more than size
So just have an array of Blocks or EnvItems with a size of MAX_BLOCKS or something so space is already allocated, initially zero them out, then add each time so I don't need to realloc on every placing of a block? Or maybe every 50 blocks placed, reallocate the next 50 or something.
@@joebulfer Keep both "count" and "capacity" integers in your struct. The "capacity" is the number of items you have allocated space for; the "count" is how many items there actually are. When count reaches capacity, you double the capacity and reallocate. This struct is called a "vector" in most languages.
@joebulfer It's called an ArrayList, and the algorithm for it is pretty standard. First, decide what the initial array's size is going to be. You create it with create_array(int n), where n is the number of initial elements. Your array struct holds something like { int allocated_amount; int data_length; float factor; /* or hardcode it */ void *data; }. When you add an element, it increases data_length; if data_length > allocated_amount, you reallocate with allocated_amount *= factor (1.5 and 2 are common). When you remove elements, if data_length < allocated_amount / factor^2, you reallocate with allocated_amount /= factor. This way you get O(n) total memory and allocation time on average. It's a 101 algorithm, first-year stuff, often first semester.
"There's not too much to C" - ooft, I have the first edition of the K&R book on C, and it's even fewer pages than your copy: but as you dive in there's most definitely a lot more than meets the eye.
"There's not much to C" can also become a burden, often around the point when you should be using a hash map to solve a problem. That said, I've found the "glib" library to be a nice addition to C, as it modernizes the standard library and provides easy-to-use data structures that are still simple enough for beginners to understand.
"You know how the computer works if you know C" No. You know how PROGRAMS work when you know C. If you want to get low into the hardware, it is best to know ASM.
C is ASM with lots of sugar. You can 100% act as a very horribly inefficient compiler in your head: just say "fuck registers" and keep every variable change as load r1, change r1, store r1, and only use as many registers as you need for the next function call or assembly instruction.
I very much agree. When I have to code in usermode I usually use C++, because it is faster for writing programs that do not require an almost 1:1 translation from code to asm, but there's no way I would use C++ for a kernel-mode driver... C is way better than C++.
@joebulfer I understand, and I think that is wise actually. The good thing about C++ is you can use a very limited subset of its features, keeping it like C (unless you're writing for someone else's project lol, they will surely choose the worst features to use).
Not a skill issue. A lot of well known devs dislike C++ and OOP in general. Linus Torvalds is probably the most famous C++ hater. For me C++ just looks awful and it's hard to understand at a glance, unlike C.
Nah, I love std::pmr. It makes C++ so much better in my opinion. Judging by your comments in the video, I take it you're not all that experienced with programming. I may be hugely misjudging you, sorry if I do, but the use cases for new and std::vector are vastly different, and you certainly don't have to be a C++ expert to know why. The "new" keyword, in all languages that use it, allocates some piece of data or object (if the language is OOP). A vector manages a collection of data, dynamically. It resizes the allocation when it needs to (because you're pushing a value onto the vector and it's full, for instance). So `new` is a much smaller building block than a type like vector. The same principles apply to C. It's just less good than C++. I'm fairly proficient in both, and I do prefer some C conventions, certainly, over OOP. But that's about where it stops. Rust is a better version of C, with some C++ concepts. Unfortunately there are other things about Rust I don't like. Zig... just don't get me started. It's why I stick with C/C++.
Yeah, I am not super experienced in C or C++. I just want the full malloc/realloc experience, to teach myself that first, and then I will learn why C++'s std::vector, dynamic arrays, and Rust were created. Lots of people recommend learning C because then you know the pitfalls of the language.
I'd actually love to hear your thoughts on rust and zig. For what it's worth I also come from C/C++ and I think they're cool (though I haven't done anything meaningful with either). What do you dislike about them?
I also hope the video doesn't get copyright claimed, but you should know that while the composition may be old, if the specific (much more recent) recording is not public domain or otherwise licensed to you, they would have a legal right to take you down.
It says "Copyright", but then says "Copyright-protected content found. The owner allows the content to be used on YouTube." and also "No Impact". So I think I'm fine.
Great video and a valuable testimony to C's simplicity! Maybe another point to consider: If you stretch yourself a little bit as a more or less average (meaning: non-super-genius) programmer, you can come up with your own C-compiler. This is not true for C++ (or any other more "advanced" language), where you have to be a genius to even understand the language, much less write a compiler for it. Nowadays, many people don't care about how compilers are written, but this creates a dependency on what is given to you upstream that I find uncomfortable.
I think you are mistaking using a low-level standard library for knowing how the computer works. If you are not specifically doing low-level things, which you can do in pretty much any other compiled language, you are still pretty far away from the actual hardware and OS your programs are running on.
Yeah, true. My point at the end should have been "people that are really good at C can move to any other language much more easily", because every other language is sort of an abstraction over it, or literally built in it, like Python. This video was not very well thought out and was put together in a short time; thank you for watching and for the comment, though.
After working in assembly, I can pretty comfortably say about 80% of C is just pretty assembly, and assembly is just human-readable machine code. C is a "high-level" language, but it's easily on the floor of high-level languages. Not that any of that really matters, but working often with C does require you to learn some things about computing/computers I never would've learned with JavaScript.
@@momparty Ha ha ha… working with assembly, and C is just higher-level asm? C has an absolutely insane level of abstraction over assembly, and features you couldn't even dream of even with a macro assembler. Assembly is not human-readable machine code; so much is done for you by the assembler. Just look at how many pseudo-ops most assemblers provide. We are not in the 1980s, when assemblers had no features apart from raw op-to-hex translation. Same with C: optimisations, macros and the standard library are extremely abstracted.
@NotBonzo-dll Interesting! I'd been told asm is really close to machine code by a lot of people, but I guess they were mistaken. I'm really interested in hearing more about those pseudo-ops you mentioned.
@@momparty Assembler directives / pseudo-ops are pieces of assembly that tell the assembler the context of the program, to help it correctly emit machine code. This involves things like the raw offset (commonly called org in assemblers) if assembling into a non-relocatable format, or section information if assembling into relocatable ones (which is even more abstraction, and work for the linker, yay). Other pseudo-ops are used for linkage specifications, like linker exports and imports (commonly called global and extern for symbols), and architecture state types (like for x86, where you can specify whether to emit 16-, 32- or 64-bit code, since some instruction encodings differ).
I don't even understand what it is, tbh. I tried to understand, but if (!Function(a, b, c)) { /* try to handle, or f* it and return */ } is much more understandable for me.
You don't. You are careful and try to prevent them with things like guard conditionals checking if pointers are null before dereferencing them or making sure you free() unused dynamically allocated memory
OK, but how can I create all the object-oriented stuff without Java keywords like class and new? The legendary C programming book does not teach us game dev through C 😢
Lots of ways. My preferred one is the "glib" and "gobject" libraries. They're what's behind GTK. Still ends up as a bit of a mess though, and there's so much boilerplate. I use Rust these days.
I like C better than C++ as well, but I think book-size comparison is not a good way to judge complexity between the two. Most C++ books are largely filled with the C++ standard library, which includes a lot of stuff you'd need to write by hand or pull from other libraries in C: data structures, containers, algorithms. C books covering these topics easily make up the missing pages. Ultimately I think C just has a simpler syntax and less ambiguity in what is happening.
Just a heads up: while that song likely is public domain, its performance/recording might not be. And a piece of general advice: I recommend you stop criticizing things you clearly have very little experience with; it's a sign of arrogance. While I agree that C is better than C++, your arguments aren't great, and your code has several beginner mistakes.
Or a memory pool. These days I don't even do allocations except through a pool; it solves soooo many problems: fragmentation, cache performance, error management.
TL;DR at the bottom.
C being close to "the hardware" is a misconception. C has no idea what "the hardware" is supposed to be, and it comes with its own set of assumptions and it has its own rules and set of operations, collectively known as the C abstract machine. The C abstract machine is what is specified and standardised in the ISO or ANSI C standards. Verbatim, the C standard states "The semantic descriptions in this International Standard describe the behavior of an abstract machine in which issues of optimization are irrelevant." Your C compiler's job is to take a valid standard C program, determine what abstract machine operations it performs, and then translate that to the given compilation target. Compilers make the assumption that your program is valid, i.e. all operations it performs exist in the C abstract machine, otherwise known as not containing undefined behaviour, and optimise them based on this assumption.
C inherited much of its worldview/memory model from B or BCPL, ultimately, most types are thin wrappers over "int" which is the everything-type, essentially (used t. If you look at old pre-standard C (otherwise known as K&R C) you'll find types are completely optional, just like B and BCPL. The automatically assumed type as always "int" because that was almost certainly what you wanted. You'd only really want "char" for strings otherwise.
As long as your program is valid and the compiler doesn't contain any bugs, it is free to "mutilate" your program in any-which-way it pleases to get you the best runtime performance, as long as (observably) the outcome is the same. Assembly also is not safe here; the CPU itself actually does the same to help you out even further. Modern CPUs turn your code into something completely unrecognisable before actually executing anything, for example:
For compilers:
1. Redundant operations are removed
2. Load/store pairs are removed
3: Compilers are capable of somewhat primitive malloc elision and optimisations around dynamic memory allocation
4. Compilers can inline functions into each other, outline common code snippets into functions, roll or unroll loops, autovectorise loops
5. Functions can be prevented or allowed to cross page boundaries depending on how often they are called
6. Compilers can use profiling information to derive hot/cold or likely/unlikely functions or branches
7. Cold branches can be tossed to the bottom of functions to keep hot branches in instruction cache near the branch instruction
8. The "strength" of operations can be reduced by a compiler, transforming a more expensive operation to a cheaper one, such as expressing multiplication or division through bit-shifts with immediate values
9. Compilers can move code out of a loop if it results in the same value each time
10. Compilers can use constant folding and eliminate redundant calculations
11. Compilers can replace expressions with the result evaluated at compile-time
12. Compilers with target CPU information can reorder instructions to tailor to how the pipeline of said CPU is designed
13. Compilers can swap or eliminate nested loops to reduce branching for the CPU to keep track of and result in a more predictable memory access pattern
For CPUs:
1. Handling of instructions is pipelined, fetching, decoding, and executing are interleaved.
2. Instructions are actually macro-operations, and certain macro-operations that fit a pattern can be fused or transformed to something more efficient.
3. Macro operations are converted to micro operations, which actually direct the CPU to perform things.
4. Certain micro-op patterns are also detected and converted to more efficient ones.
5. Micro-ops are reordered and scheduled to better utilise hardware and expose inherent parallelism, modern CPUs can have reorder buffers as big as 512 micro-ops. There is duplication of hardware such as ALUs, often with different capabilities to save on silicon (not all ALUs are created equal and support the same operations) and micro-ops are dispatched to execution ports.
6. Small and tight loops are detected and stored in a special cache buffer so that the micro ops for said loop are always quickly in reach.
7. Data dependencies and hazards are identified and the CPU maps architectural registers to actually physical registers which vastly outnumber the architectural ones to better parallelise the workload. This is called register renaming and the mapping is kept in a register alias table.
8. Branch history allows the CPU to detect branch patterns and make predictions to execute a branch speculatively before it is actually reached and to prefetch all the data it requires. IF that branch is taken then the instructions executed are "retired" (committed), else, results are discarded, and through the branch order buffer the CPU state is rolled back. Branches that still have not been retired are known as "in-flight" and modern CPUs can support tens of them.
9. Accesses go through load buffers and store buffers to merge them and use less bus.
10. Load/store pairs can just be directly forwarded, so a datum that is stored to memory or a register that another visibly loads later can be directly transfered in-CPU without touching any registers or memory at all.
11. Data dependencies can be resolved in-pipeline to bypass what would normally be a pipeline hazard, for example an add instruction's result can be forwarded directly to a conditional jump that follows it that has just been decoded, essentially passing the data forward in time.
12. Although Intel is phasing this out because it's somewhat stall (wait) dependent, CPUs can perform simultaneous multi threading, issuing (allocating) instructions from more than 1 thread in 1 cycle where the 2 threads execute on the same engine (ports, ALUs, etc) but hold entirely independent state (register file, etc).
13. Data is prefeteched for predictable memory access patterns to make sure data are available
Note: The mentioned techniques are actually really old, and the list is not comprehensive. IBM has been using these since the 60s in their large machines (giant mainframe rack, probably liquid cooled, if you add the punchcard reader, numerous tape drives, teletypewriters, etc it is truly a room-scale setup), and they began coming to microprocessors in the 90s.
TL;DR:
Nobody is really close to the hardware on modern high-performance CPUs since the early 90s, not even assembly.
The compiler (if any) and the CPU are mutilating your code beyond recognition no matter what you use.
The final "program" that is executing is not necessarily performing the same operations, at the same time, in the same order, on the same data, as what you wrote, it only has to appear that way, and when it doesn't appear that way and is observable you get things like Spectre and Meltdown.
You are probably right and thank you for the very detailed comment. Though I still believe a person will learn slightly more and be more inclined to continue to lower levels if they learn C.
To clarify: I'm unsure how much the stack and heap literally translate to locations in physical hardware, but I just have a feeling that allocating memory with C will incline a person to head to "lower levels".
This was definitely a quicker video; I wrote the script and edited it in a single day, nearly a single morning, so I didn't consider many of the things I said.
This just has me thinking of posting on places like Stack Overflow: the quickest way to get the most detailed and correct answer is posting a blatantly wrong answer!
Not to say I am "blatantly wrong" but not exactly correct to say a person even well versed in writing C will have knowledge of the many hardware concepts you have mentioned.
While your points are true (except for mixing up concurrency with parallelism),
C is still a low-level language and close to the hardware. The CPU is seen as one unit; its inner workings are not relevant to being low level.
The CPU is meant as a unit accepting instructions and executing them.
What you describe in the CPU are rather execution side effects; the CPU is optimized to execute the instructions it gets efficiently, but the execution stays semantically the same.
For compilers: you as a programmer decide which optimizations the compiler uses. If you want these kinds of optimizations, go for it, but they are not enforced like you claim.
It is highly recommended to activate compiler optimizations, but actively understanding code runtime for non-optimized builds is much more important in practice.
But also for optimized code, it stays semantically the same (the logic stays the same, not the exact execution).
You could add scheduling (context switches), virtual memory (MMU,TLB, Cache-lines) and cache invalidations on top.
But it still wouldn't change that C is a low-level language; just the execution environment changed.
C is used in many examples of how computers work, from virtual memory, scheduling, and compiler optimization to assembly programming and even CPU instruction execution.
These things are mainly explained in relation to C.
In addition, C is often taught in a performance-minded fashion, from stack vs heap, to memory sharing through pointers, to call by value vs call by reference (pointer).
And except for structs, unions, and arrays, you are on your own when defining data structures.
You learn how arrays, structs and unions would look like in memory.
It's also easier to write cache-aware algorithms, whereas higher-level languages make no memory-layout promises and wrap a lot of data in metadata.
The same reasoning arises for batching in I/O tasks, where knowing the exact memory layout is beneficial.
In a lot of higher-level languages you have to use quirky language features or domain-specific concepts that the language designer found suitable.
But I also wanna add, while I like C a lot more than C++, I think C++ has a lot of nice features, from the STL, templates, and RAII to namespaces, constexpr, smart pointers, and a lot more.
But a lot of C++ codebases are simply written in an awful C++ style.
I carry a torch for C as well, but I would caution you from these conclusions based on the textbook. There are many bad C text books as well.
C++, like C, also has many books from more direct sources. In my experience, these better motivate the use of various C++ features. They also tend to keep up better with current features.
Anyways, C is a joy for me. Thank you for sharing your project.
To be honest this video was put together in a single morning so I didn't rethink everything or go super deep. It was more of a short opinion video and here's my cool thing I'm building.
Thanks for watching!
5:35 "for x in y" is the whole reason I used python for so long, and why I now use rust. C++ OOP style may have been what put me off systems languages for so long, if only I had learned good ol' C first.
C is the cutest syntax >~
tf does that even mean? those are literally characters on a screen how are they cute? also replace is with has.
@Hramzhuk c is from "cute", dummy :3
Have you seen Python?
@furry_onko C (pronounced /ˈsiː/ - like the letter c)[6] is a general-purpose programming language. It was created in the 1970s by Dennis Ritchie and remains very widely used and influential. By design, C's features cleanly reflect the capabilities of the targeted CPUs. It has found lasting use in operating systems code (especially in kernels[7]), device drivers, and protocol stacks, but its use in application software has been decreasing.[8] C is commonly used on computer architectures that range from the largest supercomputers to the smallest microcontrollers and embedded systems.
A successor to the programming language B, C was originally developed at Bell Labs by Ritchie between 1972 and 1973 to construct utilities running on Unix. It was applied to re-implementing the kernel of the Unix operating system.[9] During the 1980s, C gradually gained popularity. It has become one of the most widely used programming languages,[10][11] with C compilers available for practically all modern computer architectures and operating systems. The book The C Programming Language, co-authored by the original language designer, served for many years as the de facto standard for the language.[12][1] C has been standardized since 1989 by the American National Standards Institute (ANSI) and, subsequently, jointly by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC).
@furry_onko so what about you, furry lad, what language do you use to code?
When I was writing C++ for a project, it felt the same. There are a bunch of different ways to do the same thing, and it made me waste time trying to find out "what is the best way". Eventually I gave up, just used the one which looked most intuitive to me, and then waited for others to criticize/fix it.
Awesome video, but I think "you know how the computer works if you know C" is very misleading. Sure, C is closer to the hardware. But it takes more than just knowing a language, to know how a computer works.
You'd need to dive into operating systems and the kernel, via something like xv6 etc to understand how computers work, and learn assembly.
Knowing C alone, doesn't mean you know how a computer works, far from it.
Yeah assembly definitely is useful to learn. I've heard people that know assembly well can write C code and just imagine what the assembly looks like underneath.
You're right it takes more than just a language, but my point is that C is more useful for learning computers than any other language. Of course you would have to learn it in addition to kernel, assembly, etc. to be well rounded.
@@joebulfer It's not about assembly either; those are "just" machine instructions. If you want to know how a modern computer works, you want to look into virtual memory: how it works, how a virtual memory address is translated into a physical address (realizing that in such an environment all memory accesses are indirect), how the TLB speeds up that indirect access (a virtual address must be resolved into a physical one), how memory is "allocated" in blocks (usually 4096 bytes), how access to these blocks, called "pages", is split into cache lines, how cache works, how cache associativity works, and why the memory layout of your data is these days probably more important than the code that is translated into instructions. A lot of this stuff is the same old story regardless of asm, C, or C++. Once you know what's going on, the language is less important than your intent; the programming language is just a means to express the intent. Before you know how the machine REALLY works you can't have intent; you treat the language as an abstract state machine and prefer one over another for no reason other than taste.
You know how CPUs generally work and how RAM is allocated better than if you didn't write C
@rusi6219 Agreed, though assembly and computer architecture is ideal to learn, learning C is a step in the right direction.
3:09 your dynamic array is good, but it can be better: allocations are expensive, and you should avoid doing lots of them frequently, so instead of reallocating for each new element, try allocating multiple slots in advance when there aren't enough, and when there are, simply use the memory already allocated. This is what std::vector in C++ does usually, it has a size and a capacity, with size being how many active elements are in the vector, and capacity being how many slots for elements are in that vector, which is usually more than size
So just have an array of Blocks or EnvItems with a size of MAX_BLOCKS or something so space is already allocated, initially zero them out, then add each time so I don't need to realloc on every placing of a block?
Or maybe every 50 blocks placed, reallocate the next 50 or something.
@@joebulfer generally expanding arrays expand by 1.5x or 2x every time.
@@joebulfer keep both "count" and "capacity" integers in your struct. The "capacity" is the number of items you have allocated space for; the "count" is how many items there actually are. When count reaches capacity, you double the capacity and reallocate. This struct is called a "vector" in most languages.
Bump allocators and CoW semantics is good enough for short-lived objects.
@joebulfer
It's called an ArrayList and the algorithm for it is pretty standard.
First, decide what the initial array's size is going to be.
You create an array with
create_array(int n);
n is the number of initial elements.
Your array has
{
    int allocated_amount;
    int data_length;
    float factor; // or hardcode it
    float *data;
}
Then, when you add an element, it increases data_length.
If data_length > allocated_amount,
you reallocate with
allocated_amount *= factor;
1.5 and 2 are common factors.
Then, when you remove an element: if data_length < allocated_amount / factor^2,
you reallocate with allocated_amount /= factor.
This way, you'll have O(n) memory and amortized O(1) allocation time per operation.
It's a 101 algorithm. First year stuff. Often first semester.
"If you're not allocating memory, you're not living!"
"There's not too much to C" - ooft, I have the first ed of K&R book on C, and it's even less pages than your copy: but as you dive in there's most definitely a lot more than meets the eye.
"There's not much to C" can also become a burden, often around the point when you should be using a hash map to solve a problem.
That said, I've found the "glib" library to be a nice addition to C, as it modernizes the standard library and provides easy to use data structures that are still simple enough to understand for beginners.
"You know how the computer works if you know C"
No. You know how PROGRAMS work when you know C. If you want to get low into the hardware, it is best to know ASM.
C is ASM with lots of sugar.
You can 100% write in your head a very horribly inefficient compiler.
If you just say fuck registers and compile every variable change as
load r1
change r1
store r1
and only use as many registers as you need for the next function call or assembly instruction.
I very much agree. When I have to code in usermode I usually use C++, because it is faster for writing programs that do not require a direct, almost 1:1 translation from code to asm, but there's no way I would use C++ for a kernel-mode driver... C is way better than C++.
great video and i see your point, but if you are overwhelmed by std::vector vs std::array vs new/delete, then that is a skill issue😅
Could be skill issue yes, eventually I will probably turn to C++, I just want to full Malloc/realloc experience first
@joebulfer I understand, I think that is wise actually. The good thing about C++ is you can use a very limited subset of its features, keeping it like C (unless you're writing for someone else's project lol, they will surely choose the worst features to use).
Not a skill issue.
A lot of well known devs dislike C++ and OOP in general. Linus Torvalds is probably the most famous C++ hater.
For me C++ just looks awful and it's hard to understand at a glance, unlike C.
@@JonDoe-uq1mk then don't use the weird features. (Again, I'm talking about a project you control.)
The C++ object model is a hard paradigm to get into. I'd argue Rust's borrow checker and lifetime semantics are way easier to understand than that.
Nah, I love std::pmr. It makes C++ so much better in my opinion. Judging by your comments in the video, I take it you're not all that experienced with programming; I may be hugely misjudging you, sorry if I do, but the use cases for new and std::vector are vastly different, and you certainly don't have to be a C++ expert to know why. The "new" keyword, in all languages that use it, allocates some piece of data or object (if the language is OOP). A vector manages a collection of data, dynamically. It resizes the allocation when it needs to (because you're pushing a value onto the vector and it's full, for instance). So `new` is a much smaller building block than a type like vector.
The same principles apply to C. It's just less good than C++. I'm fairly proficient in both and I do prefer some C conventions, certainly, over OOP. But that's about where it stops.
Rust is a better version of C, with some C++ concepts. Unfortunately there's other things about Rust I don't like. Zig... just don't get me started.
It's why I stick with C/C++.
Yeah, I am not super experienced in C or C++. I just want the full malloc/realloc experience to teach myself that first, and then I will learn why C++'s std::vector, dynamic arrays, and Rust were created. Lots of people recommend learning C because then you know the pitfalls of the language.
I'd actually love to hear your thoughts on rust and zig.
For what it's worth I also come from C/C++ and I think they're cool (though I haven't done anything meaningful with either).
What do you dislike about them?
I also hope the video doesn't get copyright claimed, but you should know that while the composition may be old, if the specific (much more recent) recording is not public domain or otherwise licensed to you, they would have a legal right to take you down.
It says "Copyright" but then says "Copyright-protected content found. The owner allows the content to be used on YouTube." and also "No Impact". So I think I'm fine.
The book he shows is "C++ Programming: Program Design Including Data Structures", 4th edition, 2008. Maybe not a good book for C++.
Great video and a valuable testimony to C's simplicity! Maybe another point to consider: If you stretch yourself a little bit as a more or less average (meaning: non-super-genius) programmer, you can come up with your own C-compiler. This is not true for C++ (or any other more "advanced" language), where you have to be a genius to even understand the language, much less write a compiler for it. Nowadays, many people don't care about how compilers are written, but this creates a dependency on what is given to you upstream that I find uncomfortable.
I think you are mistaking using a low-level standard library for knowing how the computer works. If you are not specifically doing low-level things, which you can do in pretty much any other compiled language, you are still pretty far away from the actual hardware and OS your programs are running on.
Yeah, true. My point at the end should have been "People that are really good at C can move to any other language much more easily", because every other language is sort of an abstraction over it, or literally built in it, like Python.
This video was not very well thought out and put together in a short time, thank you for watching though and the comment.
After working in assembly, I can pretty comfortably say about 80% of C is just pretty assembly, and assembly is just human-readable machine code. C is a "high-level" language, but it's easily on the floor of high-level languages. Not that any of that really matters, but working often with C does require you to learn some things about computing/computers I never would've learned with JavaScript.
@@momparty ha ha ha… Working with assembly, and C is just higher-level asm? C has an absolutely insane level of abstraction over assembly, and features you couldn't even dream of even with a macro assembler. Assembly is not human-readable machine code; so much is done for you by the assembler, just look at how many pseudo-ops most assemblers provide. We are not in the 1980s, when assemblers had no features apart from raw op-to-hex translation. Same with C: optimisations, macros, and the standard library are extremely abstracted.
@NotBonzo-dll interesting! I'd been told asm is really close to machine code by a lot of people but I guess they were mistaken- I'm really interested in hearing more about those pseudo-ops you mentioned
@@momparty Assembler directives / pseudo-ops are pieces of assembly source that tell the assembler the context of the program, to help it correctly emit machine code. This involves things like the raw offset (commonly called org in assemblers) if assembling into a non-relocatable format, or section information if assembling into a relocatable one (which is even more abstraction, and work for the linker, yay). Other pseudo-ops are used for linkage specifications, like linker exports and imports (commonly called global and extern for symbols), and architecture state types (for x86, you can specify whether to emit 16-, 32-, or 64-bit code; some instruction encodings differ).
But, if you are an expert in C++, you are also an expert in the computer.
how do you handle exceptions in pure C?
I don't even understand what it is, tbh.
I tried to understand, but if( !Function(a,b,c) ) { //try to handle or f* it and return } is much more understandable for me.
Errors as values, as God intended.
You don't. You are careful and try to prevent them with things like guard conditionals checking if pointers are null before dereferencing them or making sure you free() unused dynamically allocated memory
Ok but how can i create all the object oriented stuff without java keywords like class and new? The legendary C programming book does not teach us game dev through C 😢
Lots of ways. My preferred one is the "glib" and "gobject" libraries. They're what's behind GTK. Still ends up as a bit of a mess though, and there's so much boilerplate.
I use Rust these days.
@llamatronian101 Thanks i'll look that up.
lol. I actually really like using malloc more than allocating statically, just cuz I can free manually :))
But doing it in C and not learning what C++ provides, makes you reinvent the wheel all over again.
I like C better than C++ as well, but I think book size comparison is not a good way to judge complexity between the two. Most C++ books are mostly filled with the C++ standard library which includes a lot of stuff you'd need to write by hand or use other libraries for. Examples: Data structures, containers, algorithms. C books covering these topics easily make up the missing pages. Ultimately I think C just has a simpler syntax and less ambiguity in what is happening.
just a heads up. while that song likely is public domain, its performance/recording might not be.
And, a piece of general advice: I recommend you stop criticizing things you clearly have very little experience with; it's a sign of arrogance. While I agree that C is better than C++, your arguments aren't great, and your code has several beginner mistakes.
what a fancy outro
holy banger at the end
try to compile c with zig cc
Using malloc, realloc etc. isn't bad. Have you ever tried to write your own malloc? 😁
No, but that would be pretty cool to do though.
Or a memory pool.
These days I don't even do allocations except through a pool, it solves soooo many problems...
fragmentation, cache performance, error management
Zig is cool too
I don't know much about it, not very widely adopted I believe.
Same for me😢
Hey there! You have a perfectly 32 comments sorry for ruining it :(
(you can delete when you read this or keep it when you reach 64 so it evens out!)
try C3 :)
lol
XD