*My takeaways:*
1. Why study compiler optimizations? 4:05
2. Simple model of the compiler 8:30
3. Compiler report 9:33
4. Overview of compiler optimizations 13:20
5. Example compiler optimizations 26:00
- Optimizing a scalar 30:20
- Optimizing a structure 33:56
- Optimizing function calls 43:10
- Optimizing loops 56:05
6. What compilers can and cannot do 1:04:25
7. Diagnosing failures: case studies 1:06:30
@25:43, the reason for choosing 38 is that we're dealing with a 32-bit integer in a 64-bit register, so there are an extra 32 bits to play with. Since 71 > 64 = 2^6, magic_num = 2^38/71 is definitely less than 2^32, so num_32 * magic_num fits in 64 bits without overflow, and shifting the product right by 38 recovers num_32 / 71.
Thank you for the explanation. 2^38/71 + 1 = 3871519817, which is the number that appears in the slides, so the +1 rounding is pre-calculated.
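A quick check of that identity, as a minimal C sketch (the loop only samples the 32-bit range rather than exhausting it; the constant is the one from the slide):

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        /* magic = floor(2^38 / 71) + 1 = 3871519817, which fits in 32 bits */
        const uint64_t magic = (((uint64_t)1 << 38) / 71) + 1;
        /* n * magic < 2^32 * 2^32 = 2^64, so the 64-bit product never overflows */
        for (uint64_t n = 0; n <= UINT32_MAX; n += 1000003)  /* sample the range */
            if ((n * magic) >> 38 != n / 71) {
                printf("mismatch at n = %llu\n", (unsigned long long)n);
                return 1;
            }
        printf("magic = %llu; all sampled values match n / 71\n",
               (unsigned long long)magic);
        return 0;
    }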
Those times when the compiler de-optimizes sections of your code or introduces elusive "bugs", similar to what happened to me when MFC was still in development and I was programming for Win NT. The good ol' times.
what a great teacher! thank you!
Ahem, the problem at 1:06:00 is very much optimizable by looking at a computation graph and seeing that the two sequences for F12 and -F21 are algebraically equivalent, without needing to know physics.
The slide says it's "unlikely" the compiler will do so, so maybe it can in some cases. Although two values that come out the same were not necessarily calculated the same way, so I guess the compiler has to be wary.
@bibanez135 Please tell me in detail what you mean. Give at least one example.
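Roughly this situation (a minimal sketch with illustrative names, not the lecture's exact code): F21 is algebraically -F12, but since each force is recomputed from its own subtractions, the compiler would have to prove that force(p1, p2, ...) equals -force(p2, p1, ...) across the sign flips to share the work, which is the step the slide calls unlikely.

    #include <math.h>
    #include <stdio.h>

    typedef struct { double x, y; } vec;

    /* gravitational-style force on body 1 from body 2 */
    static vec force(vec p1, vec p2, double m1, double m2) {
        double dx = p2.x - p1.x, dy = p2.y - p1.y;
        double r2 = dx * dx + dy * dy;
        double inv_r3 = 1.0 / (r2 * sqrt(r2));
        vec f = { m1 * m2 * dx * inv_r3, m1 * m2 * dy * inv_r3 };
        return f;
    }

    int main(void) {
        vec p1 = {0.0, 0.0}, p2 = {3.0, 4.0};
        vec f12 = force(p1, p2, 2.0, 5.0);  /* force on body 1 from body 2 */
        vec f21 = force(p2, p1, 5.0, 2.0);  /* recomputed, not reused as -f12 */
        printf("f12 = (%g, %g), f21 = (%g, %g)\n", f12.x, f12.y, f21.x, f21.y);
        return 0;
    }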
"What about GCC? I have a 20 thousand line DIVINE intellect compiler that operates Just In Time AND ahead of time."
What frustrates me is that compilers don't take advantage of advanced instruction intrinsics automatically enough -- most of the modern SIMD/AVX instructions go unused.
You have to construct the SIMD usage manually most of the time, which makes it a fool's errand unless you're compiling code for your own system. If you're going to distribute compiled software, you have to dumb it down to a neutral architecture that doesn't risk using unsupported opcodes on the target system.
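For illustration, here's what the manual construction looks like (a minimal sketch, assuming AVX is available and n is a multiple of 8). The scalar loop is what you'd hope the compiler vectorizes on its own; the intrinsics version is what you end up writing, and building it requires -mavx, which commits the binary to CPUs that support those opcodes:

    #include <immintrin.h>   /* AVX intrinsics; compile with -mavx */

    /* what we hope the compiler vectorizes by itself */
    void add_scalar(const float *a, const float *b, float *out, int n) {
        for (int i = 0; i < n; i++)
            out[i] = a[i] + b[i];
    }

    /* the hand-rolled version, 8 floats per iteration */
    void add_avx(const float *a, const float *b, float *out, int n) {
        for (int i = 0; i < n; i += 8) {
            __m256 va = _mm256_loadu_ps(a + i);
            __m256 vb = _mm256_loadu_ps(b + i);
            _mm256_storeu_ps(out + i, _mm256_add_ps(va, vb));
        }
    }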
For anyone wondering, eax is the lower 32 bits of rax.
I'm glad I don't have to write a compiler 😉🙏
I just saw the video "restrict: the only C keyword with no C++ equivalent" - inexplicably, I'd never seen the restrict keyword before.
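For anyone else meeting it for the first time, this is the kind of thing restrict enables (a minimal sketch, assuming C99): it promises the pointers don't alias, so the compiler can keep values in registers and vectorize instead of reloading through memory after every store.

    /* without restrict, the compiler must assume out might alias a or b,
       forcing it to reload a[i] and b[i] after each write to out[i] */
    void add_vec(double *restrict out, const double *restrict a,
                 const double *restrict b, int n) {
        for (int i = 0; i < n; i++)
            out[i] = a[i] + b[i];
    }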
Fantastic content. Chapeau to the tutor.
A compiler appears to choose optimal decision branches based upon the choices available. This methodology is similar to substitution networks in theorem provers. Perhaps theorem provers built into compilers will become the first A.I. driven constructs to achieve true sentience and artificial intelligence. Imagine a compiler that understands algebra, philosophy, and all scientific topics, including physics. With the ability to generate and reason over its own code, the compiler would become the ultimate decision maker.
Bullshit. All these decisions the magical compiler makes are just code, nothing more. Someone had to write it, maybe from some researcher's paper, nothing more. Stop hallucinating.
13:30 yeah not gonna lie they got us in the first half
This is a great lecture
teacher is beyond smart! Great lecture!
Thank you for these lectures, this one was absolutely amazing 🎉
As a compiler engineer, I was interested in the part where the professor describes what compilers can't do. And what he describes appears to be true! It would be a real challenge to establish that two forces cancel each other. We could certainly encode rules (or laws of physics, for this example) as type constraints, and then the compiler might be able to figure that out, but that is probably years beyond the scope of modern compilers.
What the compiler can't do is what's implicitly left for the developer to optimize. Simulations in particular lend themselves to working out the math and simplifying it before implementing it in code.
Thank you for this course. :)
I just realized why the '32 seconds' timing claim is wrong. We can presume the processor has 16 cores, each with one thread, that together execute 32 billion instructions per second. Except for one thing: memory. The fastest memory, DDR4-4400, has a sustained transfer rate of 4.4 GB/s, throttling the processor to that, and x86-64/AMD64 instructions run from 1 to 15 bytes each. If we estimate an average of 4 bytes per instruction, at that rate it requires 40 terabytes of RAM, and that would take 4,000 seconds (6 1/2 hours) at the 4.4 GB/s rate. Now, there is a burst speed of 35 GB/s, so if you could run the memory that fast (and I don't think you'd get burst speed continuously), it would take about 120 seconds.
4,000 seconds = 1 hour 6 minutes 40 seconds... Just saying.
Enough to not even check the rest of your "calculations" SMH
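For what it's worth, redoing the arithmetic from the comment's own inputs (32e9 instructions/s for 32 s, ~4 bytes per instruction, 4.4 GB/s sustained, 35 GB/s burst) gives somewhat different totals, which is presumably what the replies are getting at:

    #include <stdio.h>

    int main(void) {
        double bytes = 32e9 * 32.0 * 4.0;  /* instruction bytes fetched */
        printf("instruction bytes: %.1f TB\n", bytes / 1e12);      /* ~4.1 TB */
        printf("at 4.4 GB/s sustained: %.0f s\n", bytes / 4.4e9);  /* ~931 s  */
        printf("at 35 GB/s burst:      %.0f s\n", bytes / 35e9);   /* ~117 s  */
        return 0;
    }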
16:51 compiler optimization
Watched this video to understand the minuscule details of compiler optimization... hope it gives me a clear understanding of compiler design, and helps with my GATE prep.
Are you preparing for GATE CS 2022?
He even looks like a genius :)
I think he looks like Neymar :)
At first I thought he had an "interesting" beard, but then I forgot about his looks and listened intently to what he had to say. Great lecture.
I don't know why YouTube recommended this to me, but I stayed for the whole lecture.
You must have a bent for this sort of thing.
The greatest computer language to develop a lexical analyzer and a compiler for any computer language is CPython, hands down. In fact, CPython's regular-expression re.sub() methods and string replace methods, nested across multiple algorithms, can handle the heavy lifting of both exceptionally well. Under 280 lines of code will do both, and do some things today's compilers cannot do.
Please give me one example, for learning.
The book Hacker's Delight
@Pi Pony It contains a lot of tricks involving the use of bit operators
@Pi Pony I don't know anything about compilers or bit operations. I was mentioning the book
Werever teaching classes.
Please, if it's not too much trouble, tell me the main conclusion of this video in one minute. Please.
17:06
The guy stands there looking like an über-dork in his Charlie Chaplin pants and with the stupid beard, if you can call it that... Then he starts to talk, and my god, perception totally changes; suddenly he's an athletic giant, lol.
👍 Intellect is attractive. 👍
But what is his education, his CV? -- Er. Sunil Pedgaonkar, Consulting Engineer (IT), India
You have to be kind of sick in the head to think studying how a compiler works is fun ;). It is quite interesting, but it's definitely not "fun" for the average engineer/computer scientist. I guess that's why he's at MIT lol.
Well, if you're going to be a good coder, you have to know the ins and outs of what is going on. In fact, the shallow minds are the lazy, stuck, and maybe sick ones, since they seek quick results with little effort.
@imnikhil3831 I agree, which is why I'm watching this series on compiler design. I don't think I need to take the full class at my university, but it is helpful to know a decent amount about the compiler and not leave it as a black box.
It is very fun for me; I would even call it more interesting than kernel / low-level development, and for sure more interesting than all these popular websh*t / business-logic jobs.
Here for Neymar