GopherCon 2019: Dave Cheney - Two Go Programs, Three Different Profiling Techniques
- Published: 27 Jul 2024
- Go, being a relatively recent statically typed, compiled language, is known to produce efficient programs. But writing a program in Go is not, by itself, sufficient for good performance.
In this tutorial Dave will demonstrate three different methods of profiling Go programs to diagnose, then improve, the performance of several programs. By the conclusion, you'll know how to profile the CPU and memory usage of a Go program, understand how to examine an execution trace, and come to grips with the reality of Amdahl's Law.
Dave Cheney is the unsung hero of the Go community.
Amazing talk! This guy has the ability to present such complex topics in a simple way!
Amazing presentation! I wonder how a software engineer can think of such visual explanations! Hats off.
Clear message, clear structure, easy to understand, thank you
Amazing. Thank you so much for delivering such great presentation.
Great talk! I'm also a novice at Go and I could still understand it! Thanks
great presentation. Thanks
Awesome high quality presentation!!! Thanks a lot!
Thank you for providing a great presentation.
Learnt plenty thanks very much, brilliant presentation!
Amazing talk and great explanation!
Learned a lot from this great talk.
Great talk. I learned something new about Go
Great talk man, instead of just showing slides and talking you are showing all the nuts and bolts that make the program what it is.
Absolutely amazing talk. Thanks
Amazing. I got a bit stuck at "buffer yourself". I saw this presentation shortly after it was uploaded in 2019 and since then it's become my pet peeve in code reviews as well as sort of a catchphrase: "Go buffer yourself".
Great talk, especially switching to tracing when cpu profiling gave no clue.
great talk
Awesome, thanks!
One of the best talks ever. Thank you. Your blog is amazing too, love it
This is bookmark worthy
nice I love Go
Same
Thanks!
When you started talking about thousands of cores and calculating each row or pixel on a core - I thought you were going to dive into GPU programming 🤣
Nice talk. It's kind of infuriating that go doesn't just inline readByte and avoid the heap allocation, though.
On my Mac Air M1, the per-row mode performs better than the workers mode, no matter how many workers are used.
I didn't fully get why, in the first part, moving the buffer declaration to file scope avoids the buffer escaping to the heap. Why did it escape to the heap in the first place? I watched his explanation over and over, but it's still not 100% clear to me.
Also, brilliant presentation!!
My understanding is that a slice is a small header pointing into an underlying array. With the file-scope declaration, the array is allocated once, so every call reuses the same memory rather than allocating a fresh buffer per call.
When the buffer is declared inside the function and passed to a callee through an interface, the compiler can't prove the callee won't keep a reference to it beyond the function's stack frame, so escape analysis has to move it to the heap. With the package-level variable there's nothing to escape, so the per-call allocation disappears.
Can you share the code with us?
What were the 3 different profiling techniques ?
- pprof
- trace
- ??
Memory, CPU and Trace ma dude
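For the curious, all three can be captured from the standard library alone (the talk itself uses helper packages, so treat this as a sketch rather than Dave's setup). `work` here is a hypothetical stand-in for the program under test:

```go
package main

import (
	"os"
	"runtime"
	"runtime/pprof"
	"runtime/trace"
)

// work is a stand-in for the code being profiled.
func work() int {
	sum := 0
	for i := 0; i < 1_000_000; i++ {
		sum += i
	}
	return sum
}

func main() {
	// 1. CPU profile: samples where the program burns CPU time.
	cpu, _ := os.Create("cpu.pprof")
	pprof.StartCPUProfile(cpu)
	work()
	pprof.StopCPUProfile()
	cpu.Close()

	// 2. Memory (heap) profile: a snapshot of live allocations.
	mem, _ := os.Create("mem.pprof")
	runtime.GC() // update statistics before taking the snapshot
	pprof.WriteHeapProfile(mem)
	mem.Close()

	// 3. Execution trace: records scheduling, GC and goroutine events.
	tr, _ := os.Create("trace.out")
	trace.Start(tr)
	work()
	trace.Stop()
	tr.Close()
}
```

Inspect the results with `go tool pprof cpu.pprof`, `go tool pprof mem.pprof`, and `go tool trace trace.out`.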
Can somebody get me a link to the source code?
Unix user "dfc" hehe
Excellent talk, enjoyed it very much! Please, more talks like these in 2021 instead of the political social justice BS you decided to sneak into GopherCon 2020.