I remember being at this talk. Had lunch with Andrew and another conference attendee afterward. He was very kind to answer our questions about Zig and got us even more excited to give it a try!
"Docker exists because people don't know how to build from source"... THANK YOU!
Exactly.
If you think docker is a build tool you probably don't need it.
That is *one* reason at best
That's at 19:28
I was searching for this. YouTube search in the autogenerated subtitles doesn't find the word "Docker" for some reason.
G A M E
C H A N G E R
I've been crying about make/cmake/whatever being fundamentally cringe. Thank you man!
"Software is an experience, a life style." - Andrew Kelley
What a champion
Man, I wish this talk had been around when I started coding....
Absolutely love Andrew man
don't take the road less traveled, but do fiddle with your default configs.
So, this guy figured out how a build system should work and how it should be presented to a programmer, despite the fact that he's 17.
Correction: AFAIK it's DESTDIR that does what he talks about, not PREFIX. PREFIX only affects the middle part, which is typically /usr/local or /usr; paths like /etc or /opt are not affected by it. Only DESTDIR (which defaults to /) re-roots everything.
For example, if an app installs /usr/local/bin/myapp and /etc/myapp.conf, then setting PREFIX=/tmp/foo will have the installer create /tmp/foo/bin/myapp and /etc/myapp.conf, but setting DESTDIR=/tmp/foo will create /tmp/foo/usr/local/bin/myapp and /tmp/foo/etc/myapp.conf.
I'm not sure it's part of POSIX; IIRC I learned this from the GNU coding standards (it might be the GNU Make docs), but as far as I know it's adhered to in both Fedora and Debian.
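To make the distinction concrete, here's a sketch of the two invocations against a hypothetical app whose makefile installs a binary under $(PREFIX) and a config file at a hard-coded /etc path (all names illustrative):

```sh
# PREFIX only moves paths derived from $(PREFIX); hard-coded ones stay put:
make install PREFIX=/tmp/foo
#   -> /tmp/foo/bin/myapp
#   -> /etc/myapp.conf            (not derived from $(PREFIX), so unchanged)

# DESTDIR re-roots the entire install tree, which is what you want for staging:
make install DESTDIR=/tmp/foo     # PREFIX keeps its default /usr/local
#   -> /tmp/foo/usr/local/bin/myapp
#   -> /tmp/foo/etc/myapp.conf
```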
19:04 Glitch in the matrix.
I am new to the C world and just recently learned make. It's good for in-system use, but if I have to ship software I will use Zig for that. CMake is very ugly in a lot of ways; I don't want to learn that shit.
Build everything from source, FTW.
19:02 compiler Illuminati preventing us from knowing the truth
38:07 What prefix does “zig build run” use?
Very late answer. But by default the zig build system creates a directory called zig-out in the project directory where the binaries are stored. Hope that answers your question :D
@@michaelscofield4524 thank you!
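In case a sketch helps future readers: assuming a stock project, the default install prefix is ./zig-out, and it can be overridden with --prefix / -p (paths and the binary name here are illustrative):

```sh
$ zig build                 # installs artifacts under ./zig-out by default
$ ls zig-out/bin
myapp
$ zig build run             # builds into the same zig-out prefix, then runs
$ zig build -p /tmp/stage   # --prefix: install somewhere else instead
```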
Nanananicee ❤🎉
6:44
😍👍
I don't really agree with the Docker statement. You avoid almost all the dependency and version problems by making sure the environment is the same. How far should you go when recompiling your dependencies from source? The dependency chain can go on and on until you reach the compiler's source code. So now your build step takes forever and you have to maintain a very complicated build pipeline for a lot of software, all because you didn't want to have one abstraction layer (Docker).
I would agree with this statement, but it's usually not that difficult to do, at least on Linux. Even with some of the biggest open-source projects like TensorFlow and PyTorch, compiling a version yourself, with all the extra bells and whistles, is straightforward when done with the correct configuration. Although it might take a little while to configure correctly, build systems like zig build and Bazel make it super easy if you know them. I personally work on a very large C++/Rust-based HPC project; we use Bazel (and will try out Zig's build system). It has to go through 4 different compilers (not really, they all use an LLVM backend plus extras) for different Intel, AMD, and NVIDIA implementations, and it's usually easier to just set up locally.
Dockerfile builds eliminate the 'works on my machine' issue. Yeah, you can force your users to spend hours or days researching and troubleshooting, but honestly the time saved probably outweighs the actual compile-time overhead. It also serves as install documentation with regard to outside factors: if it builds in the Docker container, you have every factor accounted for.
@@steffennilsen2132 Yeah, I do agree on the ease and the controlled environment for builds. Unfortunately for our use case, HPC (and I guess game dev) requires running on bare metal even for basic testing, so setting up the toolchain is entirely necessary. For others who don't specifically require this, Docker can be good; we do use Docker for deployment etc. However, I don't like the habit some people have of "dockerifying" the whole build and dev process.
I don't see Docker solving this problem. So how do you put the software in your container? From a repo? You can do that without Docker. From source? You can do that without Docker.
I think you think Docker does more than it actually does. It's not Yocto; it's just a half-assed VM/container with a competent definition-language format. That language has no magic powers: you still need to fetch software somehow, and if you use live repositories, generating the image a few months apart will give you a different image.
@@nextlifeonearth Every system is configured differently, or there are differences between versions, etc. Docker gives you a stable environment that's the same on every machine.
The talk should really be about how to build C/C++ code from source. It basically brings up all the pain points of make, because make is a tool from the '70s. Maybe we should just build a better tool than make?
They exist. Nobody uses them.
I mean.... that's literally what he did. Zig Build can replace make and build C/C++ from source as he showed in his talk.
There are others, like Ninja, but you need a configuration system like CMake with it.
Guix does a lot of these things in a Nix-like way. Just sayin'.
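To illustrate the replace-make claim a couple of replies up: a minimal build.zig that compiles a C source file looks roughly like this sketch (assuming a Zig 0.12-era API; file and executable names are hypothetical, and the exact calls have shifted between releases):

```zig
const std = @import("std");

pub fn build(b: *std.Build) void {
    // No Zig root source file: this executable is built purely from C.
    const exe = b.addExecutable(.{
        .name = "capp",
        .target = b.standardTargetOptions(.{}),
        .optimize = b.standardOptimizeOption(.{}),
    });
    exe.addCSourceFiles(.{
        .files = &.{"src/main.c"},
        .flags = &.{"-Wall"},
    });
    exe.linkLibC();
    b.installArtifact(exe);
}
```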
People don't build things from source because they don't live in Mama's house anymore, the day only has 24 hours, and they want to work with the apps, not on them. Those are the real reasons!
Docker is a crutch for people who do not want to learn how to build. It's 2024; compile times are mere seconds unless you're working on a huge codebase with a 100+ member team. Docker's overhead is more than people think, and it does not readily result in robust software. Building optimized for your machine gives you blazing-fast executables, and this matters as your number of services increases. Not to mention the space requirements: Docker eats up disk for your volumes and images. Not optimal. The Zig team understands how good software is built. Docker for everything is a pure skill issue, mostly for "web devs".
@@raptorate2872 You're talking about 2024 compile-time performance but crying about Docker's overhead. That's my kind of humor! 😂
If I have hardware limitations, I do direct builds. If I have not, I do container. But you can live out your religion. Do what you want to do. I do what I need to do.
Cheers. 🍻
@@martinmajewski I guess you forgot about the unnecessary space as well. Docker overhead matters especially when running on small devices like IoT boards and the Raspberry Pi. Docker has its place, but if it's your replacement for building from source, that's where you're going wrong. For example, you can have a DB in Docker no problem, but you don't want an ML workflow service running in Docker just because the daemon engine adds extra layers when direct GPU access is involved. This also has implications for web apps running on Docker, severely limiting network performance.
@@martinmajewski Compile frequency vs. continuous deployment on Docker. You're missing the point and straining your resources for nothing.
@@raptorate2872 Let me guess: you're only dealing with RPis in your free time and not with enterprise solutions, where metrics other than space or CPU time matter? Good for you. I have nothing against compiling sources, but saying container users are lazy is just stupid. And if somebody can only see black or white (on any topic), I cannot take that person seriously. Good day to you!
Makefiles are WAY MORE GENERIC than this. That's where the Zig build system fails completely.
It is painful to write a build.zig (which will ultimately be limited to building the software), while makefiles are easy to learn and use.
Makefile rules are mostly intuitive, while having to learn Zig just to build something is ridiculous.
Also, it doesn't matter what the compiler actually is; you can compile anything with a makefile. I use make to build PureScript applications, verify my certificates, and make packages for my system (example below).
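As an illustration of that generality, here's a sketch of a make rule whose recipe isn't a compiler at all; the file names are hypothetical, with openssl verify standing in for an arbitrary command:

```make
# make only tracks files and timestamps; it doesn't care what produces them.
# Here a stamp file records that a certificate verified against a CA bundle.
%.verified: %.pem ca.pem
	openssl verify -CAfile ca.pem $<
	touch $@
```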
Great introductory talk, but it did not help AT ALL with my `build.zig` issues. Docs are scarce, and I wish a core Zig member would info-dump all they know about `build.zig` in a blog post or video one day. It has a bit of a confusing vocabulary, and tutorial code for it keeps breaking.
If you want help, I suggest you make a post on ziggit(.)dev. It's the dedicated community forum, and Andrew himself seems to be active there daily.
It's like Stack Overflow, just for Zig and the build system, minus the attitude of SO, and it allows for more dialogue.
Somebody should make a system for tutorial code maintenance. It's quite an issue, especially on tough and complicated subjects.
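On that note, for reference, here's a minimal `build.zig` sketch of a plain executable with a run step, assuming roughly the Zig 0.12-era API (names are illustrative, and the exact calls shift between releases, which is exactly the breakage complained about above):

```zig
const std = @import("std");

pub fn build(b: *std.Build) void {
    // Standard -Dtarget= / -Doptimize= command-line options.
    const target = b.standardTargetOptions(.{});
    const optimize = b.standardOptimizeOption(.{});

    const exe = b.addExecutable(.{
        .name = "myapp",
        .root_source_file = b.path("src/main.zig"),
        .target = target,
        .optimize = optimize,
    });

    // Install into the prefix (./zig-out by default).
    b.installArtifact(exe);

    // Wire up `zig build run`: build, install, then run the artifact.
    const run_cmd = b.addRunArtifact(exe);
    run_cmd.step.dependOn(b.getInstallStep());
    const run_step = b.step("run", "Run the app");
    run_step.dependOn(&run_cmd.step);
}
```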