grep -P is not possible with awk; cat is such a common case it deserves its own program (imagine typing awk '{print}' file*.txt every time). Likewise, how does one implement grep -n port *.cc in awk? (I don't, for example, know how to match the start of a file, necessary to have a per file line number, if multiple files are specified on the command line -- grep -n gives you the filename and line number _in that file_, rather than the line number of all input files concatenated, and this is much more useful than what you can do with awk, before one adds the colour coded output you get from modern grep.)
Bit more googling: I'll answer myself with a couple of examples:
awk '/cout/ {print FILENAME ":" FNR ": " $0}' *.cpp
awk 'BEGIN {n=0} FNR==1 {n=n+1; print "New File: " FILENAME} END { print n " files" }' *.cpp
Sure you can do it, but it takes headscratching and more legwork for your poor overworked fingers. I'm someone who, with a specific task in mind, writes a 'scripts.sh' file defining 1-4 char abbreviated commands for the task at hand, just to minimise typing.
I may need to test this one...
$ time awk '{print $0}' new.sh
real 0m0.005s, user 0m0.002s, sys 0m0.002s
$ time awk '//' new.sh
real 0m0.003s, user 0m0.003s, sys 0m0.000s
@@DistroTube For full autism, make sure to time how long it takes to type each one. :-D You don't need quotes around // in this particular case.
One less dependency............but a few extra characters to invoke. This is the simplest possible example of a tradeoff in programming I can think of.
me, watching videos on awk because my coworker who retired always used awk, and in the end he used awk for everything. so I had a really good laugh in the end :D.
Because awk uses more system resources than grep. The reason not to use cat is to minimize the amount of memory that needs to be allocated. Only calling grep is more efficient than using awk, and also more efficient than invoking cat as well as grep.
Krešimir BTFO scrub, check out the benchmarks that came out today
@@Baccarebel2580 If you're talking about Luke's video, then it shows that you save about 0.001s when using grep like it is intended, instead of awk, which I told you without measuring it. I guess if you are that pressed for time it matters, but in normal use nobody can notice the difference. Now, nobody is saying you should use awk for everything (DT's video is a bit tongue-in-cheek), only that you can, and that it doesn't really matter in most normal cases. If you cat into grep, no secret police will show up at your door to take you away.
Ok, since the "one content-creator" ignored this, maybe you would be interested in making a list of tutorials on core-utils with examples? I think it would be really nice
@@xthebumpx I actually have! I use this vim plugin for searching (vimgrep is so slow) that uses ripgrep. But on the command line I prefer to use ag, even though rg is definitely faster.
Everyone should probably use what works for them... I find myself using awk quite often, just because I tend to forget the options/flags for the various separate commands (in the time it takes to check man, I've already written the awk).
@@eNNercY LOL, Unix/Linux is cool... but scripting is always quick and dirty... it will be in no way fast... Arguing between cat and awk is just clickbait!!
Because grep is a lot faster to search through a file than awk. And the same story for cat. Just add the command time in front of the command to see how long it took to execute. For example: time cat new.sh && time awk '{ print $0 }' new.sh. Loved the examples.
I hardly ever use cat. I use less most often. The files I deal with are more than just one screen long. I also hardly ever need just one or two key words.
Well, although awk can replace cat or grep even if it's bulkier, saying that it can replace sed is quite a big statement. I do a lot of parsing using sed, quick and simple, and I can't see how you would trade that for an awk experience. For splitting lines into columns it's fine though.
Okay while in the directory with the files I use the following grep -il "keyword" *.txt | xargs -n1 -i mv {} subdirectory/ I decided not to remove the files just as a precaution. Seems to be working...
It would be really cool if you showed things awk can do that the others can't. It sounds like awk is bringing a sledgehammer to hammer a small nail a lot of the time. It's typically fewer characters to do something with sed or grep, so when they do the job (which is often), why would you type more? I would only use awk when I can't do the same thing faster. Is there any performance difference between these commands?
Imagine a carpenter making a video on using solely one specific hammer for everything. You'd laugh about that. I wouldn't abandon all the other CLI tools. Besides, the tool to replace them all... wasn't that Perl?
Just one day after Luke Smith's video.
ruclips.net/video/82NBMvx6vFY/видео.html
Things are heating up. Next: Why awk sucks
Only SIX YEARS after Conner McDaniel's awesome SED series >:P
@@eritert Next on chris titus... Why Awk is the Devil and Why So Many No Longer Use It :P
who's the noob now? very awkward
;)
@@eritert Why awk is bloated.
2:54 - why bother with cat when you can awk and make the command 8-10 characters longer? I get that awk is an interesting program, but this is absurd. Come on DT, these videos are great but this is silly.
He would use "sed" instead..
Yeah, people don't use awk because the commands are a lot longer. Even the first users of Unix like Brian still use cat; it's perfectly fine. Also, people use grep because it's shorter. So cat and grep are the pure essence of 70s Unix programs: be great at one thing. Awk has its purpose, but it isn't to replace grep.
Here's a video of Brian, the creator of AWK, where he says grep is his favorite command lmao ruclips.net/video/O9upVbGSBFo/видео.html
Couldn't disagree more.
@@ArdieMejia83 care to elaborate? Why would you still use awk when cat is way shorter and easier to type?
My suggestion is to not waste time showing how a tool can awkwardly emulate another, but what it can do better. Splitting fields is one of the most common uses for awk.
you mean awk wardly dab
Exactly what I was thinking. "Why use "head" when you can type out this much more verbose syntax to accomplish the same thing?" hmm
And splitting fields with awk is not _awk_ wardly (haha) emulating other tools? Like ... _cut_ for cutting fields?
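Since field splitting keeps coming up in this thread as awk's real strength, here's a minimal sketch; the file name and sample records are invented:

```shell
# sample data: colon-separated records, one "user" per line (made-up entries)
printf 'alice:x:1000:1000:Alice:/home/alice:/bin/bash\nbob:x:1001:1001:Bob:/home/bob:/bin/sh\n' > users.txt

# print login name and shell (fields 1 and 7), like cut -d: -f1,7
awk -F: '{ print $1, $7 }' users.txt

# but awk can filter and reformat in the same pass, which cut alone cannot
awk -F: '$7 ~ /bash$/ { printf "%s uses bash\n", $1 }' users.txt
```

The second command is where cut stops being an equivalent: the pattern selects records and the action reshapes them in one tool.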
Well, I think what he's highlighting here is that maybe those tools could share more of their functionality, in a UNIX mentality, instead of having their own ways of doing the same things. awk, sed and grep could use the cat binary and have it as a dependency to get text from a file. It would make more sense.
But I think that's kinda the goal of busybox
why carry around a hammer when you can hammer things with your shoe?
To be fair, Luke was specifically talking about piping cat into grep for standard grep uses; he never proposed it as a replacement for cat as a whole. Interesting video though!
I believe that the Unix philosophy is small single purpose tools that do one thing well. That is why you would use grep instead of awk, not to mention that the grep command line was very much simpler and easier to remember than the awk equivalent. That said, this was a very interesting video on how the same things can be accomplished with different tools.
All depends on the task ... and the size of the data-set and the time allowed to do it !!
3D terminal games have been written in Awk. It’s that powerful.
Tetris games have been written in awk. True story.
@@DistroTube I like Tetris, ever since it was released in Europe, because I was born in the former GDR.
@@msdosm4nfred are you sure this is the right channel for you? just going by the name.
Really? Where what?
@@urugulu1656 MS-DOS is now open source, at least 1.25 and 2.0. So isn't it technically the best non-POSIX OS now? repo link github.com/microsoft/MS-DOS
DistroTube vs Luke Smith : Civil War 😂😂
Than-O.S. is coming! LoL
No it's not. Unaboomer, while immensely popular, doesn't rely on YT - he is a community builder (without probably realizing it) around i3, as he takes real-life problems and shows the solutions. He's a freaking eye opener. He's not a news outlet and doesn't search for topics to cover - he discovers something useful, teaches himself, improves his toolkit and shares it with us.
He shows you in real time what you can do with it, from the point of view of regular (yeah, right...) staff of the University of Arizona (or whichever hole he is in now).
Picking on him without understanding his point (which is a 20-30% performance boost, 6 minutes instead of 9 for longer scripts) is childish and as pathetic as calling him "another creator"
"Why use awk and sed if you can use Perl?"
Just kidding, Perl is awesome.
Why use Perl when you can use Node.js?
@@alexxx4434 Because js is a soydev language.
You thought I was serious?
@@alexxx4434 Yes, sorry. I wooshed myself.
Not piping from cat into grep makes sense. Using awk instead of quicker alternatives does not. grep, head, tail, and sed, are shorter and easier to type and give the same results. I bet they complete quicker as well.
Of course!
This video is spot on. Awk is seriously underutilized. One of my favorite commands.
If you use awk for this, you shouldn't be allowed anywhere near scripting in any production environment :D
@@meyimagalot9497 lol
awk is underutilized, _as a scripting language_.
It's not very useful as a command, but as a scripting language for all sorts of text manipulation. Other stuff often done using awk as a command has its own commands (cutting fields with cut, _grepping with grep_, replacing characters with tr, simple stream operations with sed, ...). awk is a scripting language!
Humm semantics and pedantics, eh? I enter the awk command and use the awk scripting language daily :)
@@mitchelvalentino1569 Pff. awk is for longer scripts (although not getting _too_ long ... usually ~50 loc.)
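To illustrate the scripting-language point, a tiny sketch of awk as a program rather than a one-liner (the file name and data are made up):

```shell
# made-up input: a count followed by a label on each line
printf '3 apples\n5 pears\n2 plums\n' > basket.txt

# a small awk *program*: accumulate over lines, report once at the end
awk '
  { total += $1 }                  # runs for every input line
  END { print "total:", total }    # runs once, after the last line
' basket.txt
```

Even at three lines it already shows variables, per-record actions, and an END block — the things one-liner comparisons with grep never touch.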
For beginners, 'bc', 'cat', 'curl', 'head', 'grep', 'sed', 'tail', 'wc', and 'wget' are easier to learn for text retrieval and processing; the subprocess operator $() comes in handy too. In contrast, 'awk' is a text-processing utility with a built-in programming language. Those utilities plus 'gnuplot' are sufficient for data analysis and much of data science. If programming language constructs such as variables, loops, and user-defined functions are needed, a standard *nix shell like 'bash' provides these features. By the way, I use all of these utilities, and sometimes only the full capabilities of 'awk', on a daily basis to perform all stages of the data science workflow, including machine learning. Whether I choose to go awk-less depends on the project. Okay, I am sadistic. I use Python and R too, but only when introducing data analysis or the stages of the data science workflow up to exploratory data analysis. The 'bc', 'cat', 'curl', 'head', 'grep', 'sed', 'tail', 'wc', and 'wget' utilities allow for an interactive, step-wise refinement approach, REPL-style, without the cognitive overhead of programming language constructs.
Why use grep when you can just print the file out on a dot matrix printer and use white-out on all the lines you don’t want to see?
The cat replacement can be shortened to awk 1 new.sh. I do this instead of cat when I think I will want to perform an action on the file using awk after I view the contents. Now I can recall the last command and simply replace the 1 with the AWK program.
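For anyone wondering why "awk 1" works: a pattern with no action defaults to printing the record, and the constant 1 is a pattern that is always true. A quick sketch (file name is arbitrary):

```shell
# throwaway input file
printf 'one\ntwo\nthree\n' > demo.txt

# "1" = always-true pattern, default action = print the line, i.e. cat
awk 1 demo.txt
```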
*grabs popcorn* I am enjoying seeing Luke and DT go back and forth with these commands. It's like watching an awesome game of chess.
sed s/Luke/DT ... *shots fired*
Also a great way to view the first few lines using awk:
awk 'FNR < 11' new.sh
Thank you, Derek. "Bloat" is bloat. Use "blo" instead.
Gnu/Bloat
This was actually hilarious. Thanks DT
Great video... but you answered yourself really. The simplest solution (or more readable) is usually the best one... so which do you prefer to use in a script?
This: head file.sh
Or this: awk '(NR>=0 && NR ... etc etc about 20 more characters... ' file.sh?
Which would be more readable and maintainable while also allowing fellow programmers looking at your code to 'get it' quickly?
Yea, I think I recall seeing a video recently of some boomer talking about using sed instead of cat.
The reason we use cat at the beginning is that it moves the file being referenced to the start of the line before any processing commands. Otherwise you've got grep followed by a random pattern match, then the file you're processing, often followed by a long sequence of other things. It's easier to read and conceptualise when you have "take file X | search for content y | ..." especially if you're chaining many commands together and later you have more greps or sorts and you switch the mode of their use from including a filename to taking stdin. It makes the $1 stick out and easy to understand rather than burying it somewhere down the commandline.
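If the goal is only to keep the filename at the front of the pipeline, shell redirection gives the same shape without the extra cat process — a small sketch with made-up data:

```shell
# throwaway input file
printf 'apple\nbanana\napricot\n' > fruit.txt

# filename first, no extra process: redirect stdin from the file
< fruit.txt grep '^ap'

# same result as the classic cat pipeline
cat fruit.txt | grep '^ap'
```

Whether the leading "< file" reads as nicely as "cat file |" is taste, but it keeps the file at the start of the line all the same.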
Why say 'content creator'? Just name Luke Smith. Good video apart from that, DT
You know, "not to shout out the competition"... OMG, how low-key jealous and petty that is... I can't help it, but to me it tells a lot about the character.
Felt almost insulting tbh.
Chill, Luke is giving him ideas for new content and he hasn't bothered to pay respect or acknowledge a fellow GNU user.
Using awk feels like I'm using a Dvorak keyboard because of those special characters, especially when I need to finish jobs faster. I only use awk for missing features like print. Time is gold.
Why use awk when you can use perl
"This got AWKward."
At some point these classic Unix tools just become verbs for what we want to do - when we just want to dump files as is, or combine several files into one, we achieve that via the verb of 'cat'. When we want to scrape things per line out of a file, we 'grep' it. If we want to scrape, filter/transform a file, we 'awk' it. Even if there were one uber tool that could do all these things, I'd have to create aliases for 'cat', 'grep', 'awk' so those verbs would still be there
As for the head replacement, you can simplify the command to "awk '{ if(NR < 11) print; else exit }' new.sh" :)
@@bigpod I meant simplification of command he provided as replacement for head. But, in general, it is quite simple command, there's really not much to it :)
@@bigpod Sure it is, can't argue with that, but I think that was not a point of the video :)
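For the skeptical, the early-exit awk version in this thread really does match head's default of 10 lines — a throwaway check:

```shell
# 100 numbered lines of scratch input
seq 1 100 > nums.txt

# both print lines 1-10; the awk version stops reading at line 11
head nums.txt
awk '{ if (NR < 11) print; else exit }' nums.txt
```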
Some people use TurboTax to file their taxes. I file them with AWK.
Enough of the bloat you old goat.
You don't need Cat or exfat.... a head or sed even if the font is in red.
Don't use grep, or WEP and especially don't use zipgrep.
In the terminal or youtube as you gawk, while you listen to DT talk
He says all you need is AWK.
-----Dr. Seuss
If I ever have kids, I'm reading this to them.
beautiful
Wow. Just....wow.
As an awk lover, this video put a smirk on my face. But seriously, awk really shines when manipulating data in a limited number of potential sources. In real life scenarios, searching through large git/mercurial repos, where there are thousands of source files and a lot more build files, awk is not really your friend. All searching operations will be much much slower than grep. Grep will also be much slower than SilverSearcher or RipGrep, but those tools are not a part of any Gnu/Linux distribution.
But you are absolutely on point about replacing sed with awk. Even when awk syntax looks longer, it is more meaningful and readable.
Grep is still the least cumbersome and most elegant for every day searches. The bloated commands, sed and awk, are better for more specialized cases. Awk is exceptional when it comes to tabular data.
For replacing head, you only need: "awk 'NR < 11'"
I'm using only awk right now for the Advent of Code. It can really do anything.
awk '{print $0} (NR==11){exit}' new.sh seems easier to understand and use than what you used for the head example . great video by the way!!
Only way Luke can redeem himself is if he does a NixOS series
I know what you are doing there! Lol
Awk is like using a sledgehammer to drive in a nail. LoL. I will continue to use cat and grep.
Thanks for the comparison, but to me it remains more tedious to use awk over ripgrep, grep... you name it... in most cases. Anyway, I still learned something new, thanks for that.
awk is awesome, I use it to update pip installs
pip list --outdated | awk 'NR >2 { print $1}' | xargs pip install --upgrade
Nice!
@@DistroTube forgot xargs, but, edited the post
@Peter Andrijeczko for using awk? No, I was sharing an example for newcomers. It's aholes like you who scare people away from Linux, man. How in the F am I going to look for congratulations for a simple line of awk, when I've written emulators in ASM for both the 6805 and 6502? Go bug someone else if you're bored, man.
@Peter Andrijeczko this isn't a penis measuring contest, I'm just saying, you came off as an ahole and you know you did.
I'm sure you're way more capable than me, but honestly, years don't equate to ability. I found and exploited a security hole in Nagravision cards used by popular TV providers. A loop counter was off by one and overwrote the counter itself in RAM. The loop itself was decrypting memory with the IDEA cipher with a key of zero for a few loops, then they got overwritten with a constant key. I had to encrypt my payload with 2 keys and then used another command to cause a jump to the payload in RAM. I've never worked with Linux as an employee and I'm self taught. People will have their strengths and weaknesses all over the place. Like I said, this isn't a pecker measuring contest.
@Peter Andrijeczko that's why I wrote emulators for both the 6805 and 6502; these cards were using those same architectures depending on the ROM version. For the 2nd part of the exploit I wrote code in C++ so I could brute-force a set of bytes that would give me a jump to my payload, since what was being overwritten in RAM was pseudo-random. I needed a jump to anywhere from $0080 to $00D0 in RAM. But yeah, I'm no pro or expert... I learn as I go, and when it comes to Linux I'm still a beginner. Hell, I stick to Debian or Ubuntu for that very reason.
I seldom use cat. Usually "less" or "more" for large files, sometimes with grep. The pipe is the Unix way.
This video is such a flex that he is literally flexing in the thumbnail, love it!
Awk is 671KB, plus 34K for the links from /usr/bin/awk -> /usr/bin/gawk. Cat, grep, sed, head, and tail are 475KB combined. Awk is teh B|0@t!
I didn't know the "tolower()" function in the awk command. I learned it with you. Thanks man. Greetings from Brazil.
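For reference, the tolower() idiom from the video is awk's spelling of grep -i: fold the whole line to lower case before matching. A sketch with invented log lines:

```shell
# made-up log with mixed-case matches
printf 'Error: disk full\nerror: net down\nall ok\n' > app.log

grep -i 'error' app.log                # the short way
awk 'tolower($0) ~ /error/' app.log    # the awk way shown in the video
```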
I think we need to include all that stuff into systemd )))
The one caveat with replacing grep with awk is that it doesn't support Perl-like regex the way modern GNU grep's -P does; awk patterns are ordinary extended regex.
Why use head when, with an extra 40 characters, you can use awk to do the same thing? It took him so long to type out the command that he jump-cut it.
Why use cat? Well, cat is a lot easier to type out than awk '{print $0}' . The awk version of case insensitive grep is even worse since you have to explicitly call out each character that might be in both upper and lower case. awk does have some really useful stuff though. I'll have to dig into it further.
It's a tool in the toolbox. It's good to be aware of what the strength of each tool is, and use it in a meaningful way. -Capt. Obvious.
"why use a cap when you can use an umbrella on a sunny day"
Master, this is revealing to me. Now I'll use AWK for my log and metadata work.
cat and grep and more and tail are the basis of UNIX systems. Do one thing and do it well. Pipe through the different things you are doing. For me it is easier to remember that cat echoes the file and grep takes the input and filters it than to remember that grep has all those parameters, one of which is the filename. It sure has ANDs also, yet I tend to pipe one grep into another if I am filtering on two conditions. In any case, the examples you give are longer to type and harder to remember than the easy cat/grep equivalents.
Offtopic: After 30 years of using grep for different piped tasks I discovered "grep -r 'someCodeSnippet'", which looks into a folder and shows you all files and file lines that contain a given text. That is so useful, I cannot believe I never used it.
Why would you present it like this? Now awk seems cool, but your commands are far more verbose for the same functionality, and then you advertise it as a replacement?
It should be advertised as an alternative; depending on your use case, it could be more powerful, or slower than one of the other tools in terms of productivity.
grep -v apple
takes 2 seconds max to type.
awk '!/apple/ {print}'
takes 8 seconds minimum.
If I use grep for 1 week straight,
awk will take a month.
When I become 90 years old and the Medicare bill costs me 100k to extend just a month of life...
awk? No thx.
But I agree it does things that grep, sed, and cat are missing.
This was an interesting video. However, if I want the head of a file, I will type "head t.txt", not the long awk command you suggested. Have you counted how many keystrokes are involved in the awk command? The awk utility is only to be used for complex operations which other, simpler utilities cannot perform. It is not a replacement for all other commands.
When you're working with things like live logs you *must* use cat instead of grep/awk etc to avoid locking the log files you're looking at. Otherwise now you're preventing the system from logging while you work which in some situations (tech support) is unprofessional.
If you also need to do columnary processing, you can pipe your cat to awk.
The last statement fixed the first.
Could you please elaborate or hand me a reference material? How does awk lock the file?
I see where this is going.
_Glenda’s so cuuute!_
1:40 "why bother with grep? grep is sort of a neat program, but it really only does one thing"
have you heard of the unix philosophy?
It's like why use _reduce, map, forEach, filter, every, some, includes,_ where you can use _for loop_ for everything.
Why use awk when you can use perl?
grep -P is not possible with awk; cat is such a common case it deserves its own program (imagine typing awk '{print}' file*.txt every time).
Likewise, how does one implement grep -n port *.cc in awk?
(I don't, for example, know how to match the start of a file, necessary to have a per file line number, if multiple files are specified on the command line -- grep -n gives you the filename and line number _in that file_, rather than the line number of all input files concatenated, and this is much more useful than what you can do with awk, before one adds the colour coded output you get from modern grep.)
A bit more googling: I'll answer myself with a couple of examples:
awk '/cout/ {print FILENAME ":" FNR ": " $0}' *.cpp
awk 'BEGIN {n=0} FNR==1 {n=n+1;print "New File: " FILENAME} END { print n " files" }' *.cpp
Sure you can do it, but it takes headscratching and more legwork for your poor overworked fingers. I'm someone who, with a specific task in mind, writes a 'scripts.sh' file defining 1-4 char abbreviated commands for the task at hand, just to minimise typing.
Using awk for everything is surely doable, but sounds like a gimmick to my ear. Interesting, though.
I think that's the joke. There is nothing wrong with using cat or grep if that's what you're used to doing.
and now for the bigger question... Y use vi when u got awk, sed, grep and the whole 9 yards or bash utils. nano ftw!
2:54 Typing `awk '{ print $0 }' new.sh` is bloat. Just type `awk // new.sh`
I may need to test this one...
$ time awk '{print $0}' new.sh
real 0m0.005s
user 0m0.002s
sys 0m0.002s
$ time awk '//' new.sh
real 0m0.003s
user 0m0.003s
sys 0m0.000s
@@DistroTube For full autism, make sure to time how long it takes to type each one. :-D You don't need quotes around // in this particular case.
One less dependency... but a few extra characters to invoke. This is the simplest possible example of a tradeoff in programming I can think of.
me, watching videos on awk because my coworker who retired always used awk, and in the end he used awk for everything.
so I had a really good laugh in the end :D.
Pretty soon you'll be browsing the web with awk and curl /s
Something something "awk"ward
Yeah I know. I couldn't think of anything clever.
Because awk uses more system resources than grep. The reason not to use cat is to minimize the amount of memory that needs to be allocated. Only calling grep is more efficient than using awk, and also more efficient than invoking cat as well as grep.
Congratulations, you saved yourself 0m0.001s.
Krešimir BTFO scrub, check out the benchmarks that came out today
@@Baccarebel2580 If you're talking about Luke's video, then it shows that you save about 0.001s when using grep like it is intended, instead of awk, which I told you without measuring it. I guess, if you are that pressed for time, it matters, but in normal use nobody can notice the difference.
Now, nobody is saying you should use awk for everything (DT's video is a bit tongue-in-cheek), only that you can, and that it doesn't really matter in most normal cases. If you cat into grep, no secret police will show up at your door to take you away.
"We can replace head with awk"
7:37:
Me: Nope
awk is great for working with data tables
True, that's really where it shines.
Ok, since the "one content-creator" ignored this, maybe you would be interested in making a list of tutorials on coreutils with examples? I think it would be really nice.
grep has a builtin -r, or recursive, flag, which is very useful. These days I use "The silver searcher" though.
Check out Ripgrep!
@@xthebumpx I actually have! I use this vim plugin for searching (vimgrep is so slow) that uses ripgrep. But on the command line I prefer to use ag, even though rg is definitely faster.
Did he just declare war?
If you have a long chain of commands, it may assist readability to put "cat file |" at the beginning.
Everyone should probably use what works for them... I find myself using awk quite often, just because I tend to forget the options/flags for various separate commands (by the time it takes to man, I've already written the awk).
It's legit to use cat to just print out a file.
Cat is bloated. Use '<'
@@bendover4728 No, use '>'. Files are bloated.
@@TheGruselmops LoL! You can use 'sudo find / -type f -delete' to get rid of bloating..
@@bendover4728 whahaha
echo "When The Band goes Filing in" | awk '{print substr($1,1,1)substr($2,1,1)substr($5,1,1)}'
Is it „wtf“?
@@eNNercY Yes :)
@@eNNercY LOL, unix/linux is cool... but scripting is always quick and dirty... it will be in no way fast...
Arguing between cat and awk is just clickbait !!
awk '{print $0}' file.txt was even more AWKward than cat file.txt | grep "search"
Because grep is a lot faster at searching through a file than awk. And the same story for cat. Just add the command time in front of the command to see how long it took to execute. For example: time cat new.sh && time awk '{ print $0 }' new.sh. Loved the examples.
BEGIN { print "Luke Smith" }
😉
sed "1 i Luke Smith"
I think Luke has a point (performance) whereas DistroTube is looking for some topic to talk about
Of course, why would you do 「head new.sh」 when you can just write 「awk '(NR>=0 && NR<=10)' new.sh」?
The tone of the video is not very promising (I am excited)
Awk is great and is one of the commands I love the most... Only problem being I don't even know how to use it.
If you do not use awk yet, consider gawk (GNU awk) or nawk (new awk). They are both more powerful.
It's sort of like using a shovel instead of a spoon imo
sed 11q is not the same as head it seems. Shouldn't it be sed 10q?
echo "cares one No" | cat | cat | awk '{ print $3, $2, $1 }'
Ha ha ha.... awkward ;-)
7:42
How did that appear? What kind of sorcery are you doing?
I shall give thee my like Mister Derek Distrotube!
The magic of Kdenlive. I wonder if awk can edit video? Hmm...
Looks like a good way to possibly replace a lot of grep and sed uses, but when on earth would you replace cat with awk? That's working harder, not smarter.
I hardly ever use cat. I use less most often. The files I deal with are more than just one screen long. I also hardly ever need just one or two key words.
Head can be a lot shorter: awk 'NR<11'
More typing.
Well, although awk can replace cat or grep even if it's bulkier, saying that it can replace sed is quite a big statement. I do a lot of parsing with sed, quick and simple, and I can't see why you would trade that for the awk experience. For splitting lines into columns it's fine, though.
How about using the right tool for the job? Usually that's whatever requires the least code.
Big brain time: trade awk for perl
Have you ever maintained a GNU/Linux package, and if so, which one?
Use programs on what they are designed for. It's like drinking coffee with a bowl instead of a mug, of course you can do that, but what's the point?
Let's say I got 5000 recovered text files in a folder.
What do I do to remove text files containing certain text?
Iterate through the files
For each file, grep keyword, if output, rm file.
Read these two articles and you'll find the answer: shapeshed.com/unix-xargs/
alvinalexander.com/unix/edu/examples/find.shtml
Cheers
Okay while in the directory with the files I use the following
grep -il "keyword" *.txt | xargs -n1 -i mv {} subdirectory/
I decided not to remove the files just as a precaution.
Seems to be working...
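For anyone landing on this thread later, a minimal sketch of the preview-then-act approach it converges on (directory and file names invented; note the usual caveat that plain xargs breaks on filenames containing spaces or newlines):

```shell
# Toy directory standing in for the 5000 recovered files (made up)
mkdir -p /tmp/recovered && cd /tmp/recovered
printf 'contains keyword here\n' > a.txt
printf 'clean file\n' > b.txt

# -l prints matching filenames only; review this list before acting on it
grep -il 'keyword' *.txt

# then remove exactly those files
grep -il 'keyword' *.txt | xargs rm --
```

Swapping rm for "mv {} subdirectory/" as above is the safer variant, since nothing is destroyed.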
Are all the awk commands you showed in this video POSIX-compliant?
I stuck with POSIX for this vid. There are some non-POSIX ways of doing a lot of this stuff too.
It would be really cool if you showed things awk can do that the others can't. It sounds like awk is bringing a sledgehammer to drive a small nail a lot of the time. It's typically fewer characters to do something with sed or grep, so when they do the job (which is often), why would you type more? I would only use awk when I can't do the same thing faster some other way. Is there any performance difference between these commands?
Imagine a carpenter making a video on using solely one specific hammer for all. You’d laugh about that.
I wouldn’t abandon all other CLI tools.
Besides the tool to replace them all .. wasn’t that Perl?