Doing this basically as a task for multiple questions in Uni. This is the only video that really made it click. Please do more tutorials, your content is fantastic!
Thank you mate. I work in IT. I don't get a lot of time. Any suggestions on a next video?
@@hackpens2246 we don't know what we don't know 😂
Great introduction to the topic. A few things that I think are worth mentioning once people have learned the commands that were demonstrated:
If the logs you're using have a variable number of spaces between columns (to make things look nice), that can mess up cut. To get around that you can use `sed 's/  */ /g'` to replace any run of spaces with a single space. You can also use awk to replace the sed/cut combo, but that's a whole different topic.
uniq also has the extremely useful -c flag, which adds a count of how many instances of each item there were.
And as an aside, if people want to cut down on the number of commands used, you can do things like `grep expression filepath` (instead of piping cat into grep) or `sort -u` (on a new enough system), but in the context of this video it is probably better that people learn about the existence of the standalone utilities, which can be more versatile.
Once you're confident with the tools mentioned in the video, but you still need more granularity than the grep/grep -v combo, you can use grep's basic pattern characters, which represent concepts like "the start of a line" (^) or "any amount of anything" (.*). For example `grep "^Hello.*World"` means any line that starts with Hello and at some point also contains World, with anything or nothing in between/after. (These look like the shell's globbing wildcards, but behave slightly differently.) If that still isn't enough, you might want to look into full regular expressions with grep, but they can be even harder to wrap your mind around if you've never used them before. (If you don't really understand globbing or regular expressions just from reading this, that's fine; I'm just trying to give you the right terms to Google, because once you know something's name it becomes infinitely easier to find resources on it.)
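To make the space-squeezing and `uniq -c` tips concrete, here is a tiny self-contained run; the log-style lines are made up for illustration:

```shell
# Two users, with a variable number of spaces between columns
printf 'alice   10.0.0.1\nbob  10.0.0.2\nalice   10.0.0.1\n' > sample.txt

# Squeeze every run of spaces down to one, take the 2nd column,
# then count how many times each value appears
sed 's/  */ /g' sample.txt | cut -d ' ' -f 2 | sort | uniq -c
```

(`tr -s ' '` is another common way to squeeze the runs of spaces.)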
I am checking this video 3 years after upload. The tutorial is on point and clear.
This is one of the most helpful tutorials out there showing how powerful grep and pipes are. Thanks for sharing, and I hope you make more cool stuff.
This is a wonderful video. A perfect set of commands to learn in order to crack interviews.
Glad you think so!
that was one of the most useful and simple tutorials I've ever seen
Glad you think so!
This video has been hugely helpful to me when parsing through log files of numerous types manually (IPtables, Netflow, SSH). Thank you very much mate.
Fantastic to hear mate
I was searching for all the command combinations for reading logs to extract information. This video is great.
You are very good at Linux, hope you continue sharing your knowledge!
this is exactly what I was looking for and even more! thank you so much!
Thanks! That was informative. The only thing I would have done differently is flip the order of uniq -d and sort. Fewer items to sort after uniq filters them out.
Honestly, I was looking for a long time for some good Linux videos, and sir, I can tell you, your videos are gold! Thanks a lot!
Glad to hear that!
Thanks.. very helpful, and I will be using this as a reference from now on
Concept explained well in a short video.
Simple and straightforward ❤
Awesome tutorial on cat and grep, Thanks...
It would be nice to see some range greps, meaning pull out all the IPs that hit the system between 20:00 and 22:00, or something like that
Using --since ... and --until ... (e.g. with journalctl)
You sir are incredible at teaching
great video, grep -v is quite useful. thanks for sharing this
Glad it was helpful!
😊 great videos 👍 thank you!!!
If I wanted to count the number of times that each unique instance showed up, what would I do? Would I do the uniq and then do a word count for each instance by grepping for that specific phrase?
Awesome video! Please don’t stop making Linux, bash, ethical hacking related videos. Thank you. Subscribed!! 😊
Thanks a lot, that's very helpful
I would like to see more cases of analyzing logs, to learn from you and build more experience in that regard. Thanks
Short and very useful. Impressed :)
REALLY HELPED THANK YOU SO MUCH
@Hackpens very informative video, mate. Thanks for sharing. What is this tool you are using? Do you have any videos for beginners? I really need to learn this stuff. Kindly help.
Thanks again.
Thank you, this video was exactly what I needed
Very good video, thanks for sharing. Greetings from Mexico
From description: > _"I show you how to filter information from a .log file, and you find out just how important strong passwords really are."_
I always wondered whether pattern matching has something to do with password security, but then I thought: you have to have the passwords to apply pattern matching to them, right? Because the password input field of a site doesn't accept regex, and generating exhaustive strings from a regex doesn't help either...
So what scenario are we imagining when talking about regex in the context of secure passwords?
Great info and an enjoyable watch 👍👏
Hi Sir, I have a log file which I cannot see after the command cd /var/log. Please give me some suggestions. Thank you
Use pwd to make sure you're in the correct directory, then ls -a to list all the files in that directory. If it's not there, it's not there.
you have a great way of explaining
Glad you think so!
Nice tutorial. I'm interested in what you have on that server that is gaining that much attention.
Thank you. That server has nothing except a redundant web site on it, but the site name has the word "hack" in it, and the hackers don't know it's not worth hacking. I feel sorry that they try haha
@@hackpens2246 so you say 😊
thanks for the amazing video
love it
You could configure fail2ban not only for sshd but also for nginx requests to catch 400-404 errors.
You want the duplicates if they are from different source IP addresses, as this means that different people have tried the same usernames to access your system
I like the way you think ;)
From the IP address, can you find out their location?
If the user isn't using a VPN service, then yes (an approximate location) using a publicly available tool, like whatismyipaddress.com/ip-lookup
Sir, can we use awk instead of cut?
How can I see all files on a hard drive or USB? And how can decrypted files be erased or overwritten with sudo shred?
thank you this is very helpful
Thx. Very helpful.
Gold sir 🔥
Very useful tutorial for me
I’m on Windows, and I’m currently tasked with finding stuff in a log file they gave me
Hi, do you know how to copy the log file from the machine a Cowrie honeypot is on?
Back in the late 90's I wrote a script to track backup tape usage.
Please show us!
Now dump all the unique IPs into a text file and run nslookup on each one. $50 says they are all located in China or Russia, at least 98-99% of them. At least that's what I always end up finding.
awesome video
Thanks for a great vid!
Nice, except cut -d " " -f x isn't working for me. I will dig further to figure out why...
Hi,
great videos again :D
Is this amount of attempted logins normal? If so, this is a bit scary...
Is there a way to "hide" the server? I'm a beginner, please excuse a potentially dumb question/statement.
YES YES YES YES!! MORE OF THIS!!
Thank you!
Good vid, thank you
thank you for this helpful video for a dummy like me!
Glad it was helpful!
The 16th field, from experience, still blows me away
For compressed files: zcat and zgrep
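A tiny self-contained demo of those two, for rotated, gzipped logs (the .gz file here is generated on the spot, just for illustration):

```shell
# Build a small compressed log
printf 'Ban 10.0.0.5\nFound 10.0.0.6\n' | gzip > fail2ban.log.1.gz

# zgrep searches inside the .gz directly, no unpacking needed
zgrep "Ban" fail2ban.log.1.gz

# zcat streams the decompressed content, so the usual pipelines still apply
zcat fail2ban.log.1.gz | grep "Ban"
```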
I learned something new, but I was searching for a different thing. Here it is; kindly help with: "How to grep a log file within a specific time period in Linux and with a specific keyword"
That's a nice idea for a video, thanks. In the meantime, you can use grep for dates and times.
Try this, which should bring you everything in the Auth log that happened between 8pm and 9 pm on Aug 26th:
pi@Node1:/var/log $ cat auth.log | grep "Aug 26 20:"
Aug 26 20:00:01 Node1 CRON[14365]: pam_unix(cron:session): session opened for user pi by (uid=0)
Aug 26 20:00:03 Node1 CRON[14365]: pam_unix(cron:session): session closed for user pi
Aug 26 20:01:01 Node1 CRON[14380]: pam_unix(cron:session): session opened for user pi by (uid=0)
Aug 26 20:01:01 Node1 sudo: pi : TTY=unknown ; PWD=/home/pi ; USER=root ; COMMAND=/usr/bin/apt-get update
Aug 26 20:01:01 Node1 sudo: pam_unix(s..................................
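If you want a window wider than one hour, the same trick works with an extended-regex alternation over the hours; the sample lines here are made up for the demo:

```shell
# One line each at 19:xx, 20:xx and 21:xx
printf 'Aug 26 19:59:59 Node1 sshd: a\nAug 26 20:30:00 Node1 sshd: b\nAug 26 21:15:00 Node1 sshd: c\n' > auth_sample.log

# Everything timestamped 20:xx or 21:xx on Aug 26 (matches the last two lines)
grep -E "Aug 26 (20|21):" auth_sample.log
```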
@@hackpens2246 Sure, I'll try and update here
Without changing directory, how can we do it? I don't want each line's content, just a display of which log files are present in all the other subdirectories as well
You're awesome, thank you Sir
So are you. Thank you!
Good
thank you
You're welcome!
Cool
An idiom I like to use is to rank occurrences of things. If I were interested if there are repeated items, after I sorted the lines, I’d do a unique count and a numerical sort, like this:
… | sort | uniq -c | sort -rn | head
So I can see the top 10 repeated lines.
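A toy run of that idiom, with invented input, so you can see the shape of the output:

```shell
# Five lines, one value repeated three times; rank by frequency
printf '10.0.0.1\n10.0.0.2\n10.0.0.1\n10.0.0.1\n10.0.0.3\n' \
  | sort | uniq -c | sort -rn | head
# The most frequent line ends up on top, prefixed with its count
```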
Very nice. I like that.
How to filter words having more than 8 characters 🤔🤔
Hi. I'm not sure what you mean. I have just run this command [ cat fail2ban.log | grep "fail2ban.actions" ] and it returned results. 16 characters...
Ok, and thanks for the reply
For example, if I want to crack the WPA password of a wifi network, I need only 8 characters or more, not less than 8.
So I am asking how to print only words of 8 characters or more, not less than 8.
@@Anil-vy5vy this is not really useful for cracking wifi passwords. You need to look at capturing handshakes and then perhaps use aircrack-ng or John the Ripper to perform a dictionary attack on the handshake. Alternatively, you could use a utility like Wifite, which performs a range of different attacks for you, as long as you have all of its dependencies installed.
@@hackpens2246 sorry, I didn't see your message, but grep also helps for sorting words by length, to give only output of a certain length. For example:
grep -E '(\w{11,})' modifided.txt > greter11.txt
The one below is better, because we can output words starting from a certain length:
awk 'length >= 8 && length
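The awk command above got cut off, so here is a self-contained sketch of both length-filtering approaches on an invented word list (GNU grep's `\w` shorthand is assumed):

```shell
printf 'cat\nelephant\nhippopotamus\n' > words.txt

# grep -E: keep lines containing a run of 8+ word characters
grep -E '\w{8,}' words.txt

# awk: keep whole lines whose length is at least 8 characters
awk 'length >= 8' words.txt
```

On this list, both print elephant and hippopotamus and drop cat.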
genius
How to filter logs with root user details and a 200 response?
You can switch users with "su root" if you know the root password
I'm not sure I understand what you mean by "200 response"
@@hackpens2246 Thanks for the reply. Even I am not sure about that.
How can I check which ports are open on a VM from outside?
@@ashok-hg8se I'm sorry, but this is a video that shows you how to filter log files. I can't offer advice about auditing networks on this video's comment section.
I put a custom message saying it's the FBI'S system that displays on every ssh attempt
Good work haha
I would suggest not to clear the screen so often. It could be helpful to see the line structure you are working on.
Two or three presses of [ Enter ] would do ...
I'll bear that in mind. Thank you :)
*grep "a_string" filename* - there is no need to use cat in either of the two cases presented in this video.
That's true. However, since I used cat to look at the files and decide what strings I was going to grep for, it was easier to leave that there and repeat the command.
@@hackpens2246 Got it. In such a case I just use *grep '' filename* . Then I take a look at the file and replace *''* with the appropriate grep options and/or a string to search. Performance aside, is *grep ''* equivalent to *cat* ?
@@JJ-rc1ie hi. No, cat is short for concatenate... It basically outputs the content of a file. Grep is searching the file for lines containing a certain string or integer or whatever. In this case, I used cat to print the content of the file to the screen so I could look at it and decide what string I was going to filter for, then I used it to pipe the content of the file into the grep command
@@hackpens2246 Yes, I know the basics of *cat* and *grep* . But I also noticed that grepping for an empty string, i.e. *grep "" filename* seems to be equivalent to *cat* . Don't you agree? *P.S.* *''* in my earlier comment is not a typo but an empty string.
@@JJ-rc1ie it does the same thing. Yes, you're right 😉
6:36 someone tried Minecraft lol
farhan was here
I'm glad you were :)
ain't it grep -w instead of grep -v
-v works. I'm not sure what you were expecting from grep -w: it matches the string only when it appears as a whole word, so on many inputs it looks the same as plain grep. The grep -v command shows lines that DON'T contain the string you specify, whereas grep and grep -w show lines that DO contain it.
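For anyone following along, a tiny demonstration of how the three differ, on invented lines:

```shell
printf 'user login ok\nuser log rotated\nsomething else\n' > demo.txt

# Plain grep matches "log" as a substring, so it also hits "login" (2 lines)
grep "log" demo.txt

# grep -w matches "log" only as a whole word, so "login" is skipped (1 line)
grep -w "log" demo.txt

# grep -v inverts the match: only lines WITHOUT "log" (1 line)
grep -v "log" demo.txt
```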
awk '{print $11}'
Thank you so much for this tutorial, it helped me a lot with understanding cat, grep and sort.
Are you able to tell me what this command would do: "cat -rf ~/syslog | sort | grep -iaE -A 5 'cpu[1-7].*(7[0-9]|8[0-9]|100)' | tee cpu.txt"? Specifically the numbers after cpu, which look to me like a timestamp.
Probably a little late to be useful, but the numbers are regular expressions (enabled by the -E flag), so it means:
After "cpu" there needs to be [1-7]: one digit between 1 and 7 (inclusive). Then .*: any combination of any characters, of any length, including nothing at all. After that we need one of three options: 7 followed by any digit, 8 followed by any digit, or the number 100 (so 71, 80, and 100 are all valid, but 180 or a bare 7 are not; 700 theoretically wouldn't be, but because we didn't specify what has to come after, grep will allow it, since the last 0 is considered part of whatever comes after our expression).
Some things that will match it:
cPu1 77
cpu6hellohowa re you100
cpu788
cpu66666666100
Things that won't:
cpu0 80
coy1 70
I'm not sure if I was very clear with that description, but regular expressions can sometimes be a mess to explain in words.
The -A 5 flag means that for every matching line, grep will also print out the 5 lines after the match, for added context.
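You can sanity-check that breakdown by piping a few test strings straight into grep (using -i to mirror the case-insensitive flag in the original command):

```shell
# The first two lines should match, the last two shouldn't
printf 'cPu1 77\ncpu788\ncpu0 80\ncoy1 70\n' \
  | grep -iE 'cpu[1-7].*(7[0-9]|8[0-9]|100)'
```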
Quick revision:
#cat auth.log | grep "invalid" | cut -d " " -f 11 | sort | uniq | wc -l
#cat fail2ban.log | grep "Ban" | grep -v "Restore" | cut -d " " -f 16 | sort | uniq -d > ~/uniq_ips.txt
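Riffing on the second revision line: swapping `uniq -d` for the `uniq -c | sort -rn` ranking mentioned earlier in the comments turns it into a top-offenders list. The log file here is a made-up stand-in, with the interesting value in field 3:

```shell
# Fake log: field 3 stands in for the attempted username / banned IP
printf 'x y root\nx y admin\nx y root\nx y pi\nx y root\n' > auth_sample.log

# Count occurrences per value instead of just flagging duplicates
cat auth_sample.log | cut -d " " -f 3 | sort | uniq -c | sort -rn
```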