We use Spark for our data pipeline at work -- we have tables with 10+ billion records, and our applications end up moving trillions upon trillions of records per month. Unfathomable numbers that Spark is capable of handling. Great video!
Yeah, it's insane! Thanks so much.
that's the power of distributed systems and parallel computing... computer science is beautiful
I'm a freelance data scientist and I'm really thankful to have found this video, Greg. Couldn't ask for more! Thank you so much. Good luck with everything. 🙏
That's awesome, best of luck with that! And you're very welcome, it's my pleasure 😊
Thank you for sharing this with the world. I'm currently a supply chain analyst and an aspiring supply chain data scientist 🙏
That's excellent to hear and very exciting Joshua! I wish you the best of luck 🥰
I'm just getting into DataBricks and PySpark and this introductory tutorial was a great starter.
Awesome! Hope that goes well :)
Your explanation is clear and the examples are practical and useful for beginners. Thanks a lot and keep it up!
I really appreciate this. You're very welcome 😃
Awesome video. I love using spark at work
Just the type of samples we need to begin with. Meaningful content, thanks.
Glad you enjoyed it!
Thanks for sharing, appreciate the quick rundown on this stuff
Glad to hear it!
Greg, thank you so much. I am new to PySpark, and your video explains things very well; the simple examples let me follow along and write the code in my own Python notebook to try it out. Will watch your DataFrame basics video next.
Amazing! Sorry for the late reply
You are awesome, delivering just the right videos. Subscribed a few days back and just hit notifications on for you, because I want to watch all your videos.
Well that's really great to hear! Thanks so much Tamzid!
No words man! Simply loved it. Appreciate your efforts.
Really glad to hear that! Thank you 😊
Thanks Greg for the wonderful explanation!!
you are a great teacher... keep doing what you do my man
Which big data tools should a beginner learn, and from where? (Please provide some resources.)
Of course I'd recommend my channel - SQL and Spark are the most important ones in my opinion :)
You are awesome, thanks for sharing your knowledge with the world
I really appreciate that Hamid!!!
Concise and very well explained! Thank you so much!!
Thank you and you're very welcome!
Never used Spark before. Thank you.
Same for me, for the longest time; PySpark is a life changer though!
Explained so well. 5 stars. Love to see more videos.
Really glad to hear it thanks so much!
Good PySpark primer! Others are either too lengthy or too short and vague.
Thanks so much I'm really glad to hear that! :)
Now that's what I was looking for!
Very fine details covered. Really useful, and it made the Spark concepts easy to understand.
Really glad to hear that.
This video is really helpful. Thanks a lot, Greg.
You're super welcome!
Thank you for great video and for useful education links!
You're super welcome 😃
What amazing content you're putting out here, man... thanks for everything!
Thanks so much for the kind words. You're very welcome 🤠
Great overview. Thanks
Very good examples. Thanks man :)
Glad it helped!
Hi Greg! Great video. Do you have one that explains how to convert Spark to DataFrames and vice versa? We pull millions of rows from CSVs and are looking to do transformations before dropping them into a DB.
Also, how does the distributed computing work on a single computer? Does it just distribute across the CPU cores?
This is an awesome video. I wonder, however, whether you could explain why the end result shows numbers with 12 digits. Didn't your set of numbers only go up to a million, i.e. 6 digits?
You also referred to your hour-long PySpark course. Would you be able to link to it in the show notes, please? Thanks!
At a certain point he squared all the numbers in the RDD and then kept using the squares from then on: 999,999² = 999,998,000,001, for example, which is 12 digits.
Hi bro, could you please make a video on the learning process for big data, and which job roles need which big data skills? I'm really confused about where to start and what to learn!
I know Python and SQL.
I learned some basics of HDFS, Hive, and Sqoop.
Now I'm trying to learn PySpark.
Thanks for the feedback, I'll keep this in mind!
Cool video, thanks for making it
Concise and well presented 👍
Very glad you found it useful, James!!
@greg, please share the link to the 1-hour video; I'm unable to find it.
such a good tutorial
Great, great content! BTW, please give us the link to the hour-long Spark tutorial mentioned at the end. Thanks a lot.
Thanks! Here you go: ruclips.net/video/8ypIRp6DPew/видео.html
appreciate this vid. thanks man
Nicely explained.
Great stuff! Thanks
You're very welcome ☺️
What was your degree in, Computer Science or a Data Science course?
I'm in my third year of a Computer Science BSc and I feel like I'm at a disadvantage for Data Science. We didn't learn statistics or have many math modules.
Most Data Science jobs require a Master's or PhD, but I don't want to get a Master's straight after uni, so I'm looking at Data Engineering since they accept BScs. Is that a realistic path into Data Science, or am I wasting my time?
I'm a statistics major. I don't think you're at a disadvantage; people widely respect computer science majors. If anything, I'd feel I'm at a disadvantage lol. But agreed, you get fewer stats courses. I would think some certificates and projects would be enough without needing a Master's, unless you're aiming for FAANG or other top jobs.
This video may help: ruclips.net/video/08G-u9HN8Kc/видео.html
Very useful and interesting! Subscribed :)
Glad to hear it, thanks a ton!
Greg, I had a question on PySpark... how do I find the latest Parquet files stored in an HDFS path using PySpark code?
Sorry I don't know! 🤔
It looks like using NumPy or pandas. What is the difference between those and PySpark?
It looks very similar to us coders, which is great. But pandas and NumPy are mainly for dealing with data on the computer you're using; Spark lets us distribute our workloads across a cluster of machines.
@@GregHogg Thank you!
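To make that difference concrete, here is a minimal sketch of the same computation done locally with pandas and distributed with Spark. It assumes a SparkContext named sc is already set up, as in the video:

import pandas as pd

s = pd.Series(range(1_000_000))
print((s ** 2).sum())                     # computed entirely on this machine

rdd = sc.parallelize(range(1_000_000))    # data is split into partitions
print(rdd.map(lambda x: x ** 2).sum())    # partitions can be processed across a cluster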
Took a minute to get going but well done
Awesome starter!
Thanks for the tutorial. It was simple and easy to follow. However, when I tried the code in Colab, just typing "sc" did not invoke Spark. Are there any prerequisites to install in Colab before using "sc"?
Please check out my notebook. You'll need to pip install PySpark, and write a line or two of code to set it up
@@GregHogg Thank you Greg.
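For anyone stuck at the same point, a minimal Colab setup sketch of that "line or two of code" (the sc name then matches the video):

# run this in a Colab cell first:  !pip install pyspark
from pyspark import SparkContext

sc = SparkContext.getOrCreate()            # the sc object used in the video
rdd = sc.parallelize([1, 2, 3, 4, 5])      # quick sanity check
print(rdd.map(lambda x: x * x).collect())  # [1, 4, 9, 16, 25]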
Can you share the link to the hour-long tutorial you mentioned at the end? I couldn't find it in your Spark playlist.
Here you go: ruclips.net/video/8ypIRp6DPew/видео.html
very good, thanks!
You're very welcome Natalia!
Can you also use apply instead of map?
Probably
And what are the machines we parallelize the work on?
Do they have to be configured?
I mean, if PySpark or Spark parallelizes on a cluster, do we have to configure the cluster too?
Someone has to configure it. Probably won't be your job though. You'll just select it, kinda like a Python virtual environment, and act as if it's the same as in this video because nothing changes from the programming point of view :)
@@GregHogg understood. Thx :)
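As a rough sketch of that point: from the programming side the code barely changes, mostly the master URL does. The spark://host:7077 address below is a hypothetical cluster address:

from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .master("local[*]")   # all cores on this machine; a real cluster would be e.g. "spark://host:7077"
         .appName("demo")
         .getOrCreate())
sc = spark.sparkContext
print(sc.parallelize(range(10)).sum())   # identical code either way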
Sensational!
Thank you 😊😊😊
What is the URL to practice? How do I set up data for practicing?
Thank you! You made me notice I accidentally removed the notebook from the video description; you can grab the notebook code there now. You can also get PySpark in Google Colab very easily: simply !pip install pyspark, then import pyspark, and continue following the steps in this video.
nice explanation
Thanks a bunch Javid! :)
Great!
Thank you!
Hello, and thanks for this video. I've been trying to follow your averaging approach, but I receive an error:
avg = nyt.map(lambda x: (x.title, int(x.rank[0])))
grouped = avg.groupByKey()
grouped = grouped.map(lambda x:(x[0], list(x[1])))
averaged = grouped.map(lambda x: (x[0], sum(x[1]) / len(x[1]) ))
averaged.collect()
'TypeError: Invalid argument, not a string or column: [1, 3, 7, 8, 12, 14, 20] of type <class 'list'>. For column literals, use 'lit', 'array', 'struct' or 'create_map' function.'
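A likely cause, assuming the notebook also ran from pyspark.sql.functions import * at some point: that star import shadows Python's built-in sum, so sum([1, 3, 7, ...]) calls the SQL function on a plain list and raises exactly this TypeError. A sketch of the fix under that assumption:

from builtins import sum   # re-bind Python's built-in sum over pyspark.sql.functions.sum

averaged = (nyt.map(lambda x: (x.title, int(x.rank[0])))
               .groupByKey()
               .mapValues(lambda ranks: sum(ranks) / len(ranks)))  # mean rank per title
print(averaged.collect())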
The sc command is not working in my Colab the way it works in this video... can anyone help?
What is the future of Spark? Is Flink replacing it? Is it worth learning for a career in big data?
I don't know what Flink is.
@@GregHogg Thanks for the reply, Greg. Can you please also tell me the career scope of Apache Spark going forward?
@@abdullahsiddique7787 Spark is and will stay essential for data science, ML, analytics, and big data for a long time.
@@GregHogg thanks gregg appreciate your quick response
@@abdullahsiddique7787 Of course!
Good for learning RDDs.
Hi, I'd like to ask you a question.
I'm working on a project about feature selection for linear regression with Apache Spark. When I try to execute the PySpark code, it gives an error that pyspark isn't defined, and I've tried to figure it out in many ways but couldn't solve the problem 💔
thanks mate
Very welcome!
Thank you
You're very welcome!
Hi Greg, how can I convert .csv files into .txt files (with a comma as the delimiter) using PySpark? Do you have a code snippet?
I think you can just change the extension from .csv to .txt.
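If you do want Spark to do the work, here is a minimal sketch; input.csv and output_txt are placeholder paths, and since a CSV is already comma-delimited text, renaming is often enough, as noted above:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df = spark.read.csv("input.csv", header=True)   # placeholder input path
# write each row back out as one comma-joined line of plain text
# (saveAsTextFile creates a directory of part files)
df.rdd.map(lambda row: ",".join("" if v is None else str(v) for v in row)) \
      .saveAsTextFile("output_txt")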
Hi Greg, which is better among data science, data analytics, and machine learning/AI? Could you please give a suggestion?
Data science / ML
I don't understand why in tutorials like this I often get errors saying, "module x has no attribute 'y.'" In this case, I can't get Python to recognize parallelize.
Not sure sorry!
Hi @Greg Hogg,
I can't seem to access the "sc" object on Google Colab. Which library lets you use that object?
github.com/gahogg/RUclips/blob/master/PySpark%20In%2015%20Minutes.ipynb
@@GregHogg cheers!
Great!
You mentioned an hour-long Spark video; I can't find it.
ruclips.net/video/8ypIRp6DPew/видео.html
@@GregHogg Could you please paste this link in the description?
@@agnelamodia please see above
Back up from the camera, my dude. I feel like you're staring directly at my soul.
Maybe I am
@@GregHoggget em bro.
How is this more useful than NumPy?
NumPy works on one computer. Spark works on as many as you want
@@GregHogg thanks!
I thought the performance difference between Scala and Python wasn't an issue anymore.
I personally doubt it. I'm not an expert on this one, but I'd be pretty surprised if Python weren't significantly slower than Scala. Of course, practically speaking they're both very fast, but in computational time I would suspect Python is much slower. Thanks!
You are correct, and I am incorrect! Thank you for updating me!
I think we are both correct. I've been reading up on it, with regard to whether to refresh my Scala or keep chugging away with PySpark. Bottom line: it's good to know both; it depends on the use case. In general, Scala will consistently perform better. However, from what I've read, it isn't always about gains based solely on performance or any one factor; there are pros and cons, and the cumulative gains can weigh either way. For example, Python's rich ecosystem can make it faster to get a result than trying to do the same thing in Scala. Another interesting discussion you should start is Koalas. I wrote a blog post trying to get people to weigh in: forums.databricks.com/questions/65646/thoughts-on-if-its-worth-it-to-work-in-koalas.html
@@sndselecta Sorry I missed this! Absolutely and thank you for the great reply.
The Spark people themselves advise against learning Scala for only marginal gains over PySpark.
Hey Greg,
The knowledge in the video is great, but the background music is distracting.
PySpark seems to be pandas on steroids, plus distributed resource usage.
A detailed video probably would be more helpful.
great video… but please step away from the camera sir
Ouch
@@GregHogg just kidding with you! great content
So you’re just gonna teach us the wrong way of doing things then leave us on a cliff hanger? 😅
Thanks Sir
this was an amazing and clear video! thanks so much!
Very glad to hear that!!