Apache Spark / PySpark Tutorial: Basics In 15 Mins
- Published: 24 Mar 2021
- Thank you for watching the video! Here is the notebook: github.com/gahogg/RUclips-I-m...
I offer 1 on 1 tutoring for Data Structures & Algos, and Analytics / ML! Book a free consultation here: calendly.com/greghogg/30min
Learn Python, SQL, & Data Science for free at mlnow.ai/ :)
Subscribe if you enjoyed the video!
Best Courses for Analytics:
---------------------------------------------------------------------------------------------------------
+ IBM Data Science (Python): bit.ly/3Rn00ZA
+ Google Analytics (R): bit.ly/3cPikLQ
+ SQL Basics: bit.ly/3Bd9nFu
Best Courses for Programming:
---------------------------------------------------------------------------------------------------------
+ Data Science in R: bit.ly/3RhvfFp
+ Python for Everybody: bit.ly/3ARQ1Ei
+ Data Structures & Algorithms: bit.ly/3CYR6wR
Best Courses for Machine Learning:
---------------------------------------------------------------------------------------------------------
+ Math Prerequisites: bit.ly/3ASUtTi
+ Machine Learning: bit.ly/3d1QATT
+ Deep Learning: bit.ly/3KPfint
+ ML Ops: bit.ly/3AWRrxE
Best Courses for Statistics:
---------------------------------------------------------------------------------------------------------
+ Introduction to Statistics: bit.ly/3QkEgvM
+ Statistics with Python: bit.ly/3BfwejF
+ Statistics with R: bit.ly/3QkicBJ
Best Courses for Big Data:
---------------------------------------------------------------------------------------------------------
+ Google Cloud Data Engineering: bit.ly/3RjHJw6
+ AWS Data Science: bit.ly/3TKnoBS
+ Big Data Specialization: bit.ly/3ANqSut
More Courses:
---------------------------------------------------------------------------------------------------------
+ Tableau: bit.ly/3q966AN
+ Excel: bit.ly/3RBxind
+ Computer Vision: bit.ly/3esxVS5
+ Natural Language Processing: bit.ly/3edXAgW
+ IBM Dev Ops: bit.ly/3RlVKt2
+ IBM Full Stack Cloud: bit.ly/3x0pOm6
+ Object Oriented Programming (Java): bit.ly/3Bfjn0K
+ TensorFlow Advanced Techniques: bit.ly/3BePQV2
+ TensorFlow Data and Deployment: bit.ly/3BbC5Xb
+ Generative Adversarial Networks / GANs (PyTorch): bit.ly/3RHQiRj
We use Spark for our data pipeline at work -- we have tables with 10+ billion records, and our applications end up moving trillions upon trillions of records per month. Unfathomable numbers that Spark is capable of handling. Great video!
Yeah, it's insane! Thanks so much.
that's the power of distributed systems and parallel computing... computer science is beautiful
Awesome video. I love using spark at work
Your explanation is clear and the examples are practical and useful for beginners. Thanks a lot and keep it up!
I really appreciate this. You're very welcome 😃
I'm a freelance data scientist and I'm really thankful to have found this video, Gregg. Couldn't ask for more! Thank you so much. Good luck with everything. 🙏
That's awesome best of luck in that! And you're very welcome it's my pleasure 😊
Hey that's super interesting to hear a freelance data scientist who actually needs pyspark!
No words man! Simply loved it. Appreciate your efforts.
Really glad to hear that! Thank you 😊
You are awesome. Just delivering the right videos. Subscribed a few days back already but hit notifications on for you rn. Cause I wanna watch all your videos
Well that's really great to hear! Thanks so much Tamzid!
What amazing content you're putting out here, man... thanks for everything!
Thanks so much for the kind words. You're very welcome 🤠
Thanks Greg for the wonderful explanation !!
you are a great teacher... keep doing what you do my man
Thanks for sharing, appreciate the quick run down on this stuff
Glad to hear it!
Greg, thank you so much. I am new to PySpark, and your video explains things very well. The examples are simple, and I was able to follow along and write them in my own Python notebook to try them out. Will watch your DataFrame basics video next.
Amazing! Sorry for the late reply
Concise and very well explained! Thank you so much!!
Thank you and you're very welcome!
this was an amazing and clear video! thanks so much!
Very glad to hear that!!
Very good examples. Thanks man :)
Glad it helped!
Just the type of samples we need to begin with. Meaningful content. thnx.
Glad you enjoyed it!
Explained so well. 5 stars. Love to see more videos..
Really glad to hear it thanks so much!
This video is really helpful. Thanks a lot Gregg.
You're super welcome!
I'm just getting into DataBricks and PySpark and this introductory tutorial was a great starter.
Awesome! Hope that goes well :)
very fine details covered. really useful and easy to understand the spark concepts.
Really glad to hear that.
You are awesome, thanks for sharing your knowledge with the world
I really appreciate that Hamid!!!
Thank you for sharing this with the world. I'm currently a supply chain analyst and aspiring supply chain data scientist 🙏
That's excellent to hear and very exciting Joshua! I wish you the best of luck 🥰
Concise and well presented 👍
Very glad you found it useful, James!!
Thank you for great video and for useful education links!
You're super welcome 😃
Great overview. Thanks
Great stuff! Thanks
You're very welcome ☺️
Never used Spark before. Thank you.
Me too for the longest time; PySpark is a life changer though!
Cool video, thanks for making it
Now that's what I was looking for
such a good tutorial
Nicely explained.
Very useful and interesting! Subscribed :)
Glad to hear it, thanks a ton!
very good, thanks!
You're very welcome Natalia!
Sensational!
Thank you 😊😊😊
Awesome starter!
Great!
Thank you!
Hi , I'd like to ask you a question
I'm working on a project about feature selection for linear regression with Apache Spark. When I try to execute the PySpark code, it gives an error that pyspark is not defined, and I've tried to figure it out in many ways but couldn't solve the problem 💔
@greg, please share the link to the 1-hour video... I am unable to find it
Took a minute to get going but well done
nice explanation
Thanks a bunch Javid! :)
Greg, had a question on PySpark... how do I find the latest parquet files stored in an HDFS path using PySpark code?
Sorry I don't know! 🤔
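For anyone with the same question, here is a minimal sketch. The JVM FileSystem calls are an assumption (they rely on Spark's internal `_jvm`/`_jsc` gateway to the Hadoop API, and the path is a placeholder), so treat them as untested illustration rather than a definitive answer:

```python
# Sketch: find the newest parquet file in an HDFS directory from PySpark.

def newest_file(entries):
    """entries: iterable of (path, modification_time_ms) tuples."""
    return max(entries, key=lambda e: e[1])[0]

# With a live SparkSession named `spark` (assumption; untested here),
# the entries could be built via Hadoop's FileSystem API:
#
# hadoop_conf = spark._jsc.hadoopConfiguration()
# fs = spark._jvm.org.apache.hadoop.fs.FileSystem.get(hadoop_conf)
# path = spark._jvm.org.apache.hadoop.fs.Path("hdfs:///data/my_table")
# entries = [(s.getPath().toString(), s.getModificationTime())
#            for s in fs.listStatus(path)
#            if s.getPath().getName().endswith(".parquet")]
# print(newest_file(entries))

# Pure-Python demonstration with fake entries:
fake = [("part-0001.parquet", 100),
        ("part-0002.parquet", 250),
        ("part-0003.parquet", 180)]
print(newest_file(fake))  # part-0002.parquet
```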
Great, great content! BTW, please give us the link to the hour-long Spark tutorial mentioned at the end. Thanks a lot.
Thanks! Here you go: ruclips.net/video/8ypIRp6DPew/видео.html
Good PySpark Primer! Others are either too lengthy or short and vague.
Thanks so much I'm really glad to hear that! :)
Hey Greg,
The knowledge in the video is great but the background music is distracting.
Thank you
You're very welcome!
thanks mate
Very welcome!
Hello, and thanks for this video. I've been trying to follow along with your averaging example, but I receive an error:
avg = nyt.map(lambda x: (x.title, int(x.rank[0])))
grouped = avg.groupByKey()
grouped = grouped.map(lambda x:(x[0], list(x[1])))
averaged = grouped.map(lambda x: (x[0], sum(x[1]) / len(x[1]) ))
averaged.collect()
'TypeError: Invalid argument, not a string or column: [1, 3, 7, 8, 12, 14, 20] of type <class 'list'>. For column literals, use 'lit', 'array', 'struct' or 'create_map' function.'
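For anyone hitting the same TypeError: it usually means Python's built-in sum() was shadowed by PySpark's column function. That's an assumption here, since the full notebook isn't shown, but a wildcard import like `from pyspark.sql.functions import *` is the classic cause, and PySpark's sum() raises exactly this "not a string or column" message when handed a plain list. A minimal sketch of the fix:

```python
import builtins  # builtins.sum survives `from pyspark.sql.functions import *`

# In the snippet above, the last map would become:
# averaged = grouped.map(lambda x: (x[0], builtins.sum(x[1]) / len(x[1])))

# Pure-Python check using the ranks from the error message:
ranks = [1, 3, 7, 8, 12, 14, 20]
print(builtins.sum(ranks) / len(ranks))  # 9.2857...
```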
What was your degree in? Computer Science, or a Data Science course?
I'm in my third year for a Computer Science BSc and I feel like I'm at a disadvantage for Data Science. We didn't learn statistics or have a lot of math modules.
Most Data Science jobs require a Masters or PhD but I don't want to get a Masters straight after uni so I'm looking at Data Engineering since they accept BSc's. Is that a realistic path into Data Science or am I wasting my time?
I'm a statistics major. I don't think you're at a disadvantage; people very widely respect computer science majors. If anything, I'd feel I'm at a disadvantage lol. But agreed, you get fewer stats courses. I would think some certificates and projects would be enough without needing a masters, unless you're aiming for FAANG or the other top jobs.
This video may help; ruclips.net/video/08G-u9HN8Kc/видео.html
Which big data tools should a beginner learn, and from where? (Please provide some resources.)
Of course I'd recommend my channel - SQL and Spark are the most important ones in my opinion :)
Hi bro, could you please make a video on the learning process for big data, and which job roles need which big data skills? I'm really confused about where to start and what to learn!
I know Python and SQL.
I learned some basics of HDFS, Hive, and Sqoop.
Now I'm trying to learn PySpark.
Thanks for the feedback, I'll keep this in mind!
Hi @Greg Hogg,
I can't seem to access the "sc" object on Google Colab. Which library lets you use that object?
github.com/gahogg/RUclips/blob/master/PySpark%20In%2015%20Minutes.ipynb
@@GregHogg cheers!
The sc command is not working in my Colab the way it works in this video... can anyone help?
Good for learning RDDs.
Hi Greg, how can I convert .csv files into .txt files (with comma as delimiter) using pyspark? Do you have a code snippet?
I think you can just change the extension from CSV to txt
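To expand on that with a hedged sketch: renaming works for a single local file, since CSV is already comma-delimited plain text. If the goal is to do it through PySpark on distributed storage (the `spark` session and paths below are assumptions), one way is to read the CSV and write the rows back out as text files:

```python
# Turn a DataFrame row back into a comma-delimited line.
# Note: no quoting -- fields containing commas would need CSV-style escaping.
def to_line(row):
    return ",".join("" if v is None else str(v) for v in row)

# With a live SparkSession named `spark` (assumption; untested here):
# df = spark.read.csv("input.csv", header=True)
# df.rdd.map(to_line).saveAsTextFile("output_txt")  # writes part-* text files

# Local demonstration of the row formatting:
print(to_line(("a", None, 3)))  # a,,3
```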
Thanks for the tutorial. It was simple and easy to follow. However, when I tried the code in Colab, just by typing "sc" is not invoking spark. Is there any prerequisites - to be installed in Colab before "sc" ?
Please check out my notebook. You'll need to pip install PySpark, and write a line or two of code to set it up
@@GregHogg Thank you Greg.
Can you share the link to the hour long tutorial you mentioned at the end, couldn't find it in your spark playlist.
Here you go: ruclips.net/video/8ypIRp6DPew/видео.html
Hi Greg, which one is best among data science, data analytics, machine learning, and AI? Could you please give a suggestion?
Data science / ML
Can you also use apply instead of map
Probably
And what are the machines we parallelize the work on?
Do they have to be configured?
I mean, if PySpark or Spark parallelizes on a cluster, do we have to configure the cluster too?
Someone has to configure it. Probably won't be your job though. You'll just select it, kinda like a Python virtual environment, and act as if it's the same as in this video because nothing changes from the programming point of view :)
@@GregHogg understood. Thx :)
It looks like using NumPy or pandas. What is the difference between those and PySpark?
It looks very similar to us coders, which is great. But pandas and numpy are mainly for dealing with data on the computer you're using. Spark allows us to distribute our workloads across a cluster of machines
@@GregHogg Thankyou
What is the URL to practice? How to setup data for practicing?
Thank you! You made me notice I accidentally removed the notebook from the video description. You can grab the notebook code in the video description now. You can actually get PySpark in google colab very easily, with simply !pip install pyspark and then import pyspark, then continue following the steps in this video.
What is the future of Spark? Is Flink replacing it? Is it worth learning for a career in big data?
I don't know what flink is.
@@GregHogg Thanks for the reply, Gregg. Can you please also tell me the career scope of Apache Spark going forward?
@@abdullahsiddique7787 Spark is and will stay essential for Data science, ML, analysts and big data for a long time.
@@GregHogg thanks gregg appreciate your quick response
@@abdullahsiddique7787 Of course!
Back up from the camera, my dude. I feel like you're staring directly at my soul.
Maybe I am
@@GregHogg get em bro.
Pyspark seems to be pandas on steroids + distributed resources usage
You mentioned an hour long spark video, I can't find it.
ruclips.net/video/8ypIRp6DPew/видео.html
@@GregHogg Could you please paste this link in the description?
@@agnelamodia please see above
I don't understand why in tutorials like this I often get errors saying, "module x has no attribute 'y.'" In this case, I can't get Python to recognize parallelize.
Not sure sorry!
I thought the performance gap between Scala and Python wasn't an issue anymore.
I personally doubt it. I'm not an expert on this one, but I'd be pretty surprised if Python wasn't significantly slower than Scala. Of course, practically speaking they're both very fast, but in computational time I would suspect Python is much slower. Thanks!
You are correct, and I am incorrect! Thank you for updating me!
I think we are both correct. I've been reading up on it, with regard to whether to refresh my Scala or keep chugging away with PySpark. Bottom line: it's good to know both; it depends on the use case. In general, Scala will perform better monotonically. However, what I've read is that it isn't always about gains based solely on performance, or more importantly on one sole factor; there are pros and cons, and sometimes the cumulative gains can weigh either way. For example, Python's rich ecosystem can help you achieve a result faster than trying to do the same thing in Scala. Another interesting discussion you should start is Koalas. I wrote a blog post trying to get people to weigh in: forums.databricks.com/questions/65646/thoughts-on-if-its-worth-it-to-work-in-koalas.html
@@sndselecta Sorry I missed this! Absolutely and thank you for the great reply.
The Spark people themselves are advising against learning Scala for only marginal gains over pySpark.
how is this more useful than numpy?
NumPy works on one computer. Spark works on as many as you want
@@GregHogg thanks!
A detailed video probably would be more helpful.
great video… but please step away from the camera sir
Ouch
@@GregHogg just kidding with you! great content
So you’re just gonna teach us the wrong way of doing things then leave us on a cliff hanger? 😅
Thanks Sir
Great!