I like the way you started with low-level examples and made your way to the more practical high-level stuff. Thanks.
Thanks, I tried to explain it the same way that I learned it over months/years. It makes sense to walk through it that way.
I used to connect to databases via sqlite3 and dump the data into a CSV file. After that, I imported the data using pandas. But this tutorial shed light on other ways to do it in a straightforward manner. Thank you for sharing and keep up the good work!
My favorite singer is Rob Thomas, My favorite programmer is Rob Mulla!
Haha. Happy to be one of your favorites alongside Mr Thomas
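For anyone following the same path, a minimal sketch of the direct route that skips the CSV round trip (the database file and table name are hypothetical):

```python
import sqlite3
import pandas as pd

# Read a query result straight into a DataFrame, no CSV export/import needed.
conn = sqlite3.connect("my_data.db")  # hypothetical database file
df = pd.read_sql("SELECT * FROM orders", conn)  # hypothetical table
conn.close()
print(df.head())
```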
Instantly liked and downloaded to watch offline. Your channel is pure gold🙏🙏🙏
Wow, thank you! I appreciate the feedback.
+1
Thanks Rob! My understanding of databases and Python improves every time I watch your videos! Keep going!
Glad to hear that. Learn a little every day and before you know it you'll be an expert!
This is super helpful. I am a data analytics student and literally can't live without your channel and your content. Thank you a million times and please keep up the good work! P.S.: by any chance could you make a tutorial about Apache Spark in Python?
This is just a tutorial, but a quick note to anyone new to databases who is putting this into a production environment, your root password should not be just a dictionary word.
Absolutely! Great point. I'm assuming most people who are learning this will be more end users and not DB admins.
You deserve way more views. I use a lot of what you say at work!
Thanks! So glad you find the videos I make helpful.
I was about to throw a brick at you when you used a for loop to read the database instead of the read_sql method! 🤣 Great stuff as always!
First time watching one of your videos. Great job explaining the details of how the SQL connection works, especially with SQLAlchemy. I have used SQLAlchemy a lot and the details you provided were perfect for getting new users up and running. I will have to watch some more of your content in the future.
Hey Rob! Been watching your channel for a while and it’s both super helpful and interesting! Thank you very much for bringing this type of content to YouTube!
Oh, this is an amazing video to stumble onto!! Let me save this and try to practice some database pulls. Thanks for the lesson!
Glad you enjoyed it! It's a time saver and allows you to automate a lot of tasks.
Did anyone notice how the "Subscribe" button plays a simple gradient animation when he mentions "subscribe to the channel" around 0:58?? So cool.
Great video as usual! Thank you for this. Would you consider doing a video on how to create cloud SQL databases for small (personal) projects? I would love to see the best way to do it and access it with Python. If you could also cover some nice tools such as Docker or DBeaver for this, that would be highly appreciated.
Great video! Next time let's see a PyTorch intro!
It's on my todo list 😄 - Thanks for watching.
I use SQLAlchemy every day. It’s great. I would caution that if you’re going to use to_sql to create a table, make sure all the DataFrame columns are formatted the way they should be. You can get some wonky values in the created DB table if you don’t.
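A minimal sketch of that kind of pre-cast (column names are hypothetical; any SQLAlchemy engine works in place of the sqlite one):

```python
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("sqlite:///example.db")  # placeholder engine URL

df = pd.DataFrame({"user_id": ["1", "2"], "signup": ["2023-01-05", "2023-02-10"]})

# Cast columns explicitly before writing so the created table gets sensible
# column types instead of everything landing as text.
df["user_id"] = df["user_id"].astype("int64")
df["signup"] = pd.to_datetime(df["signup"])

df.to_sql("users", engine, if_exists="replace", index=False)
```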
I found the part about using Docker for the MySQL server super useful. Have you thought about making a dedicated video about the different applications of Docker for data science? Thank you for the video! :D
Oh. That’s a great idea. So many applications but honestly I don’t utilize it as much as I should.
First I click like and then watch the video. Thanks for so much!
You're the best! Glad you expect to like it even before watching.
Thank you sir. I like your tutorials a lot because they are usually short but the content is amazing🙏🙏
Chef's kiss. Love the content.
Very clear explanations. Thank you for your work.
You’re the plug! Thanks Rob!
Top quality as always! Tnx Rob!
Thank you 👏 Glad you like it.
Thank you Rob!!
For a next video, LightGBM/XGBoost hyperparameter tunning would be amazing!!
This is one great video! Liked and subscribed.
I'm not a computer person, but as a business owner I've ended up using Jupyter notebooks to interact with my data instead of Excel. I get so much help from AI to blast through the technicalities that were preventing me from doing it before. This video is a true eye opener for me!
Awesome! Glad it helped.
Nice video. Would like to vote for pyspark in the future videos
Thanks for the suggestion. I do have one video on pyspark but if people want more I can give it a try!
Thank you for your great content! For a next topic, I would love a video on Pytorch.
Thank you Rob, I love your videos! 🤩
Thanks for watching. Glad you enjoy watching these, I like making them.
Very clear tutorial! Thanks!
brilliant video.
Thanks For Making This Video 😊
Thanks for watching and commenting!
Very helpful, man.
Will watch all your videos 👍
I have found duckdb to be nice for SQL-related tasks, and it also has some nice integrations with polars, pandas, arrow, and a few others. It's quick, and I found the interface pretty easy to pick up on, speaking from my own perspective of limited experience with them.
I've tested out duckdb and hopefully will be making a video about it at some point. I agree it's really nice to write SQL and query flat files directly.
@robmulla please do!
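For anyone curious what that looks like, a small sketch (assuming a recent duckdb version, which can resolve a DataFrame by its Python variable name):

```python
import duckdb
import pandas as pd

df = pd.DataFrame({"species": ["duck", "goose", "duck"], "n": [3, 1, 5]})

# duckdb finds `df` in the surrounding Python scope; the same SQL also works
# against CSV/parquet paths directly, e.g. FROM 'data.parquet'.
result = duckdb.sql("SELECT species, SUM(n) AS total FROM df GROUP BY species").df()
print(result)
```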
Something that caught me last week at work: for the df.to_sql method you have to pass a SQLAlchemy engine and not the connection object like you see here. The pandas documentation isn't very clear on this.
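Concretely, a sketch of the pattern that works (engine URL and table name are placeholders):

```python
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("sqlite:///example.db")  # placeholder URL
df = pd.DataFrame({"a": [1, 2, 3]})

# Pass the SQLAlchemy engine itself (or a connection created from it),
# not a raw DBAPI connection object.
df.to_sql("my_table", engine, if_exists="replace", index=False)
```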
Very insightful video!
Glad it was helpful!
Amazing video as always 👏👍🏻
Thank you so much 😀 - Glad you liked it.
Nice summary of a pattern that took me a bit to figure out myself a while ago! I'd add that it may have been worth mentioning the use of a context manager for the connection, but I suppose the pandas pattern is preferred because it already does that.
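For reference, a sketch of that context-manager pattern (engine URL and table name are placeholders):

```python
import pandas as pd
from sqlalchemy import create_engine, text

engine = create_engine("sqlite:///example.db")

# The with-block closes the connection even if the query raises.
with engine.connect() as conn:
    df = pd.read_sql(text("SELECT * FROM my_table"), conn)
```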
Thank you Rob for making this video! I am interested in learning to save data frames/Excel/CSV files to MSSQL (hosted on AWS EC2) using Python (Jupyter Notebook). The script I am running is getting lots of connection errors. Looking forward to a video on the topic.
Rob, thank you for making this video. Very good content as usual.
Could you please consider making a video on data versioning with various types of data versioning tools (DVTs)?
DVC
GitLFS
Dolt
DeltaLake
LakeFS
Liquibase
… to name a few.
#database #dataversioning #versioning #data #sqldbversioning #sql #db #liquibase #lakefs #deltalake #dolt #gitlfs #dvc
Amazing video, this helped a lot, thanks
Are you and duckdb on speaking terms?
Ducks can’t talk.
Gooooood!!!
🔥🔥🔥
@Rob Mulla: Please consider making comprehensive playlists on "Data Analytics & Visualization" using pandas, polars, matplotlib, seaborn, numpy, etc., covering how to connect to different databases from VS or Jupyter, and also regex.
such a wonderful video
Thanks, Rob! Sometimes I have queries that involve both an RDBMS and flat files. For example, 500,000 records are selected for a study while the RDBMS table holds 5,000,000,000 records. What's an effective solution without pulling the whole table down?
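One common pattern, sketched below: push the study's filter into the SQL so only the selected rows ever leave the server, and stream them in chunks if even the subset is large (engine URL, table, and column names are hypothetical):

```python
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("sqlite:///example.db")  # placeholder URL

# Filter server-side so only ~500k of the 5B rows ever come back.
query = "SELECT * FROM big_table WHERE study_id = 42"

# chunksize makes read_sql yield DataFrames of 50k rows instead of one giant frame.
chunks = pd.read_sql(query, engine, chunksize=50_000)
df = pd.concat(chunks, ignore_index=True)
```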
Thanks a lot, Rob! It's really helpful. Do you mind sharing the Jupyter notebook as well?
Thanks for the great video! What do you think of Redis? Is it worth using with Pandas?
Glad you liked the video. I've never used Redis personally, but it's an in-memory database so it will be very fast. There are downsides to this, like cost and persistence. It really depends on your use case and whether speed is the top priority. Once you read the data into pandas it's in memory, so that might be all you need.
@robmulla That's why I asked, considering you've got good performance optimization videos.
Thanks for the tut! Did I miss "pip install mysql-connector-python" on the command line, or wasn't that included? I couldn't connect at first; after pip installing it I finally got the database connection.
great tutorial !!!
Thanks Rob!
Thanks for watching & commenting Ankur!
Would like to see a video about data pipelines between SQL and PostgreSQL when dealing with big tables (like 10M rows)... how to do a fast load.
Hey Rob, I really appreciate the way you explain things. Have you ever considered the idea of creating a course on topics that you excel in? I believe many people would enjoy having you as their teacher! You are such a skilled communicator that I would gladly pay for your course without hesitation.
Glad you enjoy my content. Currently I only make YouTube videos as a side hobby. My main goal right now is to reach 100k subscribers. After that I'm going to re-evaluate and decide what the next goal will be. I don't think I'll do a course though; I like giving my content away for free, and there are already so many other great courses out there. Not completely ruling it out though.
What are the advantages of using the sqlalchemy engine over mysql.connector and cursor? It helps with code abstraction, but is there any difference performance wise?
Good question. I don't know if there are any performance differences, I just wanted to show the sqlalchemy approach because it supports more database types and that's how I typically set it up for most databases.
@robmulla Thanks for the reply. I decided to choose a method for the database connection that I'll use going forward. I couldn't find any huge difference between the two, so I decided to go with mysql.connector for its simplicity and light weight.
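To illustrate the portability point above: the downstream pandas code stays the same and only the engine URL changes (all hosts and credentials below are placeholders):

```python
from sqlalchemy import create_engine

# Swap the URL, keep the rest of the code identical:
mysql_engine = create_engine("mysql+mysqlconnector://user:pass@localhost:3306/mydb")
postgres_engine = create_engine("postgresql+psycopg2://user:pass@localhost:5432/mydb")
sqlite_engine = create_engine("sqlite:///local_file.db")
```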
*My hand is up, I have a question!* It sounds dumb to ask, but I'm new and I think I just had an ah-ha... correct me if I'm wrong. The theory is that, to keep the code clean and computationally light (minimal double handling), you break out of that single-line loop so that you can accept the whole packet (or chunk of data) in one shot, and then pandas is able to take it in, read it as a low-res bitmap or weighted map or something, and natively process that data? Skipping like a whole read/write cycle for the package?
Because it feels like actual wizard magic how much data can be processed in real time these days 😅😅
Hi Rob - this was a very helpful video. My only concern is that my company uses Teradata as its SQL database. Do you happen to have instructions for doing these steps while connecting to Teradata? I've been looking through the docs but haven't found anything similar yet.
I do have Python/pandas connected to my Teradata table currently, but there's a lot more code involved to make the connection.
Also curious if you have tips on iterating through a SQL table. For example, I want to pull data in only for a specific value, and then move to the next value once the first has been processed through my script.
Ex.
If I have Peter, Sue, John and Joe:
Pull data in for Peter, then Sue, then John, with Joe last. In my real case it would be over 100 "persons" to iterate through. Hope this was clear. Essentially I wouldn't want the entire table to come in all at once.
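A sketch of one way to do that with a parameterized query (the table, column, and process() step are hypothetical; assumes a SQLAlchemy engine):

```python
import pandas as pd
from sqlalchemy import create_engine, text

engine = create_engine("sqlite:///example.db")  # placeholder URL
people = ["Peter", "Sue", "John", "Joe"]  # in practice, 100+ names from a list or query

query = text("SELECT * FROM visits WHERE person = :name")

for name in people:
    # Only one person's rows are pulled per iteration, never the whole table.
    df = pd.read_sql(query, engine, params={"name": name})
    process(df)  # hypothetical per-person processing step
```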
Fairly new to using Docker, Rob. The script gave me issues in that the image's platform doesn't match mine. I have an M1 Max chip. Am I unable to move forward?
Sir, do you have a Python playlist (a complete A-to-Z course)?
Amazing video!
If we want to run this query daily like you said, how can we do it? (Must be a basic question, but I am just starting out, using Jupyter Lab to process data once, and want to know how to run the code daily if possible.)
Thank you in advance, and keep making this great content 😁
Glad you liked the video. If you are running on Linux you could write a Python script to do your daily processing, then use a cronjob that runs that script at a certain frequency (hourly, daily…).
@robmulla Always like the videos, even if some are a little advanced for where I am at! Thank you for your answer, going to look into it :)
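For anyone else setting this up, a sketch (paths, schedule, and names are placeholders):

```python
# daily_job.py -- the same processing you'd otherwise run by hand in Jupyter.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("sqlite:///example.db")  # placeholder URL
df = pd.read_sql("SELECT * FROM my_table", engine)
df.to_csv("/tmp/daily_export.csv", index=False)

# Then schedule it with cron (crontab -e), e.g. every day at 07:00:
#   0 7 * * * /usr/bin/python3 /home/me/daily_job.py
```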
Thanks for this.
You write the query as a string. Are you aware of a way to write the query in Python code and have IntelliSense and autocomplete available?
Thanks for watching. Great question. I don't know of any way to do that directly in Python, but I usually write my queries and test them in the DB software before running them.
Hello Rob! The content of your videos is very helpful, THANK YOU! On this particular topic, I am trying to replicate the execution against an Oracle database, but when trying to load the data from a DataFrame, it shows this error: "DatabaseError: Execution failed on sql 'SELECT name FROM sqlite_master WHERE type='table' AND name=?;': ORA-01036: illegal variable name/number". I have checked the data types and I can't find the problem. Can you guide me on the possible reasons for the error and how I could resolve it?
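A note on that traceback: the sqlite_master query is pandas falling back to its sqlite code path, which usually means a raw DBAPI connection was passed instead of a SQLAlchemy engine. A sketch of the usual fix (assuming the python-oracledb driver; credentials and names are placeholders):

```python
import pandas as pd
from sqlalchemy import create_engine

# With a proper Oracle engine, pandas emits Oracle-dialect SQL instead of
# probing sqlite_master (the query that raises ORA-01036).
engine = create_engine("oracle+oracledb://user:pass@host:1521/?service_name=MYPDB")

df = pd.DataFrame({"a": [1, 2]})
df.to_sql("my_table", engine, if_exists="replace", index=False)
```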
Rob, is this the community version or the enterprise version of DBeaver?
Just the community version.
This is an awesome tutorial! But I got a message from DBeaver: "Host '172.17.0.1' is not allowed to connect to this MySQL server". Please tell me how to fix this problem. Many thanks!
Good job
Thanks for the feedback.
Nice video. I am new to the Python and data science world, transitioning from the JavaScript world. Just wanted to ask: is this IntelliSense built into Jupyter Notebook?
Thanks for watching. JupyterLab has really good autocomplete when hitting Tab. I don't think it has IntelliSense by default, but it can be enabled with extensions. You can also run notebooks in something like VS Code if you prefer.
Compared to LINQ to SQL this feels like the stone ages. Isn't there a reasonable ORM for pandas that imports the DB model automatically and then allows writing queries in Python instead of SQL?
Maybe I'm not fully getting it (I quickly googled LINQ), but the second half (pd.read_sql) is basically that. Once the data is in a pandas DataFrame you can manipulate it like normal.
SQLAlchemy does let you treat DBs and tables more as objects if you want to do it that way (db.table.fetchall() or something)... but SQL is pretty standard and easy anyway. Not everything needs a GUI and a handhold; this is for scripting/automation. Besides, looking at 50-char CamelCasedFunctionsNamed.DoubleClickOnAction() seems pretty trash and overly complicated, comparatively.
I think @mad1337nes is correct. @nothingisreal6345 - I'm showing the process of querying the database in different ways, and it really depends on your use case. SQLAlchemy has an ORM, but I've never used it beyond the basics: docs.sqlalchemy.org/en/20/orm/
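For the curious, a tiny sketch of the ORM side in SQLAlchemy 2.0 style (the model and names are hypothetical):

```python
from sqlalchemy import create_engine, select
from sqlalchemy.orm import DeclarativeBase, Mapped, Session, mapped_column

class Base(DeclarativeBase):
    pass

class User(Base):
    __tablename__ = "users"
    id: Mapped[int] = mapped_column(primary_key=True)
    name: Mapped[str]

engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(User(name="Ada"))
    session.commit()
    # The query is written in Python instead of a raw SQL string:
    users = session.execute(select(User).where(User.name == "Ada")).scalars().all()
```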
Do we need to close a SQLAlchemy connection the way the first method used cursor.close() and connection.close()?
Now can you build automation into it with Power Automate to run the Python script in the cloud daily? I don't have a Python server at work.
Super
Duper!
Thank you!
nice. thanks.
Hi, is there any way pandas can read all the tables? There are many tables in the DB.
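A sketch of one way (assuming a SQLAlchemy engine; inspect() lists the table names so you can loop over them):

```python
import pandas as pd
from sqlalchemy import create_engine, inspect

engine = create_engine("sqlite:///example.db")  # placeholder URL

# Read every table in the database into a dict of DataFrames.
tables = {
    name: pd.read_sql_table(name, engine)
    for name in inspect(engine).get_table_names()
}
```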
thanks rob
This was amazing. I think you should do an official tutorial one day.
There's also pandasql: you can write SQL queries directly against pandas DataFrames.
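A small sketch, assuming the pandasql package is installed (pip install pandasql):

```python
import pandas as pd
from pandasql import sqldf

df = pd.DataFrame({"city": ["NYC", "LA", "NYC"], "sales": [10, 20, 30]})

# sqldf runs a SQLite-dialect query against DataFrames found in the given scope.
result = sqldf("SELECT city, SUM(sales) AS total FROM df GROUP BY city", locals())
print(result)
```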
2:08 & 2:28 - can someone explain where I should run these scripts? Is it in Windows' command line or in an IDE like Jupyter Notebook?
thanks
Why do I feel like I learned all this in 1980 and now have to re-learn it with different syntax? Isn't this 2024?
Why not use Excel?
do an example with SAP HANA
What is that?
@robmulla The ERP from SAP; I think it's mainly focused on real estate, building, or facility management. I am trying to find an easier way of downloading the info from the cloud using SQL, to process it with pandas, and in the end prepare an Excel report and send the downloaded info to Airtable in order to stay in sync. But SAP is not that easy to customise, handle, or even access options in; at the moment it only allows a slow extraction of data through Excel downloads. I think your suggestion might be a good idea.
Is it just me or does Rob look AI generated in the beginning?
🤫
Whoa, dude, my mind!
This guy’s face is AI generated in the video
Man's a grandmaster on Kaggle.
Pandas and PDF: reading data from a PDF document.
I have a video about extracting text from an image.
@robmulla Let me check it out. Thanks.
This is kind of silly. Why would you not just do everything you did in this video using SQL in DBeaver?
Out of curiosity... why do you look AI-generated? :-)
You're awesome, know it!
I just don't understand why there is romantic background music in such a nice academic video that you clearly created with huge effort. The unnecessary background music spoils the quality, because viewers don't watch this type of video in a romantic mood at all. Please remove the music. Thanks!
Hey Rob, I just love your content and the way you present it from every perspective. There is just one thing I want to mention here: we can get results from the cursor in bulk using the fetchall() method.
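For reference, a sketch of that bulk fetch (connection details and table name are placeholders):

```python
import mysql.connector

conn = mysql.connector.connect(
    host="localhost", user="root", password="placeholder", database="mydb"
)
cursor = conn.cursor()
cursor.execute("SELECT * FROM my_table")

# fetchall() pulls every remaining row at once instead of looping row by row.
rows = cursor.fetchall()

cursor.close()
conn.close()
```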