I am an MS Access/Oracle/SQL and .Net Developer learning python to improve and automate my processes; this tutorial is, the BEST, someone like myself can find on how to leverage and query excel by treating it as a SQL data source. OUTSTANDING tutorial!!!!
Thank you 🙏
Thanks! This was very informative. I've worked with Excel and SQL Server for years and working with Python in the last few years. This beats VB hands down. As I saw one person post, Python is the glue that brings apps together.
Thank you so much! This is exactly what I've been looking for for a long time. I've never found a good explanation of how to get a preexisting dataset into a database or SQL environment, but this skips that entirely so I can get started practicing SQL!
Glad you enjoyed it
This is a nice time-saver instead of creating a table, importing, then manipulating with SQL, for one-timers or rarely used sheets.
Best use case video for Python/Excel/SQL I've seen and really well explained! Subscribed and now watching everything... Awesome Job!
Thanks and welcome to the channel
@@SATSifaction Thanks! Quick question on this one though...Tried replacing if_exists='replace' with if_exists='append' to try and do exactly that but still replaces! Am I missing something?!
You might need to save the notebook or file and restart the software. If you are using Jupyter, try restarting the kernel.
Excellent explanation!
Thanks for the great, short, really informative tutorial. Could you please make a more elaborate tutorial on the same topic, with more information?
Great suggestion!
This is pretty awesome. What if the Excel workbooks lived on SharePoint? What would I do differently?
hello! thanks for sharing this but i have a question. why would you use SQL to manipulate the data over using Pandas dataframes / functions?
Great question. Filtering in pandas can be quite different from querying in SQL. Pandas is really meant as an intermediary in this case. Those who are versed in SQL but want a quick way to query an Excel sheet will benefit the most from this tutorial. Next week I'll show how to bring this into a persistent database that you can use with external applications.
Consider someone that is very good with SQL, does not know enough of python and pandas, but has to do some analysis urgently. That's the type of person that needs this.
It's just one of the alternatives, for someone who doesn't want to understand pandas on a deeper level and already knows SQL.
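For anyone weighing the two approaches: the same simple filter can be written either way. A minimal sketch with made-up data, using the standard-library sqlite3 module in place of the sqlalchemy engine from the video:

```python
import sqlite3
import pandas as pd

df = pd.DataFrame({"name": ["Ann", "Bob", "Cal"], "age": [25, 31, 19]})

# pandas way: boolean indexing on the frame itself
adults_pd = df.loc[df["age"] >= 21, ["name", "age"]]

# SQL way: load the frame into an in-memory SQLite table and query it
conn = sqlite3.connect(":memory:")
df.to_sql("people", conn, index=False)
adults_sql = pd.read_sql("SELECT name, age FROM people WHERE age >= 21", conn)

print(adults_pd["name"].tolist())   # ['Ann', 'Bob']
print(adults_sql["name"].tolist())  # ['Ann', 'Bob']
```

Both give the same rows; which reads better mostly depends on which language you already think in.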
Great piece of work loved your video, thanks for posting 👍🙏
Great video! How do I import to sqldb not the whole sheet but only a certain range?
so great work
WOW get out ODBC connectors and all that - this is great.
Awesome!
Thanks! Great video
Thank you
wonderful job mate, very much appreciated. Hands down
This is great!! Thanks a lot 👍👍👍
Thank you so much,
You’re welcome
Really Really Awesome... Thanks a lot...
Wow, just wow! Thank you 🙏🏻
You’re very welcome
Instead of creating a new Excel file for the output, how can I add the output to an existing sheet, or to a new sheet of the same workbook?
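One way to do this, assuming the openpyxl engine and pandas >= 1.3 (the filename and sheet names here are made up): pd.ExcelWriter in append mode adds a sheet to an existing workbook instead of overwriting the file.

```python
import pandas as pd

# stand-in for your existing workbook
pd.DataFrame({"raw": [10, 20]}).to_excel("report.xlsx", sheet_name="data", index=False)

df_out = pd.DataFrame({"result": [1, 2, 3]})

# mode="a" appends to the workbook rather than replacing it;
# if_sheet_exists controls what happens if the sheet is already there
with pd.ExcelWriter("report.xlsx", mode="a", engine="openpyxl",
                    if_sheet_exists="replace") as writer:
    df_out.to_excel(writer, sheet_name="query_output", index=False)
```

After this, report.xlsx has both the original "data" sheet and the new "query_output" sheet.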
Thanks for this great content and straightforward explanation
Glad you enjoyed it
Very informative 👍
Please do more of these videos with SQL wildcards! Maybe finding only valid emails... food for thought.
Thank you. I wish you could do a whole series on this.
What more do you want to learn?
@@SATSifaction Thank you for responding. I would love to see you take a variety of real world situations with excel like visualizations and data presentations and work with a method like this. This was so much easier. I would pay for the course.
Awesome info. I have a question, how would I go about creating something like a user environment so someone can use this outside jupyter?
Well done!
Thank you for your kind feedback
This is wonderful. Although it could all be done with pandas, it's interesting for those who already master SQL. 😊
I had been using much more complex methods of querying data frames. I don't know why this never occurred to me until now, but I'm definitely going to do it this way from now on.
Excellent, you are a blessing!
Great tutorial. Thanks! Any chance you can include excel user input - crud in your next tutorial
Just what I was looking for. Thank you so much. Please keep on creating videos that benefit a lot of us!!!
Hi, firstly thanks for the video, really useful. A question: do you find it is more common to use this approach (i.e. querying a dataset via SQL with sqlalchemy and sqlite) than to read the file into pandas, massage the data there (e.g. df.loc[conditions, columns], which would do a similar thing) and then export it to a SQL database? I ask because I would instinctively choose the latter approach rather than run a SQL query.
Thank you for this. I tried this using a dataset from school and noticed that I can only write simple SQL queries. Is this due to the constraints of the Excel file? For example, I tried to write a "SELECT COUNT(*)" query and I was unable to make it work. Is SQL usage limited with Excel and Python?
I have the same issue, we are unable to do "select particular_attribute from table" even. Wondering what might be the issue
I wonder if it's because it's SQLite. Not all SQL is the same. I've never worked with SQLite enough to know how different it is from MSSQL.
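For what it's worth, SQLite handles COUNT(*) and other aggregates fine; the usual culprit is rebuilding the result with pd.DataFrame(results, columns=df.columns), which expects every original column back. A sketch with made-up data using pd.read_sql, which names the result columns from the query itself:

```python
import sqlite3
import pandas as pd

df = pd.DataFrame({"dept": ["a", "a", "b"], "pay": [100, 200, 300]})
conn = sqlite3.connect(":memory:")
df.to_sql("staff", conn, index=False)

# read_sql takes its column names from the query, so aggregates
# and column subsets don't clash with the original frame's columns
counts = pd.read_sql(
    "SELECT dept, COUNT(*) AS n FROM staff GROUP BY dept ORDER BY dept", conn
)
print(counts["n"].tolist())  # [2, 1]
```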
Hi, where can I download the dataset for this? Thanks
How do you update the Excel file using a SQL UPDATE query?
Amazing Video, thanks a lot man
can you share this dataset?
Is it possible to do the reverse: make the updates in the Excel sheet and have the data automatically updated in the SQL database via a Python script?
your audio is 👌🏼
But SQLite can't run temp tables?
Can you please post the code in the description? It would be helpful.
Working fine for SELECT *, but a ValueError comes up on SELECT particular_attribute FROM the table.
This is very helpful, thanks.
That's a very good job.
But what if I want to do calculations on the data instead of filtering it?
Can you make a tutorial for that as well 😁
If you want to advance your career then you have to learn these languages. Every person, regardless of discipline, needs to learn SQL, Python, basic Excel formulas and, more importantly, Excel navigation. That said, in a few years someone will come up with a way to do this in a drag-and-drop format avoiding some of the technicality (e.g. Rows or Glide), if they haven't already.
The same output file can be generated without using a SQL connection and query...
Is there any special use for SQL in Python, or is it just another way of doing the same thing?
Great Vid! Can you get the data from multiple sheets or excel files and use JOIN in the SQL? If so, that would be very useful!
Yes you can. Just need to define a loop in the code to get multiple pages
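A sketch of the idea, with made-up frames standing in for two sheets (in practice each would come from pd.read_excel(file, sheet_name=...)); once both land in SQLite tables, an ordinary JOIN works:

```python
import sqlite3
import pandas as pd

# stand-ins for two sheets of the same workbook
employees = pd.DataFrame({"emp": ["Ann", "Bob"], "dept_id": [1, 2]})
departments = pd.DataFrame({"dept_id": [1, 2], "dept": ["Sales", "IT"]})

conn = sqlite3.connect(":memory:")
employees.to_sql("employees", conn, index=False)
departments.to_sql("departments", conn, index=False)

joined = pd.read_sql(
    """SELECT e.emp, d.dept
       FROM employees e
       JOIN departments d ON e.dept_id = d.dept_id
       ORDER BY e.emp""",
    conn,
)
print(joined["dept"].tolist())  # ['Sales', 'IT']
```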
I know I'm late to this, but here are a couple of things I discovered that might be of use to others.
1) The current version of xlrd (which is a pandas requirement) will only work with .xls files, not .xlsx. To overcome this, pip install openpyxl and add engine='openpyxl' to the read_excel line.
2) This example assumes your header row will always be on line one. If it's on some other row, also add header=n to the read_excel line (where n is the row where the headers actually are). NOTE: this is 0-indexed, so if your headers are on row 4 in Excel, set header=3 in your code.
3) Headers with characters that are normally invalid in SQL column names can be an issue, but you can take care of most of them by adding this line AFTER the read_excel line: df.columns = df.columns.str.strip().str.lower().str.replace(' ', '_').str.replace('(', '').str.replace(')', ''). This fixes most, but you can add to it if needed. You may still end up with other invalid characters in column names, such as (.) or (#); to overcome that, wrap those column names in brackets: [grower_#] or [transfer.1].
Now, I have a question that hopefully someone else can help me with. Anytime I try to select anything but * from the .to_sql table I get errors such as (I'm trying to select only 2 columns):
Traceback (most recent call last):
File "C:\lee_dev\test_sqlite01.py", line 18, in
final = pandas.DataFrame(results, columns=df.columns)
File "C:\Users\lee\AppData\Roaming\Python\Python36\site-packages\pandas\core\frame.py", line 509, in __init__
arrays, columns = to_arrays(data, columns, dtype=dtype)
File "C:\Users\lee\AppData\Roaming\Python\Python36\site-packages\pandas\core\internals\construction.py", line 548, in to_arrays
return _list_to_arrays(data, columns, coerce_float=coerce_float, dtype=dtype)
File "C:\Users\lee\AppData\Roaming\Python\Python36\site-packages\pandas\core\internals\construction.py", line 567, in _list_to_arrays
raise ValueError(e) from e
ValueError: 14 columns passed, passed data had 2 columns
OK, I answered my own question. In the "final =" line change the columns=df.columns to columns=['MyCol1','MyCol2']
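Another way around the mismatch (a sketch with made-up data): skip the manual DataFrame construction entirely and let pd.read_sql name the columns from the query.

```python
import sqlite3
import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": [3, 4], "c": [5, 6]})
conn = sqlite3.connect(":memory:")
df.to_sql("t", conn, index=False)

# read_sql infers the result columns from the SELECT list, so a
# two-column query never conflicts with the 3-column source frame
sub = pd.read_sql("SELECT a, c FROM t", conn)
print(list(sub.columns))  # ['a', 'c']
```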
I am sorry if I miss it, but is there a way to get the sample excel file (Employee-Attrition.xlsx)? Thanks
Hi I am getting 'Error binding parameter 4 - probably unsupported type' exception. Please let me know how to proceed.
Is it possible to query by column names as well? I can use this procedure for filtering data, but being able to get selected columns would increase its usability and give it more of a SQL feel.
Hi Deepak yes you can. In your select statement you can specify the column or leave it as is and do the filtering within the dataframe.
@@SATSifaction I got the output with pd and now heading towards making it more dynamic in nature.
For the sql column selection, I used this statement: "Select 'abilities' from TmpTbl where against_bug = 1" and got an error that says: ValueError: 41 columns passed, passed data had 1 columns. What am I missing? I'm using python 3.7 with pycharm.
You may want to post your complete code on Stack Overflow so we can take a better look at it. It's a bit hard to dissect as is.
@@SATSifaction Posting it here instead. Much of it is a replica of the video, so it might help someone who needs the code. The pokemon.csv is from Kaggle, just in case.
import sqlite3
import pandas as pd
from sqlalchemy import create_engine
file = 'C:/Users/Deepak/Desktop/Sample/pokemon.csv'
output = 'output.xlsx'
engine = create_engine('sqlite://', echo=False)
df = pd.read_csv(file)
df.to_sql('TmpTbl', engine, if_exists='replace', index=False)
results = engine.execute("Select 'abilities' from TmpTbl where against_bug = 1")
final = pd.DataFrame(results, columns=df.columns)
print(final)
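Two likely issues in the snippet above: single quotes make 'abilities' a string literal in SQL (so the query returns the literal text rather than the column), and columns=df.columns hands the one-column result all 41 original column names. A corrected sketch, with made-up rows standing in for pokemon.csv:

```python
import sqlite3
import pandas as pd

# made-up rows standing in for the Kaggle pokemon.csv
df = pd.DataFrame({"abilities": ["Overgrow", "Blaze"], "against_bug": [1.0, 0.5]})
conn = sqlite3.connect(":memory:")
df.to_sql("TmpTbl", conn, index=False, if_exists="replace")

# a bare (or double-quoted) identifier selects the column; read_sql
# names the result from the query, so no column-count mismatch
final = pd.read_sql("SELECT abilities FROM TmpTbl WHERE against_bug = 1", conn)
print(final["abilities"].tolist())  # ['Overgrow']
```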
I am getting an import error: cannot import create_engine from sqlalchemy.
Just awesome..but one question..why are we importing sqlite3..it is not used anywhere in the code..and my code also gives me warning about not using sqlite3 package..
How do you query from DB2 using SQL in python?
I am here searching for a Python-based GUI application over Excel, deployed and integrated with a database. Can I use Flask or Django over Excel with SQL, and can I host it on SharePoint?
Isn't it comfortable to work with SQL straight from VBA?
Thanks for the KT, it's very helpful. One question though: I'm getting an error when a column name has a space, like 'bolo name'. Could you please help with this?
Try bolo_name
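If renaming isn't an option, SQLite also lets you quote the identifier (double quotes are the standard form; [brackets] also work). A small sketch with made-up data:

```python
import sqlite3
import pandas as pd

df = pd.DataFrame({"bolo name": ["x", "y"], "val": [1, 2]})
conn = sqlite3.connect(":memory:")
df.to_sql("t", conn, index=False)

# quote a column name containing a space so SQL parses it as one identifier
out = pd.read_sql('SELECT "bolo name" FROM t WHERE val > 1', conn)
print(out["bolo name"].tolist())  # ['y']
```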
Very helpful thank you.
What is the point of importing sqlite3 as it's never used?
Hi, thanks for the video, it was amazing. I was hoping you could help: I am trying to pull all columns between 2 date ranges. I am using the following: results = engine.execute("SELECT * FROM consults WHERE Appointment Start Date BETWEEN '2-1-20' AND '1-31-21'"). This is the format of the column: 10/14/19 08:30
What was the result?
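Two things likely trip up the query above: the column name contains spaces, so it must be quoted, and SQLite compares date strings as text, so they need a sortable format like ISO YYYY-MM-DD. A hedged sketch with made-up rows standing in for the consults sheet:

```python
import sqlite3
import pandas as pd

# made-up stand-in for the consults sheet
df = pd.DataFrame({"Appointment Start Date": ["10/14/19 08:30", "02/05/21 09:00"]})

# convert to ISO text so plain string comparison matches date order
df["Appointment Start Date"] = pd.to_datetime(
    df["Appointment Start Date"], format="%m/%d/%y %H:%M"
).dt.strftime("%Y-%m-%d %H:%M")

conn = sqlite3.connect(":memory:")
df.to_sql("consults", conn, index=False)

# identifier in double quotes, string literals in single quotes
query = ('SELECT * FROM consults WHERE "Appointment Start Date" '
         "BETWEEN '2021-02-01' AND '2021-02-28'")
rows = pd.read_sql(query, conn)
print(len(rows))  # 1
```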
Very helpful. Thanks!
Hello, thank you for this video. I just executed your code with an Excel file of mine and it works perfectly; however, when I change the select statement from SELECT * to selecting just one column from my file, it gives me the error "ValueError: 80 columns passed, passed data had 1 columns". Not sure why the SQL statement fails when working with specified columns.
Pandas has filtering, sorting, etc. What is the advantage of SQL?
Just showing alternative ways to process data
May we get the excel file ?
Great video and exactly what I searched for! Unfortunately I'm getting a TypeError ("data argument can't be an iterator") for the line final = pd.DataFrame(results, columns=df.columns). Does somebody know the cause of that? The code is exactly like the one in the vid...
same issue and spent 2 hours trying to find the cause - unsuccessfully...
Hi,
I see an extra column with 0, 1, 2, 3, etc. as the first column. How do I eliminate it?
Thanks,
Sendil
Just put index=False in the to_sql call and it's solved.
Hi, I still don't see the point of working with Python if I already use SQL Server and Power BI for extracting and processing data and creating dashboards. What am I missing? :/
A lot. Python goes beyond being a data extraction and analytics engine. It can do very complex data modelling that isn't easily available in Power BI, like torch and tensorflow designs. You can build strong backend applications and APIs with it, as well as full-blown dashboards. It's the Swiss Army knife of programming languages. I know several languages, including Power BI, and I can say without a doubt that Python gives me the most flexibility and versatility.
@@SATSifaction Thank you, i guess i should give it a try!
What if you want to display only certain columns instead of all columns, it is not working for me
As a beginner looking to use SQL and Python to manage large Excel datasets, where should I start? It seems like you are linking this to SQL and subsequently using Python to manage the data - hope I understand it correctly.
The flexibility with python is you can do analysis right on an excel spreadsheet with pandas. If your dataset is very large, ingest it into sql server then use python. You can use pandas or excel modules in python for either use case
Good stuff, but don't forget that Excel is limited to slightly over 1 million rows.
Yeah, this is why I generally advise not using Excel and instead using something like MySQL. Good point.
Why convert it to SQL when you can do everything in pandas?
just for fun)
Traditionally, I am more familiar with SQL.
Would you recommend MySQL or MsSQL?
Daaaamn
Wow, you're a scientist, man. You're damn good.
Ya
Amazing thank you!
Can anyone tell me how to fix this error ?
TypeError: data argument can't be an iterator
Thanks !!
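A common cause, assuming newer pandas/SQLAlchemy versions: recent pandas refuses to build a DataFrame straight from the result cursor. Passing the query to pd.read_sql sidesteps the problem on any version; a minimal sketch with made-up data:

```python
import pandas as pd
from sqlalchemy import create_engine

df = pd.DataFrame({"a": [1, 2, 3]})
engine = create_engine("sqlite://")
df.to_sql("t", engine, index=False)

# read_sql consumes the cursor itself, so there is no
# "data argument can't be an iterator" to trip over
out = pd.read_sql("SELECT a FROM t WHERE a > 1", engine)
print(out["a"].tolist())  # [2, 3]
```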
I get an error saying 'No module named sqlalchemy' even though I installed it using pip.
Are you in a virtualenv?
This is an amazing example
Thank you 🙏
Going to try this today. My only challenge at work is that it's hard to install pandas with pip; the firewall blocks it.
The pd.DataFrame line gives an error saying ' DataFrame constructor not properly called.' Can someone please help?
Did you solve the problem ?
Thaaaaaaaaaaaaaaaaaanks, man !!!!!!!!!!!!
Sir, can you make a video on storing an Excel file into SQL using Python? I was trying to do it but kept getting errors.
Thank you
I would like that as well. Have a spreadsheet from a supplier that I need to import into a shopping cart and of course it uses MySQL for the database.
You can just pull the data into pandas using pd.read_excel and then send it to your database using df.to_sql.
A few things you would need:
1. Write access to the database.
2. Check that you can make a proper connection to your database from pandas.
If you can pull data from the database into pandas dataframes and you have write access on your server, there should be no problem.
ONE CAUTION: choose wisely between the df.to_sql arguments if_exists='replace' and if_exists='append', because replace will wipe out the old data and replace it with the data from your Excel file.
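The flow above can be sketched end to end; the table and data are made up, and an in-memory SQLite database stands in for the real server connection:

```python
import sqlite3
import pandas as pd

# in practice: df = pd.read_excel("supplier.xlsx") and a sqlalchemy
# engine for your server; sqlite stands in here
df = pd.DataFrame({"sku": ["A1", "B2"], "qty": [5, 7]})
conn = sqlite3.connect(":memory:")

df.to_sql("inventory", conn, index=False, if_exists="replace")  # wipes any old rows
df.to_sql("inventory", conn, index=False, if_exists="append")   # adds on top

n = pd.read_sql("SELECT COUNT(*) AS n FROM inventory", conn)["n"][0]
print(n)  # 4
```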
--------------------------------------------------------------------------
I am getting following error before creating data frame.
NameError Traceback (most recent call last)
in
----> 1 engine = create_engine('sqlite://', echo=False)
2 #df=pd.read_excel(file, sheet_name='AccountMasterData.xlsx')
NameError: name 'create_engine' is not defined
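That NameError usually just means the import never ran (e.g. the import cell was skipped or failed); create_engine has to be imported before it's called:

```python
# run the import first; without it, create_engine is undefined
from sqlalchemy import create_engine

engine = create_engine("sqlite://", echo=False)
print(engine.name)  # 'sqlite'
```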
Hm, maybe it's better to use pandasql for a task like that?
The beauty with python is there is no one way to do it and in some cases no right way to do it. If the code works and it’s efficient you’re good to go
@@SATSifaction thx for the answer :)
Wow!!!
skip to 4:30
Why aren't you using Power Query from within Excel? That is a powerful tool built into Excel.
It’s not as scalable nor platform agnostic as python.
For all of this you don't need SQLite or SQLAlchemy. You can do it all in pandas alone.
This is fucking awesome.
@SATsifaction
Hi, I have followed this example but it bugs out. I think it is because of alchemy. Here is my code; I will provide the error messages after:

import sqlite3
import pandas as pd
from sqlalchemy import create_engine
import sqlalchemy

file = ('C:\\Users\\morillor\\PythonFiles\\RateData.xlsx')
output = ('C:\\Users\\morillor\\PythonFiles\\Outputs\\Rates.xlsx')

engine = create_engine('sqlite://', echo=False)
df = pd.read_excel(file, sheet_name='AllLanes')
df.to_sql('tblrates', engine, if_exists='replace', index=False)
results = engine.execute("Select * from tblrates")
final = pd.DataFrame(results, columns=df.columns)
final.to_excel(output, index=False)
final

ERROR MSGS:

OperationalError Traceback (most recent call last)
C:\Program Files (x86)\Microsoft Visual Studio\Shared\Anaconda3_64\lib\site-packages\sqlalchemy\engine\base.py in _execute_context(self, dialect, constructor, statement, parameters, *args)
 1192 parameters,
 -> 1193 context)
 1194 except BaseException as e:
C:\Program Files (x86)\Microsoft Visual Studio\Shared\Anaconda3_64\lib\site-packages\sqlalchemy\engine\default.py in do_execute(self, cursor, statement, parameters, context)
 506 def do_execute(self, cursor, statement, parameters, context=None):
 --> 507 cursor.execute(statement, parameters)
 508
OperationalError: too many SQL variables

The above exception was the direct cause of the following exception:

OperationalError Traceback (most recent call last)
in ()
 10 df = pd.read_excel(file, sheet_name='AllLanes')
 11

ANY IDEAS WHY? Thanks
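"Too many SQL variables" is SQLite hitting its bound-parameter limit when pandas packs many rows into a single INSERT. Capping the rows per statement with chunksize (so rows x columns stays under the limit) is one fix; a sketch with made-up data:

```python
import sqlite3
import pandas as pd

# a wide frame: 40 columns x 500 rows
df = pd.DataFrame({f"c{i}": range(500) for i in range(40)})
conn = sqlite3.connect(":memory:")

# method="multi" packs many rows per INSERT; chunksize=20 keeps each
# statement at 20 rows x 40 cols = 800 parameters, under SQLite's
# historical limit of 999
df.to_sql("wide", conn, index=False, method="multi", chunksize=20)

n = pd.read_sql("SELECT COUNT(*) AS n FROM wide", conn)["n"][0]
print(n)  # 500
```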
Dude, I can't read any of the text on my laptop screen because it's all so small, especially the Excel. Maybe you're recording this on a big screen, but you could improve your videos by zooming in or changing the resolution before you start recording so that the video is legible to the rest of us. Otherwise an interesting video.
The number of times he mistook '=' for '-' is outrageous.
You're pronouncing SQL incorrectly......
Sorry, just saw you are using a Mac too; it all makes sense now... real devs use Linux.
The first 4 minutes could have been reduced to 1 minute.
Really Really Awesome. Thanks a lot...
Thank you so much this is very useful