In 2022 you kind of have to click "use Classic UI" or something along those lines to actually find the load table button :)
Thank you so much for sharing!! You saved me hours of work!! God bless you
Glad it helped!
This is effectively deprecated functionality now - this hasn't been ported to the new UI (Snowsight), and the Classic UI is being disabled from late October 2022.
I know there are other methods, but Snowflake really does need a no-code (UI) loader for quick one-off loads, ideally with auto-schema detect. Something BigQuery does pretty nicely.
What should we do then? How can I find the load table button?
How do I load data in Snowsight?
This was helpful. Thank you
Thank you... That was useful
Glad it was helpful!
What do you do in case you have commas in the field values in your Excel CSV file? We can enclose them in double quotes and upload it, but is there any other option? I don't want my data to show in double quotes in my table.
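For what it's worth, enclosing quotes in the file don't end up in the table: with FIELD_OPTIONALLY_ENCLOSED_BY set, Snowflake treats the quotes as enclosures and strips them on load, so "New York, NY" lands as New York, NY. A sketch, with hypothetical format/stage/table names:

```sql
-- Quotes in the file are treated as enclosures, not data,
-- so "New York, NY" loads as: New York, NY  (no quotes stored)
CREATE OR REPLACE FILE FORMAT my_csv_format        -- hypothetical name
  TYPE = CSV
  FIELD_DELIMITER = ','
  SKIP_HEADER = 1
  FIELD_OPTIONALLY_ENCLOSED_BY = '"';

COPY INTO my_table                                 -- hypothetical table
  FROM @my_stage/data.csv
  FILE_FORMAT = (FORMAT_NAME = my_csv_format);
```

The load wizard exposes the same option in its file format step, so the quotes only live in the file, never in the table.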
This is the best
How to load a CSV file which contains NULL values?
I downloaded the result of a query whose last columns contain NULL values.
E.g. …….,false,false,,
The last 2 values are NULL.
How to upload? What should I set in the CSV format in Snowflake while uploading it?
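A trailing ",," like that can be mapped to SQL NULLs with the NULL_IF and EMPTY_FIELD_AS_NULL file format options (the load wizard exposes the same settings). A sketch, with a hypothetical format name:

```sql
CREATE OR REPLACE FILE FORMAT my_null_csv          -- hypothetical name
  TYPE = CSV
  FIELD_DELIMITER = ','
  SKIP_HEADER = 1
  NULL_IF = ('', 'NULL')        -- empty strings and literal NULL become SQL NULL
  EMPTY_FIELD_AS_NULL = TRUE;   -- a trailing ",," loads as two NULLs
```

EMPTY_FIELD_AS_NULL defaults to TRUE, so in many cases the trailing empty fields already load as NULLs without any extra settings.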
I have a CSV with 30 columns, any wizard to help with that?
Every database I used (Oracle, SQL Server, MySQL...) had an easy wizard to create the table and load the file... since the last century, at least.
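Outside the Classic UI wizard, Snowflake can generate the table definition from a staged file with INFER_SCHEMA, which saves typing 30 column definitions. Note: CSV support for INFER_SCHEMA (via PARSE_HEADER = TRUE) arrived in releases newer than this thread, so treat this as a sketch with hypothetical stage/table names:

```sql
CREATE OR REPLACE FILE FORMAT my_infer_csv         -- hypothetical name
  TYPE = CSV
  PARSE_HEADER = TRUE;          -- read column names from the header row

-- Build the table from the inferred column definitions
CREATE OR REPLACE TABLE my_wide_table USING TEMPLATE (
  SELECT ARRAY_AGG(OBJECT_CONSTRUCT(*))
  FROM TABLE(INFER_SCHEMA(
    LOCATION    => '@my_stage/wide_file.csv',
    FILE_FORMAT => 'my_infer_csv')));

COPY INTO my_wide_table
  FROM @my_stage/wide_file.csv
  FILE_FORMAT = (FORMAT_NAME = 'my_infer_csv')
  MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE;   -- map file columns by header name
```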
Honestly Snowflake way overstates their ease of use. I might as well just work without a UI at this point.
Hi,
can we load the CSV file into a single VARIANT column in a Snowflake table from S3?
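One hedged approach for a whole-row VARIANT load: set FIELD_DELIMITER = NONE so each line arrives as a single column $1, then cast it during the COPY transformation (stage and table names hypothetical):

```sql
CREATE OR REPLACE TABLE raw_lines (v VARIANT);     -- hypothetical table

COPY INTO raw_lines (v)
  FROM (
    SELECT TO_VARIANT($1)       -- $1 is the entire line, since no delimiter splits it
    FROM @my_s3_stage/file.csv  -- hypothetical external stage over the S3 bucket
  )
  FILE_FORMAT = (TYPE = CSV FIELD_DELIMITER = NONE);
```

If the file is actually JSON-per-line rather than CSV, a file format of TYPE = JSON loads straight into a VARIANT column with no transformation needed.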
This doesn't apply anymore, the new interface doesn't have a load wizard
Hi. In order to update the table, instead of saying "create", do I have to use the command "replace table"? Thanks
If the table is not created yet, then "create" alone is enough, but if it already exists you can use "create or replace table". The second one is easier.
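One caveat worth noting: CREATE OR REPLACE TABLE drops the existing table (and all its rows) before recreating it. If the goal is to keep what is already loaded and add to it, a sketch like this (table and stage names hypothetical) may be closer:

```sql
-- First load only: create the table if it isn't there yet
CREATE TABLE IF NOT EXISTS my_table (id INT, name STRING);

-- Every later load: COPY INTO the existing table appends rows;
-- CREATE OR REPLACE here would wipe everything already loaded.
COPY INTO my_table
  FROM @my_stage/new_rows.csv
  FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1);
```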
What if, rather than creating a new table, I want to add new rows to an existing table in Snowflake from a CSV that I have locally on my computer? And I want to do this with Python in a Jupyter notebook.
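For the local-CSV-from-Jupyter case, one common pattern is to PUT the file to the table's stage and then COPY it in, executing each statement from Python with snowflake-connector-python (e.g. `conn.cursor().execute(...)`). A sketch with hypothetical paths and names:

```sql
-- Run each statement via snowflake-connector-python from Jupyter, e.g.:
--   conn.cursor().execute("PUT file:///path/to/local.csv @%my_table")
PUT file:///path/to/local.csv @%my_table;   -- upload to the table's stage
                                            -- (PUT gzip-compresses by default)

COPY INTO my_table                          -- appends; existing rows are kept
  FROM @%my_table/local.csv.gz
  FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1);
```

For pandas users, the connector also ships a `write_pandas` helper that wraps this stage-and-copy dance for a DataFrame.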
Can we load only the first N rows into Snowflake?
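As far as I know, COPY INTO has no first-N-rows option (only SKIP_HEADER for the top of the file), so one workaround is to load everything into a transient staging table and then insert a limited selection. A sketch with hypothetical names; note that LIMIT without an ORDER BY gives no guaranteed row order:

```sql
-- Stage everything first, since COPY can't stop after N rows
CREATE OR REPLACE TRANSIENT TABLE staging_all LIKE target_table;

COPY INTO staging_all
  FROM @my_stage/data.csv
  FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1);

-- Then keep only N rows (add an ORDER BY if "first" must be deterministic)
INSERT INTO target_table
  SELECT * FROM staging_all LIMIT 100;
```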
I get an error message saying I have insufficient privileges. How can I get around this?
How can we automate this process to run at a specific time daily?
You will likely need another tool to accomplish that. One option is to use a tool such as Fivetran > ruclips.net/p/PLy4OcwImJzBIfqyGG2NMC0l8ZfXNxf_C8.
There are others such as Stitch, Airbyte, etc. Or you could run a python script and use something like SnowSQL. Hope this helps!
@@KahanDataSolutions Couldn't a python script do this as well?
@@dianamgdata Yes that should work as well.
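If the file reliably lands in a stage, a Snowflake-native option is a scheduled Task wrapping the COPY (a sketch; warehouse, stage, and table names are hypothetical):

```sql
CREATE OR REPLACE TASK daily_load
  WAREHOUSE = my_wh                        -- hypothetical warehouse
  SCHEDULE  = 'USING CRON 0 6 * * * UTC'   -- every day at 06:00 UTC
AS
  COPY INTO my_table
    FROM @my_stage/daily/
    FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1);

ALTER TASK daily_load RESUME;              -- tasks are created suspended
```

For event-driven rather than clock-driven loads, Snowpipe fires the COPY as soon as a new file arrives in the stage.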
How to create a file format which produces output like the below?
Id | Name | address
1| | atp
2| | anp
Note: the Name field values are NULL in the Snowflake table.
I have tried creating the CSV format with | as the column delimiter, newline as the row delimiter, fields enclosed by none, and the null string as empty, e.g.:
ALTER FILE FORMAT mycsv SET COMPRESSION='none' FIELD_DELIMITER='|' RECORD_DELIMITER='\n' SKIP_HEADER=0 FIELD_OPTIONALLY_ENCLOSED_BY='\042' TRIM_SPACE=FALSE ERROR_ON_COLUMN_COUNT_MISMATCH=TRUE ESCAPE='NONE' ESCAPE_UNENCLOSED_FIELD='\134' DATE_FORMAT='AUTO' TIMESTAMP_FORMAT='AUTO' NULL_IF=();
But while unloading the table I am getting the error "cannot unload empty string without file format option FIELD_OPTIONALLY_ENCLOSED_BY being specified".
Please suggest how to export the table with the file format as:
1| | atp
2| | anp
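One workaround for that unload error, if quoting is not acceptable: substitute the NULLs in the unload query itself so no empty string is ever written. Since the desired output shows a single space between the pipes, COALESCE to a space reproduces it exactly (stage and table names hypothetical):

```sql
COPY INTO @my_stage/out.csv
FROM (
  SELECT id,
         COALESCE(name, ' '),   -- NULL name becomes a single space: 1| | atp
         address
  FROM my_table
)
FILE_FORMAT = (TYPE = CSV FIELD_DELIMITER = '|' COMPRESSION = NONE);
```

This sidesteps the empty-string check entirely, because every field in the output now has at least one character.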
Why not architect the solution so it is dynamic. If you've got an S3 bucket and your csv file columns change, this is going to break.
Via a Python script, when the CSV is local
Please teach me Snowflake