Fantastic content. I combined some of your videos to do a fairly complex task, and I'm so happy it worked! Thanks heaps!
Thank you! This is exactly what I was trying to configure.
Hi Brother,
Not sure whether the Azure team fixed it or not, but @replace(item().name,'.txt','') is working fine. I guess you missed the @ sign before the replace function in your attempt.
Thanks for sharing this knowledge. It is fantastic!
Glad you liked it!
Works like a charm; however, the columns of the auto-created tables are all nvarchar(MAX). Not the best for database size, nor for usability. Any way around this?
I noticed that too; the data type is nvarchar(MAX). You might want to treat the auto-created tables as staging: once the data is loaded, create final tables with the correct data types and a stored procedure that loads the data from these staging tables into your destination tables. If you have already created the tables with the correct data types, then you will be fine too.
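A minimal sketch of that staging-to-final move, assuming hypothetical tables stg.Employee (auto-created, all nvarchar(MAX)) and dbo.Employee (properly typed); the names and columns are illustrative, not from the video:

CREATE PROCEDURE dbo.LoadEmployee
AS
BEGIN
    -- cast the nvarchar(MAX) staging columns to the final types on the way in
    INSERT INTO dbo.Employee (EmpId, EmpName, HireDate)
    SELECT CAST(EmpId AS INT),
           CAST(EmpName AS NVARCHAR(100)),
           CAST(HireDate AS DATETIME)
    FROM stg.Employee;
END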
Very helpful video. Thank you!
You are welcome
Hi Sir,
I am able to insert the data using dynamic CSV files. Could you please help me with upserting the data?
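One hedged sketch of an upsert, building on the staging-table idea above (table and column names are illustrative assumptions): land each file in a staging table first, then MERGE into the target:

MERGE dbo.Employee AS tgt
USING stg.Employee AS src
    ON tgt.EmpId = src.EmpId
WHEN MATCHED THEN
    -- row already exists: update it
    UPDATE SET tgt.EmpName = src.EmpName
WHEN NOT MATCHED THEN
    -- new row: insert it
    INSERT (EmpId, EmpName) VALUES (src.EmpId, src.EmpName);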
Thank you very much, was really helpful.
Glad to hear that!
Small query, sir: once the table is created, if new files come in with a suffix change (like a date change), will it create a new table again or insert the data into the already-created table, since you are using the auto-create option? Thank you in advance.
Hi Brother,
Great Video & thanks for sharing :-)
My pleasure
Great tutorial! I have a question: if I run the pipeline and there's a new CSV file in the container with the same schema as the others, will this method append the data to the table with the same schema, or will it create another one?
Great video, exactly what I needed!
Hi, thanks for this! One question: suppose I wanted to convert the CSV files to Parquet files, how would I proceed? I used the concat/replace approach, but the target Parquet files seem to be corrupted: "The file 'Emp1' may not render correctly as it contains an unrecognized extension." @concat(replace(item().name,'csv','parquet')) does not work either. Any suggestions? Thanks
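A hedged note on the likely cause: renaming the file does not convert its contents, so the sink dataset itself has to use the Parquet format; the expression only controls the output file name. With a Parquet sink dataset, something along these lines (note the dots, so a 'csv' inside the base name is left alone) should name the output files:

@replace(item().name,'.csv','.parquet')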
You are awesome. Keep it up!
If the CSV file has some columns with a JSON structure, then how do we proceed?
Hi TechBrothers, thanks for this very useful video.
I had a question: I am trying to truncate the tables with the following expression:
@{concat('truncate table',item().name)}
but it is not working for me; it gives an error.
Please advise.
Thank you
I tried this today as well. My implementation idea is to truncate and then insert into the tables. For that, I truncated the table with TRUNCATE TABLE [SCHEMA_NAME].@{item().name}. With this step, if the table already exists, it gets truncated. Otherwise, try pointing a failure output line to the same block that your success output points to: if the table does not exist, execution goes down the failure path and runs that block, and if it is present, the truncate runs and gives you the appropriate results.
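A hedged sketch for the expression in the question above: concat('truncate table', item().name) produces 'truncate tableEmp1', with no space before the table name, which is invalid T-SQL. Adding the space (and keeping the @{} wrapper) should work:

@{concat('TRUNCATE TABLE ', item().name)}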
Hi,
I can see my CSV data in SSMS, but I cannot see it in table format; it still appears as CSV. Did I miss anything?
Hi, if the file names are like emp1, emp2, emp3, etc., how can we write an expression to remove the numbers with REPLACE? Could you help us?
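A hedged sketch, assuming single-digit suffixes like emp1 through emp9: pipeline expressions have no regex, so you can nest one replace call per digit, e.g. @replace(replace(replace(item().name,'1',''),'2',''),'3','') extended up through '9', or, since the prefix is fixed at three characters, simply take it with @substring(item().name, 0, 3).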
Thanks! Really helpful!
Glad it helped!
Excellent video, super!
Sir, can we use the split() function to remove .txt?
Hi, I'm Rohit. Can we use the Copy Data activity with CSV files? If not, why?
Sir, can you show once how to load the files available in a blob container into multiple existing tables in an Azure SQL database? That would be really helpful to me.
Brother, I was looking for the same... Did you figure out how to do it?
Great videos. However, I don't see any video on SharePoint with ADF. Do you have one, or can you make one? Thank you
Hoping to have one soon. Working on many videos and scenarios. Thanks for the feedback!
Hi. This was a great help to me. One issue I am having is that the data is failing to load due to multiple data type errors (such as String to DATETIME). As the data in the CSV is exported as strings, do you have a way of mapping the format of each problem field, bearing in mind the columns may be named differently?
Is it possible to load different source files into existing tables in SQL Server, meaning the source file names do not match the existing table names?
Hi, yes, that is possible, but you have to provide some kind of source-to-destination mapping. If the file names are different, you can group them on the source side, and the destination table can stay the same.
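One hedged way to provide that mapping (the parameter name and values are assumptions, not from the video): define a pipeline array parameter such as

[ {"file": "emp1.csv", "table": "dbo.Employee"},
  {"file": "emp_2023.csv", "table": "dbo.Employee"} ]

loop over it with a ForEach activity, and inside the loop use @item().file for the source dataset's file name and @item().table for the sink table name. Files that load into the same table simply repeat the same table value.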
THANKS SIR
Hi Bro,
Any workaround for CSV files which have multiple header rows, so we can merge them into one header? The source is FTP; some files are good and some have multiple headers.
One way could be to load the data without header information into a staging table, then remove the stray header rows and use only the clean data.
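A minimal sketch of that cleanup, assuming a hypothetical staging table stg.RawData whose first column contains the header text 'EmpId' on each stray header row:

-- remove the repeated header rows that came from the bad files
DELETE FROM stg.RawData
WHERE Col1 = 'EmpId';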
How do we do this with an HTTP server source?
good