Wow, this came up in my feed and it's exactly what I've been struggling with all week!!! You're an absolute star, thank you! 🙌🏻☺️👌🏻
I am 70 and have no practical use for learning Excel/Power Query! I keep watching your videos because I like the way you solve problems. I have learnt so much from your teaching. Can I pay some token for whatever I have learnt?
I saved the entire syntax as follows:
Add Custom Column: =Table.PromoteHeaders(Table.Skip([Data], each not List.ContainsAny( Record.ToList(_), {"Header1", "Header2"} )))
Works wonders!!
Thanks a ton bro!!
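For anyone who wants to see that formula in context, here is a minimal sketch of a full query built around it (the table name and header values are placeholders, not from the video):
let
    Source = Excel.CurrentWorkbook(){[Name="Table1"]}[Content],
    // skip rows until one contains any of the expected headers, then promote that row
    Cleaned = Table.PromoteHeaders(
        Table.Skip(Source, each not List.ContainsAny(Record.ToList(_), {"Header1", "Header2"}))
    )
in
    Cleaned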
Pretty awesome! Thanks a lot for this. Record, Table and List object manipulation in one video for one task without using "Remove Other Columns"
Wow..Amazing. Been struggling with removing dynamically the junk and with custom headers for a while now. This works like a charm. Thanks a mill.
It is a huge pleasure to watch your videos. I'm moving from advanced Excel user to Power Query person. Thanks a lot.
Thanks, Chandeep! I knew you had posted this, and I had this problem today. I was fooling around with different other approaches that were a mess. This was perfect!
This is wonderful @Goodly. I watch all your videos, from the logic of the problem all the way to the actual solution. God continually bless you, you are a messiah!
Dear, you are a genius. You make M language look so easy. I appreciate your videos, my respects to you.
Amazing, I've done this with a fixed Skip value but this is on another level! Thanks
Thanks Chandeep! I was using Index for this one, but you make it so easy. Learning from your videos is amazing! Keep up the good work!
You hear the problem, it seems 😂
I was using the filter method, removing null values, and a lot of other filter steps.
Thanks for making the work easier and cleaner ❤
Thank you, SO MUCH! Had about 200 files to combine with various junk rows up top and now I can do it :D
Woah... that's a lot of files.
I am glad I could help
M-Masterpiece!
Packing a lot of slick tricks in one video.
Thank you Chandeep!
I am actually learning Power Query, it's excellent. I like the way you teach. Thank you so much for this video.
Amazing. I have had to struggle through this exact issue, manually removing those junk rows. You're a lifesaver, I will be using this in the future.
Your methods have become so refined over time. Awesome job!
This is a lifesaving technique. Thank you for sharing with us.
Thanks for the video. Much better than skipping a hardcoded number of rows; it makes sense to use when your column order is not the same across data tables.
Thanks @Chandeep, based on your trick I'm able to do it for removing top rows. I also wanted to do it for bottom rows, since there are a lot of junk bottom rows in my sheet. I applied the same trick, but added an index column sorted in descending order, applied the trick to remove the junk rows, then sorted the index back to ascending order. Anyway, your tricks are fantastic.
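For the bottom rows, Table.RemoveLastN also accepts a condition, so the index/sort round trip can probably be avoided. A minimal sketch, assuming the junk bottom rows are entirely null or blank:
// remove trailing rows while every value in the row is null or empty
Trimmed = Table.RemoveLastN(Source, each List.MatchesAll(Record.ToList(_), each _ = null or _ = ""))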
You have no idea how much Power Query has helped me to automate my tasks. Also I have been struggling with this problem. A big THANK YOU ❤!
Happy to help! 😉
Yeah, that is pretty damn awesome, Chandeep. This is an everyday challenge.
Super Awesome, Chandeep.
Very powerful formulas that you are teaching in a simple and easily understandable way!
Power Query and DAX have a lot of hidden treasures.
Thanks again for such a great, clear video about the next step in Power Query. I am new to Power Query, but I am experimenting with DAX too, and you give a great explanation.
This was an awesome video. Thanks for the same. I liked the trick you used to remove the blanks dynamically.
Hello, trying to see if anyone else ran into this issue:
-Followed all steps, and they worked as they should until ~7:00, when we're supposed to transform the content from Binary to a Table. My files are all CSV, so the formula I ended up using was =Table.TransformColumns(Source,{"Content",Csv.Document}) instead of the =Table.TransformColumns(Source,{"Content",Excel.Workbook}) used in the video. It converted to Tables fine.
-Then on the next step (7:40), where you're supposed to expand the tables and see all of the sheets, when I do this... the header options are just Column 1, Column 2, etc. instead of the actual headers... AND every row of all the files shows up instead of just the sheet name.
-When I follow the steps afterwards, I get the error: "DataFormat.Error: There were more columns in the result than expected"
Any idea what's going on?
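In case it helps anyone hitting the same wall, here is a rough sketch of one way to handle CSVs in this flow (the folder path is a made-up example). Because a CSV has no sheets, each file parses straight into a single table whose first row still holds the headers, so they need to be promoted per table before expanding:
let
    Source   = Folder.Files("C:\Data"),  // hypothetical folder
    Parsed   = Table.TransformColumns(Source, {"Content", Csv.Document}),
    // promote the first row of every parsed table to headers before expanding
    Promoted = Table.TransformColumns(Parsed, {"Content", Table.PromoteHeaders})
in
    Promoted
The "more columns than expected" error often indicates the files don't all share the same column count, which may need to be dealt with per file.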
Awesome video. I was struggling earlier and had to do this work using a macro. This is very cool. Thanks, Goodly.
Very sleek. I had this very same issue, but I used List.Generate to loop through each record, which I suppose would take slightly more processing time, though nothing you would ever notice.
Beautiful Power Query techniques!!
It's very awesome. I had a similar issue and had to work around it, but this looks pretty good.
Another way we can do it is to add an additional column using List.PositionOf and, from that, calculate the position of the row containing Date and Profit.
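For anyone who wants to try that route, a minimal sketch of the List.PositionOf idea (the column name "Column1" and header value "Date" are assumptions):
// find the zero-based position of the header row in the first column,
// then skip exactly that many rows
HeaderPos = List.PositionOf(Table.Column(Source, "Column1"), "Date"),
Skipped   = Table.Skip(Source, HeaderPos)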
Awesome. Great logic. Thanks for the video.
Brilliant! Many thanks, Mr Goodly.
Very well explained, thank you so much, brother.
Great, excellent. Simple as that. Thanks.
This is awesome; thank you! One question: I need to add a column to the combined file that shows the original source filename for each record. Where in the flow, and how, is it best to do that, please?
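One possible way (a sketch, not necessarily the cleanest spot in the flow): the Folder.Files source already carries a Name column per file, so keeping it next to the data column before expanding means every record inherits its source filename. The expanded column names here are placeholders:
// keep the file name alongside each file's table, then expand the data
WithName = Table.SelectColumns(Source, {"Name", "Content"}),
Expanded = Table.ExpandTableColumn(WithName, "Content", {"Date", "Amount"}, {"Date", "Amount"})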
Fantastic! This is definitely going into my daily routine.
Enjoy! 😉
Amazing!!!, Greetings from Mexico.
Really very helpful, thanks. It is a good idea.
Awestruck... no words to express how fantabulous it is!
Thanks Rahul :)
This is great! What is the best way to do this when your source files are not formatted as Tables, but are simply Excel Worksheets?
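A sketch of one approach, assuming a single workbook (the path is hypothetical): Excel.Workbook lists worksheets as well as Tables, with a Kind column telling them apart, so you can filter for sheets and feed their Data tables into the same skip-and-promote logic:
Workbook = Excel.Workbook(File.Contents("C:\Data\Report.xlsx")),
// keep only worksheet entries; Tables and named ranges have other Kind values
Sheets   = Table.SelectRows(Workbook, each [Kind] = "Sheet")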
Marvelous work ji
My gosh... it's just simply brilliant! 💎
Thanks a bunch for your priceless help! 🤗👦
Thanks :)
Genius 🔥 thank you my friend sooo helpful ❤️❤️❤️
Fantastic Chandeep, thank you!
Glad you liked it!
This is awesome! I have one challenge: one of the first rows contains the date of the report, and I want it to end up within the data columns. How to do that :D!
Fantastic video and amazing explanation!
Many thanks :)
Ooff!! Bravo! Terrific video Chandeep! Superb!! You hit that one for a six!
Thanks 😉
Hi Goodly, wonderful and powerful trick... keep up the good work. What if you are importing from PDF files? Trying to convert the binary gives different results.
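For what it's worth, a sketch under the assumption that Pdf.Tables is available (Power BI, or Excel builds with the PDF connector): it plays the role Excel.Workbook plays for workbooks, returning the tables/pages detected in each file:
// parse each PDF binary into its detected tables instead of Excel.Workbook
Parsed = Table.TransformColumns(Source, {"Content", Pdf.Tables})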
Hi Goodly,
That was an amazing video. I have learned a lot from your videos for my daily tasks with Excel. It saves a lot of my time. God bless you.
I have a question, please, if you can answer it: when I convert PDF to Excel, most of the column values are not aligned into one column but land on either side.
Ex: column B dates should be in column B, but on a few rows they end up in A or C.
How can I align them into just the one column, B?
Please advise.
Thank you for all your great videos. 🙏
Fantastic! But can we use "is blank" instead of "does not contain any" as the condition?
Or promote headers if the record contains any of "Date", "Amount", etc.?
Can this also be solved by using an index number and a custom function?
Super Video Chandeep.
admiring the brilliance
excellent🙂
Incredible. Thanks :)
Could we have similar logic for bottom rows?
Fantastic 🎉.. Thanks 😊
As always, very neat & clear stuff. 👍
I was wondering if one can't use Table.FindText?
Like, each Table.Skip(_, Table.PositionOf(_, Table.FindText(_, "Profit"){0}))
But only testing for 1 column header here.
Hi Goodly, thanks for all your great videos. Isn't there a simpler way to do it here? In the example file you create a conditional column (If Column1=Date Then True). Then you fill the conditional column downwards. Now you have a True for all rows you need and a null value above the desired header row. So you can filter for True. Shouldn't that be dynamic as well?
No. That way, if you filter for True you will get the junk rows and all the data except the headers, and if you filter for False you will get only the headers.
Great video, Goodly! What if I have junk rows and also junk columns, is it possible to combine the two approaches? Thanks
Thanks Patrick.
Maybe this video will help - ruclips.net/video/1fn8fXYw6M4/видео.html
I would say simply amazing!!!!
For me, my question is: if we need to bring the data that sits above the headers in as a new column before promoting headers, how do we go about it?
Unbelievably crazy, as usual.
Really awesome!
Amazing❤! 🎉
Could you possibly tell me what I have to do with CSV files for "Table.TransformColumns(Source, {"Content", Excel.Workbook})", as that doesn't work for CSV?
JUST AMAZING, SUPERB
I have a similar problem with a CSV file; there are title lines before the column headings. How could I remove those?
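A sketch for that case, applying the video's marker-header idea to a single CSV (the path and header values are placeholders): parse the file without promoting headers, skip until a row contains a known header, then promote it:
Raw     = Csv.Document(File.Contents("C:\Data\report.csv")),
Cleaned = Table.PromoteHeaders(Table.Skip(Raw, each not List.ContainsAny(Record.ToList(_), {"Date", "Amount"})))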
I have an interesting use case. I have a base file with headers, but the fifth file has had one additional column with headers added. As a result the process you have described breaks, and for the last table I get the error "DataFormat.Error: There were more columns in the result than expected" in the CSV data column.
I have been wrestling with this for a week, and my research does not show any way to manage it, although appending the tables would seem to handle it.
send me your sample data and description of the problem - goodly.wordpress@gmail.com
Amazing
Thanks a lot
Wow ! thank you
Delighted, this is the problem of every hour.
Many times data comes with merged headers, which you have already sorted out.
What if we want to add back the removed rows after promoting the headers?
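One possible angle (a sketch, reusing the video's condition with assumed header values): Table.FirstN also accepts a condition, so the discarded top rows can be captured in a separate step and kept for later:
// the rows Table.Skip threw away, grabbed with the same condition
TopRows = Table.FirstN(Source, each not List.ContainsAny(Record.ToList(_), {"Date", "Amount"}))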
Hi, I'm stuck on the transform column step. My file is a .csv, not .xlsx, so when I use Table.TransformColumns it shows an error in the Content column. Do you know how to fix it?
If the name & kind of the data extracted into PQ are inconsistent and I need to filter out all the sheets I don't need,
how can I work with that?
excellent
Pretty cool. Seems the only thing that would limit what one can do with PQ is one's imagination. Question, why List.ContainsAny instead of List.ContainsAll?
Awesome!!!
Hi, it's a very smart solution. I'm looking for instructions on how to combine tables without losing the columns that exist in the previous steps; in this case I would like the Name column to remain.
you'll find the answer in this video
m.ruclips.net/video/oExuBdnHtrk/видео.html
@@GoodlyChandeep Thank you very much. This is what I am looking for. Can you say what you think about this solution: ruclips.net/video/rCYn_onMP0I/видео.htmlsi=QYmkwRM2Cl1FuoCu
Damn awesome is right 👍
This may sound like a stupid question, and I'm sure it's something basic, but why do you get Name = Sheet, Data = Table, while I always have Name = Table, Name = Sheet, and adjacent Data = Table, Data = Table? Oh, and I loved the use of a condition for Skip, which I'd never thought of, even though, now having looked, the documentation does say count or condition.
Hi Goodly.
Thanks for all videos. They are just great.
I need to combine two of your tricks in just one.
I have many sheets with junk lines (same number of junk lines for all sheets) and these same sheets have inconsistent columns.
How do I do that?
Thanks in advance
Your code to remove the junk headers can probably be:
Table.Skip(Source, each List.MatchesAny(Record.ToList(_), each _ = null or _ = ""))
After this you can promote the headers and then follow the inconsistent header video.
Thanks!
Thanks a lot Abhijit :)
Brilliant
Thanks for sharing!
But it looks like there should be List.ContainsAll instead of List.ContainsAny.
Was thinking the same.
very nice
Actually, I think you are the only YouTube instructor who prepares in-depth & creative PQ examples.
Really fantastic 👏, elegant & simple 😊
Amazing
Fabulous video, but it is difficult to see the "Applied Steps" at the end of the video. Question: what if the junk rows and "junk row data" are spread inconsistently through the spreadsheet? For example, the fund or department information may change, resulting in blank rows between datasets and header rows whenever a new fund source or department is identified in the report. The example shows how to remove junk rows at the top of the report for 3 reports. Do you have any videos where the junk rows are sporadically located throughout the report?
Mark, I'll have to take a look at the data to give you possible ways of solving it.
See if you can pick any tricks from this long video - ruclips.net/video/_ZKT1raC4P0/видео.html
I've shared horizontal and vertical looping techniques in this video.
@@GoodlyChandeep - Hi Chandeep, I could share a file with you on your website, or I could upload it on the PQ training course site. Would that work? Thanks for the great insights!
@@GoodlyChandeep Perfect! Thank you so much, Chandeep. Your help and guidance are greatly appreciated. Cheers!
Not as sophisticated, but say you had a table with two columns, Col1 and Col2, with junk rows at the top, and the header row has the values "Date" and "Amount". This seems to work and is easy to implement.
let
Source = Excel.CurrentWorkbook(){[Name="Table1"]}[Content],
Custom1 = Table.Skip(Source, each [Col1] <> "Date" and [Col2] <> "Amount")
in
Custom1
fnTableSkipDynamic(Source, "Col1", "Date", "Col2", "Amount")
Function to do this: input column names as text, header values as text.
(sourcetablename as table, col1name as text, header1value as text, col2name as text, header2value as text) =>
let
    return = Table.Skip(sourcetablename,
        each
            (Record.Field(_, col1name) <> header1value)
            and
            (Record.Field(_, col2name) <> header2value)
    )
in
    return
List.MatchesAll can look for both the headers. 👍
Great!
My source data is from PDF, not Excel.
So the formula =Table.TransformColumns(Source,{"Content", Excel.Workbook}) isn't working... any ideas?
How to do the same thing but for a CSV file?
What if I need the source file column?
Hi. Can you give me a way of doing this if there is more than one worksheet in the Excel file and you only want to clean just one worksheet?
If you're connecting to a single Excel file, you'll have to apply a filter before the Navigation step to restrict the sheet names to the one you want.
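A sketch of what that filter might look like (the step name Workbook and the sheet name are assumptions), applied to the table Excel.Workbook returns before navigating into a single sheet's Data:
// keep just the one worksheet to be cleaned
Filtered = Table.SelectRows(Workbook, each [Kind] = "Sheet" and [Name] = "Sheet1")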
Token of Gratitude!
Thank you so much Ankur
How can I add the file name as a column in this?
I am working with a CSV file and the syntax is not working.
Goodly is just too Godly.