this is brilliant for me, thanks
Great to hear!
Very cool, thanks Wyn
You’re welcome
Thanks for sharing. I've briefly explored the "Run a query against a dataset" step, but I can only see my dashboard under the dataset selection. How do I set this up so that I have access to the dataset(s)?
Currently, my report is built off dataflows that sit under my Fabric workspace, and my report ingests these as inputs. Have I gone wrong somewhere in this initial infrastructure setup?
So who built the model that your report is based on? Are you saying you don’t see the Semantic model icon when you go into your workspace?
Hello mate, I figured out the issue. As it is with these things, the solution is often simpler than the problem. I forgot to publish the latest version of my model, so the DAX query that was already giving the expected output in Desktop now works when I schedule the flow too. All I had to do was publish the model to reflect the latest change I'd made in my desktop version of the .pbix file. Duh. Thanks for your help 👍🏻
@willkhan6679 glad you got it working
Amazing, thanks.
You’re welcome.
Worth mentioning how to loop through the data model, as there's a limit on how much data you can pull in one call.
Any tips?
@@AccessAnalytic Usually you want to index your data in the model, then set a bunch of variables: one for batch size, an initial value, and optionally a count for the loops. At the end you want to append everything, and since each CSV will have its own header row that you don't want, the loop number helps there: you can skip the first row of the CSV for every loop beyond the first.
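The batching idea described above could be sketched like this. This is an illustrative Python mock-up, not the actual Power Automate flow: the data, column names, and `fetch_batch` stand-in for the dataset query call are all made up for the example.

```python
import csv
import io

BATCH_SIZE = 3  # in a real flow this would likely be thousands of rows

def fetch_batch(rows, start, size):
    """Stand-in for one 'run a query against a dataset' call."""
    return rows[start:start + size]

def batch_to_csv(batch, header):
    """Each query result comes back as its own CSV with a header row."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(header)
    writer.writerows(batch)
    return buf.getvalue()

header = ["Index", "Product", "Sales"]
all_rows = [[i, f"Item{i}", i * 10] for i in range(1, 8)]  # 7 indexed rows

parts = []
loop_number = 0
while True:
    batch = fetch_batch(all_rows, loop_number * BATCH_SIZE, BATCH_SIZE)
    if not batch:
        break
    fragment = batch_to_csv(batch, header)
    lines = fragment.splitlines(keepends=True)
    if loop_number > 0:
        lines = lines[1:]  # skip the repeated header for loops beyond the first
    parts.append("".join(lines))
    loop_number += 1

combined = "".join(parts)
print(combined)
```

The key detail is the `if loop_number > 0` check: only the first batch keeps its header, so the appended output ends up with a single header row and no duplicates in the middle.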
Cheers, so is this a workaround for a paginated report when you want large data exports and don't have Premium capacity?
@@AccessAnalytic No, there's a limit when you query a dataset from Power Automate regardless of Premium. In your case you called a single value, so no risk there... but if you want to capture a snapshot, this is the only way.
Sorry, what I’m saying is that a paginated report would be a simpler solution, but that requires Premium capacity.
Any idea on what the data limit is?
Consider this scenario: I have a PBI report that refreshes every day, and I want to extract that data and append it. I don't want it to overwrite the table, but always append, so that I have a historical record. How can I do this? Please
The CSV files could become your data source, and then you could use the "From SharePoint folder" connector to consolidate them?
ruclips.net/video/-XE7HEZbQiY/видео.htmlsi=YUWL7sGbRfEclx0H
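The "always append" pattern could be sketched like this. A minimal illustrative Python version, assuming each refresh's extract is stamped with a snapshot date and appended to one growing history file; the file path, columns, and data are all hypothetical.

```python
import csv
import os
import tempfile

# History file lives in a temp dir for this demo; in practice it might be
# a file on SharePoint or OneDrive that the flow appends to.
history_path = os.path.join(tempfile.mkdtemp(), "sales_history.csv")

def append_snapshot(rows, snapshot_date, path=history_path):
    """Append one refresh's rows, writing the header only on first creation."""
    write_header = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        if write_header:
            writer.writerow(["SnapshotDate", "Product", "Sales"])
        for product, sales in rows:
            writer.writerow([snapshot_date, product, sales])

# Two simulated daily refreshes of the same report table
append_snapshot([("Widget", 100)], "2024-01-01")
append_snapshot([("Widget", 120)], "2024-01-02")

with open(history_path) as f:
    history = f.read()
print(history)
```

Because the file is opened in append mode, each day's extract adds rows rather than replacing the table, and the snapshot date column keeps the historical record queryable.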
I found a bug (perhaps), does anyone know a solution? Issue: although the query defines the column order, I have found that if the first row (record) has null cells (items), that column is moved to the end of the columns in the output .CSV.
Not a scenario I’ve come across yet
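One possible workaround, sketched in Python rather than in the flow itself: the symptom described (a null in the first record pushing its column to the end) is typical of column order being inferred from the first record. If you pin the field order explicitly when writing the CSV, nulls in the first row can't reorder anything. The records and column names here are invented for illustration.

```python
import csv
import io

records = [
    {"Region": "West", "Sales": None, "Product": "A"},  # null in first row
    {"Region": "East", "Sales": 200, "Product": "B"},
]

# Explicit, fixed column order instead of inferring it from the first record.
fieldnames = ["Region", "Sales", "Product"]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=fieldnames, restval="")
writer.writeheader()
writer.writerows(records)
output = buf.getvalue()
print(output)
```

The null `Sales` value comes out as an empty cell in its proper position, rather than shifting the column. In a Power Automate flow the analogous move would be selecting/mapping the columns explicitly before the "Create CSV table" step instead of letting it infer them, though I haven't verified that fixes this exact connector behaviour.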