Hi, excellent crisp presentation. I create Power BI reports, but so far I had not used Power Automate; one user's needs forced me to look into it. I just saw your video and replicated a very similar flow, and it worked fine.
There are a few challenges, if you can help: 1. It skips rows if any column value is blank; how can I still capture those lines? 2. It transfers rows in batches of 100, and between batches it takes time; can we avoid that? 3. For just 2,400 rows it took 4 hours, which seems too long. 4. Where are the settings to change the row count limits?
By the way, one thing I noticed is that when you copy the DAX query, 501 is written in the query itself as the TOPN limit; I have already altered that. All help appreciated.
Thanks.
Hello 👋🏼. Glad you found it useful.
For 1 - yes, blank rows will be skipped. The workaround is to replace nulls in the query editor with 0 or any other value.
For 2-4 - unfortunately Power Automate takes a long time to finish, especially as the number of rows grows. The workaround is to duplicate the flow several times (say 3 in your case), filter the query for each one based on one of the fields, and let them run simultaneously. Not perfect, but it worked for me in a similar situation. Hope this helps.
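To illustrate that filtering idea (only a sketch; the 'Projects' table, [Region] column, and the region values are made-up placeholders, not from the video), each duplicated flow could run the same query with a different filter:

// flow 1 - the second flow would use "South", the third "West", and so on
EVALUATE
FILTER ( 'Projects', 'Projects'[Region] = "North" )

Each flow then appends only its own slice of rows, so the copies can run in parallel.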
Excellent Tutorial as usual! I was looking for an automated way to archive/store data without having a dedicated SQL server!
Thank you Qais. Glad you found it useful.
When I try the “Run a query against a dataset” action it returns “BadRequest”. What could I be doing wrong? Is it because I am using DirectQuery data?
Hi there. You are correct, that is the reason: this action requires listing the rows, which is only possible with imported data.
This won't work with a live connection or DirectQuery mode, only import?
thank you
Hello 👋, it should work whether you are using Import or DirectQuery.
When running the query against a dataset, the rows with null columns are excluded from the results. Is there any way to include them?
Hello 👋🏼, null values are treated as no data. To overcome this, use the replace values option in the query editor. Just replace null with any value, like 0 or “-”.
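As a rough sketch of that step in Power Query M (the Source step and the "Hours" column are placeholders, not from the video), the step the query editor generates looks something like:

= Table.ReplaceValue(Source, null, 0, Replacer.ReplaceValue, {"Hours"})

Applying this before the flow runs means the query action returns those rows instead of dropping them.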
Great tutorial~
But is the snapshot data table limited to 501 rows?
Hello Tam. Thank you first of all.
For the maximum number of rows of “Run a query against a dataset”, the documentation says either 100k rows or 1M values, whichever comes first.
So you are fine with 501 rows, and you can go well beyond that if needed.
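For reference, the query Power BI generates when you copy it from a visual typically wraps the result in TOPN with 501 as the row count. A simplified sketch (the table, column, and measure names here are placeholders):

// 501 is just the default Power BI writes into the copied query;
// raising it lets the action return more rows, up to the documented limits above
EVALUATE
TOPN (
    501,
    SUMMARIZECOLUMNS ( 'Projects'[project_name], "Total Hours", SUM ( 'Timesheet'[Hours] ) )
)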
Can we store the data in an Excel spreadsheet?
Hello there, yes, you can save to CSV format. In Power Automate, you first need to add a create table block, then a block that saves the output to a CSV file.
@NextGenBIGuru Do you happen to have a video showing these steps?
Hello 👋🏼, apologies, I don't have a specific video that shows how to do it for an Excel sheet.
Can we do snapshots without Power Automate?
Hello there. You can do them manually by creating table visuals and exporting them, if that's what you mean. Power Automate will automate the process for you.
Hi Hamzeh,
I tried to follow the same steps as you explained in this video. I am using OneDrive for data storage, whereas you used Google Sheets. When I try to add a row to a table using item(‘apply to each’)[‘table[project_name]’], its colour is not turning yellow; it stays purple. Could you please help in this regard? Thanks.
Hello 👋🏼. I assume you are appending to an Excel or CSV file. Based on your comment above, and assuming all the other steps are correct, your code should look like this: items(‘apply_to_each’)[‘table[project_name]’]. Make sure ‘apply_to_each’ exactly matches the name of the loop block you are using, use items rather than item for the function, and replace spaces with _.
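As a minimal sketch (assuming the loop block is literally named ‘Apply to each’ and the column key returned by the query is table[project_name], as quoted above), the expression typed into the add-a-row field would be:

items('Apply_to_each')?['table[project_name]']

The ?['...'] form is a safe lookup that returns null instead of failing the run when the key is missing, which makes a wrong column key easier to spot.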
@NextGenBIGuru Hi Hamzeh,
I am able to take a snapshot of the data after making the changes you recommended, but it is appending only 501 rows, and I need to append 21,616 rows. Could you please let me know how I can do this?
Hello 👋🏼, your DAX query probably has a TOPN function. Inside this function, increase the number of rows to a larger number.
@NextGenBIGuru The TOPN function has a maximum N_value of 5000. Could you please let me know how I can insert more than 5000 rows of the table?
@NextGenBIGuru Hi - I get an invalid expression when using this code: items(‘apply_to_each’)[‘table[project_name]’]. I'm using an xlsx on SharePoint. Any ideas? Is the code slightly different for Excel vs. Google Sheets in some way? Thank you!
Any idea how to get around the 5k row limit? My snapshot has over 5k rows and the flow breaks with:
"Your 'Apply to each' action is encountering an error because it is trying to process more items than the maximum limit of 5000, with 5903 items being passed. This issue arises from the query results returning too many rows. To resolve this, could you consider filtering the query results to reduce the number of items returned, or perhaps implementing pagination to handle the excess items?"
Hello 👋🏼, you can do something like pagination by creating 2 workflows instead of 1, and filtering your query on a field you introduce to your table (a calculated column) with values like “Page 1”, “Page 2”, etc.
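Purely as an illustration of that idea (the 'Projects' table, [project_id] column, and the 3000 cutoff are made-up placeholders), the calculated column could look like:

// split so that each bucket stays under the 5000-item limit
Page = IF ( 'Projects'[project_id] <= 3000, "Page 1", "Page 2" )

The first workflow then filters its query on 'Projects'[Page] = "Page 1" and the second on "Page 2", so each run stays under the Apply to each limit.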