Get all of our resources, templates, and automations here 👉 www.theaiautomators.com/?C2
Guys, you are just crazy!))) Brilliant))
Thanks so much!
It's amazing!!
awesome. ty for sharing
this is crazy
Any chance you could do the same vid but for Airtable? Feels like it's more advanced nowadays anyway. Thanks man, great content
Thanks! It's on my list to use Airtable as a solution for more regular crawls (as opposed to single shot scrapes like with Google Sheets) - just need to find the time!
I would like to add a knowledge base with viral hooks or the best-structured text for the reels :) How would I go about doing that in Make?
Hi!
It depends how big the knowledge base is. If it's relatively small, you could just put it at the end of the system prompt as context to guide the output. You could use the Prompts tab in the Google Sheet to maintain it.
If it's a large knowledge base and you want to fetch only the most relevant entries to inject into the system prompt, based on the type of post or reel, then you'd need to implement some sort of RAG retrieval.
I'm sure this is possible in Make with a vector store like Pinecone and the OpenAI text embeddings model. It would take some trial and error to get a system like that up and running.
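As a rough sketch of what that retrieval step looks like, here's a self-contained toy version: it uses a bag-of-words "embedding" and an in-memory list so it runs anywhere. In a real build you'd swap in the OpenAI embeddings model and a vector store like Pinecone; the hook texts below are just illustrative.

```python
# Toy RAG retrieval over a hook knowledge base.
# embed() is a stand-in for a real embedding model (e.g. OpenAI's),
# and the plain Python list stands in for a vector DB like Pinecone.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Word-count vector — a crude proxy for a semantic embedding.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def top_hooks(query: str, knowledge_base: list[str], k: int = 2) -> list[str]:
    # Rank hooks by similarity to the post/reel description, return top k
    # — these would then be injected into the system prompt.
    q = embed(query)
    ranked = sorted(knowledge_base, key=lambda h: cosine(q, embed(h)), reverse=True)
    return ranked[:k]

hooks = [
    "Stop scrolling: this fitness mistake is ruining your progress",
    "3 recipes you can cook in under 10 minutes",
    "The one productivity habit nobody talks about",
]
print(top_hooks("quick easy recipes to cook fast", hooks, k=1))
```

In Make, the equivalent would be an embeddings call on the post description, a Pinecone query module, and a text aggregator that appends the matches to the system prompt.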
Hope this helps!
Thank you for this. It's exactly what I was looking for. I did run into an issue though: the Instagram video links all seem to be expired. I tried many channels and kept running into the same issue. Wondering if Instagram has made changes on their end to stop video links being fetched. Any thoughts on how we could fix this? Thanks in advance.
You're very welcome! I just ran the scenario from the video and the images and videos are loading fine.
I ran into a similar problem when building out the automation. In the scenario, I recommend running the crawl once and then hardcoding the crawl ID, to avoid crawling constantly while building out the workflow.
The Instagram image and video links expire after a certain time period - I think it's a few hours. So whenever that happened, I ran the crawl again and the media URLs regenerated and worked again.
If you want to run the automation and then review it over a few days or weeks, then you'll need to download the media files and host them somewhere else.
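As a rough illustration of that download step (the URL and folder here are placeholders; in the real workflow the URLs come from the crawl output, and you'd then upload the saved files to stable hosting like Google Drive or S3):

```python
# Sketch: persist expiring Instagram media links by downloading the
# files locally before the URLs expire. Names and paths are illustrative.
import pathlib
import urllib.request

def filename_from_url(url: str) -> str:
    # Take the last path segment, dropping the (expiring) query string.
    return url.split("?")[0].rstrip("/").split("/")[-1] or "file"

def download_media(url: str, out_dir: str = "media") -> pathlib.Path:
    out = pathlib.Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    dest = out / filename_from_url(url)
    urllib.request.urlretrieve(url, dest)  # fetch while the link is still valid
    return dest
```

In Make itself, the equivalent is an HTTP "Get a file" module followed by an upload module for whatever storage you choose.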
Hope this helps
Daniel
How do you make the Excel formulas?
Hey, we have the templates for all of these in our community. For creating the formulas, I think Daniel got ChatGPT to generate a lot of these and tweaked them a bit!
Alan
This is amazing. Thanks so much for doing this.
I got as far as the CloudConvert step, i.e. the file URL and name have been generated. But for some reason the transcription isn't being generated by Whisper. There's always an error popping up on the Whisper module, and the "File Data" field is always blank.
Any idea why this is happening?
You're very welcome! If the "File Data" field on the Whisper module is blank then it means that there's an issue with the CloudConvert step.
So I recommend double-checking the configuration there. Make sure "Download a file" under "Export Options" is set to Yes.
"File Data" is a binary input into the Whisper module. When "Download a file" is set to Yes on CloudConvert, Make downloads the binary into its memory, and when the scenario transitions to the Whisper module, it passes that binary as an input to OpenAI.
Hope that helps
Daniel
@@TheAIAutomators This worked. It's now fully functional. Very exciting. Can't thank you enough! Cheers.