Blank Slate Labs
  • 6 videos
  • 17,221 views
Return Data From Bubble Backend Workflows
In this video I'll teach you two different methods for returning data from your backend workflows in @BubbleIO.
1. Using the API Connector to call your backend workflow as an API (a rough example call is sketched just below).
2. Using the App Connector to connect the app to itself and unlock returning data from the app's backend workflows.
Introduction - 0:00
Creating a Backend Workflow - 1:24
Returning Data with API Connector - 6:15
Returning Data with App Connector - 15:10
Views: 366
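For reference, a backend workflow exposed through Bubble's Workflow API can be called from the API Connector roughly like this. This is only a sketch: the app name, workflow name, parameter, and returned fields below are placeholders, not values from the video.

    curl -X POST "https://your-app.bubbleapps.io/version-test/api/1.1/wf/get-order-total" \
      -H "Authorization: Bearer YOUR_BUBBLE_API_TOKEN" \
      -H "Content-Type: application/json" \
      -d '{"order_id": "1689000000000x123456789012345678"}'

    # If the workflow ends with a "Return data from API" action, Bubble returns
    # the values in a wrapper along the lines of:
    # {"status": "success", "response": {"total": 129.99, "currency": "USD"}}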

Videos

Turn OpenAI's API Responses From Text to JSON Easily With Function Calling In Bubble
4.2K views · 1 year ago
In this video we will turn @OpenAI's Chat Completions responses into structured JSON arrays (lists) using their Function Calling feature. Everything will be built in @BubbleIO with no code and no plugins (other than the API Connector). 🙌 Included: * How to structure the OpenAI Chat Completions API call with a JSON Schema. * How to turn the initial response as a text string of the JSON format in...
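For context, a Chat Completions request that forces a function call against a JSON Schema looks roughly like the sketch below. The model name, function name, and schema fields are illustrative placeholders (the keyword/sentiment shape mirrors the example discussed in the comments), and older setups may use the equivalent "functions"/"function_call" fields instead of "tools"/"tool_choice".

    curl https://api.openai.com/v1/chat/completions \
      -H "Authorization: Bearer $OPENAI_API_KEY" \
      -H "Content-Type: application/json" \
      -d '{
        "model": "gpt-4o-mini",
        "messages": [{"role": "user", "content": "Extract keywords and sentiment from this review: ..."}],
        "tools": [{
          "type": "function",
          "function": {
            "name": "save_keywords",
            "description": "Store extracted keywords with their sentiment",
            "parameters": {
              "type": "object",
              "properties": {
                "results": {
                  "type": "array",
                  "items": {
                    "type": "object",
                    "properties": {
                      "keyword": {"type": "string"},
                      "sentiment": {"type": "string", "enum": ["positive", "neutral", "negative"]}
                    },
                    "required": ["keyword", "sentiment"]
                  }
                }
              },
              "required": ["results"]
            }
          }
        }],
        "tool_choice": {"type": "function", "function": {"name": "save_keywords"}}
      }'

    # The structured result comes back as a JSON string in the tool call's
    # "arguments" field, which is the piece you then parse into a list in Bubble.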
Upload and Search Your Own Documents With Bubble, OpenAI, and Pinecone (Bonus - Upload PDFs)
751 views · 1 year ago
In this video we'll focus on a much-requested add-on to our original videos - uploading a PDF. We'll use PDF.co (@PDFco) to convert the PDF to text, and then easily plug into the rest of what we built already to add documents, store their vectors in Pinecone (@pinecone-io), and then ask questions through OpenAI's APIs (@OpenAI). All built without any code within Bubble (@BubbleIO).
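For orientation, PDF.co's PDF-to-text conversion is a single API call along these lines (a sketch only; check PDF.co's documentation for the exact endpoint and fields, and the file URL here is a placeholder):

    curl -X POST "https://api.pdf.co/v1/pdf/convert/to/text" \
      -H "x-api-key: YOUR_PDFCO_API_KEY" \
      -H "Content-Type: application/json" \
      -d '{"url": "https://example.com/files/report.pdf", "inline": true}'

    # With "inline": true the extracted text is returned in the JSON response body,
    # ready to be chunked and embedded by the existing Bubble workflow.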
Building an API in Xano to Use OpenAI to Answer Questions From Google Results
1.2K views · 1 year ago
Part of an upcoming series of videos on building an AI Agent with Xano (@nocodebackend). Looking up information on Google is a central tool for giving OpenAI's GPT APIs (@OpenAI) access to real-time information. In this video we'll cover setting up an API in Xano, using the Serper.dev API for getting Google results, and prompting OpenAI to answer a query in natural language based on results found ...
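For reference, the Serper.dev call that the Xano API wraps is roughly this (a sketch; the query string is a placeholder):

    curl -X POST "https://google.serper.dev/search" \
      -H "X-API-KEY: YOUR_SERPER_API_KEY" \
      -H "Content-Type: application/json" \
      -d '{"q": "latest news about no-code backends"}'

    # The response includes an "organic" array of results (title, link, snippet)
    # that can be concatenated into the prompt sent to OpenAI.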
Upload and Search Your Own Documents With Bubble, OpenAI, and Pinecone (Part 2)
2.9K views · 1 year ago
In this video we'll focus on building out the parts that enable you to search your document(s) and use OpenAI to compile an answer. First we'll embed your questions to a vector with OpenAI (@OpenAI), then we'll query Pinecone (@pinecone-io) by that vector, and then we'll combine the relevant pieces of the document into a prompt for OpenAI to answer. All this without a single line of code, all b...
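For reference, the Pinecone query step described here comes down to a call like this (a sketch; the index host and namespace are placeholders, and the vector is truncated - the real one is the full 1,536-value embedding of the question):

    curl -X POST "https://your-index-abc1234.svc.us-east-1-aws.pinecone.io/query" \
      -H "Api-Key: YOUR_PINECONE_API_KEY" \
      -H "Content-Type: application/json" \
      -d '{
        "vector": [0.0123, -0.0456, 0.0789],
        "topK": 3,
        "includeMetadata": true,
        "namespace": "my-documents"
      }'

    # The returned "matches" carry each chunk's metadata, which is combined into
    # the Text Context for the final OpenAI prompt.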
Upload and Search Your Own Documents With Bubble, OpenAI, and Pinecone (Part 1)
8K views · 1 year ago
In this video we'll focus on splitting a document's text into multiple parts, embedding each part as a vector (from @OpenAI), and then upserting them into Pinecone (@pinecone-io). Everything will be done without any code, all coordinated within Bubble (@BubbleIO). www.buymeacoffee.com/blankslatelabs
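For reference, embedding each chunk is a single OpenAI call along these lines (a sketch; the model name and input text are placeholders):

    curl https://api.openai.com/v1/embeddings \
      -H "Authorization: Bearer $OPENAI_API_KEY" \
      -H "Content-Type: application/json" \
      -d '{"model": "text-embedding-ada-002", "input": "text of one 100-word chunk ..."}'

    # The response's data[0].embedding is a list of 1,536 numbers - the vector
    # that gets upserted into Pinecone for that chunk.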

Comments

  • @dariogn536
    @dariogn536 8 days ago

    Thanks for your video lesson, very useful. I have data that is returned as a base64 string to create a PDF. Not sure what to do next to download it automatically in the browser. Do you have any tutorial on that? Thanks

  • @Attlas-b1k
    @Attlas-b1k 10 days ago

    I did exactly the same with a document of several pages, but I can only split it into 10 chunks. Do you have any idea what's going on, please?

  • @marinamoreira8544
    @marinamoreira8544 14 days ago

    So cool! Works perfectly for the API setup call, but when trying it in the actual workflow I keep getting an "Unexpected B token" error.

  • @Eric-wk4cg
    @Eric-wk4cg 21 days ago

    Hello and very nice video! I do have a question though: if I want to create a thing and return its ID to the frontend, can I use the App Connector method? And does the action "Run [bewf-name]" actually wait for the response before going to step 2 of the frontend workflow?

  • @akshaymalhotra597
    @akshaymalhotra597 25 days ago

    Is this only possible with a paid version of bubble?

  • @fheralanis
    @fheralanis 1 month ago

    I find this App Connector alternative very interesting. What could be some other use cases for this option?

  • @osaviosousa
    @osaviosousa 1 month ago

    I'm having the following problem when calling the chunk_text API event in a Bubble backend workflow. Can someone help me? "Workflow error - The service Pinecone - Upsert Vectors just returned an error (HTTP 400). Please consult their documentation to ensure your call is setup properly. Raw error: Octal/hex numbers are not valid JSON values. "values": [-0,011428926, -0,000166 ^ "

  • @chisomonyea
    @chisomonyea 2 months ago

    I’m curious to know if this could be done by using app data in bubble instead of pinecone.

  • @aisaluk
    @aisaluk 3 months ago

    Very comprehensive video, good stuff! Would be great if you could make another video on this topic but with the new Pinecone Assistant connected with OpenAI Assistant (which is connected and operating in Bubble).

  • @dr.ricardoloureiro
    @dr.ricardoloureiro 3 months ago

    How do you convert a PDF to text? I'm looking for a way for the bot to give answers after reading a file.

  • @ninovizi
    @ninovizi 7 months ago

    Hey, what tool are you using for those effects as your mouse moves? There's so many startups for that already - wondering which you picked!

  • @watchparth
    @watchparth 7 months ago

    Super cool!!

  • @sidhorihawthorne
    @sidhorihawthorne 7 months ago

    Awesome content, perfect walkthrough! thank you for your generosity on sharing this and putting the effort to make it great.

  • @SashaShumylo
    @SashaShumylo 8 months ago

    How do you save it in the DB as separate entities?

    • @BlankSlateLabs
      @BlankSlateLabs 6 months ago

      Two steps:
      1. Create a backend workflow that takes in the parameters. So in this example:
         - keyword (data type text)
         - sentiment (data type text or an option set)
         Within that workflow, you would create a new thing and set the values based on the parameters.
      2. Schedule API Workflow on a list. Select the list from the response and configure it to pass the parameter values for each item in the list.

    • @SashaShumylo
      @SashaShumylo 6 months ago

      @BlankSlateLabs Thanks, the biggest problem was how to select the proper data type.

  • @alainpapaa7930
    @alainpapaa7930 8 months ago

    Thank you, and be courageous. You are important and helping more than you think.

  • @procrastinateXrok
    @procrastinateXrok 9 months ago

    For the next video, please continue and put them into a list.

    • @BlankSlateLabs
      @BlankSlateLabs 8 months ago

      nice, good suggestion. lemme put something together. (also can update with json mode vs. function calling)

  • @abbeyjackson4362
    @abbeyjackson4362 10 months ago

    Thank you for this tutorial! With it I was able to get data returned in a consistent format (finally!), but I am really struggling to use the JSON that is returned and wonder if you might be able to help me figure it out. I have the following JSON, which I was able to get returned to me from OpenAI using function calling. I need to display the raw JSON array of products for each category in a different display group. In other words, I am trying to extract the JSON string under "products" for each category and display the extracted JSON in a display group. I both need to display the raw products JSON for each category separately and USE this raw products array in a future API call. I absolutely cannot figure out how to do this. I have been trying for several days and have tried a few different plugins to no avail. Does anyone know how to do this?
    {
      "lists": [
        {
          "category": "ENUMValue1",
          "products": [
            { "name": "name1", "rating": 95, "url": "url1" },
            { "name": "name2", "rating": 95, "url": "url2" }
          ]
        },
        {
          "category": "ENUMValue2",
          "products": [
            { "name": "name1", "rating": 95, "url": "url1" },
            { "name": "name2", "rating": 95, "url": "url2" }
          ]
        },
        {
          "category": "ENUMValue3",
          "products": [
            { "name": "name1", "rating": 95, "url": "url1" },
            { "name": "name2", "rating": 95, "url": "url2" }
          ]
        }
      ]
    }

    • @BlankSlateLabs
      @BlankSlateLabs 8 months ago

      Hiiii so sorry for the delay. you probably already solved this, but if not, feel free to email me at hello [at] blankslate-labs.com and happy to chat through options.

  • @TechAuditTV
    @TechAuditTV 10 months ago

    Is this process now different with the introduction of OpenAI's Assistants API?

    • @BlankSlateLabs
      @BlankSlateLabs 6 months ago

      Yes. You'd most likely leverage function calling tools within the Assistants API. I am definitely overdue on doing some tutorials on the Assistants API. :)

  • @msaavedra897
    @msaavedra897 10 months ago

    Thank you for this! Keep it up🙂

  • @roberthayes4312
    @roberthayes4312 11 months ago

    This is the cleanest solution I've seen to converting strings into usable JSON. Appreciate the detailed walkthrough here!

  • @ajmound790
    @ajmound790 1 year ago

    So damn helpful. Every last one of your videos goes through something I need right now and can't find documented anywhere else. Wanted to just say thank you!

  • @ryanscott642
    @ryanscott642 1 year ago

    Thanks very helpful

  • @PocketRiches
    @PocketRiches 1 year ago

    Make more bubble videos, please. Yours are the best. 😁

  • @Fannyvieira1
    @Fannyvieira1 1 year ago

    thank you so much

  • @tima3829
    @tima3829 1 year ago

    Hi, which browser do you use? :) It looks cool

  • @davewliu
    @davewliu 1 year ago

    Your tutorials are the clearest and best in the bubble community. Feel like these videos should have 20x views!

  • @xuyaoren
    @xuyaoren 1 year ago

    Exactly what I need, thanks for your effort.

  • @dniliveact
    @dniliveact 1 year ago

    Spectacular tutorial! Very easy to follow. It's the first video I watched from you and I'm definitely subscribing. Here is a timeline of the video:
    00:01 Learn how to connect Bubble with OpenAI and Pinecone for document question-answering
    04:06 Setting up OpenAI and Pinecone APIs
    08:25 Building a curl command for Pinecone API
    12:47 API setup for database, OpenAI embedding, and Pinecone upsert
    17:19 Convert PDF to text and chunk it into smaller pieces using a backend workflow
    21:22 Create a document chunk and split it into an array of words
    25:34 Process for chunking and connecting text content with Pinecone and Bubble
    29:38 Create a workflow to chunk text and send it to Pinecone

    • @BlankSlateLabs
      @BlankSlateLabs 1 year ago

      Thanks so much for putting this together! 🙏

  • @malekaimischke2444
    @malekaimischke2444 1 year ago

    thank you for this video!

  • @Byhazellim
    @Byhazellim 1 year ago

    I've tried this several times. Any idea why I get zero matches at minute 7:10? Not sure what I've missed. 🥲

  • @teddyschoenith9152
    @teddyschoenith9152 1 year ago

    Hey, I have it set up where there are 5 different chatbots, and each chatbot has its own PDFs. Now when the user deletes a chatbot, how do I delete everything related to that chatbot in Pinecone (because when they add a chatbot, it will use the same namespace as the one that was deleted)? I also need to be able to change the knowledge base within that specific namespace, like letting them add or delete certain files in that chatbot.

  • @soonstudio101
    @soonstudio101 1 year ago

    Great tutorial. I tried to build it and it works. But how do I do it for different users? As of now, different users will see the same thing & content @Blank Slate Labs

    • @BlankSlateLabs
      @BlankSlateLabs 1 year ago

      Hey! Glad it was helpful. One approach I have done is to use namespace in Pinecone as the user or team filter. So you can set Namespace when you upsert the vector as the user ID (if doing user-based access control) or team ID (if doing team-based access control). Then you query only for that Namespace.
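      To make that concrete, a namespaced upsert and query might look roughly like this (a sketch; the index host and IDs are placeholders, the vectors are truncated, and the Bubble user's unique ID is used as the namespace):

          # Upsert: store the chunk's vector under this user's namespace
          curl -X POST "https://your-index-abc1234.svc.us-east-1-aws.pinecone.io/vectors/upsert" \
            -H "Api-Key: YOUR_PINECONE_API_KEY" \
            -H "Content-Type: application/json" \
            -d '{"vectors": [{"id": "chunk-42", "values": [0.01, -0.02], "metadata": {"document_id": "doc-7"}}], "namespace": "1689000000000x1234"}'

          # Query: only this user's vectors are searched
          curl -X POST "https://your-index-abc1234.svc.us-east-1-aws.pinecone.io/query" \
            -H "Api-Key: YOUR_PINECONE_API_KEY" \
            -H "Content-Type: application/json" \
            -d '{"vector": [0.01, -0.02], "topK": 3, "includeMetadata": true, "namespace": "1689000000000x1234"}'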

  • @mattk2531
    @mattk2531 1 year ago

    Have to STRONGLY disagree with the use of JSON-safe formatting for the api calls (vs. keeping quotes in)... Couldn't help but laugh when you forgot at the end! Thanks so much for this, truly one of the better resources out there for integrating these new AI tools with Bubble.

    • @jameslakin
      @jameslakin 1 year ago

      Hey Matt, curious why you're not a fan of using JSON-safe formatting. Is there a particular reason why or just personal preference?

    • @BlankSlateLabs
      @BlankSlateLabs 1 year ago

      Ha appreciate the strong opinion. I do it mainly to escape out special characters. The fact that it adds the quotation marks in automatically is a downside. I'd definitely be down for another way though. What is your approach to escaping out any special characters before sending in an API call? Thanks for the feedback!

  • @ErickBonilla-p2o
    @ErickBonilla-p2o 1 year ago

    Awesome videos, very good content. Thank you. What screen recording are you using? Looks very smooth.

    • @BlankSlateLabs
      @BlankSlateLabs 1 year ago

      Thanks! Appreciate the feedback. I am using screen.studio. It is great!

  • @jvanh8926
    @jvanh8926 1 year ago

    Fantastic tutorial! Would love it if you could also show us other implementations like:
    - How to update your vector database
    - How to implement the status of the upload to Pinecone
    - How to search for multiple uploads
    Thank you so much for sharing this so far

    • @BlankSlateLabs
      @BlankSlateLabs 1 year ago

      Thanks so much! I can try to do some quick videos on those. a) valuable! thanks for the suggestions. b) all pretty easy to implement, just some small changes to how you implement them.

    • @jvanh8926
      @jvanh8926 1 year ago

      @BlankSlateLabs Thank you for replying! It would be amazing if you could make some quick videos. Another idea would be to create a Bubble template that people can buy (if you want to monetize it) or use as a quick start alongside your (soon to be made) advanced tutorials and those videos. With great anticipation, I hope to see more content from your knowledge in this field. Thanks!

    • @BlankSlateLabs
      @BlankSlateLabs 1 year ago

      Thanks for the suggestions! Yeah, also something to think about. :) Let me look into it!

  • @StartupStudio_MA
    @StartupStudio_MA 1 year ago

    Hi! I was wondering if the parameter in "select until #" has to be the chunk_size or if it has to be start_index + chunk_size. In this case chunk_size would remain 100 all the time and start_index changes from 0 to 95, 190, etc., so the command would say, e.g., from item #190 until item #100. Would that work or generate an error? Such a precious tutorial btw! Thanks!

    • @BlankSlateLabs
      @BlankSlateLabs 1 year ago

      Ha, what you are describing was what I initially assumed it would do as well! It seems logical that the select until would be the index overall of the array. But it actually is the index from the starting point. So when you create the "select from", it treats that as the new 0 index, and the "select until" is additive from that vs. the whole array. Does that make sense?
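      As a concrete illustration of that relative indexing, using the numbers from the question above: with chunk_size = 100, "select from item #190" re-indexes the remaining words so that word 190 becomes the new item #1, and "select until item #100" then keeps the first 100 of those, i.e. words 190 through 289 of the original list. So the "until" value stays at chunk_size (100) for every chunk; setting it to start_index + chunk_size would make each chunk grow with the start index instead of staying 100 words long.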

  • @mot._vat._on
    @mot._vat._on 1 year ago

    Those tutorials are great! I really appreciate you providing full guidance on how to put everything together with a working example. Have you tried your demo app with Excel or CSV files? I am trying to build a similar app but want to analyze the uploaded Excel or CSV, similar to what GPT Code Interpreter does. I believe that functionality was not yet available at the time you created the tutorial. Do you think it is now possible to send files directly to OpenAI to analyze and interact with them in a similar fashion, or do you still need to split the files into chunks and rely on Pinecone?

    • @BlankSlateLabs
      @BlankSlateLabs 1 year ago

      Hey! Thanks so much for the feedback! Yeah, this was before the days of Code Interpreter. I have not done anything yet with Excel or CSVs. From what I have heard, the best way to query data is to think of GPT as a query writer vs. the actual data analyzer. To compare it to this Pinecone example: you have data (the documents and their chunks), you have a process to search for the data (vector search in Pinecone), and then once you have the right data (Text Context), you can use GPT to summarize that data. So a system where you hold tabular data in something like a SQL-based database can work like this: the SQL database is the data source. You then use GPT to translate a text question into a SQL query. You then run the SQL query on your database to get the right data. Then you pass along the matched data to GPT to summarize or answer questions from.
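      As a rough sketch of that "GPT as query writer" pattern (the table and column names here are invented for illustration):

          curl https://api.openai.com/v1/chat/completions \
            -H "Authorization: Bearer $OPENAI_API_KEY" \
            -H "Content-Type: application/json" \
            -d '{
              "model": "gpt-4o-mini",
              "messages": [
                {"role": "system", "content": "You write SQL for a table sales(order_date DATE, region TEXT, amount NUMERIC). Reply with a single SQL query and nothing else."},
                {"role": "user", "content": "What was total revenue per region last month?"}
              ]
            }'

          # The returned SQL is run against the database, and the matching rows are
          # passed back to GPT in a second call to be summarized in natural language.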

    • @mot._vat._on
      @mot._vat._on 1 year ago

      @BlankSlateLabs I really appreciate your response and advice on that! Really helpful information!

  • @Djaxad
    @Djaxad 1 year ago

    Hey mate, what is your browser?

  • @MisCopilotos
    @MisCopilotos 1 year ago

    👏🏼

  • @extremehealthradio
    @extremehealthradio 1 year ago

    I've been searching forever to try to find something like your video! I've got hundreds of PDFs, MP3s and URLs that I would like to search to get answers from. With MP3s I'd have to get them transcribed. Can I upload a zip file to Bubble, or put them all on Dropbox and store them there for Bubble to connect with?

    • @BlankSlateLabs
      @BlankSlateLabs 1 year ago

      hey! just responded via your email. You'd either have to upload the files one at a time or upload them somewhere else and add a link(s) to the record in Bubble. It does look like Bubble has a plugin for using Box to store files (manual.bubble.io/core-resources/bubble-made-plugins/box), though I personally have never used it, so can't comment too much on its use.

  • @shoshi475
    @shoshi475 1 year ago

    Would you be able to make a similar video, but instead of a document loader, using a Hugging Face dataset? The same querying and upserting into Pinecone and everything into an OpenAI chatbot. I've been scouring everywhere and you put out the most value. Keep it up man!

  •  1 year ago

    Would love to see a tutorial on scraping a website without a sitemap to store in pinecone!

  • @jvanh8926
    @jvanh8926 1 year ago

    Thanks for your video! Is pinecone the only option though?

    • @BlankSlateLabs
      @BlankSlateLabs 6 months ago

      Not necessarily. There are three types of options:
      1. Fully hosted vector databases (simpler to use, but less control over how you store and retrieve data):
         - Pinecone
         - OpenAI's vector stores (as part of the Assistants v2 API)
      2. Vector data types in hosted backend services (using pgvector, an add-on to Postgres):
         - Supabase (with pgvector enabled)
         - Xano (using their vector data type)
      3. Self-hosted vector stores:
         - Chroma

  • @shoshi475
    @shoshi475 1 year ago

    Hi there, I am impressed by your work and channel. How would I be able to contact and hop on a call with you?

    • @BlankSlateLabs
      @BlankSlateLabs 1 year ago

      Sure! Feel free to reach out to me via email: hello [at] blankslate-labs.com

  • @tylersnard
    @tylersnard 1 year ago

    Text in the diagrams is too small!

    • @BlankSlateLabs
      @BlankSlateLabs 1 year ago

      Sorry 'bout that! Here are links to images: drive.google.com/file/d/17ubXUoAaGG0I6LE-qBSCmy9DwjFJi51r/view?usp=drive_link drive.google.com/file/d/16zJd50aVTrVZoO46yqHr-1QodFhLRVqb/view?usp=drive_link

  • @salemmohammad2701
    @salemmohammad2701 1 year ago

    If I want to make it an extended discussion (instead of a single question and answer) how can I make OpenAI remember what was discussed during the conversation? Any hints to help me search would be appreciated

    • @BlankSlateLabs
      @BlankSlateLabs 1 year ago

      sure! You need to send all previous messages along with the API call to OpenAI. Here is a tutorial that shows how to do that for a more general ChatGPT clone-like product in Bubble: www.planetnocode.com/tutorial/plugins/openai/build-a-chatgpt-clone-in-30-mins-with-bubble-io/
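      Concretely, "sending all previous messages" just means the messages array grows with each turn, roughly like this (a sketch with placeholder content):

          curl https://api.openai.com/v1/chat/completions \
            -H "Authorization: Bearer $OPENAI_API_KEY" \
            -H "Content-Type: application/json" \
            -d '{
              "model": "gpt-4o-mini",
              "messages": [
                {"role": "system", "content": "Answer using the provided document context."},
                {"role": "user", "content": "What does the contract say about termination?"},
                {"role": "assistant", "content": "Either party may terminate with 30 days notice."},
                {"role": "user", "content": "And what about renewal?"}
              ]
            }'

          # Each new question is appended to the history, so the model can resolve
          # follow-ups like "what about renewal?" in context.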

    • @salemmohammad2701
      @salemmohammad2701 1 year ago

      @BlankSlateLabs I really appreciate your answers. So how much text from previous messages can I send with the API call? And how can I find information like this regarding Cohere?

    • @BlankSlateLabs
      @BlankSlateLabs 1 year ago

      @salemmohammad2701 You can send as much as the input token limit of the model allows. For OpenAI, the limits are here: platform.openai.com/docs/models You can also just cap it at the last N messages (maybe something like the last 10 messages) to provide continuity with the context. Regarding Cohere, that I can't comment on. They don't seem to have a chat-based API, so you'd probably just have to append the prompt with a transcript of the prior conversation for each message.

    • @salemmohammad2701
      @salemmohammad2701 1 year ago

      @BlankSlateLabs Thank you very much, your responses are very helpful.

    • @BlankSlateLabs
      @BlankSlateLabs 1 year ago

      @salemmohammad2701 No problem! Happy it has been helpful. :)

  • @andreschurrla949
    @andreschurrla949 1 year ago

    Excellent tutorial! Please help, I can't get past the stage of initializing the Upsert Vectors call at 15:27. I keep getting this response in Bubble: "Raw error for the API read ECONNRESET"

    • @BlankSlateLabs
      @BlankSlateLabs 1 year ago

      Thanks! For that error, I am not sure. It's a connection error (not a setup error), so it could be an issue on the Bubble side, the Pinecone side, or with your network connection. Maybe try again?

    • @andreschurrla949
      @andreschurrla949 1 year ago

      @BlankSlateLabs Yeah, I will retry the steps.

  • @shoshi475
    @shoshi475 1 year ago

    I'm having a problem at 15:42 in the video; it came up as: "There was an issue setting up your call. Raw response for the API Status code 400 Expected an object key or }. 34, 0.0023024268, ^" Please help!

    • @BlankSlateLabs
      @BlankSlateLabs 1 year ago

      This most likely is caused by a missing comma or bracket in the JSON body. Make sure it is this:
      {
        "vectors": [
          {
            "values": [<embedding>],
            "metadata": { "document_id": <document_id> },
            "id": <document_chunk_id>
          }
        ],
        "namespace": <namespace>
      }
      Another possible reason is that you copied a { or an extra comma or quotation mark when you grabbed the vector and copied it to the <embedding> value. Make sure the start and end do not include any special characters.

    • @JoaoSilva-ij6dx
      @JoaoSilva-ij6dx 1 year ago

      I have the same problem, and I checked many times :( @BlankSlateLabs

  • @proan_1992
    @proan_1992 1 year ago

    Does it work for image embeddings?

    • @BlankSlateLabs
      @BlankSlateLabs 1 year ago

      For this setup, no, it would not work for image embeddings. Image embeddings and language embeddings are generally not compatible with each other and are not interchangeable.

  • @salemmohammad2701
    @salemmohammad2701 1 year ago

    I watched another video in which the developer did not chunk the text before embedding it, as he said that embedding includes chunking.

    • @BlankSlateLabs
      @BlankSlateLabs 1 year ago

      Embedding itself is not chunking. Basically, think of embedding as translating to another language. In this case it's a language defined by a vector in multi-dimensional space (1,536 dimensions). Each unique vector represents unique meaning. The closer the vectors are to each other, the closer they are in meaning. So when you embed to a vector, you give it a new language to search for things with similar meaning, instead of just searching by text in a language like English. The chunking is done so that when you search, you get back the most relevant excerpts of text for the search term (instead of the entire document text). If you did not chunk and just embedded, you'd always return the whole document text.

    • @salemmohammad2701
      @salemmohammad2701 1 year ago

      @BlankSlateLabs What are the factors that determine the appropriate size for the chunk?

    • @BlankSlateLabs
      @BlankSlateLabs 1 year ago

      @salemmohammad2701 The ultimate goal is to break it down into pieces of text that represent distinct topics. Some methods will define the length of the chunk dynamically by parsing the text by sentences or paragraphs. Others will use the headers to define sections and create a chunk for each section. What I did in the video is a bit more rudimentary, since I am just using Bubble natively (no code). Thus I set the chunk size at 100 words. This is about a paragraph in length (20 words per sentence on average, 5 sentences). I then overlap by 10 words (each chunk also includes the last 10 words of the previously created chunk).
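      To illustrate that scheme with concrete numbers: chunk 1 covers words 1-100, chunk 2 covers words 91-190, chunk 3 covers words 181-280, and so on, with each chunk starting 90 words after the previous one so that the 10-word overlap keeps sentences that straddle a boundary searchable from both chunks.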

    • @salemmohammad2701
      @salemmohammad2701 1 year ago

      @BlankSlateLabs Can the embedding model handle any size of chunk? And does the size of the chunk affect the cost the model's company charges?

    • @BlankSlateLabs
      @BlankSlateLabs 1 year ago

      @salemmohammad2701 OpenAI's embedding cost is $0.0001 per 1,000 tokens (a token is about 4 characters). For their latest model, the maximum you can input into the embedding model is 8,000 tokens.
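      As a rough worked example at that price: a 100-word chunk is roughly 130-135 tokens (a common rule of thumb is about 1.3 tokens per English word), so embedding one chunk costs about $0.000013, and even a 100,000-word document comes out to roughly 130,000 tokens, or about $0.013 to embed in full.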