Richmond Alake
  • Videos: 52
  • Views: 27,935
How to Implement Agentic RAG Using Claude 3.5 Sonnet, LlamaIndex, and MongoDB
In this video, we dive into implementing an agentic RAG system using Claude 3.5 Sonnet by Anthropic, LlamaIndex, and MongoDB. I'll take you through the concepts of agentic RAG, which combines retrieval-augmented generation (RAG) with agentic behaviour, producing a system that can retrieve information efficiently and make autonomous decisions about which tools to use.
We'll cover everything from data loading and embedding to integrating with MongoDB and setting up the agentic system. By the end, you'll know how to build a recommendation system for Airbnb listings and extend it with additional tools. Follow along with the provided notebook and learn practical applications of building advanced AI agents.
Views: 765
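
For readers following along, here is a minimal sketch of the agentic layer described above, assuming a LlamaIndex + MongoDB Atlas + Anthropic stack. It is not the notebook's exact code: the connection string, the "airbnb"/"listings" database and collection names, the index name, and the model identifiers are placeholders to adapt to your own setup.

    import pymongo
    from llama_index.core import Settings, VectorStoreIndex
    from llama_index.core.agent import FunctionCallingAgentWorker
    from llama_index.core.tools import QueryEngineTool, ToolMetadata
    from llama_index.embeddings.openai import OpenAIEmbedding
    from llama_index.llms.anthropic import Anthropic
    from llama_index.vector_stores.mongodb import MongoDBAtlasVectorSearch

    # Claude 3.5 Sonnet drives the agent; an embedding model handles retrieval queries.
    Settings.llm = Anthropic(model="claude-3-5-sonnet-20240620")
    Settings.embed_model = OpenAIEmbedding(model="text-embedding-3-small")

    # Point LlamaIndex at an existing Atlas collection of embedded Airbnb listings.
    mongo_client = pymongo.MongoClient("<ATLAS_CONNECTION_STRING>")
    vector_store = MongoDBAtlasVectorSearch(
        mongo_client, db_name="airbnb", collection_name="listings", index_name="vector_index"
    )
    index = VectorStoreIndex.from_vector_store(vector_store)

    # Expose retrieval as a tool so the agent can decide when to call it.
    listings_tool = QueryEngineTool(
        query_engine=index.as_query_engine(similarity_top_k=5),
        metadata=ToolMetadata(
            name="airbnb_knowledge_base",
            description="Retrieves Airbnb listing details to ground recommendations.",
        ),
    )

    agent = FunctionCallingAgentWorker.from_tools(
        [listings_tool], llm=Settings.llm, verbose=True
    ).as_agent()
    print(agent.chat("Recommend a two-bedroom listing near the beach for a family of four."))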

Videos

Not Another Tech Bro Startup Series
Views: 390 · 14 days ago
Welcome to one of the many entries to the AI Stack Engineer Startup Series, where we build an AI startup from the perspective of a technical startup founder. 👷🏾‍♂️ We are building OpenSpeech. Try out OpenSpeech: bit.ly/3VYVfc8 💻 An AI app that converts audio from sources such as podcasts, videos, and audio files into written content such as social media posts, articles and more. 🧐 Watch this vi...
Building a RAG Pipeline with Anthropic Claude Sonnet 3.5
Views: 4.6K · a month ago
In this video, we explore and test the coding capabilities of Claude Sonnet 3.5, Anthropic's latest model. We begin by providing a diagram of a RAG (Retrieval-Augmented Generation) pipeline for data processing, embedding ingestion, retrieval, and integration with large language models. We then use Claude Sonnet 3.5 to generate Python code to implement this pipeline using MongoDB and the LangCha...
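
As a rough sketch of the pipeline outlined above (not the code Claude generates in the video), assuming the langchain-mongodb, langchain-anthropic, and langchain-openai integrations; the connection string, the "rag_db.documents" namespace, and the index name are placeholders:

    from langchain_anthropic import ChatAnthropic
    from langchain_core.output_parsers import StrOutputParser
    from langchain_core.prompts import ChatPromptTemplate
    from langchain_core.runnables import RunnablePassthrough
    from langchain_mongodb import MongoDBAtlasVectorSearch
    from langchain_openai import OpenAIEmbeddings

    # Retrieval: an Atlas collection indexed for vector search over embedded documents.
    vector_store = MongoDBAtlasVectorSearch.from_connection_string(
        "<ATLAS_CONNECTION_STRING>",
        namespace="rag_db.documents",
        embedding=OpenAIEmbeddings(model="text-embedding-3-small"),
        index_name="vector_index",
    )
    retriever = vector_store.as_retriever(search_kwargs={"k": 5})

    def format_docs(docs):
        # Join the retrieved documents into a single context string.
        return "\n\n".join(doc.page_content for doc in docs)

    prompt = ChatPromptTemplate.from_template(
        "Answer the question using only this context:\n{context}\n\nQuestion: {question}"
    )
    llm = ChatAnthropic(model="claude-3-5-sonnet-20240620")

    # Retrieval feeds the prompt, the prompt feeds Claude, and the output is parsed to text.
    chain = (
        {"context": retriever | format_docs, "question": RunnablePassthrough()}
        | prompt
        | llm
        | StrOutputParser()
    )
    print(chain.invoke("Summarise the key points covered in the ingested documents."))
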
Building an AI Agent To Replace Me Whilst In Japan
Views: 2K · a month ago
While vacationing in various cities in Japan, I had the idea to create an AI agent that would replace 5% of my role as an AI practitioner. This video highlights, at a high level, the development of an AI agent that can review Google Docs. I discuss the process of building the agent using GPT-4.0 and the LangChain framework. ⏱️ Timestamps 00:00 Introduction and context of the video 00:39 Purpose o...
How to Build a RAG System Using Claude 3 Opus And MongoDB
Views: 3K · 4 months ago
In this video, we will walk you through the process of building a RAG system using Anthropic's Claude 3 Opus model, OpenAI embedding models, and MongoDB as the vector database. By the end of this video, you will understand how to build a RAG system using Claude 3 Opus and MongoDB. ⏱️ Timestamps 00:00 Overview of Anthropic Claude 3 02:30 Explanation of the RAG system using MongoDB an...
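
A condensed sketch of the retrieve-then-generate loop this video builds, assuming a pymongo Atlas Vector Search aggregation plus the official anthropic and openai SDKs; the database, collection, field, and index names are placeholders:

    import pymongo
    import anthropic
    from openai import OpenAI

    mongo_client = pymongo.MongoClient("<ATLAS_CONNECTION_STRING>")
    collection = mongo_client["sample_db"]["embedded_docs"]  # hypothetical names
    openai_client = OpenAI()
    claude = anthropic.Anthropic()

    def vector_search(query: str, k: int = 5) -> list[dict]:
        # Embed the query with the same model used for the stored documents.
        query_vector = openai_client.embeddings.create(
            model="text-embedding-3-small", input=query
        ).data[0].embedding
        # Atlas Vector Search aggregation stage; index and field names must match your setup.
        pipeline = [
            {"$vectorSearch": {
                "index": "vector_index",
                "path": "embedding",
                "queryVector": query_vector,
                "numCandidates": 100,
                "limit": k,
            }},
            {"$project": {"_id": 0, "title": 1, "plot": 1}},
        ]
        return list(collection.aggregate(pipeline))

    query = "What are some thrilling space movies?"
    context = "\n".join(str(doc) for doc in vector_search(query))
    message = claude.messages.create(
        model="claude-3-opus-20240229",
        max_tokens=1024,
        messages=[{"role": "user",
                   "content": f"Answer the query using this context:\n{context}\n\nQuery: {query}"}],
    )
    print(message.content[0].text)
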
Building a RAG System With Google Gemma, Hugging Face and MongoDB
Views: 5K · 5 months ago
In this video, we will walk you through the process of building a RAG system using Google's Gemma open model, GTE embedding models, and MongoDB as the vector database. We will be using Hugging Face as the model provider for this stack. By the end of this video, you will have a clear understanding of how to build a RAG system using the latest Gemma model and MongoDB. ⏱️ Timestamps 00:00 Introd...
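
A minimal sketch of the Hugging Face side of this stack. The thenlper/gte-large embedding model and the gated google/gemma-2b-it checkpoint are assumptions for illustration; substitute the variants used in your own setup.

    import torch
    from sentence_transformers import SentenceTransformer
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # GTE embedding model used to embed both documents and user queries.
    embed_model = SentenceTransformer("thenlper/gte-large")

    def get_embedding(text: str) -> list[float]:
        return embed_model.encode(text).tolist()

    # Gemma as the generator (gated checkpoint; requires granted access on Hugging Face).
    checkpoint = "google/gemma-2b-it"
    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    model = AutoModelForCausalLM.from_pretrained(
        checkpoint, torch_dtype=torch.float16, device_map="auto"  # device_map needs accelerate
    )

    def generate_answer(query: str, context: str) -> str:
        prompt = f"Query: {query}\nContext to answer the query: {context}\nAnswer:"
        inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
        outputs = model.generate(**inputs, max_new_tokens=256)
        return tokenizer.decode(outputs[0], skip_special_tokens=True)
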
Build a RAG System With LlamaIndex (v0.10), OpenAI, and MongoDB Vector Data
Views: 2.1K · 5 months ago
In this video, we will walk you through the process of building a RAG system using the POLM stack, which includes Python, OpenAI, LlamaIndex, and MongoDB. We will discuss the steps involved in setting up the system, installing the necessary libraries, and configuring the global settings. We will also cover how to import and clean the data, create a vector search index, and perform queries usin...
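
The steps listed above correspond roughly to the following sketch (LlamaIndex v0.10-style global Settings). The database, collection, and field names are assumptions based on the MongoDB/embedded_movies dataset, and cleaned_records is a hypothetical variable holding your imported, cleaned rows.

    import pymongo
    from llama_index.core import Document, Settings, StorageContext, VectorStoreIndex
    from llama_index.embeddings.openai import OpenAIEmbedding
    from llama_index.llms.openai import OpenAI
    from llama_index.vector_stores.mongodb import MongoDBAtlasVectorSearch

    # Global settings (v0.10 replaces ServiceContext with Settings).
    Settings.llm = OpenAI(model="gpt-3.5-turbo")
    Settings.embed_model = OpenAIEmbedding(model="text-embedding-3-small")

    # MongoDB Atlas as the vector store.
    mongo_client = pymongo.MongoClient("<ATLAS_CONNECTION_STRING>")
    vector_store = MongoDBAtlasVectorSearch(
        mongo_client, db_name="movies", collection_name="embedded_movies", index_name="vector_index"
    )
    storage_context = StorageContext.from_defaults(vector_store=vector_store)

    # Ingestion: embed the cleaned records and write the resulting nodes into MongoDB.
    documents = [
        Document(text=record["fullplot"], metadata={"title": record["title"]})
        for record in cleaned_records  # cleaned_records: your imported and cleaned dataset rows
    ]
    index = VectorStoreIndex.from_documents(documents, storage_context=storage_context)

    # Querying against the vector search index.
    print(index.as_query_engine(similarity_top_k=3).query("Recommend a heist movie and explain why."))
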
Building A RAG System With OpenAI Latest Embeddings
Views: 3.9K · 5 months ago
Dive into the cutting-edge world of AI with our latest video, where we explore the integration of OpenAI's newest embedding models into a Retrieval-Augmented Generation (RAG) system powered by MongoDB Atlas Vector Database. This comprehensive guide takes you through the journey of implementing the text-embedding-3-small model, providing a deep dive into the concepts of embeddings, their signifi...
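
For reference, generating an embedding with text-embedding-3-small via the OpenAI Python SDK looks roughly like this (a sketch, assuming OPENAI_API_KEY is set in the environment):

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def get_embedding(text: str, model: str = "text-embedding-3-small") -> list[float]:
        # Returns the embedding vector (1536 dimensions for this model by default).
        text = text.replace("\n", " ")
        return client.embeddings.create(input=[text], model=model).data[0].embedding

    vector = get_embedding("Retrieval-augmented generation grounds LLM answers in your own data.")
    print(len(vector))  # 1536
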
Eva Esteban | AI, Brain Computer Interface and OpenBCI | Richmond Alake Podcast #14
Views: 646 · a year ago
Welcome to the Richmond Alake Podcast. This week I'm speaking with Eva Esteban. Join us for an exciting and informative podcast episode with Eva Esteban, an Embedded Software Engineer at OpenBCI who is on the cutting edge of developing biosensors integrated with VR headsets. Eva shares her wealth of experience working on the development of firmware and software for Galea, a VR headset with biose...
Christina Stathopoulos on In-Person and Online Learning Environments
Views: 38 · a year ago
Watch the full episode here: ruclips.net/video/qXi07C7UUXE/видео.html #datascience #machinelearning #podcast #ai Support This Podcast: richmondalake.medium.com/membership
How Christina Stathopoulos got into Google
Views: 70 · a year ago
Watch the full episode here: ruclips.net/video/qXi07C7UUXE/видео.html #datascience #machinelearning #podcast #ai Support This Podcast: richmondalake.medium.com/membership
Christina Stathopoulos on Women In Tech and Role Models
Views: 42 · a year ago
Watch the full episode here: ruclips.net/video/qXi07C7UUXE/видео.html #datascience #machinelearning #podcast #ai Support This Podcast: richmondalake.medium.com/membership
Christina Stathopoulos on Consciousness and Intelligence
Views: 32 · a year ago
Watch the full episode here: ruclips.net/video/qXi07C7UUXE/видео.html #datascience #machinelearning #podcast #ai Support This Podcast: richmondalake.medium.com/membership
Christina Stathopoulos on the difference between a Big Data Engineer and a Data Engineer
Views: 11 · a year ago
Watch the full episode here: ruclips.net/video/qXi07C7UUXE/видео.html #datascience #machinelearning #podcast #ai Support This Podcast: richmondalake.medium.com/membership
Ajay Halthor | How to succeed in your Masters degree | Richmond Alake Podcast
Views: 31 · a year ago
Watch Full Episode: ruclips.net/video/FaUba1J4qbQ/видео.html Welcome to the Richmond Alake Podcast. This week I'm speaking with Ajay Halthor, aka @CodeEmporium. Ajay is a Machine Learning Engineer, Data Scientist, and Writer. Ajay has amassed over 70,000 followers on his YouTube channel Code Emporium, where he discusses machine learning topics. In our conversation we explore the various ways Ajay has g...
Christina Stathopoulos on Taking An MBA as a Data Practitioner
Views: 18 · a year ago
Ajay Halthor on What A Machine Learning Engineer Does
Views: 97 · a year ago
Christina Stathopoulos on Responsibilities of an Analytical Lead at Google, Waze
Views: 60 · a year ago
Ajay Halthor on Getting a Data Science Job
Views: 62 · a year ago
Ajay Halthor | How to know when you've found your passion | Richmond Alake Podcast
Views: 15 · a year ago
Christina Stathopoulos on Dealing with Rejection
Views: 18 · a year ago
Ajay Halthor on Learning How To Learn
Views: 39 · a year ago
Carly Taylor | Call of Duty, Behavioural Modelling and Machine Learning | Richmond Alake Podcast #13
Views: 422 · a year ago
Ajay Halthor on Data Science Internship Experience
Views: 38 · a year ago
Christina Stathopoulos defines Data Analytics
Views: 64 · a year ago
Christina Stathopoulos | Networking, Travelling and The Human Brain | Richmond Alake Podcast #12
Views: 319 · a year ago
Ajay Halthor on The Future of the Machine Learning Industry
Views: 114 · a year ago
Ajay Halthor | Content Creation, Data Scientist vs ML Engineer | Richmond Alake Podcast #11
Views: 127 · a year ago
Vin Vashishta | AI Governance, Policy and Regulation
Views: 13 · a year ago
Vin Vashishta | Over Reliance on Machine learning
Views: 15 · a year ago

Comments

  • @StephenBacso · 1 day ago

    I think the demo needs to be updated to handle "granted access" to the Google Gemma models. You also need an HF token in your Colab secrets to access them: in the calls that create the tokenizer and the model, change the checkpoint to the one you've been granted access to and add token=your_hf_token to each call.
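
    A sketch of the change being suggested, assuming the gated checkpoint is google/gemma-2b-it and the token is stored as a Colab secret named HF_TOKEN (both placeholders):

        from google.colab import userdata
        from transformers import AutoModelForCausalLM, AutoTokenizer

        hf_token = userdata.get("HF_TOKEN")   # Hugging Face token stored in Colab secrets
        checkpoint = "google/gemma-2b-it"     # the gated checkpoint you were granted access to

        tokenizer = AutoTokenizer.from_pretrained(checkpoint, token=hf_token)
        model = AutoModelForCausalLM.from_pretrained(checkpoint, token=hf_token)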

  • @ameroamigo1 · 5 days ago

    Outstanding video. Subscribed.

  • @mohammad-xy9ow · 9 days ago

    As the context windows of LLMs increase, will that affect RAG badly, since after some time tokens will cost less than they do now?

  • @realCleanK · 10 days ago

    Thank you!

  • @richmond_a · 13 days ago

    Notebook: github.com/mongodb-developer/GenAI-Showcase/blob/main/notebooks/agents/how_to_build_ai_agent_claude_3_5_sonnet_llamaindex_mongodb.ipynb Best Repo for you: github.com/mongodb-developer/GenAI-Showcase Article Version: www.mongodb.com/developer/products/atlas/claude_3_5_sonnet_rag/

  • @richmond_a · 20 days ago

    Try out OpenSpeech and let me know what you think: bit.ly/3VYVfc8

  • @divyansh4620 · 22 days ago

    Richmond, sir, I need your help. I want to get into AI agent development, so how can I start? I just completed high school, my college starts in mid-August, and I have zero coding experience. Please help.

  • @dewijones92 · 27 days ago

    great

  • @skohari · a month ago

    Glad I was recommended this! Brilliant video

  • @lalpremi · a month ago

    Thank you for sharing, very interesting, have a great day. :-)

  • @SanjaySingh-gj2kq · a month ago

    Thank you, Rich, for the real-world use case. Other YouTubers are just building snake games.

  • @richmond_a · a month ago

    Thanks for watching! Links: Notebook: bit.ly/4eyKPbs · Best repo for you: bit.ly/4bWraQA · AI Agents With Memory video: ruclips.net/user/liveL49AxDcOURU?si=uBj8SuvCIJdVKUoz

  • @z8ttov · a month ago

    Your channel is a gem. Love this content 👏

  • @AmeenAlamOfficial · a month ago

    Are you running it locally or in a cloud environment?

    • @richmond_a · a month ago

      Thanks for watching. It's a cloud environment.

  • @matten_zero · a month ago

    That's nice. Do you run that locally or set it up in a cloud environment?

    • @richmond_a · a month ago

      Thanks for watching. It's a cloud environment.

  • @emeriechristian6450 · 2 months ago

    Please, I hope no one from the StackUp bounty challenge is here, because we are going to have a big problem 😂😂

  • @DonatFeher · 2 months ago

    Sir, I'm trying to implement your code, but I found another error: this part does not work. If I find a solution myself I'll paste it here. I'm stuck here:

        from llama_index.core.node_parser import SentenceSplitter

        parser = SentenceSplitter()
        nodes = parser.get_nodes_from_documents(llama_documents)

        for node in nodes:
            node_embedding = embed_model.get_text_embedding(
                node.get_content(metadata_mode="all")
            )
            node.embedding = node_embedding

    This line just loads and loads and does not move forward: node.get_content(metadata_mode="all"). Maybe the problem is this, I'll check later: github.com/run-llama/llama_index/issues/12200

  • @DonatFeher · 2 months ago

    The second step of the article has problems.

    • @richmond_a · 2 months ago

      Thanks for bringing this up. The dataset in Step 2 of the article is located here: huggingface.co/datasets/MongoDB/embedded_movies

  • @DonatFeher · 2 months ago

    Where are you from, sir? I like your accent, and the video is great too.

  • @rajpulapakura001 · 2 months ago

    Thanks for your clear explanations and moderate pace. I appreciate that you're not rushing in your videos and taking the time to explain each step. Cheers mate!

    • @richmond_a · a month ago

      Glad it was helpful and thanks for watching

  • @alexglebo · 2 months ago

    The Google Colab notebook does not work after this step: collection.delete_many({}). It fails with "SSL handshake failed: ..."

  • @deathdefier45 · 3 months ago

    Thank you so much for this brother <3

  • @vishalnaik5453 · 3 months ago

    Great content 🔥. subscribed 💯

  • @viteok1234 · 3 months ago

    Wonderful conversation - nice questions and amazing answers. Thank you for planning and organizing this video. +++++++

  • @Raptor3Falcon · 3 months ago

    How can we reuse these embeddings so that we don't have to recreate them?

    • @richmond_a · 3 months ago

      You can access the dataset with the embeddings here: huggingface.co/datasets/MongoDB/embedded_movies

    • @Raptor3Falcon · 3 months ago

      @richmond_a No, I meant that we have a function called "load_index_from_storage" which we use if we don't want to re-index our embeddings. That's for local storage. Is there something similar so that I can query the MongoDB database directly and extract the embeddings from there?

    • @richmond_a · 3 months ago

      @Raptor3Falcon This should be possible by following these steps:
      1. Initialize a MongoDB vector store with LlamaIndex, specifying the database, collection, and index name.
      2. Ensure that the embedding field in your database is named 'embedding'.
      3. Create an index from the loaded MongoDB vector store.
      4. Create a query engine from the index.
      These steps should enable you to use pre-existing embeddings. As usual, ensure you have an appropriately set up vector index definition for your collection, and use the same embedding model as in your existing development environment to embed user queries.

    • @Raptor3Falcon · 3 months ago

      @richmond_a Can you write a small code sample?

    • @richmond_a · 3 months ago

      It should look something like this:

          from llama_index.vector_stores.mongodb import MongoDBAtlasVectorSearch
          from llama_index.core import VectorStoreIndex

          vector_store = MongoDBAtlasVectorSearch(
              mongo_client,
              db_name=DB_NAME,
              collection_name=COLLECTION_NAME,
              index_name="vector_index",
          )
          index = VectorStoreIndex.from_vector_store(vector_store)
          query_engine = index.as_query_engine(similarity_top_k=3)

          query = "Recommend a romantic movie suitable for the Christmas season and justify your selection"
          response = query_engine.query(query)
          print(response)

  • @abhaysaini9406 · 3 months ago

    Hey, I am doing the same thing. Can I ask you specific questions?

  • @elijahthomas7833 · 3 months ago

    ayooooo appreciate how thorough you are salute!

  • @Evildark666 · 4 months ago

    Great video! So if I understood this correctly, RAG basically uses an external vector database to first retrieve the most relevant information by performing a similarity search, then "appends" that information to the user prompt, resulting in a larger prompt with better contextualization. Am I right?

  • @pat2715 · 4 months ago

    quick and clean, top marks

  • @djlarrydjlarry · 4 months ago

    Hello, thanks for the video! I get a ServerSelectionTimeoutError when I execute collection.delete_many({}), despite having a successful connection to MongoDB in the previous step. Do you know what could be the reason? Thanks!

    • @richmond_a · 4 months ago

      This could be caused by not adding your IP address to the IP access list (whitelist) in MongoDB Atlas.

  • @user-kv3hu6qe6z · 4 months ago

    Hi brother, where are you located? I am in Dubai. How can we connect? LinkedIn or IG will be fine.

  • @user-yc2te4vz2y · 4 months ago

    I like the video, but you have to explain everything about the code. Then beginners will understand why each line of code is written.

    • @richmond_a · 4 months ago

      More explanation is located here: mdb.link/rag_claude_mongodb

  • @richmond_a · 4 months ago

    Thanks for watching 🧾 Article: mdb.link/rag_claude_mongodb 💻 Code: bit.ly/3TqQcB1 📈 Hugging Face Dataset: huggingface.co/datasets/MongoDB/tech-news-embeddings

  • @dvdmtchln · 4 months ago

    Hi there, this has been really helpful! I have a question though. What if instead of the "plot" you have a huge file? Let's assume we created the embedding for it at OpenAI and now we have it in the search results. Would adding it to the completion query at OpenAI cause a problem, since the context is going to be huge? Or am I missing something? This is what I mean: imagine the completion API is being called with a user query as such: "Answer this user query: " + query + " with the following context: " + search_result. What if search_result is the contents of that huge file? You see what I mean? Thank you!

    • @richmond_a · 4 months ago

      Thanks for your question. One aspect that I didn't show in the video is the utilisation of chunking when creating the embedding. In the scenario where you have a huge file, you'll chunk the file into different pieces and then create embeddings for the chunks. You'll also store the text chunks along with the embeddings in the database. So in the information retrieval stage or when you are getting your search result you won't get the entire pdf, but only the chunk that corresponds to the query.
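
      A sketch of that chunking step with LlamaIndex's SentenceSplitter. The chunk sizes, the embedding model, and the huge_file_text variable are assumptions for illustration.

          from llama_index.core import Document
          from llama_index.core.node_parser import SentenceSplitter
          from llama_index.embeddings.openai import OpenAIEmbedding

          embed_model = OpenAIEmbedding(model="text-embedding-3-small")

          # Split the large file into overlapping chunks instead of embedding it whole.
          parser = SentenceSplitter(chunk_size=512, chunk_overlap=64)
          nodes = parser.get_nodes_from_documents([Document(text=huge_file_text)])

          # Embed each chunk; store the chunk text and its embedding together so
          # retrieval returns only the relevant chunks, never the entire file.
          for node in nodes:
              node.embedding = embed_model.get_text_embedding(
                  node.get_content(metadata_mode="all")
              )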

    • @dvdmtchln · 4 months ago

      Hmmm, interesting! Alright then, I'll research more on the "chunking" part. Thank you for the help. It's a great video, I love it. If you don't mind me asking, could you point me to a tutorial, blog post, or video (if you haven't done one already) concerning this particular topic? @richmond_a

    • @richmond_a · 4 months ago

      @dvdmtchln This could help: docs.llamaindex.ai/en/stable/understanding/loading/loading.html#splitting-your-documents-into-nodes (I'll cover chunking in the next videos I make.)

    • @dvdmtchln · 4 months ago

      @@richmond_a Great! Thank you. I’ll look into it. And also will be waiting for your next video 😎🎩

  • @rakeshraki2163 · 5 months ago

    Great article :) awesome.

  • @d3mist0clesgee12 · 5 months ago

    new to channel, good stuff

    • @richmond_a · 5 months ago

      Thanks for watching

  • @matten_zero · 5 months ago

    I'm building an MVP for my startup and was on my way to building something like this as a way to search a database, so this is a great start.

  • @matten_zero · 5 months ago

    4:13 PALM stack? Ok I can dig that terminology

    • @richmond_a · 5 months ago

      It's the POLM stack, the O is for OpenAI. But the PALM stack does work for Anthropic. That might just be the next video I do 😉

    • @matten_zero · 5 months ago

      @richmond_a True, haha.

  • @matten_zero · 5 months ago

    @2:30 I vote AI Engineers. A nice simple title (not to be confused with ML engineers)

  • @matten_zero · 5 months ago

    A MongoDB vector database isn't as widely covered with RAG. Well done.

  • @richmond_a · 5 months ago

    🧾 Article: mdblink.com/rag_with_gemma_hg... 💻 Code: bit.ly/42MMjJS 📈 Hugging Face Dataset: huggingface.co/datasets/MongoDB/embedded_movies Thanks for Watching.

    • @anjonbhattacharjee6810 · 5 months ago

      Richmond, your systematic organization of crucial steps following the RAG mechanism is genuinely beneficial. To enhance community support, consider taking an additional step by integrating techniques. This involves fine-tuning a model for a specific task and subsequently bolstering it with retrieval-based mechanisms, thereby generating responses that are contextually enriched.

    • @JitendraKumar-uo4tg · 2 months ago

      Article link is broken

  • @BurningR · 5 months ago

    This is great, thank you for sharing! Very clear and concise instructions. I'm trying to wrap my head around two similar-but-different use cases for this, and I was wondering if you (or anyone else) would know how to transfer this very clear instruction to them; I find this helps me understand better. In your example, we have rows with a string column that we convert into embeddings and use with MongoDB. So far so good. My two examples vary from yours:

    1) A bunch of PDFs of academic articles about the same research topic. I'd like to ask questions about the topic. Here one could make a dataframe where the rows would be articles (one per article), and the string could be either the abstract or ideally the entire article. Would this be sufficient for somewhat complex inquiries like "give me the different definitions of worksite inequality from the articles", or "list the data sources from the articles"?

    2) A whole book from the public domain. It's 1,000 pages, so plenty of text, and I would like to "ask questions" to the book, such as "how does the protagonist evolve his convictions about Man as the ruler of Nature?" Here I'm in doubt about how fitting this example is, since there is no obvious way of making rows of data; perhaps by chapter, or even by 5-page intervals? It seems like it could get very expensive very quickly.

    I know this is a lot to ask; just putting it out there, hoping someone has the time to help me understand the different utility of this. Anyway, really good video, it's already gold! :)

    • @onyekaokonji28 · 4 months ago

      I don't think you necessarily have to make a dataframe out of the external knowledge base; he did here because that's the dataset he worked with. My suggestion would be to find a good LLM that accepts considerably large token inputs, create chunks of the input text (the individual PDF documents), and create embeddings of these chunks, which can then be used to create a DB and vector index.

    • @BurningR · 4 months ago

      @onyekaokonji28 Alright, that seems like a way to approach it. So basically the same setup, in the sense that each row will be one PDF, and instead of a small text field with a movie review, it will be a huge text string with the entire PDF that I then make embeddings out of. So actually I can do exactly what he does here, except I need to find an LLM that accepts 20-30 page long text inputs for the embeddings, I guess. Thank you.

  • @Vibhakara · 5 months ago

    Very concise and informative, good stuff. The link to the GitHub repo is broken, please fix it. Thanks and keep up the good work.

    • @richmond_a · 5 months ago

      Thanks for watching. And the link is updated now.

  • @richmond_a · 5 months ago

    🧾 Article: mdblink.com/polm_ai_stack 💻 Code: bit.ly/3UJVbOc 📈 Hugging Face Dataset: huggingface.co/datasets/AIatM...

  • @richmond_a · 5 months ago

    Thanks for watching 🧾 Article: bit.ly/47YqOGQ 💻 Code: bit.ly/3u7duSV 📈 Hugging Face Dataset: huggingface.co/datasets/MongoDB/embedded_movies Notebook: bit.ly/486IYpW

  • @bennguyen1313 · 7 months ago

    I understand writing is learning, but I imagine Bex could also have a very successful YouTube channel teaching! Alternatively, I would love to see Bex do some cross-promotion collaboration videos with other programming/teaching legends. Some of my favorites are: sentdex, George Hotz, Corridor Crew, Keith Galli, Olivia Sarikas, SECourses, and bycloud!

  • @todd.westra · 11 months ago

    This podcast with Shashank Kalanithi is a goldmine of insights for businesses! From data analytics to finance and productivity, it's a must-listen for anyone looking to thrive in today's data-driven world.

  • @christinastats · a year ago

    Had such a great time on the show!! Thanks again for having me Richmond :)

  • @richmond_a · a year ago

    TIMESTAMPS:
    00:01:18 Introduction to Eva
    00:02:18 Eva's educational background and internship
    00:03:54 Attending a conference as an intern
    00:05:18 Electroencephalogram (EEG)
    00:05:33 Exploring educational opportunities outside of Spain
    00:07:12 Mind-Controlled Wheelchair
    00:08:19 Trash-Picking Robot
    00:10:33 Brain-Computer Interfaces
    00:13:38 OpenBCI
    00:15:20 Galea and the synergy between VR and BCI
    00:18:06 Invasive vs Non-Invasive approaches to BCI
    00:19:56 Time estimate for the commercial introduction of BCI
    00:26:27 Programming languages used to build and use BCI products
    00:27:41 Galea price points
    00:29:54 Day to day of a BCI Software Engineer
    00:31:49 User feedback after using BCI hardware
    00:32:46 Meditation and Brain-Computer Interfaces
    00:33:49 Practicality of BCI headgear
    00:35:51 Standing out as an intern
    00:38:06 Values of OpenBCI
    00:42:10 Crazy and interesting BCI ideas
    00:44:58 Handling large quantities of data from sensors
    00:47:55 Movies that bring BCI to life
    00:53:30 Operating systems of the mind
    00:59:05 Mental health and dealing with stress
    01:01:30 Surviving as a hardware company in the remote era
    01:04:31 Diversity in tech
    01:08:02 UniPeers
    01:11:16 What degrees get you into BCI roles