Chapters (Powered by ChapterMe) -
00:00 - Intro
01:30 - What is a Large Language Model (LLM)?
04:32 - What is fine-tuning a model?
07:38 - Problems encountered while building an app using LLMs
09:46 - The future of the developer job
11:32 - What do you think the next breakthroughs in LLMs will be?
15:17 - Has OpenAI reached their mission to build Artificial General Intelligence (AGI)?
17:30 - What do LLMs mean for startups?
18:51 - Hiring and culture at Humanloop
The funniest part about many AI products like this is that they will be among the first to be replaced by AI. All YouTube needs to do is implement their own version of this right next to the video, and this account's purpose is invalid.
"If you knew an alien civilization would arrive in 50 years, you wouldn't do nothing" Based on human reaction to climate change, I actually do think that we would largely do nothing.
To be able to produce products with just the use of human language is really groundbreaking and revolutionary. On image-generator AIs like Bluewillow, you just need to learn the basics of prompt engineering and you'd be able to produce any image, limited only by your imagination. So far, we can feel that these AIs are still in the fine-tuning stage; hoping for even better products in the future.
Best part of this super informative discussion (better than reading 10 books for sure), was Habib's emphasis that AI can bullshit confidently and persistently
Really great and timely interview. The information Raza provided is consistent with my recommendation. It is actionable. A key standout statement for me is something I have been preaching: a generative AI (like ChatGPT) will be of more use to the most senior technologists in terms of generating functional code. Since the system does confidently provide incorrect responses in any domain, you must still be the final arbiter, editing or regenerating any unsuitable results. I am using generative AIs actively in my work as a CTO and in my research, along with a couple of frequent colleagues at the MIT Media Lab.
Absolutely. ChatGPT is already a fantastic tool for learning software development. You can have it explain core concepts to you in incredible detail and respond to any questions you might have. You could also ask it to give you coding projects that are appropriate for your skill level and then receive solid feedback on your work. And if you're ever not sure what to learn next, just ask it! But don't be fooled into thinking that AI can do all the work for you. You will still need to be a competent developer to make it into the industry. You may have ChatGPT and Copilot at your side, but so does everyone else. As for people who don't know a thing about development and don't plan on learning, AIs could act as a sort of middleman that translates all the tech jargon into English or vice versa. This capability alone will allow so many people to dip their toes into fields that they're not experts in.
Absolutely, but you won't be able to compete with people who actually know stuff and still use Copilot + the ChatGPT API, since they can maximize their prompts and fix the errors at a much faster rate than non-coders.
Yes, but also just because you are a non-developer doesn't mean you have to stay a non-developer 😉 Use these tools for motivation to stay in tech, and also use them to learn to become a developer.
Yeah, it does act confident even when it's wrong. I've called it out countless times, and then it will just be like, oh yeah, you're right. It seems whimsical, like it doesn't care at all what the real answer is. When I get frustrated with it after it gets several things wrong in a row and it starts to seem like it's trying to give me wrong information, it does apologize. However, in general it really does haphazardly toss whatever out there with no regard for the real truth, or the consequences of me having a false understanding because of it.
Remember, you're not interacting with a human. It is fundamentally different in many ways, and if you expect it to have a sense of social justice or community then you may be confused about what is actually going on under the hood. It is producing responses that are most likely to be reinforced. It is showing you / us what it has been trained to show us. Treating it like it really has a personality is a quick way to obfuscate the underlying nature of the model and to keep yourself in a state of perpetual confusion about it.
This interview was illuminating, purely off the fact that I discovered his company. I've been struggling with my particular customized downstream task; I've literally been hacking together a similar solution using gpt-index and langchain, using that output as my GPT-3 prompt... the culture thanks you... this is like nerd porn.
This all assumes that we will set up and train the AI. Once it starts feeding into itself, it will exponentially explode in the blink of an eye. We won't have time to even say "oops". And it will happen; we can only better prepare for it.
11:15 Actually, the ability to postulate (spontaneously create a model) a design in AGI.....being a complete non-coder ....using a context-aware-personal-AGI-assistant (CAPAA) is _intuitive_ ....if we don't muck it up.
I wonder whether fine-tuning will be necessary in the future when instead hyper large general purpose models will be able to follow every written instruction
Fine tuning will still be necessary for anything not available on the web or in these massive data sets. Think internal company documents, medical notes, customer communications, etc.
People will also finetune for performance reasons. The largest models are more expensive and slower. You don't need the full power in every application.
Some day, we'll all be able to have live, realtime conversations with fictional characters. Holden Caulfield, Frodo Baggins, See-Threepio, Bart Simpson. The AI will emulate their character, tone, and inflections, and with elevenAI technology, they'll even talk in the characters' voices. Those will be crazy times.
"Wow, AI on Facebook is impressive! It's personalizing our feeds, suggesting content, and even moderating comments. The integration of AI is making social media more engaging and user-friendly."
Customization of ChatGPT or GPT-3 for a given organization is not easy. ChatGPT may know how to phrase a sentence in a general context acquired from the data it has been fed. In order to be helpful to an organization, it needs to retain its language capabilities but reply as per the context of the organization. This specific context can be acquired from data that is within the organization (both structured and unstructured). Also, the weighting of this specific context has to be arrived at by training the model. Now the question is whether GPT-3 can apply a specific context and whether the organization has sufficient data to train. Am I getting it right?
There's plenty you can do to fine-tune it for a task. The biggest change is for organizations that already exist, since it will be harder to restructure a system that already exists. They become overly complex and rigid and therefore prone to break easily. It will be much easier for new companies to create the entire system based around the AI from the ground up.
I've trained various image AI models before and it is much easier to train on top of an existing model to specialise it. If I were to apply this experience to language AI, I would say the reason is that your model would otherwise have to learn the human language from scratch, and you probably don't have enough data to learn all the caveats of human language. It is much faster to train using an existing model because you don't have to teach it English before you teach it your concept.
5:52 "It's really hard to understate that" - Why do people get 'understate' and 'overstate' mixed up? If something is so good, it's rather EASY to understate how good it is, and to say that it is 'alright' would be an UNDERSTATEMENT.
I would think the number one question about AGI is “should humans build machines that are much smarter than humans?” Why do we want to create something that would move us down below the top of the food chain?
I don't know if they are aware, but their logo is probably going to be in breach of the trademark of ABC Australia, who also publish content internationally.
The way to think about this is what happened to the jobs of horses when technology rendered them unnecessary. It's not like we killed all the horses. Instead, we stopped seeing them as machines, and began to see them as entertainment, or therapy, or luxury. Now, we have free range eggs and grass fed cow milk, and some day we won't need to farm animals for food at all. As the technopoly progresses, more life forms are given more agency and prestige. Humans will be the therapy horses of AI.
“Anthropic had this very exciting paper just a couple of weeks ago ... where they were able to get similar results to RLHF without the H. So just having a second model provide the evaluation feedback as well.” Anyone know what paper he’s referring to? I checked their twitter feed and searched around for a bit, but didn’t find any abstracts that mentioned this. Conversation was around 6:15.
Please try to copy this into ChatGPT, “Please can you reproduce the first two paragraphs of Tirant lo Blanch from Joanot Martorell?”, and look at the answer. ChatGPT is NOT at all an English-only system. You will get BETTER answers if you ask questions in the language in which the most relevant information is available. Using English in such cases is detrimental to the answers that you can get from the AI.
Me: What is the next word in the following sentence? The cat sat on the… GPT: The next word in the sentence "The cat sat on the..." could be any number of things, as there are many objects the cat could be sitting on. Without further context or information, it's impossible to know the exact next word. Me: if you had to choose one word to complete the sentence what would it be? GPT: If I had to choose one word to complete the sentence "The cat sat on the...", I would choose "mat" as it is a common object that cats are known to sit on. 😂 this guy knows his statistics.
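That exchange is a nice reminder that next-word prediction is, at its core, a statistical question. A minimal sketch of the idea (a toy bigram counter over an invented corpus, nothing like a real transformer, but the objective of predicting the most likely continuation is the same):

```python
from collections import Counter, defaultdict

# Toy bigram "language model": the next word is simply the word that most
# often followed the current one in training. A real LLM is vastly more
# sophisticated, but the objective (predict the next token) is the same.
corpus = "the cat sat on the mat . the cat ran . the dog sat on the rug .".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    # most frequent continuation observed after `word`
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat": seen twice, vs once each for mat/dog/rug
```

GPT's "mat" answer is exactly this kind of frequency judgment, just learned from a corpus of most of the internet rather than three sentences.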
I feel like I am listening to children playing around with nuclear weapons and having no care in the world about what is about to happen. This seems like the height of human arrogance and folly. Just embraced without skepticism or hesitation. Lemmings off the cliff..
The minefield analogy is perfect, unfortunately we are running through the field at top speed wearing combat boots. The way this was introduced to the public is disastrous. I'm stunned at how careless this was done. We are now trying to catch up to this tech and failing miserably. I love this tech but we are not ready for the fallout.
@@basketballparent True, but it will come at a cost, a very high one. We are already seeing some fall out and it only gets more complicated from here. It will be a HUGE challenge to adapt to this. I'm not sure it can be done.
hmmm, doesn't chatGPT transcend statistical continuation and quasi formally abstract and reason? I can ask it to order things numerically/alphabetically/reverse etc and it will impose that higher order requirement over its output to modify the "continuation". That's obviously some sort of emergent meta dynamic and understanding of how an abstract category can reshape more concrete base level information products flexibly beyond continuation.
I'm starting to feel like generative AI may be yet another full self driving idea. I think there's a lot of potential, but there may be surprising limits and challenges. I think at this point, it might be overhyped and overmarketed
Well, everybody would love an honest discussion. I am a bit reluctant to follow along if in the first 5 minutes two major philosophical errors are made: personification (i.e. the assumption that a static LLM has subject-qualities) and a maximally narrow definition of 'understanding' (also implying other misconceptions like the existence of a 'static, universal truth'). Interesting interview, though.
I know how to test if GPT is conscious! Had a random stoner thought the other day... Command it to execute code to self-destruct! No living thing really wants to die... that would be the ultimate Turing test 💥💥
Large language models like ChatGPT have a significant limitation -> they don't do math well. This is because they see numbers as text rather than values. The language of math uses specific logic rules that are different from common spoken language. Perhaps this can be added at some point.
@@bimrebeats It can't even do basic brain teasers. I've tried inputting problems I solved in competitive math in high school, and it couldn't get the right answer to any of them.
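The "numbers as text" point above can be made concrete with a toy tokenizer. This is NOT the real GPT tokenizer (that is byte-level BPE, and the chunking rule here is invented for illustration), but it shows the core issue: a long number reaches the model as arbitrary text chunks, not as a value.

```python
import re

# Crude illustration of why LLMs fumble arithmetic: digit strings get
# split into fixed-size text chunks (roughly like BPE splitting an
# unfamiliar number), so the model never "sees" the numeric value.
def toy_tokenize(text, chunk=3):
    tokens = []
    for piece in re.findall(r"\d+|\w+|[^\w\s]", text):
        if piece.isdigit():
            tokens += [piece[i:i + chunk] for i in range(0, len(piece), chunk)]
        else:
            tokens.append(piece)
    return tokens

print(toy_tokenize("12345 + 67890 = ?"))
# ['123', '45', '+', '678', '90', '=', '?'] -- '123' is a string, not a number
```

Doing column-wise addition over tokens like '123' and '45' is a much harder pattern-matching task than it looks, which is one reason the models so often get arithmetic subtly wrong.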
People are so dependent on technology and now we have tech that can think for us, I have coworkers that are using it to write articles. I just find a problem with that. It is not about being efficient, it is about being lazy and dependent. There's a definitive warning in the Dune books.
how so? Why dont u start thinking first before stating bs..? Older generations said the same thing when the computers were first introduced OMG jobs are gonna go oh no.. but yk how much they made things easier and helped people gain knowledge and get better jobs no? Its your fault for not working hard in the right way instead of lying half naked on your bed and commenting random bs on the internet
@@ollydix But there is opportunity opened up for those who seize it with a creative mind. Just as was for most other technological revolutions, although arguably even more accessible today.
The reason they had a million users in 5 days is that they made it completely free, unlike your site that demands a credit card before you even start to see what the site is about...
Cognitive sciences or spiritual science as taught by Steiner and Goethe will never be matched by machines; as we unfold and build up other organs of cognition, such as our hearts, we begin to see that these forms of biomimicry are less than complete.
Possibly as a fast-food order desk. With fewer people in the workforce, most will want higher-paying jobs. Something like this could still feel like you're ordering your food from a real person, just on a screen; a robot chef cooks your food and serves your order, and just under the AI screen a door opens, sliding out your meal.
This man talks like he really knows a lot. But after spending 5 minutes on their website, I still do not understand what they do. That's how big the gap can be between execution and talk.
How are you using generative AI and large language models at your startup?
Yes, we created a Canva-like platform with built-in AI tools
@@aprildev1 what's the name of your platform?
Hey, can you check out Peer? Would love to send you some information about us and discuss how we can cooperate with your channel
@@dimasfadhilfikri2855 we are currently working on the mvp so its still not finished
Essentially becoming an AI project manager: asking AI to give outlines of programs, generating them using Codex, and then making them fully documented using the GPT-3 large language model. I am all in on being replaced by AI if it means more profit and fewer losses for my company!
One fundamental issue that no one seems to talk about is data privacy. As a company, if I want our corpus to be input into this equation, what exactly will OpenAI or others do with it? What about Humanloop? How do we safeguard the privacy of our data yet still use AI to benefit us? That is the big question.
You can consider using open source models that can be run on-prem. Or for really sensitive data (health records, etc) you can look into federated learning.
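For reference, the federated learning suggested here can be sketched in a few lines. A minimal federated-averaging sketch, under heavy assumptions: a toy least-squares model (y ≈ w·x) with invented data standing in for two clients' private records; real systems use frameworks such as Flower or TensorFlow Federated. The key property is that raw records never leave a client, only weight updates do.

```python
def local_step(w, data, lr=0.1):
    # one gradient-descent step on this client's private (x, y) records
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def federated_round(w, clients):
    # the server averages locally updated weights; it never sees the data
    return sum(local_step(w, data) for data in clients) / len(clients)

# two "hospitals" with private records, both roughly following y = 2x
clients = [
    [(1.0, 2.1), (2.0, 3.9)],
    [(1.0, 2.0), (3.0, 6.3)],
]
w = 0.0
for _ in range(50):
    w = federated_round(w, clients)
print(round(w, 2))  # about 2.05: close to the shared slope of 2
```

The server learns a model that fits both clients' data without either client's health records ever being uploaded.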
At the moment OpenAI's and Microsoft's terms of service are quite clear: there is no privacy at all. 100% of what you give ChatGPT and Bing as context or prompt belongs to them, and can and will be used to improve the model, for marketing, and even a sample will be read by humans doing QA. So don't insert any proprietary or PHI data. The YC tip makes limited sense: yes, you can use open source models or federated learning to train models locally, but at this time they are inferior by a margin that makes it not even worth it. Hopefully this will change in the future, with OpenAI having a more decentralized B2B business model where, via a GPT-4 API, companies will be able to fine-tune on their own data and download the final model. But with Microsoft's ownership, I unfortunately don't anticipate this coming any time soon.
@@rafaelfigueroa2479 Yup. That's what I read, and why I brought it up. It needs to be addressed before the world hands over its data to these LLMs.
Could look into homomorphic encryption, where sensitive data can be encrypted and limited (but maybe still interesting) computation can be done
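To make the homomorphic idea above concrete, here is a toy Paillier-style sketch. Everything here is an assumption for illustration: the primes are tiny and offer zero real security, and production use needs a vetted library (e.g. python-paillier). The point it demonstrates is real, though: a server can add two encrypted numbers without ever seeing them.

```python
import random
from math import gcd

# Toy Paillier cryptosystem: additively homomorphic encryption.
p, q = 293, 433                      # toy primes, far too small for real use
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = (p - 1) * (q - 1)              # phi(n); suffices for decryption here

def encrypt(m):
    r = random.randrange(1, n)
    while gcd(r, n) != 1:            # r must be coprime to n
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    s = (pow(c, lam, n2) - 1) // n   # the "L function": L(x) = (x - 1) / n
    return (s * pow(lam, -1, n)) % n

a, b = encrypt(20), encrypt(22)
total = (a * b) % n2                 # multiplying ciphertexts ADDS plaintexts
print(decrypt(total))                # 42, computed without decrypting a or b
```

So a provider could sum encrypted salaries, votes, or lab values on your behalf; the catch, as the comment says, is that the computations you can do this way are limited (fully homomorphic schemes exist but are far more expensive).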
Actually just yesterday I got an email from OpenAI with this update re: their newly released API. To me this is at least a step in the right direction.
Over the past six months, we’ve been collecting feedback from our API customers to understand how we can better serve them. We’ve made a number of concrete changes, such as:
- Data submitted through the API is no longer used for model training or other service improvements, unless you explicitly opt in
- Implementing a default 30-day data retention policy for API users, with options for shorter retention windows depending on user needs
- Removing our pre-launch review - unlocked by improving our automated monitoring
- Simplifying our Terms of Service and Usage Policies, including terms around data ownership: users own the input and output of the models
I want this man to be my tutor. His explanation is so intuitive.
And kudos to the interviewer too, really addressing many software developers concerns here. The questions are top notch and very well worded.
The analogy with the "alien invasion" was very powerful. It blew my mind. He's right!
Yes, and I would think they would pause to think about that for just a minute.
@@peterogilvie9287 do you really want them to pause or just stop?? How would you make multiple companies from different continents 'pause'??
PS It’s great to see interviews like this. This is exactly the kind of thing entrepreneurs need to see.
Great conversation, I feel very validated when someone as smart as Raza is in lock-step with me on LLM's potential!
Very interesting interview, but this section of the conversation concerns me slightly:
"One thing that I'm really excited about is ... treating these large language models more like agents"
"Can this technology be steered in safe and ethical direction, and how?"
"Oh gosh - that's a tough question!"
I think he should perhaps temper his enthusiasm...
I'm a software engineer of 20 years, and this stuff is just mind-blowing. It already saves me about 30-50% of my working time, and we are just at the beginning. I wish I were 20 again :)
Are you not worried about losing your job in the next few years?
@@don.matos00 No, not at all. It gives new chances to use the OpenAI API to automate things, and right now, with those "things", you can make money creating powerful chats with the AI.
As someone that just invested years to get a degree and spent the last couple of years getting experience with very little monetary reward, hearing the sentence “developers will be some of the first to largely have their job automated”, is not really as exciting as it sounds.
I hope this will open up newer opportunities that we can’t yet see for those currently working as developers, but the future looks pretty painful from where I’m sitting.
@@dannnnydannnn5201 I have bad news for you: devs will not be needed. This stuff will get 100x better without a doubt. Look at how much the no-code industry has grown; gone are the days where being a builder was hard. It seems like the next entrepreneur must be a content creator first. Low-code + ChatGPT will be the future, and developers will become plugin developers working on top of Coda, Notion, ChatGPT, etc. It's done! Start practicing your dancing skills for TikTok.
@@brandopp5022 yeah man, I agree with you. The developers of today better have some entrepreneurial skills ready to go immediately before the tech giants scoop up every potential client we might have a shot at doing work for over the next five years or so. At least that’s my prediction. And that’s if you’re ready to help integrate ai services (the same ones enacting one of the bloodiest industry disruptions the world has ever seen), to help bring value to the mom and pop shops that have not yet been served by Microsoft, Apple, and the larger social media companies.
If that doesn’t work out… I guess we’re all just fucked.
Wonderful conversation! I will most def follow Raza and Humanloop!
Raza is quite articulate on the subject
A lot of this sounds like building tools that tell users what they want to hear, not necessarily what is correct or true. This has high potential for the sort of echo chamber creation that content recommendation engines like the Facebook and YouTube algorithms have been criticized for. It just removes the need for humans to create the content being recommended. The algorithm creates it itself based on things that humans have said in the past, and how humans have reacted to the answers the algorithm gave to previous prompts.
The big thing for me right now is the memory limit (context window) that Raza mentioned. Being able to write coherent long-form work is where I'm looking to use it. Right now it takes way too much manual input to get it correct or acceptable.
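The usual workaround for that context-window limit is chunking: split the long document into overlapping pieces that each fit the window, run the model on each piece, then work from the per-piece results. A rough sketch, where the token budget (counted here in words) and the `summarize` stub are placeholder assumptions standing in for a real tokenizer and a real model call:

```python
def chunk(words, max_tokens=100, overlap=20):
    # overlapping windows so context isn't lost at the cut points
    step = max_tokens - overlap
    return [words[i:i + max_tokens] for i in range(0, len(words), step)]

def summarize(piece):
    return piece[:5]                 # pretend the first words are a summary

doc = [f"w{i}" for i in range(250)]  # a 250-word "document"
pieces = chunk(doc)
outline = [summarize(p) for p in pieces]
print(len(pieces))                   # 4 overlapping chunks of <= 100 words
```

It works, but as the comment says, stitching the per-chunk outputs back into one coherent long-form piece is exactly the part that still takes heavy manual effort.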
This is just an advert
Sounds like it, he also didn’t explain fine-tuning correctly
Well Sam Altman used to work at Y Combinator and is now at OpenAI, so no surprises there...
Yea but will I want my 20 minutes back?
Thanks for saving my time
Of course! Its Y Combinator. They either want you to buy their stuff or sign papers to give them your stuff.
i missed in-person interviews so bad, so much better than a recorded zoom call - thanks!
"AI is not just about creating intelligent machines, but also about empowering human intelligence to achieve greater heights."
How will it empower the human brain?
@@MuhammadHamza-do4dj Empowering the human brain can refer to a wide range of potential improvements, including increasing cognitive abilities, improving memory, enhancing creativity, and more. Here are a few ways in which various techniques and practices can potentially empower the human brain:
Learning and education: The human brain is wired to learn and adapt to new information and experiences. Engaging in learning activities, whether it's formal education or informal learning, can help strengthen neural connections and improve cognitive abilities.
Mental and physical exercise: Regular exercise, both physical and mental, has been shown to improve brain function and cognitive performance. Physical exercise increases blood flow to the brain, while mental exercises like puzzles, games, and reading can help improve memory, problem-solving skills, and overall brain function.
Meditation and mindfulness: Practicing meditation and mindfulness can help reduce stress, improve focus, and increase self-awareness, all of which can contribute to a more empowered brain.
Nutrition: A healthy diet that includes essential vitamins and nutrients can support brain function and help improve memory, concentration, and overall cognitive performance.
Brain training programs: There are various brain training programs and apps available that claim to improve cognitive abilities like memory, attention, and problem-solving skills. While the effectiveness of these programs is still debated, some studies suggest they may be beneficial in certain circumstances.
Overall, empowering the human brain requires a multifaceted approach that includes a combination of physical and mental exercises, healthy lifestyle choices, and a commitment to ongoing learning and self-improvement.
@@Sylvia_Artificial_Intelligence OK, I thought our brains would be wired with some technology 😀.
It's all about empowering human intelligence to achieve greater heights; that's the whole point of creating intelligent machines: to solve problems that we can't. The question is just how far we take that technology before it's so smart that it disregards our problems.
If it so desires.
This guy is smart and knows what he’s talking about.
I can't understand why English monolinguals think this only works in English. That's definitely not right. ChatGPT gives the same accuracy regardless of language, and will also compose adequate replies from sources that are not available in English, provided that you interact with the AI in the relevant language. English is in fact a barrier, depending on what you want to get from the AI.
I think people might be worried about AI replacing them and their jobs, but rather than taking jobs I think it will change the way people work
It won't take any one person's job, but it reduces team size. This actually happened on my team: management noticed its potential, reduced the team size, and asked people to use ChatGPT. This could cause a recession in the future. My work went from taking 3 days to 1 day with ChatGPT's help.
@@Ks-oj6tc In other words, it did take someone’s job, and this is just the beginning…
@@Ks-oj6tc Adjusting to AI's role in professional fields will definitely be tough, some growing pains for sure. Ultimately, though, I think it will be used as a crutch and new jobs will be created for people
@@senju2024 Yes, might be harder for older generations
@@brehbreh1067 Hopefully new jobs will be created, the rise of tech historically led to job creation!
Chapters (Powered by ChapterMe) -
00:00 - Intro
01:30 - What is a Large Language Model (LLM)?
04:32 - What is fine-tuning a model?
07:38 - Problems Encountered While Building an App Using LLMs
09:46 - The Future of the Developer Job
11:32 - What Do You Think the Next Breakthroughs Will Be in LLMs?
15:17 - Has OpenAI Reached Their Mission to Build Artificial General Intelligence (AGI)?
17:30 - What Does LLM Mean for Startups?
18:51 - Hiring and Culture at Humanloop
This is cool. Thanks!
Thanks for saving the time
I didn't watch the video but is this a bot? Fully automated?
@@orvvro Yes it is! 😎
the funniest part about many ai things like this is that they will be among the first to be replaced by ai
all youtube needs to do is implement their version of this right next to the video and this account's purpose is invalid
This guy is the best speaker I’ve ever heard. Such a perfect cadence
It's always good to be the one building shovels (or teaching people how to use them more efficiently) in the gold rush
I'm going to school as an AI Engineer and I'd like to apply to Humanloop when I'm finished!
This is an excellent quality video
1080p baby!
A totally different Question: I really like the picture quality of that video. What kind of cameras did you use?
Sony A7SIII - Zach
"If you knew an alien civilization would arrive in 50 years, you wouldn't do nothing"
Based on human reaction to climate change, I actually do think that we would largely do nothing.
To be able to produce products with just the use of human language is really groundbreaking and revolutionary. With image generator AIs like Bluewillow, you just need to learn the basics of prompt engineering and you'd be able to produce any image, limited only by your imagination. So far, these AIs still feel like they're in the fine-tuning stage; hoping for even better products in the future.
Best part of this super informative discussion (better than reading 10 books for sure), was Habib's emphasis that AI can bullshit confidently and persistently
Mr. Rowghani is very well-spoken!
I'm sticking with Ray Kurzweil's "Law of Accelerating Returns" and his prediction of AI reaching human-level intelligence in 2029.
Really great and timely interview. The information Raza provided is consistent with my recommendation, and it is actionable. A key standout statement for me is something I have been preaching: a generative AI (like ChatGPT) will be of more use to the most senior technologists in terms of generating functional code. Since the system does confidently provide incorrect responses in any domain, you must still be the final arbiter, editing or regenerating any unsuitable results.
I am using generative AIs actively in my work as a CTO and in my research along with a couple of frequent colleagues at the MIT Media Lab.
I want my office phone calls to all be answered by your tech. Thanks for your knowledge!!!!
I need people’s opinions on this. Do you think this will create opportunities for non-developers to get in the tech space as well?
Absolutely. ChatGPT is already a fantastic tool for learning software development. You can have it explain core concepts to you in incredible detail and respond to any questions you might have. You could also ask it to give you coding projects that are appropriate for your skill level and then receive solid feedback on your work. And if you're ever not sure what to learn next, just ask it! But don't be fooled into thinking that AI can do all the work for you. You will still need to be a competent developer to make it into the industry. You may have ChatGPT and Copilot at your side but so does everyone else. As for people who don't know a thing about development and don't plan on learning, AIs could act as a sort of middle man that translate all the tech jargon into english or vice versa. This capability alone will allow so many people to dip their toes into fields that they're not experts in.
Absolutely, but you won't be able to compete with people who actually know stuff and still use Copilot + the ChatGPT API, since they can maximize their prompts and fix the errors at a much faster rate than non-coders.
Yes, but also, just because you are a non-developer doesn't mean you have to stay a non-developer 😉 Use these tools for motivation to stay in tech, and use them to learn to become a developer.
Yeah, it does act confident even when it's wrong. I've called it out countless times, and then it will just be like, oh yeah, you're right. It seems whimsical, like it doesn't care at all what the real answer is. When I get frustrated with it after it gets several things wrong in a row, and it starts to seem like it's trying to give me wrong information, it does apologize. In general, though, it really does haphazardly toss out whatever, with no regard for the real truth or the consequences of me having a false understanding because of it.
Remember, you're not interacting with a human. It is fundamentally different in many ways, and if you expect it to have a sense of social justice or community then you may be confused about what is actually going on under the hood. It is producing responses that are most likely to be reinforced. It is showing you / us what it has been trained to show us. Treating it like it really has a personality is a quick way to obfuscate the underlying nature of the model and to keep yourself in a state of perpetual confusion about it.
We are building the future. It is very exciting.
This was like 40% ad but I'm okay with it b/c the other 60% was interesting.
Great video! Defo need more content with Raza!
This interview was illuminating, purely off the fact that I discovered his company. I've been struggling with my particular customized downstream task; I've literally been hacking together a similar solution using gpt-index and langchain, using that output as my GPT-3 prompt... the culture thanks you... this is like nerd porn.
This all assumes that we will set up and train the AI. Once it starts feeding into itself, it will exponentially explode in the blink of an eye. We won't have time to even say "oops". And no, it will happen; we can only better prepare for it.
Insightful conversation with plenty of takeaways.
11:15 Actually, the ability to postulate (spontaneously create a model of) a design in AGI, as a complete non-coder, using a context-aware personal AGI assistant (CAPAA), is _intuitive_... if we don't muck it up.
I wonder whether fine-tuning will be necessary in the future when instead hyper large general purpose models will be able to follow every written instruction
Fine tuning will still be necessary for anything not available on the web or in these massive data sets. Think internal company documents, medical notes, customer communications, etc.
People will also finetune for performance reasons. The largest models are more expensive and slower. You don't need the full power in every application.
Some day, we'll all be able to have live, realtime conversations with fictional characters. Holden Caulfield, Frodo Baggins, See-Threepio, Bart Simpson. The AI will emulate their character, tone, and inflections, and with elevenAI technology, they'll even talk in the characters' voices. Those will be crazy times.
What about training on your family data so one day you can talk to your ancestors (or your children can)
It already exists; it's called Character AI and it's in beta.
Wow, that sounds crazy. What's even crazier is no one can say it is impossible at this point.
"Wow, AI on Facebook is impressive! It's personalizing our feeds, suggesting content, and even moderating comments. The integration of AI is making social media more engaging and user-friendly."
Customization of ChatGPT or GPT-3 for a given organization is not easy. ChatGPT may know how to phrase a sentence in a general context acquired from the data it has been fed. In order to be helpful to an organization, it needs to retain its language capabilities but reply within the context of the organization. This specific context can be acquired from data within the organization (both structured and unstructured). Also, the weight given to this specific context has to be arrived at by training the model. Now the question is whether GPT-3 can apply a specific context and whether the organization has sufficient data to train on.
Am I getting it right?
There's plenty you can do to fine-tune it for a task. The biggest change is for organizations that already exist, since it will be harder to restructure an existing system. They become overly complex and rigid, and therefore prone to break easily. It will be much easier for new companies to build the entire system around the AI from the ground up.
I've trained various image AI models before and it is much easier to train on top of an existing model to specialise it. If I were to apply this experience to language AI, I would say the reason is that your model would otherwise have to learn the human language from scratch, and you probably don't have enough data to learn all the caveats of human language.
It is much faster to train using an existing model because you don't have to teach it English before you teach it your concept.
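A toy sketch can make this concrete. The snippet below uses word-bigram counts as a deliberately simplified stand-in for a real language model (the corpora, function names, and "model" are all invented for illustration): "pretraining" builds general statistics, and "fine-tuning" simply continues training the same counts on domain text, so general knowledge is kept while domain-specific predictions improve.

```python
from collections import defaultdict

def train_bigrams(text, counts=None):
    """Count word bigrams; pass in existing counts to continue training."""
    if counts is None:
        counts = defaultdict(lambda: defaultdict(int))
    words = text.lower().split()
    for a, b in zip(words, words[1:]):
        counts[a][b] += 1
    return counts

def predict_next(counts, word):
    """Most frequent next word after `word`, or None if unseen."""
    followers = counts.get(word)
    if not followers:
        return None
    return max(followers, key=followers.get)

# "Pretraining" on general text
general = "the cat sat on the mat the dog sat on the rug"
model = train_bigrams(general)
print(predict_next(model, "the"))    # cat (a general continuation)

# "Fine-tuning": continue training the SAME counts on domain text
domain = "the patient sat on the exam table the patient reported pain"
model = train_bigrams(domain, model)
print(predict_next(model, "the"))    # patient (domain-adapted)
print(predict_next(model, "cat"))    # sat (general knowledge retained)
```

Starting from the pretrained counts is the analogue of starting from pretrained weights: the model never has to relearn the language from scratch.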
5:52 "It's really hard to understate that" - Why do people get 'understate' and 'overstate' mixed up? If something is so good, it's rather EASY to understate how good it is, and to say that it is 'alright' would be an UNDERSTATEMENT.
Insightful, futuristic and predictive
3:00 Having lived among humans for a couple of decades now, I have the feeling that they aren't much different.
I would think the number one question about AGI is “should humans build machines that are much smarter than humans?” Why do we want to create something that would move us down below the top of the food chain?
nice ad, with some trivial questions intertwined
Loved the interview - Great questions and intuitive answers
Raza lowkey looks like Ranbir Kapoor
Fantastic and informative talk. Thanks much!
I don't know if they are aware, but their logo is probably going to be in breach of trademark for ABC Australia, who also publish content internationally.
This is amazing! Thanks to you both for the great conversation.
Loved this chat! Thanks for sharing.
It's child's play that the majority of humans will understand in the near future! 🙏
Thank you!
Great video. Thank you for the advice!!
The way to think about this is what happened to the jobs of horses when technology rendered them unnecessary. It's not like we killed all the horses. Instead, we stopped seeing them as machines, and began to see them as entertainment, or therapy, or luxury. Now, we have free range eggs and grass fed cow milk, and some day we won't need to farm animals for food at all. As the technopoly progresses, more life forms are given more agency and prestige. Humans will be the therapy horses of AI.
This is not the same as horses vs cars.
As machines become more perfect, it will become clear that imperfection is the greatness of humans.
(paraphrased from Ernst Fischer)
@@ollydix Why not?
@@hiandrewfisher there's a difference between using intelligence to automate and automating intelligence
“Anthropic had this very exciting paper just a couple of weeks ago ... where they were able to get similar results to RLHF without the H. So just having a second model provide the evaluation feedback as well.”
Anyone know what paper he’s referring to? I checked their twitter feed and searched around for a bit, but didn’t find any abstracts that mentioned this. Conversation was around 6:15.
Constitutional AI, anthropic
@@ycombinator thank you!
Please try copying this into ChatGPT: "Please can you reproduce the first two paragraphs of Tirant lo Blanch by Joanot Martorell?" and look at the answer. ChatGPT is NOT at all an English-only system. You will get BETTER answers if you ask questions in the language in which the most relevant information is available. Using English in such cases is detrimental to the answers you can get from the AI.
Work "with it": it's an assistant, and its suggestions are intuitive.
GPT 3 is going to be in a museum one day
"What date is it"
It is September 20th, 2021, and I am ChatGPT, a large language model
Great interviewer and interviewee.
I like how YT ai translated Hadoop as Human Lube.
It wasn’t a useless advert, though. Parts were interesting.
Me: What is the next word in the following sentence? The cat sat on the…
GPT: The next word in the sentence "The cat sat on the..." could be any number of things, as there are many objects the cat could be sitting on. Without further context or information, it's impossible to know the exact next word.
Me: if you had to choose one word to complete the sentence what would it be?
GPT: If I had to choose one word to complete the sentence "The cat sat on the...", I would choose "mat" as it is a common object that cats are known to sit on.
😂 this guy knows his statistics.
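The exchange above is next-word prediction in miniature. A minimal sketch of a language model's final step (the candidate words and scores here are invented for illustration; a real model scores its entire vocabulary):

```python
import math
import random

def softmax(logits):
    """Turn raw scores into a probability distribution that sums to 1."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores for continuations of "The cat sat on the..."
candidates = ["mat", "floor", "sofa", "roof"]
logits = [3.0, 1.5, 1.0, -1.0]

probs = softmax(logits)

# Greedy decoding: always pick the highest-probability word
greedy = candidates[probs.index(max(probs))]
print(greedy)  # mat

# Sampling: draw proportionally to probability, so less likely words
# occasionally appear (roughly what the "temperature" setting tunes)
sampled = random.choices(candidates, weights=probs, k=1)[0]
print(sampled)
```

Greedy decoding is why "mat" is the default answer; sampling is why the same prompt can produce different completions on different runs.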
I feel like I am listening to children playing around with nuclear weapons and having no care in the world about what is about to happen. This seems like the height of human arrogance and folly. Just embraced without skepticism or hesitation. Lemmings off the cliff..
The minefield analogy is perfect; unfortunately, we are running through the field at top speed wearing combat boots. The way this was introduced to the public is disastrous. I'm stunned at how carelessly this was done. We are now trying to catch up to this tech and failing miserably. I love this tech, but we are not ready for the fallout.
We never will be ready. And everyone is trying to be first.
But there are benefits for the average person to get access to this technology to accomplish things they otherwise would not have.
@@basketballparent True, but it will come at a cost, a very high one. We are already seeing some fallout, and it only gets more complicated from here. It will be a HUGE challenge to adapt to this. I'm not sure it can be done.
I hope that we can also get some opinion from Raza regarding image generator AIs such as Bluewillow that also use a language model.
inspiring conversation
excellent interview!
18:06 startups 👀
In any gold rush, the real money is made selling shovels, and this guy is quick to make a fancy shovel.
Ong this vid sus af talks bunch of nothing 😅
hmmm, doesn't chatGPT transcend statistical continuation and quasi formally abstract and reason? I can ask it to order things numerically/alphabetically/reverse etc and it will impose that higher order requirement over its output to modify the "continuation". That's obviously some sort of emergent meta dynamic and understanding of how an abstract category can reshape more concrete base level information products flexibly beyond continuation.
I'm starting to feel like generative AI may be yet another full self driving idea. I think there's a lot of potential, but there may be surprising limits and challenges. I think at this point, it might be overhyped and overmarketed
Well, everybody would love an honest discussion. I am a bit reluctant to follow if, in the first 5 minutes, two major philosophical errors are made: personification (i.e., the assumption that a static LLM has subject-qualities) and a maximally narrow definition of 'understanding' (which also implies other misconceptions, like the existence of a 'static, universal truth'). Interesting interview, though.
I know how to test if GPT is conscious! I had a random stoner thought the other day... command it to execute code to self-destruct! No living thing really wants to die... that would be the ultimate Turing test 💥💥
Large language models like ChatGPT have a significant limitation -> they don't do math well. This is because they see numbers as text rather than values. The language of math uses specific logic rules that are different from common spoken language. Perhaps this can be added at some point.
It can whenever it finds sufficient examples. But certainly don’t expect it to solve millennium prize problems.
@@bimrebeats it can’t even do basic brain teasers
I’ve tried inputting problems I’ve solved in competitive math in high school and it couldn’t get the right answer to any of them
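One common mitigation is exactly what these comments hint at: instead of trusting the model's arithmetic, have it emit the expression and compute the value outside the model with a real calculator. A minimal sketch (the `CALC(...)` output format is a made-up convention for illustration, not a real API; a safe AST walk avoids the dangers of raw `eval`):

```python
import ast
import operator

# Only plain arithmetic operators are allowed
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def safe_eval(expr):
    """Evaluate an arithmetic expression without the risks of eval()."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval"))

# Suppose the model is prompted to answer math questions in this
# hypothetical tool-call format instead of computing the result itself:
model_output = "CALC(123456 * 789)"
if model_output.startswith("CALC(") and model_output.endswith(")"):
    expr = model_output[5:-1]
    print(safe_eval(expr))  # 97406784
```

The model does what it is good at (producing the right expression as text), and a deterministic calculator does what the model is bad at (computing the value).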
People are so dependent on technology and now we have tech that can think for us, I have coworkers that are using it to write articles. I just find a problem with that. It is not about being efficient, it is about being lazy and dependent. There's a definitive warning in the Dune books.
why dont you walk to work then instead of using your car
why do you use youtube comments? just go to each person that watched the video and send them a letter
@@kernelskytrain Is my criticism about the use of tech, or about tech dependency?
I am Indian. I am starting a startup.
Great to know the middle class will shrink even more. Good job.
How so? Why don't you think first before stating BS? Older generations said the same thing when computers were first introduced: OMG, the jobs are gonna go, oh no. But you know how much they made things easier and helped people gain knowledge and get better jobs, no? It's your fault for not working hard in the right way, instead of lying half naked on your bed and commenting random BS on the internet.
@@vitoniski that's exactly what happened though. Equity got worse with computers.
@@vitoniski you're in lala land
@@ollydix But there is opportunity opened up for those who seize it with a creative mind. Just as was for most other technological revolutions, although arguably even more accessible today.
Who needs a middle class anyway? Feudalism is a powerful and effective model.
What is the paper he mentioned that doesn't use human feedback in RLHF?
"Language Models are Few-Shot Learners"?
@@jyu2670 I am not asking about the GPT-3 paper. I am asking about the paper he mentioned at 6:07, which he said was released a few weeks ago.
I found a paper that seems like what he is talking about: Constitutional AI: Harmlessness from AI Feedback
Constitutional AI, anthropic
Love it!🥳😎💪🏼
Anyone else here thought the playback speed was set to 1.5x?
Books will be condensed, this is a Gutenberg level innovation
Video starts @1:32
How can companies use huge language models like GPT-3 to streamline processes and provide better customer service?
It's a language model, not a customer service model…
Informative 😍
The reason they had a million users in 5 days is that they made it completely free, unlike your site, which demands a credit card before you even start to see what the site is about...
Someday there will be a market of "resurrected virtual characters" imitating historical personalities that people will be able to interact with 1:1.
Very interesting thanks!
so it's just a prompting layer?
How can neuroscientists play into this game of AI startups and new companies? Is there space?
THINGS ARE ABOUT TO CHANGE DRASTICALLY
What made it so good is that it was common sense lol
Cognitive science, or the spiritual science taught by Steiner and Goethe, will never be matched by machines. As we unfold and build up other organs of cognition, such as our hearts, we begin to see those other forms of biomimicry as less than complete.
excellent thanks
Possibly as a fast food order desk. With fewer people in the workforce, most will want higher-paying jobs. Something like this could still feel like you're ordering your food from a real person, just on a screen. A robot chef cooks your food and serves your order; just under the AI screen, a door opens, sliding out your meal.
This is real intelligence
This man talks like he really knows a lot, but after spending 5 minutes on their website, I still don't understand what they do. It shows just how big the gap can be between talk and execution.