Hi Janpancake. First off, thank you for your videos about grad school. I'm on the fence, but I'm letting the decision stew. I have a mech-e background and am curious about the tech world!

ChatGPT has helped me learn to code and put together working projects faster than manually parsing through Stack Overflow. But I wonder where the tradeoff is (as a human) for learning. Which is better: faster feedback to get working code, or reading a variety of solutions and seeing where they fail so you have (in theory) a bigger knowledge bank for later on?

The language models are interesting. The advice ChatGPT gives with respect to health/wellness activities is very poor. For example, ask for lifting advice and you won't get a good response unless you have domain knowledge and can prompt it with context for things like exercise selection or body-part prioritization. If the first law of robotics is "A robot may not injure a human being or, through inaction, allow a human being to come to harm," is it viable to give non-specific advice that could injure a person? I guess they cover their bases with their disclaimer lol.

I've thought about scraping transcripts of entire YouTube channels or podcasts and training language models on them. Or even datasheets for electronics, if you're trying to understand tech-adjacent domains better.
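If I ever prototype the transcript idea, I'd probably start with something like this rough sketch. It's just the dataset-prep step (assumes the transcripts are already scraped and saved as plain text, and the chunk sizes are placeholder numbers, not tuned values):

```python
import json

def chunk_transcript(text, chunk_words=200, overlap=50):
    """Split one transcript into overlapping word windows (toy sizes)."""
    words = text.split()
    chunks = []
    step = chunk_words - overlap  # slide forward, keeping some overlap for context
    for start in range(0, max(len(words) - overlap, 1), step):
        chunk = " ".join(words[start:start + chunk_words])
        if chunk:
            chunks.append(chunk)
    return chunks

# Stand-in text for a scraped caption file; a real pipeline would read
# transcripts fetched from a YouTube channel or podcast feed instead.
transcript = " ".join(f"word{i}" for i in range(500))

# One JSON record per chunk - roughly the shape fine-tuning tools expect.
records = [{"text": c} for c in chunk_transcript(transcript)]
print(len(records), "chunks")
```

The overlap is so a sentence cut at a chunk boundary still shows up whole in the next chunk.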
Thanks for sharing your experiences using ChatGPT! I personally learn coding better when I actually get things to work; theory goes in one ear and out the other if I don’t apply it in some way, so using ChatGPT to learn coding faster is a great idea. Following what you’ve seen asking it for fitness advice, I’ve also noticed the advice it gives can only be so technical: the more detailed and technical an answer I want, the more errors I get. Building domain-specific language models is also a good idea. They’re doing it in the medical field, but unless you’re training the model specifically for factual accuracy, it’s still going to spit out nonsense sometimes. And even if you are training for accuracy, it’ll still get things wrong occasionally - even we get things wrong when we have references, so how can we ask a machine to be perfect when the world isn’t 🙂
ChatGPT is definitely interesting, but I'm not sure I would categorize it as helpful yet. Only about 20% of the code I've asked it to write has been functional. It seems great at having a conversation about an expert topic, which can help researchers form new ideas, but the danger of being fed untrue information, or information that cannot be attributed to sources, is problematic. It performs well on open-ended creative prompts, but when more constraints are added, the responses become boring and useless. It seems to have a real home with language learners, since it can act as a tutor when you're writing in any language. I would love to see more models like ChatGPT trained for specific tasks and would happily pay for similar services in the future.
This is such a well-put and well-edited video! Great job!
this is quality content