It would explain why Google products and services just seem to regress.
Lots of coders of non-European descent explain it pretty well.
That has been going on for 10 years at least. Google's ad engine just rakes in cash that is used to do a bunch of other stuff, badly.
@@EVanDoren "LotS of coders" 🤡
You might be part of the problem. Don't project your own failures onto others.
@@souleymaneba9272 Cut the craps. The jeets are ruining the internet, this isn't up for debate.
What products? The ones they kill every 3 years?
Why would anyone use anything from Google besides RUclips and maybe the search?
Except their search is so bad that I changed to Bing, I mean, DuckDuckGo, in 2019 and never went back to Google.
They can be broken up by antitrust; they won't be missed.
What I'm hearing is that now is a fantastic time to start getting into bug hunting on Google's dogfooded products, because they're probably 25% worse by volume.
woof? Woof! 🐶
@@lashlarue7924 😂😂
I've found a lot of small issues with the YouTube player in recent days.
The elephant in the room is that Google is kind of motivated to "exaggerate" the usefulness of AI.
The company is now run by salespeople and lawyers. All in on AI; what could go wrong...
Not really. They are an incumbent in a lot of spaces, and "AI" is the thing that is supposed to unseat them. Exaggerating AI is not in their best interests.
At this point AI is the original white elephant, as in the gift a powerful ruler gave you as a punishment: white elephants were sacred, expensive to maintain, and you couldn't put them to work, and you had to constantly report that the elephant was in good condition and that you were immensely enjoying the burden put on you.
Many companies are being white-elephanted into using AI.
@SomeThingOrMaybeAnother They've all been exaggerating, Google most of all. Keep learning your trees; AI isn't going anywhere soon. Let's see in 6 years.
@@SomeThingOrMaybeAnother Yes it is, to please their investors. That's all this is about.
Google, with its famously hard-to-get-into interviewing process, has essentially created an inbred fleet of developers who are all clones of each other, and that's why it's failing.
true af
Based
Can’t agree more
If they're generating tests with AI, that's your 30%.
It started failing well before that. The CEO is a complete nut job; there has been talk in the past of firing him. It's a shame one of the founders had to give it up due to his vocal cords.
The most dangerous part about the statement 'x percent of lines of code are written by AI' is the fundamentally wrong assumption that you can measure programmer productivity in lines of code. Such a statement (true or not) will lead management to think they got 25% of the work done for free and that the programmers are now 25% more productive. This could not be further from the truth. As a full-time programmer I spend less than 1% of my time actually typing code. On some days AI does write a large proportion of my code, because I use AI for line completion, but I decide which line and how the line starts, etc. It would be equivalent to saying that a spell/grammar checker writes 10% of management emails.
Gemini is really crap at code. Probably an order of magnitude worse than Claude and ChatGPT.
The number of non-existent libraries and APIs it hallucinates is staggering. What it does well is explaining code and translating code from one language to another.
I have never solved even one question with Gemini. I've used GPT; I might try Claude.
@@josephmgift That seems like a you problem.
I tried Gemini, ChatGPT, and Claude. I'm pretty sure you guys are just biased, because the latest versions are pretty much equal. Maybe one is slightly better than another, but the difference is not that significant.
@@neatpaul Might be, honestly, but I've been coding for 4 years. Maybe I'm giving it complex code, but GPT still solves it. I think Gemini is good for essays, not code.
The AI overview is super helpful: it helped me find a safe detergent to use when I put my kitten in the washer.
I tried to use Gemini to help write some Terraform code for GCP; it was a total disaster and kept suggesting imaginary functions/APIs. Another time I tried to use it for writing technical documentation, and it could not even get a private subnet notation correct. AI is absolutely useless for anything that isn't surface level or just BS anyway (though AI is really good at making BS sound like the truth).
Have you used the new Claude model?
"People find Google products unreliable so we at Google are writing code with AI to double down on that unreliability"
Isn't it great? We can have 100x the number of bugs written with 70% fewer humans involved!
enterprise-ai AI fixes this. Google betting heavily on AI.
I guarantee you that the 25% of AI-generated code is boilerplate and wrapper functionality for the myriad crazy abstractions they have internally. I'd be shocked if actual production functionality were being automated end to end with Gemini.
So, a company that has placed an existential bet on AI says AI is amazing and writing a fourth of their code now. Interesting, I wonder if maybe they have an interest in saying that even if it isn’t true.
Google, the people who laid the foundation for LLMs, have the worst AI of anyone. Asking Gemini anything is like ramming your forehead against a steel bulkhead. Claude or bust.
They also have all the data, so there's no need for a mad dash to scrape it; they've always collected it passively. No effort went into sanitizing that data though, as one clever post on Reddit was enough to fix the UK food influencer plague.
People should ignore those, Angus Steakhouse 4 life ❤
Wasn't the person who actually did the work recruited to found OpenAI?
Claude’s support team sucks.
@@semyaza555 I use it through Poe, so I haven't had to deal with them, or even the Poe support people, because I just haven't had any error happen that wasn't 100% my fault.
@hey_im_him A lot of the people at OpenAI were from Google, because Google pretty much had a monopoly on AI at the time, outside of whatever the IBM Watson team was doing in their basement. It's why Elon was freaking out: the founders of Google were pro-AI and anti-human, so all he saw was dystopia, and like-minded folks came together to create OpenAI.
IMHO companies should be more careful than to just hand everything to AI.
Great, as if Google services weren't getting worse by the day... This is just going to accelerate the mediocritization.
Well, I do use Google AI actively on a daily basis, and some automated tasks run on top of Gemini 1.5 Flash. Pretty good. There's also some useful stuff buried inside Vertex. One other thing: if you're doing diarization and transcription, you can upload files of up to 4 GB directly to Gemini through the API without parsing anything. I don't know, man, Gemini is pretty good. But you should know when to use it.
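For what it's worth, here's a minimal sketch of that upload-and-transcribe flow, assuming the `google-generativeai` Python SDK and its Files API; the file name, API key, and prompt are placeholders, not anything from the comment above.

```python
# Sketch only: assumes the google-generativeai SDK and its Files API.
# "meeting.mp3", the API key, and the prompt are placeholders.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

# Upload the raw audio; the Files API accepts large media without local parsing.
audio = genai.upload_file(path="meeting.mp3")

model = genai.GenerativeModel("gemini-1.5-flash")
response = model.generate_content(
    [audio, "Transcribe this audio and label each speaker (diarization)."]
)
print(response.text)
```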
How are they measuring that 25%? That's a heck of a lot of extra overhead for developers.
Do we really trust what The Verge says? They called Microsoft's Recall a groundbreaking product, and let's not forget their infamous PC build guide.
Don't be forgetting their Galaxy S10 video lmao
Gemini is easily the weakest of the models I’ve tried out. I mostly only use Claude now.
Depends on what model you use. The bigger Gemini model is actually really good at extracting info from large volumes of docs, like full books and long videos.
Good luck to Google's long-term stability.
If AI will be able to guide me step by step on how to make a money printer, then I won't even care whether it takes my job or not.
I was recently going over the example projects and implementation examples for different Google Cloud products. The comments always said that the code was generated. It was horrible.
I don't think it's so much a bet on AI as it is that AI gives big tech another reason, or direction, to sell their services and hardware, which is already a win-win for them. Remember how many years iOS/Android and CPUs were stale, compared to now, where we're seeing new hyper-marketing strategies that let them tie their upgrades to AI.
Cloud margins are thin. It turns out building, operating, and updating hardware in datacenters is insanely expensive. People come to the biggest cloud providers for cheap, top-of-the-line compute and storage, and if they don't deliver, they lose market share. If not for the US tax system, Google and Microsoft wouldn't be in the top 3, and Amazon would be making much less money.
If you made 'increasing triplet subsequence', I would immediately drop everything I'm doing and watch it to figure out WTF that is. So it's onto something.
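For anyone else wondering: "increasing triplet subsequence" is a classic interview problem, asking whether an array has indices i < j < k with nums[i] < nums[j] < nums[k]. A minimal sketch of the usual O(n) greedy solution, in Python for illustration:

```python
def increasing_triplet(nums: list[int]) -> bool:
    """Return True if some i < j < k satisfies nums[i] < nums[j] < nums[k]."""
    first = second = float("inf")
    for n in nums:
        if n <= first:
            first = n    # smallest value seen so far
        elif n <= second:
            second = n   # smallest value with something smaller before it
        else:
            return True  # n is larger than both, completing the triplet
    return False

assert increasing_triplet([1, 5, 0, 4, 1, 3])    # 0 < 1 < 3
assert not increasing_triplet([5, 4, 3, 2, 1])   # strictly decreasing
```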
When the Google AI search summary works, it's awesome.
When it doesn't, it's actively detrimental to what I was looking to know.
I'd say the latter happens about as often as the former.
Only having someone reason about the code during the review seems like a terrible idea. Imagine selling that to engineers as well.
"You won't actually be implementing any features, you'll just be reviewing PRs all day, every day."
In my last company, I was forced to rewrite stuff as we were moving to GCP from AWS (because GCP offered some fucking discounts). Google Cloud sucks.
Saying this while people don't think Google's products are as reliable as they once were is a wild tell. A hacker's paradise ensues.
I don't mind adding code from AI into a repository... What I really want Google to disclose is the churn rate of this code.
How much of the AI bullshit is kept in the source for over a month? Or over a year? This is what really matters.
Why is everyone shitting on Google Cloud? I've used all three major providers quite a lot, and the dev experience is better by a BIG margin.
Autocomplete tools like Copilot or Supermaven count as "code written by AI", and every developer uses these tools now.
Nah, lots of devs don't use them at all, or only very sparingly.
@@lordkekz4 Well, I don't think there's statistical data to know who's right.
But I don't understand why a dev wouldn't use it. Supermaven has a free tier which is really good, and it just makes you work faster.
@@Nyao35 see: every developer who disabled Copilot for a minute and found they were just randomly sitting there waiting for autocomplete, rather than working.
I have not seen anyone make a serious attempt to see whether Copilot and other AI tools 'actually' make development faster, or whether people just feel like they do.
The few studies that have looked into this have found that Junior developers appear to improve, while Senior developers tend to end up slowed down.
The justification is that Junior devs don't know what they don't know, and the AI can sometimes fill in specialty knowledge (hey, this function exists, if you didn't know) or get the Junior on the right track more quickly.
Whereas the Senior ends up with so many bugs or inefficiencies to fix that the AI is actively slowing them down.
@@connorskudlarek8598 I disagree with your last sentence. For boilerplate and relatively easy lines (where you can quickly reread it and be sure it's exactly what you were going to write), autocomplete definitely makes you work faster. The more you use it, the more you know what it is and isn't good at, so you can use it at the right moment and do your usual stuff the rest of the time.
@@Nyao35 hard disagree.
It has been shown so, so many times that Copilot doesn't even write sort functions correctly all of the time, often writing the wrong sort function or introducing a runtime bug in less common sorting environments.
Short autocompletes? Sure. Boilerplate? Sure. These things were never really slowing Senior devs down anyway, but when it works it saves a couple of seconds. That is neat.
The problem is when it does NOT work, BUT the code technically runs, or the issue is not easily corrected and has to be rewritten by hand after analyzing the problem.
It autocompleted the code, sure. But it did it in O(n^2) when the Senior dev would have done it in O(n log n). Now you have to go back and refactor the damn thing, and fight Copilot to do it the n log n way rather than keep trying to autocomplete in n^2.
All that additional time confirming the autocomplete did it right, didn't introduce a bug, and didn't do it inefficiently... that is the bulk of the time in the first place. Not the writing code part.
So the quick autocomplete only really accelerates things IF the problem is already well-solved at this point (boilerplate, common problems) and IF the Senior dev doesn't really spend much time verifying good, quality code is going into the codebase.
For code that simply doesn't matter, have at it, don't bother checking it over, full send. But the second the code actually matters, needs to be specific, needs to address a problem not already solved 100,000 times over... the autocomplete is an active hindrance.
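To make the complexity point concrete (my own illustration, not something from the thread): a duplicate check written the lazy quadratic way versus the sort-then-scan way. Both are correct; only the scaling differs, which is exactly the kind of thing a reviewer has to catch after the fact.

```python
# Illustration of accepting a correct-but-O(n^2) completion vs. an O(n log n) rewrite.

def has_duplicates_quadratic(items: list[int]) -> bool:
    # Compares every pair: O(n^2). Harmless on tiny inputs, painful at scale.
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def has_duplicates_sorted(items: list[int]) -> bool:
    # Sort once (O(n log n)), then scan adjacent pairs (O(n)).
    ordered = sorted(items)
    return any(a == b for a, b in zip(ordered, ordered[1:]))

sample = [3, 1, 4, 1, 5, 9, 2, 6]
assert has_duplicates_quadratic(sample) and has_duplicates_sorted(sample)
```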
It's possible I'm having a stroke, but I can watch this video in everything except English
These companies use AI to replace humans. The question is, will they build AI to buy their products as well? Because humans living in constant fear of losing their jobs will behave in a way that results in low demand in a market with more supply, which will lead to more job losses and even lower demand. I cannot wait for that future. Or for politicians coming to the rescue to control jobless growth.
Google is becoming the future's brightest law firm.
Semiconductor will push computing forward > Muaadh Rilwan
Those AI suggestions are at about a high-school-project level.
AI, AI, generative AI, we use AI to bring AI...
Cloud in general is low margin for 'big cloud', until you get to higher-performance instances. For YT analytics, sensationalism generally wins (hence the recommendations), though I'm not so sure the 'shorts' recommendations were anywhere near reality. Agreed, Google's LMs need quite a bit of work. Not sure why they lag here. Every once in a while we see something that works nicely, then borks, then works nicely again. I'm curious about their R&D process in this space.
Honestly, I think Google's models can't be hallucinating more than Microsoft (the company, not the models)
As long as it's getting reviewed by the programmer, and again as it goes up the chain, I see no problem. The statement sounds bad, and I think it's generally used by the press for clickbait; it's seen as negative, but I think it's taken out of context. We don't go around saying 60% of college students' essays are written by Grammarly.
Unfortunately for us devs, the bad news just keeps on coming.
Google is hard to really take seriously, as they'll get bored and move on; they're obsessed with shiny. In 4 years it'll be some other overhyped product. I think we all collectively forgot the Google crypto products as they were being announced. They had some dorkass stock-to-blockchain thing and hyped-up blockchain cloud services. Ironically, their biggest crypto asset was Google Drive during the NFT craze.
Unless I see specifics, 25% of code being generated by AI means nothing to me.
What counts as "AI"? Does automatic lint fixing count? What about automatic adding of import statements? IIRC, Google already had lots of automated code generation (not deep learning), so this would be nothing new. I wouldn't call this AI, but Google might do it for marketing.
What counts as 25%? Lines of code affected by AI? Does it count if the line had some autocomplete? What if the reviewer had to refactor some of the AI-generated code?
Too many unanswered questions.
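One way such a number could plausibly be computed, purely as a guess at a metric built from editor telemetry (this is not Google's published methodology, and `EditEvent` is an invented name): count characters inserted by accepted AI suggestions against all characters written.

```python
# Hypothetical sketch of a "% of code written by AI" metric from editor telemetry.
# This is a guess for illustration, not Google's actual measurement.
from dataclasses import dataclass

@dataclass
class EditEvent:
    chars: int      # characters inserted by this edit
    from_ai: bool   # True if the insert came from an accepted AI suggestion

def ai_share(events: list[EditEvent]) -> float:
    total = sum(e.chars for e in events)
    ai = sum(e.chars for e in events if e.from_ai)
    return ai / total if total else 0.0

events = [EditEvent(120, False), EditEvent(40, True), EditEvent(40, True)]
print(f"{ai_share(events):.0%}")  # 40% by character count on this toy data
```

Whether autocomplete, lint fixes, or later human refactors get counted is exactly the ambiguity the comment above is pointing at.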
Tbh I think that most devs at Google just... use AI... no matter what the company's policy is.
THEY TOOK ER JERBS!!!
Google went remedial
We must leave AI in 2024
I do a lot of stupid services with lots of Chatty-assist boilerplate. It's fine. You just have to glue it together and focus on the delinting and such. For smaller codebases with maybe several thousand lines of code it's totally fine.
Regarding useful AI from Google:
NotebookLM is a truly mind-blowing AI 🔥👌
I use Gemini for my job; it's legitimately pretty good.
AI is for "All In", not intelligence; just stubbornness.
Well, to be fair, the LLM Arena leaderboard ranks the Gemini model 4th, just after OpenAI.
I think those analytics and recommendations the AI gave you are things that your viewers watch, so... it's not directly telling you to become Linus, it's just recommending what your audience mostly watches, which makes sense.
How much code is written at Google? They don't seem to do anything new.
Will Google begin to use AI to determine when to shutdown services now too?
AI is ruining UX as well. Every damn Theo vid I open has its audio track auto-translated to my native language (I'm Brazilian), and there's no way to turn this shit off; I have to manually switch it back to the OG English.
Google needs to separate their M&S decisions from their design decision processes. System designs must be rational to be best in class; revenue is driven by UX and emotion. They have a history of killing their incubated babies way too early. AI is improving exponentially and will eventually bring exponential value, if they don't kill the project first.
How ironic that this is the first video of yours that RUclips decided to show me with a translated AI voice that completely butchers the pronunciation of the word "AI" every time.
I hate it with a passion that RUclips now auto-translates in places where I don't need it. The auto-translated video titles were already bad enough.
>realize the funding that went into AI is not coming back
>try to prop up AI's value by spreading news such as "1/4 of your new code is written by AI"
I am watching this on stream Theo !! Lmaoooo
🎉
guy literally making his entire livelihood using a google product. lol
RUclips was not invented by Google, though, to be fair; they more or less just bought it and added unskippable ads.
Who will make the unit tests? Who will do the knowledge transfer of a freshly developed system?
Google is actually the only AI company with models that can take a 1M-token context window. There is no alternative to Google's models.
If no one cared enough to create a product, why should we care enough to use it?
The article is written by a non-coder, from a non-coder, to non-coders. If you understand what I mean, this article is of course a waste of time.
I imagine a future where AI products created by AI for consumers who are AI bots. Meanwhile humans are frolicking in the fields.
Correction: Google cloud HIT $1.95b, up from $270m last year - profit of circa $1.2b vs last year
You're using the audio dubbing feature in Portuguese? It still doesn't sound good, man... I just switched back to English.
It would be hilarious if Google started destroying its codebase by using its own AI models and ended up imploding as a company.
Seriously... Gemini is just the worst of all the AI models available out there.
Speaking of bad AI within RUclips: I wonder if anyone else has noticed that the titles of some videos get auto-translated to my native language. The titles sound so ridiculous, and there does not seem to be any way to turn this feature off, other than a browser addon. Why is the video description also translated? Literally nobody asked for this.
2030: This was how google died guys 😮
I missread "betting" to "beating". 😂
beating the meat
25% sounds very low to me at this point… 😅
Nah, not the YouTube AI voice translation. It's so bad.
So how exactly do they write 1/4 of their code using AI if their AI is so shitty?
100% of this video was dubbed with AI
Try “Who is she?” next time, lol
I think the Google AI is telling you as much as it's allowed to that you have to buy Linus Tech Tips, then close it a year later in order to improve your channel content.
Completely different market though. LTT's market focus is mostly gamers. I wouldn't go there to learn about dev stuff, I wouldn't come here to learn about gaming stuff.
If that's the case, I hope they're using Claude or GPT-4o. Gemini sucks hard at coding questions.
My college starts in 2025, and I'm scared of what will happen to my career after 4-5 years.
The Spanish voice is way too fast, even for me as a Chilean xD
What kind of code? Is it throwaway code? Or is it their existing infrastructure code? This means nothing unless I can see them replacing their existing code with AI code, which is never going to happen.
They try even harder on enterprise sales, and it's a shit show with no stability. I have to put rate limiting and retry on every service just to bear the pain of Gemini rate limiting and hallucination. Not to mention the terrible documentation and SDKs; just using an LLM to generate some up-to-date docs would already be useful, lol.
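A minimal sketch of the kind of retry-with-backoff wrapper that comment describes, assuming a generic client call; `call_gemini` and `RateLimitError` are placeholders, not a real SDK API.

```python
# Sketch of retrying a rate-limited call with exponential backoff and jitter.
# `RateLimitError` stands in for whatever 429-style exception the client raises.
import random
import time

class RateLimitError(Exception):
    """Placeholder for the client's rate-limit exception."""

def with_backoff(fn, *args, max_attempts=5, base_delay=1.0, **kwargs):
    for attempt in range(max_attempts):
        try:
            return fn(*args, **kwargs)
        except RateLimitError:
            if attempt == max_attempts - 1:
                raise
            # Wait 1s, 2s, 4s, ... plus a little jitter before retrying.
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.5))

# Usage (hypothetical): with_backoff(call_gemini, prompt="Summarize this doc")
```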
I, for one, look forward to Google programming themselves into an unsustainable corner and killing their monopoly in industries of monopolies.
Those AI "what people are looking for" ideas at 6:50 seem super outdated - like they were all very in the moment things that an AI is spitting back out because it only knows the past (and is probably based on your own channel metrics lol). Kinda funny to see
wayne os lmao who is wayne?
You sound ill, Theo; hope you feel better soon :)
Good sponsor
If Theo is trying to make a sandwich and his cheese is falling off, that same AI will tell him to apply glue to stick it on; and if he feels miserable after eating that sandwich, it will tell him to commit suicide. That was a really bad hot take.
One can only hope.
Gemini in Android Studio is useful in my opinion
lmao, that AI thing above search is fucking useless. Maybe for code syntax it can't be that wrong, but for everything else I search it's nonsense.
Rip google
Nah, Gemini is bad. No way a sane person can use that.
AI making AI??? WTF
For some reason I hear this video narrated in a German "AI" voice, and I can't find a way to switch the audio track. :( Edit: I can watch the video with the original audio track via Invidious, but... why???
There should be an option in the video settings (near the video quality setting; it varies from device to device, usually in a menu behind a gear icon) called "Audio track".
It seems RUclips will re-enable this "for you" again for future videos. Apparently they aren't aware of people speaking multiple languages, or watching videos in a different language than the language they use for the RUclips UI.
What the heck is with the Spanish audio track lol