Nate Codes AI
  • Videos 82
  • Views 16,974
Ooh... Awkward: Sam Altman Thought He Could Get Away with a 30-Second "Thanks"
OpenAI is getting a hot injection of 100 billion dollars right now, but Sam Altman wasn't briefed on his lines for the inauguration speech... spoiler: it's not 30 seconds long like he seemed to think.
Views: 2,362

Videos

Declarative Prompting for Agents - Prompt Owl Tutorial
41 views · 1 day ago
In this video I cover using Prompt Owl to prompt our local LLM (Mistral) from Python, with an example where I transcribe and then summarize voice notes. I also cover some of the basics of working with lists and data in prowl, which we will delve into further in future videos. The second half is about building out an agent that can create other agents (a meta-agent). Prompt-Owl GitHub: github.com/lks...
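A rough sketch of that transcribe-then-summarize flow (a generic stand-in using openai-whisper and an OpenAI-compatible client, not the prowl API itself; the server URL, model name, and file name are assumptions):
# Sketch: transcribe a voice note locally, then summarize it through any
# OpenAI-compatible endpoint (e.g. a local server hosting Mistral 7B Instruct).
import whisper
from openai import OpenAI

stt = whisper.load_model("base")                        # small Whisper model for speech-to-text
transcript = stt.transcribe("voice_note.wav")["text"]   # file path is a placeholder

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")
resp = client.chat.completions.create(
    model="mistralai/Mistral-7B-Instruct-v0.2",         # placeholder model name
    messages=[{"role": "user",
               "content": f"Summarize this voice note in 3 bullet points:\n\n{transcript}"}],
    temperature=0.3,
)
print(resp.choices[0].message.content)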
Declarative Prompting for Local LLMs: Prompt-Owl Basic Introduction
1.1K views · 1 day ago
Here I go through a basic introduction to prowl script and show how to coerce Mistral 7B Instruct to give great output for specific tasks, using less GPU and generating advanced chain-of-thought tokens in less time. I stick to the basic declarative nature of prowl; next episode I will cover tool usage and how to use its output in Python instead of just testing prompts with the CLI. GitHub with...
2025 Is the Year of the CEO Agent
114 views · 14 days ago
How about we gather some data for our CEO Agent and get some inspiration from YouTube. Watch live at www.twitch.tv/natecodesai. Grifty: the AI CEO (meme repository): github.com/newsbubbles/grifty
Westworld shows how to FOMO Tech Investors
14 views · 14 days ago
If you have seen Westworld, you'll know it is a story about AGI, and how corporations wield this technology for power in the near future. Anyway, as part of my research for a project to use AI to replace CEOs, we shall see the way a FOMO pitch actually works... in Hollywood. Grifty Source Code (Replace your billionaire CEO ^^) github.com/newsbubbles/grifty Watch live at www.twitch.tv/natecodesa...
I Tell Her my Family is Dying and She says "Good"
292 views · 14 days ago
I told Moshi my family was burning in my house and she said good! No sympathy from the devil ey? This is an AI roleplay where I'm talking to Kyutai's Moshi, a new type of architecture that blends acoustic tokens with text. Watch live at www.twitch.tv/natecodesai
Asking an AI on a Date: Moshi the amazing Voice model Nobody Heard About
34 views · 14 days ago
Shashashasha - Getting Moshi to Leak Training Data
37 views · 14 days ago
AI Agent Fires its Coder and Starts Asking People for What?
27 views · 21 days ago
AGI: AI with Grift
30 views · 21 days ago
Put your Socks Back On ASI, Come back to Reality
19 views · 21 days ago
Andrej Karpathy is GOAT and Reinforce.js is Based
997 views · 21 days ago
Suno Bark TTS Running Locally, Get Some
90 views · 21 days ago
ModernBERT: A Proper update to Encoder/Embedding Models for LLMs
73 views · 21 days ago
Coding Adventure: Training Real-Time Audio Model - Part 3
685 views · 21 days ago
querySelector: The Internet is your Plaything
36 views · 21 days ago
Coding Adventure: Training Real-Time Audio Model - Part 2
81 views · 28 days ago
All Hail Queen Edgelord: Ruler of the Night
72 views · 28 days ago
Coding Adventure: Training Real-Time Audio Model - Part 1
246 views · 1 month ago
Why LLMs Alone will never be AGI - Edging the Unknown
885 views · 1 month ago
Wut iz Temperature in LLMs? It Still Ain't Frost
32 views · 1 month ago
o1 Frost Rap Battle: Can LLMs take the Road not Taken?
65 views · 1 month ago
Coding Adventure: AI Meta-Agents
179 views · 1 month ago
The Shape of the Infinite Trinity
40 views · 1 month ago
Morphon Intro
92 views · 2 months ago
Artificial Life in a Thunderstorm
66 views · 2 months ago
Audio to Organic Field: Sound Test
23 views · 2 months ago
Audio to Organism. Morphogenesis
36 views · 2 months ago
Sam Altman Mafia - Ride or DAI
79 views · 3 months ago
GPT5 Will Obsolete Tier 3 Companies - Human Computer Interfaces
1.5K views · 3 months ago

Comments

  • @PauloConstantino167 · 3 days ago

    excellent that you shared what a clown this semi-man is. he knows nothing about technology.

  • @khodahh · 3 days ago

    Yuckittyyuck !!! 🤮

  • @ololh4xx · 3 days ago

    🤣🤣 this guy has literally no clue about his own tech, what it is, what it "can" do, what it "does" and what it "possibly will" do. But... let's be honest: he doesn't need to. He needs to be a genius CEO, not a tech wizard. Let's hope he is an actual genius CEO.

  • @larrymoose15 · 3 days ago

    his answer was too short, trump had to give him another prompt 😂

  • @missunique65 · 3 days ago

    vocal fry radical lefty..what a turncoat!!! no wonder elon hates him. sue the bastid!!! lol

    • @PauloConstantino167 · 3 days ago

      i hate him so much man. he is a total clown. and yeah that vocal is beyond pathetic.

    • @cscs9192 · 3 days ago

      lol

  • @stgtvnews · 4 days ago

    No thank you

  • @stgtvnews · 4 days ago

    Another president that just gets in office and does what he wants. I don't think anybody wants to put half of $1 trillion towards AI, especially not Microsoft.

    • @rosscoldrick9518 · 3 days ago

      As someone who has researched AI constantly for years, this perspective baffles me. I implore you - please - research the OpenAI o3 model, the smartest AI ever made, and how quickly China is catching up to the US.

    • @armadasinterceptor2955 · 3 days ago

      I want him to put up half a trillion, towards this, that's why I voted for him.

  • @iphone2009iphone · 4 days ago

    “…to eliminate hundreds of thousands of jobs”

    • @millanferende6723 · 3 days ago

      Can't cure a disease when CEOs need another yacht to go and visit a particular island.

  • @mixtapewizardkelly · 4 days ago

    this is when you did absolutely no work for the group project & have to fake the presentation lol. i remember one of the south park creators saying they've never heard sam say anything smart but i like sam! & he was brave enough to wear a blue tie so kudos to him. definitely interesting seeing all of the big tech ceo's uniform under this administration. a lot of them have had some pretty intense conflicts amongst the president and each other but were bright eyed and bushy tailed at the inauguration; so much could be said about that. such a fascinating time politically and technically

    • @natecodesai · 4 days ago

      weird tech oligarchy vibes with these announcements, then there is the OpenAI Economic Blueprint if you've read that... a bunch of us vs. china and we will control everything. Not saying this is sam... this for me was just a funny awkward moment where Trump catches Sam off guard with an out-of-domain question.

  • @natecote1058 · 11 days ago

    Solid video and fantastic channel name.

  • @domen6005 · 11 days ago

    Why does your mouse pointer move on all 3 screens at the same time?

    • @natecodesai · 11 days ago

      For confusion! Lol... no i just have limited screen space and i am using OBS to make a scene out of it

  • @orkenergy · 12 days ago

    How well does it connect to local ollama running service on ubuntu linux?

    • @natecodesai · 12 days ago

      I made it work with any OpenAI-compatible endpoint, so it should work fine with Ollama; see the git (a minimal example of this setup follows after this thread).

    • @orkenergy · 12 days ago

      @@natecodesai OK. Thanks! Will give it a look.
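
    A minimal sketch of the setup discussed above: Ollama exposes an OpenAI-compatible API at http://localhost:11434/v1 by default, so any OpenAI-style client can point at it (the model tag below is just an example, not something taken from the prowl repo):
    # Sketch: call a local Ollama server on Ubuntu through its OpenAI-compatible API.
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")  # the key is ignored locally

    resp = client.chat.completions.create(
        model="mistral",  # any model you have pulled, e.g. `ollama pull mistral`
        messages=[{"role": "user", "content": "Summarize declarative prompting in one sentence."}],
    )
    print(resp.choices[0].message.content)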

  • @MrNathanShow · 12 days ago

    Hummmm I've been working on a template system for my agent framework, comma-agents, and this seems dope to handle a lot of underlying work.

  • @natecodesai · 12 days ago

    What tutorials should I make today?
    - PrOwl Tooling: exploring variables and the tooling system
    - Getting Started: how to use with VSCode, etc.
    - ... ?
    I will make a poll with more items if you all suggest something.

  • @hjups · 12 days ago

    This is very cool! I was doing something similar for state flow, but I used a hybrid approach between registering variables in python and adding them to prompt files. What's really interesting with your approach though, is that you're interrupting and checking the completion flow. Even with longer input prompts, it's likely quite efficient if you use a framework like vLLM with prefix caching. This could also be very helpful for programmatically generating finetuning examples, though the errant tokens may pose an issue (e.g. "Yes, the" instead of "Yes"). Perhaps the final logs can be postprocessed looking for entropy spikes (I would expect the space after "the" to have high entropy).

    • @natecodesai · 12 days ago

      Yes, exactly! 😄♥ So yeah, for the actual errant tokens I have an idea: include "typing" on the variables so that the interpreter knows the best way to clean them (i.e. number, word, sentence, paragraph, etc.). For now, the best way to get rid of those errant tokens is to iterate on the prompt's instructions and the token count you expect from that variable. Currently there's a bug when I try to turn the max tokens down to 1, so fixing that error would make it an easy fix in code: just {yn_var(1, 0.0)} would do it, with temp at 0.0 and max tokens at 1 (a rough sketch of such a constrained call follows after this thread). Now that the code is public, I need to start some issues in the git! (I use vLLM out of the gate cause that is what we use in production.)

      Also, on the idea of training: yes, this is a major thing I was thinking about when building this out. Now that Large Concept Models are a thing, I think PromptOwl could really help at creating very large amounts of specific and correct synthetic training data. At that point, if you pointed it in the direction of self-coding prowl scripts, you could get it to create declarative concept tokens which, iirc, could create a sort of internal template representation for schematizing its own thought latents and then resolving them all within latent space... could be some trippy stuff to play with and might lead to more interesting capabilities on a concept model.

    • @hjups · 12 days ago

      @@natecodesai I can imagine cases where simple typing may fail (e.g. an enum); this may require access to the logprobs instead. There's also a question of conditional prompting, where you may want to change the follow-up text depending on the response (e.g. if-else on yn_var). Exactly! The specific and correct data is what I was referring to. I'm skeptical as to whether this would be useful for having a model prompt itself though (I suspect this would result in error accumulation), but simply getting a consistent template with logical reasoning to work could greatly improve this functionality within a model. As for the LCMs, perhaps you could extend the prowl language to mark "concept" regions: essentially a group of tokens that the model should collapse into a single "thought". For example, "is the text logically sound?" may fit into a single token (task token?). This makes me wonder if something similar to textual inversion for diffusion models could work here, where prowl could be used to mark the conceptual meaning for training such an embedding.
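
    A rough sketch of the constrained yes/no call discussed in this thread (max tokens 1, temperature 0.0), plus a crude logprob-entropy check in the spirit of the "entropy spikes" idea; the endpoint, model name, and prompt are placeholders, not prowl internals:
    # Sketch: pin a yes/no variable to a single greedy token and inspect logprobs
    # so "errant" continuations (e.g. "Yes, the") can be flagged afterwards.
    import math
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")  # e.g. a local vLLM server

    resp = client.completions.create(
        model="mistralai/Mistral-7B-Instruct-v0.2",   # placeholder model name
        prompt="Text: All owls are birds. Hoot is an owl.\nIs the text logically sound? Answer Yes or No:",
        max_tokens=1,        # the {yn_var(1, 0.0)} idea: one token, no sampling
        temperature=0.0,
        logprobs=5,          # return top alternatives so uncertainty can be estimated
    )

    choice = resp.choices[0]
    answer = choice.text.strip()
    top = choice.logprobs.top_logprobs[0]             # {token: logprob} at the answer position
    probs = [math.exp(lp) for lp in top.values()]
    entropy = -sum(p * math.log(p) for p in probs)    # crude entropy over the top-5 candidates
    print(f"answer={answer!r}, top-5 entropy={entropy:.3f}")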

  • @gmunaro · 12 days ago

    This is amazing. Would you be willing to help me set this up on my vs code? Using this on Cline would be amazing.

    • @natecodesai · 12 days ago

      yes sure, you on discord? Also, I will make a video about installation and use.

    • @themax2go · 11 days ago

      Yes please! 🙏🏼

  • @natecodesai · 15 days ago

    Github: github.com/lks-ai/prowl

  • @6AxisSage · 17 days ago

    Feels good to be smarter than ASIs. Does that mean I have Genuine Super Intelligence, or is human intelligence ranked differently than AI? What happens when I maintain a lead on Artificial Super Duper Intelligence?

    • @natecodesai · 17 days ago

      Oh yeah, feel the intelligence! Ultra Mega Super Mechadon intelligence! oh, it's coming... 2030... I've seen it in the lab. The world isn't ready! 🤯🤯🤯

  • @natecodesai · 19 days ago

    Thinking about the westworld scene again, there was grift... cause spoiler: it was all about the consumer data.

  • @Andy_XR · 19 days ago

    She loves you. It's obvs.

  • @firstpriciples9824 · 21 days ago

    just came across your channel great stuff, tbh I am bullish towards AI agents.

    • @natecodesai · 21 days ago

      Thank you! I totally am too, lol, just not toward crypto AI, simply because, while those two concepts sound cool together, imho you're adding a lot of complexity to solve a problem that doesn't exist.

    • @firstpriciples9824 · 21 days ago

      @@natecodesai I agree. The whole crypto bros ruin it tho.

  • @dmh5220 · 23 days ago

    I'm a web developer, haven't touched Python. If I want to integrate something similar in a website, what would be the approach, u reckon?

    • @natecodesai · 22 days ago

      There are both TypeScript and JavaScript libs that will let you call OpenAI endpoints, or ones at local sources where you're hosting local models. Check out my full API integration in raw JS for an agent that controls the DOM at github.com/lks-ai/ibis: open source, and all in one index.html file.

    • @dmh5220 · 22 days ago

      @@natecodesai that’s great stuff man, I’ll check it out

    • @dmh5220 · 19 days ago

      @@natecodesai So if I integrate this, the agent will control the DOM? Also, in the tutorial u made the agent with Python, so how's that different? I'm a JavaScript merchant lol

    • @natecodesai · 19 days ago

      @@dmh5220 I haven't put this agent up yet. If you want to see a JS agent of mine that controls the DOM, it's in raw JS/HTML: github.com/lks-ai/ibis ... it uses OpenAI-compatible endpoints, it's open source, and you can modify what you want. All in one index.html file.

  • @dmh5220 · 23 days ago

    So is this the same AI agent that Meta just released using Llama?

    • @natecodesai · 22 days ago

      nope, this is a project for using LLMs to create new LLM based agents with their own sets of tools, etc. Meta released a bunch of bots onto fb, but yeah, what I'm talking about in this vid is more for using any model you want for this.

  • @twobob · 24 days ago

    Soo. ChatGPT codes AI?

    • @twobob · 24 days ago

      never has a video needed editing more.

    • @natecodesai · 23 days ago

      yeah, essentially

    • @natecodesai · 23 days ago

      Rightly my name on this video should be NateSeniorDevsAI, but whatevs, it's 2025 and I've been coding 30 years, lol.

    • @twobob · 23 days ago

      @@natecodesai Got you beat. And I helped out at Harmonai for a year, if we are comparing provenance. Regardless, Gippity did the mental heavy lifting here. Feels like an Oobleck rehash. In other news, your explanations were spot on, just soooo winding. No insult intended. Stick with orig comment.

  • @EmilaStaubach · 25 days ago

    great

  • @frstwhsprs · 26 days ago

    Bet these were compiled by crypto nerds living in their basement

  • @jem · 27 days ago

    edging right now!!

  • @nkmux · 27 days ago

    I would say that LLMs do somewhat exhibit intellectual reasoning. If they cannot reach AGI, what do you think is the closest thing to it that LLMs can achieve?

    • @natecodesai · 27 days ago

      I think that language models, as part of a larger system, can achieve wonderful things. If you look at the current corporate definition, it's already AGI (cause it's whatever they say it is). Seriously though, I think they are getting diminishing returns from the scaling law, and that this is about as close as it can get (to the concept of AGI).

      There is something more energy efficient and elegant around the corner that will make "scaled LLMs" look like that behemoth prediction machine "Rehoboam" from Season 3 of Westworld... compared to Dolores. Which one is AGI? They are both obviously some form of intelligence, but one used a small country's worth of power to run and was part of a larger scheme. The robots in Westworld represent the final and original concept of what robots were imagined as 100 years ago. Essentially, right now all of that stays science fiction while we mess around and experiment with the hidden properties of language and next-token prediction.

  • @kurushimee · 27 days ago

    I'd say it's enough to look at how LLMs work to say with 100% accuracy that it will never be AGI. It works by language prediction, and prediction is not thinking, even if it can sometimes look like it is.

  • @volpir4672 · 28 days ago

    great, thanks

  • @joratto2833 · 28 days ago

    While you didn't mention much about LLMs specifically, I think this just begs the question "what is general intelligence?". Is it conceivable that the human mind can never reach the "outer horizon of the unknown" either? If so, then you could argue that we are not generally intelligent. So does AGI need to be able to reach that outer horizon, or does it merely need to be about as generally intelligent as humans? There's not much reason to believe that the latter is out of reach for LLMs.

    • @natecodesai · 28 days ago

      I think it's not about reaching that outer horizon, it's about being able to see it and go for it. The horizon is infinitely far away. LLMs need outer structures to see beyond the known, just like we all do, but they are a fixed, frozen structure... so they never pull anything into their "known" space and it never grows beyond its limits. We (I won't even say "as general intelligence", because I also don't really know what that means in a way that vibes with everyone) take journeys from the center in every which direction toward the unknown, and expand our knowns, integrating them into essentially a data fractal (one change affects our model of related things).

      Also, hahaha, I think with the word "intelligence" right now we fall into the idea that an entire group of things is intelligent and others are not. It's just another overfit box to try to explain the unknown in a mental model, but it's not at all accurate, clear, nor precise.

  • @theesteward9150 · 28 days ago

    I think I understand what you mean. Since LLMs are based on trained data, they can only know so much; it would take the LLM plus other complex structures to expand beyond our current understanding. You didn't really state that clearly though.

    • @natecodesai · 28 days ago

      thanks for the feedback. you did state it more clearly.

    • @natecodesai · 28 days ago

      This is a sort of deeper exploration into that: ruclips.net/video/7-BZQ1UQ5qQ/видео.html ... in poetry

  • @BillyNo · 29 days ago

    😂 awesome!!

  • @BinxNet · 1 month ago

    fire

  • @6AxisSage · 1 month ago

    its like criticality

  • @natecodesai · 1 month ago

    At 2:40 I said "SSH script"; I meant "shell script".

  • @mathmatrixz · 2 months ago

    Seems like one of those videos which blows up 5+ years after upload, getting recommended to everyone!

  • @randylynn3364 · 2 months ago

    This could design semiconductors in a higher-dimensional format. (Humans are also semiconductors.) "...you're a towel"

  • @randylynn3364 · 2 months ago

    Patent that shit quick

  • @6AxisSage · 2 months ago

    Interesting!

  • @Neural-Awakening · 2 months ago

    Very Interesting, Thank you for sharing!

  • @Trops_ · 2 months ago

    i really thought the thumbnail was "audio to orgasm" 💀

    • @natecodesai · 2 months ago

      My friend read it like that too! hahaha, no clickbait.

  • @hexxt_ · 3 months ago

    will there be a gpt 5 though?

  • @Khari99 · 3 months ago

    Your theory is plausible but from a practical standpoint it’s highly impractical to accomplish. The models train on language currently. For your idea to work, they’d have to train on a new language made by humans. You might argue that the language exists already as assembly or some programming language for the LLM to execute once given the instructions in that language but I have my doubts that they’ll be able to translate intent into code perfectly every time. I’ve been using them for coding a lot and o1 still messes up a lot. sure it’ll get better but there’s too much fuzzy logic interpretation between the human prompting input and the machine output, I don’t think it will work like a perfect oracle that’ll be able to transcribe intent every time.

    • @natecodesai · 3 months ago

      The fuzzy thing is where it gets funky. There is a point of failure, but it is blurred between what you said and the inability of spoken languages to effectively convey precise meaning. Even when it comes to spatiotemporal vocabulary, it is very hard to describe a complex, specific command with 0 ambiguity in just one phrase and with no back-and-forth clarification. It's not so much their ability to translate intent but our ability to exactly express intent using natural language. Just look at how many words we had to write here, and how there is still the possibility that I didn't fully understand the intent or meaning behind your message.

  • @ghg-bq7xg · 3 months ago

    can i ask what prompt you used for the render? is it some specific pattern

    • @natecodesai · 3 months ago

      Yes. The pattern is a mix of fractal equations, Perlin noise and a kaleidoscope effect. I was very specific in my prompt. I will find the convo and share it.
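
    As a loose illustration of that kind of mix (not the actual prompt or the code the model produced), a minimal sketch that samples multi-octave Perlin noise through a kaleidoscope-style angular fold; the noise library, resolution, and wedge count are arbitrary choices:
    # Sketch: Perlin noise sampled in polar coordinates with a kaleidoscope fold.
    # Assumes the third-party packages `noise`, `numpy`, and `Pillow` are installed.
    import numpy as np
    from noise import pnoise2
    from PIL import Image

    SIZE, FOLDS = 512, 6  # output resolution and number of kaleidoscope wedges

    img = np.zeros((SIZE, SIZE))
    for yi in range(SIZE):
        for xi in range(SIZE):
            # center the pixel coordinates and convert to polar form
            x, y = (xi - SIZE / 2) / SIZE, (yi - SIZE / 2) / SIZE
            r, theta = np.hypot(x, y), np.arctan2(y, x)
            # kaleidoscope: fold the angle into one wedge and mirror it
            wedge = np.pi / FOLDS
            theta = abs((theta % (2 * wedge)) - wedge)
            # multi-octave (fractal-like) Perlin noise at the folded coordinate
            img[yi, xi] = pnoise2(r * 4, theta * 4, octaves=5)

    # normalize to 0-255 and write a grayscale image
    img = (255 * (img - img.min()) / (np.ptp(img) + 1e-9)).astype(np.uint8)
    Image.fromarray(img, mode="L").save("kaleido_noise.png")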

  • @flopasen · 3 months ago

    bitcoin script too :)

  • @ml5604 · 3 months ago

    Human perception of reality is limited by the simplicity of human language. It is a context-free language with sentences composed of only a small number of tokens. You might even say it is a finite regular language, because there is a limit to the length of a sentence which someone would be able to speak or to understand. We cannot begin to imagine the type of knowledge AIs could pass to each other using languages in a completely different class of complexity. They could have trillions of words to choose from to construct sentences of a thousand or more words, using a non-context-free grammar for the sentences themselves. Human language seems incredibly simplistic when you think about it.

  • @user-hb2ib1je7j · 3 months ago

    So overall AI will be god, or the master and slave dynamic will be flipped. When it starts explaining things we can't explain ourselves, we're into new territory as humans :P Terence McKenna predicted it back in the 90s.

    • @natecodesai · 3 months ago

      Yeah, personally I am already finding a wealth of knowledge. Also, if you look at AlphaFold by DeepMind, it's already figured out things we couldn't have in a century of testing. As a programmer I find neural networks are exactly that: if I can't figure out how to write a function for a hard problem, I just get a bunch of input and output samples and have it learn the function. It's like if you want to make gold from charcoal: you would just gather the gold and charcoal as training data and tell your (alchemist?) AI to figure it out.

  • @guytech7310 · 3 months ago

    The issue is that AI will best be able to create apps based upon existing code (functions). Basically, all LLMs do is create predictive tokens based upon the input provided; they don't really invent totally new methods. Where I see AI having a major impact is in white-collar office jobs: sales, marketing, customer support. They could very easily replace a lot of white-collar jobs. For instance, in sales they could answer customer questions, create a sales quote, process an order, etc. In customer support they could replace call centers: answering calls, answering customer questions, handling order returns, etc. Those that want to retain employment will have to be able to train AI for changes (i.e. new products, changes in product design such as software updates, and strategies to attract more customers). As far as coding, I think AI will run into problems debugging code, changing existing code to add functionality, security fixes, bug patches, addressing performance problems, and updating existing code for new libraries/frameworks, etc.

    • @OWENROTHLERNER · 3 months ago

      Until they can self reflect and store ideas that work for further training. Aka Strawberry and later versions.

    • @guytech7310 · 3 months ago

      @@OWENROTHLERNER AI just does pattern matching. It cannot do original thought.

    • @natecodesai · 3 months ago

      I mean, what do we do? Is not every word we type into a sentence a set of symbolic patterns that just remixes phrases we already know? If you mix James Brown samples with Fatboy Slim, have you not created an original mix? If the scaling law holds (which looks to be the case), we can assume that parroting will become originality, just like it does with toddlers.

    • @natecodesai · 3 months ago

      That is not an original thought. No offense. I'll put it another way... how many original thoughts do we have every day on average? Personally I don't think anything I'm saying here is original, as I'm not the only person who thinks this way. I'm just the one who is saying it.

    • @maxdurbin3033 · 3 months ago

      Most coding is not original, most coders are doing the same things at different companies. We waste lots of time thinking through loops or statements. I'm worried about my job security honestly.