Mojo Is FASTER Than Rust

  • Published: Jan 4, 2025

Comments •

  • @shinjiku144
    @shinjiku144 10 месяцев назад +423

    How to scare a VIM user? Make them use their mouse 9:08

    • @creativecraving
      @creativecraving 10 месяцев назад +23

      ❤ Real talk! It's a disorienting experience, sometimes. 😅

    • @thatmg
      @thatmg 10 месяцев назад +9

      The struggle is real.

    • @crimsonmegumin
      @crimsonmegumin 10 месяцев назад +6

      it would be funnier if he pressed ":q" lol

    • @mgord9518
      @mgord9518 10 месяцев назад +6

      Brave of you to assume I have a mouse

    • @lezzbmm
      @lezzbmm 8 месяцев назад

      dude was shook tbh

  • @CaptainOachkatzl
    @CaptainOachkatzl 10 месяцев назад +361

    write bad code -> write optimized code in different language -> post article -> wait till prime posts it with clickbait title

    • @Navhkrin
      @Navhkrin 10 месяцев назад

      1. The comparison to Rust is done against the best lib that does the same operation in Rust, not one written by the same author.
      2. Both are pretty decently optimized and have extremely similar logic flows.

    • @J-Kimble
      @J-Kimble 10 месяцев назад +13

      Yeah and who would have thought SIMD optimization makes processing faster.. the shock..

    • @zeratax
      @zeratax 10 месяцев назад +18

      @@J-Kimble sure but if it is easy to write SIMD in Mojo that's pretty cool

    • @PeterKilian
      @PeterKilian 10 месяцев назад

      @@zeratax
      don’t bother, yt comments are made by zombies.

  • @Dooezzz
    @Dooezzz 10 месяцев назад +324

    'Tensor is one eigenvalue less than elevensor'
    😂

    • @evian8976
      @evian8976 10 месяцев назад +1

      lol

    • @paladynee
      @paladynee 10 месяцев назад +21

      monad is a monoid in the category of endofunctors

    • @monad_tcp
      @monad_tcp 10 месяцев назад +5

      Tensor is just a jagged array with 3 or more levels. Why do physicists have to make their data structures sound fancy.

    • @johanngambolputty5351
      @johanngambolputty5351 10 месяцев назад +14

      @@monad_tcp It really isn't just fanciness lol, tensors are not about the numbers required to represent them in a given coordinate system, they are about making sure that no matter what coordinate system is used, you end up expressing the same quantity.
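
      For reference, the coordinate-independence being described can be written down directly; this is the standard rank-(1,1) transformation law (textbook material, not tied to any library):

      ```latex
      % The components change under x -> x', but they change in exactly the way
      % that cancels the coordinate change, so the same object is described.
      T'^{\,i}_{\;j}
        = \frac{\partial x'^{\,i}}{\partial x^{k}}
          \frac{\partial x^{l}}{\partial x'^{\,j}}
          \, T^{k}_{\;l}
      ```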

    • @Michallote
      @Michallote 10 месяцев назад +9

      @@monad_tcp You know tensors were around before computers even existed, right? We are talking about 1800s-or-so mathematics.

  • @ragectl
    @ragectl 10 месяцев назад +27

    No performance benchmark articles or claims should be taken seriously unless they provide the full code examples, environment details, and details of all optimizations applied. Saying x is faster than y is "trust me bro" level.

    • @mahakasem810
      @mahakasem810 9 месяцев назад +1

      Oh man, go look at the embedded ML space. It's so littered with this exact problem.

  • @thatmg
    @thatmg 10 месяцев назад +64

    What an idiomatic pythonic nascent article this was.

  • @gilpo
    @gilpo 10 месяцев назад +11

    Bioinformatician here (yes, it's a real field of science!). Most of us come from a bio or stats background and were then taught how to program and how to use analysis tools in grad school. Most of us are unfamiliar with low-level languages such as C or Rust because loads of us didn't have a solid CS background, and the learning curve is just too steep for people whose main focus is solving biological problems and keeping up to date with the literature and modern data analysis techniques. So most low-level tooling is actually made by CS people working in collaboration with bioinformaticians. I mean... think about it: low-level programming is hard enough for full-time computer scientists and requires loads of deep knowledge of how CPUs, memory and OSes work. Now couple that with also having to master the complex science that is biology, knowing how genes, transcription, translation, mutations and DNA/RNA work, plus large biochemical regulatory networks, and you can see why Python and R are so popular in our field. The barrier to entry is much lower for people from hardcore biology backgrounds, and it allows us to engineer performant-enough complex data analysis pipelines without having to understand and worry about stuff like memory allocation and pointers...

    • @halneufmille
      @halneufmille 10 месяцев назад +3

      You may want to check out Julia then. It was designed specifically to be as fast as C and Fortran but as easy to write as Python, with more modern syntax.

    • @gilpo
      @gilpo 10 месяцев назад +1

      @@halneufmille I'm aware of Julia, though it seems more like a replacement for MATLAB than for Python; it seems more geared towards engineering than data analysis

  • @kirillgimranov4943
    @kirillgimranov4943 10 месяцев назад +12

    "Mojo is faster than rust"
    And the whole article contains just one sentence mentioning Rust, saying "I implemented it in Rust and it was slower"
    Let's just make jokes about the author and say he probably compiled without a release flag or something

  • @halneufmille
    @halneufmille 10 месяцев назад +54

    The end of the blog post of viralinstruction about this benchmark sums up my thought about Mojo and Julia perfectly:
    To me, Julia seems like such an obvious solution to the two-language problem in bioinformatics (and in deep learning). All the hard problems with bridging speed and dynamism have essentially been solved in Julia. At the same time, the language remains niche, mostly because it still has too many rough edges and usability issues, such as latency, the inability to statically analyse Julia or compile executable binaries. But these issues are not fundamental to the language - they're rather in the category of ordinary engineering problems. Solving them is mostly "just" a matter of putting in tens of thousands of professional dev hours, which is a matter of getting tens of millions of euros to pay for hiring people to do the job.
    It does grate me then, when someone else manages to raise 100M dollars on the premise of reinventing the wheel [a.k.a. Mojo] to solve the exact same problem, but from a worse starting point because they start from zero and they want to retain Python compatibility. Think of what money like that could do to Julia!

    • @MrAbrazildo
      @MrAbrazildo 10 месяцев назад +1

      So does Julia have a smaller learning curve than Rust?

    • @RazgrizDuTTA
      @RazgrizDuTTA 10 месяцев назад +4

      @@MrAbrazildo Julia is very easy to use. If you avoid a few common mistakes, you can quickly write fast programs. Squeezing the last drops of performance can be tough though.

    • @ck-dl4to
      @ck-dl4to 10 месяцев назад

      They want Python code to run as fast as C lol

    • @MrAbrazildo
      @MrAbrazildo 10 месяцев назад

      @@ck-dl4to I guess it's easier to make C++ have efficient high-level features, like ranges.

    • @Navhkrin
      @Navhkrin 10 месяцев назад +7

      This argument is mostly raised by Julia fans, but it really doesn't make sense. It is more wishful thinking from a Julia fan than a good argument.
      1. Julia syntax sucks vs Mojo syntax.
      2. Mojo is more than Julia. It isn't "just an alternative". Mojo solves far more problems than Julia ever aimed to solve.
      Some advantages that Mojo has over Julia:
      1. It allows access to MLIR intrinsics directly, which makes adding support for fancy accelerators CONSIDERABLY easier.
      2. Native GPU codegen allows never-before-seen heterogeneous compute abilities that are extremely simple to use.
      3. Better syntax decisions.
      4. It supports both AOT and JIT. This makes the compiler chain SIGNIFICANTLY more complex and advanced than in a language that supports just one. This is also what supercharges Mojo with the dynamism of an interpreted language and the performance of an AOT-compiled language.
      At that point, it really wouldn't make sense to spend dev effort on enhancing Julia, because Julia has achieved only 5% of what Mojo aims to achieve and it doesn't start on strong foundations to begin with. If we are going to have to engineer the other 95% anyway, much better to start fresh with better foundations instead.

  • @ferdynandkiepski5026
    @ferdynandkiepski5026 10 месяцев назад +41

    If you're writing an article that mentions performance, you have to provide all the compiler options you used. If the Rust implementation didn't have the target-cpu=native codegen flag and the --release flag set, then it is an unfair comparison. And since he used a library for the Rust implementation, true LTO would also probably improve performance. So a set of compiler flags for all languages used is needed to make it a fairer comparison.

  • @magfal
    @magfal 10 месяцев назад +143

    I taught a bioinformatician that he could run python or rust inside the database they were using: Postgres
    He took one of his jobs and moved it to the db to skip the overhead. It was a bit more than 100 times faster than how they had been doing that task.

    • @farrela.846
      @farrela.846 10 месяцев назад +11

      Care to elaborate or drop some keywords about this? I'm curious about how it works and what we can do with it.
      Thanks!

    • @magfal
      @magfal 10 месяцев назад

      @@farrela.846 you can run your code within the context of the database server rather than have your code fetch data and push it back over a high-overhead connection.
      PGRX is the Rust option; pl/pythonu is the Python option
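
      A minimal sketch of the PGRX route (hypothetical function and column names, and it assumes the pgrx crate's #[pg_extern] attribute; check the crate docs before copying):

      ```rust
      // Build as a Postgres extension with cargo-pgrx; the function then runs
      // inside the database process, right next to the data.
      use pgrx::prelude::*;

      pgrx::pg_module_magic!();

      #[pg_extern]
      fn gc_content(seq: &str) -> f64 {
          // Fraction of G/C bases, computed without shipping rows to a client.
          let gc = seq
              .bytes()
              .filter(|b| matches!(b, b'G' | b'C' | b'g' | b'c'))
              .count();
          gc as f64 / seq.len().max(1) as f64
      }
      ```

      Once installed as an extension it can be called from SQL, e.g. SELECT gc_content(sequence) FROM reads; (hypothetical table and column), so the data never leaves the server process.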

    • @magfal
      @magfal 10 месяцев назад

      @@farrela.846 pgrx pl/pythonu

    • @mateusvmv
      @mateusvmv 10 месяцев назад +18

      @@farrela.846 Postgres Procedural Languages
      Reminds me of EVCXR, which can be used in jupyter, but I **think** PL/Rust compiles at creation time

    • @thedoctor5478
      @thedoctor5478 10 месяцев назад

      @@farrela.846 Chapter 46, "PL/Python - Python Procedural Language", of the Postgres docs. It's crazy that I didn't even know about it until just now, despite using both Python and Postgres daily since, like, forever.

  • @psychoinferno4227
    @psychoinferno4227 10 месяцев назад +21

    The rust crate is searching for both '\r' and '\n' while the Mojo function is only searching for one character, which defaults to '\n'. Is it any wonder the Mojo version is faster when the Rust version is doing twice the work?
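
    A rough Rust illustration of the difference being described, reduced to plain line counting (not the actual crate or article code):

    ```rust
    // Only looks for b'\n': one comparison per byte.
    fn count_lf_only(buf: &[u8]) -> usize {
        buf.iter().filter(|&&b| b == b'\n').count()
    }

    // Handles both "\r\n" and "\n" line endings: strictly more work per byte.
    fn count_any_line_break(buf: &[u8]) -> usize {
        let mut count = 0;
        let mut i = 0;
        while i < buf.len() {
            match buf[i] {
                b'\r' if buf.get(i + 1) == Some(&b'\n') => { count += 1; i += 2; }
                b'\n' => { count += 1; i += 1; }
                _ => i += 1,
            }
        }
        count
    }
    ```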

  • @djupstaten2328
    @djupstaten2328 10 месяцев назад +65

    iterators/generators/coroutines are actually faster than traditional (while and for-in) loops in python because they are very thin c-wrappers under the hood (thinner than the original ones).

    • @nevokrien95
      @nevokrien95 10 месяцев назад +5

      Yes, but you are still stuck with dynamic typing and the GIL.
      So every single operation is a type check and then you perform it, and on top of that you can't use your 32 cores...
      If Mojo solves this it could be nice, but the GIL is actually a key thing for a lot of the C packages that AI uses. And you then need to compare to Julia and R, and I doubt it would be much better.
      Cython has existed for quite a while and it was used to write a lot of these packages. It also gives you low-level control and other nice stuff.
      I think Mojo can basically be a tensor Cython, which would be really nice, but not a take-over-everything kind of nice.
      It still has a compile step, which is a bit too slow for my taste

    • @maksymiliank5135
      @maksymiliank5135 10 месяцев назад

      @@nevokrien95 Mojo has static typing. It also has structs, which are a static version of classes. No additional runtime checks that a field exists are needed. It supports SIMD parallelization and even running your code on the GPU with pretty simple syntax, if I remember correctly. You should check out the original Mojo demo/teaser video.

    • @DoomTobi
      @DoomTobi 10 месяцев назад +1

      Is that true though? Aren't they "just" faster because they avoid appending to a dynamic array all the time?

    • @nevokrien95
      @nevokrien95 10 месяцев назад

      @DoomTobi dynamic array appending is 100% fine, even C++ does it. Sure, usually they reserve capacity, but the log(n) number of reallocations is probably not the problem.
      Generators and maps are not that much faster in my experience. You can use ThreadPoolExecutor and map on it, then it's like 20x faster because it actually uses the cores.
      In a regular Python for loop, here are all the operations I can think of:
      1. Allocate a new heap object for the loop var and write to it.
      2. Type check the loop var.
      3. Actually do the arithmetic.
      4. Decrement the var's reference count.
      5. Figure out the var is out of scope later on.
      It's like 4 junk operations per actual thing. BTW 5 is actually very slow (it's a whole dependency-graph thing) and 1 wrecks your cache locality.
      Now if you are using a generator and a sum you can change this to:
      1. Get the next pointer.
      2. Follow it to the object.
      3. Type check the object.
      4. Do the arithmetic.
      Because you know built-in generators don't leak references out, you know a + operation on an int or float does not require you to save the object, and you know that because of the GIL no one else is freeing those objects.
      So do they actually use this trick? I don't know; I bet PyPy does, and I would venture Cython is doing so as well.

    • @ck-dl4to
      @ck-dl4to 10 месяцев назад

      ​@@nevokrien95 Compiler does

  • @the_mastermage
    @the_mastermage 10 месяцев назад +179

    Mojo won't be useable until it goes Open Source and until it fulfills its promise to be an actual python superset (it still calls the underlying python runtime for pure python code)

    • @SMorales851
      @SMorales851 10 месяцев назад +27

      What do you mean? Why does calling the python runtime make it not a superset of python? If it's straight up python with some added things, it's a superset in the most literal way possible.

    • @hammerheadcorvette4
      @hammerheadcorvette4 10 месяцев назад +6

      Cython better?

    • @darkalleycomics4028
      @darkalleycomics4028 10 месяцев назад +47

      I tried it; it took me 1 min to hit the fact that I can't create a "class" and I can't use f-strings. The "superset" of python at this time is all marketing. They shouldn't have released anything until python was fully implemented. Why not do that first, then add new features? I kind of feel like they are just taking advantage of every python developer's desire to have a faster python.

    • @onigurumaa
      @onigurumaa 10 месяцев назад +2

      Open source enthusiast 🤓

    • @bobbymorelli9763
      @bobbymorelli9763 10 месяцев назад

      lol

  • @sammysheep
    @sammysheep 10 месяцев назад +35

    Bioinformatics often relies on CLI tools, embarrassingly parallel tasks, and pipelined data processing. I think Rust is positioned well for the CS/bioinformatics methodologist, but, given the learning curve, maybe without universal appeal for data analysts.

    • @gristlelollygag
      @gristlelollygag 10 месяцев назад

      Sounds interesting, could you elaborate? (tag me/like the comment so i get a notification)

    • @mistdoyhta696
      @mistdoyhta696 10 месяцев назад

      rust is the worst when it comes to GPU computing

    • @masterchief1520
      @masterchief1520 6 месяцев назад

      Rust would be good for cli tooling and setting up pipelines and use CUDA for interfacing 😂.

  • @marktaylor7162
    @marktaylor7162 10 месяцев назад +46

    5:08 The term for idiomatic Rust should be 'Rustaceous'.

    • @iAmPyroglyph
      @iAmPyroglyph 10 месяцев назад +4

      Why not just "Rusty"?

    • @marktaylor7162
      @marktaylor7162 10 месяцев назад +7

      @@iAmPyroglyph Because I think 'Rustaceous' sounds cooler, and also because it sounds more like it's related to 'Rustacean'.

    • @Luxalpa
      @Luxalpa 10 месяцев назад +4

      Rustalicious!

    • @raffimolero64
      @raffimolero64 10 месяцев назад +2

      Rustic

    • @mrpocock
      @mrpocock 19 дней назад

      Crabby

  • @PeterFaria
    @PeterFaria 10 месяцев назад +37

    I played around with Mojo after reading that article. The SIMD type doesn’t do any bounds-checking, and also doesn’t zero-out unset values. It sacrifices a lot of safety for the sake of performance.
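
    For contrast, here is the same trade-off in Rust terms, where skipping the check is opt-in per call site rather than a property of the type (illustrative sketch only, not Mojo or article code):

    ```rust
    // Bounds-checked: every v[i] can panic, which costs a branch per access.
    fn gather_checked(v: &[f32], idx: &[usize]) -> f32 {
        idx.iter().map(|&i| v[i]).sum()
    }

    // Unchecked: faster, but the caller must guarantee every index is in range,
    // which is exactly the kind of safety a no-bounds-check SIMD type gives up.
    unsafe fn gather_unchecked(v: &[f32], idx: &[usize]) -> f32 {
        idx.iter().map(|&i| *v.get_unchecked(i)).sum()
    }
    ```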

    • @thedoctor5478
      @thedoctor5478 10 месяцев назад

      it zeros-out deez nutz pretty gud

    • @Julian.u7
      @Julian.u7 10 месяцев назад +4

      And that is a good thing. Nothing more annoying than unsolicited safety

    • @thedoctor5478
      @thedoctor5478 10 месяцев назад

      and unsolicited nutz@@Julian.u7

    • @isodoubIet
      @isodoubIet 10 месяцев назад

      @@Julian.u7 Sure but the whole marketing spiel around this language was how it's literally python but Blazingly Fast™. In contrast, the code in this article would actually be much superior had it been written in C++.

    • @ryanlog
      @ryanlog 8 месяцев назад

      SIMDeez nuts

  • @Endelin
    @Endelin 10 месяцев назад +43

    Watch Mojo be the name of some obscure snake species.

    • @SPeeSimon
      @SPeeSimon 10 месяцев назад +6

      No. It's the name of a big (music) concert promoter. :)

    • @johanngambolputty5351
      @johanngambolputty5351 10 месяцев назад +23

      Watch WatchMojo make a video of the top ten comments accidentally mentioning WatchMojo.

  • @SciDiFuoco13
    @SciDiFuoco13 10 месяцев назад +8

    According to their GitHub page, however, their implementation and needletail have pretty much the same performance (sometimes one wins, sometimes the other). Unfortunately I can't reproduce the results, since Mojo on my PC (WSL) immediately segfaults due to issue 1260...
    Also, this is nothing particularly difficult to code: the benchmark is just measuring how fast you can read a file and split it into lines.
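
    A hedged sketch of what "read a file and split it into lines" boils down to here, using only the standard library (hypothetical file name, not the benchmarked code):

    ```rust
    use std::fs::File;
    use std::io::{BufRead, BufReader};

    fn main() -> std::io::Result<()> {
        // FASTQ is 4 lines per record: header, sequence, '+', qualities.
        let reader = BufReader::new(File::open("reads.fastq")?);
        let (mut lines, mut bases) = (0usize, 0usize);
        for (i, line) in reader.lines().enumerate() {
            let line = line?;
            if i % 4 == 1 {
                bases += line.len(); // sequence line of the record
            }
            lines += 1;
        }
        println!("{} records, {} bases", lines / 4, bases);
        Ok(())
    }
    ```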

  • @nathanielsimard5908
    @nathanielsimard5908 10 месяцев назад +11

    The best-case scenario isn't to have a fast programming language so that data scientists can write optimized versions of their algorithms; it's to have a high-level library that does all of those optimizations for free. Way faster experimentation speed this way.

  • @jonnyso1
    @jonnyso1 10 месяцев назад +13

    Is this the one he forgot to compile in release mode ?

    • @jpratt8676
      @jpratt8676 10 месяцев назад

      Nooooo say it ain't so

  • @thekwoka4707
    @thekwoka4707 10 месяцев назад +29

    Not showing any of the Rust code makes this highly sus...

    • @adhalianna
      @adhalianna 10 месяцев назад +5

      I have a weird suspicion that they might have even done a debug compilation instead of an optimized one.
      The fact that we cannot even check for that typical mistake is very suspicious.

  • @annoorange123
    @annoorange123 10 месяцев назад +11

    Go is an awful name, not search friendly at all. If we're talking about good names, Haskell comes to mind or Elixir.

    • @haniffaris8917
      @haniffaris8917 9 месяцев назад

      You would expect a google-able name from a language developed by Google, but nope.

  • @flynn3649
    @flynn3649 10 месяцев назад +40

    Bro dropping new words in his articles

    • @Carter9007
      @Carter9007 10 месяцев назад +2

      Nascent looks to be nascent

    • @ninocraft1
      @ninocraft1 10 месяцев назад

      ​@jazzycoder r u retard?

    • @_zh3ro_
      @_zh3ro_ 10 месяцев назад

      nascent comes from Latin, so if you speak any language other than English that happens to be Latin based, it suddenly becomes not that fancy

  • @WizardofWestmarch
    @WizardofWestmarch 10 месяцев назад +1

    I don't remember if they were able to leave it on, but one thing to keep in mind on C vs Rust is that, if LLVM handles it correctly now, Rust's aliasing rules allow the compiler to avoid re-checking memory to ensure the register and memory are still in sync, since C lets you arbitrarily alias a block of memory any number of times, with several of the aliases being writeable.
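
    A tiny, generic illustration of that aliasing point (nothing article-specific):

    ```rust
    // `a: &mut i32` and `b: &i32` are guaranteed not to alias, so the compiler
    // may keep *b (and *a) in registers across both statements. The equivalent
    // C function has to assume `b` might alias `*a` and re-load it after the
    // first store.
    fn add_twice(a: &mut i32, b: &i32) {
        *a += *b;
        *a += *b;
    }
    ```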

  • @michaelhildebrand-faust4039
    @michaelhildebrand-faust4039 10 месяцев назад +1

    Python:Mojo :: JavaScript:Typescript.
    Look how Typescript has taken over from JS. Same thing will happen with Python -> Mojo.

  • @vectoralphaSec
    @vectoralphaSec 10 месяцев назад +3

    Mojo is the future of AI and ML fully replacing Python i believe.

  • @ruroruro
    @ruroruro 10 месяцев назад +4

    As a CV/ML Python developer, I despise Mojo's marketing. It's an interesting project with some good potential (especially if they end up opensourcing it), but their marketing is full of borderline lies and clickbait.
    If they think that this kind of publicity will help their project, they are sorely mistaken. All that they have done is train their potential users not to trust a single word coming out of their mouths.
    smh

  • @gabereiser
    @gabereiser 10 месяцев назад +4

    The whole article smells of GPT.

  • @mattythebatty1
    @mattythebatty1 10 месяцев назад +55

    Why don't they use a binary file format for such massive datasets?

    • @ErazerPT
      @ErazerPT 10 месяцев назад +8

      Was wondering the same because especially field 2 has just 10 possibilities, ie, you could pack each element into a nibble and make it 2el/byte. Just that would shave 50% of field size (~25% of the file size)... field 4 is only 7bit (if my math didn't fail me) so, could also shave another 14% by packing 8char>7bytes (of the field), ~7% of file size. All in all, you could definitely gain a lot from just going binary, as just reading such huge files is a lot of I/O time.
      But truth be said, the whole thing is so repetitive it just screams COMPRESSION!!!!! :P

    • @minerscale
      @minerscale 10 месяцев назад +12

      the real crime is reading in 4x the data that is required, due to using plaintext...

    • @MI08SK
      @MI08SK 10 месяцев назад +7

      Right! It doesn't make sense.
      Each letter could be represented in 2 bits, since the only possible letters are A, C, G, T.
      They could reduce memory usage by 4x by switching from a one-byte letter representation to a 2-bit letter representation
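
      A minimal sketch of that 2-bit packing (it assumes only A/C/G/T appear; real reads also contain 'N' and carry quality scores, which is why real binary formats are more involved):

      ```rust
      // Packs 4 bases per byte: A=00, C=01, G=10, T=11.
      fn pack_acgt(seq: &[u8]) -> Vec<u8> {
          let code = |b: u8| match b {
              b'A' => 0u8,
              b'C' => 1,
              b'G' => 2,
              b'T' => 3,
              other => panic!("unexpected base {other}"),
          };
          let mut out = vec![0u8; (seq.len() + 3) / 4];
          for (i, &b) in seq.iter().enumerate() {
              out[i / 4] |= code(b) << ((i % 4) * 2);
          }
          out
      }
      ```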

    • @VudrokWolf
      @VudrokWolf 10 месяцев назад

      @@ErazerPT While applying to a DNA sequencing company I learned they only look for differences from a normalized DNA sequence, so it is kind of misleading to think they sequence the whole DNA. It takes 7 days to make the comparison and find the differences. Maybe it is faster now; this was 6 months ago. I did not make it as a bioinformatician. 😢

    • @ErazerPT
      @ErazerPT 10 месяцев назад +3

      @@minerscale 4x is a bit hopeful, but credible. Just tried plain old lzma2 and it was 56>20MB (2.8x), and that's a compressor that knows NOTHING about the format. A proper binary representation could probably arrange things in a way that bitstream compression could exploit to at least 3x+. But yeah, s**t format. Just seeing stuff like start_time=2020-02-26T02:02:24Z makes me wanna scream.

  • @bloody_albatross
    @bloody_albatross 10 месяцев назад +6

    As I understand it tensors are n-dimensional matrices with all kinds of matrix operations and "broadcasting" of other operations (think SIMD, but because n-dimensions more complex and you have to think about what is happening rows/columns wise). Good tensor libraries have multiple kernels, usually you use a GPU (CUDA) kernel where all the operations run in parallel on your GPU(s). Therefore your ML program can be written in Python, since the actual heavy operations are (JIT compiled? (dunno) and) run on the GPU.
    PS: Out of curiosity I started to write a small matrix (not tensor) library in Rust nightly. You can do fun optimizations in Rust because of the ownership model, where you elide allocations. Think of mtx1 + mtx2 where at least one of the parameters is passed as owned: you can impl the Add trait so that in that case you simply reuse the memory of that parameter and do a += instead. However, there are limitations. To make this all somewhat nice you need to use nightly for better const generics, and it all gets really complex and redundant quickly; I managed to crash rustc.
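
    A stripped-down sketch of the allocation-eliding Add being described (flat Vec instead of a proper matrix type, stable Rust, no const generics):

    ```rust
    use std::ops::Add;

    struct Matrix {
        data: Vec<f64>,
    }

    // Taking `self` by value means we own its buffer and can reuse it for the
    // result, turning `m1 + &m2` into an in-place `+=` with no new allocation.
    impl Add<&Matrix> for Matrix {
        type Output = Matrix;
        fn add(mut self, rhs: &Matrix) -> Matrix {
            for (a, b) in self.data.iter_mut().zip(&rhs.data) {
                *a += b;
            }
            self
        }
    }
    ```

    With this impl, m1 + &m2 reuses m1's buffer instead of allocating a third one.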

    • @randomcubestuff3426
      @randomcubestuff3426 10 месяцев назад

      a tensor is just an element of a tensor product of vector spaces :thumbsup

    • @isodoubIet
      @isodoubIet 10 месяцев назад +1

      The "right" way to handle those kinds of optimizations is with expression templates (C++ name ofc, have no idea what the idea is called in Rust but from what I know it should be possible). The idea is that you build the computation graph in the type system. For example, instead of having the result of mat1 + mat2 be another matrix, you have it be a MatPlus. Then if you do (mat1 + mat2) * mat3, the type will be MatTimes, and so on. You only actually effect the computation when you actually want to store the result somewhere, at which point the whole thing can be optimized by SIMD, fused-multiply-adds, whatever makes it fastest while minimizing cache misses.

    • @mrpocock
      @mrpocock 19 дней назад

      In ml, you often need to keep all the intermediates about to calculate gradients. Sad face.

  • @victorybhg
    @victorybhg 10 месяцев назад +1

    It's hard to compare languages of the same class (GC, VM, etc.) on performance, since their respective compilers can optimize them to roughly the same level; everything gets to machine code eventually. What does make the difference is whether the language allows the dev to express the most performant algorithms the target hardware resources (CPU, GPU, memory, disk, etc.) allow. Mojo seems to expose these interfaces (SIMD vectors etc.) more readily than other languages, but that doesn't mean those other languages can't also have these interfaces. It comes down to what provides the easiest path for the domain experts to do what they need to do.

  • @mrpocock
    @mrpocock 19 дней назад

    I was part of the core team that wrote biojava, back around 2000. I also coined the word bioinformatician. It was far more accessible to people than c, and the python libraries were much slower. Things have moved on, but due to multithreading, there are still a lot of very fast bioinformatics tools written for the jvm. These days I mostly write Rust.

  • @humansaremortal3803
    @humansaremortal3803 10 месяцев назад +42

    Worked with some of the people doing "bio-informatics" for two years. They don't code, nor do they have desire to code. I was paid to code, while they were dealing with cells, sequencing, proteins and bunch of other things that I had zero interest in. Some of them are as autistic as average programmer, meaning some of them are decent company.

    • @sammysheep
      @sammysheep 10 месяцев назад +3

      This is not generally true. We are split between data analysts and methodologists. Most of us code at a data science and data engineering level. Methodologists can be mathematicians and computer scientists, and code at a deeper or more esoteric level. Hence the two-language problem cited above.

  • @apaz-cli
    @apaz-cli 10 месяцев назад +2

    I work as a researcher, and I'm building these kinds of tooling and data pipelines, but have literally never heard of anyone using Mojo. The general consensus is that it's not even a contender until they open source it. I haven't even heard anybody talk about it for at least a few months, which in the machine learning world is an eternity. I legitimately forgot that they existed. Even if Modular does open source it, I doubt their ability to beat the Huggingface model serving ecosystem, which is all in Rust. I hope that Modular pivots before they burn the whole hundred million dollars they raised. I don't think they actually understand the problem they're trying to solve.

  • @boccobadz
    @boccobadz 10 месяцев назад +12

    I call it BS, exactly like the guy from Reddit. As someone with MSc in Electronics & Computer Science in Medicine, I've done some stuff with bioinformatics algos in Cpp (even though it wasn't my main field of expertise) and I doubt that this mojo code can outperform highly optimized and battle-tested Cpp parsers (written by much smarter and better-paid people than this article's author). I feel it's the same as their example with mojo magic being faster than their hand-written Python code for matrix multiplication instead of benchmarking it against numpy (which has nothing to do with python because it's a wrapper on 40 yo Fortran lib).
    Again, it's an advertisement full of BS - if it was so great and magical, it would be open source already.

  • @Exilum
    @Exilum 10 месяцев назад +11

    0:45 This actually brought back a core memory. Back when I was in highschool, I used "learnt" a lot because it was how English was supposed to be. But I eventually figured out that few English speakers actually knew that, and used learned instead. Now, I intentionally use the incorrect learned form because it's just how the language evolved. Learn just isn't treated as the irregular verb it is.

    • @NibbleMeTwice
      @NibbleMeTwice 10 месяцев назад +1

      Good on you for having learned a second language.

    • @samarrowsmith2723
      @samarrowsmith2723 10 месяцев назад

      Start calling people "my learned friend" where you voice the last 'e' and we can start a movement

  • @RazgrizDuTTA
    @RazgrizDuTTA 10 месяцев назад +9

    I switched from Python to Julia to do my PhD and it's amazing! I have a bad feeling Mojo will kill Julia and waste all that effort just for non-CS devs to avoid learning a new syntax.

  • @VivBrodock
    @VivBrodock 10 месяцев назад +2

    Prime: "idk what a tensor is"
    Me, a math major currently learning tensors: "idk what a tensor is either"

  • @tajkris
    @tajkris 10 месяцев назад

    1:55
    Write your critical path in a way that mostly uses primitives and you're good to forget about GC. In fact, most popular jvms have escape analysis implemented to allocate objects on the stack instead of heap, so if you're careful enough you can use classes too.
    Not that I am a fan of Java, but if they can write low latency algorithmic trading systems (tens of microseconds or lower to deserialize a price tick, make a decision, prepare an order, serialize and push it out to the network card), high throughput message processing system (Kafka), or highly optimized low latency, concurrent, lock-free queue (LMAX disruptor) then I'm sure it can handle bioinformatics too.

  • @joe5head
    @joe5head 10 месяцев назад +5

    1:10 thousands of machines each worth several millions of dollars

    • @knnk4000
      @knnk4000 6 дней назад

      when im in a lying competition and my opponent is somebody named "mohammed"

  • @melodyogonna
    @melodyogonna 10 месяцев назад +17

    Those writing Mojo off are very funny. Mojo is just a user interface for MLIR; Chris Lattner and his team wrote MLIR by hand and validated that it can be just as fast as assembly, and so performant enough to handle the workloads they intended it to take on. Mojo is just a user interface so they wouldn't have to write MLIR by hand anymore; they could have made it look like anything, and they chose Python because that's what the ML community is familiar with. Speaking of Chris Lattner... that is the same person who started LLVM, Clang, Swift, and MLIR. This isn't some random compiler guy claiming things; there is a track record.

    • @melodyogonna
      @melodyogonna 10 месяцев назад +1

      @@anonymousalexander6005 Python isn't hard to implement, there are just 36 keywords, and straightforward semantics. The part that makes it a superset is what they're currently building, and that is the low-level parts Python doesn't have. Also, being a superset is a long-term goal.

    • @monad_tcp
      @monad_tcp 10 месяцев назад

      I thought automatic vectorization was a solved problem. Turns out it isn't when you have heterogeneous architectures.

    • @lmnts556
      @lmnts556 10 месяцев назад

      This is true, Chris is a beast that cant be underestimated.

    • @UnidimensionalPropheticCatgirl
      @UnidimensionalPropheticCatgirl 10 месяцев назад +2

      I mean, LLVM succeeded despite being a mismanaged mess, not because of it; Swift didn't get good until he got booted off the project; and MLIR hasn't been widely used yet. So as far as I am concerned, everything Lattner touches is vaporware until proven otherwise.

    • @melodyogonna
      @melodyogonna 10 месяцев назад

      @@UnidimensionalPropheticCatgirl you must be a troll.

  • @steffahn
    @steffahn 10 месяцев назад +21

    1:07 no, you got that all wrong, it's "thousands of [...] machines"! The "thousands" is the number of machines, and "multi-million dollar" the price of each machine.

    • @yakocal
      @yakocal 10 месяцев назад +11

      Which amounts to billions of dollars worth of machines, and that's what he said...

    • @steffahn
      @steffahn 10 месяцев назад +4

      @@yakocal No, he said "most people say 'billions'", suggesting that the phrase could simply be re-worded to use the shorter "billion" somehow. I disagree with that statement and assume it has only come about from a misunderstanding of the sentence structure.
      I mean, tell me, how WOULD you re-word this? You say "billions of dollars worth of machines". But that sounds weird too, and loses information, as in "billions of dollars worth of DNA-sequencing machines are working non-stop ...". Arguably the "thousands of machines" is the CORE information here, given the sentences around it focus not on economics ("how many dollars") but on scale ("how much data processed").

    • @Carter9007
      @Carter9007 10 месяцев назад +1

      @@steffahn so we conclude that it's subjective...

  • @J0R1AN
    @J0R1AN 10 месяцев назад +14

    9:08 Average VIM experience

  • @dixaba
    @dixaba 10 месяцев назад +8

    You can almost always write a faster version if you make some assumptions: "No, that format is outdated", "Nah, surely there will be no right-to-left languages used", "Of course every file will end with an empty line" and so on. And even if you write code in accordance to specifications, other widely used tools almost always will have quite a bit of QoL features, which can easily become dealbreakers for end users.
    As an example, GNU grep does support Perl-compatible regular expressions (using -P flag), while POSIX grep does not. If you write your blazingly 🔥fast 🔥Mojo 🔥grep 🔥that is only POSIX-compliant, it will work for some users, while many other users (including myself) won't even consider it.

  • @enkiimuto1041
    @enkiimuto1041 10 месяцев назад +5

    This was the weirdest watch mojo I saw.

  • @johanngambolputty5351
    @johanngambolputty5351 10 месяцев назад +4

    I spent a lot of time in python and I'm very glad that I did, because it never needed to be fast, in fact it could have been 1000x slower and I still would have got the same experiments/prototypes done. I swapped when I started trying static analysis, which was just yelling about a bunch of my old code not adhering to stylistic standards or because it couldn't infer what I was doing with certain variables, at that point I might as well strongly type and get useful comments from the linting. I haven't really looked at Mojo, but Rust has some very tasty generics and meta-programming, which has the potential to let you get a lot more done with less code, which is what I've always loved.

    • @vatanak8146
      @vatanak8146 10 месяцев назад +1

      Python has type hints you know?

    • @johanngambolputty5351
      @johanngambolputty5351 10 месяцев назад

      @@vatanak8146 I know, but when I tried it, it was hit and miss. It's funny though, because if you don't want random warnings (from ruff-lsp, I think) everywhere, you end up having to explicitly annotate more variables than in Rust.

  • @bravethomasyt
    @bravethomasyt 10 месяцев назад +4

    This code has been heavily criticised in the Rust community, and it's expected that, done properly, rust will be faster. It's a clickbait article.

  • @griof
    @griof 10 месяцев назад +3

    The truth is, data scientists and researchers will use Python... Nobody on this side of tech cares about perf. Or safety. You can always rent more CPU in the cloud. Also, data scientists don't like to program, and we are actually very bad programmers.

    • @_MrKekovich
      @_MrKekovich 9 месяцев назад

      The truth is, new tools for python will be written in mojo instead of C/C++. Mojo does not aim to replace python.

    • @MichaelYoussef-kn8cy
      @MichaelYoussef-kn8cy 7 месяцев назад

      Most ML research involves comparing different methods/algorithms, so the performance of a language is irrelevant - you just need to make sure everything is implemented in a consistent language. For companies, performance is crucial, and that’s where Mojo will shine

  • @abinavravi1063
    @abinavravi1063 10 месяцев назад +10

    Mojo seems like a movement towards Rust while retaining Python syntax to an extent; they have structs and traits instead of classes and the OOP world.

  • @sdi87hhk
    @sdi87hhk 10 месяцев назад +4

    that "cool story bro" sounded personal lol

  • @sweiscool
    @sweiscool 10 месяцев назад +2

    "Everybody I know who does Julia loves Julia" - Prime, 2024

  • @Turalcar
    @Turalcar 10 месяцев назад +1

    3:08 The most amusing thing to me is that there exists a class called "Wirth languages"

  • @nitsanbh
    @nitsanbh 10 месяцев назад +1

    0:39 Hey Prime, ya can’t have GREEN HAIR and a GREEN SCREEN at the same time

  • @anotherelvis
    @anotherelvis 5 месяцев назад

    Mojo is faster than Rust in a specific benchmark involving tail calls.

  • @chigozie123
    @chigozie123 10 месяцев назад +8

    I'm pretty sure Julia was created to do exactly what Mojo 🔥 is now struggling to do.
    Julia syntax is intuitive and familiar to Python programmers. It has a thriving ecosystem of libraries and has seen massive adoption. It has an excellent C/C++ interop.
    Why do we keep reinventing Python?

    • @yldrmcs
      @yldrmcs 10 месяцев назад

      because data scientists are so stupid that they can’t write code in languages other than python. fact.

    • @kinomonogatari
      @kinomonogatari 10 месяцев назад +3

      Julia just failed to find a company to invest in it

    • @RazgrizDuTTA
      @RazgrizDuTTA 10 месяцев назад +2

      Mojo will kill Julia and waste all that effort just for lazy devs not to learn a new syntax. I hope time proves me wrong :(

    • @kinomonogatari
      @kinomonogatari 10 месяцев назад

      @@RazgrizDuTTA Well I quite like Julia as a language but at the end of the day I really do not care as a physicist who wins in between Julia and Mojo. I just want a performant modern language using which I can get my work done.

    • @RazgrizDuTTA
      @RazgrizDuTTA 10 месяцев назад +2

      @@kinomonogatari I understand your point of view. I wish I was that pragmatic. In my case, I do linear algebra for computational mechanics. I absolutely despise the Python NumPy and SciPy design and syntax, so I hope Julia will survive along with all the great linear algebra packages developed by the Julia community.

  • @josefkaras7519
    @josefkaras7519 10 месяцев назад +119

    Mojo is not faster than Rust; in the article he quite literally said that he spotted a mistake/oversight and implemented it better.
    edit: not true, the author didn't say that in the article

    • @josefkaras7519
      @josefkaras7519 10 месяцев назад

      also in bioinformatics redundant checks can save a life. as i see it - DNA -> cancer -> therapy with radioactive elements -> if u screw up, u die

    • @weirdworld3734
      @weirdworld3734 10 месяцев назад +8

      couldn't find that quote in article. Care to add it here?

    • @josefkaras7519
      @josefkaras7519 10 месяцев назад +34

      @@weirdworld3734 so... i apparently cant read. i was talking about "In addition, I explored optimizations from C/C++ implementations ..."
      sry for that, as i read it for the second time, it really seems that the author of the article says that those implementations are equal (optimization-wise)

    • @aneeshprasobhan
      @aneeshprasobhan 10 месяцев назад +26

      @@josefkaras7519 you actually admitted your mistake. That's rare on the internet these days.
      Kudos to you 😃

    • @HammytheSammy-ds2em
      @HammytheSammy-ds2em 10 месяцев назад +19

      @@aneeshprasobhan I'm more surprised that someone who understands code admitted they were wrong. It's a double whammy for me. Next thing he's going to say is he reads documentation before starting a project, lol.

  • @KilgoreTroutAsf
    @KilgoreTroutAsf 10 месяцев назад +4

    As a seasoned HPC developer I have to confess the lack of scientific, mathematical and engineering knowledge of the average FANG drone is as mind-boggling as it is scary.
    I really have no idea what they teach you for four years of a CS degree at the uni. Fingerpainting, seemingly.

    • @stretch8390
      @stretch8390 6 месяцев назад

      What?

    • @eliasboegel
      @eliasboegel 5 месяцев назад

      Agree with this. It is mindboggling how different the thinking of HPC developers is compared to "traditional" software engineers.

  • @binary_gaming113
    @binary_gaming113 10 месяцев назад +1

    I used SIMD and it was faster than not using SIMD: The article

  • @AggressivesnowmaN
    @AggressivesnowmaN 10 месяцев назад

    Prime nailed this take regarding likelihood of adoption. Most in that field would find more use in a fast simple language. I’d be more curious how Mojo compares to things like GO and Cython than Rust. Just because they seem to be after a similar niche

  • @_tsu_
    @_tsu_ 10 месяцев назад

    For such small programs, the progress bar is the thing that slows down the program the most. Looks pretty, very cool for writing heavy duty stuff, but unless your benchmark runs a minute or longer, you should disable the bar and then do it for fair results.

  • @0xnika
    @0xnika 10 месяцев назад +4

    Dude, you are breaking the green screen with your hair!

  • @vincentyan8531
    @vincentyan8531 10 месяцев назад +1

    As a data scientist I can assure you that 90% of us do not care about tooling, at all. The majority of data scientist use Jupyter Notebook (not even Jupyter Lab) as their one and only IDE. AS THEIR **IDE**. It is pointless to assume they will ever learn a new language just for the better tooling.

    • @Indonesia_Emas_
      @Indonesia_Emas_ 10 месяцев назад

      I mean, a pure Python implementation of a ViT model, for example, needs huge resources for its computation, and then we try to modify the architecture in the hope of making it more resource efficient. My thinking is: why don't we implement the ViT architecture in pure Rust? It would become much more resource efficient compared with modifying its architecture, because the modified version still suffers from Python's inefficiency.

    • @Indonesia_Emas_
      @Indonesia_Emas_ 10 месяцев назад

      but yeah, the rust data science framework shall be easy to use 😅😅😅😅

    • @Indonesia_Emas_
      @Indonesia_Emas_ 10 месяцев назад

      But the core of Python TensorFlow, for example (one of the Python ML frameworks), is written in a fast language, C++, with Python as the frontend. So I wonder where the bottleneck comes from.

  • @earx23
    @earx23 10 месяцев назад

    SIMD done in intrinsics in Rust will probably beat the pants off of that implementation. The beauty of Rust in this case is, even if it's just a kind of assembler without the explicit register bookkeeping, that its typing system makes you more aware of float to int type conversions, which are very expensive. Porting my intrinsics from C over to Rust yielded a nice speed boost.
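
    A hedged sketch of what hand-written intrinsics for this kind of byte scanning can look like in Rust (x86_64/SSE2 only; a real version would sit behind feature detection with a portable fallback):

    ```rust
    #[cfg(target_arch = "x86_64")]
    unsafe fn count_newlines_sse2(buf: &[u8]) -> usize {
        use std::arch::x86_64::*;

        let needle = _mm_set1_epi8(b'\n' as i8);
        let mut count = 0usize;

        let chunks = buf.chunks_exact(16);
        let tail = chunks.remainder();
        for chunk in chunks {
            // Compare 16 bytes at once, then count the matching lanes.
            let v = _mm_loadu_si128(chunk.as_ptr() as *const __m128i);
            let eq = _mm_cmpeq_epi8(v, needle);
            count += (_mm_movemask_epi8(eq) as u32).count_ones() as usize;
        }
        count + tail.iter().filter(|&&b| b == b'\n').count()
    }
    ```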

  • @uncrunch398
    @uncrunch398 6 месяцев назад

    I like optimizing my speech. Will use the shortest words, fewest syllables and words I can think quickly to compact my message to giving a higher priority only to that it can't be taken to mean something it doesn't. I often fail at the higher priority intent.

  • @9SMTM6
    @9SMTM6 10 месяцев назад +1

    hyperfine, which he used as the benchmark tool, actually does a few runs under the hood and calculates the standard deviation etc. for them.
    So it's not 'just one run'.
    Still, as others highlighted, there are a few other things where, at the very least, he did not prove that he did it right. Such as whether he used SIMD in Rust (likely not, and SIMD can easily be responsible for more than a 50% speedup).
    Whether SIMD in Rust is in a usable state is another question. I've never tried it myself (in Rust), so that could be a good motivation. But it makes for a less catchy title.
    Honestly, Mojo as a language is very tempting to me. I love Rust, but really, getting others to use it, when it does take a decent bit of time to do properly, is difficult and must be justified. Mojo promises me the ability to write code similar to Rust while still being at least somewhat editable for others. The big issue with Mojo is licensing (and, underlying that, the VC funding of Mojo). Frankly I don't see how they intend to make back their money without eventually doing things I'm not going to be happy with.

  • @juancarlospizarromendez3954
    @juancarlospizarromendez3954 10 месяцев назад

    In Rust, a value of type char is a Unicode scalar value, represented as a 32-bit unsigned word...; in C, and probably in the proprietary Mojo, 1 char = 1 byte unless other encodings are used. That's why, for 4 letters, ACGT, Rust is worse in speed at managing pure ASCII characters.

  • @johanngambolputty5351
    @johanngambolputty5351 10 месяцев назад +5

    I've always hated how the word tensor is used in ML to just describe a ndarray, but I guess you can't blame people being confused when the real definition often is "a tensor is something that transforms like a tensor".

  • @bopcity5785
    @bopcity5785 10 месяцев назад +1

    As someone who's written a lot of Python, Mojo is so annoying; it's literally a worse version of NumPy. (NumPy arrays are mostly written in C and autogenerate SIMD operations.)

  • @GamesmotionTV
    @GamesmotionTV 10 месяцев назад +1

    Just wondering if the Rust Community could come up with an equally fast solution. Unfortunately the article did not show any code.

  • @elgalas
    @elgalas 10 месяцев назад +1

    Mojo looks fun. Been doing Aoc with it, but it is still way too early for it! I ran into multiple bugs when using parallelism

  • @isodoubIet
    @isodoubIet 10 месяцев назад

    It's nuts how python (and now mojo) with heavy type annotations looks about as verbose and cryptic as template-heavy C++, except that templated C++ is better because templates are structurally typed, which ironically would fit better with the python usual duck typing. Like just use C++ at that point, you're not really gaining anything by deleting the braces.

  • @TurtleKwitty
    @TurtleKwitty 10 месяцев назад

    My guess is that people will favor Mojo heavily when doing initial parsing and testing out theories, and when it gets cemented it'll be pushed over to whoever was doing the super-optimized versions before, but now they can work on larger data sets while testing theories.

  • @jerrygreenest
    @jerrygreenest 8 месяцев назад

    So, what should I use now? Rust? Zig? Odin? Mojo? Write my own language with OCaml?

  • @nevokrien95
    @nevokrien95 10 месяцев назад +2

    Mojo is still a buggy mess...
    I tested it on their cloud; it broke with very small changes to what they did.
    It doesn't boot on my machine at all, it just crashes on install in very nasty ways.
    Basically, if it works it can be fast, but it just breaks all the time.

  • @amansawhney3318
    @amansawhney3318 10 месяцев назад +1

    The problem is that Mojo is just using Python as a front end for portable SIMD intrinsics. You still need to understand cache coherence and vectorized instructions. Changing the syntax doesn't by itself make the barrier to entry lower. I personally use Rust for all my ML and research work.

  • @mytechnotalent
    @mytechnotalent 10 месяцев назад +8

    I live for Prime's thoughts. It is interesting such a claim to be faster than Rust. Single algorithms can't be taken in a vacuum. I would love to see a larger set of benchmarks.

    • @thekwoka4707
      @thekwoka4707 10 месяцев назад +5

      Or even the Rust code used...

    • @SaHaRaSquad
      @SaHaRaSquad 10 месяцев назад +2

      @@thekwoka4707 And the Rust compiler flags. In the end Rust also uses LLVM as its compiler backend, so this 50% claim is difficult to take seriously without more details.

  • @KyleHarrisonRedacted
    @KyleHarrisonRedacted 10 месяцев назад +3

    Java, ya dude, big data infrastructure is all JVMs. Hadoop is Java, Spark is Scala, Elastic is Java, etc etc.
    Being honest, when machine learning was all the rage I was surprised to see that the big language of choice was... Python... of all things, and not Java.

    • @nevokrien95
      @nevokrien95 10 месяцев назад

      The reason for Python is two things.
      1. Arrays are nicer to handle in Python because list comprehensions are useful.
      You would not believe how many times I print data that I list-comprehend just to get a picture of what's going on.
      2. AI research isn't production code, it's all dev work, which means it's VERY hacky. I would go and read private vars left and right just to get a grasp of what's going on.
      Or I would overwrite forward with my_log(forward) to get an idea of what's actually happening under these abstractions.
      In Java such a thing would be impossible, which means some tasks I can solve in a hacky way in Python are unsolvable in Java...
      Also, Python has better C integration, and you have got to remember that AI used to be done in C++, so porting that into Python is a much easier task.
      Also, Python code for AI tends to be more framework independent, since you don't have static typing and tensors are mostly the same thing whether they are NumPy arrays, TensorFlow, or PyTorch.

    • @vatanak8146
      @vatanak8146 10 месяцев назад

      Is it worth it to learn Scala as an aspirant to Data engineering and Big data field? Or just stick to Python and SQL?

  • @lauraprates8764
    @lauraprates8764 10 месяцев назад +1

    Wasn't Mojo a GPU-accelerated language? Like, it isn't a big surprise that a GPU outperformed a CPU in a SIMD context; actually it's quite impressive that it only outperformed by 50%.

    • @Summersault666
      @Summersault666 10 месяцев назад

      Mojo outperforms even C running only on the CPU. Mojo llama is 30% faster than llama.cpp. When it comes to the GPU it's another level.

    • @lauraprates8764
      @lauraprates8764 10 месяцев назад

      @@Summersault666 but that is state-of-the-art C code; there is only so much an interpreter can do

    • @Summersault666
      @Summersault666 10 месяцев назад

      @@lauraprates8764 and it's impossible to access hardware optimizations without creating special semantic instructions for them. Mojo has MLIR: its hardware abstractions are reachable from language structures, and not the opposite.

  • @thoughtsuponatime847
    @thoughtsuponatime847 14 дней назад

    I'm fairly certain, Mojo will be my next language. This is exactly what I have been looking for.

  • @uzbekistanplaystaion4BIOScrek
    @uzbekistanplaystaion4BIOScrek 10 месяцев назад

    the more stuff i see from prime about new languages, the more i think c might just be the best tool for the job.

  • @cbbcbb6803
    @cbbcbb6803 10 месяцев назад

    Wasn't there a MoJo something used to create the WebOS software for the Palm phone?

  • @STChaosman
    @STChaosman 10 месяцев назад

    Bro, I'm not a native speaker; you're the only YouTuber for whom I put videos at 0.75 speed

  • @thedelanyo
    @thedelanyo 10 месяцев назад

    "Biggest takeaway" this keeps me coming back

  • @PhilipAlexanderHassialis
    @PhilipAlexanderHassialis 10 месяцев назад

    The real takeaway of course is the fact that "having to learn just a bit more and achieve something much better will always beat having to learn something totally new". Which is, for me at least, why Next won the React SSR wars: it was React as you know it, with one opinionated thing (routing) and an extra function to have server processing before delivering the finished rendered stuff to the browser. You could literally take an existing small but actually in production React application and convert it to SSR within an afternoon.
    Remember: developers are users too. And when we get into "user" mode, woe and behold, we are *far worse* than the users of the applications we create.

  • @InfiniteQuest86
    @InfiniteQuest86 10 месяцев назад +6

    This already existed. It's called Julia. You'd have to get all those people to move over. It's too late.

    • @MichaelYoussef-kn8cy
      @MichaelYoussef-kn8cy 7 месяцев назад

      Is Julia a superset of Python like Mojo?

    • @InfiniteQuest86
      @InfiniteQuest86 7 месяцев назад

      @@MichaelYoussef-kn8cy Dude, YouTube has been deleting like 50% of my replies to people. I had responded that Julia is as easy to learn as Python, but it is compiled and super optimized. It is its own language. It is better than Python in almost every way. Hopefully this doesn't get deleted too.

  • @doigt6590
    @doigt6590 10 месяцев назад

    I'm surprised you never saw it before, I've heard and seen pythonic many times when talking about languages that are pythonic like nim, lobster and mojo (and boo and genie, in days of yore)

  • @piotrs448
    @piotrs448 5 месяцев назад

    Rust or Mojo for backend REST API?

  • @georgerogers1166
    @georgerogers1166 10 месяцев назад +9

    Julia is better.

    • @tychoides
      @tychoides 10 месяцев назад

      I tested Julia a few months ago, to see if I could move my data analysis to it. It was bad. The JIT lag was 6 seconds for a runtime of 0.5 secs. I managed to compile the libraries to reduce the compilation time, but it was the most hideous and deranged compilation process I have ever seen. And the documentation regarding this was awful. And the dataframe library I tested (DataFrames.jl, I think) was on par with pandas on merging and almost 2 times faster for some grouping operations. However, I tried polars and it was amazing. It was multithreaded by default and gave me an order of magnitude improvement (10x) with half the memory consumption. And I was still using Python.

    • @omarruelas1897
      @omarruelas1897 10 месяцев назад +3

      Julia is not mature enough, but its syntax is the most comfortable I've seen

    • @RazgrizDuTTA
      @RazgrizDuTTA 10 месяцев назад

      @@omarruelas1897 Have you tried Julia in recent years? I have done 3 years of my PhD with it and it's great. The tooling is good too. I don't see what's not mature enough? It's for sure more mature than Mojo.

  • @k98killer
    @k98killer 10 месяцев назад

    Wild idea: ThePrimeagen may one day say "the name is or at least somewhat resembles ________". Not sure in what context, but it's theoretically plausible.

  • @tobene
    @tobene 10 месяцев назад +2

    IMO Mojo seems really promising, let's hope it will get open-sourced soon.
    It seems like a great alternative to Python or MATLAB for research.

  • @AgainPsychoX
    @AgainPsychoX 10 месяцев назад +7

    When I heard "Java" in context of "highly optimized tools" I had to take break and walk around to feel better...

  • @theaifam5
    @theaifam5 10 месяцев назад

    It feels like Actix and „unsafe“ drama.

  • @Veptis
    @Veptis 10 месяцев назад +1

    If PyPy is better than Cython and Mojo is better than both... why can't we get Cython to do the same?
    Mojo is a supposed superset, but you could already use Python cffi or other ideas abstracted away in the heavy-lifting libraries.

  • @martijn3151
    @martijn3151 10 месяцев назад +1

    If anything is fast it’s me moving away from a language when it’s called Pythonic.

  • @hipsterJoe1
    @hipsterJoe1 10 месяцев назад

    +1 to "Zero to production in Rust" is one of the best Rust books!

  • @sdsa007
    @sdsa007 6 месяцев назад

    I cannot imagine what it's like to speed-learn Rust for months, and then suddenly do an about-face and learn Mojo! All because you were too Pythonic and should not have switched in the first place! Stay on the right path, wonderful people! Everything has a niche.

  • @gustavojambersi9569
    @gustavojambersi9569 8 месяцев назад

    A lot of people were already not changing from Python to better-performing languages. They DEFINITELY will not change if Mojo performs somewhat close to Rust.

    • @MichaelYoussef-kn8cy
      @MichaelYoussef-kn8cy 7 месяцев назад

      That’s because no other language (including Julia) has anything close to the vast ecosystem of libraries that Python has for ML, data science, and math, but Mojo is designed to be a superset of Python.

  • @useruser-tc7xx
    @useruser-tc7xx 10 месяцев назад

    I don't think I can say Mojo in anything other than Mojo Jojo's voice. Maybe Dr. Evil's voice if I try hard enough :D

  • @MeysamBelash
    @MeysamBelash 10 месяцев назад

    Can you make a comparison between rust and qbasic in which qbasic wins?

  • @dbug64
    @dbug64 10 месяцев назад

    Is it just me or does Mojo look like Rust without the {}'s and then people just claiming "Look, it's just like Python with a few types in it"???
    If I was a Python coder, Mojo would look just as "complicated" as Rust to me.
    So weird that no one else seems to think this too.