Apple’s M1 chip with the neural engine - what is it, and why is it so disruptive?

  • Published: 3 Nov 2024

Comments • 150

  • @alexwatson6370
    @alexwatson6370 1 year ago +7

    Neural engine starts at 9:05

  • @twitchster77
    @twitchster77 3 years ago +28

    I just want a simple explanation of what the neural engine is and what it does. This is not the video I was hoping it'd be.

    • @abigailneves8843
      @abigailneves8843 2 years ago +4

      Me too😅 the internet is not helping me figure this out whatsoever

    • @sethmikael5129
      @sethmikael5129 2 years ago +3

      Well, I think it helps run AI tasks efficiently without using a lot of power.
      Like FaceID:
      without the Neural Engine, FaceID would be really slow. (See the sketch after this thread.)

    • @clinux2
      @clinux2 2 years ago +1

      yes, I'm still looking for that simple explanation
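
  For readers still looking for the simple explanation: the Neural Engine is a block on the M1 that runs neural-network inference (FaceID-style recognition, photo analysis) fast and at low power. Below is a minimal, hypothetical sketch of how an app reaches it, assuming coremltools 5+ on an Apple-silicon Mac; the toy model and its sizes are stand-ins, not Apple's, and Core ML, not the programmer, decides whether the ANE, GPU, or CPU actually executes the network.

      # Hypothetical sketch: convert a toy "FaceID-like" embedding network to
      # Core ML and let the scheduler place it on the Neural Engine if eligible.
      import numpy as np
      import tensorflow as tf
      import coremltools as ct  # assumes coremltools >= 5 on macOS

      model = tf.keras.Sequential([
          tf.keras.layers.Input(shape=(64, 64, 3)),
          tf.keras.layers.Conv2D(16, 3, activation="relu"),
          tf.keras.layers.GlobalAveragePooling2D(),
          tf.keras.layers.Dense(32),  # a 32-d "face embedding"
      ])

      # ComputeUnit.ALL allows CPU, GPU, and the ANE; Core ML picks per layer.
      mlmodel = ct.convert(model, compute_units=ct.ComputeUnit.ALL)

      # Inference looks the same regardless of which unit actually ran it.
      name = mlmodel.get_spec().description.input[0].name
      print(mlmodel.predict({name: np.random.rand(1, 64, 64, 3).astype(np.float32)}))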

  • @dandan1364
    @dandan1364 2 years ago +2

    Omg get to the point … 3 minutes of advertising before the show starts.

  • @skymakai
    @skymakai 10 months ago +1

    Curious on your thoughts now that there's an M2 Ultra available.

  • @Photomeike
    @Photomeike 3 years ago +13

    Thumbs up for that type beat intro!

    • @rewardx
      @rewardx 3 years ago

      I'm pretty sure he listens to Young Thug and throws up gang signs while showering!

  • @BeaglefreilaufKalkar
    @BeaglefreilaufKalkar 3 years ago +14

    Apple took a licence on ARM long before the iPhone; it started with the Newton, when Apple bought not only licences but also a shitload of shares.

  • @niladriray
    @niladriray 2 years ago +2

    Misleading title, nothing on neural engine

  • @EdwinvandenAkker
    @EdwinvandenAkker 11 months ago

    @11:55 Yeah, you mentioned the *_magic keyword_* a few times.
    I had to _temporarily disable Siri_ to watch the video 😆

  • @tortysoft
    @tortysoft 3 years ago +18

    Have you heard of Acorn? It was the 'A' in ARM. We have Acorn and the BBC to thank for starting all of this :-)

    • @andrewstones2921
      @andrewstones2921 3 years ago +3

      Yes, it's good to hear that mentioned, because Acorn gets so little credit for starting it all with the Archimedes.

    • @Teluric2
      @Teluric2 3 years ago

      Advanced RISC Machines

    • @tortysoft
      @tortysoft 3 years ago +2

      @@Teluric2 It WAS the A in ARM. They got very upset when reminded of their Acorn background though! I worked at ANT Ltd in Cambridge, closely related to both companies, and I wrote for Acorn mags for years.

    • @andrewstones2921
      @andrewstones2921 3 years ago +4

      @@Teluric2 You are correct, it changed from Acorn RISC Machines to Advanced RISC Machines.

    • @leonardofcohen
      @leonardofcohen 3 years ago +1

      God bless the Acorn RiscPC machines! :-)

  • @lakshminarasimmanv
    @lakshminarasimmanv 3 years ago +1

    11:57 my HomePod got activated when you said, "Hey Siri!" and answered the question you asked.

  • @ΒύρωναςΛαδιάς
    @ΒύρωναςΛαδιάς 3 years ago +16

    thank you for speaking on this!! it’s truly amazing what they did with that chip. the whole mac lineup is gonna be unbeatable

    • @PanosPitsi
      @PanosPitsi 3 years ago

      It's true: I bought it today and it smokes my PC with a 2080 Ti.

  • @miz-misah
    @miz-misah 1 year ago +1

    Are you reading from the computer's screen?

  • @KokkiePiet
    @KokkiePiet 2 years ago +1

    We are now a year and a half on. What is the status now, and what applications use the Neural Engine?

  • @seanflora397
    @seanflora397 3 years ago +24

    "Silicone" is not the same as silicon.

    • @earth4009
      @earth4009 3 years ago

      You Silly hahaha

  • @LeicaM11
    @LeicaM11 2 years ago +1

    Video starts at 09:12!😕

  • @paulborneo7535
    @paulborneo7535 3 years ago +6

    I use silicone to seal my tub and silicon to run my Mac. I will try grinding up some old memory chips and putting them in the silicone to see if that makes a difference. You can't have enough caulk.

    • @fryode
      @fryode 2 years ago

      That's what she said!
      I'll show myself out...

  • @lesfreresdelaquote1176
    @lesfreresdelaquote1176 3 years ago +9

    Very informative. I saw a video a couple of days ago about training ML models on an M1 ("Apple's New M1 Chip is a Machine Learning Beast (M1 vs Intel MacBook speed test)"), and the trick was to reduce the batch size, which in my experience is a very bad idea since it also changes how data is aggregated: if your batches are too small, you are going to miss some key data combinations. (See the sketch after this thread.) I guess we'll have to wait for the next generation of machines to really tackle GPT-3... :-)

    • @s0kulite
      @s0kulite 1 year ago +1

      This aged well
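
  On the batch-size trick discussed above, a minimal sketch of the trade-off, with toy data and hypothetical sizes (it runs on any Keras backend, including tensorflow-metal on an M1): smaller batches lower peak memory, but each gradient step then aggregates fewer samples, which is exactly the information loss the comment worries about.

      import numpy as np
      import tensorflow as tf

      # Toy dataset standing in for a real training set.
      x = np.random.rand(4096, 32).astype("float32")
      y = np.random.randint(0, 2, size=(4096, 1))

      model = tf.keras.Sequential([
          tf.keras.layers.Dense(64, activation="relu", input_shape=(32,)),
          tf.keras.layers.Dense(1, activation="sigmoid"),
      ])
      model.compile(optimizer="adam", loss="binary_crossentropy")

      # batch_size=8 lowers peak memory (why it helps on an M1) but gives
      # noisier gradient estimates per step; 256 smooths the gradient but
      # needs more memory per step.
      for batch_size in (8, 256):
          model.fit(x, y, epochs=1, batch_size=batch_size, verbose=0)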

  • @parasinthephilippines
    @parasinthephilippines 3 years ago +1

    I think the camera industry will be rocking on this. Especially Canon, with their R5 overheating problems.

    • @Teluric2
      @Teluric2 3 years ago

      The heat problem on cameras comes from the process node of the sensor and ASIC, not from the architecture.

  • @kishoreabraham
    @kishoreabraham 3 years ago +1

    Thanks. Great informative video.
    A small suggestion: please shoot a little wider, as reading from the teleprompter is noticeable and a little uncomfortable. Sorry for being too picky. It would definitely improve quality.

  • @karlspencer1493
    @karlspencer1493 3 years ago +1

    I have just watched this one video and I already find that this is an awesome channel!

  • @ck8708
    @ck8708 2 years ago +1

    Think about it: how would such a modest neural engine be beneficial for heavy training?

  • @utubekullanicisi
    @utubekullanicisi 3 years ago +9

    The Neural Engine's performance isn't 11 teraflops, it's 11 TOPS (trillion operations per second). (See the arithmetic sketch after this thread.)

    • @p3rrypm
      @p3rrypm 3 years ago +2

      One counts basic operations, the other floating-point math, and it is ridiculous to try and claim this SoC is as powerful as an RTX 2080.

    • @p3rrypm
      @p3rrypm 3 years ago

      Seems to me that Mr. Know It All doesn't know as much as he claims.

    • @utubekullanicisi
      @utubekullanicisi 3 years ago

      @@p3rrypm Who claimed that lol?
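
  The unit distinction in this thread is the whole point; here is a quick back-of-the-envelope comparison in Python (the RTX 2080 figure is a published ballpark, not from the video):

      # TOPS counts (usually low-precision) operations; TFLOPS counts
      # floating-point operations. Same magnitude, different units.
      ane_ops_per_s = 11e12         # Apple's M1 ANE figure: 11 TOPS
      rtx2080_fp32 = 10.1e12        # ballpark FP32 FLOP/s of an RTX 2080

      # The ratio is close to 1, but it compares int-heavy inference ops
      # against FP32 math, so it says nothing about relative performance.
      print(ane_ops_per_s / rtx2080_fp32)  # ~1.09, apples to oranges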

  • @gaborenyedi637
    @gaborenyedi637 3 years ago +2

    Again: x86 is RISC inside. It executes micro-ops, not instructions. As a first step, all CPUs (x86 and ARM alike) translate instructions into some internal representation (called micro-ops on x86). From that point on there is no (or no big) difference.
    There are two REAL differences: the translation to micro-ops and instruction fetching. The first takes some space on the chip and eats up some power, more for CISC, less for RISC. Although RISC has some advantage here, neither of these is an important factor.
    Instruction fetching, on the other hand, is another matter. RISC CPUs have fixed-size instructions, while x86 uses a prefix code, i.e. instructions have varying sizes and you can tell the size only once you have read one (at least its first few bytes). Therefore fetching 8 instructions at the same time is not an issue for an Apple M1, while fetching more than 4 is impossible for an x86, and even 4 is very hard. THIS IS THE POINT HERE: RISC now has an advantage because it has turned out that knowing where an instruction ends is more important than we thought, not because of the small instruction set. (The toy decoder after this comment illustrates it.)
    Read further only if you want to know why:
    Loading 8 instructions at a time matters in the case of a branch miss, when the predictor fails to pick the right branch to continue in an if-then-else situation. In this case the reorder buffer (see below) is flushed, and you need to fill it again ASAP (note that a CISC instruction typically produces more micro-ops, which helps a bit). The bigger your buffer, the longer it takes to fill. If it is too big, you will rarely fill it, and there is no point in making it that big.
    The reorder buffer (ROB) is important because it is what lets the CPU execute out of order: since clock frequency cannot be increased much nowadays, you need to execute more each cycle, but some instructions may need data that is not yet available, so CPUs execute later instructions from the ROB when they can. Apple was able to build an extremely large ROB, far larger than any other CPU out there, and it seems to have been worth it: their instructions per cycle skyrocketed, and that's why the processor is so strong. Of course the whole architecture (e.g. ALUs, caches, extremely fast DRAM) is different, since it's built to support this aggressive out-of-order execution.
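
  A toy illustration of the fetch-width point above (the encodings are invented for the sketch, not real ISAs): with fixed 4-byte instructions the next 8 instruction boundaries are known immediately, while prefix-coded instructions must be decoded one after another just to find where each one ends.

      def fixed_width_starts(pc: int, n: int = 8) -> list[int]:
          # All n starts are independent: pc, pc+4, pc+8, ... (parallel fetch).
          return [pc + 4 * i for i in range(n)]

      def variable_length_starts(code: bytes, pc: int, n: int = 8) -> list[int]:
          # Each start depends on decoding the length of every prior instruction.
          starts = []
          while len(starts) < n and pc < len(code):
              starts.append(pc)
              pc += 1 + (code[pc] & 0x03)  # toy rule: low 2 bits pick 1-4 bytes
          return starts

      print(fixed_width_starts(0))
      print(variable_length_starts(bytes([0x03, 0x00, 0x02, 0x01] * 8), 0))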

  • @rkustanto
    @rkustanto 2 years ago

    Does the A15 also have a 16-core Neural Engine? What's the difference?

  • @fslinky07
    @fslinky07 3 years ago +1

    Very informative, thanks for such a great video.

  • @mcg6762
    @mcg6762 3 years ago +6

    In my opinion the discussion about the impressive performance and power efficiency of the M1 chip is far too focused on the RISC vs CISC topic. Modern processors decode the instruction stream into internal operations, and the instruction set actually does not matter that much. The magic happens in the far larger back end of the processor. I think Apple could actually build an x86 processor based on the M1's internal architecture and beat Intel and AMD if they really wanted to.

    • @Teluric2
      @Teluric2 3 years ago

      Nah. If they really wanted to? Apple doesn't have IBM's expertise. You put Apple on a pedestal.
      x86 with M1 architecture??? So clueless.

    • @mcg6762
      @mcg6762 3 years ago +1

      @@Teluric2 I have never been an Apple fan and I never buy their products, but this processor really impresses me. It is a real feat of engineering. And I'm not saying Apple will ever build an x86 processor; I was just trying to make the point that in a modern processor the instruction set it implements matters less. The internal back-end architecture and memory subsystem matter more.

    • @Teluric2
      @Teluric2 3 years ago +1

      @@mcg6762 I'm not impressed, because I know the tech Apple is using is not new; it's already being used in AI.
      A GPU can improve the performance of a CPU.
      If a Xeon CPU scores 5,000 points, it will score 10,000 points if you add a GPU that accelerates video and processing.
      So the M1 without GPU acceleration (maybe there's an FPGA inside the M1) would be 50% less powerful.
      What people said Apple couldn't make is a CPU that matches x86, but Apple made an SoC, not a CPU. The only pure ARM CPU that gets that kind of performance is the A64FX.
      If you have a car with a turbocharger and it is faster than your car, could you say its engine is better than yours?
      An x86 processor with the same GPU acceleration used in the M1 would be faster than the M1. If AMD or Intel hasn't done it yet, it's because it's more expensive to build that kind of chip; a GPU on die makes it complex and expensive to make. Apple has underpriced this Mac to win the price/performance category.
      A new architecture is always more expensive.
      Don't know why people put so much credibility in what YouTubers test.
      Has any one of them studied at Princeton or been certified as capable of testing computers? Have you noticed that all reviewers use iPhones?

    • @mcg6762
      @mcg6762 3 years ago

      @@Teluric2 Geez. And you called me clueless...

    • @Teluric2
      @Teluric2 3 years ago +2

      @@mcg6762 Because the x86 architecture is complex and ARM is about keeping things simple. How are you going to add complexity to a system built on simplicity?
      What Apple has done is hybrid accelerated computing.
      Nothing new; unified memory is also not new:
      www.google.com/amp/s/www.nextplatform.com/2019/01/24/unified-memory-the-final-piece-of-the-gpu-programming-puzzle/amp/
      Everybody thinks Apple has done something that nobody on the planet has done before.
      I guess what you wanted to say is that Apple could do an x86 chip using the hardware acceleration built inside the M1, like the GPU and FPGA. That is not an architecture.

  • @ΒύρωναςΛαδιάς
    @ΒύρωναςΛαδιάς 3 years ago +11

    4:35 Actually, they started with the A4!

    • @absk601
      @absk601 3 years ago +4

      I think he doesn't know much about what he is saying!

    • @ΒύρωναςΛαδιάς
      @ΒύρωναςΛαδιάς 3 years ago +1

      @@absk601 I thought he knew the technicalities but didn't know the Apple branding.

  • @cogmission1
    @cogmission1 3 years ago

    Hi there! You seem like a great source of interesting info! I'd be interested to watch your review of the new M1 Pro and M1 Max MacBook Pro laptops.

  • @openroomxyz
    @openroomxyz 3 years ago +2

    What do you think are some useful deep learning inference computations for consumers? Will it change anything? Or is it like ARKit: yeah, it's a thing, but almost nobody uses it?

  • @ck8708
    @ck8708 2 years ago +1

    Neural engines are capable of fast inference, yes. But the way I see it, it's just a feature to run Apple's own management applications, such as child protection scanning. I haven't seen a single application that utilizes the Neural Engine.

  • @MrPirreE
    @MrPirreE 3 years ago +1

    You said "Hey Siri" and my HomePod mini served up an answer to your equation example. It also did this without the chip you were talking about.

  • @pebre79
    @pebre79 3 years ago +2

    Great job Apple! Gonna get the M3 when that comes out.

    • @jackjackson3507
      @jackjackson3507 1 year ago

      just here to remind you to buy M3

    • @CD6GLL
      @CD6GLL 7 months ago

      Buy m3 xd

  • @war6193
    @war6193 3 years ago +5

    Wow. I think your info is a bit out of date. All so-called CISC processors moved to RISC-style micro-ops a long time ago; the whole CISC versus RISC thing is not relevant anymore.
    Also, I hope this works out, as Intel has not been innovating for many years, which is the real reason the M1 is doing so well. It is 5 nm, and that accounts for most of the performance and power numbers that you mistakenly attribute to CISC vs RISC.

    • @jbjefe
      @jbjefe 3 years ago

      From my understanding you're close, but not quite right. RISC was introduced to Pentium chips in the nineties on the back end (at the core), but the front end remains CISC. Literally all instructions in x86 run through a translation layer from CISC to RISC, which happened not to be such a big deal for a long time because most instructions end up being RISC-compliant anyway. Still, it's enough of an issue that we are now nearing the performance ceiling (particularly on single core) of x86_64. Going back several years now, there has almost always been at least one dark core because the thermal constraints are just too high. With a purely RISC chip, such as ARM, the thermal constraints aren't nearly as limiting, since power consumption is a fraction of RISC+CISC.

    • @war6193
      @war6193 3 years ago

      @@jbjefe Again, you are a bit off the mark. CISC doesn't mean an instruction is variable length; it means the instruction contains both the load/store and the computation. There are CISC architectures that are fixed width and have _fewer_ instructions than some RISC instruction sets.
      In short, your point about RISC/CISC is simply not the way things work and not a real reason for the processor performance difference. It's true that Intel chips are bloated with SSE instruction handling. And it's true they have to translate into micro-ops, but the translation is not a bottleneck to performance, not even close.
      And finally, no ARM processor for the desktop or server environment can outperform the fastest AMD or Intel chips.
      I want them to succeed, but they are simply not there yet.

    • @war6193
      @war6193 3 years ago

      @@jbjefe Oh, and you are close but not quite right on another thing. Microcode was introduced with the 486, and a superscalar architecture, which MIPS and other RISC chips already used at the time, was introduced with the Pentium.

    • @Teluric2
      @Teluric2 3 years ago +1

      The M1's performance comes from GPU acceleration and ASIC video units, not from the cores.
      If the Apple ARM cores were fast by themselves, there would be no need for ASIC video encoders/decoders.
      An AMD chip with the same ASICs would kill the M1.

  • @carterspencer1787
    @carterspencer1787 3 years ago +3

    We love the Premieres.

  • @doomtomb3
    @doomtomb3 3 years ago +3

    I did not learn anything from this video

  • @DanShepherd72
    @DanShepherd72 2 years ago +2

    As a side note, another advantage of RISC is that the simpler instruction set leaves more die space for internal registers, so the CPU doesn't have to fetch as many variables from RAM, which slows things down. It also allows for more system-on-chip customisation, which I suppose is what the M1 is an example of: do as much as possible in a single chip and you don't need to rely so much on the external logic board. I'm hoping to see more servers adopt ARM-based architectures too, as it will greatly reduce power consumption, especially with integrated GPUs and neural-network features. Also praying ARM doesn't get sold to NVIDIA; it's about the only innovative tech England still owns and invented. We need Cambridge to keep innovating, so we can't let them get starved out!

  • @pavanlulla
    @pavanlulla 3 years ago +2

    Mr. Know It All says it all... except what we wanna know.

  • @goobfilmcast4239
    @goobfilmcast4239 3 years ago +9

    1:19 ... Not "attempting": this mainstream product is in the mass market, and they have succeeded. Not to be crass, but Apple is going to sell a poop load of M1-equipped Macs over Q4 2020 and Q1 2021, and many more with improved M-series chips in the not-so-distant future.

    • @michaelandrews4783
      @michaelandrews4783 3 years ago +1

      Let's hope they don't cripple it in a few years by dropping OS support like they usually do.

  • @vernearase3044
    @vernearase3044 3 years ago +1

    Most people are looking at these first Apple Silicon Macs wrong - these aren't Apple's powerhouse machines: they're simply the annual spec bump of the lowest end Apple computers with DCI-P3 displays, Wifi 6, and the new Apple Silicon M1 SoC.
    They have the same limitations as the machines they replace - 16 GB RAM and two Thunderbolt ports.
    These are the machines you give to a student or teacher or a lawyer or an accountant or a work-at-home information worker - folks who need a decently performing machine with decent build quality who don't want to lug around a huge powerhouse machine (or pay for one for that matter). They're still marketed at the same market segment, though they now have a vastly expanded compute power envelope.
    The real powerhouses will probably come next year with the M1x (or whatever). Apple has yet to decide on an external memory interconnect and multichannel PCIe scheme, if they decide to move in that direction.
    Other CPU and GPU vendors and OEM computer makers take notice - your businesses are now on limited life support. These new Apple Silicon models can compete speed-wise up through the mid-high tier of computer purchases, and if as I expect Apple sells a ton of these many will be to your bread and butter customers.
    In fact, I suspect that Apple - once they recover their R&D costs - will be pushing the prices of these machines lower while still maintaining their margins - while competing computer makers will still have to pay Intel, AMD, Qualcomm, and Nvidia for their expensive processors, whereas Apple's cost goes down the more they manufacture. Competing computer makers may soon be squeezed by Apple Silicon price/performance on one side and high component prices on the other. Expect them to be demanding lower processor prices from the above manufacturers so they can more readily compete, and processor manufacturers may have to comply because if OEM computer manufacturers go under or stop making competing models, the processor makers will see a diminishing customer base.
    I believe the biggest costs for a chip fab are startup costs - no matter what processor vendors would like you to believe. Design and fab startup are _expensive_ - but once you start getting decent yields, the additional costs are silicon wafers and QA. The more of these units Apple can move, the lower the per unit cost and the better the profits.
    So ... who should buy these M1 Macs?
    If you're in the target demographic - the student, teacher, lawyer, accountant, or work-at-home information worker - this is the Mac for you.
    If you're a heavy computer user like a creative and don't simply want a light and cheap computer with some additional video and sound editing capability for use on the go - I'd wait for the M1x (or whatever) next year. You'll probably kick yourself next year when the machines targeted at _you_ finally appear.

    • @CD6GLL
      @CD6GLL 7 months ago

      No my friend...

    • @vernearase3044
      @vernearase3044 7 months ago +1

      @@CD6GLL You _do_ realize that was a 3 year old post about the M1 powered machines, right?
      These machines _still_ compete favorably with current Wintel lightweight laptops or office-class machines.

    • @CD6GLL
      @CD6GLL 7 months ago

      @@vernearase3044 The M1 was an ANOMALY…

    • @vernearase3044
      @vernearase3044 7 months ago

      @@CD6GLL An anomaly? Don't quite get what you mean.
      It was a revolutionary change in chip architecture, sporting high performance and low energy usage.
      An anomaly implies a natural outlier … an exception to a rule rather than a revolutionary _invention._

    • @CD6GLL
      @CD6GLL 7 months ago

      @@vernearase3044 Hi sir. I need to use a translator to answer you. Maybe I was mistaken about your comment... let me read your comment again, and I'll know if I need to apologise to you... it's late here in Chile... greetings...

  • @bobbrown7511
    @bobbrown7511 3 years ago

    Thank you. Most informative and entertaining.

  • @warlockza
    @warlockza 3 years ago +1

    I'm really enjoying your videos. Your topics are exactly what I enjoy speculating about. Great job! Could I ask you to do an episode about LTE, 5G, and the new 6G network protocols? You could also comment on Dishy McFlatface from Starlink if you have any info on that.

  • @GaioBardelle
    @GaioBardelle 1 year ago

    I guess the day you were waiting for is finally here, with the new M2 Pro Mac minis.

  • @davidantill6949
    @davidantill6949 3 years ago +1

    Can you please cover the black boxes within neural networks? Do computers actually make up their own intermediate, all-purpose secret languages for translation purposes, and could such a language be useful for us to use (a computer's version of Esperanto)?

  • @HansBaumeister
    @HansBaumeister 2 years ago

    SoC is "System on a Chip", not "Silicon on a Chip" 🙂

  • @meleader
    @meleader 2 years ago

    Will it run Windows à la Boot Camp?

  • @denvera1g1
    @denvera1g1 3 years ago

    I'd like the inference chip to be integrated into Plex so it can automatically detect and adjust to sports that over-run their time slot, so I don't get an hour of the Super Bowl instead of The Simpsons, Family Guy, Bob's Burgers, etc.
    Long, but possibly important side note: my Apple M1 Mac Mini is not as efficient as my AMD 4750U ThinkPad L15. So far I've only tested the use case I bought the Mac Mini for, mainly Plex live TV recording and then transcoding those H.264 files into H.265. The Mac Mini is not only slower at transcoding at the same settings, it also draws more energy from the wall (33 Wh vs 26 Wh; see the arithmetic after this comment). This is actually of great concern to me. This 5 nm RISC processor is losing (in my use case) to a 7 nm CISC processor while also using more power. According to TSMC, an AMD 4750U ported to 5 nm should be roughly 50% more efficient than the original 7 nm processor (paraphrased). Considering that the M1 is both a far more efficient RISC design and a fully integrated design of RAM and SSD offering even better efficiencies, AND has an efficiency head start just because of the 5 nm node, this probably means the M1 just doesn't have enough cores, and to get this performance they've pushed the silicon well past the point of diminishing returns. Had this processor been 12+4 at half the frequency, it would probably use less energy than the current Mac Mini while being more than twice as powerful.
    This also might be a weird use case where CISC is better suited to the task than RISC (or ARM specifically).
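
  A quick check of the numbers in the comment above (both energy figures and the efficiency claim are taken from the comment itself, not independently measured):

      m1_wh, amd_wh = 33.0, 26.0  # energy per identical transcode job (from the comment)

      # For a fixed workload, energy per job is the fair metric, not watts.
      print(f"M1 used {m1_wh / amd_wh - 1:.0%} more energy")  # ~27%

      # The comment paraphrases TSMC as ~50% better efficiency at 5 nm, so a
      # hypothetical 5 nm 4750U would finish the same job on about:
      print(f"{amd_wh / 1.5:.1f} Wh")  # ~17.3 Wh, widening the gap further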

  • @AlexRubio
    @AlexRubio 3 years ago +1

    Linux has done this for years with ARM. Right now RISC-V is where it's at.

  • @10p6
    @10p6 1 year ago +1

    Interesting video. The Apple M1's 11 TFLOPS claim is BS. Even if the M1 runs at full speed without thermal throttling, and with zero bus contention from any other M1 device, getting that speed would mean the ANE does almost 215 operations per tick per core. Even at single-bit resolution, the ANE would need a bus bandwidth of around 1,350 gigabytes per second, while the M1's RAM is rated at 66 gigabytes per second. Even with the highest-speed cache, Apple's claim can at very best be a theoretical maximum within the ANE; in real applications the ANE is probably massively slower. (The arithmetic is reproduced after this comment.)
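
  Reproducing the arithmetic in the comment above; the ANE clock is an assumption implied by the comment's numbers, since Apple does not publish it:

      ane_ops = 11e12    # claimed operations per second
      cores = 16         # ANE cores in the M1
      clock_hz = 3.2e9   # assumed clock that makes the comment's math work

      print(f"{ane_ops / (cores * clock_hz):.0f} ops/tick/core")  # ~215

      # Even at one bit of fresh operand data per operation, the implied
      # traffic far exceeds the M1's ~66 GB/s DRAM bandwidth, so sustained
      # peak throughput has to live in on-die caches and registers.
      print(f"{ane_ops / 8 / 1e9:.0f} GB/s at 1 bit per op")      # ~1375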

  • @jdcrunchman999
    @jdcrunchman999 3 years ago

    Should I wait for the M2?

  • @dotexe4981
    @dotexe4981 3 years ago

    I wonder what happens when you CPU-mine with it, and whether it would increase the efficiency.

  • @DieBastler1234
    @DieBastler1234 3 years ago +1

    "Why is it so disruptive?" It's not. It's a modern ARM SoC like the Snapdragon lineup, with the performance advantage you would expect given the higher power consumption.
    The disruptive part is that Apple's desktop OS, including a wide range of applications, now runs on such an ARM SoC, and Microsoft's cannot.

    • @jbjefe
      @jbjefe 3 years ago

      I think the biggest improvement Apple made to their ARM SoC compared to others is the unified memory. I don't know of other SoCs that give full RAM access to the GPU. I honestly don't think Apple's GPU is really more powerful than Adreno or Vega; the main difference is that the memory bottlenecks are removed/reduced. I also suspect the same applies to the Neural Engine, but I don't know. (A sketch of the difference follows.)
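
  A sketch of the difference the reply above describes, assuming PyTorch with the MPS backend on Apple silicon; the code paths look identical aside from the device string, and what changes is the physical copy:

      import torch

      x = torch.rand(1024, 1024)

      # Discrete GPU: .to("cuda") pushes the tensor across PCIe to VRAM,
      # and .to("cpu") pulls the result back; both hops cost real time.
      if torch.cuda.is_available():
          y = (x.to("cuda") @ x.to("cuda")).to("cpu")

      # Apple silicon: CPU and GPU share one physical memory pool, so the
      # same transfers never cross a PCIe bus; that removed bottleneck is
      # the comment's point about unified memory.
      if torch.backends.mps.is_available():
          y = (x.to("mps") @ x.to("mps")).to("cpu")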

  • @turrafirmaguitarchannel
    @turrafirmaguitarchannel 3 years ago

    Super interesting topic thank you

  • @ejeylow206
    @ejeylow206 3 years ago

    that was so helpful thank you

  • @wade2805
    @wade2805 3 years ago

    Deep.

  • @nguyenngocly1484
    @nguyenngocly1484 3 years ago

    Do you know about the Walsh–Hadamard transform?

  • @mr.wrongthink.1325
    @mr.wrongthink.1325 3 years ago

    Bad argument that CISC "gets hot". What really counts is performance per watt.

  • @jbparsons
    @jbparsons 3 years ago

    You tell us to see the links below, but none of them are there!

  • @workingTchr
    @workingTchr 1 year ago

    So it's NOT for training. Dang. I was hoping my new M1 would let me separate my neighbor's barking dog from other background noises. I think there still might be online resources for this, so I'll have to see.

  • @AaronWacker
    @AaronWacker 2 years ago

    Brilliant summation. Thanks, Doctor. I now see the power in the M1: M1 Max versus RTX 3080 is 11 minutes to the RTX's 16 minutes running on GPU. With the TPU flag on, the TensorFlow time can be as low as 9 minutes. I also appreciate your coverage of the A14 and its place in NLP. (See the device check after this thread.)

    • @simply6162
      @simply6162 2 years ago

      Have you tried the M1 Ultra?
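
  For anyone wanting to reproduce numbers like those above, a minimal sanity check that TensorFlow actually sees the Apple-silicon GPU (assumes the tensorflow-macos and tensorflow-metal packages are installed):

      import tensorflow as tf

      # On a working tensorflow-metal install this lists a GPU device;
      # if only the CPU shows up, benchmarks run without acceleration.
      print(tf.config.list_physical_devices())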

  • @tokapi
    @tokapi 3 years ago

    Hey Sheldon!

  • @618GOLDENRATIO
    @618GOLDENRATIO 3 years ago +1

    I love these guys who try to explain how the internals of an Apple chip and its software work but have no access to any of the proprietary information. You are bashing the Neural Engine placement, yet you admitted at the beginning of the video that you had to go look UP the Neural Engine, OMG. I slept at a Holiday Inn Express last night, so I'm an Apple M1 chip expert. You're just guessing to get hits on your YouTube channel.

  • @silberlinie
    @silberlinie 3 years ago +1

    13:50, the crux of the matter: inferencing only.

  • @r-bybacani7416
    @r-bybacani7416 3 years ago +1

    This is good content for sure!!!! OVERCLOCK the M1 chip if you can!!!!!

  • @alonzalomas9550
    @alonzalomas9550 3 years ago

    information available on many subjects

  • @vangildermichael1767
    @vangildermichael1767 3 years ago +2

    I thought it was pretty obvious that the x86 architecture was dead when Intel allowed AMD to catch up, and then surpass Intel in x86 performance. I think the Optane memory that Intel spent a good chunk of time perfecting will play an enormous role in a neural processor. And when it all settles down, a worthy processor will emerge.

  • @cerealspiller
    @cerealspiller 3 years ago +1

    Actually, ARM-based laptops already exist. The MS Surface Pro X is just one example.

    • @tjerkheringa937
      @tjerkheringa937 3 years ago

      The Acorn machines were the first RISC-based PCs. And pretty awesome too.

  • @transeuropex
    @transeuropex 1 year ago

    Tesla's inference chip next. Once you do one chip video, you can't help but eat another one.

  • @riOdariot
    @riOdariot 3 years ago

    "I will likely upgrade". It sounded so robotic

  • @daan3298
    @daan3298 3 years ago +1

    Blablablablabla... 9:10 is where he actually starts talking about the Neural Engine.

  • @piotrd.4850
    @piotrd.4850 3 years ago

    Some 90% of this innovation is ... ASML/TSMC. There's no innovation in the battery. Neural Engine? Yeah, they claim insane performance; too bad no software is using it. Intel had the Movidius stick and AI-accelerated functions for quite some time (with the same problem: basically a one-trick pony in a single application). Also: Intel used to have its own ARM implementation (StrongARM).

  • @ourcollectivewisdom8769
    @ourcollectivewisdom8769 3 years ago

    Romulan Warbird!!!

  • @celltypespecific8988
    @celltypespecific8988 3 years ago

    Way too many ads...you need to develop your audience base first.

  • @rbvemulapalli
    @rbvemulapalli 3 years ago

    Seems like most of your episode is based on information you gathered. We'd appreciate it if you could go in-depth into the topics with your own expertise/analysis.

  • @Sara-xi2ug
    @Sara-xi2ug 3 years ago

    nice video

  • @CarlMoebis
    @CarlMoebis 3 years ago +1

    Dr. Know It All doesn't know the difference between silicon and silicone? You pronounced it as if Apple were making implants. Other than that, great episode. Maybe rename the channel to Mr. Repeats Some Stuff I Heard Somewhere Else. 👍

  • @ablearcher2753
    @ablearcher2753 3 years ago +1

    Okay, let me share my two cents on this as a fellow UNIX/Linux user. From what I have seen so far, the OS is way worse than KDE Plasma with Latte Dock, and I did the ultimate test here by asking my Mrs. which 'Mac' she would buy: she likes my Linux GUI more, saying that the actual macOS looks too complicated. Now let's talk hardware for a bit. 4 'performance cores' and 4 'efficiency cores' are either the same bollocks that I have in my $200 budget smartphone from the 'evil' Chinese company called Huawei, or just this hyperthreading thing where I have 4 fully featured cores and 4 more cores without an FPU, so I won't really recommend running anything on those... So what this really comes down to is a $65 Raspberry Pi 4B 8GB in a MacBook 'Pro' case with an ugly GUI, for over twice the price of a Lenovo ThinkPad E595 with 32GB RAM and two SSDs, one of them NVMe. I don't know about you guys, but in Germany we call this full-on FRAUD. Yes my friends, full-on FRAUD, that's what it is! So my suggestion to solve this issue: either keep your 'old' x86 Mac or get yourself the cheapest Lenovo laptop there is, get some cool stickers to put on the back, and install Linux. And if you want an actually very stylish beyond-Apple look and feel, I would suggest Hefftor Linux Plasma, which just happened to release a new version that you can download at www.hefftorlinux.net . Welcome to 2021!

  • @keithdow8327
    @keithdow8327 3 years ago

    It is pronounced SILICON, not SILICONE! There is no E on the end of Silicon Valley.

  • @msp5138
    @msp5138 2 years ago

    2505: Brawndo has electrolytes people... electrolytes...so, obviously, it's better than water, food, and even air.
    2022: Apple has the M1 chip people...M1 chip...so, obviously, it's better than any other tech company.
    Idiocracy has arrived...lol

  • @fabsanh
    @fabsanh 3 years ago +1

    If you're gonna read an article, why don't you write a blog instead? Boring!

  • @msp5138
    @msp5138 3 years ago

    Nine months later and:
    1. No one is talking about Apple's M1 chip.
    2. I still haven't heard one person in the real world mention Apple's M1 chip.
    3. Apple's laptop market share remains at 7%-8%.
    Further proof that most tech YouTubers are sponsored by Apple.

  • @minglingli7097
    @minglingli7097 4 months ago

    9 mins until video title //// scam

  • @ByronScottJones
    @ByronScottJones 3 years ago

    Almost TEN minutes to start getting to the point. Just, wow...

    • @MakeItMakeSense285
      @MakeItMakeSense285 3 years ago

      I’m sorry, did this FREE video waste your precious time?

    • @Ibakecookiess
      @Ibakecookiess 3 years ago

      @@MakeItMakeSense285 Yes. I'm not saying I'm owed money, just that this is not a good video.

  • @Aldo-Not_Reired_Yet
    @Aldo-Not_Reired_Yet 2 years ago

    Informative but your intro is too loooonng

  • @macgamer1973
    @macgamer1973 3 years ago

    I remember when Commodore came out with the AGA chipset; not too many companies developed software for it, and a couple of years later it was dead in the water. Same with the M1 chip: I haven't seen a AAA title coming to Apple's M1. I'm not buying into the hype.

  • @LeicaM11
    @LeicaM11 3 years ago

    Misleading title.

  • @RonG1960
    @RonG1960 3 years ago

    Still not going to buy a Mac to spy on me.

  • @madcalm2024
    @madcalm2024 3 years ago

    CISC chips have highly developed branch prediction & pre-execution; that's why they're so performant.

  • @jamesdubben3687
    @jamesdubben3687 3 years ago

    I just think back on all the Apple stuff I never bought and sent to a landfill. Looks like a continuous upgrade venture to me.

  • @bnodosa3919
    @bnodosa3919 3 years ago

    Apple Fanboy......

  • @АлександрШвед-н5д
    @АлександрШвед-н5д 3 years ago +1

    The Qualcomm Snapdragon 888 is better; do not believe the advertising.

    • @ΒύρωναςΛαδιάς
      @ΒύρωναςΛαδιάς 3 years ago +4

      um, no

    • @vangildermichael1767
      @vangildermichael1767 3 years ago

      Probably yea. The (real) processor that Qualcomm has. The (real) one is not even come to light yet. It is in some vault somewhere. Well, that is true for all the big boys (Intel, Apple, Qualcomm). Only difference. Qualcomm was there, years, and years ago. But Apple was also (ashkenazi have all the technology). Bottom line...it is going to be such an unbelievable next couple months. expect it!

    • @vangildermichael1767
      @vangildermichael1767 3 years ago

      @lithgrapher So. Are you suggesting that (tsmc) is Intel? Probably correct. There is only one. Just like the New world order. Intel, Apple, Google, Amazon, Walmart, att, and the little guys Texas instrument, Sony. There is only one. Always was, always will be.

    • @АлександрШвед-н5д
      @АлександрШвед-н5д 3 years ago

      @@ΒύρωναςΛαδιάς Oh yes

    • @bartomiejkomarnicki7506
      @bartomiejkomarnicki7506 3 years ago

      do you actually believe it?

  • @helloworld12
    @helloworld12 4 months ago

    Jesus Christ, this guy never gets to the point and beats around the bush so much.
    Actual content: 2 minutes, with 12 minutes of BS.
