AMD + Xilinx: UNSTOPPABLE

  • Published: 5 Jul 2024
  • 20% coupon code: C30
    win11 professional 1pc : www.ucdkeys.com/product/windo...
    Office2021 Professional Plus CD Key:www.ucdkeys.com/product/offic...
    Windows11Pro+Office2021Pro Global Bundle: www.ucdkeys.com/product/windo...
    Windows 11 Home CD KEY GLOBAL :www.ucdkeys.com/product/windo...
    office 19 pp:www.ucdkeys.com/office-2019-P...
    win10 pro:www.ucdkeys.com/Windows-10-Pr...
    365 account: www.ucdkeys.com/Office-365-Pr...
    Support me on Patreon: / coreteks
    Buy a mug: teespring.com/stores/coreteks
    Bookmark: coreteks.tech
    My channel on Odysee: odysee.com/@coreteks
    I now stream at: / coreteks_youtube
    Follow me on Twitter: / coreteks
    And Instagram: / hellocoreteks
    Footage from various sources including official youtube channels from AMD, Intel, NVidia, Samsung, etc, as well as other creators are used for educational purposes, in a transformative manner. If you'd like to be credited please contact me
    #AMD #XILINX #FPGA
  • Science

Comments • 450

  • @nekomakhea9440 2 years ago +304

    If AMD is smart, they will make an open API for the on-chip FPGA. Nvidia is the market leader in GPUs and AI in no small part due to the fact that they made CUDA open to third parties, which established an ecosystem around their product line.

    • @LaughingOrange 2 years ago +31

      You just described UCIe, and AMD is a founding member of the group responsible for this new standard.

    • @annebokma4637 2 years ago +23

      Xilinx already has an open API.

    • @JaenEngineering 2 years ago +15

      Advanced NPC AI
      Accelerated FEM/CFD
      "Realtime" (ultra low latency) audio/video DSP
      The possibilities are almost endless.

    • @kayakMike1000 2 years ago +10

      AMD has a thing called OpenCL.

    • @annebokma4637 2 years ago +14

      @@kayakMike1000 That's the beauty of AMD: they create open standards if needed, or join open standards if they already exist.
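
Since OpenCL keeps coming up in this thread: a minimal host-side sketch of what such a vendor-neutral accelerator API looks like in practice, using Python's pyopencl bindings (assumes pyopencl and an OpenCL runtime are installed; the doubling kernel is a toy example, not anything from the video):

```python
import numpy as np
import pyopencl as cl  # assumes an OpenCL runtime is installed

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)

a = np.arange(1024, dtype=np.float32)
mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

# The same kernel source runs on any vendor's device that ships a driver.
prg = cl.Program(ctx, """
__kernel void double_it(__global const float *a, __global float *out) {
    int gid = get_global_id(0);
    out[gid] = 2.0f * a[gid];
}
""").build()

prg.double_it(queue, a.shape, None, a_buf, out_buf)
out = np.empty_like(a)
cl.enqueue_copy(queue, out, out_buf)
assert np.allclose(out, 2 * a)
```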

  • @pweddy1 2 years ago +98

    One of the really common applications of FPGAs is serving low-volume markets. This includes lots of industrial, telecom and military applications. If you're only going to ship somewhere between 1,000 and 10,000 units of a product, it doesn't often make sense to pay the cost of rolling your own ASIC/CPU; you typically want hundreds of thousands to millions of sales to justify custom chips. That's why an FPGA makes sense in manufacturing and industrial applications. While an FPGA can theoretically be updated with new algorithms in the field, customers usually ship you the board back to do that. They want something that's tested in the factory before they update something that production depends on.
    Where the dynamic programmability of FPGAs actually does come into play is in data centers and high-performance computing such as supercomputers.
    The reality is, once you have an algorithm nailed down and you need massive volumes of that part, you're almost always going to move to an ASIC. FPGAs run an order of magnitude slower than the same algorithm implemented in direct logic. This is unavoidable: ultimately FPGAs are replacing AND gates and OR gates with look-up tables that are themselves constructed from AND gates and OR gates.
    Xilinx also ships a lot of hard logic components in their bigger FPGAs. The top end includes a quad-core ARM CPU and a low-performance dual-core ARM CPU, PCIe lanes, Ethernet MACs, AI engines, boatloads of DSP blocks, onboard RAM specifically for the CPUs, onboard RAM blocks for the digital logic, and a ton of things I can't think of off the top of my head.
    One other thing that's very significant is that Xilinx uses TSMC and has a very good relationship with, and understanding of, the TSMC design processes. Xilinx designers are really next level when it comes to chip design; they're essentially designing a chip to implement other chips.
    While there is a possibility of making Epyc CPUs with an FPGA built in, it's probably more likely that they would use IP that Xilinx designed to go in their FPGAs.
    There is no application that runs better on an FPGA than it will run on an ASIC. Even AI; that's why Xilinx implemented hard AI engines to go into their FPGAs.
    Industrial applications, space, military, telecom infrastructure and other applications where you're doing low-volume production and you don't need parts that are dirt cheap are typically going to be the markets you're in with an FPGA.
    The only place I really see FPGAs in mainstream computing would possibly be a reconfigurable I/O interface, and even there, things like PCIe would still be implemented in hard logic. I am pretty sure even USB 4.0 would have to be implemented in hard logic. Any logic you implement in an FPGA operates well under one gigahertz. Accelerators implemented in an FPGA have to be massively parallel applications, otherwise you might be able to beat them with either GPUs or CPUs.
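
A toy illustration of the look-up-table point above, as a Python sketch; it models how one 4-input LUT can be configured as any small gate, not how real FPGA fabric is built:

```python
# One FPGA logic cell is roughly a small truth table (a LUT) indexed by its
# input bits. Configuring the chip means filling in these tables, which is
# also why FPGA logic is slower than an ASIC: every "gate" is a memory lookup.

def make_lut4(truth_table):
    """truth_table: 16 output bits, indexed by the packed inputs (d c b a)."""
    assert len(truth_table) == 16
    return lambda a, b, c, d: truth_table[(d << 3) | (c << 2) | (b << 1) | a]

# The same cell becomes a 4-input AND or a 4-input XOR purely by its contents.
and4 = make_lut4([1 if i == 0b1111 else 0 for i in range(16)])
xor4 = make_lut4([bin(i).count("1") & 1 for i in range(16)])

print(and4(1, 1, 1, 1), xor4(1, 0, 1, 0))  # -> 1 0
```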

    • @whynot01 2 years ago +10

      This... Coreteks has a very superficial understanding of FPGAs.

    • @LiveType 2 years ago +5

      @@whynot01 Yeah... I have used them, and from what I can tell they are exclusively used for research and prototyping. Sort of like 3D printing. Once you have a design, you implement it directly in hardware as it's quite a bit cheaper. Literally just like 3D printing. The only real use case for these is low volume where costs aren't a key factor. Exactly like 3D printing...

    • @brodriguez11000 2 years ago

      Somewhere in all this are DSPs.

    • @thecooletompie 2 years ago +2

      Another downside of FPGAs is that they are generally way larger for the same design than an ASIC and also use more power/energy. Another reason why I don't see a future for FPGAs on the client platform is the relatively high complexity that comes with making an accelerator. It's not always as easy as writing your code and just pressing compile to get a hardware design out of it.

    • @anythingbutASIC 2 years ago

      @@thecooletompie My hope is that AMD does not go full Nvidia and make everything fully proprietary so no one can learn how to actually work with this hardware. It's bad enough that the off-brand dev boards don't even give you a PNR schematic half the time.

  • @KonstantinJH 2 years ago +83

    Xilinx actually already has heterogeneous architectures. The Zynq series has ARM cores and an FPGA on a single chip.

    • @anythingbutASIC 2 years ago +2

      Right, people have not even used the Versal (if you can get one) to its full capacity. The amount of radio-wave hacking you could do with one of those would scare people into the dark ages.

    • @sinephase 2 years ago +1

      well there ya go. This shit is gonna be awesome! :D

    • @linuxgaminginfullhd60fps10 2 years ago

      I am doing some research as a hobby. Xilinx stuff seems cool and I'd like to try some of my things running on top of it. Unfortunately it is just too expensive. I'd like to have a cloud solution where I could rent a server with a powerful FPGA just for the time I need it. This approach works really well with GPUs - there are even some places where I can get a multi-V100 GPU server running my code for a minute for free - ideal for the stuff I am working on.

    • @KonstantinJH 2 years ago +1

      Maybe look up Amazon EC2 F1.

    • @pg_usa 2 years ago

      OK, and do I install an x64 or ARM OS?

  • @YaseinoMakoto 2 years ago +12

    Hi, I left the electronics field quite a long time ago, but we used to program microcontrollers and FPGAs (Xilinx's, at the time). From what I recall, with a microcontroller you load your code onto the chip, where it is converted to machine language. It's fast and easy to develop, and slow to run. With an FPGA, the code has to be converted to a discrete model where all the inputs are turned into logic-gate formulas, via what are known as Karnaugh maps. In fact, FPGAs were only chips with a lot of NAND gates and a programmable matrix. When done, it's fast, like any discrete design. I remember the trend where everybody was moving away from those technologies, calling them too rigid and relics of the past, like RISC. Now with the return of RISC (thanks to mobile and ARM) I see the trend coming back to old tech by merging those solutions. The limitation of an FPGA was the number of physical gates. It might be able to run optimized code for specific uses, like a programmable acceleration card or chip. The key will be the communication bus and protocol. RISC lost the instruction-set battle to CISC, and now we might hardcode those sets on an FPGA on demand. I'd like to see that.
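
A small Python sketch of the conversion step described above: going from a truth table to a sum-of-products gate formula, the form a Karnaugh map minimizes by hand (a real synthesis tool would also minimize the result):

```python
from itertools import product

def sum_of_products(f, n):
    """Return a sum-of-products expression for an n-input boolean function f."""
    terms = []
    for bits in product([0, 1], repeat=n):
        if f(*bits):  # keep the minterms where the function outputs 1
            lits = [f"x{i}" if b else f"~x{i}" for i, b in enumerate(bits)]
            terms.append(" & ".join(lits))
    return " | ".join(terms) if terms else "0"

# A 2-input XOR expressed in gates: (~x0 & x1) | (x0 & ~x1)
print(sum_of_products(lambda a, b: a ^ b, 2))
```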

  • @Zappyguy111 2 years ago +127

    AMD has always had a "build it and they will... finish it off" mentality. If they can build it from the hardware layer all the way up to the software, like Nvidia, I think they stand a chance of making this work.

    • @samlebon9884 2 years ago +15

      They already have a strong foundation for heterogeneous computing. Infinity Fabric is the core component of their architecture.

    • @sinephase 2 years ago +1

      You're making it sound like they don't have a good plan in place to justify the massive purchase they just made.

    • @Zappyguy111 2 years ago +11

      @@sinephase Historically, that would've been the case I'd be making. These days, I'm confident they can pull through.

    • @hammerheadcorvette4 2 years ago

      @@sinephase I heard the same thing when they bought ATI...

    • @hammerheadcorvette4 2 years ago

      @@samlebon9884 ROCm (previously HSA) is the future of AMD's heterogeneous architecture. Everything will be heterogeneous.

  • @Kneedragon1962 2 years ago +48

    Well said, well explained.
    From what I'm seeing, there are two broad areas that FPGAs are going to revolutionise: AI-designed algorithms (which he talks about, like the image recognition thing) and extreme high-speed communications, like the boundary between PCIe and Infinity Fabric and something like FireWire, or let's say the comms between two cores in a supercomputer that need to communicate but are in different cases in different rows of the building... It can be and has been applied to general internet networking, but that doesn't seem to be the most applicable ~ with one exception ~ trading bots for a stock market / futures market. You can 'write' an algorithm / trading bot in pure gate logic, program that onto an FPGA, and integrate that onto your network card, and that bypasses the operating system and the BIOS and everything - your input comes over the network and your output / reply goes straight back onto the network, and it is fast in a way nothing else can match. You measure it in tens of clock ticks, not whole seconds...
    As a conceptual example, mining crypto is a good case, but that's not really what this is for. There are certain kinds of algorithms that general-purpose CPUs are just not designed to do, and nor are GPUs. The beauty of an FPGA is that if your AI can come up with an even better / faster design, you can implement it in seconds, not months.
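
To put "tens of clock ticks" in perspective, a back-of-the-envelope helper; the 250 MHz clock and 40-tick budget below are illustrative assumptions, not figures for any particular card:

```python
def fpga_latency_ns(clock_mhz: float, ticks: int) -> float:
    """Wire-to-wire latency of a fixed-length FPGA pipeline."""
    return ticks / clock_mhz * 1000.0  # ticks / MHz gives microseconds

# A hypothetical 40-tick trading pipeline clocked at 250 MHz:
print(fpga_latency_ns(250, 40))  # 160 ns, vs. microseconds for an OS network stack
```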

    • @DarksurfX 2 years ago +1

      I'm hoping they use the FPGAs for video encode/decode.

    • @JaenEngineering 2 years ago +2

      @@DarksurfX That's not the point. You don't use it to do just one thing. The idea is that you can program it to do whatever you need it to do. So the same hardware can be used for video encode/decode as easily as it can be used to accelerate computational fluid dynamics, or finite element modeling, or digital signal processing, or NPC AI, or whatever else you can think of.

    • @DarksurfX 2 years ago

      @@JaenEngineering I'm aware of that, but that wouldn't be my use case. I personally want to use it for video encode and decode. Also, rather than having a chip with static limitations like AMD's crappy implementation of UVD and VCE/VCN, it could be updated over time to support things such as 10-bit, 4:4:4, and even AV1 as technology progresses. No more need for things like the Nvidia encode/decode matrix to know what your card can encode/decode. I'm well aware an FPGA can achieve greater things, but I'm not personally developing with machine-learning algorithms.

    • @JaenEngineering 2 years ago +6

      @@DarksurfX That's cool, you can use it how you wish. That's exactly the point. But saying "I hope they use it for X" is kinda pointless, as they're not the ones deciding what it's used for; the end user is. That's the beauty. Everyone gets what they need to accelerate the process they're performing.

    • @anythingbutASIC 2 years ago +1

      @@JaenEngineering Right, there was no such thing as a chip shortage, just a bunch of cheap employers not willing to pay one talented FPGA programmer.

  • @KangoV 2 years ago +17

    A Steam Deck with a few FPGAs, each loaded with the system of an older game console. Now that would be something else.

    • @noergelstein 2 years ago +4

      Or save money and reduce complexity and just use a software emulator.

    • @Blownkingg 2 years ago +1

      @@noergelstein Software emulation is slower, because in most cases it is heavily reliant on the CPU.

    • @catchnkill 2 years ago +2

      Not a good idea at all. Just add another CPU for running emulators. A CPU implemented on an FPGA is ten to a hundred times slower than a real CPU. For the same motherboard space, just add a cheap ARM CPU there.

    • @Blownkingg 2 years ago

      @@catchnkill I'm not talking about emulating a CPU (that would make no sense), I'm talking about emulating the many coprocessors and accelerators that were on older-generation consoles. A capable FPGA emulating those accelerators in hardware is much faster than a CPU emulating them in software.

  • @PaulGrayUK 2 years ago +22

    Maybe they will do fully open-source drivers, with any special sauce contained in and run from an FPGA on the GFX card. That would be a nice step, as it would open up their hardware better and equally enable any special sauce to be removed from the driver completely.

    • @anythingbutASIC 2 years ago +1

      Right, I was hoping this would have been Intel's secret weapon when they came off of the 14nm chips, since they make their FPGAs in house. But I see AMD locking Xilinx down: they have only been merged for a few days and the AMD logo is smeared all over the Xilinx website. Better get those "free" IDE downloads before AMD goes all Nvidia.

  • @nekomakhea9440 2 years ago +15

    Taking out most of the on-chip acceleration modules and dynamically loading soft accelerator cores into the FPGA only when they are needed would likely save a lot of die space. Given how expensive the latest nodes are, that would be a lot of money saved. A market disruption can succeed in a big way if it's almost as good but vastly cheaper.

    • @nekomakhea9440 2 years ago +2

      @@Zuluknob Right, but if there are 20 ASIC accelerators that are used only occasionally, then loading them onto a shared FPGA only when they are used would save space overall through hardware reuse.
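
A Python sketch of the time-sharing idea in this thread; the `load_bitstream` hook and the accelerator names are hypothetical placeholders, not a real vendor API:

```python
class FpgaSlotManager:
    """Toy model: one reconfigurable region time-shared by many accelerators."""

    def __init__(self, load_bitstream):
        self.load_bitstream = load_bitstream  # hypothetical platform hook
        self.loaded = None

    def run(self, accel, payload):
        if self.loaded != accel:
            self.load_bitstream(accel)  # pay the slow reconfiguration cost on a miss
            self.loaded = accel
        return f"{accel}({payload})"

mgr = FpgaSlotManager(lambda name: print(f"reconfiguring region with {name}.bit"))
mgr.run("av1_decode", "frame0")  # triggers a load
mgr.run("av1_decode", "frame1")  # reuses the resident core
mgr.run("aes_gcm", "packet0")    # evicts the decoder, loads the cipher
```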

  • @pweddy1 2 years ago +5

    It's nice to hear somebody pronounce Xilinx correctly!
    Most people absolutely butcher it. And then repeat it dozens of times!

    • @defeqel6537 2 years ago

      I pronounce it as "Bob", that's usually correct.

  • @SleepyRulu 2 years ago +40

    Nice, finally AMD is doing something good, like investing in themselves.

    • @Slavolko 2 years ago +7

      They also repurchased some shares, so that's another literal sense of investing in themselves.

    • @silenciummortum2193 2 years ago +4

      @@Slavolko That's OK too!

    • @SleepyRulu 2 years ago +2

      @@Slavolko Is it legal to do that?

    • @Slavolko 2 years ago +10

      @@SleepyRulu Yeah, of course. Companies go public to attract investors, but the drawback is that the company then has to answer to investors' demands.
      Now that AMD is doing great financially, they can afford to purchase Xilinx and buy back some of their stock.

    • @SleepyRulu 2 years ago +1

      @@Slavolko That's a good thing then.

  • @stan110 2 years ago +13

    One thing that comes to mind where an FPGA can help in gaming is loading times: some blocks get programmed to quickly decompress game assets, like on the PS5. Or emulation: instead of wasting multiple cycles simulating parts of a bygone architecture, why not program an FPGA to speed it up?

    • @cjjuszczak 2 years ago +12

      I think if you're going to put an FPGA in a console, it's also great to use for backwards compatibility by emulating previous hardware.

    • @Allpurple_reign 2 years ago +2

      @@cjjuszczak Like on a PS5 Pro... that would be insane.

    • @davedimitrov 2 years ago

      No way they're rolling out FPGAs in their CPUs any time soon, so their use in a PlayStation or Xbox, where stability is key, is even further out.

    • @hjups 2 years ago

      I think the application you are describing may be better suited to a CGRA than an FPGA. Placing it in a console does provide a more closed environment for development (i.e. Sony would provide the FPGA configuration as a "dynamically selectable" piece of hardware). In exchange though, your console would likely go from $500 to $2,500, because FPGAs take up a lot of transistors (and thus area) compared to an ASIC core - hence why a CGRA would be better.
      Also, backwards compatibility wouldn't work the way one might think. As of right now, on 14nm FinFET, FPGAs can hit around 1 GHz if the implementation is deeply pipelined (has to be done by hand by a wise engineer with a gray beard) and the die is binned to the extreme (i.e. the upper 1% of all dies). So that should give you an idea of what exactly "backward compatibility" would mean (a PS6 being able to run PS2 stuff, but not PS3, PS4, PS5, etc.).

  • @HighYield 2 years ago +8

    Another really interesting and well done video. I also think the Xilinx acquisition has the potential to jump-start AMD even further; some crazy products will be possible!

    • @anythingbutASIC 2 years ago +2

      In the future, 3 mega-corporations battle it out for control over the planet's resources... Wait, I've heard this story before!!

  • @michelvanbriemen3459 2 years ago +28

    It's probably going straight to social credit system hardware, but I'm looking forward to emulating old PC and console hardware directly in an FPGA by the time a hybrid FPCPU is released for the PC desktop. Or even a PCI-E FPGA expansion unit.

    • @princekittipon6510 2 years ago +5

      More enslavement letsgooo

    • @defeqel6537 2 years ago

      Some SPE emulation would certainly be welcome ;)

    • @anythingbutASIC 2 years ago +2

      Right, this is going straight to the contractors who host servers for the CIA, NSA and FBI (AWS and Azure). No normal person is going to be able to slap down $6,000 for just a slab of silicon, not including the rest of the hardware.

    • @David.C.Velasquez 2 years ago

      Go watch more television.

    • @CharlieBoy360 2 years ago

      @@anythingbutASIC They're saving taxpayer money while further enslaving us. It's a win-win.... for them.

  • @mwork79 2 years ago +12

    This seems very cool and I hope we get awesome stuff from this. Although I'm curious about how safe an FPGA is. Some of the CPU vulnerabilities came from microcode being leaked. I would assume you need to edit something like microcode to "reprogram" an FPGA. This seems kinda unsafe to me.

    • @AntonioNoack 2 years ago +6

      Nah, security by obfuscation has never been accepted as true security.
      If it's totally open source, those leaked issues can be fixed as well. And potentially, they could be found faster.

  • @kazwalker764 2 years ago +8

    I think FPGAs will really shine in consumer devices if you look at devices like the Steam Deck. Having FPGAs in a Steam Deck would make a VR version possible: the device could use low-cost cameras sending images to FPGAs to do motion capture and pose estimation in real time. You could even go as far as to have the game pick the algorithm applied to the FPGAs, choosing one optimized to capture hand gestures, or one better suited to doing full-body pose estimation.

  • @VLS-Why 2 years ago +4

    This is exactly what I've been waiting for for years now! Hope I do well with their interviews next week.

    • @anythingbutASIC 2 years ago

      This is what Xilinx has been waiting for for years. They have been talking about a chip like this since 2018, when the 7nm process first came about.

  • @Jabid21 2 years ago +2

    Correct me if I'm getting the wrong idea here. I'm just imagining: is it possible to replace dedicated hardware blocks?
    For example, an FPGA could replace the video encoder/decoder or "Tensor Cores" on GPUs and run the specific AI algorithm or encode videos, and then get updated/reprogrammed in the future as newer algorithms and future video codecs get released.

  • @georhodiumgeo9827 2 years ago

    DUDE! Well-researched video. Usually when people talk about FPGAs and ASICs they get a bunch wrong. Thanks for putting in the work!

  • @itsTyrion 2 years ago +1

    Isn't the video encoder part of a graphics card (like for NVENC) an ASIC?
    A (now older) GTX 1070, for example, can output two 1080p HEVC (H.265) streams at ~200-300 FPS.

  • @sailorbob74133 2 years ago +3

    AMD has started understanding the importance of the software stack to their products. It seems like they're accelerating development of ROCm, and in general talking up their software stack more.

  • @carlosornelas90 2 years ago +1

    CPU: an efficient array of premanufactured pipes and valves
    FPGA: pipes and valves

  • @pvtnewb 2 years ago +1

    It would be awesome to see a one-silicon solution for multiple accelerated workloads like you said: maybe Vulkan/DX12 compute & RT mode for gaming, then switching to OpenCL acceleration for things like Premiere, and lastly a mode for tensor workloads for upscaling/denoising. Heck, the FPGA could probably be programmed for video decode/encode as well, combined with AMD's VCN silicon for, let's say, AV1 content.

  • @Acky0078 2 years ago +5

    Well, we had similar expectations back when Intel acquired Altera (the 2nd biggest FPGA manufacturer) and Intel assured us there would be FPGAs inside every Intel CPU... Look around you. Can you spot a single CPU with a configurable FPGA onboard?

    • @anythingbutASIC 2 years ago

      Right, I expected this to happen when they pushed past their 14nm chips, but consumers got none of that. Technically some of the Altera V-series FPGAs got ARM processors on chip, but that's no different from the Xilinx Zynq.

  • @xymaryai8283 2 years ago +2

    Low-latency digital video transmission is something that has always needed very expensive ASICs. If Xilinx + AMD can reduce power consumption and increase accessibility, I know for sure there is a large community that will pounce on it.

  • @alidan 2 years ago

    1) On ray tracing: while I don't know the hardware requirements to make it work in-game, some people I had contact with think quarter-precision floats, or maybe even eighth-precision, would work, which if implemented would be a 2-4x boost over current matrix shaders. Ray tracing is a brute-force operation that doesn't accelerate beyond tricks to constrain it or to get more out of less; see the denoisers (in the case of video rendering it's better to render without a denoiser and then pass the result through a denoiser made for film grain, as that gives a smoother image-to-image result rather than a fairly jumpy one).
    2) If AMD puts an FPGA on their product, it would greatly accelerate certain workloads on the fly, that's correct, but it's nothing like adding 16 cores / 32 threads. While applications that take advantage of everything are few, there have always been applications that will use as much as you give them, like 3D rendering; and when everything isn't used, the extra cores give headroom that keeps the computer useful even when it's getting maxed out. My reason for getting an 8-core the moment it was a viable option was exactly this. With how fast graphics move, I 100% doubt it would be viable to have the CPU take on visual rendering workloads; however, it may be more viable to do physics workloads there than on the GPU.

  • @kayakMike1000 2 years ago +3

    I have used both Xilinx and Altera. I prefer Xilinx, but they're pretty close in capabilities.

  • @axe863 1 year ago

    An HEDT line not only with high core counts, PCIe lanes and memory channels, but also a tailored FPGA.

  • @Roxor128 2 years ago

    At one point there was a project to make an open-source FPGA-based graphics card. Unfortunately, the project stalled and it never saw the light of day.
    Maybe it would be a good idea to revive the idea and make a graphics card that's basically a big FPGA, some display interfaces and a controller chip that reconfigures the FPGA when asked by the driver, to suit whatever kind of job was requested of it. On boot, it automatically loads a simple layout implementing plain old VGA like it's 1993; then once the OS loads and asks for a more modern 2D environment, it loads a new layout suitable for driving a modern desktop. If you start a video playback program that wants hardware-accelerated decoding, it loads a suitable decoding plan into an unused area of the FPGA (some FPGAs do support partial reconfiguration). Start a game that uses regular old rasterisation, and the FPGA will be loaded up with a bunch of copies of a rasterisation engine. Start a game that uses ray tracing as well, and some of those rasterisation engine instances will be replaced by basic ray-tracing ones. Start your customised build of POV-Ray, and the whole chip gets filled with advanced ray-tracing engines. Hell, a game with a fancy audio engine could have some DSP blocks thrown into the mix for that.

  • @doit1374 2 years ago +1

    Have you considered pausing a while between phrases in your voiceover?

  • @Lync512 2 years ago +10

    This is one deal I'm legitimately excited for.

    • @lostdeeply397 2 years ago

      Excited for big tech mergers, wow, so much support for the little guy 😒😒😒

    • @miyagiryota9238 2 years ago +1

      Me too! Go AMD!

    • @Lync512 2 years ago +1

      @@miyagiryota9238 Hoping AMD doesn't waste Xilinx like Intel wasted Altera.

    • @miyagiryota9238 2 years ago +1

      @@Lync512 Lisa Su is not the typical CEO; she is a visionary and executes what she sets out to do!

    • @JohnnyWednesday 2 years ago

      I want FPGAs as standard in all CPUs - imagine the super-low-latency emulation that would be possible! Hybrid FPGA/emulator designs - it would be amazing :)

  • @dreamora1000 2 years ago

    I think that outside cloud workloads, the needs that Sony in particular has with its custom behaviour on the PS5 (Pro) - which include sound ray tracing, not just graphics - could make an interesting case for said APU to gain such an FPGA. It would be a much more focused scenario, with a partner that has strong experience in such fields, as the use of the Cell in the PS3 showed.

  • @sinephase 2 years ago

    What do you think about using an FPGA as a hardware-level security system for OSes? Sounds kinda ideal, considering they've been using ARM co-processors for that purpose for years in other devices.

  • @handlemonium 2 years ago

    Would you mind touching on Lightmatter's photonic processor and the resurgence of high-performance analog computing?
    Maybe both in relation to all the other categories of hybrid computing hardware talked about in this video?

  • @hlavaatch 2 years ago +2

    I hope AMD will do something about the FPGA toolchains for the older Xilinx products. Currently the only way to run them is in a VM :( If they would at least document the bitstream formats, an open-source replacement could be made.

  • @seylaw 2 years ago +5

    We could even get updateable CPUs if there is an FPGA on there. I am thinking of a better branch-prediction algorithm, or algorithms for cryptography.

    • @hjups 2 years ago +2

      That would likely never happen. FPGAs are too slow to be useful within the CPU critical path. The best you could do is what Intel tried (it performed terribly though), where your CPU has a custom instruction which invokes the FPGA datapath. But an FPGA would need a different clock domain from the CPU, which would be a mess (or the CPU would have to make the clock dive to a low frequency via DFS to run the FPGA part).

    • @anythingbutASIC 2 years ago

      Or just a modular computer where, if you want more processing power, you just add another card, like the Raspberry Pi compute modules.

    • @hjups 2 years ago

      @@anythingbutASIC With CXL, that could be a viable architecture. You have a primary system-control CPU, which is responsible for doing management and book-keeping (where you would run your main OS), and then you could add in additional compute modules via CXL (it would likely still use PCIe connectors). So you could get an 8-core 5 GHz x86 compute module, for example, and have the OS schedule tasks on those cores. If you want 16 cores instead, just put in two of those compute modules.
      That could start leading to problems with enclosure volume though, so maybe it would make more sense to use a more standard SODIMM-type connector?

  • @CitizenTechTalk 2 years ago

    Simply fantastic video! Thank you! Unfortunately we have an inverse issue here: the gaming industry is crumbling as we speak, and developers are no longer pushing hardware to perform at its peak, aside from some very poorly programmed outliers like Cyberpunk 2077.
    Game developers are aiming at low-end system specs (for the most part) to capture a larger audience. Along with a massive skill shortage in the gaming industry on top of this, it is leading to the hardware industry far exceeding the requirements of gaming, sadly.
    The big question now is: is the IT industry completely disconnected from the gaming and end-user market, and is big business the only market left for profits to be made in?
    The end-user market seems to be showing this, especially in hardware availability in 2021-2022, and especially in the GPU market.
    Some food for thought, possibly?

  • @MickeyMishra 2 years ago

    Well, it would be nice to just get the Windows start menu to actually show up the second you click it. That would be amazing.

  • @user-su7vy9zb8l 2 years ago

    Again, quality content. Keep it up!

  • @fabriglas 2 years ago

    Will it be compatible with previous chipset sockets?

  • @martinstanek9711 2 years ago

    This is the video I was looking for ever since I heard about the merger ^^ Cheerio!

  • @teknastyk 2 years ago

    It's that time of the month again... time to absorb tech news in a buttery-smooth voice like never before.
    Good work mate, appreciate the insights.

  • @imgeek4282 2 years ago

    The quality of your production is outstanding!!

  • @MrSenChoi 1 year ago

    What's up with the small box in the bottom left?

  • @Gooberpatrol66 2 years ago +1

    Security is also a big benefit of FPGAs. It's the only type of device that you can potentially trust from the hardware all the way up.

  • @theworddoner 2 years ago +4

    This is great and I look forward to it. My question is: what stopped Intel from doing this? They had a head start well before AMD in this department, plus their software development budget is leagues above AMD's. Why didn't they do this?
    Is it because of chiplets? That shouldn't be the case. As you said, Intel has built ASICs into their chips to accelerate Adobe software. Apple is doing similar things with their ARM accelerators. Why didn't Intel go the FPGA route instead of ASICs?

    • @imgeek4282 2 years ago

      Good question

    • @hjups 2 years ago +6

      Intel did this, just like the video briefly discussed. Unfortunately it was a huge failure except for very specific applications. A big part of why the Intel version failed was the interconnect latency (if I recall correctly, communication between two CPU sockets was something like 100x faster than talking to the FPGA on the same package).
      Intel also tried it before 3D stacking technology really took off, which is what they are using now and what AMD is using with their 3D cache.
      In terms of why not FPGAs instead of ASICs: FPGAs are usually at least 2x slower (in clock speed), and between 20x and 100x larger in area, for the same function that an ASIC can do.

    • @metaforest 2 years ago +5

      Intel didn't have as advanced a development platform for FPGAs as Xilinx did. To put it bluntly, Intel fuxed it up. They underestimated the value of high-quality libraries and dev tools for FPGA image development.

    • @hjups 2 years ago +5

      @@metaforest I disagree. I think Altera messed it up. The ALM architecture was poorly conceived, and the performance of the LUT6 far outpaced it. Then when Intel took over, they completely mismanaged their FPGA division, treating it like some fringe thing (almost as badly as they mismanaged XPoint).
      At the same time though, I think that once you get past the nickel-and-diming that Intel does, their tools are better than what Xilinx has to offer (they run faster and are more stable). Then again, the Xilinx tools run on top of Java, so you can't expect much there.

  • @NessieNep 2 years ago +2

    I can see the MiSTer FPGA project benefitting from this greatly~! :3

  • @adityasixviandyj7334 2 years ago +1

    I think FPGAs integrated into every CPU will just be a matter of time, because their capability is enormous. I'm already thinking that even if we don't use them in the Adobe CC family or dedicated analytics software, I bet those FPGAs will be used at least for face unlock and virtual backgrounds in web-conferencing software automatically.

  • @n_kwadwo 2 years ago +2

    The quality of your content is off the charts

  • @AxiomofDiscord 2 years ago

    We might see some hyper-accurate, compatible, and efficient emulators in the future if nothing else. I would love some FPGA acceleration for emulators on my machine, even if done as hybrid emulation. If the x86 cores could handle the original Xbox CPU and all the custom bits could be accelerated by the FPGA chiplet(s), we could see greatness. We could realistically start preserving the past, even the bits that have so far been difficult. This is what makes me most excited.

  • @AntonioNoack 2 years ago

    @7:20 There is this technology for photos in phones... and I always wondered: are there people using it?

  • @jjdizz1l 2 years ago +1

    It's the vision. I like what I am hearing and soon we'll benefit by having a small datacenter in our homes.

  • @mitchjames9350 2 years ago +1

    I'm interested in AMD making an ARM-based SoC with Xilinx 5G to compete with Qualcomm in the mobile market.

  • @tobystewart4403 2 years ago

    Great presentation.
    With cloud AI algorithms, robots can be reduced materially to just motors, power supplies, sensors and a wifi link. This makes robots more robust and much cheaper.
    Also brings Skynet one step closer.

  • @Spyke_misc 2 years ago +1

    I wonder if we could see an FPGA in an APU, for, say, FSR2.
    Rembrandt seems insane, and with FSR it could play anything.
    I digress, but it's exciting.

  • @LazyBunnyKiera 2 years ago

    I think this is good for CPUs that need new instructions/instruction sets that don't already exist on those CPUs. A company designing software could create the new instructions and program them onto the FPGA; while the CPU handles other stuff, the FPGA would handle those specific instructions.

  • @godofbiscuitssf 2 years ago +1

    FPGAs still aren't as fast as fully-designed circuitry. Keep that in mind.

  • @ted_van_loon 2 years ago

    I was just thinking that CPUs should have just a few fast cores and otherwise be all FPGA. The CPU cores handle basic tasks and program and control the FPGA, possibly with some AI or counter hardware to know which hardware designs work best when there is no perfect fit. This would be especially helpful for things like simulation, rendering, AI, etc. For example, look at a GPU: often you use less than half of it, since it has so many different specialized hardware parts. With something like this you can optimize a game or piece of software for full ray tracing, for example, with actual gravity, and all those things can then work efficiently. Then when converting video you can switch it entirely to an extreme hardware video-compression configuration, or something custom in between.
    One of the greatest things would be a tuning utility which lets you turn off the auto mode and set block-modules yourself. Say you have a certain amount of physical space on your FPGA; you can then select different modules to place on it, trading quality against size - for example a size-2 video compression section or a size-8 one, where the size-8 one is four times as big but faster, while the size-2 one leaves more free space for things like AI, storage, etc.

  • @tusharjamwal 2 years ago

    I just realised that dedicated AES encryption chips are basically ASICs

  • @toastedoats5074 2 years ago

    Happy birthday!!

  • @abcqer555 2 years ago

    I would love for you to do a deep dive on Mythic AI. They are using memory cells to run machine-learning models in analog with 5 watts of power. I would love your take and analysis!

  • @replicant8532 2 years ago

    Maybe this way I will be able to make my freaking RT cores do something in the future.

  • @FictitiousMe 2 years ago

    This guy's voice is so deep, my subwoofer blew its fuse twice in 20 minutes.

  • @desmondchibanda6280 2 years ago

    Really cool badass voice bro..

  • @gamagama69 2 years ago

    There's also the fact that game cores could be written to take advantage of the FPGA to basically eliminate emulation, if they are powerful enough. We've been limited to only weak systems on stuff like MiSTer, but with powerful FPGAs in Ryzen it could probably do newer stuff. The only issue is that I'm not sure all the customization that emulation allows would be possible.

  • @LiveEnjoyment 2 years ago +1

    I just hope that the SoCs created by Xilinx won't disappear, as I use them in my electronics for a multitude of tasks.

  • @sgtgrash 2 years ago +1

    Forgive me if this is a stupid question, but can FPGAs be configured on the fly? If so, then a fully configurable FPGA chiplet on a CPU die could be a phenomenally powerful addition.

  • @cracklingice 2 years ago +3

    16 cores do not belong on the desktop if it means CPUs over $400 stop having the quad-channel memory and PCIe lanes they used to. When you're paying what AMD wants for the 16-core (and even the 12-core), you should expect to get more computer around those cores too.

  • @Steve25g 2 years ago +1

    @Coreteks At this point, the whole internet runs on FPGAs. Every serious switch has at least one FPGA.
    Most fast networking, like 40 Gbit and up, is all FPGAs. Networking at this point runs at 800 Gbit over fibre per channel; one of the (possible) network cards you showed had two of those connectors.
    I was maintaining some old (15-year-old) switches where fans were failing... I looked up the chips, and they already had freaking FPGAs.
    So, with adding Xilinx to their product line, they can now also start delivering backbone and business switches.
    And yes, FPGAs can be good mining cards, which could take away the brunt of the GPU load from miners.

  • @fitsuit1555 2 years ago

    Great voice, presentation and explanation 🌟

  • @thesunexpress 2 years ago

    That "stealth" customer for AMD is SuperMicro & a file systems dev outfit.

  • @khaledannajar 2 years ago

    Could you talk about Apple's UltraFusion interconnect vs AMD's 3D stacking?

  • @jeffrydemeyer5433 2 years ago +3

    FPGAs are slow; adding them to consumer-grade products is pointless.
    Having a piece of truly optimized silicon do the same task will be much faster, smaller and more power-efficient.
    So anything other than a tiny bit of programmable de/encryption is not going to appear on CPUs or GPUs.

  • @nishantaggarwal2825 2 years ago

    Maybe it can tweak the PC to run at optimum levels according to your individual usage habits. Maybe it can be used to more effectively emulate consoles. It will definitely lead to machine learning, and it seems like the first step towards thinking, learning, evolving robots.

  • @breakhart 1 year ago

    Since AMD is going after an adaptive AI architecture, the Ryzen 8000 series could most likely take advantage of an FPGA later, and since they are also powering up the iGPU, RDNA 3 might get that too.

  • @AntonioNoack 2 years ago +2

    Currently FPGAs are pretty expensive, so they'll need a bit of time to come to low-end CPUs.

    • @gamtax 2 years ago

      Agreed. FPGAs are still in a niche market and thus are not produced at large scale. Once they move out of the niche market, maybe they will be produced in large volume and the price will come down.

  • @robertsamson4610 2 years ago +2

    Your audio sounds much better.

  • @TheMNTK 2 years ago +2

    I can just imagine all the creative ways hackers will use these to infiltrate our computers.

  • @steffennilsen2132 2 years ago

    I'm just hoping they can make video encoding/decoding run on the FPGA so we aren't stuck with old encoding standards after 2-3 years.

  • @ahmtankrl8526 2 years ago

    Adding custom accelerators to any processing unit is actually risky, and not just production-cost-wise: OS builds may even need to be customized. But it could also yield an SoC that can be configured for a specific task. They really need to be careful.

  • @THeMin1000 2 years ago

    I feel an FPGA could be used to facilitate ISA conversion, like from x86 to ARM to RISC-V to whatever. The CPU runs whatever the original architecture is, but code for other architectures could first run through the FPGA to implement the extra functions that don't have a direct 1-to-1 instruction.
    This would make software ISA-agnostic. The CPU could just be a highly efficient ARM processor, with the FPGA taking the weight when it is used to run code for other architectures.
    This may be what AMD is planning: they can move to ARM while staying in the x86 market.

  • @filipealves6602 2 years ago

    I find it strange that no one is mentioning the possibility of adding a Xilinx FPGA die to an MCM-like graphics card, precisely to counter the disadvantages of GPUs with AI and ray tracing. Not putting it on either Epyc or Ryzen, due to latency, but on the graphics card's PCB. Then also take into consideration that AMD has shown a graphics card with a 1 TB SSD on its PCB, and it's easy to imagine server farms with a whole computer the size of a graphics card. Or an AMD CPU+GPU+VRAM+SSD+FPGA all-in-one. All of a sudden AMD is one step ahead of the console makers themselves. The possibilities are huge here, especially for small form factor products like laptops and consoles.

  • @r1nlx0 2 years ago

    I bet the Xilinx acquisition is more about acquiring talent for rapid hardware/software prototyping, with products still shipping dedicated ASIC accelerators just like the M1, rather than shipping FPGAs to consumers.

  • @kopasz777 2 years ago +8

    Intel already acquired Altera (the other giant in FPGAs) in 2016, yet we did not see anything in the consumer market. Why would it be different this time?

    • @miyagiryota9238 2 years ago +2

      Because Xilinx is the leader in FPGAs, not Altera.

    • @Hempage 2 years ago +3

      I think the biggest reason this might happen is that AMD has been moving towards heterogeneous compute more than Intel for a while now, starting with the APU concept, as well as having the ability to combine different process nodes using chiplet technology. That said, I share your skepticism about AMD integrating FPGAs into their products. I'd like to see it happen, but it's certainly a departure from the norm.

    • @hjups 2 years ago

      @@miyagiryota9238 Altera hit 1 GHz before Xilinx did. The main reason that Xilinx is the leader right now, though, is that Intel fumbled with mismanagement of the FPGA division.
      One potential difference is AMD's interconnect and 3D stacking technology, which Intel did not have in 2016.

    • @Yusufyusuf-lh3dw 2 years ago +1

      This is just another clickbait video created by an amateur YouTuber. Everything said in this is nonsense. Neither AMD nor Intel will add FPGAs to PC and laptop chips in the foreseeable future. They're only used in HPC applications. Intel Sapphire Rapids already has a CPU variant with a built-in FPGA accelerator, so that's nothing new. It's a very niche market with very low volumes.

    • @anythingbutASIC 2 years ago

      The FPGA sector only comprises 3% of Intel's whole company. It wasn't a big focus of theirs at the time. Maybe it might be now.

  • @Zengane 2 years ago

    If the FPGA ecosystem develops and integrates with the Adobe suite, I'm switching from an Nvidia GPU + Ryzen to full team red; the prospects sound amazing.

  • @jonidimo 2 years ago +1

    I'm still waiting for HBM2e memory integrated into the CPU chip...

  • @uniqueprogressive9908 2 years ago

    AI on the bread rolls: SNACK, FOOD, GOODS, BREAD, PACKAGED GOODS, FOOD, SNACK, BREAD ROLL, PACKAGED FOOD

  • @rondlh20 2 years ago

    This really brings great opportunities, but lots of custom development will be needed

  • @GUN2kify 2 years ago

    An interesting video, but please think about pop protection for your microphone; most of the time it's hard to understand your th*, po*, *st sounds and so on... That would improve your videos' sound quality a lot.

  • @phillysupra 2 years ago

    Zen 5 with 3d cache is about to be 🔥

  • @bujin5455 2 years ago +1

    I remember reading an article back in the 90s about how FPGAs were the way of the future, and how we'd buy new versions of our processors on a disc. Here we go again. lol

    • @anythingbutASIC 2 years ago

      Seeing as we are squishing up against Moore's law, as soon as the 3nm FPGAs come out we will never need to buy another CPU again. That's the difference: just reprogram what you've got. The chips out now can process faster than any human will ever be able to take in the information and do something with it.

    • @bujin5455 2 years ago

      @@anythingbutASIC Moore's law is not nearly as dead as people think. We're nowhere close to the theoretical information-processing limits of the universe. People get used to one paradigm, they see the failings of that particular methodology, and they think that's it, that's the end, without realizing that the pressure to innovate increases as one approaches the limits of the conventional methods. Necessity is the mother of invention, after all. The corollary is that there is no need to fix a thing that isn't broken. If you take a more abstract view of Moore's law and just consider it from an information-processing perspective (instead of simply transistors on a chip), humanity has been tracking it for centuries, and there is no reason to think we won't continue to (we currently have a lot of promising directions in the lab, and the conventional methods haven't even run out yet).
      Also, this idea that "the chips out now can process faster than any human will ever be able to get the information and do something with it" is just false. We are presently able to utilize the limits of the processing power we have; it's why we keep working to build faster supercomputers. If there were no humans who could use the power, there would be no need to keep spending large fortunes on building faster systems. Perhaps you're thinking about how most computers are faster than what most people need to browse the internet and watch YouTube. That's true. But that's hardly the limit of human endeavor, and it doesn't mean there won't be future mainstream applications that need oceans more compute than most mainstream tasks use now.

  • @YouCanHasAccount 2 years ago

    The main problem I see with FPGA acceleration is that the toolchains are clunky and proprietary. There hasn't been much progress in terms of cross-vendor standards or open toolchains for the past 20 years.

  • @michahojwa8132 2 years ago

    A friend says FPGAs are used by Intel for load balancing between computers in a rack, or between storage devices.

  • @davocc2405 2 years ago +1

    I think there's too much obsession with "the cloud" in this regard - I can see this being of more benefit to evolving mobile solutions and edge computing (which is still very much in its infancy). I suspect the initial market push for quick monetisation will be towards the crypto-mining market, but we'll probably see this appear more in adaptive AI solutions (e.g. a remote mining robot that makes decisions and forms strategies).

  • @paulmaydaynight9925 2 years ago

    So... a full complement of FPGA x264/x265 hardware-block prototypes is required asap ^_~

  • @anthonyblacker8471 2 years ago

    This is really good news. I think if AMD can implement this technology within a reasonable amount of time, they will scale to the very top and stay there for quite some time. Dr. Su is really on top of things, and that's great. I've always been a fan of AMD and I'm glad to see they're really upping the game here.

  • @JohnnyWednesday 2 years ago +1

    FPGAs as standard on PCs is where we need to be - I'm only going to use it to run 'emulators' - but still - I NEEEEED it. Our PCs should be able to do what the MiSTer can do.

    • @metaforest 2 years ago

      That is a really low bar for on-board FPGA hardware.

  • @jasongooden917 2 years ago

    Hope the drivers are good

  • @joshieecs 2 years ago +1

    15:20 Xilinx offers ARM CPU + FPGA SoCs in their Zynq line.

  • @DougOfTheAntarctic 2 years ago

    The answer to the question you may have is that FPGA stands for Field Programmable Gate Array.