Is this an Intel "Graphics Card" with 3 CPUs?

  • Published: 15 Jul 2024
  • Support me on Patreon:
    / der8auer
    ---------------------------------------------------------
    Save 10% on your iFixit purchase: DER8AUER10
    eustore.ifixit.com/der8auer
    Pro Tech Toolkit: bit.ly/2JOFD8f
    ---------------------------------------------------------
    Find my products at Caseking:
    Delid Die Mate 2: bit.ly/2Rhv4y7
    Delid Die Mate X: bit.ly/2EYLwwG
    Skylake-X Direct Die Frame: bit.ly/2GW6yyC
    9th Gen OC Frame: bit.ly/2UVSubi
    Debug-LED: bit.ly/2QVUEt0
    Pretested CPUs: bit.ly/2Aqhf6y
    ---------------------------------------------------------
    My Equipment:
    USB Microscope*: amzn.to/2Vi4dky
    My Camera*: amzn.to/2BN4h2O
    (*Affiliate Links: If you buy something, I will get a part of the profit)
    ---------------------------------------------------------
    Music / Credits:
    Outro:
    Dylan Sitts feat. HDBeenDope - For The Record (Dylan Sitts Remix)
    ---------------------------------------------------------
    Paid content in this video:
    - /
    Samples:
    - /
    ---------------------------------------------------------
    Timestamps
    0:00 Intro
    1:24 VCA2 overview
    3:35 The Xeon CPUs
    4:41 A look under the cover
    6:16 A closer look at the VCA2
    7:09 Chipsets & additional Cache
    7:39 PCB & PLX-Chips
    9:42 Test-Setup & Drivers
    11:00 Problems with Windows
    12:42 Linux might be the answer
    13:33 Outro
  • Science

Comments • 538

  • @izzieb
    @izzieb 2 years ago +226

    Using the short pins to figure out the number of lanes is useful information. You learn something new every day.

    • @abdulhkeem.alhadhrami
      @abdulhkeem.alhadhrami 2 years ago +10

      Want to hear something funny?
      When I bought my GTX 980, the last pin was at full length. I touched it and it wasn't stuck to the PCB, and I thought, crap, I damaged it right out of the box. Now I know it was designed like that, lol.

    • @ThunderingRoar
      @ThunderingRoar 2 years ago +2

      You can also just check the PCB traces going into the PCIe slot.

    • @futurepastnow
      @futurepastnow 2 years ago +13

      I don't think that is correct. That sense pin is there to detect when the card is fully inserted (since it would be the last pin to connect, telling the device to power on), making PCIe hot swap possible. You obviously don't hot swap your graphics card, but a server in a datacenter might need to. It makes sense for there to be three of them here, since the card has three devices in one to power on.
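
      Either way, on a live system you can read the negotiated lane count directly instead of inspecting pins or traces. A minimal sketch, assuming a Linux host and the standard PCI sysfs attributes; the device address is a placeholder to replace with the card's address from lspci:

        from pathlib import Path

        # Placeholder PCI address - substitute the card's real address from `lspci`.
        dev = Path("/sys/bus/pci/devices/0000:03:00.0")

        # Standard sysfs attributes: negotiated lane count vs. what the device supports.
        cur = (dev / "current_link_width").read_text().strip()
        maxw = (dev / "max_link_width").read_text().strip()
        speed = (dev / "current_link_speed").read_text().strip()
        print(f"link: x{cur} of x{maxw} at {speed}")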

  • @IlfStoyanov
    @IlfStoyanov 2 years ago +142

    This is an Intel EOL product that had a very small niche application in video rendering/streaming. The idea was that it is cheaper to do the transcoding on one of those nodes instead of using multiple GPUs (CUDA/OpenCL) from other manufacturers. This way you would be less dependent on PCI slots/lanes and on eventually buying InfiniBand adapters. A few VCA2s with an IB adapter on a rack of Intel-based servers and you have a real-time streaming solution or a real-time rendering solution. The rendering part is actually applicable to websites that sell products; a good example would be a car company, let's say BMW. You go to their website and configure the car (interior, exterior), it renders in real time, and you can basically do a virtual tour around and inside the car. The VCAs were meant to do all that. The idea was that the price/performance ratio was just good enough, which it really wasn't if you know how to build good HPC farms and know your way around *nix platforms. The only plus of the Intel solution was the ECC memory, something that should be mandatory in every device with more than 128MB (not a typo, megabytes) of memory.
    Sidenote: BMW actually relied on a 3rd-party company for such a project, which will remain nameless, but they used CUDA (during the dev phase, actually a bunch of hacked NVIDIA gaming cards).

    • @davidstephen7070
      @davidstephen7070 2 years ago +2

      Exactly, something like this can be solved in software. I have an application I created myself that can scale to share rendering across multiple CPUs.

    • @antoniusriskiyuliando9414
      @antoniusriskiyuliando9414 1 year ago

      I think it's hardware for transcoding video. Long ago I used VirtualDub as a frame server and TMPGEnc as the transcoding software, and as a result the video bitrate was as stable as I wanted it to be. But it would take about 6 hours on an Intel Katmai; maybe it would take less time using this device.

  • @Baldmaxx
    @Baldmaxx 2 years ago +158

    The orange cat as a passive cohost is awesome. So is your thorough understanding of obscure technology. Very interesting.
    Subscribed!

    • @der8auer-en
      @der8auer-en  2 years ago +43

      Thank you! Will pet Shiek for you :D

    • @LordOfNihil
      @LordOfNihil 2 years ago +3

      I have a very large cat that spends much of her time sleeping either next to, behind, or on top of my keyboard.

    • @poiuytrewq8ff
      @poiuytrewq8ff 2 years ago +5

      @@der8auer-en I emailed you, mate; found the full software guide with the architecture overview and all the images. The card can run CentOS, Ubuntu, Windows Server 2016 and Windows 10 on the card itself, with persistent images (M.2 slot) and volatile images; the host needs to run CentOS or Ubuntu. Lol, the card also has Xen and KVM hypervisor support.

    • @mladendenni7062
      @mladendenni7062 2 years ago +2

      lol

    • @GilliamVespa
      @GilliamVespa 2 years ago +1

      When I first read this comment too quickly, I thought "orange cat as a passive cohost" meant some sort of orange-cat passive cooling. This was confusing until I saw the orange cat being a passive heatsink. :P

  • @popcorny007
    @popcorny007 2 years ago +132

    Looks like it requires a Xeon E5 server + a custom CentOS/Windows image.
    Definitely interested in seeing "vcactl" running on Linux, with Windows installed on the onboard M.2.

    • @madb132
      @madb132 2 years ago +1

      Oops, I should have read the comments first 😎 Spot on. Though I'm not grey yet, but getting there! 😀

    • @adriancoanda9227
      @adriancoanda9227 2 years ago

      It also works with Windows, but the M.2 slot is for cache, so the frames are stored on that card to avoid long round trips, and you can add a BIOS feature so that you have WiFi display output at boot time.

  • @monchiabbad
    @monchiabbad 2 years ago +202

    On the diagram you can see the system needs to receive either a Windows or a CentOS boot image. CentOS is a Linux distro. It most likely needs a tiny M.2 module with the required boot images to load. It would appear you only need the network if you want persistent storage outside of the device when using NFS.
    All of this should be visible on the one page you showed.

    • @wsippel
      @wsippel 2 years ago +50

      The host has to run CentOS with a patched kernel to enable networking over PCIe. Then you use vcactl to upload Linux or Windows images to RAM disks on the accelerator. Looking at the documentation (which is available on Intel's website, as is the software), Windows hosts aren't supported. I don't think the M.2 slot is supposed to be used, it's hidden under the shroud and any connected drive would interfere with other cards. It's probably for developer and debug purposes only.
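
      As a first sanity check before touching vcactl, one could confirm the host kernel even enumerates the card's endpoints. A minimal sketch, assuming a Linux host and the standard PCI sysfs layout; filtering on Intel's vendor ID is the only assumption made about the card itself:

        from pathlib import Path

        # List every Intel PCI function the host can see; the card's PCIe switches
        # and the three nodes' endpoints should all appear here if enumeration worked.
        for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
            vendor = (dev / "vendor").read_text().strip()
            if vendor == "0x8086":  # Intel's PCI vendor ID
                device = (dev / "device").read_text().strip()
                pci_class = (dev / "class").read_text().strip()
                print(f"{dev.name}  device={device}  class={pci_class}")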

    • @Lil_Puppy
      @Lil_Puppy 2 years ago +6

      @@wsippel The slot on the card can fit a small SSD; there's even a screw hole for it on the inner shroud, and the outer shroud/case would hide it. For the life of me I couldn't figure out what search terms would get Google to show me the actual server PCIe SSD card, rather than a full-slot adapter for NVMe M.2 drives.

    • @Caddy666
      @Caddy666 2 years ago +2

      @@wsippel I reckon a short NVMe would fit, 2230 length.

    • @wsippel
      @wsippel 2 years ago +4

      @@Lil_Puppy Maybe you're right, but the quickstart guide only mentions uploading the guest OS images to ramdisks on the card from the host via vcactl, the SSD isn't mentioned. And even if you could somehow boot the guests on a Windows host (via M.2), I'm not sure how you'd be able to interface with them in any way. All the control and interface software is Linux only as far as I can tell.

    • @qdaniele97
      @qdaniele97 2 years ago +2

      I think it's more likely that boot images are supposed to be loaded with something like PXE over virtual network cards on PCIe.

  • @YTHandlesWereAMistake
    @YTHandlesWereAMistake 2 years ago +96

    Would love to see a deeper dive into how this worked / a running example. Props to your Discord for finding that documentation. The driver is maybe for an older/newer version, or for running Windows on the device itself, where it might be used.
    As for the network, some VM-exposed interface or bridge might work.

  • @TWEEMASTER2000
    @TWEEMASTER2000 2 years ago +1

    Keep up the amazing work this was super educational!

  • @wika1117
    @wika1117 2 years ago

    Good informative video, and the cat was the cherry on top :) Subbed

  • @AJ_UK_LIVE
    @AJ_UK_LIVE 2 years ago +2

    It's not very often I find out about hardware that I have never heard of. Very interesting device! Thank you for buying it and poking around Roman! I hope you manage to get it up and running somehow.

  • @hlucn8tn
    @hlucn8tn 2 years ago

    Okay... really kewl find, interesting to see, and thank you for the breakdown.

  • @warmfreeze
    @warmfreeze 2 years ago +12

    Yes, you install Windows on the card itself; the drivers get installed TO the card itself, as the card is its own computer. The vcactl tool is used in a Linux/Unix environment on the host machine to access the Windows environment. Also, you can daisy-chain a half dozen of them together in the same host computer with Windows installed on only one card.

    • @adriancoanda9227
      @adriancoanda9227 2 years ago

      No, by default that slot is just for cache, where you store the source and destination files until they are complete.

  • @AdamsWorlds
    @AdamsWorlds 2 years ago +14

    Love this quirky hardware stuff. Always makes me wonder "what if?"

  • @N0N0111
    @N0N0111 2 years ago +12

    I like this enterprise TOP SECRET hardware; I bet there will be a second video with some benchmarks :D
    And maybe even a 4K60 live stream on YT for test purposes.

  • @benjamintrathen6119
    @benjamintrathen6119 2 years ago +14

    It is more like the Xeon version of the 6770HQ, as each chip has a 128MB on-package eDRAM level 4 cache like the i7-6770HQ in the original Skull Canyon NUC.

  • @mercedes300gd
    @mercedes300gd 2 years ago

    Ooo this is right up my alley, love this stuff

  • @richardirvine2220
    @richardirvine2220 2 years ago

    Fascinating hardware. Thank you Derbauer.

  • @banu6301
    @banu6301 2 years ago +10

    13:07 - well, Nvidia makes drivers for Linux, but that doesn't automatically mean they want you to use them with Linux

  • @ostettivictor
    @ostettivictor 2 years ago +2

    That cache memory chip is what Intel calls eDRAM. It was present in the Broadwell desktop line, such as the i7-5775C, which I used until this year to edit videos precisely because of the Iris Pro and this chip; believe me, it exceeded the timeline performance of a Ryzen 7 in DaVinci Resolve. The chip was dropped from Intel's desktop line but continued to be manufactured on processors made for Apple, until they migrated to the M1 SoC.

  • @Magovit
    @Magovit 2 years ago

    Amazing content! Keep going!

  • @Phynix72
    @Phynix72 2 years ago +6

    About the end part, Linux might be the answer...
    Yes, it's an enterprise component: it works with netboot, net-storage and net-install. Net-install is where those Windows drivers come into play, when you flash the OS image to the receiver's cache. When it's time to shut down, all the data is gone, so prior to that the target output data/files are transferred to another network location.

  • @N0N0111
    @N0N0111 2 years ago +16

    The codename "Arctic Sound-M" comes to mind, which Intel showed off this year.
    But that is a proper datacenter GPU, with zero further details, lol.
    Encoding 4K60 streams with crisp detail would be a dream!

  • @AhmadRady
    @AhmadRady 2 years ago +99

    I think you need a Xeon-based system; the document pointed out that it is connected to an E5 Xeon CPU.

    • @TheSmileyTek
      @TheSmileyTek 2 years ago +3

      I was thinking the same. It may need to be installed on a motherboard running a Xeon E5. I'm curious to see that thing doing some work of some sort. Hopefully he figures it out; no doubt he will, maybe with some help from some viewers.

    • @PARAGBD420
      @PARAGBD420 2 years ago +1

      Or every intel system is stupid

    • @heickelrrx
      @heickelrrx 2 years ago +1

      @@PARAGBD420 why stupid? I don't see anything stupid here

    • @phillysupra
      @phillysupra 2 years ago +3

      @@PARAGBD420 A little bit of a history lesson: AMD got their start by making Intel CPUs. Literally every single CPU manufacturer started by copying Intel x86. Having said all that, I personally do prefer AMD based on their idea of efficiency > clock speed. This may change based on rumors about AM5.

    • @Rayu25Demon
      @Rayu25Demon 2 years ago

      @@PARAGBD420 smartest answer

  • @danwhite3224
    @danwhite3224 2 years ago +2

    I have had one of these cards for a little over a year now. I will update if I can get it to work again in my PC, but I'm sure last time I tried it did show up correctly. I haven't done a lot with it for a while as I don't have a suitable cooling setup for it but I'll see if I can come up with something.

  • @LucienneGainsborough
    @LucienneGainsborough 2 years ago +32

    Pure speculation. That M.2 slot seems like it can hold a 2230 SSD (but it appears to be key E, which is mostly used for WiFi cards), and the card also has mounting holes on the inner shroud. Presumably you could use some pre-configured Windows Server image and access the card via Remote Desktop. As for the PLX chips, I suppose one of the Xeons is managing the storage and boot config, so it needs to sit behind another chip.

    • @TNW1337
      @TNW1337 2 years ago +1

      No, it is not just used for WiFi cards mostly. lol

  • @ZeroX252
    @ZeroX252 2 years ago +1

    I have an Intel VCA and did some work trying to get mine running. CentOS was the only officially supported environment, and the binaries were compiled against an older environment.
    There are two ways to handle booting the VCA units, both of which are configured using vcactl.
    One way is by using images stored on the HOST system, which are booted effectively using PXE over a virtual network connection between the VCA CPUs and the HOST. vcactl tells the BIOS of each system on the card what to do. The second way is to store the image on the dedicated NVMe storage on the card. I believe this is the purpose of that second PLX chip - to give all 3 CPUs direct access to the NVMe. The image you run is basically up to you. You can run Windows or Linux on it, and those OSes can be configured for headless operation. The intended use case was mass transcoding of high-resolution video, so in most cases a thin Linux OS, with a GUID for the hostname set at boot and a rudimentary FIFO FFmpeg being fed video to/from the host, was probably what most people used them for.
    I was going to set up Plex transcoding by symlinking the /dev/dri nodes of the VCA CPUs to the host and see how bad the performance hit was.
    In the end I hit a brick wall with the SuperMicro board in my server. I've been meaning to revisit it since I've got two new E5-2658 v4s and a new board with a later BIOS that might play nice with it.
    Oh, and the Windows drivers are for Windows guests on the VCA to access the host network interface after booting.
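
    For the transcoding use case described above, the per-node job would presumably boil down to a VAAPI-accelerated FFmpeg run against the node's own /dev/dri device. A minimal sketch, assuming an FFmpeg build with VAAPI support inside the node's Linux image; the file names and render-node path are placeholders, not anything taken from the VCA documentation:

      import subprocess

      # Placeholder paths - the render node belongs to the VCA node's own iGPU.
      RENDER_NODE = "/dev/dri/renderD128"
      SRC, DST = "input.mp4", "output.mp4"

      # Standard FFmpeg VAAPI pipeline: decode, upload frames to the GPU, encode H.264.
      subprocess.run([
          "ffmpeg",
          "-vaapi_device", RENDER_NODE,
          "-i", SRC,
          "-vf", "format=nv12,hwupload",
          "-c:v", "h264_vaapi",
          DST,
      ], check=True)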

  • @survivor303
    @survivor303 2 years ago

    Great vid, you got a new subscriber!

  • @nazgu1
    @nazgu1 2 years ago

    4:50 - I like the fact that Shiek is participating in the disassembly :)

  • @CaptainsWorkspace
    @CaptainsWorkspace 1 year ago

    Reminds me of HP's Tesla M6 quad-GPU adapters, except these aren't GPUs; still very fascinating.
    The closest equivalent is probably the ALR 6x6 Pentium Pro system with two triple Socket 8 CPU adapters.

  • @TheAussieRepairGuy
    @TheAussieRepairGuy 2 years ago

    3:56 - I hope you have a good anti-static solution on your desktop with that 4-legged van de graaff generator sleeping on the desk lol

  • @__aceofspades
    @__aceofspades 2 years ago +12

    Seems like a frankenstein solution from years ago before Intel Arc, as they now have Arctic Sound which ditches the odd Xeon parts for just DG2 dies.

    • @LordOfNihil
      @LordOfNihil 2 years ago +3

      Seems like you would use your server as a backplane and install multiple cards as a cluster. The server acts as the interconnect and provides I/O.

    • @gledsonborba
      @gledsonborba 2 years ago +1

      No, it's an Nvidia H100 competitor, built for the datacenter.

  • @ATVProven
    @ATVProven 2 years ago

    Love these videos.

  • @SK_1337
    @SK_1337 2 years ago

    This is really cool tech. You should start a series or something for weird tech, like you did for old, OLD tech (IBM etc.).

  • @BobHannent
    @BobHannent 2 years ago

    To do video encoding we used HP Moonshot with E3 12xx CPUs with Iris graphics. A dense way of doing a large amount of live video processing and encoding.

  • @zobilamouche420
    @zobilamouche420 2 years ago

    Quite an unusual card :D
    Thanks again for providing an English and a German version of your videos, not many people do that !

  • @TeamStevers
    @TeamStevers 2 years ago

    Love this interesting stuff I haven’t seen before

  • @AC3handle
    @AC3handle 2 years ago +18

    A PCI Express card with its own PCI Express slot ON the card!
    Intel inception.
    You KNOW as soon as Linus sees this, he's gotta get one.

    • @adriancoanda9227
      @adriancoanda9227 2 years ago

      Ah yeah, if you have a custom BIOS you can even make that thing boot directly from that M.2, but this is an FCPGA that is the graphics; the M.2 slot is for a cache where the destination and source files need to be stored in order to be fast enough.

  • @james2042
    @james2042 2 years ago +1

    I wish we got the L4 cache on desktop chips the way they had it on the i7-5775C; that chip ran circles around Skylake despite being Broadwell-based.

  • @BWOWombat
    @BWOWombat 2 years ago

    Thank you so much a million times for making your vids in English too. You rock n roll!

  • @madb132
    @madb132 2 years ago +1

    Awesomeness! Thank you. I miss playing with ex-enterprise gear. You may need a Xeon-based motherboard to free up some PCIe lanes. Boot up a quick Linux setup and go from there.

  • @evan7721
    @evan7721 1 year ago +1

    Very much reminds me of the Xeon Phi 3120As I bought ages ago. I got them after EOL, and the support for a non-enterprise user is super scattered; it's really hard to get them running properly, mainly because you need a specific old version of the Intel compiler suite, which is impossible to get.
    (I gave it a good try 18 months ago and almost got it.)

  • @soleenzo893
    @soleenzo893 2 years ago

    lol, I love how the cat is just chilling in the middle of the desk, not giving a fuck about the fact that a video is being shot and he's literally in the middle of it

  • @denvera1g1
    @denvera1g1 2 years ago

    I need this to be updated with 3x 12700H (42 cores) for Tdarr (when GPU transcoding results in larger-than-original files, basically any time you go from MPEG-2 to H.265), and then 3x video encoders for when GPU encoding works fine.
    Would be awesome to get 4 of these cards in a single system. Heck, I once had a server that would have fit 7 of these cards.

  • @8bit711
    @8bit711 2 years ago

    such a piece of art!

  • @myke1380
    @myke1380 2 years ago

    3:35 Shiek is chillin', lol

  • @beanlegion8529
    @beanlegion8529 2 years ago

    Great informative video but I cannot get over just how massive your cat is.

  • @jmsether
    @jmsether 2 years ago

    I would love to see one of these in action 😍

  • @thehairymonkeytoo
    @thehairymonkeytoo 2 years ago

    The PLX 8749 and the PLX 8717 both support 2 NTB (non-transparent bridge) ports, each of which lets you join 2 PCIe hierarchies.
    You would need 5 ports to allow everything to talk to everything (3 between the host and each CPU, 2 between the CPUs). Given the 8717 only has 16 lanes (so 2x 8-lane ports, one on either side of the NTB) and the 8749 has 48 lanes, it probably has 8 lanes to each CPU and 16 to the host.
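
    A quick sanity check of that lane budget; the port split below is just the allocation guessed in the comment, not anything confirmed from the card:

      # Hypothetical lane budget for the 48-lane PLX 8749, assuming one x16
      # upstream port to the host plus one x8 port to each of the three Xeon nodes.
      total_lanes = 48
      host_port = 16
      node_ports = 3 * 8
      used = host_port + node_ports
      print(f"{used} of {total_lanes} lanes used, {total_lanes - used} spare")  # 40 of 48 used, 8 spare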

  • @peterjansen4826
    @peterjansen4826 2 years ago +1

    Very interesting; this video begs for a 2nd part in which you actually use the device as intended, on a Linux system with Windows on that card. Good luck and have fun. :)

  • @Techie_ASMR
    @Techie_ASMR 2 years ago +1

    You forgot to talk about the M.2 slot that was on the card. Or did I miss that part? Why does it need an M.2? To function as a separate computer inside the main computer?

    • @kelrune
      @kelrune 2 years ago +1

      I feel like he only mentioned it, then forgot about it when he talked about installing Windows on the card.

  • @Millmantv
    @Millmantv 2 years ago

    I wish that would work for a PC. I would love to use it for a stream rig and just let it do all my encoding and video processing for green screen and whatnot; it would be sick.

  • @ArcAiN6
    @ArcAiN6 1 year ago

    The reason there are plastic film "domes" over the raised components has to do with the nature of server hardware and server cooling. Servers draw air in through the front and exhaust it out of the back of the chassis using quite a few small, high-CFM fans, so almost no component that goes into a server has its own fan. To preserve that high-CFM cooling, as many obstacles as possible are removed from the flow path, and features are reduced, or "muted", by covering them with something that reduces drag and aids the airflow.

  • @charleshines1553
    @charleshines1553 1 year ago

    I find product teardowns fun to watch. They can also help me decide if I want a particular item, like a laptop for example. I want to make sure the memory is socketed, even if it is already maxed out, so I can replace it easily if there is ever a bad module. I have never owned any computers with the memory soldered to the motherboard; without special equipment like a hot air gun, you won't be able to easily replace a bad chip. I don't mind that laptop CPUs are soldered, but I know they were not always like that.

    • @katieell4084
      @katieell4084 1 year ago +1

      And there was a time that many of us swore we would never buy a phone that did not have a replaceable battery but here we are. As reliable as this stuff is now, component failure is not as big of a problem as it was in the early days when it made sense to seek socketed memory. I, for one, yearn for the day when all the storage is right in the CPU and it's all unified and it's all 10x as fast as L1 cache and as capacious as a five-pound 15K drive.

  • @OliverQueen1974
    @OliverQueen1974 2 years ago +4

    The Windows drivers are maybe for the Windows installations on the VCA itself, to get them all "talking" to each other over the virtual network etc. Just a guess, but it makes sense to me at least.

    • @wsippel
      @wsippel 2 years ago

      Could be. The official documentation is pretty clear: the only supported host OS is CentOS 7 with a patched kernel.

    • @TheExileFox
      @TheExileFox 2 years ago

      Could also just be that they started developing for windows but dropped it.

  • @Felice_Enellen
    @Felice_Enellen 2 years ago +1

    @der8bauer EN - I think the drivers are not for the host to see the cpus across the system bus, but rather for the CPUs to see each other across an internal bus, which is probably why there's a second bridge chip on the card. They need to be installed on the card, not the host.

  • @Hwi1son
    @Hwi1son 2 years ago

    Look at the big orange murder mittens... they're just chillin'

  • @agentcrm
    @agentcrm 2 years ago

    Linux/Unix makes sense for a server host.
    I wonder if the second PLX chip is for the CPUs accessing the M.2 drive? And you'd have Windows installed on there for certain workloads; that's where you'd need the Windows drivers, because it would need to be accessed differently from a normal install.

  • @asteroth6664
    @asteroth6664 2 years ago +1

    This device works like this: similar to the Xeon Phis, you need to boot an image of Linux (CentOS) on it, or Windows if it supports that. It stores the OS in RAM. Then you need a specifically compiled program to run on it, and for that you need a compiler like the ones for the Xeon Phi: Visual Studio plus an Intel compiler whose name I don't remember right now. If you want to know more, let me know.

  • @tahustvedt
    @tahustvedt 2 years ago +1

    Looks like it's a modern SBC. They have been around for decades. Early use was to put an IBM compatible ISA SBC with a complete 286 or even up to 486 system with RAM and graphics in an Amiga 2000 for example. Does it not have a video output?

    • @andrewr7820
      @andrewr7820 2 years ago

      Yeah, they also used that approach for devs to write code for prototyping systems using custom processors. I remember all kinds of weird SBC's that had pretty short lifespans. Probably the most successful application of all time has been dedicated RAID controllers.

  • @bundles1978
    @bundles1978 2 years ago +1

    Very interesting card. Could it be software-locked? Like, you may need a specific program written using a specific SDK to get it to do anything. I would imagine such an architecture would be pretty specific, and literally anything else would not know what to do with it. Possibly different instruction/execution code on the die? Sorry if I get the exact terminology wrong.

  • @ApexLodestar
    @ApexLodestar 2 years ago

    fascinating!

  • @huzudra
    @huzudra 2 years ago

    The 128MB "L4 cache" is for the iGPU; this was first introduced with Crystal Well. On Crystal Well, if you're using the iGPU it's VRAM at 50GB/s each way (100GB/s total); if you're using external graphics it's L4 cache. In some workloads the 128MB cache is really beneficial, even if it's comparatively slow by modern standards. I can't find specs for later models after Crystal Well, but I assume in later instances they sped this memory up. eDRAM is the technical name for it.

  • @tech.curiosity
    @tech.curiosity 1 year ago

    1:43 Have you noticed that there are 3 areas that are not covered? They may be BIOS chips, so you can flash them without disassembling the card.
    Thanks for the video.

  • @khanggamr7454
    @khanggamr7454 2 years ago

    Bro, I got an ad from Pulseway with Linus,
    from the work-from-home setup video

  • @benjamintrathen6119
    @benjamintrathen6119 2 years ago +4

    The level 4 cache of 128MB is shared: 64MB to the CPU and 64MB to the iGPU. If the iGPU gets disabled, the entire 128MB is allocated to the CPU, which is cool.

  • @mrwang420
    @mrwang420 2 years ago

    They could have put a small fan in the center, sucking air in from the case and blowing it out through the grills. Like a small blowymatron.

  • @chrissartain4430
    @chrissartain4430 1 year ago

    And YES, Great Video!

  • @MisterRorschach90
    @MisterRorschach90 2 years ago

    So if consumer level software and hardware was made to utilize aftermarket cards like this, you could pop one inside of a server or machine designed for gaming, and have it take care of the htpc part of the server?

  • @leflavius_nl5370
    @leflavius_nl5370 2 years ago

    Mein Gott that is one fully packed board.

  • @filas312
    @filas312 2 months ago

    I guess you might need PCIe bifurcation support on the motherboard (almost always a given on server boards) to get this going, especially since you've noticed more than a single PCIe switch there. The publicly available VCA2 docs don't mention it though, albeit there are some general guidelines about BIOS validation for the product. You might have to buy a system that has been validated for the VCA2 in the BIOS/UEFI respect (similar to the various Thunderbolt woes on motherboards without advertised TB support). It looks like there were SuperMicro 1U servers sold for this product stack.
    Curious if you could get this going with some more documentation and perhaps a suitable system!

  • @highplayz0275
    @highplayz0275 2 years ago +1

    Put the kitty in more videos, it's an awesome touch.

  • @Antiwindowscatalog
    @Antiwindowscatalog 2 years ago

    This seems more useful to me as a compact virtual machine host, running VMware ESXi on that M.2 card slot. Like a three CPU server on a card. But then how do you communicate with it? Or would you then have some kind of PCIe backplane with networking and storage?

  • @StreetComp
    @StreetComp 2 years ago +1

    You get a 👍🏻 for interesting computer explanation while having a giant cat sleeping on your desk :)

  • @Splarkszter
    @Splarkszter 2 years ago +3

    Wish there were more PCIe accelerators I could hook into my PC.
    Imagine dedicated ray-tracing PCIe cards alongside the GPU, or AI-compute PCIe cards for DLSS.

    • @MrRobert264
      @MrRobert264 2 years ago

      About the "AI" part, you can already get some TPU devices to hook up to tour computer :o

    • @Splarkszter
      @Splarkszter 2 years ago

      @@MrRobert264 Oh, right. But also... That can be used for DLSS.

    • @GS-hv9rd
      @GS-hv9rd 2 years ago

      @@MrRobert264 AI prefer touring country

  • @dragonhart6505
    @dragonhart6505 2 years ago

    I'm curious if this is meant to be run from something like a NAS, or as a virtual computer operation, using Linux on the host system to back-end the OS from the card's internal storage. Maybe the PCIe pins communicate directly with the host motherboard and CPU to leech PCIe lanes and other hardware, booting a Windows Server OS directly from the card.
    I'm no engineer or programmer, but it looks and sounds like the thing is supposed to be some kind of overpowered NUC server.

  • @Melechtna
    @Melechtna 1 year ago

    This is a QEMU-geared card; I actually deal with something similar at work. Basically, you'd want to partition the device out among, say, 3 VMs, but it's less like a VM and more like a sandboxed computer at that point. You'd dedicate one of those CPUs to that VM and install Windows on a virtual disk, physical disk, however you want to do it, and hypothetically that should just work. It would legitimately give you native performance, as you're just using PCIe passthrough to give that system its own dedicated CPU, and probably a GPU as well, and it has its own RAM to work with.
    From there you can manage the machine remotely more easily, as you can just SSH into the main node, even if this were rented out, and do all sorts of things, from configuring the VM to rebooting the machine, with minimal downtime, as the Linux backend would have to go down, but then EVERYTHING goes down at that point.
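
    For reference, a minimal sketch of the kind of PCIe passthrough described above, assuming a Linux/KVM host where one of the card's endpoints has already been bound to vfio-pci; the PCI address and disk image name are placeholders, and this is a generic VFIO example rather than anything from the VCA documentation:

      import subprocess

      # Placeholder PCI address of one node's endpoint - check `lspci` on the real host.
      VCA_NODE = "0000:0b:00.0"

      # Minimal QEMU/KVM invocation handing that device to a Windows guest via VFIO.
      # (Sketch only: firmware, networking and the guest disk are reduced to the basics.)
      subprocess.run([
          "qemu-system-x86_64",
          "-enable-kvm",
          "-machine", "q35",
          "-cpu", "host",
          "-smp", "4",
          "-m", "8192",
          "-device", f"vfio-pci,host={VCA_NODE}",
          "-drive", "file=windows.qcow2,if=virtio",
      ], check=True)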

  • @CarthagoMike
    @CarthagoMike 2 years ago +12

    That looks like a pretty interesting accelerator extension card.

  • @alfonsodavila1655
    @alfonsodavila1655 2 years ago

    Excellent video, greetings

  • @Marc_Wolfe
    @Marc_Wolfe 2 years ago

    Tesla M40s require above-4G decoding too, and can still show up in Device Manager. You just get code 10 or 12: "not enough resources, disable another device".

  • @popo-yq5je
    @popo-yq5je 2 years ago

    3:37 that orange cat sleeping so deep...

  • @DigitalDiabloUK
    @DigitalDiabloUK 2 years ago

    I wonder what the power draw is. FFMPEG with Intel VAAPI would be a super interesting project.

  • @whyjay9959
    @whyjay9959 2 years ago +7

    Yo dawg, I heard you like computers.

  • @McTroyd
    @McTroyd 2 years ago +1

    After you removed all the heatsinks and frames, I saw a small slot on the lower left of the card (just above the PCIe pins). It looked to me like it said "M.2 Riser." Perhaps NVMe booting on-card?

  • @martinwashington3152
    @martinwashington3152 2 years ago

    Phi? MIC check? :) Nice video, buddy. At first glance it's like a version three of Phi compute.

  • @rakly3473
    @rakly3473 2 years ago +1

    These CPUs would need quite some bandwidth to communicate with each other. Maybe that's why there are 2 PCIe controllers? One for the actual PCIe slot, one for communication between the CPUs?

  • @Ryl1ea
    @Ryl1ea 1 year ago

    I think the driver was meant to be installed inside the Windows installation on the VCA; it sounds like something similar to the virtIO drivers for Windows guests.

  • @langamtimkulu6846
    @langamtimkulu6846 2 years ago +3

    I've always wondered if there's a PCIe card with a CPU out there... I can't wait to see what it's capable of; I'm looking forward to a more in-depth review.

    • @whyjay9959
      @whyjay9959 2 years ago +2

      There are also Xeon Phi cards.

    • @666Tomato666
      @666Tomato666 2 years ago +2

      Actually there are plenty. The nVidia network accelerator cards (formerly of Mellanox) are running full-blown Linux environments on them. Similarly, the Xeon Phi accelerators were doing something like that. Just as two examples.

    • @cal2127
      @cal2127 2 years ago +1

      MikroTik also put out a router on a PCIe card with dual 25Gb links

    • @primus711
      @primus711 2 years ago +1

      CPUs on PCI and PCIe aren't new and have been around for decades; I made Amiga accelerators using them with PPC CPUs.

  • @happydawg2663
    @happydawg2663 2 years ago

    I would love this thing for my proxmox

  • @Alan_J_T
    @Alan_J_T 2 years ago

    @der8auer EN Where would one get that liquid thermal pad from? It would make doing re-pads a lot easier.

  • @REAVER781
    @REAVER781 2 years ago +1

    Maybe it was developed for use with VMware; in a VM the drivers would still need to be installed.

  • @funnyrailroadcrossing2767
    @funnyrailroadcrossing2767 2 years ago

    Looks good for a collector

  • @MK-of7qw
    @MK-of7qw 2 years ago

    3:34 nice beans 🐾

  • @highplayz0275
    @highplayz0275 2 years ago

    The orange cat co-host is awesome. I love cats and dogs.

  • @leonardohorovitz8724
    @leonardohorovitz8724 2 years ago

    The Intel Xeon E3 v5 is quite an old product line of processors. There are 3 newer generations for that Xeon segment. I don't know if there is a newer version of this product running Xeon E-2300 (which is the current line that replaced E3 v5).

  • @dorinxtg
    @dorinxtg 2 years ago +1

    As your friend said, the Windows drivers are only part of the solution. Device Manager won't help you much in debugging the issue; you'll need Linux and the boot messages on Linux to see what's going on. Also, enable IOMMU and SR-IOV; you might also need to switch the PCIe x16 slot to x4/x4/x4/x4.
    Long story short, the VCA2 was built primarily to be used under Linux. You might want to talk to Wendell from Level1Techs.
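
    A minimal sketch of how you might verify from Linux that the IOMMU is actually active and see which groups the card's endpoints landed in; the paths are the standard sysfs layout, and nothing here is specific to the VCA2:

      from pathlib import Path

      # Each IOMMU group directory lists the PCI devices that must be passed through together.
      groups = Path("/sys/kernel/iommu_groups")
      if not groups.is_dir() or not any(groups.iterdir()):
          print("No IOMMU groups found - enable VT-d in the BIOS and intel_iommu=on on the kernel cmdline")
      else:
          for group in sorted(groups.iterdir(), key=lambda p: int(p.name)):
              for dev in (group / "devices").iterdir():
                  vendor = (dev / "vendor").read_text().strip()
                  device = (dev / "device").read_text().strip()
                  print(f"group {group.name}: {dev.name} [{vendor}:{device}]")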

  • @hovant6666
    @hovant6666 2 years ago

    That looks like an expensive, roundabout cooling solution for such an obscure product

  • @spartan1986og
    @spartan1986og 2 years ago

    Know-nothing question: is the 2nd PCIe bridge for the card's memory-to-CPU channels? Would it make sense for the card's memory channels to be separate from the system channels?

  • @garrettturner7383
    @garrettturner7383 2 years ago

    Woah! That's like having a physical virtual machine that has its own processor, graphics, and RAM.

  • @kevinarielmoncadalicona3835
    @kevinarielmoncadalicona3835 2 years ago

    love the cat!!!

  • @MrRatraut
    @MrRatraut 2 years ago

    Have you tried a Windows Server OS with a Xeon CPU instead of a Core-series CPU? Some drivers that are available in Windows Server are not available in Windows 10 or 11.

  • @blankblank4949
    @blankblank4949 2 years ago +8

    My only idea is to run it on a Xeon E5 system that has Linux, then have a boot image for Windows on a small M.2 in the card. Probably completely wrong; regardless, I would LOVE to see this thing work for you.