Thinking a little more about the hardware: even if you can find a GPU that doesn't need the IO BAR (and I don't know if one exists at all), you have a physical PCIe Gen 2 x1 slot, but can it handle the power for a GPU? I mean, a discrete GPU can draw 50-75W from the PCI-E slot; can the IO board for the Compute Module 4 provide that?
From the link in the description of the video it seems like it can use quite a few different PCI-E cards. Networking, USB, NVME, SATA... It would have been cool to be able to use other cards too, but I don't know how big the market is for Pi4 CMs with a GPU.
@@l3p3 indeed, I know many people use RPis as servers, and in that case the need for storage far exceeds the need for any kind of graphics. However, the video briefly touched upon the idea of using GPUs for parallel calculations. I can't say if that is a viable application, but that was primarily what I was referring to when I said it would be cool to be able to use other cards. I'm currently taking a class in shader programming, though, and it really is amazing what a GPU can carry out with only a few little lines of code and some heavy math.
@@johanrosenberg6342 actually, there is a gpu integrated into the SOC. I have no idea what kind of gpu it is but maybe you can also abuse that for server stuff. ;-)
Let's rebrand what you just said: "It's not whether it should be done but instead whether it can be done." Literally sums up how I use computers, and the story of why I want to RAID 0 my flash drives, run a super slow or secondary GPU, and use that GPU to run games it shouldn't be able to.
This is the reason Linux will never be anything but a fringe OS. It's too non-user-friendly outside of custom Linux-based apps like Batocera and the like, where someone has literally taken the time and frustration to coax something useful out of it.
@@sonichuizcool7445 I disagree, there are a lot of different distros and they are constantly being updated. While I agree with you at the moment, I think that will change. People will eventually make more user-friendly versions, but that will probably take a long time. #edit I realized I did not fully understand the meaning of fringe; I mistook it for something else. My bad. It will probably stay less popular than the other major OS competitors, but I still think it will grow, so I agree with you in the end haha
@@tissuepaper9962 - sorry, but your comment is truly asinine. The very greatest proportion of people who use computers in their day-to-day lives have zero clue what goes on under the GUI and care an order of magnitude less. That huge majority will never know about, use, or care about Linux unless and until they can turn it on, plug in some arbitrary peripheral, and have an almost 100% expectation of it working immediately and flawlessly. My 94-year-old father can navigate Windows, write documents, schedule Zoom meetings, and obtain value from his PC, but he'd never bother with trying to make a WiFi dongle work with Linux. Why would he? LibreOffice? Not a hope, when he gets Office 365 for a few euro a month and it all just works. I use Linux for a lot of projects, but I would love to be able to avoid throwing a Raspberry Pi at the wall after a full day trying to wrangle with a driver for a, seemingly, common peripheral, and I've been at this nerdy game for decades.
I’m also a firmware engineer, but would not hire you if you interviewed presenting this to a firmware engineering team because of the lack of deeper understanding. I’ll applaud you for making an attempt and being entertaining but I’d grill you for not attempting to debug the Linux device driver code because that’s where you would begin to find and understand why the driver was failing to initialize. In an interview, if I asked why it doesn’t work, the best/simplest answer would be because the gfx card doesn’t support the raspberry pi.
I love how the video escalated more and more: from just plugging in potentially unsupported hardware, to installing drivers, to recompiling a damn kernel. Also, Edmund Hillary would be proud. Why try to add a GPU to a Raspberry Pi's PCIe? Because it's there!
@@l3p3 you do realise that @Markus never said it was complicated to re-compile a kernel. They just stated it was quite an escalation from less difficult things such as installing the hardware and drivers. :)
@@IsaacMIT I dunno, it's pretty damn easy. It's like three commands to build a basic kernel. Four if you first copy the current running kernel's .config file.
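For reference, those "three or four commands" look roughly like this: a hedged sketch based on the Raspberry Pi kernel-build docs as I recall them (the defconfig target assumes a 32-bit Pi 4 build; the steps are echoed rather than executed, since they need a cloned raspberrypi/linux tree):

```shell
# Sketch of a native Pi kernel build. Hedged: target names are from the
# Raspberry Pi documentation from memory; this only prints the steps,
# because running them requires the kernel source tree.
DEFCONFIG=bcm2711_defconfig
echo "zcat /proc/config.gz > .config   # optional 4th step: reuse the running kernel's config"
echo "make $DEFCONFIG                  # ...or start from the Pi 4 defaults"
echo "make -j4 zImage modules dtbs"
echo "sudo make modules_install        # then copy the zImage and dtbs into /boot"
```

On the Pi itself that last compile step is the slow one; everything else takes seconds.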
It's part of the PCI Express standard: you can literally plug any size PCI Express card into any size slot, as long as you cut the plastic out of the back of the smaller slot
It's not a risk-free operation. I've damaged a couple PCI-E slots attempting it, one to the point it no longer worked, the other just looked fugly as hell.
Haha, you have no idea how many times I was like... if this doesn't work, will I have anything to show? Luckily I did it for the memes. This is the way.
"Pi in a keyboard" that would be cool; I think the CM4 would make that achievable, just as long as you can get some airflow through it so it doesn't overheat!
@Jeff Geerling - your best video yet! Thanks so much. The jokes at the beginning had me groaning, but your can-do attitude and your clear explanation of your troubleshooting steps is great. Your knowledge is way beyond mine, but your ability to clearly break down what you're doing is helpful in teaching the rest of us how to dig further into the potential of the Pi.
One of the things I respect about this is *I* have gone down similar paths, like trying to get ancient Iomega Zip 100 drives to work with Linux (they do, my github account is GrigLars if anyone is curious). I didn't have to recompile kernels in that case, but I think the best "why would you do this?" answer is "because it's there." And also, one learns so much ancillary stuff that helps with other projects later on. So even if one gives up after days of all that research, months or even years later, some of these bits and pieces will show up again, and you might have added them with OTHER bits and pieces, which makes one a far better Linux programmer than just "I bought a Raspberry Pi, and it's still unopened in my top desk drawer." I started with Linux in 1993 or so when we tried to compile it on an old 68000 series chip (I think an Atari ST1040). Three days later and three programmers working, we never even got a bootable kernel. But BOY did we learn a lot, and 27 years later, I feel it was still worth it. All of us went on to become big people in the industry.
@@simpletongeek Yes! They had "Atari System V," a short-lived version of Unix System V Release 4.0 (SVR4) developed by Unisoft. It worked on the Atari TT 030 Workstation, which was originally intended to be a high-end Unix workstation; however, the Tramiels took two years to release a port of Unix SVR4 for the TT. Then the TT was replaced by the Atari Falcon, which was a slower and severely bottlenecked consumer-grade system due to some... strange ideas about CPU throughput. I don't remember why, but it was some lane channel traffic stupidity on the board, which made ZERO business sense. That era of Atari made some of the worst business decisions; the ST line should have beaten Amiga, but Amiga won out because Commodore was just a better-run business.
Asking someone why use a GTX with an RPi instead of going with x86 is like asking why one puts air in bicycle tyres. HOW DARE YOU!
I'd say more like "why are you filling your bicycle tires with nitrogen, rather than air?" but... a typically good counter to "Why", in this situation, is simply "Why not?"
@@jyvben1520 Right, that was sort of my point. Just because someone may be unaware of why you are doing something they perceive as "not normal" doesn't mean it shouldn't be done, or that you don't have a good reason for doing so. Just the fact that it's not what you're familiar with doesn't mean it's wrong! :-p I especially love when people try to qualify it with "I've been in X industry for 20+ years..." - yes, so perhaps you're the local resident expert on 20 year old technology? Experience does not equate to skill, lol. I know from my countless failures at certain projects that I'm no expert, even after years of "experience", haha.
I'm glad you're now x-compiling but I seriously felt your pain whilst watching this video.... your adventure is like those we took 20 years ago, a lot of frustration and pain but you learn a lot !
This whole project has been a roller coaster! In good news, it seems there *are* a couple ARM developer boards which can at least output console text over Nvidia GT 710, so there is some hope... but maybe not with the current generation of Pis :-/
One only fails by not trying. One learns from failures and mistakes. Many viewers learned a lot by simply watching Jeff's troubleshooting techniques. Jeff is a Raspberry Pioneer, leading the way for the rest of us, saving us from the frustrations of our own total failure. Apparently, you came here with the expectations of Jeff solving your problems, instead of trying yourself. Then, you shoot the messenger, who has worked tirelessly and personally endured the 'agony of defeat'? Really?
Pretty sure there is *one* graphic card that works. I can't remember the name ATM. Did you try Raspi official blog? Pretty sure I got it from there. HTH.
@@JeffGeerling Spend 500 hours getting an idea to work, getting close, almost there, almost there... One more try.. Everything but one little pesky piece of lingering x86 baggage.. And then ruclips.net/video/Ag1o3koTLWM/видео.html
Jeff, while I am not in need of this content, I really enjoyed your presentation and the insights it contained. Your humor is appreciated as is your humility. As one of your clips showed, failure has always been my best teacher. TOTALLY WELL DONE!
I think the problem is power to the PCIe slot, so you could use a mining riser that powers the card externally instead of drawing from the Pi board. Good luck. Love your videos.
I get flashbacks from work when I see this type of struggle with Linux device drivers. Linux is amazing and great, but dealing with, or developing, kernel modules (device drivers) can be really painful and tedious. Kernel updates, which happen a lot nowadays, make me nervous because kernel library APIs often change and break the driver compilation.

@jeffgeerling Issue (02:40): It's a kernel module build error. The driver code had an error during compilation. At 02:27, when you grab the latest Linux ARM Display Driver (version 390.138), it lists fixes for the Linux 5.6 release candidate in the Release Note Highlights, and at 02:52 I can see you are compiling the driver against a kernel of a different version, Linux 5.4.51-v7l+.

Solution: Try downloading and compiling version 390.132, or possibly the version released after that. It claims to fix "kernel module build problems with Linux kernel 5.4.0 release candidates" in its release notes. Basically, it should play nicely with the Linux kernel you have. The next suggestion would be to download the 5.6 kernel and build the latest driver against it, but that sounds very painful.

It would be really interesting to see you connect the Pi to the monitor using a VGA cable. At the very beginning of the video you removed the bracket, along with the VGA plug, from the card and connected it to the monitor using an HDMI cable, but at 01:52 the kernel explicitly recognizes it as a "VGA compatible controller". Could it be possible the video was being output through the VGA port and not the HDMI?
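A tiny sketch of the mismatch described above (the helper and its name are hypothetical, just to illustrate why a driver targeting the 5.6 series doesn't necessarily build against a 5.4 kernel, and why versions must be compared numerically, not as strings):

```python
# Hypothetical helper: parse a kernel release string and compare versions
# numerically. "5.4.51" < "5.6.0" as version tuples, even though naive
# string comparison of some release strings can mislead.
def parse_kernel_version(s: str) -> tuple:
    """'5.4.51-v7l+' -> (5, 4, 51); the local suffix after '-' is ignored."""
    return tuple(int(part) for part in s.split("-")[0].split("."))

running = parse_kernel_version("5.4.51-v7l+")  # kernel running in the video
target = parse_kernel_version("5.6.0")         # series the 390.138 driver notes target

print(running < target)  # True: the driver release is ahead of the running kernel
```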
Honestly after the 5th or so recompile I started thinking about setting it up. It would be a bit faster on my i9 laptop, or even my older i7 I have over in the corner of my office to play Halo 3.
@starshipeleven actually, the instructions to compile the kernel sources in the raspberry pi pages include a section for cross compiling. You'll need to add maybe 4-5 commands
@@JeffGeerling I once had to recompile the kernel a few times for another ARM device… the time it took the kernel to compile once on the hardware itself (AllWinner A20), I was able to research how to do cross compilation, set up a VM with Ubuntu, install the cross compilation environment and compile the kernel once. After that it was a breeze.
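The cross-compile setup these comments describe boils down to a couple of extra variables on the make command line. A hedged sketch (the toolchain package name assumes Debian/Ubuntu; the commands are echoed, not executed, since they assume a cloned raspberrypi/linux tree):

```shell
# Hedged sketch of cross-compiling the Pi kernel on an x86 box: same make
# targets as a native build, plus ARCH and CROSS_COMPILE. Package name
# below is the Debian/Ubuntu one; adjust for your distro.
ARCH=arm
CROSS_COMPILE=arm-linux-gnueabihf-
echo "sudo apt install crossbuild-essential-armhf"
echo "make ARCH=$ARCH CROSS_COMPILE=$CROSS_COMPILE bcm2711_defconfig"
echo "make ARCH=$ARCH CROSS_COMPILE=$CROSS_COMPILE -j\$(nproc) zImage modules dtbs"
```

On a modern laptop this turns a multi-hour on-device build into minutes, which is why it pays off after the first recompile.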
Loved watching your mind work, I have a very similar work flow and it was refreshing to see someone else who does things not because you should, but because... you know, why not...
@@GeneralNickles isn't thunderbolt exclusive to x86 intel cpus? I mean sure, we have the "usb 4.0" now (currently found on the new apple computers), but it doesn't seem to support gpus.
It doesn't unless the device needs extra power. I actually have a powered adapter as well, and I may do some extra testing with it once I get a separate power supply for it. I'm also going to be testing a PCIe splitter/switch so I can see how multiple cards run together!
Seeing how the Raspberry Pi 4 uses a 10-15W power supply, and given the relatively huge heatsinks on the cards, I'd say they need more power. An x16 PCIe slot is expected to provide up to 75 watts for a high-powered card. Even if the Raspberry Pi could support that, an x1 slot only provides 10W by default (25W at most for a "high power" card). TL;DR yes. Also, for better software support on an AMD card, it might be better to use a GCN GPU such as the Radeon HD 7000 or RX 200 series, because those drivers are much more developed in Linux.
The power delivery over PCIe was my first thought too. As the card doesn't have additional power plugs, it needs to get all its power through the PCIe slot. The TDP of the cards is at least some hint of how much power they need to draw over the slot. Or plug them into an x86 PC and check in HWiNFO how much power they use in idle mode.
@@christianjb2002 According to Wikipedia, "A full-sized x1 card may draw up to the 25 W limits after initialization and software configuration as a 'high power device'." That makes sense, since he only started having those problems when the drivers were properly installed. Even if he's not pushing the GPU, I bet it's asking for more power than what the board is giving it.
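Putting this thread's numbers together (slot limits as commonly quoted from the PCIe spec; the GT 710 TDP is the approximate published board figure, not a measurement), a quick sanity check:

```python
# Rough PCIe power-budget check using the figures quoted in this thread:
# an x1 slot provides ~10 W before software configuration, up to 25 W
# once configured as a "high power device", and x16 provides up to 75 W.
# The card TDP below is an approximate published number, not measured.
SLOT_LIMIT_W = {"x1 (default)": 10, "x1 (high power)": 25, "x16": 75}
CARD_TDP_W = {"GeForce GT 710": 19}

def fits(card: str, slot: str) -> bool:
    """True if the card's TDP is within the slot's power budget."""
    return CARD_TDP_W[card] <= SLOT_LIMIT_W[slot]

for slot in SLOT_LIMIT_W:
    verdict = "ok" if fits("GeForce GT 710", slot) else "over budget"
    print(f"GeForce GT 710 in {slot}: {verdict}")
```

Which matches the observation above: before the driver configures the card as a high-power device, a ~19W card is already over the default x1 budget.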
I'm glad you stopped by, hope you enjoy the videos :)
"That's definitely not my head". :D I'm almost sure that now that you've paved the way, in a month somebody will figure out how to fix those drivers to work :)
@@Rayan-Singh Those still wouldn't get around the IO BAR issue, though. That's something that, as mentioned, is involved with VGA ports. IIRC, they had data pins to detect the status and capabilities of the monitor, which is probably what it is trying to access for VESA drivers. In order to support a video output device using said device's drivers, you need it to be VESA compliant. In order to comply with the VESA standards, you must fully support VGA "legacy" devices, which are kept current since servers still tend to use them a lot for diagnostic terminal screens. I know mine has one for iLO. (HP DL380) The problem goes back to the point someone else made here - it's a firmware issue, since the developers safely assumed there was no need to include VGA support natively. Normally, on a PC, this is something which would be baked into the BIOS, but...the raspberry pi has no BIOS. When it's powered on, it assumes the same hardware is there that it expects and starts running code a certain way right away. No overhead interface to first determine what hardware it has etc (BIOS), as it's assumed it will always be exactly the same hardware, since that's the case for the rPI boards. It's more than just a case of "not the right drivers", as there is a more core level component missing that ALL video card drivers require. Still... awesome POC! You could still probably find a way to get it to do some basic CUDA coding if you used a dev driver instead and built it WITHOUT video output of any kind, just using the cuda cores for AI processing. That might be kind of useful for the RPi, I think.
@@ArtemisKitty Not only the IO BAR; it seems the Radeon (ATI) driver, which I _did_ use (see 09:10 onward), is also trying to initialize the BIOS on the card, which would go over the IO BAR.
@@JeffGeerling Oh yeah, good point. So yeah, a LOT more than insufficient power going on there, lol. I’m personally curious about getting the cuda cores up as AI processing devices using tensorflow or something similar, so there might still be a possibility there for them. Even a cheaper GTX series card can do some pretty heavy processing on that front. Still... long uphill road there. Thank you so much for all of this; I’ve learned so much more in just the past 2 days!
The genius of the PCI Express standard is that it's completely compatible, backwards, forwards and sideways. Among other things, any PCI Express card, of any length, will work on any PCI Express slot, of any length. It doesn't matter what those lengths are. A PCI-Express x16 card will work on a PCI-Express x1 slot just fine (with one sixteenth of the transfer rate, obviously). The only reason why in most cases you can't just plug it in is because they put a physical barrier on the end of the slot. If you were to dremel out this barrier, it would work. (I don't know why they always put that barrier in x1 slots. Perhaps to just stop people doing exactly this, and avoid card sagging damaging the slot.)
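The "one sixteenth of the transfer rate" point can be made concrete with a quick sketch (per-lane rates and line-coding overheads are the commonly quoted PCIe figures):

```python
# Per-lane PCIe throughput after line-coding overhead: Gen 1/2 use
# 8b/10b encoding, Gen 3 uses 128b/130b. An x16 card in an x1 slot
# simply negotiates 1 lane, hence 1/16 of the bandwidth.
def lane_gbps(gen: int) -> float:
    raw_gt_s = {1: 2.5, 2: 5.0, 3: 8.0}[gen]        # transfers/s per lane
    efficiency = 8 / 10 if gen <= 2 else 128 / 130  # line-code efficiency
    return raw_gt_s * efficiency

def link_gbps(gen: int, lanes: int) -> float:
    return lane_gbps(gen) * lanes

print(link_gbps(2, 1))   # Gen 2 x1, as on the CM4: 4.0 Gb/s (~500 MB/s)
print(link_gbps(2, 16))  # the same card in a full x16 slot: 64.0 Gb/s
```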
Jeff, hi from the UK. Superb video, really enjoyed it. I was playing with hardware video encoding (ffmpeg h264_omx) early in the lockdown with the RPi 4. Not fast, but very low power consumption, and it could only do bitrate-based h264 encoding - so larger file sizes than with CRF. I then went on to play with ffmpeg h264_nvenc hardware encoding on the NVIDIA GT 710 and 1050 Ti using Ubuntu 20.04 on various PCs. Very interested in your video, and I hope you can progress this in future. Good luck!
I struggled with ROM BARs myself last week, albeit in a case of GPU passthrough. Lucky for me, I wasn't walking uncharted territory like you are, but it also baffled my mind for a second as I was trying to get it to work. Great work in any case! Your effort and reporting are making a difference for a huge part of the community for years to come! (Think AWS ARM systems, the new Macs that are coming, ...) Sincerely: thank you! "Our greatest glory is not in never failing, but in rising every time we fail." - Confucius
At one time I might have undertaken such a quest. Now I let the "youngsters" take on these challenges! Good footwork. I am betting that the RPi 5 (and whatever chip Broadcom makes for it) will have a lot more PCIe lanes. Someday PCIe will replace I2C and SPI for even mundane peripherals.
@@TheOleHermit What part of USB-C are you interested in? The VL805 claims to support USB 3.0 at 5 Gb/s. USB 3.1 (10 Gb/s) or USB 3.2 (20 Gb/s) would be nice. You just need a USB-B to USB-C adapter. I don't see the RPi ever supporting USB-PD for output, although I do think future versions will require a 15W (5V@3A) or possibly 27W (9V@3A) USB-C power supply. It depends on the cost of supporting USB-PD on input and output.
@@jackpatteeuw9244 Yes, USB-C is only a connector style, but it is capable of much more than PD, including USB 3.x data rates, DisplayPort, networking, you name it. There are many USB-C adapter cables already on the market that provide these functionalities. A fully functional USB-C port could be broken out into a PD-powered, multi-port, multifunctional input/output hub with the various conventional connectors, just as the PCIe bus can be expanded with different adapter cards. I'm a maker who wants to enclose my IoT devices and their CM4 controllers. Legacy PCIe cards only get in my way.
Let me clarify the statement, "Someday PCIe will replace I2C and SPI...". This isn't true and will never be true. The PCIe protocol was created for specific purposes, and one of those purposes was not to replace low-level serial communication protocols (i.e. RS232, I2C, SPI, CAN, 1-Wire, etc.). In fact, there are consortiums/groups interested in replacing the PCIe protocol because the design is beginning to reach its limits. New systems such as CXL and CCIX seek to replace PCIe (to a certain extent)
@@nowrd2xpln Well, maybe I was TOO enthusiastic! Here is where I am coming from. Large embedded applications (think automotive EFI) need huge amounts of IO. Traditionally, these were on the SoC. Not only is this not "flexible", but they were running out of pins. The lower voltages on today's chips make it difficult to use a 5V Vref for analog signals. I2C and SPI are just not fast enough to sample all of the sensors required for a modern car. What is really needed is a high-speed "bus expander" that would directly map all of these analog channels and their controls to memory space on the chip. The same is true for outputs, but many of those have to be "timed".
You can run external GPUs on on a pi. Just need to do hardware modification. Although since this is the compute module, just need to do some tinkering in the software side and you should be able to get it to work. Sources: mloduchowski.com/en/blog/raspberry-pi-4-b-pci-express/ & www.tomshardware.com/amp/news/raspberry-pi-4-pci-express-bridge-is-a-step-closer
First time you showed up in my suggestion feed. Glad I tuned in. Thanks for sharing, and thanks for speaking plain English for those of us less experienced in IT and development. And I appreciate the dry humor; I have a dry sense of humor myself.
The closest I've personally gotten to running ARM with a GPU is clustering a Jetson Nano with a Raspberry Pi as a GPU node using something like Kubernetes or Docker Swarm. Still, a Jetson in an ARM cluster is still not the same as a proper ARM GPU! I was curious if the hardware resources necessary to add a standard GPU directly to a single Pi were present, but this answers everything I wanted to know in one video. Thank you for the work you do, it's very helpful!
I wonder if the same people who say "to get a better computer instead to connect to the GPU" also told people who try to run Doom on odd devices to "get a newer computer" or "get a newer game". Anyway, this is great work. Thanks for sharing your experience with us and all your homework. Subscribed.
This video and its subject matter took you a lot of time. I appreciate it. Oddly enough, I had a thought just yesterday about whether the Pi plus PCI-E plus a GPU could be a thing. When I dig into GPUs, they often seem to have low-level quirks that get in the way. An example today is that a lot of new GPUs will not play nicely with non-UEFI BIOS systems. So you made a good call to go back to older-model GPUs.
Thanks for another great video, Jeff. I am totally down for this journey and will continue to follow along as you chronicle your adventure for us. Thanks again.
It doesn't matter that you didn't figure out how to make this work; we learned a lot from your research. Good job, fella. Let's support this guy; your persistence is inspiring.
A few things: 1. The closed-source ARM Nvidia drivers are meant for the Tegra series, so, they're probably not built to look for a wide variety of PCIe GPUs. 2. There are PCIe riser cards with 12v molex connectors spliced in. These might help alleviate some of the power-related issues you might be facing. 3. PCIe GPUs are known to work on ARM but perhaps there's a hardware limitation for the RPi.
Great video! Too bad your efforts didn't end up with a happy ending and nice video output, but the video really was worth my time. Thank you! I love learning stuff, and while I didn't learn the details, I've learned that even those extreme measures didn't lead to the Pi properly displaying output. Cheers, and I'm excited to watch follow-ups if there are or will be any.
I didn't expect you to go so far. Support for desktop GPUs outside of the mainstream market is so broken. Thanks for the effort and the info; keep up the hard work, please.
Nice deep dive Jeff! Taught me a lot about when to stop bashing my head against a brick wall. Hopefully the community can assist this effort and help you write a linux driver and compile a kernel for it to work with the Raspberry Pi module with the PCIe slot. I'm very interested in this hardware configuration for GPU machine learning in an embedded system for a really low price.
As a developer, I really enjoy the way you work through hiccups and bugs, and how you search for ways to manage them by exchanging with others... until you find the solution...
Fantastic attempt! And thanks for sharing despite not being entirely successful. A lot is learned from attempts. Wish more people would report what didn't go well.
Very good info for us, what you've done here. Conclusion: the sticking point is in the device driver *and* the OS. The driver alone isn't the crux; the OS needs to be able to do a lot if the OS developer wants to make this work.
Thank you so much! Gave me flashbacks to trying to get gtkmm to compile on a Cirrus Logic armv4tel in 2006! (So much of that 'I'll just try ONE more thing...'.) CL had paid a dev to add support for their SoC to mainline gcc, specifically for their non-ARM third-party floating point unit (the v4 ARM arch didn't necessarily come with one; it was an optional extra), but that dev had gotten the condition register format for the floating point unit confused with the ARM one - gcc would compile code, but it was horribly, horribly broken.

Worse, gtkmm (the C++ gtk bindings) was coded by people not suspecting that cross-compilation might exist, and it did things like try to compile and then immediately RUN small applets during configuration. So to enable cross-compilation (with a specially-patched gcc that wasn't mainlined yet) you had to custom-compile Linux kernels on both the dev PC (for the cross-compiler) and the target machine: to turn on miscellaneous binary extensions (on the PC) and a little thing called GAPING_SECURITY_HOLE in the ARM kernel. You then NFS-shared your dev PC root and booted the target up with that kernel, with root mounted from the PC via NFS - which would immediately kernel panic because init was an x86 binary! - but would leave the kernel running and network accessible... whereby that PC kernel extension could be set to use the target machine as an 'ARM coprocessor', using that GAPING_SECURITY_HOLE extension to just load ARM binaries from the PC over the network to the ARM kernel, where they would run and interact with the same filesystem...

Supposedly, this would all be enough to let gtkmm do its 'compile, execute, test' configuration phase OK, even whilst cross-compiling everything to ARM code... although I never did get quite that far. I recall it so well because this was the point at which I gave up and found the boss an x86 SBC for the project.
At the time (2006) I concluded that OSS GUI embedded apps on Linux ARM hardware would be possible, but that it would take a team of perhaps 20 engineers at a big company, like, say, Google, to get it all squared away... Of course, later, exactly this did happen, only for some reason they went with Java, not C++. Equally bad choice, imho. The kicker is, the reason gtkmm was a necessity was that I was working with another guy, a comp.sci graduate. He'd chosen C++ for all his application-level dev work, and he could demo it all working on his laptop, so the boss just said 'make it work'. Had he stuck to C and just used gtk GUI elements directly, it would have been OK, because we wouldn't have needed gtkmm. I still would have had to hand-build the cross-compilation toolchain with the custom patch to fix the floating-point issue. It amazes me sometimes how far Linux on ARM has come, but the biggest learning for me was this: leave Linux distro development to the distro development guys. If you have an embedded system to build, just work from a nearly-normal OS (or use something like Angstrom to automate it), but try not to build anything that requires a custom kernel to work. Use an FPGA over USB or PCI to do the 'real time' things and don't get stuck trying to optimise a system also hosting an entire OS! Let it do its thing, and keep the system you have to be 100% responsible for as simple as possible! I'd really like to see how well an ECP5 FPGA PCIe card will work on the CM4! Nowadays, you should be able to pick a card that has SymbiFlow support, so the CM4 can host the whole OSS FPGA toolchain itself!
09:32 This is the way
If there’s a will there’s a way
This is the way
pls use WoR
This is the way.
"I have spoken."
Dude, doesnt matter if it worked or not, I was able to learn a lot from this video. Thanks for all the videos you make.
That's the point 😁
That's the way
The spirit of adventure: that's what made me a scientist.
@@DetConanEdogawa was thinking the same thing.
RPF: "We're gonna put a PCI-E slot on the new Raspberry Pi!"
Users: "That's great! What sort of things will it support?!"
RPF: "... support?"
They forgot that Broadcom's chips don't support the full PCIe feature set... only data bandwidth, not I/O devices that produce video or sound
@@johanrosenberg6342 actually for the intended purpose of these tiny computers, stuff like SATA controllers are just ideal.
This is a suspense thriller. I held my breath till the end.
Have you tried Windows 10 arm version?
Yes.. me too!!
Somehow I was curious too. I enjoyed the whole process even if the result wasn't good.
This is what it felt like installing my wifi drivers on my first linux laptop...
This is what installing wifi drivers always feels like for me lmao
@@sonichuizcool7445 oh no, I have to actually know how to use a computer to use my computer! The horror!
@@tissuepaper9962 - sorry, but your comment is truly asinine. The very greatest proportion of people who use computers in their day to day lives have zero clue what goes on under the GUI and care an order of magnitude less. That huge majority will never know about, use or care about Linux unless and until they can turn it on, plug in some arbitrary peripheral and have an almost 100% expectation of it working immediately and flawlessly. My 94 year old father can navigate Windows, write documents, schedule zoom meetings and obtain value from his PC but he’d never bother with trying to make a WiFi dongle work with Linux. Why would he? Liber Office? It a hope when he gets Office 365 for a few euro a month and it all just works.
I use Linux for a lot of projects, but I would love to be able to avoid throwing a Raspberry Pi at the wall after a full day of wrangling with a driver for a seemingly common peripheral, and I've been at this nerdy game for decades.
I'm a firmware engineer. You are probably a better firmware engineer than I am. I would hire you if you interviewed with this...
I’m also a firmware engineer, but would not hire you if you interviewed presenting this to a firmware engineering team because of the lack of deeper understanding. I’ll applaud you for making an attempt and being entertaining but I’d grill you for not attempting to debug the Linux device driver code because that’s where you would begin to find and understand why the driver was failing to initialize. In an interview, if I asked why it doesn’t work, the best/simplest answer would be because the gfx card doesn’t support the raspberry pi.
Good thing they don't have firmware engineers in charge of HR.
@@nowrd2xpln Grammer, please... I don't know if English is your language of choice or not. However, you make no sense.
@@electricflyer81 His grammar is fine. Attacking someone's grammar in an otherwise unrelated argument is the most strawman thing I've ever seen.
@@erathemonologuer1454 He typed 'grammer' while complaining about someone's grammar. What a joke.
You're literally the only one bugging Nvidia with 32-bit ARM compatibility reports lol. They must _really_ love you over there! xD
I love how the video escalated more and more. From just plugging in potentially unsupported hardware, to installing drivers, to recompiling a damn kernel.
Also, Edmund Hillary would be proud. Why try to add a GPU to a Raspberry Pi's PCIe? Because it's there!
When people think that recompiling damn kernels is something complicated. o,O
@@l3p3 you do realise that @Markus never said it was complicated to re-compile a kernel. They just stated it was quite an escalation from less difficult things such as installing the hardware and drivers. :)
Shame you couldn't 'knock the bastard off' as Edmund Hillary would have said.
@@l3p3 That does NOT mean that compiling/recompiling kernels is as easy as plug and play.
@@IsaacMIT I dunno, it's pretty damn easy. It's like three commands to build a basic kernel. Four if you first copy the current running kernel's .config file.
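For anyone curious, the "three or four commands" mentioned above look roughly like this. This is a sketch for a typical distro kernel source tree; the exact config target depends on your hardware, and not every distro ships its config under /boot.

```shell
# Optional fourth command: start from the running kernel's config
# (on a Pi 4 you'd typically run `make bcm2711_defconfig` instead)
cp /boot/config-"$(uname -r)" .config
make olddefconfig          # fill in defaults for any new options
make -j"$(nproc)"          # build the kernel image and modules
sudo make modules_install && sudo make install
```

Time-consuming on a Pi, but not complicated.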
It's part of the PCI Express standard: you can literally plug any size PCI Express card into any size slot, as long as you cut the plastic out of the back of the smaller slots.
It's not a risk-free operation. I've damaged a couple of PCIe slots attempting it: one to the point it no longer worked; the other just looked fugly as hell.
@@W1ldTangent they also make adapter cables
"I quickly tossed the CD and documentation aside and got right down to business"- story of my life
A lot of pain to make a 13:32 video. Congrats. Maximum Effort!
Haha, you have no idea how many times I was like... if this doesn't work, will I have anything to show? Luckily I did it for the memes. This is the way.
This is the way
Try a Matrox card; they're ancient, but still used by all kinds of server motherboards, and they should have good Linux support.
www.matrox.com/en/video/products/graphics-cards/m-series/m9120-plus-lp-pcie-x1
Awesome project! These really show how a Pi is versatile enough to become a light weight daily machine (Pi in a keyboard). Thanks for the great video!
"Pi in a keyboard" that would be cool; I think the CM4 would make that achievable, just as long as you can get some airflow through it so it doesn't overheat!
CM4 + Mechanical switches + Thunderbolt (wooo) in a keyboard formfactor would be really nice
Dang - now that's a great idea..!
@@JeffGeerling no need, make the case out of metal and slap thermalpads between
I learned a lot from this video keep it up, your way of explaining things is amazing btw
"I was getting tired and was gonna give up but thought I would try one more thing."
.... Story of my life
@Jeff Geerling - your best video yet! Thanks so much. The jokes at the beginning had me groaning, but your can-do attitude and your clear explanation of your troubleshooting steps is great. Your knowledge is way beyond mine, but your ability to clearly break down what you're doing is helpful in teaching the rest of us how to dig further into the potential of the Pi.
I arrived at this video based on a YouTube suggestion. When I saw the Trogdor t-shirt, my mind was blown!
Interesting video!
One of the things I respect about this is *I* have gone down similar paths, like trying to get ancient Iomega Zip 100 drives to work with Linux (they do, my github account is GrigLars if anyone is curious). I didn't have to recompile kernels in that case, but I think the best "why would you do this?" answer is "because it's there." And also, one learns so much ancillary stuff that helps with other projects later on. So even if one gives up after days of all that research, months or even years later, some of these bits and pieces will show up again, and you might have added them with OTHER bits and pieces, which makes one a far better Linux programmer than just "I bought a Raspberry Pi, and it's still unopened in my top desk drawer." I started with Linux in 1993 or so when we tried to compile it on an old 68000 series chip (I think an Atari ST1040). Three days later and three programmers working, we never even got a bootable kernel. But BOY did we learn a lot, and 27 years later, I feel it was still worth it. All of us went on to become big people in the industry.
Now I want to do "Linux from Scratch" on a Pi... www.linuxfromscratch.org/lfs/
Didn't Atari try to do their own unix box? I don't think it went to market.
@@simpletongeek Yes! They had "Atari System V," a short-lived version of Unix System V Release 4.0 (SVR4) developed by Unisoft. It worked on the Atari TT030 Workstation. It was originally intended to be a high-end Unix workstation; however, the Tramiels took two years to release a port of Unix SVR4 for the TT. Then the TT was replaced by the Atari Falcon, which was a slower and severely bottlenecked consumer-grade system due to some ... strange ideas about CPU throughput. I don't remember why, but it was some lane channel traffic stupidity on the board, which made ZERO business sense. That era of Atari made some of the worst business decisions; the ST line should have beaten the Amiga, but the Amiga won out because Commodore was just a better-run business.
Asking someone why they'd use a GTX with an RPi instead of going with x86 is like asking why one fills bicycle tyres with air.
HOW DARE YOU!
"HOW DARE YOU!"
I'd say more like "why are you filling your bicycle tires with nitrogen, rather than air?" but... a typically good counter to "Why", in this situation, is simply "Why not?"
@@ArtemisKitty first question answer : less leakage ?
@@jyvben1520 Right, that was sort of my point. Just because someone may be unaware of why you are doing something they perceive as "not normal" doesn't mean it shouldn't be done, or that you don't have a good reason for doing so.
Just the fact that it's not what you're familiar with doesn't mean it's wrong! :-p
I especially love when people try to qualify it with "I've been in X industry for 20+ years..." - yes, so perhaps you're the local resident expert on 20 year old technology? Experience does not equate to skill, lol. I know from my countless failures at certain projects that I'm no expert, even after years of "experience", haha.
@@ArtemisKitty Totally agree.
"BuT nItRoGeN iS bEtTeR!"
I'm glad you're now cross-compiling, but I seriously felt your pain whilst watching this video... your adventure is like those we took 20 years ago: a lot of frustration and pain, but you learn a lot!
I would see if any of the old Power Mac cards, from when Apple used the PowerPC architecture, get past the IO BAR issue.
Oh god, I have two of these (two old 1999 Power Mac G4s) with all original parts. Oh god, the memories.
Oh that would be interesting to see.
This might be an interesting test. Mail Jeff a card and maybe we get a video out of the deal...
Needs to be PCI Express, not regular old PCI.
PowerMac G5 2.0 / 2.3 / 2.5 used PCI-E and certainly were not x86 based.
My first experience with you and it was amazing! Can't wait to watch more of your videos....I like your sense of humor! 😜
Wonderful job Jeff! It doesn't matter if it works or not! It's about the fun of exploration!
Bro, you're my hero! I appreciate you for going through so many obstacles. The first one through always gets the bloodiest.
There are only a few people who would think of implementing a PoC like this on IoT modules; this video is a classic 👌. Keep up the good work.
Me: starting to watch this video with lots of expectations.
The video: one fail after the other.
This whole project has been a roller coaster! In good news, it seems there *are* a couple ARM developer boards which can at least output console text over Nvidia GT 710, so there is some hope... but maybe not with the current generation of Pis :-/
@@JeffGeerling I just appreciate you doing this vid. Subbed 😎 a true pioneer.
One only fails by not trying. One learns from failures and mistakes. Many viewers learned a lot by simply watching Jeff's troubleshooting techniques. Jeff is a Raspberry Pioneer, leading the way for the rest of us, saving us from the frustrations of our own total failure.
Apparently, you came here with the expectations of Jeff solving your problems, instead of trying yourself. Then, you shoot the messenger, who has worked tirelessly and personally endured the 'agony of defeat'? Really?
Pretty sure there is *one* graphics card that works. I can't remember the name ATM. Did you try the official Raspberry Pi blog? Pretty sure I got it from there. HTH.
@@JeffGeerling Spend 500 hours getting an idea to work, getting close, almost there, almost there... One more try.. Everything but one little pesky piece of lingering x86 baggage..
And then ruclips.net/video/Ag1o3koTLWM/видео.html
Jeff, I love your out-takes!!!
Love the Trogdor T-Shirt and the Mando reference, Great video!
Can't wait until Friday! Mando will be burninatin' the countryside :D
Jeff, while I am not in need of this content, I really enjoyed your presentation and the insights it contained. Your humor is appreciated as is your humility. As one of your clips showed, failure has always been my best teacher. TOTALLY WELL DONE!
I think the problem is power to the PCIe slot, so you could use a mining riser that powers the card externally instead of drawing power from the Pi board. Good luck. Love your videos.
Jeff, I nominate you Pi guru of the year lol. Love these videos, please keep it up!
I get flashbacks from work when I see this type of struggle with Linux device drivers. Linux is amazing and great, but dealing with, or developing, kernel modules (device drivers) can be really painful and tedious. Kernel updates, which happen a lot nowadays, make me nervous because kernel library APIs often change and break the driver compilation.
@jeffgeerling
Issue (02:40):
It's a kernel module build error. The driver code had an error during compilation. At 02:27, when you grab the latest Linux ARM Display Driver (version 390.138), it lists fixes for the Linux 5.6 release candidate in the Release Note Highlights, and at 02:52 I can see you are compiling the driver against a kernel of a different version, Linux 5.4.51-v7l+.
Solution: Try downloading and compiling version 390.132, or possibly the version released after that. It claims to fix "kernel module build problems with Linux kernel 5.4.0 release candidates" in the release notes. Basically, it should play nicely with the Linux kernel you have. The next suggestion would be to download the 5.6 kernel and build the latest driver against it, but that sounds very painful.
It would be really interesting to see you connect the Pi to the monitor using a VGA cable. In the very beginning of the video you removed the bracket, along with the VGA plug, from the card and connected it to the monitor using an HDMI cable, but at 01:52 the kernel explicitly recognizes it as a "VGA compatible controller". Could it be possible the video was being output through the VGA port and not the HDMI?
Thank you
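Along the lines of the diagnosis above, a quick sketch of how to check what the kernel thinks of the card. The slot address 01:00.0 is a placeholder; substitute whatever `lspci` reports on your system.

```shell
# Which class the card was detected as, and which kernel driver (if any) bound to it
lspci -k | grep -iA3 vga
# Driver-init and BAR-assignment messages from the kernel log
dmesg | grep -iE 'nvidia|radeon|bar'
```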
This is the way. I appreciate all the effort you made to attempt this setup, keep experimenting with this stuff!
I look forward to your next videos!
I just wonder: instead of compiling on the Raspberry Pi, why didn't you cross compile it on Linux X86 machine?
Most likely because setting a cross compilation environment requires additional time and effort
Honestly after the 5th or so recompile I started thinking about setting it up. It would be a bit faster on my i9 laptop, or even my older i7 I have over in the corner of my office to play Halo 3.
@@JeffGeerling doooo it 😄
@starshipeleven actually, the instructions for compiling the kernel sources on the Raspberry Pi documentation pages include a section on cross-compiling. You'll need to add maybe 4-5 commands.
@@JeffGeerling I once had to recompile the kernel a few times for another ARM device… in the time it took the kernel to compile once on the hardware itself (AllWinner A20), I was able to research how to do cross-compilation, set up a VM with Ubuntu, install the cross-compilation environment and compile the kernel once. After that it was a breeze.
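For reference, the extra handful of commands the thread mentions look roughly like this. This is a sketch for a Pi 4 (32-bit) on a Debian-based x86 host, following the Raspberry Pi kernel-build documentation; package names may differ on other distros.

```shell
# Install the cross toolchain and kernel build deps on the x86 host
sudo apt install git bc bison flex libssl-dev make crossbuild-essential-armhf
git clone --depth=1 https://github.com/raspberrypi/linux
cd linux
# Pi 4 (32-bit) defconfig, then build the image, modules and device trees
make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- bcm2711_defconfig
make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- -j"$(nproc)" zImage modules dtbs
```

The resulting kernel, modules, and dtbs then get copied over to the Pi's boot partition.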
Loved watching your mind work, I have a very similar work flow and it was refreshing to see someone else who does things not because you should, but because... you know, why not...
I'm just waiting for the day when I can Thunderbolt a GPU to an Android phone.
This is hilariously fantasy-esque i love it 😂😂❤
Don't external GPU enclosures run on thunderbolt?
That might actually work if you can find drivers.
Ah yes, let me hook up my 1080ti to my phone
*Looks at 1.9GHz CPU* WHO'S THE BOTTLENECK NOW?!
Candy crush on MAX
@@GeneralNickles isn't Thunderbolt exclusive to x86 Intel CPUs? I mean sure, we have "USB 4" now (currently found on the new Apple computers), but it doesn't seem to support GPUs.
Oh god... you just won my heart over with that Jurassic park reference.
Finally, the video I was waiting a long time for has arrived.
You have gently and nicely explained your points, against some of the irrational comments you had on your previous video.
Does the x1 to x16 pcie slot need to be actively powered? I know some graphics cards might need more than the pi can output.
It doesn't unless the device needs extra power. I actually have a powered adapter as well, and I may do some extra testing with it once I get a separate power supply for it.
I'm also going to be testing a PCIe splitter/switch so I can see how multiple cards run together!
@@JeffGeerling I don't think that the Pi could handle PCIe bifurcation (especially not on an x1 slot), but it's worth a try.
Seeing how the Raspberry Pi 4 uses a 10-15W power supply, and given the relatively huge heatsinks on the cards, I'd say they need more power. An x16 PCIe slot is expected to provide up to 75 watts for a high-powered card. Even if the Raspberry Pi can support that, an x1 slot can only provide 6W.
TL;DR yes
Also, for better software support on an AMD card, it might be better to use a GCN GPU such as the Radeon HD 7000 or RX 200 series, because those drivers are much more developed on Linux.
The power delivery over PCIe was my first thought too. As the card doesn't have additional power plugs, it needs to get all its power through the PCIe slot. The TDP of the cards is at least some hint of how much power they need to draw over the slot. Or plug them into an x86 PC and check in HWiNFO how much power they use in idle mode.
@@christianjb2002 according to Wikipedia, "A full-sized x1 card may draw up to the 25 W limit after initialization and software configuration as a 'high power device'." That makes sense, since he only started having those problems when the drivers were properly installed. Even if he's not pushing the GPU, I bet it's asking for more power than what the board is giving it.
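The budget numbers in this thread can be sanity-checked with a little arithmetic. A sketch, using the x1 limits quoted above from the PCIe CEM spec and assuming the GT 710's published TDP of roughly 19 W:

```shell
# Slot budgets quoted in the thread: x1 at boot ~10 W, x1 as a
# configured "high power device" 25 W, x16 graphics slot 75 W.
fits() { # usage: fits <card_tdp_watts> <slot_budget_watts>
  [ "$1" -le "$2" ] && echo yes || echo no
}
fits 19 10   # GT 710 (~19 W) vs x1 boot budget        -> no
fits 19 25   # GT 710 vs x1 high-power budget          -> yes
fits 75 75   # typical x16 graphics card vs x16 budget -> yes
```

Which is consistent with the card only misbehaving once the driver brought it up and it started drawing real power.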
Not entirely sure why this was recommended to me, but I'm glad it was. Have a sub for your very impressive, even if unsuccessful, work.
I'm glad you stopped by, hope you enjoy the videos :)
"That's definitely not my head". :D
I'm almost sure that now that you've paved the way, in a month somebody will figure out how to fix those drivers to work :)
Your videos are just awesome!
I lost interest in RPI and stuff sometime back. Watching your videos makes me excited about these topics again.
That GPU would've used ATI drivers,
and there would've been power issues, as you didn't use a powered PCIe expansion card.
And yeah, try Manjaro or Pop!_OS.
@@Rayan-Singh Those still wouldn't get around the IO BAR issue, though. That's something that, as mentioned, is involved with VGA ports. IIRC, they had data pins to detect the status and capabilities of the monitor, which is probably what it is trying to access for VESA drivers. In order to support a video output device using said device's drivers, you need it to be VESA compliant. In order to comply with the VESA standards, you must fully support VGA "legacy" devices, which are kept current since servers still tend to use them a lot for diagnostic terminal screens. I know mine has one for iLO. (HP DL380)
The problem goes back to the point someone else made here - it's a firmware issue, since the developers safely assumed there was no need to include VGA support natively.
Normally, on a PC, this is something which would be baked into the BIOS, but... the Raspberry Pi has no BIOS. When it's powered on, it assumes the same hardware is there that it expects, and starts running code a certain way right away. There's no overhead interface to first determine what hardware it has, etc. (a BIOS), as it's assumed it will always be exactly the same hardware, since that's the case for the RPi boards.
It's more than just a case of "not the right drivers", as there is a more core level component missing that ALL video card drivers require.
Still... awesome POC! You could still probably find a way to get it to do some basic CUDA coding if you used a dev driver instead and built it WITHOUT video output of any kind, just using the cuda cores for AI processing. That might be kind of useful for the RPi, I think.
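If anyone wants to poke at the BAR situation themselves, sysfs exposes it. A sketch; the 0000:01:00.0 bus address is hypothetical, so substitute the one `lspci` reports:

```shell
# Each line of `resource` is <start> <end> <flags> for one BAR;
# an all-zero line means that BAR was never assigned an address
cat /sys/bus/pci/devices/0000:01:00.0/resource
# lspci's Region lines show the same info, including any I/O port BARs
sudo lspci -s 01:00.0 -vv | grep -E 'Region|I/O'
```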
@@ArtemisKitty Not only the IO BAR; it seems the Radeon (ATI) driver, which I _did_ use (see 09:10 onward), is also trying to initialize the BIOS on the card, which would go over the IO BAR.
@@JeffGeerling Oh yeah, good point. So yeah, a LOT more than insufficient power going on there, lol.
I’m personally curious about getting the cuda cores up as AI processing devices using tensorflow or something similar, so there might still be a possibility there for them. Even a cheaper GTX series card can do some pretty heavy processing on that front. Still... long uphill road there. Thank you so much for all of this; I’ve learned so much more in just the past 2 days!
Your sufferings deserve my subscription! This is just like how I feel at work sometimes :), especially working with hardware.
You're a real hero. If there's a mountain, it has to be climbed. 💪
The genius of the PCI Express standard is that it's completely compatible, backwards, forwards and sideways. Among other things, any PCI Express card, of any length, will work on any PCI Express slot, of any length. It doesn't matter what those lengths are.
A PCI-Express x16 card will work on a PCI-Express x1 slot just fine (with one sixteenth of the transfer rate, obviously). The only reason why in most cases you can't just plug it in is because they put a physical barrier on the end of the slot. If you were to dremel out this barrier, it would work.
(I don't know why they always put that barrier in x1 slots. Perhaps to just stop people doing exactly this, and avoid card sagging damaging the slot.)
bro, please try WoR, I think even without the driver you might get display out from the hdmi using the nvidia graphics card
pls reply
Actually this isn't a terrible idea, though the Win10 AMD/NVidia drivers may not run on non-x86
It's something I may try, I've never tried Windows on the Pi yet though... might have some growing pains.
Jeff, hi from the UK. Superb video, really enjoyed it. I was playing with hardware video encoding (ffmpeg h264_omx) early in the lockdown with the RPi 4. Not fast, but very low power consumption; it can only use bitrate-based h264 encoding, so larger file sizes than with CRF. I then went on to play with ffmpeg h264_nvenc hardware encoding on the NVIDIA GT 710 and 1050 Ti using Ubuntu 20.04 on various PCs. Very interested in your video, and I hope you can progress this in future. Good luck!
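For anyone wanting to try the same encoder comparison, the invocations look roughly like this. input.mp4 is a placeholder, and both commands assume an ffmpeg build with the respective encoder compiled in:

```shell
# Pi 4 hardware encode via OpenMAX: bitrate-controlled only, no CRF mode
ffmpeg -i input.mp4 -c:v h264_omx -b:v 4M pi_out.mp4
# NVIDIA NVENC encode on a desktop GPU (GT 710 / 1050 Ti etc.)
ffmpeg -i input.mp4 -c:v h264_nvenc -preset slow -b:v 4M nv_out.mp4
```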
Hello to my Pi-Labs and RPi Discord fam!
?? :) Hi Nat
@@pilabs3206 long time no see we should talk on discord some time
I struggled with ROM BARs myself last week, albeit in a case of GPU passthrough. Lucky for me, I wasn't walking uncharted territory like you are. But it also baffled my mind for a second as I was trying to get it to work.
Great work in any case! Your effort and reporting are making a difference for a huge part of the community for years to come!
(think AWS ARM systems, the new macs that are coming,...)
Sincerely: thank you!
“Our greatest glory is not in never failing, but in rising every time we fail.” - Confucius
At one time I might have undertaken such a quest. Now I let the "youngsters" take on these challenges !
Good footwork. I am betting that the RPi 5 (and whatever chip Broadcom makes for it) will have a lot more PCIe lanes. Someday PCIe will replace I2C and SPI for even mundane peripherals.
I'm putting my bets on USB-C ;-)
@@TheOleHermit What part of USB-C are you interested in? The VL805 claims to support USB 3.0 at 5 Gbps. USB 3.1 (10 Gbps) or USB 3.2 (20 Gbps) would be nice. You just need a USB-B to USB-C adapter. I don't see the RPi ever supporting any of the USB-PD modes for output, although I do think future versions will require a 15W (5V@3A) or possibly 27W (9V@3A) USB-C power supply. This depends on the cost of supporting USB-PD on input and output.
@@jackpatteeuw9244 Yes, USB-C is only a connector style, but it is capable of much more than PD, including USB 3.x data rates, DisplayPort, networking, you name it. There are many USB-C adapter cables already on the market that provide these functionalities. A fully functional USB-C port could be broken out into a PD-powered, multi-port, multifunctional input/output hub with the various conventional connectors, just as the PCIe bus can be expanded with different adapter cards.
I'm a maker, who wants to enclose my IoT devices and their CM4 controllers. Legacy PCIe cards only get in my way.
Let me clarify the statement, “Someday PCIe will replace I2C and SPI...”. This isn't true and will never be true. The PCIe protocol was created for specific purposes, and one of those purposes was not to replace low-level serial communication protocols (i.e. RS232, I2C, SPI, CAN, 1-Wire, etc.). In fact, there are consortiums/groups interested in replacing the PCIe protocol because the design is beginning to reach its limits. New standards such as CXL and CCIX seek to replace PCIe (to a certain extent).
@@nowrd2xpln Well, maybe I was TOO enthusiastic! Here is where I am coming from. Large, embedded applications (think automotive EFI) need huge amounts of IO. Traditionally, these were on the SoC. Not only is this not "flexible", but they were running out of pins. The lower voltages on today's chips make it difficult to use a 5V Vref for analog signals.
I2C and SPI are just not fast enough to sample all of the sensors required for a modern car. What is really needed is a high-speed "bus expander" that would directly map all of these analog channels to memory space, and their controls to memory space on the chip. The same is true for outputs, but many of those have to be "timed".
Had to sub simply for your logic: "it's not if you should, but if you could". Now that's a mindset for innovation.
You can run external GPUs on a Pi; you just need to do a hardware modification. Although since this is the Compute Module, you just need to do some tinkering on the software side and you should be able to get it to work.
Sources:
mloduchowski.com/en/blog/raspberry-pi-4-b-pci-express/
&
www.tomshardware.com/amp/news/raspberry-pi-4-pci-express-bridge-is-a-step-closer
A post from Twitter about the BAR that was part of this project.
Love the storytelling, and details. Good job :-)
Nice. That's a HELL of a lot of watching, waiting and hoping. Thanks for the serious amount of work and the one-more-try attitude.
First time you showed up in my suggestion feed. Glad I tuned in. Thanks for sharing, and thanks for speaking plain English for those of us less experienced in IT and development. And I appreciate the dry humor; I have a dry sense of humor myself.
I salute you, man! The video is somewhat disheartening, but you found a way to get the benefit of the experience.
Although it didn't work out, I still appreciate your efforts, you are great!
The closest I've personally gotten to running ARM with a GPU is clustering a Jetson Nano with a Raspberry Pi as a GPU node using something like Kubernetes or Docker Swarm. Still, a Jetson in an ARM cluster is not the same as a proper ARM GPU!
I was curious if the hardware resources necessary to add a standard GPU directly to a single Pi were present, but this answers everything I wanted to know in one video. Thank you for the work you do, it's very helpful!
I've seen a few of your videos, but after watching this one I had to subscribe.
I wonder if the same people who say "get a better computer instead of connecting the GPU to this" also told people who try to run Doom on odd devices to "get a newer computer" or "get a newer game".
Anyway, this is great work. Thanks for sharing your experience with us and all your homework. Subscribed.
This video and its subject matter took you a lot of time. I appreciate it. Oddly enough, I had a thought just yesterday about whether the Pi plus PCIe plus a GPU was a thing. When I dig into GPUs, they often seem to have low-level things that get in the way. An example today is that a lot of new GPUs will not play with non-UEFI BIOS systems. So you made a good call to go back to older-model GPUs.
Learned something, and had a good laugh. Great story - thoroughly entertaining. Was really rooting for you to make it work. Thank you!
You sir are a champion !! Thank you for this video and thank you for all the effort. Awesome channel
After watching this video I had no option but to subscribe. It is just as entertaining as a well-done Netflix film.
Thanks for another great video, Jeff. I am totally down for this journey and will continue to follow along as you chronicle your adventure for us. Thanks again.
Much appreciation for your struggle; I feel like it won't end in vain.
It doesn't matter that you didn't figure out how to make this work; we learned a lot from your research. Good job, fella. Let's support this guy; your persistence is inspiring.
Indeed this is the way!!! Love your vid Jeff, continue the good work !!!!
A few things:
1. The closed-source ARM Nvidia drivers are meant for the Tegra series, so, they're probably not built to look for a wide variety of PCIe GPUs.
2. There are PCIe riser cards with 12v molex connectors spliced in. These might help alleviate some of the power-related issues you might be facing.
3. PCIe GPUs are known to work on ARM but perhaps there's a hardware limitation for the RPi.
Bro, keep trying, you are a genius! It's amazing, all your work!
Nice vid and project. Keep up the good work and make it great :)
I read your blogpost about this just yesterday
I can't wait till someone will figure out how we can add a decent gpu to the raspberry pi
You have done so much hard work; I really appreciate that.
I really appreciate your hard work. The video was helpful.
I'm in total awe of your powers of persistence. I'm sure one day you will discover / invent faster than light travel.
Great video! Too bad your efforts didn't end with a happy ending of nice video output, but the video really was worth my time. Thank you! I love learning stuff, and while I didn't learn the details, I've learned that even those extreme measures didn't lead to the Pi properly displaying the output. Cheers, and I'm excited to watch follow-ups if there are or will be any.
Bro, I wish you and your channel good luck.
Dude I am liking your video for your efforts 👍
I didn't expect you to go so far. Support for desktop GPUs outside of the mainstream market is so broken. Thanks for the effort and the info; please keep up the hard work.
I never knew how to deal with driver-diagnosis stuff; now I do. Subscribed, since I figure I'll get more of that sort of thing from this channel.
You did good work. I appreciate learning from your tries so I don't have to try for many hours. Thank you.
I admire your patience.. Kudos
Great research, friend. So sad you couldn't make it work; I was very excited waiting for you to succeed in this adventure.
Dude, I loved your bar joke. Instant sub for me. Also, you make quality content; keep it up, bro!
Nice deep dive Jeff! Taught me a lot about when to stop bashing my head against a brick wall. Hopefully the community can assist this effort and help you write a linux driver and compile a kernel for it to work with the Raspberry Pi module with the PCIe slot. I'm very interested in this hardware configuration for GPU machine learning in an embedded system for a really low price.
As a developer, I really enjoy the way you work through hiccups and bugs, and search for how to manage them by exchanging with others... till you find the solution...
Thanks, I'm glad you liked it!
Fantastic attempt! And thanks for sharing despite not being entirely successful. A lot is learned from attempts. Wish more people would report what didn't go well.
Very good info for us, what you've done here.
Conclusion: the blocker is in both the device driver *and* the OS.
The driver alone is not the crucial part; the OS needs to be able to do a lot if the OS developers want to make this work.
The entire video had me at the edge of my seat. Well done!
Even though it did not work out in the end, you gained a subscriber.
Thank you so much! Gave me flashbacks to trying to get gtkmm to compile on a cirrus logic armv4tel in 2006! (so much of that 'I'll just try ONE more thing...').
CL had paid a dev to add support for their SoC to mainline gcc... specifically for their non-ARM third-party floating point unit. (The v4 ARM arch didn't necessarily come with one; it was an optional extra...) But that dev had gotten the condition register format for the floating point unit confused with the ARM one - gcc would compile code, but it was horribly, horribly broken. Worse, gtkmm (the C++ gtk bindings) was coded by people not suspecting that cross-compilation might exist, and it did things like try to compile and then immediately RUN small applets during configuration. So to enable cross-compilation (with a specially-patched gcc that wasn't mainlined yet) you had to custom compile Linux kernels on both the dev PC (for the cross-compiler) and the target machine, to turn on miscellaneous binary extensions (on the PC) and a little thing called GAPING_SECURITY_HOLE in the ARM kernel. You then nfs-shared your dev PC root and booted the target up with that kernel, with root mounted from the PC via NFS - which would immediately kernel panic because init was an x86 binary! - but would leave the kernel running and network accessible... whereby that PC kernel extension could be set to use the target machine as an 'ARM coprocessor', using that GAPING_SECURITY_HOLE extension to just load the ARM binary from the PC over the network to the ARM kernel, where it would run... and interact with the same filesystem... Supposedly, this would all be enough to let gtkmm do its 'compile, execute, test' configuration phase OK, even whilst cross-compiling everything to ARM code... Although, I never did get quite that far - I recall it so well because this was the point at which I gave up and found the boss an x86 SBC for the project. At the time (2006), I concluded that OSS GUI embedded apps on Linux ARM hardware would be possible, but that it would take a team of perhaps 20 engineers at a big company, like, say, Google, to get it all squared away...
Of course, later, exactly this did happen, only for some reason they went with java, not C++. Equally bad choice, imho.
The kicker is, the reason why gtkmm was a necessity, was because I was working with another guy, who was a comp.sci graduate. And he'd chosen C++ to do all his application-level dev work on, and he could demo it all working on his laptop, so the boss just said 'make it work'. Had he stuck to C and just used gtk gui elements directly, it would have been ok, because we wouldn't have needed gtkmm.
I still would have had to have been basically hand-building the cross-compilation toolchain with the custom patch there to fix the floating-point issue.
It amazes me sometimes how far Linux on ARM has come, but the biggest learning for me was this: leave Linux distro development to the distro development guys. If you have an embedded system to build, just work from a nearly-normal OS (or use something like Angstrom to automate it), but try not to build anything that requires a custom kernel to work. Use an FPGA over USB or PCI to do the 'real time' things, and don't get stuck trying to optimise a system that is also hosting an entire OS! Let it do its thing, and keep the system you have to be 100% responsible for as simple as possible!
I'd really like to see how well an ECP5 FPGA PCIe card will work on the CM4! Nowadays, you should be able to pick a card that has SymbiFlow support, so the CM4 can host the whole OSS FPGA toolchain itself!
I applaud your perseverance!
This channel has helped me a lot, and thanks to it I chose a tech career.
I recently bought a Raspberry Pi 4B 8GB; your video is awesome.
Thank you for making an interesting but very complicated project comprehensible to dummies like me.
It doesn't matter if it worked or not; you are an inspiration.
The only channel where I watch the whole video. Great work, as always.