Featuring Wendell's Star Trek inspired look
Just glad he's not wearing a red shirt! Never ends well!
Set your phasers to LOL 😂
My youtube feed is just Radeon 7, Radeon 7...... but Level1 delivers with something much more interesting.
ikr
B-b-but V < VII
Why sub to more than 1 tech channel? They all parrot the same shit.
Because today the embargo lifts.
But hey, the other bleeding edge is coming around too.
SBCs and RISC and ARM are cool
Lol your profile picture. Looks like you were desperate for this video 😂
RISC-V is super cool, and I’ve been working on it at school quite a bit, but, not to be a Debbie Downer here, SiFive doesn’t have any open source CPUs.
The only thing open source about their cores is that they use the open source RISC-V ISA; their microarchitecture implementations are all proprietary.
I do appreciate them pushing RISC-V and trying to make it easier to integrate stuff with the platform, but the sales pitch is a little disingenuous.
So the big guys are getting involved, to keep it proprietary?!
@@L0_V The typical approach of hardware manufacturers using open specs is to keep their particular implementations proprietary (or at least as proprietary as they can), mainly because they're afraid of other people ripping off their designs and burying them with their own product. That is usually unlikely, for a lot of reasons, but it doesn't stop companies from forgoing a lot of the potential benefits of fully open hardware in the name of protecting themselves. It's been a long time, but I read an essay by Eric S. Raymond that explains the commercial benefits of open drivers, which I think also applies to open hardware. (I don't remember the name, or the exact book it was in, or I'd mention it here.)
Are there any that are truly open source?
Wonderful takedown man. Nice one.
@@stefandj4088 Yes, there are RISC-V implementations that are truly open source: en.wikipedia.org/wiki/RISC-V#Open_source. You can even write one all on your own if you want to.
They probably can't match the efficiency and feature set of commercial implementations though, not yet in any case. And you probably can't go and buy silicon of these implementations yet. Though, maybe one day.
The way things are progressing, 2024 will be the year of the RISC-V GNU Hurd gaming desktop.
No vector extensions yet; it won't be running a phone by 2024, unfortunately.
@@davidste60 It's supposed to be minimal
@@Houshalter- The core instructions are supposed to be minimal, yes; the extensions, however, are supposed to allow RISC-V to replace all ISAs, including those of high-performance CPUs. But the specs aren't finalised yet.
HURD is slow and bloated. Better off using something like MINIX, seL4 or Redox.
@WaliWorldX - I think by the time Wayland is comparable to commercial display APIs (if ever), Linux will be woefully behind in some other way. This from a Linux and Windows user.
FPGAs should be in mass-produced products, but only where it makes sense. SDRs, oscilloscopes, etc. all have FPGAs, since the algorithms can be updated and enjoy the benefits of parallelization. But the Nvidia module is just a joke
RME uses FPGAs on all their audio interfaces, as opposed to using off-the-shelf converters... This is also why their cheapest product is 800 USD.
BTW, Wendell mentioned that there is an FPGA on the expansion board for the RISC-V board. It is preprogrammed to act as its PCIe root complex.
I suppose the chipset is lacking...
Yeah, I don't think he understands what an FPGA actually is beyond the basic English Wikipedia article (he seems to think FPGAs are just reconfigurable CPUs?). FPGAs are very often used in mass-market products, even cheap ones, specifically because they are so cheap and they let you consolidate many jellybean parts into one chip. Some of the cheaper $10 FPGAs nowadays are nothing short of amazing and can often replace 5 or 10 discrete components, especially for DSP and RF implementations.
@@helloworldstein - I think he means custom ASICs are cheaper if you can do large enough volume, not compared to using a group of off-the-shelf ASICs.
David Stevenson That used to be the case, but with the way FPGA prices have plummeted in the last decade while their capabilities have only gotten better, you need a very specific use case and very specific products for a cost-benefit analysis to favor an ASIC over an FPGA nowadays. For much mid- to high-volume manufacturing, you're better off sticking with an FPGA now. The volume needed to get your per-unit ASIC cost down to the $3 mark many FPGAs hit nowadays is hard to reach unless you're selling a hit product like the PS4.
Now MIPS is open source too. I hope we get good stuff in computer architecture in the coming years
Nice to see tech YouTubers who report on more than just gaming stuff
Fascinating technology. I have some embedded audio DSP projects in mind and wasn't sure which architecture I wanted to work with. I'm convinced RISC is the future, and with everything being open source and a growing ecosystem, this is definitely the way to go.
Thanks for the video.
RISC-V is significantly important.
Curious why it seems that Quake 2 is a sort of nostalgic general benchmark...
The fan on that thing is adorable.
"When in doubt which game to use, pick Quake II"
It's an OpenGL "hello world" of sorts. Of all the id Software games that have had their source code released, it was the first to ship with OpenGL support.
Oh ok. That makes total sense. Thanks, Skin.
Is it just me, or did anyone else think 'Star Trek TOS science officer' when looking at what Wendell is wearing in this?
I think that was deliberate
The inefficiencies in RISC-V come from doing any kind of GPU-related task, since the GPU is emulated in software. That can bog things down really far in some use cases. For basic web browsing it is fine. What I'm working on is a CPU only partly based on RISC-V, with modified and additional instruction sets and firmware, so that I could use physical GPU cores integrated into my SoC that communicate with the CPU. It eliminates the inefficiencies of RISC-V and provides much higher efficiency and performance. This will be going into the laptop I've designed over the past year.
HaiL
That doesn't make any sense. x86 also bogs down if you run a software GPU on it. But you don't have to... you connect either RISC-V or x86 to a GPU card/chip using PCIe. Or you can build it onto the SoC. Whatever you want. If you want a GPU built into your RISC-V SoC, SiFive has GPU IP available for licensing.
@ Bruce Hoult Most x86 chips have an integrated GPU; a few don't and have to be used with a dedicated GPU, but you can't run an x86 CPU by itself unless it contains an integrated GPU. SiFive's RISC-V chips, such as the U5 series and U7 series, come WITHOUT an integrated GPU, as far as I know and have seen SO FAR. If anything has changed since I last looked a few months ago, then it is big news to me. They do not have physical GPU cores integrated into them; instead, software emulates the work of a GPU, and this is how they get significantly bogged down in any graphics computational task. Whereas x86 integrated GPUs have physical cores, physical as in "hardware logic gates build the actual cores and pipelines", and hardware is much faster and more efficient than the CPU doing GPU work in software. IF SiFive now has options for physical GPU cores integrated into the CPU, then I am not seeing it anywhere in my look through their website and would be very interested in seeing it. As for connecting an external graphics card to work with RISC-V: yes, it is possible. I was strictly referring to their single-chip embedded solutions not containing physical GPU cores. EDIT: I do know of one company over in Hong Kong that is designing a RISC-V SoC with a capable integrated software-based GPU, but even so, this is different from what I am talking about regarding physical integrated GPU cores.
@@PeptideScienceInstitute "you can't run an x86 CPU by itself unless it contains an integrated GPU." Of course you can! I've built many machines like that myself, to use as servers I log in to via ssh. Most recently I did that with a Core i9-9980XE. No integrated GPU, and I didn't add a graphics card.
You are still thinking in terms of the high overheads of running a Microsoft-Windows-style desktop. Linux DEs are much more efficient than that. Remember when Vista first tried to introduce 3D-style desktop effects? And how they demanded high-end hardware to work properly? At the same time, I could run KDE 4 with full Compiz effects on my little Asus Eee 701 with its little 900MHz uniprocessor Celeron, and it worked just fine.
“GPU? What’s that?”
I want fully open source hardware, software and design to be legislatively mandated for voting systems. I can think of no better way to end criminal control of such systems.
I'm sorry, they've already started building a wall. 🤣😜🏢
There should be no hardware involved with voting systems.
I love technology, but for voting I can think of nothing more important than PAPER ballots! They are not subject to data corruption or intentional alteration. They can be counted by hand without machines if necessary. It's the only truly secure way to vote!
@@RobR99 However, anything requiring counting by hand is always subject to human error.
@@FinaISpartan That's why they are first counted by machine. You then need a consensus between the hand count and the machine count. That's how we do it in my state, and I've never heard of any problems with things like improper calibration changing votes.
Down with x86! Long live RISC!
Open source is essential for non-US companies in the age of Trump. Today you're friends and tomorrow you're enemies. No serious Chinese or EU company wants to depend on US semi-finished products.
Unfortunately, $999 for the main board and $1999 for the expansion board is... a bit outside my budget :(
Yeah, same. I wasn't expecting those prices, that's just crazy
I just went to order expecting Raspberry Pi prices, thought the $999 was for something else, looked, shit my pants, and then started Raspberry Pi shopping.
Ohh is that why he doesn't mention the price in the first few minutes? Got it.
Who was the target customer? Can't be regular people
@@cutliss Mainly universities using it for teaching instruction sets (and, in the broader sense, computer engineering), people developing for RISC-V-based systems, etc.
I've been looking at RISC-V for a while now and it is exciting. Given the companies involved, I can see Samsung and Google delivering mobile devices with SoC performance similar to Apple's Bionic custom ARM chips.
Trent Randel If Samsung or Google were going to make SoCs with performance on par with Apple's A-series chips, they'd have done so already. And if they do in the future, it won't be because of RISC-V; it'll be because they put the engineering work and money into doing so
@@SoupRKnowva Apple's advantage is scale and end-to-end product development and delivery. Android and the OEMs that produce the phones rely on kernel development that accommodates a wide range of dependencies while staying flexible with its APIs. Apple doesn't have this issue. RISC-V has the advantage that companies like Microsoft and Google could deliver an on-chip solution that isn't currently possible with a proprietary architecture like ARM. The challenge is scalability.
What is it about the RISC-V ISA that will allow these companies to suddenly match Apple’s performance where they couldn’t with the armv8 ISA?
The problem is that only Apple was willing to develop their own microarchitecture, everyone else is playing second fiddle with Qualcomm or ARM designed cores that just can’t compete. That doesn’t suddenly change if they start using a different ISA. These companies still need to be willing to invest the money to develop their own microarchitectures if they truly want the same performance that Apple has managed
@@SoupRKnowva I think you're missing the point; it's not really about the RISC-V architecture and more about the customization of the chip and everything around the chip. At the moment Apple has the ability to design and produce a chip around which they can then optimize the other components: VRAM, BIOS, PCIe lanes, ..., including their OS. This is end-to-end manufacturing, and it gives Apple % gains along the entire process. Android and Windows, on the other hand, need to build a multitude of variables into their software, which makes it overly bloated and slows it down. As most Linux people will tell you, the advantage of Linux is that you can recompile the kernel yourself, adding or removing things to increase overall performance. This is why I can run the latest Linux Mint distro on an old Pentium chip but not Windows 10. RISC-V gives a standardized base infrastructure from which any OEM can then make optimized products from the ground up, whilst also providing sufficient scope for developers to continue to develop software. The biggest loser in this, in my opinion, will be Intel.
I don’t really understand what you’re trying to say at all. None of that depends on RISC-V vs armv8, all of that customization is completely possible with armv8, Apple being the best example. Apple’s ability to vertically integrate has very little to do with what instruction set they chose to implement and everything to do with the investment they put in. These other companies need to make similar investments to reap the rewards
I want to see RISC SoCs get integrated into AMD and Intel packages. You could use them for AI processing, use them to run the OS, use them for background processes and other things.
I kind of like the idea, and have often thought about a smaller micro-CPU just for running driver code, to free up the main CPU for more demanding apps. There was a time when a sound card had an actual processor on the chip (like the EMU10k) that did actual sound processing. After the LT Winmodem came out, other companies drifted toward doing all the logic on the computer's main CPU in software instead of having actual logic in the device. Case in point: most AC97 chips are just a DAC and ADC, and EVERYTHING else is done in software by the driver on the CPU, not the AC97 chip. After that trend started, I so wanted to evict the bloated drivers off to their own dedicated processor to let my apps run faster on the main CPU.
With this idea you are asking for a whole new nightmare of programming for low-level devs, and an even worse nightmare of vendor instruction sets. The RISC chip would need its own dedicated memory and memory controller, apart from any x86 bloated hardware crowding the PCB space.
Merging RISC or any other large IC with x86 is a terrible idea, and only low-level devs will understand why this is true.
RISC integration within an x86 chip causes a whole mess of incompatibility issues at the microcode level, which is precisely why Intel has instruction sets that mimic RISC behaviour for faster execution (AVX, DMA, etc.). What you say might be possible, but it would basically require reinventing the wheel, with massive investments that provide diminishing returns.
But corporations like Intel and AMD have obligations to shareholders to turn a profit every year non-stop, so they can't afford to take such endeavours and stick to improving upon whatever they have on hand (precisely the reason Haswell was the last great innovation from Intel imo, with its FinFET implementation at 22nm). So you might be able to guess where I'm going with this. The x86 platform is most likely going to be stomped out by ARM and RISC-V SoCs. And although I hope it doesn't come to that, the enthusiast PC market might become a thing of the past.
Awesome video! Hopefully we're gonna see more from them in the future, IRL but also on your channel :D
Oh my, so many topics I'd like to expand on. Almost everything you mention sounds interesting
I think the architecture is actually very exciting too, seeing how many security issues have resulted from the complexity of modern x86/amd64 designs. How about a simpler architecture, but with the core count scaled to something ridiculous? The tasks that usually require much horsepower, like video compression/decompression and rendering, should be pretty parallelizable...
The RISC/CISC debate was over 20 years ago.
You're right, Wendell. The potential here is very exciting!
RISC-V needs to be widely available in microcontrollers and SoCs before it can knock ARM processors off the top spot. That is a long way off.
RISC-V is interesting
Thanks for covering this topic. I wanted to buy one myself, but imports in my country are very challenging at the moment. For now I am playing around with RISC-V in QEMU, but it's not the same. Hope to see more on this in the future!
At the end of the day... what else do you need besides Quake 2?
But can🏃 it run CrySiS 😅✌️
But can it run Quake II with path tracing?
ThePoshTux OHHH S HIT NANI!!!!!!!
1:24 Yes, but Raspberry π is ARM, and ARM is RISC, too.
But ARM is completely proprietary.
Doesn't Nvidia own ARM now?
This is interesting stuff. Would love to hear and see more from you guys on this. :)
Maybe a guide that explains how to go about setting up Debian on one of those things? Like what hardware is required, how to configure it, setup, installs and all that jazz.
This is awesome because it keeps newer architectures within the grasp of us normal joes.
"Year of the RISC-V desktop, I don't think we'll see that any time soon." :)
Really digging the Trek look.
If you wish to talk about the power of FPGA, would be great if more people knew about the MiSTer project. It is amazing how many retro consoles and arcades they have supported on it already!
Where are the links to the schematic and documentation on how to build a CPU?
You, sir, have found a new subscriber! Perfectly timed for the current state of affairs.
Would happily watch more videos on this chip, hint hint. I heard a lot of Debian can already run on it.
The issue with ASIC development isn't the cost to design, it's the cost to put into production. An ASIC is fully custom silicon, and you have to have a contract with a manufacturer, or be a manufacturer, to have an ASIC produced. The actual design of an ASIC starts in the same place as the program that configures an FPGA: your hardware description language. For FPGA development there is software that takes your hardware definition file and actually routes out all the logic on the chip. Conceptually, your code defines how the logic blocks in the chip are connected together; the software handles all the routing of signals across the chip. Some tools even let you change which FPGA chip is targeted and show how much of the chip is consumed by the logic your hardware definition code demands. ASIC development is a somewhat different algorithm that operates on the same code. The problem is that it is very expensive to tool up a semiconductor plant for a new design, and there are many things that need to be done for automated testing, packaging, etc.
Something I think is pretty interesting is that some products rely on having reprogrammable logic, where an FPGA is advantageous. One product like this is cable boxes. They actually reprogram themselves to receive different channels, or signals from different geographical areas, allowing the cable company to hold onto the boxes and update them as the methods of receiving the cable signal change over time.
A chip that is a really good learning resource for this kind of thing is the Zynq 7000 from Xilinx. It has ARM Cortex-A9 cores surrounded by FPGA logic. What's cool about this is that every single pin on it is reprogrammable; you can actually define the hardware on any pin, with very few limitations. If you want to blink an LED on one, you can set up the logic to do that. If you want a serial port on another, you can. If you want to add some digital signal processing to lighten the load on the ARM cores, you can do that as well.
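To make the "blink an LED" example concrete, here is roughly what you'd write in Verilog (just a minimal sketch: the 100 MHz clock, counter width, and port names are my assumptions, and the led output still has to be mapped to a physical pin in the constraints file):

// Minimal LED blinker: a free-running counter whose top bit drives the LED.
module blinker (
    input  wire clk,   // board clock, assumed 100 MHz
    output wire led    // tied to a physical pin via the constraints file
);
    reg [25:0] counter = 26'd0;

    always @(posedge clk)
        counter <= counter + 1'b1;

    // Bit 25 toggles every 2^25 cycles: ~0.34 s high, ~0.34 s low at 100 MHz,
    // so the LED blinks at roughly 1.5 Hz.
    assign led = counter[25];
endmodule

The same handful of lines could just as well describe a UART or a DSP stage; the synthesis tools decide which logic blocks and routes on the fabric implement it.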
The main issue behind the G-Sync monitors, as I see it, is that they used a module rather than homegrown hardware. I can empathise with their decision, though, because FPGA design can be difficult. They are high-pin-count devices that require precision soldering. They often require a lot of stuff around the chip, such as multiple supply rails and their own flash memory.
If you want to see the ultimate performance for this type of product, behold the NetFPGA SUME: store.digilentinc.com/netfpga-sume-virtex-7-fpga-development-board/
At first, that blue sweater really made me think of you cosplaying a crew member of the Enterprise.
I wonder if it is possible to pull off a dedicated, streamlined emulator machine this way. I see RetroPie developers making the most of the open source architecture.
Well, many have used FPGAs for "hardware emulation", such as the MiST FPGA computer, which emulates the Amiga, Atari ST, and many others.
Already excited for the FPGA video. I was literally just looking around to get into FPGAs
Check Coursera
Did you say sponsored in the video? I didn’t hear it.
Does illumos / OpenIndiana work on this architecture for enterprise workstations?
Haven't seen Wendell this excited since... well, ever. I wonder if in about a year's time we'll be seeing his last L1T video, saying goodbye to concentrate all his efforts on the... Wendellexa
What if you put 7 nm pure silicon between the p-n junctions of transistors to incorporate negative resistance or quantum tunneling, to speed up larger-node chips by using quantum-tunneling transistors in 14 nm-plus manufacturing?
I would love to develop a whole series of compute units that fit into a 3.5" HDD form factor. Then you could have an SoC, CPU, GPU, FPGA, SSD, HDD, NNU, etc. They would communicate with each other via a fiber-optic network connection, so that when you upgrade, you take the old SoC or GPU and place it into a home server that manages them like a supercomputer. You could keep the hardware until it wears out and not send still-working parts to the landfill. You could then tap into your server for heavy workloads via an encrypted internet link. You would grow your computing power, not just upgrade it.
28nm is kinda dated, but I'm sure it still has a lot of applications regardless of the process node. Does SiFive make any that are more modern, though?
Well summarised! ... Were you thinking of comparing the energy efficiency of the architectures? I guess this board, with the FPGA doing the heavy logistics and the early RISC-V setup, will use more electricity compared to another setup (RK3399 SoC, Raspberry Pi, Intel x86 Z370, AMD x86 X470, ...), right?
I know you said you don't want to get into RISC vs CISC, but can you? There really isn't a lot of solid info out there, certainly not in a readily digestible format.
@electric messiah From what I've seen the debate is alive and well. I just haven't seen it explained all that well.
Some have said that some form of hybrid architecture is likely most optimal, however I haven't seen it explained as to why, or what such an architecture might look like.
Wendell seems to be pretty knowledgeable on the topic, and I'd love to see a rundown video from his perspective and experience.
Like electric messiah said, even x86 machines are RISC-style cores with a hidden instruction set that the x86 instructions get decoded into. This was the major advancement of the P6 family: uop cracking led to huge performance gains.
I guess there are some special-purpose pieces of hardware on chips today that you could say use something more like CISC instructions, but for the most part, RISC has reigned supreme for quite a while now. It's basically only due to legacy that we're all still mostly using x86 cores in our computers today
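To picture what "cracking" means: a memory-destination add is one x86 instruction but three RISC-style operations. Here's a toy Verilog sketch of a decoder doing that expansion (purely illustrative: the opcodes, widths, and module name are invented for the example, not real x86 internals):

// Toy "uop cracker": one CISC-style macro-op expands into 1-3 RISC-style micro-ops.
module uop_cracker (
    input  wire [1:0]  macro_op,  // 0: add reg,reg   1: add [mem],reg
    output reg  [1:0]  uop_count, // how many micro-ops this macro-op becomes
    output reg  [23:0] uops       // up to three 8-bit micro-op codes, first uop in the low byte
);
    // invented micro-op encodings, for illustration only
    localparam [7:0] UOP_NOP   = 8'h00;
    localparam [7:0] UOP_LOAD  = 8'h01;
    localparam [7:0] UOP_ADD   = 8'h02;
    localparam [7:0] UOP_STORE = 8'h03;

    always @* begin
        case (macro_op)
            2'd0: begin  // register-register add: already RISC-like, one uop
                uop_count = 2'd1;
                uops      = {UOP_NOP, UOP_NOP, UOP_ADD};
            end
            2'd1: begin  // memory-destination add: load, then add, then store
                uop_count = 2'd3;
                uops      = {UOP_STORE, UOP_ADD, UOP_LOAD};
            end
            default: begin
                uop_count = 2'd0;
                uops      = {3{UOP_NOP}};
            end
        endcase
    end
endmodule

A real front end does this for several instructions per clock, but the point stands: the programmer-visible CISC instruction never executes directly.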
@@SoupRKnowva >It's basically only due to legacy that we're all still mostly using x86 cores in our computers today
No, that's not quite true. x86 is used in the PC and server space because RISC has next to no _real_ advantages over x86 CPUs in many use cases.
Just look at AMD's Opteron A1100, Qualcomm's Centriq, or Cavium's ThunderX2 - they excel in certain workloads and fall behind their x86 competition in others.
Gone is the often touted performance-per-Watt advantage as well, as all these chips are in the 100W+ class to match their CISC counterparts; all while making up for the still weaker per-core performance by adding more cores per die.
It's simply not a given that RISC = better; see www.phoronix.com/scan.php?page=article&item=ec2-graviton-performance&num=5 for some real-world numbers.
I paid for a couple of these on Crowd Supply. The really annoying thing is that every time the shipping date gets close, the date (goalpost) inevitably gets moved 2-4 weeks. I've watched this happen 3 or 4 times now and have given up expecting these in any reasonable timeframe.
It is all about "how fast", but not for generic tasks; it's how fast your specific tasks run
How about Raptor Computing Systems' Talos II and Blackbird systems? Are you familiar with these? It is another open architecture, but built around the IBM POWER9. I'm not sure which Linux distributions run on it at this point, though I know that Fedora supports it, and did even before the IBM purchase of Red Hat. That platform is workstation-level in terms of performance and therefore easily competes with top offerings from Intel and AMD.
Great video as always
Can you please press F11 and make background full screen?
This is great Wendell!!!
How ironic that RISC and ARM both came out of UK efforts in Cambridge, mainly badged as Acorn/BBC.
Microsoft and Apple, alongside processor makers and IBM, killed off Acorn by buying and suppressing.
I have a USB over Ethernet unit that uses an FPGA to do both the Ethernet and the USB :D
How would a consumer get a RISC CPU to play with? Is there anything to play with at this point?
It's an SBC WITH A LINUX DISTRO
I have zero idea what he's talking about, but I want to learn it
Great, now I have a hankering for some world conquest
Where is the follow-up video?
Throwing serious shade at FPGAs at 3:45; what did the FPGA ever do to you?
The fan is so tiny it's so cute.
That's whut she's said too
It reminds me of the tiny fans on some of the higher end FPGA dev boards
Hello, thank you for this video. I'm out of my league here, but I'm working to learn; steep curve. Curious if you could direct me to how to use RISC-V for a server.
Hardware that can be reconfigured in case of an update or a design flaw should not be in a finished product - I agree.
I love it. We need open source architecture!!!!!!!
So, Knuth's The Art of Computer Programming volumes are going to be very popular... very soon...
Did you say 15 million or 50 million?
The architecture I'm working on will be the last; it can have limitless parallel execution threads, limitless with respect to the number of lines of code you have!
Oooooooh shyt! I thought Wendell was wearing a Star Trek TOS shirt, lol.
This is the third time I've taken a look at SiFive, and I still don't really see the big picture. I think some real-world examples would help. What's a specific situation where this tech has clear advantages? What sort of production volume or budget is needed to make custom silicon practical?
This video does a great job of explaining the tech, but the business case for it is still too abstract to get excited about it.
He didn't mention how "cheap" it is to make verified semi-custom silicon. I don't have several hundred thousand US dollars to outsource a chip production run.
Fascinating.
And there goes my idea. My hope for humanity has been raised.
Also, do you have an affiliate link or anything, so that if I purchase something it would benefit you?
Hey Wendell, some testing on this board please, and compare it to ARM cores.
Almost makes me wish I had understood my digital circuit design course in undergrad. Lol, all that VHDL and Verilog FPGA programming, on top of learning what this stuff was and how it worked, made us weak and woozy.
Where is the next video?
Soooo, what can this do that my Pi doesn't?
look mom, i slapped some legos together
It's great for dedicated hardware acceleration. We have an example in our lab with image processing: there is literally no processor on earth (we looked) that can run a software implementation that handles the crazy data bandwidth we need for this experiment. We had to go with hardware acceleration, so our only option was a National Instruments FPGA, which is incredibly expensive (think more than your car, house and wife combined). This would have been amazing for that application; we could have made our own custom chip that is built for this one task and insanely good at it. But I agree that the average tinkerer has zero need for this.
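For anyone wondering why an FPGA wins on bandwidth: you commit dedicated multipliers and adders to every stage of the math, so one result comes out per clock no matter how much arithmetic sits in the pipeline. A hedged Verilog sketch of the idea (the 3-tap filter, coefficients, and widths are illustrative assumptions, not our lab's actual design):

// 3-tap pipelined FIR: all three multiplies happen in parallel hardware,
// so throughput stays at one sample per clock however many taps you add.
module fir3 (
    input  wire               clk,
    input  wire signed [15:0] sample_in,
    output reg  signed [33:0] sample_out
);
    // illustrative coefficients for a tiny smoothing kernel
    localparam signed [15:0] C0 = 16'sd1, C1 = 16'sd2, C2 = 16'sd1;

    reg signed [15:0] d0 = 0, d1 = 0, d2 = 0;  // delay line
    reg signed [31:0] p0 = 0, p1 = 0, p2 = 0;  // pipelined products

    always @(posedge clk) begin
        // shift the delay line by one sample
        d0 <= sample_in;
        d1 <= d0;
        d2 <= d1;
        // three concurrent multiplies, which the tools typically map to dedicated DSP blocks
        p0 <= d0 * C0;
        p1 <= d1 * C1;
        p2 <= d2 * C2;
        // adder stage: one filtered sample emerges every clock
        sample_out <= p0 + p1 + p2;
    end
endmodule

A CPU would loop over those multiplies one after another; the fabric does them all simultaneously, every cycle, which is where the throughput comes from.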
Who do I send the final product to for manufacture?
You say at the end of the video that you will do a follow up video with the expansion board. Are you still planning on releasing one?
The background song is pretty cool. Anyone know the name of it?
Second video of yours I've watched due to manual search results -Subscribed
SiFive = the new ARM?
I noticed Arrow has a SiFive dev board earlier today. It looks like I should do some more research. Thanks
On AliExpress.com there are tons of RISC-V-based boards made by Chinese companies. Very "fast development" indeed
Is there HBM memory to plug into this? Or another direct memory? What happened with memristors?
Cool, a world-class CPU on an FPGA that's not so expensive to buy. Is there Xilinx FPGA code to implement the CPU somewhere?
I subbed because of your blue sweater...
I would love to see these things made into simple application modules, "Level1 Loungeroom Modules": you have an FPGA as a base, and then attach the RISC modules, 1 for Spotify, 1 for Steam, 1 for alternative wannabe Steams like Epic, etc...
Then, as soon as anyone tries to fix one or make a modification in any way... we sue the friggen pants off that puppy and keep truckin'!
On a more realistic level, I would like to see this kind of thing implemented in our loungerooms to give us better control of our "smart homes"; something like this would let a person have a local network for their smart home rather than a web service, which would mean better security and lower latency.
I can't find "the next video" you mentioned at the end. At all. What is it called?
Dope sweater tho. Looking sharp, Wendell
At least he's not affected by the cold
I wonder if one could build the perfect chip for video encoding/rendering... hmmm.
What happened to the "next" Microsemi board video? I cannot find it.
With my basic understanding of VHDL, I have no chance of doing anything useful with that.
Rather than designing the chip directly, it would be better if SiFive suggested chips based on a PCB design. Also, the design interface should give an estimated cost for the features, at the very least.
Quality content!
How many TH/s for Bitcoin mining?
What is the energy consumption?
I'm such a noob, but this is interesting af. What do I need to do to get started with something like SiFive?
9:22 The M1 is RISC-based, so welcome to the future
2:26 lol, that GeForce Experience on the right
That intro though... good stuff. Please do more voice acting for me ;)
P.S. May I use that intro as a soundboard voice-over, Wendell? :)
How do you buy one? I wanna use this and run Linux like you can with a Pi
So, 2020 will be the year of the RISC-V Linux Desktop? :P
This is not “desktop” any more. Think “advanced supercomputing workstation”.
By the time RISC-V picks up, we'll be moving to quantum computers
@@RazvanAlin Quantum computers have been coming for decades. They have so far been suited only for physical simulations and other such problems that were once the preserve of the old-style analog computers -- the ones that were pushed aside when digital became fast enough and cheap enough, and offered much greater accuracy.
RISC-V has already “picked up”. These products are available now.
@@lawrencedoliveiro9104 Yeah, supercomputers, workstations and SBCs like Raspberry Pi? On the desktop world there is so much x86 legacy software that it would be hard to replace that.
@@samuelschwager You didn’t watch the video? He specifically mentioned how Linux has come to dominate most of the computing world. Think of supercomputing, data science, AI and all the rest of it. He mentioned TensorFlow -- that’s the kind of area where this thing will be used. The desktop is accounting for a smaller and smaller proportion of the computing market.
I’ll wait for RISC-VI.
But is it faster than a Raspberry Pi? And can it do game emulation like a Raspberry Pi? Would love to see someone design a SiFive chip that is amazing at emulation and can do that kind of Pi-box deal for gaming emulation...