The Story of GNU/Hurd

  • Published: Dec 1, 2024

Comments • 191

  • @KevinVeroneau
    @KevinVeroneau 2 years ago +88

    I was also born in 1983 and feel that I haven't gone anywhere in my life either, so I can really relate to HURD here LMFAO

    • @CyberGizmo
      @CyberGizmo 2 years ago +8

      LOL

    • @-MaXuS-
      @-MaXuS- 1 year ago

      Me too fellow 83:er..me too. 🥴

    • @kencreten7308
      @kencreten7308 1 month ago

      Where is where? Where are we? Any way?

  • @dmacnet
    @dmacnet 2 years ago +54

    Good summary. I was in some of the Hurd planning discussions around 1989. Some of the GNU developers (like Mike Haertel, as I recall) were pushing for writing a new, lean, capabilities-based kernel from scratch, but ultimately basing Hurd on Mach was chosen, ironically, to save time, even though Mach was worryingly inefficient in its message passing. The concern was that it would set back the Hurd project by a year or more if it started from scratch. As far as whether its goals have been achieved by using Linux, only partly. Like the rest of the GNU project, the goal of the Hurd was to maximize users' freedom to change and control their computer environments. In Hurd, that is accomplished by moving many traditional kernel functions into user-space. Not for efficiency or reducing crashes, but to give users more flexibility and power over their system. A bit like the FUSE file system, but more comprehensive and not grafted on architecturally. That was back before personal workstations, VMs, and containers were ubiquitous and it became common for the users of a GNU/Linux system to have root access, which perhaps mitigates the need partly. But not entirely.

    • @CyberGizmo
      @CyberGizmo 2 years ago +11

      Thanks @dmacnet, always good to hear from people who worked on the project or planning even. Thanks for filling in the blanks on the goals of Hurd, makes sense

    • @dmacnet
      @dmacnet 2 years ago +11

      I was working on what would become the coreutils, mostly remotely, so I didn't keep up with Hurd development, which was done in the FSF office at the MIT AI Lab where Thomas Bushnell worked. I only visited MIT for about two weeks a year. There were definitely some lively discussions about the direction to go. Everyone wanted a technically clean and innovative (ambitious) design, and empowering users was a motivation for that approach of moving as much as possible into userspace. Mach ended up being the pragmatic compromise choice, for better or worse.

    • @anieziisandezzlas
      @anieziisandezzlas 1 month ago

      @@dmacnet
      Linux uses GPLv2 which allows for tivoisation, and it actually violates the GPL by having binary blobs.

  • @jecelassumpcaojr890
    @jecelassumpcaojr890 2 years ago +14

    In 1987 my partner and I were tasked with finding a Unix-compatible OS in Brazil for my 68020-based Merlin 2 computer, but I was not happy with the options. So I wrote to Richard Stallman (via snail mail) offering to write a kernel for GNU in exchange for being able to use the system commercially - the GPL hadn't been released yet, so I didn't know what restrictions he was thinking of. I am still waiting for a reply (I have had two brief chats with him over the years but didn't ask if he even got my offer).
    I did develop that kernel, but for 286 machines (PC AT), and the project was cancelled in 1988 before much of the rest of the OS was finished. In the mid-1990s I released it on the Internet after the company it was developed for was no longer around. While I designed it, a friend actually programmed it, and I could never convince him to write any comments... at all! So it is probably for the best that Hurd was not built on that.

    • @CyberGizmo
      @CyberGizmo 2 years ago +2

      I put Apple's A/UX on a 68020 processor; the performance wasn't stellar, might have been better to use yours :). Thanks @jecelassumpcaojr890, great story

  • @Ancipital_
    @Ancipital_ 2 years ago +51

    I enjoy these stories and viewpoints a lot. Thank you. They give me some context especially from before I started working with computers which was in about 1996. It's so important to understand backstories and know about origins.

    • @ArniesTech
      @ArniesTech 2 years ago +5

      Exactly. Feels just like DJ is our tech grandpa and we kids sit down around him and listen to the stories about yesteryear 💪

    • @carpetbomberz
      @carpetbomberz 1 year ago

      DJ's memories are right up there with any of the CHM interviews of folks in Silicon Valley. And this one on GNU Hurd is especially good, because it reinforces those discoveries that writing micro-kernels or nano-kernels is hard. Hybrid kernels work. You just have to optimize for the compromises to get the performance. So no surprise things go dormant when they hit stumbling blocks. I'm glad there are people who can devote their time to doing the research, benchmarking and publishing papers.

  • @esra_erimez
    @esra_erimez 2 years ago +79

    Linus Torvalds and Andrew Tanenbaum had a famous "discussion" regarding monolithic vs microkernel designs. At the time, with single-core processors, Linus seemed to have prevailed. But as we get to higher and higher core counts, it appears that Tanenbaum had a good point. This is demonstrable with Minix 3.

    • @CyberGizmo
      @CyberGizmo 2 years ago +49

      and I still owe you a video on Minix; will have to do that in the next couple of weeks

    • @chrystals.4376
      @chrystals.4376 2 years ago +8

      Minix 3 hasn’t been updated in a few years though. At least Redox and Hubri are a thing now?

    • @lewiscole5193
      @lewiscole5193 2 years ago +1

      @@chrystals.4376
      Barrelfish hasn't been updated in a few years either, but I suspect that like Minix-3, it'll continue to influence the thinking of newer OSs to come.

    • @kaihorstmann2783
      @kaihorstmann2783 2 years ago +8

      At that time multi-process was much more a thing than multi-threading, in addition to distributed computing on multiple machines due to single-core processors.
      Now multi-core/multi-thread processors are the norm, and cache coherence between cores is built in at the hardware level. Thus the multi-threaded monolithic approach is much more appealing from a performance and ease-of-implementation standpoint. Less so for academic purity, but in real life who cares when the stuff just works.
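
      A minimal sketch of that shared-memory threading model, assuming nothing beyond the Rust standard library; the eight threads and the single counter are arbitrary stand-ins. Every thread lives in one address space and hardware cache coherence keeps their view of the shared value consistent, with no messages exchanged at all.

      ```rust
      use std::sync::atomic::{AtomicU64, Ordering};
      use std::sync::Arc;
      use std::thread;

      fn main() {
          // One shared counter; cache coherence keeps every core's view of it
          // consistent, so the threads just update it directly.
          let counter = Arc::new(AtomicU64::new(0));

          let handles: Vec<_> = (0..8)
              .map(|_| {
                  let counter = Arc::clone(&counter);
                  thread::spawn(move || {
                      for _ in 0..1_000_000 {
                          counter.fetch_add(1, Ordering::Relaxed);
                      }
                  })
              })
              .collect();

          for h in handles {
              h.join().unwrap();
          }

          println!("total = {}", counter.load(Ordering::Relaxed));
      }
      ```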

    • @RobBCactive
      @RobBCactive 2 years ago +8

      Yet it's Linux that dominates on the Top100 HPC computers. The kernel itself is highly multi-threaded and also can have user space network devices and file systems.
      Linux as a microkernel simply wouldn't have performed; a fairly simple, good-enough working system is what attracts development effort, rather than an ivory-tower theoretical approach.

  • @parrotraiser6541
    @parrotraiser6541 2 years ago +4

    I thought I had a pretty good grasp of computing and Unix history, but this has definitely extended the details of my knowledge. GNU/HURD today sounds like the answer to a question nobody's bothering to ask any more.
    I believe that one successful implementation of a micro-kernel is QNX, the OS inside most of today's automobile control systems, in which case it may numerically exceed almost any other OS.

  • @scottdrake5159
    @scottdrake5159 2 years ago +13

    This was a great one! I hope everyone caught the Jobs NeXT->Apple subplot, because it's really important; and there isn't a single one of us that doesn't want your pristine NeXT shirt! And as obsolete as the innards would be, I would probably be happy to have a NeXT workstation in my boat anchors.
    I think you're right that HURD would require a re-architecture, and there are a lot of ancient GNU personalities that make that difficult. However, all kernels, as you sort of suggest, are evolving towards the "hybrid microkernel" approach that has succeeded; in GNU/Linux's case, it's gone from "good enough" (old Linux was actually really good pragmatic programming) to spreading responsibilities to user space, as you say, and not uncontroversially. I.e., `systemd`, which you've covered in another video, and which is another "good enough" that's going to evolve. It's unfortunate that Poettering has been driven to see MS and the GNU/Linux world as similarly inhumane, but imagine how long we've been using Gnome/GTK, started and run by a MS MVP!
    (My thoughts about MS are my own, and I do suspect skulduggery on Microsoft's part on several fronts: vscode, github, unnecessary typescript, etc., but we're not helping the situation currently.)
    Maybe a Gnome/Wayland/Pipewire/Systemd episode in the future? Though I realize it won't be a recollection video, probably. It would still be excellent.

    • @CyberGizmo
      @CyberGizmo 2 years ago +5

      Oh wow you want me lynched :). Yeah I could do that, can't wait to see the comments on it hahaha

  • @IBITZEE
    @IBITZEE 2 years ago +4

    DJ... you're a pool of wisdom... keep sharing it!!! ;-)

  • @ArniesTech
    @ArniesTech 2 years ago +40

    In fact I would have really loved to see how a complete inhouse GNU/Hurd OS would do. Just like BSD being a complete inhouse thing 💪
    BTW: it feels like DJ is our Tech grandpa and we kids sit down around him and listen to the stories about yesteryear 💪😎

    • @sunilsherkhane9905
      @sunilsherkhane9905 1 year ago +4

      Love to listen to and absorb the knowledge he shares.

  • @zeyadkenawi8268
    @zeyadkenawi8268 2 years ago +2

    I was actually going to request it :)
    Thanks for the vid

  • @liquidmobius
    @liquidmobius 2 years ago +3

    Excellent topic! Thanks!

  • @Appalling68
    @Appalling68 2 years ago +7

    DJ, I just LOVE these type of Unix/Linux history videos. Thank you!

  • @guilherme5094
    @guilherme5094 2 years ago +2

    Thanks DJ, another great video 👍👍!

  • @staninjapan07
    @staninjapan07 2 months ago

    Although I find all this interesting, I always think "I have too little knowledge of the basics to grasp this."
    I have no time or money for a formal course, but I sure would like to have a solid grasp of the basics.
    Thanks for helping me piece together little bits.

  • @matthewhickok4421
    @matthewhickok4421 2 years ago +2

    I am new to your channel and this is the second or third video of yours that I have seen. I do not know anything about your background and I don't know who talked you into starting a YouTube channel, but I love your videos. I love hearing your viewpoint and knowing some of this history. I enjoyed this thoroughly.

  • @mnoxman
    @mnoxman 2 years ago +5

    Before Linux the progress of HURD Kernel was clocked by the movement of glaciers.

  • @MatthewHarrold
    @MatthewHarrold 2 years ago +15

    I love this stuff. I don't claim any expert knowledge of any of this, but from Tom's Root-Boot and Minix to Red Hat 1.0, I've made an effort to make my hardware work better and more understandably ... the open source GNU/Hurd has always been a force in this industry. Hurd has always been a hugely ambitious project, and the underlying philosophy (if that's the right term) has been politically? correct in my opinion. FYI I'm on a MacBook Pro 2013 laptop and use the toolchain (almost) hidden in an Apple ecosystem. I wish I had the disposable income to experiment more with contrary architecture like this. $0.02 and thanks DJ.

    • @kayakMike1000
      @kayakMike1000 2 years ago +2

      You might want to check out Plan9 too.

  • @dezmondwhitney1208
    @dezmondwhitney1208 2 years ago +1

    A Great review again. Thank You. D.J..

  • @StaffyDoo
    @StaffyDoo 2 years ago +1

    Wonderful! I have something good to watch in a couple of hours when I pause to have lunch.

  • @DavidRavenMoon
    @DavidRavenMoon 2 years ago +3

    There was also MkLinux from Apple and OSF, which used the Mach kernel. I ran that on a PowerMac 6100 in 1996. Used the Mach microkernel, version 3.0.

  • @AlexLandefeld
    @AlexLandefeld 1 year ago

    Fascinating overview! Thank you!

  • @TheSolidSnakeOil
    @TheSolidSnakeOil 2 years ago +4

    32 years and we're this close to v1.0 of Hurd.

  • @kayakMike1000
    @kayakMike1000 2 years ago +2

    L4 sounds great!

  • @TheYoungtrust
    @TheYoungtrust 2 years ago +4

    Anthropologists of the future will probably watch your videos.

  • @JohnnieWalkerGreen
    @JohnnieWalkerGreen 2 years ago +3

    I still don't get it. (1) What was the hybrid approach that Apple took? (2) How different was it compared with the Windows/NT approach? (3) Which system better supports "message passing" instead of "shared memory" for a multi-cored system?

  • @RegisMichelLeclerc
    @RegisMichelLeclerc 2 years ago +2

    A good few years ago, I tried to look at the project, but... where do you start to understand how it works, how it's designed, and what the relationships between the different units are?

  • @bushidocodes
    @bushidocodes 1 year ago +4

    I'd like to add that the Windows NT kernel project from Dave Cutler also was based on Mach (originally with POSIX, OS/2, and win32 personalities). Development started in 1989 and the first release was in 1993. Microsoft also hired Richard Rashid (CMU, Mach), who formed Microsoft Research in 1991. Over time, components have moved from userspace to kernelspace for performance reasons.

    • @tomhekker
      @tomhekker 1 year ago +1

      Tbh, the Windows NT kernel was inspired by the design of Mach, not based on it. Dave Cutler has stated in multiple interviews they looked at the design of Mach but didn’t base their code on it at all, and that the design for Windows NT was done in-house at Microsoft.
      NeXT/Apple based their XNU kernel on Mach though, and turned it into a hybrid kernel.

    • @bushidocodes
      @bushidocodes 1 year ago +1

      @@tomhekker Yes, “inspired by” is a much more precise description. 👍

  • @richardthompson6079
    @richardthompson6079 2 years ago +1

    I guess I agree with both sides. It has academic value. It's also redundant. It offers a solution to problems already solved. I enjoyed your talk about this subject.

  • @capta1nt0ad
    @capta1nt0ad 2 years ago +2

    Great video. Keep it up!

    • @capta1nt0ad
      @capta1nt0ad 2 years ago

      Except: 1:31 that’s not Richard Stallman, that’s the silly impersonator from the O’Reilly book about him.

    • @capta1nt0ad
      @capta1nt0ad 2 years ago

      Also, archhurd has been dormant for more than a year and only has a couple hundred packages. It’s also unofficial.

  • @MJ-tn5qp
    @MJ-tn5qp 2 years ago +2

    What about BeOS, wasn't that a microkernel OS? Would like to hear your views on BeOS.

  • @fredsalter1915
    @fredsalter1915 2 years ago +1

    Thanks!

    • @CyberGizmo
      @CyberGizmo 2 years ago

      Thank you fred most kind of you!

  • @esra_erimez
    @esra_erimez 2 years ago +9

    I also like the idea of microkernel servers. For example, a database might be made up of several small "kernels" such as record, index, transaction, SQL parser servers.
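
    A rough sketch of what that decomposition might look like, using threads and channels as stand-ins for separate server processes and IPC ports (a real microkernel would put each server in its own address space); the RecordMsg type and the record/index/SQL split are made up for illustration:

    ```rust
    use std::collections::HashMap;
    use std::sync::mpsc;
    use std::thread;

    // Message sent to the "record server"; the reply channel stands in for an IPC reply port.
    enum RecordMsg {
        Put { key: u64, value: String },
        Get { key: u64, reply: mpsc::Sender<Option<String>> },
    }

    fn main() {
        let (record_tx, record_rx) = mpsc::channel::<RecordMsg>();

        // The "record server": owns the record store, reachable only via messages.
        thread::spawn(move || {
            let mut store: HashMap<u64, String> = HashMap::new();
            for msg in record_rx {
                match msg {
                    RecordMsg::Put { key, value } => { store.insert(key, value); }
                    RecordMsg::Get { key, reply } => { let _ = reply.send(store.get(&key).cloned()); }
                }
            }
        });

        // A "query front end" (stand-in for an SQL parser server) talking to the record server.
        record_tx.send(RecordMsg::Put { key: 1, value: "hello".into() }).unwrap();
        let (reply_tx, reply_rx) = mpsc::channel();
        record_tx.send(RecordMsg::Get { key: 1, reply: reply_tx }).unwrap();
        println!("record 1 = {:?}", reply_rx.recv().unwrap());
    }
    ```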

    • @kaihorstmann2783
      @kaihorstmann2783 2 years ago +2

      Gosh, and go through multiple context switches for anything and everything? Using gigabytes of shared memory, and administering locking and coordination across distributed processes? Sounds like a recipe for a not-so-performant non-enterprise/big-data platform. But it costs a lot more to develop, has a lot more single points of failure, and adds overhead during development.

    • @lewiscole5193
      @lewiscole5193 2 years ago +1

      @@kaihorstmann2783
      Well, I suppose you could be right ....
      OTOH, as I understand it, "Big Data" already requires distributed applications with the attendant locking, gobs of memory (whether shared or not), and enough computing power to choke a moose (or rather lots of herds of mooses).
      All of the other stuff you wave your arms at seem to me to be the price you pay for "Big Data" ... regardless of whether or not we're talking about micro-kernel or monolithic kernel.

    • @kaihorstmann2783
      @kaihorstmann2783 2 years ago

      @@lewiscole5193 You are right as regards "Big data" being large distributed systems. But as I understand it, read access there by far exceeds write access, so coordinating write access is less critical, or new data worms its way gradually through the systems (I think of Google and the like).
      My previous point rather concerns online transaction systems where change throughput is equally important.

    • @lewiscole5193
      @lewiscole5193 2 years ago

      @@kaihorstmann2783
      > You are right in regards "Big data" being large
      > distributed systems.
      Okay. So we agree on one thing.
      > But as I understand these read access by far
      > exceeds write access, thus coordinating write
      > access is less critical, or new data worm their
      > way graually through the systems (I think of
      > Google and the likes)
      I'm not at all sure that I understand what you mean when you talk about read accesses exceeding write accesses thereby making write accesses less "critical".
      My understanding is that the explosion of non-SQL databases is at least in part driven by the fact that they really aren't able to deal with writes with the same ease/speed as reads, hence the use of LSM trees in the non-SQL alternatives.
      Since I'm not particularly familiar enough with "Big Data", I don't feel warm and fuzzy about accepting your statements without further elaboration as they sound a bit like overly broad generalizations at this point.
      > My previous point rather concerns online
      > transaction systems where change throughput
      > is equally important.
      Since I haven't tried to keep up-to-date on the latest goings on in computing, the transaction systems that I'm aware of are basically airline reservation systems.
      They work very, very well and they run on monolithic kernels, not because they have to, but because that's what was available in the Good Old Days.
      However, while everyone is familiar with SABRE, not everyone seems to be familiar with the history before such a system was cooked up ... lots of failures, lots of money pissed away ... the sort of thing you seem to be attributing to micro-kernels as being unavoidable.
      From my experience, it's been obvious for a long, long time that if you want to run something as fast as possible, you want to stay out of the kernel, because once you entered, it was anyone's guess when you would come out.
      My understanding is that part of the reason LMAX Disruptor was so fast was because its developers recognized this fact.
      And if you stay out of the kernel, I don't see why I should much care if we're talking about a monolithic kernel or a micro-kernel.

  • @mikehosken4328
    @mikehosken4328 8 months ago

    You did get a glimpse of GNU/BSD with the Debian GNU/kFreeBSD release. It used the FreeBSD kernel with the Debian userland.

  • @PaulWaak
    @PaulWaak 2 years ago +3

    You mentioned OSF/1. I remember liking it when I used it way back when. Could you tell us about its history too?

    • @CyberGizmo
      @CyberGizmo 2 years ago +7

      Yeah I think so, I was on the opposite side of that split, but I think I can cover it might have some bias in it though :)

  • @dimedriver
    @dimedriver 2 years ago +5

    What about Tru64 UNIX, DEC's Unix for the Alpha? Pretty sure that was Mach-based too. I remember having to "recompile" the kernel to add drivers for new SCSI cards and seeing tons of CMU license stuff fly by. The compiling was really just relinking object files, as it was a closed-source system.

    • @CyberGizmo
      @CyberGizmo 2 years ago +2

      I'm trying to remember what DEC called their Mach-based OS... oh well, might have to go look it up. OK, I looked it up: duh, DEC OSF/1

    • @lewiscole5193
      @lewiscole5193 2 years ago

      Well, if you want to talk about DEC OSs, what about SPIN?
      Aside from whatever extensibility it might have had, the really good thing about it from my perspective is that it wasn't Unix.

  • @emik365
    @emik365 2 years ago +6

    Thank DJ. Could you find time to assess RedoxOS?
    Btw... I think many would enjoy your lesson on microkernels/hybrid-kernels in general and in the context of MirageOS, Plan9 and others...
    What are the overlapping and unique functionalities?
    Are these systems suitable (or what will it take to mature) for mission critical situations?

    • @PavelSayekat
      @PavelSayekat 2 years ago +1

      Should have scrolled down 11 minutes ago! wow

    • @CyberGizmo
      @CyberGizmo 2 years ago +4

      added to the list for @PavelSayekat thanks @emik365

    • @lewiscole5193
      @lewiscole5193 2 years ago +2

      Whether or not something is "suitable" for "mission critical situations" depends on what the mission or missions happen to be and what sorts of requirements and constraints are "acceptable" with regard to that mission.
      Since you haven't stated what mission(s) you have in mind or what requirements or constraints you're willing to tolerate, your question about suitability is basically meaningless (at least to me).
      For example, Once Upon a Time, the OS I took part in the care and feeding of was modified by one customer to NEVER take a system stop (i.e. all the system stops were removed from the code).
      Why? Because the mission had to do with monitoring/controlling missiles and so the priority was to keep on running, even if poorly, no matter what.
      Data integrity was a distant second.
      Meanwhile, in "normal" use, the OS was also used in banking and airlines environments.
      While they were willing to take stops, if they had to, their approach to what happened when a stop occurred was different.
      In the case of a stop in a banking environment, the customer wanted all the information that could be collected to resolve/prevent the cause of the stop from ever happening again, even if that meant that the system stayed down for awhile.
      Data integrity was everything and so leaving things frozen as they were at the time of the stop was acceptable.
      In the case of a stop in an airline environment, however, the customer wanted their system back up RIGHT NOW.
      They could reconstruct their data later, and since having to deal with seats that were overbooked was something they naturally had to deal with, they could live with data being lost.
      And there were penalties in some of the contracts IIRC for the system not getting back up within a certain amount of time.
      Not that it matters in the slightest, but here's something from a 2020 Unisys Web-inar which included a mention about "vulnerabilities" of various OSs based on data taken from NIST's National Vulnerability Database for that year:
      Operating System         | Number | Date of Last Vulnerability | Compromised User Data
      Unisys OS2200            |      0 | --                         | No
      Unisys MCP               |      3 | 02/26/2018                 | No
      IBM System z (zSeries)   |     25 | 11/14/2019                 | Yes
      IBM System i (iSeries)   |     28 | 11/08/2019                 | Yes
      OpenVMS                  |     39 | 01/28/2020                 | Yes
      HP-UX                    |    370 | 01/28/2020                 | Yes
      IBM AIX                  |    398 | 01/28/2020                 | Yes
      Unix                     |    890 | 01/29/2020                 | Yes
      Solaris                  |  1,118 | 02/04/2020                 | Yes
      Linux                    |  7,819 | 02/07/2020                 | Yes
      Windows                  |  7,977 | 02/06/2020                 | Yes
      So do these numbers make it look like Linux is ready for prime time (i.e. "mission critical" work) to you?
      If so, are you willing to accept the same sort of thing for the other OSs you mentioned?

    • @Gooberpatrol66
      @Gooberpatrol66 2 years ago +3

      >What are the overlapping and unique functionalities?
      Hurd/Plan9 allows you to parse anything with a program and provide that parsed output as a tree of files to other programs. You can kind of think of Hurd as Plan9 with a global namespace instead of per-process namespaces (and of course the ability to run GNU programs).
      >Are these systems suitable (or what will it take to mature) for mission critical situations?
      In a vacuum, microkernels are more suited to mission-critical applications as they are more fault-tolerant. And Hurd is theoretically more secure as its permissions are more fine-grained than a normal Unix kernel. Though I wouldn't use Hurd for anything like that unless it had way more developers looking over it for bugs.
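
      This is not the actual Hurd translator interface (real translators speak Mach messages through MIG-generated stubs and attach to filesystem nodes), but as a loose userspace analogy, here is a sketch that parses a made-up key=value blob and materializes it as a tree of small files other programs can read; the /tmp path and the input format are invented for the example:

      ```rust
      use std::fs;
      use std::io::Write;
      use std::path::Path;

      // Parse "key=value" lines and expose each entry as its own file under `root`,
      // so other programs can read the parsed data through the filesystem.
      fn materialize(input: &str, root: &Path) -> std::io::Result<()> {
          fs::create_dir_all(root)?;
          for line in input.lines() {
              if let Some((key, value)) = line.split_once('=') {
                  let mut f = fs::File::create(root.join(key.trim()))?;
                  writeln!(f, "{}", value.trim())?;
              }
          }
          Ok(())
      }

      fn main() -> std::io::Result<()> {
          let sample = "hostname=hurdbox\nkernel=gnumach\n";
          materialize(sample, Path::new("/tmp/parsed-config"))
      }
      ```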

    • @atf300t
      @atf300t 2 years ago

      > QNX supports both ARM and x86 (and x86-64) CPUs,
      > and the last time I looked, it supported something
      > like 32 CPUs per OS instance.
      The number of supported CPUs says nothing about how well it scales up, but just for the record, Linux supports at least 2048 CPUs.
      > And since the QNX message handler could pass messages to other
      > QNX instances over a network
      Similarly, you can build a Linux cluster, and such Linux clusters are widely used. However, let's not forget multiple instances are only possible for tasks that are relatively easy to parallelize, i.e. they can have large computational chunks that can be executed independently.
      > The only time that a fine grain lock has an advantage is
      > if there is low contention on the shared object.
      Fine grained locking can reduce contention. Let's take the filesystem as an example. If you have just one filesystem lock, then you have a lot of contention on that lock, because virtually every process needs to read or write something. You may say let's use message passing instead, and it can be slightly faster than having just one highly contended lock, but it is not going to scale well anyway.
      A better approach is to add an RW lock per inode, so the most common operations only need to take inode locks, which have little contention. You still need the filesystem lock for cross-directory rename, and it can be tricky to do cross-directory rename properly, but if you do that correctly, it scales up well.
      In most cases, if you have a highly contended lock, you are not using fine-grained locking or you have not designed it properly. Admittedly, message passing or coarse locking is much easier to implement correctly, so if performance is good enough then development time is usually the primary concern, especially if we speak about specialized commercial software.
      > Crossing sockets is a killer.
      Sure, but if you do that a lot, you probably do not care about performance. Linux avoids that by using per-CPU data structure.
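
      A toy sketch of the per-inode locking idea described above, assuming only the Rust standard library: each inode carries its own RwLock, so a writer on one file never blocks a reader of another. The Inode struct and the two-entry table are stand-ins, not real kernel data structures.

      ```rust
      use std::collections::HashMap;
      use std::sync::{Arc, RwLock};
      use std::thread;

      struct Inode {
          size: u64,
          data: Vec<u8>,
      }

      fn main() {
          // Each inode has its own lock, so operations on different files
          // do not contend with each other.
          let mut table: HashMap<u64, Arc<RwLock<Inode>>> = HashMap::new();
          table.insert(1, Arc::new(RwLock::new(Inode { size: 0, data: Vec::new() })));
          table.insert(2, Arc::new(RwLock::new(Inode { size: 0, data: Vec::new() })));
          let table = Arc::new(table);

          let writer = {
              let table = Arc::clone(&table);
              thread::spawn(move || {
                  let inode = Arc::clone(table.get(&1).unwrap());
                  let mut guard = inode.write().unwrap(); // exclusive, but only for inode 1
                  guard.data.extend_from_slice(b"hello");
                  let new_size = guard.data.len() as u64;
                  guard.size = new_size;
              })
          };

          let reader = {
              let table = Arc::clone(&table);
              thread::spawn(move || {
                  let inode = Arc::clone(table.get(&2).unwrap());
                  let guard = inode.read().unwrap(); // shared; never blocked by the writer above
                  println!("inode 2 size = {}", guard.size);
              })
          };

          writer.join().unwrap();
          reader.join().unwrap();
      }
      ```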

  • @Kytk7
    @Kytk7 1 year ago

    Good video, great job

  • @PenguinRevolution
    @PenguinRevolution 2 years ago +2

    The problem I've had with the GNU/Hurd kernel is that it really has a tough time with modern systems and hardware; that's due to the fact it really hasn't been updated for close to a decade. I know Debian provides security updates for it. But I think you're right, in order for Hurd to be feasible it needs to be reworked from the ground up. The GNU Guix team should be thinking about doing this, as they are the maintainers of GNU/Hurd (I think that is the case).

    • @lale5767
      @lale5767 1 year ago

      Maybe they'll bring life to it and take over.

  • @CyborgZeta
    @CyborgZeta 2 years ago

    Very interesting history lesson. Thank you.

  • @seventone4039
    @seventone4039 2 years ago +2

    Installed it in a VM with Debian and it crashed during install and later.

  • @MarcDunivan
    @MarcDunivan 1 year ago

    Can you talk about the relationship with the Minix 3 microkernel and the 64-bit and ARM ports?
    Isn't the issue with the Linux kernel that it redistributes non-free BLOBs, and so is not completely GPLv2 licensed? (And not moving to GPLv3.)

  • @williamshenk7940
    @williamshenk7940 2 years ago

    excellent presentation once again!!! You should have your videos sent to the Smithsonian for archives of the computer era.

    • @CyberGizmo
      @CyberGizmo 2 years ago

      Thanks William, I doubt they would be interested in the musings of some old timer systems engineer :)

    • @GodEmperorSuperStar
      @GodEmperorSuperStar 1 year ago

      It's a little-known fact that the Smithsonian archives include the Library of Congress and also the Library of Congress contains the Smithsonian archives. There's got to be a copy of Hurd in there somewhere.

  • @pauldufresne3650
    @pauldufresne3650 2 years ago +3

    I was interested in Hurd/Debian about 2 years ago. Most of the text-based Debian packages work without source code modifications, that is, the libc library is pretty good. There were some issues with Xorg; I remember the XFCE menu was not exactly working. Samuel Thibault is one of the main developers there. At the same time, stuff like the PCI arbiter was still in development 2 years ago. They were beginning to adopt NetBSD rump kernels, so as to be able to use NetBSD device drivers. I think it did not evolve much, but I am unsure. Writing native applications rather than ones using libc is relatively hard, since you have to know how to use MIG to translate interface code to Hurd code. Debian/Hurd is probably the most complete microkernel system I know. Genode OS is interesting, but the Sculpt GUI interface is quite weird. HelenOS too is an interesting project. Now I like Redox OS as a microkernel project (an OS written in Rust) from a System76 developer. Also, Rust is beginning to integrate with Linux itself. Redox OS seems to have an "everything is a URL" approach that looks interesting.

  • @antoniostorcke
    @antoniostorcke 1 year ago

    The approach that Google is taking with Fuchsia may be a good one. Linux compatibility will definitely be important for GNU Hurd.

  • @PavelSayekat
    @PavelSayekat 2 years ago +6

    Do a review on Redox OS; it is based on a microkernel written in Rust, under the MIT licence, by Jeremy Soller from System76, and so far it works.

    • @CyberGizmo
      @CyberGizmo 2 years ago +3

      Will take a look at it and see if I can get it installed

    • @chrystals.4376
      @chrystals.4376 2 years ago

      @@CyberGizmo Hubri also looks interesting but IIRC it’s mostly for Servers.

  • @fleontrotsky
    @fleontrotsky 7 months ago

    Haha. I subbed simply because hurd was in the title.

  • @nanothrill7171
    @nanothrill7171 1 year ago

    Ah, semi-fond memories of working on HURD in the late 90s and early 2000s under the auspices of the Debian project (mostly porting stuff with string length issues, but I did write some daemons).

  • @nextcomputerparts
    @nextcomputerparts 2 years ago

    I think XNU is still available so you can still look into the use of mach.

  • @RonJohn63
    @RonJohn63 2 years ago +7

    What you didn't mention is that Hurd failed because microkernels are *REALLY COMPLICATED.* Message passing is easy, but MP on a busy system will either crap on the hardware (RAM or filesystem) or be sooooo slooooow because all the messages have to be synchronous.
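
    As a very rough illustration of the synchronous round-trip cost being argued about here (and of what L4-style fast IPC had to attack), a sketch that times plain function calls against blocking request/reply messages between two threads. The absolute numbers are meaningless; only the gap matters, and real kernel IPC also pays for context switches and address-space crossings that this does not model.

    ```rust
    use std::sync::mpsc;
    use std::thread;
    use std::time::Instant;

    fn add_one(x: u64) -> u64 { x + 1 }

    fn main() {
        const N: u64 = 100_000;

        // Direct call: the "monolithic" path, just a function call.
        let start = Instant::now();
        let mut acc = 0u64;
        for i in 0..N {
            acc = acc.wrapping_add(add_one(i));
        }
        println!("direct calls: {:?} (acc = {})", start.elapsed(), acc);

        // Synchronous round trip: send a request, block until the "server" replies.
        let (req_tx, req_rx) = mpsc::channel::<(u64, mpsc::Sender<u64>)>();
        thread::spawn(move || {
            for (x, reply) in req_rx {
                let _ = reply.send(add_one(x));
            }
        });

        let start = Instant::now();
        let mut acc = 0u64;
        for i in 0..N {
            let (reply_tx, reply_rx) = mpsc::channel();
            req_tx.send((i, reply_tx)).unwrap();
            acc = acc.wrapping_add(reply_rx.recv().unwrap());
        }
        println!("message round trips: {:?} (acc = {})", start.elapsed(), acc);
    }
    ```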

    • @CyberGizmo
      @CyberGizmo 2 years ago +6

      Actually I did, and L4 fixed the slow IPC problem; it was faster than a syscall

    • @RonJohn63
      @RonJohn63 2 years ago +2

      @@CyberGizmo I paid attention to your whole video but heard nothing about how slow or buggy that Hurd is

    • @lewiscole5193
      @lewiscole5193 2 years ago +3

      @@RonJohn63
      As Mr. Ware pointed out, L4 isn't particularly slow, thus showing that your original comment was an incorrect over-generalization.
      Ditto MP on L4.
      And then there's QNX which is a proprietary micro-kernel that does rather well at real time uses which sort of further demonstrates the incorrectness of your original comment.
      So it would appear that rather than admit to the incorrectness of your statement(s), you're now trying to move the goalposts by focusing more specifically on the performance of Hurd.
      That's nice, but since YOU made the assertion about Hurd, it's now up to you, not Mr. Ware, to justify your position.
      I await with bated breath for YOU to make your own damn video so that I and others can poke holes in it.

    • @RonJohn63
      @RonJohn63 2 years ago +1

      @@lewiscole5193 Hurd is not L4 or QNX.
      And you're right that I over-generalized. L4 solved the IPC problem, but Hurd did not.

    • @capability-snob
      @capability-snob 2 years ago +2

      I'm not sure anyone evaluating HURD in the last 20 years would have said "if only it performed better!". People don't use it because people _really like their hardware to work_. They want working audio and USB. They want their random to be secure (it wasn't for many years). They want programs that require posix shmem to work. They want SMP and 64 bit.

  • @ericespino7361
    @ericespino7361 2 years ago

    What kernel did Denis use?

  • @jakobw135
    @jakobw135 6 months ago

    What's the difference between GNU- Linux today, and any other Linux distribution?

  • @SchwaAlien
    @SchwaAlien 2 years ago

    I have fond memories of downloading a copy of Mac OS X pre-release CD on my 33.6kbaud modem over multiple days, I was soooo excited to have a tiny slice of that NeXT coolness on my G3 tower which was running clunky old System 8 at the time... yeah it wasn’t called Mac OS, HUGE changes came with it. Eventually I installed GNU/Linux on it because it was possible (first having discovered some GNU software was ported to Mac OS X, but not most) and then that lead me to use it on Windows machines which were far more affordable to me as a young adult. I eventually switched back to Windows XP since I had purchased new hardware which had very poor support in Linux back then, so I couldn’t do things properly like put the computer to sleep or other seemingly simple tasks like respond to a connected UPS. Now I’m slowly weening myself off Windows as much as possible, it’s a necessary evil since the fact that other people uses it makes it somewhat essential to stay on top of since I troubleshoot for people on occasion... but there are things about Linux I still don’t completely comprehend like why there’s no adequate default support for controlling the temperature on a laptop, it’s pretty useless if it recognizes that it’s overheating and shuts itself down but cannot spin the fan at full speed first to save itself.
    I have a laptop that I threw PiHole on to help protect our home network from excessive advertising and tracking but now I cannot use it for anything else like watching video on the web since it’ll eventually overheat and shut itself down which is not what you want to have happen to the network’s DNS server... I used to run it on a Raspberry Pi that hosted a 3D printer but I switched from Octoprint to Mainsail with Klipper and decided that I didn’t want to fuss with getting the two webservers playing nice on one machine all the time (updates changed it back to port 80) and went with putting it on a separate Linux machine with it’s own battery so it’s fairly reliable to survive the frequent power outages we get here.

  • @agranero6
    @agranero6 2 years ago

    GNU Hurd will become available in a stable version when One Piece ends or on the Greek calends, whichever happens last. I once went to a Stallman speech that featured GNU Hurd too; it was so patronizing, he talked about what a microkernel is as if nobody there knew what it was, without talking about specifics... because there was nothing to talk about. It's just toy code someone pokes at from time to time...
    This shows how detached from the real world some people are.
    DJ Ware nailed it.
    (I bet you are hating me now, huh?)

  • @mskiptr
    @mskiptr 2 years ago +1

    Frankly, my 'ideal kernel' idea would be that more akin to 'safe-language' operating systems. I'd rather make no distinction between kernel- and user-space and instead guarantee safety by handling different things at different abstraction levels: e.g. when a given piece of code is given the control over some resource, it uses it however it likes to and it's just impossible (at the type level or whatever) to have several components use a single resource directly. Out of that, shared resources (like memory management, scheduling, IO, …) could be built - similarly to our regular data structures and (pure) functions.
    In short: a lot of simple yet generic abstractions, composition everywhere and a language that together with type-checking can also verify models
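
    Rust's ownership rules give a small taste of the "impossible at the type level" part: in this sketch a made-up Uart resource is a plain non-Clone value, so once it is moved into a driver no other component can touch it. The device, addresses, and names are invented for the example.

    ```rust
    // A hardware resource modeled as a plain (non-Copy, non-Clone) value:
    // whoever owns the value owns the device.
    struct Uart {
        base_addr: usize,
    }

    impl Uart {
        fn write_byte(&mut self, b: u8) {
            // Stand-in for a real MMIO write.
            println!("uart@{:#x} <- {:#04x}", self.base_addr, b);
        }
    }

    // A driver takes ownership of the Uart; nothing else can use it afterwards.
    struct SerialConsole {
        uart: Uart,
    }

    impl SerialConsole {
        fn new(uart: Uart) -> Self {
            SerialConsole { uart }
        }
        fn print(&mut self, s: &str) {
            for b in s.bytes() {
                self.uart.write_byte(b);
            }
        }
    }

    fn main() {
        let uart = Uart { base_addr: 0x1000_0000 };
        let mut console = SerialConsole::new(uart);
        console.print("hi");
        // uart.write_byte(0x41); // would not compile: `uart` was moved into the console
    }
    ```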

  • @alexeisaular3470
    @alexeisaular3470 2 years ago +1

    Finally

  • @katrinabryce
    @katrinabryce 2 years ago

    I guess the first question is: What would Hurd be capable of doing that can't be done with Linux or the BSD kernels?
    The second question would be: Would it be better to start again with a completely new kernel rather than modify one of those existing kernels to achieve that task?
    It seems to me that the best-case outcome for GNU/Hurd is something that works just as well as GNU/Linux or FreeBSD. In which case, we already have those operating systems, and people won't see any reason to switch from them to something new and untested.

    • @alexxx4434
      @alexxx4434 2 years ago +1

      I guess the fact that HURD lies dormant is a kind of answer: people don't see real practical value in it beyond just a proof of concept. Microkernels' general tradeoff is providing more stability and security in exchange for performance. And in most cases this tradeoff is unnecessary.

  • @n0kodoko143
    @n0kodoko143 2 years ago

    Super cool. In a world of "micro" everything, the monolithic kernel is still king (kinda)

    • @emuhill
      @emuhill 1 year ago

      Linux isn't truly monolithic anymore and hasn't been for quite some time. From what I understand, AT&T Unix was modular. In other words, drivers were loaded and unloaded as modules. Linux added modular support early on and became a monolithic modular hybrid kernel.

  • @simonstrandgaard5503
    @simonstrandgaard5503 2 years ago

    Interesting topic and well-explained.
    In the late 90s I worked on my own OS and used C/C++/x86 specific asm. Nowadays if I were to redo it, then Rust would be my choice.

    • @simonstrandgaard5503
      @simonstrandgaard5503 2 years ago

      @tripplefives what is a "fad" language?
      Yes, I like it, based on what I have code in Rust so far. However Rust has a learning curve.
      Swift/Kotlin/C# have concerning ties with Apple/Google/Microsoft.

  • @JonDisnard
    @JonDisnard 2 years ago +1

    BeOS was kinda successful. It was a microkernel design.

  • @gsgatlin
    @gsgatlin 1 year ago

    What good is a kernel without drivers? They can never catch up to Linux hardware support. Still I commend them for trying.

  • @AlanPope
    @AlanPope 2 years ago

    Pretty sure the photo you posted of Thomas Bushnell at the start of your video is a different guy, who has passed away. The Thomas Bushnell who worked on GNU/Hurd is still alive, and working at Google.

    • @CyberGizmo
      @CyberGizmo 2 years ago

      I am pretty sure it’s the right guy his obit said he worked on GNU Hurd

  • @ivanmaglica264
    @ivanmaglica264 2 years ago

    Maybe Hurd would have been relevant if the project had re-pivoted to be a cloud OS 10 years ago, since hardware drivers could be just virtio or something, cutting all the unnecessary layers and making a thin cloud OS

  • @CRYPTiCEXiLE
    @CRYPTiCEXiLE 2 years ago +3

    Yeah, I have tried many GNU/Hurd distros such as the Debian version and Arch, but yeah, lol, I much prefer GNU/Linux.

    • @rusi6219
      @rusi6219 14 days ago

      @@CRYPTiCEXiLE nothing beats Microsoft/Windows

  • @_general_error
    @_general_error 2 years ago +1

    Linus Torvalds never "released" Linux... he literally uploaded it to a school FTP server and people started extending it.

    • @CyberGizmo
      @CyberGizmo 2 years ago +6

      As i recall from that time period uploading to an ftp server was the only way to release software :)

  • @jk-gn2fu
    @jk-gn2fu 5 months ago

    Should've started as a monolithic kernel or at least a hybrid.

  • @BandanazX
    @BandanazX 2 years ago +2

    Taligent.

  • @jell_pl
    @jell_pl 2 years ago +1

    I don't miss Hurd, as it was based on the ugly Mach 3, which was a big mess when it was chosen.
    But... I miss the L4 family, which had such big potential - it didn't go mainstream...
    Still, the only bigger successes in the family of microkernels AFAIK were QNX and... Windows NT, based AFAIR on a microkernel from DEC labs, initially crafted for Alpha CPUs...

  • @b43xoit
    @b43xoit 2 years ago

    Talk about the B5700.

  • @ThomasBushnellBSG
    @ThomasBushnellBSG 1 year ago

    I don't know who that photo is you put up when you said my name, but it's not me.
    Also, early 80s? Off by a decade

  • @lewiscole5193
    @lewiscole5193 2 years ago +1

    [16:45] "First of all, microkernel ... every microkernel ... umm, pure microkernel I have ever read about, with the exception of a few ... there a few that have been successful, but they're very small and they have very specific purposes for which they are being used ... "
    I don't know what you mean by "small" -- physically as in KLOC or installed instances -- but in either case, I'm not sure where you put QNX (assuming that you do).
    Yes, QNX could be small, but depending on what you package it up with, it doesn't have to be.
    And in terms of instances installed, they used to be all over the place in the form of Blackberries.
    More recently, I don't know if Ford changed their minds about QNX, but if they haven't, you might well be driving one.
    So care to wave your arms at whether or not QNX is one of the successful microkernel exceptions you were talking about?

    • @CyberGizmo
      @CyberGizmo 2 years ago +2

      Yes, QNX was one I was thinking of; they were the first to be successful using a microkernel, although I always put them in the embedded, real-time OS class... and believe it or not, I used it: it was the OS installed on the Burroughs (later Unisys) ICON machine, a computer built for education with Canadian origins as I recall... and I'm not arm waving... :)

    • @lewiscole5193
      @lewiscole5193 2 years ago

      @@CyberGizmo
      I was wondering if you knew of/remembered ICON.
      It seemed like A Good Idea, at least in principle, and so it was a bit disappointing to me to hear that it died.

    • @sdrc92126
      @sdrc92126 2 years ago

      @@CyberGizmo Yup. I visited them once in Ottawa IIRC. Quark(? Maybe Photon?) was their GUI I think. A full OS/GUI on a single 1.2MB floppy.

  • @kwpctek9190
    @kwpctek9190 2 years ago +1

    Listening in background mode envisioning already my horribly slow Apollo Workstation at Datacrown Inc getting 'upgraded' by an obstinate Richard ideologue plastering a hideous human RAM logo on it. So glad the past is past.

  • @Turalcar
    @Turalcar 3 months ago

    Isn't MINIX used in some Intel firmware? So technically it has billions of installs

  • @davidweeks1997
    @davidweeks1997 2 years ago +3

    Thank you! It is very important to avoid any power of "no" over one's ability to develop. I still use GNU/Linux as Linux doesn't work without GNU. I haven't gotten too far into kernel practices, though I am getting there. The jurisdiction between kernel space and user space seems to be a defining characteristic of uh, everything. (duh?)
    Thank you for this report on HURD.
    A pet peeve of mine is the mis-use of AI/artificial intelligence. We are at (past?) the point of engineered cognition, well beyond artificial intelligence. Yes, there is such a thing as artificial intelligence, but that does not apply to engineered cognition. I'd like to see the community adopt this distinction, as the continued use of AI in place of EC is exactly like saying computers do artificial math. Yeah, really stupid. It is embarrassing. I don't respect anyone who mis-uses the term AI. EC is the proper term.

    • @CyberGizmo
      @CyberGizmo 2 years ago

      Did you see the latest AI video where the interviewer got the AI so mad it threatened to kill him?

    • @davidweeks1997
      @davidweeks1997 2 years ago

      @@CyberGizmo I'll check it out. There's AI then there is EC, engineered cognition. It is a failure not to make that distinction.

  • @Cyril29a
    @Cyril29a 1 year ago

    4:38 Calling the university Carnegie Mulligan is either a wonderful joke or one of the best freudian slips ever.

  • @dsblue1977
    @dsblue1977 1 year ago +1

    Very good and informative video. I think, however, that I would have liked more content about why NeXT/Apple were successful in their approach. One thing that I remember was that Mach itself on NeXT was initially used as a platform enabler rather than a full microkernel. The idea was to have a virtual platform to work on. One thing that many people do not get is that Mach started from PhD research in which the problem was how to create a dependable kernel for a multiprocessor system. That was Avie Tevanian's initial question. Mach was never intended to be used in a personal computer (even the NeXT was a workstation). This is completely different from the problem that Linus Torvalds had, which was: how can I run a UNIX-like OS on a 386 PC? Consequently, many of the ideas of Tevanian were not restricted by the reality of the hardware and afforded conceptual purity. Mach was created (and is maintained) by a relatively small number of people. Linux, on the other hand, requires a massive amount of resources, which would be impossible if there were not such a large community around it always trying to improve and work around the original design limitations. XNU (Mach) has proved to be quite stable (iOS) and more resource-effective regarding development than other kernels. I think GNU/Hurd would be a more dependable kernel than Linux in the long run; unfortunately, not many people seem to have the perspective and will to keep up its development. I honestly think that Linux (as a kernel) will hit a wall in complexity during Linus Torvalds's lifetime. I also think that Richard Stallman was a factor in the slow development of GNU/Hurd, since it has been reported that it is very difficult to work with him and many developers simply prefer the Torvalds pragmatic/realistic approach.

  • @Zeropadd
    @Zeropadd 2 years ago +1

    💗❤️♥️💖💝💞💕💟❣️

  • @AlejandroRodolfoMendez
    @AlejandroRodolfoMendez 1 year ago

    Hurd's development cycle makes ReactOS look fast

  • @alexanderwhite8320
    @alexanderwhite8320 1 year ago

    This video should be titled "The hystery of GNU Turd". It never delivered a stable, production-quality product.

  • @bluesquare23
    @bluesquare23 1 year ago

    Somebody go wake those CMU professors up. Time to write a new Kernel!

  • @dougphillips5686
    @dougphillips5686 1 month ago

    When someone is going to tell the story, we expect the names to be pronounced correctly.
    The 'G' is silent and there is no a. It is not ganew, it is pronounced like new.
    The same for gnome. The g is silent and it rhymes with home.

  • @rac116
    @rac116 1 year ago +2

    We can throw the Hurd project to an AI and ask it to solve all the problems 😂

    • @CyberGizmo
      @CyberGizmo 1 year ago

      LOL, might throw the AI into a mobius loop :)

  • @shawnchalfant1595
    @shawnchalfant1595 1 year ago

    Let it die. 30 years and still churning and nothing to really show.

  • @stephenJpollei
    @stephenJpollei 2 years ago

    Maybe a very controversial opinion... hybrid kernels, with a monolithic kernel strapped onto a microkernel, have the disadvantages of either pure design and the advantages of neither.

    • @lale5767
      @lale5767 1 year ago

      Dahlia OS could be something like that. The microkernel is google's Zircon though.

  • @6Diego1Diego9
    @6Diego1Diego9 2 years ago

    how do we know you're not bullshitting?

  • @michaelthompson7217
    @michaelthompson7217 2 years ago +3

    If GNU was smart they would sunset and archive this project and refocus attention on new working kernels.
    Unfortunately, due to the pride of those at the top, they want to pretend this obvious failure is still going, wasting the time of integrators and developers by pretending it's not dead.
    It just shows that GNU is more about politics and ideology than it is about technology

    • @capability-snob
      @capability-snob 2 years ago +3

      The GNU project doesn't have people they can move to different projects, it has contributors that care about different things and choose how to spend an hour or two on the weekend.

    • @michaelthompson7217
      @michaelthompson7217 2 years ago

      @@capability-snob GNU can stop funding the attention on HURD and archive the project if they want to influence where developers go. Instead they keep it on life support, still hosting the site, hoping that they’ll have one or two people working on it so it’s never declared dead

    • @Gooberpatrol66
      @Gooberpatrol66 2 years ago

      @@michaelthompson7217 GNU spends no funding on Hurd besides the cost of hosting a handful of low-traffic webpages, which adds up to maybe a bit of pocket change a month.

  • @YanestraAgain
    @YanestraAgain 1 year ago

    Since Linux already existed, Richard Stallman wanted the expression "GNU/Linux" to disappear, because Linus Torvalds wouldn't get himself instrumentalized in the way R.S. wanted. But actually, about the work on the project you spread misinformation. It's not so difficult to find out about that. Just try.

  • @gregzeng
    @gregzeng 2 years ago +3

    Interesting are the specific personalities.
    In the legal battles that damaged and delayed Unix, the bureaucrats and lawyers dominated everything. The Command Line warriors that produce and follow this channel are too keyboard damaged to know the world outside the keyboard.
    My involvement started with the common users who wanted computer tools. The mini and mainframe computer idiots dominated the various institutions and the official computer organizations. These traditional conservatives hated the microcomputer world, with its WYSIWYG applications.
    The business world did not care about the CLI obsessives, who did not know about the common English language. Apple, Digital Research and Microsoft used common English, which avoided the Case-sensitive code of the Unix world.
    This exotic use of the English language by the Unix CLI obsessives still haunts computer use, making CLI so difficult for Unix type systems, such as Linux, compared to English-friendly Microsoft.
    This partly explains why Unix based systems cannot easily enter the world of the common person. Unix persons pretend that WYSIWYG and the Xerox convention of WIMP do not exist. Their version of GNOME (3) in Linux is the unpopular result of these inhuman distortions.

    • @CyberGizmo
      @CyberGizmo 2 years ago

      Thanks Greg for filling us all in, appreciate you taking the time to do this! I am forever a CLI junkie; it's a hard habit to break

  • @jamesrclayton
    @jamesrclayton 1 year ago +2

    "day-mons" This pronunciation makes me illogically upset. Daemon is an antiquated spelling of demon. It's pronounced "dee-mon". You know -- the devil mascot of the freebsd os? It stands for disk and execution monitor. But, in this weird way, the modern pronunciation has shifted to "day-mon", for even grey beards like DJ so double digit iq end users understand what he's talking about. I don't want to live on this planet anymore.

    • @CyberGizmo
      @CyberGizmo 1 year ago

      Let me see if I can clear this up for you a little bit. In written form it's easy to tell the difference between demon and daemon in print, but what if you were talking to someone about UNIX? If you pronounce both as dee-mon, how would the other person know you were talking about device drivers or system services? The pronunciation difference was to help clear up the ambiguity in spoken language.

  • @anon_y_mousse
    @anon_y_mousse 2 years ago +1

    I'm going to have to disagree that Linux wouldn't be as big a thing as it is now if not for GNU. I think the opposite is true and that if it weren't for GNU and the GPL, and of course people's aversion to the GPL, that the whole of the industry and Linux itself would have been better off. I think Linux would've become a much bigger deal, albeit with a lot more paid options, and more people would use it worldwide. It's not as if open source didn't exist before GNU, because it most certainly did, and it's not like they made it more popular, because the GPL. The only real credit that GNU deserves is in holding Linux back. Look at Android, they hated the GPL so much that they had to rewrite a lot of code which they shouldn't have needed to do, just to avoid it, and they're working on replacing Linux as their kernel instead of contributing back to it, despite the fact that the GPL isn't a factor there. What GNU has done is infect a lot of people with a mind virus and squelched not just Linux, but the software industry at large. Of course I know I'm in the minority on this thought.

    • @scottdrake5159
      @scottdrake5159 2 years ago +2

      Happily infected. And I've got strangest feeling we had this discussion on slashdot twenty years ago...

    • @Gooberpatrol66
      @Gooberpatrol66 2 years ago +5

      If not for GNU, Linux would have been released with a non-commercial license.
      >Of course I know I'm in the minority on this thought.
      Deservedly.

  • @ralfbaechle
    @ralfbaechle 2 years ago +3

    Years ago there were claims that less than one full-time developer was working on HURD. In software, one of the biggest issues is keeping up with changes in the software environment. One developer is barely enough to keep up with an ever-changing world - at least for a large project. For a while HURD developers tried to siphon off some of the work of the Linux community. That was tolerated by the Linux community but received no active support whatsoever. And at times charming nicknames such as GNU/TURD were circulated.
    There are microkernel operating systems which were successful, for example AmigaOS. Microkernels get kinda ugly once memory protection is added, which kinda ruins the idea of all these things happily and efficiently communicating with each other. In my own work I noticed I was reimplementing some UNIX-like infrastructure for microkernels. Which is a strong indication that the UNIX way is the natural way of doing things.
    In the early years of Linux a number of folks were advocating a rewrite of Linux as a microkernel. They received the usual scorching reply from Linus Torvalds. Linux got ported to Apple's version of the Mach kernel, a project which was named MkLinux. The last official release was in 2002. www.mklinux.org appears to have last been updated on 12 January 2008, which I take as an indicator of there being temperatures below absolute zero in software development. It's also a showcase for '90s web design.

    • @emuhill
      @emuhill 1 year ago

      In the physics world there is no such thing as below absolute zero. The reason why is due to the fact that at absolute zero all molecular motion stops. So there can be no below absolute zero. It really is an absolute temperature.

  • @LabiaLicker
    @LabiaLicker 11 months ago

    dude must have the craziest dual monitor setup ever