"The Mess We're In" by Joe Armstrong

  • Published: 24 Nov 2024

Comments • 172

  • @jollyjack5856 5 years ago +514

    R.I.P. Joe Armstrong. :( You will be remembered fondly by many.

  • @stevenstone307 1 year ago +37

    I discovered this man after he died, and I feel such regret for his passing, even though I never knew him. When he speaks about computers, things just make sense.

    • @maxfridbe 1 year ago +1

      :-( I met him once at NDC, very nice guy. When he told me what he did, I was like, ahhh..... and was reminded of the Hanselman joke of what you say to the creator of X... "your liege"

  • @Kotylion94 2 years ago +16

    Joe says around 7:50 that if you don't know why your program doesn't work, you simply ask Google. Not sure if you guys feel the same as me, but recently it has become harder and harder to find a proper solution in a Google search.

  • @MEMETV1 6 years ago +113

    I love the way my grandfather explained things when I was small. Listening to Joe here was very reminiscent of that. It was like getting a good explanation from my grandfather, if my grandfather were a brilliant technologist. I think the voices of some of the experienced elders in the development world are something we younglings are severely lacking in the days of brogramming. Love this.

  • @BryonLape 2 years ago +13

    8 years on and we are still in the same mess.

  • @jfolz 4 years ago +108

    There is this famous story that I've been told numerous times at university about the Ariane 5 rocket crash on first launch. They had multiple guidance computers to prevent losing the rocket to a single failure, but all computers ran exactly the same code. Said code was reused from the Ariane 4 and had worked perfectly many times. When the new rocket accelerated much faster, a sensor value overflowed. The first guidance computer came to the conclusion that the rocket was going too slow and the angle of attack must be lowered dramatically. The other guidance computers came to the same conclusion a few milliseconds later. The rocket broke apart moments later.
    Just throwing more computers at the problem can only protect from hardware failures, not software bugs.

    • @SolomonUcko 2 years ago +3

      Unless you have multiple independently-created programs implementing the same specification in different ways?

    • @Lodinn 2 years ago +11

      @@SolomonUcko And who's gonna pay for that?
      Also, what if the specification itself is "buggy", as in does not tell whatever it was _intended_ to specify?

    • @mentalisttraceur 2 years ago +4

      The people who care enough about not crashing their rockets will pay for it. Either way they're paying for it; it's just a question of whether the alternative implementations stay in the heads of the guys who write reliably correct code, or in the fuzzed test suite, or get deployed to production as well.

    • @mentalisttraceur 2 years ago +7

      If the specification itself is buggy you have the same problem regardless of how many alternative implementations you put in - your rocket will fail. That's entirely irrelevant/orthogonal to whether or not your reliability goals and budget are best served by one implementation or multiple.

    • @entcraft44 1 year ago +2

      @@mentalisttraceur Actually, I would argue that every specification for a complex product ever is "buggy". Not in the sense that it is necessarily wrong, but at least *incomplete*. If the product should evolve over time (so not for spaceflight but for many other applications) this becomes even worse! Because you need something new but won't build it from scratch, so you change the specification and then the product (Or, if you want to create a mess: You only change the product).
      So then we should only make one program (or other product) following the specification and then improve upon it. Of course, rockets are some of the only cases where you can't improve later (unlike space probes, which can be and are reprogrammed). So it needs to be correct the first time. But I think more eyes on one product are safer than few eyes on many products.
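The failure mode at the root of this thread can be sketched in a few lines. This is a simplified illustration, not the actual Ada flight code: the real Ariane 5 bug was an unprotected conversion of a 64-bit float (horizontal bias) into a 16-bit signed integer, which the slower Ariane 4 never exceeded.

```python
import struct

def to_int16(reading: float) -> int:
    # Unchecked narrowing of a sensor reading into a signed 16-bit
    # register: pack modulo 2**16 as unsigned, reinterpret as signed,
    # so values past 32767 silently wrap negative.
    return struct.unpack("<h", struct.pack("<H", int(reading) % 2**16))[0]

print(to_int16(30000.0))  # 30000: Ariane 4's velocities fit
print(to_int16(40000.0))  # -25536: Ariane 5's faster ascent wraps
```

Every redundant computer running this same conversion produces the same wrong answer, which is the commenter's point about hardware redundancy not covering software bugs.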

  • @wuloki 5 years ago +159

    I am keeping a pdp11/93 (sounds big, but is not) running. It was built around 1990 I think and has 4 MB of RAM. This thing hosts a website and supports a dozen people using it _simultaneously_ without noticeable slowdown. Apart from graphics, it can do almost everything a modern computer can do. You can write texts and documents, do calculations, create programs in various different languages, and of course people can chat and send each other files.
    When I look at a modern computer, I can see a lot of progress, but this progress seems to have been made in the wrong areas. Some common tasks you would do with a computer actually take longer on contemporary computers than on the old beast.

    • @crides0 3 years ago +11

      It's always the graphics

    • @ViciousVinnyD 2 years ago +19

      @@crides0 True but not always. It's pretty obvious when changing the quality in a game from max to lowest doesn't fix lag, because the CPU's struggling to keep up with the inefficient code that's holding everything up.

    • @LewisCampbellTech 2 years ago +3

      that's cool as hell. what's the website it hosts?

    • @jajanken8917 2 years ago +5

      I'm very interested in this idea of downgrading technology. Could you provide some examples of common tasks that take longer on a modern computer?

    • @4cps777 2 years ago +7

      @@crides0 Idk, there's not much to change about graphics on desktop devices. 1080p@60Hz is totally fine for everything but gaming or watching movies, and here I am running my system with more eye candy than a stock install of either Windows or macOS has, and yet the difference between eye candy turned on and eye candy turned off is literally not noticeable (not even when looking at the resource consumption). I believe that as time progresses, we will instead find other ways of destroying all the advancements we make in hardware, such as shipping a desktop application not as an executable but rather as an entire fucking browser coupled with a second JS runtime, so that we can then show a web page which is in fact only there for us to change some HTML elements around using a ton of JS (aka Electron, in case anyone wonders).

  • @Fr4nR 8 years ago +18

    Joe Armstrong is sheer brilliance here. Outstanding. Even two years after the event his words still ring true - if not more. There really is a LOT of work for the new generation coders out there.

    • @ximono 11 months ago +1

      Still true 7 years later. And it probably will be in another 7 years.

  • @p1CM 8 years ago +219

    "And now for the tricky bit."
    always :D

    • @jsrodman 5 years ago +10

      I used to work on commercial code where this was spelled
      /* If you have to understand this part then you've already lost */
      It was pretty much correct.

  • @WyzrdCat 5 years ago +20

    Joe is brilliant, I am in so much agreement with this talk. I am so sad, I only found out halfway through watching that he passed away last month.

  • @farazhaider6980 5 years ago +53

    RIP Joe. You’ll be dearly missed.

  • @KnocksX 8 years ago +10

    Mind blown multiple times during the lecture. I'm excited that there are smart people out there trying to solve difficult problems.

  • @LewisCowles 5 years ago +15

    I never thought I'd like the creator of Erlang so much. Gutted to hear he has passed

    • @TheCraigeth 4 years ago +1

      Here's his blog if you're interested: joearms.github.io/#Index

  • @paulwary 7 years ago +61

    I find that I do write really big comments - when I am thinking through the problem. Then they quickly become irrelevant, or even misleading, as you refine the approach, refactor, and generally do things differently. To keep the comments in alignment with the code doubles your effort.

    • @sethtaylor7519 5 years ago +19

      I find that if you find yourself "needing" to write a comment to explain what the code does, then you haven't written the code in the best way and need to rethink it.

  • @marcellerusu 7 months ago +2

    The last half of this talk is a gold mine for designers of future programming systems.

  • @SpectatorAlius 5 years ago +15

    Armstrong has done well in this lecture. It is a pity he passed away so soon before he could see any real progress in fixing the mess we are in.
    But I do want to stress what he said about comments. They are so important that I have long been of the unpopular opinion that anyone who does not take advantage of Javadoc (or the equivalent in your language of choice) to document all public classes and methods should not be allowed even to check in the code. Personally, I often write doc blocks for private and package-access classes/methods as well, when it is clearly necessary because the doc blocks on public members alone do not tell a coherent story.
    But even if you *do* religiously document them all, it is all too easy to fall short, explaining the obvious while leaving out what is really important -- Armstrong gave a funny extreme example, "now here's the tricky bit".
    As for specifications, the first thing that came to my mind is all those allegedly 'agile' methodologies that seduce both management and programmers with the siren song of less documentation. These methodologies are to blame for many people using so little documentation that their 'specification' is woefully incomplete and even more woefully out of date.

  • @loganmcbroom9001 2 years ago +5

    I revisit this talk every once in a while because it holds more and more true every year. I love that he actually presents a solution to such a vague, unsolvable problem, and with great comedic timing throughout.

  • @devon9374 10 months ago +1

    Just discovered this talk. It is amazing, love you Joe. RIP 🙏🏿

  • @georganatoly6646 2 years ago +9

    Very astute 'Seven deadly sins' slide. I find number 7, 'Code that you wrote without understanding the problem', to be very common (myself included): always in a rush, and later, during a walk or something, thinking back to a block of code or its logic, a more complete/elegant solution occurs to me.

  • @evandrix 10 years ago +57

    can the PDF slides for this presentation be uploaded somewhere?

  • @demesisx 1 month ago +1

    Ironic to be watching this, as I just so happen to be re-compiling my whole Nix store as ``content-addressed``. This man was, and this talk is, simply BRILLIANT. I wonder how many people know about Unison, which brings some of this content-addressed goodness to the world of programming.

  • @Waterlimon 8 years ago +79

    I think the real issue he had was not duplication of raw data, but rather duplication of data with similar meaning (as duplication of raw data is trivial to avoid with a simple comparison).
    And that's where the difficulty of the task is revealed: the only compression algorithm capable of the task is a human mind, or a general artificial intelligence capable of extracting the meaning. It must even be specified from what point of view similarity will be evaluated (can two pieces of text that say the same thing but in different languages be combined? What about two stories that are different but convey the same meaning through a metaphor? Two computer programs that compute the same result, but one with horrible CPU use and the other with horrible memory use?)
    And that is how the problem is managed today. Humans (as the only compressors up to the task) gather meaning and compress it into products of ever-increasing sophistication.
    So while his algorithm is correct, it's not so useful, because in the end it's just a re-expression of the problem.
    Thinking about compression in relation to the task of programming is useful though (it's what we do when creating those abstract models, instead of directly mapping every input to the desired output in a huge lookup table). But even there, the human does the compression. We reduce the entropy of the program by releasing some into the environment. When we get some sort of general AI, computers can do that for us.

    • @sayamqazi 5 years ago +2

      OK, removal of duplication is trivial but not fast. You need to compare each thing with every other thing.

    • @JohnSmith-ox3gy 3 years ago

      @@sayamqazi
      Yeah, that would be quadratic. But the video discussed a way around it by grouping data into manageable chunks.

    • @RAFMnBgaming 1 year ago

      I wouldn't be surprised if by that point the AI is already sick of us for asking it to create petabytes of bad renditions of hands. And even more surprised at the idea of humans trying to reduce bloat.

    • @megamaser 1 year ago +3

      Yeah his point is a bit silly because it's an absurdly complex problem. It's the biggest problem that social beings have been working on since the inception of consciousness. How do you get two people to agree with each other? Well first they need to identify all the information they share but represent differently. Failures at this task are the cause of all disagreement. His proposition is akin to advocating world peace. I'm with him 100% and I love his perspective, but it's not clear that there's anything actionable here.

    • @parodoxis 1 year ago

      "Similar meaning" in human language may be impossibly hard, but not in programming. In Unison, for example, the names of functions and variables are discarded, and only their hashes used instead, which means little is left besides raw meaning/uniqueness. Then they can be compared in the usual/cheap way, data equivalence. Yet they are effectively comparing what in other languages would be "similar" things...
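The "compression as similarity" idea debated in this thread has a standard formulation: normalized compression distance (NCD), where two inputs count as similar if compressing them together costs little more than compressing the larger one alone. A rough sketch, using zlib as a stand-in for a real compressor:

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    # Normalized compression distance: near 0 for near-identical inputs,
    # near 1 for unrelated ones. Quality depends on the compressor used.
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

prose = b"the quick brown fox jumps over the lazy dog " * 40
noise = bytes((i * 97 + 31) % 256 for i in range(1800))
print(ncd(prose, prose) < ncd(prose, noise))  # True
```

This captures shared structure that byte-equality misses, but, as the comments above note, it still cannot see that two texts in different languages "mean" the same thing.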

  • @jonathanrolfsen4656 5 years ago +3

    Wow. This talk / this problem need more attention

  • @World_Theory 5 years ago +7

    It seems like he's saying that we need to find a reasonably small number of abstract things that we are capable of understanding, which can in turn be used to recreate all the files we already had, such that they can be maintained by future and parallel generations, and not continue to increase the cost of computing.
    So… We need absolutely rock solid fundamentals to combat entropy in computing environments? (Which is something we definitely do not have at this time.)

  • @Vermilicious 3 years ago +2

    We were very good at making things compact and efficient, but then we got this ever-increasing level of power, computation and storage, and we stopped caring. Plus, we've put ourselves in a situation where we are punished by being more efficient; if you optimize away the work you do, you remove the need for yourself. This is a fundamental problem for humanity, no less. We have to overcome this, sooner or later, or we are doomed.

  • @bmejia220 3 years ago +4

    Thank you Joe for your dedication! You have done a great service with your peers!!! Ciao for now :)

  • @robheusd 8 years ago +8

    As a thought: the way things evolve in the programming/computer world is in a way very similar to the way the natural world evolves, which then means there isn't very much we can do about it...

  • @KENNETHUDUT 9 years ago +23

    Brilliant - thoroughly enjoyed.

  • @twoolie 2 years ago +3

    I feel like the Unison Language and codebase language are fulfilling the dream of the Entropy Reverser, by hashing code into irreducible chunks.

  • @paulwary 7 years ago +178

    Only old guys with nothing to prove can be radically honest.

  • @bkboggy 7 years ago +9

    Wow. This is an incredible talk.

  • @JackMott 8 years ago +17

    Regarding efficiency, if I wait 10 years for it to get 1,000 times faster, I'll also be processing 1,000 times more data and rendering it to 1,000,000 times more pixels.

    • @bocckoka 7 years ago +1

      They will need something to sell after 4K has saturated the market, but regarding HMI, we have already passed the abilities of our sensory organs.

    • @drakefire1800 6 years ago +1

      Totally agree. Things get faster and things just get more complex and bloated. Tech isn't really saving anyone time nowadays. It's wasting a lot of time.

  • @briandecker8403 5 years ago +3

    Jonathan Blow has been preaching this for years!

    • @dukereg 5 years ago +2

      Casey Muratori expresses similar sentiments quite hilariously. Vexe channel has some clips if you want a laugh.

  • @Enterprise-Architect 5 years ago +5

    Rest In Peace Joe Armstrong... Thank you for such a fantastic video...

  • @user-eg6nq7qt8c 8 months ago +1

    One of the all time greats. Absolute legend.

  • @pkplexing 7 years ago +9

    I always like Joe's presentations and views on things; Good insights and wisdom coupled to a good sense of humour :)

  • @pmarreck 10 years ago +2

    I actually used his compression idea (to determine the "universal similarity" between two strings) in a side project idea. There's something there, although the actual check (using a given compression algorithm) would be abysmally slow without some form of optimization (which he suggests)

  • @wolfson109 9 years ago +18

    It's an intractable problem.
    In order to actually reduce the level of complexity (rather than merely slow the rate at which it is growing) you would have to build a condenser machine that was powerful enough to reduce complexity at a rate faster than it is currently growing.
    Let's say that I were able to somehow build such a monstrously powerful machine. It would require such a huge amount of energy and resources to run as to be ruinously expensive, and the only way I could afford to run it is by selling some of its mammoth computing capacity to others. Which would cause an increase in the rate of growth of complexity.
    So now I have to build an even more powerful machine (or increase the capacity of the existing one) to keep up with increase in the rate of complexity growth that I've just caused by running my machine. Which would again cause an increase in the rate of complexity growth.
    The second law of thermodynamics states that the energy cost to increase the entropy of a system will always be smaller than the energy cost to reduce the entropy of the same system. So no matter how big a machine I make, I will never be able to keep up with the rate of increase in entropy caused by the existence of the very machine I built to solve the problem in the first place.

    • @KnocksX 8 years ago

      What if you had "unlimited" energy?

  • @usercard 5 years ago +7

    Rest in peace, Joe, great master and lord of Erlang.

  • @henryltpx5045 5 years ago +7

    Love this guy even though I'm no programmer

  • @theultimatereductionist7592 5 years ago +15

    "since the universe was booted some years ago" LOL! AWESOME description!

  • @alan2here 5 years ago +4

    I love his file (program/data, hash, "similarity hash", interval_tree, pairs_of) Least Compression Difference approach. :) I've thought before about compression the "stupid" way, using Minimum Description Length and trying a lot of programs in a given algorithm language for the output most similar to that file. While the cost _could_ be prohibitive, the advantage is enormous: you get a result that looks a bit like a black box, but isn't entirely, and _is_ a really good minimal representation of the _meaning_ of the thing.

  • @EngIlya 5 years ago +3

    OK, so how does having fewer files, addressed by hashes, help with the overall complexity? It's difficult because the ecosystem is so diverse, not because people have copies of the same file (within different machines and directories).

  • @OttoRobba 9 years ago +18

    Amazing talk: really funny, and at the same time it presents a unique and interesting side of programming. As weird as the idea of no names is, it sounds quite interesting.

    • @vRobM 7 years ago +2

      "No names" has already been done a long time ago; it's called object storage. The only thing you need is an object ID (a SHA1 in his example).

  • @caseyhawthorne7138 3 years ago +3

    Can some abstractions be baked out at compile time for efficiency?
    His Entropy Reverser is a classification of code modules, a challenge librarians have been working on for millennia with the printed or electronic word.

  • @aladdinovich 7 years ago +13

    ## now for the tricky bit

  • @merbst 5 years ago +1

    Content addressed file systems are nice for saving space. jdupes is a nice way to do so on another FS

  • @ilikeshiba 3 months ago +1

    7:57 wow this has aged incredibly well with LLMs being the same thing but even worse

  • @pewolo 2 years ago +2

    I love the part that you can write a piece of code today that you fully understand, but a few days later you just can't figure out what it is.😅

  • @NathanSmutz 9 years ago +7

    As an aside: For his difficulty making slides that convert gracefully to printable .pdf, I'm surprised LaTeX/beamer wasn't suggested as an immediate solution, especially for a programmer who would go to the trouble of coding slides in HTML. LaTeX has been around for ages; shouldn't change out from underneath anybody, all kinds of templates are available, and there are WYSIWYG tools to isolate you from most of the code if that's what you want.
    Maybe none of his friends knew about it; but I understand that math and physics journals generally mandate LaTeX for paper submissions.

  • @StrangeLoopConf 10 years ago +6

    All slides will be collected here (eventually): github.com/strangeloop/StrangeLoop2014/tree/master/slides

    • @axilmar254 10 years ago +2

      I can't find the slides of this talk in the above link.

    • @rostislavsvoboda7013 8 years ago +5

      Have you tried to search for the slides using the hash? :)

  • @robheusd 8 years ago +2

    If I have a textfile and a file that I made by printing the textfile and then scanning the printed file, I have two very different files, and there is no way I can find that they are similar, although they contain the same information.

  • @HenkPoley 9 years ago +8

    The optimal way to find similar files by 'edit distance' was (sadly?) discovered 40 years ago already: news.mit.edu/2015/algorithm-genome-best-possible-0610
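For reference, the edit distance mentioned here is usually computed with the classic dynamic-programming recurrence, quadratic in the input lengths (which the linked result argues is essentially the best possible):

```python
def levenshtein(a: str, b: str) -> int:
    # Classic DP over two rows: prev[j] is the distance between
    # the processed prefix of a and b[:j].
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # delete ca
                           cur[j - 1] + 1,               # insert cb
                           prev[j - 1] + (ca != cb)))    # substitute
        prev = cur
    return prev[-1]

print(levenshtein("kitten", "sitting"))  # 3
```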

  • @TheFuture2092 5 years ago +6

    Rest in peace, great soul!!!

  • @thewallstreetjournal5675 9 years ago +2

    Places and names definitely should be abolished from installation procedures.
    We need to move to a linking-file-system to find files and to relate modules and libraries to programs.
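A content-addressed store of the kind this comment proposes is easy to sketch: name each blob by the hash of its bytes, so "installing" the same library twice just creates another link to the same object. A toy illustration of the idea (an in-memory dict standing in for a real filesystem):

```python
import hashlib

store: dict[str, bytes] = {}

def put(data: bytes) -> str:
    # The "name" is derived solely from the content, so identical
    # bytes always get the same address and duplicates collapse.
    key = hashlib.sha1(data).hexdigest()
    store[key] = data
    return key

a = put(b"libfoo v1.2")
b = put(b"libfoo v1.2")  # installed again elsewhere: same address
print(a == b, len(store))  # True 1
```

This is essentially what git, Nix, and IPFS (all mentioned elsewhere in this thread) do at scale.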

  • @swyxTV 5 years ago +2

    I need an ELI5 on what he's really trying to say here. I don't understand the call to action. Just by having a condenser we will have a solution to all these languages and levels of abstraction? Isn't that just how we compile to machine code (or, say, in future, WebAssembly)? I don't see how that will make our programming any easier; if anything, it will allow more languages to proliferate.

    • @JeremyAndersonBoise 3 years ago

      That’s a fair criticism of the condenser proposition as posed, but I think WebAssembly is a fair comparison to what he meant. It doesn’t necessarily and inherently promote the proliferation of new languages, because creating compilers into WebAssembly for all of them has a cost.

  • @pinkeye00 8 years ago +9

    7:30 Software is getting worse and worse .... so true.

  • @misterguts 9 years ago +55

    39:15 "Let's start making the condenser" You do know, this guy is not just talking about file storage, compression and indexing? He is outlining a project for practical machine intelligence, an idea processor. This is the most hair-raising part of his talk.

    • @rostislavsvoboda7013 8 years ago +11

      A condenser has already been done. It's called a search engine.

    • @steshaw6510 7 years ago +10

      I interpreted it more as a global-scale ZFS-like deduplication.

    • @foobargorch 7 years ago +8

      (d) all of the above

    • @vRobM 7 years ago +1

      What you describe is object storage. See Cleversafe (now IBM), S3, minio, etc. Hash based object storage.

    • @Cat-vs7rc 5 years ago +1

      @@vRobM IPFS

  • @kode4food 1 year ago

    I miss Joe

  • @checkerist 10 years ago

    Yes, I'm stuck with similar problems. My new project doesn't have any documentation. I spent two days doing some really basic thing with it, and it's still not finished. But I don't even know whether it's a constraint of process management, when we have no time to stop and think a bit and make things smaller, or whether people do this because it is easier (simple vs. easy). Btw, nice talk.

  • @sanyaade 5 years ago +1

    A legendary in his own right! RIP Joe Armstrong

  • @PaulFurber 5 years ago +1

    RIP Joe. Thanks.

  • @Moocman 5 years ago +3

    If this lecture struck a chord with anyone, check out the Unison programming language.

  • @reflechant 7 years ago +10

    I like the idea of hashes, but I think they are absolutely unnatural to humans. Instead we are deeply relational creatures. Our brains are neural networks, the condensing machines (even time-condensing, not only space) he talks about. And the amount of associations we have with data, values, etc., conscious and subconscious, is enormous. If we build a neural interface that can track our brain associations and automatically tag data, it will be the ultimate relational database. Distributed human memory, shared with machines and thus with other humans. In this relational database (which would be a mix of traditional relational databases, graph databases and archivers), a search would be performed by thinking about "that noisy guy in a red cardigan".

  • @ApfelLowe 5 years ago +1

    RIP Joe, this comment comes too late.... but I don’t believe we could get the state of the external pair prior to the evaporation of that black hole. Causality would be conserved.

  • @blenderpanzi 10 years ago +14

    "Who can program for more than 5 minutes without using the internet."
    Ignoring the fact that I do web development where I need to access resources on the web, I actually could do a lot of things offline that I do online. It is often faster to google for information than to search the locally installed documentation. Kinda ridiculous.

  • @syntaxed2 5 years ago +3

    Nice, so the future computer geeks will have hadron colliders under their desk :D
    "Jimmy, time for bed!" , "Yea, mom! Just waiting for the black hole to appear so I can save my data!".

  • @StephenPaulKing 8 years ago +1

    BTW, it was solar powered hardware that served as the implementation of the Matrix in said movie. Humanity, in the movie, crippled it by creating a nuclear winter effect.

  • @theinconsistentpark9060 10 years ago +6

    16:55 But the number of plausible states your machine can be in is not that large. We should quantify the entropy over the distribution of computer states. :P

    • @NdxtremePro 9 years ago +1

      +The Inconsistent Park Of course it is. You can modify every file on your computer, and I can mine. there is no guarantee we haven't. Plausible is not the same as likely, but even likely is no guarantee.
      We can't guarantee that everyone has the same DLLs. We can't guarantee that the software we are writing today will never run on ARM, or MIPS, or OS X. We have no guarantees in the software we create, even when we give recommended system requirements.
      We can't even guarantee we are running on actual hardware, or in a VM, or in a VM inside a VM, which is possible using the VMWare solutions.
      And every byte on your computer's hard drive has the potential to affect your system. When iTunes was eating Windows systems, it could have deleted any combination of files before you turned it off. You might have turned it on and everything seemed to have worked, but you didn't know it had gotten to ntfs32.dll and deleted the last portion, which might mean every time you save a file, a random file is now deleted.
      Software acts on Data, and executable files are data. As soon as you install a program at your first Windows boot up, your system is now different than most others. With Windows 8/8.1/10, when you sign in your computer has settings changed to the last computers settings that you signed in with that account. So you don't have to install anything.
      On Linux, OS X, *BSD, Haiku and all other systems this is true.

    • @philippederome2434 5 years ago +1

      The frustration of using Google and Stack Overflow to get a quick fix, only to find out it's not quite the same thing, is very real, and it has to do with the states of the two underlying computers being somewhat different. One of the most standard troubleshooting techniques is to reproduce an environment that does work alongside one that does not, and migrate the two ever closer to a common state until the mysterious difference reveals itself in an obvious manner.

    • @LKRaider 5 years ago

      NDxTremePro check out nixOS, which determines the system state by deterministic compile and deploy instructions, and hashing of configuration files. I think it is the closest experiment in that direction.

  • @jollyjack5856 10 years ago +8

    you don't need the condenser. just throw away all the stuff and write anew - though not programs this time, but high-level specs. And make systems that know how to execute these specs. Then write the high-level specs language, and re-write your specs in it. Pretty soon you just end up with the Executable English (TM) which is what we all should have been working towards all these years anyway - and nothing else. That's the principle of Ditch the Efficiency taken to its natural conclusion. Example: window systems. There are lots of them, but the concept of a window is the same (or nearly the same). Example: web programming. There should be not one web programmer on the Planet. Not one. You explain - in English - to the System what you want done, and it writes the code for you. Only the designers will be left - all the formalizable stuff should be formalized and dumped on the machines to do it.

    • @vRobM
      @vRobM 7 years ago

      This has finally come to fruition with the Interactor system. See interactor.com

  • @Roboprogs
    @Roboprogs 8 years ago +11

    The (management insistence upon the) C programming language for writing code on 5 MHz computers inflicted great mental damage on this industry :-(
    Worst. Premature. Optimization. Ever.
    (or, "Worse is Better", if you prefer)

    • @Roboprogs
      @Roboprogs 8 years ago +3

      We would *still* mostly be writing C++, if it were not for the "extinction level event" that was "servers connected to the internet".
      Not that Java is far enough away from COBOL...

    • @glialcell6455
      @glialcell6455 7 years ago +3

      C++ is not bad at all, and Java is much closer to COBOL than C++ is, if only thanks to their common senseless bureaucracy.

  • @StephenPaulKing
    @StephenPaulKing 8 years ago

    What if we could define the 'name' of a file in terms of the (class of) computer programs that have that file as output given a null input? To do this, we need a way of defining equivalence classes of computations modulo language.
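
The idea above, naming a file by the (class of) programs that produce it, is close in spirit to content-addressing: name the file by a hash of its bytes, so every program that emits the same bytes "produces" the same name. A minimal Python sketch, illustrative only:

```python
import hashlib

def content_name(data: bytes) -> str:
    # Name a blob by the SHA-256 of its bytes: two programs that
    # emit identical output "produce" the same name, which gives a
    # crude equivalence class of computations by their results.
    return hashlib.sha256(data).hexdigest()

# Two different "programs" with identical output share one name.
out_a = b"hello world\n"
out_b = b"".join([b"hello ", b"world\n"])
assert content_name(out_a) == content_name(out_b)
assert content_name(out_a) != content_name(b"goodbye\n")
```

This sidesteps the harder "modulo language" part of the question, since it classifies outputs rather than the programs themselves.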

  • @marccawood
    @marccawood 4 years ago +2

    A thoroughly amusing rambling rant.

  • @AlexanderTrefz
    @AlexanderTrefz 8 years ago +4

    This talk is great, but the example of entropy with the dice is just wrong on so many levels. The entropy does not change at all when just looking at the dice. If you chuck them long enough, eventually they will all fall on the 1 face, because he is changing the arrangement of them, but the entropy stays exactly the same (there is an argument to be made that the entropy actually travels on a sine wave here, but that probably clashes with the exact definition of entropy, so that the laws of thermodynamics keep working).

    • @5004shadow
      @5004shadow 8 years ago

      i believe this to be true

  • @howardmarles2576
    @howardmarles2576 5 years ago +2

    I owe my success to Joe's insight and energy.

  • @viktor14jaar
    @viktor14jaar 8 years ago +5

    How does he not know how to set paths? Most compilers require you to manually insert paths.

  • @Dengakuman22
    @Dengakuman22 10 years ago +3

    22:30 In fact, I spend a lot of time programming on the train or bus and I totally feel like that. I can do just fine when it's low-level stuff and I've got access to the man pages and source code; programming an Android app is completely impossible. I just learned to avoid Java and anything related to it when I'm on the train.

  • @codewritinfool
    @codewritinfool 5 years ago +1

    R.I.P.

  • @MrAbrazildo
    @MrAbrazildo 1 year ago

    10:55, my father used to do that. He thinks his code is self-explanatory. I'm much more humble about this. I think this is a kind of art, still wild, untamed. Making lots of f()s doesn't facilitate things, because the code that could once be read mostly horizontally _(the natural format for our eyes, aka widescreen)_ now becomes vertical. Even worse, now you need the aid of your hands too. This leads to a silent waste of energy, the most underestimated kind. However, code like that does become more self-explanatory, so it's a trade-off. Meanwhile, I'd rather stick to comments, in a mostly single page of technical code.
    11:27, for libs or projects I make, I tend to write pages in a text editor, like a tiny book. But this remains outside the code. I should also write blocks of comments inside the code, like in open-source libs.

  • @VickiBrownatcfcl
    @VickiBrownatcfcl 9 years ago +1

    I would like a transcript.

    • @StrangeLoopConf
      @StrangeLoopConf  9 years ago +5

      Live captioned transcript here: github.com/strangeloop/StrangeLoop2014/blob/master/transcripts/Armstrong-OpeningAndKeynote.txt

  • @cefcephatus
    @cefcephatus 1 year ago

    Atoms in the universe have 4 quarks, each with its own spin that doesn't have hard limits and influences the others.
    I think saying everything has 10^26 states is very much an understatement. But reducing the entropy of computation data is a must for our technologies. Seeing this comes from 8 years ago, the only successful things I see are NoSQL and IPFS.
    We still haven't solved the "find similar things with less entropy" problem yet. But least compression size sounds the most promising. How about least compiled compression size? Well, that wouldn't work with scripting languages and dynamic addresses. So the problem is to figure out how to calculate computation flow and assign an identity index, so the same algorithm gets the same index, and similar algorithms have index lists that tell us which ideas they borrowed from. Sounds like git forks.
    However, to automatically identify an algorithm would take some intuition, so AI reducers might be the best way to approach this problem.
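
The "least compression size" idea above has a classical form: normalized compression distance, which scores how much two byte strings share by how well they compress together. A rough Python sketch using zlib (inputs and thresholds are illustrative):

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    # Normalized compression distance: approximates shared
    # information content via zlib's compressed sizes.
    cx = len(zlib.compress(x))
    cy = len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

a = b"for i in range(10): print(i)" * 20
b = b"for j in range(10): print(j)" * 20
c = b"SELECT name FROM users WHERE age > 30;" * 20
# Near-duplicate code compresses well together; unrelated text does not.
assert ncd(a, b) < ncd(a, c)
```

This only clusters textually similar programs; grouping genuinely equivalent algorithms, as the comment suggests, remains an open problem.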

  • @qwertyman1511
    @qwertyman1511 2 years ago +1

    is IPFS the ideal implementation of the idea at 37:00?

    • @zyansheep
      @zyansheep 1 year ago

      It is ideal in theory, but conventional Distributed Hash Table designs (like Kademlia, which IPFS uses) don't scale well in practice. Also, IPFS doesn't have a good way to keep things stored indefinitely; nodes have to periodically broadcast that they have some piece of data, which strains servers hosting large amounts of data.
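
For reference, Kademlia routing rests on one trick: the distance between IDs is their XOR, and a lookup repeatedly picks the k known nodes closest to the key. A toy Python sketch with 4-bit IDs, purely illustrative:

```python
def xor_distance(a: int, b: int) -> int:
    # Kademlia's metric over node/key IDs: bitwise XOR,
    # interpreted as an integer distance.
    return a ^ b

def closest_nodes(key: int, nodes: list[int], k: int = 2) -> list[int]:
    # The core routing step: of the nodes we know about,
    # keep the k whose IDs are XOR-closest to the key.
    return sorted(nodes, key=lambda n: xor_distance(key, n))[:k]

nodes = [0b0001, 0b0111, 0b1000, 0b1110]
# For key 0b0110: distances are 7, 1, 14, 8 respectively.
assert closest_nodes(0b0110, nodes) == [0b0111, 0b0001]
```

A real implementation adds k-buckets, iterative lookups over the network, and republishing, which is where the scaling pain described above comes from.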

  • @desmochai
    @desmochai 8 years ago

    Amazing!

  • @tmonk1473
    @tmonk1473 7 years ago

    In a strange way, 40:15 the merge part sounds like a search engine.

  • @uethuegiegjtreriopjg
    @uethuegiegjtreriopjg 5 years ago

    RIP Joe Armstrong

  • @GeorgeTsiros
    @GeorgeTsiros 5 years ago +1

    33:10 the *moment* you use terms like 'file', 'directory' and 'name', you've lost the game... you're again locked into thinking about storage in the same, decades-old, way.

  • @ahmadbaitalmal1040
    @ahmadbaitalmal1040 5 years ago

    RIP Joe 😢

  • @Magnetohydrodynamics
    @Magnetohydrodynamics 8 years ago +2

    Mine boots in about 5 seconds...

  • @skepticmoderate5790
    @skepticmoderate5790 5 years ago

    A book is just a long comment. - Joe Armstrong, 2014

  • @patrickcrawford119
    @patrickcrawford119 2 months ago

    Rip to a real one

  • @rickdeckard1312
    @rickdeckard1312 5 years ago

    RIP Joe

  • @David-2501
    @David-2501 5 years ago

    I figured he was going to talk about how because of the internet and the introduction of bad actors, we've had to make systems more flexible, more safe and thus bigger.
    Got maths, physics and quantum theory instead. 10/10, would math again.
    c10789a5cac389c63e67622892c0e5ac1401d493 - title (The Mess We're In)
    074675bb7350d5077da234919568bcebd3c5ae83 - full title ("The Mess We're In" by Joe Armstrong
    )
    How can I not find anything, based on the hashes of the title? SHAME!
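
The strings above look like SHA-1 hex digests, which are sensitive to encoding and trailing whitespace (note the commenter's second hash covers the full title plus a newline). A Python sketch of how such hashes are computed; whether these particular digests match depends on the exact bytes hashed:

```python
import hashlib

def sha1_hex(text: str) -> str:
    # SHA-1 over the UTF-8 bytes of the text; a one-character
    # difference (even a trailing newline) changes the whole digest.
    return hashlib.sha1(text.encode("utf-8")).hexdigest()

print(sha1_hex("The Mess We're In"))
print(sha1_hex('"The Mess We\'re In" by Joe Armstrong\n'))
```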

  • @blenderpanzi
    @blenderpanzi 10 years ago +3

    36:10 that looks like a magnet url :P

    • @blenderpanzi
      @blenderpanzi 10 years ago

      And what comes after reminds me very much of DHTs. :)

    • @blenderpanzi
      @blenderpanzi 10 years ago +1

      38:35 ah, now he said it. :)

  • @alphashuro
    @alphashuro 7 years ago +3

    isn't he talking about IPFS? ipfs.io/

  • @abhijitgm5
    @abhijitgm5 5 years ago

    RIP..

  • @JavierVegaPerry
    @JavierVegaPerry 10 years ago

    hilarious, GREAT video! hahahaha so true and yet so funny....and later...kinda tragic ):

  • @sashang0
    @sashang0 7 years ago

    @7:48 Things like neural networks require different ways to understand them. Saying 'we don't understand why it does or doesn't work' is acceptable when dealing with neural networks.

  • @ramilmamedov481
    @ramilmamedov481 5 years ago

    RIP 😕

  • @andreabadesso11
    @andreabadesso11 5 years ago

    RIP

  • @adsah
    @adsah 5 years ago +3

    Q: "how many of your machines boot in 60ms?" A: docker.

    • @Khwerz
      @Khwerz 5 years ago +3

      The underlying OS boot time + docker is not 60ms.