CBW 2: AI Safety feat. Matt Farrugia

  • Published: 14 Jul 2024
  • This is a conversation with my long-time friend Matt Farrugia. We talk about a range of topics including privacy, narrow vs general AI, the short-term and long-term impacts of AI on the labour market and the human experience, and how much we both hate advertising.
    Excuse the annoying camera refocusing / changing exposure; I'll fix that next time. Same deal with the audio clipping!
    Be sure to check out Matt's website at far.in.net/
    00:00 Intro
    6:30 Privacy and Censorship
    19:25 Reinforcement learning
    29:15 Can AI understand context?
    31:43 Does propaganda work?
    43:59 Okay let's actually talk about AI (Narrow vs General)
    57:40 Be careful what you wish for
    1:08:42 Why work on AI Safety?
    1:12:07 If AGI is so smart, how come it can't get me coffee without killing people?
    1:20:28 Could AI cooperate with humans?
    1:22:45 Intelligence explosion
    1:30:04 AI in the wrong hands
    1:36:53 How do you research AIs that don't exist yet?
    1:50:23 Can we trust the leading AI companies?
    2:11:14 Should we train AIs with Sesame Street episodes?
    2:14:15 Why not just have more kids?
    2:16:00 Does the world need improving?
    2:24:00 Short-term unemployment caused by AI
    2:45:21 Do humans care about authenticity?
    2:53:23 Long-term effects of AI
    3:28:04 Ads suck
  • Science

Comments • 2

  • @blazearmoru • A year ago

    I wanna get into this field :c My bg is in psych and phil, and I've started taking CS classes atm (AA). What dooooooooooooooooooo
    16:50 Just as dangerous as merely letting extremist ideology fester and grow unchecked is dismissing the notion that expressing bad takes is a way of stress-testing beliefs before acting on them. I used to call this the human right to be mistaken and be corrected before making an error in execution.
    34:00 The number of people who talk about systematic discrimination but have no clue about systems learning, group selection, whatever The Dictator's Handbook covers, or the idea of multi-level consciousness operating at the level of groups in parallel with individuals (like population vectors in the brain -> decision making -> group decisions)... The insidious thing is that woke people haven't read the woke literature or any tangential literature and are in it for the grifting. I really like the ethics thing :c Also, I think the guy's somewhat right about morals being partly genetic. I think Haidt's research touches on that, and from my own time in the field there were examples of how stress hormone regulation is tied to genes; from there, safety/stress concerns affect things like threat detection, which heavily shapes our moral perceptions and therefore our moral values.
    37:42 the detector is so strong that some suggest liars have adapted to be easier to detect because trust gives inter-intra group wars such a massive edge. A single traitor could be the MVP to winning a full on war. Really explains the full on sacrifice everything avenger mindset if evolution had to imbue a solution to traitors.
    40:00 noooooooooo, the dictator's handbook nooooooooooooooooo
    2:29:00 I think there was a thought experiment along these lines: imagine Star Trek tech could perfectly copy the Mona Lisa. Would people still think the original was worth more? If you thought you bought the original but it was actually a copy, would that upset people? Etc.
    2:52:20 I remember this from my stay in positive psych. Maybe I was there a bit after he was, but by that time the field already distinguished different forms of happiness, and the finding was a substantial drop in day-to-day joy but an absurd increase in life satisfaction, or something like that. Also, yeah, I don't know what it is with my generation and the one right after mine, but we severely overcorrected in the nature/nurture debate. It's probably more like a computer needing both software and hardware (or at least electricity) to function: it's not either/or; every response is necessarily a combination of both.
    2:54:30 If AI gets that powerful, could we devise a way for every human to be an island unto themselves if they so wished (give everyone a super AI, plus a rocket ship or something so there are no space limitations) and just let people yolo off to wherever? Would an economy even be necessary after, say, 20 years of super-AI levels, assuming it does work out and we just have to deal with the economic aftermath? The people don't need the best AIs to yolo off, and the people in charge don't need the human workforce either. The 'give' part is a bit shady, but this could be done if you lead up to the AI revolution by educating the masses about the tech, and if someone refuses the education, well... shit for them lol.
    Positional goods... gaming \o/ but also AI friends. There was the experience machine thought experiment, which went something like "brah, you wouldn't yolo into the experience machine because you'd leave behind substantive connections with people." But then I flipped the thought experiment: what if you found out this already was the experience machine, and you could abandon your AI family for a real world where nobody really cares about you? I suppose a substantial number of people would end up like Cypher from The Matrix. Added bonus if the AIs aren't oppressing humans and there's no pressing moral reason to revolt against them.
    2:59:50 The 'doesn't feel good' thing about video games is probably because the positional goods here are tied to things like opportunity and social status, and video games currently aren't held in high regard. I'd suspect the same feeling will hit doctors if their status suddenly gets totally shat on. It feels good that I've made the same argument about pathways to success if we flattened wealth inequality entirely, but that's only relevant insofar as pathways to success are more or less flexible, yes? Having zero capacity to move up the economic ladder was constraining because the ladder still existed, so in essence it was a rigid class system. However, if other things became valuable enough in relative terms to alter positional rankings, or if positional goods can be simulated sufficiently by AI, that should deal with most of the problems. It would even deal with baseline genetic mental issues that use positional goods as a crutch.
    3:05:50 I think one tentative answer can be found in population vectors in neuroscience and in things like split-brain experiments in psychology. The answer has something to do with viewing a person not as a single being but as a mix of no-self (Buddhism/karma) and a collective with differing or even conflicting wills (population vectors), and thus rejecting any clear notion of a "final" goal. An apt parallel might be mesa-/meta-optimizers that pump out multiple sub-goals that exist in relation to each other and to the greater goal, while the greater goal itself is never coded in.
    3:09:00 The number of regular churchgoers who have all that time in church to read the Bible and still haven't read it... so yeah, probably not going to see lots of reading. O-O But I also suspect that "beauty" can be flattened into "motivation" at some fundamental level, assuming we get all the definitions of beauty and motivation correct, just as we have different definitions of happiness in positive psychology.
    3:17:50 Older games have a more experienced population, so you're getting destroyed by gaming vets. Also, the example you're giving grades you on a very limited pool of skills. In a game with more pathways to victory (like real life), the situation changes accordingly, and you've already talked about what happens in real life when you're graded on a single metric that can't be overcome (money). One easy solution seems to be a game that encompasses more pathways to victory (MMOs with different minigames where you don't have to be good at everything). Also, my particular skills lock me at the lowest rank in CS:GO, but I managed to climb super high in Dota 2, where I could sometimes bump into tourney players. I never crossed that gap because my hand-eye coordination and reaction timing are shit, probably at a genetic level. However, keeping up and being scaffolded in those rare matches was extremely rewarding even when I lost, because I was severely over-performing. The flow state felt great. On the flip side, Dota back in the WC3 days felt great because it didn't have MMR/Elo and you could play for fun rather than competitively.

    • @jesseduffield9516 • A year ago

      I'm not an expert on how to get into AI safety, but LessWrong has a post on the topic: www.lesswrong.com/posts/uKPtCoDesfawNfyJg/how-to-become-an-ai-safety-researcher
      You make a great point about stress-testing beliefs. I'll keep that in mind next time this topic comes up.