
SLAM Robot Mapping - Computerphile

  • Published: 30 Aug 2022
  • Thanks to Jane Street for their support...
    Check out internships here: bit.ly/compute...
    More links & stuff in full description below ↓↓↓
    This video features the Oxford Robotics Institute demonstrating their SLAM algorithm with their frontier device and the Boston Dynamics Spot robot. Thanks to Marco Camurri & Michal Staniaszek for their time. Last time we met spot: • Automating Boston Dyna... Joining Point Clouds with Iterative Closest Point: • Iterative Closest Poin...
    JANE STREET...
    Applications for Summer 2023 internships are open and filling up... Roles include: Quantitative Trading, Software Engineering, Quantitative Research and Business Development... and more.
    Positions in New York, Hong Kong, and London.
    bit.ly/compute...
    / computerphile
    / computer_phile
    This video was filmed and edited by Sean Riley.
    Computer Science at the University of Nottingham: bit.ly/nottsco...
    Computerphile is a sister project to Brady Haran's Numberphile. More at www.bradyharan.com

Comments • 138

  • @rallekralle11 · 1 year ago +506

    i made a little robot that used slam mapping once. that is, it slammed into walls to find out where they were

    • @waynec369 · 1 year ago +10

      😀😆😅🤣😂😂😂

    • @qqii · 1 year ago +29

      Top of the line roomba robots do the same thing, so your robot was as cutting edge as they are.

    • @timseguine2 · 1 year ago +31

      "Simultaneous Localisation And Mutilation"

    • @rallekralle11 · 1 year ago +9

      @@timseguine2 exactly. it didn't last long

    • @CoolAsFreya · 1 year ago +8

      I knew a vision impaired dog that navigated like this

  • @jonwhite6894 · 1 year ago +28

    “The IMU has physics inside”
    I’m stealing this line for my next UAV LIDAR lecture

  • @Mutual_Information · 1 year ago +44

    One of the coolest applications of the Kalman Filter is SLAM modeling. Really gives you a sense of how flexible the Kalman Filter is.

    • @oldcowbb · 1 year ago +6

      kalman filter for slam is quite outdated, all the state-of-the-art methods do least squares on a graph

    • @Mutual_Information · 1 year ago +4

      @@oldcowbb Makes sense - saw it in a textbook teaching Kalman Filter.

    • @zombie_pigdragon · 1 year ago +1

      @@oldcowbb Can you link some more information? It would be cool to get a more detailed overview after the video summary.

    • @oldcowbb · 1 year ago +6

      @@zombie_pigdragon find Cyrill Stachniss's graph SLAM lectures on youtube, he describes the math behind graph SLAM in detail. he also has a tutorial paper on graph SLAM, basically a paper version of his lecture

    • @oldcowbb · 1 year ago

      @@zombie_pigdragon there is also a matlab tech talk on graph slam, but that one is more on the intuition than the actual math
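The graph-based least squares that @oldcowbb mentions can be shown in miniature. Below is a toy 1D pose graph with made-up numbers and plain gradient descent (not a real solver such as g2o or GTSAM), just to show how a loop-closure edge redistributes odometry drift:

```python
# Toy 1D pose-graph SLAM: three unit odometry steps, then a loop-closure
# measurement saying the last pose is only 2.7 from the start (drift!).
# Least squares spreads the 0.3 of error over every edge.

def solve_pose_graph(odom, loop, iters=5000, lr=0.05):
    """Minimise the sum of squared edge residuals by gradient descent.
    odom[i] is the measured step from pose i to i+1; loop is the measured
    offset from pose 0 to the last pose. Pose 0 is anchored at 0."""
    n = len(odom) + 1
    x = [0.0] * n
    for _ in range(iters):
        g = [0.0] * n                      # gradient of the total cost
        for i, z in enumerate(odom):       # odometry edges
            r = (x[i + 1] - x[i]) - z
            g[i] -= 2 * r
            g[i + 1] += 2 * r
        r = (x[-1] - x[0]) - loop          # loop-closure edge
        g[0] -= 2 * r
        g[-1] += 2 * r
        for i in range(1, n):              # pose 0 stays fixed
            x[i] -= lr * g[i]
    return x

poses = solve_pose_graph(odom=[1.0, 1.0, 1.0], loop=2.7)
# Dead reckoning alone puts the last pose at 3.0; after optimisation it
# lands near 2.775, with the drift shared between all four edges.
```

A real solver works the same way in principle, just with 2D/3D poses, rotations, and sparse linear algebra instead of scalars.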

  • @StormBurnX · 1 year ago +13

    I remember over a decade ago using a periodically rotating ultrasonic distance sensor on a small Lego robot to do a very basic SLAM, where (luckily) all of the relevant rooms were perfect 1m squares and all we needed to know was where we were in relation to the center of the room.
    It's amazing how far tech has come and how incredibly diverse and useful it is! I love seeing the multi-camera SLAM systems like those used on Skydio drones and inside-out tracked VR headsets

  • @andrewharrison8436 · 1 year ago +69

    Makes you appreciate the human brain. I go down the garden on the right and back on the other side, and I am quite happy that I have closed the loop. On the way things will have moved: the wheelbarrow, the washing in the wind. I can see a programmer absolutely pulling out their hair trying to get a robot to assign permanence to the right landmarks.

    • @oldcowbb · 1 year ago +2

      you are absolutely right, dynamic environments are known to be hard, but it can be addressed by using computer vision to label objects and classify them as dynamic or static

    • @sergiureznicencu · 1 year ago +5

      @@oldcowbb No, it can't be solved. The AI solution is even more noisy and easily fooled. A broken chair under a pile of wreckage cannot be labeled whatever you do. Also, labeling is done, obviously, by people. Being realistic, you realize you cannot even label a small fraction of all normal objects. What you read in the news about all-knowing AI is sugar-coated, and it is very much more sensitive to dynamic conditions.

    • @oluwatosinoseni7839 · 1 year ago +1

      @@sergiureznicencu Yeah, dynamic environments are yet to be solved. I've seen some recent papers that attempt to use segmentation and show promise, and some that run a sort of dynamic landmark detection, but it's still far from solved

    • @mirr0rd · 1 year ago +6

      The thing is, humans don't have a pinpoint accurate map of the world. We localise ourselves in the current room to within a foot or so and work things out as we go. A human wouldn't know if a building was out of square slightly, our maps are much more conceptual, like a graph of room connections.

    • @andrewharrison8436 · 1 year ago +5

      @@mirr0rd Yes, agreed. Particularly on the error involved (hence the stubbed toe in the dark).
      So we build a conceptual map/graph - is that easier or harder than a robot building a map with accurate coordinates? It's certainly a different approach, perhaps each should be jealous of the other's skill?

  • @Gaivs · 1 year ago +17

    SLAM is something I've been studying and working on for some time, so it's very cool to see it discussed! It is so useful in so many cases for automation, as you aren't dependent on external data, such as communication with GPS and such. It is extremely useful both for the maps it produces, and you can use the results for path planning and obstacle avoidance.

    • @swagatochatterjee7104 · 1 year ago +1

      Can anyone explain to me how people update the kd-tree where the point clouds are stored during loop closure? I understand the poses and landmarks are updated, but updating the landmarks should disturb the spatial ordering of the kd-tree. How do you resolve that?

    • @calvinkielas-jensen6665 · 1 year ago

      @@swagatochatterjee7104 I haven't specifically worked with kd trees of point cloud data, but I have done some general SLAM implementations and can speak on how I would approach it. Later on in the video they mention factor graphs. I would link one sample of points (e.g., all the points from one revolution of the LiDAR) to an individual factor within the entire graph. That way, the point cloud is local only to whatever the position of said factor is. If the factor's position changes, then you simply rigidly transform all the points associated with it as well. That way you do not need to constantly rebuild the trees. Once you are ready for post-processing, then you could rebuild the filtered data once.

    • @swagatochatterjee7104 · 1 year ago

      @@calvinkielas-jensen6665 aah ok, so on loop closure the pose-landmark factor constraint only changes relative to the pose, and the visualisation we see is post-processing? However, if we are using landmarks local to the pose, how would one associate landmarks from pose to pose (say using ICP or DBoW)? I am talking in terms of Factor Graph SLAM only.
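The keyframe trick @calvinkielas-jensen6665 describes (store each scan relative to the pose that observed it, and let loop closure move only the pose) can be sketched as follows; the 2D poses and numbers here are hypothetical:

```python
import math

# Points are stored in the frame of the keyframe that observed them, so a
# loop closure only has to move the keyframe; the whole scan follows via
# one rigid transform instead of a kd-tree rebuild. (Hypothetical numbers.)

def points_in_world(pose, local_points):
    """pose = (x, y, theta) of a keyframe; local_points are in its frame."""
    x, y, th = pose
    c, s = math.cos(th), math.sin(th)
    return [(x + c * px - s * py, y + s * px + c * py)
            for px, py in local_points]

scan = [(1.0, 0.0), (0.0, 1.0)]                   # stored once, never edited
before = points_in_world((0.0, 0.0, 0.0), scan)
# After a loop closure the optimiser moves the keyframe; reinterpreting the
# same stored scan in the new pose is all that is needed.
after = points_in_world((0.5, 0.0, math.pi / 2), scan)
```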

  • @klaasvaak8975 · 1 year ago +82

    Would be very interesting to have a deeper dive into this. Like, how do they extract features from the images, and how do they handle uncertainty in the data?

    • @Ceelvain · 1 year ago +6

      And then a Numberphile video on how it's solved!

    • @wisdon · 1 year ago

      those are military secrets, I don't think the Pentagon will be happy to disclose how these work

    • @mikachu69420 · 1 year ago +5

      there's a whole field of research on that. basically a lot of math and statistics and machine learning

    • @copypaste4097 · 1 year ago +1

      Also how they know that the system is near an older position based on the point clouds

    • @oldcowbb · 1 year ago +5

      @@mikachu69420 there is not a lot of machine learning required, very basic visual features (ORB, SIFT) will do, though of course you can also spice things up with machine learning. uncertainty is handled by Bayesian updates, usually with Gaussians or Monte Carlo
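The Gaussian Bayesian update @oldcowbb refers to reduces, in one dimension, to the classic Kalman measurement update. A minimal sketch with made-up numbers:

```python
# One-dimensional Gaussian measurement update (the Kalman filter's core
# step). Prior belief and measurement are fused in proportion to their
# certainty. All numbers here are invented for illustration.

def gaussian_update(mean, var, z, z_var):
    """Fuse a Gaussian prior (mean, var) with a measurement z of variance z_var."""
    k = var / (var + z_var)        # gain: how much to trust the measurement
    return mean + k * (z - mean), (1 - k) * var

# Prior: robot at 10.0 m with variance 4.0. A sharper measurement (variance
# 1.0) says 12.0 m, so the fused estimate moves most of the way towards it.
m, v = gaussian_update(10.0, 4.0, 12.0, 1.0)   # m ≈ 11.6, v ≈ 0.8
```

Note how the fused variance is smaller than either input: combining evidence always sharpens the belief.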

  • @antonisvenianakis1047 · 1 year ago +2

    Amazing video, I love to see that ROS is used for all of these applications, it helps the research grow and the results are amazing. Every year new amazing things happen

  • @LimitedWard · 1 year ago +1

    I only took an introductory course in robotics in college, but I remember learning a bit about SLAM. I personally think it's one of the coolest problems in computer vision.

  • @gogokostadinov · 1 year ago

    First video I've seen where something about the IMU was said... Thank you :)

  • @zxuiji · 1 year ago

    I've got a suggestion for map management: treat the robot as being in a 3×3×N cuboid. Cell 2×2×2 is the one the robot is ALWAYS in; it is declared as R×C×D from 0×0×0, which is always the first place it started recording. Cell 2×2×2 gets the most attention. The cells directly adjacent are for estimating the physics of any interaction (such as a ball knocked off an overhead walkway that it needs to evade); a change vs the stored recent state indicates objects and/or living things in the local area to be wary of; otherwise just keep checking with the dedicated threads. The cells from 4 onwards of dimension N are for vision-based threads; they use empty space by default unless a map of the cell happens to be available. Whenever the robot reaches the normalised 1.0 edge of the cell it's in, the R×C×D it's in changes with it, and its normalised position flips from 1.0 to -1.0 and vice versa. Keeping to a normalised space for analysing physics and reachable locations simplifies the math, since it only has to focus on what is in RAM rather than whatever map file(s) are on disk, which are loaded/unloaded by whatever thread moves the robot's position. The files just need to be stored something like this:
    7FFFFFFF_FFFFFFFF_3FFFFFFF.map
    The underscores indicate the next float value (in this example anyway), so in the above case the 3 values would be read into integers and cast to floats via pointer arithmetic or unions, then treated as the absolute position of the map in use relative to what is stored (which is faster than reading text, obviously). This, combined with the space of the cell in its non-normalised form, gives an exponential increase in absolute positioning while keeping RAM and CPU/GPU/TPU usage reasonable by not using doubles for the physics and mapping of the cells themselves, which no doubt translates to even faster processing of those cells. For the count of N in 3×3×N I would go with 10, so that it's not looking too far ahead but also not ignoring too much with a 3×3×3 arrangement; 10 is also easier for manual math and percentages. Fixed-width map names also make it easier to work out in your own head roughly where the robot saw something relative to where it first started mapping, since the column/row/depth fields are neatly aligned for a quick read by human eyes.

  • @VulpeculaJoy · 1 year ago +1

    You should really go into depth regarding the mathematical intricacies of probabilistic robotics! It's an awesome field of math and engineering!

  • @alwasitacatisaw1275 · 1 year ago +4

    We recently programmed a small Mindstorms robot to steer through an obstacle course; it used lidar scans to map the environment and then found its path step by step by means of PRM (probabilistic roadmap method). That was fun.
    2D only but you have to start somewhere I guess :)

  • @Petch85 · 1 year ago +1

    Long time since I have seen Brady. That was nice.

  • @mrmphomahlangu9274 · 3 months ago

    this is actually quite cool

  • @SyntheticFuture · 1 year ago +5

    So you could in theory hive-mind these maps, so you have multiple robots and potentially drones that exchange map data with each other, right? That could be a very powerful way to quickly teach new robots the landmarks and rooms they might have to navigate, or to complete loops more quickly as robots work together on the same loops?

    • @papa_pt · 1 year ago +1

      Sounds like a great idea. The robots would just need to communicate their relative distance and orientation to each other

    • @swagatochatterjee7104 · 1 year ago +2

      Distributed map fusion is kind of hard, especially since SLAM isn't weather and season independent yet.

    • @DotcomL · 1 year ago +3

      Very much ongoing research. And to make it more fun, distributed algorithms open you to the problem of "what if one or more agents are lying?"

    • @Zap12348 · 1 year ago +1

      Actually NASA has implemented one. It's a combo of a drone and a rover. The drone identifies the safe path for the rover to travel.

  • @Nethershaw · 1 year ago

    I do so very much enjoy peeking under the hood into the robotics software through these videos.

  • @amjadalkhalifa3412 · 1 month ago

    this video is so cool

  • @sarkybugger5009 · 1 year ago +1

    My robot vacuum cleaner has been doing this for the last five years. Clever little bugger. £200 to never have to push the hoover around again. 👍

  • @hugofriberg3445 · 1 year ago +1

    When will you cover the beer pissing attachment?

  • @charliebaby7065 · 1 year ago

    Thank you Jane street

  • @timeimp · 1 year ago +3

    Does all this SLAM algorithm work get saved in-memory or on disk? Very interested to see how the theory is implemented.
    Amazing content either way - Spot will no-doubt be a part of all our futures...

    • @YoutubeFrogOfHell · 1 year ago +1

      It depends on the map's scale and the amount of data.
      If it's a tiny warehouse and 2D lidar, things are straightforward.
      On a large scale though, such as self-driving with 3D lidar, you need to load "chunks" of the map based on the location; this is where Big Data comes in :)

    • @MarekKnapek · 1 year ago +1

      One of the entry-level LiDARs will produce about 8 Mbit/s of data, that is about 1 MB/s. I guesstimate one round around the room by the robot to be 30-60 s. That is 60 MB of data per loop. Plus accelerometer and visible-light camera data, use a better lidar ... in the worst case ... let's say ... 5-10× as much, maybe? Still less than a gig of RAM, easily fits on the on-board computer. Processing the data, on the other hand? I have no idea.

    • @VulpeculaJoy · 1 year ago

      @@MarekKnapek There is a lot of research on lidar data compression for obvious reasons. Most algorithms don't use raw pointclouds for the slam problem, instead the data is preprocessed into less dense structures, sometimes using standard methods, sometimes with the help of AI models. To recover a highly detailed map, some state-of-the-art methods again use AI for decompression.
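@MarekKnapek's guesstimate above is easy to check as explicit arithmetic (same assumed numbers):

```python
# The back-of-envelope data-rate estimate from the comment above, written
# out (same assumed numbers: 8 Mbit/s lidar, a 60 s loop, 10x worst case).
rate_mb_s = 8 / 8                        # 8 Mbit/s is about 1 MB/s
lidar_mb_per_loop = rate_mb_s * 60       # one slow lap of the room
worst_case_mb = lidar_mb_per_loop * 10   # cameras, IMU, a denser lidar
# 60 MB per loop, 600 MB worst case: still under a gigabyte of RAM.
```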

  • @FisicoNuclearCuantico · 1 year ago

    Thank you Sean.

  • @cjordahl · 1 year ago +2

    How does the software determine that point X50 is the same as point X2? How does it 'know' that it can close the loop?
    Also, is there a long term memory aspect that enables the software to continually refine the map and sharpen the picture?

    • @jursamaj · 1 year ago +3

      Even with the small IMU errors, once it gets close, the point clouds near X2 and X50 start matching. Once it realizes it's matching features, that *is* closing the loop. It just has to adjust the X's in between (what he called 'warping the map') to make them coincide exactly.

    • @VulpeculaJoy · 1 year ago

      Different matching strategies for landmarks involve either RANSAC, JCBB or simple nearest-neighbour searches. For raw lidar data, ICP is the best way to align two scans and insert a corresponding edge into the pose graph.
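As a caricature of the ICP alignment @VulpeculaJoy mentions, here is a translation-only matching step (real ICP also estimates rotation and iterates until the pairing stops changing; the points and offsets below are invented):

```python
# Translation-only caricature of one ICP step: pair every scan point with
# its nearest map point, then average the offsets. Real ICP also solves
# for rotation and repeats with the updated pairing. (Toy points.)

def icp_translation_step(scan, map_points):
    """Return the mean offset that moves scan towards map_points."""
    dx = dy = 0.0
    for sx, sy in scan:
        nx, ny = min(map_points,
                     key=lambda p: (p[0] - sx) ** 2 + (p[1] - sy) ** 2)
        dx += nx - sx
        dy += ny - sy
    n = len(scan)
    return dx / n, dy / n

# A scan shifted by (0.4, 0) relative to the map snaps straight back when
# the shift is small compared with the spacing between points.
scan = [(0.4, 0.0), (0.4, 1.0), (1.4, 0.0)]
map_points = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0)]
shift = icp_translation_step(scan, map_points)   # ≈ (-0.4, 0.0)
```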

  • @john-paulmarletta2110 · 1 year ago

    really great video!

  • @OrphanSolid · 1 year ago +1

    it looks like a videogame scanner or map. Metal Gear was ahead of its time lol

  • @cl759 · 1 year ago

    That thing is perfect shape for next gen robovac. Give it suction tool at the front and 2 telescope feather dusters inside front limbs et voilà

  • @mattstyles4283 · 1 year ago

    You guys should look at the robots at Ocado, Tom Scott did a video on them

  • @skaramicke · 1 year ago

    Can you use the loop closure to add some constant to the IMU guesswork so the errors are likely smaller next time?

  • @charliebaby7065 · 1 year ago

    OMG.... did you actually respond to my plea to cover SLAM with this video?
    Either way, thank you so much.
    I promise never to call any of you stupid again.
    (If. ... it's monocular slam, onboard, in browser and written in javascript. .... with no open3d.. or tensor flow. .... or ARcore or arkit)
    You guys can do it

  • @gunnargu · 1 year ago

    We have an IMU of sorts, and two cameras, and touch sensitive limbs. Should we not strive to make computers able to work just using that?

  • @konstantinlozev2272 · 1 year ago

    That sensor fusion was solved by current $300 VR headsets like the Oculus Quest, no?

  • @fly1ngsh33p7 · 1 year ago

    Can you confuse/disturb the robot by walking around it with a big mirror?
    What would the 3D map look like?

    • @oldcowbb · 1 year ago +1

      not sure about lidar, but mirrors are the bane of robotic vision

    • @mgeisert6345 · 1 year ago +1

      I believe the issue is that the lidar needs some diffusion from the material it is detecting such that at least part of the laser will come back to the sensor to be detected. So with a mirror it will have the same effect as light, the robot won't see the mirror but will see the reflection inside the mirror.

  • @Petch85 · 1 year ago

    When you solve the optimization problem (I guess it is some variation of a least squares algorithm), is there a good way to find outliers? (Say there is a large measurement error in the LIDAR and the robot now thinks it has moved a lot in the room, but the accelerations look as expected, as if the robot has not moved more than usual.)
    Can you tell which measurements align with each other, or do they all look equally valid? Maybe something walks in front of the sensor and you have some measurements that do not line up with all the rest.

    • @oldcowbb · 1 year ago

      not sure if this is what you are asking, but the least squares is scaled by the uncertainty, each measurement is assigned an information matrix, the more certain you are about a measurement, the more it matters in the optimization

    • @VulpeculaJoy · 1 year ago

      Yes, current research is largely focussing on segmenting moving objects from the sensor data while also solving the SLAM problem. It's a little more complex than least squares though.
      Some new lidars can also measure Doppler shift and thus the velocity of a point in space. This makes the task of segmenting and tracking moving objects much simpler.
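The information-weighted least squares @oldcowbb describes can be illustrated with the simplest possible case, fusing two measurements of one quantity (numbers invented):

```python
# Simplest case of information-weighted least squares: two measurements of
# the same quantity. Each measurement's weight is its information, i.e. the
# inverse of its variance. (Invented numbers.)

def weighted_estimate(measurements):
    """measurements: list of (value, variance) pairs."""
    info = sum(1.0 / var for _, var in measurements)
    mean = sum(z / var for z, var in measurements) / info
    return mean, 1.0 / info

# A confident odometry fix (variance 0.25) dominates a noisy lidar fix
# (variance 4.0): the fused estimate stays close to 10.0.
est, var = weighted_estimate([(10.0, 0.25), (11.0, 4.0)])
```

An outlier with an honestly large variance therefore barely moves the solution; robust kernels go further and down-weight residuals that stay stubbornly large.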

  • @douro20 · 1 year ago

    What is the single-board computer being used there?

  • @DUDIDUAN · 1 year ago

    Is that Ouster lidar?

  • @borisverkhovskiy5169 · 1 year ago

    How do you distinguish landmarks from things that are moving?

    • @AlexXPandian · 11 months ago

      Some form of RANSAC is running during feature matching to ignore outliers. The general assumption is that not everything is moving.

  • @AjSmit1 · 1 year ago

    i love this channel but i always have to turn my volume off and turn on captions whenever vous bust out the markers and paper 😅😅 the noise just turns my stomach ;-;

  • @ghostray9056 · 1 year ago

    Is it fixed-lag smoothing or full smoothing ?

  • @Shaft0 · 1 year ago

    Makes one curious where Stone Aerospace is nowadays.

  • @swagatochatterjee7104 · 1 year ago

    Can anyone explain to me how people update the kd-tree where the point clouds are stored during loop closure? I understand the poses and landmarks are updated, but updating the landmarks should disturb the spatial ordering of the kd-tree. How do you resolve that?

    • @sandipandas4420 · 1 year ago

      Search for ikd tree

    • @swagatochatterjee7104 · 1 year ago

      @@sandipandas4420 what was the process before the ikd-tree was invented? Do they use a pointer to determine which changes happened in which point clouds?

  • @zakiranderson722 · 1 year ago

    I just need a bottle of milk

  • @retepaskab · 1 year ago

    Can it distinguish similar rooms opening off a looped corridor?

    • @hyeve5319 · 1 year ago

      I'd expect so - it's tracking position via velocities and orientations as well as via the mapping, so it should be able to tell that it's in a different place that just looks the same

  • @neerajjoshi7228 · 1 year ago

    👏👏

  • @raphaellmsousa · 11 months ago

    Very didactic explanation

  • @CottonInDerTube · 1 year ago +1

    I wanna see that in a house of mirror :)
    Relationship THAT! :D

  • @guilherme5094 · 1 year ago

    👍

  • @abdullahjhatial2614 · 1 year ago

    how do you make a 3D environment? isn't lidar giving us a 2D graph, just point data in one direction with its orientation?

    • @donperegrine922 · 4 months ago

      There are lidars which are tilted onto their sides, and the entire LIDAR unit rotates, so that you get a series of 45-degree slices; stitched together, they can make a 3D map

    • @donperegrine922 · 4 months ago +1

      Those are expensive, though! Actually...all LIDARS are super pricey

  • @meh3247 · 1 year ago +1

    Fascinating insight into how we can stop these machines after the war mongers have strapped automatic weapons onto them.
    Thanks.

  • @Jkauppa · 1 year ago

    you better have radio base stations for localization as well

    • @Jkauppa · 1 year ago

      usually you don't just have to scan without first being around, you can deploy base station beacons to get accurate local positioning relative to the beacons, and mutually between the beacons

    • @Jkauppa · 1 year ago

      breadcrumb beacons dropped along the scan path to be used as the mapping localization anchor beacons

    • @Jkauppa · 1 year ago

      also you could put those marks in the lidar scan positions only in 3d space, to directly align to those points always, just random align point cloud, much less than aligning to all the points

    • @Jkauppa · 1 year ago

      maximum likelihood mapping model

    • @Jkauppa · 1 year ago

      most likely model based on the measurements

  • @andreykojevnikov1086 · 1 year ago

    So they assembled a slamhound

  • @LaviArzi · 1 year ago

    Is the code/library public? Where can I find more information about it?

    • @oldcowbb · 1 year ago

      google ROS SLAM, thousands of posts and examples on that

  • @quill444 · 1 year ago +1

    Let's see a dozen of them, on ice skates, programmed for survival, with self-learning code, hunting one another down, with oxyacetylene torches! 💥 - j q t -

    • @donperegrine922 · 4 months ago

      This is the REAL dream of robotic engineers!

  • @billr3053 · 1 year ago +2

    You know you're working at a dream job when your office's stairwell has railings with lighting underneath. Who the heck overfunds all this??? When all they do there is "think tank" and make prototypes no one asked for. They're not solving existing problems. Someone is feeding them 'future things that would be cool' projects. I've never run across places like that. How do they turn a profit?
    Everything is super clean.
    I think their board of directors are lizard alien overlords. Mulder from "The X-Files" was right: "military-industrial-entertainment complex".

    • @hyeve5319 · 1 year ago

      I'm guessing you haven't seen what robots like these are used for, then? They're absolutely solving for existing problems ~ at least, problems of safety and routine.
      Robots like Spot are used commercially to do automated (and manual if needed) inspections of sites that are dangerous for humans, for instance, mines, radioactive sites, and places with other dangerous equipment.
      Sure, they might not be "essential", but it is safer, easier, more cost effective, and potentially more reliable, to have a robot do those tasks rather than a human.
      And in general, this kind of robotics technology is extremely important for any kind of robots that need to perform "open-world" tasks, and there's many reasons you might want that.

    • @jimgorlett4269 · 1 year ago

      that's just all of tech: an overfunded bubble of hype with little to no real-world uses

  • @tom-stein · 1 year ago +1

    Interesting video. There is some really annoying high frequency noise at around 04:00.

  • @science_and_technology6 · 1 year ago

    What are the complete steps to create a PayPal money adder program?

  • @aamiddel8646 · 1 year ago

    Have you considered to use a Kalman filter to merge sensor data together?

    • @oldcowbb · 1 year ago +2

      thats a standard practice

  • @gorkyrojas9346 · 4 months ago

    This explained nothing. 'We collect these landmarks and position data and then the algorithm solves it, except it's not really solved.'
    Okay, cool, I get it now?
    Weird ad video.

  • @TehVulpez · 1 year ago

    SLAM describes the sound of what I would do with a baseball bat if I saw one of these things in person

  • @ScottPlude · 1 year ago

    Great video but ya gotta find another platform other than YouTube. Move to Rumble or Odysee.

  • @sherkhanthelegend7169 · 1 year ago

    Second 😊

  • @MichaelKingsfordGray · 1 year ago

    Again: The wrong way around!
    Compute what the internal model should look like, compare with the "real" image, and refine the model until it matches.
    That is how all animals work.

  • @cscscscss · 1 year ago +1

    Hello dear viewer watching this about 1-4 years after the creation of this video :)

  • @duartelucas8129 · 1 year ago

    Please change the pen. It kills me every time; the noise is excruciating.

  • @AdibasWakfu · 1 year ago

    He sounds so annoyed the whole video haha

  • @elonfc · 1 year ago +1

    First comment

    • @wisdon · 1 year ago +1

      no prize yet?

    • @elonfc · 1 year ago +1

      @@wisdon that guy didn't like my comment, ok so i give myself Oscars for doin this.

  • @thisisajoke0 · 1 year ago

    It continues to amaze me how supposedly smart people keep helping the progress of what will very clearly be a huge downfall of humans. These people have narrowed their education far, far too much and need to broaden their understanding of history.

    • @hyeve5319 · 1 year ago

      Humanity has a lot of much more imminent problems than a robot takeover. This kind of tech is super impressive and very useful for some things, but robots are nowhere even close to the generalist skills of humans, and AI is even less so.
      Sure, (4-legged) bots can walk around rooms by themselves, and AI can draw close-to-realistic images, but each on their own can pretty much *only* do those things, nothing else.

  • @pozog8987 · 1 year ago

    The "chicken and egg" description of SLAM needs to stop. "SLAM" is simply using perceptual information to correct for errors in dead reckoning (without a map to use as a reference). If you have a reference map, it's called "localization."
    That definition applies to other types of non-metric SLAM not discussed here (topological and appearance-based SLAM).

    • @oldcowbb · 1 year ago

      chicken and egg refers to the map and the localization; you are missing the part where the robot assembles the map, which is also a key output of SLAM, it's not just for correcting the odometry. The sensor information is useless if you don't build a map. Every solution to SLAM boils down to building a rough map from dead reckoning first, then simultaneously updating the map and the localization when the same part of the map is observed. it's overly clichéd but not wrong

  • @tocsa120ls · 1 year ago

    Nuclear decommissioning? You'll need something much more rad-hardened than a BD dog...

  • @ChickenPermissionOG · 1 year ago

    scientists need to come up with better names.