SLAM Robot Mapping - Computerphile

  • Published: 27 Nov 2024
  • Thanks to Jane Street for their support...
    Check out internships here: bit.ly/compute...
    More links & stuff in full description below ↓↓↓
    This video features the Oxford Robotics Institute demonstrating their SLAM algorithm with their frontier device and the Boston Dynamics Spot robot. Thanks to Marco Camurri & Michal Staniaszek for their time. Last time we met Spot: • Automating Boston Dyna... Joining Point Clouds with Iterative Closest Point: • Iterative Closest Poin...
    JANE STREET...
    Applications for Summer 2023 internships are open and filling up... Roles include: Quantitative Trading, Software Engineering, Quantitative Research and Business Development... and more.
    Positions in New York, Hong Kong, and London.
    bit.ly/compute...
    / computerphile
    / computer_phile
    This video was filmed and edited by Sean Riley.
    Computer Science at the University of Nottingham: bit.ly/nottsco...
    Computerphile is a sister project to Brady Haran's Numberphile. More at www.bradyharan.com

Comments • 141

  • @rallekralle11
    @rallekralle11 2 years ago +539

    i made a little robot that used slam mapping once. that is, it slammed into walls to find out where they were

    • @waynec369
      @waynec369 2 years ago +10

      😀😆😅🤣😂😂😂

    • @qqii
      @qqii 2 years ago +31

      Top of the line roomba robots do the same thing, so your robot was as cutting edge as they are.

    • @timseguine2
      @timseguine2 2 years ago +33

      "Simultaneous Localisation And Mutilation"

    • @rallekralle11
      @rallekralle11 2 years ago +10

      @@timseguine2 exactly. it didn't last long

    • @CoolAsFreya
      @CoolAsFreya 2 years ago +8

      I knew a vision impaired dog that navigated like this

  • @jonwhite6894
    @jonwhite6894 2 years ago +44

    “The IMU has physics inside”
    I’m stealing this line for my next UAV LIDAR lecture

    • @infinix2003
      @infinix2003 2 months ago

      U working on UAV's, so cool

  • @StormBurnX
    @StormBurnX 2 years ago +14

    I remember over a decade ago using a periodically rotating ultrasonic distance sensor on a small Lego robot to do a very basic SLAM, where (luckily) all of the relevant rooms were perfect 1m squares and all we needed to know was where we were in relation to the center of the room.
    It's amazing how far tech has come and how incredibly diverse and useful it is! I love seeing the multi-camera SLAM systems like used on Skydio drones and inside-out tracked VR headsets

  • @Mutual_Information
    @Mutual_Information 2 years ago +49

    One of the coolest applications of the Kalman Filter is slam modeling. Really gives you a sense for how flexible the Kalman Filter is..

    • @oldcowbb
      @oldcowbb 2 years ago +7

      kalman filter for slam is quite outdated, all the state-of-the-art methods do least squares on a graph

    • @Mutual_Information
      @Mutual_Information 2 years ago +5

      @@oldcowbb Makes sense - saw it in a textbook teaching Kalman Filter.

    • @zombie_pigdragon
      @zombie_pigdragon 2 years ago +2

      @@oldcowbb Can you link some more information? It would be cool to get a more detailed overview after the video summary.

    • @oldcowbb
      @oldcowbb 2 years ago +7

      @@zombie_pigdragon find cyrill stachniss graph slam on youtube, he describes the math behind graph slam in detail, he also has a paper on a graph slam tutorial, basically a paper version of his lecture

    • @oldcowbb
      @oldcowbb 2 years ago

      @@zombie_pigdragon there is also a matlab tech talk on graph slam, but that one is more on the intuition than the actual math
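
The graph least-squares approach mentioned in this thread can be sketched with a tiny 1-D toy (entirely hypothetical numbers, and plain gradient descent instead of the Gauss-Newton solvers real systems use): poses are the unknowns, odometry steps and one loop-closure measurement are the constraints, and the optimizer spreads the accumulated drift along the trajectory.

```python
# Toy 1-D pose-graph SLAM: gradient descent on a least-squares objective.
# Nodes: poses p[0..3] (p[0] anchored at the origin).
# Edges: three odometry measurements (drifted: 1.1 m each, true step 1.0 m)
# and one loop-closure measurement saying p[3] - p[0] = 3.0 m.

def optimize(steps=5000, lr=0.05):
    p = [0.0, 1.1, 2.2, 3.3]          # initial guess from dead reckoning
    odom = [1.1, 1.1, 1.1]            # drifted relative measurements
    loop = 3.0                        # loop-closure constraint on p[3] - p[0]
    for _ in range(steps):
        g = [0.0] * 4
        for i, z in enumerate(odom):  # residual: (p[i+1] - p[i]) - z
            r = p[i + 1] - p[i] - z
            g[i + 1] += 2 * r
            g[i] -= 2 * r
        r = p[3] - p[0] - loop        # loop-closure residual
        g[3] += 2 * r
        g[0] -= 2 * r
        for i in range(1, 4):         # p[0] stays fixed as the anchor
            p[i] -= lr * g[i]
    return p

poses = optimize()
# The drift is spread along the chain: p[3] ends up between the raw
# odometry estimate (3.3) and the loop-closure value (3.0).
```

This is the "warping the map" step in miniature: the loop-closure edge pulls the end of the trajectory back, and the in-between poses shift to keep all constraints as satisfied as possible.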

  • @andrewharrison8436
    @andrewharrison8436 2 years ago +74

    Makes you appreciate the human brain. I go down the garden on the right and back on the other side, and I am quite happy that I have closed the loop. On the way, things will have moved: the wheelbarrow, the washing in the wind. I can see a programmer absolutely pulling out their hair trying to get a robot to assign permanence to the right landmarks.

    • @oldcowbb
      @oldcowbb 2 years ago +3

      you are absolutely right, dynamic environments are known to be hard, but it can be addressed by using computer vision to label objects and classify them as dynamic or static

    • @sergiureznicencu
      @sergiureznicencu 2 years ago +6

      @@oldcowbb No it can't be solved. The AI solution is even more noisy and easily fooled. A broken chair under a pile of wreckage cannot be labeled whatever you do. Also, labeling is done, obviously, by people. Being realistic, you realize you cannot even label a small fraction of all normal objects. What you read in the news about all-knowing AI is sugar-coated, and it is far more sensitive to dynamic conditions.

    • @oluwatosinoseni7839
      @oluwatosinoseni7839 2 years ago +1

      @@sergiureznicencu Yeah, dynamic environments are yet to be solved. I've seen some recent papers that attempt to use segmentation and show promise, and some that run a sort of dynamic landmark detection, but it's still far from solved

    • @mirr0rd
      @mirr0rd 2 years ago +6

      The thing is, humans don't have a pinpoint accurate map of the world. We localise ourselves in the current room to within a foot or so and work things out as we go. A human wouldn't know if a building was out of square slightly, our maps are much more conceptual, like a graph of room connections.

    • @andrewharrison8436
      @andrewharrison8436 2 years ago +5

      @@mirr0rd Yes, agreed. Particularly on the error involved (hence the stubbed toe in the dark).
      So we build a conceptual map/graph - is that easier or harder than a robot building a map with accurate coordinates? It's certainly a different approach, perhaps each should be jealous of the other's skill?

  • @Gaivs
    @Gaivs 2 years ago +20

    SLAM is something I've been studying and working on for some time, so it's very cool to see it discussed! It is so useful in so many cases for automation, as you aren't dependent on external data, such as communication with GPS. It is extremely useful both for the maps it produces and because you can use the results for path planning and obstacle avoidance.

    • @swagatochatterjee7104
      @swagatochatterjee7104 2 years ago +2

      Can anyone explain to me how people update the kd tree where the point clouds are stored during loop closure? I understand the poses and landmarks are updated, but updating the landmarks should disturb the spatial ordering of the kd tree. How do you resolve that?

    • @calvinkielas-jensen6665
      @calvinkielas-jensen6665 2 years ago +1

      @@swagatochatterjee7104 I haven't specifically worked with kd trees of point cloud data, but I have done some general SLAM implementations and can speak on how I would approach it. Later on in the video they mention factor graphs. I would link one sample of points (e.g., all the points from one revolution of the LiDAR) to an individual factor within the entire graph. That way, the point cloud is local only to whatever the position of said factor is. If the factor's position changes, then you simply rigidly transform all the points associated with it as well. That way you do not need to constantly rebuild the trees. Once you are ready for post-processing, then you could rebuild the filtered data once.

    • @swagatochatterjee7104
      @swagatochatterjee7104 2 years ago

      @@calvinkielas-jensen6665 aah ok, so on loop closure the pose-landmark factor constraint only changes relative to the pose, and the visualisation we see is post-processing? However, if we are using landmarks local to the pose, how would one associate landmarks from pose to pose (say using ICP or DBoW)? I am talking in terms of factor graph SLAM only.
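
A minimal 2-D sketch of the bookkeeping this thread describes, with made-up numbers: each scan's points stay in the frame of the pose that observed them, so a loop-closure correction only moves poses, and the world map is rebuilt by re-running a rigid transform rather than editing any spatial tree.

```python
import math

# Each scan's points live in the sensor frame of the pose that captured
# them; the world map is rebuilt by transforming through the current,
# possibly loop-closure-corrected, pose estimates.

def to_world(pose, local_points):
    """Rigidly transform sensor-frame points into the world frame."""
    x, y, theta = pose
    c, s = math.cos(theta), math.sin(theta)
    return [(x + c * px - s * py, y + s * px + c * py)
            for px, py in local_points]

scan = [(1.0, 0.0), (0.0, 1.0)]        # points in the sensor frame
pose_before = (5.0, 0.0, 0.0)          # drifted pose estimate
pose_after = (4.8, 0.1, math.pi / 2)   # same pose after loop closure

# No per-point bookkeeping needed: just re-run the transform.
map_before = to_world(pose_before, scan)
map_after = to_world(pose_after, scan)
```

Under this scheme, a loop closure never touches the stored scans at all; only the handful of pose variables change, and any global structure (kd tree included) is rebuilt lazily from the corrected poses.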

  • @antonisvenianakis1047
    @antonisvenianakis1047 2 years ago +2

    Amazing video, I love to see that ROS is used for all of these applications, it helps the research grow and the results are amazing. Every year new amazing things happen

  • @klaasvaak8975
    @klaasvaak8975 2 years ago +83

    Would be very interesting to have a deeper dive into this. Like, how do they extract features from the images, and how do they handle uncertainty in the data?

    • @Ceelvain
      @Ceelvain 2 years ago +7

      And then a Numberphile video on how it's solved!

    • @wisdon
      @wisdon 2 years ago

      those are military secrets, I don't think the Pentagon will be happy to disclose how these work

    • @mikachu69420
      @mikachu69420 2 years ago +6

      there's a whole field of research on that. basically a lot of math and statistics and machine learning

    • @floppypaste
      @floppypaste 2 years ago +1

      Also how they know that the system is near an older position based on the point clouds

    • @oldcowbb
      @oldcowbb 2 years ago +5

      @@mikachu69420 there isn't a lot of machine learning required, very basic visual features (ORB, SIFT) will do, of course you can also spice things up with machine learning. uncertainty is handled by Bayesian updates, usually Gaussian or Monte Carlo

  • @LimitedWard
    @LimitedWard 2 years ago +1

    I only took an introductory course in robotics in college, but I remember learning a bit about SLAM. I personally think it's one of the coolest problems in computer vision.

  • @VulpeculaJoy
    @VulpeculaJoy 1 year ago +1

    You should really go into depth regarding the mathematical intricacies of probabilistic robotics! It's an awesome field of math and engineering!

  • @zxuiji
    @zxuiji 2 years ago

    I've got a suggestion for map management: treat the robot as being in a 3x3xN cuboid, where cell 2x2x2 is the one the robot is ALWAYS in, declared as RxCxD from 0x0x0, which is always the first place it started recording. Cell 2x2x2 gets the most attention. The cells directly adjacent are for estimating the physics of any interaction (such as a ball knocked off an overhead walkway that it needs to evade); a change versus the stored recent state indicates objects and/or living things in the local area to be wary of; otherwise just keep checking with the dedicated threads. Cells 4 onwards of dimension N are for vision-based threads; by default they use empty space unless a map of the cell happens to be available. Whenever the robot reaches the normalised 1.0 edge of the cell it's in, the RxCxD it's in changes with it, and its normalised position flips from 1.0 to -1.0 and vice versa. Keeping to a normalised space for analysing physics and reachable locations simplifies the maths, since it only has to focus on what is in RAM rather than whatever map file(s) are on disk, which are loaded/unloaded by whatever thread moves the robot's position. The files just need to be stored something like this:
    7FFFFFFF_FFFFFFFF_3FFFFFFF.map
    The underscores indicate the next float value (in this example anyway), so in the above case the 3 values would be read into integers, cast to floats via pointer arithmetic or unions, then treated as the absolute position of the map in relation to what is stored (which is faster than reading text, obviously). Combined with the space of the cell in its non-normalised form, this gives an exponential increase in absolute positioning while keeping RAM and CPU/GPU/TPU usage reasonable by not using doubles for the physics and mapping of the cells themselves, which no doubt translates to even faster processing of those cells. For the count of N in 3x3xN I would go with 10, so that it's not looking too far ahead but also not ignoring too much as a 3x3x3 arrangement would; 10 is also easier for manual maths and percentages. Using fixed-width map names also makes it easier to work out in your own head roughly where the robot saw something relative to where it first started mapping, since the column/row/depth fields are neatly aligned for a quick read by human eyes.

  • @sarkybugger5009
    @sarkybugger5009 2 years ago +2

    My robot vacuum cleaner has been doing this for the last five years. Clever little bugger. £200 to never have to push the hoover around again. 👍

  • @gogokostadinov
    @gogokostadinov 1 year ago

    First video I've seen where something about the IMU was said... Thank you :)

  • @alwasitacatisaw1275
    @alwasitacatisaw1275 2 years ago +5

    We recently programmed a small Mindstorms robot to steer through a course; it used lidar scans to map the environment and then found its path step by step by means of PRM (probabilistic roadmap method). That was fun.
    2D only but you have to start somewhere I guess :)

  • @Petch85
    @Petch85 2 years ago +1

    Long time since I have seen Brady. That was nice.

  • @Nethershaw
    @Nethershaw 2 years ago

    I do so very much enjoy peeking under the hood into the robotics software through these videos.

  • @charliebaby7065
    @charliebaby7065 2 years ago

    Thank you Jane street

  • @SyntheticFuture
    @SyntheticFuture 2 years ago +5

    So you could in theory hive-mind these maps, so you have multiple robots and potentially drones that exchange map data with each other, right? That could be a very powerful way to quickly teach new robots the landmarks and rooms they might have to navigate, or to more quickly complete loops as robots work together on the same loops?

    • @papa_pt
      @papa_pt 2 years ago +1

      Sounds like a great idea. The robots would just need to communicate their relative distance and orientation to each other

    • @swagatochatterjee7104
      @swagatochatterjee7104 2 years ago +2

      Distributed map fusion is kind of hard, especially since SLAM isn't weather and season independent yet.

    • @DotcomL
      @DotcomL 2 years ago +3

      Very much ongoing research. And to make it more fun, distributed algorithms open you to the problem of "what if one or more agents are lying?"

    • @Zap12348
      @Zap12348 1 year ago +1

      Actually NASA has implemented one. It's a combo of a drone and a rover; the drone identifies the safe path for the rover to travel.

  • @OrphanSolid
    @OrphanSolid 2 years ago +1

    it looks like a videogame scanner or map. Metal Gear was ahead of its time lol

  • @charliebaby7065
    @charliebaby7065 2 years ago

    OMG.... did you actually respond to my plea to cover SLAM with this video?
    Either way, thank you so much.
    I promise never to call any of you stupid again.
    (If. ... it's monocular slam, onboard, in browser and written in javascript. .... with no open3d.. or tensor flow. .... or ARcore or arkit)
    You guys can do it

  • @FisicoNuclearCuantico
    @FisicoNuclearCuantico 2 years ago

    Thank you Sean.

  • @mrmphomahlangu9274
    @mrmphomahlangu9274 7 months ago

    this is actually quite cool

  • @cl759
    @cl759 2 years ago

    That thing is the perfect shape for a next-gen robovac. Give it a suction tool at the front and 2 telescopic feather dusters inside the front limbs et voilà

  • @timeimp
    @timeimp 2 years ago +3

    Does all this SLAM algorithm work get saved in-memory or on disk? Very interested to see how the theory is implemented.
    Amazing content either way - Spot will no-doubt be a part of all our futures...

    • @YoutubeFrogOfHell
      @YoutubeFrogOfHell 2 years ago +1

      It depends on the map's scale and the amount of data.
      If it's a tiny warehouse and 2d lidar - things are straightforward
      On a large scale though, such as self-driving with 3d lidar, you need to load "chunks" of the map based on the location; this is where Big Data comes in :)

    • @MarekKnapek
      @MarekKnapek 2 years ago +1

      One of the entry-level LiDARs will produce about 8 Mbit/s of data, that is about 1 MB/s. I guesstimate one round around the room by the robot to be 30-60 s. That is 60 MB of data per loop. Plus accelerometer and visible-light camera data, use a better lidar ... in the worst case ... let's say ... 5-10× as much, maybe? Still less than a gig of RAM, easily fits on the on-board computer. Processing the data on the other hand? I have no idea.

    • @VulpeculaJoy
      @VulpeculaJoy 1 year ago

      @@MarekKnapek There is a lot of research on lidar data compression for obvious reasons. Most algorithms don't use raw pointclouds for the slam problem, instead the data is preprocessed into less dense structures, sometimes using standard methods, sometimes with the help of AI models. To recover a highly detailed map, some state-of-the-art methods again use AI for decompression.
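
The back-of-envelope arithmetic in this thread is easy to check; a quick sketch of the guesstimate (all figures are the commenter's assumptions, not measured numbers):

```python
# Guesstimated lidar data budget for one loop around a room.
lidar_rate_mbit_s = 8                  # entry-level lidar, ~8 Mbit/s
rate_mb_s = lidar_rate_mbit_s / 8      # -> ~1 MB/s
loop_seconds = 60                      # one slow lap of the room
raw_mb = rate_mb_s * loop_seconds      # ~60 MB of raw points per loop
worst_case_mb = raw_mb * 10            # cameras, IMU, a better lidar...
# 600 MB worst case still fits comfortably in a gigabyte of RAM.
```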

  • @meostyles
    @meostyles 2 years ago

    You guys should look at the robots at Ocado, Tom Scott did a video on them

  • @amjadalkhalifa3412
    @amjadalkhalifa3412 4 months ago

    this video is so cool

  • @hugofriberg3445
    @hugofriberg3445 2 years ago +1

    When will you cover the beer pissing attachment?

  • @cjordahl
    @cjordahl 2 years ago +2

    How does the software determine that point X50 is the same as point X2? How does it 'know' that it can close the loop?
    Also, is there a long term memory aspect that enables the software to continually refine the map and sharpen the picture?

    • @jursamaj
      @jursamaj 2 years ago +3

      Even with the small IMU errors, once it gets close, the point clouds near X2 and X50 start matching. Once it realizes it's matching features, that *is* closing the loop. It just has to adjust the X's in between (what he called 'warping the map') to make them coincide exactly.

    • @VulpeculaJoy
      @VulpeculaJoy 1 year ago

      Different matching strategies for landmarks involve either RANSAC, JCBB or simple nearest-neighbour searches. For raw lidar data, ICP is the best way to align two scans and insert a corresponding edge into the pose graph.
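
As a toy illustration of the scan-matching idea in this thread: pair each point with its nearest neighbour in the reference scan, then shift by the mean offset. This is a translation-only, single-pass sketch with made-up numbers; real ICP iterates the matching and also solves for rotation.

```python
# Toy, translation-only flavour of scan matching between two point sets.

def match_translation(ref, scan):
    """Estimate the (dx, dy) that shifts `scan` onto `ref`."""
    def nearest(p):
        return min(ref, key=lambda q: (q[0] - p[0]) ** 2 + (q[1] - p[1]) ** 2)
    pairs = [(p, nearest(p)) for p in scan]
    dx = sum(q[0] - p[0] for p, q in pairs) / len(pairs)
    dy = sum(q[1] - p[1] for p, q in pairs) / len(pairs)
    return dx, dy

ref = [(0.0, 0.0), (2.0, 0.0), (0.0, 2.0)]
# The same landmarks seen again, but dead reckoning has drifted by (0.3, -0.2).
scan = [(x + 0.3, y - 0.2) for x, y in ref]
dx, dy = match_translation(ref, scan)
# A small, consistent (dx, dy) with good matches is the "we're back at a
# known place" signal: close the loop and warp the in-between poses.
```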

  • @john-paulmarletta2110
    @john-paulmarletta2110 1 year ago

    really great video!

  • @AjSmit1
    @AjSmit1 2 years ago

    i love this channel but i always have to turn my volume off and turn on captions whenever you bust out the markers and paper 😅😅 the noise just turns my stomach ;-;

  • @skaramicke
    @skaramicke 2 years ago

    Can you use the loop closure to add some constant to the IMU guesswork so the errors are likely smaller next time?

  • @konstantinlozev2272
    @konstantinlozev2272 1 year ago

    That sensor fusion has been solved by current 300 USD VR headsets like the Oculus Quest, no?

  • @Jkauppa
    @Jkauppa 2 years ago

    you better have radio base stations for localization as well

    • @Jkauppa
      @Jkauppa 2 years ago

      usually you don't just have to scan without first being around, you can deploy base station beacons to get accurate local positioning relative to the beacons, and mutually between the beacons

    • @Jkauppa
      @Jkauppa 2 years ago

      breadcrumb beacons dropped along the scan path to be used as the mapping localization anchor beacons

    • @Jkauppa
      @Jkauppa 2 years ago

      also you could put those marks in the lidar scan positions only in 3d space, to directly align to those points always, just random align point cloud, much less than aligning to all the points

    • @Jkauppa
      @Jkauppa 2 years ago

      maximum likelihood mapping model

    • @Jkauppa
      @Jkauppa 2 years ago

      most likely model based on the measurements

  • @gunnargu
    @gunnargu 2 years ago

    We have an IMU of sorts, and two cameras, and touch sensitive limbs. Should we not strive to make computers able to work just using that?

  • @fly1ngsh33p7
    @fly1ngsh33p7 2 years ago

    Can you confuse/disturb the robot by walking around it with a big mirror?
    What would the 3D map look like?

    • @oldcowbb
      @oldcowbb 2 years ago +1

      not sure about lidar but mirrors are the bane of robotic vision

    • @mgeisert6345
      @mgeisert6345 2 years ago +1

      I believe the issue is that the lidar needs some diffusion from the material it is detecting such that at least part of the laser will come back to the sensor to be detected. So with a mirror it will have the same effect as light, the robot won't see the mirror but will see the reflection inside the mirror.

  • @neerajjoshi7228
    @neerajjoshi7228 2 years ago

    👏👏

  • @douro20
    @douro20 2 years ago

    What is the single-board computer being used there?

  • @guilherme5094
    @guilherme5094 2 years ago

    👍

  • @Shaft0
    @Shaft0 2 years ago

    Makes one curious where Stone Aerospace is nowadays.

  • @Petch85
    @Petch85 2 years ago

    When you solve the optimization problem (I guess it is some variation of a least-squares algorithm), is there a good way to find outliers? (Say there is a large measurement error in the LiDAR and the robot now thinks it has moved a lot in the room, but the accelerations look as expected, as if the robot has not moved more than usual.)
    Can you tell which measurements align with each other, or do they all look equally valid? Maybe something walks in front of the sensor and you have some measurements that do not line up with all the rest.

    • @oldcowbb
      @oldcowbb 2 years ago

      not sure if this is what you are asking, but the least squares is scaled by the uncertainty: each measurement is assigned an information matrix, and the more certain you are about a measurement, the more it matters in the optimization

    • @VulpeculaJoy
      @VulpeculaJoy 1 year ago

      Yes, current research is largely focusing on segmenting moving objects from the sensor data while also solving the slam problem. It's a little more complex than least squares though.
      Some new lidars can also measure Doppler shift and thus the velocity of a point in space. This makes the task of segmenting and tracking moving objects much simpler.
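
The uncertainty weighting described in this thread reduces, in the scalar case, to inverse-variance (information) weighting: a measurement you trust less pulls the estimate less. A hypothetical fusion of two position readings (the sensors and numbers are invented for illustration):

```python
# Information-weighted fusion of two measurements of the same position.
# Weight = 1 / variance: the scalar version of the information matrix.

def fuse(measurements):
    """measurements: list of (value, variance) pairs."""
    info = sum(1.0 / var for _, var in measurements)
    mean = sum(z / var for z, var in measurements) / info
    return mean, 1.0 / info   # fused value and its (smaller) variance

imu = (10.0, 4.0)    # drifty dead reckoning: 10 m, variance 4
lidar = (10.6, 1.0)  # sharper scan match: 10.6 m, variance 1
pos, var = fuse([imu, lidar])
# The fused estimate sits much closer to the trusted lidar reading,
# and the fused variance is smaller than either input's.
```

A gross outlier shows up as a residual far larger than its claimed variance predicts; robust back ends downweight or drop such edges rather than letting them warp the whole map.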

  • @borisverkhovskiy5169
    @borisverkhovskiy5169 2 years ago

    How do you distinguish landmarks from things that are moving?

    • @AlexXPandian
      @AlexXPandian 1 year ago

      Some form of RANSAC is running during feature matching to ignore outliers. The general assumption is that not everything is moving.

  • @annas8079
    @annas8079 1 month ago

    What is the name of the small stereo cameras he is using?

  • @billr3053
    @billr3053 2 years ago +2

    You know you're working at a dream job when your office's stairwell has railings with lighting underneath. Who the heck overfunds all this??? When all they do there is "think tank" and make prototypes no one asked for. They're not solving existing problems. Someone is feeding them 'future things that would be cool' projects. I've never run across places like that. How do they turn a profit?
    Everything is super clean.
    I think their board of directors are lizard alien overlords. Mulder from "The X-Files" was right: "military-industrial-entertainment complex".

    • @hyeve5319
      @hyeve5319 2 years ago

      I'm guessing you haven't seen what robots like these are used for, then? They're absolutely solving for existing problems ~ at least, problems of safety and routine.
      Robots like Spot are used commercially to do automated (and manual if needed) inspections of sites that are dangerous for humans, for instance, mines, radioactive sites, and places with other dangerous equipment.
      Sure, they might not be "essential", but it is safer, easier, more cost effective, and potentially more reliable, to have a robot do those tasks rather than a human.
      And in general, this kind of robotics technology is extremely important for any kind of robots that need to perform "open-world" tasks, and there's many reasons you might want that.

    • @jimgorlett4269
      @jimgorlett4269 2 years ago

      that's just all of tech: an overfunded bubble of hype with little to no real-world uses

  • @raphaellmsousa
    @raphaellmsousa 1 year ago

    Very didactic explanation

  • @zakiranderson722
    @zakiranderson722 1 year ago

    I just need a bottle of milk

  • @ghostray9056
    @ghostray9056 2 years ago

    Is it fixed-lag smoothing or full smoothing ?

  • @retepaskab
    @retepaskab 2 years ago

    Can it distinguish similar rooms opening off a looped corridor?

    • @hyeve5319
      @hyeve5319 2 years ago

      I'd expect so - it's tracking position via velocities and orientations as well as via the mapping, so it should be able to tell that it's in a different place that just looks the same

  • @swagatochatterjee7104
    @swagatochatterjee7104 2 years ago

    Can anyone explain to me how people update the kd tree where the point clouds are stored during loop closure? I understand the poses and landmarks are updated, but updating the landmarks should disturb the spatial ordering of the kd tree. How do you resolve that?

    • @sandipandas4420
      @sandipandas4420 2 years ago

      Search for ikd tree

    • @swagatochatterjee7104
      @swagatochatterjee7104 2 years ago

      @@sandipandas4420 what was the process before the ikd tree was invented? Do they use a pointer to determine which changes happened in which cloud points?

  • @abdullahjhatial2614
    @abdullahjhatial2614 2 years ago

    how do you make a 3d environment? doesn't lidar give us a 2d graph, just point data in one direction with its orientation?

    • @donperegrine922
      @donperegrine922 8 months ago

      There are lidars which are tilted onto their sides, and the entire LIDAR unit rotates, so that you get a series of 45-degree slices; stitched together they can make a 3D map

    • @donperegrine922
      @donperegrine922 8 months ago +1

      Those are expensive, though! Actually...all LIDARS are super pricey

  • @andreykojevnikov1086
    @andreykojevnikov1086 2 years ago

    So they assembled a slamhound

  • @quill444
    @quill444 2 years ago +1

    Let's see a dozen of them, on ice skates, programmed for survival, with self-learning code, hunting one another down, with oxyacetylene torches! 💥 - j q t -

    • @donperegrine922
      @donperegrine922 8 months ago

      This is the REAL dream of robotic engineers!

  • @DUDIDUAN
    @DUDIDUAN 1 year ago

    Is that Ouster lidar?

  • @meh3247
    @meh3247 2 years ago +1

    Fascinating insight into how we can stop these machines after the war mongers have strapped automatic weapons onto them.
    Thanks.

  • @CottonInDerTube
    @CottonInDerTube 2 years ago +1

    I wanna see that in a house of mirrors :)
    Relationship THAT! :D

  • @JanetJensen-p9f
    @JanetJensen-p9f 3 months ago

    Garcia Elizabeth Martinez Anthony White Matthew

  • @aamiddel8646
    @aamiddel8646 2 years ago

    Have you considered to use a Kalman filter to merge sensor data together?

    • @oldcowbb
      @oldcowbb 2 years ago +2

      thats a standard practice
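
That standard practice, in its simplest 1-D form (hypothetical numbers): predict with the IMU motion, which grows the uncertainty, then correct with an absolute position fix, which shrinks it.

```python
# Minimal 1-D Kalman filter cycle: predict with IMU motion (uncertainty
# grows), then correct with a lidar position fix (uncertainty shrinks).

def predict(x, p, motion, motion_var):
    return x + motion, p + motion_var

def update(x, p, z, z_var):
    k = p / (p + z_var)                  # Kalman gain
    return x + k * (z - x), (1 - k) * p

x, p = 0.0, 0.01                         # start well-localised
x, p = predict(x, p, motion=1.0, motion_var=0.5)   # IMU: moved ~1 m
x, p = update(x, p, z=1.2, z_var=0.1)              # lidar: we're at 1.2 m
# x lands between the prediction (1.0) and the measurement (1.2),
# weighted toward the lower-variance lidar fix, and p drops sharply.
```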

  • @TehVulpez
    @TehVulpez 2 years ago

    SLAM describes the sound of what I would do with a baseball bat if I saw one of these things in person

  • @gorkyrojas9346
    @gorkyrojas9346 7 months ago

    This explained nothing. 'We collect these landmarks and position data and then the algorithm solves it, except it's not really solved.'
    Okay, cool, I get it now?
    Weird ad video.

  • @LaviArzi
    @LaviArzi 2 years ago

    Is the code/library public? Where can I find more information about it?

    • @oldcowbb
      @oldcowbb 2 years ago

      google ros slam, thousands of posts and examples on that

  • @MichaelKingsfordGray
    @MichaelKingsfordGray 2 years ago

    Again: The wrong way around!
    Compute what the internal model should look like, compare with the "real" image, and refine the model until it matches.
    That is how all animals work.

  • @tom-stein
    @tom-stein 2 years ago +1

    Interesting video. There is some really annoying high frequency noise at around 04:00.

    • @deadfIag
      @deadfIag 2 years ago

      No, there's not

  • @Technology_88888
    @Technology_88888 2 years ago

    What are the complete steps to create a PayPal money adder program?

  • @elonfc
    @elonfc 2 years ago +1

    First comment

    • @wisdon
      @wisdon 2 years ago +1

      no prize yet?

    • @elonfc
      @elonfc 2 years ago +1

      @@wisdon that guy didn't like my comment, ok so i give myself Oscars for doin this.

  • @sherkhanthelegend7169
    @sherkhanthelegend7169 2 years ago

    Second 😊

  • @cscscscss
    @cscscscss 2 years ago +1

    Hello dear viewer watching this about 1-4 years after the creation of this video :)

  • @AdibasWakfu
    @AdibasWakfu 2 years ago

    He sounds so annoyed the whole video haha

  • @duartelucas8129
    @duartelucas8129 2 years ago

    Please change the pen. It kills me everytime, the noise is excruciating.

  • @pozog8987
    @pozog8987 2 years ago

    The "chicken and egg" description of SLAM needs to stop. "SLAM" is simply using perceptual information to correct for errors in dead reckoning (without a map to use as a reference). If you have a reference map, it's called "localization."
    That definition applies to other types of non-metric SLAM not discussed here (topological and appearance-based SLAM).

    • @oldcowbb
      @oldcowbb 2 years ago

      chicken and egg refers to the map and the localization; you are missing the part where the robot assembles the map, which is also a key output of SLAM, not just a means of correcting the odometry. The sensor information is useless if you don't build a map. Every solution to SLAM boils down to building a rough map from dead reckoning first, then simultaneously updating the map and the localization when the same part of the map is observed. it's overly clichéd but not wrong

  • @thisisajoke0
    @thisisajoke0 2 years ago

    It continues to amaze me how supposedly smart people keep helping the progress of what will very clearly be a huge downfall of humans. These people have narrowed their education far too much and need to broaden their understanding of history.

    • @hyeve5319
      @hyeve5319 2 years ago

      Humanity has a lot of much more imminent problems than a robot takeover. This kind of tech is super impressive and very useful for some things, but robots are nowhere even close to the generalist skills of humans, and AI is even less so.
      Sure, (4-legged) bots can walk around rooms by themselves, and AI can draw close-to-realistic images, but each on their own can pretty much *only* do those things, nothing else.

  • @ScottPlude
    @ScottPlude 2 years ago

    Great video but ya gotta find another platform other than YouTube. Move to Rumble or Odysee.

  • @tocsa120ls
    @tocsa120ls 2 years ago

    Nuclear decommissioning? You'll need something much more rad-hardened than a BD dog...

  • @ChickenPermissionOG
    @ChickenPermissionOG 2 years ago

    scientist need to come up with better names.