Will AI come for the Pilots?! - Expert interview

  • Published: 21 Sep 2024

Comments • 774

  • @MentourNow
    @MentourNow  1 year ago +26

    To try out my new AI app here!👉🏻 app.mentourpilot.com
    You can also contact Marco directly if you have a serious idea you would like his help developing cvt.ai/mentour

    • @jsonwilliams347
      @jsonwilliams347 1 year ago +2

      😊😊😊

    • @forgotten_world
      @forgotten_world 1 year ago +1

      About AI: this is not an "if" question but a "when", and that day is coming fast. I would say no more than twenty years - that is, for commercial aviation, because autonomous UAMs will cover the sky well before that. Those protocols have already been in development for years.

    • @damionlee7658
      @damionlee7658 1 year ago

      ​@@forgotten_world I think there is still an "If" element, which centres around whether AI is the best solution towards automated piloting.
      Of course we are headed towards more automation in commercial aviation, but AI isn't necessarily the best solution for pilot replacement, in an industry with the framework that aviation has. We may be better off expanding automation technologies that work on defined processes, in the same way autopilot and autoland do.
      There is no reason why we cannot have an aircraft automated from the ramp of its departure airport, to the ramp of the destination airport, and never use an AI system.
      There is a lot of focus on AI at the moment, and it is a fantastic field that will doubtlessly become more prominent in society. But we need to use the right tools for each job. And automation without AI is probably far more suited to flying aircraft, at the very least until we can get AI to spontaneously consider scenarios, but even then AI probably isn't going to be the best solution.
      Where AI is perhaps going to be better suited is in the role of air traffic control. I'd wager you'll see AI-driven ATC long before you see an AI pilot (in commercial use, rather than in research, development, and testing programmes), if we ever see AI used for commercial piloting.

    • @seraphina985
      @seraphina985 1 year ago +2

      The idea you suggested, for the plane to offer prompts when the pilot is doing something strange and pushing the plane towards an unsafe envelope, doesn't require AI. As a computer programmer, I'd say this would be an expansion of the scope of the existing hard-coded envelope protection system with a second layer of soft mitigations (soft in the sense that they are recommendations, as opposed to the hard mitigations where the aircraft physically intervenes). For example, say the pilot is experiencing a somatogravic illusion and forcing the nose up, failing to notice that the plane is actually slowing quickly. Instead of waiting until the hard envelope protection kicks in, a red REDUCE CLIMB message could show up on the ECAM, perhaps accompanied by an aural "REDUCE CLIMB" alert. That wording also has the added bonus of calling out the immediate problem the pilots are likely missing: as a pilot, even if I doubted it, it would call my attention to the ADI, altimeter, VSI, and ASI, all of which would confirm that yes, you are actually dangerously nose high, climbing, and getting dangerously slow. After all, if I didn't believe I was climbing, the first three would disabuse me of that notion, and the latter would indicate this is becoming a serious issue. So that little prompt could help break the pilot out of the confusion by giving an instruction that also calls attention to the right instruments, which for whatever reason they are just not seeing at this critical moment.

    • @netjamo
      @netjamo 9 months ago

      Link flightradar24 in the app 👍
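A note on @seraphina985's point above: the "soft mitigation" layer can indeed be plain deterministic logic rather than AI. Here is a minimal sketch; the thresholds, function name, and advisory strings are all invented for illustration, not real avionics logic.

```python
# Hypothetical "soft mitigation" layer sitting in front of hard envelope
# protection. All thresholds and message names are invented examples.

def soft_envelope_advisory(pitch_deg, airspeed_kts, speed_trend_kts_per_s):
    """Return an advisory string, or None if no advisory is warranted."""
    # Nose very high while airspeed decays fast: the somatogravic-illusion
    # scenario described above -> prompt the crew before hard protection acts.
    if pitch_deg > 15 and speed_trend_kts_per_s < -2:
        return "REDUCE CLIMB"
    # Generic low-and-decaying-speed case (illustrative threshold).
    if airspeed_kts < 140 and speed_trend_kts_per_s < 0:
        return "SPEED LOW"
    return None

print(soft_envelope_advisory(20.0, 180.0, -3.5))  # REDUCE CLIMB
print(soft_envelope_advisory(5.0, 250.0, 0.0))    # None
```

The point, as the comment says, is that this is an extension of existing hard-coded protections: a rule check over instrument values, not a learned model.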

  • @MrBlablubb33
    @MrBlablubb33 1 year ago +211

    I did not expect the most level headed take on ai coming from an aviation channel. Truly well done!

    • @oadka
      @oadka 1 year ago +6

      IKR!!!! Most posts on LinkedIn have sounded more doom-laden, as if AI were already so good it could take over everything, or as if AI would revolutionize work the way MS Office did. But I think there might be a difference in which sector they are discussing, as the managerial people on LinkedIn tend to be more vocal. Aidan here is presenting AI for engineering/pilots, which is a different use case than generating reports or making presentations.

    • @bhaskarmukherjee4768
      @bhaskarmukherjee4768 1 year ago +5

      Aviation was dealing with automation long before mainstream tech. Heck, neural networks in aviation controls research were a thing by the 90s.

    • @yamminemarco
      @yamminemarco 1 year ago +9

      Well, you have to be level headed to level a plane. Sorry, didn't manage to slip a dad joke in the interview so I figured I'd do it in the comments. 😅

    • @MrBlablubb33
      @MrBlablubb33 1 year ago +4

      Dad jokes are always appreciated!

    • @tiffanynicole6050
      @tiffanynicole6050 1 year ago

      I did 😊

  • @michaelrichter9427
    @michaelrichter9427 1 year ago +23

    I like this guy. Saying "this could be done in AI, but there may be a better conventional way" is actual engineering instead of cultism.

  • @Maggie-tr2kd
    @Maggie-tr2kd 1 year ago +155

    As far as AI goes for informing a nervous flyer in advance about turbulence, all a nervous flyer really needs is false reassurance from someone they trust. For my first flight on a commercial airline as a young teenager, my mother told me that it would be lots of fun - sort of like riding a roller coaster at times, with ups and downs. The flight turned out to be what the other, more experienced passengers later told me was a nightmare with lots and lots of turbulence. There I was, smiling and enjoying myself the entire time, because it met my expectations.

    • @MentourNow
      @MentourNow  1 year ago +42

      That’s a very conscious mum! Excellent example

    • @ajg617
      @ajg617 1 year ago +5

      One of the reasons I always flew United was to listen to channel 9. Ride reports gave extremely valuable insight into what to expect, and the professionalism of the crews and ATC was incredibly reassuring.

    • @gnarthdarkanen7464
      @gnarthdarkanen7464 1 year ago +7

      Through the 90's I did quite a bit of flying on commercial airlines... AND from time to time we'd hit "pretty good" turbulence... I knew there were likely timid, nervous, or outright scared flyers in the cabin, so every time there was a "proper" bit of roller-coaster or similar effect, I always made a point to give a good "WHEEE!" and laugh... in two or three of them, there'd usually be a few others to join me, and more often than not, even some of the crew...
      I doubt anyone knew I was even looking around, but I saw more than a few shoulders start to slouch, faces slack from distentions and clenched jaws, and even a couple parents pointing at us "lunatics" and speaking to youngsters... I like to hope it was a boost in morale to a few of the more anxious among us...
      Obviously, I'd temper that with the sensible judgment not to be shouting when the lights are out and everyone's even trying to get some shut-eye or rest... Even (especially?) the anxiety-prone don't need a maniac (or several) shouting anything when the plane starts buffeting and bouncing... haha... ;o)

    • @The_ZeroLine
      @The_ZeroLine 1 year ago +4

      As a child, I absolutely loved turbulence. Not so much now despite being able to pilot fixed wing aircraft. Go figure.

    • @mikezappulla4092
      @mikezappulla4092 1 year ago +1

      I find it eerie when the flight is very smooth. I actually enjoy the subtle turbulence along with some more bumpy turbulence at times.
      I don't know if the false reassurance is a good idea, though. If the person ends up panicking, it could create trust issues.

  • @kate3881
    @kate3881 1 year ago +54

    I think having AI as a second set of eyes, for things like checking whether pitot tubes are covered, would be useful for pilots - or alerting pilots if sensor readings don't match

    • @MentourNow
      @MentourNow  1 year ago +17

      Yes! Exactly. It will be useful as a tool

    • @kate3881
      @kate3881 1 year ago +5

      I think for the checklists they could use a non-AI search engine that can search through them, or even a voice-assistant-type search like Alexa/Siri

    • @muenstercheese
      @muenstercheese 1 year ago +4

      ...but you don't need AI for that; you can do it with deterministic procedural algorithms. The hype around AI is far too high, imo

    • @elkeospert9188
      @elkeospert9188 1 year ago

      There is really no AI necessary to find out whether values delivered by redundant sensors are inconsistent.
      But informing the pilots that the information is inconsistent leaves them with the problem of which sensor they should trust and which not - which causes a lot of stress and confusion.
      I think something I call "sensor fusion" would help a lot in such situations.
      For example, one altimeter saying that you are flying at 20,000 feet while the other says you are only at 1,500 feet is a bad thing when flying over the Pacific at night without any visual references.
      But including data from other sensors (like outside temperature or GPS) would provide a lot of hints as to which of the two altimeters is delivering wrong values.
      If the outside temperature is -10°C and GPS tells you that you are flying at 2,300 feet, then it is nearly 100% certain that the altimeter showing 20,000 feet is wrong and you should trust the other altimeter.
      Or if one pitot sensor measures a dramatic change in relative airspeed (for example a drop from 800 km/h to less than 400 km/h in under a second) while the other delivers "smoother" values, then the probability is high that the first pitot was blocked by ice or something else, and in doubt it is better to trust the other pitot sensor.
      Clever people could take all the time they need to develop algorithms that use data from the different sensors to calculate how trustworthy each individual sensor is, and inform the pilots which sensors they should trust in case they have no better idea.
      For humans it is very difficult to perform such reasoning in a stressful situation with limited time to make a decision, but software could do such (and even much more complex) analysis in a fraction of a second - without any AI needed...

    • @philip6579
      @philip6579 1 year ago +1

      @@muenstercheese Most of the things AI does are possible via other means. The whole point is that Generalized AI (or something close to it, like ChatGPT) makes those things orders of magnitude easier.
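@elkeospert9188's cross-checking idea above is straightforward to express as deterministic code. A toy sketch, assuming just two altimeters with GPS altitude as the independent tiebreaker; the function name, tolerance, and return convention are all invented for illustration.

```python
# Toy "sensor fusion" vote: arbitrate between two disagreeing altimeters
# using an independent source (GPS altitude). Purely illustrative.

def pick_trusted_altimeter(alt_a_ft, alt_b_ft, gps_alt_ft, agreement_ft=500):
    # If the altimeters agree within tolerance, no arbitration is needed.
    if abs(alt_a_ft - alt_b_ft) <= agreement_ft:
        return "BOTH", (alt_a_ft + alt_b_ft) / 2
    # Otherwise trust whichever reading is closer to the GPS altitude.
    if abs(alt_a_ft - gps_alt_ft) < abs(alt_b_ft - gps_alt_ft):
        return "A", alt_a_ft
    return "B", alt_b_ft

# The comment's example: 20,000 ft vs 1,500 ft, with GPS showing 2,300 ft.
print(pick_trusted_altimeter(20000, 1500, 2300))  # ('B', 1500)
```

A real system would weight many more sources (temperature, inertial data, pitot trends), but the shape is the same: a fixed, auditable voting rule rather than a learned model.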

  • @PushbuttonFYM
    @PushbuttonFYM 1 year ago +30

    As someone who works with AI on a daily basis, this basically hits the nail on the head. AI did not replace anyone in my team, instead it took over the mundane repetitive work which is the longest part of a project, freeing up my team to focus on the final finishing portions of the deliverable. AI does 80-85% of the work in less than half the time, making my team more efficient and allowing us to take on more projects with the same staff. We refer to it as AI/Human hybrid where the AI is more of a partner to a human rather than a replacement.

    • @MentourNow
      @MentourNow  1 year ago +3

      Exactly our point! Thank you, feel free to give some feedback on our app!

    • @avisparc
      @avisparc 1 year ago +6

      "allowing us to take on more projects with the same staff" So there are fewer projects for other people to work on. If you do more work with the same people, then couldn't you do the same work (not more) with fewer people?

    • @oadka
      @oadka 1 year ago +4

      @@avisparc I was going to say the same thing. By reducing the number of man-hours needed for a given project, it has indirectly decreased employment. However, it also means significantly reduced costs, which might create more demand. That makes the task of deciphering the changes in employment due to AI a bit more complex.

    • @avisparc
      @avisparc 1 year ago +1

      @@oadka that's a good point, it hadn't occurred to me that the Jevons paradox could operate in this situation. (en.wikipedia.org/wiki/Jevons_paradox)

  • @thetowndrunk988
    @thetowndrunk988 1 year ago +94

    Even if accidents were “drastically” reduced (which would be incredibly hard to do, given that aviation is extremely safety conscious now), all it’ll take is one crash, and people will be screaming for pilots to be back in the seats. The MAX crashes taught us that (yes, I know there were pilots, but they couldn’t override the automation).

    • @elkeospert9188
      @elkeospert9188 1 year ago +7

      That might happen - or not.
      Assume that 25% of airlines remove their pilots in favour of some software, and after two years the statistics show that they have 90% fewer accidents than airlines flying with human pilots;
      then even a single accident may not make passengers scream for pilots, as long as the "total picture" clearly shows that it is safer to rely on a "software pilot".
      The MAX crashes were (in my opinion) caused by human errors - not errors by the human pilots, but errors by the people at Boeing who built software relying on a single angle-of-attack sensor even though they were aware that the sensor could fail.
      The minimum requirement would have been to compare the input of multiple AoA sensors and, if they were not consistent, to turn off MCAS - and of course to inform the pilots how they should react in such a case.

    • @endefael
      @endefael 1 year ago +4

      They could override it. They just didn't.

    • @elkeospert9188
      @elkeospert9188 1 year ago

      @@endefael There were two crashes related to MCAS.
      In the first one, the cockpit crew was not informed about MCAS and therefore could not override it.
      In the second one, the crew did know about MCAS and how to "override" it by manually turning the trim wheels - but this was physically hard for the pilots to do and also took a lot of time; in the second accident it took too much time.

    • @endefael
      @endefael 1 year ago +3

      @@elkeospert9188 I am afraid you do not completely understand what MCAS is capable of, what that failure looks like, what all the contributing factors were, and the huge role training played in both accidents. Just to give you a hint: in neither of the two crashes did the crew perform all 4 or 5 basic memory items they were supposed to. I highly encourage you to study them more deeply, and you will see that the automation did not have the capability of overriding the pilots by itself. It just took them too long to take the appropriate actions in due time - if they were taken at all. I am not judging them as individuals, but rather the fact that they were exposed to it without being fully prepared. Nor am I saying the aircraft couldn't have been improved; any human project, including the 737, can be. But it was never as simple as the existence or failure of MCAS: no accident happens because of an isolated fact.

    • @russbell6418
      @russbell6418 1 year ago +2

      @@endefael They hadn't been appropriately trained on the override. Boeing engineers assumed their system had almost zero failure likelihood, so the training to override it was much more "check the boxes" than teaching the failure response.

  • @timothyunedo5642
    @timothyunedo5642 1 year ago +51

    I am very fascinated by the growth of AI, but this video is my favorite explanation of AI so far. Bravo Petter and Marco!

    • @MentourNow
      @MentourNow  1 year ago +9

      Thank you!

    • @MrCaiobrz
      @MrCaiobrz 1 year ago +7

      Except it doesn't explain AI; it explains what a semantic database and a Generative Pre-trained Transformer (the GPT in ChatGPT) are. What we currently have is still not even AI - it is just an interface to a semantic database.

    • @vladk8637
      @vladk8637 1 year ago +7

      Indeed, I've been working in AI for years too, and I really appreciated Marco's contribution. There is much buzz around AI, and it's pleasant to see people like Marco clearly dotting the i's

    • @-p2349
      @-p2349 1 year ago

      @@MrCaiobrz Yes, what we currently have is narrow AI; what you're talking about is AGI (artificial general intelligence)

  • @ProgressiveMastermind
    @ProgressiveMastermind 1 year ago +83

    Aidan is so cool, really gives polite, coherent and useful answers! 😎👍🏻

    • @MentourNow
      @MentourNow  1 year ago +11

      Thank you! Glad you like him

    • @rayfreedell4411
      @rayfreedell4411 1 year ago +1

      I think this is completely wrong. If you approach the issue from the standpoint of what information is available to support AI, the clear answer is yes, even in Sully's situation.
      The autopilot already flies the airplane. Altitude and heading can be changed by turning knobs, and the computer can turn those just as trim is changed. Add Jeppesen charts, digital radar input, and ground controllers replaced by AI to 'talk' to the AI pilot, and all the information for safe flight is there.
      For the Hudson river, again, consider the information available. Engines failing, altitude, heading, and distance back to LaGuardia are all known. AI would know it had to land, but where? A new problem for AI, but again, consider the information available. To me, searching for clear fields, beaches, etc. is a common enough problem to be part of AI from the start. Google Maps has that information now.
      My background is computer tech. Final thought: I served in the Air Force during the war in Vietnam. The Hughes fire control system in the F-106 could almost fly a complete shoot-down mission, and that was decades ago.
      AI replacing pilots and ground controllers is a lot closer than you think, and I'm not happy about that.

    • @stevedavenport1202
      @stevedavenport1202 4 months ago +1

      Yes, excellent choice of guest.

  • @MrTmm97
    @MrTmm97 1 year ago +28

    I love this channel! So excited when I see a new video in my feed.
    I’ve recently been on my 3rd binge rewatching all of Mentour Pilot’s videos the past few weeks. Thanks so much for the fascinating entertainment and information!

    • @MentourNow
      @MentourNow  1 year ago +5

      Thank YOU so much for being an awesome fan and supporting what I do!

  • @steve3291
    @steve3291 1 year ago +40

    Thank you, thank you, thank you, Mentour, for bringing on someone like Marco who gets it.
    I studied AI at university in the 80s and I am very much a Turing purist. I have worked with systems that I would categorise as 'advanced analytics' and 'machine learning', and have had rows with people who said that Turing's opinions are out of date, after I accused them of re-badging their tired old software as AI to charge more money (which they are).
    Back on topic: who is most scared of the current form of AI? The mainstream media. Why? Because AI will start to present unbiased news feeds and put them out of work. The vast majority of the negative press is being generated by the press.

    • @MentourNow
      @MentourNow  1 year ago +5

      Thank you for your thoughtful comment and I’m glad you liked the video.

    • @steve3291
      @steve3291 1 year ago +5

      @@MentourNow It was, as you say, fantastic 🤣

    • @jantjarks7946
      @jantjarks7946 8 months ago

      Don't worry: in the media you can make AI politically correct too. It's a matter of the rules it is set to follow.
      It's not going to get better, but even worse and far more refined.
      😉

  • @lek1223
    @lek1223 1 year ago +29

    As for AI taking over control of the plane, there is ONE situation I can think of. If you remember the disaster where the plane flew in a straight line for a while before crashing near Greece with both pilots out: introduce something like the checks in trains, where the pilot has to confirm 'yes, I am still awake and paying attention'. If a pilot misses, for instance, two in a row, it could make sense for the 'AI' pilot to bring the plane to a lower altitude and broadcast an automated mayday; and then, if there is still no pilot response, it could make sense to try to land the plane.

    • @kopazwashere
      @kopazwashere 1 year ago +11

      so basically, as an absolute last ditch, if there's nobody qualified on the plane to land it...

    • @lek1223
      @lek1223 1 year ago +5

      @@kopazwashere Yup. The 'alternative' is of course a remote pilot à la drones, which arguably is a better solution - but this would be a backup for the backup

    • @flyfloat
      @flyfloat 1 year ago +8

      You do not need "AI" specifically for a last-ditch solution like that. Garmin already offers a solution called Autoland for GA aircraft, where in a case of pilot incapacitation a passenger can press the emergency button and the plane will land by itself at the nearest suitable airport while making the proper mayday calls to ATC. It even displays instructions to the passengers if they need to talk to ATC

    • @oadka
      @oadka 1 year ago +2

      ah so something like a dead man's switch?

    • @roberre164
      @roberre164 1 year ago +5

      @@flyfloat That's fine if there is someone conscious to press the button. With Helios 522 and the Payne Stewart Learjet crash, no one was conscious to do that, and they are not the only examples. As a last resort, AI could have been very helpful to initiate an autoland in those 100%-fatal accidents.
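The escalation @lek1223 proposes above (miss two checks → descend and broadcast a mayday; still no response → attempt a landing) is essentially a small state machine. A sketch with invented state names, miss counts, and interval:

```python
# Hypothetical dead-man's-switch escalation, after the train-style
# "confirm you are awake" checks proposed above. States, the check
# interval, and the allowed number of misses are invented examples.

class AttentionMonitor:
    CHECK_INTERVAL_S = 300  # e.g. prompt the crew every 5 minutes

    def __init__(self):
        self.missed = 0
        self.state = "NORMAL"

    def prompt_missed(self):
        """Record one unanswered crew check and escalate if needed."""
        self.missed += 1
        if self.missed == 2:
            # Two misses in a row: descend and broadcast an automated mayday.
            self.state = "DESCEND_AND_MAYDAY"
        elif self.missed >= 3:
            # Still no response: last resort, attempt an automatic landing.
            self.state = "AUTOLAND"

    def crew_responded(self):
        self.missed = 0
        self.state = "NORMAL"

m = AttentionMonitor()
m.prompt_missed()
m.prompt_missed()
print(m.state)  # DESCEND_AND_MAYDAY
m.prompt_missed()
print(m.state)  # AUTOLAND
```

As @flyfloat notes, nothing here is "AI" - it is deterministic escalation logic, which is exactly why it would be certifiable in the same way existing autoland systems are.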

  • @kevinbrennan8794
    @kevinbrennan8794 1 year ago +17

    Excellent interview, thank you. I would be interested in more videos like this one.

    • @MentourNow
      @MentourNow  1 year ago +2

      Thank you!
      Unfortunately you seem to be quite alone in thinking that 😔 The video is tanking

    • @makeupbyseli31
      @makeupbyseli31 1 year ago +3

      Nooo, we want more of that!!! Greetings from Germany 👏🏽

    • @jillcrowe2626
      @jillcrowe2626 1 year ago +1

      ​@@MentourNow I'm fascinated by this video! I'm going to share it on Facebook and Twitter.

  • @yamminemarco
    @yamminemarco 1 year ago +8

    Having the opportunity to participate in this interview with @MentourNow was an absolute honour and pleasure. I am both impressed and grateful for the amazing comments, perspectives, questions, and debates I see here in the comments. They truly are a testament to the quality of this community. I am especially thankful to those who have provided feedback and pointed out areas that I intentionally did not elaborate on during the interview, as well as suggesting improvements to my phrasing. I agree with those highlighting that I shared a somewhat oversimplified version of the subject matter, as I briefly mentioned at the start of the video. This was done intentionally to make the conversation accessible to as broad an audience as possible. However, for those wishing to delve into the nitty-gritty details, I would be more than happy to elaborate in a thread following this message. I will be tagging the most intriguing comments, but everyone is free to join in.

    • @yamminemarco
      @yamminemarco 1 year ago

      Elevator Operator:
      From the point of view that I shared in the interview, you can argue that it's the evolution of a job. However, as you rightfully pointed out, there is another point of view, which says that few to no people today have the job title "Elevator Operator". AI and technology most definitely can make, and have made, certain specific job titles redundant. But if we elaborate on that perspective, let's dive deeper into what was actually made redundant. Technology took over repetitive and potentially unfulfilling jobs, so that people who previously might have considered becoming an elevator operator now need to consider becoming an elevator engineer. If we take a look at the fluctuations of employment rates throughout the entire 1900s, when technology evolved and automated more things than ever before, we will notice that not only did unemployment not trend upward, but the quality of life of everyone in the world steadily increased. The perspective I'm sharing is that automation has been only good for mankind, and there is no reason to believe that will change. After all, it is us who chose to create it; we are the creators of technology.

    • @yamminemarco
      @yamminemarco 1 year ago +2

      Unmanned Aircraft Topic.
      As many of you have pointed out, there have been unmanned aircraft (such as drones) that have been successfully deployed. These, however, are not flown by AI. Some of them might incorporate some elements of AI, such as face recognition. However, they are operated by, and dependent on, coded instructions which, as I pointed out in the video, are much more efficient and reliable for this purpose. Hence the answer to the question "Can AI fly a plane on its own?" remains no. Whether automation, especially if properly leveraging AI, will be able to do that is a different question - one that I would still answer no to today, but with less certainty than if it were AI alone.

    • @yamminemarco
      @yamminemarco 1 year ago +2

      Generative AI vs Other AI.
      One of my primary emphases for this interview was to address the fear-mongering misconceptions that have been irresponsibly spread by the media and that have, for the most part, been centered around Generative AI. Unlike previous breakthroughs in AI, ChatGPT became an overnight sensation, and a lot more people have heard about it than any other AI breakthrough. Now, I stated that AI, in an oversimplified way, could be described as "fake it till you make it", and I will stand my ground on this one by elaborating further. When I chose the sentence "fake it till you make it", I explicitly did so in an attempt to translate one of the core principles of all ML models: approximation. One of the incredible things about many ML models is how you can generate a desired output from an input without needing to know any of the logic inside the function being approximated. This is a foundation of AI/ML, and it is a principle used in just about all types of models, from classifiers, regression, and neural networks to transformers, entity extraction, and many more. I believe that so long as any AI we develop is driven by this approximation principle, we will simply not achieve anything other than Narrow AI. And most definitely we will not achieve consciousness.

    • @MentourNow
      @MentourNow  1 year ago +3

      Thanks for having you on Marco! It was truly illuminating!

    • @gergelyvarju6679
      @gergelyvarju6679 1 year ago +1

      @@yamminemarco Some AI models try to simulate various biological processes: how the brain works, how a hive of ants works. Approximation also happens in our own natural intelligence. But even a perfect emulation of the human brain/mind would have no advantage over the human brain/mind, and this is why I don't see general-purpose AI coming. Copying the weaknesses of the human mind to create an "inferior human emulator" in the cockpit wouldn't be the best option either.
      Even if GPT is designed in a way that gives it the best chance to beat the Turing test, even with very large databases and lots of data, it is bad at many tasks - including procedural generation of game content, even for tabletop gaming. We can get back to this point and my experiences with using GPT for this purpose, but the issue is what you described: it tries to use the "best option" one step at a time; it doesn't even consider its own generated answer as a whole, and it often doesn't understand which criteria from the prompt are relevant and important. I think ChatGPT and Midjourney aren't suitable for a production environment yet, but trying them out while gaming, learning prompt engineering, and evaluating these options is still worthwhile.
      Your claim is that Midjourney doesn't see airplanes without windows. But some large high-altitude, high-endurance fixed-wing drones are in essence airplanes without windows. Midjourney (and its language model) doesn't understand that fixed-wing UAVs are a variant of airplanes, or how to use that information. Please check the following futuristic image: cdn.midjourney.com/618fa0bb-c4b4-453d-b2ac-d5dc9850e007/0_2.webp
      You tried to engineer a prompt to render the plane without front windows / windscreens; I tried a different approach, and my prompt engineering resulted in a picture of a plane with two jet engines but no windows and no windscreens. No new, better-trained model, and my approach still worked. Creating variants, using the blend command, using the describe command to help write even better prompts... I am sure with enough prompt engineering we would get far better results.
      Approximation isn't the issue, because it is used by natural intelligence as well, and the ability to approximate the results of any "unlikely" option is important when we want to invent new solutions. Approximation is the shortcut that makes HUMAN intelligence possible.
      When I used Midjourney I saw how it takes some random noise and tries to turn it more and more into the image we want to see. We have multi-prompts and can prioritize the importance of prompts: if in addition to "front windows" I also mentioned the word "windscreen" and gave them a negative weight, it was easy to use more and more prompt engineering for better results. But due to the economic crisis I don't have the money to finance a lot of experiments with Midjourney, though I think discussing prompt engineering here would make sense.
      When I started to learn about AI, it began with an extremely simple algorithm: yes, the minimax algorithm, which has plenty of derivatives. It uses a lot of resources, so it should be optimized by prioritizing which options are checked first; we need to make sure that once it finds a "good enough" solution it doesn't waste endless resources on other things, and if an option is a dead end, it moves on to the next one.
      So, if a machine-learning algorithm can approximate the results of some potential actions, and it is well trained and reasonably accurate, it can quickly identify options that shouldn't be considered by the minimax search, or should be considered only as a last resort. Minimax and its derivatives can think "several steps ahead", and this is how they would choose the best options. We could combine different kinds of algorithms (some of them with machine learning, etc.) to evaluate various scenarios.
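The minimax-plus-evaluator idea in the comment above can be shown in miniature. A sketch over an invented toy game tree, where the `evaluate` callback stands in for whatever scoring function (hand-written here, possibly learned in the comment's scheme) steers the search:

```python
# Classic minimax over a tiny, invented game tree. In the scheme described
# above, `evaluate` could be a trained model approximating position value.

def minimax(node, maximizing, evaluate, children):
    kids = children(node)
    if not kids:  # leaf: score it with the evaluation function
        return evaluate(node)
    scores = [minimax(k, not maximizing, evaluate, children) for k in kids]
    return max(scores) if maximizing else min(scores)

# Toy tree: the maximizer picks a branch, then the minimizer replies.
TREE = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
LEAF_SCORES = {"a1": 3, "a2": 5, "b1": 2, "b2": 9}

best = minimax("root", True,
               evaluate=lambda n: LEAF_SCORES[n],
               children=lambda n: TREE.get(n, []))
print(best)  # 3: branch "a" guarantees at least 3, branch "b" only 2
```

Replacing the exhaustive recursion with a learned evaluator that prunes unpromising branches early is, roughly, the division of labour the comment describes.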

  • @limbeboy7
    @limbeboy7 1 year ago +11

    I agree - it can help with workload: the checklists, monitoring ground and air traffic, emergencies, etc.

    • @MentourNow
      @MentourNow  1 year ago +2

      Yep, that’s where we will see it first.

    • @anilbagalkot6970
      @anilbagalkot6970 1 year ago +1

      Exactly!

    • @amirshahab3400
      @amirshahab3400 1 year ago +1

      It can help us drive our cars, buses, and trucks more safely and efficiently.

    • @titan133760
      @titan133760 1 year ago

      Sort of like how autopilot works

  • @JohnFrancis66
    @JohnFrancis66 1 year ago +10

    Artificial intelligence is not and never will be. This was so good I shared it in my LinkedIn stream because every other post is some BS AI-related fever dream.

    • @MentourNow
      @MentourNow  1 year ago

      Thank you! Feel free to tag me in the post - Petter Hörnfeldt

    • @dss12
      @dss12 1 year ago +1

      People have become too dumb to understand this simple concept. They think of AI as God.

    • @fabiocaetanofigueiredo1353
      @fabiocaetanofigueiredo1353 7 months ago

      What do you think is so impossible to replicate about a biological brain?
      Physician here.

    • @fabiocaetanofigueiredo1353
      @fabiocaetanofigueiredo1353 7 месяцев назад

      That phrase "AI is not and never will be" is not anything other than wishful thinking. Anyone can say anything.

  • @filipkrawczyk9630
    @filipkrawczyk9630 Год назад +7

    I really disagree with many claims of your expert, but I will just emphasise one. He said that AI doesn't understand what it is talking about and simply uses things it was trained on to predict the result. Of course he didn't give any arguments to back it up and just stated it as a given.
    I've got two examples to show that AI can think and understand.
    The first is the famous move 37 played by AlphaGo. This model was trained on millions of games but had never seen such a move, because no human had ever played it. So in this case you can't say that it just combines things it saw. It understands how the game works on a deeper level than just the simple rules of the game.
    The second one is an example from ChatGPT 4:
    "Imagine that I put a stone in the cup. I covered the cup with a cutting board, then turned the cup upside down with the cutting board and placed it on the table. Then I pulled the board out from under the cup and then picked up the cup and dropped it on the floor. Where is the stone?"
    It answered that when you pulled the board out from under the cup, the stone probably fell onto the table, and when you picked up the cup, the stone was left on the table.
    To answer that problem you have to really understand the relations between the objects mentioned in the question and, to some degree, understand physics. I have no idea how one can claim that in this example the AI just combined sentences it saw during training and predicted the next words without understanding anything.
    I really like your videos about aviation and have learned a lot from them as a hobbyist, but I hope that you will also invite experts who disagree with your opinion (and there are a lot of them in the field of AI).

    • @yamminemarco
      @yamminemarco Год назад +2

      As we intended to make the video relevant to a broader audience we intentionally oversimplified it. I've created a main comment and thread where I've elaborated a little more and you are welcome to join in.

    • @dss12
      @dss12 Год назад

      All of the above items you're mentioning are simply trained. They don't prove understanding...

  • @danielayers
    @danielayers Год назад +5

    Masters Degree in Computer Science here, with more than 20 years experience. This is the best explanation of AI that I have seen anywhere! Well done!

  • @Silverhks
    @Silverhks Год назад +5

    This was a wonderful primer on AI.
    Aviation specific but useful for anyone questioning what ML means for the future.
    Thank you Petter and Marco

  • @jim.franklin
    @jim.franklin Год назад +1

    That was the best and most honest AI discussion I have seen to date. I get so fed up with people banging on about how AI will damage the job market; they often get upset when I point out that AI cannot sweep streets or fill shelves in Tesco, so they will always have a job. But seriously, AI is a database algorithm and nothing more, and Marco explains that so well, and from an inside perspective; people need to take note. I will be sharing this video on Facebook and LinkedIn because this discussion needs to be heard by millions, so they understand what AI can do, but more importantly, what AI cannot do.
    Thanks for a brilliant interview.

  • @ashwin1698
    @ashwin1698 Год назад +7

    Getting cross-industry insights connected with aviation is fantastic! It would be valuable to watch more discussions/exchanges like this. An amazing breakthrough, Petter! Congratulations.
    Warm greetings from Germany. Vielen Dank!

  • @Trebor_I
    @Trebor_I 8 месяцев назад +2

    As a seasoned software engineer I can say that *some* aspects of flying will be supplemented by AI. The autopilot, auto landing, technical malfunction checklist items, and memory items all make perfect sense. Beyond that, I do agree with your assessment of a fully automated cockpit being a generation away.

  • @stonykark
    @stonykark Год назад +5

    I've worked in tech with ML/AI at various points over the years. I agree with the assessment here; it's really nothing to freak out about. The reason people care is that there are a lot of extremely wealthy "entrepreneurs" who want to use it to make money even more easily than they do now. LLMs like ChatGPT will have their uses, but it is not the revolution everyone is afraid of.

    • @teemo8870
      @teemo8870 Год назад

      Hopefully.

    • @youerny
      @youerny Год назад

      I frankly disagree; it is an amazing enabler, capable of a revolutionary acceleration in productivity and value generation, in both productive and recreational activities. It is the new steam engine of the 21st century. On the other hand, I cannot say anything about dangerous outcomes. There are indeed potentially problematic scenarios, as described in Bostrom's book Superintelligence.

  • @gepetotube
    @gepetotube Год назад +10

    Love Mentour videos and they are always very well documented. The guest in this one seems to me not quite the expert I had hoped for. He is just stating the oversimplified view of AI that seems to flood the internet these days. Here are a few comments, if I may:
    * AI is not just ChatGPT. The GPT architecture is just one of many (BERT, GANs, etc.). Many of these are not as visible as ChatGPT, but we have already been affected by AI at large scale (Google Translate, RUclips suggestions, voice recognition, etc.)
    * AI is not just a database system. In the process of deep learning, neural networks are able to isolate meaning (see sentence embeddings, convolutional neural networks, etc.). AI is able to cluster information and use it for reasoning, and I can give you many examples. GPT does not only generate the next word based on previously generated words; it is also driven by the meaning constructed in the learning process. Actually, it does not even generate words; it generates tokens ("sub words"). It is not a probabilistic system.
    * AI could land an airplane without any problems if trained to. Full self-driving AI for cars is a far more complex problem, and it is amazing what current AI systems can do (Tesla, Mercedes, etc.). But as somebody said, the first rule of AI is: if you can solve a problem without AI, then do it that way. AI is not very efficient at training. Currently we can fly airplanes without pilots without using AI (autopilot, auto-landing systems, etc.). On the other hand, replacing pilots completely will not happen any time soon, even for the simple reason that people will not board a plane without a pilot any time soon. But it is creeping in. As mentioned in previous videos, the industry is moving from two pilots to one.
    * AI will replace jobs (and it will create new ones). One example is customer support, with all the robots that answer your questions. What do you think Med-PaLM 2 will do?
    ... ;-)
    One thing I agree with the guest on: AI is an enabler for new opportunities. Also, good idea to bring aviation into the AI discussion.
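    To illustrate the tokens ("sub words") point, here is a toy sketch of greedy longest-match subword tokenization; the vocabulary is invented, and real models use learned byte-pair-encoding vocabularies of tens of thousands of tokens:

```python
def tokenize(text, vocab):
    """Greedy longest-match subword tokenization: repeatedly take the
    longest vocabulary entry that prefixes the remaining text."""
    tokens = []
    i = 0
    while i < len(text):
        for j in range(len(text), i, -1):   # try the longest piece first
            piece = text[i:j]
            if piece in vocab:
                tokens.append(piece)
                i = j
                break
        else:                               # unknown character fallback
            tokens.append(text[i])
            i += 1
    return tokens

# Invented vocabulary, for illustration only.
vocab = {"auto", "pilot", "land", "ing", " ", "s"}
print(tokenize("autopilot landing", vocab))
# → ['auto', 'pilot', ' ', 'land', 'ing']
```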

    • @yamminemarco
      @yamminemarco Год назад +1

      You bring up very valid points. I've created a main comment and thread where I've elaborated a little more and you are welcome to join in.

  • @esce69
    @esce69 Год назад +7

    Finally an intelligent discussion on current AI. Really didn't expect it on an aviation channel. Thanks!

  • @johnhawks5035
    @johnhawks5035 Год назад +3

    Brilliant. You addressed a question, (very comprehensively), that many have been pondering. Thanks.

  • @thesoniczone
    @thesoniczone Год назад +3

    It is ironic, isn't it? If you want to know the truth about subjects that don't even revolve around aviation, this is your channel.
    One of the best videos I have seen in YEARS!

    • @MentourNow
      @MentourNow  Год назад

      Glad you found it interesting!

  • @diogopedro69
    @diogopedro69 Год назад +9

    Someone honest talking about "AI", for a change :) thank you

    • @MentourNow
      @MentourNow  Год назад +3

      You are welcome. That’s what we are here for

  • @NicolaW72
    @NicolaW72 Год назад +6

    Thank you very much for this really informative interview which clarifies what AI is and can do - and what AI is not and cannot do!👍 That´s a core point of knowledge, not only in the Aviation Business.

  • @LemuelTaylor
    @LemuelTaylor Год назад +4

    This channel is on another level. This is kind of content we need (and want as well 😄) . Thank you so much Petter and Marco. This was truly informative.

  • @arunkottolli
    @arunkottolli Год назад +2

    AI can help pilots by automating routine operations: routine cabin announcements, early warning of turbulence, verifying and implementing various checklists, sending automated communications to the control tower and vice versa, etc. There is no way AI will not be integrated into cockpits in the near future.

  • @sergethelight
    @sergethelight Год назад +3

    Awesome vid! Airbus already uses machine learning (the basis of AI) to improve engineering and airline fleet operations. They work together with a company called Palantir, and they created "Skywise" for this.

  • @andrzejostrowski5579
    @andrzejostrowski5579 Год назад +3

    Excellent video, thanks for bringing on a real expert on the subject. As a mathematician working as a software engineer, I am so happy to see a voice of reason in talking about what we call AI. Don't underestimate automation though; I am blown away by Garmin Autoland, and I think we might see similar automation systems in commercial aviation at some point, so I wouldn't rule out single-pilot operations at some point in the future.

  • @dmitrikomarov4311
    @dmitrikomarov4311 Год назад +17

    As a pilot, I'm happy to hear my job is safe! (For now)

    • @MentourNow
      @MentourNow  Год назад +5

      Yep! And for a while to come I would say.

    • @mustafashulqamy1844
      @mustafashulqamy1844 Год назад +3

      ​@@MentourNow I hope it's a long enough while. I'm 15 now and want to become an airline pilot in the very near future.

  • @starbock
    @starbock Год назад +3

    Excellent interview with Marco. Very informative, objective, level-headed, practical. Always great content and awesome job! 🙏

  • @ninehundreddollarluxuryyac5958
    @ninehundreddollarluxuryyac5958 Год назад +4

    Computers running normal code (not AI) will be more useful because of the predictability of their output. AI can be useful as a user interface (voice recognition) and for general awareness. One example would be an AI that listens to all the ground control transmissions, so it is aware that you are supposed to turn onto a certain taxiway and reminds you if you start to turn at the wrong one, or if you are about to cross an active runway that another plane has been cleared to take off from or land on by a different controller. The Miracle on the Hudson is an excellent example of why I want a human pilot flying any plane I am in.

    • @Jimorian
      @Jimorian Год назад

      Another aspect of programmed automation is that you can tell the pilots exactly what parameters govern the "decisions" the automation makes, so they understand when the automation is outside its parameters and have more information about whether to trust it in a particular situation.

  • @NateJGardner
    @NateJGardner Год назад +2

    GPT models contain a world model: they are capable of performing calculations and keeping track of state. They can play games they've never seen before. It's not as simple as just accessing memories. It's accessing abstract concepts and using a predictive model it has developed that can reliably predict information, and the only way to do that is to actually process the information. I.e., it's not overfit. It can actually perform reasoning. It does have abstract ideas about what things mean. However, its entire world is text, as seen through the lens of its training data, so of course it currently has limitations.

  • @Cartier_specialist
    @Cartier_specialist Год назад +6

    I can understand why you partnered with Marco for this: he explains AI in a simple way, just like you explain aviation-related information.

  • @TheHenryFilms
    @TheHenryFilms Год назад +19

    In light of one of your recent videos, I just realized AI might be very useful to parse the relevant parts of NOTAMs, and maybe even remind the pilots as the flight progresses.

  • @BottleOfCoke
    @BottleOfCoke 5 месяцев назад

    Aerospace control engineer here!
    One of the problems with the idea that AI can outperform a pilot is that, to collect the training data, you need a pilot to consistently fly the very maneuvers you thought the AI could do better than the pilot.

  • @fredrikjohansson
    @fredrikjohansson Год назад +37

    Even if it flew the plane perfectly 99.99% of the time, it would be devastating the 0.01% of the time it fails due to the weird hallucinations AI sometimes has.

    • @playmaka2007
      @playmaka2007 Год назад +10

      Kind of like the weird hallucinations humans sometimes have?

    • @danharold3087
      @danharold3087 Год назад +1

      Airplanes spend most of their time flying at high altitudes, where crews often have minutes to correct a problem, so the 0.01% is not necessarily devastating. Closer to the ground, of course, you are correct.

    • @adb012
      @adb012 Год назад +4

      Well, that's the tension here. If it fails to save United 232 and Sully's flight, but it doesn't crash in AF447, Lion Air and Ethiopian MCAS accidents, Colgan, AA587 at New York, Helios hypoxia, Tenerife, PIA gear up go-around, AA965 CFIT at Cali, TAM 3045 at Conghonas, and a long list of accidents that were either caused by the pilot (due to distraction, confusion, spatial disorientation, disregard of procedures, fatigue, etc...) or that were not avoided by the pilot when the pilot could have avoided it just by following procedures (MCAS accidents, AF447...), is it worth the price?

    • @mofayer
      @mofayer Год назад +3

      @@playmaka2007 right? This channel is full of examples of pilots hallucinating, especially in stressful(high work load) situations, something AI doesn't ever have to deal with since it can't stress.

    • @robainscough
      @robainscough Год назад +5

      A 0.01% failure rate is still better than 0.5% human error.

  • @madconqueror
    @madconqueror Год назад +2

    Very cool! I like this format very much. Petter discussing with other passionate people about not explicitly aviation related topics. Very nice video.

  • @Oyzatt
    @Oyzatt Год назад +15

    This is the most accurate description of AI I've ever heard. Great 🧡

    • @NicolasChaillan
      @NicolasChaillan Год назад +2

      Not true. It's a description of Generative AI. Not AI.

    • @Oyzatt
      @Oyzatt Год назад +1

      @@NicolasChaillan But the analogy applies to any AI. What you don't understand is that humans are exceptional in their "capabilities", whatever that means.

    • @NicolasChaillan
      @NicolasChaillan Год назад +1

      @@Oyzatt Clearly you don't understand what the technology is capable of and what we have already done at the U.S. Air Force and Space Force. I was the Chief Software Officer for 3 years.

    • @Oyzatt
      @Oyzatt Год назад

      @@NicolasChaillan Let's crystallize everything here for the sake of clarity. AI is not creative like humans; it simply regenerates what has been fed into it, which clearly shows its boundaries. In the military space it can be capable of many things, but not cognitive tasks, if you'll agree.

    • @preventbreach3388
      @preventbreach3388 Год назад

      @@Oyzatt wrong. That's generative AI. Not AI as a whole.

  • @oDrashiao
    @oDrashiao 9 месяцев назад

    I'm an AI researcher, and I always struggle to explain why we are not talking about sentience, but basically big prediction machines. Marco did a great job there! Thanks for bringing an actual expert :)

  • @MrKen59
    @MrKen59 Год назад +1

    Perhaps AI could act as a check pilot. How many investigations have shown a crew that was disoriented, or a situation that changed abruptly while the crew was making conflicting control inputs? Then someone says a cryptic word instead of "hey, you guys need to put the nose down or the plane is going to stall". AI can bypass emotions, the fear of being wrong, or the embarrassment of speaking plainly because the captain is so experienced. Perhaps an AI that has been watching the flight could verbally suggest solutions. Right now you rely on buzzers, or a light that says one pilot is pulling up while the other is pushing down. A check-pilot AI could step in when a human is frozen by information bias or tunnel vision. We all understand it happens, but hindsight is 20/20, so let's get a second set of eyes to challenge a potentially bad situation and speak up. This is one application that I think could help. Perhaps it could even help with traffic control.
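    A toy sketch of that "speak up" idea, loosely modeled on the dual-input alerting that modern fly-by-wire aircraft already provide; the threshold, message, and sample values are invented:

```python
def monitor(samples, threshold=0.5):
    """Flag time steps where the two pilots apply opposing pitch inputs
    (one pulling, one pushing) beyond a threshold - the kind of conflict
    a check-pilot system should call out loudly."""
    alerts = []
    for t, (capt, fo) in enumerate(samples):
        if capt * fo < 0 and abs(capt - fo) > threshold:
            alerts.append((t, "DUAL INPUT: conflicting pitch commands"))
    return alerts

# Sidestick pitch inputs in [-1, 1]: +1 full nose-up, -1 full nose-down.
samples = [(0.1, 0.0), (0.8, -0.6), (0.9, -0.9), (0.2, 0.1)]
print(monitor(samples))
# → alerts at t=1 and t=2, where the pilots are commanding opposite pitch
```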

  • @stevedavenport1202
    @stevedavenport1202 4 месяца назад

    One thing that people need to understand about Sully is that he was not only a very capable pilot with a very calm and grounded personality; he was also good at playing "what if" games in his head.
    So I suspect that he had already contemplated this scenario and come up with some alternatives.

  • @fazq02yfd
    @fazq02yfd Год назад

    Wow, what a knowledgeable and well articulated guest. He really understands the subject matter.
    Thank you both.

  • @mishagolub8227
    @mishagolub8227 Год назад +2

    We have flight simulators, and big commercial simulators are very accurate. You can automate a simulator to generate a large number of random failures, let the AI fly the simulation with each failure, and use reinforcement learning. Go over every control in the airplane and add a scenario in which that specific control does not work, or works in an unexpected way. That way we can train a system that detects system failures by their symptoms.
    When it comes to accountability, what companies really want is more like insurability. If insurance companies see that AI is no more faulty than a human, they will be ready to work with such planes. It may take some time, as aviation is overregulated, but once the insurance corporations have their say, the way is open.
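    The simulator-plus-reinforcement-learning idea can be sketched as a toy tabular Q-learning loop. The mini "pitch deviation" environment below is invented and vastly simpler than any real simulator; it only shows the training mechanic (act, observe, reward, update):

```python
import random

def train(episodes=2000, seed=0):
    """Toy tabular Q-learning: state = pitch deviation in {-2..2},
    actions nudge it by -1/0/+1 under noisy dynamics, and the reward
    favors staying level. Real training would use a high-fidelity
    simulator and a far richer state."""
    rng = random.Random(seed)
    actions = [-1, 0, 1]
    q = {(s, a): 0.0 for s in range(-2, 3) for a in actions}
    alpha, gamma, eps = 0.2, 0.9, 0.1
    for _ in range(episodes):
        s = rng.choice(range(-2, 3))
        for _ in range(10):
            # Epsilon-greedy: mostly exploit, occasionally explore.
            if rng.random() < eps:
                a = rng.choice(actions)
            else:
                a = max(actions, key=lambda x: q[(s, x)])
            s2 = max(-2, min(2, s + a + rng.choice([-1, 0, 0, 1])))  # noisy dynamics
            r = -abs(s2)                                             # reward: stay level
            q[(s, a)] += alpha * (r + gamma * max(q[(s2, x)] for x in actions) - q[(s, a)])
            s = s2
    return q

q = train()
# The learned greedy policy pushes the pitch back toward level:
policy = {s: max([-1, 0, 1], key=lambda a: q[(s, a)]) for s in range(-2, 3)}
print(policy)
```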

  • @Bob-nc5hz
    @Bob-nc5hz Год назад +3

    10:00 The discussion about the feedback loop and the ability to identify weak points and sources of error was one of the eye-openers of the essay "They Write the Right Stuff", about the US Shuttle software group (pretty much the only component of the Shuttle program that got praise after the Challenger accident). The group's engineering was set up so that *the process* had the responsibility for errors, in order to make the best use of human creativity and ingenuity without having to suffer from its foibles. Any bug that made it through was considered an issue with the process, and thus something to evaluate and fix in that context.

  • @JAF30
    @JAF30 Год назад +1

    This was a great discussion of the whole "AI" marketing going on right now for something that isn't even really AI. Speaking as someone in the tech field, current AI is nothing more than a messy, giant computer program that must always answer your question; it doesn't even have to be a truthful answer.

  • @charlescrankson3487
    @charlescrankson3487 Год назад +1

    Oh, I like how Marco defined AI. This truth is never revealed, because they know we would start distrusting it once we knew that AI is not actually intelligent (at least not yet). As he said, now I understand that what the AI companies are trying to do is to "FAKE it until [they] make it!"

  • @pfefferle74
    @pfefferle74 Год назад +1

    There was a good point made: an AI pilot assistant must have a way to strongly signal to the pilots whether it is making a helpful suggestion that the pilot may overrule, or an emergency interjection that the pilot must simply have faith in, like when TCAS steps in to avoid an imminent collision.

  • @lothar4tabs
    @lothar4tabs Год назад +2

    This is the best explanation I ever had about AI, what it can do and cannot. You guys did an amazing job here.

  • @vedranb87
    @vedranb87 Год назад +2

    As a programmer myself, I understand where the guest is coming from and I agree with a lot of being said.
    However, could an AI model, trained on the entirety of human knowledge to date in aviation, all scenarios and outcomes, become an indispensible crewmember who would notice when CRM is broken and act as a voice of reason, when humans are in an upset? Could it notice from the instrumentation and the inputs that whatever the crew is doing is making the situation worse and start shouting at the captain to reset their head and check the flight directors or something constructive?
    In the case of rapid decompression and both pilots incapacitation, could it achieve level flight at a low altitude?
    Could it take over the boring tasks of making sure the correct radio frequencies are being used, communicating with the tower and taking some of the predictable workload in the cockpit?
    I think it could! I genuinely think AI could make aviation safer, in its current state, if used to its greatest advantage - having access to a lot of things that happened and how to solve them.

  • @JakeRobb
    @JakeRobb Год назад +1

    Hi! Software engineer with 30+ years of experience speaking.
    Marco is correct that AI can’t do those things today. However, there’s a decent chance he’s wrong about when it might gain those abilities.
    If you think about how Marco described how AI works, and you compare that to the apparent behavior of a toddler, you’ll find a lot of similarities. This is not a coincidence - the human brain works in exactly the same way. An *adult* human brain is significantly more advanced, of course, but it’s the same system we had as toddlers, and a handful of decades of training is all it takes to make the difference.
    Now, consider the rate at which we learn, and consider that the megacorporations currently pushing AI forward are investing billions of dollars each year. Suffice it to say that AI learns faster than we do, and it will take _less_ than a “handful of decades” to get from where we are now to a point where these systems exhibit adult human intelligence. Including everything Marco said is not currently possible. How much less? I’ll come back to that.
    In the past year, GLLMs (generative large language models - the type of AI demonstrated by ChatGPT, Bard, Midjourney, et al) have demonstrated two key abilities:
    1. They can train themselves - i.e., they can generate their own training data, ingest it, and come out the other side more capable than before. They can do this continuously.
    2. They are beginning to demonstrate emergent capabilities, meaning they can answer questions on subjects outside their training set. They can solve math problems that no human has solved before. They are capable of research-grade chemistry.
    People who study and work with these models every day are discovering new abilities in existing AIs *literally every day.* All indications are that the current maturation rate is scaling _exponentially_. (For those not up on their math, that means that the rate is accelerating really fast.)
    Exponential growth is deceptive in the early stages, and there comes an inflection point at which it quickly gets out of hand. Perhaps you’ve heard the parable about the man who did something to please an ancient Pharaoh. The Pharaoh offered him the reward of his choice. The man said that his choice was simple. He wanted a chess board, and on it he wanted grains of wheat (the story varies; sometimes it’s something else). On the first square of the board, one grain. On the second, two grains. Four on the third, eight on the fourth, and so on, doubling with each successive square.
    Anyway, the story goes that the Pharaoh agreed, and his people began portioning out the wheat. At some point, they realize what’s happening, and long before completing the request, the man is put to death.
    The story is obviously a myth, but one imagines that the death sentence would have come about before they even finished filling the third row, as the first square in that row would have required 65,536 grains. Because the total number of grains required to satisfy the request was 18,446,744,073,709,551,615, or 2^64-1. That’s eighteen quintillion grains. If gathered into a sphere, it would have been miles in diameter.
    All of this to say: we’re in the early, shallow part of that growth curve in terms of the maturation of GLLM AIs, and there will come a point at which the growth rate accelerates beyond what we can comprehend, and it will not stop there. AIs are like toddlers now. In a month, they might be like kindergarteners. In six months, teenagers. In a year, fully sentient and capable of anything a human can do, but faster - and continuing to mature beyond that. Beyond anything we have a frame of reference against which to measure it.
    That timeline is only a guess, but it’s not outside the realm of possibility.
    Getting back to the subject: it might be much longer than that before regulators _allow_ AI to fly a commercial airliner, but I think it will not be long at all before there are AIs operating in some aircraft, comparing what they would have done with what the pilots actually did, and then comparing outcomes. At some point, AI will be shown to make consistently better choices than a human, and then it’s only a matter of time before the AI becomes an additional “pilot” (in addition to PF and PM) with a more active role. Then it’ll _be_ the PM, and then eventually it’ll be the PF. I suspect it will be 20 years or more before there’s not at least one human PM, but the idea that two (or more) redundant AIs can take the roles of both PF and PM is entirely feasible, and I’d go so far as to predict that most people reading this will live to see it.
    I might be wrong about all of this. Maybe we’re not on an exponential growth path. But signs point to that being the case, and if I were a betting man, I’d sooner bet for it than against it.
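    The chessboard arithmetic in the parable above is easy to verify:

```python
# Doubling one grain per square over the 64 squares of a chessboard:
total = sum(2 ** i for i in range(64))
print(total)            # 18446744073709551615, i.e. 2**64 - 1
assert total == 2 ** 64 - 1
# The 17th square (first of the third row) alone holds 2**16 grains:
assert 2 ** 16 == 65536
```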

  • @pedrosmith4529
    @pedrosmith4529 Год назад +1

    History is full of predictions about the future that were 100% wrong.
    The way AI has evolved in two years is mind-blowing. We can't even begin to imagine what is going to happen in ten years.

  • @mityace
    @mityace Год назад +2

    Thanks for a video without the hype. Back in the 1980s I was thinking of getting a PhD in AI, so I have continued to follow the development of the field. Kudos for finding an expert in the field who gives it to us straight about what "AI" is and is not.
    It's hard to predict whether true AI will ever exist. Nearly 40 years after I left school, we don't seem to be much closer to building an actual intelligence. There could be a breakthrough today, or 100 years from now our descendants could be trying to figure out why we wasted so much time on this dead end. As it usually is in life, what actually happens will be somewhere between those extremes.

  • @adamstefanski8744
    @adamstefanski8744 Год назад +2

    Very cool and insightful material guys 😉 thank you

  • @tensorlab
    @tensorlab Год назад +1

    Being a data scientist and aviation enthusiast: a situation such as Sully's could definitely be supported by AI in the form of a recommendation system or disaster-managing co-pilot system, where the system quickly identifies a dual engine failure and determines the shortest route to the nearest usable airstrip. This, however, requires intensive training on large simulation datasets and would involve multiple countries across the world. Model inference would also require extremely powerful on-board computers to process such large streams of data quickly, which might drive up the cost of the airplane. So, theoretically it is possible, but practical implementation likely won't happen any time soon.
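    The "nearest reachable airstrip" part of such a recommender can be sketched with plain geometry, no large model required. The position, airport coordinates, and glide ratio below are illustrative assumptions only; a real system would also account for wind, terrain, turns, and configuration:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_nm(lat1, lon1, lat2, lon2):
    """Great-circle distance in nautical miles."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 3440.065 * asin(sqrt(a))

def reachable_airports(lat, lon, altitude_ft, airports, glide_ratio=17.0):
    """Rank airports within still-air glide range, nearest first.
    A glide ratio of roughly 17:1 is a commonly quoted ballpark for a
    clean airliner; treat it as an assumption, not a performance figure."""
    range_nm = altitude_ft * glide_ratio / 6076.12  # feet of altitude -> nm of glide
    options = []
    for name, alat, alon in airports:
        d = haversine_nm(lat, lon, alat, alon)
        if d <= range_nm:
            options.append((d, name))
    return sorted(options)

# Hypothetical position and airport list, for illustration only.
airports = [("LGA", 40.777, -73.872), ("TEB", 40.850, -73.708), ("EWR", 40.692, -74.169)]
print(reachable_airports(40.86, -73.88, 3000, airports))
```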

  • @larrybremer4930
    @larrybremer4930 Год назад +1

    This video is spot on about AI. Look at Tesla cars, where the "AI" is not really anything of the sort. It has a set of rules to follow and it does its best to follow them; when faced with input that doesn't follow the rules, it generally does not end well. AI has to learn when NOT to obey rules, and that concept is hard for a computer, yet a trained dog can do it, so you could say AI is not even as smart as a dog. One element of training a seeing-eye dog is teaching it to disobey, such as when the owner prompts the dog to cross a street but the dog can see oncoming cars. In that case the dog has to break its obedience training and instead enforce a higher directive of safety, basing its decision on measuring action and inaction and projecting the probable outcomes before acting. When AI can make those kinds of determinations based on actual measurable quanta (like the seeing-eye dog) rather than rule rankings, then we will have a workable general AI.

  • @Sheherezade
    @Sheherezade Год назад +1

    Thank you for the post, my teenager is interested in becoming a pilot so I’m grateful for your opinion.
    It would be boring for a pilot if the co-pilot is eliminated, humans need people to talk to at work, especially now that the cockpits are sealed.

  • @MountainCry
    @MountainCry Год назад +3

    I got completely sidetracked by how beautiful the AI Van Gogh airplanes were.

  • @BlueDinnie
    @BlueDinnie Год назад +11

    So it's basically computerized muscle memory.... Interesting

  • @SinergiaAlUnisono
    @SinergiaAlUnisono Год назад

    I trust an "expert" who says 'I don't know for sure' more than the ones who bluntly answer with a binary 'yes' or 'no'.

  • @arunkottolli
    @arunkottolli Год назад +1

    In the near term, we can get to an AI-assisted pilot: an enhanced autopilot for 99.99% of use cases, relying on a human pilot for the remainder.
    The next stage will be remote pilots for cargo airlines, where the AI flies the plane almost all of the time and calls for human assistance when needed.
    An AI system will never suffer from input overload, and AI can be an excellent co-pilot.

  • @owais4621
    @owais4621 Год назад +4

    Absolutely fantastic video

  • @alphabravo1214
    @alphabravo1214 Год назад

    This video is spot on and agrees with what most other experts in the field are saying.
    One thing that wasn't really touched on in this video is the difference between an autopilot and an AI controlling the airplane. When autopilots were created, there was similar concern about pilots being replaced because, as the name suggests, the point of an autopilot is to control the airplane automatically. Autopilots are advanced enough that taxiing around the airport is about the only thing that isn't automated. So, one may be tempted to ask: why doesn't the autopilot steal pilots' jobs, and what does AI bring to the table that could threaten their jobs?
    The answer is that the autopilot doesn't have the decision-making capabilities necessary to safely fly an aircraft, and the AI of today isn't advanced enough to have them either, as this excellent video explains. Just because we can automate the mechanics of flying an airplane doesn't mean we can automate the decision-making behind why we fly a particular way or follow a certain set of procedures. An autopilot might very well be able to fly an ILS approach more accurately than any human could, but that doesn't mean it understands when it needs to fly an ILS approach or how to set up for one. As the video explains, AI is also incapable of creative thinking; it's only able to take what it has seen before and apply that to the situation. This understanding of why we do things and what makes some action appropriate or not is the crucial element that is missing from these automatic systems, whether they be rule-based (autopilots) or machine-learning-based (AI).
    That said, some use cases that AI could be used for, in addition to what the video explains, include improving flight planning and communication with ATC and other pilots.
    For flight planning, consider all the NOTAMs, METARs, and so on that pilots have to sift through. It is a lot of information that is usually not formatted in a very human-readable way, and even when it is, pilots still have to pick out what is important and relevant to their flight. AI could parse through all that information, give pilots a summary of the important parts, and even suggest alternate routes if it were paired with weather and traffic prediction models. That could also be a way to help ATC: choosing routings for flights to maintain traffic separation and expediency.
    Of course, any such tool would have to be thought through carefully. Pilots would still need to go through the materials to check that what the AI said is correct, but at least they would have an idea of what to expect which might speed up the process. Still, Mentour has done videos on confirmation bias contributing to accidents, so pilots would need to be trained to use these tools effectively.
    Another use of the tool could be in communication. Paired with radio or text based interfaces, these models could assist in translation when non-native English speakers or other languages are being used with ATC, which could improve situational awareness and even clear up miscommunications. Again, care must be taken, since these models could also translate incorrectly, but there are other translation/speech recognition/text to speech tools that could be paired with AI to reduce that risk.
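The NOTAM-triage idea above can be sketched even without a language model. Here is a hypothetical pre-filter; the NOTAM strings, airport codes, and keyword list are invented and vastly simplified for illustration, not a real NOTAM format:

```python
# Hypothetical pre-filter for NOTAM triage: keep items mentioning airports on
# our route, and rank items containing operationally important keywords first.
# Real NOTAMs are far messier; these strings are simplified for illustration.

def triage_notams(notams, route_airports, keywords=("RWY", "ILS", "CLOSED")):
    relevant = [n for n in notams if any(ap in n for ap in route_airports)]
    # More keyword hits -> higher priority (earlier in the list).
    return sorted(relevant, key=lambda n: -sum(kw in n for kw in keywords))

notams = [
    "EGLL RWY 09L/27R CLOSED DUE WIP",
    "LFPG TWY B5 LIGHTING U/S",
    "EDDF BIRD ACTIVITY VICINITY AD",
    "EGLL ILS RWY 27L DOWNGRADED TO CAT I",
]

for n in triage_notams(notams, route_airports=("EGLL", "LFPG")):
    print(n)
```

A real system would pair something like this with a model that summarizes the surviving items, and, as the comment notes, pilots would still have to verify the output.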

  • @philipsmith1990
    @philipsmith1990 1 year ago +10

    This was a fascinating discussion. As a pilot for a major airline I spent many hours in a simulator preparing to employ procedures learned from generations of pilots. As a technical rep for pilot associations, and for my own interest, I spent many more hours studying accident and incident reports and hopefully learning from them. I spent many hours in the air seeing how another pilot managed events. Like just about every pilot I spent even more hours, often over a beer, talking about aviation events I had experienced. In those ways I built a base of knowledge that stood me in good stead when I had to deal with events myself. This process, although less structured, resembles the building of a knowledge base on which an AI depends. Certainly one can point to incidents that an AI would find difficult to handle, although I'm not sure that the 'miracle on the Hudson' is one. I can imagine that an AI would have the knowledge that, when no other option is available, you put the aircraft down in any space clear of obstacles within range. The QF32 incident of an uncontained engine failure might be more difficult, since the crew there had to ignore some of the computer-generated prompts. QF72 would also be unlikely to be fixed by an AI, since it involved a system fault combined with an unanticipated software failure.
    So I agree that there would be situations that an AI could not resolve. But would they be more than those that pilots don't satisfactorily resolve? Possibly not. It may be that even with current technology the overall safety, measured as number of accidents, would be improved.
    However there is another issue. Would passengers fly in an aircraft with no-one up front? I know many people who would not. But I also know people who would choose the flight that was 10% or 20% cheaper.
    And of course there are the non-flying events that a pilot is called upon to resolve. I can't see any current AI finding an ingenious way to placate a troublesome passenger. I found that walking through the aircraft after making a PA announcing a long delay was far more effective than the PA alone. Just seeing that the pilot had the confidence to show their face made pax believe what they were told. I regret that some of my ex-colleagues didn't believe this.
    Something that does worry me, which is not yet down to AI but is already a problem and would be made worse by it, is skill fade. The less a pilot is called upon to exercise their skills, the more likely it is that those skills will not be good enough when needed.

    • @evinnra2779
      @evinnra2779 1 year ago +1

      They could make the flight 50 or even 90% cheaper; I wouldn't buy a ticket for a flight that has no pilot and first officer flying it. There is something about this psychological factor of fear. Although we know far more people get involved in car accidents than plane accidents, nevertheless the fear of flying remains, because it is mostly about the knowledge of being powerless to do anything whatsoever in a plane when something goes wrong. This is why trusting artificial intelligence to do the job of a human mind would be a step too far for me. I guess it is not about how accurate artificial intelligence may become in the future at recognizing a situation and finding viable solutions to problems, but rather my own sense of trusting a human mind much more, since it works similarly to how my mind works. Currently AI does not work the way the human mind works; it only imitates some aspects of the human thinking process.

    • @philipsmith1990
      @philipsmith1990 1 year ago +1

      @@evinnra2779 As I said, I know people who think the way you do, but I also know people for whom the price matters more. If the operators see more profit then they will pitch the price to maximise that and employ whatever marketing they need to.

    • @lawnmower1066
      @lawnmower1066 1 year ago +2

      This generation might struggle to fly with AI; the next generation won't give it a second thought!

    • @giancarlogarlaschi4388
      @giancarlogarlaschi4388 1 year ago +1

      Go fly in Asia or Africa... it's an unpredictable thing, weather-wise and ATC-wise.
      And sometimes sub-par local pilot and maintenance standards.
      Been there, done that many times.

    • @philipsmith1990
      @philipsmith1990 1 year ago +1

      @@giancarlogarlaschi4388 I've flown in Africa and Asia. The unpredictability is no worse than other places. I used to tell my copilots that the one thing you know for sure about a weather forecast is that it's wrong. It may be a little bit wrong or it may be completely wrong. The lowest fuel I ever landed with was at Heathrow, thanks to a surprise. In the USA the weather can be unpredictable. I landed from a nighttime Canarsie approach in a snowstorm. The next day we strolled down to the Intrepid in our shirt sleeves.

  • @XRPcop
    @XRPcop 1 year ago +4

    Great information and perspective on AI. There's definitely a ton of fear-mongering going on, and the analogy "fake it till you make it" truly makes sense!

    • @WindowsXP-y5l
      @WindowsXP-y5l 1 year ago

      What do you mean by "Fear mongering"?

    • @XRPcop
      @XRPcop 1 year ago

      @@WindowsXP-y5l The news media and others exploiting AI tech by saying "AI is taking your job" or "AI will control everything" when in reality, as explained in this video, AI really can't take control of anything...just yet.

  • @anilbagalkot6970
    @anilbagalkot6970 1 year ago +4

    We are in an advanced phase of the same situation as when the calculator was invented in the 1700s!! Everybody thought they would lose their jobs, but instead we invented computers!

  • @damianknap3444
    @damianknap3444 1 year ago +2

    One of the best uploads ever Peter 👏🏻

  • @jacobgreenmanedlion1863
    @jacobgreenmanedlion1863 1 year ago

    My wife is somewhat slowly learning how to drive; she has problems with perceiving distance, speed, and direction (if she's supposed to turn left and it's a new trip, she may well turn right), as well as mental mapping. She can drive the car fine so long as I'm co-piloting.
    Anyway, she often asks me at stop signs and unprotected left turns, "can I go?", meaning: is there enough space before the oncoming car to go. Usually when she asks me this, the answer would be yes if I were driving. But what I always tell her is: if you aren't sure, the answer is no. Once she has gone, I will tell her that I would have gone at the moment she asked, but she has to judge for herself. I also firmly believe that I have the ability to tell her not to go, but that it is dangerous, sitting where I am on the opposite side of the car from usual, to make the positive decisions for her.
    These are also dangers of AI.

  • @EdOeuna
    @EdOeuna 1 year ago +1

    In order to implement AI into flying aircraft I think there has to be a major overhaul and rethink into the entire way of flying and operating the flight deck. Airmanship is almost entirely based on previous experiences. Also, there already exist systems for detecting the wrong runway, wind shear, terrain ahead, etc, so these don’t need to be redeveloped and replaced by AI. As for AI helping in emergencies, a lot of emergencies require quick action and muscle memory from the pilots. To introduce a third “person” in the flight deck might be seen as too much interference. It also risks clouding the judgement of the pilots as they themselves will have reduced situational awareness.

  • @johnmcqueen4883
    @johnmcqueen4883 1 year ago +7

    This was sooo informative. I am retired so I haven’t worried about AI taking my job (what, it’s going to sit in my hammock?) and therefore have not paid a lot of attention to it, but now I feel I have a pretty good understanding of what it is and what it isn’t, what it can and cannot do. Thanks, Mentour!

    • @philip6579
      @philip6579 1 year ago

      AI may make it so expensive to live that you cannot remain retired.

    • @theairaccumulator7144
      @theairaccumulator7144 1 year ago

      @@philip6579 That won't happen because of ChatGPT and its clones; those are simply a slightly more advanced autocorrect.

  • @jwflyaway
    @jwflyaway 1 year ago +3

    AI will just make pilots and truck drivers more effective; it will not replace them. Thanks for the information about AI.

  • @idanceforpennies281
    @idanceforpennies281 1 year ago +1

    Time-critical weather prediction/modelling is mathematically one of the hardest things imaginable. So much data, and it's like a 4D fluid-dynamics show on steroids, with positive and negative feedbacks all going on at once. AI will be a significant benefit here.

  • @mendel5106
    @mendel5106 1 year ago +3

    My take is that, at least in the first generation, it wouldn't make any inputs un-commanded by the pilots; instead it could be integrated with sensors and make quick suggestions in emergency situations, and explain right away why it thinks that is the correct solution.
    The pilot then has time to analyze the suggestion and decide for themselves whether it is valid and applicable or not.
    So no worries about AI "taking over airplanes"; instead, I think pilots should welcome the idea, so long as it acts as a guardian angel that makes suggestions as needed but never actually takes control.
    Cheers

  • @arifaboobacker
    @arifaboobacker 1 year ago +1

    I think this is finally a realistic view of AI. Why do all the other 'AI experts' and their millionaire/billionaire investors go so mad and illogical when discussing AI... all hand movements and all...

  • @anilbagalkot6970
    @anilbagalkot6970 1 year ago +5

    Hej!!! The best discussion 👌 Thanks Petter!

  • @TheAnticorporatist
    @TheAnticorporatist 1 year ago

    It seems like every crash video boils down to, "but the first officer got flustered and didn't realize that the down stick was in doggy mode and the autopilot had defaulted to protect-the-kitty mode, so it did X". A computer can be programmed to KNOW all of that and NOT miss the checklist item that needed to be reset to "A" mode, or whatever. For instance, in the "Miracle on the Hudson", a camera (or some other form of monitoring) watching the inlet of the engines would "see" a frickin' goose get sucked into that engine and would automatically, in seconds, determine whether it was too damaged to be restarted (and, if not, run the restart checklist); know exactly what the flight characteristics of the plane were going to be from then on; realize that returning to the airport was going to be a no-go; detect emergency landing options; decide on the Hudson; contact the field, air traffic control, the police and the coastguard simultaneously and instantly while informing the passengers and crew of what was about to happen; and maneuver the plane around for a water landing, making sure to land at the ideal angle to avoid having the plane break up, if, indeed, that was the only option. It is entirely possible that the plane, knowing that the starboard engine was completely out, and knowing EXACTLY where the other air traffic and ground traffic was (and was going to be), would INSTANTLY throw the plane into a maximum bank while deploying flaps, re-lowering the landing gear, and broadcasting appropriate messages and crew and passenger instructions, pull out of the bank, and land the aircraft back on the field: a feat that no human flight crew could hope to achieve in that amount of time.
    Or, who knows: since the subsystem of the expert system involved with vision wouldn't have been occupied with everything else involved in getting an airplane into the sky, and could also have a much greater field of view than the pilot, it may well have noticed the flock of geese and modified the takeoff profile temporarily to avoid them. I'm a huge fan of pilots, but I will say it again: modern aircraft have too many disparate systems, each of which has a billion different (but eminently knowable) states and settings and errors and things that can go wrong. It is too complicated for a human pilot and actually needs to be either GREATLY simplified, or completely under the control of a triple-redundant computer system, IMHO.

  • @Flattyflaf
    @Flattyflaf 1 year ago +2

    Amazing talk, thanks for it. For me, AI has the potential to significantly improve autopilot performance, and if we teach it all of the incidents and accidents that have happened in aviation it will be an amazing pilot aid. However, there will always be a pilot; we can't really trust the lives of so many people to a machine, but we can make sure that the human in charge of the aircraft has all the information and aid necessary to make a good decision in a bad situation. I believe AI will definitely change the way we design aircraft and further improve efficiency and aircraft design, but it will never be the only pilot.

  • @WT.....
    @WT..... 1 year ago +2

    For those going "oh no, improvements in AI will steal our jobs": unless your job was obsolete to begin with, or doesn't require technical experience to improvise, you have nothing to fear. Don't believe me? Look at the advent of the calculator: people actually thought it would make mathematicians obsolete, but a few decades on, that hasn't happened yet.

  • @jasonyu4664
    @jasonyu4664 1 year ago

    I honestly felt like I heard him say "here is how AI works" and then "and here's how humans work!"... but then describe basically the exact same process for each.

  • @aliancemd
    @aliancemd 1 year ago +2

    8:42 I don't agree that it doesn't have accountability. You can tell it that these areas of the ground are "people" and this airplane is "people"; if "people" suffer, then you "lost the game". It will then analyze all the available variations to save people. It has "accountability" in real time; you program it.
    Also, I don't agree that AI can't stop; giving ChatGPT as an example is just wrong/uninformed. You get the probabilities and you can set a threshold: if it is uncertain about something, you can pass it back to the pilot, just like a regular autopilot, using probabilities and gathered knowledge.
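The threshold idea described in this comment is easy to sketch. This is a toy illustration, not a real avionics design; the threshold value, action names, and probabilities are all made up:

```python
# Sketch of a confidence-threshold handoff: the system acts on its own output
# only when the model's confidence is high enough; otherwise it defers to the
# pilot and merely suggests. Threshold and actions are made-up examples.

HANDOFF_THRESHOLD = 0.95

def act_or_handoff(candidate_actions):
    """candidate_actions: list of (action_name, model_probability) pairs."""
    best_action, confidence = max(candidate_actions, key=lambda a: a[1])
    if confidence >= HANDOFF_THRESHOLD:
        return ("execute", best_action)
    return ("suggest_to_pilot", best_action)  # uncertain: hand control back

print(act_or_handoff([("maintain_heading", 0.99), ("climb", 0.01)]))
# -> ('execute', 'maintain_heading')
print(act_or_handoff([("divert", 0.55), ("continue", 0.45)]))
# -> ('suggest_to_pilot', 'divert')
```

Picking the threshold itself is the hard part in practice: too high and the system hands off constantly, too low and it acts on shaky predictions.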

    • @ZS-rm5vn
      @ZS-rm5vn 1 year ago

      I think you're confusing conscience with accountability. Think of it more in terms of legal liability: being able to explain why it took certain decisions, based on actual understanding of the inputs and systems at play. Until the developers of "AI" models can consistently and repeatably fulfill that condition, you will be hard pressed to see companies, manufacturers, or any other entity under legal liability allow implementation of "AI" as it is today in direct flight operations.

  • @susiejones3634
    @susiejones3634 1 year ago +3

    Really interesting video, thanks Petter and Marco.

  • @Amy_A.
    @Amy_A. 1 year ago

    Great video, the points were well made. I don't think you're wrong, but I do have a few counterpoints for consideration:
    1) When you bring up Sully, you're taking a single instance and touting that as the gold standard, while entirely ignoring all the pilots who have made terrible decisions that led to mass loss of life. I feel the question that should be asked isn't "Is AI as good as Sully?"; it's "Is AI safer than most pilots?" I feel the answer is yes: AI in its current form can be better than most pilots. AI doesn't get distracted, it doesn't get bored, and it doesn't get tired. The reality is, 80% of incidents are human error. Pilots and tower controllers can and will get distracted, bored, and tired.
    2) You mention AI can only merge two things it's already seen. Let's assume that's true, even though it isn't. You can run AI through millions of flight hours of simulations. Remove wings in some simulations, kill engines in others, reduce control in more. Realistically, how many errors can there be in a plane? You're in the air. There's birds, other planes, and clouds. You've got sensors, control surfaces, and thrust. And honestly, that's kind of it. You can train AI on every aircraft failure in history, and create millions of theoretical failures for it to learn from. Create a script to train AI on every possible system and control failure possible in a given plane. AI will improve from every single one. As amazing as pilots are, they just can't do that.
    3) AI already can fly planes. The military trained an AI to control a fighter jet in a simulation, and it beat humans, every single time. The humans fighting against it said that it felt like it could read their minds; it reacted to their movements before they even realized they were making them. Humans cannot have that level of reaction time or attention to detail.
    4) Everyone acts like AI will fully replace humans, and I agree, the current version will not do that. But it doesn't have to replace every job. It just has to reduce the number of people doing a given job. Maybe instead of a manager running a team of 10 humans, a manager runs a team of 6 humans and an AI. That's still 4 jobs it replaced. Saying AI isn't going to take jobs is like saying the car won't replace horses. Yeah, horses still exist. They're not extinct, and they're not even rare. But they're not our main transportation anymore.
    This is just some food for thought, and I appreciate a take on AI presented in such a clean format, even if I personally disagree with the conclusions it makes.
    Thanks for making the vid, and fly safe!
    While you're still allowed to 🤖😉🤖

  • @mmhuq3
    @mmhuq3 1 year ago +2

    Ok again a marvelous video. Thank you so much

  • @WayneM1961
    @WayneM1961 1 year ago

    Captain Petter, I am a retired (on health grounds) CEH (v9) Certified Ethical Hacker and licensed penetration tester. AI, in reality, doesn't exist; it's nothing more than a prediction program that's ultimately programmed by humans. There is no microchip that has the ability to "think and learn" for itself without human intervention. Chat? To a degree we already have that; take Alexa, for example: it can answer a massive number of questions and control many different functions, but behind it all is a human programmer. Captain Petter, we have yet to perfect a car that is completely autonomous. The airline industry is nowhere near that level. I assure you, your job, as not only a captain but a line trainer, will be secure for the foreseeable future, from what I can see. A very interesting subject nevertheless, Captain.

  • @otooleger
    @otooleger 1 year ago +4

    Great video. I learned a lot about AI in general. Well done for tackling this topic.

  • @PaulTopping1
    @PaulTopping1 1 year ago +1

    I work in AI and can confirm that Marco really knows what he's talking about. Still, I feel a few possibilities were missed. One of the good qualities of computers and AI is that they don't get bored or nervous. What about getting AI to perform the "pilot monitoring" function? We wouldn't want it to make decisions, but it certainly could watch everything the pilot and plane are doing and make comments. What about using it in a pilot's walkaround? Unlike a pilot who is perhaps inattentive after making hundreds of uneventful walkarounds, the AI would catch anything out of place.
    Finally, AI won't take many jobs any time soon but elevator operator is a really bad example. It is highly unlikely that many operators were retrained as elevator maintenance people. Even if they were, a couple of elevator maintenance people can service hundreds of elevators. Any time technology makes a lot of changes to the world, some people will lose their jobs but new jobs will be created. The emphasis should be on learning new things and not counting on keeping the same job for life.

    • @StupidusMaximusTheFirst
      @StupidusMaximusTheFirst 1 year ago

      Yes, that's true, they don't get bored or nervous; they don't care whether they or the passengers survive in an emergency, either. I think AI would be a useful helping tool for complicated stuff, including aviation. In a way it has been used in many fields in the past with good results, as well as in aviation, so I'd guess it will keep being used in the same manner as the tech improves. But no, it won't ever fly planes on its own. It might take over simplistic or procedural jobs; piloting a plane is neither of those. Procedures were put in place to make complicated stuff easier and safer to manage, until you find yourself outside those procedures or predefined parameters. But yeah, a clerk's job? Taken. Police? Taken. Military? Taken. Any job that requires no thinking, just simplistically following procedures, is going to be taken by AI, just like those brainless factory jobs were taken by industrial automation. Scientists are safe, pilots are safe, teachers are safe, and despite the advances in AI art creation, artists are safe as well. AI could help as a tool in all those fields, but it cannot take them over because it's garbage at all of them. And yes, I agree with you that it will help people learn new things, have more time to spare, etc. Although the power structures of the world will resist, since their power depends on others blindly obeying. So whether we'll get AI to help us improve our lives, or whether they first use it against us, remains to be seen.

  • @FredrIQ
    @FredrIQ 1 year ago +1

    I'm not sure if I can agree. From having seen your videos, most accidents can ultimately be traced down to a failure to follow a checklist or process correctly. I'm not saying I blame pilots for most accidents, but let's say, for example, that an accident was caused by an overlooked warning light or similar, because the light was too discreet and too easy to overlook. You wouldn't blame the pilot for the mistake (instead you'd make the warning harder to miss), but ultimately it's still the result of a failure to notice something that *the airplane was able to identify*. To me, this means that as long as an AI has complete knowledge of checklists and can't miss any details -- things that AI is actually good at, even way before the new language-model innovations -- it should be fully capable of piloting an airplane. Of course, there are accidents where everything was done 100% correctly and yet still ended badly, but those are the exception, not the rule.

  • @alexdi1367
    @alexdi1367 1 year ago +5

    Deeply disagree with Marco's AI sentiments. A key limitation of current automation is that it's strictly bound to the sensors feeding it. If you cover the pitot tubes on your plane, almost all of your automation disappears. AI isn't like that. It could be designed to monitor all data feeds in real time to answer the question "what's wrong with this picture" at any stage of travel, perhaps even interpolating an estimated result to compensate for a bad sensor. You could be sitting on the ground and the AI could alert you: hey, based on your fuel calculations and destination, there's not enough gas. And you didn't request clearance for this runway. At this stage, I probably wouldn't have it controlling the plane directly, but you could absolutely have it kick in to tell the pilots their inputs are creating a deviant state and to do something else. The ultimate goal of all this is to have the AI interpret and summarize the deeply complex systems of modern aircraft so the pilot doesn't have to think like a computer.

    • @ZS-rm5vn
      @ZS-rm5vn 1 year ago +1

      Did you watch the whole video? He literally discusses your sentiments from 14:30 onwards...
      Case in point, your specific examples: we already have computers (or procedural design) warning us of those things, and they do a good job of it, more cheaply and accurately than present-day "AI".

  • @dimitarkrastev6085
    @dimitarkrastev6085 1 year ago

    I am a software architect and a big fan of the channel. I am coming from a family of aviation and I build my own RC airplane and even program partial autopilots as a hobby. I wanted to share my 2 cents on the topic.
    I also don't think AI could fly an airplane today.
    Aviation is a very methodology-driven industry, probably the most methodological one. There seems to be a procedure for everything. But even aviation is nowhere near methodological enough to be a good candidate for full automation yet. If that were the case, every possible landing on a given runway would have a finite number of approach and landing maneuvers, which you could easily program in their entirety into the autopilot or AI. The reality is that even though the general procedure is the same, it has the possibility of countless variations. Maybe the traffic is too high and air traffic control gives you a zig-zag vector set to ensure separation; maybe there are construction works at the airport, or a malfunction, or bad weather, or, or, or...
    You see, as fascinating as software development is, it is not magic. It is nothing more than an automation of a set of well predefined processes.
    Once you have a well-defined process that a human can follow to the letter, then you have a good candidate for automation. This is why you see more and more individual systems being automated in an airplane, but not the entire "thing" at once. A small component that we have flown with hundreds of thousands of times, where we really know exactly what it should do in pretty much every scenario but it has to be operated by the pilot? OK, let's automate that. Every situation which does not fall within the predefined process, or is rare enough to not really have an exact solution, is what we call a "corner case". This is what causes the automation to disengage, with pilots needing to take action to rectify it.
    When you look at a complex system in computer science, in order to come up with the total number of different scenarios you need to account for, you list all the systems that can affect the outcome, take the number of states each of them can be in, and multiply them together.
    Here is an example. Let's say the aircraft has 100 systems with 10 states each. Doesn't sound that scary, right? Well, this means the system can be in 10 to the power of 100 combinations of the individual states. Now it's a whole other number, right? Taking into account that the aircraft has much more going on that is virtually invisible to us, and factoring in everything external to the aircraft itself, we need to multiply those in as well.
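The multiplication in this example is easy to verify; the 100-systems/10-states figure is the comment's own hypothetical:

```python
# State-space size for independent subsystems: the per-system state counts
# multiply, so the total explodes exponentially with the number of systems.

def total_states(num_systems, states_per_system):
    return states_per_system ** num_systems

print(total_states(100, 10))  # the comment's hypothetical: 10**100
print(total_states(20, 4))    # even a toy model (4**20) is over a trillion
```

This is exactly why exhaustively enumerating every combination of system states, instead of handling them via procedures and abstractions, is hopeless.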
    You can quickly see there are just too many things to account for all at once. With the technology we currently have, Large Language Models (LLMs) are probably the closest thing that could fly an aircraft. If you had a large enough dataset (basically a full recording of all flights from leaving the apron to parking at an apron), with all relevant flight data recorded and every external parameter (weather, traffic, etc.) also recorded, you could theoretically use the "fake it till you make it" approach. We as humans have probably performed enough flights to have encountered a big majority of the cases we would realistically get ourselves into. But that knowledge is not preserved. I am referring to one of your old videos on why we don't upload flight data to the cloud. We would essentially need that in order to train such a model with the given technology.
    Here is something you might find interesting. I believe that the Sully example was a bad one. Actually, I would like to make the argument that an AI would have done a better job (statistically speaking), but not for the reasons you might think. No, I don't want to undermine what Sully did. No AI would have managed to pull off what he did, ditching the aircraft and saving everyone on board, no questions asked. There was an interesting point made during the investigation, though: it noted that if they had turned around immediately, they would probably have made it safely to the runway. It is impossible to expect a human to shake off the initial surprise of the event, evaluate the condition of the airplane, and calculate whether they can make it back in a timely manner. A computer, however, would probably have calculated that an immediate turn-around was the safest bet.
    And actually, these are exactly the types of systems that are becoming more and more common, especially with Airbus. Yes, you as the pilot might ask the aircraft to do something, but aircraft are becoming smart enough to know that, in certain situations, what you ask of them is probably not what really needs to happen.
    In conclusion, I would say that I do not expect to see pilots leave the cockpit any time soon. However, I think we are relatively close to the technological possibility of seeing an aircraft flown by just one pilot. I am no expert on psychology, so I don't know what the human-factor implications would be, but at least from a technological point of view, I think that is not so far in the future.

  • @marinanjer4293
    @marinanjer4293 1 year ago +1

    I haven't watched this video at the time of writing, but I'm quite confident that AI will take over much of transportation. I'm an electrical and software engineer, and there is a systemic push towards automation, mostly motivated by business interests who see it as a way of thinning down their payroll obligations. It will not happen right away, or even in the next 10 years, but in 50 years I don't think there will be human pilots. Also, GPTs are not AI; they are just machine learners that output what they have learned.

  • @Ivanova2718
    @Ivanova2718 1 year ago +1

    There's also an element of trust: the majority of people wouldn't trust an AI flying a plane they're on nearly as much as a human pilot, no matter the depth of the autopilot assistance.

    • @tonysu8860
      @tonysu8860 1 year ago

      That'll be true until AI earns people's trust. But people will also need to learn that each individual AI, even one arising from the same source code, will be as individual as humans are. The only time one AI would be the same as another is when the algorithm is copied; otherwise there is no guarantee, and even a likelihood against it, that any trained AI will be like any other, any more than you as an individual are like anyone else who has ever existed.

    • @Ivanova2718
      @Ivanova2718 1 year ago

      @@tonysu8860 I don't see how that's a good thing. If we're going to entrust hundreds or thousands of lives to an AI, I'd kind of hope it would be well documented and reliable. I, for one, will never be happy in an AI-piloted plane. My main argument is that there is a program size beyond which literally nobody knows everything the program does and how, with bugs, exploits, and issues all over it. And that's not even counting self-learning AIs.

  • @stulop
    @stulop 1 year ago

    Such a clear view of what AI actually is, after all the nonsense in the newspapers. Thanks.

  • @franknewell7017
    @franknewell7017 1 year ago

    I was a controls expert. I was able to automate multiple processes and reduce the humans on a shift from 6 or 7 down to 2 or 3. We still needed to keep humans on watch to protect the city when a sensor went down. I could provide alarms for damaged sensors, but I couldn't provide controls for a damaged or misreading sensor, because it may fail in thousands of ways.

  • @ro2nie
    @ro2nie 1 year ago +3

    I would feed ChatGPT the NOTAMs and ask it to prioritise them and tell me which ones are the most important.

  • @RandomTorok
    @RandomTorok 1 year ago +2

    I think AI could have a role in training situations. Imagine sitting down at your computer and having a discussion with Brilliant AI on any topic. Airlines could have AI instructors that help staff learn different things. I know I learn better by hearing things explained, but it's not always convenient to go to a scheduled class. If a class had an AI instructor, I could take that course at any time, anywhere.

  • @jeromethiel4323
    @jeromethiel4323 1 year ago +1

    Given how automated current airliners are, autonomous airliners are going to happen eventually. AI doesn't even enter the conversation, as it isn't necessary for autonomous aircraft.