Will AI come for the Pilots?! - Expert interview

  • Published Jan 16, 2025

Comments • 768

  • @MentourNow
    @MentourNow  ปีที่แล้ว +26

    Try out my new AI app here! 👉🏻 app.mentourpilot.com
    You can also contact Marco directly if you have a serious idea you would like his help developing: cvt.ai/mentour

    • @jsonwilliams347
      @jsonwilliams347 ปีที่แล้ว +2

      😊😊😊

    • @forgotten_world
      @forgotten_world ปีที่แล้ว +1

      About AI, this is not an "if" question, but just "when", and that day is coming fast. I would say no more than twenty years. That is, for commercial aviation, because autonomous UAMs will cover the sky way before that. Those protocols have already been in development for years.

    • @damionlee7658
      @damionlee7658 ปีที่แล้ว

      ​@@forgotten_world I think there is still an "If" element, which centres around whether AI is the best solution towards automated piloting.
      Of course we are headed towards more automation in commercial aviation, but AI isn't necessarily the best solution for pilot replacement, in an industry with the framework that aviation has. We may be better off expanding automation technologies that work on defined processes, in the same way autopilot and autoland do.
      There is no reason why we cannot have an aircraft automated from the ramp of its departure airport, to the ramp of the destination airport, and never use an AI system.
      There is a lot of focus on AI at the moment, and it is a fantastic field that will doubtlessly become more prominent in society. But we need to use the right tools for each job. And automation without AI is probably far more suited to flying aircraft, at the very least until we can get AI to spontaneously consider scenarios, but even then AI probably isn't going to be the best solution.
      Where AI is perhaps going to be better suited is in the role of traffic control. I'd wager you'll see AI-driven ATC long before you see an AI pilot (in commercial use, rather than in research, development, and testing programmes), if we ever see AI used for commercial piloting.

    • @seraphina985
      @seraphina985 ปีที่แล้ว +2

      The idea you suggested, for the plane to offer prompts when the pilot is doing something strange and pushing the plane towards an unsafe envelope, doesn't require AI. As a computer programmer, I'd say this would be an expansion of the scope of the existing hard-coded envelope protection system with a second layer of soft mitigations (soft in the sense that they are recommendations, versus the hard mitigations where the aircraft physically intervenes). For example, say the pilot is experiencing a somatogravic illusion and forcing the nose up, failing to notice the plane is actually slowing quickly. Perhaps instead of waiting until the hard envelope protection kicks in, a red CLIMB message shows up on the ECAM, perhaps accompanied by an aural "REDUCE CLIMB" alert. I feel like that wording also has the added bonus that it calls out the immediate problem that the pilots are likely missing here. As a pilot, even if I doubted it, it would call my attention to the ADI, altimeter, VSI, and ASI, all of which would confirm that yes, you actually are dangerously nose high, climbing, and getting dangerously slow. After all, if I didn't believe I was climbing, the first three would disabuse me of that notion, and the latter would indicate this is becoming a serious issue. So that little prompt could help break the pilot out of the confusion by giving an instruction that also calls attention to the right instruments that, for whatever reason, they are somehow just not seeing at this critical moment.
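
A minimal sketch, in Python, of the kind of soft advisory layer described in the comment above; the flight-state fields, thresholds, and the alert text are illustrative assumptions, not values from any real aircraft.

```python
from dataclasses import dataclass

@dataclass
class FlightState:
    pitch_deg: float              # nose-up pitch attitude
    airspeed_kts: float           # indicated airspeed
    airspeed_trend_kts_s: float   # rate of change of airspeed (kt per second)

# Illustrative thresholds only; real values would come from the aircraft's
# certified flight envelope and alerting philosophy.
PITCH_ADVISORY_DEG = 25.0
SPEED_DECAY_KTS_S = -3.0
MIN_SAFE_SPEED_KTS = 180.0

def soft_envelope_advisory(state: FlightState) -> str | None:
    """Return a soft advisory message before hard envelope protection engages."""
    nose_high = state.pitch_deg > PITCH_ADVISORY_DEG
    decaying = state.airspeed_trend_kts_s < SPEED_DECAY_KTS_S
    slow = state.airspeed_kts < MIN_SAFE_SPEED_KTS
    if nose_high and (decaying or slow):
        # Soft mitigation: a recommendation to the crew, not a physical intervention.
        return "REDUCE CLIMB"
    return None

if __name__ == "__main__":
    print(soft_envelope_advisory(FlightState(28.0, 195.0, -4.5)))  # -> REDUCE CLIMB
```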

    • @netjamo
      @netjamo ปีที่แล้ว

      Link flightradar24 in the app 👍

  • @michaelrichter9427
    @michaelrichter9427 ปีที่แล้ว +30

    I like this guy. Saying "this could be done in AI, but there may be a better conventional way" is actual engineering instead of cultism.

  • @MrBlablubb33
    @MrBlablubb33 ปีที่แล้ว +215

    I did not expect the most level-headed take on AI to come from an aviation channel. Truly well done!

    • @oadka
      @oadka ปีที่แล้ว +6

      IKR!!!! Most posts on LinkedIn have sounded far more doom-laden, as if AI is already so good it can take over everything, or as if AI would revolutionize work the way MS Office did. But I think there might be a difference in which sector they are discussing, as the managerial people on LinkedIn tend to be more vocal. Aiden here is presenting AI for engineering/pilots, which is a different use case than generating reports or making presentations.

    • @bhaskarmukherjee4768
      @bhaskarmukherjee4768 ปีที่แล้ว +5

      Aviation has been dealing with automation since long before mainstream tech. Heck, neural networks in aviation controls research were a thing by the '90s.

    • @yamminemarco
      @yamminemarco ปีที่แล้ว +9

      Well, you have to be level headed to level a plane. Sorry, didn't manage to slip a dad joke in the interview so I figured I'd do it in the comments. 😅

    • @MrBlablubb33
      @MrBlablubb33 ปีที่แล้ว +4

      Dad jokes are always appreciated!

    • @tiffanynicole6050
      @tiffanynicole6050 ปีที่แล้ว

      I did 😊

  • @Maggie-tr2kd
    @Maggie-tr2kd ปีที่แล้ว +157

    As far as AI informing a nervous flyer about turbulence in advance goes, all a nervous flyer really needs is false reassurance from someone they trust. For my first flight on a commercial airline as a young teenager, my mother told me that it would be lots of fun - sort of like riding a roller coaster at times, with ups and downs. The flight turned out to be what the other, more experienced passengers later told me was a nightmare, with lots and lots of turbulence. There I was, smiling and enjoying myself the entire time, because it lined up with my expectations.

    • @MentourNow
      @MentourNow  ปีที่แล้ว +43

      That’s a very conscious mum! Excellent example

    • @ajg617
      @ajg617 ปีที่แล้ว +5

      One of the reasons I always flew United was to listen to channel 9. Ride reports were an extremely valuable insight into what to expect, and the professionalism of the crews and ATC was incredibly reassuring.

    • @gnarthdarkanen7464
      @gnarthdarkanen7464 ปีที่แล้ว +7

      Through the 90's I did quite a bit of flying on commercial airlines... AND from time to time we'd hit "pretty good" turbulence... I knew there were likely timid, nervous, or outright scared flyers in the cabin, so every time there was a "proper" bit of roller-coaster or similar effect, I always made a point to give a good "WHEEE!" and laugh... in two or three of them, there'd usually be a few others to join me, and more often than not, even some of the crew...
      I doubt anyone knew I was even looking around, but I saw more than a few shoulders start to slouch, faces slack from distentions and clenched jaws, and even a couple parents pointing at us "lunatics" and speaking to youngsters... I like to hope it was a boost in morale to a few of the more anxious among us...
      Obviously, I'd temper that with the sensible judgment not to be shouting when the lights are out and everyone's even trying to get some shut-eye or rest... Even (especially?) the anxiety-prone don't need a maniac (or several) shouting anything when the plane starts buffeting and bouncing... haha... ;o)

    • @The_ZeroLine
      @The_ZeroLine ปีที่แล้ว +4

      As a child, I absolutely loved turbulence. Not so much now despite being able to pilot fixed wing aircraft. Go figure.

    • @zappulla4092
      @zappulla4092 ปีที่แล้ว +1

      I find it eerie when the flight is very smooth. I actually enjoy the subtle turbulence along with some more bumpy turbulence at times.
      I don’t know if the false reassurance is a good idea, though. If the person ends up panicking, it could create trust issues.

  • @alejandra46790
    @alejandra46790 ปีที่แล้ว +54

    I think having AI as a second set of eyes would be useful for pilots, for stuff like checking whether pitot tubes are covered, or alerting pilots if sensor readings don't match.

    • @MentourNow
      @MentourNow  ปีที่แล้ว +17

      Yes! Exactly. It will be useful as a tool

    • @alejandra46790
      @alejandra46790 ปีที่แล้ว +5

      I think for the checklists they could use a non-AI search engine that can search through them, or even a voice-assistant-type search like Alexa/Siri.

    • @muenstercheese
      @muenstercheese ปีที่แล้ว +4

      ...but you don't need AI for that; you can do it with deterministic procedural algorithms. The hype for AI is far too high, IMO.

    • @elkeospert9188
      @elkeospert9188 ปีที่แล้ว

      There is really no AI necessary to find out whether values delivered by redundant sensors are inconsistent.
      But informing the pilots that the information is inconsistent leaves them with the problem of which sensor they should trust and which not - which causes a lot of stress and confusion.
      I think something I call "sensor fusion" would help a lot in such situations.
      For example, one altimeter saying that you are flying at 20,000 feet while the other says you are only at 1,500 feet is a bad thing when flying over the Pacific at night without any visual references.
      But including data from other sensors (like outside temperature or GPS) would provide a lot of hints as to which of the two altimeters is the one delivering wrong values.
      If the outside temperature is -10°C and GPS tells you that you are flying at 2,300 feet, then it is nearly 100% certain that the altimeter showing 20,000 feet is wrong and you should trust the other one.
      Or if one pitot sensor measures a dramatic change in airspeed (for example a drop from 800 km/h to less than 400 km/h in under a second) while the other one delivers "smoother" values, then the probability that the first pitot was blocked by ice or something else is high, and in doubt it is better to trust the other pitot sensor.
      Clever people could take all the time they need to develop algorithms that use data from the different sensors to calculate how trustworthy each individual sensor is, and inform the pilots which sensors they should trust if they don't have any better idea.
      For humans it is very difficult to perform such reasoning in a stressful situation with limited time to make a decision, but software could do such (and even much more complex) analysis in a fraction of a second - without any AI needed...
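
A minimal sketch of this "sensor fusion" idea, with hypothetical readings and a crude weighting: each altimeter is scored by how well it agrees with GPS altitude and with the outside temperature implied by the standard atmosphere, and no AI is involved.

```python
def isa_temp_c(altitude_ft: float) -> float:
    """Approximate ISA outside air temperature at a given altitude (troposphere)."""
    return 15.0 - 1.98 * (altitude_ft / 1000.0)

def trust_scores(altimeters_ft, gps_alt_ft, oat_c):
    """Score each altimeter by agreement with GPS altitude and outside temperature."""
    scores = []
    for reading in altimeters_ft:
        gps_error_ft = abs(reading - gps_alt_ft)
        # ~1.98 degC per 1000 ft, i.e. roughly 500 ft of altitude per degree of mismatch
        temp_error_ft = abs(oat_c - isa_temp_c(reading)) * 500.0
        scores.append(1.0 / (1.0 + gps_error_ft + temp_error_ft))
    return scores

if __name__ == "__main__":
    readings = [20_000.0, 1_500.0]   # two altimeters in gross disagreement
    scores = trust_scores(readings, gps_alt_ft=2_300.0, oat_c=-10.0)
    best = max(range(len(readings)), key=lambda i: scores[i])
    print(f"Trust altimeter #{best + 1}, reading {readings[best]:.0f} ft")
```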

    • @philip6579
      @philip6579 ปีที่แล้ว +1

      @@muenstercheese Most of the things AI does are possible via other means. The whole point is that Generalized AI (or something close to it, like ChatGPT) makes those things orders of magnitude easier.

  • @MrTmm97
    @MrTmm97 ปีที่แล้ว +28

    I love this channel! So excited when I see a new video in my feed.
    I’ve recently been on my 3rd binge rewatching all of Mentour Pilot’s videos the past few weeks. Thanks so much for the fascinating entertainment and information!

    • @MentourNow
      @MentourNow  ปีที่แล้ว +5

      Thank YOU so much for being an awesome fan and supporting what I do!

  • @timothyunedo5642
    @timothyunedo5642 ปีที่แล้ว +50

    I am very fascinated by the growth of AI, but this video is my favorite explanation of AI so far. Bravo Petter and Marco!

    • @MentourNow
      @MentourNow  ปีที่แล้ว +9

      Thank you!

    • @MrCaiobrz
      @MrCaiobrz ปีที่แล้ว +7

      Except it doesn't explain AI; it explains what a semantic database and a Generative Pre-trained Transformer (the GPT in ChatGPT) are. What we currently have is still not even AI, just an interface to a semantic database.

    • @vladk8637
      @vladk8637 ปีที่แล้ว +7

      Indeed, I've been working in AI for years too, and I really appreciated Marco's contribution. There is much buzz around AI, and it's pleasant to see people like Marco clearly dotting the i's.

    • @-p2349
      @-p2349 ปีที่แล้ว

      @@MrCaiobrz Yes, what we currently have is narrow AI; what you're talking about is AGI (artificial general intelligence).

  • @thetowndrunk988
    @thetowndrunk988 ปีที่แล้ว +94

    Even if accidents were “drastically” reduced (which would be incredibly hard to do, given that aviation is extremely safety conscious now), all it’ll take is one crash, and people will be screaming for pilots to be back in the seats. The MAX crashes taught us that (yes, I know there were pilots, but they couldn’t override the automation).

    • @elkeospert9188
      @elkeospert9188 ปีที่แล้ว +7

      That might happen - or not.
      Assume that 25% of airlines replace their pilots with some software, and after 2 years the statistics show that they have 90% fewer accidents than airlines flying with human pilots;
      then even a single accident may not change passengers' minds and make them scream for pilots, as long as the "total picture" clearly shows that it is safer to rely on a "software pilot".
      The MAX crashes were (in my opinion) caused by human errors - not errors by the human pilots, but errors by the people at Boeing who built software relying on a single angle-of-attack sensor even though they were aware that the sensor could fail.
      The minimum requirement would have been to compare the input of multiple AOA sensors and, if they are not consistent, to turn off MCAS - and of course inform the pilots how they should react in such a case.

    • @endefael
      @endefael ปีที่แล้ว +4

      They could override it. They just didn't.

    • @elkeospert9188
      @elkeospert9188 ปีที่แล้ว

      @@endefael There were two crashes related to MCAS.
      In the first one the cockpit crew was not informed about MCAS and therefore could not override it.
      In the second one the crew did know about MCAS and how to "override" it by manually turning the trim wheels - but this was physically hard for the pilots to do and also took a lot of time, and in the second accident it took too much time.

    • @endefael
      @endefael ปีที่แล้ว +3

      @@elkeospert9188 I am afraid you do not completely understand what MCAS is capable of, what that failure looks like, what all the contributing factors are, and the huge role training played in both accidents. Just to give you a hint, in neither of the two crashes did the crew perform all 4 or 5 basic memory items they were supposed to. I highly encourage you to study them more deeply, and you will see that the automation did not have the capability of overriding them by itself. It just took them too long to take the appropriate actions in due time - if they were taken at all. I'm not judging them as individuals, but rather the fact that they were exposed to it without being fully prepared. I'm also not saying that the aircraft couldn't have been improved; any human project, including the 737, can be. But it was never as simple as MCAS's existence or failure: no accident happens because of an isolated fact.

    • @orestes1984
      @orestes1984 ปีที่แล้ว +1

      You could absolutely use narrow AI to fly a plane if you input enough instructions. The problem, as with driverless cars, is that it would make an inexplicable decision that would kill everyone. A plane can already take off and land on autopilot as it is. You just don't want it making decisions in dangerous situations, which is why they also eventually banned driverless cars.

  • @ProgressiveEconomicsSupporter
    @ProgressiveEconomicsSupporter ปีที่แล้ว +83

    Aidan is so cool, really gives polite, coherent and useful answers! 😎👍🏻

    • @MentourNow
      @MentourNow  ปีที่แล้ว +11

      Thank you! Glad you like him

    • @rayfreedell4411
      @rayfreedell4411 ปีที่แล้ว +1

      I think this is completely wrong. If you approach the issue from the standpoint of what information is available to support AI, the clear answer is yes, even in Sully's situation.
      The autopilot already flies the airplane. Altitude and heading can be changed by turning knobs. The computer can turn those just as trim is changed. Add Jeppesen charts, digital radar input, and ground controllers replaced by AI to 'talk' to the AI pilot, and all the information for safe flight is there.
      For the Hudson river, again, consider the information available. Engines failing, altitude, heading, and distance back to LaGuardia are all known. AI would know it had to land, but where? A new problem for AI, but again, consider the information available. To me, searching for clear fields, beaches, etc. is a common enough problem to be part of AI from the start. Google Maps has that information now.
      My background is computer tech. Final thought: I served in the Air Force during the war in Vietnam. The Hughes Mark 1 fire control system in the F-106 could almost fly a complete shootdown mission, and that was 70 years ago.
      AI replacing pilots and ground controllers is a lot closer than you think, and I'm not happy about that.

    • @stevedavenport1202
      @stevedavenport1202 8 หลายเดือนก่อน +1

      Yes, excellent choice of guest.

  • @kevinbrennan8794
    @kevinbrennan8794 ปีที่แล้ว +17

    Excellent interview, thank you. I would be interested in more videos like this one.

    • @MentourNow
      @MentourNow  ปีที่แล้ว +2

      Thank you!
      Unfortunately you seem to be quite alone thinking that 😔 The video is tanking

    • @makeupbyseli31
      @makeupbyseli31 ปีที่แล้ว +3

      Nooo, we want more of that!!! Greetings from Germany 👏🏽

    • @jillcrowe2626
      @jillcrowe2626 ปีที่แล้ว +1

      ​@@MentourNow I'm fascinated by this video! I'm going to share it on Facebook and Twitter.

  • @PushbuttonFYM
    @PushbuttonFYM ปีที่แล้ว +34

    As someone who works with AI on a daily basis, this basically hits the nail on the head. AI did not replace anyone in my team, instead it took over the mundane repetitive work which is the longest part of a project, freeing up my team to focus on the final finishing portions of the deliverable. AI does 80-85% of the work in less than half the time, making my team more efficient and allowing us to take on more projects with the same staff. We refer to it as AI/Human hybrid where the AI is more of a partner to a human rather than a replacement.

    • @MentourNow
      @MentourNow  ปีที่แล้ว +3

      Exactly our point! Thank you, feel free to give some feedback on our app!

    • @avisparc
      @avisparc ปีที่แล้ว +6

      "allowing us to take on more projects with the same staff" So there are fewer projects for other people to work on. If you do more work with the same people then couldn't you do the same work (not more) with less people?

    • @oadka
      @oadka ปีที่แล้ว +4

      @@avisparc I was going to say the same thing. By reducing the number of man-hours needed for a given project, it has indirectly decreased employment. However, this will mean significantly reduced costs, which might allow more demand. This will make the task of deciphering the changes in employment due to AI a bit more complex.

    • @avisparc
      @avisparc ปีที่แล้ว +1

      @@oadka that's a good point, it hadn't occurred to me that the Jevons paradox could operate in this situation. (en.wikipedia.org/wiki/Jevons_paradox)

  • @lek1223
    @lek1223 ปีที่แล้ว +28

    As for AI taking over control of the plane, there is ONE situation I can think of. If you remember the disaster where the plane flew in a straight line for a while before crashing near Greece with both pilots out: introduce something like the check used in trains, where the pilot has to confirm "yes, I am still awake and paying attention". If a pilot misses, for instance, two in a row, it could make sense for the "AI" pilot to bring the plane to a low altitude and broadcast an automated mayday, and then, if there is still no pilot response, it could make sense to try and land the plane.
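
A toy sketch of the escalation logic described above, with assumed miss counts and modes; a real system would have to hook into certified autoflight, datalink, and autoland functions.

```python
import enum

class Mode(enum.Enum):
    NORMAL = enum.auto()
    DESCEND_AND_MAYDAY = enum.auto()
    AUTOLAND = enum.auto()

class AlertnessMonitor:
    """Escalate after consecutive missed 'I am still awake' confirmations."""

    def __init__(self, misses_before_descent: int = 2, misses_before_autoland: int = 3):
        self.misses = 0
        self.misses_before_descent = misses_before_descent
        self.misses_before_autoland = misses_before_autoland
        self.mode = Mode.NORMAL

    def prompt_result(self, pilot_confirmed: bool) -> Mode:
        if pilot_confirmed:
            self.misses = 0
            self.mode = Mode.NORMAL
        else:
            self.misses += 1
            if self.misses >= self.misses_before_autoland:
                self.mode = Mode.AUTOLAND            # last resort: land the plane
            elif self.misses >= self.misses_before_descent:
                self.mode = Mode.DESCEND_AND_MAYDAY  # descend and broadcast an automated mayday
        return self.mode

if __name__ == "__main__":
    monitor = AlertnessMonitor()
    for confirmed in [True, False, False, False]:
        print(monitor.prompt_result(confirmed))
    # -> NORMAL, NORMAL, DESCEND_AND_MAYDAY, AUTOLAND
```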

    • @kopazwashere
      @kopazwashere ปีที่แล้ว +12

      so basically as an absolute last-ditch measure, if there's nobody qualified on the plane to land it...

    • @lek1223
      @lek1223 ปีที่แล้ว +5

      @@kopazwashere Yup. The 'alternative' is of course a remote pilot à la drones, which arguably is a better solution, but this would be a backup for the backup.

    • @flyfloat
      @flyfloat ปีที่แล้ว +9

      You do not need "AI" specifically for a last-ditch solution like that. Garmin already offers a solution called Autoland for GA aircraft where, in case of pilot incapacitation, a passenger can press the emergency button and the plane will land by itself at the nearest suitable airport while making the proper mayday calls to ATC. It even displays instructions to the passengers if they need to talk to ATC.

    • @oadka
      @oadka ปีที่แล้ว +2

      ah so something like a dead man's switch?

    • @roberre164
      @roberre164 ปีที่แล้ว +4

      @@flyfloat That's fine if there is someone conscious to press the button. With Helios 522 and the Payne Stewart Learjet crash no one was conscious to do that. They are not the only examples either. As a last resort AI could have been very helpful to initiate autoland in these 100% fatal accidents.

  • @johnhawks5035
    @johnhawks5035 ปีที่แล้ว +3

    Brilliant. You addressed a question (very comprehensively) that many have been pondering. Thanks.

  • @steve3291
    @steve3291 ปีที่แล้ว +40

    Thank you, thank you, thank you Mentour for bringing someone on like Marco who gets it.
    I studied AI at university in the '80s and I am very much a Turing purist. I have worked with systems that I would categorise as 'advanced analytics' and 'machine learning', and have had rows with people who said that Turing's opinions are out of date after I accused them of re-badging their tired old software as AI to charge more money (which they are doing).
    Back on topic, who is most scared of the current form of AI? The mainstream media are. Why? Because AI will start to present unbiased news feeds and put them out of work. The vast majority of the negative press is being generated by the press.

    • @MentourNow
      @MentourNow  ปีที่แล้ว +5

      Thank you for your thoughtful comment and I’m glad you liked the video.

    • @steve3291
      @steve3291 ปีที่แล้ว +5

      @@MentourNow It was, as you say, fantastic 🤣

    • @jantjarks7946
      @jantjarks7946 ปีที่แล้ว

      Don't worry, in the media you can make AI politically correct too. It's a matter of which rules it is set to follow.
      It's not going to get better, but even worse and far more refined.
      😉

  • @ashwin1698
    @ashwin1698 ปีที่แล้ว +7

    Getting cross-industry insights associated with aviation is fantastic! It would be valuable to learn from and watch more such discussions/exchanges. Amazing breakthrough, Petter! Congratulations.
    Warm greetings from Germany. Vielen Dank!

  • @Trebor_I
    @Trebor_I ปีที่แล้ว +2

    As a seasoned software engineer, I can say that *some* aspects of flying will be supplemented by AI. The autopilot, auto landing, technical malfunction checklist items, memory items - all make perfect sense. Beyond that, I do agree with your assessment that a fully automated cockpit is a generation away.

  • @Silverhks
    @Silverhks ปีที่แล้ว +5

    This was a wonderful primer on AI.
    Aviation specific but useful for anyone questioning what ML means for the future.
    Thank you Petter and Marco

  • @jim.franklin
    @jim.franklin ปีที่แล้ว +1

    That was the best and most honest AI discussion I have seen to date. I get so fed up with people banging on about how AI will damage the job market - they often get upset when I point out AI cannot sweep streets or fill shelves in Tesco, so they will always have a job - but seriously, AI is a database algorithm and nothing more. Marco explains that so well, and from an inside perspective; people need to take note. I will be sharing this video on Facebook and LinkedIn because this discussion needs to be heard by millions, so they understand what AI can do, but more importantly, what AI cannot do.
    Thanks for a brilliant interview.

  • @limbeboy7
    @limbeboy7 ปีที่แล้ว +11

    I agree. It can help with workload: checklists, monitoring ground and air traffic, emergencies, etc.

    • @MentourNow
      @MentourNow  ปีที่แล้ว +2

      Yep, that’s where we will see it first.

    • @anilbagalkot6970
      @anilbagalkot6970 ปีที่แล้ว +1

      Exactly!

    • @amirshahab3400
      @amirshahab3400 ปีที่แล้ว +1

      It can help us drive our cars, buses, and trucks more safely and efficiently.

    • @titan133760
      @titan133760 ปีที่แล้ว

      Sort of like how autopilot works

  • @Bob-nc5hz
    @Bob-nc5hz ปีที่แล้ว +3

    10:00 The discussion about the feedback loop and the ability to identify weak points and sources of error was one of the eye-openers of the essay "They Write the Right Stuff", about the US Shuttle software group (pretty much the only component of the Shuttle program which got praise after the Challenger accident). The shuttle group's engineering was set up such that *the process* had the responsibility for errors, in order to best make use of human creativity and ingenuity without having to suffer from its foibles. Any bug which made it through the process was considered an issue with the process, and thus something to evaluate and fix in that context.

  • @owais4621
    @owais4621 ปีที่แล้ว +4

    Absolutely fantastic video

    • @MentourNow
      @MentourNow  ปีที่แล้ว

      Glad you liked it.

  • @fazq02yfd
    @fazq02yfd ปีที่แล้ว

    Wow, what a knowledgeable and well articulated guest. He really understands the subject matter.
    Thank you both.

  • @adamstefanski8744
    @adamstefanski8744 ปีที่แล้ว +2

    Very cool and insightful material guys 😉 thank you

  • @damianknap3444
    @damianknap3444 ปีที่แล้ว +2

    One of the best uploads ever Peter 👏🏻

  • @susiejones3634
    @susiejones3634 ปีที่แล้ว +3

    Really interesting video, thanks Petter and Marco.

  • @LemuelTaylor
    @LemuelTaylor ปีที่แล้ว +4

    This channel is on another level. This is kind of content we need (and want as well 😄) . Thank you so much Petter and Marco. This was truly informative.

  • @ninehundreddollarluxuryyac5958
    @ninehundreddollarluxuryyac5958 ปีที่แล้ว +4

    Computers running normal code (not AI) will be more useful because of the predictability of their output. AI can be useful as a user interface (voice recognition) and for general awareness. One example would be an AI that listens to all the ground control transmissions so it is aware that you are supposed to turn onto a certain taxiway and reminds you if you are starting to turn at the wrong one, or if you are about to cross an active runway that another plane has been cleared to take off from or land on by a different controller. The Miracle on the Hudson is an excellent example of why I want a human pilot flying any plane I am in.
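
A minimal sketch of the taxi-clearance cross-check imagined above, assuming the cleared route and runway activity are already available as structured data; in practice, extracting that reliably from voice transmissions would be the hard part.

```python
def taxi_advisory(cleared_route: list[str], next_segment: str,
                  active_runways: set[str]) -> str | None:
    """Warn if the aircraft is leaving its taxi clearance or entering an active runway."""
    if next_segment not in cleared_route:
        return f"CAUTION: {next_segment} is not on the cleared taxi route {cleared_route}"
    if next_segment in active_runways:
        return f"HOLD SHORT: {next_segment} is in use by another aircraft"
    return None

if __name__ == "__main__":
    print(taxi_advisory(["A", "B", "27L"], "C", {"27L"}))    # off-route warning
    print(taxi_advisory(["A", "B", "27L"], "27L", {"27L"}))  # active-runway warning
```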

    • @Jimorian
      @Jimorian ปีที่แล้ว

      Another aspect of programmed automation is that you can tell the pilots what the parameters are for "decisions" made by the automation so that they understand exactly when the automation is outside of its parameters and thus have more information about whether to trust it or not in a particular situation.

  • @madconqueror
    @madconqueror ปีที่แล้ว +2

    Very cool! I like this format very much: Petter discussing not-explicitly-aviation-related topics with other passionate people. Very nice video.

  • @andrzejostrowski5579
    @andrzejostrowski5579 ปีที่แล้ว +3

    Excellent video, thanks for bringing a real expert on the subject. As a mathematician working as a software engineer, I am so happy to see a voice of reason talking about what we call AI. Don't underestimate automation though; I am mind-blown by Garmin Autoland, and I think we might see similar automation systems in commercial aviation at some point, so I wouldn't rule out single-pilot operations at some point in the future.

  • @starbock
    @starbock ปีที่แล้ว +3

    Excellent interview with Marco. Very informative, objective, level-headed, practical. Always great content and awesome job! 🙏

  • @yamminemarco
    @yamminemarco ปีที่แล้ว +8

    Having the opportunity to participate in this interview with @MentourNow was an absolute honour and pleasure. I am both impressed and grateful for the amazing comments, perspectives, questions, and debates I see here in the comments. They truly are a testament to the quality of this community. I am especially thankful to those who have provided feedback and pointed out areas that I intentionally did not elaborate on during the interview, as well as suggesting improvements to my phrasing. I agree with those highlighting that I shared a somewhat oversimplified version of the subject matter, as I briefly mentioned at the start of the video. This was done intentionally to make the conversation accessible to as broad an audience as possible. However, for those wishing to delve into the nitty-gritty details, I would be more than happy to elaborate in a thread following this message. I will be tagging the most intriguing comments, but everyone is free to join in.

    • @yamminemarco
      @yamminemarco ปีที่แล้ว

      Elevator Operator:
      From the point of view that I shared in the interview, you can argue that it's the evolution of a job. However, as you rightfully pointed out, there is another point of view which says that few to no people today have the job title "Elevator Operator". AI and technology most definitely can make, and have made, certain specific job titles redundant. But if we elaborate on that perspective, let's dive deeper into what it actually made redundant. It took over repetitive and potentially unfulfilling jobs, so that people who previously might have considered becoming an elevator operator now need to consider becoming an elevator engineer. If we take a look at the fluctuations of employment rates throughout the entire 1900s, when technology evolved and automated more things than ever before, we will notice that it did not lead to a growing trend in unemployment, and the quality of life of everyone in the world has steadily increased. The perspective I'm sharing is that automation has been only good for mankind, and there is no reason to believe that will change. After all, it is we who choose to create it; we are the creators of technology.

    • @yamminemarco
      @yamminemarco ปีที่แล้ว +2

      Unmanned Aircraft Topic.
      As many of you have pointed out, there have been unmanned aircraft (such as drones) that have successfully been deployed. These however, are not flown by AI. Some of them might incorporate some elements of AI such as Face Recognition and many more. However, they are operated and dependent on code instructions which, as I pointed out in the video, are much more efficient and reliable for this purpose. Hence the answer to the question "Can AI fly a plane on its own" remains No. However whether automation, especially if properly leveraging AI, will be able to do that is a different question. One that I would still answer no to today but with less certainty than if it were only with AI.

    • @yamminemarco
      @yamminemarco ปีที่แล้ว +2

      Generative AI vs Other AI.
      One of my primary emphasis for this interview was to address the fear-mongering misconceptions that have been irresponsibly spread by the media and that have, for the most part, been centered around Generative AI. Unlike previous breakthroughs in AI, ChatGPT became an overnight sensation and a lot more people have heard about it than any other AI breakthrough. Now, I stated that AI, in an oversimplified way, could be described as fake it till you make it. And I will stand my ground on this one by further elaborating it. When I chose to use the sentence "fake it till you make it" I explicitly did so in an attempt to translate one of the core principles of all ML models: Approximation. One of the incredible things of many ML models is how you can generate a desired output from an input of many functions without needing to know any of the logic inside the function. This is, a foundation of AI/ML. And it is a principle used in just about all types of models, from Classifiers, Regression and Neural Networks to Transformers, Entity Extraction, and many more. I believe that so long as any AI that we develop is driven by this approximation principle we will simply not achieve anything other than Narrow AI. And most definitely we will not achieve consciousness.

    • @MentourNow
      @MentourNow  ปีที่แล้ว +3

      Thanks for coming on, Marco! It was truly illuminating!

    • @gergelyvarju6679
      @gergelyvarju6679 ปีที่แล้ว +1

      @@yamminemarco Some AI models try to simulate various biological processes - like how the brain works, or how a hive of ants works. Approximation also happens in our own natural intelligence. But even a perfect emulation of the human brain/mind would have no advantage over the human brain/mind, and this is why I don't see general-purpose AI coming. Copying the weaknesses of the human mind and creating an "inferior human emulator" in the cockpit wouldn't be the best option either.
      Even if GPT is designed in a way that gives it the best chance to beat the Turing test, even with very large databases and lots of data, it is bad at many tasks - including procedural generation of game content, even for tabletop gaming. We can get back to this point and my experiences with using GPT for this purpose, but the issue is what you described: it tries to pick the "best option" one step at a time, it doesn't even consider its own generated answer as a whole, and it often doesn't understand which criteria from the prompt are relevant and important. I think ChatGPT and Midjourney aren't suitable for production environments yet, but trying them while gaming, learning prompt engineering, and evaluating these options is a much better approach.
      Your claim is that Midjourney doesn't see airplanes without windows. But some large high-altitude, high-endurance fixed-wing drones are in essence airplanes without windows. Midjourney (and its language model) doesn't understand that fixed-wing UAVs are a variant of airplanes, or how it can use that information. Please check the following futuristic image: cdn.midjourney.com/618fa0bb-c4b4-453d-b2ac-d5dc9850e007/0_2.webp
      You tried to engineer a prompt to render the plane without front windows/windscreens; I tried a different approach, and my prompt engineering resulted in a picture of a plane with two jet engines but no windows and no windscreens. No new, better-trained model, and my approach still worked. Creating variants, using the blend command, using the describe command to help with even better prompts... I am sure with enough prompt engineering we would get far better results.
      Approximation isn't an issue, because it is used by natural intelligence as well, and the ability to approximate the results of any "unlikely" option is important when we want to invent new solutions. Approximation is the shortcut that makes HUMAN intelligence possible.
      So, when I used Midjourney I saw how it uses some random noise and tries to turn it more and more into an image we want to see. We have multi-prompts and can prioritize the importance of prompts. If in addition to "front windows" I also mention the word "windscreen" and in the priority give them a -2 or -500 or just "no"... it was easy to use more and more prompt engineering for better results. But due to the economic crisis I don't have money to finance a lot of experiments with Midjourney, though I think discussing prompt engineering here would make sense.
      When I started to learn about AI, it started with an extremely simple algorithm - yes, the minimax algorithm, which has plenty of derivatives. It uses plenty of resources and should be optimized by prioritizing which options are checked first; we need to make sure that once it finds a "good enough" solution it doesn't waste an endless amount of resources on other things, and if an option is a dead end, it moves on to the next one.
      So, if a machine learning algorithm can approximate the results of some potential actions, and it is well trained and reasonably accurate, it can quickly identify options that shouldn't be considered by the minimax algorithm, or should be considered only as a last resort. Minimax and its derivatives can think "several steps ahead", and this is how they would choose the best options. We would have different kinds of algorithms (some of them with machine learning, etc.) to evaluate various scenarios.
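
A compact sketch of plain minimax as referenced above, applied to a toy game; the trivial evaluation function here stands in for the (possibly learned) scorer that would prioritise or prune options.

```python
def minimax(state, depth, maximizing, get_moves, apply_move, evaluate):
    """Plain minimax: look `depth` plies ahead and back up the best score."""
    moves = get_moves(state)
    if depth == 0 or not moves:
        return evaluate(state), None
    best_move = None
    if maximizing:
        best = float("-inf")
        for move in moves:
            score, _ = minimax(apply_move(state, move), depth - 1, False,
                               get_moves, apply_move, evaluate)
            if score > best:
                best, best_move = score, move
    else:
        best = float("inf")
        for move in moves:
            score, _ = minimax(apply_move(state, move), depth - 1, True,
                               get_moves, apply_move, evaluate)
            if score < best:
                best, best_move = score, move
    return best, best_move

if __name__ == "__main__":
    # Toy game: the state is a number, a move adds or subtracts 1, and the
    # maximizer wants the number to be high after 4 plies of alternating play.
    score, move = minimax(0, 4, True,
                          get_moves=lambda s: [+1, -1],
                          apply_move=lambda s, m: s + m,
                          evaluate=lambda s: s)
    print(score, move)  # -> 0 1 (best guaranteed score is 0, reached by moving +1 first)
```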

  • @NicolaW72
    @NicolaW72 ปีที่แล้ว +6

    Thank you very much for this really informative interview, which clarifies what AI is and can do - and what AI is not and cannot do! 👍 That's a core point of knowledge, not only in the aviation business.

  • @sergethelight
    @sergethelight ปีที่แล้ว +3

    Awesome vid! Airbus already uses machine learning (the basis of AI) to improve engineering and airline fleet operations. They work together with a company called Palantir, and they created "Skywise" for this.

  • @mmhuq3
    @mmhuq3 ปีที่แล้ว +2

    Ok again a marvelous video. Thank you so much

  • @TheHenryFilms
    @TheHenryFilms ปีที่แล้ว +18

    In light of one of your recent videos, I just realized AI might be very useful for parsing the relevant parts of NOTAMs, and maybe even reminding the pilots as the flight progresses.

  • @esce69
    @esce69 ปีที่แล้ว +7

    Finally an intelligent discussion on current AI. Really didn't expect it on an aviation channel. Thanks!

  • @danielayers
    @danielayers ปีที่แล้ว +5

    Masters Degree in Computer Science here, with more than 20 years experience. This is the best explanation of AI that I have seen anywhere! Well done!

  • @Oyzatt
    @Oyzatt ปีที่แล้ว +15

    This is the most accurate description of AI I've ever heard. Great 🧡

    • @NicolasChaillan
      @NicolasChaillan ปีที่แล้ว +2

      Not true. It's a description of Generative AI. Not AI.

    • @Oyzatt
      @Oyzatt ปีที่แล้ว +1

      @@NicolasChaillan But the analogy applies to any AI. What you don't understand is that humans are exceptional in their "capabilities", whatever that means.

    • @NicolasChaillan
      @NicolasChaillan ปีที่แล้ว +1

      @@Oyzatt Clearly you don't understand what the technology is capable of and what we have already done at the U.S. Air Force and Space Force. I was the Chief Software Officer for 3 years.

    • @Oyzatt
      @Oyzatt ปีที่แล้ว

      @@NicolasChaillan Let's crystallize everything here for the sake of clarity. AI is not creative like humans; it simply regenerates what has been fed into it, and that clearly shows its boundaries. In the military space it can be capable of many things, but not cognitive stuff, if you'll agree.

    • @preventbreach3388
      @preventbreach3388 ปีที่แล้ว

      @@Oyzatt wrong. That's generative AI. Not AI as a whole.

  • @MrLewis555
    @MrLewis555 ปีที่แล้ว

    That was a fascinating video; I would love to see another with a deeper dive on some things and more questions.

  • @artrogers3985
    @artrogers3985 ปีที่แล้ว +2

    Great episode

  • @pfefferle74
    @pfefferle74 ปีที่แล้ว +1

    There was a good point made: an AI pilot assistant must have a way to strongly signal to the pilots whether it is making a helpful suggestion that the pilot may overrule, or an emergency interjection that the pilot must simply have faith in - like when TCAS steps in to avoid an imminent collision.

  • @tensorlab
    @tensorlab ปีที่แล้ว +1

    Being a data scientist and an aviation enthusiast: a situation such as Sully's could definitely be handled by AI in the form of a recommendation system or a disaster-managing co-pilot system, where the system quickly identifies a dual engine failure and determines the shortest route available to the nearest airstrip. This, however, would require intensive training on large simulation datasets and would involve multiple countries across the world. The model inference would also require extremely powerful computers on board to process such large streams of data quickly, which might drive up the cost of the airplane. So, theoretically it is possible, but a practical implementation likely won't happen anytime soon.
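
A rough sketch of the "which airstrip is still reachable" calculation described above, assuming still air, a fixed glide ratio, and a made-up airport list; it needs no machine learning, which echoes the point made in the video.

```python
import math

GLIDE_RATIO = 17.0   # assumed ~17:1 for a clean airliner; varies by type and configuration
FT_PER_NM = 6076.0

def distance_nm(lat1, lon1, lat2, lon2):
    """Great-circle distance in nautical miles (haversine formula)."""
    r_nm = 3440.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat, dlon = p2 - p1, math.radians(lon2 - lon1)
    a = math.sin(dlat / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2
    return 2 * r_nm * math.asin(math.sqrt(a))

def reachable_airports(lat, lon, altitude_ft, airports):
    """Return airports within still-air glide range, closest first."""
    max_range_nm = (altitude_ft / FT_PER_NM) * GLIDE_RATIO
    options = [(distance_nm(lat, lon, a_lat, a_lon), name)
               for name, a_lat, a_lon in airports]
    return sorted((d, name) for d, name in options if d <= max_range_nm)

if __name__ == "__main__":
    # Entirely hypothetical position and airport list, for illustration only.
    airports = [("ALPHA", 40.78, -73.87), ("BRAVO", 40.85, -74.06), ("CHARLIE", 40.69, -74.17)]
    print(reachable_airports(40.86, -73.88, altitude_ft=3000, airports=airports))
```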

  • @RyanEmmett
    @RyanEmmett ปีที่แล้ว +1

    Really fascinating and informative discussion!

  • @arunkottolli
    @arunkottolli ปีที่แล้ว +2

    AI can help pilots by automating routine operations - like routine cabin announcements, early warnings of turbulence, etc. - and by verifying and implementing various checklists, sending automated communications to the control tower and vice versa, and so on. There is no way AI will not be integrated into the cockpit in the near future.

  • @NateJGardner
    @NateJGardner ปีที่แล้ว +2

    GPT models contain a world model - they are capable of performing calculations and keeping track of state. They can play games they've never seen before. It's not as simple as just accessing memories. It's accessing abstract concepts and using a predictive model it has developed that can reliably predict information, and the only way to do that is to actually process the information. I.e., it's not overfit. It can actually perform reasoning. It does have abstract ideas about what things mean. However, its entire world is text, as seen through the lens of its training data, so of course it currently has limitations.

  • @stonykark
    @stonykark ปีที่แล้ว +5

    I've worked in tech with ML/AI at various points over the years. I agree with the assessment here; it's really nothing to freak out about. The reason people care is that there are a lot of extremely wealthy "entrepreneurs" who want to use it to make money even more easily than they do now. LLMs like ChatGPT will have their uses, but this is not the revolution everyone is afraid of.

    • @teemo8870
      @teemo8870 ปีที่แล้ว

      Hopefully.

    • @youerny
      @youerny ปีที่แล้ว

      I frankly disagree; it is an amazing enabler, capable of revolutionary acceleration in productivity and value generation, in both productive and recreational activities. It is the new steam engine of the 21st century. On the other hand, I cannot say anything about dangerous outcomes. There are indeed potentially problematic scenarios, as described in Bostrom's book Superintelligence.

  • @hans-uelijohner8943
    @hans-uelijohner8943 ปีที่แล้ว +1

    Very, very good talk!!!

  • @gepetotube
    @gepetotube ปีที่แล้ว +11

    Love Mentour videos, and they are always very well documented. The guest in this one seems to me not quite the expert I hoped for. He is just stating the oversimplified view of AI that seems to flood the internet these days. Here are a few comments, if I may:
    * AI is not just ChatGPT. The GPT architecture is just one of many (BERT, GANs, etc.). Many of these are not as visible as ChatGPT, but we have already been affected by AI at large scale (Google Translate, YouTube suggestions, voice recognition, etc.)
    * AI is not just a database system. In the process of deep learning, neural networks are able to isolate meaning (see sentence embeddings, convolutional neural networks, etc.). AI is able to cluster information and use it for reasoning, and I can give you many examples. GPT does not only generate the next word based on previously generated words; it is also driven by the meaning constructed in the learning process. Actually, it does not even generate words; it generates tokens ("sub-words"), as the toy decoding sketch below illustrates. It is not a probabilistic system.
    * AI could land an airplane without any problems if trained to do so. Full self-driving AI for cars is a far more complex problem, and it is amazing what current AI systems can do (Tesla, Mercedes, etc.). But as somebody said, the first rule of AI is: if you can solve a problem without AI, then do it that way. AI is not very efficient to train. Currently we can fly airplanes without pilots without using AI (autopilot, auto-landing systems, etc.). On the other hand, replacing pilots completely will not happen any time soon, even for the simple reason that people will not board a plane without a pilot any time soon. But it is creeping in. As was mentioned in a previous video, the industry is moving from two pilots to one pilot.
    * AI will replace jobs (and it will create new ones). One example is customer support, with all the robots that answer your questions. What do you think Med-PaLM 2 will do?
    ... ;-)
    One thing I agree with the guest on: AI is an enabler for new opportunities. Also, good idea to bring aviation into the AI discussion.
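
To make the "tokens, not words" point concrete, here is a toy next-token decoding loop over a made-up vocabulary; a hand-written scoring function stands in for the trained model, but the loop has the same general shape.

```python
import numpy as np

VOCAB = ["the", " plane", " lands", " safely", "."]

def toy_logits(context: list[int]) -> np.ndarray:
    """Stand-in for a trained model: just favours the next token index in sequence."""
    scores = np.full(len(VOCAB), -5.0)
    nxt = (context[-1] + 1) % len(VOCAB) if context else 0
    scores[nxt] = 5.0
    return scores

def generate(num_tokens: int) -> str:
    context: list[int] = []
    for _ in range(num_tokens):
        logits = toy_logits(context)
        probs = np.exp(logits) / np.exp(logits).sum()  # softmax over the token vocabulary
        context.append(int(np.argmax(probs)))          # greedy decoding, for simplicity
    return "".join(VOCAB[i] for i in context)

print(generate(5))  # -> "the plane lands safely."
```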

    • @yamminemarco
      @yamminemarco ปีที่แล้ว +1

      You bring up very valid points. I've created a main comment and thread where I've elaborated a little more and you are welcome to join in.

  • @oDrashiao
    @oDrashiao ปีที่แล้ว

    I'm an AI researcher, and I always struggle to explain why we are not talking about sentience, but basically big prediction machines. Marco did a great job there! Thanks for bringing an actual expert :)

  • @henryklassen3362
    @henryklassen3362 ปีที่แล้ว +2

    Fascinating video indeed.

  • @JohnFrancis66
    @JohnFrancis66 ปีที่แล้ว +10

    Artificial intelligence is not and never will be. This was so good I shared it in my LinkedIn stream because every other post is some BS AI-related fever dream.

    • @MentourNow
      @MentourNow  ปีที่แล้ว

      Thank you! Feel free to tag me in the post - Petter Hörnfeldt

    • @dss12
      @dss12 ปีที่แล้ว +1

      People have become too dumb to understand this simple concept. They think of AI as God.

    • @fabiocaetanofigueiredo1353
      @fabiocaetanofigueiredo1353 11 หลายเดือนก่อน

      What do you think is so un-replicatable about a biological brain?
      Physician here.

    • @fabiocaetanofigueiredo1353
      @fabiocaetanofigueiredo1353 11 หลายเดือนก่อน

      That phrase "AI is not and never will be" is not anything other than wishful thinking. Anyone can say anything.

  • @daser99
    @daser99 ปีที่แล้ว +1

    Indeed one of the most non-artificially intelligent discussions on the topic. Thank you.

  • @XRPcop
    @XRPcop ปีที่แล้ว +4

    Great information and perspective on AI. There's definitely a ton of fear-mongering going on, and the analogy "fake it till you make it" truly makes sense!

    • @XRPcop
      @XRPcop ปีที่แล้ว

      @Windows XP The news media and others are exploiting AI tech by saying "AI is taking your job" or "AI will control everything" when in reality, as explained in this video, AI really can't take control of anything... just yet.

  • @mpf2006
    @mpf2006 ปีที่แล้ว +1

    Such a great interview lots of good knowledge

  • @philipsmith1990
    @philipsmith1990 ปีที่แล้ว +10

    This was a fascinating discussion. As a pilot for a major airline I spent many hours in a simulator preparing to employ procedures learned from generations of pilots. As a technical rep for pilot associations and for my own interest I spent many more hours studying accident and incident reports and hopefully learning from them. I spent many hours in the air seeing how another pilot managed events. Like just about every pilot I spent even more hours, often over a beer, talking about aviation events I had experienced. In those ways I built a base of knowledge that stood me in good stead when I had to deal with events myself. This process, although less structured, resembles the building of a knowledge base on which an AI depends. Certainly one can point to incidents that an AI would find difficulty in handling although I'm not sure that the 'miracle on the Hudson' is one. I can imagine that an AI would have the knowledge that when no other option is available put the aircraft down in any space clear of obstacles within range. The QF32 incident of an uncontained engine failure might be more difficult since the crew there had to ignore some of the computer generated prompts. QF72 would also be unlikely to be fixed by an AI since it involved a system fault combined with an unanticipated software failing.
    So I agree that there would be situations that an AI could not resolve. But would they be more than those that pilots don't satisfactorily resolve? Possibly not. It may be that even with current technology the overall safety, measured as number of accidents, would be improved.
    However there is another issue. Would passengers fly in an aircraft with no-one up front? I know many people who would not. But I also know people who would choose the flight that was 10% or 20% cheaper.
    And of course there are the non-flying events that a pilot is called upon to resolve. I can't see any current AI finding an ingenious way to placate a troublesome passenger. I found that walking through the aircraft after making a PA announcing a long delay was far more effective than the PA alone. Just seeing that the pilot had the confidence to show their face made pax believe what they were told. I regret that some of my ex-colleagues didn't believe this.
    Something that does worry me and which is not yet down to AI but is already a problem and would be made worse by AI is skill fade. The less a pilot is called upon to exercise his skills the more likely it is that they will not be good enough when called upon.

    • @evinnra2779
      @evinnra2779 ปีที่แล้ว +1

      They could make the flight 50 or even 90% cheaper; I wouldn't buy a ticket for a flight that has no pilot and first officer flying it. There is something about this psychological factor of fear. Although we know that far more people are involved in car accidents than plane accidents, nevertheless the fear of flying remains, because it is mostly about the knowledge of being powerless to do anything whatsoever in a plane when something goes wrong. This is why trusting artificial intelligence to do the job of a human mind would be a step too far for me. I guess it is not about how accurate artificial intelligence may become in the future at recognizing a situation and finding viable solutions to problems, but rather about my own sense of trusting a human mind much more, since it works similarly to how my mind works. Currently AI does not work the way the human mind works; it only imitates some aspects of the human thinking process.

    • @philipsmith1990
      @philipsmith1990 ปีที่แล้ว +1

      @@evinnra2779 As I said, I know people who think the way you do, but I also know people for whom the price matters more. If the operators see more profit then they will pitch the price to maximise that and employ whatever marketing they need to.

    • @lawnmower1066
      @lawnmower1066 ปีที่แล้ว +2

      This generation might struggle to fly with AI; the next generation won't give it a second thought!

    • @giancarlogarlaschi4388
      @giancarlogarlaschi4388 ปีที่แล้ว +1

      Go fly in Asia or Africa... it's unpredictable there, weather-wise and ATC-wise.
      And sometimes sub-par local pilot and maintenance standards.
      Been there, done that, many times.

    • @philipsmith1990
      @philipsmith1990 ปีที่แล้ว +1

      @@giancarlogarlaschi4388 I've flown in Africa and Asia. The unpredictability is no worse than in other places. I used to tell my copilots that the one thing you know for sure about a weather forecast is that it's wrong. It may be a little bit wrong or it may be completely wrong. The lowest fuel I ever landed with was at Heathrow, thanks to a surprise. In the USA the weather can be unpredictable. I once landed from a nighttime Canarsie approach in a snowstorm; the next day we strolled down to the Intrepid in our shirt sleeves.

  • @MountainCry
    @MountainCry ปีที่แล้ว +3

    I got completely sidetracked by how beautiful the AI Van Gogh airplanes were.

  • @otooleger
    @otooleger ปีที่แล้ว +4

    Great video. I learned a lot about AI in general. Well done for tackling this topic.

  • @anilbagalkot6970
    @anilbagalkot6970 ปีที่แล้ว +5

    Hej!!! The best discussion 👌 Thanks Petter!

    • @MentourNow
      @MentourNow  ปีที่แล้ว +3

      Glad you liked it!

  • @Cartier_specialist
    @Cartier_specialist ปีที่แล้ว +6

    I can understand why you partnered with Marco for this, because he explains AI in a simple way, just like you explain aviation-related information.

  • @daleybrennan9867
    @daleybrennan9867 ปีที่แล้ว

    Fascinating interview. Thank you, Petter.

  • @kenilkhambhaita3805
    @kenilkhambhaita3805 ปีที่แล้ว +1

    I love these videos while I'm studying for A-levels!

  • @JAF30
    @JAF30 ปีที่แล้ว +1

    This was a great discussion of the whole "AI" marketing going on right now for something that isn't even really AI. Speaking as someone in the tech field, current AI is nothing more than a messy, giant computer program that must always answer your question; it doesn't even have to be a truthful answer.

  • @idanceforpennies281
    @idanceforpennies281 ปีที่แล้ว +1

    Time-critical weather prediction/modelling is mathematically one of the hardest things imaginable. There is so much data, and it's like a 4D fluid dynamics show on steroids, with positive and negative feedbacks all going on instantly. AI will be a significant benefit here.

  • @Sheherezade
    @Sheherezade ปีที่แล้ว +1

    Thank you for the post; my teenager is interested in becoming a pilot, so I'm grateful for your opinion.
    It would be boring for a pilot if the co-pilot were eliminated; humans need people to talk to at work, especially now that the cockpits are sealed.

  • @ericvandeweyer1766
    @ericvandeweyer1766 ปีที่แล้ว

    What an excellent discussion. Much appreciated.

  • @lawnmanmartinfan7909
    @lawnmanmartinfan7909 ปีที่แล้ว

    I am in my mid-fifties, and when I was a kid growing up you had a telephone hooked to the wall with a rotary dial, meaning you had to wait for the dial to come around to the number and then return. You had to remember people's phone numbers, either by heart or written down, which kept you engaged in thinking and remembering things. I realized when I got older that I didn't remember as much, because it wasn't a necessity. We became dependent on computers to tell us, instead of having to remember, look it up in a book, or think about it. I think AI is OK for some things, but there are just some jobs that are always going to need human interaction, IMO. Great video as always.

  • @michaelanderson5135
    @michaelanderson5135 ปีที่แล้ว +1

    Excellent video

  • @ericfielding2540
    @ericfielding2540 ปีที่แล้ว +1

    Marco has a great way of explaining things.

  • @marceloanez100
    @marceloanez100 ปีที่แล้ว

    This conversation was amazing! It's the clearest explanation of what AI really is that I've heard up to now. I'm going to be sharing this video with a lot of people! Petter, I'm a huge fan of your YouTube channels! You too have a gift for laying complicated topics out in clear, digestible terms! Thank you!! 👏🏼👏🏼🙏🏼

  • @alphabravo1214
    @alphabravo1214 ปีที่แล้ว

    This video is spot on and agrees with what most other experts in the field are saying.
    One thing that wasn't really touched on in this video is the difference between an autopilot and an AI controlling the airplane. When autopilots were created, there was similar concern about pilots being replaced because, as the name suggests, the point of an autopilot is to control the airplane automatically. Autopilots are advanced enough that taxiing around the airport is about the only thing that isn't automated. So, one may be tempted to ask: why doesn't the autopilot steal pilots' jobs, and what does AI bring to the table that could threaten their jobs?
    The answer is that the autopilot doesn't have the decision-making capabilities necessary to safely fly an aircraft, and the AI of today isn't advanced enough to have that either, as this excellent video explains. Just because we can automate the mechanics of flying an airplane doesn't mean we can automate the decision-making behind why we fly a particular way or follow a certain set of procedures. An autopilot might very well be able to fly an ILS approach more accurately than any human could, but that doesn't mean it understands when it needs to fly an ILS approach or how to set up for one. As the video explains, AI is also incapable of creative thinking; it's only able to take what it has seen before and apply that to the situation. This understanding of why we do things and what makes some action appropriate or not is the crucial element that is missing from these automatic systems, whether they are rule-based (autopilots) or machine-learning-based (AI).
    That said, some additional use cases for AI, beyond what the video covers, include improving flight planning and communication with ATC and other pilots.
    For flight planning, consider all the NOTAMs, METARs, and so on that pilots have to sift through. It is a lot of information that is usually not formatted in a very human-readable way, and even when it is, pilots still have to pick out what is important and relevant to their flight. AI could parse through all of that information, give pilots a summary of the important parts, and even suggest alternate routes if it were paired with weather and traffic prediction models (a minimal parsing sketch follows this comment). That could also be a way to help ATC, by suggesting routings for flights that maintain traffic separation and keep the flow moving.
    Of course, any such tool would have to be thought through carefully. Pilots would still need to go through the materials to check that what the AI said is correct, but at least they would have an idea of what to expect, which might speed up the process. Still, Mentour has done videos on confirmation bias contributing to accidents, so pilots would need to be trained to use these tools effectively.
    Another use could be in communication. Paired with radio or text-based interfaces, these models could assist with translation when non-native English speakers are on frequency or other languages are being used with ATC, which could improve situational awareness and even clear up miscommunications. Again, care must be taken, since these models can also translate incorrectly, but there are other translation, speech recognition, and text-to-speech tools that could be paired with AI to reduce that risk.
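    As a rough illustration of the parsing half of that idea, here is a minimal sketch in Python that pulls the wind, visibility, temperature and QNH out of a simplified METAR string. Real METARs carry many more groups and a production tool would use a proper decoder; the pattern and the example report below are simplified assumptions, not an operational format.

    import re

    # Simplified METAR decoder: extracts wind, visibility, temperature/dew point
    # and QNH from a report string. Real METARs contain many more groups
    # (clouds, RVR, trends, remarks), so this is only a toy illustration.
    METAR_PATTERN = re.compile(
        r"(?P<station>[A-Z]{4}) "
        r"(?P<time>\d{6}Z) "
        r"(?P<wind_dir>\d{3}|VRB)(?P<wind_spd>\d{2})KT "
        r"(?P<visibility>\d{4}|CAVOK).*?"
        r"(?P<temp>M?\d{2})/(?P<dew>M?\d{2}) "
        r"Q(?P<qnh>\d{4})"
    )

    def summarise_metar(raw: str) -> str:
        m = METAR_PATTERN.search(raw)
        if not m:
            return "Could not decode METAR: " + raw
        temp = m["temp"].replace("M", "-")
        vis = "10 km or more" if m["visibility"] in ("9999", "CAVOK") else m["visibility"] + " m"
        return (f"{m['station']} at {m['time']}: wind {m['wind_dir']} deg at {m['wind_spd']} kt, "
                f"visibility {vis}, temperature {temp} C, QNH {m['qnh']} hPa")

    # Invented example report, for illustration only.
    print(summarise_metar("ESSA 161150Z 24012KT 9999 SCT030 12/08 Q1013"))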

  • @alscustomerservice187
    @alscustomerservice187 ปีที่แล้ว

    Fantastic Show.

  • @franknewell7017
    @franknewell7017 ปีที่แล้ว

    I was a controls expert. I was able to automate multiple processes and reduce the number of humans on a shift from 6 or 7 to 2 or 3. We still needed to keep humans on watch to protect the city when a sensor went down. I could provide alarms for damaged sensors, but I couldn't provide controls for a damaged or misreading sensor, because it may fail in thousands of ways.

  • @trend0000
    @trend0000 ปีที่แล้ว

    Excellent, Excellent video!!!!!

  • @TheAnticorporatist
    @TheAnticorporatist ปีที่แล้ว

    It seems like every crash video boils down to, “but the first officer got flustered and didn't realize that the down stick was in doggy mode and the autopilot had defaulted to protect-the-kitty mode, so it did X.” A computer can be programmed to KNOW all of that and NOT miss the checklist item that needed to be reset to “A” mode, or whatever.
    For instance, in the “Miracle on the Hudson”, one of the cameras (or some other form of monitoring) watching the inlet to the engines would “see” a frickin’ goose get sucked into that engine, would automatically, in seconds, determine whether it was too damaged to be restarted (and, if not, try the restart checklist), know exactly what the flight characteristics of the plane were going to be from then on, realize that returning to the airport was going to be a no-go, evaluate emergency landing options, decide on the Hudson, contact the field, air traffic control, the police and the coast guard simultaneously and instantly while informing the passengers and crew of what was about to happen, and maneuver the plane around for a water landing, making sure to land at the ideal angle to avoid having the plane break up, if, indeed, that was the only option.
    It is entirely possible that the plane, knowing that the starboard engine was completely out, and knowing EXACTLY where the other air traffic and ground traffic was (and was going to be), would INSTANTLY throw the plane into a maximum bank, while deploying flaps, re-lowering the landing gear and broadcasting appropriate messages and crew and passenger instructions, then pull out of the bank and land the aircraft back on the field; a feat that no human flight crew could hope to achieve in that amount of time. Or, who knows: since the subsystem of the expert system that handles vision wouldn't have been occupied with everything else that's involved in getting an airplane into the sky, and could also have a much greater field of view than the pilot, it may well have noticed the flock of geese and modified the takeoff profile temporarily to avoid them.
    I'm a huge fan of pilots, but I will say it again: modern aircraft have too many disparate systems, each of which has a billion different (but eminently knowable) states and settings and errors and things that can go wrong with them. It is too complicated for a human pilot and actually needs to be either GREATLY simplified, or completely under the control of a triple-redundant computer system, IMHO.
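    What is described above is closer to a hard-coded expert system than to machine learning, and the decision part can be sketched as plain rules. Here is a toy example in Python that ranks reachable landing options after a dual engine failure; the glide ratio, distances and option names are invented for illustration and are nothing like certified performance data.

    from dataclasses import dataclass

    @dataclass
    class LandingOption:
        name: str
        distance_nm: float   # straight-line distance to the option
        is_runway: bool

    def best_glide_range_nm(altitude_ft: float, glide_ratio: float = 17.0) -> float:
        # Very rough: range = height x glide ratio, converted from feet to nautical miles.
        return altitude_ft * glide_ratio / 6076.0

    def rank_options(altitude_ft, options):
        """Return the estimated glide range and the reachable options,
        runways first, then off-airport sites, nearest first.
        All numbers here are illustrative, not real performance data."""
        reach = best_glide_range_nm(altitude_ft)
        reachable = [o for o in options if o.distance_nm <= reach]
        reachable.sort(key=lambda o: (not o.is_runway, o.distance_nm))
        return reach, reachable

    # Invented scenario loosely inspired by the comment above.
    options = [
        LandingOption("LaGuardia RWY 13", 9.5, True),
        LandingOption("Teterboro RWY 19", 8.0, True),
        LandingOption("Hudson River", 2.0, False),
    ]
    reach, ranked = rank_options(2800, options)
    print(f"Glide range ~{reach:.1f} NM; options in order: {[o.name for o in ranked]}")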

  • @EdOeuna
    @EdOeuna ปีที่แล้ว +1

    In order to implement AI in flying aircraft, I think there has to be a major overhaul and rethink of the entire way of flying and operating the flight deck. Airmanship is almost entirely based on previous experience. Also, there already exist systems for detecting the wrong runway, wind shear, terrain ahead, etc., so these don't need to be redeveloped and replaced by AI. As for AI helping in emergencies, a lot of emergencies require quick action and muscle memory from the pilots. Introducing a third “person” into the flight deck might be seen as too much interference. It also risks clouding the judgement of the pilots, as they themselves will end up with reduced situational awareness.

  • @mityace
    @mityace ปีที่แล้ว +2

    Thanks for a video without the hype. Back in the 1980s, I was thinking of getting a PhD in AI, so I have continued to follow the development of the field ever since. Kudos for finding an expert who gives it to us straight about what "AI" is and is not.
    It's hard to predict whether true AI will ever exist. Nearly 40 years after I left school, we don't seem to be that much closer to building an actual intelligence. There could be a breakthrough today, or 100 years from now our descendants could still be trying to figure out why we wasted so much time on this dead end. As it usually is in life, what actually happens will probably be somewhere between those extremes.

  • @coleslaw6285
    @coleslaw6285 ปีที่แล้ว +3

    What a fantastic organic conversation. Interesting and fresh perspectives on this topic. Thoroughly enjoyed. Cheers.

  • @flapjackson6077
    @flapjackson6077 ปีที่แล้ว

    Petter, this was one of the most interesting vids you’ve done! You’re one of the good folks. Your intellectual curiosity, along with your commitment to truth, is what this world needs.
    It appears Marco is another such fellow. AI is a potentially alarming technology, but it’s reassuring to understand its limitations. 👍

  • @vitvohralik5557
    @vitvohralik5557 7 หลายเดือนก่อน

    Best video about AI that I have ever seen 👍

  • @johnmcqueen4883
    @johnmcqueen4883 ปีที่แล้ว +7

    This was sooo informative. I am retired so I haven’t worried about AI taking my job (what, it’s going to sit in my hammock?) and therefore have not paid a lot of attention to it, but now I feel I have a pretty good understanding of what it is and what it isn’t, what it can and cannot do. Thanks, Mentour!

    • @philip6579
      @philip6579 ปีที่แล้ว

      AI may make it so expensive to live that you cannot remain retired.

    • @theairaccumulator7144
      @theairaccumulator7144 ปีที่แล้ว

      @@philip6579 That won't happen because of ChatGPT and its clones; those are simply a slightly more advanced autocorrect.

  • @timp3224
    @timp3224 ปีที่แล้ว +1

    Great video.
    Whilst it is suggested that AI cannot fly the aircraft, there still seems to be a role for an overarching AI that monitors the flight and flags to the pilots (and to the ground) when the flight is operating outside the expected parameters; that there is something “weird” about it. There have been a number of accidents where the pilots did not pick up that they, or their aircraft, were not doing what would be normal.
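    A very simple version of that kind of “is anything weird?” monitor doesn't even need machine learning; checking a handful of parameters against the expected envelope for the current flight phase already goes a long way. A toy sketch in Python, where the phases, parameters and limits are invented for illustration rather than real operational values:

    # Toy "flight weirdness" monitor: compares a few parameters against
    # expected ranges for the current flight phase. The limits are invented
    # purely for illustration and are not real operational values.
    EXPECTED = {
        "cruise":   {"pitch_deg": (-2, 5),  "ias_kt": (230, 330), "vs_fpm": (-500, 500)},
        "approach": {"pitch_deg": (-5, 10), "ias_kt": (120, 180), "vs_fpm": (-1200, 200)},
    }

    def check_flight_state(phase, **params):
        """Return human-readable flags for any parameter outside its expected range."""
        flags = []
        for name, value in params.items():
            low, high = EXPECTED[phase][name]
            if not low <= value <= high:
                flags.append(f"{name}={value} outside expected {low}..{high} for {phase}")
        return flags

    # Example: nose high and getting slow in cruise, something a distracted crew might miss.
    for flag in check_flight_state("cruise", pitch_deg=12, ias_kt=205, vs_fpm=-100):
        print("CHECK:", flag)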

  • @fredrikjohansson
    @fredrikjohansson ปีที่แล้ว +37

    Even if it flew the plane perfectly 99.99% of the time, it would be devastating the 0.01% of the time it fails due to the weird hallucinations AI sometimes has.

    • @playmaka2007
      @playmaka2007 ปีที่แล้ว +10

      Kind of like the weird hallucinations humans sometimes have?

    • @danharold3087
      @danharold3087 ปีที่แล้ว +1

      Airplanes spend most of their time flying at high altitude, where the crew often has minutes to correct a problem, so the 0.01% is not necessarily, or even likely, to be devastating. Closer to the ground, of course, you are correct.

    • @adb012
      @adb012 ปีที่แล้ว +4

      Well, that's the tension here. If it fails to save United 232 and Sully's flight, but it doesn't crash in AF447, the Lion Air and Ethiopian MCAS accidents, Colgan, AA587 at New York, the Helios hypoxia flight, Tenerife, the PIA gear-up go-around, the AA965 CFIT at Cali, TAM 3054 at Congonhas, and a long list of accidents that were either caused by the pilot (due to distraction, confusion, spatial disorientation, disregard of procedures, fatigue, etc.) or that were not avoided by the pilot when they could have been avoided just by following procedures (the MCAS accidents, AF447...), is it worth the price?

    • @mofayer
      @mofayer ปีที่แล้ว +3

      @@playmaka2007 Right? This channel is full of examples of pilots hallucinating, especially in stressful (high-workload) situations; something AI never has to deal with, since it can't get stressed.

    • @robainscough
      @robainscough ปีที่แล้ว +5

      0.01% is still better than the 0.5% human errors.

  • @trjozsef
    @trjozsef ปีที่แล้ว +2

    The way AI could help right now is by monitoring radio communications.
    There was one incident [citation needed] where the readback of the atmospheric pressure was different, and there have been many incidents where the readback of the taxi route was off. Given that AI does not have preconceptions about what is correct and what isn't, it could flag mismatching communication and prevent crews from acting on false presumptions.
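    The mismatch-detection part of this is mostly plain text comparison once the audio has been transcribed; the hard, error-prone step is the speech recognition itself. A minimal sketch, assuming both the clearance and the readback are already available as text (the phraseology and values below are invented):

    import re

    # Toy readback checker: extracts the numbers and standalone letters
    # (QNH, runway, taxiways, headings) from a clearance and verifies the
    # readback contains the same ones. Both transmissions are assumed to be
    # already transcribed; the example phrases are invented.
    def key_items(transmission: str) -> set:
        return set(re.findall(r"\b(?:\d+|[A-Z])\b", transmission.upper()))

    def readback_mismatches(clearance: str, readback: str) -> set:
        return key_items(clearance) - key_items(readback)

    clearance = "Taxi to holding point runway 22 via A and B, QNH 1013"
    readback  = "Taxi holding point runway 22 via A and B, QNH 1031"
    missing = readback_mismatches(clearance, readback)
    print("Possible readback error, items not read back:", missing or "none")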

  • @grieske
    @grieske ปีที่แล้ว +2

    Can a language model sift through NOTAMs and produce a concise report specific to the flight at hand?

    • @orlovsskibet
      @orlovsskibet ปีที่แล้ว

      That would be a very inefficient way to do that. A pretty simple algorithm will do that just fine.

    • @grieske
      @grieske ปีที่แล้ว

      @@orlovsskibet okay. I didn't know that they are sufficiently standardized for that.

    • @orlovsskibet
      @orlovsskibet ปีที่แล้ว +1

      @@grieske oh yes, as far as text parsing goes, they are quite easy to handle. 😀

    • @grieske
      @grieske ปีที่แล้ว

      @@orlovsskibet Thanks for the explanation. I only have experience with notices to mariners, 25 years ago. The most important notices were non-standard.
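      For what it's worth, civil NOTAMs carry a structured Q) line (affected FIR, a coded subject/condition, validity, radius), which is what makes the rule-based filtering mentioned a few replies up feasible. A minimal sketch in Python that keeps only the NOTAMs whose Q-line FIR matches the flight; the sample messages are shortened and invented, and a real tool would also use the codes, times and radius:

      import re

      # Minimal NOTAM filter: keep messages whose Q) line references one of the
      # FIRs the flight passes through. The sample messages are shortened and
      # invented; real NOTAM handling uses far more of the structured fields.
      Q_LINE = re.compile(r"Q\)\s*(?P<fir>[A-Z]{4})/(?P<code>Q[A-Z]{4})")

      def relevant_notams(notams, flight_firs):
          keep = []
          for text in notams:
              m = Q_LINE.search(text)
              if m and m["fir"] in flight_firs:
                  keep.append(text)
          return keep

      sample = [
          "A0101/25 Q) ESAA/QMRLC ... RWY 01L/19R CLSD",
          "A0230/25 Q) LFFF/QOBCE ... CRANE ERECTED 2NM NORTH OF FIELD",
      ]
      for n in relevant_notams(sample, {"ESAA", "ESOS"}):
          print("RELEVANT:", n)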

  • @WT.....
    @WT..... ปีที่แล้ว +2

    For those going "oh no, improvements in AI will steal our jobs": unless your job was obsolete to begin with, or requires no technical expertise or improvisation, you have nothing to fear. Don't believe me? Look at the advent of the calculator: people actually thought it would make mathematicians obsolete, but a few decades on, that hasn't happened.

  • @mendel5106
    @mendel5106 ปีที่แล้ว +3

    My take is that, at least in the first generation, it wouldn't make any inputs that are not commanded by the pilots; instead, it could be integrated with the sensors and make quick suggestions in emergency situations, explaining right away why it thinks that is the correct solution.
    The pilot then has time to analyze the suggestion and decide for themselves whether it is valid and applicable or not.
    So no worries about AI "taking over airplanes"; instead, I think pilots should welcome the idea, so long as it acts as a guardian angel that makes suggestions as needed but never actually takes control.
    Cheers

  • @SinergiaAlUnisono
    @SinergiaAlUnisono ปีที่แล้ว

    I trust an "expert" who says 'I don't know for sure' more than the ones who answer so bluntly with a binary 'yes' or 'no'.

  • @stevedavenport1202
    @stevedavenport1202 8 หลายเดือนก่อน

    One thing that people need to understand about Sully is that he was not only a very capable pilot with a very calm and grounded personality, but also someone who was good at playing "what if" games in his head.
    So I suspect that he had already contemplated this scenario and come up with some alternatives.

  • @diogopedro69
    @diogopedro69 ปีที่แล้ว +9

    Someone honest talking about "AI", for a change :) thank you

    • @MentourNow
      @MentourNow  ปีที่แล้ว +3

      You are welcome. That’s what we are here for

  • @steved2460
    @steved2460 ปีที่แล้ว

    Do more expert interviews please. Excellent work sir!

  • @Pladak8465
    @Pladak8465 ปีที่แล้ว

    Thank you so much for sharing that talk! I’m a computer programmer and that gave me some useful insights for the future. 😊

  • @jbrethous
    @jbrethous ปีที่แล้ว

    You ROCK, you are brilliant.

  • @lothar4tabs
    @lothar4tabs ปีที่แล้ว +2

    This is the best explanation I have ever had of what AI can and cannot do. You guys did an amazing job here.

  • @simonhourani5228
    @simonhourani5228 ปีที่แล้ว +1

    I always think about this: if there were a setting for emergencies where you could enter what's happening, like an engine on fire, and it would by itself shut down the engine, put the fire out, and complete the checklist just as it's told to do, with the pilots only making sure that everything was done right, wouldn't that reduce the load on the pilots? And do you think it is something that will happen at some point?
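    Something close to this already exists in rule-based form (electronic checklists and ECAM-style actions). The part described here, type the failure, get the actions, crew confirms each step, can be sketched very simply; the checklist text below is invented and heavily abbreviated, purely for illustration:

    # Toy electronic-checklist runner: maps a reported failure to a list of
    # actions and asks the crew to confirm each one. The checklist content is
    # invented and heavily simplified; it is not a real procedure.
    CHECKLISTS = {
        "engine fire": [
            "Thrust lever (affected engine)... IDLE",
            "Fuel control switch (affected engine)... CUTOFF",
            "Engine fire handle... PULL",
            "If fire still indicated after 30 s... DISCHARGE second bottle",
        ],
    }

    def run_checklist(failure: str) -> None:
        steps = CHECKLISTS.get(failure.lower())
        if steps is None:
            print(f"No checklist found for '{failure}'")
            return
        for i, step in enumerate(steps, 1):
            answer = input(f"Step {i}: {step} - confirm done? [y/n] ")
            if answer.strip().lower() != "y":
                print("Step not confirmed; holding here for the crew to decide.")
                return
        print("Checklist complete.")

    if __name__ == "__main__":
        run_checklist("engine fire")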

    • @EdOeuna
      @EdOeuna ปีที่แล้ว +1

      Engine fire checklists are short, as are most checklists, so getting external assistance to run them is pretty pointless.
      The majority of the difficulty with emergencies lies in the decisions: “where are we going to go, and how are we going to get there?” This is where the pilots' prior knowledge and gut feeling come into play. Can AI develop a “spider sense” and get the feeling that something isn't right? That's where I think the human will succeed over AI.

  • @chrisb.2028
    @chrisb.2028 ปีที่แล้ว

    Finally a real AI expert!

  • @stulop
    @stulop ปีที่แล้ว

    Such a clear view of what AI actually is, after all the nonsense in the newspapers. Thanks.

  • @luketurner314
    @luketurner314 ปีที่แล้ว

    8:07 reminds me of Turing's proof of the undecidability of the Halting Problem

  • @jamesanderson2176
    @jamesanderson2176 ปีที่แล้ว +3

    17:00 - From watching your Mentour Pilot videos, a system that monitors pilot actions, compares them with the automation in use and its modes, and calls out those systems/modes when they conflict with the pilot's actions, could be extremely useful. Many bad situations seem to begin, or get much worse, when the pilot misunderstands what the automation is doing.
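    A crude illustration of that kind of "your inputs don't match the active mode" callout, written as plain rules rather than AI; the modes and rules here are simplified and invented for the example:

    # Toy automation-awareness monitor: flags cases where the pilot's inputs
    # suggest they expect a different autopilot/autothrottle state than the
    # one that is actually active. Modes and rules are simplified and invented.
    def mode_conflicts(active_modes, pilot_inputs):
        callouts = []
        if pilot_inputs.get("manual_thrust_change") and "AUTOTHROTTLE" in active_modes:
            callouts.append("CHECK AUTOTHROTTLE - it will move the thrust levers back")
        if pilot_inputs.get("pitch_input") and "ALT HOLD" in active_modes:
            callouts.append("CHECK AP MODE - ALT HOLD is working against your pitch input")
        if pilot_inputs.get("go_around_thrust") and "APPROACH" in active_modes:
            callouts.append("CHECK FMA - go-around mode not engaged")
        return callouts

    for c in mode_conflicts({"AUTOTHROTTLE", "ALT HOLD"},
                            {"manual_thrust_change": True, "pitch_input": True}):
        print(c)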

    • @ch33rfulness
      @ch33rfulness ปีที่แล้ว +1

      There are such automations in place already, but they are not AI; they are preprogrammed rules, extensively tested. Nonetheless, ultimately it is still the pilot who is in charge and who can deactivate some of those functions. That's how everything is designed, and that's why we trust the pilots.

  • @RandomTorok
    @RandomTorok ปีที่แล้ว +2

    I think AI could have a role in training situations. Imagine sitting down at your computer and having a discussion with Brilliant AI on any topic. Airlines could have AI instructors that help staff learn different things. I know I learn better by hearing things explained. But it's not always convenient to go to a scheduled class. If a class has an AI instructor then I could take that course at any time anywhere.

  • @Amy_A.
    @Amy_A. ปีที่แล้ว

    Great video; the points were well made. I don't think you're wrong, but I do have a few counterpoints for consideration:
    1) When you bring up Sully, you're taking a single instance and touting it as the gold standard, while entirely ignoring all the pilots who have made terrible decisions that led to mass loss of life. I feel the question that should be asked isn't "Is AI as good as Sully?", it's "Is AI safer than most pilots?" I feel the answer is yes: AI in its current form can be better than most pilots. AI doesn't get distracted, it doesn't get bored, and it doesn't get tired. The reality is, 80% of incidents are human error. Pilots and tower controllers can and will get distracted, bored, and tired.
    2) You mention AI can only merge two things it has already seen. Let's assume that's true, even though it isn't. You can run AI through millions of flight hours of simulations. Remove wings in some simulations, kill engines in others, reduce control in more. Realistically, how many errors can there be in a plane? You're in the air. There's birds, other planes, and clouds. You've got sensors, control surfaces, and thrust. And honestly, that's kind of it. You can train AI on every aircraft failure in history, and create millions of theoretical failures for it to learn from. Create a script to train AI on every system and control failure possible in a given plane (a toy sketch of that kind of failure-injection loop follows after this comment). AI will improve from every single one. As amazing as pilots are, they just can't do that.
    3) AI can already fly planes. The military trained an AI to control a fighter jet in a simulation, and it beat humans every single time. The humans flying against it said it felt like it could read their minds; it reacted to their movements before they even realized they were making them. Humans cannot have that level of reaction time or attention to detail.
    4) Everyone acts like AI will fully replace humans, and I agree, the current version will not do that. But it doesn't have to replace every job. It just has to reduce the number of people doing a given job. Maybe instead of a manager running a team of 10 humans, a manager runs a team of 6 humans and an AI. That's still 4 jobs it replaced. Saying AI isn't going to take jobs is like saying the car won't replace horses. Yeah, horses still exist. They're not extinct, and they're not even rare. But they're not our main transportation anymore.
    This is just some food for thought, and I appreciate a take on AI presented in such a clean format, even if I personally disagree with the conclusions it makes.
    Thanks for making the vid, and fly safe!
    While you're still allowed to 🤖😉🤖
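    On point 2 above: the "millions of simulated failures" part is the straightforward half, and amounts to randomized failure injection into a simulator loop. A stub in Python with the flight model and the learning step left abstract; the failure list, ranges and probabilities are invented for illustration:

    import random

    # Toy failure-injection generator for simulator-based training, as in
    # point 2 above. The failure list, ranges and the "simulator" itself are
    # invented placeholders; a real setup would drive an actual flight model.
    FAILURES = ["engine_1_out", "engine_2_out", "pitot_blocked",
                "hydraulics_low", "elevator_jam", "none"]

    def random_scenario(rng):
        return {
            "failure": rng.choice(FAILURES),
            "altitude_ft": rng.uniform(0, 38000),
            "airspeed_kt": rng.uniform(130, 320),
            "time_of_failure_s": rng.uniform(10, 600),
        }

    def run_training(episodes, seed=0):
        rng = random.Random(seed)
        for episode in range(episodes):
            scenario = random_scenario(rng)
            # A real pipeline would now run the flight model under this scenario
            # and update the controller/policy based on the outcome.
            print(f"episode {episode}: {scenario['failure']} "
                  f"at {scenario['altitude_ft']:.0f} ft")

    run_training(episodes=3)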