AI, ML & Ethics Cynefin Meetup Recordings - Dive into thought-provoking discussions

  • Published 11 Apr 2024
  • On 20 March 2024, The Cynefin Team, including Dave, Donna and our Eagles for the upcoming Triopticon, joined us for a bit of a warm-up in the form of an open meetup to explore the themes, questions and issues around AI, ML and ethics. It was a lively session.
    Whether you’re a tech enthusiast, a professional in the field, or simply curious about the future of AI and ML, this event is perfect for you. Engage in lively debates, network with like-minded individuals, and broaden your understanding of the ethical considerations surrounding AI and ML.
    Explore 👉 thecynefin.co/product/ai-ml-e...

Comments • 18

  • @jimallen8186 • 27 days ago

    The talk of commercial fishing and the race takes me back to Jared Diamond in Collapse, with his tragedy-of-the-commons discussions.

    • @jimallen8186 • 27 days ago

      What did the person cutting the last tree on Easter Island think?

    • @jimallen8186 • 27 days ago

      That our society is set up to reward those who can take big risks gets back to Lewis's The Premonition, and to his The Fifth Risk. Note that those who prevent harm get little credit, while those who rush to fix it, despite perhaps having caused or allowed it, are heroes. That takes me to asking which is the greater sin: committing harm, or allowing it to happen? If you permit it via inaction, are you not guilty too? But in our society you're not. Incentives, and the socialization of costs and risks alongside the privatization of gains and opportunities. Externalities.

  • @jimallen8186 • 27 days ago

    “We’re trying to determine accountability… who will be blamed for when something goes wrong.” Ask Sidney Dekker; that’s not accountability. Avoid blame and the second victim. Accountability is providing your account. And if AI suggestions don’t make sense, do officials have the power to go against them? What do they do when hired consultants make suggestions that don’t make sense?

  • @jimallen8186 • 27 days ago

    Remember also, winning is serial: to win, you need to succeed in only one attempt at a time. Avoiding loss, however, is parallel: you must defeat all attempts. Consider also Rupert Smith's The Utility of Force, which distinguishes War Amongst the People from Industrial War. Consider Sinek's The Infinite Game. Note Russia is in a finite fight while Ukraine is in an infinite one. This also brings up the importance of Fabius. Note that George Washington and Ho Chi Minh were Fabian, as was the Mujahideen, as is the Taliban. Avoid decisive battle. Corbett may have something to add. Boyd would.

  • @jimallen8186 • 27 days ago

    Decision making is always done with partial data.

  • @jimallen8186 • 27 days ago

    “I try to have an idea of the last mile…” “A lot more conscious rigor than we’re used to to navigate ourselves back” see Children of the Magenta. Both literally and figuratively.

  • @jimallen8186 • 27 days ago

    In looking for increased resiliency in security, consider that safety is like security. Look a bit at Safety Differently? Note Todd Conklin, an associate of Sidney Dekker, writes that some languages do not have a word for safety. There is a great Medium article by Ron Butcher, Rethinking Safety: An Illusory and Context-Dependent Construct. Is security a similar illusory construct?

    • @jimallen8186 • 27 days ago

      Remember Boyd here on relativity. If we had perfect efficiency and perfect speed, we'd need no security, as our OODA loop would always beat the others'. We don't have this, so we need security. But security, while cutting against their efficiency, also cuts against ours. There is a balance point. Too much security does make us unable to function. You want enough to inhibit an adversary to the point that the adversary is slower than you. It is relative. We still want to reduce the fog of war for ourselves, though without reducing the fog for others in so doing. Relative.
      Aside: I think we can all recognize you can always increase security but never be secure. Unfortunately, past the point where too much security makes you non-functional, increasing security makes you less secure. Inability to function reduces security. It also reduces safety.

  • @jimallen8186 • 27 days ago

    “Stochastic terrorism,” thanks ‘Good 2 Geek’ via Daily Kos.

    • @jimallen8186 • 27 days ago

      Believe the handle is G2Geek, and believe one of the blogs under G2Geek is credited with coining the term.

    • @jimallen8186 • 27 days ago

      A thought here: look at where blame falls in safety incidents. It falls on the blade of the spear, not usually the handle of the spear (Dekker, Conklin). Yet where is the real control? The same goes for lone wolves and violent actors versus most mouthpieces. When do the handles see justice? How?

  • @jimallen8186 • 27 days ago

    Re the concern that we’re abandoning the scientific method: I thought that didn’t work in complexity, only in complication?

    • @jimallen8186 • 27 days ago

      Falsifiability, good; repeatability, not so much. Variable isolation, also not so much. There is a lack of cause and effect, and of the ability to analyze.

    • @jimallen8186 • 27 days ago

      Look at Michael Lewis’ The Premonition, paying particular attention to the Santa Barbara stories and the Fort Dix flu stories. Explain how the scientific method works there. If one measure alone is insufficient, how can you isolate variables to figure it out? I believe The Atlantic had an article regarding a pandemic death spiral and serial attempts at solutions at that point. The Atlantic’s piece on the deadly myth that human error causes car crashes also plays, as does Wired’s Inferno of the [American] West.

  • @jimallen8186 • 27 days ago

    If you cannot accept machine-based knowledge fed back in, by what right can you accept human-based knowledge fed back in? If feeding outputs back into inputs becomes unstable, isn’t that true of any fed-back outputs? If it is false of human outputs becoming inputs, what is unique about AI? A feedback is a feedback no matter what it feeds back. Humans have come up with some weird stuff. Yet we give humans a pass?