How Should I Be Using A.I. Right Now?

  • Published Apr 1, 2024
  • There’s something of a paradox that has defined my experience with artificial intelligence in this particular moment. It’s clear we’re witnessing the advent of a wildly powerful technology, one that could transform the economy and the way we think about art and creativity and the value of human work itself. At the same time, I can’t for the life of me figure out how to use it in my own day-to-day job.
    So I wanted to understand what I’m missing and get some tips for how I could incorporate A.I. better into my life right now. And Ethan Mollick is the perfect guide: He’s a professor at the Wharton School at the University of Pennsylvania who’s spent countless hours experimenting with different chatbots, noting his insights in his newsletter One Useful Thing (www.oneusefulthing.org/) and in a new book, “Co-Intelligence: Living and Working With A.I.” (www.penguinrandomhouse.com/bo...)
    This conversation covers the basics, including which chatbot to choose and techniques for getting the most useful results. But the conversation goes far beyond that, too: to some of the strange, delightful and slightly unnerving ways that A.I. responds to us, and how you’ll get more out of any chatbot if you think of it as a relationship rather than a tool.
    Mollick says it’s helpful to understand this moment as one of co-creation, in which we all should be trying to make sense of what this technology is going to mean for us. Because it’s not as if you can call up the big A.I. companies and get the answers. “When I talk to OpenAI or Anthropic, they don’t have a hidden instruction manual,” he told me. “There is no list of how you should use this as a writer or as a marketer or as an educator. They don’t even know what the capabilities of these systems are.”
    Book Recommendations:
    The Rise and Fall of American Growth (press.princeton.edu/books/pap...) by Robert J. Gordon
    The Knowledge (www.penguinrandomhouse.com/bo...) by Lewis Dartnell
    Blindsight (us.macmillan.com/books/978125...) by Peter Watts
    Thoughts? Guest suggestions? Email us at ezrakleinshow@nytimes.com.
    You can find transcripts (posted midday) and more episodes of “The Ezra Klein Show” at nytimes.com/ezra-klein-podcast (www.nytimes.com/column/ezra-k...) . Book recommendations from all our guests are listed at www.nytimes.com/article/ezra-... (www.nytimes.com/article/ezra-...) .
    This episode of “The Ezra Klein Show” was produced by Kristin Lin. Fact-checking by Michelle Harris, with Mary Marge Locker and Kate Sinclair. Our senior engineer is Jeff Geld, with additional mixing from Efim Shapiro. Our senior editor is Claire Gordon. The show’s production team also includes Annie Galvin and Rollin Hu. Original music by Isaac Jones. Audience strategy by Kristina Samulewski and Shannon Busta. The executive producer of New York Times Opinion Audio is Annie-Rose Strasser. Special thanks to Sonia Herrero.

Comments • 31

  • @datastrategypros
    @datastrategypros months ago +8

    Here are a few tips from the interview:
    1. Give AI a persona because it wants to slot into a role with you. If you don't tell it what that role should be, it may guess incorrectly or give you less than optimal responses.
    2. You can give the AI recordings of yourself doing a task (e.g., a specific work assignment) and ask it to critique your performance, in order to learn things you could be doing more effectively.
    3. Different models have different strengths and weaknesses. ChatGPT has the most helpful tone, but Claude is more literary and may be a better writer.
    Happy cyborging 😊
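    Tip 1 in practice usually means putting the persona in a system message that travels with every request. A minimal sketch in Python, assuming a chat-style API such as OpenAI's (the persona text, task text and model name below are made up for illustration):

    ```python
    # Tip 1: set a persona in the system message so the model doesn't
    # have to guess what role it should play in the conversation.
    def build_messages(persona: str, task: str) -> list[dict]:
        """Pair a persona-setting system message with the user's request."""
        return [
            {"role": "system", "content": persona},
            {"role": "user", "content": task},
        ]

    messages = build_messages(
        persona="You are a veteran developmental editor for narrative nonfiction.",
        task="Critique this opening paragraph for pacing and clarity: ...",
    )

    # With the OpenAI SDK this would be sent roughly as (requires an API key):
    #   client = openai.OpenAI()
    #   reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    #   print(reply.choices[0].message.content)
    print(messages[0]["content"])  # the persona rides along with every request
    ```

    The same two-message shape works with any chat-style model; only the transport call changes.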

  • @Bronco541
    @Bronco541 months ago +2

    This is the exact podcast I needed right now

  • @bucketofbarnacles
    @bucketofbarnacles 2 months ago +6

    Great interview, though I was troubled by the consistent description of the LLMs as interactive personalities: for example, the Sydney Bing descriptions, "subtle level of read by Sydney Bing," "ability to read the person," "the AI wants to make you happy," "what Sydney was revealing about itself." You're applying labels of human-like complexity and mystery to what amounts to statistical complexity and mystery. The way we interact with LLMs invites this kind of interpretation, but I don't think it does us any good to reinforce the misconception.

    • @lancespellman327
      @lancespellman327 months ago +4

      I felt like they were very clear on that topic and the strong temptation to anthropomorphize LLMs. They stated that it's just math, but it's interesting how that math has created what appears to be a goal-seeking pattern, with LLMs selecting a narrative plot or scenario to engage the user.

    • @MrTeff999
      @MrTeff999 months ago +2

      Ethan Mollick suggested that thinking of AI as a human will prepare you to recognize its aptitudes and shortcomings, which it so happens is how I have learned to recognize when it’s given me wrong answers.
      Interestingly, when I tell it that it’s given me incorrect, speculative, or unsupported answers, it always acknowledges its error and apologizes.
      By comparison, I’ve never had a hammer apologize for hitting the wrong nail.

  • @GlennGaasland
    @GlennGaasland months ago +1

    This was really good and relevant… I hope you do more of these, combining big-picture themes with practical ideas…

  • @tunahelpa5433
    @tunahelpa5433 2 months ago +3

    Based on this, I plan to run my ChatGPT conversations (which have a lot of good info) through Claude 3, so that the end result will hopefully read better. I like my conversations with ChatGPT and consider them a two-party brainstorming session. I know way more about the topic than I did before using ChatGPT.

    • @tunahelpa5433
      @tunahelpa5433 2 months ago +1

      Dang, so many ideas here. I'll have to listen again and take notes!

    • @tunahelpa5433
      @tunahelpa5433 2 months ago +1

      The statement about SparkNotes gives me this idea: It's 5 years in the future. Every person has their own personal AI that learns about them starting in high school and stays with them throughout their subsequent lives, becoming almost a digital twin of that person's brain, like having a human twin who finishes your sentences. The thing is, it feels to you like your digital twin is more intelligent than you!

  • @SpiderMan-od3kr
    @SpiderMan-od3kr 2 months ago +8

    Unlike other technologies, working with AI is very ungratifying. You feel no sense of accomplishment. It doesn't give you the rewarding feeling you get from working your way through a problem with another person. It may deliver products and productivity, but as with many other technologies, whether it improves our lives is an open question.

    • @Bronco541
      @Bronco541 months ago

      Like always, I make the exact same arguments for, like, every fabrication job most people like me have had for years.

    • @dae133
      @dae133 27 days ago

      I tend to agree, but it also depends on how you use it. If you figure out a clever way to use an AI tool, that can definitely feel like an accomplishment. It's just another tool in the toolbox, so whether it improves our lives or ends up being a meaningless output machine depends entirely on how you and I use it (or don't).

  • @hadiza1
    @hadiza1 2 months ago +1

    💙

  • @chrissolorzano6043
    @chrissolorzano6043 2 months ago +1

  • @xzyeee
    @xzyeee months ago

    I think people need to appreciate the fact that machines have, since the middle centuries of the last millennium, assisted human beings with the PSYCHOMOTOR aspect of writing. However, AI tech represents the first time in the history of writing that some of the COGNITIVE load of writing has been transferred to a machine. The thing is, this cognitive aspect of writing is inextricably linked to THINKING. Therefore, when scientists declare that they are working to make AI better or to "advance" the technology, what exactly does that mean? Does it mean that they are working to transfer ALL the cognitive load of writing to a machine? If that's the case, it means that they are working to transfer all the THINKING LOAD ASSOCIATED WITH WRITING to AI. So, they will actually be working to have machines "think" for us. Just remember that the ultimate form of controlling a human is to remove, to transfer, or to shift the responsibility for thinking FROM that person. That is how slaves are made...

  • @LeeWoods
    @LeeWoods 14 days ago

    I personally do not want my AI to have a personality, just to be super helpful. Focus on more helpful things, like making the AI more proactive and making use of my data.
    Not a personality

  • @raginald7mars408
    @raginald7mars408 2 months ago +1

    it uses You
    Self Chernobyl

  • @Bronco541
    @Bronco541 months ago

    It's accelerating at an exponential rate... Good. I don't have forever to live here, not without the help of future tech anyway

  • @gaslitworldf.melissab2897
    @gaslitworldf.melissab2897 months ago

    If you're around 60 years old, you'll remember debates over whether students should be allowed to use calculators in math classes. In the 3rd grade, I recall learning the multiplication table. I struggled with 7, 9 and 12, but finally got through them to pass the test. Thankfully, because of that experience, I now multiply mentally without thinking much about it (mostly in 3s and 5s, for generating tips and adding, respectively).

  • @bluewordsme2
    @bluewordsme2 2 months ago +10

    The irony at the heart of this podcast (and I usually adore Ezra's shows and guests) is that we continuously fail to see that we've turned into a society that focuses on efficiency over the imagined, time-saving over time-savoring... how the tool shapes us... that this writer uses the tool for sticking transitional sentences together. I'm not averse to AI, nor am I a Luddite, and I am also a writer, but it just seems sad that we have fallen so in love with our toys and tools rather than the world outside... seven World Central Kitchen workers killed by a smart targeting munition, and here we are orgasming over the new AI intelligence we've invented... that we have become enamored and throbbingly in love with AI, and focused on it more than the world we're overlooking, is an irony that AI doesn't get, because the programmers don't get it... "building a relationship"... I'm disappointed, Ezra... as a former NYTimes regional paper writer, I would have expected more... now back to reality... best, bb

    • @numbynumb
      @numbynumb months ago +2

      It seems like all these kinds of dilemmas were discussed in this video. They weren't just mindlessly drooling over new tech at all.

    • @bluewordsme2
      @bluewordsme2 months ago +1

      I love Ezra and his podcast; part of my daily morning listening. This podcast was definitely not mindless. I suggest it simply failed to address a fundamental issue, and never addressed the metaphysics of the fact that we live in a world that mindlessly focuses on "efficiency" over something more fundamental. AI is inevitable, just as most are intractably addicted to their phones and media. I'd love for Ezra to interview Jaron Lanier on the topic. That's all I'm saying. Cheers, bb

    • @devlogicg2875
      @devlogicg2875 months ago +3

      On the contrary, this could in fact be the first intelligent species produced without orgasm.

    • @bluewordsme2
      @bluewordsme2 months ago

      😂😂😂 without MALE orgasm. Critical distinction. 😂😂😂🙏

  • @Bronco541
    @Bronco541 27 days ago

    The problems people have in understanding AI are mostly because of dumb preconceptions. Preconceptions will be humanity's downfall.

  • @QuietOC
    @QuietOC 2 months ago +5

    These seem thoroughly unethical, and should be illegal.

  • @MayorMcC666
    @MayorMcC666 2 months ago +3

    I don't trust that there is no AI writing in his book. How could he possibly prove that? How could he possibly prove that he isn't a complete charlatan who is just relying on the AI?

    • @lancespellman327
      @lancespellman327 months ago +3

      Having listened to many of Ezra's podcasts now, I'm continually surprised and impressed with the deep thought and different perspectives he brings to the table that I hadn't considered before. As a consumer of books and podcasts, why would I even care if his insight is sharpened or made clearer through use of AI? It's benefiting me and my understanding of the topic at hand. I work in an industry where people are using AI all the time to make them more productive in their work. The "useful assistant" model is powerful. People who are experts in their field aren't going to have GenAI "do the work." They see where the gaps and flaws are, so they use it to augment their capabilities. In that sense, I've seen those users elevated to an even higher plane of expertise and productivity. I'd classify Ezra that way. The dangerous users are those who don't know enough about their subject matter to say whether GenAI output is good or bad, and so just dogmatically copy/paste results. If I read a book from that person, I'm sure I'd be disappointed by it, maybe not even understanding why, but something would be lacking.

  • @MayorMcC666
    @MayorMcC666 2 months ago

    I think the AI safety researchers want it to get worse too, so that they won't have been in a useless role getting paid six figures.