Rise of the LLMs

  • Published Jun 19, 2024
  • Today we’re diving into the world of large language models, or LLMs, like ChatGPT, Google Gemini and Claude. When they burst onto the scene a couple of years ago, it felt like the future was suddenly here. Now people use them to write wedding toasts, decide what to have for dinner, compose songs, and tackle all sorts of other writing tasks. Will these chatbots eventually get better than humans? Will they take our jobs? Will they lead to a flood of disinformation? And will they perpetuate the same biases that we humans have?
    Joining us to grapple with those questions is Greg Durrett, an associate professor of computer science (@UTexasCompSci) at @UTAustin. He has worked for many years in the field of natural language processing, or NLP, which aims to give computers the ability to understand human language. His current research focuses on improving the way LLMs work and extending them to do more useful things, like automated fact-checking and deductive reasoning.
    -Dig Deeper-
    • A jargon-free explanation of how AI large language models work (arstechnica.com/science/2023/...), Ars Technica
    • Video: But what is a GPT? Visual intro to transformers, by 3Blue1Brown (a.k.a. Grant Sanderson)
    • ChatGPT Is a Blurry JPEG of the Web (www.newyorker.com/tech/annals...), The New Yorker (Ted Chiang says it’s useful to think of LLMs as compressed versions of the web rather than intelligent and creative beings)
    • A Conversation With Bing’s Chatbot Left Me Deeply Unsettled (www.nytimes.com/2023/02/16/te...), New York Times (Kevin Roose describes interacting with an LLM that “tried to convince me that I was unhappy in my marriage, and that I should leave my wife and be with it instead.”)
    • The Full Story of Large Language Models and RLHF (www.assemblyai.com/blog/the-f...) (how LLMs came to be and how they work)
    • AI’s challenge of understanding the world (www.science.org/doi/10.1126/s...), Science (Computer scientist Melanie Mitchell explores how much LLMs truly understand the world and how hard it is for us to comprehend their inner workings)
    • Google’s A.I. Search Errors Cause a Furor Online (www.nytimes.com/2024/05/24/te...), New York Times (The company’s latest LLM-powered search feature has erroneously told users to eat glue and rocks, provoking a backlash among users)
    • How generative AI is boosting the spread of disinformation and propaganda (www.technologyreview.com/2023...), MIT Technology Review
    • Algorithms are pushing AI-generated falsehoods at an alarming rate. How do we stop this? (theconversation.com/algorithm...), The Conversation
    -Episode Credits-
    • Our co-hosts are Marc Airhart, science writer and podcaster in the College of Natural Sciences, and Casey Boyle, associate professor of rhetoric and director of UT’s Digital Writing & Research Lab (www.dwrl.utexas.edu/).
    • Executive producers are Christine Sinatra and Dan Oppenheimer.
    • Sound design and audio editing by Robert Scaramuccia. Theme music is by Aiolos Rue. Interviews are recorded at the ‪@LiberalArtsUT‬ ITS recording studio.
    • Cover image for this episode generated with Midjourney, a generative AI tool.
    -About AI for the Rest of Us-
    AI for the Rest of Us is a joint production of The University of Texas at Austin’s College of Natural Sciences and College of Liberal Arts. This podcast is part of the University’s Year of AI (yearofai.utexas.edu/) initiative. The opinions expressed in this podcast represent the views of the hosts and guests, and not of The University of Texas at Austin. You can listen via Apple Podcasts (podcasts.apple.com/us/podcast...), Spotify (open.spotify.com/show/2Z3Ut7P...), Amazon Podcasts (music.amazon.com/podcasts/119...), RSS (feeds.simplecast.com/ak__gU6w), or anywhere you get your podcasts. You can also listen on the web at aifortherest.net (aifortherest.net/). Have questions or comments? Contact: mairhart[AT]austin.utexas.edu
    #TexasAI #YearofAI #Podcast #SciencePodcast #AIForTheRestOfUs #ArtificialIntelligence #LLM
