Is AGI Far? With Robin Hanson, Economist at George Mason University
- Published May 22, 2024
- In this episode, Nathan sits down with Robin Hanson, associate professor of economics at George Mason University and researcher at Oxford’s Future of Humanity Institute. They discuss the comparison of human brains to LLMs and legacy software systems, what it would take for AI and automation to significantly impact the economy, our relationships with AI and the moral weight it carries, and much more. Try the Brave search API for free for up to 2000 queries per month at brave.com/api
LINKS:
- Robin’s Book, The Age of Em: ageofem.com/
- Robin’s essay on Automation: www.overcomingbias.com/p/no-r...
- Robin’s Blog: www.overcomingbias.com/
- AI Scouting Report: • AI Scouting Report - P...
- Dr. Isaac Kohane Episode: • The AI Revolution in M...
SPONSORS:
The Brave search API can be used to assemble a data set to train your AI models and help with retrieval augmentation at the time of inference, all while remaining affordable with developer-first pricing. Integrating the Brave search API into your workflow translates to more ethical data sourcing and more human-representative data sets. Try the Brave search API for free for up to 2000 queries per month at brave.com/api
Omneky is an omnichannel creative generation platform that lets you launch hundreds of thousands of ad iterations that actually work, customized across all platforms, with a click of a button. Omneky combines generative AI and real-time advertising data. Mention "Cog Rev" for 10% off: www.omneky.com
NetSuite has 25 years of providing financial software for all your business needs. More than 36,000 businesses have already upgraded to NetSuite by Oracle, gaining visibility and control over their financials, inventory, HR, eCommerce, and more. If you're looking for an ERP platform ✅ head to NetSuite: netsuite.com/cognitive and download your own customized KPI checklist.
X/SOCIAL:
@labenz
@robinhanson (Robin)
@CogRev_Podcast
TIMESTAMPS
(00:00) Preview
(07:10) Why our current time is a “dream time” and the move back to a Malthusian world
(13:30) What sort of world should we be striving for?
(13:40) Sponsor - Brave
(17:50) Distinguishing value talk from factual talk
(18:00) Comparing and contrasting Ems to LLMs
(22:30) The comparison of human brains to legacy software systems
(30:52) Sponsor - NetSuite
(41:01) AIs in medicine
(53:30) A several century innovation pause
(55:30) Achieving full human level AI in the next 60-90 years
(1:03:55) Chess and routine benchmarks not a good predictor of AI performance in the economy
(1:07:44) Reaching and exceeding human-level AI in the next 1000 years
(1:11:40) Losing technologies tied to scale economies
(1:12:00) Why AI is hard to maintain in the long run
(1:12:20) Standard deviation in automation
(1:14:05) Computing power grows exponentially but automation grows steadily
(1:15:50) AI art generation and deepfakes
(1:21:42) The economics of AI-powered coding
(1:33:51) Merging LLMs
(1:36:02) Rot in software and the human brain
(1:40:18) Parallelism in LLMs and brain design
(1:41:00) Moral weight for AIs, enslavement, and cooperation with AI
(1:47:10) What would change Robin’s mind about the future
(1:49:18) Wrap
Music licenses:
DCLFH1EAJEQP7YL7
B1UKC8AFQN3FVE2X - Science & Technology
Very interesting interview. I appreciate that Nathan took the time to try to understand Robin's perspective, and really drill down on where he's coming from. That empathic skill is very useful to help give a broad perspective on AI developments, and that intellectual humility is a big reason I trust Nathan's take so much.
I see lots of people in the comments suggesting that Robin is ignorant and arrogant. I found myself struggling with this as well, especially since there are some places where I would characterise Robin as being confidently wrong. (On LLMs not speeding up or improving coding, for instance, on which all the evidence I've seen suggests they help regardless of previous skill level.) Yet there are other instances where I think Robin provides a unique and useful perspective: on rot, on the history of automation, on the history of people mis-predicting future progress. You don't have to agree with all his ideas to recognise that they still have merit.
I liked this interview. I feel a great sense of solidarity for Labenz having to explain this stuff to someone whose preconceptions and lack of up-to-date knowledge anchor them to the past. I think we've all been there before with friends, family, or coworkers. It can be isolating to grapple with the uncertainties of this sharp transition while those around us don't grasp the extent of what's happening, and I appreciate seeing the way Labenz handles himself in this context.
A great guest and topic. I look forward to listening!
Love seeing the interviewer be totally amazed at his realization that humans are a lot smarter and more capable than we appear on the surface. 😂 Great job Dr. Hanson and thank you both for the excellent interview
I think Robin's entire argument is based on the assumption that we won't get there anytime soon because we never got there before. It also leaves out the potential for a paradigm shift based on emerging technologies. Some of these technologies are things we didn't even see coming. The internet is an example. Yes, I am sure someone may have thought of a distant future where we may be able to communicate over long distances, but not like this, or how quickly it was achieved. The rate of human innovation would blow anyone's mind just a few hundred years ago. It's possible that LLMs are the key to the next AI innovation, which may very well be the last true human invention. Then, after that, the rate of innovation would be measured in years and not decades... maybe even months. 80-90 years to achieve AGI? I don't know about that. Compare today's technologies to what was available 80-90 years ago. No internet, no satellites, no nuclear plants... almost none of the things we take for granted today were mainstream. Humans did that with little access to a knowledge base, manual or limited computation, limited or no simulated environments, etc. Perhaps our expectations of what AGI is, or what passing the Turing test means, will change as we continue to raise the bar. But one thing is guaranteed: buckle your seatbelt, Dorothy, 'cause Kansas is going bye-bye.
That's half of his argument. Other half: no major shift in economic productivity is yet visible, nor are major job sectors yet being replaced with AI.
Yeah 80-90 years is an insane time frame. Even 40+ years is pretty much pure speculation.
Could it be that writing a book about bias could, going forward, compound one's biases due to the belief that you're more impervious to bias?
I think so. The reasoning just isn't compelling unless, like him, you've already made up your mind in favor of his views!
Dr. Hanson is off-the-scale brilliant. Doesn't matter what age he is; he's sharp compared to all the tools in the shed. His comment is very important: "the trend is log normal, and so even exponential growth results in a linear increase." The growth function is exponential but the gain function is logarithmic. So log(e^x) = x * log(e) = x * 1 = x: x grows at the same rate in the past, present, and future, linearly, a straight line. Always new yet nothing new.
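The commenter's arithmetic can be restated as a short derivation (a minimal sketch, taking "log" to be the natural log, so that log(e) = 1):

```latex
% If capability grows exponentially in time, y(t) = e^{kt},
% while perceived gain is the log of capability, g = \ln y, then
g(t) = \ln y(t) = \ln\!\left(e^{kt}\right) = kt .
% Exponential growth in y thus appears as only linear growth in g.
```

In other words, if each new "unit" of felt progress requires a multiplicative jump in underlying capability, exponentially growing capability produces a straight-line sense of progress, which is the point being attributed to Hanson here.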
80's doctors being outperformed by an 80's expert system isn't the same as 2023 doctors being outperformed by an LLM.
80's doctors didn't have the internet, for example. I'm not surprised a database with a frontend knew more than them.
Regardless, Robin Hanson's grabby aliens hypothesis is excellent.
30 minutes into the interview I thought I must be biased, thinking this guy Robin is so full of it, so I went back to the beginning with as open a mind as I could. No use, he is really full of it. I congratulate Nathan for being so patient and for his efforts to make this a worthwhile interview. Sorry, I couldn't go past 30 minutes.
Could I ask why?
Robin seems like one of the smartest people on the subject, imo.
He's far from full of it. That said, a summary of the interview could be much shorter (i.e., 10 minutes).
He really took Roko's basilisk seriously.
60 to 90 years??!??
Can I short that stock?😂😂😂
Economists, despite their self-confidence, have an appalling track record when it comes to forecasting the future of our economies much less technologies.
So let me see if I understand this: people have been wrong about AGI being near in the past, and based on these priors alone I should predict that when people say AGI is near, they are probably wrong. This is all the evidence needed; I don't need to pay attention to any other "inside view" facts about current developments or other details about the matter.
If it turns out Robin is right about AGI, he still got the right answer based on faulty reasoning.
Not true. In this interview Hanson said he would adjust his probability as more and more jobs get automated.
Not to be that guy... but didn't a paper just come out last week about merging heterogeneous LMs?
And that's far from the first. Have you seen StitchNet?
An innovation on the order of the internet or the search engine, then: making a significant incremental but not revolutionary change. Even if a substantial number of jobs can be automated, there would still be material and resource constraints preventing superabundance. Total growth may slow with population, but per capita wealth could still grow faster, especially as material and resource constraints lessen, at least wealth in terms of material and resource access rather than higher prices due to competition for them.
I could tell before you asked that this guy hasn't spent much time personally exploring and experimenting with ChatGPT. He doesn't fully appreciate it or what it obviously implies about the near future.
The reason most people believe we are close to AGI is the YouTube algorithm. It favors exciting news, hype, revolutions, and shocking thumbnails over realism. But the majority of researchers, i.e. those who build AI, have reached a consensus that current technology won't lead to AGI.
Powered through despite some frustration with the way Robin thinks about things. It just seems like an extremely safe/conservative way to analyse the current state of AI development, only admitting that AI has reached substantial thresholds once substantial parts of the economy have been automated by them. When someone says "You may say that the new things LLMs do now show that they are close to AGI, but people used to think that mastering chess would require AGI and they were wrong", it's frustrating because they'll be correct up until the last step, where they'll be wrong. I would prefer to have someone say "These types of tasks or cognitive functions/displays are what would indicate to me that we're close to AGI...".
What I find particularly strange though, is his view that we won't be likely to achieve all-jobs-automating AI within 60 years, given the progress in unsupervised model cognitive ability over the last 5 years. He doesn't see the current LLM paradigm as significant in any way, it seems?
All that being said, well done Nathan for your patience, and thank you, I still feel like my thoughts have been expanded by this interview.
This might be seen as a sophisticated smoke screen thrown up by someone who is a full accelerationist. If he convinces people that AGI is decades away, who will oppose accelerationism? These historical references aren't dumb; they have some persuasive value. But they miss the possibility that AI progress is so fast that his benchmarks are lagging indicators and fail to notice the AGI threshold period.
@kreek22 Having known about Robin for a while now, I don't think it's a smoke screen. But he's an economist through and through, and has no issues with lagging indicators, as is made clear by what he says in the video and elsewhere. I just think that embracing the outside view this fully, with no care that at some point he'll be wrong and only find out after the fact, is frustrating, not just because he can (in the present) smugly dismiss any seeming progress on AGI until jobs have been taken over (give that another year or two...), but also because his arguments sound perfectly reasonable and it's hard to argue against them.
Widespread implementation may just take $7 Trillion investment in infrastructure worldwide.
Human-level AI might be a few years away (80% in 5 years), but it won't magic-wand itself over all the things we do, and it won't come from any of the known companies doing ML/DL work today. All of the dialog on the topic looks at what it will do, post-deployment, using existing app downloads as a reference. It will take years for regulation and integration to be cleared before we see large-scale impact in any one domain. It also won't be for ALL domains, and in all the forms on everyone's wish list. Post-AGI is "white collar" work.
*Not taking Hanson's side, because no mind uploading, no economic output as a measure, etc.
There are several aspects of intelligence, and LLMs can pass the Turing test. Therefore every cognitive ability of the AI is somehow already within the range of human abilities. So it is all about strengths and weaknesses, not about human level, yes or no. The answer is yes: human-level intelligence is already here.
What many people think human-level AI is: the ability to solve all tasks they themselves can do, using arbitrary tools and time.
But this is far more than human-level AI.
Nevertheless, GPT-4 could do the job of Joe Biden, if the infrastructure of tools and information transport were established.
I always have cognitive revolutions when I drink turpentine
Nathan really tried but couldn't convince Robin of anything (I stopped watching in the middle of the video). I think he really underestimates the current trajectory and what current systems are capable of. No way we're more than a couple of years from having most economically valuable work (I could already classify some today) done by AI.
Yeah, it's really bad to say "50 years ago we thought we were close, but..." This time IS different due to the cost of compute, the internet, hardware architecture, etc.
So many people like to use history as their reason for why things won't be different this time, yet no matter how many times I ask, they can never actually point to a time in history where we had machines that could physically or mentally do what the average person can do.
Yet history is their evidence?
I could barely get through the preview. "Progress" doesn't matter if you're going in the wrong direction, and despite popular results from overpriced tech demos, the field of AI is still a mostly flat line (when adjusted for compute). That's the Bitter Lesson: brute force SEEMS to work, but you need more and more of it for each new challenge (and to be clear, not all benchmarks/challenges are equal).
Even if we had it, humans would reject it. Only CEOs and economists talk about ‘keeping the economy growing’. And they have a reason.
Maybe before automating most economically valuable work, we could start with driving. This stuff is on long time horizons, and there are already diminishing returns.
The major danger for the foreseeable future is giving too much autonomy to dumb technology.
So has Hanson forgotten about normalcy bias?
Big difference between now & the 80s is the internet allowing decentralized communication (and groundswell movements). I don't think the pro-nuclear movement could have gotten a fair hearing back then, nor could pro-doctor AI activists.
Still great to hear Robin on, as always. As much as I hope he's wrong about a lot of what he says lol.
What does he think that Ems spend money on? Why would they be motivated to do anything for a company?
I stopped watching halfway through too. I was trying to summarize what was bothering me about it.
The issue is that this is an economics professor who is relying on past performance as a framework to predict future outcomes, which should be a basic Economics 101 no-no.
Robin Hanson is likely correct in asserting that AGI, or Artificial General Intelligence, remains a distant goal. This belief isn't necessarily a question of being open-minded or up-to-date. There's the fact that current technology, such as Large Language Models (LLMs) and transformers, only captures a fraction of the human cognition process. Essentially, they mimic the pattern recognition capabilities of the language-dominant region, usually the left hemisphere of the brain. In doing so, they grant LLMs a form of reasoning capability.
However, these models have not had the opportunity to learn about elements of perception. For instance, the models lack understanding of spatial cognition and three-dimensional processing, abilities attributed to the Parietal Lobe and Hippocampus in humans. Furthermore, the intricacies of human psychology, emotions, and behaviors still remain inaccessible to these models, as they haven't been encoded or learned by them.
Disruptive ad breaks, not a good format for the interview.
the bias is big on this one, couldn't watch more than 10 minutes.
"we're rich" and "we're not pressured to do what it takes to survive"
on what planet is he living?
sheer speculation
RH’s view is boring, but maybe he’s right. Don’t overhype and just live your day-to-day life. Maybe there won’t be any AI revolution in our lifetime.
Had to turn it off after he said he’s only used GPT-4 to check if the students had cheated, how silly 🤪 😂 old man thinks he’s smart 😂😂😂😂
gary marcus on crack
60-90 years? I've worked extensively with GPT-4. It's already 90% human level or better.
AGI 2037
Robin Hanson is the Anti-Kurzweil.
Young David Eagleman talking with an older Eliezer Yudkowsky.
Robin keeps coming back to historical arguments, or bringing up the past to try to reason about the future. So tired of these kinds of arguments...
Oh geez. RH thought he had problems when his comments about sex and women's rights went South...
Just wait til people pick up on the "slavery" portion of this interview!
Nobody gave him any training? Not even "Robin! You have to say 'Slavery is abhorrent but historically...' before every sentence about slavery!!!"
I was always uncomfortable with his lack of ... moral CAVEATs when talking about ems.
I guess not all smart people have adequate empathy to prepare for today's types of discourse and viral media.
RIP RH! 🤦🏻♂️
Really? Robin Hanson? Man the AI community is really something. This is extremely disappointing. Unsubbed.