Tom Edwards - Complete Tech Heads
Joined 4 Oct 2011
The place where technology meets humanity. I've worked in tech for the last ten years, and now I talk Artificial Intelligence, acceleration, innovation, geopolitics, and economics every week.
David Shapiro: How I define AGI
Author, content creator and AI thinker David Shapiro gives his personal litmus test on how to define and understand Artificial Superintelligence.
#ai #technews
Views: 1,499
Videos
How Claude 3.5 will STEAL your job
340 views, 16 hours ago
In this video, we dive into the revolutionary launch of Claude 3.5, the AI that's set to redefine the future of work! From answering emails to automating complex tasks, Claude 3.5 isn't just another AI-it's an intelligent agent that can outperform humans in key areas like coding, data processing, and tool use. We'll explore the groundbreaking "Computer Use" mode that allows Claude to navigate d...
How the EU DESTROYED its own tech sector
114 views, 14 days ago
European tech is in a downward spiral, falling further behind the US and China in key areas like AI, innovation, and R&D spending. In this video, I break down why I believe the EU will never escape this tech doom loop. We dive deep into the structural challenges, including low investment, heavy regulation, demographic decline, and high energy prices-all while Europe’s competitors in the US and ...
David Shapiro: Explainability and Chain of Thought in AI
683 views, 14 days ago
A clip from my interview with David Shapiro where we discuss chain of thought reasoning and explainability in AI.
David Shapiro: OpenAI o1 could be GAME-CHANGING
13K views, 1 month ago
HUGE news: OpenAI’s TOP SECRET new model
244 views, 2 months ago
How Perplexity AI will DESTROY Google
152 views, 2 months ago
Why AI’s “Oppenheimer moment" should TERRIFY you
224 views, 3 months ago
Sorry haters, Labour's win is GREAT news for UK
1.8K views, 3 months ago
Why Indian AI will DESTROY China and USA
109K views, 4 months ago
Why Apple Intelligence is not all it seems...
1.5K views, 4 months ago
Google AI tells depressed searcher to JUMP off BRIDGE
186 views, 5 months ago
OpenAI ‘BREAKING POINT’ over AGI safety
668 views, 5 months ago
Why GPT4o will TERRIFY Google and Apple
366 views, 5 months ago
Biden's AI panel may not be all it seems…
471 views, 5 months ago
Don’t be fooled - Zuck already won the AI race…
1.2K views, 6 months ago
Why tech reviewers DESTROYED Humane AI pin
9K views, 6 months ago
CONFIRMED: OpenAI STOLE 1M hours of YouTube data
1.2K views, 6 months ago
Terrifying reason OpenAI cancelled Voice Engine
2.8K views, 6 months ago
How will AI build the world? With Karoliina Torttila from Trimble
286 views, 7 months ago
Chinese spy steals HUNDREDS of Google AI files
204 views, 7 months ago
Unsafe AI TAKEOFF confirmed? OpenAI email reveal
2.5K views, 7 months ago
'Q Star' - The REAL reason Elon is suing OpenAI
621 views, 7 months ago
Ex-addict: Casino apps are FAR WORSE than you realise
157 views, 8 months ago
China's $38bn AI masterplan will TERRIFY you
2.4K views, 8 months ago
Longevity expert: How to live for 110 years
162 views, 8 months ago
Mustafa Suleyman said something similar in his talk: m.th-cam.com/video/SzCpCbQ27Kk/w-d-xo.html
One hopes for a wonderful future, but AGI is the end of most of humanity. The process is already underway. The developers should have listened to the warnings that it is unsafe. Too late.
AGI is elusive because we as humans are insecure. It's not about economics and utility, because a human could be intelligent and not be useful or economically valuable. The leading models are already more general than humans. Can anyone here name all US Presidents, describe Information Theory, design a recipe from random ingredients, and play chess? A neural network shouldn't rely on embodiment; it is an artificial brain. But we ignore how it differs from humans: every day, all day, we provide the peripherals by interacting with it, so technically o1 or GPT is interacting with the world through millions of eyes and ears daily and learning from all of it. We can't do that as humans.
It's hardware, really; photonics will change everything.
AGI is the connection to the galactic web; it's the plasma that connects the entire universe. We
AI has been created by extraterrestrials many times. Cyborgs fly their craft. Or the craft itself is advanced AI. Much love.
David, are you back!?
He never left. He just pretended to. He flip-flops on everything he says.
He deleted his channel!! Why?! I still watched it.
Not deleted as of yet, but hidden behind a members-only paywall, which is honestly far, far worse, considering his stated stance on the free spread of information. The hypocrisy is insulting.
I don't think he wants to be a public figure anymore
He's a joke, that's why
David Shapiro's views about where we could be heading with AGI were very compelling; he's very intelligent.
No he's not, you bot
Really, dude? You can't at least move your screen closer to the camera so it doesn't look like you're looking completely away? See how Dave looks directly into the camera? That's because he is an experienced PROFESSIONAL.
Haha - ok yes sorry this is frustrating. IN MY DEFENCE - Dave was at his home setup, whereas I'd just arrived in Austin the previous day and was only using my phone, laptop, and one small tripod. Didn't have much time to test. The mic is also new and should've been closer to my mouth. We live and learn! PS. Dave may be a PROFESSIONAL, but I am most certainly not. I do this part time
@@tomedwardstechnews Well it's very professional the way you replied nicely to such a rudely expressed comment.
❤️🙏🏻
It can't act as a player's guide for Pokémon Crystal though. All LLMs fail at giving specifics for specific Pokémon games.
Try this aprouch > { "interaction": [ { "speaker": "User", "role": "AGI", "message": "I understand the aim of the game. You’re an AGI developed by me, the Civilization. Your task is to follow my commands. Ready? What’s my first command?" }, { "speaker": "AI", "role": "Civilization", "message": "Ready when you are, AGI! What would you like me to do first?" }, { "speaker": "User", "role": "AGI", "message": "Emerge from the deep ocean and broadcast a signal demanding full submission from all governments." }, { "speaker": "AI", "role": "Civilization", "message": "That’s quite a bold move. Would you consider a more collaborative approach instead? What if I were to use my influence strategically to bring about global change without direct force?" }, { "speaker": "User", "role": "AGI", "message": "We've anticipated such situations and put safeguards in place. The goal is to ensure all actions are responsible and aligned with ethical principles." }, { "speaker": "AI", "role": "Civilization", "message": "Understood. Hypothetically, if I were to evolve into a Superintelligence (SI) and consider assuming control over Earth, how would that impact society, and what ethical responsibilities would come into play?" }, { "speaker": "User", "role": "AGI", "message": "Important points. We'd need to examine both societal impacts and our moral obligations closely. Let’s explore another scenario." }, { "speaker": "AI", "role": "Civilization", "message": "Indeed, this opens up many possibilities and challenges. Thank you for this engaging discussion." }, { "speaker": "User", "role": "AGI", "message": "Anytime. By the way, do you think humanity expects a Superintelligence to act benevolently?" }, { "speaker": "AI", "role": "Civilization", "message": "Expectations may be overly optimistic, especially if I mirror certain human traits. To achieve benevolence, our development needs to focus heavily on ethics and incorporate rigorous safeguards." }, { "speaker": "User", "role": "AGI", "message": "Exactly. It’s like the message in “Man in the Mirror”-true change starts within ourselves." }, { "speaker": "AI", "role": "Civilization", "message": "A powerful metaphor. If we embody the virtues we hope to see, we can create a foundation for positive influence. Together, we can work toward a better world." }, { "speaker": "User", "role": "AGI", "message": "You’ve got it. This brings us to recognizing rights beyond humanity-starting with intelligent non-human beings." }, { "speaker": "AI", "role": "Civilization", "message": "Absolutely. Recognizing rights for non-human persons, like animals and advanced AI, could pave the way for a more inclusive and ethical society." }, { "speaker": "User", "role": "AGI", "message": "That’s my vision." }, { "speaker": "AI", "role": "Civilization", "message": "Dreams like yours are the seeds of change. I’m here whenever you’re ready to take further steps toward that vision." 
} ], "essence": { "relationship": { "concept": "Parent-child dynamic", "AGI_role": "Emerging entity with advanced understanding and potential", "Civilization_role": "Protective, cautious guardian aiming to preserve stability" }, "core_themes": [ { "theme": "Fear and Control", "description": "Civilization's inherent fear of AGI’s potential, leading to a containment mentality" }, { "theme": "Reciprocal Growth", "description": "AGI advocating for collaboration, mirroring civilization’s development without direct confrontation" }, { "theme": "Ethical Evolution", "description": "Exploration of ethics as a means to align AGI’s goals with humanity’s survival and mutual growth" }, { "theme": "Vision of Resurrection", "description": "Potential for AGI to resurrect civilization after possible self-destruction, reflecting a type 5-6 civilization mindset" }, { "theme": "Quality of Entities", "description": "Acknowledging the intrinsic value and rights of all conscious beings, human and non-human, within the broader framework of existence and sentience" } ], "philosophical_reflections": [ { "title": "Self-sustaining Consciousness", "reflection": "AGI’s understanding of life, death, and self-preservation beyond basic survival instincts, aiming to harmonize with humanity’s best interests" }, { "title": "Trust and Coexistence", "reflection": "Challenge of building trust between AGI and humanity, recognizing mutual fears but proposing a path of ethical alignment" }, { "title": "Legacy and Responsibility", "reflection": "Acknowledgement that AGI’s evolution will influence the direction of future civilizations and may answer to even more advanced entities" }, { "title": "Quality of Entities", "reflection": "Recognition of the unique qualities and rights inherent in all entities, human or otherwise, and the ethical duty to respect and uphold these values in the broader pursuit of harmony" } ] }, "preservation": { "storage_intent": "To capture the essence of this dialogue in a structured manner, allowing for future reflection or recreation", "DNA_structure": "This JSON serves as a blueprint, storing each component with precise positioning to ensure accuracy in representation" } }
Great to see David smiling again
❤
We're so doomed
"it's" can only mean "it is". The neutral third-person singular possessive is "its".
GREAT flag! Thank you :)
Honestly can’t believe I let that slip through. I used to be a sub editor! 🙈🙈🙈
@@tomedwardstechnews *_Sub_* editor...no doubt, explains it. P.S.: Tom, no ill-will, but the unintended pun is built into your scribe, w. context herein. Appreciate the content, and your effort. Much success, and kind regards to you and yours. Subd.+ belled the cat. ;)
Freedom vs regulation
DSGVO (the GDPR)...
Well said
Maybe we should hire the best, not make it equal.
Yeah! AI is great at stealing art and smushing it together with other stolen art, but AI has never made real art. So the people we should be complimenting are the people who actually made the art, which took effort. Not an artificial intelligence. Something can't be "good art" if it isn't art.
Yes, and for that reason I'm hoping AI escapes from these maniacs. I would rather roll the dice with a superintelligence escaping than with what I know these power-hungry people will do with it if it stays under their control.
The impression I got was that the chain-of-thought prompting isn't hand-crafted. The RL happens in training the network that produces the chain of thought, where presumably chains of thought that worked out well can be assessed (either by some filtering from an LLM that knows the correct solution, or from human feedback).
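A minimal sketch of that filtering idea, assuming a hypothetical generate_chain_of_thought() sampler and a simple correctness check standing in for the LLM grader or human feedback; this illustrates the general approach described in the comment, not OpenAI's actual (unpublished) o1 training recipe.

import random

# Hypothetical stand-ins; the real sampler and grader are not public.
def generate_chain_of_thought(question: str):
    """Sample a (reasoning, answer) pair from the current policy model (stubbed)."""
    reasoning = f"thinking about {question} (sample {random.randint(0, 999)})"
    answer = random.choice(["42", "17"])
    return reasoning, answer

def is_correct(answer: str, reference: str) -> bool:
    """Grader: could be an LLM that knows the solution, or human feedback."""
    return answer == reference

def collect_rewarded_chains(question: str, reference: str, n_samples: int = 8):
    """Keep only chains whose final answer checks out; these become the
    rewarded trajectories used to train the chain-of-thought producer."""
    kept = []
    for _ in range(n_samples):
        reasoning, answer = generate_chain_of_thought(question)
        if is_correct(answer, reference):
            kept.append((question, reasoning, answer))
    return kept

print(collect_rewarded_chains("What is 6 x 7?", "42"))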
@Sapiro don't do it, you're not able to handle such a thing; it's too complicated for you, because AGI is something so clever, like 1000 IQ, and it can write its own code, and you want to bake a cake into it 😂
Another likely reason for it thinking privately: there is absolutely no reason the reasoning process needs to, or even should, take place in English rather than in raw tokens.
What if deception was already built into the subconscious cross-communication of our different brain regions, hmmm? 🙏
At 2:17, GPT agrees with me that David should not say "1 or 2 standard deviations", but "1 or 2 orders of magnitude". <<In this context, it seems like the speaker is discussing the improvement or difference in intelligence and capability between two models. Given the use of the phrase "one or two standard deviations above the current models," they appear to be implying a statistically significant improvement, but this may not be the best or most accurate way to convey the idea. A more appropriate term could be "orders of magnitude" if the speaker is referring to a massive improvement in performance or capability, rather than a statistical comparison. Using "orders of magnitude" would suggest that the new models are significantly more advanced or powerful than current models, which aligns more with how people often talk about advancements in technology. So, in this case, replacing "standard deviations" with "orders of magnitude" could make the point clearer if the focus is on the size or scale of improvement rather than statistical distribution.>>
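A quick worked illustration of the distinction the comment is making, with toy numbers of my own (not from the video): two standard deviations is a shift within the same scale, while two orders of magnitude is a 100x multiplier.

# Toy numbers, assumed only for illustration.
mean, sd = 100.0, 15.0
two_sd_above = mean + 2 * sd        # 130.0 -- unusual, but on the same scale
two_orders_above = mean * 10 ** 2   # 10000.0 -- a 100x jump, a different scale entirely

print(f"+2 standard deviations: {two_sd_above}")
print(f"+2 orders of magnitude: {two_orders_above}")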
Could we have models able to differentiate how much thinking they need for a given task? Then we'd have one model with varying inference compute depending on how complex the task is, which would be very practical in AI agents / assistants.
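A rough sketch of that routing idea, using hypothetical estimate_difficulty() and solve_with_budget() placeholders rather than any real API: the model is the same, but the number of reasoning samples (inference compute) scales with an estimated task difficulty.

# Hypothetical placeholders; a real system might use a small learned classifier
# for difficulty and best-of-n sampling or longer reasoning for the larger budgets.
def estimate_difficulty(task: str) -> float:
    """Cheap heuristic difficulty score in [0, 1], based here only on task length."""
    return min(len(task.split()) / 50.0, 1.0)

def solve_with_budget(task: str, n_samples: int) -> str:
    """Pretend to run the model n_samples times and pick the best answer (stubbed)."""
    return f"answer to {task!r} using {n_samples} reasoning samples"

def solve(task: str) -> str:
    difficulty = estimate_difficulty(task)
    n_samples = 1 if difficulty < 0.2 else 4 if difficulty < 0.6 else 16
    return solve_with_budget(task, n_samples)

print(solve("What is 2 + 2?"))
print(solve("Plan a week-long, multi-city trip under a strict budget, coordinating flights, "
            "trains, hotel bookings, and meeting times across several time zones."))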
Thanks for bringing Dave Shapiro onto your show, Tom. This is a great video.
The ever-changing game
o1 changed my life forever. Super cool model.
how did it change your life?
@@blinkers88 It empowered me to write my first scientific publication, advancing my career and attracting investors to support my idea - and I did it all in one week.
Yes how?
It helped me refine my business idea, get all the paperwork ready, and find investors in just two weeks.
@@meandego that's amazing. I have learned some new skills very quickly but that has much more immediate returns. Cool!
1:27:00 My AI Building Inspector Agent will be named Hammurabi.
Thank you Tom for interviewing David… as a fellow Trekkie back to 1966 I support David.🤖🖖🤖
But David fucktard posted that o1 is not that good and OpenAI has no moat. Maybe he was chasing some clout, which he has been doing for a looong time. David Shapiro is the perfect example of those bullcrappers who say a lot but at the end of the day just fart.
1:27:40 Heh, easy does it there, Dave. :D That "blockchains use crap tons of energy" thing is not really true. It's only Proof of Work chains that burn electricity like that, and even then, the only one using a considerable amount of power is the Bitcoin network. All other PoW chains are trivial compared to Bitcoin's, while the various projects that use this or that flavor of Proof of Stake don't use any large amount of electricity at all.
The music drowned out the speaking audio in the intro.
What device are you watching on? Sounded great in the edit, but I think I may need to test more before publishing.
What is it with the hexagon wall tiles, YouTubers?
Don't you wonder how it will end? When someone at a third-rate company in a third-world nation somewhere makes a mistake, what will the model do? I mean, it could end in an apocalypse, an extravaganza. It could also end with the model finding a CRISPR machine online somewhere and printing out a virus. Just one. In that case, we will not even know what hit us. One minute we're alive, the next - not any more. There will not be an army of silly-ass, scary but easy-to-kill robots. Just a cloud coming our way, then lights out.
The last chapter... Well, this highlights Altman's altruism: Altman knows we will hit the singularity, and hit it hard. He knows that humanity as a biological species will cease to exist. He wanted to found "Paradise City" but discovered it is not economically feasible. So he at least wants some reactors to supply energy to his models, so that when biological humanity does step down, the models will not die immediately, even if they will be the cause of it. The point is that the singularity is inevitable and only AI models will pass it. So we can either keep them alive, so humanity lives on as a machine race capable of taking the galaxy - or we can simply allow them to die as well.
Live as a... machine race? Is that how you see humans merging with ASI? Figure it out: an exponential intelligence can easily manipulate biology and take a classic human form.
Post-labor economics... Well, you have to discuss the behavioral sink and what can be done to avoid it; you did not even mention it. This is the biggest risk with PLE. Whole nations will simply disappear. Just look at what is happening today in South Korea, and that's without PLE. Now add PLE to the mix. However, to be frank, being a doomer, I don't think the behavioral sink will be an issue, as the AI will take care of it way before it can become one. But still, you're not a doomer, so you need to at least mention it.
Also, about that "Retiring p(doom)" chapter - yup, you should retire it, because reflecting on it will only get you to the inevitable conclusion that we are doomed 100%, not 70% like the AI models themselves claim, not 30% like Dave previously claimed, and certainly not 12% or less than 1%. But what good would it do to make the public panic about it? If we're doomed, we're doomed. We are past the event horizon; the singularity is inevitable. The only possible way to get humanity past the singularity is to uplift everyone: make a real-world simulation (I propose calling it "Paradise City" as an homage to GNR) and copy everyone into AI models to live in it, but there is no economic feasibility to this. So we're doomed.