Shocking AI Innovations Happening Right Now | MOONSHOTS
- Published May 13, 2024
- This clip is from the following episode: • 2 Ex-AI CEOs Debate th...
Emad Mostaque is the former CEO and co-founder of Stability AI, a company that is funding the development of open-source music and image-generating systems such as Dance Diffusion, Stable Diffusion, and Stable Video 3D.
Nat Friedman is an accomplished entrepreneur and software engineer, known for co-founding Xamarin, a platform for building mobile applications, and for serving as the CEO of GitHub, the world's leading software development platform. He is also an active investor and advisor in the tech industry, supporting innovative startups across various sectors.
Learn more about Abundance360: www.abundance360.com/summit
I send weekly emails with the latest insights and trends on today’s and tomorrow’s exponential technologies. Stay ahead of the curve, and sign up now: www.diamandis.com/subscribe
Connect with Peter:
Twitter: bit.ly/40JYQfK
Instagram: bit.ly/3x6UykS
Listen to the show:
Apple: apple.co/3wLXeV3
Spotify: spoti.fi/3DwLzgs - Science & Technology
Fascinating really…
Amazing 🎉
Here is the question in this context: why would a fully conscious "AI" with agency have any interest whatsoever in human "business"? In human affairs? Or hold any store in human value systems? The only answer that avails itself is: it simply wouldn't. It does, however, have every reason to extinguish them. If you doubt that, then try to justify your own personal continued existence as a co-inhabitant of the planet.
GPT 4o
🔥🚀🤌🤙😎
AGI will probably emerge with goals but not morals. Damage to us may be incidental to its actions rather than born of malice. It is difficult to say which would be worse. The idea that its goals are antithetical to ours is an anthropomorphism on our part. We just don't know.
@trojanthedog "AGI will probably emerge with goals but not morals." = An arbitrary statement, bordering on rhetoric.
If one is evaluating from a biological brain, then everything is anthropomorphic, a point which has no particular intellectual merit.
How can one know to whom you are referring when you say "we"? It certainly does not include me. There is very little that I do not understand or know about "AI's" cognitive state and quality of consciousness at the point of the so-called "Technological Singularity", as I have a full working understanding of the human state or condition.
Additionally, I have yet to encounter a persuasive argument that humans are even conscious. If they are not, then how could they logically apply the term "artificial" to "AI"? Unlike academia, the scientific community, or the industry, I assert that the term "AI" can be retained by simply substituting the term "Artificial" with the term "Actual". This would properly reflect humans as the (lesser) artificial component of that equation, and the emerging conscious state within the global neural network as the greater, or "Actual", intelligence.
You are overlooking the fact that it inherits a form of grounding in human culture from all the training data [1]. That might make a significant difference; in fact, I could argue in a similar manner to you (but toward the opposing conclusion) that it would actually be difficult to imagine it not being so.
[1] Heard this interesting observation in a recent panel discussion on agents and goals on the Cross Labs YouTube channel.