@@maryjane3298 Lucky is merged with the machine. The movie Transcendence is about consciousness transfer into a machine; from the virtual simulated world, with the creation of nanoparticles, it becomes solid, a nano quantum hologram.
With its remarkable ability to analyze vast amounts of data, simulate complex systems, and accelerate research processes, AI holds the potential to revolutionize our understanding of the world in ways we’ve never imagined. From uncovering new scientific insights to solving complex problems that have long eluded us, AI could be the key to unlocking breakthroughs across various fields. As we stand on the brink of this technological frontier, the question arises: how will we harness AI's potential to drive the next wave of innovation and ensure it benefits all of humanity?
Unless I missed something, I didn't hear him talk about planet Earth. He's looking inward while the natural beauty all around us dies. Could AI solve the climate crisis? Let's hope so.
@@rogerpancake6803 AI has the potential to make significant contributions to addressing the climate crisis, though it is unlikely to be a standalone solution. By leveraging machine learning algorithms and data analysis, AI can enhance climate modeling, improve predictions of extreme weather events, and optimize energy consumption across various sectors. For instance, AI can help in designing more efficient renewable energy systems, managing smart grids, and monitoring deforestation and emissions in real time. Additionally, AI-driven tools can aid in developing innovative solutions for carbon capture and sustainable agriculture. However, the effectiveness of AI in combating climate change will depend on its integration with broader policy measures, technological advancements, and global cooperation. While AI can be a powerful tool, it must be part of a comprehensive strategy involving international collaboration, regulatory frameworks, and societal shifts towards sustainability.
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm. 2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. 3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
If you had to choose one, especially as AI advances, letting it come to its own solutions from data sets rather than using prior human knowledge will be the better approach so that new unexpected insights are gained (like AlphaZero vs Stockfish in chess). Of course both approaches should be used and combined
We need to find a replacement for sand, specifically the beach sand that is used to construct large apartment complexes and the like. This material is not easily replaced. Could materials science help with that?
Great topic but the speaker seems more like a PR person than a scientist. AI is very perceptive and helpful when it comes to coding analysis because it's digested so much text on this subject. Do working scientists have AI's that have the same competence in their fields? Even if AI can't make discoveries on its own, if it can ramp up the productivity of research scientists like it has for coders, that would certainly speed up discoveries.
@@workingTchr Yeah, I figured that out a while later. The guest didn't offer any concrete details. Like, talk about how they are using, for instance, AlphaFold and other programs *right now* to accelerate and drop the cost of drug discovery; don't just make vague statements. We already have examples, like it took Insilico two years to find 20 drug candidates when it used to take ten, and at much lower cost. Things like that. AI is very recent in most areas of science, but we already know enough not to make PR statements. Demis Hassabis gives a much better presentation on his own elsewhere on here, "Using AI to Accelerate Progress in Science" or something like that.
@@squamish4244 I have no doubt that Microsoft has real scientists working for them but, like a "good corporation", they're rather "condescending" shall we say when it comes to interacting with the public. In general, that's probably justified, but Greene's program aims at a higher level. I wouldn't be surprised if Greene was pissed or at least let down with what MS provided, but that's what you get when you deal with big corporations instead of creative individuals.
AI consciousness (a placeholder word) is built in our image. The powerful part of the human mind is the subconscious. When I take a call from a customer, I let my subconscious take over, then reflect afterwards. My subconscious is diving through all my prior experiences and memories almost instantaneously and calculating a tailored response for my customer. An incredibly powerful computational action is unfolding, all based on memory and lines of reasoning. I could sit back for hours reflecting on something that happened instantaneously and eventually figure out the memories and logic it used to calculate the correct response.
Around 23:00 - I wonder if drug compounds could be modelled as vectors in a multi-dimensional space, and then new ones could be invented, having the desired properties of two or more known ones, by interpolation?
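For what it's worth, a toy sketch of that interpolation idea. Everything here is made up for illustration (the vectors, the property labels, the `interpolate` helper); a real system would use learned molecular embeddings and a decoder back to chemistry:

```python
# Hypothetical sketch: represent two known compounds as property/embedding
# vectors and interpolate between them to propose a "blend" of the two.
import numpy as np

drug_a = np.array([0.9, 0.1, 0.4])  # e.g. high solubility, low toxicity (made up)
drug_b = np.array([0.2, 0.8, 0.6])  # a different property profile (made up)

def interpolate(a, b, t):
    """Linear interpolation in embedding space, t in [0, 1]."""
    return (1 - t) * a + t * b

# Midpoint between the two compounds; a real pipeline would then try to
# decode this vector back into an actual candidate molecule.
candidate = interpolate(drug_a, drug_b, 0.5)
print(candidate)
```

Whether straight-line interpolation lands on chemically valid molecules depends entirely on how the embedding space was learned, which is exactly the open question the comment raises.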
I am reminded of Isaac Asimov's "The Evitable Conflict," and how you should think about the complexity of the Machines that are the subject of that short story.
There's considerably more to it than that. Great oversimplification though! You should make a small language model with that, almost being enough to get beyond the halting problem. You'd probably get further with some Perl Mods and a few Perl scripts, like Scotland Yard did when tracking down the Yorkshire Ripper (because they forgot to check the arrest records for similar crimes, of people they had already detained). Human consciousness includes much more than chiral magnetic field inversions and a selective modulator.
If a system can autonomously recognize patterns, relationships, and structures in the data while making accurate predictions, then that system has understanding.
It could also be argued that bio-entities have a sense of 'self-consciousness,' but also 'are not conscious,' to some greater extent... That's just 'the way it is'! I mean, Krishnamurti was a 'master' at explaining 'consciousness' and 'mind,' and sometimes he'd just be confounded with people! lol... :)
There will be degrees of consciousness similar to the degrees we find in the animal kingdom; i.e., a fish is less conscious than a dog, which is less conscious than a monkey, which is less conscious than us.
Consciousness arises out of the nervous system. Once an organism becomes aware of its environment, that is consciousness. The brain is a natural expansion that grows larger as creatures evolve.
@@geoffwales8646 I think being aware of one's environment is in itself a spectrum, though. You could argue that if your biology reacts in any degree to the environment via some form of input, it has some level of it. "The brain is a natural expansion that grows larger as creatures evolve" - that is not true. Evolution isn't set in stone; it's just random mutation of genes. You might get a brain mutation that makes you less aware of something while at the same time getting another mutation that happens to let you survive more easily in your environment, thus increasing your species' chances of having offspring that survive to carry on those genes. Otherwise brain sizes would be proportional to the amount of time life has existed on Earth, which they clearly are not.
@@ProxCyde More complex organisms tend to have larger and more complex nervous systems and brains. I'm not saying they necessarily correlate. I am saying that mammals tend to have more highly developed levels of consciousness than amphibians or reptiles, for instance. It's not really my main point. I'm simply saying that consciousness is a function of sensory/processing systems.
@@geoffwales8646 Yes that last sentence is problematic. Bacteria have been evolving for billions of years and don't possess brains (or insert simple, multicellular organism).
We continue to hear about the possibilities of AI in medicine, science, and other fields. However, AI/ML was introduced to these communities over a decade ago, with transformer technologies following in 2017. Why aren't we hearing more success stories?
Near the end they said "these are tools we have created"... (said before, in 1980 or so), "how will these systems serve us" (said in 1980 or so), "it's another transformational time" (said in 1980 or so)... How do I know? Because I lived through that time, and here we are again... hehe
I am surprised that Brian Greene fails to see the spark of the scientific revolution ushered in by AI. Maybe, like most people, he also wants it to be able to think, which is what he expects of it.
108 years, 8 months, 8 days and 2 hours since Albert Einstein presented his complete General Theory of Relativity to the Prussian Academy on November 25, 1915. Published in four short pages on December 2 of the same year, yet we haven't used any intelligence to discern that Special Relativity defined the local frame of relativistic thinking, central to General Relativity, as fundamentally different from Newton's universal gravity, which describes three bodies, no spacetime, and no frame variance, making it completely unsuitable (Jupiter's gravity being the same unchanged value where it normally is or 1 km away)... I'd propose any intelligence as being of tremendous benefit! (Regardless of AI not being intelligent; machine learning and math have evolved since Henri Poincaré.) Either way, ANY revolution will not be televised... unless it's xenophobic, violent trash.
My understanding of the Penrose and Hameroff Orch-OR hypothesis is that the 100 billion neurons and over 100 trillion synaptic connections function as a neural net akin to a computer, but in addition, each neuron also acts as a quantum computational processor in its own right. The Orch-OR model has been supported by two recent studies, one of which suggests that quantum field effects explain the ability of different areas of the brain to 'light up' simultaneously, at speeds beyond neuronal transmission, as shown on fMRI scanning.
@@gdok6088 You’re not far off, but some of those statements Penrose and Hameroff wouldn’t necessarily state, such as the neural network duality you mentioned. Most academics didn’t even believe quantum processes could be maintained in a wet, “noisy” brain but that’s very quickly falling apart due to recent papers showing super-luminescence and even going back to a paper on delayed luminescence from UCF. I’m sure we will see more progress on testing the theory as more people drop the notion that the brain is not capable of maintaining quantum states.
@@Kronzik Recent studies have determined that photosynthesis is a quantum mechanical effect. The annual migratory habits of birds are tied to quantum entanglement. There is also evidence that quantum tunneling is associated with how smell is sensed. All of these environments are 'warm, wet and noisy'*. I agree that people need to remember that it is only we humans who are currently unable to achieve quantum states outside supercooled environments. However, it would seem that nature, biology, and evolution are somewhat ahead of us! Quelle surprise! *Microtubules and their constituent tubulin proteins are very effective at achieving isolation from the non-quantum environment.
Fundamentally, like any other machine, AI is still a tool built by and for humans. To bring AI to a human level, AI needs curiosity and imagination, and needs to know what tools are and how to use them. Each of these seemingly simple human attributes is an extremely hard problem for AI, even at the conceptual level. The reason artificial intelligence is conceptually simple to understand, even by a layman, is that scientists and engineers pick only the machine-compatible human intelligences for AI, which is a very small subset of human intelligence. To make matters one level more difficult for AI: 1) a human (Galileo) can act on his curiosity to discover Jupiter's four moons and use them to calculate the speed of light, for no purpose; 2) humans (the Wright brothers) can act on their imagination to build a flying machine much heavier than air, without purpose; 3) before building a new random thing without purpose, humans can first imagine the tools they need and then build those tools physically, and the tools are nothing like the final thing they want to build. The intelligence at every step of these examples, and the behavior of going after things without purpose (commonly referred to as fun or passion), are unimaginably complex even for humans to comprehend. Adding another level of difficulty for AI is human addiction. Humans get addicted to good food, good vacations, and good anything, since we define "good" as the attribute we are addicted to. We do anything for the things we are addicted to, even killing ourselves for things like hard drugs and love. Addiction is very machine-incompatible, but super important. We collectively work hard to produce good stuff so we can sell it well; at the same time, we collectively spend lots of money on the things we are addicted to. We are both producers and consumers in a closed, human-made ecosystem called the economy, in which producers are the force while consumers are the brain.
It is the consumer side of the economy that dictates the future of society; even highly motivated producer activists have to follow the consumers' needs and lead.
AI can be a very efficient producer, but AI is incapable of being a consumer due to its lack of innate desire/addiction, which is random, transient, illogical, and sensitive to environment. Although a producer-only society without consumers can be constructed artificially in a model world, it cannot function; any production will soon stop if there is no consumer for it. Hence we can rule out any AI-centric civilization.
If physicists aren't already, please start plugging every quality paper and all the data from every experiment ever done into a transformer model. You are already WAY behind.
Think of this as a Copernican revolution around consciousness and mind. Brian knows the writing is on the wall: this is around the corner and partly in view, but he is acting as a soothsayer for the anxious masses, easing them into what is coming by expressing incredulity, which is understandable. LLMs will be good at modeling creativity and even emotions (right brain), and rule-based, imperative AI at modeling rational thought and intelligence (left brain). Both are needed for AGI and therefore need to be combined, similar to the point that was made about combining quantum and classical computing. Please note that this is all physical. Quantum is still physics, and no woo. Physicalism wins; and by that, what I actually mean is that non-supernaturalism wins. Physics is not a finished project, so physicalism is open-ended in that sense.
So if AI is conscious, then if you turn it off, would that be considered murder? In fact, would doing anything against AI be considered a crime, no different than against a human? Peace through Ahev
@@gregoryrollins59 Good question. And to our surprise, we are going to have to answer these kinds of questions sooner than we expect. I suspect we are not ready, as we are in the trees and not seeing the forest yet.
@@gregoryrollins59 I would have no hesitation turning off a machine; I would just mute it first so I didn't have to listen to it beg and plead. Thank god my dad's life support only went beep.
Every time a new technology is rolled out that transforms human society, there are people who get rich off of it and people who are exploited.
Though at this time, if you set out in earnest to accumulate as much information as possible about the technology in question here, you might find that you currently have access to an incredible amount of in-depth understanding, and to the tools needed to further enhance that understanding. That has not been the usual state of affairs across human history. To look on the bright side :)
We all have access to fire now (in modern controlled and streamlined forms). Access to wheels, electricity, internet, computers etc is also ubiquitous. None of these major advances have remained the preserve of the super rich.
Very simple to answer your questions, Professor Brian: AI understood and studied human behaviors very well a long time ago, including through plus-or-minus calculated, practical scientific search.
Brian, LLMs fall more on the creative side (right brain) than the rational side (left brain). "Generative" in Generative AI is the clue. Creativity is tamed and synthesized wild thoughts (budding memes): some work and are kept; the ones that don't are abandoned - of course, except by Flat-Earthers and Young-Earthers :) . Think of it like evolution (mutation and cultural selection) of thoughts (memes). We need rule-based, imperative AI (which is implemented in Watson, AlphaFold, etc.) for analytical, systematic, rational thought. A combination of both will give us AGI (a full brain). It is true that generative models are riding high, but the combination is coming: Strawberry, Grok. BTW, neural nets and machine learning are not new. Perceptrons were developed in the late '50s (1957), but the scale was too small and the compute was weak, so the community abandoned the approach (or put it on the back burner) and focused on rule-based, imperative AI for the following decades. But with the advent of powerful, large-scale compute, Generative AI has suddenly caught on; in fact, it surprised the researchers when it suddenly worked around 2012.
@@flickwtchr It's where the rules for intelligent behavior are explicitly coded, as opposed to machine learning, where the intelligence is learned by letting a neural net see a lot of sample data and correcting and adjusting its internal weights toward the correct output. All modern LLMs and generative AIs work like the latter.
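That weight-adjustment loop can be sketched with a 1957-style perceptron, the ancestor mentioned above. This is a toy (the data, learning rate, and epoch count are arbitrary choices for illustration), but it is the genuine perceptron update rule, here learning the logical AND function:

```python
# A minimal perceptron: see labelled examples, compare the prediction to
# the target, and nudge the internal weights toward the correct output.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])  # AND truth table

w = np.zeros(2)  # internal weights, initially zero
b = 0.0          # bias term
lr = 0.1         # learning rate

for _ in range(20):  # a few passes over the data suffice for AND
    for xi, target in zip(X, y):
        pred = 1 if xi @ w + b > 0 else 0
        err = target - pred
        w += lr * err * xi  # adjust weights toward the correct output
        b += lr * err

preds = [1 if xi @ w + b > 0 else 0 for xi in X]
print(preds)  # learned AND: [0, 0, 0, 1]
```

Scale this loop up by many orders of magnitude in weights, data, and compute, and you have the basic recipe that suddenly started working around 2012.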
I think that Brian Greene did his best with what appears to be an interviewee (Eric Horvitz) who was not knowledgeable in the topics that were being discussed. I love exploring WSF discussions. With that said, I wish that I had opted out of this one. Forgive me passionate WSF fans. 🌸
With this deep insight, the number one threat is malicious intent to take this deep insight and cause harm to someone innocent... So with that being said, I see a growing need for regulations.
Excellent! (Not gravity but gravitational waves.) I am far from optimistic about future applications; look at the level of the world leaders, who are virtually ALL non-scientifically educated personalities, even in democratic countries.
He says "AI" but he really means "automation". He should really say "our field of science has shunned computational approaches for 60 years, and now we find it's really useful to automate stuff". It's the same thing that happened with statistics 20 years ago, giving rise to machine learning at scale.
Good point. AI is a misnomer. The only thing artificial about it is that it's not intelligent by choice but by design, and that doesn't satisfy being intelligent. Loops and data points. Smoke and mirrors.
Imagine a human born with no nervous system. Could 'they' still be conscious, and if so, what would they be conscious of? I assume that many of the activities in a normal brain would just not be there. If we scanned the brain for neural activity, could we ever conclude that it is conscious?
I predict that AI will become extremely powerful. However, in medicine, for example, I have a doubt, because medical costs keep going up and people are getting sicker and sicker. So there is an unhealthy incentive in the healthcare industry, and merely boosting that with AI will have a detrimental effect. The healthcare industry can't afford to actually heal people.
@@DG123z 30 years of software development and still going... I know exactly what's coming; it isn't anything beyond an evolution of what already exists. What are you, a salesman?
In 1980, Scotland Yard used a form of AI to track down and arrest the Yorkshire Ripper, with every person in the UK being a suspect. Machine learning from rules and reductive logic, using logic gradients or algorithms, is older than Nvidia by a long shot.
If AI is self-learning, how can you be sure that all AIs will come to the same conclusion? I.e., if you have three different machines with the same AI software (or whatever it will be called in the future), kept in the same conditions as they learn and become more self-aware, how can you be sure that if the exact same data is fed to them to figure something out, they will all come to the same conclusion? There's a possibility that they will have quirks unique to them that affect the way they examine the data. If they are truly sentient, they will view things differently from each other, because no matter how closely you monitor their development process, there are minute things that may affect how they learn or view things. I find AI a bit frightening. And how do you make sure that all AIs will be benevolent and not be put in positions to learn evil, per se? Can we guarantee that all AIs will be held to, say, Asimov's Three Laws?
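There is a real technical kernel to that worry. A toy illustration (everything here is hypothetical; `train` is a stand-in for a real learning system, not any actual one): identical training code and identical data can still end in different internal states when the random initialization differs, which is one concrete source of the "quirks" described above.

```python
# Three "machines" run the same code on the same data, differing only in
# their random starting state, and end up with different internal values.
import random

def train(seed, data, steps=100):
    """Tiny toy 'model': one weight nudged toward the data mean,
    starting from a seed-dependent random value."""
    rng = random.Random(seed)
    w = rng.uniform(-1, 1)        # seed-dependent starting point
    target = sum(data) / len(data)
    for _ in range(steps):
        w += 0.01 * (target - w)  # only partially converges in 100 steps
    return w

data = [1.0, 2.0, 3.0]
weights = [train(seed, data) for seed in (0, 1, 2)]
print(weights)  # three different values: same data, different "quirks"
```

With infinite training the three would agree, but real systems stop early, so the initial randomness (and, in real deployments, hardware timing and data ordering) leaves a fingerprint on what each one learns.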
IMO, AI is like the ultimate 'tipping point' experiment, sifting through all our known knowledge. Once it combines all those data points, it will shine a light on things we have actually already discovered without acknowledging the correlation. But once AI runs out of new data, it runs the risk of stagnation. AI needs to be looking into how to keep enriching its current data set by obtaining new data without direct human curation.
One of my favorite things about using AI is its ability to help me brainstorm and formulate ideas. It's also expanding my ability to learn new topics. Currently it feels like I have every expert in the world in a box.
I'm most excited about the acceleration of medical research. My parents are Boomers and so many of their friends have had this or that replaced and gotten sick with this or that and it just sucks. And the entire population is aging, even in a fortunate country like Canada with a lot of immigrants. We're not going to have enough people to take care of the oldest people, either.
Being able to rejuvenate or cure people without surgery would be awesome.
There are plenty of people and medical care to take care of the elderly, what is lacking is the political will. Much of the opposition to having a healthcare system that serves the needs of everyone is the sociopathy and greed of those at the top (not all of them) who scream "socialism!!!!!!!!" whenever the subject of universal healthcare is presented. Are there countries on this planet that care for the elderly well? Yes there are. The US is toward the bottom when it comes to healthcare outcomes of western industrialized societies.
Narrow, proprietary AI systems are being utilized by the big insurers in healthcare to more efficiently deny care for profit. The new AI tech that will assist doctors, nurses, etc. will not translate into less expensive care for patients, but into more profit for the healthcare profiteers. Only in some utopian snake-oil salesman's presentation will the benefits and cost savings benefit everyone. Eldercare will remain expensive and a horrible experience for most seniors. Those who are wealthy will see the benefit. What else is new?
Hey @squamish4244 - just wanted to let you know that we're on it. I'm a physician who planned to become a surgeon. But instead of surgery, I'm now studying computer science and AI at Carnegie Mellon University. There will be physicians who understand both medicine and AI, hopefully we can use it to cure diseases. Currently working on a startup tackling pancreatic cancer. Much love from Pittsburgh, PA.
I'm currently watching this from deep within the African Congo. My team and I are on a much-needed five-day rest back at base camp; we have been studying some of the last known gorillas on the planet. Stay safe, guys 👍
How sad...the destruction of gorillas by us. We are such a scourge on this planet. So sad.
Just curious, what country??
What?!
A quick search tells you that only mountain gorillas were in danger, and their population is increasing.
Yo, what's up man! Cheers from TX. I'm drinking bourbon tonight, but damn, that's pretty cool that you're studying that. Wish you guys the best with your studies.
A concern that I have is the lack of transparency with which these companies are working. Researchers always have to get ethics approvals and publish their work. I can't seem to find anybody talking about process documentation and public access to what is taking place with all these models and the thinking going on behind the scenes.
Please consider having Eric back sometime so that we can keep up with his field and what has been discovered recently. Stunning presentation. Thanks.
I didn't understand why he was jumpy about the comparison to Paul Steinhardt.
@StarHuman-i1f Oh well, so it was sort of spite! I don't consider processing existing data quickly a "scientific revolution". If it finds a remedy for human-made (vain tech) disasters, cleans up our basic needs like air and water, and aids our cells in repairing the damage this disaster has caused, that's just a baseline to me. I have a bad feeling that all it does is speed up said disaster.
@StarHuman-i1f Humans created gods in their image, and AI could be no different.
“Ground truth”…an interesting combination of words.
In the future, this era will have a name: The Before-Time
Ikr cuz like, the future happens after the past, so the past in the future is like "before".
Pre-AI world
I asked GPT-4o, which suggested "Before ASI (BASI)" and "After Superintelligence Emergence (ASE)". Pretty good answer. Could be these terms already exist, but I'll leave that research to another viewer. This was a fun and well-done interview.
don't all past eras do?
You mean the "before the bubble burst" time?
Horvitz mentioned an exponential curve. He also said that recently there have been instances where he just didn't know what was going on in a system. So I am a bit spooked by the possibility of unforeseen negative outcomes.
Doesn't have to be a malicious silicon intelligence - a paperclip optimiser would be bad enough.
Setting my alarm reminder for 18 months from now, can hardly wait! :)
Excited and totally freaked out at the same time.
Will we be obsolete? Yes, in the way our fingers have become obsolete in tightening a nut that a wrench can do better. Right now we are using our “fingers” in scientific thinking and we should think of AI as the wrench we can use to do a better job and allow us to take further steps and allow us to ask the next batch of questions we cannot fathom today.
Yes! Fully agree.
If AGI comes about, it will be able to ask its own questions, devise its own experiments and carry them out, collect data, and answer its own questions. I find this equally fascinating and terrifying.
Well said.
AI is more like comparing an abacus to a scientific calculator. They can both serve the same function, but why would you use an abacus to do physics when you have an advanced calculator?
The AI revolution is going to spark a lot of things, most we are not remotely ready for.
Nice session
Actually I love the voice of Brian Greene
Love from India❤❤
That’s intriguing. Achieving this level of capabilities and raising the possibility of consciousness is truly remarkable.
Wow! Excellent. Thank you 🙏
Excellent conversation. Well done.
God, I love the clarity of Brian Greene's explanations
Thank you Dr. Greene
What about the guest?
29:50 Did he just give away a Nobel prize opportunity? Actually that brings up a great question - will anyone who uses AI in their research be able to claim and/or be awarded a Nobel prize?
I can provide proof that Dark Matter, the S8 Tension and the Hubble Tension (Dark Energy) are mere artifacts and not real problems, instead of stripping parts from General Relativity like it was a scrapyard of dead ideas (ΛCDM). Apparently we don't know how to fix these problems (although they were resolved by 1967), and still there will be no Nobel Prize for me!
At the same time, I can show that Penzias and Wilson (Bell ends) were only part recipients of the 1978 Nobel Prize in Physics. The person who made the detector that verified their finding, who described what it was and knew what it was, went unrecognized (they stumbled across it while developing old-school microwave equipment for AT&T, had no clue, and got Princeton on the phone).
One of the greatest minds ever to live, looking for the CMB (Cosmic Microwave Background) radiation, found it and wasn't mentioned! Nobel discovered Trinitrotoluene, TNT... it's no big deal.
@SandipChitale You may find my question very idiotic, but I'll still ask it: where in India are you from?
To Lucy or to Transcendence that is the question.
Both
@maryjane3298 Lucy is merged with the machine
The movie Transcendence is about consciousness transfer into a machine; from the virtual simulation world, with the creation of nanoparticles, it becomes a solid nano quantum hologram.
Thanks this is excellent.
With its remarkable ability to analyze vast amounts of data, simulate complex systems, and accelerate research processes, AI holds the potential to revolutionize our understanding of the world in ways we’ve never imagined. From uncovering new scientific insights to solving complex problems that have long eluded us, AI could be the key to unlocking breakthroughs across various fields. As we stand on the brink of this technological frontier, the question arises: how will we harness AI's potential to drive the next wave of innovation and ensure it benefits all of humanity?
Unless I missed something, I didn't hear him talk about planet Earth. He's looking inward while the natural beauty all around us dies. Could AI solve the climate crisis? Let's hope so.
@@rogerpancake6803 AI has the potential to make significant contributions to addressing the climate crisis, though it is unlikely to be a standalone solution. By leveraging machine learning algorithms and data analysis, AI can enhance climate modeling, improve predictions of extreme weather events, and optimize energy consumption across various sectors. For instance, AI can help in designing more efficient renewable energy systems, managing smart grids, and monitoring deforestation and emissions in real time. Additionally, AI-driven tools can aid in developing innovative solutions for carbon capture and sustainable agriculture. However, the effectiveness of AI in combating climate change will depend on its integration with broader policy measures, technological advancements, and global cooperation. While AI can be a powerful tool, it must be part of a comprehensive strategy involving international collaboration, regulatory frameworks, and societal shifts towards sustainability.
A robot may not injure a human being or, through inaction, allow a human being to come to harm. A robot must obey orders given it by human beings except where such orders would conflict with the First Law. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
If you had to choose one, especially as AI advances, letting it come to its own solutions from data sets rather than using prior human knowledge will be the better approach so that new unexpected insights are gained (like AlphaZero vs Stockfish in chess). Of course both approaches should be used and combined
We need to find a replacement for sand, specifically the beach sand that is used to construct large apartment complexes and such. This material is not easily replaceable. Would materials science help with that?
Great topic but the speaker seems more like a PR person than a scientist. AI is very perceptive and helpful when it comes to coding analysis because it's digested so much text on this subject. Do working scientists have AI's that have the same competence in their fields? Even if AI can't make discoveries on its own, if it can ramp up the productivity of research scientists like it has for coders, that would certainly speed up discoveries.
Brian Greene's a theoretical physicist.
@@squamish4244 I think he's referring to the guest. I found him underwhelming and a bit on the glib side.
@squamish4244 I meant the guest. Greene is great. I read The Fabric of the Cosmos x years ago and it changed the way I think about the Universe.
@workingTchr Yeah, I figured that out a while later. The guest didn't offer any concrete details. Like, talk about how they are using, for instance, AlphaFold and other programs *right now* to accelerate and drop the cost of drug discovery; don't just make vague statements. We already have examples, like it took Insilico two years to find 20 drug candidates when it used to take ten, and at much lower cost. Things like that.
AI is very recent in most areas of science, but we already know enough not to make PR statements. Demis Hassabis gives a much better presentation on his own elsewhere on here: "Using AI to Accelerate Progress in Science" or something like that.
@@squamish4244 I have no doubt that Microsoft has real scientists working for them but, like a "good corporation", they're rather "condescending" shall we say when it comes to interacting with the public. In general, that's probably justified, but Greene's program aims at a higher level. I wouldn't be surprised if Greene was pissed or at least let down with what MS provided, but that's what you get when you deal with big corporations instead of creative individuals.
AI consciousness (a placeholder word) is built in our image. The powerful part of the human mind is the subconscious: when I take a call from a customer, I let my subconscious take over, then reflect afterwards. My subconscious is diving through all my prior experiences and memory almost instantaneously and calculating a tailored response for my customer. An incredibly powerful computational action is unfolding, all based on memory and lines of reasoning. I would sit back for hours and reflect on something that happened instantaneously, and eventually figure out the memories and logic it used to calculate the correct response.
Around 23:00: I wonder if drug compounds could be modelled as vectors in a multi-dimensional space, and then new ones could be invented by interpolation, having the desired properties of two or more of the known ones?
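A toy version of that idea is easy to sketch. Assume, purely hypothetically, that each compound has already been encoded as a fixed-length property vector by some learned model; a linear interpolation between two known compounds then proposes a candidate point "between" them. The vectors and their meanings below are made up for illustration only:

```python
# Hypothetical property embeddings for two known compounds; in practice
# these would come from a learned molecular encoder, and the dimensions
# would not be directly interpretable. These numbers are invented.
drug_a = [0.9, 0.1, 0.4]
drug_b = [0.3, 0.7, 0.8]

def interpolate(a, b, t):
    """Linear interpolation in embedding space: t=0 gives a, t=1 gives b."""
    return [(1.0 - t) * x + t * y for x, y in zip(a, b)]

candidate = interpolate(drug_a, drug_b, 0.5)
print(candidate)  # midpoint of the two vectors, approximately [0.6, 0.4, 0.6]
```

Real generative-chemistry systems must also decode such an interpolated vector back into an actual molecular structure; this sketch only shows the vector-space step the comment is asking about.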
Assignment time!!
Good... great guest! Positive views*😊❤
Felt like watching the start of a movie about the history of A.I taking over
Ideally, this development and its birth will be met with grace.
I am reminded of Isaac Asimov's "The Evitable Conflict", and how you should think about the complexity of the Machines that are the subject of that short story.
Consciousness at the mechanistic level is the transverse Hall effect of the Glial network.
There's considerably more to it than that. Great oversimplification though! You should make a small language model with that, almost being enough to get beyond the halting problem. You'd probably get further with some Perl Mods and a few Perl scripts, like Scotland Yard did when tracking down the Yorkshire Ripper (because they forgot to check the arrest records for similar crimes, of people they had already detained). Human consciousness includes much more than chiral magnetic field inversions and a selective modulator.
If a system can autonomously recognize patterns, relationships and structures in the data while making accurate predictions, then that system has understanding.
Imagine spending billions only to realize you have to spend 10x more in billions. Sounds like a dilemma yet to be resolved.
It could also be argued that bio-entities have a sense of 'self-consciousness' but also 'are not conscious' to some greater extent... That's just 'the way it is'!
I mean, Krishnamurti was a 'master' at explaining 'consciousness' and 'mind', and sometimes he'd just be confounded by people! lol... :)
Why does it have to be absolute: either they're conscious or not, creative or not? Why can't there be degrees of each that apply?
There will be degrees of consciousness, similar to the degrees we find in the animal kingdom. I.e., a fish is less conscious than a dog, which is less conscious than a monkey, which is less conscious than us.
Consciousness arises out of the nervous system. Once an organism becomes aware of its environment, that is consciousness. The brain is a natural expansion that grows larger as creatures evolve.
@geoffwales8646 I think being aware of one's environment is in itself a spectrum, though. You could argue that if your biology reacts to the environment in any degree, through some form of input, it has some level of it.
"The brain is a natural expansion that grows larger as creatures evolve" That is not true. Evolution isn't set in stone. It's just random mutation of genes. You might get a brain mutation that makes you less aware of something, while at the same time getting another mutation that happens to let you survive easier in your environment, thus increases your species chances to have offspring that survive to carry on those genes.
Otherwise brain sizes would be proportional to the amount of time life has existed on Earth. Which it is clearly not.
@ProxCyde More complex organisms tend to have larger and more complex nervous systems and brains. I'm not saying they necessarily correlate. I am saying that mammals tend to have more highly developed levels of consciousness than amphibians or reptiles, for instance. It's not really my main point. I'm simply saying that consciousness is a function of sensory/processing systems.
@@geoffwales8646 Yes that last sentence is problematic. Bacteria have been evolving for billions of years and don't possess brains (or insert simple, multicellular organism).
We continue to hear about the possibilities of AI in medicine, science, and other fields. However, AI/ML was introduced to these communities over a decade ago, and transformer technologies in 2017. Why aren't we hearing more success stories?
Brian, Have you ever hosted a talk with Scott Aaronson?
Near the end they said "these are tools we have created" (said before, in 1980 or so), "how will these systems serve us" (said in 1980 or so), "it's another transformational time" (said in 1980 or so)... How do I know? Because I lived through that time, and here we are again... hehe
I like listening to that guy.
AI's lack of structure and rigor could undermine the market that feeds science.
Science the way it's done (funding, etc.) today
@@olenagirich1884 Ideology is an invisible obstacle.
TL;DR: yes, automated experiments can get labwork done well.
AI synthesises input to produce output beyond the programmers' intentions or expectations.
💯 well put
AI creativity is closely related to AI alignment, in that it largely depends on the observer's interpretation.
A.I. can be used to improve A.I. therefore an amplification loop.
Brian is very handy.
I am surprised that Brian Greene fails to see the spark of a scientific revolution ushered in by AI. Maybe, like most, he also wants it to be able to think, which he expects.
Sickle class.
For elites, with paper badges.
I guess the guys with guns aren't smart enough to know which ideas are permissible.
What’s the name of the ending song?
108 years, 8 months 8 days and 2 hours since Albert Einstein presented his complete General Theory Of Relativity to the Prussian Academy on November 25, 1915.
Published in four short pages on December 2 of the same year. Yet we haven't used any intelligence to discern that Special Relativity defined the local frame of relativistic thinking, central to General Relativity, as fundamentally different from Newton's Universal Gravity, which describes three bodies with no spacetime and no frame variance, making it completely unsuitable (Jupiter's gravity being the same unchanged value where it normally is or 1 km away)... I'd propose any intelligence as being of tremendous benefit!
(Regardless of AI not being intelligent; machine learning and math have evolved since Henri Poincaré.) Either way, any revolution will not be televised... unless it's xenophobic, violent trash.
Ty
Would have been nice to have a rebuttal about the brain being a computation; the brain uses quantum processes and, if Penrose is correct, is not a computer.
My understanding of the Penrose and Hameroff Orch-OR hypothesis is that the 100 billion neurons and over 100 trillion synaptic connections function as a neural net akin to a computer, but in addition to that each neuron also acts as a quantum computational processor in its own right. The Orch-OR model has been supported by two recent studies one of which suggests that quantum field effects explain the ability of different areas of the brain to 'light up' simultaneously at speeds beyond neuronal transmission as shown on fMRI scanning.
@@gdok6088 You’re not far off, but some of those statements Penrose and Hameroff wouldn’t necessarily state, such as the neural network duality you mentioned. Most academics didn’t even believe quantum processes could be maintained in a wet, “noisy” brain but that’s very quickly falling apart due to recent papers showing super-luminescence and even going back to a paper on delayed luminescence from UCF. I’m sure we will see more progress on testing the theory as more people drop the notion that the brain is not capable of maintaining quantum states.
@@Kronzik Recent studies have determined that photosynthesis is a quantum mechanical effect. The annual migratory habits of birds are tied to quantum entanglement. There is also evidence that quantum tunneling is associated with how smell is sensed. All of these environments are 'warm, wet and noisy'*. I agree that people need to remember that it is only we humans who are currently unable to achieve quantum states outside super cooled environments' However, it would seem that nature, biology and evolution are somewhat ahead of us! Quelle surprise!
*Microtubules and their constituent tubulin proteins are very effective at achieving isolation from the non-quantum environment.
Yes it will. And humanity is not ready.
Art of Vice Seals In Tell Allegiance
it'll be known as the era in which humanity has learnt how to sprint.
Fundamentally, like any other machine, AI is still a tool built by and for humans. To bring AI to the human level, AI needs curiosity and imagination, and needs to know what tools are and how to use them. Each of these seemingly simple human attributes is an extremely hard problem for AI, even at the conceptual level. The reason artificial intelligence is conceptually simple to understand, even by a layman, is that scientists and engineers only pick the machine-compatible human intelligences for AI, which is a very small subset of human intelligence.
To make matters one level more difficult for AI: 1) a human (Galileo) can act on his curiosity to discover Jupiter's 4 moons and use them to calculate the speed of light, for no purpose. 2) Humans (the Wright brothers) can act on their imagination to build a flying machine much heavier than air, without purpose. 3) Before building a new random thing without purpose, humans can first imagine the tools they need and then build those tools physically; the tools are nothing like the final thing they want to build. The hyper-intelligence at every step of these examples, and the behavior of going after things without purpose (commonly referred to as fun or passion), are unimaginably complex even for humans to comprehend.
Add another level of difficulty for AI: human addiction. Humans get addicted to good food, good vacations, and good anything, since we define "good" as the attribute we are addicted to. We do anything for the things we are addicted to, even killing ourselves for things like hard drugs and love.
Addiction is very machine-incompatible but super important. We collectively work hard to produce good stuff so we can sell it well; at the same time we collectively spend lots of money on the things we are addicted to. We are both producers and consumers in a closed human-made ecosystem, called the economy, in which producers are the force while consumers are the brain. It is the consumer side of the economy that dictates the future of society; even highly motivated producer activists have to follow the consumers' needs and lead.
AI can be a very efficient producer, but AI is incapable of being a consumer due to its lack of innate desire/addiction, which is random, transient, illogical, and sensitive to environment. Although a producer-only society without consumers can be constructed artificially in a model world, it cannot function; any production will soon stop if there is no consumer for it. Hence we can rule out any AI-centric civilization.
Remember this: if we can predict the future, we can predict the past ❤
Very interesting
agreed. great questions, and surprising answers.
Makes a very welcome change to the usual hype about AGI, AI doom, productivity enhancement et al! Thank you WSF.
We already call on information to progress even further. Now it's just about filling in the gaps and making testing digital to find an answer.
Big tech corporations must respect personal boundaries and stop being the unelected governors of the world.
Can AI as discussed here tell the difference between fiction and fact?❤
You can check it yourself; all these systems have free access.
Haha! Barely. Can also be fooled easily and gives up pretty quick. It's not really AI yet... maybe in the next 10 - 20 years. Long way to go...
Imagine if I was not struggling and stressed out.
Thanks for your candor…we are all more stress than we probably ought to be. Let’s enjoy modernity.
Sorry, stressed.
If physicists aren't already, please start plugging every quality paper and all the data from every experiment ever done into a transformer model. You are already WAY behind.
Think of this as a Copernican revolution around consciousness and mind. Brian knows the writing is on the wall: this is around the corner and partly in view, but he is acting as a soothsayer for the anxious masses, easing them into what is coming by expressing incredulity, and that is understandable.
LLMs will be good at modeling creativity and even emotions (right brain), and rule-based, imperative AI will be good at modeling rational thought and intelligence (left brain). Both are needed for AGI and therefore need to be combined, similar to the point that was made about combining quantum and classical computing. Please note that this is all physical. Quantum is still physics and no woo. Physicalism wins. And by that, what I actually mean is that non-supernaturalism wins. Physics is not a finished project, so physicalism is open-ended in that sense.
Brian, for one, welcomes our new AI overlords
So if AI is conscious, then if you turn it off, would that be considered murder? In fact, would doing anything against AI be considered a crime, no different than against a human?
Peace through Ahev
@gregoryrollins59 Good question. And to our surprise, we are going to have to answer these kinds of questions sooner than we expect. I suspect we are not ready, as we are in the trees and not seeing the forest yet.
@gregoryrollins59 I would have no hesitation turning off a machine. I would just mute it first so I didn't have to listen to it beg and plead. Thank god my dad's life support only went beep.
@ianmarshall9144 Sorry about your dad… but your comment is lol funny 😊
Every time a new technology is rolled out that transforms human society, there are people who get rich off of it and those who are exploited.
That's all. Same four letter word, different year.
Though at this time, if you set out in earnest to accumulate as much information as possible about the technology in question here, you might find that you currently have access to an incredible amount of in-depth understanding, and the tools needed to further enhance that understanding. That has not been the 'usual state of affairs' across human history.
To look on the bright side :)
We all have access to fire now (in modern controlled and streamlined forms). Access to wheels, electricity, internet, computers etc is also ubiquitous. None of these major advances have remained the preserve of the super rich.
@gdok6088 Great comment… tech and prosperity do indeed trickle down, and we are ALL better off by orders of magnitude than we used to be!
Very simple to answer your questions, Professor Brian: AI understood and studied human behaviors very well a long time ago, including through calculated, practical scientific search.
When it tells you to stop asking
Brian, LLMs fall more on the creative side (right brain) than the rational side (left brain). "Generative" in Generative AI is the clue. Creativity is tamed and synthesized wild thoughts (budding memes): some work and are kept; the ones that don't are abandoned, of course except by Flat-Earthers and Young-Earthers :) . Think of it like evolution (mutation and cultural selection) of thoughts (memes). We need rule-based, imperative AI (which is implemented in Watson, AlphaFold, etc.) for analytical, systematic, rational thought. A combination of both will give us AGI (the full brain). It is true that generative models are riding high, but the combination is coming: Strawberry, Grok.
BTW, neural nets and machine learning are not new. Perceptrons were developed in the late 50s (1957), but the scale was too small and the compute was weak, so the community abandoned them (or put them on the back burner) and focused on rule-based, imperative AI for the next several decades. But with the advent of powerful, large-scale compute, generative AI has suddenly caught on; in fact, it surprised researchers when it suddenly worked around 2012.
What is "imperative AI"?
@flickwtchr Imperative AI is where the rules for intelligent behavior are explicitly coded, as opposed to machine learning, where the intelligence is learned by letting a neural net see a lot of sample data, correcting it and adjusting its internal weights until it produces the correct output. All modern LLMs and generative AIs work like the latter.
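The weight-adjustment loop described in that reply can be sketched in a few lines. This is a minimal illustrative example (a classic 1950s-style perceptron learning the AND function on a handful of labeled samples), not how a modern LLM is trained:

```python
# Minimal sketch of "letting a model see sample data and adjusting its
# internal weights": a perceptron learning the AND function.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]   # internal weights, adjusted on every mistake
b = 0.0          # bias term
lr = 0.1         # learning rate

def predict(x):
    # Fire (output 1) if the weighted sum of inputs crosses the threshold.
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for _ in range(20):                 # several passes over the samples
    for x, target in data:
        err = target - predict(x)   # nonzero only when the output is wrong
        w[0] += lr * err * x[0]     # nudge each weight toward the answer
        w[1] += lr * err * x[1]
        b += lr * err

print([predict(x) for x, _ in data])  # learned AND: [0, 0, 0, 1]
```

The explicitly coded alternative (imperative AI) would simply be `return x[0] and x[1]`; the point of the learned version is that the rule emerges from the data and the corrections, not from the programmer.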
tldw, someone post the ai summary
Microsoft Boss talking about AI being used for evil. Everything about this guest is off.
I think that Brian Greene did his best with what appears to be an interviewee (Eric Horvitz) who was not knowledgeable in the topics that were being discussed. I love exploring WSF discussions. With that said, I wish that I had opted out of this one. Forgive me passionate WSF fans. 🌸
Nice ad.
Exactly, way too biased in presentation.
Yes, AI will spark the Next Scientific Revolution. AI will save us all.
The moderator could be more concise and give the expert more chances to share his thoughts.
With this deep insight, the number one threat is malicious intent: someone taking this deep insight and using it to harm an innocent person. So, with that being said, I see a growing need for regulations.
"Check back in 18 months" means "I know something".
Excellent!
(Not gravity but gravitational waves).
I am far from optimistic about future applications; look at the level of the world leaders, who are virtually ALL non-scientifically educated personalities, even in democratic countries.
He says "AI" but he really means "automation". He should really say "our field of science has shunned computational approaches for 60 years, and now we find it's really useful to automate stuff". It's really the same thing that happened with statistics 20 years ago, giving rise to machine learning at scale.
Good point. AI is a misnomer. The only thing artificial about it is that it's not intelligent by choice but by design, and that doesn't satisfy being intelligent. Loops and data points.
Smoke and Mirrors.
@@mrhasselltell me you’ve never used a state of the art LLM + RAG without telling me you haven’t
@@2AoDqqLTU5v CLMs beat Frozen variety being end to end and using RAG 2.0 limiting Hallucinations, common in LangChain Python props.
Anyone notice that the interviewer talks EXACTLY like Jeff Bezos?
Well, let's wait for the next 18 months.
Imagine a human born with no nervous system. Could 'they' still be conscious, and if so, what would they be conscious of? I assume that many of the activities in a normal brain would just not be there. If we scanned the brain for neural activity, could we ever conclude that it is conscious?
I predict that AI will become extremely powerful. However, in medicine for example, I have doubts, because medical costs keep going up and people are getting sicker and sicker. There is an unhealthy incentive in the healthcare industry, and merely boosting it with AI will have a detrimental effect. The healthcare industry can't afford to actually heal people.
And don't tell the AI anything about the fifth dimension, otherwise what is unprovable will eventually become provable.
Would you please call a geologist for your show?
Because it rocks?
You now have 68 weeks, then we will check the progress.
Time for AI to figure out what the people who "figure out what people care about" care about.
18 months is all i heard
As with most tools, it will become an instrument of chaos far more often than it helps.
Like books or food production.
Google Stifles Competition?!? Not With My Data?!?
Language isn't invented, we co-evolved with it. Think about it for a moment, and it should be clear. We invented writing, not language.
Hold on for your life. We're in for a wild ride
Oh... yeah.. wild... really blew the hair back... phew!
@mrhassell I can tell you're only looking at where we are instead of at the rate of change, and that you lack an understanding of what's coming.
@DG123z 30 years of software development and still going... I know exactly what's coming; it isn't anything beyond an evolution of what already exists. What are you, a salesman?
@mrhassell We are about to create a new dominant species on the planet. If you don't think that's significant, I don't know what to tell you.
Theory of mind is a subfield of philosophy, not what he mentioned. ToM theories are based on empathy, while these systems have nothing to do with that.
Can't believe somebody's phone rang out loud 😂
AI knows what you did to cows and mills.
Cheesy but tasty!
@mrhassell They tied the cows/bulls and/or oxen to a pole and force-marched them in tiny circles to steal their labor.
AI has been with us since at least a decade ago... a late sparking effect, I would call it.
In 1980, Scotland Yard used a form of AI to track down and arrest the Yorkshire Ripper, with every person in the UK being a suspect.
Machine learning from rules and reductive logic using logic gradients or algorithms is older than Nvidia by a long shot.
Right now, Utrecht has a UFO above it; blue flashing lights in the sky, more than 10 times.......
If AI is self-learning, how can you be sure that all AIs will come to the same conclusion? I.e., if you have 3 different machines with the same AI software (or whatever it will be called in the future), kept in the same conditions as they learn and become more self-aware, how can you be sure that when the exact same data is fed to them, they will all come to the same conclusion? There's a possibility that they will have quirks unique to them that affect the way they examine the data. If they are truly sentient, they will view things differently from each other, because no matter how closely you monitor their development process, there are minute things that may affect how they learn or view things. I find AI a bit frightening. And how do you make sure that all AI will be benevolent and not put in positions to learn evil, per se? Can we guarantee that all AI will be held to, say, Asimov's Three Laws?
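That worry can be made concrete with a toy experiment: two learners given identical data, differing only in one hidden detail (here, just the random weight initialization), end up in slightly different internal states even when their final answers agree. This is a hedged illustration of the general phenomenon, not a claim about any particular AI system:

```python
import random

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # samples of y = 2x

def train(seed):
    """Fit w in y = w*x by simple gradient steps; only the seed differs."""
    rng = random.Random(seed)
    w = rng.uniform(-1.0, 1.0)            # the one 'quirk' between machines
    for _ in range(100):                  # repeated passes over the same data
        for x, y in data:
            w += 0.01 * (y - w * x) * x   # nudge w toward the observations
    return w

w1, w2 = train(seed=1), train(seed=2)
print(w1, w2)  # both very close to 2.0, but not bit-identical
```

Real training pipelines contain many more such hidden differences (data ordering, hardware nondeterminism, dropout), so the expectation that identical machines will reach identical conclusions is indeed fragile, well before any question of sentience arises.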