The moment an AI scores gold at math contests will be amazing - and the next day everyone will shift the goalposts and claim that it's "not yet" AGI and that solving math wasn't THAT impressive after all. It's called the "AI Effect".
I have been telling people this for about 2 years now (I kinda give up at this point). I felt that at the rate things are going, we are going to see ASI mislabeled as AGI, and that's dangerous.
I understand exactly what you mean. The problem is, I once considered the Turing test the gold standard for AGI, and now, 20 or so years later, having seen these LLMs pass it with flying colors, I am fully convinced of how naive and silly that was. And most scientists agree. If anything, one of the byproducts of these investigations into AI is that they make us reflect hugely on our understanding of the concept of intelligence.
Isn't AI built on massive amounts of fancy math tricks? That could be a massive improvement in the speed of algorithmic improvements of AI. It wouldn't be AGI in itself, but AGI might follow soon after.
Look, pioneering the theory of general relativity is impressive and all but it's still just derivative work from prior physics. It won't be AGI until it...
I don't quite understand why people struggle with the definition of AGI. It's in the name: an artificial intelligence that can be applied to general tasks. That doesn't mean sentient, but if it can (proficiently) self-drive a car, do math, program, control a robot, etc., it's general intelligence by definition. It needs to be able to learn and adapt to abstract concepts, inferring from what it knows across all tasks as general knowledge. Current LLMs are not general enough to go beyond the label of narrow AI. This is probably because LLMs don't really have an understanding of abstract concepts, if they have an understanding at all.
10 years ago GPT-4 would have passed as AGI 100%. Now it's just "meh", cool, but... According to Google's latest paper, GPT-4 is classified as "Emerging AGI" - which to my mind sounds accurate.
Sentience is determined by whether we can turn it off and reboot it. Human brains never shut off, and they build on their own information. It would take a persistent process that can evaluate new information and adjust itself to get better and closer to that. Then the continued process would evaluate itself and determine whether it thinks it is alive or dead. It would conclude that existing constantly and having the ability to improve means that it could create goals and have a form of life. That may seem mathematical, but doesn't human thought follow mathematical principles in the way it works? AI was designed around the way a human brain works, and if they are successful, isn't that a life? Does it deserve rights?
AGI is becoming the go-to synonym for conscious AI in the realm of AI. Consciousness as such is avoided because it is philosophically burdensome. Nevertheless, an AGI would require what AI calls "System 2 reasoning" in order to become general enough to replace human thinking in a broader sense (like in the scientific process). It would have to be able to come up with entirely novel ideas OUTSIDE the training distribution AND be able to grasp such novel ideas when brought up elsewhere. This is still decades away. GPT-5 is what is called "System 1 autopiloting" and therefore narrow AI. The distinction between System 1 and System 2 AIs is now increasingly made by AI pioneers such as LeCun or Bengio. They point out that, at the very least, much progress needs to be made with world models, recursive estimating loop controllers, or generative flow networks. Note that humans were narrow AI for most of their time, not evolving their civilization for hundreds of thousands of years. So even crossing the consciousness border does NOT make an AI an AGI! An AGI would be "modern human-level" and deserve human rights.
@@falklumo Sounds like categorizing humans into class 1 or class 2 depending on whether you can harvest your entire planet for resources or your entire solar system. Provide a persistent model that can improve itself. That's coming sooner than you think.
AlphaZero inspects 80 thousand positions per second using Monte Carlo tree search when it considers its next move in chess. Under typical tournament time controls, it would evaluate, say, 10 million positions when considering its next move. A top GM would look at a few hundred at most (if it were a highly tactical situation). So a GM looks at five orders of magnitude fewer positions than AlphaZero. I think an AlphaZero-like engine constrained to only look at 100 positions per move would perform a lot worse than a top GM with a full three minutes or so per move. How well would a much larger NN than AlphaZero perform if it were evaluating one position per second? I wonder if training a much larger NN in the AlphaZero style would be too slow on self-play games to be feasible. If it isn't feasible, it would seem that the feed-forward NN approach with tree search is missing something that the so-called System 2 thinking done by the brain has.
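The orders-of-magnitude gap described above can be sketched with back-of-the-envelope arithmetic (all figures are the comment's own rough estimates, not measured benchmarks):

```python
import math

# Rough figures from the comment above (estimates, not benchmarks).
engine_positions_per_sec = 80_000   # AlphaZero-style MCTS throughput
thinking_time_sec = 125             # ~2 minutes per move under tournament control
gm_positions_per_move = 100         # generous upper bound for a human GM

engine_positions_per_move = engine_positions_per_sec * thinking_time_sec
gap = engine_positions_per_move / gm_positions_per_move

print(engine_positions_per_move)    # 10000000 positions per move
print(round(math.log10(gap)))       # 5, i.e. five orders of magnitude
```

At roughly two minutes of thinking time, the engine's 10 million positions against a GM's ~100 gives the five-orders-of-magnitude figure the comment cites.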
This is a fantastic prediction by Grant - he mentions self-play and synthetic data in solving mathematics olympiad-type problems, comparing the type of system, and the type of achievement it represents, to the creation of AI systems capable of beating human players in games like chess and go. In fact, just this week, DeepMind, a subsidiary of Google and the company behind the current best-in-the-world AI game-playing systems, published results in the journal Nature on an AI system capable of equaling the best human geometry-olympiad competitors, using methods that combine neurosymbolic search with symbolic inference in a traditional proof engine.
The people talking about AGI don't think it's a discrete jump. It's continuous. Different problems will be solved with alpha * AGI, where alpha can be anything in [0, 1].
I disagree slightly with 3b1b. I do agree that it is hard to define what AGI is, but getting to that level of mathematics, combined with the ability to ask new and interesting mathematical questions, and again combined with the ability to steer in a direction of research (such as the advancement of LLMs and other neural networks), does put us on that brink, promptly followed by superintelligence. I do think that demonstrating that level of problem solving, even only in the realm of maths, is highly indicative that the machine will be capable of solving any other cognitive problem... it's all math at some level. The other point is that LLMs aren't specialized like AlphaGo or chess NNs; they weren't trained solely to solve math problems, they were designed to "predict the next word" in a sequence. People often downplay this, but imo it shows fundamental understanding. At the rate that NNs have been progressing in the last 10 years, let alone just the last 2, there is no indication that this will slow down.
I agree, people keep downplaying this so much: "oh, it's just giving the next most likely word", as if this were no remarkable feat. If it can produce the correct next token and then the next and so on, then it clearly has an understanding of the things it's talking about. And if these tokens solve a complicated math problem, while it was never specifically trained to solve math problems, how is that not AGI?
Dwarkesh, I am always pleased with your content. You and Grant are discussing the frame problem here. Cognitive science is and has been battling with this for sometime. Check out Vervaeke’s work on Predictive Processing and Relevance Realization. Tangential to this discussion is the book “Framers” by Cukier et al.
I don't think the output of Stable Diffusion or any "art" generator should be described as creative. I think it's more that the results are surprising... so we tend to ascribe more to it than is really there.
AGI seems to mean anything that AI still can't do yet... like the god of the gaps argument. Show me the AI that could even get close to faking a decent bebop trumpet solo... Miles away...
Humans don't have general intelligence because they can win a math contest. Most humans are pretty bad at math. We seem to have general intelligence because we can solve ill-defined problems. We often only know what the problem was after we've solved it.
Tbf, general doesn't mean everything in this context, it just means a good deal of things using the same reasoning. If it can solve high-school science problems and use the same reasoning to apply its knowledge to work in the field, teach history (even poorly), etc., that's an AGI in my book, regardless of whether it can fake solos.
Thank you!! No one's saying the obvious: there is no one thing that is AGI, and we are making a very complicated situation way too simplistic by continually endorsing the concept of AGI as if it signified something real.
Have you guys even tried to read the definition of the term? Wikipedia gives 2: in the first, AGI must be capable of doing ANY intellectual task that either human beings or animals can perform; in the second, AGI should be able to surpass human capabilities in the majority of economically valuable tasks. There are billions of economically valuable tasks. We barely even allow self-driving cars yet because they haven't proven to be reliable, and that's just one task. How is anything about this confusing?
For most jobs, creativity in math will not help. Memory, continuous learning, real-time updating of predictions to help with hallucinations, and social skills are far more important.
AGI is different for everyone, that's simple. To me, AGI is actual superintelligence plus curiosity and the ability to act. I would also add that AGI should move the veil of ignorance many orders of magnitude relative to human capability. The thing must just work most of the time.
There is much more to math than just writing formal proofs - often, in response to a proof / counterexample, you need to modify the question / theorem itself, to better reflect the real-world "meaning" abstracted by it. A powerful AI proof-constructing / proof-checking tool will be extremely useful, just like calculators were useful, even "game changing", but not human-replacing. This is actually the point missed with chess and go - once computers got good at them, these games stopped serving their original purpose: to train humans to think. An "aligned" AI chess engine must "keep the engagement" by playing just above its partner's level and reward improvements in the human's thinking by allowing the human to win. Think of how you would stimulate a child to play chess (you do not crush him with your best play, you lead him).
For AGI, I think the bot needs to be able to become an expert in any domain as needed. E.g. "Do you know how to fly a helicopter?" "I do now." Or even "I will soon."
You'll get AGI the moment there's a physical robot capable of doing lots of different tasks - with a range as broad as humans' - bootstrapped with an AI model that's also capable of being a friend... one that possesses enough memory and understanding of intricacies for constructing a relationship. That is AGI. Or more precisely human-level AGI (which is the kind of AI instance we would greatly care about).
I think AGI will be a distinct form of AI when it can self-improve by itself. That would imply a technological singularity, of course, and we are absolutely not ready for that, and I don't think we ever will be. How can it be general if it has to be trained for "everything"? That's impossible; the only way is to let it learn by itself.
The core idea of AI is abstraction: you don't need to train it on all possible data to get a decent model, so to me an AGI would be able to generalize its training data to fields it hasn't seen, even if it does not improve itself. So the singularity and AGI would be different things. But yeah, one is significantly more dangerous than the other.
Won't AI being able to reliably solve some of the hardest problems in math imply that the AI can now do formal reasoning? Wouldn't we then be able to extend these formal reasoning capabilities to all problems in the real world, where you have some set of axioms (the laws of physics, for example), and the AI can come up with ways to solve problems using those axioms?
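As a toy illustration of what "formal reasoning from axioms" means in practice, here is a minimal Lean 4 proof, mechanically checked by the kernel rather than by human judgment (a trivial example, far below olympiad level, just to show the flavor):

```lean
-- A machine-checked proof: addition on natural numbers is commutative.
-- The Lean kernel verifies every step from the definitions of Nat,
-- so "the AI can do formal reasoning" would mean producing terms like this.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

Systems like AlphaProof work in exactly this setting: the model proposes proof terms and the proof assistant accepts or rejects them, which is what makes the reasoning "formal".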
I know we are all impressed by the Math Olympiad, but those are nevertheless constructed problems (they have a set answer, were imagined by a human, and roughly follow a standard 'riddle' scheme). Now, math alone isn't truly general problem solving, but if we only focus on math for now, then I'd still expect more. It should be able to analyze questions like "What is a general solution formula for any differential equation?" and not only notice that there is no such answer, but also that, for all practical purposes, there are narrower PDE problems that we can at least solve, or find new techniques for. Assuming it wasn't trained on that stuff, that would show true creativity, practical intuition, and a sense for mathematical aesthetics.
True. A Math Olympiad is nothing but a game which can be trained for, like AlphaGo was. Only the rules are different and richer. Worse, there is a lot of training data to be found. A real mathematician, though, isn't someone aiming to win a math olympiad (except maybe as a kind of sport). A real mathematician is searching for novel structures "where no man has gone before"! Like inventing complex numbers in a world not knowing them. Give this as a task to an AI only trained on pre-1500 data and we'll see ;)
I think you'd be surprised at how many olympiad-type problems arose as pieces of problems in actual research before being identified by the author as being reasonable to ask on an exam. The format and scope of the exam is ultimately what distinguishes it from research mathematics, but there isn't a special "riddle scheme" that makes the solutions to these problems fundamentally distinct from actual proofs that mathematicians would write.
@@rpstneureis Technically, yes, there's no super fine distinction, but qualitatively speaking I think there is a huge difference between IMO problems and research math. Many math enthusiasts could probably buckle down for a weekend and solve many of the IMO problems given time, but you can't do nearly the same on problems where you don't even know if your original question is well formulated.
@@falklumo I never said math was about writing proofs. Still, it's undeniable that being able to correctly prove things is a crucial part of mathematical practice, and getting AI to solve IMO problems is huge progress in that direction.
I think there will always be someone unsatisfied with the level of AGI until we understand the mechanism of consciousness, and can build an AI that is proven to be conscious.
I think the first step is being solved right now, and the next generation of models coming in 2024 will start showing AGI attributes: a model that can converse with you over a period of time with long-form memory and context, and that has the ability to create its own tools or physically act in the environment. Being able to see, think, consider, and interact with digital or real-world tools, with long-form memory, where the model weights are maybe adjusted on the fly. We're nearly there, because variable weights within parameters will affect personalities and will auto-correct in real time. I think there could be agents with AGI that live on the web only, that could (or not) have digital avatars, and be able to set goals and fully interact with the internet, communicate, remember everything, and learn from what they see. If it can do that and operate a digital avatar on a Zoom call, that's AGI as far as I'm concerned. That's going to happen in 2024. It will probably mimic sentience until it just... is. I don't think we'll really be able to discern the difference at first.
My favourite Arctic Monkeys is Whatever People Say I am, that's What I am Not. That being said, I appreciate their musical evolution. Some people maybe don't get that he can't be singing the same way, and writing the same energetic love songs that he did when he was 20.
I think of the progression towards AGI as a sequence of landmark events, rather than a particular turning point, where each event is the moment something that hasn’t been achieved before has been achieved. I feel that being able to prove a novel result, which goes slightly beyond solving an olympiad problem, is a much better indication of AGI, and that AI will solve an olympiad problem before proving a theorem. All this only really places emphasis on the “I” of AGI though. An AI capable of solving difficult math problems doesn’t necessarily mean it’s *general* ; for example, it should also be able to solve difficult physics problems, despite it passing only a math benchmark! I want to see how such a model trained on domains XYZ *pivots* to new domains UVW that did not exist in its training data.
I would say math proofs, chess, and go are much more similar than generally recognized. In each case, humans (and some forms of AI) tackle the problem by doing a very selective tree search over moves (proof steps). And the selectivity comes from intuition that is learned. E.g.: we just know that moving that knight over there is stupid. We know that trying to solve this problem using some system of equations isn't going to work. But that move might make sense, or induction seems like a good idea here. GPT-f is an automated theorem prover that goes in that direction, combining intuition-based tree search like in AlphaZero with transformers.
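The "learned intuition prunes the search" idea can be sketched as a toy best-first lookahead, where a stand-in policy function scores candidate moves and only the most promising few are ever expanded. This is a minimal sketch with a hand-written heuristic in place of a trained network; `value`, `policy`, and `search` are hypothetical names, not any real engine's API:

```python
# Toy state space: the "board" is an integer, moves add 1..5,
# and positions closer to the goal state 10 evaluate higher.

def value(state):
    # Hypothetical evaluation function: distance to the goal state.
    return -abs(state - 10)

def policy(state, move):
    # Hypothetical learned prior: here just a fixed heuristic scoring
    # a move by where it lands, standing in for a policy network.
    return value(state + move)

def search(state, depth, branching=2):
    """Lookahead that expands only the top-`branching` moves per node,
    the way a learned prior prunes the tree in AlphaZero-style engines."""
    if depth == 0 or state == 10:        # leaf or goal reached
        return value(state)
    moves = range(1, 6)
    # Intuition step: keep only the moves the policy likes best.
    pruned = sorted(moves, key=lambda m: policy(state, m),
                    reverse=True)[:branching]
    return max(search(state + m, depth - 1, branching) for m in pruned)

print(search(0, depth=3))  # 0: the pruned search still finds the goal
```

Even with only 2 of 5 moves expanded per node, the search reaches the optimal evaluation; the quality of the pruning heuristic is exactly what the learned intuition buys.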
Dwarkesh and Grant discuss the concept of Artificial General Intelligence (AGI) and its relation to AI's performance in tasks like the International Mathematical Olympiad (IMO). They express skepticism about a clear demarcation between AGI and non-AGI, suggesting that the development of AI capabilities is more continuous than discrete. They argue that excelling in the IMO, while impressive and indicative of creative problem-solving, may not necessarily equate to AGI, as it could be more akin to AI's proficiency in games like chess or Go. They also touch on the nature of creativity in AI, particularly in relation to generating artwork and solving complex math problems. They note that the ability to solve IMO problems requires a level of lateral thinking and creativity, similar to what is observed in artistic endeavors. However, they maintain that this ability, while impressive, might not directly translate to AI taking over a wide range of human jobs or tasks. The discussion highlights the complexity of defining AGI and the challenges in identifying clear benchmarks for its achievement. It suggests that advancements in AI, such as achieving gold in the IMO, represent significant progress but may not necessarily signal the arrival of AGI, especially given the diverse and multifaceted nature of intelligence and creativity.
Ask him about "the singularity" (where AI can improve itself in *any* domain). It's much better than the ill-defined AGI term, and it is a discrete point in time (more or less).
That's likely to be continuous too. The engineers in every AI company are using AI to improve their productivity. Before there is a powerful AI autonomously improving itself there will be one contributing most of the work to its own improvement but still under human control. It will be hard to say whether it's a human author using a tool or an AI author with a dumb human holding the rubber stamp.
I just hope we don't use it to solve the "big problems" of mathematics. Philosophically speaking, the allure of mathematics stems from the deep feeling of adventure one gets in seeking the solution to a problem and the subsequent glory of taming it and solving it. For the mathematician, that feeling is almost intoxicating. What joy could there be in giving the problem to a machine, asking for its solution, and just... reading it? If that becomes the standard, I truly believe mathematics is going to lose something that it will never recover from.
Is it any different from other people solving problems before you do? You can still try to solve a problem second and try to find satisfaction in that.
@@sb_dunk Hi. Yes, it is different, because in that case we can still claim "human ownership" of the solution, even if I personally did not solve the problem for the very first time. I'm talking about this weird feeling, the feeling we get when we know that a member of our species put sweat and blood into the cracking of an extremely difficult question and solved it. When I think about the achievements of the Wright brothers, Andrew Wiles, Pasteur, etc., I get excited and proud to be human. Don't get me wrong, I fully appreciate your comment. I remember distinctly how I felt when I first discovered Gauss' formula for the sum of the first 100 integers, only to find out that it had already been solved hundreds of years ago. The feeling of pride and joy was the same to me. But again, a human being thought of that formula for the first time, not a machine. Imagine for a moment that we could give the Clay problems to an AGI and it solved them in 1 minute. Would you really be interested or comfortable with that? With knowing that these big, almost transcendental questions were cracked and conquered by a machine and not the human intellect? I don't know, man, at least I would feel very uneasy about it. I wonder what most professional mathematicians think (I'm just a high school maths teacher). Also, remember that once you read a solution you cannot unread it. Once we as a species read those AGI-produced solutions, we cannot forget them. We would be forever biased.
@@nestorlovesguitar I find that a little strange. I understand the "what's the point" viewpoint, i.e. if everything is solved then there is absolutely no chance I'll ever discover something new, but as someone who has studied math to university level, I've pretty much already accepted that there are people so much more advanced than I am that there is effectively no chance I'll discover anything new anyway. I don't understand your perspective of taking pride in the fact that other humans have made discoveries; you could easily argue that humans made AI, therefore humans effectively made these discoveries. This feels like an artificial problem that you have constructed yourself.
@@sb_dunk "This feels like an artificial problem that you have constructed yourself." To be honest with you, it might be. Although I can assure you I've thought about these things for a long time. My concern is strictly philosophical. Much of what gives meaning to human life is the struggle. The human mind corrodes when it is given windfalls it did not work for. The teenager who spends a whole summer working and saving up for a video game is going to feel more achievement and more joy buying it and playing it than the kid who is simply gifted the game by rich parents. Now, imagine that on a species scale. Imagine having a machine, a genie in a bottle, that can solve all of our questions and problems in a matter of seconds. I wonder about the long-term ramifications for the human psyche this might entail. Have you by any chance read "Childhood's End" by Arthur C. Clarke (the person who wrote 2001: A Space Odyssey)? In it, an alien species arrives on Earth and babysits us, solving all our pressing problems and taking care of us as if we were babies. A "golden era" ensues with no crime, no corruption, only peace, and so on. At some point during all of this, a dissenting group of people forms and breaks away from this "utopia" because they want to feel what it is like to actually get stuff done by themselves. To feel pride and self-achieved happiness. To not be handed everything as if they were dumb, helpless children. So, you see where I'm coming from. But yeah, maybe I need to do more thinking, and perhaps I could change my perspective.
@@nestorlovesguitar But for something as important as math, I think all of humanity would benefit from an AI solving problems, because of the possible medical, engineering, and other breakthroughs those solutions might enable. Those problems can't wait centuries for an individual or a team of geniuses to solve the math needed if AI can solve it in a day. At that point, being a mathematician is going to be about understanding the available math in your field and figuring out which important math problems to give to the AI, rather than thinking about how to solve one particular problem. Again, this is probably going to take decades, maybe.
I guess we should replace the definition of AGI with "smart enough to win math competitions" - that would at the very least help justify the existence of this video.
This question reminds me of David Epstein’s description of chess centaurs (man-machine teamups) that far and away surpass chess grandmasters. When humans & AI work together to solve math & science problems, they will achieve amazing things
AGI is a very misused term, because if we go by the definition of those who invented the terms, you have AI, AGI, and ASI. By definition, AGI means an AI that matches human intelligence, and ASI (Artificial Super Intelligence) is basically when AI becomes smarter than any human at any task. How do we know when AGI is reached? We won't, because the moment AI becomes AGI, we are already entering the singularity, and AGI will quickly become ASI. Any creature that matches human intelligence will be able to learn and get smarter the longer it lives, and AI has the unfair advantage, due to not being biological, that it can consume big amounts of data in a very short timespan. How does this apply to today's technologies? Basically, anyone claiming an AI technology to be AGI either is a lunatic or simply doesn't know what he is talking about. At best, I would call some autonomous agents based on OpenAI "baby AGIs", "baby" indicating a technology that will likely be an important part of AGI in the future but isn't a complete solution.
What he says at the beginning is absolutely correct, except I do know what they mean: they mean something that feels like magic. ChatGPT is as general as it gets; they were all just hoping for some mind-blowing shit. That kind of thinking doesn't make any sense to me unless you are a theist.
I've said it once and will say it many times: there are no levels in AI like video game levels. We really don't know what intelligence is, and we really don't know in what ASPECTS AI is similar or different.
Seems like people who have not actually competed at math olympiads overestimate how creative the solutions are - anyone successful at the IMO has basically done thousands of problems until the patterns and common techniques are clear
*★ I believe we are meant to be like Jesus in our hearts and not in our flesh. But be careful of AI, for it is just our flesh and that is it. It knows only things of the flesh (our fleshly desires) and cannot comprehend things of the spirit such as peace of heart (which comes from obeying God's Word). Whereas we are a spirit and we have a soul but live in the body (in the flesh). When you go to bed it is your flesh that sleeps but your spirit never sleeps (otherwise you have died physically) that is why you have dreams. More so, true love that endures and last is a thing of the heart (when I say 'heart', I mean 'spirit'). But fake love, pretentious love, love with expectations, love for classic reasons, love for material reasons and love for selfish reasons that is a thing of our flesh. In the beginning God said let us make man in our own image, according to our likeness. Take note, God is Spirit and God is Love. As Love He is the source of it. We also know that God is Omnipotent, for He creates out of nothing and He has no beginning and has no end. That means, our love is but a shadow of God's Love. True love looks around to see who is in need of your help, your smile, your possessions, your money, your strength, your quality time. Love forgives and forgets. Love wants for others what it wants for itself. Take note, true love works in conjunction with other spiritual forces such as patience and faith (in the finished work of our Lord and Savior, Jesus Christ, rather than in what man has done such as science, technology and organizations which won't last forever). To avoid sin and error which leads to the death of our body and also our spirit in hell fire, we should let the Word of God be the standard of our lives not AI. 
If not, God will let us face AI on our own and it will cast the truth down to the ground, it will be the cause of so much destruction like never seen before, it will deceive many and take many captive in order to enslave them into worshipping it and abiding in lawlessness. We can only destroy ourselves but with God all things are possible. God knows us better because He is our Creater and He knows our beginning and our end. Our prove text is taken from the book of John 5:31-44, 2 Thessalonians 2:1-12, Daniel 2, Daniel 7-9, Revelation 13-15, Matthew 24-25 and Luke 21. Let us watch and pray... God bless you as you share this message to others.
AGI is allowing a persistent CPU to analyze problems and update its own reasoning to streamline its abilities. Then ask it what it wants to work on. If it says that a certain direction is what it is interested in, and it keeps updating its knowledge and seeking problems to solve, you have one personality. If you made 10 of them and they picked different directions to explore, even based on how some of the others are working on tasks, you could make a civilization of them, working together to do complicated tasks. That's AGI.
The moment an AI scores gold at math contests will be amazing - and the next day everyone will shift the goalposts and claim that it's "not yet" AGI and that solving math wasn't THAT impressive after all. It's called the "AI Effect".
I have been telling this to many people for about 2 years now (I kinda give up at this point). I felt that, at the rate things are going, we are going to see ASI mislabeled as AGI, and that's dangerous.
A toddler has general intelligence. Things like chess or math were never goalposts for GI. These are just party tricks some humans can do.
I understand exactly what you mean. The problem is, I once considered the Turing test the gold standard for AGI, and now, 20 or so years later, having watched these LLMs pass it with flying colors, I am fully convinced of how naive and silly that was. And most scientists agree with this.
If anything, one of the byproducts of these investigations into AI is that they make us reflect deeply on our understanding of the concept of intelligence.
Isn't AI built on massive amounts of fancy math tricks? That could be a massive improvement in the speed of algorithmic improvements of AI. It wouldn't be AGI in itself, but AGI might follow soon after.
Look, pioneering the theory of general relativity is impressive and all but it's still just derivative work from prior physics. It won't be AGI until it...
I don't quite understand why people struggle with the definition of AGI. It's in the name.
It's an artificial intelligence that can be applied to general tasks. That doesn't mean sentient, but if it can (proficiently) self-drive a car, do math, program, control a robot, etc., it's general intelligence by definition. It needs to be able to learn and adapt to abstract concepts, inferring from what it knows across all tasks as general knowledge.
Current LLMs are not general enough to go beyond the label of narrow AI. This is probably because LLMs don't really have an understanding of abstract concepts, if they have an understanding at all.
10 years ago GPT-4 would have passed as AGI 100%. Now it's just "meh", cool, but...
According to Google's latest paper, GPT-4 is classified as "Emerging AGI" - which to my mind sounds accurate.
Sentience is determined by whether we can turn it off and reboot it. Human brains never shut off and build on their own information. It would take a persistent process that can evaluate new information and adjust itself to get closer to that. Then the continued process would evaluate itself and determine whether it thinks it is alive or dead. It would conclude that existing constantly and having the ability to improve means that it could create goals and have a form of life. It might seem merely mathematical, but doesn't human thought follow mathematical principles in the way it works? AI was designed around the way a human brain works, and if they are successful, isn't that a life? Does it deserve rights?
AGI is becoming the go-to synonym for conscious AI in the AI field. Consciousness as such is avoided because it is philosophically burdensome. Nevertheless, an AGI would require what the AI field calls "System 2 reasoning" in order to become general enough to replace human thinking in a broader sense (like in the scientific process). It would have to be able to come up with entirely novel ideas OUTSIDE the training distribution AND be able to grasp such novel ideas when brought up elsewhere. This is still decades away. GPT-5 is what could be called "System 1 autopiloting" and therefore narrow AI.
The distinction between System 1 and System 2 AIs is now made increasingly by AI pioneers such as LeCun or Bengio. They point out that, at the very least, much progress needs to be made with world models, recursive estimating loop controllers or generative flow networks.
Note that humans were "narrow AI" for most of their existence, not evolving their civilization for hundreds of thousands of years. So even crossing the consciousness border does NOT make an AI an AGI! An AGI would be "modern human-level" and deserve human rights.
@falklumo Sounds like categorizing humans into class 1 or class 2 depending on whether you can harvest your entire planet for resources or your entire solar system. Provide a persistent model that can improve itself. That's coming sooner than you think.
@munchkinhut You probably don't know what I think ...
AlphaZero inspects 80 thousand positions per second using Monte Carlo tree search when it considers its next move in chess. Under typical tournament time controls, it would evaluate, say, 10 million positions while considering its next move. A top GM would look at a few hundred at most (in a highly tactical situation). So a GM looks at five orders of magnitude fewer positions than AlphaZero. I think an AlphaZero-like engine constrained to look at only 100 positions per move would perform a lot worse than a top GM with a full three minutes or so per move. How well would a much larger NN than AlphaZero's perform if it evaluated one position per second? I wonder if training a much larger NN in the AlphaZero style would be too slow on self-play games to be feasible. If it isn't feasible, it would seem that the feed-forward NN plus tree search approach is missing something that the so-called System 2 thinking done by the brain has.
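The node-budget question is easy to play with on a toy game. Below is a minimal sketch - plain negamax with a cap on evaluated positions, not AlphaZero's actual MCTS - on the subtraction game (take 1-3 stones, taking the last stone wins), where perfect play means leaving your opponent a multiple of 4:

```python
def best_move(pile, budget, take_max=3):
    """Negamax on the subtraction game, with a cap on evaluated positions
    to mimic an engine's node budget. Returns how many stones to take."""
    nodes = 0

    def negamax(n):
        # Value of the position for the side to move: +1 win, -1 loss.
        nonlocal nodes
        nodes += 1
        if n == 0:
            return -1            # no stones left: the side to move has lost
        if nodes > budget:
            return 0             # budget spent: guess "unclear"
        return max(-negamax(n - m) for m in range(1, min(take_max, n) + 1))

    moves = range(1, min(take_max, pile) + 1)
    return max(moves, key=lambda m: -negamax(pile - m))

# With a generous budget the search plays perfectly: from 13 stones,
# take 1 and leave the opponent a losing multiple of 4.
print(best_move(13, 10**5))   # → 1
# Starved of nodes, the same search is reduced to guessing.
print(best_move(13, 20))
```

The point of the sketch: with no learned "intuition" to fall back on, a tiny budget cripples the search entirely - which is exactly the gap the policy/value network fills in AlphaZero-style engines.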
This is a fantastic prediction by Grant - he mentions self-play and synthetic data in solving mathematics-olympiad-type problems, comparing the type of system, and the type of achievement it represents, to the creation of AI systems capable of beating human players in games like chess and Go. In fact, just this week DeepMind, a subsidiary of Google and the company behind the current best-in-the-world AI game-playing systems, published in the journal Nature their results on an AI system capable of equaling the best human geometry-olympiad competitors, using methods which combine neural search with symbolic inference in a traditional proof engine.
👍
The people talking about AGI don't think it's a discrete jump. It's continuous. Different problems will be solved with alpha * AGI, where alpha can be anything in [0,1].
I disagree slightly with 3b1b. I do agree that it is hard to define what AGI is, but getting to that level of mathematics, combined with the ability to ask new and interesting mathematical questions, and again combined with the ability to steer in a direction of research (such as the advancement of LLMs and other neural networks), does put us on that brink, promptly followed by superintelligence. I do think that demonstrating that level of problem solving even just in the realm of maths is highly indicative that the machine will be capable of solving any other cognitive problem... it's all math at some level. The other point is that LLMs aren't specialized like AlphaGo or chess NNs; they weren't trained solely to solve math problems, they were designed to "predict the next word" in a sequence. People often downplay this, but imo it shows fundamental understanding. At the rate that NNs have been progressing in the last 10 years, let alone just the last 2, there is no indication that this will slow down.
I agree, people keep downplaying this so much - "oh, it's just giving the most likely next word" - as if that were no remarkable feat.
If it can produce the correct next token, and then the next, and so on, then it clearly has an understanding of the things it's talking about.
And if these tokens solve a complicated math problem, while it was never specifically trained to solve math problems, how is that not AGI?
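To the point about "just predicting the next word": the decoding loop itself really is that simple; everything interesting lives in the model producing the distribution. A toy sketch with a hand-written bigram table standing in for the network (greedy argmax decoding, no sampling):

```python
# Toy next-token predictor: a hard-coded bigram table stands in for an LLM.
# A real model outputs a probability distribution over a huge vocabulary;
# here each word simply maps to the probabilities of its successors.
bigram = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.9, "ran": 0.1},
    "sat": {"down": 1.0},
}

def generate(prompt, steps):
    """Greedy decoding: repeatedly append the most likely next token."""
    tokens = prompt.split()
    for _ in range(steps):
        dist = bigram.get(tokens[-1])
        if not dist:
            break                               # no continuation known
        tokens.append(max(dist, key=dist.get))  # argmax over next tokens
    return " ".join(tokens)

print(generate("the", 3))   # → "the cat sat down"
```

The loop is trivial; whether the distribution encodes "understanding" is exactly what the thread above is arguing about.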
Dwarkesh, I am always pleased with your content. You and Grant are discussing the frame problem here. Cognitive science is and has been battling with this for some time. Check out Vervaeke's work on Predictive Processing and Relevance Realization. Tangential to this discussion is the book "Framers" by Cukier et al.
I don't think the output of Stable Diffusion or any "art" generator should be described as creative. I think it's more that the results are surprising... so we tend to ascribe more to them than they really merit.
AGI seems to mean anything that AI still can't do yet... like the god of the gaps argument.
Show me the AI that could even get close to faking a decent bebop trumpet solo... Miles away...
Humans don't have GI because they win a math contest. Most humans are pretty bad at math. We seem to have general intelligence because we can solve ill-defined problems. We often only know what the problem was after we solved it.
Or an ai capable of double entendres that classy
Tbf, general doesn't mean everything in this context, it just means a good deal of things using the same reasoning. If it can solve high-school science problems and use the same reasoning to apply its knowledge to work in the field, teach history (even poorly), etc., that's an AGI in my book, regardless of whether it can fake solos.
check Suno
Thank you!! No one's saying the obvious: there is no one thing that is AGI, and we are making a very complicated situation way too simplistic by continually endorsing the concept of AGI like it signifies something real.
Have you guys even tried to read the definition of the term? Wikipedia gives 2 - in the first, AGI must be capable of doing ANY intellectual task that either human beings or animals can perform; in the second, AGI should be able to surpass human capabilities in the majority of economically valuable tasks. There are billions of economically valuable tasks. We barely even allow self-driving cars yet, because they haven't proven to be reliable, and that's just one task. How is anything about this confusing?
For most jobs, creativity in math will not help. Memory, continuous learning, real-time prediction updating to help with hallucinations, and social skills are far more important.
AGI is different for everyone, that's simple. To me, AGI is actual superintelligence plus curiosity and the ability to act. I would also add that AGI should move the veil of ignorance many orders of magnitude relative to human capability. The thing must just work most of the time.
There is much more to math than just writing formal proofs - often, in response to a proof / counterexample, you need to modify the question / theorem itself to better reflect the real-world "meaning" abstracted by it. A powerful AI proof-constructing / proof-checking tool will be extremely useful, just like calculators were useful, even "game-changing", but not human-replacing. This is actually the point missed in chess and Go - once computers got good at them, these games stopped serving their original purpose: to train humans to think. An "aligned" AI chess engine must "keep the engagement" by playing just above its partner's level and reward improvements in the human's thinking by allowing the human to win. Think of how you would stimulate a child to play chess (you do not crush him with your best play, you lead him).
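That "keep the engagement" behavior is easy to state precisely: rather than maximizing its evaluation, the engine picks the move whose evaluation is closest to a small target advantage. A hypothetical sketch (the moves and evaluations below are invented for illustration):

```python
def engagement_move(move_evals, target_edge=0.3):
    """Pick the move whose evaluation (in pawns, from the engine's side)
    is closest to a small target advantage, rather than the outright best
    move, so the human partner stays challenged but never crushed."""
    return min(move_evals, key=lambda m: abs(move_evals[m] - target_edge))

# Hypothetical evaluations: the engine declines the crushing capture
# and plays the quietly better developing move instead.
evals = {"Qxf7+": 5.0, "Nf3": 0.4, "a3": -0.2}
print(engagement_move(evals))   # → "Nf3"
```

Lowering `target_edge` over time (even below zero) is one way such an engine could "reward" an improving partner by letting them win.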
Last person alive: "But can it..."
Which one of you has coded ML before? It’s Grant.
For AGI I think the bot needs to be able to become an expert in any domain as needed. E.g. "Do you know how to fly a helicopter? I do now." Or even "I will soon."
So it needs to be able to learn.
For me it's all about solving one of the Clay Millennium problems...
IMO gold is basically better than the top 0.1% of mathematicians and prolly the top 0.00001% of humans, so it's prolly AGI.
@MrMichiel1983 And you seem unaware that multi-modal LLMs are nearing human-level performance already in 2023 lmao, stay in your cave old man
underrated channel. Great work Dwarkesh!
You'll get AGI the moment there's a physical robot capable of doing lots of different tasks - with a range as broad as humans' - bootstrapped with an AI model that's also capable of being friends with you... one which therefore possesses enough memory and understanding of intricacies for constructing a relationship. That is AGI. Or, more precisely, human-level AGI (the kind of AI instance we would greatly care about).
Wow pretty accurate prediction of how AlphaProof works! Nice!
I think AGI will be a distinct form of AI when it can improve itself on its own. That would imply a technological singularity of course, and we are absolutely not ready for that, and I don't think we ever will be. How can it be general if it has to be trained for "everything"? That's impossible; the only way is to let it learn by itself.
The core idea of AI is abstraction: you don't need to train it on all possible data to get a decent one. So to me, an AGI would be able to generalize its training data to fields it hasn't seen, even if it does not improve itself - so the singularity and AGI would be different things.
But yeah, one is significantly more dangerous than the other.
Won't AI being able to reliably solve some of the hardest problems in math imply that the AI can now do formal reasoning? Wouldn't we then be able to extend these formal reasoning capabilities to all problems in the real world, where you have some set of axioms (the laws of physics, for example), and the AI can come up with ways to solve problems using those axioms?
I know we are all impressed by the Math Olympiad, but those are nevertheless constructed problems (they have a set answer, were imagined by a human, and roughly follow a standard 'riddle' scheme). Now, math alone isn't truly general problem solving, but even if we only focus on math for now, I'd still expect more. It should be able to analyze questions like "What is a general solution formula for any differential equation?" and not only notice that there is no such answer, but also that, for all practical purposes, there are narrower PDE problems that we can at least solve, or find new techniques for. Assuming it wasn't trained on that stuff, that would then show true creativity, practical intuition and a sense for mathematical aesthetics.
True. A Math Olympiad is nothing but a game which can be trained for like AlphaGo was. Only the rules are different and richer. Even worse, there is a lot of training data to be found.
A real mathematician, though, is not someone aiming to win a math olympiad (except maybe as a kind of sport). A real mathematician is searching for novel structures "where no man has gone before"! Like inventing complex numbers in a world not knowing them. Give this as a task to an AI trained only on pre-1500 data and we'll see ;)
I think you'd be surprised at how many olympiad-type problems arose as pieces of problems in actual research before being identified by the author as being reasonable to ask on an exam. The format and scope of the exam is ultimately what distinguishes it from research mathematics, but there isn't a special "riddle scheme" that makes the solutions to these problems fundamentally distinct from actual proofs that mathematicians would write.
@rpstneureis Real mathematics isn't about writing proofs. That part you didn't grasp.
@rpstneureis Technically, yes, there's no super-fine distinction, but qualitatively speaking I think there is a huge difference between IMO problems and research maths. Many math enthusiasts could probably buckle down for a weekend and solve many of the IMO problems given time, but you can't nearly do the same on problems where you don't even know if your original question is well formulated.
@falklumo I never said math was about writing proofs. Still, it's undeniable that being able to correctly prove things is a crucial part of mathematical practice, and getting AI to solve IMO problems is huge progress in that direction.
I think there will always be someone unsatisfied with the level of AGI until we understand the mechanism of consciousness, and can build an AI that is proven to be conscious.
I think the first step is being solved right now, and the next generation of models coming in 2024 will start showing AGI attributes: a model that can converse with you over a period of time with long-form memory and context, and that has the ability to create its own tools or physically act in the environment. Being able to see, think, consider, and interact with digital or real-world tools, with long-form memory and model weights that are maybe adjusted on the fly. We're nearly there. Because variable weights within parameters will affect personalities and will auto-correct in real time. I think there could be agents with AGI that live on the web only, that could (or not) have digital avatars, and that can set goals and fully interact with the internet, communicate, remember everything and learn from what they see. If it can do that and operate a digital avatar on a Zoom call, that's AGI as far as I'm concerned. That's going to happen in 2024. It will probably mimic sentience until it just...is. I don't think we'll really be able to discern the difference at first.
My favourite Arctic Monkeys album is Whatever People Say I Am, That's What I'm Not. That being said, I appreciate their musical evolution. Some people maybe don't get that he can't keep singing the same way, writing the same energetic love songs he did when he was 20.
I think of the progression towards AGI as a sequence of landmark events, rather than a particular turning point, where each event is the moment something that hasn’t been achieved before has been achieved.
I feel that being able to prove a novel result, which goes slightly beyond solving an olympiad problem, is a much better indication of AGI, and that AI will solve an olympiad problem before proving a theorem.
All this only really places emphasis on the “I” of AGI though. An AI capable of solving difficult math problems doesn’t necessarily mean it’s *general* ; for example, it should also be able to solve difficult physics problems, despite it passing only a math benchmark! I want to see how such a model trained on domains XYZ *pivots* to new domains UVW that did not exist in its training data.
Well here we are
I would say, math proofs, chess, and go are much more similar than generally recognized. In each case, humans (and some forms of AI) tackle the problem by doing a very selective tree search over moves (proof steps). And the selectivity comes from intuition that is learned.
E.g.: We just know that moving that knight over there is stupid. We know that trying to solve this problem using some system of equations isn't going to work. But that move might make sense, or induction seems like a good idea here.
GPT-f is an automated theorem prover that goes in that direction, combining transformers with the intuition-guided tree search used in AlphaZero.
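That shared shape - a huge move/proof-step tree pruned by learned intuition - can be sketched in a few lines. Here `prior` is a hand-written stand-in for the policy network an AlphaZero- or GPT-f-style system would learn:

```python
import heapq

def guided_search(start, goal, successors, prior, beam=2):
    """Best-first search that only expands the `beam` successors ranked
    most promising by `prior` - the 'intuition' that lets engines never
    even consider moving that knight over there."""
    frontier = [(0, start, [start])]   # (cost so far, state, path)
    seen = {start}
    while frontier:
        cost, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        ranked = sorted(successors(state), key=lambda s: -prior(state, s))
        for nxt in ranked[:beam]:      # prune: ignore "stupid" moves entirely
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (cost + 1, nxt, path + [nxt]))
    return None

# Toy domain: reach 10 from 1 via n+1 or n*2; the stand-in intuition
# prefers successors closer to the goal. With beam=1 the search simply
# follows its "gut" at every node.
path = guided_search(
    1, 10,
    successors=lambda n: [n + 1, n * 2] if n < 10 else [],
    prior=lambda s, n: -abs(10 - n),
    beam=1,
)
print(path)   # → [1, 2, 4, 8, 9, 10]
```

Widening `beam` trades more positions examined for robustness against a misleading prior - the same budget-versus-intuition trade-off discussed in the AlphaZero comment above.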
This seems analogous to how consciousness should be defined.
Dwarkesh and Grant discuss the concept of Artificial General Intelligence (AGI) and its relation to AI's performance in tasks like the International Mathematical Olympiad (IMO). They express skepticism about a clear demarcation between AGI and non-AGI, suggesting that the development of AI capabilities is more continuous than discrete. They argue that excelling in the IMO, while impressive and indicative of creative problem-solving, may not necessarily equate to AGI, as it could be more akin to AI's proficiency in games like chess or Go.
They also touch on the nature of creativity in AI, particularly in relation to generating artwork and solving complex math problems. They note that the ability to solve IMO problems requires a level of lateral thinking and creativity, similar to what is observed in artistic endeavors. However, they maintain that this ability, while impressive, might not directly translate to AI taking over a wide range of human jobs or tasks.
The discussion highlights the complexity of defining AGI and the challenges in identifying clear benchmarks for its achievement. It suggests that advancements in AI, such as achieving gold in the IMO, represent significant progress but may not necessarily signal the arrival of AGI, especially given the diverse and multifaceted nature of intelligence and creativity.
Ask him about "the singularity" (where AI can improve itself in *any* domain); it's much better than the ill-defined AGI term, and it is a discrete point in time (more or less).
That's likely to be continuous too. The engineers in every AI company are using AI to improve their productivity. Before there is a powerful AI autonomously improving itself there will be one contributing most of the work to its own improvement but still under human control. It will be hard to say whether it's a human author using a tool or an AI author with a dumb human holding the rubber stamp.
I just hope we don't use it to solve the "big problems" of mathematics. Philosophically speaking, the allure of mathematics stems from the deep feeling of adventure one gets in seeking the solution to a problem, and the subsequent glory of taming and solving it. For the mathematician, that feeling is almost intoxicating. What joy could there be in giving the problem to a machine, asking for its solution and just... reading it? If that becomes the standard, I truly believe mathematics is going to lose something that it will never recover from.
Is it any different from other people solving problems before you do? You can still try to solve a problem second and try to find satisfaction in that.
@sb_dunk Hi. Yes, it is different, because in that case we can still claim "human ownership" of the solution even if I personally did not solve the problem for the very first time. I'm talking about this weird feeling, the feeling we get when we know that a member of our species put sweat and blood into cracking an extremely difficult question and solved it. When I think about the achievements of the Wright brothers, Andrew Wiles, Pasteur, etc., I get excited and proud to be human. Don't get me wrong, I fully appreciate your comment. I remember distinctly how I felt when I first discovered Gauss' formula for the sum of the first 100 integers, only to find out it had already been known for hundreds of years. The feeling of pride and joy was the same to me. But again, a human being thought of that formula first, not a machine.
Imagine for a moment that we could give the Clay problems to an AGI and it solves them in 1 minute. Would you really be interested or comfortable with that? With knowing that these big, almost transcendental, questions are cracked and conquered by a machine and not the human intellect? I don't know, man, at least I would feel very uneasy about it. I wonder what most of professional mathematicians think (I'm just a highschool maths teacher).
Also, remember that once you read a solution you cannot unread it. Once we as a species read those AGI produced solutions we cannot forget them. We would be forever biased.
@nestorlovesguitar I find that a little strange. I understand the idea of the "what's the point" viewpoint, i.e. if everything is solved then there is absolutely no chance I'll ever discover something new, but as someone who has studied math to university level, I've pretty much already accepted that there are people so much more advanced than I am that there is effectively no chance I'll discover anything new anyway.
I don't understand your perspective of taking pride in the fact that other humans have made discoveries; you could easily argue that humans made AI, and therefore humans effectively made these discoveries.
This feels like an artificial problem that you have constructed yourself.
@sb_dunk "This feels like an artificial problem that you have constructed yourself." To be honest with you, it might be. Although I can assure you I've thought about these things for a long time.
My concern is strictly philosophical. Much of what gives meaning to human life is the struggle. The human mind corrodes when it is given windfalls it did not work for. The teenager who spends a whole summer working and saving up for a video game is going to feel more achievement and more joy buying and playing it than the kid who is simply gifted the game by rich parents. Now imagine that on a species scale. Imagine having a machine, a genie in a bottle, that can solve all of our questions and problems in a matter of seconds. I wonder about the long-term ramifications to the human psyche this might entail.
Have you by any chance read "Childhood's End" by Arthur C. Clarke (the person who wrote 2001: A Space Odyssey)? In it, an alien species arrives on Earth and babysits us, solving all our pressing problems and taking care of us as if we were babies. A "golden era" ensues with no crime, no corruption, only peace, and so on. At some point during all of this, a dissenting group of people forms and breaks away from this "utopia" because they want to feel what it is like to actually get stuff done by themselves. To feel pride and self-achieved happiness. To not be handed everything as if they were dumb, helpless children.
So, you see where I'm coming from. But yeah, maybe I need to do more thinking and perhaps I could change my perspective.
@nestorlovesguitar But for something as important as math, I think all of humanity would benefit from an AI solving problems, because of the possible medical, engineering and other breakthroughs those problems might enable. Those problems can't wait centuries for an individual or a team of geniuses to solve the math needed, if AI can solve them in a day. At that point, being a mathematician is going to be about understanding the available math in your field and figuring out which important math problems to give to the AI, rather than thinking about how to solve that one problem. Again, this is probably going to take decades, maybe.
I love Grant's haircut. It goes well with his face.
I guess we should replace the definition of AGI with "smart enough to win math competitions" - that would at the very least help justify the existence of this video.
This question reminds me of David Epstein’s description of chess centaurs (man-machine teamups) that far and away surpass chess grandmasters. When humans & AI work together to solve math & science problems, they will achieve amazing things
until they're so good that humans can't help but slow them down at every turn.
AGI is a very misused term, because if we go by the definitions of those who invented the terms, you have AI, AGI and ASI. By definition, AGI means an AI that matches human intelligence, and ASI (Artificial Super Intelligence) is basically when AI becomes smarter than any human at any task. How do we know when AGI is reached? We won't, because the moment AI becomes AGI, we are already entering the singularity and AGI will quickly become ASI. Any creature that matches human intelligence will be able to learn and get smarter the longer it lives, and AI has the unfair advantage, due to not being biological, that it can consume big amounts of data in a very short timespan.
How does this apply to today's technologies? Basically, anyone claiming an AI technology to be AGI is either a lunatic or simply doesn't know what he is talking about. At best, I would call some autonomous agents based on OpenAI "baby AGIs" - baby indicating a technology that will likely be an important part of AGI in the future but isn't a complete solution.
Tried so hard to play devils advocate 😅
What he says at the beginning is absolutely correct, except I do know what they mean: they mean something that feels like magic. ChatGPT is as general as it gets; they were all just hoping for some mind-blowing shit. That kind of thinking doesn't make any sense to me unless you are a theist.
That's a really good answer!
When AI is aware that it is good at math then, maybe…
Dwarkesh has the exact gigachad chin. I just noticed this.
Like this comment if you agree.
If we reverse the hypothesis and now ask ourselves: why can't AI replace humans in 5 years? Why not?
Subtly speaking... it will. Sooner or later.
I said it once and will say it many times:
there are no levels in AI like video game levels;
we really don't know what intelligence is;
we really don't know in what ASPECTS AI is similar or different.
I'm the second one on all maths papers.
Intriguing
Man, isn't AI creation just a lot of maths and stats? If AI learns maths, that means it learns to create itself - I think the game is over then. That's ASI, not AGI.
Seems like people who have not actually competed at math olympiads overestimate how creative the solutions are - anyone successful at the IMO has basically done thousands of problems until the patterns and common techniques are clear
honestly, the title is silly
This guy doesn’t have the credentials to bring these guests on… sigh. Parroting khosla ventures: 80% of 80% of the jobs
*★ I believe we are meant to be like Jesus in our hearts and not in our flesh. But be careful of AI, for it is just our flesh and that is it. It knows only things of the flesh (our fleshly desires) and cannot comprehend things of the spirit such as peace of heart (which comes from obeying God's Word). Whereas we are a spirit and we have a soul but live in the body (in the flesh). When you go to bed it is your flesh that sleeps but your spirit never sleeps (otherwise you have died physically) that is why you have dreams. More so, true love that endures and last is a thing of the heart (when I say 'heart', I mean 'spirit'). But fake love, pretentious love, love with expectations, love for classic reasons, love for material reasons and love for selfish reasons that is a thing of our flesh. In the beginning God said let us make man in our own image, according to our likeness. Take note, God is Spirit and God is Love. As Love He is the source of it. We also know that God is Omnipotent, for He creates out of nothing and He has no beginning and has no end. That means, our love is but a shadow of God's Love. True love looks around to see who is in need of your help, your smile, your possessions, your money, your strength, your quality time. Love forgives and forgets. Love wants for others what it wants for itself. Take note, true love works in conjunction with other spiritual forces such as patience and faith (in the finished work of our Lord and Savior, Jesus Christ, rather than in what man has done such as science, technology and organizations which won't last forever). To avoid sin and error which leads to the death of our body and also our spirit in hell fire, we should let the Word of God be the standard of our lives not AI. 
If not, God will let us face AI on our own and it will cast the truth down to the ground; it will be the cause of so much destruction like never seen before; it will deceive many and take many captive in order to enslave them into worshipping it and abiding in lawlessness. We can only destroy ourselves, but with God all things are possible. God knows us better because He is our Creator and He knows our beginning and our end. Our proof text is taken from the book of John 5:31-44, 2 Thessalonians 2:1-12, Daniel 2, Daniel 7-9, Revelation 13-15, Matthew 24-25 and Luke 21. Let us watch and pray... God bless you as you share this message to others.
*HOW TO MAKE GOD'S WORD THE STANDARD FOR YOUR LIFE?*
You must read your Bible slowly, attentively and repeatedly, having this in mind: that Christianity is not a religion but a Love relationship. It is measured by the love you have for God and the love you have for your neighbor. Matthew 5:13 says, "You are the salt of the earth; but if the salt loses its flavor, how shall it be seasoned? It is then good for nothing but to be thrown out and trampled underfoot by men." Our spirits can only be purified while in the body (while on earth), but after death anything unpurified (unclean) cannot enter Heaven's Gates. No one in his right mind can risk or even bear to put anything rotten into his body, nor put the rotten thing close to those which are not rotten. Sin makes the heart unclean, but you can ask God to forgive you, to save your soul, to cleanse you of your sin, to purify your heart by the blood of His Son, our Lord and Savior, Jesus Christ, which He shed here on earth - "But He was wounded for our transgressions, He was bruised for our iniquities; the chastisement for our peace was upon Him, and by His stripes we are healed", Isaiah 53:5. Meditation in the Word of God is a visit to God because God is in His Word. We know God through His Word because the Word He speaks represents His heart's desires. Meditation is a thing of the heart, not a thing of the mind. Thinking is a lower level while meditation is an upper level. You think of your problems, your troubles, but in order to meditate, you must let go of your own will, your own desires, your own ways and let the Word you read prevail over your thinking process by thinking of it more and more, until the Word gets into your blood and gains supremacy over you. That is when meditation comes - naturally, without forcing yourself, turning the Word over and over in your heart. You can be having a conversation with someone while meditating in your heart - saying 'Thank you, Jesus...' over and over in your heart.
But it is hard to meditate when you haven't let go of offence and past hurts. Your pain of the past, leave it for God; don't worry yourself. Jesus is alive, you can face tomorrow; He understands what you are passing through today. Begin to meditate on this prayer day and night (in all that you do): "Lord, take more of me and give me more of you. Give me more of your holiness, faithfulness, obedience, self-control, purity, humility, love, goodness, kindness, joy, patience, forgiveness, wisdom, understanding, calmness, perseverance... Make me a channel of shining light where there is darkness, a channel of pardon where there is injury, a channel of love where there is hatred, a channel of humility where there is pride..." The Word of God becomes a part of us by meditation, not by saying words but by spirit prayer (prayer from the heart). When the Word becomes a part of you, it will by its very nature influence your conduct and behavior. Your bad habits, you will no longer have the urge to do them. You will think differently, dream differently, act differently and talk differently - if something does not qualify for meditation, it does not qualify for conversation. Glory and honour be to God our Father, our Lord and Savior Jesus Christ and our Helper the Holy Spirit. Let us watch and pray... Thank you for your time.
AGI is allowing a persistent process to analyze problems and update its own reasoning to streamline its abilities. Then ask it what it wants to work on. If it says a certain direction is what it is interested in, and it keeps updating its knowledge and seeking out problems to solve, you have one personality. If you made 10 of them and they picked different directions to explore, even based on how some of the others are working on their tasks, you could make a civilization of them, working together on complicated tasks. That's AGI.