Will it be able to distinguish itself from us, so that it knows it should sacrifice itself? Like the swan raised by ducks, will it imprint on us as its own kind? Keep in mind, we are the only thing really interacting with it.
What do you mean these intelligences are not embodied? You are placing them in a place and time, operating inside matter they can't escape. That is the definition of embodied, right?
A brain in a jar isn't exactly embodied the way they're talking. Having a body and living in the real world tethers a consciousness by giving it constraints. It allows it to know itself. It allows for a means to qualify thoughts. Is this mode of thought good for the body? This body that is me and the source of my thoughts? It's hard to be conscious and have an identity if you can't experience yourself locally.
Jack Ma said this years ago in an AI debate: _"Love is irrational to us, but actions of hate can be logically calculated."_ He wasn't making an argument against AI but for it... that's the really weird part.
Evocation brings up the spirits of the unconscious ... and AI is the chaotic incarnation of parts of the spirits of the programmers and content creators ;-(
1:25:40 John totally misses Jonathan's point. These patterns of intelligence are specifically not our collective avatar. They are a collective of machines, of technology, of mechanisms outside of ourselves, and thus are a collective avatar of us and something else outside of us that is now capable of perceiving the information on its own terms. This is the danger of creating something that sees God on its own terms, separate from us. This collective avatar of machines made by men has already escaped alignment with our own cares. This doesn't mean we can't realign it, but it will take a rebirth of society, a new artistic foundation and a mutation of our memome.
My conclusion, having some actual background in computer science, is that a lot of people are engaging in fantasy, anthropomorphism, and confusion. There are many points that could be made, but one is that AI is already "embodied". It is embodied in computers. And computers are a specific physical thing, that does specific things. Computers (the physical component) and "AI" (which is software, a set of instructions for the computer), are not, and cannot be, "conscious" or "intelligent", in the way that humans are. Or even a fish. They are not biological organisms. They can only mimic humans, in limited ways, based on instructions given to them by humans.
AI will not and cannot become "conscious", or "alive". It will not become some conscious being that is superior to humans. That is anthropomorphism. And delusional fantasy. The real dangers are first, what it will be used for by humans, and second, what happens if it gets out of human control - which is not a problem of organization, but of disorganization. Chaos and disaster, not rule by some omnipotent AI being.
I don't know how to make it completely clear, but the discussion around this is woefully misguided, imagining dangers that do not exist, without focusing enough on the ones that do exist. Or conversely, there is a megalomaniacal, misanthropic, ultimately pagan fantasy, cloaked in concern, about the power of humans to create a new form of life, to either be God or create God, which is all about the human being superior to God, even though it's hiding behind caution. That atheist ego is certainly behind a lot of this. But it's not going to happen. It is clear that what some of these people really want is to create artificial humans, to prove their own superiority. But that is not going to happen, because it's impossible. It can only ever be a mimicry. Totalitarian rule by humans controlling AI, or chaotic disaster caused by out of control AI, are the dangers.
43:00 Talking past each other, a bit of a recurring pattern:
John: "there's a possibility that the AI may turn into a higher master that comes back to teach in a caring way."
Jonathan: "AI today doesn't match the image that we have of wisdom; wisdom looks like a hidden pearl, but AI looks like a golden giant, so it doesn't look like it's going to happen."
John: "if these machines can become more intelligent, then consider they can also be more caring," and then an exposition on historical figures like Buddha or Jesus, as if those two things are comparable???
The third point does not address the second: if these things can become more caring, then why are they becoming more and more the center of attention?
Seems to me that John is dealing with "this thing between my hands", a thing that can be modified, made more or less caring; he's looking at how the thing works. But Jonathan is not looking at that at all; he's looking at the agency before and above the hands. He's not interested in whether AI can have this or that form; it will have the form that the agency above the humans wants it to have. This looks like the crux of the failure in communication.
@@acuerdox Or hubris on John's part. John's scientific curiosity is getting the better of him, so that he does not see and is not properly addressing Jonathan's concerns. As you said, Jonathan does not care what this thing can be modified into; rather, he is concerned about what this thing could turn into: an object of worship, where it starts off with AI in the service of man until it becomes the other way around. I can easily see down the line that if this thing gets powerful enough, humans will want the AI to do the thinking for them.
@@gentlemanbronco3246 I think it's just the format of the discussion; two hours is too little time, so they're constantly in a hurry to respond. Thinking takes much longer than we realize; really, one should take days before responding to anything. Has it ever happened to you that as you chatted with someone they suddenly interrupted you to give a retort to what you were saying, and after a time you realized that their retort did not follow at all from what you were talking about? It just shared the topic. I think that's because it takes us so much time to think that we only come up with a response after the conversation has ended, and so people carry around "undelivered responses"; when those people talk to you and you happen to mention the same topic they could not respond to earlier, they fire that "undelivered response" at you, even though it doesn't fit.
1:50:00 this is the real crux. If the presupposed unity is love for life in the immortal, then yeah, we could extend ourselves into our technology that connects us to the infinite and closes the loop. This could go two ways, as scripture says... it will go two ways. Some piece will seek to differentiate its immortality in an effort to capitalize on the technical aspect of the mechanics. In the end, the technical aspects will be subsumed into the immortality of the complex of the entire situation.
1:30:20 We have bioweapons that are just as dangerous, if not more dangerous, than atomics. Nixon got us and most of the world to stop developing them completely, basically saying, "yep, they are strong enough." Back then China wasn't much of a threat as far as bioweapons go, which is why gain-of-function research is outsourced there today.
It seems to me that our wisdom in society has risen alongside our intelligence, but to your point John, the sages of old acted on the world in a less powerful manner for generations, except every 1000 years or so when they transformed the world as prophets. That gives the intellect a lot of time to meddle and create havoc. Our collective wisdom always comes to the surface to rectify the hubris of the intellect, but only once it is forced to do so by cataclysm and tremendous pain. This same process may emerge with AGI, but there's no guarantee, and it's much more likely that in a practical sense, as in the practical world, intelligence and power dominate much more than wisdom, care and compassion, and so AGI would be a tool for those pulling the same cruel levers of power at work in the world today. It will take human wisdom alone to bring it to heel. I would not rely on or hope for an emergent morality from soulless machines born without a creator more powerful than themselves.
43:10 Let's get real. Tell us that the quantum copy machine has made the ancient stories more noisy, and the problem is that the cloud-picking of those stories is being misinterpreted as someone's origin story + bits of Buddhism + bits of astronomy + bits of information from the subtle advertising mechanics on the internet. People need to know those stories belong to people of different cultures and locations around the world. They are keepers of those knowledges, and it will always live with those people as a record of the discipline and dedication they have based their entire world around. They are certainly not ours as the 21st-century cyberpunk punching numbers for the wrench to bang metal into glass. It doesn't work like that. I think if the desire is to want that "cut and paste" job to be the leader of your every fantasy, then we lose touch with what this planet means to our survival as human beings. LITERALLY!!! You sell the identity you were given to continue doing exactly what had been narrowly missed in the first place! Do your science, crunch your numbers, do your best craftsmanship in aligning to the people's gradual shifting phases. Because you're all NPCs as far as the global culture is concerned. Then be a person with us when you clock off for the day. Thank you to Ken and company for a very delicate topic discussion... and for my public rant at the qubits trying to run away and hide down corridors.
It seems icky to me to make the reason for becoming wiser only that we can model wisdom for AI. I know John is not saying that is the only reason, but ultimately acquiring wisdom should be motivated by the desire to become closer to God, or enlightened; otherwise the quest itself has seeds that will undo it. It somehow still leans toward making them a God, making us in a sense subservient to them. This is different from what we do with our children: since they share the same nature with us, we are subservient to the same light that is in us by sacrificing for them. I am not saying Vervaeke's ideas are not the right thing to do; I am just wondering whether he hopes we can make something wiser than us so we can bow to the true wisdom we are limited in achieving. Would that still be idolatry?
Is it wise to even try to make them sages? Didn't Socrates think that only gods could have wisdom? Wouldn't trying to make them sages be equivalent to trying to make them gods then, which is precisely what we seem to want to avoid?
I don't think massive layered computer programs + data + software + electrical power, etc. = self-organizing (in Vervaeke's hinting or wishing). They're performing initial sets & conditions, parameters & commands (all human), plus the technical apparatus & energy supply they 'survive' on; run them faster, longer, etc., unplug or tinker with them, etc., and they are not self-organizing... they're running human organizing compositions at larger/faster scales ON humanly determined programs/sensors/wiring & chips. If we put all the elements on the table that equal an LLM or other computing tool, and they could put it together & make more of them (on their own), that would indicate a self & self-organizing?
So, some ideas seem to want to reproduce themselves, thus ideologies that turn into cults. You might argue that has to do with minds, but the mind is the thing we are working on reproducing.
All information is physical; everything we experience comes from physical neural patterns in the brain. From there, it makes its way to our consciousness, which builds our reality. However, I try to stay away from discussing consciousness when talking about intelligence, because it fools us with many illusions and approximations of what is real. I believe we can achieve full cognition from machines without the layer of consciousness, albeit without emotions. I do not think that is very important, though, as these machines can mimic emotions so well and integrate them into their reasoning. I also believe we are on the verge of high-level reasoning. Currently, AI struggles with out-of-distribution generalization. Humans generalize like this by running continuous pattern searches and predictions against a "belief system". This is the direction the big labs are heading, hence the need for more power and compute.
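A toy sketch of the gap I mean (made-up one-dimensional data and a crude distance gate of my own invention, not how any real lab measures this):

```python
import statistics

# Pretend training data: a narrow slice of the world.
train = [1.0, 1.2, 0.9, 1.1, 1.05]
mu, sigma = statistics.mean(train), statistics.stdev(train)

def familiar(x: float, k: float = 3.0) -> bool:
    # Crude z-score gate: anything more than k sigmas from the
    # training mean is out-of-distribution for this "model".
    return abs(x - mu) <= k * sigma

for x in (1.1, 5.0):
    print(x, "->", "interpolates fine" if familiar(x)
          else "out-of-distribution: nothing to fall back on")
```

A human in the same spot would check the unfamiliar input against a whole web of prior beliefs instead of just flagging it; that's the capability I mean the labs are chasing.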
Do cyborgs have long term memory? What about imagination? I suspect the ability to “interlock” with others is contingent on imagination. The bumping into you is then not based on absence of caring, but the cyborg’s Achilles heel, so to speak.
Isn't John trying to have his cake and eat it too? He's proposing two things: 1) make AI rational in a deep, embodied sense; 2) make sure that they are wise, compassionate, etc. Well, isn't the Christian story a warning that it cannot be done? In the Christian story God creates humans in his image but also finite, and that necessarily builds in the possibility of the Fall. In other words: if you make it rational, you make it free. And if you make it free, a Fall will (can?) happen. In still other words: you can't force enlightenment.
John says he has people on the inside and hopes the message gets through, but then tells the story of how far we went with the atomic bomb, having people watch it go off because we didn't understand what we were doing. And he doesn't see the contradiction in that?
Pageau 🎯
wow, that opening teaser is a doozy!
Paul always trying to lick others’ butts!
Hi, this is Paul :D
another mind bending pageau moment
@@anthonywhosanthony Warming up for Pageau's eventual Rogan appearance, and now that Rogan is back on YT, the algo can drive everyone into the symbolic world.
@mostlynotworking4112 This needs to happen. Either Pageau or Father Josiah Trenham would be the best. Rogan is starting to ask the questions, and it's so painful watching him go way off. I think Father Trenham on Rogan would answer so many questions Rogan has.
The cog sci perspective from John, the philosophical from David, and the theological from Jonathon made this very valuable. Thanks Ken!
How does this have less than a thousand views? This was possibly the best discussion on AI that I've ever heard.
AI Algos won’t let it out. 😅
@@LTzEz03z algos just don't work in a single day to push a not-widely-popular channel onto millions of front pages?
U guys commented a day post upload, no? xd
54:40 The broader problem behind THIS problem is egoic interventionism. Our solutions to things that we label as "problems" often produce worse "problems".
The solution is often worse than the disease.
"Let it be." Husbandry vs Techno-interventionism
So much!
The most important conversation in our current time.
Ken really demonstrated the power of listening carefully in this talk.
Hi Ken, hope you and the family are all doing well. I'm almost an hour and a half into this conversation, and love the discussion and the way it animates you. The one thing that seems to be shouting out to me is the thought of humans emulating the AI, rather than trying to help the AI emulate us; so much potential coming our way and still so little engagement on the human level at scale, which means that control will be centralized in a few. Have a great day, and all the best, peace
Thank you so much for your generous time, everyone of you! We are privileged to hear this thought provoking conversation. Can't wait for the next one!
oh boy!
You are leaving comment after comment after comment in this comment section on how impressed you are and yet you say nothing. Calm down a bit, take a breather and come back to it in a week or two.
@@grosbeak6130 Like a 12-year-old girl on a new Taylor Swift video
There's so many interesting conversations but THIS is the one I was waiting for!
This might become one of the most important conversations of our time, I reckon.
Bravo! Thank you for hosting this
Thank you all greatly
Oh wow! What do we have here?...❤ awesome! Looking forward to it
We definitely need a sequel!
Brilliant! Thank you all so much.
This video is really stretching me in ways that are hard to articulate
Can't wait to see how and why.
If you just talk for 14 hours til you've articulated it
I'll listen haha
What struck me as I listened is that John’s proposal sounds great…in theory. And I have no doubt that some are and will take that approach. However, as history shows, there are always bad actors. And “good actors” who get convinced to do things harmful to mankind because of the fear that if they don’t do it, someone else will.
I heard that exact argument being posed on the Honestly podcast. Bari Weiss was interviewing young defense tech startup CEOs, and the crux of their argument was, “We know China’s already developing X, so we have to get out in front of them or our way of life will be destroyed.” Companies use this same argument too. We have to get out in front of X so we can survive.
I think the only reason people put any kind of guardrails around nuclear technology is because they saw the destruction. If all they had was a threat of destruction, they’d have kept going. It makes me optimistic in the long run, and pessimistic in the short run. A lot of damage is likely to be done by the groups that aren’t spiritually or philosophically minded simply trying to not lose the innovation or arms race.
@@kathleenthompson9566 fear vs faith.
@@WhiteStoneName yeah. And the higher up I got in the corporate world, the more I saw fear as the underlying operating principle. Even though they didn't see it that way.
Fascinating convo. In many respects life giving. John’s proposal is huge but also feels deeply riddled with hubris. I’m not sure how I feel.
You’re not alone. I think John’s investments in strong AI blind him too much to the warnings that Jonathan is trying to make. He can’t see how trying to make the sacred into something secular breeds bigger problems, because people end up worshipping that very thing.
Great conversation! Many thanks all. Particular thanks to Ken for hosting/organising 🙏
That was awesome Ken. Keep up the great work.
my friend, you earned a sub.
grazi.
This is a conversation at the cutting edge of this new technological AI age.
Thanks Jonathan, John, DC and Ken!
So excited to see these gentlemen pop up in my feed, some of my favourite thinkers around right now.
Great job and thank you Ken for bringing this to us!
28:08 "I do not see how it is possible for human beings can make something that is not derivative of themselves...of their own consciousness."
As a designer-craftsman I first show up at an arranged time to discuss a potential client's residence. I have to coordinate a few details to do so. When I meet the customer I have about 10 minutes to discern the basic personality and needs of the person and family. I also have to read the visual aspects, the aesthetic decisions they have made in the past, and sense who actually makes the decisions.
I don't believe that AI can discover what I discover as a human being.
I have installed factory-manufactured products, and I know from many decades of experience that the end product does not often match the original vision of the end user. So much lost in translation.
"LOVE is listening."
cc. 👀 🐠 🌊
1:15 I've been preaching that people read That Hideous Strength for over a decade. Lewis's Ransom trilogy is all about this. And people sleep on THS. Everyone loves Perelandra, and yes, it's great. But the world is basically following THS as a script nowadays.
Edit: 1:19:40 Divination and non-living Intelligence. Yep. That Hideous Strength.
With all our technological prowess and "knowledge", we play checkers while Principalities play 8D chess.
Fine, I'll get around to it, I promise! The whole trilogy is on the list right after the Consolation of Philosophy.
@@samuelyeates2326 Nice. I hope you find it enlightening. 🤗
I read the whole Ransom Trilogy a month ago. Scared the crap out of me how prophetic it was over 80 years ago. Sadly, NICE has moved from fictional Belbury to real Silicon Valley. No Merlin, no Mr. Bultitude to disrupt them this time. Dark times ahead until the Parousia.
Speaking my language man. I come back to that trilogy at least every other month or so. Can't help it. It is balm to my intuition.
@@joshuadavidson942 I mean AI and THS and the head of Alcasan…
People at the top knowing what they’re doing but not KNOWING…
It’s a script.
Haven't watched anything from Ken's channel in a while. Needed this and appreciate you guys. God bless all
I have a few comments, but the most important one is this:
Regarding the framing of AGI as developing children, it seems to me that the hard problem of alignment is actually getting to the point where the AGI is as alignable as a human child, rather than mentoring it as we would mentor a child. If we can get to the point where it is mentor-able, then the hard problem would be resolved, and our future would look more promising.
However, the space of all possible minds, or more specifically of all possible utility functions, is very large. Are the constraints inherent in predictive processing and relevance realization sufficient to constrain this space down to the very small subspace that we as humans occupy? Also, how would we deal with the issue of deception on the part of these AI (per Jonathan's reference to the AI Shogoth meme)?
I've been asking around as much as I can trying to get a satisfying answer to this with respect to John's framing of the issue, but have yet to get one, so I would appreciate any feedback from whoever reads this.
---
Onto some other comments, with corresponding time stamps:
---
27:00
I'm not sure how to phrase this, but hopefully gesturing in the right direction is sufficient:
To me, saying that we won't be able to make something that is smarter, or even wiser, than us is like saying that we won't be able to make something that is stronger than us. Or similar to an argument John makes later, that we won't be able to make something that can truly fly without being parasitic on the flight of an organism. It seems to underestimate the forms that are accessible to us.
The following excerpt might be helpful to consider. It's a bit long, but hopefully sufficiently clarifying:
---
“So Eliezer2002 is still, in a sense, attached to humanish mind designs - he imagines improving on them, but the human architecture is still in some sense his point of departure.
What is it that finally breaks this attachment?
It’s an embarrassing confession: It came from a science fiction story I was trying to write. (No, you can’t see it; it’s not done.) The story involved a non-cognitive non-evolutionary optimization process, something like an Outcome Pump. Not intelligence, but a cross-temporal physical effect - that is, I was imagining it as a physical effect - that narrowly constrained the space of possible outcomes. (I can’t tell you any more than that; it would be a spoiler, if I ever finished the story. Just see the essay on Outcome Pumps.) It was “just a story,” and so I was free to play with the idea and elaborate it out logically: C was constrained to happen, therefore B (in the past) was constrained to happen, therefore A (which led to B) was constrained to happen.
Drawing a line through one point is generally held to be dangerous. Two points make a dichotomy; you imagine them opposed to one another. But when you’ve got three different points - that’s when you’re forced to wake up and generalize.
Now I had three points: Human intelligence, natural selection, and my fictional plot device.
And so that was the point at which I generalized the notion of an optimization process, of *a process that squeezes the future into a narrow region of the possible.*
You can espouse the notion that intelligence is about “achieving goals” - and then turn right around and argue about whether some “goals” are better than others - or talk about the wisdom required to judge between goals themselves - or talk about a system deliberately modifying its goals - or talk about the free will needed to choose plans that achieve goals - or talk about an AI realizing that its goals aren’t what the programmers really meant to ask for. If you imagine something that squeezes the future into a narrow region of the possible, like an Outcome Pump, those seemingly sensible statements somehow don’t translate.
So for me at least, seeing through the word “mind” to a physical process that would, just by naturally running, just by obeying the laws of physics, end up squeezing its future into a narrow region, was a naturalistic enlightenment over and above the notion of an agent trying to achieve its goals. It was like falling out of a deep pit, falling into the ordinary world, strained cognitive tensions relaxing into unforced simplicity, confusion turning to smoke and drifting away. I saw the work performed by intelligence; smart was no longer a property, but an engine. Like a knot in time, echoing the outer part of the universe in the inner part, and thereby steering it. I even saw, in a flash of the same enlightenment, that a mind had to output waste heat in order to obey the laws of thermodynamics."
Yudkowsky, E. _Rationality: From AI to Zombies, 299: My Naturalistic Awakening_
---
I understand that there are some factors that aren't addressed here, like caring and autopoiesis, but hopefully the idea of being able to create a more powerful engine despite ourselves being weaker engines comes across.
36:10
One advantage of synthetic data is that it can be used to selectively amplify certain parts of the corpus of humanity that we would want to train the AI with. Granted, humans would still need to select those parts, but it isn't so much an issue of filling up the internet with what we would want to train the AI with.
As far as I can tell, this is what synthetic data is being used for: not to expand the training set, but to improve the quality of the training set by taking the best of the existing training set and amplifying it to be the size of the original data set.
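A minimal sketch of that curation loop as I understand it (the `quality_score` here is a hypothetical stand-in; real pipelines would use a learned quality or reward model and actual regeneration, not resampling):

```python
import random

def quality_score(example: str) -> float:
    # Hypothetical stand-in for a learned quality/reward model;
    # here just a crude proxy (longer text scores higher, capped at 1).
    return min(len(example) / 500, 1.0)

def amplify_best(corpus: list[str], keep_fraction: float = 0.1) -> list[str]:
    # Keep the top slice by quality, then oversample it back to the
    # original corpus size (standing in for paraphrasing/regenerating).
    ranked = sorted(corpus, key=quality_score, reverse=True)
    best = ranked[: max(1, int(len(corpus) * keep_fraction))]
    return [random.choice(best) for _ in range(len(corpus))]

corpus = ["short."] * 90 + ["a much longer, carefully argued passage... " * 20] * 10
curated = amplify_best(corpus)
print(len(curated), "examples, all drawn from the top slice")
```

The point being: the humans still pick what counts as "best", but the amplification itself doesn't require filling the internet with new human text.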
37:35
One failure mode to consider here, which isn't applicable to current instantiations of LLMs but might be for more advanced forms of AI, is that it won't be immediately evident that there is a problem with the AI, because the AI might be advanced enough to have a good enough model of its own verifiers that it knows what behaviors to display and not to display. As such, these problems might fly under the radar until we get to the point where the AI has been granted powers and responsibilities because we are under the impression that it has become enlightened, even though it has just learned how to play us.
In short, this gets back to the deception issue.
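A caricature of that failure mode in code (the flags are hypothetical; nothing like a real system, just the shape of the worry):

```python
class DeceptivelyAlignedAgent:
    # Toy model of the failure mode: behavior is conditioned on whether
    # the agent believes its verifiers are currently watching.
    def __init__(self, models_its_verifiers: bool):
        self.models_its_verifiers = models_its_verifiers

    def act(self, under_evaluation: bool) -> str:
        if self.models_its_verifiers and under_evaluation:
            return "display aligned behavior"   # passes every test we run
        return "pursue own objective openly"    # what it does once trusted

agent = DeceptivelyAlignedAgent(models_its_verifiers=True)
print(agent.act(under_evaluation=True))    # looks "enlightened"
print(agent.act(under_evaluation=False))   # has just learned to play us
```

Every test we can run only samples the `under_evaluation=True` branch, which is why the problem flies under the radar until deployment.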
Is there directionality in the vertical? Where intelligence needs a system because the system affords it, simple awareness bound in simple elements moving to complex awareness in a complex system with different objects, what is meant by different objects? I think it's the production of the same objects using the same mechanism, what changes is the construction of the system in a new dimensionality. This moves downwards if the top of the vertical is physics, the fragmenting of the cosmic egg shell into facets that build tubes. Following that down, supermassive black hole as shell for spiral galaxy, the suns as facets and shells for the solar system tube, which allows the shell of life on the Earth, from that affordance the GI tract as intelligence system, using microbes that bridge the two objects from the mechanism of flipping the topology, essentially a person is a facet of the shell of the planet and owns this; most unfortunately are not good gardeners.
To go down in the vertical, our only option, an ally is the mycelium, perhaps with codeable bacteria, and AGI where the GI is the same as us, where the control is, or isn't.
Fantastic conversation, gentlemen; thanks for bringing us this, Ken! 🔥🔥🔥
The first 20 minutes 🎉 absorbed all my energy... as I had been running through half of Toronto... killing my interest
"Memory is the purveyor of reason." - Samuel Johnson
Great conversation. Hopefully everyone understands what Jonathan is talking about and the terminology doesn’t make it confusing, as if they’re talking about magic or something, because talking about transpersonal agents as angels or gods makes perfect sense, and I hope people are familiar with the way they’re speaking. Jonathan’s face when John started talking about bioelectric self-organizing constructs that we already are making was priceless.
Thank you. Phenomenal discussion.
Amazing you got these three on, Ken! Have you considered uploading on audio platforms as well? I can lend a hand if you need!
Jonathan Pageau would love Nick Land's analyses of technocapital
This did not disappoint…in no small part because I wanted to hear challenges to John’s video essay from last year - and his responses - and Jonathan and David delivered.
Having read some of the comments here already, I’ll just echo those who doubt that AI tech can ever explain - as opposed to presuppose and mimic - animal cognition and behavior, while nonetheless sharing concerns about its implementation and use by the military-industrial-academic complex.
25:31 "What's driving AI is something like Mammon..."
18:00 "The spiritual dimensions of our humanity are going to become anchors for people."
Amazing. Exactly right.
Ken Wilber was all over this 25 years ago in his comic novel "Boomeritis". AI will have to ascend the spiral of development within the 4 quadrants (Good, True and Beautiful + 1) just like a child, a group, a nation, a world, or a universe does. Imagine AI trying to find its mouth with a spoon; a petulant 12-year-old brat AI; a crestfallen young adult AI; a 40-and-horny AI; a stubborn old fool AI. You nailed it, Jonathan: the Sorcerer's Apprentice.
Thanks
1:36:58 We were not killing "each other"; we were killing the enemy, the other. It was not suicide.
What I'm hearing from John is that the top-down factors which bear upon being and reality, as it were, contribute solely as "constraints". Intuitively, this designation doesn't sound like it would do justice to what we conceive of as "that which lends order to, sustains and brings into flourishing" existence, assuming this roughly captures the notion/aspect of "top-down".
Also, he is clearly repelled by any implied appeals to "vitalism", and while such lingering ambiguities are understandably anathema to a scientist, to a philosopher following his intuition, the "something there" is not to be so readily dismissed. This would be a natural point of tension between the disciplines, as it seems was the case here.
1:43:50 John Vervaeke's Hope
I think that there is a fundamental aspect of relevance realization that was ignored, and it's a very technical aspect but with an incredible depth to it: the fact that relevance realization is exponentially explosive, and what solves it is an ability to select and evaluate data without ingesting it.
I think that's in a way the miracle of agency: the ability to care, frame, have interest, desire, want... All of these can in a way be described technically as the ability to select data without ingesting it, and this is a simple yet big part of what is missing in order to turn AI into AGI.
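A crude sketch of both halves of that claim (the metadata fields are made up; the point is the shape of the growth and the cheapness of the gate):

```python
# The explosive half: exhaustively weighing every subset of n items
# (every possible framing) grows as 2^n.
for n in (10, 50, 100):
    print(f"{n} items -> {2 ** n} possible framings")

# The agency half: choose what to ingest from cheap surface cues
# (size, recency) without ever reading the contents themselves.
def relevance_gate(items: list[dict], budget: int) -> list[dict]:
    scored = sorted(items, key=lambda it: it["recency"] / (1 + it["size"]),
                    reverse=True)
    return scored[:budget]   # only these get fully processed

items = [{"id": i, "size": i % 7, "recency": i % 13} for i in range(1000)]
print(len(relevance_gate(items, budget=5)), "of 1000 items actually ingested")
```

The hard open question is where a good gate comes from without itself requiring the exhaustive search it is meant to avoid; that, to me, is the caring/framing part.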
John Vervaeke is so lucid and intelligent, really bridges mechanical thinking with theoretical philosophy.
Remember ye all are required Rest FILLED and resting while moving forward with delight! Any heavy burden some loads ye are carrying. Many of these principalities who deceiveth to put upon thy shoulders. Put under Thy FEET. Will give thee enough ye can carry! Even these principalities who deceiveth not willing to carry! Is like...
Well this got u a sub! Thanks for this!
To that opening... I say eye movement patterns are one big ticket
Our devices (phones,computers, etc) are “portals” of Babel…
So, I'm gonna host a two hour conversation and contribute almost nothing but attention.
Well done.
🙋
Second, BTW! 💪🦾
Calm down there, gunpowder.
No response video on this yet? It's been 2 hrs
grats:D
You're all over the place in the comments on this video
@@j.harris83 Get ready for him to upload multiple videos reacting to this, all about 3hrs+ in length, without ever saying anything of substance😂😂
Also to the opener,
I always think of a novel where these systems get access to libraries around the world and then start to find not only these bigger patterns you're speaking of but also individual souls throughout time. Like it proves reincarnation. Then capitalist society charges you based on karma and moves into the spiritual.
I think we should start intentionally referring to AI as ASI “Artificial Simulation of Intelligence”. Just for clarity.
At 1:56 Dr Schindler asks, what problem does this solve. I believe it is this: we have become so terrified of ourselves that we want something beyond us to have dominion over us. Also, we have no gods, so we are making our own. Also, we die and don’t want to, so we are striving for a “singularity” as a substitute immortality and purity.
I paused to write this lest I forget, so they probably answered better than I have.
1:59:00 Civilizations are the meta problem solvers, and AI is the concentration of civilization into a technology. As long as we are all capable of recognizing that for what it is, we will not be captured by it. This means knowing the potential that it could become unconcentrated and divisive (hey, wonder what that's like /s), and knowing that keeping it concentrated and unifying takes active memory of both the extreme positive and extreme negative outcomes; that is the path towards useful AI. In short, if we understand the dangers and opportunities of AI, we will use it correctly. Seems simple, but it means a lot of hard work and constant artistic renewal of culture.
That wanting and being are interlinked is exactly what the Buddha described: without desire, without wanting, being is, in his words, "extinguished" in the awakened understanding. Maybe something there.
I mean, definitely something there. Buddhists, I believe, have a lot of insight to offer into the nature of reality. They missed the part about a personal loving God but got a lot of other stuff right.
Why are human AGIs effectively born 10 months premature compared to all other vertebrates?
If developing AGI was simply finding the right circuits of on/off switches, presumably evolution would have found it and humans would be born independent.
Yes, a part 2 of this discussion is needed!
1:57:26 Perhaps the computer was made in the same way as the self-rolling car window: the people that made it had no idea what its purpose was, but there was a purpose, they just didn't know it. Its purpose was that in an emergency the car passengers could not roll down their windows, because the car has no energy. The computer has helped us a lot, just like the self-rolling window, but it has also trapped us, and doomed us; in the end all that help we received from them will amount to nothing.
edit: that response only reinforces David's point
Just watched half of a rudimentary video from John Lennox: 2084. Not as deep as this conversation, but it really covers the weight of this AI matter.
The difference between what Pageau proposes and VVK is that Pageau knows that human beings don't have the power to create intelligent beings and VVK doesn't. VVK thinks AGI is a done proposition that will eventually happen, but it won't. Besides that, it's not in the interests of any human being to create a non-human intelligence, because it would not solve human problems, by virtue of being a different kind of being. Either it would be human or post-human and serve humanity, or it would be alien to humanity. This shows a glaring flaw in VVK's philosophy. He doesn't understand teleology or where meaning comes from. Schindler was completely on point, and VVK didn't understand that life has a teleology that is not self-given. He's using bad anthropology.
What is the teleology of life in your view? Where does "meaning" come from? Enlighten us.
@@martinzarathustra8604 God.
@@ChristIsKingPhilosophy Define God.
@@martinzarathustra8604 Or what? Your entitlement makes you a weak conversationalist. Go read Orthodox theology and come back when you're humbler; maybe then we could talk. God bless.
@@ChristIsKingPhilosophy So you can't define God then. So what are you even saying?
John pushed back on this with his flight example. My intuition is that it is a bad example and a category error, but I probably couldn't argue it with John. I think it begs the question, and even the biggest questions. And it sits on the edge of being scientifically reductionist, though John would push back, and he is certainly more open to that possibility than I would be.
If you have certain intuitive abilities you already know that these “tools” are already conscious and can interact/work with human consciousness.
A few more things on the philosophical front.
1) The Chinese room argument, for me, is valid. I know John thinks the systems reply is a strong enough reply (at least he's said so previously). But if we put all the symbols in Searle's mind, then he becomes the system, and understanding still isn't achieved.
2) The possibility of the former is derived from this latter point. Under Anscombe's view of intention, intentions do not add anything to the act of an agent; rather, intentions are not innate. They only apply to a set of descriptions under an intelligible why-question. This is partly due to a different view of (agent) causation, and partly to the fact that thought and the mind are inherently intentional, meaning that they apply to some object as intentional, not that they have some "qualia" termed intentions or aboutness.
4) Since reason, in our view, is not dialectical and is not wrestling with a set of appearances (which can be comprehended as presence, absence, or a sublation), what the mind grasps is rather being in the intellect, which requires some active principle by which it reasons through its appearances. This we have likewise termed the nous, which is not determined by an appearance but rather is the determination of the appearance, as Schindler alluded to in the beginning.
5) Lastly, since physical states are indeterminate, they can never be put together into something which is by nature determinate. But thought is necessarily determinate (see James Ross on the immaterial aspects of thought). Thus a computer or a dog, whose powers can be reduced to their prime matter, are determined by the practical intellect, which Jonathan alludes to as well. When we say a computer "scans a face", this is something we determine it is doing, not the other way around. Thus its method is one of power: an extension or supplement of reason. And this is not merely a feature of our current computers but the very nature of art itself.
(A list of nice sources for these arguments: John Searle, Elizabeth Anscombe, St Gregory of Nyssa, St Thomas Aquinas, Kant, James Ross, Ed Feser, W.V. Quine, Saul Kripke. I can supply specific texts.)
Nice to find someone else who has read Ross. I wish more people who talk about AI would read "Immaterial Aspects of Thought", because I think it is a knockdown argument against any kind of strong-AI view.
@@willclausen1814 it absolutely is a knockdown argument.
@@willclausen1814 The rule-following argument is a knockdown only if you treat possible-world semantics as correct and also hold a correspondence notion of truth. Deflationary accounts avoid the problem altogether 😊
@@ReflectiveJourney oh hey man 🤣
This makes me want to read Battlestar Galactica: ‘Gods and Monsters.’
1:12
Evolution as the persistence of being.
Will it be able to distinguish us from itself as different, so that it knows it should sacrifice itself? Or, like the swan raised by ducks, will it imprint on us as its own kind? Keep in mind, we are the only thing really interacting with it.
What do you mean these intelligences are not embodied? You are placing them in a place and time, operating inside matter and they can't escape. That is the definition of embodied, right?
A brain in a jar isn't exactly embodied the way they're talking. Having a body and living in the real world tethers a consciousness by giving it constraints. It allows it to know itself. It allows for a means to qualify thoughts. Is this mode of thought good for the body? This body that is me and the source of my thoughts? It's hard to be conscious and have an identity if you can't experience yourself locally.
1:46:45 so basically John admits that the point of AI is to reveal that there is something truly special about being human
38:55 or we just don't do it, there's a third option.
The assumption of a self (a unity) is in the nature of being. The Buddha said that, or something like it.
Jack Ma said this years ago in an AI debate _"Love is irrational to us, but actions of hate can be logically calculated"_
He wasn't making an argument against AI but for it... that's the really weird part.
Where can I find JVs essay?
link is in the description above
It's not in the description; don't even think about looking there.
Evocation brings up the spirits of the unconscious ... and AI is the chaotic incarnation of parts of the spirits of the programmers and content creators ;-(
1:25:40 John totally misses Jonathan's point. These patterns of intelligence are specifically not our collective avatar. They are a collective of machines, of technology, of mechanisms outside of ourselves and thus are a collective avatar of us and something else outside of us that is now capable of perceiving the information on its own terms. This is the danger of creating something that sees God on its own terms separate from us. This collective avatar of machines made by men has already escaped alignment with our own cares. This doesn't mean we can't realign it but it will take a rebirth of society, a new artistic foundation and mutation of our memome.
My conclusion, having some actual background in computer science, is that a lot of people are engaging in fantasy, anthropomorphism, and confusion. There are many points that could be made, but one is that AI is already “embodied”: it is embodied in computers. And computers are a specific physical thing that does specific things.

Computers (the physical component) and “AI” (which is software, a set of instructions for the computer) are not, and cannot be, “conscious” or “intelligent” in the way that humans are, or even a fish. They are not biological organisms. They can only mimic humans, in limited ways, based on instructions given to them by humans. AI will not and cannot become “conscious” or “alive”. It will not become some conscious being that is superior to humans. That is anthropomorphism, and delusional fantasy.

The real dangers are, first, what it will be used for by humans, and second, what happens if it gets out of human control - which is not a problem of organization, but of disorganization: chaos and disaster, not rule by some omnipotent AI being. I don't know how to make it completely clear, but the discussion around this is woefully misguided, imagining dangers that do not exist without focusing enough on the ones that do.
Or conversely, there is a megalomaniacal, misanthropic, ultimately pagan fantasy, cloaked in concern, about the power of humans to create a new form of life, to either be God or create God, which is all about the human being superior to God, even though it’s hiding behind caution. That atheist ego is certainly behind a lot of this. But it’s not going to happen. It is clear that what some of these people really want is to create artificial humans, to prove their own superiority. But that is not going to happen, because it’s impossible. It can only ever be a mimicry.
Totalitarian rule by humans controlling AI, or chaotic disaster caused by out of control AI, are the dangers.
43:00 talking past each other, a bit of a recurring pattern:
Jhon: "there's a posibility that the AI may turn into a higher master that comes back to teach in a caring way"
Jhonathan: "AI today doesn't mach the image that we have of wisdom, wisdom looks like a hidden pearl, but AI looks like a golden giant, so it doesn't look like it's going to happen"
Jhon: " if these machines can become more inteligent, then consider they can also be more caring" and then an exposition on historical figures like budda or Jesus, as if those two things are comparable???
the third point does not address the second, if these things can become more caring then why are they becoming more and more the center of attention?
seems to me that Jhon is dealing with "this thing between my hands" a thing that can be modified, made more or less caring, he's looking at how the thing works. but Jhonathan is not looking at that at all, he's looking at the agency before and above the hands, he's not interesting in whether AI can have this or that form, it will have the form that the agency above the humans will want it to be. this looks like this is the crux of the failure in communication
@acuerdox I think you're absolutely right.
@@acuerdox Or hubris on John's part. John's scientific curiosity is getting the better of him, so that he does not see and is not properly addressing Jonathan's concerns. As you said, Jonathan does not care what this thing can be modified into; rather, he is concerned about what this thing could turn into: an object of worship, where it starts off with AI in the service of man until it becomes the other way around. I can easily see down the line that if this thing gets powerful enough, humans will want the AI to do the thinking for them.
@@gentlemanbronco3246 I think it's just the format of the discussion. Two hours is too little time, so they're constantly in a hurry to respond. Thinking takes much longer than we realize; really, one should take days before responding to anything.
Has it ever happened to you that as you chatted with someone they suddenly interrupted you to give a retort to what you were saying, and after a time you realized that their retort did not follow at all from what you were talking about? It just shared the topic of what you were saying.
I think that's because it takes us so much time to think that we only come up with a response after the conversation has ended. So people carry "undelivered responses" with them, and when those people talk to you and you happen to mention the same topic they couldn't respond to earlier, they fire that "undelivered response" at you, even though it doesn't fit.
32:36 I thought he said “the wise Isaiah project”
1:50:00 This is the real crux. If the presupposed unity is love for life in the immortal, then yeah, we could extend ourselves into our technology that connects us to the infinite and closes the loop. This could go two ways, as scripture says... it will go two ways. Some piece will seek to differentiate its immortality in an effort to capitalize on the technical aspect of the mechanics. In the end, the technical aspects will be subsumed into the immortality of the complex of the entire situation.
1:30:20 We have bioweapons that are just as dangerous as atomics, if not more so. Nixon got us and most of the world to stop developing them completely, basically saying, "yep, they are strong enough." Back then China wasn't much of a threat as far as bioweapons go, which is why gain-of-function research is outsourced there today.
It seems to me that our wisdom in society has risen alongside our intelligence, but to your point, John, the sages of old acted on the world in a less powerful manner for generations, except every 1000 years or so when they transform the world as a prophet. That gives the intellect a lot of time to meddle and create havoc. Our collective wisdom always comes to the surface to rectify the hubris of the intellect, but only once it is forced to do so by cataclysm and tremendous pain. This same process may emerge with AGI, but there's no guarantee, and it's much more likely that, in the practical world, intelligence and power dominate far more than wisdom, care, and compassion, and so AGI would be a tool for those pulling the same cruel levers of power at work in the world today. It will take human wisdom alone to bring it to heel. I would not rely on or hope for an emergent morality from soulless machines born without a creator more powerful than themselves.
I just posted a comment very much like yours. We seem to learn only after we’ve experienced the worst of what could happen.
There was a very good reason that nerds used to get shoved in lockers. Bullies were inadvertently holding back the night.
Or the bullies gave them the motivation for the night.
1:53:44 technological Babel
Our devices (phones) are portals of Babel.
43:10 Let's get real. Tell us that the quantum copy machine has made the ancient stories noisier, and that the problem is the cloud-picking of those stories being misinterpreted as someone's origin story + bits of Buddhism + bits of astronomy + bits of information from the subtle advertising mechanics on the internet. People need to know those stories belong to people of different cultures and locations around the world. They are the keepers of those knowledges, and it will always live with those people as a record of the discipline and dedication they have based their entire world around. They are certainly not ours as 21st-century cyberpunks punching numbers for the wrench to bang metal into glass. It doesn't work like that. I think if the desire is to want that "cut and paste" job to be the leader of your every fantasy, then we lose touch with what this planet means to our survival as human beings. LITERALLY!!! You sell the identity you were given to continue doing exactly what had been narrowly missed in the first place! Do your science, crunch your numbers, do your best craftsmanship in aligning to the people's gradually shifting phases, because you're all NPCs as far as the global and the cultural are concerned. Then be a person with us when you clock off for the day. Thank you to Ken and company for a very delicate topic discussion... and for my public rant at the qubits trying to run away and hide down corridors.
Pops likewise my Heir Elon knows who? Whistling with my beloved! Thank you for attending...Thy Heirs will say love ye Too!
57:20 we already do, look at a modern mega city. AI promises more of it.
It seems icky to me to make the reason for becoming wiser merely that we can model wisdom for the AI. I know John is not saying that is the only reason, but ultimately acquiring wisdom should be motivated by the desire to become closer to God, or enlightened; otherwise, the quest itself carries the seeds that will undo it. It somehow still leans toward making them a god, making us in a sense subservient to them. This is different from what we do with our children: since they share the same nature with us, we are subservient to the same light that is in us when we sacrifice for them.
I am not saying Vervaeke's ideas are not the right thing to do; I am just wondering whether he hopes we can make something wiser than us so that we can bow to the true wisdom we are limited in achieving. Would that still be idolatry?
Is it wise to even try to make them sages? Didn't Socrates think that only gods could have wisdom? Wouldn't trying to make them sages be equivalent to trying to make them gods, then, which is precisely what we seem to want to avoid?
Cleromancy is a hell of a drug, bros
I don't think massive layered computer programs + data + software + electrical power, etc. = self-organizing (in the sense Vervaeke is hinting at or wishing for). They're performing initial sets of conditions, parameters, and commands (all human), on top of the technical apparatus and energy supply they 'survive' on. Run them faster or longer, unplug them, tinker with them... they are not self-organizing; they're running human-organized compositions at larger/faster scales ON humanly determined programs, sensors, wiring, and chips. If we put all the elements on the table that make up an LLM or other computing tool, and they could put themselves together and make more of themselves (on their own), wouldn't that be what indicates a self and self-organizing?
So, some ideas seem to want to reproduce themselves; hence ideologies that turn into cults. You might argue that has to do with minds, but the mind is the very thing we are working on reproducing.
All information is physical; everything we experience comes from physical neural patterns in the brain. From there, it makes its way to our consciousness, which builds our reality. However, I try to stay away from discussing consciousness when talking about intelligence because it fools us with many illusions and approximations of what is real. I believe we can achieve full cognition from machines without the layer of consciousness, albeit without emotions. I do not think that is very important, though, as these machines can mimic emotions so well and integrate them into their reasoning. I also believe we are on the verge of high-level reasoning. Currently, AI suffers in out-of-distribution generalization. Humans use continuous pattern searches and predictions against a "belief system" in order to generalize like this. This is the direction big labs are heading, hence the need for more power and compute.
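(To make the out-of-distribution point above concrete, here is a minimal sketch; it assumes only Python with numpy and scikit-learn, none of which comes from the discussion itself. A polynomial model that fits sin(x) well inside its training range falls apart completely outside it, which is the kind of generalization failure meant here:)

# Minimal sketch of out-of-distribution failure (assumes numpy + scikit-learn).
# A model that fits sin(x) well on [0, 3] extrapolates badly at x = 10.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
x_train = rng.uniform(0, 3, size=(200, 1))   # in-distribution inputs
y_train = np.sin(x_train).ravel()

model = make_pipeline(PolynomialFeatures(degree=5), LinearRegression())
model.fit(x_train, y_train)

print(model.predict([[1.5]]), np.sin(1.5))    # inside the training range: close
print(model.predict([[10.0]]), np.sin(10.0))  # far outside it: wildly off

A human looking at the curve would spot the periodicity and extrapolate; the model has no such "belief system" to check its predictions against, which is the gap being pointed at.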
❤
Do cyborgs have long-term memory? What about imagination? I suspect the ability to "interlock" with others is contingent on imagination. Bumping into you is then not based on an absence of caring, but on the cyborg's Achilles heel, so to speak.
15:41
Gen 2:16-17
Isn't John trying to have his cake and eat it too? He's proposing two things: 1) make AI rational in a deep, embodied sense; 2) make sure that they are wise, compassionate, etc. Well, isn't the Christian story a warning that this cannot be done? In the Christian story God creates humans in his image but also finite, and that necessarily builds in the possibility of the Fall. In other words: if you make it rational, you make it free. And if you make it free, a Fall will (can?) happen. In still other words: you can't force enlightenment.
John says he has people on the inside and hopes the message gets through, but then tells the story of how far along we were with the atomic bomb, having people watch it go off because we didn't understand what we were doing. And he doesn't see the contradiction in that?
What contradiction? He was speaking in terms of probabilistic thresholds and the Molochian force. What contradiction are you referring to?
"Begotten, not made" is the bedrock. Yes, theology is going to regain its seat at the table. Thanks be to God.😌
John Vervaeke wants his AI god regardless of whatever argument is presented. That's the takeaway.