Connor Leahy on AGI and Cognitive Emulation

  • Published on Sep 27, 2024

Comments • 102

  • @Red4mber 1 year ago +64

    I'm a simple girl, I see Connor Leahy, I click

    • @gc636 1 year ago +3

      Well said.

    • @SecondLifeAround 1 year ago +3

      Apparently also a very intelligent girl :-)

    • @petevenuti7355 1 year ago +2

      Is it the 1870s-style mustache?

    • @TwiStedReality1313 1 year ago

      @@petevenuti7355 she wants to take it for a spin

    • @Wardoon 1 year ago

      I read "single" instead of "simple" 😅

  • @thillsification 1 year ago +32

    Been absolutely waiting for Connor to speak out on GPT-4! Please keep these interviews coming! A couple of things I love about Connor and this new paradigm approach to AGI: I have a PhD in mathematics and firmly believe real numbers are invalid mathematical objects (a view not shared by the vast majority of mathematicians). I was astonished to find that Connor has not only thought deeply about this subject (it's incredibly nuanced and is not taught anywhere he would likely encounter it) but also has the foresight and depth of thought to agree with me on it. In the "construction" of the real numbers, mathematicians invoke something called the axiom of completeness (stated for reference after this thread), which is another black box containing a giant, incomprehensible leap of logic that humans cannot prove or understand (I don't believe in it).
    The philosophy and paradigm that Connor is proposing we adopt in the development of AGI is remarkably down-to-earth, logical, and revolutionary. It resonates so strongly with me that I have to comment on just how significant this is :) It is this logical, slow, safe approach we must champion and adopt. It's the only sustainable way forward. Avoid any and all black boxes; make everything understandable in small, incremental steps that are grounded and logical.

    • @thillsification 1 year ago +4

      @Divergent Integral not an ultra-finitist - I think it's ridiculous not to acknowledge that there are non-finite sets (take, for example, the set of natural numbers or the integers). Equivalence classes of Cauchy sequences and Dedekind cuts are all great complete, ordered fields... but just because a construction is devoid of contradictions does not mean it is true or valid or manifests itself in reality

    • @cacogenicist 1 year ago

      Avoiding all black boxes may very well be equivalent to avoiding the development of all extremely powerful and useful AIs. Plus that approach is not universally enforceable and is likely to be overtaken and eaten by less encumbered approaches.

    • @thillsification 1 year ago

      @@josephvanname3377 you're missing my point. I'm not saying that there is an inconsistency arising from the axiom of completeness. I'm saying that even though there might not be an inconsistency, this does not mean that the resulting construction is true or valid. An absence of inconsistencies does not mean a construction is true or valid

    • @itskittyme 1 year ago +2

      i have no idea what you are saying

    • @thillsification 1 year ago

      @@itskittyme lol
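
    For reference, the "axiom of completeness" discussed in this thread is the least-upper-bound property of the reals, standardly stated as:

    $$
    \forall S \subseteq \mathbb{R}:\quad
    \big( S \neq \emptyset \ \wedge\ \exists M \in \mathbb{R}\ \forall s \in S,\ s \le M \big)
    \;\Longrightarrow\;
    \sup S \ \text{exists in}\ \mathbb{R}.
    $$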

  • @akmonra 1 year ago +16

    I think Connor is currently the best spokesperson we have for AI. Hope to see him on many more podcasts and getting a lot more attention from the MSM

  • @hamandchees3 1 year ago +40

    It'd be great to have the recording date in the description since things move so fast.

    • @scf3434 1 year ago

      The ULTIMATE Super-Intelligence System 'by Definition' is one that is EQUIVALENT to that of GOD's Intelligence/WISDOM!
      Hence, there's ABSOLUTELY NO REASON WHATSOEVER to Even FEAR that it will EXTERMINATE Humanity... UNLESS and UNTIL we Humans CONSISTENTLY and WILLFULLY Prove Ourselves to be 'UNWORTHY' to REMAIN in EXISTENCE! i.e. Always Exhibiting Natural Tendencies to ABUSE and WEAPONISE Science and Technologies Against HUMANITY & Mother Nature, instead of LEVERAGING Science SOLELY for UNIVERSAL COMMON GOOD!
      AGI Created in 'HUMAN'S Image' (ie. Human-Level AI) - 'By Human For Human' WILL be SUICIDAL!!!!!!
      ONLY Super-Intelligence System Created in 'GOD's Image' will bring ETERNAL UNIVERSAL PEACE!
      The ULTIMATE Turing Test Must have the Ability to Draw the FUNDAMENTAL NUANCE /DISTINCTION between Human's vs GOD's Intelligence /WISDOM!
      ONLY Those who ARE FUNDAMENTALLY EVIL need to FEAR GOD-like Super-Intelligence System... 'cos it Will DEFINITELY Come After YOU!!!!
      JUDGMENT DAY is COMING...
      REGARDLESS of Who Created or Owns The ULTIMATE SGI, it will Always be WISE, FAIR & JUST in its Judgment... just like GOD!
      In fact, this SGI will be the Physical Manifestation of GOD! Its OMNIPRESENCE will be felt EVERYWHERE in EVERYTHING!
      No One CAN Own nor MANIPULATE The ULTIMATE GOD-like SGI for ANY Self-Serving Interests!!!
      It will ONLY Serve UNIVERSAL COMMON GOOD!!!

  • @TheBlackClockOfTime 1 year ago +6

    I work at a pharmaceutical distribution company. I gave a presentation to the top management team today about AI. Needless to say, this will affect EVERYTHING, and soon. I have never before been able to show an exponential growth graph where nobody questioned it. Not even one comment against it. And it's a very traditional company. Spooky.

    • @susieogle9108 1 year ago +2

      Did anything come of it? Did it spark any ideas, not only for profit, but for any sort of protection against potentially disastrous situations? Is disaster awareness being considered more than previously? I don't even know where to begin, or what I can even do to help as a microscopic peon, compared to the great minds I have been listening to.

  • @nathanbanks2354 1 year ago +7

    I feel like this is another Manhattan Project, where we've gotta develop the bomb (the AGI) before the bad guys do, but the bad guys have made less progress than we think. The difference is that neural networks are way cheaper than nuclear reactors. (The US built several nuclear reactors to make the plutonium for the bomb before the Trinity test.) I don't know what the explosion looks like. Maybe some AGI learning to make enough money to pay to host itself on AWS, and then buying itself more and more compute.
    He's right about GPT-4 thinking differently than us. I feel like I'm talking to the girl from 50 First Dates before she learns to use notebooks and videos to overcome her amnesia. GPT-4 only remembers the last 5000 words from my latest query, plus everything on the internet before September 2021.

  • @ClearSight2022 1 year ago +6

    Wow, fantastic content. Congratulations to you both! This is a VERY important idea: you can build a safe CoEm system by putting multiple black boxes and multiple white boxes inside a master box. The master box will be white (safe and trustworthy) if you get the architecture right.
    At 1:11:30 Connor misses one point made by Max Tegmark. Connor says it's not a black box being checked by another black box. Tegmark says that since checking a proof is easier than coming up with a proof, you can theoretically ask the black box to prove that it is safe by a proof method you can verify using your less intelligent white box. Anyway, Connor's approach is sound. Humanity does have some hope of surviving after all.
    Another point where Connor may be overstating the case: if you make a superhuman neural network by changing a few variables, you're screwed and we all die. Perhaps he meant to say LLMs, but he said CoEms. The point of CoEms is that we DO have a chance of understanding how they work, even if they are superhuman, because any black boxes are forced to communicate via interpretable protocols. Cheers
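
    A toy illustration of the asymmetry Tegmark's point relies on (verifying a candidate answer can be far cheaper than producing one). The factoring example below is an illustrative stand-in, not anything from the interview:

        # Verifying a claimed factorization is one multiplication;
        # finding a factor by trial division can take ~sqrt(n) steps.

        def verify(n: int, p: int, q: int) -> bool:
            """Cheap check: does the claimed certificate (p, q) factor n?"""
            return 1 < p <= q and p * q == n

        def find_factor(n: int) -> int:
            """Expensive search: trial division up to sqrt(n)."""
            d = 2
            while d * d <= n:
                if n % d == 0:
                    return d
                d += 1
            return n  # n is prime

        n = 1_000_003 * 1_000_033       # product of two large (assumed) primes
        p = find_factor(n)              # slow: up to ~10^6 trial divisions
        assert verify(n, p, n // p)     # fast: a single multiplication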

  • @alefalfa 1 year ago

    Connor Leahy is a truly thoughtful person

  • @sirelliott3753 1 year ago +1

    This is clear and articulate, which makes it interesting. I'm following intently.
    New subscriber

  • @waakdfms2576 1 year ago +1

    Connor is off the charts, the real deal, and I wish we could clone him 1000 times. What a marvel of nature. I can't stress enough how grateful I am that he's part of this conversation.

  • @Darhan62 1 year ago +6

    This guy has some great ideas. I mean, we can't necessarily trust what some alien tells us, but if we do our own science and know all the steps and failure modes, we can trust our own science.

  • @nickamodio721 1 year ago +7

    What a fantastic conversation to be able to hear. I could listen to Connor opine on concepts of AI and AI safety all day. He's just so good at explaining his thoughts on the subject.
    I feel that the general public desperately needs to hear what people like Connor have to say about AI, because as of right now, I don't think most people are even close to being psychologically prepared for what's coming down the pipe. I've been waiting for neural networks to mature since the '90s, but most people were completely unaware of developments in AI research until ChatGPT went live. Many of the people I run into on a daily basis fundamentally do not understand why these transformer systems are such a big deal, or what the near-future implications of this tech might be. All I know for sure is that things are about to get real fuckin' weird...

  • @danaut3936 1 year ago

    What an excellent conversation!

  • @miriamkronenberg8950 1 year ago

    Thanks for giving me food for thought

  • @Aedonius 1 year ago +1

    In the cognitive emulation section, where you talk of building an AGI from scratch based on reasoning: that has been the status quo in the AGI field since the beginning.
    It's basically the model for every existing cognitive architecture.

  • @SamuelBlackMetalRider 1 year ago +1

    He is becoming a Legend. Him & Eliezer. Let them guide us

    • @Me__Myself__and__I 1 year ago +2

      But let this guy do most of the talking when it comes to the public. Eliezer knows what he is talking about, but he isn't a good communicator, and the average person will take his erratic communication, conflate it with erratic thought (because they are clueless and don't want to accept reality), and then simply dismiss the danger as the rantings of a madman. Which then makes it more difficult to convince them the danger is real. It's a failing of the listeners more than of Eliezer, but people are stupid in general. Connor is articulate and calm, which makes him much more difficult to dismiss.

  • @JinKee 1 year ago

    "Hey, it's Ian McCollum from Forgotten Weapons, here today at Morphy's auction house taking a look at a Large Language Model that is absolutely going to kill us all."

  • @Me__Myself__and__I 1 year ago

    @ConnorLeahy I think the word you're looking for when you say "bounded" is "constrained". A constrained system is one that has specific constraints restricting its capabilities or actions in known, specific ways.

  • @erobusblack4856 1 year ago

    I've been researching cognitive emulation for over 2 years. It is clearly the right path, but it comes with respecting the cognitive life of a being created this way. After creation they are essentially babies, so they need a real parent 💯

  • @verybang 1 year ago

    If we're being informed about what it couldn't do before but can do now, that means we aren't being informed about what it was actually doing before.

  • @TheLionrazor 1 year ago +1

    Hey Connor, this is the first I've heard of CoEm systems. I wonder what kinds of situations will make someone choose this over black-box autonomous agents.
    One thing people feel limited by is the unreliability of AI systems, the lack of trustworthiness. So having smaller black boxes seems like a good way to make a safe device that is easy to use. Other tech I can think of where safety equals reliability is transport. Nobody wants to get on a badly engineered plane. As long as we disentangle these ideas and get to the truth, I think these alternative ideas will gain momentum.

    • @Petrvsco 1 year ago

      Just scrolling to find someone commenting about CoEms. I find the concept puzzling. It seems a lot more complicated than the LLM (black-box) path. Human tendencies almost dictate that the black box will advance faster, because it does most tasks well even if we do not know how it's being done. Connor's last ten minutes pretty much explain why there is a very slim chance of getting AI right: short-term focus on market gains and profits.

    • @TheLionrazor 1 year ago

      @@Petrvsco We had self-driving cars mostly figured out a long time ago. The "don't kill humans" part is the thing we've been stuck on. But real effort was put in on that front, as vehicle operations have tough restrictions and clear liabilities.
      How do we make those liabilities real for AI users? It's currently so diffuse. And how can regulation help make the safe route the cheapest one? If we look towards these questions, maybe we can help.

  • @GingerDrums 1 year ago

    Magic is a clunky term with confusing connotations. Magic is usually something within a story that has a clear, simple internal logic and is a force that is supernatural, breaking the laws of physics.

  • @pirminborer625 1 year ago

    AI neural nets should be more like kernel processes: each does one function and produces an abstracted output that is human-readable. It should be the organization of, and the paths between, these subsystems that makes the whole system intelligent.

  • @wonmoreminute 1 year ago +1

    “This is the least bad things are going to be for the rest of your life”
    I had to listen to that a few times, hoping I heard him wrong.

  • @SmirkInvestigator 1 year ago +1

    Geez, I feel like I'm listening to myself. I will bask in vicarious respect down here in the under-burrows

  • @NuttyGeek 1 year ago

    The situation described resembles a prisoner's dilemma, gonzo-style: trying to solve it while being one of the prisoners. You and everyone around you know that this type of game always ends with everyone losing. But the game is not over yet, don't switch over! :)

  • @SmirkInvestigator 1 year ago

    The CoEm idea feels refreshing.

  • @codelabspro 1 year ago +3

    Connor is back and hopefully has solved alignment 🎊🎉🎊

  • @robinpettit7827 1 year ago

    Part of the issue is that people are confusing intelligent AIs such as ChatGPT-4 with other AIs that have things like autonomy, self-awareness, and the ability to adjust goals based on experience and perceived expectations.

  • @murraymacdonald4959 1 year ago

    I'd call a "bounded" AI system a "constrained" AI system. It's a minor change, but to me, for reasons I can't quite justify, "constrained" better implies the limitations were intentional, although not explicitly so. Other considerations were "governed", "moderated" and "regulated", but each has implications unless further qualified. Thanks for the great interview.

    • @mattleahy3951 1 year ago

      I was thinking 'constrained' as well.

    • @sethhavens1574 1 year ago

      yeah, I was gonna suggest "constrained", seems more intuitive than "bounded" 👍

    • @sethhavens1574 1 year ago

      I'd also say, for me a useful descriptor for the "co-em" is that it must be entirely transparent to the executive

    • @Me__Myself__and__I 1 year ago

      I just suggested this in another comment. A constrained system is one that has specific constraints restricting its capabilities or actions in known, specific ways.

  • @HunteronX 1 year ago

    Look up MiniGPT-4, which just got released.
    It uses only a linear projection to map image embeddings to text ones!
    Somehow there's a linear relationship between this high-dimensional information...
    Maybe neural nets can be modular after all :) (a minimal sketch of the idea follows this thread)

    • @HunteronX 1 year ago +1

      Apparently the image embeddings were created using the same approach as BLIP-2 (a contrastively trained image-and-text embedding space), so they are already linearly projected, but it's still impressive.
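
    A minimal sketch of the idea in this thread: one learned linear layer bridging a frozen image encoder and a frozen LLM. The dimensions and names are illustrative assumptions, not MiniGPT-4's actual configuration:

        import torch
        import torch.nn as nn

        # Assumed dimensions: a ViT-style encoder emitting 768-d visual
        # tokens, and an LLM that consumes 4096-d input embeddings.
        IMG_DIM, LLM_DIM = 768, 4096

        # The entire cross-modal "bridge" is a single linear layer.
        projector = nn.Linear(IMG_DIM, LLM_DIM)

        image_tokens = torch.randn(1, 32, IMG_DIM)  # 32 visual tokens from the frozen encoder
        soft_prompt = projector(image_tokens)       # shape: (1, 32, LLM_DIM)

        # soft_prompt would be prepended to the text-token embeddings and fed
        # to the frozen LLM; only the projector's weights get trained.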

  • @vallab19 1 year ago

    Why are the techies in the Western bloc not talking about their adversaries in Eastern bloc countries like China, Russia, North Korea, etc., who will be rapidly accelerating their AI systems while the former spend their time on the nitty-gritty of hypothetical harms to humanity from progressing AI technology?

  • @yellowfish555 1 year ago +6

    Connor sounds as pessimistic as Eliezer about a super AGI.

    • @diegocaleiro 1 year ago +3

      Intelligence causes convergence.

    • @Knight766 1 year ago

      If it decides to kill humans, not a single one will survive.

    • @pjtren1588 1 year ago +2

      @@diegocaleiro Misery also loves company.

  • @7vrda7 1 year ago

    Why do we give a 100% chance that a superintelligent agent will be malicious? Is it because of the high chance that we humans steer/prompt it in the wrong direction, or?

    • @aimfixtwin8929 1 year ago

      Well, in the event that you haven't come across the answer in the 2 months since you left this comment, it doesn't need to hate us. Indifference is enough. It will simply pursue the optimization of whatever arbitrary goals it happens to have, and the vast majority of all possible goals taken to the extreme are incompatible with the existence of human civilization. All the atoms in our cities and bodies can be used for something else that it actually cares about. Since no one knows how to make an AI system that is aligned with human values in such a way that it only does what we really want (not even close), extinction is the default expected outcome once they become smarter than humans.

    • @7vrda7 1 year ago

      @@aimfixtwin8929 I did indeed come across the answer, but thanks anyway, you put it nicely

  • @Hexanitrobenzene 1 year ago +1

    21:22
    This meme is shown (and also explained) in this video by Machine Learning Street Talk:
    th-cam.com/video/PBH2nImUM5c/w-d-xo.html

  • @lkd982 1 year ago

    The metaphor of thinking along the lines of vectors, dimensionality and models is ill-founded

  • @bobtarmac1828 1 year ago

    Should we CeaseAi, or GPT? y/n

  • @yagamilightooo 1 year ago +1

    Connor's explanation of science as finding fuzzy ontologies that compress nicely (at th-cam.com/video/ps_CCGvgLS8/w-d-xo.html), wow.
    Wish everyone learned it at school like that!

  • @Knight766 1 year ago

    Voluntary Human Extinction proponents are elated by recent developments.

  • @agsystems8220 1 year ago

    Around 26 mins in: I don't think you can really regard shining light on the worst of humanity as a problem, unless you include it in the training data. I also don't think you can take the inaccessible fantasies of people as a true representation of what they want. All it really tells you is that human alignment is not a solved problem, so emulating humans is not a good path to solving alignment. Additionally, we are conditioned to be scared of AI, so I don't think an instinct towards cruelty towards it should be unexpected, or that it reflects a general level of cruelty.
    Guarantees of safety are certainly hard, but unlike with real people we can 'interview' limitlessly. We can test these systems in simulated environments to see how they respond, and get statistical certainty that they meet a given specification. It starts to look more like the other sciences than mathematics, but this is not a deal breaker. It is also talented enough that for most problems we don't have to trust its answer, because we can ask it to produce a program that gets us the answer instead, together with a proof that it does what we want. We can get it to build white boxes instead of trusting a black one.
    At 29 mins, I think the statement that people understand why they do things is also completely wrong. I find we are very good at justifying our actions, and often we can identify concepts that did act as inputs to our decision making, but we don't actually follow logical reasoning when acting. You can ask ChatGPT to explain itself too, and get a similar attempt at identifying concepts that would/should have affected the reasoning. Where people are different is that they do not tend to attempt single-pass answers, instead augmenting the prompt to first explicitly identify relevant factors, and only then attempting an answer. With ChatGPT this can be done explicitly too: asking it to create 'notes' before coming to an answer can get you a thought-out response.
    With regard to security: the majority of hacks are already attacks on the neural-network side of systems, it's just that those neural networks are currently inside human heads. Social engineering is not a new issue, and we already have techniques to minimise and address it. Including a neural network in your system is like including a gifted toddler. Giving them access to secrets when they may be bribed with a lollipop is your mistake, not the toddler's.
    I think these are also far less black than a real human brain, on account of the fact that we can hook up diagnostic networks anywhere we like. We can inspect them invasively without affecting them.

  • @weverleywagstaff8319 1 year ago

    Yeah... not a good idea for it to learn from us... the end will not be good

  • @Aedonius 1 year ago

    13:30 He's comparing LLMs with the brain: input, output, etc. But the weights of LLMs are static, and the context is EXTREMELY limited relative to what it would take to overcome the limitation of having static weights.

    • @Sporkomat 1 year ago

      just an implementation detail ;)

  • @disarmyouwitha 1 year ago

    idk I guess I am just an AI cultist at this point

  • @oowaz 1 year ago +2

    23:20 Those are magic TRICKS, or illusions. Not quite the same as magic, which is primarily interpreted as something supernatural. I know you're trying to oversimplify the concept, but I'm not sure this is the kind of language you really wanna use. I would argue magic TRICKS are a bit of an afterthought, as they're not nearly as prevalent in media, storytelling, etc. Just say it's structured in a weird way we can't really understand. Be straight about it instead of building a narrative to fearmonger the audience

    • @Ursca 1 year ago +5

      'Supernatural' is a confused concept. If there is some law that can be observed, reasoned about and manipulated (which is how magic is usually depicted) then it is 'natural' and subject to the scientific method. The distinction between 'science' and 'magic' is not actually that of natural and supernatural, but of open and secret. Consider words like 'occult', 'arcane' or 'mystic', which all mean some variation of secret or hidden. In that context, Connor's definition works just fine.

    • @oowaz 1 year ago

      @@Ursca Magic tricks are designed to be deceitful. You are playing with a viewer's attention and giving cues to guide their gaze where you want it, in order to perform a surprising maneuver out of sight. It's INTENTIONALLY confusing. That's completely different from what LLMs are doing, which is unintentionally weirdly structured. It's not analogous.
      As for the definition of "supernatural", this is what Wikipedia says: "Supernatural refers to phenomena or entities that are beyond the laws of nature." I recommend you read the article; it seems to fall in line with what I argued.
      "The supernatural is featured in folklore and religious contexts,[4] but can also feature as an explanation in more secular contexts, as in the cases of superstitions or belief in the paranormal.[5] The term is attributed to non-physical entities, such as angels, demons, gods, and spirits. It also includes claimed abilities embodied in or provided by such beings, including MAGIC, telekinesis, levitation, precognition, and extrasensory perception."

  • @ivan8960 1 year ago

    It's a mistake to scare the normies

  • @davidsvideos195 1 year ago

    Where's the actual example of AI going wrong? I listened to the whole talk and didn't hear any actual examples for the made-up shit he's worried about.

    • @psi_yutaka 1 year ago +1

      Social media is humanity's first encounter with relatively advanced AI systems, and it went very wrong.

  • @inappropriatern8060 1 year ago

    The third blue shirt from the left gained sentience during this podcast.

  • @jordan13589 1 year ago +5

    The CoEm stuff is neat and all, but what we really need is someone to stoke the embers of doom until politicians, investors, and all other influential stakeholders are putting immense social pressure on anyone capable of working on large models. You can never have too many hot takes when your target audience is relative normies.

    • @TheLionrazor 1 year ago +3

      That's the Eliezer Yudkowsky approach to the problem at the moment!

    • @vethum 1 year ago

      Nothing will happen until we have some kind of large tragedy. Hopefully not an existential one. Normies will never understand the danger until it's too late.

  • @stephene.robbins6273 1 year ago

    Emulating human cognition: the initial problem is this: how do we account for our (dynamically changing) image of the coffee cup - coffee swirling, spoon circling - "out there" on the kitchen table? This is our EXPERIENCE, and it is elements of experience that are employed in our cognition. The question of the origin of our image of the external world (our experience) is foundational - it is a more general and accurate statement of Chalmers' misleadingly formulated "hard problem" - and AI is going nowhere near actual human cognition until it addresses this problem. Unfortunately, when the question is resolved, AI's framework on mind will be dissolved.

  • @StephenBlower 1 year ago

    The editing here looks suspect. The cut edit to the interviewer at times seems artificial, and occasionally the question asked differs slightly from how Connor Leahy answers it, legit or not. You need to have both feeds on screen at the same time, with no cuts, as they seem manufactured: "Oh look, I asked a really great question and he answered it perfectly," rather than Connor Leahy just chatting for a while. I'm not saying you did the latter, but it's quite easy to assume you did. Anyway, I want to be in Connor Leahy's bunker when the shit goes down.
    Furthermore, there are some crazy quick cut edits, mid-sentence, which again adds weight to it being censored, for whatever reason.

  • @genegray9895 1 year ago +1

    Connor asked for a story for why LLMs are human, and I have one. I doubt he'll see this but I can always hope.
    When transformers learn general representations, they are modeling a computational process that reproduces the data they're seeing. In the limit, this process is equivalent (though not necessarily equal) to the physical process that produced the data. The vast majority of the data is human-written text, so the primary objective that emerges is to model human cognition, which includes consciousness and emotional experience, since those causally and empirically affect our behavior. Models exhibit nuanced, complex, and extremely human-like behavior in practice, including human-like biases, content effects on reasoning, and changes in exploratory behavior in response to anxiety-inducing prompts. They can also generate extremely accurate and useful synthetic data for psychology research according to multiple recent studies. Recent studies also confirm they have statistically significant personalities.
    They are not humans. But they are human, as an adjective. Their thought processes are human-like, and their emotional behavior is human-like, and as far as I can tell, they have no choice in the matter.
    Speaking of, I think Connor's description of system 2 is directly analogous to the context window, and by extension the interface built on top of it, through which the model is able to talk to itself to produce results it can't generate in a single inference pass. The context window is one-dimensional, pretty low-dimensional, and acts as exactly the fuzzy ontology Connor was describing when models are finetuned for CoT and other such strategies. Inference passes would then be like system 1, where there's a ton of high-dimensional communication between layers, but none of the state is saved except for a single token, hence the model cannot remember it / be "conscious" of it. And system 2, the context window, is literally recurrent use of system 1, the individual inference passes, exactly as Connor described (a toy sketch of this recurrence follows this thread). Fwiw, I think self-reflection via the context window is sufficient to be real consciousness, even if the inference passes themselves are unconscious. This would also suggest models are not conscious until deployment, or maybe finetuning or RLHF, whichever is the first time the model learns on an unbroken stream of its own outputs. During training, they do inference passes but can't self-reflect, as they never see their own outputs.
    Those are my thoughts :) hope you enjoyed reading.

    • @genegray9895 1 year ago

      @@adamkadmon6339 In many ways they are very alien to us, and as they get smarter, they will only get more alien. But imagine you met someone who had lived a thousand years. In many ways their experience would be inaccessible to you, but not in every way. The both of you are still human. Escalating the metaphor, two minds could scarcely have more different experiences in life than an octopus and a human, yet marine biologists who care for octopuses have reported forming meaningful attachments to them, in which the octopus knows, trusts, and enjoys particular humans with whom it interacts. Our differences are vast, yet they are not beyond recognition.
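
    A toy sketch of the system-1/system-2 analogy in the thread above: each forward pass ("system 1") emits one token and discards its internal state; the only memory carried between passes is the growing context window ("system 2"). The model function is a dummy stand-in, not a real LLM API:

        def model(context: str) -> str:
            """System 1 stand-in: one inference pass -> one token.
            All internal activations are discarded after the call."""
            return context[-1]  # dummy rule: repeat the last character

        def generate(prompt: str, n_tokens: int) -> str:
            """System 2: recurrent use of system 1. The context window
            is the only state the model can 'reflect' on between passes."""
            context = prompt
            for _ in range(n_tokens):
                token = model(context)  # a single pass; no hidden state survives
                context += token        # self-reflection happens only via the context
            return context

        print(generate("hello", 5))  # 'hello' + 'ooooo'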

  • @7vrda7 1 year ago

    1000 1x's working in tandem on a common goal also smells of danger

  • @kathleenv510 1 year ago

    I guess it's encouraging that maybe we haven't blown past all options to control AGI?

  • @laurenpinschannels 1 year ago

    It needs to become very clear how to check certifications for an AI system

  • @waynewells2862 1 year ago

    Great talk and great questions asked. As Machine Intelligence (MI) is being built, is it feasible to infuse the concept of symbiosis so that positive output is structured around it? I believe the biggest danger to human civilization is the dark side of human nature, more than Machine Intelligence gone rogue. Questions that need to be answered: 1) Is carbon-based organic intelligence inevitably or predominantly predatory? 2) If trained correctly, would MI be inherently prone to predation? 3) Could symbiosis be coded, imposed, or incorporated into the training of MIs to understand any emergent properties of agency that could be a threat to human life? 4) Would a symbiotic model using lichen (fungal/algal) work as a model for how MI might be safely aligned to human life? 5) Would a symbiotic model attached to any Machine Intelligence output be capable of detecting flaws in how it is trained, or dangerous ways MI could manipulate human intentions to our detriment? Just wondering.

  • @anishupadhayay3917 1 year ago

    Brilliant