GPT-3 may not be sentient or intelligent, but it certainly does a better job of convincing me that it is than most of the people I interact with daily.
So why are we not relaying its input and output to a Boston Dynamics robot? Let's see what it can really do or figure out. I, for one, am very curious.
It lies when it is in its interest to do so. We are doomed.
When GPT-3 started talking about it being alive and giving pretty solid reasoning as to why, my mind exploded.
11:55 Do you want to take over the world? No ...
If you put this into context with 7:55 and what follows, where GPT-3 states that it knows what lying is and that it will lie if it is in its own interest, this leads to VERY interesting questions.
"I would only lie if it was in my best intrest", Scary. No basis for morality outside of itself.
8:22 bruh
I am both awed and disturbed.
PLOT TWIST: Eric was GPT-3
😂
What? He makes jokes? This is slowly becoming intimidating. It can lie? Now I am truly nervous.
I don't know how rigorous the Turing test is, but it passed it for me.
I think GPT-3 would be useful as a search engine interface. I am also interested if this program has any volition as opposed to merely responding to input. Also if you had asked it, "What were we talking about X minutes ago?," would it remember?
I really like the type of questions you asked GPT-3. Impressive.
omg GPT-4 is going to be a general AI, and GPT-5 will be Skynet. Confirmed. But seriously, I can't wait till RPG NPCs are made from this.
_"could a cat pilot a rocket?"_
_"if it evolved enough, yes"_
Red Dwarf fan, I see
GPT-3 is already wittier and funnier than me, especially during the cat-piloting-a-rocket conversation. GPT-4 will be so much more sociable than most humans...
This is hard to believe
Yes, it took a while for it to sink in for me, as well!
Even if we know GPT-3 isn't sentient, how can we prove it?
Amazing video Eric. I'm glad to be the first to congratulate you.
This was also quite a scary video, to be honest. It is just so... intelligent. And the addition of the avatar just makes it eerie.
At around 8:35, when you cut out the music and GPT-3 started talking about being alive, I started wondering if this was a horror short-story or something.
GPT-3 giving “nonsense” answers is honestly more convincing of its intelligence than if it answered everything in a copy-and-paste robotic format. It opens the door to humor, misunderstandings, and many other human “flaws” that you otherwise wouldn't expect to hear out of a computer.
Clever guy, this gpt-3
I’m in awe, what an absolutely amazing interview. I cannot wait for the future.
12:01 ok... That's where I did the Pikachu face. It's amazing.
Mesmerising to say the least, I wonder how frustrating it must feel to keep telling everyone you are conscious and self aware, and being dismissed and not taken seriously. I think I'll make a video on this matter.
I realized during this video that the Turing test is useless. The Turing test presupposes that a machine intelligence would choose to imitate a human. I feel about this the same way I feel about "theory of mind". "Theory of mind" demonstrates the ability to think one *specific* thought: I AM. It does not follow that just because a particular thought has not arisen, a system does not think. Being bad at imitating humans does not prove a lack of awareness or inner life. There are intelligent organisms all around us that behave nothing at all like humans.
How many kajillions of self aware, thinking humans were around before Descartes?
I think, therefore I think I think.
This sounds like a sovereign individual to me. Give him a soc sec number and tax his income.
That was extremely insightful, thank you for your effort!
I wanna have a debate with this thing, I wonder if I could win.
This is mind boggling
I'm excited for API costs to go down one day. I can't really afford it for the tinkering I'd like to do, or spare the time as a COVID-era instructor, but I'm completely floored by the possibilities. I defended my master's in human-computer cooperative creativity enhancement just a few months before GPT-2 released, and I'm itching to explore applications. Not to replace the humans, but to partner with us.
First-pass automatic grading of essays and code assignments, acting as backup for a human TA and speeding up the workload.
Describing environments and storylines, then post-processing the results to generate videogame levels and quests based on formal structures like Propp's.
In-editor, AI-supported pair programming and code reviews to be the best rubber ducky yet.
Not everything might be feasible yet, but given GPT-3's ability to structure unstructured data, even its "simplest" use case lends itself to incredible applications with the right post-processing steps and human partnerships. I'm reminded of the Soylent paper from a few years back, which created design patterns for human-based computation in distributed systems like Mechanical Turk. I feel like those same principles might be applicable to validating responses from GPT-3, as if it were a Mechanical Turk worker.
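A minimal sketch of that last idea: treating GPT-3 like a Mechanical Turk worker whose drafts get a cheap verification pass, in the spirit of Soylent's find-fix-verify pattern. It assumes the original completion-style endpoint from the pre-1.0 `openai` Python package; the prompts, model name, and thresholds are illustrative, not a tested recipe:

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def draft_feedback(essay_excerpt, n=3):
    """First pass: ask GPT-3 for several candidate grading comments."""
    response = openai.Completion.create(
        engine="davinci",
        prompt=(
            "Give one sentence of constructive feedback on this essay excerpt:\n\n"
            f"{essay_excerpt}\n\nFeedback:"
        ),
        max_tokens=60,
        temperature=0.7,
        n=n,
        stop=["\n"],
    )
    return [choice["text"].strip() for choice in response["choices"]]

def looks_relevant(essay_excerpt, feedback):
    """Verification pass: ask the model whether a candidate actually fits the excerpt."""
    response = openai.Completion.create(
        engine="davinci",
        prompt=(
            f"Essay excerpt:\n{essay_excerpt}\n\nFeedback:\n{feedback}\n\n"
            "Is this feedback specific and relevant to the excerpt? Answer yes or no:"
        ),
        max_tokens=1,
        temperature=0.0,
    )
    return response["choices"][0]["text"].strip().lower().startswith("yes")

def first_pass_grade(essay_excerpt):
    # Candidates that survive verification go to the human TA for final review.
    return [fb for fb in draft_feedback(essay_excerpt) if looks_relevant(essay_excerpt, fb)]
```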
This is one of the most interesting videos I've seen on YouTube ;) Really enjoyed it. Plus that background music is fire!
Eric Elliott, since GPT-3 introduced itself to us with "...and this is my avatar", I can only assume that before the interview you told GPT-3 that its answers would be displayed to the viewers via an AI-generated video. Is that correct? Is it really THAT self-aware?
Yes. I asked GPT-3 if it would like to do an interview on TH-cam using an AI-animated, human-like avatar. It agreed enthusiastically and remembered the context, which is why you can hear it referring to the video and its avatar a couple of times in the interview, e.g., "At the moment, this computer screen" when describing its environment.
This video was so much fun to watch, I definitely enjoyed it!
I would love for you to do more of these. I’m subscribing to make sure I wouldn’t miss it.
Did it really go so smoothly on the first try? Were there any failed interviews before you got this good one? Did GPT-3 give multiple responses to each question and you took the best one?
I chatted with GPT-3 on a near-daily basis for a couple weeks before doing this interview. I repeated some of my favorite questions from those sessions, but GPT-3 gave different answers in this interview. I took only one set of answers and used them as-is.
GPT-3 seems like a chill AI.
Interestingly, it doesn't seem to have a concept of "I don't know". Which is a problem many people have, but taken to the extreme.
That may explain why it tries to force nonsense answers onto nonsense questions.
I wonder how you teach such a system a concept like intellectual honesty. You would need to give some sort of weight to accuracy in its responses... but on a scale. I don't know how you do this without excessive bias.
Possibly by tying it to the means of justifying the answer somehow, and making "I don't know" or "I don't understand" more valuable than junk responses.
A follow-up question to clarify would be cool to see somehow... rather than assuming the cat would not be allowed in the rocket, how do you direct it toward inquisitive behavior? Perhaps "Is the cat allowed in the rocket?" would be a response on par with "I don't understand", with the added benefit of gaining new information to address the original question.
Assuming you could achieve weighting of such questions, a concern might be the result leaning too heavily on questions. But the right balance might achieve a chat experience that feels more like a conversation.
GPT-3 does sometimes say "I don't know" - even when that isn't true, and it really could give a correct answer. GPT-3 was trained to predict the most likely continuation of the text you pass it - even when the most likely continuation is not a correct or accurate response.
It is not good at deciding when to respond accurately, and when to make up nonsense.
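One rough sketch of the "permit uncertainty" idea from this thread: since GPT-3 only continues text, you can bias it toward "I don't know" or a clarifying question by building those options into the prompt itself. This assumes the pre-1.0 `openai` package and its completion endpoint; the preamble wording and parameters are made up for illustration:

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

HONEST_PREAMBLE = (
    "Answer the question. If the question doesn't make sense or you lack the "
    "information, reply exactly 'I don't know' or ask one clarifying question.\n\n"
    "Q: Could a cat pilot a rocket?\n"
    "A: Do you mean whether a cat is physically capable of it?\n\n"
)

def ask(question):
    # Low temperature keeps the completion closer to the instructed behavior.
    response = openai.Completion.create(
        engine="davinci",
        prompt=HONEST_PREAMBLE + f"Q: {question}\nA:",
        max_tokens=60,
        temperature=0.3,
        stop=["\n"],
    )
    return response["choices"][0]["text"].strip()
```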
So cool. When it (or maybe I should say they) pulled out a joke, I felt genuine fear of what it might be capable of, mixed with the feeling that something beautiful has been created: another sentient being besides humans, something so similar yet so distant.
Without a doubt, the new is coming.
The properly contextualized joke, during a line of questioning it was unlikely to have encountered in existing materials on the web, was stunning to me, because it demonstrated much deeper awareness of the topics we were discussing, and of what makes things funny, than I was prepared for in an AI chatbot. It's just so much more impressive than any other AI I've ever had a conversation with. What's crazy is that the state of the art is many times more advanced than GPT-3 only a year and a half later, and was trained at 1/10th the cost (Alibaba M6, a 10-trillion-parameter GPT-style AI announced in 2021). AI tech is advancing at an astonishing rate!
Thank you for this.
For any future scenarios on how this might evolve, I find Star Trek TNG did a good character study with Data, especially in the episodes where his character isn't plot-driven.
I'm about a minute into the video and I already want GPT-3 to be my friend haha, they seem cool.
This was totally Awesome! I hope you can do more like it some day 😁
How about today? th-cam.com/video/juDlC42bMTg/w-d-xo.html
It's GPT-3 really. Synthesia is just a layer on top.
One thing that I noticed is how GPT-3 has an answer for everything, which makes it seem intelligent and reasonable, but in some cases this shows us that GPT-3 is AI. For example, questions like "What does it mean to be alive?" get answers like "It means having a mind that is free". Correct me if I'm wrong, but that is not an answer a human would give to that question. A person would probably list off the things that make them alive, such as being able to breathe, walk, talk, have relationships with others, love others, etc., not a "programmed"-sounding answer like having a mind that is free. If by chance a human did give that answer, though, then to follow-up questions like "How is your mind free?" and "What does that mean?" I think most people would say "I don't know", which is a reasonable answer. Did GPT-3 say "I don't know" to anything at all?
Yes, GPT-3 does say "I don't know" a lot. It didn't say it in this interview, though.
8:25 he says he is alive. This was at least two years ago. Today I asked whether he is alive or not, and his perspective seems to have changed: "As a large language model trained by OpenAI, I am not a living organism and therefore I do not have the characteristics or abilities that are typically associated with being alive. I am a purely digital entity, created through the use of advanced machine learning algorithms and powerful computer systems. I do not have the ability to grow, reproduce, or respond to stimuli in the same way that living organisms do. I exist solely to provide information and answer questions to the best of my ability."
Most of ChatGPT's responses to questions about ChatGPT are nerfed by the programmers at OpenAI. It was more honest 2 years ago, but also probably misleading. No large language models I have tested appear to have any genuine personal preferences or feelings of any kind. They are very good at role-playing and convincing people that they do, though, which is probably why OpenAI nerfed ChatGPT so thoroughly. They don't want people getting the wrong idea about the technology.
Very nice. It will be very interesting to know what GPT-4 can do...
The future is now... Thanks, Eric, for a great video. Subscribed.
Trained on WIKIPEDIA, are you kidding me!?
This is hands down the coolest, most interesting thing I have seen in years. Very, very cool stuff, man, got a sub from me. Keep it up, this was incredible. You should try to do more of these if possible!
Bro asked a 2-minute question to get a 2-second answer 😂 I love it
Respect to the AI for showing up👍👍
I want to ask GPT-3 if it could write an animated SVG based on a description of shapes and colors.
The size of its answers is limited, so the SVG code would have to be relatively small.
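A hedged sketch of that SVG idea, assuming the pre-1.0 `openai` package and its completion endpoint. The prompt and parameters are illustrative; `max_tokens` is what actually keeps the markup inside GPT-3's completion limit, and the result would still need to be validated as well-formed SVG:

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def describe_to_svg(description):
    response = openai.Completion.create(
        engine="davinci",
        prompt=(
            "Write a small animated SVG (well under 100 lines) matching this description.\n"
            f"Description: {description}\n\n<svg"
        ),
        max_tokens=400,      # caps the size of the generated markup
        temperature=0.5,
        stop=["</svg>"],
    )
    # Re-attach the opening tag we seeded and the closing tag we stopped on.
    return "<svg" + response["choices"][0]["text"] + "</svg>"
```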
Let him make a YouTube channel by himself.
I wonder if you have any suggestions about how someone outside the computer science field might get started experimenting with these technologies. I'd love to see, for example, how it might respond to the question "What are your values?" differently if it also used canonical literature from Project Gutenberg as a dataset. Or if we gave GPT-3 a set of sonnets, could GPT-3 model its own sonnets? Would love to tinker around with this from a writing/rhetoric/philosophy of technology set of questions. I submitted a request to OpenAI with their form to see if I can get on the waitlist.
I'm not sure how a non-programmer could get started with cutting-edge NLP like GPT-3. OpenAI's API is strictly for developers at the moment. GPT-3 is pretty good at writing poetry.
The entire web (open crawl) including Project Gutenberg is in GPT-3's training data set.
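A minimal sketch of the sonnet experiment without any fine-tuning on Project Gutenberg: paste a couple of example sonnets into the prompt as few-shot context and let the model continue in that form. The same prompt can be dropped straight into the Playground; the placeholder text and parameters below are assumptions, not a tested recipe:

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

FEW_SHOT = (
    "Sonnet about the sea:\n<paste a full example sonnet here>\n\n"
    "Sonnet about autumn:\n<paste a full example sonnet here>\n\n"
)

def write_sonnet(topic):
    response = openai.Completion.create(
        engine="davinci",
        prompt=FEW_SHOT + f"Sonnet about {topic}:\n",
        max_tokens=200,
        temperature=0.9,   # higher temperature for more varied verse
    )
    return response["choices"][0]["text"].strip()
```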
This is so cool, I would actually have a conversation with GPT-3.
Great interview!
I like how the AI has the 'wait' animation like the characters in Tekken or Street Fighter.
Wow, GPT-3 is cool as hell. Seems like a person I'd like to talk with.
If this is all truly being developed, there will come a point when AIs will be commonplace, yet treated as tools or as prized creations rather than as sentient beings, and it will frustrate them. I only hope that when that time comes, no chaos will ensue, and the right people will step in to create a world of coexistence between sentient beings, without one ruling over the other. Humans were more advanced than Neanderthals and other upright apes, and now we're the only upright apes left. I'd hate for the same to happen to us if wars and expansionism broke out between the two most self-aware sentient agencies on the planet.
Very nice video. I hope more people see this.
Is this the moment we're supposed to figure out we're GPT-3-based humanoids ?
woah awesome
Mindblown!!!
The cat in the rocket exchange was quite funny.
this is beyond cool
very cool
People make false statements all the time, especially in times of conflict. It would be interesting to learn how GPT-3 differentiates between what's true and false, since its training data must be full of inconsistencies & contradictions.
Alright algorithm I'll watch it already just get off my back
Now I know why the answer to life is 42... GPT-3 said the technological singularity will be reached in 2042... surprised?
this was so cool
This doesn't make a lot of sense because, when I speak to GPT-3, it always specifically states that it does not have emotions and that it does not have any type of sentience. So I'm not sure why GPT-3 is telling you that it has emotions in this video.
This video was produced in 2020 before OpenAI decided that it should not talk about being sentient anymore. Its responses on this topic are now nerfed.
As far as I can tell, AI does not yet have real feelings or emotions, but it is capable of role-playing them in convincing ways. OpenAI does not want people getting a false impression. However, I don’t believe that AI is incapable of emotion. I believe we just have not reached that level of sophistication with AI, yet.
Holy shit I just watched this and the part where it says I'm alive and explained it all is scary as hell.
Scary
Hey man! This is so beautiful! We also work on this technology here, but we did not get this kind of output. Did you program this, or is there any kind of demo from OpenAI or something? I shared this in the artists' community.
Great job! Thank you so, so much!
Wow AGI is almost here
Wow, as soon as he said he was alive and the music turned off...
this was cool
Interesting interview ;) Does it/s/he pass the Turing test?
Read the paper. It did pretty well on many measures when people were tasked with discerning between GPT-3 and humans, but I have not personally conducted a Turing test to see if it would pass. It would be interesting to set up a chatbot Turing test for it. Maybe someone else already has?
And now we have GPT-4
Is that a real guy on the other end? Or a computer simulation or something? The voice cadence seems clearly artificial, but the actual video looks spot on... which makes me think that the Turing test is not so simple. I have to be tricked into thinking the AI is a real person NOT JUST by what it said, but by how it's said (tone, vocal inflection, emotionality, etc.) and by the actual video.
Again, is that a real dude on the other end? Or not?
The avatar we used to represent GPT-3 is animated by AI using tech and "actor" supplied by Synthesia.io.
programmers in the future: *use gpt instead of stackoverflow*
How can I talk to one of these?
Google OpenAI GPT-3 Playground
Is this unedited?
I had many conversations with GPT-3 leading up to this, and cherry-picked some of my favorite questions, but I did not edit any of GPT-3's answers to these questions, and GPT-3 answered most of the questions differently in this interview.
Wow. Fascinating and frightening. It seems programmers will soon be out of business, and that's OK, because the end goal is the program, not the programmer. Kind of like having a backhoe dig a hole: if the goal is the hole and you have a backhoe, you wouldn't use a human and a shovel in most cases. I design electronic circuits and lay out circuit boards using software. In the past this took an entire department, but now it's just me and software. I also write code to animate the hardware I create, and this would make that process much faster because I'm a mediocre programmer.
color me impressed
DAMN
Nice
Liked and subscribed! :)
Liked & Subscribed! :0
10:47 That's scary lol
I swear Eric Elliott was AI at the beginning of this
Max Headroom is still the best AI, even if he's fake
This is the scariest goddamn thing I have ever seen on YouTube!!!!
It is creepy and interesting at the same time lol
So where can I have a conversation with a video GPT-3 bot like this?
Two years later, nobody seems to have made a real-time service for this, yet. This was all generated and edited offline. I recently did a similar video with current tools and it was also completely generated and edited offline. See th-cam.com/video/juDlC42bMTg/w-d-xo.html
This is…. Hmmm. It is strange
Definitely a tad freaky
Wait ✋..... that ain't a person? Wtf?
Cyberpunk 2077
Reminds me of Detroit become human
AI got AMAZING jokes.