As a kid in the '70s, I watched Kirk and Spock having scientific conversations with the Enterprise computer. Now it's really happening and everybody can access it. Pretty incredible.
What a time to live through. Yes, it is pretty incredible how this technology has advanced to the point that it can make sci-fi like that seem possible.
It took Data years to understand emotions; we have already surpassed what the Star Trek writers thought would take centuries.
@@marwin4348 Great point!
These LLMs are useful for thinking through problems. They are like an exocortex. People who are good at putting their thoughts into words get much more out of them.
Finally, someone who didn't waste this feature by asking it to speak fast or change languages or accents. Congrats! I guess you discovered that one of the biggest changes in this version is that you can interrupt it. 😂😂❤
Glad you thought listening to my conversation was worth your time. Thanks for watching!
We all discovered this many months ago when it was one of the main features of the demo.
I saw someone ask GPT to explain quantum physics as a caveman with an Italian accent 😂😂😂😂 Maaamamiiiiia
@@adolphgracius9996 okay, that could be actually funny. 😂😂
@@adolphgracius9996 lol I need to check that out
We are definitely cooked, not immediately, but we are definitely in the pot
It's a sausage party
We are frogs being slowly boiled without realizing.
Love how you're testing these models as a true expert in a field. Great contribution
Thank you so much! Glad my 6 years of graduate school can be put towards something of value :)
Always a pleasure watching these videos
@@ParsevalMusic thank you for watching!
Finally, you have a nerd to talk to 24/7! 😅
I don't know if I should be ecstatic or terrified lol
More like 1/7, since you get 1 hour per day due to limits lmao
24/7, with an 80-messages-per-3-hours limit? I don't think so
WHAT A TIME TO BE ALIVE!!!!!!
hold on to your papers!
What a time to be AI 🤖
Indeed
Bro, all the memes we made about how lame our reality was compared to the films' predictions are aging like milk. We are getting o1 and advanced voice mode, and today Zuck demoed fucking holographic glasses. I feel like I am in the future.
This video and this channel are going to absolutely explode
It's interesting how people tend to be more polite and courteous to AI than they are to other humans
😅 what does that say about us
@@KyleKabasares_PhD We want to stay in its good graces?
Awesome conversation! Thanks for sharing it with us. You’ve nicely demoed 4o’s simulated intelligence and your PhD expertise. 👍🏾🤛🏾
thank you so much for watching!
Hey Kyle, just wanted to add some context. I'm pretty sure the original conversation you had was not actually with an earlier version of the so-called "advanced" voice mode, but rather with the original voice mode that came out, I think, over a year ago now. There was a lot of confusion around this, since OpenAI demoed the advanced voice mode at the same time as they released the regular chat version of the new GPT-4o model. A lot of people then went into the app and discovered the regular voice mode for the first time, thinking it was the thing that had been demoed, when in reality that mode had already been available for a long time. The actual advanced mode was only released to a very limited number of alpha testers, until more recently rolling out to all Plus users.
The reason the interaction with the previous version felt more stilted and choppy is that that version was actually three separate models stitched together. First, a speech-to-text model takes your speech and transcribes it to text. This text is then fed into a regular ChatGPT text model, which in turn provides a text response. Finally, a text-to-speech model turns this text response into an audio clip that is played back. This adds a significant amount of latency, as you noted. More importantly, though, there is a lot of nuance and information lost in translation. The advanced mode converts the audio to tokens which are fed directly into a multimodal model, which then outputs audio tokens directly as a response. This means that it is able to pick up on tone of voice and respond in a much more natural way. Hence why it feels more like speaking to an actual person.
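To make the contrast concrete, here is a rough sketch of that old three-stage cascade, assuming the OpenAI Python SDK (the file names, voice, and model choices are just illustrative placeholders):

```python
# Sketch of the OLD cascaded voice mode: three separate models chained
# together, with each hop adding latency and discarding vocal nuance.
# Assumes the OpenAI Python SDK; file names, voice, and models are placeholders.
from openai import OpenAI

client = OpenAI()

# 1. Speech-to-text: your audio becomes plain text (tone of voice is lost here).
with open("question.wav", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="whisper-1", file=audio_file
    )

# 2. An ordinary text model answers the transcribed text.
reply = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": transcript.text}],
)

# 3. Text-to-speech: the text answer is rendered back into audio for playback.
speech = client.audio.speech.create(
    model="tts-1", voice="alloy", input=reply.choices[0].message.content
)
speech.stream_to_file("answer.mp3")
```

The advanced mode collapses those three hops into a single audio-in, audio-out model, which is where both the latency win and the tone awareness come from.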
This is also the reason it is able to laugh and sing and make noises and so on, although they have limited those features for now over various safety and liability concerns. Hopefully they will ease up on those restrictions in the future, as the range of applications would be much greater without them.
Perhaps you were already aware of this, if so then my bad! Hehe.
Great video anyways!
Thanks for the explanation, I was not aware of that!
🇧🇷🇧🇷🇧🇷🇧🇷👏🏻 Awesome video! Thanks for sharing it! Just a quick note: it doesn’t have access to the internet, so it might not know current events. Also, you can only interact with it for about 20 minutes at a time.
Very good point!
This level of responsiveness, without a time limit and with GPT-5, will be just too fucking awesome (or dystopian). We only need to make sure not to die until then (maybe only to be killed by it).
Like I said to someone else on here, I don't know if I should be excited or terrified lol
@@KyleKabasares_PhD To me, it would be the most fascinating existential threat, hands down. I mean, another world war? Lame. A new virus? Unoriginal. An asteroid? Been there* 😂
(* Ok not me personally, some distant cousins of mine)
It’s not even 2025 yet and it’s already this good, though not great. Imagine what it will be like in 2030 😮
Dude this is mind blowing
I asked it to speak like EDI from Mass Effect and I've never been so giddy. I might have to put conversations like this with it on my to-do list.
Weird that it complied; it usually tells you no because of copyrighted stuff.
You must be a badass in your field if you got access so early. Much respect, I subscribed.
Not to downplay his accomplishments or anything, but everybody with the paid subscription is getting access to it. It's being rolled out in waves throughout this week.
@@kcchiefsproductions8687 No offense taken! It's true, I didn't do anything special to earn this, I just paid for a subscription.
I appreciate it, but I just paid for a subscription. OpenAI didn't give me any special treatment.
@@KyleKabasares_PhD Oh, ok, I thought because you got it so early that it was before the alpha rollout. Thx for clarifying. I unsubscribed.
i'm kidding 😅
@@kcchiefsproductions8687 Nah, we're talking about the fact that he had it like 5 months ago. But he clarified.
If they build this into Copilot, so you could interact with it on desktop/laptop/tablet, it would be so much more valuable for in-depth topics. You could have a transcript and the ability to do projects on the fly.
I think the future is very exciting. I don't think those interactions are far from being made into reality.
@@KyleKabasares_PhD There just don't seem to be any roadblocks looking ahead. It will be quite the horse race from here between all these tech leaders.
@@ran_domness Exciting times we live in.
It gave a perfect answer
fantastic tests
Thank you for watching!
Sol voice is best by test.
Ok, now slap a 3D high-res interactive avatar on it and we're cooking.
We are not far away
legend!
Convincing but hallucinating. The advanced voice mode makes it more deceptive because it’s easier to overlook fact-checking during a smooth discussion. In written chat, it’s simpler to verify details.
Pretty cool.
Thanks for watching!
Thanks for the video! Fascinating :)
Could you also make a video about the ChatGPT Plus version's limits? I mean the time limit on using it. Thank you.
With every prompt, the model gets reset, re-reads the chat's tokens, and only then answers your prompt.
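Put differently, the conversation state lives on the client, not in the model. A minimal sketch of what each turn looks like under that assumption (using the OpenAI Python SDK; the questions are just examples):

```python
# Sketch of why the model "resets" each turn: the chat API is stateless, so
# the client resends the whole conversation with every prompt, and the model
# re-reads all of it before answering. Assumes the OpenAI Python SDK.
from openai import OpenAI

client = OpenAI()
history = []  # the full conversation lives on the client side

def ask(prompt: str) -> str:
    history.append({"role": "user", "content": prompt})
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=history,  # every prior turn goes back over the wire
    )
    answer = response.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

print(ask("What is the Schwarzschild radius?"))
print(ask("And for a ten solar mass black hole?"))  # only coherent because turn 1 is resent
```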
I'm going to give you a scoop for your next program... (from highly trusted sources) I'm told that Charlie Patiño is not going to play for Albacete!!!
Great video 😊
Inflection's Pi AI has been this good since March. And it's free and unlimited.
Sadly, it's likely going to be discontinued after Microsoft poached all their main staff. I love Pi, though! It's excellent, just unlikely to be meaningfully updated now.
Interesting. I had a friend who worked at Inflection; I should talk to them about this.
I'm pretty sure Pi is just TTS, not voice-to-voice. It can't understand tone, emotion, or accents, and it can't output those things either.
I used Pi; it is not this good.
When you talk to it, you can now interrupt it!
And it doesn't complain or talk back. A feature I miss in humans.
Thanks
@Kyle, what are your thoughts about the fact that the model knows so much of the relevant scientific literature? This has been my observation as well with advanced optics publications. Does this mean that OpenAI has scraped all scientific journals? How did they do that, and do you think they did it legally?
Scraping is the past. New models use synthetic data to learn logic and reasoning, and they have the ability to search the internet in real time for things they don't know.
A lot of those papers I referenced are open access and freely available on the web. It’s also possible that pre-print servers, which typically have the research articles before they go into print, were part of its training data, since those are all available too. If it has the ability to reference legitimate research and not hallucinate (a common LLM problem), then it would help researchers greatly.
There definitely needs to be some discussion, though, about what kind of data these models are trained on, and whether it is copyrighted or not.
The titles and abstracts of scientific literature are typically publicly available, and there is increasingly more open-access literature. However, when I've done similar tests with earlier ChatGPT and Claude, I find that the AIs only get specific papers accurately maybe 60% of the time and tend to convincingly hallucinate some of the references they give, though they are sometimes close to real papers, as Kyle found. Also, if you get deep into a topic you know (as a researcher), they are less accurate. They seem to do better in biosciences (when tested by colleagues), and I wonder if one reason for this is that key results of NIH-funded research (and that is a lot of research) are publicly available as a condition of funding. In general, RAG-supported AI is better for science. Elicit is one such tool, and one of the major scientific indexing websites, Scopus, has an AI summary mode for exploring different fields. Nonetheless, it is amazing that a general-purpose AI can do so well on a specialist science topic.
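For anyone wondering what "RAG-supported" means in practice, here is a toy sketch (assuming the OpenAI Python SDK and numpy; the two-paper "corpus" and the question are made up): retrieve the most relevant real source first, then force the model to answer from it, so references can't be hallucinated.

```python
# Minimal RAG sketch: embed a corpus of real abstracts, retrieve the closest
# match to the question, then ask the model to answer ONLY from that source.
# Illustrative; a real tool (e.g. Elicit) would index a full paper database.
import numpy as np
from openai import OpenAI

client = OpenAI()

abstracts = {  # hypothetical mini-corpus
    "Paper A": "We measure the black hole mass in NGC 1332 using ALMA CO data...",
    "Paper B": "A survey of open cluster metallicities in the Galactic disk...",
}

def embed(texts):
    out = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in out.data])

question = "How are gas-dynamical black hole mass measurements done?"
doc_vecs = embed(list(abstracts.values()))
q_vec = embed([question])[0]
# Cosine similarity between the question and each abstract.
scores = doc_vecs @ q_vec / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_vec))
best = list(abstracts)[int(np.argmax(scores))]  # retrieve the closest abstract

answer = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": f"Answer using only this source, and cite it:\n"
                   f"[{best}] {abstracts[best]}\n\nQuestion: {question}",
    }],
)
print(answer.choices[0].message.content)
```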
This update isn't an intelligence update, so the interesting part about it isn't exactly its ability to talk about black holes.
That was my thought, too. Other than the shorter latency and the ability to interrupt the model, this demonstration, though interesting, doesn’t really show this new voice mode’s greatest strengths. Kyle’s follow-up comments about how researchers can use such conversation bots for brainstorming and the like were spot-on, though.
Yeah, but I like seeing what others use this stuff for. This level of responsiveness, without a time limit and with GPT-5, will be just too fucking awesome (or dystopian). We only need to make sure not to die until then.
I work in the medical field and it just doesn’t do anything right ☹️☹️☹️ GPT-6 or something will finally give good advice to students.
Can you give some examples of when it fails? Because even the "lowly" GPT-4o, let alone the more advanced o1, gave me more precise and correct answers to questions that caused me a lot of headaches over the years until I found satisfactory answers. Even the most authoritative textbooks were either poor at explaining or just lacking. Same with specialists and professors.
I say this because you said GPT-6, so I assume you refer to the overall LLM, not just the voice update. If you are speaking about the voice update only, then I am not sure. I have just started using it and I am not particularly excited. But the LLM that answers or responds to your typed statements is just on another level.
And if it is true that it is still in its infancy, then the medical profession does not have a bright future for biological humans.
I bet you can't provide a single case where GPT completely failed in your field. Many denialists use extreme terms like 'it can do nothing or anything,' but we all know GPT isn't that incompetent. It's not perfect, but it's far from useless, especially in fields with text-based data, like medicine.
@@하하호호-h3u Ask it anything about hospital clinical questions. Real-life scenarios. It gets them wrong 70% of the time 😴
@@prodromosregalides3402 Basic stuff, like what to do when the oxygen saturation level is normal but heart rate and respiratory rate are elevated. It practically gives answers that would probably kill the patient or make them worse 😭. I’m glad it’s bad at it. You are right that this is in its infancy, and it will look bad for the medical field in the next few decades. We are all getting replaced soon.
It doesn't have to be extra useful; it's enough for it to be better than most human doctors (actually, not even most of them).
Which I understand it _already_ is in some contexts. I think this year a model did better on various scores, for example in giving fewer potentially harmful diagnoses or therapies than humans... I don't remember exactly; probably it was a specialized model, because it's not fair to compare an AI layman to a specialist, even if the specialist is just a puny human 😂
I have ChatGPT on my phone but I don't know which one you are using 🥺
Other than being able to interrupt it, how is this functionally any better? It's definitely more fun and interesting with the improved voice capabilities, but it’s the same LLM providing the answers.
I made a follow-up video where I tested it on solving math and physics questions while taking on different accents.
@@KyleKabasares_PhD can it search the web?
@@JaceRivera-t2w Yes. It also has memory/can create memories and follows custom instructions.
It can understand emotions and actually hear you and your surroundings. The other method was text-to-speech conversion. You can have this act as a translator between two different people without pausing. You can introduce a person and have them give their name, and then it can listen to both people and distinguish between the two. This opens up group discussions and translations, and I heard someone get advice on how to tune a guitar because it listened to the pitch.
@@josephflemming7370 Thanks! I made the mistake of assuming a YT vid I watched to get a summary was complete. Not even, sheesh. I will read the actual release notes from now on :)
I'm renting out a storage room, located above a Spanish language academy!!
❤
nice
Is this usable on PC? Can we use advanced voice mode in the browser lol? My phone's too old; I ask it a question and it lags like shit LOL. Why is my old phone doing compute for OpenAI?
Your phone is not doing any compute for OpenAI. It has essentially dialed their phone number: it transmits your voice, the computer on their end hears it and answers back, but all the computation is done on their side. Your phone would do all the work if you downloaded a local model, and most people's phones don't have enough compute to run even half of a small model.
@@armadasinterceptor2955 This is incorrect, and demonstrably so. Something is definitely happening on the phone end. I've tested this now on 5 devices and it's all the same: it just completely lags out while answering a question. The phone is being used to do something; what, I have no idea.
@@armadasinterceptor2955 I've quantized and rolled out my own distillations for the last 5 years, bro. I know more about this architecture than most people who don't work in this field lol.
The easiest way to know whether it is advanced mode or not is to interrupt it and talk over it. If it is advanced, it will stop and listen.
Of course you can't compete with ChatGPT on text knowledge.
Looks like these people are scared of ChatGPT… they're gonna lose their jobs 😂
Hello there, did you access this for free? :)
My friend, this is normal voice mode, not advanced. The advanced mode, which I have, answers way quicker than yours (less delay), plus there's no “click” sound after you stop talking in the advanced mode (unlike the clicking sounds you have in this vid).
It is advanced. The regular one doesn't sound like that at all.
Did you skip to the end of the video? The second example he gives (the back half of the video) is regular voice mode, but the beginning is clearly advanced voice mode, as evidenced by the ball with the blue swirling colors in it and the fact that he can interrupt it mid-sentence.
@@paddleed6176 I had the regular one with the 5 new voices. I couldn't interrupt it, and it had a big delay and no emotions. Then I got the advanced update a few hours later with the same voices, but I could interrupt it, change its emotions, and it had way less delay.
@@MH-kj9hh Oh yeah, you're right; I checked the part of the vid where he was using regular mode lol, my bad.
You should have asked it to speak differently, with a different accent or faster; that's the main feature of the update!
Let's be real, the main feature is the low latency. The accents and the other capabilities are mostly a gimmick.
This isn't advanced voice mode
You see that blue icon? It only exists in the newly released voice mode. And he accidentally interrupted it too, so if there is another OpenAI voice model with that capability, I'm all ears.
It is; it pauses when interrupted. But I agree there is a lot more to come.
The blue colour is advanced voice.
And remember, this is the dumb version that the new voice mode accesses.
This is not advanced voice mode 😂😂😂
it literally is
Literally is
Kinda is 😅 *shrugs*
@@davidka2271 the first half is.
but black holes do not exist
but your comment does not exist
@@JohnKuhles1966 No... unlike math trying to fit a model, my comments are "observable"... I know that is a new word for you, but yes, they are "observable"... That was the conversation the LLM was trying to have before he went into feigning interest in the fact that OpenAI scraped a science journal...
@@WaitAMinutePause2-t7p blah blah murmel
science.nasa.gov/resource/first-image-of-a-black-hole/
Sure, and atoms don't exist, and the Sun at night doesn't exist
I didn't like it; it's trash 🗑️ and super censored 💔
People like you lack any type of creativity and evaluation skills... pathetic.
You might not like it, and it might indeed be super censored, but it's not trash lmao; it's the most advanced voice AI out there.
C'mon
@@armadasinterceptor2955 Exactly, he would probably like an AI gf haha
@@funkahontas Yes, that's true. But what’s the point of using such advanced technology if it can’t fully showcase its capabilities?
What’s the benefit of having a powerful AI tool like this if it comes with strict guidelines that force you to think the way OpenAI wants you to? I understand the importance of safety measures, but I believe they’ve gone too far, limiting creativity and independence
Artificial intelligence generates responses based on patterns learned from human knowledge. It doesn't possess the cognitive ability to contemplate the universe, nor does it know everything. There's still a long way to go before AI can reach a level of reasoning comparable to humans. It's crucial to remember that the answers provided by current AI are essentially edited versions of human knowledge.
Did you watch his previous videos where the AI solves problems without preexisting solutions? If that's not contemplating/reasoning, then what is? Human reasoning is also based on existing human knowledge; how is it any different?
Humans themselves also provide answers to questions that are essentially edited versions of their knowledge, that's what cognition is, processing knowledge.
You first need to define what you mean by a "cognitive ability to contemplate the universe" before you claim that something does or doesn't possess it. But then again, if you do, you'll also be providing a way to test whether an LLM possesses it, and they are good at passing tests designed to figure out whether a system is intelligent, which makes moving the goalposts harder and harder.
@@alexdoan273 ChatGPT can also generate sentences that have never been written down before, but this doesn't mean it understands or creates original thoughts like a human. Large language models are text predictors. They generate coherent responses based on statistical probabilities, not true comprehension. And the correct answer to a "problem without a preexisting solution" becomes more likely when inference time is increased. This is what makes o1 good: it thinks for longer and is iterative.
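That "more inference time helps" point can be shown with a toy best-of-n experiment (purely a conceptual sketch; `generate` and `score` are hypothetical stand-ins for sampling reasoning chains and verifying them):

```python
# Toy illustration of the "think longer" effect: sampling several candidate
# answers and keeping the best-scoring one raises the chance of a correct
# final answer, at the cost of more inference time.
import random

def generate() -> int:
    """Stand-in for one sampled chain of reasoning (hypothetical)."""
    return random.randint(1, 100)

def score(candidate: int) -> float:
    """Stand-in for a verifier/reward model; higher is better (hypothetical)."""
    return -abs(candidate - 42)  # pretend 42 is the correct answer

def best_of_n(n: int) -> int:
    candidates = [generate() for _ in range(n)]
    return max(candidates, key=score)  # keep the best-scoring sample

for n in (1, 10, 100):
    hits = sum(best_of_n(n) == 42 for _ in range(1000))
    print(f"n={n:3d}: correct {hits / 10:.1f}% of the time")
```

Accuracy climbs from about 1% at n=1 toward roughly 63% at n=100, which is the intuition behind spending more compute at inference time.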
It's not running o1 yet @@alexdoan273
Just like us 🤔