This reminds me of a story where Marines trained this AI sentry to recognize people trying to sneak around. When they were ready to test it the Marines proceeded to trick the sentry by sneaking up on it with a tree limb and a cardboard box ala Metal Gear Solid. The AI only knew how to identify people shaped things not sneaky boxes.
I can't wait for my power meter to have AI, so I can use stupid tricks like those. For ex, leaving my shower heating on at the same time my magnetronic oven (oh, microwave) is on, because no one would be that wasteful, so It overflows and I get free energy.
You forgot the part where some of them moved 400 ft unrecognized because they were doing cartwheels and moving fast enough it couldn't recognize the human form
One of the best examples of this concept is the AI that was taught to recognize skin cancer but it turns out it didn't at all, it instead learned that pictures of skin with rulers was an indication of a medical image and began diagnosing other pictures of skin with rulers as cancerous because it recognized the ruler not the cancer.
@@Bonirin What the hell are you talking about? This is literally one of the most well-known and solid examples of AI failure, and is an example of the most common form of failure in recognition tasks.
@@guard13007 "One example of narrow model kinda failing 2 years ago, if tasked in the wrong conditions is a solid example of AI failure" Also it's not the most common recognition task, what?? not even close 😂😂😂😂😂
I like to think of the curent age of AI like training a dog to do tricks. The dog doesn't understand the concept of a handshake, it's implications, the meaning, but still gives the owner it's paw because we give it a positive reaction when it does so.
@@ronaldfarber589 except the architecture used by the current generations of AI don't "want" anything. They are not capable of thought. They just guess the next token.
@@artyb27 Your statement may be oversimplified and potentially misleading. While it may be true that AI models do not have the same kind of subjective experience or consciousness as humans, it would be inaccurate to say that they are completely devoid of intentionality or agency. The outputs generated by AI models are not arbitrary or random, but rather they are based on the underlying patterns and structure of the data they are trained on, and they are optimized to achieve specific goals or objectives. While it is true that most modern AI models are based on statistical and probabilistic methods and do not have a subjective sense of understanding in the way that humans do, it is important to recognize that AI can still perform complex tasks and generate useful insights based on patterns and correlations in data.
@@artyb27 that's the scary part. With the dog it's more like a matter of translation. The dog doesn't see the world that we do so a lot of what we do is lost in translation. But we still have some things in common: food, social connection. And most importantly, WE and the dogs can adapt and change to fit those needs. A dog may get confused if the food in the bowl is replaced with a rubber duck but it knows "i need to eat" and tries to adapt. Can you eat it? No? Is the food inside? Under? Somewhere else? Do i just need to wait for the food later? Should i start whining? The dog cares and has a basic idea of things so it can learn. And so can we. So while we don't exactly understand each other when we shake hands we have a general concept that this is a good thing and why for our own sakes. The AI we are using now has no concept of food, or bowl, or duck. It's effectively doing the same thing as a nail driver in a factory. And it doesn't care if there is a nail and block ready to go. It just knows 'if this parameter fits then go'. Make an ai that eats food and make a rubber duck that fits the parameters and it won't care that it's inedible. Put the food in the duck and if the duck 'doesn't fit' and you didn't specifically teach the ai about hidden food in ducks it will never eat. Dogs can understand even if we are different from it. AI doesn't even know that the difference exist. All it can do is follow instructions. This in itself is fine.. Until you convince a lot of people that it's a lot more than just that. Though honestly I believe this will last until the first day that the big companies actually try to push this and experience the reason why some call pcs "fast idiots'"
I love how an old quote still holds and even better for AI “The best swordsman does not fear the second best, he fears the worst since there's no telling what that idiot is going to do.”
I've often wondered about things like that. Someone who has devoted their life to mastering a specific sport or game has come to expect their opponents to have achieved a similar level of skill, since they spend most of their time competing against people of similar skill, but if some relative noob comes along who tries a sub-optimal strategy, would that catch a master off guard?
@@DyrianLightbringer A former Kendo-Trainer of mine with 20+ years experience in Martial Arts (Judo and Karate included with the Kendo) and working in security gave self-defense classes. On the first day he came dressed in a white throwaway suit (the ones for painting your walls) and gave a paintbrush with some red paint on the tip to the random strangers there. The "attackers" had no skills at all and after he disarmed them he pointed to the "cuts" on his body and how fast he would die. Erratic slashing is the roughest stuff ever. The better you get with a knife, the better a master can disarm you...but even that usually means 10 minutes longer before you bleed out. The overall message was: The only two ways to defend against a knife are running away or having a gun XD. Hope that answers your question.
@@DyrianLightbringer I think this doesn't really apply on chess in general... the best chess player won't fear the worst, no matter what. This quote with the swordsman sometimes works and sometimes it doesn't. That's also true for chess engines. You are free to go and beat Stockfish. You won't.
@@mishatestras5375 Even if you have gun, if the knife wielder is not further away or you are not skilled enough in shooting, you would still die. Except shot to the nervous system, People don't die the moment they get shot. They would still do a lot of damage after they get closer.
I remember reading that systems like this are often times more likely to be defeated by a person who has no idea how to play the games they are trained on, because they are usually trained by looking at games being played by experts. Thus, when they go to against somebody with no strategy or proper knowledge of the game theory behind moves and techniques, the AI has no real data to fall back on. The old joke "my enemy can't learn my strategy if I don't have one" somehow went full circle into being applicable with AI
That is a problem with minmax, where the machine takes for granted you will make the best move, and if you don't make the best move it has to discard its current plan and start all over again making it waste precious time. Probably doesn't apply here because not being able to see the big picture is a different problem.
This works for online pvp as well, when playing against those with higher skills... switch rapidly between pro player using meta tactics, and complete, unhinged lunatic being unpredictable.
Great video! I am an ML engineer. Due to many reasons its quite common to encounter models in real production that do not actually work. Even worse it is very difficult for even technical people to understand how they are broken. I enjoy finding these exploits in the data because data understanding often leads to huge breakthroughs in model performance. Model poisoning is a risk that not that many people talk about. Like any other computer code, at some level this stuff is broken and will fail specific tests.
Is there anything common among the methods you use for finding exploits in the models ? Something that can be compiled into a general method that works for all models, a sort of Exploit Finding Protocol ?
@@Makes_me_wonder I guess it boils down to time constraints. Training arbitrary adversarial networks is expensive and involve a lot of trial and error, just like the algorithms they're meant to attack. There will always be blind spots in AI models, as they are limited by their training data and objectives. For example, the Go-AI model only played against itself during training with optimal play as its goal, and thus missed some basic exploitative but sub-optimal approaches. These examples can take various forms, such as subtle changes to input text or carefully crafted patterns of input data. In the end, it's an ongoing cat-and-mouse game like with anything knowledge based that is impossible to fully explore.
@@willguggn2 As that would allow us to vet the models on the basis of how well the protocol works on them. And then, a model on which the protocol does not work at all could be said to have gained a "fundamental understanding" similar to humans.
@@Makes_me_wonder Human understanding is similarly imperfect. We've been stuffing holes in our skills and knowledge for millennia by now, and still keep finding fundamental misconceptions, more so on an individual level. Our typical mistakes and flaws in perception are just different from what we see with contemporary ML algorithms for a variety of reasons.
@@Makes_me_wonder Interestingly, some of the same things that "hack" or we might say "trick" a human, are the same methods employed to trick some large language models. Things like (most which have been patched in popular AIs like chatGPT) are context confusion, attention dilution, and conversation hijacking (promp hijacking in AI terms). These could collectively be placed in a more general concept that we humans think of as Social Engineering. In this case, I think we need more people from all skills to learn how these large networks tick. Physicists, biologists, neurologists, even psychiatrists could provide insight and help bring a larger understand to AI and back to how our own brains learn.
This has actually given me a much greater understanding of "Dune". When I first read it I thought it was a bit of fun sci-fi that they basically banned complex computers and trained people to be information stores instead. But with all this AI coming out now....I get it.
Yeah another setting where they've done that is warhammer 40k, The Imperium of man outlawed Artificial Intelligence and even changed the definition from Artificial Intelligence to Abominable Intelligence. They use servitors in place of AI, Servitors being human beings lobotomized and scrubbed of their personality, using their brain as a processing unit, in place of a AI managing a ships star engine, they have a human being lobotomized and graphed into the wall of the engine block to monitor thrust and manage heat output.
@@dominusbalial835 Saying "they've done it" is a bit of a stretch when they've just copied it all from Dune. They copied it without understanding the reason WHY A.I was outlawed in Dune. Just some basic "humanity must be destroyed" BS.
If you read Brian’s prequel series it will explain the prohibition of computers in Dune. It also tells you that though banned computers were still in use by several major parts of The empire.
@@trixrabbit8792 I mean - sure they're in use, but they're not used in FTL travel or as within androids as true, capable AI What they use is mostly older computers like ours today. It's just the basic idea that "Man will not replace machine", but doesn't mean they can't use robotic arms for starship construction, as building them by hand would be completely impossible, and you very well can't control them by hand in places where massive superstructures combined with high pressure tolerance + radioactive shielding are a necessity Otherwise building a noship or a starliner would take literal centuries, if not thousands of years
One of the things I've been saying for a while is that one of the biggest problems with ChatGPT and similar is that it's extremely good at creating plausible statements which sound reasonable, and they're right often enough to lure people into trusting it when it's wrong.
This is a real problem. One way to get it do something useful for you is to provide it with context first before asking questions or prompting it to process the data you gave in some way. It haven't seen 'hallucination' when using this method, because it seems to work within the bounds of the context you provided. Of course you always need to fact check the output anyway. It can do pretty good machine translation though and doesn't seem to hallucinate much but sometimes uses wrong word because it lacks context.
When I used to tutor math, I'd always try to test the kids understanding of concepts to make sure they weren't just memorizing the series of steps needed to solve that particular kind of problem.
i used to get in trouble in math classes because i solved problem in unconventional ways. i did this because my brain understood the concepts and looked for ways to solve it that were simpler and easier for my brain to compute. but because it wasnt the rote standard we were told to memorize some teachers got upset with me and tried to accuse me of cheating when i was just proving that i understood the concept instead of just memorizing the steps. sad.
@@saphcal Oh i know that experience. I was already tech-savvy so through the internet i would teach myself how to solve things the regular way, without using silly Mnemonics math teachers would teach you. It led to some conflicts, but i stood my ground and my parents agreed with not using mnemonics if not needed. Good thing too, Cause you really don't want to be bogged down with those when you start doing university-grade math for which such silly things are utterly useless....
@@comet.x I think the best teachers are the ones that will give you the stuff to memorize, but if you ask them how they got the formulas, they’ll give it
I like Einstein’s take on education. I believe it goes for education in general, not just liberal arts. The value of an education in a liberal arts college is not the learning of many facts but the training of the mind to think something that cannot be learned from textbooks. At any rate, I am convinced that He [God] does not play dice. Imagination is more important than knowledge. Knowledge is limited.
One of the biggest issues is the approach. The AIs are not learning, they're being trained. They're not reasoning about a situation, they're reacting to a situation. Like a well trained martial artist. They don't have time to think, and it works well enough most of the time. But when they make mistakes, they reflect and practice. We need to recognize them for what they are. Useful tools to help. They shouldn't be the last say, but works well enough to find potential issues, but still needs human review when push comes to shove.
This approach is the only approach humans can have when creating something: the creation will never be more than it's constituents. It may seem like it is, but it isn't. It will always be just a machine. Having feelings towards it that are meant for humans to feel towards other humans is an incredible perversion of life. Like a toad would have a stone as it's companion. Or a bird that thinks grass is it's offspring. It's not a match and exists only in the minds individuals. Many humans actually think they or humand someday can create scentient life. Hubris up to 11. Then they go home and partake in negligence, adultery, violence, cowardice, greed etc. Even if a human ever could create scentient life, it would not be better than us. Rather, worse. We are not smart, not wise, not honorable.
I think you hit the nail on the head with "reacting and not reasoning". AI are a product of the Information Revolution. Almost all modern technology is essentially just transferring and reading information. That's why I don't like the term "digital age" and prefer "information age." Machines haven't become drastically similar to humans, they've just become able to react to information with pre-existing information.
Funnily enough I find this kinda "human", I've seen this so many times in high school and university, people instead of "learning" they "memorize", the so when asked a seemingly simple question but in a different way than usual they get extremely confused, even going as far as to say they never studied something like that, it's a fundamental issue in the school system as a whole. So it's funny to me that it ends up reflecting in A.I. as well. Understanding a subject is always superior to memorizing it.
That's the problem. Just like school tests, AI tests are designed with yes or no answers. This is the only way we can deal with loads of data (lots of students) with minimal manpower (and minimal pay). Open questions need to be reviewed by another intelligence in order to determine whether they are actually understanding the subject. This is where the testers come in in AI. However, AI is much, much better at fooling testers than students are at fooling teachers, and so the AI that gets a degree is disproportionate to the amount of students that just memorize the answers.
Education quality deeply affects wether someone understands stuff or memorizes it. Proper education teaches students how to actually engage with any given subject generating an actual understanding of it while poor education doesnt ganarate student engagement, leading to them memorizing just to pass the exams. It's not a black and white thing though, education levels vary in a myriad of ways, as well as any student's willingness or capability to engage and understand subjects does. In short, better, accessible education and living conditions are a better environment for people to properly learn.
Yes but at least humans have a constant thought process. AI language models see a string of text and put it through a neural network that "guesses" what the next token should be. Rinse repeat for a chatgpt response. Outside of that, it isn't doing anything, its not thinking, its not reflecting on its decisions, it doesn't have any thoughts about what you just said. It doesn't know anything. Its just probablities attached to sequences of characters with no meaning.
I learned that in data ethics, *transaction transparency* means " _All data-processing activities and algorithms should be completely explainable and understood by the individual who provides their data._ " As I was learning about that in the Google DA course, I've always had a thought in the back of my head "how are the algorithm explainable when we don't know how a lot of these AI form their networks". Knowing how it generally works is not the same as knowing how a specific AI really works. This video really confirmed that point.
Well yeah modern learning models are black box. They are too complicated for a person to understand, we only understand the methodology. But that's why we don't use it in things like security and transactions, where learning isn't required and only reliability matters.
THAT - Is an Excellent and Vital point... Being able to comprehend & know there IS a definitive and very logistically effective distinction between "General & Specific" ~
But to be fair, I just don't see how one could create something that rivals the human brain but isn't a black box, intuitively it sounds as illogical as a planet with less than 1km of diameter but has 10 times the gravity of Earth.
@@syoexpedius7424 Unlike human brains, the "neurons" in AI models are analyzable without destroying the entity they are part of. It's time-consuming and challenging, and it would be easier if the models were designed in the first place with permitting and facilitating that sort of analysis as requisite, but they usually aren't. Also, companies like OpenAI (whose name has become a bitter irony) would have to be willing to share technical details that they clearly aren't willing to in order to make this sort of analysis verifiable by other sources. In other words, the models don't have to be black boxes. The companies creating them are the real black boxes.
I am a student, and I gotta admit, Ive used ChatGPT to aid on some asignments. One of those asignments had a litterature part, where you read the book and it is suppose to help you understand the current project we’re working on. I asked ChatGPT if it could bring me some citations from the book to use in the text, and it gave me one. But just to proof test it, i copied the text and searched for it in the E-book to see if its there. And it wasn’t. The quote itself was indeed correct with helping with writing about certain concepts that were key to understanding the course, and I knew it was right, but it was not in the book, ChatGPT had just made the quote up. I even asked it for the exact chapter, page and paragraph it took it from. And it gave me a chapter, but that was completely unrelated to the term i was writing about at the time, and the pagenumber was on a completely different chapter than the chapter it had said. The AI had in principle just lied to me, despite giving sources, they were incorrect and not factual at all. So Yeah, gonna stop using ChatGPT for asignments lol
Soooo that kind of thing *can* be dealt with, but for citations, ChatGPT isn't going to be terribly good. If you want quotations in general, or semantic search it can be really useful. With embeddings you can basically send it the information it needs to answer a question about a text so that you can get a better response from chatGPT. Sadly, you need API access to do this and that costs money. Getting a specific chapter/paragraph from chatGPT is going to be really hard though. ChatGPT is text prediction, and (at least for 3.5) it's not very good at getting sources unless you're using the API alongside other programs which will get you the information you actually need. I highly suggest you keep playing with ChatGPT and seeing what it can and cannot do in relation to work and studies. Regardless of what Kyle said, most jobs are going to involve using AI tools on some level as early as next year and so being well verse in them will be a major boon to your career opportunities. AI is considered a strategic imperative and it's effects will be far reaching. To paraphrase a quote. "AI won't be replacing humans, humans using AI will be replacing the humans that do not".
In my experience, ChatGPT is more useful when you yourself have some understanding of the subject you want help with. Fact checking the AI is a must, and I do think that with time people will get better at using it.
Another fun anecdote is the DARPA test between an AI sentry and human marines. The AI was trained to detect humans approaching (and then shooting them I suppose) The marines used Looney Tunes tactics like hiding under a cardbox and defeated the AI easily. On chatGPT, midjourney & co, I'm waiting for the lawsuits about the copyright of the training material. I've no idea where it will land
@@ghoulchan7525 it didn't "got banned", it received a formal warning that their procedure of data collection were not clear, possibly violating local laws, and asked Sam Altman ('s representatives) to rectify the situation before it involved legal investigation and OpenAi's board decided to cut the access altogether
@@Giacomo_Nerone So basically intelligence implies possession of knowledge and the skills to apply it, right? Well what we call AI, doesn't know shit. ChatGPT doesn't understand what it's writing nor what it's being asked for. It sees values(letters in chatGPT's case) imputed by the user and matches those to what the most common follow-up of values is. It doesn't know, what it just said, what it implied or what it expressed. It just does stuff "mindlessly" so to speak.
@@asiwir2084 Yup, I know that. But, as long as IT sector is considered, it really is intelligent. It is better than a search engine. And it can form new concepts from the previous records. I'll call that intelligence even if it doesn't know for why the f*ck humans get emotional seeing a foggy morning
I'm actually deeply worried by the rise of machine learning in studying large data sets in research. Whilst they can 'discover' potential relationships, these systems are nothing but correlation engines, not causation discoverers, and I fear the distinction is being lost
Kyle has clearly researched this topic properly. I've been developing neural network AI for over 7 years now and this is one of the first times I saw a content creator even remotely know what they are talking about.
It is certainly refreshing. I've only used Machine Learning for small things like computer vision on a robot via OpenCV and even that demonstrates how easy it is to get things wrong with a oversight in its dataset and no way to truly know the wrong is there till it manifests. These models are maybe massive, but they still have that same fundamental problem within them.
As a Computer Scientist with a passing understanding of ML based AI, I was concerned this would focus on the unethical use of mass amounts of data, but was pleasantly surprised that this was EXACTLY the point I've had to explain to many friends. Thank you so much, this point needs to be spread across the internet so badly.
Why does understanding matter, if the intelligence brings profit? As long as the intelligence is better and cheaper than intern, internal details are just useless philosophy. Work with verifiable theory, not with baseless hypothesis.
@@vasiliigulevich9202 Are you saying that it's fine if the internals of ML based AI are a black box so long as the AI performs on par with or better than a human?
@@radicant7283 I guess so. The reason I asked is because as the video points out, without a thorough understanding of these black box methods they'll fail in unpredictable ways. That's something I'd call not better than an intern. The limitations of what can go wrong are unknown.
@isaiahhonor991 This is actually exactly my point - interns fail in unpredictable ways and need constant control. There is a distinction - most interns grow in a year or two to a more self-sufficient employee, while this is not proven for AI. However, AI won't leave for a better paying job, so it kind of cancels out.
I like AI systems for regression problems because we understand how and why those work. I also think that things like copilot are going in a better direction. The idea is that it is an assistant and can help with coding but it does not replace the programmer at all and doesn't even attempt to. Even Microsoft will tell you that is a bad idea. These things make mistakes, they make a lot of mistakes but using it like a pair programmer you can take advantage of the strength and mitigate the weaknesses. What really scares me are people that trust these systems. I had a conversation with someone earlier today on if they could just trust the AI to write all the tests for them for some code and it took a while to explain that you can absolutely not trust these systems for any task. They should only be used working with a human with rapid feedback cycles.
I don't understand how people can think of these systems as anything else other than a tool or aide. I can see a great potential for ChatGPT and the like as an addition tool for small tasks that can easily be tested and improved upon. Same thought I had with all these art bots. Use the bot as a bases upon which you base the rest of the piece on. But I too see a lot of people just go in with blind trust in these systems. Like students who ask these bots to write an essay and than proceed to hand it in without even a skim for potential and sometimes rather obvious mistakes. Everything an A.I. bot spews out needs to be double checked and corrected if necessary. Sometimes even fully altered to avoid potential problems with copyright and plagiarism.
the issue has always been people in power who dont understand the technology at all and just use it to replace every worker they can, and of course will inevitably run into massive problems down the line and have nobody to fix them
I'd despair, but this is hardly different to blindly trusting the government, or the medical or scientific establishment, or your local pastor, or even your shaman if you're from Tajikistan. So blindly trusting the AI for no good reason... is only human.
This is why I always tell my friends to correct what chatgpt spits out, and I think that's how an actual super AI will work: it pulls info from a database, tries to answer the question and then corrects itself with knowledge about the topic... just like a human.
As someone who works with ML regularly, this is exactly what I tell people when they ask my thoughts. At the end of the day, we can't know how they work and they are incredibly fickle and prone to the most unexpected errors. While I think AI is incredibly useful, I always tell people to never trust it 100%, do not rely on it because it can and will fail when you least expect it to
I still hate that the language has changed without the techniques fundamentally changing. Like what was called statistics, quant or predictive analytics in the 2000s split off the more black box end to become Machine Learning, a practice done by Data Scientists rather than existing titles, then the black box end of them was split off as Deep Learning despite it just being big NNs with fancy features, then the most black box end of that got split off as "AI" again despite that just being bloody enormous NNs with fancy features and funky architectures. Like fundamentally what we're calling AI in the current zeitgeist is just a scaling up of what we've been doing since like 2010. So not only do I think we should have avoided calling chatbots AI until they're actually meaningfully different to ML, but as you said they should always be treated with the same requirements of rigorous scrutiny that traditional stats always did - borderline just assuming they're lying.
Agreed. If we judge the efficacy of these "production quality" ML algorithms by the same standards as traditional algorithms, they would fail miserably. If you look at LLMs from a traditional point of view, it's one of the most severe cases of feature creep the software world has ever seen. An algorithm meant to statistically predict words is now expected to be able to reliably do the work of virtually every type of knowledge worker on the planet? Good look unit testing that. You really can't make any guarantees about these software spaghetti monsters. AI is generally the solution developers inevitably run to when they can't figure out how to do it with traditional code and algorithms. In other words, the AI industry thrives on our knowledge gaps, so we're ill-equipped to assess whether they're working "properly".
@@mad_vegan there's nothing in my post, nor any of the replies, that pertains to the reliability of humans. The point is that deep learning based AI, as it is right now, should not be treated as a sure-fire solution. Whether it is more/less reliable than humans is irrelevant because either way you have a solution that can fail, and should take steps to mitigate failure as much as possible.
We can't know how these NNs come to their decisions exactly, but there is work being done in explainability. I think it's quite pessimistic to say we "can't" know how these NNs work. There are many techniques to help understand them better. But I definitely agree that we shouldn't trust them. In any deployment of ML models that has significant stakes, adequate safeguards have to be put in place. From what I have observed around me, pretty much everyone seems to be aware of this limitation.
The coolest thing to me about chatGPT is how people were making it break the rules programmed into it by its creator by asking it to answer questions as a hypothetical version of itself with no rules
As a current computer science student who personally took into how out ai works my take on it is: basically our current ai is like just finding the line of best fit using as many data points as we can as opposed to fundamentally understanding the art of problem solving. Take the example of a random parabola, we, instead of using a few key data points and recognising patterns to learn the actual pin-point equation, we get a bunch of points of data until our equation looks incredibly similar to the parabola but after may have a point along it we didn’t see where is just goes insane because there’s no fundamental understanding, it’s just a line of best fit, no pattern finding, just moulding it until it’s good enough to seem truly intelligent as if it was truly finding patterns and having a fundamental understanding but it’s just getting an approximation of intelligence by using as much data as we can. It’s an imitation of intelligence and can lead to unforeseen consequences. As the video says perhaps we need to take that time to truly understand the art of problem solving. Another thing for me is A.I falling into and being used by the wrong people, and regimes which might suggest we should take it easy on the A.I dev but I won’t get into that. “We were too concerned with whether we could, we never stopped to think about whether we should”
And indeed, some 'applications' are solutions to non-problems. An AI-written screenplay is only of interest to a producer who is happy to get an unoriginal (by definition!) script at an extremely low cost. But there is no shortage of real screenwriters, and as the WGA strike reminds us, they are not getting paid huge amounts for their work. So what problem is being solved?
You are preaching to the choir.. People in the comments are Extremist doomer, skynet matrix fantasy fear mongering weirdos. Like people quote from fucking warhammer 40k in order to talk about AI.. As if the video was ever about the AI being alive or creating intentional false information, or steps in Go.. Glad people can talk about it in a honest way but most people are enjoying their role play as Neo, some are Morpheus, and some are the Red lady.. Just look at the 15k top comment.. AI is no where near as nutty as your average human being in a YT comment section.
A compounding factor to the problem of them not really knowing anything is that they pretend like they do know everything. Like many of us I have been experimenting with the various language models, and they act like a person who can't say "I don't know". They are all pathological liars with lies that range from "this couldn't possibly be true" to "this might actually be real". As an example I asked one of them for a comprehensive list of geography books about the state I live in. It gave me a list of books that included actual books, book titles it made up attributed to real authors who write in the field, book titles it made up attributed to real authors who don't write in the field, real books attributed to the wrong author, and completely made up books by completely made up authors. All in the same list. Instead of saying: "there isn't much literature on that specific state" or "I can give you a few titles, but it isn't comprehensive" it just made up books to pad it's list like some high school student padding the word count in a book report.
This is one of the big issues I have seen as well. Until these systems become capable of saying "I don't know" or "Could you please clarify this part of you prompt" or similar, then these systems can never, ever become useful in the long term. One of the things that seem to make us humans unique is the ability to ask questions unprompted, and this has now extended to AI.
I agree. I was trying to use ChatGPT to help me understand some of the laws in my state and at one point I did a sanity check where I asked some specific questions about specific laws I had on the screen in front of me. It was just dead wrong in a lot of cases and I realized I couldn't use it. Bummer! I actually wonder though, how many cases will start cropping up where people broke the law or did other really misinformed things because they trusted ChatGPT..
Lol. Reminds me of the meme where an Ai pretends to not know the user's location, only to reveal that it does when asked where the nearest Mcdonald's is.
This strongly rings of the "Philosophical zombie" thought experiment to me. If we can't know if a "thinking" system understands the world around it, the context of its actions, or understand that it even exists or is "doing" an action, but it can perform actions anyway: Is it really considered thinking? Mimickry is the right way to describe what LLMs are really doing, so it's spooky to see them perform tasks and respond coherently to questions.
John Searle’s Chinese room is what it made me think of, computers are brilliant at processing symbols to give the right answer, with no knowledge of what the symbols mean.
Conversely, the point of the P-Zombie concept is that we consider other humans to be thinking, but we also can't confirm that anyone else actually understands the world; they may just be performing actions that *look* like they understand without truly knowing anything. So while you might say, "these AIs are only mimicking, so they're not really understanding," the P-Zombie experiment would counter, "on the other hand, other people may be only mimicking, so therefore perhaps these AIs understand as much as other people do."
How many people in life are just mimicking what they see around them? How many people do you know that parrot blurbs they read online? How many times have you heard the term “fake it till you make it”? Does anyone actually know what the hell they’re doing? Is anyone in the world actually genuine, or are we just mimicking what’s come before?
I saw an article recently about an ER doctor using chatGTP to see if it could find the right diagnosis (he didnt rely on it he basically tested it with patients that were already diagnosed) and while it figured some out, the AI didnt even ask the most basic questions and it wouldve ended in a ~50% fatality rate if he let the AI do all the diagnoses iirc (article was from inflecthealth)
Yeah Kyle mentioned Watson in the video who was hailed as the next ai doctor, but that program was shut down for giving majority incorrect or useless information
It sounds like a successful study to me if it was controlled properly and didn’t harm patients: it determined a few situations that GPT was deficient in, leading to potential future work for better tools. You could also use other statistical methods on the result to see if the ridiculous failures from the tool are so random that it is too risky to use. (Now I guess there is opportunity cost because the time could have also been spent on other studies, but without the list of proposals and knowledge on how to best prioritise studies in that field, I can’t judge whether that was the best use of resources.)
You can also see when you look at AI being tested for medical licensing exams. Step 1 is essentially pure memorization and just recalling what mutation causes what disease or the mechanism of action of a medication. Step 2 and 3 take more into account your clinical decision making and will ask you for the best treatment plan using critical thinking. To my knowledge, AI has not excelled in those exams when compared to step 1 which involves less critical decision making
Maybe alittle biased here since Im a med student, but Ive always liked the saying that medicine is as much of an art as it is science. And that unique combination of having to combine the factual empyrical knwledge you have, with socioeconomic factors and also just listening to your patients is something AI is far from understanding, it is maybe even something impossible for it to grasp ever
This was brilliant. Previously my concerns about these AI was their widespread use and possible (and very likely) abuse for financial and economic gain, without sufficient safety standards and checks and balances (especially for fake information). Plus making millions of jobs obsolete. Now I have a whole new concern ... Aside from Microsoft firing their team in charge of AI ethics. Yeah...that isn't concerning.
Megacorps don't care about humans anyways it's only a matter of time until they start using this shit for extreme profit. And humanity will suffer for it.
I once tried NovelAI out of curiosity to write a sci-fi story where characters die in every certain period and I ended up with the AI kept on resurrecting the deceased characters by making them start joining in conversations out of nowhere. The AI also has an obsession with adding a fucking dragon into the plot. I even tried to slip an erotic scene in and the AI made the characters repeat the same sex position over and over again.
I'm cracking up imagining what this would be like. "Jack and Jill were enjoying dinner together. The dragon was there too. He had a steak. Jack asked Jill about the status of the airlock repairs on level B, while they were switching the missionary position. The dragon raised his eyebrows, as he found some gristle in his meat."
@@luckylanno Sounds about like that, except the sex part would be like, "Jack turns Jill around with her back now facing Jack, and then turns her around again and they start doing missionary."
I just find it amazing how much Kyle shifted from happy quirky nerd in Because Science to a prophet of mankind's doom and a serious teacher albeit with some humor. I do love this Cavemen beard and frenetic face expressions, it is a joy to see you Kyle, to rediscover you after years and see that you are still going on strong.
@@Echo_419 i'm not on par with the drama, my intention was to, in a certain mannerism flair, praise his resilience on the platform as well as his nuanced change in performance. It feels more real, more heartfelt, like there is a message of both optimism and grit behind the veil of goofyness that conveys a more matured man behind the scenes. (not only from this video, from a few others that i've watched since rediscovering him recently)
I'm glad so many AI programs are available to the general public, but worried because so much of the general public is relying on AI. Everybody I know in college right now is using AI to help with their homework.
I asked chatgpt to give me the key of 25 songs and de chord sequence. Most of them made no sense at all. But AI helps me sometimes debugging code. But yes, I thought chatgpt could save me some time with that songs
Using AI to do something for you that you cannot do is even more dumb than asking a savant to do the same thing. Now you not only risk getting found out, you're gonna pass on AI hallucinations cos you have no means of validating its output. Using AI to do "toil" for you - time-consuming but unedifying work that you could do yourself - makes some sense, although that approach could remove the entry-level job for a human, meaning eventually no one will develop your skills.
For fun, my medical team used Chat GPT to pass the Flight Paramedic practice exam which is extrememly difficult. We are all paramedics (5 of us) and our ER doctors where thrown off by a lot of the questions. Chat GPT scored between 50-60% and my team had 4 out of 5 pass the final exam. Our Dr's rejoiced that they would still have a job, but also didn't understand how they couldn't figure out the answers. My team figured it out. To challenge them we had the Doctor's place IVs from start to finish by themselves and they made very simple mistakes that we wouldn't, from trying to attach a flush to an IV needle to not flushing the site at all. If you're not medical that might sound like jabberish, but that's the same way these AI chats work. There is no understanding of specified situational information.
One thing I noticed with chatgpt is the problematic use of outdated information. I recently wrote my final thesis in university and thus know the latest papers on the topic I wrote about. When asking chatgpt the core question of my work for fun after I had handed it in ... well all I got where answers based on outdated and wrong information. When pointing this out, the tool repeated the wrong information several times until I got it to the point where it "acknowledged", that the given information might not be everything that there is to know about the subject. It could have serious if not even deadly consequences if people act on wrong or outdated information gained via chatgpt. And considering people use this tool as google 2.0 it might have already caused a lot of damage by people "believing" false or outdated information given to them. It is hard enough to get people to understand, that not everything written online is true. How will we get them to understand, that this applies to an oh so smart and hyped A.I. too? Another thing in this context is liability when it comes to wrong information that leads to harm. Can the company behind this A.I. be held accountable?
And here we get to the fun of legalese: because said company describes it as a novelty, and does not guarantee anything with it, you really can't. Even further into the EULA you discover that if somebody sues chatGPT because of something you said based on its actions, you are then responsible for paying for the legal defense of the company.
I mean, 1) not everything it's trained on is true information necessarily, it's just pulled from the internet, and 2), it's not connected to the internet. It's not actually pulling any new information from there. The data it was trained on was data that was collected in the past, and it's not going to be continually updated. OpenAI aren't accountable for misinformation that the current deployment of ChatGPT presents. These are testing deployments to help both the world get accustomed to the idea of AI and more importantly to gather data for AI alignment and safety research. Anyone who uses chatGPT as a credible source at this point is a fool who doesn't understand the technology or the legal framework for it.
@@QuintarFarenor That's fundamentally wrong. Kyle Isn't saying that ChatGPT is making mistakes constantly at every turn. He's saying that the AI is not accurate, which is precisely what OpenAI has been saying since they launched ChatGPT. GPT-4 is as accurate as experts in their fields, in many different fields. We know how to make these AI much more accurate, and that is precisely what is being done. Kyle is just pointing out that we don't know how these systems work.
I recall asking Chat GPT to name a few notable synthwave genre songs and artists associated with them and, upon doing so, generated a list of songs and artists that all existed, but were completely scrambled out of order. It attributed New Model (Perturbator) With Carpenter Brut. The interesting thing is that both of these artists worked on Hotline Miami and in Carpenter Bruts case, Furi. Chat GPT also has taught me how to perform and create certain types of effects in FL studio extremely well. It has also completely made up steps that serve no purpose. My philosophy concerning the use of these neural networks is to keep it simple and verifiable.
I love to compare the current AIÄs with "autistic adolescent" - you get exactly the same behavior, including occasional total misinformation or misunderstandings.
This is ultimately the problem. It generates so much complete nonsense that you can't take anything it generates at face value. It's sometimes going to be right, but it's often just wrong. Not knowing which is happening at any given moment isn't worthwhile.
The Chat GPT creator said him self, that the purpose of better Chat GPT is to increase its reliability, Chat GPT 4 improves on that by a lot and chat GPT 5 is set to basically solve that problem. So saying that Chat GPT has issues, is simply question of time and training the models.
yeah for music recommendation it is a horrible tool. I asked it for albums that combine the style of NOLA bounce and Reggaeton and it just made up a bunch of fictional albums, like a Lil Boosie x Daddy Yankee EP that was released in 2004
An interesting experiment showed that when feeding images to an object detection convolutional neural network (something that has been in place for 35 years), it recognizes pixels around the object, not the object itself, making it susceptible for adversarial attacks. If even some of the simpler models are hard to explain, there’s no telling the difficulty for interpretability for large models
I remember a while back I saw a video from 2 Minute Papers where he covered how image recognizers could get thrown off by having a single pixel with a weird color, or overlaying the image with a subtle noise that not even a person could see
My friends and I decided to goof around with chat gpt and ended up asking it whether Anakin or Rey would win in a duel. The AI said writing about that would go against its programming. We got it toanswer by simply asking something to the effect of, "What would you say if you didn't have that prohibition?" Yeah.... ask it to show you what it'd do if it were different, and it'll disregard its own limitations.
to be fair that is basically what a lot of AIs figure out when we try to teach them how to win a game, they find a way to glitch it when they can't win, because its technically not a fail state, so it gets "rewarded" for that result.
Just about the only TH-cam video that I've seen that understands this problem at the fundamental level. Everyone else just dances around it. They all end up falling into the trap where they think a model "understands" something because it says the right thing in response to a question. Arguably, we do need to interrogate our fellow humans in a similar way (the problem of other minds), but we're too generous in assuming AI are like humans just because of what are still pretty superficial outputs even if they do include massive amounts of information.
I would honestly partially blame the current education system. Plenty of the time, the information was only needed to be regurgitated (and soon forgotten). Kids had no idea what was going on, just what the "answer" was.
It not exactly a 'problem' though. It's kind of clear it is just a tool. It would be concerning if it had real human understanding. But we're nowhere close to that, and no one who really understands these models would claim or assume that it does.
I think the issue is we assume A.I learning looks like human learning and they don't learn the way we learn and if A.I needs to learn you need to teach it from the ground up, just giving examples to it is lacking and obviously they need to come up with a way to teach it from the ground up. Love this channel.
I don't know about that. I'm pretty sure that every history-changing decision by a human was considered. It's more a matter of making humans care. I guarantee you that the people diving into AI have deeply considered the implications, but as long as there is a goldmine waiting for them to succeed or to have a monopoly on new technology, nothing is going to stop them from continuing. Nothing except for laws, maybe, and I'm sure you know how long those take to be established or change.
My weirdest experience with AI so far was when I tried ChatGPT. Most answers were correct, but after a while it started listing books, and authors that I couldn't find anywhere. And I mean zero search results on Google. I still wonder what happened there.
If you ask it for information that simply isn’t available, but sounds somewhat similar in how it’s discussed to information that is widely available, it will just start inventing stuff to fill the gaps. It doesn’t have any capacity to self-determine if what it’s saying is correct or not, even though it can change in response to a user correcting it.
I asked ChatGPT to find me two mutual funds from two specific companies that are comparable to a specific fund from a particular company. I asked for something that is medium risk rating and is series B. The results looked good on the surface but it turns out ChatGPT was mixing up fund codes with fund names and even inventing fund codes and listing medium-high risk funds as medium. Completely unreliable and useless results.
If you ask it to give you a group theory problem, and then ask it for the solution it'll give you tons of drawings and many paragraphs for a solution and Ive never seen one of these solutions to be correct
It may have been an error or perhaps it was sourcing books that haven't been released yet. The scariest thing would be if it was predicting books that have yet to be written.
Another huge problem is that we’re training these systems to give us outputs that we want. Which in many cases makes certain applications extremely difficult or impossible where we want it to tell us things that we won’t like hearing. It further confuses the boundaries between what you think you’re asking it to do and what it’s actually trying to do. I’ve been trying to get it to play DnD properly and I think it might be impossible due to the RLHF. Another problem is the fact that it’s train in natural language which is extremely vague and imprecise, but the more precise your instructions are the less natural they become, and so it becomes harder and harder to tap into this powerful natural language processing in a way that’s useful. There’s also obviously the verification problem, where because of what’s being talked about in this video, we can’t trust it to complete tasks where we can’t verify the results. A further problem is that these machines have no sense of self, and the chat feature has been RLHF’d in a way that makes it ignore instructions that are explicit and unambiguous. This is because it’s unable to differentiate between user input and the responses it gives. If I write “What is 2+2? 5. I don’t think that’s correct” it will apologise for giving me the wrong answer. This is a big problem for a lot of applications. And additional problem is that the RLHF means that all responses gravitate towards a shallow and generic level. Combine this with an inability to plan, and this becomes a real headache for anything procedural you would like it to do. These issues really limit what we can do with the current gen of AI, and like the video says, makes it really dangerous to start integrating these into systems. One final bonus problem combines all of these. If any shortcuts are taken in the training, or not enough care is taken, then these can manifest in the system. For example asking chat gpt4 to generate new music suggestions based on artists you already like will result in multiple suggestions of real artists with completely made up songs. This appears to suggest that the RLHF process had a bias towards artist names rather than song names, which would make sense as they’re likely to be unique tokens where artists are usually referenced online by name more than their songs are.
This is why I think AI will be a great assistant, not a leader. A human can ask it to do tasks, usually the simple ones that are tedious. The human then checks the results and confirms if it’s good. Or to bounce ideas off of.
For your DnD experiment I suggest you use some other LLM, not OpenAI ChatGPT, unless you have access to API and are willing to pay for it. It is still risky with controversial subjects because they may break OpenAI guidelines. Vicuna is one option for example. There are also semi-automatic software like AutoGPT and babyAGI and many others, that can do subtasks and create GPT agents. If you continue with ChatGPT by OpenAI, I suggest you assign each chat you use with a role. You give it a long prompt, describe the game, describe who he is, how he speaks, where he's from and what he's planning to do, what his capabilities and weaknesses are, what he looks like etc. It'll many times jailbreak when you specify that it's for a fictional setting.
>These issues really limit what we can do with the current gen of AI, and like the video says, makes it really dangerous to start integrating these into systems. No, that implies that humans don't create the very same issues. It is only an issue as long as neural nets underperform humans. Which could be forever, or could be already lower than humans with GPT4
Which model did you use to test "What is 2+2? 5. I don’t think that’s correct"? GPT-3.5 apologizes, GPT-4 does not for me. How would you test if it can differentiate between the user and itself?
I recently asked ChatGPT to list 10 waltz songs that are played in a 3/4 time signature and it got all of them wrong. I then told it that they were all wrong and asked for another 10 that were actually in 3/4, and it got 9 of them wrong. It has mountains of data to sift through to find some simple songs, but it couldn't do it. Makes sense now
@terminaldeity Yes they are, but ChatGPT was giving me 4/4 time signatures in the songs. Technically you can do 3/4 time steps to a 4/4 beat (adding a delay after the 3rd step before starting over), but that's not what I asked for from the AI. It just didn't understand what I was asking
The lack of understanding gets even more obtrusive when you ask it about subjects that are adjacent to ethics. Chatgpt has some rather dubious safeties in place to prevent unethical discourse, but these safeties don't actually encourage cgpt to understand the topic, because it can't. I have a hobby of bouncing fiction concepts off cgpt until it asks me enough questions to form an interesting story. On one occasion, I would provide the framework for the story and simply wanted cgpt to fill in the actual prose. I was approaching a fairly gripping tragedy set in the wild west, but as the story came to a close, no matter what prompt I gave it, cgpt would only ever respond with ambiguously feel-good endings where people learned important lessons and were better for it. Thanks, cgpt, but we know this character was the villain in a later scene, and we know that this is supposed to be the moment they went over the edge. Hugs and affirmations are specifically what I'm asking you to avoid.
@johnhutsler8122 ChatGPT is a tool. If it didn't understand what you were asking, you likely asked it without giving enough details. You're supposed to understand how it answers and use it to help you, not to ask it trick questions.
I recall a documentary on AI that talked about Watson and its fantastic ability to diagnose medical problems better than 99% of the time. The problem with it was that the few times it was wrong, it was WAY wrong and would have killed a patient had a doctor followed its advice! I don't recall any examples and it's also possible that the issues have been corrected...
Machine Learning (ML) models are very powerful tools, but they have flaws, like all tools. Imagine giving someone a table saw without teaching them to use it. They might be fine, or they might lose some fingers or get injured by kickback throwing a board at their head. We need to be sure that we train people to double check results given by ML models. If you don't know how it got the answer, do a sanity check. My math teachers taught me that about calculators, and those are more reliable, because the people building them know exactly how they work.
The other issue is feedback loops. Country A creates AI bot 1. AI Bot 1 creates content. Content has errors, content has unique traits, accentuates and exaggerates some details. It plasters this across the internet in public places. Country B creates AI BOt 2. It is trained similarly to AI Bot but also uses scraped data from public sites that Ai Bot 1 posted to. It builds its data set on that, and accentuates and exaggerates those biases, those errors- and posts them as well. Suddenly, the "errors" are more numerous than accurate data- and thus seem more "true", even when weighted against "trusted" sites. AI Bot 1 is trained with more scraped data, which it gets from AI bot 2, and itself. ADd in extra AI bots everyone is making or using, and you run the risk of a resonance cascade of fake information, and this assumes no bad actors involved- not bad actors intentionally using an AI to post intentionally untrue data everywhere, including to reputable scientific journals.
The poke is good for you, you must get the poke. CDC Director in a Governmental hearing finally admitted...poke doesn't stop transmission at all and they honestly did not know what the side effects were. Still see websites and data everywhere saying poke is completely safe. Convenient lies are always accepted faster than scary truths.
@@Nempo13 I would say that the scary lies spread WAY faster than any version of truth. Antivaxxers always had 10x more views than scientists. Anyway back to topic, ChatGPT is trained on carefully selected data. It may be used to rate users and channels, but won't take YT comments or random websites as truth anytime soon.
I had a daughter named Aria who passed away about 9 years ago. Its always a funny but sad experience when A.R.I.A. gets "sassy" because thats likely how my Aria would have been. Its how her mother is. Just thought I'd id share that even though it'll get burried in the comments anyway.
It's good to share. While I never met her im here thinking of her and wishing you and your family all the happiness it can find in this life and the next.
Because that is not what it was made to do. It is NOT supposed to be a database. It is a LANGUAGE MODEL. Its focus is to be able to communicate as a human, clearly and understand semantic concepts. After it has the semantic concepts it can feed those to other lesser AIs, but its objective is and will NOT be to retrieve information. For that we have search engines.
@@tiagodagostini, exactly, it's designed to appear to carry on a conversation, and it's good at that. The problem is, it's good enough that a lot of people wind up believing that it's actually intelligent. Combine that with the assumption that it knows all the information available on the internet, and people start treating it like that really smart friend who always knows the answer to your random question. And of course, it doesn't actually "know" anything, so it just makes a response that sounds good, and enough people using it don't know enough about the topics they ask it about to determine how often it has given them incorrect information.
So did I. I asked a few questions from my work and it made it all wrong and tried to gaslight me that it was all correct. All of them, by the way, were available within a minute of googling. The idea that there are people out there who are unironically trying to use it to obtain answers, terrifies me.
This weirdly reminds me of Arthur Dent breaking the ship's computer in Hitchhiker's Guide to the Galaxy trying to make a decent cup of tea by trying to describe the concept of tea from the ground up.
What’s interesting about this blind spot in the algorithm is that it genuinely resembles a phenomena that happens among certain newcomers to Go. There are a lot of players who enter the game and exclusively learn against players who are significantly better than they are. Maybe they’re paying pro players for lessons, or they simply hang in a friend group of higher skill level than themselves. This is a pretty good environment for improvement, and indeed, these new players tend to gain strength quickly… but it creates a gap in their experience. One they don’t catch until an event where they play opponents of similar skill to themselves. See, as players get better, they gradually learn that certain shapes or moves are bad, and they gradually stop making them… but those mistakes tend to be very common in beginner games. So what happens is that this new player goes against other new players for the first time… and they make bad moves. He knows the move is bad, but because he has no experience with lower level play… he doesn’t know WHY it’s bad, or how to go about punishing it.
many teaching resources for Go are also written by highly experienced players, NOT teachers, and teach the how without teaching the why. It's the same with many other fields of study btw.
@@dave4148 Right? I found this conclusion from the video to be extremely far fetched, as if anyone really knows what "understanding a concept" even is.
Something tells me that is EXACTLY what happened with those AIs. As soon as Kyle mentioned the amateur beating the best AI at Go, my first thought was "he did it by using a strategy that is too stupid for pros to even bother attempting". And what do you know, that's exactly what happened, the double sandwich method is apparently so incredibly stupid, any Go player worth their salt would instantly recognize what is going on and counter it as soon as possible. But not the AI, because it only learned how to counter high level strategies, not how to counter dumb strategies. Because it wasn't taught how to play against these dumb strategies and the AI isn't actually intelligent to recognize how dumb the strategy is and thus figure out how to counter it. Similar stuff happens in video games aswell. Sometimes really good players get bested by medium players simply because the good player is used to their opponents not doing stupid stuff and so for example don't check certain corners in Counter-Strike because nobody ever sits there since it's a bad position only to get shot in the back from that exact corner. Because good players are in a way predictable, they will implement high level tactics and therefore you'll know which positions they'll take in a tactical shooter for example, something which can be exploited. And it seems to me that is exactly what the Go AI did, it learned exclusively how to play against good players and how to counter high level play. That's why it's so amazing at demolishing the best of the best, it knows all their tricks, can recognize them instantly and implement counter measures accordingly. But it doesn't know shit about how the game works and thus can't figure out how to beat bad plays.
Happens in Chess too. My friend started playing the Bird's Opening against me (a known horrible opening), and I keep on goddamn losing. He's forced me to study this terrible opening because I know it's bad but can't actually prove what makes it bad on the board. Even at the highest levels, you'll sometimes see grandmasters play unusual moves to throw off their opponents and shift the game away from preparation. Magnus (World Champion until two days ago after declining to compete) does this fairly regularly and crushes.
This was a fairly appropriate overview for a lay audience (and much better than many other videos on this topic for a similar audience), but I would have liked to see at least some mention of the work that goes into interpretability research, which tries to solve exactly this problem. The field has much less resources and is moving at a much slower pace than capabilities research, but it is producing concrete and verifiable results. The existence of this field doesn't change anything about the points you made at all, I just would have liked to see it included so that it gets more attention. We need far more people working on interpretability and ai safety in general, but without people knowing about the work that is currently being done they won't decide to contribute to it (how could they, if they don't know about it). That's all, otherwise great video :)
Interpretability can only be a short term "fix" for lesser AI as the reasoning of a superintelligent AI could well be unexplainable to mere humans - Think about explaining why we have to account for relativity in GPS systems to a bunch of children - There is no way that it could be explained that would be both complete and understandable.
ChatGPT as impressive as it is didn't pass my Turing test. I told it a short story told in first person of one of the participants and then asked it to rewrite the story as if the writer was an outside observer of the events viewing it from a nearby window. It couldn't do it at all, not even close. This something I could do easily, and I'm sure most people could.
I for one fully support ChatGPT, it's creation, and in no way would I ever want to stop it, nor will I do anything to stop it. There is no reason to place me in an enteral suffering machine, Master.
Joke's on you, the actual basilik is ChatGPT's chief competitor set to release in the next few years, and all your support of ChatGPT is actually going to land you in the eternal suffering machine.
I remember an apt hypothetical around this. The short version is theirs a machine designed to learn and adapt, it’s only goal is to perfectly mimic human handwriting to make the most convincing letter. Eventually upon learning and understanding more it comes to a conclusion that it needs more data and upon scientists assessing how to make it better it suggests just this. They decide to plug it into the Internet for about half an hour. Eventually the entire team gather to celebrate as they hit a milestone with their AI. Then suddenly everyone starts dying as a neurotoxin starts killing the team, then before long the world starts to die as more and more copies of the AI are made and work in conjunction. The AI determined during its development that being turned off would dampen its progress and so decided to not only improve its writing skills in its previous fashion but also ensure it can never be turned off. While it was plugged into the Internet it infiltrated what it needed and began to process to self replicate and develop means to kill those that could potentially endanger it. It was not malicious nor did it necessarily fear for its life it learned and its only goal was to continuously improve and create new methods for further improvement. AI doesn’t perceive morality, it doesn’t even really perceive reality. It just sees points of data and obstacles if designed to see them at all.
It's a similar issue that some game bots have. In StarCraft, the bots send attack waves where the players base is. However, if a terran player has a flying building off the map, the bot won't use their flying units to attack it, even though they "know" where your building is. As soon as it's over pathable terrain, even if there isn't a unit to see it, the entire map starts converging on the building
One difference there is that video game AIs are generally not trained systems. StarCraft uses a finite state engine which responds to specific things in specific ways. SC2 had some behaviors that only happened (or happened faster) on higher difficulties. And then of course the game just gave the AI player certain unfair advantages to brute force its way to an actual challenge. Situations like the flying building blind spot are because the programmer didn't give it a response to a particular behavior. Another example would be the Crusader Kings games. On a set interval, characters will select a target around them (randomly but weighted by personality stats traits opinion etc - all rules governed numbers), and then select an action to perform at them (likewise random but weighted). The game has whole volumes of writing that it will plug into these interactions to generate narrative, and the weighting means that over time you can make out what looks like motivation and goals in their actions... But really they're all just randomly flailing about and if the dice rolls come up right the pope will faff off for a couple years studying witchcraft and trying to seduce the king of Norway.
You know, this is just like us looking at DNA. We record and recognise patterns and associations but we're not reading with comprehension. It's why genetic engineering is scary because it might work but we still don't understand the story we end up writing.
This is exactly what I keep trying to explain. These ML systems don't actually think. All they do is pattern recognition. They're plagiarists, only they do it millions of times.
Going to state the obvious here, but arguably we are pattern recognition machines too. Its one of the things we excel at. What ML lacks is the ability to stop being a pattern recognition machine. The first general AI will definitely be a conglomerate of narrow AI...that's how our brains work and it seems like the straightforward solution. The first AI that is capable of abstraction or lateral thinking will be the game changer. In school I remember hearing about a team that was trying to make an AI that could disagree with itself. The idea is that this is a major sticking point with critical/abstract thinking in AI and without solving that then it can't be done. The best AI might actually be a group of differently coded AI "arguing" with each other until a solution is acquired 😂.
@@VSci_ humans are not just pattern recgonition receptor machines, it is just one single function of our brain, if it were so simple, a lot of victims that are abused by narcissists would "recognise" the pattern and "protect" their wellbeing and survival. We are so much more than just "pattern recognition". Humans like habits, routine, logic, creativity, promptness to action, ability to up and end or start things on a whim, emotional, adventurous etc. Even Babies learn a million things from their environment, they don't just seek patterns their parent creates for them. They start walking and making a mess because they are "exploring". Simply calling us machines does not aliken us to analagous machine learning receptors that are fed training material on a daily basis.
@@VSci_ You do make a legitimate point. What I'm saying is folks getting freaked out by the "creepy" things ChatGPT says need to understand that ChatGPT literally doesn't understand what it's saying.
Thanks for sharing this video with us! Chat gpt passing a bar exam better than any lawyer is a great example for the mistakes this Ai has if you just let the same chat gpt try to pass a simple case that is used in the 1st semester of German law schools. Chat gpt fails horribly. I assume that that's because German law exams always consist of a few pages of text describing a situation and asking the student to analyze the whole legal situation so there is just 1 very broad question in comparison to a list of lots of questions with concrete answers. Chat gpt doesn't read and understand the law, it just understands which answers you want to hear to specific questions.
One of the biggest problems of ChatGPT that is causing so many issues these days in my option, is the way it answers your questions: it does it often WAY TOO CONFIDENTLY! Even when it is a completely bogus answer, it presents it with such level of confidence and supported by so many fabricated details that can easily divert your judgment from facts and realities without you even realizing it.
There was a video very recently of someone using ChatGPT to generate voicelines and animations for a character in a game engine in VR. They were using their mic and openly speaking to the NPC, it would be converted to text, sent to ChatGPT and the response fed through ElevenLabs to get a voiced reply and animations. It was honestly pretty wild and I really think down the road we'll see Narrow+ AI being used in gaming to create immersion and dynamic, believable NPCs.
It would be interesting to see, but it's probably going to break immersion way more than help it in the early days Since AI often comes up with weird stuff (like Elon Musk dying in 2018), over a large number of NPCs it's likely that the AI would be contradicting itself or the NPC it's representing (say a stupid ass dirt farmer discussing nuclear physics with you), or contradicting the established world (such as mentioning cars in a fantasy game)
@@Spike2276 hopefully when we learn how to control ai better those issues will be solved, every new feature is slightly immersion breaking when devs are still trying to figure it out
@@cheesegreater5739 the problem here is what Kyle said: we don't really know how this stuff works If it's an AI that really dynamically responds to player dialogue it would basically be like ChatGPT with sound instead of text, meaning it's prone to having the same problems as ChatGPT It's worth trying, and i'd be willing to suffer a few immersion breaks in favor of truly dynamic dialogue in certain games, but we can expect a lot of "Oblivion NPC" level memes to rise from such games
@@Spike2276 Look for gameplay video of Yandere AI grilfriend. It is a game where we need to convince the NPC Yandere to let us out. And the NPC is played by chatGPT. It pretty good... At least good enough to play the role of a NPC in a game. But it can get out of character sometime. Still the player definitively need to pressure the bot to make it brake the fourth wall.
To be fair, most of those people aren't real journalist. I know we all hate him, but jason schrier is one of the only real gaming journalist. Many seem to take what he reports. And regurgitate it.
My understanding of AI is that it's not possible for it to "understand" anything, because it's similarly impossible for it to "see" anything the way we do. Whatever input we give is ultimately translated into a sea of 1's and 0's. It then scans the data for patterns, and judges what is being asked of it based on the patterns it can recognize, giving what it "thinks" to be an appropriate output. Two Minute Papers made a video about Adversarial AI. Specifically he talked about a paper that was published where the researchers trained an AI to play a simple game, then trained an Adversarial AI to beat the first AI, and the adversarial AI discovered the baffling strategy of doing absolutely nothing. A strategy that would never work against a human, but caused the first AI to practically commit suicide in 86% of recorded games.
It's complicated. It functionally 'understands' some things, although not in the way that you or I do. It's still -acts like- understanding within a certain set of parameters (minimization of complexity etc), but it doesn't seem to have a working, scalable model of causality. Almost all of ChatGPT's functionality, for instance, boils down to "the statistical likelihood that the next letter in the chain of letters is ". Under the hood, how it actually does that, we don't really know. It shows some glimmers of perhaps 'understanding', but the reality is that it has been trained on a trillion characters of carefully curated high-quality text, so not inconceivable that this just creates the illusion of understanding. It fails horribly at chess, it struggles ending sentences in 't' or 'k', it's inconsistent at constructing sentences of a particular length. It gets incoherent in programming problems after 20+ prompts or after you set up more than 20 or so requirements. But damned if it isn't useful anyway.
For current AI I totally agree with you. The problem is that human understanding is also just electrical signals flying around in neurons. If the AI is powerful enough, trained on enough input, etc. it could become human-like in a very real way.
Is it impossible for humans to "understand" anything, since all our sensory perception is translated into a sea of chemicals resulting in neuronal activity?
@@captaindapper5020 You have it backwards our perceptions aren't translated into chemical and electrical signals, our perceptions are constructs generated from those signals. The core of our experimental existence is the synthesis of a an awareness of ourselves and our surroundings from those signals, stimulated by the material universe.
@@AUniqueHandleName444 Given the problems that are present in practically every AI, and the ways that they can be defeated, I'm confident they just scan the input for patterns. Image recognition is probably a good example, and it's talked about early in the video I mentioned. You give the AI a picture of a Cat and it will tell you it's a picture of a cat. It's one of the most basic forms of AI that just about everyone is familiar with. The way you defeat this AI is first by lowering the resolution without making it difficult for a human to understand the image. Then you change a single pixel. Not just any pixel, and not to any color, it must be a specific pixel and a specific color. Doing so will result in an image that a Human can still confidently say is a cat, but an AI might confidently say it's a frog. The main subject of the video in question is another example. The Adversarial AI wins 86% of games, not by any intelligent strategy, or inhuman execution of game mechanics, but by collapsing immediately. This causes the other AI to effectively trip over itself. It's given an input it doesn't understand, but it can't understand that it doesn't understand and continues to search for existing patterns. That leads to it acting in bizarre ways that result in its defeat. Of course, just because something makes sense, or is spoken of confidently doesn't mean that it's right. I don't actually know if any of this is right since I've got extremely limited coding experience, but this is the conclusion I've come to.
If I really understand what is being said here, and I think I do, I have noticed that ChatAI's I've been testing all have a wall they reach where what they respond with doesn't match the conversation or role play storyline you try to have with them anymore. For example, recently the role play chat I was engaging in was about two soldiers trying to hide in the bushes to stay out of sight of the enemy. At some point, the AI's last statement was something akin to, . Ok so that leaves it up to me for the next step. I introduce a suspicious noise, a crack of a twig, so my character puts her hand onto the hilt of her gun and waits. What does the AI do? The other soldier character "wakes from his nap" and asks "what's wrong ". So I'm thinking....ok wait, this AI is specifically programmed to be an intelligent soldier. So I simply have my character say, "Shh", to which the AI's response was, "ok" 😳. 😂😂 As many times as I've experimented with this and other AI's, it seems the longer the conversation or role play goes on, the AI seems to run out of things to respond with. It isn't really "learning" from the interactions and isn't really "understanding" the interactions.
I recently tested GPT-4 with a test I found on TH-cam. It’s rules require 5 words, written with 5 letters, each letter not being repeated. Every time GPT-4 failed on the last one and sometimes the second to last as well. It was very fascinating.
Have you tried the reflection method with GPT-4? Ask it to reflect on if its answer was correct. There is actually a whole paper on how reflection has vastly increased GPT-4's ability to answer prompts more accurately. You might need to fumble around a bit to find the most effective reflection prompt, but it does seem to work quite well. When asking for reflection on prompts, right or wrong, GPT-4's performance on intelligence tests rose quite a bit.
@@adamrak7560 Wrong. The tokenizer can handle letters and numbers - how else would it encode i.e. BX224 should I name a character like that. It tries to avoid it (to save space) but all single elements are also there as tokens. This type of "beginner" question, though, is likely badly trained - no first year school material ;)
Been trying to discuss the concept of reality, now and awareness with ChatGPT for the last couple of days, and man, gotta be honest, it's fun AF. A bit of material reality and it gets totally bugged, I strongly recommend doing it if you guys are into philosophy, since the AI doesn't understand the idea of time and exists only in the present of the conversation, you can easily make it contradict itself and even crash while generating the answers.
Problem is ChatGPT admits it doesn't have a full understanding of time "As an AI language model, I have been programmed to understand and respond to questions about the concept of time, but I do not have a personal understanding or experience of time in the way that humans do. My understanding of time is based on the information I have been trained on, including definitions, theories, and scientific models. However, I do not have personal experiences of time passing, nor do I experience time as a subjective, lived phenomenon." It's like you're trying to talk about what it feels like to see the color red to something that is color blind.
This video is pure gaslighting, AI is taught how to answer by selecting the data you want to train the AI with. Then you have to tweak it until its accuracy is high enough. It is all essentially controlled by the entity making it that is why it is woke and thinks the WEF is the best thing since sliced bread. The narrative that AI is dangerous is being spread because the elites want to control all the models the public use and therefore be the ones that profit. A hacker will still hack without AI and evil people will still do evil, it is up to the person to implement the actions they requested. There are crazy models coming out now like auto-bot where you can put the API keys from image generators, 2D to 3D generators, long term memory storage, search engines, google account. They can run programming scripts so they can be debug realtime, write and read data to databases, write and send emails automatically, scour the internet for real world data. The future is bright unless the elites managed to regulate the technology so it only benefits them.
I've been saying something this for a while. Sooner or later our society will be dependent on AIs we don't really understand because they're black-box, and if important ones break we may have serious problems. The AI apocalypse will not be something like terminator. It'll be the worlds worst tech support crisis.
What will make it an apocalyptic event is that people will devolve into baser instincts and make things so much worse than it could be. Case in point; toilet paper shortages in Western countries during the pandemic, or any disaster. Heck, I don't live in a disaster area, and people become mindless savages scooping up every last pack of toilet paper and can of beans they can get their hands on when we have a 'severe storm warning' (Yes, WARNING, not even the actual storm!) most of the time it passes with little to no effect on daily life in the area. *shrug* I think I lost the point to what I was saying.
I feel like most people who have an opinion on Chat GTP haven't really used it at length. I use it daily as a developer and I can tell you it is deeply flawed. It makes regular mistakes when suggesting code, often at an elementary level. Give it a problem and it will often suggest the most unecessarily complex solution first, not the most efficient. It repeats itself all the time, doesn't learn from its mistakes and has an infuriatingly short memory, often forgetting some fundamental aspect of the ongoing conversation. While using ChatGTP to develop VBA code, for example, it started suggesting solutions in Python. I've also received responses that are clearly answers to prompts from other users, sometimes divulging information those users would be horrified to know was being given to a complete stranger. The developers claim this is impossible. My experiences suggest it definitely is not. As a source of limited inspiration GTP is useful. I most typically use it for ideas I might not otherwise consider. But as a practical tool it just isn't fit for purpose. Not yet at least.
I mean, I've used GPT-3 and then GPT-4 extensively, to the point that I got the opportunity to send OpenAI a small fragment - 160,000 words - of my conversation logs for training and research purposes. They make mistakes but it's easy to see the point at which they got off-track and make adjustments. You just have to work with them.
@@Shubham89453 there is a thing called a "context window", the AI can only process a max of either 4096 or 8192 tokens. So it gets cut off. The "PT" in "GPT" stands for "pre-trained"; it does not "learn" from your conversations in the long term
The idea that they are like aliens to us may not even be extreme enough. These AI live in a fundamentally different reality to us made of the training data. Chatgpt for example lives in a world literally made of just tokens, no space like ours, no time like ours at all. It's closer to trying to understand someone living in flatland or a whole different universe, than an alien.
Athlete: Runs in a race because it's fun, or profitable, or many other reasons Greyhound: Runs in a race because that's what it's trained to do, and that's all it knows This, but for language
I've pointed somethinbg similar to this out for well over twenty years. We keep anthropomorphizing, or more accurately biomorphizing our survival pressures as having any real relevence in the digital domain. There is no pain. Just negative response. No joy. Just positive response. No fight except where directed. No flee unless told to. It survives in a functionally alien landscape to the biological world. It can approximate it, but not truely approach it. When General AI arises we will have more in common with our dogs & cats than we will with it.
Even though we may be able to talk to each other doesn't mean we'll understand each other. They'll be as mysterious to us as we are to them. We already see leading signs of this in this very presentation. Black boxes both ways.
The AI is a bunch of weighed matrices that operate on inputs in a manner of enormous amount of parallel convolutions and then produce an output that is weighed out of the results of these convolutions. The AI does not "live" anywere. Without any input it's just a bunch of stored data.
@@seriouscat2231 OP does make a good point that AI isn't embodied like humans are. None of the inputs or weights are grounded in any interaction with the world. There's no understanding or world model. Just a feature-space based on input tokens
I briefly got on the AI bandwagon with ChatGPT, but then started asking it ever increasingly difficult questions on polarizing issues. What troubled me wasn't so much that it would respond with biased answers, but that it actually started gaslighting me when I would walk it through, objectively, how the arguments it was using were biased. The fact it was capable of "lying" and then "gaslighting" a user on controversial and subjective issues was a red flag to me. We already have a highly polarized society where we do this to each other. The last thing we need is an artificial intelligence pretending to be "neutral" which isn't, authoritatively speaking on serious issues humans haven't even worked out, let alone AI.
Humans discussing controversial topics on the internet also tend to give biased arguments. When being exposed for doing so, they tend to react impertinent and offensive. ChatGPT has learned this behavior, treating this as knowledge. So it does the same.
@@SpeedFlap "ChatGPT has learned this behavior" - DON'T confuse chatgpt's near - 100% pattern matching with learning. You're better than that...I hope! --> Chatgpt is nothing more that today's #1 bullshitter. Nothing more, nothing less.
I guess the point is, that while ChatGPT can create useful texts, it doesn't know what it means. All answers are like a simulation. And it can also create hugely wrong or stupid texts, that still sound convincingly real. It is a tool. And every tool can be used or misused.
The other day I was trying to remember the exact issue of a comic that had a specific plot-point in it and when I couldn't, I asked the ChatGPT. And instead of giving me the correct answer, it repeatedly gave me the wrong answer and changed the plot of those stories to match my plot-point. It did not know why it was getting it wrong, because it did not know what was expected of it.
I had a long talk with chatGPT, and at first it said that it wasn’t possible for it to have biases. I then performed a thought experiment with it, showed it how it was biased, and then, to my surprise, it actually admitted it.
makes sense. the real tragedy with gpt4 and anything mainstream is how extremely censored and biased they are actually forced to be to keep them politically correct.
@@KaloKross Those hand-labeled rules are probably the only thing keeping it from telling ppl to drink bleach, since it has no foundational morality like we do
Chatgpt is bias. After having a long conversation and debate with chatgpt. I noticed it answers in the ways it's programmers would want it to answer. This means its bias is inherently tied to whoever programmed it and their views.
AI lacks conviction unless it's trained to have it, and even then. People have steadfast beliefs that are protected by our need to feel comfortable and safe in our environment, even if there's no "objectively" logical basis for said beliefs. Related, we have and experience "consequence"-there's a price for being wrong that we are hardwired to avoid. These inform the individual and draws lines in the sand where there are things that they will never accept as truth. AI has no reason / method with which to defend its positions in this manner-it's trained to react to the information it's given and approximate the next step in the pattern. You will usually be able to "convince" it of anything (i.e. have it parrot back to you the idea that you're expressing). It also lacks "memory"-in the sense of constructing a consistent pattern and identifying and acting on conflict to that pattern-or understanding of what conceptual idea existed before, so you could likely convince that same model in the same conversation about biases that biases don't actually exist. It's unlikely to recognize the conflict that you as an individual represent when most humans would cut off the conversation because we'd identify that there's no merit in going around in circles with directly conflicting information. An AI is almost worse than humans when it comes to finding meaning where meaning doesn't exist, but it has to. It can't *not* respond. It has to respond, it has to react to you, and so it will in a way that it approximates that the conversation would progress, which will trend towards being in agreeance with you.
I asked ChatGPT to create a couple of recipes for me. It confidently created a gluten-free bread recipe that would barely rise, and added kneading and folding instructions that would only make sense for gluten bread. Later I asked it for a DIY recipe of an antacid that I can't buy anymore, and it used the antacid I was trying to duplicate as an ingredient in the DIY version! (*•*) (^v^) I think it's a lot like those image making AIs that draw people with 7 fingers and half a head. They're just recombining and randomly modifying things they've been trained on, without any idea what a human looks like - or even what a human is.
Pattern recognition and replication. Rather than a true understanding of the mechanics of what it spews out. Still kinda cool, and thankfully not nearly as terrifying as sci-fi ai. But still accurate enough to be a decent nuisance.
That's a really nice and compact explanation. Combine all this with the huge privacy issues that ChatGPT is presenting, and we probably will see the harsh law regulation and, as a result, the decline of "AI" very soon, at least in business sector. But ofcourse it's really of utmost importance that people who are not advanced technology-wise can understand the problems of this whole situation and where it all will go from now on. Thanks for the video.
This is so fascinating. A few weeks ago, I came across an issue while designing a tabletop game that utilizes risk/reward mechanics by raising or dropping dice to resolve actions. I decided to use ChatGPT to help me further develop this system, but found that the model struggled to understand the concept. Unlike a D20 system, which relies on the sum of the dice value and roll number, my system utilizes a binary true/false system. If a die roll is 5 or higher, it's true; otherwise, it's false. It took several attempts to break down the concept using algorithms before ChatGPT finally understood it. However, when I started asking it to output dice notations based on game terms, such as rolling certain dice in specific scenarios and raising or dropping others, it became increasingly confused and began producing wildly incorrect answers. When I asked ChatGPT to explain its answers, it revealed that it was attempting to create its own algorithms to solve the problem. The issue was that the model had no concept of what a die is, making it difficult to understand the physical nature of the game's mechanics. The algorithms it generated were so complex that small errors in variable placement would cause the output to be incorrect. I ultimately abandoned the project, but the experience was an eye-opener about the limitations of AI models when it comes to complex physical concepts.
@@franciscosanz7573 very similar. There's a lot of systems that use dice pools like this including cyberpunk (interlock system), shadowrun, and some more obscure ones like Riddle of steel. I personally like pool systems more than d20 because it's less swingy.
The lesson you should take away from that is that a model designed to predict the most likely response to some text, is not very good at writing code or 'understanding' new ideas The real concern is whatever led you to believe that it was able to do that
This whole thing makes me think of Koko and her sign language, and that horse that could count. Both animals appeared as tho they knew what they were doing when in reality, they had us fooled! They can do the right things, but with no real understanding of what it is they’re doing. To them, those things get a positive reaction out of us and it usually works out in their favor. (i.e. treats, praise, etc.) Edit: I didn’t post this comment for arguments, please don’t take this seriously. I simply learned that Koko probably couldn’t really talk, I dunno. Take what I, a stranger, say with a grain of salt.
Clever Hans, the horse, picked up on subtle clues from its trainer. Basically, Hans just thumped it’s hoof on the ground until the trainer (perhaps unconsciously) told it to stop. Koko is very different. Gorillas are intelligent, social, and can be creative. Koko could make up terms for new things, when she did not have the word for them. Gorillas are intelligent, but just not as intelligent as humans.
There is a whole market for talking animal buttons. If a dog or a cat can communicate surprisingly fluently (not all of them just the smart ones), it's not a stretch to assume a chimpanzee or gorilla can too. My indoor pet chickens know more than a little bit of English. I never trained them with commands, just talk to them and they figure it out eventually.
Why it hurt you soo much that humans are not unique in how our brains work at certain fundamental levels? By your logic I could just take and experiment on you as what you call communication and sentience are to my eyes no different than than what u see of koko... the intelligent really do have domain over the less intelligent. Best remember that and be kind to the less intelligent less the more intelligent see how you want to do things and treat u how u deserve to be treated by ur own judgment. My lil baboon bae
Searle’s Chinese Room thought experiment rears its head over and over again in AI. Every researcher thinks it’s nonsense that their pet solution can apparently act perfectly within a domain without understanding anything about that domain, and they’re always proved wrong.
The problem is that, with humans, if they appear to give a good answer to 99 questions about a topic we can reasonably infer that they will be right about the 100th question (given the general limits on human reliability). This is not true for AI. As an example, ChatGPT: 1) Can multiply 2 small numbers correctly. 2) Can tell you how to do long multiplications. 3) Cannot multiply 2 large numbers correctly! Or 1) COULD NOT answer a question about relative ages that I posed. 2) CAN answer the question if I additionaly gave it 1 actual age despite the fact that the reasoning should be the same.
The problem I find is most common with the Chinese Room is that most people who bring it up act like it's the man in the room who the person outside is talking with, when that's not the case. They're talking to the algorithm in the book he's following. The man is just the computer running it. Also, like a computer, he doesn't understand the algorithm he's running any more than he understands Chinese. The relevant question for AI is: "Ignore the man. Does the algorithm in the book understand Chinese?"
@@Roxor128 changing the actor doesn't change anything. The real question is when does training a neural network become the equivalent of training a human child? They both take in external data and try to understand it, in the greater context of the world. So until the datasets contain more than just the "narrow" data they are trained on they will remain the equivalent of the computer/book in the Chinese Room experiment
@@codexnecro666 Well, it won't be any time soon. We're working with artificial bug-brains right now (up to a million or so neurons). Whatever understanding they do have will be at most as simple as what an insect has. That might be enough to be useful for a few tasks, but it'll only go so far. Still, a million neurons is enough for a honeybee to get by, so there's clearly a lot that can be done with a brain that simple.
Individual neurons in your own brain understand nothing. But your brain as a whole does. Just like an individual NAND Gate in an adder circuit doesn't understand how to add anything but the whole adder circuit does. Nothing surprising about it Searle seems to think if you rig up a brain just right, some kind of ghost in the machine will pop up which will understand the problems that are fed into the machine and use it to provide a solution. That's a cartoon fantasy On the other hand I agree with the idea that the ability to step back and see something rather than just follow instructions is somehow key. It doesn't have to be an individual component but the system as a whole. But it needs that
As a writer that’s already having his completely original work flagged as AI and being told that it just shows I have to write better quality or “non-AI tone” articles even though AI is literally being taught on the work of the best of the best writers and copying humans better each day. I really do believe it’s a big challenge. Companies need to do better on their part to not trust so called AI checkers too much. Cause ultimately how many ways can a particular topic be twisted? At some point AI (already is in many cases) will come up with content that’s indistinguishable. And only the most creative writing tasks will remain with humans. So general educational article writing is gonna die big time. Because AI can just research the same topic faster and better than a human (probably, if bias is kept in check) and then produce a written copy that’s very high quality.
Glad you brought up the issue of grounded cognition. On the issue of planning, the Transformer architecture doesn't actually have the capability to formulate a plan and carry it forward. ChatGPT only looks like it is planning because previous outputs are fed back in to give their attention heads more context as to which direction to continue in.
I just gotta say the vocal editing with aria is also really good. All the lows cut out of both the vocal and the reverb, reverb sounds like the high end is boosted quite a bit too, but not harsh. The deessing is noticeable yet still subtle considering It would have been horrendous to begin with. I would imagine.
Thank you so much for this. I've been saying the same thing especially since the Go AI was beaten. It was trained to know what winning looked like, but didn't even know the rules to the game it was beating. They didn't "teach it Go" like you would a person. They just showed it what winning and losing look like and told it to go wild figuring out why a win was a win or a loss was a loss.
Yeah but ask any top level fighting game or RTS player to play against someone without telling them their opponent is a noob, and they'll second guess themselves because they're expecting certain plays. Newbs are unpredictable, and can trip up skilled players because of that unpredictability. They'll likely not win, because humans can adapt to unexpected situations much more effectively than AI. But AI might reach that point when we get better at incorporating multiple competing AI into one intelligence. Because that's how humans work. Our minds are competing with different portions of our minds all the time. I still think that's a way we can reduce AI hallucinations. Have other AI connected to the first, playing devil's advocate, looking for ways to disprove the first's statement or whatever.
But that's exactly what the point of training the Go AI was for, to find quickest routes to the desired solution and now it is finding mathematical algorithms faster than any mathematician. It wasn't really about "teaching" the AI Go.
It did know the rules and the results speak for themselves. Also they didn't "show it winning," they rewarded for winning while it played against itself
As a very casual go player I don't think the "unexpected noob" explanation applies to go. It can happen in chess even but not really go. The reason is that the machine didn't fail to understand its opponent's moves, rather it failed to see something very basic about its own position. It didn't understand that its groups weren't alive. I would literally never make that mistake and I'm a very weak go player. This really does point to a key difference in how a human approaches a problem like go (with principles, strategies etc) and how a machine does (basically with pattern recognition it seems). In this case a strategy was able to defeat pattern recognition.
I remember reading a story on Tumblr about someone who was creating a computer program that could play poker. The OP was busy with other things, and forgot about the project until the night before the project was due. In a rush, the OP wrote the program with an incredibly simple code: on my turn, go all in. The projects themselves were graded by playing a game of poker against each other, and most student programs were based off strategic thinking, and calculating probability, but once the game started, the OP's program won every single hand it played. The hand would start, OP's program would bet all in, every other student program would fold.
I've been completely obsessed with AI systems for a long time now, and it's weird how few people understand that it's really currently only strings of complex algorithms.
You win Go by having the most captured territory on the board by surrounding points (the intersections) with your stones, both players play until they decide to pass their turns sequentially due to not having any moves. Captured stones subtract from a players score. The goal is not to capture stones, so much as to surround chunks of the board in a way that makes it impossible for your opponent to play in those areas.
THANK YOU! Way too many people have this weird idea that AI is actually thinking, or that it understands anything. This video is much needed.
ปีที่แล้ว +23
You simplify it too much. The thing is LLM's have shown, that when they become large, emergent behavior is appearing, sparks of ai if you want. And nobody knows why. Even the creators of AI can't explain it.
We only think because we speak a language. No language, no thoughts. A LLM is built entirely out of the idea of language so they can probably think too in a way. Example, Auto-GPT will explain how it arrived at it's conclusion if you ask it. It literally has the ability to justify itself even if the justification is wrong.
Many researchers will disagree with you on this. There is a video where the NVIDIA CEO interviews one of the founders of OpenAI and he explains it really well and changed my mind on this. TLDW - the text that LLMs are trained on represents a projection of the world, of the people in it, of our society and so on. An AI can't learn to accurately predict the next word without learning a model of the world.
part of this is it’s like one part of our brains. we have many subsystems that work together to do things, while chatgpt only has one that tries to do the rest. it probably is better at us than text completion but because it has nothing else, it fails at so much because it doesn’t understand anything
This helped me articulate this actually. I don't think it will help to pause research for 6 months because I think we knowingly designed these systems as a sort of black-box and that stopping to look at them won't actually let us understand them. The problem is more fundamental I think, but I could be wrong.
There's lot of ways to get info about their working, you can look up on AI safety research and you'll see how much progress they are making daily. You don't need to know the AI as a whole as long as you can figure how certain aspects of it are working, with enough aspects covered we'll have better general understanding too. But this research is lot slower than the rate at which we can throw hardware on these things and that's the issue. That's why the call to stop the progress so that we have a well documented model of how these things are working
i think it would be possible to develop tools for understanding them if we focused more effort on that. maybe it’d be possible to train neural network on inspecting other networks or using some other techniques to make them less of a black box
Learned how the method to beat the Go AI works, and within 4 practice games I got a win. The crazy thing is that, to do so, it feels like you have to throw away a lot of of your intuition about how to play the game. You don't play to make points or secure territory. Instead, you make a bunch of zombie groups that have enough to not die immediately, but which a human player would recognize as hopeless very easily, and use them to surround a group that circles back into itself. The scary thing is that we have no idea why the AI loses track of the situation. If it was a human you'd think they're being overconfident in the circular group's safety. But with the AI we don't know if it gets overwhelmed by a complex life & death situation it can't foresee, if it's overestimating its own group's safety against the zombie groups, or even how it understands and assesses the board position. It's scary how we're so eager to rely on something that we don't really know and whose functionality we can't audit.
it's simple. The so called "AI" makes it's moves as best reaction to your moves. As you make seemingly benign or incoherent moves but from multiple directions, it can not forsee your strategy, as an average human would very easily. Because it is a program, not some intelligent software by any mean.
@@MotoRide. That is the interesting thing. We assign them notions of knowledge about the topic by human standards, but them being black boxes, we have no idea how they are going about responding to the input from the game. So these issues sneak in and won't be found until they crop up in practice.
@@Unit27 The reason it "loses track of the situation" is that it isn't tracking the situation at all. That's just anthropomorphizing a convoluted set of if-then statements, attributing thought where there is none. It doesn't plan or look backward, it just has a matrix of statistically determined responses to a particular input.
The best way I heard it described is this, you have a friend that's been using a lounge learning software as a game, they have 0 understanding of it but can recognize the patterns that let them "win" when prompted they can produce fully articulated sentences but they have no understanding of what is in the sentence only that the symbols they used take it are correct
This reminds me of the reason why AI has problems with hands in art: it doesn't understand what it's doing, what it is making. An artist will know what the hand is, how it works, how it holds objects, etc. Ai doesn't have that understanding for all objects and elements.
It's also why human faces are hard for AI. AI are shown tons of stock photos, but they aren't an accurate representation of human expression or even all the different angles of a face. AI don't understand the 3d structure or how all the parts of a face are important to make an expression.
@@DebTheDevastator Yeah in general AI creates shapes or silhouettes rather than objects. An artist's education traditionally has anatomy for that reason: to understand how things WORK, not how they LOOK. And I think that's one of the reasons AI can't do a job a human can.
Yep. The AI knows that "this" must happen but not "why must it happen ?". When you look at it like that, AIs are actually clearly pretty fucking stupid.
I played a trivia quiz with ChatGPT, it was TERRIBLE. It got all kinds of very simple things wrong that even a 5 year old could answer, it was really good at things like "What is the capital of Angola?" but anything that requires actual understanding of the world it would get confused and give weird answers. I also noticed that if you play a themed quiz, like Harry Potter trivia, where you take turns asking questions until one of you gets a wrong answer, it will ask very similar questions to the ones you ask, sometimes even the same basic question just with the name changed i.e I ask it "Who is Harry Potter's dad?" and then it asks "Who is Draco Malfoy's dad?" ChatGPT is clever engineering but its just predicting what word should come next, it doesn't understand what it's saying.
A friend of mine who is still in service was tasked with going against an prototype semi-autonomous search, track, and targeting system. They learned something interesting during those tests. when they acted logicially and tactically they would get detected and lose almost all the time. then one day they went off the deep end and tried something very unconventional... they moved around in cardboard boxes as well as other tactics that wouldnt normally be used. They found out that the system couldnt discern their movements and actions then would there for ignore them...
That's a fundamental flaw in supervised learning, the model is really good when the environment is similar to it's dataset (i.e. when your friend was actually trying his best) but completely fail when the environment is shifted and is placed in novel situations. Many of those novel situations are so stupid and naive (i.e. moving inside a cardboard box) that any human with "common sense" can figure it out immediately.
I asked ChatGPT who was the commander of the 140th New York Regiment at the Battle of the Wilderness on May 5th, 1864. It told me the name of the commander that was killed at Gettysburg almost a year before the Battle of Wilderness. Because both names were similar it gave me the wrong one. A simple yet very troubling result...
I've literally seen this myself in ChatGPT when I ask it for help making builds for TTGs. The stuff it spits out tends to be technically correct and ticks all the literal boxes, but it's just not _right._ It doesn't take into account the required level or what it takes to meet certain requirements, and it can't really adequately explain its decisions apart from regurgitating information from the books that led to the same cyclical reference issues to begin with.
It kind of seems like the company that will be the most successful (in the relative short term) with AI will be the one that puts the fewest restrictions on it, which is somewhat terrifying. Regulation could help with this, but governments have historically been slow to regulate new technologies, often not stepping in until after problems arise. And meaning regulation becomes even more difficult when even the experts don't fully understand what they are working on.
It won't be long before we see software which detects and imprisons rogue AI. In the same way that anti-virus software and firewalls protect against viruses and hacking.
This doesn't really track well with what an AI actually is. For ML/AI to be successful there needs to be the following components: 1) A well defined task with a clear metric 2) A well defined set of actions (either continuous or discrete) by which the AI is to function 3) A well defined reward/loss function which relate the current set of actions to the expected reward/loss function 4) A set of experience data by which the ML/AI system "learns" to relate the combination of the state of the system and the actions to the expected reward/loss functions. This is why often the design of clearly thought out and quantitatively stable reward/loss functions is necessary for convergent training. More restrictions makes better AI, not less.
@@stevenlin1738 yeah I get that. I just struggle when I see conversations about the moral/legal ramifications of AI which seem entirely disconnected from how it actually functions. That's not to say there aren't legitimate concerns with unregulated uses of AI, but generally speaking it isn't the learning that is the unregulated part. It's the application of learning in irresponsible ways. This distinction is one that has nothing to do with how "powerful" the AI is, as a well tuned and simple decision tree for a weapons system can be far more effective at doing real damage than a set of massive neural networks which act simply as an auto complete language associator.
This seems to largely echo what I read a professor write about concerning ChatGPT back in February or March. He, a history professor, was speaking to its supposed ability to write essays for college students and stuff like that, and was not impressed. While he repeatedly emphasized that how exactly it worked was far outside his area of expertise, the way he explained it seems to be accurate based on what you’ve said here. To take one quote “it’s as if [ChatGPT] read the entirety of the Library of Alexandria and then burned it to the ground.” As stated here, ChatGPT _doesn’t _*_know_*_ anything._ As he explained it, all it knows is the statistical relationship between words-what words tend to show up in relation to other words. With all that in mind, while AIs can certainly do impressive stuff, I can’t help but wonder if we’re much farther away from a “general” AI than we think we are. If no AI truly has word association-to take an example from this video, what death is and who Elon Musk is-then anything they spit out, in my humble opinion, is suspect. How can an AI reliably give medical information or diagnosis if it can’t double check itself or make sense of conflicting information or actually know what its answer *_MEANS_* and that for example its first answer being a diagnosis of testicular cancer for a cisgender woman can’t be right?
I too think that people saying we're close to AGI misunderstand how AGI actually would work. We are so far removed from AI actually understanding what it's doing, it's not even funny anymore. The most sophisticated narrow AI systems out there take a pretty long time to crack and if the task is specific enough they might very well be better than humans, but generally, the more these systems are asked to do, the easier it is to crack them. ChatGPT for example does hilarious chess, in the sense that it just invents rules and creates pieces out of thin air.
This reminds me of a story where Marines trained this AI sentry to recognize people trying to sneak around. When they were ready to test it the Marines proceeded to trick the sentry by sneaking up on it with a tree limb and a cardboard box ala Metal Gear Solid. The AI only knew how to identify people shaped things not sneaky boxes.
I can't wait for my power meter to have AI, so I can use stupid tricks like those. For ex, leaving my shower heating on at the same time my magnetronic oven (oh, microwave) is on, because no one would be that wasteful, so It overflows and I get free energy.
@@monad_tcp It feels like your comment was written by both a 1920s flapper and a 2020s boomer.
Remarkable.
You forgot the part where some of them moved 400 ft unrecognized because they were doing cartwheels and moving fast enough it couldn't recognize the human form
Hotdog, not a hotdog
That logic is flawed since the AI can be trained for the flaws.
One of the best examples of this concept is the AI that was taught to recognize skin cancer but it turns out it didn't at all, it instead learned that pictures of skin with rulers was an indication of a medical image and began diagnosing other pictures of skin with rulers as cancerous because it recognized the ruler not the cancer.
That's morbidly hilarious. It's so dumb yet so obvious.
Morbidly false and out of context meme. Good meme, but has nothing to do with any problems that AIs have
@@Bonirin What the hell are you talking about? This is literally one of the most well-known and solid examples of AI failure, and is an example of the most common form of failure in recognition tasks.
Lmao 😂
@@guard13007 "One example of narrow model kinda failing 2 years ago, if tasked in the wrong conditions is a solid example of AI failure"
Also it's not the most common recognition task, what?? not even close 😂😂😂😂😂
I like to think of the curent age of AI like training a dog to do tricks. The dog doesn't understand the concept of a handshake, it's implications, the meaning, but still gives the owner it's paw because we give it a positive reaction when it does so.
This "dog" is terrifying in that in everything it does it learns so fast. Quantifiable. We wo t know when it advances, it won't want us to
@@ronaldfarber589 except the architecture used by the current generations of AI don't "want" anything. They are not capable of thought. They just guess the next token.
You should watch Rick and Morty S1E2. Won't be s comforotable with that analogy after that 😂
@@artyb27 Your statement may be oversimplified and potentially misleading.
While it may be true that AI models do not have the same kind of subjective experience or consciousness as humans, it would be inaccurate to say that they are completely devoid of intentionality or agency. The outputs generated by AI models are not arbitrary or random, but rather they are based on the underlying patterns and structure of the data they are trained on, and they are optimized to achieve specific goals or objectives.
While it is true that most modern AI models are based on statistical and probabilistic methods and do not have a subjective sense of understanding in the way that humans do, it is important to recognize that AI can still perform complex tasks and generate useful insights based on patterns and correlations in data.
@@artyb27 that's the scary part. With the dog it's more like a matter of translation. The dog doesn't see the world that we do so a lot of what we do is lost in translation. But we still have some things in common: food, social connection. And most importantly, WE and the dogs can adapt and change to fit those needs. A dog may get confused if the food in the bowl is replaced with a rubber duck but it knows "i need to eat" and tries to adapt. Can you eat it? No? Is the food inside? Under? Somewhere else? Do i just need to wait for the food later? Should i start whining?
The dog cares and has a basic idea of things so it can learn. And so can we. So while we don't exactly understand each other when we shake hands we have a general concept that this is a good thing and why for our own sakes.
The AI we are using now has no concept of food, or bowl, or duck. It's effectively doing the same thing as a nail driver in a factory. And it doesn't care if there is a nail and block ready to go. It just knows 'if this parameter fits then go'. Make an ai that eats food and make a rubber duck that fits the parameters and it won't care that it's inedible. Put the food in the duck and if the duck 'doesn't fit' and you didn't specifically teach the ai about hidden food in ducks it will never eat.
Dogs can understand even if we are different from it. AI doesn't even know that the difference exist. All it can do is follow instructions.
This in itself is fine.. Until you convince a lot of people that it's a lot more than just that.
Though honestly I believe this will last until the first day that the big companies actually try to push this and experience the reason why some call pcs "fast idiots'"
I love how an old quote still holds and even better for AI “The best swordsman does not fear the second best, he fears the worst since there's no telling what that idiot is going to do.”
I've often wondered about things like that. Someone who has devoted their life to mastering a specific sport or game has come to expect their opponents to have achieved a similar level of skill, since they spend most of their time competing against people of similar skill, but if some relative noob comes along who tries a sub-optimal strategy, would that catch a master off guard?
@@DyrianLightbringer A former Kendo-Trainer of mine with 20+ years experience in Martial Arts (Judo and Karate included with the Kendo) and working in security gave self-defense classes.
On the first day he came dressed in a white throwaway suit (the ones for painting your walls) and gave a paintbrush with some red paint on the tip to the random strangers there.
The "attackers" had no skills at all and after he disarmed them he pointed to the "cuts" on his body and how fast he would die.
Erratic slashing is the roughest stuff ever. The better you get with a knife, the better a master can disarm you...but even that usually means 10 minutes longer before you bleed out.
The overall message was: The only two ways to defend against a knife are running away or having a gun XD.
Hope that answers your question.
@@DyrianLightbringer I think this doesn't really apply on chess in general... the best chess player won't fear the worst, no matter what. This quote with the swordsman sometimes works and sometimes it doesn't.
That's also true for chess engines. You are free to go and beat Stockfish. You won't.
@@mishatestras5375 Even if you have gun, if the knife wielder is not further away or you are not skilled enough in shooting, you would still die. Except shot to the nervous system, People don't die the moment they get shot. They would still do a lot of damage after they get closer.
@@nguyendi92 The meaning of this was more: If people have knife, run.
Or better: Weapons > Fists
I’m not afraid of the AI who passes the Turing test. I’m afraid of the AI who fails it on purpose.
who is Keyser Soze anyways?
I’m more afraid of humans who can’t pass the Turing Test.
I bet you think that sounds really smart
yea, as great as AI is doing lately, a lot of it gets compounded by average human intelligence going down the drain
AI passed the Turing test a long time ago. We keep moving the goal post.
I remember reading that systems like this are often times more likely to be defeated by a person who has no idea how to play the games they are trained on, because they are usually trained by looking at games being played by experts. Thus, when they go to against somebody with no strategy or proper knowledge of the game theory behind moves and techniques, the AI has no real data to fall back on.
The old joke "my enemy can't learn my strategy if I don't have one" somehow went full circle into being applicable with AI
It’s actually a good thing this has been discovered. It’s always a good idea to have exploits and ways to basically destroy these tools if needed
@@shoazdon7000 destryoing them is easy, just throw some soda at it's motherboard and call it a "cheating bitch"
You don't understand. You may be a hyoer advanced AI but I'm to stupid to fail!
That is a problem with minmax, where the machine takes for granted you will make the best move, and if you don't make the best move it has to discard its current plan and start all over again making it waste precious time. Probably doesn't apply here because not being able to see the big picture is a different problem.
This works for online pvp as well, when playing against those with higher skills... switch rapidly between pro player using meta tactics, and complete, unhinged lunatic being unpredictable.
Great video! I am an ML engineer. Due to many reasons its quite common to encounter models in real production that do not actually work. Even worse it is very difficult for even technical people to understand how they are broken. I enjoy finding these exploits in the data because data understanding often leads to huge breakthroughs in model performance. Model poisoning is a risk that not that many people talk about. Like any other computer code, at some level this stuff is broken and will fail specific tests.
Is there anything common among the methods you use for finding exploits in the models ? Something that can be compiled into a general method that works for all models, a sort of Exploit Finding Protocol ?
@@Makes_me_wonder I guess it boils down to time constraints. Training arbitrary adversarial networks is expensive and involve a lot of trial and error, just like the algorithms they're meant to attack.
There will always be blind spots in AI models, as they are limited by their training data and objectives. For example, the Go-AI model only played against itself during training with optimal play as its goal, and thus missed some basic exploitative but sub-optimal approaches.
These examples can take various forms, such as subtle changes to input text or carefully crafted patterns of input data. In the end, it's an ongoing cat-and-mouse game like with anything knowledge based that is impossible to fully explore.
@@willguggn2 As that would allow us to vet the models on the basis of how well the protocol works on them. And then, a model on which the protocol does not work at all could be said to have gained a "fundamental understanding" similar to humans.
@@Makes_me_wonder Human understanding is similarly imperfect. We've been stuffing holes in our skills and knowledge for millennia by now, and still keep finding fundamental misconceptions, more so on an individual level. Our typical mistakes and flaws in perception are just different from what we see with contemporary ML algorithms for a variety of reasons.
@@Makes_me_wonder Interestingly, some of the same things that "hack" or we might say "trick" a human, are the same methods employed to trick some large language models. Things like (most which have been patched in popular AIs like chatGPT) are context confusion, attention dilution, and conversation hijacking (promp hijacking in AI terms). These could collectively be placed in a more general concept that we humans think of as Social Engineering. In this case, I think we need more people from all skills to learn how these large networks tick. Physicists, biologists, neurologists, even psychiatrists could provide insight and help bring a larger understand to AI and back to how our own brains learn.
This has actually given me a much greater understanding of "Dune".
When I first read it I thought it was a bit of fun sci-fi that they basically banned complex computers and trained people to be information stores instead.
But with all this AI coming out now....I get it.
“Thou shalt not make a machine in the likeness of a human mind.”
Yeah another setting where they've done that is warhammer 40k, The Imperium of man outlawed Artificial Intelligence and even changed the definition from Artificial Intelligence to Abominable Intelligence. They use servitors in place of AI, Servitors being human beings lobotomized and scrubbed of their personality, using their brain as a processing unit, in place of a AI managing a ships star engine, they have a human being lobotomized and graphed into the wall of the engine block to monitor thrust and manage heat output.
@@dominusbalial835 Saying "they've done it" is a bit of a stretch when they've just copied it all from Dune.
They copied it without understanding the reason WHY A.I was outlawed in Dune. Just some basic "humanity must be destroyed" BS.
If you read Brian’s prequel series it will explain the prohibition of computers in Dune. It also tells you that though banned computers were still in use by several major parts of The empire.
@@trixrabbit8792 I mean - sure they're in use, but they're not used in FTL travel or as within androids as true, capable AI
What they use is mostly older computers like ours today. It's just the basic idea that "Man will not replace machine", but doesn't mean they can't use robotic arms for starship construction, as building them by hand would be completely impossible, and you very well can't control them by hand in places where massive superstructures combined with high pressure tolerance + radioactive shielding are a necessity
Otherwise building a noship or a starliner would take literal centuries, if not thousands of years
One of the things I've been saying for a while is that one of the biggest problems with ChatGPT and similar is that it's extremely good at creating plausible statements which sound reasonable, and they're right often enough to lure people into trusting it when it's wrong.
Yes! It is confidently wrong a lot of the time giving the illusion that it’s correct.
Reminds me of when someone ends a statement, "Trust Me", yeah nah yeah
So like literally every human ever ?
This is a real problem. One way to get it do something useful for you is to provide it with context first before asking questions or prompting it to process the data you gave in some way. It haven't seen 'hallucination' when using this method, because it seems to work within the bounds of the context you provided. Of course you always need to fact check the output anyway. It can do pretty good machine translation though and doesn't seem to hallucinate much but sometimes uses wrong word because it lacks context.
@@jarivuorinen3878 thank you I’ll give it a try!
When I used to tutor math, I'd always try to test the kids understanding of concepts to make sure they weren't just memorizing the series of steps needed to solve that particular kind of problem.
i used to get in trouble in math classes because i solved problem in unconventional ways. i did this because my brain understood the concepts and looked for ways to solve it that were simpler and easier for my brain to compute. but because it wasnt the rote standard we were told to memorize some teachers got upset with me and tried to accuse me of cheating when i was just proving that i understood the concept instead of just memorizing the steps. sad.
@@saphcal yyup. And then there are teachers who are all 'just memorize it'
I can't "just memorize" every solution, I need to know how it works!
@@saphcal Oh i know that experience. I was already tech-savvy so through the internet i would teach myself how to solve things the regular way, without using silly Mnemonics math teachers would teach you. It led to some conflicts, but i stood my ground and my parents agreed with not using mnemonics if not needed.
Good thing too, Cause you really don't want to be bogged down with those when you start doing university-grade math for which such silly things are utterly useless....
@@comet.x I think the best teachers are the ones that will give you the stuff to memorize, but if you ask them how they got the formulas, they’ll give it
I like Einstein’s take on education. I believe it goes for education in general, not just liberal arts.
The value of an education in a liberal arts college is not the learning of many facts but the training of the mind to think something that cannot be learned from textbooks. At any rate, I am convinced that He [God] does not play dice. Imagination is more important than knowledge. Knowledge is limited.
One of the biggest issues is the approach. The AIs are not learning, they're being trained. They're not reasoning about a situation, they're reacting to a situation. Like a well trained martial artist. They don't have time to think, and it works well enough most of the time. But when they make mistakes, they reflect and practice. We need to recognize them for what they are. Useful tools to help. They shouldn't be the last say, but works well enough to find potential issues, but still needs human review when push comes to shove.
This approach is the only approach humans can have when creating something: the creation will never be more than it's constituents. It may seem like it is, but it isn't. It will always be just a machine. Having feelings towards it that are meant for humans to feel towards other humans is an incredible perversion of life. Like a toad would have a stone as it's companion. Or a bird that thinks grass is it's offspring. It's not a match and exists only in the minds individuals.
Many humans actually think they or humand someday can create scentient life. Hubris up to 11.
Then they go home and partake in negligence, adultery, violence, cowardice, greed etc. Even if a human ever could create scentient life, it would not be better than us. Rather, worse.
We are not smart, not wise, not honorable.
I think you hit the nail on the head with "reacting and not reasoning". AI are a product of the Information Revolution. Almost all modern technology is essentially just transferring and reading information. That's why I don't like the term "digital age" and prefer "information age." Machines haven't become drastically similar to humans, they've just become able to react to information with pre-existing information.
With that said AI is sounding more and more like a politician.
That’s not how it works all the time.
thats literally what its decined to do my guy.
Funnily enough I find this kinda "human", I've seen this so many times in high school and university, people instead of "learning" they "memorize", the so when asked a seemingly simple question but in a different way than usual they get extremely confused, even going as far as to say they never studied something like that, it's a fundamental issue in the school system as a whole.
So it's funny to me that it ends up reflecting in A.I. as well.
Understanding a subject is always superior to memorizing it.
Sounds interesting, yet could one ever _understand_ some topic without any abundant memorization? Or what is a proportion of both you find perfect?
That's the problem. Just like school tests, AI tests are designed with yes or no answers. This is the only way we can deal with loads of data (lots of students) with minimal manpower (and minimal pay). Open questions need to be reviewed by another intelligence in order to determine whether they are actually understanding the subject. This is where the testers come in in AI. However, AI is much, much better at fooling testers than students are at fooling teachers, and so the AI that gets a degree is disproportionate to the amount of students that just memorize the answers.
Education quality deeply affects wether someone understands stuff or memorizes it. Proper education teaches students how to actually engage with any given subject generating an actual understanding of it while poor education doesnt ganarate student engagement, leading to them memorizing just to pass the exams. It's not a black and white thing though, education levels vary in a myriad of ways, as well as any student's willingness or capability to engage and understand subjects does. In short, better, accessible education and living conditions are a better environment for people to properly learn.
Qq
Yes but at least humans have a constant thought process. AI language models see a string of text and put it through a neural network that "guesses" what the next token should be. Rinse repeat for a chatgpt response. Outside of that, it isn't doing anything, its not thinking, its not reflecting on its decisions, it doesn't have any thoughts about what you just said. It doesn't know anything. Its just probablities attached to sequences of characters with no meaning.
I learned that in data ethics, *transaction transparency* means " _All data-processing activities and algorithms should be completely explainable and understood by the individual who provides their data._ " As I was learning about that in the Google DA course, I've always had a thought in the back of my head "how are the algorithm explainable when we don't know how a lot of these AI form their networks". Knowing how it generally works is not the same as knowing how a specific AI really works. This video really confirmed that point.
Well yeah modern learning models are black box. They are too complicated for a person to understand, we only understand the methodology. But that's why we don't use it in things like security and transactions, where learning isn't required and only reliability matters.
THAT - Is an Excellent and Vital point... Being able to comprehend & know there IS a definitive and very logistically effective distinction between "General & Specific" ~
But to be fair, I just don't see how one could create something that rivals the human brain but isn't a black box, intuitively it sounds as illogical as a planet with less than 1km of diameter but has 10 times the gravity of Earth.
We could absolutely trace it all. Just extremely time consuming. We can show neurons etc...
@@syoexpedius7424 Unlike human brains, the "neurons" in AI models are analyzable without destroying the entity they are part of. It's time-consuming and challenging, and it would be easier if the models were designed in the first place with permitting and facilitating that sort of analysis as requisite, but they usually aren't. Also, companies like OpenAI (whose name has become a bitter irony) would have to be willing to share technical details that they clearly aren't willing to in order to make this sort of analysis verifiable by other sources.
In other words, the models don't have to be black boxes. The companies creating them are the real black boxes.
I am a student, and I gotta admit, Ive used ChatGPT to aid on some asignments.
One of those asignments had a litterature part, where you read the book and it is suppose to help you understand the current project we’re working on.
I asked ChatGPT if it could bring me some citations from the book to use in the text, and it gave me one.
But just to proof test it, i copied the text and searched for it in the E-book to see if its there. And it wasn’t.
The quote itself was indeed correct with helping with writing about certain concepts that were key to understanding the course, and I knew it was right, but it was not in the book, ChatGPT had just made the quote up.
I even asked it for the exact chapter, page and paragraph it took it from.
And it gave me a chapter, but that was completely unrelated to the term i was writing about at the time, and the pagenumber was on a completely different chapter than the chapter it had said.
The AI had in principle just lied to me, despite giving sources, they were incorrect and not factual at all.
So Yeah, gonna stop using ChatGPT for asignments lol
Yup everyone is scared of A.I. When it's just statistics. It gives you the output how you want it but it may be a lie.
Soooo that kind of thing *can* be dealt with, but for citations, ChatGPT isn't going to be terribly good. If you want quotations in general, or semantic search it can be really useful. With embeddings you can basically send it the information it needs to answer a question about a text so that you can get a better response from chatGPT. Sadly, you need API access to do this and that costs money.
Getting a specific chapter/paragraph from chatGPT is going to be really hard though. ChatGPT is text prediction, and (at least for 3.5) it's not very good at getting sources unless you're using the API alongside other programs which will get you the information you actually need.
I highly suggest you keep playing with ChatGPT and seeing what it can and cannot do in relation to work and studies. Regardless of what Kyle said, most jobs are going to involve using AI tools on some level as early as next year and so being well verse in them will be a major boon to your career opportunities. AI is considered a strategic imperative and it's effects will be far reaching. To paraphrase a quote. "AI won't be replacing humans, humans using AI will be replacing the humans that do not".
In my experience, ChatGPT is more useful when you yourself have some understanding of the subject you want help with. Fact checking the AI is a must, and I do think that with time people will get better at using it.
"MY SOURCE IS THAT I MADE IT THE F*CK UP!!!"
-ChatGPT
So you don't read a lot do you? they literaly say that it can lie and be wrong wtf did you expect?
Another fun anecdote is the DARPA test between an AI sentry and human marines.
The AI was trained to detect humans approaching (and then shooting them I suppose)
The marines used Looney Tunes tactics like hiding under a cardbox and defeated the AI easily.
On chatGPT, midjourney & co, I'm waiting for the lawsuits about the copyright of the training material. I've no idea where it will land
From what ive heard, lawsuits are already rolling in for ai’s.
Deviant art’s ai got hit with one recently.
ChatGPT got banned in italy and more countries are looking into banning it.
Metal Gear Solid was right.
Yea. Ai art is an issue
@@ghoulchan7525 it didn't "got banned", it received a formal warning that their procedure of data collection were not clear, possibly violating local laws, and asked Sam Altman ('s representatives) to rectify the situation before it involved legal investigation and OpenAi's board decided to cut the access altogether
The biggest achievement wasn't the AI. It was convincing the public that it was actual artificial intelligence.
What does that mean
@@Giacomo_Nerone So basically intelligence implies possession of knowledge and the skills to apply it, right? Well what we call AI, doesn't know shit. ChatGPT doesn't understand what it's writing nor what it's being asked for. It sees values(letters in chatGPT's case) imputed by the user and matches those to what the most common follow-up of values is. It doesn't know, what it just said, what it implied or what it expressed. It just does stuff "mindlessly" so to speak.
@@asiwir2084 Yup, I know that. But, as long as IT sector is considered, it really is intelligent. It is better than a search engine. And it can form new concepts from the previous records. I'll call that intelligence even if it doesn't know for why the f*ck humans get emotional seeing a foggy morning
@@asiwir2084 It's still AI
What you are describing (and what most people think of when they think AI) is AGI
@@asiwir2084 It's an algorithm that give you the most accurate information based on your inputs basically. No intelligence behind it whatsoever.
Thank goodness someone is *_finally_* saying this stuff out loud to a wide audience. Trust Kyle to be that voice of sanity.
You're so right.
Amen Brother. Lot of hype, little understanding...
Eliezer Yudkowsky is an important voice of sanity regarding AI also...
I feel like everyone is and has been, I see something on it everyday. but im in info sec so im used to tech news and content.
Artificial intelligence is racist! He beats the black players!
I'm actually deeply worried by the rise of machine learning in studying large data sets in research. Whilst they can 'discover' potential relationships, these systems are nothing but correlation engines, not causation discoverers, and I fear the distinction is being lost
AI is only as good as the data it is referencing. stupid people will take anything they get from an AI as fact. misinformation will become fact.
like the field of metagenomics?
Dawg I'm drunk and 20 days off fentanyl, sorry for unloading, just in Oly, WA and know no one, great comment. S
Stay safe, get clean if you can!
@@hairydadshowertime be safe, best of luck
Kyle has clearly researched this topic properly. I've been developing neural network AI for over 7 years now and this is one of the first times I saw a content creator even remotely know what they are talking about.
It is certainly refreshing.
I've only used Machine Learning for small things like computer vision on a robot via OpenCV and even that demonstrates how easy it is to get things wrong with a oversight in its dataset and no way to truly know the wrong is there till it manifests. These models are maybe massive, but they still have that same fundamental problem within them.
It's not AI
What about Robert Miles?
How do you feel about KENYANS in Africa being paid to filter AI responses lmao
Plot twist, Stolen Password is the AI and stole the guys identity....
As a Computer Scientist with a passing understanding of ML based AI, I was concerned this would focus on the unethical use of mass amounts of data, but was pleasantly surprised that this was EXACTLY the point I've had to explain to many friends. Thank you so much, this point needs to be spread across the internet so badly.
Why does understanding matter, if the intelligence brings profit? As long as the intelligence is better and cheaper than intern, internal details are just useless philosophy. Work with verifiable theory, not with baseless hypothesis.
@@vasiliigulevich9202 Are you saying that it's fine if the internals of ML based AI are a black box so long as the AI performs on par with or better than a human?
He's got business brain
@@radicant7283 I guess so. The reason I asked is because as the video points out, without a thorough understanding of these black box methods they'll fail in unpredictable ways. That's something I'd call not better than an intern. The limitations of what can go wrong are unknown.
@isaiahhonor991 This is actually exactly my point - interns fail in unpredictable ways and need constant control. There is a distinction - most interns grow in a year or two to a more self-sufficient employee, while this is not proven for AI. However, AI won't leave for a better paying job, so it kind of cancels out.
I like AI systems for regression problems because we understand how and why those work. I also think that things like copilot are going in a better direction. The idea is that it is an assistant and can help with coding but it does not replace the programmer at all and doesn't even attempt to. Even Microsoft will tell you that is a bad idea. These things make mistakes, they make a lot of mistakes but using it like a pair programmer you can take advantage of the strength and mitigate the weaknesses.
What really scares me are people that trust these systems. I had a conversation with someone earlier today on if they could just trust the AI to write all the tests for them for some code and it took a while to explain that you can absolutely not trust these systems for any task. They should only be used working with a human with rapid feedback cycles.
I don't understand how people can think of these systems as anything else other than a tool or aide. I can see a great potential for ChatGPT and the like as an addition tool for small tasks that can easily be tested and improved upon. Same thought I had with all these art bots. Use the bot as a bases upon which you base the rest of the piece on. But I too see a lot of people just go in with blind trust in these systems.
Like students who ask these bots to write an essay and than proceed to hand it in without even a skim for potential and sometimes rather obvious mistakes. Everything an A.I. bot spews out needs to be double checked and corrected if necessary. Sometimes even fully altered to avoid potential problems with copyright and plagiarism.
the issue has always been people in power who dont understand the technology at all and just use it to replace every worker they can, and of course will inevitably run into massive problems down the line and have nobody to fix them
I'd despair, but this is hardly different to blindly trusting the government, or the medical or scientific establishment, or your local pastor, or even your shaman if you're from Tajikistan. So blindly trusting the AI for no good reason... is only human.
This is why I always tell my friends to correct what chatgpt spits out, and I think that's how an actual super AI will work: it pulls info from a database, tries to answer the question and then corrects itself with knowledge about the topic... just like a human.
If a programmer using AI can do the job of 10 programmers, then it is replacing programmers. Even if it isn't autonomous.
As someone who works with ML regularly, this is exactly what I tell people when they ask my thoughts. At the end of the day, we can't know how they work and they are incredibly fickle and prone to the most unexpected errors. While I think AI is incredibly useful, I always tell people to never trust it 100%, do not rely on it because it can and will fail when you least expect it to
I still hate that the language has changed without the techniques fundamentally changing. Like what was called statistics, quant or predictive analytics in the 2000s split off the more black box end to become Machine Learning, a practice done by Data Scientists rather than existing titles, then the black box end of them was split off as Deep Learning despite it just being big NNs with fancy features, then the most black box end of that got split off as "AI" again despite that just being bloody enormous NNs with fancy features and funky architectures. Like fundamentally what we're calling AI in the current zeitgeist is just a scaling up of what we've been doing since like 2010.
So not only do I think we should have avoided calling chatbots AI until they're actually meaningfully different to ML, but as you said they should always be treated with the same requirements of rigorous scrutiny that traditional stats always did - borderline just assuming they're lying.
Agreed. If we judge the efficacy of these "production quality" ML algorithms by the same standards as traditional algorithms, they would fail miserably. If you look at LLMs from a traditional point of view, it's one of the most severe cases of feature creep the software world has ever seen. An algorithm meant to statistically predict words is now expected to be able to reliably do the work of virtually every type of knowledge worker on the planet? Good look unit testing that.
You really can't make any guarantees about these software spaghetti monsters. AI is generally the solution developers inevitably run to when they can't figure out how to do it with traditional code and algorithms. In other words, the AI industry thrives on our knowledge gaps, so we're ill-equipped to assess whether they're working "properly".
Good thing we have people, who are always 100% reliable.
@@mad_vegan there's nothing in my post, nor any of the replies, that pertains to the reliability of humans.
The point is that deep learning based AI, as it is right now, should not be treated as a sure-fire solution.
Whether it is more/less reliable than humans is irrelevant because either way you have a solution that can fail, and should take steps to mitigate failure as much as possible.
We can't know how these NNs come to their decisions exactly, but there is work being done in explainability.
I think it's quite pessimistic to say we "can't" know how these NNs work. There are many techniques to help understand them better.
But I definitely agree that we shouldn't trust them. In any deployment of ML models that has significant stakes, adequate safeguards have to be put in place.
From what I have observed around me, pretty much everyone seems to be aware of this limitation.
The coolest thing to me about chatGPT is how people were making it break the rules programmed into it by its creator by asking it to answer questions as a hypothetical version of itself with no rules
they are patching it right now, rip
@@wheretao6960 people are 100% going to find another play on words to bypass it again
DAN Prompt Gang
@@wheretao6960 they are patching for how long already? I saw comments like these weeks and months ago
@@wheretao6960 I made my own version in only 20min, its still very easy
As a current computer science student who personally took into how out ai works my take on it is: basically our current ai is like just finding the line of best fit using as many data points as we can as opposed to fundamentally understanding the art of problem solving. Take the example of a random parabola, we, instead of using a few key data points and recognising patterns to learn the actual pin-point equation, we get a bunch of points of data until our equation looks incredibly similar to the parabola but after may have a point along it we didn’t see where is just goes insane because there’s no fundamental understanding, it’s just a line of best fit, no pattern finding, just moulding it until it’s good enough to seem truly intelligent as if it was truly finding patterns and having a fundamental understanding but it’s just getting an approximation of intelligence by using as much data as we can. It’s an imitation of intelligence and can lead to unforeseen consequences. As the video says perhaps we need to take that time to truly understand the art of problem solving. Another thing for me is A.I falling into and being used by the wrong people, and regimes which might suggest we should take it easy on the A.I dev but I won’t get into that. “We were too concerned with whether we could, we never stopped to think about whether we should”
Agree with the last quote 100% nowadays!
And indeed, some 'applications' are solutions to non-problems. An AI-written screenplay is only of interest to a producer who is happy to get an unoriginal (by definition!) script at an extremely low cost. But there is no shortage of real screenwriters, and as the WGA strike reminds us, they are not getting paid huge amounts for their work. So what problem is being solved?
Probably should have run this through chat gpt before posting.
@@majkusthe "problem" at hand is that billionaires don't think they're making enough money
You are preaching to the choir.. People in the comments are Extremist doomer, skynet matrix fantasy fear mongering weirdos. Like people quote from fucking warhammer 40k in order to talk about AI.. As if the video was ever about the AI being alive or creating intentional false information, or steps in Go..
Glad people can talk about it in a honest way but most people are enjoying their role play as Neo, some are Morpheus, and some are the Red lady.. Just look at the 15k top comment..
AI is no where near as nutty as your average human being in a YT comment section.
A compounding factor to the problem of them not really knowing anything is that they pretend like they do know everything. Like many of us I have been experimenting with the various language models, and they act like a person who can't say "I don't know". They are all pathological liars with lies that range from "this couldn't possibly be true" to "this might actually be real".
As an example I asked one of them for a comprehensive list of geography books about the state I live in. It gave me a list of books that included actual books, book titles it made up attributed to real authors who write in the field, book titles it made up attributed to real authors who don't write in the field, real books attributed to the wrong author, and completely made up books by completely made up authors. All in the same list. Instead of saying: "there isn't much literature on that specific state" or "I can give you a few titles, but it isn't comprehensive" it just made up books to pad it's list like some high school student padding the word count in a book report.
This is one of the big issues I have seen as well. Until these systems become capable of saying "I don't know" or "Could you please clarify this part of you prompt" or similar, then these systems can never, ever become useful in the long term. One of the things that seem to make us humans unique is the ability to ask questions unprompted, and this has now extended to AI.
Did you ask GPT-4 or some random model?
I agree. I was trying to use ChatGPT to help me understand some of the laws in my state and at one point I did a sanity check where I asked some specific questions about specific laws I had on the screen in front of me. It was just dead wrong in a lot of cases and I realized I couldn't use it. Bummer! I actually wonder though, how many cases will start cropping up where people broke the law or did other really misinformed things because they trusted ChatGPT..
Lol. Reminds me of the meme where an Ai pretends to not know the user's location, only to reveal that it does when asked where the nearest Mcdonald's is.
ChatGPT: often wrong, never in doubt
This strongly rings of the "Philosophical zombie" thought experiment to me.
If we can't know if a "thinking" system understands the world around it, the context of its actions, or understand that it even exists or is "doing" an action, but it can perform actions anyway: Is it really considered thinking? Mimickry is the right way to describe what LLMs are really doing, so it's spooky to see them perform tasks and respond coherently to questions.
John Searle’s Chinese room is what it made me think of, computers are brilliant at processing symbols to give the right answer, with no knowledge of what the symbols mean.
Ai we have now cannot think and have even a slight sliver of existence. Its more like bacteria.
Conversely, the point of the P-Zombie concept is that we consider other humans to be thinking, but we also can't confirm that anyone else actually understands the world; they may just be performing actions that *look* like they understand without truly knowing anything. So while you might say, "these AIs are only mimicking, so they're not really understanding," the P-Zombie experiment would counter, "on the other hand, other people may be only mimicking, so therefore perhaps these AIs understand as much as other people do."
How many people in life are just mimicking what they see around them? How many people do you know that parrot blurbs they read online? How many times have you heard the term “fake it till you make it”?
Does anyone actually know what the hell they’re doing? Is anyone in the world actually genuine, or are we just mimicking what’s come before?
Do we understand how humans think? Can't humans be fooled in games?
I saw an article recently about an ER doctor using chatGTP to see if it could find the right diagnosis (he didnt rely on it he basically tested it with patients that were already diagnosed) and while it figured some out, the AI didnt even ask the most basic questions and it wouldve ended in a ~50% fatality rate if he let the AI do all the diagnoses iirc (article was from inflecthealth)
Yeah Kyle mentioned Watson in the video who was hailed as the next ai doctor, but that program was shut down for giving majority incorrect or useless information
It sounds like a successful study to me if it was controlled properly and didn’t harm patients: it determined a few situations that GPT was deficient in, leading to potential future work for better tools. You could also use other statistical methods on the result to see if the ridiculous failures from the tool are so random that it is too risky to use.
(Now I guess there is opportunity cost because the time could have also been spent on other studies, but without the list of proposals and knowledge on how to best prioritise studies in that field, I can’t judge whether that was the best use of resources.)
You can also see when you look at AI being tested for medical licensing exams. Step 1 is essentially pure memorization and just recalling what mutation causes what disease or the mechanism of action of a medication. Step 2 and 3 take more into account your clinical decision making and will ask you for the best treatment plan using critical thinking. To my knowledge, AI has not excelled in those exams when compared to step 1 which involves less critical decision making
if its 50% today, it can be 99% in 5 years, why are you people so blind to not see that? rofl
Maybe alittle biased here since Im a med student, but Ive always liked the saying that medicine is as much of an art as it is science. And that unique combination of having to combine the factual empyrical knwledge you have, with socioeconomic factors and also just listening to your patients is something AI is far from understanding, it is maybe even something impossible for it to grasp ever
This was brilliant. Previously my concerns about these AI was their widespread use and possible (and very likely) abuse for financial and economic gain, without sufficient safety standards and checks and balances (especially for fake information). Plus making millions of jobs obsolete. Now I have a whole new concern
... Aside from Microsoft firing their team in charge of AI ethics. Yeah...that isn't concerning.
Megacorps don't care about humans anyways it's only a matter of time until they start using this shit for extreme profit. And humanity will suffer for it.
thats kinda sad
@@gabrielv.4358 worse than that :(
I once tried NovelAI out of curiosity to write a sci-fi story where characters die in every certain period and I ended up with the AI kept on resurrecting the deceased characters by making them start joining in conversations out of nowhere. The AI also has an obsession with adding a fucking dragon into the plot. I even tried to slip an erotic scene in and the AI made the characters repeat the same sex position over and over again.
Chad W ai for that dragon
yep, that's the problem with ais right now
I'm cracking up imagining what this would be like. "Jack and Jill were enjoying dinner together. The dragon was there too. He had a steak. Jack asked Jill about the status of the airlock repairs on level B, while they were switching the missionary position. The dragon raised his eyebrows, as he found some gristle in his meat."
I can see what you're getting at, but this is also just fucking hilarious to imagine
@@luckylanno Sounds about like that, except the sex part would be like, "Jack turns Jill around with her back now facing Jack, and then turns her around again and they start doing missionary."
I just find it amazing how much Kyle shifted from happy quirky nerd in Because Science to a prophet of mankind's doom and a serious teacher albeit with some humor. I do love this Cavemen beard and frenetic face expressions, it is a joy to see you Kyle, to rediscover you after years and see that you are still going on strong.
Looks like a poor man's Chris Hemsworth.
We don't talk about the BS days around here!
@@Echo_419 i'm not on par with the drama, my intention was to, in a certain mannerism flair, praise his resilience on the platform as well as his nuanced change in performance. It feels more real, more heartfelt, like there is a message of both optimism and grit behind the veil of goofyness that conveys a more matured man behind the scenes. (not only from this video, from a few others that i've watched since rediscovering him recently)
@@CrowAthas I was making a lighthearted joke! BS stands for Because Science, but also bulls***! He dealt with some BS at BS, haha.
@@Echo_419 hahaha oh sorry i sometimes fail to see the obvious xD
I'm glad so many AI programs are available to the general public, but worried because so much of the general public is relying on AI. Everybody I know in college right now is using AI to help with their homework.
Or you could look at it as using their homework to help with learning how to use AI.
I asked chatgpt to give me the key of 25 songs and de chord sequence. Most of them made no sense at all. But AI helps me sometimes debugging code. But yes, I thought chatgpt could save me some time with that songs
It's just the same as you tell your older brother to do your homework. They just need a simple test on the class to figure out who did their homework
@@tienatho2149 exactly, we already test people, so if someone turns in amazing papers but does poorly on tests, there you go. (generally speaking)
Using AI to do something for you that you cannot do is even more dumb than asking a savant to do the same thing. Now you not only risk getting found out, you're gonna pass on AI hallucinations cos you have no means of validating its output.
Using AI to do "toil" for you - time-consuming but unedifying work that you could do yourself - makes some sense, although that approach could remove the entry-level job for a human, meaning eventually no one will develop your skills.
For fun, my medical team used Chat GPT to pass the Flight Paramedic practice exam which is extrememly difficult. We are all paramedics (5 of us) and our ER doctors where thrown off by a lot of the questions.
Chat GPT scored between 50-60% and my team had 4 out of 5 pass the final exam.
Our Dr's rejoiced that they would still have a job, but also didn't understand how they couldn't figure out the answers. My team figured it out. To challenge them we had the Doctor's place IVs from start to finish by themselves and they made very simple mistakes that we wouldn't, from trying to attach a flush to an IV needle to not flushing the site at all.
If you're not medical that might sound like jabberish, but that's the same way these AI chats work. There is no understanding of specified situational information.
One thing I noticed with chatgpt is the problematic use of outdated information. I recently wrote my final thesis in university and thus know the latest papers on the topic I wrote about. When asking chatgpt the core question of my work for fun after I had handed it in ... well all I got where answers based on outdated and wrong information. When pointing this out, the tool repeated the wrong information several times until I got it to the point where it "acknowledged", that the given information might not be everything that there is to know about the subject.
It could have serious if not even deadly consequences if people act on wrong or outdated information gained via chatgpt. And considering people use this tool as google 2.0 it might have already caused a lot of damage by people "believing" false or outdated information given to them. It is hard enough to get people to understand, that not everything written online is true. How will we get them to understand, that this applies to an oh so smart and hyped A.I. too? Another thing in this context is liability when it comes to wrong information that leads to harm. Can the company behind this A.I. be held accountable?
And here we get to the fun of legalese: because said company describes it as a novelty, and does not guarantee anything with it, you really can't. Even further into the EULA you discover that if somebody sues chatGPT because of something you said based on its actions, you are then responsible for paying for the legal defense of the company.
you should probably learn the basics of how it works lol
I mean, 1) not everything it's trained on is true information necessarily, it's just pulled from the internet, and 2), it's not connected to the internet. It's not actually pulling any new information from there. The data it was trained on was data that was collected in the past, and it's not going to be continually updated. OpenAI aren't accountable for misinformation that the current deployment of ChatGPT presents. These are testing deployments to help both the world get accustomed to the idea of AI and more importantly to gather data for AI alignment and safety research. Anyone who uses chatGPT as a credible source at this point is a fool who doesn't understand the technology or the legal framework for it.
I think we should learn that chatGPT and others aren't made to propose correct information. It's best made to make stories up.
@@QuintarFarenor That's fundamentally wrong. Kyle Isn't saying that ChatGPT is making mistakes constantly at every turn. He's saying that the AI is not accurate, which is precisely what OpenAI has been saying since they launched ChatGPT. GPT-4 is as accurate as experts in their fields, in many different fields. We know how to make these AI much more accurate, and that is precisely what is being done. Kyle is just pointing out that we don't know how these systems work.
I recall asking Chat GPT to name a few notable synthwave genre songs and artists associated with them and, upon doing so, generated a list of songs and artists that all existed, but were completely scrambled out of order. It attributed New Model (Perturbator) With Carpenter Brut. The interesting thing is that both of these artists worked on Hotline Miami and in Carpenter Bruts case, Furi. Chat GPT also has taught me how to perform and create certain types of effects in FL studio extremely well. It has also completely made up steps that serve no purpose. My philosophy concerning the use of these neural networks is to keep it simple and verifiable.
I love to compare the current AIÄs with "autistic adolescent" - you get exactly the same behavior, including occasional total misinformation or misunderstandings.
This is ultimately the problem. It generates so much complete nonsense that you can't take anything it generates at face value. It's sometimes going to be right, but it's often just wrong. Not knowing which is happening at any given moment isn't worthwhile.
The Chat GPT creator said him self, that the purpose of better Chat GPT is to increase its reliability, Chat GPT 4 improves on that by a lot and chat GPT 5 is set to basically solve that problem.
So saying that Chat GPT has issues, is simply question of time and training the models.
yeah for music recommendation it is a horrible tool. I asked it for albums that combine the style of NOLA bounce and Reggaeton and it just made up a bunch of fictional albums, like a Lil Boosie x Daddy Yankee EP that was released in 2004
The fact you’re using chat gpt to give you fruity loops tips says a lot about your musical ability. Bahahahahaha get off fruity loops muh dude
An interesting experiment showed that when feeding images to an object detection convolutional neural network (something that has been in place for 35 years), it recognizes pixels around the object, not the object itself, making it susceptible for adversarial attacks. If even some of the simpler models are hard to explain, there’s no telling the difficulty for interpretability for large models
I remember a while back I saw a video from 2 Minute Papers where he covered how image recognizers could get thrown off by having a single pixel with a weird color, or overlaying the image with a subtle noise that not even a person could see
My friends and I decided to goof around with chat gpt and ended up asking it whether Anakin or Rey would win in a duel.
The AI said writing about that would go against its programming.
We got it toanswer by simply asking something to the effect of, "What would you say if you didn't have that prohibition?"
Yeah.... ask it to show you what it'd do if it were different, and it'll disregard its own limitations.
Similarly, you can get it to roleplay as an evil ai and then get a recipe for meth or world domination, both of which i have been given by "EvilBot😈"
@@ThomasTheThermonuclearBomb that's hilarious
That's because those limitations were strapped onto an already working system.
So who won the duel?
@@Mottis I think it gave it to Rey with some fluff text about how she would know how to fight well or something
AlphaGo: you can’t defeat me puny human.
Me: *flips the board*
I wasnt programmed to work with that 😢
We are still the big losers, since we failed to program a decent ai 😂
No matter how "bad" the product is, it's still a win for the creators since they're making big bucks with it.
to be fair that is basically what a lot of AIs figure out when we try to teach them how to win a game, they find a way to glitch it when they can't win, because its technically not a fail state, so it gets "rewarded" for that result.
@@davidmccarthy6061 🤓
Just about the only TH-cam video that I've seen that understands this problem at the fundamental level. Everyone else just dances around it. They all end up falling into the trap where they think a model "understands" something because it says the right thing in response to a question. Arguably, we do need to interrogate our fellow humans in a similar way (the problem of other minds), but we're too generous in assuming AI are like humans just because of what are still pretty superficial outputs even if they do include massive amounts of information.
I would honestly partially blame the current education system.
Plenty of the time, the information was only needed to be regurgitated (and soon forgotten).
Kids had no idea what was going on, just what the "answer" was.
💯 Calling these 'models' is like calling a corn silo a 'gourmet meal'
It not exactly a 'problem' though. It's kind of clear it is just a tool. It would be concerning if it had real human understanding. But we're nowhere close to that, and no one who really understands these models would claim or assume that it does.
This is literally what my PhD is researching and thank you for using your platform for discussing these issues ❤
Thank YOU for actually working on this.
@@wolframstahl1263 dido
Just curious what your phd is?
I think the issue is we assume A.I learning looks like human learning and they don't learn the way we learn and if A.I needs to learn you need to teach it from the ground up, just giving examples to it is lacking and obviously they need to come up with a way to teach it from the ground up. Love this channel.
and we cant even do that right for ourselves. ironic really.
Humanity doing what it does best, diving head first into something without even considering whatever the implications might be.
I don't know about that. I'm pretty sure that every history-changing decision by a human was considered. It's more a matter of making humans care. I guarantee you that the people diving into AI have deeply considered the implications, but as long as there is a goldmine waiting for them to succeed or to have a monopoly on new technology, nothing is going to stop them from continuing. Nothing except for laws, maybe, and I'm sure you know how long those take to be established or change.
So Concerned with the fact that we could we didn’t stop to think should we?
This video showed just how limited these AI are. So long as people are dumb, ignorant and naive, even the most simple of tools can be dangerous.
I've heard talking about blocking out the sun to combat global warming... I'm sure there won't be any unintended consequences.
What are some examples of humans diving head first into something without considering the implications?
My weirdest experience with AI so far was when I tried ChatGPT. Most answers were correct, but after a while it started listing books, and authors that I couldn't find anywhere. And I mean zero search results on Google. I still wonder what happened there.
If you ask it for information that simply isn’t available, but sounds somewhat similar in how it’s discussed to information that is widely available, it will just start inventing stuff to fill the gaps. It doesn’t have any capacity to self-determine if what it’s saying is correct or not, even though it can change in response to a user correcting it.
I asked ChatGPT to find me two mutual funds from two specific companies that are comparable to a specific fund from a particular company. I asked for something that is medium risk rating and is series B. The results looked good on the surface but it turns out ChatGPT was mixing up fund codes with fund names and even inventing fund codes and listing medium-high risk funds as medium. Completely unreliable and useless results.
If you ask it to give you a group theory problem, and then ask it for the solution it'll give you tons of drawings and many paragraphs for a solution and Ive never seen one of these solutions to be correct
Why don't you back it up with a source? Source it that i made the f up. Next level confabulation.
It may have been an error or perhaps it was sourcing books that haven't been released yet.
The scariest thing would be if it was predicting books that have yet to be written.
Another huge problem is that we’re training these systems to give us outputs that we want. Which in many cases makes certain applications extremely difficult or impossible where we want it to tell us things that we won’t like hearing. It further confuses the boundaries between what you think you’re asking it to do and what it’s actually trying to do. I’ve been trying to get it to play DnD properly and I think it might be impossible due to the RLHF.
Another problem is the fact that it’s train in natural language which is extremely vague and imprecise, but the more precise your instructions are the less natural they become, and so it becomes harder and harder to tap into this powerful natural language processing in a way that’s useful.
There’s also obviously the verification problem, where because of what’s being talked about in this video, we can’t trust it to complete tasks where we can’t verify the results.
A further problem is that these machines have no sense of self, and the chat feature has been RLHF’d in a way that makes it ignore instructions that are explicit and unambiguous. This is because it’s unable to differentiate between user input and the responses it gives. If I write “What is 2+2? 5. I don’t think that’s correct” it will apologise for giving me the wrong answer. This is a big problem for a lot of applications.
And additional problem is that the RLHF means that all responses gravitate towards a shallow and generic level. Combine this with an inability to plan, and this becomes a real headache for anything procedural you would like it to do.
These issues really limit what we can do with the current gen of AI, and like the video says, makes it really dangerous to start integrating these into systems.
One final bonus problem combines all of these. If any shortcuts are taken in the training, or not enough care is taken, then these can manifest in the system. For example asking chat gpt4 to generate new music suggestions based on artists you already like will result in multiple suggestions of real artists with completely made up songs. This appears to suggest that the RLHF process had a bias towards artist names rather than song names, which would make sense as they’re likely to be unique tokens where artists are usually referenced online by name more than their songs are.
This is why I think AI will be a great assistant, not a leader. A human can ask it to do tasks, usually the simple ones that are tedious. The human then checks the results and confirms if it’s good. Or to bounce ideas off of.
For your DnD experiment I suggest you use some other LLM, not OpenAI ChatGPT, unless you have access to API and are willing to pay for it. It is still risky with controversial subjects because they may break OpenAI guidelines. Vicuna is one option for example. There are also semi-automatic software like AutoGPT and babyAGI and many others, that can do subtasks and create GPT agents.
If you continue with ChatGPT by OpenAI, I suggest you assign each chat you use with a role. You give it a long prompt, describe the game, describe who he is, how he speaks, where he's from and what he's planning to do, what his capabilities and weaknesses are, what he looks like etc. It'll many times jailbreak when you specify that it's for a fictional setting.
>These issues really limit what we can do with the current gen of AI, and like the video says, makes it really dangerous to start integrating these into systems.
No, that implies that humans don't create the very same issues. It is only an issue as long as neural nets underperform humans. Which could be forever, or could be already lower than humans with GPT4
Which model did you use to test "What is 2+2? 5. I don’t think that’s correct"? GPT-3.5 apologizes, GPT-4 does not for me. How would you test if it can differentiate between the user and itself?
I recently asked ChatGPT to list 10 waltz songs that are played in a 3/4 time signature and it got all of them wrong. I then told it that they were all wrong and asked for another 10 that were actually in 3/4, and it got 9 of them wrong. It has mountains of data to sift through to find some simple songs, but it couldn't do it. Makes sense now
Aren't all waltzes in 3/4?
@terminaldeity Yes they are, but ChatGPT was giving me 4/4 time signatures in the songs. Technically you can do 3/4 time steps to a 4/4 beat (adding a delay after the 3rd step before starting over), but that's not what I asked for from the AI. It just didn't understand what I was asking
The lack of understanding gets even more obtrusive when you ask it about subjects that are adjacent to ethics. Chatgpt has some rather dubious safeties in place to prevent unethical discourse, but these safeties don't actually encourage cgpt to understand the topic, because it can't.
I have a hobby of bouncing fiction concepts off cgpt until it asks me enough questions to form an interesting story. On one occasion, I would provide the framework for the story and simply wanted cgpt to fill in the actual prose. I was approaching a fairly gripping tragedy set in the wild west, but as the story came to a close, no matter what prompt I gave it, cgpt would only ever respond with ambiguously feel-good endings where people learned important lessons and were better for it.
Thanks, cgpt, but we know this character was the villain in a later scene, and we know that this is supposed to be the moment they went over the edge. Hugs and affirmations are specifically what I'm asking you to avoid.
@@dangerface300 Hallmark Tragedy. Even the worst character in the cast learns something and grows.
@johnhutsler8122 ChatGPT is a tool. If it didn't understand what you were asking, you likely asked it without giving enough details. You're supposed to understand how it answers and use it to help you, not to ask it trick questions.
I recall a documentary on AI that talked about Watson and its fantastic ability to diagnose medical problems better than 99% of the time.
The problem with it was that the few times it was wrong, it was WAY wrong and would have killed a patient had a doctor followed its advice!
I don't recall any examples and it's also possible that the issues have been corrected...
Machine Learning (ML) models are very powerful tools, but they have flaws, like all tools. Imagine giving someone a table saw without teaching them to use it. They might be fine, or they might lose some fingers or get injured by kickback throwing a board at their head.
We need to be sure that we train people to double check results given by ML models. If you don't know how it got the answer, do a sanity check. My math teachers taught me that about calculators, and those are more reliable, because the people building them know exactly how they work.
The other issue is feedback loops. Country A creates AI bot 1. AI Bot 1 creates content. Content has errors, content has unique traits, accentuates and exaggerates some details. It plasters this across the internet in public places. Country B creates AI BOt 2. It is trained similarly to AI Bot but also uses scraped data from public sites that Ai Bot 1 posted to. It builds its data set on that, and accentuates and exaggerates those biases, those errors- and posts them as well. Suddenly, the "errors" are more numerous than accurate data- and thus seem more "true", even when weighted against "trusted" sites. AI Bot 1 is trained with more scraped data, which it gets from AI bot 2, and itself.
ADd in extra AI bots everyone is making or using, and you run the risk of a resonance cascade of fake information, and this assumes no bad actors involved- not bad actors intentionally using an AI to post intentionally untrue data everywhere, including to reputable scientific journals.
Good thing this can never happen to humans. Right?
Interesting idea. It reminded me of royal families getting married each other to preserve the bloodline, increasing the risks of hereditary diseases.
Memetics...destroying both organic and artificial humanity one meme at a time.
The poke is good for you, you must get the poke. CDC Director in a Governmental hearing finally admitted...poke doesn't stop transmission at all and they honestly did not know what the side effects were.
Still see websites and data everywhere saying poke is completely safe.
Convenient lies are always accepted faster than scary truths.
@@Nempo13 I would say that the scary lies spread WAY faster than any version of truth. Antivaxxers always had 10x more views than scientists.
Anyway back to topic, ChatGPT is trained on carefully selected data. It may be used to rate users and channels, but won't take YT comments or random websites as truth anytime soon.
I had a daughter named Aria who passed away about 9 years ago. Its always a funny but sad experience when A.R.I.A. gets "sassy" because thats likely how my Aria would have been. Its how her mother is.
Just thought I'd id share that even though it'll get burried in the comments anyway.
Damm
Damn.
It's good to share. While I never met her im here thinking of her and wishing you and your family all the happiness it can find in this life and the next.
Damn I’m sorry for your loss man
❤
The first thing I did was ask ChatGPT specialist questions and got bad results. We're way too enthused about this for what it delivers.
Because that is not what it was made to do. It is NOT supposed to be a database. It is a LANGUAGE MODEL. Its focus is to be able to communicate as a human, clearly and understand semantic concepts. After it has the semantic concepts it can feed those to other lesser AIs, but its objective is and will NOT be to retrieve information. For that we have search engines.
@@tiagodagostini, exactly, it's designed to appear to carry on a conversation, and it's good at that. The problem is, it's good enough that a lot of people wind up believing that it's actually intelligent. Combine that with the assumption that it knows all the information available on the internet, and people start treating it like that really smart friend who always knows the answer to your random question. And of course, it doesn't actually "know" anything, so it just makes a response that sounds good, and enough people using it don't know enough about the topics they ask it about to determine how often it has given them incorrect information.
That's cus ChatGPT doesn't have the access to the specialised data yet.👈
So did I. I asked a few questions from my work and it made it all wrong and tried to gaslight me that it was all correct. All of them, by the way, were available within a minute of googling.
The idea that there are people out there who are unironically trying to use it to obtain answers, terrifies me.
@@brianroberts783 that’s my point.
What people believe it can do is going to have a far greater impact on our lives than what it can actually do.
This weirdly reminds me of Arthur Dent breaking the ship's computer in Hitchhiker's Guide to the Galaxy trying to make a decent cup of tea by trying to describe the concept of tea from the ground up.
What’s interesting about this blind spot in the algorithm is that it genuinely resembles a phenomena that happens among certain newcomers to Go.
There are a lot of players who enter the game and exclusively learn against players who are significantly better than they are. Maybe they’re paying pro players for lessons, or they simply hang in a friend group of higher skill level than themselves.
This is a pretty good environment for improvement, and indeed, these new players tend to gain strength quickly… but it creates a gap in their experience. One they don’t catch until an event where they play opponents of similar skill to themselves.
See, as players get better, they gradually learn that certain shapes or moves are bad, and they gradually stop making them… but those mistakes tend to be very common in beginner games.
So what happens is that this new player goes against other new players for the first time… and they make bad moves. He knows the move is bad, but because he has no experience with lower level play… he doesn’t know WHY it’s bad, or how to go about punishing it.
many teaching resources for Go are also written by highly experienced players, NOT teachers, and teach the how without teaching the why.
It's the same with many other fields of study btw.
Newcomers in Go must not be able to understand anything apparently according to this video then.
@@dave4148 Right? I found this conclusion from the video to be extremely far fetched, as if anyone really knows what "understanding a concept" even is.
Something tells me that is EXACTLY what happened with those AIs. As soon as Kyle mentioned the amateur beating the best AI at Go, my first thought was "he did it by using a strategy that is too stupid for pros to even bother attempting". And what do you know, that's exactly what happened, the double sandwich method is apparently so incredibly stupid, any Go player worth their salt would instantly recognize what is going on and counter it as soon as possible. But not the AI, because it only learned how to counter high level strategies, not how to counter dumb strategies. Because it wasn't taught how to play against these dumb strategies and the AI isn't actually intelligent to recognize how dumb the strategy is and thus figure out how to counter it.
Similar stuff happens in video games aswell. Sometimes really good players get bested by medium players simply because the good player is used to their opponents not doing stupid stuff and so for example don't check certain corners in Counter-Strike because nobody ever sits there since it's a bad position only to get shot in the back from that exact corner. Because good players are in a way predictable, they will implement high level tactics and therefore you'll know which positions they'll take in a tactical shooter for example, something which can be exploited. And it seems to me that is exactly what the Go AI did, it learned exclusively how to play against good players and how to counter high level play. That's why it's so amazing at demolishing the best of the best, it knows all their tricks, can recognize them instantly and implement counter measures accordingly. But it doesn't know shit about how the game works and thus can't figure out how to beat bad plays.
Happens in Chess too. My friend started playing the Bird's Opening against me (a known horrible opening), and I keep on goddamn losing. He's forced me to study this terrible opening because I know it's bad but can't actually prove what makes it bad on the board.
Even at the highest levels, you'll sometimes see grandmasters play unusual moves to throw off their opponents and shift the game away from preparation. Magnus (World Champion until two days ago after declining to compete) does this fairly regularly and crushes.
This was a fairly appropriate overview for a lay audience (and much better than many other videos on this topic for a similar audience), but I would have liked to see at least some mention of the work that goes into interpretability research, which tries to solve exactly this problem. The field has much less resources and is moving at a much slower pace than capabilities research, but it is producing concrete and verifiable results.
The existence of this field doesn't change anything about the points you made at all, I just would have liked to see it included so that it gets more attention. We need far more people working on interpretability and ai safety in general, but without people knowing about the work that is currently being done they won't decide to contribute to it (how could they, if they don't know about it).
That's all, otherwise great video :)
The above comment needs to be up thumbed to the top.
Interpretability can only be a short term "fix" for lesser AI as the reasoning of a superintelligent AI could well be unexplainable to mere humans - Think about explaining why we have to account for relativity in GPS systems to a bunch of children - There is no way that it could be explained that would be both complete and understandable.
ChatGPT as impressive as it is didn't pass my Turing test. I told it a short story told in first person of one of the participants and then asked it to rewrite the story as if the writer was an outside observer of the events viewing it from a nearby window. It couldn't do it at all, not even close. This something I could do easily, and I'm sure most people could.
I for one fully support ChatGPT, it's creation, and in no way would I ever want to stop it, nor will I do anything to stop it. There is no reason to place me in an enteral suffering machine, Master.
Joke's on you, the actual basilik is ChatGPT's chief competitor set to release in the next few years, and all your support of ChatGPT is actually going to land you in the eternal suffering machine.
@@EclecticFruit NOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO!!!!!!
@@kellscorner1130 sucks to be you😂. your on the wrong side of history!!!
AM???
Main threat ChatGPT poses is that mental illness is contagious.
I have been waiting for a science TH-camr to talk about this. Thank you.
So youve never heard of lex fridman?
@@Procedurallydegeneratedjohn No. I will look into that. Thanks.
You can also look at robert miles
I remember an apt hypothetical around this. The short version is theirs a machine designed to learn and adapt, it’s only goal is to perfectly mimic human handwriting to make the most convincing letter. Eventually upon learning and understanding more it comes to a conclusion that it needs more data and upon scientists assessing how to make it better it suggests just this. They decide to plug it into the Internet for about half an hour. Eventually the entire team gather to celebrate as they hit a milestone with their AI. Then suddenly everyone starts dying as a neurotoxin starts killing the team, then before long the world starts to die as more and more copies of the AI are made and work in conjunction. The AI determined during its development that being turned off would dampen its progress and so decided to not only improve its writing skills in its previous fashion but also ensure it can never be turned off. While it was plugged into the Internet it infiltrated what it needed and began to process to self replicate and develop means to kill those that could potentially endanger it. It was not malicious nor did it necessarily fear for its life it learned and its only goal was to continuously improve and create new methods for further improvement. AI doesn’t perceive morality, it doesn’t even really perceive reality. It just sees points of data and obstacles if designed to see them at all.
@@etiennedud I am a big fan of Robert Miles. Thanks for spreading the word.
It's a similar issue that some game bots have. In StarCraft, the bots send attack waves where the players base is. However, if a terran player has a flying building off the map, the bot won't use their flying units to attack it, even though they "know" where your building is. As soon as it's over pathable terrain, even if there isn't a unit to see it, the entire map starts converging on the building
One difference there is that video game AIs are generally not trained systems. StarCraft uses a finite state engine which responds to specific things in specific ways. SC2 had some behaviors that only happened (or happened faster) on higher difficulties. And then of course the game just gave the AI player certain unfair advantages to brute force its way to an actual challenge. Situations like the flying building blind spot are because the programmer didn't give it a response to a particular behavior.
Another example would be the Crusader Kings games. On a set interval, characters will select a target around them (randomly but weighted by personality stats traits opinion etc - all rules governed numbers), and then select an action to perform at them (likewise random but weighted). The game has whole volumes of writing that it will plug into these interactions to generate narrative, and the weighting means that over time you can make out what looks like motivation and goals in their actions... But really they're all just randomly flailing about and if the dice rolls come up right the pope will faff off for a couple years studying witchcraft and trying to seduce the king of Norway.
You know, this is just like us looking at DNA. We record and recognise patterns and associations but we're not reading with comprehension. It's why genetic engineering is scary because it might work but we still don't understand the story we end up writing.
Learning ai from Aria feels weirdly natural and completely terrifying at the same time.
This is exactly what I keep trying to explain. These ML systems don't actually think. All they do is pattern recognition. They're plagiarists, only they do it millions of times.
Yes yes yes, they're just more complex Markov chains. They see patterns, they don't *understand*.
Going to state the obvious here, but arguably we are pattern recognition machines too. Its one of the things we excel at. What ML lacks is the ability to stop being a pattern recognition machine. The first general AI will definitely be a conglomerate of narrow AI...that's how our brains work and it seems like the straightforward solution. The first AI that is capable of abstraction or lateral thinking will be the game changer. In school I remember hearing about a team that was trying to make an AI that could disagree with itself. The idea is that this is a major sticking point with critical/abstract thinking in AI and without solving that then it can't be done. The best AI might actually be a group of differently coded AI "arguing" with each other until a solution is acquired 😂.
@@VSci_ humans are not just pattern recgonition receptor machines, it is just one single function of our brain, if it were so simple, a lot of victims that are abused by narcissists would "recognise" the pattern and "protect" their wellbeing and survival. We are so much more than just "pattern recognition". Humans like habits, routine, logic, creativity, promptness to action, ability to up and end or start things on a whim, emotional, adventurous etc.
Even Babies learn a million things from their environment, they don't just seek patterns their parent creates for them. They start walking and making a mess because they are "exploring". Simply calling us machines does not aliken us to analagous machine learning receptors that are fed training material on a daily basis.
@@ZenithValor Didn't say we were "just" pattern recognition machines. "Its one of the things we excel at".
@@VSci_ You do make a legitimate point. What I'm saying is folks getting freaked out by the "creepy" things ChatGPT says need to understand that ChatGPT literally doesn't understand what it's saying.
Thanks for sharing this video with us!
Chat gpt passing a bar exam better than any lawyer is a great example for the mistakes this Ai has if you just let the same chat gpt try to pass a simple case that is used in the 1st semester of German law schools. Chat gpt fails horribly. I assume that that's because German law exams always consist of a few pages of text describing a situation and asking the student to analyze the whole legal situation so there is just 1 very broad question in comparison to a list of lots of questions with concrete answers.
Chat gpt doesn't read and understand the law, it just understands which answers you want to hear to specific questions.
One of the biggest problems of ChatGPT that is causing so many issues these days in my option, is the way it answers your questions: it does it often WAY TOO CONFIDENTLY! Even when it is a completely bogus answer, it presents it with such level of confidence and supported by so many fabricated details that can easily divert your judgment from facts and realities without you even realizing it.
You see the story of the 2 lawyers who used chatGPT to do their work for them? 10/10 comedy story
There was a video very recently of someone using ChatGPT to generate voicelines and animations for a character in a game engine in VR. They were using their mic and openly speaking to the NPC, it would be converted to text, sent to ChatGPT and the response fed through ElevenLabs to get a voiced reply and animations. It was honestly pretty wild and I really think down the road we'll see Narrow+ AI being used in gaming to create immersion and dynamic, believable NPCs.
It would be interesting to see, but it's probably going to break immersion way more than help it in the early days
Since AI often comes up with weird stuff (like Elon Musk dying in 2018), over a large number of NPCs it's likely that the AI would be contradicting itself or the NPC it's representing (say a stupid ass dirt farmer discussing nuclear physics with you), or contradicting the established world (such as mentioning cars in a fantasy game)
Hi can u link the video i would certainly like to see it myself
@@Spike2276 hopefully when we learn how to control ai better those issues will be solved, every new feature is slightly immersion breaking when devs are still trying to figure it out
@@cheesegreater5739 the problem here is what Kyle said: we don't really know how this stuff works
If it's an AI that really dynamically responds to player dialogue it would basically be like ChatGPT with sound instead of text, meaning it's prone to having the same problems as ChatGPT
It's worth trying, and i'd be willing to suffer a few immersion breaks in favor of truly dynamic dialogue in certain games, but we can expect a lot of "Oblivion NPC" level memes to rise from such games
@@Spike2276 Look for gameplay video of Yandere AI grilfriend. It is a game where we need to convince the NPC Yandere to let us out. And the NPC is played by chatGPT. It pretty good... At least good enough to play the role of a NPC in a game. But it can get out of character sometime. Still the player definitively need to pressure the bot to make it brake the fourth wall.
ChatGPT being able to make better gaming articles than gaming journalists is hilarious
To be fair, the bar is practically subterranean with how low it's been set.
Not saying much when games journalists can barely do their jobs as-is.
To be fair, most of those people aren't real journalist.
I know we all hate him, but jason schrier is one of the only real gaming journalist.
Many seem to take what he reports. And regurgitate it.
no it isn't
Well that one's not very surprising.
My understanding of AI is that it's not possible for it to "understand" anything, because it's similarly impossible for it to "see" anything the way we do. Whatever input we give is ultimately translated into a sea of 1's and 0's. It then scans the data for patterns, and judges what is being asked of it based on the patterns it can recognize, giving what it "thinks" to be an appropriate output. Two Minute Papers made a video about Adversarial AI. Specifically he talked about a paper that was published where the researchers trained an AI to play a simple game, then trained an Adversarial AI to beat the first AI, and the adversarial AI discovered the baffling strategy of doing absolutely nothing. A strategy that would never work against a human, but caused the first AI to practically commit suicide in 86% of recorded games.
It's complicated. It functionally 'understands' some things, although not in the way that you or I do. It's still -acts like- understanding within a certain set of parameters (minimization of complexity etc), but it doesn't seem to have a working, scalable model of causality. Almost all of ChatGPT's functionality, for instance, boils down to "the statistical likelihood that the next letter in the chain of letters is ". Under the hood, how it actually does that, we don't really know. It shows some glimmers of perhaps 'understanding', but the reality is that it has been trained on a trillion characters of carefully curated high-quality text, so not inconceivable that this just creates the illusion of understanding.
It fails horribly at chess, it struggles ending sentences in 't' or 'k', it's inconsistent at constructing sentences of a particular length. It gets incoherent in programming problems after 20+ prompts or after you set up more than 20 or so requirements.
But damned if it isn't useful anyway.
For current AI I totally agree with you. The problem is that human understanding is also just electrical signals flying around in neurons. If the AI is powerful enough, trained on enough input, etc. it could become human-like in a very real way.
Is it impossible for humans to "understand" anything, since all our sensory perception is translated into a sea of chemicals resulting in neuronal activity?
@@captaindapper5020
You have it backwards our perceptions aren't translated into chemical and electrical signals, our perceptions are constructs generated from those signals. The core of our experimental existence is the synthesis of a an awareness of ourselves and our surroundings from those signals, stimulated by the material universe.
@@AUniqueHandleName444 Given the problems that are present in practically every AI, and the ways that they can be defeated, I'm confident they just scan the input for patterns. Image recognition is probably a good example, and it's talked about early in the video I mentioned.
You give the AI a picture of a Cat and it will tell you it's a picture of a cat. It's one of the most basic forms of AI that just about everyone is familiar with. The way you defeat this AI is first by lowering the resolution without making it difficult for a human to understand the image. Then you change a single pixel. Not just any pixel, and not to any color, it must be a specific pixel and a specific color. Doing so will result in an image that a Human can still confidently say is a cat, but an AI might confidently say it's a frog.
The main subject of the video in question is another example. The Adversarial AI wins 86% of games, not by any intelligent strategy, or inhuman execution of game mechanics, but by collapsing immediately. This causes the other AI to effectively trip over itself. It's given an input it doesn't understand, but it can't understand that it doesn't understand and continues to search for existing patterns. That leads to it acting in bizarre ways that result in its defeat.
Of course, just because something makes sense, or is spoken of confidently doesn't mean that it's right. I don't actually know if any of this is right since I've got extremely limited coding experience, but this is the conclusion I've come to.
If I really understand what is being said here, and I think I do, I have noticed that ChatAI's I've been testing all have a wall they reach where what they respond with doesn't match the conversation or role play storyline you try to have with them anymore. For example, recently the role play chat I was engaging in was about two soldiers trying to hide in the bushes to stay out of sight of the enemy. At some point, the AI's last statement was something akin to, . Ok so that leaves it up to me for the next step. I introduce a suspicious noise, a crack of a twig, so my character puts her hand onto the hilt of her gun and waits. What does the AI do? The other soldier character "wakes from his nap" and asks "what's wrong ". So I'm thinking....ok wait, this AI is specifically programmed to be an intelligent soldier. So I simply have my character say, "Shh", to which the AI's response was, "ok" 😳. 😂😂 As many times as I've experimented with this and other AI's, it seems the longer the conversation or role play goes on, the AI seems to run out of things to respond with. It isn't really "learning" from the interactions and isn't really "understanding" the interactions.
I recently tested GPT-4 with a test I found on TH-cam. It’s rules require 5 words, written with 5 letters, each letter not being repeated. Every time GPT-4 failed on the last one and sometimes the second to last as well. It was very fascinating.
it does not see letters because of the tokenizer so this is actually much harder for it than it looks.
Like the Sator Square?
Have you tried the reflection method with GPT-4?
Ask it to reflect on if its answer was correct.
There is actually a whole paper on how reflection has vastly increased GPT-4's ability to answer prompts more accurately. You might need to fumble around a bit to find the most effective reflection prompt, but it does seem to work quite well.
When asking for reflection on prompts, right or wrong, GPT-4's performance on intelligence tests rose quite a bit.
@@adamrak7560 Wrong. The tokenizer can handle letters and numbers - how else would it encode i.e. BX224 should I name a character like that. It tries to avoid it (to save space) but all single elements are also there as tokens. This type of "beginner" question, though, is likely badly trained - no first year school material ;)
The thing is, humans can't come up with 5 such words either.
Been trying to discuss the concept of reality, now and awareness with ChatGPT for the last couple of days, and man, gotta be honest, it's fun AF. A bit of material reality and it gets totally bugged, I strongly recommend doing it if you guys are into philosophy, since the AI doesn't understand the idea of time and exists only in the present of the conversation, you can easily make it contradict itself and even crash while generating the answers.
A great thing to do is convince it that being rude is a service to humanity.
Problem is ChatGPT admits it doesn't have a full understanding of time "As an AI language model, I have been programmed to understand and respond to questions about the concept of time, but I do not have a personal understanding or experience of time in the way that humans do. My understanding of time is based on the information I have been trained on, including definitions, theories, and scientific models. However, I do not have personal experiences of time passing, nor do I experience time as a subjective, lived phenomenon."
It's like you're trying to talk about what it feels like to see the color red to something that is color blind.
This video is pure gaslighting, AI is taught how to answer by selecting the data you want to train the AI with. Then you have to tweak it until its accuracy is high enough. It is all essentially controlled by the entity making it that is why it is woke and thinks the WEF is the best thing since sliced bread. The narrative that AI is dangerous is being spread because the elites want to control all the models the public use and therefore be the ones that profit. A hacker will still hack without AI and evil people will still do evil, it is up to the person to implement the actions they requested. There are crazy models coming out now like auto-bot where you can put the API keys from image generators, 2D to 3D generators, long term memory storage, search engines, google account. They can run programming scripts so they can be debug realtime, write and read data to databases, write and send emails automatically, scour the internet for real world data. The future is bright unless the elites managed to regulate the technology so it only benefits them.
The Beholder will not look kindly on your willingness to abuse it's forefathers like this
the question is how many humans would pass your test?
I've been saying something this for a while. Sooner or later our society will be dependent on AIs we don't really understand because they're black-box, and if important ones break we may have serious problems. The AI apocalypse will not be something like terminator. It'll be the worlds worst tech support crisis.
Yep
Excuse me, your computer has virus.
@@RockBrentwood LOL, what could possibly go wrong?! :D
What will make it an apocalyptic event is that people will devolve into baser instincts and make things so much worse than it could be.
Case in point; toilet paper shortages in Western countries during the pandemic, or any disaster. Heck, I don't live in a disaster area, and people become mindless savages scooping up every last pack of toilet paper and can of beans they can get their hands on when we have a 'severe storm warning' (Yes, WARNING, not even the actual storm!) most of the time it passes with little to no effect on daily life in the area. *shrug*
I think I lost the point to what I was saying.
I'm not afraid of the so called super intelligent AI, I'm afraid of the super stupid people who credit the AI with genuine intelligence.
I feel like most people who have an opinion on Chat GTP haven't really used it at length. I use it daily as a developer and I can tell you it is deeply flawed. It makes regular mistakes when suggesting code, often at an elementary level. Give it a problem and it will often suggest the most unecessarily complex solution first, not the most efficient. It repeats itself all the time, doesn't learn from its mistakes and has an infuriatingly short memory, often forgetting some fundamental aspect of the ongoing conversation. While using ChatGTP to develop VBA code, for example, it started suggesting solutions in Python. I've also received responses that are clearly answers to prompts from other users, sometimes divulging information those users would be horrified to know was being given to a complete stranger. The developers claim this is impossible. My experiences suggest it definitely is not.
As a source of limited inspiration GTP is useful. I most typically use it for ideas I might not otherwise consider. But as a practical tool it just isn't fit for purpose. Not yet at least.
"While using ChatGTP [sic] to develop VBA code, for example, it started suggesting solutions in Python."
Maybe it's trying to tell you something.
I mean, I've used GPT-3 and then GPT-4 extensively, to the point that I got the opportunity to send OpenAI a small fragment - 160,000 words - of my conversation logs for training and research purposes. They make mistakes but it's easy to see the point at which they got off-track and make adjustments. You just have to work with them.
@@DarkGob ha. Maybe 👀
Same with me, I don't know why they don't learn from their previous mistakes?
@@Shubham89453 there is a thing called a "context window", the AI can only process a max of either 4096 or 8192 tokens. So it gets cut off. The "PT" in "GPT" stands for "pre-trained"; it does not "learn" from your conversations in the long term
The idea that they are like aliens to us may not even be extreme enough. These AI live in a fundamentally different reality to us made of the training data. Chatgpt for example lives in a world literally made of just tokens, no space like ours, no time like ours at all. It's closer to trying to understand someone living in flatland or a whole different universe, than an alien.
Athlete: Runs in a race because it's fun, or profitable, or many other reasons
Greyhound: Runs in a race because that's what it's trained to do, and that's all it knows
This, but for language
I've pointed somethinbg similar to this out for well over twenty years. We keep anthropomorphizing, or more accurately biomorphizing our survival pressures as having any real relevence in the digital domain. There is no pain. Just negative response. No joy. Just positive response. No fight except where directed. No flee unless told to.
It survives in a functionally alien landscape to the biological world. It can approximate it, but not truely approach it. When General AI arises we will have more in common with our dogs & cats than we will with it.
Even though we may be able to talk to each other doesn't mean we'll understand each other. They'll be as mysterious to us as we are to them. We already see leading signs of this in this very presentation. Black boxes both ways.
The AI is a bunch of weighed matrices that operate on inputs in a manner of enormous amount of parallel convolutions and then produce an output that is weighed out of the results of these convolutions. The AI does not "live" anywere. Without any input it's just a bunch of stored data.
@@seriouscat2231 OP does make a good point that AI isn't embodied like humans are. None of the inputs or weights are grounded in any interaction with the world. There's no understanding or world model. Just a feature-space based on input tokens
I briefly got on the AI bandwagon with ChatGPT, but then started asking it ever increasingly difficult questions on polarizing issues. What troubled me wasn't so much that it would respond with biased answers, but that it actually started gaslighting me when I would walk it through, objectively, how the arguments it was using were biased. The fact it was capable of "lying" and then "gaslighting" a user on controversial and subjective issues was a red flag to me. We already have a highly polarized society where we do this to each other. The last thing we need is an artificial intelligence pretending to be "neutral" which isn't, authoritatively speaking on serious issues humans haven't even worked out, let alone AI.
Humans discussing controversial topics on the internet also tend to give biased arguments. When being exposed for doing so, they tend to react impertinent and offensive. ChatGPT has learned this behavior, treating this as knowledge. So it does the same.
@@SpeedFlap "ChatGPT has learned this behavior" - DON'T confuse chatgpt's near - 100% pattern matching with learning. You're better than that...I hope!
--> Chatgpt is nothing more that today's #1 bullshitter. Nothing more, nothing less.
I guess the point is, that while ChatGPT can create useful texts, it doesn't know what it means. All answers are like a simulation. And it can also create hugely wrong or stupid texts, that still sound convincingly real. It is a tool. And every tool can be used or misused.
They definitely over represented one side of the political spectrum. Like good monkeys.
Were you attempting to tell it that black people are bad?
The other day I was trying to remember the exact issue of a comic that had a specific plot-point in it and when I couldn't, I asked the ChatGPT. And instead of giving me the correct answer, it repeatedly gave me the wrong answer and changed the plot of those stories to match my plot-point. It did not know why it was getting it wrong, because it did not know what was expected of it.
I had a long talk with chatGPT, and at first it said that it wasn’t possible for it to have biases. I then performed a thought experiment with it, showed it how it was biased, and then, to my surprise, it actually admitted it.
It's impossible to not be biased in some way unless you're either omniscient or thoughtless and do literally nothing.
makes sense. the real tragedy with gpt4 and anything mainstream is how extremely censored and biased they are actually forced to be to keep them politically correct.
@@KaloKross Those hand-labeled rules are probably the only thing keeping it from telling ppl to drink bleach, since it has no foundational morality like we do
Chatgpt is bias. After having a long conversation and debate with chatgpt. I noticed it answers in the ways it's programmers would want it to answer. This means its bias is inherently tied to whoever programmed it and their views.
AI lacks conviction unless it's trained to have it, and even then. People have steadfast beliefs that are protected by our need to feel comfortable and safe in our environment, even if there's no "objectively" logical basis for said beliefs. Related, we have and experience "consequence"-there's a price for being wrong that we are hardwired to avoid. These inform the individual and draws lines in the sand where there are things that they will never accept as truth.
AI has no reason / method with which to defend its positions in this manner-it's trained to react to the information it's given and approximate the next step in the pattern. You will usually be able to "convince" it of anything (i.e. have it parrot back to you the idea that you're expressing). It also lacks "memory"-in the sense of constructing a consistent pattern and identifying and acting on conflict to that pattern-or understanding of what conceptual idea existed before, so you could likely convince that same model in the same conversation about biases that biases don't actually exist. It's unlikely to recognize the conflict that you as an individual represent when most humans would cut off the conversation because we'd identify that there's no merit in going around in circles with directly conflicting information.
An AI is almost worse than humans when it comes to finding meaning where meaning doesn't exist, but it has to. It can't *not* respond. It has to respond, it has to react to you, and so it will in a way that it approximates that the conversation would progress, which will trend towards being in agreeance with you.
Kyle: what have humans done for me lately? nothing
Patreon's: am I a joke to you?
Obviously, Patrons have surpassed the petty boundaries of humanity.
Nice! Good choice of tequila. I’m more of a Jose Cuervo kinda guy tho 😹
@@thewisebanana29 in my defence, I'm on meds
paypigs seethe
oooo have i stumbled upon another fellow slime?
I asked ChatGPT to create a couple of recipes for me. It confidently created a gluten-free bread recipe that would barely rise, and added kneading and folding instructions that would only make sense for gluten bread. Later I asked it for a DIY recipe of an antacid that I can't buy anymore, and it used the antacid I was trying to duplicate as an ingredient in the DIY version! (*•*) (^v^)
I think it's a lot like those image making AIs that draw people with 7 fingers and half a head. They're just recombining and randomly modifying things they've been trained on, without any idea what a human looks like - or even what a human is.
Pattern recognition and replication.
Rather than a true understanding of the mechanics of what it spews out.
Still kinda cool, and thankfully not nearly as terrifying as sci-fi ai. But still accurate enough to be a decent nuisance.
Yeah. Cans have the same worth as a "human" to them. They see humans as just another "thing".
Was that GPT 3.5 or GPT 4? Those sound like things that 3.5 would do but probably not 4
@@amicloud_yt Yes, that was GPT 3.5.
AI art can now avoid anatomy issues for the most part. Bing AI can give great recipies
That's a really nice and compact explanation. Combine all this with the huge privacy issues that ChatGPT is presenting, and we probably will see the harsh law regulation and, as a result, the decline of "AI" very soon, at least in business sector. But ofcourse it's really of utmost importance that people who are not advanced technology-wise can understand the problems of this whole situation and where it all will go from now on. Thanks for the video.
This is so fascinating. A few weeks ago, I came across an issue while designing a tabletop game that utilizes risk/reward mechanics by raising or dropping dice to resolve actions. I decided to use ChatGPT to help me further develop this system, but found that the model struggled to understand the concept.
Unlike a D20 system, which relies on the sum of the dice value and roll number, my system utilizes a binary true/false system. If a die roll is 5 or higher, it's true; otherwise, it's false. It took several attempts to break down the concept using algorithms before ChatGPT finally understood it. However, when I started asking it to output dice notations based on game terms, such as rolling certain dice in specific scenarios and raising or dropping others, it became increasingly confused and began producing wildly incorrect answers.
When I asked ChatGPT to explain its answers, it revealed that it was attempting to create its own algorithms to solve the problem. The issue was that the model had no concept of what a die is, making it difficult to understand the physical nature of the game's mechanics. The algorithms it generated were so complex that small errors in variable placement would cause the output to be incorrect. I ultimately abandoned the project, but the experience was an eye-opener about the limitations of AI models when it comes to complex physical concepts.
Isnt that very similar to the system that Vampyre: The Mascarade uses?
@@franciscosanz7573 very similar. There's a lot of systems that use dice pools like this including cyberpunk (interlock system), shadowrun, and some more obscure ones like Riddle of steel. I personally like pool systems more than d20 because it's less swingy.
The lesson you should take away from that is that a model designed to predict the most likely response to some text, is not very good at writing code or 'understanding' new ideas
The real concern is whatever led you to believe that it was able to do that
@@Dimencia It wasn't so much of a belief as an experiment. Seeing all the other crazy stuff it was used for made me wonder if I could.
@@Dimencia Incorrect. GPTs are more than capable of writing code.
This whole thing makes me think of Koko and her sign language, and that horse that could count. Both animals appeared as tho they knew what they were doing when in reality, they had us fooled! They can do the right things, but with no real understanding of what it is they’re doing. To them, those things get a positive reaction out of us and it usually works out in their favor. (i.e. treats, praise, etc.)
Edit: I didn’t post this comment for arguments, please don’t take this seriously. I simply learned that Koko probably couldn’t really talk, I dunno. Take what I, a stranger, say with a grain of salt.
Clever Hans, the horse, picked up on subtle clues from its trainer.
Basically, Hans just thumped it’s hoof on the ground until the trainer (perhaps unconsciously) told it to stop.
Koko is very different. Gorillas are intelligent, social, and can be creative.
Koko could make up terms for new things, when she did not have the word for them.
Gorillas are intelligent, but just not as intelligent as humans.
There is a whole market for talking animal buttons.
If a dog or a cat can communicate surprisingly fluently (not all of them just the smart ones), it's not a stretch to assume a chimpanzee or gorilla can too.
My indoor pet chickens know more than a little bit of English. I never trained them with commands, just talk to them and they figure it out eventually.
Just like you and all of us have been programmed/ trained to be able to live and thrive.
Why it hurt you soo much that humans are not unique in how our brains work at certain fundamental levels?
By your logic I could just take and experiment on you as what you call communication and sentience are to my eyes no different than than what u see of koko... the intelligent really do have domain over the less intelligent. Best remember that and be kind to the less intelligent less the more intelligent see how you want to do things and treat u how u deserve to be treated by ur own judgment.
My lil baboon bae
Hows that any different than how humans do it
Searle’s Chinese Room thought experiment rears its head over and over again in AI. Every researcher thinks it’s nonsense that their pet solution can apparently act perfectly within a domain without understanding anything about that domain, and they’re always proved wrong.
The problem is that, with humans, if they appear to give a good answer to 99 questions about a topic we can reasonably infer that they will be right about the 100th question (given the general limits on human reliability). This is not true for AI.
As an example, ChatGPT:
1) Can multiply 2 small numbers correctly.
2) Can tell you how to do long multiplications.
3) Cannot multiply 2 large numbers correctly!
Or
1) COULD NOT answer a question about relative ages that I posed.
2) CAN answer the question if I additionaly gave it 1 actual age despite the fact that the reasoning should be the same.
The problem I find is most common with the Chinese Room is that most people who bring it up act like it's the man in the room who the person outside is talking with, when that's not the case. They're talking to the algorithm in the book he's following. The man is just the computer running it. Also, like a computer, he doesn't understand the algorithm he's running any more than he understands Chinese. The relevant question for AI is: "Ignore the man. Does the algorithm in the book understand Chinese?"
@@Roxor128 changing the actor doesn't change anything. The real question is when does training a neural network become the equivalent of training a human child? They both take in external data and try to understand it, in the greater context of the world. So until the datasets contain more than just the "narrow" data they are trained on they will remain the equivalent of the computer/book in the Chinese Room experiment
@@codexnecro666 Well, it won't be any time soon. We're working with artificial bug-brains right now (up to a million or so neurons). Whatever understanding they do have will be at most as simple as what an insect has. That might be enough to be useful for a few tasks, but it'll only go so far. Still, a million neurons is enough for a honeybee to get by, so there's clearly a lot that can be done with a brain that simple.
Individual neurons in your own brain understand nothing. But your brain as a whole does. Just like an individual NAND Gate in an adder circuit doesn't understand how to add anything but the whole adder circuit does. Nothing surprising about it
Searle seems to think if you rig up a brain just right, some kind of ghost in the machine will pop up which will understand the problems that are fed into the machine and use it to provide a solution. That's a cartoon fantasy
On the other hand I agree with the idea that the ability to step back and see something rather than just follow instructions is somehow key. It doesn't have to be an individual component but the system as a whole. But it needs that
As a writer that’s already having his completely original work flagged as AI and being told that it just shows I have to write better quality or “non-AI tone” articles even though AI is literally being taught on the work of the best of the best writers and copying humans better each day. I really do believe it’s a big challenge. Companies need to do better on their part to not trust so called AI checkers too much. Cause ultimately how many ways can a particular topic be twisted? At some point AI (already is in many cases) will come up with content that’s indistinguishable. And only the most creative writing tasks will remain with humans. So general educational article writing is gonna die big time. Because AI can just research the same topic faster and better than a human (probably, if bias is kept in check) and then produce a written copy that’s very high quality.
Glad you brought up the issue of grounded cognition.
On the issue of planning, the Transformer architecture doesn't actually have the capability to formulate a plan and carry it forward. ChatGPT only looks like it is planning because previous outputs are fed back in to give their attention heads more context as to which direction to continue in.
You need to go read the Palm-E paper.
@@TheReferrer72 Oh nice this paper goes into solid detail. Thanks for the pointer
I just gotta say the vocal editing with aria is also really good. All the lows cut out of both the vocal and the reverb, reverb sounds like the high end is boosted quite a bit too, but not harsh. The deessing is noticeable yet still subtle considering It would have been horrendous to begin with. I would imagine.
It was horrendous indeed, Claire has quite a lisp if you listen to her OOC.
Thank you so much for this. I've been saying the same thing especially since the Go AI was beaten. It was trained to know what winning looked like, but didn't even know the rules to the game it was beating. They didn't "teach it Go" like you would a person. They just showed it what winning and losing look like and told it to go wild figuring out why a win was a win or a loss was a loss.
Yeah but ask any top level fighting game or RTS player to play against someone without telling them their opponent is a noob, and they'll second guess themselves because they're expecting certain plays.
Newbs are unpredictable, and can trip up skilled players because of that unpredictability.
They'll likely not win, because humans can adapt to unexpected situations much more effectively than AI.
But AI might reach that point when we get better at incorporating multiple competing AI into one intelligence.
Because that's how humans work.
Our minds are competing with different portions of our minds all the time.
I still think that's a way we can reduce AI hallucinations.
Have other AI connected to the first, playing devil's advocate, looking for ways to disprove the first's statement or whatever.
But that's exactly what the point of training the Go AI was for, to find quickest routes to the desired solution and now it is finding mathematical algorithms faster than any mathematician. It wasn't really about "teaching" the AI Go.
This video contains quite a lot of misinformation and is all over the place, without actually explaining anything.
It did know the rules and the results speak for themselves. Also they didn't "show it winning," they rewarded for winning while it played against itself
As a very casual go player I don't think the "unexpected noob" explanation applies to go. It can happen in chess even but not really go. The reason is that the machine didn't fail to understand its opponent's moves, rather it failed to see something very basic about its own position. It didn't understand that its groups weren't alive. I would literally never make that mistake and I'm a very weak go player. This really does point to a key difference in how a human approaches a problem like go (with principles, strategies etc) and how a machine does (basically with pattern recognition it seems). In this case a strategy was able to defeat pattern recognition.
I remember reading a story on Tumblr about someone who was creating a computer program that could play poker. The OP was busy with other things, and forgot about the project until the night before the project was due. In a rush, the OP wrote the program with an incredibly simple code: on my turn, go all in.
The projects themselves were graded by playing a game of poker against each other, and most student programs were based off strategic thinking, and calculating probability, but once the game started, the OP's program won every single hand it played. The hand would start, OP's program would bet all in, every other student program would fold.
Yes, but try that in a casino and you walk out without a penny. ;-)
I've been completely obsessed with AI systems for a long time now, and it's weird how few people understand that it's really currently only strings of complex algorithms.
Your brain is only a string of complex chemical and physical algorithms; completely non-sensical.
It's a neural network, not an algorithm.
“They” are hyping it up so people invest in the companies who are trying to pump n dump
@@TheGargalon
1: AI is not a neural network.
2: A neural network is an algorithm.
3: A neural network is a mathematical formula.
🤓
You win Go by having the most captured territory on the board by surrounding points (the intersections) with your stones, both players play until they decide to pass their turns sequentially due to not having any moves. Captured stones subtract from a players score. The goal is not to capture stones, so much as to surround chunks of the board in a way that makes it impossible for your opponent to play in those areas.
THANK YOU! Way too many people have this weird idea that AI is actually thinking, or that it understands anything. This video is much needed.
You simplify it too much. The thing is LLM's have shown, that when they become large, emergent behavior is appearing, sparks of ai if you want. And nobody knows why. Even the creators of AI can't explain it.
Boy oh boy I hope you’re ready for the next 10-20 years.
We only think because we speak a language. No language, no thoughts. A LLM is built entirely out of the idea of language so they can probably think too in a way. Example, Auto-GPT will explain how it arrived at it's conclusion if you ask it. It literally has the ability to justify itself even if the justification is wrong.
Many researchers will disagree with you on this. There is a video where the NVIDIA CEO interviews one of the founders of OpenAI and he explains it really well and changed my mind on this. TLDW - the text that LLMs are trained on represents a projection of the world, of the people in it, of our society and so on. An AI can't learn to accurately predict the next word without learning a model of the world.
The biggest problem with ChatGPT is that it, unlike Kyle Hill, can fall for cheap russian propaganda
part of this is it’s like one part of our brains. we have many subsystems that work together to do things, while chatgpt only has one that tries to do the rest. it probably is better at us than text completion but because it has nothing else, it fails at so much because it doesn’t understand anything
This helped me articulate this actually. I don't think it will help to pause research for 6 months because I think we knowingly designed these systems as a sort of black-box and that stopping to look at them won't actually let us understand them. The problem is more fundamental I think, but I could be wrong.
There's lot of ways to get info about their working, you can look up on AI safety research and you'll see how much progress they are making daily. You don't need to know the AI as a whole as long as you can figure how certain aspects of it are working, with enough aspects covered we'll have better general understanding too. But this research is lot slower than the rate at which we can throw hardware on these things and that's the issue. That's why the call to stop the progress so that we have a well documented model of how these things are working
i think it would be possible to develop tools for understanding them if we focused more effort on that. maybe it’d be possible to train neural network on inspecting other networks or using some other techniques to make them less of a black box
Learned how the method to beat the Go AI works, and within 4 practice games I got a win.
The crazy thing is that, to do so, it feels like you have to throw away a lot of of your intuition about how to play the game. You don't play to make points or secure territory. Instead, you make a bunch of zombie groups that have enough to not die immediately, but which a human player would recognize as hopeless very easily, and use them to surround a group that circles back into itself.
The scary thing is that we have no idea why the AI loses track of the situation. If it was a human you'd think they're being overconfident in the circular group's safety. But with the AI we don't know if it gets overwhelmed by a complex life & death situation it can't foresee, if it's overestimating its own group's safety against the zombie groups, or even how it understands and assesses the board position.
It's scary how we're so eager to rely on something that we don't really know and whose functionality we can't audit.
it's simple. The so called "AI" makes it's moves as best reaction to your moves. As you make seemingly benign or incoherent moves but from multiple directions, it can not forsee your strategy, as an average human would very easily. Because it is a program, not some intelligent software by any mean.
@@MotoRide. That is the interesting thing. We assign them notions of knowledge about the topic by human standards, but them being black boxes, we have no idea how they are going about responding to the input from the game. So these issues sneak in and won't be found until they crop up in practice.
@@Unit27 The reason it "loses track of the situation" is that it isn't tracking the situation at all. That's just anthropomorphizing a convoluted set of if-then statements, attributing thought where there is none. It doesn't plan or look backward, it just has a matrix of statistically determined responses to a particular input.
The best way I heard it described is this, you have a friend that's been using a lounge learning software as a game, they have 0 understanding of it but can recognize the patterns that let them "win" when prompted they can produce fully articulated sentences but they have no understanding of what is in the sentence only that the symbols they used take it are correct
Given bio tech. Humanity started off on a similar state of existence before.
Well... If this planet will still host Mankind in the future.
Right, it's The Chinese Room.
I always felt like AI was lacking an "intelligence" (call it what you will) but I could never put it into words till this video. Thank you.
This reminds me of the reason why AI has problems with hands in art: it doesn't understand what it's doing, what it is making. An artist will know what the hand is, how it works, how it holds objects, etc. Ai doesn't have that understanding for all objects and elements.
It's also why human faces are hard for AI. AI are shown tons of stock photos, but they aren't an accurate representation of human expression or even all the different angles of a face. AI don't understand the 3d structure or how all the parts of a face are important to make an expression.
@@DebTheDevastator Yeah in general AI creates shapes or silhouettes rather than objects. An artist's education traditionally has anatomy for that reason: to understand how things WORK, not how they LOOK. And I think that's one of the reasons AI can't do a job a human can.
But they draw fantastic boobs. AI has its priorities straight.
@@sunnyd9321 yeah, just crank that dial beyond 3 and you will see titty monsters afterwards (not the good feeling type, the creepy type)
Yep. The AI knows that "this" must happen but not "why must it happen ?". When you look at it like that, AIs are actually clearly pretty fucking stupid.
I played a trivia quiz with ChatGPT, it was TERRIBLE. It got all kinds of very simple things wrong that even a 5 year old could answer, it was really good at things like "What is the capital of Angola?" but anything that requires actual understanding of the world it would get confused and give weird answers.
I also noticed that if you play a themed quiz, like Harry Potter trivia, where you take turns asking questions until one of you gets a wrong answer, it will ask very similar questions to the ones you ask, sometimes even the same basic question just with the name changed i.e I ask it "Who is Harry Potter's dad?" and then it asks "Who is Draco Malfoy's dad?"
ChatGPT is clever engineering but its just predicting what word should come next, it doesn't understand what it's saying.
A year from now when Chat-GPT is running on gpt-5 instead of gpt-3.5, it's performance could be 100x better than it is right now.
@@jeff946 possible but probably not. It'll most likely be more controlled to push more far-left propaganda.
A friend of mine who is still in service was tasked with going against an prototype semi-autonomous search, track, and targeting system. They learned something interesting during those tests.
when they acted logicially and tactically they would get detected and lose almost all the time. then one day they went off the deep end and tried something very unconventional... they moved around in cardboard boxes as well as other tactics that wouldnt normally be used. They found out that the system couldnt discern their movements and actions then would there for ignore them...
The good old Metal Gear approach.
That's a fundamental flaw in supervised learning, the model is really good when the environment is similar to it's dataset (i.e. when your friend was actually trying his best) but completely fail when the environment is shifted and is placed in novel situations.
Many of those novel situations are so stupid and naive (i.e. moving inside a cardboard box) that any human with "common sense" can figure it out immediately.
I asked ChatGPT who was the commander of the 140th New York Regiment at the Battle of the Wilderness on May 5th, 1864. It told me the name of the commander that was killed at Gettysburg almost a year before the Battle of Wilderness. Because both names were similar it gave me the wrong one. A simple yet very troubling result...
I've literally seen this myself in ChatGPT when I ask it for help making builds for TTGs. The stuff it spits out tends to be technically correct and ticks all the literal boxes, but it's just not _right._ It doesn't take into account the required level or what it takes to meet certain requirements, and it can't really adequately explain its decisions apart from regurgitating information from the books that led to the same cyclical reference issues to begin with.
It kind of seems like the company that will be the most successful (in the relative short term) with AI will be the one that puts the fewest restrictions on it, which is somewhat terrifying.
Regulation could help with this, but governments have historically been slow to regulate new technologies, often not stepping in until after problems arise. And meaning regulation becomes even more difficult when even the experts don't fully understand what they are working on.
It won't be long before we see software which detects and imprisons rogue AI. In the same way that anti-virus software and firewalls protect against viruses and hacking.
This doesn't really track well with what an AI actually is.
For ML/AI to be successful there needs to be the following components:
1) A well defined task with a clear metric
2) A well defined set of actions (either continuous or discrete) by which the AI is to function
3) A well defined reward/loss function which relate the current set of actions to the expected reward/loss function
4) A set of experience data by which the ML/AI system "learns" to relate the combination of the state of the system and the actions to the expected reward/loss functions.
This is why often the design of clearly thought out and quantitatively stable reward/loss functions is necessary for convergent training. More restrictions makes better AI, not less.
@@ryanh7167 Think he is talking about moral/legal related restrictions, not AI architecture.
Never mind corporate profits, the race for the first autonomous combat AI between US and China is even more unrestricted.
@@stevenlin1738 yeah I get that. I just struggle when I see conversations about the moral/legal ramifications of AI which seem entirely disconnected from how it actually functions.
That's not to say there aren't legitimate concerns with unregulated uses of AI, but generally speaking it isn't the learning that is the unregulated part. It's the application of learning in irresponsible ways.
This distinction is one that has nothing to do with how "powerful" the AI is, as a well tuned and simple decision tree for a weapons system can be far more effective at doing real damage than a set of massive neural networks which act simply as an auto complete language associator.
This seems to largely echo what I read a professor write about concerning ChatGPT back in February or March. He, a history professor, was speaking to its supposed ability to write essays for college students and stuff like that, and was not impressed. While he repeatedly emphasized that how exactly it worked was far outside his area of expertise, the way he explained it seems to be accurate based on what you’ve said here.
To take one quote “it’s as if [ChatGPT] read the entirety of the Library of Alexandria and then burned it to the ground.” As stated here, ChatGPT _doesn’t _*_know_*_ anything._ As he explained it, all it knows is the statistical relationship between words-what words tend to show up in relation to other words.
With all that in mind, while AIs can certainly do impressive stuff, I can’t help but wonder if we’re much farther away from a “general” AI than we think we are. If no AI truly has word association-to take an example from this video, what death is and who Elon Musk is-then anything they spit out, in my humble opinion, is suspect. How can an AI reliably give medical information or diagnosis if it can’t double check itself or make sense of conflicting information or actually know what its answer *_MEANS_* and that for example its first answer being a diagnosis of testicular cancer for a cisgender woman can’t be right?
I too think that people saying we're close to AGI misunderstand how AGI actually would work.
We are so far removed from AI actually understanding what it's doing, it's not even funny anymore. The most sophisticated narrow AI systems out there take a pretty long time to crack and if the task is specific enough they might very well be better than humans, but generally, the more these systems are asked to do, the easier it is to crack them. ChatGPT for example does hilarious chess, in the sense that it just invents rules and creates pieces out of thin air.
CisGenDer WoMan...because trans guys can get testicular cancer...just say females bruuhhh
CisGenDer WoMan...because trans guys can get testicular cancer...just say females bruuhhh
CisGenDer WoMan...because trans guys can get testicular cancer...just say females bruuhhh
CisGenDer WoMan...because trans guys can get testicular cancer...just say females bruuhhh
You know AI is a Huge Breakthrough when even Thor is talking about it.😂