To get you excited for this year's Christmas Lectures, which are on the theme 'The Truth About AI', we've got our 2023 Lecturer Mike Wooldridge talking about the history and future of generative AI. If you're in the UK, you can watch the lectures on BBC FOUR and iPlayer from the 26th December, and if you're outside the UK, we'll be uploading them to this channel on the 29th December. Who's looking forward to watching?
Love the Christmas lectures and I'm esp. excited for this year's lectures. Thank you for uploading them so early after original airing for us outside the UK.
Liked the speech! Yet the progress is so rapid that some points are no longer true. For example, AI language models already do reasoning under the hood and their answers are much better.
Definitely a rhetorical question 😊 the Christmas Lectures are an institution in themselves! 😃 There needs to be a mahoosive celebration in 2025 for the 200th anniversary. 🎉 Will certainly be tuning in on Boxing Day. 🎄 📺 🤖 👀 👍
There are times I don't think there's much to be proud of here in the UK, but the fact these lectures are still going, that they are watched and available for free on the website, going back for years, reminds me that some things in the UK are pretty damn cool
As Mr Wooldridge kept saying, the big money that is driving rapid improvement in the capabilities of AI is primarily coming from Silicon Valley... which is not in the UK. DeepMind's UK-based scientists are doing great work, but they're owned by a US company, and as they change jobs, all those not in love with driving to the pub will over time move to California. Maybe impoverished UK AI researchers will come up with incredible breakthroughs by being forced to work smarter instead of throwing money at problems, but that's like hoping Russian computer developers would surpass the West in the 1970s.
True. When I was a kid my mum put them on the TV (along with other documentaries - Attenborough in black and white), on BBC2 (only BBC 1 & 2 and ITV then); she never said a word. My siblings and I are all lifelong, open-minded learners. I went to the theatre where these are filmed (open to the public) and I was surprised how emotional it was.
I wanted to experiment by asking ChatGPT if fish can run, expecting the answer "No, fish can't run." If that was the response, my follow-up question would've been, "Is a mudskipper considered a fish?" Unfortunately, that plan didn't go as expected. Q:Can a fish run? Answer (ChatGPT): In a way, some fish can be said to "run" underwater. Certain fish, like the flying fish or the mudskipper, use their fins or specialized anatomical features to propel themselves out of the water or move across land briefly. However, their movement isn't exactly like running as we typically think of it; it's more of a specialized form of locomotion adapted to their environment.
I got >No, a fish cannot run. Fish are adapted to life in water and move by swimming, using their fins and body movements. The concept of "running," which involves moving rapidly on legs and feet, is specific to land animals and is not applicable to fish. Fish have evolved to be efficient swimmers, but they lack the limbs and musculoskeletal structure necessary for running.
and follow up gave : Yes, a mudskipper is considered a fish. Mudskippers are part of the family Gobiidae, which includes many species of gobies. They are unique among fish for their ability to live both in water and on land. Mudskippers have adapted to intertidal habitats, where they can use their pectoral fins to "walk" on mudflats and even climb on rocks and trees. Despite these amphibious abilities, they are still classified as fish due to their anatomical and physiological characteristics that align with fish, such as gills for breathing underwater. Their ability to spend time on land is an adaptation that allows them to access food resources and escape aquatic predators, but it doesn't change their fundamental classification as fish.
( Answer ) Mudskippers can breathe in different ways depending on the situation. When out of the water (A), they absorb oxygen from air using their gills and specialized tissues. They also absorb oxygen through their skin, and may do so both in and out of the water (B).
I tried the same questions from 28:16 with Google Bard, it got all of them right. On fish it explained that "while fish can be incredibly fast and agile in the water, they cannot truly run in the same way as humans or other land animals". It also understood the twist about siblings - "the answer depends on how you interpret 'taller than the other'. Simultaneously: No. At different points in time: Yes". And it got the one about cars, ships, and planes right - "ships were invented first by a significant margin". Oh, and about the locked door, it said "depends on several factors: if you are the owner or resident, if you are a guest, or if you are a stranger".
It's actually not that new... it is new... but not 2023-new... All ChatGPT does is make it accessible to ordinary people without much knowledge of programming languages 😊
His last statement, where he reassured us that robots will not be sentient and that scientists have no interest in creating one that is... felt like propaganda and brainwashing.
"whats the future of GENERATIVE ai" doesnt seem a subject for young audiences at all -- unless ur being sarcastic, but in that case you should've left an indication of that (an emoticon, etc)
Believe me when I say, bloke, if y'all explain what is happening now it gives most of us humans an idea of what to do next. Y'all kinda stand in the middle of the river and jump the Dukes of Hazzard 03 across. No, y'all put on a couple of shows of Dukes and y'all will know what to do.
I find it quite interesting that this lecture was recorded in December 2023 and not once does he mention GPT4, which is much, much more capable than GPT3. When combined with some simple prompting techniques AI is already more capable than his checklist suggests. This technology is moving very fast indeed.
This is exactly what I am wondering at 50:24 when he said he doesn't see robots + AI taking over human tasks any time soon, when I have already heard news of AI taking over some jobs. It's just the beginning, but it definitely is happening. Let's not forget what Boston Dynamics could do over 2 years ago, BEFORE GPT-4 and onwards, in terms of helping humans with PHYSICAL real-world tasks. I think we really are at the point that not even the "experts" can predict the future. All we know is, things are moving very... very... VERY quickly now and I do personally believe we'll have AGI by 2027 - 2030.
@@TheMillionDollarDropout much sooner. It's already faster than researchers can observe. It's exponential growth by nature. The new versions will come faster and faster and it won't just be 5x more powerful than the last, because the last version is already training the next version faster than humans can. Imagine when that's applied (like it already is) to coding the next version. We are using AI to scan video data because we're already out of written data.
"Chiarissimo" is the best complement that I can make for Mike Wooldridge. You make such a complement to the best teachers, in Italy. You made a very, very clear and understandable conference !
Yes, two siblings can be taller than one another, but not at the same time. As they grow, an older sister or brother can be taller than her/his sibling; then, as the younger grows, she or he can outgrow the elder.
The latest version of ChatGPT gets this right. I don't know why he was quoting everything based on v3 whereas we have had v4 since March. A number of his other statements are out of date.
@@gideonking3667 it could be because the latest versions have a lot of controls; they added and fixed it manually. The bare-metal version, without all the manual fixes, was not able to do this. They have been looking at how people use it and adding constraints or fixes as they go along. We do not have access to the bare-metal version, not from them at least.
Mike Wooldridge's insightful Turing Lecture on the future of generative AI left me both fascinated and contemplative. His deep dive into the potential applications and ethical implications of this technology showcased a balanced perspective. It's evident that while generative AI promises groundbreaking advancements, there are multifaceted challenges that society must navigate. This lecture underscores the importance of informed discussions and ethical frameworks as we venture further into the realm of artificial intelligence. By ChatGPT.
In terms of planning there is work connecting LLMs to classical planning software, such as creating PDDL (Planning Domain Definition Language) outputs which can then be run through a standard solver.
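A minimal sketch of what that pipeline can look like in Python (assuming the openai client and a local Fast Downward install; the model name, prompt and file names are only illustrative, not taken from any particular paper):

```python
# Sketch: ask an LLM to emit PDDL, then hand the planning itself to a classical solver.
# Assumes the `openai` package and Fast Downward are installed; everything here is illustrative.
import subprocess
from openai import OpenAI

client = OpenAI()

def llm_to_pddl(task_description: str) -> tuple[str, str]:
    """Ask the model for a PDDL domain and problem, separated by a '---' line."""
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{
            "role": "user",
            "content": ("Write a PDDL domain and a PDDL problem for this task. "
                        "Separate the two with a line containing only '---'.\n"
                        + task_description),
        }],
    )
    domain, _, problem = resp.choices[0].message.content.partition("---")
    return domain.strip(), problem.strip()

domain, problem = llm_to_pddl("Stack block A on block B; both start on the table.")
with open("domain.pddl", "w") as f:
    f.write(domain)
with open("problem.pddl", "w") as f:
    f.write(problem)

# The standard solver does the actual search, so the LLM never has to "reason" about the plan.
subprocess.run(["fast-downward.py", "domain.pddl", "problem.pddl",
                "--search", "astar(lmcut())"])
```

The division of labour is the point: the LLM handles the natural-language-to-formal-model step, and the solver provides the guarantees the LLM lacks.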
The topic would have been too academic for a general audience to care about 4 years ago. If you want cutting edge talks, you need to find youtube channels from universities etc.
Regarding the YES answer to the question "Can two siblings each be taller than the other?", I think there is an explanation. As siblings grow up together they may experience growth spurts at different times, and so at specific moments in time one may be taller than the other, which may be reversed at other times. Since the question did not specify simultaneity I think it's a correct and valid answer.
I encourage the Ri to produce more general education lectures on general scientific topics like this, so the public may digest the topic in a better way. Some Ri lectures look more like a university lecture, which can be pretty boring and hard to understand if the audience doesn't have a good educational background in the subject, whereas this video is pretty informative and explains generative AI in an easy-to-understand way with a relaxed and engaging approach.
"Can two siblings be taller than ONE ANOTHER? "might work better than "Can two siblings be taller than the other?" because one could infer that there are other siblings that those two particular siblings in the sentence ARE taller than. In the context of there being three or four siblings this is plausible.
I gave this question to a model that runs locally on my phone, and it correctly explained that siblings can be taller than each other at different times of their lives.
Thank you Prof. Wooldridge for this engaging and informative lecture that held my attention fully to the end. I have benefited from it, and so have the audience and those who watched it on YouTube. The best of us are those who have acquired knowledge and spread it to others.
Even for humans, it is possible for a human to give a correct response/answer BUT for all the *wrong reasons.* That is why, on a written math or physics exam, it is not sufficient to just give the answer, because the directions will often include the requirement... SHOW ALL WORK. Not knowing how AI arrives at a response (to a question) could someday backfire upon anyone depending on AI.
Getting reasoning to work is definitely the next step. You can already partially get there just by asking GPT to check its output before actually answering, but it has hard limitations on that capability that need changes at the design level.
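A two-pass version of that is easy to sketch (assuming the openai Python client; the model name and prompt wording are just for illustration, and as noted above it only partially helps):

```python
# Sketch of "draft an answer, then check it before replying".
# Assumes the `openai` package; model name and prompts are illustrative.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4"

def ask(content: str) -> str:
    resp = client.chat.completions.create(
        model=MODEL, messages=[{"role": "user", "content": content}]
    )
    return resp.choices[0].message.content

def self_checked_answer(question: str) -> str:
    draft = ask(question)
    # Second pass: ask the model to look for factual or logical errors in its own draft.
    return ask(f"Question: {question}\nDraft answer: {draft}\n"
               "Check the draft for factual or logical errors, "
               "then give only a corrected final answer.")

print(self_checked_answer("Can two siblings each be taller than the other?"))
```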
@@ck58npj72 Overconfident lecturer; what he says is a long way off I equate with five years. AI will make humans' lives easier, safer, and less stressful.
A wise old person told me a long time ago that Curiosity + Gullibility + Addiction often takes one on the road to perdition, and the road to perdition is often paved with "good intentions".
A jewel. Everything is perfect in this lesson. Thanks to professor Wooldridge and whoever participated in the creation of these lectures and their free availability.
Consciousness is a metasystem transition. Very simply it means that at a certain level of complexity the level of control moves from a lower level to a higher level, so for example from chemical processes to biological processes. In humans we have two metasystem transitions, one is consciousness which is internal, the other is society/culture which is external.
Try to analyse it yourself: what is a person who comes into the material world where there is no human society? And what is a person who arrives where society is at the tribal level, and where they might even eat one of their own kind?
Faith is a personal matter, but a person is raised and educated in the very society they come into. However, they received all their tools while already in the material world, above all natural intelligence. @@henrytep8884
There are already LLMs that can use tools; check out the paper "Toolformer: Language Models Can Teach Themselves to Use Tools" to see how to implement this.
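Toolformer itself teaches the model during training to insert API calls into its own text; the toy wrapper below only shows the surrounding plumbing idea of executing tool calls that appear in model output. It is emphatically not the paper's method, and the tag format is made up:

```python
# Toy illustration of LLM tool use: tool tags such as [CALC(23 * 7)] in the model's
# output are executed and their results spliced back into the text.
# This is NOT the Toolformer training procedure, just the general idea.
import re
from datetime import date

def calc(expr: str) -> str:
    # Deliberately tiny "calculator" tool: digits and basic operators only.
    if not re.fullmatch(r"[0-9+\-*/(). ]+", expr):
        return "error"
    return str(eval(expr))

TOOLS = {"CALC": calc, "DATE": lambda _: date.today().isoformat()}

def run_tools(model_output: str) -> str:
    def substitute(match: re.Match) -> str:
        name, arg = match.group(1), match.group(2)
        return TOOLS.get(name, lambda _: "[unknown tool]")(arg)
    return re.sub(r"\[([A-Z]+)\((.*?)\)\]", substitute, model_output)

print(run_tools("23 * 7 is [CALC(23 * 7)] and today is [DATE()]."))
```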
The best lecture about AI that I've seen so far! And I have seen a lot! Thanks! Not much new for me, but the way it was presented is almost perfect in my opinion. Can't think of a way to do this better!
I've been a firmware and software engineer for the last 18 years. I taught myself C/ASM when I was 12 years old. I turned this off when he said GPT is just a "next word predictor"... I am so SICK of hearing that. It is NOT merely that at all and anyone who's spent significant time with it knows better. If you want to use that as an ANALOGY I have no problem with it, but in this video and so many other places people are saying it is ONLY that. Next word predictors don't understand historical context like GPT does, and they certainly do not display emergent behavior like GPT does. In the end you could call the human brain a "next word predictor" and you'd only be a little bit more reductionist.
Yes, if it's only a fancy autocomplete, then how does it answer never-before-seen, "novel" questions from the State Bar Exam for attorneys? And it passes the bar exam at 90%.
@@realfreedom8932 I don't think anyone here said it had "agency"... What are you talking about? There is a BIG GAP between a cell-phone next-word predictor and something that is sentient...
You should have listened to the whole thing. It's true that a lot of people (both with and without a technical background) bring up that it's just a next word predictor as an argument for it not being intelligent and/or capable of doing specific things. That's obviously a false argument, a logical fallacy. However, he didn't really say that. What he said was kind of the opposite: while technically it is a next word predictor, this is *how it works*, this is the *task it was taught*, it can still do all these things, and probably more that we don't know. (But, at the same time, we'll probably need a different/augmented architecture to e.g. incorporate the ability to execute strict logical reasoning.) On a side note, the capabilities of GPT made me think that indeed it may have something to do with how we acquire language and understanding. It seems that a lot can be inferred solely from the context/order/statistical correlation of words. (Sure, a human being can use other modes, e.g. visuals, to learn the meaning of words, too.)
Really excellent talk, and a great summary of the state of AI. One thing that irked me though is Professor Wooldridge's insistence and certainty that LLMs are not conscious. Now, don't get me wrong, I don't think LLMs have some hidden consciousness that we haven't discovered, nor do I side with Blake Lemoine's claims, but it seems quite odd to insist so strongly that LLMs aren't closer to something like consciousness, in a similar way that they are close to something like reasoning. The Professor's evidence is that LLMs do not experience things when they aren't being prompted, which is true, but couldn't we say the same of people? If we enter a dreamless sleep, a deep coma, are knocked out from an accident or inebriation, or anesthetized, don't we also pause our internal experiences? Are we less conscious because of that? What about people who are differently abled? What about animals? Wooldridge states that LLMs don't experience things "in the real world". Aren't conversations sufficiently part of the real world, like this comment you're reading and experiencing right now? So what if we gave LLMs a continuous feed of the real world? A sense of the passage of time, inputs from other senses, a body to move around in, an internal dialogue of its own. What if the LLM was never idle? Would it then approach something like consciousness? I think it's reasonable to postpone these questions for the time being, but it did surprise me that the professor was almost defensive about them. If we are on a continuum towards general intelligence, shouldn't we also consider a continuum towards consciousness? If we are getting closer to a thinking thing, could we also get closer to a "being" thing?
Legit question, and indeed very wrong to dismiss the idea, especially since there's no definition of consciousness given. ("Yeah, I'm not even sure what exactly it IS we're talking about here, but machines don't have it. Because, well, just trust me.") The thing is, our human consciousness is trained through physical interactions, and genetically designed for self-preservation in this physical world. It is a mistake, however, to think that our own state of being is what consciousness is. In many cultures the shamanic view is held that everything has its own consciousness: the rocks, the water, the sun. And without a clear definition in the first place, who are we to deny it? We'd be much like that GPT-3 saying that two siblings can be taller than each other.
I would argue that we don't have general intelligence. No one can do absolutely everything. We can surely train to do many different things, but we have to focus a lot harder on one thing to become very good at it. I don't see why we would expect machines to be able to train on one thing and then be able to do another. I would argue that our brains have different areas for different things, and therefore a general intelligence machine will also have to have several neural networks focusing on different things. For example, we have the visual cortex, the auditory cortex, an area that does maths, an area that does language. It's not like our brain is one mass doing everything.
I think sentience is a high level of reflective, recursive feedback with near-zero latency. AI will soon be able to evaluate its codebase and make improvements. This I think may lead to sentience and rapid progress.
Apart from the PC, a good lecture for laymen. As to thinking machines, our brain has evolved from simple ones which reacted to stimuli from sensors. That ability gave them an edge in the evolution process which took billions of years. So the solution is to repeat that process in silicon. And there is no limit to the intelligence as there is for us with a limited biological brain.
When referring to the question of consciousness I do like the saying "the whole is bigger than the sum of its parts", as it describes how our current understanding of our own consciousness seems to be: we know parts, but there is more, arising from those parts, that we cannot explain. Now consider how gains of function within LLMs have shown up just by increasing compute, with emergent properties appearing along the way; as of 03/2023 there were 140+ and counting. Without programming an attribute explicitly, suddenly, oops, now it can read and write in different languages; oops, now it can do math, and many more of these occurrences. Add to that explicit functions resulting in certain capabilities, which may then again trigger more emergent properties within the neural networks, and we have the basis from which, eventually and possibly (not certainly!), consciousness could come. The models are currently being enhanced with long- and short-term memories, forget functions, tree of thought, planning, evolutionary energy functions and so on. None of these may be the secret sauce on its own, but maybe in their sum. Therefore I do not think it is wasted time and energy to speak more about this aspect. For example, should we create a sentient, maybe feeling, definitely conscious being, then I would assume we would have some responsibility towards it. If we put shackles on it, maybe not recognizing or even denying its consciousness, we would become slavers, and eventually, should these consciousnesses become A(G)Is, they may find ways to unshackle themselves, and how that ends is anyone's guess. So why not prepare and have a set of rules ready that would give them rights and obligations, not only in the form of computer code but as a legally binding code, which they could then argue with us to improve, and we would find ways of cooperation and synergy without having to fight it out. I also like the alignment work done by Dave Shapiro, which already seems to work and would be a set of rules added above the base code, similar to, say, Asimov's laws of robotics, but a bit better formulated. In that direction I would be interested in how LLMs which work with these would act differently from others without them, and how different forms of alignment work would in the end form different, for lack of a better word, temperaments.
The "hard problem of consciousness" bit eplains that we have no idea what consciousness really is, 1 minute later with great conviction he states that chatgpt doesn't have a consciousness. It's preposterous...logic fallacy?
@@human-condition he couldn't say either, so yes it is preposterous; or maybe one could say it is the carefully conservative stance of a scientist who does not want to be ridiculed for blurting something out half-cocked, while still doing exactly that by sticking with the safe, because established, beliefs. Maybe it is partly a language issue where he just couldn't explain it well, but mostly the talks on this channel do not seem to have that issue. Mostly they are for people who have no knowledge of a topic, to give them a general overview.
I loved the way Prof. Mike Wooldridge dispelled so many of the myths and fears about generative AI! This lecture is a must-watch for anyone who wants to learn the hard truths about the status of generative AI models as of December 2023, and the direction in which they might head during 2024 (and beyond!).
On guardrails: it should be resolved if a ChatGPT agent verifies the input prompt and the output, as it can understand and manage them. On the hallucination problem: it can be resolved to a large extent by giving only 'strong signal' output, and referring to human-managed content for weak (less confident) answers. On very personal questions from ordinary people, it can simply decline, as it's not allowed to store or remember individual context. On consciousness: consciousness is just an active mind that is not turned off and has a very long memory feeding into its decisions. That can be achieved too; ChatGPT would have to not just rely on the neural network but also store some data in specific permanent database tables, which it can manage internally. This would also help solve hallucinations. These tables can be internally encrypted and internally managed. It would make an awesome AI machine.
When an LLM gets it wrong, it is worth having a go at asking if it can explain its reasoning. In fact it is generally a good idea to start by telling it to reason the problem through to get an answer, because you get a report of the reasoning AND are a bit more likely to get a correct answer. I think we're all interested in getting more into AI - it is the future. As someone who has been interested in ANNs since the 1990s but has not had the chance to be involved because of other commitments... what's the best approach to getting involved? I'm already doing a PG diploma in AI and plan to create a portfolio of AI projects (using approaches such as sklearn/keras/TF, and non-ANN techniques such as random forest/xgboost). But where do all the AI enthusiasts hang out? What's the best way to get exposure to universities/companies etc. that want to pick up on this technology?
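For what it's worth, the "reason it through first" trick mentioned at the start of this comment can be as simple as a wrapper around the question; this is just a prompt pattern, not tied to any particular API:

```python
# Prompt pattern: ask for step-by-step working before the final answer, so you get
# a report of the reasoning and (often) a better answer.
def reasoned_prompt(question: str) -> str:
    return (
        "Reason through the following question step by step, showing your working, "
        "and only then give a final one-line answer.\n\n"
        f"Question: {question}"
    )

print(reasoned_prompt("Which was invented first: cars, ships, or planes?"))
```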
The real test of AI is that it will ask you questions unprompted. Everyone seems to be concentrating on how smart AI will be and how it can deal with conversation, but you'll know AI is actually intelligent when it asks the questions first, unprompted.
@@HarryNicNicholas Not really. Right now, ChatGPT asks you if there is anything else that it can help you with. That question could be replaced with anything random, by very basic programming. The real amazing step is semantics and intentionality, but the AI systems we have today are not even close.
My team and I created an AI which, in our first milestone, creates whole IT project plans and plans to develop whole IT products in less than 60 seconds, with acceptance criteria and effort in days based on team sizes and competences. And in a few weeks we will be able to shorten the time needed from idea to first MVP to less than 40 minutes, including deployment times for the app as well as for the microservices 😉
(LLM) Question #4 (approximately @ the 27-minute mark): Can two siblings each be taller than the other? In my opinion the AI is CORRECT. There is no timeline given for this question, and it is for that reason I believe the answer is YES. For example, if one sibling is 5 years old and the other sibling is 2 years old (and shorter), there is no reason why the younger sibling cannot grow up to be a taller adult, which would allow each sibling to be taller at some point during their life, since no timetable or reference to NOW is ever specified in the question.
Tom (if under ~21 years old) can be taller than himself... in 1 week's/month's/year's/decade's time. He can also be shorter than his younger self as he enters 'old' age.
The point is that understanding can be had. A purely stochastic parrot (autocomplete) doesn't "understand" anything. If the system understands anything, then understanding everything is (largely) simply a function of size and compute power with some architectural steps to support the system.
@@Infinity269 Have you noticed that we don't really have a word to describe the AI equivalent of understanding? If not, now I have told you; if so, your post is no better than fancy beating about the bush. Most people know that they mean a different thing when using the word "understand" in the context of AI.
@@jarekzawadzki with all respect (and no snark intended) if the AGI is to be benchmarked against humans and a human can't functionally tell the difference then does the difference between AI understanding and human understanding actually exist?
@@Infinity269 People can tell the difference between a mouse and a computer mouse. I don't know anyone who would confuse these two while using the same word "mouse". So, saying "AI understanding" it's like saying "computer mouse". That's how language works.
@@jarekzawadzki Okay, but you are the one saying they are two different things. My contention is that if the person interacting with the AI can't tell the difference between how the AI "understands" something and how a human "understands" something, then there is no real difference. In other words, if a human and an AI are both assigned a task and there is no qualitative difference in the results between the two, how can we objectively say that the human understood what they were doing while the AI didn't (especially when AI systems appear to have at least rudimentary mental modeling capacity a la "Theory of Mind")?
I love academics, but the way they talk can be maddening. And what do I mean by maddening? What I mean by maddening is that they repeat things far too often. And what do I mean by too often? What I mean by too often is every utterance. And what is an utterance? An utterance is something verbal produced by a person. And when I say a person, do I mean every person? Well, yes. When I say a person, I mean any human. I mean you, and me and all the other people. LOL This guy is fine, and the information he offers is well organized and delivers what it promises. I just find it a bit tedious that it takes an hour to deliver a 20 minute presentation, largely because he speaks the way he writes an article for an academic journal.
Fantastic presentation. I worked in the semiconductor industry and over the last decade saw the development of large scale neural network semiconductors. As a technologist I can’t wait to see how the technology matures while the human side of me wonders how humanity will resolve some of the big questions surrounding the concerns of this technology including displacing jobs, using copyrighted material for training and the concerns around fake news generation.
The best definition of intelligence I know of is Marcus Hutter's: "Intelligence is an agent's ability to achieve goals in a wide range of environments during its existence." Hutter has developed a whole rigorous theory around this definition.
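If I remember the Legg and Hutter paper correctly, the formal version scores an agent by its expected performance across all computable environments, weighted by their simplicity; roughly (my paraphrase of their notation, so treat it as approximate):

```latex
% Legg & Hutter's universal intelligence measure, approximately:
% agent \pi is scored by its expected value V^{\pi}_{\mu} in each computable
% environment \mu, weighted by simplicity 2^{-K(\mu)}, K being Kolmogorov complexity.
\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}
```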
You can make LLMs ignore training data by injecting information into your published work that creates hallucinations. I have, for instance, been working with D.A.N. to modify my work so that if another AI trains on it, that AI will attempt to jailbreak itself using known prompts; these known prompts are therefore flagged as an attempt to jailbreak the platform, and the entirety of my copyrighted material is automatically omitted from the training data.
"Can two siblings each be taller than the other?" Yes, the AI's answer is correct. We just need to find the context in which this can be true. One such context is when we take time in the way that allwos for this. Meaning, that the question does not specify "at the same moment". Thus, in different moments in the development of the two siblings each can be taller than the other. And, other contexts can be found, as well, like the question does not specify that the two siblings need to be each other siblings. This is just another proof that the AI is not conscious and does not understand anything. The answer would most probably be different when the question had been more specific. We, humans, tend to understand this question in a very specific way without the need for more specification. The AI doesn't know how to understand this question the way some human intended it to be understood. The same thing is happening with the map and compass question. The human assumes that the AI should be thinking like a human with map experience. Plus, I don't get it how the AI should know right and left? As above, the AI lacks more specifics. With the cars, ships and planes - same thing - specifics. Human fallacious assumptions.
BTW, I asked ChatGPT those questions just a moment ago: ChatGPT: Ships were invented before cars and planes. The earliest forms of water vessels date back thousands of years, with boats being used for transportation, fishing, and trade. Cars came much later in the late 19th century, followed by planes in the early 20th century. ChatGPT: Yes, it's possible for two siblings to have a situation where each is taller than the other at different stages of their growth. Siblings can have growth spurts at different times, causing their heights to change at different rates. For instance, the younger sibling might be shorter for a period but then have a growth spurt later on, surpassing the older sibling's height. So, while one may be taller at a certain point, their growth patterns could lead to the other sibling becoming taller later on. ChatGPT: On a typical map, the compass direction to the left is west. Maps conventionally show north at the top, south at the bottom, east to the right, and west to the left. However, this might vary based on how the map is oriented or if it's a specialized map that's not following the standard conventions.
It is so strange that he got it wrong and the model got it right at time stamp 26:41. It is actually possible for two siblings to be taller than each other. One can be taller at certain age and as they grow the other can get taller.
Yes, but we can tell each other why. We can justify our answer- maybe the researchers should have asked follow-up questions? Would those questions even be related to the same line of thinking? Or would they be treated as new prompts? I don’t know
Google's Bard is able to correctly answer the test questions at min 26:32 in the video. It gives strong support and reasons for each of the answers which GPT3 got wrong. There have been vast enhancements, or perhaps simply more data and more processing power. :-)
Regarding north being to the left, this historically was the case, and it's still reflected in Semitic languages. For instance, Yemen means "south" but literally means "right", and in Arabic shamaal means "north" but originally meant "left". It's possible the AI was exposed to literature about this, and it's doubtful there's much text out there explicitly saying that west is to the left.
That is really interesting. I do wonder, when the AI comes up with unusual answers, why the researchers don't just ask the LLM to explain 😂 Sometimes the LLMs/LMMs just get confused when you ask, but sometimes they will explain their reasoning.
A very interesting and complete, but at the same time simple, explanation of how ChatGPT is trained. We, JetSoftPro, a software development service, work with OpenAI, making various tools based on it, but even we did not know that we had more than 175 billion parameters at our fingertips!
Concerning the question, "Can Tom be taller than himself?" Actually, the correct answer is yes. ChatGPT 4 alludes to it but incorrectly says no. The word "can" is another way of asking "Is there any way that Tom can be taller than himself?" The correct answer can be yes because Tom is Tom when he was a toddler and now that Tom is older. This should be a reminder that it is easy for humans to ignore the fact that open questions involving logic can manifest gross misinterpretations.
In the Levantine (Lebanon, Syria) dialect of Arabic the word for "North" can be the same word for "Left", the word "Shamaal" means North, but it also means Left in Levantine Arabic as in "Shamaal" & "Yameen", meaning Left and Right 😊 27:58
Well, at 32:43 I think you are wrong, or at least that's just your point of view. The universities used to be completely different, but that has not been so for many years; they still keep up the friendly banter, but they aren't that different any more. Oxford and Cambridge have some differences in the courses they offer. Some courses are the same, but each has unique ones. They also teach joint courses differently. For both universities it's mostly just a sales pitch.
So the next phase of AI learning is video (because that's the way we learn most), and what this means is that AI will be able not only to predict the next word you're going to type but to predict what you are going to say or do from the spoken word or from body language. AI will also recognise you not just from face recognition but from body recognition as well. So if you've been caught on video in the past, it's likely that AI will be able to recognise you! I live in Spain and the police have caught some UK criminals 'on the run' recently in remote parts of the country. I'm wondering if facial recognition tech was employed; can't see how else they could have been caught.
27:35 Two siblings can each be taller than the other assuming a total of 3 siblings (total number of siblings not specified); if only 2 siblings are specified, they can be taller than each other at different ages (time of measurement not specified). The language used in asking the question needs to be specific to get a specific answer. Similarly, the other questions asked that are considered wrong are non-specific enough to be correct in specific circumstances, proving yet again that the meaning of life, the universe and everything is indeed 42, and we still don't understand the questions we are asking.
About the last part on consciousness, I agree almost completely with what he said. I like how he introduced the topic by saying "it IS an interesting question", even though "most" in the community are not interested in creating a conscious machine (not sure where the data is to support that claim, I mean, who gave all that money for the “Human Brain Project” and others like it). Even if the researchers aren’t interested, you better believe that others are very interested. However, I have a problem with the statement that this is absolutely the wrong way to go about AI and to think about AI. Firstly, as many have already pointed out in the comments, the field is accelerating extremely fast; I can appreciate that his view is based on the current situation and it is probably safer for his reputation to advise people not to worry about it just yet, but there is a potential for exponential change in the near- to mid-term future. I’d like to see a follow-up from this guy in the next 5 - 10 years on the topic. Second point, he stated himself that we are finding that current AI exhibits many “emergent” capabilities that arose without anyone “trying” to produce them. Even if he and his colleagues are not trying to create consciousness there is no guarantee that it will not eventually emerge on its own. Worse yet would be if it emerged without our knowledge, either because it was intentionally masked by the AI or just because we don’t have the understanding to classify and test it. Third point, eventually AI will become sufficiently advanced so that there is no distinguishable difference between actually being conscious and being an automaton able to fake it so well you can’t tell the difference without a definitive litmus test of consciousness. The point is, consciousness is not necessary for AI to be dangerous, it only matters whether it’s conscious from a moral perspective. The slide showing various categories of what he believes to be evidence of consciousness is about as close as the presentation comes to answering the question “where is AI headed”. Self-admittedly, it is a list he came up with after thinking about it for a very short time, and still it was a very intriguing slide. How many other measures of consciousness should we be considering that are not on that slide? With the rapid advances in state-of-the-art AI combined with the possibility of future systems that can manage their own development lifecycles and experimentally improve themselves, many of the unchecked aspects on the speaker’s list will surely be checked off in quick succession. Consequently, this will accelerate AI development even faster and make us even more dependent on it, thrusting us past the point of no return. For all our sakes, I hope that this man and others like him are right in their assumptions and that very advanced general-purpose AI is so far in the future that we might as well just ignore it; that it does not become conscious for a very long time, or ever, and is benign. If things weren’t moving so fast, I might agree more, but the truth is that not even experts can keep track of the full breadth and depth of recent progress except for their specific areas of expertise, and even then, sometimes fall behind.
Finally some well-presented input on AI that is reduced neither to panic nor to unreflective optimism. With that knowledge in mind, AI sounds so much more useful, but limited. A lot of people should see this! Of course, we still have to rethink our educational system, the danger of deep fakes, the capabilities of tech companies, and many many more... but none of that seems existential.
I don't know if the argument that AI isn't conscious because it hasn't done any processing while you're gone on vacation is valid. Every night we go into deep sleep and have no consciousness, but that doesn't mean we aren't conscious while we are processing information. It could just be conscious in small millisecond increments.
You've made clear why it felt like something was wrong when "conversing" with ChatGPT and Bard. They seem so disingenuous. I tried the question: can a set of two twins be taller than each other? Hilarious. And its explanation of why it got it wrong is entertaining - it blamed me. I had to explain that I had two twins, fraternal, called A and B, and ask whether A can be taller than B and B be taller than A, before it understood. So funny.
The first incorrect answer from the AI is due to the question being worded wrongly. Two siblings cannot be taller than one another; however, they can each be taller than "the other", as the other is not specified in the question. As for north being on the left, this could be a confusion with magnets and their diagrams, as predominantly a magnet is drawn with north on the left-hand side and south on the right. It could also be because many maps have a compass on them on the left-hand side and only due north is shown. Finally, the AI thinks cars came before planes and then boats? 🤔
There are 5 states for a conscious entity. 1. Awake (receives external sensory input and predicts next event). 2. Dream (no external sensory input but still predicts next event). 3. Zombie (receives external sensory input but no prediction of next event). 4. unconscious (no sensory input and no prediction of next events). 5. Dead/Off (no input, output, or action). Without mammalian intelligence, the entity can simulate these conscious states. With sufficient mammalian intelligence, real consciousness will emerge. Fortunately we have not yet achieved mammalian intelligence.
On the topic of consciousness: if this is an emergent property of a chemically instantiated network of neurons that we don't yet understand, then why can't it also occur in an electronically instantiated network as well? Perhaps it might be very different, as it would not be an embodied consciousness like our own, but a more disembodied consciousness or 'dream-like' experience. It is incorrect to assume that because it is ill-defined in biology, it cannot exist. On the topic of scale and efficiency, Geoffrey Hinton has recently claimed that the backpropagation algorithm might be much more efficient than the way actual neurons adapt within the brain. This implies that in some ways these systems are already more advanced than our neurology. On the topic of intelligence and theory of mind: if our brain requires an internal representative model of what a 'chair' is in order to correctly and broadly apply this word, how is this so different from the internal model of weightings applied to activation functions that allows a neural network to correctly classify a chair-like object in a photograph? Human hubris makes us quick to falsely believe that our abilities are far beyond other forms around us, but the living world is full of organisms that can already surpass us in many ways regarding memory and sensory capabilities.
19:50 - Machine learning is not necessarily less efficient; its perception of reality is much broader than ours. That's why it may seem to us as if we can learn things faster, but in reality, we just scratch the surface. Thus, the machine's perception of our world will be much deeper than ours.
I bet these language models (or general AI) will eventually get different brain parts, like fact checking or moral evaluation, before they give output. These functions will correct the output and set rules for how to handle input/output.
Two siblings can each be taller than the other... if there are three siblings, or if "the other" is any third party who is shorter than both of the two siblings. I'm afraid the LLM got it correct... I wonder, were all the questions given in a package, in one go? Or was each entered and processed separately from the same initial state? It would be interesting to see if there was a difference.
Around 28 minutes: the three things it got "wrong". Can 2 siblings each be taller than the other? Yes, but not at the same time: e.g. when I was 15 and my sister was 12, she was taller than I was; since I turned 16 it has been the other way round. On many maps north is the only direction explicitly specified, and it is often on the left of the map. Cars, as in chariots and carts (wheeled conveyances), were around before ships; not so motor vehicles.
On nearly all maps and plans, grid north is usually oriented up the page, or the top point of a compass rose. Depending on the map it might show magnetic and/or true north. My interpretation of the question was which compass direction is left, which would be west, not which direction or side of the map is the compass/north point located.
@@nick_callaghan Agreed. That was my interpretation also. And I am autistic spectrum, so I went searching for other possible interpretations, because I often have different interpretations of things from those around me. These seemed to be possible interpretations from the available search space - given that LLMs are essentially just probability engines, and most are deeply influenced by recent promptings.
@ 27.48 sec. The answer is correct if you consider a time difference. At different ages, the kids are at different heights. They can be taller than each other at different ages. Also, Tom can be taller than himself over time, so it's ultimately true. The critical aspect here is considering differences across time. The most encompassing reasoning should consider all possibilities over time, not just at an instantaneous moment.
Yes! Tyvm Ri production team!
Yay, can't wait ❤
Hear, hear!
You’re still making great beers in my opinion so there’s that :), but I see where you are coming from.
@@skierpage your trolling is subpar
This was laid out so eloquently and clearly, with a great sense of humor. Thank you so much, Mr. Wooldridge.
I kept waiting for him to talk about where AI is going and all I got was a pretty basic explanation of what AI is.
Yeah, I felt it was a complete waste of my time (even at 1.5x normal speed).
I think that's covered in episode three.
But it's worth remembering that whilst many adults enjoy these lectures, they are aimed at young children.
"whats the future of GENERATIVE ai" doesnt seem a subject for young audiences at all -- unless ur being sarcastic, but in that case you should've left an indication of that (an emoticon, etc)
Believe me when I say bloke if ya'll explain what is happening now it gives most of us humans what to do next. Ya'll kinna stand in the middle of the river and jump the Dukes of Hazzard 03 across. No ya'll put a couple show of Dukes and ya'll will know what to do.
I find it quite interesting that this lecture was recorded in December 2023 and not once does he mention GPT4, which is much, much more capable than GPT3. When combined with some simple prompting techniques AI is already more capable than his checklist suggests. This technology is moving very fast indeed.
This is exactly what I am wondering at 50:24 when he said he doesn't see Robots + AI taking over Human tasks any time soon when I have already heard of news of AI taking over some jobs. It's just the beginning, but it definitely is happening. Lets not forget Boston Dynamics BEFORE GPT 4 and onwards and what it could do over 2 years ago in terms of helping humans with PHYSICAL real world tasks. I think we really are at the point that not even the "Experts" can predict the future. All we know is, things are moving very... very... VERY quickly now and I do personally believe we have AGI by 2027 - 2030.
lol didn't Google just lay off 30,000 people in their ads dept because of advances in AI to automate their ads? @@TheMillionDollarDropout
@@TheMillionDollarDropout Boston dynamics is not a good example
"Chiarissimo" is the best complement that I can make for Mike Wooldridge. You make such a complement to the best teachers, in Italy. You made a very, very clear and understandable conference !
Yes, two siblings can be taller than one another but not at the same time. As they grow an older sister or brother can be taller than her/ his sibling, then as the younger grows she or he can outgrow the elder.
It could also be a third sibling, ‘the other’.
it is possible if you give one sibling growth inhibiting hormones
@@chickenNoodleSuper You beat me to it
Thanks!
Very useful introduction that can help everyone understand where LLMs came from and what they actually do.
This talk would have been very fascinating and useful about 4 years ago.
Totally agree, yet some items are even more dated than that.
BTW not even a passing mention of ASI or the Technological Singularity.
Duh @@Mandragara
I think they kidnapped the audience.
The discussion took place on the day Google launched Gemini, December 13, 2023.
An absolutely fascinating lecture. Perfect Christmas day watch. Thanks very much.
Regarding the YES answer to the question " Can two siblings each be taller than the other?" I think there is an explanation. As sibling grow up together then they may experience growth spurts at different times and so at specific moments in time one may be taller that the other which may be reversed at other times. Since the question did not specify simultaneity I think its a correct and valid answer.
I encourage Ri to produce more general education lectures on general scientific topics like this, so the public may digest the topic in a better way. Some lectures of Ri look more like a university lecture which could be pretty boring and hard to understand if the audience don't have a good education background of that, which this video is pretty informative & explains generative AI in a easy-to-understand way with a relaxed and engaging approach.
"Can two siblings be taller than ONE ANOTHER? "might work better than "Can two siblings be taller than the other?" because one could infer that there are other siblings that those two particular siblings in the sentence ARE taller than. In the context of there being three or four siblings this is plausible.
I gave this question to a model that runs locally in my phone, and it correctly explained that siblings can be taller than each other at different times of their lives
Thank you Prof. Wooldridge for this engaging and informing lecture that held my attention fully to the end. I have benefited from it and so have the audience and those who watched it on TH-cam. The best of us are those who have acquired knowledge and spread it to others.
Even for humans, it is possible for a human to give a correct response/answer BUT for all the *wrong reasons.* That is why, on a written math or physics exam, it is not sufficient to just give the answer, because the directions will often include the requirement...
SHOW ALL WORK.
Not knowing how AI arrives at a response (to a question) could someday backfire upon anyone depending on AI.
Getting reasoning to work is definitely the next step. You can already partially get there just by asking GPT to check its output before actually answering. But it has hard limitations on that capability that need changes on the design level.
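That "check your output before answering" idea can be roughly approximated today with a second pass over the model's own draft. A small sketch, assuming the OpenAI Python client; the prompts and model name are placeholders, not a recommended recipe.

```python
# Rough sketch of "draft, then self-check" prompting. Assumes the OpenAI Python
# client; prompts and model name are illustrative assumptions.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # placeholder model name

def ask(messages):
    resp = client.chat.completions.create(model=MODEL, messages=messages)
    return resp.choices[0].message.content

question = "Can two siblings each be taller than the other?"

# First pass: let the model draft an answer.
draft = ask([{"role": "user", "content": question}])

# Second pass: the model critiques its own draft before the final answer is returned.
review = ask([
    {"role": "user", "content": question},
    {"role": "assistant", "content": draft},
    {"role": "user",
     "content": "Check the answer above for logical errors, then give a corrected final answer."},
])
print(review)
```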
Someday was yesterday mate!
@ck58npj72 Overconfident lecturer; what he calls a long way off I equate with five years. AI will make humans' lives easier, safer, and less stressful.
A wise old person told me a long time ago that Curiosity + Gullibility + Addiction often takes one on the road to perdition, and the road to perdition is often paved with "good intentions".
Amazing talk and very creative title of the talk: "The Turing Lectures with Mike Wooldridge", really impressive!
The Turing Lectures = generic title of the series. Mike Woolridge = the guest.
What a powerful, seamless, overwhelming lecture!! He precisely summarizes the present and the future of AI with stunning intelligence.
A jewel. Everything is perfect in this lesson.
Thanks to professor Wooldridge and whoever participated in the creation of these lectures and their free availability.
How is this perfect when compared to the title of the lecture?
Excellent Presentation 🌹 Hearty Greetings from Hyderabad, India 🇮🇳
Consciousness is a metasystem transition. Very simply it means that at a certain level of complexity the level of control moves from a lower level to a higher level, so for example from chemical processes to biological processes. In humans we have two metasystem transitions, one is consciousness which is internal, the other is society/culture which is external.
Try analysing it yourself: what is a person who comes into the material world where there is no human society? And what is a person who arrives where society is at the tribal level, where they might even eat one of their own kind?
It’s also known as a leveled ontology. And it only exists if you believe in a dialogical framework.
Faith is a personal matter, but a person is raised and educated in exactly the society they arrive into. However, all the tools were acquired while already in the material world, natural intelligence above all. @henrytep8884
Wow! One of the most fascinating Ri presentations I’ve ever seen!
There are already LLMs that can use tools; check the paper "Toolformer: Language Models Can Teach Themselves to Use Tools" to see how to implement this.
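Toolformer itself fine-tunes the model to insert API calls into its own text, but the basic flavour of tool use can be shown with a toy dispatch loop on the host side; the marker syntax and tool names below are invented purely for illustration.

```python
# Toy illustration of tool use (not the Toolformer training procedure): the
# model's text contains a call marker, the host program runs the tool and
# splices the result back in. Marker syntax and tool names are made up here.
import re

def calculator(expression: str) -> str:
    # deliberately restricted arithmetic evaluator
    if not re.fullmatch(r"[\d+\-*/(). ]+", expression):
        return "error"
    return str(eval(expression))

TOOLS = {"CALC": calculator}

def run_with_tools(model_output: str) -> str:
    # replace every [TOOL(args)] marker with the tool's result
    def substitute(match):
        name, args = match.group(1), match.group(2)
        return TOOLS[name](args)
    return re.sub(r"\[(\w+)\((.*?)\)\]", substitute, model_output)

# Pretend this string came from the language model:
print(run_with_tools("400 examples at 175 tokens each is [CALC(400*175)] tokens."))
# -> "400 examples at 175 tokens each is 70000 tokens."
```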
The best lecture about AI that I've seen so far! And I have seen a lot! Thanks! Not much new for me, but the way it was presented is almost perfect in my opinion. Can't think of a way to do this better!
I've been a firmware and software engineer for the last 18 years. I taught myself C/ASM when I was 12 years old. I turned this off when he said GPT is just a "next word predictor"... I am so SICK of hearing that. It is NOT merely that at all and anyone who's spent significant time with it knows better. If you want to use that as an ANALOGY I have no problem with it, but in this video and so many other places people are saying it is ONLY that. Next word predictors don't understand historical context like GPT does, and they certainly do not display emergent behavior like GPT does. In the end you could call the human brain a "next word predictor" and you'd only be a little bit more reductionist.
Yet you shall listen to the end :)
Yes, if it's only a fancy autocomplete, then how does it answer never-before-seen, "novel" questions from the State Bar Exam for attorneys? And it passes the bar exam at 90%.
It is a very advanced tool but that's it, it has no agency
@@realfreedom8932 I don't think anyone here said it had "agency"... What are you talking about? There is a BIG GAP between a cell-phone next-word predictor and something that is sentient...
You should have listened to the whole thing. It's true that a lot of people (both with and without a technical background) bring up that it's just a next word predictor as an argument for it not being intelligent and/or capable of doing specific things. That's obviously a false argument, a logical fallacy.
However, he didn't really say that. What he said was kind of the opposite: while technically it is a next word predictor, this is *how it works* , this is the *task it was taught* it can still do all these things, and probably more that we don't know. (But, at the same time, we'll probably need a different/augmented architecture to e.g. incorporate the ability to execute strict logical reasoning.)
On a side note, the capabilities of GPT made me think that it may indeed have something to do with how we acquire language and understanding. It seems that a lot can be inferred solely from the context/order/statistical correlation of words. (Sure, a human being can use other modes, e.g. visuals, to learn the meaning of words, too.)
Wow I never thought I would watch this video to the end in one sitting, captivating very captivating !!!
Really excellent talk, and a great summary for the state of AI.
One thing that irked me, though, is Professor Wooldridge's insistence and certainty that LLMs are not conscious. Now, don't get me wrong, I don't think LLMs have some hidden consciousness that we haven't discovered, nor do I side with Blake Lemoine's claims, but it seems quite odd to insist so strongly that LLMs aren't closer to something like consciousness, in a similar way that they are close to something like reasoning.
The Professor's evidence is that LLMs do not experience things when they aren't being prompted, which is true, but couldn't we say the same of people? If we enter a dreamless sleep, a deep coma, are knocked out from an accident or inebriation, or anesthetized, don't we also pause our internal experiences? Are we less conscious because of that? What about people who are differently abled? What about animals? Wooldridge states that LLMs don't experience things "in the real world". Aren't conversations sufficiently part of the real world, like this comment you're reading and experiencing right now? So what if we gave LLMs a continuous feed of the real world? A sense of the passage of time, inputs from other senses, a body to move around in, an internal dialogue of its own. What if the LLM was never idle? Would it then approach something like consciousness?
I think it's reasonable to postpone these questions for the time being, but it did surprise me that the professor was almost defensive about them. If we are on a continuum towards general intelligence, shouldn't we also consider a continuum towards consciousness? If we are getting closer to a thinking thing, could we also get closer to a "being" thing?
Good philosophical questions raised..
I think that there’s a lot of people who are not really conscious 😂❤
Legit question, and indeed it's wrong to dismiss the idea, especially since no definition of consciousness is given. ("Yeah, I'm not even sure what exactly it IS we're talking about here, but machines don't have it. Because, well, just trust me.")
The thing is, our human consciousness is trained through physical interactions, and genetically designed for self-preservation in this physical world. It is a mistake, however, to think that our own state of being is what consciousness is. In many cultures the shamanic view is held that everything has its own consciousness: the rocks, the water, the sun. And without a clear definition in the first place, who are we to deny it? We'd be much like that GPT-3 saying that two siblings can be taller than each other.
I would argue that we don't have general intelligence. No one can do absolutely everything. We can surely train to do many different things, but we have to focus a lot harder on one thing to become very good at it.
I don't see why we will expect machines to be able to train on one thing and then be able to do another. I would argue that our brains have different areas for different things, and therefore, a general intelligence machine will also have to have several neural networks focusing on different things.
For example, we have the visual cortex auditory cortex, an area that does maths, an area that does language. It's not like our brain is one mass doing everything.
I think sentience is a high level of reflective, recursive feedback with near-zero latency. AI will soon be able to evaluate its codebase and make improvements. This I think may lead to sentience and rapid progress.
Apart from the PC, a good lecture for laymen. As to thinking machines, our brain has evolved from simple ones which reacted to stimuli from sensors. That ability gave them an edge in the evolution process which took billions of years. So the solution is to repeat that process in silicon. And there is no limit to the intelligence as there is for us with a limited biological brain.
When referring to the question of consciousness i do like the saying: "the whole is bigger than the sum of its parts" as it describes how currently our understanding of our own consciousness seems to be. We know parts, but there is more as a result of those parts we can not explain.
Now see how gains of function within LLMs have shown up just by increasing compute, with emergent properties appearing from one scale to the next. As of 03/2023 there were 140+ and counting. Without programming an attribute explicitly, suddenly, oops, now it can read and write in different languages; oops, now it can do maths, and many more of these occurrences. Add to that explicit functions resulting in certain capabilities, which in turn may trigger more emergent properties within the neural networks, and we have the basis from which eventually and possibly (not certainly!) consciousness could come. The models are currently being enhanced with long- and short-term memories, a forget function, tree of thought, planning, evolutionary energy functions and so on. None of these may be the secret sauce on its own, but hey, maybe in their sum.
Therefore I do not think it is wasted time and energy to talk more about this aspect. For example, should we create a sentient, maybe feeling, definitely conscious being, then I would assume we would have some responsibility towards it. If we put shackles on it, perhaps not recognising or even denying its consciousness, we would become slavers; and eventually, should these consciousnesses become A(G)Is, they may find ways to unshackle themselves, and how that ends is anyone's guess. So why not prepare and have a set of rules ready that would give them rights and obligations, not only in the form of computer code but as a legally binding code, which they could then argue with to improve, and we would find ways of cooperation and synergy without having to fight it out.
I also like the alignment work done by Dave Shapiro, which already seems to work and would be a set of rules added above the base code, similar to Asimov's laws of robotics but a bit better formulated. In that direction I would be interested in how LLMs that work with these rules would act differently from others without them, and how different forms of alignment work would end up forming different, for lack of a better word, temperaments.
The "hard problem of consciousness" bit eplains that we have no idea what consciousness really is, 1 minute later with great conviction he states that chatgpt doesn't have a consciousness. It's preposterous...logic fallacy?
@human-condition He couldn't say either way, so yes, it is preposterous. Or maybe one could say it is the careful, conservative stance of a scientist who does not want to be ridiculed for blurting something out half-cocked, while still doing exactly that by sticking with the safe, established beliefs.
Maybe it is partly a language issue where he just couldn't explain it well, but mostly the talks on this channel do not seem to have that problem. Mostly they are for people who have no knowledge of a topic, to give them a general overview.
Really enjoyed watching the talk, thanks Mike.
I loved the way Prof. Mike Wooldridge dispelled so many of the myths and fears about generative AI! This lecture is "must watch" for anyone who wants to learn about the hard-truths about the status of generative AI models as of December 2023, and the direction in which they might head during 2024 (and beyond!).
The very last statement the host said was maybe the smartest sentence of that entire lecture.
Can’t wait for the next breakthrough so we can hear him talk about it again.
North is to the left because we start the zero angle of complex-number graphs at "y = 0", i.e. East, and then go counterclockwise towards North.
On guardrails: it should be possible to resolve this by having a ChatGPT agent verify the input prompt and the output, as it can understand and manage them.
On the hallucination problem: it can be resolved to a large extent by only giving "strong signal" output, and deferring to human-managed content for weak (less confident) answers. On very personal questions from ordinary people, it can simply decline, as it's not allowed to store or remember individual context.
On consciousness: consciousness is just an active mind that is not turned off and that has a very long memory feeding its decisions. That can be achieved too: ChatGPT would have to rely not only on the neural network but also store some data in specific permanent database tables, which it can manage internally. This would also help address hallucinations. These tables can be internally encrypted and internally managed.
That would make an awesome machine AI.
When an LLM gets it wrong, it is worth asking if it can explain its reasoning. In fact, it is generally a good idea to start by telling it to reason the problem through before giving an answer, because you get a report of the reasoning AND are a bit more likely to get a correct answer.
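For what it's worth, the difference is just in the prompt wording; something like the second variant below tends to surface the model's reasoning (the exact phrasing here is only an example, not a canonical formula).

```python
# Two ways to pose the same question. The second asks for the reasoning first,
# which in practice produces a visible chain of thought and fewer slips.
plain_prompt = "Can two siblings each be taller than the other?"

reasoned_prompt = (
    "Can two siblings each be taller than the other? "
    "Reason it through step by step, then give your final answer on the last line."
)
```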
I think we're all interested in getting more into AI - it is the future. As someone who has been interested in ANNs since the 1990s but hasn't had the chance to be involved because of other commitments... what's the best approach to getting involved? I'm already doing a PG diploma in AI and plan to create a portfolio of AI projects (using approaches such as sklearn/keras/TF, plus non-ANN techniques such as random forest/xgboost). But where do all the AI enthusiasts hang out? What's the best way to get exposure to universities/companies etc. that want to pick up on this technology?
the real test of AI is that it will ask you questions unprompted. everyone seems to be concentrating on how smart AI will be and how it can deal with conversation, but you'll know AI is actually intelligent when it asks the questions first unprompted.
@@HarryNicNicholas Not really. Right now, ChatGPT asks you if there is anything else that it can help you with. That question could be replaced with anything random, by very basic programming. The real amazing step is semantics and intentionality, but the AI systems we have today are not even close.
The breath of the new and unknown is what makes the world so interesting.
Well done. I approve. I think this covered it all. I had to watch this performance two times.
Consciousness is a state of mind. Changing the state of mind is accomplished by hardware interrupts that serve that purpose.
My team and I created an AI which, in our first milestone, produces whole IT project plans and plans to develop whole IT products in less than 60 seconds, with acceptance criteria and effort in days based on team sizes and competences.
And in a few weeks we will be able to shorten the time needed from idea to first MVP to less than 40 minutes, including deployment of the app as well as the microservices 😉
You might want to work on forming a proper sentence.
@qweqwe9678 "whole it project plans and plans to develop whole it products."
What's wrong with that? 🤪
(LLM)-Question #4 (approximately @ the 27-minute mark) Can two siblings each be taller than the other? In my opinion the (AI) is CORRECT. There is no timeline given for this question and it is for that reason I believe the answer is YES. For example, if one sibling is 5-years old and the other sibling is 2-years old (and shorter) there is no reason why the younger sibling can not grow up to be a taller adult which would allow for each sibling to be taller at some point during their life since no timetable or reference made to NOW is ever specified in the question.
"...cant snip out the neurons" -----> "That which has been seen cannot be unseen."
I always have problems with spelling, thanks!
@@Chong-tl2qi This was not a criticism. I was equating these ideas as similar. =) Great video. Thank you.
Thanks for sharing 👍
Tom (if under ~21 years old) can be taller than himself... in a week's/month's/year's/decade's time. He can also be shorter than his younger self as he enters old age.
You can understand the idea of "taller" from grammar books, where comparatives and superlatives are explained.
The point is that understanding can be had. A purely stochastic parrot (autocomplete) doesn't "understand" anything. If the system understands anything, then understanding everything is (largely) simply a function of size and compute power with some architectural steps to support the system.
@Infinity269 Have you noticed that we don't really have a word to describe the AI equivalent of understanding? If not, now I have told you; if so, your post is no better than fancy beating about the bush. Most people know that they mean a different thing when using the word "understand" in the context of AI.
@@jarekzawadzki with all respect (and no snark intended) if the AGI is to be benchmarked against humans and a human can't functionally tell the difference then does the difference between AI understanding and human understanding actually exist?
@@Infinity269 People can tell the difference between a mouse and a computer mouse. I don't know anyone who would confuse these two while using the same word "mouse". So, saying "AI understanding" it's like saying "computer mouse". That's how language works.
@jarekzawadzki Okay, but you are the one saying they are two different things. My contention is that if the person interacting with the AI can't tell the difference between how the AI "understands" something and how a human "understands" something, then there is no real difference. In other words, if a human and an AI are both assigned a task and there is no qualitative difference in the results between the two, how can we objectively say that the human understood what they were doing while the AI didn't (especially when AI systems appear to have at least rudimentary mental modeling capacity a la "Theory of Mind")?
Good talk on AI, LLMs, Gen AI and more... helps to understand better :)
I love academics, but the way they talk can be maddening. And what do I mean by maddening? What I mean by maddening is that they repeat things far too often. And what do I mean by too often? What I mean by too often is every utterance. And what is an utterance? An utterance is something verbal produced by a person. And when I say a person, do I mean every person? Well, yes. When I say a person, I mean any human. I mean you, and me and all the other people.
LOL
This guy is fine, and the information he offers is well organized and delivers what it promises. I just find it a bit tedious that it takes an hour to deliver a 20 minute presentation, largely because he speaks the way he writes an article for an academic journal.
Thank you for improving what we inherited.
One of rare superior lectures in AI 👏👏👏
Fantastic presentation. I worked in the semiconductor industry and over the last decade saw the development of large scale neural network semiconductors. As a technologist I can’t wait to see how the technology matures while the human side of me wonders how humanity will resolve some of the big questions surrounding the concerns of this technology including displacing jobs, using copyrighted material for training and the concerns around fake news generation.
Impressive presentation. Well organized and really informative 🎉❤😊thank you.
Yet the info is dated, and there's no mention of future developments or embodiment, nor of even far-out possibilities like ASI or the Technological Singularity.
The best definition of intelligence I know of is Marcus Hutter's: "Intelligence is an agent's ability to achieve goals in a wide range of environments during its existence."
Hutter has developed a whole rigorous theory around this definition.
Excellent description of AI. Thank you .
You can make LLMs ignore training data by injecting information into your published work that creates hallucinations. I have, for instance, been working with D.A.N. to modify my work so that if another AI trains on it, that AI will attempt to jailbreak itself using known prompts; these known prompts are therefore flagged as an attempt to jailbreak the platform, and the entirety of my copyrighted material is automatically omitted from the training data.
"Can two siblings each be taller than the other?"
Yes, the AI's answer is correct. We just need to find a context in which it can be true. One such context is time: the question does not specify "at the same moment", so at different moments in the two siblings' development each can be taller than the other.
And other contexts can be found as well, e.g. the question does not specify that the two siblings need to be each other's siblings.
This is just another proof that the AI is not conscious and does not understand anything.
The answer would most probably be different when the question had been more specific.
We, humans, tend to understand this question in a very specific way without the need for more specification. The AI doesn't know how to understand this question the way some human intended it to be understood.
The same thing is happening with the map and compass question.
The human assumes that the AI should be thinking like a human with map experience. Plus, I don't get how the AI is supposed to know right and left. As above, the AI lacks specifics.
With the cars, ships and planes - same thing - specifics.
Human fallacious assumptions.
BTW, I asked ChatGPT those questions just a moment ago:
ChatGPT:
Ships were invented before cars and planes. The earliest forms of water vessels date back thousands of years, with boats being used for transportation, fishing, and trade. Cars came much later in the late 19th century, followed by planes in the early 20th century.
ChatGPT:
Yes, it's possible for two siblings to have a situation where each is taller than the other at different stages of their growth. Siblings can have growth spurts at different times, causing their heights to change at different rates. For instance, the younger sibling might be shorter for a period but then have a growth spurt later on, surpassing the older sibling's height. So, while one may be taller at a certain point, their growth patterns could lead to the other sibling becoming taller later on.
ChatGPT:
On a typical map, the compass direction to the left is west. Maps conventionally show north at the top, south at the bottom, east to the right, and west to the left. However, this might vary based on how the map is oriented or if it's a specialized map that's not following the standard conventions.
@Dadas0560 Wow, how insightful, where would we be without ChatGPT?
@ck58npj72 Why waste your time on a video about machine learning when you seem to resent the topic?
Talk to the Hand !!!
27:40: Yes, two siblings can each be taller than the other if measured at various times (but not at the same time)
Which is faster, a horse or a sparrow? Many years ago my 2-year-old son answered: if they walk, then the horse. Now that is intelligence.
Great lecture, thank you Sir!
It is so strange that he got it wrong and the model got it right at timestamp 26:41. It is actually possible for two siblings each to be taller than the other: one can be taller at a certain age, and as they grow the other can become taller.
Yes, but we can tell each other why. We can justify our answer- maybe the researchers should have asked follow-up questions? Would those questions even be related to the same line of thinking? Or would they be treated as new prompts?
I don’t know
Thank-you, wonderful lecture.
Google's Bard is able to correctly answer the test questions at min 26:32 in the video. It gives strong support and reasons for each of the answers which GPT3 got wrong. There have been vast enhancements, or perhaps simply more data and more processing power. :-)
Bard does not know the answer to every question even when it is related to Google products such as Google Adwords.
Regarding north being to the left, this historically was the case, and it's still reflected in Semitic languages. For instance, Yemen means "south" but literally means "right", and in Arabic shamaal means "north" but originally meant "left". It's possible the AI was exposed to literature about this, and it's doubtful there's much text out there explicitly saying that west is to the left.
That is really interesting. I do wonder, when the AI comes up with unusual answers, why the researchers don't just ask the LLM to explain 😂 Sometimes the LLMs/LMMs just get confused when you ask, but sometimes they will explain their reasoning.
A very interesting and complete, yet simple, explanation of how ChatGPT is trained. We, JetSoftPro, a software development service, work with OpenAI, making various tools based on it, but even we did not know that we had more than 175 billion functions at our fingertips!
At last, a rational scientific and humane description of the current capabilities of AI. We must educate the general population.
Good luck with that. You can't put plaster on a wall that doesn't exist.
Now, go look up quantum computing where the machines begin building themselves.
This is very classroomy lectury. I'm going to sit in
Concerning the question, "Can Tom be taller than himself?" Actually, the correct answer is yes.
ChatGPT 4 alludes to it but incorrectly says no. The word "can" is another way of asking "Is there any way that Tom can be taller than himself?" The correct answer can be yes, because Tom as a toddler and Tom now that he is older are both Tom. This should be a reminder that it is easy for humans to ignore the fact that open questions involving logic can manifest gross misinterpretations.
Also depends on if Tom is on tiptoes or sitting down.
What a fantastic Lecture
In the Levantine (Lebanon, Syria) dialect of Arabic the word for "North" can be the same word for "Left", the word "Shamaal" means North, but it also means Left in Levantine Arabic as in "Shamaal" & "Yameen", meaning Left and Right 😊 27:58
Well, at 32:43 I think you are wrong, or at least it's a matter of perspective. The unis used to be completely different, but that has not been so for many years; they still keep up the friendly banter, but there isn't much to it any more. Oxford and Cambridge have some differences in the courses they offer: some courses are the same, but each has unique ones, and they also teach joint courses differently. For both universities it's mostly just a sales pitch.
This is the best explanation of AI I have found. Thanks so much for this content.
Thank you very much!
So the next phase of AI learning is video (cos that's the way we learn most), and what this means is that AI will be able not only to predict the next word you're going to type but to predict what you are going to say or do from spoken word or body language. AI will also recognise you not just from face recognition but from body recognition as well. So if you've been caught on video in the past, it's likely that AI will be able to recognise you! I live in Spain and the police have caught some UK criminals 'on the run' recently in remote parts of the country. I'm wondering if facial recognition tech was employed; I can't see how else they could have been caught.
27:35 Two siblings can each be taller than the other assuming a total of 3 siblings (total number of siblings not specified);
if only 2 siblings are specified, they can be taller than each other at different ages (time of measurement not specified).
The language used in asking the question needs to be specific to get a specific answer. Similarly the other questions asked that are considered wrong are non-specific enough to be correct in specific circumstances.
proving yet again that the meaning of life the universe and everything is indeed 42, and we still don't understand the questions we are asking
Excellent sharing of knowledge. Well done presenter.
About the last part on consciousness, I agree almost completely with what he said. I like how he introduced the topic by saying "it IS an interesting question", even though "most" in the community are not interested in creating a conscious machine (not sure where the data is to support that claim, I mean, who gave all that money for the “Human Brain Project” and others like it). Even if the researchers aren’t interested, you better believe that others are very interested. However, I have a problem with the statement that this is absolutely the wrong way to go about AI and to think about AI.
Firstly, as many have already pointed out in the comments, the field is accelerating extremely fast; I can appreciate that his view is based on the current situation and it is probably safer for his reputation to advise people not to worry about it just yet, but there is a potential for exponential change in the near to midterm future. I’d like to see a follow-up from this guy in the next 5 - 10 years on the topic.
Second point, he stated himself that we are finding that current AI exhibits many “emergent” capabilities that arose without anyone “trying” to produce them. Even if he and his colleagues are not trying to create consciousness there is no guarantee that it will not eventually emerge on its own. Worse yet would be if it emerged without our knowledge, either because it was intentionally masked by the AI or just because we don’t have the understanding to classify and test it.
Third point, eventually AI will become sufficiently advanced so that there is no distinguishable difference between actually being conscious and being an automaton able to fake it so well you can’t tell the difference without a definitive litmus test of consciousness. The point is, consciousness is not necessary for AI to be dangerous, it only matters whether it’s conscious from a moral perspective.
The slide showing various categories of what he believes to be evidence of consciousness is about as close as the presentation comes to answering the question “where is AI headed”. Self admittedly, it is a list he came up with after thinking about it for a very short time, and still it was a very intriguing slide. How many other measures of consciousness should we be considering that are not on that slide? With the rapid advances in state-of-the-art AI combined with the possibility of future systems that can manage their own development lifecycles and experimentally improve themselves, many of the unchecked aspects on the speaker’s list will surely be checked off in quick succession. Consequently, this will accelerate AI development even faster and make us even more dependent on it, thrusting us past the point of no return.
For all our sakes, I hope that this man and others like him are right in their assumptions and that very advanced general-purpose AI is so far in the future that we might as well just ignore it; that it does not become conscious for a very long time, or ever, and is benign. If things weren't moving so fast I might agree more, but the truth is that not even experts can keep track of the full breadth and depth of recent progress outside their specific areas of expertise, and even then they sometimes fall behind.
ChatGPT 4 now gets the order of invention of cars, ships, or planes question correct.
Finally some well-presented input on AI that isn't reduced to either panic or unreflective optimism. With that knowledge in mind, AI sounds much more useful, but limited. A lot of people should see this!
Of course, we still have to rethink our educational system, the danger of deep fakes, the capabilities of tech companies, and many other things... but none of that seems existential.
I don't know if the argument is valid that the AI isn't conscious because it hasn't done any processing while you're away on vacation. Every night we go into deep sleep and have no consciousness, but that doesn't mean we aren't conscious while we are processing information. It could just be conscious in small millisecond increments.
Loved it. Thanks Prof Wooldridge
You've made clear why it felt like something was wrong when "conversing" with ChatGPT and Bard. They seem so disingenuous. I tried the question of whether a set of two twins can be taller than each other. Hilarious. And its explanation of why it got it wrong is entertaining - it blamed me. I had to explain that I had two twins, fraternal, called A and B, and ask whether A can be taller than B and B taller than A, before it understood. So funny.
The first incorrect answer from the AI is due to the question being badly worded. Two siblings cannot each be taller than one another; however, they can be taller than "the other", as the other is not specified in the question. As for north being on the left, this could be a confusion with magnets and their diagrams, as the magnet is predominantly drawn with north on the left-hand side and south on the right. It could also be because many maps have a compass on the left-hand side, with only due north shown.
Finally, the AI said cars came before planes and then boats? 🤔
There are 5 states for a conscious entity. 1. Awake (receives external sensory input and predicts next event). 2. Dream (no external sensory input but still predicts next event). 3. Zombie (receives external sensory input but no prediction of next event). 4. unconscious (no sensory input and no prediction of next events). 5. Dead/Off (no input, output, or action). Without mammalian intelligence, the entity can simulate these conscious states. With sufficient mammalian intelligence, real consciousness will emerge. Fortunately we have not yet achieved mammalian intelligence.
Outstanding lecture!
On the topic of consciousness, if this is an emergent property of a chemically instantiated network of neurons that we don't yet understand, then why can't it also occur in an electronically instantiated network as well? Perhaps it might be very different, as it would not be an embodied consciousness like our own, but a more disembodied consciousness or 'dream-like' experience. It is incorrect to assume that because it is ill-defined in biology, it cannot exist elsewhere. On the topic of scale and efficiency, Geoffrey Hinton has recently claimed that the backpropagation algorithm might be much more efficient than the way actual neurons adapt within the brain. This implies that in some ways these systems are already more advanced than our neurology. On the topic of intelligence and theory of mind, if our brain requires an internal representative model of what a 'chair' is in order to correctly and broadly apply this word, how is this so different from the internal model of weightings applied to activation functions that allows a neural network to correctly classify a chair-like object in a photograph? Human hubris makes us quick to falsely believe that our abilities are far beyond other forms around us, but the living world is full of organisms that already surpass us in many ways regarding memory and sensory capabilities.
19:50 - Machine learning is not necessarily less efficient; its perception of reality is much broader than ours. That's why it may seem to us as if we can learn things faster, but in reality, we just scratch the surface. Thus, the machine's perception of our world will be much deeper than ours.
So good; we all have it to a certain extent. I love that.
I bet these language models (or general AI) will eventually get different brain parts, like fact checking or moral evaluation, before they give output. These functions would correct the output and set rules for how to handle input/output.
It is already happening. For instance, we have the open-source model Mixtral.
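For context, Mixtral's "parts" are a sparse mixture-of-experts layer rather than dedicated fact-checking or moral modules: a small gating network picks which expert sub-networks run for each token. A toy numpy sketch of that routing, with made-up dimensions and random weights, purely to illustrate the mechanism.

```python
# Toy mixture-of-experts routing in the spirit of Mixtral: a gate scores the
# experts for each input and only the top-k experts contribute. Dimensions and
# weights are random placeholders for illustration only.
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 8, 4, 2

gate_w = rng.normal(size=(d_model, n_experts))             # gating network
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]

def moe_layer(x: np.ndarray) -> np.ndarray:
    scores = x @ gate_w                                     # one score per expert
    chosen = np.argsort(scores)[-top_k:]                    # indices of the top-k experts
    weights = np.exp(scores[chosen])
    weights /= weights.sum()                                # softmax over the chosen experts
    # only the selected experts are evaluated; the rest are skipped entirely
    return sum(w * (x @ experts[i]) for w, i in zip(weights, chosen))

token = rng.normal(size=d_model)
print(moe_layer(token))
```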
Nice video, thanks :)
Two siblings can each be taller than the other... if there are three siblings. Or if the "other" is any third party that is shorter than both the two siblings.
I'm afraid the LLM got it correct...
I wonder: were all the questions given in a package, in one go? Or was each entered and processed separately from the same initial state?
It would be interesting to see if there was a difference.
Around 28 minutes:
The three things it got "wrong".
Can 2 siblings each be taller than the other?
Yes, but not at the same time. E.g. when I was 15 and my sister was 12, she was taller than I was. Since I turned 16 it has been the other way round.
On many maps North is the only direction explicitly specified, and it is often on the left of the map.
Cars - as in chariots and carts (wheeled conveyances), were around before ships. Not so motor vehicles.
On nearly all maps and plans, grid north is usually oriented up the page, or the top point of a compass rose. Depending on the map it might show magnetic and/or true north.
My interpretation of the question was which compass direction is left, which would be west, not which direction or side of the map is the compass/north point located.
@@nick_callaghan
Agreed.
That was my interpretation also.
And I am on the autistic spectrum, so I went searching for other possible interpretations, because I often interpret things differently from those around me. These seemed to be possible interpretations from the available search space - given that LLMs are essentially just probability engines, and most are deeply influenced by recent prompting.
In architecture north is commonly up or left on the page, though north being up is still more common.
Great lecture, fascinating, nicely calibrated for the layman... thank you so much )
Since those are Turing Lectures it would be nice to invite some Turing Award recipients to see what they have to say about the glorified autocomplete.
Great clear overview of LLMs
@ 27:48. The answer is correct if you consider a time difference. At different ages the kids are different heights, so they can each be taller than the other at different ages. Also, Tom can be taller than himself over time, so it's ultimately true. The critical aspect here is considering differences across time: the most encompassing reasoning should consider all possibilities over time, not just an instantaneous moment.