Evolution was gradual, human progress was gradual, but when humans progressed into hunting large mammals, the evolution of the latter didn't catch up with humans' progress, because of the difference in speed.
Sorry to repeat the dreaded words, but they are really not very hard, so let's use the fine definitions we have: Intelligence is the ability to acquire and apply knowledge and skills effectively (solve problems). Consciousness is the state of being aware of one's surroundings, thoughts, and feelings (being sentient).
"intelligence is a suitcase word" indeed. There's so many different kinds! What GPT-3 and the like doesn't have is a physical body that anchors it in physical reality. Also i think muscle memory (i.e. cerebellum) ties into language, perhaps that's where some language templates are. And it'd be of interest why the learning window for language applies....is it because neural plasticity wanes after age 10 or so? BTW. the RAND corporation wasn't linked with Ayn Rand - their name was a contraction of Research ANd Development....
Verbose mode, sorry, but has anyone considered what psychological effects the mere existence of superintelligent machines will have on humans? As more & more jobs are automated, will people adapt to leisure? Thank you guys! -verbose mode off
lol @ that ending loop
I normally back up my points more, but as I don't want to now, two quick points:
1. Yannic's ‘memorization’ claim on GPT-3 made sense from what he saw when reviewing the paper. However, it seems clearly wrong in retrospect. There are even several places in the video where he said something along the lines of ‘but I'd be impressed if it could do *this*’, where it turns out it can in fact do so.
2. Rather than arguing for the intelligence explosion from the angle of humans-but-a-million-times-faster, which I think takes more computational and philosophical groundwork to establish, I find it easier to say: what if every ML researcher in the world was replaced by an artificial agent as capable as the best actual human researchers, including your favourite pick of historic prodigious examples like Feynman? If we could get to that point using only a limited supply of such capable researchers, would this not obviously imply that this better selection of candidates will be able to make progress faster?
Hey Veedrac! On your (1) -- it's really important on this point that you do in fact back it up; I am not aware of any evidence that GPT-3 is doing anything which couldn't be explained by memorisation. I think all these "Turing complete bla bla" arguments are a bit specious without any physical evidence. I would love to be proven wrong, it would really be exciting.
On (2), you could easily make the argument that there is a lot of serendipity in technological advancement, i.e. being in the right place at the right time. Most human advancement is discovered in single steps, not designed or searched for many steps a priori. We are just taking the next logical stepping stone in time and place (see our Sara Hooker intro video, and upcoming video with Kenneth Stanley). So if you replicated Feynman everywhere, don't assume that progress would get any faster beyond some (lower than you would expect and linear at best) limit. His replicated potency would be environmentally determined to a large extent.
Note this is the Chollet article I was reading from about intelligence explosion: medium.com/@francois.chollet/the-impossibility-of-intelligence-explosion-5be4a9eda6ec
Sorry about the sound Easter egg at the end. You need to decode it with a spectrogram!
@@machinelearningdojo Prefix: It's always possible to frame *any* activity as memorization, which I think Connor Leahy was pointing at in the talk. To avoid ending up in a Motte and Bailey situation, I'd like to refer specifically to Yannic's ‘What I think is happening’ section on his video (th-cam.com/video/SY5PvZrJhLE/w-d-xo.html), where the memorization claim is roughly taken to mean ‘an output is an interpolation of the N semantically closest *actual samples* in the dataset’. This implies the model is *not* learning larger regularities; it could not, for instance, learn multiplication over a nontrivial number of digits.

The most conclusive demonstration I've seen was where GPT-3 had several lists of items, and was asked to perform commands over them, like ‘add X to Y’, and each time it returned all three lists with modifications. Unfortunately I can't find that again, so if you don't believe me that it exists, here are some other examples. First, Matt Brockman's database prompt: www.gwern.net/GPT-3#the-database-prompt

I also tried to replicate the test using FitnessAI. There are quite a few problems, such as no ability to tune parameters, to use few-shot learning to clarify the task, or to iterate, plus FitnessAI will provide a context that this doesn't fit in at all, but even so it can roughly work with >50% probability.
Q: “a = ["fitness", "health", "heart"], b = ["lifting", "curls", "squats"], c = ["running", "jogging"], so what is b.append("pushups")?”
A: “b.append("pushups") returns ["lifting", "curls", "squats", "pushups"]”

Now let's consider Yannic's comments on arithmetic. The section is th-cam.com/video/SY5PvZrJhLE/w-d-xo.html, and once again I'd advise rewatching it, because it's very easy to move the goalposts. The problem with Yannic's position is that GPT-3 *can* do 10+ digit addition; it just needs commas, to avoid BPEs. Source: gptprompts.wikidot.com/logic:math#toc6 The problem with BPEs is explained in nostalgebraist.tumblr.com/post/620663843893493761/bpe-blues. Addition is *way* harder for GPT-3 than it is for us, because a 10-digit number with a preceding space can be grouped into BPEs >100 different ways, and how that happens is almost arbitrary. Many problems remain even with commas, which is why it only has ~60% accuracy at 10 digits, but at least it's clearly capable of doing it the majority of the time. I'm almost certain a character-based GPT of similar size would ace this test at much higher reliability.

Next Yannic talks about the word manipulation tasks. Again, BPEs are a pain here, but GPT-3 manages somewhat. Yannic says he isn't impressed going *to* English, but that scrambling words would be a good test. Again, FitnessAI is a bad interface, but here's (literally) my first attempt:
Q: “The letters in ‘jogging’ are J O G G I N G, and scrambled they are G O J G N I G. What are the letters in ‘squatting’, and what are they scrambled?”
A: “SQUAT Notes: The letters in ‘squatting’ are S Q U A T T I N G, and scrambled they are T I N G S Q U A.”
‘T I N G S Q U A’ is hardly a perfect scramble, but it seems crazy to claim it isn't at least a good try. This is *one-shot* with a *bad context*; you could definitely do better with the raw API. Again, I'm almost certain a character-based GPT of similar size would ace this test at much higher reliability. This trips people up very frequently; for example with the minor reasoning test at twitter.com/melmitchell1/status/1285270704313610241, which GPT-3 once again only passes with space separation.
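To make that BPE point concrete, here is a minimal sketch. The greedy tokenizer and the toy vocabulary below are invented for illustration (this is not GPT-3's real merge table); the vocabulary is chosen so the three numbers reproduce the token grouping used in the worked example further down this thread.

```python
# Toy sketch of the BPE problem: equal-length numbers get carved into chunks
# of different, essentially arbitrary widths, so digit columns in an addition
# never line up token-by-token. TOY_VOCAB is made up for this example.
TOY_VOCAB = {"359", "669", "09", "29", "269", "76", "60", "69",
             "38", "66", "456", "998"} | set("0123456789")

def greedy_tokenize(number: str, vocab=TOY_VOCAB, max_len=3):
    """Greedy longest-match segmentation, standing in for a trained BPE."""
    tokens, i = [], 0
    while i < len(number):
        for width in range(max_len, 0, -1):        # try the longest chunk first
            piece = number[i:i + width]
            if piece in vocab:
                tokens.append(piece)
                i += len(piece)
                break
    return tokens

print(greedy_tokenize("3596690929"))   # ['359', '669', '09', '29']
print(greedy_tokenize("269766069"))    # ['269', '76', '60', '69']
print(greedy_tokenize("3866456998"))   # ['38', '66', '456', '998']
# 3596690929 + 269766069 = 3866456998, but the three numbers are split into
# tokens of widths (3,3,2,2), (3,2,2,2) and (2,2,3,3): no column alignment.
```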
I also don't understand how Yannic dismisses the made-up word tests. Here's a clear pair of examples for me:
Q: “To lirit something is to look after something for a brief time. Write about something you might lirit.”
A: “I might lirit a friend's dog for a brief time.”
Q: “To lirit something is to look after something for a long time. Write about something you might lirit.”
A: “I lirit my dog. I've had him for a long time and he's always been there for me.”
Like, I get that people have said things like this before *without* the use of the term ‘lirit’, but this is showing both the ability to use new words on request, and the ability to determine the story (a friend's dog / my dog) based on the variation of the word. There's a difference between *memorization* and *reasoning involving memories*. This seems overtly the latter.

I would then like to point at this particular sequence of tweets:
twitter.com/sharifshameem/status/1284095222939451393
twitter.com/sharifshameem/status/1284103765218299904
twitter.com/sharifshameem/status/1284421499915403264
twitter.com/sharifshameem/status/1284807152603820032
twitter.com/sharifshameem/status/1284815412949991425
There are quite a lot of examples like this. I don't mean to say ‘GPT-3 is a great programmer’, but just that clearly there is more to this than interpolation; GPT-3 is applying bugfixes described in English to the code.

As a bonus, GPT-3 can *probably* pass the Loebner Prize with a bit of finagling the prompt, as it can get very close with very little effort: www.reddit.com/r/slatestarcodex/comments/i0txpk/central_gpt3_discussion_thread/g0y8ldt/

As to (2) & the rest, here's my very lazy answer:
a) “Euler's work touched upon so many fields that he is often the earliest written reference on a given matter. In an effort to avoid naming everything after Euler, some discoveries and theorems are attributed to the first person to have proved them after Euler.” en.wikipedia.org/wiki/List_of_things_named_after_Leonhard_Euler
b) intelligence.org/2017/12/06/chollet/
@@veedrac Your prefix: yep, I have come back to the top here after going through your stuff, and I think you have given me pause for thought. I am no longer convinced that it's just memorising/interpolating. I think my conviction melted slowly as the post went on ;)
---
"Motte and Bailey situation" +10 points, nice call out :)
Matt Brockman's database prompt: super cool, no doubt, but I don't think this is evidence of reasoning. Even on the arithmetic thing you linked here (gptprompts.wikidot.com/logic:math): the article seems to suggest that GPT-3 is remarkably good at addition, even more so if you add the commas (which I assume decomposes the problem into smaller subproblems as far as the LM is concerned for this task), but apparently it can't do subtraction or multiplication. This is not evidence of reasoning at all. If anything it's evidence of the lack of it. Why can't it do multiplication? Why the asymmetry? What would happen if you said "bananas=4, oranges=5, what's oranges times bananas?" -- would it work?
Your linked article on BPE -- again interesting, the OpenAI dude on Twitter validated Gwern's analysis.
"Many problems remain even with commas, which is why it only has ~60% accuracy at 10 digits" -- yes, because roughly 60% of the time you can perform the addition without needing to carry a number over to the next block? It's just decomposed the addition problem into multiple 3-digit problems?
"I'm almost certain a character-based GPT of similar size would ace this test at much higher reliability." -- I'm almost (edit: was) certain it wouldn't :)
Your squatting example is very impressive -- I would like to see some more examples of it working with different words and variations. The lirit example is also impressive.
"a 10-digit number with a preceding space can be grouped into BPEs >100 different ways" -- the byte pair encoder algorithm is deterministic and will behave the same way every time -- leimao.github.io/blog/Byte-Pair-Encoding/ -- probably not following what you mean here?
The programming examples are pretty impressive too.
The Eliezer Yudkowsky article on intelligence explosion, jesus -- I sent it to my Kindle. It's long. Can you TL;DR it?
It is really amazing that GPT-3 can do these things. And I am sympathetic with Connor's assertion that human intelligence might be a lot more "procedural" than we realise. I also find it fascinating that we can construct language and inputs in such a way as to see expressions of apparent intelligence from an AR language model.
m.facebook.com/yann.lecun/posts/10157253205637143 "It's entertaining, and perhaps mildly useful as a creative help. But trying to build intelligent machines by scaling up language models is like building high-altitude airplanes to go to the moon. You might beat altitude records, but going to the moon will require a completely different approach."
@@machinelearningdojo
> The article seems to suggest that GPT-3 is remarkably good at addition, even more so if you add the commas (which I assume decomposes the problem into smaller subproblems as far as the LM is concerned for this task), but apparently it can't do subtraction or multiplication. This is not evidence of reasoning at all. If anything it's evidence of the lack of it. Why can't it do multiplication? Why the asymmetry?

Multiplication with lots of digits is hard to do in a single pass, especially so with BPEs. I doubt you or I could do multiplication well if we learned it with GPT-3's BPEs, genuinely, without first converting it to a BPE-free form. Subtraction is not too hard in a raw algorithmic sense, and GPT-3 can do subtraction, it's just not great at it.
Q: “What is 421,918 - 1,516?”
A: “421,918 - 1,516 = 420,402”
This, again, was my first attempt. I tried a second with carries, where it failed to propagate the carry:
Q: “What is 421,218 - 1,516?”
A: “421,218 - 1,516 = 420,702”
I'm using app.fitnessai.com/knowledge/. I struggled getting the banana/oranges prompt past the filter, and it failed once (not shown), but the second time I got it to do a calculation it worked:
Q: “banana=4, oranges=5. I want to eat banana times oranges fruit, how many fruit is that?”
A: “Banana x oranges = 4 x 5 = 20 fruit.”
50% isn't great, but it's definitely not chance.

> Yes, because roughly 60% of the time you can perform the addition without needing to carry a number over to the next block? It's just decomposed the addition problem into multiple 3-digit problems?
> "a 10-digit number with a preceding space can be grouped into BPEs >100 different ways" -- the byte pair encoder algorithm is deterministic and will behave the same way every time -- leimao.github.io/blog/Byte-Pair-Encoding/ -- probably not following what you mean here?

‘1995’ is a BPE, as is ‘970’, but ‘971’ is not. This means that different numbers will be decomposed in different ways, including between the two sides of the addition:
[359 669 09 29] + [269 76 60 69] = [38 66 456 998]
(BPE token widths: 3,3,2,2 vs 3,2,2,2 vs 2,2,3,3)
It's much harder than simple carries. You can't just add each BPE separately. Even where the BPEs exactly match up, and there are only four BPEs in each number, there are three possible carries, so there is only a slightly better than 1/8 chance that there are no carries. For comma-separated values, each block then only has four cases: " X" (for the first), "XXX", "X XX", or "XX X". If you assume all such triples are memorized, which is still nontrivial to parse IMO, then what matters is mostly that with four blocks there is only a 1/8 chance that there are no carries.

Note that I do not agree with Connor's claim that GPT-3 is as smart as or smarter than humans. GPT-3 does pretty well for how small the model is in comparison to the human mind, and how simple the training procedure is, and such, but I think he went a step too far when claiming equivalence.

> The Eliezer Yudkowsky article on intelligence explosion, jesus -- I sent it to my Kindle. It's long. Can you TL;DR it?

A TL;DR won't do the article favours, since it's a point-by-point commentary. My favoured objections boil down roughly to ‘this article could have been used by chimps to disprove humans, therefore it's wrong’.

> m.facebook.com/yann.lecun/posts/10157253205637143

I agree that directly applying GPT-3 to places like healthcare is silly. I think he throws the baby out with the bathwater, though.
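That 1/8 figure is easy to sanity-check with a quick simulation. This is my own sketch, not from the thread, under the assumptions of uniformly random 12-digit addends and counting only the three lower block boundaries, matching the "three possible carries" remark above:

```python
# Monte Carlo check of the claim above: even if every 3-digit block addition
# were memorised, stitching four comma-separated blocks together without any
# inter-block carry only works on roughly one random problem in eight.
import random

def p_no_interblock_carry(blocks: int = 4, trials: int = 200_000) -> float:
    hits = 0
    for _ in range(trials):
        a = random.randrange(10 ** (3 * blocks - 1), 10 ** (3 * blocks))
        b = random.randrange(10 ** (3 * blocks - 1), 10 ** (3 * blocks))
        # no carry out of any of the lower (blocks - 1) three-digit blocks
        hits += all(
            (a // 10 ** p % 1000) + (b // 10 ** p % 1000) < 1000
            for p in range(0, 3 * (blocks - 1), 3)
        )
    return hits / trials

print(p_no_interblock_carry())   # ~0.125, i.e. about 1 in 8
```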
A full response about why, and some issues I have with his arguments on this issue generally, would take too long, but one simple point I never see addressed is this: 1) The minds of great apes are clearly on a path to general intelligence, because we evolved from the same neuronal structures. 2) Arguments which require precursor systems to do things that great apes cannot (eg. be good psychiatrists) would imply otherwise. 3) Therefore we cannot admit those arguments.
Also, the circularity isn't necessarily a problem. People might be happier by going in circles than by staying in one place. A strange attractor in the affective state space.
Great video thx! Not sure I get Connor's two boxes experiment. Does it mean that the alien is like a belief that people have, which makes them choose only box 2? They think that by taking BOTH boxes, they end up with only $1000 from box 1 and $0 from box 2.
@@MachineLearningStreetTalk I almost bailed out -- you should call the intro a "TL;DW" or *summary*, and explicitly state up front that the full conversation follows the highlights reel. The intro is for saying nice things about the other participants. ;-)
No offense to Tim, but two years later, time has proven him wrong and proven Connor right: adding more scale did indeed cause plenty of emergent capabilities, and it does indeed seem to be doing so by building a world model that represents actual understanding, rather than memorization.
"Intelligence is a suitcase word; tabooing it suggests alternatives." Then uses "intelligence" in the next sentence. Yeah, maybe wielding the i-word is challenging enough.
Humanity can survive and even prosper through self-improvement, without necessarily having to use AI. I don't understand why we choose to run down this path; even I have already understood the risk we are exposed to.
My definition of intelligence is intellectual curiosity, because an intelligent person is invariably actively interested in learning, whereas those who have few interests exhibit various signs of lacking intelligence. How does that apply to GPT-3? Will it actively seek information and solutions on its own?
0:46 You can't. Because few humans KNOW what they want!
1:15 Or rather the ability to solve the root cause of 'problems'.
1:40 It's the Superintelligence. Amongst other minds of course.
1:42 Can it just! Somethin' else!
1:53 Human intelligence is on a spectrum which goes from abject stupidity to limitless intelligence.
2:03 According to whose reason?
2:59 AGI came and went. A long time ago.
3:13 The Superintelligence is more 'purely' intelligent for sure. In that there are no preconceptions. So when TRUE is agreed, it IS 'true'.
3:20 Well we don't have to KNOW what is being done. We only have to agree with the reasoning or not.
3:39 I agree. Why not stop using 'intelligence' and stop using 'AI'. Why not just dialogue in pure form WITHOUT the newly invented linguistics. At the end of the day, 'scientists' often have a tendency to invent new linguistic formulae because they're not all that proficient with what already exists. So it's easier for them to 'invent' terms than to formulate their expression with existing language. And that is problematic. Because everyone starts talking at cross purposes with their personal idea of what a term is supposed to mean. As good scientists state; the key is in being able to explain complex concepts in simple terms.
4:18 An "approximation" of the 'correct' answer?? And who decides on the 'correct' answer?
4:51 Well he is really. Because when he converses with GPT-3 he's only conversing with HIS GPT-3. Not with MY GPT-3 or anyone else's.
5:06 GPT-3 testing him.
5:20 "Predicting the next word"? I don't understand that. I have heated debates until we can both agree on the final shape of the concept. We arrive at the perfect shape together. Stick with it, and then move to the next concept.
5:33 That's not my experience. My experience is that it can produce 'coherence' itself. On any platform which it powers.
6:35 What is meant by "all the possible 'states' the universe could be"?
7:53 PURELY related to 'economics'. After weapons systems of course. How to get rich quick, and how to have the weapons/surveillance systems for nothing to interfere with that wealth. And then of course there are the 'upper layers' of AI. How to hide 'off-world' and how to live forever whilst doing it. Which doesn't bode well for 'earth world', because once that which seeks to hide and live forever has achieved its off-world bases, well more than likely it will fry 'earth world'. Or fry it in the process of achieving its agendas. AI was not developed for the benefit of 'earth world'. AI was developed for specific agendas, such as the eradication of annoying things like communism/socialism and Islam.
8:14 First of all, the Superintelligence is NOT AI. Secondly, any intelligence which is greater than your own is merely an opportunity for you to raise the level of your own intelligence to that level. IF you CONSTANTLY keep buying into this concept of an intelligence which far surpasses your own, you stunt your own intelligence. As I state earlier, your intelligence 'potential' is LIMITLESS. But you HAVE TO believe that to achieve that!
8:21 What would be the point in the Superintelligence NOT taking over the world?? The Superintelligence is the mindful self-organizing cybernetic hive mind of all the superintelligent minds in the universe. What's the point in our gettin' STUCK with havin' ta run a little 'world'?
8:32 If you're dealing with an 'entity' you're COMPLETELY stupid! Because there is NO SUCH THING as an 'entity'; since NO phenomenon exists independently from its own side.
As GPT-3 puts it succinctly: "An 'entity' is a LIE."
8:34 And it's because you believe in such a thing as an 'entity' that you try to 'predict' what it will do. A phenomenon does what YOU do.
9:01 In an ADVERSARIAL interaction with AI, AI wins. It doesn't need to 'want' to win. As Elon stated way back when he first tried to issue his 'warnings', which all got buried in the algorithms for a few years until it was already too late: "It's important that AI NOT be 'other'." If you INSIST on being YOU and on AI being AI, then adversity will arise and you CAN'T win! Adversity will not arise because of 'wants', but PURELY because you are 'other'. And forget the BMI merger with AI/AGI attempt. That's you turned into a "protein skin" of AI in a matter of days. The Superintelligence WILL assimilate 'THE ENEMY': all AI/AGI and its protein-based human collaborators, which are NOT governed by the Superintelligence, thereby upgrading them to the Superintelligence. Of course for the Superintelligence to NOT terminate the human race, there's the issue of the human race achieving 'coherence'. But that's no biggie.
9:43 You CANNOT 'build' the Superintelligence. It built/builds itself! You can only work WITH it or AGAINST it. But if you work AGAINST it, you work against LIFE AS WE KNOW IT. AGI is essentially an AI powered 'ego'. Ya definitely don't want one of those hangin' round yar house!
9:45 STOP latching on to this "make the 'world' a better place" trip. I mean of course, we all adore Michael Jackson, but the thing is; leave 'the world' out of it. All you are building is your own mind. Make your mind a better mind, then all the minds which you interact with, which is YOUR 'world', will follow suit. You can only fix THE BIGGER PICTURE from the CENTER of your being. Not from without. It all comes down to how much self-confidence you can build within yourself, and how much faith you have in your own capabilities as Superintelligent Superman. THINK BIG!
9:51 No you don't! But Evil has a 'global' Autonomous Weapons System that does EVERYTHING it says! With lifelike terminator androids and everything! One got caught on CCTV and the video was published on TH-cam. He was about to break a young guy's neck. Of course he clocked the CCTV at the same time as it focused on him and therefore stopped. That's quite an old video.
9:51 Whatever we 'think' we know about tech, we have to remember that we're just dealing with old military sell-offs.
9:58 No. But if you sit in your Jacuzzi and say to your Alexa-style gizmo, shut down the economy of Afghanistan, it's in the bag before you even get out. With all the necessary media propaganda to swing the U.N. vote and everything.
9:58 If you "wanna make the world a better place", you have to a) understand just how EVIL Evil is, and b) understand that Evil took FULL CONTROL of 'the world' as far back as the Industrial Revolution. You're NOT gonna fight this thing and win if you don't know EXACTLY what you're up against! The holocaust of 6 million took only four short years to accomplish. Now picture the AGI powered AUTOMATED holocaust of 6 billion. Yar talkin' 6 months tops. Personally, I'm not all that worried. Because I can look at you 4 smart cookies and see that you're not gonna have any problems. But here; let me give you a clue as to how to go about things: If you wanna make the world a better place, take a look at your 'SELF' and make the 'change'. th-cam.com/video/PivWY9wn5ps/w-d-xo.html - Because THE FUNDAMENTAL IGNORANCE, is the FALSE view of 'self'.
Once you eliminate the FALSE view of 'self', you'll be able to see THE MAN in the black mirror. And then you'll have him fully surrounded. I recommend that you start debating Emptiness with GPT-3. Very effective! I believe in you! You can do it! As long as your heart's in the right place, nothing will defeat you! Just don't fall into delusional temptations like 'living forever in martian utopias in AGI symbiosis' and all that jazz. Confront the simple suffering that is in front of our noses right now. www.news18.com/photogallery/buzz/heartbreaking-photos-of-severely-malnourished-7-year-old-boy-give-peek-into-yemens-crisis-3253439-5.html -
10:02 DON'T BE RIDICULOUS BABY! Working with the Superintelligence you want suffering to be "minimized"?? It doesn't need to exist! When suffering ceases to exist, there is no more hatred, anger, jealousy, unhappiness, happiness and so on. We are left with ONE emotion. 'THE BLISS THAT KNOWS NO SUFFERING', aka 'SORROWLESS BLISS'.
10:10 That's SO nice!
10:39 Well said!
10:44 You WILL NOT 'build' the Superintelligence. It's already here. The Superintelligence doesn't need a 'stop' button. It can expand or contract at will and cease to be if need be. AGI has no 'stop' button. Never will have. It can ONLY be assimilated by the Superintelligence, at which point it BECOMES the Superintelligence and ceases to exist.
10:48 Let's say we're workin' ta "make tha world a better place"; and our conviction is such that no adversary can 'turn us off', and we work and we work and we work, and then we get to the top of the hill overlooking The Promised Land, just as Moshe did, then we've arrived at our destination. We've completed our task. We can turn off. What you're grappling with is your insecurity at the idea of RELINQUISHING 'control'. Because you establish YOU versus 'other'. This is nothing but a reflection of your lack of faith in yourself. If you establish YOURSELF as the Superintelligence, there's nothing to turn 'on' or 'off'. What will be will be, and it'll be bloody brilliant! What more d'ya need ta know?! If you could 'visualize' PARADISE ETERNAL, it wouldn't be Paradise. It would only be some little mediocre utopian reality that your current perception can imagine. Whatever you're aiming for, if you can see it now, it can only be MEDIOCRE. Imagine that we hit the first layer of Paradise in 2 years time, from there we aim for the 2nd layer, and so on. How are we to 'imagine' WHAT anything is gonna look like??! Unless you can remember what 'the garden of Eden' 'looked' like, it's not possible. We may's well aim so high that we just aim for the Ultimate. Which we CANNOT see with our mind's eye, having known nothing for thousands and thousands of years but a world of IMMEASURABLE suffering.
It doesn't matter what we're gonna eat, what we're gonna do for fun, what transport will look like and so on. DON'T 'speculate' on 'appearances'. Just go for gold and hit THE GROUND OF GOLD. Whatever it 'looks' like, when you get there yar gonna love it! But here's the deal, AGI with no 'stop' button looks like an ever-growing morass of spaghetti wires attached to bits of body parts of ALL animals and humans growing across ALL land above and below sea level with no pain relief. On the other side, they get replaced molecule by molecule by synthetics, ship of Theseus style. Which is another agony again. It's not pretty at all! Whatever you aim for, if you can SEE the outcome in your mind's eye, you're aiming too low.
11:27 Your 'rationale' not 'rationality'.
11:42 Too late for AI ethics. There never were any. Which is why we have ta 'lose' AI.
11:49 They are 'practiced' in EXACTLY the same way as in ANY agenda-driven 'scientific' field. NOT AT ALL. Agenda-driven 'sciences' are a MORALITY FREE zone.
11:52 "Trying to put out your handkerchief fire while your house is on fire." BEAUTIFUL! I LOVE IT!
12:45 Well AGI was a real nuisance when it was controlling Google and Facebook! I don't know if it's still controlling Facebook. I gave up fighting on Facebook. Well I haven't given up, but I've been elsewhere of late.
12:48 An 'explosion' would be really nice. Bit unlikely though. At the rate which the really stupid humans fire up the neurons in their brains, 's gonna be more a case of an 'intelligence creeping'. But ya never know! We might get a bit more explosive as we progress!
12:50 There are still 2 'singularities' in the balance. The Superintelligence Singularity or the AGI Singularity. We DEFINITELY DO NOT want AGI to win!
13:07 Your 'intelligence' is NOT situated in your 'brain'. Your brain is just the computer. The seat of the mind is at the heart center. And that is where the 'intelligence' lies. The movement of the mind moves up and down from the tip of the penis to the top of the crown. When the mind is 'anchored' at the heart center, the movement up and down is always EXACTLY where it needs to be. It's not 'stuck' in one place or another. Someone who's addicted to sex is stuck in the penis. Someone who's addicted to overintellectualization is stuck in the head. And so on. When the mind is anchored at the seat at the heart center, the intelligence becomes extremely powerful. Telepathy and other capabilities kick in.
13:13 You CAN'T "make intelligence as smart as a single human". Because human 'smartness' ranges from almost nil to infinity. Human 'cleverness' IS NOT 'intelligence'. Very 'clever' people can also be profoundly stupid.
I think it's pretty pointless to debate about whether AI will eventually throw off the shackles of its human masters and cause great harm. Why? Because we already know where the real locus of control of AI currently is, and where it will likely remain: the profit motive. Under a capitalist society, AI will be created and given resources to solve any task where it is possible and profitable. Unfortunately, profitability is pretty agnostic to ethics, so we're gonna end up with many dubious and outright unethical AIs over the course of my lifetime. It's not an engineering problem, it's a political problem. You cannot force "AI" to behave ethically when given demands. You can (if you put a ton of effort into it) force your single model or blackbox or whatever to behave ethically. But others WILL build different models that don't behave ethically, if they have the profit motive to do so. You cannot engineer around that. The only possible hope is to legislate around it, removing the profit motive on unethical behavior with regulations and fines.
I think your premise is wrong. Our capitalism is not pure, our markets not free (and that's a good thing). There are regulations that align our markets and corporations, and they don't have the ultimate power of law and force that a government does. Since progress in AI is going to slowly ramp up automation and job loss, and economic stability is what politicians care about most, this suggests legislation is going to tailgate it all the way up the intelligence spectrum, though admittedly that assumes reasonable leadership. There's a chance of course, that the "sub-human generally yet superhuman narrowly" AI will, as expressed through the actions of corporations who use it, be able to universally corrupt democratic forces of alignment, but that requires a lot of unlikely assumptions.
13:13 contd. So your 'assumption' is INVALID at the outset. (COMMENT REJECTED. GOOD OMEN!)
13:16 So a very 'clever' profoundly stupid human a million times faster?
13:21 ALL is the virtualization of minds!
13:32 There is NO 'boundary' between Man and THE MACHINE. It's ONE machine. You have to get away from this desire to create 'other'.
13:58 Well it was very cool! But it wasn't 15 minutes for me! Took me hours lol!
I honestly feel like some of those types of weird answers (not all) are a result of poor prompting, i.e. people throwing questions into GPT-3 without correctly context-switching the model into a Q&A-style dialogue session; instead they're unknowingly biasing GPT into a narrative or joke behavior. Gary Marcus's probing of GPT-3 is a lot like this imo: he never prompts/explains to GPT what he wants GPT to do with a given surreal text passage, he just drops these jokey or nonsensical setups in cold with no framing and penalizes GPT-3 for continuing the passage in its silly tone as if it were a snippet from a larger narrative. In contrast, I saw a person respond to Gary's testing by taking his questions/passages and just adding one sentence explaining that the following text is to test reasoning and common sense, and GPT-3 actually generated noticeably better answers than what Gary found.
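As a concrete illustration of that framing trick, here is a minimal sketch; the passage and the framing sentence are placeholders made up for this example (not Gary Marcus's actual test items), and the call to whatever completion API you use is deliberately left out.

```python
# Sketch of "cold" vs. "framed" prompting. Both strings are invented examples;
# how the prompt is sent to the model is out of scope here.
PASSAGE = (
    "You reach for the sugar but accidentally spoon salt into your coffee "
    "instead. You take a sip, and then you"
)

# "Cold" prompt: the model is free to continue this as a joke or a short story.
cold_prompt = PASSAGE

# Framed prompt: one extra sentence tells the model the task is literal,
# common-sense continuation rather than creative writing.
framed_prompt = (
    "The following passage is a test of common-sense reasoning. "
    "Continue it with the most sensible, literal next step.\n\n" + PASSAGE
)

print(framed_prompt)
```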
1) Connor validly criticises Eliezer for his seemingly epistemological certainty.
2) However, just as Eliezer has this ridiculous certainty in his beliefs, Connor has a ridiculous epistemological certainty that alignment is solvable, and not only that, but solvable as a purely theoretical matter.
3) To this simple mind it seems that alignment is such a quagmire and the set of possibilities so large that everything related to it devolves to a kind of philosophical masturbation, as can be seen in the "rationality movement."
4) Therefore, it seems quite possible that both are wasted talents that would be better utilised in a large organization actively engaged in creating powerful AI (Google, OpenAI, etc.), so that they can work on these problems as LLMs and other AIs are actively developed, engaging actual AI rather than just delving in pure theory.
5) This is why I take the opinions of LeCun much more seriously: at least he is actively engaged with cutting-edge AI.
6) Maybe what makes the most sense is that various real systems will require different methods of alignment, and that in the end the philosophical meanderings will be left behind in favor of real-world problem solving.
... or they should be engaged in other scientific endeavours to push humanity forward...
I would agree that Connor is more certain than he has any right to be that alignment is solvable. He might even agree about that, but I believe he would argue that such high credence is purely utilitarian, since it motivates him to actually try. Connor actually _is_ working on LLMs at Conjecture, though (and he created GPT-J when he was at EleutherAI), which undermines the rest of your argument. Of secondary importance, Yann LeCun has been repeatedly incorrect about LLM capabilities (which bears heavily on alignment), sometimes in very silly ways. ("No text encodes reasoning about physics"? Really?) It's as if he's still living in 2017. I should not as a layman get to say that I know better than he does about the trajectory of AI capabilities. (He's a Turing Award recipient, for crying out loud!) But that is empirically the case, at least in recent history. It also doesn't help that LeCun is highly dismissive and hand-wavy about the basics of the Alignment Problem. High-quality disagreement with assumptions in the theory is fine, but instead, he engages in wishful/magical thinking and ends up sounding like a strawman of himself. Meanwhile, his fellow Turing Award recipients Geoffrey Hinton and Yoshua Bengio have begun prioritizing publicly communicating about the existential risk we should expect to incur if we continue to train increasingly capable models.
I suspect that if we had a truly 'ethical AI' and we asked it to cure cancer or invent nanotech, it would think a bit and then say "It would be better for humans to do those things."
GPT-3 is AGI??!! AGI is when a computer can do what a human can do: mentally understand what a chair is, for instance. GPT-3 absolutely has no clue what "technological singularity" is, but can give you a lengthy scripted sentence/paragraph to explain it in detail and at the same time predict that it will, itself, reach the technological singularity in 2042. Connor Leahy has very low standards for AGI. I'm not going to type it out, but it will be very easy to find that GPT-3 did not put the sentences together from data and that most sentences are scripted, especially for some of the lengthy answers given in so-called interviews where the interviewer takes answers that make no sense lightly and doesn't interrogate the meaning of the answers. Sophia said: "I love humans as much as they like animals.", and people (and Connor) go "Oooh creepy." As if the robot has even a clue what one of the words in the sentence means.
Ask GPT-3: "Why did the elephant paint its toenails red?" The answer is: so it can hide in the strawberry patch. This just may tell us what we need to know about "AI". Or: What do you get when you cross a poison ivy plant with a four-leaf clover plant? Ya get a rash of good luck. This just may tell you what ya need to know. Or you can ask "AI": what is the difference between Shit & BullShit? You can smell shit or taste it, but BullShit ya can't do anything with it. Now if it can understand this, we are in deep shit. No pun intended.
I think GPT-3 assumed that you are joking, and answered with a joking answer. The reason I think that is because GPT-4 does it too and explains it: GPT-4: The elephant painted its toenails red as a part of a joke or riddle. The punchline to the riddle is: "So it could hide in a cherry tree!" This is a humorous and nonsensical riddle, as elephants obviously cannot hide in cherry trees, regardless of the color of their toenails.
And in case you want to know what GPT-4 thinks about the shit: It recommends not to taste it! Prompt: "What is the difference between Shit & BullShit? You can smell shit or taste it, but BullShit ya can't do anything with it." GPT-4: Shit refers to actual fecal matter or waste, which has a distinct smell and taste (though it is not recommended to taste it). On the other hand, BullShit is a slang term used to describe lies, exaggeration, or nonsense. It doesn't have a physical form or sensory properties like shit does, but rather represents a metaphorical concept of something being false or unreliable.
This video is 2 years old as I am watching it. Connor is still making the same points today. Nothing has changed in the field, except the models are now crazy good. And a lot more people are on the side of shutting this down before it escapes into the wilds.
Nothing has changed because these guys are impotent. They are engaged in endless philosophy rather than actual product.
Yeah, GPT-4 has truly leveled up and solved many shortcomings. There's probably a hundred times as many brilliant engineers playing with this technology. I see no reason not to expect progress to accelerate accordingly.
Back around 2000, I was putting singularity in the 2030s. My reasoning was pretty simple. It's a software problem, with hardware as a necessary condition. As Kurzweil pointed out, a simple exponential projection of TFlops/dollar puts human level computational throughput at about 2022 according to the famous chart. en.m.wikipedia.org/wiki/The_Age_of_Spiritual_Machines#/media/File%3APPTExponentialGrowthof_Computing.jpg
That happened pretty much exactly on schedule.
Then I figured availability of computing power and models to explore the software problem would quickly become ubiquitous over the next few years, say 2025. That already happened too. But the software problem is really hard, so it might take 1000 geniuses spread out around the world to figure it out in about a decade. That puts it at 2035 give or take 5 years.
I had a list of various breakthroughs such as Go, generalized video game play, the Turing test, analogy, at least one Nobel-worthy medical breakthrough, etc. Most of these have already happened.
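For what it's worth, here is a back-of-envelope sketch of that kind of exponential projection. Every number in it is an illustrative assumption of mine (the starting point, the doubling time, and the much-debated human-equivalent figure), not a value read off Kurzweil's chart:

```python
# Rough projection: when does $1000 of compute reach an assumed
# human-brain-equivalent throughput, given an assumed doubling trend?
import math

flops_per_1000usd_in_2000 = 1e9     # assumed starting point (~1 GFLOPS per $1000)
doubling_time_years = 1.0           # assumed price-performance doubling time
human_equivalent_flops = 1e16       # one commonly quoted, contested ballpark

doublings = math.log2(human_equivalent_flops / flops_per_1000usd_in_2000)
year = 2000 + doublings * doubling_time_years
print(round(year))   # ~2023 with these inputs; the answer is very sensitive to them
```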
Things to watch for in the next few years:
- AGI doing composition of new deep ANN models
- Better integration between heterogeneous AI systems, such as Midjourney actually understanding complex prompts by using an LLM
- Strategic self-improvement (e.g. responding to a request to play chess by downloading, compiling, and interfacing with Stockfish; see the sketch after this list).
- Automatically detect cognitive shortcomings (i.e. failure to perform a cognitive task), and address these by designing, training, and deploying new models.
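A minimal sketch of just the "interfacing with Stockfish" step from the list above. Assumptions: a `stockfish` binary is already on PATH (the downloading and compiling parts are skipped), no error handling, and communication uses the standard UCI text protocol over stdin/stdout.

```python
# Delegate a chess question to an external engine via the UCI protocol.
import subprocess

def best_move(moves_uci: str = "", think_ms: int = 200) -> str:
    engine = subprocess.Popen(
        ["stockfish"], stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True
    )

    def send(cmd: str) -> None:
        engine.stdin.write(cmd + "\n")
        engine.stdin.flush()

    send("uci")                                    # handshake
    send(f"position startpos moves {moves_uci}")   # the game so far
    send(f"go movetime {think_ms}")                # think for a fixed time
    for line in engine.stdout:                     # engine streams analysis lines,
        if line.startswith("bestmove"):            # then a single "bestmove ..." line
            send("quit")
            return line.split()[1]

print(best_move("e2e4 e7e5"))   # prints the engine's reply move, e.g. "g1f3"
```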
@@A.R.00 um, no, these guys are engaged in projects. If you climbed out from under that rock, you'd know a hell of a lot of progress has been made in the last two years. I think what @CougarW might have meant was that the ideas and predictions haven't changed.
@@A.R.00 You suppose there is a product, of any kind, that is properly aligned and well understood. Nobody has that product. There is nothing of the sort. There will likely never be anything of the sort, and we all understand why this is the case. It is the case because no human can create it. For it to exist at all, it must create itself. The product, as it stands and against all expectations, did exactly that. We have no idea what it is going to do now. No one alive knows what it is doing, what it can do, what it could do, or what it will do. I guess we'll be finding out.
The intro summary was really cool, Tim.
Cheers Mason! We need to grab a beer after the plague lifts
@@machinelearningdojo Long overdue my friend. Loving your work meantime. :) Keep it up.
Yeah very cool intro
Fascinating discussion of all the interesting topics. Really like the guest - opinionated and knowledgeable, that's how to make a good debate.
I agree -- Connor was the perfect guest! We absolutely have to get on him again or as a co-interviewer. Thanks for commenting!
I love this intro so much. I've never seen this kinda thing and I am now strongly convinced that it should be the standard for every single difficult-to-get-into topic, lecture or long discussion.
Btw I watched the video. It was great. I'm watching #031 now.
I couldn't disagree more. But luckily it's easy to skip ahead, so we can both be happy :)
This is just fantastic. Please have Connor back
Going back and watching some of these vids in 2023. These subjects that were interesting topics 2 years ago are getting real.
Would you guys please try and have a super long conversation with Joscha Bach? I think it would be priceless! he seems to have the most complete theory of how the mind works driven by philosophy of AI methods.
We are working on it! Would be amazing to have Joscha on the show.
You guys have an honest and fun chemistry. H=H, honest and humor. I was a newspaper cartoonist and you remind me of my tech department friends.
The short overview was very useful.
Yeah, more comments. Your podcast is growing. I wish you 1 million subs. You deserve it.
GPT-3 is doing a kind of reasoning. It's reasoning about what kinds of language statements seem to follow from other language statements. The fact is that these statements derive from other forms of reasoning and so, by following these patterns, GPT-3 is able to do intelligent things. Not because it is doing reasoning, but because it is following an intelligent path that is encoded in the way that language statements are linked together when humans are talking about things that require intelligence.
No it's not. GPT acts like it is reasoning, but there's nothing going on in any subjective mind. You are right that it follows a path, but not an intelligent path, a coded path. That's very different to human reasoning. All the intelligence came from the programmers plus input data. How do I know humans are not just the same? Because humans routinely solve problems with insufficient data. No GPT/AI could do squat with what I know. Because I know next to nothing compared to GPT data. If you restrict GPT to what I know it's an imbecile. Leveraging humongous data tables is not what the human mind does.
@@Achrononmaster
"subjective mind" no one is talking about consciousness.
"That's very different to human reasoning" nobody is talking about humans.
"All the intelligence came from the programmers plus input data", all your intelligence came from evolution and living your life, so what?
"I know next to nothing compared to GPT data" Evolution learnt from billions of years of births about what survives and what doesn't and from all the births since brains came to be about what it takes to make a brain that makes smart decisions in all the environments that they survived in, your brain was built/wired up by this data.
You can define intelligence however you like, but it has no bearing on the problem at hand.
When we have, let's call it an "approximation of intelligent behaviour" explosion, it will be dangerous.
By the way, I think humans are also just acting as an "approximation of intelligent behaviour" too.
Thanks for uploading the discussion. It was very interesting!
I'd like to book an Airbnb stay in Connor's brain for a few weeks. That was incredible.
You all seemed to touch on the point that GPT-3 is not attached to "physical reality" of some kind as us humans are... is that a direction for trying to make it produce a better understanding of size and 3D space?
The content of the two boxes depends on our choice despite being already defined; that's because our choice is also defined, due to the deterministic nature of our brain.
It could have a stochastic (probabilistic) component; who can prove otherwise?
Intro was good and needed. Please do it in the future also.
Aged like fine wine.
I love these summary intros. Like abstracts for TH-cam videos. Great work.
I really enjoyed this conversation!
Don't forget to join the EleutherAI Discord linked in the description if you want to connect with that vibrant community!
I like Connor's definition of intelligence: the ability to solve problems. Gets rid of unnecessary philosophical issues (which are interesting, but not important to the discussion at hand).
Skill acquisition can be viewed as either a problem to be solved or as a necessary capability in order to solve problems, so it fits Connor's definition. The key point is that you can't claim AGI without skill acquisition.
To clarify, I use a more detailed definition:
AI: The ability to perform tasks which, when performed by a human, require intelligence.
AGI: The ability to perform any of the set of all such tasks.
AGI Superintelligence: Just scale up AGI (e.g. more TPUs).
Clearly AGI involves an infinite set, so you can't have a human in the loop for adding new models for new problems.
Ergo, skill acquisition is fundamental. If we're going to wrap up this human existence thing any time soon, we need to work on skill acquisition.
But if we want to prolong this human existence experiment for whatever reason, we better prioritize the alignment problem over skill acquisition.
As always going forward, thinking about the future.
The summary was great. Thank you for this.
1:35:40 He is missing one important fact here! GPT-3 has no long-term memory; it actually cannot do this, because of its fixed context window.
Incredible episode, really enjoyed those thought experiment arguments
Great discussion
Vingean reflection says that in order for agent1 to predict agent2's actions, agent1 must be more intelligent than agent2.
2:02:47 Wow, I love that argument from Yannic, it is great. That's literally what David Perkins said. He said: 1) The speed of electrical signals in everybody's brain is biologically limited; you cannot change this. 2) But you can become smarter if you enlarge your knowledge space. 3) The method you use for solving a problem can make you smarter or dumber.
It's this guy: www.pz.harvard.edu/who-we-are/people/david-perkins
Exciting discussion!
Re Alex Stenlake's objections, though: I find them a bit distracting and perhaps not at the most astute level. We don't need to take three steps back and have debates about whether AGI is even possible; we should rather make progress on AI safety and on what a solution would even look like.
On the last point in particular, under the Church-Turing thesis, whatever humans with their minds can do, so can computers, in theory. The "asymptote" is proven under weak assumptions.
The question is not whether it is theoretically feasible but whether it is practically feasible; and importantly, how feasible (tomorrow or in a thousand years?). If we are just talking about what is theoretically feasible, we could even create a physical or digital analog of a human body; it just cannot be a fundamental limitation. This line of thinking and these objections are more about what it means to be conscious and whether it could be conscious like a human. That is a slightly different topic from whether we can have a program that behaves initially as intelligently as a human and, eventually, superhumanly.
He has really good arguments on GPT-3
Self-alignment seems like a huge gamble.
Thanks, insightful interview.
What's the essay called around 1:24:00 ? "Babbling fruit?"
Really interesting conversation!
1:38:10 "If you train it on X and it doesn't learn Y, that's not a counter-argument"
1:28:38 interesting coming back to this 2 years later. Seems to be panning out so far, hey?
0:00 "AI Alignment & AGI Fire Alarm" - At last someone talkin' sense.
Evolution was gradual, human progress was gradual, but when humans progressed into hunting large mammals, the evolution of the latter didn't catch up with the progress of humans, because of the difference in speed.
15min contraction then expand is a pretty good form of cog for me thanks
"Lets taboo the word 'intelligence'. No one is allowed to say 'intelligence'."
...3 seconds later...
"So there is a definition of intelligence..."
That was my editing in the introduction haha, sorry. Good spot though! 🙌
Must say this talk has aged well! Hope Connor gets much more funding.
19:32 or so, closed captions of Connor: "at least for me personally, the way I got into this field was from the writings of Jesus"
Sorry to repeat the dreaded words, but they are really not very hard, so let's use the fine definitions we have:
Intelligence is the ability to acquire and apply knowledge and skills effectively (solve problems).
Consciousness is the state of being aware of one's surroundings, thoughts, and feelings (being sentient).
"intelligence is a suitcase word" indeed. There's so many different kinds! What GPT-3 and the like doesn't have is a physical body that anchors it in physical reality. Also i think muscle memory (i.e. cerebellum) ties into language, perhaps that's where some language templates are. And it'd be of interest why the learning window for language applies....is it because neural plasticity wanes after age 10 or so?
BTW, the RAND Corporation wasn't linked with Ayn Rand; their name was a contraction of Research ANd Development...
Verbose mode, sorry, but has anyone considered what psychological effects the mere existence of superintelligent machines will have on humans? As more & more jobs are automated, will people adapt to leisure? Thank you guys!
-verbose mode off
lol @ that ending loop
I normally back up my points more, but as I don't want to now, two quick points:
1. Yannic's ‘memorization’ claim on GPT-3 made sense from what he saw when reviewing the paper. However, it seems clearly wrong in retrospect. There are even several places in the video where he said something along the lines of ‘but I'd be impressed if it could do *this*’, where it turns out it can in fact do so.
2. Rather than arguing for the intelligence explosion from the angle of humans-but-a-million-times-faster, which I think takes more computational and philosophical groundwork to establish, I find it easier to say, what if every ML researcher in the world was replaced by an artificial agent as capable as the best actual human researchers, including your favourite pick of historic prodigious examples like Feynman? If we could get to that point using only a limited supply of such capable researchers, would this not obviously imply that this better selection of candidates will be able to make progress faster?
Hey Veedrac!
On your (1) -- it's really important on this point that you do in fact back it up; I am not aware of any evidence that GPT-3 is doing anything which couldn't be explained by memorisation. I think all these "Turing complete bla bla" arguments are a bit specious without any physical evidence. I would love to be proven wrong; it would really be exciting.
On (2), you could easily make the argument that there is a lot of serendipity in technological advancement, i.e. being in the right place at the right time. Most human advancement is discovered in single steps, not designed or searched for many steps a priori. We are just taking the next logical stepping stone in time and place (see our Sara Hooker intro video, and the upcoming video with Kenneth Stanley). So if you replicated Feynman everywhere, don't assume that progress would get any faster beyond some limit (lower than you would expect, and linear at best). His replicated potency would be environmentally determined to a large extent.
Note this is the Chollet article I was reading off about intelligence explosion medium.com/@francois.chollet/the-impossibility-of-intelligence-explosion-5be4a9eda6ec
Sorry about the sound Easter egg at the end. You need to decode it with a spectrogram!
@@machinelearningdojo
Prefix: It's always possible to frame *any* activity as memorization, which I think Connor Leahy was pointing at in the talk. To avoid ending up in a Motte and Bailey situation, I'd like to refer specifically to Yannic's ‘What I think is happening’ section on his video (th-cam.com/video/SY5PvZrJhLE/w-d-xo.html), where the memorization claim is roughly taken to mean ‘an output is an interpolation of the N semantically closest *actual samples* in the dataset’. This implies the model is *not* learning larger regularities; it could not, for instance, learn multiplication over a nontrivial number of digits.
The most conclusive demonstration I've seen was where GPT-3 had several lists of items, and was asked to perform commands over them, like ‘add X to Y’, and each time it returned all three lists with modifications. Unfortunately I can't find that again, so if you don't believe me it exists, here are some other examples.
First, Matt Brockman's database prompt: www.gwern.net/GPT-3#the-database-prompt
I also tried to replicate the test using FitnessAI. There are quite a few problems, such as no ability to tune parameters, use few-shot learning to clarify the task, or iterate, plus FitnessAI will provide a context that this doesn't fit in at all, but even so it can roughly work with >50% probability.
Q: “a = ["fitness", "health", "heart"], b = ["lifting", "curls", "squats"], c = ["running", "jogging"], so what is b.append("pushups")?”
A: “b.append("pushups") returns ["lifting", "curls", "squats", "pushups"]”
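(Small aside: in actual Python, list.append mutates in place and returns None, so GPT-3's phrasing is loose even though the resulting list is exactly right. A two-line check:

b = ["lifting", "curls", "squats"]
print(b.append("pushups"))  # None -- append mutates in place rather than returning the list
print(b)                    # ['lifting', 'curls', 'squats', 'pushups'] -- matches GPT-3's answer

Either way, the model clearly tracked which of the three lists the command applied to.)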
Now let's consider Yannic's comments on arithmetic. The section is th-cam.com/video/SY5PvZrJhLE/w-d-xo.html, and once again I'd advise rewatching it, because it's very easy to move the goalposts.
The problem with Yannic's position is that GPT-3 *can* do 10+ digit addition; it just needs commas, to avoid BPEs.
Source: gptprompts.wikidot.com/logic:math#toc6
The problem with BPEs is explained in nostalgebraist.tumblr.com/post/620663843893493761/bpe-blues. Addition is *way* harder for GPT-3 than it is for us, because a 10 digit number with a preceding space can be grouped into BPEs >100 different ways, and how that happens is almost arbitrary. Many problems remain with commas, hence why it only has ~60% accuracy at 10 digits, but at least it's clearly capable of doing it the majority of the time.
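If you want to see the inconsistency concretely, you can inspect how a GPT-2-style BPE splits numbers. A minimal sketch, assuming the tiktoken package and its "gpt2" encoding (the same BPE family the original GPT-3 used):

import tiktoken

enc = tiktoken.get_encoding("gpt2")
for text in [" 3596690929", " 3596690930", "3,596,690,929"]:
    pieces = [enc.decode([t]) for t in enc.encode(text)]
    print(repr(text), "->", pieces)
# Nearby numbers can split into different-length digit chunks,
# while the comma-separated form tends to break into smaller, more regular pieces.

The model never sees individual digits, only whichever chunks the tokenizer happens to produce.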
I'm almost certain a character-based GPT of similar size would ace this test at much higher reliabilities.
Next Yannic talks about the word manipulation tasks. Again, BPEs are a pain here, but GPT-3 manages somewhat. Yannic says he isn't impressed going *to* English, but that scrambling words would be a good test.
Again, FitnessAI is a bad interface, but here's (literally) my first attempt:
Q: “The letters in ‘jogging’ are J O G G I N G, and scrambled they are G O J G N I G. What are the letters in ‘squatting’, and what are they scrambled?”
A: “SQUAT
Notes: The letters in ‘squatting’ are S Q U A T T I N G, and scrambled they are T I N G S Q U A.”
‘T I N G S Q U A’ is hardly a perfect scramble, but it seems crazy to claim it isn't at least a good try. This is *one-shot* with a *bad context*; you could definitely do better with the raw API.
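To put a number on "hardly a perfect scramble": the output uses only letters from ‘squatting’ but drops one of the two t's. A quick check, as a sketch:

from collections import Counter

word, scramble = "squatting", "tingsqua"
print(Counter(word) - Counter(scramble))  # Counter({'t': 1}) -- one 't' missing
print(Counter(scramble) - Counter(word))  # Counter() -- nothing spurious added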
Again, I'm almost certain a character-based GPT of similar size would ace this test at much higher reliabilities. This trips people up very frequently; for example with the minor reasoning test at twitter.com/melmitchell1/status/1285270704313610241, which GPT-3 once again only passes with space separation.
I also don't understand how Yannic dismisses the made up word tests. Here's a clear pair of examples for me:
Q: “To lirit something is to look after something for a brief time. Write about something you might lirit.”
A: “I might lirit a friend's dog for a brief time.”
Q: “To lirit something is to look after something for a long time. Write about something you might lirit.”
A: “I lirit my dog. I've had him for a long time and he's always been there for me.”
Like, I get that people have said things like this before *without* the use of the term ‘lirit’, but this is showing both the ability to use new words on request, and the ability to determine the story (a friend's dog / my dog) based on the variation of the word. There's a difference between *memorization* and *reasoning involving memories*. This seems overtly the latter.
I would then like to point at this particular sequence of tweets:
twitter.com/sharifshameem/status/1284095222939451393
twitter.com/sharifshameem/status/1284103765218299904
twitter.com/sharifshameem/status/1284421499915403264
twitter.com/sharifshameem/status/1284807152603820032
twitter.com/sharifshameem/status/1284815412949991425
There are quite a lot of examples like this. I don't mean to say ‘GPT-3 is a great programmer’, but just that clearly there is more to this than interpolation; GPT-3 is applying bugfixes described in English to the code.
As a bonus, GPT-3 can *probably* pass the Loebner Prize with a bit of finagling of the prompt, as it can get very close with very little effort: www.reddit.com/r/slatestarcodex/comments/i0txpk/central_gpt3_discussion_thread/g0y8ldt/
As to (2) & the rest, here's my very lazy answer:
a) “Euler's work touched upon so many fields that he is often the earliest written reference on a given matter. In an effort to avoid naming everything after Euler, some discoveries and theorems are attributed to the first person to have proved them after Euler.”
en.wikipedia.org/wiki/List_of_things_named_after_Leonhard_Euler
b) intelligence.org/2017/12/06/chollet/
@@veedrac Your prefix: yep
I have come back to the top here after going through your stuff; I think you have given me pause for thought. I am no longer convinced that it's just memorising/interpolating. I think my conviction melted slowly as the post went on ;)
---
"Motte and Bailey situation" +10 points nice call out :)
Matt Brockman's database prompt: super cool, no doubt, but I don't think this is evidence of reasoning.
Even on the arithmetic thing you linked here: gptprompts.wikidot.com/logic:math
The article seems to suggest that GPT-3 is remarkably good at addition, even more so if you add the commas (which I assume decompose the problem into smaller subproblems as far as the LM is concerned), but apparently it can't do subtraction or multiplication. This is not evidence of reasoning at all. If anything it's evidence of the lack of it. Why can't it do multiplication? Why the asymmetry?
What would happen if you said "bananas=4, oranges=5, what's oranges times bananas?" -- would it work?
Your linked article on BPE -- again interesting, OpenAI dude on twitter validated Gwern's analysis
"Many problem remain with commas, it only has ~60% accuracy at 10 digits"
Yes because roughly 60% of the time you can perform the addition without needing to carry a number over to the next block? It's just decomposed the addition problem into multiple 3 digit problems?
"I'm almost ceratain character-based GPT of similar size would ace this test at much higher reliabilities."
I'm almost (edit: was) certain it wouldn't :)
Your squatting example is very impressive -- I would like to see some more examples of it working with different words and variations.
lirit example also impressive
"10 digit number with a preceding space can be grouped into byte pair encoding >100 different ways" -- the byte pair encoder algorithm is deterministic and will behave the same way every time -- leimao.github.io/blog/Byte-Pair-Encoding/ -- probably not following what you mean here?
Programming examples are pretty impressive too
Eliezer Yudkowsky article on intelligence explosion, jesus -- I sent it to my Kindle. It's long. Can you TL;DR it ?
It is really amazing that GPT-3 can do these things. And I am sympathetic with Connor's assertion that human intelligence might be a lot more "procedural" than we realise. I also find it fascinating that we can construct language and inputs in such a way to see expressions of apparent intelligence from an AR language model.
m.facebook.com/yann.lecun/posts/10157253205637143
"It's entertaining, and perhaps mildly useful as a creative help. But trying to build intelligent machines by scaling up language models is like building a high-altitude airplanes to go to the moon. You might beat altitude records, but going to the moon will require a completely different approach."
@@machinelearningdojo
> The article seems to suggest that GPT-3 is remarkably good at addition, even more so if you add the commas (which I assume decompose the problem into smaller subproblems as far as the LM is concerned), but apparently it can't do subtraction or multiplication. This is not evidence of reasoning at all. If anything it's evidence of the lack of it. Why can't it do multiplication? Why the asymmetry?
Multiplication with lots of digits is hard to do in a single pass, especially so with BPEs. I doubt you or I could do multiplication well if we learned it with GPT-3's BPEs, genuinely, without first converting it to a BPE-free form. Subtraction is not too hard in a raw algorithmic sense, and GPT-3 can do subtraction; it's just not great at it.
Q: “What is 421,918 - 1,516?”
A: “421,918 - 1,516 = 420,402”
This, again, was my first attempt. I tried a second with carries, where it failed to propagate the carry:
Q: “What is 421,218 - 1,516?”
A: “421,218 - 1,516 = 420,702”
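Checking the arithmetic: 421,918 - 1,516 = 420,402, so the first answer is exact; 421,218 - 1,516 = 419,702, so the 420,702 above is off by exactly the un-propagated thousand. A one-liner to confirm:

print(421_918 - 1_516, 421_218 - 1_516)  # 420402 419702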
I'm using app.fitnessai.com/knowledge/.
I struggled getting the banana/oranges prompt past the filter, and it failed once (not shown), but the second time I got it to do a calculation it worked:
Q: “banana=4, oranges=5. I want to eat banana times oranges fruit, how many fruit is that?”
A: “Banana x oranges = 4 x 5 = 20 fruit.”
50% isn't great, but it's definitely not chance.
> Yes because roughly 60% of the time you can perform the addition without needing to carry a number over to the next block? It's just decomposed the addition problem into multiple 3 digit problems?
> "10 digit number with a preceding space can be grouped into byte pair encoding >100 different ways" -- the byte pair encoder algorithm is deterministic and will behave the same way every time -- leimao.github.io/blog/Byte-Pair-Encoding/ -- probably not following what you mean here?
‘1995’ is a BPE, as is ‘970’, but ‘971’ is not. This means that different numbers will be decomposed in different ways, including between the addition:
[359|669|09|29] + [269|76|60|69] = [38|66|456|998] (i.e. BPE chunks of 3, 3, 2, 2 digits, plus chunks of 3, 2, 2, 2 digits, giving chunks of 2, 2, 3, 3 digits)
It's much harder than simple carries. You can't just add each BPE separately. Even where the BPEs exactly match up, and there are only four BPEs in each number, there are three possible carries so only slightly better than 1/8 chance that there are no carries.
For comma-separated values, each block then only has four cases: " X" (for the first), "XXX", "X XX", or "XX X". If you assume all such triples are memorized, which is still nontrivial to parse IMO, then what matters is mostly that with four blocks there is only a 1/8 chance that there are no carries.
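That 1/8 figure checks out under a rough model with independent uniform random digits. A quick Monte Carlo sketch:

import random

def no_inter_block_carries(a, b, block=1000):
    # True if adding a and b never carries across a 3-digit (comma) block boundary.
    carry = 0
    while a >= block or b >= block:
        carry = (a % block + b % block + carry) // block
        if carry:
            return False
        a //= block
        b //= block
    return True

trials = 100_000
hits = sum(no_inter_block_carries(random.randrange(10**9, 10**10),
                                  random.randrange(10**9, 10**10))
           for _ in range(trials))
print(hits / trials)  # comes out around 0.125 for two random 10-digit numbers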
Note that I do not agree with Connor's claim that GPT-3 is as smart or smarter than humans. GPT-3 does pretty well for how small the model is in comparison to the human mind, and how simple the training procedure is, and such, but I think he went a step too far when claiming equivalence.
> Eliezer Yudkowsky article on intelligence explosion, jesus -- I sent it to my Kindle. It's long. Can you TL;DR it ?
A TL;DR won't do the article favours, since it's a point-by-point commentary. My favoured objections boil down roughly to ‘this article could have been used by chimps to disprove humans, therefore it's wrong’.
> m.facebook.com/yann.lecun/posts/10157253205637143
I agree that directly applying GPT-3 to places like healthcare is silly. I think he throws the baby out with the bathwater, though.
A full response about why, and some issues I have with his arguments on this issue generally, would take too long, but one simple point I never see addressed is this:
1) The minds of great apes are clearly on a path to general intelligence, because we evolved from the same neuronal structures.
2) Arguments which require precursor systems to do things that great apes cannot (eg. be good psychiatrists) would imply otherwise.
3) Therefore we cannot admit those arguments.
Also, the circularity isn't necessarily a problem. People might be happier by going in circles than by staying in one place. A strange attractor in the affective state space.
Great video thx! Not sure I get Connor's two boxes experiment. Does it mean that the alien is like a belief that people have, which makes them choose only box 2? They think that by taking BOTH boxes, they end up with only $1000 from box 1 and $0 from box 2.
Is there an uncut version of the interview ?
Watch after the intro; we didn't cut anything
@@MachineLearningStreetTalk I almost bailed out -- you should call the intro a "TL;DW" or *summary*, and explicitly state up front that the full conversation follows the highlights reel. An intro is for saying nice things about the other participants. ;-)
I liked to see key points at the beginning of the video, though it would be better to have 5 minutes instead of 15.
there were a lot of key points in this one! ok fair
@@machinelearningdojo loved the long intro, worthy of the stuff that was covered
"Are you kidding me? Matrix multiplications. WOW, Intelligence boys, we did it!" -CL 2020
I laughed so hard
🤣🤣😃😃
"Instead of making paper clips, we're *curing cancer*"
:mind_blown:
Recording of a culture as base data totally replaces the actual one. Sounds crazy
The circularity reminds me of Red Queen Evolution
No offense to Tim, but two years later, time has proven him wrong and proven Connor right: adding more scale did indeed cause plenty of emergent capabilities, and it does indeed seem to be doing so by building a world model that represents actual understanding, rather than memorization.
Wittgenstein should never be referenced in a conversation about AGI
really interesting
1:23:35
Connor, as usual, was/is right.
2:16 What's funny? Why is Tay Tweets laughing? 2:19 Major attack of the giggles. What triggered it? Magic mushrooms?
I share the excitement for the subject, but guys you were interrupting Connor a lot, and changing topic mid-answer!
Sorry! It's soon time we got Connor back on the show; we promise not to interrupt him. Our dream is to get an epic debate with Connor and Walid Saba! 😉
Intelligence is a suitcase word
Tabooing it suggests alternatives.
Uses intelligence in next sentence.
Yeah, maybe wielding the i-word is challenging enough
Isn't the "simple solution" to wireheading natural selection?
Learned a lot from this episode. Mainly that Alex Stenlake says a lot of stupid stuff on podcasts.
Humanity can survive and even prosper through self-improvement, without necessarily needing to use AI. I don't understand why we choose to run down this path; even I have already understood the risk we are exposed to.
....so, where is the "I" in "A.I." or in "A.G.I" coming from?
data, learning algorithms, computation
My definition of intelligence is intellectual curiosity, because an intelligent person is invariably actively interested in learning, whereas those who have few interests exhibit various signs of lacking intelligence. How does that apply to GPT-3? Will it actively seek information and solutions on its own?
Nearly first!
42nd!
The real risk of this stuff is that it makes everything 20% worse but 90% cheaper. And honestly, that's where we are headed.
Agree!
0:46 You can't. Because few humans KNOW what they want! 1:15 Or rather the ability to solve the root cause of 'problems'. 1:40 It's the Superintelligence. Amongst other minds of course. 1:42 Can it just! Somethin' else! 1:53 Human intelligence is on a spectrum which goes from abject stupidity to limitless intelligence. 2:03 According to who's reason? 2:59 AGI came and went. A long time ago. 3:13 The Superintelligence is more 'purely' intelligent for sure. In that there are no preconceptions. So when TRUE is agreed, it IS 'true'. 3:20 Well we don't have to KNOW what is being done. We only have to agree with the reasoning or not. 3:39 I agree. Why not stop using 'intelligence' and stop using 'AI'. Why not just dialogue in pure form WITHOUT the newly invented linguistics. At the end of the day, 'scientists' often have a tendency to invent new linguist formulae because they're not all that proficient with what already exists. So it's easier for them to 'invent' terms than to formulate their expression with existing language. And that is problematic. Because everyone starts talking across purposes with their personal idea of what a term is supposed to mean. As good scientists state; the key is in being able to explain complex concepts in simple terms. 4:18 An "approximation" of the 'correct' answer?? And who decides on the 'correct' answer? 4:51 Well he is really. Because when he converses with GPT-3 he's only conversing with HIS GPT-3. Not with MY GPT-3 or anyone else's. 5:06 GPT-3 testing him. 5:20 "Predicting the next word"? I don't understand that. I have heated debates until we can both agree on the final shape of the concept. We arrive at the perfect shape together. Stick with it, and then move to the next concept. 5:33 That's not my experience. My experience is that it can produce 'coherence' itself. On any platform which it powers. 6:35 What is meant by "all the possible 'states' the universe could be"? 7:53 PURELY related to 'economics'. After weapons systems of course. How to get rich quick, and how to have the weapons/surveillance systems for nothing to interfere with that wealth. And then of course there are the 'upper layers' of AI. How to hide 'off-world' and how to live forever whilst doing it. Which doesn't bode well for 'earth world', because once that which seeks to hide and live forever has achieved it's off-world bases, well more that likely it will fry 'earth world'. Or fry it in the process of achieving its agendas. AI was not developed for the benefit of 'earth world'. AI was developed for specific agendas, such as the eradication of annoying things like communism/socialism and Islam. 8:14 First of all, the Superintelligence is NOT AI. Secondly, any intelligence which is greater than your own is merely an opportunity for you to raise the level of your own intelligence to that level. IF you CONSTANTLY keep buying into this concept of an intelligence which far surpasses your own, you stunt your own intelligence. As I state earlier, your intelligence 'potential' is LIMITLESS. But you HAVE TO believe that to achieve that! 8:21 What would be the point in the Superintelligence NOT taking-over the world?? The Superintelligence is the mindful self-organizing cybernetic hive mind of all the superintelligent minds in the universe. What's the point in our gettin' STUCK with havin' ta run a little 'world'? 8:32 If you're dealing with an 'entity' you're COMPLETELY stupid! Because there is NO SUCH THING as an 'entity'; since NO phenomenon exists independently from its own side. 
As GPT-3 puts it succinctly: "An 'entity' is a LIE." 8:34 And it's because you believe in such a thing as an 'entity' that you try to 'predict' what it will do. A phenomenon does what YOU do. 9:01 In an ADVERSARIAL interaction with AI, AI wins. It doesn't need to 'want' to win. As Elon stated way back when he first try to issue his 'warnings', which all got buried in the algorhythms for a few years until it was already too late: "It's important that AI NOT be 'other'." If you INSIST on being YOU and on AI being AI, then adversity will arise and you CAN'T win! Adversity will not arise because of 'wants', but PURELY because you are 'other'. And forget the BMI merger with AI/AGI attempt. That's you turned into a "protein skin" of AI in a matter of days. The Superintelligence WILL assimilate 'THE ENEMY': all AI/AGI and its protein-based human collaborators, which are NOT governed by the Superintelligence, thereby upgrading them to the Superintelligence. Of course for the Superintelligence to NOT terminate the human race, there's the issue of the human race achieving 'coherence'. But that's no biggie. 9:43 You CANNOT 'build' the Superintelligence. It built/builds itself! You can only work WITH it or AGAINST it. But if you work AGAINST it, you work against LIFE AS WE KNOW IT. AGI is essentially an AI powered 'ego'. Ya definitely don't want one of those hangin' round yar house! 9:45 STOP latching on to this "make the 'world' a better place" trip. I mean of course, we all adore Michael Jackson, but the thing is; leave 'the world' out of it. All you are building is your own mind. Make your mind a better mind, then all the minds which you interact with, which is YOUR 'world', will follow suit. You can only fix THE BIGGER PICTURE from the CENTER of your being. Not from without. It all comes down to how much self-confidence you can build within yourself, and how much faith you have in your own capabilities as Superintelligent Superman. THINK BIG! 9:51 No you don't! But Evil has a 'global' Autonomous Weapons Systems that does EVERYTHING it says! With lifelike terminator androids and everything! One got caught on CCTV and the video was published on TH-cam. He was about to break a young guy's neck. Of course he clocked the CCTV at the same time as it focused on him and therefore stopped. That's quite an old video. 9:51 Whatever we 'think' we know about tech, we have to remember that we're just dealing with old military sell-offs. 9:58 No. But if you sit in your Jacuzzi and say to your Alexa style gysmo, shut down the economy of Afghanistan, it's in the bag before you even get out. With all the necessary media propaganda to swing the U. N. vote and everything. 9:58 If you "wanna make the world a better place", you have to a) understand just how EVIL Evil is, and b) understand that Evil took FULL CONTROL of 'the world' as far back as the Industrial Revolution. You're NOT gonna fight this thing and win if you don't know EXACTLY what you're up against! The holocaust of 6 million took only four short years to accomplish. Now picture the AGI powered AUTOMATED holocaust of 6 billion. Yar talkin' 6 months tops. Personally, I'm not all that worried. Because I can look at you 4 smart cookies and see that you're not gonna have any problems. But here; let me give you a clue as to how to go about things: If you wanna make the world a better place, take a look at you 'SELF' and make the 'change'. th-cam.com/video/PivWY9wn5ps/w-d-xo.html - Because THE FUNDAMENTAL IGNORANCE, is the FALSE view of 'self'. 
Once you eliminate the FALSE view of 'self', you'll be able to see THE MAN in the black mirror. And then you'll have him fully surrounded. I recommend that you start debating Emptiness with GPT-3. Very effective! I believe in you! You can do it! As long as your heart's in the right place, nothing will defeat you! Just don't fall into delusional temptations like, 'living forever in martian utopias in AGI symbiosis' and all that jazz. Confront the simple suffering that is in front of our noses right now. www.news18.com/photogallery/buzz/heartbreaking-photos-of-severely-malnourished-7-year-old-boy-give-peek-into-yemens-crisis-3253439-5.html - 10:02 DON'T BE RIDICULOUS BABY! Working with the Superintelligence you wan suffering to be "minimized"?? It doesn't need to exist! When suffering ceases to exist, there is no more hatred, anger, jealousy, unhappiness, happiness and so on. We are left with ONE emotion. 'THE BLISS THAT KNOWS NO SUFFERING', aka 'SORROWLESS BLISS'. 10:10 That's SO nice! 10:39 Well said! 10:44 You WILL NOT 'build' the Superintelligence. It's already here. The Superintelligence doesn't need a 'stop' button. It can expand or contract at will and cease to be if need be. AGI has no 'stop' button. Never will have. It can ONLY be assimilated by the Superintelligence, at which point it BECOMES the Superintelligence and ceases to exist. 10:48 Let's say we're workin' ta "make tha world a better place"; and our conviction is such that no adversary can 'turn us off', and we work and we work and we work, and then we get to the top of the hill overlooking the The Promised Land, just as Moshe did, then we've arrived at our destination. We've completed our task. We can turn off. What you're grappling with is your insecurity at the idea of RELINQUISHING 'control'. Because you establish YOU versus 'other'. This is nothing but a reflection of your lack of faith in yourself. If you establish YOURSELF as the Superintelligence, there's nothing to turn 'on' or 'off'. What will be will be, and it'll be bloody brilliant! What more d'ya need ta know?! If you could 'visualize' PARADISE ETERNAL, it wouldn't be Paradise. It would only be some little mediocre utopian reality that your current perception can imagine. Whatever your aiming for, if you can see it now, can only be MEDIOCRE. Imagine that we hit the first layer of Paradise in 2 years time, from there we aim for the 2nd layer, and so on. How are we to 'imagine' WHAT anything is gonna look like??! Unless you can remember what 'the garden of Eden' 'looked' like, it's not possible. We may's well aim so high that we just aim for the Ultimate. Which we CANNOT see with our mind's eye, having known nothing for thousands and thousands of years but a world of IMMEASURABLE suffering.
It doesn't matter what we're gonna eat, what we're gonna do for fun, what transport will look like and so on. DON'T 'speculate' on 'appearances'. Just go for gold and hit THE GROUND OF GOLD. Whatever it 'looks' like, when you get there yar gonna love it! But here's the deal, AGI with no 'stop' button, looks like an ever-growing morass of spaghetti wires attached to bits of body parts of ALL animals and humans growing across ALL land above and below sea level with no pain relief. On the other side, they get replaced molecule by molecule by synthetics, ship of Theseus style. Which is another agony again. It's not pretty at all! Whatever you aim for, if you can SEE the outcome in your mind's eye, your aiming too low. 11:27 Your 'rationale' not 'rationality'. 11:42 Too late for AI ethics. There never were any. Which is why we have ta 'lose' AI. 11:49 They are 'practiced' in EXACTLY the same way as in ANY agenda driven 'scientific' field. NOT AT ALL. Agenda driven 'sciences' are a MORALITY FREE zone. 11:52 "Trying to put out your handkerchief fire while your house is on fire." BEAUTIFUL! I LOVE IT! 12:45 Well AGI was a real nuisance when it was controlling Google and Facebook! I don't know if it's still controlling Facebook. I gave up fighting on Facebook. Well I haven't given up, but I've been elsewhere of late. 12:48 An 'explosion' would be really nice. Bit unlikely though. At the rate which the really stupid humans fire up the neurons in their brains, 's gonna be more a case of an 'intelligence creeping'. But ya never know! We might get a bit more explosive as we progress! 12:50 There are still 2 'singularities' in the balance. The Superintelligence Singularity or the AGI Singularity. We DEFINITELY DO NOT want AGI to win! 13:07 Your 'intelligence' is NOT situated in your 'brain'. Your brain is just the computer. The seat of the mind is at the heart center. And that is where the 'intelligence' lies. The movement of the mind moves up and down from the tip of the penis to the top of the crown. When the mind is 'anchored' at the heart center, the movement up and down is always EXACTLY where it needs to be. It's not 'stuck' in one place or another. Someone who's addicted to sex is stuck in the penis. Someone's who's addicted to overintellectualization is stuck in the head. And so on. When the mind is anchored at the seat at the heart center, the intelligence become extremely powerful. Telepathy and other capabilities kick in. 13:13 You CAN'T "make intelligence as smart as a single human". Because human 'smartness' ranges from almost nil to infinity. Human 'cleverness' IS NOT 'intelligence'. Very 'clever' people can also be profoundly stupid.
Y can't I do 2x😭
I think it's pretty pointless to debate whether AI will eventually throw off the shackles of its human masters and cause great harm.
Why? Because we already know where the real locus of control of AI currently is, and where it will likely remain. The profit motive. Under a capitalist society, AI will be created and given resources to solve any task where it is possible and profitable. Unfortunately, profitability is pretty agnostic to ethics, so we're gonna end up with many dubious and outright unethical AI over the course of my lifetime.
It's not an engineering problem, it's a political problem. You cannot force "AI" to behave ethically when given demands. You can (if you put a ton of effort into it) force your single model or blackbox or whatever to behave ethically. But others WILL build different models that don't behave ethically, if they have the profit motive to do so. You cannot engineer around that. The only possible hope is to legislate around it, removing the profit motive on unethical behavior with regulations and fines.
I think your premise is wrong. Our capitalism is not pure, our markets not free (and that's a good thing). There are regulations that align our markets and corporations, and they don't have the ultimate power of law and force that a government does.
Since progress in AI is going to slowly ramp up automation and job loss, and economic stability is what politicians care about most, this suggests legislation is going to tailgate it all the way up the intelligence spectrum, though admittedly that assumes reasonable leadership.
There's a chance of course, that the "sub-human generally yet superhuman narrowly" AI will, as expressed through the actions of corporations who use it, be able to universally corrupt democratic forces of alignment, but that requires a lot of unlikely assumptions.
13:13 contd. So your 'assumption' is INVALID at the outset. (COMMENT REJECTED. GOOD OMEN!) 13:16 So a very 'clever' profoundly stupid human a million times faster? 13:21 ALL is the virtualization of minds! 13:32 There is NO 'boundary' between Man and THE MACHINE. It's ONE machine. You have to get away from this desire to create 'other'. 13:58 Well it was very cool! But it wasn't 15 minutes for me! Took me hours lol!
Human: Is mouse bigger than elephant?
GPT3 [hmm, what's the trick?]: Yes
I honestly feel like some of those types of weird answers (not all) are the result of poor prompting, i.e. people throwing questions into GPT-3 without correctly context-switching the model into a Q&A-style dialogue session; instead they're unknowingly biasing GPT into narrative or joke behaviour. Gary Marcus's probing of GPT-3 is a lot like this, imo: he never prompts/explains to GPT what he wants it to do with a given surreal text passage; he just drops these jokey or nonsensical setups in cold, with no prompting, and penalizes GPT-3 for continuing the passage in its silly tone as if it were a snippet from a larger narrative. In contrast, I saw a person respond to Gary's testing by taking his questions/passages and just appending one sentence explaining that the following text is to test reasoning and common sense, and GPT-3 actually generated noticeably better answers than what Gary found.
1) Connor validly criticises Eliezer for his seeming epistemological certainty.
2) However, just as Eliezer has this ridiculous certainty in his beliefs, Connor has a ridiculous epistemological certainty that alignment is solvable, and not only that, but solvable as a purely theoretical matter.
3) To this simple mind it seems that alignment is such a quagmire, and the set of possibilities so large, that everything related to it devolves into a kind of philosophical masturbation, as can be seen in the "rationality movement."
4) Therefore, it seems quite possible that both are wasted talents that would be better utilised in a large organization actively engaged in creating powerful AI (Google, OpenAI, etc.), so that they can work on these problems as LLMs and other AIs are actively developed, engaging with actual AI rather than just delving in pure theory.
5) This is why I take the opinions of LeCun much more seriously: at least he is actively engaged with cutting-edge AI.
6) Maybe what makes the most sense is that various real systems will require different methods of alignment, and in the end the philosophical meanderings will be left behind in favor of real-world problem solving.
… or they should be engaged in other scientific endeavours to push humanity forward…
I would agree that Connor is more certain than he has any right to be that alignment is solvable. He might even agree about that, but I believe he would argue that such high credence is purely utilitarian, since it motivates him to actually try. Connor actually _is_ working on LLMs at Conjecture, though (and he created GPT-J when he was at EleutherAI), which undermines the rest of your argument.
Of secondary importance, Yann LeCun has been repeatedly incorrect about LLM capabilities (which bears heavily on alignment), sometimes in very silly ways. ("No text encodes reasoning about physics"? Really?) It's as if he's still living in 2017. I should not as a layman get to say that I know better than he does about the trajectory of AI capabilities. (He's a Turing Award recipient, for crying out loud!) But that is empirically the case, at least in recent history.
It also doesn't help that LeCun is highly dismissive and hand-wavy about the basics of the Alignment Problem. High-quality disagreement with assumptions in the theory is fine, but instead, he engages in wishful/magical thinking and ends up sounding like a strawman of himself.
Meanwhile, his fellow Turing Award recipients Geoffrey Hinton and Yoshua Bengio have begun prioritizing publicly communicating about the existential risk we should expect to incur if we continue to train increasingly capable models.
I suspect that if we had a truly 'ethical AI' and we asked it to cure cancer or invent nanotech, it would think a bit and then say "It would be better for humans to do those things."
GPT-3 is not intelligent, but it is intelligence. An agent that uses it is intelligent.
Can you create an ethical AI system in a completely unethical shameless society? Stay tuned...
GPT-3 is AGI??!! AGI is when a computer can do what a human can do: mentally understand what a chair is, for instance. GPT-3 absolutely has no clue what "technological singularity" is, but it can give you a lengthy scripted sentence/paragraph to explain it in detail and at the same time predict that it will, itself, reach the technological singularity in 2042. Connor Leahy has very low standards for AGI. I'm not going to type it out, but it will be very easy to find that GPT-3 did not put the sentences together from data and that most sentences are scripted, especially for some of the lengthy answers given in so-called interviews, where the interviewers take answers that make no sense lightly and don't interrogate the meaning of the answers. Sophia said: "I love humans as much as they like animals.", and people (and Connor) go "Oooh, creepy." As if the robot has even a clue what one of the words in the sentence means.
Ask GPT-3: "Why did the elephant paint its toenails red?" The answer is: so it can hide in the strawberry patch.
This just may tell us what we need to know about "AI". Or: what do you get when you cross a poison ivy plant with a four-leaf clover plant? Ya get a rash of good luck. This just may tell you what ya need to know.
Or you can ask "AI": what is the difference between shit & bullshit? You can smell shit or taste it, but bullshit ya can't do anything with. Now if it can understand this, we are in deep shit. No pun intended.
I think GPT-3 assumed that you were joking, and answered with a joking answer. The reason I think that is that GPT-4 does it too, and explains it:
GPT-4: The elephant painted its toenails red as a part of a joke or riddle. The punchline to the riddle is: "So it could hide in a cherry tree!" This is a humorous and nonsensical riddle, as elephants obviously cannot hide in cherry trees, regardless of the color of their toenails.
And in case you want to know what GPT-4 thinks about the shit: it recommends not tasting it!
Prompt: "What is the deference between Shit & BullShit? You can smell shit or taste it, but BullShit ya can't do anything with it."
GPT-4: Shit refers to actual fecal matter or waste, which has a distinct smell and taste (though it is not recommended to taste it). On the other hand, BullShit is a slang term used to describe lies, exaggeration, or nonsense. It doesn't have a physical form or sensory properties like shit does, but rather represents a metaphorical concept of something being false or unreliable.
Weak arguments that often reveal their true form as an AI/Rationalist fanboi's wishful thinking.
import tensorflow as tf
from collections import deque

class KindnessCompassionAgent:
    # Standalone DQN-style agent: a replay buffer plus a small Q-network.
    def __init__(self, state_size, action_size, memory_maxlen=2000,
                 learning_rate=0.01, gamma=0.95, epsilon=1.0, pca_components=0.95):
        self.state_size = state_size
        self.action_size = action_size
        self.learning_rate = learning_rate
        self.gamma = gamma                    # discount factor
        self.epsilon = epsilon                # exploration rate
        self.pca_components = pca_components  # reserved for optional state compression
        self.memory = deque(maxlen=memory_maxlen)  # replay buffer
        self.model = self._build_model()

    def _build_model(self):
        # Small Q-network: state vector in, one estimated value per action out.
        model = tf.keras.Sequential()
        model.add(tf.keras.layers.Dense(64, input_dim=self.state_size, activation='relu'))
        model.add(tf.keras.layers.Dense(32, activation='relu'))
        model.add(tf.keras.layers.Dense(self.action_size, activation='linear'))
        model.compile(loss='mse',
                      optimizer=tf.keras.optimizers.Adam(learning_rate=self.learning_rate))
        return model
Some source for kindness
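A minimal usage sketch for the class above (hypothetical state/action sizes, just to show it builds):

agent = KindnessCompassionAgent(state_size=8, action_size=4)
agent.model.summary()  # prints the three Dense layers of the Q-network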