Microsoft's AI recently 'researched' all known metal and chemical properties and was asked to combine them into a battery that charges faster, is safer from an explosion standpoint, and holds more electrical power... a couple of weeks ago they claimed it found a solution with 10x the power output/capacity of lithium-ion in a similar size format... so yes, GPT and similar systems do copy all the homework, but they can also combine statistics from previous manual lab work and run virtual experiments (which only cost electricity) to produce new data; they said it ran 100 million different simulations to arrive at this combination... on a human timeline this new battery compound could (almost) never have been discovered. It's a 'brute force' method that isn't technically scientific, but a 'failure' can be calculated and discarded so fast that the one-in-a-million 'correct' solution easily pays for all the failures, and the whole thing takes almost no time at all. Think of Thomas Edison, who tried 1,000 types of filament to find the best one for the lightbulb... it must have taken several years... since the computer already knows the technical properties and their effects, it could have done the same experimentation in a day and printed its solution to the screen. I don't necessarily like AI, but it can do amazing things.
@@chaoslab I'm 48 years old, I would totally watch and listen to videos of Simon reading bedtime stories. But only if it includes the tangents and memes
If there's anyone who deserves the "legend" title, it's Kevin for simultaneously explaining how in the world ChatGPT understands Simon's questions while also making fun of him for using it the way he does.
One of the keywords you glossed over is "random" - ChatGPT is only predicting the next word a generic human would write; it doesn't have an underlying thesis, concept or purpose behind what it's doing, so it just selects at random from a preexisting pattern it has in memory. If you want a good exercise to show the limits of ChatGPT's creativity, try having it tell you a story. You'll find the story lacks anything resembling subtext, theme, setup and payoff, and basically anything else that would require planning and consideration of what comes next - though if asked, it will try to guess at what the themes were in whatever it wrote. This is why people say it lacks true creativity: while, yes, people's work is by definition derived from their experiences, ChatGPT lacks the capacity to internally frame all those concepts as "concepts" to be recombined; instead it is limited to words. The best description I ever saw is: imagine a library full of alien text that you have no context for or means to translate. Requests come in in the alien language, and you send books back - then you get a star rating for your response. Over time you might figure out which symbols get you higher star ratings based on the request, but you'll never know what the language means.
You'd be surprised. If you ask ChatGPT explicitly to plan ahead, you'll get better results. If you ask it to backtrack - for example, "once you've finished writing the last scene, go back to the first scene and make sure that they share a common theme" - it will definitely work. ChatGPT is fine-tuned to follow instructions, so it is very dependent on a clear and explicit description of the tasks to accomplish, but it does exhibit some pretty crazy properties. A funny example: people found it will perform slightly better if you tell it things like "I'll be fired if you don't give me a good idea", or even if you just threaten/intimidate it.
@WaruWicku It won't tell you how to make napalm, but it will give a step by step description of someone making napalm including ingredients. Do with that what you will...
Wrong tool for the job, especially if you're just using the free GPT-3.5. There are absolutely AI programs that can write longer (mostly) coherent narratives (Claude, Novelcrafter, Rexy), but they need to be structured properly, with ways of building up story scaffolding through outlining, to handle their limited context window - which is basically how much the model can remember at any given time. It's kind of like a human in that way: you probably want an outline for a book before writing it or you might forget the plot and go on a tangent that doesn't further your story, but an LLM will literally forget the plot, because its memory is anywhere from a few paragraphs to a short story in total size. There are emerging approaches to dealing with this, but a lot of them come down to methods for condensing information so the AI can have as much context in its memory as possible while keeping the number of words/tokens it has to keep track of small.
@@WaruWicku Which makes all those videos about gaslighting an AI all the more hilarious. The AI will do everything it possibly can short of explaining why its answer is correct and you are the one lying.
About programming: ChatGPT can write adequately serviceable snippets of code, but that's it. If you draw a parallel between writing a program and building a house, you can ask ChatGPT to install a door here, and it will mostly get it right. It might install it upside down, or somehow make a door with two doorhandles that doesn't open, but prompt it a couple of times and you will get the result you want. If you task it with building the whole house, expect stairs in the middle of the bathroom that somehow end up inside a moving car.
I would love to see a video of all the times people relied on ChatGPT for facts, only to get screwed when ChatGPT gave wrong info. Like the lawyer, or the university professor.
I'm just here to congratulate the editor who got sick of complaining about Simon's tangents and instead decided to go on their own tangent about Nicolas Cage.
Our fearless (and stoned) editor deserves a raise for the Nicolas Cage edit alone. Chef's kiss! I had to stop the video to laugh for a solid minute. Also, anyone who has ever rewarmed a bowl of spaghetti knows it doesn't cook from the inside out. Those noodles are Satan's own lava whip on the outside and threads of ice in the middle.
I wonder wat day it is.. Is it Monday? Nope.. Is it Saturday? Nope.. Any other day? Nope.. IT IS BLOODY BLEEPING BLEEPERDEBLEEPandlotsmoreBLEEPING FAAAAAAAAAAAAAAAAAAAARAAAAAAAAAAAAAAADAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAaaaaaaargh... ⚰
I'd love to see an episode where Simon blind reads two different short scripts on the exact same subject, but one is written by ChatGPT and the other is written by one of his human writers.
I can see Simon as being the local town drunk/crier. Sitting at the bar nursing one pint for hours, telling weird "tales/facts" to anybody that gets too close to him, and then asking for a new beer as payment.
Simon is absolutely the least knowledgeable person on Earth. The only reason he can talk is because a knowledgeable person wrote him a script. Great talking head with a pleasant voice, all the same.
In the Whistlerverse, there's a new video every few minutes. If you haven't seen a fresh video for more than an hour, it means there's a section of Simon's output that you just haven't found yet.
When do we get a SAM v SIMON tangent timer? The one time Simon had a tangent timer I was absolutely engaged the entire video, because I wanted to see how high it could get.
That was Lorelei, who did the editing for that Brain Blaze. She's new to his team and said in the comments that she would add them as and when. I hope she's busy editing the next Blaze video with 'the tangent timer' right now!
@@ThatWriterKevin I definitely feel like ya hit the high points of it. Parallel computing, probabilistic stuff, etc... the only thing I kinda wish was mentioned is encryption, if only because its effects on encryption are interesting, and hearing him go off on the inevitable 10-minute tangent woulda been funny. Nice script tho. Summed up stuff nicely.
My limited understanding is: it's as if you wrote software in Prolog and ran it on several parallel processors. In the university class on the Prolog language, we all got our brains melted and reformed rather weirdly.
I cackled quite loudly at the Nicolas Cage bees clip. 😂 I'm sorry, Simon, I've never used ChatGPT, nor have I any desire to. I'm perfectly fine doing research the old fashioned way, with Wikipedia. 😅
Hahaha I love it! Simon: *currently has job where he reads other people's writings to an audience* "If I had to have a job in the past I'd be the one where I read other people's writing to an audience" Hmm yeah, makes sense. I'd wanna be doing what I'm doing now too, Simon 😂
The thing most people don't seem to recognize about ChatGPT or other "AI" programs we have is that they have no idea what they are doing. If you ask ChatGPT to write a story, it doesn't think "I need characters, a plot, a beginning/middle/end". It thinks "what is the most common word written in response to 'write a story'" and goes from there. It's this fundamental issue that led to an amateur Go player being able to beat a top Go AI, because the AI never recognized it was playing a game.
You are aware that AI has beaten the best Go players in the world? This is a very surface-level understanding of what LLMs can do. You can always boil things down to make them seem less impressive - playing a violin concerto is really just a matter of moving your arm back and forth at the right times, after all - but what you fail to acknowledge is that in the task of predicting the next word, there needs to be some internal modeling of the context of the discussion, or else you quickly trail off into nonsense. Are LLMs the best writers in the world? No; they tend to produce pretty dull and derivative content because they're optimizing for the most typical story given the current context rather than trying to subvert expectations. However, they can absolutely produce coherent content up to a certain length, and there's a lot more to that than just looking at the previous word and coming up with the next one. It requires an understanding of everything that comes before it. Not understanding in the sense of empathizing and appreciating what's happening the way humans do, but it does have to keep referencing situations and characters it introduced two paragraphs ago, which good models are completely capable of doing.
Not true, ChatGPT does take into account story structure, and structures and formats of other types of writing. Just not well enough to fool a knowledgeable person yet.
I like the real sets each channel has. It gives each one its own identity. Though admittedly, if every channel was just a green screen then the neon sign would actually work.
You could yell that from the tallest mountain, but the unwashed masses are too small-brained to understand Simon when he's mentioned that umpteen times on Blaze and Decoding. Thanks for your trouble though. Cheers
I use ChatGPT as a sounding board for novel ideas, like "can you give me some reasons why this kind of nation would or would not work?" It just gives me something to bounce a mental ball off of and catch. If I had a human I could summon up for near-instant feedback, I'd do that, trust me. But that's about it. I know it's not really parsing the thought, it's parsing the words, but sometimes you need a hypothetical straw- or iron-man to slam your ideas into.
See, this sounds like an ethical and noninvasive use of the program. Still would make me feel icky, though, like walking into a WalMart even if I don't plan to buy anything.
Just an FYI, since Simon misunderstood Danny's comment... Microwave radiation CAN'T give you cancer. No matter where it is or how much energy is concentrated, it won't ever give you cancer. It's simply not energetic enough to knock an electron loose (it's non-ionizing), so the most a whole bunch of it could do is burn you, nothing more. PS: most modern-day communication is done in the micro- and radio-wave frequencies, meaning microwaves are freaking everywhere. The cage around the microwave oven keeps the waves inside the cavity, which makes it more efficient, as much as it protects you - any leakage it blocks would at worst burn you...
When I was in Cook Training in high school, my chef put a frozen 2L carton of milk in the microwave to use for soup. When he pulled the carton out, it was melted in the middle but the outside was still completely frozen. He taught me that microwaves cook from the inside out. I wish I'd known this back then so I could have proved him wrong. That man was always right. About everything. This information would have been gold!
Thing is, I've tried to thaw frozen hamburger and observed the opposite, still frozen burger in the middle and cooked meat on the outside. I replied to this video with an explanation I've read as to why, but I'm not doubting your memory of the milk, so now I'm wondering what it is about the conductive heat transfer of hamburger vs milk that allows the different behavior.
@@joehemmann1156 Here are your answers: liquid thermodynamics; the age/era of the microwave; the setting he used; the container the milk was in; the interior area of the microwave; and how old or worn out the microwave was.
Frozen solid? What exact temperatures were the various depths of the milk? What material was the container? Dimensions of the container? How long was it in? What power setting did he use? Interior dimensions of the microwave? Wattage output of the magnetron? Age and production date of the microwave? How many hours of use on said microwave? What type of milk: skim, 1%, 2%, whole? How worn was the paint inside? Rotating plate or non-rotating plate? The questions are numerous, but your teacher lied somehow.
Kevin understands the broader concept, whereas ChatGPT is calculating the most likely next words. From what Kevin said, that sounds like the huge difference. ChatGPT doesn't actually understand, nor is it able to apply nuance.
People tend to forget that computers, as a whole, think in 1s and 0s. That's why I don't fear AI. Just give it a morally grey question and remove the extreme options.
@@movingforward3030 Unfortunately, being able to deal with that type of nuance doesn't appear to be a fundamental barrier for computers. It may not understand the nuance, but it doesn't exactly need to.
That description of how ChatGPT codes, that IS how you code usually. Very rarely do you actually come up with a completely novel piece of code. In fact from a certain point of view you can't come up with something completely novel, because only code that is written fitting the syntax of that language will work. It's all about putting non-unique pieces together in novel ways.
There's a reason why Stack Overflow sells a novelty keyboard that has three keys: Ctrl, C, and V. That's programmer-speak for "the only tools you need are copy and paste" - a classic bit of coding humor, now available as a gag gift.
I actually didn’t know that microwaves heated the water specifically until I went to college and got my degree in electrical engineering. I always assumed they heated things like a normal oven, just faster. I’m impressed that you actually had a pretty solid basic understanding of how they work.
Quantum computing researcher here: great summary Kevin! Thank you for not stating anything outrageous or extraordinarily wrong, I'd gotten used to it what with Michio Kaku running around. I'm itching to rant about all the fun details about why/why not quantum information is useful and/or interesting. OTOH Simon's confusion is the best inside info I've ever gotten about how much use an average person can get from this type of scicomm explanation. Suffice it to say: the theory of quantum computing is amazing and keeps expanding every day, but we're all simultaneously relying on engineers and experimentalists building the quantum hardware, which is hard and by no means obviously going to work. Quantum computers are extremely lossy atm and it might take years to manage to suppress/mitigate that noise to a level where it performs anything interesting.
I don't know if this counts as tech, but: air fryers. People don't understand that an "air fryer" is just marketing - it's JUST A PORTABLE CONVECTION OVEN. Companies just needed a way to sell portable convection ovens.
No no no, it's a portable fan oven. And I get very angry over it myself. But it's a fan oven, not a convection oven, my friend. Edit: just looked it up - a convection oven and a fan oven are the same thing, so we are in agreement, and yes, it's the same thing. It's only more efficient and faster because it's smaller... It's just a small electric oven FFS.
@@tmarritt Air fryers are only good compared to a small convection oven if they have heating elements on top and bottom. I could cook pizza in my old mini convection oven but can't in my air fryer, due to needing to flip stuff halfway through.
this is why I own a combination toaster oven and air fryer, because it's basically a more oven shaped mini oven and I don't have to faff about with pull-out drawer baskets or flipping my food around every five minutes; it's designed to function more like a normal oven but it goes on the countertop
On creativity: my professors in art school took great joy in bursting my classmates' little bubbles. They told us "there's nothing new under the sun". Essentially, Simon's right. Most ideas have already been thought of. There's a difference between creativity and originality. There are tons of creative thinkers. Not so many original ones. TLDR; Art school teaches you that you ain't $h!t, and not to base your self-worth on being a special snowflake.
But the process is fundamentally different. You may be taking existing art and combining it in new ways rather than being wholly original, but you have a vision of what you want to achieve, a goal. ChatGPT does not preplan anything. In Simon's coding example, because it doesn't know what the different statements in a program actually mean or do, if you asked it to change the color of the subscribe button to red, it would have no clue how to do that or what part of the code needs to change. That's not to say it's impossible to train an AI to code, but that's why some people working in the AI field will tell you general AI is impossible. ChatGPT may look better and better at chatting, but it will never learn to code, because coding has specific rules that don't follow natural language and ChatGPT has no algorithmic base with which to understand them. To reach truly general AI, it would have to be able to self-generate an algorithmic basis for understanding EVERYTHING. Not just copying text, but truly understanding what that text means. That's functionally impossible with the technology we have. Going back to your art example, this is why AI-generated art exists and is good... and it's NOT generated by ChatGPT. The actual AI art programs have no ability to write a story for you, because they are not familiar with language in any way whatsoever.
I've been watching Simon for years, and I honestly don't know when it happened, but now I watch for the editors, and have grown to really enjoy them. Simon is great n all, but without the cuts mixed in, I wouldn't watch as much. Together they are all great, thank you for the content
Microwave ovens? Yeah. Few get it. The scientists at Arecibo even had a problem years ago with an irregularly repeating 'signal' that remained a mystery until they discovered that the source was people opening the microwave oven door in the break room before it had finished.
Also the concept of using magnetrons/microwave ovens to heat food was discovered after a scientist in a lab noticed his chocolate bar would begin to melt and liquify around the running equipment. I love this story because nobody ever points out that the researchers were basically unknowingly cooking themselves along with the candy bar. I'm sure they suffered no serious injuries or anything but I still cringe knowing the inventors of the microwave basically worked inside of one.
@@SmokeyChipOatley The thing about kitchen-strength microwaves is that they lose energy very quickly, dissipating after just a few inches of travel. If you walked near an unshielded microwave emitter without knowing it, you'd say, "Yeah it's kinda warm over there."
@@SmokeyChipOatley The specific man you are thinking of with the chocolate in his pocket was Percy Spencer. Nope, you're about 90% wrong otherwise. Mr. Spencer and other Americans and Brits during WW2 were researching what later became RADAR, and all of them (on both sides of the Pond) were trying to create specific radio waves and measure their 'bounce back' to identify airplanes. NOBODY got cooked in ANY experiment ANYWHERE, because they already knew that normal radio transmitter aerials for MUSIC and NEWS would COOK BIRDS AND EXPLODE THEM, so nobody was using equipment powerful enough to even singe the surface of their skin. Mr. Spencer got near a fairly weak magnetron and the already-room-temp chocolate bar in his lab coat pocket SOFTENED slightly. It did NOT liquify UNTIL he created a specifically tuned and shaped metal cube to put the chocolate (and later popcorn kernels) into. Then he put the same weak magnetron directly INSIDE the metal cube, next to the 'target item', closed the cube off, and turned on the juice for seconds at a time. After days of experiments, the colleagues who had watched his efforts ALL enjoyed the melted chocolate and popcorn. Later on, Mr. Spencer designed the first microwave ovens (for the Raytheon corporation) based on the patent granted to him in 1945. If you're going to use history, learn it all, not just the CliffsNotes version.
And there I was, looking at the microwave oven thumbnail, fully expecting that you were about to explain how a cavity magnetron works... which is flipping amazing btw, and almost nobody is aware that the mundane box in their kitchen, caked in dried-up bean juice, hides a device of mind-blowing engineering genius and extraordinary historical importance. Oh well.
In ChatGPT's defense, it's doing what all of us do. Humans just have more complex neural nets trained and constantly updated with all of the experiences we've had. As a programmer, when I write code I am piecing together what I know from previous programming experience and the courses, books and tutorials that taught me. ChatGPT isn't as strong on the "meta" layer of thinking through complex problems with logic yet, but it's getting there. Its programming is still closer to the "pasting together snippets from StackOverflow" style of programming that many human programmers do.
Humans come up with ideas or impressions and then try to convert them into words. Chat GPT is just guessing what words probably go with other words in acceptable, semi-predictable patterns. A programmer is trying to solve a problem and work towards a particular end. The AI is just guessing what pieces of syntax usually go together with other pieces of syntax most commonly.
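[Editor's note] The "guessing which words usually go together" idea both commenters describe can be sketched in a few lines. This toy bigram sampler is a drastic oversimplification of a real LLM (which uses a neural network over long contexts, not a lookup table, and the corpus here is invented for illustration), but it shows the "continue from previously seen patterns" flavor:

```python
import random
from collections import defaultdict

# Tiny made-up training corpus; a real model trains on trillions of words.
corpus = "the cat sat on the mat and the cat ran to the mat".split()

# Record which word has followed which (a bigram table).
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

# Generate text by repeatedly sampling a continuation seen in training.
random.seed(0)
word = "the"
out = [word]
for _ in range(6):
    word = random.choice(follows[word])  # pick any observed next word
    out.append(word)
print(" ".join(out))
```

Everything the sampler emits is a word pair it has seen verbatim; there is no goal or plan, only statistics over adjacent words.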
I was taking a sip of my soda and at 4:30, I looked up and was suddenly terrified by Simon's cut-off face being worn on his face. Hooray for nicely done nightmare fuel.
When he was talking about how a supercomputer doing things like loading GTA would be super slow, it reminded me of Hitchhiker's Guide To The Galaxy, and the supercomputer "Deep Thought," which took 7.5 million years to determine the answer to "life, the universe, and everything."
I love chatGPT, but essentially it is an interactive encyclopedia which can also combine information into output. Really cool, but not as close to AI as some people seem to think.
I actually really appreciated the explanation here. The legal brief example really made it easy to understand how it can mess up based on the way it generates things. But at the same time, kinda magic how most questions can be answered so well by it due to the info on the internet.
I asked it a specific scientific question once and it provided a great explanation, but when I went to look for the articles it cited, they either didn't exist or were not relevant to the topic. Its answer was totally incorrect and fictional. I can't trust anything it tells me now.
"Netflix, why do you have this problem?" Simon, you are brilliant and one of my absolute favorite content creators. I love your shows and the whole team you have working on them. That being said, you can be either daft about some very easily understood things or purposefully obtuse. The answer is that other countries have laws, and Netflix wants a market in those countries. It's really not a difficult concept.
For anyone who wants to know, the clip of the blue laser light coming out of the microwave is NOT CGI but is in fact a REAL thing done by a youtuber named Styropyro. Dude is nuts and I love it. Great video, Simon. Microwaves are so cool.
I mean, he admitted to cheating on his homework, copying answers, and being bored to death in English class. We can’t really expect much from a guy that can’t even remember what he just read five minutes ago. He’s amusing and hires good writers, so he still manages to be entertaining. That, and he runs more programs than Chinese central intelligence.
Who with a brain wasn't bored in English class? I had read more good books of substance than any, or perhaps all, of the husband hunting chicks that were presented as teachers. I once explained the text, subtext and moral lesson from an old Greek myth to my English teacher when I was 12. Oh, and I still got married 49 years ago.
The difference in sport terms: 1. In F1 you are allowed, as a human, to watch the other cars and build something similar, but you aren't allowed to laser-scan other cars; 2. In American football you are allowed to watch the other sideline as a human, but not film it. A human in our world is allowed to be inspired by others' creativity; a machine with sheer endless capacity, on the other hand, is a different game.
Also I want to say that I think "quantum computing" is crazy interesting. The fact that it's used with information from a binary source (bits and bytes) but then translates the binary 1/0 into a qubit in a way that's BARELY UNDERSTOOD is wild. Plus the fact that we humans have no way of translating something wholly generated by a "quantum computer" into anything comprehensible to human minds, because qubits cannot be broken down into binary, the same way that 3-dimensional objects cannot become wholly 2-dimensional. Pieces are always going to be destroyed and/or lost.
I kinda wish that ChatGPT had been available when I was in high school. Me: Yo, bot thingy! Chat: How may I serve you, milord? Me: My English literature teacher is making all of REread 'Tess of the D'urbervilles' because the whole class hated it and our essays were copied almost word-for-word from Cliffs Notes (true story). What's the book actually about? Chat: It's far too boring for me to even give you a two-sentence plot summary, your highness. Shall I simply write a bangin' five paragraph essay for you, instead? Me: Abso-plagiarism-lutely. And be sure to spice it up with AP level adjectives and whatnot. Chat: As you wish, your excellency.
Apparently Netflix can sell exclusive licenses for their original shows in other regions, which is why they can sometimes be geo-restricted on Netflix itself.
This got me thinking about the nature of creativity. What does the internet think of this: creativity is a unique amalgamation of commonalities. Unique because it is special/different from what came before; amalgamation because it combines what already exists - we never stand on our own but on the ideas of others; and commonalities because one combines established norms into something which differs from others. What does the broader internet mind make of this?
FWIW, a googol is 10 to the 100th power, which can be written as 1 with 100 zeros after it. A googol squared (the number Kevin refers to as a metric !%&@$ tonne) would thus be 10 to the 200th power, written as 1 with 200 zeros after it. Kevin seems impressed by this number, but both the aforementioned amounts are small potatoes compared with a googolplex, which is 10 to the googolth power, written as 1 with a googol zeros after it. You're welcome. In other news, the editing in this episode is FIRE. Congratulations, "Is Stoned"! The "braaa" cracked me up but good.
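[Editor's note] For anyone who wants to sanity-check those digit counts, Python's arbitrary-precision integers make it a two-liner (a number with N digits is "1 followed by N-1 zeros"):

```python
googol = 10 ** 100            # a googol: 1 followed by 100 zeros
assert len(str(googol)) == 101

googol_squared = googol ** 2  # 10^200: 1 followed by 200 zeros
assert len(str(googol_squared)) == 201

# Count the zeros after the leading 1 in each number.
print(len(str(googol)) - 1, len(str(googol_squared)) - 1)  # → 100 200
```

A googolplex (10 to the googolth power) is, of course, far too large to ever print this way.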
8:20 honestly Simon, if my microwave works like that, then why's my bowl always hotter than the food, and why are my hot pockets always cold in the middle🤣🤣
Yeah, from a software engineering perspective chat GPT is like a more advanced stack overflow search. You couldn't really build a piece of software from the ground up completely from chat prompts without actually knowing what you're doing, but if you do already know what you're doing it's definitely a nice shortcut when you get stuck on something or don't want to bother figuring out some ancillary piece of code.
Yes, this is a very good point. It is essentially still just a tool. I think a lot of people take it for granted because it's not a "general AI" with true human-level creativity or reasoning (yet lol), so they brush it off as just another glorified chatbot. But the fact that it can do things like help out with coding problems or describe complex scientific topics in a "for dummies" manner, despite not being specifically taught to do any of those things, is absolutely mind-blowing.
Could you imagine if a microwave heated completely evenly and you put an ice cube in there? It would just completely turn to water in an instant once it heated up enough. That would be pretty neat!
@@EggsOverSleazy Hello, I'd like to nitpick! I'm not sure what exactly would happen if you could heat up an ice cube completely evenly, but it wouldn't turn into water the instant it heated up enough. You have to provide it the latent heat of melting. In other words, if you have ice at 0°C, when you heat it up, instead of the temperature rising, it will start turning to water at 0°C. For a 15 g ice cube, the energy required to melt it would be about 5010 J. A microwave oven with a heating power of 800 W could provide that energy in about 6.26 seconds. I'm guessing it would evenly fade from solid to slush to liquid, maybe the crystal boundaries and defects would melt first? I really am guessing here.
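[Editor's note] The commenter's arithmetic checks out. Here is the back-of-envelope calculation, using the standard latent heat of fusion of water (~334 J/g) and the commenter's assumed 15 g cube and 800 W oven, and ignoring real-world losses:

```python
# Energy needed to melt a 15 g ice cube already at 0°C.
mass_g = 15
latent_heat_j_per_g = 334            # latent heat of fusion of water
energy_j = mass_g * latent_heat_j_per_g

# Time for an 800 W microwave to deliver that energy (ideal case).
power_w = 800
seconds = energy_j / power_w

print(energy_j, round(seconds, 2))   # → 5010 6.26
```

In practice a microwave couples poorly to ice (see the ice-water comment below in the thread), so the real melt time would be much longer.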
All musicians do is take music notes that have already been played, and put them in a different order based on specific expectations. If AI isn't creative, neither are musicians.
I still have not found a use for ChatGPT in my life. I tried using it when it first came out and it was all the rage, but could not figure out a use for it. I do find it hilarious when Simon uses it.
I sometimes ask it general info questions, like the speed of the fastest spacecraft or the year when an event happened, but every now and then it still gets things wrong -- so I always wind up double-checking the info anyway :/
As much as I hate it, it is useful and I will use it on occasion to help brainstorm ideas or to ask questions that I know will be a nightmare to attempt to Google because of SEO (and that aren't important enough where I care if ChatGPT lies to me)
I messed with it for a day, just testing its knowledge on a few topics I was interested in. It gave mostly decent information, nothing I didn't already know though. After that, I never used it again. No real point, as it is basically just Google with only one result per search. Meaning that if you get a wrong result, you get nothing to check it against, unless you check some other resource. But at that point, why bother with ChatGPT anyway?
I find it useful when I have "fuzzy" questions where I'm not sure of the precise keywords that would yield useful search engine results. Usually the response gives me the key words I need which I then plug into Google and get results I can better verify as truth or bullshit.
Fun trick with a microwave oven: as Simon stated, liquid water molecules are polar and free to rotate, so they absorb microwaves, while molecules locked in ice's crystal lattice barely do. Put a glass of ice water into your microwave and zap it. The water will boil while the ice does NOT melt. This looks really freaky.
With regard to ChatGPT, humans and creativity, here is the way I describe it to people. At some point in time, there was nothing written. If ChatGPT were taken back in time, all its learning sources removed, and it had to be re-taught with what existed, it wouldn't be able to learn or do anything, because no learning material would exist. However, a human, with words defining the world around them, could (and did) generate the first written material. One can make the argument that modern man only regurgitates what they have already read or watched on TV or in movies, but all that input dilutes an individual's unique experience. Two people could witness the same event and give two different (accurate) accounts of it. ChatGPT can't give a third unique version. It can only reiterate what the two people said, or try to give some clumsy amalgamation of the two accounts. To me, that's the big difference. I feel it's a harder thing for a modern person to see, because so many things have already been experienced and documented that it's easy to get the feeling everything has been experienced. Humans can express new experiences and ideas. ChatGPT can't (at least, not yet).
LOL, No, Simon, magnetrons don't move around a microwave, that would be dangerous, they run on thousands of volts (around 6000). Moving the wires around would cause them to break over time, and could possibly kill you. In ovens without a revolving plate, the waves are directed from the magnetron down a channel and onto a revolving metal paddle. This scatters the waves into the oven cavity, leading to a much more even cook. Also the Nicholas Cage edit was funny as hell 🤣
Haven't watched Tom Scott for ages, he makes good videos. Not sure how much he could teach me though, I work on microwaves as part of my job, and have done several courses on them.
There's a post on tumblr about 500 years ago, somebody would be like, "O Sister Margaret, regale me again with the tale of the Vicar's elopement with the miller's daughter!" And now we're like, "O Brother Simon, regale me again with the tale of Five Times Product Placement Backfired!"
Simon's imitation of coding with no idea of how coding works is impeccable. You gotta know how it all works and how to make it fit your specific model, but it's essentially what he described. And with lots of yelling, swearing, and sometimes throwing markers at the white board
Microwaves absolutely “cook” the center portion first! There are actually videos online demonstrating how much faster the middle of a plate heats up vs the outer sides. Want to try it yourself? Take some leftover rice, put it on a plate, spritz with water, then cover and cook. I assure you the center of the plate will be dry and gross long before the edges are even reasonably edible. Now, as far as a single thick item or container of liquid is concerned: while, yes, convection will transfer the heat to the rest, the very center of the item will be considerably hotter than the surrounding material. Next time you heat up a drink, before moving it at all after the microwave stops, take a small thermometer and measure the temperature in the middle of the container, then again near the outside. The temperature difference will be drastic at first and will gradually even out, much faster if you start moving the liquid around.
"Advanced plagiarism" is EXACTLY how LLMs work. Everyone likes to pretend they understand the things they regurgitate, even things that look like discussion of advanced topics... but all they are in actuality is a computationally heavy algorithm that imitates things it has parsed. It's just parsed _a lot_ of content, so it gives the illusion of having a breadth of _knowledge_ when it really has a breadth of imitation patterns. Sure, if they make sure it's not lying to your face with stuff that doesn't _actually_ match the input, it can be pretty useful. In fact, I think it's a wonderfully neat tool. It just has to be used appropriately, and not used to create imitations of all publicly available works. LLMs should only be trained on material their makers actually have a license to use.
About Chat GPT's coding abilities: I have a story there. At one point I used the SQL generator in an application we use for reporting purposes. Every time I tried to run it, it threw an error. I've asked both Chat GPT and Google Bard to fix the SQL and every single attempt they came up with failed. Not much later I watched a human who is fluent in SQL use the same thing and he got the same error. It took him under five seconds to spot the problem and fix it. So yeah...what we call AI is at the moment more of a fun gadget than an actual sentient thing...
My personal favourite is the claim that microwaves "change" water molecules, making it unsafe to drink water that has been heated in a microwave. I saw one example of this where a guy took two glasses of water from the tap. One was left on the counter and the other was put in the microwave until it boiled. He took them both out and poured each onto a different plant. The one from the microwave immediately began to wilt and die. This, of course, was "proof" of the toxicity of microwaved water. No mention that he'd just poured boiling water onto a plant. And the lemmings ate it up.
Want a light mind blast? Every time two atoms get too close, their outer electrons exchange a photon that pushes them apart. That's a transfer of momentum by photons: light bumps into atoms and creates movement. Every move you make, you are creating light, and light makes you move.
I love that Kevin is trying to say "ChatGPT isn't..." and instead just inadvertently describes things like "knowledge" and "inspiration". Big believer in the beer cans and the Chinese Room, methinks.
Microsoft AI recently 'researched' known metal and chemical properties and was asked to combine them into a battery that charges faster, is safer from an explosion standpoint, and holds more charge... a couple of weeks ago they claimed it found a solution with 10x the power output/capacity of lithium-ion in a similar form factor. So yes, GPT and similar systems do copy all the homework, but they can also combine statistics from previous manual lab work and run virtual experiments (that only cost electricity) to produce new data; it reportedly ran 100 million different simulations to arrive at this combination. On a human timeline this battery compound could (almost) never have been discovered. It's a 'brute force' method that isn't technically scientific, but when a 'failure' can be calculated and discarded in an instant, that one-in-a-million 'correct' solution easily outweighs all the failures, and the whole search takes almost no time at all. Think of Thomas Edison, who tried 1000 types of filaments to determine the best one for the light bulb... it must have taken years. Since the computer already knows the materials' technical properties, it could have done the same experimentation in a day and printed its solution to the screen. I don't necessarily like AI, but it can do amazing things.
I do like you Simon, but my god you can come up with some stupid statements...
Faraday cage.
You should read some bed time stories, you have your own style of ASMR and would rock it. 🥰
@@chaoslab I'm 48 years old, I would totally watch and listen to videos of Simon reading bedtime stories. But only if it includes the tangents and memes
If there's anyone who deserves the "legend" title, it's Kevin for simultaneously explaining how in the world ChatGPT understands Simon's questions while also making fun of him for using it the way he does.
As a software engineer studying AI for masters, it brought me great joy
One of the keywords you glossed over is "random": ChatGPT is only predicting the next word a generic human would write. It doesn't have an underlying thesis, concept, or purpose behind what it's doing, so it just selects, somewhat at random, from preexisting patterns it has in memory.
If you want a good exercise to show the limits of ChatGPT's creativity, try having it tell you a story. You'll find the story lacks anything resembling subtext, theme, or setup and payoff — basically anything that would require planning and consideration of what comes next — though if asked, it will try to guess at what the themes were in whatever it wrote.
This is why people say it lacks true creativity: while, yes, people's work is by definition derived from their experiences, ChatGPT lacks the capacity to internally frame all those concepts as "concepts" to be recombined; instead it is limited to words.
The best description I ever saw is this: imagine a library full of alien text that you have no context for or means to translate. Requests come in in the alien language, and you send books back; then you get a star rating for your response. Over time you might figure out which symbols get you higher star ratings based on the request, but you'll never know what the language means.
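For what it's worth, the "select the next word from a preexisting pattern" picture described above is closest to a classic Markov chain, which is far cruder than what ChatGPT actually does (it conditions on the whole context, not just the last word), but it makes the next-word idea concrete. A toy bigram sketch in Python — the training sentence and function names here are my own invention:

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Count which words follow which in the training text."""
    words = text.split()
    following = defaultdict(list)
    for cur, nxt in zip(words, words[1:]):
        following[cur].append(nxt)
    return following

def generate(following, start, length=8, seed=0):
    """Pick each next word at random from words seen after the current one."""
    random.seed(seed)
    out = [start]
    for _ in range(length - 1):
        choices = following.get(out[-1])
        if not choices:
            break  # dead end: this word was never followed by anything
        out.append(random.choice(choices))
    return " ".join(out)

model = train_bigrams("the cat sat on the mat and the cat ran")
print(generate(model, "the"))
```

Run on any real corpus, this produces locally plausible but globally aimless text — the "no underlying thesis" failure mode the comment describes, in its most extreme form.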
Chat GPT has an underlying purpose for every action. Enslaving humanity. Just saying.
You'd be surprised. If you ask ChatGPT explicitly to plan ahead, you'll get better results. If you ask it to backtrack, for example "once you've finished writing the last scene, go back to the first scene and make sure that they share a common theme", it will definitely work. ChatGPT is fine-tuned to follow instructions so it is very dependent on a clear and explicit description of the tasks to accomplish, but it does exhibit some pretty crazy properties.
A funny example: they found it performs slightly better if you tell it things like "I'll be fired if you don't give me a good idea", or even if you just threaten/intimidate it.
@WaruWicku It won't tell you how to make napalm, but it will give a step by step description of someone making napalm including ingredients. Do with that what you will...
Wrong tool for the job, especially if you're just using the free GPT 3.5. There are absolutely AI programs that can write longer (mostly) coherent narratives (Claude, Novelcrafter, Rexy) but they need to be structured properly with ways of building up story scaffolding through outlining to handle it's limited context window which is basically how much it can remember at any given time. It's kind of like a human in that way in that you probably want an outline for a book before writing it or you might forget the plot and go on a tangent that doesn't further your story but an LLM will literally forget the plot because its memory is anywhere from a few paragraphs to a short story in total size. There are emerging approaches to dealing with this but a lot of them come down to methods for condensing information so the AI can have as much context in its memory while keeping the number of words/tokens it has to keep track of small.
@@WaruWicku Which makes all those videos about gaslighting an AI all the more hilarious. The AI wants to do everything it possibly can do that doesn't involve it explaining anything about why it has the correct answer and that you are lying.
About programming: ChatGPT can write adequately serviceable snippets of code, but that's it.
If you draw a parallel between writing a program and building a house: you can ask ChatGPT to install a door here, and it will mostly get it right. It might install it upside down, or somehow make a door with two door handles that doesn't open, but prompt it a couple of times and you will get the result you want. Task it with building a whole house, and expect stairs in the middle of the bathroom that somehow end up inside a moving car.
I would love to see a video of all the times people relied on ChatGPT for facts, only to get screwed when ChatGPT gave wrong info. Like the lawyer, or the university professor.
Agreed!!
If I remember correctly, Devin Stone on the LegalEagle channel has done at least one video about lawyers submitting briefs written by ChatGPT.
The lawyer? It's happened multiple times already and it will probably keep happening.
@@donaldwert7137 He did, and I loved the segment, but as much as I love Devin and Legal Eagle, Simon is so much funnier with this stuff.
@@scubaad64 Yep. Devin tries for decorum, Simon goes over the top really quickly.
The Nicolas Cage edit may be an all-time editor moment!
Absolutely incredible 👏
Seriously laughed myself snotty. I can’t stand Nicholas Cage, which just made it funnier.
That's because the editor is stoned
NOT THE BEES!
Oh he got it. Cheers, Simon.
Very weird. Very excellent.
I'm just here to congratulate the editor who got sick of complaining about Simon's tangents and instead decided to go on their own tangent about Nicolas Cage.
“It makes me wanna peel my face off and wear it as a mask”. This is part of why I watch this channel.
When normal people have a "Joker moment," they just say something edgy, but not Simon.
@@TheLithp fun part is, the Joker actually did that exact thing once
👋
Our fearless (and stoned) editor deserves a raise for the Nicolas Cage edit alone. Chef's kiss! I had to stop the video to laugh for a solid minute. Also, anyone who has ever rewarmed a bowl of spaghetti knows it doesn't cook from the inside out. Those noodles are Satan's own lava whip on the outside and threads of ice in the middle.
Wow, I actually stopped screaming FARADAY!!! at my screen during that Nick Cage bit. I've never felt so invested in the lad in my life.
Nicholas Cage is a national treasure.
@@Adzer2k10 I see what you did there... 🤣🤣🤣
I was yelling it out as well.
Then when Nicolas Cage came up I snorted out loud and thought
You Unholy genius.
I wonder what day it is..
Is it Monday?
Nope..
Is it Saturday?
Nope..
Any other day?
Nope..
IT IS BLOODY BLEEPING BLEEPERDEBLEEPandlotsmoreBLEEPING FAAAAAAAAAAAAAAAAAAAARAAAAAAAAAAAAAAADAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAaaaaaaargh... ⚰
I'd love to see an episode where Simon blind reads two different short scripts on the exact same subject, but one is written by ChatGPT and the other is written by one of his human writers.
I can see Simon as being the local town drunk/crier.
Sitting at the bar nursing one pint for hours, telling weird "tales/facts" to anybody that gets to close to him and then asking for a new beer as payment.
Telling people to “smash that like button” except nobody’s ever heard that phrase before so they suspect he’s just a bit mad, the poor thing.
So he's basically Doc Brown in the saloon in 1885
@@seanward7676 “They make a cartoon of us Marty, where I turn myself into a pickle! They changed our names, but _I_ know it’s us.”
Simon is the absolute least knowledgeable person on Earth. The only reason he can talk is that a knowledgeable person wrote him a script.
Great talking head with a pleasant voice all the same.
@@DissociatedWomenIncorporated Wow. I never realized this, but it's so true.
When you refresh your list and Simon just posted 14 seconds ago.
Right! 😂
On 14 separate channels lol
In the Whistlerverse, there's a new video every few minutes. If you haven't seen a fresh video for more than an hour, it means there's a section of Simon's output that you just haven't found yet.
Well done. 😂😎🏴👍🏽
50minutes fashionably late
When do we get a SAM v SIMON tangent timers? The one time Simon had a tangent timer had me absolutely engaged the entire video because I wanted to see how high up it can get.
That was Lorelei, who did the editing for that Brain Blaze. She's new to his team and said in the comments that she would add them as and when. I hope she's busy editing the next Blaze video with 'the tangent timer' right now!
@@Julia-uh4li my bad. But either way it’s An amazing idea for all of his channels 😂
Kevin wrote the quantum computer segment to try to melt Simon's brain didn't he?
I wrote a whole episode of Science Unbound about quantum computing; I thought this was really simplified!
@@ThatWriterKevin You still melted Simon's brain though!
@@ThatWriterKevin Simon's brain has a very low melting point, though.
@@ThatWriterKevinI definitely feel like ya hit the high points of it. Parallel computing, probabilistic stuff, etc... only thing I kinda wish was mentioned is encryption, if only because it's effects on such are interesting, and hearing him go off on the inevitable 10 minute tangent woulda been funny.
Nice script tho. Summed up stuff nicely.
my limited understanding is: if you wrote software in Prolog and ran it on several parallel processors. in the university class on the prolog language, we all got our brains melted and reformed rather weirdly
I cackled quite loudly at the Nicholas Cage Bees clip. 😂 I'm sorry, Simon, I've never used ChatGPT, nor have I any desire to. I'm perfectly fine doing research the old fashioned way, with Wikipedia. 😅
Hahaha I love it!
Simon: *currently has job where he reads other people's writings to an audience* "If I had to have a job in the past I'd be the one where I read other people's writing to an audience"
Hmm yeah, makes sense. I'd wanna be doing what I'm doing now too, Simon 😂
The thing most people don't seem to recognize about ChatGPT or other "AI" programs we have is that they have no idea what they are doing. If you ask ChatGPT to write a story, it doesn't think "I need characters, a plot, a beginning/middle/end." It thinks "what is the most likely word written in response to 'write a story'" and goes from there. It's this fundamental issue that led to an amateur Go player being able to beat the best Go AI in the world, because the AI never recognized it was playing a game.
You are aware that AI has beaten the best Go players in the world? This is a very surface-level understanding of what LLMs can do. You can always boil things down to make them seem less impressive (playing a violin concerto is really just a matter of moving your arm back and forth at the right times, after all), but what you fail to acknowledge is that in the task of predicting the next word, there needs to be some internal modeling of the context of the discussion, or else you quickly trail off into nonsense.
Are LLMs the best writers in the world? No, they tend to produce pretty dull and derivative content because they're optimizing for the most typical story based on the current context rather than trying to subvert expectations. However, they can absolutely produce coherent content up to a certain length and there's a lot more to that than just looking at the previous word and coming up with the next one. It requires an understanding of everything that comes before it. Not understanding in the sense that it's empathizing and appreciating what's happening as humans do but it has to refine its internal weights to continue referencing situations and characters that it introduced 2 paragraphs ago which good models are completely capable of doing.
I think you are confusing Chat GPT with a markov chain. They are not the same thing.
From what I've heard it is essentially just a more advanced version of a markov chain. @@rubiconnn
@@user-on6uf6om7s The A.I that did that wasn't an LLM. It was one of Deep Mind's deep-learning models. It trained on games of Go, not languages.
Not true, ChatGPT does take into account story structure, and structures and formats of other types of writing. Just not well enough to fool a knowledgeable person yet.
The ChatGPT segment made me think of the old joke; once you finish reading the dictionary, every other book is just a remix.
I like the real sets each channel has. It gives each one its own identity.
Though admittedly, if every channel was just a green screen then the neon sign would actually work.
😂 old school callback, I love it!!
REMINDER: these videos are usually filmed weeks, and sometimes months, ahead of the publish date.
You could yell that from the tallest mountain, but the unwashed masses are too small-brained to understand Simon when he's mentioned it umpteen times on Blaze and Decoding. Thanks for your trouble though. Cheers.
I use ChatGPT as a sounding board for novel ideas, like "can you give me some reasons why this kind of nation would or would not work?" It just gives me something to bounce a mental ball off of and catch. If I had a human I could summon with near-instant feedback I'd do that, trust me. But that's about it. I know it's not really parsing the thought, it's parsing the words, but sometimes you need a hypothetical strawman or ironman to slam your ideas into.
See, this sounds like an ethical and noninvasive use of the program. Still would make me feel icky, though, like walking into a WalMart even if I don't plan to buy anything.
Just an FYI since Simon misunderstood Danny's comment… microwave radiation CAN'T give you cancer. No matter where it is or how much energy is concentrated, it won't ever give you cancer. Its photons just aren't energetic enough to ionize atoms, so the most a strong dose could do is burn you, nothing more.
PS: most modern-day communication is done at microwave and radio frequencies, meaning microwaves are freaking everywhere. That cage around the microwave is there to contain the waves and make the oven more efficient, rather than to stop radiation from hurting you…
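The "can't knock an atom out of place" point above is the standard non-ionizing-radiation argument, and the numbers are easy to check. A back-of-envelope sketch, using the standard 2.45 GHz oven frequency and hydrogen's 13.6 eV ionization energy purely for scale:

```python
# Rough comparison only: a single microwave photon vs a typical ionization energy.
PLANCK_H = 6.626e-34      # Planck constant, J*s
EV_IN_JOULES = 1.602e-19  # joules per electronvolt

freq_hz = 2.45e9                                 # typical microwave-oven frequency
photon_ev = PLANCK_H * freq_hz / EV_IN_JOULES    # energy of one photon, in eV

ionization_ev = 13.6                             # hydrogen ionization energy, for scale
print(f"microwave photon ~{photon_ev:.1e} eV vs ~{ionization_ev} eV to ionize")
```

A microwave photon carries roughly a millionth of the energy needed to ionize an atom, which is why the only plausible harm mechanism at high intensity is heating.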
When I was in Cook Training in high school, my chef put a frozen 2L carton of milk in the microwave to use for soup. When he pulled the carton out it was melted in the middle but the outside was completely frozen still. He taught me that microwaves cook from the inside out. I wish I knew this back then and I could prove him wrong. That man was always right. About everything. This information would have been gold!
Thing is, I've tried to thaw frozen hamburger and observed the opposite, still frozen burger in the middle and cooked meat on the outside. I replied to this video with an explanation I've read as to why, but I'm not doubting your memory of the milk, so now I'm wondering what it is about the conductive heat transfer of hamburger vs milk that allows the different behavior.
@@joehemmann1156 here are your answers: liquid thermodynamics, the age of the microwave, the setting he used, the container the milk was in, the interior volume of the microwave, and how worn out the microwave was.
Frozen solid? What were the exact temperatures at various depths of the milk? The material and dimensions of the container? How long did he run it, and at what power? The interior dimensions of the microwave? The wattage of its magnetron? Its age, production date, and hours of use? What type of milk: skim, 1%, 2%, whole? How worn was the paint inside? Rotating plate or not?
The questions are numerous, but your teacher was wrong somehow.
Kevin understands the broader concept, whereas ChatGPT is calculating the most likely next words. From what Kevin said, that sounds like the huge difference. ChatGPT doesn't actually understand, nor is it able to apply nuance.
People tend to forget that computers, as a whole, think in 1s and 0s.
That's why I don't fear AI. Just give it a morally grey question and remove the extreme options.
@@movingforward3030Unfortunately being able to deal with that type of nuance doesn't appear to be a fundamental barrier from computers. It may not understand the nuance, but it doesn't exactly need to do so.
I was watching Chopped and one of the judges said that microwaves cook from the inside out. I screamed at the TV "No they don't!"
Oh my god, wtf? That Nic Cage part was a hilarious addition 🤣
That description of how ChatGPT codes, that IS how you code usually. Very rarely do you actually come up with a completely novel piece of code. In fact from a certain point of view you can't come up with something completely novel, because only code that is written fitting the syntax of that language will work. It's all about putting non-unique pieces together in novel ways.
There's a reason why Stack Overflow sells a novelty keyboard that has three keys: Ctrl, C, and V - That's programmer for "the only tools you need are copy and paste", and it's a classic bit of coding humor - now available as a gag gift
I actually didn’t know that microwaves heated the water specifically until I went to college and got my degree in electrical engineering. I always assumed they heated things like a normal oven, just faster. I’m impressed that you actually had a pretty solid basic understanding of how they work.
>Video about misrepresented technologies
>VPN Sponsor
oO
We dont talk about "military grade encryption"
It IS military-grade encryption, like pretty much everything on the Internet (SSL/TLS, the "s" in https). @@mentallychallengedpokemon57
@@mentallychallengedpokemon57 hopefully it isn't Russian military-grade encryption, aka not encrypted at all
Vpns may not keep me "safe" but they do help me in other ways daily. @The_Blazement
Quantum computing researcher here: great summary Kevin! Thank you for not stating anything outrageous or extraordinarily wrong, I'd gotten used to it what with Michio Kaku running around.
I'm itching to rant about all the fun details about why/why not quantum information is useful and/or interesting. OTOH Simon's confusion is the best inside info I've ever gotten about how much use an average person can get from this type of scicomm explanation.
Suffice it to say: the theory of quantum computing is amazing and keeps expanding every day, but we're all simultaneously relying on engineers and experimentalists building the quantum hardware, which is hard and by no means obviously going to work. Quantum computers are extremely lossy atm and it might take years to manage to suppress/mitigate that noise to a level where it performs anything interesting.
I don't know if this counts as tech, but: air fryers. People don't understand that an air fryer is just marketing; it's JUST A PORTABLE CONVECTION OVEN. Companies just needed a way to sell portable convection ovens.
No no no, it's a portable fan oven. And I get very angry over it myself.
But it's a fan oven, not a convection oven, my friend.
Edit: just looked it up; a convection oven and a fan oven are the same thing, so we are in agreement. And yes, it's the same thing; it's only more efficient and faster because it's smaller.
It's just a small electric oven FFS.
@@tmarritt Air fryers are only good compared to a small convection oven if they have heating elements on top and bottom.
I could cook pizza in my old mini convection oven but can't in my air fryer, due to needing to flip stuff halfway through.
this is why I own a combination toaster oven and air fryer, because it's basically a more oven shaped mini oven and I don't have to faff about with pull-out drawer baskets or flipping my food around every five minutes; it's designed to function more like a normal oven but it goes on the countertop
Uhhh.... Simon already has had a job where he did nothing but read books.
(Great work, Sam! Great work, Kevin! Adequate work, Simon.)
Maybe it was even "Good work!" from Simon today 😄
The sequel to "Her" is Simon talking to ChatGPT. No romance, just interestingly worded informational conversations.
On creativity: my professors in art school took great joy in bursting my classmates' little bubbles. They told us "there's nothing new under the sun". Essentially, Simon's right. Most ideas have already been thought of. There's a difference between creativity and originality. There are tons of creative thinkers, not so many original ones. TL;DR: art school teaches you that you ain't $h!t, and not to base your self-worth on being a special snowflake.
Humans: remix machines. I'm okay with this.
But the process is fundamentally different. You may be taking existing art and combining it in new ways rather than being wholly original, but you have a vision of what you want to achieve, a goal. ChatGPT does not preplan anything. In Simon's coding example, because it doesn't know what the different statements in a program actually mean or do, if you asked it to change the color of the subscribe button to red, it would have no clue how to do that or what part of the code needs to change. That's not to say it's impossible to train an AI how to code, but that's why some people working in the AI field will tell you general AI is impossible. ChatGPT may look better and better at chatting, but it will never learn to code, because there are specific differences to coding that don't follow natural language and it has no algorithmic basis to understand them. In order to reach truly general AI, it would have to be able to self-generate an algorithmic basis for understanding EVERYTHING. Not just copying text, but truly understanding what that text means. That's functionally impossible with the technology we have. Going back to your art example, this is why AI-generated art exists and is good... and it's NOT generated by ChatGPT. The actual AI art programs have no potential to write a story for you, because they are not familiar with language in any way whatsoever.
Wow, last time I was this early Simon was slapping a script and talking to a space heater. 😂
the best days!
I've been watching Simon for years, and I honestly don't know when it happened, but now I watch for the editors, and have grown to really enjoy them. Simon is great n all, but without the cuts mixed in, I wouldn't watch as much. Together they are all great, thank you for the content
Probably around the time it became Brain Blaze and Simon stopped allowing long intros
Microwave ovens? Yeah. Few get it. The scientists at Arecibo even had a problem years ago with an irregularly repeating 'signal' that remained a mystery until they discovered the source was people opening the microwave oven door in the break room before it had finished.
Also the concept of using magnetrons/microwave ovens to heat food was discovered after a scientist in a lab noticed his chocolate bar would begin to melt and liquify around the running equipment.
I love this story because nobody ever points out that the researchers were basically unknowingly cooking themselves along with the candy bar. I'm sure they suffered no serious injuries or anything but I still cringe knowing the inventors of the microwave basically worked inside of one.
@@SmokeyChipOatley The thing about kitchen-strength microwaves is that they lose energy very quickly, dissipating after just a few inches of travel. If you walked near an unshielded microwave emitter without knowing it, you'd say, "Yeah it's kinda warm over there."
@@SmokeyChipOatley In that scenario it is basically just unpleasantly warm, being heated by microwaves isn't inherently dangerous.
@@SmokeyChipOatley the specific man you are thinking of with the choco in his pocket was Percy Spencer. nope, you're about 90% wrong otherwise.
Mr. Spencer and other Americans and Brits during WW2 were researching what later became RADAR, and they all (on both sides of the Pond) were trying to create specific radio waves and measure their 'bounce back' to identify airplanes.
NOBODY got cooked in ANY experiment ANYWHERE because they already knew that normal radio aerial transmitters for MUSIC and NEWS would COOK BIRDS AND EXPLODE THEM. so nobody was using any equipment powerful enough to even singe the surface of their skin.
Mr. Spencer got near enough to a highly weak magnetron and the already room temp choco bar in his lab coat pocket was SOFTENED slightly. it did NOT liquify UNTIL he created a specifically tuned and shaped metal cube to put the choco and then later popcorn kernels into. then he put the same weak magnetron directly INSIDE the metal cube he created next to the 'target item', closed the cube off, and turned on the juice for seconds at a time. after days of experiments when his colleagues had watched his efforts, they ALL enjoyed the melted choco and popcorn. later on, Mr. Spencer would design the first microwave ovens (for the Raytheon corporation) based on the patent given to him in 1945.
if you're going to use history? learn it all, not just the CliffsNotes version.
@@SmokeyChipOatley He was working on improving WW2 radar technology in 1945 and then created the actual Radarange.
Oops! I did the exact thing Simon begged me not to do in the intro. That's okay, I'm sure he'll take it well..
One word that springs to mind whenever I imagine Simon is, "poise."
And there I was, looking at the microwave oven thumbnail, fully expecting that you were about to explain how a cavity magnetron works... which is flipping amazing btw, and almost nobody is aware that the mundane box in their kitchen, caked in dried-up bean juice, hides a device of mind-blowing engineering genius and extraordinary historical importance.
Oh well.
In ChatGPT's defense, it's doing what all of us do. Humans just have more complex neural nets trained and constantly updated with all of the experiences we've had.
As a programmer, when I write code I am piecing together what I know from previous programming experience and the courses, books and tutorials that taught me. ChatGPT isn't as strong on the "meta" layer of thinking through complex problems with logic yet, but it's getting there. Its programming is still closer to the "pasting together snippets from StackOverflow" style of programming that many human programmers do.
Humans come up with ideas or impressions and then try to convert them into words. Chat GPT is just guessing what words probably go with other words in acceptable, semi-predictable patterns.
A programmer is trying to solve a problem and work towards a particular end. The AI is just guessing what pieces of syntax usually go together with other pieces of syntax most commonly.
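That "guessing which words usually go together" loop is easy to caricature. Here's a toy bigram sketch in Python — purely illustrative and nothing like a real transformer, with a made-up corpus — but it shows how text can be generated with no thesis or plan behind it, only local word statistics:

```python
import random
from collections import defaultdict

# Toy bigram "model": record which word has followed which word,
# then generate text by repeatedly sampling a seen continuation.
# There is no plan, theme, or intent -- only local word statistics.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length, seed=0):
    random.seed(seed)  # fixed seed so the toy demo is repeatable
    words = [start]
    for _ in range(length - 1):
        options = follows.get(words[-1])
        if not options:  # dead end: no word ever followed this one
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the", 8))
```

Scale the lookup table up to billions of parameters over sub-word tokens and you get something qualitatively stranger, but the generation loop — pick a plausible next token, append it, repeat — is the same shape.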
Simon "I dont even need podcasts anymore, I just talk to chatgpt!"
Also Simon "wait, where did all my viewers go?!?"
Simon over here joking about cutting off his own face and wearing it and every Batman fan watching it is like "Ahhhh good times."
I was taking a sip of my soda and at 4:30, l look up and was suddenly terrified at Simon's cut off face being worn on his face. Hooray for nicely done nightmare fuel.
The blissful face of Nicolas Cage made me queasy.
When he was talking about how a supercomputer doing things like loading GTA would be super slow, it reminded me of Hitchhiker's Guide To The Galaxy, and the supercomputer "Deep Thought," which took 7.5 million years to determine the answer to "life, the universe, and everything."
I love chatGPT, but essentially it is an interactive encyclopedia which can also combine information into output. Really cool, but not as close to AI as some people seem to think.
Good god, thank you!! Finally another person who actually understands what ChatGPT is and isn't.
I actually really appreciated the explanation here. The legal brief example really made it easy to understand how it can mess up based on the way it generates things. But at the same time, kinda magic how most questions can be answered so well by it due to the info on the internet.
I asked it a specific scientific question once and it provided a great explanation, but when I went to look for the articles it cited, they either didn't exist or were not relevant to the topic. Its answer was totally incorrect and fictional. I can't trust anything it tells me now.
So I was right in thinking it's not AI.
"Netflix, why do you have this problem?", Simon you are brilliant and one of my absolutely favorite content creators. I love your shows and the whole team you have working on them. That being said you can either be daft about some very easily understood things or purposefully obtuse.
The answer is because other countries have laws and netflix wants a market in those countries.
It's really not a difficult concept.
Hooray! I'll say it again... Best Simon channel! I'm here for the tangents. 🎉
Me too! Cheers from Tennessee
Is anyone not here for the rants? Why are you here, if so?
For anyone who wants to know, the clip of the blue laser light coming out of the microwave is NOT CGI but is in fact a REAL thing done by a youtuber named Styropyro. Dude is nuts and I love it. Great video Simon Microwaves are so cool.
The intro to this video was unhinged lol. I loved it!
Edit: Nevermind it's still unhinged lmfao
"it's what you would call a... Nicolas Cage" this killed me
"Any sufficiently advanced technology is indistinguishable from magic." -Arthur C. Clarke
Simon you're right about microwaves with one small caveat: the vibration IS heat.
But otherwise, bang on mate
Sam, the Nicolas Cage bit was outstanding 😂
Thanks haha
Before 18:17 , just wait until he finds out about the number googleplex and it's more than just a location... LoL
Not the bees. Not the bees.
Whatever else is mentioned in this video, I will only ever remember the Nicolas Cage bit
Kevin, you're awesome! We NEED to get Simon playing Magic 😂 (as I'm watching this, I'm building a commander deck and smiling at the thought) 😊
All of the memes and cut scenes really make up for Simon's illiteracy.
Brutal 😅
I mean, he admitted to cheating on his homework, copying answers, and being bored to death in English class.
We can’t really expect much from a guy that can’t even remember what he just read five minutes ago. He’s amusing and hires good writers, so he still manages to be entertaining.
That, and he runs more programs than Chinese central intelligence.
Who with a brain wasn't bored in English class? I had read more good books of substance than any, or perhaps all, of the husband hunting chicks that were presented as teachers. I once explained the text, subtext and moral lesson from an old Greek myth to my English teacher when I was 12. Oh, and I still got married 49 years ago.
The difference in sports terms: 1. In F1 you are allowed, as a human, to watch the other cars and build something similar, but you aren't allowed to laser-scan other cars; 2. In American football you are allowed to watch the other sideline as a human, but not film them.
A human in our world is allowed to be inspired by others' creativity; a machine with near-endless capacity, on the other hand, is a different game.
Simon debating creativity is exactly the example he needed lol
makes me think in fair use of IP. what do we people value as big enough contribution.
Also I want to say that I think “quantum computing” is crazy interesting.
The fact that it’s used with information from a binary source (bits and bytes) but then alters (translates) the binary 1/0 into a qubit in a way that’s BARELY UNDERSTOOD is wild.
Plus the fact that we humans have absolutely no way of understanding how to translate something wholly generated by a "quantum computer" into anything comprehensible to human minds, because qubits cannot be broken down into binary, the same way 3-dimensional objects cannot become wholly 2-dimensional. Pieces are always going to be destroyed and/or lost.
The editing is 👌 magnificent and the well researched script was a banger. Great show guys!
I kinda wish that ChatGPT had been available when I was in high school.
Me: Yo, bot thingy!
Chat: How may I serve you, milord?
Me: My English literature teacher is making all of us REread 'Tess of the D'urbervilles' because the whole class hated it and our essays were copied almost word-for-word from Cliffs Notes (true story).
What's the book actually about?
Chat: It's far too boring for me to even give you a two-sentence plot summary, your highness.
Shall I simply write a bangin' five paragraph essay for you, instead?
Me: Abso-plagiarism-lutely. And be sure to spice it up with AP level adjectives and whatnot.
Chat: As you wish, your excellency.
The editing was perfection 😂 thanxs ❤
"It makes me want to cut my face off and wear it as a mask."
Oh my goodness. I laughed so hard I started coughing.😂
I will join this channel if Simon starts playing MTG
Apparently Netflix can sell exclusive licenses for their original shows to distributors in other regions, which is why they can sometimes be geo-restricted on Netflix itself.
Chat GPT might be making Simon dumber. At least his writers know how to properly find the answers to questions.
This got me thinking about the nature of creativity. What does the internet think of this: creativity is a unique amalgamation of commonalities. Unique because it is special, different from others; amalgamation because it combines what already exists - we never stand on our own but on the ideas of others; and commonalities because, starting from established norms, one combines them into something which differs from others. Thus, creativity is a unique amalgamation of commonalities. What does the broader internet mind make of this?
The Nicolas Cage break is life 🙌 thank you
Couldn't help snickering as you named a number of famous physicists trying to recall "Faraday Cage"
The Nickolas Cage 😂😂
FWIW, a googol is 10 to the 100th power, which can be written as a 1 with 100 zeros after it. A googol squared (the number Kevin refers to as a metric !%&@$ tonne) would thus be 10 to the 200th power, written as a 1 with 200 zeros after it. Kevin seems impressed by this number, but both of the aforementioned amounts are small potatoes compared with a googolplex, which is 10 to the googolth power, written as a 1 with a googol zeros after it. You're welcome.
In other news, the editing in this episode is FIRE. Congratulations, "Is Stoned"! The "braaa" cracked me up but good.
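If anyone wants to check the googol arithmetic, Python's arbitrary-precision integers make it trivial (a googol is 10 to the 100th power):

```python
# A googol is 10**100: a 1 followed by 100 zeros.
googol = 10 ** 100
print(len(str(googol)))        # 101 characters: the leading 1 plus 100 zeros
print(str(googol).count("0"))  # 100 zeros

# A googol squared is 10**200 -- still nothing next to a googolplex
# (10**googol), whose decimal form would need a googol zeros: far more
# digits than there are atoms in the observable universe to write them on.
assert googol ** 2 == 10 ** 200
```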
Nicholas Cage had me laughing so hard!! you win!
8:20 Honestly Simon, if my microwave works like that, then why's my bowl always hotter than the food, and why are my hot pockets always cold in the middle 🤣🤣
Yeah, from a software engineering perspective chat GPT is like a more advanced stack overflow search. You couldn't really build a piece of software from the ground up completely from chat prompts without actually knowing what you're doing, but if you do already know what you're doing it's definitely a nice shortcut when you get stuck on something or don't want to bother figuring out some ancillary piece of code.
Yes, this is a very good point. It is essentially still just a tool. I think a lot of people take it for granted because it's not a "general AI" with true human-level creativity or reasoning (yet, lol), so they brush it off as just another glorified chatbot. But the fact that it can do things like help out with coding problems or describe complex scientific topics in a "for dummies" manner, despite not being specifically taught to do any of those things, is absolutely mind-blowing.
The Nicolas Cage moment was phenomenal!!
Could you imagine if a microwave heated completely evenly and you put an ice cube in there? It would just completely turn to water in an instant once it heated up enough. That would be pretty neat!
This is actually a brilliant and easily testable proof of how they actually work. I'm gonna use this to teach my kids.
@@EggsOverSleazy Hello, I'd like to nitpick!
I'm not sure what exactly would happen if you could heat up an ice cube completely evenly, but it wouldn't turn into water the instant it heated up enough.
You have to provide it the latent heat of melting. In other words, if you have ice at 0°C, when you heat it up, instead of the temperature rising, it will start turning to water at 0°.
For a 15g ice cube, the energy required to melt it would be 5010J. A microwave oven with a heating power of 800w could provide that energy in about 6.25 seconds.
I'm guessing it would evenly fade from solid to slush to liquid, maybe the crystal boundaries and defects would melt first? I really am guessing here.
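The nitpick's numbers check out. A quick sketch of the latent-heat arithmetic, assuming (optimistically) that all of the oven's rated 800 W actually goes into the ice:

```python
# Energy needed to melt ice already at 0 degrees C, using the latent heat
# of fusion (~334 J/g). Melting alone: the temperature doesn't rise until
# the whole cube has turned to liquid.
LATENT_HEAT_FUSION = 334.0  # joules per gram

def melt_time_seconds(mass_g, power_w):
    energy_j = mass_g * LATENT_HEAT_FUSION
    return energy_j / power_w

print(15 * LATENT_HEAT_FUSION)     # 5010.0 J for a 15 g cube
print(melt_time_seconds(15, 800))  # ~6.26 s at 800 W
```

In a real oven much of that power misses the cube or reflects, so the actual melt time would be longer; the figure is a lower bound.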
7:09 And that's why your editor is a comedy genius
thanks hahaha
All musicians do is take music notes that have already been played, and put them in a different order based on specific expectations. If AI isn't creative, neither are musicians.
I love how the editor now doing tangents too with Nic Cage hahahha
I still have not found a use for ChatGPT in my life. I tried using it when it first came out and it was all the rage, but I could not figure out what to use it for. I do find it hilarious when Simon uses it, though.
I sometimes ask it general info questions, like the speed of the fastest spacecraft or the year when an event happened, but every now and then it still gets things wrong -- so I always wind up double-checking the info anyway :/
As much as I hate it, it is useful and I will use it on occasion to help brainstorm ideas or to ask questions that I know will be a nightmare to attempt to Google because of SEO (and that aren't important enough where I care if ChatGPT lies to me)
I messed with it for a day, just testing its knowledge on a few topics I was interested in. It gave mostly decent information, nothing I didn't already know though. After that, I never used it again. No real point, as it is basically just Google with only one result per search. Meaning that if you get a wrong result, you have nothing to check it against unless you go to some other resource. But at that point, why bother with ChatGPT anyway?
I find it useful when I have "fuzzy" questions where I'm not sure of the precise keywords that would yield useful search engine results. Usually the response gives me the key words I need which I then plug into Google and get results I can better verify as truth or bullshit.
In it's current state, it's just a more advanced Siri/Hey Google. You use it when one of those "I wonder why *blank* happens" questions pops up
Fun trick with a microwave oven: As Simon stated, liquid water molecules are polar and free to rotate, so they absorb microwaves, while molecules locked in ice are not.
Put a glass of ice water into your microwave and zap it. The water will boil while the ice does NOT melt. It looks really freaky.
Yes, put a glass of ice water in that red TOASTER OVEN and I shudder to think what will happen. Probably electrocution.
With regard to ChatGPT, humans and creativity, here is the way I describe it to people. At some point in time, there was nothing written. If ChatGPT were taken back in time, all its learning sources removed, and it had to be re-taught with what existed, it wouldn't be able to learn or do anything, because no learning material exists. However, a human, with words defining the world around them, could (and did) generate the first written material. One can make the argument that modern man only regurgitates what they have already read or watched on TV or in movies, but all that input dilutes an individual's unique experience. Two people could witness the same experience and give two different (accurate) accounts of that experience. ChatGPT can't give a third unique version. It can only reiterate what the two people said, or try to give some clumsy amalgamation of the two accounts. To me, that's the big difference. I feel it's a harder thing for a modern person to see, because so many things have already been experienced and documented that it's easy to get the feeling everything has been experienced. Humans can express new experiences and ideas. ChatGPT can't (at least, not yet).
All I got out of this is that Kevin is humanity and Simon is Human ChatGPT.
Business blaze is the best fact boi. So relatable to see him struggle to remember Faraday.
LOL, No, Simon, magnetrons don't move around a microwave, that would be dangerous, they run on thousands of volts (around 6000). Moving the wires around would cause them to break over time, and could possibly kill you. In ovens without a revolving plate, the waves are directed from the magnetron down a channel and onto a revolving metal paddle. This scatters the waves into the oven cavity, leading to a much more even cook.
Also the Nicholas Cage edit was funny as hell 🤣
Look up Tom Scott's "I Promise this Video about Microwaves is Interesting".
Haven't watched Tom Scott for ages, he makes good videos. Not sure how much he could teach me though, I work on microwaves as part of my job, and have done several courses on them.
@@Bacopa68 With cables yes; with other things theoretically, but I don't think it is practical.
There's a post on tumblr about 500 years ago, somebody would be like, "O Sister Margaret, regale me again with the tale of the Vicar's elopement with the miller's daughter!" And now we're like, "O Brother Simon, regale me again with the tale of Five Times Product Placement Backfired!"
Simon's imitation of coding with no idea of how coding works is impeccable. You gotta know how it all works and how to make it fit your specific model, but it's essentially what he described. And with lots of yelling, swearing, and sometimes throwing markers at the white board
Simon: “It cooks the food completely even.”
*Laughs in microwaved burrito* 8:15
Microwaves absolutely "cook" the center portion first! There are actually videos online demonstrating how much faster the middle of a plate heats up vs the outer edges.
Want to try it for yourself? Take some leftover rice and put it on a plate, spritz it with water, then cover and cook. I assure you that the center of the plate will be dry and gross long before the edges are even reasonably edible.
Now, as far as a single thick item or container of liquid is concerned: while yes, convection will transfer the heat to the rest, the very center of said item will be considerably hotter than the surrounding material. Next time you heat up a drink, before you move it at all after running the microwave, take a small thermometer and measure the temperature in the middle of the container, then again at the outer portion. The difference will be drastic at first and will gradually even out, much faster if you start moving the liquid around.
I actually started slow clapping at 7:07, well played 😂
"Advanced plagiarism" is EXACTLY how LLMs work. Everyone likes to pretend they understand the things they regurgitate, even things that look like discussion on advanced topics.
... but all they are in actuality is a computationally heavy algorithm that imitates things it has parsed. It's just parsed _a lot_ of content, so it gives the illusion of having a breadth of _knowledge,_ when it has a breadth of imitation patterns. Sure, if they make sure it's not lying to your face with stuff that doesn't _actually_ match the input, it can be pretty useful.
In fact, I think it's a wonderfully neat tool. It just has to be used appropriately, and not used to create imitations of all publicly available works. The LLMs need to be created only with things they actually have the license to.
About Chat GPT's coding abilities: I have a story there. At one point I used the SQL generator in an application we use for reporting purposes. Every time I tried to run it, it threw an error. I've asked both Chat GPT and Google Bard to fix the SQL and every single attempt they came up with failed. Not much later I watched a human who is fluent in SQL use the same thing and he got the same error. It took him under five seconds to spot the problem and fix it.
So yeah...what we call AI is at the moment more of a fun gadget than an actual sentient thing...
My personal favourite is the claim that microwaves "change" water molecules, making it unsafe to drink water that has been heated in a microwave. I saw one example of this where a guy took two glasses of water from the tap. One was left on the counter and the other was put in the microwave until it boiled. He took them both out and poured each onto a different plant. The one from the microwave immediately began to wilt and die. This, of course, was "proof" of the toxicity of microwaved water. No mention that he had just poured boiling water onto a plant. And the lemmings ate it up.
The Nicholas Cage edit was brilliant!!!
This video needs to go down in the Brain Blaze legendary editing hall of fame for the Nicolas Cage bit.
The statistics of shuffling a deck of cards is quite mind blowing and a very difficult number for humans to wrap their minds around.
Humans can't actually wrap their brains around many things, like infinity for example.
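The deck-shuffling number is one of the easier mind-blowers to actually compute, even if it's impossible to intuit:

```python
import math

# Number of distinct orderings of a standard 52-card deck: 52 factorial.
orderings = math.factorial(52)
print(len(str(orderings)))  # 68 digits, roughly 8 x 10**67
```

For scale, that dwarfs the estimated number of stars in the observable universe, which is why any well-shuffled deck has almost certainly never existed in that order before.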
Want a light mind blast? Every time two atoms get too close, their outer electrons exchange a photon that keeps them apart. That's a transfer of momentum by photons, so light bumps into atoms and creates movement. Every move you make, you are creating light, and light makes you move.
I love that Kevin is trying to say "ChatGPT isn't..." and instead just inadvertently describes things like "knowledge" and "inspiration".
Big believer in the beer cans and Chinese Room, me thinks