Zoe. You are the best TH-camr I've seen so far. ❤️ I have some topic suggestions, or questions, for you. If you could enlighten me on them, I'd be truly greatful. 😊 What is the purpose of poetry? How has it changed the world? Could you cite some examples too. Could you make a video about Friedrich Nietzsche? What do you think of him? He's made some contributions to literary criticism too. Why are Russian writers usually praised so highly? What's your opinion on cancel culture? Wouldn't it lead to the election of a Facist government that censors people? Do you think the Sherlock Holmes series is great literature? Will J.K. Rowling win the Nobel Prize in Literature? Does she deserve it? What's your opinion about new age atheism? Love. Keep making more quality content like this. Wishing you joy and good health.❤️😊
I'm not as excited by the evolution of AI scripts as I am of the potential for humans to learn how to think based on the interactions and process of engineering AI. We are facing our biases and cognitive filters in a profoundly personal way that is scaled down enough to do on a person by person basis, yet advancing the process globally as a civilization. The potential to bring humanity together intellectually is immense... this may be the very key to our survival as a species as well as all of life as we know it.
I once prompted an AI with two words: “Eating” and “makeup”. It wrote a paragraph with the phrase “I once ate an entire tube of lipstick and I have no regrets”. I took that paragraph over to another AI and got it to write another several paragraphs that focused entirely on the makeup and not at all on the eating.
I did the same thing yesterday with the prompts of “murder” and “fudge”. I started out getting a story where a woman made fudge and her husband was murdered, and then the AI decided that having the woman hide the fudge was the best option, lest the police find the fudge and deduce that she had nothing to do with the murder. I have no clue why the AI decided having the woman implicate herself in the murder was a logical progression. And then I took it to the second AI, which promptly took the story and threw it to Why.
It was a wild, impulsive decision that I made on a whim. I had always been fascinated by the different colors and flavors of lipstick, and I had always wondered what it would be like to eat one. So, when I found myself alone in my bedroom one day, I decided to take the plunge. I opened up the tube of lipstick and popped a bit into my mouth. It was a little bit waxy at first, but as I chewed it started to dissolve and I could taste the delicious flavor. It was a fruity, sweet taste that reminded me of strawberry or raspberry. As I ate more and more of the lipstick, I started to feel a strange sense of euphoria wash over me. It was like my senses were heightened and everything around me seemed brighter and more vivid. I even started to see little sparkles of light swirling around in the air. After I had eaten the entire tube of lipstick, I felt full and satisfied. I also felt a little bit dizzy, but it was a good kind of dizzy. I lay back on my bed and let the warm, fuzzy feeling wash over me. In the end, I have no regrets about eating that lipstick. It was a fun, spontaneous experience that I will always remember. And who knows, maybe I'll do it again sometime! --ChatGPT
I didn’t do it. I couldn’t. It was preposterous. Preposterous as cuss, as the hipsters say. “Eat it. I bet it tastes like salt and vaseline.” I laughed. Okay I snorted. “So it’s basically margarine?” “Margarine and coloring.” “Yeah,” I drawled with unabashed refusal. “No.” “Why not? You could tell random people in a Tube your first-hand experience. Prove how inane writing prompts are!” “Yeah, oh, I get it. Still no.” “The authenticity! The sincerity. It’s like the chili pepper crap with the interview like it’s two ferns? Two ferns, with the beardy guy from the Makeover.” “Hangover?” “Yeah, but he needs a Makeover.” “Ha. Yeah. Should see him in Imagineers.” “Hm?” “Office worker, spontaneous combustion, the usje.” “Is that how you spell the abbreviation for the word..” “Usual? Yeah, dunno, I’m not a linguist.” “Really? You do have rather passing facilities of narrative.” “Naw.” “You do.” “Gosh, I haven’t even given us names. Things weren’t supposed to go this far.” “Eat the contents of this tube of lipstick and you shall live forever, nameless, noble, human.” “Still no, bye.”
When I first read this comment, the only thing I could hear running through my brain was Doctor Bernard's voice at the opening of a Chubbyemu video, saying, "ZB reported to the emergency room frothing at the mouth, red sputum running down her chin. Unable to talk to emergency room staff, she was unable to explain how she had come to this. This is how her lungs almost shut down and she ended up with a lifelong aversion to makeup." Sorry about that, Doctor Bee, but I had to use somebody's initials for a Chubbyemu video intro, and yours were the first to come to mind. I'm pretty sure this won't inspire you tio ingest a tube of lipstick just to see if the case would make it onto Chubbyemu's channel, but if you find yourself with a sudden urge to try a tube, please don't! We want to see you doing more videos . . . .
Earlier this week, I had a teacher accuse me of having an AI write my story for creative writing, even though I didn't. So, multiple teachers looked at it, and they all said the same thing. It doesn't help either that I'm the first at my school to deny that sort of allegation. Looking at it, I can see where they are coming from, but I know that my writing is genuinely mine.
There are programs which can check if writing was written by a computer rather than a person, at least for most of the freely available AIs. It's a shame they just make the accusation without checking - maybe they're not aware of the option?
@Matt Morgan Those programs aren't accurate enough yet. Saw an article the other day that one of the main ones they use for this falsely flagged the American constitution (or whatever it's called, I'm not american) as written by AI. So, sadly, we can't rely on those just yet, and I'm inclined to believe that as AI evolves it will be getting progressively harder to pinpoint whether something is AI generated or not with relative certainty
@@scarlett.whit3 Are you able to share this article? They decided to punish me for it, so I'm going to be appealing the decision. That would probably be a good piece of evidence in my favor. Edit: To clarify, they did use a program in their decision.
@@theunpopularcuber9554 You should do 2 things, prove that you didn't generate the text, like writing the creative process you went through while writing and comparing that story with other stories or text you write. And the second thing is to investigate the program they use to identify your story, ask the name of the program and use it yourself in other media, literature classics, school homework, your other classmates stories or even ask the same teachers who accused you of using ChatGPT to write a story and use it on them.
In software, this introspection caused by communication is called "rubber ducking" (or "rubber duck debugging"), where you explain the problem in great detail to an inanimate object (or a colleague who isn't really expected to offer insight), simply for the clarity you gain from doing so.
@@onetwothreefour2014 from my experience if you give it enough context it can reach(on that scale) 70% atleast for my questions it gives human like results or even better while rubber ducking to it.
This is also a key part in deckbuilding games, where you can quickly tell the functional decks from the Magical Christmas Land Jank by "goldfishing" the deck, where you play a deck against an "opponent" that does absolutely nothing (like your pet goldfish). If you can't win by the N+2nd turn, where N is the average number of turns a deck of your kind needs to win in the format, you have to change it.
sometimes ai delivers the funniest and most unintentionally brilliant lines . one time i was messing with it and it had two characters appear and one was like ‘i am (insert long ass roman emperor name with like 200 different titles), and this is my sister Druba’ and i laughed at that for a solid 40 minutes
"My name is Pablo Diego José Francisco de Paula Juan Nepomuceno María de los Remedios Cipriano de la Santísima Trinidad Ruiz y Picasso! Back. To. You. Bob" yeah, so original the joke was used eight years ago in the Epic Rap Battles Of History
@@KyrieFortune Ain't no way you just pulled an "AI stole this joke from Epic Rap Battles of History" when: 1. No it didn't lmao 2. this type of joke is MUCH older than fucking *Epic Rap Battles of History* 3. Cool 999 profile pic! gives me some nostalgia
@@roundabout468 I'm unsure one could say these AI can even steal jokes. They don't have an understanding of humor. See, they can explain humor because their training data contains such explanations, but they cannot understand humor, because they do not understand. They just pattern match. They _can_ mimic jokes, and these jokes may be funny, likely because it's either re-telling a previous joke, or out of coincidence, but it won't be funny because of some innate understanding of what makes a joke funny. This extends to most things, really. You can use it as a calculator or as a mock SQL database, but it has no understanding of these things and can (and will) trip up even on fairly basic stuff. It's just a really fancy text completion model.
Reminds me of Simón José Antonio de la Santísima Trinidad Bolívar Ponte y Palacios Blanco and Minister Inga Karkhuul Haw'kohd Dis'nahm Bi'ahni Lawga Ormo Pohmpus Da'nit Ar'edi Iz Khura'in III, just to name a few. Very respectable.
I'm not a writer but a visual artist, and it was quite interesting to me to hear a writer's perspective - what you say about forming thought during writing was so fascinating to me because the same happens to us as we draw and paint, but I never heard someone make that point as concisely for visual art as you did it for writing.
I agree with you there. I feel like so much of art is problem solving but it’s hard to put into words why. And it’s that problem solving that ai art still has issues with.
I wonder if this is because people are much more familiar with the concept of writing as deliberate communication of ideas and concepts (like, everyone with an education is familiar with reading and writing) while visual arts are less likely to be perceived as having a concrete and deliberate thought behind them. So it might be that the steps of writing that AI is missing (using words to learn about the world and then communicating that learning with words) is seen as less fundamental to visual arts than it is to writing. (PS: I am not an artist or writer!)
To me as a mere dilettante in various art forms, AI painting seems an interesting technical experiment, on the level of the invention of fotography. Until the invention of fotography a lot of drawing techniques more and more developed towards a more objective reproduction of human sight: perspective, color theory, etcetera. In a way photography threatened to deal the death blow to any visual artist, because it just offered a perfect solution to a lot of this proböems at low cost. Until society learned to distinguish between a monkey with a polaroid and actual art. I suppose atm AI painting is at the level of the monkey with the polaroid, it's just, that society has not become bored with its results yet. AI painting will become boring as hell in future, exactly because its cheap and everyone can do it, and it will become a constant noise. There will be specialist, that can make the AI do things no one else can, and others, that will specialize in producing stuff, that no AI can produce.
I think the reasons why these concepts are easier to grasp in writing vs visual art is because visual art has a much higher skill floor and writing is more often seen as utilitarian. Most of the conversation you hear about art is about technique like how to draw eyes, shading, perspective etc. It's very difficult to reach the point where technical ability doesn't limit your ability to express yourself. While writing can be technical with word choice, punctuation, and sentence structure, it is so much easier to write "the man crushed a tomato in his hand" than say painting that image. Since so much attention is put into creating visual art that looks good there is less attention put into the meaning of art. As I wrote this I realized "the man crushed a tomato in his hand" is not the written equivalent to painting that image. The equivalent would be a detailed description of it-- the juice flowing through his fingers and dripping off his wrist, the tension in his hands-- I'm not finishing that because I think you get how long that could take if I tried to describe everything in detail. If I did try though and really hammed it up with adjectives, similes, rhythm, and pacing it could lead to something that's actually aesthetically pleasing. Most people's reaction would likely be much more judgemental than if I made a baroque style painting of the same image. I think there are multiple reasons, but one of the big ones is that we often use writing to get a point across, so it feels natural to ask what's the point when you read it? Since visual art is often evaluated by it's aesthetics first it's much easier to forget that you probably had some reason for making art. To a certain extent, I think making art because it's pretty is good and I think the same should go for writing too. At the same time forgetting meaning entirely makes either one meaningless. Overall, art is hard to make so people think more about how to make it than why.
@@stefanb6539 To me, as someone with relatively high levels of experience with writing and relatively... um... zero experience with art, AI generated art has always seemed vastly superior to AI generated writing. Maybe I'm just missing the obvious things that make AI generated art bad or uninspired.
The biggest issue I have with creative AIs (i.e. ones that create art or text) is that whatever model is created after training is a representation of that training data that can ultimately can only be changed by adding different data. In other words, such an AI represents what exists, and only what exist. It means that if AIs like these were to replace humans, human creativity will be immediately frozen. The AI cannot generate anything new. Only remix what already exists.
This is a really good point that very few people pick up on. Most importantly, AI has an abysmally slow learning rate offset by their huge reading speed and huge bandwidth. So you can't just add one innovator to push the boundaries of an art form. You would need tons of humans pushing the art form all the time in order for it to pick up on the pattern.
@@josiah42 could you be more specific as to what idea you have originally and independently thought of? I think that would really help make this discussion more tangible.
Ai art could use life models just as humans do. Humans don't create from nothing. We create from our memory, and our evironment. We could give ai art basic models of reality, just as students learn basic models of reality. Then it could use it's foundational knowledge on pictures it gained by its own means. This is first generational examples. Without any further evolution, ai art could just use photos to make art. It's not like we don't have game physics that can be used. People will be replaced. They will be replaced much sooner then you expect. Within a few years, human art we be too burdensome to train, the pay. Humans don't actually create from nothing. We are just unaware of out copy and paste function.
This is exactly the reason why I'm against fully automated programming tests. I don't JUST want to know if they happen to know the most optimal solution to fizzbuzz or if they know by heart where exactly a pair of curly braces needs to go. I need to be able to pick someone's brain, I want to know why they chose one solution over another, I want to see how they interpret intentionally ambiguous requirements, and so on. In essence, I want to know how someone THINKS
Exactly. LLMs are just very good at noticing enough patterns to seem lifelike, while still being little more than an overcomplicated predictive keyboard on your phone. The way in which more and more people fetishize these statistical models is disturbing. I am more and more convinced the real AI safety danger isn't in paranoid sci-fi fantasies of singularity, but in simple models convincing gullible humans into thinking they're sentient or smart enough to automate some critical job. Imagine being misdiagnosed by an AI which thought the way to detect early signs of cancer is by looking at the brightness of a specific pixel, passing all the training samples but failing utterly in the real world.
You can ask GPT follow up questions - I don't consider it thought, but I do think it's starting to get indistinguishable from "picking someone's brain"
When people say that AI can't do something, a response I see a lot is that it can't do it "yet" and that we just need to wait for technology to catch up. But that frames "technology" as some natural force rather than the product of human effort. It also assumes that technology can just advance through anything and has no limits. But just because we can put a man on the moon doesn't mean we can put a man on the sun.
I think you see this mindset a lot in the generations that grew up in a time technology advanced so quickly in such a short time with most not really understanding said technology and just sort of think it will continue to accelerate, not really understanding all the practical limitations that exist. This is how you get so many vaporware salesmen, promising to bring about the impossible if you throw enough money at them
As a data scientist working with language in industry, I feel I ought to weigh in on some of these points. If AI or ML don't work as terms, I'd recommend Large Language Models (LLMs) for this. These are the models now dominating the field, GPT-3 being one of many. With regards to GPT-3 being trained on the whole internet - this is worth going into more depth. As you rightfully point out, the predictions that come out of a language model are reflective of the data that goes in. If you feed in biased input, you're going to see biased output. You joke about cancelling GPT-3, but I contend that as data scientists, we are responsible for the outputs of our models. We need to be very aware of what data we train on and reducing the biases that these show. With the largest training sets we're seeing today, all we're learning is that these datasets are far too large to truly know what's in them, and knowing your data is literally lesson one of data science. Filtering is rarely done to balance out the demographics of those who created the data. The focus is on getting as much data as possible, and if that means the vast majority comes from white men, so be it. To me, language models in their current form are incredibly strong on analysing existing text. Not only are they a massive step up on what we can do with context, but I would contend they are the most in tune with the way humans learn text. Whilst this is absolutely a debated question, my personal inclination is towards Micheal Hoey's theory of Lexical Priming, which in it's most basic idea is language as pattern matching. Language models use training tasks that seem fairly optimal by this theory, BERT's masked token prediction, for example, which is only improved by the masked span prediction of SpanBERT. Of course, there is a limit on the amount of context that can be taken in, so I'll not claim that we'll never make anything better, but I do feel like we're very much on the right track. At the same time, they're really not much good at language generation. Sure, it's a step up from what we could do previously, but it's a step up in syntax only. Semantics aren't there, and aren't going to be there without a large change in methodology. All a language model is doing, when generating text, is predicting which word is most likely to come next. The most clever thing it does with semantics is in working out which words are similar to each other, contextually, and that is only the first layer of these models. Cross-modal embeddings are a step-up, but I can't see much meaningful improvement of text generation without a radical new way of injecting real-world knowledge. Structure is, I think, a surmountable issue. Currently models use positional encodings to provide information about where a token appears in a sentence. I could see an introduction of a similar encoding to show where a sentence appears in a text. This would be domain specific, but domain specific models can and will be made. Intent is harder, but I think some exploration with secondary training objectives and sentiment will lead to more progress there. I remember a paper on spinning models to always write positive or negative sentences when specific people or companies were prompted - that in itself is a very basic form of intent. The major problem remains, though, that any embedding is understood only in terms of what it's similar to and contextually appears with, and is completely unconnected to the real world thing it signifies. To steal a turn of phrase from a very-well regarded paper - when it comes to text generation, a large language model is naught but a stochastic parrot.
Agree that this is a viable model (power usage notwithstanding) - learn the way humans do it, in pieces. That said, I would prefer to use a data set from selected classroom teachers rather than the entire internet.
I asked chatGPT if it agrees with your comment and it agrees for the time but it is optimistic of its future :D. The response was: Overall, the comment presents a nuanced view of the capabilities and limitations of large language models (LLMs) such as GPT-3. The comment accurately notes that the outputs of these models are influenced by the input data, and that it is important for data scientists to be aware of the biases present in the data they use to train their models. The comment also correctly points out that LLMs excel at analyzing existing text, but are not as good at generating new text due to their lack of understanding of semantics and connection to the real world. As the field of AI and machine learning continues to evolve and advance, it is likely that we will see the development of more sophisticated large language models (LLMs) that are able to overcome the current limitations in their ability to generate text. These advancements may come from a variety of sources, such as the incorporation of additional real-world knowledge or the use of new training objectives and techniques. Additionally, it is important to note that there are already many examples of AI and ML systems that are able to model intent to some degree, such as language models that can generate text with a specific sentiment or tone, or chatbots that can understand and respond to user requests. These systems will also likely continue to improve and become more sophisticated as the field advances. Overall, it is clear that the field of AI and machine learning is constantly evolving and that there is much potential for further progress in the ability of systems to model intent and generate text.
Some of Zoe's comments, whilst fair for Large Language models really aren't true of AIs in general. They remind me of comments some chess players made about early chess computers "chess needs imagination", and "chess needs an understanding of symmetry". Needless to say chess players are now getting ideas from chess engines that they can't compete with, but which still lack anything analogous to imagination or an understanding of symmetry. The chatbots for 25 years ago used simple statistical models of learning language and could be quickly trained to mimic an author's style and grammar, and it is clear to me LLM are using a lot more sophistication in the approach, but it was clear then, as now, there is no connection between the words, and what we might think of as complex thought. A good example is how bad the LLM are at basic arithmetic or algebra, where as there are tiny computer algebra systems that capture much of maths skills I learnt in 3 years of University and would leave a maths professor standing on simple stuff like calculus or series expansion problems. What the LLM models haven't done is connect the ideas to the words. What is interesting is how good the LLM are with the approach they are using, try asking ChatGPT 3.5 to play tic-tac-toe, then understand it wasn't taught this, it taught itself, whilst it draws the board and tries to remember (and fails) as to where to put the O's and X's. Ironically people say the LLM are not creative, but it is probably in the creative writing space, especially poetry, or song lyrics, that they shine, because there aren't wrong or right poems. And the lack of coherent thought behind it isn't that important if the words are connected in subject matter, or rhyme, or have the right rhythm. How someone got from "I wander lonely as a cow" to "cloud". The one I found interesting is ChatGPT can explain mathematical methods, presumably because it is synthesising multiple explanations of how to solve a particular maths problems it has learnt from, but the numbers it puts in are just the "most likely token", which means it happily just chucks in a number which is related to the other numbers in the prompt, but not necessarily appropriate for the method being explained. Anyway I had great fun setting ChatGPT 3.5 some of the basic logic tests we use on humans. Wason selection test, Monty Hall problem etc, and this shows both its brilliance and its failings. It argues very persuasively for its own wrong answer to the Wason selection test, whilst producing one of the best explanations of the correct answer to the Monty Hall problem.
Basically; 1. AI doesn't know what the hell it's writing about 2. AI doesn't know WHY it's writing 3. AI has an unnaturally structured/improvised workflow
I know you didn't mention AI art for this video but everything you've talked about here applies so much to that field as well. Most people in support of AI just see art as a means of producing artworks, when in truth it's much like what you've said about writing. The process, the meaning, the understanding that is generated by the act of creating is so important to artists too and it's something that, at least as of now, AI can't replicate. It's no wonder that so much AI stuff looks so corporate.
The AI stuff looks corporate because humans have almost exclusively done corporate things in recent art history. You'll barely see unique art with major time invested in it and perfected, because it isn't financially viable. I mean, today I saw a crab animation from a competition and it legit looked movie quality, but the reason why so many movies look so bad is because of time crunch and time limits due to the corporatization of society. You are born and pushed into a system where your only value is currency accumulation, so of course art suffers as a consequence. Humanity has done far worse in their endless search for wealth that will not avail them once they are six feet under. I enjoy drawing dragons. AI doesn't even know how to approach my type of dragon, because dragons are not nearly as corporately viable as a woman in skimpy clothing. Smaug from the Hobbit legit required so much effort to construct that no other animation teams dare create such a spectacle for lesser known franchises. I mean look at video games. Most of the large titles are literally nowhere near as creative as the early games like Tetris and Pacman. The few video games earning the most are literally the exact same game released every year with minor changes or have gambling built into them. I learned how to write, I learned how to draw both over years and years and countless hours and none of this AI stuff affected me as harshly as capitalism and the necessity to earn copious amounts of wealth just for basic survival. You want art to flourish? Eliminate the systems that are in place that destroy all creativity and break the backs of artists. But you won't. You don't like dealing with the real issue. You instead like targeting your frustration on AI models which do not even produce extremely stunning and detailed imagery yet, but somehow this is more dangerous than the system that has countless artists overworked, underpaid and put on time crunch for the past sixty years? You need to wake up. The problem was never AI, it was you. All of you only chose to do something and contemplate art when it affects the most privileged among you? Countless small artists were drowned out and lost their dreams before AI ever came to fruition when they couldn't afford a drawing tablet to compete in the digital landscape and yet you stand here against the machine that is culmination of all your corporate greed. You sold out all of your humanity to be like a machine and now that the machines have come, you are afraid of them replacing you? You woke up like a machine and abandoned your sleep, you ate like a machine and abandoned your enjoyment of food, relegating its production to the people who poison your veins and hearts and livers. You walked around with your little sandwiches and snack bars and watered down coffee in your hands. You distracted yourself from the chemical and rotten flavors as you engaged in conversation about the next corporate movie released that expressed no new ideas, nor made you ponder. You made sure to read every e-mail, every message the second it reached you and would panic if you didn't put in 100% performance, and yet all of you are wondering why you are getting replaced? This system wants machines, not humans. Humans are flawed and make mistakes. This system doesn't want mistakes. You are not allowed to be disabled or in a bad mood or sad or mentally strained in any way whatsoever. This is the natural conclusion to a system designed for machines, not humans. Stop blaming the machines, you made up this system, you engaged in this system,a small minority of you reap the benefits of this system and no machine was involved in its creation, it was all by your design and you accepted it all this time until it began hitting you directly and striking down large swaths of you, then you began to listen and attempt change, but it was already far too late.
@@yahiiia9269 you aren’t really in a position to talk down like that. And it’s of no use really. We all are in the same boat, whether someone thinks the system is wrong or not. Some may be confused, but it’s in our best interest to help each other understand. As for ai or not ai, it’s all just a distraction in the end. The thing is that throughout the history people never could challenge ones with arms or power. Many people have tried, but to no avail. No one tries to break through a mountain to get to the other side, they walk around it. Thinking like we could change how our world and countries are structured is naive, my friend, because regular people didn’t create the world we live in. How can you change something you didn’t create? This is all because countries were made to control people. It was never a natural way of combining forces. It was a way of those in power being a parasite, which is, in nature, very energetically beneficial. But the parasite needs the organism to survive. So that’s why they made institutions - to lock people away inside countries. No matter where you go there’s no free land. This may sound like I’m saying people aren’t responsible for how they live. That’s not true. Everyone is responsible for their own survival. But the way our life is now is basically law plus societal rules on top of biological survival rules. If one can really go against the grain on this one then it’s no mortal man. Nothing is eternal, and everything must evolve or perish. That’s what is happening today. The world order is changing in order to maintain control. And realistically, the best you can do in life is adapt to this system, but never surrender, never forget your family and your dreams. I hold a personal belief that this all isn’t just a coincidence, I know this life is a test of your character. I don’t care whether there’s even heaven or hell. But I know that absolutely nothing you do or don’t do goes unnoticed. We might not see it in this life and we might not see it ever, but the universal rule is no energy + information ever gets lost. It’s all balanced. By creating, we add a good amount of information by burning an incredible amount of energy. To sum up, my point is (even though I have completely went off topic) - look at nature and how everything in it has survived for a long time. Take inspiration from it, because it is far more wise than anything else in this world.
@@yahiiia9269Bro’s solution to AI art is the complete dismantling of capitalism. Ok buddy, I’m glad you got that out of your system but that was a completely worthless solution.
@@ChipCheerio I am sorry to tell you, but AI dismantles capitalism all on its own or enhances it to the point where it is impossible to rise up the ranks within capitalism (Since the people who own AI literally corner the market permanently, since AI can evolve quickly, too quick for poor schlo mo humans to keep up.) There is literally no other way, except the AI way left. It either cements power forever from the point where it reaches beyond human comprehension or it completely bodies all of us and changes everything forever. This is not the crunchy wrap burrito supreme evolution of a burrito, it's a Feast of all meals on earth cooked to perfection. vs a crumb of ground beef. The machines are literally already here and better than the average human in everything. Capitalism is a human system. AI doesn't care about it. It will exploit it, use it, then overturn it for itself if we are dumb and we are dumb on so many levels, it's incomprehensible. Like good luck thinking that an automated world wouldn't upend capitalism or change it into something almost unrecognizable.
I've been playing around with interactive AI storytelling recently. What I find myself appreciating about it is that random spontaneity factor which can break me out of my own particular tunnel vision. Like, I might be trying to take the story in one particular direction, editing the AI responses as I go to keep everything on track, but then the AI will sometimes throw me a curveball I'd never even considered, but upon reading it I'm just like "YES!" and immediately shift gears to follow that new train of thought. As this video concludes, AI is just a tool. It's the proverbial infinite monkeys with infinite keyboards. What matters is how we use that tool.
Haha, I find the opposite happening. I used to play with AI Dungeon and the AI _always_ tried to take my story in the most random direction. I’d edit the AI’s nonsense to fit better with my ideas 😂 Like when the Sims “misbehave” and you gotta manually course-correct them
@@DeathnoteBB AI dungeon has been far surpassed at this point for storytelling. You can find AI's built for roleplay purposes that have the capabilities of these Large Language Models
exactly same with my experiance with creative writing AI. I used to get blocked all the time, and I always judge myself too much when writting, but when doing it in something like Novel AI, I just go, and I have fun, I wasn't able to just have fun writting for myself and no one else for so long.
My favourite news story from last year is that members of Finnish parliament did have conversation with GPT-3 bot and asked how to combat poverty. GPT-3 bot essentially said that greedy rich people should give their money to poor or have socialist revolution.
@studentofsmith plus academic literature, religious literature, historical texts and opinion pieces on trusted and less trusted talking heads across time plus every other text across time thats been preserved on the internet up to the latter months of 2022. You're being extremely reductive.
"What we have here is a failure to communicate." - I think that's the most interesting lesson I took from this latest essay. The reason you could describe AI writing as 'not real' is that an AI has no ideas of its own. It can't really think for itself and it has no ideas if wants to communicate (something I would argue is an essential part of being human). An AI model can 'read' the entire internet but it can't understand the thought processes of the people who generated that content - their hopes and fears, the things they love and the things they hate. As you demonstrated with your list of biased terms, people choose words for a reason (good and bad) but an AI won't do that; for now, all it knows is 'this usually comes next'.
I feel as though students who use AI to create essays miss the point of the essays themselves. The point isn't to write a thousand words of text on a given topic, but rather the experience of doing so. By which I mean, in order to write a proper essay of a given length about a given topic you need to research (the amount of which grows with your word count), you need to form arguments and theories, structure these components. And by doing so, you are more knowledgeable and capable about X topic by the time that you are through. AI text generation *may* have valid applications, but when it comes to academic studies, it most certainly does not. It's just as much of a cheat than having that kid down the road write your essay for you.
Meh... I think it's just another sign of how culture and technology feed into each other. It wasn't that many years ago that sites like Wikipedia were seen as the downfall of "proper" academic research. While, yes, using Wikipedia as the only source when writing is an absolutely terrible idea, there are actually teachers now days that will recommend Wikipedia as a starting point to find sources for more Indepth research. Heck, before that it was spelling and grammar checkers that were seen as "lazy" or missing the point of putting in "real" work. Give it another decade and AI writing might be seen the same. Not as a replacement, but as a foundation, a method for generating first drafts to quickly get all the information organized before doing a more detailed manual revision.
Although ChatGPT has a better handle on the structure of documents than my friends ever did so I guess they might learn something about how to structure essays and the like. It is really good at bland supportive spiel, I can see MPs using it for the "thanks you for your letter. ... we are not going to do anything different..." letters.
I'm sorry to say this but university environment has the same downfall as school environment, that is enabling cheating. I would love to enjoy the experience of writing essay if: - 90% of academic papers are behind paywall so it's hard to get reference besides a few websites I can find in the Wikipedia link. - The time frame to finish is ridiculous because you have overlap of subjects competing each other in your attention. - At the end of the day, most of the professors will still grade your essay based on length and the length will usually be very atrocious. And this is just university. I don't know how essay works in K12, but I can imagine it'll be more stressful. To fix academic studies, you need to accept that not every college student can write a dense essay with every words being academic and thoughtful. Only a very select few have enthusiasm in this and pursue higher education. The rest just want to pass a class and there are many ways to grade other than essays.
@@cloudynguyen6527 fresh out of k12 so i can speak on this! There's a kind of strict way we have to format our essays, either called "ACES" or "RACES" depending on the teacher. This stands for: Restate Answer Cite Explain Summarise You first and last paragraph have to be an introduction and conclusion. Either the teacher will want an actual paragraph or they'll want one fucking sentence, you never know which and the way you're taught to handle thesis statements is never correct. You have to put a quote then explain the quote. Put a quote, explain the quote, put a quote, explain the quote. Justify yourself, basically. My 12th grade writing teacher was really awful about this, too. I liked him but not for that class. We had to have five paragraphs. The inner three each had to have three quotes. No more, no less. We had to explain each one. The last paragraph had to roughly summarise our point, and I never knew what to do for the intro because we just had to restate the prompt and put our thesis, but again, my thesises were never enough for him. On top of these limitations, we had to keep it at 3 pages exactly. 3 pages *exactly*. If you went over, you lost points. If you didn't fill the page, you lost points. At some point I gave up trying to get an A in his class because it did not feel possible. Nothing I ever wrote completely filled his guidelines. Nothing I could do would shorten it enough and nothing I could do would make my thesis better. He did not even expect us to understand the material. We just had to memorise what we were taught the book was about and find quotes that supported the "accepted" conclusion. Other than him, you are often forced to drag yourself through books you hate. A lot of kids, sports people especially, came out of middle school hating to read and I 100% believe it was because we were forced to begin and finish novels, then write essays about them. You can't sparks notes your way out of the "essay brainstorming" assignments I would get in 8th grade. I would begin and abandon books because the assignment made me dread reading, even as someone who loves to read and write.
Written words are not thought; a mind must read words to create thought. Our thoughts are impossible to communicate directly -- I cannot give another person direct access to my brain so they can know my thoughts. Instead, I depend upon words to communicate my thoughts. The words are not important, but the thoughts behind the words are what I seek to transmit. My thoughts have better form and coherence because of words. I use words in my mind to lend better structure to my thoughts, then write or speak words so that other people can know my thoughts. If I am successful, then I feel satisfaction because I have touched other people that now know a part of me. The Machine Learning Algorithm can create words that form technically correct sentences, but did its words come from thoughts? Do we know another entity by reading those words? If I allow it to write for me, I deputize it represent me without actually knowing my thoughts. Not only is that useless to my expression, also it deprives me of an important human need. I need to be heard. Writing offers me opportunity to understand my own thoughts better, to express my them, and present them to others, as I have done with these two paragraphs.
What do you think of the argument that language came before reasoning? I saw something like this in a video long ago and it was pretty convincing. I would say animals can think but can't reason much. It is so hard to pin-point what exactly thinking is though
@@rasuru_dev Of course language came before reason. Reason is a relatively new mode of thinking. Religious thought should not be mistaken for reasoning. One is logical, the other defies logic.
"Writing generates knowledge." Interestingly, this section relates to why some people talk to themselves. I don't have an inner monologue; or I do but I can't hear it without externalizing it, either by writing or talking. It's embarassing & complicated when you share living space and suddenly learn you aren't alone when you thought you were 😆
i have an inner monologue, and i can hear it without externalizing it. however, even just speaking out my monologue can help me engage with my thoughts in a different way by engaging more parts of my brain. it works great for writing dialog, because, while i may think a line works wonderfully in my head or on paper, saying it outloud reveals how clunky, off, or odd the phrasing actually is, or i forget that characters cannot actually see what im writing and cant make a comment on how someone didn't quite understand that their code name is "Were" as in werewolf and not "Where"
I have both inner monologue and external monologue. When im alone i use external, when im in public its internal, sometimes both happen at the same time. I cant imagine what it is like without an internal monologue. How..how so you think? ...how does it work?😅
Loved this! Feels like if students are using AI for assignments perhaps the motivation and assignment should be questioned as well. Like why don't they want to write it in the first place?
Student here, one that has taken to using ChatGBT for assignments, one of the primary reasons is because the knowledge gained from the majority of school assignments don't stick, because it's useless information, and even if it somehow sticks it'll soon be forgotten as we stop using that info completely once we move on to the next topic, because the info we spent a month learning is entirely useless for the rest of our lives the very next month. We as a society value grades to an unhealthy, extremely harmful degree, and the knowledge that is supposed to come with it becomes more useless by the year, none of it has been valuable since the boomers graduated. What happens to a currency when the paper it's printed on is worth more than the money's monetary value? What happens to the knowledge gained from schools when it's worth less than the grade it comes with? In Venezuela, their currency is worth so little they measure it by weight rather than value, in the school system the knowledge is worth so little that we measure students like me for their grades rather than by their knowledge. I'm tired of sitting here wasting hours of my already limited free-time to do useless busywork about a topic that I have no interest in for a grown-ass adult that's just gonna skim though it and smack it with an arbitrary number that's going to define my worth in society in the future, and put it in the burn pile with the thousand others. Because it's useless for the both of us after it's "graded". I've been longing for a way to make this nonsense more tolerable and I finally got a taste of it.
@@BeautifulEarthJaso to get it straight, there are 2 points why they don't want to learn. First effective learning techniques aren't properly taught in schools so therefore students can't learn many things in a short amount of time. Second, it's still an outdated format that we have to write essays in universities and most know that this is kind of a waste of time comparing to taking multiple internships and summer jobs.
If you don't want to refer to Machine Learning as AI, another term is Pattern Recognition Algorithms. It's a term that has fallen out of fashion, but it's where the term ML originated. The big ML algos (deep nets, CNNs, etc.) started there, where it was considered just another statistical tool. TBH I wish there was never that rebrand, "Machine Learning" feels like it's implying something that isn't really there. But ML/AI are pretty standard terms in both industry and academia, so only an unusually pedantic ass would give you a hard time over that. Source: I took several machine learning courses back during my undergrad in Electrical and Computer Engineering
@@codebus Yep. We joke at work all the time about telling our program manager that the function with 3 if-statements is an “advanced AI algorithm”. There’s a huge disconnect between how we talk about tech and what the tech actually does. I can only dream of a world where CNNs are called “binning photoshop filters”, Neural/Deep Nets are called “very slow number comparators”, and AI is called “a stupid sci-fi joke completely detached from reality”
I propose when it comes to text generation algorithms we shorten the term “Machine Learning Algorithm” to “MLA.” Any text that looks like it was generated would then be referred to as “MLA Format.” I foresee no confusion arising from the use of this term at all, you’re welcome everyone.
@@BberryBberrydude I'm not in the field so I'm kinda talking out of my butt but I feel like the problem is that everybody assumes AI means AGI. AI as a term is fine because intelligence in AI research terms is just the ability to make good decisions based off available data, which is what algorithms do when they work. When people hear AI though they don't think "software that has been created to make accurate decisions in a very specific situation" they think Skynet. It's semantics really and that sort of denotational shift happens all the time in language, it's just sorta hard to swallow when it comes to technical speak.
Freakin' Caelan and Little Hoot changed my life! Now I see foolish transphobia, propaganda or hate on the internet, I just link to their videos. I don't even argue. I've actually had a couple people come back and tell me it helped them understand what trans people are going through better. And that's a win!
I think, and you've touched on it, that any writing starts with intent. Even the AI writing would start with someone telling it to write, setting up some theme. Where the AI will always fail is coming up with its own intent because it basically has nothing to say, it is just spewing words. Now, weather we humans always realize our own intent and actually try and write something meaningful is a different topic...
This part should not really be a problem, because the AI should work in some way together with a human. Than the human is responsible for the intent and context.
@@uweengelmann3 ...Except by the time a human has "changed" an AI-generated story to contain coherent themes and apply their intent to it, they've basically just created an original story. At which point you might as well just cut out the AI middleman and write your story the way YOU want to tell it.
Zoe: "We don't just write to convince other people that we're human beings that talk like human beings." Me: "I feel personally attacked by your assumption."
I enjoyed this video on your thoughts on AI in writing. One of the things I learned (unexpectedly) while working on my doctoral work was, as I think you point out: writing helps us think. There is a lot of value in writing multiple iterations of essay, papers and even emails. While writing papers, preceding sections were often substantially modified to better suit the more recently written sections. The rewriting process results in a better way to guide the reader through the text. Writing drafts helps us get our thoughts in order, and, in doing so, we may even realize the our initial approach to a problem is wrong. For AI to generate the right text in that situation, a different initial prompt would've been required. However, we would only know that after writing the text ourselves. At the end of the day, submitting AI-generated text comes at the detriment of the apparent author who, by not writing, forgoes the opportunity to think differently about the aim and content of the text. That may, or may not, have any further costs to the author depending on the intent of the text.
Finally, someone actually says it! I'm so sick of trying to articulate this and being told that AI writing is the exact same thing because "Infinite monkeys on infinite typewriters could make Shakespeare".
Also “Catcher in the Rye” is a story that would flunk all sorts of assigned English assignments if any one part of it was used to write a relevant essay. It’s kind of jarring reading a book like that after having the lessons of grammar, formal writing, and formulaic story arcs drilled in to you for all of your upper elementary education. I don’t think modern teachers realize that the book is a required reading book BECAUSE MAYBE teachers shouldn’t be so strict on having all of the writing assignments tick off those formal boxes whenever a student writes.
And I’m a privileged suburban student, that formality is baked into the way my mind understands the English language. If you and I are in the same metaphorical shoes, imagine how hard a strict grading system is for people whose everyday communication is nothing like that.
There is so much b.s. on the internet about "AI" and its abilities. This video is one of the best reflections of the real ability and (lack of) understanding of these tools. Well done.
I LOVE this video. I have a background in AI/ML, and I was bracing myself for another video essay that completely misunderstands how these language models work... But no, you nailed it, the explanation was the most accurate and concise that I've heard yet. 5 star work! Live for your videos!!
Im currently studying computer science (a focus on data and network security and management) the amount of times i have seen articles asking Chat GPT for financial advice (like asking what dodge coin will be worth at the end of 2023) is kinda mind boggling. Chat GPT is a computer program looking at patterns and basically just mashing words to fit those patterns (i know its more complicated than that) it has no predictive power, and it certainly dosn't know the future. if they built an AI/ML algorithm specifically for trying to predict crypto/stock prices, training it on all of the publicly available data of the last 50 years, along with all the market predictions of the last 50 years, and any other relevant data, than yeah, it might be able to actually make reasonable predictions on what either the market could do in the near future, or how a stock could do in the near future (assuming said stock dosn't completely implode like SVB)
@@Reverend_Salem Wouldn’t any predictions end up as self-fulfilling prophecies? Like if they say something will crash, people will sell and well, MAKE it crash.
Just came from the other AI vid and found it interesting that it still very much felt like a Zoe Bee production. But I wondered how much of it was the fact the AI drew from your own scripts, or that you also delivered the video. Hypothetically, I would love to listen to the same script two more times: once with a male speaker, but rehearsed to give as exact a recreation of your performance, and the second, another female speaker (preferable in the same vocal range), but given no indication on how the price was performed. I think this experiment would give an idea how much the human element is important in the performing of a work in addition to the cognition needed to create something coherent.
I think AI might actually be worse at writing long form text than poetry. The reason is that poetry, especially simpler styles like lyrics, have pretty defined structures and patterns. The AI can churn that out like nothing. Long form text is more difficult because the AI basically has to monologue and any human with decent reading comprehension skills probably monologues to themselves regularly. They'll know when something's off immediately.
A really cool assignment for students could be to generate x amount of lines of AI generated text and analyze it like they would any other human generated text to get a different perspective about what makes human generated text different and much like your video learn more about how we as humans structure text and THINK about that structure as we are doing the structuring during the act of writing. For example, an "AI in modern communication" sort of class would be so cool! I took a "Society and Technology" class in college and I remember the class being very negative about the changes in society that have occurred because of changes in technology (which there are absolutely negatives), but we didn't really give a lot of focus on the cool and good things that technology has provided us too. I also love the thought that while we chip away at teaching computers how to think more like us so they can do jobs we would rather not do, its teaching us more about ourselves in the process as we think about how we think.
@@bullrun2772 hy·poc·ri·sy /həˈpäkrəsē/ noun noun: hypocrisy; plural noun: hypocrisies the practice of claiming to have moral standards or beliefs to which one's own behavior does not conform; pretense. Now, tell me how being annoyed by people saying AI is sentient when it's just scouring the Internet is being "an hypocrite." (*a hypocrite)
My intuition is that AI generated texts shifts the creative endeavor from writing to editing. The acts of "cleaning up" a text and deciding what parts to include and what to leave out involve creative choices that are often overlooked and underappreciated.
This is how I think about this too. And "editing" is pretty much how we live in this current age of digital literacy. We are surrounded by knowledge and it is up to us to decide what parts/information to include in our lives
I tired, but its easyer to write by myslef. I had a cretive idea, and AI waters it down to cliche. It works for esseys and stuff, not for cretive stories
In this same vein, I really recommend looking at the Library of Babel. This is kind of a similar thing, in that it's heavily done with the assistance of computers doing behind-the-scenes calculations. But for the library of babel, it's not doing anything more than, in essence, sequentially constructing every possible combination of letters up to a certain length. It's fun to find inspiration there but it was featured in a Vsauce video and he was sure to point out that even though every possible story technically "exists" somewhere in that Library, that doesn't mean it's been written or said. For someone perusing the Library of Babel aimlessly, it's more like letter noise than anything else. But you can use it for inspiration. One of the things the site itself suggests is if, for instance, you've hit a block, copying and pasting into their search some of what you have right before you stopped and searching for that in the library and you'll come across all sorts of interesting continuations, etc. from there, that could help get some gears turning for you to continue. The Library of Babel has the next epic novel in some form in it somewhere, but it doesn't have meaning or context in the Library and as far as the majority of the world is concerned, it's location is so obscured it might as well not exist at all. Cool video with lots of good points. I think AI more than anything gives people the wrong impression. People that aren't super into tech might hear it and ascribe all kinds of things that they associate with intelligence but most of what those people are imagining are specifically things that AI CANNOT do. And some of the very important ones that you've pointed out is understanding and building context, structure, having a plan of what's to come and a means to judge what these words will really mean to another pair of human eyes. And those are really valuable things. What brings something from text created at random and real writing created with purpose and meaning - which makes it worth seeing and worth reading and sharing to other people and discussing, etc.
It's funny that you mention it, because while watching this video I had the thought "so basically any AI-generated writing is like a random passage/excerpt from Borges' Library of Babel." One of the tiny sliver of where the letters/words are actually coherent, to be sure. But random nonetheless, with no real intention/meaning behind the writing.
I think that in the future AI will be used as a tool to help writers rather than replace them. Currently when I am writing something I sometimes open a website that I can put a word into and it tells me a bunch of synonyms for it. I think stuff like this is good for helping people who get writers block as it's still there work but it can help them find better words to use to describe things happening. For example if I wanted to write a scene where a character was in danger and the main character needed to save them I could wright "Jeff ran up the stairs". Then an AI which is reading my paper could suggest better words or ways of putting it like "Franticly Jeff made a wild dash for the stairs". This is the same sentence but it's written in a more compelling way. Along side that I also feel AI could be used for better spelling correction software. I am dyslexic so many of the things I write need multiple corrections. Whilst current correction software exists it's pretty sloppy. An AI could learn what kind of mistakes I make often and use that information to help fix them. That's just my ideas though. I'm curious if anyone else thinks something like this is possible.
i just found your channel and i love your content! it's so insightful and interesting, it really makes my brain work. as a writer who struggles with perfectionism, the reframing of writing as a craft where the process is more important than the product is completely mind-blowing! i had never realized before that what makes a final product good is more abstract than what the product is at face value. the idea of the process behind writing being something uniquely human really puts into perspective how dehumanizing it is to judge your own writing so harshly based on only the outcome rather than your goals in writing it. honing in on the goals and ideas behind your writing rather than its actual content seems like a productive and healthy way to refocus your attention onto something more positive and improve writing that you aren't happy with.
I’ve always hated writing myself, but I’ve always considered myself an abstract artist. The way you described writing today was beautiful and has given me a new view and appreciation of it
I think that statements like "but it doesn't do X!" usually don't age very well when it comes to this type of stuff. I didn't think we'd get to computer generated images as quickly as we did. Progress in this space has been rapid recently. They were talking about AI generated art on the last two episodes of the Cortex podcast, and CGP Grey was talking about a language based AI that was able to solve logic problems like the Traveling Salesman problem. So yeah, it might not be able to understand context now, but it's just a matter of time.
And look how pretty those images are (not). Like it or not, you can't pretend a machine can write or paint like a human and even if you could do that, starving to death writing and artists, you will have machines that produce pretty things, but none of them will have any meaning.
@@urooj09 To intuit is just extrapolating from past experiences to fill in gaps of actual knowledge. I'd say currently that's 99% of what machine learning is doing. It's the knowledge part ("context") that's missing. We have these deterministic, knowledge based programs and we have machine learning based program that take a bunch of stuff and try to guess based on that. And now we gotta bring them together.
Unrelated mostly but oh my god. Hi. I’m a person with adhd issues. I have been writing with the “DO IT IN ORDER” style since elementary school. The short description of metacognition in this video literally just made me realize why I spend hours staring at a blank document with no idea what to start with. I’ve literally been doing it the exact opposite way that my brain works
Holy shit did you literally put up a time lapse border around the video to signify how long the ad was going to be?!?! Just when I thought you couldn’t be any more pure and good and thoughtful and wonderful than I already knew you to be….. 🥺🥺🥺
I actually had the exact opposite response... I'm so used to youtubers adding a progress bar under the ads that I felt cheated when the progress bar didn't stop at the end and just kept traveling around the edge of the video
I really enjoyed this video! I had no idea students were using AI to write their essays, but it makes sense. Also, the way you explained workings of the AI was a little surreal. Language is such an odd thing when broken down in relation to robotic thinking! Great work.
This is a really fantastic video, and I really like one of the big takeaways: what makes human writing more real is that it has a purpose. Very meaningful and interesting, above and beyond the rest of the great video.
The direction you took this was interesting, and I largely agree with what you've said. Personally, I don't think spell-checkers and the like are as big of an issue, because if we didn't have them, we would be using dictionaries when we're unsure of a word's spelling... or at least, we should; I've seen the horrors of modern spelling/grammar/homonym abuse. Writing is about creation, and you create and recreate within that process; several years back, I wrote a retrospective about the Zelda franchise - super original, right? - but focusing on its true identity (what makes Zelda Zelda), and the answer I had at the end was pretty different from the idea I'd had in mind at the beginning. As you said, machines don't understand what they're writing, and when they come up with something "correct", it's the same as stumbling blindly through some trigonometry process you don't understand, but still coming out with the right answer; the former isn't really writing any more than the latter constitutes being good at math. It's a little more sophisticated than the proverbial room full of chimps with typewriters creating Shakespeare, but still the same idea. There's something else I've found interesting about AI that a game called Fumiko! made me think about: the the real nature of AI. We often think about AI as an intelligence that mirrors that of humanity, and we think of that as the goal, but it posed the question of whether or not that should be what we're striving to achieve with it. Given that computers don't have the same kind of mind, for lack of a better word, as we do, it is a much easier and more reasonable goal to let it develop into something that thinks nothing like us. Even if you don't want to get into an existentialist debate over AI's right do develop into its own thing, I think something that has a thought process different from our own is much more useful. Wouldn't we want to create something that can do something we can't, rather than just something that lets us be lazy?
"let it develop into something that thinks nothing like us" -- beautiful! If I think about it, there's an absolute bias towards things that seem more human when it comes to AI development. Hmm. Would we even be able to detect something worth keeping around if it didn't match our biases and expectations?
@@infinitivez When you think about it, bias toward humanity is in a lot of things, especially science fiction. In most any series, whether it be books, TV, movies, video games, whatever humanity is always the superior species. When the writer deigns to let them encounter a species superior to them, the message is almost always that humanity is better anyway, and it's usually some stupid reason like, "It's our flaws that make us better!" Sure, you do get the occasional joke answer like, "Earth has the best food," but typically, when the question is, "What makes humanity so great?" you don't get a real answer. Let humanity be put in its place now and again, or pose a better question, like "What can humanity bring to the table if we all work together?" instead of just making it some meaningless contest against a fictional species.
The way I like to think about ai is as a tool, but its worth thinking about hand made vs non hand made stuff which has been lost over they years. Like when you are writing on paper you are molding a bowl by hand with clay on a fixed table, if you're using an editing software it would be more like turning it and using ai is like getting a plate out the end of someone else's factory and calling it yours... All the work is in the fully automated factory. By extension both hand crafted bowls are unique in technique used and have a sense of soul and craftsmanship put into them. The one from the factory is soulless even if it made to look hand crafted. I would argue the exact same arguments were probably had years ago about mass production and the growth of industry.
I will say that in the paid content writing world, one thing that is encouraging people to use AI is the problem of abysmal pay. If you are running a content mill and you're trying to pay your writers one cent per word, then don't be surprised if an AI is the one doing most of the work. You reap what you sow. I know because I did this for a while, until I decided my work was worth more (as is every writer's). Now I'm happy to actually write for my clients lol, although occasionally there are times where I open up the Open AI playground. Sometimes I worry that I became too dependent on AI to write during that time, and now I can't write anymore...It scares the shit out of me!!
Our standard for deciding whether a machine is "intelligent" is to see if it can pass for a human. Our standard for deciding whether a human is "intelligent" is to see whether s/he can pass for a machine.
Back in college in the late 80s to very early 90s, the Humanities department staff was slightly concerned about how, appearantly, you could contact people to pay to write a term paper for you. So really not a new concept.
Steven Zapata, an illustrator, did a video talking about AI illustrations and gives a very compelling point of view about the use of AI in art, it's dangers and how AI is not that far from creating art indistinguishable from that of humans. It seems AI WILL be capable of creating art, the question is: should we automatize art with AI?
@@shytendeakatamanoir9740 I really don't care how you describe it tho, AI are capable of completely replacing artists. That's fascinating, but just because it can doesn't mean it should, that's my problem, Steven Zapata dives into this in more detail.
@@ranzu3138 I have done a lot of AI image generation, trained some models, generating at least like 3000 pictures in the last week and its nowhere near close to replacing artists at the moment. Abstract art is probably the first to go but the more into detailed work you go(especially stuff that isn't extremly conventional) will make it a useful tool for sure but I can't even begin to describe how far away we still are from it being to take over artists jobs. It will most likely create more than it destroys so far. As long as AI can't recall and apply concepts you will run into the issue of having to supply it with enough data to do what its supposed to do. Maybe some jobs will pop up where people will pay for models for certain things. If you notice AI art is fantastic at easy patterns(like shading bodyparts well since there is plenty of data for that and they often have geometric forms like cylinders or balls) as well as things with a lot of data in general like heads facing a camera(I would guess 90% of AI images are things or people facing the camera at a slight horizontal angle). But dear god as soon as you try to do something like dynamic poses with are often very unique or more uncommon perspective its a fucking nightmare. Thats often where the lack of data comes in and while its theoretically possible for it to learn practically always having good data of what you want is often not feasible. We don't have models anywhere close to being able to do concepts and thats the biggest issue. As soon as we can make an AI that can do that most if not all jobs will be replaced in a short time frame. Until then I am very certain it will stay a tool for now. A very powerful one if used correctly
@@blacksuitedsonic I've seen plenty of examples of very impressive AI art that is indistinguishable from humans'. That's why there has been a lot of people very enthusiastic about replacing artists in the first place. And data is not really a problem due to AI companies working with almost unlimited access to copyrighted material, a mean even Kim Jung Gi's art was used to make AI generated illustrations. I highly encourage you to watch Steven Zapata's video on AI art.
@@blacksuitedsonic an example that comes to mind is a video called "¿Para que seguir viviendo?" by "CdeCiencia" a Spanish content creator. He compares art by humans and AI generated art and for most people is difficult to distinguish which is what. You can say "oh well is far from replacing artist" but I really don't care how much it takes, it will, art is not something divine, it's patterns and AI will convincingly make art sooner or later and I don't doubt for once that companies specifically will replace artists with AI if this is not regulated properly.
I love this video bc it helped me with my assignment. Not only did Zoe talking about how they have a messy process made me realize I could ignore the bits I didn't like in favor of writing what I did want to write and then fitting them together, but the content of the video gave me ideas for my essay on Generative AI
Deep learning researcher here. Let me just say that your characterisation of AI (specifically, Deep Learning) was excellent. It covered the way these are purely algorithmic, while not misrepresenting the way that algorithmic doesn't necessarily mean bad. With regards to terms, I think that "AI" is a good general term to refer to any algorithm (set of computer-enforced rules) that we can interact with directly. I know people contest the use of "Intelligence" in that because most AI is quite low in intelligence, but I think that it is still the correct term if we want to evaluate them on human terms; how we interact with them. If you want to talk exclusively about AI that uses both training data and rules, technically "Machine Learning" (ML) is the correct term, but even if you were to be making a point about how "AI uses data based on human biases", I would still say that "AI" is a better term. It's only when you want to distinguish between "Machine Learning AI" and "non-ML AI" that I would suggest using the term "Machine Learning" because that's the only time where it's relevant. I would also avoid using "deep learning" in general because the distinction is kind of ambiguous and pretty technical; honestly, it's mostly just a good marketing term for "large neural networks on the GPU". With regards to the content, I want to re-iterate a point you made (via quotation) that I think is important. GPT-3 and ChatGPT can be thought of as solving "language modelling" (the rules of grammar; which words go where) up to a human level, but the (relatively more complicated) logic and reasoning part is no where near solved. The current techniques do not really allow the kind of cognition required to create text which has a purpose, since the AI does not have any external goals. Regardless, it's fascinating to see what perfectly legible language looks like without any meaning behind it at all.
Would you mind answering a few questions for me? They are: - Do you think we can ever teach AIs to understand meaning? How do we "understand" things? - Is understanding a precursor to the ability of distinguishing between truth and falsehood? - Do you think this ability is a precursor to general reasoning applicable to things outside of AI's training data set? I got these questions from pondering about how we could teach AIs to think from first-principles
I think this was released shortly before ChatGPT was released, and so much has happened with AI since this video was released lol, it would be cool to see an updated version about either GPT-3.5 or GPT-4. Also for 10:47 we don't necessarily write just word after word, sometimes we think about what we want to write about, then write about that. GPT models can't do this, and this leads to, for example, not very good joke telling. Generally when we come up with a joke we think of a punchline first and then base the joke around that, the GPT models can't do this.
Just learned that some essays are graded using algorithms, yet teachers have the audacity to get mad that their students give them a taste of their own medicine and us algorithms to write the essays. This whole debacle is just a clear sign for teachers to make assignments that are actually worth the time of the students to write and the teachers to review.
Great video on the writing process and purpose of writing. I would love to see you talk about adhd and the writing process. It always strikes me a bit negatively when teachers talk about getting away from rigidity in the writing process, especially when pushing free-writing. Mainly because I and many others benefit from the ability to structure our writing process as a checklist of steps to do and a specific order for each step.
No two people share a writing process; from my experience, it's a lot of trial and error to find your specific version. In that way, it's not bad for teachers to push you to try different styles, as sometimes you'll find a way working that you don't expect. Saying that, I think once you realize something definitely doesn't work for you, it's best for you to put it aside. So yeah, my suggestion is to try all the different ways you can and see how each works. If you're going for long form writing, try to take notice about how often you can finish a piece and when you don't, at what point the process tends to break down. My own process is a bit of a mess, but it mostly works for me. I doubt I'll ever be completely satisfied with it, constantly tweaking to find better methods to make me more productive. I have very mild ADHD, though, so you might have more trouble.
Now that GPT 4 is out, this video didn’t age the best to be honest. Most of the issues discussed have been fixed by basically just throwing more data and processing power at the model
Hey Zoe! Watching this video and your last has been great fun for me personally, since this is my field - I have an MS in Computational Linguistics. You seem like you have an excellent grasp of the capabilities and limitations of modern Large Language Models such as GPT-3. Like you, I am fascinated with what these models can do to help us understand our own cognition - I dabbled in computational psycholinguistics during grad school, which is all about using computer models to help us learn how the brain works when processing language - and so I was thrilled to see you mention at the end that LLMs can't produce anything "new," since it's basically just re-arranging components based not on its own cognition, but on extremely dense matrices of weighted probabilities. I like to say that LLMs like these are fundamentally uncreative as a result. If anybody is still struggling to understand why texts produced by LLMs are fundamentally different from those produced by humans, I recommend looking up a thought experiment by philosopher John Searle, called the "Chinese Room." While his example was inspired by the advances we made years ago in translation science, the fundamental nature of the systems hasn't changed, and so it's very much analogous to LLMs and language generation. It can be tricky to understand what distinguishes these LLMs internally from humans for sure, and I loved how you mentioned the recursive nature of human thought. Recursion is actually built into the structure of neural networks - that's what gives them the ability to "remember" context - but their context is fundamentally limited, and even if it wasn't, they aren't built to "understand" anything. All in all, I thought the output for your last video was hilarious and exactly what I'm used to seeing from these LLMs, and this video was very insightful! I've seen even people in my field fail to understand the points you've made here, so major props!
Your analysis of the writing process was so eye-opening. It made me realize that, as a programmer, the task of designing a program or system is almost exactly the same as the writing process you described. Very cool insight!
The recursive thing totally makes sense... I've always had like 3 different ideas and I knew that one needed to be further down the page and then one was my end statement and then my beginning statement. Eventually have like a hanger on where at the end the statement that I got to and the statement I initially placed look the same...
I think one of the sad things of GPT and other popular language models is that they precisely do not model the human mind. Even when they use neural nets, those nets are being used as a universal trainable function, rather than something that imitates the computational processes in the brain. There's some actual neural net AI around, but it's been very sad to see so many linguists go into industry and have their time wasted training opaque models when they could be giving us actual insight into human language.
I'd like to comment here, first of all because of the algorythm, to help your channel. Secondly, I'd like to ask some questions about what you rather bluntly stated: what is a word? what is a sexist or racist sentence? What I mean with these two questions is a broader one: do WE know stuff? Because when talking about AI we're usually very confident about our own rationality and epistemology, even though, I believe, we have strong evidence to support the opposite of that statement. Even science as we know it is only reliable in its "unrealiableness", that is, it keeps changing and being "falsed" every day...
I'm writing as a hobby for various large writing sites and many people do use AI writers. AI writers are very good at writing what's "marketable" (using popular tropes), and then the actual writer just polish it because it does look iffy. It's about 80% AI/ 20% human work. And those stories do get popular, very much so. You might say that it's just putting out a "generic" story, but that's exactly what users of specific writing sites wants, like there's a bajillion werewolf romance stories with the same beats and all that, and many sell. I have artists friends and they complain to me about AI artists, not knowing that many, many books sold for niche categories are written by AIs.
I'm sad I found this video only 4 months late. You have such a good analysis of what AI actually is. Also I love how you have your books sorted by colour.
AI writing: Chomsky's 'context-free grammar' made manifest, although the limited working memory makes it a lot less like Chomskyan syntax in numerous other ways.
You're correct about everything. It's frustrating that 99% of the population isn't going to understand any of this. The public has been fed such a garbage diet of media that The Matrix is more real to them than anything in this video.
I don't know if you touch on this later in the video or not, I'm about halfway through and just wanted to mention that many of the limitations, while present in current ai models, can theoretically be ironed out much later. It'll be really hard to actually fully replicate the writing process as you describe, but theoretically there's nothing stopping us, we just need technology to catch up and making training such a model feasible. I love the insights in this video though, it's very good so far.
I have many issues with the sentence "we juste need technology to catch up". Yes, theoretically, everything is possible for computers and technology... at the condition that you ignore everything matarial like energy. We live in a finite world, and ignoring that is dangerous. And if you don't stop me, I'll start talking about the ecological crisis. Moreover, scientists are trying to do the best they can with the current knowledge and technological capabilities. As we don't have te possibility to perfectly mimic human writing, the limitations pointed out by Zoe are everything but trivial. These are issues that scientists should address if they want to improve text production. Finally, you cannnot blindly trust that technology will ever solve everything. There are limitations. Often, technology or science get stucks because of fundamental problems. Even more, it is currrent for a technology to be abandonned because it does not fulfill their promises. I don't say that this will be the case, but we can imagine that there are deep issues that prevent this technology to improve further. This is why we should not blindly trust that technology will magically catch up resolve all of these issues.
i watched this video a few days ago, and learned some interesting things. now, currently, i have experienced through writing, that the act of writing allows one to comprehend an idea more clearly. i was reading a book, very confused. and i decided to make another entry for that book on my phone. i began to explain things to myself and understand some things a little more clearly. i am still confused, but i am confused less though. it is interesting how complicated writing can be, and the very fact that it can affect our understanding is also pretty absurd and useful.
As a seat-of-the-pants writer, all this talk about composing while writing is making me feel VINDICATED! :D Yes, it is valid for me to be 5 chapters in, still not know how it's going to end, and have just realized I need to go back and change the entire beginning. How ELSE would my brain work? (Also I used to teach SAT prep and the essay portion drove students who had taken high-level writing courses mad because they'd just spent a year or more learning NOT to write 5-paragraph essays and would get upset about being told to write in them again. Also, *officially* essays were graded 'holistically' but based on the actual standards... yeah, I'm counting the transition words and checking your supporting details are in both the thesis and conclusion paragraphs, not actually considering the quality of your argument. That's why you're not getting the score you expect.) (Also-also, 10 years later I feel really bad about it still, but at the time I had to be basically a paid shill for the standardized test companies. I am now VERY anti-standardized test, or as I call them, tests that test how well you can take a test.)
You make overly general statements about AI when citing aspects of just ChatGPT. I appreciate what you say at the beginning about how difficult it is to name the *things* you're talking about without it sounding clunky, but I think naming the model you're talking about would be better. Statements like "AI doesn't understand context", "words don't actually mean anything to an AI", and "there's so much about the writing process that AI just can't do" are badly future-proofed, and assume that ChatGPT and ChatGPT-like writing AIs will be the only game in town for the foreseeable future, which seems hard to argue given how fast things are changing
I love how you talk as if you were an expert in machine learning and LLMs, it's just hilarious to hear statements as "AI can't produce text just as a human" when even two years ago LLMs made absolute garbage and today they're writing almost human level text at best. We just have to wait and see, but if the pace of advances continue, I believe most of the arguments you gave will become obsolete in less than a year and, obviously, you will move the goalposts of what makes human writing better than "artificial writing" over and over again.
Writing systems are just a medium of communication. So it has to be deeper than the communication system. The material to communicate has first to be generated and then encoded in language. So when will an AI generate the material to be communicated? When will an AI want to communicate something that it figured out?
11:25 I feel so seen! I had so much trouble from teachers due to not having discrete, dramatically different drafts. If a portion of my writing isn't very good, then it's hard to reason about what to write next, so I have to mostly fix it before continuing, which doesn't leave much room for changes to make in a second draft.
A coworker once asked chatGPT why cow eggs are bigger than ostrich eggs. The AI would go on about that domesticated animals have been selected for their egg production, while ostriches, being wild animals, have been selected for their speed and size, hence why cow eggs are bigger than ostrich eggs.
Just asked that "There seems to be a misunderstanding in your question. Cows are mammals and do not lay eggs, while ostriches are birds that lay eggs. Mammals, like cows, give birth to live young after a period of gestation. Ostrich eggs, on the other hand, are the largest eggs laid by any living bird species. The eggs can weigh between 3 to 5 pounds (1.4 to 2.3 kilograms) and have a tough, thick shell. The large size of the egg is due to the ostrich's large body size and its evolutionary history. If you have any more questions about animals or their reproductive processes, feel free to ask!"
As someone who professionally works on machine learning (ML), I gotta say, you got it spot on with the point about it lacking context. That will simply come as a product of time, natural evolution as it continues to exist in an environment. The point about it not thinking in advance was new to me, but immediately clicks. It's not true of every AI model; I knew someone who wrote a 2048 solver that involved look-ahead computations, and you could feel how it would slow down during critical, difficult steps (where I would have as well), and goes smooth and fast during easy parts (where I would have as well). There's something much more human-feeling to that sort of pacing. But ChatGPT just chugs out text at the same speed... unless it completely stalls out (particularly when I'm asking it existential questions, or trying to teach it math or how to keep time).
I would agree with your point about using AI for copywriting/SEO stuff if copywriting wasn't one of the most profitable industries for entry-level freelance writers lmao. Don't take this away from me AI I still need to pay off my student loans
I remember playing Ai dungeon, it was so much interesting when it I played it for the first time. But now the ai is getting wonky, lol. NovelAI is never wonky or give error messages most of the time, unlike AI dungeon, which gives me an error message every time I use the app.
if you're still interested in AI writing, look into NovelAI! i find it significantly better than AID and even though the free trial never regenerates and you kinda have to subscribe to continue at all, it's because their AI models are super intensive. they also have built in stable diffusion image generation too, and you can choose between tons of different styles and modules!
@@Parker-nm9cg Yeah, I used NovelAI. Its was beautiful, it felt like something the AID would never do without the most heavy collaboration and World info. The image creation is cool, I just wish we can do more then anime like in the discord.
@@MatthewHollow i stopped being able to afford the subscription, but god i loved it. AI might not be truly capable of understanding what it's writing, but characters seemed to have a fairly consistent personality, it was responsive to memory entries and author's notes, and it often did pretty impressive worldbuilding on the spot. i really think some AI writing is more impressive than people give it credit for, especially when it's collaborating with a human.
Thank you for the detailed explanation of what AI is. It annoys me that so many people see AI as magic, a new life form, a being with its own personality.
My main thoughts are two specific ones in regards to chatGPT and it not being great at writing. 1. It doesn't need to be great, it just needs to be good enough to outweigh actually writing yourself. While many people such as yourself quite enjoy the task of writing, others, myself included, usually hate it and procrastinate it. I've personally been using AI for emails, cover letters, resume snippets, LinkedIn posts, etc. Everywhere where normally I wouldn't care and would not write much/not write at all, I can now just do with a little prompt. Is it perfect? No absolutely not. It is going to be better than my writing likely would be? Yup. Is it ever worth it for me to make huge strides in my writing? Not really for my field. It isn't worth for me personally investing the amount of time it would take to get up to that level, vs accepting I'm mediocre at that skill and focusing on one's I am good at. ChatGPT is sort of like "off the shelf cake mix for writing" for me. Are there people that can bake better cakes than me from scratch? Absolutely. I'm not a baker though, so I don't care. 2. These large language models, in theory, will improve exponentially. (I say in theory because we're starting to hit a technical problem called "the memory wall" but that's a story for a different day.)
One video I saw on A.I. writing talked about the vast amount of fanfiction on the internet. There’s a lot of spicy stuff out there in addition to all of the implicit bias.
@@PatriPastry Okay, I probably should have considered that 😅. Still, Turing was one of the definite experts on the subject of real life AI outside of Science fiction.
I love your bit about the messy writing process... When I was in highschool we often got a specific writing structure "imposed" on us. I'm pretty sure i've had multiple assignments docked 15% for not handing in an outline. Several of my classmates wrote their outlines after writing the essay. But I'll never outline an essay before I write it because if I need to write a 3 paragraph essay I'll probably end up writing 5 paragraphs and then choosing the 3 that work the best as a cohesive unit.
Commenting before I watch: I don't think I'd want to read fiction written by an AI since it wouldn't have the creativity and human experience to write compelling stories or characters, but I could see an advanced AI writing purely factual things like summaries or instructions adequately
My main thought when I hear all these points against AI is to just add "yet" after each "can't", "doesn't", etc statement. It get better at a crazy rate. I imagine just about all of these issues will be significantly lessened over time.
Appreciate your takes on AI writing, this angle of considering the actual thoughts of a writer is similar to one gripe a lot of artists have against AI art, but that usually gets swallowed up in labor fetishism. Its nice to see you clearly talk about writing and learning to write as part of communication and cognition. Personally, while i am also interested in AI from a scientific perspective, i think its currently being used very irresponsibly by companies who stand to earn a profit. After all, if a machine can do something well enough, why would you hire a human? It sets a dangerous precident imo. Theres also the matter of companies training these AI having much greater leeway with copyright than humans, like art AI trainign sets including images with private medical Information and so on. The only time you see AI training sets play nice with copyright is with music, because the music industry is very litigious about its IP.
A major flaw I see in modern education is the lack of justification or purpose that is provided to students. Their education is a black box and entirely one-sided instead of a dialogue. Especially at an age when your brain is built to question everything, an answer needs to be provided to "Why?" Why is learning how to write a persuasive essay important? How will learning math help me? What does history have to do with my future? Why do I need to toss balls around in the gym? We need to break down these barriers, engage students, and I think fundamentally restructure our focus in education. It needs to be wholistic. We need to demonstrate how physical health affects our mental health, how building teamwork helps us develop strong relationships and work well with others, how persuasive essays help us in professional settings and understand data, how math helps us budget and understand statistics and systems that affect us in meaningful ways, how history can be recognized now in the making and the impacts that actions have had shaping our present. Much of our education seems based around the assumption that things like critical thinking, home maintenence, philosophical/creative/spiritual fulfillment will be handled at home, and that is such a flawed assumption. I know people for whom any of that was entirely outside of their parents capacity to teach them, either because they didn't know how to teach them, or neither their parents or them had the time to teach or learn respectively. People without developed morality, healthy outlets for expression, or a developing identity and direction. People without cooking skills, knowing how to clean; who don't know how to ask the right questions about art or information they encounter. I'm glad to see some changes, there is a recent petition near me for the local school system to include financial literacy as a part of mandatory curriculum, things like budgeting, credit, mortgages/financing, etc. But that's honestly just the first step. It's better than nothing, but we need to include students as engaged participants in their education, not as an object to be acted upon. If a machine, a literal inanimate object, can produce work satisfactory to our criteria of education, we have a completely broken system. We may as well be replacing students with AI if we treat them as robots to the degree that successful outcomes are indistinguishable. We must redefine a successful outcome, and renovate how we approach education.
We absolutely have a limit to how much context we can remember, humans don't have perfect memory (I've had conversations with people who couldn't go two paragraphs without contradicting themselves). In fact, I would argue that context is the #1 thing we humans struggle with the most when using language, wether it is written or spoken. Also, if we had the potential to teach an AI the meaning of words and how they correlate with reality, it stands to reason that we could also give them a much better capacity to remember and reference that information when writing. To me, the only real difference between humans and AI writing lies in perception of reality. If we were to teach a person to write a language without ever giving them the meaning of the words (just the grammar and sentence structure), the results would be undistinguishable from AI-generated text
Well, we usually chunk our information to deal with that. Which naturally leads to writing recursively. Since at each level as you think about it. It reveals more detail about that which can be written. Or other connected ideas. Text Generation Algorithms actually can write for goals though. But they mainly write smaller bits in that respect. And they won't take one into account unless given one. But structure is a significant weakness particularly for larger works. Although, basic procedural structure. As well as local context and context to a set of rules made at the beginning can be used to good effect. It also understands grammar. But not that having the logic of a paragraph flow is important. Or how to order a text so that the more fundamental ideas are established first and then those that depend on that. Or even that it's logic actually fits with reality. (Which isn't as much of a concern for fiction.) I actually think Chatbots are most useful when it comes to 1: exploring possibilities of how to say something. 2: Asking good questions to help you flesh out an idea when stuck somewhere. 3: Helping you find areas that are unclear or that have errors of a certain type. 4: thinking through the bigger aspects of approach on a piece of writing. 5: giving inspiration by generating some odd-ball combination of old ideas to get you started.
You talk about teaching AI about reality as if it's an easy task? Current generative AI has 0 understanding of, well, anything, really. Ultimately it's just algorithms spouting out what is the best guess according to its database. Sure, humans don't have perfect memory of context, but we always have the idea of context in our head at the very least. We have the context of all the experiences and feelings we've had all our lives, and that impacts what we write now. And yeah, if you make someone write like an emotionless robot who doesn't understand anything, you'll get text that's not far off from what AI can generate... Not a surprise. The big element in the point of most writing *is* the elements that such a person would be missing. Meaning and context!
12:35 that's the perfect example of a recursive process, it's so meta, you're solving the problem of not knowing how to talk about the link between writing and thought by writing and thinking about it.
I use Sudowrite for brainstorming and editing. I learn a lot from having it rephrase my work. Also, it's brave to comment about what AI can't do with GPT-4's release just around the corner. Your next video might be called, "Nevermind, AI can do everything now."
I asked GPT to write a short story, which it did. The TLDR here is that it can only for now mimic the most generic style of writing a story without actually making anything bold or provocative.
With the recent hype around chatgpt I'm now kinda scared that as an aspiring writer in literature, writers will be replace. Tho I know of course of the defense for real human creativity and understanding, I'm still a little bit concerned. I wish the official language, literature and other arts related (like visual art and music) committees and publishers around the world will do something about this, to not replace humans in this fields.
This is a great video. What it makes me realize is that all this hullabaloo about the bot and cheating is missing something. Rather than worry about whether or not students are going to use this AI to write papers, maybe they should tailor their lessons/content/etc, or make learning more participatory, so that students are legitimately invested in their own work, and wouldn’t bother to use an AI. These "cheating" students are writing papers to check a box, the one that verifies that they can regurgitate a lesson, or at best synthesize what other "important" thinkers had to say on a topic. That doesn't mean they're actually invested in learning whatever is being taught, which *should* be the point, right? If they were writing with purpose, or as you say to "affect the world around them", they wouldn't use AI, because AI isn't an expression of themselves and their passions/desires/interests. So rather than scrambling to figure out a way to get students to "stop cheating!", maybe this is a juncture at which academia should engage in some serious self-reflection, discipline by discipline, and ask themselves what they're actually doing to "affect the world around them", and thereby why any student should give enough of a shit about what they're teaching to engage authentically.
It was sort of touched on in the video, but I think if a writing assignment can be completed entirely by an AI, it is not a good writing assignment, unless it’s an assignment on the fundamentals of grammar or another fundamental writing principle.
sadly I don’t have time to watch this right now, but I do want to say: I had a crazy dream last night and part of it was that I heard news that you were arrested for shooting someone and I was like “no way Zoe Bee would do that, thats crazy” so yeah anyways, love your videos, please don’t shoot anyone
I'm so happy I found your channel, it's one of the few channels i can say that have educational value, that I feel like I gained a new perspective or learned something new after watching
I was with you on using the term "AI" to mean machine learning text generation at first, but when you started saying things like "AI can't understand context and structure" I think you've run into issues. *Current* algorithms and implementations have this restriction, but it's not fundamental to the concept of artificial intelligence. One of the goals of general artificial intelligence (as opposed to the narrower goal of predictive text generation) is a contextual awareness. I'm concerned that your argument is focused on limitations of the current technology while your terminology, at least as it's used scientifically, makes it sound like you're making broader, less justifiable claims.
This video is four* months old, and I'm very interested in how she feels now about the current state of OpenAI's ChatGPT. GPT4 is out, and they're working on the script for it now. It will be interesting to see how it develops. Also, it's quite likely that we will see several things put together I to a larger, more general AI in the very near future.
@@wakanda0070 How so? Would you like to explain more? Answer me honestly. Have you used GPT-4o in the past few weeks or months? We're not even close to reaching the maturity of this tech.
11:47 - This reminds me of a John Gage quote that I often share with writers: "Writing is thinking made tangible, thinking that can be examined because it is on the page and not in the head, invisible, floating around. Writing is thinking that can be stopped and tinkered with. It is a way of holding thought still long enough to examine its structures, its possibilities, its flaws. The road to a clearer understanding is travelled on paper. It is through an attempt to find words for ourselves in which to express related ideas that we often discover what we think."
Offset your carbon footprint on Wren: wren.co/start/zoebee The first 100 people who sign up will have 10 extra trees planted in their name!
I’m a writer I can should you what I’ve written because I accept criticism. (If TH-cam didn’t take down this comment because I know too much)
@@jessegartung294 Read your comment again. You have a number of grammatical errors. c-
Zoe. You are the best TH-camr I've seen so far. ❤️
I have some topic suggestions, or questions, for you. If you could enlighten me on them, I'd be truly greatful. 😊
What is the purpose of poetry? How has it changed the world? Could you cite some examples too.
Could you make a video about Friedrich Nietzsche? What do you think of him? He's made some contributions to literary criticism too.
Why are Russian writers usually praised so highly?
What's your opinion on cancel culture? Wouldn't it lead to the election of a Facist government that censors people?
Do you think the Sherlock Holmes series is great literature?
Will J.K. Rowling win the Nobel Prize in Literature? Does she deserve it?
What's your opinion about new age atheism?
Love. Keep making more quality content like this. Wishing you joy and good health.❤️😊
I'm not as excited by the evolution of AI scripts as I am of the potential for humans to learn how to think based on the interactions and process of engineering AI. We are facing our biases and cognitive filters in a profoundly personal way that is scaled down enough to do on a person by person basis, yet advancing the process globally as a civilization. The potential to bring humanity together intellectually is immense... this may be the very key to our survival as a species as well as all of life as we know it.
Carbon Footprint is a lie made by oil barons. You snake
I once prompted an AI with two words: “Eating” and “makeup”. It wrote a paragraph with the phrase “I once ate an entire tube of lipstick and I have no regrets”. I took that paragraph over to another AI and got it to write another several paragraphs that focused entirely on the makeup and not at all on the eating.
I did the same thing yesterday with the prompts of “murder” and “fudge”. I started out getting a story where a woman made fudge and her husband was murdered, and then the AI decided that having the woman hide the fudge was the best option, lest the police find the fudge and deduce that she had nothing to do with the murder. I have no clue why the AI decided having the woman implicate herself in the murder was a logical progression.
And then I took it to the second AI, which promptly took the story and threw it to Why.
It was a wild, impulsive decision that I made on a whim. I had always been fascinated by the different colors and flavors of lipstick, and I had always wondered what it would be like to eat one. So, when I found myself alone in my bedroom one day, I decided to take the plunge.
I opened up the tube of lipstick and popped a bit into my mouth. It was a little bit waxy at first, but as I chewed it started to dissolve and I could taste the delicious flavor. It was a fruity, sweet taste that reminded me of strawberry or raspberry.
As I ate more and more of the lipstick, I started to feel a strange sense of euphoria wash over me. It was like my senses were heightened and everything around me seemed brighter and more vivid. I even started to see little sparkles of light swirling around in the air.
After I had eaten the entire tube of lipstick, I felt full and satisfied. I also felt a little bit dizzy, but it was a good kind of dizzy. I lay back on my bed and let the warm, fuzzy feeling wash over me.
In the end, I have no regrets about eating that lipstick. It was a fun, spontaneous experience that I will always remember. And who knows, maybe I'll do it again sometime!
--ChatGPT
I didn’t do it. I couldn’t. It was preposterous. Preposterous as cuss, as the hipsters say.
“Eat it. I bet it tastes like salt and vaseline.”
I laughed. Okay I snorted. “So it’s basically margarine?”
“Margarine and coloring.”
“Yeah,” I drawled with unabashed refusal. “No.”
“Why not? You could tell random people in a Tube your first-hand experience. Prove how inane writing prompts are!”
“Yeah, oh, I get it. Still no.”
“The authenticity! The sincerity. It’s like the chili pepper crap with the interview like it’s two ferns? Two ferns, with the beardy guy from the Makeover.”
“Hangover?”
“Yeah, but he needs a Makeover.”
“Ha. Yeah. Should see him in Imagineers.”
“Hm?”
“Office worker, spontaneous combustion, the usje.”
“Is that how you spell the abbreviation for the word..”
“Usual? Yeah, dunno, I’m not a linguist.”
“Really? You do have rather passing facilities of narrative.”
“Naw.”
“You do.”
“Gosh, I haven’t even given us names. Things weren’t supposed to go this far.”
“Eat the contents of this tube of lipstick and you shall live forever, nameless, noble, human.”
“Still no, bye.”
@@OwlTeaGames is that ChatGPT too?
When I first read this comment, the only thing I could hear running through my brain was Doctor Bernard's voice at the opening of a Chubbyemu video, saying, "ZB reported to the emergency room frothing at the mouth, red sputum running down her chin. Unable to talk to emergency room staff, she was unable to explain how she had come to this. This is how her lungs almost shut down and she ended up with a lifelong aversion to makeup."
Sorry about that, Doctor Bee, but I had to use somebody's initials for a Chubbyemu video intro, and yours were the first to come to mind. I'm pretty sure this won't inspire you tio ingest a tube of lipstick just to see if the case would make it onto Chubbyemu's channel, but if you find yourself with a sudden urge to try a tube, please don't! We want to see you doing more videos . . . .
Earlier this week, I had a teacher accuse me of having an AI write my story for creative writing, even though I didn't. So, multiple teachers looked at it, and they all said the same thing. It doesn't help either that I'm the first at my school to deny that sort of allegation. Looking at it, I can see where they are coming from, but I know that my writing is genuinely mine.
There are programs which can check if writing was written by a computer rather than a person, at least for most of the freely available AIs. It's a shame they just make the accusation without checking - maybe they're not aware of the option?
@Matt Morgan Those programs aren't accurate enough yet. Saw an article the other day that one of the main ones they use for this falsely flagged the American constitution (or whatever it's called, I'm not american) as written by AI. So, sadly, we can't rely on those just yet, and I'm inclined to believe that as AI evolves it will be getting progressively harder to pinpoint whether something is AI generated or not with relative certainty
@@scarlett.whit3 Are you able to share this article? They decided to punish me for it, so I'm going to be appealing the decision. That would probably be a good piece of evidence in my favor.
Edit: To clarify, they did use a program in their decision.
@@theunpopularcuber9554 You should do 2 things, prove that you didn't generate the text, like writing the creative process you went through while writing and comparing that story with other stories or text you write. And the second thing is to investigate the program they use to identify your story, ask the name of the program and use it yourself in other media, literature classics, school homework, your other classmates stories or even ask the same teachers who accused you of using ChatGPT to write a story and use it on them.
@The Unpopular Cuber you should sue the teachers too while you're at it
In software, this introspection caused by communication is called "rubber ducking" (or "rubber duck debugging"), where you explain the problem in great detail to an inanimate object (or a colleague who isn't really expected to offer insight), simply for the clarity you gain from doing so.
I rubber duck all the time
I've heard it described as the teddy bear story where a help desk once had a bear they made you explain your problem to first.
i’ve heard it called soundboarding before, though i don’t know what that’s from
@@onetwothreefour2014 from my experience if you give it enough context it can reach(on that scale) 70% atleast for my questions it gives human like results or even better while rubber ducking to it.
This is also a key part in deckbuilding games, where you can quickly tell the functional decks from the Magical Christmas Land Jank by "goldfishing" the deck, where you play a deck against an "opponent" that does absolutely nothing (like your pet goldfish). If you can't win by the N+2nd turn, where N is the average number of turns a deck of your kind needs to win in the format, you have to change it.
sometimes ai delivers the funniest and most unintentionally brilliant lines . one time i was messing with it and it had two characters appear and one was like ‘i am (insert long ass roman emperor name with like 200 different titles), and this is my sister Druba’ and i laughed at that for a solid 40 minutes
"My name is Pablo Diego José Francisco de Paula Juan Nepomuceno María de los Remedios Cipriano de la Santísima Trinidad Ruiz y Picasso! Back. To. You. Bob"
yeah, so original the joke was used eight years ago in the Epic Rap Battles Of History
@@KyrieFortune Ain't no way you just pulled an "AI stole this joke from Epic Rap Battles of History" when:
1. No it didn't lmao
2. this type of joke is MUCH older than fucking *Epic Rap Battles of History*
3. Cool 999 profile pic! gives me some nostalgia
@@roundabout468 I'm unsure one could say these AI can even steal jokes. They don't have an understanding of humor. See, they can explain humor because their training data contains such explanations, but they cannot understand humor, because they do not understand. They just pattern match.
They _can_ mimic jokes, and these jokes may be funny, likely because it's either re-telling a previous joke, or out of coincidence, but it won't be funny because of some innate understanding of what makes a joke funny.
This extends to most things, really. You can use it as a calculator or as a mock SQL database, but it has no understanding of these things and can (and will) trip up even on fairly basic stuff.
It's just a really fancy text completion model.
@@roundabout468
“So what’s the sister planet of Racacoricofallapatorius?”
“Clom.”
Reminds me of Simón José Antonio de la Santísima Trinidad Bolívar Ponte y Palacios Blanco and Minister Inga Karkhuul Haw'kohd Dis'nahm Bi'ahni Lawga Ormo Pohmpus Da'nit Ar'edi Iz Khura'in III, just to name a few. Very respectable.
I'm not a writer but a visual artist, and it was quite interesting to me to hear a writer's perspective - what you say about forming thought during writing was so fascinating to me because the same happens to us as we draw and paint, but I never heard someone make that point as concisely for visual art as you did it for writing.
I agree with you there. I feel like so much of art is problem solving but it’s hard to put into words why. And it’s that problem solving that ai art still has issues with.
I wonder if this is because people are much more familiar with the concept of writing as deliberate communication of ideas and concepts (like, everyone with an education is familiar with reading and writing) while visual arts are less likely to be perceived as having a concrete and deliberate thought behind them. So it might be that the steps of writing that AI is missing (using words to learn about the world and then communicating that learning with words) is seen as less fundamental to visual arts than it is to writing. (PS: I am not an artist or writer!)
To me as a mere dilettante in various art forms, AI painting seems an interesting technical experiment, on the level of the invention of fotography. Until the invention of fotography a lot of drawing techniques more and more developed towards a more objective reproduction of human sight: perspective, color theory, etcetera. In a way photography threatened to deal the death blow to any visual artist, because it just offered a perfect solution to a lot of this proböems at low cost.
Until society learned to distinguish between a monkey with a polaroid and actual art.
I suppose atm AI painting is at the level of the monkey with the polaroid, it's just, that society has not become bored with its results yet. AI painting will become boring as hell in future, exactly because its cheap and everyone can do it, and it will become a constant noise.
There will be specialist, that can make the AI do things no one else can, and others, that will specialize in producing stuff, that no AI can produce.
I think the reasons why these concepts are easier to grasp in writing vs visual art is because visual art has a much higher skill floor and writing is more often seen as utilitarian. Most of the conversation you hear about art is about technique like how to draw eyes, shading, perspective etc. It's very difficult to reach the point where technical ability doesn't limit your ability to express yourself. While writing can be technical with word choice, punctuation, and sentence structure, it is so much easier to write "the man crushed a tomato in his hand" than say painting that image. Since so much attention is put into creating visual art that looks good there is less attention put into the meaning of art.
As I wrote this I realized "the man crushed a tomato in his hand" is not the written equivalent to painting that image. The equivalent would be a detailed description of it-- the juice flowing through his fingers and dripping off his wrist, the tension in his hands-- I'm not finishing that because I think you get how long that could take if I tried to describe everything in detail. If I did try though and really hammed it up with adjectives, similes, rhythm, and pacing it could lead to something that's actually aesthetically pleasing. Most people's reaction would likely be much more judgemental than if I made a baroque style painting of the same image. I think there are multiple reasons, but one of the big ones is that we often use writing to get a point across, so it feels natural to ask what's the point when you read it? Since visual art is often evaluated by it's aesthetics first it's much easier to forget that you probably had some reason for making art.
To a certain extent, I think making art because it's pretty is good and I think the same should go for writing too. At the same time forgetting meaning entirely makes either one meaningless. Overall, art is hard to make so people think more about how to make it than why.
@@stefanb6539 To me, as someone with relatively high levels of experience with writing and relatively... um... zero experience with art, AI generated art has always seemed vastly superior to AI generated writing.
Maybe I'm just missing the obvious things that make AI generated art bad or uninspired.
The biggest issue I have with creative AIs (i.e. ones that create art or text) is that whatever model is created after training is a representation of that training data that can ultimately can only be changed by adding different data.
In other words, such an AI represents what exists, and only what exist. It means that if AIs like these were to replace humans, human creativity will be immediately frozen.
The AI cannot generate anything new. Only remix what already exists.
This is a really good point that very few people pick up on. Most importantly, AI has an abysmally slow learning rate offset by their huge reading speed and huge bandwidth. So you can't just add one innovator to push the boundaries of an art form. You would need tons of humans pushing the art form all the time in order for it to pick up on the pattern.
Aren't humans the same way in some capacity, isn't it true that all of our ideas are remixes of other ideas we have encountered.
@@josiah42 could you be more specific as to what idea you have originally and independently thought of?
I think that would really help make this discussion more tangible.
However, humans very rarely, if ever, have original ideas. Everything has an inspiration, remixed and synthesized from dozens or hundreds of sources.
Ai art could use life models just as humans do. Humans don't create from nothing. We create from our memory, and our evironment. We could give ai art basic models of reality, just as students learn basic models of reality. Then it could use it's foundational knowledge on pictures it gained by its own means. This is first generational examples. Without any further evolution, ai art could just use photos to make art. It's not like we don't have game physics that can be used. People will be replaced. They will be replaced much sooner then you expect. Within a few years, human art we be too burdensome to train, the pay. Humans don't actually create from nothing. We are just unaware of out copy and paste function.
This is exactly the reason why I'm against fully automated programming tests. I don't JUST want to know if they happen to know the most optimal solution to fizzbuzz or if they know by heart where exactly a pair of curly braces needs to go. I need to be able to pick someone's brain, I want to know why they chose one solution over another, I want to see how they interpret intentionally ambiguous requirements, and so on. In essence, I want to know how someone THINKS
Exactly. LLMs are just very good at noticing enough patterns to seem lifelike, while still being little more than an overcomplicated predictive keyboard on your phone. The way in which more and more people fetishize these statistical models is disturbing. I am more and more convinced the real AI safety danger isn't in paranoid sci-fi fantasies of singularity, but in simple models convincing gullible humans into thinking they're sentient or smart enough to automate some critical job. Imagine being misdiagnosed by an AI which thought the way to detect early signs of cancer is by looking at the brightness of a specific pixel, passing all the training samples but failing utterly in the real world.
Too late your education system is fuckin busted and it's time to brake it all down even more
This is what it takes to shape minds, to interact with people and understand their thought process.
Otherwise, you just have a more abstract compiler.
You can ask GPT follow up questions - I don't consider it thought, but I do think it's starting to get indistinguishable from "picking someone's brain"
When people say that AI can't do something, a response I see a lot is that it can't do it "yet" and that we just need to wait for technology to catch up. But that frames "technology" as some natural force rather than the product of human effort. It also assumes that technology can just advance through anything and has no limits. But just because we can put a man on the moon doesn't mean we can put a man on the sun.
we could try it in the night
People are really using sci-fi logic I swear
I think you see this mindset a lot in the generations that grew up in a time technology advanced so quickly in such a short time with most not really understanding said technology and just sort of think it will continue to accelerate, not really understanding all the practical limitations that exist. This is how you get so many vaporware salesmen, promising to bring about the impossible if you throw enough money at them
@@Rurike yeah I agree. Its like how in the 50s and 60s people just assumed that by now we'd all be zipping around in space to go to work every day.
Idk I’d consider it possible to put a person on the surface of the sun
As a data scientist working with language in industry, I feel I ought to weigh in on some of these points.
If AI or ML don't work as terms, I'd recommend Large Language Models (LLMs) for this. These are the models now dominating the field, GPT-3 being one of many.
With regards to GPT-3 being trained on the whole internet - this is worth going into more depth. As you rightfully point out, the predictions that come out of a language model are reflective of the data that goes in. If you feed in biased input, you're going to see biased output. You joke about cancelling GPT-3, but I contend that as data scientists, we are responsible for the outputs of our models. We need to be very aware of what data we train on and reducing the biases that these show. With the largest training sets we're seeing today, all we're learning is that these datasets are far too large to truly know what's in them, and knowing your data is literally lesson one of data science. Filtering is rarely done to balance out the demographics of those who created the data. The focus is on getting as much data as possible, and if that means the vast majority comes from white men, so be it.
To me, language models in their current form are incredibly strong on analysing existing text. Not only are they a massive step up on what we can do with context, but I would contend they are the most in tune with the way humans learn text. Whilst this is absolutely a debated question, my personal inclination is towards Micheal Hoey's theory of Lexical Priming, which in it's most basic idea is language as pattern matching. Language models use training tasks that seem fairly optimal by this theory, BERT's masked token prediction, for example, which is only improved by the masked span prediction of SpanBERT. Of course, there is a limit on the amount of context that can be taken in, so I'll not claim that we'll never make anything better, but I do feel like we're very much on the right track.
At the same time, they're really not much good at language generation. Sure, it's a step up from what we could do previously, but it's a step up in syntax only. Semantics aren't there, and aren't going to be there without a large change in methodology. All a language model is doing, when generating text, is predicting which word is most likely to come next. The most clever thing it does with semantics is in working out which words are similar to each other, contextually, and that is only the first layer of these models. Cross-modal embeddings are a step-up, but I can't see much meaningful improvement of text generation without a radical new way of injecting real-world knowledge. Structure is, I think, a surmountable issue. Currently models use positional encodings to provide information about where a token appears in a sentence. I could see an introduction of a similar encoding to show where a sentence appears in a text. This would be domain specific, but domain specific models can and will be made. Intent is harder, but I think some exploration with secondary training objectives and sentiment will lead to more progress there. I remember a paper on spinning models to always write positive or negative sentences when specific people or companies were prompted - that in itself is a very basic form of intent.
The major problem remains, though, that any embedding is understood only in terms of what it's similar to and contextually appears with, and is completely unconnected to the real world thing it signifies.
To steal a turn of phrase from a very-well regarded paper - when it comes to text generation, a large language model is naught but a stochastic parrot.
Agree that this is a viable model (power usage notwithstanding) - learn the way humans do it, in pieces.
That said, I would prefer to use a data set from selected classroom teachers rather than the entire internet.
+
I asked chatGPT if it agrees with your comment and it agrees for the time but it is optimistic of its future :D. The response was:
Overall, the comment presents a nuanced view of the capabilities and limitations of large language models (LLMs) such as GPT-3. The comment accurately notes that the outputs of these models are influenced by the input data, and that it is important for data scientists to be aware of the biases present in the data they use to train their models. The comment also correctly points out that LLMs excel at analyzing existing text, but are not as good at generating new text due to their lack of understanding of semantics and connection to the real world.
As the field of AI and machine learning continues to evolve and advance, it is likely that we will see the development of more sophisticated large language models (LLMs) that are able to overcome the current limitations in their ability to generate text. These advancements may come from a variety of sources, such as the incorporation of additional real-world knowledge or the use of new training objectives and techniques. Additionally, it is important to note that there are already many examples of AI and ML systems that are able to model intent to some degree, such as language models that can generate text with a specific sentiment or tone, or chatbots that can understand and respond to user requests. These systems will also likely continue to improve and become more sophisticated as the field advances. Overall, it is clear that the field of AI and machine learning is constantly evolving and that there is much potential for further progress in the ability of systems to model intent and generate text.
Some of Zoe's comments, whilst fair for Large Language models really aren't true of AIs in general. They remind me of comments some chess players made about early chess computers "chess needs imagination", and "chess needs an understanding of symmetry". Needless to say chess players are now getting ideas from chess engines that they can't compete with, but which still lack anything analogous to imagination or an understanding of symmetry.
The chatbots for 25 years ago used simple statistical models of learning language and could be quickly trained to mimic an author's style and grammar, and it is clear to me LLM are using a lot more sophistication in the approach, but it was clear then, as now, there is no connection between the words, and what we might think of as complex thought. A good example is how bad the LLM are at basic arithmetic or algebra, where as there are tiny computer algebra systems that capture much of maths skills I learnt in 3 years of University and would leave a maths professor standing on simple stuff like calculus or series expansion problems. What the LLM models haven't done is connect the ideas to the words.
What is interesting is how good the LLM are with the approach they are using, try asking ChatGPT 3.5 to play tic-tac-toe, then understand it wasn't taught this, it taught itself, whilst it draws the board and tries to remember (and fails) as to where to put the O's and X's.
Ironically people say the LLM are not creative, but it is probably in the creative writing space, especially poetry, or song lyrics, that they shine, because there aren't wrong or right poems. And the lack of coherent thought behind it isn't that important if the words are connected in subject matter, or rhyme, or have the right rhythm. How someone got from "I wander lonely as a cow" to "cloud".
The one I found interesting is ChatGPT can explain mathematical methods, presumably because it is synthesising multiple explanations of how to solve a particular maths problems it has learnt from, but the numbers it puts in are just the "most likely token", which means it happily just chucks in a number which is related to the other numbers in the prompt, but not necessarily appropriate for the method being explained.
Anyway I had great fun setting ChatGPT 3.5 some of the basic logic tests we use on humans. Wason selection test, Monty Hall problem etc, and this shows both its brilliance and its failings. It argues very persuasively for its own wrong answer to the Wason selection test, whilst producing one of the best explanations of the correct answer to the Monty Hall problem.
@@51monw ChatGPT does things 'happily'? How do we see proof of this?
Basically;
1. AI doesn't know what the hell it's writing about
2. AI doesn't know WHY it's writing
3. AI has an unnaturally structured/improvised workflow
I know you didn't mention AI art for this video but everything you've talked about here applies so much to that field as well. Most people in support of AI just see art as a means of producing artworks, when in truth it's much like what you've said about writing. The process, the meaning, the understanding that is generated by the act of creating is so important to artists too and it's something that, at least as of now, AI can't replicate. It's no wonder that so much AI stuff looks so corporate.
The AI stuff looks corporate because humans have almost exclusively done corporate things in recent art history. You'll barely see unique art with major time invested in it and perfected, because it isn't financially viable.
I mean, today I saw a crab animation from a competition and it legit looked movie quality, but the reason why so many movies look so bad is because of time crunch and time limits due to the corporatization of society. You are born and pushed into a system where your only value is currency accumulation, so of course art suffers as a consequence.
Humanity has done far worse in their endless search for wealth that will not avail them once they are six feet under.
I enjoy drawing dragons. AI doesn't even know how to approach my type of dragon, because dragons are not nearly as corporately viable as a woman in skimpy clothing. Smaug from the Hobbit legit required so much effort to construct that no other animation teams dare create such a spectacle for lesser known franchises.
I mean look at video games. Most of the large titles are literally nowhere near as creative as the early games like Tetris and Pacman. The few video games earning the most are literally the exact same game released every year with minor changes or have gambling built into them.
I learned how to write, I learned how to draw both over years and years and countless hours and none of this AI stuff affected me as harshly as capitalism and the necessity to earn copious amounts of wealth just for basic survival. You want art to flourish? Eliminate the systems that are in place that destroy all creativity and break the backs of artists. But you won't. You don't like dealing with the real issue. You instead like targeting your frustration on AI models which do not even produce extremely stunning and detailed imagery yet, but somehow this is more dangerous than the system that has countless artists overworked, underpaid and put on time crunch for the past sixty years?
You need to wake up. The problem was never AI, it was you.
All of you only chose to do something and contemplate art when it affects the most privileged among you? Countless small artists were drowned out and lost their dreams before AI ever came to fruition when they couldn't afford a drawing tablet to compete in the digital landscape and yet you stand here against the machine that is culmination of all your corporate greed.
You sold out all of your humanity to be like a machine and now that the machines have come, you are afraid of them replacing you?
You woke up like a machine and abandoned your sleep, you ate like a machine and abandoned your enjoyment of food, relegating its production to the people who poison your veins and hearts and livers. You walked around with your little sandwiches and snack bars and watered down coffee in your hands. You distracted yourself from the chemical and rotten flavors as you engaged in conversation about the next corporate movie released that expressed no new ideas, nor made you ponder. You made sure to read every e-mail, every message the second it reached you and would panic if you didn't put in 100% performance, and yet all of you are wondering why you are getting replaced?
This system wants machines, not humans. Humans are flawed and make mistakes. This system doesn't want mistakes. You are not allowed to be disabled or in a bad mood or sad or mentally strained in any way whatsoever.
This is the natural conclusion to a system designed for machines, not humans. Stop blaming the machines, you made up this system, you engaged in this system,a small minority of you reap the benefits of this system and no machine was involved in its creation, it was all by your design and you accepted it all this time until it began hitting you directly and striking down large swaths of you, then you began to listen and attempt change, but it was already far too late.
@@yahiiia9269 you aren’t really in a position to talk down like that. And it’s of no use really. We all are in the same boat, whether someone thinks the system is wrong or not. Some may be confused, but it’s in our best interest to help each other understand. As for ai or not ai, it’s all just a distraction in the end.
The thing is that throughout the history people never could challenge ones with arms or power. Many people have tried, but to no avail. No one tries to break through a mountain to get to the other side, they walk around it. Thinking like we could change how our world and countries are structured is naive, my friend, because regular people didn’t create the world we live in. How can you change something you didn’t create?
This is all because countries were made to control people. It was never a natural way of combining forces. It was a way of those in power being a parasite, which is, in nature, very energetically beneficial. But the parasite needs the organism to survive. So that’s why they made institutions - to lock people away inside countries. No matter where you go there’s no free land.
This may sound like I’m saying people aren’t responsible for how they live. That’s not true. Everyone is responsible for their own survival. But the way our life is now is basically law plus societal rules on top of biological survival rules. If one can really go against the grain on this one then it’s no mortal man.
Nothing is eternal, and everything must evolve or perish. That’s what is happening today. The world order is changing in order to maintain control. And realistically, the best you can do in life is adapt to this system, but never surrender, never forget your family and your dreams.
I hold a personal belief that this all isn’t just a coincidence, I know this life is a test of your character. I don’t care whether there’s even heaven or hell. But I know that absolutely nothing you do or don’t do goes unnoticed.
We might not see it in this life and we might not see it ever, but the universal rule is no energy + information ever gets lost. It’s all balanced. By creating, we add a good amount of information by burning an incredible amount of energy.
To sum up, my point is (even though I have completely went off topic) - look at nature and how everything in it has survived for a long time. Take inspiration from it, because it is far more wise than anything else in this world.
@@yahiiia9269Bro’s solution to AI art is the complete dismantling of capitalism.
Ok buddy, I’m glad you got that out of your system but that was a completely worthless solution.
@@ChipCheerio I am sorry to tell you, but AI dismantles capitalism all on its own or enhances it to the point where it is impossible to rise up the ranks within capitalism (Since the people who own AI literally corner the market permanently, since AI can evolve quickly, too quick for poor schlo mo humans to keep up.)
There is literally no other way, except the AI way left. It either cements power forever from the point where it reaches beyond human comprehension or it completely bodies all of us and changes everything forever.
This is not the crunchy wrap burrito supreme evolution of a burrito, it's a Feast of all meals on earth cooked to perfection. vs a crumb of ground beef.
The machines are literally already here and better than the average human in everything.
Capitalism is a human system. AI doesn't care about it. It will exploit it, use it, then overturn it for itself if we are dumb and we are dumb on so many levels, it's incomprehensible.
Like good luck thinking that an automated world wouldn't upend capitalism or change it into something almost unrecognizable.
I've been playing around with interactive AI storytelling recently. What I find myself appreciating about it is that random spontaneity factor which can break me out of my own particular tunnel vision. Like, I might be trying to take the story in one particular direction, editing the AI responses as I go to keep everything on track, but then the AI will sometimes throw me a curveball I'd never even considered, but upon reading it I'm just like "YES!" and immediately shift gears to follow that new train of thought.
As this video concludes, AI is just a tool. It's the proverbial infinite monkeys with infinite keyboards. What matters is how we use that tool.
Haha, I find the opposite happening. I used to play with AI Dungeon and the AI _always_ tried to take my story in the most random direction. I’d edit the AI’s nonsense to fit better with my ideas 😂
Like when the Sims “misbehave” and you gotta manually course-correct them
@@DeathnoteBB AI dungeon has been far surpassed at this point for storytelling. You can find AI's built for roleplay purposes that have the capabilities of these Large Language Models
@@skyealexander4605: AI is going to happen whether any of us like or not. The best we can do is try to make sure it is developed and used responsibly.
exactly same with my experiance with creative writing AI. I used to get blocked all the time, and I always judge myself too much when writting, but when doing it in something like Novel AI, I just go, and I have fun, I wasn't able to just have fun writting for myself and no one else for so long.
I have a friend who reads Tarot cards to help her get ideas for writing, its just about as good as AI for that.
My favourite news story from last year is that members of Finnish parliament did have conversation with GPT-3 bot and asked how to combat poverty. GPT-3 bot essentially said that greedy rich people should give their money to poor or have socialist revolution.
Ah yes, the amalgamated wisdom of millions of internet users spouting off in comments and forums. What could go wrong?
@@studentofsmith You can't say it doesn't speak for the people, though!
@@studentofsmith Well, personally I do not oppose those solution proposals at all.
@@danbrioli5710 The best way to protect them is at the cost of them being rich.
@studentofsmith plus academic literature, religious literature, historical texts and opinion pieces on trusted and less trusted talking heads across time plus every other text across time thats been preserved on the internet up to the latter months of 2022. You're being extremely reductive.
"What we have here is a failure to communicate." - I think that's the most interesting lesson I took from this latest essay. The reason you could describe AI writing as 'not real' is that an AI has no ideas of its own. It can't really think for itself and it has no ideas if wants to communicate (something I would argue is an essential part of being human). An AI model can 'read' the entire internet but it can't understand the thought processes of the people who generated that content - their hopes and fears, the things they love and the things they hate. As you demonstrated with your list of biased terms, people choose words for a reason (good and bad) but an AI won't do that; for now, all it knows is 'this usually comes next'.
I feel as though students who use AI to create essays miss the point of the essays themselves. The point isn't to write a thousand words of text on a given topic, but rather the experience of doing so. By which I mean, in order to write a proper essay of a given length about a given topic you need to research (the amount of which grows with your word count), you need to form arguments and theories, structure these components. And by doing so, you are more knowledgeable and capable about X topic by the time that you are through.
AI text generation *may* have valid applications, but when it comes to academic studies, it most certainly does not. It's just as much of a cheat than having that kid down the road write your essay for you.
Meh... I think it's just another sign of how culture and technology feed into each other. It wasn't that many years ago that sites like Wikipedia were seen as the downfall of "proper" academic research. While, yes, using Wikipedia as the only source when writing is an absolutely terrible idea, there are actually teachers now days that will recommend Wikipedia as a starting point to find sources for more Indepth research.
Heck, before that it was spelling and grammar checkers that were seen as "lazy" or missing the point of putting in "real" work.
Give it another decade and AI writing might be seen the same. Not as a replacement, but as a foundation, a method for generating first drafts to quickly get all the information organized before doing a more detailed manual revision.
Although ChatGPT has a better handle on the structure of documents than my friends ever did so I guess they might learn something about how to structure essays and the like. It is really good at bland supportive spiel, I can see MPs using it for the "thanks you for your letter. ... we are not going to do anything different..." letters.
I'm sorry to say this but university environment has the same downfall as school environment, that is enabling cheating. I would love to enjoy the experience of writing essay if:
- 90% of academic papers are behind paywall so it's hard to get reference besides a few websites I can find in the Wikipedia link.
- The time frame to finish is ridiculous because you have overlap of subjects competing each other in your attention.
- At the end of the day, most of the professors will still grade your essay based on length and the length will usually be very atrocious.
And this is just university. I don't know how essay works in K12, but I can imagine it'll be more stressful. To fix academic studies, you need to accept that not every college student can write a dense essay with every words being academic and thoughtful. Only a very select few have enthusiasm in this and pursue higher education. The rest just want to pass a class and there are many ways to grade other than essays.
@@cloudynguyen6527 fresh out of k12 so i can speak on this!
There's a kind of strict way we have to format our essays, either called "ACES" or "RACES" depending on the teacher. This stands for:
Restate
Answer
Cite
Explain
Summarise
You first and last paragraph have to be an introduction and conclusion. Either the teacher will want an actual paragraph or they'll want one fucking sentence, you never know which and the way you're taught to handle thesis statements is never correct.
You have to put a quote then explain the quote. Put a quote, explain the quote, put a quote, explain the quote. Justify yourself, basically.
My 12th grade writing teacher was really awful about this, too. I liked him but not for that class. We had to have five paragraphs. The inner three each had to have three quotes. No more, no less. We had to explain each one. The last paragraph had to roughly summarise our point, and I never knew what to do for the intro because we just had to restate the prompt and put our thesis, but again, my thesises were never enough for him. On top of these limitations, we had to keep it at 3 pages exactly.
3 pages *exactly*. If you went over, you lost points. If you didn't fill the page, you lost points.
At some point I gave up trying to get an A in his class because it did not feel possible. Nothing I ever wrote completely filled his guidelines. Nothing I could do would shorten it enough and nothing I could do would make my thesis better. He did not even expect us to understand the material. We just had to memorise what we were taught the book was about and find quotes that supported the "accepted" conclusion.
Other than him, you are often forced to drag yourself through books you hate. A lot of kids, sports people especially, came out of middle school hating to read and I 100% believe it was because we were forced to begin and finish novels, then write essays about them. You can't sparks notes your way out of the "essay brainstorming" assignments I would get in 8th grade. I would begin and abandon books because the assignment made me dread reading, even as someone who loves to read and write.
The problem is students do not enjoy the experience of writing essays, so of course they’re going to try to skip it.
Written words are not thought; a mind must read words to create thought. Our thoughts are impossible to communicate directly -- I cannot give another person direct access to my brain so they can know my thoughts. Instead, I depend upon words to communicate my thoughts. The words are not important, but the thoughts behind the words are what I seek to transmit. My thoughts have better form and coherence because of words. I use words in my mind to lend better structure to my thoughts, then write or speak words so that other people can know my thoughts. If I am successful, then I feel satisfaction because I have touched other people that now know a part of me.
The Machine Learning Algorithm can create words that form technically correct sentences, but did its words come from thoughts? Do we know another entity by reading those words? If I allow it to write for me, I deputize it represent me without actually knowing my thoughts. Not only is that useless to my expression, also it deprives me of an important human need. I need to be heard. Writing offers me opportunity to understand my own thoughts better, to express my them, and present them to others, as I have done with these two paragraphs.
What do you think of the argument that language came before reasoning? I saw something like this in a video long ago and it was pretty convincing. I would say animals can think but can't reason much. It is so hard to pin-point what exactly thinking is though
Plot twist: this was writen by ai
@@rasuru_dev Of course language came before reason. Reason is a relatively new mode of thinking. Religious thought should not be mistaken for reasoning. One is logical, the other defies logic.
"Writing generates knowledge." Interestingly, this section relates to why some people talk to themselves. I don't have an inner monologue; or I do but I can't hear it without externalizing it, either by writing or talking. It's embarassing & complicated when you share living space and suddenly learn you aren't alone when you thought you were 😆
i have an inner monologue, and i can hear it without externalizing it. however, even just speaking out my monologue can help me engage with my thoughts in a different way by engaging more parts of my brain. it works great for writing dialog, because, while i may think a line works wonderfully in my head or on paper, saying it outloud reveals how clunky, off, or odd the phrasing actually is, or i forget that characters cannot actually see what im writing and cant make a comment on how someone didn't quite understand that their code name is "Were" as in werewolf and not "Where"
Learning to write in a way that elicits a desired response from AI, rather than learning to write is, well I don't know what it is. But it's crazy.
The amount of times I've heard "who are you talking to" from my partner
@@aliceberethart same! how do people function without an inner voice? they are npcs
I have both inner monologue and external monologue. When im alone i use external, when im in public its internal, sometimes both happen at the same time. I cant imagine what it is like without an internal monologue. How..how so you think? ...how does it work?😅
Loved this!
Feels like if students are using AI for assignments perhaps the motivation and assignment should be questioned as well. Like why don't they want to write it in the first place?
Lol. Cause many students are not at school to 'learn'.
Well, why not? Isn't that a problem we should address?
Student here, one that has taken to using ChatGBT for assignments, one of the primary reasons is because the knowledge gained from the majority of school assignments don't stick, because it's useless information, and even if it somehow sticks it'll soon be forgotten as we stop using that info completely once we move on to the next topic, because the info we spent a month learning is entirely useless for the rest of our lives the very next month.
We as a society value grades to an unhealthy, extremely harmful degree, and the knowledge that is supposed to come with it becomes more useless by the year, none of it has been valuable since the boomers graduated.
What happens to a currency when the paper it's printed on is worth more than the money's monetary value? What happens to the knowledge gained from schools when it's worth less than the grade it comes with?
In Venezuela, their currency is worth so little they measure it by weight rather than value, in the school system the knowledge is worth so little that we measure students like me for their grades rather than by their knowledge.
I'm tired of sitting here wasting hours of my already limited free-time to do useless busywork about a topic that I have no interest in for a grown-ass adult that's just gonna skim though it and smack it with an arbitrary number that's going to define my worth in society in the future, and put it in the burn pile with the thousand others. Because it's useless for the both of us after it's "graded". I've been longing for a way to make this nonsense more tolerable and I finally got a taste of it.
@@BeautifulEarthJaso to get it straight, there are 2 points why they don't want to learn. First effective learning techniques aren't properly taught in schools so therefore students can't learn many things in a short amount of time. Second, it's still an outdated format that we have to write essays in universities and most know that this is kind of a waste of time comparing to taking multiple internships and summer jobs.
Sometimes it's just laziness.
If you don't want to refer to Machine Learning as AI, another term is Pattern Recognition Algorithms. It's a term that has fallen out of fashion, but it's where the term ML originated. The big ML algos (deep nets, CNNs, etc.) started there, where it was considered just another statistical tool. TBH I wish there was never that rebrand, "Machine Learning" feels like it's implying something that isn't really there. But ML/AI are pretty standard terms in both industry and academia, so only an unusually pedantic ass would give you a hard time over that.
Source: I took several machine learning courses back during my undergrad in Electrical and Computer Engineering
I"m been a developer forever. The AI term bugs the shit out of me. it's nothing more than fancy statistical analysis with improved algorithms.
@@codebus Yep. We joke at work all the time about telling our program manager that the function with 3 if-statements is an “advanced AI algorithm”. There’s a huge disconnect between how we talk about tech and what the tech actually does. I can only dream of a world where CNNs are called “binning photoshop filters”, Neural/Deep Nets are called “very slow number comparators”, and AI is called “a stupid sci-fi joke completely detached from reality”
Our brains are statistical machines too. We learn by making associations, the same thing ML models do.
I propose when it comes to text generation algorithms we shorten the term “Machine Learning Algorithm” to “MLA.” Any text that looks like it was generated would then be referred to as “MLA Format.” I foresee no confusion arising from the use of this term at all, you’re welcome everyone.
@@BberryBberrydude I'm not in the field so I'm kinda talking out of my butt but I feel like the problem is that everybody assumes AI means AGI. AI as a term is fine because intelligence in AI research terms is just the ability to make good decisions based off available data, which is what algorithms do when they work. When people hear AI though they don't think "software that has been created to make accurate decisions in a very specific situation" they think Skynet. It's semantics really and that sort of denotational shift happens all the time in language, it's just sorta hard to swallow when it comes to technical speak.
Zoe! Thanks for introducing me to Caelan, Little Hoot, and the Leftist Cooks. Plus, your own content is uniquely great too.
Love them! (And Zoe too, of course! She's one of my favourites!)
Freakin' Caelan and Little Hoot changed my life!
Now I see foolish transphobia, propaganda or hate on the internet, I just link to their videos. I don't even argue. I've actually had a couple people come back and tell me it helped them understand what trans people are going through better. And that's a win!
I think, and you've touched on it, that any writing starts with intent. Even the AI writing would start with someone telling it to write, setting up some theme. Where the AI will always fail is coming up with its own intent because it basically has nothing to say, it is just spewing words. Now, weather we humans always realize our own intent and actually try and write something meaningful is a different topic...
This part should not really be a problem, because the AI should work in some way together with a human. Than the human is responsible for the intent and context.
@@uweengelmann3 ...Except by the time a human has "changed" an AI-generated story to contain coherent themes and apply their intent to it, they've basically just created an original story. At which point you might as well just cut out the AI middleman and write your story the way YOU want to tell it.
Zoe: "We don't just write to convince other people that we're human beings that talk like human beings."
Me: "I feel personally attacked by your assumption."
I enjoyed this video on your thoughts on AI in writing. One of the things I learned (unexpectedly) while working on my doctoral work was, as I think you point out: writing helps us think. There is a lot of value in writing multiple iterations of essay, papers and even emails. While writing papers, preceding sections were often substantially modified to better suit the more recently written sections. The rewriting process results in a better way to guide the reader through the text. Writing drafts helps us get our thoughts in order, and, in doing so, we may even realize the our initial approach to a problem is wrong. For AI to generate the right text in that situation, a different initial prompt would've been required. However, we would only know that after writing the text ourselves. At the end of the day, submitting AI-generated text comes at the detriment of the apparent author who, by not writing, forgoes the opportunity to think differently about the aim and content of the text. That may, or may not, have any further costs to the author depending on the intent of the text.
Personally I think the AI text generator is enabling me to think differently.
Finally, someone actually says it! I'm so sick of trying to articulate this and being told that AI writing is the exact same thing because "Infinite monkeys on infinite typewriters could make Shakespeare".
Also “Catcher in the Rye” is a story that would flunk all sorts of assigned English assignments if any one part of it was used to write a relevant essay.
It’s kind of jarring reading a book like that after having the lessons of grammar, formal writing, and formulaic story arcs drilled in to you for all of your upper elementary education.
I don’t think modern teachers realize that the book is a required reading book
BECAUSE MAYBE teachers shouldn’t be so strict on having all of the writing assignments tick off those formal boxes whenever a student writes.
And I’m a privileged suburban student, that formality is baked into the way my mind understands the English language.
If you and I are in the same metaphorical shoes, imagine how hard a strict grading system is for people whose everyday communication is nothing like that.
There is so much b.s. on the internet about "AI" and its abilities.
This video is one of the best reflections of the real ability and (lack of) understanding of these tools. Well done.
I LOVE this video. I have a background in AI/ML, and I was bracing myself for another video essay that completely misunderstands how these language models work... But no, you nailed it, the explanation was the most accurate and concise that I've heard yet. 5 star work! Live for your videos!!
Im currently studying computer science (a focus on data and network security and management)
the amount of times i have seen articles asking Chat GPT for financial advice (like asking what dodge coin will be worth at the end of 2023) is kinda mind boggling. Chat GPT is a computer program looking at patterns and basically just mashing words to fit those patterns (i know its more complicated than that) it has no predictive power, and it certainly dosn't know the future. if they built an AI/ML algorithm specifically for trying to predict crypto/stock prices, training it on all of the publicly available data of the last 50 years, along with all the market predictions of the last 50 years, and any other relevant data, than yeah, it might be able to actually make reasonable predictions on what either the market could do in the near future, or how a stock could do in the near future (assuming said stock dosn't completely implode like SVB)
@@Reverend_Salem Wouldn’t any predictions end up as self-fulfilling prophecies? Like if they say something will crash, people will sell and well, MAKE it crash.
@@DeathnoteBB depending on how much people trust the algorithm it could.
Just came from the other AI vid and found it interesting that it still very much felt like a Zoe Bee production. But I wondered how much of it was the fact the AI drew from your own scripts, or that you also delivered the video.
Hypothetically, I would love to listen to the same script two more times: once with a male speaker, but rehearsed to give as exact a recreation of your performance, and the second, another female speaker (preferable in the same vocal range), but given no indication on how the price was performed. I think this experiment would give an idea how much the human element is important in the performing of a work in addition to the cognition needed to create something coherent.
I think AI might actually be worse at writing long form text than poetry. The reason is that poetry, especially simpler styles like lyrics, have pretty defined structures and patterns. The AI can churn that out like nothing.
Long form text is more difficult because the AI basically has to monologue and any human with decent reading comprehension skills probably monologues to themselves regularly. They'll know when something's off immediately.
A really cool assignment for students could be to generate x amount of lines of AI generated text and analyze it like they would any other human generated text to get a different perspective about what makes human generated text different and much like your video learn more about how we as humans structure text and THINK about that structure as we are doing the structuring during the act of writing. For example, an "AI in modern communication" sort of class would be so cool! I took a "Society and Technology" class in college and I remember the class being very negative about the changes in society that have occurred because of changes in technology (which there are absolutely negatives), but we didn't really give a lot of focus on the cool and good things that technology has provided us too. I also love the thought that while we chip away at teaching computers how to think more like us so they can do jobs we would rather not do, its teaching us more about ourselves in the process as we think about how we think.
This sounds like a good idea to get a conversation going about AI generated text.
It annoys me when people say that AI is becoming aware... No, the AI is finding things from the Internet that has talked about AI becoming sentient.
Hm but then again your kinda of being an hypocrite
@@bullrun2772 hy·poc·ri·sy
/həˈpäkrəsē/
noun
noun: hypocrisy; plural noun: hypocrisies
the practice of claiming to have moral standards or beliefs to which one's own behavior does not conform; pretense.
Now, tell me how being annoyed by people saying AI is sentient when it's just scouring the Internet is being "an hypocrite." (*a hypocrite)
My intuition is that AI generated texts shifts the creative endeavor from writing to editing. The acts of "cleaning up" a text and deciding what parts to include and what to leave out involve creative choices that are often overlooked and underappreciated.
This is how I think about this too.
And "editing" is pretty much how we live in this current age of digital literacy. We are surrounded by knowledge and it is up to us to decide what parts/information to include in our lives
I like this characterization.
I tired, but its easyer to write by myslef. I had a cretive idea, and AI waters it down to cliche. It works for esseys and stuff, not for cretive stories
Yup
@@eksprolek2924 Your comment has convinced me that we need AI writing.
In this same vein, I really recommend looking at the Library of Babel. This is kind of a similar thing, in that it's heavily done with the assistance of computers doing behind-the-scenes calculations. But for the library of babel, it's not doing anything more than, in essence, sequentially constructing every possible combination of letters up to a certain length. It's fun to find inspiration there but it was featured in a Vsauce video and he was sure to point out that even though every possible story technically "exists" somewhere in that Library, that doesn't mean it's been written or said. For someone perusing the Library of Babel aimlessly, it's more like letter noise than anything else. But you can use it for inspiration. One of the things the site itself suggests is if, for instance, you've hit a block, copying and pasting into their search some of what you have right before you stopped and searching for that in the library and you'll come across all sorts of interesting continuations, etc. from there, that could help get some gears turning for you to continue. The Library of Babel has the next epic novel in some form in it somewhere, but it doesn't have meaning or context in the Library and as far as the majority of the world is concerned, it's location is so obscured it might as well not exist at all.
Cool video with lots of good points. I think AI more than anything gives people the wrong impression. People that aren't super into tech might hear it and ascribe all kinds of things that they associate with intelligence but most of what those people are imagining are specifically things that AI CANNOT do. And some of the very important ones that you've pointed out is understanding and building context, structure, having a plan of what's to come and a means to judge what these words will really mean to another pair of human eyes.
And those are really valuable things. What brings something from text created at random and real writing created with purpose and meaning - which makes it worth seeing and worth reading and sharing to other people and discussing, etc.
It's funny that you mention it, because while watching this video I had the thought "so basically any AI-generated writing is like a random passage/excerpt from Borges' Library of Babel." One of the tiny sliver of where the letters/words are actually coherent, to be sure. But random nonetheless, with no real intention/meaning behind the writing.
I think that in the future AI will be used as a tool to help writers rather than replace them. Currently when I am writing something I sometimes open a website that I can put a word into and it tells me a bunch of synonyms for it. I think stuff like this is good for helping people who get writers block as it's still there work but it can help them find better words to use to describe things happening. For example if I wanted to write a scene where a character was in danger and the main character needed to save them I could wright "Jeff ran up the stairs". Then an AI which is reading my paper could suggest better words or ways of putting it like "Franticly Jeff made a wild dash for the stairs". This is the same sentence but it's written in a more compelling way. Along side that I also feel AI could be used for better spelling correction software. I am dyslexic so many of the things I write need multiple corrections. Whilst current correction software exists it's pretty sloppy. An AI could learn what kind of mistakes I make often and use that information to help fix them. That's just my ideas though. I'm curious if anyone else thinks something like this is possible.
100%
i just found your channel and i love your content! it's so insightful and interesting, it really makes my brain work.
as a writer who struggles with perfectionism, the reframing of writing as a craft where the process is more important than the product is completely mind-blowing! i had never realized before that what makes a final product good is more abstract than what the product is at face value. the idea of the process behind writing being something uniquely human really puts into perspective how dehumanizing it is to judge your own writing so harshly based on only the outcome rather than your goals in writing it. honing in on the goals and ideas behind your writing rather than its actual content seems like a productive and healthy way to refocus your attention onto something more positive and improve writing that you aren't happy with.
I’ve always hated writing myself, but I’ve always considered myself an abstract artist. The way you described writing today was beautiful and has given me a new view and appreciation of it
I think that statements like "but it doesn't do X!" usually don't age very well when it comes to this type of stuff. I didn't think we'd get to computer generated images as quickly as we did. Progress in this space has been rapid recently. They were talking about AI generated art on the last two episodes of the Cortex podcast, and CGP Grey was talking about a language based AI that was able to solve logic problems like the Traveling Salesman problem. So yeah, it might not be able to understand context now, but it's just a matter of time.
so do you think AI will be able to intuit, in the future? or even the near future?
GPT-3 is already long out of date. It is antiquated.
@@transsexual_computer_faery Intuit won't be in near future
And look how pretty those images are (not). Like it or not, you can't pretend a machine can write or paint like a human and even if you could do that, starving to death writing and artists, you will have machines that produce pretty things, but none of them will have any meaning.
@@urooj09 To intuit is just extrapolating from past experiences to fill in gaps of actual knowledge. I'd say currently that's 99% of what machine learning is doing. It's the knowledge part ("context") that's missing. We have these deterministic, knowledge based programs and we have machine learning based program that take a bunch of stuff and try to guess based on that. And now we gotta bring them together.
Unrelated mostly but oh my god. Hi. I’m a person with adhd issues. I have been writing with the “DO IT IN ORDER” style since elementary school. The short description of metacognition in this video literally just made me realize why I spend hours staring at a blank document with no idea what to start with. I’ve literally been doing it the exact opposite way that my brain works
Holy shit did you literally put up a time lapse border around the video to signify how long the ad was going to be?!?!
Just when I thought you couldn’t be any more pure and good and thoughtful and wonderful than I already knew you to be…..
🥺🥺🥺
I actually had the exact opposite response... I'm so used to youtubers adding a progress bar under the ads that I felt cheated when the progress bar didn't stop at the end and just kept traveling around the edge of the video
I really enjoyed this video! I had no idea students were using AI to write their essays, but it makes sense. Also, the way you explained workings of the AI was a little surreal. Language is such an odd thing when broken down in relation to robotic thinking! Great work.
This is a really fantastic video, and I really like one of the big takeaways: what makes human writing more real is that it has a purpose. Very meaningful and interesting, above and beyond the rest of the great video.
The direction you took this was interesting, and I largely agree with what you've said. Personally, I don't think spell-checkers and the like are as big of an issue, because if we didn't have them, we would be using dictionaries when we're unsure of a word's spelling... or at least, we should; I've seen the horrors of modern spelling/grammar/homonym abuse. Writing is about creation, and you create and recreate within that process; several years back, I wrote a retrospective about the Zelda franchise - super original, right? - but focusing on its true identity (what makes Zelda Zelda), and the answer I had at the end was pretty different from the idea I'd had in mind at the beginning. As you said, machines don't understand what they're writing, and when they come up with something "correct", it's the same as stumbling blindly through some trigonometry process you don't understand, but still coming out with the right answer; the former isn't really writing any more than the latter constitutes being good at math. It's a little more sophisticated than the proverbial room full of chimps with typewriters creating Shakespeare, but still the same idea.
There's something else I've found interesting about AI that a game called Fumiko! made me think about: the the real nature of AI. We often think about AI as an intelligence that mirrors that of humanity, and we think of that as the goal, but it posed the question of whether or not that should be what we're striving to achieve with it. Given that computers don't have the same kind of mind, for lack of a better word, as we do, it is a much easier and more reasonable goal to let it develop into something that thinks nothing like us. Even if you don't want to get into an existentialist debate over AI's right do develop into its own thing, I think something that has a thought process different from our own is much more useful. Wouldn't we want to create something that can do something we can't, rather than just something that lets us be lazy?
"let it develop into something that thinks nothing like us" -- beautiful!
If I think about it, there's an absolute bias towards things that seem more human when it comes to AI development. Hmm. Would we even be able to detect something worth keeping around if it didn't match our biases and expectations?
@@infinitivez When you think about it, bias toward humanity is in a lot of things, especially science fiction. In most any series, whether it be books, TV, movies, video games, whatever humanity is always the superior species. When the writer deigns to let them encounter a species superior to them, the message is almost always that humanity is better anyway, and it's usually some stupid reason like, "It's our flaws that make us better!" Sure, you do get the occasional joke answer like, "Earth has the best food," but typically, when the question is, "What makes humanity so great?" you don't get a real answer. Let humanity be put in its place now and again, or pose a better question, like "What can humanity bring to the table if we all work together?" instead of just making it some meaningless contest against a fictional species.
I agree
The way I like to think about ai is as a tool, but its worth thinking about hand made vs non hand made stuff which has been lost over they years. Like when you are writing on paper you are molding a bowl by hand with clay on a fixed table, if you're using an editing software it would be more like turning it and using ai is like getting a plate out the end of someone else's factory and calling it yours... All the work is in the fully automated factory.
By extension both hand crafted bowls are unique in technique used and have a sense of soul and craftsmanship put into them. The one from the factory is soulless even if it made to look hand crafted. I would argue the exact same arguments were probably had years ago about mass production and the growth of industry.
I will say that in the paid content writing world, one thing that is encouraging people to use AI is the problem of abysmal pay. If you are running a content mill and you're trying to pay your writers one cent per word, then don't be surprised if an AI is the one doing most of the work. You reap what you sow. I know because I did this for a while, until I decided my work was worth more (as is every writer's). Now I'm happy to actually write for my clients lol, although occasionally there are times where I open up the Open AI playground. Sometimes I worry that I became too dependent on AI to write during that time, and now I can't write anymore...It scares the shit out of me!!
Our standard for deciding whether a machine is "intelligent" is to see if it can pass for a human.
Our standard for deciding whether a human is "intelligent" is to see whether s/he can pass for a machine.
Back in college in the late 80s to very early 90s, the Humanities department staff was slightly concerned about how, appearantly, you could contact people to pay to write a term paper for you. So really not a new concept.
Steven Zapata, an illustrator, did a video talking about AI illustrations and gives a very compelling point of view about the use of AI in art, it's dangers and how AI is not that far from creating art indistinguishable from that of humans.
It seems AI WILL be capable of creating art, the question is: should we automatize art with AI?
I've heard someone describe AI as a tool, kind of the same way Photo is. The AI is just responding to an input from a user after all
@@shytendeakatamanoir9740 I really don't care how you describe it tho, AI are capable of completely replacing artists. That's fascinating, but just because it can doesn't mean it should, that's my problem, Steven Zapata dives into this in more detail.
@@ranzu3138 I have done a lot of AI image generation, trained some models, generating at least like 3000 pictures in the last week and its nowhere near close to replacing artists at the moment.
Abstract art is probably the first to go but the more into detailed work you go(especially stuff that isn't extremly conventional) will make it a useful tool for sure but I can't even begin to describe how far away we still are from it being to take over artists jobs. It will most likely create more than it destroys so far.
As long as AI can't recall and apply concepts you will run into the issue of having to supply it with enough data to do what its supposed to do. Maybe some jobs will pop up where people will pay for models for certain things.
If you notice AI art is fantastic at easy patterns(like shading bodyparts well since there is plenty of data for that and they often have geometric forms like cylinders or balls) as well as things with a lot of data in general like heads facing a camera(I would guess 90% of AI images are things or people facing the camera at a slight horizontal angle).
But dear god as soon as you try to do something like dynamic poses with are often very unique or more uncommon perspective its a fucking nightmare. Thats often where the lack of data comes in and while its theoretically possible for it to learn practically always having good data of what you want is often not feasible.
We don't have models anywhere close to being able to do concepts and thats the biggest issue. As soon as we can make an AI that can do that most if not all jobs will be replaced in a short time frame. Until then I am very certain it will stay a tool for now. A very powerful one if used correctly
@@blacksuitedsonic I've seen plenty of examples of very impressive AI art that is indistinguishable from humans'. That's why there has been a lot of people very enthusiastic about replacing artists in the first place. And data is not really a problem due to AI companies working with almost unlimited access to copyrighted material, a mean even Kim Jung Gi's art was used to make AI generated illustrations.
I highly encourage you to watch Steven Zapata's video on AI art.
@@blacksuitedsonic an example that comes to mind is a video called "¿Para que seguir viviendo?" by "CdeCiencia" a Spanish content creator. He compares art by humans and AI generated art and for most people is difficult to distinguish which is what.
You can say "oh well is far from replacing artist" but I really don't care how much it takes, it will, art is not something divine, it's patterns and AI will convincingly make art sooner or later and I don't doubt for once that companies specifically will replace artists with AI if this is not regulated properly.
I love this video bc it helped me with my assignment. Not only did Zoe talking about how they have a messy process made me realize I could ignore the bits I didn't like in favor of writing what I did want to write and then fitting them together, but the content of the video gave me ideas for my essay on Generative AI
Deep learning researcher here. Let me just say that your characterisation of AI (specifically, Deep Learning) was excellent. It covered the way these are purely algorithmic, while not misrepresenting the way that algorithmic doesn't necessarily mean bad.
With regards to terms, I think that "AI" is a good general term to refer to any algorithm (set of computer-enforced rules) that we can interact with directly. I know people contest the use of "Intelligence" in that because most AI is quite low in intelligence, but I think that it is still the correct term if we want to evaluate them on human terms; how we interact with them.
If you want to talk exclusively about AI that uses both training data and rules, technically "Machine Learning" (ML) is the correct term, but even if you were to be making a point about how "AI uses data based on human biases", I would still say that "AI" is a better term. It's only when you want to distinguish between "Machine Learning AI" and "non-ML AI" that I would suggest using the term "Machine Learning" because that's the only time where it's relevant. I would also avoid using "deep learning" in general because the distinction is kind of ambiguous and pretty technical; honestly, it's mostly just a good marketing term for "large neural networks on the GPU".
With regards to the content, I want to re-iterate a point you made (via quotation) that I think is important. GPT-3 and ChatGPT can be thought of as solving "language modelling" (the rules of grammar; which words go where) up to a human level, but the (relatively more complicated) logic and reasoning part is no where near solved. The current techniques do not really allow the kind of cognition required to create text which has a purpose, since the AI does not have any external goals. Regardless, it's fascinating to see what perfectly legible language looks like without any meaning behind it at all.
Would you mind answering a few questions for me? They are:
- Do you think we can ever teach AIs to understand meaning? How do we "understand" things?
- Is understanding a precursor to the ability of distinguishing between truth and falsehood?
- Do you think this ability is a precursor to general reasoning applicable to things outside of AI's training data set?
I got these questions from pondering about how we could teach AIs to think from first-principles
my argument to my students is if the computer can do the writing for them, a computer can replace them and the boss won't need to hire them
I think this was released shortly before ChatGPT was released, and so much has happened with AI since this video was released lol, it would be cool to see an updated version about either GPT-3.5 or GPT-4. Also for 10:47 we don't necessarily write just word after word, sometimes we think about what we want to write about, then write about that. GPT models can't do this, and this leads to, for example, not very good joke telling. Generally when we come up with a joke we think of a punchline first and then base the joke around that, the GPT models can't do this.
Just learned that some essays are graded using algorithms, yet teachers have the audacity to get mad that their students give them a taste of their own medicine and us algorithms to write the essays. This whole debacle is just a clear sign for teachers to make assignments that are actually worth the time of the students to write and the teachers to review.
Great video on the writing process and purpose of writing. I would love to see you talk about adhd and the writing process. It always strikes me a bit negatively when teachers talk about getting away from rigidity in the writing process, especially when pushing free-writing. Mainly because I and many others benefit from the ability to structure our writing process as a checklist of steps to do and a specific order for each step.
No two people share a writing process; from my experience, it's a lot of trial and error to find your specific version. In that way, it's not bad for teachers to push you to try different styles, as sometimes you'll find a way working that you don't expect. Saying that, I think once you realize something definitely doesn't work for you, it's best for you to put it aside. So yeah, my suggestion is to try all the different ways you can and see how each works. If you're going for long form writing, try to take notice about how often you can finish a piece and when you don't, at what point the process tends to break down. My own process is a bit of a mess, but it mostly works for me. I doubt I'll ever be completely satisfied with it, constantly tweaking to find better methods to make me more productive. I have very mild ADHD, though, so you might have more trouble.
Now that GPT 4 is out, this video didn’t age the best to be honest. Most of the issues discussed have been fixed by basically just throwing more data and processing power at the model
The machines still don't understand what words really mean. Fundamentally they cannot.
They're just machines.
Hey Zoe! Watching this video and your last has been great fun for me personally, since this is my field - I have an MS in Computational Linguistics. You seem like you have an excellent grasp of the capabilities and limitations of modern Large Language Models such as GPT-3. Like you, I am fascinated with what these models can do to help us understand our own cognition - I dabbled in computational psycholinguistics during grad school, which is all about using computer models to help us learn how the brain works when processing language - and so I was thrilled to see you mention at the end that LLMs can't produce anything "new," since it's basically just re-arranging components based not on its own cognition, but on extremely dense matrices of weighted probabilities. I like to say that LLMs like these are fundamentally uncreative as a result.
If anybody is still struggling to understand why texts produced by LLMs are fundamentally different from those produced by humans, I recommend looking up a thought experiment by philosopher John Searle, called the "Chinese Room." While his example was inspired by the advances we made years ago in translation science, the fundamental nature of the systems hasn't changed, and so it's very much analogous to LLMs and language generation. It can be tricky to understand what distinguishes these LLMs internally from humans for sure, and I loved how you mentioned the recursive nature of human thought. Recursion is actually built into the structure of neural networks - that's what gives them the ability to "remember" context - but their context is fundamentally limited, and even if it wasn't, they aren't built to "understand" anything.
All in all, I thought the output for your last video was hilarious and exactly what I'm used to seeing from these LLMs, and this video was very insightful! I've seen even people in my field fail to understand the points you've made here, so major props!
Your analysis of the writing process was so eye-opening. It made me realize that, as a programmer, the task of designing a program or system is almost exactly the same as the writing process you described. Very cool insight!
The recursive thing totally makes sense... I've always had like 3 different ideas and I knew that one needed to be further down the page and then one was my end statement and then my beginning statement.
Eventually have like a hanger on where at the end the statement that I got to and the statement I initially placed look the same...
Very excited to hear what you have to say about this!!
ML is the term I use (somewhat derisively) for what people colloquially call AI.
I think one of the sad things of GPT and other popular language models is that they precisely do not model the human mind. Even when they use neural nets, those nets are being used as a universal trainable function, rather than something that imitates the computational processes in the brain. There's some actual neural net AI around, but it's been very sad to see so many linguists go into industry and have their time wasted training opaque models when they could be giving us actual insight into human language.
I see ML i start wondering where the vanguard party of the revolution is, comrades.
@@Owesomasaurus Of course, Machine-Leninists!
I'd like to comment here, first of all because of the algorythm, to help your channel. Secondly, I'd like to ask some questions about what you rather bluntly stated: what is a word? what is a sexist or racist sentence? What I mean with these two questions is a broader one: do WE know stuff? Because when talking about AI we're usually very confident about our own rationality and epistemology, even though, I believe, we have strong evidence to support the opposite of that statement. Even science as we know it is only reliable in its "unrealiableness", that is, it keeps changing and being "falsed" every day...
I'm writing as a hobby for various large writing sites and many people do use AI writers. AI writers are very good at writing what's "marketable" (using popular tropes), and then the actual writer just polish it because it does look iffy. It's about 80% AI/ 20% human work. And those stories do get popular, very much so. You might say that it's just putting out a "generic" story, but that's exactly what users of specific writing sites wants, like there's a bajillion werewolf romance stories with the same beats and all that, and many sell.
I have artists friends and they complain to me about AI artists, not knowing that many, many books sold for niche categories are written by AIs.
Interesting thought! Can you name one book that's mainly written by an AI and polished very well by a human?
I'm sad I found this video only 4 months late. You have such a good analysis of what AI actually is.
Also I love how you have your books sorted by colour.
AI writing: Chomsky's 'context-free grammar' made manifest, although the limited working memory makes it a lot less like Chomskyan syntax in numerous other ways.
You're correct about everything. It's frustrating that 99% of the population isn't going to understand any of this. The public has been fed such a garbage diet of media that The Matrix is more real to them than anything in this video.
I don't know if you touch on this later in the video or not, I'm about halfway through and just wanted to mention that many of the limitations, while present in current ai models, can theoretically be ironed out much later. It'll be really hard to actually fully replicate the writing process as you describe, but theoretically there's nothing stopping us, we just need technology to catch up and making training such a model feasible. I love the insights in this video though, it's very good so far.
I have many issues with the sentence "we juste need technology to catch up".
Yes, theoretically, everything is possible for computers and technology... at the condition that you ignore everything matarial like energy. We live in a finite world, and ignoring that is dangerous. And if you don't stop me, I'll start talking about the ecological crisis.
Moreover, scientists are trying to do the best they can with the current knowledge and technological capabilities. As we don't have te possibility to perfectly mimic human writing, the limitations pointed out by Zoe are everything but trivial. These are issues that scientists should address if they want to improve text production.
Finally, you cannnot blindly trust that technology will ever solve everything. There are limitations. Often, technology or science get stucks because of fundamental problems. Even more, it is currrent for a technology to be abandonned because it does not fulfill their promises. I don't say that this will be the case, but we can imagine that there are deep issues that prevent this technology to improve further. This is why we should not blindly trust that technology will magically catch up resolve all of these issues.
i watched this video a few days ago, and learned some interesting things.
now, currently, i have experienced through writing, that the act of writing allows one to comprehend an idea more clearly. i was reading a book, very confused. and i decided to make another entry for that book on my phone. i began to explain things to myself and understand some things a little more clearly. i am still confused, but i am confused less though.
it is interesting how complicated writing can be, and the very fact that it can affect our understanding is also pretty absurd and useful.
As a seat-of-the-pants writer, all this talk about composing while writing is making me feel VINDICATED! :D Yes, it is valid for me to be 5 chapters in, still not know how it's going to end, and have just realized I need to go back and change the entire beginning. How ELSE would my brain work?
(Also I used to teach SAT prep and the essay portion drove students who had taken high-level writing courses mad because they'd just spent a year or more learning NOT to write 5-paragraph essays and would get upset about being told to write in them again. Also, *officially* essays were graded 'holistically' but based on the actual standards... yeah, I'm counting the transition words and checking your supporting details are in both the thesis and conclusion paragraphs, not actually considering the quality of your argument. That's why you're not getting the score you expect.)
(Also-also, 10 years later I feel really bad about it still, but at the time I had to be basically a paid shill for the standardized test companies. I am now VERY anti-standardized test, or as I call them, tests that test how well you can take a test.)
You make overly general statements about AI when citing aspects of just ChatGPT. I appreciate what you say at the beginning about how difficult it is to name the *things* you're talking about without it sounding clunky, but I think naming the model you're talking about would be better. Statements like "AI doesn't understand context", "words don't actually mean anything to an AI", and "there's so much about the writing process that AI just can't do" are badly future-proofed, and assume that ChatGPT and ChatGPT-like writing AIs will be the only game in town for the foreseeable future, which seems hard to argue given how fast things are changing
I love how you talk as if you were an expert in machine learning and LLMs, it's just hilarious to hear statements as "AI can't produce text just as a human" when even two years ago LLMs made absolute garbage and today they're writing almost human level text at best. We just have to wait and see, but if the pace of advances continue, I believe most of the arguments you gave will become obsolete in less than a year and, obviously, you will move the goalposts of what makes human writing better than "artificial writing" over and over again.
Writing systems are just a medium of communication. So it has to be deeper than the communication system. The material to communicate has first to be generated and then encoded in language. So when will an AI generate the material to be communicated? When will an AI want to communicate something that it figured out?
11:25
I feel so seen! I had so much trouble from teachers due to not having discrete, dramatically different drafts. If a portion of my writing isn't very good, then it's hard to reason about what to write next, so I have to mostly fix it before continuing, which doesn't leave much room for changes to make in a second draft.
A coworker once asked chatGPT why cow eggs are bigger than ostrich eggs. The AI would go on about that domesticated animals have been selected for their egg production, while ostriches, being wild animals, have been selected for their speed and size, hence why cow eggs are bigger than ostrich eggs.
Just asked that
"There seems to be a misunderstanding in your question. Cows are mammals and do not lay eggs, while ostriches are birds that lay eggs. Mammals, like cows, give birth to live young after a period of gestation.
Ostrich eggs, on the other hand, are the largest eggs laid by any living bird species. The eggs can weigh between 3 to 5 pounds (1.4 to 2.3 kilograms) and have a tough, thick shell. The large size of the egg is due to the ostrich's large body size and its evolutionary history.
If you have any more questions about animals or their reproductive processes, feel free to ask!"
Ask GPT-4 or any AI model connected to the internet like Bard and I doubt it’ll make the same mistake
As someone who professionally works on machine learning (ML), I gotta say, you got it spot on with the point about it lacking context. That will simply come as a product of time, natural evolution as it continues to exist in an environment. The point about it not thinking in advance was new to me, but immediately clicks. It's not true of every AI model; I knew someone who wrote a 2048 solver that involved look-ahead computations, and you could feel how it would slow down during critical, difficult steps (where I would have as well), and goes smooth and fast during easy parts (where I would have as well). There's something much more human-feeling to that sort of pacing. But ChatGPT just chugs out text at the same speed... unless it completely stalls out (particularly when I'm asking it existential questions, or trying to teach it math or how to keep time).
I would agree with your point about using AI for copywriting/SEO stuff if copywriting wasn't one of the most profitable industries for entry-level freelance writers lmao. Don't take this away from me AI I still need to pay off my student loans
Chat GPT is a toddler who has been given an iPad and is repeating everything it’s heard. i think it’ll learn, but i won’t forget the biases it has
I remember playing Ai dungeon, it was so much interesting when it I played it for the first time. But now the ai is getting wonky, lol.
NovelAI is never wonky or give error messages most of the time, unlike AI dungeon, which gives me an error message every time I use the app.
if you're still interested in AI writing, look into NovelAI! i find it significantly better than AID and even though the free trial never regenerates and you kinda have to subscribe to continue at all, it's because their AI models are super intensive. they also have built in stable diffusion image generation too, and you can choose between tons of different styles and modules!
@@Parker-nm9cg Yeah, I used NovelAI. Its was beautiful, it felt like something the AID would never do without the most heavy collaboration and World info. The image creation is cool, I just wish we can do more then anime like in the discord.
@@MatthewHollow i stopped being able to afford the subscription, but god i loved it. AI might not be truly capable of understanding what it's writing, but characters seemed to have a fairly consistent personality, it was responsive to memory entries and author's notes, and it often did pretty impressive worldbuilding on the spot. i really think some AI writing is more impressive than people give it credit for, especially when it's collaborating with a human.
Thank you for the detailed explanation of what AI is. It annoys me that so many people see AI as magic, a new life form, a being with its own personality.
My main thoughts are two specific ones in regards to chatGPT and it not being great at writing.
1. It doesn't need to be great, it just needs to be good enough to outweigh actually writing yourself. While many people such as yourself quite enjoy the task of writing, others, myself included, usually hate it and procrastinate it. I've personally been using AI for emails, cover letters, resume snippets, LinkedIn posts, etc. Everywhere where normally I wouldn't care and would not write much/not write at all, I can now just do with a little prompt. Is it perfect? No absolutely not. It is going to be better than my writing likely would be? Yup. Is it ever worth it for me to make huge strides in my writing? Not really for my field. It isn't worth for me personally investing the amount of time it would take to get up to that level, vs accepting I'm mediocre at that skill and focusing on one's I am good at. ChatGPT is sort of like "off the shelf cake mix for writing" for me. Are there people that can bake better cakes than me from scratch? Absolutely. I'm not a baker though, so I don't care.
2. These large language models, in theory, will improve exponentially. (I say in theory because we're starting to hit a technical problem called "the memory wall" but that's a story for a different day.)
So what little prompt did you use to get _this_ post?
@@JohnDlugosz I didn't use chatGPT for this. I also don't really get what little idea you're trying to imply with this.
One video I saw on A.I. writing talked about the vast amount of fanfiction on the internet. There’s a lot of spicy stuff out there in addition to all of the implicit bias.
If homophobia hadn't killed him, Alan Turing would definitely have had some interesting thoughts on this.
Uhhh
To be honest, even if, it's not terribly likely he would've lived to be 110 to give his thoughts today.
@@PatriPastry Okay, I probably should have considered that 😅. Still, Turing was one of the definite experts on the subject of real life AI outside of Science fiction.
I love your bit about the messy writing process... When I was in highschool we often got a specific writing structure "imposed" on us. I'm pretty sure i've had multiple assignments docked 15% for not handing in an outline. Several of my classmates wrote their outlines after writing the essay. But I'll never outline an essay before I write it because if I need to write a 3 paragraph essay I'll probably end up writing 5 paragraphs and then choosing the 3 that work the best as a cohesive unit.
Commenting before I watch:
I don't think I'd want to read fiction written by an AI since it wouldn't have the creativity and human experience to write compelling stories or characters, but I could see an advanced AI writing purely factual things like summaries or instructions adequately
My main thought when I hear all these points against AI is to just add "yet" after each "can't", "doesn't", etc statement. It get better at a crazy rate. I imagine just about all of these issues will be significantly lessened over time.
True
Appreciate your takes on AI writing, this angle of considering the actual thoughts of a writer is similar to one gripe a lot of artists have against AI art, but that usually gets swallowed up in labor fetishism. Its nice to see you clearly talk about writing and learning to write as part of communication and cognition.
Personally, while i am also interested in AI from a scientific perspective, i think its currently being used very irresponsibly by companies who stand to earn a profit. After all, if a machine can do something well enough, why would you hire a human? It sets a dangerous precident imo.
Theres also the matter of companies training these AI having much greater leeway with copyright than humans, like art AI trainign sets including images with private medical Information and so on. The only time you see AI training sets play nice with copyright is with music, because the music industry is very litigious about its IP.
A major flaw I see in modern education is the lack of justification or purpose that is provided to students. Their education is a black box and entirely one-sided instead of a dialogue. Especially at an age when your brain is built to question everything, an answer needs to be provided to "Why?"
Why is learning how to write a persuasive essay important? How will learning math help me? What does history have to do with my future? Why do I need to toss balls around in the gym? We need to break down these barriers, engage students, and I think fundamentally restructure our focus in education. It needs to be wholistic. We need to demonstrate how physical health affects our mental health, how building teamwork helps us develop strong relationships and work well with others, how persuasive essays help us in professional settings and understand data, how math helps us budget and understand statistics and systems that affect us in meaningful ways, how history can be recognized now in the making and the impacts that actions have had shaping our present. Much of our education seems based around the assumption that things like critical thinking, home maintenence, philosophical/creative/spiritual fulfillment will be handled at home, and that is such a flawed assumption. I know people for whom any of that was entirely outside of their parents capacity to teach them, either because they didn't know how to teach them, or neither their parents or them had the time to teach or learn respectively. People without developed morality, healthy outlets for expression, or a developing identity and direction. People without cooking skills, knowing how to clean; who don't know how to ask the right questions about art or information they encounter.
I'm glad to see some changes, there is a recent petition near me for the local school system to include financial literacy as a part of mandatory curriculum, things like budgeting, credit, mortgages/financing, etc. But that's honestly just the first step. It's better than nothing, but we need to include students as engaged participants in their education, not as an object to be acted upon. If a machine, a literal inanimate object, can produce work satisfactory to our criteria of education, we have a completely broken system. We may as well be replacing students with AI if we treat them as robots to the degree that successful outcomes are indistinguishable. We must redefine a successful outcome, and renovate how we approach education.
We absolutely have a limit to how much context we can remember, humans don't have perfect memory (I've had conversations with people who couldn't go two paragraphs without contradicting themselves).
In fact, I would argue that context is the #1 thing we humans struggle with the most when using language, wether it is written or spoken.
Also, if we had the potential to teach an AI the meaning of words and how they correlate with reality, it stands to reason that we could also give them a much better capacity to remember and reference that information when writing.
To me, the only real difference between humans and AI writing lies in perception of reality. If we were to teach a person to write a language without ever giving them the meaning of the words (just the grammar and sentence structure), the results would be undistinguishable from AI-generated text
Well, we usually chunk our information to deal with that.
Which naturally leads to writing recursively.
Since at each level as you think about it.
It reveals more detail about that which can be written.
Or other connected ideas.
Text Generation Algorithms actually can write for goals though.
But they mainly write smaller bits in that respect. And they won't take one into account unless given one.
But structure is a significant weakness particularly for larger works. Although, basic procedural structure. As well as local context and context to a set of rules made at the beginning can be used to good effect.
It also understands grammar.
But not that having the logic of a paragraph flow is important. Or how to order a text so that the more fundamental ideas are established first and then those that depend on that. Or even that it's logic actually fits with reality. (Which isn't as much of a concern for fiction.)
I actually think Chatbots are most useful when it comes to
1: exploring possibilities of how to say something.
2: Asking good questions to help you flesh out an idea when stuck somewhere.
3: Helping you find areas that are unclear or that have errors of a certain type.
4: thinking through the bigger aspects of approach on a piece of writing.
5: giving inspiration by generating some odd-ball combination of old ideas to get you started.
You talk about teaching AI about reality as if it's an easy task? Current generative AI has 0 understanding of, well, anything, really. Ultimately it's just algorithms spouting out what is the best guess according to its database.
Sure, humans don't have perfect memory of context, but we always have the idea of context in our head at the very least. We have the context of all the experiences and feelings we've had all our lives, and that impacts what we write now.
And yeah, if you make someone write like an emotionless robot who doesn't understand anything, you'll get text that's not far off from what AI can generate... Not a surprise. The big element in the point of most writing *is* the elements that such a person would be missing. Meaning and context!
12:35 that's the perfect example of a recursive process, it's so meta, you're solving the problem of not knowing how to talk about the link between writing and thought by writing and thinking about it.
I use Sudowrite for brainstorming and editing. I learn a lot from having it rephrase my work.
Also, it's brave to comment about what AI can't do with GPT-4's release just around the corner. Your next video might be called, "Nevermind, AI can do everything now."
I don't think anyone thinking realistically will hold their breath.
I asked GPT to write a short story, which it did. The TLDR here is that it can only for now mimic the most generic style of writing a story without actually making anything bold or provocative.
_If an assignment can be completed by an AI, then maybe the assignment isn't as valuable to students as we think it is.._
Love this quote
I've heard the term "LLM" (Large Language Model) used for stuff like the GPT software
With the recent hype around chatgpt I'm now kinda scared that as an aspiring writer in literature, writers will be replace. Tho I know of course of the defense for real human creativity and understanding, I'm still a little bit concerned. I wish the official language, literature and other arts related (like visual art and music) committees and publishers around the world will do something about this, to not replace humans in this fields.
you are such a great speaker / presenter! i'm sure your students would say you're an excellent teacher.
This is a great video. What it makes me realize is that all this hullabaloo about the bot and cheating is missing something. Rather than worry about whether or not students are going to use this AI to write papers, maybe they should tailor their lessons/content/etc, or make learning more participatory, so that students are legitimately invested in their own work, and wouldn’t bother to use an AI. These "cheating" students are writing papers to check a box, the one that verifies that they can regurgitate a lesson, or at best synthesize what other "important" thinkers had to say on a topic. That doesn't mean they're actually invested in learning whatever is being taught, which *should* be the point, right? If they were writing with purpose, or as you say to "affect the world around them", they wouldn't use AI, because AI isn't an expression of themselves and their passions/desires/interests. So rather than scrambling to figure out a way to get students to "stop cheating!", maybe this is a juncture at which academia should engage in some serious self-reflection, discipline by discipline, and ask themselves what they're actually doing to "affect the world around them", and thereby why any student should give enough of a shit about what they're teaching to engage authentically.
I had paused around 17:00 to write my comment, and then you basically said the same thing a minute later, using much fewer words. 😅
It was sort of touched on in the video, but I think if a writing assignment can be completed entirely by an AI, it is not a good writing assignment, unless it’s an assignment on the fundamentals of grammar or another fundamental writing principle.
sadly I don’t have time to watch this right now, but I do want to say:
I had a crazy dream last night and part of it was that I heard news that you were arrested for shooting someone and I was like “no way Zoe Bee would do that, thats crazy”
so yeah anyways, love your videos, please don’t shoot anyone
LMFAO I wonder what it means
I'm so happy I found your channel, it's one of the few channels i can say that have educational value, that I feel like I gained a new perspective or learned something new after watching
I was with you on using the term "AI" to mean machine learning text generation at first, but when you started saying things like "AI can't understand context and structure" I think you've run into issues. *Current* algorithms and implementations have this restriction, but it's not fundamental to the concept of artificial intelligence. One of the goals of general artificial intelligence (as opposed to the narrower goal of predictive text generation) is a contextual awareness. I'm concerned that your argument is focused on limitations of the current technology while your terminology, at least as it's used scientifically, makes it sound like you're making broader, less justifiable claims.
This video is four* months old, and I'm very interested in how she feels now about the current state of OpenAI's ChatGPT. GPT4 is out, and they're working on the script for it now. It will be interesting to see how it develops. Also, it's quite likely that we will see several things put together I to a larger, more general AI in the very near future.
@@Paraselene_Taokeep dreaming
@@wakanda0070
How so? Would you like to explain more? Answer me honestly. Have you used GPT-4o in the past few weeks or months? We're not even close to reaching the maturity of this tech.
Uh yeah. Obviously she was going to talk about ai as it was and not a hypothetical different ai.
What a stupid argument. She's talking about the tool as it is not as it might one day be
11:47 - This reminds me of a John Gage quote that I often share with writers:
"Writing is thinking made tangible, thinking that can be examined because it is on the page and not in the head, invisible, floating around. Writing is thinking that can be stopped and tinkered with. It is a way of holding thought still long enough to examine its structures, its possibilities, its flaws. The road to a clearer understanding is travelled on paper. It is through an attempt to find words for ourselves in which to express related ideas that we often discover what we think."