So those subtitles were user submitted, or was it google's auto-generate subtitles? Because one is scummy, and the other is scummy sneaky and doesnt make sense half the time, lol.
How about the term "Boomers" being used as somehow degrading? What about poor people from that generation? What about the fact that - if you look back - the older generations always seem to have more, and the younger generations are always angry (especially the "boomers")? How about ageism being a problem? etc.
AI companies "protecting" copyright of big corporations while their entire data model is itself stolen is pretty hilarious. Or depressing, your choice I guess.
The copyright system is nothing but a corollary of the broken system we run overall. The story still the same, people getting rob of their work by capitalists.
It is piracy when we do it, but it isn't when they do it. Rules for thee, not for me. Edit: It seems half of the replies don't get it. To create the model, you need to train it on data. To train it on data, you need the data on a hard drive. The act of acquiring and putting the data on a hard drive is piracy, since all that data isn't licenced under creative commons or other copyleft licence. It has nothing to do with AI, inferencing, inspiration, reproduction, fair use, etc. You wouldn't download a car. Insert matrix music.
@@bdarecords_ too bad the redditors "fighting" it were just a bunch of people all out to make a quick bag with no care about the rest. Not defending the hedge funds, but there was nothing benevolent about what they were doing, and many normal people were left with losses because they were sold the hype. It was the same pyramid scheme as crypto with a veneer of "justice"
If you go on google, download a few hundred copyrighted images, and they display them on your website for commercial gain, you'll get hit with a barrage of copyright infringements, and rightfully so. If you download millions of copyrighted images, but instead use them to train an AI for commercial gain, as of yet apparently that's ok.
What if you download hundreds of images, figure out what about them looks good, and draw a bunch of images by hand, copying some elements like shadows and perspective from them?
I mean, that's if we're ignoring the fact that transformative works are protected by copyright law, which AI tools are absolutely capable of achieving.
@@VecheslavNovikov This is a strawman argument entirely divorced from what actually happens in valid copyright infringement cases and AI model development. Even if this was a relevant argument this still creates the issue of "why shouldnt we abolish copyright entirely then if it never mattered in the first place? artists have been infringing on copyright for centuries if copying techniques is the same as an ai photocopier that jumbles the pixels a little bit". megaconglomerates still hold far more power with the current copyright system than corporations and both outpace the individual by a mile in being able to defend themselves. Oh right, the answer to that is because people need to eat and sleep before they can do the shit they actually want to do and the current system requires they sell their soul in order to survive, and people are encouraged to copy something that works instead of risk poverty making their own stuff, sorry i forgot about that. I for one would rather have 20 different interpretations of Avatar: The Last Airbender in a month than one every 10 years made by the company who just so happens to buy the rights the year prior. I would also prefer if everyone on earth was able to feed and house themselves regardless of their ability to work. how about we fix both before corporations decide that AI should be used to starve anyone speaking a narrative they dont like.
@VecheslavNovikov Art by AI is the same, somewhere it says by human hand. Argument is then if it's the person that programmet the AI that have the rights or if they AI removes the owner rights as you not totally sure what the AI will make and therefore you don't halve total control of the outcome
Let me get this straight... It's okay for tech companies to scrape the internet for data to train their AI, but for some reason it's not okay for the internet archive to digitally loan books, films and audio files out? I think we're being fucked
Someone tried to argue that AI was going to make creators more powerful. My reply was that isn't how companies were going to use AI: do everything they can do to cut as many people as they can out of their profits.
music streaming was going to make creators more powerful, only for it to turn into a corporate money machine that pays users a miniscule fraction of the profit.
People love to separate technology from behavior but it's not possible to do that. And there's absolutely a need to understand that if you don't want someone using technology in a specific way then it needs to not be worthwhile for them to do so. Consequences shape behavior.
I think the whole scenario is simply over-hyped ... like this A.I.-BS itself:) (BTW guess who are the culprits for that, being the spokesmen of f.e. OpenAIs marketing division? Yes, greedy journalists and even more greedier data-scientists and crypto/Ai-"Bros"). The word that AI is a hallucinating cesspool and has nothing to do with INTELLIGENCE at all will simply spread. Oh and everyone can see for themselves that such "articles" and work is just an embarrassment in terms of quality and reason. People will simply stop giving those outlets their money or their attention, because most of us do not want to be even associated with this "soulless" nonsense. The most important thing that Tom missed is that this so called A.I. cannot be innovative ... NO! If you think so ... then YOU are the one hallucinating, hehehe. This kind of technology simply can't CREATE new things. By Design! By Principle! -By- Because of and how the TRAINING works! There will be nothing NEW ... which diametrically excludes "THE NEWs" (journalism) as a successful application area of pseudo A.I.! Without the human "slave workers" (in todays or another form, work stolen from journalist or sexist abuse of precarious workers from third world countries who actually do the "intelligent" part of the work (see Amazon ... it is disgusting!), doesn't matter) ... those companies are only big fraudsters with big mouths and hot air!
I mean.... is this wrong that it is learning from information freely put out in the world? Maybe a legal change such that any NEW content put out after a certain point requires consent and/or compensation for the creator is the way to go?
I do think Gemini does show how pointless the AI we have now is. It will give you answer that is exactly the same as you would get if you scroll down past the gemini bit and even then it gets it often gets it wrong. They've put what $200 million into something that they can already do.
it's really the lamest possible implementation of the technology. they could've used it to make it easier to specifically search for what you want, like research papers, but it instead just summarizes the top 3 results (and usually fails)
@@TheNinToaster yes, but the fact that a tech company used so much money on a model that you and I can see with common sense would do more-or-less nothing is what we mean
Seeing AI generated stuff is an instant turn-off for me. I don't know who originally said it but I've heard it said online "why would I bother to read something nobody bothered to write" and I agree completely. I recently found a channel that had a lot of really interesting sounding videos, but then halfway through the first video I watched they had a bunch of AI images and I stopped watching right then. Generative AI for text makes stuff up all the time, if I see someone using AI images I'll assume they're using stuff like chatgpt and then they've lost all credibility to me at that point. At least with the crypto bubble I could point and laugh, but AI is invading everything and it's the worst.
> why would I bother to read something nobody bothered to write Because at time it might not exist in your language. Or the does a much better job of summarising the points because the original author doesn't do a good job of logically connecting the pars and stuff. For things like meetings I find it super useful. Instead of watching 1 hour of umms and uhhs I can just read for 5 minutes to get a summary.
@@samarthsif it doesn't exist in your language then the AI is literally ripping it and translating it, violating copyright law. But that's still just a translation, and it's one that is certainly not going to be entirely correct
@@wck I would argue that the output of these tools effectively replace the work that they are trained on and have the capacity to drown out the original work, undermining a critical pillar of fair use (the effect of the use upon the potential market). AI tools by necessity are trained on entire works so that can be convincingly recreated (amount and substaniality of the work, purpose and character). Unless I’m miss interpreting American copyright law, training on all this stolen material seems fairly likely to be copyright infringement.
@@MrMoon-hy6pn tell that to universal pictures before they sue Sony over the VCR. Oh wait, you're decades too late and they lost that fight. So long as the machine has non-infringing uses (and it absolutely does) then the creators are not liable for people using it to do copyright infringement. That is the law. "these tools effectively replace the work that they are trained on and have the capacity to drown out the original work" - No. AI is not autonomous, it does not do any work without a human directing it. So, while it does have the capacity to drastically reduce jobs by making one worker as productive as multiple workers, it is still a human being that is doing original work using a TOOL.
Wrong, maybe. Bad PR, almost certainly. Illegal, definitely not. It is not reproducing copyrighted material, it is creating new material based on all the data it has been trained on. Which falls completely outside the scope of copyright law. Copyright law might change, and likely will, but until it does training an AI on any publicly available data, copyrighted or not, including youtube videos, is perfectly legal. That said, there are however some companies that have actually broken the law because they used copyrighted meterial that was not publicly available to train them without paying for it. AKA, piracy of copyrighted materials. The court case is currently ongoing.
@@DefaultFlame "it is creating new material based on all the data it has been trained on," I wouldn't even say that's a fair characterization. It's creating new material based on pattern recognition and representation data. Nothing ChatGPT does (or any these other AI tool) is directly based on training data. They don't even have access to the training data after model training is done.
It drives me up the walls when my classmates use these ai "tools" like a web browser. They don't question ONCE if there could be false information there. I think this show pretty well how people bend backwards for anything tech. They think it's smarter than they are, when in reality, the ai is extremly limited by not only by it's human creators but the (often unchecked) input it gets. Like c'mon. Some people were told to add glue to their pizza or eat a mushroom that is so deadly that it will melt your insides and people STILL bow down to the ai crap. I mention this because I heard a lot of excuses for people using ai generated images as references for their art. The ai is NOT reliable, especially in terms of proportions. Why use extremly flawed programms when there is thousands of free resources in any language?
When it first became a big thing it seemed genuinely useful, now it's just garbage because it's already "learned" off everything that already existed and is just "leaning" from itself now. The accuracy and quality of the results is becoming ever worse. It's like inbreeding. The lack of genetic diversity leads to... issues.
One of my classmates tried to use ai to do the summary of a class project, we were supposed to cover the laws regarding primary education, even though she feed all the papers it would only summarize the bare basics of secondary education we had to do it by hand.
@@clray123anyone who knows how to do basic research and cares about such dumb things as "is this author an expert? Is this website reliable? Was this from a blog post? How old is the information?", i would say no, AI is not smarter, it just looks like it is. It's easy to look like you know something technical while saying random bullshit.
As an artist, one thing I find extremely frustrating about generative ai is when I can recognize an artist peaking out through the soup. But because generative ai are entirely new images, it is impossible to reverse image search to find an artist! If we could reference the images one piece of ai referenced, I’m certain we could see just how closely ai copies certain works of art. That being said, I imagine it is something similar with generative text. Reading something you enjoy, the tone, the pacing, the voice behind it. Wanting to read more by them, but then finding it extremely difficult to find that voice again. It’s stealing people’s individuality and profiting off of it
It was really disturbing to me when I played around with generating paintings and AI tried to replicate an artist's signature. The future is looking very grim for artists
Individuality or originality is a myth that has been inherently overrated since the times when there were far fewer people, and an even smaller fraction of them could potentially afford to become genuine, recognizable artists. You want to know whose works AI was trained on, but almost no one cares whose works that meaty guy from DeviantArt was trained on, or whose chord progression you almost recognized in that other guy's music. Even before AI and computers, someone's work, one way or another, was based upon centuries of blood and sweat from others, with little or no remorse, especially when it comes to money.
@@bartolomeus441 why's that? The art is constantly transforming, and so are the artists. It might look grim, but only because you can't imagine what people will make of it.
@lyrajaded Exactly! 💯 I am an artist as well, and the stuff that I've seen that was AI generated is often times a very clear style of both specific artists, and pieces of work. (Same with authors, musical pieces, ect.) It's seems they are just blatantly ripping off other artists in order to create new art when using AI to generate "original" works of art. For example, if one were to give the prompt of create an image of Garfield the Cat painted in the style of Vincent Van Gogh. Then these are two very specific pieces that were definitely created by other people. So while the concept of here's what Garfield would look like if he were painted by Van Gogh is an original thought. It's just not original art. You could easily ask an actual human painter to create that same painting for you, but no one is doing that. So it's like you said, AI companies are just stealing people's individuality and profiting off of them without ever paying the original artists. 🤬
@@Cuddly-Cactus You too could paint Garfield the Cat in the style of Vincent Van Gogh. Would this be stealing or your original artistic interpretation of both? And you surely can ask an actual human painter to create that same painting for you. Many would agree for a reasonable price. So, would this be art or craft? I don't really care if AI will create music better, faster, and cheaper than I do, because thousands of people are already doing it faster and better than me right now. I can't and don't want to compete, but I can create, and that's enough for me.
Internet Archive is far more legitimate than Ai. And the Wayback Machine alone is critically important. (It's a backup and historical record of the entire web thru the decades. Especially useful comparing what news sites said yesterday compared to today. What headlines have been changed. And so on)
@@egonzalez4294 Even some AI maximalists agree that letting AI grow freely would be dangerous. Don't be so eager to deify the big calculator that techbros overhyped.
It may seem like a small thing but I'm very happy you pointed out the difference between a TH-cam commentator of news and actual journalists who are breaking the news. I get so sick and tired of hearing people complaining about mainstream media and saying that social media is the only place they feel confident getting their news when they fail to realize that FEW of those social media posters are ever actually breaking the news. They're regurgitating what actual journalists have written. As you said, being a journalist takes so much more work and money. TH-camrs can safely remain in their offices while someone doing a piece on a war risks being hurt, killed or even taken hostage. People don't have to love the way media is presented these days but I think we all need to take a brief moment the next time we're about to rally about mainstream media and be grateful to the journalists who put in the legwork that social media commentators largely have not. I just don't see enough of that distinction being made.
Mainstream media is mostly regurgitating news too, that's just how journalism works in general. Most "journalists" nowadays are not much different from youtubers and have no budget or time to actually do investigative journalism. It's actually youtubers that often do, like this very video.
The thing is that mainstream media has basically become social commentators. When the majority of the information is the commentators views instead of the actual information. There are many more people who are just at first TH-cam journalists trying to expose corruption, abuse, things that mass media doesn't want to cover remember the mostly peaceful protests where they were trying to not catch in the background the burning cars😂
Yes EXACTLY this. Thank you. This is where I unfortunately struggle with audience naïveté. When I was a very geeky teenager who consumed mainstream news media (habits set by my BBC-watching & broadsheet newspaper immigrant parents), I remember reading year after year ... then month after month when FB was taking off globally how much the journalism industry was being undermined by the internet and social media even at that early time. And also reducing readership. All the now well-worn complaints about news organisations losing readership in favour of click bait articles, Googles horrible search engine practices & other shenanigans ... all meant that there was less money available to go into quality reporting. First all the medium-sized national outfits went bust, then local paper after local paper. All the while journalist were loudly beating the drum saying: "original reporting is important to hold people and power to account. To be your local chronicler." Newsrooms got smaller, you couldn't afford to send as many journalists to cover a 2-week court case or a 5-month story and still leave money in the bank for all the vindictive lawsuits they got from the rich and powerful wanting to shut them up. But nobody really heeded the warning unless you were kind of a politics/civics/journalism aficionado like me. So the quality of MSM news started to decrease. Then what was once a tragic storm of events turned into something worse. Eventually millionaires and billionaires realised they didn't have to spend as much money on lawsuits if they just BOUGHT those news outlets and purposefully underfunded them or undermined them by making them hyper biased etc. So that's what they started to do in the 2010s and its picked up speed since. The largest example of this is a billionaire in France scooping up I think TV2 & its channels and making it into an ultra-right wing French Fox news. The reason all your independent MSM magazines and tiny outfits have 1:1:1 same articles is cause they purposely dont pay staff enough so not many stay in those jobs, and the ones that do dont have enough time to do write and edit good stuff. Honestly it's a really miserable state of affairs and if I had the talents of a YT video essayist I would do my own passion project exposing this history. Waayyy to many people, even older than me! As well as the young'uns think that the MSM just started to let people down outta nowhere and it's not the case.
I mean, is it really piracy when you read articles and then write it in your own words? because thats what AI does... why it is okay for me to do it, but not for ai company using their software? Really...
@@lazymass Because brain power 'n human effort is not equivalent to electrical power to a lot of people morally. To a capitalist, human hours 'n machine hours are the same so there's no problem. To a worker, you are becoming valueless 'n don't deserve to get paid for your work.
That argument "AI professionals" used to make saying that they didn't know how the LLM was coming up with answers is starting to sound less like genuine uncertainty and more like laying the groundwork for a legal defense in case the copyright lawyers come for them...
Your source : Multi Billion Dollar Corporation Our Source : Scientists like Kyle Hill or Sabine Hossenfelder. Lawyers like Legal Mindset or Legal Eagle. Law Firms like Joesph Saveri. Programmers like Thomas Brush and Arvind who gets a speech platform with Adam Connover, and GitHub programmers who got duped by Microsoft Copilot. Artists like Yoneyama Mai, Mogoon, Kim Jung Gi, Hayao Miyazaki, Greg Rutkowski, Thomas Kinkade. Our forefathers, the Luddites, and the victims of Industrial Revolution who got killed over fair wages to maintain the machine. Would you cite or hallucinate your sources, shill boy?
I think almost every source in The Pile was heavily threatened by IP law at one point. People had to fight tooth an nail for their fair use rights only for a lot of those same IP holders who were previously threatening them to turn around and gobble it all up, somehow avoiding the copyright of small creators. Must be nice to own everything and make all the rules...
Fair use is when big companies take an individual's content. That's why we need to shut down the internet archive, and just feed the rest of the internet into chatgpt instead. /s It really is revolting how much of modern IP law and precedent ignores the public interest in favour of hypercapitalist megacorporations.
It just seems to me that communists like our host here are all for love and sharing and free access, and mostly against copyrights when it is them doing all the stealing, but up in arms when the big corps steal from them.
I can always tell when an AI tool is used to narrate a video. Lack of inflection, mispronounced words, and other gaffes much worse than weird hands are signs of AI used. I experimented on without posting AI generated narration and found that I had to weirdly spell words to get the correct pronunciation, specifically with words that can have more than one pronunciation based on how its used in the context. That experiment proved to me that it's not worth using.
Can you though? How would you be able to tell if the next video you watch is real or if it's just a "better" AI that doesn't have the problems you mentioned? Or the one after that and so on. How will you know the point where AI improves so much that you can't tell anymore, and how do you know we haven't hit that point already? The fact that _some_ AI videos are discernable doesn't imply that _all_ of them are.
I laughed too when I saw that. I imagine it's simply because it is a wealth of legal information, as well as spoken/written testimony that is freely in the public domain.
No, it's just pretty much the only freely available dataset of emails out there. No one in the AI developer community cares too much about the content of the emails. Few people even care much about the dataset as a whole because it's so tiny compared to the amount of data the generative AI models need.
"They took the credit for your second symphony Rewritten by machine on new technology And now I understand the problems you can see" ---Video Killed the Radio Star, by the Buggles
@@VecheslavNovikov Because when a human does it, its because they are looking for specific techniques the artists are using and applying them in their own work deliberately, its often done out of respect for your own craft and the inspiration, when it is expressed outwardly in your work, most artists can see that and say 'hey this is very invader zim esque in its style, but its done in a way that allows for their persoanl expression'. When you take a bunch of images and put into a dataset, you arent copying the techniques, youre just ripping the work apart. Its not 'this is very invader zim esque', its 'oh no, you just traced invader zim, put shades and a scarf on him and called him zom going on adventures with his yaoi rival Doob'.
@@principleshipcoleoid8095 Generative A.I. does not "learn" It trains a matrix formula for transforming the data, the more complex the matrix the more accurate but also the more the marix is just storing the orgional works, without any knowing of what anything is. Its a bit like an oppisite of the brain, we learn and know what something is to the point where we can simulate it, then transform what we know into something orgional that we may not know, that may make new knowledge or emotions or even just navigate the world, then check the source material I.E. the world. (even our senses are not registered in our internal simulations *until* something is sensed that was not predicted) A.I. goes in reverse so it does not have to know anything.
The thing I worry most about AI that over $150 billion was spent on it over 6 months (god knows what it is now). What happens when the investors want their money back and we are forced to pay for this in one way or another.
That was banks, not investors. What will happen is the investors, including retirement and pension fund members, will get screwed while the VC/PE and startup owners get a really nice skim off the top as it sinks. So they get a nice several percent of that $150 billion bet while anyone dumb enough to trust them with their money lose everything. Of they really do want to have the investment pay off, but they will still make out like bandits if it crashes.
The sad thing is, just imaging how many healthy school meals could have been bought with that many, how many urgently needed surgeries could have been funded, how many homeless people given a place to live with that money....
intermediaries? They sound like the bourgeoisie to me (in the traditional sense that they don't provide the thing, they just own the thing that hosts it)
But to host it costs money and resources. It is a free market for servers for sure. Unlike land, compute and storage don't have a cap and are man made. This would make hosting not a monopoly or the equivalent of land hoarders.
@@samarths a houe requires upkeep too, I have to get my boiler checked for example. I would also argue that there is a cap, since there is a finite amount of copper in the world. All of this is besides the point though. When I order on uber eats, who is actually providing the service? The people making the food and delivering it. And yet there is a transaction fee that goes to uber. I know from the cost of running my own website that they do not need to charge that much to cover their costs, it could be fractions of a cent and they would still make bank.
@@Janokins But it is truly a free market, right? In the case of uber and servers I mean. There is no cartel like behaviour. Maybe Uber works differently in different parts of the world. Where I'm from they don't have the evil monopoly like over reach yet. So, when they start charging a platform fee people are free to move away. If others are finding it so hard to make a platform then isn't their platform fees justified?
That last but doesnt make logical sense though... If uber(and other platforms) are doing ok in your area, what market share is some new startup supposed to eat? It would only have the room to grow where the bigger companies overstep the customers desires and charge too much or make mistakes. But they can grow comfortably, small increases, a new fee here, now you can subscribe to uber one here, pay for direct delivery here even tho it used to just be free part of the service. There isnt room to grow a new business in the market niche, until it is too late :/
@@cookies23z I read "There isnt room to grow a new business" as "there is no problem to be solved". Maybe that's where we disagree? > That last but doesnt make logical sense though Here is why I think it makes sense: Uber has a monopoly over nothing. Not over the engineers, not over the tech, not over the taxis, not over the cars, ..... over nothing actually. Which means if someone wanted to build a cheaper replacement app they should be able to (free market forces). The fact that no one is able to build a cheaper apps means that the folks over at Uber haven't overspent and that they have actually solved in the cheapest way. Where do you think the reasoning is off?
Soon there will be so much AI garbage published that the machine will start feeding itself and producing ever better garbage. And some day, a major part of the internet is going to be pure gibberish.
I think there is a different problem entirely with LLMs that's even trickier: A regular text may be truthful, deceptive or incorrect. An LLM can be none of those things because it's pure slop. The system doesn't really have the concept of "meaning", it can't lie or tell the truth because there's no difference. A world where people use LLMs as a source of information is a world where people lose grasp on reality because their information becomes more and more dissociated from meaning. These models can't be "right or wrong", it's just coincidences. And a furthering of the problem discussed is that portions of the web may close down in response. The fight against these scrapers is suddenly at the forefront and the result may be more paywalls, more walled gardens etc.... I'm a software dev with my own blog and... I don't really feel like open sourcing my work any more knowing that it'll just get stolen by Microsoft and OpenAI to make them money while making software everywhere worse. And while I haven't taken down my blog, I also don't really want to be posting anything new if it'll just share a similar fate... The internet has suddenly become intensely hostile
The fact AI costs waaaay more then it can reap... the fact it is making storage and chips even MORE EXPENSIVE in these shortage times... The fact it consumes even MORE POWER then any other silly tech trends... There is just more wrong with AI tech then the NFT trend (wich LITTERLY stole from artists and forced copyrights on THEM!) I don't forsee Generative AI to stay much longer then two years. The reason is unsustainability. It takes one law... one thing... one small lil bitsy thing to make completely unsustainable... And it's already happening with them having to pay out major publishers.. but it's gonna get even worst for them due to the EU. YEP... Article 11 (formely known as Article 13 but became part of article 11) will put a stop to AI the moment the EU updates it to include any AI generated works. Cause now all AI models have dump any data made from EU countries... and you might think that is not a lot.. but it would break their AI and it would costs BILLIONS to shift through data and check all of it to see "Is this from europe?" It would litterly kill the bubble and sure you can cater to American,African, Australian and Asian countries..but Australia will soon follow. As for America and Asia? We will see how long that lasts with two continents preventing data that would cost them even more billions to pay for rights. Remember that the EU has kill a lot of bad practices in tech companies over the last years (something they do well thank kot) But also, the only reason you can ask any company of all data collected on you and delete it.. is article 11. So now if all EU creators go to these companies and say: "Hey.. you operate in the EU so do I, all this data you got one me.. SHOW ME.. and then DELETE IT. Or else I will have the EU sue your ass." And trust me when I say... when tech companies are losing billions in the tech... such lawsuits going to become frequent? That's just bad news...
@@unchainedmel1475 It's already unprofitable. Numbers from OpenAI came in.. they make 3 billion in possible revenue but their costs are over 5 billion. 2 billion loss per year. Millions of losses per month. also most of those billions are from investments. If even a ruling comes in saying: "You have to credit each artists and source credited in this generated stuff." It would take even more billions to do so. Generative AI or any AI of it's kind is a car teetering over the cliff with rich companies holding them up. So either more weight comes in and it collapses or the people holding them up go away.
I truly never understood the "AI is gonna create more jobs" idea. Give it 5 years. Any low-mid level role which doesn't require a physical human is gonna disappear when some CEOs realize that 1 high level engineer can do the work of 50 people from different departments. Other companies will follow shortly after. Im usually not a doomer, but I think we are on the verge of something bad. The skill ceiling to get a job will get so high that most people won't be able to keep up. All this talk about reskilling to adapt to the job market is pure cope. The average 35 year old ain't becoming a Machine larning doctorate.
@@skibidicoffee22 That's a completely different thing tho. This kind of generative AI is generalist, or almost generalist. Cars created a need for huge infrastructures (roads and highways). AI doesn't, and It impacts the vast majority of jobs and all industries, not just one. What kind of jobs is it gonna create if it greatly outperforms most people's capabilities by itself? What would be the incentive in hiring humans if there is no strong regulation? And to do what exactly?
@@YouAreNotThatGuy4844 While I semi-agree, this take is also assuming that progress will continue at the same or increased rate, which I doubt, most likely we've reached the top of the s-curve, GPT-5 and its competitive equivalents will be the last really noticeable increase for a while IMO
Ironically AI will not create more jobs. it will dissolve them into one. Infact a lot of CEO's see dollar bills, but in truth... that AI they want to push? it's gonna make them obsolete even faster. Only coders and programmers are gonna be useful. Those who can read the AI's algorithims.. (and trust me Google doesn't udnerstand YT's algorithim! So that's promising) The CEO is even more replacable then those working under him. CEO's mostly guide, direct and work on reports to make decissions on company output and policies. They litterly mostly take all the data themselves from other departments and then make decissions of them and put them out to the board... sometimes they have to socialize with other companies and possible investors.. but... most of them is: "Put data in, give data out." Sounds a whole lot an AI could do with a press of a button? ...Why pay millions for a CEO if an AI can do it for them? Why have all these "heads of departments" managing data if that same AI can do it? Why have middle managers? You only need someone to guide workers to do physical work and give output data the AI cannot see or record. Trust me when I say.. AI will most likely replace a lot coorperate FAT 1st. Then the lowly worker. CEO's , Managers , Head of departments, Administration work and the whole financial department can all be done by 1 AI. But hey most CEO's don't think long term, only short term....
Potential solution. 1. Create an image generator that exclusively sources it's training data from Disney movies. 2. Watch Disney and Google etc Duke it out in court. They'd manage to find a way to make only corporate copyright count I'm sure, but I can dream.
Sora released a trailer a few months ago, featuring a "trailer" for some "Monsters going to Summer Camp" sorta film. In the background of one of those shots, you can literally see Mike Wazowski AND Sully. Mike is pretty f'ed up in the way AI-gen animated characters tend to be and most of Sully's body is hidden behind a snack stand but it is undeniably those two characters.
It's disgusting that no matter what evidence there is of the theft, there will never be any significant consequences because government is designed to protect the property of the rich against the poor, and the property of the regular Joe isn't important enough to defend against them... "Because the economy." AI has a lot of great potential, but as long as capitalism and other hierarchical systems are the norm that potential will always favor continued oppression and exploitation rather than any greater good.
yeah tragedies of the masses (companies getting to own and sell something that used to be a public good) seem to be quite the feature of unchecked capitalism
@@marcus.H Is this a serious response? If so, re-read what I wrote and you'll find the answer to the first question. For the second question, "no doubt, because that's where technology is headed, and just like we no longer use wagons to move stuff since the invention of trains and cars, we will all end up using AI for various things that we don't today." Was this a serious response, or just a knee-jerk reaction to something that triggered you? I can't see it as serious, because there was zero thought put into it.
@@samuelrosander1048 cool. So you're saying that you are going to use this technology and you, and many others, will willingly take advantage of all the effort Microsoft have put into this. Got it
@@uooooooooh what are y'all talking about? if anything they're mutually inclusive. the only thing democracy has done for us is give us two rich asses to vote for, neither of which ever have anyone's best interests in mind. democracy is more useful to the ruling class than it is to anyone else. capitalism isn't going to take away our voting system and abolishing capitalism would make the need to vote pointless.
@@uooooooooh Capitalism has been the most inclusive system that humans have ever came up with, it has brought over a billion poor people out of poverty and into the middle class and even rich..... Socialism has brought us hitler and stalin, bread lines...... Keep drinkin the woke cool aid.....
Visual artists were the canary in the coal mine; artists raised the flags back in the end of 2022 when we discovered the truth behind Midjourney and the LAION5b data set, containing our work. And then later proof that MJ fine tuned their dataset on magic the gathering artists exclusively. And so much more. I’m glad it’s getting attention again but man it sucks it’s taking years when we could’ve had everyone working together since the start. Authors and TH-camrs alike been happy about image generators and only seem to be upset now that their own content is being stolen. Again, glad to see more understanding, and I hope we get these things shut down.
In an age of subscriptions everywhere, having an "old school" one time fee is really nice to see again, even if the cost gets recovered only after 8 years and 4 months, which is a pretty hefty timespan
The other thing you gotta remember is that AI needs real work to feed off and so real human creations will always be needed. Is the issue with dog breeding in a sense, if AI uses other AI data the small issues AI makes would be repeated, if that happens enough times you get a Pug like AI that can has it's eye popping out. So if people want AI to be useful and make sense there's going to have to be a balance between generated and humanmade content.
Until, like with dog breeds, people instead choose whichever flavour of AI model has a particular type of grotesque deformity that they like best. Some people will care more about the "health of the being" than others, and I imagine a decent number of models will exist that are designed to be more accurate than others. But I also anticipate the number of "this AI agrees with your existing worldviews" models will be considerably higher. Plenty of people, arguably the majority, don't like to be challenged. They don't really want an AI with maximum utility and benefit, they want a personal assistant that's loyal to them over all else. I think Pandora's Box is already open on this one.
Don't worry about automation taking jobs out of the manufacturing industry, we'll still need (considerably less) people to press the buttons and fix the robots...
There are companies right now literally hiring people like copywriters and designers to create new content to train their AI on. They're training their own replacements and their actual work will never appear anywhere in the real world. Just in some large language model dataset. It's extremely dystopian when you think about it for more than a second. We should be using AI to do the boring tasks that we created for ourselves, not the creative ones that make the soul sing.
This is not true for Chess engines. AI does not always need human players to get good at something. Mind you, I define AI as anything with an artificial neural network (which is a lame definition btw) The way it works with chess engines is that they play themselves for as long as possible, knowing the rules. There is an optimal way to play the game. Chess isn't a chaotic environment, so it was hard until it wasn't. The issue with current generations of AI is that feeding the AI its own generation creates a negative feedback loop. Art is chaotic; there are no rules but what we make. We would critically analyse what we've made and look for better examples elsewhere, but today's AI cannot do that. But if it found a way to "self-play" like in chess, where it can find rules to optimise for on its own, rather than given to it... Then, it would improve substantially. Synthetic data is basically that, and it's a work in progress with promise. AIs might not need human training data anymore.
writing without citing. it got Harvard Presidents fired, but it is ok for a big tech company. it facilitates other people to unknowingly reproduce portions of other people's work, but is somehow not facilitating IP theft, because it is big tech.
What do you mean "without citing"? The Pile says exactly what videos were used! It's only because of meticious citing the creators can now cry about it.
@@steve_jabz I multiplied every pixel in your artwork by two! and I have this cool new algorithm that makes *my* picture with just a bit of data! look how cool my art is!
@Shoshiroll That's because stabbing is a type of murder, not murder itself. This is just a weird semantics game to play. Learning isn't a type of theft. They're not even remotely in the same realm.
@@steve_jabz The "learning" in "machine learning" is a metaphor. It's not the same as a person reading a book. Which you would know, if you read books.
I suppose the intuition pump is to ask how much an AI model would be able to do without its stolen training data, using only public domain material, and the answer is almost certainly "very little" because you'd have to train it on 100-year-old novels, news articles and Wikipedia. So it'd basically just become Wikipedia: The Audiobook.
Actually, Wikipedia is released under a ShareAlike license, so the model would also need to be cite which Wikipedia articles it's drawing from, so the fact that it's a Wikipedia summarizer would be even more painfully obvious.
Just some context on Australia's Media Bargaining Code: Almost all of Australia's media is owned by three massive companies (Nine-Fairfax and Fox being the most recognisable) and is horrifically and provably biased about what they post. All this code does is give more power to these conglomerates, because it starves smaller outlets and independents who don't qualify for the code and therefore stifles what voices can be heard.
I've said it before and I'll say it again, ai prompters being absolutely precious about the prompts they use to steal other people's work is incredibly funny to me.
Meh, what you say is conflated in itself and you do not understand how it all really works-- can you make 'rip-offs' of other people's IP? oh yeah, absolutely. Can you make something so chopped up and generated from a million pieces of correlated data-points where it doesn't infringe on anyone in particular? yes, it is also true.
Glad people are talking about this, main gripe with generative AI is how much data is being stolen from artists and writers that worked hard for years to perfect their art. What I find particularly disgusting is people using generative AI and then claiming the work as their own, in particular this is a large issue with music generators. Furthermore, it replaces the jobs that people actually want to do, instead of factory work for example. *AI in it's current state is technically "machine learning" which is what generative AI uses, AI is something different, although it seems OpenAI is getting closer to a genuine AI.
0:38 I mean, fanfic authors have known their work was stolen for AI training since the very beginning. ChatGPT knew about things that it could _only_ know about if it had been scraping AO3 and/or Wattpad. Digital artists have also found image generators output content that's near identical to their own work, or even able to replicate their style in demand. Heck, I think it was Stable Diffusion which sometimes reproduced the Getty Images watermark on the content it generated, a clear sign of theft! I still find it surprising that people are talking about these revelations like they're at all a new surprise since we've known about this from the very start. Feels kinda disingenuous, as if it only matters now "real" creators' works have been discovered in the trove.
If I'm drawing a sun in an upper corner of my painting, it's not me repeating the pattern I've seen multiple times, it's clearly me stealing someone else's artwork
There is no quantity of money that can compensate the harm to an artist. An artist won't ever be able to compete in the market against millions and millions of Ai arts created in seconds every day. In this mode, Every human artist and human art itself won't exist anymore regardless a previous compensation!!! That is what Sam Altman, Bill Gates and these companies want. They don't care human art and human artists at all! Therefore they are destroying it...
Seems pretty pathetic if they can't compete. All you guys do is go on and on about how soulless, ugly, and un-artistic AI images are, but you can't compete? Yeesh, what does that say about the value you were bringing to the table? Isn't art about expression at it's most fundamental level anyways? AI tools don't remove your ability to express yourself artistically. You aren't entitled to making a living from artistic expression, and those who are skilled enough will still get hired regardless. Stop playing victim and realize that making a living as an artist was a blessing in the first place.
@@_B_E For real, even as a developer why would I want to spend my life making a living off of expressing myself through art, I would much rather bring REAL value to my lord and savior shareholders instead! AI art allows me to type 5 words and see an image! That's really expressive, it somehow knows exactly what's in my mind, and gets every detail right! It definitely does NOT just add in keywords like "Knight" "Grass" "sunset", my artistic vision is totally fulfilled! Its not an approximation, its art!
@@RawrxDev You seem to be confused about what my stance is. I'm saying your personal ability to be creative isn't stopped because AI exists. You can still express yourself, and even make money with your art. Every artistic field is still able to exist despite mass production undercutting it. You just simply need to be able to create things people actually want, which is not something you're entitled to simply by creating.
People create art because they can't live otherwise, not because it holds any monetary value for others. You've just found another excuse to do nothing. Pathetic.
Here's hoping copyright laws get repealed as finally those laws designed to terrorize regular citizens, are going up against businesses of their own size. Though more likely, you get the usual case, copyright repealed for big businesses, regular citizens terrorization remains intact.
As an artist and writer it has been demoralizing. I've stopped posting and I've pulled my galleries off of places like Twitter. Another thing to mention is the climate is also affected by AI...
Too late. If it was on the Internet, your works are already in the AI databases. As long as it was accessible via Google, they have a copy. Considering how hard it is to get published without having a popular portfolio I am not sure if there is a point in taking it down.
@@SorkHanahb Depends how many times it was scraped for datasets, filtered, packaged and resold to other developer groups. High-quality, clean datasets are worth their digital weight in gold and anybody who is anybody is outright lying when they say "we don't keep the data." They do, they know it, we know it, because these packages end up in the hands of these same mega-corps we are raging against right now.
Really wish I could trust Nebula, but there's just too many creators that have been silently removed from the site and then suddenly get awful quiet about why exactly it happened. Even if you benefit from Nebula's shady business practices, they're still shady business practices, and I can't support them.
I like the IDEA of directly financially supporting the creators and outlets that I regularly use and, sometimes, depend on. Its a nice idea. Its not realistic in a cost of living crisis where I have to view any non-free media as a luxury good.
*EDIT: I no longer agree with this comment, and it's only still here because with this edit, the comment still says something which the absence of a comment wouldn't.* As someone in the field, I normally take huge issue with summaries of how LLMs work in videos like these since they tend to be hugely reductive and outright wrong. Not this one, though! Great job - you certainly did your due diligence.
I disagree. Repeated uses of terms like "regurgitate" implied to me a lack of understanding, or conversely a refusal to understand how LLMs and other forms of AI function, and misrepresents it, using words deliberately chosen to be derogatory, and to incite further mistrust and misunderstanding.
@@Kaotiqua that's what current "AI" is doing though, they're statistical completion algorithms that just respond with whatever is mathematically most likely to follow your prompts.
@@Kaotiqua That's very fair - but I saw it as 'writing ideas learned from the training data in a novel way'. I'm probably just too used to the portrayal being even less sound. Might delete this comment. It's very ironic how much emphasis we on the political left place on listening to experts on things like vaccines, but when it comes to talking about AI it's an outright refusal to even try to understand. AI can easily be shown to have a negative impact _without_ misrepresenting it. Let's all have some intellectual honesty and listen to experts.
Also, it's really dumb to think that there's political solutions to these problems. Ted laid it out decades ago: technology itself is in the driver seat. Humans and human societies are unable to cope with the very technologies they produce. Effectively, we're being farmed by our own tech. I guess on some level it's interesting to document these issues as they arrive, but make no mistake: they won't be dealt with.
I think an angle for me that really sticks out is the perspective of having grown up with the transition from Dos, Windows 95 into 11. When I was a kid, I had dialup and AOL/MSN messenger which was not moderated and extremely dangerous . . . but the world wide web was actually alive and thriving. Before the era of social media aka Myspace, access to high quality and unfiltered information was extremely easy. I remember doing research as a kid reading and watching videos on the cusp of the modern internet. I also almost got human trafficked, raped, and possibly other things over the years because my parents were kind of dumb to let me do everything without monitoring. The kids and young adults these days do NOT have the critical thinking skills or experience to understand what they are being force fed by the modern consolidated internet. These kids didn't grow up with genuine TROLLS. Dont Feed the Trolls. Don't Believe Everything You See. Don't give out your personal information. Don't send random strangers pictures of yourself or your family. The list could be much longer but for brevity that kind of summarizes it. The real problem with AI and the modern internet is that kids haven't seen the evolution of the internet. They can't even sift out most of the AI slop from the real content. This brings me to thinking of 1984 and other similar dystopian fiction titles over the last 5 decades which warns us of how the world is viewed. The information people see might be outright false, but because theres no way for them to see the queues or red flags. The modern internet is carefully engineered by psychology to such a degree even TH-cam Kids is dangerous because I've seen child labor, blatant toy advertising, and AI slop flood the app over the last 5 years. AI is killing society, because you will NOT be able to trust anything. In a lot of ways, I think certain aspects of society will take a luddic approach nearly banning the use of technology to safeguard its legitimacy. Legacy programs and systems will be supported ad-nauseum because otherwise the risks might cause collapse. Even 8 years ago, I had professors who would only accept HAND WRITTEN documents because of the common use of cheating services. The very real danger is that the newer generations do not have the skills to avoid being subjugated and controlled by the Mass Media Mega Feudal Lords "im not being hyperbolic, this is a post Citizens United Economy." Why is it corporations are SPRINTING as if the actual "doomsday" as Sam Altman put it is approaching us because of AI? They will seize control of everything and as with Uber's hidden algorithms committing actual labor rights crimes against us all while upending the taxi industry did for transportation or AirBNB causing investment capital to ruin the housing market across the world "while just being hospitality, which is a heavily regulated industry." I have no hope for the future generations to avoid being abused and enslaved.
If you and many other people can get their stories up on a blog or something that’s got enough traffic it’s worth having personal recounts on there not buried in you tube comments.
Here's the Catch 22. To have a popular platform that's owned by others, you have to accept an agreement that essentially allows the platform providers to do whatever they want with the stuff you produce on their platform. Then your work ends up being used by them. Yet without such a popular platform, you can't get your work out there in the first place.
If you make your goal accumulating money for doing what you love, not only are you taking specifically from those who would take your place if you failed, but also encouraging those around you to organize to fit the structure that money has given to the shape of every industry. Industry of course being A word we use when we talk about profiting not off of providing for others goals and passions, but rather others whos goal is to do so somewhere down the chain. When did we stop calling this grifting?
You can probably remember the first time you encountered neural network (or AI as it is more popular today). It was the first time when your translator of choice made sense while translating a sentence. It is the same principle, thousands of examples analysed, and it can tell you what is the probable meaning of word and also what is the archaic use. But remember, today generative AI can only tell you what's probable as well and thus it will be mediocre forever. Its not a revolutionary tool, it will generate only the most mediocre result, because it is its purpose.
Im not sure whether it is immoral to train AI models on the work of others without consent. Is it immoral if I listen to Pink Floyd and then write a song inspired by their music? How is this different from chatgpt cobbling together a thousand books into an article? Just how much originality should this article have to not be considered theft? Im genuinely asking from a philosophical point of view
An AI cannot listen to a song and resonate with its themes and instrumentation. An AI cannot read a book and love its characters, its story. An AI cannot see a piece of art and use it to process its own feelings and life experiences. An AI cannot read an article and be inspired to delve deeper into the topic, to write its own piece. AI is all about information, data, replicating that data for profit. Its not immoral because its derivative, its immoral because its motivated by profit with the intention to replace the very people it derives from. It is immoral because the existence of a tool that can mimick an artist/artsits exactly puts those artists out of work, because companies will always prefer to invest in a machine than in workers. An artist being inspired by others does not. Replacing individual creatives with a tool driven by profit motivated corporations is not even idiocy, its calculated and dangerous. I say this as an artist, writer, and musician myself. If someone sees my work, resonates with it, and feels inspired by what I do, taking some elements they learned from my work into theirs and developing it further in their own style, that's great! But if someone saw my work, invented a clever little machine that could replicate it to a tee, then started creating and selling that derivative work at a rate that I, as a human being, cannot possibly compete with, that would SUCK. That's it, really. 'Is it immoral if I listen to Pink Floyd and then write a song inspired by their music' - key word here, inspired. AI cannot be inspired. It can analyse data and regurgitate it, but it cannot be inspired. AI does have incredible potential when it comes to data compilation and organisation. I agree that having to search through loads of articles and books to find something relevant to the topic you want to write about is tedious. Having a tool that could compile relevant sources would be great! The problems begin when that tool stops directing people towards the work of other people, and starts competing with it. I also just don't think getting bogged down with these technicalities is a good idea. If it isn't immoral, if it isn't plagiarism, it is, at the very least, SHIT. Art and writing nobody could be bothered to make is shit.
@@portablegoose So your problem is with capitalism, not the technology. Advancements in technology put people out of work all the time, but because the system values people only for their work, that extra productivity isn't used to lessen the amount of work done, but increase demands on workers.
One of the best pieces I've seen about AI's impacts on society, from access of "real" information and art, to the resulting further concentration of funds/resources in the hands of the wealthly few. One thing that's missing from the conversation is how teachers are being pushed (hard) to use these "tools," time-saving, STEM, "21st century...," aka must do/use if you want to keep your job.... As an educator who enjoys sharing new information, which I've checked out for legitimacy and accuracy, in a personable, relatable way, I find the push for this as a "time saver" dismissing the "knowledgeable one" part of being a teacher. It tells me that those who promote this think that my hard-earned knowledge and expertise can be easily replaced by letting AI write my lessons---then I'll have more time to deal with the behavior issues of a non-ergonomic classroom full of 30-40 students with a wide variety of significant challenges, including ADHD, ASD, poverty, abuse, depression/anxiety, and general growing pains/maturity issues, ....salting with sarcasm intended. Or, they could also being saying, "education" can be done better through our "technological wonders" and using our corporate partners' excusive publications and tools, which ensure that we get all the subsidies and grant $$$ and that the budding worker bees gain just the attitudes and skills we need. The fact that these tech organizations have been freely using and plundering our human output on the internet for at least a decade is something we should all be angered about. Note: their incredible blundering overreach may actually result in "old tech" surging in popularity, land lines, un-connected computers, not-smart TVs, VW Bugs.....🤔
As Yanis Varoufakis points out in 'Technofeudalism', profit is not always the most effective way of increasing big tech share values. AI harvesting of public content without creators' consent is an inevitable extension of what he calls 'cloud serfdom', one that reaches out beyond the enclosures created by Amazon, Tesla etc. Why draw the line at amassing information capital from your platform's active (and willing) users when you can scrape the entire web... the 'smarter' the AI the higher the share value.
I'm a lawyer, and I've been thinking about this topic for a while. But something is missing in my chain of thought to get to the same conclusion as the majority of people, a little piece that, without it, I can't say with certain that this is undoubtedly wrong. I understand people getting mad with their work being used to train AI without their permission. But legally and morally (in a more broad sense), that's another thing. because the use is too indirect, the text used isn't in the algorithm of the AI, just the link between words in a way it's impossible to track any particular work backward, and the AI can't be used (directly) as substitutive to any original work. I'm not saying that this isn't wrong, I'm saying that I still don't have a conclusion. there are even more pieces missing to say that this kind of thing is undoubtedly ok. I think that if these Ais couldn't be used for profit, I would be totally ok with it, morally and legally.
I think a lot of creators main argument about using their work without permission is these massive companies are making billions off their backs with no compensation. Even if it isn't directly being reproduced, just in the abstract. It's kind of ironic given many of these same companies will go after people for downloading a single song or having it play in the background of a video like the famous case of the baby dancing to Prince.
Maybe you could provide me with some insight here: isn't this like, instead of stealing a car you just steal a tail pipe here and a headlight here until you've stolen a whole car slowly? Isn't that just as bad?
To me, as a developer and artist, a lot of this rests on the _where_ and _how_ they are getting their training data. These generated works do not get anywhere without filtered and clean datasets to work off of. If they were to set up a contract with an artist to create a portion of the dataset that would be fine in my book as fair compensation is awarded for the effort. But a lot of these datasets are created by using work taken without the user's permission or compensation. What's worse is that good training data is shared and resold to other groups for profit and the packages are only larger. A lot of it comes down to consent unfortunately, and a violation of artists rights to work is seemingly the easiest to trample upon. See Abdin v. CBS (2020) Docket No. 19-3160-cv
@@bujusticIt’s not really like that, though. The program isn’t stealing bits of the car, it’s analyzing how those bits fit together and recognizing patterns. Show it 30 cars, and it will notice that, in all 30 examples, the tail pipe was in the back, and the headlights were in the front. Therefore, if you prompt it to build you a car, it will arrange the parts such that it follows the most likely patterns it found among the 30 cars it trained on. Nothing about those 30 cars is directly reproduced, except by chance.
@@uoooooooohbecause normally humans would state who they've referenced, the ai takes thousands of creatives' work and merges into its own 'work', effectively stealing the time and effort the people did beforehand (as neither the user of the ai nor ai will credit the original creators) (hope that makes sense!! also tom explains it here at 18:39, i think)
@@twlxyl That's how humans work, though. You're influenced by every single piece of media you've ever interacted with. AI can just do it at a larger scale.
@@uooooooooh Simple, AI isn't human, it does not have the same rights. Your entire premise is flawed. You're comparing the AI system to a human, when you should be comparing it to an object of human creation itself. Those are subject to copyright, and so should AI be.
@@ttt5205Does a hammer have rights? That's nonsensical. Copyright is also nonsense in the first place. How can you own an idea? Anything digital is just a number, how can you own a number?
Using the works of others without adequately crediting them has been at the Internet's core. Most TH-cam creators never give any credit or cite any sources. You do not have a proper list of your sources, but at least you mention them in your videos. It is something, but I find it ironic. So, if any free independent media is to be killed by stealing content, it should have been. Or perhaps it already has. But AI wouldn't be the only and main culprit; the widespread plagiarism and thieving of content is such a day-to-day practice that almost everyone is blinded to it. And it will have achieved this without the aid of AI. It is people, I tell you. People!
I think one of the most interesting questions in the AI space is how well they've been able to say, we aren't really making progress, and yet have people still very exciting (depending on the circles you are in admittedly) about future progress coming. In that, Chat-GPT for example, has released 5(? it's at least 5, maybe I'm missing 1 or 2) models now, all of which are just minor improvements over GPT4. Anthropic have done the same sitting at 3, and Google as well with Gemini. Which very much feels like it's only purpose is to generate hype around what's "just around the corner", but we aren't reaching that corner. Now don't get me know, I'm not saying we won't reach the corner (we might not though, it is a possibility), but I don't think anyone really knows when that's coming. Things like 4o and the "advancement" there for example, is just based on research from years ago at this point.
when you start obviously conflating youtube videos, with video titles, you lose all credibility. It's the sort of absurd misrepresentation that I permanently backlist channels for.
@@Henrik_Holst You act like copyright law is extremely simple. There are still many open lawsuits that try to decide whether this is copyright infringement or not. This shit is complicated. To be clear: I don't want to argue against your position. I just want you to know that the issue is less clear than you seem to think.
"Fair Use", as AI companies are now learning, is, and has always been, a legal 'defense'. What has been happening is most certainly infringement, and since these companies are now competing very directly with a lot of the people that have been victims of copyright infringement, it will be interesting to see whether the fourth consideration of the "Fair Use" doctrine is simply erased from the books entirely or not.
Plenty of AI bros in the comment that conflate human learning with the calibration of AI models with stolen data. The AI has no intent or motive, and even if it had one they'd be the slave of the AI companies. And let say the AI model was intelligent, what are the ethic of letting something being forced fed stolen data for the profit of corporations? But the reality is that these systems only use probability to generate words and sentences, and images, and the probability weights are calibrated by stealing data. There's no spin that make AI companies ethical.
@@tuckerbugeater "There is no such thing as ethical consumption" and "Unethical acts can happen in alternative power structures as they are all human and therefore flawed" are two phrases that can coexist. Stalin also counts for godwins law, make better arguments.
I am not an AI bro but I also dislike the arrogance where people act like they know for sure that the brain is so different from deep learning. Who are you to make that assertion that a human can make something original, but an AI model can only "regurgitate". Are you a neuroscientist? As far as I know it is extremely unclear how the brain processes information. Without knowing any better, we can not tell if the brain works fundamentally different than a machine learning model. Whether the AI has intent or motive is an extremely complex philosophical debate, but you present it as if it were fact. You can hate AI all you want, there is enough reason to do that. But please do not just dismiss very interesting avenues of discussion about what constitutes art, intent, originality, etc.
But does not human intelligence use the same learning methods? Then that "HI" produces an amalgam of what they've seen on TH-cam or Google News as monetized TH-cam video essays. If something has been published publicly, then it is intentionally available, NOT for out-and-out plagiarism, but it is accessible for assimilation into a larger knowledge base which can only be appreciated by other intelligence actuators.
Talk to an AI, there's no intelligence in them, just a more sophisticated search engine that can lie because instead of an actual database it uses probability to generate the next word. No intent, no motive, there's no reason to leave data to be scrapped and regurgitated for the profit of AI companies.
I have been trying to work this out too. My knee-jerk reaction was to think what makes 'creatives' so special that the automation of their work is different from the weavers facing the jacquard loom? Understandably, when it's your job under threat you are going to be totally against it, but jobs have been steadily automated away for a long time now. There is a difference in AI and HI learning, that is who profits. When we learn from school or books or youtube videos, the creator is being paid something. Sometimes we forget these things cost because we don't pay directly but through taxes or adverts. This is not the case with AI sucking up everything ever written so the companies can make a profit (eventually) without paying all those making the content that drives the AI, for which they would normally be paid. Another difference is that the jacquard loom didn't take knowledge/imagination/creativity as a raw ingredient. These AIs need to be fed with new information or they will soon be obsolete. When individuals learn, we collectively benefit. When AI learns, do we benefit? It feels like we do but I'm not so sure. Either way, all our jobs are being automated away and we need to think about what this means. In the past and now the response is 'there will be better jobs', but where does this lead and can it last?
@@SpaceMonkeyTCT You make three points that I want to address: - 1. Does a site make money from an AI data scoop? Acknowledging that I make an assumption, I assume that it is the AI creators who arrange access to the site, engaging normal earning protocols. I'm not sure how an "independent" AI would bypass those criteria, not without actual fraud. - 2. Who benefits from AI's knowledge? The same as benefit from HI knowledge and in much the same way. Some AI knowledge is shared freely, some is sold. - 3. Are humans competing with AI? Probably, at more superficial levels. I imagine AI could write any number of money-making superhero movies. But _Lincoln,_ or _Lawrence of Arabia,_ or _Twelve Angry Men,_ or _Arrival,_ not so much. I think the competition would be the difference between a home-cooked meal and a microwave burrito. I may be wrong, but my understanding of AI is that it does not need a constant inflow of new information. ChatGPT has operated with cutoff dates. The value of a large language model is the amalgamation and synthesis of large bodies of knowledge to the benefit of its users, not in cherry-picking data. As to my own involvement, I have written works of fiction which appear on a story site. I receive no payment; I write for my own pleasure. I would be honored if some of my expressed concepts were spread. I would be pissed if my works were plagiarized and I would likely take legal action.
@@_B_EThen why does Greg Rutkowski, Thomas Kinkade, Sarah Andersen, and Pixar found their name commonly used in Midjourney's input prompt to generate specific styles? Back then people link/cite the source of their image, or people have ideas on where to look for dimilar images; nowadays, only good people does that.
Honestly. This all just reminds me of record companies reaction to Napster. I get TH-camrs aren’t quite the same group of insanely powerful people. But, LLMs are just another new thing we’re going to have get used to.
I’m sure this is an unpopular opinion, but I think it has to be said: training generative AI models on copyrighted material is *not* copyright infringement. Yes, these systems “consume” lots of copyrighted work. That does not require permissions or licenses though. The *republishing* of copyrighted material does. Running these models (i.e. inference) to generate output could potentially be considered copyright infringement. That said, there are currently no court decisions of any court in the world on if they are or not or under which specific conditions and circumstances they are. Clearly, if they regurgitate original work verbatim in significant parts, then that probably is copyright infringement. Fair use, citation and transformative work carve outs probably do also apply to large language models though. The way that these models train on original work and generate outputs is not that unlike how humans learn from original sources and can then generate entirely novel outputs inspired and influenced by the original works they once consumed. It’s a somewhat tricky situation quite frankly. Lastly, I find it noteworthy that while I agree that AI systems have the potential to tear big holes into the financial feasibility of journalism, I tend to believe that they will not stop at journalism. The reality is, that with the rapid advance of these systems we are likely going to face systems that can do most human cognitive work for significantly less cost than an equivalent human cognitive laborer. Strapped onto a robot body cognitive work isn’t even the limit. I expect that in just a few years, we will start seeing significant economic pressure start to develop that will cause havoc on labor markets. Journalists won’t be the first or last to struggle to get payed a living wage. We’re in it for a wild ride.
tell me what the difference is between a generative AI being influenced by a particular piece of art and a person being inspired by the same piece of art in terms of the legitimacy of their output. take all the time you need. if an AI is shown existing art based on which it creates new art, that’s bad, but when a human does literally the exact same thing, that’s okay? why exactly? how is it theft when an AI is influenced by existing art, and how is it not theft when a person does the same? where do we draw the line? if a true AGI did the same thing, would it be considered theft?
I can't imagine what it must feel like to know your videos were stolen to feed the models. I only know how I feel by being almost certain that something of the first drafts of my fiction writing that I've shared have been scrapped without my permission. Which is to say...not fucking great. With how much I don't trust the people who in charge of the place I posted too...I'm not exactly willing to share more than I have.
honestly im getting tired of one person claiming AI is going to save us all, while someone else proclaiming its shallow stealing... just give it a decade or two, and no one is an AI weirdo anymore, because everyone will be using it at some level.
Yeah it's going to be the norm in 10 years. Corpos and idiots are using it right now for theft and pretending they can create and whatever but one day it's just going to be something everyone uses
let’s clarify how LLMs like ChatGPT work. These models don’t "steal" content in the way you imply. They are trained on vast amounts of data to learn linguistic patterns-grammar, structure, and associations-not to memorize or replicate specific works. LLMs generate text by predicting the next word in a sequence based on learned patterns, not by copying and pasting from their training data. Comparing this process to theft misrepresents what’s actually happening. It’s like saying a person who reads thousands of books and later writes an article is plagiarizing because they learned from those texts. The AI is essentially doing the same thing humans do when they read and synthesize information. When it comes to the issue of copyright, there’s a deeper problem. Copyright, as it currently exists, is a system designed more for protecting corporate interests than fostering creativity or innovation. It often creates artificial barriers to knowledge, restricting the flow of information in ways that harm both creators and society at large. Originally intended to incentivize creativity by giving authors temporary control over their works, copyright has evolved into a tool that locks up knowledge and ideas for far too long, preventing them from being built upon and shared. This slows down innovation and limits what others can create, even in collaborative or educational settings. Moreover, it’s important to note that humans have always built on the works of others. Whether it’s in literature, art, or science, the most groundbreaking ideas often come from standing on the shoulders of giants. LLMs, in this sense, are simply a tool that enables this process at scale. AI can take vast amounts of information and help generate new, unique outputs-just as people do when they learn from a wide range of sources. To deny AI the ability to "learn" from publicly available information would be to deny a fundamental principle of human creativity itself. Ultimately, the concern over AI models "stealing" work is rooted in a misunderstanding of how these technologies function. AI doesn’t destroy creativity-it amplifies it by providing new tools and opportunities for collaboration and innovation. Rather than clinging to restrictive intellectual property laws , we should focus on how to adapt and encourage a more open, collaborative future where ideas and creativity can flourish.
There's something to be said about profiting off of a copy of something to begin with. I think a song, speech, or acting should be paid for when done in person. When filmed, printed or recorded, it is no longer "work" it is just a copy, and no one should have to pay for it. Copyrighting and reselling copies of anything is the biggest scam of our modern world.
Especially when royalties are still being paid for things 20+ years later and the original creator isn't even alive to profit off of it, because of shady industry licensing groups or companies like Disney using legally-questionable methods to hold copyright in-perpetuity. Or when the company doesn't even make the product / copies "available for sale" and is just sitting on the copyright so nobody can use it.
Brad, I don't think you are going to find any creators who agree with you, for many good reasons. There certainly are exploitative examples, where patents or copyrights are taken from under creators or used to squeeze consumers. But in general, copyright is an incredibly necessary protection. Without it, no creator would be able to make a living. Gillian Welch wrote a great song about this, it's called 'everything is free now' .
When everything is worth money, then money becomes worthless, or where money is involved, art integrity, ownership, no longer matter. We are quickly heading for that world where it doesn't matter how much of your fiat money you spend, all you get is garbage, because the talented, creative people have been replaced by the greedy....
That needs a very strong (and extremely poor) assumption: talented and creative people are not greedy. That is obviously not true. The only greedy people that are successful are the smart and creative ones. Like the folks over at big tech. So, money will not lose value. In fact there will be significant deflation. The "really" creative and talented people will use AI tools to make better stuff. So, more poor people will get access to good things. For example, chat GPT allows for free access to best in class tutor for the poor children - of course they need internet - but the greedy smart people have made sure internet is cheap and wide spread. At least that's the case in my country.
@@cookies23z He's definitely smarter than most other people in my life. He managed to align smart scientists and engineers to make rockets land vertically, get space internet working, build a proper recharging grid. I don't know anyone in my circle that can replicate that level of management.
@@samarths you're confusing smarts with luck, and HE didn't do all that stuff, he paid people to do it. and gee where is the proper recharging grid? the self driving car? all the satellites that he as going to send to orbit for his space internet? he's just a capitolist, doing exactly what they do to get more investors, call your company some grand name of past glory, make a bunch of bold promises, fool people into investing, then try to deliver on those promises, normal silicon valley poo, how's that tesla stock doing?
This makes me think that the future of all art will be either just robots, or we will have more private performances personalized to the audience. You could hire an acting troupe to take a play they know and tweak it to make it personal to you or your group. No recording allowed.
The future of art will be interesting to see, especially when it comes to digital art. My hunch is that it'll move differently from things like bitcoins and NFT where those place value on "Proof of Stake" while Art moves toward "Proof of Work"
Unfortunately this video does not seem as well researched as your other ones, as it is unfairly one sided. There are many valid counterarguments that haven't been brought up, such as say that training AI on copyrighted data has been illegal all along, the result of that would be that only handful of tech-giants would already have access to such incredibly immense amount of data or would be able to buy the rights to the data, effectively removing from the scene the competition and innovation that hundreds if not thousands of small AI startups are bringing to the table. This would severely stifle inovativan and competition, concentrating all the AI power in the hands of the few, which is obviously a terrible outcome for everyone. We need to stop and think for the moment and understand the principles and ideas behind copyright. It's essentially artificially imposed scarcity the purpose of which is to protect someone from copying someone's work and selling it as their own, effectively stealing the revenue of each such sale from the original author. AI training on copyrighted data is simply not theft in the same sense of the word, in which selling someone else's artwork as your own is, and we need to stop using the word as if it is, since it's extremely misleading to the point of becoming a politically charged piece that is essentially anti AI propaganda rather than content that is as unbiased as possible, that is documentary in nature and meant to educate. I'll also quote another comment because I couldn't have said it better: "Repeated uses of terms like "regurgitate" implied to me a lack of understanding, or conversely a refusal to understand how LLMs and other forms of AI function, and misrepresents it, using words deliberately chosen to be derogatory, and to incite further mistrust and misunderstanding". Just look at that shameless THEFT of that insightful comment, I should be sentenced as a THIEF and put in jail :)
19:30 - Isn't that how everyone writes? We all use, whether knowingly or not, idioms or even whole sentences that we have heard somewhere. Everything is based on something else. Every story contains “tropes” that have existed somewhere before. Even many newspaper reports copy their content one-to-one.
no. it isn't. "AI" can only spit back out what it has had inserted into its dataset. humans CAN create new things wholecloth, "everything is based on something else" is such a meaningless cliché that ignores the obscene scale of the total dearth in creation at hand-and the fact that these models will occasionally just spit out complete plagiarism.
@@MelMelodyWerner Why do you write AI under quotation marks? And AI can also create things that have never existed before. You should inform yourself a little before you flaunt your ignorance.
@@MelMelodyWerner You just made a contradiction. You said that AI can only spit out what was inserted into its database, yet you state that AI will occasionally spit out complete plagiarism. AI does create unique generations based on statistical models. Humans are also capable of spitting out complete plagiarism.
Nothing has been stolen, stealing implies that it was private and somehow protected, but according to what's stated in the video all the data they used were 100% public information, if that is stealing all of us are stealing right night as we are watching the video. One have to be careful of the language they use
Watching or transforming a video does not constitute copyright infringement. Only copying a video verbatim would infringe copyright. As AI-generated outputs are transformations of existing content, they do not typically infringe copyright.
If copyright was still some 14 years as it was originally (not that 70 years after death current nonsense), there would be no problem with sufficient public domain training data, so "everyone" could train good AI, not just selected few with enough lawyers. Another thing is that this is exactly how humans create. We see, remember (even copyrighted material) and remix. Same as AI.
If gen AI creates in exactly the same way we do, then the problem of it needing an ever increasing amount of data is solved. It can be trained on a very limited dataset and simply create the novel information for itself and keep learning from it, like we can do. Unless-- that's not really how it works and gen AI is in fact mostly derivative, where this fact is obscured by its ability to take many small bits of information from an immense amount of data and combine them through not much transformative processing, and where any trace to the original works is erased. That would be bad news, right? Copyright law doesn't take kindly to derivative creations, no matter how much data they're based on.
@@felixmoore6781 Of course computers are inherrently deterministic, the only reason an AI prompt gives different results each time is because it generates by starting with pseudorandom noise. If you modified it to use a single noise pattern, persumably a given prompt would always result in an identical result. Because computers are inherently capable of exact replication of complex data in the way humans are not. The best art forger in the world could not perfectly recreate the Mona Lisa. The simplest computer program could. Humans need to get better to copy more exactly, AI need to get better to copy less exactly. Humans also learn as a way of operating in our physical world, and most of our information is not of human-generated data. AI learn exclusively to be able to replicate human works, and are trained exclusively on human-generated data. People acting like AI's just 'learn like humans' ignores the vast differences between a conscious evolved lifeform and software, and between the reasons behind human learning and AI learning.
I genuinely do not understand why people are upset about AI companies using copyrighted material to train their models on - certainly in the case of content that is free and publicly available online. Nobody has a problem with us, as humans, being able to read such content and, then, learn from it or be inspired by it without having to pay anything to the authors or request their permission first. Why should AI be any different? Writers look at others' writing to learn, artists study other artworks, and songwriters listen to others' music for inspiration, all completely for free. If we want AI to be even remotely useful, surely it has to have access to the same information we do. Inevitably, this may lead to their work in some ways resembling something they've been inspired by. Arguably, even, in the case of AI, it will "use" each piece of work it has been trained on less, simply because it has seen more of them than a person ever could. I am, of course, not talking about plagiarism or situations where an AI spits out chunk of copyrighted material verbatim. If you explicitly ask the AI to do that, it probably will. But just as with any other tool, the end user holds the ultimate responsibility for the content they produce. We don't ban Google because it allows people to commit copyright infringement, so how is AI any different? Also, forseeing the argument that AI companies profit from their usage of others' content, so does Google - it would be useless if not for it indexing practically everything on the internet. I cannot comment on the current legalities of this, but, from a moral point of view, I do not see any problems here.
AI does not technically regurgitate. Each generation is quite unique, or as unique as it can get given the material. For example, you could ask an AI to describe cheese using one word; it will use the same word many times, but so will humans. And if you ask an AI to generate a unique image, each pixel will be generated uniquely in a unique place. What exactly do you mean by regurgitate than?
It seems like the creator of this video, and most of the people in the comments have a fundamental misunderstanding of how AI works. Neural nets are in essence just lots and lots of equations that try to predict output from a specific input. What is fed into the AI during training is not stored in any reversible way, and it's really misguided to say that training a model on some data equals theft of that data, or that the specific training data should be cited (which just isn't really possible). In a way the learning is quite alike that of humans. If I draw a picture in my own style, should I cite all artists who have once inspired me?
Find out more about my upcoming, Nebula Original documentary *Boomers* here: go.nebula.tv/boomers?ref=tomnicholas
can we see you learn what words mean and how thigns actually work instead of you jumping on a bandwagon of fear and stupidity there?
So those subtitles were user submitted, or was it google's auto-generate subtitles? Because one is scummy, and the other is scummy sneaky and doesnt make sense half the time, lol.
How about the term "Boomers" being used as somehow degrading? What about poor people from that generation? What about the fact that - if you look back - the older generations always seem to have more, and the younger generations are always angry (especially the "boomers")?
How about ageism being a problem?
etc.
I love the idea of your documentary, even if i cannot afford it, Thank you for making it (all the angy comments are boomers)
@@battlemasterofaxes Why do you think giving people a certain label disqualifies everything they say?
1) Climb the ladder.
2) Remove the ladder.
3) Profit.
Eh, pretty much.
That's what capitalism always incentivizes. To cheat the system and then remove the cheat to make others struggle.
“Hey, Babe, Monopoly II just dropped!”
@@alexandrudorries3307Youve heard of the landlords game, now get ready for the infobrokers game
lmao
AI companies "protecting" copyright of big corporations while their entire data model is itself stolen is pretty hilarious. Or depressing, your choice I guess.
The copyright system is nothing but a corollary of the broken system we run overall. The story still the same, people getting rob of their work by capitalists.
I’m laughing funny tears 😭
Why my comment keeps getting deleted???
@@OjoRojo40 dunno. Probably because of whatever it is you wrote.
It reminds me so much of how colonial powers stole someone else's land then fought tooth and nail to defend "their" territory
It is piracy when we do it, but it isn't when they do it. Rules for thee, not for me.
Edit: It seems half of the replies don't get it. To create the model, you need to train it on data. To train it on data, you need the data on a hard drive. The act of acquiring and putting the data on a hard drive is piracy, since all that data isn't licenced under creative commons or other copyleft licence. It has nothing to do with AI, inferencing, inspiration, reproduction, fair use, etc. You wouldn't download a car. Insert matrix music.
Just like it's fair game when hedge fund billionaires short but when a bunch of redditors fiught it, its market manipulation.
The rich really do think their entitled to everything
@@bdarecords_ too bad the redditors "fighting" it were just a bunch of people all out to make a quick bag with no care about the rest. Not defending the hedge funds, but there was nothing benevolent about what they were doing, and many normal people were left with losses because they were sold the hype. It was the same pyramid scheme as crypto with a veneer of "justice"
@@USSAnimeNCC- It's not as if we don't let it happen...
@@martinfiedler4317 how
The only way to solve this is to feed Nintendo IPs into the AI and let Nintendo fight them over copyright lol
Ha ha. pretty smart actually. I wuld pick Disney because they are much more sinister.
Haha! Just commented the same, but Disney. They'd find a way to make it only count for their IP, but one can hope.
Like tricking the monster to swallow the bomb in a movie
@@samarthsDisney is pro-AI, though.
Nintendo is *not*.
@@samarths Do both. Copyright battle royale.
If you go on google, download a few hundred copyrighted images, and they display them on your website for commercial gain, you'll get hit with a barrage of copyright infringements, and rightfully so.
If you download millions of copyrighted images, but instead use them to train an AI for commercial gain, as of yet apparently that's ok.
What if you download hundreds of images, figure out what about them looks good, and draw a bunch of images by hand, copying some elements like shadows and perspective from them?
I mean, that's if we're ignoring the fact that transformative works are protected by copyright law, which AI tools are absolutely capable of achieving.
@@VecheslavNovikov This is a strawman argument entirely divorced from what actually happens in valid copyright infringement cases and AI model development. Even if this was a relevant argument this still creates the issue of "why shouldnt we abolish copyright entirely then if it never mattered in the first place? artists have been infringing on copyright for centuries if copying techniques is the same as an ai photocopier that jumbles the pixels a little bit". megaconglomerates still hold far more power with the current copyright system than corporations and both outpace the individual by a mile in being able to defend themselves.
Oh right, the answer to that is because people need to eat and sleep before they can do the shit they actually want to do and the current system requires they sell their soul in order to survive, and people are encouraged to copy something that works instead of risk poverty making their own stuff, sorry i forgot about that.
I for one would rather have 20 different interpretations of Avatar: The Last Airbender in a month than one every 10 years made by the company who just so happens to buy the rights the year prior. I would also prefer if everyone on earth was able to feed and house themselves regardless of their ability to work. how about we fix both before corporations decide that AI should be used to starve anyone speaking a narrative they dont like.
@@_B_E Law is weird. Art done by animals can't have a copyright either.
@VecheslavNovikov
Art by AI is the same, somewhere it says by human hand. Argument is then if it's the person that programmet the AI that have the rights or if they AI removes the owner rights as you not totally sure what the AI will make and therefore you don't halve total control of the outcome
Let me get this straight...
It's okay for tech companies to scrape the internet for data to train their AI, but for some reason it's not okay for the internet archive to digitally loan books, films and audio files out?
I think we're being fucked
"Someone, somewhere is usually ripping me off in one way or another" is a somewhat dystopian thing to just say in casual acceptance.
Someone tried to argue that AI was going to make creators more powerful. My reply was that isn't how companies were going to use AI: do everything they can do to cut as many people as they can out of their profits.
music streaming was going to make creators more powerful, only for it to turn into a corporate money machine that pays users a miniscule fraction of the profit.
People love to separate technology from behavior but it's not possible to do that. And there's absolutely a need to understand that if you don't want someone using technology in a specific way then it needs to not be worthwhile for them to do so.
Consequences shape behavior.
technology COULD be used to enhance our lives instead of exploit us except capitalism.
I think the whole scenario is simply over-hyped ... like this A.I.-BS itself:) (BTW guess who are the culprits for that, being the spokesmen of f.e. OpenAIs marketing division? Yes, greedy journalists and even more greedier data-scientists and crypto/Ai-"Bros"). The word that AI is a hallucinating cesspool and has nothing to do with INTELLIGENCE at all will simply spread. Oh and everyone can see for themselves that such "articles" and work is just an embarrassment in terms of quality and reason. People will simply stop giving those outlets their money or their attention, because most of us do not want to be even associated with this "soulless" nonsense.
The most important thing that Tom missed is that this so called A.I. cannot be innovative ... NO! If you think so ... then YOU are the one hallucinating, hehehe. This kind of technology simply can't CREATE new things. By Design! By Principle! -By- Because of and how the TRAINING works! There will be nothing NEW ... which diametrically excludes "THE NEWs" (journalism) as a successful application area of pseudo A.I.! Without the human "slave workers" (in todays or another form, work stolen from journalist or sexist abuse of precarious workers from third world countries who actually do the "intelligent" part of the work (see Amazon ... it is disgusting!), doesn't matter) ... those companies are only big fraudsters with big mouths and hot air!
@@albert2006xp lol the ai lover is trying to gaslight us. Let them use it in their slop, the folks that like quality will go indie.
AI even read my masters thesis. A piece of text maybe 3 people ever fully read.
If you link me me the paper I can make it four
@@Dragonshadowbob me too
@@Dragonshadowbob Ill read half and say it sucks if you want...
I mean.... is this wrong that it is learning from information freely put out in the world? Maybe a legal change such that any NEW content put out after a certain point requires consent and/or compensation for the creator is the way to go?
@@CentristDad155 Legal charge?
Dude, I am from North Korea; Charge me.
I do think Gemini does show how pointless the AI we have now is. It will give you answer that is exactly the same as you would get if you scroll down past the gemini bit and even then it gets it often gets it wrong. They've put what $200 million into something that they can already do.
it's really the lamest possible implementation of the technology. they could've used it to make it easier to specifically search for what you want, like research papers, but it instead just summarizes the top 3 results (and usually fails)
all generative ai is a predictive model... makes sense that the answer it arrived at ended up being what was already working 🤔
GPT-4 and newer stuff is much better. Gemini is shit. GPT-4 used to be free on Bing Chat but now only GPT-3 is free.
@@TheNinToaster yes, but the fact that a tech company used so much money on a model that you and I can see with common sense would do more-or-less nothing is what we mean
For the layman, yes, it's pointless. However a google search can't write code to my specifications.
Seeing AI generated stuff is an instant turn-off for me. I don't know who originally said it but I've heard it said online "why would I bother to read something nobody bothered to write" and I agree completely. I recently found a channel that had a lot of really interesting sounding videos, but then halfway through the first video I watched they had a bunch of AI images and I stopped watching right then. Generative AI for text makes stuff up all the time, if I see someone using AI images I'll assume they're using stuff like chatgpt and then they've lost all credibility to me at that point.
At least with the crypto bubble I could point and laugh, but AI is invading everything and it's the worst.
> why would I bother to read something nobody bothered to write
Because at time it might not exist in your language. Or the does a much better job of summarising the points because the original author doesn't do a good job of logically connecting the pars and stuff. For things like meetings I find it super useful. Instead of watching 1 hour of umms and uhhs I can just read for 5 minutes to get a summary.
@@samarths This is why you have minutes keeping though????? The hell kind of banana republic management is running your company???
@@samarthsif it doesn't exist in your language then the AI is literally ripping it and translating it, violating copyright law. But that's still just a translation, and it's one that is certainly not going to be entirely correct
@@samarths dork
To be more accurate thus better pointpoin the source of the problem, Ai is not invading everything, immoral people are invading with their AI tool.
The fact that they were trying to hide what they had done shows that they know what they are doing is wrong and illegal.
it's definitely not illegal lol.
@@wck I would argue that the output of these tools effectively replace the work that they are trained on and have the capacity to drown out the original work, undermining a critical pillar of fair use (the effect of the use upon the potential market). AI tools by necessity are trained on entire works so that can be convincingly recreated (amount and substaniality of the work, purpose and character). Unless I’m miss interpreting American copyright law, training on all this stolen material seems fairly likely to be copyright infringement.
@@MrMoon-hy6pn tell that to universal pictures before they sue Sony over the VCR. Oh wait, you're decades too late and they lost that fight. So long as the machine has non-infringing uses (and it absolutely does) then the creators are not liable for people using it to do copyright infringement. That is the law.
"these tools effectively replace the work that they are trained on and have the capacity to drown out the original work" - No. AI is not autonomous, it does not do any work without a human directing it. So, while it does have the capacity to drastically reduce jobs by making one worker as productive as multiple workers, it is still a human being that is doing original work using a TOOL.
Wrong, maybe. Bad PR, almost certainly. Illegal, definitely not. It is not reproducing copyrighted material, it is creating new material based on all the data it has been trained on. Which falls completely outside the scope of copyright law. Copyright law might change, and likely will, but until it does training an AI on any publicly available data, copyrighted or not, including youtube videos, is perfectly legal.
That said, there are however some companies that have actually broken the law because they used copyrighted meterial that was not publicly available to train them without paying for it. AKA, piracy of copyrighted materials. The court case is currently ongoing.
@@DefaultFlame "it is creating new material based on all the data it has been trained on," I wouldn't even say that's a fair characterization. It's creating new material based on pattern recognition and representation data. Nothing ChatGPT does (or any these other AI tool) is directly based on training data. They don't even have access to the training data after model training is done.
It drives me up the walls when my classmates use these ai "tools" like a web browser. They don't question ONCE if there could be false information there. I think this show pretty well how people bend backwards for anything tech. They think it's smarter than they are, when in reality, the ai is extremly limited by not only by it's human creators but the (often unchecked) input it gets.
Like c'mon. Some people were told to add glue to their pizza or eat a mushroom that is so deadly that it will melt your insides and people STILL bow down to the ai crap.
I mention this because I heard a lot of excuses for people using ai generated images as references for their art. The ai is NOT reliable, especially in terms of proportions. Why use extremly flawed programms when there is thousands of free resources in any language?
When it first became a big thing it seemed genuinely useful, now it's just garbage because it's already "learned" off everything that already existed and is just "leaning" from itself now. The accuracy and quality of the results is becoming ever worse. It's like inbreeding. The lack of genetic diversity leads to... issues.
There's a ceiling to how much these programs can produce. Until it iterates on itself which produces pure gibberish
One of my classmates tried to use ai to do the summary of a class project, we were supposed to cover the laws regarding primary education, even though she feed all the papers it would only summarize the bare basics of secondary education we had to do it by hand.
I'm sorry to break the news to you, but for 99.9% technical topics AI is already indeed smarter than you are. But so is Wikipedia.
@@clray123anyone who knows how to do basic research and cares about such dumb things as "is this author an expert? Is this website reliable? Was this from a blog post? How old is the information?", i would say no, AI is not smarter, it just looks like it is. It's easy to look like you know something technical while saying random bullshit.
As an artist, one thing I find extremely frustrating about generative ai is when I can recognize an artist peaking out through the soup. But because generative ai are entirely new images, it is impossible to reverse image search to find an artist! If we could reference the images one piece of ai referenced, I’m certain we could see just how closely ai copies certain works of art.
That being said, I imagine it is something similar with generative text. Reading something you enjoy, the tone, the pacing, the voice behind it. Wanting to read more by them, but then finding it extremely difficult to find that voice again.
It’s stealing people’s individuality and profiting off of it
It was really disturbing to me when I played around with generating paintings and AI tried to replicate an artist's signature. The future is looking very grim for artists
Individuality or originality is a myth that has been inherently overrated since the times when there were far fewer people, and an even smaller fraction of them could potentially afford to become genuine, recognizable artists. You want to know whose works AI was trained on, but almost no one cares whose works that meaty guy from DeviantArt was trained on, or whose chord progression you almost recognized in that other guy's music. Even before AI and computers, someone's work, one way or another, was based upon centuries of blood and sweat from others, with little or no remorse, especially when it comes to money.
@@bartolomeus441 why's that? The art is constantly transforming, and so are the artists. It might look grim, but only because you can't imagine what people will make of it.
@lyrajaded
Exactly! 💯 I am an artist as well, and the stuff that I've seen that was AI generated is often times a very clear style of both specific artists, and pieces of work. (Same with authors, musical pieces, ect.)
It's seems they are just blatantly ripping off other artists in order to create new art when using AI to generate "original" works of art.
For example, if one were to give the prompt of create an image of Garfield the Cat painted in the style of Vincent Van Gogh. Then these are two very specific pieces that were definitely created by other people. So while the concept of here's what Garfield would look like if he were painted by Van Gogh is an original thought. It's just not original art.
You could easily ask an actual human painter to create that same painting for you, but no one is doing that.
So it's like you said, AI companies are just stealing people's individuality and profiting off of them without ever paying the original artists. 🤬
@@Cuddly-Cactus You too could paint Garfield the Cat in the style of Vincent Van Gogh. Would this be stealing or your original artistic interpretation of both? And you surely can ask an actual human painter to create that same painting for you. Many would agree for a reasonable price. So, would this be art or craft?
I don't really care if AI will create music better, faster, and cheaper than I do, because thousands of people are already doing it faster and better than me right now. I can't and don't want to compete, but I can create, and that's enough for me.
AI can't exist when the Internet Archive can't.
It definitely can.
But it just as definitely *shouldn't*.
Both should be allowed to exist and thrive.
@@egonzalez4294 hahahahahahahahahahah no.
Internet Archive is far more legitimate than Ai. And the Wayback Machine alone is critically important. (It's a backup and historical record of the entire web thru the decades. Especially useful comparing what news sites said yesterday compared to today. What headlines have been changed. And so on)
@@egonzalez4294 Even some AI maximalists agree that letting AI grow freely would be dangerous. Don't be so eager to deify the big calculator that techbros overhyped.
It may seem like a small thing but I'm very happy you pointed out the difference between a TH-cam commentator of news and actual journalists who are breaking the news. I get so sick and tired of hearing people complaining about mainstream media and saying that social media is the only place they feel confident getting their news when they fail to realize that FEW of those social media posters are ever actually breaking the news. They're regurgitating what actual journalists have written. As you said, being a journalist takes so much more work and money. TH-camrs can safely remain in their offices while someone doing a piece on a war risks being hurt, killed or even taken hostage.
People don't have to love the way media is presented these days but I think we all need to take a brief moment the next time we're about to rally about mainstream media and be grateful to the journalists who put in the legwork that social media commentators largely have not. I just don't see enough of that distinction being made.
Mainstream media is mostly regurgitating news too, that's just how journalism works in general. Most "journalists" nowadays are not much different from youtubers and have no budget or time to actually do investigative journalism. It's actually youtubers that often do, like this very video.
@@marcogenovesi8570 yeah I have seen so many articles that were 1:1 the same on plenty websites..
The thing is that mainstream media has basically become social commentators. When the majority of the information is the commentators views instead of the actual information. There are many more people who are just at first TH-cam journalists trying to expose corruption, abuse, things that mass media doesn't want to cover remember the mostly peaceful protests where they were trying to not catch in the background the burning cars😂
Journalists don't break the news, they make it up.
Yes EXACTLY this. Thank you. This is where I unfortunately struggle with audience naïveté. When I was a very geeky teenager who consumed mainstream news media (habits set by my BBC-watching & broadsheet newspaper immigrant parents), I remember reading year after year ... then month after month when FB was taking off globally how much the journalism industry was being undermined by the internet and social media even at that early time. And also reducing readership.
All the now well-worn complaints about news organisations losing readership in favour of click bait articles, Googles horrible search engine practices & other shenanigans ... all meant that there was less money available to go into quality reporting. First all the medium-sized national outfits went bust, then local paper after local paper. All the while journalist were loudly beating the drum saying: "original reporting is important to hold people and power to account. To be your local chronicler." Newsrooms got smaller, you couldn't afford to send as many journalists to cover a 2-week court case or a 5-month story and still leave money in the bank for all the vindictive lawsuits they got from the rich and powerful wanting to shut them up.
But nobody really heeded the warning unless you were kind of a politics/civics/journalism aficionado like me. So the quality of MSM news started to decrease.
Then what was once a tragic storm of events turned into something worse. Eventually millionaires and billionaires realised they didn't have to spend as much money on lawsuits if they just BOUGHT those news outlets and purposefully underfunded them or undermined them by making them hyper biased etc. So that's what they started to do in the 2010s and its picked up speed since. The largest example of this is a billionaire in France scooping up I think TV2 & its channels and making it into an ultra-right wing French Fox news. The reason all your independent MSM magazines and tiny outfits have 1:1:1 same articles is cause they purposely dont pay staff enough so not many stay in those jobs, and the ones that do dont have enough time to do write and edit good stuff.
Honestly it's a really miserable state of affairs and if I had the talents of a YT video essayist I would do my own passion project exposing this history. Waayyy to many people, even older than me! As well as the young'uns think that the MSM just started to let people down outta nowhere and it's not the case.
and they always told me i was breaking the law when i saved Netflix streams and downloaded cracked adobe software
That's coz you are poor compared to microsoft.
You probably were breaking the law, but it wasn't theft.
I mean, is it really piracy when you read articles and then write it in your own words? because thats what AI does... why it is okay for me to do it, but not for ai company using their software? Really...
Apples and oranges.
@@lazymass Because brain power 'n human effort is not equivalent to electrical power to a lot of people morally.
To a capitalist, human hours 'n machine hours are the same so there's no problem.
To a worker, you are becoming valueless 'n don't deserve to get paid for your work.
That argument "AI professionals" used to make saying that they didn't know how the LLM was coming up with answers is starting to sound less like genuine uncertainty and more like laying the groundwork for a legal defense in case the copyright lawyers come for them...
Hey, we know exactly how it works now, check your sources.
Your source : Multi Billion Dollar Corporation
Our Source :
Scientists like Kyle Hill or Sabine Hossenfelder.
Lawyers like Legal Mindset or Legal Eagle.
Law Firms like Joesph Saveri.
Programmers like Thomas Brush and Arvind who gets a speech platform with Adam Connover, and GitHub programmers who got duped by Microsoft Copilot.
Artists like Yoneyama Mai, Mogoon, Kim Jung Gi, Hayao Miyazaki, Greg Rutkowski, Thomas Kinkade.
Our forefathers, the Luddites, and the victims of Industrial Revolution who got killed over fair wages to maintain the machine.
Would you cite or hallucinate your sources, shill boy?
I think almost every source in The Pile was heavily threatened by IP law at one point. People had to fight tooth an nail for their fair use rights only for a lot of those same IP holders who were previously threatening them to turn around and gobble it all up, somehow avoiding the copyright of small creators. Must be nice to own everything and make all the rules...
Fair use is when big companies take an individual's content. That's why we need to shut down the internet archive, and just feed the rest of the internet into chatgpt instead. /s
It really is revolting how much of modern IP law and precedent ignores the public interest in favour of hypercapitalist megacorporations.
It just seems to me that communists like our host here are all for love and sharing and free access, and mostly against copyrights when it is them doing all the stealing, but up in arms when the big corps steal from them.
Steal from the poor, you become rich.
Steal from the rich, you go to jail.
As someone who types in the URL to the times and economist, I'm feeling a bit attacked here lol
I can always tell when an AI tool is used to narrate a video. Lack of inflection, mispronounced words, and other gaffes much worse than weird hands are signs of AI used. I experimented on without posting AI generated narration and found that I had to weirdly spell words to get the correct pronunciation, specifically with words that can have more than one pronunciation based on how its used in the context. That experiment proved to me that it's not worth using.
Can you though? How would you be able to tell if the next video you watch is real or if it's just a "better" AI that doesn't have the problems you mentioned? Or the one after that and so on. How will you know the point where AI improves so much that you can't tell anymore, and how do you know we haven't hit that point already? The fact that _some_ AI videos are discernable doesn't imply that _all_ of them are.
Yeah give it half a year. First photos weren't perfect either.
They’re training generative intelligence on Enron emails?
I laughed too when I saw that. I imagine it's simply because it is a wealth of legal information, as well as spoken/written testimony that is freely in the public domain.
@@J5L5M6 Pretty much. I'm sure Enron is used as a case study in business and legal studies.
No, it's just pretty much the only freely available dataset of emails out there. No one in the AI developer community cares too much about the content of the emails. Few people even care much about the dataset as a whole because it's so tiny compared to the amount of data the generative AI models need.
@@newsjunkie7135 It was a joke. About a joke.
I thought that was wild! 😂
"I don't know what a brat summer is", Same. Same.
It's not really AI theft. It's just human theft poorly disguised.
They stole the data for the dataset. Then the AI learned a bunch
"They took the credit for your second symphony
Rewritten by machine on new technology
And now I understand the problems you can see"
---Video Killed the Radio Star, by the Buggles
@@principleshipcoleoid8095 Why is that theft but humans learning from others' works isn't?
@@VecheslavNovikov Because when a human does it, its because they are looking for specific techniques the artists are using and applying them in their own work deliberately, its often done out of respect for your own craft and the inspiration, when it is expressed outwardly in your work, most artists can see that and say 'hey this is very invader zim esque in its style, but its done in a way that allows for their persoanl expression'.
When you take a bunch of images and put into a dataset, you arent copying the techniques, youre just ripping the work apart. Its not 'this is very invader zim esque', its 'oh no, you just traced invader zim, put shades and a scarf on him and called him zom going on adventures with his yaoi rival Doob'.
@@principleshipcoleoid8095 Generative A.I. does not "learn" It trains a matrix formula for transforming the data, the more complex the matrix the more accurate but also the more the marix is just storing the orgional works, without any knowing of what anything is.
Its a bit like an oppisite of the brain, we learn and know what something is to the point where we can simulate it, then transform what we know into something orgional that we may not know, that may make new knowledge or emotions or even just navigate the world, then check the source material I.E. the world.
(even our senses are not registered in our internal simulations *until* something is sensed that was not predicted)
A.I. goes in reverse so it does not have to know anything.
The thing I worry most about AI that over $150 billion was spent on it over 6 months (god knows what it is now). What happens when the investors want their money back and we are forced to pay for this in one way or another.
Not really, unless there is a new major change they've just burded their investments.
It's happended before and will happen again.
@@magfal and the investors will run to the government and ask for a bail out to stop a recession...
That was banks, not investors. What will happen is the investors, including retirement and pension fund members, will get screwed while the VC/PE and startup owners get a really nice skim off the top as it sinks.
So they get a nice several percent of that $150 billion bet while anyone dumb enough to trust them with their money lose everything.
Of they really do want to have the investment pay off, but they will still make out like bandits if it crashes.
The sad thing is, just imaging how many healthy school meals could have been bought with that many, how many urgently needed surgeries could have been funded, how many homeless people given a place to live with that money....
@@agilemind6241 money ain't solving that. Actual control of the resources and production does
intermediaries? They sound like the bourgeoisie to me (in the traditional sense that they don't provide the thing, they just own the thing that hosts it)
But to host it costs money and resources. It is a free market for servers for sure. Unlike land, compute and storage don't have a cap and are man made. This would make hosting not a monopoly or the equivalent of land hoarders.
@@samarths a houe requires upkeep too, I have to get my boiler checked for example. I would also argue that there is a cap, since there is a finite amount of copper in the world. All of this is besides the point though. When I order on uber eats, who is actually providing the service? The people making the food and delivering it. And yet there is a transaction fee that goes to uber. I know from the cost of running my own website that they do not need to charge that much to cover their costs, it could be fractions of a cent and they would still make bank.
@@Janokins But it is truly a free market, right? In the case of uber and servers I mean. There is no cartel like behaviour. Maybe Uber works differently in different parts of the world. Where I'm from they don't have the evil monopoly like over reach yet. So, when they start charging a platform fee people are free to move away. If others are finding it so hard to make a platform then isn't their platform fees justified?
That last but doesnt make logical sense though...
If uber(and other platforms) are doing ok in your area, what market share is some new startup supposed to eat? It would only have the room to grow where the bigger companies overstep the customers desires and charge too much or make mistakes. But they can grow comfortably, small increases, a new fee here, now you can subscribe to uber one here, pay for direct delivery here even tho it used to just be free part of the service.
There isnt room to grow a new business in the market niche, until it is too late :/
@@cookies23z I read "There isnt room to grow a new business" as "there is no problem to be solved". Maybe that's where we disagree?
> That last but doesnt make logical sense though
Here is why I think it makes sense: Uber has a monopoly over nothing. Not over the engineers, not over the tech, not over the taxis, not over the cars, ..... over nothing actually. Which means if someone wanted to build a cheaper replacement app they should be able to (free market forces). The fact that no one is able to build a cheaper apps means that the folks over at Uber haven't overspent and that they have actually solved in the cheapest way. Where do you think the reasoning is off?
Soon there will be so much AI garbage published that the machine will start feeding itself and producing ever better garbage.
And some day, a major part of the internet is going to be pure gibberish.
Some day? It's always been gibberish ever since we allowed social media to exist.
Dead internet.
This has already been happening for years now.
That's called model collapse
We are already there
I think there is a different problem entirely with LLMs that's even trickier:
A regular text may be truthful, deceptive or incorrect. An LLM can be none of those things because it's pure slop. The system doesn't really have the concept of "meaning", it can't lie or tell the truth because there's no difference. A world where people use LLMs as a source of information is a world where people lose grasp on reality because their information becomes more and more dissociated from meaning. These models can't be "right or wrong", it's just coincidences.
And a furthering of the problem discussed is that portions of the web may close down in response. The fight against these scrapers is suddenly at the forefront and the result may be more paywalls, more walled gardens etc.... I'm a software dev with my own blog and... I don't really feel like open sourcing my work any more knowing that it'll just get stolen by Microsoft and OpenAI to make them money while making software everywhere worse. And while I haven't taken down my blog, I also don't really want to be posting anything new if it'll just share a similar fate... The internet has suddenly become intensely hostile
The fact AI costs waaaay more then it can reap... the fact it is making storage and chips even MORE EXPENSIVE in these shortage times... The fact it consumes even MORE POWER then any other silly tech trends...
There is just more wrong with AI tech then the NFT trend (wich LITTERLY stole from artists and forced copyrights on THEM!)
I don't forsee Generative AI to stay much longer then two years. The reason is unsustainability.
It takes one law... one thing... one small lil bitsy thing to make completely unsustainable...
And it's already happening with them having to pay out major publishers.. but it's gonna get even worst for them due to the EU.
YEP... Article 11 (formely known as Article 13 but became part of article 11) will put a stop to AI the moment the EU updates it to include any AI generated works.
Cause now all AI models have dump any data made from EU countries... and you might think that is not a lot.. but it would break their AI and it would costs BILLIONS to shift through data and check all of it to see "Is this from europe?" It would litterly kill the bubble and sure you can cater to American,African, Australian and Asian countries..but Australia will soon follow.
As for America and Asia? We will see how long that lasts with two continents preventing data that would cost them even more billions to pay for rights.
Remember that the EU has kill a lot of bad practices in tech companies over the last years (something they do well thank kot)
But also, the only reason you can ask any company of all data collected on you and delete it.. is article 11. So now if all EU creators go to these companies and say: "Hey.. you operate in the EU so do I, all this data you got one me.. SHOW ME.. and then DELETE IT. Or else I will have the EU sue your ass." And trust me when I say... when tech companies are losing billions in the tech... such lawsuits going to become frequent? That's just bad news...
Yes. It's unsustainable. They just need 1 court to rule they're breaking the law... just one to not go their way and boom. Gone.
@@Saliferous lol it will replace you. this is unstoppable.
I wish i was as hopeful. Let's hope making generative Ai unprofitable will at least curb the worst of it
@@unchainedmel1475 It's already unprofitable. Numbers from OpenAI came in.. they make 3 billion in possible revenue but their costs are over 5 billion.
2 billion loss per year. Millions of losses per month. also most of those billions are from investments.
If even a ruling comes in saying: "You have to credit each artists and source credited in this generated stuff." It would take even more billions to do so.
Generative AI or any AI of it's kind is a car teetering over the cliff with rich companies holding them up. So either more weight comes in and it collapses or the people holding them up go away.
@@Saliferous It also costs waaaay too much to sustain and will cause futher issues.
I truly never understood the "AI is gonna create more jobs" idea.
Give it 5 years. Any low-mid level role which doesn't require a physical human is gonna disappear when some CEOs realize that 1 high level engineer can do the work of 50 people from different departments. Other companies will follow shortly after.
Im usually not a doomer, but I think we are on the verge of something bad. The skill ceiling to get a job will get so high that most people won't be able to keep up. All this talk about reskilling to adapt to the job market is pure cope. The average 35 year old ain't becoming a Machine larning doctorate.
i mean people were thinking the same thing in 1800s and yet factories and steam engines created more jobs then it replaced from horses
@@skibidicoffee22 That's a completely different thing tho. This kind of generative AI is generalist, or almost generalist. Cars created a need for huge infrastructures (roads and highways). AI doesn't, and It impacts the vast majority of jobs and all industries, not just one.
What kind of jobs is it gonna create if it greatly outperforms most people's capabilities by itself? What would be the incentive in hiring humans if there is no strong regulation? And to do what exactly?
@@YouAreNotThatGuy4844 While I semi-agree, this take is also assuming that progress will continue at the same or increased rate, which I doubt, most likely we've reached the top of the s-curve, GPT-5 and its competitive equivalents will be the last really noticeable increase for a while IMO
Ironically AI will not create more jobs. it will dissolve them into one.
Infact a lot of CEO's see dollar bills, but in truth... that AI they want to push? it's gonna make them obsolete even faster.
Only coders and programmers are gonna be useful. Those who can read the AI's algorithims.. (and trust me Google doesn't udnerstand YT's algorithim! So that's promising)
The CEO is even more replacable then those working under him. CEO's mostly guide, direct and work on reports to make decissions on company output and policies. They litterly mostly take all the data themselves from other departments and then make decissions of them and put them out to the board... sometimes they have to socialize with other companies and possible investors.. but... most of them is:
"Put data in, give data out."
Sounds a whole lot an AI could do with a press of a button? ...Why pay millions for a CEO if an AI can do it for them? Why have all these "heads of departments" managing data if that same AI can do it? Why have middle managers? You only need someone to guide workers to do physical work and give output data the AI cannot see or record.
Trust me when I say.. AI will most likely replace a lot coorperate FAT 1st. Then the lowly worker.
CEO's , Managers , Head of departments, Administration work and the whole financial department can all be done by 1 AI.
But hey most CEO's don't think long term, only short term....
Potential solution. 1. Create an image generator that exclusively sources it's training data from Disney movies. 2. Watch Disney and Google etc Duke it out in court. They'd manage to find a way to make only corporate copyright count I'm sure, but I can dream.
Sora released a trailer a few months ago, featuring a "trailer" for some "Monsters going to Summer Camp" sorta film. In the background of one of those shots, you can literally see Mike Wazowski AND Sully. Mike is pretty f'ed up in the way AI-gen animated characters tend to be and most of Sully's body is hidden behind a snack stand but it is undeniably those two characters.
Been done, google "All Disney Princess XL LoRA Model". Up for almost a year now.
It's disgusting that no matter what evidence there is of the theft, there will never be any significant consequences because government is designed to protect the property of the rich against the poor, and the property of the regular Joe isn't important enough to defend against them... "Because the economy."
AI has a lot of great potential, but as long as capitalism and other hierarchical systems are the norm that potential will always favor continued oppression and exploitation rather than any greater good.
yeah tragedies of the masses (companies getting to own and sell something that used to be a public good) seem to be quite the feature of unchecked capitalism
Will you yourself directly benefit from Ai? Will you ever use it as a tool?
Is Ai a free tool which you use and benefit from? Do you benefit from multibillion corporations?
@@marcus.H Is this a serious response? If so, re-read what I wrote and you'll find the answer to the first question.
For the second question, "no doubt, because that's where technology is headed, and just like we no longer use wagons to move stuff since the invention of trains and cars, we will all end up using AI for various things that we don't today."
Was this a serious response, or just a knee-jerk reaction to something that triggered you? I can't see it as serious, because there was zero thought put into it.
@@samuelrosander1048 cool. So you're saying that you are going to use this technology and you, and many others, will willingly take advantage of all the effort Microsoft have put into this. Got it
Ads based Capitalist economy will be Democracy's undoing.
That's exactly what they want.
Capitalism and democracy are, and always have been, mutually exclusive.
@@uooooooooh 😭😭😭🤣🤣🤣
@@uooooooooh what are y'all talking about? if anything they're mutually inclusive. the only thing democracy has done for us is give us two rich asses to vote for, neither of which ever have anyone's best interests in mind. democracy is more useful to the ruling class than it is to anyone else.
capitalism isn't going to take away our voting system and abolishing capitalism would make the need to vote pointless.
@@uooooooooh Capitalism has been the most inclusive system that humans have ever came up with, it has brought over a billion poor people out of poverty and into the middle class and even rich..... Socialism has brought us hitler and stalin, bread lines...... Keep drinkin the woke cool aid.....
Visual artists were the canary in the coal mine; artists raised the flags back in the end of 2022 when we discovered the truth behind Midjourney and the LAION5b data set, containing our work. And then later proof that MJ fine tuned their dataset on magic the gathering artists exclusively. And so much more.
I’m glad it’s getting attention again but man it sucks it’s taking years when we could’ve had everyone working together since the start. Authors and TH-camrs alike been happy about image generators and only seem to be upset now that their own content is being stolen.
Again, glad to see more understanding, and I hope we get these things shut down.
The solution is simple: no copyright protection for anything created using AI.
In an age of subscriptions everywhere, having an "old school" one time fee is really nice to see again, even if the cost gets recovered only after 8 years and 4 months, which is a pretty hefty timespan
The other thing you gotta remember is that AI needs real work to feed off and so real human creations will always be needed. Is the issue with dog breeding in a sense, if AI uses other AI data the small issues AI makes would be repeated, if that happens enough times you get a Pug like AI that can has it's eye popping out. So if people want AI to be useful and make sense there's going to have to be a balance between generated and humanmade content.
Until, like with dog breeds, people instead choose whichever flavour of AI model has a particular type of grotesque deformity that they like best.
Some people will care more about the "health of the being" than others, and I imagine a decent number of models will exist that are designed to be more accurate than others. But I also anticipate the number of "this AI agrees with your existing worldviews" models will be considerably higher. Plenty of people, arguably the majority, don't like to be challenged. They don't really want an AI with maximum utility and benefit, they want a personal assistant that's loyal to them over all else.
I think Pandora's Box is already open on this one.
Don't worry about automation taking jobs out of the manufacturing industry, we'll still need (considerably less) people to press the buttons and fix the robots...
There are companies right now literally hiring people like copywriters and designers to create new content to train their AI on. They're training their own replacements and their actual work will never appear anywhere in the real world. Just in some large language model dataset. It's extremely dystopian when you think about it for more than a second. We should be using AI to do the boring tasks that we created for ourselves, not the creative ones that make the soul sing.
This is not true for Chess engines. AI does not always need human players to get good at something. Mind you, I define AI as anything with an artificial neural network (which is a lame definition btw)
The way it works with chess engines is that they play themselves for as long as possible, knowing the rules. There is an optimal way to play the game. Chess isn't a chaotic environment, so it was hard until it wasn't.
The issue with current generations of AI is that feeding the AI its own generation creates a negative feedback loop. Art is chaotic; there are no rules but what we make. We would critically analyse what we've made and look for better examples elsewhere, but today's AI cannot do that. But if it found a way to "self-play" like in chess, where it can find rules to optimise for on its own, rather than given to it... Then, it would improve substantially. Synthetic data is basically that, and it's a work in progress with promise.
AIs might not need human training data anymore.
@@cody4rockApples to oranges. You know the AI in the context strictly means generatives ones and you still try to twist the narrative.
writing without citing. it got Harvard Presidents fired, but it is ok for a big tech company. it facilitates other people to unknowingly reproduce portions of other people's work, but is somehow not facilitating IP theft, because it is big tech.
What do you mean "without citing"? The Pile says exactly what videos were used! It's only because of meticious citing the creators can now cry about it.
@@KAZExNOxSAGATell me you didn't go to university without explicitly telling me so.
we had no idea how "can i copy your homework?" would catch up with us
At the moment AI is looking over your shoulder and copying without asking.
I got an Economics A level using the same technique.
And the follow-on "sure just make it look different so it doesn't look like you copied it" bit is bang on
Ceo could be replaced with ai and not much would change
"We couldn't do this without stealing everyone's data" Then don't do it! Simple
@steve_jabz ok but like where did they get it from did they ask and are they benefiting in any way from the ai learning from the actual work they did
@@steve_jabz stabbing isn't murder
@@steve_jabz I multiplied every pixel in your artwork by two! and I have this cool new algorithm that makes *my* picture with just a bit of data! look how cool my art is!
@Shoshiroll That's because stabbing is a type of murder, not murder itself. This is just a weird semantics game to play.
Learning isn't a type of theft. They're not even remotely in the same realm.
@@steve_jabz The "learning" in "machine learning" is a metaphor. It's not the same as a person reading a book. Which you would know, if you read books.
I suppose the intuition pump is to ask how much an AI model would be able to do without its stolen training data, using only public domain material, and the answer is almost certainly "very little" because you'd have to train it on 100-year-old novels, news articles and Wikipedia. So it'd basically just become Wikipedia: The Audiobook.
The Pile mentioned before is completely open source.
Actually, Wikipedia is released under a ShareAlike license, so the model would also need to be cite which Wikipedia articles it's drawing from, so the fact that it's a Wikipedia summarizer would be even more painfully obvious.
Just some context on Australia's Media Bargaining Code:
Almost all of Australia's media is owned by three massive companies (Nine-Fairfax and Fox being the most recognisable) and is horrifically and provably biased about what they post. All this code does is give more power to these conglomerates, because it starves smaller outlets and independents who don't qualify for the code and therefore stifles what voices can be heard.
The act of scanning others' material to generate a summary of their work isn't confined to AI. It's what passes as journalism in a lot of instances.
I've said it before and I'll say it again, ai prompters being absolutely precious about the prompts they use to steal other people's work is incredibly funny to me.
Meh, what you say is conflated in itself and you do not understand how it all really works--
can you make 'rip-offs' of other people's IP? oh yeah, absolutely.
Can you make something so chopped up and generated from a million pieces of correlated data-points where it doesn't infringe on anyone in particular? yes, it is also true.
Base on the level of theft the people that founded the companies should be criminally liable.
Whoever coined the term "plagiarism-as-a-service" to describe AI needs more praise.
Glad people are talking about this, main gripe with generative AI is how much data is being stolen from artists and writers that worked hard for years to perfect their art. What I find particularly disgusting is people using generative AI and then claiming the work as their own, in particular this is a large issue with music generators.
Furthermore, it replaces the jobs that people actually want to do, instead of factory work for example.
*AI in it's current state is technically "machine learning" which is what generative AI uses, AI is something different, although it seems OpenAI is getting closer to a genuine AI.
"although it seems OpenAI is getting closer to a genuine AI." - no they are not. Sam Altman is a full of the brown stuff
@@mach489i getting *closer* not close, it is still seeing significant improvements with each new model.
It's not. That's marketing.
Under pre-existing precedent, “AI” art shouldn't be copyrightable anyway.
same corporations say "pirating is direct stealing"
0:38 I mean, fanfic authors have known their work was stolen for AI training since the very beginning. ChatGPT knew about things that it could _only_ know about if it had been scraping AO3 and/or Wattpad. Digital artists have also found image generators output content that's near identical to their own work, or even able to replicate their style in demand. Heck, I think it was Stable Diffusion which sometimes reproduced the Getty Images watermark on the content it generated, a clear sign of theft! I still find it surprising that people are talking about these revelations like they're at all a new surprise since we've known about this from the very start. Feels kinda disingenuous, as if it only matters now "real" creators' works have been discovered in the trove.
If I'm drawing a sun in an upper corner of my painting, it's not me repeating the pattern I've seen multiple times, it's clearly me stealing someone else's artwork
There is no quantity of money that can compensate the harm to an artist. An artist won't ever be able to compete in the market against millions and millions of Ai arts created in seconds every day. In this mode, Every human artist and human art itself won't exist anymore regardless a previous compensation!!! That is what Sam Altman, Bill Gates and these companies want. They don't care human art and human artists at all! Therefore they are destroying it...
Seems pretty pathetic if they can't compete. All you guys do is go on and on about how soulless, ugly, and un-artistic AI images are, but you can't compete? Yeesh, what does that say about the value you were bringing to the table? Isn't art about expression at it's most fundamental level anyways? AI tools don't remove your ability to express yourself artistically. You aren't entitled to making a living from artistic expression, and those who are skilled enough will still get hired regardless. Stop playing victim and realize that making a living as an artist was a blessing in the first place.
@@_B_E For real, even as a developer why would I want to spend my life making a living off of expressing myself through art, I would much rather bring REAL value to my lord and savior shareholders instead! AI art allows me to type 5 words and see an image! That's really expressive, it somehow knows exactly what's in my mind, and gets every detail right! It definitely does NOT just add in keywords like "Knight" "Grass" "sunset", my artistic vision is totally fulfilled! Its not an approximation, its art!
@@_B_E You are not that special either. you too will be replaced by AI, ...dork
@@RawrxDev You seem to be confused about what my stance is. I'm saying your personal ability to be creative isn't stopped because AI exists. You can still express yourself, and even make money with your art. Every artistic field is still able to exist despite mass production undercutting it. You just simply need to be able to create things people actually want, which is not something you're entitled to simply by creating.
People create art because they can't live otherwise, not because it holds any monetary value for others. You've just found another excuse to do nothing. Pathetic.
Here's hoping copyright laws get repealed as finally those laws designed to terrorize regular citizens, are going up against businesses of their own size.
Though more likely, you get the usual case, copyright repealed for big businesses, regular citizens terrorization remains intact.
As an artist and writer it has been demoralizing. I've stopped posting and I've pulled my galleries off of places like Twitter. Another thing to mention is the climate is also affected by AI...
did people actually want to see it in the first place? or are you just copping out your lack of skill on AI?
but with ai we can learn and reverse all those factors of climate change they say... ;D
@@SorkHanahbaw. Would have liked to see their art...
Too late. If it was on the Internet, your works are already in the AI databases. As long as it was accessible via Google, they have a copy. Considering how hard it is to get published without having a popular portfolio I am not sure if there is a point in taking it down.
@@SorkHanahb Depends how many times it was scraped for datasets, filtered, packaged and resold to other developer groups. High-quality, clean datasets are worth their digital weight in gold and anybody who is anybody is outright lying when they say "we don't keep the data."
They do, they know it, we know it, because these packages end up in the hands of these same mega-corps we are raging against right now.
Really wish I could trust Nebula, but there's just too many creators that have been silently removed from the site and then suddenly get awful quiet about why exactly it happened. Even if you benefit from Nebula's shady business practices, they're still shady business practices, and I can't support them.
I like the IDEA of directly financially supporting the creators and outlets that I regularly use and, sometimes, depend on. Its a nice idea. Its not realistic in a cost of living crisis where I have to view any non-free media as a luxury good.
*EDIT: I no longer agree with this comment, and it's only still here because with this edit, the comment still says something which the absence of a comment wouldn't.*
As someone in the field, I normally take huge issue with summaries of how LLMs work in videos like these since they tend to be hugely reductive and outright wrong. Not this one, though! Great job - you certainly did your due diligence.
I disagree. Repeated uses of terms like "regurgitate" implied to me a lack of understanding, or conversely a refusal to understand how LLMs and other forms of AI function, and misrepresents it, using words deliberately chosen to be derogatory, and to incite further mistrust and misunderstanding.
How is the scarecrow game these days?
@@Kaotiqua that's what current "AI" is doing though, they're statistical completion algorithms that just respond with whatever is mathematically most likely to follow your prompts.
@@Kaotiqua That's very fair - but I saw it as 'writing ideas learned from the training data in a novel way'. I'm probably just too used to the portrayal being even less sound. Might delete this comment.
It's very ironic how much emphasis we on the political left place on listening to experts on things like vaccines, but when it comes to talking about AI it's an outright refusal to even try to understand. AI can easily be shown to have a negative impact _without_ misrepresenting it. Let's all have some intellectual honesty and listen to experts.
@@swedneck Yes - that doesn't mean it regurgitates things it's already seen.
Also, it's really dumb to think that there's political solutions to these problems. Ted laid it out decades ago: technology itself is in the driver seat. Humans and human societies are unable to cope with the very technologies they produce. Effectively, we're being farmed by our own tech. I guess on some level it's interesting to document these issues as they arrive, but make no mistake: they won't be dealt with.
I think an angle for me that really sticks out is the perspective of having grown up with the transition from Dos, Windows 95 into 11.
When I was a kid, I had dialup and AOL/MSN messenger which was not moderated and extremely dangerous . . . but the world wide web was actually alive and thriving. Before the era of social media aka Myspace, access to high quality and unfiltered information was extremely easy. I remember doing research as a kid reading and watching videos on the cusp of the modern internet. I also almost got human trafficked, raped, and possibly other things over the years because my parents were kind of dumb to let me do everything without monitoring.
The kids and young adults these days do NOT have the critical thinking skills or experience to understand what they are being force fed by the modern consolidated internet. These kids didn't grow up with genuine TROLLS. Dont Feed the Trolls. Don't Believe Everything You See. Don't give out your personal information. Don't send random strangers pictures of yourself or your family. The list could be much longer but for brevity that kind of summarizes it.
The real problem with AI and the modern internet is that kids haven't seen the evolution of the internet. They can't even sift out most of the AI slop from the real content. This brings me to thinking of 1984 and other similar dystopian fiction titles over the last 5 decades which warns us of how the world is viewed. The information people see might be outright false, but because theres no way for them to see the queues or red flags. The modern internet is carefully engineered by psychology to such a degree even TH-cam Kids is dangerous because I've seen child labor, blatant toy advertising, and AI slop flood the app over the last 5 years.
AI is killing society, because you will NOT be able to trust anything. In a lot of ways, I think certain aspects of society will take a luddic approach nearly banning the use of technology to safeguard its legitimacy. Legacy programs and systems will be supported ad-nauseum because otherwise the risks might cause collapse. Even 8 years ago, I had professors who would only accept HAND WRITTEN documents because of the common use of cheating services.
The very real danger is that the newer generations do not have the skills to avoid being subjugated and controlled by the Mass Media Mega Feudal Lords "im not being hyperbolic, this is a post Citizens United Economy." Why is it corporations are SPRINTING as if the actual "doomsday" as Sam Altman put it is approaching us because of AI? They will seize control of everything and as with Uber's hidden algorithms committing actual labor rights crimes against us all while upending the taxi industry did for transportation or AirBNB causing investment capital to ruin the housing market across the world "while just being hospitality, which is a heavily regulated industry." I have no hope for the future generations to avoid being abused and enslaved.
If you and many other people can get their stories up on a blog or something that’s got enough traffic it’s worth having personal recounts on there not buried in you tube comments.
Here's the Catch 22. To have a popular platform that's owned by others, you have to accept an agreement that essentially allows the platform providers to do whatever they want with the stuff you produce on their platform. Then your work ends up being used by them. Yet without such a popular platform, you can't get your work out there in the first place.
If you make your goal accumulating money for doing what you love, not only are you taking specifically from those who would take your place if you failed, but also encouraging those around you to organize to fit the structure that money has given to the shape of every industry.
Industry of course being A word we use when we talk about profiting not off of providing for others goals and passions, but rather others whos goal is to do so somewhere down the chain.
When did we stop calling this grifting?
Great vid but I was rly distracted by the fact you look like a young version of the old guy from UP
You can probably remember the first time you encountered neural network (or AI as it is more popular today). It was the first time when your translator of choice made sense while translating a sentence. It is the same principle, thousands of examples analysed, and it can tell you what is the probable meaning of word and also what is the archaic use. But remember, today generative AI can only tell you what's probable as well and thus it will be mediocre forever. Its not a revolutionary tool, it will generate only the most mediocre result, because it is its purpose.
Im not sure whether it is immoral to train AI models on the work of others without consent. Is it immoral if I listen to Pink Floyd and then write a song inspired by their music? How is this different from chatgpt cobbling together a thousand books into an article? Just how much originality should this article have to not be considered theft?
Im genuinely asking from a philosophical point of view
Humans =/= Machines
@@AlexW1495 What separates us?
These early models can be guilty of basically memorizing or copy pasting, but future larger models won’t. So we have to deal with this sometime
An AI cannot listen to a song and resonate with its themes and instrumentation. An AI cannot read a book and love its characters, its story. An AI cannot see a piece of art and use it to process its own feelings and life experiences. An AI cannot read an article and be inspired to delve deeper into the topic, to write its own piece. AI is all about information, data, replicating that data for profit.
Its not immoral because its derivative, its immoral because its motivated by profit with the intention to replace the very people it derives from. It is immoral because the existence of a tool that can mimick an artist/artsits exactly puts those artists out of work, because companies will always prefer to invest in a machine than in workers. An artist being inspired by others does not. Replacing individual creatives with a tool driven by profit motivated corporations is not even idiocy, its calculated and dangerous.
I say this as an artist, writer, and musician myself. If someone sees my work, resonates with it, and feels inspired by what I do, taking some elements they learned from my work into theirs and developing it further in their own style, that's great! But if someone saw my work, invented a clever little machine that could replicate it to a tee, then started creating and selling that derivative work at a rate that I, as a human being, cannot possibly compete with, that would SUCK. That's it, really. 'Is it immoral if I listen to Pink Floyd and then write a song inspired by their music' - key word here, inspired. AI cannot be inspired. It can analyse data and regurgitate it, but it cannot be inspired.
AI does have incredible potential when it comes to data compilation and organisation. I agree that having to search through loads of articles and books to find something relevant to the topic you want to write about is tedious. Having a tool that could compile relevant sources would be great! The problems begin when that tool stops directing people towards the work of other people, and starts competing with it.
I also just don't think getting bogged down with these technicalities is a good idea. If it isn't immoral, if it isn't plagiarism, it is, at the very least, SHIT. Art and writing nobody could be bothered to make is shit.
@@portablegoose So your problem is with capitalism, not the technology. Advancements in technology put people out of work all the time, but because the system values people only for their work, that extra productivity isn't used to lessen the amount of work done, but increase demands on workers.
One of the best pieces I've seen about AI's impacts on society, from access of "real" information and art, to the resulting further concentration of funds/resources in the hands of the wealthly few.
One thing that's missing from the conversation is how teachers are being pushed (hard) to use these "tools," time-saving, STEM, "21st century...," aka must do/use if you want to keep your job.... As an educator who enjoys sharing new information, which I've checked out for legitimacy and accuracy, in a personable, relatable way, I find the push for this as a "time saver" dismissing the "knowledgeable one" part of being a teacher. It tells me that those who promote this think that my hard-earned knowledge and expertise can be easily replaced by letting AI write my lessons---then I'll have more time to deal with the behavior issues of a non-ergonomic classroom full of 30-40 students with a wide variety of significant challenges, including ADHD, ASD, poverty, abuse, depression/anxiety, and general growing pains/maturity issues, ....salting with sarcasm intended.
Or, they could also being saying, "education" can be done better through our "technological wonders" and using our corporate partners' excusive publications and tools, which ensure that we get all the subsidies and grant $$$ and that the budding worker bees gain just the attitudes and skills we need.
The fact that these tech organizations have been freely using and plundering our human output on the internet for at least a decade is something we should all be angered about.
Note: their incredible blundering overreach may actually result in "old tech" surging in popularity, land lines, un-connected computers, not-smart TVs, VW Bugs.....🤔
As Yanis Varoufakis points out in 'Technofeudalism', profit is not always the most effective way of increasing big tech share values. AI harvesting of public content without creators' consent is an inevitable extension of what he calls 'cloud serfdom', one that reaches out beyond the enclosures created by Amazon, Tesla etc. Why draw the line at amassing information capital from your platform's active (and willing) users when you can scrape the entire web... the 'smarter' the AI the higher the share value.
I'm a lawyer, and I've been thinking about this topic for a while. But something is missing in my chain of thought to get to the same conclusion as the majority of people, a little piece that, without it, I can't say with certain that this is undoubtedly wrong.
I understand people getting mad with their work being used to train AI without their permission. But legally and morally (in a more broad sense), that's another thing.
because the use is too indirect, the text used isn't in the algorithm of the AI, just the link between words in a way it's impossible to track any particular work backward, and the AI can't be used (directly) as substitutive to any original work.
I'm not saying that this isn't wrong, I'm saying that I still don't have a conclusion. there are even more pieces missing to say that this kind of thing is undoubtedly ok.
I think that if these Ais couldn't be used for profit, I would be totally ok with it, morally and legally.
I think a lot of creators main argument about using their work without permission is these massive companies are making billions off their backs with no compensation. Even if it isn't directly being reproduced, just in the abstract. It's kind of ironic given many of these same companies will go after people for downloading a single song or having it play in the background of a video like the famous case of the baby dancing to Prince.
Maybe you could provide me with some insight here: isn't this like, instead of stealing a car you just steal a tail pipe here and a headlight here until you've stolen a whole car slowly? Isn't that just as bad?
To me, as a developer and artist, a lot of this rests on the _where_ and _how_ they are getting their training data. These generated works do not get anywhere without filtered and clean datasets to work off of. If they were to set up a contract with an artist to create a portion of the dataset that would be fine in my book as fair compensation is awarded for the effort. But a lot of these datasets are created by using work taken without the user's permission or compensation. What's worse is that good training data is shared and resold to other groups for profit and the packages are only larger.
A lot of it comes down to consent unfortunately, and a violation of artists rights to work is seemingly the easiest to trample upon. See Abdin v. CBS (2020) Docket No. 19-3160-cv
@@bujusticIt’s not really like that, though. The program isn’t stealing bits of the car, it’s analyzing how those bits fit together and recognizing patterns. Show it 30 cars, and it will notice that, in all 30 examples, the tail pipe was in the back, and the headlights were in the front. Therefore, if you prompt it to build you a car, it will arrange the parts such that it follows the most likely patterns it found among the 30 cars it trained on. Nothing about those 30 cars is directly reproduced, except by chance.
@@TheCheeseman1983 I know how the tech works I'm asking from a legal perspective
Glad you made a video on this. It's a travesty these companies are getting away with such blatant theft.
What's the difference from this and a human referencing something? Is it just scope? I don't get it.
@@uoooooooohbecause normally humans would state who they've referenced, the ai takes thousands of creatives' work and merges into its own 'work', effectively stealing the time and effort the people did beforehand (as neither the user of the ai nor ai will credit the original creators)
(hope that makes sense!! also tom explains it here at 18:39, i think)
@@twlxyl That's how humans work, though. You're influenced by every single piece of media you've ever interacted with. AI can just do it at a larger scale.
@@uooooooooh Simple, AI isn't human, it does not have the same rights. Your entire premise is flawed. You're comparing the AI system to a human, when you should be comparing it to an object of human creation itself. Those are subject to copyright, and so should AI be.
@@ttt5205Does a hammer have rights? That's nonsensical.
Copyright is also nonsense in the first place. How can you own an idea? Anything digital is just a number, how can you own a number?
Using the works of others without adequately crediting them has been at the Internet's core. Most TH-cam creators never give any credit or cite any sources. You do not have a proper list of your sources, but at least you mention them in your videos. It is something, but I find it ironic.
So, if any free independent media is to be killed by stealing content, it should have been. Or perhaps it already has. But AI wouldn't be the only and main culprit; the widespread plagiarism and thieving of content is such a day-to-day practice that almost everyone is blinded to it. And it will have achieved this without the aid of AI. It is people, I tell you. People!
Thank you As if the creators share the youtube ad revenue with the sources they use to create their videos...
I think one of the most interesting questions in the AI space is how well they've been able to say, we aren't really making progress, and yet have people still very exciting (depending on the circles you are in admittedly) about future progress coming. In that, Chat-GPT for example, has released 5(? it's at least 5, maybe I'm missing 1 or 2) models now, all of which are just minor improvements over GPT4. Anthropic have done the same sitting at 3, and Google as well with Gemini. Which very much feels like it's only purpose is to generate hype around what's "just around the corner", but we aren't reaching that corner. Now don't get me know, I'm not saying we won't reach the corner (we might not though, it is a possibility), but I don't think anyone really knows when that's coming. Things like 4o and the "advancement" there for example, is just based on research from years ago at this point.
when you start obviously conflating youtube videos, with video titles, you lose all credibility. It's the sort of absurd misrepresentation that I permanently backlist channels for.
I find it baffling that these AI companies just thought stealing so kuch content was just fine.
How is it stealing when IT IS FREELY GIVEN AWAY!?
@@EmperorZelos it is not freely given away, is copyright law really a novel concept for you?
they didn't thought that it was just fine, if they had they wouldn't have tried to hide the fact that they did it.
@@EmperorZelos Yeah! Did you know people just leave their cars in big lots? Its ripe for the taking!
@@Henrik_Holst You act like copyright law is extremely simple. There are still many open lawsuits that try to decide whether this is copyright infringement or not. This shit is complicated. To be clear: I don't want to argue against your position. I just want you to know that the issue is less clear than you seem to think.
"Fair Use", as AI companies are now learning, is, and has always been, a legal 'defense'. What has been happening is most certainly infringement, and since these companies are now competing very directly with a lot of the people that have been victims of copyright infringement, it will be interesting to see whether the fourth consideration of the "Fair Use" doctrine is simply erased from the books entirely or not.
Plenty of AI bros in the comment that conflate human learning with the calibration of AI models with stolen data. The AI has no intent or motive, and even if it had one they'd be the slave of the AI companies.
And let say the AI model was intelligent, what are the ethic of letting something being forced fed stolen data for the profit of corporations?
But the reality is that these systems only use probability to generate words and sentences, and images, and the probability weights are calibrated by stealing data.
There's no spin that make AI companies ethical.
Yeah, ethical companies are impossible in capitalism. There's nothing unethical about the tech itself.
He poses this as a free speech issue and is simping for the government limiting speech ( enforcing copyright).
@@uooooooooh stalin was ethical and efficient
@@tuckerbugeater "There is no such thing as ethical consumption" and "Unethical acts can happen in alternative power structures as they are all human and therefore flawed" are two phrases that can coexist.
Stalin also counts for godwins law, make better arguments.
I am not an AI bro but I also dislike the arrogance where people act like they know for sure that the brain is so different from deep learning. Who are you to make that assertion that a human can make something original, but an AI model can only "regurgitate". Are you a neuroscientist? As far as I know it is extremely unclear how the brain processes information. Without knowing any better, we can not tell if the brain works fundamentally different than a machine learning model. Whether the AI has intent or motive is an extremely complex philosophical debate, but you present it as if it were fact. You can hate AI all you want, there is enough reason to do that. But please do not just dismiss very interesting avenues of discussion about what constitutes art, intent, originality, etc.
If anyone here was interested in this topic further I'd recommend the article 'Mean Images' by Hito Steyerl
Opensource AI
Please talk about it. Good or bad, but its a MAJOR factor in these discussions that gets washed over.
But does not human intelligence use the same learning methods? Then that "HI" produces an amalgam of what they've seen on TH-cam or Google News as monetized TH-cam video essays.
If something has been published publicly, then it is intentionally available, NOT for out-and-out plagiarism, but it is accessible for assimilation into a larger knowledge base which can only be appreciated by other intelligence actuators.
Talk to an AI, there's no intelligence in them, just a more sophisticated search engine that can lie because instead of an actual database it uses probability to generate the next word. No intent, no motive, there's no reason to leave data to be scrapped and regurgitated for the profit of AI companies.
I have been trying to work this out too. My knee-jerk reaction was to think what makes 'creatives' so special that the automation of their work is different from the weavers facing the jacquard loom? Understandably, when it's your job under threat you are going to be totally against it, but jobs have been steadily automated away for a long time now.
There is a difference in AI and HI learning, that is who profits. When we learn from school or books or youtube videos, the creator is being paid something. Sometimes we forget these things cost because we don't pay directly but through taxes or adverts. This is not the case with AI sucking up everything ever written so the companies can make a profit (eventually) without paying all those making the content that drives the AI, for which they would normally be paid.
Another difference is that the jacquard loom didn't take knowledge/imagination/creativity as a raw ingredient. These AIs need to be fed with new information or they will soon be obsolete.
When individuals learn, we collectively benefit. When AI learns, do we benefit? It feels like we do but I'm not so sure. Either way, all our jobs are being automated away and we need to think about what this means. In the past and now the response is 'there will be better jobs', but where does this lead and can it last?
@@SpaceMonkeyTCT You make three points that I want to address:
- 1. Does a site make money from an AI data scoop? Acknowledging that I make an assumption, I assume that it is the AI creators who arrange access to the site, engaging normal earning protocols. I'm not sure how an "independent" AI would bypass those criteria, not without actual fraud.
- 2. Who benefits from AI's knowledge? The same as benefit from HI knowledge and in much the same way. Some AI knowledge is shared freely, some is sold.
- 3. Are humans competing with AI? Probably, at more superficial levels. I imagine AI could write any number of money-making superhero movies. But _Lincoln,_ or _Lawrence of Arabia,_ or _Twelve Angry Men,_ or _Arrival,_ not so much. I think the competition would be the difference between a home-cooked meal and a microwave burrito.
I may be wrong, but my understanding of AI is that it does not need a constant inflow of new information. ChatGPT has operated with cutoff dates. The value of a large language model is the amalgamation and synthesis of large bodies of knowledge to the benefit of its users, not in cherry-picking data.
As to my own involvement, I have written works of fiction which appear on a story site. I receive no payment; I write for my own pleasure. I would be honored if some of my expressed concepts were spread. I would be pissed if my works were plagiarized and I would likely take legal action.
It’s like how react streamers steal content, but now it’s big companies with AI.
React streamers actually contain and serve as a replacement to the original works. AI does not.
@@_B_EThen why does Greg Rutkowski, Thomas Kinkade, Sarah Andersen, and Pixar found their name commonly used in Midjourney's input prompt to generate specific styles?
Back then people link/cite the source of their image, or people have ideas on where to look for dimilar images; nowadays, only good people does that.
Honestly. This all just reminds me of record companies reaction to Napster. I get TH-camrs aren’t quite the same group of insanely powerful people. But, LLMs are just another new thing we’re going to have get used to.
I’m sure this is an unpopular opinion, but I think it has to be said: training generative AI models on copyrighted material is *not* copyright infringement.
Yes, these systems “consume” lots of copyrighted work. That does not require permissions or licenses though. The *republishing* of copyrighted material does.
Running these models (i.e. inference) to generate output could potentially be considered copyright infringement. That said, there are currently no court decisions of any court in the world on if they are or not or under which specific conditions and circumstances they are.
Clearly, if they regurgitate original work verbatim in significant parts, then that probably is copyright infringement.
Fair use, citation and transformative work carve outs probably do also apply to large language models though.
The way that these models train on original work and generate outputs is not that unlike how humans learn from original sources and can then generate entirely novel outputs inspired and influenced by the original works they once consumed.
It’s a somewhat tricky situation quite frankly.
Lastly, I find it noteworthy that while I agree that AI systems have the potential to tear big holes into the financial feasibility of journalism, I tend to believe that they will not stop at journalism. The reality is, that with the rapid advance of these systems we are likely going to face systems that can do most human cognitive work for significantly less cost than an equivalent human cognitive laborer. Strapped onto a robot body cognitive work isn’t even the limit.
I expect that in just a few years, we will start seeing significant economic pressure start to develop that will cause havoc on labor markets. Journalists won’t be the first or last to struggle to get payed a living wage. We’re in it for a wild ride.
tell me what the difference is between a generative AI being influenced by a particular piece of art and a person being inspired by the same piece of art in terms of the legitimacy of their output. take all the time you need.
if an AI is shown existing art based on which it creates new art, that’s bad, but when a human does literally the exact same thing, that’s okay? why exactly? how is it theft when an AI is influenced by existing art, and how is it not theft when a person does the same?
where do we draw the line? if a true AGI did the same thing, would it be considered theft?
Praise AI all you want, your ai generated pfp still looks like a piece of turd.
I can't imagine what it must feel like to know your videos were stolen to feed the models. I only know how I feel by being almost certain that something of the first drafts of my fiction writing that I've shared have been scrapped without my permission. Which is to say...not fucking great. With how much I don't trust the people who in charge of the place I posted too...I'm not exactly willing to share more than I have.
Always fun seeing all the AI weirdos and tech bros coming out of the woodwork. It clearly means that you have done a great job with the video!
Or it means he did a bad job because he is definitionally wrong.
@@EmperorZelos Let me guess - you are in both of those camps. Definitionally.
honestly im getting tired of one person claiming AI is going to save us all, while someone else proclaiming its shallow stealing... just give it a decade or two, and no one is an AI weirdo anymore, because everyone will be using it at some level.
Yeah it's going to be the norm in 10 years. Corpos and idiots are using it right now for theft and pretending they can create and whatever but one day it's just going to be something everyone uses
let’s clarify how LLMs like ChatGPT work. These models don’t "steal" content in the way you imply. They are trained on vast amounts of data to learn linguistic patterns-grammar, structure, and associations-not to memorize or replicate specific works. LLMs generate text by predicting the next word in a sequence based on learned patterns, not by copying and pasting from their training data. Comparing this process to theft misrepresents what’s actually happening. It’s like saying a person who reads thousands of books and later writes an article is plagiarizing because they learned from those texts. The AI is essentially doing the same thing humans do when they read and synthesize information.
When it comes to the issue of copyright, there’s a deeper problem. Copyright, as it currently exists, is a system designed more for protecting corporate interests than fostering creativity or innovation. It often creates artificial barriers to knowledge, restricting the flow of information in ways that harm both creators and society at large. Originally intended to incentivize creativity by giving authors temporary control over their works, copyright has evolved into a tool that locks up knowledge and ideas for far too long, preventing them from being built upon and shared. This slows down innovation and limits what others can create, even in collaborative or educational settings.
Moreover, it’s important to note that humans have always built on the works of others. Whether it’s in literature, art, or science, the most groundbreaking ideas often come from standing on the shoulders of giants. LLMs, in this sense, are simply a tool that enables this process at scale. AI can take vast amounts of information and help generate new, unique outputs-just as people do when they learn from a wide range of sources. To deny AI the ability to "learn" from publicly available information would be to deny a fundamental principle of human creativity itself.
Ultimately, the concern over AI models "stealing" work is rooted in a misunderstanding of how these technologies function. AI doesn’t destroy creativity-it amplifies it by providing new tools and opportunities for collaboration and innovation. Rather than clinging to restrictive intellectual property laws , we should focus on how to adapt and encourage a more open, collaborative future where ideas and creativity can flourish.
Good write up. It's shame these people aren't really interested in learning and just want a new trendy thing to hate on.
Great marketing. I'm not buying though.
@@beth1979 Denial is a river in Egypt
4:40 How did it 'steal' your work though?
There's something to be said about profiting off of a copy of something to begin with.
I think a song, speech, or acting should be paid for when done in person. When filmed, printed or recorded, it is no longer "work" it is just a copy, and no one should have to pay for it.
Copyrighting and reselling copies of anything is the biggest scam of our modern world.
Especially when royalties are still being paid for things 20+ years later and the original creator isn't even alive to profit off of it, because of shady industry licensing groups or companies like Disney using legally-questionable methods to hold copyright in-perpetuity. Or when the company doesn't even make the product / copies "available for sale" and is just sitting on the copyright so nobody can use it.
Brad, I don't think you are going to find any creators who agree with you, for many good reasons. There certainly are exploitative examples, where patents or copyrights are taken from under creators or used to squeeze consumers. But in general, copyright is an incredibly necessary protection. Without it, no creator would be able to make a living. Gillian Welch wrote a great song about this, it's called 'everything is free now' .
Funny how when it comes to creators, they enforce with malice copyright law, but when It comes to AI the law Is very slow moving
It's almost like the problem is the abuse of the regulatory laws and not a new technology, right?
Why dont you list all the articles and videos you used to make this in your description?
I thought you cared about credit?
When everything is worth money, then money becomes worthless, or where money is involved, art integrity, ownership, no longer matter. We are quickly heading for that world where it doesn't matter how much of your fiat money you spend, all you get is garbage, because the talented, creative people have been replaced by the greedy....
That needs a very strong (and extremely poor) assumption: talented and creative people are not greedy. That is obviously not true. The only greedy people that are successful are the smart and creative ones. Like the folks over at big tech. So, money will not lose value. In fact there will be significant deflation. The "really" creative and talented people will use AI tools to make better stuff. So, more poor people will get access to good things.
For example, chat GPT allows for free access to best in class tutor for the poor children - of course they need internet - but the greedy smart people have made sure internet is cheap and wide spread. At least that's the case in my country.
Elon musk is smart and/or creative?
@@cookies23z He's definitely smarter than most other people in my life. He managed to align smart scientists and engineers to make rockets land vertically, get space internet working, build a proper recharging grid. I don't know anyone in my circle that can replicate that level of management.
@@samarths so he just knew some people got them together and gave them money to do things. Got it. Such skills, wow
@@samarths you're confusing smarts with luck, and HE didn't do all that stuff, he paid people to do it. and gee where is the proper recharging grid? the self driving car? all the satellites that he as going to send to orbit for his space internet? he's just a capitolist, doing exactly what they do to get more investors, call your company some grand name of past glory, make a bunch of bold promises, fool people into investing, then try to deliver on those promises, normal silicon valley poo, how's that tesla stock doing?
This makes me think that the future of all art will be either just robots, or we will have more private performances personalized to the audience. You could hire an acting troupe to take a play they know and tweak it to make it personal to you or your group. No recording allowed.
The future of art will be interesting to see, especially when it comes to digital art. My hunch is that it'll move differently from things like bitcoins and NFT where those place value on "Proof of Stake" while Art moves toward "Proof of Work"
Unfortunately this video does not seem as well researched as your other ones, as it is unfairly one sided. There are many valid counterarguments that haven't been brought up, such as say that training AI on copyrighted data has been illegal all along, the result of that would be that only handful of tech-giants would already have access to such incredibly immense amount of data or would be able to buy the rights to the data, effectively removing from the scene the competition and innovation that hundreds if not thousands of small AI startups are bringing to the table. This would severely stifle inovativan and competition, concentrating all the AI power in the hands of the few, which is obviously a terrible outcome for everyone.
We need to stop and think for the moment and understand the principles and ideas behind copyright. It's essentially artificially imposed scarcity the purpose of which is to protect someone from copying someone's work and selling it as their own, effectively stealing the revenue of each such sale from the original author. AI training on copyrighted data is simply not theft in the same sense of the word, in which selling someone else's artwork as your own is, and we need to stop using the word as if it is, since it's extremely misleading to the point of becoming a politically charged piece that is essentially anti AI propaganda rather than content that is as unbiased as possible, that is documentary in nature and meant to educate.
I'll also quote another comment because I couldn't have said it better: "Repeated uses of terms like "regurgitate" implied to me a lack of understanding, or conversely a refusal to understand how LLMs and other forms of AI function, and misrepresents it, using words deliberately chosen to be derogatory, and to incite further mistrust and misunderstanding".
Just look at that shameless THEFT of that insightful comment, I should be sentenced as a THIEF and put in jail :)
19:30 - Isn't that how everyone writes? We all use, whether knowingly or not, idioms or even whole sentences that we have heard somewhere. Everything is based on something else. Every story contains “tropes” that have existed somewhere before. Even many newspaper reports copy their content one-to-one.
no. it isn't. "AI" can only spit back out what it has had inserted into its dataset. humans CAN create new things wholecloth, "everything is based on something else" is such a meaningless cliché that ignores the obscene scale of the total dearth in creation at hand-and the fact that these models will occasionally just spit out complete plagiarism.
@@MelMelodyWernerDo you have an example of a human creating something totally original?
@@MelMelodyWerner Why do you write AI under quotation marks?
And AI can also create things that have never existed before. You should inform yourself a little before you flaunt your ignorance.
@@MelMelodyWerner You just made a contradiction. You said that AI can only spit out what was inserted into its database, yet you state that AI will occasionally spit out complete plagiarism. AI does create unique generations based on statistical models. Humans are also capable of spitting out complete plagiarism.
Nothing has been stolen, stealing implies that it was private and somehow protected, but according to what's stated in the video all the data they used were 100% public information, if that is stealing all of us are stealing right night as we are watching the video. One have to be careful of the language they use
Did you know, downloading copyrighted content with the intent to use it in a way the copyright owner doesn't consent to is called piracy?
i lived in florida for 30 years, i already understand the golf cart thing, lol
5:13 Since TH-cam is a Google property, and the videos are not paywalled, I'm sure creators do not have rights against web scraping by FAANGs.
Watching or transforming a video does not constitute copyright infringement. Only copying a video verbatim would infringe copyright. As AI-generated outputs are transformations of existing content, they do not typically infringe copyright.
If copyright was still some 14 years as it was originally (not that 70 years after death current nonsense), there would be no problem with sufficient public domain training data, so "everyone" could train good AI, not just selected few with enough lawyers.
Another thing is that this is exactly how humans create. We see, remember (even copyrighted material) and remix. Same as AI.
Couldn't agree more. We can thank Disney for that nonsense (at least the more recent extensions).
If gen AI creates in exactly the same way we do, then the problem of it needing an ever increasing amount of data is solved. It can be trained on a very limited dataset and simply create the novel information for itself and keep learning from it, like we can do. Unless-- that's not really how it works and gen AI is in fact mostly derivative, where this fact is obscured by its ability to take many small bits of information from an immense amount of data and combine them through not much transformative processing, and where any trace to the original works is erased. That would be bad news, right? Copyright law doesn't take kindly to derivative creations, no matter how much data they're based on.
@@felixmoore6781 copyright law only takes kindly to giant monopolies with teams of lawyers...
@@felixmoore6781 Of course computers are inherrently deterministic, the only reason an AI prompt gives different results each time is because it generates by starting with pseudorandom noise. If you modified it to use a single noise pattern, persumably a given prompt would always result in an identical result. Because computers are inherently capable of exact replication of complex data in the way humans are not. The best art forger in the world could not perfectly recreate the Mona Lisa. The simplest computer program could. Humans need to get better to copy more exactly, AI need to get better to copy less exactly. Humans also learn as a way of operating in our physical world, and most of our information is not of human-generated data. AI learn exclusively to be able to replicate human works, and are trained exclusively on human-generated data.
People acting like AI's just 'learn like humans' ignores the vast differences between a conscious evolved lifeform and software, and between the reasons behind human learning and AI learning.
@@jazzpear8877 Yeah, unfortunately.
I genuinely do not understand why people are upset about AI companies using copyrighted material to train their models on - certainly in the case of content that is free and publicly available online. Nobody has a problem with us, as humans, being able to read such content and, then, learn from it or be inspired by it without having to pay anything to the authors or request their permission first. Why should AI be any different? Writers look at others' writing to learn, artists study other artworks, and songwriters listen to others' music for inspiration, all completely for free. If we want AI to be even remotely useful, surely it has to have access to the same information we do. Inevitably, this may lead to their work in some ways resembling something they've been inspired by. Arguably, even, in the case of AI, it will "use" each piece of work it has been trained on less, simply because it has seen more of them than a person ever could.
I am, of course, not talking about plagiarism or situations where an AI spits out chunk of copyrighted material verbatim. If you explicitly ask the AI to do that, it probably will. But just as with any other tool, the end user holds the ultimate responsibility for the content they produce. We don't ban Google because it allows people to commit copyright infringement, so how is AI any different? Also, forseeing the argument that AI companies profit from their usage of others' content, so does Google - it would be useless if not for it indexing practically everything on the internet.
I cannot comment on the current legalities of this, but, from a moral point of view, I do not see any problems here.
If having knowledge from reading text (books. Shakespeare, plato) at some point every person will have done so. The only problem is a.i regurgitates
AI does not technically regurgitate. Each generation is quite unique, or as unique as it can get given the material. For example, you could ask an AI to describe cheese using one word; it will use the same word many times, but so will humans. And if you ask an AI to generate a unique image, each pixel will be generated uniquely in a unique place. What exactly do you mean by regurgitate than?
It seems like the creator of this video, and most of the people in the comments have a fundamental misunderstanding of how AI works. Neural nets are in essence just lots and lots of equations that try to predict output from a specific input. What is fed into the AI during training is not stored in any reversible way, and it's really misguided to say that training a model on some data equals theft of that data, or that the specific training data should be cited (which just isn't really possible). In a way the learning is quite alike that of humans. If I draw a picture in my own style, should I cite all artists who have once inspired me?