The poisoning reminds me of the AdNauseam extension: not only does it block ads, it also clicks them so the advertiser has to pay, with the added bonus of ruining your advertising profile.
@@AvitheTiger Yeah, AdNauseam is pretty cool (advertisers have lost $120 because of me so far haha). But you have to install it manually, since the extension was removed from the Chrome store. Give it a try, it's pretty fun.
@@ShivaTD420 Nah man you got it twisted, human art has real quality. AI spew is garbage and we don't want it. Your sparkly eyed waifu that looks like a carbon copy of the other generated picture right next to it will ultimately be forgotten but human art with its quirks will be remembered. Cope, seethe, mald
@@sfkdsxzjkcfjldskaf99sddf809sdf funny how you're the one saying "cope, seethe, mald" while you literally cope, seethe and mald. the ai waifus are clearly good enough for most people, the human touch in art was only worth so much.
I feel like treating Nightshade as an illegal piece of malware is like saying home security cameras should be illegal because you're taking away the house robbers' main source of income.
I understand not wanting your art being used, but ruining a model with only 50 Nightshaded images is insane, especially when you can't even tell which image is causing it. That shouldn't be the path to take; it would waste countless hours for people just trying to make small AI projects like image recognition. Going on the offensive will probably also push AI models to eventually become better and less affected by it anyway lol
Corrupting progress is wrong because AI is just a tool; only who is using it, and how, makes it good or bad. I know you're mad like me at people who sell AI-generated work and ruin the market for digital artists like me, but the best move for an artist is to become more effective: try to learn to use AI too. Every artist in this world knows there isn't enough time to create all the projects and art pieces you want in your life with only your own hands, and AI is a great tool giving us the possibility to create much more in our lives. Only people are immoral, not AI.
@@rarehyperion wasting countless hours? what about the hours "wasted" on making art just for it to be ripped off, used for training and then generated?
@@jykox I think these Glaze and Nightshade filters are targeted at companies that take art without permission or compensation for generation, not at AI in general.
Everyone parroting "they'll just use AI to work around Nightshade" is missing the point. The point of glazing images is to make them JUST annoying enough that data scrapers don't bother to circumvent the cloaking. It's the same logic as getting a big, scary padlock for your property: it's not supposed to stop the expert thief who is after you specifically, it's just supposed to keep out your common, everyday thief. Glazing your images is likely to stop data scrapers from going after you because circumventing a glaze takes more time and effort than just finding an unglazed image elsewhere, much like how most thieves see an industrial padlock and just leave to look for unsecured goods somewhere else instead.
The problem is that the people making AI aren't everyday people; they should have some expertise here. If everyday people can watch one video and come up with a workaround that works reasonably well, the people making the AI certainly could.
They can't use any workaround on Nightshade or Glaze other than removing affected images from their datasets when detected; the original data in the image is irreversibly damaged by Glaze/Nightshade.
Isn't the big problem that while a master thief can steal for, idk, 60 years of heists, an AI can be trained and preserved to have ALL the lockpicking strats for the next however-long-the-internet's-alive? I'm more worried about permanent art-pirating progress that can be copy-pasted into any tech...
I train AI, and Glaze has never stopped me. Nightshade won't either. It's a good idea made by smart people, but as a retired engineer I can tell you: good ideas made by smart people are always foiled by common-sense attacks.
My biggest issue with neural networks is that they should be opt IN, not opt OUT. If they want to include an artist's works, then they should contact the artist and get their permission, not just use the works until the artist finds out and asks them to remove it.
@@kolliwanne964 Asking for consent is not viable? I mean, only if you're straight up a shit person lol. What do you consider the alternative "viable" for? If "ease/quality of use" overrides consent from those who produce the art, then you're honestly lost as a person. Because that's the thing: the only way you can see a "viable" option for AI image generation is through stolen art, because AI image generation exists not through need or want by those who make it. If the only "viable" option is through ignoring consent, it's best there are no options at all.
@@kolliwanne964 Yeah it is, use a bot to send out emails and DMs automatically. Even scammers can do it for basically no money. They just don't give enough of a fuck to do that.
@@kolliwanne964 How is that the artist's problem? It certainly isn't viable either to ask artists to go to every company and ask them not to train on their art. A solution that could benefit both parties would be a platform where users could upload their art to train models, or a simple checkbox saying "I consent to this being used by AI." The main ethical problem with AI really is the lack of consent.
@@saperate No, it is just unrealistic. And that is why it becomes the artist's problem. The solution for anybody who doesn't want their data scraped is to not upload it freely. I am pretty sure that if you keep your art only behind paywalls, you will experience way fewer problems than those with the brilliant idea of uploading their work for free on social media.
Eh... as some once said... the cycle continues... Ad > Adblock > Adblock Blocker > Anti Adblock Blocker >... DRM > Cracks > New DRM > New Crack >... Cloaked Image > Image Uncloaker > Anti DeCloak...
As the demand for blocking annoying ads stays strong (because ads only get worse with time), it becomes an unwinnable battle even for the richest companies. The same goes for anti-AI-art attempts. People will make cool pictures of licensed characters fighting and use them as desktop backgrounds, and copyright holders, whether artists or corps, won't be able to do jack about it. The only question is how much money they'll waste in the attempt.
@@serioserkanalname499 Huh? Lots of people have the AI generate images and claim they are the ones who made them, or they use it for thumbnails. Personal use is something I never hear about from the AI humpers. They're always using it for some kind of gain.
@@j.k.4479 People do complain about personal use as well, since it's an opportunity cost to the artists. If you choose to generate an image for yourself using AI there's a small chance you might've commissioned an artist had you not had the opportunity to use AI.
@@inv41id Not what I meant, I mean the AI users never mention they're using the AI for personal use like a desktop wallpaper. But yeah I understand your point.
@@malakoihebraico2150 That actually depends. If you're talking about trapping your property, in certain states it's legal and in certain states it isn't. It's a legal grey area, because on one hand you're intending to cause harm, but on the other hand so is the other person, and it could be argued it's self-defense.
@@stephenkrahling1634 It's certainly vague, since legislators don't bother codifying it, with the exception of Arkansas, which has laws against it. But the other common-law cases on the matter mostly come down against booby-trapping, and there are actually none siding with traps that protected property only.
Artists just want to do the job they love for an okay pay. I would do art for McDonald's kind of pay just because I love doing it, and if I can survive on drawing I will do so gladly. Heck, pay me rent for some shoebox-sized one-room apartment in a bad neighbourhood and feed me once a day, and I would still gladly draw for you. That's why artists are upset about "AI"-generated images: we just want to keep doing what we are already doing without anyone bullying us.
@@krsmanjovanovic8607 The real solution is to implement Universal Basic Income, so everyone gets enough money to cover the basics, and can work if they want more. A side effect would be that we'd be effectively subsidising the creation of culture, as artists would be able to do what they love without wasting time on another job just to pay the bills.
@@krsmanjovanovic8607 Yeah, no. Paying $1,200 for a single drawing that we can't even criticize without hurting your feelings, and having to deal with the possibility that you're a wokist who's going to put propaganda in it, is a big no-no. "Artists" played with fire for way too long; it was about time their decisions backfired. Like we say: go woke, go broke.
This reminds me of something people were doing years ago, adding subtle noise maps to images to make earlier AI misidentify those images. For instance, two seemingly identical images of a penguin might be identified as something totally wrong like a pizza or a country flag, based on the noise map that was added to the original image. That might even be exactly what evolved into image glazing.
I sort of do the same thing. I add a layer of lines on top of all my drawings, similar to the ones on college ruled paper. Or I make the foreground way too busy for AI to possibly learn anything from it. Like, having flower petals or dagger slices covering like ¼ of the drawing.
Yup, that's the same idea. Only glazing is much more subtle than GAN noise, and it's designed to counter the mechanism by which SD mimics artist "style": it makes the features these models recognize as style inconsistent across images, thus making the model "see" garbage input.
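For anyone curious about the mechanism being discussed, here's a deliberately tiny, hedged sketch of the adversarial-noise idea. The "model" is just a 4-weight linear classifier invented for illustration; real attacks (the FGSM family and friends) perturb inputs against deep networks, and tools like Glaze/Nightshade are far more sophisticated than this:

```python
# Toy illustration of adversarial noise: a tiny linear classifier is
# flipped by a perturbation of at most 0.03 per "pixel". All numbers and
# names here are made up for the sketch.

def classify(weights, pixels):
    """Return 'penguin' if the weighted sum is positive, else 'pizza'."""
    score = sum(w * p for w, p in zip(weights, pixels))
    return "penguin" if score > 0 else "pizza"

def perturb(weights, pixels, eps=0.03):
    """Nudge each pixel slightly against the model's weights (FGSM-style)."""
    return [p - eps * (1 if w > 0 else -1) for w, p in zip(weights, pixels)]

weights = [0.5, -0.3, 0.8, 0.1]   # the model's learned parameters
image   = [0.2, 0.5, 0.1, 0.1]    # a "penguin" image, as the model sees it

adversarial = perturb(weights, image)

print(classify(weights, image))        # penguin
print(classify(weights, adversarial))  # pizza, though no pixel moved more than 0.03
```

Real perturbations are optimized against deep feature extractors and kept below human-visible thresholds; the toy only shows why an imperceptibly small, carefully directed change can flip a model's output while humans see "pretty much exactly the original."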
If an artist is asking for their images not to be used as training data, and you use them as training data anyway, and they don't work as training data, you haven't been betrayed or fooled in some fashion, because the artist doesn't owe you good training data. If an artist were advertising their images as OK to train on, or even selling them as training data, and were using Nightshade on those, that might run into some legal trouble (in my opinion, justifiably). If an artist is just posting their art and doesn't have a stance on whether it should or shouldn't be used as training data... I don't know. It's certainly going to be the niche through which Nightshade'd images can start slipping through the cracks in large numbers, though.
@LumocolorARTnr1319 That's a disingenuous comparison. AI is not just "inspiration". It is much more like sampling in music. Where you DO in fact have to pay royalties.
@@LumocolorARTnr1319 That is absolute garbage, and I can prove you wrong in one sentence: just about every AI picture has the original artist's signature left over; it's just often edited out.
AI as we were promised: "We're taking the hard labor jobs you don't want, freeing you up to do more art!" AI we're getting: "We're taking over the creation of art away so people can have more time for manual labor."
@matowakan You're a moron. If a baby could shovel crap at the same rate as an adult man, but the baby would legally do it for free while the man wanted a living wage, the hirer would take the baby any day, and both would starve to death. This is the same. AI art is terrible, but it's better for producers who don't care. They churn out enough garbage, the industry dies, artists are no longer incentivised to be good, and now we're all shovelling crap for pennies.
@@matowakan People aren't letting it be taken away from them; it's forcefully being taken away from them. You think corporations will choose human creativity that costs a little of their budget over AI slop? It's already happened, with that one alien show that made an intro with AI even though it had mistakes and looked like shit.
I find it funny how companies get mad at people for pirating their software, yet those same companies get to steal artists' work without even asking permission, with no consequences.
There's no such thing as pirating software, just a hijacked term. At best, you can speak of unlicensed use, unauthorized downloading, or just… copying without creator's permission.
"That photo filter should be illegal because when I harvested the picture without the creator's consent for a commercial purpose it didn't do any harm to my actual property, but it meant that I couldn't generate profitable output from the other pictures I harvested without consent!" That's like a mugger suing a victim that ran away from them, because the mugger accidentally dropped their favourite illegally purchased weapon in the gutter while chasing them.
Except that's literally not how that works? We're wasting our effort on an entirely ineffective tool and patting ourselves on the back for a job well done.
Apparently using both makes it even more difficult for AI to rip off your work. If you use both, you're supposed to use Glaze first, and Nightshade afterwards.
It certainly is ironic that the creative aspects of human labour seem easier to automate than the manual ones. Any job that requires a basic level of movement around space and manipulating things seems incredibly difficult for a computer.
Because manual jobs are already mostly automated, or have very specialized machinery that reduces human input. Also, the economy of the first world shifted to the service sector over the last 30 years.
Yeah, the oversight is the digital frontier. It logically makes more sense, and is cheaper, to automate the digital world through the monitor rather than creating a whole industry for robotics, not to mention developing robotics, then programming AI for robotics, then training the robot AI, and then finally implementing it in real-world physics. For any startup, the most logical and efficient way is the path of least resistance. This only makes it inevitable that the rest of the jobs are definitely going to be automated. If "creativity" in its highest form is replaceable, then nothing humans can do is in any way competitive with its superior counterpart. How long manual labor can sustain itself depends on how soon we enter the age of automated robots/cybernetics. If the timeline is quite slow, roughly 100 years, we will probably see a rise in manual labor work and a creative influx, like the odd rise in oil paints and art fairs, etc. But once a robot can replace a 3D human, it's over for all of that and the remainder of the manual labor. In the end, humans should be progressing towards living their own lives and detaching from industrial constructs.
The issue with Glaze, unfortunately, is that it is *really* visible on more cartoony art styles, or ones with lots of flat colors. But we've seen AI start to inbreed as more AI-generated images make it to the places typically scraped *by* AI. Artists have taken inspiration and iterated off each other since the dawn of human history; meanwhile, AI can't make it past 2 years without it becoming glaringly apparent that it cannot create, only make shittier copies of human work.
This isn't actually that true. The claim spawned somewhere on Twitter, and it seems reasonable but doesn't really translate into reality. It's easy to filter out images with bad generations (e.g. 3 hands, 6 fingers, etc.) with good enough vision models, which are already being trained on AI-generated images and give good, steadily improving results. It's feasible that the best models in 3 years will have a lot of synthetic/AI-generated data with good captioning in the dataset, and better results because of it. A lot of AI labs are trying to perfect synthetic data right now, as data scarcity will probably be a bottleneck soon.
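A rough sketch of that filtering step, with every name invented for illustration (a real pipeline would run an actual artifact-detection vision model, not read a pre-computed score):

```python
# Minimal sketch of dataset filtering: keep only images a hypothetical
# artifact detector considers clean enough to train on.

def score_artifacts(image):
    # Stand-in for a real vision model that flags telltale generation
    # errors (extra fingers, mangled text, etc.). Here we just read a
    # pre-computed score attached to each record.
    return image["artifact_score"]

def filter_dataset(images, threshold=0.5):
    """Drop images whose artifact score exceeds the threshold."""
    return [img for img in images if score_artifacts(img) < threshold]

scraped = [
    {"id": "a", "artifact_score": 0.1},  # clean human photo
    {"id": "b", "artifact_score": 0.9},  # six-fingered hand, rejected
    {"id": "c", "artifact_score": 0.3},  # plausible synthetic image, kept
]

clean = filter_dataset(scraped)
print([img["id"] for img in clean])  # ['a', 'c']
```

Note that this kind of filter only removes *visibly* bad generations; plausible-looking synthetic images (like "c" above) still pass, which is exactly the scenario the comment describes.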
I think in the future, state-of-the-art AI will only bother training on images that were archived before 2021. Anything after that you have to assume is either AI-generated or poisoned, and isn't useful for training. It's like how highly sensitive radiometers can only be made using metal that was produced before nuclear weapons were invented.
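That cutoff policy is trivial to express in code, assuming (and it's a big assumption) that every scraped record carries a trustworthy first-archived date; all field names here are invented:

```python
from datetime import date

# Invented record format: each scraped image carries a first-seen date
# from an archive crawl. Anything first archived on or after the cutoff
# is treated as potentially AI-generated or poisoned and skipped.
CUTOFF = date(2021, 1, 1)

def usable_for_training(record):
    return record["first_archived"] < CUTOFF

corpus = [
    {"id": "old_photo", "first_archived": date(2015, 6, 1)},
    {"id": "new_upload", "first_archived": date(2023, 2, 14)},
]

trainable = [r["id"] for r in corpus if usable_for_training(r)]
print(trainable)  # ['old_photo']
```

The hard part isn't the check, of course; it's establishing a provenance date you can actually trust, since upload dates are easy to forge.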
Important note: with these "confusers", you typically have to actually *have* the generating model you're trying to fool. Also, this won't prevent new models with new architectures from being trained on these images. Trivially, when YOU look at the picture, you see pretty much exactly the original; whatever is confusing the current models is some property of those exact models, not of this kind of machine learning in general. So this is only partially useful.
And even then, the example they provided to "prove" the tech worked is questionable at best. There are so many variables to control for that their example doesn't mean anything. You would have to generate hundreds of examples before and after, and know exactly what went into the dataset in both cases. More than likely, the "after" version is simply them adding the new glazed images onto a mostly pre-trained model, which makes it behave weirdly because they have manipulated the model itself and haven't adapted for the changes like anyone who knows what they're doing would do.
With all this cat and mouse... is it really worth the effort over just putting out art that people actually want? If an artist has made a name for themselves and takes commissions, I'd bet that they can still compete well against AI if they're creative enough.
@@alexipestov7002 this isn't cat and mouse so much as it is the contents of the cats stomach giving a little tiny rumbly and it needs to take a short nap
Yeah, it must be a specific exploit in its processing/classifying of images/objects/symbols. Other models likely wouldn't be vulnerable unless they share the same processing/logic.
While AI art is not at the point of completely destroying artists, as an artist, I don't doubt it will vastly reduce opportunities for many throughout the decade.
If we consider other forms of art, it’s already an issue. Voice acting is getting replaced by AI generated voices that obviously use the voice artists’ recorded voices for example. It’s not at large scale yet, but when we’re talking about big corporations and the world of short term profit we live in, it’s only a matter of time before it becomes a norm.
AI could've been a revolutionary tool. It still is, if used as one, but a lot of people instead use it as a full-on replacement for the act of making art. Even then, I've seen arguments that prompting an entire piece is "using a tool", and I suppose to an extreme degree it is, but I'm starting to wonder what the term "tool" even means.
16:38 I’d call it an acceptable DRM measure: - doesn’t require any extra closed source code to run on your computer - doesn’t affect legit users or fair use - doesn’t alienate any consumer hardware that would otherwise be able to access the content
It'd be funny if in a year the largest models got poisoned; then you'd need to hire a translator: "Hi, I'd like to make a dog driving a car." "OK. Computer, generate cat plonking the cow."
As poisoning and glazing tools will most likely be FOSS, nothing stops companies from building countermeasures against them, and even creating tools to "clean" the samples, or at least detect them and automatically remove them from datasets, just like watermark removal.
Encryption standards are also open and well-known, but that doesn't enable companies to decrypt those messages. How does knowing how this glazing happens let companies "clean" the image? How do they tell a glazed image apart from a clean one?
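For what it's worth, here's one naive heuristic a dataset curator might try. It's purely a toy of my own, not anything Glaze's authors or any real scraper is known to use: adversarial-style perturbations tend to add high-frequency energy, so you could compare each pixel with the average of its neighbours and flag images where the residual is unusually large (shown on a 1-D "image" for brevity):

```python
# Toy heuristic: clean natural images are locally smooth, so a large
# deviation of each pixel from its neighbours' average *may* hint at an
# added perturbation. Easily fooled by ordinary texture or camera noise,
# hence "naive".

def highfreq_energy(pixels):
    """Mean squared difference between each pixel and its neighbours' average."""
    residuals = [
        (pixels[i] - (pixels[i - 1] + pixels[i + 1]) / 2) ** 2
        for i in range(1, len(pixels) - 1)
    ]
    return sum(residuals) / len(residuals)

smooth = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6]          # clean, locally smooth "image"
perturbed = [p + (0.05 if i % 2 else -0.05)      # same image + alternating noise
             for i, p in enumerate(smooth)]

print(highfreq_energy(smooth))     # ~0.0
print(highfreq_energy(perturbed))  # ~0.01, far larger
```

Whether anything like this works against real glazing is an open question; the point is only that "the algorithm is open" and "the perturbation is undetectable" are two separate claims.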
the idea that a Photoshop filter, essentially, is dangerous malware? When you're running it on your own art before distributing it? That's the most Silicon-Valley-poisoned idea i've ever heard
IDK; if someone was able to manipulate, say, an audio file so that loading it up in Audacity would cause the program to crash, play it improperly, or do something else you didn't want, I'd think that would be considered malicious. Or that video from Mrwhosetheboss ("How THIS wallpaper kills your phone") about an image that causes certain Android phones to crash if viewed in certain ways, or outright bricks them if set as a wallpaper; or that Janet Jackson song that caused laptops to resonate in a way that messed up their hard drives. Those were created accidentally, but if intentional, they would be malicious. So manipulating an image in a way that causes it to mess up AI image processors could also be considered malicious. Of course, if you think this form of AI image generation is bad, then that would be a good thing, although I've heard it could have a negative influence on other fields like computer vision.
@@KnakuanaRka Patreon and plenty of other streaming services do this with their videos. It makes it a real pain to download stuff from them, but it exists.
@@KnakuanaRka "Malice", in this context, is "making a media file that someone's program parses incorrectly". In any sane world, you would be able to make art without being accused of malice by people who make shitty programs that can't parse your art correctly.
@@andrewkoster6506 Pegasus isn't malware, then. NSO Group should be able to make their own images without being accused of malice by people who make shitty operating systems that can't parse their art correctly.
Considering how most of the images an AI is trained on are stolen, without the permission of the person who created or uploaded them, it's really the company's own fault if something they stole broke their program. This is a really cool tool to protect your work.
It feels wrong to stop companies from using those art pieces, because AI learns just like humans would: we see a cat, then we try drawing it in different ways. It feels morally correct; of course artists are free to do whatever they want, but so are AI companies. This is coming from an artist.
@@ikosaheadrom AI doesn't learn; it copies and then gets rewarded or punished for how closely it copied. Learning is about iteration, experimentation, interpretation, etc. AI physically is not capable of these things.
@@tyeklund7221 Learning is just copying, there is nothing else to it. And it does experiment to get the results it wants and get better, as well as interpret what it got and see if it was close to what it wanted.
@@ikosaheadrom But the problem is that AI doesn't actually "learn" like people. It's not a person, it doesn't "think" it's built and created through random chance. It has no creativity of its own. It does not exist or have feelings or even know what it does outside of when we prompt it to do something. In order to build an AI, companies steal and scrape information (or other artists' work in this case) without the artists' permission in order to build the system that generates the "art". What this tool does is allow artists to protect their intellectual property from being stolen and used by these large companies without permission. This tool is essentially preventing plagiarism while protecting an artist's right to decide how their work is used.
I'd love to see a database of sites known to upload AI art, and have them blocked from Google Images or other similar image search engines, because I fucking hate looking up photos of, like, a real animal and getting nothing but AI slop.
Before, I've looked up pictures of animals like frilled lizards and velvet worms, and a few of the results were AI slop that looked nothing like the real animal. It's almost misleading.
You were planning on getting in touch with whoever it is that took the photographs of those "real" animals to ask permission to rip them off for your Manga comic about retarded jungle animals, right?
@@MrNexor-cj8gs Sadly, yes. And I don't think it will take very long before it happens either. Those AI tools are starting to get pretty damn good now. Even video and audio is getting better all the time, so go figure.
@@MrNexor-cj8gs Unfortunately, most people are sheep and will just jump on the next bandwagon without thinking critically about anything. We know how things work by now.
The concept of Nightshade gives me hope that the uncanny era of image generation will never end and there will always be ways to force image generators to spit out abominations wanted or unwanted.
@@ianmcmurchie6636 I mean, yeah, AI learns; it will eventually figure it out. I think it will be a never-ending fight: for a while AI will lose, but then it will beat the system, and then there's a new system, and it will eventually beat that too, and so on.
@@bandanaboii3136 The amount of money that people on social media who like to sell the idea of "off grid" or "homestead" living have to spend to sustain such a lifestyle really makes me think it is not achievable for normal people with normal amounts of money, unless of course you just give up everything and become a hermit in the woods.
@@rhael42 No one thinks you're smart... mostly because you're deadass wrong: "a" becomes "an" when the word after it begins with a vowel sound ("a book" and "an illusion" are correct; "an book" and "a illusion" are incorrect). If you are going to correct someone, at least actually correct them.
It is malware, as it is put there to intentionally damage a program without the end user knowing. If the post includes a disclaimer about it, though, then it's known, and it's entirely the end user's fault.
@@TrixyTrixter I understand that logic, but I'd say it's more than justified, as the only program it will harm is data scraping, which is copyright infringement and theft.
@@amazonbox5939 more illegal than infringing on the copyright of hundreds of thousands of artists by using their art as training data without their consent? How justified is that, then?
@@tttttttttttttttp12 It doesn't matter who is at fault; they are doing something illegal. You are talking like the law exists to be morally right, but that's not what the law or the justice system is. The justice system exists to make sure people know their place. If a serial killer were murdered by some random guy on the street, the guy would be arrested even though he was "justified". (This is assuming the guy just offed the killer without any provocation, not that the killer attacked him.)
Nightshade should be totally legal. If these companies want to use AI models trained on copyrighted artwork, then they should reap what they sow. You can't both train an AI model on copyrighted work and control what artists do and don't put online. There is a part of me that wonders, once artists stop putting so much work online, what sort of hellish output would come out of the art soup that the AI repeatedly cannibalizes to try to create something "new". Maybe it would die then.
I will try using these tools on my own artworks. ...Doubt I'll ever really be a target of theft with my complete lack of popularity, but might as well. Just for the symbolic gesture against this automated theft called A.I., it's worth it. Thanks for sharing the existence of these tools.
If you truly wish to stick it to the man, you should just not produce any art at all. I am sure your brave and daring move will astound the market as it is deprived of more low-skill, mass-produced digital art.
@@bewawolf19 And if you truly want to be edgy and provocative, just stop doing anything you like doing in life, and see everything through the lens of competition and "is what I am doing part of a mass of lower skill stuff". I'm sure that will make you happy. BTW, that's exactly what "the man"/"the system" wants you to do. So, good job being a fake rebel who actually totally bought and integrated the propaganda.
@@AnAngelineer What propaganda? I am making fun of all the artists who are being replaced by something because they can't make anything worthwhile, so resorted to calling it theft despite it not being theft. They are the same digital artists whom the traditional artists complained about being replaced with, and suddenly when digital artists now are on the chopping block, they suddenly turn into Luddites. Your comment was especially funny as your "Symbolic gesture" is just as brave and stunning as idiots putting Palestine flags on their twitter bio.
If they want to classify those kinds of tools as malware, I also assume they are willing to accuse people training models on other people's art of being thieves, right? Otherwise, I'd love to see their models get fucked up by the artists.
Might as well go the full malware route. Let's say... create an anti-glaze adversarial algorithm with a sleeper agent inside it, wait a week or two, then amp up the voltage on the GPU and fry it. Even if it happens just once, it'll be enough to set a precedent.
@@elisehalflight Reported. And also, careful what you wish for, because little brats like you are giving me more than enough to make sure people like you never see the light of day again.
The most ineffective response, though; it has had zero effect on the big companies lol. People seem to forget how resourceful big companies are: if they see a problem, they get a group of people to design something to reverse it, and they can keep updating that each time the poisoning tool is updated. It will just be an endless loop, and of course the companies will be able to fix things faster than the people behind the tools.
If they want that, why post it on a publicly accessible site? There are plenty of more private places to post art, like Patreon, Discord, or private channels. It's a rule of the internet that once you post something online, anyone can and will use it for whatever they want, and you have no say. This goes for text, IRL photos, and art you make. People want the benefits (praise, popularity, commissions) of posting art publicly, but not the risk of other people using it for whatever they want.
@@cara-seyun Because they've seen many people do so without problems before and figure they have a right to it. Also, artists looking to make a career, or to show the public something, should be allowed to take measures against certain things.
@@Wolf-oc6tx I'm not saying they aren't allowed to do so, but actions have consequences. You can't have all the benefits of publicly displaying art without any of the drawbacks. People imitating your art is a problem that long predates the internet; even Shakespeare got mad that people copied his plays.
One thing I do like is that these AI tools are being trained on stuff scraped from image sites (take Danbooru, for example); then people using AI post their "work", which gets added to the scraped image sites and thrown back at the AIs. The way to beat AI is by feeding it AI content.
This is a serious problem. If you want to train your AI, you can't just scrape the internet and assume data you find is generated by humans. This applies to any type of machine learning application such as language, image, voice, etc.
Danbooru has been banning and deleting AI works in general for now, with strongly worded disclaimers and rules, ever since the great "deletion request" incident (basically, in 2022 an AI company admitted their training data came from Danbooru, causing hundreds of Japanese artists to send takedown requests to the site), while Pixiv requires users to declare the use of AI. Even Danbooru, with its 7 million works across 20 years, is actually tiny training data compared to the internet as a whole. However, AI artists usually post their "work" on Twitter, and a lot of them don't declare the use of AI honestly; some of them even open Patreons. And Twitter, thanks to its diverse content, tends to be a training source itself... now with AI works added in. Model collapse incoming lol.
Danbooru and Gelbooru have started softly working with AI art trainers: they disallow AI art uploads to the site and have begun correcting their tagging info. They see the writing on the wall and have just accepted it.
CG artists also have a good trick: they upload their models with an AI-friendly title like "high quality dog, great topology", while the actual 3D model looks like a piece of sh*t and has the worst possible topology. Then if companies don't ask for permission and just download stuff off sites, the AI gets a piece of sh*t to eat :) lots of them.
@@morphine000 You'd be surprised. I guess by "no one" they mean corporations and such: people that want the most sanitized, inoffensive thing possible with an art style that's "easy on the eyes." If you're just fucking weird, you'll attract other people that are just as weird as you are.
@@morphine000 It's a lot harder to do, but it has been done (Tsukumizu). However, you can still make art for yourself. Regardless, making money on art was difficult even before the AI craze, so I wouldn't make it a career.
As a filmmaker, I hope someone can develop something similar to this for video. I know it's probably an inevitable battle, but hopefully it could delay AI taking over film long enough for some of us to make our first feature films while that's still a thing.
Only now, more people will be able to make their own feature films thanks to the new technology, and those already making films can up their game. The playing field is more level than it's ever been.
@@platypuz1702 that makes sense in theory, but as a filmmaker, I can tell you it probably won't work that way. Technology isn't the biggest obstacle in the way of me making the films I want to make; it's distribution and access to famous people. People don't watch movies the same way they consume other art. If you make an incredible movie, that's no guarantee people will watch it. Because people's attention spans are short, they only watch movies under very special circumstances. Nobody watches a full-length movie they stumble across on YouTube. They watch movies when they sit down on a streaming platform and have already made the decision to watch a 2-hour movie. And then, they really only watch movies that either have a famous person they recognize in them or some form of IP/character that they like, like Mario or John Wick. That's why studios focus so hard on remaking the same movies. The other side to this is the slots you see when you open a popular streaming platform you trust, like Netflix or HBO. These are projects that get funded and distributed by production companies who have relationships with distribution platforms like streamers. They decide what to push for people to watch. So, I can tell you the way this will probably look is anyone can make whatever movie they want, but there will be a depressing lack of human creativity, because you're just letting the computer tell you what your movie should look like. And even if the movie is good, you'll drop it online and very few people will watch it, because good content doesn't rise to the top of the internet; clickbait content does. The "good" movies that people will watch will be the ones with famous actors or famous characters in them, or the ones that are pushed by advertisers. And that comes down to usage rights and money. You have to pay to have a famous actor or character in your movie, whether it's on camera or AI generated. It's also going to come down to powerful relationships in Hollywood.
Again, whose movie gets watched today comes down to who you know in Hollywood and to the big institutions that invest in and distribute people's projects. Except nobody is going to be looking for the next Safdie brothers or Tarantino with a cool idea for a movie, because why would they? The machines do that now. Sure, that person can drop their movie online, but where? YouTube? Who's going to watch it when it's competing with a 10-minute Mr. Beast video? They'd need a famous actor in it, but they aren't a Hollywood elite who knows actors, and they don't have money. The result will actually be a consolidation of power, where success in the industry comes down primarily to who you know.
Using those tools on your art is self-defense. If enough of these sorts of tools spread, they will probably render public scraping for training data unviable, due to the sheer number of countermeasures you'd need to deploy to make sure no contaminated data slips through, with each of them risking ruining your training set by itself.
We are likely to see it use the ChatGPT method of not allowing in data created after a certain cutoff. It will limit the dataset but not outright kill it.
Glazing tools will be rendered useless. It's a war that favours the AI generators, not the glazers. The glaze itself will eventually become very noticeable to the human eye if this goes on.
@@Shaker626 honestly the main "glaze" is that AI content eating itself creates worse AI content, which is basically the same as poisoning these AI models manually
@@DragesNolya And what happens to all this when the average computer can run these models or better ones (which will almost definitely be trained by pirates)?
@@Shaker626 1. We have a bit of a computing brick wall right now; we can't physically fit much more processing power into computer components, so without a major computing breakthrough we aren't going to see that happen. 2. Once again, the main problem with AI content is that if a model uses AI content to train itself, it becomes objectively worse at its job. So the more AI content there is, the worse the generated AI content will be.
Whenever new technology is introduced, there will always be people who will abuse it. It's a tale as old as time. Myself, I am both a traditional artist and a person who experiments with AI generated art. It is very effective to brainstorm ideas, color combinations etc. I believe that anything created solely with AI should be public domain by default, and it should be required to state that an AI generated image is in fact AI. Artists should also be able to fight back if someone copies something of theirs. Edit: Changed some things because replies brought up some good points
I can't really think of any current uses of AI that shouldn't be public domain, but I could easily just not be thinking of an obvious example. I guess the exception is if someone creates their own data for the dataset, so there are no questionable ethics involved.
Any aspects from an "AI" cannot be copyrighted unless someone convinces a judge that those glorified probability tables are indistinguishable from a human brain, and I don't see that happening unless the judge is 80 years old and/or doesn't know what a Netflix is. What this can also mean is that, while the modifications made by a human could be covered, their copyright registration (which, as you might know, is really useful for copyright lawsuits) might be thrown out if they fail to declare the origin of the stuff they submitted and blatantly try to pass it off as theirs.
I almost wonder if, instead of using copyright, a lawsuit could have more traction under antitrust laws. I mean seriously, this AI tech is using the work of small creators, who are effectively small businesses, to shove them out of the market. It could be argued as an advancement of technology, or it could be seen as an abuse of technology and of market power by a corporate entity. Who knows, I'm not saying it's a good strategy, but it's one that should be considered.
Under US copyright law, according to the Copyright Office, works created by AI already can’t be copyrighted; they are essentially public domain. There is some nuance for works created by humans that incorporate AI-created material (the overall work is copyrighted to the human author, but the AI portions on their own are still public domain).
Ok, but this is a race you cannot win. If everyone can use this tool, then the thieves can too. You could train an AI to undo the cloaking by using the cloaking tool to generate training data for the uncloaking. A lot of AI detection / adversarial approaches can be circumvented in this manner.
The only solution to this that I can see would be using your own private glazing model. If it's not public, they can't train against it as easily. But it all depends on how easy it is to create a glazing network that requires a unique detection/neutralisation method, and, on the other side, how easy or costly it is to create an anti-glazing model, and how universal those models are. It is an interesting topic. I can imagine a stock photo or other image-sharing company creating a different secret glazing model every few days and applying it to random images to discourage the use of their images for AI development. Dealing with one glazing method might be easy and cheap, but dealing with thousands of them might be prohibitively costly. These are just my speculations; I have little knowledge in this matter.
"this is a race you cannot win"🤓 Wrong. The current status quo is that there is little you can do to publicly distribute artistic content while retaining your intellectual property, since the models are essentially out to steal/copy your works. These companies have to make money. If their cost of operations escalates enough, then they have to search for another method of getting data or shut down. Yeah, you can do stuff to counteract it, but if it costs an egregious amount then the company can't do business.
@@pingwingugu5 you could infer something computationally equivalent to the inverse transform of the glaze, given the "more parameters than pixels" constraint
Simple solution: add randomization to the process so they can't build a model to predict the changes made. If the method isn't uniform but still does the same thing, it will utterly confuse any model they try to build.
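The randomization idea can be sketched as deriving each image's perturbation from a per-image secret seed, so no single fixed inverse filter exists. This is a toy numpy sketch under loud assumptions: the `secret` key and the plain Gaussian noise model are made up for illustration and are nothing like a real cloaking filter, and (as other replies in the thread point out) a detector trained on many such variants may still spot the statistical signature.

```python
import hashlib

import numpy as np


def randomized_perturb(img, secret=b"artist-key", eps=0.05):
    """Apply a perturbation whose pattern depends on a per-image seed,
    so the transform differs for every image and every key.
    (Toy illustration only, not a real cloak.)"""
    # Derive a seed from a hash of the private key plus the image bytes
    digest = hashlib.sha256(secret + img.tobytes()).digest()
    rng = np.random.default_rng(int.from_bytes(digest[:8], "big"))
    noise = rng.standard_normal(img.shape)
    return np.clip(img + eps * noise, 0.0, 1.0)


img = np.random.default_rng(0).random((8, 8))
a = randomized_perturb(img)                        # deterministic for a given key
b = randomized_perturb(img, secret=b"other-key")   # different key, different pattern
print(np.allclose(a, b))
```

Because the seed mixes in a secret, an attacker cannot regenerate the exact noise pattern even if they know the scheme, only attempt statistical removal.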
The "antidote" identifies patterns of alteration. Because it doesn't have the original to compare against, it can only say "this is poisoned"; it cannot re-make the image, because it doesn't know what it looked like before the alteration. So the sample is curated and kept out of the process, which basically gives artists what they want: not being trained on.
BRO NOBODY IS HIRING PROMPT ENGINEERS. I'm so sick of internet dwellers copy-pasting their worldview from the media. Anyone who works as a prompt engineer is just a mediator for people who don't have AI tools.
It's kinda funny and concerning at the same time how everyone thought creative careers like art and writing would be the last ones to be overtaken by AI, but now they're literally among the first being overtaken.
@@bbrainstormer2036 They don't know that basically every bit of electronics and mechanical stuff is mostly assembled by machines with human supervision. The term "AI" is becoming such a marketing gimmick, like "smart" devices. Can't wait for a washing machine to have AI slapped into it for no reason.
I remember fighting against the horrible implications this new tech would have, and possibly making it less hostile so users could co-exist with it, but after 2+ years of arguing with people I've learned that nobody will care until it hits them.
How to keep the masses enslaved: 1. Take away all upward mobility 2. Work them so hard, they are too tired to do anything else 3. Turn one half of the lower class against the other 4. Don't let them save any money
>Make AI art popular and cool >Artists use it >So much AI art that the AI is training using its own art >Art looks terrible >Nobody uses AI much anymore
It is not clear that it would happen that way. One hope is that AI art will eventually be denied copyright. Then real artists could freely intersperse it with their real art and poison the AI even more. But copyright laws and enforcement can often get screwed up.
@@user-vb6lq9il5v bro, almost 100 years to protect something specific in the internet era? Are we serious? The only reason I think one should have copyright is wanting to be the only one who can use the specific thing they own... but then we have fan art, fanfics, memes, parodies, etc.
Fortunately, some artists are making private portfolios: pieces they do not share online. These pieces can be shown at interviews, so only other artists see them. Best way I have found so far.
That's probably what I'm gonna do tbf; all the art jobs I'm looking at aren't fussed about an online portfolio and would rather just have you send them a Word file or PDF.
If you wanna add an extra layer of safety to prove you're not using AI for your art, you could always use a hand-cam alongside a recording of the process. Speedrunners do this all the time to prove that they aren't cheating.
AI also sucks for generating 3D content. AI can generate pictures that resemble high quality rendered 3D content, but not the 3D content itself. So your portfolio of 3D models is safe to publish openly... for now.
I guess the ironic part about all this is that Glaze has allegedly violated another project's (DiffusionBee) GPL license. They've since removed the GPL parts from their codebase, and also removed the earlier, GPL-violating binaries of the software from their website but AFAIK they have never published the source code for those earlier releases, basically trying to sweep the whole thing under the rug.
@@CALndStuff What makes you think that? The GPL itself says: «You may convey a covered work in object code form under the terms of sections 4 and 5, provided that you also convey the machine-readable Corresponding Source under the terms of this License».
The issue with using filters to poison AI models is that, not only are the models ALREADY TRAINED, but that all it takes is another filter to "clean up" your image for future training... even if the ai models are outlawed, they're too easy to train and too widely distributed to destroy. That genie is out of the bottle.
It's not so simple. For the filter to be removed, it needs to be recognized, and these filters are unique per image. Attempts to remove them can ruin the training data you're trying to get, by distorting or removing details.
Very simple, actually; here's how you do it: gather a large clean dataset (these exist), apply the filters to the training set, marking them (they don't all have to be the same across images; even duplicate images with variations of the filter would work and be helpful), run it through the network, and train it to detect poisoning. Sorry, but this is a classic neural network problem.
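The detect-poisoning pipeline described above can be sketched in a few lines of numpy. This is a toy under loud assumptions: the "clean images" are smoothed random fields, the "poison" is a stand-in high-frequency perturbation (nothing like actual Nightshade output), and the "network" is a single threshold on one hand-built feature rather than a trained classifier; only the train-on-marked-pairs loop matches the comment.

```python
import numpy as np

rng = np.random.default_rng(0)


def make_clean(n, size=32):
    # Stand-in "clean images": smoothed random fields (real photos are
    # mostly low-frequency, which is the property the detector exploits)
    imgs = rng.random((n, size, size))
    return (imgs + np.roll(imgs, 1, axis=1) + np.roll(imgs, 1, axis=2)) / 3


def poison(imgs, eps=0.1):
    # Stand-in "cloaking filter": a small high-frequency perturbation,
    # varied per image, as the comment describes
    return np.clip(imgs + eps * rng.standard_normal(imgs.shape), 0.0, 1.0)


def hf_energy(imgs):
    # Feature: mean squared difference between horizontally adjacent pixels
    return ((imgs - np.roll(imgs, 1, axis=2)) ** 2).mean(axis=(1, 2))


# "Training": build marked clean/poisoned sets, then pick the threshold
# that sits between the two feature distributions
clean_train, dirty_train = make_clean(200), poison(make_clean(200))
thr = (hf_energy(clean_train).mean() + hf_energy(dirty_train).mean()) / 2


def is_poisoned(img):
    return bool(hf_energy(img[None])[0] > thr)


# Evaluate on fresh data
clean_test, dirty_test = make_clean(100), poison(make_clean(100))
acc = (sum(not is_poisoned(i) for i in clean_test)
       + sum(is_poisoned(i) for i in dirty_test)) / 200
print(f"detector accuracy: {acc:.2f}")
```

Real glaze detection is much harder because the perturbations are designed to be statistically subtle, but the structure of the attack on the defense is the same.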
@zippo32123 People would have to understand how AI / neural networks work to know that. Edit: I meant most people don't understand how AI neural nets work, so Zippo's comment is probably just preaching to the choir.
Note that the article on the judge "dismissing" the suit is misleading/incorrect. Only a couple of claims were dismissed, and the plaintiffs were allowed to refile with updated evidence and claims, which they recently did. The filing is public, of course, and from what I've read of the current suit it seems *very* convincing.
It's just watermarks, but more complex. This also highlights a misunderstanding of information theory, and of GANs: adversarial attacks will only make the models stronger. And these attacks are really targeted, so this might work for image-to-image, but not for text-to-image, unless you somehow associate your ArtStation username with something NSFW in the training set to get filtered out or something.

Have people not read the CLIP paper? It works because of scale; the training set is 5B image-caption pairs. And with the push for accessibility on the web, captions will only improve. If you have a good CLIP model, you can train an image decoder against it. Model weights will only learn a bad distribution if the quality of the image-caption pairs drops. Or maybe use a CLIP model to filter your dataset. In the end, they used a CLIP model to create bad data and tried to inject it for specific concepts, at a rate of 3:2, or 1% of pre-training, which is 5M images with intentionally bad captions.

Invisible watermarks or adversarial attacks can be detected and defeated by tweaking the convolutional steps and pool shape; it's kernels you can modify anyway. If a human can distinguish the image, a model can learn that... and more. You aren't hiding yourself by adding some obvious tool. The best chance is for your name to become a dedicated token, essentially overfit in a specific direction you don't want, and then have the attention heads never attend to that token. So you want your name in text data, and even in image-caption pairs that aren't associated with your art directly.

The real issue we are facing is that corporations will gatekeep models and model generation. Only "ethically trained" models (think Adobe Firefly, Getty Images' Picasso) will be allowed for commercial use; Steam games are one example already. Meaning none of the open-source goodness will be viable: web UIs, finetunes, LoRAs, merges, edgy stuff, naughty stuff, etc.
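The "use a CLIP model to filter your dataset" step boils down to keeping only image-caption pairs whose embeddings agree. A minimal sketch with toy stand-in vectors: in a real pipeline both embeddings would come from the image and text towers of an actual CLIP encoder, and the 0.3 cutoff is an assumed value you would tune on held-out data.

```python
import numpy as np


def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))


rng = np.random.default_rng(1)

# Toy stand-ins for CLIP embeddings (real ones come from a CLIP model)
img_emb = rng.standard_normal(64)
good_caption = img_emb + 0.1 * rng.standard_normal(64)  # caption matches image
bad_caption = rng.standard_normal(64)
# Made exactly orthogonal to the image for the demo, modelling a
# poisoned / mismatched caption
bad_caption -= (bad_caption @ img_emb) / (img_emb @ img_emb) * img_emb

THRESHOLD = 0.3  # assumed cutoff; tuned on held-out data in real pipelines


def keep_pair(image_embedding, caption_embedding, thr=THRESHOLD):
    # Keep the training pair only if image and caption embeddings agree
    return cosine(image_embedding, caption_embedding) >= thr


print(keep_pair(img_emb, good_caption))  # True: matched pair survives
print(keep_pair(img_emb, bad_caption))   # False: mismatched pair is dropped
```

This is exactly why caption-level poisoning has to be subtle: a blatantly wrong caption scores low against the image embedding and gets filtered out before training.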
Copyright law will protect corporations, not individuals.
I think Steam already dropped that requirement because it made no sense and could cause them more harm in the long run, since indie devs who use open-source AI to make completely non-infringing content would just choose alternative storefronts instead.
The data replaced by Nightshade and Glaze is not recoverable; any AI trained to "fix" glazed/nightshaded images would be making up the missing data. The only workaround would be detecting and removing images that are glazed/nightshaded.
I'm a "Prompt engineer!" It takes REAL SKILLZ to know how to type "ARTSTATION; BEST QUALITY; HIGH RESOLUTION; SAMDOESARTS STYLE; PURDY SUNSET; HOT LADY" Only a REAL ARTIST can know how to type ALL those words!!!.
It's nice to see artists get tools to fight back against AI stealing their work. An algorithm blending together stolen works to create something visually different doesn't make it any less stealing. But worse, art is the soul of humanity, and handing it off to corporate AI tools is the worst thing we could do as a species.
When AI art is used as part of a piece of work, it's not too bad, but when I consume art on its own and realize that the art I'm looking at is generated by AI, I stop feeling that the image is important or has any value. I don't think AI can replace art in this sense. In video games, TV shows, cartoons, etc., it's possible, but when I look at art without that context, AI art doesn't bring me pleasure. I mean, it's something important when art has its creator.
So you feel an image is "important" only if you believe that it was made by a person? Have you ever seen how humans trace and copy other stuff in real life?
That's only now. Later on, art won't be seen as a bunch of people who know hard techniques. People will start to appreciate art for the content and not just how hard you worked on it (aka ego). And good art will be AI-generated images that have a really cool and interesting message behind them, and bad AI art will be the rest. It allows us to transcend technique and make art that is about the art and not the artist.
@@marcogenovesi8570 This analogy is so common that it's not even funny and it's still complete nonsense. People take ideas and make it their own because they like it, because it makes them feel something. Good art iterates on the source and adds a layer of themselves into the art. Humans don't care about art because ooh image pretty, it's because art is another way for humans to feel a connection with other humans. AI fundamentally cannot replicate that unless we invent a complete generalized self-aware intelligence with the same goals and wants as us. AI is just consumerism on steroids.
In hindsight, it kind of makes sense that "content" was the first job on the chopping block. If a self-driving car or factory robot messes up, people die and property gets damaged, so that needs to be 100% perfect 100% of the time right off the bat, or you're toast, whereas if a content generator messes up, you point and laugh at the output and try again. It's safe to just brute-force it.
I think it is a great step forward for artists. Considering there are algorithms that are very difficult if not impossible to reverse, like some instances of the blur effect, fixing an image with this noise could be really expensive; since the noise is "random," it could take a whole lot of machine power to clean entire "corrupted" datasets back into usable shape.
You're in for a treat... DE-NOISING is exactly how the image generators work in the first place! Images start out as random noise, and the algorithm creates the image by denoising it bit by bit according to the prompt. Image deblurring and upscaling also already existed before this tool. And since the tool is free, anyone can use it to create, say, 1000 pairs of noised and original images, and then teach a model to remove the noise.
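As a toy illustration of learning to denoise from (noised, original) pairs, here is the simplest possible "denoiser": a single global affine map fit by least squares over such pairs. Everything here is an assumption for illustration: the data is synthetic random imagery rather than real art, and a real de-glazing attempt would train something like a U-Net or diffusion model instead of two scalars.

```python
import numpy as np

rng = np.random.default_rng(0)

# Build the paired training data the comment describes:
# originals plus copies with noise applied
clean = rng.random((1000, 16, 16))
noisy = clean + 0.2 * rng.standard_normal(clean.shape)

# Fit a single global affine "denoiser" clean ≈ a*noisy + b by least
# squares over all pixels of all pairs
x, y = noisy.ravel(), clean.ravel()
a = np.cov(x, y)[0, 1] / x.var()
b = y.mean() - a * x.mean()


def denoise(img):
    return a * img + b


# On fresh pairs, the learned map should reduce mean squared error
test_clean = rng.random((100, 16, 16))
test_noisy = test_clean + 0.2 * rng.standard_normal(test_clean.shape)
mse_before = ((test_noisy - test_clean) ** 2).mean()
mse_after = ((denoise(test_noisy) - test_clean) ** 2).mean()
print(f"MSE before: {mse_before:.4f}  after: {mse_after:.4f}")
```

Note the trade-off the reply above hints at: shrinking noise this way also flattens genuine detail, which is exactly what makes aggressive "cleaning" of glazed images risky for the trainer.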
A new cyber arms race is starting and I dig it. Excited to see how the software develops (both AI and glazing), and I hope it turns into a situation like ads vs. ad blockers or malware vs. antivirus, where the race is always close but any lead the bad guys get is quickly lost thanks to the dedication of the good guys.
Turns out that making physical robots work in the domain of unskilled labourers is super hard, but making software robots that work in a domain which consists mostly of bullshitting your way to the top is piss easy.
Most of the pictures he showed were months if not years old at this point. If you were shown images made with the newest SDXL models by someone who knows how to prompt them correctly, you wouldn't be able to tell 9/10 times.
@@dafoex I think it's mostly that "unskilled labour" is a mechanical process in a complex environment, while knowledge work happens in a highly controlled environment. AI will come for all the other domains soon enough though; robotics is making fast progress by integrating LLMs etc.
The shame is, there's artists who were copied to the point where their original art "Looks like AI", because their style was copied so effectively. Those people's reputations are actively being damaged because their art looks so much like AI, even though they were the ones being stolen from.
I just started putting a white border around my art and my signature. If someone used my art, it would have a smudgy white outline and black artifacting. And I've always used low pixel counts, so AI will have a hard time making something new, because instead of having a million pixels it only has a few thousand.
I think artists should be allowed to use whatever they want to protect their artwork. It can't corrupt your system if you aren't trying to steal it to build up your system.
Steam did not ban AI-generated work. All they said is: state that you are using AI, so that if you break copyright law we can take your game down. Breaking copyright law consists of, for example, having the AI generate Pikachu. AI art is allowed on Steam.
At 6:39, there are examples of AI-generated human faces that trigger an "uncanny valley" feeling in people, but they just look like normal photographs of people to me! At least until I look really hard to find weird details. Do I have partial face blindness or something?
Well, I think 2 things are going to happen: 1, people are going to find good ways to filter datasets before or during training, or get packs that are "certified" to be OK. 2, some way to unglaze the (less sophisticated) poisoned images is going to be developed to feed that filter. (Though with the amount of good data available, I think this is going to be lower priority, for now...)
I think #2 could end up landing some people in copyright-infringement hell, since you'd be creating derivative works from the glazed images that certainly wouldn't fall under fair use.
Honestly, as an AI art fan (mostly because it can be used to generate some interesting things, give you some basic ideas, or improve your work), I believe Nightshade and other things like it are great. I mean, it's actually a way for people to protect themselves rather than just accept it as an inevitable part of life.
If you have a bit of experience generating yourself, it becomes easier to see what is AI generated. The images look good at a quick glance, but as you zoom in you start seeing stuff that doesn't make sense, especially in background details. Sometimes they also put a lot of unnecessary detail where it's not needed.
I stopped paying attention for a while, but looking at the comments, I see the whole 'people complaining about AI' and 'people complaining about the people complaining about AI' thing is still going strong. My thoughts are still the same as they were before, and they are indeed personal.

Before AI, the people that use it now and say things like 'there's a tool that exists for me to make art now' were never willing to put the work in to learn, so why are you saying you're an artist now that it exists when you were never interested in the first place? Why I think this will maybe make sense to you once you hear what I think about art.

I do want to point out that I can draw. I'm not confident in myself at all, but I'm honestly happy enough with what I can do, and I'm always learning. I'm not personally interested in using AI, but I don't care that it exists either. I'd just never use it myself, since it defeats the entire point of my personal growth and the satisfaction of it all.

After all, that's what art is: a person putting their thoughts, ideas, imagination, whatever onto a canvas in a way that's unique to them. That's the entire reason art is interesting. It's not about the result, it's about the process, and the beauty of each person's creation, as well as the happiness of enjoying a hobby and the love you have for it. I don't really know if people can still have that passion nowadays.

So the reason I thought so strongly about the 'art wannabes', so to speak, is specifically because they have no passion. Art is just a means to an end to them, so they use this fancy new tool to do the process for them. Skipping the process defeats the entire point of art.
It's just another step towards people losing mental fortitude. The obesity rates in America show that humanity gets weaker each day. "Why try doing something if you don't need to?"
Very clear-headed take. I had similar thoughts. I think it's just part of a larger trend of alienation and loss of skill, and of the value of skill, due to technology. Having command of a craft is one of the most rewarding things in life, and before this excess of tech, that was enforced. The technofetishists say that you can still do that, but that's like saying you can still participate in the joys of using tactile, obsolete technologies like tapes and vinyl records: you're by yourself with maybe a couple of enthusiasts. Everyone is no longer forced by technical limitations to enjoy and appreciate limitation and the requirement of effort. There's no more need to think through things on a mass scale. Everyone now, with few exceptions, is plugged into their phones and going for what is easy, like it's an opium epidemic, because that's what people do. They gravitate, en masse, to what is easiest. Maybe it's a cultural thing that Americans are predisposed to. I think that might be the case. (As an American, I see that it's a culture that celebrates spectacle and new technology over human skill and discipline, with exception to sports; Neil Postman explains this better.)
@@krunkle5136 I agree with pretty much everything you said, and I think that last line is an interesting case; I wish it could be applied in a similar way to everything else, not just sports. In sports, there isn't much to automate; the entire point is that it's a showcase of human skill, so the only things that can be automated are the tools they use to practice or for accuracy, like a pitching machine, a speedometer, or a camera to record the sport. Nothing about the actual sport itself can truly be automated, at least not for the people playing it. For the people who watch it, it can be automated, through games that can perfectly recreate sports or even make things happen that haven't been done before, but that's all treated as a separate thing that far fewer people enjoy compared to the original sport. I wish this is how art could be treated: the tools artists use improve, allowing artists to get better as technology does, while the medium that kind of replaces them is kept separate as its own little niche. The problem is that people don't enjoy sports just for the final result; they enjoy the process of getting to that result. They love watching the game, not just seeing the score. Whereas with art, for everyone but the artist, the final product, the score, is all that matters, so the process of the art being created doesn't matter to them; that's not the part they enjoy, it's the end result that they want. I guess the reason is that sports are competitions while art isn't; in fact, it's basically one of the unwritten rules of being an artist that you don't compare yourself to others.
For a game of football, an AI couldn't just generate the final score and make it a good match in a viewer's eyes, because the work it took to get to that score is what the viewer is there for. But for art, all it has to do is generate that final score, and the viewer is satisfied, because they don't care what it took to get there; they just want the cool image.
@@jc_art_ I think that could have something to do with the lack of art education in public schools. Art is also treated as this thing that's inherent, and there's a lot of bad training too. Looking at the entertainment industry, up until now America in particular seems to prize "realistic" spectacle. It's also largely dominated by Disney, which has too much power over tastes in popular media and is mishandling whatever it consumes (Christ's sake, they own Star Wars and Marvel). I think it's a good observation that art is seen as this uncompetitive thing, and I don't know if it's a matter of anti-competitive sentiment in the art world or what. One thing is for sure: America doesn't have its own equivalent of Comiket, the Japanese indie comic market which draws in staggering numbers of indie creators annually. I think they're very competitive, and it'd be nice if America adopted that. Maybe then it'd have its own competitive art/comic scene. Unfortunately, everything seems to be funneled down to apps, and the sentiment is that "you can just do that online." The epitome of cheapness and cynicism.
Artists out here are trying to protect their work, and the tech bros are so adamant on using it that they create tools to work around the protections, instead of using art from people who consent, or art in the public domain.
TBH this could have been fixed through legislation easily if we had actually competent lawmakers. Music copyright laws, for example, are very thorough and have protected musicians for the last 50+ years. They could easily introduce the same penalties for AI art. Unfortunately, our lawmakers do not have the mental capacity to comprehend this technology enough to actually protect creativity.

What this will amount to in the long run is fewer actual artists pursuing artistry, simply because the demand for them will have dried up; there will be very few jobs left for the remaining artists. This will affect the overall quality of work produced in the future across all artistic media. You can blame this on the plagiarists who used this technology for personal profit at the expense of hard-working artists, most of whom had to work for decades to master their craft.

AI art still requires an understanding of composition, creativity, and deeper concepts, and thus the best operators of this technology would actually be other seasoned artists. This is the one thing people do not quite understand yet, since everyone is so captured by its digital rendering capabilities. The artist's "eye" will be as important as ever. You cannot really train the AI for this yet, which is why almost all AI content looks the same, despite being nicely rendered.
Not really a big question if you pay attention behind the scenes. Some of the prices were raised to adjust for production costs, but a majority of it is from businesses jacking up the price for the hell of it. People are out of jobs because not only do companies not want to pay liveable wages, but they also profit by pushing work onto one employee instead of maybe 3 or 4. AI is just a lazy, greedy way to make money, and they do it by ripping off real talent
Yeah, nowadays I am very sceptical about automation under capitalism - it all seems to go the route the Luddites warned about if possible: push out workers, often make worse-quality goods, potentially make good-quality goods almost non-existent by destroying the craft... but hey, you now earn more money - can't spell economy without the con
So now that people can use nightshade on their artwork, does that mean this is concrete evidence of plagiarism by AI companies if they try to get an AI to undo their poisoning of artwork?
This concerns me deeply: Stability AI, which has published several models by now, is spread so widely they're being targeted with lawsuits. But any proprietary solution will not face as easy a claim, because its procedure is hidden. Then consider what happens if these lawsuits succeed, and let's go further: every single artwork used has explicit approval from the artists. Then we're stuck in a proprietary hell where Adobe or someone has paid a ludicrous sum of money to artists (in total, not to each individual) to land themselves a monopoly on art generation. And presuming advancements continue, they're going to have a monopoly on art eventually. Suing Stability AI, who have at least published their models, unlike closed systems like OpenAI's DALL·E, is a bad move in general. Whatever role these models play in the future, I'm sure it's in most artists' interest for them to be open.
"monopoly on art generation>monopoly on art" - lol. So, how many advancements has AI made after SD1.5? (large diffusion img gen models becoming public?) AI-art isn't progressing even linearly, there will be a lot of time for new models to appear, and most artists will have no more arguments against it, once the training data is cleaner.
It's funny how much people cared about fair use (on YouTube especially) or shortening/abolishing copyright (e.g. with Disney) until their own creations became involved. Now many artists want to be paid if you so much as take inspiration from their works. If people treated source code with the same level of greed, the open source movement would not exist.
@@elliotn7578 fair-use advocates are usually not the same kind of group... I don't even know how you've made a connection. You can make an argument that most online artists use works to which they don't have written rights (characters, or whatever), but the vast majority will respond to a cease and desist accordingly, while AI art is supposedly the exception, because..? The art community has also always shunned tracers, so I don't know why the reaction to AI art would be unexpected.
Agree: Suing open source models made by non-profits is the worst solution ever, since it solves nothing. We get no open AIs and only the Adobes and Facebooks can use it.
@@tteqhu You're talking about models that are a year and a half since release, and there's been a lot of improvement, especially in base models (specializations/fine-tuning like LoRA aren't meaningful long term). Stable Diffusion's open-source release came out in August 2022. Whatever you're extrapolating here, it's clear we're thinking in different time frames. I'm thinking 10-30 years. And of course "monopoly" is excessive. I'm not arguing people will stop drawing things or stop taking photo/video. I'm arguing that it'll replace a massive portion of artistic work and make it unprofitable, or transform artists into art directors. Excitable types argue that's today. I don't agree. Being pessimistic, if AI scales poorly with resources and no advances in algorithmic technique are made, then maybe I'm wrong. Maybe AI literally never reaches a good level. But with the focus on AI and the resources poured into it, there will be improvements on all fronts. At least some. It seems unlikely to me that you couldn't improve on processes that are just so very general right now.
Not only is "self driving" not ready yet, it never will be, period. The best case scenario would be trains that are driven entirely by automation with no human input in the train itself but even then you might still want someone there to ensure things are done smoothly and safely, so even THEN it cannot be replaced. People really need to get out of this fantasy of robot cars driving around the roads anytime soon. Maybe in 400 years.
It could, through the use of sim training. By making a simulation of an environment, training the AI there, and then transferring it to real life, it is showing good results.
sorry ma'am, but it's already working. the only things holding it back are government regulations, people who are against it or fear it, and of course costs, which would require potentially hundreds of millions of vehicles to be retrofitted for autonomous driving
Hello! Friendly AI computer engineer here. I've been an aspiring artist since I was like 4 years old (but not good enough to write home about, sadly…). When I saw AI image generation tools, I criticized them for copyright infringement (just like everybody else). Eventually, I tried them. After over 1000 hours of working with Stable Diffusion and Midjourney, I have come to understand the limitations of AI-generated images and models. Sure, the AI models can appease the average user and even grant artists a proof of concept when trying to make something new. But AI, despite being impressive, is incapable of successfully completing an image ('bringing it home'). Ultimately, I think AI is going to bring artists and supporters even closer together. AI will be a gateway to creating crap content and making people wish for better content. It is also a way to immortalize a fallen artist and his/her style. There are creative ways artists can use AI tools to facilitate their creative endeavors. But the essence of my comment: AI will not displace artists. Some of the limitations I've seen in Stable Diffusion are things we've been trying to fix in other AI technologies since at least the 90s. It is a cool tool, and it has come a long way. It still has a long way to go before it becomes capable of replacing artists.
it makes sense tho. labour jobs are important for the economy to work, so you can't just replace them with risky machine-learning programs that could f*ck everything up, while creative mediums like art, writing, music etc. are all just for entertainment, so there's no danger of societal collapse if people try to replace them with bots... but IMO the problem with that is: what's the point of consuming "content" if it's as soulless as it gets? A big part of why we enjoy art, movies, novels and music is that we know they are made by passionate people who want to share their ideas and craft, and that's what makes them worth consuming, whether they're bad or good. In fact, the variation in quality is another thing we love about that stuff. If it all looks and feels the same (like AI content), then it gets boring and stale really fast. Like, who cares about AI art when everything has that high professional quality to it, especially if you know there's no actual talent or history behind its creation.
The funniest part of this to me is that AI is trained to look like anime or like Pixar but can’t actually replicate simple styles like lorelay bove’s work. They can’t just make simple looking art so the artists that make the simplest quickest works are fine
makes sense. Generative models make generic work in the most literal sense, so they won't necessarily consider simpler, more minimal art styles because those aren't as much on people's radar.
Training LORAs doesn't necessarily require a huge dataset and can be trained on consumer hardware locally. That's one good way to get a specific style if there's no pre-existing model.
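For anyone curious what "low-rank" actually buys you here: a LoRA freezes the base weight matrix and learns only a small low-rank update, which is why it fits on consumer hardware. A rough sketch in plain NumPy (the layer size, rank, and alpha below are made-up illustrative numbers, not taken from any real model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen base weight of one layer (e.g. an attention projection).
d = 512
W = rng.standard_normal((d, d))

# LoRA: learn only a rank-r update, W_eff = W + (B @ A) * (alpha / r).
r, alpha = 8, 16
A = rng.standard_normal((r, d)) * 0.01  # trainable, small random init
B = np.zeros((d, r))                    # trainable, zero init so W_eff == W at the start

W_eff = W + (B @ A) * (alpha / r)

full_params = W.size           # what full fine-tuning would have to train
lora_params = A.size + B.size  # what LoRA trains: 2 * r * d numbers
```

With d=512 and r=8 the trainable update is 2·r·d = 8,192 numbers versus 262,144 for the full matrix, about 3% - and real LoRAs apply this same trick per layer across a multi-billion-parameter model, with similar savings.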
I was in a seminar about developing techniques for cleaning up these poisoned images, calling them "adversarial injections" in training data, and I was like "Huh?" I suppose if legitimately obtained training data runs the risk of corrupting the entire training regimen, it makes sense, but let's not pretend it isn't for dealing with cheeky artists protecting their own interests and livelihoods when the judicial system has completely failed them.
honestly, I don't really care about ai art that much, but what I do care about is an ART competition at my school is allowing ai art entries... I don't understand why tbh. Since if you can just ask an AI to draw you something to win a few hundred bucks doesn't that defeat the purpose of making an art competition??
@sfkdsxzjkcfjldskaf99sddf809sdf yes, but that's different, 'cause the one we already have is good enough, so if any new model is poisoned, just use the old one, like I do with my app.
The use of AI to replace artists and writers really is an impressive level of petty. If the goal of automation is to save on labor costs, artists and writers are usually a fairly small slice of the pie. Like, the use of AI to generate images for the advertising campaign of Civil War only saved the studio a tiny fraction of a percent of their advertising budget for that movie. Conversely, if you are a professional artist, you are doing it because you love doing the art (that doesn't mean you love doing every job you get). So these corporations have invested significant time and money into trying to eliminate one of the most fulfilling professions for the most marginal of cost savings. If that ain't a giant middle finger to working people everywhere, I don't know what is.
A detail about modern artists which isn't mentioned at all in mainstream discourse about this sort of thing is what all of those aforementioned artists did by signing up for Microsoft's services, Google, Apple, X and so on, with those agreements that nobody reads in full. They've basically signed up for having the data of their artistry, together with all of their other data, taken away to be sold and used for training - whether they're aware of it or not.
That’s not the case for all pages. Deviant art for instance updated their ToS and forced all their users to agree to the new data training. Their users weren’t given a choice to agree or not. That’s illegal. You can’t just update your ToS and force people to agree. Also, why is it that these corporations are careful doing it with any other data, such as music or images owned by large corporations (Disney IPs). Dance diffusion said they didn’t want to train on copyrighted music as it could lead to a lawsuit. All while Stable diffusion teams had no problem training on copyrighted images (both dance diffusion and stable diffusion are owned by Stability AI). It’s double standards like that which makes it so shady. And they know it.
@@Thesamurai1999 this! When websites say you own the art you make, then do a sneaky 180 to spit back at artists with a premium tag like they did artists a favor for over a decade.
@@Thesamurai1999 it's not hard. Chinese companies may well develop models trained on US-copyrighted movies - what will people do then? Some countries don't respect IP laws... it's only a matter of time
I like how the solution to constantly shooting ourselves in the foot is more bulletproof boots.
While increasing the caliber of the gun XD
@@martinferrand4711 jajajaja
@@martinferrand4711 true
It doesn’t shoot anyone in the foot.
@@first-last-nullwearing explosive shoes and walking in a minefield.
The poisoning reminds me of the ad nauseum extension - not only does it block ads, but it also clicks them so the advertiser has to pay & the added bonus of ruining your advertising profile.
That's certainly a twist on the adblock technique
Yo what? I've never heard about that before that sounds so cool!
@@AvitheTiger Yea Ad Nauseam is pretty cool (so far advertisers have lost $120 because of me ahah). But you do have to install it manually, since the extension was removed from the Chrome store.
Give it a try, it's pretty fun
Yo that's the kind of stuff that gets applications, extensions and even _users_ banned. Just letting you know.
@@TheDragShot what do you mean?
What took them so long? As an "artist" whose drawings were only ever good enough to make the neural networks worse, I've been doing this for years
thank you for your service lol
Aesthetic ratings are done on the training data. Your art helped as it was auto tagged as low quality.
@@ShivaTD420 Nah man you got it twisted, human art has real quality. AI spew is garbage and we don't want it. Your sparkly eyed waifu that looks like a carbon copy of the other generated picture right next to it will ultimately be forgotten but human art with its quirks will be remembered. Cope, seethe, mald
There's a critic in every crowd...now there's gonna be AI critics too? ;*[}
@@sfkdsxzjkcfjldskaf99sddf809sdf funny how you're the one saying "cope, seethe, mald" while you literally cope, seethe and mald. the ai waifus are clearly good enough for most people, the human touch in art was only worth so much.
I feel like treating Nightshade as an illegal piece of malware is like saying home security cameras should be illegal because you're taking away the house robbers' main source of income.
Exactly what I was thinking
I understand not wanting your art to be used, but ruining a model with only 50 Nightshade-affected images is insane, especially when you can't really tell which image is causing it. It shouldn't be the path to take; it would waste countless hours for people just trying to make small AI projects like image recognition, and going on the offensive will probably just push AI models to eventually become better and less affected by it anyway lol
Corrupting progress is wrong, because AI is just a tool, and only who is using it, and how, makes it good or bad. I know you are mad, like me, at people who sell AI-generated work and ruin the market for digital artists like me, but the best way for an artist to be more effective is to try to learn to use AI too, 'cause every artist in this world knows there is not enough time to create all the projects and art pieces you want in your life with only your own hands, and AI is a great tool giving us the possibility to create much more in our lives. Only people are immoral, not AI.
@@rarehyperion wasting countless hours? what about the hours "wasted" on making art just for it to be ripped off, used for training and then generated?
@@jykox i think this glaze and nightshade filter is targeted at companies that take the art without permission or compensation for generation, and not ai in general
Everyone parroting "They'll just use AI to work around Nightshade" are missing the point. The point of glazing images is to make it JUST annoying enough that data scrapers don't bother to circumvent the cloaking. It's the same logic as getting a big, scary padlock for your property. It's not supposed to stop the expert thief who is after you specifically, it's just supposed to keep your common everyday thief out. Glazing your images is likely to stop data scrapers from going after you because circumventing a glaze takes more time and effort than it would be to just find an unglazed image elsewhere, much like how most thieves see an industrial padlock and just leave to look for unsecured goods somewhere else instead.
The problem is that the people making AI are not everyday people. They probably already have a workaround for it, and if everyday people can watch one video and come up with a solution that works reasonably well, the people making the AI certainly could.
They can't use any workarounds on nightshade or glaze other than removing them from their datasets when detected, the original data from the image is irreversibly damaged by glaze/nightshade.
Isn't the big problem that while a master thief can steal for, idk, 60 years of heists, an AI can be trained and preserved to have ALL the lockpicking strats for the next however-long-the-internet's-alive?
I'm more worried about permanent art-pirating progress that can be copy-pasted into any tech...
@@ED-gw9rg The reason that most artist have is that AI "steals" the image and uses it to make art.
I train AI and Glaze has never stopped me.
Nightshade won't either.
It's a good idea made by smart people, but as a retired engineer, "good ideas made by smart people are always foiled by common sense attacks"
My biggest issue with neural networks is that it should be Opt IN, not opt OUT
If they want to include an artist's works then they should contact the artist and get their permission, not just use the works until the artist finds out and asks them to remove it
The problem with this is that its kinda not viable
@@kolliwanne964 asking for consent is not viable? I mean, only if you're straight up a shit person lol
What do you consider the alternative "viable" for? If "ease/quality of use" overrides consent from those who produce the art then you're honestly lost as a person
Because that's the thing, the only way you can see a "viable" option for AI image generation being through stolen art because AI image generation exists not through need or want by those who make it
If the only "viable" option is through ignoring consent, it's best there are no options at all
@@kolliwanne964 Yeah it is, use a bot to send out emails and DMs automatically. Even scammers can do it for basically no money. They just don't give enough of a fuck to do that.
@@kolliwanne964 How is that the artist's problem? It certainly isn't viable either to ask artists to go to every company and ask for them not to train on their art. A solution that could benefit both party would be a platform where user could upload their art to train models or a simple check box that would say i consent to this being used by ai. Main ethical problem with ai really is the lack of consent.
@@saperate No, it is just unrealistic. And that is why it becomes the artist's problem.
The solution for anybody who doesn't want their data scraped is to not upload it freely.
I am pretty sure that if you keep your art only behind paywalls, you will experience way fewer problems than those with the brilliant idea of uploading their work for free on social media.
Eh... as some once said... the cycle continues...
Ad > Adblock > Adblock Blocker > Anti Adblock Blocker >...
DRM > Cracks > New DRM > New Crack >...
Cloaked Image > Image Uncloaker > Anti DeCloak...
As the demand for blocking annoying ads stays strong because the ads only get worse with time, it becomes an unwinnable battle even for the richest companies.
Same goes for anti-AI-art attempts. People will make cool pictures of licensed characters fighting and use them as desktop backgrounds, and copyright holders, whether artists or corps, won't be able to do jack about it.
Only question is how much money they'll waste in the attempt.
Adblocks and SCENE are winning the races.
@@serioserkanalname499 Huh? Lot's of people have the AI generate images and claim they are the ones who made it or they use it for thumbnails. Personal use is something I never hear about with the AI humpers. They're always using it for some kind of gain.
@@j.k.4479 People do complain about personal use as well, since it's an opportunity cost to the artists. If you choose to generate an image for yourself using AI there's a small chance you might've commissioned an artist had you not had the opportunity to use AI.
@@inv41id Not what I meant, I mean the AI users never mention they're using the AI for personal use like a desktop wallpaper.
But yeah I understand your point.
If you eat someone else's lunch from the breakroom fridge, don't be surprised if it gives you explosive diarrhea.
Guess what, this is also a crime.
@@malakoihebraico2150 That actually depends. If you're talking about booby-trapping your property, it's legal in certain states and illegal in others. It's a legal grey area, because on one hand you're intending to cause harm, but on the other hand so is the other person, and it could be argued it's self-defense.
@@stephenkrahling1634 It's certainly vague, since legislators don't bother codifying it, with the exception of Arkansas, which has laws against it. But other common-law cases on the matter are mostly against booby-trapping, and there is actually none siding with the ones that involved property only.
@@malakoihebraico2150Not if you use heavy spice instead of laxatives.
@@conspiracypanda1200 It's not about the tool, but the intention, that is evaluated in court.
Artists wouldn't need to put "malware" in their work if these companies weren't using their work without permission.
Artists just want to do the job they love for okay pay. I would do art for McDonald's kind of pay just because I love doing it, and if I can survive on drawing I will do so gladly. Heck, pay me rent for some shoebox-sized one-room apartment in a bad neighbourhood and feed me once a day, and I would still gladly draw for you. That's why artists are upset about "AI"-generated images; we just want to keep doing what we are already doing without anyone bullying us
@@krsmanjovanovic8607 The real solution is to implement Universal Basic Income, so everyone gets enough money to cover the basics, and can work if they want more. A side effect would be that we'd be effectively subsidising the creation of culture, as artists would be able to do what they love without wasting time on another job just to pay the bills.
@@Roxor128 whose paying for it commie
@@Roxor128wow magic. Didn't know that was real
@@krsmanjovanovic8607 Yeah no, paying $1,200 for a single drawing that we can't even criticize without hurting your feelings, and having to deal with the possibility that you're a "wokist" who's going to put propaganda in it - it's a big no-no.
"Artists" played with fire for way too long; it was time their decisions backfired. Like we say: go woke, go broke.
This reminds me of something people were doing years ago, adding subtle noise maps to images to make earlier AI misidentify those images. For instance, two seemingly identical images of a penguin might be identified as something totally wrong like a pizza or a country flag, based on the noise map that was added to the original image. That might even be exactly what evolved into image glazing.
Cool
I sort of do the same thing.
I add a layer of lines on top of all my drawings, similar to the ones on college ruled paper.
Or I make the foreground way too busy for AI to possibly learn anything from it. Like, having flower petals or dagger slices covering like ¼ of the drawing.
Yup, that's the same idea. Only glazing is much more subtle than GAN noise, and is designed to counter the mechanism by which SD mimics artist "style" by making the features these models recognize as style inconsistent across images (thus making the model "see" garbage input).
The SAND lab at UChicago also made Fawkes many years ago, which was designed to prevent facial recognition in photos.
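The "subtle noise map" idea a few comments up is the classic adversarial-example attack (often done FGSM-style: step along the sign of the loss gradient with respect to the input). A toy sketch against a hand-made logistic-regression "classifier" - the weights, input, and epsilon below are all invented for illustration; real tools like Glaze compute gradients through an actual image model:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy "model": a fixed logistic-regression classifier (score > 0.5 means "penguin").
w = np.array([0.5, -1.0, 2.0, 0.25])
x = np.array([1.0, 0.2, 0.4, 1.0])  # original input, confidently classified

score = sigmoid(w @ x)

# FGSM-style perturbation: nudge each input dimension a small step eps along
# the sign of the loss gradient w.r.t. the input. For logistic loss with true
# label 1, that gradient is (sigmoid(w @ x) - 1) * w.
eps = 0.5
grad = (sigmoid(w @ x) - 1.0) * w
x_adv = x + eps * np.sign(grad)

adv_score = sigmoid(w @ x_adv)  # the slightly-nudged input now scores below 0.5
```

Each dimension moves by at most eps, yet the decision flips. Scale this up to thousands of pixel dimensions with a much smaller eps, and the change becomes invisible to humans while still derailing the model - which is the core of both the old misclassification trick and Glaze/Nightshade-style cloaking.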
Question is when this will be figured out and reversed. If you can calculate noise map that was used or algorithm, you can reverse this effect.
If an artist is asking for their images to not be used as training data, and you use them as training data anyways, and they don't work as training data, you haven't been betrayed or fooled in some fashion, because the artist doesn't owe you good training data.
If an artist were advertising their images as OK to train on, or even selling them as training data, and were using Nightshade on those, that might run into some legal trouble (in my opinion, justifiably).
If an artist is just posting their art and doesn't have a stance on if it should or shouldn't be used as training data... I don't know. It's certainly going to be the niche for how Nightshade'd images can start slipping through the cracks in large numbers, though.
How many artists pay royaltes to their sources of "inspiration"? How many of them asked for permission to be "inspired" by everyone else?
@@LumocolorARTnr1319You're not an artist of any kind and it shows. Get a grip
@LumocolorARTnr1319 That's a disingenuous comparison. AI is not just "inspiration". It is much more like sampling in music. Where you DO in fact have to pay royalties.
@@anderslarsen4412 AI is no different from an artist looking at something and copying the style, they do it all the time.
@@LumocolorARTnr1319 That is absolute garbage, and I can prove you wrong in one sentence: just about every AI picture has leftovers of the original artists' signatures - they're just often edited out.
AI as we were promised: "We're taking the hard labor jobs you don't want, freeing you up to do more art!"
AI we're getting: "We're taking over the creation of art away so people can have more time for manual labor."
dawg nobody has taken anything away from you, actually generators like SD, MidJourney, and Dalle-3 have allowed more people to visualize their ideas
@@matowakanIt’s taking away jobs for artists, you can’t spin your way out of this
@@onlyscams If you let it "take" your job as an artist I guess you really didn't want it then
@matowakan you're a moron. If a baby could shovel crap at the same rate as an adult man, but the baby would legally do it for free while the man wanted a living wage, the hirer would take the baby any day, and both would starve to death.
This is the same. AI art is terrible, but it's better for producers who don't care.
They churn out enough garbage, the industry dies, artists are no longer incentivised to be good, and now we're all shovelling crap for pennies.
@@matowakan people aren't letting it be taken away from them. it's forcefully being taken away from them. you think corporations will choose human creativity that costs a little of their budget over AI slop? It's already happened, with that one alien show that made an intro with AI even though it had mistakes and looked like shit
i find it funny how companies get mad at people for pirating their software, yet those same companies get to steal art from artists they didn't even ask for permission, with no consequences?
It's fair use. It uses less than a pixel's worth of content from each image.
one of the many contradictions inherent in capitalism
The difference is who has more money... Companies can steal whatever they want as long as they're only stealing from the poor.
@@cherubin7th ig
still kinda funny tho
There's no such thing as pirating software, just a hijacked term. At best, you can speak of unlicensed use, unauthorized downloading, or just… copying without creator's permission.
I think NightShaded is a cool term. Any poisoned AI network has been “NightShaded” sounds so Cyberpunk
Bruuuuuuh yeeesss, were here, were finally here, high tech low life!!!
my life is like le hecking video gayme
@@middnightly no bro
Night-shed.
get nightshaded scrub {MLG horn plays}
"That photo filter should be illegal because when I harvested the picture without the creator's consent for a commercial purpose it didn't do any harm to my actual property, but it meant that I couldn't generate profitable output from the other pictures I harvested without consent!"
That's like a mugger suing a victim that ran away from them, because the mugger accidentally dropped their favourite illegally purchased weapon in the gutter while chasing them.
Pretty convoluted but you've got the spirit
Nightshade seems like a dye packet in a wad of bills, and its loudest dissidents just sound like they're crying "you ruined what I stole"
except that's literally not how it works? we're wasting our effort on an entirely ineffective tool and patting ourselves on the back for a job well done.
Nigtshade doesnt do shit lmao
Except the dye in this packet is easily removed with water. It might as well just be beet juice.
@@Shaker626bro giving people tips on stealing money 💀
@@DolusVulpes The government always steals from you but all you say is "yes daddy".
Apparently using both makes it even more difficult for AI to rip off your work. If you use both, you're supposed to use Glaze first, and Nightshade afterwards.
What about glaze, nightshade, then glaze again?
the opposite. nightshade first then glaze
@@GalaxColor Glaze the nightshade, then use an antidote, then glaze it and then nightshade it again.
@@thesomewhatfantasticmrfox have you not read the paper? It says exactly what I said. Or are you just trying to trick artists?
@@GrumpyIan I suggest you follow the rules of the paper
It certainly is ironic that the creative aspects of human labour seem easier to automate than the manual ones. Any job that requires a basic level of movement around space and manipulating things seems incredibly difficult for a computer.
You literally need like $2-10K to automate some manual jobs, so it's often easier to just stick with manual labour because it's cheaper
Because manual jobs are already mostly automated or have very specialized machinery that reduce human input.
Also the economy of the 1st world shifted to the service sector over the last 30 years.
Just need more sensors and computing power. It's a matter of time.
Yeah, the oversight is the digital frontier. It logically makes more sense, and is cheaper, to automate the digital world through the monitor rather than create a whole robotics industry: develop the robotics, program AI for the robotics, train the robot AI, and then finally deploy it under real-world physics. For any startup, the most logical and efficient way is the path of least resistance. This only confirms that the rest of the jobs are definitely going to be automated eventually. If "creativity" in its highest form is replaceable, then nothing humans can do is in any way competitive with its superior counterpart. How long manual labor can sustain itself depends on how soon we enter the age of automated robots/cybernetics. If the timeline is quite slow, roughly 100 years, we will probably see a rise in manual labor and a creative influx, like the odd resurgence of oil paints and art fairs, etc. But once a robot can replace a 3D human, it's over for all of that and for the remainder of manual labor. In the end, humans should be progressing towards living their own lives and detaching from industrial constructs.
(laughing in most industrial automation that has left only the more complex tasks for humans already)
Well, if a company wants to use my artwork to train their network, I reckon they can always pay me for an un-poisoned copy.
Lol sike - there's enough data gathered to never need you weirdos ever again.
or they can fix it up with some antidote in a mere nanosecond and add your art to the training heap :)
No matter how we feel about it, AI is becoming smarter and better than us. They will find ways to learn from your works.
@cate01a suckin on these boots i see
@@cate01a Or you can fsck off, thief.
The issue with Glaze, unfortunately, is that it is *really* visible on more cartoony art styles, or ones with lots of flat colors. But we've seen AI start to inbreed as more AI-generated images make it to places typically scraped *by* AI. Artists have taken inspiration and iterated off each other since the dawn of human history; meanwhile, AI can't make it past 2 years without it becoming glaringly apparent that it cannot create, only make shittier copies of human work
this is actually really interesting, the idea that AI can actually inbreed. i hope you're right
This isn't actually that true; the statement spawned somewhere on Twitter and it seems reasonable but doesn't really translate into reality
it's easy to filter out images with "bad generations", e.g. 3 hands, 6 fingers, etc., with good enough vision models (which are already being trained on AI-generated images and give out good results / it's improving)
It's feasible that the best models in 3 years will have a lot of synthetic / AI-generated data with good captioning in the dataset and have better results.
A lot of AI labs are trying to perfect synthetic data right now, as data scarcity will probably be a bottleneck soon
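The filtering idea described above can be sketched in a few lines. This is purely illustrative: `quality_score` and the `defects` field are invented stand-ins for a real vision model that flags generation artifacts like extra fingers.

```python
# Sketch of filtering AI-generated candidates before they enter a training
# set: score each image and keep only those above a quality threshold.

def quality_score(image):
    # Toy stand-in heuristic: penalize images flagged with generation
    # defects. A real pipeline would run a trained defect/aesthetic model.
    defects = image.get("defects", 0)
    return max(0.0, 1.0 - 0.3 * defects)

def filter_dataset(images, threshold=0.7):
    """Keep only images whose score clears the threshold."""
    return [img for img in images if quality_score(img) >= threshold]

candidates = [
    {"id": "a", "defects": 0},   # clean generation
    {"id": "b", "defects": 3},   # e.g. three hands
    {"id": "c", "defects": 1},   # one extra finger
]
kept = filter_dataset(candidates)
print([img["id"] for img in kept])  # -> ['a', 'c']
```

The real difficulty, of course, is in the scoring model itself, not the filtering loop.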
I’ve thought about the AI poisoning itself too. I wonder what it will look like in another 2-3 years
I've tried glazing my art but it was really noticeable because it had flat colors and I have a cartoony style
I think in the future, state-of-the-art AI will only bother training on images that were archived before 2021. Anything after that you have to assume is either AI-generated or poisoned, and isn't useful for training. It's like how highly sensitive radiometers can only be made using metal that was produced before nuclear weapons were invented.
Important note: with these "confusers", it is typically required to actually *have* the generating model you're trying to fool. Also, this won't prevent new models with new architectures from being trained on these images. Trivially, when YOU look at the picture, you see pretty much exactly the original; whatever is confusing the current models is some property of these exact models, not of this kind of machine learning in general. So, this is only partially useful.
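The model-specificity point above has a tiny numeric illustration. Two independent toy linear "models" (pure inventions, standing in for real networks) score the same "image"; an FGSM-style perturbation crafted against model A's weights reliably moves A's score, while its effect on model B is just noise.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=16)      # the "image"
w_a = rng.normal(size=16)    # model A's weights
w_b = rng.normal(size=16)    # model B's weights, trained independently

eps = 0.5
# FGSM-style step against model A: move opposite the sign of A's
# gradient, which for the linear score w_a @ x is just w_a.
x_adv = x - eps * np.sign(w_a)

drop_a = (w_a @ x) - (w_a @ x_adv)   # = eps * sum(|w_a|): always large
drop_b = (w_b @ x) - (w_b @ x_adv)   # random sign, typically much smaller
print(drop_a, drop_b)
```

Perturbations do transfer between similar models to some degree in practice, so this understates the attacker's position; it only shows why the effect is a property of particular weights rather than of "machine learning" as such.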
And even then, the example they provided to "prove" the tech worked is questionable at best. There are so many variables to control for that their example doesn't mean anything.
You would have to generate hundreds of examples before and after and know exactly what went into the dataset in both cases. More than likely the "after" version is simply them adding the new glazed images onto a mostly pre-trained model which results in it behaving weirdly because they have manipulated the model itself and haven't adapted for the changes like anyone who knows what they are doing would do.
With all this cat and mouse...
Is it really worth the effort over just putting out art that people actually want?
If an artist has made a name for themselves and takes commissions, I'd bet that they can still compete well against AI if they're creative enough
Oh so people are just going to do this against Dalle2 and Midjourney V4 because that's all the courts know of XD
@@alexipestov7002 this isn't cat and mouse so much as it is the contents of the cat's stomach giving a little tiny rumble and it needs to take a short nap
yeah, it must be a specific exploit in its processing/classifying of images/objects/symbols
other models likely wouldn't be vulnerable unless they share the same processing/logic
If Nightshade is harmful malware, then every ad trying to bypass adblockers, and every website blocking the use of adblockers, is thievery.
If nightshade is malware, the comment section would not be allowed.
While AI art is not at the point of completely destroying artists, as an artist, I don't doubt it will vastly reduce opportunities for many throughout the decade.
If we consider other forms of art, it’s already an issue. Voice acting is getting replaced by AI generated voices that obviously use the voice artists’ recorded voices for example. It’s not at large scale yet, but when we’re talking about big corporations and the world of short term profit we live in, it’s only a matter of time before it becomes a norm.
AI could've been a revolutionary tool. It still is if used as one, but a lot of people instead use it as a full-on replacement for the act of making art. Even then, I've seen arguments that prompting an entire piece is "using a tool", and I suppose to an extreme degree it is, but I'm starting to wonder what the term tool means
That's why artists should have never gone digital.
@@yheti3584Like calling myself a cook because I can use a microwave.
@@yheti3584starting?
It was readily apparent that it was a step up from a dumb tool.
16:38 I’d call it an acceptable DRM measure:
- doesn’t require any extra closed source code to run on your computer
- doesn’t affect legit users or fair use
- doesn’t alienate any consumer hardware that would otherwise be able to access the content
in a year it'd be funny if the largest models got poisoned; then you'd need to hire a translator: "hi, I'd like to make a dog driving a car" "ok, computer, generate cat plonking the cow"
they would never release new models if they got poisoned, even the ones we have now are pretty good
Just grab some popcorn, then watch some circles spam memes and NSFW as "I authorize AI to study my piece" samples and the whole world start to burn...
you could also poison them with forbidden stuff like poo and nudity 😈
Why don't you hire an artist to draw that)))
@@techleontius9161 cause that requires talent
As poisoning and glazing tools will most likely be FOSS, nothing stops companies from having countermeasures against them and even creating other tools to "clean" the samples, or at least detect and automatically remove them from datasets. Just like watermark removal.
Yeah I highly doubt this will be that effective
In fact one of the earlier lawsuits was filed because of a very obvious Getty watermark.
@@chongjunxiang3002 Yeah. I'm not sure what came of that
FOSS or not, it's only a matter of time before someone defeats it; the only question is whether companies are willing to do it.
Encryption standards are also open and well-known, but that doesn't enable companies to decrypt those messages. How does knowing how this glazing happen let companies "clean" the image? How do they tell a glazed image apart from a clean one?
the idea that a Photoshop filter, essentially, is dangerous malware? When you're running it on your own art before distributing it? That's the most Silicon-Valley-poisoned idea I've ever heard
IDK; if someone was able to manipulate, say, an audio file so that loading it up in Audacity would cause the program to crash or play it improperly or do something else you didn't want, I'd think that would be considered malicious. Or that video from Mrwhosetheboss ("How THIS wallpaper kills your phone.") about an image that causes certain Android phones to crash if viewed in certain ways, or outright brick them if set as a wallpaper, or that Janet Jackson song that caused laptops to resonate in a way that messed up their hard drives; those were created accidentally, but if intentional, they would be malicious.
So manipulating an image in a way that causes it to mess up AI image processors could also be considered malicious. Of course, if you think this form of AI image generation is bad, then it would be a good thing, although I’ve heard it would have negative influence on other fields like computer vision.
it's cope at best. Any of this DRM bs will get beaten in weeks
@@KnakuanaRka Patreon and plenty of other streaming services do this with their videos. It makes it a real pain to download stuff from them, but it exists.
@@KnakuanaRka "Malice", in this context, is "making a media file that someone's program parses incorrectly". In any sane world, you would be able to make art without being accused of malice by people who make shitty programs that can't parse your art correctly.
@@andrewkoster6506 Pegasus isn't a malware. NSO group should be able to make their own images without being accused of malice by people who make shitty operating systems that can't parse their art correctly.
Considering how most of the images that an ai is trained on are stolen without the permission of the person who created or uploaded it, it’s really the company’s own fault if something they stole broke their program. This is a really cool tool to protect your work.
It is, but the law sucks. Kinda like how if you hurt yourself while robbing a home, you could sue the owner
It feels wrong to stop companies from using those art pieces, because AI learns just like humans would: we see a cat, then we try drawing it in different ways. It feels morally correct; of course artists are free to do whatever they want, but so are AI companies. This is coming from an artist
@@ikosaheadrom AI doesn't learn, it copies and then gets rewarded or punished for how closely it copied. Learning is about iteration, experimentation, interpretation, etc. AI physically is not capable of these things
@@tyeklund7221 learning is just copying, there is nothing else to it, and it does experiment to get the results it wants and get better, as well as interpret what it got and see if it was close to what it wanted.
@@ikosaheadrom But the problem is that AI doesn't actually "learn" like people. It's not a person, it doesn't "think" it's built and created through random chance. It has no creativity of its own. It does not exist or have feelings or even know what it does outside of when we prompt it to do something.
In order to build an AI, companies steal and scrape information (or other artists' work in this case) without the artists' permission in order to build the system that generates the "art". What this tool does is allow artists to protect their intellectual property from being stolen and used by these large companies without permission.
This tool is essentially preventing plagiarism while protecting an artist's right to decide how their work is used.
I’d love to see an added database of sites that are known to upload AI art and block them from like google images or other similar image search engines, because I fucking hate looking up photos of like a real animal and getting nothing but AI slop
before I’ve looked up pictures of animals like frilled lizards and velvet worms and a few of the results were ai slop that looked nothing like the real animal- it’s almost misleading.
AI Prompts/Autotags and Search Engine Optimization are a match made in hell.
google makes their own ai models doubt they'd do that
you can create your own search engines with custom filters using google cse to achieve it
I’ve tried looking up a reference for an old tv and there was so much fucking ai
You were planning on getting in touch with whoever it is that took the photographs of those "real" animals to ask permission to rip them off for your Manga comic about retarded jungle animals, right?
Auto-generated content everywhere. The dead internet Theory is REAL.
It's certainly becoming a reality.
@@MrNexor-cj8gs Sadly, yes. And I don't think it will take very long before it happens either. Those AI tools are starting to get pretty damn good now. Even video and audio is getting better all the time, so go figure.
@@MrNexor-cj8gs Unfortunately, most people are sheeps and will just jump on the next bandwagon without thinking critically about anything. We know how things works by now.
Things die when normies and corporations get their grubby hands on them
@@distorted_heavy I'm afraid so.
The concept of Nightshade gives me hope that the uncanny era of image generation will never end and there will always be ways to force image generators to spit out abominations wanted or unwanted.
It's been proven these "attacks" don't work; hell, it even improves the AI models
@@hphector6 that is a bummer if that’s the case.
I suppose hellish parodies will require a purposely badly designed model.
Ah like early CGI huh? Yeah I can see where you're coming from. The crudeness was so much more interesting.
@@hphector6 maybe if you had read the actual paper you would've known this works on image generators like Stable Diffusion, not LoRAs
@@ianmcmurchie6636 I mean yeah, AI learns, it will eventually figure it out. I think it will be a never-ending fight: for a while AI will lose, but then it will beat the system, and then a new system, and it will eventually beat that one too, so on and so on
"Egad! This package I stole from your porch contained a venomous snake! I'm suing you for damages!"
That's what the AI guys sound like.
Step 1: Live off grid.
Step 2: Paint and write.
Step 0: escape $5600/month of expenses
Instructions unclear, I think we're on YouTube.
step 3: die because I forgot about food and water.
That doesn't help 😐 That's the whole problem.
@@bandanaboii3136 the amount of money that people on social media who like to sell the idea of "off grid" or "homestead" living have to spend to sustain such a lifestyle really makes me think it is not achievable for normal people with normal amounts of money, unless of course you just give up everything and become a hermit in the woods.
trolling is also an art so i'm all for this
trolling is _a*_ art.
@@rhael42No one is thinking you're smart.
...mostly because you're deadass wrong; "a" becomes "an" when the word after it begins with a vowel. ("A book" and "An illusion" are correct, "An book" and "A illusion" are incorrect)
If you are going to correct someone at least actually correct them.
@@BattyBest he trolled you
@@zx3227 I'm aware it's a troll, that's why I kept the insulting them part at a minimum. Still good to clarify the actual grammar.
Trolling is a art @@BattyBest
Calling Nightshade malware is like calling some software malware because your poor attempt to crack it caused your computer to shit itself.
It is malware, as it is put there to intentionally damage a program without the end user knowing.
If, though, the post includes a disclaimer about it, then it's known and entirely the end user's fault.
@@TrixyTrixter I understand that logic, but I'd say it's more than justified, as the only program it will harm is data scraping, which is copyright infringement and theft.
@@tttttttttttttttp12 still illegal no matter how justified you think it is
@@amazonbox5939 more illegal than infringing on the copyright of hundreds of thousands of artists by using their art as training data without their consent? How justified is that, then?
@@tttttttttttttttp12 doesn't matter who is at fault, they are doing something illegal. You are talking like the law exists to be morally right, but that's not what the law or justice system is. The justice system exists to make sure people know their place. If a serial killer is murdered by some random guy on the street, the guy would be arrested even though he was "justified". (This is assuming the guy just offed the killer without any provocation, not if the killer attacked him)
"They deliberately put garlic in the candy I'm stealing from them!!!"
garlic
to be fair, garlic is awesome. you can use it to make great stuff (and it's not half-bad by itself, provided you have a drink nearby)
this is nft level bs lmao, they stole my png
Nightshade should be totally legal. If these companies want to use AI models trained on copyrighted artwork, then they should reap what they sow. You can't both train an AI model on copyrighted work and control what artists do and don't put online.
There is a part of me that wonders once artists stop creating so much work online what sort of hellish work would be created in the art soup that the AI repeatedly cannibalizes to try to create something "new". Maybe it would die then.
Doesn't really matter anyway, there are already countermeasures against Nightshade. This is going to be a back and forth for a long ass time.
Or maybe we enter a predator-prey population cycle with AI/non-AI artwork
Inkcel luddites shouldn't be allowed to stop technological progress.
@@skillorb slit
@@TabbuEmeye I see this going in the same direction as anticheat and cheats in games: a never-ending arms race
I will try using these tools on my own artworks. ...Doubt I'll ever really be a target of theft with my complete lack of popularity, but might as well.
Just for the symbolic gesture against this automated theft called A.I., it's worth it. Thanks for sharing the existence of these tools.
If you truly wish to stick it to the man, you should just not produce any art at all. I am sure your brave and daring move will astound the market as it is deprived of more low-skill mass-produced digital art.
@@bewawolf19 And if you truly want to be edgy and provocative, just stop doing anything you like doing in life, and see everything through the lens of competition and "is what I am doing part of a mass of lower skill stuff". I'm sure that will make you happy.
BTW, that's exactly what "the man"/"the system" wants you to do. So, good job being a fake rebel who actually totally bought and integrated the propaganda.
@ddontyy That's probably true. Even more reason to do it then. :)
@@AnAngelineer What propaganda? I am making fun of all the artists who are being replaced by something because they can't make anything worthwhile, so resorted to calling it theft despite it not being theft. They are the same digital artists whom the traditional artists complained about being replaced with, and suddenly when digital artists now are on the chopping block, they suddenly turn into Luddites. Your comment was especially funny as your "Symbolic gesture" is just as brave and stunning as idiots putting Palestine flags on their twitter bio.
@@bewawolf19 you wouldn't download a car
If they want to classify that kind of tool as malware, I assume they are also willing to accuse people training models on other people's art of being thieves, right? Otherwise I'd love to see their models being fucked up by the artists.
Might as well go the full malware route; let's say... create an anti-glaze adversarial algorithm with a sleeper agent inside it, wait a week or two, then amp the voltage on the GPU and fry it. Even if it happens just once, it'll be enough to set a precedent.
@@elisehalflight Reported. Also, careful what you wish for, cause little brats like you are giving me more than enough to make sure people like you never see the light of day again.
@@elisehalflight I'm not sure you have any idea how modern voltage regulators work
The tools aren't malware, and the people training models on other people's art aren't any more thieves than an individual trying to mimic said art.
If only it used malware and such to counter anyone who tries to bypass it.
nightshade seems like the most based response artists could give to AI
The most ineffective response, though; it has had zero effect on the big companies lol... People seem to forget how resourceful big companies are: if they see a problem, they get a group of people to design something to reverse it, and they can keep updating that each time the poisoning tool is updated. It will just be an endless loop, and of course the companies will be able to fix things faster than the people behind the tools
if you ignore the fact that it also affects legitimate customers of artists' work, because it makes the art look so much worse that it's irritating to look at.
I think options like Nightshade are a good idea, on the basis that artists should have some control over what their work is used for.
If they want that, why post it on a publicly accessible site? There are plenty of more private places to post art, like Patreon, Discord, or private channels
It’s a rule of the internet that once you post something online, anyone can and will use it for whatever they want, and you have no say. This goes for text, irl photos, and art you make.
People want the benefits (praise, popularity, commissions) of posting art publicly, but not the risk of other people using it for whatever they want
@@cara-seyun Because they've seen many people do so without problems before and figure they have a right to it; also, artists looking to make a career or show the public something should be allowed to take measures against certain things.
@@Wolf-oc6tx I’m not saying they aren’t allowed to do so, but actions have consequences
You can’t have all the benefits of publicly displaying art without any of the drawbacks
People imitating your art is a problem that goes long before the internet
Even Shakespeare got mad that people copied his plays
@@cara-seyun I am saying people should be allowed to put safeguards in place to counter certain things.
@@cara-seyun I am saying taking safeguards against AI-based copying is reasonable rather than a cope.
100% not a malware. It’s a prickly anti theft device
One thing I do like is that these AI tools are being trained on stuff scraped from image sites (take Danbooru for example);
then people using AI post their "work", which is then added to the image-scraping sites and is then thrown at the AIs.
the way to beat AI is by giving it AI content
Clown to clown to clown to clown to cl
This is a serious problem. If you want to train your AI, you can't just scrape the internet and assume data you find is generated by humans. This applies to any type of machine learning application such as language, image, voice, etc.
Danbooru has been banning and deleting AI works in general for now, with a strongly worded disclaimer and rules, ever since the Great "Deletion Request" incident. (Basically, in 2022 an AI company admitted their training data came from Danbooru, causing hundreds of Japanese artists to send takedown requests to the site.)
Pixiv, meanwhile, requires users to declare the use of AI. Even Danbooru, with its 7 million works across 20 years, is actually tiny training data compared to the internet as a whole.
However, AI artists usually post their "work" on Twitter, and a lot of them don't declare the use of AI honestly; some of them even open Patreons.
And Twitter, thanks to its diverse content, tends to be training data itself... and now it's full of those AI works. Model collapse coming lol.
Danbooru and Gelbooru have started soft working with AI Art trainers and disallow AI art uploads to the site and have begun correcting their tagging info. They are seeing the writing on the wall and just accepted it.
Won't work long term as synthetic data gets better anyway.
The worst part is that AI art is hurting everything, even niche kid's games. The Neopets art competitions have started being infiltrated by AI art.
AI is the greatest tool of spammers, scammers and content slop makers
It should be 100% illegal to use copyrighted artwork to train your AI.
and also make fan art
CG artists also have a good trick: they upload their models with an AI-friendly title like "high quality dog, great topology", and then the actual 3D model looks like a piece of sh*t and has the worst possible topology. Then, if companies don't ask for permission and just download stuff off sites, the AI gets a piece of sh*t to eat :) lots of them.
This is the only art I’m going to make from now on 😂
Pro tip: Just make art that no one wants to copy/imitate.
I already do that
You won't make money from that
@@morphine000 You'd be surprised
I guess by "no one" they mean corporations and such. People that want the most sanitized, inoffensive thing possible with an art style that's "easy on the eyes." If you're just fucking weird you'll attract other people that are just as weird as you are
@@morphine000 Its a lot harder to do it, but it has been done (Tsukumizu). However you can still make art for yourself.
Regardless, making money from art was difficult even before the AI craze, so I wouldn't make it a career.
Or just make physical art like in the good ol' days
As a filmmaker, I hope someone can develop something similar to this for video. I know it's probably an inevitable battle, but hopefully it could delay ai taking over film long enough for some of us to make our first feature films while that's still a thing.
Film grain would work somewhat okay, or you could manually glaze each individual frame which would take a very long time
Only now more people will be able to make their own feature films thanks to the new technology, and those already making films can up their game. The playing field is being more leveled than it’s ever been.
Given the failure of Disney's Wish, I don't think you need to worry yet 😂
Wish wasn't made by AI @@jismeraiverhoeven
@@platypuz1702 that makes sense in theory, but as a filmmaker, I can tell you it probably won't work that way. Technology isn't the biggest obstacle in the way of me making the films I want to make; it's distribution and access to famous people. People don't watch movies the same way they consume other art. If you make an incredible movie, that's no guarantee people will watch it. Because people's attention spans are short, they only watch movies under very special circumstances. Nobody watches a full-length movie they stumble across on YouTube. They watch movies when they sit down on a streaming platform and have already made the decision to watch a 2-hour movie. And then, they really only watch movies that either have a famous person they recognize in them or some form of IP/character that they like, like Mario or John Wick. That's why studios focus so hard on remaking the same movies. The other side to this is the slots you see when you open a popular streaming platform you trust like Netflix or HBO. These are projects that get funded and distributed by production companies who have relationships with distribution platforms like streamers. They decide what to push for people to watch.
So, I can tell you the way this will probably look is anyone can make whatever movie they want, but there will be a depressing lack of human creativity, because you're just letting the computer tell you what your movie should look like. And even if the movie is good, you'll drop it online and very few people will watch it, because good content doesn't rise to the top of the internet; clickbait content does. The "good" movies that people will watch will be the ones with famous actors in them or famous characters, or the ones that are pushed by advertisers. And that comes down to usage rights and money. You have to pay to have a famous actor or character in your movie, whether it's on camera or AI generated. It's also going to come down to powerful relationships in Hollywood. Again, whose movie gets watched today comes down to who you know in Hollywood and these big institutions that invest in and distribute people's projects. Except nobody is going to be looking for the next Safdie brother or Tarantino with a cool idea for a movie, because why would they? The machines do that now. Sure, that person can drop their movie online, but where? YouTube? Who's going to watch it when they are competing with a 10-minute Mr. Beast video? They'd need a famous actor in it, but they aren't a Hollywood elite who knows actors, and they don't have money. The result will actually be a consolidation of power where success in the industry comes down primarily to who you know.
Using those tools on your art is self defense.
If enough of those sorts of tools spread, it will probably render public sampling for training data unviable, due to the sheer number of countermeasures you'd need to deploy to make sure no contaminated data slips through, with each of them having a risk of ruining your training set by itself.
we are likely to see them use the ChatGPT method of not allowing in data from after a certain cutoff date. It will limit the dataset but not outright kill it
Glazing tools will be rendered useless. It's a war that favours the AI generators, not the glazers. The glazers will eventually become very noticeable to the human eye if this goes on.
@@Shaker626 honestly the main glaze is that AI content eating itself creates worse AI content, which is basically the same as poisoning these AI models manually
@@DragesNolya And what happens to all this when the average computer can run these models or better ones (which will almost definitely be trained by pirates)?
@@Shaker626 1. we have a bit of a computing brick wall right now; we cannot physically fit more processing power into computer components, so without a major computing breakthrough we aren't going to see that happen
2. once again, the main problem with AI content is that if a model uses AI content to train itself, it becomes objectively worse at its job. So the more AI content there is, the worse the generated AI content will be.
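The "AI training on AI output" degradation this thread keeps describing has a standard toy illustration: repeatedly fit a simple model to samples drawn from the previous generation's fit. The Gaussian below is only an analogy for an image model; the point is that estimation error compounds across generations.

```python
import numpy as np

rng = np.random.default_rng(7)
data = rng.normal(0.0, 1.0, size=200)   # generation 0: "human-made" data
std_history = [data.std()]

for generation in range(50):
    # Fit the current "model" (just a mean and std) to the data, then
    # replace the dataset entirely with samples from that model.
    mu, sigma = data.mean(), data.std()
    data = rng.normal(mu, sigma, size=200)
    std_history.append(data.std())

# Each generation refits to its own output, so the spread performs a
# drifting random walk instead of staying pinned at the true value 1.0.
print(std_history[0], std_history[-1])
```

Mixing in even a modest fraction of fresh human data each generation largely suppresses the drift, which is why labs care so much about provenance filtering.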
Whenever new technology is introduced, there will always be people who will abuse it. It's a tale as old as time. Myself, I am both a traditional artist and a person who experiments with AI generated art. It is very effective to brainstorm ideas, color combinations etc.
I believe that anything created solely with AI should be public domain by default, and it should be required to state that an AI generated image is in fact AI.
Artists should also be able to fight back if someone copies something of theirs.
Edit: Changed some things because replies brought up some good points
What do you think of the new Steam AI changes? And how would you define a "majority part"?
I can't really think of any current uses of AI that shouldn't be public domain, but I could easily just not be thinking of an obvious example.
I guess if someone creates their own data for the dataset, so there's no questionable ethics
any aspects from an "AI" cannot be copyrighted unless someone convinces a judge that those glorified probability tables are indistinguishable from a human brain, and I don't see that happening unless the judge is 80 years old and/or doesn't know what a netflix is
What this can also mean is that, while the modifications made by a human could be covered, their copyright registration (which, as you might know, is really useful for copyright lawsuits) might be thrown out if they fail to declare the origin of the stuff they submitted and blatantly try to pass it off as theirs
I almost wonder if, instead of using copyright, a lawsuit could have more traction under antitrust laws. I mean seriously, this AI tech is using the work of small creators, who are effectively small businesses, to shove them out of the market. It could be argued as an advancement of technology, or it could be seen as an abuse of technology and power in a market by a corporate entity. Who knows, I'm not saying it's a good strategy, but it's one that should be considered
Under US copyright law, according to the Copyright Office, works created by AI already can't be copyrighted. They are essentially public domain. There is some nuance on works created by humans that incorporate AI-created material (the overall work is copyrighted to the human author, but the AI portions on their own are still public domain).
Ok, but this is a race you cannot win. If everyone can use this tool, then the thieves can too. You could train the AI to undo the cloaking, by using the cloaking tool to generate training data for the uncloaking. A lot of AI detection / adversarial approaches can be circumvented in this manner.
All of these tools and preventions are like people trying to put the AI cat back in the AI bag
The only solution to this that I can see would be using your own private glazing model. If it's not public, they can't train against it as easily. But it all depends on how easy it is to create a glazing network that requires a unique detection/neutralisation method, and on the other side, how easy or costly it is to create an anti-glazing model, and how universal those models are.
It is an interesting topic. I can imagine a stock photo or other image-sharing company just creating a different secret glazing model every few days and applying it to random images to discourage usage of their images for AI development. Dealing with one glazing method might be easy and cheap, but dealing with thousands of them might be prohibitively costly.
These are just my speculations, I have little knowledge in this matter.
"this is a race you cannot win"🤓
wrong. the current status quo is that there is little you can do to publicly distribute artistic content while retaining your intellectual property, since the LLMs are essentially out to steal/copy your works. these LLMs have to make money. if their cost of operations escalates enough, then they have to search for another method of getting data or shut down. yeah, you can do stuff to counteract it, but if it costs an egregious amount then that company can't do business
@@pingwingugu5 you could infer something computationally equivalent to the inverse transform of the glaze, with the "more parameters than pixels" constraint
Simple solution: add randomization to the process so they can't build a model to predict the changes made; if the method is not uniform but still does the same thing, that will utterly confuse any model they try to build
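The randomization idea above could be sketched like this. Heavy caveat: real Nightshade/Glaze perturbations are adversarially optimized against models, not random noise; the `glaze` function and its parameters here are made up purely to illustrate keying the perturbation to a secret per-image seed so there is no single fixed transform to learn and invert.

```python
import numpy as np

def glaze(image, seed, eps=0.02):
    """Add a small seeded pseudo-random perturbation, clipped to [0, 1]."""
    rng = np.random.default_rng(seed)
    noise = rng.uniform(-eps, eps, size=image.shape)
    return np.clip(image + noise, 0.0, 1.0)

image = np.full((4, 4), 0.5)    # toy grayscale "image"
a = glaze(image, seed=1234)
b = glaze(image, seed=9999)

# Different secret seeds give different perturbations of the same image,
# while each stays within eps of the original (imperceptible in spirit).
print(np.allclose(a, b))                  # False
print(np.max(np.abs(a - image)) <= 0.02)  # True
```

The catch the thread already hints at: pure random noise is trivially averaged away, so the seed trick only matters when combined with a perturbation that actually fools models.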
I REALLY like the idea of people being afraid to train these models on stolen artwork...
the "antidote" only identifies patterns of alteration. because it doesn't have the original to compare against, it can only say "this is poisoned"; it cannot re-make the image, because it doesn't know what it looked like before the alteration. so the sample is curated out and kept away from the process. basically, it gives artists what they want: not being trained on.
BRO NOBODY IS HIRING PROMPT ENGINEERS. I'm so sick of internet dwellers copy-pasting their world view from the media.
Anyone who works as a prompt engineer is just a mediator for people who dont have AI tools
It's kinda funny and concerning at the same time how everyone thought creative careers like art and writing would be the last to be overtaken by AI, but now those are literally among the first being overtaken.
Because the people who created fiction about AI were writers and artists
Plenty of jobs have been automated in the past. I'm confused by your point
@@bbrainstormer2036 They don't know that basically every bit of electronic and mechanical stuff is mostly assembled by machines with human supervision. The term "AI" is becoming as much of a marketing gimmick as "Smart" devices. Can't wait for a washing machine to have AI slapped into it for no reason.
Calculator used to be a human job
@@bbrainstormer2036 his point is that those jobs weren't taken by AI, that's literally what he wrote
i remember fighting against the horrible implications this new tech would have, and trying to make it less hostile so users could co-exist with it, but after 2+ years of arguing with people i've learned that nobody will care until it hits them.
How to keep the masses enslaved:
1. Take away all upward mobility
2. Work them so hard, they are too tired to do anything else
3. Turn one half of the lower class against the other
4. Don't let them save any money
If people can correctly process poisoned versions, AIs can be trained to do this as well.
They are only making the training set more robust. It will slow them down for a little bit and then help the training in the long run
Yeah, I’d think it wouldn’t be that difficult to make an AI to detect or clean up poisoned images, and then the arms race would continue.
This is DRM, and as usual the ignorant will buy into DRM while everybody that knows the tech will be able to work around it
Then it can also be trained to re-mess up the AI again, over and over in a circle, like how YouTube tries to kill adblockers but can't.
People cannot do something right 100% of the time
>Make AI art popular and cool
>Artists use it
>So much AI art that the AI is training using its own art
>Art looks terrible
>Nobody uses AI much anymore
Alternate universe:
It is not clear that it would happen that way.
One hope is that AI art will eventually be denied copyright. Then real artists could freely intersperse it with their real art and poison the AI even more. But copyright laws and enforcement can often get screwed up.
@@user-vb6lq9il5v copyright is almost outdated
@@alexanderdavidlocarnogomez3441 No, it is not.
@@user-vb6lq9il5v bro, almost 100 years to protect one specific thing, in the internet era? are we serious? the only reason i think one should have copyright is wanting to be the only one who can use the specific thing they own... but then we have fan art, fanfics, memes, parodies, etc.
Fortunately, some artists are making private portfolios:
pieces they do not share online, shown only at interviews, so only the people who need to see them do. Best approach I have found so far.
That's probably what I'm gonna do tbf, all the art jobs I'm looking at aren't fussed about an online portfolio and would rather just have you send them a word file or PDF.
If you wanna add an extra layer of safety to prove you're not using AI for your art, you could always use a hand-cam alongside a recording of the process. Speedrunners do this all the time to prove that they aren't cheating.
@@Mrhellslayerz thank you for the idea, I’ll try this out
AI also sucks for generating 3D content. AI can generate pictures that resemble high quality rendered 3D content, but not the 3D content itself. So your portfolio of 3D models is safe to publish openly... for now.
I guess the ironic part about all this is that Glaze has allegedly violated another project's (DiffusionBee) GPL license. They've since removed the GPL parts from their codebase, and also removed the earlier, GPL-violating binaries of the software from their website but AFAIK they have never published the source code for those earlier releases, basically trying to sweep the whole thing under the rug.
So they are hypocrites, huh. Who would've thought...
They're not supposed to publish the source code for those releases, that's literally what the GPL license is about.
@@monoproject0 no, the GPL requires you to provide the source if you provide a binary
@@CALndStuff What makes you think that? The GPL itself says: «You may convey a covered work in object code form under the terms of sections 4 and 5, provided that you also convey the machine-readable Corresponding Source under the terms of this License».
The issue with using filters to poison AI models is that, not only are the models ALREADY TRAINED, but that all it takes is another filter to "clean up" your image for future training... even if the ai models are outlawed, they're too easy to train and too widely distributed to destroy. That genie is out of the bottle.
it's not so simple. for the filter to be removed, it first needs to be recognized, and these filters are unique per image. attempts to remove them can ruin the training data you are trying to get, by distorting or removing details
very simple actually, here's how you do it: gather a large clean dataset (these exist), apply the filters to the training set and mark them. they don't all have to be the same across the images; even duplicate images with variations of the filter would work and be helpful. run it through the network and train it to detect poisoning. sorry, but this is a classic neural network problem.
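That recipe can be sketched in a few lines of numpy, with random vectors standing in for images and a simulated fixed "filter" pattern (both hypothetical stand-ins; real poisoning perturbations are image-dependent and need a deeper detector):

```python
import numpy as np

rng = np.random.default_rng(1)
D, N = 32, 2000

# Toy "filter": a fixed perturbation pattern plus per-image jitter.
pattern = rng.standard_normal(D)
clean = rng.standard_normal((N, D))
poisoned = rng.standard_normal((N, D)) + pattern + 0.3 * rng.standard_normal((N, D))

# Marked training set: label 1 = poisoned, 0 = clean.
X = np.vstack([clean, poisoned])
y = np.concatenate([np.zeros(N), np.ones(N)])

# Logistic regression by gradient descent -- the "classic" detector.
w, b = np.zeros(D), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted poison probability
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * float(np.mean(p - y))

# Held-out check: the detector flags poisoned images it has never seen.
test_clean = rng.standard_normal((200, D))
test_pois = rng.standard_normal((200, D)) + pattern
acc = np.mean(np.concatenate([
    (test_clean @ w + b) < 0,
    (test_pois @ w + b) > 0,
]))
print(f"held-out accuracy: {acc:.2f}")
```

In this toy setup the detector separates the classes almost perfectly, which is the commenter's point: with labeled examples, detection is a standard supervised problem.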
@zippo32123 People would have to understand how AI / neural networks work to know that.
Edit: I meant that most people don't understand how AI / neural nets work, so Zippo's comment is probably just preaching to the choir
doesnt mean you cant put rules on it lmfao
@@WilliamSmith-gj8wc you don't think the people making the A.I. know how to do that?
I can't wait for the "nightshade antidote antidote" model to be trained. And then maybe a nightshade antidote antidote antidote, who knows!
Nah, nightshade already does not work.
They have AI that can completely corrupt the database of other AIs. It's coming. You are not prepared
it came out like a year ago; no news about it working, only news about it being released.
@@VJETRA Glaze came out a year ago, and it has been working; Nightshade was released very recently
@@CALndStuff what platform did it affect?
Before AI it was already bad for digital artists, because the industry was set up like a corrupted hell.
Note that the article on the judge "dismissing" the suit is misleading/incorrect. Only a couple of claims were dismissed, and the plaintiffs were allowed to refile with updated evidence and claims, which they recently did. The filing is public ofc, and from what I've read of the current suit it seems *very* convincing.
I love how even when artists fight AI they create art
It's just watermarks but more complex. But this also highlights a misunderstanding of information theory, or of GANs: adversarial attacks will only make the models stronger. And these are really targeted, so this might work for image-to-image, but not for text-to-image. Unless you somehow associate your ArtStation username with something NSFW in the training set, to get filtered out or something. Have people not read the CLIP paper? It works because of scale: the training set is 5B image-caption pairs. And with the push for accessibility on the web, captions will improve. If you have a good CLIP model, you can train an image decoder against it. Model weights will only learn a bad distribution if the quality of the image-caption pairs drops. Or maybe use a CLIP model to filter your dataset. In the end, they used a CLIP model to create bad data and tried to inject it for specific concepts. At a rate of 3:2, or 1% of pretraining, which is 5M images with intentionally bad captions.
Invisible watermarks or adversarial attacks can be detected and defeated by tweaking the convolutional steps and pool shape; it's kernels you can modify anyway. If a human can distinguish the image, a model can learn that, and more. You aren't hiding yourself by adding some obvious tool. The best chance is for your name to become a dedicated token that essentially overfits in a specific direction you don't want, and then have the attention heads never attend to that token. So you want your name in text data, and even in image-caption pairs that aren't associated with your art directly.
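The CLIP-filtering idea mentioned above can be sketched like this, with random vectors standing in for the embeddings a real CLIP image/text encoder would produce (every vector, name, and threshold here is a hypothetical stand-in):

```python
import numpy as np

rng = np.random.default_rng(2)
D = 16  # toy embedding size

def cosine(a, b):
    # CLIP-style similarity score between an image and a caption embedding
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# A well-matched pair shares a common direction; a poisoned/mislabeled
# pair points the caption at a different concept entirely.
concept = rng.standard_normal(D)
good_img = concept + 0.2 * rng.standard_normal(D)
good_txt = concept + 0.2 * rng.standard_normal(D)
bad_img = concept + 0.2 * rng.standard_normal(D)   # image of one concept...
bad_txt = -concept + 0.2 * rng.standard_normal(D)  # ...captioned as another

THRESHOLD = 0.3  # real CLIP-score cutoffs are tuned empirically

dataset = [("good", good_img, good_txt), ("bad", bad_img, bad_txt)]
kept = [name for name, img, txt in dataset if cosine(img, txt) > THRESHOLD]
print(kept)  # → ['good']
```

Pairs whose image and caption embeddings disagree simply never enter training, which is why caption-poisoning has to beat the filter, not just the model.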
The real issue we are facing is that corporations will gatekeep models and model generation. Only "ethically trained" models (think Adobe Firefly, Getty Images' Picasso) will be allowed for commercial use; Steam games are already one example.
Meaning none of the open-source goodness will be viable: web UIs, finetunes, LoRAs, merges, edgy stuff, naughty stuff, etc.
Copyright law will protect corporations, not individuals.
I think Steam already dropped that requirement because it made no sense and could cause them more harm in the long run, since indie devs who use open-source AI to make completely non-infringing content will just choose an alternative storefront instead.
Copyright will ALWAYS protect companies over people.
@@voidmain7902lol
The data replaced by Nightshade and Glaze is not recoverable; any AI trained to "fix" glazed/nightshaded images would be making up the missing data. The only workaround would be detecting and removing images that are glazed/nightshaded
@@CALndStuff making up missing data... you do know that's kinda part of how diffusion works right?
I'm a "Prompt engineer!" It takes REAL SKILLZ to know how to type "ARTSTATION; BEST QUALITY; HIGH RESOLUTION; SAMDOESARTS STYLE; PURDY SUNSET; HOT LADY" Only a REAL ARTIST can know how to type ALL those words!!!.
This is like the old "Real Programmer" joke
AI art is ok for personal use,
but it should not be monetizable.
I'M GONNA PROOOOMPT!
REALZ LAST TIME I SPENT 5 WHOLE MINUTES TRYING TO COME UP WITH A PROMOT
It's nice to see artists get tools to fight back against AI stealing their work. An algorithm blending together stolen works to create something visually different doesn't make it any less stealing. But worse, art is the soul of humanity, and handing it off to corporate AI tools is the worst thing we could do as a species
When AI art is used as part of a piece of work, it's not too bad, but when I consume art on its own and realize that the art I'm looking at is generated by AI, I stop feeling that the image is important or has any value. I don't think AI can replace art in this sense.
In video games, TV shows, cartoons, etc., it's possible, but when I look at art without that context, AI art doesn't bring me pleasure. I mean, it's something important when art has its creator.
The creator is the AI.
So you feel an image is "important" only if you believe that it was made by a person? Have you ever seen how humans trace and copy other stuff and real life?
That's only now. Later on, art won't be seen as a bunch of people who know hard techniques. People will start to appreciate art for the content and not just how hard you worked on it (aka ego). And good art will be AI-generated images that have a really cool and interesting message behind them, and bad AI art will be the rest. It allows us to transcend technique and make art that is about the art and not the artist.
So the art doesn't have a "soul" only after you know it was made by AI?
@@marcogenovesi8570 This analogy is so common that it's not even funny and it's still complete nonsense. People take ideas and make it their own because they like it, because it makes them feel something. Good art iterates on the source and adds a layer of themselves into the art. Humans don't care about art because ooh image pretty, it's because art is another way for humans to feel a connection with other humans. AI fundamentally cannot replicate that unless we invent a complete generalized self-aware intelligence with the same goals and wants as us. AI is just consumerism on steroids.
The AI that turned everything into Anime knew what it was doing
*The inner misalignment is real*
In hindsight, it kind of makes sense that "content" was the first job on the chopping block. If a self-driving car or factory robot messes up, people die and property gets damaged, so that needs to be 100% perfect 100% of the time right off the bat, or you're toast, whereas if a content generator messes up, you point and laugh at the output and try again. It's safe to just brute-force it.
Yeah, doesn’t change how pissed I am, they just happened to take my talents first. Fuck it, send me to the mines.
I think it is a great step forward for artists. Considering there are algorithms that are very difficult if not impossible to reverse, like some instances of the blur effect, fixing an image with this noise could be really expensive: since the noise is "random", it could take a whole lot of machine power to clean entire "corrupted" datasets back to a usable state.
you're in for a treat....
DENOISING is exactly how the image generators work in the first place!
Images start out as random noise, and then the algorithm creates the image by denoising it bit by bit according to the prompt.
Image deblurring and upscaling also already existed before this tool.
And this is free, so anyone can use the tool to create, say, 1000 pairs of noised and original images, and then teach a model to remove the noise.
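A toy version of that pair-based recipe, with a fixed perturbation pattern standing in for the "noise" (a hypothetical stand-in: real glazing perturbations vary per image, which is why a trained network rather than a simple average would be needed):

```python
import numpy as np

rng = np.random.default_rng(3)
D, N = 64, 1000

# 1000 (noised, original) pairs, exactly as the comment describes.
pattern = rng.standard_normal(D)  # the unknown perturbation to learn
originals = rng.standard_normal((N, D))
noised = originals + pattern

# Diffusion-style: learn to predict the residual (noised - original).
# With paired data this is plain supervised regression; for a constant
# pattern, the least-squares fit is just the mean residual.
residuals = noised - originals
learned_pattern = residuals.mean(axis=0)

# Remove the learned perturbation from a fresh "glazed" image.
fresh = rng.standard_normal(D)
cleaned = (fresh + pattern) - learned_pattern
print(np.max(np.abs(cleaned - fresh)))  # effectively zero
```

Because this toy perturbation is fixed, averaging recovers it exactly; the commenter's point stands either way, since paired data turns cleanup into ordinary supervised learning.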
A new cyber arms race is starting and I dig it.
Excited to see how the software develops (both the AI and the glazing), and I hope it turns into a situation like ads vs ad blockers or malware vs antivirus, where the race is always close but any lead the bad guys get is quickly lost thanks to the dedication of the good guys.
At this point, AI art still looks like AI art, even the most cartoony ones you showed.
Turns out that making physical robots work in the domain of unskilled labourers is super hard, but making software robots that work in a domain which consists mostly of bullshitting your way to the top is piss easy.
Most of the pictures he showed were months if not years old at this point. if you were shown images made with the newest SDXL models by someone who knows how to prompt them correctly, you wouldn't be able to tell 9/10 times
@@dafoex i think it's mostly that "unskilled labour" is a mechanical process in a complex environment, while knowledge work happens in a highly controlled environment.
ai will also come for all other domains soon enough though, robotics is making fast progress by integrating LLMs etc
The shame is, there's artists who were copied to the point where their original art "Looks like AI", because their style was copied so effectively. Those people's reputations are actively being damaged because their art looks so much like AI, even though they were the ones being stolen from.
@@Globss Can you recommend platforms that knowledgeable prompters use to post, then?
I just started putting a white border around my art and my signature. If someone used my art, it would have a smudgy white outline and black artifacting. And I've always used low pixel counts, so AI will have a hard time making something new, because instead of having a million pixels it only has a few thousand.
Yoo I didn't consider pixels. I usually work on canvases about half the size of my screen, didn't know I was giving myself an advantage lmao
I think artists should be allowed to use whatever they want to protect their artwork. It can't corrupt your system if you aren't trying to steal it to build up your system.
Steam did not ban AI-generated work. All they said is: state that you are using AI, so in case you break copyright law we can take your game down. Breaking copyright law means having the AI generate Pikachu, for example. AI art is allowed on Steam.
Just like DLSite
At 6:39, there are examples of AI-generated human faces that trigger an "uncanny valley" feeling in people, but they just look like normal photographs of people to me! At least until I look really hard to find weird details. Do I have partial face blindness or something?
Nah it's just an effect thats blown out of proportion due to surface level explanations
holy shit its jeb
tip for spotting ai: often the rendering is at a pro level but the """art""" has mistakes reminiscent of a beginner artist
Or it's just "too good"
Well i think 2 things are going to happen:
1. People are going to find good ways to filter datasets before or during training, or get packs that are "certified" to be ok.
2. Some way to unglaze the (less sophisticated) poisoned images is going to be developed for that filter. (Probably lower priority for now, given the amount of good data still available...)
Still more steps so more wasted resources
I think #2 could end up landing some people in copyright-infringement hell, since you'd be creating derivative works from the glazed images that certainly wouldn't fall under fair use.
And once AI gets good enough, it could train itself.
Honestly, as an AI art fan (mostly because it can be used to generate some interesting things, give you basic ideas, or improve your work), I believe Nightshade and other things like it are great. I mean, it's actually a way for people to protect themselves rather than just accepting it as an inevitable part of life.
Prompt engineers will soon be trying to get a picture of a cow on a skateboard and getting a cat with a propeller on its back, in the style of cubism
Do prompt engineers trying to get anything but a picture of anime boobies even exist?
If you have a bit of experience generating images yourself, it becomes easier to see what is AI-generated: the images look good at a quick glance, but as you zoom in you start seeing that stuff doesn't make sense, especially in the background details.
Sometimes they also put a lot of unnecessary detail where it's not needed
Like for example?
I stopped paying attention for awhile, but looking at the comments, I see the whole 'people complaining about AI' and 'people complaining about the people complaining about AI' thing is still going strong.
My thoughts are still the same as they were before, and they are indeed personal. Before AI, the people that use it now and say things like 'there's a tool that exists for me to make art now' were never willing to put the work in to learn, so why are you saying you're an artist now that it exists when you were never interested in the first place?
Why? I think this will make sense once you hear what I think about art, maybe.
I do want to point out, I can draw. I'm not confident in myself at all, but I'm honestly happy enough with what I can do, and I'm always learning. I'm not personally interested in using AI, but I don't care that it exists either. I'd just never use it myself I guess since it defeats the entire point of my personal growth, and the satisfaction of it all. After all, that's what art is; a person putting their thoughts, ideas, imagination, whatever onto a canvas in a way that's unique to them. That's the entire reason art is interesting. It's not about the result, it's about the process, and the beauty of each person's creation, as well as the happiness of enjoying a hobby and the love you have for it. I don't really know if people can still have that passion nowadays.
So the reason I thought so strongly about the 'art wannabes' so to speak is specifically because they have no passion. Art is just a means to an end to them, so they just use this fancy new tool to do the process for them. Skipping the process defeats the entire point of art.
It's just another step towards people losing mental fortitude. The obesity rates in America show that humanity gets weaker every day. "Why try doing something if you don't need to?"
Very clear headed take. I had similar thoughts. I think it's just part of a larger trend of alienation and loss of skill and value in skill due to technology.
Having a command in any craft is one of the most rewarding things in life, and before this excess of tech, that was enforced.
The technofetishists say that you can still do that, but that's like saying you can still participate in the joys of using tactile, obsolete technologies like tapes and vinyl records.
You're by yourself with maybe a couple enthusiasts.
Everyone is no longer forced by technical limitations to enjoy and appreciate limitation and requirement of effort. There's no more need to think through things due to limits on a mass scale.
Everyone now, with few exceptions, is plugged into their phone and going for what is easy, like it's an opium epidemic, because that's what people do. They gravitate, en masse, to what is easiest. Maybe it's a cultural thing that Americans are predisposed to.
I think that might be the case.
(As an American I see that it's a culture that celebrates spectacle and new technology (Neil Postman explains this better), over human skill and discipline, with exception to sports.)
@@krunkle5136 i agree with pretty much everything you said, and i think that last line is an interesting case, and i wish it could be applied in a similar way with everything else, not just sports.
In sports, there isn't much to automate; the entire point is that it's a showcase of human skill, so the only things that can be automated are the tools used to practice or to measure, like a pitching machine, a speedometer, or a camera to record the sport. Nothing about the actual sport itself can truly be automated, at least not for the people playing it. The watching side, however, can be automated through games that perfectly recreate sports, or even make things happen that haven't been done before, but it's all treated as a separate thing that far fewer people enjoy compared to the original sport.
I wish this is how art could be treated: the tools artists use improve, letting artists get better as technology does, while the medium that kind of replaces them is kept separate as its own little niche. The problem is that people don't enjoy sports just for the final result; they enjoy the process of getting to that result. They love watching the game, not just seeing the score. Whereas with art, for everyone but the artist, the final product, the score, is all that matters, so the process of the art being created doesn't matter to them, because that's not the part they enjoy; it's the end result that they want.
I guess the reason is that sports are competitions while art isn't; in fact, it's basically one of the unwritten rules of being an artist that you don't compare yourself to others.
for a game of football, an AI couldn't just generate the final score to make it a good match in a viewer's eyes, because the work it took to get to that score is what the viewer is there for. but for art, all it has to do is generate that final score and the viewer is satisfied, because they don't care what it took to get there; they just want the cool image.
@@jc_art_ I think that could have something to do with the lack of art education in public schools. Art is also treated as this thing that's inherent, and there's a lot of bad training too.
Looking at the entertainment industry, up until now America in particular seems to prize "realistic" spectacle. It's also largely dominated by Disney, which has too much power over tastes in popular media and is mishandling whatever it consumes (Christ's sake, they own Star Wars and Marvel).
I think it's a good observation that art is seen as this uncompetitive thing, and I don't know if it's a matter of anti competitive sentiment in the art world or what.
One thing is for sure, America doesn't have its own equivalent of Comiket, the Japanese indie comic market which draws in staggering numbers of indie creators annually.
I think they're very competitive and it'd be nice if America adopted that. Maybe then it'd have its own competitive art/comic scene.
Unfortunately everything seems to be funneled down to apps and the sentiment is that "you can just do that online". The epitome of cheapness and cynicism.
Artists out here trying to protect their work, and the tech bros are so adamant on using their works that they create tools to work around the protections instead of using art of people who consent or using public domain
TBH this could have been fixed easily through legislation if we had competent lawmakers. Music copyright laws, for example, are very thorough and have protected musicians for the last 50+ years. They could easily introduce the same penalties for AI art. Unfortunately, our lawmakers do not have the mental capacity to comprehend this technology well enough to actually protect creativity.
What this will amount to in the long run is fewer actual artists pursuing artistry, simply because the demand for them will dry up; there will be very few jobs left for the remaining artists. This will affect the overall quality of work produced in every artistic medium in the future. You can blame this on the plagiarists who used this technology for personal profit at the expense of hard-working artists, most of whom had to work for decades to master their craft.
AI art still requires an understanding of composition, creativity, and deeper concepts, and thus the best operators of this technology would actually be other seasoned artists. This is the one thing people do not quite understand yet, since everyone is so captured by its digital rendering capabilities. The artist's "eye" will be as important as ever. You cannot really train the AI for this yet, which is why almost all AI content looks the same despite being nicely rendered.
Problem is, farm work is automated, but for some reason food prices are still going up and a lot of people are jobless. A big question as to why.
Not really a big question if you pay attention behind the scenes. Some of the prices were raised to adjust for production costs, but a majority of it is from businesses jacking up the price for the hell of it. People are out of jobs because not only do companies not want to pay liveable wages, but they also profit by pushing work onto one employee instead of maybe 3 or 4. AI is just a lazy, greedy way to make money, and they do it by ripping off real talent
Yeah, nowadays i am very sceptical about automation under capitalism - it all seems to go the route the luddites warned about, if possible: push out workers, often make worse-quality goods, potentially make good-quality goods almost non-existent by destroying the craft... but hey, you now earn more money - can't spell economy without the con
So now that people can use nightshade on their artwork, does that mean this is concrete evidence of plagiarism by AI companies if they try to get an AI to undo their poisoning of artwork?
First of all, it's not legally plagiarism to begin with.
This concerns me deeply: Stability AI, which has published several models by now, is spread so widely that they're being targeted with lawsuits. But any proprietary solution will not face as easy a claim against it, because the procedure is hidden.
But then consider what happens if these lawsuits succeed. And let's go further: suppose every single artwork used has explicit approval from the artists. Then we're stuck in a proprietary hell where Adobe or someone has paid ludicrous sums of money to artists (in total, not each individual) to land themselves a monopoly on art generation. And presuming advancements continue, they're eventually going to have a monopoly on art.
Suing Stability AI, which has at least published its models, unlike closed systems like OpenAI's DALL-E, is a bad move in general. Whatever role these models play in the future, I'm sure it's in most artists' interest for them to be open.
"monopoly on art generation>monopoly on art" - lol.
So, how many advancements has AI made since SD 1.5 (large diffusion image-gen models becoming public)?
AI art isn't progressing even linearly; there will be plenty of time before new models appear, and most artists will have no more arguments against it once the training data is cleaner.
It's funny how much people cared about fair use (on YouTube especially) or shortening/abolishing copyright (e.g. with Disney) until their own creations became involved. Now many artists want to be paid if you so much as take inspiration from their works. If people treated source code with the same level of greed, the open source movement would not exist.
@@elliotn7578
fair-use advocates are usually not the same kind of group... I don't even know how you've made a connection.
You can make an argument that most online artists use works to which they don't have written rights (characters, or whatever), but the vast majority will respond to a cease and desist accordingly, while AI art is supposedly the exception, because..? The art community has also always shunned tracers, so I don't know why the reaction to AI art would be unexpected.
Agreed: suing open-source models made by non-profits is the worst solution ever, since it solves nothing. We get no open AIs, and only the Adobes and Facebooks can use the tech.
@@tteqhu You're talking about models that are a year and a half past release, and there's been a lot of improvement, especially in base models (specializations/fine-tuning like LoRA aren't meaningful long term). Stable Diffusion came out as open source in August 2022.
Whatever you're extrapolating here, it's clear we're thinking in different time frames; I'm thinking 10-30 years. And of course "monopoly" is excessive. I'm not arguing people will stop drawing things or stop taking photos/video. I'm arguing that it'll replace a massive portion of artistic work and make it unprofitable, or transform artists into art directors.
Excitable types argue that's today; I don't agree. Pessimistically, if AI scales poorly with resources and no advances in algorithmic technique are made, then maybe I'm wrong and AI literally never reaches a good level. But with the focus on AI and the resources being poured into it, there will be improvements on all fronts, at least some. It seems unlikely to me that you couldn't improve on processes that are just so very general right now.
Not only is "self driving" not ready yet, it never will be, period. The best case scenario would be trains that are driven entirely by automation with no human input in the train itself but even then you might still want someone there to ensure things are done smoothly and safely, so even THEN it cannot be replaced.
People really need to get out of this fantasy of robot cars driving around the roads anytime soon. Maybe in 400 years.
It could, through the use of sim training: by making a simulation of an environment, training the AI there, and transferring it to real life. It is showing good results.
sorry mam, but it's already working. the only things holding it back are government regulations, people who are against it or fear it, and of course costs, since potentially hundreds of millions of vehicles would have to be reconditioned for autonomous driving
Lmao no. @@hessen6022
Hello! Friendly AI Computer Engineer here. I've been an aspiring artist since I was like 4 years old (but not good enough to write home about, sadly...). When I saw AI image generation tools, I criticized them over copyright infringement (just like everybody else). Eventually, I tried them. After over 1000 hours of working with Stable Diffusion and Midjourney, I have come to understand the limitations of AI-generated images and models. Sure, the AI models can appease the average user and even give artists a proof of concept when trying to make something new. But AI, despite being impressive, is incapable of successfully completing an image ("bringing it home"). Ultimately, I think AI is going to bring artists and supporters even closer together. AI will be a gateway to creating crap content and making people wish for better content. It is also a way to immortalize a fallen artist and his/her style. There are creative ways artists can use AI tools to facilitate their creative endeavors. But the essence of my comment: AI will not displace artists. Some of the limitations I've seen in Stable Diffusion are things we've been trying to fix in other AI technologies since at least the 90s. It is a cool tool, and it has come a long way. It still has a long way to go before it becomes capable of replacing artists.
> Build AI meant to simulate how our brains work
> AI becomes good at human cognitive tasks
*Shock*
it makes sense tho, labour jobs are important for the economy to work, so you can't just replace them with risky machine-learning programs that could f*ck everything up,
while creative mediums like art, writing, music etc. are all just for entertainment, so there's no danger of society collapsing if people try to replace them with bots...
...but IMO the problem with that is: what's the point of consuming "content" if it's as soulless as it gets? A big part of why we enjoy art, movies, novels & music is that we know they are made by passionate people who want to share their ideas and craft, and that's what makes them worth consuming, doesn't matter if it's bad or good. In fact, the varying difference in quality is another thing we love about that stuff.
If it all looks and feels the same (like AI content), then it gets boring and stale really fast. Like, who cares about AI art when everything has that same high professional quality to it, especially when you know there's no actual talent or history behind its creation?
The funniest part of this to me is that AI is trained to look like anime or like Pixar but can't actually replicate simple styles like Lorelay Bové's work. It can't just make simple-looking art, so the artists who make the simplest, quickest works are fine.
makes sense. Generative models make generic work in the most literal sense, so they won't necessarily capture simpler, more minimal art styles, because those aren't that much on people's radar.
You just didn't find fine-tuned models for that, or didn't find a proper prompt.
ehmm.... I'm pretty sure I could grab any graphic design software and make something that could be considered a decent approximation in... 2 hours tops
Training LoRAs doesn't necessarily require a huge dataset, and they can be trained locally on consumer hardware. That's one good way to get a specific style if there's no pre-existing model.
It's just that nobody made a LoRA for it.
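To illustrate why LoRA training is cheap enough for consumer hardware, here's a minimal NumPy sketch of the idea (the layer dimensions and rank are made up for illustration, not taken from any particular model): instead of fine-tuning a full weight matrix, a LoRA learns a low-rank update on top of frozen base weights.

```python
import numpy as np

# A LoRA fine-tune freezes the base weight matrix W and learns a
# low-rank update B @ A instead, so very few parameters are trained.
d_out, d_in, rank = 768, 768, 8               # toy sizes; real layers vary

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))        # frozen base weights
A = rng.standard_normal((rank, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, rank))                   # trainable up-projection, zero-init

# Effective weights at inference: base plus the learned low-rank delta.
W_adapted = W + B @ A

full_params = d_out * d_in                    # what a full fine-tune would train
lora_params = rank * (d_out + d_in)           # what the LoRA trains
print(f"LoRA trains {lora_params / full_params:.1%} of the parameters")
# prints "LoRA trains 2.1% of the parameters"

# With B zero-initialized, the adapted model starts identical to the base.
assert np.allclose(W_adapted, W)
```

Only `A` and `B` get gradient updates, which is why a few dozen style images and a single consumer GPU can be enough; the savings grow even larger at real model sizes, since the trained fraction shrinks as the layers get wider.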
I was in a seminar about developing techniques for cleaning up these poisoned images, calling them "adversarial injections" in training data, and I was like "Huh?" I suppose if legitimately obtained training data runs the risk of corrupting the entire training regimen, it makes sense, but let's not pretend it isn't about dealing with cheeky artists protecting their own interests and livelihoods when the judicial system has completely failed them.
The system is shit
honestly, I don't really care about AI art that much, but what I do care about is that an ART competition at my school is allowing AI art entries... I don't understand why, tbh. If you can just ask an AI to draw you something to win a few hundred bucks, doesn't that defeat the purpose of holding an art competition??
That's horrible, that totally defeats the purpose of the competition
Eventually, motorbikes will be allowed in marathons.
New techniques and models come out all the time, I don't think this will last long
Then the poisoning can also be retrained to mess up the AI again, over and over in a circle, like how YouTube tries to kill adblockers but can't.
@sfkdsxzjkcfjldskaf99sddf809sdf yes, but that's different, because the models we already have are good enough. So if any new model is poisoned, just use the old one, like I do with my app.
The use of AI to replace artists and writers really is an impressive level of petty. If the goal of automation is to save on labor costs, artists and writers are usually a fairly small slice of the pie. For example, using AI to generate images for the Civil War advertising campaign saved the studio only a tiny fraction of a percent of its advertising budget for that movie.
Conversely, if you are a professional artist, you are doing it because you love doing the art (that doesn't mean you love every job you get).
So, these corporations have invested significant time and money into trying to eliminate one of the most fulfilling professions for the most marginal of cost savings. If that ain't a giant middle finger to working people everywhere, I don't know what is.
A detail about modern artists that isn't mentioned at all in mainstream discourse about this sorta thing: by signing up for services from Microsoft, Google, Apple, X and so on, with those agreements that nobody reads in full, all of those aforementioned artists have basically agreed to have the data of their artistry, together with all of their other data, taken to be sold and used for training. Whether they're aware of it or not.
That’s not the case for all pages. Deviant art for instance updated their ToS and forced all their users to agree to the new data training. Their users weren’t given a choice to agree or not. That’s illegal. You can’t just update your ToS and force people to agree.
Also, why is it that these corporations are so careful with any other data, such as music or images owned by large corporations (Disney IPs)? The Dance Diffusion team said they didn't want to train on copyrighted music as it could lead to a lawsuit, all while the Stable Diffusion team had no problem training on copyrighted images (both Dance Diffusion and Stable Diffusion are owned by Stability AI). It's double standards like that which make it so shady. And they know it.
@@Thesamurai1999 this! Websites say you own the art you make, then do a sneaky 360 and spit it back at artists behind a premium tag, like they did artists a favor for over a decade.
@@Thesamurai1999 it's not hard. Chinese companies may well develop models trained on US-copyrighted movies; what will people do then? Some countries don't respect IP laws... only a matter of time