Will AI art generators create an image and then use that image as a reference, without knowing or “remembering” it, and so fail to draw influences from a wide range, instead relying on a narrow pool of their own outputs? Lots of these images look like knock-offs of each other. Can AI fool itself into interpreting its own output as “of the world” and thus, at times, treat things as real that were merely its own creation? What if an AI made a fake video of a nuke launch and later saw it as real? I feel like I’m seeing that base problem here: AI being self-referential. Sort of a scary potentiality.