No, gen AI pictures are not derived works of their training data. They are separate processes. The algorithm that actually generates the image has no knowledge of the training data.
The numeric weights are derived from the training material it previously ate: they’re not extricable.
The algorithms involved in the actual creation of the images are not the ones actually trained on the data. So it's not at all accurate to claim they are derived.
Are you arguing that the training process has no effect on the output of the model? What on earth are they doing it for, then?
Not directly no.
The training data trains an algorithm that effectively just describes an image it sees (which, BTW, is super useful for blind people) and gives a score for each keyword.
Then the actual generative part takes a random background and tries to denoise it into something recognisable, then shows it to the first algorithm, which gives it a score on how closely it resembles the prompt. Then it does some fancy maths and performs another denoising cycle, gets another score from the first algorithm, more maths, another cycle, etc., until it spits out an image that matches the prompt.
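The loop described above can be sketched in miniature. This is a toy, not real diffusion: the `score` function here is a hypothetical stand-in for the trained scoring model (its `TARGET` pattern plays the role of the learned weights), and the "fancy maths" is reduced to a crude finite-difference nudge. It only illustrates the shape of the score → adjust → score cycle.

```python
import random

# Hypothetical stand-in for the trained scorer: it rewards "images"
# (here, short lists of pixel values) that resemble a fixed target.
# In a real system this preference comes from weights learned in training.
TARGET = [0.2, 0.8, 0.5, 0.9]

def score(image):
    # Higher is better: negative mean squared distance to the target.
    return -sum((p - t) ** 2 for p, t in zip(image, TARGET)) / len(image)

def denoise_step(image, step_size=0.1):
    # The "fancy maths", reduced to a toy: nudge each pixel in whichever
    # direction the scorer rates more highly (a finite-difference step).
    result = []
    for i, p in enumerate(image):
        up = image[:i] + [p + step_size] + image[i + 1:]
        down = image[:i] + [p - step_size] + image[i + 1:]
        result.append(p + step_size if score(up) > score(down) else p - step_size)
    return result

random.seed(0)
image = [random.random() for _ in TARGET]  # start from random noise
for _ in range(50):                        # repeated denoise/score cycles
    image = denoise_step(image)
```

Note that the generator never reads `TARGET` directly; it only sees scores. That is the point being debated: the generator has no direct access to the training data, yet every step it takes is steered by a model whose preferences were shaped by that data.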
So the algorithm that generates the image has no data from the training process whatsoever.
It gets a, uh, score. You wrote that yourself, I don’t know how you could forget.
But that's not the same as a derivative. That's like saying a chart of which art styles were most popular in each decade is a derivative of every work in that survey, because those works were used to create the data being presented.
But… that is derivative. You can’t know which styles were most popular in a decade without looking at the styles popular in that decade. Such a chart must change if the data it’s built from changes.