The distinction between AI and GenAI is meaningless; they are buzzwords for the same underlying tech.
So is trying to bucket them by copyright violation: there are very powerful, open-dataset, more or less reproducible LLMs that were trained on a trivial amount of electricity and that you can run on your own PC right now.
Same with use cases. One can use embedding models or tiny ResNets to kill. People do, in fact, as with Palantir’s non-generative recognition models. At the other extreme, LLMs can be totally task-focused and useless at anything else.
The distinction is corporate/enshittified vs not. Like Reddit vs Lemmy.
You know this is a stupid take, right? You know that ChatGPT and Stockfish, while both being forms of “artificial intelligence,” are wildly incomparable, yeah? This is like saying “the distinction between an ICBM and the Saturn V is meaningless, because they both use the same underlying tech.”
The distinction between AI and GenAI is like the difference between eating and cannibalism; one contains the other, but there’s still a meaningful distinction.
Generative AI produces text or images by leveraging huge neural networks weighted by tons and tons of training data. It’s fundamentally a system of guesses and vibes.
Machine learning in general is often much more precise. The model finding early cancer in scans isn’t guessing the next word; it’s running the image through a series of precisely tuned layers.
The industry terms for the distinction are generative vs. discriminative models.
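The contrast this comment draws can be sketched in a few lines of toy code. Nothing here comes from the thread; the corpus, the bigram model, and the stand-in classifier are all made up for illustration. The generative side learns a distribution over text and samples new text from it ("guessing the next word"); the discriminative side just maps an input to a label, sampling nothing.

```python
import random
from collections import defaultdict, Counter

# Toy corpus, purely illustrative.
corpus = "the cat sat on the mat the cat ate the rat".split()

# --- Generative side: a word-level bigram model that samples new text. ---
transitions = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev][nxt] += 1  # count observed next-word frequencies

def sample_sentence(start, length, rng):
    """Generate new text by repeatedly sampling the learned next-word distribution."""
    words = [start]
    for _ in range(length - 1):
        counts = transitions[words[-1]]
        if not counts:  # dead end: word never seen with a successor
            break
        choices, weights = zip(*counts.items())
        words.append(rng.choices(choices, weights=weights)[0])
    return " ".join(words)

# --- Discriminative side: input -> label, nothing is generated. ---
def classify(pixel_intensity):
    """Stand-in for an image classifier (a real one would be a tuned CNN)."""
    return "tumor" if pixel_intensity > 0.7 else "healthy"

print(sample_sentence("the", 5, random.Random(0)))  # new text sampled from the model
print(classify(0.9))  # a label, not new content
```

The point of the sketch: the generative function can produce word sequences that never appear verbatim in the corpus, while the classifier only ever returns one of its fixed labels.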
That first claim makes no sense and you make no argument to back it up. The distinction is actually quite meaningful; generative AI generates new samples from an existing distribution, be it text, audio, images, or anything else. Other forms of AI solve numerous problems in different ways, such as identifying patterns we can’t or inventing novel and more optimal solutions.
I genuinely doubt the tech used to control the Zerg is the same tech used to generate an essay about elephants that contains numerous pieces of misinformation. Lately “AI” is being used so liberally that it has lost its meaning.
It’s like saying hard drives are bad because of what you can store on them
I dunno, agree to disagree.