

That won’t matter when everything becomes paywalled.
What are they supposed to do? Pull out of the UK?
Well, modern smartphones don’t even deserve a mention in the news. Guaranteed these models won’t make any noticeable difference in the experience compared to the previous generation, or the one before that.
Which is why OpenAI listed relationships with real people as a competitor to ChatGPT.
Or “agents” that may or may not follow the instructions you give them.
It feels like, a generation from now, doing what was common in the US during the founding of Apple and Microsoft will be considered terrorism.
Yeah, these claims seem very vague. I’d like to see how all that works, with examples.
Marginalia should be one of the most important things to preserve, with an importance similar to Wikipedia’s.
Yeah, the best is never going to be “now”, which is always drowned in uncertainty and chaos. When you look back, everything looks safe and deterministic.
The problem is how the whole thing was presented to people. You just need to browse subreddits related to ChatGPT to see the amount of misunderstanding about how it works. A few examples:
https://www.reddit.com/r/ChatGPT/comments/1ld6dot/a_close_friend_confessed_she_almost_fell_into/
https://www.reddit.com/r/ChatGPT/comments/1koadmg/testing_gpts_response_to_delusional_prompts_it/
https://www.reddit.com/r/ChatGPT/comments/1low386/this_is_what_recursion_looks_like/
This whole thing is kinda scary: how easily some people can spiral into delusion when they over-rely on LLMs.
These models fill gaps with plausible-sounding but often fabricated information. It’s understandable how non-technical users end up treating their outputs as profound revelations, mistaking AI-generated fiction for hidden truths.
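A toy version of why that happens: next-token sampling always emits something, and fluency is baked in regardless of grounding. The vocabulary and scores below are invented purely for illustration:

    import numpy as np

    # Toy next-token step: softmax over scores, then sample. Whatever the
    # logits are, a token comes out; there is no built-in "I don't know"
    # unless such a token happens to be likely. The model always produces
    # *something* that reads as fluent.
    rng = np.random.default_rng(0)
    vocab = ["Paris", "Lyon", "London", "Berlin"]
    logits = np.array([2.0, 1.5, 0.5, 0.3])        # made-up scores
    probs = np.exp(logits) / np.exp(logits).sum()  # softmax
    print(rng.choice(vocab, p=probs))              # fluent, grounded or not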
I’m just thinking now that the Mac is next.
I thought that, with how much these companies preach about LLMs doing their coding, the cost of development would go down, no? So why do they need to reduce everything to a single code base to make things easier for developers?
All I see is people chatting with an LLM as if it were a person. “How bad is this on a scale of 1 to 100”: you’re just doomed to get a random answer based solely on whatever context is being fed in, and you probably don’t even know the full extent of that context.
Trying to make the LLM “see its mistakes” is a pointless exercise. Getting it to “promise” something is useless.
The issue with LLMs working in human language is that people eventually want to apply human traits to them, like asking “why” as if the LLM knew its own decision process. It just takes an input and generates an output; it can’t produce any “meta thought” explanation of why it outputted X and not Y in the previous prompt.
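A minimal sketch of that point, with a made-up generate() standing in for any chat model (the dummy replies are invented): the “why” answer comes out of exactly the same text-in, text-out step as the original answer, not from inspecting the first call.

    import random

    # Hypothetical stand-in for a chat model: transcript in, completion out.
    # A real model samples tokens; this dummy only illustrates the flow.
    def generate(transcript: str) -> str:
        if "Why" in transcript.splitlines()[-1]:
            return "Assistant: Because of the naming and the missing tests."
        return f"Assistant: {random.randint(1, 100)}."

    transcript = "User: How bad is this code, 1 to 100?\n"
    answer = generate(transcript)                  # e.g. "Assistant: 87."

    # Asking "why" just appends to the transcript and samples again.
    transcript += answer + "\nUser: Why that score?\n"
    explanation = generate(transcript)

    # `explanation` is produced the same way `answer` was: a fresh
    # continuation of the text so far, not a readout of whatever
    # computation produced the number in the first call.
    print(answer)
    print(explanation)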
I just hope I’m long gone before humanity descends into complete chaos.
Or the most common cases can be automated while the more nuanced surgeries are left to the actual doctors.
They might, once it becomes too flooded with AI slop.
This is quite funny actually.
I like the saying that LLMs are “good” at stuff you don’t know. That’s about it.
When you know the subject, it stops being very useful, because you’ll already know the obvious stuff the LLM could help you with.
It doesn’t work because the car’s front is shaped to minimize drag, and a turbine would add drag — forcing the motor to work harder to maintain speed. Turbines generate energy by resisting airflow, not letting it slide past. So you’re not harvesting free energy; you’re paying for it with more fuel or battery.
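Back-of-the-envelope numbers make the loss concrete (all values assumed for illustration: 30 m/s cruise, 500 W harvested, 40% turbine and 85% drivetrain efficiency): holding speed costs roughly three times what the turbine returns.

    # Rooftop-turbine energy balance with assumed numbers. A turbine
    # delivering p_harvest watts from air moving at v m/s exerts a drag
    # force of at least p_harvest / v, more once its own losses count.
    v = 30.0             # m/s relative airspeed (~108 km/h), assumed
    p_harvest = 500.0    # W delivered to the battery, assumed
    eta_turbine = 0.4    # turbine efficiency, assumed
    eta_drive = 0.85     # battery-to-wheels efficiency, assumed

    drag_force = p_harvest / (eta_turbine * v)      # extra drag, N
    extra_motor_power = drag_force * v / eta_drive  # W to hold speed

    print(f"harvested:       {p_harvest:.0f} W")
    print(f"spent to offset: {extra_motor_power:.0f} W")  # ~1471 W, a net loss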
And when some data is leaked, your ID will be right there with it.