

Not in this case. This is permadeath: you get deleted, permanently.
Not to mention the Bengal famine of 1943, which was apparently caused by Churchill diverting food resources to the soldiers, starving the population: https://en.m.wikipedia.org/wiki/Bengal_famine_of_1943
It’s not like this is going to stop hackers.
I understand that it is for many. Just not for me. I prefer exploring lands and their many wonders.
The only souls game I've beaten is Elden Ring, and only by modding an easy mode into it. I loved exploring the game, especially the lore, without worrying about dying because of my lack of skill.
Anyone who says that a souls-like is only fun because of its difficulty is wrong, at least in my experience.
I have been doing it for the last 15 years.
Project Wingman is rather incredible.
If LLMs are smart enough to judge me, I am sure chatbots are smart enough to do the job. They should let chatbots do the job they were hiring for.
Generating crap is easy. Creating art is not so easy. LLMs are not artists.
Why on earth would drinking hydrogen-infused water increase athletic performance? I honestly doubt any claims made about this shit product.
Edit: I am no chemistry expert, but wouldn't most of the hydrogen just collect above the water surface and escape as soon as the can is opened? I don't know if that's what actually happens; if I am wrong, please correct me.
NFTs are the past. Now LLMs are the future.
But what is the use case for a piece of software that straight up makes stuff up at least 60-70% of the time? Plausible-sounding made-up stuff is still made-up stuff. I would rather not use LLMs because I don't trust them.
Because LLMs are so damn useful that they have to be forced to be used.
So glad that I am not a frog.
I see this comment section is full of various operations defined on fuckery. I don't think they form a field, or even a ring.
Pretty cliche, but The Witcher 3.
I can assure you that LLMs are, in general, not great at extracting concepts and working with them the way a human mind is. LLMs are statistical parrots that have learned to associate queries with certain output patterns, like code chunks, text chunks, etc. They are not really intelligent, certainly not like a human is, and because of this they cannot follow instructions like a human does. The problem is, they seem just intelligent enough to fool someone who wants to believe they are intelligent, even though there is no intelligence, by any measure, behind their replies.