

This reminds me of those PC optimizer tools like CCleaner that promised to find a bunch of things to uninstall and redundant/junk files to delete to make your PC 3000x faster, but ended up breaking your system.
What do you mean? Chinese people can't host a Lemmy server? Or just that the main worldwide servers aren't available there?
Where did you get that from? Looks like you're imagining a whole other discussion there.
That's just not what I'm saying.
It can be a hobby, sure. But men having a hobby isn't what was being discussed at all. Nobody cares about men having hobbies; the issue is when this hobby is a potential threat to other people. Isn't this rather obvious?
Are you really pretending this is about men having hobbies?
Going after someone once the spotlight is on them is a lot harder.
“Exerting control” or “getting you conditioned to follow orders” is the default motive for conspiracy theories that people can't think of a good motive for. I heard it a lot about wearing masks during COVID.
No need for that “and”, it’s redundant.
What helps is that the automotive/gas industry lobby there isn't so effective.
You mean that they would manage to fake an injury in a real shooting? A bit too unpredictable a plan, isn't it?
They are people too, just like you and me.
For TeX, I would suggest taking a basic template and writing what you need, looking up how to do things as you go. There's a bunch of documentation on sites like Overleaf, and you can learn a lot by reading TeX StackExchange threads.
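Something like this is usually enough to start from (just a minimal sketch; the packages, title, and section are arbitrary picks, not required):

```latex
\documentclass{article}
\usepackage{amsmath}   % equation environments
\usepackage{graphicx}  % \includegraphics for figures

\title{My First Document}
\author{Your Name}

\begin{document}
\maketitle

\section{Introduction}
Plain text goes here, with inline math like $a^2 + b^2 = c^2$.

\begin{equation}
  e^{i\pi} + 1 = 0
\end{equation}

\end{document}
```

From there you just add packages and environments as you discover you need them.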
The associativity thing also doesn't make sense.
That the script could incorporate some checking mechanisms and return an “I don't know” when the LLM's answer fails some tests.
They already do some of that, but for other purposes: censoring, for example, or, as per recent news, Grok looking up Musk's opinions before answering questions, or calling an actual calculator to make math answers more accurate, and so on…
They could make the LLM produce an answer A, then look up the question on Google and ask that same LLM to “compare” answer A with the main Google results looking for inconsistencies, and return “I don't know” if it's too inconsistent. It's not a rigorous test, but it's something, and I'm sure the actual devs of those chatbots could make something much better than my half-baked idea.
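Roughly what I mean, as a half-baked sketch. llm_answer() and web_search() are made-up placeholders for whatever the real chatbot backend calls, and the 0.5 threshold is arbitrary:

```python
def llm_answer(prompt: str) -> str:
    # Placeholder: call your LLM here.
    raise NotImplementedError

def web_search(query: str) -> list[str]:
    # Placeholder: call your search API here, return result snippets.
    raise NotImplementedError

def answer_with_check(question: str, threshold: float = 0.5) -> str:
    draft = llm_answer(question)       # answer A
    snippets = web_search(question)    # main search results

    # Ask the model to compare its own draft against each snippet
    # and count how often it flags a contradiction.
    contradictions = 0
    for snippet in snippets:
        verdict = llm_answer(
            "Does this snippet contradict the answer? "
            "Reply CONTRADICTS or CONSISTENT.\n"
            f"Answer: {draft}\nSnippet: {snippet}"
        )
        if "CONTRADICTS" in verdict.upper():
            contradictions += 1

    # Too inconsistent with what's out there: refuse instead of guessing.
    if snippets and contradictions / len(snippets) > threshold:
        return "I don't know."
    return draft
```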
The chatbots are not just LLMs though. They run scripts in which some steps are queries to an LLM.
Do you have a source for that "300 students" government number?
Maybe CCleaner was fine; there were a bunch of these tools, and CCleaner is just the one whose name I remembered. I wasn't really trying to criticize CCleaner specifically.