It has little to no effect on the outcome, in this hypothetical. The way the conclusion came to be is different, but are you making an argument that the outcome is different on the ideal plane or something?
If you use a small hammer, you can hammer a nail in.
If you use a big hammer, you can hammer a nail in.
Hypothetically, the person buying the nailed-together wood will see no difference.
But for the worker, using the larger hammer will have caused a different physical change in their hammer-wielding arm, versus if they had used the smaller hammer.
Or do you disagree?
Sure, depending on which tool you use for a job, there is a different process of working. Some nails and materials work better with bigger hammers, some are fine with smaller hammers. The process of coming up with equivalent end-products while reducing labor-time and reducing injury is a part of improving the productivity of labor, which can, in socialism, be used to provide what people need while minimizing working hours.
So you agree that it is materially different for the person using the AI. Great; you had been arguing against that for hours.
Now, if an AI is meant to replace or automate cognitive/decision-making processes, then would you reckon that the effect on the user would be more like a musculoskeletal change, like in the hammer example, or more like a mental change (keep in mind that the brain is a material thing too)?
No, I did not argue against the idea that different tools are used differently to produce similar results. This is another strawman, something you seem fond of.
As for AI: if it’s image generation, the user puts in a prompt, evaluates whether or not the output fits what they want, then adjusts. In the case of, say, a wood texture for a game, this is pretty simple: does it fit or not? If it’s for summarizing, generating names, etc., it isn’t a substitute for cognition, and it can actively backfire, like when someone asks AI to spit out code that ends up being buggy. Just because you can use AI to do something doesn’t mean it’s the optimal way.
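To make the "buggy code" point concrete, here's a hypothetical sketch (Python; my own illustration, not output from any specific model) of the kind of subtle bug that reads fine at a glance and only shows up in use, the mutable-default pitfall:

```python
# Looks correct, and works on the first call — but the default list
# is created once at function definition and shared across every call.
def add_tag(tag, tags=[]):
    tags.append(tag)
    return tags

first = add_tag("wood")
second = add_tag("stone")
print(second)  # ['wood', 'stone'] — not the ['stone'] a reader would expect
```

Code like this passes a quick "does it fit or not?" check the same way a generated texture might, which is exactly why evaluating it demands real cognition rather than replacing it.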
That is not what I said. I said you’ve been arguing that a process has no effect on the person doing the process. I’ve probably said something like ‘the process has an effect on the user’ a half dozen times now, and you’ve either sidestepped or ignored the issue because you objected to the terminology rather than meeting me halfway by trying to see if there is any truth there.
Anyway, I appreciate your having helped me sharpen my critique of AI into something that can be argued in purely materialist terminology, as opposed to my more oblique approach.
At this point though, I think it’s clear where I was going with the argument, and if you don’t want to engage with that in good faith, then there’s not much point wasting my time trying to convince you of something you don’t want to be convinced of.
I have never said that a process has no effect on the person performing it. You still aren’t adhering fully to materialism, even if you have improved. It’s not about bad faith; I’ve been arguing in good faith this entire time, even as you’ve openly mocked me.
Can you see where my argument was headed, or no?
The core of your argument seems to be that using AI, under all circumstances, is cognitively damaging. You also call it a process and not a tool, but all tools have an associated process, including correct and incorrect processes. A hammer can be misgripped, causing strain on muscles and thus pain. You can also use a hammer for the wrong purpose, like driving a screw instead of a nail. You can kinda do it, but it’s less efficient at best and harmful at worst. AI is similar.