I know. But that problem seems to me to loom larger in the social realm than in a discipline like, say, calculating where to site coffer dams and pumped storage dams in order to respond to climate conditions in some geographic regions that trend toward less snowpack and more rainfall.
LLMs are also biased by the defaults of visual media to regurgitate popular stereotypes. I find it revealing in that way; it's like a burlesque of the superficiality of prevailing popular attitudes. People stereotype, and conform to stereotype, out of a desire to keep the elements of social existence dumbed down and simple. AI does Simple all too well. Not only does it mimic popular delusions and the madness of crowds, it puts them into bold relief. The problem with programming instructions designed to defeat the tendency is that the only effective way of counteracting the robotism of stereotyping is to view situations and individuals idiosyncratically, which requires thinking, which AI doesn't do. Imposing an overlayer of formulaic instructions intended to counteract the conditioning influences of human history and social patterning, acting as a servo, is at best a clumsy and ineffective kludge of a fix.
I hold out the hope that AI can be programmed to detect logical fallacies simply by giving it a precise set of instructions about how to apply the principles. I don't think that requires capabilities beyond its means. On the off chance that readers might not know what I'm referring to, here's an example: https://www.logicalfallacies.org/
I don't see how the process of detecting logical fallacies requires any more sense of self-aware consciousness than applying Euclid's postulates of geometry.
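To make that concrete, here's a minimal sketch of what such a "precise set of instructions" might look like in practice. It's purely illustrative: the abridged taxonomy, the FALLACY_TAXONOMY names, and the build_fallacy_check_prompt function are my own hypothetical constructions, not anyone's actual implementation. The sketch only assembles the checklist an LLM (or a human reviewer) would be handed; wiring it to a real model is left out.

```python
# Hypothetical sketch: turning a fallacy taxonomy into explicit,
# checklist-style instructions for an LLM (or a human reviewer).
# The taxonomy entries are abridged paraphrases; see
# https://www.logicalfallacies.org/ for fuller definitions.

FALLACY_TAXONOMY = {
    "ad hominem": "Attacks the person making the argument rather than the argument itself.",
    "straw man": "Misrepresents an opposing position, then refutes the misrepresentation.",
    "appeal to popularity": "Treats widespread belief as evidence that a claim is true.",
    "false dilemma": "Presents only two options when more exist.",
    "slippery slope": "Claims one step inevitably leads to an extreme outcome without support.",
}

def build_fallacy_check_prompt(argument_text: str) -> str:
    """Assemble a precise instruction set for checking an argument against each fallacy."""
    rules = "\n".join(
        f"- {name}: {definition}" for name, definition in FALLACY_TAXONOMY.items()
    )
    return (
        "Review the argument below against each fallacy definition, one at a time.\n"
        "For each fallacy, answer: present or absent, and quote the exact passage if present.\n"
        "Do not judge whether the conclusion is true; only whether the reasoning is sound.\n\n"
        f"Fallacy definitions:\n{rules}\n\n"
        f"Argument:\n{argument_text}\n"
    )

if __name__ == "__main__":
    sample = "Everyone knows this policy works, and only a fool would say otherwise."
    print(build_fallacy_check_prompt(sample))
```

The design choice here is the point of the original claim: the detection step is driven by enumerated definitions applied one at a time, more like working through Euclid's postulates than exercising self-aware judgement.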
If the people or powers want logic we’ll have logic, as they almost certainly don’t...
I don't pretend to know about that one way or the other.
In my observation, plenty of people are capable of applying logical fallacy detection. The problem is that all too many of them only apply it to the opposing position, not their own.
Ego influences are probably the #1 obstacle to learning and clear thinking. The ego is supposed to work as a guardian, but it has a way of turning into a jailor. It plays a deception, insisting that admitting to being wrong about something, about anything, is a "defeat" that exposes weak character and intellectual inferiority.
Bollocks. Anyone who insists on being right all the time is a lot less intelligent than they otherwise would be.
That's why I'd cheer the entry of AI into the game of logical fallacy detection, if it can be successfully accomplished. AI has no ego compulsion to cling to a given position, no agenda, no points to defend. No ego concerns about how others may judge it for its findings. (I do get that AI can be programmed to mimic such concerns. But it isn't an innate priority.) AI that demonstrates high-functioning performance in handling informal logic and fallacy detection can withstand any level of social heat in that regard, because anyone who throws a fit over having their logical fallacies detected is merely indulging in more logical fallacy. AI don't be caring. And its findings are always reviewable; the accuracy can be assessed by the humans analyzing its output. Competent AI should be able to achieve results that are granted by general consensus. The defining criteria of logical fallacies are unambiguous, and typically leave little room for argument. (With a few exceptions, like "slippery slope" arguments: some slippery slope arguments are fallacies, but other times, slopes really are as slippery as they're made out to be, so it's a judgement call. But that's an exception; for most logical fallacies, if they're accurately detected, what gets laid down stays there.)
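As a hedged illustration of what "reviewable by general consensus" could mean in practice, the short sketch below compares hypothetical AI fallacy labels against labels from human reviewers and reports where they agree or diverge. The sample data and the agreement_report function are invented for the example; nothing here reflects an existing tool.

```python
# Hypothetical sketch: reviewing AI fallacy findings against human judgments.
# Both label sets below are invented sample data for illustration only.

ai_findings = {
    "argument_1": {"ad hominem"},
    "argument_2": {"appeal to popularity", "false dilemma"},
    "argument_3": set(),  # AI reported no fallacies here
}

human_consensus = {
    "argument_1": {"ad hominem"},
    "argument_2": {"appeal to popularity"},
    "argument_3": {"straw man"},
}

def agreement_report(ai: dict, humans: dict) -> None:
    """Print, per argument, which fallacy labels the AI and human reviewers agree on."""
    for arg_id in sorted(ai):
        agreed = ai[arg_id] & humans[arg_id]
        ai_only = ai[arg_id] - humans[arg_id]
        missed = humans[arg_id] - ai[arg_id]
        print(f"{arg_id}: agreed={sorted(agreed)} "
              f"ai_only={sorted(ai_only)} missed={sorted(missed)}")

if __name__ == "__main__":
    agreement_report(ai_findings, human_consensus)
```

The point of the comparison is the one made above: the AI's output isn't an oracle, it's a finding that humans can audit line by line and either ratify or overturn.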
The detection of logical fallacy content doesn't automatically discredit a position. But it does mean that the argument has to be reframed without relying on the fallacy for support.