AI transcription is evidently unable to cope with baseball game announcing, either. Try watching with closed captions sometime. It's dismaying.
AI has an equally bad time with the proper names of the players--granted, the MLB roster is a polyglot bunch, and there's such a disproportionate number of esoterically named Anglos that there's probably some cosmically significant linkage between weird names and aptitude for the sport. But really--"One Soda", for Juan Soto?
I won't believe the state of the art in AI has meaningfully advanced until it can get the basics right. But that would require context recognition. And that implies the presence of an actual point of view, in order to provide the perspective necessary to do the contextualizing. A bit of a sticky wicket there, to resort to a cricket metaphor. (You get my meaning, ChatGPT? Is the continuous information vacuum of your LLM ok with that?)
For what it's worth, newscast transcription is often even worse. C-SPAN transcripts (typically posted underneath the video clips of their broadcast archive) have a way of derailing into word salad that can be terribly frustrating, if sometimes unwittingly comedic. And of the major news networks, it's strange to realize that Fox has a much better caption transcribing program than PBS. PBS captioning only seems to be able to transcribe one sentence out of every three that's uttered. Other news networks' closed captions still have abundant problems with accuracy--sometimes one might even wonder if they're doing it on purpose--but at least they keep up with the conversation.
Granted, AI seems to be able to call a tennis match fairly accurately. Perhaps it's aided by fond memories of witnessing Pong games. But I'm probably anthropomorphizing with that speculation. It's a particularly tempting error when considering this topic, no?
Parenthetically, whatever the latest (mandatory) "update" has done to my laptop computer--who really knows--it's certainly played havoc with the right-click functions of my mouse.
As someone who works in speech recognition, I can assure you that working on name (and entity) recognition is an area of active development. We have options that improve recognition, but it does require external input (namely, knowing who the participants are).
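To give a toy illustration of what that external input can buy you, here's a minimal sketch (invented for this comment, not taken from any production system) that snaps garbled two-word spans onto a supplied roster. The roster, the cutoff, and the sample sentence are all assumptions made for the sake of the example; real systems bias the recognizer itself rather than patching its output after the fact.

```python
import difflib

# Hypothetical roster standing in for the "external input" -- the recognizer
# (or a post-processor, as here) has to be told who is on the field before it
# can prefer their names over similar-looking words.
ROSTER = ["Juan Soto", "Aaron Judge", "Mookie Betts"]

def correct_names(transcript: str, roster: list[str], cutoff: float = 0.7) -> str:
    """Snap two-word spans of a transcript onto closely matching roster names.

    A toy post-processing pass based on spelling similarity. Sound-alike
    confusions such as "One Soda" for "Juan Soto" would need phonetic
    matching rather than spelling similarity.
    """
    lowered = {name.lower(): name for name in roster}
    words = transcript.split()
    out, i = [], 0
    while i < len(words):
        if i + 1 < len(words):
            span = f"{words[i]} {words[i + 1]}".lower()
            hit = difflib.get_close_matches(span, list(lowered), n=1, cutoff=cutoff)
            if hit:
                out.append(lowered[hit[0]])  # restore the roster spelling
                i += 2
                continue
        out.append(words[i])
        i += 1
    return " ".join(out)

print(correct_names("Mooky Bets doubles to left field", ROSTER))
# -> "Mookie Betts doubles to left field"
```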
That being said, however, I agree that it's most unwise to trust the output of an LLM uncritically. (Emphasis on 'uncritically'.)
I've long argued that LLMs are the perfect example of bullshit machines; the LLM has no idea whether its output is true or false, and doesn't care (indeed, can't care).
I'm actually quite bullish on some of the uses of AI. I'm confident that it has the potential to help find the most effective and economical solutions when planning the best ways to build infrastructure and energy distribution, for example. It can game the scenarios more thoroughly and more quickly, and present an array of alternatives to address such challenges. Reviewing them and making the decisions is the responsibility of humans, of course.
For instance, I'd like to see what AI comes up with when doing a comparative analysis of nuclear power generation versus offshore wind and solar alternatives, along with comparative projections for promising but unproven technologies like deep geothermal or hydrogen power generation--a full power-plant-to-plug comparison, including the challenges of storage and transmission. If the AI is programmed with the ability to do a realistic assessment of the critical parameters, that sort of analysis might be quite helpful in mapping out energy investment. To me, the strong suit of AI is providing advice on challenges of planning, engineering, and research and development--questions where human bias is only relevant to the extent that we're included as a population of biological organisms, in relation to baseline material requirements and ecological impacts (both local and planetary).
It's my impression that AI innately possesses a kind of oblivious impartiality that can be either a help or a hindrance. That feature can conceivably become a hindrance to the point of posing an existential threat, which seems to be the focus of most of the current discussion. But there's an upside to an egoless machine that hasn't been schooled into a rut. It has the potential to think outside the box to come up with solutions--or to flag some of the problems overlooked by the humans thinking inside the box of a slipshod or obsolete paradigm.
The realm of human society and communication presents a lot of extra challenges to AI. It's also the realm where AI seems to be getting all the buzz, even though I don't think it's the most suitable wheelhouse for AI. Human input is crucial to train AI for productive operation--and that's the way to think about AI in general: not as a threat, but as a tool that requires humans both for the initial programming input and for making the principal executive decisions based on the AI's output. Calculators have extraordinary capabilities too, and I'm not threatened by them.
Something I'd really like to see: programming AI with the principles of informal logic, in order to detect the logical fallacies in the arguments of both sides of any given debate. I'm not convinced that AI would be up to doing a competent job of that; I'd need to review its assessments. But on the other hand, I don't see why the task would be beyond its capabilities. Egoless impartiality is an absolute advantage when finding the logical flaws in a given argument.
It would be funny if AI were able to develop such a knack for accurately detecting logical fallacies that it could be turned loose in comment sections--or Twitter--to referee both sides in a debate on a political question. What's really lacking in social media is not some preemptive censorship capability, but a society of humans with sufficient education in logic and fallacy detection to think for themselves. AI might help school people on the rules of that game. I get how easily AI can be manipulated as a propaganda tool--but how much attention has been given to training AI to detect the fallacies exploited by propaganda?
1) LLMs are as subject to GIGO as any other program; if the training data is biased, the outputs will be biased as well.
2) LLMs are *already* biased; almost all the public-facing LLMs are trained (via reinforcement learning, etc.) to not provide outputs that annoy leftists.
3) LLMs *don't think*. LLMs provide reasonable completions from prompts. It's a category error to believe that they can think.
I know. But that problem seems to me to loom larger in the social realm than in a discipline like, say, calculating where to site cofferdams and pumped-storage dams in order to respond to climate conditions in regions that are trending toward less snowpack and more rainfall.
LLMs are also biased by the defaults of visual media toward regurgitating popular stereotypes. I find them revealing in that way; it's like a burlesque of the superficiality of prevailing popular attitudes. People stereotype--and conform to stereotype--out of a desire to keep the elements of social existence dumbed down and simple. AI does Simple all too well. Not only does it mimic popular delusions and the madness of crowds, it puts them into bold relief. The problem with programming instructions designed to defeat the tendency is that the only effective way of counteracting the robotism of stereotyping is to view situations and individuals idiosyncratically, which requires thinking, which AI doesn't do. Imposing an overlay of formulaic instructions as a servo intended to counteract the conditioning influences of human history and social patterning is, at best, a clumsy and ineffective kludge.
I hold out the hope that AI can be programmed to detect logical fallacies simply by giving it a precise set of instructions about how to apply the principles. I don't think that requires capabilities beyond its reach. On the off chance that readers might not know what I'm referring to, here's an example: https://www.logicalfallacies.org/
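To sketch what I mean by a precise set of instructions, here's a deliberately crude toy in Python. The cue patterns and the sample sentence are made up for illustration--keyword matching like this isn't real fallacy detection; the point is only that the criteria can be written down explicitly, handed to the machine, and its findings reviewed by humans.

```python
import re

# Toy "precise set of instructions": each fallacy gets a definition plus a
# crude textual cue. A serious attempt would hand definitions like these to
# a model as instructions and have humans review whatever it flags.
FALLACY_RULES = {
    "ad hominem": {
        "definition": "attacking the person rather than the argument",
        "cue": re.compile(r"\b(idiot|liar|moron|shill)\b", re.IGNORECASE),
    },
    "appeal to popularity": {
        "definition": "treating wide belief as proof of truth",
        "cue": re.compile(r"\b(everyone knows|everybody agrees)\b", re.IGNORECASE),
    },
    "false dilemma": {
        "definition": "presenting only two options when more exist",
        "cue": re.compile(r"\beither\b.+\bor\b.+\bno other (choice|option)\b",
                          re.IGNORECASE),
    },
}

def flag_fallacies(argument: str) -> list[str]:
    """Return the names of the fallacies whose cue patterns appear in the text."""
    return [name for name, rule in FALLACY_RULES.items()
            if rule["cue"].search(argument)]

print(flag_fallacies("Everyone knows my opponent is a liar, so his plan must fail."))
# -> ['ad hominem', 'appeal to popularity']
```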
I don't see how the process of detecting logical fallacies requires any more sense of self-aware consciousness than applying Euclid's postulates of geometry.
If the people, or the powers that be, want logic, we'll have logic--and they almost certainly don't...
I don't pretend to know about that one way or the other.
In my observation, many people are perfectly capable of logical fallacy detection. The problem is that all too many of them only apply it to the opposing position, not their own.
Ego influences are probably the #1 obstacle to learning and clear thinking. The ego is supposed to work as a guardian, but it has a way of turning into a jailor. It plays a game of deception, insisting that admitting to being wrong about something--about anything--is a "defeat" that exposes weak character and intellectual inferiority.
Bollocks. Anyone who insists on being right all the time is a lot less intelligent than they otherwise would be.
That's why I'd cheer the entry of AI into the game of logical fallacy detection, if it can be successfully accomplished. AI has no ego compulsion to cling to a given position, no agenda, no points to defend. No ego concerns about how others may judge it for its findings. (I do get that AI can be programmed to mimic such concerns. But it isn't an innate priority.) AI that demonstrates high-functioning performance in handling informal logic and fallacy detection can withstand any level of social heat in that regard, because anyone who throws a fit over having their logical fallacies detected is merely indulging in more logical fallacy. AI don't be caring. And its findings are always reviewable--their accuracy can be assessed by the humans analyzing its output. Competent AI should be able to achieve results that command general consensus. The defining criteria of logical fallacies are unambiguous, and typically leave little room for argument. (With a few exceptions, like "slippery slope" arguments--some slippery slope arguments are fallacies, but other times, slopes really are as slippery as they're made out to be, so it's a judgement call. But that's an exception--for most logical fallacies, if they're accurately detected, what gets laid down stays there.)
The detection of logical fallacy content doesn't automatically discredit a position. But it does mean that the argument has to be reframed without relying on the fallacy for support.
As a replacement for the chattering class, unfortunately, LLMs aren't good enough.
Sigh.