One man's look at generative artificial intelligence
This article by Dan Polansky looks at what is called generative artificial intelligence (GenAI) and large language models (LLMs). Examples include ChatGPT, Gemini, Copilot and LLaMA.
There are benefits, there are risks and there are costs.
An immediately obvious risk is inaccuracy. GenAI can easily generate inaccurate or untrue statements. This can be mitigated by user awareness. After all, users need to learn a critical attitude toward the sources they read anyway; GenAI is far from the only offender as a source of untrue statements.
A benefit is the use of GenAI as a source of ideas to be independently examined or verified. One use of this is initial statement verification/probing: one can e.g. ask 'Is the following accurate: "Adjectives are never capitalized in English."' and have the statement corrected. However, the above-mentioned risk really needs emphasis: it seems all too easy and tempting to trust the answer without independent verification.
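Such probing can also be done programmatically. The following is a minimal sketch assuming the OpenAI Python SDK; the model name is an illustrative assumption, not a recommendation, and other providers would work analogously.

```python
# Minimal sketch of programmatic statement probing, assuming the OpenAI
# Python SDK is installed and OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

statement = 'Adjectives are never capitalized in English.'
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "user",
         "content": f'Is the following accurate: "{statement}" '
                    'If not, correct it and give a counterexample.'}
    ],
)
print(response.choices[0].message.content)
# The answer still needs independent verification, as argued above.
```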
One may wonder whether GenAI can be used as a form of psychotherapy. A remarkable feature is the limitless patience shown in answering questions, even stupid or annoying questions. One can practice asking questions, improving formulations of questions, thinking critically about the answers, etc.
GenAI can be charged with contributing to global climate change via its electricity use. The ethics of this aspect is for each prospective user to consider; governments have not prohibited GenAI for this reason and seem unlikely to do so, given that they have, for the most part, not even prohibited cryptocurrency/cryptoasset mining. A serious analysis of this aspect would include a quantitative comparison with other dispensable uses of energy, such as video streaming; a back-of-envelope sketch follows.
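To indicate what such a comparison might look like, here is a back-of-envelope sketch. Every number in it is a placeholder assumption, to be replaced with sourced estimates; only the structure of the comparison is the point.

```python
# Back-of-envelope energy comparison; ALL numbers below are placeholder
# assumptions for illustration, to be replaced with sourced estimates.
WH_PER_GENAI_QUERY = 3.0       # assumed energy per GenAI query, in Wh
WH_PER_STREAMING_HOUR = 80.0   # assumed energy per hour of video streaming, in Wh

queries = 100  # a day of fairly heavy GenAI use
genai_wh = queries * WH_PER_GENAI_QUERY
equivalent_streaming_hours = genai_wh / WH_PER_STREAMING_HOUR

print(f"{queries} queries ~ {genai_wh} Wh "
      f"~ {equivalent_streaming_hours:.1f} h of video streaming")
# With these assumptions: 100 queries ~ 300.0 Wh ~ 3.8 h of video streaming
```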
GenAI can also draw/paint/create images based on a verbal description. For this use, the label large language model seems, on the face of it, misleading or inaccurate.
Interestingly, GenAI seems rather inept at even trivial calculation, as shown in a video by Edmund Weitz (see Further reading).
Tools providing facilities complementary to GenAI include Wolfram Alpha and the Desmos calculator. It would be interesting to see what would happen if one could somehow integrate GenAI with e.g. Wolfram Alpha, that is, if GenAI delegated computational assignments to Wolfram Alpha (or an equivalent); a minimal sketch of this delegation pattern follows.
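Here is a minimal sketch of the delegation pattern. The router is hypothetical, a local arithmetic evaluator stands in for Wolfram Alpha, and the LLM side is stubbed out; no real GenAI or Wolfram Alpha API is used.

```python
# Sketch of delegating computation to an external tool: a local evaluator
# stands in for Wolfram Alpha, and the LLM is stubbed out. The routing
# pattern, not any particular API, is the point.
import ast
import operator

OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def calculate(expr: str) -> float:
    """Safely evaluate a simple arithmetic expression (Wolfram Alpha stand-in)."""
    def walk(node):
        if isinstance(node, ast.Constant):
            return node.value
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval").body)

def answer(question: str) -> str:
    # Hypothetical router: computational questions go to the tool,
    # everything else would go to the LLM (stubbed out here).
    if question.startswith("calculate:"):
        return str(calculate(question.removeprefix("calculate:").strip()))
    return "(delegate to LLM)"

print(answer("calculate: 12 * (34 + 56)"))  # 1080
```

In real systems this routing is what is usually called tool or function calling, with the model itself deciding when to emit a computational request.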
One can ask whether the label generative artificial intelligence is appropriate, that is, whether this really is an intelligence, one that is artificial and generative. Very superficially, something suggestive of human verbal intelligence is there. Moreover, given the term artificial general intelligence (AGI), we may use the term artificial intelligence much more broadly to include specialized problem/task solving, and then chess playing would be artificial intelligence. Generative artificial intelligence may even approach passing the Turing test. Paradoxically, the responses from GenAI are too fast to be human, which betrays their artificial origin. Be that as it may, GenAI does not really seem to understand what it is saying; but then, on a sinister note, too many humans speak as if they did not understand what they are saying either. And then, one may wonder whether part of the human brain does not in fact implement something like GenAI (such an idea is found e.g. here).
As for the mechanism of operation, sources seem to indicate that textual GenAI just tries to determine the next word given the preceding sequence of words (using artificial neural networks). I struggle to find this plausible and to understand how that principle could possibly produce the kind of behavior that we see, but what do I know. I would find it much more plausible if somewhere in the guts of textual GenAI there were something like the OpenCyc ontology; a toy illustration of the next-word principle follows.
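To make the next-word principle concrete, here is a toy sketch: a bigram table (invented purely for illustration) from which a weighted next word is repeatedly picked given the current one. Real LLMs condition on long contexts via artificial neural networks rather than a lookup table.

```python
# Toy illustration of next-word prediction: a bigram table (invented for
# illustration) maps each word to candidate next words with weights.
import random

BIGRAMS = {
    "the": [("cat", 0.6), ("dog", 0.4)],
    "cat": [("sat", 0.7), ("ran", 0.3)],
    "dog": [("ran", 0.8), ("sat", 0.2)],
    "sat": [("down", 1.0)],
    "ran": [("away", 1.0)],
}

def generate(word: str, max_words: int = 5) -> str:
    words = [word]
    for _ in range(max_words):
        candidates = BIGRAMS.get(words[-1])
        if not candidates:
            break  # no known continuation; stop
        nexts, weights = zip(*candidates)
        words.append(random.choices(nexts, weights=weights)[0])
    return " ".join(words)

random.seed(0)
print(generate("the"))  # e.g. "the cat sat down"
```

Even this toy produces fluent-looking fragments with no ontology behind them, which is perhaps the crux of the plausibility puzzle.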
As for plagiarism: it seems to me that GenAI generally commits plagiarism, since it does not attribute the sources from which it takes ideas. Plagiarism is not the same concept as copyright violation; it is the use of ideas obtained from sources without attribution, presenting them as one's own. Some form of plagiarism is perhaps widespread anyway; attributing all ideas to sources seems a rather stringent requirement. One defense could be this: GenAI does not claim to have any ideas of its own; it attributes all ideas to sources, albeit unspecified ones. But from what I understand, failing to specify the sources from which ideas are taken is still plagiarism. See also One man's look at copyright law.
See also
- Should Wikiversity allow editors to post content generated by LLMs?
- Wikiversity:Artificial intelligence
- Motivation and emotion/Assessment/Using generative AI
Further reading
- Generative artificial intelligence, wikipedia.org
- ChatGPT und die Mathematik by Edmund Weitz, youtube.com (in German)
- ChatGPT und die Logik by Edmund Weitz, youtube.com (in German)