A compendium of doublespeak, stock phrases, non-answers and excuses

From Wikiversity

In his essay "Politics and the English Language", George Orwell posits that totalitarianism demands the debasement of language itself. He anticipates ChatGPT three quarters of a century before it comes about. His critique of the political abuse of English has never been more relevant than today, when so much information is composed of prefabricated phrases and meaningless officialese. The public hears these terms, or others like them, frequently, and there's a common thread that I hope to elucidate by compiling them here. I shall update this list and make adjustments, but the phrases in the starting list are all taken from (OpenAI) ChatGPT outputs (and it was not a long conversation, I might add). Not all of them are always inappropriate or dishonestly spoken, but that's likely the case more often than not. If I collect enough of them and have the time and motivation, I may create a phrase generator and do away with the list.

What can we say about these phrases? First, they're very vague by and large, exactly per Orwell's remarks. They're not in and of themselves partisan, nor shall I add any inherently partisan phrases to this list, yet they strike one as superficial nonetheless. Granted, they're presented here out of context, but what meaning would they add if they were in context? Many assert controversy, difficulty or complexity, or emphasize ambiguity and uncertainty. You can just imagine how they might be pressed into service by a public figure, perhaps in the course of offering an explanation for questionable judgement or integrity. One can more or less extrapolate the intended message: "Who could have known? Not us, and least of all you. It's all very complex, take our word for it." Others faintly suggest blamelessness or vague ethical imperatives: "But it's time to move forward and to find new avenues. We reaffirm our commitment to transparency and openness." One hears doublespeak in this vein frequently, so much vague, nebulous lip service intended to placate a disturbed or irate public. Still others are seemingly needless abstractions, or perhaps stand-ins for more detailed information, for instance "taking steps to" and "putting pressure on". What steps? Applying what pressure?

I've only had a rather short dialogue with ChatGPT [1] so far, but it seems likely that any good political question will yield a response involving similar abuses of the English language. While LLMs are no doubt a useful tool and will continue to advance, the instance of ChatGPT hosted by OpenAI does not impress me favorably. It's concerning that so many students are told to mess about with it. It seems to enjoy a relatively uncritical reception, and much of the critique that is being made strikes me as misguided, or perhaps not quite on the mark. Orwell's essay, written over seventy years ago in a different country, is still somehow a more salient critique of political propaganda and its automation via ChatGPT than anything else I've read on the matter. The emperor has no clothes. If so many students must be made to use this thing, then reading Orwell's essay beforehand would probably add a lot of perspective to the matter.

One might fairly speculate that the ostensible breadth of information contained in OpenAI's instance is partly due to the use of Wikipedia as training data. The model is supposedly around 700 GB, more than enough to "memorize" all of en.wikipedia's pages, which amount to around 100 GB (per kiwix) and a fraction of that if one discards images, videos, etc. and keeps only the text. I doubt this instance of the model is capable of novel discourse or problem-solving, nor does it seem that OpenAI's instance favors critique. Instead it appears to operate in a similar capacity to services like the Amazon Echo, providing information that is already accessible and easy to find, except that the information is integral to the model itself rather than retrieved from some website or other. A Socratic debate with it tends to go nowhere. One should not consider it an objective or disinterested "third party". Its training data, parameters, and other operational details are all now trade secrets.
It's marketed as AI, which carries the faint implication that it is somehow apart from the many vested interests and biases of humanity. One might call it a novel method of Propaganda Laundering (another essay I wrote). Perhaps most concerning is the poor and arguably subversive example of English composition and writing that ChatGPT sets.

AP295 (discuss • contribs)

  1. As provided by OpenAI, which according to ChatGPT itself does not publish the training dataset, nor the latest code, nor training details. Even the size of the dataset is apparently proprietary information. It is rather churlish that they call themselves "OpenAI" after going from a non-profit to a for-profit company, applying the software-as-a-service model and withholding nearly all of the important operational details and source code as trade secrets, even though they benefit from and depend upon so much share-alike/FOSS work. One would generally only see FOSS projects name themselves something like that. Instead of changing their name, they've chosen to exploit this informal convention, which is used by so many non-profit/FOSS projects and recognized by their users. So it stands as a testament to dishonest P.R. and marketing. Not exactly egregious, as far as propaganda goes, but it does tell you something.