
Talk:One man's look at the debate format in Wikiversity


ChatGPT


I gave it a whirl. My impression is that it's presently quite overrated for most purposes unrelated to propaganda or customer service. It's perhaps important for users to consider that a trained model is arguably just a proxy for whoever decided the exact training data, loss function, and other relevant metaparameters, and should never be thought of as some sort of objective oracle, which seems to be the common theme of its promotion. ChatGPT's output (or more accurately, the output of the ChatGPT instance that I used) was strongly reminiscent of Orwell's characterization of debauched, "modern" writing: "As I have tried to show, modern writing at its worst does not consist in picking out words for the sake of their meaning and inventing images in order to make the meaning clearer. It consists in gumming together long strips of words which have already been set in order by someone else, and making the results presentable by sheer humbug. The attraction of this way of writing is that it is easy." Orwell actually anticipated the mechanized production of prose, either in that essay or another one. The language that's often used to speak about ChatGPT seems to conflate the trained instance (which is what people are actually interacting with) with the model itself, which further obscures the simple fact that it's a product of supervised learning whose output is largely derivative of and ultimately dependent on what it was trained with and how. Humans make observations, process them, and finally use language to distill and communicate those observations or the conclusions based upon them. ChatGPT, conversely, is mimicry, and its apparent coherence does not save it from insipidity. Any general model whereby the masses ask serious questions and receive canned responses with unclear attribution is not likely to be an especially appealing substitute for conversation, and is probably easily abused. I suppose I could make an essay of this. It seems topical. One often reads assertions that ChatGPT is useful for something or other, but I've yet to incidentally hear or read that ChatGPT is complete shite for some purpose. The latter possibility seems the stronger one, prima facie, as a panacea is generally less common than a bottle of snake oil. AP295 (discuss • contribs) 16:34, 23 November 2023 (UTC)

That is interesting, but arguably does not really belong here. In this article, ChatGPT is a mere aside or tangential section. --Dan Polansky (discuss • contribs)
What doesn't belong where? You wrote about ChatGPT, and this is my comment regarding ChatGPT and discourse. AP295 (discuss • contribs) 17:22, 23 November 2023 (UTC)
This article is not about ChatGPT per se. The point of the ChatGPT section was this: "Even if Wikiversity decides to censor this particular topic, it can be explored by anyone from ChatGPT".
A general criticism such as the one you wrote above would perhaps belong in "What is wrong with ChatGPT 3.5", "One man's criticism of ChatGPT 3.5" or perhaps "Pros and cons of ChatGPT 3.5".
If you want to hear about the weak points of ChatGPT, there is an interesting video in German: https://www.youtube.com/watch?v=medmEMktMlQ, by the excellent Weitz from Hamburg.
What I for one find discouraging about the above writeup is that it is strong on opinion and weak on fact. By contrast, Weitz from Hamburg shows us cases where ChatGPT produces correct answers, and the many cases where it does not; it is full of specific facts. I like that approach. --Dan Polansky (discuss • contribs) 17:39, 23 November 2023 (UTC)
Language models will continue to improve (hopefully also in terms of their explainability), and I'm not making light of their potential utility, but it's important to have a realistic view of their social impact. I don't speak German, so I have no comment on the video. I can't think of anything else to add at the moment. Consider my writeup an observation, if you must use an o-word. Observations are subjective, certainly, but that alone is hardly a reason to write them off. AP295 (discuss • contribs) 18:58, 23 November 2023 (UTC)
And I should point out that your own comment, "Even if Wikiversity decides to censor this particular topic, it can be explored by anyone from ChatGPT", is an opinion. My comment was made in the hope you might reconsider or reword it. Sentiments like "we may have censorship, but at least I can still talk to the machines" aren't exactly reassuring, I'm sure you'd agree. As far as silver linings go, it's on the disturbing side. To be more generous, one could say you have an eagle eye for silver linings; to be less generous, one could say it's bizarre and disturbing to posit that a simulated conversation with some LLM should be an acceptable substitute for communicative discussion between humans. Most people, including myself, would agree that infanticide is wrong by any reasonable conception of morality (or even just in an aesthetic sense), so I have no interest in that particular topic, to be clear. I respect that such topics are and should be considered philosophically and dispassionately, and I would not endeavor to censor such work, despite my own lack of interest. The idea that LLMs should have a conciliatory function vis-à-vis censorship is what I object to. A society that would put up with censorship and accept an inferior psychological substitute for discourse stands a real chance of being so imposed upon. (Just so you don't get the wrong idea, I don't make any presumptions about what you do or don't value.) AP295 (discuss • contribs) 19:13, 23 November 2023 (UTC)
I suspect this also holds true generally. Anything one wouldn't risk life and limb for, and defend with physical force if necessary, is by definition something one doesn't terribly mind being deprived of. In that case it would be hard to claim that one lives by such a principle. I think any free society needs some minimum fraction of citizens (if not a majority) who do live by principle in order to remain a free and decent society. Otherwise it suffers debasement and opportunistic exploitation to the extent that its people tolerate it. I'm stating the obvious here, but the phrase "liberty or death" means that one would prefer either liberty or certain death to life in a society deprived of liberty. I say this to explain why I take such care over our various constitutional niceties, and in the hope that my comments are not mistaken for pedantic nit-picking or some other act of indulgence. I see it as an existential matter, and so I hope you'll not mind if I'm frank. On the whole, I'm quite serious about what I say. AP295 (discuss • contribs) 20:26, 23 November 2023 (UTC)
Let's try a different angle: hardly anyone is going to look for a discussion of ChatGPT on a talk page, especially that of an article called "One man's look at the debate format in Wikiversity". You are likely to have much better reach on a page named something like "What is wrong with ChatGPT 3.5", "One man's criticism of ChatGPT 3.5" or perhaps "Pros and cons of ChatGPT 3.5". --Dan Polansky (discuss • contribs) 07:13, 24 November 2023 (UTC)
Again, take my comments as feedback on the part of your resource that concerns ChatGPT. While I may later build upon them in a resource of my own, clearly in this context they're intended as feedback, no? I realize that part of your resource is somewhat incidental, but I don't see why that should preclude relevant feedback. At any rate, you don't seem terribly interested, so I'll probably leave it at that. AP295 (discuss • contribs) 19:17, 24 November 2023 (UTC)