ChatGPT and This Writer

For the last several weeks much of my reading has been about AI and ChatGPT, learning as much as I can about this new technology. Thanks to a good friend from graduate school (back in the Dark Ages of landlines and library card files), I’ve learned a lot about AI and what it could mean in areas beyond writing, such as automotive, medicine, and hard science. But the only area I’m concerned with here is the AI directed toward producing word texts—articles, essays, stories, memos, ad copy, and the like. 

When ChatGPT appeared on the scene for the general audience, in 2022, most people were caught off guard and stunned at what it could do. Writers, myself among them, were understandably horrified that a machine would soon be producing texts. What would that mean for our futures? (The writers in California are striking over this very issue.) This anxiety has not declined; some even speculate that this new technology could soon make human efforts obsolete and even lead to our end. Like the dinosaurs. 

During a webinar held by the Authors Guild on Thursday, July 20, 2023, one of the participants made some important points about language, so that if nothing else, we understood what we were talking about. ChatGPT is one application of AI. It is called a large language model, borrowing some terminology from linguistics and the work of Noam Chomsky. But this is where it becomes misleading. ChatGPT requires large amounts of data—copies of the written word—in order to produce texts on demand. The designers of the application have scoured the Internet for documents to feed into their computer. Books are found on pirate sites that are often fending off take-down notices from writers (I’ve sent some of those notices). With these texts, the machine is trained to recognize acceptable sequences of words and when their use is most relevant to the question presented. Think “keywords” lined up.

The user of ChatGPT can type in a request, and the software will type out an answer. If you want an outline for a novel, type that request with some details to guide the machine such as setting, characters, and time period, and the machine will send back an outline in conformity with your guidance. The designers of ChatGPT describe the answer as being generated, as a generative text. But this isn’t accurate, as one of the webinar participants pointed out. The machine cannot generate. The machine cobbles together bits and pieces according to patterns, and spews out the result. The text is derivative; it is derived, taken from documents fed into the machine. As the participant went on to say, the result is plagiarism of someone’s work, and in fact of many works by many someones. 

Why does this matter?

We are writers. Accuracy matters. As George Orwell demonstrated only too clearly in 1984, words lead us and determine how we think (or don’t think), and so we as writers should always be accurate in how we present our ideas. 

The work that AI designers insist on describing as training is in fact copying—copying of a copyrighted text without permission, which is an infringement of copyright law and is also known as plagiarism. The text derived and reproduced by the machine does not carry any acknowledgment of this fact.

The purpose of the Authors Guild webinar was to bring members up to date on their efforts to protect and maintain the rights of writers. They are lobbying for several goals: First, payments and damages for training/copying already done. Second, AI content clearly labeled as such (the White House meeting on AI this week includes a request for a watermark or something like it, to indicate an AI-produced text). Third, disclosure by AI companies of what work has been used already. (A list of ISBNs used has been made available, but when I tried it there was no way to search it, though a tool is sometimes available.) Fourth, expanding the right of publicity law from name, image, etc., to include style (a writer’s or artist’s style). 

The Authors Guild is also talking with publishers about contract clauses that allow the writer to deny AI companies the right to copy the text or other contents of a publication for training or any other purpose without permission.

Some writers are already adding a clause to the standard copyright statement. “All rights reserved. No part of this book . . . ” To this, authors are adding “This work may not be used in AI training without written permission by the author,” or similar statements.

ChatGPT has many supporters as well as detractors, and I continue to learn about it. And no, I haven’t tried to use it for my work, but a friend asked it for a summary of my first novel, Murder in Mellingham. The summary was atrocious, and included a character name I’ve never used. 

My friend also used ChatGPT to produce a letter requesting that OpenAI stop using my material in its training. I’ve sent the letter and am now waiting for a reply. I’ll post about it when I get one.

5 thoughts on “ChatGPT and This Writer”

  1. Hi Susan. Just returned from a cruise to this very sobering news. Good information. Something I need to reflect on and do something about. I think Angela’s information on Copyright Alliance might be a good place for me to start. Talk about insult to injury. They steal your work and feed it to a computer, so a program can put you out of business. Soulless, AI, and the thieves. Interesting that people want to make people obsolete.


    1. This is indeed sobering information, and I sometimes wonder if I really want to write about it. This morning it occurred to me that ChatGPT is “taking over the world” by stealing all our data and using it to create the products of its machines. ChatGPT should be public property, like the national parks. Anyway, Congress seems to be more alert on this than it was on social media’s arrival. But change has arrived.


  2. Great article! Have you joined the Copyright Alliance in DC? They are working hard on this one. Also, the recent new Federal law, the C.A.S.E. Act, is handy to keep around in case one needs it. That new law allows for Intellectual Property cases to be tried in Small Claims Court for a small fee of $150 with up to $30K judgment possible. The UK does this in Small Claims Court and their cases cost far less than they did in the past. The cost is a half million dollars to try an Intellectual Property case in Federal Civil Court. Our choices are: Civil Court, Small Claims Court, or Criminal Court. I personally like the Criminal Court option. Thieves never want to go to prison, nor do they want to be caught and embarrassed publicly, as some of them are famous and well-known. If enough writers sent a few criminals to prison we could make believers out of the riff-raff doing this. The riff-raff behind the AI robots would be going to prison. Imagine that. Better yet, or at least funnier, imagine a robot in an orange suit doing time.


    1. Thanks, Angela, for the information and thoughtful reply. I do know about the new Small Claims Court, and will look into the Copyright Alliance.

      This is a hot-button subject for so many people because we can see benefits but it is a new technology that seems to be getting away from the regulators and can do a lot of harm. I don’t want to be replaced by a machine, but as a former teacher I don’t want students to shrink their learning by relying on a machine no matter how sophisticated it is. The response from AI folks is that all information (this is called “knowledge,” which to me is different) will be at our fingertips. I wonder what this will do to scholarship, verified facts and presentations. We’re in quite a new world on this one.

      A robot in an orange jumpsuit. Perhaps the penalty is to be dismantled by the writers injured. ;-))

