AI experts on whether you should be “afraid” of ChatGPT

ChatGPT is an artificial intelligence that writes for you, any type of writing you want – letters, song lyrics, research papers, recipes, therapy sessions, poems, essays, drafts, even software code. And despite its clunky name (GPT stands for Generative Pre-trained Transformer), within five days of its launch, more than a million people were using it.
How easy is it to use?
Try entering, “Write a limerick about the impact of artificial intelligence on humanity.”
Or how about, “Tell the story of Goldilocks in the style of the King James Bible”?
Microsoft has announced that it will incorporate the program into Microsoft Word. The first books written by ChatGPT have already been published. (Well, self-published, by people.)
“I think this is huge,” said Professor Erik Brynjolfsson, director of Stanford University’s Digital Economy Lab. “I wouldn’t be surprised if, 50 years from now, people look back and say, wow, that was a really significant set of inventions that happened in the early 2020s.”
“Most of the American economy is knowledge and information work, and that’s who will be most affected by this,” he said. “I would put people like lawyers at the top of the list; obviously, a lot of copywriters, screenwriters. But I like to use the word ‘affected’ rather than ‘replaced,’ because I think if it’s done right, it’s not going to be AI replacing lawyers; it will be AI lawyers replacing non-AI lawyers.”
But not everyone is thrilled.
Timnit Gebru, a researcher who specializes in the ethics of artificial intelligence, said: “I think we should be really afraid of this whole thing.”
ChatGPT learned how to write by examining millions of written texts on the Internet. Unfortunately, believe it or not, not everything on the internet is true! “It’s not taught to understand what’s fact, what’s fiction or anything like that,” Gebru said. “It will just parrot back what was on the internet.”
Of course, the writing ChatGPT spits out sometimes sounds authoritative and confident, but is completely false:
And then there’s the problem of deliberate misinformation. Experts worry that people will use ChatGPT to flood social media with fake articles that sound professional, or flood Congress with letters that sound authentic.
Gebru said, “We should understand the harm before we spread something everywhere, and mitigate those risks before we put something like this out there.”
But no one may be more alarmed than teachers. And here’s why:
“Write an English essay on race in To Kill a Mockingbird.”
Some students are already using ChatGPT to cheat. No wonder ChatGPT has been called “The End of High School English,” “The End of the College Essay,” and “The Return of the Handwritten Essay in Class.”
Someone using ChatGPT doesn’t need to know structure, syntax, vocabulary, grammar, or even spelling. But Jane Rosenzweig, director of the Harvard Writing Center, said, “The part that I also worry about is the thinking. When we teach writing, we teach people to research an idea, to understand what other people have said about that idea, and to figure out what they themselves think about it. A machine can do the part where it puts ideas on paper, but it can’t do the part where you come up with the ideas.”
School systems in Seattle and New York have banned ChatGPT; so have some colleges. Rosenzweig said: “The idea that we would ban it is pushing against something bigger than all of us. Soon it will be everywhere. It will be in word-processing programs. It will be on every machine.”
Some educators are trying to figure out how to work with ChatGPT, letting it produce a first draft. But Rosenzweig counters, “Our students would stop being writers and become editors.”
“My initial reaction to that was, are we doing this because ChatGPT exists? Or are we doing this because it’s better than other things we’ve already done?” she said.
OpenAI, the company behind the program, declined “Sunday Morning”’s request for an interview, but offered a statement:
“We don’t want ChatGPT to be used for misleading purposes, in schools or anywhere else. Our policy states that when sharing content, all users should clearly indicate that it is generated by artificial intelligence ‘in a way that no one could reasonably miss or misunderstand,’ and we are already developing a tool to help anyone identify text generated by ChatGPT.”
They are talking about an algorithmic “watermark,” an invisible flag embedded in ChatGPT’s writing that can identify its source.
There are ChatGPT detectors, but they probably won’t stand a chance against the upcoming new version, ChatGPT 4, which is said to be trained on 500 times as much text. People who have seen it say it is miraculous.
Erik Brynjolfsson of Stanford said, “A very senior person at OpenAI basically described it as a phase change. You know, it’s like going from water to steam. It’s just a whole ’nother level of ability.”
Like it or not, writing AI is here to stay.
Brynjolfsson suggests we embrace it: “I think we’re going to have potentially the best decade of creative flourishing that we’ve ever had, because a whole bunch of people, a lot more people than before, will be able to contribute to our collective art and science.”
But maybe we should let ChatGPT have the final say.
Story produced by Sarah Kugel. Editor: Lauren Barnello.