Here Comes ChatGPT – From agriculture to healthcare, advancements in artificial intelligence (AI) have been reshaping industries in recent years. Journalism is no exception. ChatGPT (Generative Pre-trained Transformer) is a prime example of AI technology with the potential to transform the way news and information are disseminated.
As a writer, I’ve been skeptical of the newest technology and a stalwart resister. However, since I was writing about the topic, I figured I should at least give it a go.
Full disclosure: the introductory sentences in this piece were written with ChatGPT assistance. For better or for worse, the rest of the article is all me, I promise.
It’s easy to see why students, businesses, and anyone else who needs to use words succumb to the ease and convenience of the latest AI technology.
This is not the first time technology has replaced human efforts, particularly in academic pursuits. Microfiche was able to compile much more information than stacks of old magazines and periodicals. The internet replaced card catalogs, encyclopedias, and newspapers. Websites like BibMe and Bibcitation, my personal favorites, automatically generate citations, making bibliographies and footnotes much easier to produce.
But this is the first time in history that technology has gone beyond simply compiling, classifying, and sorting data, or speeding up those tasks. AI is quickly replacing the need to assimilate knowledge and draw conclusions from that information, a skill known as critical thinking.
There is a fine line between mundane tasks and work that requires a conscience, particularly in a field such as journalism, where ethical considerations matter more and more.
I can’t imagine this type of AI doesn’t make us dumber. I’m no neuroscientist, but I do know learning how to master the most basic skills, such as reading and writing, strengthens neural networks.
We’ve been on this slippery slope of cognitive decline for a while now. Internet maps, while extremely convenient, have removed the need to remember directions or to stay vigilant behind the wheel. Self-driving cars are already here. Remembering your grandmother’s favorite recipe is all but a lost art.
While I’m not advocating for a ban on technology, the speed with which we are advancing is worrisome.
Therein lies the challenge. I don’t think anyone wants to go back to the Stone Age of folded maps or cooking over fire – although both of those options sound pretty good right now – but how far is too far?
All seem to agree that boundaries need to be set for AI. But where are those boundaries, how do we draw them, and who sets them?
Senate Hearing on AI
These are the questions the Senate Judiciary Subcommittee on Privacy, Technology and the Law tried to answer this week.
Senator Richard Blumenthal, D-Conn., opened the session by presenting the challenges of AI. “Too often we have seen what happens when technology outpaces regulation. The unbridled exploitation of personal data, the proliferation of disinformation and the deepening of societal inequalities. This is not the future we want.”
The senator didn’t actually say those things, though. His opening remarks came courtesy of audio produced by voice-cloning software, with words written by ChatGPT.
Sam Altman, CEO of OpenAI, the company that created ChatGPT, offered testimony at the hearing, ironically advocating for regulation.
“I think if this technology goes wrong, it can go quite wrong. And we want to be vocal about that,” Altman said. “We want to work with the government to prevent that from happening.”
Lobbying for regulation over the very technology you created may seem counterintuitive, but Altman is doing just that.
He is not alone. Elon Musk has also warned about the perils of his own creations. Dr. Geoffrey Hinton, the “Godfather of AI,” recently left Google to warn of the dangers of AI and to call for some kind of global regulation. Years ago, Stephen Hawking claimed AI could be the worst thing that ever happened to humanity.
Altman laid out a three-point plan for government oversight:
1. Create a government agency charged with licensing large AI models and enforcing regulation over them.
2. Design a set of safety standards for AI models, including evaluations of their dangerous capabilities.
3. Require audits, by independent experts, of the models’ performance on various metrics.
The current debate around AI raises more concerns than job replacement and economic disruption. It strikes at the heart of a fundamental, existential question: what does it mean to be human?
Is life nothing more than ease, convenience, and maximum efficiency? While our fast-food culture prizes such improvements, these are not virtues. ChatGPT won’t teach patience, goodness, kindness, or empathy. And while these qualities don’t seem particularly rampant in the field of journalism, I still believe that as long as humans write the reports, there is hope.
Although everyone clamors for news outlets to report “just the facts” without opinion, what a mundane and sad world it would be without humans to tell stories.
ChatGPT could have told you the history of AI, how it benefits journalism, the challenges it faces, and what regulations are being considered. But it could not have offered perspective, insight, and, love it or hate it, my unique expression of the topic. Is this necessary for all articles, particularly hard news? No. But the more we allow AI to infuse our lives, the more like it we become. Humans will have created the very thing that destroys them.
AI may be one of the only bipartisan issues today. No matter which side of the aisle we sit on, most seem to agree that we don’t want a world with only an aisle and no sides. A world without contrast, challenge, and individuality is a world without humanity. And that is no world at all.