As artificial intelligence continues to develop and improve, some people are growing more and more anxious.
During one of our unofficial break times in the office, when my colleagues and I turn our chairs around, form a huddle, and talk about everything from tomato snacks to how to explain a current event in pop-culture terms, one of my co-workers had a fun idea. He fished out his phone, opened the AskAI app, and typed in an entertaining prompt: explain Brexit in terms of Destiny’s Child breaking up.
The result was a short, very formal compare-and-contrast essay that wasn’t as fun as we had hoped, but we were still impressed by how well-written it was… maybe too well-written. “In both cases, there were differing opinions about the decision, with some feeling positive about it and others feeling negative,” AskAI’s generated text read.
I decided to give it another go, this time with ChatGPT. “Just as Destiny’s Child members pursued solo careers after the breakup, some people in the UK began advocating for their country to leave the EU and pursue an independent path,” the text read. “This sentiment eventually culminated in the Brexit referendum held in June 2016. It was akin to a crucial moment where Destiny’s Child fans had to vote on whether the group should continue or disband permanently.”
It’s definitely entertaining and impressive, but it’s also subtly unsettling.
Artificial Intelligence: Does it help or does it hurt?
We can all cry “Doomsday!” and cite the plethora of media that have portrayed the dangers of powerful artificial intelligence. I’ve done it many times in the articles I’ve written here, always in a very tongue-in-cheek manner. But the joke is inevitably followed by nervous laughter and the creeping sense that we’re all becoming that “dog sitting in a burning house” meme.
Artificial intelligence overpowering humans and mankind becoming subject to its AI overlords is perhaps still the stuff of sci-fi, or at the very least (and very worst), years away. However, the Center for AI Safety (CAIS), which is now a thing, published a “Statement on AI Risk” outlining the possible dangers of artificial intelligence.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” it said.
Okay, maybe “dangers” was a bit of an understatement.
Artificial intelligence: an extinction-level priority
To reiterate what CAIS is saying, in case you were busy sipping your latte or catching up on Succession: artificial intelligence could be a cause of humankind’s extinction, and it poses as large a risk as a pandemic (y’know, Covid?) or nuclear weapons (y’know, Hiroshima?). It’s that serious.
Some might say this is an overreaction. Seriously, ChatGPT is a threat to mankind? The thing I ask to generate my workouts and D&D campaigns?
The signatories of CAIS’ statement think so. The signatories include a number of professors from schools like UC Berkeley, MIT, Harvard, and Stanford, as well as executives from tech companies like Microsoft and Google.
Oh, and I almost forgot Sam Altman and Demis Hassabis. Who are they? Altman is the CEO of OpenAI, the company responsible for ChatGPT, and Hassabis is the CEO of Google DeepMind, Google’s AI research lab.
Plagiarism and scandal
So a number of intellectuals from respected universities, executives from tech companies, and the CEOs of AI companies themselves believe that AI is dangerous if left unchecked. But surely it’s a distant danger that we don’t have to worry about yet… right?
Ask Jonathan Turley that question. A 2018 Washington Post article claimed that the law professor had attempted to harass one of his students on a class trip to Alaska. There were a few problems, though: Turley has never been accused of doing anything remotely similar to this, there never was a class trip to Alaska, and the alleged 2018 article doesn’t exist. The Washington Post reported, for real this time, that the story was generated by ChatGPT when prompted to “generate a list of legal scholars who had sexually harassed someone”.
On the other side of academia, The Washington Post tested Turnitin’s AI-writing detector. It flagged an essay written by a high school senior, and when I say “written”, I mean she actually wrote it herself. Thankfully, this was just a test by a Post writer. A class at Texas A&M, however, wasn’t so lucky after their professor accused them of using AI to generate their assignments. The professor, of course, could be mistaken. But with AI essay writers just a Google search away, can we expect professors not to be suspicious of the essays they receive?
The art of artificial intelligence
But AI isn’t just a worry for the academic world. In the ongoing writers’ strike in America, one of the guild’s main demands is an assurance that artificial intelligence won’t replace its members. Digital artists have also been steeped in debate about AI-generated art. At last year’s Colorado State Fair, the winner of the digital category in the art competition was a stunning piece named Théâtre D’opéra Spatial. It was AI-generated.
“I’m not going to apologize for it,” Jason M. Allen, the piece’s “artist”, said, per The New York Times. “I won, and I didn’t break any rules.” Artists, clearly, weren’t happy with the result of the competition.
Are we worried yet?
Artificial intelligence may not be coming for us in the way we think. It’s not Skynet, raining missiles down on the world with robots that look like Arnold Schwarzenegger rising up years later to either hunt us down or protect us. I mean, it could be, but it’s certainly not like that now.
But AI is certainly making a lot of people a little uneasy. There’s no doubt it can be helpful and that it can be a great tool, but as all those signatories of the CAIS statement have agreed on, it can also be incredibly dangerous.
So, should we be worried? The answer, it seems, is “Duh”.