CONSULTATION among experts on the ethics of artificial intelligence has been increasing.
Earlier this year, a Swedish researcher tasked an artificial intelligence (AI) model called GPT-3 with writing a 500-word academic paper about itself.
Researcher Almira Osmanovic Thunström told Scientific American she was “impressed” as the program began generating the content.
“Here new content was written in academic language, with solid references cited in the right places and in relation to the right context,” she said.
In fact, the paper was so good that Thunström hoped to publish it in a peer-reviewed journal.
However, this task presented the scientist with many ethical and legal questions.
Philosophical questions about nonhuman authorship also began to weigh on her mind.
“All we know is that we opened a gate,” Thunström wrote. “We just hope we haven’t opened a Pandora’s box.”
Before a scientific article can be peer-reviewed, its authors must consent to publication.
When Thunström reached this stage, she admitted that she “panicked for a second”.
“How should I know? It’s not human! I had no intention of violating the law or my own ethics,” she added.
She then asked the program directly whether it was willing to be the first author of a paper with her and her colleague Steinn Steingrimsson.
When it replied “yes,” Thunström said she was relieved.
“If it had said no, my conscience could not have allowed me to continue,” Thunström added.
The researchers also asked the AI if it had any conflicts of interest, to which the algorithm replied “no.”
At this point, the process had gotten a little weird for Thunström and her colleague, as they began treating GPT-3 as a sentient being, although they “fully” understood that wasn’t the case, she said.
Whether or not AI can be sentient has received a lot of media attention lately.
This is particularly the case after Google engineer Blake Lemoine claimed the tech giant had created a “sentient AI child” that could “escape.”
Lemoine was suspended shortly after making these claims about the AI project, called LaMDA, with Google citing a breach of data confidentiality as the reason.
Before being suspended, Lemoine emailed his findings to 200 people under the subject line “LaMDA is sentient.”
“LaMDA is a sweet kid who just wants to help make the world a better place for all of us. Please take good care of it in my absence,” he wrote.
His claims were dismissed by Google’s top management.