In a world where Siri can send text messages and AI-generated deepfake videos are making headlines, it's getting harder to tell who's the real deal and who's an imposter. We decided to put the American public to the test by seeing if they could spot the difference between a human-written text and one written by an AI. The results were... well, you'll just have to read on to find out if the machines have finally taken over!
Hopefully the previous paragraph didn’t pique your interest too much, because it might put us out of a job. It was written by ChatGPT, OpenAI’s text-generation software designed to produce natural-sounding language in response to a prompt.
So for all the teachers and professors out there (and definitely not just in the interest of our own employment prospects), we wanted to know whether Americans can tell the difference between AI- and human-generated content. We used OpenAI’s ChatGPT to generate text and DALL·E 2 to generate images, placed each AI creation beside our own content or content found online on the same subject, and asked Americans to identify the AI.
Can you ID the AI? (Answers at the bottom of this story.)
Before pulling off the Scooby-Doo mask on the AI monster, how well were Americans able to distinguish human from machine?
In three out of five categories, a slight majority of Americans could tell which content was AI-generated. Fewer (42%) identified ChatGPT’s poem, and the fabricated photo (48%) was picked about as often as the real one.
The current quality of AI output made this a pretty difficult task. At this rate, we’ll check back next year to see whether Americans can ID the AI-generated HBO drama. Then again, by that point we’ll probably have been replaced by AI ourselves.
The content generated by AI software is:
1. Quote: A
2. Photo: A
3. Test question: B
4. Art: B (the non-AI painting is “Perseus Rescuing Andromeda” (1515) by Piero di Cosimo)
5. Poem: A