A friend of mine, who happens to be a writer, was a little concerned about an article she read recently:
Two quotes to consider from the article:
But there’s another kind of poetry AI will have to beat—poetry as an art of brilliant accuracies, of reality redescribed in ways that bind sound to perception. And, here, AI’s deficiencies are brutally exposed. Because, to compete at this imitation game, a machine has to show that, by micro-adjustments of effect, it can draw our senses to the highest pitch of expression.
Gebru was referring to the alarming ease with which people fall under AI’s spell, a phenomenon memorably embodied in one of the earliest forays into text-based algorithms. Released in 1966 by MIT professor Joseph Weizenbaum, ELIZA was the world’s first chatbot. Designed to impersonate a therapist, it would reflect back a user’s statements with open-ended questions and prepared responses (“My mother never loved me” would trigger “Please go on” or “Tell me more.”) Weizenbaum’s goal was to explore a computer’s capacity for conversation.
Instead, he was alarmed by how completely users were taken in by ELIZA’s shallow repartee; his own secretary once insisted he leave the room so she could talk to the program in private. Credulity even extended to graduate students who had watched him build ELIZA from scratch. Sherry Turkle, a social scientist and Weizenbaum’s colleague, called it “the ELIZA effect,” which she defined as “human complicity in a digital fantasy.”
I’ve been in computing long enough that I remember writing versions of ELIZA and of Racter, the nonsense-poetry program, for fun in high school. At the time, both were exercises in string manipulation, not machine learning. I read Weizenbaum’s book back then, too. ELIZA was a single chapter in a larger work expressing his skepticism about AI, a skepticism I share.
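The trick behind ELIZA really is just string matching. As a hypothetical sketch (not Weizenbaum’s original script, and the keywords and replies here are invented for illustration), the whole mechanism fits in a few lines: scan the input for a trigger word and return a canned, open-ended reply.

```python
import random

# Keyword -> canned open-ended replies. These rules are illustrative
# inventions, not the actual ELIZA "DOCTOR" script.
RULES = [
    ("mother", ["Tell me more about your mother.", "Please go on."]),
    ("i feel", ["Why do you feel that way?"]),
    ("because", ["Is that the real reason?"]),
]
DEFAULT_REPLIES = ["Please go on.", "Tell me more."]


def respond(statement: str, rng: random.Random = random.Random(0)) -> str:
    """Return a canned reply triggered by the first matching keyword.

    Pure string manipulation: no model, no learning, no understanding.
    """
    lowered = statement.lower()
    for keyword, replies in RULES:
        if keyword in lowered:
            return rng.choice(replies)
    return rng.choice(DEFAULT_REPLIES)
```

Typing “My mother never loved me” hits the “mother” rule and gets one of its reflective replies, which is the entire illusion: the program never represents what a mother, or love, is.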
I see recommendation algorithms in search engines the same way I see them in poetry or other creative work: they produce bland output aimed at an average person who doesn’t know any better. They do not inspire curiosity about anything new.
Google used to be a lot less intelligent. In those days you could surf the web exploring its surprising and interesting corners. Today, Google feeds us content that is supposedly tailored to our interests but turns out to be bland and repetitive. I get bored quickly on YouTube when every video is on the same topic. AI can follow patterns, but it rarely surprises us or delights our curiosity.
Ironically, the very thing that makes us believe AI is creating and thinking is the reason it isn’t: we tell stories to explain the world. We observe the world and feel empathy for it. Then, to fit new information into the stories we have already told ourselves, we make a new story that accommodates both the information and the feelings we have. AI only makes decisions; it does not generate stories that explain a worldview. As I proofread this in Grammarly, it does not understand the emotionally charged words that carry the story. Instead, it wants to delete them as unnecessary.
AI cannot have empathy or generosity. It can fake them, but empathy requires a complex set of experiences that AI cannot have, and, as with a human faking empathy, the pretense becomes apparent rather quickly. AI also has a considerable problem of selection bias in the data it is fed. That data is often skewed toward white men. If you fed all the English on the internet to an AI, then given who put that information there, it could perhaps imitate Coleridge. However, people of color are so underrepresented in that data that AI could never imitate Langston Hughes or Maya Angelou.
If you are afraid of a creative takeover by machines, my point is this: keep practicing empathy and generosity, and you cannot lose to a machine.