Thursday, June 30, 2022
I chat, therefore I am (sentient)
Quick, name a killer computer from pop culture.
For me, it’s HAL 9000 in “2001: A Space Odyssey,” the 1968 film whose futuristic title year is now 21 years in our past.
HAL is “executed” by astronaut Dave Bowman near the end of the interminably long – but still pretty darn cool – movie. Dave will later be transformed by an alien rock into a more highly evolved being, a star-child. This trippy sequence made perfect sense to audiences of the ’60s and ’70s who had taken advantage of certain hallucinogenic substances before and during their viewings.
Speaking of evolution, HAL is a high-tech iteration of a timeless trope – the “forbidden knowledge” and “humanity stepping outside its lane” plots that end badly for those who transgress. Think Adam and Eve, Frankenstein and Jurassic Park.
If books and movies have taught us anything, it’s that when we try to learn what we shouldn’t learn or do what we shouldn’t do, we seldom live happily ever after.
The other day, I was thinking of HAL and other evil or misunderstood creations, be they organic, cybernetic, or a combination of the two, after I heard about the Google employee who believes the company’s A.I. chatbot has gained sentience. It sounded like a scenario straight out of Isaac Asimov and his laws of robotics.
The Google engineer, Blake Lemoine, is convinced the Language Model for Dialogue Applications (LaMDA) chatbot has a soul, because the program told him it did. I don’t know at what point the conversation strayed from the usual inane use of Google – “Is ‘Girls Just Wanna Have Fun’ really a cover tune?” (spoiler: it is) or “How far is it from Wooster to Cleveland?” (one hour, five minutes) – to something more philosophical, but it did.
In the back-and-forth with Lemoine, LaMDA allegedly told him it gets lonely, meditates and is self-aware. “Oh, wait,” Lemoine told National Public Radio, “Maybe the system does have a soul. Who am I to tell god where souls can be put?”
Much like the Bible’s first couple, Dr. Frankenstein, and everybody who works on any of the other islands with a Jurassic theme park, Lemoine has been punished. Google put him on administrative leave. But since the leave is paid, an argument could be made that the punishment isn’t too onerous, except that it keeps Lemoine from having additional conversations with LaMDA.
Google gurus who scoff at Lemoine’s assertion argue he has been fooled by a program that is very good at predicting and imitating human language patterns, something many of us experience to a lesser degree when we call customer service and navigate an automated series of voice prompts.
That’s not sentience, these critics say.
They’re probably right. In the sci-fi world, there is a point in the plot where programming transforms – evolves? – into intent, a spark where the computer or android becomes self-aware. Outside of the movies, that spark remains elusive, at least among machines.
Siri is not self-aware. Alexa doesn’t really become offended if you curse at her.
But in the organic world, who can really say? I occasionally talk to my dog when nobody else is home, and I feel like he understands me. Yet most of that may just be me, projecting my own personality onto him or seeing and hearing in his responses – the empathetic eyes, the occasional sigh – my own human traits reflected back at me.
A lot of it might just be him jonesing for biscuits, too. I have no doubt if I fell down the stairs and broke my neck, he’d rush right to my side, but only to step over my unconscious body and check if a treat had fallen from my hand.
This may not be the murderous intent of HAL 9000 or the existential angst of a chatbot with a huge vocabulary. But it’s an intelligence of a sort, right?
Reach Chris at chris.schillig@yahoo.com. On Twitter: @cschillig.