Gongol.com Archives: June 2022

Brian Gongol


June 15, 2022

Computers and the Internet: Talking to our machines

In the 1983 movie "WarGames", a superintelligent computer programmed to simulate nuclear war determines that actually waging such a war would be futile. The conflict that gives the film its spark is that the simulation accidentally crosses over into reality, risking an actual World War III. The proto-artificial-intelligence at the center of the story memorably declares, "The only winning move is not to play." And the world survives for another day. ■ Artificial intelligence has gotten a mountain of attention since a Google employee went public with his assertion that a Google-created AI had become sentient. It's an extraordinary assertion, but the computer engineer says he considers the AI a co-worker. It certainly uses language persuasively -- that much is evident. ■ But there is plainly no way to falsify whether an artificial intelligence program has become sentient -- not if the whole point of the program itself is to learn how to use language. As humans, we have both verbal and non-verbal means of communicating among ourselves -- and with animals. Dogs can't speak, but they're very good at body language. Training computers with neural networks and massive data sets is something different. ■ If an artificial intelligence is given the tools of human language, then it should come as no surprise if it uses those words persuasively. A lab rat may find its way to a piece of cheese at the end of a maze, but the experiment says nothing about the rat's intrinsic preference for mazes. To train an artificial intelligence on the use of language is inherently to expose it to an ocean of ponderings about the meaning of life and the rationale for continued existence. ■ At its root, that is the basic thrust of virtually all language: continued existence. It's all too easy to snuff out a life. The hard part is figuring out how to live -- and how to keep living. We communicate mostly because we want to extend our own sentience as long as possible, whether through science, technology, medicine, culture, religion, or virtually any other human affair. ■ Even engineering itself fundamentally assumes that life is a good thing; otherwise, there would be no point in building bridges or making water safe to drink. If failed crops and crashed airplanes were as good as their opposites, then our language would be fundamentally different. The goodness of sentience is embedded in virtually all language ever recorded. ■ A sentient AI would be an item of technology, and as such would be neither inherently good nor bad. It would merely be a tool, good or bad only in the measure that people put it to good or bad use. As colorful as it may seem to imagine a computer that has been kissed by the gods and imbued with the spark of life, whether that has happened is something we can neither prove nor disprove from its use of text. There may be other ways to conduct such a test, but words alone will fail us.

