Gongol.com Archives: February 2022

Brian Gongol


February 10, 2022

Computers and the Internet: Putting up a caution flag

Social media gives us high-visibility reminders that there is no reliable correlation between the ease of finding content and the truth of that content. For every faithful account of events, there could be infinite misrepresentations of the same. That's the whole point of propaganda tactics like hashtag flooding, recently used by the Chinese government to swamp the social-media hashtag "#GenocideGames" (meant to protest the 2022 Beijing Olympics) and drown it in a sea of spam, effectively neutering the original message. ■ Even when bad faith isn't strictly involved, lots of people share their thoughts online with insufficient regard for their duty to the truth. And because so much of the contemporary understanding of the world is influenced both by what people read online (84% of American adults "often" or "sometimes" get their news through digital devices) and by how journalistic outlets reach their news judgments (one research paper called it the "routinization of Twitter into news production"), the flow of content is too important to overlook. ■ For example, it would be a useful feature if social media tools allowed users to add a marker like a caution triangle to the accounts they follow, visible only to themselves, to mark those accounts they follow out of curiosity or necessity, but which need to be read with added caution or skepticism (a rough sketch of such a flag appears at the end of this piece). Sources vary not only in the frequency of what they share, but also in the weight that should be attached to them. ■ One of the privileges of a well-rounded education is in gaining an understanding that lots of things asserted in writing or in other records are subjective, distorted by the author's perspective, purely opinion-based, or flat-out wrong. And they're often mixed with truths. Conscious consumption of all sorts of media requires that the audience be able to detach itself from the moment and consider it critically. ■ A good, well-rounded education also helps a person to understand that sometimes the best information is found in a footnote. Or in marginalia. Or in the informal institutional memory of an organization. Or in the disorganized stacks in the basement of a library. ■ Humans need practice to develop the skill of sorting, rating, and weighing information. If a person's understanding is (crudely) "It's in a textbook, so it must be true", then they need more practice. Things asserted as "facts" sometimes really belong in gray spaces. ■ Likewise, we read and understand things through filters that include principles, and sometimes those principles come into conflict. "Tell the truth" is a vital principle -- unless, for instance, telling a lie would save a life. Then, "Save a life" should prevail and the truth should go out the window. The whole reason to have a Supreme Court is to reach judgments in those places where rules and principles come into friction with one another. Court opinions, "stare decisis", and common law are all parts of a communal attempt to reach decisions through a cloud of imperfect information. ■ In light of all this, it is worrisome that we have such a tenuous grasp on what it means to learn. We really know shockingly little about how the human learning process works -- and now humans are programming computers to "learn", such as it is. And often badly, which is why self-driving cars have been programmed to roll through stop signs and otherwise convincing artificially-generated faces sometimes contain telltale errors, like bungled teeth.
■ The real friction is that increasing dependency on artificial intelligence exposes us to big shortcomings in our understanding of the nature of learning itself. How do you tell artificial intelligence not to believe everything it "reads"? Or that some principles are inviolable...until they aren't? Or that a footnote can mean anything from "This explains everything above" to "The author was bored and wanted to crack a joke"? ■ The prospects are vast for machine learning to do lots of useful things -- in consultation with human oversight and judgment. But we can't let crypto-bros and techno-utopians do all the thinking about what's upstream of AI, or else we're headed for serious trouble. We need caution flags not only for ourselves, but for the tools we're training to think like us, too.
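A rough sketch of the per-follower caution flag mentioned above, purely for illustration: the class, method names, and account handles here are hypothetical, not any social network's actual API. The one design point it tries to capture is that the flag belongs to the reader, is visible only to the reader, and travels with the flagged account's posts into the reader's own timeline.

# Minimal sketch (assumed names, not a real platform API) of a private,
# per-follower caution flag with an optional note-to-self.

from dataclasses import dataclass, field


@dataclass
class CautionFlags:
    """Private markers one reader attaches to accounts they follow."""
    # Maps a flagged account handle to an optional note,
    # e.g. "state media" or "read skeptically".
    flags: dict[str, str] = field(default_factory=dict)

    def flag(self, account: str, note: str = "") -> None:
        self.flags[account] = note

    def unflag(self, account: str) -> None:
        self.flags.pop(account, None)

    def annotate(self, account: str, post_text: str) -> str:
        """Prepend a caution marker to posts from flagged accounts."""
        if account in self.flags:
            note = self.flags[account]
            label = f" ({note})" if note else ""
            return f"\u26a0{label} {post_text}"
        return post_text


# Example use: the flag is private to the reader and never shown to
# the flagged account (handles below are made up).
mine = CautionFlags()
mine.flag("@example_outlet", "read with skepticism")
print(mine.annotate("@example_outlet", "Breaking news..."))
print(mine.annotate("@trusted_friend", "Lunch was great."))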

