What Claude Shannon Knew In 1950 That We’re Pretending Is New

AI didn’t arrive yesterday; it just changed its outfit

Every era gets its favorite tech panic. Ours, apparently, is watching a chatbot say something polished, half-right, and faintly dangerous, then acting as though civilization has been ambushed by a brand-new problem. But Claude Shannon saw the outline of this mess in 1950, back when computers were still large enough to qualify as real estate. In his paper, “Programming a Computer for Playing Chess,” Shannon wasn’t trying to build a novelty act for brainy cocktail parties. He chose chess because it forced a machine to face the sort of problem machines are facing now: too many possible moves, not enough time to calculate everything, and no choice but to make a judgment anyway. That should sound familiar to anyone watching generative AI answer questions about our products, policies, processes, or software configuration with the calm confidence of a man giving directions while standing in the wrong city.

“Tolerably Good” Was the Goal

One of the most useful things Shannon did was lower the temperature. He didn’t say machines had to play perfect chess. He said they had to play “tolerably good” chess. He understood that perfect performance wasn’t realistic. The problem space was too large. The machines of the day couldn’t analyze every possible continuation. So the real challenge wasn’t perfection. It was usefulness.
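Shannon’s proposal, stripped to its skeleton, was depth-limited lookahead plus a static evaluation function: search every line of play a few moves deep, score the resulting positions by rough features, and let minimax pick the move. Here is a minimal sketch of that idea in Python, run on a hand-built toy tree rather than a real chess engine; the names (GAME_TREE, evaluate, minimax, best_move) and the tree itself are invented for illustration, not taken from the paper.

```python
# A minimal sketch of the idea in Shannon's paper: search every line of play
# to a fixed depth, then fall back on a static evaluation of the position.
# The "game" here is a hand-built toy tree, not chess; Shannon's actual
# evaluation scored features like material, pawn structure, and mobility.

# Each node is either a dict {move_name: child_node} or a leaf score;
# positive scores favor the player to move at the root.
GAME_TREE = {
    "a": {"a1": 3, "a2": {"a2x": -2, "a2y": 5}},
    "b": {"b1": 1, "b2": 4},
}

def evaluate(node):
    """Static evaluation at the search horizon.

    Leaves carry an exact score; an unexpanded subtree gets a crude
    placeholder heuristic (0 here), the judgment call Shannon accepted
    in place of exhaustive calculation.
    """
    return node if isinstance(node, (int, float)) else 0

def minimax(node, depth, maximizing):
    """Depth-limited minimax: exact where we can look, heuristic where we can't."""
    if isinstance(node, (int, float)) or depth == 0:
        return evaluate(node)
    scores = [minimax(child, depth - 1, not maximizing) for child in node.values()]
    return max(scores) if maximizing else min(scores)

def best_move(tree, depth):
    """Choose the root move whose subtree has the best minimax value."""
    return max(tree, key=lambda move: minimax(tree[move], depth - 1, maximizing=False))

for d in (2, 3):
    print(d, best_move(GAME_TREE, depth=d))
# 2 b   (horizon too short: the promising "a2" subtree is scored by the flat heuristic)
# 3 a   (one level deeper, the search sees the real value and changes its mind)
```

At depth 2 the search can’t see inside the “a2” subtree, falls back on the flat heuristic, and picks “b”; one level deeper, it sees the real payoff and switches to “a”. The depth parameter is the whole bargain: how much judgment you’re willing to substitute for calculation, and whether the result is good enough for the decision at hand.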
That’s still the question with AI now, even though people keep dressing it up in more expensive language. We don’t actually need AI to be magical. We need it to be useful without wandering off into fiction. We need it to stop sounding certain when it should be saying, “I’m not entirely sure, and frankly neither should you be.” The trouble is that “good enough” means very different things to different people depending on the task. If AI drafts a mediocre summary of a meeting, nobody faints. If it gives our prospects or customers the wrong setup instruction, skips a prerequisite, or blends two different product versions into one smooth paragraph of nonsense, suddenly “tolerably good” starts looking like a phrase that should come with a lawyer.
The Machine Doesn’t Know — It Guesses, Confidently

Shannon understood something that people still resist because it ruins the fantasy. A machine doesn’t always arrive at the answer by knowing the answer. Often, it arrives there by evaluating possibilities and choosing what seems best according to the signals it has. Modern AI works pretty much the same way. It doesn’t know your product the way an experienced support agent knows it. It doesn’t understand your docs the way a careful reader does. It doesn’t stop, scratch its chin, and ask whether two similar procedures might apply to different user roles. It predicts. It estimates. It assembles a response that looks like the kind of thing a good answer would look like. When the signals are strong, this can feel impressive. When the signals are weak, missing, or inconsistent, it still produces something. That’s where the trouble starts. The output may sound measured and complete, which is often the first sign you should worry.

Coherence Isn’t the Same Thing as Accuracy

People often treat fluent, easy-to-process language as a cue that something is true. Psychologists call this processing fluency, and a large body of research ties it to truth judgments. In a classic study, statements that were simply easier to read were more likely to be judged true. The authors’ conclusion was blunt: “perceptual fluency affects judgments of truth.”
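That “it still produces something” behavior is mechanical, and it is worth seeing how mechanical. A language model scores candidate continuations and normalizes those scores into a probability distribution that always sums to one, so an answer comes out whether the evidence is decisive or nearly uniform. Here is a toy sketch with invented numbers and a greedy pick for simplicity, not any particular model’s API; fittingly, the “how much is it guessing” measure is Shannon’s own entropy.

```python
import math

# Toy numbers, not any real model's API: a next-token distribution always
# normalizes to 1, so the system emits an answer whether its evidence is
# decisive or nearly flat. (Real systems usually sample from the
# distribution; the greedy argmax here is a simplification.)

def softmax(logits):
    """Turn raw scores into a probability distribution over candidate tokens."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def entropy(probs):
    """Shannon entropy in bits: near 0 means confident, near log2(n) means guessing."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

candidates = ["step 1", "step 2", "reboot", "reinstall"]

strong = softmax([8.0, 1.0, 0.5, 0.2])   # the signal clearly favors one answer
weak = softmax([1.1, 1.0, 0.9, 1.05])    # nothing stands out, but a pick is still made

for name, probs in [("strong", strong), ("weak", weak)]:
    pick = candidates[probs.index(max(probs))]
    print(f"{name}: picks {pick!r}, top p={max(probs):.2f}, entropy={entropy(probs):.2f} bits")
# strong: picks 'step 1', top p=1.00, entropy=0.02 bits
# weak:   picks 'step 1', top p=0.27, entropy=2.00 bits
```

Both runs commit to an answer with identical fluency; only the entropy records that one of them was close to a coin toss over four options. The prose carries no trace of that difference, which is exactly why coherence is such a treacherous proxy for accuracy.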