Book review. A Mind at Play: How Claude Shannon Invented the Information Age
This book, by Jimmy Soni and Rob Goodman, was published in 2017. It is 385 pages long and informative.
This is a good book. Shannon is such an interesting researcher that the book can hardly fail to be interesting. The writing is good, and very good in places, but it is not top-class. A master storyteller like Michael Lewis (e.g., "The Undoing Project"), Walter Isaacson, or Steven Levy would have made this book excellent. The difference, I guess, is that these masters would put an order of magnitude more research into the subject: dozens of interviews and an extensive archival search, leaving no stone unturned. They would also distill the story, build the book around a single strong theme with side themes tied to it, and tell a much more engaging story. They would go the extra mile to point us to the insights they gathered, not by stating them explicitly but by gently nudging us toward them, so that we feel we came up with those insights ourselves.
Claude Shannon is a giant. It is not an overstatement to call Shannon the father of the digital era and the information age. In his master's thesis at MIT, written in 1937 when he was only 21 years old, he laid out the theory of digital circuit design by demonstrating the electrical applications of Boolean algebra. In my review of "Range: Why Generalists Triumph in a Specialized World", I had included this paragraph about Shannon:
[Shannon] launched the Information Age thanks to a philosophy course he took to fulfill a requirement at the University of Michigan. In it, he was exposed to the work of self-taught nineteenth-century English logician George Boole, who assigned a value of 1 to true statements and 0 to false statements and showed that logic problems could be solved like math equations. It resulted in absolutely nothing of practical importance until seventy years after Boole passed away, when Shannon did a summer internship at AT&T’s Bell Labs research facility.
For his PhD, completed in 1940 under the supervision of Vannevar Bush, Shannon developed a mathematical formulation of Mendelian genetics, titled "An Algebra for Theoretical Genetics".
Then in 1948, eleven years after his MS thesis, he founded information theory with a landmark paper, "A Mathematical Theory of Communication", with applications to digital communication, storage, and cryptography.
He has been very influential in cryptography (with his proof that the one-time pad is unbreakable), in artificial intelligence (with his electromechanical mouse Theseus), in chess-playing computers, and in providing a mathematical theory of juggling, among other things.
The tragic part of this story is how it ends. After more than 15 years of progressive decline from Alzheimer's disease, Shannon died at the age of 84, on February 24, 2001.
As is my custom in book reviews, here are some of my highlights from the book. It contains many more interesting facts about Shannon, so I strongly recommend it if you want to learn more about him.
Shannon's research approach
Of course, information existed before Shannon, just as objects had inertia before Newton. But before Shannon, there was precious little sense of information as an idea, a measurable quantity, an object fitted out for hard science. Before Shannon, information was a telegram, a photograph, a paragraph, a song. After Shannon, information was entirely abstracted into bits.
He was a man immune to scientific fashion and insulated from opinion of all kinds, on all subjects, even himself, especially himself; a man of closed doors and long silences, who thought his best thoughts in spartan bachelor apartments and empty office buildings.
It is a puzzle of his life that someone so skilled at abstracting his way past the tangible world was also so gifted at manipulating it. Shannon was a born tinkerer: a telegraph line rigged from a barbed-wire fence, a makeshift barn elevator, and a private backyard trolley tell the story of his small-town Michigan childhood. And it was as an especially advanced sort of tinkerer that he caught the eye of Vannevar Bush—soon to become the most powerful scientist in America and Shannon’s most influential mentor—who brought him to MIT and charged him with the upkeep of the differential analyzer, an analog computer the size of a room, “a fearsome thing of shafts, gears, strings, and wheels rolling on disks” that happened to be the most advanced thinking machine of its day.
Shannon’s study of the electrical switches directing the guts of that mechanical behemoth led him to an insight at the foundation of our digital age: that switches could do far more than control the flow of electricity through circuits—that they could be used to evaluate any logical statement we could think of, could even appear to "decide." A series of binary choices—on/off, true/false, 1/0—could, in principle, perform a passable imitation of a brain. That leap, as Walter Isaacson put it, “became the basic concept underlying all digital computers.” It was Shannon’s first great feat of abstraction. He was only twenty-one.
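To make the leap concrete, here is a minimal sketch of my own (not from the book) of the correspondence Shannon exploited: switches wired in series behave like AND, switches in parallel behave like OR, and a normally-closed contact behaves like NOT, so any logical statement can be wired up and evaluated as a circuit.

```python
# A toy illustration (mine, not the book's) of Shannon's 1937 insight:
# switches in series act as AND, switches in parallel act as OR, and a
# normally-closed contact acts as NOT. Any Boolean statement can
# therefore be "wired up" and evaluated by a circuit.

def series(*switches):          # current flows only if every switch is closed
    return all(switches)

def parallel(*switches):        # current flows if any switch is closed
    return any(switches)

def normally_closed(switch):    # contact that opens when the switch is energized
    return not switch

# Example statement: (A AND B) OR (NOT C), checked over all inputs.
for A in (False, True):
    for B in (False, True):
        for C in (False, True):
            circuit = parallel(series(A, B), normally_closed(C))
            logic = (A and B) or (not C)
            assert circuit == logic
print("the circuit evaluates the logical statement for every input")
```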
And yet Shannon proved that noise could be defeated, that information sent from Point A could be received with perfection at Point B, not just often, but essentially always. He gave engineers the conceptual tools to digitize information and send it flawlessly (or, to be precise, with an arbitrarily small amount of error), a result considered hopelessly utopian up until the moment Shannon proved it was not.
Having completed his pathbreaking work by the age of thirty-two, he might have spent his remaining decades as a scientific celebrity, a public face of innovation: another Bertrand Russell, or Albert Einstein, or Richard Feynman, or Steve Jobs. Instead, he spent them tinkering. An electronic, maze-solving mouse named Theseus. An Erector Set turtle that walked his house. The first plan for a chess-playing computer, a distant ancestor of IBM’s Deep Blue. The first-ever wearable computer. A calculator that operated in Roman numerals, code-named THROBAC (“Thrifty Roman-Numeral Backward-Looking Computer”). A fleet of customized unicycles. Years devoted to the scientific study of juggling.
Claude’s gifts were of the Einsteinian variety: a strong intuitive feel for the dimensions of a problem, with less of a concern for the step-by-step details. As he put it, “I think I’m more visual than symbolic. I try to get a feeling of what’s going on. Equations come later.” Like Einstein, he needed a sounding board, a role that Betty played perfectly. His colleague David Slepian said, “He didn’t know math very deeply. But he could invent whatever he needed.” Robert Gallager, another colleague, went a step further: “He had a weird insight. He could see through things. He would say, ‘Something like this should be true’ . . . and he was usually right. . . . You can’t develop an entire field out of whole cloth if you don’t have superb intuition.”
I had what I thought was a really neat research idea, for a much better communication system than what other people were building, with all sorts of bells and whistles. I went in to talk to him [Shannon] about it and I explained the problems I was having trying to analyze it. And he looked at it, sort of puzzled, and said, “Well, do you really need this assumption?” And I said, well, I suppose we could look at the problem without that assumption. And we went on for a while. And then he said, again, “Do you need this other assumption?” And I saw immediately that that would simplify the problem, although it started looking a little impractical and a little like a toy problem. And he kept doing this, about five or six times. I don’t think he saw immediately that that’s how the problem should be solved; I think he was just groping his way along, except that he just had this instinct of which parts of the problem were fundamental and which were just details. At a certain point, I was getting upset, because I saw this neat research problem of mine had become almost trivial. But at a certain point, with all these pieces stripped out, we both saw how to solve it. And then we gradually put all these little assumptions back in and then, suddenly, we saw the solution to the whole problem. And that was just the way he worked. He would find the simplest example of something and then he would somehow sort out why that worked and why that was the right way of looking at
“What’s your secret in remaining so carefree?” an interviewer asked Shannon toward the end of his life. Shannon answered, “I do what comes naturally, and usefulness is not my main goal. I keep asking myself, How would you do this? Is it possible to make a machine do that? Can you prove this theorem?” For an abstracted man at his most content, the world isn’t there to be used, but to be played with, manipulated by hand and mind. Shannon was an atheist, and seems to have come by it naturally, without any crisis of faith; puzzling over the origins of human intelligence with the same interviewer, he said matter-of-factly, “I don’t happen to be a religious man and I don’t think it would help if I were!” And yet, in his instinct that the world we see merely stands for something else, there is an inkling that his distant Puritan ancestors might have recognized as kin.
University of Michigan
A’s in math and science and Latin, scattered B’s in the rest: the sixteen-year-old high school graduate sent his record off to the University of Michigan, along with an application that was three pages of fill-in-the-blanks, the spelling errors casually crossed out.
Engineering’s rising profile began to draw the attention of deans in other quarters of the university, and disciplinary lines began to blur. By the time Shannon began his dual degrees in mathematics and engineering, a generation later, the two curricula had largely merged into one.
MIT
The job—master’s student and assistant on the differential analyzer at the Massachusetts Institute of Technology—was tailor-made for a young man who could find equal joy in equations and construction, thinking and building. "I pushed hard for that job and got it."
It was Vannevar Bush who brought analog computing to its highest level, a machine for all purposes, a landmark on the way from tool to brain. And it was Claude Shannon who, in a genius accident, helped obsolete it.
In Michigan, Shannon had learned (in a philosophy class, no less) that any statement of logic could be captured in symbols and equations—and that these equations could be solved with a series of simple, math-like rules. You might prove a statement true or false without ever understanding what it meant. You would be less distracted, in fact, if you chose not to understand it: deduction could be automated. The pivotal figure in this translation from the vagaries of words to the sharpness of math was a nineteenth-century genius named George Boole, a self-taught English mathematician whose cobbler father couldn’t afford to keep him in school beyond the age of sixteen.
Finished in the fall of 1937, Shannon’s master’s thesis, “A Symbolic Analysis of Relay and Switching Circuits,” was presented to an audience in Washington, D.C., and published to career-making applause the following year.
A leap from logic to symbols to circuits: “I think I had more fun doing that than anything else in my life,” Shannon remembered fondly. An odd and wonkish sense of fun, maybe—but here was a young man, just twenty-one now, full of the thrill of knowing that he had looked into the box of switches and relays and seen something no one else had. All that remained were the details. In the years to come, it would be as if he forgot that publication was something still required of brilliant scientists; he’d pointlessly incubate remarkable work for years, and he’d end up in a house with an attic stuffed with notes, half-finished articles, and “good questions” on ruled paper. But now, ambitious and unproven, he had work pouring out of him.
Armed with these insights, Shannon spent the rest of his thesis demonstrating their possibilities. A calculator for adding binary numbers; a five-button combination lock with electronic alarm—as soon as the equations were worked out, they were as good as built. Circuit design was, for the first time, a science. And turning art into science would be the hallmark of Shannon’s career.
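To give a feel for what "as good as built" means, here is a small sketch of mine (not a reconstruction of the thesis's actual diagrams) of a binary adder written purely as Boolean equations, the kind of design the thesis made routine.

```python
# A sketch of a ripple-carry binary adder expressed as Boolean
# equations, in the spirit of the thesis's binary-number calculator.
# (My illustration; the thesis's actual circuits differ.)

def full_adder(a, b, carry_in):
    """One bit position: sum and carry-out as Boolean expressions."""
    s = a ^ b ^ carry_in
    carry_out = (a and b) or (carry_in and (a ^ b))
    return s, carry_out

def add_binary(x_bits, y_bits):
    """Add two equal-length bit lists, least-significant bit first."""
    carry = False
    result = []
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        result.append(s)
    result.append(carry)
    return result

# 3 (011) + 6 (110) = 9 (1001), bits listed least-significant first.
print(add_binary([True, True, False], [False, True, True]))
# [True, False, False, True]
```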
Vannevar Bush
More to the point, it was a matter of deep conviction for Bush that specialization was the death of genius. “In these days, when there is a tendency to specialize so closely, it is well for us to be reminded that the possibilities of being at once broad and deep did not pass with Leonardo da Vinci or even Benjamin Franklin,” Bush said in a speech at MIT.
Somewhere on the list of Vannevar Bush’s accomplishments, then, should be his role in killing American eugenics. As president of the Carnegie Institution of Washington, which funded the Eugenics Record Office, he forced its sterilization-promoting director into retirement and ordered the office to close for good on December 31, 1939.
But the poison tree bore some useful fruit. (Few scientists had compiled better data on heredity and inheritance than eugenicists.) And Shannon was there, in its last months, to collect what he could of it [for his PhD thesis titled "An Algebra for Theoretical Genetics".]
Bell Labs
If Google’s “20 percent time”—the practice that frees one-fifth of a Google employee’s schedule to devote to blue-sky projects—seems like a West Coast indulgence, then Bell Labs’ research operation, buoyed by a federally approved monopoly and huge profit margins, would appear gluttonous by comparison.
Bell researchers were encouraged to think decades down the road, to imagine how technology could radically alter the character of everyday life, to wonder how Bell might “connect all of us, and all of our new machines, together.” One Bell employee of a later era summarized it like this: “When I first came there was the philosophy: look, what you’re doing might not be important for ten years or twenty years, but that’s fine, we’ll be there then.”
Claude Shannon was one of those who thrived. Among the institutions that had dotted the landscape of Shannon’s life, it’s hard to imagine a place better suited to his mix of passions and particular working style than the Bell Laboratories of the 1940s. “I had freedom to do anything I wanted from almost the day I started,” he reflected. “They never told me what to work on.”
Wartime
Things were moving fast there, and I could smell the war coming along. And it seemed to me I would be safer working full-time for the war effort, safer against the draft, which I didn’t exactly fancy. I was a frail man, as I am now... I was trying to play the game, to the best of my ability. But not only that, I thought I’d probably contribute a hell of a lot more.
“I think he did the work with that fear in him, that he might have to go into the Army, which means being with lots of people around which he couldn’t stand. He was phobic about crowds and people he didn’t know.”
If anything, his reaction to the war work was quite the opposite: the whole atmosphere left a bitter taste. The secrecy, the intensity, the drudgery, the obligatory teamwork—all of it seems to have gotten to him in a deeply personal way. Indeed, one of the few accounts available to us, from Claude’s girlfriend, suggests that he found himself largely bored and frustrated by wartime projects, and that the only outlet for his private research came on his own time, late at night. “He said he hated it, and then he felt very guilty about being tired out in the morning and getting there very late. . . . I took him by the hand and sometimes I walked him to work—that made him feel better.” It’s telling that Shannon was reluctant, even decades later, to talk about this period in any kind of depth, even to family and friends. In a later interview, he would simply say, with a touch of disappointment in his words, that “those were busy times during the war and immediately afterwards and [my research] was not considered first priority work.” This was true, it appears, even at Bell Labs, famously open-minded though it may have been.
As in other areas of Shannon’s life, his most important work in cryptography yielded a rigorous, theoretical underpinning for many of a field’s key concepts. This paper, “A Mathematical Theory of Cryptography—Case 20878,” contained important antecedents of Shannon’s later work—but it also provided the first-ever proof of a critical concept in cryptology: the “one-time pad.”
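For readers new to the concept, here is a minimal one-time pad sketch of my own (not the paper's formulation): XOR the message with a random key of equal length that is used exactly once; without the key, every plaintext of that length is equally plausible.

```python
# A minimal one-time pad illustration (my sketch, not Shannon's paper):
# XOR the message with a random key of equal length, used exactly once.
import secrets

def otp_encrypt(message: bytes, key: bytes) -> bytes:
    assert len(key) == len(message), "key must be as long as the message"
    return bytes(m ^ k for m, k in zip(message, key))

otp_decrypt = otp_encrypt  # XOR is its own inverse

message = b"ATTACK AT DAWN"
key = secrets.token_bytes(len(message))   # and never reuse this key
ciphertext = otp_encrypt(message, key)
assert otp_decrypt(ciphertext, key) == message
```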
According to Turing’s biographer, Andrew Hodges, Shannon and Turing met daily over tea, in public, in the conspicuously modest Bell Labs cafeteria.
Shannon, for his part, was amazed by the quality of Turing’s thinking. “I think Turing had a great mind, a very great mind,” Shannon later said.
Information theory
On the evenings he was at home, Shannon was at work on a private project. It had begun to crystallize in his mind in his graduate school days. He would, at various points, suggest different dates of provenance. But whatever the date on which the idea first implanted itself in his mind, pen hadn’t met paper in earnest until New York and 1941. Now this noodling was both a welcome distraction from work at Bell Labs and an outlet to the deep theoretical work he prized so much, and which the war threatened to foreclose.
The work wasn’t linear; ideas came when they came. “These things sometimes... one night I remember I woke up in the middle of the night and I had an idea and I stayed up all night working on that.” To picture Shannon during this time is to see a thin man tapping a pencil against his knee at absurd hours. This isn’t a man on a deadline; it’s something more like a man obsessed with a private puzzle, one that is years in the cracking. “He would go quiet, very very quiet. But he didn’t stop working on his napkins,” said Maria. “Two or three days in a row. And then he would look up, and he’d say, ‘Why are you so quiet?’ ”
The real measure of information is not in the symbols we send—it’s in the symbols we could have sent, but did not. To send a message is to make a selection from a pool of possible symbols, and “at each selection there are eliminated all of the other symbols which might have been chosen.”
The information value of a symbol depends on the number of alternatives that were killed off in its choosing. Symbols from large vocabularies bear more information than symbols from small ones. Information measures freedom of choice.
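That "freedom of choice" is exactly what Shannon's entropy formula measures. As a rough sketch using the standard formula (my illustration, not the book's notation), a symbol drawn from a larger or more evenly used vocabulary carries more bits:

```python
# A rough sketch of Shannon's measure of information: the entropy
# H = -sum(p * log2(p)) over the possible symbols. Larger, more evenly
# used vocabularies yield more bits per symbol.
from math import log2

def entropy_bits(probabilities):
    return -sum(p * log2(p) for p in probabilities if p > 0)

print(entropy_bits([0.5, 0.5]))        # 1.0 bit: a fair coin flip
print(entropy_bits([1/26] * 26))       # ~4.70 bits: 26 equally likely letters
print(entropy_bits([0.9, 0.1]))        # ~0.47 bits: a very predictable choice
```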
Shannon proposed an unsettling inversion. Ignore the physical channel and accept its limits: we can overcome noise by manipulating our messages. The answer to noise is not in how loudly we speak, but in how we say what we say.
Shannon showed that the beleaguered key-tappers in Ireland and Newfoundland had essentially gotten it right, had already solved the problem without knowing it. They might have said, if only they could have read Shannon’s paper, “Please add redundancy.” In a way, that was already evident enough: saying the same thing twice in a noisy room is a way of adding redundancy, on the unstated assumption that the same error is unlikely to attach itself to the same place two times in a row. For Shannon, though, there was much more. Our linguistic predictability, our congenital failure to maximize information, is actually our best protection from error.
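A toy repetition code makes the point concrete: repeat each bit and take a majority vote at the receiver, trading rate for reliability, which is the "say it twice in a noisy room" idea. This is my own sketch; Shannon's actual coding results are far more efficient than repetition.

```python
# A toy repetition code (my sketch; real codes are far more efficient):
# send each bit three times and take a majority vote at the receiver.
# A single flipped copy per bit no longer corrupts the message.
import random

def encode(bits):
    return [b for bit in bits for b in (bit, bit, bit)]

def noisy_channel(bits, flip_prob=0.1):
    return [b ^ 1 if random.random() < flip_prob else b for b in bits]

def decode(received):
    triples = [received[i:i + 3] for i in range(0, len(received), 3)]
    return [1 if sum(t) >= 2 else 0 for t in triples]

message = [random.randint(0, 1) for _ in range(1000)]
decoded = decode(noisy_channel(encode(message)))
errors = sum(m != d for m, d in zip(message, decoded))
print(f"residual errors: {errors} out of {len(message)} bits")
```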
The information theorist Sergio Verdú offered a similar assessment of Shannon’s paper: “It turned out that everything he claimed essentially was true. The paper was weak in what we now call ‘converses’ . . . but in fact, that adds to his genius rather than detracting from it, because he really knew what he was doing.” In a sense, leaving the dots for others to connect was a calculated gamble on Shannon’s part: had he gone through that painstaking work himself, the paper would have been much longer and appeared much later.
Tinkering
Theseus was propelled by a pair of magnets, one embedded in its hollow core, and one moving freely beneath the maze. The mouse would begin its course, bump into a wall, sense that it had hit an obstacle with its “whiskers,” activate the right relay to attempt a new path, and then repeat the process until it hit its goal, a metallic piece of cheese. The relays stored the directions of the right path in “memory”: once the mouse had successfully navigated the maze by trial and error, it could find the cheese a second time with ease. Appearances to the contrary, Theseus the mouse was mainly the passive part of the endeavor: the underlying maze itself held the information and propelled Theseus with its magnet. Technically, as Shannon would point out, the mouse wasn’t solving the maze; the maze was solving the mouse. Yet, one way or another, the system was able to learn.
Shannon would later tell a former teacher of his that Theseus had been “a demonstration device to make vivid the ability of a machine to solve, by trial and error, a problem, and remember the solution.” To the question of whether a certain rough kind of intelligence could be “created,” Shannon had offered an answer: yes, it could. Machines could learn. They could, in the circumscribed way Shannon had demonstrated, make mistakes, discover alternatives, and avoid the same missteps again. Learning and memory could be programmed and plotted, the script written into a device that looked, from a certain perspective, like an extremely simple precursor of a brain. The idea that machines could imitate humans was nothing new.
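The learning mechanism amounts to remembering which turn finally worked at each square. Here is a loose software sketch of mine, with a made-up maze; the real Theseus was built from telephone relays, not code, so treat this purely as an analogy.

```python
# A loose software analogy for Theseus (mine, not Shannon's design):
# explore by trial and error, remember which direction finally worked
# at each square, then replay the route from memory.

MAZE = ["#######",
        "#S..#.#",
        "#.#.#.#",
        "#.#...#",
        "#...#C#",
        "#######"]   # S = start, C = cheese, # = wall

DIRS = {"N": (-1, 0), "S": (1, 0), "E": (0, 1), "W": (0, -1)}

def find(ch):
    for r, row in enumerate(MAZE):
        if ch in row:
            return (r, row.index(ch))

def explore(pos, goal, memory, visited):
    """Trial and error: try each direction, remember the one that works."""
    if pos == goal:
        return True
    visited.add(pos)
    for name, (dr, dc) in DIRS.items():
        nxt = (pos[0] + dr, pos[1] + dc)
        if MAZE[nxt[0]][nxt[1]] != "#" and nxt not in visited:
            if explore(nxt, goal, memory, visited):
                memory[pos] = name          # the "relay" remembers this turn
                return True
    return False

start, cheese = find("S"), find("C")
memory = {}
explore(start, cheese, memory, set())

# Second run: no searching, just follow the remembered directions.
pos, path = start, []
while pos != cheese:
    name = memory[pos]
    path.append(name)
    dr, dc = DIRS[name]
    pos = (pos[0] + dr, pos[1] + dc)
print("remembered route to the cheese:", "".join(path))
```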
In a life of pursuits adopted and discarded with the ebb and flow of Shannon’s promiscuous curiosity, chess remained one of his few lifelong pastimes. One story has it that Shannon played so much chess at Bell Labs that “at least one supervisor became somewhat worried.” He had a gift for the game, and as word of his talent spread throughout the Labs, many would try their hand at beating him. “Most of us didn’t play more than once against him,” recalled Brockway McMillan.
He went on: “you can make a thing that is smarter than yourself. Smartness in this game is made partly of time and speed. I can build something which can operate much faster than my neurons.”
I think man is a machine. No, I am not joking, I think man is a machine of a very complex sort, different from a computer, i.e., different in organization. But it could be easily reproduced—it has about ten billion nerve cells. And if you model each one of these with electronic equipment it will act like a human brain. If you take [Bobby] Fischer’s head and make a model of that, it would play like Fischer.
MIT professorship
MIT made the first move: in 1956, the university invited one of its most famous alumni, Claude Shannon, to spend a semester back in Cambridge as a visiting professor. Returning to his graduate school haunts had something of a revivifying effect on Claude, as well as Betty. For one thing, the city of Cambridge was a bustle of activity compared to the comparatively sleepy New Jersey suburbs. Betty remembered it as an approximation of their Manhattan years, when going out to lunch meant stepping into the urban whirl. Working in academia, too, had its charms. “There is an active structure of university life that tends to overcome monotony and boredom,” wrote Shannon. “The new classes, the vacations, the various academic exercises add considerable variety to the life here.” Reading those impersonal lines, one might miss the implication that Shannon himself had grown bored.
I am having a very enjoyable time here at MIT. The seminar is going very well but involves a good deal of work. I had at first hoped to have a rather cozy little group of about eight or ten advanced students, but the first day forty people showed up, including many faculty members from M.I.T., some from Harvard, a number of doctorate candidates, and quite a few engineers from Lincoln Laboratory. . . . I am giving 2 one and a half hour sessions each week, and the response from the class is exceptionally good. They are almost all following it at 100 percent. I also made a mistake in a fit of generosity when I first came here of agreeing to give quite a number of talks at colloquia, etc., and now that the days are beginning to roll around, I find myself pretty pressed for time.
In a lecture titled “Reliable Machines from Unreliable Components,” Shannon presented the following challenge: “In case men’s lives depend upon the successful operation of a machine, it is difficult to decide on a satisfactorily low probability of failure, and in particular, it may not be adequate to have men’s fates depend upon the successful operation of single components as good as they may be.” What followed was an analysis of the error-correcting and fail-safe mechanisms that might resolve such a dilemma.
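One standard way to attack that dilemma, in the spirit of the lecture's title though not taken from its actual analysis, is redundancy with voting: run several unreliable copies of a component and accept the majority answer, which drives the overall failure probability far below that of any single component.

```python
# A sketch of reliability through redundancy (my illustration, not the
# lecture's analysis): if each component fails independently with
# probability p, a majority vote over n copies fails only when more
# than half of them fail at once.
from math import comb

def majority_failure(p, n):
    """Probability that a strict majority of n components fail."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

p = 0.01                       # a 1-in-100 failure rate per component
for n in (1, 3, 5, 7):
    print(n, majority_failure(p, n))
# 1 -> 1e-2, 3 -> ~3e-4, 5 -> ~1e-5, 7 -> ~3.4e-7
```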
When an offer came for a full professorship and a permanent move to Massachusetts, it was hard to decline. If he accepted, Shannon would be named a Professor of Communication Sciences, and Professor of Mathematics, with permanent tenure, effective January 1, 1957, with a salary of $17,000 per year (about $143,000 in 2017).
After accepting the MIT offer, the Shannons left for Cambridge via California—a year-long detour for a fellowship at Stanford’s Center for Advanced Study in the Behavioral Sciences. Prestigious as the appointment was, the Shannons mainly treated it as an excuse to see the country. They made the leisurely drive through the West’s national parks to California, and back, in a VW bus.
Before setting off for the West, though, Claude and Betty purchased a house at 5 Cambridge Street in Winchester, Massachusetts, a bedroom community eight miles north of MIT. Once their California year was complete, they returned to their new home. In Winchester, the Shannons were close enough to campus for a quick commute but far enough away to live an essentially private life. They were also living in a piece of history—an especially appropriate one in light of Shannon’s background and interests.
The house would figure prominently in Shannon’s public image. Nearly every story about him, from 1957 on, situated him at the house on the lake—usually in the two-story addition that the Shannons built as an all-purpose room for gadget storage and display, a space media profiles often dubbed the “toy room,” but which his daughter Peggy and her two older brothers simply called “Dad’s room.” The Shannons gave their home a name: Entropy House. Claude’s status as a mathematical luminary would make it a pilgrimage site for students and colleagues, especially as his on-campus responsibilities dwindled toward nothing.
Even at MIT, Shannon bent his work around his hobbies and enthusiasms. “Although he continued to supervise students, he was not really a co-worker, in the normal sense of the term, as he always seemed to maintain a degree of distance from his fellow associates,” wrote one fellow faculty member. With no particular academic ambitions, Shannon felt little pressure to publish academic papers. He grew a beard, began running every day, and stepped up his tinkering. What resulted were some of Shannon’s most creative and whimsical endeavors. There was the trumpet that shot fire out of its bell when played. The handmade unicycles, in every permutation: a unicycle with no seat; a unicycle with no pedals; a unicycle built for two. There was the eccentric unicycle: a unicycle with an off-center hub that caused the rider to move up and down while pedaling forward and added an extra degree of difficulty to Shannon’s juggling. (The eccentric unicycle was the first of its kind. Ingenious though it might have been, it caused Shannon’s assistant, Charlie Manning, to fear for his safety—and to applaud when he witnessed the first successful ride.) There was the chairlift that took surprised guests down from the house’s porch to the edge of the lake. A machine that solved Rubik’s cubes. Chess-playing machines. Handmade robots, big and small. Shannon’s mind, it seems, was finally free to bring its most outlandish ideas to mechanical life. Looking back, Shannon summed it all up as happily pointless: “I’ve always pursued my interests without much regard to financial value or value to the world. I’ve spent lots of time on totally useless things.” Tellingly, he made no distinction between his interests in information and his interests in unicycles; they were all moves in the same game.
One professor, Hermann Haus, remembered a lecture of his that Shannon attended. “I was just so impressed,” Haus recalled, “he was very kind and asked leading questions. In fact, one of those questions led to an entire new chapter in a book I was writing.”
He was not the sort of person who would give a class and say “this was the essence of such and such.” He would say, “Last night, I was looking at this and I came up with this interesting way of looking at it.”
Shannon became a whetstone for others’ ideas and intuitions. Rather than offer answers, he asked probing questions; instead of solutions, he gave approaches. As Larry Roberts, a graduate student of that time, remembered, “Shannon’s favorite thing to do was to listen to what you had to say and then just say, ‘What about...’ and then follow with an approach you hadn’t thought of. That’s how he gave his advice.” This was how Shannon preferred to teach: as a fellow traveler and problem solver, just as eager as his students to find a new route or a fresh approach to a standing puzzle.
Even with his aversion to writing things down, the famous attic stuffed with half-finished work, and countless hypotheses circulating in his mind—and even when one paper on the scale of his “Mathematical Theory of Communication” would have counted as a lifetime’s accomplishment—Shannon still managed to publish hundreds of pages’ worth of papers and memoranda, many of which opened new lines of inquiry in information theory. That he had also written seminal works in other fields—switching, cryptography, chess programming—and that he might have been a pathbreaking geneticist, had he cared to be, was extraordinary.
Stock market
By then the family had no need of the additional income from stock picking. Not only was there the combination of the MIT and Bell Labs pay, but Shannon had been on the ground floor of a number of technology companies. One former colleague, Bill Harrison, had encouraged Shannon to invest in his company, Harrison Laboratories, which was later acquired by Hewlett-Packard. A college friend of Shannon, Henry Singleton, put Shannon on the board of the company he created, Teledyne, which grew to become a multibillion-dollar conglomerate. As Shannon retold the story, he made the investment simply because “I had a good opinion of him.”
The club benefited from Shannon as well, in his roles as network node and informal consultant. For instance, when Teledyne received an acquisition offer from a speech recognition company, Shannon advised Singleton to turn it down. From his own experience at the Labs, he doubted that speech recognition would bear fruit anytime soon: the technology was in its early stages, and during his time at the Labs, he’d seen much time and energy fruitlessly sunk into it. The years of counsel paid off, for Singleton and for Shannon himself: his investment in Teledyne achieved an annual compound return of 27 percent over twenty-five years.
The stock market was, in some ways, the strangest of Shannon’s late-life enthusiasms. One of the recurrent tropes of recollections from family and friends is Shannon’s seeming indifference to money. By one telling, Shannon moved his life savings out of his checking account only when Betty insisted that he do so. A colleague recalled seeing a large uncashed check on Shannon’s desk at MIT, which in time gave rise to another legend: that his office was overflowing with checks he was too absentminded to cash. In a way, Shannon’s interest in money resembled his other passions. He was not out to accrue wealth for wealth’s sake, nor did he have any burning desire to own the finer things in life. But money created markets and math puzzles, problems that could be analyzed and interpreted and played out.