The world needs some magic, the world’s running out of dreams.
It seems they’ve been blown to kingdom come; do you have some?
Who under heaven are we saving it for? If now is not the time, then when is?
— Jimmy Webb
It scarcely seems possible that it’s time to holiday shop for your gift again, but the calendar is an inarguable and implacable witness. Since I myself received a really unusual gift from the University this year (more on that later), I need to take my gifting seriously and try to do something for you a bit beyond the norm.
So why not start in Poland? Płock, to be precise, when the country was under the Russian Empire. Jan Józef Chmielewski was born there in 1891, not really an obvious start to a tale of 21st-century Princeton, but bear with me. A studious sort, and not enamored of the educational options under the czar, he ended up enrolling as John Joseph Hopfield (a literal translation of Chmielewski, trust me) at Syracuse at the age of 22, then eventually moving with his professor to Berkeley, where he earned a Ph.D. in spectroscopy in 1923. Crucially, he won a Guggenheim Fellowship in 1928 (the fellowship’s fourth year of existence) for study in Berlin; “crucially,” because by the time he returned, the Depression was on and faculty hiring was almost nonexistent. So Hopfield, now married to another scientist, wangled a temp job designing the physics exhibit at the Chicago World’s Fair. Three singular things were born there in 1933: the physics exhibit, the inaugural Major League Baseball All-Star Game (see Ruth, Babe), and John Joseph Hopfield, his son. The elder Hopfield then managed to latch on to Libbey-Owens-Ford, the pioneering glass fabrication conglomerate, where he earned a series of patents for work on glass and metals done for the government during World War II. Afterward he went to Johns Hopkins and the Naval Research Laboratory, doing advanced optics work until his sudden death in 1953. The Lyman-Birge-Hopfield bands of the nitrogen emission spectrum are named for him.
At that point his son, John Joseph the younger, was a junior physics major at Swarthmore, as if that weren’t inevitable. Growing up, he had standing permission to disassemble anything he wished around the house — as long as he then put it back together. Looking for an applied, cross-subject physics Ph.D. program, he chose much larger Cornell over Princeton because the fluid nature of Princeton’s departments wasn’t clear to him (as if John Wheeler by himself weren’t proof enough). When Princeton made him a teaching offer in 1964, after his 1958 Ph.D. and stints at Bell Labs and Berkeley, he jumped at it. As a physics professor, starting with the solid-state challenges of the day, he followed in his father’s footsteps and earned a Guggenheim at Cambridge in 1968. Returning to Princeton, he morphed from physics into the then-fluid field of genetics, postulating reasons for human genes to seemingly self-correct when biochemical errors arose. Hired away by Caltech jointly in chemistry and biology in 1980, and then teaching a course with the magical Richard Feynman *42 on The Physics of Computation, Hopfield in 1982 published his first work on associative neural networks (now known as Hopfield networks), which artificially mimic the structure of the human brain and allow progressively more sophisticated systems to “learn” from exposure to large data environments. He was named a MacArthur Fellow (a “genius grant”) in 1983. As his interests grew, he ended up back at Princeton in 1997, now with a chair in molecular biology, but with various feelers into physics, chemistry, genomics, neuroscience — whatever caused questions to arise in his mind. He had come a long way from the Chicago World’s Fair.
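If you’re curious what such a network actually does, here is a minimal sketch in Python. The pattern sizes, the noise level, and the helper names (train, recall) are illustrative assumptions of mine, not anything drawn from Hopfield’s 1982 paper: a handful of binary patterns are stored as a weight matrix, and a corrupted input is nudged, neuron by neuron, back toward the closest stored memory.

```python
# A minimal, illustrative Hopfield-network sketch: Hebbian storage of
# binary (+1/-1) patterns, followed by asynchronous threshold updates
# that pull a noisy input toward the nearest stored pattern.
import numpy as np

def train(patterns: np.ndarray) -> np.ndarray:
    """Build the weight matrix from +/-1 patterns via the outer-product rule."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)          # neurons do not connect to themselves
    return W / len(patterns)

def recall(W: np.ndarray, state: np.ndarray, sweeps: int = 10) -> np.ndarray:
    """Asynchronously update neurons until the state (usually) settles."""
    state = state.copy()
    for _ in range(sweeps):
        for i in np.random.permutation(len(state)):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

rng = np.random.default_rng(0)
memories = rng.choice([-1, 1], size=(3, 100))   # three 100-bit "memories"
noisy = memories[0].copy()
flipped = rng.choice(100, size=15, replace=False)
noisy[flipped] *= -1                            # corrupt 15 of the 100 bits
restored = recall(train(memories), noisy)
print("bits recovered:", int((restored == memories[0]).sum()), "out of 100")
```

Run it and you will typically see most or all of the corrupted bits snap back to the stored pattern, which is the associative-memory trick at the heart of that 1982 work.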
The campus to which he returned included a junior physics major named Fei-Fei Li ’99. In a coincidental bow to the multidisciplinary spirit of Hopfield, her honors thesis adviser Bradley Dickinson was in electrical engineering. She had come to Princeton with the normal curiosity and limitations of any science-motivated graduate of Parsippany High School, but with a twist — she had been speaking conversational English for only three years.
Li was born in Beijing in 1976 to a middle-class couple and grew up attending honors schools in Chengdu, which to be fair did include some rudimentary English. Having already experienced gender discrimination over her science interest by age 13, she saw her world change with the political upheaval of 1989, culminating in the Tiananmen Square crackdown; her family had a background with the Kuomintang, the Chinese regime defeated by the Communists, and their precarious situation was shared by millions of other Chinese. Her father migrated to the U.S., managed to get a crucial visa, and four years later the rest of the family joined him in Parsippany, New Jersey, which had a significant immigrant community. She enrolled in high school, and her parents, with limited transferable skills, ended up running a dry cleaner’s. While she toiled over her schoolwork and over mastering enough English even to read the exam questions, she worked in the shop on weekends, its busiest time. Her math teacher Bob Sabella (a fellow sci-fi junkie) and his wife Jean became her mentors and presumably had some input into the major surprise of her life, which continues to resonate today.
In acclimatizing herself to American culture and stories, Li had come to identify with another immigrant of scientific bent who seemed to have conquered his challenges and been productive: Albert Einstein. So she focused on nearby Princeton as something of a symbol of her goals and dreams. When she received an ominous thin envelope from the University admission office, she was hardly surprised, until she discovered inside a legendary YES! letter from Fred Hargadon, complete with an essentially full scholarship. Utterly floored, she became a Tiger three years after immigrating … and continued to work in the dry cleaner’s on the weekends. She was a science-culture junkie, not only glorying in informal visits to Einstein’s Institute for Advanced Study, but attending classes with such astrophysicists as Neil deGrasse Tyson and David Spergel ’82, as well as biologist Eric Wieschaus, who one morning apologized for wrapping up his genetics class early because he had to get to the press conference for his Nobel Prize in Medicine.
Li chose to do her doctoral work at Caltech (small world …) as she became increasingly interested in artificial intelligence. Back at Princeton with her Ph.D. as an assistant professor in computer science, she and her students began constructing a visual database large enough to test computer algorithms attempting to mimic human brain function; the result, ImageNet, became the basis for her subsequent work amid the huge tech facilities at Stanford, where she moved in 2009. Having conquered the challenge of building a big enough data set through clever internet crowdsourcing, her group set up an annual contest for designers to test their brain-mimicking algorithms. The algorithms’ recognition accuracy quickly reached 75% or so (the human brain on a good day manages maybe 97%), then got stuck.
Until 2012, that is, when a group from the University of Toronto submitted an algorithm called AlexNet that right away identified 85% of the images correctly. Why the huge leap forward? The senior member of the Toronto team was Geoffrey Hinton, regarded as the “Godfather of Deep Learning,” who since the mid-1980s had advanced research in the neural networks begun by Hopfield, even long after bigger, sexier algorithms had begun to dominate the field and soak up the funding. As Li says, “It was like being told the land speed record had been broken by a hundred miles per hour in a Honda Civic.” To cut to the chase, the AlexNet leap acted as the starter’s pistol for the mad AI race you now see splashed across the front page every other day, almost entirely based on associative neural networks.
Thus your very timely yet decades-in-the-making holiday gifts here in the History Corner. First, on Dec. 10, you can watch as Hopfield and Hinton are awarded the Nobel Prize in Physics for their neural networks, then compliment each other in their Nobel lectures. Second, you can follow up with Li, who is now co-founder of the Stanford Institute for Human-Centered Artificial Intelligence. The “human-centered” label is crucial, since it addresses the pressing need for caution and responsible controls on AI, heavily emphasized by pioneers Hinton and Hopfield as well. Her autobiography/AI manual The Worlds I See was the freshman Pre-read this fall, my gift from Princeton that I regift to you; read it, and take a gander at Li’s recent TED talk and her Pre-read chat with President Christopher Eisgruber ’83 to grasp the gravity of the field at this juncture. Think about that, and about the generous gifts from Płock and Chengdu that made it possible. It’s time all of us did some deep learning, folks.
And God bless us, every one.