Yale People
The simple goal of Daniel Spielman’s complicated work: Make things better
Lots of problems have already been solved. They just don’t always have the best solution, said Yale professor Daniel Spielman.
“By thinking about a problem, you can come up with a whole new way of solving it that might be much faster,” said Spielman, the Henry Ford II Professor of Computer Science, Statistics and Data Science, Mathematics, and Applied Mathematics. “The whole field of algorithms is, to some degree, about finding new ways to solve problems. And people in this field, like me, really just like thinking of completely new ways to solve things.”
He compares his research to someone who’s trying to get better at throwing a ball. The conventional way is to practice throwing, lift weights and get bigger muscles. “A smarter person invents a slingshot — and an even better way of doing that is to use a catapult,” he said, laughing.
That’s essentially what Spielman does, only with graphs, equations and computers. “One of my main projects has been designing faster algorithms for solving systems of linear equations, and then working on leveraging those algorithms to do other things faster,” he said. “A lot of my life is dedicated to figuring out how we can compute things a lot faster.”
In its raw form, Spielman’s work is abstract, but the end results have led to countless applications, from faster communication to medical imaging to how online retailers recommend products for you. His thesis work helped revolutionize the field of error-correcting codes, which allow communication devices to transmit information even if part of it is corrupted. That work has made communication faster and more reliable and has been used for broadcasting high-definition television.
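His codes themselves are highly technical, but the underlying idea can be seen in a toy example. The Python sketch below is only a minimal illustration of redundancy and decoding, not the far more efficient expander-based codes from Spielman’s thesis: each bit is sent three times, and a majority vote recovers it even if one copy is corrupted in transit.

```python
# Toy illustration of error correction: a 3-repetition code.
# Real codes, including the expander codes from Spielman's thesis,
# are vastly more efficient; this only shows the basic principle.

def encode(bits):
    """Repeat each bit three times before transmission."""
    return [b for bit in bits for b in (bit, bit, bit)]

def decode(received):
    """Majority vote within each block of three recovers each bit,
    as long as at most one copy per block was corrupted."""
    return [int(sum(received[i:i + 3]) >= 2) for i in range(0, len(received), 3)]

message = [1, 0, 1, 1]
sent = encode(message)
sent[2] ^= 1                     # simulate channel noise: flip one bit
assert decode(sent) == message   # the original message still comes through
```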
In a separate line of research, Spielman was recognized for his work in what’s known as “smoothed analysis,” which has been credited with giving a more realistic understanding of how an algorithm will perform. The concept deals with the phenomenon in which some algorithms work better in practice than in theory. “We sort of proved that algorithms work as long as there’s a little bit of noise in the input — because usually there’s noise in real world data.”
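The flavor of the idea can be captured in a few lines of code. The sketch below is only an illustration of the general principle (Spielman and Teng’s actual smoothed analysis concerns the simplex algorithm for linear programming, not matrix conditioning): a hypothetical worst-case input that breaks a computation becomes harmless once a little random noise is added.

```python
# Illustration of the smoothed-analysis idea: adding a small random
# perturbation to a pathological input makes it well behaved.
# (Spielman and Teng's celebrated result is about the simplex method;
# this matrix example is just an accessible stand-in.)
import numpy as np

rng = np.random.default_rng(0)

# A worst-case input: a singular matrix, for which solving Ax = b
# is hopeless. Its condition number is effectively infinite.
A = np.ones((50, 50))
print("worst case:", np.linalg.cond(A))

# The "smoothed" input: the same matrix plus tiny Gaussian noise.
# With high probability its condition number is moderate.
sigma = 0.01
A_smoothed = A + sigma * rng.standard_normal(A.shape)
print("smoothed: ", np.linalg.cond(A_smoothed))
```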
In 2013, he and two collaborators made international headlines when they solved the Kadison-Singer conjecture, a problem that had gone unsolved by mathematicians for more than 50 years. Its solution could have significant implications for the field of statistics.
Throughout his career, Spielman’s work has earned him numerous accolades. In 2012, the Simons Foundation named him to its inaugural class of Simons Investigators, providing Spielman and Yale’s Department of Computer Science with $660,000 over five years. The same year, he was named a MacArthur Fellow, popularly known as the “genius” grant. He has also won the Rolf Nevanlinna Prize, one of the most prestigious awards in mathematics, given for outstanding contributions to the mathematical aspects of information science. And he has twice won the Gödel Prize, awarded annually for outstanding papers in theoretical computer science.
The most recent Gödel Prize, awarded in 2015, is connected to his work on linear equations, particularly those involving what’s known as Laplacian matrices, grids of numbers that represent graphs. They’re used in the computational sciences and optimization research, as well as in network analysis, which is directly related to Spielman’s position as co-director of the Yale Institute for Network Science (YINS). He and his frequent collaborator, Shang-Hua Teng, the Seeley G. Mudd Professor of Computer Science and Mathematics at the University of Southern California, created an algorithm that solves these linear systems in what’s known as near-linear time — that is, the time it takes to run the algorithm grows almost in proportion to the size of the problem. Fellow mathematicians and computer scientists have called the algorithm — aptly known as the Spielman-Teng algorithm — a “technical tour de force.”
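For readers who want a concrete picture of the objects involved, here is a minimal Python sketch. It builds the Laplacian matrix of a tiny made-up graph and solves one Laplacian system with an off-the-shelf dense solver; this is not the Spielman-Teng algorithm itself, whose point is to handle enormous sparse graphs in near-linear time, but it shows the kind of system their solver targets.

```python
# Build the Laplacian L = D - A of a small graph and solve L x = b.
# NumPy's generic least-squares solver is used purely for illustration;
# the Spielman-Teng solver achieves near-linear time on huge sparse graphs.
import numpy as np

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]  # a made-up 4-vertex graph
n = 4

A = np.zeros((n, n))                 # adjacency matrix
for u, v in edges:
    A[u, v] = A[v, u] = 1.0

D = np.diag(A.sum(axis=1))           # degree matrix
L = D - A                            # graph Laplacian

# L is singular (constant vectors lie in its null space), so we solve in
# the least-squares sense for a right-hand side that sums to zero, e.g.
# one unit of current injected at vertex 0 and extracted at vertex 3.
b = np.array([1.0, 0.0, 0.0, -1.0])
x, *_ = np.linalg.lstsq(L, b, rcond=None)
print(x)  # vertex potentials, determined up to an additive constant
```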
“I try to focus on fairly fundamental problems, the things that come up all over the place,” he said. “My primary interest — my end goal — is really to understand what we can compute, and what we can’t. I want to understand how quickly we can compute things we care about.”
So when did he decide that he liked this line of work? It began partly when, as a teenager, he worked on puzzles: not by putting them together himself in the usual way, but by getting his computer to work out solutions for him. His interest was further stoked shortly after he arrived at Yale as an undergraduate.
“I took a course in algorithms as a freshman here at Yale, and it was amazing to me,” he said. “There are a lot of problems in which we know one way of solving them — often you can do it by hand — but by coming up with a better algorithm, you can do it infinitely faster.”
One thing that appealed to him was that computers not only let you speed up what you would otherwise do by hand but also make it possible to implement far more complicated algorithms. “The thrill of algorithms for me is that you look at a problem in which there is a straight-ahead, brute-force way of doing it, but with thinking, you can come up with something much, much better. And that’s really what we’re doing when we invent a new algorithm.”
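A standard textbook example, not one of Spielman’s own problems, makes the contrast concrete: finding two numbers in a sorted list that add up to a target. The brute-force approach checks every pair; a little thought about the sorted order yields a method that makes a single pass.

```python
# Brute force versus a better algorithm: a classic textbook example
# (not one of Spielman's problems) of the speedup that thinking buys.

def two_sum_brute_force(nums, target):
    """Check every pair: roughly n^2 comparisons."""
    for i in range(len(nums)):
        for j in range(i + 1, len(nums)):
            if nums[i] + nums[j] == target:
                return nums[i], nums[j]
    return None

def two_sum_two_pointers(nums, target):
    """Exploit the sorted order with two pointers: at most n steps."""
    lo, hi = 0, len(nums) - 1
    while lo < hi:
        s = nums[lo] + nums[hi]
        if s == target:
            return nums[lo], nums[hi]
        lo, hi = (lo + 1, hi) if s < target else (lo, hi - 1)
    return None

nums = list(range(1, 1001))
assert two_sum_brute_force(nums, 1999) == two_sum_two_pointers(nums, 1999)
```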
We’re now in a time when our problem-solving ambitions have outpaced the growth of computing power, he notes, and that makes better algorithms all the more crucial.
“We know that you can get better answers to problems with more data, but our data has scaled much more than our available computational resources,” he said. “The real problem is that the size of problems we want to solve has increased dramatically in the last decade. That’s partially because we have much more data — the world is now awash in data.”
Besides, he said, increasing computer power isn’t cheap:
“More computers, that costs real money. And if I can save all that money just by thinking and solving things better, then I’m a very happy person.”
Media Contact
William Weir: william.weir@yale.edu, 203-432-0105