Engineering faculty devising solutions to increasing cybersecurity threats

With cutting-edge operating systems, cryptocurrencies, privacy-enhancing hardware, and more, Yale researchers are revolutionizing cybersecurity.
An illustration of a man in a black suit and fedora walking down a hallway of computer icons

This article originally appeared in Yale Engineering magazine.

Every few days, the news brings stories of data breaches resulting in the theft of enormous sums of money or the release of sensitive information. Whether you’re making a routine online transaction or casting a vote in a national election, the issue of cybersecurity has everyone on alert.

As fast as technology progresses, hackers find new ways to exploit it. It’s a problem that demands strategies on several fronts, and a rethinking of approaches to software, hardware, and the law. At SEAS, a number of faculty members are exploring how to make computing more secure and reliable.

A Hacker-Proof Operating System

Ideally, a computer’s operating system would have at its core a small, trustworthy kernel that can manage all the hardware resources and also provide isolation and protection for different software components. But operating systems are complicated, and just one flaw in the code can elude detection by its makers and leave a system vulnerable to hackers. Completely eliminating those flaws, however, is extremely challenging.

Zhong Shao

“The evolution of computer systems has been heavily influenced by business considerations — people want to be the first to the market,” said Zhong Shao, professor and department chair of computer science. To save time, developers often rely on legacy code, a practice that seemed fine when no one imagined that computers would play such a critical role in our lives. Now, with so many different platforms in use, the Internet of Things taking off, and self-driving cars just around the corner, we need greater assurances that our operating systems are secure and reliable.

To that end, Shao is leading a team in the construction of CertiKOS (Certified Kit Operating System), an operating system based on formal verification: the kernel’s interdependent components are carefully untangled, and mathematical specifications are written for each kernel module’s intended behavior. To make this tractable, the team structures the kernel using certified abstraction layers, designing each component so that it can be formally verified separately and then linked together by the layer framework.
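To get a feel for the approach, here is a toy version of a certified module in the Lean theorem prover: an implementation, a mathematical specification of its intended behavior, and a machine-checked proof that the two agree. (This is only a sketch; CertiKOS itself is developed in the Coq proof assistant, and its layer specifications are far richer.)

```lean
-- Implementation of one tiny kernel "module": a monotone tick counter.
def tick (n : Nat) : Nat := n + 1

-- Specification of the module's intended behavior: a tick must
-- strictly increase the counter.
def tickSpec (pre post : Nat) : Prop := pre < post

-- The proof obligation, discharged once and for all by the proof
-- checker rather than by running test cases.
theorem tick_meets_spec (n : Nat) : tickSpec n (tick n) := by
  unfold tickSpec tick
  omega
```

In CertiKOS, every kernel component carries an obligation of this kind, and the layer framework composes the per-module proofs into a guarantee about the kernel as a whole.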

Formal verification differs from the conventional way of checking a program’s reliability, in which the software developer tests the program against numerous scenarios. For complex concurrent software, testing can no longer cover the program’s state space, so it cannot eliminate all the bugs in the system. That concurrency is also one of the things that sets CertiKOS apart from previously verified systems: it is a concurrent operating system, able to run on computers with multiple processors.

Computing has reached a point where allowing the possibility of any bugs is no longer acceptable. Shao compares an operating system to a government: both are multi-layered, and both manage and facilitate processes. That structure also means that corruption in just one layer can damage the others.

“The CertiKOS approach is to apply formal principled techniques to decompose these very complex systems into many carefully designed abstraction layers,” Shao said. “For each complex component, you try to figure out its semantic structure. We build the operating system in a clean way, layer by layer.”

If the layers aren’t organized in a way that’s consistent, Shao said, you’ll likely create a bad foundation for any new features. If you really understand the semantic underpinnings of these different features, though, then they can serve as a foundation for a system that’s more extensible — that is, able to take on new features and be adapted for different application domains.

A big challenge of the project is making sure that it’s not seen as an academic exercise, Shao said. That means growing the CertiKOS ecosystem to the point that it can be deployed in many realistic application domains. “We still need engineers to make the product more comprehensive, and apply it to more platforms,” he said. “Otherwise people won’t take it seriously.”

Shao acknowledges that the project is ambitious, and that the CertiKOS ecosystem could disrupt and transform how the computer industry has worked for decades. But he also knows that the current way of doing things is unsustainable.

“For the future, we need to build a new operating system ecosystem that has to be super clean and provides a bug-free and hacker-proof guarantee,” he said. “Otherwise, the cyber world, self-driving cars, IoTs, blockchains, and the future of artificial intelligence could all run into serious problems.”

Securing Your Vote

Keeping up with the constantly morphing field of cyberconflict has proven tricky in many areas, including the law. To that end, Joan Feigenbaum, the Grace Murray Hopper Professor of Computer Science & Economics, has teamed up with Professors Oona Hathaway and Scott Shapiro of Yale Law School to work on new ways of thinking about cybersecurity.

Joan Feigenbaum, Grace Murray Hopper Professor of Computer Science & Economics

The collaboration, funded with a two-year, $406,000 grant from the William and Flora Hewlett Foundation, includes the course “The Law and Technology of Cyberconflict.” In the first semester, the students (10 from law and 11 from computer science) met for weekly seminars. In the second semester, they worked individually and in teams on projects addressing pressing cyberconflict issues. One team of students, Sahil Gupta, Patrick Lauppe, and Shreyas Ravishankar, developed a cryptocurrency they call “FedCoin.” It’s similar to Bitcoin, but would be sanctioned by the Federal Reserve or any nation’s central bank, combining the security properties of modern cryptography with the legal and social properties of conventional currencies.

Another team, composed of computer science students Soham Sankaran, Sachith Gullapalli, and Lincoln Swaine-Moore, took on a particularly timely project: a system that allows voters to verify that their votes were recorded and counted correctly, without revealing any information about other voters’ ballots. Researchers Tal Moran and Moni Naor had previously shown that such a system could work in theory, but never built one based on their ideas. To see how it would fare in real life — ideally, at the scale of a national election — the three students developed their own system to demonstrate that it could work in practice.

Working from the papers that Moran and Naor published, the students implemented the cryptography and interface that make up the theoretical system of vote casting, tallying, and verification. They also added a few new features, including a system for verifying receipts and receipt signatures. It’s a crucial addition, since Moran and Naor acknowledged that their theoretical system was vulnerable to corrupt voters who could conceivably falsify receipts, or diminish the legitimacy of an election by “crying wolf” about fake ones. The system’s receipts contain “commitments” to the vote: short pieces of information determined by the vote. A commitment can be used by the voter (who already knows how he or she voted) for verification, but is of essentially no use to someone who doesn’t know the vote behind it.
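The flavor of such a commitment can be sketched with a simple hash-based construction in Python (a simplified stand-in for illustration; Moran and Naor’s actual scheme uses different commitments with stronger properties):

```python
# A hash-based commitment to a ballot: binding (the voter can't change the
# vote later) and hiding (the receipt alone reveals nothing useful).
import hashlib
import secrets

def commit(vote: str) -> tuple[bytes, bytes]:
    """Return (commitment, opening). Without the random opening value,
    the commitment is useless for guessing the vote."""
    opening = secrets.token_bytes(32)
    digest = hashlib.sha256(opening + vote.encode()).digest()
    return digest, opening

def verify(commitment: bytes, vote: str, opening: bytes) -> bool:
    """Anyone who knows the vote and the opening can check a receipt."""
    return hashlib.sha256(opening + vote.encode()).digest() == commitment

# The printed receipt carries only the commitment...
receipt, opening = commit("candidate-B")
# ...so the voter, who knows the vote and keeps the opening, can verify it,
assert verify(receipt, "candidate-B", opening)
# while a wrong guess at the vote fails to check out.
assert not verify(receipt, "candidate-A", opening)
```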

The next step for the project is to test out the system in an election with real voters and see how intuitive the interface is for people unfamiliar with the system. They would also like to make the system open source to get community feedback and allow others to customize the system to their own specific needs.

Better Security from a Device’s ‘Fingerprint’

All electronic devices — even the same products by the same manufacturers — possess unique variations in their hardware. Jakub Szefer, assistant professor of electrical engineering, thinks he can use these variations to create additional security for users of these devices.

An illustration of a red fingerprint in the center of a computer chip.
Could an electronic device’s unique “fingerprint” be used to make it more secure?

Szefer’s project, for which he won a Faculty Early Career Development (CAREER) Award from the National Science Foundation this year, focuses on using a device’s hardware “fingerprint” to increase the security of everyday computing devices. Nearly every computer, smartphone, or Internet of Things device depends on a data-storage system known as dynamic random-access memory (DRAM). The charge in a DRAM cell’s storage capacitor dissipates over time, so a refresh mechanism is needed to recharge the capacitors and keep the data in storage.

When a manufacturer fabricates the DRAM chip, the manufacturing process can result in different wire lengths, changes in the capacitors’ thickness, and other microscopic variations. These differences are very slight, but enough to result in each DRAM having its own unique decay behavior.

“When you turn off the refresh mechanism, the data decays, but different DRAM cells decay at different rates — that’s where the fingerprint comes in,” Szefer said. “The decay rate of the collection of DRAM cells is unique to each module, as a result of the manufacturing variations.”

The uniqueness of each device can be used to make it more secure. For instance, an iPhone user now might log in to a banking site using a password — which could be stolen. But if the security system also includes the device’s hardware fingerprint, someone would need both the password and the user’s iPhone itself to get into the banking site.
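A small simulation sketches how such a fingerprint could back up a password (the decay model and thresholds below are invented for illustration; real DRAM decay is analog and temperature-dependent):

```python
# Simulated DRAM-decay fingerprinting: manufacturing variation fixes each
# cell's decay tendency, so a fresh noisy readout still matches the
# enrolled pattern while another device's pattern does not.
import random

def read_decayed_cells(device_seed: int, n_cells: int = 256) -> list[int]:
    """Simulate disabling refresh and reading which cells lost their charge.
    The per-device seed stands in for manufacturing variation; the unseeded
    noise term stands in for measurement-to-measurement differences."""
    rng = random.Random(device_seed)
    tendency = [rng.random() for _ in range(n_cells)]
    return [1 if t + 0.05 * random.random() > 0.5 else 0 for t in tendency]

def hamming(a: list[int], b: list[int]) -> int:
    return sum(x != y for x, y in zip(a, b))

# Enrollment: the bank records the phone's fingerprint once.
enrolled = read_decayed_cells(device_seed=42)

# Later logins: a fresh readout of the same device still matches closely...
assert hamming(enrolled, read_decayed_cells(42)) < 32
# ...while a different device decays in its own distinct pattern.
assert hamming(enrolled, read_decayed_cells(1337)) > 64
```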

The ideas that come out of the research can be readily deployed, since the project centers on hardware already present in the devices. Putting the fingerprint to practical use, however, is part of the challenge. Completely disabling the refresh mechanism would render the device useless, so Szefer needs to selectively disable refresh in some regions of the DRAM while preventing data in other regions from decaying. That way, the system can read the fingerprint while the rest of the DRAM is refreshed and the device keeps running.

The project could also leverage the physical properties of the computer hardware to develop new hardware-based versions of cryptographic protocols. For instance, it could incorporate what’s known as Oblivious Transfer, in which two parties can exchange certain pieces of information while keeping others secret. Such protocols have been implemented in software, but those implementations rest purely on mathematical assumptions that could theoretically be broken by a clever hacker. Szefer said that intertwining the cryptographic protocols with a device’s hardware could create one more layer of security.
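To see what Oblivious Transfer guarantees, here is a toy run of a Diffie-Hellman-based 1-out-of-2 protocol in Python, in the spirit of the Chou-Orlandi construction (the group parameters are deliberately tiny and insecure, and this is a software sketch, not Szefer’s hardware version):

```python
# 1-out-of-2 Oblivious Transfer: the sender holds two messages, the
# receiver learns exactly one, and the sender never learns which.
import hashlib
import secrets

P = 2**127 - 1   # a small Mersenne prime: illustration only, not secure
G = 3

def key(x: int) -> bytes:
    return hashlib.sha256(x.to_bytes(16, "big")).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

m0 = b"message delivered for choice 0 ."   # two 32-byte secrets
m1 = b"message delivered for choice 1 ."
choice = 1                                  # receiver's private choice bit

a = secrets.randbelow(P - 2) + 1            # sender's ephemeral secret
A = pow(G, a, P)                            # sent to the receiver
b = secrets.randbelow(P - 2) + 1            # receiver's ephemeral secret
B = pow(G, b, P) if choice == 0 else A * pow(G, b, P) % P  # sent back

# The sender derives both keys, unable to tell which one the receiver has.
k0 = key(pow(B, a, P))
k1 = key(pow(B * pow(A, P - 2, P) % P, a, P))   # (B/A)^a, inverse by Fermat
c0, c1 = xor(m0, k0), xor(m1, k1)               # both ciphertexts are sent

# The receiver can derive exactly one key, so learns exactly one message.
kc = key(pow(A, b, P))
assert xor(c1 if choice else c0, kc) == (m1 if choice else m0)
```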

Another possibility Szefer is exploring is using this technology to create a system for checking whether a device is counterfeit or not — a problem that plagues smaller electronics in particular. For instance, the manufacturer could take the fingerprint of every device that’s produced. A system could then be set up that would let consumers check the authenticity of their devices.

Maintaining Privacy and Self-Correcting Chips

When two parties share data on a large scale, it’s not always easy to control what information is released and what is kept private. Aiming to change that, assistant professor of computer science Mariana Raykova is working on a program that allows multiple users to share data without running afoul of privacy concerns and legal restrictions.

A portrait of assistant professor of computer science Mariana Raykova.
Mariana Raykova

Raykova, who received a Google Faculty Research Award this year for the project, noted that there are often legal agreements between parties stipulating that only a subset of a larger body of data can be used. But knowing for sure that these agreements are honored is a complicated matter. In one scenario, two or three hospitals may need to share information about a particular patient’s medical procedure. Due to HIPAA regulations and other privacy concerns, however, the patient’s other medical data shouldn’t be open to all of the parties. Raykova’s group is developing cryptographic techniques that allow the various parties to design a model that fits their specific needs.
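One classic building block for this kind of computation is additive secret sharing, sketched below: several parties learn a joint total without anyone’s individual input being exposed (a simplified illustration with made-up hospital counts, not the group’s actual protocol):

```python
# Additive secret sharing: each private value is split into random-looking
# shares that sum to it, so only the recombined total is ever revealed.
import secrets

Q = 2**61 - 1   # all arithmetic is done modulo a public prime

def share(value: int, n_parties: int) -> list[int]:
    """Split `value` into n shares; any subset short of all n looks random."""
    shares = [secrets.randbelow(Q) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % Q)
    return shares

# Each hospital secret-shares its private count of a procedure's cases.
counts = {"hospital_A": 120, "hospital_B": 87, "hospital_C": 45}
all_shares = {name: share(c, 3) for name, c in counts.items()}

# Computing party i adds up the i-th share from every hospital locally...
partials = [sum(all_shares[name][i] for name in counts) % Q for i in range(3)]

# ...and combining the partial sums reveals the total, and only the total.
total = sum(partials) % Q
assert total == sum(counts.values())   # 252, with no single input disclosed
```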

Exactly how Google makes use of its data is proprietary, but Raykova said there are a number of ways that her team’s project could benefit the search engine giant. Google collects a massive amount of user-related data, and then applies machine learning techniques to build models that neatly explain the behavior of those users. Often, the data comes from multiple sources. Some advertisers on Google, for instance, want a better idea of how effective their ads are: Do users go to the company’s website, for example, or purchase its products? Conversely, these companies might have data that’s helpful to Google about people using the search engine.

Raykova said the same technology could be applied to numerous other uses, such as for tax and judicial databases. One of the main challenges in developing these techniques is making them efficient, she said, since translating a regular computation into a secure computation can take a large amount of computing time and other resources.

She is also working on another pressing security concern: chips with faulty or malicious integrated circuits (ICs). The problem arises when semiconductor designers outsource circuitry to possibly untrusted fabricators. Because of the complexity of certain chips, flaws within these outsourced circuits can go unnoticed by the developers, but could ultimately harm the product. Current solutions are limited to legal and contractual obligations or post-fabrication IC testing, and neither is considered adequate.

The project, titled “Verifiable Hardware: Chips that Prove their Own Correctness,” won a $3 million National Science Foundation grant. In addition to Raykova, the project includes researchers from the University of Virginia; New York University; the University of California, San Diego; and CUNY City College.

The researchers are looking at the wider benefits of their project by developing new practical approaches to the problem of general verifiable computation. Verifiable hardware, they say, is critical to building future computing systems that are reliable and free from major security failures. Ultimately, they’re aiming to use open-source tools to make verifiable hardware accessible for use in cryptographic hardware applications.
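To get a sense of what verifiable computation means, consider Freivalds’ classic probabilistic check, shown below in Python: a client can confirm an untrusted party’s claimed matrix product far more cheaply than recomputing it (a textbook software algorithm for illustration, not the project’s hardware protocol):

```python
# Freivalds' check: verify a claimed product C = A x B in O(n^2) time per
# trial, versus O(n^3) to recompute; a wrong C slips past k trials with
# probability at most 2**-k.
import random

def freivalds_check(A, B, C, trials: int = 20) -> bool:
    n = len(A)
    for _ in range(trials):
        r = [random.randint(0, 1) for _ in range(n)]      # random 0/1 vector
        Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
        Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
        if ABr != Cr:
            return False   # caught a bogus result
    return True

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
honest = [[19, 22], [43, 50]]   # the true product
cheat = [[19, 22], [43, 51]]    # a subtly wrong answer
assert freivalds_check(A, B, honest)
assert not freivalds_check(A, B, cheat)
```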


Media Contact

William Weir: william.weir@yale.edu, 203-432-0105