One of the big hurdles in locating other Earth-like planets in our galaxy is cutting through the “noise”: distortions in starlight caused by the stars’ own activity.
For astronomers, deciphering and filtering through the observational data they collect is painstaking work that requires a high level of expertise and careful discernment. But what if you could train an artificial intelligence to do it?
That’s what incoming Yale researcher Yan Liang has done.
She is the creator of Æstra, an artificial intelligence (AI) neural network designed to decode the tangled signals of distant worlds. By training Æstra on spectral data, Liang taught it to recognize subtle distortions caused by stellar activity and distinguish them from the delicate gravitational footprints of orbiting planets.
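The degeneracy Æstra is trained to break can be illustrated with a toy sketch (an illustration of the underlying physics, not Liang’s actual method): a genuine orbiting planet Doppler-shifts every spectral line uniformly, while stellar activity deforms the line shapes in place, yet a naive velocity measurement can mistake either one for a planet signal.

```python
import numpy as np

# Toy spectrum: one Gaussian absorption line on a fine wavelength grid.
wav = np.linspace(4999.0, 5001.0, 2001)  # wavelength in angstroms

def line(center, depth=0.5, width=0.05):
    """A single absorption line in a normalized spectrum."""
    return 1.0 - depth * np.exp(-0.5 * ((wav - center) / width) ** 2)

c = 299792.458  # speed of light, km/s

# Case 1: a planet tugs the star, Doppler-shifting the whole line uniformly.
rv = 0.1  # radial velocity in km/s (hot-Jupiter scale)
shifted = line(5000.0 * (1.0 + rv / c))

# Case 2: stellar activity (e.g. a spot) deforms the line asymmetrically,
# filling in one wing while the true line center stays put.
active = line(5000.0) + 0.02 * np.exp(-0.5 * ((wav - 5000.03) / 0.05) ** 2)

def centroid(flux):
    """Flux-weighted center of the absorption: a crude velocity proxy."""
    w = 1.0 - flux
    return np.sum(wav * w) / np.sum(w)

# Both cases move the centroid, so both masquerade as a velocity signal.
shift_planet = (centroid(shifted) - 5000.0) / 5000.0 * c  # real ~0.1 km/s
shift_active = (centroid(active) - 5000.0) / 5000.0 * c   # spurious shift
```

In this sketch the activity-distorted line produces a spurious apparent velocity of tens of meters per second with no planet at all, which is why line-by-line shape information, of the kind a neural network can learn, is needed to separate the two cases.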
Liang, who will continue this work as a postdoctoral fellow at Yale beginning this fall, was recently named a 51 Pegasi b fellow. The 51 Pegasi b Fellowship, established in 2017 by the Heising-Simons Foundation, provides research opportunities for postdoctoral scientists in planetary astronomy. It is named for the first exoplanet discovered orbiting a sun-like star.
The fellowship provides up to $450,000 of support for independent research over three years.
“The moment I realized I could teach AI to reconstruct spectral distortions accurately, line by line — without words, just pure data — was the moment I felt like a true, independent researcher,” said Liang, who will earn her Ph.D. in astrophysics this spring from Princeton University.
At Yale, Liang will work primarily with Malena Rice, a planetary astrophysicist in the Department of Astronomy in Yale’s Faculty of Arts and Sciences, and use Æstra to analyze decades of archival data to search for planets previously obscured by stellar noise.
Specifically, Liang will look for planets in the habitable zones of M dwarf stars, which are small, cool, and abundant, as well as of younger, highly active stars.
“We have more data than ever before, but independent human analysis on each system is not the solution,” Liang said. “We need an AI-powered program or machine learning algorithm to objectively look at the data and squeeze out the remaining juice.”