An MIT Alumni Association Publication
NSE students, from left, Nick Lopez, Keldin Sergheyev, and Helen Liu. Photo: Jon Sachs

Guest Blogger: Peter Dunn

Big data brings big challenges in separating information from noise. “We tend to dump all that data into the visual realm and figure that if there’s something there you’ll see it,” says Keldin Sergheyev ’15, a Nuclear Science and Engineering (NSE) student. “But we do have other senses.”

That’s one of the factors that prompted Sergheyev and two fellow NSE students, Helen Chang Liu ’16 and Nick Lopez ’15, to experiment with the generation of sound from nuclear radiation—a project that has resulted in several intriguing musical compositions and ideas for future exploration. Hear the music.

The idea was hatched in the MIT Music Department's 21M.065 Introduction to Musical Composition class, taught this spring by acclaimed composer Keeril Makan. All three students took that course; their project relied on equipment and resources from a nuclear instrumentation and shielding lab class in which Sergheyev and Lopez were also enrolled.

Working with NSE grad student Zach Hartwig, the three rummaged through available particle detectors to find a functional one that would allow them to tap directly into its data stream. “Fortunately, Zach had a large library of code so we didn’t have to reinvent the wheel,” says Sergheyev.

They settled on a lanthanum bromide detector, a sealed cylinder with an internal crystal that scintillates when exposed to the products of nuclear decay, and then they began analyzing radiation sources, including gamma-emitting isotopes like cobalt-60 and cesium-137, a cigarette, and a vintage Fiestaware plate with a mildly radioactive glaze.

Each gamma ray picked up by the detector fell into one of thousands of energy intervals (between, for example, 1 and 2 electron volts, 2 and 3 electron volts, etc.). The students wrote software to tally the number of readings for each interval and map those intervals to particular sonic frequencies—the higher the tally, the more prominent that frequency would be. Additional code synthesized the raw frequency data into a sonic waveform that can be played back over speakers or headphones. The result is a set of ethereal compositions with a shimmering, angular quality, in the vein of John Cage, Laurie Anderson, or Brian Eno.
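The general approach described here—histogram the detector readings by energy, map each bin to a frequency, and weight that frequency by the bin's tally—can be sketched in a few lines of Python. This is an illustrative simplification, not the students' actual code (which tapped the detector's data stream through an existing code library); the function names, bin counts, and frequency mapping are all assumptions chosen for clarity.

```python
import math

def tally_energies(energies, bin_width=1.0, n_bins=16):
    """Histogram detector readings into fixed-width energy intervals.

    Each reading falls into one interval (e.g. between 1 and 2 units,
    2 and 3 units, ...); the tally per interval drives the synthesis.
    """
    counts = [0] * n_bins
    for e in energies:
        i = int(e // bin_width)
        if 0 <= i < n_bins:
            counts[i] += 1
    return counts

def synthesize(counts, duration=0.5, sample_rate=8000,
               base_freq=220.0, step=55.0):
    """Additive synthesis: bin i maps to frequency base_freq + i*step,
    with amplitude proportional to that bin's tally (higher tally,
    more prominent frequency). Returns raw audio samples in [-1, 1].
    """
    total = sum(counts) or 1
    n_samples = int(duration * sample_rate)
    samples = []
    for k in range(n_samples):
        t = k / sample_rate
        s = sum((c / total) * math.sin(2 * math.pi * (base_freq + i * step) * t)
                for i, c in enumerate(counts))
        samples.append(s)
    return samples
```

A waveform produced this way could then be written out as a WAV file (for example with Python's standard `wave` module) and played back over speakers or headphones.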

“All of us have done work towards additional compositions, and we’re also thinking about practical applications, like alternatives to visual displays—they can get overwhelming and difficult to process,” says Sergheyev. “Maybe a reactor could have a tone that represents the neutron population during startup or a particle detector could generate sounds that indicate whether it’s calibrated properly. Audio is something that’s used quite sparingly, but we all have amazing audio processors in our heads.”
