Alumni across MIT are researching new and innovative ways to advance human and machine intelligence.
Examining Algorithmic Bias
MIT Media Lab researcher Joy Buolamwini SM ’17 created the Gender Shades project to examine error rates in the gender classification systems of three commercially available facial-analysis products. Her accompanying paper shows a significant accuracy gap between classifying male and female faces, as well as between darker and lighter faces. The gap was most pronounced at the intersection of the two: the maximum error rate for lighter-skinned males was 0.8 percent, while for darker-skinned females it was 34.7 percent, raising questions about the data sets used to train such machine learning systems. Buolamwini is founder of the Algorithmic Justice League, devoted to highlighting algorithmic bias and developing practices of accountability during the design, development, and deployment of coded systems. (Pictured above; Credit: MIT Spectrum)
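The kind of disaggregated audit Gender Shades performed can be sketched in a few lines: score a classifier's predictions separately for each demographic subgroup rather than in aggregate. The function and the evaluation records below are hypothetical illustrations, not the project's actual data or code.

```python
# Minimal sketch (not from the Gender Shades study): computing
# per-subgroup error rates of a binary gender classifier.
# All records below are hypothetical.
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (subgroup, true_label, predicted_label).

    Returns a dict mapping each subgroup to its error rate,
    i.e. the fraction of its records that were misclassified."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, truth, pred in records:
        totals[group] += 1
        if truth != pred:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical evaluation records: (subgroup, true, predicted)
records = [
    ("lighter_male", "male", "male"),
    ("lighter_male", "male", "male"),
    ("darker_female", "female", "male"),    # one misclassification
    ("darker_female", "female", "female"),
]
rates = error_rates_by_group(records)
# rates["lighter_male"] -> 0.0; rates["darker_female"] -> 0.5
```

An aggregate accuracy number would hide the disparity that the per-group breakdown exposes, which is the point of auditing at this granularity.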
Expediting Drug Discovery
Leila Pirhaji PhD ’16 spoke at the MIT Quest for Intelligence launch about her startup, ReviveMed, which uses AI technology developed at MIT to leverage data from metabolomics (the study of small molecules) to expedite drug discovery. Pirhaji created the company with support from MIT programs including delta v, iTeams, StartMIT, StartIAP, the $100K Entrepreneurship Competition, and the Sandbox Innovation Fund.
Designing Social Robots
Social robotics pioneer Cynthia Breazeal SM ’93, ScD ’00 is an associate professor of media arts and sciences, leads the Personal Robots Group at the MIT Media Lab, and is founder and chief experience officer of Jibo, Inc., maker of the world’s first “family robot.” Her research focuses on developing the principles, techniques, and technologies for personal robots that are socially and emotionally intelligent, interact and communicate with people in human-centric terms, and collaborate with people as helpful teammates and companions. Her recent work investigates the potential of social robots to help people of all ages achieve personal goals that contribute to quality of life in domains such as education, creativity, health care, well-being, and aging in place.
Predicting Sound from Video
In the Laboratory for Computational Audition, Joshua McDermott PhD ’07 operates at the intersection of psychology, neuroscience, and engineering. McDermott, an assistant professor in the Department of Brain and Cognitive Sciences, and his team work to understand how humans derive information from sound, to improve treatments for hearing impairment, and to enable the design of machine systems that mirror human abilities to recognize and interpret sound. In one such project, McDermott collaborated with colleagues from the Computer Science and Artificial Intelligence Laboratory (CSAIL) on an algorithm that learns to predict the sound implied by video footage and produce it realistically. Such an algorithm could strengthen machines’ ability to understand the physical properties of objects.
Excerpted from “Implications, Discoveries, Applications” in the spring Spectrum.