Two research teams affiliated with the UTSA MATRIX AI Consortium for Human Well-Being received $2 million each from the National Science Foundation's Emerging Frontiers in Research and Innovation program.
The funding will help the researchers continue exploring energy-efficient AI that learns continuously throughout its lifetime.
Dhireesha Kudithipudi, the McDermott endowed chair in UTSA's electrical and computer engineering department and founding director of the MATRIX AI Consortium, is the project's principal investigator working with co-investigators Itamar Lerner, an assistant professor in the university's psychology department, and several researchers at institutions across the country.
They're trying to develop AI models that mirror how the human brain processes information at a fraction of the energy that current AI systems use. Kudithipudi calls these "grand challenges" within the field.
"The challenge really is translating these principles of biological intelligence into engineered learning systems," she said of the 4-year project. "We are trying to take these principles that we observe in the brain and trying to translate them into AI models" and, eventually, more efficient hardware.
According to one study, the hardware used to train GPT-3, the model behind the popular ChatGPT interface, consumed 1,287 megawatt-hours of electricity before the system's launch. Those figures grow quickly with usage and updates to the models, and that's just one AI platform.
By comparison, Kudithipudi said, the human brain runs on about 20 watts of power, with individual computations consuming roughly 0.1 watt, a tiny fraction of what such systems use.
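For a rough sense of scale, a back-of-the-envelope calculation using the two figures above (1,287 megawatt-hours for GPT-3's training, 20 watts for the brain) shows how lopsided the comparison is:

```python
# Back-of-the-envelope comparison using the figures cited in the article:
# 1,287 MWh to train GPT-3, versus a ~20 W continuous draw for the brain.
GPT3_TRAINING_WH = 1_287 * 1_000_000  # 1,287 megawatt-hours in watt-hours
BRAIN_POWER_W = 20                    # approximate brain power consumption

# How long could a 20 W brain run on the energy used to train GPT-3?
hours = GPT3_TRAINING_WH / BRAIN_POWER_W
years = hours / (24 * 365)
print(f"{years:,.0f} years")  # roughly 7,300 years
```

In other words, the energy spent training that one model could power a human brain for several thousand years.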
And while current AI models are good at specific tasks they're trained on, they don't work as well in real-world situations where they encounter unfamiliar environments, incomplete data or unknown inputs.
Kudithipudi's team is exploring the temporal scaffolding hypothesis, which holds that during sleep the human brain replays memories at an accelerated rate, allowing it to recognize patterns within those experiences. The idea is that future AI systems could mirror how the brain processes patterns while it's awake and asleep.
Co-investigator Itamar Lerner, a cognitive and computational neuroscientist, said the human brain can process constant streams of information without expending vast amounts of energy. By recognizing patterns over time, people can predict and react to what could happen in the future.
Patterns help people "extract the gist" of the vast amounts of information our brains process in order "to handle new situations in future," he said.
Much of this happens during sleep when the brain rapidly processes information.
The team will perform human sleep studies at UTSA's sleep lab as part of its research.
"This project is based on trying to take this this basic mechanism and see if we can implement it in large neural networks in an energy-efficient way," Lerner said.
Fidel Santamaria, a professor in the UTSA College of Sciences' neuroscience, developmental and regenerative biology department, is the principal investigator of the other project that received a $2 million grant. He's also working with several researchers at other universities.
In Santamaria's project, his team will apply a mathematical theory — which explains how neurons adapt based on their previous activity — to design more efficient electric circuits.
Humans are "history dependent," meaning we use history and previous experiences to process new information, Santamaria said.
Similarly, neurons, which transmit electric pulses, adapt their responses based on their past, and such history dependence is an important part of their function. Santamaria said that the more neurons are stimulated, the more they adapt.
"The objective of the project is to develop the theory, implement circuits that discover new capacitors with these history-dependent properties and then test if we can have optimal computation and optimal energy consumption at the same time," he said.
The team is eyeing specialized capacitors, devices that store energy, in developing its circuits.
"All the electronics we are using right now are based on resistors," he said in a statement. "Instead, math told us that if we can find a capacitor ... that is history dependent; then, we will be able to translate what we know from neuroscience to actually build a circuit that will have the same properties of the real neurons we have been investigating."
The goal, he said, is to build a single neuron with electric circuits and determine whether that neuron can outperform larger networks.
"There is not enough energy on the planet to train all the AI" that humans are developing, he said. "That's a grand challenge for the country" to stay at the forefront of AI development.
The MATRIX AI Consortium launched in 2020. Since then, it's brought together roughly 65 researchers from various disciplines across UTSA, as well as from UT Health, Southwest Research Institute and Texas Biomedical Research Institute.
©2023 the San Antonio Express-News. Distributed by Tribune Content Agency, LLC.