A team of Purdue Polytechnic researchers, including Robert Nawrocki, Richard Voyles, and Haiyan Henry Zhang, along with graduate student Yi Yang, has developed a new algorithm that could transform the way advanced artificial intelligence (AI) systems learn. Their work, published in Neurocomputing, introduces the Fractional-Order Spike-Timing-Dependent Gradient Descent (FO-STDGD) algorithm, designed to improve spiking neural networks (SNNs), a type of AI that mimics how the human brain processes information.
Spiking neural networks differ from conventional neural networks in that they communicate using "spikes," or brief bursts of electrical activity, similar to how neurons in the brain fire. This allows them to handle information in a more energy-efficient and biologically realistic way. However, training these networks has been challenging because of the complexity of working with precise spike timings. Nawrocki's team has addressed this issue with their FO-STDGD algorithm, making it easier to train SNNs with higher accuracy and faster learning.
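To give a sense of what "communicating with spikes" means in practice, here is a minimal sketch of a leaky integrate-and-fire (LIF) neuron, the textbook building block of spiking networks. All constants are illustrative and are not taken from the paper.

```python
# Minimal leaky integrate-and-fire (LIF) neuron sketch: the membrane
# potential integrates input current, decays ("leaks") each step, and
# emits a spike (1) when it crosses a threshold, after which it resets.
# The leak and threshold values here are illustrative only.

def simulate_lif(inputs, leak=0.9, threshold=1.0):
    """Return a list of 0/1 spikes, one per input step."""
    v = 0.0          # membrane potential
    spikes = []
    for current in inputs:
        v = leak * v + current   # leaky integration of the input current
        if v >= threshold:
            spikes.append(1)     # the neuron fires a spike
            v = 0.0              # and resets its potential
        else:
            spikes.append(0)
    return spikes

# A constant drive of 0.3 accumulates until the threshold is crossed,
# producing sparse, timed spikes rather than continuous activations.
print(simulate_lif([0.3] * 10))  # → [0, 0, 0, 1, 0, 0, 0, 1, 0, 0]
```

The key contrast with a conventional network is visible in the output: information is carried by *when* the neuron fires, not by a continuous activation value, which is what makes spike timing both energy-efficient and hard to train against.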
“Our research shows that by using fractional orders—a method of adjusting how much weight we give to certain pieces of information during training—we can significantly improve how well these networks learn,” said Nawrocki. “In one example, using a fractional order of 1.9 led to a 155% improvement in accuracy compared to traditional methods.”
Improving artificial intelligence efficiency
One of the biggest advantages of spiking neural networks is their ability to process information quickly and with less power, making them ideal for tasks that require real-time decisions, such as self-driving cars or robotics. The FO-STDGD algorithm helps these networks learn by breaking down and adjusting the timing of their spikes in a more manageable way. It also uses a method called "fractional gradient descent," which allows for more flexibility in how the learning process is adjusted over time.
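The article does not give the FO-STDGD update rule itself, but the general idea of fractional gradient descent can be sketched on a toy problem. The version below uses a common Caputo-derivative-inspired simplification, in which the ordinary gradient is rescaled by a power of the distance to a reference point; the function name, constants, and the fractional order of 1.9 (echoing the example in the article) are assumptions for illustration, not the authors' method.

```python
# Hedged sketch of fractional-order gradient descent on a 1-D quadratic.
# The ordinary gradient g is scaled by |w - c|^(1 - alpha) / Gamma(2 - alpha),
# a common Caputo-inspired simplification, where alpha is the fractional
# order and c a fixed reference point. This is a generic illustration,
# NOT the FO-STDGD update from the paper.
import math

def fractional_gd(grad_fn, w0, alpha=1.9, lr=0.1, c=0.0, steps=2000):
    """Minimize a 1-D objective using a fractional-order gradient step."""
    w = w0
    for _ in range(steps):
        g = grad_fn(w)
        # Fractional scaling: alpha = 1 recovers ordinary gradient descent.
        # c is chosen away from the iterates so the scale stays finite.
        scale = abs(w - c) ** (1.0 - alpha) / math.gamma(2.0 - alpha)
        w -= lr * g * scale
    return w

# Minimize f(w) = (w - 2)^2, whose gradient is 2 * (w - 2).
w_star = fractional_gd(lambda w: 2.0 * (w - 2.0), w0=5.0, alpha=1.9)
print(round(w_star, 3))  # converges toward the minimizer w = 2
```

The fractional order acts as an extra knob on the update: with alpha = 1 the scale factor is exactly 1 and the rule collapses to ordinary gradient descent, while other orders reweight the step depending on where the parameter currently sits, which is the kind of flexibility the quote above refers to.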
“This method of using fractional orders gives us more control over the learning process,” Nawrocki said. “We’ve tested it on widely used datasets and found that our approach is not only more accurate but also faster.”
Expanding the possibilities of AI
Spiking neural networks are a powerful method for creating efficient AI, but they’ve been difficult to train effectively. With this new algorithm, the training process is smoother and requires fewer resources. The team’s novel approach allows the networks to fire spikes at the right times more quickly, reducing the overall time and computational power needed to train the system.
The research was supported by grants from the Office of Naval Research and the National Science Foundation, valued partners of the college for both academic and government-funded projects.