"Backpropogation doesn't correlate to the brain," insists Mike Davies, head of Intel's neuromorphic computing unit, dismissing one of the key tools of the species of A.I. In vogue today, deep learning. "For that reason, "it's really an optimizations procedure, it's not actually learning." 

Davies made the comment during a talk on Thursday at the International Solid-State Circuits Conference in San Francisco, a prestigious annual gathering of semiconductor designers.

Davies was returning fire after Facebook's Yann LeCun, a leading apostle of deep learning, dismissed Davies's own technology earlier in the week during his opening keynote for the conference.

"The brain is the one example we have of truly intelligent computation," observed Davies. In contrast, so-called back-prop, invented in the 1980s, is a mathematical technique used to optimize the response of artificial neurons in a deep learning computer program. 

Although deep learning has proven "very effective," Davies told a ballroom of attendees, "there is no natural example of back-prop," so it doesn't correspond to what one would consider real learning.
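For readers unfamiliar with the technique, the distinction Davies is drawing can be made concrete with a minimal sketch, not taken from his talk: back-prop computes an error gradient and uses it to nudge a neuron's weights toward a target output. The input, target, weight, and learning rate below are arbitrary illustrative values.

```python
# Illustrative sketch of back-prop as an optimization procedure (all
# values are hypothetical): a single sigmoid neuron's weight is
# repeatedly nudged down the error gradient until its output matches
# a target.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

x, target = 1.5, 0.8   # arbitrary input and desired output
w, lr = 0.1, 0.5       # arbitrary initial weight and learning rate

for _ in range(200):
    y = sigmoid(w * x)                         # forward pass
    grad_w = (y - target) * y * (1 - y) * x    # chain rule: dL/dw for L = 0.5*(y - target)^2
    w -= lr * grad_w                           # gradient-descent update

print(round(sigmoid(w * x), 3))  # approaches 0.8: optimization, not "learning"
```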


Davies then went on to give a talk about "Loihi," his team's neuromorphic chip built from so-called spiking neurons, which activate only when they receive an input signal. The contention of neuromorphic computing advocates is that this approach more closely emulates the actual characteristics of the brain's functioning, such as the great economy with which it transmits signals.
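The spiking idea can be sketched with a textbook leaky integrate-and-fire neuron, an illustrative stand-in rather than Loihi's actual circuit design: the neuron stays silent until accumulated input crosses a threshold, which is where the economy Davies describes comes from. The threshold and leak constants below are arbitrary assumptions.

```python
# Illustrative leaky integrate-and-fire neuron (not Loihi's actual
# implementation): it accumulates input over time and emits a spike
# only when a threshold is crossed, doing nothing otherwise.
V_THRESHOLD = 1.0   # hypothetical firing threshold
LEAK = 0.9          # membrane potential decays each timestep

def simulate(inputs):
    potential, spikes = 0.0, []
    for current in inputs:
        potential = potential * LEAK + current  # integrate with leak
        if potential >= V_THRESHOLD:
            spikes.append(1)   # fire a spike...
            potential = 0.0    # ...and reset
        else:
            spikes.append(0)   # no threshold crossing, no activity
    return spikes

# Sparse input yields sparse output; the neuron is idle most of the time.
print(simulate([0.0, 0.6, 0.6, 0.0, 0.0, 1.2, 0.0]))  # [0, 0, 1, 0, 0, 1, 0]
```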

He chided LeCun for failing to value the strengths of that approach.  

"It's so ironic," said Davies. "Yann bashed spikes but then he said we need to deal with sparsity
