
-
On the AI field’s gaps: “There’s going to have to be quite a few conceptual breakthroughs…we also need a massive increase in scale.”
-
On neural networks’ weaknesses: “Neural nets are surprisingly good at dealing with a rather small amount of data, with a huge number of parameters, but people are even better.”
-
On how our brains work: “What’s inside the brain is these big vectors of neural activity.”
-
The modern AI revolution began during an obscure research contest. It was 2012, the third year of the annual ImageNet competition, which challenged teams to build computer vision systems that could recognize 1,000 categories of objects, from animals to landscapes to people.
In the first two years, the best teams had failed to reach even 75% accuracy. But in the third, a band of three researchers—a professor and his students—suddenly blew past this ceiling. They won the competition by a staggering 10.8 percentage points. That professor was Geoffrey Hinton, and the technique they used was called deep learning.
Hinton had actually been working with deep learning since the 1980s, but its effectiveness had been limited by a lack of data and computational power. His steadfast belief in the technique ultimately paid massive dividends. By the fourth year of the ImageNet competition, nearly every team was using deep learning and achieving miraculous accuracy gains. Soon enough, deep learning was being applied to tasks beyond image recognition and across a broad range of industries.
Noah Berger / AP