An inferiority complex for computer vision?

It’s undeniable that the rise of computer vision technology has aided our society in many ways, making the completion of complex and time-consuming tasks easier and faster. Yet in spite of the many advances made in the field, particularly over the past few years, the technology still isn’t able to rival the capabilities of humans – at least not yet.

According to an article entitled “Comparing machines and humans on a visual categorization test,” published this month in the Proceedings of the National Academy of Sciences (PNAS), computer vision software may be quick and efficient at recognizing pre-defined objects, completing projects, and solving problems, but it still falls short of what humans are capable of.

In an experiment conducted with both people and machines, the test subjects had to recognize and classify abstract images. Again and again, the human subjects proved that they could “learn” – and apply what they had learned – to decrease their rate of error in recognizing recurring patterns. After viewing fewer than 20 images, most human participants were able to pick up on the pattern. Meanwhile, the computers, which normally fare well when working within a limited set of data, required thousands of examples to produce correct answers, demonstrating that tasks relying on more abstract identification and reasoning remain a weakness.

The study refers to this shortcoming of computer technology as a “semantic gap.” Of course, the pertinent question isn’t necessarily whether the reasoning abilities of computers will ever parallel those of humans. Instead, perhaps we should be asking when they will.
