Will Machine Learning Build Up Dangerous ‘Intellectual Debt’?

Long-time Slashdot reader JonZittrain is an international law professor at Harvard Law School, and an EFF board member. Wednesday he contacted us to share his new article in the New Yorker:
I’ve been thinking about what happens when AI gives us seemingly correct answers that we wouldn’t have thought of ourselves, without any theory to explain them. These answers are a form of “intellectual debt” that we figure we’ll repay — but too often we never get around to it, or even know where it’s accruing.

A more detailed (and unpaywalled) version of the essay draws on how and when it makes sense to pile up technical debt, asking the same questions about intellectual debt.

The first article argues that new AI techniques “increase our collective intellectual credit line,” adding that “A world of knowledge without understanding becomes a world without discernible cause and effect, in which we grow dependent on our digital concierges to tell us what to do and when.”

And the second article has a great title. “Intellectual Debt: With Great Power Comes Great Ignorance.” It argues that machine learning “at its best gives us answers as succinct and impenetrable as those of a Magic 8-Ball — except they appear to be consistently right.” And it ultimately raises the prospect that humanity “will build models dependent on, and in turn creating, underlying logic so far beyond our grasp that they defy meaningful discussion and intervention…”
