Machine Dreams: Sleepwalking Into the Future

From NBC: Weaponized Drones: Connecticut Bill Would Allow Police to Use Lethal Force From Above

It’s odd that we are sleepwalking into a world where our skies will soon be filled with swarms of giant metal bugs — delivery drones, commercial drones, surveillance drones, police drones (and criminal drones) — buzzing over us day and night. There’s very little debate over whether this is, on the whole, a good development for our human community — whether the particular advantages this technology provides will justify its effect on the quality of life in the world it will create.

The same applies to the entire panoply of automation that’s encompassing more and more aspects of human life. Of course, there are many benefits to be gained from any specific technology — and not just practical or economic ones, but also in opening up new realms for creativity, beauty and knowledge. But it’s striking how little thought is being given to the kind of world being formed by the nearly unregulated development and application of various technologies in all walks of life — and to the fact that most of these developments and applications are being undertaken either for private commercial purposes or by governments seeking ever-more powerful methods of control over the public.

Shouldn’t we have some kind of continual public adjudication of how and where and when we want these technologies to be applied? We often do this in our private lives. For example, a couple might decide they’d rather their children not have access to the undeniably impressive and effective technology of a chainsaw. (Or, more realistically, they decide their seven-year-old shouldn’t have unfettered access to the internet.) But there is nothing like this on the public scale. Yet we seem to be heading toward a world where not only will our jobs (including white-collar jobs) be replaced by robots and AI, but we will also be policed by robots, judged by robots, get medical treatment and legal counsel from robots, go around in driverless (and hackable) cars whose speed might be controlled by insurance companies (or by the computer monitors of insurance companies), read news reports “written” by computers (this is already happening with stock reports and sports stories), and so on. Is this really what we want? Are there other, better ways of incorporating these technologies into our societies, and of dealing more productively and justly with the consequences and changes they will bring?

And who will control all of these controlling systems? Who will program the artificial intelligence systems — that is, whose beliefs and biases will inevitably and unavoidably influence this programming? Whose values will these automated programs reflect?

I have no beef with computerized technology at all — I write with it, stay in touch with family and friends with it, learn things from it, access marvellous works of art and entertainment with it, make music with it, take pictures and draw and paint with it, etc. But when dealing with the accelerating automation of human society in general, there are dozens, hundreds of concerns like the ones outlined above that cry out for debate and informed reflection. But there seems to be no venue, no way for us to determine — as a human community at large or in our national or local communities — the way in which we want these technologies to shape our world … and the ways in which we don’t want them to shape it. And I think this leaves us in the very real danger — and the very great likelihood — of ending up in a world that none of us would want to live in.