Today, not for the first time on this blog, we visit Amsterdam. This time for the innovative work of Thijs Turèl, who both heads up the Responsible Sensing Lab and manages the Responsible Urban Digitization program at the Amsterdam Institute for Advanced Metropolitan Solutions.
Whether it’s in smart cities with cameras and algorithms, or in broader applications of Artificial Intelligence, it’s a recurring challenge to understand what is being observed, what is being calculated, and how decisions are made. Turèl uses “provokotypes,” prototypes that are “designed to show the user how an AI might behave and then explain why it made the decision(s) it did.” The first such example is a charging station.
The speed of charging might depend on the amount of renewable energy available, the owner’s planned trips, the owner’s occupation, and much more. By explaining how these things work, the Dutch hope to avoid people getting frustrated when, say, their neighbor’s car is fully charged and theirs isn’t.
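To make the idea concrete, here is a minimal sketch of what such an explainable charging policy could look like. Everything in it is an assumption for illustration: the factor names, the weights, and the priority rule are invented, not the lab’s actual design. The one point it demonstrates is the provokotype principle itself: the decision and its human-readable explanation are produced together.

```python
# Hypothetical sketch of an explainable charging policy.
# Factors, thresholds, and weights are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class ChargingContext:
    renewable_share: float   # fraction of grid power from renewables, 0..1
    hours_until_trip: float  # hours until the owner's next planned departure
    priority_user: bool      # e.g., on-call medical staff

def charging_rate_kw(ctx: ChargingContext, max_rate_kw: float = 11.0):
    """Return (rate_kw, reasons): a charging rate plus the explanations
    for every adjustment, so the user can see *why* they got this rate."""
    reasons = []
    rate = max_rate_kw

    if ctx.priority_user:
        reasons.append("Priority user: charging at full speed.")
        return rate, reasons

    if ctx.renewable_share < 0.3:
        rate *= 0.5
        reasons.append("Little renewable energy on the grid: rate halved.")

    if ctx.hours_until_trip > 8:
        rate *= 0.5
        reasons.append("No trip planned soon: charging slowly to spread load.")

    return rate, reasons

# A driver with no imminent trip, on a fossil-heavy grid, charges slowly,
# and the station can display both reasons rather than leaving them guessing.
rate, why = charging_rate_kw(ChargingContext(0.2, 12.0, False))
```

The design choice worth noticing is that the function never returns a bare number: every branch that changes the outcome also appends a reason, which is exactly the kind of transparency the provokotypes are meant to provoke discussion about.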
Last week, I wrote about the choices we make regarding infrastructure and how they can advance the society we want. Similarly, as Stacey Higginbotham argues in her post, beyond privacy and understanding the technology that surrounds us, “trying to make policy decisions transparent helps us align our stated goals for society with actual results.”
Part of that alignment with society can also come from the values we integrate in our decisions. As Thijs Turèl notes, decisions, including those about so-called smart cities, are usually based on efficiency and price, and sometimes on little else. Embedding our values in our decisions has to be part of the process.
That was part of the rationale for creating the ethical sensing lab earlier this year, to ensure that as technology gets deployed in Amsterdam the city can also use values-based design instead of simply choosing the cheapest or easiest option. [Emphasis mine.]