Two-thirds of American cities are making big investments in smart city technology – from intelligent street lights and utility meters to next-generation traffic signals and parking solutions – in an effort to increase operational efficiency, maximize limited resources, and improve people’s quality of life.
While these technologies can be useful on their own, they are far more effective when paired with good data. That’s why cities such as Chicago, Boston, and San Diego have rolled out expansive networks of sensors that collect real-time data on important city metrics, including public health, crime, the environment and traffic congestion.
But as cities increasingly turn to data-driven technologies to tackle difficult challenges, they should be aware of unintended biases that can crop up in analytical tools and exacerbate the very socio-economic divides these technologies aim to address.
The first is algorithmic bias. The MIT Technology Review covered this issue recently:
“Algorithmic bias is shaping up to be a major societal issue at a critical moment in the evolution of machine learning and AI. If the bias lurking inside the algorithms that make ever-more-important decisions goes unrecognized and unchecked, it could have serious negative consequences, especially for poorer communities and minorities. […] Algorithms that may conceal hidden biases are already routinely used to make vital financial and legal decisions. Proprietary algorithms are used to decide, for instance, who gets a job interview, who gets granted parole, and who gets a loan.”
Cities are starting to notice. New York City passed a bill to provide transparency into the way city agencies use algorithms. Here’s what the New York Civil Liberties Union had to say:
“This bill is the first in the nation to take such a broad view of the problem and recognize that for algorithms to benefit society, they must be subject to public scrutiny...to remedy flaws and biases. […] A flawed algorithm can lead to someone being trapped in jail for no good reason or not receiving a public benefit.”
Another issue is sampling bias related to where and how data is collected. A Financial Times article touched on this with regard to Twitter:
“It is in principle possible to record and analyze every message on Twitter and use it to draw conclusions about the public mood. […] But while we can look at all the tweets, Twitter users are not representative of the population as a whole.”
Apply this idea to all those sensors being rolled out in Chicago, Boston, San Diego and elsewhere. What if those sensors only end up in the city center or affluent neighborhoods? Cities would be missing data on residents who live on the outskirts, often in poorer neighborhoods or communities of color.
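To make the sampling-bias point concrete, here is a minimal, hypothetical sketch (the neighborhood names and air-quality readings are invented for illustration, not drawn from any real deployment) showing how a "citywide" average computed only from the areas that happen to have sensors can diverge sharply from the city's actual conditions:

```python
# Hypothetical neighborhoods and air-quality readings, invented for illustration.
neighborhoods = {
    "downtown": 35,         # sensor installed
    "affluent_north": 40,   # sensor installed
    "outskirts_east": 80,   # no sensor
    "outskirts_south": 95,  # no sensor
}

# Only the neighborhoods that actually received sensors report data.
sensored = ["downtown", "affluent_north"]

measured_avg = sum(neighborhoods[n] for n in sensored) / len(sensored)
true_avg = sum(neighborhoods.values()) / len(neighborhoods)

print(f"Average the city 'sees': {measured_avg:.1f}")  # 37.5
print(f"Actual citywide average: {true_avg:.1f}")      # 62.5

# The gap between the two numbers is the sampling bias: decisions driven by
# the measured average would understate conditions in unsensored neighborhoods.
```

The point is not the specific numbers, which are made up, but the mechanism: wherever sensors are missing, the people who live there simply disappear from the data that drives decisions.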
Cities are still relatively new at this, and the policy debates around smart cities will play out over several years. Regardless of how those debates shake out, there are proactive steps the public and private sectors can take now to help eliminate unintended bias in smart city solutions and the algorithms that drive them.
First, take steps to eliminate bias from the outset: invest more in R&D before algorithms and other technologies are put into use by city agencies; develop user guidelines to reduce errors or misuse by city officials; and disclose potential biases if they can’t be eliminated.
Second, invest in more smart city applications in underserved communities, which tend to be overlooked by cities and companies for various reasons: deploying there may raise project costs, or the right infrastructure may not yet exist. Just as cities and companies are weighing the equitable distribution of economic development investments, they should weigh the equitable distribution of smart city technologies.
Finally, develop better messaging, education and transparency around smart city technologies. Tell the public how and why their data is being collected, and explain the societal benefits, including real-life scenarios of how the technology can help (and how it can harm if done poorly or not at all).
If we truly want our cities to be smarter, we must be willing to invest the time, energy and resources to do it right. Otherwise, we risk creating biased, unsustainable urban models that benefit only a segment of the population. That’s not smart at all.