By Rebecca Tan
When cities become smart — when every traffic light, bus stop and building is equipped with a sensor — the lives of billions of people who live in cities will be transformed. In this vision of the future, the Internet of Things (IoT) — the networking of programmable physical devices embedded in everyday objects — takes centre stage, becoming fully integrated into our lives.
It is an ethical obligation, then, for those developing IoT technology to carefully consider the long-term impact of their actions, according to a panel of esteemed computer scientists speaking at a public lecture held at the Singapore Management University.
Organised in conjunction with the Global Young Scientists Summit 2017 (GYSS 2017), the well-attended public lecture brought together three Turing Award winners: Dr Vinton Gray Cerf (2004), Professor Richard Karp (1985) and Professor Butler Lampson (1992). Mr Tan Kok Yam, Head of the Smart Nation Programme Office and Deputy Secretary, Strategy Group, Prime Minister's Office, rounded out the panel with his policy perspective.
Smart city or insane city?
Starting the session on a personal note, Dr Cerf, who is Vice President and Chief Internet Evangelist at Google, cited the example of the Nest thermostat in his home. Because the sensors are placed in rooms that people are seldom in, the Nest believes that Dr Cerf and his wife are never home, leaving the rooms too cold in winter and too hot in summer.
“Obviously, the solution to this is to put more sensors in the house,” Dr Cerf said. “But I want you to take the smart home example and extend that to smart cities. Think about the type, quantity and resolution of information that a smart city would need in order to do sensible things with the data it has collected.”
“These are simple things that, if done poorly, could make for an insane city instead of a smart city,” he warned.
Agreeing, Professor Karp pointed out that existing smart technology applications are restricted to areas where a malfunction cannot cause too much damage.
“We can afford to use machine learning and artificial intelligence for things like natural language processing or identifying people in images where the penalty of making mistakes is not enormous,” Professor Karp said. “In other contexts, safety considerations may be a real barrier to using this very attractive technology.”
Besides the amount and quality of information, another important factor designers of smart cities need to take into account is what happens when things go wrong, Dr Cerf added. If we had enough data and a model that would allow us to anticipate what would happen when different factors change, he said, it would inform our decisions and be better than just guessing.
“An example of this would be the effort by a team at the Singapore-MIT Alliance for Research and Technology that I visited yesterday,” he continued. “They’ve gotten to the point where they can literally model a person walking through the city. So we see that a smart city is not necessarily just the functionality of the city, but what intelligence we have about the city itself.”
An ethical code
Yet even with the best of intentions, handing over decision making to autonomous software is inherently risky, particularly if there are unanticipated bugs in the software, Dr Cerf said.
This problem is exacerbated by the fact that companies making IoT gadgets usually cannot hire the best programmers as they cannot offer a credible career path, Professor Lampson added.
“A consequence of this is that software gets written by second- or third-rate people. This is not a problem with things like Fitbits, but is certainly going to be a problem for a lot of IoT devices that people are seriously considering,” Professor Lampson said.
A possible solution would be a form of licensing or accreditation, similar to what is needed to qualify as a civil engineer, Dr Cerf suggested. However, he noted that this is an unpopular idea among programmers, and that figuring out what tests would determine if a programmer is qualified is also tricky.
“Nonetheless, these are the kinds of things that will have to start happening, otherwise we will be at the whim of software that the programmers who wrote the code may not have to live with,” Dr Cerf said. “This pushes the limits of what is ethical, where people who make the software don’t have to deal with the consequences of the trouble it causes other people.”
Making the invisible visible
Recognising and spelling out these challenges needs to be part of the conversation about Singapore’s Smart Nation initiative, said Mr Tan.
“What we want to do with invisible applications of technology, like the use of data analytics to decide where to place an overhead bridge for example, is to make them as visible as possible. Not to show off the latest technology, but so that there is an informed, educated discussion about these things,” he said.
“We should try our best to bring these issues out for public discussion so that we are able to reckon with those risks collectively and think of the best way forward,” he said.