Smart cities sound great until your face gets scanned, your movements tracked, and AI decides where police should show up next.
Let’s talk about the real risks behind the tech.
Smart cities promise a reduction in traffic, cleaner air, increased safety, and faster services. Who wouldn’t want that? A traffic signal that adjusts when it is busy or slow, garbage bins that send a ping when they are full, streetlights that turn off when no one is outside. It is a dream.
But, if you look closely, you will see the problems. Because behind all that efficiency is something far worse. Constant surveillance. Little to no privacy. Embedded systems that will disproportionately affect certain people. And tech that collects as much information about us as possible – without us even knowing.
Let’s discuss what really happens when a city goes “smart.”
Smart Cities & Privacy
Smart cities collect more personal data than we realize, and most of the time without consent: our phones, our cars, our faces, our movements, sometimes even our voices. Sensors installed around the city track our activities. Public WiFi follows your device as you stroll past. Cameras can zoom in on your face and run it through facial recognition systems. Metro cards log your origin and destination.
Take Singapore as an example. Arguably one of the “smartest” cities in the world, it took a reputational hit in 2021 when the government admitted that its COVID-tracing app, TraceTogether, did more than track COVID clusters: its data was also made available to police for criminal investigations. That is a textbook example of smart city privacy issues.
Even San Diego’s smart streetlights turned into a privacy controversy. Installed to monitor traffic flow and save energy, their footage was later used by law enforcement for surveillance, without the public being told.
This is what complicates smart city data privacy. The lines keep getting blurred. You think your data is being used to support service improvements, only to find out it is in a database linked to monitoring and, potentially, control.
Surveillance in Smart Cities Isn’t Just Watching
When people hear the word “surveillance,” they most often think of crime control. And that, to some degree, is true. But in smart cities, surveillance does not depend on what anyone did or did not do. It is built into the systems themselves, and smart cities knowingly watch everyone. Not because you did anything wrong, but simply because that is how the system works.
In India, law enforcement in large urban areas like Delhi uses a software system called the “Crime Mapping, Analytics and Predictive System,” which integrates surveillance footage, crime reports, and historical data to suggest likely future crime zones. Sounds efficient? Perhaps. But there is a price.
These systems can wrongly implicate people based on faulty reasoning. Living in a neighborhood labeled “high-risk” may bring extra police patrols, even when you have done absolutely nothing wrong.
The smart city surveillance risks in the examples above are no longer science fiction. In New York, a 2020 study found that facial recognition technology used by police misidentified Black and Asian faces at significantly and consistently higher rates than white faces. That means innocent people can be targeted, questioned, or arrested based on the software’s classifications.
This is where the ethical issues of smart cities really start to show. In Detroit, a man named Robert Williams was wrongfully arrested, in front of his children, because facial recognition software misidentified him as a suspect. He ultimately spent 30 hours in custody for a crime he did not commit. That’s not a glitch. That’s a red flag.
AI Used in Urban Planning & Biases
In the United States, cities like Chicago and Los Angeles have employed predictive policing systems, which forecast where future crimes might happen based on historical crime data. But here is the pivotal issue: that historical data is already biased. More arrests in Black or low-income neighborhoods meant those neighborhoods were flagged more often. Flagged neighborhoods got more patrols. More patrols produced more arrests. And so the cycle continues.
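To see why that cycle never self-corrects, here is a minimal toy simulation. Everything in it is invented for illustration, not drawn from any real system: two neighborhoods have the exact same underlying crime rate, but one starts with a biased arrest record, and patrols are allocated in proportion to past arrests.

```python
import random

random.seed(42)

# Toy model: two neighborhoods with IDENTICAL true crime rates,
# but "A" starts with more recorded arrests (historical over-policing).
TRUE_CRIME_RATE = 0.05           # same underlying rate in both areas
arrests = {"A": 60, "B": 40}     # biased starting record
PATROLS_PER_DAY = 10

for day in range(365):
    total = arrests["A"] + arrests["B"]
    # Predictive allocation: patrols go where past arrests were recorded.
    patrols_a = round(PATROLS_PER_DAY * arrests["A"] / total)
    patrols_b = PATROLS_PER_DAY - patrols_a
    # More patrols -> more chances to observe and record the SAME crime rate.
    for _ in range(patrols_a):
        if random.random() < TRUE_CRIME_RATE:
            arrests["A"] += 1
    for _ in range(patrols_b):
        if random.random() < TRUE_CRIME_RATE:
            arrests["B"] += 1

share_a = arrests["A"] / (arrests["A"] + arrests["B"])
print(f"Share of attention still drawn to A after one year: {share_a:.0%}")
```

Even though both areas are equally "criminal" by construction, the neighborhood that started with more recorded arrests keeps attracting more patrols, which generate more records, which justify more patrols. The initial bias is preserved, not corrected.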
In the United Kingdom, facial recognition systems used by police disproportionately misidentified Black faces. In one review, the system’s matches were wrong 81 percent of the time. That is not just poor coding; it raises serious questions about the legitimacy of AI-based surveillance, in urban planning specifically.
The bias doesn’t stop with AI policing; it extends even to something as benign as traffic management. A 2021 study of smart traffic lights in Phoenix, Arizona found that wealthier neighborhoods were prioritized for shorter travel times while traffic in low-income areas moved more slowly.
This is an area where the ethical issues of smart cities really come to light. The technology is not neutral; it reflects the same inequalities that exist in society.
Control Over Data
Most individuals have almost no say in how their data is collected or used in a smart city. This is one of the worst parts of smart cities: how little influence individual citizens actually have. Governments and tech companies make the decisions about sensors, data collection, data storage, surveillance, and artificial intelligence, and those decisions are made almost entirely without citizen involvement.
In 2017, Toronto announced Quayside, a new smart-neighbourhood initiative spearheaded by Sidewalk Labs, a Google sister company. Quayside was pitched as a high-tech, sensor-driven paradise. But the project was met with plenty of questions.
Whose data would it be? How would the data be used? Would it be sold, and to whom? What were the privacy safeguards?
The company could not answer these questions with certainty. Public dissatisfaction kept growing, and in 2020 the project was cancelled entirely. It stands as a cautionary example of smart city ethics: when consent and transparency are afterthoughts, trust collapses.
Most cities don’t even have a way out of the system. You cannot escape being tracked unless you are home, and turn off all of your devices. That is obviously not realistic. You should not have to trade basic freedom to access basic public services.
The Speed of Tech & the Protections Around It
Let’s be real. This isn’t going to slow down. By 2050, 70% of the world’s population will live in urban settings. And most of these will have some smart technology embedded in them. More sensors. More cameras. More automation.
But the laws to protect people? They aren’t keeping up. Most countries don’t even have established rules for handling urban data: who owns it, who can access it, how long it can be kept.
This makes smart cities a little like the wild west. There’s a lot of promise, but very little regulation. And that is dangerous.
If we really want to build cities that are actually for everyone, we need to have some actual guardrails. That means:
– Third party audits of surveillance systems
– Transparency about what data is being collected, and why
– A clear opt out for people
– Rules against private companies hoarding public data
– Tech that has been tested for bias before being rolled out
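That last guardrail, bias testing before rollout, can be surprisingly concrete. Here is a hypothetical sketch of a pre-deployment audit that compares a face-matcher’s false-positive rate across demographic groups; the group names, numbers, and the 1.25x disparity threshold are all invented for illustration, not taken from any real system or legal standard.

```python
from collections import defaultdict

def false_positive_rates(results):
    """FPR per group: share of true non-matches the system flagged as matches."""
    fp = defaultdict(int)   # false positives per group
    n = defaultdict(int)    # true non-matches per group
    for group, is_true_match, system_said_match in results:
        if not is_true_match:
            n[group] += 1
            if system_said_match:
                fp[group] += 1
    return {g: fp[g] / n[g] for g in n}

def disparity_check(rates, max_ratio=1.25):
    """Pass only if the worst group's FPR is within max_ratio of the best."""
    lo, hi = min(rates.values()), max(rates.values())
    return hi <= lo * max_ratio if lo > 0 else hi == 0

# Entirely synthetic evaluation data: (group, is_true_match, system_said_match)
results = (
    [("group_a", False, True)] * 2 + [("group_a", False, False)] * 98 +
    [("group_b", False, True)] * 8 + [("group_b", False, False)] * 92
)
rates = false_positive_rates(results)
print(rates)                   # {'group_a': 0.02, 'group_b': 0.08}
print(disparity_check(rates))  # False -> fails the audit, do not deploy
```

In this made-up example, one group is falsely flagged four times as often as the other, so the system fails the audit. The point is that “tested for bias” can be an explicit, auditable gate rather than a vague promise.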
Because surveillance in smart cities is not just about watching. It is about who is watched more. And who gets to decide what is fair.
Final Thoughts
Excitement should not blind us to the real risks. As much promise as smart cities hold, they also carry serious privacy risks. These cities are built on a foundation of data. Your data. And more often than not, you don’t get a say in how that data is collected, where it goes, or what it is used for.
This is not only about ads following you across the internet. This is about your movements being tracked everywhere you go on public streets. About your face being scanned by cameras you never noticed. About your patterns being stored in a system accessible to law enforcement, private companies, and potentially hackers. The problem with data privacy in smart cities is that as your environment becomes more connected, you lose more and more control.
And that is not even the worst part.
The expansion of surveillance and smart cities ethical issues changes the power dynamic. It turns everyone into a suspect first and a citizen second. If you live in a neighbourhood that has been overpoliced historically, smart surveillance will only double down on this.
AI surveillance in urban planning is not a distant, looming threat. It is already present in the US, China, India, the UK, and elsewhere. We are already seeing artificial intelligence shape traffic management, law enforcement, and equitable access to housing, food, and healthcare. AI is changing how cities work, but it is also changing how people are treated within them. The scary part? Most people don’t even know these tools exist, or how they function. And if you can’t see the system, you can’t challenge it.
What we are confronting is not just a technology issue. It is a power issue, it is a human rights issue, and it is a democracy issue.
A truly smart city should be one that listens to its residents. One that upholds the dignity and rights of all. One that uses technology to support residents and elevate them, not to classify, surveil, and dominate them. A smart city makes ethics the foundation of all decision making, not an afterthought.