Issues of identity and data have been in the news once more recently, and once more for no good reason. In New York, tenants of an apartment block are fighting the installation of a facial recognition system to control access to their building. In Cardiff, a legal challenge is testing whether automated facial recognition in a public area breaches an individual’s human rights. And there is growing anger at reports that Apple and Android apps transmit data even while the phones are turned off. The era of data seems to be bearing down on us ever more heavily, with worrying consequences.
The arguments against facial recognition in the two cases are broadly similar. In New York, the plaintiffs claim that the planned technology will allow developers to ‘track’ them around the building, eroding their privacy, and have alleged a wider racial motive; the only alternative is to leave the block. In Cardiff, pedestrians are captured by automated facial recognition mounted on a police van with, again, no choice: if you want to shop on that street, you have to walk past, and be snapped by, the cameras. The issue is one of choice: if the individuals involved want to go about their lives normally, they have no option but to submit to the recognition. At what point does living in ‘the real world’ become conditional on giving up our information and identity, and is there a point of no return beyond which we are powerless to prevent this?
The issue of apps harvesting and siphoning off data has been around almost as long as apps themselves, but the hot topic now is the sheer scale on which they do it and the almost complete lack of control we have over how, why and for whom it is done. It is once again dressed up as a choice – by choosing to use these apps you accept some level of monitoring – but when marketing firms and other companies at two or three removes end up with your data, firms you have never heard of, let alone given direct permission to, the thread of privacy unravels further. To what degree can we remain private individuals when data is taken without our direct knowledge or consent? Moreover, if, as is increasingly the norm, it is not possible to live what would be termed a normal life while withholding consent, are we not being deliberately put at risk by the very systems supposed to make our lives easier?
Law enforcement agencies worldwide are coming to rely more and more on facial recognition technology and data extraction in all its forms, arguing that as crime and terrorism become more decentralised and more technical, banks of data and the geolocation of individuals are vital to staying one step ahead. There may be some truth in this, but the consequences may inadvertently bring about the very harms they are trying to prevent. If people are surveilled, tracked and monitored in so many aspects of their daily lives, everything they say and do becomes a security risk, over and above the question of the ‘right to privacy’. Access to facial images for fake passports, ID badges and the like means that our appearance can be co-opted into a scam without our ever knowing. Every bit of data can be used to profile and predict us: not only in targeted advertising by revenue- and click-hungry corporates, but by darker elements able to steal or access the data and use it to identify their next mark.
At the root of all this is the extent to which the personal data economy is abused, how insurmountable that abuse seems when we are constantly told it is ‘progress’, and how the accompanying decimation of transparency is, ironically, crystal clear. The core issue is that data on and about every part of us, from our images to our preferences, our habits and even a complete record of our actions in the digital realm, is being taken and can no longer be called ‘ours’. It is a wake-up call that we need to be far more protective and security conscious about our data, in whatever context, and more challenging of those who blithely assume they have a right to it.