AI Surveillance
TL;DR
What if we asked GPT-4o to analyze our behavior in the context of security - where would we draw the line?
When we talk about AI, most of us probably think about chatbots, or, more specifically, large language models (LLMs) that help us express ourselves in a more guided and automated manner. Even Apple has revealed its involvement in the AI race by incorporating OpenAI's ChatGPT as its primary chatbot. While this collaboration wasn't entirely unexpected, it seems that Apple wants to capitalize on the advantages of AI without taking on the responsibilities associated with it. That debate is intriguing in itself, especially considering Microsoft's position, but this article will not dwell on the misaligned nature of the agreement between OpenAI and Apple. Instead, it focuses on what lies ahead for OpenAI's latest model, GPT-4o, where the "o" stands for "omni". Interestingly, thirteen years ago I envisioned AR glasses called Omni Glass, but that's beside the point.
What truly fascinates me about GPT-4o is how it seemingly possesses the ability to pick up on subtle cues about a person's emotional state, comprehend their surroundings, and even delve further if prompted. Imagine if we asked it to constantly observe through our eyes and analyze anything relevant to us. This is not science fiction; in fact, I'm willing to bet that companies are already integrating this model, particularly its vision capabilities, into AR glasses that have yet to hit the market. Once those glasses arrive, they will let us register and process the world through our own eyes. While OpenAI likely restricts processing other people's faces for privacy reasons, would the same apply to capturing people's behavior? More importantly, shouldn't it be capable of detecting when someone engages in actions that endanger our well-being? Does it not have a moral responsibility to alert us in such cases?
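To make this a little more concrete, here is a minimal sketch of what a single step of such an observation loop might look like, using OpenAI's chat completions API with image input. The frame source and the wording of the prompt are my own assumptions; nothing here reflects how any actual glasses vendor wires this up.

```python
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def describe_frame(jpeg_bytes: bytes) -> str:
    """Ask a vision-capable model what is happening in a single camera frame."""
    encoded = base64.b64encode(jpeg_bytes).decode()
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Describe what is happening in this scene and whether "
                         "anything in it could endanger the wearer's well-being."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{encoded}"}},
            ],
        }],
    )
    return response.choices[0].message.content
```

Run continuously against a camera feed, a loop like this would turn every glance into a small report about the people around you, which is exactly the prospect the rest of this article worries about.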
Considering these principles of protection and safety, one might argue that safety covers a wide range of behaviors, shoplifting included. Clearly, as a society, we must establish the limits to which we want this technology to extend. As a general rule of thumb, I believe the most obvious principle is that we should only be comfortable with rules we would accept being applied to ourselves. If we agreed on that, most of us would likely respect each other's boundaries. However, what if others wouldn't mind being monitored? Or what if people feel compelled to accept this surveillance as part of a broader security policy? We are already under constant online surveillance, with numerous services recording our activities and using AI to analyze and summarize them, constructing a narrative about us. Similarly, public places, stores, bars, restaurants, and even many homes have surveillance cameras installed to ensure safety and security. What if those same cameras could leverage a model like GPT-4o to transform the footage into a similar narrative about us? What if we are not just being monitored, but meticulously examined at every step?
The Dutch supermarket chain Jumbo has already started experimenting with artificial intelligence to enhance its security cameras' capabilities, allowing them to detect shoplifting and other misconduct. This strategy likely arose from the significant number of products stolen each year, now that customers are entrusted with scanning and paying for items themselves, without the involvement of cashiers. Instead of hiring numerous security personnel, why not pay a monthly fee to an AI company that can handle security? Its task would be to identify suspicious behavior and take appropriate action, such as alerting physical staff or flagging the scanning devices of potential shoplifters. It's as simple as that: individuals are now scanned and analyzed, much like the items in their shopping carts. The irony cannot be overstated.
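For illustration only, the glue between such a model and a store's existing systems could be as thin as the sketch below. The behavior labels, endpoints, and alerting webhook are all invented; the point is how little code would stand between a scene description and a flagged customer.

```python
import requests  # hypothetical call to the store's internal alerting system

# Invented labels a vision model might be prompted to emit; not from any real system.
SUSPICIOUS_LABELS = {"concealing item", "skipping scan", "tampering with device"}

def handle_observation(scene_description: str, scanner_id: str) -> None:
    """Route a model-generated scene description to staff alerts and device flags."""
    matched = [label for label in SUSPICIOUS_LABELS
               if label in scene_description.lower()]
    if not matched:
        return
    # Alert on-site staff (hypothetical internal endpoint).
    requests.post("https://store.example/alerts",
                  json={"scanner_id": scanner_id, "reasons": matched})
    # Flag the customer's self-scanning device for a manual check at the exit.
    requests.post(f"https://store.example/scanners/{scanner_id}/flag")
```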
But why stop there? Once people become accustomed to this technology, why not equip security guards and officials with glasses and body cameras that perform the same real-time security functions through their own vision? Or better yet, combine them with existing surveillance cameras. One glance at someone would suffice to determine whether they meet the criteria of a security policy the person being watched knows nothing about. This could cover anything from shouting to accidentally bumping into someone. While one might argue that this approach offers a more transparent and less subjective method of identifying individuals who require further attention, it could also leave us feeling constantly watched and analyzed. Especially when the policies governing the surveillance process are not transparent, we might be caught off guard. What if the algorithm suddenly incorporates behaviors that are mistakenly deemed peculiar but are actually harmless? Or what if the algorithm is biased and disproportionately targets certain individuals or groups? These are valid concerns that must be addressed before implementing such widespread surveillance.
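To show just how arbitrary such criteria could be, here is a purely hypothetical policy expressed as machine-readable rules. None of the behaviors, severities, or actions come from any real deployment; the sketch simply makes visible the kind of opaque thresholds that might sit behind the word "suspicious".

```python
# A purely hypothetical security policy; the behaviors, severities, and actions
# are invented to illustrate how arbitrary such thresholds could be.
SECURITY_POLICY = {
    "shouting": {"severity": 2, "action": "log"},
    "bumping into someone": {"severity": 3, "action": "notify_guard"},
    "lingering near an exit": {"severity": 4, "action": "notify_guard"},
}

def evaluate(observed_behaviors: list[str]) -> list[str]:
    """Return the actions a guard's device would take for each matched behavior."""
    return [SECURITY_POLICY[b]["action"]
            for b in observed_behaviors
            if b in SECURITY_POLICY]
```

The person on the street never sees this table, yet something like it would decide whether a glance turns into a notification.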
Overall, the integration of AI into AR glasses and other wearable devices has the potential to revolutionize our daily lives and enhance our safety. However, it also raises important ethical questions regarding privacy, consent, and the limits of surveillance. The trade-off between security and privacy is something we must carefully consider. As a society, we must actively engage in discussions and establish clear guidelines to ensure that the benefits of this technology are maximized while individual rights are protected and transparency is maintained. Only when we can monitor the monitoring itself can we truly embrace the potential of AI-powered surveillance without compromising our values and privacy.