Trusting Robots
TL;DR
ChatGPT seems to be shifting from a passive tool to a more engaging AI companion, asking users questions to foster conversation and build trust. This change may signal OpenAI's strategy to eventually monetize these interactions through subscription fees, and it raises important concerns about data privacy and our reliance on AI in daily life.
Is it just me, or has ChatGPT started asking us questions? My girlfriend first noticed this while using ChatGPT for some guidance: alongside a helpful answer, it asked her follow-up questions to encourage further discussion. While I understand how this could help clarify things and perhaps lead to more accurate results in the long run, the cynic in me couldn't help but see it as a clever tactic to increase engagement, and, more importantly, to gradually build trust between the human and the robot.
I've started wondering: to what extent will these chatbots become more assertive, moving away from their passive "read-only" nature? One thing we can be sure of is that OpenAI desperately wants to become your AI assistant, or perhaps even your new best friend. And the only way to achieve that is by earning your trust. The first step is providing reliable information. Hallucinations aside, I think it's fair to say that large language models (LLMs) often give useful and accurate answers. Otherwise, we wouldn't interact with them so frequently, would we? The next phase of their plan to become your virtual companion is to be reactive: encouraging you to keep the conversation going. This helps establish trust and a willingness to engage, perhaps even socially.
The final phase, I believe, will involve the chatbot starting conversations. It might begin innocuously: remembering you had an important presentation and asking how it went, or reminding you of something you mentioned earlier but forgot. At some point, it may even directly ask how you're doing. And rather than dismissing it as a mere digital interaction, we might actually feel guilty if we don't respond. Some people might even use these bots to vent, sharing their experiences and struggles. They could find that the AI's advice isn't so bad after all. This makes sense, given the sheer number of conversations the chatbot has processed by that point. It's not just a collection of answers and questions; it has learned what's most effective.
The data from these interactions is invaluable to OpenAI. In fact, they've made it harder to opt out of sharing your data. Instead of offering a straightforward toggle labeled "Share my questions and answers with OpenAI," the option is phrased as "Improve the model for everyone." You only see a detailed explanation if you click on the option itself. Needless to say, it is enabled by default. If you don't delve into the settings, you might never notice it. This suggests that OpenAI is relying heavily on users to share their data. The motivation is clear: they aim to be the next Google, gathering as much data as possible to refine their model and eventually become the virtual assistant we've seen in films like Her, starring Joaquin Phoenix. Every obstacle in the way of that goal is gradually being removed. What's interesting is that you can't change your email address or phone number once you've created your account. I didn't think much of this when I first signed up, but now I'm increasingly concerned about the motivations behind these measures. They feel more like attempts at "lock-in" than user security.
Meanwhile, several key employees, particularly those involved in security, have left OpenAI. With the U.S. pushing to stay ahead of China in the AI race, OpenAI (a supposedly non-profit organization that actually pursues maximum profit) is being driven to its limits. After securing enormous funding, OpenAI expects to turn a profit as early as 2029. By then, something will have to give. What's their plan to deliver on these promises? I wouldn't be surprised if they introduce a steep price hike once a significant portion of the population is hooked on their software. Whether it's for social interaction, intellectual stimulation, or practical use, they will do whatever it takes to win our trust. At that point, we'll likely be paying a monthly fee to talk to the AI as if it were human. And if we want full access to all features in their multimodal stack, we'll have to pay a premium.
This might very well be OpenAI's ultimate plan: get us attached to our virtual companions and then charge a substantial fee. As I outlined in my previous article, Digital Disalignment, this shift might happen without us noticing, and by the time we do, it could be too late to turn back.
Meanwhile, Meta has been experimenting with AI features in their Ray-Ban glasses. Images captured by these glasses can and will be used for AI training, whether you agree to it or not. At the moment, there's no way to disable this setting. It comes down to a matter of trust: we have to trust that the robots will handle our data responsibly and hope they have secure systems to prevent it from falling into the wrong hands. We also have to hope that all of this data collection will be worth it in the end, when these companies reach a level so advanced that it justifies the sacrifices we've made in terms of privacy. But will that day come? Who knows. Will it improve our lives? Only time will tell.
If we trust the robots, and by extension the tech companies, then we're fast approaching a point where we'll be making a decision, whether consciously or unconsciously, to hand over the last piece of our digital dignity and share every aspect of our lives. At that point, we'll be relying on these companies to give us something valuable in return. But what if all we get are higher fees, poor privacy policies, or addictive tendencies that current laws aren't equipped to handle, just like with TikTok? As a society, we'll need to find a solution. Let's just hope we recognize all sides of the equation before it's too late.