Trusting Robots

By Mark Nuyens
8 min. read 📱 Technology
TL;DR

ChatGPT seems to be shifting from a passive tool to a more engaging AI companion by asking users questions to foster conversation and build trust. This change may signal OpenAI's strategy to eventually monetize these interactions through subscription fees, raising important concerns about data privacy and our reliance on AI in daily life.

Is it just me, or has ChatGPT started asking us questions? My girlfriend first noticed this while using ChatGPT for some guidance: alongside a helpful answer, it asked her follow-up questions to encourage further discussion. While I understand how this could help clarify things and perhaps lead to more accurate results in the long run, the cynic in me couldn’t help but see it as a clever tactic to increase engagement—and, more importantly, to gradually build trust between the human and the robot.

I’ve started wondering: to what extent will these chatbots become more assertive, moving away from their passive “read-only” nature? One thing we can be sure of is that OpenAI desperately wants to become your AI assistant—or perhaps even your new best friend. And the only way to achieve that is by earning your trust. The first step toward that goal is providing reliable information. Hallucinations aside, I think it’s fair to say that large language models (LLMs) often give useful and accurate answers. Otherwise, we wouldn’t interact with them so frequently, would we? The next phase of their plan to become your virtual companion is to be reactive—encouraging you to keep the conversation going. This helps establish trust and a willingness to engage, perhaps even socially.

The final phase, I believe, will involve the chatbot starting conversations. It might start innocuously—like remembering you had an important presentation and asking how it went, or reminding you of something you mentioned earlier but forgot. At some point, it may even directly ask how you’re doing. And rather than dismissing it as a mere digital interaction, we might actually feel guilty if we don’t respond. Some people might even use these bots to vent, sharing their experiences and struggles. They could find that the AI’s advice isn’t so bad after all. This makes sense, given the sheer number of conversations the chatbot has processed by that point. It’s not just a collection of answers and questions; it’s learned what’s most effective.

The data from these interactions is invaluable to OpenAI. In fact, they’ve made it harder to opt out of sharing your data. Instead of a straightforward toggle labeled ‘Share my questions and answers with OpenAI,’ the option is phrased as ‘Improve the model for everyone,’ and you only see a detailed explanation if you click on the option itself. Needless to say, it’s enabled by default; if you don’t delve into the settings, you might never notice it. This suggests that OpenAI is counting on users to share their data. The motivation is clear: they aim to be the next Google, gathering as much data as possible to refine their model and eventually become the virtual assistant we’ve seen in films like Her, starring Joaquin Phoenix. Every obstacle standing in the way of that goal is gradually being removed.

What’s also interesting is that you can’t change your email address or phone number once you’ve created your account. I didn’t think much of this when I first signed up, but I’m increasingly concerned about the motivations behind these measures. They feel more like attempts at “lock-in” than user security.

Meanwhile, several key employees, particularly those working on safety, have left OpenAI. With the U.S. pushing to stay ahead of China in the AI race, OpenAI—a supposedly non-profit organization that in practice pursues maximum profit—is being driven to its limits. After securing enormous funding, OpenAI doesn’t expect to turn a profit until 2029. By then, something will have to give. What’s their plan to deliver on these promises? I wouldn’t be surprised if they introduce a steep price hike once a significant portion of the population is hooked on their software. Whether it’s for social interaction, intellectual stimulation, or practical use, they will do whatever it takes to win our trust. At that point, we’ll likely be paying a monthly fee to talk to the AI as if it were human. And if we want full access to all the features in their multimodal stack, we’ll have to pay a premium.

This might very well be OpenAI’s ultimate plan: get us attached to our virtual companions and then charge a substantial fee. As I outlined in my previous article, Digital Disalignment, this shift might happen without us noticing, and by the time we do, it could be too late to turn back.

Meanwhile, Meta has been experimenting with AI features in their Ray-Ban glasses. Images captured by these glasses can and will be used for AI training, whether you agree to it or not; at the moment, there’s no way to opt out. It comes down to a matter of trust: we have to trust that the robots will handle our data responsibly and hope they have secure systems to prevent it from falling into the wrong hands. We also have to hope that all of this data collection will be worth it in the end, when these companies reach a level so advanced that it justifies the sacrifices we’ve made in terms of privacy. But will that day come? Who knows. Will it improve our lives? Only time will tell.

If we trust the robots—and by extension, the tech companies—then we’re fast approaching a point where we’ll be making a decision, whether consciously or unconsciously, to hand over the last piece of our digital dignity and share every aspect of our lives. At that point, we’ll be relying on these companies to give us something valuable in return. But what if all we get are higher fees, poor privacy policies, or addictive tendencies that current laws aren’t equipped to handle, just like with TikTok? As a society, we’ll need to find a solution. Let's just hope we recognize all sides of the equation before it’s too late.