Cognitive warmup. I pointed out some weeks ago that the ‘AI browser’ conversation has gone cold. Perplexity, OpenAI, and everyone else who tried to make one have been rather silent these past few months. I can say with some confidence that even if they harbour hopes of relevance, not everyone will be able to share the same space with Google. The tech giant’s AI browser salvo, which is essentially Gemini in Chrome, would have started rolling out in India by the time you read this.
ALGORITHM
In this week’s Neural Dispatch:
- Google + Chrome + Gemini = 🔥
- Ray-Ban has a Meta problem
Google + Chrome + Gemini = 🔥
Google’s pitch is simple: put as many access points to Gemini in the user’s journey as conveniently possible, and give each of those localised relevance.
For instance, Gemini within Gmail has more specific context and more focused capabilities than the standalone Gemini app would otherwise have when answering a query or drafting an email. Within Chrome, Gemini hopes to find relevance with everything, everywhere, all at once: from summarising a web page to shopping for you, and working across Google apps for relevant information. It’s a full-fledged chatbot for the world’s queries.
That’s a true AI browser, if ever there was one. First, there’s Chrome’s market-share scale. Second, there’s Gemini’s context advantage, if you’re part of the Google ecosystem. Gemini in Chrome will need at least the Google AI Pro subscription tier, priced at ₹1,950 per month (which also includes 2TB of storage).
Ray-Ban has a Meta problem
Some of you may have noticed that I haven’t reviewed the Ray-Ban Meta Gen 2 glasses, which launched in India a few months ago. After all, I had written about the experiential performance of the first-generation Ray-Ban Meta. Why, you may be wondering? AI glasses feel cool for a while, but the more I pondered, the greater my discomfort with the idea grew. Walking around with a device that hears and sees everything you do may seem fine to the wearer, but we live in a society where such devices are a privacy breach for everyone in the wearer’s line of sight. Those people have no inkling that they are being recorded (I do not mean a user actively recording; you know what I’m getting at) or listened to. And to be fair, no one really knows where this data otherwise goes.
Now we know, because Swedish newspaper Svenska Dagbladet has published a worrying report that summarises the privacy nightmare that is Ray-Ban Meta, with this line: “Bank details, sex and naked people who seem unaware they are being recorded”.
We may understand that “Hey Meta” is the keyword to invoke action, but the report indicates that many a time, people had no idea they were being recorded. And every piece of footage recorded by any pair of Ray-Ban Meta glasses in the world lands in the hands of Sama, Meta’s subcontractor based in Nairobi, Kenya.
Estimates suggest more than 7 million pairs of Ray-Ban Meta glasses were sold in 2025. That alone should convey the scale of the problem.
- Workers told Svenska Dagbladet that they regularly see deeply private moments in the footage—people undressing, using the bathroom, having sex, and even exposing bank card details by accident. As one worker put it: “We see everything, from living rooms to naked bodies.”
- Meta says its automatic face-blurring system protects people caught on camera. Workers say that protection often breaks down, especially in poor lighting. Faces that should be anonymised sometimes remain fully visible. Someone recorded without their knowledge could end up clearly identifiable to a reviewer in Nairobi.
- Hidden inside Meta’s terms of service is a line that carries major consequences: the social-media company reserves the right to conduct “manual (human) review” of AI interactions. That clause creates the legal basis for sending highly intimate footage from people’s homes to low-paid workers bound by NDAs, monitored by office surveillance, and discouraged from asking questions. According to workers, raising objections can cost them their jobs.
- This is the same contractor, Sama, that TIME magazine had reported on in 2023 for paying Kenyan workers about $2 an hour to label graphic content for OpenAI, while charging far more for their labour. Workers described the experience as traumatic. After that contract ended, Sama moved on to labelling footage from Meta’s smart glasses. I’m not sure if the costing and salary structures have changed since.
- Meta markets these glasses as being “designed with your privacy in mind”. In practice, that privacy safeguard amounts largely to a small LED on the frame that many people may not even notice. Behind the scenes, the footage can be sent through a system involving a contractor accused of worker exploitation, and weak anonymisation in general. Meta stands on very shaky ground, as it mostly has throughout its existence, around “privacy”.
- The next step may be even more troubling: Meta is reportedly planning facial recognition for future versions of these glasses, a conversation that has been simmering for the past few weeks. That means a system that already struggles to reliably blur faces in training footage could soon be used to identify people as a feature.
Is this a broader warning about AI gadgets in general? I’d most certainly categorise it as such.
THINKING
👆 This sentiment was echoed by Huang in an interview with CNBC in the US in the week of 25 February, after benchmark indices S&P 500 and Nasdaq finished their second straight week in the red, with the S&P 500 falling to approximately 6,603 by then. This comes after months of AI bros incessantly telling the world that AI agents will replace humans in the workplace and that we should be prepared to live out the rest of our lives jobless.
Turns out, such bravado—and utter nonsense—to keep the AI bubble and circular financing afloat, eventually has an expiry date.
Huang argued that AI agents will actually increase the use of existing software tools, acting as expert users on behalf of humans, rather than replacing the companies that make them. Despite this optimism, the market has been sceptical. Really? You don’t say!
Mind you, this sudden discovery of “balance and restraint” still isn’t for humans or the jobs that sustain their livelihoods, but for AI funders hit by the rout on Wall Street. Read between the lines.
Huang believes AI agents will populate “systems of record” (like those from ServiceNow) more efficiently than humans, driving more—not less—software usage. He envisions every company becoming an “AI factory” where software and AI are inseparable, which he expects will eventually drive a massive earnings cycle. Of course, the market got it wrong. They didn’t check with the AI companies first.