Three taxis
My ongoing journey with AI
I saw a Waymo for the first time in 2024, when I spent a day in San Francisco. It was weird watching a car move around with nobody behind the wheel, or even in the car! I have now gotten used to seeing these driverless cars zipping around, hearing stories of them getting confused in the middle of the road, and of people getting stuck inside.
I took my first ride with my wife a few months back. The Waymo came up the driveway of the hotel, and instinctively, I raised my hand to show where we were standing. This had my wife laughing for a while. Thankfully, I did not (though I almost did) wish the Waymo a “Good evening, how are you?” when I sat inside.
I love good technology. Dropbox was a boon, and I have loved using Khan Academy, interactive software like GeoGebra, and simulations from PhET and the Freudenthal Institute. But I now pause when it comes to AI.
Conversations about AI, especially with people applying it in social development, are often punctuated with guardrails, ethical use, responsible use, and the like. I wonder why there is so much caveating and careful positioning around #AIforGood. Is it because we are aware of the misuse cases and #AIforBad?
The totem
Alan Turing proposed the imitation game, now known as the Turing Test, to assess a machine’s capability to think like a human. A human judge converses with a machine and another human; if the judge cannot tell the difference, the machine passes the test. As AI becomes more capable, the test seems to have flipped: now it is we who must work out whether we can distinguish machines from humans.
Companion bots that people fall in love with and even die for, bots on social media platforms that amplify hate, and even “grief tech” where one can interact with AI avatars of loved ones who have passed away! It seems that every human emotion is being modelled and machined.
How much humanness do we want from AI? Or, like artificial sweeteners, do we want that aftertaste to tell us what it isn’t? Do we want people to believe that there is a trusted person on the other side? Or do we design it just well enough to be useful, but users know it is a machine?
How will we know what is real and what is not? In the movie Inception, people had totems – unique objects to help them distinguish between reality and the dream world. In an AI-powered world, I wonder what my totem could be to help me distinguish between humans and machines.
On an Uber ride, we were surrounded by Waymos, and I asked the driver how he felt about these driverless cars. Surprisingly, he was all for it, reasoning with a degree of fatalism that new technologies would keep coming and that we must evolve with them. He even argued for it and shared its merits – a Waymo cannot be distracted, it cannot drink and drive, it cannot harass a woman, and it is not racist.
We cannot drink AI
At a conference I recently attended (one of the rare ones), a speaker who promotes the use of AI among nonprofits said that he doesn’t know much about how AI works himself – but then, he uses a fridge perfectly well without knowing how it works either.
In their early days, fridges used sulfur dioxide and methyl chloride; these refrigerants were promoted as safe but were highly toxic, and leaks caused deaths. Others used flammable gases that were prone to causing fires. These were replaced by chlorofluorocarbons (CFCs), which I read about in school for their impact on the ozone layer. Then came hydrofluorocarbons (HFCs), which, due to their high global warming potential, began to be phased down in 2016. The latest fridges use isobutane, which is hopefully ok. It has taken us over a hundred years since the first home fridges were sold to figure out what works for refrigeration.
As new technologies are fine-tuned to mitigate risks and harness their potential, they cause harm along the way, sometimes unknowingly. But how can we overlook risks that are here and now?
The case of Plachimada in the 2000s highlighted how something as commonplace in our lives as a soft drink affects communities and their access to groundwater. Now, there are several stories about how data centres are affecting rural communities.
At the same conference, the head of a large tech company’s AI for social good arm gave the keynote. In the Q&A, an audience member asked about the resources AI requires and its environmental impact. He did not evade the question, but his answer about how AI is becoming more efficient did not address it either. Ironically, one of the main examples in his speech was how AI can help farmers in India, especially in drought-prone areas, by providing weather information to plant at the right time.
The conundrum
Can I survive without AI? Clearly, yes; I have for many years. But with AI all around me, with visible shiny prompts on my devices, and in many invisible ways too, it is hard to avoid.
The lure of efficiency draws me in, too. I started experimenting with Bard (the earlier avatar of Gemini) in 2023 and tested it with prompts to draft documents. Even then, the first drafts were very good.
But lately, I have been questioning why I need AI in my life. What do I lose? Is my query worth the use of resources? Will my one query make any difference? What would I do with the time I save? Would I spend that extra time with my family and friends, or would I work more, or would I spend more time watching videos fed by AI algorithms?
I will use AI, but in addition to my totem, I need a modern version of Gandhi ji’s talisman to help me decide whether to use it more or less.
On a taxi ride from New York, I learned that my Moroccan driver grew up with Bollywood movies like Deewar and Silsila. His favourite actor is Mithun Chakraborty, and his favourite movie is Disco Dancer. We listened to the movie’s title song, and I wondered how a driverless car could help me connect.
Some inspirations
1. Derek Muller’s talk – He started the YouTube channel Veritasium, and this is one of my favourite talks from last year, in which he discusses what education really is and his fear that AI will reduce effortful practice.
2. Geoffrey Hinton’s Nobel acceptance speech in 2024, and his other talks on neural networks and the risks of AI
3. A series of videos from More Perfect Union, including this one about data centres