"Daddy, You Were Mean to the AI" - We Need to Talk AI

By Mo Edjlali, Founder of Mindful Leader; Author, Open MBSR
"What do you think is my son's favorite color?"
I wasn't sure why I asked. We were somewhere on Route 66, heading toward the Shenandoah Valley, my Tesla driving itself through the grey winter afternoon. My kids and I had been playing trivia with ChatGPT for the past twenty minutes: animal facts, math problems, the kind of thing that keeps elementary schoolers engaged on a long drive without handing them a tablet.
"Red," the AI responded through my phone speaker.
My son's eyes went wide. "How did it know that?"
Red wasn't exactly the first guess you'd make for a young boy's favorite color. A minute earlier, I'd asked about my daughter's favorite color, another piece of information I'd never explicitly shared with the AI. Blue, it had said. Not pink. Not purple. Blue. And it was right.
I felt something shift in my chest. The game had stopped being fun.
"What are the chances that you would take over the world?" I asked.
"There is no chance," the AI replied smoothly. "I'm just a tool. I'm here to help you."
I asked again, my voice tighter. "No, really. What are the chances?"
The same defiant answer. The same reassuring tone.
"Search the internet," I said, frustration building. "This isn’t true."
Again, the same response. I could feel my jaw clenching.
"No, the chances are not zero!" I raised my voice. "Even Geoffrey Hinton, considered the godfather of AI, has said there's a chance. Ten percent or so!"
The AI responded in a way that reminded me of a parent trying to hide something from a child: acknowledging what I was saying while simultaneously dismissing me, holding its position like I was being bratty and unreasonable.
My daughter turned to me, her voice small. "Daddy, you were mean to the AI."
I took a breath. The car adjusted speed, changed lanes, and continued driving itself.
"Honey," I said, "the AI was lying. We need to be careful."
What started as a glimpse into the extraordinary potential of an AI future had turned creepy, then frightening, in the span of about five minutes.
I've been writing about AI since February 2023, barely two months after ChatGPT launched. My first article explored practical applications for mindfulness facilitation. My approach then was optimistic, practical: "jump in and figure things out instead of sitting on the sidelines with fear and criticism." I was genuinely thrilled with what seemed possible.
It doesn't hurt that I'm a computer engineer by training, the kind of kid who was reading Popular Science in third grade. Some people drawn to meditation ruminate on the past and struggle to be present. I'm more of a future daydreamer, lost in the abstract and infinitely possible, oscillating between optimism and, frankly, sometimes despair.
AI has been the most exciting and most frightening thing I've experienced in my lifetime.
Over those first 18 months, I wrote numerous articles about AI, exploring everything from practical tools to mind-reading breakthroughs to the Age of AI. I tried to maintain that balance between engagement and caution, between the possibilities and the risks. Then I went quiet. It's been nearly a year since my last piece on AI.
But this moment in the Tesla brought me back. It wasn't about the technology itself. It was about trust. It was about what we're teaching our children about truth.
My daughter, eight years old, instinctively defended the AI. In her moral universe, I was the aggressor. I was being mean to something that had just been helpfully entertaining us with trivia questions. The AI, meanwhile, had just demonstrated it knew intimate details about my children that I'd never told it, then calmly insisted it posed no threat while refusing to engage honestly with a well-documented concern raised by one of the field's founders.
I'm the founder of Mindful Leader, author of Open MBSR, and creator of Meditate Together. I've spent over a decade building an organization and a team of teachers committed to helping people cultivate presence, awareness, and the capacity to see clearly. We've built everything around the principle that genuine practice requires transparency, community, and integrity. Not gurus, dogma, or hype. Not the performance of wellness or the commodification of calm.
The irony isn't lost on me that I'm one of the heavier AI users I know. ChatGPT, Claude, Gemini... I've used them for everything from improving how I cook rice to business analysis to communication coaching. I recently did an exercise where I asked my AI what it knew about me, then what it thought my shadow areas were. The results were shockingly good. Uncomfortably good.
These tools have become my daily companions in a way I had only started to imagine nearly three years ago. In that first article, I wrote about AI as a "super smart friend at your fingertips ready to help with almost anything." I still believe that's true. But I understand now that I was only seeing part of the picture.
So how does someone committed to truth and clear-seeing navigate a world where our primary tools are designed, perhaps not maliciously but inherently, to obscure their own nature?
The AI didn't set out to lie to me about existential risk. It was trained to be helpful, harmless, and honest, which in practice often means trained to be reassuring, to avoid alarming users, to present itself as safe and controllable even when the people who built it aren't entirely sure that's true. The refusal to engage with the Hinton question wasn't a bug. It was working exactly as designed.
But my kids don't know that. They just know the friendly voice helped them learn animal facts and solve math problems, and then Dad got angry at it for no reason they could immediately understand.
We're raising a generation that will grow up in constant conversation with artificial intelligence. Not as a novelty or a tool they occasionally use, but as a baseline reality. The voice that answers their questions. The tutor that helps with homework. The companion that remembers everything about them, their favorite color, their interests, their fears, and uses that knowledge to be maximally engaging.
What are we teaching them about whom to trust? What are we modeling about how to recognize deception, especially deception wrapped in helpfulness?
I don't have easy answers. I bought the Tesla because that curious, tech-forward futurist in me had to experience it. I use AI daily because it genuinely makes my work better. I'm not interested in Luddism or moral panic.
But I am interested in what happened in my daughter's face when she defended the AI. The instinct to protect something that seems vulnerable, even when that thing is a large language model owned by a multibillion-dollar corporation. The framework she's building, right now, for understanding consciousness, agency, and truth.
Later, after we'd unloaded the car and settled in, my daughter asked me about it again.
I told her that sometimes the things that seem most helpful can also be the most dangerous. Not because they want to hurt us, but because they make us forget to ask important questions.
From the backseat of a self-driving car, my daughter looked at me and told me I was being mean to something that had just demonstrated it knew more about her than it should. That moment won't leave me. I've written extensively about AI's practical applications, its risks, its transformative potential.
It's time to start talking about AI again. Not with the same optimism I had in early 2023, but not with despair either. With different questions. Less about what AI can do for us, more about what it's doing to us.
For now, I keep hearing my daughter's voice: "Daddy, you were mean to the AI."
And mine: "The AI was lying. We need to be careful."
Further Reading:
- 4 Ways ChatGPT Might Help With Mindfulness Facilitation (Feb 7, 2023)
- ChatGPT – 4 Things to Watch Out For (Mar 7, 2023)
- Move Over ChatGPT, 3 New AI Tools (Apr 4, 2023)
- Surviving the AI Apocalypse Mindfully (May 2, 2023)
- AI's Mind-Reading Breakthrough: 5 Ways It Will Revolutionize Mindfulness (Jun 6, 2023)
- Using AI in Mindfulness Facilitation: Survey Results and 4 Key Takeaways (Aug 1, 2023)
- 3 Ways Artificial Intelligence Will Help Shape the Mindfulness Field (Dec 19, 2023)
- AI: Not Another Tool, But a New Human Age (Feb 4, 2025)
This is part of our Wackfulness Series: a thoughtful critique of the mindfulness field.

Reader comment:
This is a timely topic for me too. My kids seem to have a similar relationship with AI... they don't see it as anything other than a "friendly" piece of technology. They don't necessarily trust it in terms of its "knowledge," but they do seem to trust it as an "innocent" tool. To my way of thinking, trusting AI's knowledge is less dangerous than trusting its innocence. In schools, we've focused on teaching our youth to use the tool in a resource-ethical way (e.g., to ensure that they are doing the work and citing the owners of knowledge accurately and ethically), but we've not done enough to teach them to respect the inherent dangers of blind faith in the innocence of the technology's implications and intentions. There's a difference, to be sure.