The quiet way AI takes over

Set Status: Online / S01E03

It didn’t arrive with a single dramatic moment where everything changed and we all noticed. It seeped in quietly. Through recommendations, filters, predictions, shortcuts. Until one day it wasn’t a tool anymore. It was just… there.

In this episode, I start with a question everyone keeps asking and slowly realise it might be the wrong one.

Not “will AI destroy us?”
But “what have we already handed over without noticing?”

The real shift wasn’t intelligence, it was access

AI has existed for decades. Most of it lived in the background, buried in systems and infrastructure. Fraud detection. Spam filters. Autofocus. Routing traffic.

The real shift happened when AI got a user interface.

Suddenly we weren’t just moving through systems powered by AI. We were talking to it. Asking it to write, plan, translate, design, decide. That’s when it started to feel human. And that’s where things get complicated.

Why trust is the uncomfortable part

One of the most subtle risks with AI isn’t bad actors or deepfakes. It’s confidence.

AI sounds sure of itself, even when it’s wrong. And when you know enough to spot the mistake, fine. When you don’t, it quietly becomes an authority.

If more and more decisions get outsourced to systems that can hallucinate, the uncomfortable question becomes: who’s responsible when things go wrong?

Power doesn’t disappear, it concentrates

Even if AI doesn’t end humanity, it can still reshape who holds power.

Models, compute, data, infrastructure. These things tend to consolidate. And history hasn’t exactly shown that monopolies are good at protecting nuance, diversity, or human messiness.

The risk isn’t intelligence.
It’s concentration.

The upside is real and worth defending

This isn’t an anti-AI episode.

AI genuinely expands access. It levels playing fields. It removes boring work. It accelerates science. It helps people create, communicate, and participate in ways that weren’t possible before.

Seen clearly, AI is a power tool. Dangerous in the wrong hands. Transformative in the right ones.

The tension to sit with

Jobs probably won’t disappear overnight. They’ll change until they’re barely recognisable.

And before AI replaces people, people using AI will replace people who don’t.

The real risk isn’t replacement.
It’s dependence.

So maybe the challenge isn’t keeping up with AI.

Maybe it’s keeping up with our humanity, and keeping it interesting enough that it’s not worth outsourcing.

Listen to the full episode

And consider subscribing to the podcast, so you never miss an episode!

Full Transcript

00:01
Hey, hi, hello. We’re online. And so are you. In this episode, I’m cheekily calling into question the question that everyone keeps answering: will AI be our downfall?

00:20
I’m hoping I can dive into a little bit of where I’m currently sitting with all the things flying around about AI nowadays. It’s maybe a little bit clichéd and over-discussed at the moment, but it has a very direct impact on me, both professionally and personally.

00:50
We interact with AI every day, sometimes without even realising it. Think every time Netflix suggests a show, or your email filters spam, or your phone unlocks with your face, or even Google Maps rerouting because of traffic. That’s all already AI.

And it’s not the flashy robots-taking-over kind, but a quiet background intelligence deciding what we see, what we click on, what we buy, and what we believe.

01:22
So I often think maybe the question isn’t “will AI be our downfall?”, but rather: has it already become so integrated that we don’t even notice it?

01:33
I don’t think AI will destroy us in a single dramatic event. Maybe I’m being naively optimistic, but it feels more like a slow osmosis, even if it’s happening very quickly.

01:51
Before we get into the future, I found it interesting to look at the past.

AI’s roots go way back. Around 1950, Alan Turing proposed the Turing Test. In 1956, the Dartmouth Conference coined the term artificial intelligence.

Back then, computers filled rooms and AI mostly meant teaching machines to play chess or solve logic puzzles.

02:22
By the 1980s and 1990s, expert systems started creeping into industry. Helping doctors diagnose illnesses, banks assess credit risk, that kind of thing.

AI existed long before ChatGPT made headlines. It looked more like rules and algorithms than creativity and conversation.

03:06
Most of us have used AI for decades without knowing it. Recommendation systems on YouTube or Spotify. Fraud detection in banks. Spam filters. Predictive text. Siri and Alexa. Camera autofocus.

It’s been the internet’s plumbing for a long time. It routes information, filters noise, keeps things running.

04:04
The big difference now is that it has a user interface.

We can talk to it, not just through it. And that’s the tipping point.

AI feels threatening because it feels human. We’ve personified it far more in the last few years than we ever did before.

04:34
That shift really happened when AI went public around 2020 and 2021.

GPT-3, then ChatGPT in 2022. Suddenly everyone was using it. I remember running AI-generated art events in Cape Town when DALL·E first came out. The technology felt almost dissociative in how powerful it was.

05:31
Add Midjourney and Stable Diffusion and AI moved from background process to creative collaborator.

That’s when the public imagination shifted from AI as automation to AI as identity.

06:21
Now every major app integrates AI. Search, office tools, design software, productivity platforms.

We’re in an AI-everywhere economy.

McKinsey’s 2024 State of AI report found adoption doubled between 2022 and 2024. We’re long past asking whether AI changes work. It already has.

07:49
The bigger question is how much control we have over the shape it takes.

There are obvious threats. Bad actors. Deepfakes. Scams. Manipulation.

But the less obvious one is over-trust.

AI sounds confident. And when it’s wrong and you don’t know enough to question it, that’s where things get dangerous.

09:14
Underneath all of this sits concentration.

Even if AI doesn’t destroy humanity, it can consolidate control over it. Models, compute, data. And the only thing more dangerous than AI is a human using AI with power.

10:05
There’s real opportunity too. Efficiency. Creativity. Access.

AI removes busywork, levels playing fields, accelerates science, and expands inclusion through translation, captioning, and accessibility tools.

I see AI as a power tool. Dangerous without intention. Incredible with it.

11:44
Regulation is struggling to keep up. The EU AI Act, frameworks in the US and China, all chasing a moving target.

Which is why I think literacy matters more than bans. Knowing how AI works, when to question it, and how to coexist with it responsibly.

14:58
Looking ahead, AI is becoming more agentic. Systems that don’t just respond, but act.

The real bottlenecks now are energy and data, not imagination.

AI won’t take our jobs overnight, but it will make them unrecognisable.

16:27
Dependence is the real risk.

Use AI to speed things up, but don’t outsource your thinking. If AI becomes the only way out of a problem, that’s a dangerous place to be.

17:25
Before AI replaces people, people using AI will replace people who don’t.

So will AI be our downfall? Probably not in an apocalyptic sense.

But it could quietly erode trust, autonomy, and originality.

Only if we let it.

So maybe the real challenge isn’t keeping up with AI.

It’s keeping up with our humanity.
