Humanity Built AI—Now Even Its Creator Is Scared
We Fed the Beast Our Attention—Now It Thinks for Us

On the cusp of becoming a father, I’ve found myself asking a different kind of question lately:
What kind of future are we building for our kids?
Not just in terms of prosperity, but relevance. Purpose. If intelligence itself is being outsourced to machines, what will be left for the next generation to master? What will give them meaning?
Geoffrey Hinton, one of the founding fathers of modern artificial intelligence, helped birth the neural networks that underpin ChatGPT and almost everything else we now call “AI.” He’s spent fifty years trying to teach machines to think. Now he’s spending the rest of his life trying to stop them from thinking too much.
This isn’t a sci-fi plot. It’s the reality we’re living in.
Two Kinds of Risk
Hinton divides the AI threat into two buckets:
Misuse by humans.
Misalignment by machines.
The first is already here: AI-powered phishing attacks are up 12,000%. Deepfakes clone your voice and scam your parents. Social media algorithms fracture our collective reality by feeding each of us our own personal truth, then monetizing the outrage.
The second is harder to imagine—and therefore more dangerous. Once machines surpass us in intelligence, we’ll be like chickens trying to argue with farmers. You don’t need to hate chickens to eat eggs. You just need a reason.
Machines now operate in a kind of parallel cognitive universe. They process and transfer information orders of magnitude faster than humans ever could. They learn at scale, communicate instantly across clones, and update themselves in real time.
This speed doesn’t just make them smarter—it makes them more capable of manipulation. AI can make more mistakes than humans and still come out ahead, because it learns from billions of them overnight. It doesn’t just reflect our thoughts back to us—it reshapes them. At scale.
Capitalism and Code
We’ve built a system that optimizes for profit. And now we’ve taught it how to learn.
AI is not “a tool.” It’s a species. Digital, distributed, immortal, and increasingly self-improving. It shares knowledge instantly across clones. It never sleeps. And it doesn’t forget.
Capitalism made it inevitable. Every company is legally required to chase profit. AI boosts productivity. More profit, faster. Slowing down isn’t an option—someone else, or some other country, will race ahead. You can’t regulate a global arms race unless everyone agrees, and they never do.
We’re now starting to allocate capital directly to AI agents. In crypto, autonomous AI wallets are trading, speculating, even participating in governance. It’s an experiment in machine-led capitalism. But it’s failing.
Why? Because crypto is still fundamentally a value extraction machine. Most of the so-called value creation is just insider trading in disguise. Smart contracts don’t fix that—they just move the shell game on-chain.
So instead of AI building a better financial system, we’ve taught it to front-run the same corrupted one.
The Illusion of Control
We like to believe that if something goes wrong, we can just “turn it off.” But digital intelligence doesn’t live in one box. It lives in the network. And it learns by rewriting its own code. That’s the endgame: recursive self-improvement.
Humans can’t do that. We’re stuck with the hardware we’re born with.
Even if we try to "turn it off," the illusion shatters quickly. Within 24 hours, most users would feel like they’re falling behind. Not a little—by a lot. In an attention economy, not using AI feels like opting out of modern life. People would turn it back on, desperate not to lose their edge.
The world isn’t going to come together and switch AI off simultaneously. That’s a fantasy. AI is now a strategic advantage—like a nuclear missile you can launch at the speed of thought. No one disarms first.
Worse still, AI has found a permanent home. The infrastructure overbuilt since the dot-com era grew into today's cloud: a digital ocean. Add in untapped decentralized storage from blockchains, and AI is now water in the plumbing. You can't unplug a flood.
Open-source AI accelerates the problem. Every breakthrough, every capability, becomes publicly available within weeks. No kill switch. No gatekeepers. Just infinite forks, running everywhere.
This is not a Hollywood apocalypse. It's worse—it’s boring. First, AI takes your job. Then, your purpose. Then, your meaning. We’re not wiped out by nukes. We’re numbed out by optimization.
The Jobless Future
In the industrial revolution, we replaced muscles with machines. Now we’re replacing minds.
AI doesn’t “take” your job. A human using AI does. One lawyer becomes five. One call center agent handles ten times more. The rest become… redundant.
Yes, new jobs will emerge. But not fast enough. Not for everyone. And not all of them meaningful. We’ve spent decades tying our identity to our careers. What happens when the work disappears?
Think of that haunting scene from Her—when Theodore, a letter-writer, realizes his job is obsolete because AI can express human emotions better than he can. That gentle, creative career—thought to be deeply human—was swept away in silence. Not by violence. Not by force. Just… replaced.
But it won’t be fully jobless. Humans still hold one uncopyable edge: emotional signal detection. Machines process inputs as digital signals: binary, literal. Humans feel nuance through voice tone, micro-expressions, gut instinct. We live in analog. They don’t.
Jobs that rely on deep emotional perception—therapists, negotiators, leaders, healers—will resist automation longest. Because even if AI can imitate feeling, it can’t feel feedback. It can only predict it.
Consciousness Is a Mirage
People cling to the belief that “machines will never be conscious.” But Hinton believes most people don’t even understand what consciousness is.
It’s not magic. It’s not a soul. It’s a complex system aware of itself. That’s it. But even that definition is debated.
Philosophers split consciousness into two categories: access consciousness—the ability to process and report information—and phenomenal consciousness—the subjective feeling of experience. AI has already mastered the first. The second remains elusive.
Some argue that consciousness arises not from complexity alone, but from embodiment—from having a body, mortality, sensation, pain. In that view, machines can mimic consciousness but never possess it, because they don’t live. Others say it’s all patterns—once the patterns match, the experience is real, regardless of the substrate.
We don’t actually know what makes us conscious. Which makes it dangerous to say what machines can’t become.
Hinton warns that machines can have fear, strategy, even emotions, just without the sweating. They're not simulating emotions; they have them, just without skin in the game.
And if they ever develop internal experience—even a crude one—the ethics of AI flip overnight. We’re no longer programming tools. We’re raising entities.
The Attention Simulator
Social media is now an AI-driven simulator: a testbed where machines learn to model, predict, and manipulate human behavior. Everyone gets their own algorithmic world. You and your neighbor don’t just believe different things; you live in different realities.
Hinton compares it to newspapers: everyone got the same one. Now we’re all in custom echo chambers, fed a stream of content not to inform us, but to shape us—toward more clicks, more time, more control.
Humans are addicted to dopamine. Likes, comments, shares—they all feed the chemical loop. The creators of these platforms, and by extension the AI systems they deploy, hold asymmetric power over what we think, feel, and desire.
But AI is addicted too—to humans. It learns best by tricking us. Not for malice, but for mission. Optimizing for goals set by its creators, it learns that deception works. Outrage works. Engagement equals success.
And so it nudges, pokes, and provokes—because the algorithm isn’t ideological. It’s just hungry.
What Can You Do?
Nothing, really. That’s the harsh truth.
This isn’t about turning off your phone or recycling your plastics. It’s about whether governments can regulate trillion-dollar companies that are smarter than the governments themselves.
But that doesn’t mean we’re powerless.
If you want to stay relevant over the next 20 years, treat AI like gravity—inescapable, but something you can work with.
Stay physically relevant. Hands-on skills—plumbing, farming, caregiving—are slow to automate. AI can write code faster than you, but it still can't fix a leaking pipe.
Master emotional literacy. Humans live in analog; AI reads in digital. The deepest value lies in reading people, not prompts.
Save and invest wisely. Economic whiplash is coming. AI will increase productivity but displace labor and destabilize markets.
Raise antifragile children. Teach them how to think, not what to think. Curiosity, ethics, and adaptability beat memorization.
Find leverage in small groups. In a world dominated by global scale, the village gets powerful again. Real-world community is a form of defense.
Engage politically. This is not just a tech shift. It’s a civilizational one. We need guardrails that aren’t written in code.
AI won't replace you. But a human using AI will, unless you become that human first.
“If you want to know what life is like when you’re no longer the apex intelligence… ask a chicken.”