Just Because We Can: The Madness of Modern Innovation
Delving Into the Ethics of Modern Innovation and What Happens When We Forget to Ask the Big Questions
We humans have always been tinkerers. From poking at fire until it became something vaguely controllable to figuring out that a wheel works better on a cart than a square block, we’ve always been driven by the sheer joy of solving problems—whether or not they needed solving.
In the 21st century, we’re no longer just dabbling in physics and engineering; now we’re rewriting the genetic code of life, teaching machines how to think, and casually debating whether we should reflect the sun’s rays back into space. The mantra of innovation has become a loud, unrelenting “Can we?” while the quieter voice of “Should we?” seems to have taken a very long tea break.
Take CRISPR, for example—a gene-editing tool so precise it’s been called “molecular scissors.” It promised breakthroughs in curing diseases, ending hunger, and other heartwarming utopian dreams. In 2018, He Jiankui, a Chinese scientist, took CRISPR out for a joyride and edited the genes of human embryos. His reasoning? To make them immune to HIV. While that sounds noble enough, the scientific community collectively slammed on the brakes. Why? Because he not only bypassed ethical oversight but also edited the human germline—DNA changes that could ripple down generations with consequences nobody can predict.
Those two embryos, known as “Lulu” and “Nana,” became the world’s first genetically modified humans. And while He envisioned himself as some sort of pioneer, others saw Frankenstein with a pipette. There were immediate concerns about safety, consent, and the unsettling possibility of a future where only the wealthy could afford to “design” their offspring—turning genetic engineering into a dystopian version of shopping for premium upgrades. He Jiankui was sentenced to three years in prison for his rogue experimentation, but the ethical quandary remains: when we play God, what’s to stop us from also playing the Devil?
Speaking of devils, let’s turn to weaponised AI and its less-than-angelic offspring: autonomous drones. Drones started as remote-controlled devices for surveillance and reconnaissance but have since evolved into full-on fuckhead murderbots. Israel, for instance, has used drones extensively in its “fight against Hamas”, a fight that apparently extends to every other person in Palestine, to Hezbollah and many others in Lebanon, to Iran, and now to Syria. In one recent strike, the Israel Defence Forces (IDF) deployed drones to assassinate Hamas leaders in Lebanon, prompting UN experts to condemn the action as a potential breach of international law. Lebanon’s sovereignty was blatantly ignored, and no legal justification was provided, leaving critics to label the strike a thinly veiled extrajudicial killing. Israeli drones are routinely used to shoot kids in the head when the IDF doesn’t have enough snipers on hand.
We can’t pretend this is a one-sided problem. Hezbollah and other non-state actors have also embraced drones, using them to carry out retaliatory strikes against Israeli targets, although their targets have, by and large, been selected more ethically than those carrying dolls and toys. As these technologies become cheaper and more accessible, they’re being turned into tools for asymmetric warfare, with implications that make even the most seasoned ethicists break out in a cold sweat.
But it’s not just humans directing drones. Autonomous weapons like Turkey’s Kargu-2 have already made battlefield appearances. In Libya, this drone allegedly hunted and attacked targets independently, without human input—a milestone as terrifying as it sounds. Machines making life-and-death decisions without human oversight? What could possibly go wrong? Proposals to regulate such weapons, including treaties to ban fully autonomous systems, have largely stalled, leaving us teetering on the edge of a future where wars are fought by algorithms with no sense of morality.
And yet, somehow, it gets worse. Okay, maybe not worse than targeting and murdering kids in cold blood, but bad, crazy… borderline psychopathic…
Let’s consider geoengineering: the audacious, hubristic plan to fix climate change by directly altering the Earth’s systems. Ideas like spraying reflective aerosols into the stratosphere to cool the planet, or fertilising the oceans with iron to absorb carbon dioxide, might sound like the premise of a Disney sci-fi blockbuster, but they’re actively being researched. The ethical dilemmas are enormous. What if these interventions trigger unforeseen environmental disasters? What happens if powerful nations deploy them unilaterally, turning the global climate into a geopolitical weapon? For all our enthusiasm about “engineering our way out” of climate disaster, we must wonder if we’re just digging the hole deeper.
Of course, history offers plenty of cautionary tales. The Manhattan Project, for example, was a technological triumph that gave us the atomic bomb and, eventually, nuclear energy. Facebook, once heralded as a tool to connect the world, became a hotbed of disinformation and political manipulation thanks to its algorithmic obsession with engagement at any cost. And let’s not forget social media’s less catastrophic but equally horrifying side effects, like TikTok dances and the overuse of the word “slay.”
It’s not all doom and gloom, though. Sometimes, humanity gets it right. After Dolly the sheep was cloned in 1996, the global outcry against reproductive cloning was swift and decisive. Many countries implemented bans, recognising that just because we could clone humans didn’t mean we should. Similarly, the 1972 Biological Weapons Convention showed that even the most cutthroat superpowers could agree on one thing: unleashing bioweapons is probably a bad idea.
But these moments of restraint are the exception, not the rule. In most cases, innovation charges ahead, dragging ethics along like a reluctant younger sibling. AI researchers, for instance, are already grappling with the ethical implications of artificial general intelligence (AGI)—a hypothetical future AI that could surpass human intelligence. If we create a sentient machine, does it deserve rights? And if so, what happens when its rights conflict with human priorities?
Meanwhile, Elon Musk and others are racing to colonise Mars. But while the idea of escaping Earth’s problems by terraforming another planet sounds appealing, it raises uncomfortable questions. Should we prioritise fixing our own planet before messing up another? Will space colonisation become just another playground for the ultra-wealthy, leaving the rest of us to deal with worsening crises on Earth?
This is where philosophers come in—or at least, where they should come in. Embedding ethicists in research labs, tech companies, and military think tanks might seem like a bureaucratic buzzkill, but it could be the best way to prevent our next great invention from becoming our next great catastrophe. Imagine if philosophers had been consulted before He Jiankui edited human embryos or before Facebook prioritised outrage over truth. Maybe, just maybe, we’d have avoided a few disasters.
Ultimately, the problem isn’t our capacity to innovate; it’s our failure to ask the right questions. The next time someone excitedly proclaims, “We’ve built this amazing thing!” the appropriate response isn’t a round of applause but a moment of thoughtful silence—and maybe, just maybe, the whispered question: “Should we?”
The New Face of Warfare: How Low-Cost Drones Are Revolutionising Conflict
I’ve been watching a series about Greek myths, and the lesson that keeps coming up is that when humans either disobey the gods or elevate themselves through hubris to the status of gods, the inevitable result is grievous tragedy. The trouble is that probably only some obscure, eccentric academic knows anything about these myths anymore; humans refuse to learn from their mistakes, and they get carried away with their ingenuity and enthusiasm for new technologies, especially if there’s a pile of money to be made.