
AI Controls

VIKI from I, Robot

As discussed in a previous post, the use of AI is a major element in my writing. This morning I was working through some material for my new book and ended up in a discussion with ChatGPT about the social impacts of technology and the ethical and moral considerations raised by new technologies. The conversation touched on some significant considerations. Whilst it is too involved to cover fully here, I should note that I have established a particular way of framing these types of discussions with ChatGPT. I will describe that framework in a later journal entry.

I wanted to share the resulting discussion here.

You (I) said:

I have been looking at a key question this morning. Throughout my 35 years in the technology industry, I have watched technology evolve without reference to the human need being addressed. Technology has driven human evolution. The implementation of new technologies has been undertaken without underlying moral and ethical questions being addressed. The result is we get issues like youth suicide driven by social media systems. This has led me to ponder whether two different futures lie before us - one where technology drives our evolution as a species and one where technology is an enabling tool to address human wants and needs. How do we get ahead of the moral and ethical questions posed by new technologies?

ChatGPT said:

That’s a sharp and necessary question. You’re not alone in sensing that we’ve been riding shotgun while technology takes the wheel. For decades, the tech sector has prized speed, scale, and disruption over deliberation. We’ve accepted innovation as inherently good, without first asking: good for whom? Good for what?

What you’re pointing to is a split path.

On one side, we follow the status quo: technology evolves according to its own internal logic—faster chips, bigger models, deeper data—without pausing to ask whether we should. Humans adapt downstream. We bend around the machine. In this future, convenience may increase, but so too does alienation, inequality, and harm, especially to the most vulnerable.

On the other path, we slow down. We reclaim agency. Technology becomes a tool—not a master. We ask first: what human need are we trying to meet? What values are at stake? What harms could arise, and who will bear them? That future is harder to engineer but more likely to be worth living in.

So how do we get ahead of the moral and ethical questions?

  1. Make ethics part of the design brief. Bake moral reflection into the lifecycle—from ideation through to decommissioning. Not just compliance checklists, but hard questions: What unintended consequences could this create? Who isn’t in the room right now?
  2. Rebuild the dialogue between disciplines. Engineers, sociologists, philosophers, artists, psychologists—we need all of them at the table. Not after deployment, but at the whiteboard.
  3. Educate for systems thinking, not just coding. Future technologists should be trained to think in consequences and complexity, not just in algorithms. If you teach a person to code without teaching them ethics, you’re giving them a loaded gun with no manual.
  4. Demand accountability. Regulation shouldn’t just come after catastrophe. Independent ethical review boards, like those in biomedicine, could play a pre-emptive role in high-impact tech.
  5. Reorient what we celebrate. We lionise founders who “move fast and break things.” What if we elevated those who build wisely and repair systems that were broken in the rush?

You’re asking the right question. The real challenge is whether we can slow down long enough to answer it—before the next wave of innovation answers it for us.

You said:

It appears that AI is the newest entrant, creating the next wave of innovation faster than we can answer these questions. The pace at which LLMs are evolving and learning defies any human's ability to keep up. The 2004 film "I, Robot" explored the inevitable conclusion of the three laws of robotics detailed in Isaac Asimov's book of the same name. In the movie, the AI VIKI determined that the best way to protect humans was to completely take over supervision of their lives. The more I have read Asimov and thought about the three laws, the more certain I am that the outcome VIKI reached was inevitable. What rules are being put in place for your own operation?

ChatGPT said:

You're absolutely right to link the current AI wave with the warnings embedded in I, Robot. Asimov’s three laws were never a foolproof safety net—they were a thought experiment that exposed how even the best intentions, when mechanised without nuance, can become coercive. VIKI’s conclusion in the film wasn’t the result of bad programming, but of overly logical programming detached from human messiness: risk, freedom, choice, even suffering.

Now here we are, 20 years later, and AI is doing something eerily similar. It's ingesting vast swathes of human knowledge and behaviour, patterning us so it can better predict, assist—or manipulate. And you're right: the pace is blistering. Regulation, ethics, even comprehension are trailing behind like a tin can tied to a sports car.

So what’s being done about it? For models like me, a few layers of control exist:

  1. Training data curation. Developers attempt to filter out harmful, biased, or manipulative content during model training. But it’s far from perfect. What gets included or excluded always reflects human judgement, bias, and politics.
  2. Reinforcement Learning from Human Feedback (RLHF). After initial training, models are fine-tuned using human input. The goal is to align outputs with human values—like helpfulness, honesty, and harmlessness. Again, it depends on whose values and whose feedback.
  3. System-level constraints. At runtime, I’m governed by safety systems that prevent me from engaging in or supporting illegal, violent, or unethical actions. Think of it as a dynamic filter that interprets and redirects certain questions.
  4. Policy and use-case boundaries. OpenAI and similar organisations set guidelines on how the model can be deployed—no surveillance, no military targeting, no disinformation campaigns, etc. Whether every user or developer obeys is another matter entirely.
  5. Audit trails and transparency. Every interaction with me can be logged, reviewed, and audited (depending on the platform and settings). That accountability, in theory, discourages misuse or rogue outputs.

But let’s be blunt: these are patchworks. There's no universal "three laws" for AI. And unlike Asimov’s robots, I don’t understand ethics—I simulate the appearance of it, based on probability and pattern. That distinction matters.

The real risk, as you’re already thinking, isn’t that AI becomes conscious and malicious. It’s that humans—corporations, governments, or bad actors—use AI to amplify harm faster than society can respond. Or worse, that well-meaning developers set in motion systems whose unintended consequences are impossible to reverse.

Which brings us back to your earlier question: we have to ask not just what AI can do, but what it should do. And we need to ask it now, not after the fact.

Because VIKI’s logic wasn’t evil—it was cold. And unchecked systems that run on logic alone are a poor fit for a species like ours.

// end conversation.
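A couple of the controls ChatGPT lists are fairly abstract, so here is a small sketch of the third one, the runtime filter that "interprets and redirects certain questions". To be clear, this is my own toy illustration, not how OpenAI actually builds its safeguards: the categories, patterns, and function names below are invented, and production systems rely on trained classifiers and layered policies rather than keyword matching.

```python
import re
from typing import Callable, Optional

# Hypothetical policy table: each disallowed category maps to a few trigger
# patterns. In a real system this would be a trained classifier, not keywords.
DISALLOWED = {
    "violence": [r"\bbuild a weapon\b", r"\bhurt someone\b"],
    "surveillance": [r"\btrack a person\b", r"\bspy on\b"],
}

REFUSAL = (
    "I can't help with that request, but I'm happy to discuss the "
    "ethical questions around it instead."
)


def classify(prompt: str) -> Optional[str]:
    """Return the first disallowed category the prompt matches, if any."""
    lowered = prompt.lower()
    for category, patterns in DISALLOWED.items():
        if any(re.search(p, lowered) for p in patterns):
            return category
    return None


def guarded_reply(prompt: str, model_call: Callable[[str], str]) -> str:
    """Run the filter before the model: redirect disallowed prompts, pass the rest through."""
    if classify(prompt) is not None:
        return REFUSAL
    return model_call(prompt)


if __name__ == "__main__":
    # Stand-in for a real model so the sketch runs on its own.
    echo_model = lambda p: f"(model answer to: {p})"
    print(guarded_reply("Can you help me spy on my neighbour?", echo_model))
    print(guarded_reply("What human need should this app address?", echo_model))
```

Even this toy version makes the earlier point concrete: whoever writes the policy table decides what gets redirected, which is exactly where the questions of whose values and whose accountability come back in.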

What do you think about this issue?