Regulating AI Like It’s Magic Won’t Make It Responsible

There is a growing temptation to treat artificial intelligence as something fundamentally different from every other knowledge technology that preceded it. That temptation is understandable — and wrong.

Much of today’s “responsible AI” rhetoric rests on the idea that AI is a new moral category requiring bespoke rules, exceptional consent regimes, and ritualized oversight. In reality, AI is best understood as an extension of reading, synthesis, and industrial computation — activities we already regulate.

Consider training data. A university professor may lawfully purchase books and journals, read them, synthesize the ideas within, and teach others under established copyright and fair-dealing rules. The fact that an AI can read faster than a human does not alter the ethical framework any more than a gifted scholar alters it by reading more widely than a student. Capability at scale is not a new moral category.

The correct question is not “Did the author explicitly consent to AI training?” but “Was the material lawfully accessed?” Copyright law already answers that question. If AI is trained on pirated material, that is already illegal. If it is trained on lawfully obtained material, the law already permits reading and learning.

Calls for “black-box accountability” suffer from similar confusion. Society does not require full internal transparency of every complex system before deployment. We regulate outcomes and harms. When products fail, we apply liability, negligence, and consumer-protection law. If those frameworks are insufficient, the gaps should be identified — not papered over with new bureaucratic structures that provide the appearance of control without its substance.

Environmental concerns are real — and already addressed. Data centres are industrial facilities, no different in principle from refineries or smelters. Canada already regulates energy use, emissions, and water consumption. If environmental law needs strengthening, that is a general issue, not one that should be smuggled in through AI panic.

Job displacement is also not new. From mechanization to computerization to globalization, technological and economic change has been displacing tasks for centuries. Singling out AI for special treatment does not protect workers; it avoids the harder conversation about social safety nets and labour policy that should apply across the economy.

Finally, misinformation is not an AI invention. Humans and institutions already produce it at scale, often with impunity. Labelling AI-generated content while tolerating human misinformation is not a solution; it is theatre. Existing laws against defamation and fraud apply regardless of whether the speaker is human or machine.

Trust in technology is not built through consultations alone. It is earned through performance. Aircraft were not trusted because of town halls; they were trusted because they flew safely. AI should be no different.

If we want responsible AI, we should stop pretending it is magic. It is a tool — powerful, imperfect, and entirely capable of being governed by the same legal and ethical principles that already apply to knowledge, industry, and society.
