The new role of Big Tech has created the urgent need for ethical companies to take a clear stand
In J.R.R. Tolkien's "The Lord of the Rings", the palantíri are seven magical, indestructible "seeing stones" used to watch events in distant parts of the world. Palantír is a Quenya (Elvish) word meaning "far-seer" or "one that watches from afar" (palan = "far", tir = "to watch").
The Origin Story

In the story, the palantír's dangerous influence stems from the immense willpower required to control it. The inherent corruption of its users is one of the tale's fundamental messages.
Tolkien goes to some lengths to underline that the palantír stones do not lie, but they can be used to present selective truths that foster a false impression: visions manipulated to ensure the viewer misinterprets what they see, leading to disastrous decisions.
The "seeing stone" metaphor fits the eponymous data analysis and surveillance firm's goal of identifying patterns and gaining insight from vast, distributed datasets.
The cautionary tale does allow for the tool's use for good, in the exceptional instance in which one possesses both the strength to control it and the authority to command it. In the story, Aragorn is just such a worthy user, but the message is clear: as long as the thing exists, the evil Sauron will always scheme to wield it for dark and nefarious purposes.

The Tech Republic
This past week, the so-called "Palantir Manifesto" - a bulleted Twitter summary of a translucently technofascist tome that until recently sat at a 55% fire-sale discount on Amazon - enjoyed the beneficial exposure of the Streisand Effect as pundits clarified what everyone already knew:
- Big Tech is now enabling hard power without accountability
- Societal safeguards have been eliminated in the name of morality
- AI use for war and other sinister motives has been normalized

You don't have to infect yourself with the ignoble dystopian rhetoric of toxic techbros to realize the entire exercise is a thinly veiled book promotion built on shock value and fear-mongering.
However, the world it depicts is one where logic is an enemy and truth is a menace. As such, what should not be lost on anyone is the sense of inevitability that pervades this work of moral inversion by a "philosopher in the valley" and wannabe comic book villain whose pearls of wisdom include:
- “Our product is used on occasion to kill people.”
- “Scare enemies and, on occasion, kill them.”
- “We are making America more lethal.”
- “The peace activists are war activists.”
- “Bring violence and death to our enemies.”

Having shaped US domestic and foreign policy for years, these edgelords have naturally concocted a form of patriotic capitalism in which regulatory capture depends on seeing enemies everywhere and pulverizing them without a second thought. Or any thought, for that matter, given the unaccountable system's fundamental dependence on murky artificial intelligence.
It is that very illustration of an adversarial mindset that has led to the company being tapped to safeguard the nation's food supply, for an initial $300 million, ostensibly against the elusive influence of the Axis of Resistance - and the cleverly-dubbed Axis of Evil 2.0 - on the USA's supply chains.
As manifestos go, the justification to use dark money in manufacturing consent for eliminating civil liberties, international law, ethical norms, human rights and personal freedoms therefore practically writes itself, elegantly positioned as a sweeping necessity by way of an ideological alignment more reminiscent of Mein Kampf than of the Declaration of Independence.

Anchoring a Manifesto for Good on Solid Ethics
Since we can't allow Sauron to have the last word, it's time to take stock of what we already have and articulate a coherent posture of our own. Organizations looking to distance themselves from the predatory practices of data brokers, the invasiveness of mobile tools and the manufactured consent behind ambiguous terms should be unapologetically direct in articulating what matters most to them and their audiences.
My involvement with two of Canada's most established cybersecurity organizations - Informatica / Datarisk and the KnowledgeFlow Foundation - goes back decades. Over the years, we have sought to improve upon the transparent norms that have defined information security at Informatica and influenced how non-profits approach ethical technology use through Canada's cybersafety foundation, KnowledgeFlow. Assiduously building such a strong pedigree has been a source of both pride and purpose for me and for everyone involved in our work.
The importance of establishing strong standards and norms cannot be overstated. Such professional integrity baselines are as straightforward as they are rare, but they form the foundation of resistance to the banality of evil.
To give you an idea of the importance of clearly articulating a professional code of conduct, I could simply enumerate our ethical canons here, but I want to go one step further and demonstrate that such exercises are not mere rhetoric: they are the mortar that binds the pieces of a reputationally sound brand, and they do not slip past Google's algorithms unnoticed.
Let's have a look at how Gemini's chain-of-thought reasoning interprets these published norms, critically reflecting on the positive value of our work:
Informatica / Datarisk Canada adheres to a strict code of ethics focused on data privacy, security, and professionalism. Key pillars include acting with honesty, protecting client systems, avoiding conflicts of interest, and ensuring AI/data technology is safe, ethical, and transparent.
It goes on to say: "They emphasize regulatory compliance and responsible stewardship."
Core ethical principles include:
- Professional Conduct: Focus on telling the truth, providing competent advice, protecting confidential information, and addressing errors promptly.
- AI & Data Safeguards: Prohibit dark-pattern analytics and covert monitoring, while requiring robust governance for AI systems.
- Environmental & Social Responsibility: Minimize electronic waste and uphold zero tolerance for discrimination or harassment.
- Due Diligence: Perform human-rights risk screening and use-case safety reviews for clients.
Having duly scraped the Code of Professional Ethics of the cybersafety foundation, Gemini goes on to expertly paraphrase the non-profit organization's "Fundamental Principles and Trusted Values" in plain language:
The KnowledgeFlow Cybersafety Foundation's Code of Ethics mandates that all representatives act with honor, honesty, responsibility, and legality, driven by the principles of the International Information Systems Security Certification Consortium (ISC2). It focuses on protecting public trust, promoting secure, ethical digital practices, and prioritizing the safety of individuals.
- Fundamental Canons: Act legally, honorably, and responsibly; provide diligent service; and advance the field of cybersecurity.
- Professional Integrity: Protect the integrity of public infrastructure and the Foundation's reputation.
- Truthfulness: Maintain objectivity and honesty, ensuring all stakeholders are accurately informed on a timely basis.
- Confidentiality & Trust: Respect user privacy, maintain confidentiality of client systems, and honor all contracts.
- Conflict Avoidance: Avoid conflicts of interest or the appearance of such conflicts.
- Protection Mandate: Prioritize the safety of society, followed by the protection of individuals.
Sounding genuinely impressed, Gemini added: "The foundation also recently updated its curriculum to include AI safety and ethics, extending these ethical guidelines to the use of artificial intelligence in education."
While these were great endorsements, Sauron's manifesto did generate far more views simply because of its outrageous and dystopian imperial delusions. So it is in the spirit of resistance that I present our own proclamation. In the name of Aragorn, I encourage other ethical entities to follow suit in articulating their doctrine for good, pronouncing their own code of conduct and coherently declaring the norms that infuse their work with integrity and purpose.

The Cybersafety Manifesto
A Declaration for Human Dignity in the Digital Age
We are not against technological innovation. We have spent our lives building it, securing it, teaching it, repairing it, and helping people use it well. We know what it can do when it is guided by care. Technology can connect families, strengthen schools, protect patients, help small businesses compete, expose corruption, expand knowledge, and give ordinary people tools once reserved for the powerful.
But technology does not deserve our trust simply because it is new, profitable, efficient, or impressive. It earns trust by serving people without diminishing them.
A dangerous idea has taken hold in parts of the technology world:
- that progress belongs to those willing to build harder, faster, more secretive, more intrusive, more militarized systems;
- that freedom is protected by surveillance;
- that peace is secured by better weapons;
- that society must accept dependence, monitoring, and data extraction as the price of modern life.
This idea is often dressed up as patriotism, innovation, public safety, or national destiny. But under the polished language, it asks the same thing of people again and again: surrender more, question less, and trust the builders.
We reject that bargain.
Technology must serve human life. It must not organize human life around its own appetite for data, influence, power, or control. Cybersecurity must not become a cover story for surveillance. Artificial intelligence must not become a moral shortcut for violence. Public safety must not become a permanent excuse for public surveillance. Innovation must not mean extracting more from people who have little real power to refuse.
Cybersafety begins with a simple statement: people are not raw material. They are not data mines, behavioural profiles, risk scores, targets, or training inputs. We are all citizens, workers, students, patients, parents, children, neighbours, creators, and owners of our own lives.
To protect information is to protect people.
The Present Crisis
We now depend on systems most people do not understand, cannot inspect, and do not control:
- A family buys a smart device and hopes it is not listening too much.
- A worker opens a company dashboard and wonders who is measuring them.
- A student logs into a learning platform and leaves a trail they may never see.
- A patient trusts a health system with the most intimate facts of their body and mind.
- A small business moves to the cloud, then discovers that ownership has become access, access has become rent, and rent can be changed at any time.
This is the quiet crisis of the digital age. It does not always arrive as a scandal. More often, it arrives as a checkbox. Or a box that can't be unchecked.
Click agree. Accept all. Continue. Upgrade. Subscribe. Sync. Share. Allow.
We are told this is consent, but consent is not meaningful when the alternative is exclusion. When people must surrender privacy to work, learn, receive care, communicate, or participate in society, they are not freely choosing. They are being managed by design.
The modern digital economy practices over-collection without calling it Big Data. It has made indefinite retention the status quo, especially for children's data. It defaults to dark patterns. It has made secret scoring and dynamic pricing the norm. It has normalized subscription traps. It has made systems that manipulate attention and behaviour feel like ordinary business.
This is not nature. This is intentional architecture. But it can be re-architected.
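The claim that this is intentional architecture, not nature, can be made literal. Here is a minimal sketch, in Python, of the same consent form built two ways; the class and field names are invented for illustration and are not drawn from any real product:

```python
from dataclasses import dataclass

# Hypothetical illustration: identical consent forms, architected two ways.

@dataclass
class DarkPatternConsent:
    # Every category is opted in before the user has said anything.
    analytics: bool = True
    ad_tracking: bool = True
    share_with_partners: bool = True

@dataclass
class PrivacyByDefaultConsent:
    # Nothing is collected until the user makes an explicit, informed choice.
    analytics: bool = False
    ad_tracking: bool = False
    share_with_partners: bool = False

def collected_categories(consent) -> list[str]:
    """Return only the data categories the user has actually agreed to."""
    return [name for name, granted in vars(consent).items() if granted]

# A user who clicks "Continue" without reading surrenders everything
# under the first design and nothing under the second.
print(collected_categories(DarkPatternConsent()))      # all three categories
print(collected_categories(PrivacyByDefaultConsent())) # []
```

The only difference between the two classes is the default values, which is precisely the point: the harm is a design decision, and the remedy is the same decision made the other way.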
The New Digital Feudalism
The old landlord owned the land. The new landlord owns the platform. The old toll road charged passage. The new toll road is the cloud. The old gatekeeper held the keys. The new gatekeeper controls identity, access, visibility, reputation, and reach.
People once bought tools. Increasingly, they rent permission. They rent software, storage, entertainment, business systems, family memories, and even access to their own records. Organizations once owned infrastructure. Now many are bound to vendors whose terms they cannot negotiate and whose failures they cannot survive. You are told you will own nothing and you will be happy.
But you won't be. And you won't accept it.
This is why cybersecurity is not just a technical issue. It is a question of power.
Without cybersecurity, privacy collapses. Without privacy, free speech weakens. Without secure property, independent enterprise becomes fragile. Without transparency, accountability becomes theatre. Without education, people are left defenceless before systems designed to confuse them. Without trust, the digital economy becomes a place where everyone is watching, selling, scoring, or exploiting everyone else.
A society cannot call itself free if its people live inside systems they cannot question, cannot leave, and cannot understand. But understand, they can. All we need to do is remind them that they've done harder things. We all have.
The Rights of the Digital Person
Human rights do not end at the edge of a screen. A person does not become less free because they open an app. A child does not lose dignity because they learn online. A worker does not surrender autonomy because they receive a company device. A patient does not become public property because their health information enters a database. A citizen does not become a suspect because they move through a "smart" city.
The rights we defend in physical life must be defended in digital life: privacy, security, property, expression, association, due process, dignity, and the right to meaningful choice.
Privacy is not secrecy. Secrecy is security by obscurity, which is the very definition of a false sense of security, let alone of privacy. Privacy is the ordinary boundary that allows a person to live with dignity. It is the space to think, speak, learn, heal, love, organize, dissent, and make mistakes without being continuously measured by hidden authorities. To secure ourselves, we can develop a sense for surveillance. We can sense behaviour modification and resist forced narratives.
Security is not domination. It is protection from domination. So when our children show interest in privacy and information security, we support them.
Safety is not obedience. It is the condition that allows freedom to exist without fear. So when our children show an interest in blocking ads or setting fire to third-party cookies, we hand them a match.
A decent society does not remove every door and call it safety. It does not treat every private life as a source of intelligence. It does not force people to prove they are harmless before allowing them to live normally.
The Rights of Organizations
People are not the only ones exposed. Small businesses, schools, charities, clinics, municipalities, professional offices, and community organizations also have the right to defend their existence. They have the right to protect their data, their people, their intellectual property, their operations, their reputation, and their future.
These organizations should not be treated as disposable victims in a hostile digital economy. They should not be priced out of protection. They should not be abandoned to ransomware, fraud, abusive platforms, surveillance vendors, and contracts that push all risk downward.
Cybersecurity must become an equalizer. Protection cannot be a luxury reserved for banks, governments, monopolies, and military contractors. It must be understandable, affordable, repeatable, and available to the organizations that hold communities together.
A country is not secure because its largest institutions are secure. A country is secure when its communities can keep standing.
The Duty of Leading by Example
Leadership in the digital age is not proven by adopting the newest system. It is proven by accepting responsibility for what that system does.
If you collect data, you are responsible for it. If you deploy technology, you are responsible for its consequences. If you use systems that expose people to harm, you are accountable for that harm. If you ignore known risks, you are not unlucky when damage follows. You are at fault.
No leader should hide behind a vendor. No board should hide behind jargon. No company should hide behind fine print. No public office should hide behind complexity. Good leaders say no to invasive practices and vendors that are implicated in harmful practices.
Cybersecurity, privacy, and responsible data governance are not decorative functions. They are not insurance paperwork, annual slide decks, or emergency rituals after a breach. They belong at the intersection of fiduciary duty, operational resilience, customer trust, and public legitimacy.
Trust is not a slogan. Trust is evidence. It is what people can see, test, verify, and rely on before something goes wrong. It helps everyone when you stand firm and demand verifiable evidence.
Opposing Weaponized Innovation
We expressly reject the notion that the highest purpose of technology is to make power more efficient, especially when it demands that we make our identities more transparent.
We reject artificial intelligence built to accelerate human harm while distancing human beings from responsibility. We reject surveillance tools built to classify, intimidate, or suppress populations. We reject predictive systems that turn poverty, illness, migration, dissent, or vulnerability into suspicion. We reject identity systems that make access to ordinary life depend on pervasive tracking. We reject smart devices that spy on homes, classrooms, workplaces, streets, and bodies. We reject advertising systems that manipulate attention and behaviour while pretending to offer choice.
We also reject security theatre, fear-based selling, backdoors, dark patterns, and technologies that create dependence while pretending to create freedom.
The fact that something can be built does not mean it should be built. The fact that something can be sold does not mean it should be bought. The fact that something is profitable does not make it legitimate. The fact that something is useful to the powerful does not make it good for society.
The real questions are not only technical. They are human.
Whom does this serve? Whom does it harm? Who controls it? Who audits it? Who profits from it? Who carries the risk? Who can refuse? Who can appeal? Who can turn it off?
If your questions cannot be answered plainly, the system has not earned our trust.
The Urgency of Public Awareness
A public that does not understand digital power cannot govern it. This ignorance is convenient for those who profit from confusion. It allows companies to bury abuse in terms of service. It allows institutions to hide behind complexity. It allows governments to expand surveillance in the name of safety. It allows vendors to sell dependency as transformation.
Cybersafety education is therefore not optional. It is the foundation and plumbing of generational freedom and autonomy.
People must know how data is collected, how consent is weakened, how fraud works, how platforms manipulate behaviour, how surveillance enters ordinary life, and how ethical security can be adopted in homes, schools, workplaces, and communities. They must ask better questions before harm occurs. They must learn to recognize systems that are unsafe, unfair, or unnecessarily invasive. They must learn how to protect themselves and how to protect one another.
Knowledge protects one person. Shared knowledge protects a community. The flow of knowledge creates a web of trust that makes entire populations resilient and unhackable.
That is how resilience is engineered. Not through fear. Not through obedience. Not through blind trust in distant systems. Through education, practice, accountability, and mutual care.
A Standard Worth Defending
We call on technology companies, cybersecurity professionals, privacy practitioners, educators, directors, regulators, public officials, and citizens to adopt a better standard. To adopt this standard. To create their own baseline and share it.
Normalize this:
Do no harm. Tell the truth. Collect less. Protect more. Explain clearly. Give people meaningful choices. Refuse abusive work. Reject mass surveillance. Reject dark patterns. Reject backdoors. Reject human rights abuse. Build capacity in others. Make protection practical. Make privacy real. Make accountability visible. Make technology answer to human dignity.
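A few of these imperatives translate directly into code. As a hedged sketch of "collect less" (the allow-list and field names are assumptions for illustration, not any particular system's schema), data minimization can be enforced before anything reaches storage:

```python
# Hypothetical "collect less" in practice: an explicit allow-list is the
# only thing that ever reaches storage. Anything a form happens to submit
# beyond it is dropped, not retained "just in case".

ALLOWED_FIELDS = {"order_id", "item", "quantity"}  # the minimum needed to fulfil an order

def minimize(record: dict) -> dict:
    """Drop every field not on the allow-list before the record is stored."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

submitted = {
    "order_id": "A-1001",
    "item": "router",
    "quantity": 1,
    "birthdate": "1990-01-01",        # not needed to ship a router
    "device_fingerprint": "f3a9c2e1", # not needed at all
}

stored = minimize(submitted)
print(stored)  # only order_id, item, quantity survive
```

The design choice is the inversion: instead of blocking known-bad fields, nothing is kept unless it is affirmatively justified. What is never stored can never be breached, subpoenaed, sold, or scored.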
Compliance is a noble goal, but it is not enough. The law matters, but the law often arrives late. It is often written after the damage is done, after the breach has spread, after the public has been manipulated, after the vulnerable have paid the price.
Ethics need to be verifiable. A decent organization should not need to be forced to respect the people it serves. If an organization operates in the service of power, it fails the test and should be called out, exposed and rejected.
What Must Be Done
Corporate leaders must treat cybersecurity and privacy as the permanent duties of care. Professionals must expose and refuse work that enables persecution, unlawful surveillance, discrimination, or human-rights abuse. Governments must protect digital rights rather than trading them away for vendor promises and security theatre. Schools must teach cybersafety as a life skill and push back on administrative, institutional policies that expose children to risk. Communities must organize around privacy, fraud prevention, digital literacy, and the protection of vulnerable people.
Customers must demand proof, not promises. Employees must speak when systems are unsafe. Boards must ask harder questions. Vendors must be judged not only by what their tools can do, but by what harms they prevent, what harms they permit, and what harms they conceal.
This is not anti-business. Verifiable trust is great business.
This is not anti-government. Rights and freedoms impart legitimacy upon government.
This is definitely not anti-technology. Human dignity is the reason technology matters; it is why we instinctively strive for convenience, and why we are at highest risk when we buy into narratives that trade our dignity for it.
Our Declaration
We declare that technology must be accountable to humanity.
We declare that privacy is not dead, not outdated, and not negotiable.
We declare that cybersecurity is a public good, available to everyone.
We declare that safety, security, privacy, dignity, property, and freedom are inseparable.
We declare that people are not data mines. Children are not products. Workers are not sensors. Patients are not files. Citizens are not suspects. Customers are not prey. Communities are not markets to be harvested until nothing is left.
We declare that the future must not belong to surveillance empires, digital landlords, weapons vendors, manipulative platforms, or those who confuse domination with civilization.
Canada can help define another path: practical, ethical, resilient, democratic, privacy-respecting, and human. Not a future where people surrender their rights for convenience. Not a future where companies own what people cannot live without. Not a future where safety means surveillance. Not a future where innovation is measured by how much can be extracted from the public.
A better future will not arrive on its own. It must be taught, built, defended, expected, and demanded. All of those things. And the practice of passive, uninformed consent must come to a definitive end.
It is time to trust our technology again. The age of cybersafety must begin.
We do not have to surrender the digital world.
Many of us were born in this world, as digital natives.
It is up to us to secure it.