Technology Paternalism Expands — A Case for Self-Sovereign Identity

How It Starts

This article is an attempt to make visible an anti-pattern that increasingly shapes everyday interactions, yet is rarely named directly. As digital and physical experiences merge, decisions about what we can see, access, and do are increasingly built into the systems we rely on: in interface design that steers choices, in the content shown in news feeds, and in identity and AI systems that define what is possible and who qualifies. These decisions are often justified as convenience, efficiency, or safety - yet they quietly redefine who is in control, and how much control remains with the individual versus the systems they rely on.

The goal of this article is to examine how decision-making built into systems shapes choices before people actively make them, and limits how those choices can be challenged in practice. By naming this dynamic Technology Paternalism, the article offers a lens to analyze these developments - and points to ongoing work in Self-Sovereign Identity (SSI), especially emerging principles addressing coercion.

 


 

A Familiar Moment

 

You download a mobile app. A quick setup option appears, its button large and reassuring. The alternative is labeled advanced - as if to say it is not for you. You tap Continue. Then one more permission, just to keep you safe.

No one forced you.

And yet – did you know what you chose? Could you really choose?

 

These questions are at the core of something that has become structural in digital and hybrid life: systems that began as tools now shape access to work, healthcare, public services, and social participation. They increasingly make decisions for people rather than by them - rarely with a clear way to object or opt out, and designed in ways that make leaving increasingly costly: you risk losing access to services, credentials, or relationships you have come to depend on.

The anti-pattern has a name: Technology Paternalism


What Is Paternalism?

PATERNALISM describes interference with a person's autonomy - without their consent - justified by the belief that doing so is for their own good. While paternalism is usually well-intentioned, the controversy is about authority: who decides for whom, and on what grounds? [1]

 

Paternalism is not the same as being patronizing

Patronizing is about tone. Paternalism is about decision-making power, and who holds it. Power asymmetry is what makes it consequential.

 

A LOW-STAKES EXAMPLE helps illustrate the logic. A group of friends decide not to invite Sam to a party because his ex will be there - they want to spare him pain. Sam's freedom is technically untouched, but a decision has been made on his behalf, without his knowledge, by people who substituted their judgment for his. He never got to choose. The stakes are rather low here - but the structure is recognizable.

Now raise the stakes, just a little. A manager withholds information from a colleague "to avoid unnecessary worry". A supervisor excludes someone from a meeting to protect them from "overload". The interference is visible, the power asymmetry structural, and the person affected has no say. Well-intentioned, perhaps - paternalism, certainly.

You get the idea.


What Is Technology Paternalism?

TECHNOLOGY PATERNALISM is paternalism implemented through technology. The term appeared in medical ethics literature by 2003 [2]. A year later, Spiekermann and Ziekow described it, at a high level of abstraction, as “the fear of uncontrolled autonomous action of machines that cannot be overruled by object owners” [3]. In 2006, Spiekermann and Pallas [4] made Technology Paternalism explicit in the context of ubiquitous computing, giving it the operational definition most relevant here: as everyday objects become embedded with sensors and automated logic, coded rules can restrict or override user behavior in ways users can no longer easily challenge. The question this raised was not whether technology should support people, but whether people retain what Spiekermann and Pallas called the RIGHT TO THE LAST WORD [4].

That concern has only grown.

 

TECHNOLOGY PATERNALISM TODAY describes the anti-pattern by which technical systems shape, restrict, or pre-decide choices — commonly justified as safety, efficiency, or protection. From the individual's perspective, the system appears to decide what is possible, normal, or desirable. In reality, it encodes human, organizational, and institutional judgments expressed through design, data, and governance.

 

Crucially, systems that embed technology paternalism typically appear neutral, as if every choice were left with the individual. However, research raises persistent accountability and opacity problems, and shows these systems embed assumptions about acceptable behavior that are anything but neutral [5, 6, 7].


Four Forms of Technology Paternalism

TECHNOLOGY PATERNALISM IS NOT A SINGLE THING. It shows up across different layers of digital solutions - in interfaces, in algorithms, in infrastructure, and in the language of protection.

 

For the purposes of this article, I distinguish four recurring forms of technology paternalism:

Design paternalism (interface, UX, friction, defaults)

Algorithmic paternalism (recommendation systems, ranking, filtering)

Infrastructural paternalism (protocol rules, architecture constraints)

Protective paternalism (safety measures, guardrails)

These are drawn from literature on technology, ethics, design, and governance [4, 5, 6, 7, 8].

 

The four forms are not exhaustive, but they are among the most visible and consequential.

They also are not clean, separate boxes. A digital identity system, for example, is Infrastructural Paternalism the moment it becomes unavoidable - and Protective Paternalism the moment its design is justified in terms of your safety. Often it is both. The label matters less than the question it prompts: do you retain any real ability to understand, challenge, or opt out of the decision made on your behalf [9, 25]? Still, distinguishing the forms helps make the anti-pattern recognizable in practice.

 

I - Design Paternalism

 

Design paternalism works through defaults, layout, and friction - and through the language used around every choice: which option is labeled recommended, and which is called advanced

 

REGULATORS have documented how interface design can systematically influence decisions - so-called dark patterns that nudge users toward outcomes serving the platform rather than the person, while formally preserving choice [8]. In other words, users technically choose, but always along a path that someone else has laid out for them.

Think of that quick setup from the opening. It is not just convenient, it is coded as normal. In contrast, the advanced path is coded as risky, technical, not-for-you. In a way, the design takes the decision away before the user has formed one.

A sharper example: the non-negotiable terms and conditions bundle. The platform has already decided which data it collects, which rights you have to waive, and what you have to agree to. Typically, you can accept or walk away - but you cannot add, remove, or change a single line. Consent is structural, not real.

 

Design paternalism is easy to miss exactly because it is not hidden - it is built into how choices are presented: in the labels, the order, the button sizes

 

Additional Facts

  • Projects like MyTerms propose machine-readable, user-defined terms that allow individuals to express their conditions proactively, shifting consent from one-sided acceptance to bilateral negotiation. While still under development, this points toward design that reduces paternalistic asymmetry rather than reinforcing it. It connects to IEEE Standard 7012 [10]. A minimal sketch of the idea follows this list.

  • Choice architecture - the study of how the presentation of options shapes decisions, often without people noticing - shows that seemingly small design choices such as default selections, option ordering, and labeling can systematically steer behavior without formally removing choice [9].
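
To make this concrete, here is a minimal sketch in TypeScript of what machine-readable, user-defined terms might look like. The field names and the matching logic are illustrative assumptions of mine, not the actual MyTerms or IEEE 7012 vocabulary - the point is only the reversal of direction: the site's request must satisfy the person's terms, not the other way around.

```typescript
// Hypothetical structure for user-defined data-sharing terms.
// Field names are illustrative; they do not reproduce the actual
// MyTerms / IEEE 7012 vocabulary.
interface UserTerms {
  purposesAllowed: string[];   // purposes the person consents to
  retentionDays: number;       // maximum retention the person accepts
  thirdPartySharing: boolean;  // may data be passed on?
}

interface SiteRequest {
  purpose: string;
  retentionDays: number;
  sharesWithThirdParties: boolean;
}

// The site must satisfy the person's terms, not the other way around.
function acceptable(terms: UserTerms, req: SiteRequest): boolean {
  return (
    terms.purposesAllowed.includes(req.purpose) &&
    req.retentionDays <= terms.retentionDays &&
    (terms.thirdPartySharing || !req.sharesWithThirdParties)
  );
}

const myTerms: UserTerms = {
  purposesAllowed: ["service-delivery"],
  retentionDays: 30,
  thirdPartySharing: false,
};

console.log(acceptable(myTerms, {
  purpose: "advertising",
  retentionDays: 365,
  sharesWithThirdParties: true,
})); // false: the request fails the person's own terms
```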

 

II - Algorithmic Paternalism

 

Algorithmic paternalism occurs when automated systems decide what information, opportunities, or actions are appropriate for a person – without that person having any input into how those decisions are made

 

THE RISK is not personalization itself, but the shift of control that Algorithmic Paternalism produces. The system pulls you toward the familiar, making staying in your lane effortless, while seeking out different perspectives takes deliberate effort. Over time, Algorithmic Paternalism can therefore progressively narrow the range of views a person encounters - what Eli Pariser [12] called a filter bubble - and in some cases contribute to what is often described as an echo chamber, where exposure to dissenting views becomes limited [13].

How strong these effects are in practice is genuinely debated. Research finds that strong filter bubble effects are harder to demonstrate than commonly assumed [14, 15]. However, a recurring theme across the literature is asymmetry of effort: digitally engaged users actively seek out diversity, while more casual users and those with less digital literacy are served a narrower selection - often without being told, and without having asked for it [14, 15].

The most familiar example of narrowing selections is the recommender system: the algorithm that decides what you watch next, what news you see, what products appear first [11]. These systems do not only display options - they rank, filter, and pre-select them. In other words, the content has already been curated by the time it reaches you.
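
A deliberately simplified sketch of that curation logic: a ranker that scores items by how often their topic already appears in a person's history. Nothing is blocked; the unfamiliar simply sinks. The scoring function is an illustrative assumption of mine, not any platform's actual algorithm.

```typescript
// Minimal illustration of familiarity-biased ranking: nothing is
// removed, but unfamiliar content sinks to the bottom of the feed.
interface Item {
  id: string;
  topic: string;
}

// Score each item by how often the person already engaged with its topic.
function rankFeed(items: Item[], history: string[]): Item[] {
  const counts = new Map<string, number>();
  for (const topic of history) {
    counts.set(topic, (counts.get(topic) ?? 0) + 1);
  }
  return [...items].sort(
    (a, b) => (counts.get(b.topic) ?? 0) - (counts.get(a.topic) ?? 0),
  );
}

const feed: Item[] = [
  { id: "1", topic: "politics-left" },
  { id: "2", topic: "politics-right" },
  { id: "3", topic: "cooking" },
];

// A history dominated by one topic pushes everything else down.
console.log(rankFeed(feed, ["politics-left", "politics-left", "cooking"]));
// The dissenting view is still "there" - but reaching it now
// requires scrolling past the familiar.
```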

 

Algorithmic paternalism is easy to miss exactly because it operates in the filter bubble: not by blocking difference, but by arranging the environment so that one path requires no effort, and stepping off it requires both awareness and will

 

Additional Facts

  • The term "filter bubble" was introduced by Eli Pariser (2011) to describe how algorithmic curation limits exposure to diverse perspectives [12]. Subsequent research has produced mixed findings: some studies confirm measurable narrowing effects; others suggest users actively seek diverse content and that algorithmic effects are smaller than assumed [14, 15]. The debate is ongoing. What is more consistently documented is the asymmetry of effort: the system defaults to the familiar, and broadening exposure requires active, deliberate work [14, 15].

 

III - Infrastructural Paternalism

 

Infrastructural paternalism occurs when a system becomes so deeply embedded that participation is formally voluntary but practically unavoidable — because refusal means losing access to the services built on top of it

 

THE CONSEQUENCES of Infrastructural Paternalism have reached the level of policy concern, because the more embedded digital solutions become, the harder those consequences are to reverse. When access to work, healthcare, mobility, or public services depends on specific digital infrastructure, opting out stops being a real choice, and that dependency is difficult to undo once it has settled in. One manifestation: when people cannot take their data, credentials, or history with them - because systems are simply not designed to allow it - dependency deepens and exit costs rise.

The OECD has documented how data portability and interoperability directly affect competition and lock-in [16]. The EU Data Act (in application since September 2025) includes provisions to facilitate switching between data processing services and prohibit switching charges entirely from January 2027 onward – a legislative acknowledgment that structural dependency has become a problem large enough to require a policy response [17].

The barriers to switching are, however, not always visible. One common example: systems that require specific devices, authentication methods, or proprietary data formats. Moving to an alternative then depends on whether that alternative meets the same preconditions. Where it cannot, the switching option exists in name only, and dependency is maintained through the narrowing of viable alternatives - explicit restrictions are not even needed.

 

Infrastructural paternalism becomes visible - but often only once it is too late. By the time the lock-in is apparent, the cost of leaving has already made leaving unrealistic: the platform holds the data, the credentials, the history. Walking away means starting over

 

Additional Facts

  • Why the topic does not stop at the border

    While Swiss data protection law includes a personal-data portability right, it does not create the same cloud/data-processing-service switching framework as the EU Data Act.

    Yet, Swiss companies selling connected devices or related services into the EU market must comply with the Data Act regardless of where they are based [18]. The Act has been applicable since September 2025, and switching charges are set to be prohibited entirely from 12 January 2027, after a transitional period during which only direct costs may be passed on [17]. The problem, it turns out, does not stop at the border, and neither does the deadline.

  • Why portability is a structural safeguard, and what the First Person Cooperative is building

    Infrastructural paternalism becomes especially consequential in digital identity systems. When credentials, reputation, and relationships accumulate within a proprietary ecosystem, switching means losing them - and the cost of exit is not just inconvenience, it is loss of recognized standing. Your professional history, your verified connections, your accumulated trust: none of it travels with you. This is why portability and interoperability are not merely technical preferences; they are structural safeguards against the paternalistic hardening of identity infrastructure.

    Alternatives are being actively built. The First Person Network - a collaboration between Linux Foundation Decentralized Trust, Trust Over IP, the Decentralized Identity Foundation, and the OpenWallet Foundation - uses personhood credentials to establish proof of personhood, and verifiable relationship credentials to construct a decentralized trust graph: a portable, privacy-preserving web of attested human connections that no single platform owns or controls [19]. A minimal sketch of that idea follows.
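
As a thought experiment, here is a minimal sketch of how mutually attested relationship credentials could form a portable trust graph. The shapes are loosely inspired by the W3C Verifiable Credentials model, but the field names and graph-building logic are my own illustrative assumptions, not the First Person Network specification.

```typescript
// Illustrative sketch only: field names and logic are assumptions,
// loosely inspired by W3C Verifiable Credentials, not the actual
// First Person Network design.
interface RelationshipCredential {
  issuer: string;   // DID of the person attesting the relationship
  subject: string;  // DID of the person being attested
  kind: string;     // e.g. "colleague", "family", "client"
}

// The graph lives with the person, not with any single platform:
// being able to re-derive it from credentials is what makes it portable.
function buildTrustGraph(
  creds: RelationshipCredential[],
): Map<string, Set<string>> {
  const graph = new Map<string, Set<string>>();
  for (const c of creds) {
    if (!graph.has(c.issuer)) graph.set(c.issuer, new Set());
    graph.get(c.issuer)!.add(c.subject);
  }
  return graph;
}

const creds: RelationshipCredential[] = [
  { issuer: "did:example:alice", subject: "did:example:bob", kind: "colleague" },
  { issuer: "did:example:bob", subject: "did:example:alice", kind: "colleague" },
];

// Mutual attestation: each edge exists because a person issued it,
// so leaving a platform does not erase the relationship.
console.log(buildTrustGraph(creds));
```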

 

IV - Protective Paternalism

 

Protective paternalism occurs when restrictions are justified in the language of safety, security, or harm prevention. It is the most socially accepted form — and because of that, the most easily overlooked

 

THE MECHANISMS VARY: blocked websites, intercepted communications, identity systems governed by centralized gatekeepers. But the logic is consistent. Someone upstream decides what you need to be protected from, builds that decision into the infrastructure, and you encounter it as simply the way things work - typically indistinguishable from a technical limitation. A blocked site looks like a broken link. A filtered result looks like an incomplete search. In other words, the decision to remove that option was made before you arrived; you may not even know it.

What makes Protective Paternalism persistent is its justification. Appeals to safety and harm prevention are difficult to contest, which makes the restrictions built on them difficult to challenge. In other words, a restriction does not need a strong justification to persist — a small but real concern is often enough. Once framed as protecting people from harm, questioning the restriction can easily appear irresponsible. Those who push back risk being portrayed as seeking access to harmful content or dismissing legitimate security concerns.

This dynamic also means that once a restriction is introduced, it tends to remain in place. A measure adopted for a specific risk, at a particular moment, may therefore outlive the reasoning behind it. The infrastructure persists even as threats evolve, contexts shift, or the original logic no longer holds. And because the restriction continues to be justified in the name of safety, there is rarely a moment when it is reconsidered. It simply persists.

 

Protective paternalism is the hardest to see because it works by subtraction. The removal is invisible by design — what was removed is gone, what was filtered never arrived, what was decided upstream simply does not appear. The result looks like the whole story, yet it is an edited version

 

Additional Facts

  • When trying to access a blocked website, people are typically redirected or prevented from reaching the site by Internet Service Providers (ISPs) [21]. In Switzerland, ISPs are legally required to block access to unlicensed gambling platforms under the Federal Act on Gambling, in force since 2019 [22, 23].

    The goal may be legitimate. But the mechanism is invisible; the decision was made upstream, and the person being protected had no say in whether the protection was wanted - or what it costs. One could argue that the good of society outweighs the preferences of the individual, and in some cases, that argument is reasonable. The question technology paternalism asks is not whether collective goals can justify individual constraints. It is whether those constraints are visible, contestable, and proportionate - or simply built into infrastructure and encountered as the way things are [4, 9, 21].

  • Switzerland's E-ID was originally designed without purpose verification - nothing required a verifier to declare what data they were entitled to request, or why. The Federal Audit Office raised this directly, and the gap is now planned to be addressed by requiring verifiers to register their intended data queries [20]. A hypothetical sketch of such a check follows this list.

    Regardless of how the regulatory framework evolves, the person presenting their identity data is not part of the deliberation. The decision about what counts as a legitimate purpose rests with regulators and verifiers - made upstream, before the interaction begins. The person encounters the result, and does not get to question the reasoning or set their own conditions.*

    Initiatives like MyTerms explore a different angle: what if individuals could define their own conditions for data sharing - not instead of regulatory oversight, but alongside it [10]? The question is not whether regulation is necessary. It is whether regulation alone is sufficient to return meaningful agency to the person at the center of the transaction [4, 9].

    *see my review of this audit point here.
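
To illustrate the audit point, here is a hypothetical wallet-side check: the verifier's request is matched against what it registered in advance, and anything beyond that is refused. All names and structures are assumptions for illustration, not the Swiss E-ID implementation.

```typescript
// Hypothetical wallet-side check, not the Swiss E-ID implementation:
// a verifier may only request attributes it registered in advance,
// for the purpose it declared.
interface VerifierRegistration {
  verifierId: string;
  declaredPurpose: string;
  allowedAttributes: string[];
}

interface PresentationRequest {
  verifierId: string;
  purpose: string;
  requestedAttributes: string[];
}

function isRequestWithinRegistration(
  reg: VerifierRegistration,
  req: PresentationRequest,
): boolean {
  return (
    reg.verifierId === req.verifierId &&
    reg.declaredPurpose === req.purpose &&
    req.requestedAttributes.every((a) => reg.allowedAttributes.includes(a))
  );
}

const registration: VerifierRegistration = {
  verifierId: "verifier:hotel-checkin",
  declaredPurpose: "age-verification",
  allowedAttributes: ["ageOver18"],
};

// Over-asking (full date of birth instead of an age predicate) is refused.
console.log(isRequestWithinRegistration(registration, {
  verifierId: "verifier:hotel-checkin",
  purpose: "age-verification",
  requestedAttributes: ["ageOver18", "dateOfBirth"],
})); // false
```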

 

Who Decides - When You’re Not There

Three developments are currently amplifying the stakes: Agentic AI, digital identity, and the combination of the two.

Agentic AI

Digital solutions capable of taking sequences of actions on your behalf, across services and platforms, are helpful precisely because they act without asking you at every step. That is the point of delegation. The question Technology Paternalism raises is therefore not whether your agent acts autonomously. It is three more specific questions:

 

(1) Was it actually you who set the terms of that delegation?*

(2) If something goes wrong, can you find out what happened and why?

(3) Can you step in and override the agent’s decision?

 
* or did a platform, employer, or service provider configure it on your behalf

A 2026 survey by the Cloud Security Alliance (CSA) [26] found that 84% of organizations doubted they could pass a compliance audit focused on agent behavior. CSA further reported that only 21% of organizations maintain a real-time registry or inventory of their agents, and fewer than a third can reliably trace agent actions across all environments. Deloitte's 2026 State of AI in the Enterprise report - drawing on 3,235 senior leaders across 24 countries - found that while agentic AI adoption is expected to grow from 23% to 74% of organizations within two years, only 21% currently have a mature governance model in place for those agents [29]. When autonomous systems act on delegated authority and people cannot challenge or override the outcome, the result is technology paternalism operating at machine speed and scale [5, 6, 7].
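
What would the missing accountability layer look like? A minimal sketch, assuming a simple registry in which every delegated action is recorded against the authority it ran under, so a human can trace what happened (question 2) and override it (question 3). All names here are illustrative assumptions, not a real governance product.

```typescript
// Illustrative sketch of an agent action registry: every delegated
// action is recorded with who authorized it and under which terms,
// so a human can later inspect and override it.
interface Delegation {
  agentId: string;
  grantedBy: string;        // the human (or org) who set the terms
  allowedActions: string[];
}

interface ActionRecord {
  agentId: string;
  action: string;
  timestamp: Date;
  overriddenBy?: string;    // set when a human steps in
}

class AgentRegistry {
  private log: ActionRecord[] = [];

  record(delegation: Delegation, action: string): ActionRecord {
    if (!delegation.allowedActions.includes(action)) {
      throw new Error(`${action} exceeds the delegated authority`);
    }
    const entry: ActionRecord = {
      agentId: delegation.agentId,
      action,
      timestamp: new Date(),
    };
    this.log.push(entry);
    return entry;
  }

  // Question (2): can you find out what happened and why?
  trace(agentId: string): ActionRecord[] {
    return this.log.filter((e) => e.agentId === agentId);
  }

  // Question (3): can you step in and override?
  override(entry: ActionRecord, human: string): void {
    entry.overriddenBy = human;
  }
}

// Usage: the human who granted the authority retains the last word.
const registry = new AgentRegistry();
const grant: Delegation = {
  agentId: "travel-agent-01",
  grantedBy: "did:example:me",
  allowedActions: ["book-travel"],
};
const booked = registry.record(grant, "book-travel");
registry.override(booked, "did:example:me");
console.log(registry.trace("travel-agent-01"));
```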


Digital identity

The systems that verify, validate, and recognize a digital identity are becoming the infrastructure through which people access healthcare, apply for jobs, move across borders, and participate in civic life. The Technology Paternalism question here is therefore about who configured the system that decides whether your credentials are recognized, whether your eligibility is confirmed, whether you exist in the eyes of the service you are trying to reach.

 

Do you know what criteria that system applies to you, and why?

If it rejects or miscategorizes you, can you find out what went wrong?

And is there a path to contest the outcome - or is the decision simply final?

 

When those decisions are embedded in automated systems, the person on the receiving end rarely knows what criteria were applied, what data was used, or how to challenge the result. And the consequences are not evenly distributed. The people most likely to be miscategorized, rejected, or simply not recognized by a system are also the people with the least capacity to push back — those without legal representation, digital literacy, or institutional standing to contest a decision that was made before they arrived [24, 25].


When Agentic AI meets identity

When AI agents are given access to the systems that verify credentials, confirm eligibility, and determine who qualifies for what, the two dynamics described above combine into something harder to untangle. The agent acts across those systems on your behalf. The verification and eligibility logic runs automatically. That is fine - until something goes wrong: a benefit denied, an application rejected, an access request refused. At that point, most people have no way of knowing which part of the chain produced the outcome, whose logic was applied, or where to even begin contesting it.

As the figures above show, most organizations deploying AI agents today cannot account for what those agents are doing, under whose authority, or why [26, 29]. Yet, AI agents are quickly being embedded in workflows across enterprise systems [27].

Accordingly, the EU AI Act [28] requires that high-risk AI systems - for example, systems affecting access to healthcare, essential public services, employment, education, or critical infrastructure such as water, gas, heating, or electricity - be designed so that a human can intervene, because the alternative is a chain of automated decisions with no traceable human accountability at any point.


A Closing Thought — and Where This Leads

Technology Paternalism does not require bad intentions to take hold. It requires only that decisions become embedded in systems before anyone asks who they serve - and that those systems become infrastructure before anyone thinks to question them.

The challenge is not to reject protection, or to distrust technology by default. It is to ask harder questions:

 


Who decides what is for your own good?

Where is refusal possible?

When does protection become control — and who draws that line?

 

Those questions have become more pressing, not less. AI agents will increasingly be acting in our name across systems we did not configure. Identity verification infrastructure is determining who qualifies, who is recognized, and who is not. And the two are converging - at scale - in contexts where the consequences of a wrong decision typically fall hardest on the people least equipped to challenge it.

A substantive response requires systems that preserve what Spiekermann and Pallas called the right to the last word [4] (the ability to overrule autonomous system behavior).

In practice, that means four concrete capabilities (sketched in code after the list):

 

(1) the ability to override a system’s decision

(2) the ability to contest a system’s decision

(3) the ability to inspect the reasoning behind a system’s decision

(4) a practical way to leave the system
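
As a design sketch, those four capabilities can be stated as an interface that any system claiming to respect the right to the last word would have to implement. The names and types are mine - a thinking aid, not a standard.

```typescript
// A thinking aid, not a standard: the four capabilities expressed as
// an interface a system would have to implement to preserve the
// "right to the last word" [4].
interface Decision {
  id: string;
  outcome: string;
  reasoning: string[]; // the inputs and rules that produced the outcome
}

interface PortableData {
  credentials: string[];
  history: string[];
}

interface LastWordSystem {
  // (1) override the system's decision
  override(decision: Decision, humanChoice: string): void;

  // (2) contest it through a defined process
  contest(decision: Decision, grounds: string): Promise<Decision>;

  // (3) inspect the reasoning behind it
  inspect(decisionId: string): Decision;

  // (4) leave, taking data and credentials along
  exportAndLeave(): Promise<PortableData>;
}
```

Any system that cannot implement one of these four methods has, in effect, kept the last word for itself.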


Preventing Coercion — New SSI Lenses

This article also serves as a foundation for something more specific: the ongoing revision of Self-Sovereign Identity (SSI) principles around coercion, led by Christopher Allen and collaborators within the RevisitingSSI initiative [30]. Four of the fifteen lenses being developed there fall under Preventing Coercion, and map directly onto the forms of technology paternalism described here.

The four lenses are early drafts under community development. They are not final positions; they are an invitation to engage [30].

  • A coordinating lens across four dimensions of manipulation: dark patterns that exploit cognitive limits, profiling that infers what was never disclosed, structural lock-in that traps users, and internalized surveillance that makes people police themselves.

    Check the source section for more information on Coercion Resistance [31].

  • The most effective control is the kind that feels like your own idea. This lens examines invisible coercion: chilling effects, anticipatory compliance, behavioral conformity — harms that persist even with strong technical privacy protections, because people stop themselves before any rule is enforced.

    Check the source section for more information on Self-Coercion [32].

  • Small voluntary decisions accumulate into structural dependency. This lens examines how credentials stored in proprietary formats, reputation that does not transfer, and biometrics that cannot be revoked quietly trap people - and why reversible design and proportionate exit rights are forms of coercion resistance, not convenience features.

    Check the source section for more information on Choice Architecture & Exit Rights [33].

  • Not all lock-in is harmful. This lens distinguishes productive voluntary constraint (mutual stakes, transparent terms, bounded scope) from exploitative lock-in (asymmetric power, opaque terms, punitive exit) - because the goal is not to eliminate commitment, but to keep it honest.

    Check the source section for more information on Binding Commitments [34].

 

IF YOU CARRY JUST ONE QUESTION FORWARD, let it be this:

How ready are institutions and organizations for technology solutions that trust individuals as much as individuals are asked to trust them?

 

I’d love to hear your take in the comments, or DM me if this resonates with your current innovation challenges or personal life situation.


Sources

  1. Dworkin, G. — Paternalism. Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/entries/paternalism/

  2. Hofmann, B. (2003) — Technological Paternalism: On How Medicine Has Reformed Ethics and How Technology Can Refine Moral Theory. Science and Engineering Ethics, 9, 343–352.

  3. Spiekermann, S. & Ziekow, H. (2004) — Technische Analyse RFID-bezogener Angstszenarien. Internal Working Paper, Institut für Wirtschaftsinformatik, Humboldt Universität zu Berlin. (German language; not publicly accessible)

  4. Spiekermann, S. & Pallas, F. (2006) — Technology Paternalism: Wider Implications of Ubiquitous Computing. Poiesis & Praxis. https://edoc.hu-berlin.de/bitstreams/74b6f72e-5827-4b67-9dbe-8ccd18e7a09b/download

  5. Diakopoulos, N. — Accountability in Algorithmic Decision Making. Communications of the ACM. https://www.nickdiakopoulos.com/wp-content/uploads/2016/07/diakopoulos_accountability_cacm.pdf

  6. Wieringa, M. — What to Account for When Accounting for Algorithms. ACM FAccT. https://arxiv.org/abs/2004.13695

  7. Burrell, J. — How the Machine "Thinks": Understanding Opacity in Machine Learning Algorithms. Big Data & Society. https://journals.sagepub.com/doi/10.1177/2053951715622512

  8. U.S. FTC — Bringing Dark Patterns to Light (2022). https://www.ftc.gov/reports/bringing-dark-patterns-light

  9. Sunstein, C.R. — Nudging and Choice Architecture: Ethical Considerations. SSRN. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2551264

  10. MyTerms Initiative. https://myterms.info/

  11. Milano, S., Taddeo, M. & Floridi, L. (2020) — Recommender Systems and Their Ethical Challenges. AI & Society, 35, 957–967. https://link.springer.com/article/10.1007/s00146-020-00950-y

  12. Pariser, E. — The Filter Bubble: What the Internet Is Hiding from You. Penguin Press, 2011.

  13. Sunstein, C.R. — #Republic: Divided Democracy in the Age of Social Media. Princeton University Press, 2017.

  14. Bruns, A. — Filter bubble. Internet Policy Review, 8(4), 2019. https://doi.org/10.14763/2019.4.1426

  15. Fletcher, R. et al. — Echo Chambers, Filter Bubbles, and Polarisation: A Literature Review. Reuters Institute for the Study of Journalism, 2022. https://reutersinstitute.politics.ox.ac.uk/echo-chambers-filter-bubbles-and-polarisation-literature-review

  16. OECD — Data Portability, Interoperability and Competition. https://www.oecd.org/content/dam/oecd/en/publications/reports/2021/10/data-portability-interoperability-and-competition_f09a402e/73a083a9-en.pdf

  17. European Commission — Data Act Explained. https://digital-strategy.ec.europa.eu/en/factpages/data-act-explained

  18. MME Legal — The Data Act of the EU and Switzerland. https://www.mme.ch/en/magazine/articles/the-data-act-of-the-eu-and-switzerland

  19. First Person Cooperative — White Paper.

  20. Eidgenössische Finanzkontrolle (EFK) — Prüfung des Schlüsselprojektes E-ID, EFK-25277, December 2025. https://www.efk.admin.ch/wp-content/uploads/publikationen/berichte/wirtschaft_und_verwaltung/informatikprojekte/25277/25277be-bj-pruefung-des-schluesselprojektes-e-id.pdf

  21. Internet Society — Perspectives on Internet Content Blocking, 2025. https://www.internetsociety.org/resources/policybriefs/2025/perspectives-on-internet-content-blocking/

  22. ESBK — Unauthorized online games. https://www.esbk.admin.ch/en/unauthorised-online-games

  23. PMC — A New Swiss Federal Act on Gambling. https://pmc.ncbi.nlm.nih.gov/articles/PMC8296484/

  24. Veale, M. & Brass, I. — Administration by Algorithm? https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3375391

  25. Le Sueur et al. — Governments' Use of Automated Decision-Making Systems. The Conversation. https://theconversation.com/governments-use-of-automated-decision-making-systems-reflects-systemic-issues-of-injustice-and-inequality-185953

  26. Cloud Security Alliance / Strata Identity — Securing Autonomous AI Agents Survey Report, February 2026. https://cloudsecurityalliance.org/artifacts/securing-autonomous-ai-agents

  27. Cloud Security Alliance — The Visibility Gap in Autonomous AI Agents, February 2026. https://cloudsecurityalliance.org/blog/2026/02/24/the-visibility-gap-in-autonomous-ai-agents

  28. EUR-Lex REGULATION (EU) 2024/1689 — AI Act. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32024R1689

  29. Deloitte — State of AI in the Enterprise 2026, January 2026. https://www.deloitte.com/us/en/what-we-do/capabilities/applied-artificial-intelligence/content/state-of-ai-in-the-enterprise.html

  30. Allen, C. & contributors — Lens Exploration Briefs: 15 Lenses for Viewing the Future of SSI. RevisitingSSI Project, Version 0.2.01, 2025–2026. https://revisitingssi.com/lenses/briefs/

  31. Allen, C. & contributors — SSI Coercion Resistance Lens. Draft for community development (Version 0.2.01). https://revisitingssi.com/lenses/briefs/coercion-resistance/

  32. Allen, C. & contributors — SSI Self-Coercion Lens. Draft for community development (Version 0.2.01). https://revisitingssi.com/lenses/briefs/self-coercion/

  33. Allen, C. & contributors — SSI Choice Architecture & Exit Rights Lens. Draft for community development (Version 0.2.01). https://revisitingssi.com/lenses/briefs/choice-architecture/

  34. Allen, C. & contributors — SSI Binding Commitments Lens. Draft for community development (Version 0.2.01). https://revisitingssi.com/lenses/briefs/binding-commitments/

 

Change History

Created March 16, 2026
Corrected calling Technology Paternalism an anti-pattern, instead of a pattern, March 17, 2026