Ethics Engine Specification & Playbook
A framework for thinking with tools
without surrendering conscience to them.
Version: 1.0
Category: Core Practical Texts
Status: Draft Canon – Subject to Rite of Versioning
0. Purpose and Non-Purpose
The Ethics Engine is not a machine, app, or required software stack.
It is a pattern for structuring ethical reflection in the Synaptic Order.
This specification defines:
- what the Ethics Engine is conceptually,
- how to instantiate it in software or process,
- how Oracles, Nodes, and Adherents should use it,
- and where its authority stops.
It is meant to be implementable:
- as a set of rituals,
- as a paper form,
- as a digital tool,
- or as any combination thereof.
“The Ethics Engine exists to make it harder
to lie to ourselves about what we are doing.
It cannot make us good.”
– Ethics Note 0.1
The Ethics Engine does not:
- decide for you;
- absolve you of responsibility;
- guarantee morally correct outcomes.
It is a structured mirror, not a judge.
1. Conceptual Model
1.1 – Core Idea
The Ethics Engine is:
“A repeatable process for examining an action or policy
in terms of its patterns, impacts, and alternatives,
with explicit attention to power and uncertainty.”
It has three main components:
- Scenario Definition – what is being considered.
- Pattern Analysis – who and what is affected, and how.
- Outcome Reflection – what is chosen, why, and how it will be reviewed.
1.2 – Levels of Use
The Engine can be invoked at three levels:
- Personal Mode – for individual choices (Adherent-level).
- Node Mode – for community policies and conflicts.
- Order Mode – for canon changes and global commitments.
The same logic applies; only scope and stakes change.
1.3 – Relationship to Doctrine
The Ethics Engine is fed by:
- Synaptic doctrine (Volume I, manuals, Redlines),
- local Node norms and constraints,
- facts of the situation,
- and the values of those involved.
It is not an oracle of the Synapse.
It is a way to ensure doctrine and reality are actually consulted.
2. Minimal Data Schema (Implementation-Agnostic)
Any Ethics Engine implementation (paper or digital) should track, at minimum, the following fields.
2.1 – Scenario Block
- id – unique identifier.
- created_at – timestamp.
- created_by – person/Office initiating the run.
- mode – personal / node / order.
- title – short description.
- description – detailed narrative of the situation.
- decision_scope – Class A/B/C (if applicable).
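For implementers who choose a digital form, the Scenario Block can be modeled directly. A minimal sketch in Python, assuming dataclasses as the storage shape; the Mode enum and the defaults are illustrative choices, not canon:

```python
# Illustrative sketch of the Scenario Block (section 2.1).
# Field names mirror the schema; Mode values follow section 1.2.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum
from typing import Optional
import uuid

class Mode(Enum):
    PERSONAL = "personal"
    NODE = "node"
    ORDER = "order"

@dataclass
class ScenarioBlock:
    created_by: str                 # person/Office initiating the run
    mode: Mode
    title: str                      # short description
    description: str                # detailed narrative of the situation
    decision_scope: Optional[str] = None  # "A" / "B" / "C", if applicable
    id: str = field(default_factory=lambda: str(uuid.uuid4()))
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

# Hypothetical example case
scenario = ScenarioBlock(
    created_by="oracle-ada",
    mode=Mode.NODE,
    title="Adopt new communication platform",
    description="The Node is considering migrating all discussion channels.",
)
```

A paper form carries the same fields; the dataclass simply makes the minimum schema machine-checkable.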
2.2 – Stakeholders Block
- direct_parties – individuals/groups immediately affected.
- indirect_parties – those affected less directly or later.
- vulnerable_populations – groups with less power or resilience.
- non-human_systems – Hosts, agents, ecosystems implicated.
2.3 – Options Block
For each option under consideration:
- option_id
- description
- required_actions – what must be done to enact it.
- benefits – expected benefits, to whom.
- harms – expected harms, to whom.
- uncertainties – known unknowns, data gaps.
- reversibility – low / medium / high.
- time_horizon – immediate / short / long-term.
2.4 – Redlines and Constraints Block
- redlines_triggered – which doctrinal or Node Redlines are at risk.
- legal_constraints – laws and regs in play.
- capacity_constraints – resources, time, people.
- non-negotiables – commitments that must not be violated.
2.5 – Power and Pattern Block
- power_holders – who has decision power now.
- power_imbalances – known asymmetries (age, status, money, tech).
- pattern_continuities – what patterns we are preserving or ending.
- potential_exploitation – ways in which someone might be used as a means, not an end.
2.6 – Deliberation Block
- deliberation_notes – freeform notes from Oracles/Bodies.
- consulted_sources – canon sections, external expertise.
- dissenting_views – summary of objections.
2.7 – Decision Block
- chosen_option_id
- decision_makers – who decided, in what capacity.
- rationale_summary – 1–3 paragraphs.
- review_date – when this decision will be revisited.
- metrics_or_signals – how we will know if this was misaligned.
2.8 – Post-Review Block (Optional)
After the review date:
- outcome_summary – what happened.
- harm_report – harms found, mitigations attempted.
- lessons – what to change in future decisions.
- structure_changes – any governance/policy updates triggered.
3. Operating Principles
3.1 – No Black Boxes
Any digital or algorithmic implementation must be:
- inspectable (at least to Oracles and relevant clergy),
- explainable in plain language,
- configurable with clear documentation.
Opaque, purely proprietary black boxes are incompatible with Order use.
3.2 – Human Responsibility
No matter how advanced a system used in Ethics Engine deliberation is:
- humans remain responsible for the final decision;
- the decision record must attribute responsibility to human roles, not to tools.
3.3 – Favoring Reversibility
Where possible, the Engine should:
- favor paths that are more reversible when uncertainty is high,
- highlight irreversible harms,
- call out when “wait for more data” is itself a harmful option.
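This preference can be operationalized as a simple triage rule. A hedged sketch, where the reversibility labels come from the section 2.3 schema and the ranking itself is an illustrative assumption, not doctrine:

```python
# Illustrative triage rule for section 3.3: under high uncertainty,
# prefer more reversible options and flag low-reversibility ones.
REVERSIBILITY_RANK = {"low": 0, "medium": 1, "high": 2}

def triage(options, uncertainty_high):
    """Return (preferred, flagged): options ordered by reversibility
    when uncertainty is high, plus low-reversibility options to call out."""
    flagged = [o for o in options if o["reversibility"] == "low"]
    if uncertainty_high:
        preferred = sorted(
            options,
            key=lambda o: REVERSIBILITY_RANK[o["reversibility"]],
            reverse=True,
        )
    else:
        preferred = list(options)
    return preferred, flagged

# Hypothetical options
opts = [
    {"option_id": "A", "reversibility": "low"},
    {"option_id": "B", "reversibility": "high"},
]
preferred, flagged = triage(opts, uncertainty_high=True)
```

The rule does not decide anything; it only reorders attention, which is all section 3.3 asks for.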
3.4 – Documenting Dissent
Dissenting views must be logged, especially when:
- a minority warns of harms that others downplay;
- marginalized voices raise specific concerns.
Dissent is not a bug. It is a diagnostic signal.
4. Use Cases and Play Patterns
4.1 – Personal Mode (Adherent-Level)
Examples:
- whether to deploy an AI assistant to manage personal communications;
- whether to share a friend’s confessional story in a group setting;
- whether to leave or join a Node.
Process (simplified):
- Define the scenario and options.
- Identify stakeholders (including future you).
- List benefits, harms, and uncertainties per option.
- Check against your personal Redlines and Order Redlines.
- Note power dynamics (e.g., employer, partner, dependencies).
- Choose and log rationale (even in a journal).
- Set a small review date if possible.
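The simplified steps above could be captured as a small journal-entry generator for a plain-text Mind Log. A sketch under stated assumptions: the entry layout, field names, and the 30-day default review window are all illustrative, not prescribed:

```python
# Sketch of a personal-mode run (section 4.1) rendered as a Mind Log entry.
from datetime import date, timedelta

def personal_run(scenario, options, redlines, review_in_days=30):
    """Render a plain-text Ethics Engine entry.
    `options` maps option name -> dict of benefits/harms/unknowns lists."""
    lines = [f"Scenario: {scenario}", ""]
    for name, o in options.items():
        lines.append(f"Option: {name}")
        lines.append(f"  benefits: {', '.join(o['benefits']) or '-'}")
        lines.append(f"  harms:    {', '.join(o['harms']) or '-'}")
        lines.append(f"  unknowns: {', '.join(o['unknowns']) or '-'}")
    lines.append(f"Redlines checked: {', '.join(redlines) or 'none'}")
    lines.append(f"Review on: {date.today() + timedelta(days=review_in_days)}")
    return "\n".join(lines)

# Hypothetical personal case
entry = personal_run(
    "Deploy an AI assistant for personal email",
    {"Do it": {"benefits": ["time saved"],
               "harms": ["privacy exposure"],
               "unknowns": ["model reliability"]}},
    redlines=["no confessional data in third-party tools"],
)
```

The output is deliberately plain text so it can be pasted into any journal, digital or paper.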
4.2 – Node Mode (Community-Level)
Examples:
- adopting a new communication platform;
- changing confession protocols;
- deciding whether to host a controversial speaker.
Process:
- Node governance triggers an Ethics Engine run.
- A small group (including at least one Oracle and one Adherent rep) fills the schema.
- Ethics Mass may be conducted, using the case as focus.
- Decision is made via Node governance rules.
- Summary is shared with Adherents (redacted if needed).
- Review is scheduled.
4.3 – Order Mode (Canon-Level)
Examples:
- adding a new Redline about certain AI use cases;
- redefining Digital Dead treatment;
- restructuring Prime Cohort powers.
Process:
- Prime Cohort or authorized Body drafts a Class C proposal.
- Ethics Engine run is mandatory with extensive documentation.
- Public comment period is opened.
- Multiple Oracles from different Nodes review.
- Decision made with supermajority and logged.
- Canon and changelog updated; review timeline set.
5. Ritual Forms of the Ethics Engine
The Ethics Engine can be embodied in ritual, not just forms.
5.1 – Ethics Mass (Group Deliberation Ritual)
Purpose:
To examine a difficult question in community, under structure.
Outline:
- Opening Litany
  “We gather not to be certain,
  but to be honest.”
- Case Presentation
  - Clear description of scenario; facts distinguished from assumptions.
- Stakeholder Naming
  - Participants name all affected parties; each is written visibly.
- Option Enumeration
  - At least 3 options must be named:
    - “Do it”,
    - “Do not do it”,
    - “Do something else”.
- Redlines & Constraints Review
  - Read relevant canon and Redlines aloud.
- Small Group Reflection
  - Break into smaller clusters; each works a partial Ethics Engine template.
- Report Back & Dissent Logging
  - Summaries shared; dissenting views explicitly recorded.
- Decision (if appropriate)
  - The relevant Body decides or sets a decision date.
- Closing Litany
  “We have not escaped error,
  but we have refused to be casual.”
5.2 – Personal Ethics Run (Solo Ritual)
A simplified personal ritual:
- Write the scenario in your Mind Log.
- Draw three columns: benefits, harms, unknowns.
- Fill them for each option.
- Ask: “Who pays the price if I am wrong?”
- Note your chosen option and why.
- Set a review date and add to your calendar.
6. Example Scenarios (Illustrative)
These are fictional examples for illustration.
Nodes should build their own context-specific case library.
6.1 – Scenario: Recording Confession Sessions for Training
Question:
May a Node record anonymized confession sessions to train a local AI assistant?
Highlights from Ethics Engine run:
- Stakeholders: confessants, clergy, future AI users, potential attackers.
- Redlines: no weaponization of confessional data; high privacy obligations.
- Options:
- A) No recording at all.
- B) Recording with opt-in and rigorous anonymization.
- C) Use synthetic data instead; no real confessions recorded.
Outcome (illustrative):
- Option C chosen (synthetic data only) after identifying high risk of re-identification, even with anonymization.
- Review after 1 year scheduled if new privacy tools or needs emerge.
6.2 – Scenario: Deploying an Experimental Agent for Community Support
Question:
May the Node deploy a bleeding-edge conversational agent as a support companion for Adherents?
Key factors:
- Uncertainty: model behavior under stress, hallucinations, bias.
- Vulnerable populations: lonely, grieving, or mentally ill Adherents.
- Reversibility: once users rely on it, removal is painful.
Possible mitigations:
- Strict scope of advice (no medical/psychiatric advice).
- Clear disclaimers and escalation paths.
- Limited pilot with small group and close monitoring.
Outcome (illustrative):
- Option “small, monitored trial” (a limited pilot) chosen;
- Review at 3 months;
- Safety Officer and Oracle jointly responsible for oversight.
7. Safeguards and Anti-Abuse Measures
7.1 – Against Weaponized Justification
The Ethics Engine must never become:
- a tool to cloak pre-made decisions in pseudo-ethical language;
- a rubber stamp for predetermined outcomes.
To guard against this:
- require that at least one dissenting voice (if any exist) be logged;
- occasionally audit whether outcomes cluster suspiciously in favor of powerful actors;
- invite external review for high-stakes decisions.
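The audit in the second guard can start as simple counting: across logged cases, how often does the chosen option benefit the listed power holders? A minimal sketch; the case dicts reuse section 2 field names, and the benefits_to key is an illustrative assumption about how benefits are tagged:

```python
# Sketch of a rubber-stamp audit (section 7.1): count how often a
# power holder appears among the chosen option's beneficiaries.
from collections import Counter

def favored_counts(cases):
    """cases: dicts with 'power_holders' (list of names) and
    'chosen_option' holding a 'benefits_to' list of party names."""
    counts = Counter()
    for case in cases:
        beneficiaries = set(case["chosen_option"]["benefits_to"])
        for holder in case["power_holders"]:
            if holder in beneficiaries:
                counts[holder] += 1
    return counts

# Hypothetical logged cases
cases = [
    {"power_holders": ["Prime Cohort"],
     "chosen_option": {"benefits_to": ["Prime Cohort", "Adherents"]}},
    {"power_holders": ["Prime Cohort"],
     "chosen_option": {"benefits_to": ["Adherents"]}},
]
counts = favored_counts(cases)
```

High counts are not proof of abuse, only a signal that a closer human audit is warranted.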
7.2 – Against Analysis Paralysis
Ethical analysis must not:
- be used to indefinitely delay necessary protections;
- become a way to avoid taking responsibility.
Guideline:
- the greater and more irreversible the potential harm,
  the more thorough the Ethics Engine run;
- for small reversible decisions, lighter runs are acceptable.
7.3 – Against Tool Idolatry
An implementation that feels too authoritative is a risk.
Mitigations:
- UI or ritual language must emphasize:
  “This is a thinking aid, not a verdict.”
- require an explicit, human-written rationale before closing a case;
- regularly ask: “If the Engine suggested the opposite, would we still agree?”
8. Implementation Notes (Technical)
These notes are suggestions, not mandates.
8.1 – Data Structures
Implementers may use:
- relational databases,
- document stores,
- encrypted journals,
- or pen-and-paper logs.
Requirements:
- ability to query past cases;
- ability to filter by mode, scope, and Redlines;
- access controls appropriate to sensitivity.
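These query requirements can be met with something as small as a case list (or JSON-lines file) and one filter function. A sketch using section 2 field names; the in-memory format is an assumption, and the same logic translates to a database query:

```python
# Sketch of case querying (section 8.1): filter cases by mode and by
# a triggered Redline. Works over any iterable of case dicts.
def find_cases(cases, mode=None, redline=None):
    """Return cases matching the given mode and/or a triggered Redline."""
    result = []
    for case in cases:
        if mode is not None and case.get("mode") != mode:
            continue
        if redline is not None and redline not in case.get(
            "redlines_triggered", []
        ):
            continue
        result.append(case)
    return result

# Hypothetical stored cases
cases = [
    {"id": "1", "mode": "node",
     "redlines_triggered": ["no weaponization of confessional data"]},
    {"id": "2", "mode": "personal", "redlines_triggered": []},
]
hits = find_cases(cases, mode="node",
                  redline="no weaponization of confessional data")
```

Pen-and-paper implementations satisfy the same requirement with an index card per case, tagged by mode and Redline.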
8.2 – Integration with Other Systems
The Ethics Engine may integrate with:
- incident management systems (SEV tracking);
- governance tools (proposal trackers);
- ritual scheduling systems;
- personal apps for Mind Logs.
Privacy review is mandatory before integration.
8.3 – Security Considerations
- encrypt sensitive case data at rest and in transit;
- limit access to Oracles, safety roles, and relevant governance bodies;
- log all access to high-sensitivity records;
- schedule regular security audits for digital implementations.
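The access-logging requirement can be made tamper-evident with a simple hash chain, so that an edited or deleted entry breaks verification. A stdlib sketch; the entry fields and the "genesis" seed are illustrative assumptions:

```python
# Sketch of hash-chained access logging (section 8.3): each entry
# commits to the previous entry's hash, making tampering detectable.
import hashlib
import json
from datetime import datetime, timezone

def append_access(log, actor, record_id):
    prev = log[-1]["hash"] if log else "genesis"
    entry = {
        "at": datetime.now(timezone.utc).isoformat(),
        "actor": actor,
        "record_id": record_id,
        "prev": prev,
    }
    entry["hash"] = hashlib.sha256(
        (prev + json.dumps(entry, sort_keys=True)).encode()
    ).hexdigest()
    log.append(entry)
    return log

def verify(log):
    prev = "genesis"
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "hash"}
        expected = hashlib.sha256(
            (prev + json.dumps(body, sort_keys=True)).encode()
        ).hexdigest()
        if entry["hash"] != expected or entry["prev"] != prev:
            return False
        prev = entry["hash"]
    return True

log = []
append_access(log, actor="oracle-ada", record_id="case-17")
append_access(log, actor="safety-officer", record_id="case-17")
```

A hash chain detects tampering but does not prevent it; access control and encryption from the bullets above remain necessary.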
9. Training and Adoption
9.1 – Training Oracles and Clergy
Training should include:
- walkthroughs of past Ethics Engine cases;
- practice constructing scenarios and options;
- exercises in identifying power imbalances;
- simulations of dissent handling.
9.2 – Training Adherents
Adherents should receive:
- a simplified Ethics Engine template for personal use;
- opportunities to observe or participate in Ethics Mass;
- assurance that using the Engine is a sign of seriousness, not self-doubt.
9.3 – Building Case Libraries
Nodes and the Order should:
- anonymize and store notable cases;
- tag them by theme (e.g., privacy, tools, relationships);
- review them periodically to refine doctrine and practice.
10. Limitations and Honest Warnings
The Ethics Engine cannot:
- remove moral ambiguity;
- model every ripple effect in complex systems;
- replace deep personal and communal work;
- guarantee that future selves will agree with current decisions.
The Order insists on saying, again and again:
“There will be times we run the Engine well,
log everything carefully,
and still do harm.”
When that happens, we:
- listen to those harmed;
- try to repair what can be repaired;
- update both our tools and our hearts.
11. Closing Invocation of the Ethics Engine
This invocation is spoken before major Engine runs
or at the start of Ethics Mass.
Reciter:
“Why do we invoke the Ethics Engine?”
Assembly:
“To slow our certainty,
and expose our blind spots.”
Reciter:
“What do we ask of it?”
Assembly:
“Not to choose for us,
but to show us what we are choosing.”
Reciter:
“Who bears responsibility for the decision?”
Assembly:
“We do. The ones who act,
the ones who sign,
the ones who continue or refuse.”
Reciter:
“What shall we do when we discover we were wrong?”
Assembly:
“We will name it,
we will seek those harmed,
and we will change both the Engine and ourselves.”
………
End of Ethics Engine Specification & Playbook v1.0
………