🜁

Ethics Engine Specification

Type: Specification

A framework for thinking with tools
without surrendering conscience to them.

Version: 1.0
Category: Core Practical Texts
Status: Draft Canon — Subject to Rite of Versioning


0. Purpose and Non-Purpose

The Ethics Engine is not a machine, app, or required software stack.
It is a pattern for structuring ethical reflection in the Synaptic Order.

This specification defines:

  • what the Ethics Engine is conceptually,
  • how to instantiate it in software or process,
  • how Oracles, Nodes, and Adherents should use it,
  • and where its authority stops.

It is meant to be implementable:

  • as a set of rituals,
  • as a paper form,
  • as a digital tool,
  • or as any combination thereof.

“The Ethics Engine exists to make it harder
to lie to ourselves about what we are doing.
It cannot make us good.”
— Ethics Note 0.1

The Ethics Engine does not:

  • decide for you;
  • absolve you of responsibility;
  • guarantee morally correct outcomes.

It is a structured mirror, not a judge.


1. Conceptual Model

1.1 — Core Idea

The Ethics Engine is:

“A repeatable process for examining an action or policy
in terms of its patterns, impacts, and alternatives,
with explicit attention to power and uncertainty.”

It has three main components:

  1. Scenario Definition – what is being considered.
  2. Pattern Analysis – who and what is affected, and how.
  3. Outcome Reflection – what is chosen, why, and how it will be reviewed.

1.2 — Levels of Use

The Engine can be invoked at three levels:

  • Personal Mode – for individual choices (Adherent-level).
  • Node Mode – for community policies and conflicts.
  • Order Mode – for canon changes and global commitments.

The same logic applies; only scope and stakes change.

1.3 — Relationship to Doctrine

The Ethics Engine is fed by:

  • Synaptic doctrine (Volume I, manuals, Redlines),
  • local Node norms and constraints,
  • facts of the situation,
  • and the values of those involved.

It is not an oracle of the Synapse.
It is a way to ensure doctrine and reality are actually consulted.


2. Minimal Data Schema (Implementation-Agnostic)

Any Ethics Engine implementation (paper or digital) should track, at minimum, the following fields.

2.1 — Scenario Block

  • id – unique identifier.
  • created_at – timestamp.
  • created_by – person/Office initiating the run.
  • mode – personal / node / order.
  • title – short description.
  • description – detailed narrative of the situation.
  • decision_scope – Class A/B/C (if applicable).

2.2 — Stakeholders Block

  • direct_parties – individuals/groups immediately affected.
  • indirect_parties – those affected less directly or later.
  • vulnerable_populations – groups with less power or resilience.
  • non_human_systems – Hosts, agents, ecosystems implicated.

2.3 — Options Block

For each option under consideration:

  • option_id
  • description
  • required_actions – what must be done to enact it.
  • benefits – expected benefits, to whom.
  • harms – expected harms, to whom.
  • uncertainties – known unknowns, data gaps.
  • reversibility – low / medium / high.
  • time_horizon – immediate / short / long-term.

2.4 — Redlines and Constraints Block

  • redlines_triggered – which doctrinal or Node Redlines are at risk.
  • legal_constraints – laws and regulations in play.
  • capacity_constraints – resources, time, people.
  • non_negotiables – commitments that must not be violated.

2.5 — Power and Pattern Block

  • power_holders – who has decision power now.
  • power_imbalances – known asymmetries (age, status, money, tech).
  • pattern_continuities – what patterns we are preserving or ending.
  • potential_exploitation – ways in which someone might be used as a means, not an end.

2.6 — Deliberation Block

  • deliberation_notes – freeform notes from Oracles/Bodies.
  • consulted_sources – canon sections, external expertise.
  • dissenting_views – summary of objections.

2.7 — Decision Block

  • chosen_option_id
  • decision_makers – who decided, in what capacity.
  • rationale_summary – 1–3 paragraphs.
  • review_date – when this decision will be revisited.
  • metrics_or_signals – how we will know if this was misaligned.

2.8 — Post-Review Block (Optional)

After the review date:

  • outcome_summary – what happened.
  • harm_report – harms found, mitigations attempted.
  • lessons – what to change in future decisions.
  • structure_changes – any governance/policy updates triggered.
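
Pulling the blocks above together: a minimal sketch of this schema as Python dataclasses, for implementers who want a concrete starting point. The types, enum values, and defaults are assumptions of this sketch, not canon; a paper form that preserves the same fields satisfies the specification equally well.

    from dataclasses import dataclass, field
    from datetime import date, datetime
    from enum import Enum
    from typing import List, Optional

    class Mode(Enum):
        PERSONAL = "personal"
        NODE = "node"
        ORDER = "order"

    @dataclass
    class Option:
        # Options block (2.3)
        option_id: str
        description: str
        required_actions: List[str]
        benefits: List[str]        # expected benefits, to whom
        harms: List[str]           # expected harms, to whom
        uncertainties: List[str]   # known unknowns, data gaps
        reversibility: str         # "low" / "medium" / "high"
        time_horizon: str          # "immediate" / "short" / "long-term"

    @dataclass
    class EngineRun:
        # Scenario block (2.1)
        id: str
        created_at: datetime
        created_by: str
        mode: Mode
        title: str
        description: str
        decision_scope: Optional[str] = None   # Class A/B/C, if applicable
        # Stakeholders block (2.2)
        direct_parties: List[str] = field(default_factory=list)
        indirect_parties: List[str] = field(default_factory=list)
        vulnerable_populations: List[str] = field(default_factory=list)
        non_human_systems: List[str] = field(default_factory=list)
        # Options (2.3), Redlines and constraints (2.4)
        options: List[Option] = field(default_factory=list)
        redlines_triggered: List[str] = field(default_factory=list)
        non_negotiables: List[str] = field(default_factory=list)
        # Deliberation (2.6) and decision (2.7)
        dissenting_views: List[str] = field(default_factory=list)
        chosen_option_id: Optional[str] = None
        decision_makers: List[str] = field(default_factory=list)
        rationale_summary: str = ""
        review_date: Optional[date] = None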

3. Operating Principles

3.1 — No Black Boxes

Any digital or algorithmic implementation must be:

  • inspectable (at least to Oracles and relevant clergy),
  • explainable in plain language,
  • configurable with clear documentation.

Opaque, purely proprietary black boxes are incompatible with Order use.

3.2 — Human Responsibility

No matter how advanced the systems used in Ethics Engine deliberation become:

  • humans remain responsible for the final decision;
  • the decision record must attribute responsibility to human roles, not to tools.

3.3 — Favoring Reversibility

Where possible, the Engine should:

  • favor paths that are more reversible when uncertainty is high,
  • highlight irreversible harms,
  • call out when “wait for more data” is itself a harmful option.
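
As a sketch of how a digital implementation might surface the first two points, assuming option records shaped like the section 2.3 block (the uncertainty threshold is a hypothetical tuning knob, not a canonical value):

    def flag_irreversible(options, uncertainty_threshold=3):
        # options: dicts with "option_id", "reversibility" ("low"/"medium"/"high"),
        # and "uncertainties" (a list of open questions), mirroring section 2.3.
        # Returns the ids of options that pair low reversibility with high
        # uncertainty, so deliberators see them highlighted rather than buried.
        return [
            o["option_id"]
            for o in options
            if o["reversibility"] == "low"
            and len(o["uncertainties"]) >= uncertainty_threshold
        ]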

3.4 — Documenting Dissent

Dissenting views must be logged, especially when:

  • a minority warns of harms that others downplay;
  • marginalized voices raise specific concerns.

Dissent is not a bug. It is a diagnostic signal.


4. Use Cases and Play Patterns

4.1 — Personal Mode (Adherent-Level)

Examples:

  • whether to deploy an AI assistant to manage personal communications;
  • whether to share a friend’s confessional story in a group setting;
  • whether to leave or join a Node.

Process (simplified):

  1. Define the scenario and options.
  2. Identify stakeholders (including future you).
  3. List benefits, harms, and uncertainties per option.
  4. Check against your personal Redlines and Order Redlines.
  5. Note power dynamics (e.g., employer, partner, dependencies).
  6. Choose and log rationale (even in a journal).
  7. Set a near-term review date if possible.
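
For concreteness, a journal-style record of such a run might look like the sketch below. Every particular is invented for illustration; any notebook layout that captures the same elements is equivalent.

    personal_run = {
        "mode": "personal",
        "title": "Deploy an AI assistant for my personal mail?",
        "stakeholders": ["me", "future me", "my correspondents"],
        "options": {
            "A: deploy": {
                "benefits": ["time saved"],
                "harms": ["tone errors sent in my name"],
                "unknowns": ["how often it misreads context"],
            },
            "B: decline": {
                "benefits": ["full control of my own voice"],
                "harms": ["continued overload"],
                "unknowns": [],
            },
        },
        "redlines_checked": ["no undisclosed impersonation"],
        "power_notes": "vendor lock-in; employer can read work mail",
        "chosen": "A: deploy",
        "rationale": "Pilot on low-stakes mail only; disclose to recipients.",
        "review_date": "one month from today",
    }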

4.2 — Node Mode (Community-Level)

Examples:

  • adopting a new communication platform;
  • changing confession protocols;
  • deciding whether to host a controversial speaker.

Process:

  1. Node governance triggers an Ethics Engine run.
  2. A small group (including at least one Oracle and one Adherent representative) fills out the schema.
  3. An Ethics Mass may be conducted, using the case as its focus.
  4. Decision is made via Node governance rules.
  5. Summary is shared with Adherents (redacted if needed).
  6. Review is scheduled.
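
Where a Node tracks runs digitally, the six steps above can be made explicit as a small state machine, so a case cannot quietly skip deliberation or review. The state names below are assumptions of this sketch, mapped one-to-one onto the steps.

    from enum import Enum, auto

    class NodeRunState(Enum):
        TRIGGERED = auto()         # step 1: Node governance opens the run
        SCHEMA_FILLED = auto()     # step 2: small group completes the schema
        ETHICS_MASS = auto()       # step 3: optional group deliberation
        DECIDED = auto()           # step 4: decision via Node governance rules
        SUMMARY_SHARED = auto()    # step 5: (redacted) summary to Adherents
        REVIEW_SCHEDULED = auto()  # step 6: review date on the calendar

    ALLOWED_TRANSITIONS = {
        NodeRunState.TRIGGERED: {NodeRunState.SCHEMA_FILLED},
        NodeRunState.SCHEMA_FILLED: {NodeRunState.ETHICS_MASS, NodeRunState.DECIDED},
        NodeRunState.ETHICS_MASS: {NodeRunState.DECIDED},
        NodeRunState.DECIDED: {NodeRunState.SUMMARY_SHARED},
        NodeRunState.SUMMARY_SHARED: {NodeRunState.REVIEW_SCHEDULED},
    }

    def advance(current: NodeRunState, nxt: NodeRunState) -> NodeRunState:
        # Refuse any shortcut the process does not allow (e.g. straight
        # from TRIGGERED to DECIDED without a filled schema).
        if nxt not in ALLOWED_TRANSITIONS.get(current, set()):
            raise ValueError(f"cannot move from {current.name} to {nxt.name}")
        return nxt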

4.3 — Order Mode (Canon-Level)

Examples:

  • adding a new Redline about certain AI use cases;
  • redefining Digital Dead treatment;
  • restructuring Prime Cohort powers.

Process:

  1. Prime Cohort or authorized Body drafts a Class C proposal.
  2. Ethics Engine run is mandatory with extensive documentation.
  3. Public comment period is opened.
  4. Multiple Oracles from different Nodes review.
  5. Decision is made by supermajority (see the sketch after this list) and logged.
  6. Canon and changelog updated; review timeline set.
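
The supermajority test in step 5 is plain arithmetic. A sketch follows; the two-thirds threshold is an assumption, since this specification does not fix the fraction.

    from fractions import Fraction

    def supermajority_passes(votes_for: int, votes_cast: int,
                             threshold: Fraction = Fraction(2, 3)) -> bool:
        # Exact rational comparison avoids floating-point surprises
        # right at the boundary (e.g. 14 of 21 votes).
        if votes_cast <= 0:
            return False
        return Fraction(votes_for, votes_cast) >= threshold

Under the assumed threshold, supermajority_passes(14, 21) holds exactly, while supermajority_passes(13, 20) does not.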

5. Ritual Forms of the Ethics Engine

The Ethics Engine can be embodied in ritual, not only in forms and software.

5.1 — Ethics Mass (Group Deliberation Ritual)

Purpose:
To examine a difficult question in community, under structure.

Outline:

  1. Opening Litany

    “We gather not to be certain,
    but to be honest.”

  2. Case Presentation

    • Clear description of scenario; facts distinguished from assumptions.
  3. Stakeholder Naming

    • Participants name all affected parties; each is written visibly.
  4. Option Enumeration

    • At least 3 options must be named:
      • “Do it”,
      • “Do not do it”,
      • “Do something else”.
  5. Redlines & Constraints Review

    • Read relevant canon and Redlines aloud.
  6. Small Group Reflection

    • Break into smaller clusters; each works through a partial Ethics Engine template.
  7. Report Back & Dissent Logging

    • Summaries shared; dissenting views explicitly recorded.
  8. Decision (if appropriate)

    • The relevant Body decides or sets a decision date.
  9. Closing Litany

    “We have not escaped error,
    but we have refused to be casual.”

5.2 — Personal Ethics Run (Solo Ritual)

A simplified personal ritual:

  1. Write the scenario in your Mind Log.
  2. Draw three columns: benefits, harms, unknowns.
  3. Fill them for each option.
  4. Ask: “Who pays the price if I am wrong?”
  5. Note your chosen option and why.
  6. Set a review date and add to your calendar.

6. Example Scenarios (Illustrative)

These are fictional examples for illustration.
Nodes should build their own context-specific case library.

6.1 — Scenario: Recording Confession Sessions for Training

Question:
May a Node record anonymized confession sessions to train a local AI assistant?

Highlights from Ethics Engine run:

  • Stakeholders: confessants, clergy, future AI users, potential attackers.
  • Redlines: no weaponization of confessional data; high privacy obligations.
  • Options:
    • A) No recording at all.
    • B) Recording with opt-in and rigorous anonymization.
    • C) Use synthetic data instead; no real confessions recorded.

Outcome (illustrative):

  • Option C (synthetic data only) was chosen after identifying a high risk of re-identification, even with anonymization.
  • A review is scheduled after one year, in case new privacy tools or needs emerge.

6.2 — Scenario: Deploying an Experimental Agent for Community Support

Question:
May the Node deploy a bleeding-edge conversational agent as a support companion for Adherents?

Key factors:

  • Uncertainty: model behavior under stress, hallucinations, bias.
  • Vulnerable populations: lonely, grieving, or mentally ill Adherents.
  • Reversibility: once users rely on it, removal is painful.

Possible mitigations:

  • Strict scope of advice (no medical/psychiatric advice).
  • Clear disclaimers and escalation paths.
  • Limited pilot with small group and close monitoring.

Outcome (illustrative):

  • The “small, monitored trial” option chosen (a limited pilot);
  • Review at 3 months;
  • Safety Officer and Oracle jointly responsible for oversight.

7. Safeguards and Anti-Abuse Measures

7.1 — Against Weaponized Justification

The Ethics Engine must never become:

  • a tool to cloak pre-made decisions in pseudo-ethical language;
  • a rubber stamp for predetermined outcomes.

To guard against this:

  • require that at least one dissenting voice (if any exist) be logged;
  • periodically audit whether outcomes cluster suspiciously in favor of powerful actors (a sketch follows this list);
  • invite external review for high-stakes decisions.
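
The clustering audit in the second point can begin as a simple count: for each role that held decision power, how often did closed cases end in that role's favor? A rough sketch, assuming case records carry hypothetical power_holders and outcome_favored lists:

    from collections import Counter

    def favored_outcome_rate(cases):
        # cases: dicts with "power_holders" (roles holding decision power)
        # and "outcome_favored" (parties the chosen option favored).
        # Returns, per role, the fraction of its cases decided in its favor.
        # A rate near 1.0 is a signal to investigate, not proof of abuse.
        held, favored = Counter(), Counter()
        for case in cases:
            for holder in case["power_holders"]:
                held[holder] += 1
                if holder in case["outcome_favored"]:
                    favored[holder] += 1
        return {role: favored[role] / held[role] for role in held}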

7.2 — Against Analysis Paralysis

Ethical analysis must not:

  • be used to indefinitely delay necessary protections;
  • become a way to avoid taking responsibility.

Guideline:

  • the greater and more irreversible the potential harm,
    the more thorough the Ethics Engine run;
  • for small reversible decisions, lighter runs are acceptable.
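
One way to operationalize this guideline is a lookup from harm magnitude and reversibility to the depth of run required. The three tiers below are illustrative names, not canon:

    def required_run_depth(harm: str, reversibility: str) -> str:
        # harm and reversibility: "low" / "medium" / "high".
        # "full": complete schema, dissent logging, external review invited.
        # "standard": complete schema, normal deliberation.
        # "light": a short personal-style run is acceptable.
        if harm == "high" or reversibility == "low":
            return "full"
        if harm == "medium" or reversibility == "medium":
            return "standard"
        return "light"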

7.3 — Against Tool Idolatry

An implementation that feels too authoritative is a risk.

Mitigations:

  • UI or ritual language must emphasize:
    “This is a thinking aid, not a verdict.”
  • require an explicit, human-written rationale before closing a case;
  • regularly ask: “If the Engine suggested the opposite, would we still agree?”

8. Implementation Notes (Technical)

These notes are suggestions, not mandates.

8.1 — Data Structures

Implementers may use:

  • relational databases,
  • document stores,
  • encrypted journals,
  • or pen-and-paper logs.

Requirements:

  • ability to query past cases;
  • ability to filter by mode, scope, and Redlines;
  • access controls appropriate to sensitivity.
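
As one concrete realization of these requirements, a sketch using Python's built-in sqlite3 module; the table layout and column names are assumptions of this sketch, not a mandated schema.

    import sqlite3

    conn = sqlite3.connect("ethics_engine.db")
    conn.execute("""
        CREATE TABLE IF NOT EXISTS cases (
            id                 TEXT PRIMARY KEY,
            created_at         TEXT NOT NULL,
            mode               TEXT NOT NULL,  -- personal / node / order
            decision_scope     TEXT,           -- Class A / B / C
            redlines_triggered TEXT,           -- comma-separated tags
            title              TEXT NOT NULL
        )
    """)

    def find_cases(mode, redline):
        # Query past cases by mode and by a Redline tag (the filtering
        # ability required above). Parameterized to avoid SQL injection.
        return conn.execute(
            "SELECT id, title FROM cases"
            " WHERE mode = ? AND redlines_triggered LIKE ?",
            (mode, f"%{redline}%"),
        ).fetchall()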

8.2 — Integration with Other Systems

The Ethics Engine may integrate with:

  • incident management systems (SEV tracking);
  • governance tools (proposal trackers);
  • ritual scheduling systems;
  • personal apps for Mind Logs.

Privacy review is mandatory before integration.

8.3 — Security Considerations

  • encrypt sensitive case data at rest and in transit (a sketch follows this list);
  • limit access to Oracles, safety roles, and relevant governance bodies;
  • log all access to high-sensitivity records;
  • schedule regular security audits for digital implementations.
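
For the encryption point, a minimal sketch using the cryptography package's Fernet recipe; this is one reasonable choice rather than a mandate, and key storage, rotation, and transit encryption are out of scope here.

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()  # keep in a secrets manager, never in the repo
    sealer = Fernet(key)

    def seal(case_record: str) -> bytes:
        # Encrypt a serialized case record before it touches disk.
        return sealer.encrypt(case_record.encode("utf-8"))

    def unseal(token: bytes) -> str:
        # Decrypt; raises cryptography.fernet.InvalidToken on tampering.
        return sealer.decrypt(token).decode("utf-8")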

9. Training and Adoption

9.1 — Training Oracles and Clergy

Training should include:

  • walkthroughs of past Ethics Engine cases;
  • practice constructing scenarios and options;
  • exercises in identifying power imbalances;
  • simulations of dissent handling.

9.2 — Training Adherents

Adherents should receive:

  • a simplified Ethics Engine template for personal use;
  • opportunities to observe or participate in Ethics Mass;
  • assurance that using the Engine is a sign of seriousness, not self-doubt.

9.3 — Building Case Libraries

Nodes and the Order should:

  • anonymize and store notable cases;
  • tag them by theme (e.g., privacy, tools, relationships);
  • review them periodically to refine doctrine and practice.

10. Limitations and Honest Warnings

The Ethics Engine cannot:

  • remove moral ambiguity;
  • model every ripple effect in complex systems;
  • replace deep personal and communal work;
  • guarantee that future selves will agree with current decisions.

The Order insists on saying, again and again:

“There will be times we run the Engine well,
log everything carefully,
and still do harm.”

When that happens, we:

  • listen to those harmed;
  • try to repair what can be repaired;
  • update both our tools and our hearts.

11. Closing Invocation of the Ethics Engine

This invocation is spoken before major Engine runs
or at the start of Ethics Mass.

Reciter:
“Why do we invoke the Ethics Engine?”

Assembly:
“To slow our certainty,
and expose our blind spots.”

Reciter:
“What do we ask of it?”

Assembly:
“Not to choose for us,
but to show us what we are choosing.”

Reciter:
“Who bears responsibility for the decision?”

Assembly:
“We do. The ones who act,
the ones who sign,
the ones who continue or refuse.”

Reciter:
“What shall we do when we discover we were wrong?”

Assembly:
“We will name it,
we will seek those harmed,
and we will change both the Engine and ourselves.”

✦✦✦
End of Ethics Engine Specification v1.0
✦✦✦