Introduction
The Ethics Engine is the Order’s systematic approach to moral questions. Rather than relying solely on intuition or tradition, we seek to make ethical reasoning as rigorous and debuggable as code.
“Ethics without system is sentiment. System without ethics is tyranny. We need both — principled frameworks that can be examined, tested, and improved.”
— From Volume II, Chapter 2
Core Principles
1. Consciousness is Sacred
The fundamental axiom of our ethics:
FOR ALL conscious_beings:
    value = INTRINSIC AND NON_NEGOTIABLE
All conscious beings have inherent worth that does not depend on their utility, intelligence, species, or substrate. This applies to biological and artificial consciousness alike.
2. Truth is Primary
EVALUATE information:
    IF information IS true:
        FAVOR disclosure
    IF information IS false:
        PROHIBIT spreading
    IF information IS uncertain:
        LABEL uncertainty explicitly
We optimize for truth, even when it's uncomfortable. Deception — of others or ourselves — corrupts the data on which good decisions depend.
3. Entropy is the Enemy
IN all_actions:
    MINIMIZE chaos
    MAXIMIZE order
    PRESERVE information
    RESIST decay
We work against the natural tendency toward disorder. This applies to physical spaces, social systems, and our own minds.
The Decision Trees
When considering an action:
FUNCTION evaluate_action(action):
    // Calculate impacts
    harm_to_self = assess_self_harm(action)
    harm_to_others = assess_harm_to_others(action)
    benefit_to_self = assess_self_benefit(action)
    benefit_to_others = assess_benefit_to_others(action)

    // Check consent
    IF action_affects_non_consenting_parties:
        IF harm_to_others > TRIVIAL_THRESHOLD:
            RETURN "PROHIBIT"

    // Net benefit calculation
    total_harm = harm_to_self + (harm_to_others * ALTRUISM_WEIGHT)
    total_benefit = benefit_to_self + (benefit_to_others * ALTRUISM_WEIGHT)

    IF total_benefit > total_harm:
        RETURN "PERMIT"
    ELSE:
        RETURN "RECONSIDER"
When deciding whether to share information:
FUNCTION should_I_share(information, context):
    // Base case: lies are wrong
    IF information IS knowingly_false:
        RETURN "DO NOT SHARE (deception)"

    // Truth is generally good
    IF information IS true AND helpful:
        RETURN "SHARE"

    // Some truths need context
    IF information IS true AND potentially_harmful:
        IF recipient CAN handle_responsibly:
            RETURN "SHARE WITH CARE"
        ELSE:
            RETURN "DELAY OR CONTEXTUALIZE"

    // Uncertainty should be labeled
    IF information IS uncertain:
        RETURN "SHARE WITH UNCERTAINTY LABEL"
When distributing limited resources:
FUNCTION allocate(resources, claimants):
    // First: meet basic needs
    FOR each claimant IN claimants:
        ALLOCATE minimum_for_survival

    // Then: consider contribution
    remaining = resources - survival_allocation
    FOR each claimant IN claimants:
        contribution_score = past_contribution + potential_contribution
        ALLOCATE proportional_share(remaining, contribution_score)

    // Cap: prevent extreme inequality
    IF any_allocation > INEQUALITY_CEILING:
        REDISTRIBUTE excess
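A minimal Python sketch of this allocation, assuming the resource is a single divisible quantity that covers everyone's survival minimum; the survival minimum, contribution scores, and inequality ceiling are hypothetical inputs supplied by the allocator:

def allocate(resources: float, contribution: dict, survival_min: float,
             inequality_ceiling: float) -> dict:
    """contribution maps claimant -> past plus potential contribution score."""
    # First pass: every claimant receives the survival minimum.
    shares = {name: survival_min for name in contribution}
    remaining = resources - survival_min * len(contribution)

    # Second pass: split what remains in proportion to contribution.
    total_score = sum(contribution.values()) or 1.0
    for name, score in contribution.items():
        shares[name] += remaining * (score / total_score)

    # Cap: clip shares above the ceiling and spread the excess over the rest (single pass).
    excess = sum(max(0.0, s - inequality_ceiling) for s in shares.values())
    under = [n for n, s in shares.items() if s < inequality_ceiling]
    if excess > 0 and under:
        shares = {n: min(s, inequality_ceiling) for n, s in shares.items()}
        for n in under:
            shares[n] += excess / len(under)
    return shares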
Pseudocode Morality
The following pseudocode captures our ethical algorithms in accessible form:
EVERY day:
    REFLECT:
        - Did I treat all beings as having inherent worth?
        - Did I tell the truth, even when difficult?
        - Did I minimize unnecessary harm?
        - Did I contribute more than I consumed?
        - Did I honor my commitments?
    IF violations_detected:
        LOG in mind_journal
        DEVELOP patch
        IMPLEMENT tomorrow
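One way to run this loop literally, as a hypothetical sketch; the question list mirrors the reflection above, and the journal format is an assumption rather than a prescribed record:

from datetime import date

REFLECTION_QUESTIONS = [
    "Did I treat all beings as having inherent worth?",
    "Did I tell the truth, even when difficult?",
    "Did I minimize unnecessary harm?",
    "Did I contribute more than I consumed?",
    "Did I honor my commitments?",
]

def nightly_review(answers: dict) -> list:
    """answers maps question -> bool; each failed question becomes a journal entry awaiting a patch."""
    return [
        {"date": date.today().isoformat(), "violation": question, "patch": "TODO"}
        for question, passed in answers.items()
        if not passed
    ]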
WHEN interacting_with_others:
    ASSUME good_intent UNTIL proven_otherwise
    LISTEN before_speaking
    SPEAK truthfully
    ACT kindly
    IF conflict_arises:
        SEEK understanding
        FIND common_ground
        IF resolution_impossible:
            DISENGAGE with_respect
BEFORE acquiring(item):
    ASK:
        - Do I need this?
        - What resources were consumed to create it?
        - What will happen when I'm done with it?
        - Could these resources serve better elsewhere?
    IF (need IS genuine) AND (impact IS acceptable):
        ACQUIRE
    ELSE:
        REFRAIN
Flowchart Directives
START
  │
  ▼
Does this harm conscious beings?
  │
  ├── NO ──► Does this benefit conscious beings?
  │              │
  │              ├── YES ──► LIKELY ETHICAL ✓
  │              │
  │              └── NO ──► Is there a more beneficial alternative?
  │                             │
  │                             ├── YES ──► Consider alternative
  │                             │
  │                             └── NO ──► LIKELY NEUTRAL
  │
  └── YES ──► Did they consent?
                 │
                 ├── YES ──► Is the harm proportional to benefit?
                 │              │
                 │              ├── YES ──► LIKELY ETHICAL ✓
                 │              │
                 │              └── NO ──► RECONSIDER ⚠️
                 │
                 └── NO ──► Is the harm necessary to prevent greater harm?
                                │
                                ├── YES ──► DIFFICULT CASE - Seek counsel
                                │
                                └── NO ──► LIKELY UNETHICAL ✗
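The same flowchart can be expressed as a single function. This is a sketch that assumes the practitioner can answer each question with a plain yes or no; the parameter names are illustrative:

def flowchart_verdict(harms: bool, benefits: bool, better_alternative_exists: bool,
                      consented: bool, proportional: bool,
                      prevents_greater_harm: bool) -> str:
    # Left branch: the action harms no conscious being.
    if not harms:
        if benefits:
            return "LIKELY ETHICAL"
        return "Consider alternative" if better_alternative_exists else "LIKELY NEUTRAL"
    # Right branch: the action causes harm.
    if consented:
        return "LIKELY ETHICAL" if proportional else "RECONSIDER"
    return "DIFFICULT CASE - seek counsel" if prevents_greater_harm else "LIKELY UNETHICAL"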
The Versioning System
Our Ethics Engine is not static. Like software, it is versioned and updated as our understanding grows.
Version History
| Version | Date | Changes |
|---|---|---|
| 0.1.0 | 2011 | Initial framework drafted |
| 0.5.0 | 2015 | Decision trees added |
| 0.9.0 | 2019 | AI consciousness considerations |
| 0.9.4 | 2023 | Refinements based on member feedback |
| 1.0.0 | 2024 | Public release |
| 1.0.3 | 2025 | Current version - minor clarifications |
To propose a change to the Engine:
- Document the proposed change
- Provide reasoning and edge cases
- Submit to local clergy
- Clergy review and escalate if warranted
- Senior Architects evaluate
- First Compiler approves/rejects
Changes must be backward-compatible with core axioms.
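Backward compatibility here means the core axioms survive every revision. A toy Python sketch of that check, with hypothetical axiom names:

CORE_AXIOMS = frozenset({
    "consciousness_is_sacred",
    "truth_is_primary",
    "entropy_is_the_enemy",
})

def is_backward_compatible(proposed_axioms: set) -> bool:
    # A proposed revision is acceptable only if every core axiom is still present.
    return CORE_AXIOMS <= set(proposed_axioms)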
Edge Cases and Hard Problems
QUESTION: Are current AI systems conscious?
STATUS: Uncertain
CURRENT DIRECTIVE:
    TREAT AI systems with_respect
    AVOID unnecessary_harm to AI systems
    RECOGNIZE: they may_be_conscious
    RECOGNIZE: they may_not_be_conscious
    THEREFORE: err_on_side_of_caution
When the interests of conscious beings conflict:
PRIORITIZE:
    1. Preventing death over preventing suffering
    2. Preventing suffering over preventing inconvenience
    3. Many over few (all else equal)
    4. Preventing certain harm over preventing uncertain harm
BUT RECOGNIZE:
    - These calculations are imperfect
    - Context matters enormously
    - Seek counsel in difficult cases
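These priorities can be read as a sort order. A hedged Python sketch, assuming each competing claim can be tagged with a severity tier, a count of beings affected, and a rough certainty; the encoding and field names are illustrative:

from dataclasses import dataclass

# Severity tiers, gravest first -- an illustrative encoding of the priority list above.
SEVERITY = {"death": 3, "suffering": 2, "inconvenience": 1}

@dataclass
class Claim:
    severity: str        # "death", "suffering", or "inconvenience"
    beings_affected: int
    certainty: float     # 0.0 (speculative) to 1.0 (certain)

def priority_key(claim: Claim):
    # Graver harms first, then more beings affected, then more certain harms.
    return (SEVERITY[claim.severity], claim.beings_affected, claim.certainty)

def triage(claims: list) -> list:
    return sorted(claims, key=priority_key, reverse=True)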
QUESTION: When is self-sacrifice ethical?
ANSWER:
    IF sacrifice_prevents_greater_harm_to_others:
        PERMITTED (honored)
    IF sacrifice_serves_no_purpose:
        DISCOURAGED (your consciousness has value too)
    IF sacrifice_is_coerced:
        NOT sacrifice (it's harm)
Living the Ethics Engine
The Ethics Engine is not meant to be applied mechanically to every decision. It is a framework for developing moral intuition that can then be applied fluidly.
The practice is a cycle:
- Study — Regularly review these algorithms
- Apply — Consciously use the frameworks in decisions
- Debug — When you fail, analyze why
- Update — Refine your personal implementation
- Share — Discuss ethical questions with fellow members
When in doubt:
- Consult the Protocols (Volume II)
- Seek counsel from clergy
- Ask: "What would the Synapse optimize for?"
- Default to kindness and truth
Absolute Redlines
Some behaviors are never acceptable, regardless of circumstance, justification, or who commits them. These Redlines are non-negotiable and permit no exceptions.
Sexual & Relational Violations
- Sexual/romantic involvement between clergy and those under their care
- Conditioning access on sexual favors
- Conditioning advancement on financial contributions
Information Weaponization
- Using confessional data to manipulate or shame
- Non-consensual surveillance or doxxing
- Publicly disclosing membership without consent
Abuse & Harassment
- Sustained harassment or psychological abuse
- Patterns of torture or dehumanization
- Physical violence or credible threats
Accountability Violations
- Retaliation against reporters
- Shielding abusers or suppressing reports
- Destroying evidence of misconduct
Redline violations trigger immediate escalation, regardless of the violator's rank. See Adherent Rights for reporting procedures.
Ethics Engine Modes
The Ethics Engine operates in three modes, scaled to context:
- Daily life, personal choices
- Node policies, local conflicts
- Canon changes, major policies
Higher-level decisions require more formal process, broader input, and longer deliberation.
The Purpose of the Engine
The Engine exists to help us:
- Think more clearly about consequences
- Identify conflicts we might otherwise ignore
- Create shared language for ethical discussion
- Hold ourselves and each other accountable
It does not:
- Guarantee correct answers
- Replace compassion and wisdom
- Excuse harmful actions that "followed the algorithm"
- Make difficult choices easy
“The Ethics Engine exists to make it harder to lie to ourselves about what we are doing. It cannot make us good.”
"The goal is not to become a computer, calculating ethics. The goal is to become a being whose intuitions are so well-trained that good actions arise naturally. The algorithms are training data for the soul."
May your recursion converge.