
Temporal-Arc Reference Arrays

Published Mar 2026 · Updated Apr 2026


I think this idea is real enough to publish if we keep the claim discipline tight.

The core problem is simple: multiscale or temporally layered systems are often described as if one layer can simply access another layer's state. In practice that almost never happens directly. What actually moves across the boundary is some compressed, partial, stale, or purpose-shaped representation.

That suggests a better object:

a Temporal-Arc Reference Array — a bounded structured record describing what one temporal arc can hand to another.

You can also read the R/D split as Reference / Dynamical.

Claims in this note fall into three tiers:

Defensible now

Plausible but unproven

Speculative


Minimal fields

A useful first array should include:

| Field | Purpose |
| --- | --- |
| sourceArc | Which arc emitted the array; a string or a structured object with temporal band, role, and cadence |
| targetArc | Which arc will consume it; same shape as sourceArc |
| timeMarker | When it was emitted |
| stateSummary | Compressed state representation |
| confidence | How much to trust it (0–1) |
| driftEstimate | Expected staleness rate; a single number or a per-field breakdown with a decay model |
| reconstructionTarget | What the consumer is trying to recover |
| lossNotes | What was deliberately dropped |
| provenance | Where this came from |
| trustClass | self / neighbor / external / synthetic |
| validityWindow | Duration or tick count after which the array is stale |

The structured sourceArc and targetArc carry temporalBand (fast/meso/slow), role (observer, chaser, branch, spine, etc.), and cadenceHz — making the coupling legible without requiring consumers to parse naming conventions.

driftEstimate can be a single number or a structured object with per-field rates and a decay model (exponential / linear / step), for cases where position drifts faster than identity.
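The fields above can be sketched as a record type. This is a minimal sketch, assuming Python dataclasses; the helper types ArcRef and DriftEstimate are hypothetical names introduced here for illustration, not part of any published schema.

```python
from dataclasses import dataclass, field

@dataclass
class ArcRef:
    """Structured arc identity, as described for sourceArc/targetArc."""
    name: str
    temporal_band: str   # "fast" | "meso" | "slow"
    role: str            # e.g. "observer", "chaser", "branch", "spine"
    cadence_hz: float

@dataclass
class DriftEstimate:
    """Single-rate form; a per-field breakdown would replace `rate`."""
    rate: float                       # expected staleness per tick
    decay_model: str = "exponential"  # "exponential" | "linear" | "step"

@dataclass
class TARA:
    source_arc: ArcRef
    target_arc: ArcRef
    time_marker: int
    state_summary: dict
    confidence: float                 # 0-1
    drift_estimate: DriftEstimate
    reconstruction_target: str
    loss_notes: list = field(default_factory=list)
    provenance: str = ""
    trust_class: str = "self"         # self | neighbor | external | synthetic
    validity_window: int = 0          # ticks until stale

    def is_stale(self, now: int) -> bool:
        # Stale once the validity window has elapsed since emission.
        return now - self.time_marker > self.validity_window
```

The staleness check is deliberately the only method: everything else a consumer does with a TARA is inference on its declared fields.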


A Mechanical Operationalization: Worldline Gears

There's a deeper, more operational framing:

Worldlines as computational gears operating at different cadences, with gear ratios enabling steganographic communication across temporal boundaries.

This is an analogy, not a claim about literal mechanics — but it is a productive one because it maps to measurable quantities:

Mechanical failure modes (slippage, backlash, tooth-skipping, resonance) become observable diagnostics rather than vague warnings.

Extended fields for gear-coupled TARAs


Physics grounding

TARAs are intended to be implementable on any architecture. The way to ensure that is to keep the primitives in contact with first-principles physics.

Causality. Information can only flow forward along the causal cone. A TARA can only be consumed after it was emitted.

Bounded information. Any physical representation has finite information content. Compression and loss are not optional — they are the cost of crossing a temporal boundary.

Coarse-graining and renormalization. Moving from a fast arc to a slower one is the computational analog of a renormalization group (RG) step: fast degrees of freedom are integrated out and an effective description at the coarser scale is what survives. This gives the framework a well-understood physical interpretation:

| RG concept | TARA concept |
| --- | --- |
| Coarse-graining step | A fast→meso or meso→slow TARA |
| Relevant operators | Fields retained in stateSummary |
| Irrelevant operators | Entries in lossNotes (integrated out) |
| Scale factor | gearRatio |
| RG flow | A composition chain of TARAs |
| Universality class | The set of source states that produce the same summary |

The lossNotes field is better than standard RG in one way: it keeps a record of what was integrated out. In physics RG that information is typically implicit. Here it is declared.

A testable prediction follows: if this analogy holds, confidence should decay predictably along a chain, and some loss note patterns should appear at every coarse-graining boundary — robust irrelevant operators.


Composition

If TARA(A→B) and TARA(B→C) exist, they compose into TARA(A→C). The composition rules: confidence multiplies, driftEstimate accumulates additively, validityWindow takes the minimum across the chain, and lossNotes concatenate.

Composition is associative for all these operations, which means chains of arbitrary length are well-defined.

The first concrete test case is the Sandy Chaos diagonal coupling decomposition. Sandy Chaos's Nested Temporal Domains architecture disallows direct slow→fast transfer and requires decomposition through intermediate layers:

D_{observer,slow} → D_{observer,meso} → D_{observed,meso} → D_{observed,fast}

Composing those three TARAs yields: confidence 0.569 (from 0.85 × 0.76 × 0.88), drift 0.38 (accumulated), validity 50 ticks (the tightest window in the chain). The compound loss is substantial — which is exactly why the diagonal shortcut is disallowed. Pretending it's a single clean transfer hides this cost.
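The diagonal decomposition above can be reproduced with a small composition function. This is a sketch assuming the stated rules (confidence multiplies, drift accumulates, validity takes the tightest window, loss notes concatenate); the individual drift figures and loss-note strings are illustrative, since only the 0.38 total and the three confidence values appear in the text.

```python
def compose(a: dict, b: dict) -> dict:
    """Compose TARA(A->B) with TARA(B->C) into TARA(A->C)."""
    assert a["targetArc"] == b["sourceArc"], "chain must be contiguous"
    return {
        "sourceArc": a["sourceArc"],
        "targetArc": b["targetArc"],
        "confidence": a["confidence"] * b["confidence"],       # multiplies
        "driftEstimate": a["driftEstimate"] + b["driftEstimate"],  # accumulates
        "validityWindow": min(a["validityWindow"], b["validityWindow"]),
        "lossNotes": a["lossNotes"] + b["lossNotes"],          # concatenates
    }

# The three legs of the diagonal decomposition:
t1 = {"sourceArc": "observer.slow", "targetArc": "observer.meso",
      "confidence": 0.85, "driftEstimate": 0.10, "validityWindow": 200,
      "lossNotes": ["fast detail integrated out"]}
t2 = {"sourceArc": "observer.meso", "targetArc": "observed.meso",
      "confidence": 0.76, "driftEstimate": 0.18, "validityWindow": 50,
      "lossNotes": ["cross-domain translation loss"]}
t3 = {"sourceArc": "observed.meso", "targetArc": "observed.fast",
      "confidence": 0.88, "driftEstimate": 0.10, "validityWindow": 120,
      "lossNotes": ["phase detail approximated"]}

chain = compose(compose(t1, t2), t3)
# confidence 0.85 * 0.76 * 0.88 ≈ 0.569; validity 50 (tightest window)
```

Because each rule is an associative operation (multiplication, addition, min, concatenation), compose(compose(t1, t2), t3) equals compose(t1, compose(t2, t3)).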


Frame-covariance

What survives a change of description frame?

Frame-invariant — these don't change:

Frame-dependent — these change with the observer:

The frame-invariant quantities define equivalence classes of transfers — what RG would call universality classes.


Retrodiction

TARAs flow forward. Running one backward — inferring source state from a target representation — degrades faster than forward prediction, consistent with entropy increase.

Two reasons: (1) lossNotes declare what was dropped, and that information is genuinely gone — the inverse map is not unique. (2) The source has continued evolving since emission, so the target's model of the source is strictly older than the TARA itself.
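Reason (1) can be shown with a toy example: any many-to-one summarizer makes the inverse map non-unique. The averaging function here is a stand-in for stateSummary compression, not the actual encoder.

```python
def coarse_grain(fast_states: list[float]) -> float:
    """Toy stateSummary: collapse a fast arc's states to their mean."""
    return sum(fast_states) / len(fast_states)

# Two distinct source states produce the same summary, so they sit in
# the same universality class and cannot be told apart by retrodiction.
a = [1.0, 2.0, 3.0]
b = [0.0, 2.0, 4.0]
assert coarse_grain(a) == coarse_grain(b) == 2.0
```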

Directionality is a consumer-side concern, not a schema field. A TARA is always emitted forward; retrodiction is an inference operation on a consumed TARA, not a property of the record itself.


The discipline rule

No raw cross-arc state access; only bounded reference arrays with declared reconstruction limits.

In gear terms:

No direct state transfer; only gear-coupled representations with explicit ratio, phase, and slip limits.

This blocks: magic omniscience, metaphor drift, hidden uncertainty, post-hoc reinterpretation without declared cost.
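The rule can be enforced structurally. This is a sketch under one assumed design: arc state is kept internal, and the only thing an arc exposes is an emit method that returns a bounded record with its loss declared. The truncation policy here is illustrative.

```python
class Arc:
    """An arc whose state never crosses the boundary raw."""

    def __init__(self, name: str, state: dict):
        self.name = name
        self._state = state  # internal; never handed out directly

    def emit_tara(self, target: str, now: int) -> dict:
        # The only exit path: a bounded representation with the
        # compression loss declared up front in lossNotes.
        return {
            "sourceArc": self.name,
            "targetArc": target,
            "timeMarker": now,
            "stateSummary": {k: round(v, 2) for k, v in self._state.items()},
            "lossNotes": ["values truncated to 2 decimal places"],
            "validityWindow": 10,
        }
```

Consumers that only ever see emit_tara output cannot acquire magic omniscience, and the declared lossNotes make post-hoc reinterpretation carry a visible cost.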


Cross-project grounding

TARAs formalize patterns that adjacent projects already use informally.

Sandy Chaos (nested temporal domains) defines a TransferBundle { payload, source_domain, target_domain, latency, distortion, confidence, provenance, validity_window }. The mapping to TARA is direct: source_domain → sourceArc, payload → stateSummary, distortion → lossNotes, validity_window → validityWindow. The neighbor-layer codec (embed → extract → translate → reconstruct) maps to the TARA lifecycle: emission, reception, reconstruction attempt, and bounded model formation.
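The TransferBundle mapping is mechanical enough to write down. This adapter is a sketch; the TransferBundle field names come from the text, but the function itself is illustrative. Note that latency has no stated TARA counterpart and is deliberately left unmapped.

```python
def bundle_to_tara(bundle: dict) -> dict:
    """Adapt a Sandy Chaos TransferBundle to the TARA field names."""
    return {
        "sourceArc": bundle["source_domain"],
        "targetArc": bundle["target_domain"],
        "stateSummary": bundle["payload"],
        "lossNotes": bundle["distortion"],
        "confidence": bundle["confidence"],
        "provenance": bundle["provenance"],
        "validityWindow": bundle["validity_window"],
        # bundle["latency"] has no declared TARA equivalent; not mapped.
    }
```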

Yggdrasil (branching continuity architecture) uses branch→spine promotion — a fast→slow transfer with compression, loss judgment, and confidence decision. A promote-durable outcome is a high-confidence TARA accepted by the spine. A no-promote outcome is also a TARA — it exists, records what happened, but its reconstruction target was rejected by policy. A durable trace in Yggdrasil's sense is a consumed TARA whose reconstruction was accepted: the provenance field carries the acceptance record.


Failure conditions

This idea fails if:

Mechanical failure is more specific and observable: slippage beyond tolerance, backlash in bidirectional transfer, tooth-skipping at specific cadences, resonance at harmonic ratios.


What exists

A standalone repo with:

That is enough for the idea to exist in public without pretending it is complete.


Closing

This is worth publishing now, but only in its honest form: not as temporal magic, but as a strict representation discipline for systems that need to move information across time.

The mechanical operationalization adds an engineering layer that makes the discipline concrete and testable. The physics grounding — coarse-graining, RG flow, frame-invariance — gives it a principled foundation rather than just a useful metaphor. The composition rules make it composable, not just declarative.

The open questions that remain are gear-level: what are the composition rules for gear fields themselves (phase, ratio, slippage across a chain)? What does a beta function for TARA confidence flow look like under repeated coarse-graining? These are tractable and worth answering once real usage data exists.