[Fleegello originally wrote his critique of octan physics as section 2.3 of the *Principia*. It is listed separately here, as it frankly stands apart, and interferes with the general flow of that work. Most scholars agree that Fleegello did not introduce any novel physics in this section. Rather he took ideas already advanced by contemporary physicists, and adapted them to his own philosophical framework. As in the primary document, editorial comments are enclosed in brackets, to distinguish them from the main text.]

The physical (pan)universe has been identified as the Physical Consistency Subfield (PCS) of the Consistency Ideo Field (CIF) – the complete set of abstract mathematical objects that both define temporospatial (i.e., physical) relationships and are compatible with consistency logic. These objects may manifest (in part) as states of a physical system, or as **observables** or other **operators** that act on those states. If modern physics is a reliable guide, they incorporate a broad class of multidimensional objects known as **dimensors**. A dimensor operator defines **linear** relations between other dimensors. Because of this linear character, dimensors are often represented by multidimensional arrays. [Dimensors include the familiar mathematical objects called **scalars**, **spinors**, **vectors**, and **tensors**.]

In standard **Shrodiik [quantum] theory** [named in honor of the pre-Dracian physicist Shrodo], a physical state (of any sufficiently isolated system) is denoted by a **ket** symbol |*ψ*>, where *ψ* is an arbitrary label. This entity is supposed to encompass all physical aspects of a system. |*ψ*> is commonly interpreted in terms of the positions and motions of material particles at a time *t* in a 3-dimensional space ** x**. Observables then include the positions, momenta, and energies of these particles.

For any physical system, there is a range of possible states. A novel feature of Shrodiik physics is that a system can consist of a linear combination, or **superposition**, of these available states. The selection of a set of fundamental **basis** states is then arbitrary, to some extent; any given set of basis states can be mixed into new combinations, to form distinct sets. In general, |*ψ*> can be viewed as a **vector** in an abstract **space** that spans all the possible states. If the components of a state vector are defined with respect to a specified set of basis vectors, then the state may be represented by a single-column dimensor array. Observables and many other operators may in turn be represented by square dimensor arrays that transform any given state (by the usual rules of **matrix multiplication**) into another state.
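The state-vector picture above can be made concrete with a minimal numerical sketch, assuming a hypothetical two-state system (all numbers are illustrative, not physical):

```python
import numpy as np

# Two basis states of a hypothetical two-level system, as column vectors.
basis_0 = np.array([1.0, 0.0], dtype=complex)
basis_1 = np.array([0.0, 1.0], dtype=complex)

# A superposition of basis states, normalized so that <psi|psi> = 1.
psi = (basis_0 + basis_1) / np.sqrt(2)
norm = np.vdot(psi, psi).real

# Mixing the basis states yields an equally valid, distinct basis.
theta = np.pi / 8
mix = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]], dtype=complex)
new_basis_0 = mix @ basis_0
new_basis_1 = mix @ basis_1
overlap = abs(np.vdot(new_basis_0, new_basis_1))   # still orthogonal

# An operator (square array) transforms one state into another
# by ordinary matrix multiplication.
A = np.array([[0, 1], [1, 0]], dtype=complex)
phi = A @ psi
```

The rotated pair is just as serviceable a basis as the original one, which is the arbitrariness the text describes.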

Let the symbol *A* represent an operator corresponding to some observable. When *A* is applied to an arbitrary state vector, the result is typically a linear combination of other state vectors. Suppose, however, that *A* is applied to a state vector |*ψ _{a}*> characterized by a well-defined value *a* of the observable. The result is then the original state, simply multiplied by that value:

*A*|*ψ _{a}*> = *a*|*ψ _{a}*> .

This is what it *means* for *A* to represent an observable. Mathematically, |*ψ _{a}*> is an **eigenstate** of the operator *A*, and *a* is the corresponding **eigenvalue**.
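The eigenvalue relation can be checked numerically; the 2x2 Hermitian array below is an arbitrary stand-in for an observable:

```python
import numpy as np

# An arbitrary Hermitian array playing the role of an observable A.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigvals, eigvecs = np.linalg.eigh(A)   # eigenvalues in ascending order: 1.0, 3.0
a = eigvals[0]
psi_a = eigvecs[:, 0]

# Applying A to an eigenstate merely rescales it by the eigenvalue a.
residual = np.linalg.norm(A @ psi_a - a * psi_a)
```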

Contemporary Shrodiik physics has another, most peculiar trait. For any physical state |*ψ*>, only the *probabilities* for measuring different values of a given observable can be computed. Even granted complete knowledge of a physical system at a particular moment, the future course as seen by any octan observer cannot in general be predicted with certainty. Performing a measurement (observation) somehow reduces a system to an eigenstate of the observed quantity, corresponding to the measured value. Detailed prescriptions for computing probabilities may be found in Shrodiik physics texts.

In **bra-ket** notation [physicists used this archaic script during Fleegello's era], the numeric overlap between two states |*φ*> and |*ψ*> is represented by <*φ*|*ψ*>, where the states are normalized such that <*ψ*|*ψ*>=1 for all |*ψ*>. The overlap value is a **probability amplitude** for starting with a system in state |*ψ*>, but observing it in state |*φ*>. The actual probability is the absolute square of this amplitude, or |<*φ*|*ψ*>|^{2}. For example, consider a one-particle system. If |** x**> is the state with the particle at position ** x**, then <** x**|*ψ*> is the probability amplitude for finding the particle at ** x**; viewed as a function of position, this amplitude is the particle's **wavefunction** *ψ*(** x**).

The **expectation value** of an observable *A*, defined as the average value *A*_{avg} over repeated measurements on identical states |*ψ*>, is given by

*A*_{avg} = <*ψ*|*A*|*ψ*> .

The observable *A* has a definite value *a* only if |*ψ*> is already an eigenstate |*ψ _{a}*> of *A*; the expectation value then equals *a* itself.

In classical physics, material particles were treated as localized entities, distinct from **waves** (such as light). Only the latter could undergo self-**interference**, or **diffract** around obstacles. At the dawn of the Shrodiik revolution, waves were found to sometimes act like classical particles, and so-called particles were found to have wave properties. Observables with a continuous range of possible values in classical physics (e.g., the energy of an electron in an atom) might now be **quantized**, and restricted to discrete values.

Largely because of this duality of wave-particle characteristics, the order in which observables are measured may be significant. Consider two observables, represented by operators *A* and *B*. The operators **commute** if the order of measurement is irrelevant, or equivalently if the order in which *A* and *B* act on an arbitrary physical state is irrelevant – i.e., if *AB* = *BA*. The operators do not commute if *AB* ≠ *BA*. In this case the very act of measuring *A* or *B* introduces uncertainty into the value of the other, **complementary** observable. A system cannot simultaneously be an eigenstate of two operators that do not commute – neither operator would alter such a state, so their order could not matter. It is then impossible to simultaneously measure the values of two non-commuting observables, since such a measurement must create an eigenstate of both.

The archetypal pair of non-commuting observables in Shrodiik mechanics is the position *x* and **linear momentum** *p _{x}* of a material particle along a spatial coordinate (direction) *x*. A particle with a well-defined momentum behaves as a wave with a well-defined **wavelength** λ, where

*p _{x}* = ℎ / λ = ℏ *k _{x}* ,

where ℎ is the minuscule (shorter wavelength begets more particle-like behavior) but nonzero **Planko constant** [named for the pioneering physicist **Planko**], ℏ = ℎ/2π is the reduced Planko constant, and *k _{x}* = 2π/λ is the **wavenumber**. The operator *p _{x}* is then directly proportional to the spatial rate of change of the wavefunction in direction *x*. The position and momentum operators do not commute; instead they satisfy

*x p _{x}* - *p _{x} x* = *i* ℏ .
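The canonical commutator can be sketched on a discretized line (a finite-difference approximation, with ℏ set to 1 and arbitrary grid parameters):

```python
import numpy as np

# Discretized position and momentum operators (hbar = 1).
hbar = 1.0
n = 401
x = np.linspace(-10.0, 10.0, n)
dx = x[1] - x[0]

X = np.diag(x)
# Central-difference derivative; P = -i*hbar * d/dx.
D = (np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)) / (2 * dx)
P = -1j * hbar * D

# Acting on a smooth test function, (x p_x - p_x x) f is approximately
# i*hbar*f, away from the grid edges.
f = np.exp(-x ** 2 / 4)
lhs = (X @ P - P @ X) @ f
err = np.max(np.abs(lhs[1:-1] - 1j * hbar * f[1:-1]))
```

The residual `err` shrinks as the grid is refined, which is the discrete shadow of the exact operator relation.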

Using **calculus**, it can be shown that the (unnormalized) wavefunction *ψ*(*x*) of a particle with pure wavenumber *k _{x}* (corresponding to momentum ℏ *k _{x}*) is

*ψ*(*x*) = *e*^{i kxx} = *cos*( *k _{x}x* ) + *i* *sin*( *k _{x}x* ) ,

where *e* is the **Eulero number** of mathematics, and the "cos" and "sin" terms refer to standard (wave-like) **trigonometric functions**. This wavefunction is totally unlocalized in space; the absolute-square probability of finding the particle at a given position is the same for all *x* values. Conversely, it can be shown that a wavefunction corresponding to a particle at a definite position must include all possible wavelengths, or linear momenta.
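Both claims, that a pure-momentum wave is completely unlocalized, and that summing many momenta localizes the particle, can be checked numerically (grid and ranges are arbitrary choices):

```python
import numpy as np

x = np.linspace(-10.0, 10.0, 2001)
kx = 3.0

# Pure-momentum wavefunction: its absolute square is flat in x.
psi = np.exp(1j * kx * x)
prob = np.abs(psi) ** 2
flatness = prob.max() - prob.min()        # essentially zero

# An equal-weight superposition over many wavenumbers piles up near x = 0.
ks = np.linspace(-20.0, 20.0, 401)
packet = np.exp(1j * np.outer(x, ks)).sum(axis=1)
peak_x = x[np.abs(packet).argmax()]
```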

Consider then a system |*ψ*> = |*x _{o}*>, in which a particle initially has a definite position *x _{o}*. Such a state is a superposition of momentum eigenstates spanning all possible values of *p _{x}*; a subsequent measurement of momentum could thus yield any value. Position and momentum cannot both be well defined at once.

Consider now a more general system in which one member of a pair of non-commuting observables is well defined. Mathematically, the system can be considered a **superposition** of pure eigenstates with different but well-defined values of the other non-commuting quantity. The existence of non-commuting observables is contrary to classical (pre-Shrodiik) physics. The natural law that describes physical evolution applies to superpositions of pure states, rather than to states in which all classical variables have precise values.

The state |*ψ*> of a physical system can in general be written as a coherent sum

|*ψ*> = *C*_{1}|*φ*_{1}> + *C*_{2}|*φ*_{2}> + *C*_{3}|*φ*_{3}> + . . . = Σ_{j}*C _{j}*|*φ _{j}*>

over a complete set of orthonormal states |*φ*_{j}>, where the *C*_{j} are (complex) constants.

The |*φ _{j}*> are orthonormal if <*φ _{j}*|*φ _{k}*> = 1 for *j* = *k*, and 0 otherwise.

The choice of the |*φ _{j}*> is arbitrary to some extent, but they must be eigenstates of a common set of commuting observables, so that together they span the complete space of states.

The probability of starting with the system |*ψ*> and finding it in state |*φ _{j}*> is then |*C _{j}*|^{2} = |<*φ _{j}*|*ψ*>|^{2}. The expectation value of an observable *A* may accordingly be expanded as

*A*_{avg} = <*ψ*|*A*|*ψ*> = Σ_{j}Σ_{k}*C _{j}*^{*}*C _{k}*<*φ _{j}*|*A*|*φ _{k}*> .

The off-diagonal terms *j* ≠ *k* in the sum represent nonclassical **interference** between the different states in the coherent superposition comprising |*ψ*>. These terms in general vanish only if the |*φ _{j}*> have well-defined values (i.e., are eigenstates) of *A*. In that case the expectation value reduces to the sum Σ_{j}|*C _{j}*|^{2}*a _{j}* over the eigenvalues *a _{j}*, weighted by their probabilities.
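The collapse of the double sum to a probability-weighted sum of eigenvalues, when the expansion states are eigenstates of *A* itself, can be verified with a small sketch (arbitrary 2x2 example):

```python
import numpy as np

# An arbitrary Hermitian observable and its eigenstates.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
eigvals, eigvecs = np.linalg.eigh(A)

psi = np.array([0.6, 0.8])          # a normalized state
C = eigvecs.T @ psi                 # coefficients C_j = <phi_j|psi>

expectation = psi @ A @ psi         # <psi|A|psi>
diagonal_sum = np.sum(np.abs(C) ** 2 * eigvals)   # sum of |C_j|^2 * a_j
total_prob = np.sum(np.abs(C) ** 2)               # probabilities sum to 1
```

With eigenstates of *A* as the basis, the off-diagonal (interference) terms contribute nothing, and the two computations of the average agree.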

What determines useful observables, other than position and time? The mathematician **Noethra** has linked such quantities to **symmetries** in the equations of motion that describe the temporal evolution of |*ψ*>. In particular, Noethra's **first theorem** states that for every continuous, **differentiable** coordinate **transformation** that does not alter these equations, there is a corresponding observable whose expectation value is **conserved**, or constant over time. For sufficiently isolated systems, the equations are in fact generally unaffected by several such transformations, including time displacement, spatial displacement, and spatial rotation. Each of these symmetries is associated with an observable and conserved quantity.

But why *are* the equations of motion unaffected by the given transformations? Although physical conditions clearly vary at different locations in time-space, there is nothing else to distinguish points or directions. From an ideobasic perspective, the same physical law should then apply universally to all times, places and orientations. This law should further depend only on extant physical conditions. The inherent equivalence of all points and directions thus leads to the observed symmetries and conserved quantities.

When the laws of motion are not affected by displacements in time (i.e., they remain the same over time), then what is commonly called **energy** is conserved. This is primarily what makes energy a useful observable. Note that only the laws of motion are unchanging; physical systems themselves may change dramatically over time. Energy is associated with the **Hoobitean** operator *H* [named for the classical physicist Hoobitu]. The time rate of change of |*ψ*> is proportional to *H*. More precisely, *H* equals (*i*ℏ) multiplied by the time rate of change. If |*ψ*> is an eigenstate of *H*, then it has a definite energy *E*, temporal frequency *f*, and angular frequency *ω*, related by

*E* = ℎ *f* = ℏ *ω* .

The (unnormalized) time-dependent wavefunction *ψ*(*t*,*x*) of a particle with wavenumber *k _{x}* and angular frequency ω (corresponding to momentum ℏ *k _{x}* and energy ℏ *ω*) is

*ψ*(*t*,*x*) = *e*^{i (kxx-ωt)} = *cos*( *k _{x}x*-*ωt* ) + *i* *sin*( *k _{x}x*-*ωt* ) .

For a one-particle system, the relationship between time and energy is thus analogous to that between position and linear momentum. For a multi-particle system, however, the situation is more nuanced. Whereas every particle in such a system may be assigned its own dynamical position operator, all particles traditionally share a common time. Time is then treated as a system parameter, and not associated with a true operator.

When the laws of motion are not affected by displacements in spatial position (i.e., the laws are the same at different spatial points) or by spatial rotations, then **linear momentum** and **angular momentum** are also conserved, and are both useful observables. It can be shown more generally that the expectation value of any operator that both commutes with *H*, and is not explicitly a function of time, is also a **constant of motion**. Every symmetry in *H* is thus associated with a conserved quantity, and a corresponding observable.

Classical observables may have nonclassical analogs that result from a reinterpretation (typically involving commutation relations) of associated operators. In particular, the commutation relations among the three orthogonal (mutually perpendicular) angular momentum operators imply the existence of a nonclassical type of angular momentum, known as **spin**. Elementary particles are found to inherently possess this type of angular momentum. Particle spin is naturally quantized to discrete values, characterized by a **spin number**, which must be an integral multiple of 1/2. Overall spin angular momentum (which also equals the maximum possible component in any direction) is the spin number multiplied by the reduced Planko constant.

The spin angular momentum operators associated with a spin number *s* can be represented by **irreducible** (2*s*+1) x (2*s* +1) arrays. The spin aspect of a spin-s particle can then be represented by a spatially-oriented, (2*s* +1)-dimensional single-column dimensor known as a pointor, designated by Š. An overall particle state may in turn be represented by a pointor function Š(*t*,* x*) of time *t* and position * x*.
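For *s* = 1/2, the irreducible arrays are 2x2, and the text's statements about commutation and the maximum component can be checked directly (units with the reduced Planko constant set to 1):

```python
import numpy as np

# Spin-1/2 operators as (2s+1) x (2s+1) = 2x2 arrays, in units with hbar = 1.
hbar = 1.0
Sx = hbar / 2 * np.array([[0, 1], [1, 0]], dtype=complex)
Sy = hbar / 2 * np.array([[0, -1j], [1j, 0]], dtype=complex)
Sz = hbar / 2 * np.array([[1, 0], [0, -1]], dtype=complex)

# The defining angular-momentum commutation relation: [Sx, Sy] = i*hbar*Sz.
comm_residual = np.linalg.norm(Sx @ Sy - Sy @ Sx - 1j * hbar * Sz)

# The maximum component in any direction equals s * hbar = hbar/2.
max_component = np.linalg.eigvalsh(Sz).max()
```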

Spinless (*s*=0) particles are represented by scalar (zero-rank dimensor) functions, with no inherent directionality – for example, *f*(*t*,* x*), where *f* assigns a single complex amplitude to each time and position.

Yet particles do not normally exist in isolation. How then can the state of a multiparticle system be represented? Suppose first that the particles are distinguishable, and motions are much slower than light speed. Such systems have traditionally been represented by a **direct product** of the pointor functions for the individual particles, in which time *t* is a common system parameter, but the coordinates * x_{P}* of the different particles *P* are treated as independent variables – for example,

Š_{a}(*t*,** x_{1}**) Š_{b}(*t*,** x_{2}**) ,

where subscripts *a* and *b* label two different single-particle states.

Suppose now that at least two particles in a system are **identical**. The probability of finding either cannot be affected when their labels are exchanged – they would otherwise be distinguishable. Since the probability is related to the absolute square of the system wavefunction, the state can at most acquire a complex **phase factor** (a factor with an absolute value equal to one) under particle exchange. Because two successive exchanges must leave the overall state unchanged, the phase factor is limited to the values ±1. The state itself must then either be symmetric (unchanged) or antisymmetric (phase factor -1) under identical particle exchange.

The wavefunctions of identical **bosons** (particles with integral spin) are found to be symmetric, while those of identical **fermions** (particles with half-integral spin) are antisymmetric. The appropriate symmetry can be achieved if a system is represented by a sum over the direct pointor products, in which the functional dependencies of the particles are suitably interchanged. For example, the state of two identical fermions might be represented by

Š_{a}(*t*,** x_{1}**) Š_{b}(*t*,** x_{2}**) - Š_{a}(*t*,** x_{2}**) Š_{b}(*t*,** x_{1}**) .
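Exchange antisymmetry can be illustrated with a small sketch, using two hypothetical single-particle wavefunctions (the Gaussian forms are arbitrary choices):

```python
import numpy as np

# Two hypothetical single-particle states, a and b (illustrative shapes).
def phi_a(x):
    return np.exp(-x ** 2)

def phi_b(x):
    return x * np.exp(-x ** 2)

def pair(x1, x2):
    # Antisymmetrized combination: flips sign under particle exchange.
    return phi_a(x1) * phi_b(x2) - phi_a(x2) * phi_b(x1)

exchange_sign = pair(-1.2, 0.3) / pair(0.3, -1.2)   # exactly -1
coincident = pair(0.7, 0.7)   # vanishes: identical fermions cannot coincide
```

The vanishing of the amplitude at coincident coordinates is the exclusion behavior characteristic of fermions.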

Symmetries in the equations of motion are not limited to continuous time-space transformations, but may also include discrete operations, such as **time reversal** and **parity inversion** (mirror reversal). [Fleegello stubbornly maintained that various discrete spacetime symmetries should generally hold, despite contrary evidence. For example, experiments seemed to demonstrate that **parity is not conserved** during certain types of radioactive decay. Parity is conserved if the equations of motion are unchanged when a system is replaced by its mirror image. Fleegello believed that physics could not be affected by such a simple transformation, and felt that crucial elements had been omitted from experimental analyses. Yet physicists soon realized that, since time and space are intimately linked, and antiparticles are equivalent to ordinary particles moving backward in time, the true symmetry involves the combined operation of parity inversion, time reversal, and the interchange of particles with antiparticles.]

Indeed, the fundamental interactions between elementary particles are thought to derive from a variety of internal **local gauge symmetries**. For example, consider a single-particle wavefunction *ψ*(*t*,* x*). Under a local phase transformation, *ψ* is multiplied by a phase factor that may vary from point to point in timespace. Requiring that the equations of motion be unaffected by every such transformation necessitates a compensating **gauge field** – in this example, the electromagnetic field.

The physicist **Vigno** has argued that symmetry principles do not merely restrict the laws of quantum physics, but *define* them. Elementary particles and their interactions have been associated with and characterized by the mathematical representations of abstract **symmetry groups**. Every such fully consistent object and process must coexist with every other compatible object and process somewhere within the PCS. This may require that the PCS is naturally divided into distinct physical universes.

Coordinate systems do not exist a priori in nature. The choice of a coordinate framework to describe a physical system should thus be arbitrary, from a strictly mathematical viewpoint (although one frame may be more convenient than another for a given purpose). It should then be possible to describe the laws of physics in a coordinate-free manner, in which observables appear only as abstract quantities, with no explicit reference to coordinate components. Expressing physical laws in such a **covariant** manner simplifies identification of symmetries and conserved quantities.

If the PCS is to respect the inherent arbitrariness in the choice of coordinate system, then fundamental **physical constants** that appear in the laws of physics should also be the same for all observers within a given physical universe, independent of the choice of reference frame. This applies in particular to **dimensionless** constants (e.g., the **fine structure constant** of atomic physics), which carry no physical units, but can be expressed as the ratios or products of **dimensional** constants that do possess units. Changes in the values of dimensional constants are generally meaningful only with respect to changes in their dimensionless combinations. So long as the values of physical constants are individually changed in a way that maintains the values of all fundamental dimensionless constants, the physical world is unaffected. Dimensionless constants stand independent of any arbitrary choice of measurement units. Indeed, no variations over time or space have thus far been detected.

[Some quantities thought to be fundamental constants in Fleegello's era have since been found to be variable. These have been reinterpreted as functions of truly fundamental constants and local physical conditions.]

Dimensionless fundamental constants need only be the same at all points within a particular physical universe. The values in distinct, non-interacting universes may be different. Indeed, if there is no fundamental reason a constant should have a particular value, then the PCS *must* encompass a host of universes covering the range of acceptable values. Yet this range cannot be continuous; the values must in some sense be quantized, and countable. All the worlds otherwise could not have meaningful existence within the PCS.

Even fundamental dimensional constants (whose numeric values depend on the choice of physical units) should be the same for all observers in a given universe, when measured with respect to reproducible units characteristic of fundamental physical processes. In particular, the **speed of light** in a vacuum, commonly denoted by the symbol *c*, appears to constitute a universal limit to the rate at which information can propagate through space. As first proposed by the physicist **Niestu** in his **theory of inertial invariance**, the speed *c* has the same value for all observers, irrespective of their state of motion. This is contrary to classical expectations, whereby an observer moving toward (away from) a light source detects a higher (lower) relative light speed than an observer at rest with respect to the source. That *c* is finite may be expected from an ideobasic viewpoint. An infinite speed is a special, limiting case of a general value, and the PCS should opt for the most general conception.

Niestu introduced a major paradigm shift in physics when he showed that a common value for *c* implies that time (space) intervals measured by one observer may be partially seen as space (time) intervals by an observer in a relative state of motion; time and space do not exist separately, but must be combined into a unified **timespace** [scientists of Fleegello's era apparently preferred this expression to today's more common term *space-time*]. The effect is tiny at low velocities, but becomes significant as speed approaches *c* (so-called Niestiik speeds). The associated coordinate transformation between reference frames in a relative state of motion is distinct from that of classical physics. If the equations of motion are to remain invariant under a velocity transformation, then those equations must be modified as well. A remarkable consequence of inertial invariance is that any mass *m* is associated with an energy *mc*^{2}. For a free particle, the relationship between total energy *E*, momentum *p*, and rest mass *m* becomes

*E*^{2} = *p*^{2}*c*^{2} + *m*^{2}*c*^{4} .
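The Niestiik relation above reduces to familiar limits, as a quick numeric check shows (units with *c* = 1; values chosen purely for illustration):

```python
# Energy-momentum relation for a free particle, in units where c = 1.
c = 1.0

def total_energy(p, m):
    # E = sqrt(p^2 c^2 + m^2 c^4)
    return ((p * c) ** 2 + (m * c ** 2) ** 2) ** 0.5

rest_energy = total_energy(0.0, 2.0)      # reduces to m*c**2 at rest
massless_energy = total_energy(5.0, 0.0)  # reduces to p*c for zero mass
```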

Niestu ultimately expanded his ideas into the **theory of general invariance**, which describes gravity in terms of distortions in the geometry of timespace.

[Fleegello overlooked a related serious inconsistency in his view of the CIF. The CIF must encompass all possible reference frames. If It experiences the same time as observers in those frames, as Fleegello envisioned, It must integrate the various time lines to maintain a single unified state of being. Yet if speed *c* is the same for all observers, events that are simultaneous in one frame may be *non*simultaneous in another. Events could then be seen by the CIF as both simultaneous and not simultaneous, a contradiction. This inconsistency is resolved only if the CIF transcends physical time, and experiences it the way corporeal creatures experience space – as **block time**. All events in the physical panuniverse then span a single, eternal moment in the mind of the CIF. Yet the CIF must still distinguish the time-like and space-like separations among physical events that define causal chains. Primacy resides in these causal chains, and not in the reference frames that observers use to describe them.]

While inertial invariance was readily incorporated into Shrodiik mechanics for single particles, problems arose for multi-particle systems. In particular, time and space coordinates were not treated coequally in the traditional equations of motion. Inertial invariance requires that time and position both be treated either as system parameters, or as formal operators. Currently the most widely adopted solution, based on the first approach, is to reformulate Shrodiik mechanics into a Niestiik **quantum field theory** (QFT), in which elementary particles of a given type are treated as quantum excitations of an underlying **field**. The theory covers both traditional particles with mass, such as the electron, and zero-mass particles once considered waves, such as the photon. Different particle types are represented by distinct fields, defined by a variety of attributes, including rest mass, spin, and electric charge. Field operators replace the single-particle position and momentum operators.

A simple field state in QFT is characterized by the number of (identical) **quanta** occupying each of a set of allowed levels. Field quanta contain no explicit particle labels; QFT respects the exchange symmetry of identical particles in a remarkably natural way. Indeed, in QFT it can be shown that the wavefunctions of fields with half-integral spin must be antisymmetric, and those with integral spin symmetric. The number of quanta in a field is just the number of particles of the given type. Any distribution with a particular number of particles can be represented by a superposition of simple states (covering a range of level occupation number sets, each with the same total number of quanta). The overall state of a system is represented by the direct product of its constituent fields, or more generally by a superposition of such products. Unlike Shrodiik mechanics, QFT is not limited to a fixed number of particles. Field interactions result in the creation/destruction of associated quanta.

The mathematician Draci has proposed a multi-time alternative to QFT, in which both the (observer-based) times and positions (*t _{j}*, * x_{j}*) of the individual particles *j* are treated coequally, each particle being assigned its own time coordinate.

In terms of observer coordinates (*t _{j}*, * x_{j}*), the overall state becomes a function of every particle's time as well as position, with the evolution along each particle's time coordinate governed by its own equation of motion.

It is important to note that multi-time theory does not posit multiple independent time dimensions in our own universe. If it did, then assigning a unique position operator to every particle in classical single-time theory would also imply more than three independent spatial dimensions. While every particle has its own time line, these become correlated through interactions, in a way consistent with a single overall time-like dimension.

With multiparticle systems, it is often useful to adopt composite spatial coordinates. For two particles, define

* X* = (*a*_{1}** x_{1}** + *a*_{2}** x_{2}**) and * r* = (** x_{2}** - ** x_{1}**) , where *a*_{1} + *a*_{2} = 1.

In particular, **center-of-mass** coordinates with *a*_{1}=*m*_{1}/(*m*_{1} + *m*_{2}) and *a*_{2}=*m*_{2}/(*m*_{1} + *m*_{2}) are routinely used in non-Niestiik treatments of two-body systems. Analogous composite time coordinates may also be defined in the multi-time approach, by

*T* = *a*_{1}*t*_{1} + *a*_{2}*t*_{2} and *ρ* = (*t*_{2} - *t*_{1}) .
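The composite transformation is invertible precisely because the weights sum to one, as a short sketch confirms (illustrative masses and times):

```python
# Composite time coordinates for a two-particle system,
# using center-of-mass-style weights.
m1, m2 = 1.0, 3.0
a1 = m1 / (m1 + m2)
a2 = m2 / (m1 + m2)

t1, t2 = 2.0, 6.0
T = a1 * t1 + a2 * t2    # composite time
rho = t2 - t1            # relative time

# Because a1 + a2 = 1, the original times are recovered exactly.
t1_back = T - a2 * rho
t2_back = T + a1 * rho
```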

The coordinates (*T*,* X*) and (*ρ*,* r*) respectively characterize the motion of the system as a whole, and the internal relations between its parts.

In the weak interaction limit, *ψ* should show no preferred value of *ρ* or * r*. For strong repulsive interactions, there should be solutions characterized by a well-defined invariant separation

*S* = √( *r*^{2} - *c*^{2}*ρ*^{2} ) ,

where *r* is the length of the radial vector * r*. For strong attractive interactions, there should be solutions of the complementary, time-like character, with *c*^{2}*ρ*^{2} exceeding *r*^{2}.

Yet how are timespace coordinates (*t*, * x*) meaningfully defined at all, for a system composed of multiple elementary particles? Neither time nor space can be measured in absolute terms. Temporal and spatial intervals are gauged only with respect to physical processes and structures, which have traditionally been interpreted in terms of elementary particles and their interactions. Stripped of its vestments, timespace loses all meaning. Physical objects and dimensions of relation are inextricably linked.

Every elementary particle does have an inherent time scale, known as **proper** time, measured along its own **world line**. Further temporospatial relationships among particles can be defined only if they interact, either directly or indirectly. In QFT, interactions corresponding to the **fundamental forces** are associated with gauge symmetries. The mathematical description of these forces can be interpreted in terms of the exchange of **phantom** elementary gauge bosons by elementary fermions. In particular, the **electromagnetic**, **weak**, and **strong** interactions involve the exchange of phantom photons, **W** and **Z bosons**, and **gluons**, respectively (all spin-1). Phantom particles have all the attributes of their "real" counterparts, except mass; the usual relationship between energy, linear momentum and standard rest mass is not followed, making phantom particles ephemeral. Indeed, some physicists consider phantom particles not real in *any* sense, but only a bookkeeping device in describing interactions. Elementary fermions include **electrons**, **neutrinos**, and **quarks** (all spin-1/2). [The set of particles identified as elementary has changed considerably since Fleegello's time; his archaic list excludes various types of **invisible matter**, which interact with ordinary matter only through the gravitational force.] Only gravity, which is presumably mediated by the exchange of (normally massless) spin-2 bosons known as **gravitons**, has eluded incorporation into the quantum field theoretic framework.

Because phantom bosons are superpositions spanning energies and momenta that do not respect standard mass relationships, their exchange should not be literally interpreted in terms of trajectories from one real particle to another. Yet these disturbances in interacting fields do transmit information at light speed between real particles. This process can establish causal links (CLs) between interacting fermions. Of course, CLs can also be forged by real particles. A link specifically from phantom processes will be referred to as a phantom causal link (PCL).

A PCL from fermion *j* at proper time *τ _{j}* to fermion *k* at proper time *τ _{k}* thus establishes a directed causal correspondence between specific points on the two world lines.

Every [electromagnetic] PCL can be associated with a three-component (three-dimensional, or 3D) unit vector * û_{jk}*, defining a spatial direction of flow. It is further possible to define a 3D interparticle distance vector * r_{jk}* collinear with this direction.

Every PCL can also be associated with a collinear energy and linear momentum transfer. Let *ω _{jk}* be the angular frequency associated with energy transferred along the link, and * K_{jk}* the wavevector associated with the transferred linear momentum.

Although information carried by a PCL effectively moves at light speed, such links do not represent real particles, so there is no fixed relationship between *ω _{jk}* and either * K_{jk}* or the link length.

Any PCL can be associated with a probability amplitude *γ _{jk}*, represented by a function

*γ _{jk}* ( *τ _{j}*, *τ _{k}*, * r_{jk}*, *ω _{jk}*, * K_{jk}* ) .

The probability (per unit time, distance, solid angle, energy, and momentum) that a link exists is the absolute square value of its amplitude. PCLs are universal; all observers recognize the same connections, at the same proper times. [Fleegello's description of PCLs is incomplete; in particular, full link amplitudes also convey information concerning angular momentum transfer.]

An elementary event may be defined as any point along the world line of a real elementary particle at which a causal link (of any type) is established with another particle. Niestu has made the radical suggestion that the network of CLs among particles does not merely occur *within* timespace, but actually *defines* timespace. The number of spatial dimensions is set by the number of components in the unit vectors * û_{jk}*. For example, consider an electromagnetic interaction between two charged particles, labeled #1 and #2 in the diagram at right. Suppose that a PCL (dotted line) exists from particle 1 (solid line) at (local) proper time *τ*_{1A} (event A) to particle 2 at proper time *τ*_{2B} (event B), and that a second PCL runs from particle 2 at event B back to particle 1 at proper time *τ*_{1C} (event C).

From the perspective of particle 1 (ignoring any nominal acceleration), event B occurs at time-distance coordinates (*τ*_{1B} , **d**_{12B}), given by

*τ*_{1B} = (*τ*_{1A}+*τ*_{1C})/2

**d**_{12B} = *d*_{12B} **û**_{1AB}

where **û**_{1AB} is a 3D unit vector pointing in the direction of flow from A to B from the perspective of particle 1, and *d*_{12B} is the scalar interparticle distance

*d*_{12B} = (*τ*_{1C}-*τ*_{1A}) *c*/2 .

Note that **û**_{1AB} = -**û**_{1CB} from the same perspective.
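The round-trip construction above can be tabulated directly from the text's two relations (sample proper times are arbitrary; *c* is set to 1):

```python
# Round-trip ("radar") coordinates from a pair of causal links:
# a link leaves particle 1 at tau_1A, reaches particle 2 at event B,
# and a return link arrives back at particle 1 at tau_1C.
c = 1.0
tau_1A = 3.0
tau_1C = 9.0

tau_1B = (tau_1A + tau_1C) / 2       # time assigned to the distant event B
d_12B = (tau_1C - tau_1A) * c / 2    # inferred interparticle distance
```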

From the perspective of particle 2, the timespace coordinates of event B are simply (*τ*_{2B} ,0). If particle 2 moves with respect to particle 1, then **û**_{2BA} differs from **û**_{1AB} by a Niestiik-like velocity transformation, and **û**_{2BA} ≠ -**û**_{2BC}. The given links do *not* determine the distance from event B to particle 1 from the perspective of particle 2, since link pathlengths along the two legs could now differ. The distance may be inferred to equal *d*_{12B} only if time is absolute, as in classical physics. With inertial invariance, the distances are *not* in general equal.

The directions of adjacent CLs must be consistent, if they are to make physical sense; interparticle distances are otherwise ill defined. Under a time reversal operation (in which the direction of time reverses along all world lines), CL directions must also reverse.

In classical physics, the depicted interaction defines a definite correspondence between times *τ*_{1B} and *τ*_{2B}, and a common interparticle separation at this time. With inertial invariance, the interaction instead defines a correspondence between the respective timespace coordinates (*τ*_{1B} ,**d**_{12B}) and (*τ*_{2B} ,0). In Shrodiik physics, the interaction contributes to a probability amplitude for this correspondence.

What is the expected functional form of *γ _{jk}*? Consider first a simple state of pure frequency *ω _{jk}*, and let *θ _{jk}* = ±1 designate the direction of information flow along the link. A bare phase factor consistent with these attributes is

*e*^{-i ωjk θjk (τk-τj)} .

This factor does not favor any linkage *j* to *k*; its absolute square is the same for all links. However, a complex state of *γ _{jk}* consists of a superposition of simple states. The summation

∑_{ω} *e*^{-i ω (t-to)}

over a range of ω tends to peak at *t* ∼ *t*_{o}. If *τ*_{j} and *τ*_{k} are to be correlated across the light travel time *d*_{jk}/*c*, the simple-state phase factor should instead take the form

*e*^{-i ωjk (θjk [τk-τj]-djk/c)} .

The corresponding (unnormalized) overall phase factor for a simple state that is symmetric in time and space parameters is

*e*^{-i ωjk (θjk [τk-τj]-djk/c)} *e*^{+i θjk Kjk·(rjk-djk)}

where the **scalar product** **K**_{jk}·(**r**_{jk}-**d**_{jk}) is the sum of the products of corresponding 3D vector components, and **K**_{jk} is a wavevector associated with the link.

If the linked particles are identical, then exchanging their labels must not alter any physical link characteristic - both link probability and direction of information flow must be preserved. Because *γ*_{jk} and *γ*_{kj} describe the same link viewed from either end, label exchange requires

*γ*_{jk}(*τ*_{j}, *τ*_{k}) = *γ*_{kj}(*τ*_{k}, *τ*_{j}) .

The overall phase factor associated with *γ*_{jk} is unchanged, as long as *ω*_{kj} = *ω*_{jk} and **K**_{kj} = **K**_{jk}.

PCL amplitudes can presumably be derived from QFT, or some extension of that theory. Still, exploration of PCL properties may in itself shed light on the ability of interactions to correlate time lines and define interparticle distances. A single time scale is appropriate to describe the motion of two particles *only* if their proper times can be correlated one-to-one. While multiple serial links are needed to define time correlation and interparticle distance, ultimately the information must be consistently encoded in the amplitudes of successive links.

Reconsider then the two-particle system, in which a pair of PCLs connect event A on the time line of particle 1 at *τ*_{1A} = *τ*_{1B} - *d*_{12B}/*c* to event B on the time line of particle 2 at *τ*_{2B} in the direction **û**_{AB}, and B back to event C on the time line of particle 1 at *τ*_{1C} = *τ*_{1B} + *d*_{12B}/*c*. As before, from the perspective of particle 1, this defines distance **d**_{12B} = *d*_{12B} **û**_{AB} at time *τ*_{1B}. Assume that the energy passed from A to B is comparable to the energy passed from B to C, while the corresponding momenta are reversed, and let *K*^{+} equal the absolute value of **K**^{+}_{AB}. Adopt bare phase factor states *γ*_{AB} and *γ*_{BC}, with *θ*_{AB} = *θ*_{BC} = +1, *θ*_{CB} = -1, **û**_{AB} = -**û**_{CB}, and **r**_{AB} = **r**_{CB}. Then

*γ*_{AB} *γ*_{BC} ∼ *e*^{+2iK+(r12B-d12B)} .

The absolute square of this joint amplitude is unity, and does not favor any particular *r*_{12B}. Only if link amplitudes are summed over a range of *K*^{+} can *r*_{12B} be localized to a value near *d*_{12B}.
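That a superposition over a range of *K*^{+} localizes *r*_{12B} near *d*_{12B} can be checked numerically; the sketch below (hypothetical cutoff and grids) sums the joint phase factor *e*^{2iK(r-d)} over many *K* values and locates the peak of the resulting probability:

```python
import numpy as np

# Superpose the joint link amplitude e^{2iK(r-d)} over a range of
# positive K+ values (hypothetical cutoff and grid spacing), then
# inspect |sum|^2 as a function of the trial separation r.
d = 1.0                                  # interparticle distance (arbitrary units)
K = np.linspace(0.01, 10.0, 500)         # range of K+ values summed over
r = np.linspace(0.0, 3.0, 601)           # trial separations r_12B

amp = np.exp(2j * np.outer(r - d, K)).sum(axis=1)  # superposed amplitude
prob = np.abs(amp) ** 2                             # relative probability

r_peak = r[np.argmax(prob)]
print(r_peak)  # peaks at r = d = 1.0
```

A single *K* value gives a flat distribution, as noted above; only the sum produces the peak.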

Consider now the classical limit, in which time lines *τ*_{1} and *τ*_{2} can be perfectly synchronized, and a common, well-defined 3D distance **d**_{12}(*τ*_{1}) exists at any time *τ*_{1}. Ignoring accelerations,

*γ*_{12} ∼ *δ*(*θ*_{12}[*τ*_{2}-*τ*_{1}] - *d*_{12}/*c*) *δ*^{3}(**r**_{12}-**d**_{12})

where *δ* is the Draci **delta function**, defined such that *δ*(*x*-*y*) = 0 for all *x*≠*y*, and *δ*(0)=∞ such that the area under the curve *δ*(*x*-*y*) along *x* for a given *y* value is unity. *δ*^{3} is the product of three delta functions, one for each spatial component of (**r**_{12}-**d**_{12}).

Mathematically, the given product of delta functions can be written as the summation

*γ*_{12} ∼ ∑_{ω12} (Δ*ω*_{12}) ∑_{K12} (Δ^{3}**K**_{12}) *e*^{-i ω12 (θ12 [τ2-τ1]-d12/c)} *e*^{+i θ12 K12·(r12-d12)}

over all values (-∞ to +∞) of *ω*_{12} and each of the 3D components of **K**_{12}, where Δ*ω*_{12}→0 and Δ^{3}**K**_{12}→0 are minimal increments of angular frequency and 3D wavevector. This corresponds to a maximal, equally-weighted superposition of the simple states of *γ*_{12}. [Fleegello here glosses over thorny technical issues regarding normalization of these functions.]

In the real world, a PCL summation may not favor all frequencies and momenta equally. In QFT electromagnetic calculations, analogous sums include factors like 1/(*ω*^{2}-*c*^{2}*K*^{2}), favoring photon-like states with an effective mass near zero. As discussed earlier, *ω*_{12} and *K*_{12} should be restricted to positive (negative) values if the interaction is repulsive (attractive). There may also be maximum absolute values *ω*_{max} of angular frequency and *K*_{max} of wavenumber. Restricting values accordingly, but neglecting possible weighting factors, the sums can be converted to **integrals**, and evaluated using calculus. Probability distributions are obtained from the absolute square of the result.

Normalizing to unit area, the probability distribution *P*(*σ*) for *σ* ≡ (|*τ*_{2}-*τ*_{1}| - *d*_{12}/*c*) is

*P*(*σ*) ≃ 4 sin^{2}(*ω*_{max} *σ*/2) / (𝜋 *ω*_{max} *σ*^{2}) .

While this distribution still peaks at *σ*=0, now with a finite value *ω*_{max}/𝜋, the width (~ first null) is no longer zero, but ~2𝜋/*ω*_{max}. The correlation between *τ*_{2} and *τ*_{1} is then *not* one-to-one. Single-time theory would be an approximation; a multi-time scheme should be more accurate.
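Reading the normalized distribution as *P*(*σ*) = 4 sin^{2}(*ω*_{max} *σ*/2)/(𝜋 *ω*_{max} *σ*^{2}) — the form consistent with the quoted peak value — both the peak *ω*_{max}/𝜋 at *σ* = 0 and the first null at *σ* = 2𝜋/*ω*_{max} can be checked directly:

```python
import math

def P(sigma, w_max):
    """Sinc^2-type distribution for sigma, assuming the normalization
    4 sin^2(w_max sigma / 2) / (pi w_max sigma^2)."""
    if sigma == 0.0:
        return w_max / math.pi            # limiting value at the peak
    return 4 * math.sin(w_max * sigma / 2) ** 2 / (math.pi * w_max * sigma ** 2)

w_max = 5.0                               # hypothetical frequency cutoff
peak = P(0.0, w_max)                      # should equal w_max / pi
null = P(2 * math.pi / w_max, w_max)      # first zero of the sin^2 factor
print(peak, null)  # ≈ 1.59 (= 5/pi), ≈ 0
```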

To integrate over **K**_{12}, it is convenient to use polar coordinates, with the axis along (**r**_{12}-**d**_{12}). The normalized probability distribution for *η *≡ |**r**_{12}-**d**_{12}| is more complicated than the distribution for *σ*, but also peaks at *η*=0, with a finite value 5*K*_{max}/6𝜋. The width (now roughly defined by the first minimum in the distribution) is ~8/*K*_{max}, analogous to the result for *σ*.

For real massless bosons, a total travel distance *d* defines a maximum wavelength *λ*_{max}=2*d*, corresponding to a *minimum* frequency *ω*_{min}=𝜋*c*/*d* and wavenumber *K*_{min}=𝜋/*d*. However, because a PCL represents collective phantom processes and does not comprise an independent time line, phase cannot gradually change along its length; only the net shift from one end to the other is meaningful. This shift must then be limited to the range 0 to +2𝜋 (-2 𝜋 to 0) if the interaction is repulsive (attractive). Any outside value would be mathematically equivalent to, and so physically indistinguishable from, a number inside the range. Since a link *γ*_{12} persists for time *d*_{12}/*c* and distance *d*_{12} from the vantage of particle 1, this corresponds to *maximum* absolute values *ω*_{max} = 2𝜋*c*/*d*_{12} and *K*_{max} = 2𝜋/*d*_{12}. The classical limit is then approached only through a sum of phantom processes and (higher-energy) real bosons. The minimum uncertainty in the correlation between *τ*_{2} and *τ*_{1} from phantom processes alone is ~*d*_{12}/*c* (the time it takes light to travel *d*_{12}). An uncertainty ~*d*_{12} is likewise inherent in the specification of interparticle distance. Note that these limits refer only to phantom processes between a pair of real fermions. Values for a macroscopic observer examining a single fermion could be much smaller, if real particles are used as probes.

The multi-time wavefunction for a system of particles is traditionally defined from the perspective of an external (inertial) observer. This observer consists of an organization of myriad particles, which together interact to establish a composite timespace framework. While interactions among observed particles internally define their relative positions, interactions between the observer and the particles determine positions seen by the observer. Consider a three-particle system. A standard multi-time wavefunction may be represented by *Ψ*_{o}(*t*_{o1}, **x**_{o1}; *t*_{o2}, **x**_{o2}; *t*_{o3}, **x**_{o3}). Draci has proposed an alternative, more symmetric, internal-perspective multi-time wavefunction, more closely tied to PCLs:

*Ψ*_{123}(*τ*_{12}, **x**_{12}; *τ*_{13}, **x**_{13}; *τ*_{21}, **x**_{21}; *τ*_{23}, **x**_{23}; *τ*_{31}, **x**_{31}; *τ*_{32}, **x**_{32}) .

Here *Ψ*_{123} represents the joint probability amplitude that the first particle at proper time *τ*_{12} sees the second particle at position **x**_{12} and proper time *τ*_{21}, while the second particle at proper time *τ*_{21} sees the first particle at position **x**_{21} and proper time *τ*_{12}; and so on, for all particle pairs. The (proper) time and position coordinates *τ*_{jk} and **x**_{jk} are treated coequally, as required for compatibility with inertial invariance, and are considered system operators. To accommodate more than two particles, proper times now have a double subscript; the first index indicates the primary perspective particle, and the second index the observed particle.

An elementary particle may even have PCLs to itself. A link connecting *τ*_{ja} to *τ*_{jb} along the world line of particle *j* effectively extends distance (*τ*_{jb} - *τ*_{ja})*c*/2 from the particle. A student of Draci has suggested that these self-links should be reflected in the internal wavefunction. [Draci first refused to endorse the idea, because of the bizarre interpretation of the amplitudes *γ _{jj}*; in particular, how could the 3D direction of information flow be the same at both ends of a link?] For a three-particle system, the modified wavefunction has the form

*Ψ*_{123}(*τ*_{11}, **x**_{11}; *τ*_{12}, **x**_{12}; *τ*_{13}, **x**_{13}; *τ*_{21}, **x**_{21}; *τ*_{22}, **x**_{22}; *τ*_{23}, **x**_{23}; *τ*_{31}, **x**_{31}; *τ*_{32}, **x**_{32}; *τ*_{33}, **x**_{33}) .

Now *Ψ*_{123} incorporates the joint amplitude that the first particle at proper time *τ*_{11} sees a self-link at a distance **x**_{11} (linking times *τ*_{11} - |**x**_{11}|/*c* and *τ*_{11} + |**x**_{11}|/*c*); and similarly, for the other particles. The *γ*_{jj} have been related to the particle rest mass energies.

What is the relationship between PCL amplitudes and the internal wavefunction of a group of fermions? Do PCL amplitudes have their own equations of motion? If phantom processes define the very space within which fermions move, PCL amplitudes do embody probabilities of relative positions, as well as the transfer of energy and 3D momentum between particles. Only through interactions and associated superpositions of simple link states over a range of energies and momenta can relative positions be localized.

[Fleegello evidently hoped that CLs would provide an alternative multi-time framework for representing multi-particle systems. They instead ultimately proved to simply be useful conceptual tools for thinking about interactions. Draci et al. showed that, even in a multi-time theory, PCLs are dictated by gauge symmetries in the coupled fermion equations of motion.]

Insofar as PCLs specify 3D direction vectors, they also define (probability amplitudes for) the relative 3D positions of all particles in a system, regardless of their number. But what if PCLs did *not* specify vector direction? Consider in this case an isolated system of *N* distinguishable particles. If relative speeds are non-Niestiik, then *d*_{jk} ≅ *d*_{kj}, and each pair of particles shares a common, well-defined scalar separation.

The web of CLs and associated proper times among the world lines of elementary particles could thus determine the geometry of timespace, in particular the large-scale geometry, whether or not PCLs specify spatial direction. In either case, space can unfold from the relationships among myriad connected events. To the extent this occurs, space does not have independent existence, but is defined by the connections between the mathematical objects we perceive as particles. A world without causal links would be a world without space; any particles would be independent of each other, with no meaningful positional relationships.

Yet how and why can inter-particle distances defined by interactions generate an overall *three*-dimensional, nearly *flat* (under normal conditions) space? Classically, even without inherent 3D directions, the *N*(*N*-1)/2 interparticle distances in an *N*-particle system are sufficient to determine all relative coordinates for up to ~(*N*-1) dimensions!

The mathematical physicist **Wittuu** has proposed a mechanism that both restricts and defines the number of spatial dimensions. Large-scale timespace is defined mainly by the electromagnetic interaction, since it is associated with the exchange of phantom photons of unlimited range. While real photons are massless, and so restricted to two spin states, phantom photons have *three* spin states, like any spin-1 particle with mass. These correspond to three inherent, independent "directions." Because all photons are identical, exchanging their identities cannot alter a physical system, so they must share the same three directions. This causes every interparticle distance defined by PCLs from photon interactions to be limited to vectors in a common macroscopic three-dimensional space.

The strong and weak interactions should then establish additional spatial dimensions. Although there are eight gluon types, these are not truly independent, and together generate only three additional dimensions. The three bosons of the weak force generate the same, making a total of nine spatial, or ten timespace dimensions. Yet the ranges of the strong and weak interactions are so tiny (~10^{-13} and 10^{-16} centurets, respectively), they mainly affect the small-scale geometry of timespace. Wittuu suggests that the associated dimensions are "curled up" or "attenuated," and only obvious at very small scales or high energies.

[While Wittuu's argument was sketchy, later generations of physicists demonstrated that his intuition was sound (although he missed a few small-scale dimensions). A rigorous explanation of the origin of macroscopic spatial dimensions was eventually developed. We now realize that space-time arises as an emergent property from the interconnections among myriad primitive, timeless, abstract mathematical forms.]

What about gravity? The carrier of this interaction is presumably the massless graviton. Because the graviton is a spin-2 particle with unlimited range, gravity might be expected to generate its own large-scale 5-dimensional space. Yet gravity has an unusual character, related to its incompatibility with standard QFT. All other fundamental forces are carried by spin-1 bosons, and associated with unique and conserved "charges" (e.g., electric charge). But gravity couples to a system's **stress-energy tensor**, to which *every* interaction contributes. Gravity reduces all "bare" mass energies, and even couples to itself, leading to nonlinearities in the **gravitational field equations**. PCLs established by gravity are thus dependent on and flow from the other interactions, so that gravity does not add any new dimensions. It may nonetheless distort the large-scale structure of timespace, as in Niestu's theory of general invariance.

It is clear that neither standard graviton exchange nor general invariance represents the complete fundamental description of gravity, even if both are good approximations in the low-energy limit. A missing element in these theories may involve the small-scale structure of time. Physicists have traditionally considered time to be continuous. In an attempt to avoid divergent (infinite) quantities in QFT calculations [which had previously been removed for forces other than gravity by a dubious procedure known as **renormalization**], Planko has proposed that proper time is **quantized** along the world line of every elementary particle with mass. The fundamental (minimum) proper time interval, or **chronon**, is represented by the symbol Δ. Distance between particles is naturally quantized in integral multiples of Δ*c*/2. [Space-time volume is thus more generally quantized, and not space or time separately. By inertial invariance, this quantity is unaffected by a velocity transformation.]

Quantized proper time may be *required* by ideobasic principles. Consistency logic compels the PCS to recognize the most general conception of time. Yet continuous time is only a limiting case of quantized time. The infinity of real numbers on a continuous line segment is furthermore not countable; there is no one-to-one correspondence between the numbers and the set of positive integers. Because it would then be impossible to locate all points on the line within the conscious field of the PCS, they could not have meaningful existence. Finally, time is inherent only along particle world lines; it does not meaningfully reside anywhere else.

If timespace is quantized, then the smooth **differential equations** of Shrodiik mechanics and QFT must be replaced by discrete difference equations. Observables defined in terms of derivatives must be similarly redefined. Symmetry principles and conservation laws are all affected.

[Other physicists had previously hypothesized that spacetime was quantized, but only with respect to the overall coordinate framework of a given observer, not with respect to individual particles. These **lattice approaches** were doomed to failure, as they were divorced from the very processes that define time and space.]

A minimum proper time interval Δ implies a maximum absolute angular frequency

*ω*_{max} = 𝜋/Δ

in any function of proper time, and a range of meaningful frequencies

-*ω*_{max} ≤ *ω* ≤ +*ω*_{max}

(this universal frequency limit should be distinguished from the smaller cutoff specific to PCLs). Based on a symmetric version of the modified single-particle equation of motion, Planko has proposed replacing the linear equation relating the energy *E* of an elementary particle in its own rest frame to its proper time angular frequency *ω* by the trigonometric formula

*E* = (ℏ*ω*_{max}/𝜋) sin(𝜋*ω*/*ω*_{max}) .

This reduces to the standard equation when *ω*/*ω*_{max} << 1, and can alternatively be written

*E* = *E*_{max} sin(*E*_{o}/*E*_{max})

where *E*_{o} is a particle's uncorrected energy *E*_{o} = ℏ*ω* , and

*E*_{max} = ℏ*ω*_{max}/𝜋 = ℏ/Δ is the maximum attainable energy, reached at the angular frequency *ω*_{max}/2 .

Note that *E*_{o} = *m*_{o}*c*^{2} for an elementary particle with an uncorrected (bare) rest mass *m*_{o}.
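The bound *ω*_{max} = 𝜋/Δ is at root a sampling statement: on a proper-time grid of spacing Δ, two frequencies differing by 2𝜋/Δ produce identical phase factors at every tick, so only frequencies within ±𝜋/Δ are distinguishable. A short numerical sketch (hypothetical Δ and *ω*):

```python
import cmath

DELTA = 0.1                              # chronon (hypothetical units)
w = 3.0                                  # some angular frequency
w_alias = w + 2 * cmath.pi / DELTA       # shifted by the sampling period

# Phase factors evaluated only at the quantized proper times tau = n * DELTA
samples = [cmath.exp(-1j * w * n * DELTA) for n in range(10)]
aliases = [cmath.exp(-1j * w_alias * n * DELTA) for n in range(10)]

diff = max(abs(a - b) for a, b in zip(samples, aliases))
print(diff)  # ~0: the two frequencies are indistinguishable on the grid
```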

[Massless particles have no rest frame, and there is no passage of time along their world lines; frequency and energy must be specified with respect to associated particles with mass.]

Proper time quantization reduces and limits a particle's effective rest mass energy, in a manner curiously similar to gravity. For every known elementary particle (characterized by a single, independent proper time line), *E*_{o} / *E*_{max} << 1. The sine function in the modified equation for energy can then be approximated by a truncated power series. Including only the first correction term in this expansion,

*E* ≈ *E*_{o} - *E*_{max}(*E*_{o}/*E*_{max})^{3}/6 .

For an elementary particle, this is

*E* ≈ *m*_{o}*c*^{2} - *m*_{o}^{3}*c*^{6}Δ^{2}/6ℏ^{2} .
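The truncation can be verified directly: for *E*_{o}/*E*_{max} << 1, the residual error of the first-correction formula is the fifth-order term of the sine series, ~(*E*_{o}/*E*_{max})^{5}/120:

```python
import math

def E(E_o, E_max):
    """Modified rest-mass energy relation, E = E_max sin(E_o / E_max)."""
    return E_max * math.sin(E_o / E_max)

def E_approx(E_o, E_max):
    """Truncated series: first correction term only."""
    return E_o - E_max * (E_o / E_max) ** 3 / 6

err = abs(E(0.01, 1.0) - E_approx(0.01, 1.0))
print(err)  # ~ (E_o/E_max)^5 / 120 ≈ 8e-13
```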

According to standard Shrodiik theory, the **uncertainty in the position** of a particle of mass *m* cannot be smaller than ℏ/2*mc*. The rest mass *m*_{o} thus cannot be meaningfully confined to a volume with a radius *r*_{min} smaller than

*r*_{min} ≈ ℏ/4*m*_{o}*c* .

Using this relation to remove one power of *m*_{o} from the previous equation,

*E* ≈ *m*_{o}*c*^{2} - (*c*^{5}Δ^{2}/24ℏ)(*m*_{o}^{2}/*r*_{min}) .

The correction term is equivalent to the classical **gravitational binding energy** of a mass *m*_{o} distributed over a surface of radius *r*_{min}, if one identifies the **gravitational constant G** as

*G* ≈ *c*^{5}Δ^{2}/12ℏ .

Conversely, the chronon Δ can now be related to the gravitational constant by

Δ = √12ℏ*G*/*c*^{5} .

Indeed, Planko has identified the minimum distance Δ*c*/2 with the **Planko length**

L_{P} = √ℏ*G*/*c*^{3} ≈ 10^{-33} centurets,

and the chronon Δ with the **Planko time**

T_{P} = √4ℏ*G*/*c*^{5} ≈ 10^{-43} nocs,

which differs from the value derived above by less than a factor of two.
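Taking the SI values of ℏ, *G*, and *c* as stand-ins for the octan constants (the quoted magnitudes suggest a centuret and a noc are close to a centimeter and a second), the comparison between the derived chronon and the Planko time reduces to a fixed ratio:

```python
import math

# SI stand-ins for the octan constants (assumption: centurets/nocs
# track centimeters/seconds to the quoted order of magnitude).
hbar = 1.054571817e-34   # J s
G = 6.67430e-11          # m^3 kg^-1 s^-2
c = 2.99792458e8         # m/s

delta = math.sqrt(12 * hbar * G / c**5)  # chronon, from G ≈ c^5 Δ^2 / 12ℏ
T_P = math.sqrt(4 * hbar * G / c**5)     # the text's "Planko time"

print(delta / T_P)  # √3 ≈ 1.73, i.e. "less than a factor of two"
```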

[These estimates are remarkably close (within a factor of eight) to the chronon value obtained from subsequent experiments. The Planko quantities were inferred theoretically from the time/distance scale at which the quantum effects of gravity become significant.]

Planko has further proposed a **natural system** of units, in which the equations of motion are simplified. The chronon is now the unit of time, and Δ*c* the unit of distance. Interparticle separations are then half-integral multiples of the fundamental unit, and the speed *c* is numerically equal to 1. Units of mass and electric charge are selected so that both the Planko and the gravitational constants are numerically equal to 1, while an elementary electric charge is equal to the square root of the fine structure constant.

How does time quantization affect the energy of a multiparticle system? Consider a pair of elementary particles, both at rest with respect to an observer, and separated by radial distance *r*. The observer may combine the respective proper time scales *τ*_{1} and *τ*_{2} into an overall system time *t* and a time correlation parameter *ρ*, approximated by

*t* ~ (*τ*_{1} + *τ*_{2}) / 2 and

*ρ* ~ (*τ*_{2} - *τ*_{1}) .

When the particles are far apart, interactions are negligible, so *τ*_{1} and *τ*_{2} should be independent and uncorrelated. System time *t* is then effectively quantized in intervals of Δ/2 – increasing either *τ*_{1} or *τ*_{2} by Δ increases *t* by only Δ/2. The two-body wavefunction should be the product of free-particle wavefunctions, with bare masses *m*_{1o} and *m*_{2o} . Regardless of how the total system energy *E* is precisely defined, *E* should be approximately equal to the sum of the individual particle energies:

*E*_{far} ≈ *m*_{1}*c*^{2} + *m*_{2}*c*^{2}

where *m*_{1} and *m*_{2} are the respective effective rest masses of each particle, related to the bare rest masses by the expression

*m*_{1}*c*^{2} = *E*_{max} sin(*m*_{1o}*c*^{2}/*E*_{max}) and

*m*_{2}*c*^{2} = *E*_{max} sin(*m*_{2o}*c*^{2}/*E*_{max}) .

The energy limit *E*_{max} applies only to the rest mass energies of the individual particles, along their respective world lines, and not to the overall system.

At smaller separations, *τ*_{1} and *τ*_{2} should become correlated by interactions, such that *t* is quantized in progressively larger intervals approaching Δ (interactions are presumably also required to define interparticle distance). The maximum correlation, at a minimum meaningful distance *r*_{min}, is equivalent to the particles merging into a single world line and proper time – increasing *τ*_{1} by Δ also increases *τ*_{2} by Δ, and vice versa. The time correlation parameter *ρ* becomes restricted to values near zero, and the individual particle wavefunctions merge into a single function characterized by a bare mass (*m*_{1o}+*m*_{2o}) and system time *t*. Considering only bare mass and time quantization effects, the combined energy is then

*E*_{near} = *E*_{max} sin[(*m*_{1o}+*m*_{2o})*c*^{2}/*E*_{max}] .

Whereas a total energy limit corresponding to the reduced time interval Δ/2 applies when the particles are far apart, a lower limit corresponding to the full time interval Δ applies when the particles are close together.

At intermediate separations *r*, one can write

*E*_{r} = (1-*ε*) *E*_{far} + *ε* *E*_{near}

where *ε* is a function of *r*, such that *ε* → 0 as *r* → ∞, and *ε* → 1 as *r* → *r*_{min} .

This equation can in turn be rewritten

*E*_{r} = *E*_{far} + *E*_{int}

where *E*_{int} is an effective interaction energy

*E*_{int} = -*ε* (*E*_{far} - *E*_{near}) .

If bare mass energies are much smaller than *E*_{max}, then to good approximation, correction terms of order higher than (*E*_{o}/*E*_{max})^{2} may be ignored, and

*E*_{int} ≈ -*ε* *c*^{6} *m*_{1}*m*_{2}(*m*_{1o}+*m*_{2o})/(2*E*_{max}^{2}) .
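The quoted form follows from the cubic identity (*m*_{1o}+*m*_{2o})^{3} - *m*_{1o}^{3} - *m*_{2o}^{3} = 3*m*_{1o}*m*_{2o}(*m*_{1o}+*m*_{2o}) applied to the truncated sine expansions of *E*_{far} and *E*_{near}; a numerical check (hypothetical bare masses, *c* = 1):

```python
import math

E_max = 1.0
m1, m2 = 0.02, 0.03      # hypothetical bare rest-mass energies, c = 1

E_far = E_max * (math.sin(m1 / E_max) + math.sin(m2 / E_max))
E_near = E_max * math.sin((m1 + m2) / E_max)

exact = E_far - E_near                          # energy difference on merging
approx = m1 * m2 * (m1 + m2) / (2 * E_max**2)   # from the cubic identity

print(exact, approx)  # agree to ~fifth order in m/E_max
```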

The minimum separation *r*_{min} can be estimated from the smallest volume that can confine the total uncorrected rest mass energy,

*r*_{min} ≈ ℏ/4(*m*_{1o}+*m*_{2o})*c* .

[Fleegello uses semiclassical reasoning here. He contemplates in a classical manner the total energy of two motionless masses as a function of their separation; then applies a Shrodiik argument that the probability distribution of a particle's position is spread out over space, and confinement entails a degree of motion "jitter," to estimate *r*_{min}. The conclusions nonetheless have some qualitative validity.]

If the chronon is equated to the Planko time, the equation for *E*_{int} can be rewritten

*E*_{int} ≈ -*m*_{1}*m*_{2}*G* (*ε*/2*r*_{min}) .

This is identical to the classical (low-energy) expression for gravitational binding energy, if only the function *ε* is set to

*ε* = 2*r*_{min}/*r* .

This radial dependence is appropriate for a long-range interaction carried by massless gravitons. It also suggests that the degree of correlation between times *τ*_{1} and *τ*_{2} is ~ 1/*r*, consistent with a cutoff frequency *ω*_{max} ~ *c*/*r* for PCLs.
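The closure of the argument — that *E*_{int} collapses to the classical form -*Gm*_{1}*m*_{2}/*r* once *E*_{max} = ℏ/Δ with Δ equal to the Planko time √4ℏ*G*/*c*^{5}, *r*_{min} = ℏ/4(*m*_{1o}+*m*_{2o})*c*, and *ε* = 2*r*_{min}/*r* — can be verified numerically; the masses and separation below are hypothetical, with SI constants as stand-ins:

```python
import math

hbar = 1.054571817e-34   # J s (SI stand-ins for the octan constants)
G = 6.67430e-11          # m^3 kg^-1 s^-2
c = 2.99792458e8         # m/s

m1, m2 = 2.0e-27, 3.0e-27   # hypothetical rest masses (kg);
                            # effective and bare masses coincide at this order
r = 1.0e-10                 # hypothetical separation (m)

delta = math.sqrt(4 * hbar * G / c**5)   # chronon set equal to the Planko time
E_max = hbar / delta
r_min = hbar / (4 * (m1 + m2) * c)       # minimum meaningful separation
eps = 2 * r_min / r                      # correlation function epsilon(r)

E_int = -eps * c**6 * m1 * m2 * (m1 + m2) / (2 * E_max**2)
E_newton = -G * m1 * m2 / r              # classical gravitational binding energy

print(E_int / E_newton)  # ≈ 1.0: the two forms coincide
```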

Thus, in the low-energy limit, time quantization appears to affect system energy in a way similar to the low-energy limit of standard graviton exchange, as long as the proper times of elementary particles are appropriately correlated through interactions. Gravity would then be similar to the other forces, in that it can be associated with the exchange of a gauge boson at low energies, but is distinct in its more fundamental association with time quantization.

What if the interacting particles have electric charges *Q*_{1} and *Q*_{2}? The bare electrostatic interaction energy is then

*E*^{o}_{elec} = *Q*_{1}*Q*_{2} / *r* .

Again ignoring corrections of order greater than (*E*_{o}/*E*_{max})^{2}, and adopting the original *ε*(*r*), the residual interaction energy associated with time quantization is now

*E*_{int} ≈ -[*m*_{1}*m*_{2} + (*m*_{1}+*m*_{2})*E*^{o}_{elec}/*c*^{2} + (*E*^{o}_{elec})^{2}/*c*^{4}] *G*/*r* .

The three terms in brackets can be interpreted as the gravitational interaction energies between the two masses, between the masses and the electrostatic field, and between the electrostatic field and itself. As in more traditional theories, gravity couples to all relevant sources of energy.

The PCLs generated by the electric (or any non-gravitational) force appear to only indirectly affect the correlation of the proper times of the two particles; the derived mass-to-mass interaction energy with and without an electric interaction would otherwise have a different value for a given separation. The energies associated with non-gravitational PCLs apparently engender coincident graviton PCLs, which embody the actual process by which time correlations are established. This may reflect gravity's pivotal role in defining the geometry of timespace, and the observation that gravity does not appear to add any new spatial dimensions.

The origami-like unfolding of space from a milieu of interwoven, correlated events may generally result in a non-Euclidean macroscopic geometry. The effective curvature of conventional timespace would then be a natural consequence of the quantization of proper time intervals and the correlation of time lines by graviton exchange.

An elementary particle's rest mass may derive from a variety of sources. Every particle is effectively surrounded by a cloud of phantom exchange bosons, corresponding to all applicable interactions. But this cannot be the sole source of mass for common particles. For example, if the electron's effective size is the minimum volume that can contain its mass, then the electron electric field contributes less than 1% to the observed mass value (the magnitude of the negative gravitational self-energy contribution is 43 orders of magnitude smaller). Rest mass may also originate in a particle's underlying geometric character. Wittuu has suggested that elementary particles may not be pointlike, but associated with vibrations of extended (but tiny) **geometric forms**. The size of such entities should be comparable to the radial parameter *r*_{min} computed earlier for a given rest mass.

Can quantized multi-timespace be incorporated into a quantum field theory? Time and 3D space have traditionally been treated as continuous system parameters in QFT. Yet past attempts to include gravity in QFT have failed; the theory is not renormalizable for point particles if system timespace is continuous. Even excluding gravity, the standard renormalized version of QFT predicts an enormous **vacuum energy density**. By restricting the proper time intervals between events to integral multiples of a chronon, maximum energies are naturally limited, and the divergent quantities in QFT calculations might be tamed. As discussed earlier, maximum frequencies associated with phantom processes in a multi-time setting may be further restricted, and inversely proportional to the distance between interacting particles. If space is defined by and only exists with respect to real material particles, then phantom processes that are completely disconnected from real particles might also be restricted. Any new formulation should reflect that timespace is meaningfully defined only with respect to particle world lines and interactions. It may prove necessary to radically reformulate QFT in terms of multiple timespace operators corresponding to individual field quanta, and to treat elementary particles as finite-sized objects.

[The approach to quantized time outlined by Fleegello was naive, and flawed in many respects. It does not effectively address relative particle motion, or modifications to symmetry principles and conservation laws, or how an observer can fully integrate the proper times and spatial separations of individual particles with a global timespace coordinate system, and define the total system energy and wavefunction. Fleegello did acknowledge in private correspondence that his approach to quantized timespace was simplistic and incomplete, and certainly did not comprise a testable theory. Yet by quantizing proper time intervals, identifying elementary particles with extended (though miniscule) vibrating geometric forms, and reformulating QFT, physicists were at last able to integrate gravity into quantum theory, and accurately compute the rest masses of elementary particles from first principles, avoiding the infinities that had plagued previous attempts.]

Although the fundamental (microscopic) equations of motion are symmetric in time, physical processes on a macroscopic level superficially do not appear to be time-symmetric. For example, if all the air molecules in a room were clustered in a corner, they would rapidly spread out to fill the entire room; yet the reverse process is not observed to happen. Niestu has proposed that this so-called **arrow of time** is a purely statistical phenomenon. The universal state witnessed by an observer at a given moment is connected by a single time increment (chronon) to a host of other states. In Niestu's view, the number of less ordered (higher **entropy**) states corresponding to a "forward" time process is simply much larger than that for a "backward" process. If conscious experience is a random walk from one state to another, a person is much more likely to experience events along the traditional arrow of time. Reverse time steps occur, but are swamped by the sheer number of forward steps. This distinction acquires significance mainly in macroscopic systems, due to the sensitive dependence of the number of states of a given type on the number of particles in a system.

Yet this statistical feature does not in itself guarantee our experience of time. A conscious being that lacked a memory would live in an eternal present, with no sense of time's arrow. Most animal memories in a given universal state are found to be of events in connected states with lower overall entropy. This implies that the creation of memory generally involves a statistically **irreversible process**. Memories laid down in this direction are normally adaptive, and facilitate survival into an expanding realm of universal states. Memories laid down in the opposite direction could in principle also be adaptive, but only if they overtly present as precognitions, consistent with a person's walk through time.

[Shortly after Fleegello died, the natural philosopher Loh demonstrated that even the observed **expansion of the universe** could be linked to such statistical considerations, providing a critical link between cosmology, gravitation theory and Shrodiik physics.]

As discussed in section 1.18, any physical universe must have [at least] one basic (self-caused) initial state. This state can imply no previous history; the system would otherwise logically extend to an earlier time. Every physical universe must therefore evolve from an initial state characterized by infinitesimal spatial volume. Our own universe appears to have originally experienced runaway, exponential growth – the primeval **hyperburst** of modern cosmology – from a miniscule primitive state. Every newborn universe must further incorporate particles or analogous localized objects relative to which distance can be meaningfully defined. It could otherwise not expand (or contract) in any meaningful way.

[Fleegello failed to recognize that, if the experience of the CIF is timeless, a self-contained physical universe may also be cyclic, along a time-like dimension that loops back into itself. Such a system must in its entirety be the cause of itself. His basic argument has nonetheless since been extended to the **multiverse** of all possible physical worlds, whereby our own universe and its generative hyperburst may have been spawned by a pre-existing, self-caused system.]

The **physical interpretation** of the state vector |*ψ*> has a long and tortuous history. **Originally** it was viewed merely as a device for computing the probability of observing a given outcome in an experiment. Reality was seen to reside in the observed positions and momenta of individual particles. The physical universe was assumed to evolve in a linear manner, with a single unfolding history, which was deterministic in only a limited, probabilistic sense. The act of observation was divorced from the natural evolution of a physical system, and treated as something special, even magical.

Yet consistency logic requires that the universe be totally deterministic. Recently, the contradictions inherent in the original interpretation of Shrodiik mechanics have led the **Evette** group to develop an alternative **multi-world** view, in which reality resides in |*ψ*> itself. The physical panuniverse is viewed as a superposition of conventional quantum worlds, each represented by a restricted state vector. These world systems continually split into new patterns and merge with each other through time. Observers, measuring devices, and measurement processes are included as integral parts of |*ψ*>. A given observer occupies a particular conventional world at a given instant. As this world subsequently splits, the observer likewise branches into multiple selves, each with a distinct future experience. An observer does not see physical evolution as completely deterministic simply because his mind does not encompass all the worlds of the panuniversal state.

[Unfortunately, only the barest references to the original Evette school survive in the historical record. The writings may have been systematically destroyed by conservative, fundamentalist religious sects that flourished at the time, and found the work heretical. These factions believed the universe progressed in a linear fashion along a single preordained path, in accordance with a divine plan for the octan race. The random, branching character of the multi-world view demanded an even greater, and to many more threatening, decentering to the octan psyche than the recognition five octujopes earlier that Jopitar was not at the physical center of the universe, but was a nonsingular ball of gas orbiting an ordinary star in a minuscule corner of a vast ocean of time and space.]

Let |*ψ*> now represent the state of the entire physical panuniverse, and |*φ*_{i}> a complete set of orthonormal states such that

|*ψ*> = Σ_{i} C_{i}|*φ*_{i}>.
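[As a concrete (if anachronistic) sketch of this expansion, a state may be represented by its coefficients *C*_{i} over a chosen orthonormal basis, and an operator by a square array acting on those coefficients by matrix multiplication, as described in the introduction. The particular state and operator below are illustrative choices only.]

```python
# A state |psi> as a list of complex coefficients C_i over an
# orthonormal basis |phi_i>; an operator as a square matrix.
def apply(operator, state):
    """Matrix multiplication: transform one state into another."""
    return [sum(row[j] * state[j] for j in range(len(state)))
            for row in operator]

def norm_squared(state):
    """Sum of |C_i|^2; equals 1 for a properly normalized state."""
    return sum(abs(c) ** 2 for c in state)

psi = [3 / 5, 4j / 5]            # C_1 = 3/5, C_2 = 4i/5
assert abs(norm_squared(psi) - 1) < 1e-12

# A simple observable that exchanges the two basis states:
flip = [[0, 1], [1, 0]]
print(apply(flip, psi))          # the transformed state
```

[Note that the transformed state remains normalized; mixing the basis states into new combinations, as described above, is likewise a matrix operation of this kind.]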

The choice of specific |*φ*_{i}> is somewhat arbitrary, from a strictly mathematical viewpoint. If the |*φ*_{i}> are chosen to correspond to conventional worlds in the Evette sense, however, then the matrix elements *A*_{ij} between different worlds vanish for every valid observable *A*, and the worlds evolve without mutual interference.

Consider now one such conventional world |*φ*> that incorporates a subsystem consisting of a simple superposition ( |*B*_{1}> + |*B*_{2}>) of orthonormal eigenstates of an observable *B*. Then

|*φ*> = ( |*B*_{1}> + |*B*_{2}> ) |*E*>

where |*E*> represents the environment of the subsystem. The environment may interact with the subsystem, so as to become correlated with its eigenstates. This happens in particular when the environment includes an observer who measures the value of *B*. If *B* commutes with the interaction Hoobitean *H*_{int}, the eigenstates of *B* are not changed by the interaction, and

|*φ*> ➜ |*B*_{1}>|*E*_{1}> + |*B*_{2}>|*E*_{2}> = |*φ*_{1}> + |*φ*_{2}>

where |*E*_{1}> and |*E*_{2}> are themselves eigenstates of observables that commute with *H*_{int}. Consider now the matrix elements *A*_{12} and *A*_{21} for an arbitrary observable *A*. If *A* commutes with *B*, then *A*_{12} = *A*_{21} = 0, since the eigenstates of *B* are orthonormal. If *A* does not commute with *B*, then it must act only on the given subsystem (it can be shown that if *A* were a product of operators that separately act on the subsystem and its environment, then *A* would not be a valid observable). In this case *A* does not affect |*E*>, and *A*_{12} = *A*_{21} = 0 provided that |*E*_{1}> and |*E*_{2}> are orthonormal. The states |*φ*_{1}> and |*φ*_{2}> can thus be identified as two new conventional worlds, split off from the original |*φ*>, precisely when |*E*_{1}> and |*E*_{2}> are orthonormal.
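[The vanishing of these off-diagonal matrix elements can be checked numerically in a minimal two-by-two model. The reconstruction below is an editorial illustration, not part of the original text: the composite states are built as tensor products of subsystem and environment vectors, and the observable *A* is chosen *not* to commute with *B*, so that only the orthogonality of the environment states makes *A*_{12} vanish.]

```python
def kron(u, v):
    """Tensor product of two state vectors (subsystem x environment)."""
    return [a * b for a in u for b in v]

def inner(u, v):
    """<u|v>, with complex conjugation on the bra."""
    return sum(a.conjugate() * b for a, b in zip(u, v))

def matrix_element(bra, op, ket):
    """<bra| op |ket> for op given as a square matrix."""
    op_ket = [sum(row[j] * ket[j] for j in range(len(ket))) for row in op]
    return inner(bra, op_ket)

# Orthonormal subsystem eigenstates of B, and orthonormal
# environment states that have become correlated with them:
B1, B2 = [1, 0], [0, 1]
E1, E2 = [1, 0], [0, 1]

phi1, phi2 = kron(B1, E1), kron(B2, E2)   # the two "worlds"

# An observable acting only on the subsystem: A_sub (x) identity.
# A_sub deliberately does NOT commute with B (it exchanges B1 and B2).
A_sub = [[0, 1], [1, 0]]
A = [[A_sub[i][k] * (1 if j == l else 0)
      for k in range(2) for l in range(2)]
     for i in range(2) for j in range(2)]

A12 = matrix_element(phi1, A, phi2)
A21 = matrix_element(phi2, A, phi1)
print(A12, A21)   # both 0: the two worlds do not interfere
```

[If the environment states were *not* orthogonal – for example, if |*E*_{2}> were replaced by |*E*_{1}> – the same matrix element would be nonzero, and the branches could not be treated as separate worlds.]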

[This line of reasoning, which did *not* originate with Fleegello, helped resolve a problem with the many-world interpretation, involving an apparent ambiguity in the identification of the individual worlds. Some researchers argued that the choice of states |*B*_{1}> and |*B*_{2}> in the given example was quite arbitrary. By choosing a rotated basis set, e.g.

|*b*_{1}> = (|*B*_{1}> + |*B*_{2}>)/√2 and |*b*_{2}> = (|*B*_{1}> − |*B*_{2}>)/√2 ,

the state |*φ*> appeared to split into a different set of conventional worlds. Eventually it was realized that the interaction between a system and its environment naturally selects a particular (compatible) basis set. If the operator *B* does not commute with *H*_{int}, then |*B*_{1}>|*E*> does not evolve into |*B*_{1}>|*E*_{1}>, since |*B*_{1}> is itself transformed by the interaction.]
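[The dynamical selection of the preferred basis can itself be sketched numerically. The toy model below is a modern editorial reconstruction: the interaction is a minimal correlating map (standing in for evolution under *H*_{int}) that records the *B* eigenstate in the environment. Eigenstates of *B* pass through it as product states, while the rotated state |*b*_{1}> becomes entangled with the environment, and so cannot head a single conventional world.]

```python
from math import sqrt

def kron(u, v):
    """Tensor product of subsystem and environment vectors."""
    return [a * b for a in u for b in v]

# A correlating interaction: B1 x E -> B1 x E1, B2 x E -> B2 x E2,
# written directly as a permutation of the composite basis
# (B1,E1), (B1,E2), (B2,E1), (B2,E2).
def interact(state):
    return [state[0], state[1], state[3], state[2]]

def is_product(state, tol=1e-12):
    """A composite state [a, b, c, d] of two two-state systems factors
    into subsystem x environment exactly when a*d - b*c = 0."""
    a, b, c, d = state
    return abs(a * d - b * c) < tol

B1, B2 = [1.0, 0.0], [0.0, 1.0]
E = [1.0, 0.0]                        # initial environment state
b1 = [1 / sqrt(2), 1 / sqrt(2)]       # rotated basis state

print(is_product(interact(kron(B1, E))))  # True: B1 survives intact
print(is_product(interact(kron(b1, E))))  # False: b1 becomes entangled
```

[The interaction thus singles out the *B* eigenstates as the stable, world-defining basis, resolving the apparent ambiguity noted above.]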

Conventional worlds can thus be distinguished by non-interfering "memories" of prior branchings. The storage sites of these data may include, but are by no means limited to, physical brains (and recently, scientific apparatus acting as extensions of the brain). The structure of a brain determines its interactions with the environment, and thus the types of conventional worlds (i.e., which observables are relevant and well-defined) generated by the observation process. If a brain is so constructed that only one value of a particular observable can communicate with (affect) other elements in a conscious field, then a state including a coherent superposition of different values of that observable at the same moment must correspond to distinct unified ideo fields, or selves, in separate (conventional) worlds. The information stored in a brain does not *define* the external reality of the associated world – a person may make faulty observations – but it may still be a point of reference by which that world is distinguished from others. Two distinct conventional worlds can even merge, if their distinguishing memories are lost or corrupted so as to become identical. Observers inhabiting the worlds would experience no sense of merger, as all valid memories of a former distinct past would be absent.

Physics continues to evolve. Our understanding may yet be profoundly superficial. Will the physical objects and patterns identified so far prove to be unified by a single underlying entity? The multitudinous facets of one magnificent (mathematical) **jewel**? Or are they disparate, random elements, fragments tied loosely together only by the principle of consistency? Our descendants will hopefully discover the answer to this compelling question.

[During Fleegello's era, physics was rocked by conceptual revolutions every several jopes. Prominent scientists would periodically announce that a "**theory of everything**" was at hand, or that all that remained in physics was to clean up a few loose ends. These claims were invariably contradicted by new discoveries. Only after many octujopes of struggle was a viable unified theory in fact attained. Even then, physics was hardly dead. The new vision was so rich in possibilities, that its many veins continue to be mined even to this yad. Indeed, quantum physics is no longer considered the most fundamental of the physical sciences, but is viewed instead as the study of **emergent** phenomena arising from a still deeper level of mathematical reality. Other higher-level sciences (chemistry, biology, psychology, etc.) likewise continue to flourish, as knowledge of basic physical processes is inevitably divorced from an effective understanding of complex emergent reality.]