You are here

Appendix F

Octan Physics

[Fleegello originally wrote his critique of octan physics as section 2.3 of the Principia. It is listed separately here, as it frankly stands apart, and interferes with the general flow of that work. Most scholars agree that Fleegello did not introduce any novel physics in this section. Rather he took ideas already advanced by contemporary physicists, and adapted them to his own philosophical framework. As in the primary document, editorial comments are enclosed in brackets, to distinguish them from the main text.]

The physical (pan)universe has been identified as the Physical Consistency Subfield (PCS) of the Consistency Ideo Field (CIF) – the complete set of abstract mathematical objects that both define temporospatial (i.e., physical) relationships and are compatible with consistency logic. These objects may manifest (in part) as states of a physical system, or as observables or other operators that act on those states. If modern physics is a reliable guide, they incorporate a broad class of multidimensional objects known as dimensors. A dimensor operator defines linear relations between other dimensors. Because of this linear character, dimensors are often represented by multidimensional arrays. [Dimensors include the familiar mathematical objects called scalars, spinors, vectors, and tensors.]

In standard Shrodiik [quantum] theory [named in honor of the pre-Dracian physicist Shrodo], a physical state (of any sufficiently isolated system) is denoted by a ket symbol |ψ>, where ψ is an arbitrary label. This entity is supposed to encompass all physical aspects of a system. |ψ> is commonly interpreted in terms of the positions and motions of material particles at a time t in a 3-dimensional space x. Observables then include the positions, momenta, and energies of these particles.

For any physical system, there is a range of possible states. A novel feature of Shrodiik physics is that a system can consist of a linear combination, or superposition, of these available states. The selection of a set of fundamental basis states is then arbitrary, to some extent; any given set of basis states can be mixed into new combinations, to form distinct sets. In general, |ψ> can be viewed as a vector in an abstract space that spans all the possible states. If the components of a state vector are defined with respect to a specified set of basis vectors, then the state may be represented by a single-column dimensor array. Observables and many other operators may in turn be represented by square dimensor arrays that transform any given state (by the usual rules of matrix multiplication) into another state.

Let the symbol A represent an operator corresponding to some observable. When A is applied to an arbitrary state vector, the result is typically a linear combination of other state vectors. Suppose, however, that A is applied to a state vector |ψa> characterized by a well-defined value a of A – i.e., a state in which a measurement of A will definitely yield the value a. Then A acts on |ψa> by extracting this value:

      A|ψa> = a|ψa> .

This is what it means for A to represent an observable. Mathematically, |ψa> is an eigenstate of A, with a well-defined value a of that observable.
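[The eigenvalue relation can be checked numerically. A minimal sketch in the modern Python language, using the numpy library; the Hermitian array chosen for A is purely illustrative:]

```python
import numpy as np

# A Hermitian array standing in for an observable A (values purely illustrative).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Each column of vecs is an eigenstate |psi_a>; vals holds the eigenvalues a.
vals, vecs = np.linalg.eigh(A)

psi_a = vecs[:, 0]
a = vals[0]

# A acts on an eigenstate by extracting its well-defined value a:
print(np.allclose(A @ psi_a, a * psi_a))  # True
```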

Contemporary Shrodiik physics has another, most peculiar trait. For any physical state |ψ>, only the probabilities for measuring different values of a given observable can be computed. Even granted complete knowledge of a physical system at a particular moment, the future course as seen by any octan observer cannot in general be predicted with certainty. Performing a measurement (observation) somehow reduces a system to an eigenstate of the observed quantity, corresponding to the measured value. Detailed prescriptions for computing probabilities may be found in Shrodiik physics texts.

In bra-ket notation [physicists used this archaic script during Fleegello's era], the numeric overlap between two states |φ> and |ψ> is represented by <φ|ψ>, where the states are normalized such that <ψ|ψ>=1 for all |ψ>. The overlap value is a probability amplitude for starting with a system in state |ψ>, but observing it in state |φ>. The actual probability is the absolute square of this amplitude, or |<φ|ψ>|². For example, consider a one-particle system. If |x> is the state with the particle at position x, then the overlap <x|ψ> is the single-particle wavefunction ψ(t,x) of Shrodiik mechanics, and |ψ(t,x)|² is the probability density at time t for finding the particle at x.

The expectation value of an observable A, defined as the average value Aavg over repeated measurements on identical states |ψ>, is given by

      Aavg = <ψ|A|ψ> .

The observable A has a definite value a only if |ψ> is already an eigenstate |ψa> of A.
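[A minimal numerical sketch of the expectation value, in Python with the numpy library; the observable and states are illustrative. An eigenstate yields its definite value, while an equal superposition of the values +1 and -1 averages to zero:]

```python
import numpy as np

# Observable (Hermitian) with eigenvalues +1 and -1 (illustrative).
A = np.array([[1.0, 0.0],
              [0.0, -1.0]])
eigenstate = np.array([1.0, 0.0])              # definite value +1
superpos = np.array([1.0, 1.0]) / np.sqrt(2)   # equal mix of +1 and -1

def expectation(psi, A):
    # Aavg = <psi|A|psi> for a normalized state vector
    return np.vdot(psi, A @ psi).real

print(expectation(eigenstate, A))  # 1.0 -- a definite value
print(expectation(superpos, A))    # 0.0 -- average over +1 and -1
```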

In classical physics, material particles were treated as localized entities, distinct from waves (such as light). Only the latter could undergo self-interference, or diffract around obstacles. At the dawn of the Shrodiik revolution, waves were found to sometimes act like classical particles, and so-called particles were found to have wave properties. Observables with a continuous range of possible values in classical physics (e.g., the energy of an electron in an atom) might now be quantized, and restricted to discrete values.

Largely because of this duality of wave-particle characteristics, the order in which observables are measured may be significant. Consider two observables, represented by operators A and B. The operators commute if the order of measurement is irrelevant, or equivalently if the order in which A and B act on an arbitrary physical state is irrelevant – i.e., if AB = BA. The operators do not commute if AB ≠ BA. In this case the very act of measuring A or B introduces uncertainty into the value of the other, complementary observable. A system cannot simultaneously be an eigenstate of two operators that do not commute – each operator would merely rescale such a state, so their order could not matter. It is then impossible to simultaneously measure the values of two non-commuting observables, since such a measurement must create an eigenstate of both.
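[Non-commutation is easily exhibited with small arrays. A minimal Python sketch with the numpy library, using two illustrative 2 x 2 operators (the standard spin arrays of Shrodiik theory):]

```python
import numpy as np

# Two non-commuting observables, represented by 2 x 2 arrays.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# The order in which they act matters: AB != BA.
print(np.allclose(sx @ sz, sz @ sx))  # False
# The difference AB - BA (the commutator) is nonzero:
print(sx @ sz - sz @ sx)
```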

The archetypal pair of non-commuting observables in Shrodiik mechanics are the position x and linear momentum px of a material particle along a spatial coordinate (direction) x. Classically, these quantities commute, and every particle simultaneously has well-defined position and linear momentum. After the wave character of particles was discovered, momentum px became associated with the inverse wavelength λx, or the wavenumber kx, of a wavefunction:

      px = h / λx = h kx / 2π = ℏ kx ,

where h is the minuscule (shorter wavelength begets more particle-like behavior) but nonzero Planko constant [named for the pioneering physicist Planko], and ℏ = h/2π is the reduced Planko constant. The operator px is then directly proportional to the spatial rate of change of the wavefunction in direction x. More precisely, px equals the constant (-iℏ) multiplied by the spatial rate of change along x, where i is the imaginary unit (square root of -1). It is remarkable how imaginary (or complex) quantities arise naturally in the equations of Shrodiik physics! The commutation relation between x and px becomes

      x px - px x = iℏ .

Using calculus, it can be shown that the (unnormalized) wavefunction ψ(x) of a particle with pure wavenumber kx (corresponding to momentum px) at a particular moment has the exponential form

      ψ(x) = e^(i kxx) = cos( kxx ) + i sin( kxx ) ,

where e is the Eulero number of mathematics, and the "cos" and "sin" terms refer to standard (wave-like) trigonometric functions. This wavefunction is totally unlocalized in space; the absolute-square probability of finding the particle at a given position is the same for all x values. Conversely, it can be shown that a wavefunction corresponding to a particle at a definite position must include all possible wavelengths, or linear momenta.
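[A numerical sketch of this momentum eigenfunction, in Python with the numpy library; the wavenumber and grid are illustrative, and units are chosen so that the reduced Planko constant equals 1:]

```python
import numpy as np

hbar = 1.0          # units in which the reduced Planko constant is 1
kx = 2.0            # chosen wavenumber (illustrative)
x = np.linspace(0, 10, 2001)
psi = np.exp(1j * kx * x)        # psi(x) = e^(i kx x)

# The momentum operator: -i*hbar times the spatial rate of change.
p_psi = -1j * hbar * np.gradient(psi, x)

# p_psi ~= (hbar*kx) * psi: a momentum eigenstate (interior points only;
# finite differences are less accurate at the grid edges).
print(np.allclose(p_psi[1:-1], hbar * kx * psi[1:-1], atol=1e-3))  # True

# The probability density |psi|^2 is the same everywhere: totally unlocalized.
print(np.allclose(np.abs(psi)**2, 1.0))  # True
```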

Consider then a system |ψ> = |xo>, in which a particle initially has a definite position xo. Suppose an observer measures first the position x of the particle, and then its momentum px. Because the system starts in a state of well-defined position, the particle will be found at xo with 100% probability, and the wavefunction is unchanged. Because this wavefunction contains all possible momentum values, any value of momentum may be observed in the subsequent measurement, with equal probability. Now suppose the order of measurement is reversed – the observer measures momentum first, followed by position. The likelihood of initially observing any value of momentum px is the same as before. But the momentum measurement forces the particle into a state of well-defined momentum. The particle's position is thereby scrambled, and the observer may subsequently find the particle at any location!

Consider now a more general system in which one member of a pair of non-commuting observables is well defined. Mathematically, the system can be considered a superposition of pure eigenstates with different but well-defined values of the other non-commuting quantity. The existence of non-commuting observables is contrary to classical (pre-Shrodiik) physics. The natural law that describes physical evolution applies to superpositions of pure states, rather than to states in which all classical variables have precise values.

The state |ψ> of a physical system can in general be written as a coherent sum

      |ψ> = C1|φ1> + C2|φ2> + C3|φ3> + . . . = ΣjCj|φj>

over a complete set of orthonormal states |φj>, where the Cj are (complex) constants.

The |φj> are orthonormal if <φj|φk>=0 for all j ≠ k, and <φj|φj>=1 for all j.

The choice of the |φj> is arbitrary to some extent, but they must be eigenstates of a complete set of commuting observables that cover all physical aspects of the system.

The probability of starting with the system |ψ> and finding it in state |φj> is then Cj*Cj, where Cj* is the complex conjugate of Cj. The expectation value of an observable A is

      Aavg = <ψ|A|ψ> = ΣjΣkCj*CkAjk   where Ajk = <φj|A|φk> .

The off-diagonal terms j ≠ k in the sum represent nonclassical interference between the different states in the coherent superposition comprising |ψ>. These terms in general vanish only if the |φj> have well-defined values (i.e., are eigenstates) of A.
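[A minimal Python sketch of the double sum, using the numpy library, with illustrative amplitudes Cj and array elements Ajk; dropping the off-diagonal terms changes the result:]

```python
import numpy as np

# Observable with off-diagonal elements Ajk, in an orthonormal basis |phi_j>
# (all values illustrative).
A = np.array([[1.0, 0.5],
              [0.5, 2.0]])
C = np.array([1.0, 1.0]) / np.sqrt(2)   # coherent amplitudes Cj

# The double sum over j,k of Cj* Ck Ajk:
Aavg = sum(np.conj(C[j]) * C[k] * A[j, k] for j in range(2) for k in range(2))

# It agrees with the matrix form <psi|A|psi>:
psi = C
print(np.isclose(Aavg, np.vdot(psi, A @ psi)))  # True (both equal 2.0)

# The diagonal terms alone give a different answer; the off-diagonal
# j != k terms carry the nonclassical interference.
diag_only = sum(abs(C[j])**2 * A[j, j] for j in range(2))
print(diag_only)  # 1.5
```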

What determines useful observables, other than position and time? The mathematician Noethra has linked such quantities to symmetries in the equations of motion that describe the temporal evolution of |ψ>. In particular, Noethra's first theorem states that for every continuous, differentiable coordinate transformation that does not alter these equations, there is a corresponding observable whose expectation value is conserved, or constant over time. For sufficiently isolated systems, the equations are in fact generally unaffected by several such transformations, including time displacement, spatial displacement, and spatial rotation. Each of these symmetries is associated with an observable and conserved quantity.

But why are the equations of motion unaffected by the given transformations? Although physical conditions clearly vary at different locations in time-space, there is nothing else to distinguish points or directions. From an ideobasic perspective, the same physical law should then apply universally to all times, places and orientations. This law should further depend only on extant physical conditions. The inherent equivalence of all points and directions thus leads to the observed symmetries and conserved quantities.

When the laws of motion are not affected by displacements in time (i.e., they remain the same over time), then what is commonly called energy is conserved. This is primarily what makes energy a useful observable. Note that only the laws of motion are unchanging; physical systems themselves may change dramatically over time. Energy is associated with the Hoobitean operator H [named for the classical physicist Hoobitu]. The time rate of change of |ψ> is proportional to H. More precisely, H equals (iℏ) multiplied by the time rate of change. If |ψ> is an eigenstate of H, then it has a definite energy E, temporal frequency f, and angular frequency ω, related by

      E = h f = ℏ ω .

The (unnormalized) time-dependent wavefunction ψ(t,x) of a particle with wavenumber kx and angular frequency ω (corresponding to momentum px and energy E) has the wavelike form

      ψ(t,x) = e^(i (kxx-ωt)) = cos( kxx-ωt ) + i sin( kxx-ωt ) .

For a one-particle system, the relationship between time and energy is thus analogous to that between position and linear momentum. For a multi-particle system, however, the situation is more nuanced. Whereas every particle in such a system may be assigned its own dynamical position operator, all particles traditionally share a common time. Time is then treated as a system parameter, and not associated with a true operator.

When the laws of motion are not affected by displacements in spatial position (i.e., the laws are the same at different spatial points) or by spatial rotations, then linear momentum and angular momentum are also conserved, and are both useful observables. It can be shown more generally that the expectation value of any operator that both commutes with H, and is not explicitly a function of time, is also a constant of motion. Every symmetry in H is thus associated with a conserved quantity, and a corresponding observable.

Classical observables may have nonclassical analogs that result from a reinterpretation (typically involving commutation relations) of associated operators. In particular, the commutation relations among the three orthogonal (mutually perpendicular) angular momentum operators imply the existence of a nonclassical type of angular momentum, known as spin. Elementary particles are found to inherently possess this type of angular momentum. Particle spin is naturally quantized to discrete values, characterized by a spin number, which must be an integral multiple of 1/2. Overall spin angular momentum (which also equals the maximum possible component in any direction) is the spin number multiplied by the reduced Planko constant.

The spin angular momentum operators associated with a spin number s can be represented by irreducible (2s+1) × (2s+1) arrays. The spin aspect of a spin-s particle can then be represented by a spatially-oriented, (2s+1)-dimensional single-column dimensor known as a pointor, designated by Š. An overall particle state may in turn be represented by a pointor function Š(t,x) of time t and (3-dimensional) position x.
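[For s = 1/2 the arrays are 2 x 2. A minimal Python sketch with the numpy library, using the standard spin-1/2 representation in units where the reduced Planko constant equals 1:]

```python
import numpy as np

hbar = 1.0
s = 0.5   # spin number: the arrays are (2s+1) x (2s+1) = 2 x 2

# Spin-1/2 angular momentum operators along three orthogonal directions.
Sx = hbar / 2 * np.array([[0, 1], [1, 0]], dtype=complex)
Sy = hbar / 2 * np.array([[0, -1j], [1j, 0]])
Sz = hbar / 2 * np.array([[1, 0], [0, -1]], dtype=complex)

# The angular-momentum commutation relation [Sx, Sy] = i*hbar*Sz holds:
print(np.allclose(Sx @ Sy - Sy @ Sx, 1j * hbar * Sz))  # True

# Maximum possible component in any direction = spin number times hbar:
print(np.isclose(np.linalg.eigvalsh(Sz).max(), s * hbar))  # True
```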

Spinless (s=0) particles are represented by scalar (zero-rank dimensor) functions, with no inherent directionality – for example, f(t,x), where f is a simple function of t and x. [No elementary spin-0 particles were known in Fleegello's era, although composite spin-0 particles (e.g., pions) were certainly recognized.] Spin-½ particles are represented by special two-dimensional pointors known as spinors. Spinors do not transform like geometric vectors under coordinate transformations. Spin-1 particles with mass are represented by three-dimensional pointors, which do transform like geometric vectors. [Note that because massless spin-1 particles (in particular photons) have no rest frame but are constrained to move at light speed, they must actually be represented by two-component pointors.] Particles with even larger spin values are represented by distinct pointor classes.

Yet particles do not normally exist in isolation. How then can the state of a multiparticle system be represented? Suppose first that the particles are distinguishable, and motions are much slower than light speed. Such systems have traditionally been represented by a direct product of the pointor functions for the individual particles, in which time t is a common system parameter, but the coordinates xP of the different particles P are distinguished. For example, the state of a two-particle system might be represented by

      Ša(t,x1) Šb(t,x2)

where subscripts a and b label two different single-particle states.

Suppose now that at least two particles in a system are identical. The probability of finding either cannot be affected when their labels are exchanged – they would otherwise be distinguishable. Since the probability is related to the absolute square of the system wavefunction, then the state can at most acquire a complex phase factor (a factor with an absolute value equal to one) under particle exchange. Because two successive exchanges must leave the overall state unchanged, then the phase factor is limited to the values ±1. The state itself must then either be symmetric (unchanged) or antisymmetric (phase factor -1) under identical particle exchange.

The wavefunctions of identical bosons (particles with integral spin) are found to be symmetric, while those of identical fermions (particles with half-integral spin) are antisymmetric. The appropriate symmetry can be achieved if a system is represented by a sum over the direct pointor products, in which the functional dependencies of the particles are suitably interchanged. For example, the state of two identical fermions might be represented by

      Ša(t,x1) Šb(t,x2) - Ša(t,x2) Šb(t,x1) .
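[A minimal Python sketch of this antisymmetry, with two illustrative single-particle functions; the state flips sign under exchange and vanishes when the two identical fermions coincide:]

```python
import numpy as np

# Two single-particle wavefunctions (spatial parts only; forms illustrative).
def phi_a(x): return np.exp(-x**2)
def phi_b(x): return x * np.exp(-x**2)

# Antisymmetrized two-fermion state.
def psi(x1, x2):
    return phi_a(x1) * phi_b(x2) - phi_a(x2) * phi_b(x1)

x1, x2 = 0.3, 1.1
# Exchanging the particle labels contributes the phase factor -1:
print(np.isclose(psi(x2, x1), -psi(x1, x2)))  # True
# Two identical fermions are never found at the same point:
print(psi(0.7, 0.7))  # 0.0
```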

Symmetries in the equations of motion are not limited to continuous time-space transformations, but may also include discrete operations, such as time reversal and parity inversion (mirror reversal). [Fleegello stubbornly maintained that various discrete spacetime symmetries should generally hold, despite contrary evidence. For example, experiments seemed to demonstrate that parity is not conserved during certain types of radioactive decay. Parity is conserved if the equations of motion are unchanged when a system is replaced by its mirror image. Fleegello believed that physics could not be affected by such a simple transformation, and felt that crucial elements had been omitted from experimental analyses. Yet physicists soon realized that, since time and space are intimately linked, and antiparticles are equivalent to ordinary particles moving backward in time, the true symmetry involves the CPT transformation – a combination of particle-antiparticle charge exchange, parity inversion, and time reversal – and not any one of these operations in isolation.] Internal symmetries, that do not transform time-space points, can give rise to additional conserved quantities and observables (e.g., electric charge).

Indeed, the fundamental interactions between elementary particles are thought to derive from a variety of internal local gauge symmetries. For example, consider a single-particle wavefunction ψ(t,x). Under a local phase transformation, ψ is multiplied by a phase factor e^(iλ(t,x)), where λ(t,x) is a function of time-space. The absolute square (probability density) of ψ is unchanged by this transformation. If local gauge symmetry holds, then the new wavefunction must also satisfy the standard equation of motion. The kinetic energy part of that equation generally contains terms involving both the time- and space-rate of change of ψ, so the phase factor in the transformed wavefunction generates new quantities. The equation is invariant under the transformation only if it also contains terms that transform so as to cancel the effect of the (t,x) dependence in λ, while maintaining the original form of the equation. These terms can be identified with the vector and scalar potentials of the electromagnetic interaction.

The physicist Vigno has argued that symmetry principles do not merely restrict the laws of quantum physics, but define them. Elementary particles and their interactions have been associated with and characterized by the mathematical representations of abstract symmetry groups. Every such fully consistent object and process must coexist with every other compatible object and process somewhere within the PCS. This may require that the PCS is naturally divided into distinct physical universes.

Coordinate systems do not exist a priori in nature. The choice of a coordinate framework to describe a physical system should thus be arbitrary, from a strictly mathematical viewpoint (although one frame may be more convenient than another for a given purpose). It should then be possible to describe the laws of physics in a coordinate-free manner, in which observables appear only as abstract quantities, with no explicit reference to coordinate components. Expressing physical laws in such a covariant manner simplifies identification of symmetries and conserved quantities.

If the PCS is to respect the inherent arbitrariness in the choice of coordinate system, then fundamental physical constants that appear in the laws of physics should also be the same for all observers within a given physical universe, independent of the choice of reference frame. This applies in particular to dimensionless constants (e.g., the fine structure constant of atomic physics), which carry no physical units, but can be expressed as the ratios or products of dimensional constants that do possess units. Changes in the values of dimensional constants are generally meaningful only with respect to changes in their dimensionless combinations. So long as the values of physical constants are individually changed in a way that maintains the values of all fundamental dimensionless constants, the physical world is unaffected. Dimensionless constants stand independent of any arbitrary choice of measurement units. Indeed, no variations over time or space have thus far been detected.

[Some quantities thought to be fundamental constants in Fleegello's era have since been found to be variable. These have been reinterpreted as functions of truly fundamental constants and local physical conditions.]

Dimensionless fundamental constants need only be the same at all points within a particular physical universe. The values in distinct, non-interacting universes may be different. Indeed, if there is no fundamental reason a constant should have a particular value, then the PCS must encompass a host of universes covering the range of acceptable values. Yet this range cannot be continuous; the values must in some sense be quantized, and countable. All the worlds otherwise could not have meaningful existence within the PCS.

Even fundamental dimensional constants (whose numeric values depend on the choice of physical units) should be the same for all observers in a given universe, when measured with respect to reproducible units characteristic of fundamental physical processes. In particular, the speed of light in a vacuum, commonly denoted by the symbol c, appears to constitute a universal limit to the rate at which information can propagate through space. As first proposed by the physicist Niestu in his theory of inertial invariance, the speed c has the same value for all observers, irrespective of their state of motion. This is contrary to classical expectations, whereby an observer moving toward (away from) a light source detects a higher (lower) relative light speed than an observer at rest with respect to the source. That c is finite may be expected from an ideobasic viewpoint. An infinite speed is a special, limiting case of a general value, and the PCS should opt for the most general conception.

Niestu introduced a major paradigm shift in physics when he showed that a common value for c implies that time (space) intervals measured by one observer may be partially seen as space (time) intervals by an observer in a relative state of motion; time and space do not exist separately, but must be combined into a unified timespace [scientists of Fleegello's era apparently preferred this expression to today's more common term space-time]. The effect is tiny at low velocities, but becomes significant as speed approaches c (so-called Niestiik speeds). The associated coordinate transformation between reference frames in a relative state of motion is distinct from that of classical physics. If the equations of motion are to remain invariant under a velocity transformation, then those equations must be modified as well. A remarkable consequence of inertial invariance is that any mass m is associated with an energy mc². For a free particle, the relationship between total energy E, momentum p, and rest mass m becomes

      E² = p²c² + m²c⁴ .
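[A minimal Python sketch of this relation, in units where c = 1; the momentum and mass values are illustrative:]

```python
import math

def total_energy(p, m, c=1.0):
    # E^2 = p^2 c^2 + m^2 c^4 for a free particle
    return math.sqrt((p * c)**2 + (m * c**2)**2)

# A particle at rest carries only its mass-energy m c^2:
print(total_energy(p=0.0, m=2.0))  # 2.0
# A massless particle (e.g. a photon) has E = p c:
print(total_energy(p=3.0, m=0.0))  # 3.0
```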

Niestu ultimately expanded his ideas into the theory of general invariance, which describes gravity in terms of distortions in the geometry of timespace.

[Fleegello overlooked a related serious inconsistency in his view of the CIF. The CIF must encompass all possible reference frames. If It experiences the same time as observers in those frames, as Fleegello envisioned, It must integrate the various time lines to maintain a single unified state of being. Yet if speed c is the same for all observers, events that are simultaneous in one frame may be nonsimultaneous in another. Events could then be seen by the CIF as both simultaneous and not simultaneous, a contradiction. This inconsistency is resolved only if the CIF transcends physical time, and experiences it the way corporeal creatures experience space – as block time. All events in the physical panuniverse then span a single, eternal moment in the mind of the CIF. Yet the CIF must still distinguish the time-like and space-like separations among physical events that define causal chains. Primacy resides in these causal chains, and not in the reference frames that observers use to describe them.]

While inertial invariance was readily incorporated into Shrodiik mechanics for single particles, problems arose for multi-particle systems. In particular, time and space coordinates were not treated coequally in the traditional equations of motion. Inertial invariance requires that time and position both be treated either as system parameters, or as formal operators. Currently the most widely adopted solution, based on the first approach, is to reformulate Shrodiik mechanics into a Niestiik quantum field theory (QFT), in which elementary particles of a given type are treated as quantum excitations of an underlying field. The theory covers both traditional particles with mass, such as the electron, and zero-mass particles once considered waves, such as the photon. Different particle types are represented by distinct fields, defined by a variety of attributes, including rest mass, spin, and electric charge. Field operators replace the single-particle position and momentum operators.

A simple field state in QFT is characterized by the number of (identical) quanta occupying each of a set of allowed levels. Field quanta contain no explicit particle labels; QFT respects the exchange symmetry of identical particles in a remarkably natural way. Indeed, in QFT it can be shown that the wavefunctions of fields with half-integral spin must be antisymmetric, and those with integral spin symmetric. The number of quanta in a field is just the number of particles of the given type. Any distribution with a particular number of particles can be represented by a superposition of simple states (covering a range of level occupation number sets, each with the same total number of quanta). The overall state of a system is represented by the direct product of its constituent fields, or more generally by a superposition of such products. Unlike Shrodiik mechanics, QFT is not limited to a fixed number of particles. Field interactions result in the creation/destruction of associated quanta.

The mathematician Draci has proposed a multi-time alternative to QFT, in which both the (observer-based) times and positions (tj, xj) of various particles j are distinguished, and now treated as coequal system operators. Consider a system of two particles, labeled 1 and 2, together with an observer in an (inertial) reference frame (t, x). A wavefunction Ψo(to1, xo1; to2, xo2) can be defined in the observer frame, representing the joint probability amplitude that the observer at time to1 sees the first particle at position xo1, and at to2 the second particle at xo2. The subscript "o" explicitly references the outside observer. Unlike in QFT, wavefunction symmetry under the exchange of identical particles must be imposed.

In terms of observer coordinates (tj, xj), and in the absence of interactions, the wavefunction Ψo should separately satisfy the standard free-particle equations of motion for each particle. In the presence of interparticle forces (associated with the various gauge symmetries), interaction terms must be added to these equations, resulting in two coupled equations of motion (for a system of N particles, there are N coupled equations). While the simplest form of multi-time theory is restricted to a fixed number of particles, the interaction terms can be written using creation and destruction operators, allowing particle number to change.

It is important to note that multi-time theory does not posit multiple independent time dimensions in our own universe. If it did, then assigning a unique position operator to every particle in classical single-time theory would also imply more than three independent spatial dimensions. While every particle has its own time line, these become correlated through interactions, in a way consistent with a single overall time-like dimension.

With multiparticle systems, it is often useful to adopt composite spatial coordinates. For two particles, define

      X = (a1x1 + a2x2)  and  r = (x2 - x1) , where a1 and a2 are constants.

In particular, center-of-mass coordinates with a1=m1/(m1 + m2) and a2=m2/(m1 + m2) are routinely used in non-Niestiik treatments of two-body systems. Analogous composite time coordinates may also be defined in the multi-time approach, by

      T = a1t1 + a2t2  and  ρ = (t2 - t1) .

The coordinates (T,X) and (ρ,r) each transform in the same way as any conventional (t,x). Provided (a1 + a2) = 1, the rate of change of Ψ with respect to T (with X, r and ρ fixed) is just the sum of the rates of change of Ψ with respect to t1 and t2 (using non-composite coordinates, with x1, x2, and either t2 or t1 fixed). The rate of change with respect to X (with T, r and ρ fixed) is similarly just the sum of the rates of change with respect to x1 and x2 (with t1, t2, and either x2 or x1 fixed). These results are readily extended to systems of N > 2 particles. The rates of change of Ψ with respect to T and X can then be associated in the usual way with total system energy E and momentum P, respectively, equal to the sum of individual particle energies Ej and momenta pj. Just as every (xj, pj) and (in multi-time theory only) every (tj, Ej) form a pair of complementary, non-commuting operators, so too do (X, P) and (T, E).
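[This additivity can be checked numerically. A minimal Python sketch, using illustrative constants a1, a2 with a1 + a2 = 1 and an arbitrary smooth function of t1 and t2:]

```python
import math

a1, a2 = 0.3, 0.7     # any constants with a1 + a2 = 1
T, rho = 1.2, 0.4     # composite time coordinates (illustrative values)

def f(t1, t2):        # an arbitrary smooth two-particle function
    return math.sin(t1) * math.exp(-t2**2)

# Invert T = a1*t1 + a2*t2 and rho = t2 - t1 (valid when a1 + a2 = 1):
t1, t2 = T - a2 * rho, T + a1 * rho

h = 1e-6
# Rate of change with respect to T at fixed rho:
dF_dT = (f(T + h - a2 * rho, T + h + a1 * rho)
         - f(T - h - a2 * rho, T - h + a1 * rho)) / (2 * h)
# Sum of the rates of change with respect to t1 and t2:
df_dt1 = (f(t1 + h, t2) - f(t1 - h, t2)) / (2 * h)
df_dt2 = (f(t1, t2 + h) - f(t1, t2 - h)) / (2 * h)

print(abs(dF_dT - (df_dt1 + df_dt2)) < 1e-6)  # True
```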

In the weak interaction limit, Ψ should show no preferred value of ρ or r. For strong repulsive interactions, Ψ should be significant only for large values of the timespace separation parameter (invariant under a Niestiik velocity transformation)

      S = √( r² - c²ρ² ) ,
where r is the length of the radial vector r. For strong attractive interactions, there should be solutions of Ψ localized to small S.
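[The invariance of S can be checked numerically, assuming the standard Niestiik velocity-transformation form for a separation along the boost axis; values are illustrative, in units where c = 1:]

```python
import math

c = 1.0               # light speed (units where c = 1)
r, rho = 5.0, 2.0     # spatial and temporal separations in one frame

def boost(r, rho, v):
    # Niestiik velocity transformation along the separation axis
    gamma = 1.0 / math.sqrt(1.0 - v**2 / c**2)
    return gamma * (r - v * rho), gamma * (rho - v * r / c**2)

def S2(r, rho):
    # squared separation parameter S
    return r**2 - c**2 * rho**2

r_b, rho_b = boost(r, rho, v=0.6)
# S is the same in both frames: invariant under the velocity transformation.
print(math.isclose(S2(r, rho), S2(r_b, rho_b)))  # True
```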

Yet how are timespace coordinates (t, x) meaningfully defined at all, for a system composed of multiple elementary particles? Neither time nor space can be measured in absolute terms. Temporal and spatial intervals are gauged only with respect to physical processes and structures, which have traditionally been interpreted in terms of elementary particles and their interactions. Stripped of its vestments, timespace loses all meaning. Physical objects and dimensions of relation are inextricably linked.

Every elementary particle does have an inherent time scale, known as proper time, measured along its own world line. Further temporospatial relationships among particles can be defined only if they interact, either directly or indirectly. In QFT, interactions corresponding to the fundamental forces are associated with gauge symmetries. The mathematical description of these forces can be interpreted in terms of the exchange of phantom elementary gauge bosons by elementary fermions. In particular, the electromagnetic, weak, and strong interactions involve the exchange of phantom photons, W and Z bosons, and gluons, respectively (all spin-1). Phantom particles have all the attributes of their "real" counterparts, except mass; the usual relationship between energy, linear momentum and standard rest mass is not followed, making phantom particles ephemeral. Indeed, some physicists consider phantom particles not real in any sense, but only a bookkeeping device in describing interactions. Elementary fermions include electrons, neutrinos, and quarks (all spin-1/2). [The set of particles identified as elementary has changed considerably since Fleegello's time; his archaic list excludes various types of invisible matter, which interact with ordinary matter only through the gravitational force.] Only gravity, which is presumably mediated by the exchange of (normally massless) spin-2 bosons known as gravitons, has eluded incorporation into the quantum field theoretic framework.

Because phantom bosons are superpositions spanning energies and momenta that do not respect standard mass relationships, their exchange should not be literally interpreted in terms of trajectories from one real particle to another. Yet these disturbances in interacting fields do transmit information at light speed between real particles. This process can establish causal links (CLs) between interacting fermions. Of course, CLs can also be forged by real particles. A link specifically from phantom processes will be referred to as a phantom causal link (PCL).

A PCL from fermion j at proper time τj to fermion k at τk is directional; energy and other information flow forward or backward from one particle to another. The direction may be indicated by a binary parameter θjk=±1, where value +1 signifies forward flow from j to k, and -1 signifies reverse flow from k to j. It follows that θjk = -θkj.

Every [electromagnetic] PCL can be associated with a three-component (three-dimensional, or 3D) unit vector ûjk, defining a spatial direction of flow. It is further possible to define a 3D interparticle distance vector rjk from j to k, as seen from the perspective of particle j at time τk. The vector rjk is the space-like equivalent of the time-like quantity (τk - τj). Note that rjk = -rkj, but ûjk = ûkj. Then we can write rjk = θjk ûjk rjk, where rjk is the scalar length of rjk.

Every PCL can also be associated with a collinear energy and linear momentum transfer. Let ωjk be the angular frequency associated with energy transferred along ûjk, and Kjk the 3D wavevector associated with linear momentum transferred in the same direction, again from the perspective of j. Note that Kjk = Kkj. Whereas Kjk is parallel to ûjk for real bosons, this restriction does not apply to phantom bosons. Define K+jk to be the vector component of Kjk pointing in the direction ûjk, with K+jk = K+jk ûjk.

Although information carried by a PCL effectively moves at light speed, such links do not represent real particles, so there is no fixed relationship between ωjk and either Kjk or K+jk. Frequency ωjk and the forward wavenumber K+jk are also not limited to positive values. These quantities are positive if an interaction is repulsive, and negative if it is attractive; K+jk then points in the same direction as ûjk, or in the opposite direction, respectively.

Any PCL can be associated with a probability amplitude γjk, represented by a function

      γjk (θjk, τj, τk, rjk, ωjk, Kjk).

The probability (per unit time, distance, solid angle, energy, and momentum) that a link exists is the absolute square value of its amplitude. PCLs are universal; all observers recognize the same connections, at the same proper times. [Fleegello's description of PCLs is incomplete; in particular, full link amplitudes also convey information concerning angular momentum transfer.]

An elementary event may be defined as any point along the world line of a real elementary particle at which a causal link (of any type) is established with another particle. Niestu has made the radical suggestion that the network of CLs among particles does not merely occur within timespace, but actually defines timespace. The number of spatial dimensions is set by the number of components in the unit vectors ûjk.

      [Diagram: Particle Interaction]

For example, consider an electromagnetic interaction between two charged particles, labeled #1 and #2 in the accompanying diagram. Suppose that a PCL (dotted line) exists from particle 1 (solid line) at (local) proper time τ1A, corresponding to event A, to particle 2 at proper time τ2B, marking event B; and that a second PCL exists from particle 2 at the same τ2B, back to particle 1 at τ1C, or event C.

From the perspective of particle 1 (ignoring any nominal acceleration), event B occurs at time-distance coordinates (τ1B , d12B), given by

      τ1B = (τ1A+τ1C)/2
      d12B = d12B û1AB

where û1AB is a 3D unit vector pointing in the direction of flow from A to B from the perspective of particle 1, and d12B is the scalar interparticle distance

      d12B = (τ1C-τ1A) c/2 .

Note that û1AB = -û1CB from the same perspective.

From the perspective of particle 2, the timespace coordinates of event B are simply (τ2B ,0). If particle 2 moves with respect to particle 1, then û2BA differs from û1AB by a Niestiik-like velocity transformation, and û2BA ≠ -û2BC. The given links do not determine the distance from event B to particle 1 from the perspective of particle 2, since link pathlengths along the two legs could now differ. The distance may be inferred to equal d12B only if time is absolute, as in classical physics. With inertial invariance, the distances are not in general equal.
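The two-link construction above is simple enough to state algorithmically. A minimal sketch in Python (the function name, natural units with c = 1, and the example values are assumptions for illustration):

```python
import numpy as np

C = 1.0  # light speed in natural units (an assumption for illustration)

def event_b_coords(tau_1a, tau_1c, u_1ab):
    """Time-distance coordinates of event B from particle 1's perspective.

    tau_1a: proper time of particle 1 at event A (link out to particle 2)
    tau_1c: proper time of particle 1 at event C (link back from particle 2)
    u_1ab:  3D unit vector from A to B, as seen by particle 1
    """
    tau_1b = (tau_1a + tau_1c) / 2.0        # midpoint of the two link times
    d_scalar = (tau_1c - tau_1a) * C / 2.0  # scalar interparticle distance
    return tau_1b, d_scalar * np.asarray(u_1ab)  # vector d12B = d12B û1AB

# Example: links leave at τ1A = 0 and return at τ1C = 2, along the x-axis.
tau_b, d_vec = event_b_coords(0.0, 2.0, [1.0, 0.0, 0.0])
```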

The directions of adjacent CLs must be consistent, if they are to make physical sense; interparticle distances are otherwise ill defined. Under a time reversal operation (in which the direction of time reverses along all world lines), CL directions must also reverse.

In classical physics, the depicted interaction defines a definite correspondence between times τ1B and τ2B, and a common interparticle separation at this time. With inertial invariance, the interaction instead defines a correspondence between the respective timespace coordinates (τ1B ,d12B) and (τ2B ,0). In Shrodiik physics, the interaction contributes to a probability amplitude for this correspondence.

What is the expected functional form of γjk (θjk, τj, τk, rjk, ωjk, Kjk)? The time dependence must encode the direction of flow, in a manner consistent with θjk. In quantum physics, a complex phase factor e-iωt indicates flow in the positive time direction, where ω is angular frequency. The amplitude γjk for a simple link state should then include a phase factor

      e-i ωjk θjk (τk-τj) .

This factor does not favor any linkage j to k; its absolute square is the same for all links. However, a complex state of γjk consists of a superposition of simple states. The summation

      Σω  e-i ω (t-to)

over a range of ω tends to peak at t ≈ to. If τj and τk are synchronized as closely as possible, γjk will peak at τj ≈ τk - θjk djk/c, where djk is the length of an interparticle distance vector djk. The simple temporal phase factor may then be expanded to the more general form

      e-i ωjk (θjk [τk-τj]-djk/c) .

The corresponding (unnormalized) overall phase factor for a simple state that is symmetric in time and space parameters is

      e-i ωjk (θjk [τk-τj]-djk/c)  e+i θjk Kjk·(rjk-djk)

where the scalar product  K·r is the scalar length of r multiplied by the projection of K onto r. While r relates to phantom processes defining the space within which particles j and k move, djk captures the initial conditions of those particles. These phase factors are comparable to momentum eigenstates of free particles. [While functions of the given form comprise a complete set of link states, other forms – e.g., angular momentum eigenstates – are also possible, and more convenient in some situations.]

If the linked particles are identical, then exchanging their labels must not alter any physical link characteristic - both link probability and direction of information flow must be preserved. Because γjk and γkj represent the same link seen from a different perspective, the same condition even applies to links between non-identical particles. Under particle exchange,

      γjk (θjk, τj, τk, rjk, ωjk, Kjk) ⇒ γkj (θkj, τk, τj, rkj, ωkj, Kkj) .

The overall phase factor associated with γjk is unchanged, as long as ωjk = ωkj, Kkj = Kjk, rkj = -rjk, and dkj = -djk. The phase portion of the link probability is thus also unchanged. If particle k moves with respect to particle j, then ωkj and the 3D components of Kkj, rkj, and dkj from the perspective of k will differ from the values seen by j. The functional relationships ωkj(ωjk), Kkj(Kjk), rkj(rjk), and dkj(djk) should be compatible with the appropriate Niestiik transformations for a speed-c link from τj to τk, as those equations maintain any collinearity between K, r, and d.

PCL amplitudes can presumably be derived from QFT, or some extension of that theory. Still, exploration of PCL properties may in itself shed light on the ability of interactions to correlate time lines and define interparticle distances. A single time scale is appropriate to describe the motion of two particles only if their proper times can be correlated one-to-one. While multiple serial links are needed to define time correlation and interparticle distance, ultimately the information must be consistently encoded in the amplitudes of successive links.

Reconsider then the two-particle system, in which a pair of PCLs connect event A on the time line of particle 1 at τ1A =τ1B - d12B/c to event B on the time line of particle 2 at τ2B in the direction ûAB, and B back to event C on the time line of particle 1 at τ1C =τ1B + d12B/c. As before, from the perspective of particle 1, this defines distance d12B = d12B ûAB at time τ1B. Assume that the energy passed from A to B is comparable to the energy passed from B to C, while the corresponding momenta are reversed, and let K+ equal the absolute value of K+AB. Adopt bare phase factor states γAB and γBC, with θAB = θBC = +1, θCB = -1, ûAB= -ûCB, and rAB = rCB. Then

      γAB γBC ∝ e+2iK+(r12B-d12B) .

The absolute square of this joint amplitude is unity, and does not favor any particular r12B. Only if link amplitudes are summed over a range of K+ can r12B be localized to a value near d12B.
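The localization claim can be checked numerically. The sketch below (a hypothetical illustration, not Fleegello's calculation) sums equally weighted phase factors e+2iK(r-d) over K in [0, Kmax]: the joint amplitude has unit magnitude at r12B = d12B and falls off rapidly away from it:

```python
import numpy as np

def joint_amplitude(r, d, k_max, n=2001):
    """Equal-weight superposition of e^{2iK(r-d)} over K in [0, k_max]."""
    ks = np.linspace(0.0, k_max, n)
    return np.mean(np.exp(2j * ks * (r - d)))

d12, k_max = 1.0, 40.0  # hypothetical distance and wavenumber cutoff
at_peak = abs(joint_amplitude(d12, d12, k_max))     # r12B equal to d12B
away = abs(joint_amplitude(d12 + 1.0, d12, k_max))  # r12B far from d12B
```

The superposition localizes r12B near d12B with a width of order 1/Kmax, exactly as the text argues.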

Consider now the classical limit, in which time lines τ1 and τ2 can be perfectly synchronized, and a common, well-defined 3D distance d12(τ1) exists at any time τ1. Ignoring accelerations,

      γ12 ∝ δ(θ12[τ2-τ1] - d12/c)  δ3(r12-d12)

where δ is the Draci delta function, defined such that δ(x-y) = 0 for all x ≠ y, and δ(0)=∞ such that the area under the curve δ(x-y) along x for a given y value is unity. δ3 is the product of three delta functions, one for each spatial component of (r12-d12).

Mathematically, the given product of delta functions can be written as the summation

      γ12  ω12 (Δω12) K12 (Δ3K12e-i ω12 (θ122-τ1]-d12/c)  e+i θ12 K12·(r12-d12) 

over all values (-∞ to +∞) of ω12 and each of the 3D components of K12, where Δω12→0 and Δ3K12→0 are minimal increments of angular frequency and 3D wavevector. This corresponds to a maximal, equally-weighted superposition of the simple states of γ12. [Fleegello here glosses over thorny technical issues regarding normalization of these functions.]

In the real world, a PCL summation may not favor all frequencies and momenta equally. In QFT electromagnetic calculations, analogous sums include factors like 1/(ω2-c2K2), favoring photon-like states with an effective mass near zero. As discussed earlier, ω12 and K12 should be restricted to positive (negative) values if the interaction is repulsive (attractive). There may also be maximum absolute values ωmax of angular frequency and Kmax of wavenumber. Restricting values accordingly, but neglecting possible weighting factors, the sums can be converted to integrals, and evaluated using calculus. Probability distributions are obtained from the absolute square of the result.

Normalizing to unit area, the probability distribution P(σ) for σ ≡ ( |τ2-τ1| - d12/c) is

      P(σ) ≃ 4 sin2(ωmax σ/2) / (𝜋 ωmax σ2) .

While this distribution still peaks at σ=0, now with a finite value ωmax/𝜋, the width (~ first null) is no longer zero, but ~2𝜋/ωmax. The correlation between τ2 and τ1 is then not one-to-one. Single-time theory would be an approximation; a multi-time scheme should be more accurate.
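The stated peak value and first null can be checked numerically against the formula for P(σ); the cutoff value below is illustrative only:

```python
import numpy as np

def p_sigma(sigma, w_max):
    """P(σ) = 4 sin²(ω_max σ/2) / (π ω_max σ²), normalized to unit area."""
    return 4.0 * np.sin(w_max * sigma / 2.0) ** 2 / (np.pi * w_max * sigma ** 2)

w_max = 10.0                                 # hypothetical cutoff frequency
peak = p_sigma(1e-8, w_max)                  # value just off σ = 0
null = p_sigma(2.0 * np.pi / w_max, w_max)   # first null at σ = 2π/ω_max
```

Near σ = 0 the distribution approaches the finite value ωmax/𝜋, and it vanishes at σ = 2𝜋/ωmax, matching the peak and width quoted in the text.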

To integrate over K12, it is convenient to use polar coordinates, with the axis along (r12-d12). The normalized probability distribution for η ≡ |r12-d12| is more complicated than the distribution for σ, but also peaks at η=0, with a finite value 5Kmax/6𝜋. The width (now roughly defined by the first minimum in the distribution) is ~8/Kmax, analogous to the result for σ.

For real massless bosons, a total travel distance d defines a maximum wavelength λmax=2d, corresponding to a minimum frequency ωmin=𝜋c/d and wavenumber Kmin=𝜋/d. However, because a PCL represents collective phantom processes and does not comprise an independent time line, phase cannot gradually change along its length; only the net shift from one end to the other is meaningful. This shift must then be limited to the range 0 to +2𝜋 (-2 𝜋 to 0) if the interaction is repulsive (attractive). Any outside value would be mathematically equivalent to, and so physically indistinguishable from, a number inside the range. Since a link γ12 persists for time d12/c and distance d12 from the vantage of particle 1, this corresponds to maximum absolute values ωmax = 2𝜋c/d12 and Kmax = 2𝜋/d12. The classical limit is then approached only through a sum of phantom processes and (higher-energy) real bosons. The minimum uncertainty in the correlation between τ2 and τ1 from phantom processes alone is ~d12/c (the time it takes light to travel d12). An uncertainty ~d12 is likewise inherent in the specification of interparticle distance. Note that these limits refer only to phantom processes between a pair of real fermions. Values for a macroscopic observer examining a single fermion could be much smaller, if real particles are used as probes.

The multi-time wavefunction for a system of particles is traditionally defined from the perspective of an external (inertial) observer. This observer consists of an organization of myriad particles, which together interact to establish a composite timespace framework. While interactions among observed particles internally define their relative positions, interactions between the observer and the particles determine positions seen by the observer. Consider a three-particle system. A standard multi-time wavefunction may be represented by Ψo(to1, xo1; to2, xo2; to3, xo3). Draci has proposed an alternative, more symmetric, internal-perspective multi-time wavefunction, more closely tied to PCLs:

      Ψ123(τ12, x12; τ13, x13; τ21, x21; τ23, x23; τ31, x31; τ32, x32) .

Here Ψ123 represents the joint probability amplitude that the first particle at proper time τ12 sees the second particle at position x12 and proper time τ21, while the second particle at proper time τ21 sees the first particle at position x21 and proper time τ12; and so on, for all particle pairs. The (proper) time and position coordinates τjk and xjk are treated coequally, as required for compatibility with inertial invariance, and are considered system operators. To accommodate more than two particles, proper times now have a double subscript; the first index indicates the primary perspective particle, and the second index the observed particle.

An elementary particle may even have PCLs to itself. A link connecting τja to τjb along the world line of particle j effectively extends distance (τjb - τja)c/2 from the particle. A student of Draci has suggested that these self-links should be reflected in the internal wavefunction. [Draci first refused to endorse the idea, because of the bizarre interpretation of the amplitudes γjj; in particular, how could the 3D direction of information flow be the same at both ends of a link?] For a three-particle system, the modified wavefunction has the form

      Ψ123(τ11, x11; τ12, x12; τ13, x13; τ21, x21; τ22, x22; τ23, x23; τ31, x31; τ32, x32; τ33, x33) .

Now Ψ123 incorporates the joint amplitude that the first particle at proper time τ11 sees a self-link at a distance x11 (linking times τ11 - |x11|/c and τ11 + |x11|/c); and similarly, for the other particles. The γjj have been related to the particle rest mass energies mjc2.

What is the relationship between PCL amplitudes and the internal wavefunction of a group of fermions? Do PCL amplitudes have their own equations of motion? If phantom processes define the very space within which fermions move, PCL amplitudes do embody probabilities of relative positions, as well as the transfer of energy and 3D momentum between particles. Only through interactions and associated superpositions of simple link states over a range of energies and momenta can relative positions be localized.

[Fleegello evidently hoped that CLs would provide an alternative multi-time framework for representing multi-particle systems. They instead ultimately proved to simply be useful conceptual tools for thinking about interactions. Draci et al. showed that, even in a multi-time theory, PCLs are dictated by gauge symmetries in the coupled fermion equations of motion.]

Insofar as PCLs specify 3D direction vectors, they also define (probability amplitudes for) the relative 3D positions of all particles in a system, regardless of their number. But what if PCLs did not specify vector direction? Consider in this case an isolated system of N distinguishable particles. If relative speeds are non-Niestiik, then djk = dkj, and the γjk(τj, τk) define amplitudes for N(N-1)/2 interparticle distances. These are sufficient to construct amplitudes for the (3N-6) coordinates of 3D interparticle relative position vectors (up to a rigid rotation and displacement of the entire system), as long as N>3. Relative speeds and accelerations are implicit in changes in the djk along world lines. Comparable conclusions apply even to quantum systems of identical particles, and for Niestiik speeds.
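The claim that the N(N-1)/2 interparticle distances suffice to recover relative 3D positions (up to a rigid rotation, reflection, and displacement) can be illustrated with classical multidimensional scaling, a standard reconstruction technique not mentioned in the text:

```python
import numpy as np

rng = np.random.default_rng(0)
pts = rng.normal(size=(6, 3))  # N = 6 "particles" at random 3D positions
# N(N-1)/2 independent squared interparticle distances (as a full matrix)
D2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)

# Classical multidimensional scaling: double-center the squared-distance
# matrix; its eigendecomposition yields coordinates up to a rigid rotation,
# reflection, and displacement of the entire system.
n = len(pts)
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ D2 @ J
w, V = np.linalg.eigh(B)  # eigenvalues in ascending order
coords = V[:, -3:] * np.sqrt(np.maximum(w[-3:], 0.0))  # top 3 modes suffice

# Distances recomputed from the reconstructed coordinates match the originals.
D2_rec = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)
```

Only three eigenvalues of the centered matrix are significantly positive here, echoing the text's point: the distances could in principle support up to ~(N-1) dimensions, yet a 3D configuration already accounts for them exactly.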

The web of CLs and associated proper times among the world lines of elementary particles could thus determine the geometry of timespace, in particular the large-scale geometry, whether or not PCLs specify spatial direction. In either case, space can unfold from the relationships among myriad connected events. To the extent this occurs, space does not have independent existence, but is defined by the connections between the mathematical objects we perceive as particles. A world without causal links would be a world without space; any particles would be independent of each other, with no meaningful positional relationships.

Yet how and why can inter-particle distances defined by interactions generate an overall three-dimensional, nearly flat (under normal conditions) space? Classically, even without inherent 3D directions, the N(N-1)/2 interparticle distances in an N-particle system are sufficient to determine all relative coordinates for up to ~(N-1) dimensions!

The mathematical physicist Wittuu has proposed a mechanism that both restricts and defines the number of spatial dimensions. Large-scale timespace is defined mainly by the electromagnetic interaction, since it is associated with the exchange of phantom photons of unlimited range. While real photons are massless, and so restricted to two spin states, phantom photons have three spin states, like any spin-1 particle with mass. These correspond to three inherent, independent "directions." Because all photons are identical, exchanging their identities cannot alter a physical system, so they must share the same three directions. This causes every interparticle distance defined by PCLs from photon interactions to be limited to vectors in a common macroscopic three-dimensional space.

The strong and weak interactions should then establish additional spatial dimensions. Although there are eight gluon types, these are not truly independent, and together generate only three additional dimensions. The three bosons of the weak force generate the same number, making a total of nine spatial, or ten timespace dimensions. Yet the ranges of the strong and weak interactions are so tiny (~10⁻¹³ and 10⁻¹⁶ centurets, respectively), they mainly affect the small-scale geometry of timespace. Wittuu suggests that the associated dimensions are "curled up" or "attenuated," and only obvious at very small scales or high energies.

[While Wittuu's argument was sketchy, later generations of physicists demonstrated that his intuition was sound (although he missed a few small-scale dimensions). A rigorous explanation of the origin of macroscopic spatial dimensions was eventually developed. We now realize that space-time arises as an emergent property from the interconnections among myriad primitive, timeless, abstract mathematical forms.]

What about gravity? The carrier of this interaction is presumably the massless graviton. Because the graviton is a spin-2 particle with unlimited range, gravity might be expected to generate its own large-scale 5-dimensional space. Yet gravity has an unusual character, related to its incompatibility with standard QFT. All other fundamental forces are carried by spin-1 bosons, and associated with unique and conserved "charges" (e.g., electric charge). But gravity couples to a system's stress-energy tensor, to which every interaction contributes. Gravity reduces all "bare" mass energies, and even couples to itself, leading to nonlinearities in the gravitational field equations. PCLs established by gravity are thus dependent on and flow from the other interactions, so that gravity does not add any new dimensions. It may nonetheless distort the large-scale structure of timespace, as in Niestu's theory of general invariance.

It is clear that neither standard graviton exchange nor general invariance represents the complete fundamental description of gravity, even if both are good approximations in the low-energy limit. A missing element in these theories may involve the small-scale structure of time. Physicists have traditionally considered time to be continuous. In an attempt to avoid divergent (infinite) quantities in QFT calculations [which had previously been removed for forces other than gravity by a dubious procedure known as renormalization], Planko has proposed that proper time is quantized along the world line of every elementary particle with mass. The fundamental (minimum) proper time interval, or chronon, is represented by the symbol Δ. Distance between particles is naturally quantized in integral multiples of Δc/2. [Space-time volume is thus more generally quantized, and not space or time separately. By inertial invariance, this quantity is unaffected by a velocity transformation.]

Quantized proper time may be required by ideobasic principles. Consistency logic compels the PCS to recognize the most general conception of time. Yet continuous time is only a limiting case of quantized time. The infinity of real numbers on a continuous line segment is furthermore not countable; there is no one-to-one correspondence between the numbers and the set of positive integers. Because it would then be impossible to locate all points on the line within the conscious field of the PCS, they could not have meaningful existence. Finally, time is inherent only along particle world lines; it does not meaningfully reside anywhere else.

If timespace is quantized, then the smooth differential equations of Shrodiik mechanics and QFT must be replaced by discrete difference equations. Observables defined in terms of derivatives must be similarly redefined. Symmetry principles and conservation laws are all affected.

[Other physicists had previously hypothesized that spacetime was quantized, but only with respect to the overall coordinate framework of a given observer, not with respect to individual particles. These lattice approaches were doomed to failure, as they were divorced from the very processes that define time and space.]

A minimum proper time interval Δ implies a maximum absolute angular frequency

      ωmax = 𝜋/Δ

in any function of proper time, and a range of meaningful frequencies

      -ωmax ≤ ω ≤ +ωmax

(this universal frequency limit should be distinguished from the smaller cutoff specific to PCLs). Based on a symmetric version of the modified single-particle equation of motion, Planko has proposed replacing the linear equation relating the energy E of an elementary particle in its own rest frame to its proper time angular frequency ω by the trigonometric formula

      E = (ℏωmax/𝜋) sin(𝜋ω/ωmax) .

This reduces to the standard equation when ω/ωmax << 1, and can alternatively be written

      E = Emax sin(Eo/Emax)

where Eo is a particle's uncorrected energy Eo = ℏω , and

      Emax = ℏ ωmax /𝜋 = ℏ/Δ
is the maximum energy, attained at the angular frequency ωmax/2.

Note that Eo = moc2 for an elementary particle with an uncorrected (bare) rest mass mo.

[Massless particles have no rest frame, and there is no passage of time along their world lines; frequency and energy must be specified with respect to associated particles with mass.]

Proper time quantization reduces and limits a particle's effective rest mass energy, in a manner curiously similar to gravity. For every known elementary particle (characterized by a single, independent proper time line), Eo / Emax << 1. The sine function in the modified equation for energy can then be approximated by a truncated power series. Including only the first correction term in this expansion,

      E ≈ Eo - Emax(Eo/Emax)3/6 .

For an elementary particle, this is
      E ≈ moc2 - mo3c6Δ2/6ℏ2 .
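A quick numerical check of the truncated expansion, in hypothetical units with Emax = 1 and a bare energy well below the limit:

```python
import math

def corrected_energy(e0, e_max):
    """Planko's trigonometric rest energy: E = Emax sin(Eo/Emax)."""
    return e_max * math.sin(e0 / e_max)

def truncated_series(e0, e_max):
    """First correction term only: E ~ Eo - Emax (Eo/Emax)^3 / 6."""
    return e0 - e_max * (e0 / e_max) ** 3 / 6.0

e_max = 1.0   # hypothetical unit choice
e0 = 0.05     # bare rest energy with Eo/Emax << 1
exact = corrected_energy(e0, e_max)
approx = truncated_series(e0, e_max)
```

The two expressions agree to within the next term of the series (order (Eo/Emax)⁵), and both lie below the bare energy Eo, reflecting the reduction of effective rest mass energy.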

According to standard Shrodiik theory, the uncertainty in the position of a particle of mass m cannot be smaller than ℏ/2mc. The rest mass mo thus cannot be meaningfully confined to a volume with a radius rmin smaller than

      rmin ≈ ℏ/4moc .

Using this relation to remove one power of mo from the previous equation,

      E ≈ moc2 - (c5Δ2/24ℏ)(mo2/rmin) .

The correction term is equivalent to the classical gravitational binding energy of a mass mo distributed over a surface of radius rmin, if one identifies the gravitational constant G as

      G ≈ c5Δ2/12ℏ .

Conversely, the chronon Δ can now be related to the gravitational constant by

      Δ = √(12ℏG/c5) .

Indeed, Planko has identified the minimum distance Δc/2 with the Planko length

      LP = √(ℏG/c3) ≈ 10⁻³³ centurets,

and the chronon Δ with the Planko time

      TP = √(4ℏG/c5) ≈ 10⁻⁴³ nocs,

which differs from the value derived above by less than a factor of two.
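The factor between the chronon and the Planko time follows directly from the two formulas; the sketch below uses SI values as stand-ins for the text's centurets and nocs (an assumption; the ratio itself is dimensionless and independent of the unit system):

```python
import math

# SI stand-ins for the text's units (an assumption for illustration).
HBAR = 1.054571817e-34   # reduced Planko constant, J*s
G = 6.67430e-11          # gravitational constant, m^3 kg^-1 s^-2
C = 2.99792458e8         # light speed, m/s

delta = math.sqrt(12.0 * HBAR * G / C ** 5)    # chronon Δ = √(12ℏG/c⁵)
t_planck = math.sqrt(4.0 * HBAR * G / C ** 5)  # Planko time T_P = √(4ℏG/c⁵)
ratio = delta / t_planck                       # √(12/4) = √3 ≈ 1.73
```

The ratio is exactly √3 ≈ 1.73, confirming the text's "less than a factor of two."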

[These estimates are remarkably close (within a factor of eight) to the chronon value obtained from subsequent experiments. The Planko quantities were inferred theoretically from the time/distance scale at which the quantum effects of gravity become significant.]

Planko has further proposed a natural system of units, in which the equations of motion are simplified. The chronon is now the unit of time, and Δc the unit of distance. Interparticle separations are then half-integral multiples of the fundamental unit, and the speed c is numerically equal to 1. Units of mass and electric charge are selected so that both the Planko and the gravitational constants are numerically equal to 1, while an elementary electric charge is equal to the square root of the fine structure constant.

How does time quantization affect the energy of a multiparticle system? Consider a pair of elementary particles, both at rest with respect to an observer, and separated by radial distance r. The observer may combine the respective proper time scales τ1 and τ2 into an overall system time t and a time correlation parameter ρ, approximated by

      t ~ (τ1 + τ2) / 2  and

      ρ ~ (τ2 - τ1) .

When the particles are far apart, interactions are negligible, so τ1 and τ2 should be independent and uncorrelated. System time t is then effectively quantized in intervals of Δ/2 – increasing either τ1 or τ2 by Δ increases t by only Δ/2. The two-body wavefunction should be the product of free-particle wavefunctions, with bare masses m1o and m2o . Regardless of how the total system energy E is precisely defined, E should be approximately equal to the sum of the individual particle energies:

      Efar ≈ m1c2 + m2c2

where m1 and m2 are the respective effective rest masses of each particle, related to the bare rest masses by the expression

      m1c2 = Emax sin(m1oc2/Emax)  and

      m2c2 = Emax sin(m2oc2/Emax) .

The energy limit Emax applies only to the rest mass energies of the individual particles, along their respective world lines, and not to the overall system.

At smaller separations, τ1 and τ2 should become correlated by interactions, such that t is quantized in progressively larger intervals approaching Δ (interactions are presumably also required to define interparticle distance). The maximum correlation, at a minimum meaningful distance rmin, is equivalent to the particles merging into a single world line and proper time – increasing τ1 by Δ also increases τ2 by Δ, and vice versa. The time correlation parameter ρ becomes restricted to values near zero, and the individual particle wavefunctions merge into a single function characterized by a bare mass (m1o+m2o) and system time t. Considering only bare mass and time quantization effects, the combined energy is then

      Enear = Emax sin[(m1o+m2o)c2/Emax] .

Whereas a total energy limit corresponding to the reduced time interval Δ/2 applies when the particles are far apart, a lower limit corresponding to the full time interval Δ applies when the particles are close together.

At intermediate separations r, one can write

      Er = (1-ε)Efar + εEnear ,

where ε is a function of r, such that ε → 0 as r → ∞, and ε → 1 as rrmin .

This equation can in turn be rewritten

      Er = Efar + Eint ,

where Eint is an effective interaction energy

      Eint = -ε (Efar - Enear) .

If bare mass energies are much smaller than Emax, then to good approximation, correction terms of order higher than (Eo/Emax)2 may be ignored, and

      Eint ≈ -ε c6 m1m2(m1o+m2o)/2Emax2 .

The minimum separation rmin can be estimated from the smallest volume that can confine the total uncorrected rest mass energy,

      rmin ≈ ℏ/4(m1o+m2o)c .

[Fleegello uses semiclassical reasoning here. He contemplates in a classical manner the total energy of two motionless masses as a function of their separation; then applies a Shrodiik argument that the probability distribution of a particle's position is spread out over space, and confinement entails a degree of motion "jitter," to estimate rmin. The conclusions nonetheless have some qualitative validity.]

If the chronon is equated to the Planko time, the equation for Eint can be rewritten

      Eint ≈ -m1m2G (ε/2rmin) .

This is identical to the classical (low-energy) expression for gravitational binding energy, if only the function ε is set to

      ε = 2rmin/r .

This radial dependence is appropriate for a long-range interaction carried by massless gravitons. It also suggests that the degree of correlation between times τ1 and τ2 is ~ 1/r, consistent with a cutoff frequency ωmax ~ c/r for PCLs.
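The chain of approximations above can be verified numerically. The sketch below (hypothetical masses, natural units ℏ = c = 1, chronon equated to the Planko time so that G = Δ²/4) compares the exact Eint = -ε(Efar - Enear) with the classical expression -Gm1m2/r:

```python
import math

# Natural units hbar = c = 1 (assumption), with the chronon equated to the
# Planko time T_P = sqrt(4G), so that G = DELTA**2 / 4 and E_max = 1/DELTA.
DELTA = 1.0e-3
G = DELTA ** 2 / 4.0
E_MAX = 1.0 / DELTA

m1o, m2o = 2.0, 3.0                  # hypothetical bare masses << E_MAX
r_min = 1.0 / (4.0 * (m1o + m2o))    # rmin = hbar / 4(m1o + m2o)c

e_far = E_MAX * (math.sin(m1o / E_MAX) + math.sin(m2o / E_MAX))
e_near = E_MAX * math.sin((m1o + m2o) / E_MAX)

def e_int(r):
    """Interaction energy Eint = -eps (Efar - Enear), with eps = 2 rmin / r."""
    return -(2.0 * r_min / r) * (e_far - e_near)

r = 100.0 * r_min                    # a separation well outside rmin
newtonian = -G * m1o * m2o / r       # classical gravitational binding energy
```

To leading order in the bare masses, e_int(r) reproduces the Newtonian binding energy, as the identification of ε = 2rmin/r requires.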

Thus, in the low-energy limit, time quantization appears to affect system energy in a way similar to the low-energy limit of standard graviton exchange, as long as the proper times of elementary particles are appropriately correlated through interactions. Gravity would then be similar to the other forces, in that it can be associated with the exchange of a gauge boson at low energies, but is distinct in its more fundamental association with time quantization.

What if the interacting particles have electric charges Q1 and Q2? The bare electrostatic interaction energy is then

      Eoelec = Q1Q2 / r .

Again ignoring corrections of order greater than (Eo/Emax)2, and adopting the original ε(r), the residual interaction energy associated with time quantization is now

      Eint ≈ -[m1m2 + (m1+m2)Eoelec/c2 + (Eoelec)2/c4] G/r .

The three terms in brackets can be interpreted as the gravitational interaction energies between the two masses, between the masses and the electrostatic field, and between the electrostatic field and itself. As in more traditional theories, gravity couples to all relevant sources of energy.

The PCLs generated by the electric (or any non-gravitational) force appear to affect the correlation of the proper times of the two particles only indirectly; otherwise the derived mass-to-mass interaction energy would take different values at a given separation with and without an electric interaction. The energies associated with non-gravitational PCLs apparently engender coincident graviton PCLs, which embody the actual process by which time correlations are established. This may reflect gravity's pivotal role in defining the geometry of timespace, and the observation that gravity does not appear to add any new spatial dimensions.

The origami-like unfolding of space from a milieu of interwoven, correlated events may generally result in a non-Euclidean macroscopic geometry. The effective curvature of conventional timespace would then be a natural consequence of the quantization of proper time intervals and the correlation of time lines by graviton exchange.

An elementary particle's rest mass may derive from a variety of sources. Every particle is effectively surrounded by a cloud of phantom exchange bosons, corresponding to all applicable interactions. But this cannot be the sole source of mass for common particles. For example, if the electron's effective size is the minimum volume that can contain its mass, then the electron's electric field contributes less than 1% to the observed mass value (the magnitude of the negative gravitational self-energy contribution is 43 orders of magnitude smaller). Rest mass may also originate in a particle's underlying geometric character. Wittuu has suggested that elementary particles may not be pointlike, but associated with vibrations of extended (but tiny) geometric forms. The size of such entities should be comparable to the radial parameter rmin computed earlier for a given rest mass.
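The sub-1% estimate can be reproduced under one reading of "the minimum volume that can contain its mass". A sketch, assuming (my interpretation, not stated in the text) that the effective radius is the reduced Compton radius ħ/mc:

```python
import math

# Electric and gravitational self-energy scales of the electron, relative
# to its rest energy, assuming an effective radius equal to the reduced
# Compton radius hbar/(m*c) -- an interpretive assumption.
hbar = 1.0546e-34    # J s
c    = 2.998e8       # m/s
k    = 8.988e9       # Coulomb constant, N m^2 C^-2
G    = 6.674e-11     # m^3 kg^-1 s^-2
m    = 9.109e-31     # electron mass, kg
e    = 1.602e-19     # elementary charge, C

r = hbar / (m * c)               # ~3.9e-13 m
E_rest = m * c**2
E_elec = k * e**2 / r            # electric self-energy scale
E_grav = G * m**2 / r            # gravitational self-energy scale (magnitude)

elec_fraction  = E_elec / E_rest              # ~0.007, i.e. under 1%
orders_smaller = math.log10(E_elec / E_grav)  # ~43 orders of magnitude
```

With this choice of radius the electric fraction comes out to about 0.7% (numerically, the fine-structure constant), and the gravitational self-energy is indeed roughly 43 orders of magnitude smaller, matching both figures quoted above.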

Can quantized multi-timespace be incorporated into a quantum field theory? Time and 3D space have traditionally been treated as continuous system parameters in QFT. Yet past attempts to include gravity in QFT have failed; the theory is not renormalizable for point particles if system timespace is continuous. Even excluding gravity, the standard renormalized version of QFT predicts an enormous vacuum energy density. By restricting the proper time intervals between events to integral multiples of a chronon, maximum energies are naturally limited, and the divergent quantities in QFT calculations might be tamed. As discussed earlier, maximum frequencies associated with phantom processes in a multi-time setting may be further restricted, and inversely proportional to the distance between interacting particles. If space is defined by and only exists with respect to real material particles, then phantom processes that are completely disconnected from real particles might also be restricted. Any new formulation should reflect that timespace is meaningfully defined only with respect to particle world lines and interactions. It may prove necessary to radically reformulate QFT in terms of multiple timespace operators corresponding to individual field quanta, and to treat elementary particles as finite-sized objects.
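The scale of the proposed cutoff can be illustrated. A sketch, assuming (purely for illustration) that the ~ relation is read as an equality, ħωmax = ħc/r, and comparing it with the Coulomb energy at the same separation; the r-dependence cancels in the ratio:

```python
# Cutoff energy hbar*omega_max = hbar*c/r versus the Coulomb energy k*e^2/r,
# taking the text's omega_max ~ c/r as an equality (illustration only).
hbar = 1.0546e-34    # J s
c    = 2.998e8       # m/s
k    = 8.988e9       # Coulomb constant, N m^2 C^-2
e    = 1.602e-19     # elementary charge, C
r    = 1e-10         # separation, m (cancels in the ratio below)

E_cutoff  = hbar * c / r      # maximum phantom-process energy at separation r
E_coulomb = k * e**2 / r      # electrostatic interaction energy
ratio = E_cutoff / E_coulomb  # = hbar*c/(k*e^2), about 137 at any r
```

The cutoff thus sits well above the interaction energy it regulates, at any separation, by a fixed factor of roughly 137 (the inverse fine-structure constant), so low-energy electrostatics is untouched while arbitrarily hard phantom processes are excluded.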

[The approach to quantized time outlined by Fleegello was naive, and flawed in many respects. It does not effectively address relative particle motion, or modifications to symmetry principles and conservation laws, or how an observer can fully integrate the proper times and spatial separations of individual particles with a global timespace coordinate system, and define the total system energy and wavefunction. Fleegello did acknowledge in private correspondence that his approach to quantized timespace was simplistic and incomplete, and certainly did not comprise a testable theory. Yet by quantizing proper time intervals, identifying elementary particles with extended (though minuscule) vibrating geometric forms, and reformulating QFT, physicists were at last able to integrate gravity into quantum theory, and accurately compute the rest masses of elementary particles from first principles, avoiding the infinities that had plagued previous attempts.]

Although the fundamental (microscopic) equations of motion are symmetric in time, physical processes on a macroscopic level do not appear to be time-symmetric. For example, if all the air molecules in a room were clustered in a corner, they would rapidly spread out to fill the entire room; yet the reverse process is not observed to happen. Niestu has proposed that this so-called arrow of time is a purely statistical phenomenon. The universal state witnessed by an observer at a given moment is connected by a single time increment (chronon) to a host of other states. In Niestu's view, the number of less ordered (higher entropy) states corresponding to a "forward" time process is simply much larger than that for a "backward" process. If conscious experience is a random walk from one state to another, a person is much more likely to experience events along the traditional arrow of time. Reverse time steps occur, but are swamped by the sheer number of forward steps. This distinction acquires significance mainly in macroscopic systems, due to the sensitive dependence of the number of states of a given type on the number of particles in a system.
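Niestu's counting argument can be illustrated with a toy model (my sketch, not from the text): N gas molecules start clustered in one half of a room, and at each step one randomly chosen molecule switches sides (the classic Ehrenfest urn scheme). While the gas is still concentrated, spreading steps vastly outnumber reverse steps simply because more molecules sit on the crowded side:

```python
import random

# Toy statistical arrow of time (Ehrenfest urn scheme): all N molecules
# start in the left half of a room; each step, one randomly chosen
# molecule switches sides.
random.seed(1)           # fixed seed so the run is reproducible
N = 1000
left = N                 # all molecules clustered in the left half
toward_eq = 0            # early steps that spread the gas toward equilibrium
for step in range(20000):
    moved_right = random.random() < left / N  # a left-side molecule was picked
    left += -1 if moved_right else 1
    # during the first 1000 steps the left half is still crowded,
    # so a rightward move is a "forward" (entropy-increasing) step
    if step < 1000 and moved_right:
        toward_eq += 1
# left ends near N/2: the gas spreads out and stays spread out,
# even though each individual step is microscopically reversible.
```

In a typical run roughly 70% of the first thousand steps are spreading steps, and the occupation settles near N/2 with only small fluctuations; the reverse evolution, back to the clustered corner, is never observed simply because almost no walk finds it.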

Yet this statistical feature does not in itself guarantee our experience of time. A conscious being that lacked a memory would live in an eternal present, with no sense of time's arrow. Most animal memories in a given universal state are found to be of events in connected states with lower overall entropy. This implies that the creation of memory generally involves a statistically irreversible process. Memories laid down in this direction are normally adaptive, and facilitate survival into an expanding realm of universal states. Memories laid down in the opposite direction could in principle also be adaptive, but only if they overtly present as precognitions, consistent with a person's walk through time.

[Shortly after Fleegello died, the natural philosopher Loh demonstrated that even the observed expansion of the universe could be linked to such statistical considerations, providing a critical link between cosmology, gravitation theory and Shrodiik physics.]

As discussed in section 1.18, any physical universe must have [at least] one basic (self-caused) initial state. This state can imply no previous history; the system would otherwise logically extend to an earlier time. Every physical universe must therefore evolve from an initial state characterized by infinitesimal spatial volume. Our own universe appears to have originally experienced runaway, exponential growth – the primeval hyperburst of modern cosmology – from a minuscule primitive state. Every newborn universe must further incorporate particles or analogous localized objects relative to which distance can be meaningfully defined. It could otherwise not expand (or contract) in any meaningful way.

[Fleegello failed to recognize that, if the experience of the CIF is timeless, a self-contained physical universe may also be cyclic, along a time-like dimension that loops back into itself. Such a system must in its entirety be the cause of itself. His basic argument has nonetheless since been extended to the multiverse of all possible physical worlds, whereby our own universe and its generative hyperburst may have been spawned by a pre-existing, self-caused system.]

The physical interpretation of the state vector |ψ> has a long and tortuous history. Originally it was viewed merely as a device for computing the probability of observing a given outcome in an experiment. Reality was seen to reside in the observed positions and momenta of individual particles. The physical universe was assumed to evolve in a linear manner, with a single unfolding history, which was deterministic in only a limited, probabilistic sense. The act of observation was divorced from the natural evolution of a physical system, and treated as something special, even magical.

Yet consistency logic requires that the universe be totally deterministic. Recently, the contradictions inherent in the original interpretation of Shrodiik mechanics have led the Evette group to develop an alternative multi-world view, in which reality resides in |ψ> itself. The physical panuniverse is viewed as a superposition of conventional quantum worlds, each represented by a restricted state vector. These world systems continually split into new patterns and merge with each other through time. Observers, measuring devices, and measurement processes are included as integral parts of |ψ>. A given observer occupies a particular conventional world at a given instant. As this world subsequently splits, the observer likewise branches into multiple selves, each with a distinct future experience. An observer does not see physical evolution as completely deterministic simply because his mind does not encompass all the worlds of the panuniversal state.

[Unfortunately, only the barest references to the original Evette school survive in the historical record. The writings may have been systematically destroyed by conservative, fundamentalist religious sects that flourished at the time, and found the work heretical. These factions believed the universe progressed in a linear fashion along a single preordained path, in accordance with a divine plan for the octan race. The random, branching character of the multi-world view demanded an even greater, and to many more threatening, decentering to the octan psyche than the recognition five octujopes earlier that Jopitar was not at the physical center of the universe, but was a nonsingular ball of gas orbiting an ordinary star in a minuscule corner of a vast ocean of time and space.]

Let |ψ> now represent the state of the entire physical panuniverse,
and |φi> a complete set of orthonormal states such that

      |ψ> = Σi Ci|φi>.

The choice of specific |φi> is somewhat arbitrary, from a strictly mathematical viewpoint. If the |φi> are chosen to correspond to conventional worlds in the Evette sense, however, then the matrix elements Aij between different worlds i and j must be zero for all observables A, including noncommuting quantities. Such worlds evolve independently, with no mutual interference.

Consider now one such conventional world |φ> that incorporates a subsystem consisting of a simple superposition ( |B1> + |B2>) of orthonormal eigenstates of an observable B. Then

      |φ> = ( |B1> + |B2> ) |E>

where |E> represents the environment of the subsystem. The environment may interact with the subsystem, so as to become correlated with its eigenstates. This happens in particular when the environment includes an observer who measures the value of B. If B commutes with the interaction Hoobitean Hint, the eigenstates of B are not changed by the interaction, and

      |φ> ➜ |B1>|E1> + |B2>|E2> = |φ1> + |φ2>

where |E1> and |E2> are themselves eigenstates of observables that commute with Hint. Consider now the matrix elements A12 and A21 for an arbitrary observable A. If A commutes with B, then A12 = A21 = 0, since the eigenstates of B are orthonormal. If A does not commute with B, then it must act only on the given subsystem (it can be shown that if A were a product of operators that separately act on the subsystem and its environment, then A would not be a valid observable). In this case A does not affect |E>, and A12 = A21 = 0 provided |E1> and |E2> are orthonormal. The states |φ1> and |φ2> can thus be identified as two new conventional worlds, split off from the original |φ>.
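The vanishing of A12 can be verified in a minimal numerical sketch (a two-state subsystem and a two-state environment; the particular vectors and matrices are illustrative choices, not from the text). Even an observable that fails to commute with B has zero matrix element between the two branches once the environment records are orthonormal:

```python
import numpy as np

B1, B2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])  # orthonormal eigenstates of B
E1, E2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])  # orthonormal environment records
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])                  # observable that does not commute with B
A_full = np.kron(A, np.eye(2))              # A acts on the subsystem only

phi1 = np.kron(B1, E1)                      # branch |B1>|E1>
phi2 = np.kron(B2, E2)                      # branch |B2>|E2>

# <phi1| A x I |phi2> factors as <B1|A|B2><E1|E2> = 1 * 0 = 0: no interference
A12 = phi1 @ A_full @ phi2
# before the environment becomes correlated (both branches share |E>),
# the same observable does connect the subsystem states:
A12_uncorrelated = phi1 @ A_full @ np.kron(B2, E1)
```

The cross element is exactly zero once the environment states are orthogonal, and nonzero beforehand, which is precisely why the interaction with the environment is what completes the split into non-interfering conventional worlds.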

[This line of reasoning, which did not originate with Fleegello, helped resolve a problem with the many-world interpretation, involving an apparent ambiguity in the identification of the individual worlds. Some researchers argued that the choice of states |B1> and |B2> in the given example was quite arbitrary. By choosing a rotated basis set, e.g.

      |b1> = (|B1> + |B2>)/√2   and   |b2> = (|B1> − |B2>)/√2 ,

the state |φ> appeared to split into a different set of conventional worlds. Eventually it was realized that the interaction between a system and its environment naturally selects a particular (compatible) basis set. If the operator B does not commute with Hint, then |B1>|E> does not evolve into |B1>|E1>, since |B1> is itself transformed by the interaction.]

Conventional worlds can thus be distinguished by non-interfering "memories" of prior branchings. The storage sites of these data may include, but are by no means limited to, physical brains (and recently, scientific apparatus acting as extensions of the brain). The structure of a brain determines its interactions with the environment, and thus the types of conventional worlds (i.e., which observables are relevant and well-defined) generated by the observation process. If a brain is so constructed that only one value of a particular observable can communicate with (affect) other elements in a conscious field, then a state including a coherent superposition of different values of that observable at the same moment must correspond to distinct unified ideo fields, or selves, in separate (conventional) worlds. The information stored in a brain does not define the external reality of the associated world – a person may make faulty observations – but it may still be a point of reference by which that world is distinguished from others. Two distinct conventional worlds can even merge, if their distinguishing memories are lost or corrupted so as to become identical. Observers inhabiting the worlds would experience no sense of merger, as all valid memories of a former distinct past would be absent.

Physics continues to evolve. Our understanding may yet be profoundly superficial. Will the physical objects and patterns identified so far prove to be unified by a single underlying entity? The multitudinous facets of one magnificent (mathematical) jewel? Or are they disparate, random elements, fragments tied loosely together only by the principle of consistency? Our descendants will hopefully discover the answer to this compelling question.

[During Fleegello's era, physics was rocked by conceptual revolutions every several jopes. Prominent scientists would periodically announce that a "theory of everything" was at hand, or that all that remained in physics was to clean up a few loose ends. These claims were invariably contradicted by new discoveries. Only after many octujopes of struggle was a viable unified theory in fact attained. Even then, physics was hardly dead. The new vision was so rich in possibilities, that its many veins continue to be mined even to this yad. Indeed, quantum physics is no longer considered the most fundamental of the physical sciences, but is viewed instead as the study of emergent phenomena arising from a still deeper level of mathematical reality. Other higher-level sciences (chemistry, biology, psychology, etc.) likewise continue to flourish, as knowledge of basic physical processes is inevitably divorced from an effective understanding of complex emergent reality.]