
THE QUANTUM SPACE

Aldo Piana           

Part  III

The effects of Quantum Space and Relation on the interpretation of physical laws and of the most widely accepted theories: ways to solve conflicts and paradoxes, and a better understanding of discoveries and hypotheses apparently incompatible with our logic and experience.

* * *











Aldo PIANA   -   Corso Monte Grappa n. 13   -    10146  TORINO  (Italy)


* * *

When I proposed the hypothesis of the halo surrounding the particle as the constituent of mass, I cautiously suggested that no particle, including the photon, may be completely massless.

The development of the mass halo theory, which suggests formation and behavior modes at relativistic speeds, now allows us to hazard a hypothesis on the mass of the photon as well. Obviously, given the formation mechanism of the mass halo – the transfer of the particle’s energy onto the surrounding space quanta – a particle moving at luminal speed cannot possess a halo, and the photon’s energy will interact with a single quantum.

If this is the case, it is natural to wonder: can a particle made up of a single space quantum, without a halo, have a mass?

The answer suggested by a series of indications and by the hypotheses derived from the theory of quantum space must be affirmative, although we are dealing with a mass which behaves in a peculiar manner.

There are various clues which support the hypothesized mass of the photon, from the recoil of the electron which emits a photon, to the deflection experienced by the photon as it passes near large masses, up to the "spooky action at a distance" itself. All of these clues lead to the conclusion that the interactions between the photon and other bodies or particles are due to some form of affinity on which Relation can act according to the laws of nondimensional Equivalence.

The affinity which can allow such interactions can only be mass, unless we imagine particle characteristics which have not yet been discovered and whose existence is at present not supported by any indication.

But the property of mass reveals itself and operates in different ways, according to the circumstances; let’s consider, for instance, what we have already said about the different gravitational and inertial behavior.

The fundamental characteristic of the mass halo is the action it exerts in modifying the space geometry. By analogy, we can represent the halo’s action as a sort of "swelling" which exerts a "pressure" on surrounding space quanta, determining their inhomogeneity; in other words, by drawing the volumetric geometry which will influence the particles’ paths through space.

Instead, in the case of particles which move at the speed of light and have no halo, the mass determined by the energy which only involves one space quantum will have no influence whatsoever on space geometry, but the relationships and the interactions with particles which possess a halo will remain unchanged.

* * *

The doubt expressed in the title, which has heavy repercussions on extreme astronomical research and even more catastrophic effects on current cosmology, calls for a detailed and exhaustive explanation.

The Hubble constant – at the same time the direct consequence and the motivation of the Big Bang theory – represents in fact an essential parameter for modern astronomy and for its more advanced cosmological branch.

The Big Bang theory is now almost universally accepted; but along with supporting evidence it also exhibits many obscure and contradictory elements, one of which is precisely Hubble’s constant.

This constant, derived from a law confirmed by all known explosive phenomena, establishes that the speed of the material expelled during an explosion is proportional to the distance from the center of the explosion, and can be calculated using the simple formula that follows, where d is the distance from the center and H is the incremental constant of the speed V :

V = H · d

d = V / H

In the universe, the constant H0 allows us to determine the distances of the most remote objects, which are out of reach for trigonometric parallax or for estimates based on standard candles; even the most recent measurements, using the brightness of type Ia supernovae, cannot manage distances beyond a few tens of millions of light-years. The determination of greater distances is followed by the calculation of the total size and age of the universe, the predictions on its evolution – whether it is closed or open depending on its observed or estimated global mass – and the other assessments which constitute the foundations of cosmology.

Distances are calculated by assessing the red-shift z of distant objects, which gives us their speed of recession V, from which the object’s distance can be derived. The formulae are as follows, where c is the speed of light:

z = Δλ / λ          V = c · z

d = V / H0
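As a purely numerical illustration of these formulae (the value H0 = 70 km/s per megaparsec is assumed here for the example, and the linear relation V = c·z is only an approximation for small red-shifts):

```python
# Toy check of the redshift-to-distance formulae above.
# z = delta_lambda/lambda, V = c*z, d = V/H0.  Units: km/s and Mpc.
c = 299_792.458        # speed of light, km/s
H0 = 70.0              # assumed Hubble constant, km/s per megaparsec

def distance_from_redshift(z, h0=H0):
    """Recession speed and distance for a (small) red-shift z."""
    v = c * z          # V = c * z
    d = v / h0         # d = V / H0, in megaparsec
    return v, d

v, d = distance_from_redshift(0.1)
print(f"z = 0.1 -> V = {v:.0f} km/s, d = {d:.0f} Mpc")
```

A red-shift of 0.1 thus places an object at roughly 430 megaparsec under these assumptions; the point of the chapter is precisely that the constant used in this last division cannot be treated as fixed.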

It must be remembered that these formulae, commonly used by many astronomers (at least, as the values attributed to objects according to their red-shift z suggest) and to a lesser extent by cosmologists, are nonetheless in total disagreement with the models of the universe derived from general Relativity.

According to the Einstein-De Sitter model, for instance, the red-shift is not caused by the Doppler effect due to the galaxies’ speed of recession, but rather by the dilation of space, in which the galaxies take part without moving away with respect to space itself. The stretching of the light wave which gives rise to the red-shift we detect would then be due to the dilation of space. In this case, the emitted light wave would not be red-shifted, as happens with the Doppler effect; rather, it would be stretched according to the dilation of space and the passing of time.

This model, and other analogous, more complex ones, seem able to balance the accounts, but this would require introducing concepts and hypotheses concerning phenomena which have never been observed, and which are thus not verifiable. All the celestial objects we observe – planets, satellites, stars, comets or galaxies – are moving with respect to space in their orbital motions, and we have no way of testing whether this motion is overlaid by a mutual recession motion connected to the dilation of space, rather than by the recession caused by the Big Bang, which is unrelated to space itself.

These theories appear to solve the paradox of a universe grown to abnormal sizes and at speeds greater than that of light during the time elapsed since its formation, but another paradox arises, because in this case a fundamental postulate of Relativity is not respected: the connection between space and time which can only find its motivation in the very motion of bodies with respect to space. The best thing to do then, at least temporarily, will be to stick to the simplest hypothesis, manageable with the formulae reported above, until different origins for the observed red-shift can be verified.

It is however extremely difficult to determine the H0 constant, so much so that its estimated value has kept decreasing, from the original calculation by Hubble, which yielded 500 km/s per megaparsec, down to a value ranging between 50 and 80 km/s per megaparsec (see fig. Hubble 1).

Along with the difficulty in determining the H0 constant, there are other reasons which call for a revision of its applications in astronomical calculations, the most important of which are the limits within which H0 can be considered to be really constant.

In an explosive phenomenon, the incremental coefficient of the speed of expansion (which I will keep indicating with H0 to avoid misunderstandings) is constant with reference to the distance from the center only at a given moment T1. During successive moments of the explosion we can either consider H0 to have a decreasing value, or keep it constant but apply it to an increasing unit of measure, subject to constant dilation (in this case, for instance, the megaparsec should be replaced by its multiples). In effect, what is constant is not the incremental value H0 but P = H0 · ml, its product with the unit of measure ml to which it is referred at a given time Tn (see fig. Hubble 2).

In other words, in the absence of factors determining a negative acceleration (the resistance of the medium into which the material expelled by the explosion expands, or the opposing pull of gravity), the initial speed of every single particle of the expanding material will remain constant, and its relationship to distance will depend on time.
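The point made above, that each fragment’s speed stays constant while the incremental coefficient H decreases with time, can be sketched in a toy calculation (arbitrary units; nothing beyond d = v·t is assumed):

```python
# Toy ballistic expansion: every fragment keeps its initial speed v,
# so at time t it sits at d = v*t, and the "Hubble" ratio H = v/d = 1/t
# is the same for every fragment but decreases as time passes.
def hubble_coefficient(t):
    return 1.0 / t

for t in (1.0, 2.0, 4.0):
    v = 5.0                    # arbitrary constant fragment speed
    d = v * t                  # distance grows linearly with time
    print(f"t={t}: d={d}, H=v/d={v / d}")   # v/d equals 1/t at every t
```

Whatever speed a fragment was launched with, the ratio of speed to distance is 1/t for all of them at once, which is exactly why a single "constant" measured today cannot describe earlier epochs.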

The actual, more substantial reduction of H0 and P is however produced by the gravitational attraction which slows down the expansion without limit, in the case of both a closed and an open universe (see fig. Hubble 3).

The evolution of the Hubble "constant" H0 consequently produces a distortion in our view of the universe comparable to a mirage, as we said in the title. A mirage alters the image because light passes through air layers at different temperatures; H0 distorts "the image" because it acquires different values according to the thickness of the time layers that light has to cross.

The slowing caused by gravitational attraction cannot be calculated with sufficient accuracy, because we do not know the real mass of the universe, but starting from the mass needed to hold galaxies and galaxy clusters together a first approximate estimate is possible.

After that, the Hubble "constant" that will have to be adopted to determine the distance of a far-away object will be greater the farther back in time the light coming from it left. The red-shift the object exhibits follows from the speed it had at that time, and in order to calculate its distance we will have to refer to the "fossil" constant characteristic of that time.

Successive adjustments and approximations will provide us with a general picture more coherent than the one we have today: the universe will certainly appear smaller and older (if the red-shift produced by gravitational lenses and microlenses is confirmed), with a total mass close to, or at least not dramatically smaller than, the critical mass. Great distant astrophysical events will be downgraded as well, as the energy at stake there will turn out to be more similar to that of the events observed in less distant areas, in our galaxy or in the closest clusters.

Adopting in the calculations the "fossil constants" estimated according to the curve of figure Hubble 3, the distances derived from the red-shift will lie on a curve with the same pattern as the one predicted by the Einstein-De Sitter model; and this offers two advantages: we will not need to attribute to the red-shift an origin different from the Doppler effect – which would be hard to verify and has not been verified up to now – and we will not need the participation of space in the motion of expansion, which would produce the negative effects explained above.

(Figures Hubble 1 and 2)

(Figure Hubble 3)

* * *

What consequences does quantum space have for the interpretation of quantum mechanics?

In spite of the elective affinities which the shared adjective seems to suggest, quantum space calls for a partial revision of quantum mechanics, though it does not substantially alter its guiding principles and, above all, it does not question the results already achieved.

The fundamental postulate of quantum mechanics is the Uncertainty Principle, which we never seem able to overcome whenever we try to determine the characteristics of particles: their speed, position, spin, etc.

For this reason, quantum mechanics is forced to make its predictions on a statistical-experimental basis: it is not possible to predict whether an individual electron coming out of a Stern-Gerlach detector will exhibit a right or a left spin (or up or down, according to the detector’s orientation), but there is no doubt that one half of the electrons passing through the detector will exhibit a right spin, and the other half a left spin. Similarly, one cannot predict which route a photon facing two slits will take, but if a number of photons find themselves before the slits, they will come out of the two slits in equal numbers, originating an interference pattern.
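The statistical character of these predictions can be illustrated with a trivial counting simulation (this is not a physical model of spin, only the statistics described above: each individual outcome is unpredictable, yet the halves emerge):

```python
import random

# Each individual detection is unpredictable; only the statistics are not.
def detect_spins(n, seed=12345):
    rng = random.Random(seed)
    right = sum(rng.random() < 0.5 for _ in range(n))  # unpredictable draws
    return right, n - right                            # (right, left) counts

right, left = detect_spins(100_000)
print(right / (right + left))   # close to 0.5: half right, half left
```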

The Copenhagen interpretation then extends the uncertainty concept to the intrinsic characteristics of particles; in other words, it would make no sense to assume that a particle’s characteristics as revealed by measurements were the same before the measurements were made, nor to assume that the particle was in any other definite condition modified by the measurements, since the spin, like all other parameters concerning speed, position and orbital orientation, must be considered uncertain.

Einstein could never accept this interpretation, and he considered quantum mechanics an incomplete theory in which uncertainty was caused by the missing part: his dispute with Bohr concerning the thought experiment known as the EPR paradox (from the authors’ names: Einstein, Podolsky and Rosen) is well known.

The reasoning behind EPR was based on the results of measurements carried out on a particle belonging to a pair emitted in opposite directions; the measurement of the particle’s speed or position would also allow us to know the speed or position of the other particle without intervening on it in any way. This shouldn’t be feasible, according to the postulate of quantum mechanics, which states that all particle characteristics remain uncertain until the measurement is carried out.

In the version of EPR elaborated by David Bohm in an attempt to mediate between Einstein’s and Bohr’s stances, the particles in question are two electrons emitted in opposite directions during the decay of a neutral pion; the pion has zero spin, thus the two electrons will have to exhibit opposite spins. But if the spin is uncertain until it is measured, then when the measurement carried out on one of the two electrons reveals a right spin, some kind of signal will be required between the two particles which induces the other electron to acquire a left spin, and this signal must be instantaneous, no matter what the separation between the two electrons is.

In addition to the "spooky action at a distance", as Einstein himself later called it, Bohm introduced other concepts in the framework of his theory, such as the "quantum potential" and the "hidden variables"; such concepts might be worth reevaluating, although in a somewhat different form.

The attempts to reconcile the classical idea of particles possessing precise characteristics with Bohr’s hazy and indefinite one, or to counter all the objections raised against the latter, were taken on by John Bell, Alain Aspect and others, but nobody has ever found a way to bridge the wide gap between the two opinions, neither on the theoretical nor on the experimental level, where old experimental methods have been refined but no new, original and decisive method has been devised.

Revising the interpretation of quantum mechanics in quantum space, can material particles have certain characteristics, or will these remain uncertain?

The answer to this question must necessarily be ambiguous, or using a more appropriate definition, which is however harder to accept, ambivalent. Let us see how this is possible.

First, in quantum space particles are not seen as "little spheres" with specific characteristics, as classical theories hold, rather they are made up of space quanta (it will be even clearer to define particles as space volumes containing a given number of quanta) on which energy has produced a change of state. The particles originating from this combination, space quanta plus energy, must obey two laws from which nothing can escape: Relation and the nondimensional Equivalence; the former dynamically configures the geometry of space in which the particle moves and regulates its behavior, while the latter establishes the relationship between the individual particle and the particle, or particles, to which it is connected. The condition of a hypothetical isolated particle then must be absolutely uncertain, in accordance with the interpretation of quantum mechanics, so that it can acquire at any time the conditions which interactions with other particles will determine.

But an isolated particle can only exist in our hypotheses, since the nondimensional Equivalence requires that it must always be countered by an element, which might be another particle, an atom or an entire galaxy, able to compensate and absorb the effects of its changes of state. The changes of state of a given particle must be adjusted inside its system, they cannot be absorbed by any other particle, as it would destroy the equilibrium of the system to which this particle belongs.

Moreover, if the adjustment of a particle’s change of state causes the alteration of the system to which it belongs, it will be absorbed by the upper-order system which includes the initial system, in full accordance with the system hierarchy which constitutes the universe’s organization. In short, the more we go up the hierarchical order the closer we get to the realm of chemistry.

The condition of a particle within a material "object", not influenced by contrived external factors, then cannot be defined as uncertain; rather, it is indeterminable, as the particle can acquire any of the permitted states at any time, in response to the geometrical configuration of space it encounters along its path and to the state variations of the particles to which it is connected.

Here Bohm’s hypotheses reappear in disguise: the hidden variables are no longer hidden, because they materialize in the state levels of space quanta; and the "spooky action at a distance" – the interaction of two bodies separated in space, without concern for a detailed mechanism of the propagation of effects between them – identifies itself with Relation and enables instantaneous communication at any distance. It does not seem incompatible with Relativity, as it does not imply the exchange of energy mediated by particles, which must respect the limit of luminal speed.

It might seem strange in such a scenario, but in this way both Einstein’s and Bohr’s hypotheses become entirely compatible.

A further consideration allows us to understand why the formulation of the Copenhagen interpretation, though essentially correct, cannot be followed to the letter.

When measurements are made on a particle, the particle acquires a specific condition determined by the physical stress it has to endure as a result of the measurement’s action. In this way the uncertainty assumption fails, at least as far as the measured characteristic is concerned.

Similarly to what happens as a consequence of measurements, the particle will receive from the other components of the system it belongs to the same kind of pressure it is subject to during measurements, and thus it will acquire a specific condition. But, as we said, this condition is indeterminable, as the number of permitted states will vary according to the dynamic conditions of the space in which it is included.

But what makes quantum mechanics so strange and hard to accept is not really its uncertainty, the indefinable character of its population of ghost-like objects, or the ambiguity and unpredictability of the phenomena in which such objects are involved; rather, it is its sharp contrast with "the world of our senses" (to use Einstein’s definition), a world constituted by "quantum objects" which nevertheless appears certain and predictable.

How does the transition from quantum uncertainty to classical physics determination occur? How can unpredictable and incomparable phenomena participate in determining an environment whose phenomenology is predictable and allows coherent, comparable and reiterable measurements?

Most quantum physicists maintain that it is unnecessary to try to explain uncertainty by resorting to strange hidden variables: probabilistic combinations can contribute to the determinism of macroscopic objects out of mere mutual compensation, without the need for obscure characteristics.

The connection between the two worlds is however indisputable just like the necessary existence of a mechanism that allows it: it does not seem superfluous to try to understand it, although this might exceed our current experimental capabilities.

In previous chapters, we have already dealt with the fundamental laws which regulate phenomena in quantum space, Relation and the nondimensional Equivalence, and we do not lack arguments supporting the explanation of the disappearance of uncertainty in the macroscopic world: however, it will be appropriate to emphasize what happens during the transition from one world to the other, as determined by the above-mentioned laws.

In the world of classical physics, the phenomena we observe always originate from the interaction between two or more systems, i.e. of complex elements with their own internal equilibrium. Conversely, in the quantum world we try to measure the characteristics of individual objects which, according to the nondimensional Equivalence, are related to other objects we cannot detect. Uncertainty originates from the particle’s response to the combination of our measurement’s action and the reaction imposed by the correlated particle, within the limits permitted by the local conditions it experiences.

At a macroscopic level we can find a residual trace of this combination of effects in the transition from reductionist behavior to the emergence of complex systems, and in the small probabilities, irrelevant in practice, which we still encounter in the determinism of the "world of our senses".

To understand the peculiarities of quantum mechanics, one must also keep in mind the enormous dimensional disparity between our detecting means and the objects of our measurements, which we are unable to adjust so as to adequately make up for the shortcomings of our methodology.

Imagine using intercontinental missiles to kill a microbe: that may give an idea of the disparity between the size and power of our instruments and the particles we are examining. For instance, in the two-slit experiment, what for us are two slender clefts in a thin wall is "seen" by the photon as two tunnels with bumpy walls (the material’s crystal structure), inside which we do not know how it moves or which effects this has on its outward trajectory.

* * *

The study of the rotation of galaxies, galaxy clusters and superclusters, has highlighted a behavior which is in sharp contrast with the laws of gravitation.

The laws of gravitation, tested and confirmed through the observation of the Solar system, state that within a planetary system, bodies will revolve around their center of gravity with linear speeds decreasing as the distance, or orbital radius, increases.

Analogously, this rule should apply to galactic systems and clusters as well.

The spectral analysis of the Doppler shifts of galactic rotation instead reveals a nearly uniform linear speed. But the morphology of spiral galaxies cannot be justified by a constant linear speed at every distance from the center either: the stars in spirals must rotate with constant angular velocity, and therefore with linear velocities growing with distance, because otherwise their spiral arrangement would disappear after only one rotation, due to the different orbital lengths.
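The conflict described above can be made concrete with the standard Keplerian formula (a sketch in normalized units; GM = 1 is an arbitrary choice):

```python
import math

# Keplerian orbits around a dominant central mass: v(r) = sqrt(G*M/r),
# so quadrupling the orbital radius should halve the orbital speed.
def keplerian_speed(r, gm=1.0):
    return math.sqrt(gm / r)

print(keplerian_speed(1.0) / keplerian_speed(4.0))  # 2.0
# Observed galactic rotation curves instead stay nearly flat with radius,
# which is the contrast with the laws of gravitation described above.
```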

Because of the high linear rotational speed of peripheral bodies, the galaxy’s mass, as estimated on the basis of the visible matter, cannot ensure the system’s stability, and its stars would end up drifting away into space after a cosmologically very brief time.

Since this is not happening, and the ages galaxies exhibit can confirm their long-term stability, scientists conclude that the real galactic mass should be much larger than that of visible matter, which would only constitute about 10%, or even less according to some, of the total mass.

All of these considerations have produced the concept of DARK MATTER, whose mass would provide for the systems’ stability. But in spite of all the hypotheses put forward in this respect (brown dwarf stars, the mass of neutrinos, very massive exotic particles, non-baryonic matter, etc.) and in spite of frantic research, no hint has been found to confirm its existence and its amount, other than the above-mentioned gravitational effects.

Dark matter is thought to be distributed in a spheroidal halo which should englobe the whole galaxy; the halo’s rotation could, in this case, justify the stars’ rotational speeds, and its mass could guarantee galactic stability.

But the total estimated mass needed for the galaxy’s stability, visible matter plus dark matter, still would not be sufficient in order to ensure the stability of clusters; therefore other much bigger haloes should englobe the clusters’ galaxies. Similarly, superclusters should also possess their own halo to ensure their cohesion.

Some clusters have recently been found to be englobed within huge molecular clouds at extremely high temperatures, whose mass, though greater than that of all the embedded masses, is however only about 20% of what the system’s stability would require.

In such conditions it is very hard to hypothesize the interactions which could allow all these dark matter haloes to exist side by side without self-destructing, considering that the enormous gravitational pull generated by the higher-order halo – necessary to ensure the cohesion of the group it belongs to – might easily lead to the collapse or merging of the lower-order ones. This halo overlapping, which may even continue up to the level of a halo englobing the whole universe, would furthermore reduce the percentage of visible matter to less than 1% of the total, with cosmological consequences incompatible with observations and with the currently most reliable hypotheses.

Quantum space however offers a way out to explain the morphology of galaxies, clusters, superclusters and increasingly larger structures.

The hypothesis it suggests is partly similar to the one put forward to account for the formation of planetary disks, generated by the rotation of state levels induced by the irregular distribution of the young star’s mass in the forming planetary system.

If we consider the galactic system of a spiral galaxy, whose structural arrangement can be identified most clearly, the stars lying on a disk revolving around the nucleus will cause the flow of the state levels near the outer stars, with constant angular speed. This dragging of state levels translates into a small, constant acceleration which will continue throughout the galaxy’s life; at first, over an extremely long period of time – hundreds of millions of years – this acceleration makes the rotational angular speed uniform all through the disk, and once this has been achieved, it will ensure the curvature of space geometry that keeps the stars in orbit, instead of letting them drift away as their velocities and the resulting centrifugal force would suggest.

Figure 26 clearly traces the galaxy’s evolution stages.

(Figure 26: evolution of a spiral galaxy)

During the first stage of galactic formation, the disk material – gas, dust, newly-born or forming stars – revolves around the galactic center with linear speeds decreasing towards the outskirts, in accordance with the laws of gravitation.

But the rotation of the material placed in each of the rings A, B, C, D – into which the disk has been arbitrarily divided – induces the synchronous dragging of the gravitational field, represented by the state levels, of the rings external to it.

The flow of state levels accelerates the outermost material until the angular velocity becomes uniform. This dragging of the gravitational field is very small, but it becomes relevant because of the large amount of mass contained in the disk and because of the extremely long application times. In a planetary system, field dragging cannot be detected, because the disk contains too little mass, because any effect would require too much time to become evident (much longer than the supposed lifetime of a system), or because of gravitational interference.

The spiral arrangement of the arms provides a first indication of the difference between the rotational speed during the first stage of the galaxy’s life and the successive acceleration of the outer stars. The spiral forms because the stars closest to the center move more rapidly but the gravito-magnetic effect gradually accelerates the outer stars, thus making the disk’s angular velocity uniform.

The spirals’ angular development can provide an indication of the time required for the disk’s acceleration compared to the time needed for a complete rotation.

The hypothesis can be tested with a relatively simple computer simulation, which can determine whether the field dragging is sufficient to provide the acceleration necessary both to increase the disk’s velocity and make it uniform, and subsequently to keep the external material in stable orbits. If this is confirmed, which is very likely, dark matter will no longer be essential to account for the stability of galactic systems and clusters. Its percentage could then drop, even drastically, compared to visible matter, without upsetting the universe’s structure.
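As a very crude sketch of the kind of simulation proposed here, the following toy model starts four rings at Keplerian angular speeds and applies a small dragging term at each step; the coupling constant k is an arbitrary assumption chosen for illustration, not a quantity derived from the theory:

```python
# Rings start with Keplerian angular speeds (omega ~ r**-1.5); a small
# "dragging" term nudges each ring toward its inner neighbour's speed.
# The coupling constant k is an arbitrary assumption, not a derived law.
def run(steps=20_000, k=0.001):
    omega = [r ** -1.5 for r in (1.0, 2.0, 3.0, 4.0)]
    for _ in range(steps):
        for i in range(1, len(omega)):
            omega[i] += k * (omega[i - 1] - omega[i])  # inner drags outer
    return omega

final = run()
print(max(final) - min(final))   # tiny compared to the initial spread of
                                 # 0.875: the angular speed has uniformed
```

The toy model only shows that an arbitrarily small, persistent coupling eventually makes the angular velocity uniform; whether the real state-level dragging supplies a coupling of the required size is exactly what the proposed simulation would have to establish.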

* * *

Not only science-fiction writers, but also committed scientists in their most daring speculations, talk of travels through time, of travels at hyperluminal speeds through wormholes, of instantaneous shifting between the dimensions of a multi-dimensional universe.

Obviously leaving science-fiction novels aside, these hypotheses – widely discussed and emphasized by the media, and not just by generalist ones – are never dealt with in sufficient detail, not even by their proposers. Their interpretation is left to the arbitrary imagination of information consumers, and, above all, the consequences such motions would have on these "timenauts" seem to remain overlooked.

Which elements provide the base for these extremely tantalizing hypotheses? And which strategies should we devise in order to put them into practice, provided they can be put into practice?

To answer these questions, we will have, first of all, to try and fully understand what time is, how it works, how it is connected to space, and which distortions, especially conceptual ones, can derive from such a connection.

Time is a very strange entity, which can take the most diverse forms according to the circumstances. Dealing with Relation, I have already pointed out that time does not exist in quantum space. Time only arises when space contains a particle traveling through it or, more precisely, a discrete amount of energy which generates a particle along the way as it moves among the constituent quanta of space. Time belongs to the particle, and retains a meaning only in relation to it.

But what does it actually mean?

Space imposes a certain delay to the energy traveling through it. The particle in motion (particle and energy mean the same thing) reveals itself temporarily on a space quantum, and then moves on along its path. In the successive transitions from one quantum to the next the delay we identify as time is created. The path between space quanta cannot be traveled more rapidly than at the speed of light.

This characteristic allows us to define a universal absolute time, which no clock has so far been able to measure. The elementary unit of measure of universal time is the delay experienced by a photon as it moves from one space quantum to the next. The luminal speed limit also produces a set of amazing effects connected to the variability of relative times.
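The text does not say how large a space quantum is. If, purely as an illustrative assumption of mine, we identify it with the Planck length, the elementary unit of universal time would be the delay of a photon crossing one quantum, which works out to the Planck time:

```python
# Illustrative sketch only: the elementary "tick" of the universal time,
# ASSUMING the space quantum has the size of the Planck length
# (an identification the text itself does not make).

C = 299_792_458.0          # speed of light, m/s
PLANCK_LENGTH = 1.616e-35  # m (approximate CODATA value)

def elementary_delay(quantum_size: float = PLANCK_LENGTH) -> float:
    """Delay of a photon hopping across one space quantum, in seconds."""
    return quantum_size / C

tick = elementary_delay()
# on the order of 5.4e-44 s, i.e. the Planck time
```

Under that assumption, roughly 1.8 × 10⁴³ such ticks would elapse in a single second, which makes clear why no clock could resolve the elementary delay directly.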

Let us see how relative times arise, what they mean, and how they are related to the universal time.

Time, as we know it, is the parameter by which we compare the durations of different phenomena and events: the year is the Earth's period of revolution around the Sun expressed in days; the day is the time the Earth takes to complete one rotation (plus about one degree); the hour is the 24th part of the day; the second is the 3,600th part of an hour. The duration of any physical phenomenon we observe, or of any event which concerns us, can be compared with that of any other when we relate it to one of the multiple or submultiple units of measure derived from the Earth's rotation. The clocks we employ as measuring devices likewise rely on the regular repetition of a physical phenomenon, or on a related mechanical movement; this instrument has become so important to us that we often identify time with the clock itself.

Our time is strictly related to the duration of the physical and astronomical phenomena which affect us most directly, but its validity is limited to our planet: the second, the hour and the day would have very different lengths for inhabitants of Mars, Venus or Jupiter. However, since our technology has allowed us to study our satellite and the neighboring planets, we have been able to measure or compute their local times relative to that of the Earth.

Neil Armstrong used terrestrial clocks when he was on the Moon, and similar clocks will be employed on Mars when we get there. Yet in every inertial system, time flows at a different rate according to the velocity at which the system moves through quantum space. Moreover, even within a single system, time is not constant, and can be characterized by cycles due to the orbital shifts of the systems or groups it belongs to.

If we consider the Earth-Moon system, for instance, we notice that the Moon, orbiting the Earth in a nearly perfect circle and therefore revolving around the Sun along a cycloidal path, moves through space at a variable speed which alternately adds to and subtracts from the speed of the Earth's revolution around the Sun. The resulting differences in the speeds of Earth and Moon relative to space can determine a cyclic shift of our satellite's time with respect to the Earth's, even though the length of each cycle remains identical for both bodies, corresponding to the Moon's period of rotation. Figure 27 shows the possible Earth-Moon time shift.

[Figure 27: the possible Earth-Moon time shift]

Any talk of cyclic time stretching in orbital systems must remain conditional, since the rate at which time flows, in other words the evolutionary duration of the phenomena from which it derives, does not vary linearly with the speed at which systems move through space.

The duration of the same event or phenomenon relative to absolute time varies according to the following transformation:

T1 = T / √(1 − v²/c²)

where T1 represents the relative time measured on an inertial system moving through space at speed v, T represents the universal time, and c, obviously, the speed of light.

As already pointed out, the greater duration of relative times in accelerated systems with respect to the universal time is caused by the impossibility of exceeding the speed of light. If the relative speeds of a system's components are already close to that of light, accelerating the system would add to these internal motions, and the limit speed would be exceeded. When such conditions arise, the system's acceleration forces its internal components to slow down within the permitted limits. This stretches the durations of phenomena, which translates into a local time delay with respect to absolute time. When a system is accelerated up to the speed of light, local time stops: all processes must cease, since no internal movement is possible, as any speed the components acquired would have to be added to the speed of light.

But if the system's components do not exhibit relative speeds close to that of light, the system can be accelerated until its internal subsystems move at luminal speed, without producing any time stretching. Very small differences between the mutual velocities of a system's components and c are enough to prevent time stretching in accelerated bodies: a difference of 0.1% of c already allows an acceleration of 300 km/s with no time increase.
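For comparison with these figures, the stretch factor implied by the transformation T1 = T/√(1 − v²/c²) given earlier can be evaluated numerically; this sketch (ordinary arithmetic, not code from the text) shows that at 300 km/s, 0.1% of c, the factor differs from 1 by only about five parts in ten million:

```python
import math

C = 299_792.458  # speed of light, km/s

def stretch_factor(v_kms: float) -> float:
    """Relative-time stretch T1/T for a system moving through space at v_kms,
    using T1 = T / sqrt(1 - v^2/c^2)."""
    beta = v_kms / C
    return 1.0 / math.sqrt(1.0 - beta * beta)

excess = stretch_factor(300.0) - 1.0
# about 5e-7: utterly negligible at 0.1% of c
```

The same function makes the steep non-linearity evident: the excess over 1 is still below 1% at 40,000 km/s, but diverges as v approaches c.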

We cannot rule out a priori the hypothesis that a system's acceleration, in the lower velocity range, would stretch by different amounts the times of subsystems not directly related to one another, according to the differences in their constituents' velocities. In other words, a dissimilarity in the delay affecting individual processes might occur: two different events with equal initial durations would appear to have different durations after the acceleration, though the difference would in any case be extremely small.

The time stretch on an orbiting body, with respect to the time measured on the central body of its system, therefore reveals itself, if not for the individual subsystems then as an average, and only when the limit conditions are exceeded. Besides the interval with no time shift due to the speeds of the system's constituents, the orientation of the orbit relative to the direction of motion through space can further reduce time stretching. The Moon's path relative to space can take a spiraling rather than a cycloidal pattern, or a combination of the two. The Earth-Moon cyclic time variation might then never take place, or it could range from the maximum value of K = ±1.000003407 down to values too small to be detected.
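The quoted maximum K = 1.000003407 can be inverted under the same transformation to see what speed through space it would correspond to; the result, roughly 780 km/s, is far larger than the orbital speeds of the Earth-Moon system, so the figure presumably folds in the system's overall motion through space (a reading I am inferring, not one the text states):

```python
import math

C = 299_792.458  # speed of light, km/s

def speed_for_stretch(k: float) -> float:
    """Invert K = 1 / sqrt(1 - v^2/c^2) to recover the speed v in km/s."""
    return C * math.sqrt(1.0 - 1.0 / (k * k))

v = speed_for_stretch(1.000003407)
# roughly 780 km/s
```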

When the neutral time stretching interval is exceeded, the durations of two analogous events occurring on two systems moving in space at different speeds (it’s appropriate to point out that we are always talking of speeds referred to space) will differ from the universal absolute time, and thus from each other, but on both systems all kinds of clocks will indicate the same duration. In the case of relativistic speeds, differences become more evident.

The peculiarities of time create a series of curious effects: the twin paradox or the different impressions between the passengers in a train traveling at relativistic speed and the people standing on a station’s platform are only some of the examples used to better explain the implications of the time influence. Many other counterintuitive situations determined by the variability of relative times in moving systems can be presented.

Two space ships or two celestial bodies moving through space at equal speeds will have equal local times, whatever their mutual speed and direction might be; their mutual speed may hypothetically reach the upper limit of 2c, if they move in diametrically opposite directions. But if the two space ships move through space at different speeds, their local times, referred to the universal time, will be different regardless of their mutual speed and direction; in this case the mutual speed must remain below 2c, since at 2c no difference between their speeds through space could exist.

If we could visit two systems with different local times, we would notice that each physical, chemical or biochemical reaction exhibits a different duration, but these durations would still be proportional to one another within the individual system. Local clocks would indicate the same time on both systems, which would however differ from the above-defined universal time.

We must bear in mind that time differences between two inertial systems can only be assessed by referring each system’s time to the universal time, which constitutes the connection with quantum space.

Other apparently paradoxical instances.

If inhabitants of far-away planets were observing us through telescopes of unlimited power, they would not see us, but episodes from our history, varying according to the distance that separates them from us. From a planet 2,050 light-years away, observers would see Julius Caesar returning from Gaul; from a distance of 4,620 light-years they would witness the construction of the pyramid of Khufu (Cheops); and from 65 million light-years away they could see the fall of the asteroid which caused the extinction of the dinosaurs. A mirror placed on a planet 500 light-years away would show us our ancestors who lived 1,000 years ago. If we could move away from Earth faster than light, we would observe scenes ever further back in our past as the distance increased. But once back on Earth, we would find ourselves cast into the future, according to the duration of our trip as measured on Earth.
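All of these lookback examples follow from one rule: light arriving from a distance of d light-years shows events d years old, and a mirror doubles the delay. A minimal sketch (the function names are mine):

```python
def observed_year(distance_ly: float, present_year: int = 2000) -> int:
    """Year whose events an observer at the given distance sees now.
    Negative results denote years BC."""
    return present_year - int(distance_ly)

def mirror_year(mirror_distance_ly: float, present_year: int = 2000) -> int:
    """Year we would see reflected by a mirror at the given distance:
    light travels out and back, so the delay is doubled."""
    return present_year - int(2 * mirror_distance_ly)

# From 2,050 light-years away: Rome around 50 BC, Caesar's era.
# A mirror 500 light-years away: Earth as it was 1,000 years earlier.
```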

Now, considering the emphasis placed on the hypotheses of possible time travel, it will first of all be appropriate to determine whether the arrow of time is reversible, and to what extent time-traveling hypotheses can be realistic.

Considering the nature of time as it has been examined, from its origin to its effects in quantum space, the arrow of time is absolutely irreversible.

The overwhelming majority of phenomena and events are characterized by irreversible evolution, but even in the case of reversible phenomena apparently unrelated to time, like the behavior of elementary particles, the universal absolute time cannot undergo an inversion; any different interpretation stems from misunderstandings or from the use of inappropriate terminology.

The principle known by the acronym CPT (Charge conjugation, Parity, Time reversal) hypothesizes that the transformation of two particles a and b into two other particles c and d can take place equally well in either direction of T. Apart from the recently reported results of experiments carried out at Fermilab and at CERN, which have shown that the transformation of the neutral antikaon into a kaon is slightly more likely than the kaon-to-antikaon transformation, violating the principle with respect to T, this indifference to time is only theoretical. Moreover, the detected difference in the "virtual time" T could also be due to distortions caused by measurements made on an inertial system which moves with respect to space.

In any case, if the transformation from a and b to c and d requires a time t1, the transition from c and d back to a and b will require a time t2 which will not annul t1, but will add to it; the whole process, from a and b back to a and b through c and d, will require a total time t1 + t2.

It follows that even reversible processes cannot reverse the arrow of time.

In other words, time stems from the delay line represented by space traveled by energy, or equivalent particles, and any delay line, even when traveled backwards, will never be able to annul time previously accumulated, but will always give rise to a further delay.

On what indications, then, do the hypotheses of time travel or of hyperluminal speeds rely?

The example given for the CPT principle is relevant here: the connection to time of a phenomenon which seems unrelated to it can be misunderstood, by laymen and not only by them, as a possibility that the direction of time's flow might be reversed.

The mathematical description of physical phenomena can produce misconceptions and foster unrealistic hypotheses which ignore the arrow of time: the signs in equations can be inverted without altering the absolute value of the results. The results themselves are not disputable, but their consistency with reality must be assessed against the outcomes of experiments or, in the case of extreme extrapolations, against the evidence provided by many converging hints.

However, the features we have attributed to time and its manifestations suggest that the possibility of traveling through time might exist, at least in theory, but only in the direction of the arrow of time, when and if our technology ever allows us to achieve relativistic speeds.

We cannot go back to the past, not even to observe it from the outside, since that would require us to exceed the speed of light, but we could travel into the future moving at speeds even slightly lower than that of light.

If we came back to Earth after traveling for 20 years at about 299,130 km/s (99.78% of the speed of light), for us only 20 years would have gone by, but on Earth 300 years would have elapsed (a little less, actually, since the Earth too travels through space and its time is slightly stretched with respect to the universal time).
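The arithmetic of such a trip follows from the transformation T1 = T/√(1 − v²/c²) given earlier; the 299,130 km/s figure is simply the speed that yields a stretch factor of about 15, so that 20 years on board correspond to roughly three centuries on Earth:

```python
import math

C = 299_792.458  # speed of light, km/s

def earth_years(trip_years: float, v_kms: float) -> float:
    """Years elapsed on Earth while trip_years pass on board a ship
    moving through space at v_kms, via T1 = T / sqrt(1 - v^2/c^2)."""
    beta = v_kms / C
    return trip_years / math.sqrt(1.0 - beta * beta)

elapsed = earth_years(20.0, 299_130.0)
# roughly 300 years pass on Earth
```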

But at this point, do we realize what could happen to us? Leaving aside the unpredictable consequences for our remote descendants, we would be like a man of the late 1600s thrown into the year 2000. Even if he could survive the stress caused by the new living conditions, far more complex than those of his time, he would very likely be run over by a car the first time he attempted to cross a street.

Our fate wouldn't be very different.

As for hyperluminal motion in quantum space, it appears to be absolutely impossible, not only because the energy that makes up matter cannot by any means annul or reduce the delay it encounters as it moves from one space quantum to the next, but also because of the huge amount of energy necessary for the acceleration, and the need for a sufficiently large supporting mass. Wormholes, which should allow hyperluminal motion, stem from mathematical extrapolations unsupported by any observational indication.

My considerations still have to be verified, but they seem to be the most reliable at the moment, since they can best fit our current knowledge.

What would really be relevant to us is not the ability to take trips between past and future, but rather the possibility to see the present; given the characteristics of light, we do not see reality, but only ghosts. This is not very relevant in our world, at least in normal life - it's not important if a cruising airliner is seen a few millimeters behind its actual position (about 8 mm for a typical plane and nearly 35 mm for a Concorde) - but if we look just outside of our planet, the surprising effects of time distortion become evident.

We see our Sun where it was 8 minutes earlier, that is about 15,000 km behind its actual position. The nearest star is seen where it was more than 4 years ago, and the objects at the edge of the universe are observed in the positions and conditions they exhibited billions of years ago.
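These apparent displacements all follow from the same rule: lag = (light travel time) × (speed of the object relative to the observer). A sketch checking the figures quoted above; the aircraft viewing distance and speed are my own assumptions, since the text does not state them:

```python
C = 299_792_458.0  # speed of light, m/s

def apparent_lag(distance_m: float, speed_ms: float) -> float:
    """How far behind its actual position a moving object appears, in metres:
    the distance it covers while its light is in transit to the observer."""
    return (distance_m / C) * speed_ms

# Sun, 8 light-minutes away, moving ~30 km/s relative to the Earth:
sun_lag_km = apparent_lag(8 * 60 * C, 30_000.0) / 1000.0  # ~14,400 km
# Airliner at ~250 m/s seen from ~10 km away: a lag of ~8 mm
plane_lag_mm = apparent_lag(10_000.0, 250.0) * 1000.0
```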

Will we ever be able to see the universe as it really is?

Quantum space offers us some very feeble hope: Relation, the invariance of scale and space geometry can in theory allow the observation of the universe's image in real time, but reading and interpreting the geometrical configuration of the space that contains that image will be a great challenge.


Much emphasis has been placed on the role of the geometrical configuration of space in the dynamic organization of systems, from the smallest up to the largest, all extremely complex, which make up the structure of the universe; yet despite the many remarks made about its effects, the descriptions proposed so far need to be dealt with in greater detail.

How can we depict space geometry so that our imagination may be able to configure it in its entirety and complexity?

Figure 28A shows some isolated soap bubbles, while in 28B the bubbles are in contact with one another.

[Figure 28: soap bubbles, isolated (A) and in contact with one another (B)]

First of all, let us consider one isolated bubble. It is a perfectly spherical equipotential surface: the set of points of equilibrium between the atmospheric pressure and the slightly higher pressure of the internal air, contained by the cohesion of the soapy solution's molecules.

As a first approximation, the soap bubble can represent a surface of homogeneous state level surrounding a particle; in other words, a set of points where the field has a constant value.

As the bubbles cluster, the spherical surfaces deform and join together, acquiring a configuration which keeps the equipotential value of the internal pressure constant, no longer with respect to atmospheric pressure alone, but also with respect to an external environment consisting of a structure of bubbles in direct or indirect contact.

Similarly, when the virtual sphere with constant field value surrounding a particle or a body gets in contact with similar spheres of the field generated by other particles, it deforms and acquires a state value, which is the sum of those produced by all of the group’s particles.

The multiform field generated by an isolated particle, however, cannot be likened to an individual bubble. It is more appropriately represented by an unlimited set of concentric spherical surfaces, each with a radius one space quantum larger than the previous one, and with a constant total state level at every distance; the state level per unit of surface is proportional to the inverse of the square of the radius.

Figure 29 shows two particles (or two bodies) p1 and p2, each originating a field. As mentioned above, the field of each isolated particle is made up of a set of concentric spheres, sketched as circles in the particle's color. The state level on each sphere's surface is constant everywhere.

[Figure 29: the fields of two particles as sets of concentric state-level spheres]

However, when the field of one particle interacts with that of another particle, the shape of these surfaces with constant state levels changes greatly: every point in the global field, in other words every space quantum, will acquire a state level equal to the sum of the values induced by each particle.

In the picture, the value of the state level of the points in the total field, taken as an example and indicated with Greek letters, becomes the sum of the values existing on the virtual spheres which intersect in that particular point.

Considering the gravitational field of two particles of masses Mp1 and Mp2, the state level La at point a (and analogously at all the other highlighted points) will be:

La = K (Mp1 / r1a² + Mp2 / r2a²)

and in the case of n particles:

L = K (Mp1 / r1² + Mp2 / r2² + ... + Mpn / rn²)

where each r is the distance between the point considered and the corresponding particle, and K is a constant of proportionality.
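Whatever constant of proportionality one adopts, the total state level at a point is a plain sum of inverse-square contributions, one per particle, following the dependence stated earlier for the field of an isolated particle. A minimal numerical sketch; the function name, the units and the use of Newton's G as the constant are my illustrative assumptions:

```python
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2 (used as the constant K)

def state_level(point, particles):
    """Total state level at a point: the sum of the inverse-square
    contributions of every (mass, position) pair in the field."""
    total = 0.0
    for mass, pos in particles:
        r2 = sum((a - b) ** 2 for a, b in zip(point, pos))  # squared distance
        total += G * mass / r2
    return total

# Two equal masses 10 m apart: at the midpoint each contributes equally.
pair = [(1.0e6, (0.0, 0.0)), (1.0e6, (10.0, 0.0))]
mid_level = state_level((5.0, 0.0), pair)
```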

Fig. 30 shows the deformation induced on a reference sphere placed near a group of particles, as seen from two different points of view.

[Figure 30: the deformation induced on a reference sphere placed near a group of particles, from two points of view]

To make things simpler, we have considered only the gravitational field, but the total field is obtained from the sum of all fields: the electrical, magnetic and gravito-magnetic fields induced by the dynamics of the objects involved, each of which contributes its positive or negative value and exerts its selective influence on the particles involved in the total field.

The geometry which arises from such a combination is incredibly complex, and it is, moreover, constantly changing, at a speed almost equal to that of light.

The universe's systemic structure, however, privileges the internal geometry of every system. Although systems exhibit a trace of the deformations induced by interacting external fields (a trace made very weak by distances that are enormous in proportion to the values at stake), they are affected only in the long run by distortions due to their own evolution and to external influences.

If we now make a great effort and manage to conceive the geometrical configuration of space in its entirety, we can grasp the origin of the apparent randomness which alters the evolution of phenomena in a nearly undetectable way.

But while the influence of space geometry on macrosystems is diluted by the long times over which it acts, its action becomes more and more detectable as the system's size decreases. At the atomic and particle levels, where the distances over which the geometric layout unfolds have extremely small absolute values, the influence of the spatial distribution and of the state-level dynamics becomes so strong, over such small scale times, that quantum uncertainty arises.

There is no longer any discontinuity, nor any need for a mediation between the world of elementary particles and the "world of our senses". Uncertainty is universally pervasive, and is determined by our inability to know in detail the evolution of the space-geometry configuration; it can be overlooked when the scale times of its influence are greater than the duration of the phenomena we are predicting and measuring, and it becomes overwhelming when the duration and size of the phenomena are smaller than the times and ranges of action of space geometry.


In the first version of his General Relativity, Einstein introduced into his equations a "cosmological coefficient" so that they would yield the model of a static universe. This coefficient represented a gravitational force with negative sign, in other words a repulsive force which opposed gravity. He subsequently revised his hypothesis and eliminated the cosmological coefficient, calling it the greatest mistake of his scientific life.

But recently, anomalies have been observed in the response of bodies to gravitational attraction which seem to re-propose the existence of a fifth force acting as negative gravity, in many ways similar to what the cosmological coefficient predicted. Scientists have discovered galaxy clusters whose expansion does not seem to be slowed down by gravitational attraction; in some cases the expansion even appears to be accelerating. This phenomenon was discovered through a sophisticated analysis of the sizes and red-shifts of some double-lobed radio galaxies by the team led by Ruth A. Daly and Erick Guerra of Princeton University, and is apparently confirmed by the results obtained by two more research teams, the High-z Supernova Search Team led by the Australian Brian Schmidt and the Supernova Cosmology Project of the American Saul Perlmutter, both of which use type Ia supernovae as standard candles.

Unlike the cosmological coefficient, the action of this new force wouldn’t however be constant all over the universe, but would reveal itself locally in the form of a space-time anomaly, probably with limited duration, produced by unknown causes and in unknown ways.

These observations are accompanied by contradictory detections. The spacecraft Pioneer 10 and Pioneer 11 slow down more rapidly than expected as they travel toward the outskirts of the Solar System. The detected deceleration is extremely small but nevertheless unaccounted for, and does not seem due to systematic errors; it is moreover confirmed by the similar behavior of the probes Ulysses and Galileo, which exhibit even greater decelerations.

All of these indications suggest that the influence of space geometry on the motion of bodies is far more complex. The gravito-magnetic action, the dragging of state levels performed by stars, galaxies and clusters, can actually account for both the acceleration and the deceleration of systems moving within their fields of action; fields which are not limited by distance, but only by scale times of variable size, and by interactive effects which can either enhance or minimize their influence.

The motion of more distant cosmic objects, however, can also be influenced by a factor which, as far as I know, has never been taken into account: the rotation of the universe.

All cosmic objects and systems known to us rotate around their own axis. Therefore it would be strange, at the least, if the universe as a whole did not perform such a rotation.

In a rotating universe, space geometry will induce a set of effects similar to those expected for planetary systems, galaxies and clusters, produced by the overall rotation of state levels; these effects can account for all the observed patterns of local acceleration and deceleration, in accordance with the new interpretation of the cosmological coefficient that stems from the observations mentioned above.

Another effect capable of altering our vision of the universe derives from the distortion that the universe's rotation can produce on the red-shift.

The linear speed of rotating systems, which grows toward the edge of the cosmos because of the tendency of the angular speed to become uniform, will cause the red-shift to increase, according to the formula:

λ1 = λ / √(1 − v²/c²)

where the wave-length λ1 of the emitted light is stretched according to the rotational speed v, with respect to the wave-length λ of light emitted in the same conditions by a motionless body.
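A numerical sketch of this stretch, reading the wavelength as lengthened by the same factor that stretches time, λ1 = λ/√(1 − v²/c²); this is my reconstruction of the missing formula, kept consistent with the time transformation used throughout the text:

```python
import math

C = 299_792.458  # speed of light, km/s

def stretched_wavelength(lam: float, v_kms: float) -> float:
    """Wavelength emitted by a source dragged at linear speed v_kms by the
    universe's rotation, relative to an identical motionless source."""
    beta = v_kms / C
    return lam / math.sqrt(1.0 - beta * beta)

# At half the speed of light the wavelength is stretched by about 15%.
z = stretched_wavelength(1.0, C / 2.0) - 1.0
```

Any such contribution would be read by an observer as extra red-shift, and hence as extra distance or expansion speed, which is the distortion the text warns about.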

The rotation of the universe, which we are currently unable to verify (much less can we assess the angular and linear speeds at which it drags systems), would have dramatic consequences for cosmology as a whole, modifying our evaluations of distances, of expansion speeds and of the intensities of phenomena.

Since direct detections are not possible, caution requires us to look for alternative hypotheses, supported by computer simulations, which could provide reliable indications referred to the known motions of galactic systems.
