
THE QUANTUM SPACE

Aldo Piana           

Part  II

The effects of Quantum Space and Relation on the interpretation of physical laws and of the most widely accepted theories: ways to solve conflicts and paradoxes, and a better understanding of discoveries and hypotheses apparently incompatible with our logic and experience.













Aldo PIANA   -   Corso Monte Grappa n. 13   -    10146  TORINO  (Italy)



The paradoxes discussed below are mainly caused by the difficulty of correctly interpreting and correlating news and information about astronomical discoveries. Although their origin is known, and in spite of the praiseworthy efforts of astronomers who try to find consistency among masses of extremely blurred and ambiguous data and clues, I will point out the paradoxes which can derive from extending and combining non-comparable theses to support different hypotheses which are, at least in appearance, more suitable to the environment of quantum space.

This subject will be further developed in the chapter on the Hubble constant.

The most recent observations have led to the discovery of astronomical objects (mainly quasars but lately, thanks to the Hubble Space Telescope, also many galaxies) up to 10 to 12 billion light-years or more away from us, spread all over the heavens. Therefore, the apparent diameter of the known universe would turn out to be between 20 and 24 billion light-years.

It must however be pointed out that 10 to 12 billion years have elapsed since the emission of the electromagnetic signals we receive today, and that the apparent diameter of the universe is not the diameter it has at present, but the one it had about 12 billion years ago.

The age of the universe is estimated to be about 15 billion years (between 10 and 20, depending on the estimated value of the Hubble constant, which ranges between 50 and 100 km/s per megaparsec and which allows us to calculate the distances of objects too far away for trigonometric parallax or for standard-candle comparisons, by considering the speed of recession derived from the object’s red-shift). It follows that the more distant objects would have had to move apart from one another to a distance of over 20 billion light-years during the first 3 to 5 billion years of the universe’s life. This is a paradox, according to the principles of Relativity.
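The link between the value of the Hubble constant and the age estimate can be illustrated with a short calculation: the characteristic time 1/H0 spans roughly 10 to 20 billion years over the quoted range of H0. A minimal sketch (the unit-conversion constants are standard values, not taken from the text):

```python
# Age scale of the universe as 1/H0, for the two ends of the quoted
# range of the Hubble constant (50 and 100 km/s per megaparsec).

KM_PER_MPC = 3.0857e19       # kilometres in one megaparsec
SECONDS_PER_YEAR = 3.156e7   # seconds in one year

def hubble_time_gyr(h0_km_s_mpc: float) -> float:
    """Characteristic time 1/H0, in billions of years."""
    seconds = KM_PER_MPC / h0_km_s_mpc
    return seconds / SECONDS_PER_YEAR / 1e9

for h0 in (50, 100):
    print(h0, "km/s/Mpc ->", round(hubble_time_gyr(h0), 1), "Gyr")
```

With H0 = 50 this gives about 19.6 billion years, with H0 = 100 about 9.8, matching the 10-to-20 range quoted above.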

Not even the most recent cosmological theory of inflationary expansion, which implies hyperluminal expansion speeds – though these could only apply to rather modest sizes – can solve this paradox. This calls for a revision of the basic concepts of cosmology, every one of which (the BIG BANG expansion theory, the cosmological red-shift and the HUBBLE constant, general Relativity, and quantum theories) seems reliable by itself, in accordance with both observations and mathematical analysis, but gives rise to various paradoxes when considered together.

Moreover, the actual brightness of the more distant objects, calculated considering their apparent brightness and the distance determined from the red-shift and the Hubble constant, is so high that it requires matter-to-energy conversion mechanisms with efficiency close to, at times even greater than, 30%.

Such a high efficiency is greatly superior to that of all known mechanisms, which leads to rather daring hypotheses (accretion rings around black holes with masses of hundreds of millions up to many billions of solar masses, with incredibly large mass concentrations in their surroundings, enough to allow the conversion into energy of several solar masses per year for millions or even billions of years). These hypotheses can account for the amount of energy at stake, but so far observations have not confirmed them with respect to the amount of mass involved.

Actually, the large mass concentrations which should be involved in these phenomena are the reason why the estimates of energy emissions from objects on the edge of the universe are very unlikely. In fact, those regions are characterized by very high expansion speeds, and masses should be far more spread out there than in the central regions, which in fact do not exhibit events involving so much energy.

An aesthetically very attractive alternative, capable of solving the cosmological paradoxes, is the hypothesis according to which the space where the Big-Bang occurred was not totally empty, but was occupied by clouds of matter (including the dark matter which reveals itself through its gravitational effects on galaxy rotation but which has still not been detected). The explosion’s shock wave in this case would have triggered the accumulation processes which determined the formation of stars and galaxies, similarly to what happens inside galactic and intergalactic clouds involved in supernova explosions.

The Hubble "constant" in this case would no longer be a constant, but rather a coefficient which decreases according to distance and time. A decreasing Hubble "coefficient" would imply two contrasting consequences: the age of the universe would turn out to be greater than currently estimated, thus dilating the amount of time needed for the formation of the oldest galaxies, which now would be too short. On the other hand, also the estimated size of the universe would increase, making the explanation of the intensity of the events on its edges even more difficult.

However, the estimates of the distances d of the more distant objects, and thus of the universe’s size, do not depend only on the Hubble "coefficient", but also on V, the speed of recession of the objects of interest. V is in turn obtained from z, the red-shift of the emission and absorption spectral lines of the observed objects, according to the formulae:

z = (λ1 − λ) / λ

V = c · z            d = V / H
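In code, the chain from measured wave-lengths to an estimated distance reads as follows. This is a sketch using the non-relativistic form V = c·z; the wave-length values are illustrative, not observational data:

```python
# Distance estimate from red-shift: z from the shift of a spectral line,
# then V = c * z and d = V / H0 (valid for modest z, as in the formulae above).

C_KM_S = 299_792.458  # speed of light in km/s

def redshift(lam_emitted: float, lam_observed: float) -> float:
    """z = (lambda_observed - lambda_emitted) / lambda_emitted."""
    return (lam_observed - lam_emitted) / lam_emitted

def distance_mpc(z: float, h0_km_s_mpc: float) -> float:
    """d = V / H0 with V = c * z."""
    return C_KM_S * z / h0_km_s_mpc

z = redshift(500.0, 550.0)       # a line emitted at 500 nm, observed at 550 nm
print("z =", z, " d =", round(distance_mpc(z, 75.0)), "Mpc")
```

For H0 = 75 km/s/Mpc the 10% stretch (z = 0.1) corresponds to roughly 400 Mpc; halving or doubling H0 halves or doubles the inferred distance, which is exactly the uncertainty the text describes.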

But does red-shift only depend on the speed of recession or are there other causes? The hypothesis according to which it is determined by a loss of energy experienced by light along its path has already been formulated and soon abandoned, since no mechanism or observational result has been found to support it.

However, the action of the gravitational field should be effective not only on the path of light passing near a mass, diverting its trajectory, but also on the wave-length of the diverted light, according to the mass and the size of the body which diverts it.

If this is the case, the action of the gravitational field should not only divert light from its original trajectory, but also stretch the wave-length as follows:

λ1 = f(λ, α)    [the original equation image is not recoverable; it expresses the outgoing wave-length λ1 as a function of the incoming wave-length λ and the deflection angle α]

where λ is the wave-length of light approaching a mass, λ1 is the outgoing wave-length, and α is the angle between the incoming and outgoing directions (see fig. 14). The deviation imposed on the light’s path thus causes a red-shift which is unrelated to curvature or, more appropriately for quantum space, to the geometrical deformation of space, a deformation which in turn depends on the densities of the objects encountered along the way. It follows that every encountered atom, whose nucleus is the densest thing we know directly, diverts light by an infinitesimal angle. The repeated deviations compensate one another, so the motion turns out to be virtually straight, whereas the wave-length increases add up, contributing to the cosmological red-shift and thus leading to wrong assessments of the speed of recession, yielding values probably even more than twice as high as the real one.
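The compounding of many infinitesimal wave-length increases described above can be sketched numerically. This is a toy model of the hypothesis, not an established mechanism; the number of encounters and the per-encounter stretch are arbitrary illustrative values:

```python
# Toy model: each of n tiny deflections stretches the wave-length by a
# factor (1 + eps). The deflection angles cancel on average, but the
# stretches compound multiplicatively, as (1 + eps) ** n.

def cumulative_stretch(eps: float, n: int) -> float:
    """Total ratio lambda_out / lambda_in after n encounters."""
    return (1.0 + eps) ** n

# 7 million encounters, each stretching by one part in ten million:
ratio = cumulative_stretch(1e-7, 7_000_000)
print("ratio =", round(ratio, 3))   # close to 2, i.e. an apparent z near 1
```

Even though every single stretch is unmeasurably small, the accumulated ratio approaches 2, which is how a purely gravitational contribution could mimic a large recession speed in this hypothesis.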

But how can we test the contribution of the two components, the Doppler effect caused by the speed of recession, and the cosmological red-shift of gravitational origin, which together determine the total red-shift we have measured?

A first indication may be obtained through a statistical comparison of galaxy distribution. Let’s see how. Since the universe’s expansion causes a dilation of intergalactic space which is greater as the distance increases, if we count the galaxies in each cubic megaparsec, for instance, (this unit of measure however might still be too small) at a distance of 10 megaparsecs from us, and then compare our result with the number of galaxies per cubic megaparsec at distances of 20, 30, or more megaparsecs, we should notice a decrease in the count of more distant galaxies proportional to the inverse of the square of the distance.

The statistical count will obviously have to integrate the number of galaxies, classifying their types and sizes, and correcting the number of the smallest ones counted at greater distances so as to account for those which might no longer be visible, compared with those detected at shorter distances. Moreover, since the count will first have to be made with reference to the distances calculated so far, further adjustments will be necessary. If the galactic distribution turns out to differ from the inverse-square law, an immediate correction of the actual distances will be possible, from which the contribution of the cosmological red-shift may be derived. The Hubble constant may also be determined in this way with greater accuracy.
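The comparison described above can be sketched as a simple consistency check: given a reference count near us, how do counts at larger distances compare with the inverse-square expectation? The counts used here are invented placeholders, not survey data:

```python
# Compare (hypothetical) galaxy counts per cubic megaparsec at several
# distances with the inverse-square law the text proposes as a benchmark.

def inverse_square_prediction(n_ref: float, d_ref: float, d: float) -> float:
    """Expected count per cubic Mpc at distance d, given n_ref at d_ref."""
    return n_ref * (d_ref / d) ** 2

observed = {10: 40.0, 20: 10.0, 30: 4.4}   # placeholder counts per cubic Mpc
for d, n in observed.items():
    expected = inverse_square_prediction(observed[10], 10, d)
    print(f"{d} Mpc: observed {n}, expected {expected:.1f}, ratio {n/expected:.2f}")
```

A systematic departure of the observed/expected ratio from 1 would signal the correction to distances, and hence the cosmological red-shift contribution, that the text anticipates.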

Because of the cosmological red-shift, extreme objects, which apparently receded at speeds close to that of light at the time of departure of the electromagnetic signals we now receive and observe, would then turn out to be far less rapid and distant; large mass accumulations would be more probable, because they were formed in a more favorable environment; the related energy processes would be much less majestic; and the overall size of the universe at the time referred to by our observations would be perfectly compatible with known physical laws.

In this scenario the supposed need for dark matter as a means to hold galaxies and galaxy clusters together would be strongly downgraded, because if these objects turn out to be at a shorter distance, then their size will be smaller. This implies that the gravitational action of matter, whose emissions are detectable all over the electromagnetic spectrum, increases by the ratio of the square of the previously estimated size to the square of the real size.

In a region of space partly filled with clouds of matter, we could also hypothesize the presence of pre-existing concentrations of stars, globular clusters or galaxies, which would justify the presence of those objects which seem to be the same age as, or even older than, the universe. At the same time we would get rid of the difficulties encountered in justifying the transition from the supposed initial isotropy to the anisotropy which led to the formation of stars and galaxies.

Also the "singularity" at the origin of the Big Bang would lose most of its mysterious singularity, and would become more like a hyperbolic black hole, which can already be solved, in part, with our current theoretical investigation capabilities, but which can be further studied in quantum space.

The explosion of such a black hole after a critical threshold of energy storage in the space containing it has been exceeded is still to be investigated (15), but the peculiar nature of the black hole can account for the mechanisms which led to the complete destruction of all elements, which hypothetically could have flowed into it during a previous Big-Crunch, along with their recombination, during the explosion, into the simplest elements, i.e. hydrogen with a small percentage of helium.

In quantum space, and in a primordial universe where the Big Bang might have occurred in an environment partly filled with gas and dust clouds, even with possible star groupings, we can hypothesize an expansion mechanism capable of better explaining certain observed effects, along with a pseudoinflation expansion process.

The state values of space quanta piled up around an expanding mass are in turn expanding, flowing outward and inducing an expansion motion of gas clouds or of surrounding objects, even though they have not yet been reached by the actual shock wave, which will get to them only later. The expansion effect due to the flow of state values is synchronous everywhere in the universe, since the variation of state values is not related to time. The graph of fig. 10 illustrates the state level increase at points distant from expanding masses.

[Fig. 10 – graph of the state level increase at points distant from expanding masses]

The flow of state values moreover tends to spread the accelerated material pseudo-isotropically, even though it is applied to spheres of increasingly large size, as the total mass that generates them grows with increasing distance from the exploding mass, because of the acquisition of the surrounding gas and dust clouds involved in the expansion.


Relativity has introduced the concept of space curvature, which has unintentionally coined the paradigm: "The shortest distance between two points in space is not represented by the straight line that connects them, but by a geodetic line".

Such a paradigm can easily lead to conceptual misinterpretations, as it is expressed in a categorical and summary manner.

Let’s see why.

[Fig. 11 – two houses, A and B, on opposite banks of a river crossed by a bridge]

Fig. 11 shows two houses, A and B, located on opposite sides of a river which is crossed by a bridge, placed at a given distance from A and B. Would it be correct to say that the shortest distance between the two houses is the curve d2, instead of the straight line d1 which ideally connects them? The correct answer to this question should be: "The shortest distance between A and B is represented by d1, whereas the shortest path from A to B and vice versa, which is also the only possible path, is the curve d2."

Similarly, in quantum space the only possible way of moving between two points is to follow the set geodetic lines induced by fields (the concept of possible geodetic line, however, has to be revised), but it would not be appropriate to abandon the Euclidean idea of the straight line as the shortest distance between two points. In fact, although this is an abstract representation, it still constitutes the only reliable system for representing the positions of objects in space, because it is not affected by the distortions which will characterize any other method which aims to consider, or which might be influenced by, the geodetic lines of space.

A further reason is that quantum space is not only curved, but it is also uneven, hence no curve can adequately represent it.

Let us imagine a vehicle making a trip around the world while carefully following the Earth’s surface. This path, if observed from a distance at which it is possible to see it entirely in the visual field of a telescope, will look like a perfect circumference, since the imperfections of the Earth’s contour (less than 20 km of maximum radial difference) represent an irrelevant percentage (about 0.30%), difficult to detect with respect to the Earth’s radius, which exceeds 6350 km.
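The roundness figure quoted above follows directly from the numbers given:

```python
# Ratio of the Earth's maximum radial irregularity (about 20 km) to its
# radius (just over 6350 km), as a percentage.

max_irregularity_km = 20.0
earth_radius_km = 6350.0

percent = max_irregularity_km / earth_radius_km * 100
print(f"{percent:.2f}%")   # about 0.31%, visually indistinguishable from a circle
```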

In quantum space, any orbit or trajectory will exhibit irregularities comparable to those of the Earth’s surface, since space curvature determined by the main mass of a system (the state level with spherical symmetry of space quanta in its surroundings) turns out to be altered and dynamically variable, because of the presence of surrounding moving masses, in addition to the inhomogeneity of the main mass. The invariance of scale, moreover, will contribute to the further deformation of space, causing alterations at any distance.

If we connect two points in space with a measuring tape, it will inevitably follow the path determined by the geometrical deformation present at that instant, but the value of the distance measured in this way would only be valid at that instant, and would differ from the value measured at a later moment. Not even measurements carried out with electromagnetic waves (with radar or with a laser beam) would be unaffected by the geometrical distortions of space, although the path of the beam would still be as close as we can get to a straight line or to the geodetic line.

In conclusion, the geometrical configuration of space cannot be represented by Gaussian curves or surfaces, as we are dealing with an irregular, constantly changing volumetric deformation; the most convenient method to follow remains Euclidean geometry, since, though abstract, it is the only one which will enable us, with the future development of computing methods, to analyze the distribution of state levels in each direction with the best approximation.


Relativistic SPACE-TIME has proven to be an ingenious intuition which can help us understand both the relative value of time, according to the motion, speed and acceleration of objects in space, and the reasons why we cannot determine time correlations among different events and phenomena in mutual motion, no matter if near or far. Relativity’s concept of time has however produced a negative effect, leading to the rejection of Newtonian absolute time. Although it needs some adjusting, absolute time cannot be ignored, since two or more events, no matter how and how fast they are moving in space, no matter their separation and the system they belong to, are always part of an upper order system which contains them: hence they must be bound by a real, also temporal relationship.

In quantum space the concepts of relative and absolute time lose their categorical meaning and their consequent incompatibility. Let us see how:

The concepts of Quantum Space and of Relation configure time as a consequence of the line of delay represented by the quantum succession encountered by energy as it moves through space. Such a configuration of time leads to two considerations: the first is the invariability, with relative explanation, of the speed of light which represents the typical limit value for the motion of energy among space quanta. The second implies that the delay experienced by the photon shifting in position by a quantum unit (the absolute unit of measure for distance), represents the absolute time we are looking for and its unit of measure; both units of measure are indivisible.

The correlation between distant events in mutual motion, which is necessary to the systems’ functioning and equilibrium, is ensured by the action of Relation which has no limit and is unrelated to time.

In order to completely understand relativistic Space-Time in quantum space, it is however important to comprehend how and why absolute, or quantum time and relativistic time differ, and why the value of the typical duration of a phenomenon varies according to the way the involved objects move in space.

Two reasons determine the variability of relativistic time: the nature of matter - which is constituted, as we have seen, by discrete amounts of energy (particles) in mutual motion along orbital paths defined by the space geometry configuration primarily determined by particles themselves - and the speed of light, the maximum limit for any energy motion in space.

Let us consider an individual atom belonging to an object involved in a process and experiencing acceleration; nothing would prevent its electrons from moving around the nucleus at the speed of light, but if the atom were itself moving, the electrons would have to move along a spiral orbit (the resultant between the direction of the instantaneous tangent to the orbit and the direction in which the atom is moving) at an actual speed exceeding the speed of light, with reference to quantum space. This cannot occur, for the reasons postulated above.

What happens then to objects when their motion in space is accelerated? The speeds of the mutual motions of all their constituent particles slow down to compensate for the speed at which the object moves through space. For objects moving at a uniform speed, the equilibrium between the speed of translation and the speed of the internal movements was achieved when the uniform motion began.

In other words, the amount of motion of each particle is fixed and invariable; thus, whenever a body experiences acceleration, let’s suppose it to be linear, this can only occur at the expense of internal orbital movements, with a decrease in their relative velocities.

But if the changes in the orbital velocities of an object’s or a system’s internal components solely depended on the acceleration experienced by the system itself, this would lead to a catastrophic consequence for its dynamic equilibrium.

In fact, orbital velocities would undergo - along with the slowing imposed by the system’s acceleration – variations exhibiting multi-sinusoidal patterns depending on the different orientations of the orbital plane and of the instantaneous tangent to the orbit in the particle’s position with respect to the direction of the acceleration: this would upset the relations between the system’s components.

The absolute proportionality of speed variations of internal components is ensured by the geometrical configuration of the system’s internal space, which represents, with the dynamic structure of state levels, the complex organization of the system. The configuration of space geometry is obviously managed by Relation.

Hence, if even one of the system’s components exhibits a proper speed which, added to the acceleration, tends to exceed the speed of light, the velocities of all of the system’s components would decrease, so as to prevent the fastest component from exceeding the critical value, thus maintaining the equilibrium of internal relations. Orbital velocities would also maintain their uniformity and their absolute proportionality in the various positions and orientations of their paths.

Let us now suppose we observe the same physical phenomenon, or a given chemical reaction, happening on two celestial bodies, or inertial systems, which move through space at different speeds. An outside observer, able to receive comparable signals from both bodies, would see the phenomenon or the reaction occur at a different pace in the two systems; precisely, it would happen faster on the body which moves more slowly, and more slowly on the faster body.

Instead, to observers on each of the two bodies, the phenomena would appear to take place at equal pace according to their reference clocks, because their clocks experience the same time stretching which influences the phenomena. Whether they are pendulum, mechanical or atomic clocks, their constituent particles obey the same laws as the events they are measuring.

It is however appropriate to note that when we speak of a body’s speed, in the cases stated above, we mean the speed of its shift in position with reference to quantum space. This causes a curious effect: when a body experiences acceleration, the overlapping of the motions of the higher-order systems to which the body is related might actually decrease its speed of translation relative to space.

This also implies that, if we cannot know all the parameters which determine the motions of the components of the system hierarchy to which the two above-described hypothetical bodies belong, we cannot by any means correlate their local times, although they are connected through absolute time.

But if the speed of light does not change with respect to quantum space, observers on the two moving inertial systems mentioned above should detect a different speed for the light coming from a distant celestial object, since they have different speeds with reference to the source. In practice, the actual speed of light detected from an inertial system which approaches or moves away from the source, if referred to absolute time T, can vary from 2c to zero respectively. The speed detected from the moving systems instead will always be rigorously equal to c. Let us see the reasons:

if c = the speed of light;

λ = light’s actual wave-length and λ1 = wave-length measured on the moving system;

f and f1 = the frequencies corresponding to λ and λ1;

D = distance traveled by light in quantum space in absolute time, expressed in quantum units;

v1 = speed of light detected from a body or system moving relatively to the source;

d1 = space traveled by light measured from the moving system;

t1 = relativistic time measured on the moving system, then:

1)  c = D / T            and  1B)  T = D / c

But if d1 turns out to be greater or smaller by a factor k with respect to D, then t1 measured on the moving system will also be equal to kT, hence:

2)  v1 = d1 / t1 = kD / kT = D / T            then  v1 = c

This means that the speed of light measured on a system moving relatively to the source, regardless of the mutual velocity, will always result to be equal to c.

Light’s actual wave-length λ, equal to:

3)  λ = c / f                or  f = c / λ

on the moving system will instead turn out to be:

4)  λ1 = λ · d1 / D ,

and since  d1 = k D ,  it will be  λ1 = k λ   and   f1 = f / k ,

thus highlighting a Doppler effect with k   as coefficient.
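The derivation in formulae 1) to 4) can be checked numerically: scaling both the path d1 and the local time t1 by the same factor k leaves the measured speed equal to c, while the wave-length and frequency pick up the factor k. A minimal sketch:

```python
# If d1 = k*D and t1 = k*T, then v1 = d1/t1 = D/T = c for any k,
# while lambda1 = k*lambda and f1 = f/k: a Doppler effect with coefficient k.

C = 299_792_458.0  # speed of light, m/s

T = 1.0            # one unit of absolute time
D = C * T          # distance light covers in that time

def measured_speed(k: float) -> float:
    d1, t1 = k * D, k * T
    return d1 / t1

def doppler(lam: float, k: float):
    """Wave-length and frequency as seen on the moving system."""
    return k * lam, (C / lam) / k

for k in (0.5, 1.0, 1.7):
    print(k, measured_speed(k))          # equal to c up to floating-point rounding

lam1, f1 = doppler(500e-9, 2.0)          # a 500 nm line with k = 2
print(lam1, f1)                          # wave-length doubled, frequency halved
```

The cancellation of k in v1 = d1/t1 is the whole argument: whatever the mutual velocity does to the measured path, it does identically to the locally measured time.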

An important consequence of the inclusion of Space-Time in Quantum Space concerns the variation of physical phenomena’s reaction times, or of biological times, and for similar reasons of the functioning of clocks placed on objects in orbit, with reference to the phenomena or to time shown by clocks located on a system’s central body.

In addition to time stretching due to acceleration and gravitational effects, which has already been demonstrated, the orbiting clock should show an oscillating time according to its position in the orbit. In other words, the time indicated in a determined orbital position should turn out to be faster or slower compared to the one indicated in the diametrically opposed position.

The oscillation width depends on the systems’ actual velocities, but also on the orbital plane orientation with reference to the moving direction of the system, and is zero when and if the orbital plane is normal with reference to the moving direction in space.

The oscillation, which is sinusoidal with respect to the ground-based reference clock, would appear to be irregular compared to absolute time, because it would be altered by the multiple cycloidal overlaps along the orbital path, due to the sum of movements of all higher-order systems.

After talking about the absolute and relative speed of light, and about the flow of time indicated by clocks placed on objects moving in quantum space, now we have to determine why Michelson and Morley were not able to test the effects of the Earth’s rotation through space quanta, which were first believed to be analogous to those predictable for motion through the ether.

Michelson’s interferometer used in the experiment (see fig. 12) is an instrument in which the light emitted by a source moves in a space with fixed geometrical configuration. Within this configuration, light adjusts its speed, so that the maximum speed c is not exceeded, and this speed will be maintained in all directions, just like it happens with every other component of the elements of the system in which it is moving.

[Fig. 12 – Michelson’s interferometer]

The speed of light measured in the system is always equal to c, because of the reasons mentioned above and because of the relations among paths, relative and absolute times of formulae 1), 1b) and 2), regardless of the speed and the direction of the movement in space of the instrument and the inertial system to which it belongs; speed variations postulated among the internal paths of Michelson’s interferometer are not possible.

The experiment reported in the graph below, which would allow us to appreciate the Earth’s motion with respect to quantum space, consists in measuring time differences recorded by orbiting clocks at different positions along their orbital paths comparing their results with those of a ground-based clock.

In the graph (see fig. 13) two clocks are located in geostationary orbits, an equatorial and a polar one, placed perpendicularly over ground-based stations to facilitate the exchange of data (obviously the equatorial orbit allows the satellite to remain in a steady position with reference to the ground-based station, whereas the polar orbit only allows the satellite to be positioned perpendicularly to the station at a determined hour.)

Hence, the satellite in a geostationary equatorial orbit requires only one ground-based detection station, whereas the satellite in the polar orbit will require more stations, so that a direct comparison between the times measured at the two positions of the satellites can be made.

This way the Earth’s movement through quantum space could be tested. The oscillation pattern of the different orbital positions of the system with two satellites would allow us not only to verify the Earth’s movement with respect to space, but also to determine its speed and orientation, with good approximation.

[Fig. 13 – two clocks in geostationary orbits, one equatorial and one polar, over ground-based stations]

These data, however, both undergo constant changes according to the multiple overlaps of the systems which include Earth. Repeated measurements throughout the year would also provide an indication of the temporary values of the solar system’s speed and direction of motion relative to space.




From "reductionism" to "emergence"


The study of complex systems is just dawning, and, in spite of various proposals by many authors, the subject has only been touched upon.

However, two schools of thought have existed from the beginning in dealing with the system issue: on the one hand, the concept of reductionism, supported by condensed-matter physicists; on the other, the concept of emergence, formulated in the early 1970s by the physicist Philip W. Anderson.

Let us see what this is all about, and to what extent quantum space can help solve the problem of complex systems.

Reductionists believe systems can be solved by analyzing the properties of their constituents. They think, for instance, that the properties of a molecule can be determined by studying its constituent atoms, or that the characteristics of a cell can be found by following its subsystem hierarchy down to the atomic level. In other words, according to reductionism, the system, i.e. the whole, exactly corresponds to the sum of its constituents.

The concept of emergence, of which Anderson is the precursor, instead postulates that when a high number of elements is combined in a system, then properties "emerge" (hence the term) which the individual constituents do not possess. In other words, it seems that two and two might be five or, anyway, be greater than four.

This description of the two concepts may lead us to think that reductionism is the more rigorously logical: lower-complexity systems seem to confirm the sum of the characteristics acquired from the components. But highly complex systems show that their characteristics can be different, at times even very different, from those we might expect from analyzing their constituents. Be careful, however: different, but not greater or smaller. Using the arithmetic example above we might conclude that two and two is always four, but the four can be written in a different character.

In quantum space it is possible to understand why a system’s characteristics can differ even greatly from those of its constituents. Within a system, the constituent loses its identity (which it might get back once it is taken out of the system), and the consequent effects will provide it with new properties.

Let us see the causes which alter a constituent’s properties inside a system: the characteristics of an element (an atom, a molecule or a subsystem) are determined by its inner geometrical configuration, which defines the paths of its constituent particles and, consequently, its interactions with the reagents or with the instruments used to determine its properties. In any case, the reaction always appears to be the same, since it is always the same elements that react together.

Inside a complex system the situation changes radically: the geometry of the constituent element is constantly altered by the proximity of the other elements, the inner paths of its particles are modified, and its relationships with its "neighbors" keep changing. Moreover, within a system a component is in contact with many other elements which behave as catalysts or reagents, in a constant exchange of roles.

If we were able to analyze the characteristics of a molecule within a system, we would find that they are not only different from those of an analogous isolated molecule, but also constantly changing.

An example of the constituent’s loss of identity has already been provided in my description of the behavior of protons and quarks inside the atomic nucleus.

              lineapallineblu.gif (1702 byte)


In the study of complex systems, the evolution of the most complex systems is described as chaotic and disordered; Quantum Mechanics has probably contributed to this description in some measure, with the introduction and extension of the concepts of uncertainty.

But how do chaos and disorder fit in quantum space and in the theory of Relation?

First of all, it is essential to understand the nature of the events so designated, in order to verify whether the use of these terms reflects their semantic and etymological values or whether a new and different meaning is attached to them.

The term chaos and its synonym disorder originally refer to a disposition of primary elements which does not follow any design. That is, they apply to a predominantly static condition, whereas in their physical application they do not refer to the static disposition of constituent elements at all, but rather to the evolutionary dynamics of complex systems.

At this point we may ask ourselves whether these terms refer to processes whose development follows no rule at all, or to processes which obey an articulated and complex set of rules that we are unable to identify, and whose exact prediction is therefore hindered.

In quantum space, and not only there, it would be absurd to suppose the existence of phenomena outside of already known or yet to be discovered physical laws.

In the hypotheses we have formulated, each particle – or, more precisely, the energy which generates the particle included in a system by binding itself to the quanta contained in a given volume of space – must inevitably move along the path traced by the geometry of space. If we supposed that a particle could escape its set path – the fundamental condition for the onset of a chaotic development of the system it belongs to – we would have to assume that the particle possesses a decisional power, which is totally absurd, also because, if this were the case, the universe would disintegrate within a fraction of a second.

Chaos and disorder then acquire the more appropriate meanings of pseudochaos and pseudodisorder, indicating evolutionary phenomena governed by a kind of space geometry which we have no way of deciphering for now – hopefully only for now.

That "chaotic" phenomena are such only because of our insufficient knowledge of their elements is demonstrated by meteorology, the evolutionary study of the most complex system within our direct reach, i.e. our atmosphere, whose behavior we can predict with increasing reliability as our detecting and computing tools become more widespread and more accurate.
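The distinction between "no rules at all" and "deterministic rules whose prediction outruns our knowledge" can be illustrated numerically with the logistic map, a standard toy model of deterministic chaos (an illustrative choice of mine, not drawn from this text): two trajectories governed by exactly the same rule, differing only by an unmeasurably small error in the initial condition, stay together for a while and then give completely different "forecasts".

```python
# Logistic map x -> r*x*(1 - x) with r = 4 (fully chaotic regime).
# Two runs of the SAME deterministic rule, with initial conditions
# differing by 1e-10, eventually diverge completely.

def trajectory(x0, steps, r=4.0):
    """Iterate the logistic map, returning the full orbit."""
    xs = [x0]
    for _ in range(steps):
        x0 = r * x0 * (1.0 - x0)
        xs.append(x0)
    return xs

a = trajectory(0.2, 80)
b = trajectory(0.2 + 1e-10, 80)
diffs = [abs(x - y) for x, y in zip(a, b)]

early = max(diffs[:6])    # still experimentally indistinguishable
late = max(diffs[30:])    # the two "forecasts" have decorrelated
```

The rule is perfectly deterministic; the unpredictability comes entirely from our finite knowledge of the initial state, exactly as in weather forecasting.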

On the issue of system evolution, it must also be remembered that we can observe systems with apparently equal levels of complexity which can nevertheless be predicted with very different degrees of accuracy.

In the chapter on the hierarchies of systems, the different evolutionary contributions due to the geometry of space inside and outside systems have already been mentioned: the different intensity with which paths are traced in the two regions of space, and the different relationships which connect a system to the higher-order system that includes it, determine a system’s capability of maintaining its internal equilibrium, and therefore of being more or less predictable.

This also allows us to define the nature and degree of the randomness which distinguishes the evolutionary paths of two equal systems, due not only to their different spatial locations, already mentioned, but also to the different intensities of their bonds with the higher-order systems.

blu2stol.gif (3076 byte)



Mode of field-wave propagation

(To INDEX)   

blu2stol.gif (3076 byte)

A fundamental experiment to test the reliability of the theory of Relation is the determination of the mode of field-wave propagation. Relation postulates that a field propagates synchronously with the body that generates it, at any distance from the body itself; in other words, any field-wave variation must propagate instantaneously in quantum space, without undergoing any time deformation.

As far as I know, no experiment aiming at verifying such propagation modes has ever been attempted.

In order to determine them, the following method can be used:

The waves originating from two oscillating, synchronous field generators A and B, placed far apart, are detected by a sensor S located between the two generators. The synchrony of the two generators is ensured by feeding them with the same oscillator G, connected to both through connectors of equal length (C1 and C2).

fig9a9bbis.jpg (41636 byte)

If field-waves propagate at a finite speed, the sensor S will detect them in phase and with equal amplitude (fig. 9 B (A)) when it is in the central position S1 with respect to A and B, whereas in any position outside the central plane (S2) it will detect them with a phase displacement proportional to the propagation speed and to the difference between the sensor’s distances from the two generators (fig. 9 B (C)).

If instead field-wave variations propagate synchronously, as postulated by Relation, the waves detected by the sensor must always turn out to be in phase, no matter what the differences between the sensor’s distances from the two generators are (fig. 9 A (A)-(B)).
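For the finite-speed case, the expected phase displacement at the off-center sensor can be quantified: for a propagation speed v, an oscillator frequency f, and sensor distances d1 and d2 from the two generators, the phase difference is 2πf(d2 − d1)/v, while synchronous propagation predicts zero at every position. A minimal numerical sketch (the frequency and distances are arbitrary illustrative values, not taken from the text):

```python
import math

def phase_displacement(freq_hz, d1_m, d2_m, v_mps=299_792_458.0):
    """Phase difference (radians) between the two detected waves when
    field-waves propagate at the finite speed v_mps (default: c).
    It vanishes only at the central position, where d1 == d2."""
    return 2.0 * math.pi * freq_hz * (d2_m - d1_m) / v_mps

f = 100e6                                    # 100 MHz oscillator (illustrative)
central = phase_displacement(f, 50.0, 50.0)  # sensor at S1, equidistant -> 0
offset = phase_displacement(f, 40.0, 60.0)   # sensor at S2, 20 m path difference
```

Synchronous propagation, by contrast, would give `offset == 0` as well; the experiment thus reduces to deciding which of the two readings the real sensor produces.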

Figures 9 A and 9 B (A,B,C) show the arrangement of the components of the experiment and the graphic representations of the waves which should be detected during the experiment.

Fig. 9 B (A) – Perfectly juxtaposed waves detected by a sensor in central position.

Fig. 9 B (B) – Waves detected by a sensor in an off-center position in the case of synchronous propagation. (The waves emitted by the two generators turn out to be in phase, though with different amplitudes.)

Fig. 9 B (C) – Out-of-phase waves with different amplitudes, detected by a sensor in an off-center position in the case of propagation at a finite speed.

A similar experiment (see fig. 9 C) can also be carried out to verify the mode of propagation of gravitational waves, by replacing the oscillating field generators with two pairs of masses rotating at high speed, and the sensor with an antenna capable of detecting gravitational waves. In this case too, the comparison between the waves detected when the antenna is in a central position with respect to the two pairs of rotating masses and those detected from an off-center position should exhibit no phase displacement.

fig9cantgravit-en.jpg (30301 byte)

It must be pointed out, however, that the proposed experiments are extremely difficult to carry out, in spite of their apparent conceptual simplicity. It would in fact be necessary to detect very small differences with respect to the speed of light over distances as short as possible, to build extremely precise pairs of oscillators and extremely sensitive sensors, to shield the equipment from any external interference, and to carry out a series of increasingly accurate recordings so as to gradually improve the degree of verifiable synchrony of the compared events.

blu2stol.gif (3076 byte)



(To INDEX)   

blu2stol.gif (3076 byte)

In quantum space, the deviation of light from its straight path caused by large masses or by objects of high density involves, along with the change in direction, a stretching of the wave, that is, a red-shift of gravitational origin.

fig14bis.gif (18206 byte)


Light coming from a distant source along direction d, arriving with wavelength λ, moves away from the mass M along direction d1 with wavelength λ1 (fig. 14):

  image594.gif (1085 byte) 

In the case of the Sun, the gravitational effect produces a deviation of the path of light – a deviation already detected by Eddington to confirm Relativity – and it also causes a wave stretching which will, however, be much harder to detect.

The red-shift produced by our star can be calculated using the formula reported above, deriving cos α from the star’s radius and from the focal length of the Sun’s gravitational lens, which is estimated to be about 550 Astronomical Units.
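The figure of about 550 Astronomical Units quoted above can be checked against the standard General Relativity deflection formula α = 4GM/(c²R): a ray grazing the solar limb is bent by about 1.75 arcseconds, and rays bent by that angle converge at roughly R/α ≈ 550 AU. A sketch of the check, using standard constants (the code is my illustration, not part of the original text):

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg
R_SUN = 6.957e8      # solar radius, m
AU = 1.496e11        # astronomical unit, m

# GR deflection angle for a light ray grazing the solar limb.
alpha_rad = 4.0 * G * M_SUN / (C**2 * R_SUN)
alpha_arcsec = math.degrees(alpha_rad) * 3600.0   # ~1.75 arcsec

# Focal length of the Sun's gravitational lens for grazing rays.
focal_au = (R_SUN / alpha_rad) / AU               # ~550 AU
```

This confirms the 550 AU estimate used in the text for deriving cos α.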

The value of the red-shift turns out to be:

 image595.gif (1653 byte) 


blu2stol.gif (3076 byte)



(To INDEX)   

blu2stol.gif (3076 byte)

In the framework of quantum space and of the related theories, some hypotheses can be formulated to account for the origin and evolution of the most important astrophysical phenomena, whether observed or predicted by today’s most widely accepted theories.

Let us begin by examining, for instance, the mechanisms which can give rise to the formation of protoplanetary disks and subsequently to the birth of planets. It must be noted that such disks are very unlikely to form together with the central star; their formation is triggered only later by the star itself, and it is influenced by the mechanisms which determined the star’s origin.

There are three principal ways in which a star can originate:

In figures 19 A, 19 B and 19 C, the phases of the collapse of a gas cloud caused by a supernova shock wave are sketched. During the first phase, the front of the shock wave compresses the cloud, an irregular spheroid which begins to collapse. In the second phase, the increasing condensation leads to the formation of a mass which accelerates the collapse with its gravitational action.

fig19abc_en.jpg (30998 byte)

Finally, during the first period of the newborn star’s life, when thermonuclear reactions ignite, the star goes through a phase of great instability during which repeated explosions expel its outer layers. At this stage, however, the remaining part of the cloud near the star and the material expelled during the explosions are still distributed in a virtually spherical volume, while the front of the shock wave still retains the shape of a spherical cap, partly deformed and centered on the supernova.

In the subsequent phases, the star’s high-speed rotation, acquired through the conservation of the original cloud’s angular momentum, drags the gravitational field along with it (the dragging of space-time by a rotating star: the Lense-Thirring effect, or Einstein’s gravitomagnetic effect), giving rise to the protoplanetary disk.

Let us see how the Lense-Thirring effect, the formation of the disk and the subsequent aggregation of the disk material into planets can be explained in quantum space.

fig20bis.jpg (38592 byte)

At the origin of the phenomena involving interstellar material are the geometrical configuration of quantum space and its dynamic evolution, determined by the distribution of state levels. Fig. 20 again represents the hypothesized "mass halo" of particles, which can deform (or, more precisely, "configure") the surrounding intermediate space. The presence and disposition of mass haloes gives the state quanta different state levels, and their distribution traces the most favorable paths, which the particles will then follow.

At a macroscopic level, the mass of a star, especially one of recent formation, is by no means distributed with radial uniformity increasing towards the center; instead, because of the turbulence caused by thermonuclear reactions, it is characterized by strong surface and internal dissimilarities.

fig21_22en.jpg (35639 byte)

If we now picture the space surrounding the star as a succession of virtual concentric spheres (see fig. 21 and 22), one space quantum apart from one another, then, because of the tendency toward scale invariance, we would find on each of them a state level distribution which proportionally reflects the dissimilarities of the stellar mass. The star’s rotation drags the virtual spheres with it at the same angular speed, thus determining a flow of state levels with maximum speed on the plane normal to the star’s axis of rotation (ring A in fig. 21); the varying speed of the state level flow conveys the spherically distributed material around the star onto the plane of ring A, leading to the formation of the disk16)   (To Footnotes).

The material arranges itself in the disk following the order set by complex mechanisms which combine the densities, velocities and mutual motions of the elements, the collapse speed (positive, or negative for the expelled material), the star’s angular speed and the speed of the state level flow.

Most probably, the lighter material expelled from the star’s outer layers will be found on the outskirts of the disk, while the inner regions will host the heavier elements captured from the front of the shock wave, which has not been completely stopped by the initial explosions and which is locally fragmented by the star’s field as it captures its components.

After the disk has formed, the larger planetesimals start to capture the smaller clumps of matter, initiating the formation of planets. As their mass grows, the planetesimals’ speed increases, and they begin to spiral in towards the star, sweeping out of the disk all the material they encounter.

Their growth and their motion toward the star will not stop until they encounter a band of space which has already been emptied by another forming planet or, for the innermost band, by the star itself (see fig. 23).

fig23en.gif (9348 byte)

Another observed phenomenon which can be adequately explained in quantum space concerns the jets of radiation or matter coming from neutron stars or from particularly active galaxies. In these cases, the presence of an accretion disk, or of a group of stars arranged on a discoidal plane or in an ellipsoidal volume, is responsible for the geometrical configuration which creates a preferential channel for the material expelled by the active object.

Let us see how.

The possibility that material (or radiation) might be expelled by a star or by any other body depends on the body’s mass and density, which determine the escape speed. The escape speed is the minimum departure speed required for an object to travel back along the path traced by the state level pattern around the star.

In the case of an isolated body, the state level pattern is perfectly spherical, which implies that the escape speed is radially uniform.
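For an isolated body, the radially uniform escape speed coincides with the familiar Newtonian value v = √(2GM/r); formally setting v equal to the speed of light yields the critical radius 2GM/c² (the Schwarzschild radius) relevant to the black holes discussed later in this section. A numerical sketch with standard constants (my illustration, not part of the original text):

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8     # speed of light, m/s

def escape_speed(mass_kg, radius_m):
    """Newtonian escape speed from the surface of an isolated body, m/s."""
    return math.sqrt(2.0 * G * mass_kg / radius_m)

def schwarzschild_radius(mass_kg):
    """Radius at which the escape speed formally reaches c, in meters."""
    return 2.0 * G * mass_kg / C**2

v_earth = escape_speed(5.972e24, 6.371e6)   # Earth: ~11.2 km/s
rs_sun = schwarzschild_radius(1.989e30)     # one solar mass: ~2.95 km
```

The same quantities frame the later discussion: a body collapsed within its Schwarzschild radius is precisely one whose escape velocity exceeds the speed of light.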

If instead a star is surrounded by an accretion disk, the state level distribution is altered by the presence of the disk, which superimposes on the spherical distribution of the star’s levels an additional distribution of levels growing on the disk’s plane. The state levels on the disk’s plane, however, do not exhibit a growing pattern all the way to the center of the star: in the star’s proximity they exhibit two opposite hollows, whose minima coincide with the system’s rotation axis (see fig. 24 and 25).

fig24_25bis.gif (74212 byte)

Connected with the rotation axis, therefore, is a preferential corridor for the expulsion of material or radiation, where the escape speed is lower; the magnetic fields will then focus the jets according to their intensity.

Similarly to individual stars, galaxies with eccentric ellipsoidal or disk-like mass distributions will also exhibit a preferential expulsion channel connected with the rotation axis; the degree of nuclear activity – related to the rotation speeds, to the total mass and its distribution, to the magnetic fields, etc. – will determine the features of the possible jets.

The most elusive phenomenon predicted by current theories, however, concerns black holes, their evolution and their final destiny. These mysterious objects have never been seen (and never will be), which is obvious given their characteristics. However, recent observations have revealed effects which can only be justified by huge concentrations of mass, such as enormous black holes.

Such observations represent an important, virtually indisputable indication in support of the theories which predict the formation of black holes under particular gravitational conditions. On the other hand, the mathematical development of these theories seems to leave no doubt about the ineluctability of black holes.

However, even if we take for granted that matter can collapse so far as to form an object whose escape velocity is greater than the speed of light – from which, therefore, nothing can escape – it is still not clear how far the collapse can go, and what will happen to the matter involved.

Studies carried out by Stephen Hawking, in collaboration with Roger Penrose, propose a mechanism which could lead to the "evaporation" of the black hole, thus re-emitting all the matter it contains in the form of energy; but other physicists, such as Gerard ‘t Hooft and Leonard Susskind, believe that this hypothesis contains a paradox concerning the loss of the information contained in the matter captured by the black hole. Moreover, most of the Hawking radiation, which heats up the infalling material, is carried back into the black hole by that material as it is captured.
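Hawking's evaporation mechanism assigns the black hole a temperature inversely proportional to its mass, T = ħc³/(8πGMk_B); for stellar-mass and larger holes this is a tiny fraction of a kelvin, far below the temperature of any infalling material, which is consistent with the remark above that the radiation is easily swamped. A sketch of the standard formula with standard constants (my illustration, not drawn from this text):

```python
import math

HBAR = 1.0546e-34   # reduced Planck constant, J s
C = 2.998e8         # speed of light, m/s
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
K_B = 1.381e-23     # Boltzmann constant, J/K
M_SUN = 1.989e30    # solar mass, kg

def hawking_temperature(mass_kg):
    """Hawking temperature T = hbar*c^3 / (8*pi*G*M*k_B), in kelvin."""
    return HBAR * C**3 / (8.0 * math.pi * G * mass_kg * K_B)

t_solar = hawking_temperature(M_SUN)   # ~6e-8 K for one solar mass
```

Because T falls as the mass grows, the enormous black holes mentioned above would be the coldest objects in their surroundings, absorbing far more energy than they emit.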

Let’s see which hypotheses can be formulated in quantum space to solve this problem and to determine the destiny of black holes.

Keeping in mind the characteristics we have attributed to quantum space:

  1. a material particle is made up of one or more space quanta to which a discrete amount of energy is attached;
  2. mass is constituted by a halo surrounding the particle; in the halo, space quanta partly share the particle’s energy (the mass halo in turn acts on the external quanta, modifying their state levels and configuring their geometry);
  3. space quanta can acquire any amount of energy, provided that their state levels are high enough to retain it until the outside level distribution exceeds a given slope.

From the above-postulated characteristics, the following hypotheses can be derived with good reliability:

There is one last point to clarify about this series of events: what happened to the information contained in matter?

In quantum space, information is not impressed on material particles or on their distribution; rather, it is connected to the geometrical configuration of space. When the energy expelled by the black hole’s explosion has sufficiently thinned out, the space level distribution rebuilds the information in the form allowed by the local density. Not only is the information contained in the matter conveyed into the black hole not lost, but the information of the black hole itself will not be lost in the explosion either, since it too will be rebuilt when the appropriate interactions between energy and space are established.

(To INDEX)   

blu2stol.gif (3076 byte)


15) In quantum space, the existence of a black hole can be supposed only inside a mass grouping determining a growing pattern of state levels towards the black hole. If the black hole at the center of a galaxy were to capture the whole galaxy, this would produce a situation analogous to the one hypothesized for particles created in large accelerators: a large amount of mass concentrated in a small volume at extremely high state levels, contained in a region of space with a "typical" state level, would cause the explosion of the black hole, producing effects similar to those predicted by the Big Bang, though on a smaller scale. For further details, see the proposed hypotheses on the evolution and final destiny of black holes. (Go Back)

16) If the arrangement of state levels on the virtual spheres were uniform, neither the dragging of space-time predicted by the Lense-Thirring effect nor the formation of protoplanetary or accretion disks would be possible; the material attracted by the star would fall directly onto the star itself, which would engulf it. (Go Back)

16b) The size of the region of space the black hole must empty so that the collapse can carry on and reach its final stage depends on the ratio of its mass to the total mass existing outside that region. This ratio is yet to be determined. (Go Back)

16c) No object "motionless" with respect to space can exist, because of the intrinsic characteristics of energy, which is the constituent of matter. (Go Back)

blu2stol.gif (3076 byte)



blu2stol.gif (3076 byte)