Good Evil
Question: Why do you call the best players "evil" (Traits)?
Answer: A statistical thing. In complex games, the winning process doesn't care about money, casualties, or morale. It strives to win, and positive and negative are terms that refer only to the end result of the game. Above all, the best players are in a state opposite to that of the "good".
Deeper reasons are found in the very nature of the uncertainty force. Without an excess of the "amount of options", the body is just a dead physical structure, and it unconditionally follows the principle of least action, from which all known trajectories of physics are derived. Without excess options, physical systems are not living beings and have no ability to lie, nor to compete, that is, to defy that principle of dead nature. On the other hand, there is the law of conservation of that quantity, which traps us in a belt between the surplus of possibilities, against which we defend ourselves with the fear of uncertainty, and the shortage, against which the desire for freedom arises.
Therefore, resistance to players of superior strategy (Reciprocity) is a sign of our belonging to a belt of lesser vitality and of fear of what may be lurking behind the force of probability. I have ranked the winning game strategies (naturally) so that the majority of current ones are among the "good guys", the players of the III league, the majority of the remaining ones are among the "manipulators", the players of the II league, and the rest are a small minority of the "evil", the players of the I league (Win Lose). Pareto's rule again predicts that most of us are somewhere around the upper group of the III league, with a lesser ability and a fear of excess vitality, as well as of subjects with larger "amounts of options".
Oppositions to the natural principle of least action look, after maturity, more and more like disturbances to us, so that in later years we completely lose the ability to finely distinguish some details. That vitality, which we lose in old age, is measured by the information of perception, Q = ax + by + cz + ..., and it is greater when we oppose larger inputs (x, y, z, ...) with larger outputs (a, b, c, ...), positive to positive and negative to negative, in proportion to our capabilities (Liberty). Moreover, that measure (Q) is even greater when we manage to split what is on average good (bad) for us into positive and negative components, each treated with an appropriate positive or negative reaction.
Results, which are more often attractive to us, should be distinguished from the process of winning. Thus our "winning character" can be likable, but also charismatic, depending on the results we consider good, or on the manner of behavior we would consider fair: when the outcomes repel us less, through their uncertainty, than the methods leading to them do. It also depends on the tendency to give up freedom for the sake of security or efficiency, which again comes from the general sparing of interactions, that is, of communications.
Abnormal
Question: Is life an abnormal phenomenon in nature?
Answer: I hope you don't mean dogma. Until further notice, life is like some "abnormal" structure, but only for physics, then for chemistry and biology, at least in those parts where phenomena are reduced to pure physics. I do not know that anyone before me managed to take real steps towards the difference between living and non-living matter, so until then life was an abnormal phenomenon for information theory as well.
However, social sciences and major parts of biology more or less deal with problems related to "living beings", without deciding to give this central subject an exact definition. I will repeat once again, in this theory, unlike the non-living, the living being has an excess of information compared to the dead physical substance of which it is composed. With that excess, it may (or may not) have more freedom of physical movement and have the power to understand, or deal with truths and lies.
All trajectories of physics known today are derived from the Euler-Lagrange equations, which are an expression of the principle of least action. All these phenomena arise from the smallest possible interaction of the subject with the environment. On the other hand, the most common random outcomes are those with the highest probabilities, so we map all the smallest actions to the most probable ones. That process goes further into the least informative (minimalism), which is the turning point after which we continue to view information as some equivalent of a physical action, a product of energy change and elapsed time.
According to Hartley (1928), information is the logarithm H = log n of the number n of equally likely outcomes. The reciprocal of the number n is the probability of some average of the individual outcomes, p = 1/n, even when they are not equally likely. Therefore, Shannon's (1948) information can be said to be the expectation, or average value, of Hartley's information over a given probability distribution
\[ S = -p_1 \log p_1 - p_2 \log p_2 - \dots - p_n \log p_n. \]
At the micro level, the information of an outcome is smaller when the probability of the outcome is higher. When we know that something is going to happen and it happens, then it is not news. However, in large sets (n → ∞) an inversion occurs: the Shannon summand (-p log p) then increases, rather than decreases, with the probability (Exponential II), so the average information of a complex random system is higher where its outcome probabilities are higher.
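As a small numerical check of these claims, here is a sketch in Python (the helper names are mine, for illustration only):

from math import log, e

def hartley(n):
    # Hartley information of n equally likely outcomes
    return log(n)

def shannon(p):
    # Shannon information: the expectation of -log p over the distribution
    return -sum(pk * log(pk) for pk in p if pk > 0)

# For a uniform distribution Shannon's value reduces to Hartley's log n:
n = 6
print(shannon([1 / n] * n), hartley(n))      # both ≈ 1.7918

# The summand -p log p grows with p up to p = 1/e, and only then falls:
for p in (0.05, 0.2, 1 / e, 0.6, 0.9):
    print(p, -p * log(p))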
Thus we arrive at information perception
\[ Q = a_1b_1 + a_2b_2 + \dots + a_nb_n \]
as a sum of products of the corresponding coefficients of subject and object from their possible communication. When we arrange these two series of coefficients, one in ascending and the other in descending order, we get the minimal information of perception. Such sums express the communications of inanimate physical substances. An example is Shannon's information, which contains the form of perception information without a visible subject-object interaction. Hartley's information is a very special case of those forms. But the form of a sum of products is also that of a scalar product of vectors. Hence, the information of perception includes the Hermitian spaces of quantum states.
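The claim about the ordering of the two series can be illustrated by a short sketch, with coefficients chosen arbitrarily:

import random

a = [1, 2, 3, 4, 5]
b = [1, 2, 3, 4, 5]

def Q(a, b):
    # perception information: the sum of products of paired coefficients
    return sum(x * y for x, y in zip(a, b))

print(Q(sorted(a), sorted(b, reverse=True)))   # 35, opposite ordering: the minimum
random.shuffle(b)
print(Q(sorted(a), b))                         # something between 35 and 55
print(Q(sorted(a), sorted(b)))                 # 55, same ordering: the maximum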
How the "abnormal" permutation of the coefficients of the subject occurs, so that mutual information greater than the least appears, is the subject of this theory's search. The very possibility of placing such an object under such an exact microscope (Q) tells us that excess freedom is not an abnormal condition, at least not as far as mathematics is concerned.
Overgrowth
Question: Is the mentioned "overgrowth" of probability into information an "abnormality"?
Answer: Actually, it's not. In the picture we see the blue graph of normal distribution (ρ) below the Shannon summand (φ) of that distribution, more precisely:
\[ \rho(x) = \frac{1}{\sqrt{2\pi}}e^{-\frac{x^2}{2}}, \quad \varphi(x) = -\rho(x) \ln \rho(x), \]both of which extend without bound to the left and right along the abscissa, so that:
\[ \int_{-\infty}^{+\infty} \rho(x) \ dx = 1, \quad \int_{-\infty}^{+\infty} \varphi(x)\ dx = \ln\sqrt{2\pi e} \approx 1.42. \]The first integral proves that we have a probability distribution, for which the second integral, representing the Shannon information, can then be defined. The proofs of these two integrals are in the book Physical Information, Theorem 2.4.1 and Theorem 2.4.16 respectively. The graph demonstrates how the Shannon summand and the probability density approach each other and eventually merge (φ ≈ ρ).
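Both values are easy to check numerically; a minimal sketch in Python, assuming the standard normal density written above:

import numpy as np

dx = 0.0001
x = np.arange(-20, 20, dx)                      # the tails beyond ±20 are negligible
rho = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)    # standard normal density
phi = -rho * np.log(rho)                        # Shannon summand of that density

print(rho.sum() * dx)                           # ≈ 1
print(phi.sum() * dx)                           # ≈ 1.4189
print(np.log(np.sqrt(2 * np.pi * np.e)))        # ln sqrt(2*pi*e) = 1.4189...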
The mentioned "merging" is a general phenomenon (Surprises), and starts before and after the abscissa x = ±1/e . As we can see, it refers to increments of mean values of information, to their specific densities and corresponding densities of probability distributions, and not to the individual information itself, which we calculate directly from probability. Therefore, under such conditions, the perception information Q = ⟨a|b⟩, or in the discrete case
\[ Q = a_1b_1 + a_2b_2 + \dots + a_nb_n \]
for large numbers n = 1, 2, 3, ..., can say the same as the "probability" of perception, although in isolated cases the information is greater when the probability is lower. Such "overgrowth" is not a rare phenomenon. We see various forms of it in transitions from the micro to the macro world, as well as elsewhere.
For example, the surface area increases with the square and the volume with the cube of the length, so, say, a smaller body cools faster than a larger one. The laws of large numbers of probability theory speak of greater certainty of events closer to the mean values of more numerous outcomes, of smaller dispersions of a multitude of cases. However, we also recognize such reversals in the transformations of rotations, in the changing direction of oscillations of springs and yo-yo toys, in the changing phases of wave motion, also in the movement of heavenly bodies around the Sun, and further up to the cosmic forces of repulsion (dark energy) versus attractive gravity.
Dark Energy
Question: Explain to me this about "dark energy"?
Answer: There are two theoretically very obscure phenomena of cosmology that have particularly visible expressions. The first of these is "dark matter" (AGC 114905), and the second is "dark energy". So the real answer to the question is "I have no idea", but it's not entirely true (Far Before).
I do not have dark energy as a direct consequence of the informatics principle (that the fabric of space, time and matter is information, and that the essence of information is uncertainty); instead, I consistently test some hypotheses. I will briefly mention three of them, which at first glance are very different and yet seem to have a common root, because they agree with this concept.
1. There is a repulsive force between bosons. Such "particles of space" could have been all in one place and, as modern cosmology believes, they could have been so at the time of the "Big Bang" 13.8 billion years ago. Then the "forces of probability" acted, at first as the only and dispersive ones, but completely in accordance with Riesz's proposition and its interpretation of uniqueness, which is at the basis of my information theory (Only One).
The initial expansion of the universe, which, driven by uncertainty and dispersion, has forms analogous to the known physical forces (Surface), including their spatiality, is additionally supported by time and by the transformations of bosons into fermions. For now, we know about conversions into the Higgs boson and then the creation of the mass of elementary particles. According to this, the so-called Higgs mechanism, at high energies the electromagnetic and weak interactions behave as a single interaction (electroweak), with a field at its minimum of potential energy.
When the energy drops, due to the expansion of the early universe, the field moves to a new minimum of potential energy, which results in the spontaneous breaking of the symmetry of the interactions. This triggers the Higgs mechanism: other weakly interacting bosons, specifically the W and Z bosons, interact with the field and gain mass. The physicists Steven Weinberg and Abdus Salam developed the electroweak theory using this mechanism, from which the modern version of the standard model of particle physics emerged in 1973.
2. There is an "intolerance" (the Pauli exclusion principle) between identical fermions: they cannot be in the same quantum state simultaneously. It is a stronger version of the aforementioned principled uniqueness and of Riesz's proposition. Hence my speculations (Current) about the attraction even of electrons in a state of mutual rest, through a deficit of information, a rest which is actually not possible due to the oscillation of the wave-particle phases, more precisely due to information that disappears as soon as it is created.
The previous deficit of rest is then replaced by a surplus of information that is created by movement, a forced change of position and time, and with it the mutual repulsion of the subjects. Basically, these are again "uncertainty forces" with principled "minimalism". During that initial phase of the universe's development through dense cosmic energy, the force of chance gave birth to the first fermions, again with their repulsive forces. The development of diversity, we note, favors uniqueness and the growth of certainty, and this lower density of uncertainty calls for larger "volumes" of space-time. This is how memory is created, at the same time sustaining the expansion of the universe.
3. The initially predominant formation of fermions from bosons, due to the extremely high energy densities of the early universe and the lack of fermions, slowly calms down and turns into the opposite process, the more frequent formation of bosons from fermions. That is why there is more and more space, and the galaxies are farther and farther apart. I cannot be sure of this, as it remains to be seen which of several such hypotheses, each acceptable from the point of view of this theory, may be the first to prove physically verifiable.
The various states of the universe throughout its history are fundamental. There is also its cooling, the one due to the reduction of the density of uncertainty, and then there are other changes as well. Let us not exclude "overgrowth" and isometries, permutations, as well as periodic local rotations. What we now call "dark energy" we may, according to this, call "the force of uncertainty" in the future.
Gravity
Question: Do you have something about gravity?
Answer: We know that Newton (1687) based his theory of gravitation on the previous observations of Kepler and Galileo, but his ideas about physical force made an ingenious shift in those understandings. Einstein (1916) recast the same story in terms of the curvature of space-time, equivalent to the potential energy of the field. In the script "Space-Time" (1.2.9 Einstein's gravity) you have his method, and then (1.2.10 Schwarzschild solution) its agreement with Newton's in weaker gravitational fields.
1. Bodies move due to gravity in such a way that they keep the total energy, potential and kinetic, constant. In other words, they try not to interact while traveling, so as not to lose or gain energy. It is an apparently new idea, on top of the previous two, so it is not surprising that there is a derivation of the same expressions of gravity (Minimalism of Information, 2.5 Einstein's general equations) from the principle of least action alone.
I do not consider this third method a negation of the previous ones, nor Einstein's as a cancellation of Newton's, but I see them as tools, or steps on the way to greater accuracy. Multiplicity is the basis of this theory of information. In this sense, there should be calculations (Space-Time, 1.2.8 Vertical drop) which in an "ordinary" (weak, so-called Newtonian) gravitational field give the same Schwarzschild metric that is also given by the procedure derived from Einstein's general equations.
2. But that is just the beginning of my informatic story about gravity. The image on the upper right reminds us of the well-known formula for the surface of a sphere, 4πr², where r is its radius, so a fixed quantity spread over the sphere has a surface density that decreases with the square of the distance. Thus decreases the density of information, or of the action, of a gravitational wave that moves radially away from the center of mass. It tells us about the changes in the potential energy of the field!
Closer to the center, where the potential energy of the gravitational field is lower, the energy of a part of the gravitational wave is higher, similar to the kinetic energy of a body in free fall. That simple interpretation of the decreasing density over the surface of the sphere is difficult to see just like that, just as it is not easy for a fish to see that it lives submerged in water, or as it was not easy for us, before Pascal, to see that the Earth is pressed by air.
3. Below is an observation on the equivalence of communication and interaction, or information and action. Places of less interaction (communication) are those of lower potential, which is compensated by a higher speed of movement (v) of the body (mass m) and an increase in kinetic energy (mv²/2). It follows that a stronger gravitational field, for some reason, communicates less. In other words, it manifests itself to a given body with less information.
Less information (minimalism), i.e. less communication, as well as less action, are the guides of physical phenomena. That "some reason" is actually the presence of a part of the body affected by gravity in dimensions that are not visible to a relative observer. The body thus rushes towards a smaller potential, striving for minimalism.
4. Gravity is an effect of the macro-world, unlike the electromagnetic and nuclear forces. An example is the overgrowth of a "surplus into a shortage" of information, where the principle of reduction copes with the excessive accumulation "bursting at the seams" of space-time. Analogously to the beginnings of the "Big Bang" (Dark Energy, 1), driven by randomness and its dispersion, we arrive at another interesting point. It is the penetration of gravity also through the layers of time, albeit much shallower due to the enormous speed of light (c) and the small contribution of time (t) to the "length of time" (ict), measured by our usual sizes.
In this way, expanding in other dimensions, for the sake of the lowest possible density of information and the law of conservation, large masses, not wanting additional substance, capture more and more of it, presenting themselves to newcomers as if they were deficient in it. Those processes of capture lead to "black holes" that do not let even light out, and in the background, thanks to the wider coverage through those invisible dimensions, additional drama happens. The future, which is more and more certain and more and more thin with information, drags the present to itself.
Waves III
Question: Explain to me concentric spheres of field waves?
Answer: We notice that waves of water go up and down; they are vertical, and we say transversal with respect to the surface. The movement of particles on or in water is a floating and circling around stationary verticals, although there are (longitudinal) sea currents, with a horizontal momentum that actually drives water streams into new areas. Similarly, we know that light is a transversal wave, but it carries a longitudinal momentum that propels it lengthwise. Although this resembles a recent question (Waves II), repeating that answer can be avoided.
1. Tsunamis are large ocean waves that are usually created by earthquakes under the ocean, and they reveal the nature of such water movements. The initial movement of the concentric wave from the epicenter, so sudden and violent a phenomenon, is barely noticeable on the surface due to the depth and mass of the water above, but it becomes a disturbance that is transmitted further towards the shores and shallows. It loses energy along the way, yet in the end enough can remain to raise water tens of meters and unleash it destructively on the banks.
Destructive tsunamis are large masses of water that fall on objects on the coast, turning the power of the vertical (transversal) movement of water particles into a strong horizontal (longitudinal) momentum, whose strength breaks objects. But if there were nothing for such a force to damage, or there were no coast, the devastating effects of the tsunami would be absent, and only vertical waves of decreasing height (amplitude) would remain, as the power of the water gradually diminishes with distance from the epicenter.
2. When the water wave arrives in the shallows, due to the shallower depth and heavier flow, its height rises and its wavelength shortens, and it seems to come alive. However, above a flat bottom, the wave would lose strength and height, remaining (mostly) of equal wavelength. Transferred to force-field waves, less powerful disturbances at greater distances from the epicenter do not entail altered wavelengths. Thus we understand that the amplitude means the chance for the wave to be perceived, to cause damage, that is, to leave an impression on the surrounding observers, while the wavelength speaks of a unit (normalized) momentum.
A similar thing happens with light waves, as well as with de Broglie's matter waves. The wave turns towards the centers of lower speed; its wavelengths shorten, and this is corrected when it exits. Such waves move slowly towards the center of gravity, so to astronomers distant stars appear to move slightly towards and then away from a heavy object that passes between them. This is another argument for the (untested) idea that gravity slows down light, but now it is more important to note the effect of light on the electron.
3. Like gravitational waves around a gravitating mass, an electromagnetic field vibrates around an electric charge. Not everything communicates with everything, and it makes sense to focus on the communications of electrons and electrically charged particles. They (Feynman diagram) emit concentric spheres of electromagnetic waves, which are virtual photons (nonexistent) until they interact with another charge, when they become real and impart momentum and spin to it. This is the classic explanation.
In information theory, I would add, each of these spheres is a present moving at the speed of light, because that is how we see it. It would be the same to say that they oscillate in place, say periodically changing their radii, but such an abstract view would be realistically impossible from our point of view. Formally, it would explain why there is no decay of an electron that constantly emits its (virtual) photons. The alternative is that the cosmos maintains an exact balance of all such emissions and absorptions, or that the virtual photons really do not exist until they interact.
4. Be that as it may, these spheres of electromagnetic waves decrease in amplitude, but not in wavelength, as they move away from the epicenter. The former speak about the probability of interaction with a possible next appropriate charge, and the latter (the wavelengths) about the momentum that will be transferred, that is, exchanged on that occasion. This part of the informatic interpretation makes no significant correction of the classical one, except for the following.
A higher amplitude goes with a higher probability of photon interaction, just as a larger tsunami wave is followed by greater attention from those possibly endangered. Also, each individual sphere is a simultaneous event, regardless of the fact that we do not see it as such (simultaneity is relative), and that is why it all collapses at once upon a possible contact. The entire surface of the sphere is one photon!
5. This is how we come to the explanation of quantum entanglement, which I have mentioned on other occasions. It is a consequence of simultaneity and of the indivisibility of the sphere of radiation around the charge. What makes it strange to us, like "phantom action at a distance", is actually the relativity of simultaneity. In the case of multiple charges, when the surfaces of these simultaneities take different topological forms (stretching without tearing), the connection of topology and quantum entanglement predicted in my earlier papers seems to have been recently confirmed (Entanglement and Topology).
Simultaneity
Question: Are concentric spheres of gravity concurrent?
Answer: They are not, really. In his paper on the special theory of relativity (1905), Einstein described how it is possible to have simultaneity in a coordinate system moving at a constant speed, an inertial one. However, in the general theory (1916) this is not possible, because there space-time is curved.
The image on the right shows a sphere, three great circles, and the parallel displacement (transport) of a vector along the arcs AB, BC and CA. That vector always lies on the surface of the sphere, so its direction at the end is not the same as at the beginning. Vectors of such a curved space are changed by transport, which is not the case in flat, Euclidean space. Gauss (Theorema Egregium, 1827) proved that such curvatures of space cannot be removed without deformation of the surface, hence the inevitable physical consequences.
Momenta are vectors that in general relativity change by transport along closed trajectories on such Gaussian curved surfaces. As 4-momenta have energy as a component, not only energy but also information will leak somewhere due to the spatio-temporal changes of gravitational fields. That kind of disturbance has consequences, and one of them is additional dimensions, in which the object in a gravitational field always partly resides.
A simultaneous region of the gravitational field, a part of 3-dim space seen from a single event (a 4-dim point), is not simultaneous everywhere, and to that we add the mass nature of gravity. Each material point is a separate contribution to the central attraction. That is why the Moon raises seas of water and causes tides, since the oceans facing it are closer to it than the center of the Earth, and the pulling force depends on the square of the distance.
In the picture on the left, we imagine two such material points, O1 and O2, with corresponding spheres of equal radii, which intersect at two points, A and B, of this intersection plane. Even if the points of each circle were simultaneous relative to its own center, they are not so relative to the other center. Transporting a vector along the arc of the first circle A → B, and then along the arc of the other B → A, does not end with the initial state (vector).
In short, even if we imagine such a two-sphere of simultaneous events in special relativity, it becomes unreal under conditions of different gravities. Even more complex similar simultaneous "topological surfaces" in expansion (of several such spheres) are more and more difficult to sustain in the presence of increasingly weak gravity.
Time III
Question: Why bother computer scientists with "quantum entanglement"?
Answer: In fact, we neither need to, nor is it possible to, deal with all the parts of such an extensive topic as information theory now appears to us to be.
Fortunately, mathematics is sufficiently deductive, and so successful at prediction, that it is very instructive for us in this matter. If we start from a correct assumption, using only the precise deductions characteristic of it, we can "ride" without limit through the worlds of truths, not caring about what we do not see. All parts of such exact theories are in accord with all other similar parts. When anything at all is shown to be contradictory, it certainly indicates to us that the assumption was not correct.
We thus rely on the reliability of mathematics when we use the imaginary (i² = -1) length (x4 = ict) for the path of light at the speed c = 299,792,458 m/s in a given time interval t, which we then equally successfully use as the fourth coordinate of physical space-time. For the same reason we do not divide by zero, and since in the Planck energy (E = hf) the Planck time (tp ≈ 5·10⁻⁴⁴ s) is an incredibly small interval that emerges from several basic quantities of theoretical physics, the Planck frequency (f = 1/tp) seems to us particularly meaningful.
However, the Planck energy is not the highest possible energy (it is worth about 2 × 10⁹ J, like the chemical energy in the fuel tank of a larger airplane), although the Planck time is supposedly the shortest possible. That is why we are not obliged to transfer such thinking of physics into mathematics. We still do not divide by zero, but we will allow the idea of even shorter time intervals (than tp) with correspondingly larger "Planck energies". Another question, less important here, is whether this means canceling the other Planck units.
Thus we arrive at a freer interpretation of the classic product Et = h, where f = 1/t is the frequency and h ≈ 6.626 × 10⁻³⁴ J⋅Hz⁻¹ is Planck's constant. That product on the left, Et, does not have to equal the value of h on the right, but then the "energy" change E over the time t is no longer necessarily a physically measurable quantity (observable). However, non-physical quantities belong to the topics of information theory. In addition to physics, information theory should include news as well as thought phenomena, ideas, truths and lies.
The next step is to observe the threshold time (t → 0). Then the energy E can take any value, in the manner of the indeterminate expression ∞⋅0 from the calculus of limits. That is why we can talk about "quantum entanglement" as a consequence of the simultaneity of the aforementioned spheres and their unions. We see the reality of such situations from our ability to imagine them and turn them into physical actions, assuming that there is a scientifically researchable connection between them (Unnoticed).
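A rough numerical illustration of this freer reading, using the usual constants (the reduced constant ħ appears only to reconcile the figure of about 2 × 10⁹ J quoted above):

from math import pi

h = 6.62607015e-34            # Planck's constant, J*Hz^-1
hbar = h / (2 * pi)           # reduced Planck constant
t_p = 5.39e-44                # Planck time, s

print(hbar / t_p)             # ≈ 2e9 J, the usual Planck energy mentioned above
# Allowing ever shorter intervals t, the product E*t = h forces E to grow without bound:
for t in (t_p, t_p / 100, t_p / 10_000):
    print(t, h / t)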
The very connection of informatics with physical phenomena could be of interest to computer scientists, informatics in the narrower sense, but it doesn't have to be. Such is the nature of information theory, if it is developed by adhering to the foundations of mathematics. They won't miss anything important. However, things are similar with those whose subject of interest is physical information. They also need not be interested in algorithms, software, or computer hardware in the narrow sense.
Guiding
Question: Do you have any analogies with virtual spheres?
Answer: Yes, I will name a few. Such examples, of these analogies, do not prove the accuracy of the ideas of "virtual spheres", but they are an aid to their understanding, or rather their taming.
The past defines the present and guides the future. We agree that a more thorough study of topics of undoubted facts, such as in the exact sciences, makes the knower more cautious in adopting doubtful explanations. It is an additional question, when some new knowledge after research becomes equal to what was established a long time ago.
The analogy of said knowledge with the expanding virtual sphere (Waves III), whose earlier, smaller forms had a higher chance of interaction and a smaller range of action, is obvious. The simultaneity lasts approximately one wavelength of the sphere, Δt = λ/c, that is, the time it takes light to travel between two successive spheres. The speed of the sphere is the speed of our processes.
The multitude of various concentric spheres centered around the center of a gravitating mass reminds us, for example, of the evolution of biological species. The duration of the life of an individual is, say, the duration of one present (a wavelength), with a greater range of varieties the further back we look into its genetic origin. On the other hand, the influence of those distant ancestors on the new descendants is smaller the longer the time between them.
The heat energy transferred through the test tube, in the upper left picture, if the transfer was limited to it, would be equal to the energy that can be transferred by the concentric spheres through the air around the burner. Each of those spheres would carry an equal amount of energy, but each would depart with a reduced specific heat density and lower temperature. The glow of very distant sources is very weakly felt.
Dealing with information theory, I adopted a slightly different treatment of entropy than the classical one. I consider colder places less informative and increasing entropy decreasing information. In this sense, we have another analogy, now with the expansion of entropy and virtual spheres. The more distant the areas are, the less informative they are, the less influential they are, with less heat given to the distant, colder environment.
Finally, the very calculation of the surfaces of spheres, that purely geometrical consideration, is the kind of analogy we are talking about. I hope it's obvious enough that I don't have to bore you with it.
Dependence
Question: Explain this simultaneity of virtual spheres to me?
Answer: I assume that the action-reaction connection of the distant parts of its surface is what confuses you, because that is the most common question I get. Recently (Summary) I reduced it to an interpretation from the algebra of quantum mechanics itself, which it originally is. Understanding that will be enough, so I will explain it in that way.
Consider the vectors x, y, z, ... as quantum states, and the linear operators A, B, C, ... as quantum processes of some n-dimensional (n = 1, 2, 3, ...) vector space, now a quantum system. The simple algebraic equality y = Ax is the change of the state x into the state y by the process A, where the processes are Hermitian operators or their matrix representations. This additional constraint ensures that the eigenequation Ax = λx always has a real number as its eigenvalue λ.
Without going into the explanation that the "state" is also a "process" (Harmonic) nor into the simple and most common interpretation of the eigenvalue as energy, for now let's just note that λ multiplies (increases or decreases) its associated eigenvector x. For example, an electron changes its amount of energy by moving from one atomic shell to another, but still remains an electron. However, the Pauli Exclusion Principle applies to the electron.
The Pauli exclusion principle, for example, helps chemistry understand the arrangement of electrons in atoms and molecules, and also explains the classification of elements in the periodic table. In general, it is the principle of the impossibility of two quantum particles (fermions) having all the same states in the same quantum system. So the characteristic equation of one process A can have different solutions Ax1 = λ1x1, Ax2 = λ2x2, ..., Axn = λnxn, where eigenvectors with different eigenvalues (λj ≠ λk) are themselves different and mutually perpendicular (xj ⊥ xk).
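A numerical sketch of these algebraic claims, with an arbitrary Hermitian matrix of mine standing in for the "process" A:

import numpy as np

# An arbitrary 3x3 Hermitian matrix A = A† playing the role of a "process".
A = np.array([[2.0, 1 + 1j, 0],
              [1 - 1j, 3.0, 1j],
              [0, -1j, 1.0]])

eigvals, eigvecs = np.linalg.eigh(A)     # eigh is meant for Hermitian matrices

print(eigvals)                           # all eigenvalues are real numbers
# Eigenvectors belonging to different eigenvalues are mutually perpendicular:
x1, x2 = eigvecs[:, 0], eigvecs[:, 1]
print(np.vdot(x1, x2))                   # ≈ 0
# And A only rescales each of them: A x = λ x
print(np.allclose(A @ x1, eigvals[0] * x1))   # True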
In general, the dependence of momenta, energies and information, in the sense of the law of conservation, also applies in nature. The system is so balanced that one part of it loses as much as the other gains. This limits the range of behavior not only of fermions (like electrons), but also of bosons (like photons). That is why the virtual sphere created at the center will, while expanding, try to maintain all those rules. The true meaning of "simultaneity" is that, within its initial "cage" of values, there is no change of one without a reciprocal change of the other, in accordance with the given rules, for as long as the expanding sphere remains simultaneous.
However, simultaneity is a relative phenomenon. That "phantom action at a distance" can confuse a relative observer, when the alignment of one system with itself appears to him as a conversation between two separate systems, even though the algebra of eigenvalues allows just that. Writing the superposition Ψ in two different ways (Summary):
\[ \Psi(x_1, x_2) = \sum_{n=1}^\infty \psi_n(x_2) u_n(x_1) = \sum_{s=1}^n \varphi_s(x_2) v_s(x_1), \]does not mean different quantum systems, or states. They represent one and the same system or the same state, so that a change in one of the two records will also change the other, just as the text written in Serbian and its translation will change, if the content is corrected.
This "correction" of the content is made by the apparatus of the physicist's laboratory, which measures, or changes, the balanced system in one of the records, in order to read the change in the other. Different basis vectors, which cross the same vector space differently, make these notations different. However, they are different spatial-temporal events of relative observers, which makes this interpretation of algebra puzzling.
A similarly closed system consists of a particle (atom, molecule) that decays. Expanding, the parts belong to the ongoing present of the laboratory and remain constantly balanced quantities to which the laws of conservation apply. One record, or one of the parts, cannot be changed or perceived without changing the others. It is only one series of components, observables as they are seen by the laboratory technician in the given circumstances, but those circumstances are the landscape of a wider picture.
Scantiness
Question: Does the present spontaneously disintegrate into the past and the future?
Answer: Interesting question. I have already answered it on various occasions. Let's say, from the conservation of total energy (Information Stories, 1.14 Emmy Noether) and its immutability, the geometric property of symmetry emerges, also the mapping of isometry. This further leads to quantization (Packages), because only then is the proper subset really smaller than the larger whole.
Without going now into (the definition of) infinity, that it can be equivalent to its proper part, just as the set of even numbers is equivalent to the set of all integers, and even of rational numbers, we will stick only to the views of minimalism. As nature tries to realize more likely outcomes more often, it will go to less informative states and, on the other hand, it will be reluctant to leave more certain situations. The present therefore disintegrates non-spontaneously.
That is the answer to the question. For example, the present of a virtual sphere will tend to remain compact (thickness of a wavelength), as will the present of a coordinate system in uniform, rectilinear and inertial motion, until some external force overwhelms them (Simultaneity). It is similar with the quantum of action (h = energy × time), or the equivalent of information.
Against this attractive force of certainty, thanks to which the cosmos gives birth to laws, or let us say due to which the corresponding natural phenomena (not everything communicates with everything) adhere to their rules, stands, as both face and reverse, the repulsive force of uncertainty. The probability force has a slightly different form, and not far from these is the "aggregation force", which I will briefly explain.
Consider, for example, Chebyshev's inequality
\[ \Pr\{|X - \mu| \ge r\} \le \frac{\sigma^2}{r^2}, \]where X is a random variable with dispersion σ² = E((X - μ)²), and μ = E(X) is its mean value. The probability Pr{|X - μ| ≥ r} denotes the chance that the random variable deviates from its mean by more than r > 0 and, as we can see, its bound decreases with the square of that value. Chebyshev's law of large numbers follows from this inequality.
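A small simulation, with an arbitrarily chosen distribution, illustrates both the inequality and the shrinking dispersion:

import numpy as np

rng = np.random.default_rng(1)
mu, sigma, r = 0.0, 1.0, 2.0

# Chebyshev: Pr{|X - mu| >= r} <= sigma^2 / r^2, here 1/4 for r = 2*sigma.
X = rng.normal(mu, sigma, size=100_000)
print(np.mean(np.abs(X - mu) >= r), sigma**2 / r**2)   # ≈ 0.046 <= 0.25

# Law of large numbers: the mean of n trials scatters less and less around mu.
for n in (10, 100, 1000):
    means = rng.normal(mu, sigma, size=(2000, n)).mean(axis=1)
    print(n, means.var())                              # ≈ sigma^2 / n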
In general, the laws of large numbers say that the chances of the values of random variables (X) scattering beyond a given radius (r) decrease as that radius increases. In other words, these probabilities become smaller and smaller as more and more of the random values fall within the range of the radius. If a given r is fixed and the number of trials of X increases, the frequency around the mean E(X) increases, that is, the dispersion is reduced. There are other ways to describe the laws of large numbers of probability theory, but the point always remains somewhere around increasing certainty by increasing the number of outcomes.
When the tendency towards more frequent realizations of more likely random events is understood as a "probability force", then the laws of large numbers speak of an "aggregation force". I have listed several ways of deriving, or of seeing the cause of, the "force of gravity": by means of the principle of least action, the scantiness of information or communication, the probability of moving towards a smaller potential, towards a slower flow of time, and the like. Now we can add to that series this "force" of the law of large numbers, which is actually just one of the forms of the previously mentioned "uncertainty force".
Because of this retreat from uncertainty the present becomes more certain, and because of the law of conservation of information the past deepens; it directs and focuses the future. It is probability that pulls what is "now" towards what "will be" for the sake of history, and all of that together makes one of many, our reality.
Saturation
Question: What happens to saturation due to the "clumping force"?
Answer: Due to the laws of large numbers, the dispersion of longer series of trials shrinks, grouped around the expected values, and the system becomes more certain and more attractive. But that attractive force is then a trap, because nature's desire is not a concentration of information, so information is extracted from there in special ways. Here are some.
In gravitational crowding, there is a leakage of information into other dimensions, a curvature of space, and a relatively slower flow of time that joins the aforementioned "aggregation force". These two aspects, the push of the law of large numbers and the appearance of a deficit of information due to the slower flow of time, work like harmonious brothers at the same job.
When we look more boldly, a similar effect of slowing down time seems to occur in the transition of observation from the micro-world to the macro-world. Above both is the same speed of light as the limit, but the relative measures of their lengths are not equal, so smaller objects behave more vividly than large ones. The vibration rates of electrons in atoms and molecules of a substance change significantly faster than, for example, the movements of celestial bodies. The hopping of insects seems incredibly fast compared to the movements of elephants.
Although it seemed to us that it had nothing to do with gravity, I hope that now we perceive that subtle connection between the law of large numbers and nature's tendency to skimp on information that we call the "accumulation force". There is a similar phenomenon and a similar effect of "slowing down time" in the macro-world in relation to the micro-world. In both cases, the situation of a slower flow of time becomes an attractive possibility.
The same analogy applies to the processes of growing up, maturity and aging of a living being. During maturation, it seems to us that subjective time passes more and more slowly, because when younger we are hungrier for information, a hunger we later satiate. When we have more experience we are less prone to risk and uncertainty and, similar to bodies in gravity, living beings in general slow down. In the same way, too well-organized societies, more precisely over-regulated and therefore overly mature ones, also slow down.
These are "vertical changes", from sparser to denser environments. In the case of "horizontal changes", such as in the special theory of relativity, time is slower at a higher speed of movement of the system and faster if the speed is lower, so there is no gravitational attraction, nor the effect of large numbers. A body in free fall does not notice the effect of the force of gravity, nor its changes of speed, nor that it is moving toward the mass. Such a body can remain in orbit around the center of gravity forever, without descending.
Burden
Question: Small objects are bouncy and large objects are sluggish?
Answer: The sluggishness of the masses is known, I guess, and the question is why, how to interpret it with information theory.
It follows approximately from the above
f = c/d,
that the frequency (f) of the body is equal to the quotient of the speed of light (c) and the diameter of the body (d). The corresponding wave, of wavelength λ and speed v = fλ, then satisfies
v : c = λ : d.
The smaller the body (d) is, the livelier it is and the higher the speed (v) of its free movement; conversely, larger bodies, more saturated by repetition of the experiment, are less mobile, more inert.
This "bounciness", the frequency (f) of small bodies, is related to the dispersions that decrease with repeated trials and to the laws of large numbers mentioned in the previous question. On the other hand there is the wave nature of matter, added here to arrive at an interpretation of inertia in the manner of this information theory.
All bodies vibrate, smaller ones faster and larger ones more slowly. For comparison, sea waves move at a speed (in meters per second):
\[ v = \sqrt{\frac{g\lambda}{2\pi}} \approx 1.25\sqrt{\lambda}, \]where the standard gravitational acceleration is g = 9.80665 m/s², and λ is the length of the water wave in meters. Ripples created by wind on the surface have frequencies greater than f = 5 Hz with wavelengths around λ = 10⁻² m. Larger sea waves have frequencies from 5 down to 0.1 Hz and wavelengths of up to 130 meters and more; at λ = 130 m the formula gives a speed of v ≈ 1.25·√130 ≈ 14 m/s.
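A quick check of those numbers with the given formula, for the two wavelengths mentioned:

from math import sqrt, pi

g = 9.80665                      # standard gravitational acceleration, m/s^2

def wave_speed(lam):
    # v = sqrt(g * lambda / (2*pi)) ≈ 1.25 * sqrt(lambda)
    return sqrt(g * lam / (2 * pi))

for lam in (0.01, 130.0):        # wind ripples vs. large sea waves
    v = wave_speed(lam)
    f = v / lam                  # frequency from v = f * lambda
    print(lam, round(v, 2), round(f, 2))
# 0.01 m  ->  v ≈ 0.12 m/s, f ≈ 12.5 Hz
# 130  m  ->  v ≈ 14.2 m/s, f ≈ 0.11 Hz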
The explanation of inertia, as well as of laziness, in the way of this information theory lies in resistance to options. Bodies accumulate mass (experiences) driven by the desire for certainty and following the law of large numbers, which then becomes a burden and slows them down. By building objects that are not whole within one present (light takes time to travel from one end of the body to the other), lazy and inert compositions are created. States resist options, and the more they are stacked, the greater their burden.
Games
Question: Can we use game theory in information theory?
Answer: In its essence, a game is a competition with an opponent, a defiance of the spontaneous flow of things, so it acquires its full meaning only with participants who are living beings. It is therefore a topic of "information theory" (mine), not of physics as such, for example. But machines can also play games.
The simulations we make are proof that there are algorithms that simulate live opponents and, moreover, that they can beat them (Reciprocity). The best strategy is then one that follows the information of perception Q = ax + by + cz + ..., where the player's components (a, b, c, ...) are opposed by the corresponding components of the opponent (x, y, z, ...). When the sum of products Q is higher, the communication of the opponents is greater, the vitality of their coupling is higher, and the level of play is higher.
The goal of the algorithm of the best game (I league) thus becomes the deployment of its capacities to counter the opponent's initiatives in a timely, measured and unpredictable manner. At the same time, it is necessary to work a lot on the assessment of the position and the evaluation of the components. Machines can do all this, but only now, after understanding the "Information of Perception".
For example, in explaining gravity by a "saturation force" (Saturation), where smaller and faster-spinning particles coalesce into larger, slower groups, escaping the (proportionally to size) greater uncertainty of the micro-world, both sides (micro and macro) of the "competitors" behave as physical substance and build perception information by combining larger coefficients with smaller ones and smaller with larger, thus as the weakest players (III league). However, the micro-world enters the "game" with greater uncertainty (the force of probability) and, despite equal timeliness and moderation, wins.
According to this, natural processes can also be viewed as "competitions" of very weak players that take place exactly as they do because of the "Uncertainty Force", that is, the principle of saving information or, what becomes the same in this theory, the principle of least action. So much for this game theory, which derives from this information theory.
Penalty
Question: Do you have any other examples of this and that game theory?
Answer: It has been known in legal practice for a long time, and more recently in game theory, that offences and punishments should be balanced (1.10 Crime and Penalty, Stories about information). A contribution to this, among others, is Higher Punishment, Less Control, with its agent and crime games.
For example, in football, sending off a player for a trifle would make the game uninteresting, as would ignoring all fouls. Countries with too little tolerance for violations against the government (dictatorships), or that are generally over-regulated, lose their vitality and become unsuccessful and boring.
Let us observe the game of "deeds and measures" as a competition of citizens against the regime and measure their perception information Q = a1b1 + ... + anbn. If the pairs (ak, bk) are the corresponding factors of the sum of products, then a higher value of Q goes with a higher vitality of the society. The greatest result and the best strategy arise when the participants adhere to timely, proportionate (to their powers) and unpredictable (original) reactions to the other party's actions. With that, the moves of the regime are punishments for evil (minus on minus) and rewards for good deeds (plus on plus), and if the latter are missing, we have only the negative games of "crime and punishment".
The similarity with the above legislation, or with classical game theory, is obvious. However, we see that including rewards for good deeds (plus on plus) can significantly raise the vitality (Q). All societies are more penetrating, more successful and more energetic when they have greater vitality. When the value of Q decreases, the society is aging. It is like the overly strict soccer game mentioned above, which has therefore become too cautious, or which, on the contrary, is in disarray due to too loose restrictions.
Moreover, if we separate the behaviors of average good (bad) individuals and act accordingly, as in the case of:
Q' = ... + 5⋅3 + ... < ... + 9⋅4 + (-4)⋅(-1) + ... = Q'',
because 15 < 40, we get an even greater vitality of society. It would be as if a criminal sentenced to the most severe punishment were given a reward for a good deed done in the meantime, which is normally not the practice, but according to this it should be. However, dividing good acts into merely less good ones (and bad into less bad) is not good for vitality, as can be seen from the example:
... + 9⋅4 + 4⋅1 + ... < ... + 13⋅5 + ...,
because 40 < 65. In games, the first case shows the value of subtlety, and the second the strength of the collective. We recognize these examples in situations ranging from commercial to war strategies.
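The arithmetic of these two comparisons, as a minimal sketch:

def Q(pairs):
    # Perception information: sum of products of the paired coefficients.
    return sum(a * b for a, b in pairs)

# Averaged behavior versus behavior split into positive and negative components:
print(Q([(5, 3)]))                    # 15  (one averaged pair)
print(Q([(9, 4), (-4, -1)]))          # 40  (5 = 9 - 4 and 3 = 4 - 1, split)

# Splitting good acts into merely "less good" ones lowers the result again:
print(Q([(9, 4), (4, 1)]))            # 40
print(Q([(13, 5)]))                   # 65  (13 = 9 + 4 and 5 = 4 + 1, merged)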
The alleged fight of the state against crime formally becomes a competition of measures against misbehavior, timely, measured and unusual, which not only defeat illegal behavior but also improve the general ability (vitality) of the state itself. The Nash equilibrium model (Cooperation) is just a seemingly different example of such a game, as is the "minimax theorem" of von Neumann, the founder of game theory. Newer computer science and classical ("this and that") game theory are, or will be, shown to be two sides of the same coin.
Minimax
Question: Explain Von Neumann's "minimax theorem"?
Answer: In the picture on the right, we see a typical saddle-shaped surface drawn with geodesic lines. The first of these lines follow the red arc AB, and the second, perpendicular to them, follow the green arc CD.
As functions, the first lines have minimal ordinates at the base and around the saddle, as if gathering around the green arc. Their maximum is the black dot at the top of the saddle's bottom. The other lines, which follow the green arc, have maxima at the top of the saddle depression, along the red arc, and the minimum of such lines is again the black point at the top of the saddle's bottom.
To be precise, let f(x, y) be the saddle surface whose graph is shown in the picture, and let X and Y be the sets of the first (along AB) and second (along CD) geodesics, in that order. Then
\[ \min_{y\in Y} \max_{x\in X} f(x,y) = \max_{x\in X} \min_{y \in Y} f(x,y). \]This is the statement of the famous Von Neumann's "Minimax theorem" and we have just shown that it holds for saddle surfaces, like the one in the picture.
The same theorem does not apply to non-saddle surfaces. This is easily verified when we imagine some of them. For example, the first geodesics of the sphere would be horizontal circles with diameters left-right (AB), and the second would be perpendicular to them, circles of vertical planes with diameters of one of the perpendiculars to the first (CD ⊥ AB). The minimum of the maximum of the former is the bottom of the sphere, and the maximum of the minimum of the latter is the side point of the sphere.
I listed examples of the validity and invalidity of the minimax theorem, using matrices, in the attachment Strategies. Let us look at one such example:
\[ M = \begin{pmatrix} 4 & 3 & 8 & 6 \\ 2 & 1 & 5 & 7 \\ 1 & 2 & 0 & 3 \end{pmatrix}. \]This matrix has the column maxima 4, 3, 8, 7 in order, the smallest of which is 3. Its row minima are 3, 1, 0 in order, and the largest of them is the same number, 3. Therefore, this matrix is analogous to the saddle surface in the given picture. By the way, matrices are more often not "saddle", which is easy to check by taking them at random.
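The same check written as a short sketch, which also shows how rarely a random matrix is "saddle":

import numpy as np

M = np.array([[4, 3, 8, 6],
              [2, 1, 5, 7],
              [1, 2, 0, 3]])

# Minimum of the column maxima and maximum of the row minima.
print(M.max(axis=0).min(), M.min(axis=1).max())   # 3 and 3: a saddle value exists

# Random integer matrices are mostly not "saddle":
rng = np.random.default_rng(2)
trials = [rng.integers(0, 10, size=(3, 4)) for _ in range(1000)]
saddle = sum(A.max(axis=0).min() == A.min(axis=1).max() for A in trials)
print(saddle, "of 1000 random matrices have a saddle value")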
Let the columns of matrix M represent the possible moves of the first player (team), and the rows the possible moves of the second. If the column maxima are the most dangerous moves of the second player, then the choice of the smallest among those maxima, i.e. the second column with 3, is the best move of the first player. It prevents the other from making his best move. If the other side has the same prospect, as in matrix M, then there is no win.
When the game is "saddle" in the sense of the picture above, then the best strategy is to make such moves that the opponent does not get to make his best ones. However, for more complex situations, with too many options or too little time, that simplistic type of game generalizes to "distraction", which is exactly what this information theory proposes (Reciprocity). In short, it means positive responses to positive initiatives, and negative responses to negative ones, in a timely, measured and at least slightly unpredictable manner.
The peculiarity of von Neumann's strategy (minimax) is reflected especially in "zero-sum" games (Zero-sum game). These are games where one side wins exactly as much as the other loses. Then a draw from the "saddle" surface is really a no-win situation. One opposite of zero-sum games are games where both sides can win (win-win), like trade, which exchanges money for goods, value for the same value, but still generates some profit for both parties (both the seller and the buyer). Another opposite are games where both sides can lose (lose-lose). Perception information categorizes all of these (Win Lose).
The universality and simplicity of the methods of perception information are its advantages in the analysis of game-theoretic moves, although at first glance the two (information theory and classical game theory) have little to do with each other.
Parallel
Question: If "parallel universes" exist, how come we don't see anyone from there?
Answer: Along with the picture on the left is a link to an interesting video that will help with the answer. An important part is superposition, entanglement and measurement, which it talks about and which for now we translate as distribution, simultaneity and interaction.
Interaction is measurement, what we perceive; it is a declaration, a transformation of uncertainty into one of the possible outcomes, the process of the so-called collapse of the superposition. The formation of certainties instantly creates different outcomes in separate "parallel worlds". Only one of these, ours, is real, and the others are pseudo-real to us.
The total "amount of uncertainty" of the prior probability distribution, or superposition of states, is equal to the information given by each of the outcomes (individual states), that is, by each of those "parallels". Formally, it is like the uncertainty before throwing a die (log 6) and the individual information obtained when one of its numbers, any of its possibilities, falls.
Those initial possibilities, as uncertainties, also contain all those that "we do not see from there". The answer to the question reaches, first, the strangeness of the law of conservation of information and, second, the understanding of ultimate objective uncertainty, not mere unpredictability. Just as we do not have a "six" when some other number falls after throwing the die, so we "do not see" the other (pseudo)realities.
In the book "Multiplicities" (Differences, pp. 11-14) you will find proof that the discrete world of perception is located in a continuum of possibilities. Considering an eternity incomparably greater than our reality, the worlds of possibilities are so numerous that the probability of randomly reaching some target reality is zero, like the probability of hitting a predetermined number at random on an interval of real numbers. There is no chance that we could go back to our past in a time machine, even if the described return were possible at all; nor of randomly guessing any of the preset pseudo-realities.
The two restrictions, the conservation law and the zero probability, are also supported by the rarely mentioned view of Riesz (Frigyes Riesz, 1880 - 1956) in linear algebra, that for every linear functional f on a finite-dimensional vector space V there exists a unique vector u ∈ V such that f(v) = ⟨v, u⟩ for every vector v ∈ V. Namely, the state is the interpretation of a vector, and the sum of products, as information of perception, is the interpretation of a functional. Accordingly, the perceiving subject is the mentioned vector u, and it is unique.
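A sketch of Riesz's statement in coordinates, with an arbitrarily chosen functional of mine:

import numpy as np

# A linear functional f on R^3, given by its values on the standard basis.
def f(v):
    return 2.0 * v[0] - 1.0 * v[1] + 0.5 * v[2]

# Its unique representing vector u, with f(v) = <v, u> for every v:
u = np.array([f(e) for e in np.eye(3)])
print(u)                                       # [ 2.  -1.   0.5]

v = np.array([0.3, -1.2, 4.0])
print(np.isclose(f(v), np.dot(v, u)))          # True, for any chosen v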
Riesz's position is sufficiently general, or abstract, that we can consider the environment beyond that subject as unique, as well as each individual "parallel reality". Truths, that is, exact laws, are either mutually independent or non-contradictory, and so is this limitation. It also agrees with principled diversity, because nature does not like equality (Differences).
Transitions
Question: Do transitions between "parallel realities" occur?
Answer: Theoretically yes, perhaps at the quantum level, but (almost) never in our reality. The first is a matter of mathematics and its theories, such as the imaginary numbers, which are not incorrect but are not directly realistic (Pinocchio).
The second touches on the deeper meaning of the previous answer (Parallel), which I wrote about, but for which it is necessary to wait for some confirmation from practical physics. Transitions from uncertainty to certainty (according to my information theory) are the basis for understanding, say, particle-wave duality, the double slit, tunneling, or bypasses. The subject of the link in the picture on the right is similar, but from my point of view an unfinished story.
The state of uncertainty, in quantum physics closest to "superposition", or in probability theory to "distribution", passing into the state of certainty, measurement, or information, whatever you call it, corresponds to the transition of a wave into a particle in the mentioned experiments. As with a dice roll, all its possibilities (six of them) turn into outcomes, each in a different "parallel reality", of which only one (ours) is real and the others (five of them) are pseudo-real. Each such pseudo-reality is as exact as ours, the way complex analysis is as exact as the mathematics of real numbers.
The transition from uncertainty to certainty is an energy-time process in each of the realities of information (action is the equivalent of information). From the selectivity of communication and the principle of least action, it follows that this transmission occurs in special circumstances, in the smallest portions of energy and time. However, the uncertainty is simultaneously copied into other pseudo-times, between which there is no flow of time, so they have neither a mutual transfer of energy nor a transfer of information.
When the probability waves passing through the two apertures, in the upper image on the right, meet the screen somewhere at the end of their journey, they accumulate as uncertainties until the moments when they can be translated into certainties, analogous to the process of the photoelectric effect (Half Truths). But when some other possibility of interaction (communication, measurement) gets in their way, that uncertainty does not even reach the screen. For example, by closing just one of the apertures, the interference magic disappears, together with the corresponding diffraction lines on the eventual screen, and only a fraction of the behavior of the previous probability wave remains.
While in a state of uncertainty, the particle-wave is passively "present" in all possible realities, because if it has declared itself in one, then all its options are somewhere in some others. It can "disappear", pass into something else, only after declaring itself in a given reality, and independently of its change among the others. Only when that change becomes a new uncertainty can we talk about "transitions" between pseudo-realities, or more precisely about new realizations. In this sense, the interferences in the above images are phenomena present in multiple (pseudo)realities. The same holds for the creepy, "spooky" as we say here, bypass of a solenoid.
Another thing is that nature likes to repeat its forms, so in the macro-world it imitates material versions of the micro-waves of probability. It baffles us, but not for long, until we recognize that the original behavior of uncertainty becomes ever rarer with the certainties of the law of large numbers.
The Bottom
Question: Is the "principle of minimalism" happening at the quantum level?
Answer: I think I understand the dilemma. How to strive for even less, if you are the smallest possible? The answer is in the question itself: we cannot do with less, but we do not give ourselves over to more. That principle of minimalism holds to the very bottom.
The smallest information is created from at least two possibilities, so as soon as it appears, it is cancelled. Stale news is not news. Therefore, information disappears as soon as it is created, passing into uncertainty, and then again into one of its possible outcomes. Such are the waves of probability in the dual, wave-particle structure of matter.
Certainty-uncertainty changes are constantly at work at that lowest level. When we move from the quantum world to the macro world, the laws of large numbers take their toll, the visibility of coincidences decreases, and certainties become ever more prominent to us. Through the properties of elementary particles, and especially of atoms and molecules, nature finds ways to "sweep under the carpet" the randomness of its structure. It seems to be running away from itself.
The small number of options of individual elementary particles, and also the multitude of those micro-physical options, is understandable. Like quanta of action, information resides in the smallest packages of uncertainty, whose parts would further be certainties. In contrast, the principle of uniqueness goes with the impossibility of repeating the same broader wholes and, therefore, with diversity. That is why the world can be only as big as these two settings allow.
On the other hand, the world of information is infinite (because such mathematics is consistent), so this theory must adopt that possibility as well. However, then we accept the Borel-Cantelli lemma. It says, roughly, that of an infinite sequence of independent events whose probabilities have a finite sum, almost surely only finitely many occur. All that multiplicity is in vain when there is zero chance that any given piece of it will be realized. In other words, beneath the limited variety of the micro-world of physics lurk unlikely, unlimited possibilities. Many of them we will never get to know.
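A simulation in the spirit of that lemma, with events A_n whose probabilities 1/n² have a finite sum:

import numpy as np

rng = np.random.default_rng(3)

# Events A_n, n = 1, 2, ..., N, each occurring independently with probability 1/n^2.
# Since the sum of 1/n^2 is finite, Borel-Cantelli says only finitely many occur (a.s.).
N = 1_000_000
n = np.arange(1, N + 1)
occurred = rng.random(N) < 1.0 / n**2

print(occurred.sum())          # a small number, typically 1 to 3
print(n[occurred])             # and (almost) all of them among the very first indices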