Chances
Question: I don't understand that probability at all. What are the odds?

Answer: Let's imagine that we are tossing a fair coin, such that the chance of landing "tails" equals the chance of landing "heads." We say the chances are 50:50, or even. When we toss such a coin 100 times, it will land "tails" about 50 times, and the rest, about 50 times, will be "heads." The number p = 50/100 = 1/2 is the probability that either one of the two falls, no matter which. It is the basic probability for tossing a coin.
An unfair coin would have probability 0 < p < 1 of falling "tails" and q = 1 - p of falling "heads," with p ≠ q. It is the same as randomly drawing one of 10 equal balls from a box with 7 white and 3 black, when p = 0.7 and q = 0.3.
1. If we toss a fair coin twice, there are four equally likely outcomes: {TT, TH, HT, HH}. If we knew that one of the two results is "tails" (T), then the outcome is among {TT, TH, HT}. The chance that the second is also "tails" is one in three! And not one half, which would be a flippant and incorrect answer. We can check this experimentally by tossing two coins at least 100 times and discarding the outcomes with two "heads." About 75 outcomes remain, and of these, about a third have both "tails." This is how we come to understand "conditional probability."
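A quick way to convince ourselves of the one-in-three result is a small simulation. Here is a minimal sketch in Python (the code and its names are my illustration, not part of the original text): it tosses two fair coins many times, discards the double-heads outcomes, and counts how often both coins show tails.

```python
import random

def conditional_tails(trials=100_000):
    """Estimate P(both tails | at least one tail) for two fair coins."""
    kept = both = 0
    for _ in range(trials):
        a, b = random.choice("TH"), random.choice("TH")
        if a == "H" and b == "H":
            continue  # HH is excluded by the condition "one of them is T"
        kept += 1
        if a == "T" and b == "T":
            both += 1
    return both / kept

print(conditional_tails())  # about 0.333, not 0.5
```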
2. When we toss a coin three times, the chance of landing a "tail" all three times is one of eight possibilities:
TTT, TTH, THT, THH, HTT, HTH, HHT, HHH.
In general, for n = 1, 2, 3,... coin tosses, the chance of landing "tails" every time is one in 2ⁿ = 2, 4, 8,..., so the probability of such an event is 2⁻ⁿ, in case the coin is fair. It is the number pⁿ with p = 1/2, as we saw in the introduction.
Now we know how to define the event that "tails" lands for the first time in exactly the nth toss of a coin, because this means that in the previous n - 1 tosses a "head" came up. Such an event has probability qⁿ⁻¹p, where q = 1 - p is the probability that (in a particular toss) a tail does not fall. In the case of a fair coin p = q = 1/2, while for an unfair one only p + q = 1. These are the common situations in which the coin "does not remember" previous outcomes, and the numbers p and q are constants.
3. When a coin does not "remember" previous outcomes, the probabilities that the nth toss will land "tails" for the first time are:
\[ P_1 = p, \quad P_2 = qp, \quad P_3 = q^2p, \quad \dots, \quad P_n = q^{n-1}p, \quad \dots \]
The sum of these probabilities is:
\[ P_1 + P_2 + P_3 + \dots + P_n + \dots = \]
\[ = p + qp + q^2p + \dots + q^{n-1}p + \dots \]
\[ = (q^0 + q^1 + q^2 + \dots + q^{n-1} + \dots)\,p \]
\[ = \frac{p}{1 - q} = \frac{p}{p} = 1. \]
So we have a probability distribution, Pₖ ∈ (0, 1), a sequence of outcomes of which exactly one will happen.
The mathematical expectation of the distribution is the mean value of the step number k:
\[ \mu = 1\cdot P_1 + 2\cdot P_2 + 3\cdot P_3 + \dots + n\cdot P_n + \dots = \]
\[ = p + 2qp + 3q^2p + 4q^3p + \dots = (1 + 2q + 3q^2 + \dots)\,p \]
\[ = \left[\frac{1}{1 - q} - 1\right]'_q p = \frac{p}{(1 - q)^2} = \frac{p}{p^2} = \frac{1}{p}. \]
It is, therefore, larger the smaller the probability p ∈ (0, 1) is.
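The result μ = 1/p is easy to test empirically. A hedged Python sketch (the value p = 0.3 is chosen only for illustration): we repeatedly toss an unfair coin until the first "tail" and average the number of tosses needed.

```python
import random

def tosses_until_tail(p):
    """Number of tosses until 'tails' first appears; P(n) = q^(n-1) p."""
    n = 1
    while random.random() >= p:  # with probability q = 1 - p a 'head' falls
        n += 1
    return n

p = 0.3  # an unfair coin, assumed only for this example
trials = 100_000
mean = sum(tosses_until_tail(p) for _ in range(trials)) / trials
print(mean, 1 / p)  # both approximately 3.33
```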
4. We can see that this is an exponential distribution from pⁿ = e^{-λn}, where the base is Euler's number e = 2.71828... and λ = -ln p. Similarly, the probability density of the exponential distribution is ρ(x) = λe^{-λx}, λ > 0, so it is
\[ \int_0^\infty \rho(x) \ dx = \int_0^\infty \lambda e^{-\lambda x}\ dx = 1. \]The mathematical expectation of the exponential distribution is the mean value of the variable:
\[ \mu = \int_0^\infty \lambda x e^{-\lambda x}\ dx = \frac{1}{\lambda}. \]You will find proofs and many such calculations in my book Physical Information.
This one has the maximum information of all continuous distributions on the domain x ∈ (0, ∞) with the given expectation μ. Apart from the strict proof mentioned in the link, we can try to understand intuitively why the given exponential function would have maximum information.
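Both integrals above can also be checked numerically. A minimal sketch using scipy (the rate λ = 0.7 is an arbitrary choice, assumed only for the test):

```python
import numpy as np
from scipy.integrate import quad

lam = 0.7  # an arbitrary rate λ > 0, assumed for illustration
rho = lambda x: lam * np.exp(-lam * x)  # exponential density ρ(x)

total, _ = quad(rho, 0, np.inf)                # ∫ρ(x)dx, should be 1
mu, _ = quad(lambda x: x * rho(x), 0, np.inf)  # mean, should be 1/λ
print(total, mu, 1 / lam)
```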
5. When a future event "remembers" a previous one, old outcomes influence new ones. The past guides the future, and the odds of heads and tails are no longer equal when flipping a coin. Again, knowing that one of two events is more likely to occur reduces the uncertainty, and with it the information, of the outcome.
For example, written in binary, the Hartley information of two equal options is log₂ 2 = 1 = -log₂ 0.5, and the mean information of a fair coin toss is:
\[ -0.5\log_2 0.5 - 0.5\log_2 0.5 = 1. \]
We can mark with 1 the information of the outcome of a fair coin toss (log₂ 2 = 1). That is the binary Hartley information, with logarithm base 2. The information of two tosses of a fair coin is 2 (log₂ 4 = 2), of three tosses 3 (log₂ 8 = 3), and so on (log₂ 2ⁿ = n).
6. However, when p ≠ 0.5, say the larger of the two chances, and q = 1 - p ≠ 0.5 is the probability of the opposite event, the mean information of that unfair coin toss will be smaller:
\[ -p\log_2 p - q\log_2 q < 1. \]
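This inequality is easy to see numerically. A short Python sketch of the mean information (binary entropy) of an unfair coin:

```python
import numpy as np

def mean_information(p):
    """Mean information of a biased coin in bits: -p log2 p - q log2 q."""
    q = 1 - p
    return -p * np.log2(p) - q * np.log2(q)

for p in (0.5, 0.6, 0.7, 0.9):
    print(p, mean_information(p))  # exactly 1 bit only at p = 0.5
```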
When we know that something is going to happen, it is not news. Similarly, when the first members of the distribution (3) have higher probabilities, but the series of probabilities still converges slowly enough that its mean value remains the same, or with correspondingly similar densities in the continuous exponential (4), the mean information will be smaller.
This is why the principle of saving information works against the exponential distribution, reducing the density of information in various ways and thus degenerating the exponential change of probabilities. For example, bacilli will not reproduce exponentially forever because, as they multiply, they poison the environment, killing victims or leaving them resistant to their spread, changing the chances of further reproduction.
Deflection
Question: What is this "force of probability" you sometimes refer to?

Answer: I've linked to this question, which should be enough of an answer, but here's a little more clarification. In the meantime, check out the Compton effect (Collision).
The tendency of more probable states to be realized more often is shown by many elements of the movement of the celestial bodies of the solar system, as well as by electromagnetic charges, in the abstract space of probabilities. By the way, there is no physical action without the transfer of information, so that action and transfer of information are equivalent phenomena, and we reduce the choice of less information to an "uncertainty force." It remains for us to recognize such forces (of probabilities, information, and uncertainties) in the well-known basic physical properties of particle-waves.
It is known that in the Heisenberg uncertainty relations, the position uncertainty has the meaning of the particle's wavelength λ (Quantum Mechanics, 1.4.4 Uncertainty principle). A shorter wavelength means less uncertainty and consequently a higher probability of position. In other words, the particle-wave will not deviate from a shorter wavelength λ to a larger λ' without a force or a collision with another particle. It does not spontaneously go into a less likely state.
This is, for now, just my "informatics" interpretation of the Compton effect (same book, 1.1.6 Born rule):
\[ \lambda' - \lambda = \frac{h}{m_ec}(1 - \cos\theta), \]where h = 6.626 070 15 × 10⁻³⁴ J⋅Hz⁻¹ is Planck's constant, mₑ the rest mass of the electron (rest energy mₑc² = 8.187 105 × 10⁻¹⁴ J), c = 299 792 km/s the speed of light, and θ the scattering angle. This is Compton's formula.
Briefly, Compton (1923) observed the scattering of γ-rays on the electrons of carbon and found them scattered, after the collision, with longer wavelengths (λ') than the incident ones (λ). The quotient h/(mₑc) = 2.43 × 10⁻¹² m is called the Compton wavelength of the electron. To obtain Compton's formula, the laws of conservation of energy and momentum are used, then the relationship (E = pc) between energy, momentum, and the speed of light, with a little math.
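For a feel of the scale, here is a hedged Python sketch of Compton's formula with the standard constants (the numeric values are ones I supply for illustration, not taken from the text above):

```python
import math

h = 6.62607015e-34      # Planck's constant, J*s
m_e = 9.1093837015e-31  # electron rest mass, kg
c = 2.99792458e8        # speed of light, m/s

def compton_shift(theta_deg):
    """Wavelength increase λ' - λ of a photon scattered by angle θ."""
    theta = math.radians(theta_deg)
    return (h / (m_e * c)) * (1 - math.cos(theta))

print(h / (m_e * c))        # Compton wavelength, ~2.43e-12 m
print(compton_shift(90.0))  # the shift at 90° equals h/(m_e c)
```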
It is similar to Rutherford scattering (1911), pictured above left, which was observed earlier and used as proof of atoms. Explaining the deflection of alpha particles striking thin sheets of metal was then the domain of nuclear physics, serving in addition as evidence for the existence of a positively charged atomic nucleus holding almost all the mass of an atom.
Another proof of the "probability force" in the informatics sense is Born's law (same book, 1.1.6 Born rule). Enough has been written about it that it need only be mentioned here. In short, it is the law of wave amplitude, as opposed to Compton's law of wavelength, as a measure of probability. The square of the amplitude is the probability of finding the given particle-wave under the circumstances of the measurement.
The electric energy of a photon induces magnetic energy, which then becomes electric again. A particle of light thus cycles from one phase to the other, keeping its energy constant through the two electromagnetic phases. It is the force of probability that induces these changes, both in the case of photons and in more complex systems of the microworld. However, real physical action is at least one quantum.
Spacetime
Question: Can you explain to me the concept of "present" that you develop from the theory of Information of Perception?

Answer: The question first concerns the Pythagorean theorem (picture link on the right), the theory of relativity from its beginnings, and then the definition of information and probability.
1. Let's start with "the sum of the squares of the legs equals the square of the hypotenuse," a² + b² = c², which is the Pythagorean theorem from ancient times. When we consider the lengths of the sides of right triangles as intervals, say along the grid of a Cartesian rectangular coordinate system (Oxyz), this theorem becomes Δx² + Δy² + Δz² = Δd². The lengths Δx, Δy, and Δz then lie along the axes of the system, parallel to the edges of a cuboid (rectangular parallelepiped), where Δd is the length of the cuboid's diagonal. It is also useful to see the inner product of vectors.

In the picture on the left, the triangle ABC₁ is obtuse (∠C₁ > 90°), the triangle ABC₂ is right-angled (∠C₂ = 90°), and the triangle ABC₃ is acute (∠C₃ < 90°). Then:
\[ c^2 > a^2 + b^2 \ (\gamma_1 > 90^\circ), \quad c^2 = a^2 + b^2 \ (\gamma_2 = 90^\circ), \quad c^2 < a^2 + b^2 \ (\gamma_3 < 90^\circ), \]where the side labels (in lower case) lie opposite the same vertex labels (in upper case) of the triangle. The gamma angles (indices 1, 2, and 3) are opposite the sides c. Namely, the inscribed angle over a chord is half the central angle over the same chord, so 2γ₂ = ∠AOB = 180° and hence γ₂ = 90°. This proof will serve for some observations later in similar questions.
2. Now let's move on to the physics of relativity and the first image of the Pythagorean theorem. Let's imagine that light travels through the network of coordinate lines within the squares drawn over the sides of the triangle, with its constant speed of about c = 300,000 km/s. We will not be confused by the same designation (c) for the speed of light and the length of the hypotenuse, because they are different concepts, and it will be clear from the context which of the two is meant.
The mentioned grid lines, parallel to the coordinate axes, here the edges of the squares, are at equal distances. The time it takes light to pass all the lines of a single square is proportional to the area of that square, so for the areas A, B, and C of the squares in the picture, we can substitute the times it takes light to sweep through their interiors. With this new meaning, it is again A + B = C.
3. Aligning units of length (x) with the time (t) it takes light to travel that length (x = ct) at its speed (c), we can combine the previous explanations into Δx² + Δy² + Δz² = (cΔt)², that is
\[ ds^2 = -c^2dt^2 + dx^2 + dy^2 + dz^2, \]
when we move to infinitesimals (Δx → 0). We call this ds the 4-interval of space-time, and by this story, it is also an infinitesimal deviation from the Pythagorean theorem. It is clear that ds = 0 has the meaning of the Pythagorean theorem in the 4-dim space-time of events and that it is exactly the trace of the "present" from the question above.
4. As explained more succinctly here, space-time is the concept of the union of physical space and time. It was proposed by Minkowski (1908), once Einstein's professor of mathematics, as a formal addition to the special theory of relativity (1905) of his former student. In his first work, Einstein explained that the light of a lit match falling simultaneously on the opposite walls of a train compartment is not simultaneous for an observer on the embankment. While for the passenger in the train the light reaches the front wall of the compartment and the back at the same moment, for the observer outside, the train is moving away, and these events are not simultaneous.
Einstein underpinned the discovery of non-simultaneity with the principle of relativity of motion and the constancy of the speed of light, independent of the speed of the source. This refers to the vacuum and rectilinear inertial movements, in contrast to the later inertial but curvilinear movements under the influence of gravity. He showed that every rectilinear inertial system has its own (proper) simultaneity, which does not hold for other systems. Now we understand it as a metric that joins infinitesimal events (4-dim points) so that always ds = 0. Bit by bit, moving inside the given system, we thus define its simultaneous events.
5. For the departing train, the next event arrives later for the observer on the embankment, which means that the present of the passenger arrives in the past of the observer on the embankment. We understand the same in this way: at the moment of passing, the times of the train and the embankment are simultaneous, and only then, by the relative slowing of time, does the train system fall behind into the past. On the other hand, from an oncoming train, the same wall, the event, is closer to the observer on the embankment and arrives earlier. That is why the present of the arriving traveler comes from the future for the observer on the embankment. The same again: at the moment of passing, the times of the train and the embankment coincide, so, due to the consistently slower time of the train, the train was previously in the relative future.
6. Now let's move on to physical forces. Classically, force is the change in momentum per unit time (F = Δp/Δt). The energy consumed is equal to the work of the force along the path (ΔE = FΔr). Taking into account that forces, momenta, and positions are vector quantities:
\[ \vec{F} = (F_x, F_y, F_z), \quad \vec{p} = (p_x, p_y, p_z), \quad \vec{r} = (r_x, r_y, r_z), \]
where instead of the position coordinates we often write only their indices ξ ∈ {x, y, z}. Next is:
\[ \vec{F} = \frac{\Delta\vec{p}}{\Delta t}, \quad \Delta E = \vec{F}\cdot\Delta\vec{r}, \]
where the energy (ΔE) is the scalar product of the force and the path. By components:
\[ F_x = \frac{\Delta p_x}{\Delta t}, \quad F_y = \frac{\Delta p_y}{\Delta t}, \quad F_z = \frac{\Delta p_z}{\Delta t}, \]
so it is:
\[ \Delta E = F_x \Delta r_x + F_y \Delta r_y + F_z \Delta r_z = \]
\[ = \frac{\Delta p_x}{\Delta t}\Delta r_x + \frac{\Delta p_y}{\Delta t}\Delta r_y + \frac{\Delta p_z}{\Delta t}\Delta r_z, \]
\[ \Delta E\,\Delta t = \Delta p_x \Delta r_x + \Delta p_y \Delta r_y + \Delta p_z \Delta r_z. \]
Hence, the value of the expression
\[ Q = -\Delta E\,\Delta t + \Delta p_x\Delta r_x + \Delta p_y\Delta r_y + \Delta p_z\Delta r_z \]
is zero when no external forces act on the system, that is, when it is inertial. Another sense of this (Q) is a measure of "disorder," shall we say, through perception. This is taken from the book Information of Perception (2016), or some of my items before that.

7. Adding p₀ = -E/c as a new component of the momentum vector, we get it as a 4-vector, first the classical Pμ = (p₀, p), then the quantum (operator) momentum. When we go to infinitesimals (Δp → dp), this corresponds to the 4-length ds = (-icdt, dx, dy, dz), of the intensity ds² explained previously (2). Their combination is the scalar product (6). This is how the information of perception is created, that is, one of its interpretations.

8. However, Hartley's (1928) information H is the logarithm of the numerus N, the number of equally likely outcomes, H = log_b N, where the base b of the logarithm is the measure of the unit of information. In the case of unequally probable outcomes of total information Q, in contrast to the Shannon information (S), which is the mean value of the probability distribution, we are talking about some averaged and fictitious number of outcomes, say M, so that Q = log_b M, or M = b^Q. Then there is the quantum ψ(r, t) = ψ₀ exp[i(-Et + p⋅r)/ℏ], where ℏ = h/(2π) is the reduced Planck constant, and ψ is the wave function of a free particle. Thus we arrive at another interpretation of the information of perception (the quantum free particle), in addition to many others that I leave out here so that the explanations do not burden the answer too much.

9. Finally, the concept of the "present" that I develop in the information theory of perception contains information "trapped" at one moment, in a state in which it cannot remain, because every state, as such, tends towards less information (minimalism). Practically, experience tells us that old news is not news. Here, by postulating information in the structure of space, time, and matter, and uncertainty as its essence, we achieve that time becomes necessary and "real." What string theory would call "membranes" or "strings" (threads) would be these "presents." However, the theory of information from which I derive this is broader than that field, at least because of the principle of uncertainty and its consequences. On the other hand, instead of new dimensions of space, it deals with additional dimensions of time (Dimensions). Consequently, they are not two identical theories, even if they coincide in many respects.
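Returning to the wave function in point 8: that ψ really describes a free particle can be verified symbolically. A minimal sympy sketch in one spatial dimension, assuming the nonrelativistic relation E = p²/2m (my choice for the check, not stated above):

```python
import sympy as sp

x, t, m, hbar, p = sp.symbols('x t m hbar p', positive=True)
E = p**2 / (2 * m)  # assumed free-particle kinetic energy
psi = sp.exp(sp.I * (-E * t + p * x) / hbar)  # psi0 = 1 for simplicity

# free Schrödinger equation: i ħ ∂ψ/∂t = -(ħ²/2m) ∂²ψ/∂x²
lhs = sp.I * hbar * sp.diff(psi, t)
rhs = -hbar**2 / (2 * m) * sp.diff(psi, x, 2)
print(sp.simplify(lhs - rhs))  # 0, so ψ solves the equation
```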
Fragments
Question: What do you mean by "pieces of the present" in that theory?

Answer: In "perception theory" (my information theory), time is a very important concept. When you remove time from space-time, you get mathematized physics, let's not skip string theory, but on the contrary, ignoring space, we approach my field.
1. Only in inertial rectilinear motion is there simultaneity throughout a whole physical space (Spacetime, 4), but such a system requires the complete absence of mass and gravity. Light (Einstein's matches) from one point reaches two equally distant places simultaneously for the first such observer (the passenger on the train), but at different moments for the second (the observer on the embankment). We can imagine that the directions of light for the first and second observers lie on the same line or on parallels; in any case, each has its own "stack" of simultaneous 3-dim spaces.
In the central gravitational field, we have no such possibility. Radially speaking, lower positions have a slower flow of time than higher ones, so the "simultaneous" arrival of a signal at a lower and a higher place "equally" distant from the source no longer holds once the source moves even a little. In other words, we then have no 3-dim simultaneity but only 2-dim concentric spheres around the field source.
When two or more such fields are nearby, there are no longer complete spheres of simultaneity, but only their sections remain. Multitudes of masses will carve out the very pieces of simultaneity, fragmented a little differently for each observer, which closely resembles the threads, membranes, and other elements of string theory. The expression ds² (Spacetime, 3) is changed in the manner of a tensor into the corresponding metric of the space (Metric tensor, https://en.wikipedia.org/wiki/Metric_tensor).
2. Complications with the splitting of the region of simultaneity do not end with space itself. Vectors transported in parallel through curved space do not return to the same values when they reach their starting points again (Simultaneousity). This applies to position vectors as well as to momenta, forces, and then to energies and information. The present "melts" (has less information density), leaving a trace in the past.
There are various consequences of this, and one of the unknowns I have often written about is the influence of past masses on the present. This is easily reduced (or subsumed) to the "dark matter" currently unexplained in science. Due to the relatively huge square of the speed of light, past events are much further away than neighboring spatial events, and the effect of gravity on them is significantly smaller. Nevertheless, it is observed in strong fields, for example, in the turning of Mercury's ellipse, then in mass-rich galaxies, but also by its "strange" absence in mass-poor galaxies (AGC 114905). Space in a strong field behaves like a mushy mass that the body manages to move.
3. Fragments of the present are objective, but such phenomena are received differently by the participants in communication. It makes sense to consider them building blocks of information because, like photons, in which the electric phase induces the magnetic one and vice versa, again and again, they can change in a cyclical form, representing the disappearance of information by its very creation. Put differently, the building blocks of information are the same as its parts.
The rarefied fragments of the present can thus formally represent rarefied information, unreal phenomena, as well as lies (The Truth). Lies attract us by the force of probability, which pushes us into less informative states. However, lies also distract us from ourselves as "other options." They are deviations from the spontaneous natural flow of things, from the principle of least action of physical reality, from that part of nature (dead matter) that cannot lie. That is why lies are so easily spread by the media, and why, when exposed, they can be particularly irritating.
Capture
Question: Explain to me your concept of "trapping." I'll just ask: without formulas this time, if possible?

Answer: It all boils down to the "force of probability", to the more frequent occurrence of more likely possibilities, or nature's tendency towards the more certain and the less informative.
1. A democratically elected politician will please a sufficient part of the population, whereby, as in any action, his character (Traits) is profiled and his habits are directed and modeled. These very currents are all adjustments due to the mentioned "uncertainty force," which pushes us towards order, routines, laziness. The brain loves the familiar, and for the same reason, we surrender our freedoms for safety or efficiency. These particular examples of "entrapment" are special incentives for the politicians, who keep them in the shackles of the second league (Reciprocity).
The target population of politicians are the players of the third league (the "goods"), who make up about 80 percent of the population (Pareto rule), while the rest divide again into approximately 80 percent second-league players (manipulators) and the first-league players (the "evils"). The latter, therefore, are barely four percent and seem irrelevant for an immediate electoral victory. However, when the chosen one enters the scene, those "evils" matter. The ability of the first-league players to (almost) always beat the second-league players, and of the latter to beat the third league, is a hierarchy that is then established (Win Lose) and captured.
The first-league strategy cannot be beaten; the "evils" (bad guys) usually know that, so it is circumvented instead (Sneaking). There is intimidation, blackmail, the spreading of apathy and of the values of tolerance, so-called "frog cooking," but also rewarding, so that "the loser doesn't understand." The second-league players (manipulators) can please the first-league player (evil) in the game against the third-league player (good), by the nature of domination. For example, it is easy to convince the "goods" (good guys) that it is "good" to give in under pressure and to play soft, which is actually a strategy that loses to everyone else (it is the game of dead nature, the principle of least action).
2. The individual would spontaneously reduce the number of their options (minimalism) were it not for the wishes of everyone around them and their occupancy (uncertainty is the basis of the world), and were it not for the law of conservation of information (the amount of options). That is why the individual unites with and commits to the collective. Thus vitality (a surplus of options compared to dead matter) would pass from the individual to the group, which would become more and more similar to a living being were it not for the analogous aspirations of the groups themselves.
All natural processes have tendencies towards this kind of degeneration, including democracy. We pay the price of agonizing resistance to uncertainty for the sake of discovering natural laws, bearing in mind the later indulgence in routines. Lawyers and scientists strive for similar things: the ordering of the world. Inventing obligations achieves gains in security, efficiency, laziness, subordination, or dominance, in a simplification which is ultimately an end in itself (according to the principled parsimony of information). Succeeding in this, society ossifies and becomes less vital.
3. On the other side of the force of losses, there is the obligation to maintain the total amount of information of the given system. It is more tolerant where there is range, but stiffer when there is no room for it. A solitary photon will constantly oscillate through the phases of electric and magnetic induction, creating a traveling wave, which is also a type of trapping. For the same reason, living beings in general arise, whose excess options would otherwise disappear. Nature itself is captured by its information structure, without which it does not exist, and which it would like to get rid of.
Collapse
Question: How is it possible for information to emerge from uncertainty, contrary to the "principled minimalism" of its emissions?

Answer: The overall uncertainty of the environment is constant, and its forms change. The information does not run out in any situation, so everything that exists could last and be woven from it. Meanwhile, real communications are physical interactions, energy changes over time, and forceful actions.
1. A fair die with six equal chances will, before the roll, have an uncertainty equal to the logarithm of the number six (log 6), which after the roll is converted into an equal amount of information, say, "the number three fell." There is no change in the total amount of the current communication system, but the effect is a movement into a new present. Each such piece of news, immediately afterward, is no longer that stale news but becomes something different, one or more news items with the same total amount of uncertainty, each accompanied by the impulse of some quantum of action.
2. It is the fate of reality that ever new presents are forcefully created for it, while it equally flees from that force. Where the emergence of the present is smaller, there is less force, uncertainty is diluted, and the environment is more attractive (Past). In the gravitational field, we see such phenomena as places of slower flow of time, and in the structure of the formation of the history of the cosmos, these are states of greater certainty in the future. It seems as if the present rushes towards the future because of it.
3. All photons are quanta of the same action Eτ = h, the factors being the energy change E and the elapsed time τ, but not all of them carry the same information, as these energies E = hν, with frequency ν = 1/τ, are not the same. A series of values of part of the abscissa (2⁻¹, 2⁻²,..., 2⁻ⁿ,...) can be mapped one-to-one, i.e., by bijection, to the series of natural numbers (1, 2,..., n,...), just as this entire interval can be mapped to an infinite semi-axis (x → 1/x). Thus, we arrive at various bijections between quanta of physical action and information. Both are discrete occurrences (Packages).
The quanta of longer time intervals (larger τ and therefore smaller energy E), represented by slower frequencies, are less informative and thus less "strong." For light sources in a gravitational field, or distant galaxies that are moving away from us faster and faster, the Doppler effect slows the frequencies, conveying to us a message about the attractive force acting on them there. It may be absurd, but places less forceful to physical matter are more attractive.
4. Even though a probability distribution may have infinitely many outcomes, only finitely many of them will always occur, with total probability one. This is determined by the Borel–Cantelli lemma (Accumulating). It tells us something important about information: that a finite number of outcomes can arise from infinity. Then, when we already have finitely many, their uniform distribution is maximal information (Extremes), which the principle of parsimony will pull towards diversity. If one of those possibilities happens with a probability greater than one half, the chances of its realization are greater than of its non-realization, and then the aforementioned principle of minimalism can no longer prevent it.
As we can see, a seemingly impossible situation arises. Following mathematical certainty (the Borel–Cantelli lemma) and minimalism, we arrive at an "absurd" occurrence: information from uncertainty. That is the answer to the question.
5. Wave function collapse is a mechanism in which a system, by interacting with its environment, including measuring devices, is transformed from a superposition of states into a certain (classical) state with a well-defined value of a given measurable quantity. Your question posed to me like this is very interesting because the understanding of that collapse is one of the important areas of quantum mechanics (Schrödinger) and is still an unexplored topic (Collapse Problem).
Variations
Question: Why does the attraction of a slower flow of time in general not apply in the special theory of relativity?

Answer: The effect of a force is sustained by the change in the flow of time along the path, so single, one-time slowdowns are not enough.
1. In the picture on the right, you can see a plate in rotation. Any point at distance r from the center of the plate will orbit the center with angular velocity
ω = Δθ/Δt
where it turns by the angle Δθ in time Δt, with speed v = rω. Therefore, the farther the point, the greater the speed v. These changes of velocity with distance vary the slowing of time, resulting in the centrifugal force F = mω²r, where m is the point's mass. Otherwise,
\[ \Delta t = \frac{\Delta t_0}{\sqrt{1 - \frac{v^2}{c^2}}}, \]where Δt is the relative duration of the proper time interval Δt₀ of a point moving at speed v, while c ≈ 300,000 km/s is the speed of light. I take this from the time dilation of special relativity.
If it were loose and unrestrained, the material point would continue to move inertially, meaning in a straight line and without a change in the speed of the flow of time. That is the meaning of "spontaneous" movement without the action of forces.
2. In a centrally symmetric gravitational field (Space-Time, 1.2.10 Schwarzschild solution), the interval is
\[ ds^2 = -\left(1 - \frac{r_s}{r}\right)c^2dt^2 + \frac{dr^2}{1 - \frac{r_s}{r}} + r^2(\sin^2\theta d\varphi^2 + d\theta^2), \]where rs = 2GM/c² is the Schwarzschild radius, M the mass of the body at the center of gravity, G the gravitational constant, and r the distance of the point (of mass m) from the center. Then the force of attraction is F = -mc²rs/(2r²), and the time dilation
\[ \Delta t = \frac{\Delta t_0}{\sqrt{1 - \frac{r_s}{r}}}. \]By equating the time dilation of the special theory of relativity with this one, we would find the proportionality between the kinetic energy of a body in free fall and the gravitational force. I cite this as an example of the importance of the change in the speed of the flow of time for the appearance of force. The same goes for the following.
3. In the same book (Space-Time, 1.2.8 Vertical drop), I consider the fall of a material point in the gravitational field, where, apart from the classical equations, I use only E = mc². Falling by dr, the mass increases by dm as the force attracts it, so:
\[ dm\,c^2 = -\frac{GMm}{r^2}dr, \] \[ \frac{dm}{m} = -\frac{GM}{r^2c^2}dr, \] \[ \ln m = \frac{GM}{rc^2} + \text{const}, \] \[ m = m_0e^{GM/rc^2}, \]where m₀ is the proper, own mass (of the body at rest and outside the field), and m is the relative mass with respect to a fixed observer at the starting point.
By expanding the functions in power series and neglecting very small terms due to the magnitude of the speed of light:
\[ m \approx m_0\left(1 + \frac{GM}{rc^2}\right) \approx \frac{m_0}{\sqrt{1 - \frac{r_s}{r}}}, \]which result is equal to the previous one, when we see that relative time slows down in proportion to the relative increase of mass. In the case of circling around the center of mass, when working this out, one should also take into account the contraction of lengths.
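The agreement of the three expressions for the relative mass is easy to check numerically. A hedged Python sketch (Earth's mass and radius are my illustrative choice, not from the text):

```python
import math

G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24      # mass of the Earth, kg (illustrative)
c = 2.99792458e8  # speed of light, m/s
r = 6.371e6       # Earth's surface radius, m
rs = 2 * G * M / c**2  # Schwarzschild radius

m0 = 1.0  # proper mass, kg
exact = m0 * math.exp(G * M / (r * c**2))  # m = m0 e^(GM/rc^2)
linear = m0 * (1 + G * M / (r * c**2))     # first-order expansion
schwarz = m0 / math.sqrt(1 - rs / r)       # Schwarzschild-like form
print(exact, linear, schwarz)  # all agree to ~1e-18 in this weak field
```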
Although this result was derived from classical, Newtonian gravity, meaning weak central gravitational fields, with all the approximations, we see that we can form the interval (2) of the Schwarzschild solution of Einstein's general equations. It demonstrates the importance of the change of the space-time metric, both for geometry and for gravity, and especially of the change in the speed of the flow of time, for a force to appear. See my contribution on least action (1.6) for an explanation of this.
4. Gravitational motions can also be derived from the principle of least action of physics (Minimalism of Information, 2.5 Einstein's general equations). Starting from the Lagrangian (1760) of classical mechanics based on the principle of least action, using the local 4-dim coordinates of Minkowski space-time and general coordinate transformations, Riemann's and Ricci's tensors lead to the Einstein–Hilbert action, which has the characteristics of a classical action except that it refers to the 4-dim space-time of events.
We vary this 4-action and look for the shortest paths through space-time, from which we get Einstein's equation in vacuum, Rμν = 0, with otherwise arbitrary variations of the metric tensor gμν. The first is the Ricci tensor, which represents the difference in volume of the curved versus the flat space at the given place. The second, the metric tensor, is positive definite (gμν > 0) in the curvatures of Riemannian geometry.
In other cases, by varying the Ricci tensor using the covariant derivative, a detailed expression of Einstein's equation for the vacuum is reached via the Palatini identity
\[ R_{\mu\nu} - \frac12 g_{\mu\nu}R = 0. \]To find the general equation for a space that is not empty but contains matter, the action of the presence of mass is added to the Einstein–Hilbert action and, by varying again, this expression becomes
\[ R_{\mu\nu} - \frac12 g_{\mu\nu}R = -\frac{8\pi G}{c^4}T_{\mu\nu}, \]which are Einstein's general equations (without the cosmological constant Λ). The left side of this equality represents the geometry of space (given by the curvature and metric tensors only), and the right side represents matter (given by the energy tensor).
5. These observations are particularly good for (my) information theory, which emphasizes the importance of coupling through the information of perception (a sum of products), in which existence in itself is negated. The weighting of subjects and objects through such couplings, their adaptations, and, on the other hand, their multiplicity, primarily due to inevitable differences, speak of the inertia of spontaneity.
The subject is reluctant to abandon averageness and does not strive for the inevitability of variations arising from the nature of uncertainty. In this way, let's consider a body that falls freely towards the gravitational field, or orbits around its center, as lazy in changing its state. It is the resistance to changing the speed of time, that is, the resistance to changing the amount of options (information) it possesses.
Commutator II
Question: Explain to me the concept of "generalized commutator" information.

Answer: I hope you have seen the explanation of the 2-dim commutator, so that we can understand information as a "surface." If not, let's first go over the following description, S. Hawking's, say.
For a body collapsing towards a black hole, gravity shortens the radial units of length and slows down its time. The body becomes flattened and covers the sphere of the black hole like a mantle, but never actually reaches it. It creates a 2-dimensional layer of information that is deposited with others like it on the black hole.
A step beyond this explanation is the observation that such condensation of information is threatened by minimalism, which would dilute it. Due to the gravitational constraint, there is a slippage of content into lateral times, also into the past, from which it acts like dark matter. It then behaves equally as a generalized exterior product (4.10) of vectors, and then also as a "generalized commutator."
Using the e-symbol of the third rank, eⁱʲᵏ, we abbreviate the Laplace expansion of a third-order determinant:
\[ \begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix} = e^{ijk}a_{1i}a_{2j}a_{3k} = \]
\[ = a_{11}a_{22}a_{33} - a_{11}a_{23}a_{32} + a_{12}a_{23}a_{31} - a_{12}a_{21}a_{33} + a_{13}a_{21}a_{32} - a_{13}a_{22}a_{31} \]
\[ = a_{11}(a_{22}a_{33} - a_{23}a_{32}) - a_{12}(a_{21}a_{33} - a_{23}a_{31}) + a_{13}(a_{21}a_{32} - a_{22}a_{31}) \]
\[ = a_{11}\begin{vmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{vmatrix} - a_{12}\begin{vmatrix} a_{21} & a_{23} \\ a_{31} & a_{33} \end{vmatrix} + a_{13}\begin{vmatrix} a_{21} & a_{22} \\ a_{31} & a_{32} \end{vmatrix}. \]This symbol is +1 when the indices form an even permutation of the string 123, -1 when an odd permutation, and zero when indices repeat.
Similarly, we define the e-symbol of the second rank, eⁱʲ, for determinants of the second order:
\[ \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} = e^{ij}a_{1i}a_{2j} = a_{11}a_{22} - a_{12}a_{21}. \]This symbol is +1 when the indices form an even permutation of the string 12, -1 for an odd permutation, and zero for repeated indices. Analogously, we define the symbol of the fourth rank, eⁱʲᵏˡ, for example for the expansion of fourth-order determinants, and then, in general, e-symbols of the nth rank.
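These definitions translate directly into code. A minimal Python sketch of the e-symbol and the determinant expansion it abbreviates, checked against numpy and against the mixed product (the test matrix is my arbitrary choice):

```python
import numpy as np
from itertools import permutations

def e_symbol(*idx):
    """+1 for an even permutation, -1 for an odd one, 0 for repeats."""
    if len(set(idx)) != len(idx):
        return 0
    inversions = sum(idx[i] > idx[j]
                     for i in range(len(idx)) for j in range(i + 1, len(idx)))
    return -1 if inversions % 2 else 1

def det3(a):
    """Third-order determinant as e^{ijk} a_{1i} a_{2j} a_{3k}."""
    return sum(e_symbol(i, j, k) * a[0][i] * a[1][j] * a[2][k]
               for i, j, k in permutations(range(3)))

A = np.array([[2., 1., 0.], [1., 3., 1.], [0., 1., 4.]])
print(det3(A))                             # 18.0
print(np.linalg.det(A))                    # 18.0
print(np.dot(A[0], np.cross(A[1], A[2])))  # mixed product, also 18.0
```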
We know that the determinant is the (generalized) "volume" spanned by its column vectors. Such is the determinant det A = [a₁, a₂, a₃], the real volume of the parallelepiped over the edges given by the vectors a₁, a₂, and a₃ of its columns. We know from vector algebra that it also represents the mixed product of vectors, a₁⋅(a₂ × a₃). In the image above left, the mixed, or outer, product of vectors is shown using the vector c = a × b, which is perpendicular to the plane spanned by the factors, whatever angle a and b enclose, unless they have the same direction.
These vectors form a parallelepiped of volume equal to the determinant
det A = [a, b, c].
It is also the notation of the "generalized commutator," which in the case of only two vectors, say a = (a_x, a_y) and b = (b_x, b_y), becomes the "regular" commutator
\[ [a, b] = a_xb_y - a_yb_x. \]
This represents the ordinary surface area, and in general the generalized volume, of what the given vectors span (here a parallelogram). From the Löwenheim–Skolem theorem (Range), we know that first-order deductive theories that would describe a single structure are actually not possible, so this type of generalization is, in that sense, expected.
The escape of information from the grip, due to principled minimalism, is expected and necessary in one way or another. Let's view the formal notation of the outer (mixed) vector product in exactly that way. Its result is a lateral, displaced vector, and an expansion of a smaller (generalized) volume into a larger one. These forms are information because of the previously assumed structure of space, time, and matter.
Squeezed
Question: Do you have an example of a "lateral, squeezed" information vector?

Answer: We continue from the above concept of information. The image on the left shows an electromagnetic wave, or light. It spreads with speed c along the abscissa (x-axis), while the electric and magnetic phases alternate in the directions of the ordinate (y-axis) and the applicate (z-axis), respectively.
Classical physics recognizes that the electric phase induces the magnetic, the magnetic then induces the electric, and so on. My theory of information complements this by saying that stale "news" is no longer news and that the law of conservation of information alternates these phases. Consistently further, there is the "diversion of squeezed-out" information. Note also the directions (signs) of the vectors in the vector multiplications:
\[ \hat{x} \times \hat{y} = \hat{z}, \quad \hat{z} \times \hat{x} = \hat{y}. \]
Proportional to the action, they describe the "lateral" phenomena of the magnetic and electric actions following the previous electric and magnetic phases.
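These sign relations are easy to verify numerically; a minimal numpy sketch with the unit vectors of the axes:

```python
import numpy as np

x, y, z = np.eye(3)    # unit vectors of the x, y, and z axes
print(np.cross(x, y))  # [0. 0. 1.] = z
print(np.cross(z, x))  # [0. 1. 0.] = y
print(np.cross(y, z))  # [1. 0. 0.] = x
```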
A similar thing appears in the example of the action of electromagnetic fields (Tensors - part 2A). The video gives a description using tensors, to emphasize the independence of the phenomenon from the choice of coordinate system. Reading between the lines, you will notice that the spin of the photon is interpreted the same way: like a magnetic field, which by this "turning" is forced to circle around an electric conductor and vice versa, each time by a quarter of a full angle, and an integer spin (photons ±1).

The next, fourth example is again related to light waves, as in the double-slit experiment, or to ripples on the surface of water (fig. right). Informatically speaking, the forward (x-axis) and vertical (y-axis) impulse motion produces lateral wave propagation (z-axis), and the ripple becomes a circular phenomenon around the source. I attribute the indeterminacy of wavelength (direction of movement) and amplitude (vertical deviation) to the informational nature of waves.
We have repeatedly discussed the longitudinal character of light, although "everyone knows" that light is transverse (Waves III), as well as, vice versa, the transverse waves of sound, although "it is known" that sound is certainly (also) a longitudinal phenomenon. Namely, light as a vertical oscillation without a forward impulse would stand still, and sound would pass us by only longitudinally, so that we would not hear it. Add to that the known wave nature of matter in general, then its (unexplored) informational basis, and finally this vector multiplication of vectors.
We know that with interference the phases of the waves add in proportion to their contributions to the action (read: information), and with the outer multiplication we can understand diffraction more deeply. Lateral turns are also impulses, actions, and information, and therefore come in packages. Such lateral pressures are not without their "wavelengths" and "momenta," as phases that add up in wave fashion.
The movement of a charged particle in a magnetic field (Example 1.4) indicates that "something is happening," that some additional information is present, which can be measured using a commutator. In a plane perpendicular to a stronger magnetic field, the radius of circulation of the charged particle is smaller, although the intensity of its velocity remains unchanged.
Stretching
Question: Could the origin of the past be a "spillover" of time into space?

Answer: There are no stupid questions, only stupid answers, I told my colleague, who was trying to confuse me or just be funny with this question. Well then, let's go on a "mission impossible" and try something "smart" with what you suggest. Here is my best shot.
It can, actually. When you think about it, the more space there is, the less time there is. If you are in a hurry somewhere and running out of time, it is a disaster when your destination is too far away. To prepare a larger room with the same volume of clutter, you might need time that you would be spared in the case of a smaller space.
The units of time in a moving physical system are longer for a relative observer, and the units of length in the direction of movement are shortened, so that:
\[ \Delta t = \frac{\Delta t_0}{\sqrt{1 - \frac{v^2}{c^2}}}, \quad \Delta x = \sqrt{1 - \frac{v^2}{c^2}}\ \Delta x_0, \]where the index zero denotes the proper (own) values of the observer at rest in the (moving) system. One's own duration and length are Δt₀ and Δx₀, while for the relative observer, the one who sees the movement, they amount to Δt and Δx. The speed of movement is v, and c ≈ 300,000 km/s is the speed of light. This is what the special theory of relativity teaches us. Stretched time is condensed space, and vice versa.
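Numerically, for example (the speed 0.6c is my illustrative choice), a short Python sketch:

```python
import math

c = 299_792_458.0  # speed of light, m/s

def gamma(v):
    """Lorentz factor 1/sqrt(1 - v^2/c^2)."""
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

v = 0.6 * c            # illustrative speed
dt0, dx0 = 1.0, 1.0    # proper second and proper meter
print(gamma(v) * dt0)  # relative duration: 1.25 s (stretched time)
print(dx0 / gamma(v))  # relative length: 0.8 m (condensed space)
```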
It is similar in general relativity (Variations), although at first glance the things of uniform motion and gravitational attraction are very different. I also have a supplement to this in the text on tensors (Least Action 1.6), so in the gravitational field:
\[ \Delta t = \frac{\Delta t_0}{\sqrt{1 - \frac{2GM}{rc^2}}}, \quad \Delta r = \sqrt{1 - \frac{2GM}{rc^2}}\ \Delta r_0, \]where M is the mass of the planet (sun) that attracts the body at distance r from the center, and G ≈ 6.67 × 10⁻¹¹ m³ kg⁻¹ s⁻² is the gravitational constant. If the body stands at a smaller distance from the center of gravity, time slows down and radial lengths shorten.
It is similar with the "spillover" of time into the past, to put it as you say. Like the dough that a baker stretches out, the ever-thinning present leaves behind an ever-longer trail of the past, in such a way that the total amount of information (events) remains unchanged. Thinning out (impoverished by information), the present becomes more certain, less saturated with randomness, and slower in the flow of time, but with an increasingly distant limit to the visible universe.
Back at the Big Bang event, from our point of view about 13.8 billion years ago, in the time before time, when we can say that infinite durations passed in an instant, there was no space. All bosons can fit into one and the same state, unlike the fermions that were created after space expanded. We can also understand the growing distance of the boundaries of the universe as a reduction of the units of length. And again, from our point of view, we are gifted with seeing more and more of the past, as if time "moved over" there for us to see it.
Abstraction III
Question: Did I understand correctly that the "present" could be the basis of spacetime?

Answer: The formal basis of all space, time, and matter is information, so uncertainty, waves, and the present are also its attendants. This is what we could deduce from the Löwenheim–Skolem theorem (see image link on the right). In short, it says that a countable theory has a model for every cardinal number greater than or equal to ℵ₀. On the other side is the limitlessness of truth (Deduction II).
Hence, a correct theory is "everyone's," and practice is always "someone's." In other words, a theory cannot be localized to just one phenomenon, while physical reality, on the contrary, is always unique. For example, what has just been said is itself a theoretical statement, because the truth is everyone's and a lie is always someone's. It sounds universal and accurate, unlike specific practices. Physical nature does not lie (if we prove something cannot happen, it will not happen), but structures that have vitality (a surplus of options) can lie.
The present is the bearer of some realities, whatever they may be, which the surrounding concrete subjects perceive as series of waves, each in their own way. From this short description, a lot has already been said about the ways of viewing the foundations of space, time, and matter. The ability of such abstract understanding gives us vitality. That is to say, the nature that follows the truth and only the truth, which cannot lie, does not see theory. But we have already discussed that, perhaps from a different side (Surplus, 3). Theory is both a created and a discovered phenomenon.
A large part of reality (never all of it) can also be seen with the help of waves. Louis de Broglie (1924) put forward the idea that, at the atomic level, particles of matter in motion, for example electrons, have wave properties in addition to their particle properties. A pilot-wave model, established on de Broglie's idea of the wave behavior of particles, was made by Schrödinger (1925). It gave rise to wave mechanics, which then grew into the quantum formalism. It was rediscovered by David Bohm (1952).
Wave equations are thus a theoretical story and therefore universal. Before Schrödinger, of course, there were many discoveries as well as successful applications of wave equations. So many that it is unnecessary to list them. Let us at least mention Maxwell's (1862) wave equation of light:
\[ \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2} + \frac{\partial^2 f}{\partial z^2} - \frac{1}{c^2}\frac{\partial^2 f}{\partial t^2} = 0 \]from which we can drop the y and z coordinate axes without loss of generality. An electromagnetic wave of amplitude f at time t is at location x and moves at constant speed c. When f = f(x ± ct) the first and second non-zero partial derivatives are:
\[ \partial_x f = f', \quad \partial_t f = \pm cf', \quad \partial_x^2 f = f'', \quad \partial_t^2 f = c^2f''. \]
Substituting, we find that this solves the above (abbreviated) equation:
\[ \frac{\partial^2 f}{\partial x^2} - \frac{1}{c^2}\frac{\partial^2 f}{\partial t^2} = 0. \]For mathematics, therefore, a wave is any function that moves, because f(x - ct) moves to the right and f(x + ct) to the left. Waves are also periodic forms in the manner of the trigonometric functions (a, b, k, ω = const.) of the solution form:
\[ f(x, t) = a\cos(kx - \omega t) + b\sin(kx - \omega t). \]
However, the wave equation is also solved by, say, f = ax + bt, for constants a and b, so "waves" can be various "movements," subtly announcing to us that at the lowest order of magnitudes, the shapes may not be important. With a little daring (creative science), here is the principle of uncertainty.
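Both claims, the trigonometric solution and the degenerate f = ax + bt, can be verified symbolically. A minimal sympy sketch (the dispersion ω = ck is assumed for the trigonometric case):

```python
import sympy as sp

x, t, a, b, k, c = sp.symbols('x t a b k c', positive=True)
omega = c * k  # assumed dispersion relation ω = ck

wave = lambda f: sp.diff(f, x, 2) - sp.diff(f, t, 2) / c**2  # wave operator

f = a * sp.cos(k*x - omega*t) + b * sp.sin(k*x - omega*t)
print(sp.simplify(wave(f)))          # 0: the periodic solution
print(sp.simplify(wave(a*x + b*t)))  # 0: the "degenerate" solution too
```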
What the general "wave theory" could still tell us is the necessity of the time, and subtly hint further why we have so many present times and perhaps the existence of coincidence, but with "information theory," such secondary ideas become first-rate, more receptive, and complete.
Sound
Question: When does consideration of the wave equation of light cease to be theoretical?

Answer: I understand the dilemma. To begin with, the most useful answer, according to the above (Abstractions), would be to compare light with something obeying the same equation, e.g., sound.
Find an anomaly, a phenomenon where light and sound behave differently; since both are phenomena of physical reality, there will be many. However, this still does not mean that the latter is a matter of practice, because there can be different theories with common elements. It is, after all, a matter of the skills of generalization and concretization, and do not underestimate them: they are among the most difficult in teaching mathematics. If such a separation is even possible!
By measuring the Doppler effect (see image link), we find that the speed of sound does not depend on the speed of the source, just as with light, but only on the type of medium through which the sound propagates. Let's take for the abscissa of the coordinate system x = nλ, the number n of wavelengths λ, and for the ordinate the path u = ncτ that the wave travels at speed c counting its periods τ. Then the previous Maxwell equation becomes:
\[ \frac{\partial^2 f}{\partial x^2} - \frac{\partial^2 f}{\partial u^2} = 0, \]where the lateral coordinates y and z are again excluded. The equality of the partial derivatives of the amplitude f indicates the symmetry of these coordinates. Then we can easily guess that the coordinate transformations
\[ \bar{x} = \gamma(x - \beta u), \quad \bar{u} = \gamma(u - \beta x) \]
leave the equation invariant, where β and γ are constants. Really:
\[ \frac{\partial f}{\partial x} = \frac{\partial f}{\partial \bar{x}}\frac{\partial \bar{x}}{\partial x} + \frac{\partial f}{\partial \bar{u}}\frac{\partial \bar{u}}{\partial x} = \gamma\frac{\partial f}{\partial \bar{x}} - \gamma\beta\frac{\partial f}{\partial \bar{u}}, \] \[ \frac{\partial^2 f}{\partial x^2} = \gamma^2\frac{\partial^2 f}{\partial \bar{x}^2} - 2\gamma^2\beta\frac{\partial^2 f}{\partial \bar{x}\partial\bar{u}} + \gamma^2\beta^2\frac{\partial^2 f}{\partial \bar{u}^2}, \] \[ \frac{\partial^2 f}{\partial u^2} = \gamma^2\frac{\partial^2 f}{\partial \bar{u}^2} - 2\gamma^2\beta\frac{\partial^2 f}{\partial \bar{x}\partial\bar{u}} + \gamma^2\beta^2\frac{\partial^2 f}{\partial \bar{x}^2}. \]The symmetry of the coordinates comes to the fore, so after substitution into the above equation:
\[ 0 = \frac{\partial^2 f}{\partial x^2} - \frac{\partial^2 f}{\partial u^2} = \gamma^2(1 - \beta^2)\left(\frac{\partial^2 f}{\partial \bar{x}^2} - \frac{\partial^2 f}{\partial \bar{u}^2}\right), \] \[ \frac{\partial^2 f}{\partial \bar{x}^2} - \frac{\partial^2 f}{\partial \bar{u}^2} = 0, \quad \gamma^2(1 - \beta^2) \ne 0, \]which is formally the same wave equation. Exactly such are the relativistic, Lorentz transformations, when β = v/c, and
\[ \gamma = \frac{1}{\sqrt{1 - \frac{v^2}{c^2}}} \]is the so-called Lorentz coefficient, where c is the speed of light and v is the speed of its source. Therefore, the very coordinate transformations introduced here, counting wave phases, remain in the domain of theory, just like the wave equation itself. Where, then, is that crack between sound and light that we "know" exists because of the concreteness of those phenomena?
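The whole chain of substitutions above can be delegated to a computer algebra system. A sympy sketch of the invariance check, with γ = 1/√(1 − β²):

```python
import sympy as sp

x, u, beta = sp.symbols('x u beta')
gamma = 1 / sp.sqrt(1 - beta**2)  # the Lorentz coefficient
F = sp.Function('f')
f = F(gamma * (x - beta * u), gamma * (u - beta * x))  # f(x̄, ū)

expr = sp.simplify(sp.diff(f, x, 2) - sp.diff(f, u, 2))
print(expr)  # the same wave operator, now in the barred coordinates
```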
I note that the transverse, or relativistic, Doppler effect (Transverse Doppler) could also be recognized in the manner of the longitudinal propagation of light (Squeezed), or of the lateral propagation of sound. But there is no need.
The first real separation is not even in the Doppler effect, because the relativistic one (of light) is almost equal to the classical one (of sound). We find the first through real contractions of length and dilations of time, the second by counting them only as apparent observations of the observer. That is precisely the physical difference: the actual relative contraction of units of length and slowing of the flow of time in the treatment with light waves, absent from the calculation for sound.
The ether through which light supposedly undulates is actually space-time itself. If we consider it as some material structure like water or air, a carrier of vibrations in general, then all of them, ether, space, time, and matter, are theoretical fictions themselves. For now, we believe that they are real; what a difficult dilemma this is for current physics, we will understand only when we consider the "illusion" of the past.
Wigner’s friend
Question: So finally, say whether our reality is "practical" or "theoretical"?

Answer: To understand this question, you should read the previous two answers. One of the consequences of Gödel's impossibility theorem (if a theory is consistent, it cannot be complete) is that every theory has countless applications, while practice is unique. Literally, the reverse is also true: that part of "practice" which we can multiply is theory, and the part of "theory" that is unique is not theory. It is similar with truth and lies: the former is unlimited, unlike the latter. Simply put, that much is unquestionably true. And perhaps it is hopeless to expect that all the reaches of Gödel's logic will ever be truly understandable to many.
To be "practical," we must be definitive. Indeed, we consider everything around us to be finite; there is no infinity in our perceptions; we do not meet them on the street or at all; they are so abstract to us that only a few think that infinities might exist. However, we are given the power to "see" them, say through infinitesimal calculus (potential infinity), set theory, and cardinals (infinite "numbers" of set elements).
Actually, from the point of view of my information theory, the real miracle is how nature manages to hide infinity, not the other way around. And here is why. First of all, the finiteness of observations follows from the Borel–Cantelli lemma, which establishes that of an infinite distribution of outcomes, only finitely many occur, with probability one; and uniqueness follows from Riesz's statement. There are many other reasons.
I have long been proving and exploiting the discovery that there are as many sets of lies as there are truths (Information Stories, 1.3 Truth or Lie). We see this most simply from the replacement ⊤ → ⊥ along with ⊥ → ⊤ in all statements. This is followed by seeing the "not really" as a type of "reality." In short, those worlds of the "unreal" are truly infinite, and yet they are equivalent (there are just as many of them) to our finite ones. This is the paradox of "infinite finitude" just announced. How is it possible?
Everything around us is local and finite, because we lack the time to go far, the intelligence to think deeply, and the senses and nerves to perceive much. The same holds for the objects of our communications, enough to make us believe that there is no infinity. How subtle and far-reaching this wrapping of infinity into finite sizes is, we see from the quantization of certain information (Packages), and on the other hand, from the closure of all cosmic perspectives (see link in the picture).
This kind of information theory dictates that behind the event horizon lies an infinite universe, and that such infinities are also hidden within quantum actions, but this need not be the domain of physics, at least not as physics is practiced today. One hint that this physics will change after all may be the seriousness with which it takes an interest in "Wigner's friend." I have written about this many times before (Pseudo-reality), so much that there is no need to repeat it here.
In the end, a satisfactory answer to the question posed above could be that our reality is "practical" in a "theoretical" way, or that nature, with its unsurpassed skill, creates "illusions" of the concrete from the abstract. I say alleged illusions because they are so good that we hardly experience them as illusions at all. Finitude is realized through objective uncertainty, an uncertainty that necessitates infinity.
Uncertainty III
Question: How is "finitude realized through objective uncertainty"?

Answer: What I mean by the "objectivity of uncertainty," as the impossibility of knowing outcomes in advance, has been recounted on several occasions (Uncertainty II). It is consistent with Gödel's discovery (there is no theory of all theories) and Russell's (there is no set of all sets).
Second, even knowing all the first outcomes, the same "objectivity" prevents us from knowing the further consequences. This is what narrows consciousness, or the power of perception, and reduces us, and with us our environment, to the "practical," in the sense of the non-theoretical. Additionally, this "objectivity" becomes a guide to future states through the choice of previous ones.
As in the case of throwing a die: when we get a "three," we do not get a "four." The choice of one is the loss of the other possibilities, and because of principled uniqueness, that loss is definitive and final. The "real" world is a part of its infinite environment, but it is also different from it, for example through the laws of conservation (of energy, mass, momentum, information, and others), which do not hold there. An infinite set is defined as one that can be put in one-to-one correspondence with a proper subset of itself; for example, the set of natural numbers ℕ = {1, 2, 3,...} is a proper subset of the integers ℤ = {..., -2, -1, 0, 1, 2,...}, and both have the same cardinality (ℵ₀).
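To make the equal cardinality of ℕ and ℤ concrete, here is a small sketch (Python; the particular zig-zag pairing is just one standard choice) of a bijection f: ℕ → ℤ:

```python
# An explicit bijection f: N -> Z, showing |N| = |Z| = aleph-0.
# It zig-zags 1, 2, 3, 4, 5, ... onto 0, 1, -1, 2, -2, ...
def f(n: int) -> int:
    return n // 2 if n % 2 == 0 else -(n - 1) // 2

print([f(n) for n in range(1, 10)])  # [0, 1, -1, 2, -2, 3, -3, 4, -4]
```

Every integer is hit exactly once, so the "smaller" set counts the "larger" one completely.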
Heisenberg's (1927) uncertainty relations are, for example, realizations of objective uncertainty into finiteness. The smallest cells of position-momentum, or time-energy, are actions at the very bottom of the power of physical measurement. It is not that we cannot conceive of anything smaller; it is the lower limit for discovering physically measurable properties. Below it there is again "something," which we must approach with other "tools," and that is what this theory of information is about.
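For reference, the standard form of these relations, with ħ the reduced Planck constant, is:
\[ \Delta x \, \Delta p \ge \frac{\hbar}{2}, \qquad \Delta E \, \Delta t \ge \frac{\hbar}{2}. \]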
On a third side is the "fear of uncertainty" (see the image link), which keeps us from becoming too vital and keeps us within the finitude of reality. It is an aspect of the principle of saving information, or of the "force of probability," which even the evolution of biological species on Earth could not avoid. We are so well trapped and guarded in all these finitudes that it is almost impossible for us to notice the prison.
Distances II
Question: How do you interpret space?

Answer: I basically treat space-time with metrics, at first abstractly. Differences of numbers grow with uncertainty, and when they can represent a metric (Example 1.5, Hamming's), they grow with that "distance." Formally, these are strong foundations, and the superstructure is the principle of saving information.
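As a small illustration of such a metric, here is a sketch (Python; the word length and alphabet are my choice) of the Hamming distance, with a brute-force check of the metric axioms:

```python
from itertools import product

# Hamming distance: the number of positions at which two
# equal-length strings differ.
def hamming(x: str, y: str) -> int:
    assert len(x) == len(y)
    return sum(a != b for a, b in zip(x, y))

# Spot-check the metric axioms on all binary words of length 4.
words = ["".join(bits) for bits in product("01", repeat=4)]
for x in words:
    for y in words:
        assert hamming(x, y) == hamming(y, x)        # symmetry
        assert (hamming(x, y) == 0) == (x == y)      # identity of indiscernibles
        for z in words:
            assert hamming(x, z) <= hamming(x, y) + hamming(y, z)  # triangle
print("metric axioms hold on", len(words), "words")
```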
With minimalism, the tendency of nature toward less informative states, goes the force of probability, by which more likely outcomes are realized more often, and with it the "organization of space," which we perceive as something special. A similar metric extends through the temporal dimensions, those outside the current present, which are multiplied by an imaginary coefficient (i² = -1).
Similar to the repulsive "uncertainty force," I imagine a repulsive "force of space" that increasingly "repels" less probable realizations, which, in this model, are the more "distant" events. Therefore, the greater the distance d(x, y), the less often a material point from place x can appear at place y. Only through a series of intermediate steps x → x₁ → x₂ → ... → y, each of which shortens the remaining distance, does the chance of such a transition (x → y) grow. Simply put, it is precise, yet intuitively still to be digested.
In contrast to simple geometry, the above description of space-time contains the movement of events (4-dimensional points) through successive presents, stepping from the past into the future, yet it fits equally well into timeless forms, for example Euclidean ones. Owing to the very axioms of a metric, themselves abstracted from classical geometry, there is no inconsistency with the usual intuitive understanding of space.
Gathering
Question: Why does gravity make the surrounding matter rotate?

Answer: Because the part of the matter that has fallen in is "gone," so we see only what remains rotating around the center. But that is not all. During the collapse of a substance, the densities of its surroundings are rarely uniform, and matter chooses to be dragged by the larger mass. Then all those surroundings synchronize, including parts whose initial trajectories were not directed toward the center of gravity.
I bring up this "poky" question only because of the interesting theoretical part of the answer, about spontaneous clustering. Nature will try to go outside the homogeneous distribution when it has maximum information. According to the principle of minimalism information will somehow decrease, but according to multiplicity it will tend to occur in different ways, and yet consistent with the infinite application of theories versus the uniqueness of practice, the form of the same will appear in various situations. See also the question of "present" (Abstractions), and look at the next two examples.
1. Let's say that we need to attach another node to the graph (the previous image, on the left) through the existing peer links. There are eight red links (left) and six blue (right), not counting the common black one. Hence the probability that the new node picks a red link is p = 8/15, a blue one q = 6/15, and the black one r = 1/15. There are three extensions: P, the node chooses a red link; Q, a blue one; R, the black one. Obviously, choosing red is the most likely, but let's compare this with the principle of economy of information.
The mean (Shannon's) information of this distribution is (Logarithm):
\[ S = -p \ln p - q \ln q - r \ln r. \]
The initial state (S0) and the three specified choices (Sp, Sq, Sr) have mean information:
\[ S_0 = -\frac{8}{15}\ln\frac{8}{15} - \frac{6}{15}\ln\frac{6}{15} - \frac{1}{15}\ln\frac{1}{15} \approx 0.882311 \]
\[ S_p = -\frac{9}{16}\ln\frac{9}{16} - \frac{6}{16}\ln\frac{6}{16} - \frac{1}{16}\ln\frac{1}{16} \approx 0.864740 \]
\[ S_q = -\frac{8}{16}\ln\frac{8}{16} - \frac{7}{16}\ln\frac{7}{16} - \frac{1}{16}\ln\frac{1}{16} \approx 0.881532 \]
\[ S_r = -\frac{8}{16}\ln\frac{8}{16} - \frac{6}{16}\ln\frac{6}{16} - \frac{2}{16}\ln\frac{2}{16} \approx 0.974315 \]
As expected, the smallest mean information is Sp. What this theory predicts actually happens in practice. Computer networks, power lines, and road intersections spontaneously build hubs with more and more peer links versus nodes with few. Some people have many acquaintances, roles in films, scientific papers, and the like, while many have few. Bestsellers are not as common as less successful releases, even though buyers are equal. Again, with the free flow of money, goods, and services, some (nodes) become greater holders of it than many others.
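These values are easy to reproduce; a minimal sketch in Python (function and variable names are mine):

```python
from math import log

# Shannon information of a discrete distribution, in nats.
def shannon(*probs):
    return -sum(p * log(p) for p in probs if p > 0)

S0 = shannon(8/15, 6/15, 1/15)   # initial state
Sp = shannon(9/16, 6/16, 1/16)   # new node takes a red link
Sq = shannon(8/16, 7/16, 1/16)   # ... a blue link
Sr = shannon(8/16, 6/16, 2/16)   # ... the black link
print(f"S0={S0:.6f} Sp={Sp:.6f} Sq={Sq:.6f} Sr={Sr:.6f}")
# S0=0.882311 Sp=0.864740 Sq=0.881532 Sr=0.974315 -> Sp is the smallest
```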
The equality of links leads to the inequality of nodes. It is the principle of saving information at work, for the sake of the efficiency of the network and its flows. Hence the rule of six degrees of separation, which would become seven or eight steps if the number of people on the planet were much larger, or five or four with a smaller number of nodes with peer links. Mathematical network theory deals with estimating these optima.
2. Another example is a gambling game. Two opponents flip a coin in a row, or decide in some other way, but always the same way, such that each time, with probability p, say p = 1/2, the first receives 10 coins from the second, which means that, with probability q = 1 - p, here q = 1/2, the second receives the same amount from the first. When the odds are half-and-half, it seems the game could go on indefinitely with a 50-50 profit.
However, if the players do not have the same starting money supply, the one with more will win. Namely, there are runs of n = 1, 2, 3,... consecutive wins by the first player, with probability pⁿ, or by the second, with probability qⁿ; in a fair game with n = 10 this probability is 2⁻¹⁰ = 1/1024. Games of chance are such that even these less likely outcomes must eventually occur, and they will sooner exhaust the initial stock of the player who had less money.
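A minimal simulation sketch of this "gambler's ruin" (Python; the bankrolls, stake, and trial count are my choice) shows how the poorer player is ruined far more often than the even rounds suggest; for a fair game, classical theory gives the ruin probability of the first player as b/(a + b), where a and b are the two bankrolls:

```python
import random

# Fair gambler's ruin: player A starts with 100 coins, player B with
# 1000, and 10 coins change hands per round until someone is broke.
random.seed(1)

def ruin_of_A(a=100, b=1000, stake=10) -> bool:
    while a > 0 and b > 0:
        if random.random() < 0.5:
            a += stake   # A wins the round
            b -= stake
        else:
            a -= stake   # B wins the round
            b += stake
    return a == 0

trials = 2000
ruined = sum(ruin_of_A() for _ in range(trials))
print(f"A ruined in {ruined / trials:.2%} of games")  # theory: 1000/1100, about 90.9%
```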
When someone plays that game against a casino, which has more money than any player, he necessarily loses everything unless he ends the game at some point before that. The player has earned something only if he quits at a moment of winning, but gamblers are reluctant to do so because their passion does not allow it. That is how casinos make money all the time: not every player always loses all his money, which would be bad publicity for the casino, but neither do all gamblers stop playing when they are ahead.
3. In its own way, each of these examples tells us that nature does not like equality. In the case of matter circling the center of gravity, there remains the thinning out and avoidance of a uniform void around the center. A network of nodes never ends up with one node holding all the links while every other node has just one, because that would create equality among the others. Moreover, by explaining (1) how the graph develops, we see more clearly why this extreme of efficiency does not occur.
It will not happen that one person knows everyone while everyone else knows only him, or that only one book in the world is in circulation, or that a single owner possesses all possible money while all the others have nothing. Each of these extreme situations can be explained in its own specific way, which is actually a new confirmation of a subtle phenomenon that intuition finds hard to accept: that "nature does not tolerate equality."
Compression
Question: Is information "squeezing" possible?

Answer: Yes, of course; it happens all the time with the gravitational field, for example. Let's consider how, and leave generalizations for later. If this is a good idea, the same form will occur in seemingly unrelated situations.
Information and uncertainty are, in essence, space, time, and matter. That is the assumption of this theory. Entering the gravitational field, particles bring their options with them, which are dampened according to the principle of saving information, while their quantity persists due to the law of conservation. The substance cools and crystallizes, reducing the oscillations of its particles and renouncing aggressive choices. These and other ways of shifting a substance into a mode of reduced emission of information often act like the reduction of a message by noise in a Markov chain.
Suppressed information reaches a level of non-emission it could not have had before and gives it up reluctantly, again according to principled austerity, and therefore looks like an environment with less information. That is why it appears attractive. An absurd situation occurs: the gravitational field, with a false image, presents itself as an attractive place to newcomers. To a relative observer, that is objectivity, including the slow flow of time which, as we know from before, is gravitationally attractive. Less information, fewer events, and a slower flow of physical time.
Note that, with this theory, we reduce the attractiveness of lower information emission and the slower flow of time to the same thing. It is like the "loss" of a message passing through a Markov chain, when it does not disappear but is choked by noise and becomes less readable. In both cases, the principle of minimalism defeats itself.
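The Markov-chain comparison can be made concrete with a small sketch (Python; the flip probability and message length are my choice): a message passed through a chain of binary symmetric channels never loses its bits, yet its agreement with the original decays toward the unreadable 50%:

```python
import random

# Pass a binary message through a chain of binary symmetric channels,
# each flipping every bit independently with probability eps. The bits
# persist, but agreement with the original decays toward 1/2.
random.seed(1)
eps, n_bits = 0.1, 10_000
msg = [random.randint(0, 1) for _ in range(n_bits)]

x = msg[:]
for step in range(1, 11):
    x = [b ^ (random.random() < eps) for b in x]   # one noisy channel
    agree = sum(a == b for a, b in zip(msg, x)) / n_bits
    print(f"after {step} channels: {agree:.3f} agreement")
```

After k channels the expected agreement is (1 + (1 - 2ε)ᵏ)/2, so the message is "choked" gradually rather than destroyed, much as described above.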
With further accumulation, the information of the gravitational field will be squeezed out into other times. The hypothesis (at least a decade old in this theory) that "dark matter" is exactly that, the gravity of the past acting on the present mass, still holds water for me. On the other hand, call it lateral, such tight redundancies are realistic options and will participate more in parallel times. They not only slow down in their own time but, by binding and pulling in their surroundings, point first toward some "communication" of parallel realities and, second, through this connection, toward the additional nature of mass and inertia.
With even further accumulation, when the center of gravity becomes a "black hole," it stops emitting even light, but it does not give up its attraction to other masses. So this "squeezing" of information exists; it does not contradict "minimalism"; moreover, it feeds on it.
Choices
Question: How is it that options "communicate and connect" mass or inertia?

Answer: Not only metaphorically; it is pseudo-real. For example, an excess of options usually carries more uncertainty, which at the moment of the outcome presents itself as more information. A six-option die with uncertainty ln 6 ≈ 1.79 conveys that same amount of information with each possible outcome, unlike the coin with the lower uncertainty ln 2 ≈ 0.69.
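That value for the die is simply the Shannon information of six equal options:
\[ H = -\sum_{k=1}^{6} \frac{1}{6}\ln\frac{1}{6} = \ln 6 \approx 1.792, \]
while the same calculation over the coin's two options gives ln 2 ≈ 0.693.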
With more options there is more uncertainty, and in that sense the responsibility of choosing is greater (see the image link). We say a decision is "harder," and even that is not just a metaphor. This is especially true when we accept that greater vitality comes with options: cunning, an additional ability to act, and different influences on the environment. For less vital individuals, choices and actions are a heavier burden, which in itself means that choices are a weight we may or may not be able to bear.
Sticking to bare, dead nature, we look at systems with more options as larger associations, and because of the law of conservation, their additivity (a kilogram plus a kilogram equals two kilograms) carries much more. If the parts have even the smallest mass, their multitude can add up to a great deal. It is similar with inertia, energy, action, or information. Moreover, this theory implies the objectivity of uncertainty, and with it the reality (weight) of such dilemmas.
More often, we interpret the outcomes of more likely random events through the "force of probability" and, as seen in that contribution of mine, we find forms with strong analogies in Kepler's, Newton's, and Coulomb's laws. Furthermore, we see this "force" in the special powers of living beings over inanimate things, and especially in games of winning and other specifics of "vitality." Still, mere life knows neither lies nor games of winning, so it is unfamiliar with the manipulations of politics, literature, and other fiction, without which even mathematics cannot do (Surplus, 3).
This "force of uncertainty" is an addition to the player's skill, without which the strategy cannot be separated from the lowest league, the so-called. "do-gooders," or it is the lowest bottom, which lies on the "principle of least action." These potential powers bind both animate and inanimate matter, ranging from the unreal to the real.
I emphasize that what elevates the top game, the skill of "evil," above the "manipulators" (the middle, second league) is precisely the subtle mixture of the rational and the fictitious, as we discovered in the case of mathematics too. Without lies there is no proof (and not only proof by contradiction), no understanding of the truth, and thus not even the strongest game of the first league.
It is an allusion to the "squeezing" of options by gravity, in which they, remaining masked and unable to truly disappear, become stretched into stronger connections with a wider, additional environment. And some of these answers are as important to the bigger picture as the details themselves.