Inertia

Question: How do you explain inertia?


Answer: Space is the condenser of the present (Entanglement); it is a record of the past that shapes the ongoing future. Through it we can best see how much past events determine the future, and we arrive at a new interpretation of inertia, derived from "information theory".

With information we measure options, and with inertia we measure the spontaneous aspiration, the force toward more likely states, those with a smaller "quantity of options". Inertia is a tendency to do nothing, to remain unchanged, insofar as space-time does not impose a given state on the body. Unlike principled minimalism, inertia is more akin to a law of conservation. When we speak of its "embeddedness", we also imply spontaneous flows.

Mass m is a measure of a body's inertia, or of its energy E = mc², where c ≈ 300,000 km/s is the approximate speed of light in a vacuum; physical action, in turn, is a product of energy and time. I consider action the equivalent of (physical) information, so we arrive at inertia as a measure of the body's ability to communicate per unit of time (while trying not to).

When we observe an object of greater inertia in a single, therefore limited, time interval, then, by the previous deduction, we must count on its greater representation in times that may not be available to us. What I am saying is easier to understand through the examples of the special theory of relativity.

As we know, we see the time of an object in motion as slowed, its mass as larger, and its units of length in the direction of motion as shorter. If the object recedes from us, it goes into our deeper past; conversely, if it approaches us, it reaches us from our future. This simple interpretation of the relativistic effects tells us that our present is induced not only by the past but also by our future. It is irrelevant to me whether physics acknowledges this about the future; the conclusion still stands that inertia is given to a body also by its future events.

It is a little more difficult to understand that "parallel realities" also contribute to inertia. Therefore, first read my simple contribution about them (Dimensions), then notice that the presence of an "amount of options" already speaks of them. Multiple dimensions of space-time arise from the objectivity of the alternatives, that is, from some kind of reality of those options. The belief that we do not perceive them is a fiction, just as it is a fiction that we perceive the very present of the surrounding objects.

In this way, we will somehow understand that the body's inertia is also rooted in its reality that runs parallel to ours, and with this we arrive at a more important interpretation of the effects of special relativity from the point of view of information theory. By moving, the body's time slows down, because the body thereby becomes partly present in parallel realities inaccessible to the observer. The perception of its events, which otherwise defines the speed of its time, is reduced.

However, the inertia is then greater, and so is the mass of the body. We understand the shortening of length units as a reduction of the "memory capacitor". As for the "forces" in this interpretation, there is the attraction of the slower flow of time — due to the tendency toward less (perceived) information — which becomes actual in the case of gravity through the gradual slowing of time toward the stronger field at the center of mass.

Laziness

Question: Does physical "inertia" have anything to do with our laziness, and if so, what?


Answer: Yes, and those connections are a novelty of the alleged information theory (mine). The tendency of bare substance to do nothing, let us call it resistance to options, looks to us as if nature "does not want" the information that makes up the cosmos.

It is not so new if we note that information is equivalent to action and that the "principle of least action" has long been known in physics. This principle boils down to the "Euler-Lagrange equations", whose solutions are all the motions known to physics today. Because of its universality, the same tendency has been observed in biology and beyond, where it is called the "principle of least effort". However, I believe that all such "principles" actually have a common root in the "minimalism" of information.

In contrast to the "least action", to which dead substance literally clings, the minimalism of information is a milder form and speaks only of an aspiration toward it, but it also applies to living beings. Basically, it is a tendency toward greater probability, which I have calculated with, calling it the "uncertainty force" (Attraction). I believe that during evolution, or however it happened, such a "force" became embedded in living beings, replaced by emotions or similar more life-like features, which is why it is harder to recognize. That is why we investigate laziness in more direct ways, using the tools of psychology, sociology, and marketing.

Loosening

Question: What are the forces of loosened, and possibly of accumulated, information?


Answer: In the explanation of inertia I mentioned the expansion of the universe. As for the forces that concern space itself, physics already researches them as "dark energy", and, although we need not be aware of it, the solutions it obtains will be partly answers to the above question.

The deepening of the past is not contained only in the expansion of space itself. It also has significant memory phases that modern physics might call "dark matter". What has been observed is its action, the familiar gravitational force of an unknown matter. Like the previous, let us say purely spatial one, we express it using the length (x = ict) that light (with speed c) travels in a given time (t).

The imaginary unit (i² = −1) in the calculus of coordinates represents the rotation of time into space, and it leaves no imaginary quantities in the expressions for observables, because time there appears squared and therefore real. The square of the speed of light is a huge number, so a second of memory becomes a relatively long distance. Therefore, the gravitational force from that past is negligible compared to the simple spatial one.

My theory of information implies both present and future interactions. I mentioned the influence of the past on the future (Alternatives) as a trivial question, and the reverse possibility even on the basis of special relativity (the body approaching us comes from our relative future).

There is also the old observation that, along with the increased frequency (f) of the light (λf = c) from an approaching source, the shortening of the wavelength (λ) can be treated as an increase in probability density, and a receding source as a decrease in density. Hence the movement toward the future is the more likely and therefore the weaker one, yet still some kind of influence of the future on the past. The differences in the wavelengths of outgoing and incoming light, the Doppler effect, given the differences in time between past and future events, will give the value of the force that pushes us into the future.

I mention such relativistic explanations as a curiosity, because Einstein, the author of the two theories of relativity, believed in determinism, which excludes the basic assumptions of information theory. However, his theory itself, although useful, is not necessary for this one to speak, alongside the principled frugality of information (minimalism), about the directionality of process flows.

Mass curves space, creating a force of gravity that pulls toward where time flows more slowly. However, the slower the relative flow, the more the body is present in "lateral times" inaccessible to the observer. Hence the possibility of calculating that "pulling force" into other dimensions simply by means of the force of gravity. Thus we see that the gravitational potential is also the potential of that force.

Curvature

Question: Can you elaborate on the above?


Answer: Of course. In order to preserve the simplicity of the text, we will stay a bit away from dry mathematics. First come stories from geometry, about Gauss's discovery of the curvature of space, then the Riemann tensor, and Einstein's connection of geometry with energy. In the end, we will come to an explanation of this connection through the principle of least action and the minimalism of information.

The German mathematician Gauss (1777 – 1855) was among the first to deal seriously with non-Euclidean geometries. Among other things, he discovered the "remarkable theorem" (Theorema Egregium, 1828). He established that two mutually perpendicular circles inscribed on a surface have radii whose product of reciprocal values, κ = 1/(r1r2), is invariant (does not change) under bending or unrolling of the surface without stretching. That number κ is the Gaussian curvature of the 2-dim space.

For example, let's look at the three surfaces in the image above right. The first of them is a "saddle", the second "cylindrical", and the third "spherical". At the "waist" of the first, two circles of oppositely directed radii can be inscribed, one going around the waist and the other along it; therefore the Gaussian curvature of such a surface is a negative number. When we do the same with a cylinder, the circle "along the waist" has an infinitely distant center and an infinitely large radius, so the Gaussian curvature is zero. At a point of the sphere, the two largest mutually perpendicular circles can be inscribed with radii of the same direction, and hence the Gaussian curvature is positive.

For example, because a sphere has curvature (κ > 0) and a plane does not (κ = 0), it is not possible to develop a correct map of the globe (without plastic deformation) on the plane of a table. In the geometry of the sphere, defining straight lines as the great circles of the sphere, the sum of the angles of a triangle will be greater than a straight angle (180°), the ratio of a circle's circumference to its diameter will be less than π = 3.14..., and the Pythagorean theorem will not hold either.

Algebraically, we define the type of space, its curvature, using the generalized Pythagorean theorem a⋅Δx² + b⋅Δy² = Δz², where the numbers a, b can be defined arbitrarily, Δx, Δy are the legs of a right triangle, and Δz its hypotenuse. For arbitrary points within the surface, the "straight line" between them is the path of shortest distance, regardless of the viewpoint of a three-dimensional observer who sees it as a curved line.
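To make the sign of κ concrete, here is a tiny numerical illustration (my own sketch, not from the original text), using only the product of reciprocal principal radii described above:

```python
# A minimal sketch (my illustration): the sign of the Gaussian curvature
# kappa = 1/(r1*r2) for the three surfaces mentioned above.
import math

def gaussian_curvature(r1, r2):
    """Product of reciprocal principal radii; the signs encode their directions."""
    return (1.0 / r1) * (1.0 / r2)

saddle   = gaussian_curvature( 2.0, -3.0)      # opposite directions -> negative
cylinder = gaussian_curvature( 2.0, math.inf)  # one radius infinite -> zero
sphere   = gaussian_curvature( 2.0,  2.0)      # same direction      -> positive

print(saddle, cylinder, sphere)                # -0.1666..., 0.0, 0.25

# On the sphere the angle sum of a triangle exceeds 180 degrees: a triangle with
# one vertex at the pole and two on the equator, 90 degrees apart, has three
# right angles, i.e. 270 degrees in total.
print(3 * 90)                                  # 270
```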

Already at this point in the story we glimpse the theory of information, in the "relativity" of the laws that could arise from the maximum uncertainty present at the time of the "Big Bang". But the sequel follows.

Translation

Question: Rumor has it that the Riemann tensor is inexplicable without formulas?


Answer: In the picture on the left we see that the parallel transport of a vector, along different "geodesic lines" (shortest paths) on a sphere, does not end with the same values. Riemann's idea (1854) was to express the Gaussian curvature of space through this disturbance of translation around a given point.

Gauss once wrote that he did not like teaching and considered himself a bad lecturer, yet it happened that Dedekind (a pioneer of set theory) and Riemann, along with some later prominent German mathematicians, were his students, and that they dared to wrestle with the greatest current problems of the time, the ones their incomparable teacher would say were too difficult for him.

Well, in a not very simple way, Riemann managed to formulate the famous tensor Rlijk (one upper and three lower indices), which expresses the value of the Gaussian curvature in a general n-dim space. In doing so, he practically became one of the founders of tensor calculus and of n-dimensional geometries, and the inventor of the non-Euclidean geometry also named after him.

Riemann's is a geometry of positive curvature (like the spherical one), in contrast to the geometries of the Russian mathematician Lobachevsky (hyperbolic, 1829), who worked in the time of Gauss and before Riemann, and whose geometries can now be said to have negative Gaussian curvature. Before Riemann, and a few years after Gauss's discoveries, the Hungarian mathematician János Bolyai (1831) appeared with similar ideas.

The fantastic thing about these geometries is that they are all equally consistent, as perfect as the Euclidean one, and yet they give us such different views of "apparently the same" phenomena. I count them among the examples of possibly different developments of cosmic physics, but certainly also among the possibilities of our own. The latter was discovered by Einstein (1916).

Energy

Question: I have never quite understood Einstein's derivation of "general relativity", can you explain it to me?


Answer: It was not derived by pure deduction but is a mixture of the most difficult mathematics and ingenious intuition. Everything is energy and that’s all there is to it — is one of Einstein's famous sayings that could be used to understand his theory.

Namely, if "everything is energy", then geometry can also be viewed energetically, and this is precisely what he allegedly dashed off in 1916 (as contemporaries said) in the equation Gij = Eij, whose left side was later called the Einstein tensor (Space-Time, 1.2.9 Einstein's gravity).

On the left side of his equation is an expression derived from the Riemann tensor, a 4×4 = 16-component tensor that expresses the curvature of space, and on the right is some energy with the same number of components, which Einstein came up with for the occasion. It should be known that energy is a scalar (a number), as opposed to a vector, which is a series of numbers, or a matrix — a rectangular scheme of numbers. A tensor is a general quantity that also includes higher-dimensional ones, such as 4-dim spacetime. Einstein simply equated geometry with "energy" and then, by "tuning", obtained the general equations of relativity. To the astonishment of his contemporaries.

A distinguished physicist once attended Einstein's presentation of relativity and, confused, told reporters on the way out that he had previously heard that only three people understood Einstein's relativity, and now he realized that Einstein was not among those three.

From the point of view of information theory and the above explanation of inertia, we can understand even better the very idea behind the expression "everything is energy". Namely, looking at the translation of the vector on the sphere from the previous question, let's say it is a momentum vector. The difference between its initial and final values tells us about a defect, a leakage of energy somewhere outside the surface of the sphere. It is a phenomenon first noticed (I think) by the Russian physicist Landau, who unfortunately died in 1968: that Einstein's gravitational field is questionable with respect to the law of conservation of energy.

Landau believed in the correctness of those equations, and that part of their difficulties was constantly left aside. After all, the solutions to Einstein's equations have been tested by experiments, perhaps more than any part of physics, and have been shown to be accurate to ten decimal places, which is also a greater precision than found in any previous field of physics (all but quantum mechanics).

That energy can leave curved 4-dim space-time I consider consistent with the finding that it is situated in 6 dimensions. Multidimensionality is "unacceptable" to Einstein's understanding of reality, which he and many to this day consider deterministic and therefore limited to only those four dimensions. Moreover, we saw that the development of the universe implies its expansion into all those additional dimensions, and this story explains another mechanism of that process.

Lagrangian

Question: Are there correct proofs of the general relativity equations?


Answer: Yes, along with the experimental ones, there were also many attempts to prove the general equations of relativity theoretically, and among them many successful ones. I will present one of the methods that I liked the most.

From Newton (1687) we know the concept of energy change "through the work of a force along a path" (dE = F⋅dx, little by little), and of the change of momentum by the "impulse of force" (dp = F⋅dt). The work invested equals the energy obtained (E), and the momentum (p = mv) is proportional to mass and velocity. Hence "force gives acceleration to mass" (F = ma). In playing with the simple formulas of Newtonian mechanics, velocity (v = dx/dt = ẋ), the change of path with time, and the change of velocity, acceleration (a = dv/dt = v̇), also take part.

Some would say that Einstein completely rejected that concept, but it would be more accurate to say that he just avoided it. To reduce energy and force to the geometry of space, it is enough to stick to inertial motion, when we feel no force or energy exchange. In between appeared the method of Lagrangian mechanics (1788), with the Euler-Lagrange differential equations of motion pictured above on the left, whose derivation and examples you will find in my book "Minimalism of Information". This intermediate mode acts as a third aspect of the same thing.

In order to understand the above formula, let's recall that kinetic energy is the half-product of mass and the square of velocity (T = mv²/2) and that it is converted from or into the potential energy, say of an elastic spring whose tension increases with the square of the distance from the equilibrium position (V = kx²/2). The difference between the kinetic and potential energy is the Lagrangian (L = T - V). In the picture, inside the parentheses on the left side of the equation is the change of the Lagrangian (the exchanged energy) with a change of velocity, i.e. the momentum, and the change of momentum in time is a force. On the right side there is the force again, the one that "does work along the path". So Lagrange's method is actually Newton's in disguise!
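A small symbolic check of that claim (my own sketch, assuming SymPy; the variable names are mine) shows that the Euler-Lagrange equation for the spring Lagrangian reproduces Newton's law m·ẍ = −kx:

```python
# A minimal sketch (my illustration, assuming SymPy): the Euler-Lagrange
# equation d/dt(dL/dx') - dL/dx = 0 for L = T - V of a mass on a spring.
import sympy as sp

t = sp.symbols('t')
m, k = sp.symbols('m k', positive=True)
x = sp.Function('x')(t)

T = m * sp.diff(x, t)**2 / 2          # kinetic energy
V = k * x**2 / 2                      # potential energy of the spring
L = T - V                             # Lagrangian

eom = sp.diff(sp.diff(L, sp.diff(x, t)), t) - sp.diff(L, x)
print(sp.simplify(eom))               # k*x(t) + m*Derivative(x(t), (t, 2)),  i.e.  m*x'' = -k*x
```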

The novelty is the discovery that the absence of force means the same as the absence of exchange between the potential and kinetic energy, and also the possibility of handling forces without knowing or mentioning the force. It is a convincing step toward Einstein's understanding that space is energy, and that the gravitational force is simply "free fall" along the geodesic lines of the gravitational field.

We further see that the Lagrangian over time defines the physical action, and that the product (Lt) of the exchanged energy and the duration of the exchange is equivalent to communication. In constant units of time, the derivatives (changes) of the Lagrangian with time vanish at a "stationary point" (minimum, maximum, or inflection). Such are the expressions of the principle of least action, or of the tendency toward a less informative state, that is, toward more frequent outcomes of more probable events.

Furthermore, all that remains is to strictly derive Einstein's general relativity equations (1916) from the Euler-Lagrange equations, thus reducing them to the principle of least action. You will find that, along with the proofs of the generalized Euler-Lagrange formula, under the titles "2.4 Euler–Lagrange equation" and "2.5 Einstein's general equations" in my mentioned book (Minimalism of Information, Economic Institute Banja Luka, 2020).

Hamiltonian

Question: Can you explain Hamiltonian mechanics to me as fantastically well and succinctly as Lagrangian?


Answer: Yes, thank you, but the quality of an explanation is a matter of the interlocutor's prior knowledge, taste, and the moment. On a slightly higher level, let us admit that the contribution "Physics Mini Lesson" by that author is even better.

In the picture on the right, q is the path (it is x in the previous one, on the left), and p is the momentum (the expression in parentheses of the previous one). These are the usual notations of Hamiltonian mechanics. The Hamiltonian represents the sum of the kinetic and potential energy, H = T + V, of a physical system, unlike the Lagrangian, which is the difference of those two energies (L = T - V).

If you were in a room freely falling in a gravitational field (in a satellite), the Lagrangian predicts that you would feel no gravitational force. You would be in weightlessness as long as the room moves along the paths we calculate using Lagrangian mechanics. But the sum of the room's kinetic and potential energy is the Hamiltonian. It is constant, due to the law of conservation of energy, and it is more important to an observer outside the room.

We know that the planets revolve around the Sun (satellites around the Earth) along elliptical paths, in what we call "free fall". As it descends, moving closer to the center of gravity, the satellite's kinetic energy increases at the expense of its potential energy, so that the total (the Hamiltonian) stays constant. Elsewhere along the path, when the satellite moves toward a weaker gravitational field, its speed decreases, the kinetic energy decreases, and the potential energy increases again, so that the Hamiltonian remains constant. This is what an observer at some arbitrary but fixed point outside the satellite can see, without needing to understand how the one inside the satellite feels.
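A numerical sketch of that picture (my own illustration, in hypothetical units with GM = 1, not a calculation from the text): integrating Hamilton's equations for such a "free fall" and checking that H = T + V stays essentially constant along the orbit.

```python
# A minimal sketch: Hamilton's equations dq/dt = dH/dp, dp/dt = -dH/dq for a
# planar Kepler orbit, integrated with a simple leapfrog step; H stays constant.
import numpy as np

GM = 1.0
def H(q, p):                                   # Hamiltonian per unit mass
    return 0.5 * np.dot(p, p) - GM / np.linalg.norm(q)

q = np.array([1.0, 0.0])                       # initial position
p = np.array([0.0, 1.1])                       # initial momentum (eccentric orbit)
dt, H0 = 1e-3, H(q, p)

for _ in range(100_000):                       # about a dozen orbital periods
    p = p - dt/2 * GM * q / np.linalg.norm(q)**3
    q = q + dt * p
    p = p - dt/2 * GM * q / np.linalg.norm(q)**3

print(H0, H(q, p))                             # the two values differ only slightly
```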

It is a fantastic thing that with the equations of Hamiltonian mechanics (1835), in the mentioned picture on the right, we get almost identical results for the trajectories of celestial bodies that Newton calculated in his own way almost 150 years before Hamilton, that Lagrange could have obtained some 50 years earlier, and Einstein some 80 years later. At the same time, one may fail to notice that in all four cases these are variations of the same truth, and of course also improvements in detail through the development of science over those decades.

Question: These equations look ominous to me?

Answer: The asker is reading my book "Quantum Mechanics", especially parts of the chapter "1.3.8 Waves of matter", and in this explanation I will stick to the image above. The change of position is the velocity, and in the first equation it is equal to the change of the Hamiltonian, H = H(q, p, t), per unit of momentum p. The partial derivative with respect to p means that we assume no change of the other two variables (position q and time t) of the total energy (H); that is why we work with infinitesimals.

The higher the speed of the body, the greater the increase in total energy measured per unit of momentum. The greater the force, the greater the increase in the body's energy, we would say in the language of Newtonian mechanics. The second equation says that the action that pushes the body in one direction and changes its energy causes an equal reaction force in the opposite direction — that is what the other equation describes, because ṗ is the change of momentum over time, i.e. the force.

It is clear that Hamiltonian mechanics derives from Newtonian, and it will not be much more difficult to understand that the Hamiltonian can be derived from the Lagrangian, for example from the video "Hamiltonian from Lagrangian". There you will also see why the Hamiltonian is "conservative energy", i.e. all the energy of a given physical system for which the law of conservation applies. I won't bore you with it here.

The fantastic accuracy of Hamiltonian mechanics in predicting the motion of physical bodies culminated in quantum mechanics, precisely in those tiny details of our world for which we have almost no other tools of observation. David Hilbert contributed the most to that discovery at the beginning of the 20th century; the abstract spaces now named after him (Hilbert spaces) are almost terrifying in the infallibility of quantum physics experiments.

Hilbert's idea was simple: to treat quantum states, and even quantum processes, as representations by vectors. It turned out that the former represent complex probability distributions, and the latter unitary (unit) operators. Thus the states are superpositions of possible outcomes, and the processes are such that they can obey conservation laws. It remains to investigate which operator interprets which physical phenomenon. Thus the "energy operator" belongs to the Hamiltonian.

Furthermore, if Ĥ = iℏ∂t is the Hamiltonian operator and ψ the quantum state vector, then their eigen-equation Ĥψ = Eψ is just the legendary Schrödinger equation, from which the idea of quantum mechanics grew. This E, now an eigenvalue of the Hamiltonian, is the observable (a quantity that can be measured experimentally) of the energy of the given system.
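As a concrete, if elementary, illustration of that eigen-equation (my own sketch, in natural units, not taken from the book), one can diagonalize a finite-difference Hamiltonian for a particle in a box and compare the eigenvalues E with the known analytic levels:

```python
# A minimal sketch: H psi = E psi solved numerically for a particle in a box of
# width L, by diagonalizing a finite-difference Hamiltonian (kinetic term only,
# infinite walls), compared with E_n = (n*pi*hbar)^2 / (2*m*L^2).
import numpy as np

hbar, m, L, N = 1.0, 1.0, 1.0, 1000              # natural units, N grid points
dx = L / (N + 1)
main = np.full(N, hbar**2 / (m * dx**2))         # diagonal of -(hbar^2/2m) d^2/dx^2
off  = np.full(N - 1, -hbar**2 / (2 * m * dx**2))
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

E = np.linalg.eigvalsh(H)[:3]                    # three lowest eigenvalues
E_exact = [(n * np.pi * hbar)**2 / (2 * m * L**2) for n in (1, 2, 3)]
print(E)         # approximately [4.93, 19.74, 44.41]
print(E_exact)   # [4.9348..., 19.739..., 44.413...]
```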

We only need one more step, to see the product of Hamiltonian and time, otherwise known as physical action, as (physical) information.

Freefall

Question: Do you know any alternative theories of gravity?


Answer: The interlocutors are not the same, though some questions are, and this is a good opportunity to deepen these difficult topics. Take the above explanations as an introduction, then, in my book "Minimalism of Information" under the title "2.5 Einstein's general equations", try to understand the proof.

1. We start from the usual action of the form S = ∫ L d⁴V, continuing by calculating the sum (integral) of the Lagrangian over 4-dim space-time events along arbitrary paths and looking for the one that is the shortest. It is the known method of variation. As we know, coordinate transformations are done via the Jacobian, and this also applies to the metric tensor gμν. What we use is the general Pythagorean theorem, whose infinitesimal step is its hypotenuse.

That step (ds) is also called a 4-dim interval, or metric. It has the form ds² = gμν dxμ dxν, where μ, ν ∈ {1, 2, 3, 4} are coordinate indices; the first three represent infinitesimal displacements along (otherwise arbitrary) axes, while the last, dx4 = ic dt, is temporal, and summation is understood over the repeated upper and lower index. The Riemann tensor with four indices reduces to the Ricci tensor with two, and the latter to a scalar (without an index), which serves as the Lagrangian of such a metric. After extensive calculations we get Einstein's general equations of relativity.
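For orientation, the variational route just described is usually written through the Einstein–Hilbert action (a standard textbook form, added here by me for clarity, not a quotation from the book):

S = (c⁴/16πG) ∫ R √(−g) d⁴x + Smatter,  and the condition δS = 0 yields  Rμν − (1/2)R gμν = (8πG/c⁴) Tμν.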

The interesting thing about this calculation is that it uses "classical" mathematical physics without the principle of relativity, and it gets the same thing that Einstein predicted earlier by relying on those principles. I cited this derivation above as an example of a "definitive proof" of those equations. And as for the theory, there are other interesting methods.

2. In my book "Space-Time" (Economic Institute Banja Luka, 2017), the section "1.2.8 Vertical drop" works out an interesting idea found on the Internet: to try to derive the effects of general relativity separately, from Newtonian physics itself (the weak field) and a part of special relativity, above all E = mc². It turns out that this is possible, and more than that.

According to Galileo's principle of equivalence of inertial and gravitational mass, the strength of the gravitational force increases as the mass increases. If over the displacement dr the mass increases by dm, the energy change is dE = F⋅dr; substituting E = mc² and integrating, we find m = m0⋅exp(GM/rc²), where m0 is the rest mass of the falling body, M the attracting gravitational mass, and G the gravitational constant.

The exponential of a small exponent can be approximated as eˣ ≈ 1 + x, which is especially accurate here owing to the huge value of the square of the speed of light c² and the assumption of a relatively small mass M, i.e. a weak field. This and the corresponding approximations 1/(1 + x) ≈ 1 − x, conversely 1/(1 − x) ≈ 1 + x, and similarly √(1 − x) ≈ 1 − x/2, reduce the previous exponential expression to the known relativistic effects of the gravitational field. Among other things, these are the relative increase of mass, the slowing of time, and the contraction of lengths in the direction of the force.
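How small the exponent really is in a weak field can be seen from a quick numerical check (my own sketch, using Earth's surface as the example):

```python
# A minimal sketch: the exponent GM/(r c^2) at Earth's surface, and how close
# exp(x) is to its linear approximation 1 + x there.
import math

G = 6.6743e-11        # m^3 kg^-1 s^-2
M = 5.972e24          # kg, Earth's mass
r = 6.371e6           # m, Earth's radius
c = 2.998e8           # m/s

x = G * M / (r * c**2)
print(x)                          # ~ 7.0e-10
print(math.exp(x), 1 + x)         # indistinguishable within double precision
```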

The famous Schwarzschild metric of general relativity

ds² = (1 + 2GM/rc²)dr² + r²(sin²θ dφ² + dθ²) − (1 − 2GM/rc²)c²dt²

is valid for centrally symmetric gravitational fields, and it can also be assembled from these free-fall results. It is unusual in that it is then no longer only for weak fields. Otherwise, it is usually written as shown, in a spherical coordinate system, because the "line element"

2 = sin2θ dφ2 + r22

is circular, is perpendicular to the direction of the force, and is therefore not subject to relativistic effects, which simplifies its notation.

3. In the same book (Space-Time), under the title "1.2.10 Schwarzschild solution", I covered another interesting contribution, which belongs to the answers to the question posed to me here. It is a derivation of the Schwarzschild metric from Einstein's field equation, but starting from the most general form

ds² = e^{2A(r)}dr² + r²dΩ² − e^{2B(r)}c²dt²

where A(r) and B(r) are arbitrary functions dependent only on the distance r from the center of gravity. A not very simple calculus of tensors, demonstrated in the rest of the book, shows that these functions necessarily reduce to previous Schwarzschild coefficients.

In other words, there is no alternative! Almost. Just like the old motto "all roads lead to Rome", this evidence shows us the freedom to choose the way of deriving the ever "one and the same" consequences of Einstein's equations, and their limits.

Space

Question: Any ideas on space metrics?


Answer: Yes. The structure of the cosmos is made of information, and its essence is uncertainty. No matter what kind of physical system you imagine, big or small, it is always in some relation of uncertainty, both with respect to its interior and with respect to its exterior. This implies the finiteness of the divisibility of matter (Packages) and the infinity of everything outward.

At the same time, the repulsive "Uncertainty Force" acts. With no boundaries to oppose it, the universe expands. There is more and more space and less and less substance, in such a way that the specific information per unit volume decreases. This is what a cosmic metric should express, with the caveat that it may not be so. There are already some space metrics, but they do not follow the above idea, so I also researched others and wrote some of it down locally, say the "Rotating Universe", or the "Metric of the Torus", and a few more that I will mention when I remember them. I did not like them.

Following the logic of Gauss's "remarkable theorem" (Curvature), if space expands only radially from us (and from everyone else in space), then the curvature of space tends toward spherical, positive values. If it was once negative, it could now be zero, and as the diameters (2r) of the circles around us grow while the circumferences (s) stay constant, the ratio π = s/2r decreases, so the curvature is already heading toward the positive, toward the elliptically curved.

We don't really have enough astronomical observations to make a decision, I believe, but we have options. One possibility is that the circumferences increase too, so that space remains flat even though it is constantly expanding.

Potential II

Question: Information is the equivalent of action, and what is the equivalent of striving for less information?


Answer: The potential. We are talking about information multiplied by a constant. Potential energy multiplied by time is potential action, a potential force. It can be positive, when the information is greater than the surrounding information and its force is repulsive, or negative, when the information is smaller and its force is attractive.

It is consistent with the previous question (Potential), but let's continue to talk about physics. The information of the gravitational field will be the counterpart of the gravitational potential φ = -GM/r, otherwise defined as the work W that an external agent must do to bring a unit mass from a neutral infinity to a point at distance r from the center of the mass M.
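That definition can be checked numerically (my own sketch, with Earth as the example): the work per unit mass of bringing a test body in from "infinity" equals -GM/r.

```python
# A minimal sketch: the work per unit mass done by an external agent bringing a
# test body from (numerical) infinity to distance r equals the potential -GM/r.
import numpy as np

G, M, r = 6.6743e-11, 5.972e24, 6.371e6        # SI units; Earth as the example

s = np.geomspace(r, 1e13, 1_000_000)           # path from r out to "infinity"
W = -np.trapz(G * M / s**2, s)                 # integral taken from infinity down to r
print(W, -G * M / r)                           # both approximately -6.26e7 J/kg
```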

Electrons that are mutually at rest are in a deficit of information, and when they move, in a surplus (Motion). That is one of the ideas for interpreting their repulsion, which I once liked. It calls for the notion of "negative information", not only in the relative sense of a deficit with respect to the current environment, but also with a value "less than the void".

The smallest packages of information (quanta) exist, among other things, because of the discrete nature of options. By reducing the amounts (of information) to smaller and smaller quantities, in the final step we arrive at pure uncertainties. And I believe that beneath them we would find certainties again. Because of this extraordinary deficit of uncertainty, the forces that hold a quantum together are unusually strong. Moreover, these forces increase with the distance between the parts of the physically smallest packages.

We know about quarks, but here I also mean other, unknown but possible divisions of elementary particles held together by perhaps even stronger forces. We can also see the expansion of space itself (Space) as driven by forces increasing with distance. A difference in information creates tension, a drive; it gives force. When we yield to such forces we are natural, dead, and when we defy them we are vital.

Deception

Question: Does this mean that lies also have potential?


Answer: That's right. The truth is strong but unattractive (The Truth), which tells us about some forces, and therefore about potential.

An interesting documentary, "Art of Deception" or something similar, let's say about the "force of deception", deals with this topic in its own way. Magic happens in the mind of the beholder, not in the hands of the magician — as a magician's saying goes.

Along with that video, its author (Alex Stone) writes that magic is dramatized deception. It offers us the lie of performance art, the opposite of theater. Our brains "see" what is not real and, for various reasons, let magicians get away with it. It turns out that we can learn a lot about how the mind works, and why, by understanding how magicians distort our perceptions: through a blend of psychology, storytelling, and dexterity, exploring the cognitive underpinnings of misdirection, illusion, deception, and secrecy. Now through information theory too.

To remind you: one needs vitality to be able to lie (Untruth). It is about the excess of information that living beings, for example, may have compared to the non-living substance of which they consist, and because of which they can engage in deception. "Nature does not lie," it is said, referring to that part of physical reality that we get from the Euler-Lagrange equations, i.e. the part that literally follows the principle of least action of physics.

Collision

Question: What then would be the potential of an elementary particle?


Answer: I will explain this through the example of a free particle, then collisions in general, and especially Compton scattering.

1. The momentum p = (px, py, pz) is bound to the wave vector of an elementary particle by the relation k = p/ℏ, where ℏ = h/2π ≈ 1.055×10⁻³⁴ J⋅s is the reduced Planck constant.

ψ = a⋅exp[i(kr - ωt)]

is the wave function of a free particle, a solution of the Schrödinger equation of quantum mechanics. Here a is the wave amplitude, r = (x, y, z) the particle's position vector, and ω is ℏk²/2m if the particle has mass m (equivalent to E = p²/2m), or ω = kc for massless particles. The wave vector and momentum intensities are respectively k and p = ℏk.

The novelty is that we now associate the number φ = kr - ωt with information. It is the angle in Euler's formula exp(iφ) = cos(φ) + i sin(φ), from which it can be seen that the exponential is a real number for the values φ = 0, ±π, ±2π, ..., while for all other arguments φ it has complex or purely imaginary values.

Information is (in my theory) extended to complex numbers, where values that are not real represent pseudo-real phenomena, departures of physical objects into parallel realities (Bypass). It is transmitted through real space-time (r, t) by discrete action steps.

2. In a collision, and generally under the action of some force, this "angle" changes by Δφ, so the information that the particle carries changes proportionally. This is accompanied by a change in the wave number k, i.e. the momentum p = ℏk, the position r, the direction of motion, and the energy. Such a view of collisions may be unusual, new, and physically repellent at first glance, but it is mathematically correct. It is in accordance with the classical interpretation of potential and force.

3. Compton scattering is a special type of collision, as in the image above left. A photon γ of wavelength λ hits an electron e⁻ and deviates from its path by the angle α (the electron by the angle β), whereby the wavelength of the scattered photon becomes greater by Δλ = h⋅(1 - cos α)/mc, where m ≈ 9.109×10⁻³¹ kg is the electron mass and c ≈ 300,000 km/s the speed of light.

The photon loses some energy and changes wavelength, and the effect is significant because it confirmed the corpuscular (particle) nature of light. Compton received the Nobel Prize in Physics in 1927 for his discovery and explanation.
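For a sense of scale, the shift can be computed directly from that formula (my own sketch):

```python
# A minimal sketch: the Compton shift delta_lambda = h*(1 - cos(alpha))/(m*c)
# for a few scattering angles.
import math

h = 6.626e-34          # J*s, Planck constant
m = 9.109e-31          # kg, electron mass
c = 2.998e8            # m/s, speed of light

for alpha_deg in (30, 90, 180):
    alpha = math.radians(alpha_deg)
    d_lambda = h * (1 - math.cos(alpha)) / (m * c)
    print(alpha_deg, d_lambda)     # 90 degrees gives ~2.43e-12 m, the Compton wavelength
```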

You can see additional observations about such scattering, for example, in my book "Space-Time" (1.1.9 Compton Effect). If the photon's deviation, from a path of shorter to one of longer wavelengths, is seen as a departure from a path of higher probability density, then this is another confirmation of the claim that "force changes probabilities", typical of this information theory. The cycles of the photon's angle φ become longer, it becomes rarer in this reality, but because of this the outgoing part shares itself with the electron, leaving it some energy and motion.

By the way, in Heisenberg's uncertainty relations the wavelength is treated as a smeared probability of finding the particle and, as far as I know, only there in physics — until this interpretation. It should not be confused with the particle's amplitude (Born rule), whose squared intensity represents the probability of measuring the particle.

Changes

Question: You say "force changes probabilities". Are there any restrictions on these "changes" and which ones?


Answer: Changes are limited by conservation laws, from the total momentum of a given system and its energy, through spin, to information. These include the known laws of physics, but also some others from the "information theory" (mine, unofficial).

The novelty is that we now consider the number φ = kr - ωt in the exponent of the free particle as information, possibly multiplied by some constant. Such is also the information (action) s = pr - Et = ℏφ, written with the notation of the previous answer, so its change with time is an energy change.

This is in accordance with the equivalence of information and action (change of energy over time), or with the determination of probability by the information ln(e^{iφ}). The requirement to deal only with real values of the results is not necessary, while rounding to some mean values is. The information itself is subject to the following more or less (un)known limitations.

1. In the micro world of physics, information is a complex number, which becomes less and less visible to us in ever larger worlds due to the laws of probability. However, quantum mechanics has revealed much of its world by working only with real observables, although this has its limitations.

2. There is a constraint on the derivative, S' ≤ S, of the Shannon information of a probability distribution fk = fk(x) with respect to an arbitrary variable x. Namely:

S' = -(∑k fk ln fk)' = -∑k (fk' ln fk + fk') =
= -(∑k fk' ln fk) - (∑k fk)'
= -(∑k fk' ln fk) - 1'
≤ -∑k fk ln fk = S,

because the derivative of the constant is zero (1' = 0), and the inequality arises from the statements of the answer "Emergence II".

This limits the changes of the (average) information to the information itself, regardless of the type of variable. It applies to the macro world of physics, so it is not possible to forcefully convert positive information into negative, say positive charge into negative. We distinguish it, for example, from the creation and annihilation of particles.

3. Integrating the inequality S' ≤ S we get S ≤ C⋅eˣ, where C is an arbitrary constant and x the variable of the mean information S = S(x) of the mentioned probability distributions. There is no mean (Shannon) information that could change faster than the exponential function.

4. The following are three inequalities from the script "Information Theory I", in the section "41. Restrictions". It is a three-part theorem about probability densities of a continuous real variable.

If that density is zero everywhere outside the interval (a, b) ⊂ [0, 1], then the largest mean information, log(b - a), belongs to the uniform distribution on that interval. If the finite expectation μ of the distribution is given, then the exponential distribution has the largest possible information, log(eμ). When the dispersion σ of the distribution is known, the most informative, with log(σ√(2πe)), is the normal (Gaussian) distribution.
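These three maximum-entropy values can be checked numerically (my own sketch, with the natural logarithm as "log"; the parameter values are arbitrary):

```python
# A minimal sketch: numerical differential entropies of the three cases above.
import numpy as np

def entropy(pdf, x):
    p = pdf(x)
    return np.trapz(-p * np.log(p), x)           # -integral p ln p

a, b = 0.0, 1.0
mu, sigma = 2.0, 1.5

x_u = np.linspace(a, b, 100_001)
x_e = np.linspace(1e-9, 60.0, 400_001)
x_n = np.linspace(-20.0, 20.0, 400_001)

uniform = lambda x: np.full_like(x, 1.0 / (b - a))
expon   = lambda x: np.exp(-x / mu) / mu
normal  = lambda x: np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

print(entropy(uniform, x_u), np.log(b - a))                          # ~0.0 and 0.0
print(entropy(expon,   x_e), np.log(np.e * mu))                      # ~1.693 both
print(entropy(normal,  x_n), np.log(sigma * np.sqrt(2*np.pi*np.e)))  # ~1.824 both
```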

Vastitude

Question: Our perceptions are finite, and we are in an infinite sea of options. How is it possible?


Answer: Chance has its regularities. The basis is the more frequent realization of more likely outcomes, so less informative states are more attractive; hence the order and the inevitability of further lawfulness.

When there are endless possibilities of approximately equal chances, then none of them happens by itself, and we would consider that there are none. Despite the extreme accuracy of mathematical analysis (the infinitesimal calculus) and set theory, this is what we see.

We understand infinities here as being as "real" as energy, momentum, and force, but physics recognizes only the latter as its own, so this "information theory" is not a branch of physics. The essence is no longer in energy, which is thereby no less "nebulous" (it is not easier to feel, smell, hear, or see than the continuum of real numbers), but in the differences that arise from the division of roles.

This is how physics accepts the principle of least action and the Euler-Lagrange equations that derive from it, Noether's theorem (Information Stories, 1.14 Emmy Noether), the conservation laws, and, I believe, the divisibility of information down to the smallest packets, but not infinity itself. And there are laws that create an even greater gap between the finite and the infinite.

Toward the end of the third script (Information Theory III), in the part about information transmission channels and coding, though it is easy to carry them over to any processes, you will find results about the spontaneous isolation of a small number of highly probable outcomes from a large number of improbable ones, in cases where there are very many outcomes. These are difficult theorems (with difficult proofs), but they are tamed if we apply the principle of parsimony of information.

Since a uniform distribution has the most information, the decomposition of uniformity into outcomes of different probabilities occurs spontaneously, and then further toward a small number of much more likely ones. Forces and similar tendencies hit their "head against the wall" of the obstacles thus created, and lack the strength to overcome spontaneity. Laws themselves can be so deficient in information (in quantity of options) that they become impenetrable to free choices. That is the interpretation of "order" typical of this information theory.

On the other hand, uncertainties exist even when we don't see them; they are also there when we ignore them. The properties of any physical system, no matter how big or small, are indelible, so change remains an important invariant. Repetition is a way for information to obey the law of conservation, but, due to inevitable uncertainty, not in exactly the same cycles, so we cannot have periodicity like that of the decimals of rational numbers. These are apparent, approximate periodicities, more like the writing down of real numbers, behind which springs a continuum of which we need not be aware.

I consider the fundamental uncertainty of "information theory" to be sustainable only with the constant and barely visible creation of the cosmos, which draws its novelty from the infinite sea of options surrounding our finitude. That is why we can have such a small number of senses and windows to the world amid so many infinite possibilities, countless dangers, and ways of suffering: because only a small number of options out of too many are relevant. It is in the nature of chance that it contains within itself the germs of necessity.

I do not believe that the infinities we constantly deal with in mathematics have definitively worn out all their secrets, giving birth only to finite perceptions and the world of physics as we assume it, with no chance of anything changing: that in the past, present, and future all possibilities are always one and the same, or drawn from some set, however large, that could be rounded off. Such a world would be so limited that there would be no place in it for Russell's paradox (there is no set of all sets), Gödel's impossibility theorem (there is no theory of all theories), or this information theory (of infinite choices).

Limitations

Question: Can you explain to me the limitation of force from the point of view of information perception?


Answer: Yes. Let us look, in order, at what the "information of perception" is, then at spontaneity and force. Let's start with an ordered sequence of occurrences ω1, ..., ωn that the participants A and B interact with (talk about).

The subject A can be a natural phenomenon, a group of objects, or a person, and so can its opponent B. Let the intensity of the occurrence ωk (k = 1, ..., n) for participants A and B be given by the values ak = ak(ωk) and bk = bk(ωk) respectively. The arrays of values, the vectors a = (a1, ..., an) and b = (b1, ..., bn), have total squared intensities a² = a1² + ... + an² and b² = b1² + ... + bn², in the simplest, Euclidean, metric. The scalar product of such vectors is the information of perception S = a1b1 + ... + anbn.

For example, A is an object that emits light, sound, or smell of intensity a, while B is a subject with more or less developed receptors of intensity b on which such stimuli act. Participants A and B can be two people or two groups that communicate, but they can also be objects that interact and exchange the effects of light, sound, chemical reactions, and the like.

Let the first array be ordered increasingly, a1 ≤ ... ≤ an, and let us distinguish three extreme cases:

  1. the second sequence is descending b1 ≥ ... ≥ bn
  2. the second sequence is not monotonically ordered
  3. the second sequence is ascending b1 ≤ ... ≤ bn

As we know, depending on these arrangements of the second sequence, we will have the following ordering of the perception information values: S1 ≤ S2 ≤ S3. When the summands multiply smaller components with larger ones (and larger with smaller), the information of perception is minimal, and when they multiply larger with larger (and smaller with smaller), it is maximal. A simple calculation shows this, which I will not repeat here.
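That ordering is just the rearrangement inequality, and it is easy to check numerically (my own sketch, with random positive intensities):

```python
# A minimal sketch: the three extreme pairings named above, checked for random
# positive intensities; the rearrangement inequality gives S1 <= S2 <= S3.
import numpy as np

rng = np.random.default_rng(0)
a = np.sort(rng.random(10))          # first sequence, increasing

b = rng.random(10)                   # second sequence
S1 = np.dot(a, np.sort(b)[::-1])     # b decreasing -> minimal perception information
S2 = np.dot(a, rng.permutation(b))   # b unordered  -> something in between
S3 = np.dot(a, np.sort(b))           # b increasing -> maximal perception information

print(S1 <= S2 <= S3, S1, S2, S3)    # True ...
```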

It is important to note that the given ordering of the first sequence (A) and the spontaneity of the second (B) mean minimal information of perception S1 — for the given data of the participants. It is a "natural phenomenon" in the most literal sense of the physical nature of things. Dead nature never resists; it always yields.

If the opponent (B) is unruly, possesses some vitality, and will not always surrender to fate, then with its disorder, i.e. its incompatibility with the first, it can move the first away from the "natural course of things". Then B behaves as if it exerts some force on A. A greater difference in their perception information, ΔS = S2 - S1, means a greater force. This force is greatest when the arrangement of the "communication" participants reaches the perception information S3. There is nothing beyond that, except to change the structures of the vectors a and b.

When both opponents A and B are vital, in the sense that they have some freedoms beyond spontaneity, they will remain within the three mentioned values of perception information. And if we can say that they act on each other forcibly, their behavior may be a competition and a subject of game theory. Now it is also clear, I hope, why the "reciprocity" strategy is superior to others (wins games), although this explanation has nothing to do with classical game theory for now.

Question: So, what is a "good" and what is an "evil" force?

Answer: Ha-ha, a funny question; it is too easy, but I'll explain. Radiation from the cosmos or from the Sun that is dangerous for astronauts in a spaceship is "evil". The reaction to that action is the improvement of the protection of the ship's passengers, say the intensity, the success of the plating in opposing the harmful radiation.

In a computer simulation made to test the "strength of strategies", which I did earlier, I evaluated such ominous initiatives with negative numbers, and likewise the reactions to them. Their products are positive, so they contribute to the increase of the information of perception and the vitality of the game. This method proved very good; it predicted the success of the reciprocity strategy very well.

Position assessment and the evaluation of "good" and "bad" are the most critical parts of the program, and they must run every little while. The computed number of points a is then distributed in proportion to the opponents' initiatives, respecting the square norm ∑k|ak|² = a². Interestingly, other metrics can be chosen with almost equal success of prediction.

Induction

Question: How do you interpret the information of perception in the micro world of physics?


Answer: I will explain, but first let me clarify where my information theory differs from modern quantum mechanics on the basic parts of the question.

Information is the fabric of the world, but one that is both short-lived and protected by the conservation law. The way out of such a predicament lies in two unusual developments of information situations: induction and collapse. The first is a rippling, a cyclical repetition, as when the electric phase of a photon induces the magnetic one, then the magnetic induces the electric again, and the series of the light's information seems to go on and on, until an interaction happens at some point in some third event. The process remains trapped in repetition until further notice (Chasing tail).

In the second, the collapse, the "rippling" is interrupted and information is actually delivered to the environment. After that, the previously uncertain trajectory of the quantum particle becomes better determined. A collapsing particle and its trajectory lose information (the amount of uncertainty) and become defined. That this is not inconsistent with classical mechanics we see from Heisenberg's famous statement, from the time of the founding of that branch of physics (the beginning of the 20th century), that only the act of observation defines the path of the electron. It is the "observer effect" known to physics, which I would rather call the "measurement effect", or more precisely the "interaction effect", i.e. the "effect of communication". It is also the meaning of "Schrödinger's Cat" (The Cat in Box).

Due to the principle of thriftiness in emitting information, collapse does not happen just like that, but only in special situations of forced interaction, also under conditions of principled minimalism, I suppose. That is the difference between this and the known theory; it is a supplement to the old from the new. Then we also have the information of perception of, let's say, a fermion, written with the expression S = AB - BA, where the factors in the summands are no longer ordinary numbers but operators.

Due to changes in the flow of information, successive collapsing stages will not produce equal outcomes. This is also confirmed by the calculation of the information of perception, which in this world of small sizes is defined by a sum of products of operators. For example, if quantum states are represented by eigenvectors φ and ψ of the operators A and B respectively, then the representations of those operators are quantum processes that can act on the given states. Under its own (eigen) action a state remains of the same kind, Aφ --> φ and Bψ --> ψ.

The catch is that the only possible observables (that which can happen, be observed, and measured) are representations of eigenstates. In addition, the algebra of processes will in some steps prevent the use of the same eigenvectors in successive processes. An example of this, the position and momentum operators, has long been known from Heisenberg's uncertainty relations.

To measure the position of the electron more accurately, we use photons of shorter wavelengths, but they are of higher energy and greater momentum, which transmit greater uncertainty onto the object, so we learn the electron's momentum less accurately. The same holds vice versa: knowing the momentum more precisely (with photons of lower energy), the position remains more uncertain (due to the longer wavelength).

A similar example is the perception information of spin, S = σxσz - σzσx = -2iσy. Check it out! This means that there is no eigenvector x such that Sx --> x, because the Pauli matrix σy ≠ 0, and Sx ≠ 0, and there are no observables, i.e. there is no exact measurement.
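It really is worth checking; here is a direct verification with the Pauli matrices (my own sketch):

```python
# A minimal sketch: the commutator sigma_x*sigma_z - sigma_z*sigma_x equals
# -2i*sigma_y, a nonzero operator.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

S = sx @ sz - sz @ sx
print(S)                              # [[0, -2], [2, 0]]
print(np.allclose(S, -2j * sy))       # True
```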

The differences presented in this way, between classical physics and the interpretation of micro-world physics using the "information of perception", are barely noticeable in these effects. And that speaks in favor of the accuracy of this kind of information theory, despite its possible repulsiveness due to fundamentally different points of view.

Conics

Question: What are these "central constant" forces?


Answer: Let's first see what "conic sections" are. In the picture there are two cones, placed one on top of the other so that their apexes touch, and they are cut by a plane set at different angles relative to their (vertical) axis of symmetry. We observe the lines of intersection of the plane with the envelope of the cones.

When the plane is parallel to a generatrix of the cone (its slant side), the intersection is a parabola (or a straight line, if the plane touches the cones). When the plane is perpendicular to the axis of the cone, the section is a circle (or a point). When the plane completely cuts through one of the cones, the intersection is an ellipse, and if it cuts both, a hyperbola. You will find a detailed description of conics in my book "Konusni preseci" (2014).

Let's further imagine that a charge moves under the influence of a force of constant intensity emanating from a stationary point in space; the constant central force is thus fixed in the coordinate system. I considered the situations that may arise, for example, in the script "Notes I". The section "3. Potential Information" (4th example) follows the movement of a point A --> B along a curved line under the influence of a constant force from the point O, and in particular the area p of the curvilinear triangle OAB. The calculation shows that the change of the area, its derivative with time, is zero (ṗ = dp/dt = 0), and also that the charge moves in one plane.

When the force is gravitational, the example proves Kepler's second law, but also that under the conditions of the described 4th example the force can be something else. Therefore Kepler's second law (the segment from the Sun to the planet sweeps equal areas in equal times) is not sufficient for Newton's law of gravity (F = GMm/r²), as we usually think. That is why I had demonstrated the same proof of the 4th example before, say in "Gravity" of the book "Multiplicities" (Economic Institute Banja Luka, 2018), although I believe it is a mandatory topic of more serious physics studies.


If the gravitational constant G = 6.6743 × 10⁻¹¹ m³ kg⁻¹ s⁻² were some other constant value, these Keplerian areas (which the segment from the Sun to the planet would sweep in equal times) would again be equal. An extreme case of this would be zero force (G = 0).

In the picture on the left there is no force (F = 0) at the point O, and the "charge", the point A --> B, moves inertially along the line l. The material point of that charge crosses equal lengths in equal times (AB = d = const.), and the height of the triangle ABO, equal to the distance of the line l from the supposed force source O, is constant (h = const.), so the area of that triangle is constant.

In the continuation of the same "Notes", under the title "Conics", after the proof of Binet's formula, working on that occasion in polar coordinates, the general equation of the path of a charge under a central constant force is derived. Those trajectories are conics. Combining this with the previous proof, we see that the paths of charges moved by a constant central force are parabolas, ellipses, and hyperbolas, depending on whether the force is attractive (in the first two) or repulsive (in the third), and that the segment from the source of the force to the charge sweeps equal areas in equal times.

Abstraction

Question: Does the previous explanation of "conics" mean anything in the explanation of force in general in information theory?

Answer: Yes, it is very significant. That's exactly why it was an "expected discovery" for me, that the same holds true in the abstract space of probabilities. In addition to that, it would be expected, but painstaking, to find the environment and probabilities in the order of individual forces. However, the above elaboration of the conic generalizes everything and simplifies this problem, reducing them, the central constant forces, to the probability — with just one stroke.

In the "Information Theory" my scripts, among other things, this was proven, the extract of which is the attachment "Uncertainty Force". I wandered around that topic a lot and maybe didn't put the best parts in the scripts. In short, first this idea is confirmed by the Chebyshev inequality

Pr(|X - E(X)| ≥ rσ) ≤ 1/r2

which says that the probability of the difference between the random variable X and its mean value, the mathematical expectation E(X), is greater than , but with the probability less than r-2. There r > 0 is an arbitrary real number, and σ is the dispersion (the root of the variance) of the given probability distribution.

If we observe the very limit of the inequality of Chebyshev probabilities, which decreases with the square of the number r, we see that in the "probability space" in which that number is the "distance", we get the equivalent of the above laws. The method is common in mathematics with abstract probability spaces that are proven and heavily used as the old stuff. However, the novelty is seeing "more likely events more often" as the principle that defines the repulsive "uncertainty force".

The second is a computer simulation. In order to confirm the movement along the conics and "Kepler's second law" (in the generalized way described above), some of the tests I did are given. Imagine a constant number of n trials, such as flipping a coin, rolling a dice, drawing a number from a lotto drum, and the like, with an increasing number of N repetitions of the outcome in turn for each of those trials. The probability of the k-th (k = 1, 2, ..., n) outcome is pk, the number of realizations is Nk, but qk = Nk /N --> pk, due to the law of large numbers. Otherwise, that number qk is called a statistical probability and it is approximately but more precisely equal to the true probability pk the greater the number of N trial repetitions.

So, in the simulation we have a sequence, a vector q = (q1, ..., qn) which is getting closer to the vector p = (p1, ..., pn), with increasing number of trials N. In various metrics of the space of random events (there is arbitrariness in the choice of the metric space) the angle φ between two vectors, state that it is getting smaller, and substituting, changing N with r, possibly multiplied by some constant, we could find a hyperbola as a path by which the "probability force" drives the movement of "charges", i.e. vector q.

The unit of time is one cycle of repetition of all n random events, the number of all repetitions possibly multiplied by some constant (c, the speed of light perhaps) is the distance. From the scalar product

qp = q1p1 + ... + qnpn = qp⋅cos φ,

where q and p are the intensities of the vectors of the same name, there is the sine of the angle φ, and then the area qp⋅sin φ. Similarly, I consider the products of consecutive, or after an equal number of steps (same time intervals), vectors q. The "expected discovery" turns out to be correct, that "uncertainty forces" drive the random vectors q so that they sweep over equal surfaces at equal times.

Furthermore, if the mentioned constant c is really constant (it doesn't have to be the speed of light in a vacuum, those spaces are abstract), these supposed forces will decrease with the square of the distance and only then. Then and only then (with speed c = const.) the charge paths will be conics. By the way, note that due to the movement of the perihelion in the orbit of Mercury around the Sun and the deviation of that orbit from the ellipse, the speed of light will not be the same in a strong gravitational field.
