April 2024



Question: How does the overall conservation of information not contradict principled minimalism?


Answer: Physical information is equivalent to the action, the product ΔE⋅Δt of energy changed over time. We emphasize changes because information disappears as soon as it is created: stale news is no longer news. Information is therefore as much a matter of time as of energy.

Information is the basic structure of space, time and matter, and its essence is uncertainty. In other words, the principal characteristic of this world is uniqueness: first, because information, like every part of the cosmos, is itself unique, but also because of the recently mentioned Riesz proposition (Parallel). If you understand this as I do, you will find the following idea more acceptable, an assumption that only at first glance seems redundant.

Primary is the conservation of the total amount of action; secondary is the conservation of energy in the case of equal time intervals. That is why, with a relatively slower flow of time, higher proper energies are perceived, and there is no spontaneous movement towards a lower energy density, as there is in information processes. A subtle difference, but enough that this conservation drives uncertainty and uniqueness in the direction of difference.

Some of those differences are synergy, emergence and life, and finally intelligence; physical memory is a special story. Extreme information densities have uniform (exponential and normal) probability distributions, and one way to dilute them is to stretch the current present into an ever longer past. Not into just any record, but into a trace that will be able to act, to direct, and to accurately supplement the total amount of information with what remains in the present.


Question: What is "focus" in this theory of information?


Answer: In the beginning, the focus of the theory was Hartley's logarithm with Shannon's mean value of information, with "information of perception" added somewhat later. Today, that focus is on interpretations of a multitude of formulas.

On the other hand, such questions also refer to the discoveries of the theory. In particular, there are spontaneous flows towards less information and therefore towards increased certainty. A special case of these is the emergence of an ever longer history of a given present, with the same aspiration. The past focuses the present (Totality).

The past trajectories of bodies determine their future trajectories. The oceans of early Earth gave birth to cyanobacteria that produced oxygen from water as a byproduct; the oxygen dissolved easily in water along with the iron then also abundant in the seas. In that solution their compound, rust, formed slowly over millions of years; it did not dissolve in water but settled to the ocean floor, so that over time the free iron disappeared and oxygen began to escape more massively into the atmosphere. That waste, a poison for the first forms of life on our planet, became necessary for the generations that evolved later.

Processes do not have to be commutative (Giddy), and it can be shown that this is the main cause of uncertainty, and in that sense also fundamental to this information theory. Thus, for the position and momentum operators (Momentum II) we have the equality

\[ [\hat{x}, \hat{p}]\psi = i\hbar \psi, \]

which says that changing the state ψ first by momentum and then by position will not give the same result as the supposedly same change first by position and then by momentum. The indeterminacy of those results corresponds to the quantum of action, and this commutator otherwise corresponds to an area, and thus to information. We are talking about the importance of past events for future outcomes.
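As a numerical sketch of my own (not from the text), this non-commutativity can be seen with truncated matrix representations of the position and momentum operators in the harmonic-oscillator basis, in units where ħ = 1. In any finite dimension the identity [x̂, p̂] = iħ can hold only approximately, so the last diagonal entry below is a truncation artifact, but the rest of the diagonal shows the quantum i:

```python
import numpy as np

# Truncated harmonic-oscillator representation (hbar = 1). The exact
# commutator [x, p] = i needs infinite dimension; the last diagonal
# entry below is a truncation artifact.
N = 6
a = np.diag(np.sqrt(np.arange(1, N)), k=1)  # lowering operator
x = (a + a.conj().T) / np.sqrt(2)           # position operator
p = 1j * (a.conj().T - a) / np.sqrt(2)      # momentum operator

comm = x @ p - p @ x
print(np.round(np.diag(comm), 6))  # i on the diagonal, except the last entry
```

The order of applying x and p matters precisely by this nonzero commutator, which is the point of the paragraph above.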

The past works by limiting the present and is, in its own way, completely subject to the world of information. Each past is therefore unique. The present is also always unique, although never the only option, so each past-present pair belongs to a much wider world of choices. Countably infinite sequences of outcomes are incomparably fewer than the surrounding uncountably infinite sets of possibilities, just as the sequence of integers is sparse compared to the continuum of real numbers.

Hence the idea (hypothesis) that even the strictest logical truths, for example mathematical theorems, are a focus of the development of the present. By ergodic consequences, a process that has choices degenerates, misinforms and dilutes, like information traveling through a series of channels, a Markov chain, getting lost altogether when the chain becomes a black box due to its length. Hence mathematical truth — and all exact, measurable, generally scientific truth — will be of a lower order and should agree with it, regardless of the fact that it too is a kind of pseudo-reality.

For example, the Big Bang, which started the expansion of our universe about 13.8 billion years ago, is a pseudo-reality from the point of view of even slightly different possibilities for the development of the universe, but any exact or experimental science would have to confirm to us that it happened, if it "really" happened. However, it will be equally logically true that the laws of such "real" (mathematical) truths were created by similar processes, possibly by the evolution of certainties "under the radar" of these (Far Before).


Question: Is there a present without a past?


Answer: In physical reality we have no such case: a present without a past, or without a future. Time is unstoppable. It does not even begin just like that, because it belongs to the physical action (E × t), which is the equivalent of information, of which the cosmos consists.

A phenomenon that arose just like that, not by transformation from some other of its forms, would be contrary to the principles of conservation of action and energy, so we do not see such things around us. Everything we come into contact with lasts; these are objects that somehow survive even as they are transformed. I guess that's clear. But the explanation of the founders of quantum mechanics, that the particle-wave acquires its trajectory only when we perceive it, is also acceptable.

A quantum particle "exists" in a state of uncertainty, in quantum physics it would be called a superposition (Parallel), so that through interaction it would deliver part of its information and become more certain. It then gets a more specific previous trajectory, a more concrete history, so that it can present itself together with the past in the physical reality of the present.

All physical, real processes flow in the same direction:

past → present → future

including parts of the communication between two entities, because it takes time for a signal to cross the path. When that time is not needed, we have simultaneity, which in reality means quantum entanglement (Waves III), or abstraction in the world of ideas, such as mathematical and other fictions. There is no physical communication without the past.

There remains the idea (hypothesis) that logical truths, like mathematical ones, are focuses of the development of the present. We would then understand such universality, or omnipresence, as the all-time duration of an unlimited "now", for which, or beside which, there is no place for history. Then the past is debatable.

Forest fire

Question: Moving the present through uncertainty is like burning, a forest fire?


Answer: Yes, indeed, the transformation of uncertainty into information reminds us of "combustion", and also of the advance of a forest fire's combustion front, behind which the "past" remains as a trace of cinders that still smolder for a time until the flames consume them completely.

The comparison with burning emphasizes the irreversibility of uncertainty when it turns into information. At the same time, it brings out the gradual extinction of "memory", the consumption of information that reaches us from the past, as well as the immutability of what remains.

From the continuum of possibilities, one instant of the present is realized at a time, in a discrete, countably infinite series of states of our reality. This is not only a recreation, a rearrangement of given possibilities; the present is also a process of creation (Crystallization), here in the sense of the realization of discrete options from a continuum whose possibilities cannot be enumerated. Alongside the comparison with crystallization, this theory of information can compare the present to the "burning" of options from the uncountably infinite set of possibilities before it, to evoke unpredictability within unpredictability.

Namely, it would be inconsistent to treat the moment of choice among given, known options as the only way the present develops, because the set of given possibilities would then be closed, without uncertainty and, therefore, without possibilities beyond the given ones. A discrete set is not open. The point is that there are uncountably infinitely many real numbers, more than integers or rational numbers, and that sets of the latter kind are always closed.

However, a forest fire is an incomplete comparison with the course of events precisely because of its "combustibility" and what remains as a burn behind the actual fire. What "burns" is only one countably infinite series of states, beside which there remain uncountably infinitely many such series. This continuum of possibilities remains for additional dimensions of pseudo-reality.


Question: Everything that really happens has consequences?


Answer: That's right, and there is no present without the past (Inseparable). Every action is a creation of information that extracts from the physical world something of its primary uncertainty, or at least certainly tries to do so. These are the consequences of the "force of probability".

The strength of the present, therefore, is the strength of its past; analogously, it is the strength needed to turn away from the course of its consequences. It is seen in the physics of inertia, in the psychology of habits, in social phenomena. The truth is strong but unattractive (The Truth), while the strength of a lie is in its onslaught, its weakness in memory, and its consequences mostly in what it grabs at the very beginning.

Roughly speaking, larger bodies take up more space and time (while a signal gets from end to end) and, as more complex systems, contain overall larger amounts of information. Because of that connectedness, which reduces the density of the information plot, such a body becomes more inert. An action on one part spreads in a chain around it, and each of the components leaves some traces of the past and adds itself to the forecasts of the future. When we pass to known physical laws, such connections become better differentiated and clearer, but they are not the subject now.

In the examples of living beings we have, for instance, the processes of growing up, maturing and aging, which are in their own ways the collecting, organizing and settling of information during the life of an individual. Each of these phases is a change with its own consequences, and again, each individual appears as a redefinition of the wider environment. Although many forms of behavior repeat across generations, consistent with the abstract laws of aggregation of cups or heavenly bodies, i.e. networks of neurons, electrical distribution and links of acquaintance between people, these processes always take new forms.

The evolution of societies is also directed through lies, simply because of the power of people to turn them into physical actions. My theory of information does not exclude the existence of ideas (which are not physical objects) which, for now, I have nowhere to place except to "shove them under the carpet", i.e. in the "under the radar" processes of physical action (Mystical). There is a possibility to explain it in terms of the photo-electric effect (Half Truths).

Seemingly contrary to what was said about formal truths (Inseparable), an unlimited "now" beside which there is no place for additional histories, it is also possible to view them as a "frozen" past that goes on and on. Such a past has an unchanging direction of consequences, like a mold that affects different changing circumstances somewhat differently. That dualism of interpreting the present limits the answer to the above question to reality.

Second, the consequences of real events are also those in pseudo-reality, outcomes that could have happened but did not. Therefore, everything that happens in reality has real consequences, but not the other way around: if we have no real consequences, it does not mean that they were not preceded by real events. Physics can therefore stick to the research of only measurable effects, but mathematics cannot. Third, the effect of pseudo-reality on reality is an open question.


Question: Relationships between reality and pseudo-reality are complicated?


Answer: Separating the possible from the impossible is a central problem of knowledge in general, especially of science and mathematics. Could there be anything more complicated and important to civilization than cognition, or a greater purpose for the intelligence we already have?

Wherever there is physical reality, there is somewhere the history of its creation and the possibility of measurement. This concludes the physics of this theory of information, and formal logic remains. Things become even more complicated with the investigation of other, truly non-real things, especially because we reach them through the influence of our own ideas on the real world. Therein lies one of the answers to the particularly interesting question of measuring something physically unreal.

Another reason for taking ideas as a subject of information theory is of a principled nature (Pinocchio). Before this physical information, equivalent to physical action, information was technical, mathematical and abstract, a phenomenon on which we spend (electrical) energy in order to have it, not something that itself has energy.

In the picture above on the left is the breaking of the world record for knocking down the largest dominoes in a row, in August 2009. It is of interest here as another demonstration of chaos theory's "butterfly effect", where even a slight trigger movement can produce a significant effect. If we allow the possibility of such action "under the radar" (Non-physical), the possibility of communication between non-physical and physical "realities" opens up. It is a necessary addition to the interpretation otherwise modeled on the photoelectric effect.

Another possibility is that some non-observable information takes the form of continuous, but also other, functions that can be developed into series such as Fourier series (Fourier series). I remind you that for my theory of information, infinity, as a general mathematical truth, is a kind of "reality".

For example, infinite sequences of trigonometric functions (Quantum Mechanics, Example 1.3.69) can form bases of the vector space over a given domain. With them, with the imaginary unit (i² = -1) and Euler's number e = 2.71828..., more precisely with the complex number

\[ e^{i\varphi} = \cos \varphi + i \sin \varphi, \]

we first define the basis functions

\[ a_k = \frac{e^{ikt}}{\sqrt{2\pi}}, \quad k = 0, \pm 1, \pm 2, ..., \]

and then, for an arbitrary function x = x(t), the scalar product

\[ \xi_k = \langle x| a_k \rangle = \frac{1}{\sqrt{2\pi}} \int_{-\pi}^{\pi} x(t) e^{-ikt}\ dt. \]

so that a different representation of the same function is obtained,

\[ x(t) = \sum_{k=-\infty}^\infty \xi_k a_k(t), \]

which is called the Fourier series of that function.
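A small numerical check of my own (not part of the original): computing the coefficients ξ_k by the scalar product above for a sample function x(t) = t on (−π, π), and summing a truncated series, reproduces the function in the interior of the interval:

```python
import numpy as np

# Numerical check of the Fourier expansion above for x(t) = t on (-pi, pi),
# with basis a_k(t) = e^{ikt} / sqrt(2*pi).
t = np.linspace(-np.pi, np.pi, 4001)
dt = t[1] - t[0]
x = t.copy()

def xi(k):
    # scalar product <x|a_k>, approximated by a Riemann sum
    return np.sum(x * np.exp(-1j * k * t)) * dt / np.sqrt(2 * np.pi)

K = 50  # truncation of the partial sum
approx = sum(xi(k) * np.exp(1j * k * t) / np.sqrt(2 * np.pi)
             for k in range(-K, K + 1)).real

# interior points converge; Gibbs oscillations remain near t = +/- pi
err = np.max(np.abs(approx[1000:3001] - x[1000:3001]))
print(err)
```

The interior error shrinks as K grows, while near the endpoints (where the periodic extension jumps) the familiar Gibbs overshoot persists.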

A kind of "superposition of waves" could result from this, despite the reasonable expectation that information will not simply settle, because what is stale is no longer news. This is especially so since we have many examples of the collection and preservation of information under the conservation law. Again, at a sub-physical level, the otherwise universally useful area of mathematics, the "roots of unity", could come into play.

What is interesting in this area of mathematical analysis is the possibility of a similar development of a given function into different (non-trigonometric) fragments, shapes, which is why I wrote earlier that such functions in the micro-world have no shape. Mirrored onto this "reality" we are dealing with now, it follows that the unobservable are also without micro-forms, like real, quantum particle-waves.

Not to be long-winded: the relations between reality and pseudo-reality are more complicated than they seem at first glance, but this is a convenient circumstance for real researchers, those who do not run away from such challenges. With them, this issue will go in a different direction and gain a different significance.


Question: Does "bypassing" reality have a memory?


Answer: There is no reality without a past (Inseparable), but this is optional for its parts in "bypass". Thus the "memory" of dreams, for which it is irrelevant that they are not reality and do not follow physical laws in all details, need not be consistent with the ability to recall. The memorization range of the unreal is hampered by inconsistencies, ambiguity and unreliability.

This intuitive, straightforward assessment, of course, also has its algebraic basis. We know that all operators of quantum mechanics should be Hermitian, thus guaranteeing the reality of their eigenvalues λ in the characteristic equation Âv = λv, which we usually interpret as energy (Harmonic). This means that in the "bypass" phase, operators should be sought among non-Hermitian operators (which do not give real processes), and this is always possible.

For example, quaternions (Quantum Mechanics, p. 133) are not Hermitian operators:

\[ q_x = \begin{pmatrix} 0 & i \\ i & 0 \end{pmatrix}, \quad q_y = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}, \quad q_z = \begin{pmatrix} i & 0 \\ 0 & -i \end{pmatrix}, \]

because they are not self-adjoint (q† = -q), but are roots of the negative unit matrix (q² = -I), so, for example:

\[ q_1(q_2 q_2)q_1 = q_1(-I)q_1 = I. \]

That is why this kind of composition can be inserted as a neutral element in any real process. There are countless ways to decompose operators into different factors, so there is no end to representations of real processes mixed with "bypass" phases. From the ease of such combining an interesting question arises: does every process contain countless "bypasses" that we are simply not aware of?

As a (hypo)thesis, that every process contains countless "bypasses", there is no contradiction in sight, if we adopt the point of view that such processes do not flow in (our) time. We are talking about processes that were temporarily in a parallel reality, from which they eventually came to us by "bypassing", because multiplied by the imaginary unit (i² = -1) quaternions and the like become Hermitian operators.
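These algebraic claims are easy to verify numerically (a sketch assuming the matrix forms quoted above): each q squares to −I, is anti-Hermitian rather than self-adjoint, becomes Hermitian when multiplied by the imaginary unit, and the composition q₁(q₂q₂)q₁ is neutral:

```python
import numpy as np

# The quaternion-unit matrices from the text.
i = 1j
qx = np.array([[0, i], [i, 0]])
qy = np.array([[0, 1], [-1, 0]], dtype=complex)
qz = np.array([[i, 0], [0, -i]])
I2 = np.eye(2)

for q in (qx, qy, qz):
    assert np.allclose(q @ q, -I2)               # square root of -I
    assert np.allclose(q.conj().T, -q)           # anti-Hermitian, not self-adjoint
    assert np.allclose((i * q).conj().T, i * q)  # i*q is Hermitian

# The neutral composition q1 (q2 q2) q1 = I from the text:
assert np.allclose(qx @ (qy @ qy) @ qx, I2)
print("all checks pass")
```

The last line is exactly the "neutral insertion" argument: the product collapses to the identity, so it leaves any real process unchanged.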

As our time and "time" during the "bypass" are not the same phenomena, memories are not the same concepts either. This is consistent with the additional temporal dimensions of this theory, but it does not mean that some other (mathematical) measures of "distance" between parallel worlds do not exist.

It is difficult to imagine such fantastic, abstract forms outside the micro-world, but it is easy to assume some other equally strange phenomena. Within the theory of information, I do not go beyond the forms that would have justification in mathematics.


Question: Can the trace of reality be understood by "the force"?


Answer: That's a good question, because action produces consequences (Entail). It seems to clear its way through reality, let us say by forcing itself. As mathematical forms can be attached to that, it is a question of some "force".

First of all, reality is built by the appearance of information. It is the equivalent of a physical action, that is, the product of an energy change with elapsed time, ΔE⋅Δt, while energy is the work of a force along a path, ΔE = F⋅Δx. Roughly speaking, information is equivalent to the product of a force and a "volume" element of spacetime, FΔxΔt, and determinants of operators are those "volumes". In the case of one dimension they are lengths, in the case of two they become areas, with three they are volumes, and so on.

When there is information, then none of these factors is zero, but their product can take positive or negative values. As we know from mathematical analysis, the distortion factor between quantities in uv-space and the corresponding quantities in xy-space is called the Jacobian:

\[ \frac{\partial(x,y)}{\partial(u,v)} = \begin{vmatrix} \frac{\partial x}{\partial u} & \frac{\partial x}{\partial v} \\ \frac{\partial y}{\partial u} & \frac{\partial y}{\partial v} \end{vmatrix} = \frac{\partial x}{\partial u} \frac{\partial y}{\partial v} - \frac{\partial x}{\partial v} \frac{\partial y}{\partial u}. \]

It is the determinant, here the 2-dimensional "volume" (area), or commutator of the coordinate change (g : Ouv → Oxy), i.e. information. When the domain D passes to the new labels D*, the integral changes, say, as

\[ \iint_D F(x,y)\ dxdy = \iint_{D^*} F(g(u,v))\left|\frac{\partial(x,y)}{\partial(u,v)}\right| \ dudv. \]

In this way, we move from one coordinate system to another in the same space, without changing the corresponding information, or otherwise the potential of the space.

For example, the Jacobian of polar coordinates follows from the transformations:

\[ \frac{\partial(x,y)}{\partial(r,\varphi)} = \begin{vmatrix} \frac{\partial x}{\partial r} & \frac{\partial x}{\partial \varphi} \\ \frac{\partial y}{\partial r} & \frac{\partial y}{\partial \varphi} \end{vmatrix} = \begin{vmatrix} \cos \varphi & -r\sin \varphi \\ \sin \varphi & r\cos \varphi \end{vmatrix} = r. \]

Therefore, the surface element dA = dxdy in polar coordinates is rdrdφ, which is why the factor r appears when integrating in polar coordinates. This distortion becomes clearer if we draw a given area of the Cartesian rectangular system in the polar one. □
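As a quick numerical illustration (my sketch, not in the original), integrating the Jacobian r over 0 ≤ r ≤ 1, 0 ≤ φ ≤ 2π reproduces the area π of the unit disk, which the naive integrand 1 in (r, φ) coordinates would not:

```python
import numpy as np

# Area of the unit disk computed in polar coordinates: dA = r dr dphi,
# i.e. the integrand is exactly the Jacobian r. The midpoint rule is
# exact here because the integrand is linear in r.
n = 500
r = (np.arange(n) + 0.5) / n                 # midpoints in [0, 1]
phi = (np.arange(n) + 0.5) * 2 * np.pi / n   # midpoints in [0, 2*pi]
dr, dphi = 1.0 / n, 2 * np.pi / n

R, PHI = np.meshgrid(r, phi)
area = np.sum(R) * dr * dphi
print(area)  # ≈ 3.141592653589793 (= pi)
```

Dropping the factor R from the sum would instead give 2π, the "area" of the coordinate rectangle, showing concretely what the distortion factor corrects.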

The upper picture on the right shows deformations in optics, namely barrel distortion and pincushion distortion. In information theory, these would correspond to repulsive or attractive forces due to an excess or lack of information that could communicate with each other. Also, gravitational fields are distortions of physical space itself. There we have the Ricci tensor in Einstein's general equations, which otherwise evaluates the difference between a given volume element and the Euclidean one.

All these seemingly different "forces of reality" have a common root in the "force of probability". It expresses the tendency of the more likely to be realized more often, which clearly shows that this is about reality, that information is created by such strivings. After all, the action of information necessarily produces changes, because each subsequent state of the world is unique.


Question: Why are "setbacks" useful in moving forward?


Answer: In game theory, a game that includes sacrifice is more likely to win than one that prohibits it (Reciprocity). This is simply because of the greater choices and greater vitality that goes with it.

Reviving capabilities after sleep contributes more to overall efficiency than monotonously lower vitality. That is why many species have evolved intermittent sleep, a retreat for the sake of advancement.

The picture on the left demonstrates faster gravitational descent on a winding path than on a straight one: the speeds the ball gains early, on the downhill stretches, are preserved by inertia, and the winding path wins. Since a body is more inert the more widely it is connected in space-time (according to my information theory), this case has the same roots as the previous two.

By enumerating such examples we would arrive at the well-known proverb "mind reigns and strength serves", meaning that we should not underestimate intelligence, or, if we emphasize the difference between strategic (planned, long-term) and tactical (immediate) success: humans as a biological species possessed this strategic advantage, unlike even the most powerful predators, and were thus so far ahead of others in controlling natural phenomena.

Spontaneity, minimalism and laziness as principles of the information universe work against development, intelligence and strength alike, but perhaps a little less against the latter. There is much less intelligence than strength among the species of living things on Earth. Nevertheless, under duress there arose "living organisms" (the first primitive systems) which, by force of circumstances, remained in a state of a greater "amount of options" (information) than naturally belongs to them, and which by that "invincibility" (a higher degree of competitiveness against dead nature) survived and prospered.

Over billions of years, species of life suffered huge casualties, starting with the death of individuals (overcome by reproduction) and ending with the extinction of entire species (overcome by evolution into better-adapted forms). These successes, for example by freeing up space for new opportunities, testify to the usefulness of "setbacks" in progress. We have just underlined the niches of the otherwise "unnatural" excess information of (living) systems that refused to settle down. Paradoxically, efficiency, which arises from the reduction of options and orientation towards goals, i.e. through less information, exists and helps in the development of the accumulation of options (in the case of living beings).

We usually think of evolution as a mere combination of equally likely random arrangements of genes, or whatever defines a living being, but here we see how incorrect that is. It is equally naive to expect that "do-gooder" strategies (Traits) could be more successful than "manipulators", or either of them against the "spiteful". Now we see that both of these misconceptions have a common origin in the usefulness of "setbacks" in progress. You need an excess of information to dive into a deficit, like swinging a pickaxe to drive it deeper (Defiance II).


Question: Do fictions have memories?


Answer: There is no reality without the past (Inseparable), but similar to "bypassing" (Delay), there is no consistency in the memory of the fictions themselves. When bound by reality, as in our body, abstract ideas acquire a mixture of such histories. And through records, on stone, in books, also through oral traditions, civilization preserves memory in its own special, again hybrid physical ways.

An isolated example does not make a rule, but a counterexample will disprove one. Thus each new computer, created without prior knowledge, reveals that the virtual world has no memory of its own. Furthermore, we can add various fictions by connecting them with the basic minimalism of information. The sub-actions mentioned (equivalents of physical information) mutually attract, just as redundancies repel. That is why quanta (the physically smallest actions) stick together and do not disintegrate, and their parts have no need to dilute and leave a trace of the past.

Virtual particle-waves (Waves III) have this tendency to combine into physical actions. If we assume that they define their present on the surface of the sphere through interaction, then we do not have to give meaning to their time, or even the past, and still understand the expansion of the virtual sphere as an escape from smaller, denser parts of this kind. However, I do not believe that all types of sub-particles tend to be grouped into physically real ones, and here's why.

Infinity is some kind of reality (The Bottom) for information theory, at least one that is a mathematical reality. Among the infinitely many imaginable possibilities, at most finitely many will be physically relevant to us, and they are above all those prone to physical association. Let us note that principled minimalism does not have to be valid in the deep sub-physical world, even if it is valid in our reality. Reality would be unstable if all such fragments tended to merge into quanta, and vice versa if they all wanted to be infinitesimals.

For now, we are only discussing ideas about the "under the radar" phenomena of physical reality (Mystical), but they will mature and become more usable, just as complex numbers, once "unimaginable", found their applications everywhere from large airports to quantum mechanics.

For example, living outside of reality and too much in fiction, we let reality escape us. It is also known in psychology that good manipulators are not the best implementers. Similarly with information: the physically real, such as interactions and actions, have consequences, while the unreal may not. What remains behind the unreal befits the unreal. Or another example: a lie need not be consistent with an assumption.

Line up

Question: How can different masses have histories of the same length?


Answer: See the image link on the left about the tautochrone curve and its application to the equal periods of two pendulums of the same length released from different heights. Galileo was the first to notice this consequence of the equal gravitational acceleration of all bodies in free fall (at the same place).

1. Gravitational acceleration is about a = 9.8 m/s², from a = GM/r², where G = 6.67430×10⁻¹¹ Nm²/kg² is the gravitational constant, and the mass and radius of the Earth are M ≈ 5.972×10²⁴ kg and r ≈ 6.37×10⁶ m (new measurement). The general expression for the period of a mathematical pendulum is

\[ T = 2\pi \sqrt{\frac{\ell}{a}}, \]

with ℓ the length of the pendulum, from which it can be seen that the swinging period depends only on the acceleration a of the Earth's gravity, for a constant length ℓ.
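A minimal computation sketch of the two formulas above, using standard reference values for G and the Earth's mass and mean radius (slightly different figures appear in various sources):

```python
import math

# Gravitational acceleration a = G*M/r^2 and pendulum period T = 2*pi*sqrt(l/a).
G = 6.67430e-11   # gravitational constant, N m^2 / kg^2
M = 5.972e24      # Earth mass, kg
r = 6.371e6       # mean Earth radius, m
a = G * M / r**2

l = 1.0           # pendulum length, m
T = 2 * math.pi * math.sqrt(l / a)
print(round(a, 2), round(T, 3))  # 9.82 2.005
```

The mass of the bob does not enter T at all, which is Galileo's observation in formula form.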

2. The past is created at the expense of a lower information density of the present; thus time slows down (because there are fewer and fewer events) and history becomes longer and longer. Just by the deposition of new events and the prolongation of existing ones, history changes, details fade, and memory additionally consumes it, so the past of a present is an unusually dynamic phenomenon. The question posed above refers to the paradoxically different lengths of the pasts of various systems of the present, which should somehow align at the oldest moments.

3. Similar to the above (1), it happens with the gravitational slowing down of time of larger masses. Observed from distant points, time flows more slowly in places of stronger gravity. For a relative observer, the history of the denser field is shorter than its proper (own). The relative mass of the body in the field is proportionally greater. On the other hand, a larger mass bends space-time more and goes more into parallel times, which is precisely why such a body is more inert. The observer himself does not see the same masses as the relative ones, nor the same durations of the respective histories.

Very massive celestial bodies are slow-moving, lag behind the others, and seem to have a shorter history (starting with the "Big Bang" 13.8 billion years ago) than they should given their greater mass. However, they estimate their durations to be longer in proportion to the relatively slower course of time. The defect of time compensates for the lack of information and presence from the point of view of relative observers.

In other words, for different observers, bodies of different masses start from different moments of the "Big Bang," but the actual present is always reached at the same moments, as in the previous case towards the midpoint, in the upper picture on the left. The same example shows that such an absurd situation is possible, now with different speeds of time flow. And Einstein's famous elevator example gives it a deeper meaning: outside the field, passengers in an accelerating elevator would feel as if they were in gravitational acceleration, and conversely, in a free-falling elevator, they would be weightless.

4. The calculation is simplified by the same ratio of relative mass increase (m = m0γ) and time slowing (Δt = Δt0γ), say in two (γ1 = γ2 = γ) cases:

\[ \gamma_1 = \frac{1}{\sqrt{1 - \frac{v^2}{c^2}}}, \quad \gamma_2 = \frac{1}{\sqrt{1 - \frac{2GM}{rc^2}}}, \]

in uniform inertial motion and in a weak gravitational field (Space-Time, Theorem 1.4.4), the first from the special and the second from the general theory of relativity. These relate the relative values to the proper values of an observer who sees the body moving at speed v, or who, outside the field, sees a body at rest in the gravitational field of mass M at a distance r from its center.

The other catch here is that a planet orbiting the sun at speed v will, from the point of view of a distant observer at rest, have the same γ as a body at rest at the height of that orbit (same book, 1.2.8 Vertical fall). For comparison, the orbital speed of the sun around the center of the Milky Way and the speed of light are v = 220 km/s and c = 300,000 km/s, giving γ = 1.0000003. This means that the coefficient γ remains very close to one even at speeds that seem very high to us. The speed of the Earth around the Sun is "only" 30 km/s.
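The quoted value of γ is easy to check with a short calculation. The sketch below uses the rounded speeds from the text (with c taken as 299,792.458 km/s) and, for comparison, also evaluates the weak-field factor γ₂; the gravitational constant, the Sun's mass, and Earth's orbital radius used there are standard illustrative values, not figures from the text:

```python
import math

def gamma_speed(v, c=299_792.458):
    """Lorentz factor gamma_1 for uniform motion at speed v (km/s)."""
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

def gamma_grav(M, r, G=6.674e-11, c=2.99792458e8):
    """Weak-field factor gamma_2 at distance r (m) from mass M (kg)."""
    return 1.0 / math.sqrt(1.0 - 2.0 * G * M / (r * c * c))

print(f"{gamma_speed(220.0):.7f}")  # Sun around the galactic center: 1.0000003
print(f"{gamma_speed(30.0):.9f}")   # Earth around the Sun: barely above 1
# Illustrative: the Sun's mass felt at Earth's orbital radius
print(f"{gamma_grav(1.989e30, 1.496e11):.10f}")
```

Both factors differ from one only in the seventh decimal or beyond, which is the point of the comparison in the text.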

Measurements from somewhere far from the galaxies, from a "quiet" place, would slightly shorten the estimated value of the age of the universe, which we take to be around 13.8 billion years.


Question: Is there uncertainty in certainty?


Answer: In other words, is there information in the outcome of only one of the possibilities? Yes, in the (hypo)thesis of the theory of information that survives for now. The trick is to say "there's nothing."

According to Hartley's (1928) definition of information as the logarithm of the number of equally likely possibilities, a single possibility gives log 1 = 0, and we say "there is zero." And according to Shannon (1948),

\[ S = -p_1 \log p_1 - \dots - p_n \log p_n, \]

which is the mean of Hartley's information over a given probability distribution. If one of the outcomes tends to certainty (pk → 1), then all the others tend to zero (pj → 0, j ≠ k), so the information tends to zero (S → 0). Those zeros can thus be considered infinitesimals.
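The limit S → 0 is easy to verify numerically; a minimal sketch, assuming a binary distribution and the base-2 logarithm:

```python
import math

def shannon(p):
    """Shannon's mean information S = -sum p_i log2 p_i, with 0*log 0 taken as 0."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

# As one outcome approaches certainty (p_k -> 1), S falls toward 0
for eps in (0.5, 0.1, 0.01, 0.001):
    print(f"p_k = {1 - eps:.3f}  S = {shannon([1 - eps, eps]):.6f}")

assert shannon([1.0]) == 0.0  # certainty carries zero information
```

The printed values shrink monotonically toward zero, illustrating why the certain outcome can be treated as an infinitesimal of information.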

The given image on the right is a link to an interesting observation of the paradox of the certainty of everyday life. If we stay in certainty for too long, it becomes boring, and when our lives are unsettled, we long for certainty. The theory of information adds that we live in a zone between the largest and the smallest number of options: the fear of freedom (uncertainty) keeps us from going up, and the anxiety of a lack of freedom from going down. In any case, the law of conservation of the information we possess holds. The same applies to systems of smaller quantities, and to exclude zero would be like going back to the time before Fibonacci.

We start from the assumption that information is a ubiquitous phenomenon and that uncertainty is its essence. We measure time by the emission of random outcomes and, in this sense, it flows relatively more slowly where the frequency of those outcomes is lower. If there are very few outcomes, time is very slow and therefore consistent; in the case of a single outcome, it stands still. That is why photon time (almost) stands still, because there are only the phases of induction of the magnetic field by the electric one and vice versa; and the truths that we discover in mathematics are everywhere in the present, to give only two examples.

Truths are therefore omnipresent, instantaneous as quantum entanglement, and their inconsistency in different areas (known or unknown to us) is not possible. The theory of information, interpreted in this way, is for now the only one that offers any kind of explanation of this otherwise well-known phenomenon of truth to theorists.

The omnipresence of the present truth in its past and future reveals at least two consequences. First, that it is a unique body to which different phenomena react differently. Nature does not like equality, so we have the fundamental unrepeatability of each of its subjects, i.e. a great diversity of natural phenomena despite its universal truths. In particular, there is a regularity in separating lesser from greater uncertainty: the former cannot emit as much information as the latter and is therefore more attractive (to those that recognize it).

In short, the absence of uncertainty can be treated as its special type, as zero uncertainty. There is also an answer to the question about certainty, until further notice.


Question: Where does consistency come from?


Answer: This is a question about the very roots of mathematics and logic in general, the laws of nature and society, all the way to the origin of lies. Where there is no consistency, there is no formation of the past, so there is no direction for the future, or all existence is encompassed by one permanent present. We understood this in the previous answers, I hope. We have only scratched the surface of this topic.

Physical information and actions are equivalent phenomena. They are not equal to information in general, neither in complexity, nor in quantization, nor in certainty, nor in consistency. We will distinguish, primarily by inconsistency in communication, certain sub-physical information that tends to combine into quanta from the rest below it. The galimatias of the others contains several recognizable outlines among countless possibilities.

Since it is in the definition of infinity that it can be equal to its proper part, it is because of the law of conservation that information is finitely divisible. Thus, a statement will be true (⊤) or false (⊥), and we reduce each "maybe" of polyvalent logics to one of those two cases. Simple nature will not recognize a lie. When it can be proven that something cannot happen, it does not happen, so the question arises where lies come from.

If there were no darkness, for example, it wouldn't make sense to talk about light. Also, if there were no untruths, we would not recognize truths. Therefore, lies exist as "unnatural", abstract phenomena. We need them as tools in mathematics and logic, in research, and in defining the laws of nature. They are necessary for the competition of living beings, but the non-living world does not perceive them. Therefore, there is information that is not physical.

We talk about the world we live in as if it is woven of information whose essence is uncertainty. We would not be able to understand the physically real part, which is the only subject of truth, without that area of invisible untruths. This is how we talk about the implication (A ⇒ B), "from A follows B," which means that it is never the case that both A and the negation of B are true. Negated, it becomes

\[ \begin{array}{c|c|c|c} A & B & A \Rightarrow B & A \Rightarrow' B \\ \hline \top & \top & \top & \perp \\ \top & \perp & \perp & \top \\ \perp & \top & \top & \perp \\ \perp & \perp & \top & \perp \end{array} \]

The last column of the table (A ⇒' B) contains the negation of the implication. Each entry of that negation is the opposite of the value of the original implication, changing ⊤ → ⊥ and vice versa. Each binary logical operation can be represented (in several ways) by similar tables, each with four rows (2² = 4), and each result with two options, so there are 16 such tables (2⁴ = 16). Therefore, the negation of any of those tables returns one from the same set of tables.

It is easy to check that the formula A ⇒ B gives the same results as the formula ¬A ∨ B, and its negation the same as A ∧ ¬B, where the sign ¬ negates the value after it; the first uses a disjunction (∨), the second a conjunction (∧). This justifies the remark that there are several ways to represent the same table. Furthermore, let us note that there is a bijection (a mutually unique mapping) of these tables onto their negations, and that every true statement is thereby mapped to a false one, every tautology to a contradiction, and vice versa.

The last item followed from the analogous binary representation of every algebraic logical operation. For example, a ternary one, of the form f(A, B, C), takes 2³ = 8 combinations of variable values, with 2⁸ = 256 possible expressions. In general, an n-ary logical operator, g(A₁, A₂, ..., Aₙ), with n = 1, 2, 3, ... variables, has m = 2ⁿ possible combinations of "true" or "false" values and 2ᵐ possible definitions of the function g.
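These counts and identities can be verified by brute force. A small sketch, representing each binary operation as its column of four truth values (a convention of this illustration, not of the text):

```python
from itertools import product

rows = list(product([True, False], repeat=2))  # the 2^2 = 4 rows (A, B)

# A binary operation is determined by its column of 4 truth values: 2^4 = 16 tables
tables = set(product([True, False], repeat=4))
assert len(tables) == 16

implies = tuple((not a) or b for a, b in rows)    # A => B, same column as ~A v B
neg_impl = tuple(a and (not b) for a, b in rows)  # its negation, A ^ ~B

negate = lambda t: tuple(not v for v in t)
assert negate(implies) == neg_impl                # entry-by-entry opposite
assert {negate(t) for t in tables} == tables      # negation is a bijection on the 16

print(implies)   # (True, False, True, True) -- the table's third column
print(neg_impl)  # (False, True, False, False) -- its negation
```

The two assertions confirm the text's claims: the negated implication is the column-wise opposite, and negation permutes the 16 tables among themselves.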

We formally proved that, in addition to the world of truth, there is an equivalent world of untruths. We are a mixture of both, physically real and imagined reality. It is not possible to deal with mathematics or science without those two structures, substance and ideas, and all daily life activities exist in their inconsistent connections. We actually understand consistency by recognizing inconsistency, just as we can see light because we can compare it to darkness.

The first part of the answer to the question is therefore: consistency comes from understanding inconsistency. The second part comes from the conclusion, which then imposes itself, that reality (the world of information) is a conglomerate of at least both. Physical matter is what seriously attracts us, with its orientation and stability, in such a way that we do not notice the other, "non-serious" side of the same world. Better said, we underestimate that undirected side. An important part of all this stems from the principle of parsimony of information. Truth is constant, but it is insufficient for us.


Question: How can you know something that is beyond the reach of physics?


Answer: Mathematics is more general and abstract than the future theory of information, and the latter treats physics only incidentally. When, if ever, people go to Mars, all the equations of mathematics will still hold, although they will find a different temperature, atmospheric pressure, and conditions for life than on Earth. The proof of the Borel-Cantelli lemma (the image link on the right) may vary from author to author, but not its accuracy.

I mean, the theory we are discussing is not limited by the scope of physics. And especially, in that (future) theory of information, a question like "How can you talk about something other than energy, time, and speed?" will be as meaningless as the question "Will the sum of the angles in a triangle be 180° and on Monday?" This is what I call the "future" theory of information, which I emphasize once again to avoid confusion.

We understand that ideas have the structure of information and carry some kind of uncertainty, whether or not they are physical phenomena. They are infinite, and, on the other hand, they are a subject of probability theory. The mentioned lemma then establishes that, from every infinite set of independent such "ideas" whose probabilities have a finite sum, with probability one at most finitely many are realized, and the probability that infinitely many of them occur is zero. We also allow for the possibility that the physical world that surrounds us could be just a selection of many such coincidences.

So, in infinite distributions, only some finite subgroups are always certain (we don't know in advance which ones), and everything else is irrelevant. There was the possibility of infinitely many such groups, up to a situation where finitely many relevant outcomes could appear for a candidate reality such as the physical one we know. For the conservation law to be valid, there must be finitely many of them.
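The lemma's effect is easy to see in a toy simulation. Below, hypothetical independent "ideas" are given probabilities pₙ = 1/n² (an illustrative choice whose sum is finite); out of a hundred thousand chances, only a handful are ever realized:

```python
import random

random.seed(0)

def realized_count(n_events=100_000):
    """Count independent events with p_n = 1/n^2 that actually occur."""
    return sum(1 for n in range(1, n_events + 1) if random.random() < 1.0 / n**2)

counts = [realized_count() for _ in range(5)]
print(counts)  # a few small numbers each run, despite 100,000 chances
```

The expected number of realizations is the sum of 1/n², about 1.64, so almost surely only finitely many of the infinitely many possibilities occur, in line with the Borel-Cantelli lemma.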

Let's note that this description, like mathematics in general, does not use the concept of time. We can equally speak of "time before time" (Far Before). The assumption is that "then" lasted infinitely; it doesn't matter whether it "was" or simply "came into being," because we are talking about a continuum, a great infinity whose elements are not countable and cannot be arranged in an infinite sequence. Already in this lies the assumption of the primacy of mathematics, that abstract laws precede concrete ones.

At the time of the Big Bang, about 13.8 billion years ago, the very abstract laws of probability theory and their "forces of probability" began to extract the first certainties from the amorphous initial cosmic mixture. It is a performance given by our ongoing present. The presentation shows the appearance of elementary particles at that time, then, for example, of mass, so that with the drop in the concentration and temperature of the early mixture due to the expansion of space, the electroweak force separated into the electromagnetic and the weak nuclear force. Many of these formations are yet to be discovered. The history of the universe and the emergence of its past come at the expense of reducing the concentration of information in the present.

In addition to the formation of the history of the universe, this theory also sees reasons for the rarefaction of information in the present (perhaps) in today's increasingly probable transformation of fermions into bosons, i.e., of substance into space, so that the expansion of the universe goes faster and faster. More precisely, in addition to these, the expansion of the universe can also be a property of the slowing of the present's time due to an increase in certainty. Namely, we see the movements of distant galaxies in the time of their distant past, when the light from there started towards us.

Many of these explanations were preceded by the tracing of theorems, otherwise so distant from the everyday way of thinking that many of them existed long ago and went unnoticed in information theory. In addition to now understanding them in radically new ways, we notice the ease and harmony of these interpretations. Over time, the cosmos has developed an accumulation of the certainty of the present with an ever-thickening sediment of its past, and what we experience is filtered as just one of its (uncountably infinitely many) possible processes.

Rope Walking

Question: Vitality is like walking a tightrope?


Answer: Yes, the comparison is good, as is riding a bicycle. The ignorant onlooker constantly waits for the rider to fall, but he rides on and on in his state of tense equilibrium. The past directs him, but vitality leads its own work and pushes the subject through other choices onto paths that inanimate matter would not follow.

Often, in criticizing "random selection" in the evolution of species on Earth, we overlook the subtle state of balance in life's strategy of resisting the dead substance of physics. Put simply, not all choices are equally likely. The same goes for the sophisticated observation that is the subject of the sequel.

Behind a "bad" choice (future or current), life is "locked," if for no other reason than the law of conservation of the amount of options (information), the fear of uncertainty from above, and the need for freedom from below. Furthermore, there are known defenses of life through the fear of death, special vital abilities, reproduction, and adaptation through evolution, for example.

Once it reaches a given level of the "amount of options," the system (life), even if it would like to, for the sake of principled minimalism, cannot simply cut its excesses lower because the whole environment is also filled with information. But it waits for opportunities, some of which involve surrendering individual freedom to groups. Communities thus become more vital. They thereby grow from a lower form of life to a higher one in a spontaneous but forced, natural way.

I will add here an example of Mark Twain's riddle, which I quoted long ago, explaining the tendency to submit to the herd not indirectly through emotions but through the principle of saving information. Supposedly, two characters in the story meet. The first asks the second what he does for a living, and he answers that he is a herdsman. Well then, says the first: if three cows are in a meadow and two are looking south — where is the third looking? You fool, says the second, how would I know where the third is looking; define the situation a little better! You are no herdsman, answers the first: the third cow looks in the same direction. Such is the nature of a cow.

In fact, we see methods of the survival of vitality all around us in various forms. In companies, assets (funds at their disposal) and liabilities (sources of funds) must always be in balance, the value of the former equal to the latter. Another example is cash, let's say dead capital, which a bank, investor, or company holds. It should be as small as possible, but not too small, for the company to operate as well as possible. If there is too much of it, the money lies idle and the value is insufficiently used; with too little, there is a risk of failure in situations where cash is needed. The most successful are the companies that best practice this "walking the tightrope."

All such examples are, theoretically speaking, about timely, measured, and unexpected responses to challenges (Reciprocity). Vital systems are equipped with such abilities; moreover, they prove the more vital the better they are able to "stick" to challenges. Consequently, the level of possible mastery lies far more in the power of the student than of the teacher, more than we sometimes think (Traits).

Conservation III

Question: What makes it possible to memorize information?


Answer: First of all, it is the law of conservation. Physical reality has a past. All trajectory solutions known to theory have already been derived by physics, among other ways, from the Euler-Lagrange equations. They have been known for a long time; they are derived from the principle of least action and speak of symmetry, which implies conservation of the amounts of the corresponding physical quantities (Stories, 1.14 Emmy Noether).

Quantum mechanics, or more precisely its algebra, has the scalar product of unitary operators as an expression of those conservations. I used that in information theory to define "information of perception". When you cannot get rid of some quantity (mass, energy, momentum, spin) that you can only transform from one form to another, it must have some past of its own that is connected to the present, and from which some of its future derives. Information also has this property.

Due to the very definition of information of perception as a sum of products, the law of conservation applies to it in the quantum world, and this law applies part by part in a larger sum, for larger units. Hence the equivalence of physical information and action, or, on the other hand (Packages), the finite divisibility of both action and information. The latter (unnoticed earlier in physics) also comes from the property of infinity, that it can be equal to its proper part, while the conservation of quantity implies the finite divisibility of the whole.

Physics does not deal with infinities. That is a topic for mathematics and the future (this) theory of information. Speaking only of quanta of action, i.e. the smallest energy changes in a given period (energy × time), the conservation of energy is a consequence of the conservation of information, equivalent to quanta, observed in equal periods of time. This is always possible from our point of view, from the position of the macro-world. The reverse also holds: from the conservation of quanta in equal time intervals follows the conservation of energy.

However, more massive information systems find a way to relieve the burden (consistent with minimalism) by going in parts into the past and into parallel dimensions of time. To a relative observer, such time slows down, which reveals a lack of information by creating an attractive force (gravitational force). Relative energy is then as many times greater as time is slowed down because the mentioned (non-physical) reason for infinity also applies to energy, and especially to its conservation.

In the previous image on the right, we see how the kinetic energy of the ball is distributed among the bowling pins, knocking them down so that the total energy is conserved. Another example is a body in free fall, which gradually replaces its potential energy with kinetic energy so that the total sum of the two is always the same. We have often considered this second case: say, a more massive body relatively slows down time and flattens the beginnings of the past (Line up). Also, the slowing of time by movement corresponds to that due to rest at the reached height of the gravitational field, from the point of view of an observer at rest outside the field (Space-Time, 1.2.8 Vertical fall).

Another important trait that accompanies memorization is consistency. The past naturally guides the present in various ways. When we say "we are the way we are (were)," or "we are working on pacification," we are talking about, for example, political stability, wanting a desirable past with a supposedly better state. The physical purpose of the past is to dilute the present and make it more certain, and with that comes consistency. These again are not enough in the world of non-physical ideas (Consistency).

In other words, we arrive again at the thesis that we can only expect the ability to remember from the physically real world (Inseparable). That is why it is always related to some medium of a physical nature, starting from ancient stone tablets and ending with modern databases. Abstract truths such as mathematical truths can be considered so ubiquitous that they belong to the past, present, and future at the same time. However, we can consider them equally only as the present, without the real past, because before the creation these truths would not have been like that.


Question: Is being forgetful a sign of high intelligence?


Answer: I know what you're getting at, but I'm afraid it's a bit trite. If there are such scientific findings, e.g. by Professor Blake Richards, I do not doubt their authenticity, but I would not connect them directly to these theses either. It would be like asking Darwin to dictate, right now, the detailed DNA of all species up to man, or Democritus to elaborate the periodic table of elements. Such a causality would, after all, contradict the theory of information that I am presenting.

However, there are some clues. If greater intelligence (statistically) means a greater "amount of options" (information) of an individual, and this is characterized by a "world of ideas" rather than "physical reality," then its memory is not first in importance either (Fictional). Living outside reality and too much in fiction, reality escapes us, and it is the main bearer of "memorization" (we mean forensic, archaeological, and other material evidence of the past). On the other hand, grasping the point without burdening oneself with superfluous details is stated in the appendices as the main theoretical argument.

The development of artificial intelligence will confirm this concept. The development of computing up to machine learning has shown (Diagnosis) that we can separate the software that thinks (algorithms) from that which remembers (data), gaining easier programming, universality, and computational efficiency, and thereby also easier handling of the machine's own data. This underlines the difference between pure "smartness" and pure "knowledge."

In its own way, the evolution of species on Earth supports the same thesis. If we pay attention to development through reproduction, as a semi-periodic process (descendants resemble ancestors, but not quite) that progresses in a kind of wave towards more vital and sometimes more intelligent forms of life, almost without (visible) memory of ancestors, it also tells us that greater vitality does not necessarily go with better memory, nor with historical depth. With the invention of writing and other aids, let us add, people can become individually duller and yet inherit growing technological and scientific power, up to a point. The decline of the abilities of individuals at the expense of the growth of the vitality of civilization is currently underway.

The ability to remember is transferred from lower to higher forms of life, and from cell forms to the complex organism they build, in ways that resemble the transfer of substance. In the advanced stages, however, that resemblance proceeds more slowly, or backward, so that with material complexity it moves forward again, then back with the strengthening of the organization's vitality. It is known in medicine that the cells of our body "remember." We say as much when we complain that "the body remembers fat" while trying to lose a few kilos: no matter what I do when I lose weight, hunger regularly puts me back at my old weight. "Stubborn memory is not smart" would be an assertion in support of the above.

When we try to quit gambling, smoking, alcohol, or drugs, it happens that the "memory" power of our unconscious body overcomes its "mind." Similarly, routine behavior that relies on the "known," without the skill and unpredictability of, say, intelligence, is not very vital. It is not strategic enough, in the sense of winning the war while losing some battles, and that is why it remains a minor league in higher competitions. This is almost the same claim, said differently: the past directs the future and robs it of options, while liveliness comes with surprises.

Be that as it may, this (hypo)thesis about the necessarily smaller memory of living individuals with higher intelligence seems to me superficial, but not unfounded if we take it as a slight, barely visible "connection." First of all, because inconsistency is a characteristic of "manipulators," Second League players (Traits), who are stronger than the "good guys" (Third League) because they have more vitality, but weaker than the "bad guys" (First League), who have even greater vitality and manipulate fantasy-reality relations even better.


Question: Why do you think we are "evolving to be less sexual"?


Answer: It is a very long-term trend. The indicators are a lowering of the tolerance threshold for rape, an increase in the percentage of those not interested in sex, the increasingly frequent replacement of partners by technology, including future single (artificial) births, etc. The sexual attraction of intelligence, strength, and health will fade.

Emotions are one of the topics I keep coming back to, which I skillfully avoid and reduce to the principle of least action (spontaneity towards less information, greater probability).

The last time I mentioned this was via the origin of intelligence. Long before that, I (bitterly) discussed the (in)accuracy of the "significant increase in intelligence" during the 20th century (the Flynn effect), citing a possible measurement error due to the large increase in education in that period, which could have made it easier for respondents to solve IQ tests. At that time, women with higher education, a career, or intelligence tended, as later, to have fewer children and thereby contributed less of their genes to the next generations. That is why I regularly avoid descriptions of passion.

Neither "unhealthy diet" nor "polluted environment" nor similar popular topics of today will be the only, or even the leading, reason for our decline in inherited abilities; it is above all the principle of minimalism. By the very desire for comfort, we make artificial aids, and then all the bodily capacities we do not use become stunted (through generations). Instincts will veer towards ways of living different from the ancestors', for example leaving the raising of offspring to the state and leaving marriage for single parenthood. And organizations in general will assume more and more powers.

With the progress of medicine today, we are stealing the health of future generations, even when it comes to reproduction. For now, such causes are not emphasized but only analyzed, as in Levine et al.'s 2017 report that sperm counts are declining at an accelerating rate (now 2.64% per year) worldwide; male fertility is described as "a major public health problem" (The decline). Reproductive problems on both sides, in men and in women, are growing at an alarming rate, and such statements appear ever more frequently in recent analyses. Infertility rates are on the rise.

It is no longer a rare finding (Reproductive Problems) that the entire spectrum of male reproductive problems is increasing by about 1 percent per year in Western countries. This "1 percent effect" includes declining sperm counts, decreased testosterone levels, and increased rates of testicular cancer, as well as a rising prevalence of erectile dysfunction. On the female side of the equation, miscarriage rates are also increasing by about 1 percent per year in the US, as is surrogacy. Meanwhile, the total fertility rate worldwide declined by nearly 1 percent annually from 1960 to 2018.

The need to dominate, as well as to be dominated, stems from a principled desire for less action, less communication, and greater probability. Only seemingly, secondarily, does it become what we often substitute for it, such as the will for power and protection, the death drive, or the desire for comfort. The same deep root underlies the passion of scientists for discovering rules and developing technology, all the way to today's selling of search engines, tomorrow's robots, entertainment, and medicines. Consideration and tolerance in dealings between people also grow because of this universal tendency to inertness (Laziness).

The need to calm down leads to less information of perception (Traits) between the subject and the environment, and less defiance of external actions, which will make us more and more players of the Third League. In other words, tolerance will tend to reach such a level that you will be embarrassed to tell the truth, as was humorously announced. Combining only good with good will spontaneously become an increasingly common phenomenon in direct interpersonal relationships, and the dominance of such people will be strengthened by legislation. It will work in synergy with technology in the further decline of people's inherited abilities.

The development of these flows, of course, cannot be linear. Globalism today is losing the battle against the sovereigntists (I announced this a decade ago), because nature does not like equality (subordinate to superior) as much as universal diversity. But that does not mean that some of its elements will not be strengthened later, painted differently. One of them, I keep repeating, is the need to surrender personal freedoms to collectives for the sake of supposed efficiency and security.

Although such a future may seem terrible from the point of view of some of the past and of us today, if it comes as a combination of natural-artificial humans and other creatures, it will be acceptable to the majority. Let us remember the era before the appearance of the car, and the predictions of general madness and destruction of the world because people would move at unnatural speeds; or telephones, televisions, and many new devices because of which "the apocalypse is coming."


Question: How does "conservation of force" follow from the law of conservation of information?


Answer: A fair coin has "tails" and "heads" sides with equal chances of landing when tossed. There are two of them, each with probability 1/2 and mean information S2

log 2 = -0.5⋅log 0.5 - 0.5⋅log 0.5.

A die has sides {1, 2, ..., 6} all with landing probabilities of 1/6, the mean information S6 = log 6.
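The coin and die values follow directly from the logarithm of the number of equally likely outcomes; a minimal check, assuming the base-2 logarithm so that the unit is the bit:

```python
import math

def uniform_info(n):
    """Mean information of n equally likely outcomes, in bits (log base 2)."""
    return math.log2(n)

print(uniform_info(2))  # 1.0 (S2 = log 2, the fair coin)
print(uniform_info(6))  # about 2.585 (S6 = log 6, the fair die)

# The same value via Shannon's sum for the coin:
s2 = -sum(0.5 * math.log2(0.5) for _ in range(2))
assert s2 == uniform_info(2)
```

Any other logarithm base would only rescale the unit; the die always carries log 6 / log 2 ≈ 2.585 times the coin's uncertainty per toss.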

More of these outcomes carry greater uncertainty, equal to the information of each individually realized one. All this amount of uncertainty is squeezed out in realization, like water from a jellyfish's bell (picture on the left) as it moves. The principle of least action restrains this "squeezing out," but the law of conservation of information encourages it.

Already from the described process of the birth and death of an outcome, without entering into the interpretation of its types, we understand the alternation of the force of probability and uncertainty, i.e. the presence of the smallest "packages of force". We have seen (Effort) that such tiny but unavoidable forces exist in the expression for information (F⋅Δx⋅Δt). Movement (along the x-axis) causes a relative contraction of length and dilation of time (Line up, 4), here Δx = Δx0/γ and Δt = Δt0⋅γ, so Δx⋅Δt = Δx0⋅Δt0, and the product is constant in the gravitational field as well.
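The claimed invariance of the product Δx⋅Δt is easy to verify numerically; a minimal sketch, in units where c = 1 and with arbitrary illustrative proper intervals:

```python
import math

def gamma(v):
    """Lorentz factor for speed v, in units where c = 1."""
    return 1.0 / math.sqrt(1.0 - v * v)

dx0, dt0 = 2.0, 3.0              # proper length and proper time interval
for v in (0.1, 0.5, 0.9):        # speeds as fractions of c
    g = gamma(v)
    dx, dt = dx0 / g, dt0 * g    # length contraction, time dilation
    print(v, dx * dt)            # the product stays 6.0 at every speed
```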

In these cases, therefore, the law of conservation of force reduces to the conservation of the amount of action, that is, of information. More options mean more uncertainty; in the above example, a bigger cap takes in more water and momentum, which now means a bigger instance of the mentioned small force F. Less likely outcomes carry a greater force, which I verified with simulations (Uncertainty Force).

More interesting than the question itself are the consequences of this conservation of force. Some of them, I hope, can be explained popularly enough for answers in a place like this. For example, it is easy to notice that nature is never at rest, because its micro-forces are unavoidable in every piece of information, and information makes up the fabric of space, time, and matter. Another example: a gravitational field with a slower flow of time has fewer events and therefore fewer of those forces, which is why places with a slower flow of time become attractive. In general, less informative states are attractive, as are more likely outcomes.


Question: Can induction be seen as "short-term memory"?


Answer: Yes. Electromagnetic induction is the appearance of an electric voltage due to a changing magnetic flux. When the field of the magnet B, pictured on the right, passes through the coil at an angle θ to the normal of its surface A, the magnetic flux is

φ = -AB cos θ.

If the coil has n = 1, 2, 3, ... turns, the induced electromotive force is ε = -n Δφ/Δt, where Δφ/Δt is the rate of change of the flux. The induced emf opposes its cause, hence the negative sign (Lenz's law).
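Faraday's law above can be tried on illustrative numbers; a minimal sketch, using the standard convention φ = AB cos θ for the flux (the coil size, field strength, and timing below are invented for the example, not taken from the text):

```python
import math

def magnetic_flux(A, B, theta):
    """Flux through a surface of area A in field B at angle theta to the normal."""
    return A * B * math.cos(theta)

def induced_emf(n, flux_before, flux_after, dt):
    """Faraday/Lenz: emf = -n * (change of flux) / (time interval)."""
    return -n * (flux_after - flux_before) / dt

# Invented numbers: a 100-turn coil of area 0.01 m^2, the field through it
# rising from 0 T to 0.5 T in 0.1 s, with theta = 0 (normal along the field).
emf = induced_emf(100,
                  magnetic_flux(0.01, 0.0, 0.0),
                  magnetic_flux(0.01, 0.5, 0.0),
                  0.1)
print(emf)  # -5.0 V: the sign shows the emf opposing the change
```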

1. We imagine the lines of the magnetic field as detours around the magnet from its north (N) to its south (S) pole. The number of these magnetic lines through the surface (A) is called the flux (φ). This classic explanation now becomes informatical: the magnetic field lines are, let us say, "disruptions" (paths for conveying information) seen by the magnet. Not everything communicates with everything, we know that, but here it is obvious that electricity also interacts with these disturbances. The movement of the magnetic field creates an electric field and vice versa.

2. Lorentz transformations of electric and magnetic fields, vectors:

E = (Ex, Ey, Ez),   B = (Bx, By, Bz),

between two rectangular Cartesian coordinate systems Oxyz and O'x'y'z' in rectilinear inertial motion with velocity v = (v, 0, 0) are:

\[ \begin{cases} E'_x = E_x \\ E'_y = \gamma(E_y - vB_z) \\ E'_z = \gamma(E_z + vB_y) \end{cases}, \quad \begin{cases} B'_x = B_x \\ B'_y = \gamma(B_y + \frac{v}{c^2}E_z) \\ B'_z = \gamma(B_z - \frac{v}{c^2}E_y) \end{cases}, \]

where c ≈ 300,000 km/s is the speed of light in a vacuum, and

\[ \gamma = \frac{1}{\sqrt{1 - \frac{v^2}{c^2}}} \]

is the so-called Lorentz factor. The field, electric or magnetic, does not change in the direction of motion, but only perpendicular to the velocity, the vector v being parallel to the x-axis.
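The transformation formulas above can be wrapped in a small routine; a sketch (the sample field and the boost speed 0.5c are invented for illustration):

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def boost_fields(E, B, v):
    """Lorentz transformation of (E, B) components for a boost v along x."""
    Ex, Ey, Ez = E
    Bx, By, Bz = B
    g = 1.0 / math.sqrt(1.0 - (v / C) ** 2)        # Lorentz factor gamma
    Ep = (Ex, g * (Ey - v * Bz), g * (Ez + v * By))
    Bp = (Bx, g * (By + v * Ez / C**2), g * (Bz - v * Ey / C**2))
    return Ep, Bp

# A purely magnetic field (1 T along z) seen from a frame moving at 0.5c:
Ep, Bp = boost_fields((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), 0.5 * C)
print(Ep)  # a transverse electric component E'_y appears
print(Bp)
```

A purely magnetic field acquires a transverse electric component in the moving frame, which is the relativistic view of induction described in this answer.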

3. The number of intersected lines increases with speed and with their density, and one relativistic phenomenon should be noted: this number will not be the same when the conductor is stationary and the magnet is moving as when the magnet is stationary and the conductor is moving. A field source in relative motion gives an increased number of intersected lines of force due to the contraction of length in the direction of movement, precisely because the cutting is lateral to the velocity.

4. Before we similarly understand the action of the electron on the conductor, and of charge on charge (Moving charges), relativistically and without invoking the magnetic field, let us recall the old explanation with the help of the following image. Parallel conductors attract if the currents are in the same direction and repel when the currents are in opposite directions. The picture shows a view down the conductors: ⊙ with the current flowing towards us and ⊗ with the current flowing away from us.

Induction B

The resulting magnetic field around the current-carrying wires, each circling in the direction of the curled fingers of the right hand with the thumb along the current, has lines of force that behave like rubber bands: stretched from the first to the second conductor, they try to contract like a closing fist, and where they run against each other, they push apart. Informatically, that is fine too, because the sparseness of communication in the left part of that picture is consistent with minimalism and is attractive to the currents, while the density on the right is repulsive.

5. The conductor itself is electrically neutral. Consider the motion of a single electron alongside it, so that we have two attractive parallel currents in the same direction, treated relativistically and without a magnetic field. The conductor's negative charges are at rest with respect to the individual electron, but the positive ones move relatively and contract along the direction of movement. They condense and create an imbalance with an excess of positive charge, which therefore becomes attractive to the solitary electron.

When the electrons of the conductor are moving and the protons are at rest relative to the separate electron, the conductor's electrons are condensed and the Coulomb force between the conductor and the separate electron becomes repulsive. This is the case of parallel currents in opposite relative motion, with repelling conductors. Just as in the informational explanations we still hesitate with the details of the description of the lines of electromagnetic force, we also avoid describing the imbalance between the negative and positive electric charge. The informatics theory fits the descriptions of the field lines even without this.

6. According to Coulomb's law, electrons repel each other with forces that decrease with the square of the distance, but their movements induce opposing magnetic fields that partially slow them down. This long-known effect can be explained relativistically, without magnetic forces, simply by the slowing of the second electron's time from the point of view of the first. However, measurements suggest that Coulomb's law is not exact, as the repulsion decreases faster than the square of the distance, though with an explanation that serves as a good patch.

There is another way of understanding it (Current). Let us accept that the magnetic field is induced by the movement of electric charge, and not when it is at rest. We have seen that both ways, classical and relativistic, are perfectly fine, but let us add that electrons would attract each other at rest. It sounds absurd, and it is, if only because it is not possible to completely stop elementary particles; however, it again explains the phenomenon of repulsion. Then let us add that two such particle-waves can be "glued" to each other, in a wave superposition, in phase, when there is no mutual movement between them.

7. Although very unusual, the explanation (6) of electrons that are not repelled at rest is stronger than the previous one (5) of one electron next to a conductor, or of two electrons, because it is equally valid for two conductors. It also brings up the much deeper topic of interference, the superposition of two or more waves that results in an amplified or attenuated wave. All of nature is a story of waves: starting with the smallest particles (Matter wave) and the Schrödinger equation of quantum mechanics based on that concept, through the roots of unity (Roots of units) of periodic phenomena that can store amounts of energy, action, or information, and even isometries in general (mappings that keep the distances between points unchanged).

8. By the way, the word "induction" has many other meanings, but mostly as a consequence of something that preceded it. For example, the induction period in chemical kinetics is the initial slow stage of a chemical reaction, after which the reaction speeds up. In medicine, induced stem cells (iSC) are derived from somatic, reproductive, pluripotent, or other types of cells by deliberate epigenetic reprogramming. Mathematical induction is a method of proving that a statement T(n) is true for every n = 1, 2, 3, .... Grammatical induction is a process in machine learning of learning a formal grammar from a set of observations, to obtain a model with the characteristics of the observed objects.


Question: Do you have something for pests?


Answer: The above question seemed like a joke to me, but it can be inspiring and serious. A pest is a type of plant or animal that threatens property or humans. It mostly refers to living organisms harmful to crops, livestock, and us in our homes. The most common pests are mice and rats, ticks, fleas, and cockroaches, then the insects on the left, but also numerous other organisms.

1. Looking more broadly, we can formalize pests in the competition between good and evil as a dynamic process. Let in the k-th step (k = 1, 2, 3, ...) some population have xk of the good (desired) and yk of the evil (pests), changing from step to step as follows:

\[ \begin{cases} x_{k+1} = 1.03x_k - 0.1y_k \\ y_{k+1} = 0.08x_k + 0.85y_k \end{cases} \]

This means that the good would by itself grow by 3 percent (1.03), but the pests take away 10 percent of it (-0.1). That is what the first linear equation says. The second says that the population of pests declines in the absence of the good, since only 85 percent of them survive each step (0.85), because they are parasites of that same good, of which they infect, or convert into their own, 8 percent (0.08). It is the so-called recursive process

\[ \begin{pmatrix} x_{k+1} \\ y_{k+1} \end{pmatrix} = \begin{pmatrix} 1.03 & -0.1 \\ 0.08 & 0.85 \end{pmatrix} \begin{pmatrix} x_k \\ y_k \end{pmatrix}, \]

or written more briefly, pk+1 = Apk, in matrix-vector form. From the characteristic equation (Avi = λivi) we find the eigenvalues λ1 = 0.95 and λ2 = 0.93 with the corresponding eigenvectors:

\[ \textbf{v}_1 = \begin{pmatrix} 1 \\ 0.8 \end{pmatrix}, \quad \textbf{v}_2 = \begin{pmatrix} 1 \\ 1 \end{pmatrix}. \]

Otherwise, the general solution of such a linear system of the n-th order (with an associated n × n matrix) and eigenpairs (λk, vk), k = 1, 2, ..., n, is

pk = C1λ1kv1 + C2λ2kv2 + ... + Cnλnkvn,

where the Ci are constants that depend only on the initial conditions. Here (n = 2) it is

\[ \begin{pmatrix} x_k \\ y_k \end{pmatrix} = C_1(0.95)^k \begin{pmatrix} 1 \\ 0.8 \end{pmatrix} + C_2(0.93)^k \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \]

that is, pk = C1(0.95)kv1 + C2(0.93)kv2, in order for the steps k = 1, 2, 3, .... Notice that both of these lambdas (eigenvalues) are less than one, so as the number of steps increases (k → ∞) their powers vanish, they become zero. Hence pk → 0 as k → ∞.

In other words, these populations, with pests that only parasitize the common good, disappear over time. Together they make a destructive system in which nothing survives, neither good nor evil. They have no future.
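The extinction that the eigenvalues predict can be confirmed by simply iterating the recursion; a minimal sketch (the initial populations are invented for illustration):

```python
def step(x, y):
    """One step of the recursion p_{k+1} = A p_k with the matrix from the text."""
    return 1.03 * x - 0.1 * y, 0.08 * x + 0.85 * y

x, y = 100.0, 10.0            # illustrative starting populations
for k in range(300):
    x, y = step(x, y)
print(x, y)                   # both practically zero: nothing survives
```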

2. However, if we engage in the control of pests, say as vigorously as they engage in the destruction of the good, changing the coefficient (0.08 → -0.1) in the second equation of the previous system, we get the matrix equation

\[ \begin{pmatrix} x_{k+1} \\ y_{k+1} \end{pmatrix} = \begin{pmatrix} 1.03 & -0.1 \\ -0.1 & 0.85 \end{pmatrix} \begin{pmatrix} x_k \\ y_k \end{pmatrix}. \]

Now the "good" by itself will continue to prosper in steps (k = 1, 2, 3, ...) by 3 percent, and the "evil" will still take a tenth from it. However, the first now retaliates and also destroys a tenth of the second, while the "evil" still retains (infects, subjugates) the same previous 85 percent of itself.

Now the eigenvalues are approximately λ1 ≈ 1.075 and λ2 ≈ 0.805, which stand next to the (approximate) eigenvectors:

\[ \textbf{v}_1 = \begin{pmatrix} 1 \\ -0.445 \end{pmatrix}, \quad \textbf{v}_2 = \begin{pmatrix} 1 \\ 2.245 \end{pmatrix}. \]

The perspective of this population is completely different, as we can see from the general solution

pk = C1(1.075)kv1 + C2(0.805)kv2,

in the k-th step. As the steps increase (k → ∞), the first summand of the solution grows ever larger, while the second disappears. In other words, the "good" prospers and the "evil" disappears. I deliberately use such general terms because, like a multiplication table, this model can serve various calculations, if only we grasp the essence of the terms "good" and "evil".
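Both characteristic equations above can be checked directly; a minimal sketch using the quadratic formula for 2 × 2 matrices:

```python
import math

def eig2(a, b, c, d):
    """Eigenvalues of [[a, b], [c, d]] from its characteristic polynomial."""
    tr, det = a + d, a * d - b * c
    disc = math.sqrt(tr * tr - 4.0 * det)
    return (tr + disc) / 2.0, (tr - disc) / 2.0

print(eig2(1.03, -0.1, 0.08, 0.85))    # about (0.95, 0.93): everything decays
print(eig2(1.03, -0.1, -0.1, 0.85))    # about (1.075, 0.805): one mode grows
```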

3. The lesson of the previous two examples is that if we do not suppress "evil" (1), in the end it destroys everything, including itself. However, there are (2) not-too-difficult ways to suppress the second so that only the first survives. All the while, the "evil" parasitizes the "good", converts it, and damages it, to some extent in each of the listed ways.

If the pest were not concerned with destruction, infection, or subjugation, its character and the nature of these equations would change. That is why this type of destruction is specific: in the first case (1) it is self-destructive, while the second (2) shows us that such a malignant generator of damage is definitely defeatable, with a model that completely exterminates the "evil".

In short, the non-resisting side loses; alternatively, we can say that the loser is the opponent that overreacts (Reciprocity).


Question: Complex relationships in nature between goods and pests?


Answer: That's right, but the description here (Pest) subsumes them under the two grouped sums, x and y, of the various "good" and "bad". It can, however, also be made more detailed and complex.

Allegedly, in China, in order to protect the crops (x), they once killed the sparrows (y), which normally also eat bugs (z), themselves grain pests. It is easy to construct examples of systems of equations that simulate, among other things, the suppression of only one type of pest which is, at the same time, our ally against other plagues.

Imitating the previous simple system, its more detailed form would look like the following:

\[ \begin{cases} x_{k+1} = a_{xx} x_k + a_{xy} y_k + a_{xz} z_k + ... \\ y_{k+1} = a_{yx} x_k + a_{yy} y_k + a_{yz} z_k + ... \\ z_{k+1} = a_{zx} x_k + a_{zy} y_k + a_{zz} z_k + ... \\ ... \end{cases} \]

no matter how many "good" and "bad" participants there are. In simple cases, the coefficients aij are constants that indicate the share of the j-th "good" or "evil" in the i-th, during each of the steps k = 0, 1, 2, ... of the process. In a separate attachment, I described ways of solving such recursions (Systems).
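A step of this general recursion is a plain matrix-vector product; a minimal sketch (the 3 × 3 coefficients below are invented for illustration, not calibrated to the sparrow story):

```python
def step(A, p):
    """One step p_{k+1} = A p_k of the general linear recursion."""
    return [sum(a * v for a, v in zip(row, p)) for row in A]

# Hypothetical 3-species coefficients: crops x, sparrows y that eat both
# crops and bugs, and bugs z that damage crops.
A = [[1.02, -0.05, -0.08],
     [0.01,  0.90,  0.04],
     [0.03, -0.06,  0.95]]
p = [100.0, 20.0, 50.0]       # illustrative initial populations
for k in range(10):
    p = step(A, p)
print(p)
```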

However, the example with the sparrows is significantly different from the previous one, which was based on pathogens that would infect us or the object of preservation, or on an army (of ants) that comes not only to rob but also to enslave. Such differences require different coefficient models for these linear systems of recursive equations. In addition, the models have their limitations, for example due to the nature of processes that themselves change as they change the environment, when these coefficients (aij) become variable quantities.

An even further step into the systemic competition of good and evil would lead us through games of victory (Reciprocity), because basically any defiance of the processes of dead nature (of the principle of least action) leads to greater vitality.


April 2024 (Original ≽)