March 2023



Question: Does the present overtake the past at the speed of light?


Answer: The metric of space-time makes sense of it first. This is Hermann Minkowski's concept from 1908; he was a mathematician who found a way to reformulate Albert Einstein's special theory of relativity (1905). Time is represented by the path crossed by light (x = ict) at its speed in vacuum (c ≈ 300,000 km/s), while the imaginary unit (i² = -1) was needed for everything to fit together mathematically.
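Minkowski's bookkeeping can be checked in a few lines. This is only an illustrative sketch (the constant and function names are my own): with the imaginary coordinate x₄ = ict, the squared interval of a light signal vanishes for every observer.

```python
C = 299_792.458  # speed of light in vacuum, km/s

def interval_squared(x, t):
    """Minkowski's squared interval with the imaginary time coordinate:
    s^2 = x^2 + (i*c*t)^2 = x^2 - (c*t)^2, because i^2 = -1."""
    return x**2 - (C * t)**2

# Light always travels x = c*t, so its interval is exactly zero:
t = 2.0        # seconds
x = C * t      # kilometers covered by light in that time
print(interval_squared(x, t))  # → 0.0
```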

Einstein's space-time is not flat; my theory of information gives it a deeper meaning through the objectivity of uncertainty and new dimensions. The present, otherwise relative to the observer, now becomes a structure of relations to all other times. At the micro level it walks into "forbidden" areas, bypassing its reality, while at the macro level it moves away from its past at the speed of light.

Namely, c is the maximum speed we perceive, and it determines the steps between the presents of Minkowski's space-time. On the other hand, it helps in understanding some other phenomena, for now outside the scope of the natural sciences. Additionally, the very idea of the present "moving" through space-time is consistent with information theory.

A subject moving at the speed of light has no flow of time; for it, time stands still, and the effect of "change" is created by other observers. In the following moments, the photons are in the (most) likely places from the positions of the relative observers, and their frequencies (energies) differ accordingly. Thus we see a source of light that recedes into our future (moving away) with lower frequencies, in contrast to a source closer to us in the past (approaching), which shows higher frequencies of the same light; this is the phenomenon known as the Doppler effect.

Only a step beyond what has been described is the reality of different positions in space-time, the concept of the present "in motion". If the light source moves away from us at an ever-increasing speed, the energy of the observed photons decreases, so that in the limiting case of a source receding at the speed of light the photons would "disappear", and we would no longer be able to perceive the light of the source.
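The "disappearance" of the photons in that limiting case follows from the standard relativistic Doppler formula; here is a minimal sketch (the function name and sample speeds are my own choices):

```python
import math

def doppler_factor(beta):
    """Longitudinal relativistic Doppler ratio f_observed / f_source
    for a source receding at speed v = beta * c."""
    return math.sqrt((1 - beta) / (1 + beta))

for beta in (0.0, 0.5, 0.9, 0.999):
    print(beta, doppler_factor(beta))
# As beta -> 1 the ratio tends to 0: the light of the source fades away.
```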


Question: What are the most important problems with the movement of the present in space-time that you mentioned?


Answer: The first question is "through what" the present is moving: does this environment, space-time, have any dynamics, and of what kind? Along with that comes the question of how the statics or dynamics of the surrounding space-time relate to the objectivity of chance. There is also the question of the past "catching up" with such a fast present, and there are more. But let us go through the possible answers in order, without haste.

The past that remains behind us is static, except that it fades (Channel). Processes of the present, formally viewed, are channels of information transfer from the past to the future, from whose outputs the input can be read iff the transfer probability of each i to i is greater than the sum of all the other transfers of i to j (j ≠ i). However, if the chances of transmission errors are higher, there will be no inverse matrix (Informatics Theory, 10. Stochastic matrices) and there will be unrecoverable transmission errors.
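The invertibility condition above can be illustrated on a binary channel; this is a toy sketch (the matrix values are my own), not the script's general treatment:

```python
def det2(m):
    """Determinant of a 2x2 matrix [[a, b], [c, d]]."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

# Each diagonal entry exceeds the sum of the other entries in its row,
# so the stochastic matrix has an inverse and the input can be read back.
reliable = [[0.9, 0.1],
            [0.1, 0.9]]

# Maximal noise: both inputs produce the same output distribution,
# the determinant vanishes and there is no inverse matrix.
noisy = [[0.5, 0.5],
         [0.5, 0.5]]

print(det2(reliable))  # ≈ 0.8, invertible
print(det2(noisy))     # 0.0, unrecoverable transmission errors
```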

The loss of initial information is greater the longer the channel is; this can also be seen from the theorems of the next chapter (11. Adaptation by stochastic matrix) of the aforementioned script, which speak of the narrowing of the range of the output distribution and of the existence of only one (eigen-distribution) to which all inputs tend, and which is an impersonal uniform distribution. The latter can especially be seen from the subsequent theorem and example (13. Multiplicity). They prove the increasingly difficult reconstruction of the increasingly distant past as an objective phenomenon, and, in that relative sense, its dynamics in relation to the ongoing present.
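The tendency of all inputs toward one impersonal uniform distribution can be simulated directly; a minimal sketch (the channel values are my own):

```python
def step(p, m):
    """One pass through the channel: out_j = sum_i p_i * m[i][j]."""
    return [sum(p[i] * m[i][j] for i in range(len(p)))
            for j in range(len(m[0]))]

channel = [[0.9, 0.1],
           [0.1, 0.9]]

p = [1.0, 0.0]        # a sharply known input distribution
for _ in range(50):   # a long series of such channels
    p = step(p, channel)
print(p)              # approaches the uniform distribution [0.5, 0.5]
```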

The information of the past grows, which actually means that a longer history gradually becomes a "black box" for the present, something whose content we cannot find out, whether due to overloading with interference (noise) or due to misinformation. This increase in uncertainty formally behaves as if it pushes (Uncertainty Force) the present away from itself, from the past towards the future. The answer to the third question is even more interesting.

Older times are faster (Growing). That is how we "remember" past events more than we can "predict" future ones. Time slowly slows down due to the principle of saving information; certainties become greater, which means fewer outcomes, less "uncertainty force" and a greater attraction of the future for the ongoing present. A consequence of this slowing down of time, I suppose, is also the reduction of the speed of light.

Some of these "solutions" are only hypothetical and can be exchanged for perhaps better ones, as you can see, but some arise from the new information theory itself. I dealt with them simply because, by pushing the idea of "objective chance" to the point of absurdity, its validity can be tested.


Question: Can certainty be understood only as reduced uncertainty, or is it a separate category?


Answer: I'm not sure for now, but I still treat "certainty" as a type of uncertainty, while waiting for some possible evidence. I do not accept simple induction; for example, if in all your years you have never been dead, it does not follow that you are immortal.

For the sake of testing, I postulated certainty as a great lack of uncertainty, but I still find no convincing places for it; I admit that I did not deal with that setting too seriously, that I did not test the idea in sufficiently absurd situations. The objectivity of the "uncertainty force", the more frequent realization of more likely outcomes, is in its favor. And the very fact that I dealt so much with that "force" shows that I do not underestimate such topics.

If something will not happen even in billions of years, in the theory of chance that still does not mean it cannot happen. On the other hand, if we extend the principle of economy of information to situations of "pure certainty", that is, zero information, the rest of the theory fits quite well. We then strive to discover lawfulness just as we strive for order, efficiency and security, thus running away from freedom. Nature follows the same rules for the same reasons, avoiding uncertainty. The theory becomes more compact, simpler and more convincing.

From the point of view of "memory", certainty is more easily accessible to the present, even when it lies in the future. Then they (memories) somehow fall into the same category as "predictions", which again is not illogical due to the related nature of the times before and after the present. At the same time, the same compulsion to move from the past to the future remains, again due to principled minimalism.


Question: Can you explain to me simply what "fermions" are?


Answer: I have described fermions exactly; you can read about them much more simply by clicking on the picture on the left, and I will now try to explain more plainly.

"Fermions" are simple particles of half-integer spin (±½), which we distinguish from "bosons" of integer spin (0, or ±1). In quantum mechanics, spin is, roughly speaking, like a two-stroke internal combustion engine, or, more professionally, a cyclical change of the state of a particle, including rotation.

The conservation law applies to spin, so a fermion cannot decay into two fermions, for that would make it a boson (±½ + ±½ --> 0 or ±1). Theoretically it could decay into three (½ --> ½ - ½ + ½), and practically also into a boson and a fermion of opposite spin (½ --> 1 - ½). The decay processes of unstable particles are "memoryless" and have an exponential distribution (Decays).
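The "memorylessness" of exponential decay can be checked numerically; a sketch with my own choice of rate and sample counts:

```python
import random

random.seed(1)
samples = [random.expovariate(1.0) for _ in range(200_000)]

# P(T > 1) versus P(T > 1.5 | T > 0.5): for the exponential distribution
# the elapsed 0.5 is "forgotten" and the two probabilities coincide.
p_beyond_1 = sum(t > 1.0 for t in samples) / len(samples)
survivors = [t for t in samples if t > 0.5]
p_cond = sum(t > 1.5 for t in survivors) / len(survivors)

print(round(p_beyond_1, 2), round(p_cond, 2))  # both ≈ exp(-1) ≈ 0.37
```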

For similar reasons, two identical fermions cannot be found in the same place at the same time (the Pauli exclusion principle). However, in addition to spin, they have three more degrees of freedom (Quantum Numbers); by combining these with the mentioned prohibition, for electrons in an atom, we get Mendeleev's table. Every day we witness the decay of fermions in the formation of chemically heavier elements and the emission of light (spin ±1) in fusion processes on the Sun, as well as on other stars.

It can be said that, due to the importance of position and instant for fermions, we cannot cram them into a pile like the bosons in the picture above left; we have to distinguish their positions and order of appearance. They therefore follow Fermi-Dirac statistics, in which we distinguish between the tail-head and head-tail outcomes of coin flipping, whether one coin is tossed twice or two coins are tossed at once. Hence, in general, larger particles (up to a dime), followers of the same statistics, are considered a type of fermion.


Question: Can you just as simply explain "bosons" to me as well?


Answer: Bosons are particles that carry energy and forces throughout the universe. The strong force is carried by "gluons" of spin ±1, the electromagnetic force by "photons" of spin ±1, and the "W and Z bosons" are responsible for the weak force, the first with charge ±1 and the second 0. We believe there is also a "graviton" of spin ±2 for gravity, which is still being searched for.

The creation of particles is studied in a subfield of physics that deals with their collisions and with observing the resulting trajectories. Almost all known particles decay, mostly very quickly. Only the electron, the three types of neutrinos, the photon and the graviton are stable, and only the last two are bosons. In July 2012, the ATLAS collaboration at CERN announced experimental confirmation of the existence of the now famous Higgs boson, of spin 0.

ATLAS (A Toroidal LHC Apparatus) and CMS (Compact Muon Solenoid), two "general purpose" detectors at the LHC (Large Hadron Collider), were the first to observe the decay of the Higgs boson into fermions. However, this phenomenon is characteristic of the early universe, about 13.8 billion years ago, when mass was formed. The Higgs particle is very unstable; once we produce it, it immediately disintegrates into other, stable particles, which is why there are no free ones left.

The Pauli exclusion principle does not apply to bosons. Equal bosons can be in the same place at the same time; unlike fermions, identical bosons are tolerated. For example, tossing two coins at once we need to distinguish four equally likely outcomes {HH, HT, TH, TT}, even if we think that the outcome head-tail is "indistinguishable" from tail-head. We distinguish them because the coins are "fermions"; if they were "bosons" this would not be the case, and the mentioned set would have only three equally likely outcomes. This is why the cosmos of bosons could be packed into just one single "point", perhaps at the time of the "Big Bang" nearly 14 billion years ago.
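The two ways of counting coin outcomes correspond to ordered and unordered selections; a small sketch of that combinatorial difference (of the coin analogy only, not of full quantum statistics):

```python
from itertools import product, combinations_with_replacement

states = ("H", "T")

# "Fermion-like" counting: ordered outcomes, HT and TH are distinct.
fermi_like = list(product(states, repeat=2))

# "Boson-like" counting: unordered outcomes, HT and TH merge into one.
bose_like = list(combinations_with_replacement(states, 2))

print(len(fermi_like))  # 4 outcomes: HH, HT, TH, TT
print(len(bose_like))   # 3 outcomes: HH, HT, TT
```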

According to this (unofficial) "information theory", the spontaneous growth of entropy in thermodynamics applies to matter (fermions) and not to space (bosons), as is believed, while the information of both spontaneously decreases. The information of substance slowly melts into space and into an ever longer memory, so that the total amount of all three remains the same. That is why we have more and more space, the substance of the cosmos becomes rarer, and the specific mass less and less. It simply means that the average probability of the particles of the universe decaying into bosons would be (slightly) higher than that of a similar decay into fermions.


Question: Does it mean (previously stated) that in the distant future, only space will remain of the universe?


Answer: Maybe. The problems with such questions in this theory come from its assumption of uncertainty. Starting from "at least somewhere", it arrives at "many places". The impossibility of predicting the outcome of a coin toss, like that of the ergodic theorems (61.2. Information Theory II), limits both memory span and predictive ability. And there are already many limitations, from some immediate outcomes, random events of the micro-world, to estimates of the most distant past and future events.

Not counting changes of the natural laws themselves, but only the one effect of the shrinkage of the cosmos under the principle of thriftiness of information, space growing at the expense of substance will ultimately give empty space. If only boson particles then remained, then due to the possibility of such bosons being present in the same place at the same time, and on the other side due to gravity acting on space as well, the collapse of the universe into one single point is also an option, i.e. a single space-time event.

The past, which until then had grown longer and longer by "dilution", would itself condense into a single point, because time is also "thinned out" by the absence of outcomes; it slows down, down to a complete stop, a disappearance. A huge amount of uncertainty packed into such a small event would certainly somehow explode.

Such a tale of periodic "big explosions" (Big Bangs) of our universe is palatable, a nice story, and consistent with this theory, but the catch is that it is not the only one. The same theory of information not only leaves open those possibilities of different continuations of world processes, but predicts them as equally realistic.


Question: What do you mean by "memory"?


Answer: Conditional probabilities from Quantum Mechanics (1.1.5), with a slightly broader topic. When past outcomes do not change the probabilities of future outcomes, we have a "memoryless" process, or a series of independent events.

In the picture on the right is the union of the sets of random events A and B, given by the set differences A\B and B\A and the intersection AB. The differences and the intersection have no common elements, so we say that they are disjoint sets, that is, mutually exclusive events. Hence the sums and probabilities:

|A + B| = |A\B| + |AB| + |B\A|,
|A| = |A\B| + |AB|,     |B| = |B\A| + |AB|,
|A + B| = |A| + |B| - |AB|,
Pr(A + B) = Pr(A) + Pr(B) - Pr(AB),

where by sum we usually mark the union of random events, and by product the intersection. In addition, the notion of conditional probability is also important to us:

Pr(A|B) = Pr(AB)/Pr(B),   Pr(B) ≠ 0,

the probability that something from the set A will happen, when we know that something from the set B happened.

Events from A are said to be independent of the events from set B when Pr(A|B) = Pr(A). Likewise, the relation Pr(AB) = Pr(A)Pr(B) defines the mutual independence of the listed events. Events that are not independent are dependent. That is all pretty obvious, but the following theorem is not:

  • if events A and B are independent, then they do not exclude each other;
  • if events A and B are mutually exclusive, then they are dependent.
See proofs and other details in the aforementioned book Quantum Mechanics.
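Both the inclusion-exclusion formula and the second bullet of the theorem can be verified on a finite example; a sketch with a fair die as the probability space (the events are my own choices):

```python
from fractions import Fraction as F

omega = {1, 2, 3, 4, 5, 6}   # a fair six-sided die

def pr(event):
    return F(len(event & omega), len(omega))

A = {1, 2, 3}
B = {4, 5}                   # disjoint from A: mutually exclusive events

# Pr(A + B) = Pr(A) + Pr(B) - Pr(AB)
assert pr(A | B) == pr(A) + pr(B) - pr(A & B)

# Mutually exclusive, possible events are dependent:
# Pr(AB) = 0 while Pr(A)Pr(B) = 1/6 > 0.
print(pr(A & B), pr(A) * pr(B))
```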

We consistently note that events which have no "memory" of previous ones are not dependent on them. Dependent, let us say, are the steps of a system whose information is decreasing and which is moving to states of higher probability. The universe makes a series of such dependent events, with the ever longer tails of memories of the older ones, which can exclude some current outcomes.

Finally, let us add the theorem on the exponential distribution, which, as we have seen, represents independent processes, processes without memory.


Question: Does the word "memory" then seem redundant next to "independence"?


Answer: No, if we note that "memory" in this context means the opposite of "independence". On the one hand, it will mean the easing of the "burden of freedom", and on the other, it will be a "deposit" for preserving the total amount of information. Accordingly, "remembering" becomes a kind of "directing". We see little of this if we stick only to the abstract expressions of the mathematical theory of probability.

Temporally, dependence also means some guidance. It is like organizing: losing some options and reducing information. However, we need a further interpretation of it in terms of inertia. Reducing a system to less information in general means reducing possibilities, limiting the display of force and influence. From those positions we better understand "independence" as the potential of power within the feeble efficiency of mere disorder.

The movement of a physical body by inertia is as much a consequence of its own memories as of the observer's memory. We have pictorial examples of this in the movements of the particles of light (photons), which have no proper time and no memories. That is why this information theory is special: it prefers to base its observations on the "information of perception". However, we understand that the greater power of penetration against the "uncertainty force" is not dull indolence so much as it comes from vitality.

We ourselves are an example: the mass of the dinosaurs and their millions of years of duration on Earth are not comparable to human flights beyond the Earth, with scientific knowledge and the power to control the planet. When Newton was once asked, on the occasion of the discovery of gravity (1687), how he did it, he allegedly replied that he "stood on the shoulders of giants", in order to see further and better than others.

The development of biological species in general can be understood as a process directed by the collective memory of the species and the environment; analogously, so can the development of the knowledge of our species, then the development of each individual civilization by its culture, down to the process of living of particular individuals "enriched" with memory.


Question: As far as "memories" go, is there some range to them?


Answer: Yes, that is the theme of the title "63. Signal range" from the script "Information Theory II", or of "61.2. Ergodic Theorem" of the same. Discussing those ranges here would be inappropriate for this type of text, so I will adapt the story.

Let's say that the signal A is transmitted through the channel K with a reliability of 99 percent, which means with a probability of p = 0.99. Then two such channels, one after the other, will conduct the same signal with 98 percent confidence (0.99² ≈ 0.98), a series of three channels with confidence 0.97 (0.99³), and so on, up to a sequence of some n = 68 such channels with correct transmission in about half of the cases (1/b = 0.5).

However, the range of a given signal through a series of channels K can be a little larger than 68 links, because the signal A can turn into the wrong signal B, which with a new error can give back the starting signal A. That is why the exact calculation is more complicated, but the essence does not escape us. The point is that in longer and longer processes these transmission errors can accumulate, so that the channel noise gradually chokes and suffocates the original information.

In general, let a single channel transmit a signal with reliability p ∈ (0, 1), and let it decrease through a series of n = 1, 2, 3, ... channels to the reliability 1/b, where b > 1. In the last case it will be p^n ≈ 1/b, and hence n·log_b p ≈ -1. When we understand this logarithm as the information of the given probability, I = -log_b p, the same relation can be written more simply as n·I ≈ 1, with a rough interpretation.
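The relations p^n ≈ 1/b and n·I ≈ 1 are easy to reproduce; a sketch for p = 0.99 and b = 2 (the exact n rounds to 69, while 68 in the example above is the last step still above one half):

```python
import math

p = 0.99   # reliability of a single channel
b = 2.0    # overall reliability falls to 1/b

# Number of chained channels with p**n ≈ 1/b:
n = math.log(1 / b) / math.log(p)
print(round(n))      # 69 channels

# Reading the logarithm as information, I = -log_b(p):
I = -math.log(p, b)
print(round(n * I))  # n·I ≈ 1, as in the text
```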

For example, the farther away an object is in the video frame, the less detail we see, in a way almost equal to the relationship between time distance and memory. Also, if the speed of the film is higher, the resolution of the image is lower; or, the more precisely the momentum (energy) of a particle is determined, the more uncertain its position (time), similar to Heisenberg's uncertainty relations.


Question: In explaining "resolution", did you establish that the channel information is getting smaller?


Answer: Of course not. The given answer with the probability p evaluates the "reliability" of the transmission of the signal A through the channel K. This sureness goes from "none" (p = 0) through increasing values to the ultimate "certain" (p = 1). What I = -log_b p expresses is something like the bottoms of 1 dl drinking glasses, where n is their depth, so that each has the same volume. The base of the logarithm (b > 1) determines the unit of information.

I explained the reliability information of the non-square channel matrix using the example "semaphore", and then that unusual type of matrix in more detail at the beginning of the script "Information Theory I". Its eigenvalue, of one or of a series of those matrices (channels), will tell us more about the transmitted original information of the source itself. Compared to the usual descriptions of (Markovian) channels, which prioritize the total classical information, the input plus the obstructions in between, the non-standard matrix (Q) targets quantum mechanics.

Question: I read it, but I don't understand. Can you explain that Q matrix to me?

Answer: It is simple, even more mundane than the square matrix describing a Markov chain (stochastic). A series of probability distributions can be represented by unit vectors in a Cartesian rectangular coordinate system, say Ox₁x₂. The coordinate axes form the angles α₁ and α₂ with the vector, with coefficients that are the cosines of those angles. No matter how many coordinates there are (not just two), the sum of the squares of such cosines is always one, which is convenient for dealing with probability distributions.

Well, the cosines of the angles α define an array that is a unit vector u, and the matrix Q transforms it into another unit vector v, with angles β of inclination towards the coordinate axes. The algebra of matrices (v = Qu) is respected, but vector transformations no longer need to be performed by square matrices, moreover not even by regular ones when they are square. Also, I proved that rotations can be reduced to sums of these matrices, which can behave as zero (Q'u = 0) despite non-zero coefficients.
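The direction-cosine representation is easy to illustrate. The sketch below (the names are mine) uses only the simplest norm-preserving map, a plane rotation, since the general Q of the text need not be square or regular:

```python
import math

def unit_vector(probabilities):
    """Direction cosines whose squares are the given probabilities,
    so the vector automatically has unit length."""
    return [math.sqrt(p) for p in probabilities]

def rotate(u, theta):
    """A norm-preserving map in the plane Ox1x2 taking the unit
    vector u to another unit vector v."""
    c, s = math.cos(theta), math.sin(theta)
    return [c * u[0] - s * u[1], s * u[0] + c * u[1]]

u = unit_vector([0.36, 0.64])   # cos(a1)^2 + cos(a2)^2 = 1
v = rotate(u, 0.3)

print(sum(x * x for x in u))    # ≈ 1.0: a probability distribution
print(sum(x * x for x in v))    # ≈ 1.0 still, after the transformation
```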

If you are more interested in this otherwise new probability algebra, I will gladly try to explain it. In the meantime, you can watch the slightly more professional contribution "Q Channel".

Random Process

Question: Is there some linear operator that would interpret the random motion of, say, a gas particle?

Answer: Not officially, but you have my contribution "Q Channel". The matrix Q can be a description of one step of just such a random process.

For example, if we have only one particle with velocity components, then we have the angles of the velocity to the coordinate axes and the cosines discussed in that appendix. The situation is similar with more particles of the same kind moving chaotically.

In these cases, Q is a one-step process of changing the direction of the velocity per component, but with collectively unchanged energy. The series α is formed by the angles of the velocities of each particle of the system to each of the coordinates, which are then transformed into the array β while they all vibrate in an unpredictable way. The problem of finding a single operator for such a complex random process seemed unsolvable, but there happened to be a matrix Q : α --> β. Its study is more of theoretical significance, due to the simplicity of the linear operator itself against the complexity of the specific phenomena.

For another way of studying random motion, also outside of official science, see my book "Physical Information", under the title "2.5 Discrete probability". There are a number of theorems there about the "random walk" of a point, especially about "physical information", for whose total quantity the law of conservation applies.
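A "random walk" of a point can be simulated in a few lines; a sketch (step count and seed are my own) showing how the walk spreads over an ever larger space:

```python
import random

random.seed(7)

def random_walk(steps):
    """Symmetric random walk of a point on the integer line."""
    position = 0
    for _ in range(steps):
        position += random.choice((-1, 1))
    return position

# The mean squared distance after n steps is n, so the root-mean-square
# distance grows like sqrt(n): the point thins out over more space.
trials = [random_walk(100) ** 2 for _ in range(5_000)]
print(sum(trials) / len(trials))  # ≈ 100 for n = 100 steps
```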

Random processes are not reversible; they lead to more probable and less informative states, and when the laws of conservation prevent that, the system is diluted, fragmented, and complicated as it organizes itself. This is the meaning of the irregularity of the matrix Q, the impossibility of a unique return after the step α --> β.

When it is possible to lose information through the walls of the vessel, then in such a development the chaoticity of particle oscillations will decrease with the decline of heat and temperature, and with the increase of entropy. Similarly, we find a "free running" point that expands into an ever-increasing space, reducing the specific information of a place while maintaining its total amount. The changes piling up on them are as irreversible as their past. Their one-way processes are their memories.


Question: Where do all those memories go that we forget?


Answer: Memories fade. Generally speaking, in the process of "forgetting", information passed through channels "disappears" by drowning in the multitude, never (in quantity) disappearing completely.

Let us think, for example, of the planet Earth in some 7-8 billion years, when the Sun grows to more than 250 times its size and swallows it, perhaps melting and burning everything existing before that. All the history of the living and non-living world that has ever resided on this planet, with all the lived events, emotions, fantasies, and particulars, will fuse with that fire. They will surrender to that vast amount of uncertainty, which may contain them all and tell us nothing definite.

When information travels through the channel K, or, we could say, when the process of transformation of past events into future ones is going on, it encounters disturbances. They are the noise of the channel, which settles on the primary information, suffocates it, and pushes it into oblivion for all those who would receive and decipher it at some point. I am recounting the content of "61.2. Ergodic theorem" from the script "Information Theory II", but also of a series of further sections in those scripts that supplement it. There are no long tracks, memory or forgetting, in disposable channels such as the previously described Q.

The uncertainty force is such that, though mild, it is persistent and over time overcomes everything. It defines for us the bottom of the micro-world of physics, makes it impossible to know the future, limits our physical space to the visible universe, and makes it difficult to see the past. Each perception of the past has its own barrier, built by uncertainty, with an objective masking of all events older than it, with an impenetrability analogous to that of predicting the future outcome of a coin toss.

Even "scientific" beliefs about what happened 13.7 billion years ago, when the universe was a hot soup condensed into a single point, perhaps with the same total information as today and zero entropy, could be only a fantasy, if the "uncertainty force" is significant to the laws of physics, if those laws arise and disappear precisely through the action of such a force.


Question: Do you have examples of cluttered environments (with information)?


Answer: In the picture on the right is Calhoun's utopia for mice, a secluded space where they could thrive completely without fear of predators or lack of resources. The mouse utopia soon became overpopulated and then began to degenerate.

In the worst cases of overcrowding, pregnant females had more abortions, and mothers lost track of their young. Other mice resorted to fighting when they were in direct contact with one another for an extended period.

The assumption was that the strange actions of the group of mice correlated with the increased population, and such relationships were called "behavioral sinks". Calhoun reported the results of an experiment with mice in a 1962 issue of Scientific American, and the concept (the behavioral abyss) soon attracted public attention. "Rats may suffer from crowding, but human beings can handle it", Freedman said of Calhoun's findings.

In the above example, one type of extreme was artificially created. There are no spontaneous paths to utopia for living beings. In an abundance of grass, carnivores will evolve alongside herbivores, not counting natural disasters and the conflicts among the herbivores themselves or among the plants. Synergies will follow some plant and animal mismatches as well. With the appearance of predators, their prey is encouraged to evolve into more capable forms, and then the predators develop too. Real utopia is "lived" by dead matter; it is the thing around which there is always "abundance".

However, overcrowding also happens to ordinary substance, from which it "defends itself" by escaping into "memory". Space and the limitation of the speed of light can be understood as a kind of defense. A signal that arrives from somewhere always comes from the past of the subject of perception. There is also delving into the "real" past: a body that flies by inertia follows its previous positions and behaves as if it remembers what has been, which affects its future behavior.

In increasingly strong gravitational fields there is an increasingly powerful breakthrough into all dimensions. That not all bodies are equally inclined to expand into memories (space-time) also refers to the differences between the micro and macro worlds. We believe that smaller particles enter parallel realities more easily, while complex ones are prevented from doing so by the laws of large numbers. The ratio of such departures is not the same, which resembles the ratio of the surface and volume of a smaller and a larger body, since the surface increases with the square of the length and the volume with the cube.

You may not have expected such examples, but their strangeness nicely demonstrates where the question posed to me can lead us. The answer to the question is "yes", there are many such examples, but the resolution for all of them is one-way, so to speak, a result of the general saving of information, of nature not liking equality (Equality).


Question: On what basis do you build the superiority of the "reciprocity" strategy?


Answer: Let's face it, these are games to win, and "superior" is a bit too strong a term for anything in the realm of "chance" (statistics or probability theory).

We are talking about the strategy "tit-for-tat", known from game theory, from "perception information" and from computer simulations. Each of them makes some contribution to answering the question.

The most popular (relatively speaking) tactic is "good for me and good for him" (win-win). It is usually explained with examples from business or politics. The buyer pays the seller for the goods, after which both parties are satisfied. Both feel "winning" when we consider this method as a tactic, some even as a strategy. Politicians like to call it "compromising", but you should know that it sits at the bottom of almost every top list of winners.

The "sacrifice to win" strategy (lose-lose) will almost always beat the win-win strategy. We actually feel this intuitively when, speaking of a slacker or a loser, we say "he won't drink cold water while he's hot" and urge him to do something uncomfortable for the sake of success. When in chess you are ready to sacrifice a piece to gain a better position or a checkmate, unlike your opponent, you will have an advantage. It is similar with borrowing (credit) in business. The game then includes rhythm, waiting, anticipation, memorization, and reinforcement.

Politically elected officials depend more on the less successful part of the electorate, so the political public is full of praise for those "do-gooder" tactics. Thus we discover that the nature of politics is not strategic, and then that the arrangement of the bare joining of good with good needs the coercion of the state, because otherwise it would be overpowered. The strongest strategy is self-sufficient; it does not need a guardian. We also find that when politics gets involved in the games of the major leagues, it breaks the toy.

The superior strategy (reciprocity) implies returning good for good and evil for evil. It absorbs everything lower, raises the vitality of the game, and draws strength from "defying" the natural flow of things (Win Lose); that is why it is difficult, and that is why we avoid it. It is invincible against other strategies, so if the opponent also accepts the game of the same league, let us wait "for the ignoramus to come to his senses". When the opponent believes in the value of tolerance, it is easier to overcome him, or at least in karma (not to mention that it did not help India against the English).

In a more urgent case, intimidation like the "roaring of lions" or the "brutality of the Nazis" will help, but not if the opponent is bold. If we continue to intimidate the brave, the roles will be reversed and we will find ourselves (doing evil against good) out of the league. It does not pay off: simulations show that consistent application of reciprocity holds up well even against stronger opponents (with more starting points). A moose with an average weight of up to 700 kilograms can be hunted by a lone wolf weighing 40-50 kilograms. Likewise, a larger herd does not mean a harder catch for the predator, but rather an abundance of food.

The strategy of "reciprocal measures" is all the more combative as it is consistent: measured, opportune and unpredictable. If a merchant sells more goods than they are worth, he loses, and when he continues to do so excessively, the store collapses. The reverse is also true, if he takes more money, he risks being recognized as a swindler and will be shunned. It happens that we later notice the fraud and for some reason, we don't report it to the person in question but we talk about it around. The weakness of an untimely reaction is obvious, and real fighters will recognize the value of unpredictability even without a theoretical conviction.

An excess of goodness (or of evil), just like a deficiency, harms victory. The measure of initiative as a phenomenon of "reciprocity" is highlighted by "perception information". It raises the importance of time in general, and therefore also in competition, though only the third of the mentioned "tools" for winning is its "favorite".

Because of the power of "uncertainty", games in general, and competitions in particular, are matters of "vitality". That is why important applications of game theory first entered the economy, and then warfare. Von Neumann's "minimax theorem" (make moves so that the opponent does not have the best options) offers perhaps a better strategy, but it is inapplicable in a deficit of time or a surplus of possibilities, which are the most common situations in reality.
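
Von Neumann's maximin idea, choosing the line of play whose worst case is best, can be shown on a tiny zero-sum game with a saddle point (the payoff matrix is my invented example):

```python
# Zero-sum game: entry M[i][j] is the row player's gain and the column
# player's loss. The row player maximizes the row minimum (maximin);
# the column player minimizes the column maximum (minimax).
M = [[3, 1],
     [4, 2]]

maximin = max(min(row) for row in M)                 # best worst-case row
minimax = min(max(M[i][j] for i in range(len(M)))    # best worst-case column
              for j in range(len(M[0])))

# The two coincide here, so the game has a saddle point in pure
# strategies with value 2: neither side can do better by deviating.
print(maximin, minimax)
```

For games without a saddle point (like matching pennies) the equality of the two values only holds once mixed strategies are allowed, which is exactly what von Neumann's theorem supplies.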

The timeliness, tempo, and tactical maneuvers of "reciprocal responses" clearly require great skill, but the greatest weight of strategies in general comes from the need for unexpectedness (to the opposing side). There is no force without "the force of uncertainty", so routines alone will not guarantee victory. That is why the evolution of better and better hunters in the nature around us is sometimes also a path to the greater intelligence of the species, even though the "fitting in of the fittest" does not favor it in itself.

Two players of equal playing strength, even with the strategy of reciprocity, will first reach equilibrium. A compromise is then the fairest outcome and has a good chance of lasting the longest. Otherwise, the options are the subordination of one side and the domination of the other, or lies and fraud. This effect from computer simulations is also visible in the world of wild animals.


Question: You are talking about equivalence, not equality of information and action. Why?


Answer: Actions contain the least physically free, or mobile, amount of information. A subset of them is within interactions with physical space-time and is equivalent to quanta. But I don't believe that these should be the only types of information.

I emphasize "should" instead of "must" because I'm still floundering with the definition of "information." One of the dilemmas is about the smallest packages. Namely, if they are the least amount of uncertainty, and I do not acknowledge the emptiness, does it mean that by subtracting from their uncertainty I gain certainties? I take it that the answer is yes, and with it goes the information inside the quantum. They themselves are not physically active, but they are abstractly correct. Thus, the broader theory of information is not only a subject of physics.

To the question, then, of what keeps "abstract" information so firmly in the quantum of action, the answer is again the principle of minimalism. The lack of information is "attractive" to them, at least in the sense that, in the abstract space of probabilities, we could speak of an aspiration toward more probable outcomes. Uncertainty has the strong outlines of physical forces, and since it made the fabric of the universe together with information, every physical property becomes explainable by them. Thus, we arrive at equivalence.

On the other hand, the situation of the parts, abstract information that would "compete" to occupy its minima and form a quantum, is like a competitive game in which they reach a "Nash equilibrium". That is a profile of player strategies in which no player can achieve more by unilaterally changing their own strategy. Nash's theorem states that every game with a finite number of players and strategies has at least one such equilibrium point (in mixed strategies), though not necessarily the best outcome for every participant in the game.
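
A minimal sketch of the equilibrium check: enumerate the cells of a small two-player game and keep those from which neither player gains by a unilateral change. The payoff matrices below are a standard prisoner's-dilemma example, my choice for illustration, not taken from the text:

```python
# Pure-strategy Nash equilibria by brute force. A[i][j] and B[i][j] are
# the payoffs of players 1 and 2 when they play strategies i and j.
A = [[3, 0],   # player 1: row 0 = cooperate, row 1 = defect
     [5, 1]]
B = [[3, 5],   # player 2: column 0 = cooperate, column 1 = defect
     [0, 1]]

def pure_nash(A, B):
    eq = []
    for i in range(len(A)):
        for j in range(len(A[0])):
            best_row = all(A[i][j] >= A[k][j] for k in range(len(A)))
            best_col = all(B[i][j] >= B[i][k] for k in range(len(A[0])))
            if best_row and best_col:   # no unilateral improvement exists
                eq.append((i, j))
    return eq

print(pure_nash(A, B))
```

In this example the only equilibrium is (1, 1), mutual defection, paying each player 1, while mutual cooperation would pay 3 each: a Nash equilibrium is self-enforcing, but not necessarily the best outcome for all.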

That, I would say, is why there are quanta, that is, why there is no such information outside of quanta: because of the laws of conservation of physical information and, hence, its finite divisibility. Nash's existence proofs require a finite set of strategies, but the very concept of his equilibrium does not.

The subject of game theory is the mathematical forms in processes of rational decision-making under conditions of conflict and reconciliation of the interests of game participants, with elements of risk and uncertainty. The strategy sets of individual participants in a Nash equilibrium may be different, and the equilibrium may be irrational from the point of view of third parties, but this is exactly what makes it suitable for this information theory. The part about "rationality" also refers to the memory aspect of association, and the part about "conflict" to vitality.


Question: Does this mean (mentioned earlier) that movement is also a type of perception?


Answer: Yes, of course, movement is the essence of reality; it is physics. We see it through memory, in at least two positions at two moments. All bodies actually "live" at once in several moments, in intervals at least as long as light needs to reach from their one end to the other, as if they "remember".

We believe that light (a photon) has no time of its own (proper time), although we sense that this is not entirely true. Just as the content of a fairy tale you might read is, according to (my) "information theory", not mere fiction, so it works the other way around: if we have information of movement, there is some movement. Moreover, light of different colors and wavelengths travels at different speeds through a transparent medium like glass or water (as opposed to a vacuum), which is why it splits in a prism. Furthermore, light can transfer its "velocities" to electrons and change their impulses, and vice versa: electrons can descend to lower energy orbits in the atom by emitting photons.
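
The claim about different speeds in a medium can be checked numerically: in glass the refractive index n is slightly larger for blue light than for red, so the speed v = c/n is smaller for blue. The index values below are typical approximate figures for crown glass, my assumption for illustration:

```python
# Dispersion in glass: v = c / n, and the index n depends on wavelength.
c = 299_792_458            # speed of light in vacuum, m/s (exact by definition)
n_red  = 1.513             # approximate index of crown glass for red light
n_blue = 1.532             # approximate index of crown glass for blue light

v_red  = c / n_red
v_blue = c / n_blue

# Red travels faster than blue inside the glass, so the two colors
# refract at different angles and a prism splits white light.
print(f"v_red  = {v_red:.4e} m/s")
print(f"v_blue = {v_blue:.4e} m/s")
```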

So, although the perception of the movement of something timeless (a photon) can be a consequence of the observer's own movement through time, and remains a fiction in classical physics, here it becomes a "reality" that exerts an effect on the subject. This gives the relativity of movement a deeper meaning.

Whether an object is moving towards us or we are moving towards it, its relative energy is increased compared to the one it has at rest; we perceive a slower flow of time and shorter units of length in the direction of movement. These, so far only physical, phenomena we now also look at from an informatic point of view. However, the moon remains where it was until I look at it, because reality is given to it not only by my perception but by the totality of the perceptions of the moon's surroundings, in contrast to a particle of the micro-world, poor in observers and communications.
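
The three effects named above (increased relative energy, slower time, shorter lengths) all follow from a single Lorentz factor, γ = 1/√(1 − v²/c²). A short numerical check at v = 0.6c, a speed chosen only because it gives round numbers:

```python
import math

# Lorentz factor at v = 0.6 c: gamma = 1 / sqrt(1 - 0.36) = 1.25 exactly.
beta = 0.6                       # v / c
gamma = 1 / math.sqrt(1 - beta**2)

proper_time   = 10.0             # seconds elapsed on the moving clock
proper_length = 1.0              # meters of the moving rod, in its rest frame
rest_energy   = 1.0              # rest energy, in units of m*c^2

print(gamma)                     # the factor itself, 1.25 for beta = 0.6
print(gamma * proper_time)       # dilated duration seen by us: 12.5 s
print(proper_length / gamma)     # contracted length seen by us: 0.8 m
print(gamma * rest_energy)       # relative (total) energy: 1.25 m*c^2
```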

A similar example is given by the flow of electric current through a conductor (Current). It is a wave carried by a much smaller movement of the electrons themselves, like a water wave whose generating particles have a much smaller range. Actions produced by information about opposite movements in two parallel conductors will repel them, whereas same-direction movements will attract them. We know that currents induce magnetic forces, which then cause such differences; now let us get to know this phenomenon from the informatic side.
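
The standard magnetic side of this phenomenon is Ampère's force law for two long parallel wires, F/L = μ₀I₁I₂/(2πd): attractive for same-direction currents, repulsive for opposite ones. A minimal numerical check (the one-ampere, one-meter values are my illustration):

```python
import math

MU0 = 4 * math.pi * 1e-7      # vacuum permeability, N/A^2

def force_per_meter(i1, i2, d):
    """Ampere force between two long parallel wires, N/m.
    Positive = attraction (currents in the same direction),
    negative = repulsion (currents in opposite directions)."""
    return MU0 * i1 * i2 / (2 * math.pi * d)

same     = force_per_meter(1.0,  1.0, 1.0)   # parallel, same-direction currents
opposite = force_per_meter(1.0, -1.0, 1.0)   # antiparallel currents

# Two 1 A currents 1 m apart: 2e-7 N per meter of wire, the figure
# historically used to define the ampere.
print(same, opposite)
```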

Just as geometry can treat, equally accurately, phenomena as forms that algebra understands as values, let us try to see that informatics, together with physics, can reveal its faces in the domain of movement. As on a workbench, where we turn an object with different tools and find that we can obtain many results this way and that, we notice that there is a greater perception of movement in the case of opposite electric currents through parallel conductors than in the case of same-direction currents.

In the attachment, this phenomenon of a repulsive excess of information and an attractive deficiency is interpreted on the electrons themselves. As long as they are mutually at rest they may attract each other, but as soon as they start moving, they move in opposite directions and repel each other. Repulsion is therefore the stable state of electrons, and rest (if it is at all possible) is unstable. Thus, looking at electrons, we notice the principle of minimalism in the mutual perception of their movement.


Question: Do you still cling to that idea of "permanent creation" of the universe?


Answer: Yes, that (hypo)thesis seems more and more convincing to me. The idea that the laws of the universe are still emerging, that the development of the cosmos even in that domain is not a story finished once and for all, holds up surprisingly well so far. I would call it crystallization rather than permanent creation.

Crystallization, in its original meaning, is the solidification of a liquid substance into a highly structured solid whose atoms or molecules are arranged in a well-defined three-dimensional crystal lattice. Here I would abuse that term for the "hardening" of the rules of behavior of our cosmos.

When the premise of this theory is that information, as a particularly defined amount of uncertainty, builds space, time, and matter, spontaneously depositing its emissions, then order is created by the consistent reduction of disorder. With the release of information comes lawfulness. It is a memory process: an increase in the size of the universe followed by a decrease in specific information (Growing).

With the evolution of lawfulness, starting about 13.7 billion years ago with the "great lawlessness" which in cosmology we call the "Big Bang", the universe could enter into various processes of the emergence of the laws of physics. That is how things stand with this theory of mine, for now. How we found ourselves in the laws of nature known today is a question we may never have the ultimate answer to, like the details of the crystal formation of a snowflake, or the exact outcome of a coin toss. But who knows?


Question: Are you saying that the "Big Bang" would have followed a different path some other time, that the laws of nature might have been different?


Answer: That's right, sticking to this story consistently. Starting from such a moment, from the "Great Chaos", with the same (amount of) information that the spatially huge universe known today has, the universe began to develop from a negligibly small volume and an incomprehensibly high density (of information). In the meantime, certainties were deposited, and space, time, and matter were formed from them.

This is one aspect of "minimalism", strangely unknown to modern science, although the discoveries of classical information theory have already scratched its surface. In the "Greatest Uncertainty" there are actually certainties lurking, which grow from insignificant beginnings into arrangements familiar to us, or similar ones. They are driven by the principle of the more frequent realization of more probable outcomes, that is, the tendency to develop into less informative states. At the same time, to emphasize again, I do not consider the concept of thermodynamic "entropy" to be fully understood in modern physics. We agree that the entropy (of substance) increases, but I believe that information decreases.

Space-time is getting bigger, and specific information (per unit of volume) is getting smaller. While there is less and less uncertainty, certainties grow in ever-thicker layers of memory. Memory directs the further evolution of the present, reduces its options, and pushes the dictate profiled in it ever deeper. Other options took the initial universe in other directions, into parallel dimensions of space-time, so they too are flaring up. All of that, at least the 6-dim space-time universe, is getting bigger.

Ergodic theorems, which in my scripts (Information Theory II) refer to information transmission channels and speak of the jamming of the sent code by growing noise, indicate the same. As the transmission process grows longer, we become informed about the increasing mean information (the amount of uncertainty) of the increasingly old message. Locally, we know that the channel stifles the original message by adding interference to it; but on the gigantic scale of the whole universe available to our perceptions, measurements, and interpretations, holding that present-day certainty was constantly present in its course, we supposedly have no other option but to agree to everything that is "obviously offered" to us.
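
The picture of a message drowning in accumulating channel noise can be sketched with a binary symmetric channel: after n independent passes with crossover probability p, the effective crossover is (1 − (1 − 2p)ⁿ)/2, and the uncertainty (binary entropy) of the received bit climbs toward the maximum of 1 bit. The cascade and the parameter value are my illustrative choice, not a reproduction of the scripts cited above:

```python
import math

def h2(p):
    """Binary entropy in bits: the uncertainty of a bit flipped with probability p."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def crossover_after(n, p):
    """Effective flip probability after n passes through a BSC with crossover p."""
    return (1 - (1 - 2 * p) ** n) / 2

p = 0.1                                   # single-pass noise level (assumed)
entropies = [h2(crossover_after(n, p)) for n in range(1, 8)]

# The uncertainty of the "older and older" message grows monotonically
# toward 1 bit: the channel gradually stifles the original message.
print([round(e, 3) for e in entropies])
```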

In that "obvious truth", the universe was created by the explosion of a super-dense fluid, and everything comes into its own. All findings such as forensics, observations, and theorizing of cosmology, the whole of physics, theorems of the mathematical theory of information and communication — will confirm the findings. But the same theorems will also tell us that all of this could be a "big deception", that is the settling, the settling did consistently and perfectly in its own way.

I do not consider these possibilities, the different interpretations of the "one and only truth" well known to mathematicians and to those who apply them, to be faulty knowledge, but a confirmation of the principles of (my) information theory. There are objective coincidences, options of states and also of processes, but there are also "objectively correct" different representations of the same mathematical truths. They are not all possible in a single 4-dim universe like our space-time, but in its various outcomes.

We have the ability to think from a continuum of ideas (Robots), although the moments of our existence, together with all of that 4-dimensional universe, are at most countably infinite. Just as there could be a continuum of many options of the 6-dim universe, which I will not prove again here, this is a testimony, or a window opened to us, into "all" the laws of nature that are "somewhere" realized.


Question: Our past determines our future?


Answer: Yes, that is the definition of memory stated here: something from another time, more precisely from our past, but such that it can sufficiently influence the current present and the subsequent future. So the insinuations of the question are no news here.

However, a more interesting observation from "deposition" is that, starting from the "Big Bang", the universe can move along very different paths of development. So there is a huge expansion of possibilities and the emergence of different physics in the further laterals. Even at smaller "widths of possibilities", say at small distances from the "current memory", all the probabilities of the "same" events, measured from the actual and the alternative branches, do not have to be equal.

If the options unlikely from here remained so even after a jump into them (an accidental realization of such an option), the present would have a discontinuity, with easy jumping back to the "main axis" of events. It would also be unprincipled, from the probability point of view, for an unlikely outcome (the return) to happen so easily. Such parallel realities would not be consistent with "information theory", which is my main criterion. To keep it short: going further into the "lateral times", we would little by little reach physics less and less familiar to us.

Yes, our past determines our future — that is the answer to the question — more or less, depending on what we look at and from where.


Question: Are there larger timeless physical systems?


Answer: They exist as the present, or as the quantum entangled. My "information theory" has long considered duration to be the primary phenomenon, and lengths secondary. "News" presented a second time is no longer news. On the other hand, everything we see around us (people, buildings, nature) is not primary news of our present, but secondary news from the past.

Light has a finite speed (c ≈ 300,000 km/s) and takes some time to arrive, so space proves to be just another kind of memory depot. From it we read how important the past is to the present, which, by the way, is the only one in real time, and then to the future. However, the present depends on the observer, starting from the fact that events simultaneous for some are not so for others. This "weirdness" was discovered at the beginning of the 20th century with the theory of relativity, but it is far from being fully understood, in the case of quantum entanglement, for example.
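
That "events simultaneous for some are not so for others" follows directly from the Lorentz transformation Δt′ = γ(Δt − vΔx/c²): two events with Δt = 0 but Δx ≠ 0 acquire a nonzero time separation in a moving frame. A minimal numerical check (the velocity and spatial separation are my chosen illustration values):

```python
import math

c = 299_792_458.0                # speed of light, m/s
v = 0.6 * c                      # relative speed of the second observer
dx = c * 1.0                     # events one light-second apart in space...
dt = 0.0                         # ...and simultaneous in the first frame

gamma = 1 / math.sqrt(1 - (v / c) ** 2)      # Lorentz factor, 1.25 for v = 0.6c
dt_prime = gamma * (dt - v * dx / c**2)      # time separation in the moving frame

# For the moving observer the two events are 0.75 s apart:
# simultaneity is relative to the observer.
print(dt_prime)
```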

When one particle decays into two or more, those particles are then simultaneous, and if (only recently) we notice that they are quantum entangled, then, due to the tininess of the objects we observe and the novelty of that field of physics itself, it will take us a long time to connect the two: to notice, and to accept, simultaneity as a phenomenon related to quantum entanglement.

The fact that Alice and Bob (from the quantum mechanics experiments) do not seem simultaneous to us while synchronizing indeed seems strange, unless we realize that our perception is not the same as their experience.


February 2023 (Original ≽)