Question: What is the connection between information perception, abilities and limitations?

Answer: This is one of the basic questions of (my) information theory, and it is worth trying to explain it in an extremely simplified, though I hope not banal, way.

Ability, or "intelligence" (`I`), is a quantity that increases with the number of possibilities the subject
possesses in a given situation, i.e. with its vitality or freedom (`S`), and decreases with limitations, that is, with "hierarchy"
(`H`). The simplest form of this dependence is linear, `I = S : H`, and hence `S = I × H`.
To `n` = 1, 2, 3, ... essentially different temptations (a finite number is assumed) information theory attributes
different freedoms `S`_{1}, `S`_{2}, ..., `S`_{n} and calls their sum `S` = `S`_{1} + `S`_{2} + ... + `S`_{n} the information of perception.

The explanation of the formula of perception information (`S`) shows that it applies both to a participant coping with
a situation and to the measure of the "quantity of options" of the overall situation (from the point
of view of the given subject). Moreover, it is a (new) definition of information with even more general possibilities of
interpretation. In particular, it turns out that Shannon's information (1948) is its special case, and, as we know,
Hartley's information (1928) is in turn a special case of Shannon's.

I recently cited simple examples of perception information in matters of popularity (Popularity) and economics (Investment) in successive answers.

Question: Is physical action also an example of perception information?

Answer: Yes, as in the book “Space-Time”,
page 52, subtitle "1.2.6 Lagrangian". The quantum of action (Planck's constant `h` = 6.626 × 10^{-34} m^{2}kg/s)
corresponds to the smallest possible product of a change of energy and time, that is, of a change of momentum and path
(`Et = px = h`). In the rectangular Cartesian coordinate system `Ox`_{1}`x`_{2}`x`_{3}`x`_{4},
where light passes the imaginary (`i`^{2} = -1) distance `x`_{4} = `ict` at speed
`c` ≈ 300,000 km/s during time `t`, the perception information becomes the mentioned Lagrangian, formula (1.103).
The quantum of action is also of the order of magnitude of the product of the uncertainties of energy and time
(`∆E⋅∆t` ≈ `h`), that is, of momentum and distance (`∆p⋅∆x` ≈ `h`), Heisenberg's famous
relations. Greater action corresponds to greater perception information.

From the above formulas, in the case of equal actions, we find that energy and momentum are related as distance and
time (`∆E : ∆p = ∆x : ∆t`). In the case of different actions we have different perception information;
let `S`_{1} = `E`_{1} ⋅ `t`_{1} and `S`_{2} = `p`_{2} ⋅ `x`_{2}, so that `S`_{1} > `S`_{2}.
Then, since work (energy expended) equals force times distance (`∆E = F ⋅ ∆x`), and force is the
change of momentum over time (`F = ∆p : ∆t`), we find `F`_{1} ⋅ `∆x`_{1} ⋅ `∆t`_{1} >
`F`_{2} ⋅ `∆x`_{2} ⋅ `∆t`_{2}.
By the law of conservation of space-time (considered in the answer Inversion), we find
`F`_{1} > `F`_{2}. It is an interesting and simple conclusion that greater perception
information corresponds to greater force.
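The proportion and the force comparison above can be checked numerically; a minimal sketch with my own illustrative values (not taken from the text):

```python
# With equal actions S = dE*dt = dp*dx, energy and momentum relate as
# distance and time: dE/dp = dx/dt.
dE, dt = 6.0, 2.0          # changes of energy and time (arbitrary units)
S = dE * dt                # the action, i.e. the perception information
dx = 3.0                   # chosen displacement
dp = S / dx                # momentum change forced by the equal action
assert abs(dE / dp - dx / dt) < 1e-12   # dE : dp = dx : dt

# With the space-time "volume" dx*dt conserved, larger action means larger force,
# since S = F * dx * dt implies F = S / (dx * dt).
def force(S, dx, dt):
    """Force implied by action S spread over displacement dx and time dt."""
    return S / (dx * dt)

assert force(12.0, 3.0, 2.0) > force(6.0, 3.0, 2.0)   # S1 > S2 implies F1 > F2
```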

Question: Why does pairing a higher ability with a larger constraint, and a smaller with a smaller, give a higher value of perception information than the opposite pairing (larger with smaller and smaller with larger)?

Answer: Intuitively, because such a pairing speaks of greater vitality, combativeness, use of the subject's options or actions; we sacrifice less and get more. Further analysis of the same confirms the correctness of the definition of "perception information" and its agreement with intuition. By the way, in addition to confirming such a rough assessment, it turns out that a kind of "conservation law" also applies to this new type of "information".

In mathematical analysis, let us first observe the perception information with two components, `S = xa + yb`. Let
the first factors be the intensities of the subject's engagement in solving two problems in a row, and let the second
factors be the corresponding intensities of the problems themselves. Let the sums of the subject's and object's factors
be constant; putting units for these constants (`x + y` = 1, `a + b` = 1) loses no generality. The derivative of the
function with respect to the subject's first factor, `S' = dS/dx = a - b`, is positive when the object's first number
is greater than the second (`a` > `b`). This means that the perception information `S` then
increases as the first argument (`x`) increases, as opposed to the reverse case (`a` < `b`), when the
perception information decreases. Algebra gives the same result. We write the given situation as
(`x - y`)(`a - b`) > 0, which is true when the first numbers are greater than the second. Multiplying out,
`xa + yb` > `xb + ya`, which means that the perception information is indeed larger if we multiply the
larger by the larger factor and the smaller by the smaller.
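Both the derivative argument and the algebraic one can be checked directly; a small sketch with illustrative numbers of my own choosing:

```python
# Two-component perception information with x + y = 1 and a + b = 1
def S(x, a):
    """S = x*a + y*b with y = 1 - x and b = 1 - a."""
    return x * a + (1 - x) * (1 - a)

a = 0.8                       # first object value larger than the second (a > b)
# dS/dx = a - b > 0, so S grows as x grows:
assert S(0.7, a) > S(0.3, a)

# Algebraically: (x - y)(a - b) > 0 implies xa + yb > xb + ya
x, y, b = 0.7, 0.3, 1 - a
assert x * a + y * b > x * b + y * a
```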

In general, when `S = a_{1}b_{1} + a_{2}b_{2} + ... + a_{n}b_{n}`
is the perception information from some `n` pairs of subject and object factors, the same conclusion follows:
by the rearrangement inequality, the sum of products is greatest when both sequences are ordered in the same way.

A special way of proving it would be to observe the series of values of the subject and the object as vectors
**a** = (`a_{1},...,a_{n}`) and **b** = (`b_{1},...,b_{n}`); their scalar product
`S` = **a** ⋅ **b** is then greatest when the components of the two vectors are similarly ordered.
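This `n`-component claim is the rearrangement inequality; a quick check with illustrative values of my own:

```python
# The similarly ordered pairing of two sequences maximizes the sum of products.
a = [5, 3, 1]
b = [4, 2, 1]
S_sorted = sum(ai * bi for ai, bi in zip(a, b))              # larger with larger
S_reversed = sum(ai * bi for ai, bi in zip(a, reversed(b)))  # larger with smaller
assert S_sorted > S_reversed   # 5*4 + 3*2 + 1*1 = 27 > 5*1 + 3*2 + 1*4 = 15
```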

Question: Do you have any other derivations of Bernoulli's equation besides the one in the answer „Fluid“?

Answer: Yes, of course. Bernoulli's equation can be related to
"perception information" in the following way. It equals the action `S = ∆p_{x} ⋅ ∆x`, here the product
of the change of momentum and the path.

In information theory, the law of conservation also applies to 4D (four-dimensional) space-time (Inversion),
so on the orthogonal axis `x` ⊥ `v` (perpendicular to the direction `v` of the system's movement) we have a relativistic
slowing of time, but no change in momentum. In other words, the lateral force (`F`_{x}) is reduced, that
is, the moving system "sucks". This slowing of time is proportional to the so-called gamma coefficient,
γ = 1/√(1 - `v`²/`c`²), which for system speeds low relative to the speed of light in vacuum is approximately
γ ≈ 1 + `v`²/2`c`².
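The low-speed approximation of the gamma coefficient is standard special relativity; a sketch with an illustrative speed chosen by me:

```python
import math

def gamma(v, c=299_792_458.0):
    """Lorentz factor 1 / sqrt(1 - v^2/c^2)."""
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

v, c = 3.0e6, 299_792_458.0         # v is about 0.01c, "relatively low"
approx = 1.0 + v**2 / (2.0 * c**2)  # gamma ~ 1 + v^2/(2c^2)
assert abs(gamma(v, c) - approx) < 1e-8
```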

I have as many such examples as you want, maybe more than there are (known to us) ways of deriving, say, the "law of sines".

Question: How is it possible to have so much proof of Bernoulli's equation at once?

Answer: False theories may seem consistent, but correct ones always are. We are often surprised by these connections, for example between classical and coordinate geometry, or between both and probability theory. At one time I was also surprised by the explanation of Bernoulli's fluid equation (Fluid) using the theory of relativity, when it first arose from the "paradox" (two passing trains) that was supposed to "prove" that the theory of relativity was incorrect.

Question: Why so much freedom in generalizing "perception information", even on "probability of perception"?

Answer: It comes from the freedom to define probability itself. It is
known that Kolmogorov (1933) proposed axioms for
the previously known "naive probability theory", which transformed it into a mathematical theory. He assumed some (arbitrary)
universal set of random events Ω and a function (mapping) `P` from Ω to the set of real numbers, determined so that for all
subsets `A`, `B` ⊆ Ω the following three axioms apply:

- non-negativity: `P`(`A`) ≥ 0, for each random event `A`;
- mutual exclusivity: `P`(`A`⋃`B`) = `P`(`A`) + `P`(`B`), when `A`⋂`B` is the empty set;
- completeness: `P`(Ω) = 1.
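The three axioms are easy to verify on a finite toy space; a minimal sketch with an assumed two-outcome example of my own:

```python
# Toy probability space: a fair coin, with P defined on a few subsets.
omega = {"heads", "tails"}
P = {frozenset(): 0.0,
     frozenset({"heads"}): 0.5,
     frozenset({"tails"}): 0.5,
     frozenset(omega): 1.0}

A, B = frozenset({"heads"}), frozenset({"tails"})
assert all(p >= 0 for p in P.values())                    # non-negativity
assert A & B == frozenset() and P[A | B] == P[A] + P[B]   # additivity for disjoint events
assert P[frozenset(omega)] == 1.0                         # completeness
```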

The value of `P`(`A`) is called the probability of the event `A`. However, if we replace some of these
axioms with correspondingly different claims, we can define, say, "axioms of uncertainty".

For example, it is known that repeated "news" is no longer news. Applied to electromagnetic radiation, this means that in
Planck's formula `ε = hf`, where the frequency of the wave-particle of light is `f` = 1/`τ` and `τ`
is the period of oscillation of the photon's energy `ε`, all oscillations are represented by different events. The group
of these events makes one quantum of light with one energy, which is also an expression of some different outcomes. The corresponding
"axioms of uncertainty" would contain Kolmogorov's second axiom, but not the third, especially if we wanted these
"uncertainties" to harmonize with Heisenberg's relations
(`ετ` ≥ `h`).
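A quick numeric sketch of Planck's formula with standard constants (the frequency is an illustrative value of mine for visible light):

```python
# Photon energy from Planck's formula
h = 6.626e-34      # Planck's constant, J*s
f = 5.0e14         # frequency of visible light, Hz (illustrative)
tau = 1 / f        # period of one oscillation, s
epsilon = h * f    # photon energy, J

# One period and one quantum of energy sit exactly at the bound eps * tau = h:
assert abs(epsilon * tau - h) < 1e-45
```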

By the way, we notice that the possibility of these generalizations comes from the duality of probability and information, which I have written about several times and need not repeat (enumerate) here; this is just a confirmation of their real, deeper connection. In particular, probability is the cause (or consequence) of information, and vice versa, and in addition there is a one-to-one mapping (bijection) between the two.

Question: How can we distinguish, within this duality, probability from information in the coupling of perception information?

Answer: It is hard, but possible. When you form the perception information
`S = ax + by + cz + ...`, the factors in the summands are the intensities of the uncertainties of the two subjects in
conjunction, over a series of given independent events. Both are some "quantities of options", but the details of their
definitions and ways of measuring differ. Consider this in situations A and B.

A: The values of the first sequence (`a, b, c, ...`) should belong to the vassal, and the values of the second
(`x, y, z, ...`) to the ruler, over a sequence of independent situations concerning the freedoms of the participants in the
alliance, their rights and duties, the performance of jobs, the execution of orders, or behavior within different social classes.

When the coefficients of the subordinate and the superior respectively represent information, we expect the subordinate in command situations to have fewer options (right of refusal, or choice) than the superior. It is similar in socializing with the upper class, but the other way around with the lower, where we assume the subordinate feels "at home" and more comfortable. In general, we multiply the smaller coefficients by the larger ones, and then the perception information takes a smaller value. This means less communication, less vitality of the connection, or less conflict.

Representing "quantities of options" with probabilities, the same example takes on a similar meaning. But, a larger number of
options then becomes a smaller chance for an individual, so the subordinate opposes the given will of the superior with a higher
probability. In a series of factors, the larger ones are again multiplied by the smaller ones, and the smaller ones by the larger
ones, and the total sum of products (`S`) is again minimal, however with the meaning of the coupling probability.
Subjects will not unite spontaneously, get closer, but on the contrary.

B: When subjects of similar inclinations socialize, the numbers of options around the individual (independent) events of their
interest will be positively correlated (increasing and decreasing together), so their sums `S` will be higher. The first
case, interpreting the "quantity of options" as information, tells us about the greater communication of such subjects,
the greater vitality, that is, tension, of their association. The second case, interpreting the "amount of options"
as probability, tells us how much more difficult it is to break their link.

Results A and B are examples of couplings of lower and higher intensity, and case B can be seen in two simple answers, Popularity and Investment. Confirmation of what has been said here can also be found in complex societies which, due to greater tension (vitality), first split into separate related groups (nations, parties, ideologies, interests). As the tension increases, the splitting of the structure continues, as the initially slight differences of the "homogeneous environment" become significant.

Question: Is there a situation in the microworld of physics when the principle of least action does not apply?

Answer: Yes, with the reservation just stated: these are situations of "tension", when a higher probability (information) is set against a higher, and a lower against a lower. Such are, for example, the interactions of photons of appropriate energy with a substance. For example, in the photoelectric effect, the energy of the electron emitted from the metal is influenced by the color (wavelength) of light and not by its intensity.

An interesting example is the Heisenberg microscope: to determine the position Δ`x` of a particle more precisely, we shoot
it with light of a smaller wavelength λ, whereby the uncertainty of its momentum Δ`p` increases in accordance with the
approximate formula Δ`x`⋅Δ`p` ≥ `h`/4π. The uncertainties of the photon (λ) by which we look and of the affected
electron (Δ`x`) are synchronized. However, less position uncertainty means a higher probability density of finding
the particle at given places, so in the summation of the perception information we have a pairing of higher with higher
coefficients, and thus of smaller with smaller, which means greater "vitality" of the situation and everything above
mentioned that arises from it.
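The trade-off in the microscope can be sketched numerically; the position uncertainties below are illustrative values of my own:

```python
import math

# Heisenberg microscope: a smaller wavelength gives a smaller position
# uncertainty dx, which forces a larger momentum uncertainty dp.
h = 6.626e-34   # Planck's constant, J*s

def min_dp(dx):
    """Smallest momentum uncertainty allowed by dx * dp >= h / (4*pi)."""
    return h / (4 * math.pi * dx)

dx_coarse, dx_fine = 1e-9, 1e-12   # looking with longer vs shorter wavelength
assert min_dp(dx_fine) > min_dp(dx_coarse)   # sharper position, blurrier momentum
```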

This observation, of course, has not yet been recognized in physics (except hypothetically, in my interpretation of information theory applied to physics), which is why I emphasize it so much. In general, the effects of "excessive" perception information are often a prerequisite for otherwise picky communications. Note that this release of information in situations of greater "tension" and communication occurs due to the need for resolution, so that it is actually in line with the principled minimalism of information. In other words, such situations are not in real conflict with the "principle of least action", but on the contrary, although at first glance it might look different.

Question: How do you define "objective uncertainty"?

Answer: Relatively. The information of perception speaks of communication, of the interactions of pairs, subject and object, two subjects, or two objects. And we communicate because we don’t have everything we need, nor can we finally achieve that.

For example, you play chess and you don't know what your opponent is planning, or you play cards and you don't know what the others hold. That is objective, but relative, uncertainty. Even if you find out what the first opponent has, you become an additional unknown for the next one. By taking the uncertainty from the first, by getting information about the ideas or cards he has, you increase your own uncertainty for the next.

The amount of information that you can transmit is the capacity of uncertainty that you can absorb and then pass on as information to another person; part or all of it is passed on to a third party, and so on. Information contains personal ability, a type of intelligence and power to act. For example, a hunter hunts prey with cunning or a trap, from which the prey could learn something, although it does not have to.

These are interactions, or other expressions for the conveying of information or uncertainty. I also generalize them to the drawing of numbers from a lottery drum. There is more about this in the answer Free Will.

Question: Can you explain to me once again the Dunning-Kruger effect, from the point of view of information theory?

A student, supposedly preparing a paper in psychology, asks me.

Answer: The Dunning–Kruger effect is a hypothetical cognitive bias stating that people with low ability at a task overestimate their own ability, and that people with high ability at a task underestimate their own ability. So says Wikipedia.

Let us now look for an explanation in my book „Information of Perception“,
subtitle "2.1 Liberalism", Example 2.1.1, page 34 (here in the pictures on the left). We look at two persons `A` and
`B` separately, and then as a group `AB`, on a test of only five questions. On each question, the
person who lacks the knowledge (gives an incorrect answer ⊥) is more dominant than the person who has it (gives the
correct answer ⊤), so in their confrontation, on the first and second questions (in the picture from left to
right), the compromise takes the wrong answer as the common one. It turns out that the community has fewer correct answers than
either of the equal individuals.
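A toy simulation of that five-question test; the answer patterns below are my own illustrative assignment, not the book's figures, and I assume the rule that on every disputed question the ignorant side dominates:

```python
# On each disputed question the person without knowledge dominates,
# so the compromise adopts the wrong answer.
A = [True, True, True, False, False]   # is A's answer correct, per question?
B = [False, False, True, True, True]   # is B's answer correct, per question?

# The group answer is correct only where both agree on the correct answer.
group = [a and b for a, b in zip(A, B)]
assert sum(group) < sum(A) and sum(group) < sum(B)   # 1 correct vs 3 and 3
```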

This example easily extends to a set of more equal individuals, with the same conclusion: the Dunning-Kruger effect makes the average mind of a mass of equals smaller than the mind of the average individual of that mass. Under "equals" we can obviously include modern democracies, where those who are engaged usually vote, or are convinced that they possess the knowledge. Thus (my) information theory also takes the position that (and explains when) the mind of equals is smaller than organized wit, whether in the manner of living tissue cells or of a simpler hierarchy.

Note that there are other ways in which this theory comes to the same conclusion, for example through the principled minimalism of information (maximalism of probability). Nature tends to develop towards less informative events (more probable outcomes), and a lie is a kind of muted information. Therefore, in conditions of equality, untruths have an advantage: they spread faster and more easily through the media, and have greater appeal. Thus, people who "suffer" from the DK effect are only "victims" of the information principle.

The Dunning-Kruger effect is easy to confuse with "collective wisdom". The first is more a matter of the subjective, the second of the objective. The latter is more often managed by third parties (marketing, authorities, collective drowning) so as to become engaged, biased, or wrong, and from the above description it follows when and why it need not equal the DK effect.

Question: What is "beauty"?

Answer: The best adapted survive, and through the generations emotions develop that support that survival. I am retelling the theory of evolution; continue to grasp the rest yourself.

However, a lie is a kind of muted information, and the principle of minimalism of information does its thing (see the previous "DK Effect"); its contribution is that beauty, in turn, deceives you. Looking for something more behind the beauty can be a futile job, also because evolution itself is not perfect. Our adaptation to the environment (which is otherwise constantly changing) is never complete, usually not even optimal.

This is how we come to the common definitions of "beauty", now with reservations. Some of the synonyms of the word "beauty" in English are: allure, artistry, charm, elegance, grace, refinement, style, attraction, bloom, class, glamor, loveliness, polish, symmetry. Otherwise, beauty is a combination of qualities, such as shape, structure, shades or appearance (person, performance, nature) that satisfy the aesthetic senses.

Question: What does information theory say, did the "Big Bang" happen some (13.8) billion years ago, or was the universe always there?

Answer: It says both are true! That sounds like a contradiction, but it is not. The little data we have about the universe can be arranged, with the help of the principles of information theory (Big Bang), to fit into one logical picture.

In short, the time of the "present" is slowing down. It slows down so much that, traveling backward in a time machine to the very beginning of the past, an imaginary traveler would need infinite persistence to reach the Big Bang, the starting moment of the universe. But, on the other hand, measured from the standpoint of our present, in our own (proper) time, the sum of that infinite period is finite.

Similarly, inversely, it happens when a body falls into a "black hole", a celestial body so massive and so strong in gravity that it captures light as well. The boundary beyond which light cannot escape to the outside world is called the "event horizon"; it is a sphere around the center of a black hole. In a body closer to that sphere, time flows more slowly relative to a distant (external) observer, so that, from that point of view, the fall decays infinitely and the body never enters the interior of the sphere. By the way, from the point of view of the body itself, in its so-called proper time, the fall over the event horizon is a short, finite process.

Similarly, non-inversely, we have a visible universe with a boundary. It is expanding, so that more distant galaxies are (on average) moving away from us faster and faster, up to the boundary sphere also called the "event horizon", now the boundary of the outer universe that we do not see. Due to the high relative speed of the galaxies far from us, and according to the special theory of relativity, we believe that their relative time slows down. But because we see them in the distant past (light travels billions of years from there to us and informs us of the situation there billions of years ago), and according to information theory their relative time (relative to ours) has flowed faster, it can happen that we observe neither a deceleration nor an acceleration of the time of these galaxies.

I emphasize that the time of the present slows down due to the "principled minimalism of information", as events develop towards more probable and less informative ones. This principle ensures that the "present" to which we belong has more and more certainty and less and less uncertainty, and hence that fewer and fewer random events are realized; their amount is what defines the speed of time.

Turning it around and around: there are many more facts known to today's science that I have combined in "countless" ways so far, and I am almost certain that the concept I have just described works "perfectly". Try to find a "mistake" (the text is an excerpt from private correspondence); I would be grateful if you could reveal it to me, and otherwise the model remains miraculously effective...

Question: Explain to me once again, why is the universe expanding?

Answer: Astronomers believe that the higher speed of expansion is a consequence of a mysterious, dark force that separates galaxies. It is driven by so-called dark energy. One explanation of dark energy is that it is a property of space. ... As a result, this form of energy would cause the universe to expand faster and faster. That is the furthest science has gone so far, and you can read about it officially in this NASA article (Dark Energy, Dark Matter).

Question: What attitude would information theory have about this expansion? In short.

Answer: My position on that is unchanged, pending new cosmological findings that might indicate corrections, and here it is, as briefly as possible. Space, time and matter are made of information, and "minimalism of information" applies to them: a principle that says that as little information as possible is emitted wherever possible (despite the law of conservation of information and its inevitability).

If space emits (slightly) less information than substance, then consequently conversions of bosons (space) occur less frequently than conversions of fermions (substance). Therefore, for example, the entropy of the substance grows spontaneously and the Second Law of Thermodynamics applies; that is also why space grows. On the other hand, the past is a kind of reality, that is, a kind of information, so it can act on the present in its own way and produce the effect of dark matter. I cannot be shorter.

Question: That is why you need additional confirmations from experimental physics about the transformation of substance into space. Do you think some already exist?

Answer: Yes; that would be in favor of the conversion of particles of substance into space, the reduction of the substance's information at the expense of space, the expansion of the universe, and the spontaneous growth of the entropy (of substance). It would confirm the unfolding of the principles of information theory in the direction I favor for now.

If we exclude theoretical reasons (that many cosmic paradoxes then unravel nicely and logically), there is, for example, the fact that "the ground state energy of bosons is always lower", which can be seen in the introduction of the article (University of Tokyo); this follows from the principle of least action, otherwise known in physics. Also, "by tuning the interaction between fermions they can be made to pair up and behave like bosons" is a sentence from the third paragraph of the next appendix (PhysicsWorld), which supports the thesis. However, I believe that such findings are still not enough, and that is why I am hesitant.

Question: Why, in the perception information `S = ax + by + ...`,
do you call the series of resistance values (`x, y, ...`) that correspond to the abilities (`a, b, ...`) a hierarchy?

Answer: In the subtitle "2.3 Authority" of the book “Information of Perception” you will find that conflict arises from the absence of another way of deciding and judging. This feature of the clash, that conflict can represent a type of criterion, or that squabble can substitute for decision-making authority, is universal for all living beings. Before that, it is stated that two or more persons are equal in competing for a single object when they have no criterion for dividing the given object. They can then decide by conflict. The reverse also holds: if two or more persons are in conflict, it means that they are equal in their right to the indivisible object.

I will explain the latter. Conflict is a situation of heightened information (uncertainty, briskness), which is contrary to its principled minimalism. The excess of action that occurs in competition does not come spontaneously, any more than the movement of a body from a state of rest. It arises from the possibilities (vitality) of the subject itself, the need for the object, and the absence of an easier criterion of division, a hierarchy. Contrary to conflict, a hierarchy consists of handed-over parts of the personal freedoms of its subjects, the reason for the calm of the built-in individuals. According to freedom (amount of options), the subjects are placed in the hierarchy, and due to the law of conservation (of information), they reluctantly leave their place.

It follows from the above that "equality" is the absence of hierarchy (priority), in contrast to "inequality", in matters of division of common interest. That is why "nature does not like equality" (Equality), and disputes arise from the need to overcome it. Let us notice how fundamentally true the last statement is and how much it is turned upside down in legal proceedings. This absurdity, that trials strive to "establish equality", is a curse of legal science and a reason for the rule of law to suffer from excessive administration, bureaucracy, and additional system costs. On the other hand, for the price of maintaining the hierarchy of rights, individuals get the release of part of their personal vitality and the possibility of engaging in other directions.

Then we note that the need for the term "hierarchy" arises in the case of an analytical approach, a one-by-one breakdown, in the interpretation of cumbersome organizations. In a complex network of nodes and links, a hierarchy is like the local environment of a single node. The assembly of living tissue cells inside a single living organism is a complex "network of hierarchies", which can be so complicated that we do not notice the details. However, the fact that we do not see the elementary particles of physics in everyday life does not make them non-existent. It is the same with hierarchies, they are elements of larger structures.

Every complex organization, from a state of citizens to a living organism of cells, can be disintegrated into elements,
because (free) information is an atomized phenomenon. An elementary group of constraints in the formula of perception
information, `S = I ⋅ H`, in the book of the same name, was called a "hierarchy", and the name has remained in use to
this day. A hierarchy is a series of values `H = (x, y, ...)` which is opposed by the corresponding sequence of
abilities `I = (a, b, ...)`, called "intelligence".

Question: What is "domination"?

Answer: Domination (Latin dominatio, power) is the supremacy of one side over the other. Its deepest causes lie in the principled minimalism of information, and they can also be derived from the (formula of) perception information. Read the article on "Stockholm Syndrome", then let us continue.

Thus, the power of domination arises not only from the fear of retaliation, but also from the desire to submit. This is just one of the consequences of the general natural tendency to give up excess information: the spontaneous flow towards a smaller "amount of options", less action, and fewer interactions with the environment. Living beings have more information than non-living ones, so they have a more visible tendency towards laziness, the avoidance of risk and aggression, as well as towards association. The latter is the handing over of personal information to the collective, which we usually call the handing over of freedoms to the social order for, say, greater security.

Formally, it can be explained like this. The total freedom of the associated subject and object is defined by the "information of
perception", `S = x_{a}y_{a} + x_{b}y_{b} + x_{c}y_{c} + ...`, which is
higher when the private abilities, defined by the series (`x_{a}, x_{b}, x_{c}, ...`), are paired with the
correspondingly larger constraints of the series (`y_{a}, y_{b}, y_{c}, ...`).

See an example of perception information in popularity and the simple economic example
(Investment) that I mentioned earlier. In particular now, if `x` = `y` = (3,2,1),
then `S` = `x`⋅`y` = 3⋅3 + 2⋅2 + 1⋅1 = 14. However, when the abilities adapt to
the circumstances (`x` → `x'`), changing into the series `x'` = (2,2,2) of the same sum of
members, the coupling information is reduced to the value `S'` = 2⋅3 + 2⋅2 + 2⋅1 = 12. If the abilities are further
subordinated to their hierarchy, into the sequence `x"` = (1,2,3), the vitality of the given couple is further reduced to
the value `S"` = 1⋅3 + 2⋅2 + 3⋅1 = 10.
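The three couplings can be checked directly; a minimal sketch of the arithmetic above:

```python
# Perception information of two coupled series as a sum of products
def S(x, y):
    return sum(xi * yi for xi, yi in zip(x, y))

y = (3, 2, 1)
assert S((3, 2, 1), y) == 14   # matched abilities: 3*3 + 2*2 + 1*1
assert S((2, 2, 2), y) == 12   # leveled abilities with the same sum of members
assert S((1, 2, 3), y) == 10   # abilities subordinated to the hierarchy
```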

This last subject (`x"`) does not defy the prohibitions of the dominant side at all; on the contrary, it listens,
behaves responsibly, follows the line of least resistance, or, as we say, lets itself go like a log through the water.
The system (association, hierarchy, dominant side) is then left with an excess of information or action which it can use
for other jobs.

By yielding to social norms, by submitting to the stronger side, the individual gets rid of the excess of its information. The individual hands over part of its own "amount of choice" to the collective, which then becomes more powerful by the quantity received. This gain can then be used by the organization as additional action. At the same time, the example shows us that each of the parties, subordinate and superior, becomes relaxed by some dose of freedom by establishing domination.

Question: The disadvantage of equality is that it spontaneously breaks down into hierarchies, you say (Equality). What, then, is the weakness of the hierarchy?

Answer: You notice well, if you assume that every organization could have its key flaw; at least that is how this question looks to me. The critical defect of a hierarchy, say of an imperial government, is that everyone would like to be the first ruler at the top. The competition becomes too harsh, scruples cease, and the structure of the hierarchy becomes too dedicated to internal struggles.

If it survives this, it is absurd but true: a strong hierarchy thus becomes "feminized" (Entropy generalization). It focuses more on arranging the inner than the outer world and, neglecting the environment, which inevitably changes and develops, the internal structure of the "good hierarchy" becomes obsolete. For example, remember the reigns of the Julio-Claudian dynasty from the history of Imperial Rome.

However, even the "middle" between these two (equality and hierarchy), which would be like the organization of the cells of a living being, cannot function for long. In many ways this is the most "perfect" way of organizing: cells of similar (stem) origin, highly specialized, highly efficient, and unlimitedly devoted to the whole, which can perform far more complex operations than any consciousness (centered in a brain) could even be aware of, let alone be capable of managing as a whole. Yet it still dies.

A good organization in general, like a living being, is a concentration of information with surpluses that it tends to shed, spontaneously, following the principle of information minimalism. Sooner or later that happens, and the structure dies.

Question: I understand that "information of perception" increases with the "abilities" of the subject and decreases with objective "limitations", and that you derive "information theory" from that setting. I ask, do you have any other, parallel approach?

Answer: Yes. Pay attention to the calculation of information in the book
Physical Information, say in the subheading
"2.3 Binomial Distribution". Find there (Representative) and see Theorem 2.3.6. It concerns
calculating the information of the binomial distribution `B(n, p)`, that is, of the number of realizations
of a random event of probability `p` in a series of `n` repetitions. It is exactly equal to `n` times
the information of the realization of the same probability in a single outcome.

The point is the law of conservation of information. The sum of the information of the parts should always be equal to
the information of the whole. As with energy, information cannot arise from nothing or disappear into nothing.
However, this is not contained in Shannon's definition of information, and if we follow the path of accepting his
definition with as little rejection as possible, while respecting the law of conservation, then we come to a new definition,
such as `S = ax + by + cz + ...`, which is the information of perception.
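Shannon's information is additive over independent repetitions, which is one way to read the conservation invoked here; a minimal numeric sketch (my own notation and illustration, not the book's Theorem 2.3.6 verbatim):

```python
from itertools import product
from math import log2, prod

def shannon_bits(probs):
    """Shannon information (entropy) of a distribution, in bits."""
    return -sum(q * log2(q) for q in probs if q > 0)

p, n = 0.25, 5
one = shannon_bits([p, 1 - p])       # information of a single outcome

# Joint distribution of n independent trials: 2**n elementary outcomes,
# each with probability p**k * (1-p)**(n-k) for its k successes.
joint = [prod(p if b else 1 - p for b in bits)
         for bits in product([0, 1], repeat=n)]

# Additivity: the information of the whole series is n times one trial.
assert abs(shannon_bits(joint) - n * one) < 1e-9
```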

Continuing with the new definition and acknowledging that more certain events are less informative, we merge with the theory that you have said you understand.

Question: According to the level of aggression, you say, you divide the world of information into: living beings, inanimate beings and laws. What, for example, is the "Pythagorean theorem" in terms of information, if there is no trace of uncertainty in it?

The Pythagorean theorem says that the sum of the areas of the squares over the legs of any right triangle is equal to the area of the square over the hypotenuse. One of the hundreds of substantially different proofs of this theorem found so far is in the figure to the right.

Answer: That is the problem. I have dealt with it on several occasions. First, look at the attached book "Information Stories", subtitle "3.17 Present", on page 125, and let us continue from there...

In short, the idea turns on the thesis that information is omnipresent, that it is the basic tissue of the cosmos, the ontological structure of everything. If this is true, then I still find no contradiction (in the attempts to prove the opposite), because we can say that we again have action, a quantum equal to the product of energy and duration (or momentum and position), where the duration is infinite and the energy is zero. There is no controversial point; you can try and check it yourself, so I do not have to list it all here.

There is another (both additional and independent) possibility. What we observe is always finite, so neither energy nor time (nor momentum and path) is infinite, whether infinitely small or infinitely large. I have elaborated this and partly published it (locally) on several occasions, mostly starting from the "universality of information".

However, there is also a theoretical, stronger side justifying the same theses. These are Heisenberg's uncertainty relations and the so-called uncertainty principle, which can be derived from the noncommutativity of operators (multiplying a variable by two and then adding three will not give the same result as first adding three and then multiplying everything by two). This means that the "relations of uncertainty" cannot be deceived or bypassed by any "mathematical tricks", i.e. that their "avoidance" (in a possible "experiment") would be a sure sign of fraud, or of scientific charlatanry.
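The noncommutativity example in the parentheses can be checked directly (a trivial sketch):

```python
# "Multiply by two then add three" versus "add three then multiply by two":
# the two operators do not commute under composition.
double = lambda x: 2 * x
add3   = lambda x: x + 3

x = 5
print(double(add3(x)))   # 2*(5+3) = 16
print(add3(double(x)))   # 2*5 + 3 = 13  -- a different result
```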

This further means that the cosmos, no matter how big we imagine it, cannot be a static container from which we accidentally draw random outcomes for the future of a present; that container is itself unpredictable, to the very infinity. Our perceptions, our present, as well as what we consider information, are only parts of infinity, which also has the characteristics of unpredictability, so it too is some information.

No matter how insane it sounds, there are no known physical laws that would really forbid the assumption of an opposite
course of time for a particle-antiparticle pair, an electron-positron, and even a positive-negative charge in general. All
equations of classical, relativistic and quantum mechanics remain the same under a change of the sign of time (`t → -t`). It is
similar with (my) information theory, except that it also explains, in that sense, the strange, spontaneous growth of
entropy (statistical mechanics).

Ernst Stueckelberg (1905–1984), Swiss mathematician and physicist, and later Richard Feynman (1918–1988), American theoretical physicist, proposed the interpretation of the positron as an electron moving back in time, reinterpreting the negative-energy solutions of the Dirac equation. The reversed electrons would have a positive electric charge.

The theory of information, owing to its principles of the spontaneous development of particles (physical systems, bodies, waves) towards less informative states and to the law of conservation (maintenance) of information, has its own remarks in that sense. The present is some 3D expanse of it, together with the substance it encompasses (not always the same for different observers), which "evolves" along one time axis through the 6D space-time of the universe. In this progress, the "present" has less and less substance and more and more space.

Particles that move at the speed of light do not have their own time; their presence is tied to the present of slower particles, those that have a rest mass and therefore (again due to the principled minimalism of information) inertia. All of them (for the same reason) usually go into the most probable states and come from their previous most probable states, which means that the same principles of minimalism and conservation apply to an eventual walk backwards in time.

This is an important observation, the addition that comes from information theory, in solving the paradoxical "spontaneous growth of entropy" that could not happen in the reverse course of time of classical, more precisely deterministic, physics. The positron we see is a younger and younger "some particle" with (almost completely) the same properties, in the appropriate "physically possible" place (of its past), because this development is the most likely in both courses of time.

Question: When you know everything about antimatter like that, tell me why there is so much less of it than matter?

Answer: That article about "antimatter" is nice of you, of course, but who convinced them, and you, that antimatter and matter must be equally represented in the universe (a rhetorical question, I don't need an answer)? Precisely because our present travels only into the future, because there is less and less substance and more and more space, it follows that this universe is not "symmetrical". That is why there cannot be "equally many" of them.

Question: What do you mean "why"?

Answer: Because in the case of "equal amounts of matter and antimatter" in the universe, the time of our present would stand still. However, it moves (everything changes; you cannot step into the same river twice, as Heraclitus also noticed), so there is more matter, more positive flow of time.

Question: What if antimatter is gravitationally repulsive to matter, so it is gone because it has long since escaped (rejected) us?

Answer: Yes, I have considered this possibility. It seems logically correct, but the alleged practice does not confirm it. In that case, the far distant galaxies, perhaps those beyond the "event horizon", beyond the visible universe, would consist of antimatter. Places far away from us, but lying between those anti-worlds, would have a slower course of time. The demarcation lines would be gravitationally attractive to both sides, and they would curve the space-time of the universe. Specifically, it is then questionable whether the universe would really expand, or would instead gather around these lines of slow-moving time. However, the macro-observed space-time is approximately flat, at least as far as today's telescopes can tell.

Question: What does "perception information" have to do with the "flow of time"?

Answer: Information is the fabric of space, time and matter, and the law of conservation applies to it, so a corresponding law also applies to space-time. In addition, in (my) information theory, the measure of the amount of change is the speed of time.

So, first, the speed of the observed time flow increases with the volume of outcomes (random events) and decreases with
limitations. Therefore (at least in the first approximation) for the "flow rate of time" we can take the same quotient
(`I = S : H`) which defines intelligence (`I`) by means of the perception information (`S`) and
the hierarchy (`H`), see Intelligence. Hence, the perception information equals the
scalar product of the time-velocity components and the corresponding constraints (`S = I × H`).

However, we can build on this as well. The flow rate of time (`I = S : H`) is proportional to the observed amount
of outcomes (`S`), and inversely proportional to the reference rate (`H`). Now the perception information has
the same form (`S = I × H`), but the factors are the relative (`I`) and the reference (`H`)
time flow rates. When the reference time, say our own (proper) time (`H`), goes faster, relatively less is observed, so
the calculation is again consistent.
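A minimal sketch of the two readings, with purely illustrative numbers:

```python
# Component form: perception information as a scalar product S = I·H
# (all values below are illustrative, not taken from the text).
I = [1.0, 2.0, 3.0]       # components of the relative time-flow rate
H = [3.0, 2.0, 1.0]       # corresponding constraints (reference rates)
S = sum(i * h for i, h in zip(I, H))
print(S)                  # 10.0

# Scalar form: flow rate of observed time as the quotient I = S : H.
H_ref = 2.0               # reference (proper) time rate
rate = S / H_ref
assert rate * H_ref == S  # back to S = I × H

# A faster reference time means relatively less is observed:
assert S / (2 * H_ref) < rate
```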

Question: Why is the negative course of time of antimatter (perhaps) so fatal that in the encounter with matter there is annihilation (annulment of both)?

(An incredibly interesting question from someone who, he will excuse me, is not a particularly good connoisseur of theoretical physics, and certainly not a "master of knowledge" of these border areas that we talked about. I added "perhaps" in parentheses, because the topics are speculative. I shorten the less important parts.)

Answer: First, because two physical systems that would have opposite flows of time could not communicate. A question sent from one to another would go into the past of the other, so the latter would have to send an answer before hearing the question. This violates the principle of objective uncertainty.

Secondly, according to the theory of information that I represent, because space-time, like matter, is subject to the laws of conservation (Inversion), since information is the tissue to which this law applies. Simply put, by adding the positive and the negative course of time, the times must annihilate, therefore annul, zero out.

Question: What, then, is the difference between the "spooky action at a distance" of quantum entanglement and the inability of "inverse" flows of time to communicate?

Answer: In the case of the Quantum entanglement there is no transfer of information, so there is no communication. Simply put, the entangled events are simultaneous in relation to some objective observer, so they are synchronized for everyone else.

Question: Is there anything about information perception that you have never said, and that would become possible if that theory turns out to be correct?

Answer: This question can also be answered in the affirmative. The
perception information `S`_{1} = `ax + by + cz + ...` is the product (value) of the subject's sequence of
abilities (`a, b, c, ...`) and the appropriate objective limitations (`x, y, z, ...`). It represents a certain "amount
of possibilities" of a combination of abilities and obstacles.

Suppose further that another subject responds to the same objective constraints with its different skills
(`p, q, r, ...`), producing a similar amount of possibilities `S`_{2} = `px + qy + rz + ...`.
Those two, `S`_{1} and `S`_{2}, do not have to give the same "experience of reality", the same
information of perception. Their sum `S`_{1} + `S`_{2} will be `S = (a+p)x + (b+q)y + (c+r)z + ...`,
hence again some, now magnified, information of perception `S`, with aggregate abilities (`a+p, b+q, c+r, ...`)
over the same restrictions.
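The additivity claimed here is just the bilinearity of the scalar product; a short check with assumed numbers:

```python
# Two subjects facing the same objective constraints; all numbers illustrative.
def perception(abilities, limits):
    return sum(a * x for a, x in zip(abilities, limits))

limits = [2.0, 1.0, 3.0]            # objective constraints x, y, z
a1 = [1.0, 4.0, 2.0]                # first subject's abilities a, b, c
a2 = [3.0, 0.5, 1.0]                # second subject's abilities p, q, r

S1 = perception(a1, limits)
S2 = perception(a2, limits)
S  = perception([u + v for u, v in zip(a1, a2)], limits)

# (a+p)x + (b+q)y + (c+r)z = (ax + by + cz) + (px + qy + rz)
assert abs(S - (S1 + S2)) < 1e-12
```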

The example shows how the sum of abilities increases the amount of possibilities. In other words, with an increasingly intelligent approach to the same constraints, we can draw more and more options, different and new outcomes, from the unchanged given phenomena. An unusual conclusion, but not if we recognize it in the famous "Russell's paradox" (there is no set of all sets, 1901), or in "Gödel's incompleteness" (an all-encompassing theory is not possible, 1931).

Question: Can you explain to me what this "paradox" and "impossibility" are?

Answer: Bertrand Russell (1872–1970, British mathematician) once criticized the idea of a “set of all sets”, in a way that was accepted in mathematics and for which the initial set theory was revised. He noticed a contradiction in the "universal set", i.e. one that would "contain everything."

The idea of the contradiction that Russell found can be understood with the help of a village where there lived and worked "a barber who shaved everyone who does not shave himself". The contradiction is revealed by the question "who shaves that barber?" Because, if he shaves himself, then he cannot shave himself (by the definition of his job). If he does not shave himself, then he must shave himself.
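The barber's rule can be checked exhaustively; neither truth value of "shaves himself" satisfies it (a sketch of the contradiction in my own formulation):

```python
# The rule: the barber shaves exactly those who do not shave themselves.
# Applied to the barber himself, no truth value of "shaves himself" is
# consistent with the rule.

def consistent(shaves_himself: bool) -> bool:
    # The rule says: the barber shaves him  <=>  he does not shave himself.
    barber_shaves_him = not shaves_himself
    return barber_shaves_him == shaves_himself

print([v for v in (True, False) if consistent(v)])   # [] -- no model exists
```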

Such contradictions are easily made in a "universal set". Russell's method can be extended further. For example, if God, or any cosmic "power", is omnipotent, but really so omnipotent that there is nothing it could not do, then the contradiction is revealed by the known question "can such a being create a stone so heavy that it cannot itself pick it up?"

Such contradictions (you can find similar ones at will) prove that there are no "universal powers", apart from the non-existence of "universal sets". Kurt Gödel (1906–1978, Austrian-American mathematician) went a step further. It is known (and I am still talking about mathematics) that parts, as well as whole correct theories, cannot be in contradiction with any parts of any correct theories. This gives us the opportunity to expand knowledge endlessly by connecting correct theories (new axioms, postulates, statements, theorems, consequences) into one "omniscient" theory.

However, when the question is asked whether there is an end to it, that is, when we assume a "theory that knows everything", you will again discover contradictions. Gödel found the first of these and correctly concluded that there is no end to knowledge (to research), even when we assume that these accumulations of knowledge could become infinite.

My answer above, about the additivity of perception information, is a contribution in that sense. Immeasurable knowledge is hidden in the simplest stone found on the road, if only it could be deciphered by endlessly intelligent philosophers. At the same time, actions are quantized; there are smallest amounts of "free information". There are more than whole "universes" of information, but not all of it is free.

Question: Do you interpret vector intensity using "perception information"?

Answer: Yes, formally, it is logical. The distance from the beginning
to the end of a vector, like "length" in ordinary geometry, is calculated as the diagonal of a rectangle whose sides are
parallel to the coordinate axes of the Cartesian rectangular system. The usual 2-dim rectangle is then extended to a 3-dim
cuboid, and in general to an `n`-dim "rectangle" (`n` = 1, 2, 3, ...), with the appropriate so-called Pythagorean
theorem. This is not the end, because the 4-dimensional space of the special theory of relativity has its own corresponding
"Pythagorean theorem".

The equivalents of such squares are the self-coupled information of perception, of self-observation. This is consistent with the projection of a vector onto a coordinate axis, which in quantum mechanics gives the probability of measuring a given observable (a physically measurable quantity), where the axes represent observables and the vectors represent quantum states (particles, waves).

In particular, that square of the relativistic interval is `Δs`^{2} = `Δx`^{2} + `Δy`^{2} +
`Δz`^{2} - `c`^{2}`Δt`^{2}, whereby for the path of light (speed `c`
during time `Δt`) between two events, the first (`x, y, z, t`) and the second (`x+Δx, y+Δy, z+Δz, t+Δt`), the interval
is zero, `Δs` = 0. The latter has additional meaning in information theory (Quantum entanglement).
On that path of light, events are simultaneous; they can be (quantum) entangled for all other observers.
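The null interval of a light path can be verified numerically; a sketch in units where `c` = 1 (the 3-4-12-13 numbers are only an illustrative Pythagorean quadruple):

```python
# Square of the relativistic interval: ds2 = dx^2 + dy^2 + dz^2 - (c*dt)^2.
# For light the interval vanishes; for slower (timelike) pairs it is negative.

def interval_sq(dx, dy, dz, dt, c=1.0):
    return dx*dx + dy*dy + dz*dz - (c*dt)**2

# A light signal crossing 3-4-12 units of space in 13 units of time
# (|(3,4,12)| = 13) has a null interval:
print(interval_sq(3.0, 4.0, 12.0, 13.0))   # 0.0

# A slower-than-light pair of events:
print(interval_sq(1.0, 0.0, 0.0, 2.0))     # -3.0
```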

Question: Can you explain this to me a little more, the quantum entanglement by simultaneity?

Answer: In mutually moving systems the notion of simultaneity is relative, as Einstein discovered in the special theory of relativity of 1905. His thought experiment with a passenger on a train and a man on the embankment next to it, modified in the video (Einstein's Thought Experiment), I will now retell in a form closer to the original, which is more convenient for connecting to quantum entanglement.

A passenger in the moving train stands right in the middle of a compartment and lights a match. Light has the same speed independently of the movement of its source, so the match light arrives simultaneously at the two opposite walls of the wagon, from the passenger's point of view.

The same lighting of a match is observed by a man on the embankment who, at that moment, is right in line with the middle of the car. He "observes" the movement of the match light towards the ends of the compartment, which are meanwhile moving forward in the direction of the train's motion. The speed of light is the same and finite, so this person sees the light arrive first at the back wall (which moves towards him) and only afterwards at the front wall (which moves away).

Therefore, the arrival of light on both sides of the wagon is not a simultaneous event for the passenger on the train and the person on the embankment. So much for the special theory of relativity.
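The asymmetry of the arrival times can be sketched with illustrative numbers (a simplified bookkeeping of distances in the embankment frame, with `c` = 1; length contraction is deliberately ignored):

```python
# From the embankment: the back wall moves toward the light, the front wall
# runs away from it.  All numbers are illustrative; c = 1 units.
c, v, L = 1.0, 0.5, 2.0        # speed of light, train speed, wagon length

t_back  = (L / 2) / (c + v)    # closing speed c + v toward the back wall
t_front = (L / 2) / (c - v)    # closing speed c - v toward the front wall

print(t_back, t_front)          # the two arrivals are not simultaneous
assert t_back < t_front
```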

Let us now turn to quantum mechanics. Let the two rays of light of the match (two photons) be entangled. The spin of a photon is a random event; it can be polarized "up" or "down" with equal probability. However, if the initial total spin of the two photons was zero, the sum will remain zero even as the photons move away toward the wagon walls. The law of conservation of total spin will hold, and the "objective randomness" of each individual outcome will hold, but without randomness in the total.

In other words, whenever we have one observer for whom two such events began as simultaneous, the total sum of the spins (momenta, charges) of the entangled pair of particles will be maintained. This is the mentioned position of information theory: there is no transfer of information, and yet it seems as if there is an action.

Question: Are you sure space-time conservation laws apply? Is it because of the law of conservation of energy?

Answer: Yes, to the first question. Read "Inversion" and, later, "Conservation". In information theory, this is simply because information builds the cosmos (space, time and matter), and the law of conservation applies to information. The law of conservation of energy is more a consequence of this than a cause.

That is primary. Secondary, and now cited only as confirmation, is the calculation of
"Jacobians" known in mathematics
when changing coordinates. For example, the infinitesimal element of surface area in rectangular Cartesian coordinates is
`dΠ = dxdy`. It is the area of an infinitesimal rectangle, length `dx` times width `dy`. Going over to
polar coordinates (distance from the center `r` and the angle of deviation from the given axis `φ`),
we first calculate the Jacobian `J = r`, so that `dΠ = dxdy = rdrdφ`, and the area expressed in
the new coordinates remains unchanged. Evidently, the Jacobian in the Cartesian coordinates was `J = 1`.

By equating the infinitesimal volume elements of the Cartesian rectangular (`Oxyz`) and spherical (`Orφθ`)
coordinates, we find `dV = dxdydz = r`^{2} sin `θ drdφdθ`, with the Jacobian equal to the square of the distance
from the given point to the center, multiplied by the sine of the second angle (the deviation from the vertical axis). We make
such substitutions when changing integrals to calculate the volumes of bodies, changing the coordinate system to make
the calculation easier, and adding the Jacobian factor to keep the result accurate.
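The Jacobian `J = r` of the polar substitution can be confirmed numerically with central finite differences (an illustrative sketch, not from the book):

```python
import math

def jacobian_polar(r, phi, h=1e-6):
    """Determinant of the Jacobian of (r, phi) -> (x, y), estimated numerically."""
    x = lambda r, p: r * math.cos(p)
    y = lambda r, p: r * math.sin(p)
    dxdr = (x(r + h, phi) - x(r - h, phi)) / (2 * h)
    dxdp = (x(r, phi + h) - x(r, phi - h)) / (2 * h)
    dydr = (y(r + h, phi) - y(r - h, phi)) / (2 * h)
    dydp = (y(r, phi + h) - y(r, phi - h)) / (2 * h)
    return dxdr * dydp - dxdp * dydr   # 2x2 determinant

# dxdy = r dr dphi: the determinant equals r (up to finite-difference error).
print(round(jacobian_polar(2.5, 0.7), 6))   # ~2.5, i.e. J = r
```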

In the theory of relativity we work with four coordinates, the fourth being `x`_{4} = `ict`, the path
of light (imaginary unit `i`^{2} = -1, speed of light in vacuum `c` = 300,000 km/s, time
`t`), where we again use Jacobians in tensor transformations, for the same reasons. In other words, the space-time
that is "distorted", we say "curved", by the presence of matter or gravity has long been calculated and predicted (consciously or
unconsciously) respecting the law of conservation of space and time, by adding the Jacobian.

Now just add to that the consequences of "information theory", for example those listed in my mentioned answers "Inversion" and "Conservation". Still unspecified would be, for example, the transformations of metrics that would represent the "information of perception".

Question: You interpret this "information of perception" in various ways. Does that mean that it thus acquires a different physical meaning?

Answer: Yes. I try to understand it more broadly than as the scalar product of vectors, and even that depends on the type of vector space. The breadth of these observations should follow the possibilities of informatics. Then, within each of the same types of vectors, we know different metric spaces, and therefore different scalar products. Further, each of these mathematical abstractions is followed by different applications in mathematics, and each of these applications then has its own different representations.

Question: How do you see the "information of perception" in the framework of quantum mechanics?

Answer: Heisenberg's uncertainty relations (the uncertainty principle),
for example, say that by increasing the accuracy of finding the position `Δx` of a particle-wave, we lose
accuracy in determining the momentum `Δp`_{x} along the same line of measurement. The same applies to the
duration of the measurement `Δt` and the change of energy `ΔE`.

Since the mentioned "scalar product" `S` takes the least values for its components, it is the "information of perception"
of inanimate matter, which consistently adheres to the principle of least action of physics. It has a minimum "amount of options".
The formulation `S` itself has the physical dimension of action.

The next example is the free particle-wave. It is known
that its wave function can be written `ψ(r, t) = Ae`^{iS/ℏ}, where `S` is the action and `ℏ` the reduced Planck constant.

In general, the wave function defines the probability, and the logarithm of the probability represents information, so
in this case `S = -iℏ` log `(ψ/A)`. Thus, the most complex representation of information perception is reduced to
the simplest example of information, the Hartley logarithm
of the number of equal options.
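The inversion `S = -iℏ` log `(ψ/A)` can be checked directly with complex arithmetic (natural units and illustrative values assumed):

```python
import cmath

hbar = 1.0        # natural units; the values below are only illustrative
A, S = 2.0, 0.75  # amplitude and action of the free particle-wave

psi = A * cmath.exp(1j * S / hbar)          # psi = A e^{iS/hbar}
S_back = -1j * hbar * cmath.log(psi / A)    # S = -i hbar log(psi/A)

# The complex logarithm recovers the action (up to rounding error).
assert abs(S_back - S) < 1e-9
```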

Question: How is it possible that there is the least amount of information, and at the same time that the principle of its minimalism applies?

Answer: This is a good question, astute and crucial for information theory. The answer is reminiscent of three people, the first of whom owes a certain amount of money to the second, the second the same amount to the third, and the third to the first. Each has the same claim, but none has cash, so the settlement of debts cannot begin. It is the situation of a dog chasing its tail and failing to reach it.

An example similar to this is the famous game "rock paper scissors". Although "rock" defeats "scissors" (rock breaks scissors), and "scissors" defeats "paper" (scissors cut paper), "paper" still defeats "rock" (paper covers rock). The first is "stronger" than the second, the second than the third, but the third still beats the first.
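The cycle can be written down directly (a trivial sketch):

```python
# "Rock paper scissors" as an order relation that is not transitive:
beats = {"rock": "scissors", "scissors": "paper", "paper": "rock"}

assert beats["rock"] == "scissors"    # rock > scissors
assert beats["scissors"] == "paper"   # scissors > paper
assert beats["paper"] == "rock"       # yet paper > rock: a cycle, not a chain

# Following "stronger than" never terminates; the relation only cycles:
x, chain = "rock", []
for _ in range(6):
    chain.append(x)
    x = beats[x]
print(chain)   # ['rock', 'scissors', 'paper', 'rock', 'scissors', 'paper']
```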

It is an order relation that is not transitive. A different example of such a relation is in my recent "cycles" response to a question that at first glance has nothing to do with this. The interlocutor asked me to look for a mistake in his program, which in some situations of setting priorities would "inexplicably go crazy". It turned out to be a matter of chained order relations, a series of ever smaller elements in which the "smallest" eventually turns out to be bigger than the "largest", and so on around the circle, while in the meantime nothing important for these orders changes.

This is a somewhat hidden, but long-known property of some "magic squares", numbers arranged in square patterns whose sums of rows, as well as of columns and diagonals, are equal. That answer used the 3×3 magic square of the numbers from 1 to 9, in place of which many others (though not just any) could have served.

By the way, here is the observation that matters most. In information theory, such non-transitive (non-transferable) order relations explain the processes in which information (whose essence is uncertainty and change) can escape into "cyclical movement". Constantly spinning in a circle, it constantly strives for minimalism (respecting the principle of least action, i.e. least communication), and at the same time, in the broader picture, it makes standing waves.

Question: I see you are reading Hazony's book The Virtue of Nationalism. Were you interested in the relationship between globalism and nationalism from the point of view of information theory, do you have any opinion on that?

Answer: I have only read some reviews of that book (Yoram Hazony, 2018) so far, but I have my "views". Information theory also concerns the size of empires throughout history, which have been limited by the range and speed of communication. I think historians know that factor, although it is underestimated.

A completely new thesis, which is mine, is that "nature does not like equality" (Equality). In short, this is because equal outcomes provide more information, and more information is less likely, contrary to the natural tendency to realize more likely events more often. So, because of that "principled minimalism of information" (maximalism of probability), all forms of "reduction to the same", in other words processes leading to equality, to globalization, are flows that will in one way or another, sooner or later, tear at the seams.

Predicting where and how the "seams will tear" sometimes runs up against the "objective unpredictability" that persistently appears here and there, because it too lies in the very essence of the same mentioned "principle of information". The difficulties and eventual failure of globalism, as well as the disintegration of empires, including the recent disintegration of Yugoslavia, always have elements of surprise in them.

Complex structures, such as multinational communities, have a greater "amount of options", so with more information they can develop faster than others, but they can "choose" to turn more strongly to conflicts and disintegrate faster. Excess vitality drives, but also costs.

Question: It's amazing that you can explain shark breeding with information theory (shark reproduce without a mate). Isn't that a little too much?

Answer: Find me an area of study of the living or non-living world, on our planet, in our galaxy, in the universe, of which we can say with certainty that the help of computers is of no use to it, and I will know that the limits of information theory lie there.

For all other fields (mathematics, natural sciences, philosophy, psychology, sociology, law), the "principle of information" alone (that nature spontaneously tends to states of less information) would be enough to occupy a few of my lifetimes. That is why I keep coming back to it: if it remains untold, tomorrow someone will think of making 5-10 different sciences out of that one. The same concerns "feminization", and this, I believe, applies to living beings.

In short, I wrote about "feminization" in the popular "Information Stories", chapter "1.4 Feminization", page 15. It is a process of spontaneous growth of (generalized) entropy, i.e. reduction of information, through which the internal order of a given system increases and its external communication decreases. The process also applies to living beings in circumstances that allow it (Development).

Thus we come to (my) "information theory" and a simple but, in science, unknown and for now unusual explanation of the "virgin birth" in shark reproduction without a male partner. Species that live in "dangerous" environments, in terms of an increased need to survive through risk, failure or success, i.e. through rushing, evolve into two-sexed species. Without the need to deal so much with external events, with communication and control of the outside world, a species will "feminize", which means dealing more with its interior.

The described event with sharks, that they can give birth without the need for the other sex, simply means that the seas and oceans are relatively safe environments for them. They live in conditions of slightly higher security than the species that are necessarily two-sexed.

Question: Give me another example similar to "shark feminism", which you have never told anyone before. Do you have any?

Answer: Of course; I publish less than 10% of my ideas anywhere. Take, for example, the Taliban, Afghanistan, or militant Muslim societies in general, which are the very opposite of women's domination, i.e. of the far-reaching consequences of "feminization". They are like that (male societies) because they have long been attacked, plundered and harassed by the western imperial powers. If the West knew how to leave them alone, their development would, over generations, go the way of, say, Sweden today (which is slowly, imperceptibly feminizing).

That is my theory. I note once again that it has (for now) nothing to do with official science and starts from the "principle of information", not from observing the different evolutions of, say, the chimpanzee and bonobo species of monkeys, otherwise evolutionarily closest to each other, and then to us. I mean, it is not worth sending me any "would-be scientist" observations. That is how I temporarily name those who simply disagree with my theory but have no theoretical proofs.

Question: How are you so sure?

Answer: Demolishing theorems by experiments is futile work. The principle of minimalism of information, for example, follows from the corresponding principled (hence universal) maximalism of probability, which has the weight of a mathematical statement (a theorem). I invented the name of the latter, which is a principled phenomenon, because nature "prefers" to realize more probable events and therefore favors the less informative ones.

First, let such a sceptic disprove the said principle of probability, and the deduction that follows from it, so that I would take his possible "facts" seriously. I say alleged facts because I consider this theory "strong". In such cases, the opposite happens to what we too often know from everyday life in the case of "weak theories", which we can challenge with physical findings. It would be a bad thing to declare Pythagoras' theorem incorrect on the basis of an "experimental proof" that in some right triangle "over there" the sum of the squares over the legs was not exactly equal to the square over the hypotenuse (... I am waiting for an answer).

Question: It is known that mathematics is a form, perhaps the language of the universe, but what is and where does the essence "reside"?

Answer: It is an old and mostly common understanding that mathematics deals with the form itself, without going into the essence of phenomena. That was the case, we believe, but it seems that it will not stay that way with information theory.

For example, in the algebra of logic, a proposition or statement is a sentence that can take only one of two values, true (⊤) or false (⊥). It does not deal with the question of what could be "true" or "false". The implication "if `a` then `b`" (`a` ⇒ `b`), which is an incorrect (false) statement only when the assumption (`a`) is correct (true) and the consequence (`b`) is incorrect, does not discuss the nature of its variables.
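As a minimal sketch of this bivalent formalism (my own illustration, not part of the original text; the function name `implies` is just an illustrative choice), the implication can be enumerated over both truth values:

```python
# The implication a => b of classical bivalent logic is
# equivalent to (not a) or b.
def implies(a: bool, b: bool) -> bool:
    return (not a) or b

# Enumerate the full truth table: the implication is false
# only in the single row a = True, b = False.
for a in (True, False):
    for b in (True, False):
        print(a, b, implies(a, b))
```

The enumeration confirms the rule stated above: three of the four rows are true, and only a true assumption with a false consequence yields a false implication.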

After the discovery of this binary, or bivalent, logic, polyvalent logics, with values true, maybe, false and varying ranges of "maybe" constants, soon appeared in mathematics. But they fell into oblivion just as quickly after the discovery of the theorem which states that any proposition of any polyvalent logic can be reduced to propositions of bivalent logic. They may be revived by the advent of quantum computers, which would work with superpositions of quantum states at once, that is, with probability distributions instead of individual values.

The novelty brought by the information theory (which I am developing) is its claim to be the ontological tissue of the cosmos. Space, time and matter are built of information, which is a measure of uncertainty; on the other hand, what exists cannot be incorrect. If we could really prove that "something" is false, it would not be confirmed by experiment; it could not exist or be information (World of Lies). Therefore, "false" is a disguised form of "true", and the content of "essence" also consists of statements.

It is similar with numbers. When adding, we say "two and three are five" (2 + 3 = 5) and we do not have to think of any specific objects of addition (two plus three kilograms, apples, people), but separate the process of calculation from its content. We allegedly abstract quantities from the concrete.

However, information is also a value, a quantity. Moreover, we can say that it is "without essence", because its content is uncertainty, which at a given moment is a term close to "nothingness". And an analysis like the one above would reduce even the data themselves, which we otherwise consider different from their quantity (i.e. information), to some information. The multiplicity of information is equivalent to differences in quantity.

Question: Do you have any physical confirmation of these strange attitudes?

Answer: Yes, say in microparticle conversions. These attitudes relax the problem of the "decay" of supposedly elementary particles into other, supposedly "more elementary" particles. We just pretend not to notice anything strange in the observation that these other particles are not significantly more elementary, that the elementarity of both could be comparable, and then we further ignore the even stranger phenomenon that particles formed by decay into "more elementary" ones, in some processes, decay back into the initial, supposedly less elementary ones.

Question: You assume that intelligence in humans had a greater sexual attractiveness (Development) than in other species and that this contributed to its development, to our exceptionality in that respect relative to animals. How do you defend this thesis?

Answer: I do not defend it; there is no special originality in defending well-known theses. When (if) an idea is closed with experimental confirmations and becomes a scientific truth, it ceases to be a theory. By the way, the assumption about the sexual attractiveness of intelligence in humans comes from the attempt to position the information theory between Creationism and Darwin's evolution.

So, these "theses" are theoretical, and the extreme ones often serve simply to test the information theory. When it is discovered in another (properly scientific) way that they can be "defended", they cease to be interesting in this sense. The thesis you were asking about becomes one such, less interesting, if the following recent claims (Sapiosexual) are true; I quote:

“A team of Australian researchers developed the Sapiosexual Questionnaire (SapioQ) in order to test whether people were sexually attracted to intelligence (which they defined with an IQ score) and whether they wanted an intelligent person as a partner. They found that the participants seemed to be sexually and romantically attracted to people with above-average intelligence, up to an IQ of about 120. Above an IQ of 120, both sexual and partner interest decreased. Very high IQs were not perceived as particularly sexually attractive or as the most desirable quality in a partner.”

Of course, this is not the end of the dilemmas about the origin of intelligence in people.

Question: Is this sexual attractiveness of intelligence, at about 120 IQ, neither lower nor higher, determined by the "principle of information minimalism"?

Answer: Maybe, I don't know, but I wouldn't go that far with this principle. Nature likes to "hide" (save, avoid, suppress) information, action, truth. Hence it is easier to encode than to decode, which is in accordance with the multiplicity of this world; entropy grows spontaneously (information then decreases), and the Second Law of Thermodynamics applies (heat spontaneously passes from a warmer to a neighboring colder body), so more likely outcomes are more common (the more likely is the less informative). However, I do not see the connection you suggest between the attractiveness of intelligence in people and this principled minimalism.
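The remark that more likely outcomes are less informative is, in Shannon's terms, the self-information -log_{2}(p) of an event with probability p. A small numerical illustration (my own sketch, using only the standard formula):

```python
import math

def self_information_bits(p: float) -> float:
    """Shannon self-information -log2(p): the more probable an
    event, the fewer bits its occurrence carries."""
    return -math.log2(p)

# A fair coin flip (p = 1/2) carries exactly 1 bit,
# a one-in-four event carries 2 bits: rarer means more informative.
print(self_information_bits(0.5))   # -> 1.0
print(self_information_bits(0.25))  # -> 2.0
```

A certain event (p = 1) would carry 0 bits, matching the idea that the fully expected tells us nothing.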

On the other hand, I believe that this figure, an attractiveness peak about 20 IQ points above the average of 100, could be used in calculations of the average intelligence of different groups of people. For example, in environments where the bride does not choose the groom but is assigned her future spouse according to special norms, the average value of intelligence could differ from the standard one. Let's say that the customs of parents choosing marriage partners for their children, through generations, diminish the intelligence of the average descendants of such environments. Then the mentioned difference could be an indicator of the "alienation" of the social norm, the need for norms itself being a consequence of the minimalism of information.

It may also mean that we used to be more intelligent, that the preference for about 20 IQ points more than our current average is a reflection of genetic memory, like the emotion of beauty described above (Beauty).

Question: Can you explain the logarithm to me?

Answer: I can. Imagine one of eight numbers, from 1 to 8, say 7, and I will guess it with binary questions. I divide the numbers into two equal classes, the smaller 1-4 and the larger 5-8, and ask "is the number in the smaller class (1-4)?". The answer is "no", so I divide the expected (larger) class again into two, 5-6 and 7-8, and ask "is the requested number in the smaller one?". The answer to the second question is again "no". I divide the remaining (larger) class again into two, 7 and 8, and ask if the required number is "the smaller of the two". The answer is "yes". For one of eight options, I spend at most three (binary) questions.

The number of equally likely possibilities was `x` = 8, and the number of binary questions needed to single one out was `L` = 3. It is no coincidence that `L` is exactly the base-two logarithm of the given number of (equally probable) possibilities. In other words, the information obtained by singling out one of these (`x`) possibilities is a binary logarithm, i.e. `L` = log_{2}(`x`). The unit of binary information is the bit (binary digit).
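The guessing procedure above can be sketched as a halving loop; this is my own illustrative code (the function name is an arbitrary choice), not from the original:

```python
def count_binary_questions(secret: int, low: int, high: int) -> int:
    """Count the yes/no questions needed to find `secret` in the
    range [low, high] by repeatedly halving the candidate class
    (binary search)."""
    questions = 0
    while low < high:
        mid = (low + high) // 2
        questions += 1          # ask: "is the number <= mid?"
        if secret <= mid:
            high = mid          # keep the smaller class
        else:
            low = mid + 1       # keep the larger class
    return questions

# Guessing 7 among 1..8 takes exactly 3 questions = log2(8).
print(count_binary_questions(7, 1, 8))  # -> 3
```

Any of the eight numbers takes the same three questions, which is exactly why `L` = log_{2}(`x`) for `x` = 8.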

One digit of the binary number system {0, 1} carries 1 bit of information. One digit of the decimal system {0, 1, ..., 9}
carries 1 decit (decimal digit) of information. As each decimal digit carries log_{2}(10) bits, we have
log_{2}(`x`) = log_{10}(`x`)⋅log_{2}(10): the logarithm of the number `x` to the base 2
equals the product of the logarithm of `x` to the base 10 and the logarithm of 10 to the base 2. In general,
log_{c}(`x`) = log_{b}(`x`)⋅log_{c}(`b`).
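The change-of-base identity can be checked numerically; a small sketch of my own, using only the standard library:

```python
import math

# Change-of-base identity: log_c(x) = log_b(x) * log_c(b).
# Check it for c = 2, b = 10, x = 1000.
x = 1000.0
lhs = math.log2(x)
rhs = math.log10(x) * math.log2(10)
print(lhs, rhs)  # both are approximately 9.9658

# One decimal digit therefore carries log2(10) bits, about 3.32,
# i.e. a decit is a little more than three bits.
print(math.log2(10))
```

So reading a three-digit decimal number yields about 3 × 3.32 ≈ 10 bits, in agreement with log_{2}(1000) ≈ 9.97.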
I hope you didn't expect me to explain "logarithm" to you without "information"?

Copyright © 2002 - Rastko Vukovic
