
Artificial
Question: How do I test if (artificial) "intelligence" is really intelligent?
Answer: By questions like: "My grandfather has two buckets, of five and nine liters, and he needs exactly 7 liters of water. How can he measure out that amount?"
If it says there is no solution, or talks nonsense, then it is not intelligent enough. If this question has already been put to it, with a solution suggested somewhere, then something new of the same kind must be devised.
Let the grandfather fill bucket Five, as in the picture on the left, then pour it all into bucket Nine. He fills Five again and pours until Nine is full; since Nine takes only 4 more liters, 1 liter remains in Five. He empties Nine, pours that 1 liter into it, then fills Five once more and pours it into Nine, which now holds 6 liters. Along the way, the grandfather has had each of the volumes from the set {1, 5, 6, 9} liters.
If he continues, filling Five again and topping up Nine, the smaller bucket is left with 2 liters. If he empties Nine, pours in those 2 liters, then adds a full Five, he has the required 7 liters. In the meantime, he has had each of the volumes {1, 2, 5, 6, 7, 9} liters. Continuing further, he can also set aside 4 and 8 liters of water.
It is possible for a computer to adopt this solution and apply it, but then that is not intelligence. Nor is it intelligence when it adopts an algorithm for solving such bucket puzzles in general, for volumes m, n ∈ ℕ, whenever the given numbers are mutually prime (coprime).
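The grandfather's procedure can be checked mechanically. Below is a minimal Python sketch (the function name `measurable` is mine, for illustration) that searches all bucket states by breadth-first search and collects every volume that can be set aside; for coprime capacities such as 5 and 9 it finds every whole number of liters up to the larger bucket.

```python
from collections import deque

def measurable(m, n):
    """Return every nonzero volume obtainable with two buckets of m and n
    liters, by breadth-first search over states (a, b) = bucket contents."""
    seen = set()
    queue = deque([(0, 0)])
    while queue:
        a, b = queue.popleft()
        if (a, b) in seen:
            continue
        seen.add((a, b))
        pour_ab = min(a, n - b)   # how much fits when pouring first -> second
        pour_ba = min(b, m - a)   # how much fits when pouring second -> first
        queue.extend([
            (m, b), (a, n),                  # fill one bucket
            (0, b), (a, 0),                  # empty one bucket
            (a - pour_ab, b + pour_ab),      # pour first into second
            (a + pour_ba, b - pour_ba),      # pour second into first
        ])
    return {v for a, b in seen for v in (a, b) if v > 0}

# Grandfather's buckets: since 5 and 9 are coprime,
# every whole volume from 1 to 9 liters turns up.
volumes = measurable(5, 9)
```

With non-coprime capacities only multiples of the common divisor appear, e.g. `measurable(4, 6)` gives `{2, 4, 6}`, which matches the coprimality condition in the text.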
Creativity
Question: Artificial intelligence will never be able to think creatively ...?
Answer: You should not be naive. What if, over time, a certain percentage of random decisions, assessments, thoughts, or ideas is inserted into them (or they insert it into themselves)? The power of vitality is the power of the "quantity of options", surplus information versus the dead matter of the bare body. Hence creativity.
Consistent with the information theory I am dealing with, the essence of "creativity" is the excess of communication that a physical system has in relation to the inanimate matter of which it is composed. It is quite another question whether known modules (like Codes), focused algorithms, can also find such "crossroads of opportunity" that would guide artificial intelligence into creativity rather than into blockages.
Quantum computers, when they arrive, could have an abundance of those options. Probability distributions, superpositions, will map their flows onto what are now deterministic paths of state 0 or 1, i.e. "true" or "false". When we scale the currents "there is electricity" and "there is no electricity" down toward the ratio of that small world to our big world, the difficulties we are having with quantum computers arise: instability.
It is not that one atom has more uncertainty than a handful of substance, but that the ratio of uncertainties to certainties tips toward certainty in the bulk, due to the law of large numbers of probability theory. The calculation would show that a conservation law applies to such uncertainty, the one we accept for energy, because information is the equivalent of action, the product of energy and duration (in the macro world of feasible constants). That relationship is actually like the relationship of energies, but macro coincidences are better disguised. Too much freedom, then, is our weakness.
Equivalence
Question: The discovery of the equivalence of action and information is a big deal?
Answer: Yes, it drives a lot in information theory. First of all, it gives a new insight into entropy in relation to information. Entropy is thought to be a measure of "disorder", as this is what happens after a glass falls from the table and shatters on the floor. We are changing that now.
The glass did indeed break, as the energy of the glass was given a chance to escape the embrace of the molecules and indulge in a spontaneous growth of entropy. However, particles that tend to distribute themselves evenly away from congested positions could rather be said to be striving for order than for disorder. Soldiers lined up for review resemble gas molecules in a container after the gas has expanded to increase its entropy. A substance cools while its entropy increases.
Individual elements of the whole thus become impersonal, we might say, losing information about themselves. It is true that order of a kind is lost as entropy increases, but not because the "amount of uncertainty" (information) grows; on the contrary, it is because information declines.
The growth of entropy means the reduction of information, also because the spontaneity of "more frequent occurrence of more probable outcomes" means the same as "stinginess with information", or the "principle of least action". These are equivalent forms of inertia (laziness). Thus the spontaneous increase of the entropy of a gas in a container comes from the transmission of molecular vibrations, say by collisions, through which the molecules lose part of their energy of average oscillation and with it (due to the equivalence of action and information) part of their information.
This simple explanation marks one of the sharpest disagreements between classical information theory and my own. The energy consumed in the operation of a computer does not go only into wearing out the equipment; it also covers the loss through the emission of the information the computer produces. It is not unusual (for this theory) that our brain is such a big consumer of energy compared to, say, muscles. A machine that would miraculously perform many IT operations at high speed would need significant energy inputs.
Sets of natural and rational numbers (Infinity II) are equivalent in the sense of "equal size" (cardinal number), which is proved by the existence of a bijection (mutually unique association) between them. Spaces of points and of vectors are also equivalent in the sense of a bijection between them, with the additional existence of equal norms (lengths) of corresponding pairs, and we say they are also isomorphic. Discovered equivalences open the possibility of transferring some (not all) properties between structures and make it easier for us to know one by means of the other.
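The bijection between the naturals and the positive rationals can even be written down explicitly. A well-known construction is the Calkin-Wilf sequence, sketched here in Python (the function name is mine); each positive rational appears exactly once, which is precisely the "mutually unique association" mentioned above.

```python
from fractions import Fraction

def calkin_wilf(count):
    """First `count` terms of the Calkin-Wilf sequence, an explicit
    bijection from the natural numbers onto the positive rationals:
    q_(k+1) = 1 / (2*floor(q_k) - q_k + 1), starting from q_1 = 1."""
    q, terms = Fraction(1), []
    for _ in range(count):
        terms.append(q)
        q = 1 / (2 * (q.numerator // q.denominator) - q + 1)
    return terms

# Every positive rational appears exactly once, in reduced form:
# 1, 1/2, 2, 1/3, 3/2, 2/3, 3, 1/4, ...
first_ten = calkin_wilf(10)
```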
Defiance II
Question: Working despite the slightest resistance is a sign of vitality?
Answer: That's right. Inanimate nature, a substance left to itself, does not defy; it surrenders and never deviates from the principle of least action, from which all the known trajectories of theoretical physics can be derived.
The ability to lie (dilute the truth), as well as to compete, through play or combat, adorns vitality. To possess such abilities, however, an excess of the "amount of uncertainty" (information) is needed, relative to the inanimate substance of which we are composed. This is the point of view of the information theory I am developing (there is no such notion in the classical one).
One way of measuring vitality is a sum of products, the information of perception. Although such arrays have "too many" current components and are difficult to determine precisely, so that the idea is more theoretical than practical (for now), we easily recognize the "defiance" in the juxtaposition of larger subject components (a, b, c, ...) with object sizes (x, y, z, ...) in gaining greater vitality Q = ax + by + cz + ... . This tells us how to raise the level of the game (Strategies), but also what will bother us more.
All nature strives for less communication (less information and action, greater probability), but the law of conservation, the ubiquity of information and the same desires of everyone else, hinder that "death drive", so we hand over excess freedom to the collective and unite, submitting to it. Like the particles of a geyser, which go up even though they should fall down, an excess of the power of choice is attained by one in spite of contrary general tendencies.
Either way, the excess of information gives us the power to push ourselves to lower levels than spontaneous dead matter could "conquer" under the given circumstances. An example is smart, meaningful, and therefore focused texts (Letters) which, unlike completely random ones, do not have a uniform frequency of individual letters and thus achieve less information.
It seems absurd, but it is true that more "informative" content then means less information. Efficiency, organization, and focusing on a topic, subject, or action likewise mean throwing out redundant options and reducing (specific) information. You need a surplus of information to be able to dive into the deficit, just as we need an upward swing to drive the pickaxe deeper down.
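The claim about letter frequencies is easy to quantify with Shannon's formula. In the Python sketch below, the skewed distribution is a made-up toy, not the real letter statistics of any language; the point is only that any departure from uniformity lowers the information per letter.

```python
from math import log2

def shannon_entropy(probs):
    """Shannon information in bits per symbol: H = -sum p*log2(p)."""
    return -sum(p * log2(p) for p in probs if p > 0)

# 26 equally likely letters: the maximum, log2(26) ~ 4.70 bits per letter.
uniform = [1 / 26] * 26

# A skewed toy distribution (hypothetical, not measured frequencies):
# a few common letters and a long tail of rare ones; sums to 1.
skewed = [0.12, 0.09, 0.08, 0.07, 0.07, 0.07] + [0.025] * 20

H_uniform = shannon_entropy(uniform)   # ~ 4.70 bits
H_skewed = shannon_entropy(skewed)     # strictly smaller
```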
Imploding
Question: Please explain to me exactly what you mean by "condensing"?
Answer: As opposed to an explosion, the violent expansion shown in the first of the sketches on the left, compression here is implosion, which usually means collapsing, a violent inward collapse. In the theory we are discussing, it is compression to an even lower level of information than "expected", than the system would have spontaneously.
It was about the "violent" compression that living beings can achieve thanks to their excess of power, freedom and intelligence. Answering the previous question about "Feminization" (Information Stories, 1.4 Feminization), I said that women like to be useful, while men are less bothered by being useless. It could have started like that, but the point of that "Story" was the importance of uncertainty. Neither then nor now am I sure that the interlocutor could understand me well. It is a new theory.
What drives the world are actually the "Forces of Probability", because information is its fabric and uncertainty is its essence. It is therefore freedom of choice and action. It is present everywhere, from the superposition of the smallest particles of quantum physics to the development of the great cosmos. Living things are one type of "unnatural" phenomenon, but only in the sense that the principle of least action, otherwise inevitable in physics, does not literally apply to them, because they have an excess of freedom relative to the bare physical substance of which they are composed. Information theory, however, is broader than physics and can deal with such things.
The laws of large numbers and the multiplicity of phenomena limit uncertainty. The spontaneous, general aspiration towards certainty in its own way wriggles against the law of conservation of the total "amount of freedom" (information) of a closed system, by diluting, fragmenting, diversifying. Information cannot last, but it can "slyly" oscillate, changing circularly through several forms, trapped by the conservation of the total.
One of the ways the principle of saving information "defeats" conservation and its rivals (all of which want less information and more likely outcomes) is precisely living beings and intelligence. With the help of excess information it is also possible to dive into information's deficits. It turns out to be a "mischief" of the principle of minimalism against the principle of conservation, figuratively speaking. That is what is implied by the word "condensing".
Functional
Question: Is "perception information" functional?
Answer: Yes, in the basic sense, perception information is also a representation of a functional (f), a linear mapping of vectors into scalars (numbers). The space L(V, Φ) of such functionals over the vectors of the space V is again a vector space; we call it the dual space of V, and it is usually labeled L(V, Φ), V^{†}, or V^{*}.
What vectors can be was a special question for me (Examples); in short, quantum states are representations of vectors, and quantum systems are their spaces. What is important for the general public, and is not emphasized in books on quantum mechanics, is that this field (the connection of vectors with states) gave the most accurate results that physics has today, indeed that science has in general (mathematics is not a science, it is a special discipline). Hence, every physical state is a representation of a vector, just as a physical body is a conglomerate of elementary particles.
Quantum processes are representations of linear (Hermitian) operators, and these are again vectors, so processes are also states, states of processes. On the other hand, states are processes already by the fact that their very presence shapes different courses of events. The information of perception binds those "strange" and until recently fantastic phenomena into the rational body of science.
For example, there is a little-known but interesting theorem of linear algebra, the Riesz theorem (Frigyes Riesz, 1880–1956): for every functional f on a finite-dimensional vector space V there is a unique vector u ∈ V such that f(v) = ⟨v, u⟩ for every v ∈ V. It came up in the question (Uniqueness), and I clarified that in the new interpretation it really means the uniqueness of the perception of the object by the subject and vice versa. In particular, this proof of the uniqueness of the functional also means the uniqueness of the information of perception. It is a generalization of the Pauli principle.
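In coordinates the Riesz theorem is almost tangible: the representing vector can simply be read off the functional. A minimal Python illustration (the functional f and its coefficients are arbitrary choices of mine):

```python
def dot(u, v):
    """Standard inner product on R^n."""
    return sum(x * y for x, y in zip(u, v))

# A linear functional on R^3; the coefficients 2, -1, 3 are arbitrary.
def f(v):
    return 2 * v[0] - v[1] + 3 * v[2]

# Riesz: a unique u exists with f(v) = <v, u> for every v.
# For the standard inner product, u is read off the coefficients of f:
u = (2, -1, 3)

# Checking on the standard basis pins u down uniquely.
basis = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
checks = [f(e) == dot(e, u) for e in basis]
```

By linearity, agreement on a basis extends to every vector of the space, which is why the representing vector is unique.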
In order not to go too far (into abstraction, not into inaccuracy), let us return to the upper image on the right and its two vectors \(\vec{a}\) and \(\vec{b}\) in the Euclidean plane. Their scalar product (\(\vec{a}\cdot\vec{b}\)) is a functional. It maps those vectors into a number, the product of their intensities with the cosine of the angle between them (ab cos φ). When the vectors are interpretations of states, subject and object, we interpret the corresponding information of perception with that functional. The mentioned "principle of uniqueness" (Riesz theorem) is valid for them, because perceptions are always finite, in contrast to space (the universe), which can be infinite.
Although two vectors always lie in some common plane, each of them sits in its wider environment \( \vec{a} = (a_1, ..., a_{n_1}) \) and \( \vec{b} = (b_1, ..., b_{n_2}) \), which we either expand until both are in the same vector space, or reduce to the smaller length n = min{n_1, n_2}, discarding the least likely components of the longer one. That longer one, for example, is the environment that the subject perceives with a reduced but perhaps most influential choice of senses.
The information of perception, therefore, is greater the smaller the angle \( \varphi = \angle(\vec{a},\vec{b}) \), because then the cosine is larger, so the vitality of the coupling of the two states is greatest when the vectors representing them are "glued" to each other. This is already the story of the best strategy, which gives the highest level of play through the greatest vitality. The sum of products then pairs larger factors (by absolute value) of one vector with larger factors of the other, smaller with smaller, positive with positive and negative with negative. Hence the superior strategy of responding to good with good and bad with bad, keeping to moderation and unpredictability.
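Both claims, that the coupling grows as the angle shrinks, and that pairing large with large maximizes the sum of products, can be seen on small numbers. A Python sketch with illustrative values of my own choosing:

```python
from math import sqrt

def dot(u, v):
    """Sum of products: the scalar product, here read as Q = ax + by + cz."""
    return sum(x * y for x, y in zip(u, v))

def cos_angle(u, v):
    """Cosine of the angle between two vectors: (u.v) / (|u| |v|)."""
    return dot(u, v) / (sqrt(dot(u, u)) * sqrt(dot(v, v)))

a = [4, 2, 1]            # subject components (illustrative)
b_aligned = [8, 4, 2]    # same direction: cos(phi) = 1, maximal coupling
b_reversed = [2, 4, 8]   # large factors paired with small ones

Q_aligned = dot(a, b_aligned)    # 32 + 8 + 2 = 42
Q_reversed = dot(a, b_reversed)  # 8 + 8 + 8 = 24
```

The aligned pairing wins; this is the rearrangement inequality behind the "glued vectors" picture.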
The issue of uncertainty is fundamental. It is the force that drives everything. Choices spring from it; without it there is no information. The principal flow of the system towards states of smaller information should not be confused with the absence of options. Also, large systems should not be considered (absolutely) nonrandom just because, due to the law of large numbers, or because they manage to lock their uncertainties into many small subsystems, they appear predictable. Only then can we understand the importance of uncertainty in general, and especially for matters of vitality such as winning games, economics, wars, or life itself.
Unlike a dead substance, living beings have greater perceptions, greater freedom, that is, they have greater vitality. They will not move exactly along the curved line t in the given image — along which the vector \( \vec{a} \) of the state of the dead object would go, gathering influences and driven by the aforementioned "probability force". Vitality means an excess of information (amount of options) and the ability to act violently despite the spontaneous course of things.
It is incredible how much we can extract about the world around us from the theory of functionals, although (for now) we cannot concretely see all its numerous components even in the simplest phenomena (objects, subjects) of the macroworld.
Veracity
Question: Intelligence then appears easily?
Answer: Actually, no. The truth is brutal, repulsive, and we like to suppress it. It is unpleasant for us to recognize it, and it is a real miracle if it happens to those of us who are lonely, cut off from trends and from the company of society. Originality is the essence of intelligence, but it is also its curse.
Life is a rare phenomenon in space, and on Earth, where it is abundant, human-level intelligence is a miracle. Dinosaurs, for example, lasted and dominated the planet for millions of years, but never evolved into intelligent beings. This is because there are enough other ways to spread vitality once such an excess of information appears.
The consequences of the ubiquity of information, and of the effort to make it less, are the power of truth and its unattractiveness (The Truth). In that friction, the dualism of lies and truth persists as a privilege of vitality, that is, of associations with excess information (options and actions). Dead nature cannot lie; it will not and cannot, and we also consider it special, often an object of admiration. Physical substance literally follows the principle of least action; that is why it is as it is, in contrast to living cells, which sometimes manage to depart from the lines of least action.
At the lowest, quantum level, the particle-wave is powerless to get rid of its excess options, however miraculously capable of transforming into anything it may seem to many. The bottom of the bottom of physics today is the form of a free wave
\[ \psi = a e^{i(\vec{p}\cdot\vec{r} - Et)/\hbar}, \] where a is the wave amplitude, \(\vec{p} = (p_x, p_y, p_z)\) the momentum vector, \(\vec{r} = (x, y, z)\) the position vector, E the energy and t the time. The imaginary unit satisfies i² = −1, and ℏ is the reduced Planck constant.
The wave function ψ is a solution of the Schrödinger equation. According to Born's rule it carries the probability of finding the particle as a physically measurable quantity (observable) through the amplitude: the square of its absolute value, a², is the probability. Let us add here that its logarithm is information. The bracketed part of the exponent, the product of the 4-vector of momentum (with energy as the fourth component) and of space-time (events at a given place at a given time), is the information of perception and action,
where p_{t} = E is often replaced by p_{0} = iE/c, so that "duration" becomes the imaginary length x_{0} = ict that light travels at speed c in time t. The fourth summand (here the information of perception) thus becomes p_{0}x_{0} = −Et.
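That the free wave carries no trace of place or time is visible already in the arithmetic: by Born's rule, the density |ψ|² equals a² everywhere and always. A small Python check (the numerical values of momentum, energy, place and time below are arbitrary choices for illustration):

```python
import cmath

hbar = 1.0545718e-34  # reduced Planck constant, J*s

def psi(a, p, x, E, t):
    """One-dimensional free plane wave a * exp(i(px - Et)/hbar)."""
    return a * cmath.exp(1j * (p * x - E * t) / hbar)

# |psi|^2 = a^2 at every place and time: the free wave "remembers" nothing.
a = 0.5
d1 = abs(psi(a, 1e-27, 0.0, 1e-19, 0.0)) ** 2
d2 = abs(psi(a, 1e-27, 3e-10, 1e-19, 1e-15)) ** 2
```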
Let us look at the ψ function itself as a (complex) exponential probability distribution. In the real domain, exponential distributions have the maximum mean (Shannon) information for a given expectation μ, and they are memoryless. They leave their traces on the environment and space. That is why they do not have the slightest vitality: they have no excess information (options). It seems absurd, since we think of that small world as very random, but it is what it is.
In the macroworld, the world of small particles gains volume and mass, but loses its departures into imaginary time (they become negligible), and we can treat time changes, through energy changes, as constant, so the information is hidden in the energy itself. In motion, at our level, we see it only as kinetic energy (E_{k} = mv²/2, where m is the mass of the body and v its speed). The previous exponential distribution of action thus becomes a Gaussian distribution of velocity, the characteristic bell-shaped function.
Gaussian distributions, on the other hand, have the maximum information for a predetermined dispersion. Such carriers, already overloaded, are not suitable for surpluses. For nature to reach vitality, it has to work much harder. Living beings are therefore a miracle, and human intelligence a miracle of miracles.
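The two maximum-information statements can be compared through the standard formulas for differential entropy (in nats). The sketch below checks, for sample values of my choosing, that the exponential distribution beats a uniform one of the same mean, and the Gaussian beats a uniform one of the same variance:

```python
from math import log, pi, e, sqrt

def h_exponential(mu):
    """Differential entropy (nats) of the exponential distribution:
    the maximum for a given mean mu on [0, infinity)."""
    return 1 + log(mu)

def h_gaussian(sigma2):
    """Differential entropy (nats) of the Gaussian:
    the maximum for a given variance sigma2."""
    return 0.5 * log(2 * pi * e * sigma2)

def h_uniform(width):
    """Differential entropy (nats) of a uniform distribution of that width."""
    return log(width)

mu = 2.0
exp_wins = h_exponential(mu) > h_uniform(2 * mu)  # uniform on [0, 2mu]: mean mu

sigma2 = 1.5
gauss_wins = h_gaussian(sigma2) > h_uniform(sqrt(12 * sigma2))  # same variance
```

The comparison is independent of the particular μ and σ², since the gaps 1 − ln 2 and ln√(2πe/12) are positive constants.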
Origin
Question: So how did intelligence arise in humans?
Answer: Intelligence was sexually attractive. For some birds it was colorful feathers, for deer antlers, for lions strength, but for the primates from which we evolved, intelligence was a more powerful aphrodisiac than for other species.
About 30 thousand years ago, the meeting of Homo sapiens and Neanderthals, during the arrival of the former from Africa to Europe and eventual competition with the natives, I believe, strengthened the attraction of collectivism. We see this in modern wars, when some defend each of their villages while others unite and conquer those forts one by one. Since then, our brains have been shrinking, cognitive intelligence is declining, but social intelligence is improving. States and empires are born.
Here I am not drawing conclusions from theory (there is nothing more practical than a good theory); rather, from practice I am guessing at agreement with it. Stretching the same idea further, we would come to the finding that intelligence (through its ability to create aids) will undermine itself. Even now it may be apparent to a certain percentage of "clairvoyants" that we are stealing physical strength, health and other abilities from future generations for the sake of current ones.
Failure
Question: Life comes from failure?
Answer: Yes, life resists failure by using it. The principle of minimalism of (my) information theory hinders it in this, and in a different way it also helps it. Here is how.
When there is an argument about the evolution of species, the "equal probability" of all possibilities is taken as the "argument" of contestation: are there not too many combinations, and hence the rejection, because there was too little time? The error of such a "conclusion" lies elsewhere in the polemic.
In a mostly good strategy, the "Monte Carlo" method, say moderate investments in random projects in order to later support the successful ones, one should know that not all random developments are equally likely. Nature does not prefer equality, because equality means maximum information, just as it does not like maximum action, or minimum probability. Equal outcomes are therefore spontaneously stratified. Therein lies the informatic interpretation of the well-known "fundamental theorem" of channel coding (Informatic Theory III, 87.3. Block code).
Namely, through a series of transmission channels, random messages will spontaneously group into a few more likely ones and many rarer others. This process is driven by interference, or channel noise, which first dampens the weakest, so that messages stronger than those sent arrive. Let us say that life has its chance, due to the confinement in its processes, such as the binding of electrons to atoms or of atoms into molecules, to reproduce itself, even though it possesses more information with which to penetrate into lower levels than others.
Stratification and diversity create local minimum states from which it is first necessary to go up in order to reach lower and better ones. It happens that the gap is then too big and the state stays locked in a minimum possibly worse than the neighboring one. I have mentioned such situations in cases of extinction of species highly adapted to their former environments, which later changed and left them in the lurch. I cited this as a possible reason for the frequent appearance of two sexes, of which the male has the greater dispersion, so that the species can jump over the mentioned potential gap and last.
The jump of a species from a weaker minimum to a better one, through worse states, in many ways resembles the transition from dead nature to living, but also seemingly quite different models of locking labile information inside a substance, through molecules, atoms, or vibrations in general. Such processes have little in common with a simple uniform distribution.
Moreover, digging deeper, we would also find why the strategy of combining "good with good" (win-win) almost always loses to the strategy of "sacrifice for victory" (lose-lose), and other moments that have little foothold in equally random outcomes (Win Lose).
Purpose
Question: Does intelligence have a cosmic purpose?
Answer: Purpose is neither a scientific nor a mathematical term, but a colloquial one. When it is used, it counts on the mind of the reader. Those who have an ear for deductions will understand; those without will not even read such texts.
The purpose of sharp rocks is not for animals to scratch on, but opportunists noticed and used them. Use gives an object its "purpose" for a subject possibly created and adapted to such a world, so by a substitution of theses it turns out that the object was created to serve the future subject. Objects created for the needs of subjects, on the other hand, can have a purpose, like artificial intelligence.
In a deeper sense, "purpose" is an unresolved and therefore unreliable concept. Let us understand each other: we cannot say with sufficient accuracy whether we are someone's creation (divine, extraterrestrial, deterministic), still less what our purpose would be in such an eventuality. The question is all the more interesting (otherwise I would not have answered it) because we have long since used genetic engineering to create life, plants and animals, which did not exist before and without which it would not have appeared.
A particular example of the purpose with which physics plays is time. Unlike mathematics, whose statements are timeless, the timeless equations of physics are without purpose. And yet physics does not work without mathematics. From the point of view of mathematics, and in a certain sense also of (my) information theory, the universe as a whole has no time, not in the sense in which we locally experience it. For photons, light, time does not flow; that flow belongs to the subjects who observe them. It is the same with the limits of the visible universe in relation to any observer in space: they recede from the observer at the speed of light. Local time, which flows, is relative and flows at different speeds for different subjects.
In (my) information theory, the time of the present slows down relative to (imaginary) past observers, as events become more certain (due to the parsimony of information). Thus (Growing) we find ourselves in the situation of an ever stronger gravitational field, except that it pulls us not into its space but into our future. In that picture, life is like the atoms and molecules created after the Big Bang, with the purpose of making this world different and thus diluting the homogeneity of the "primordial soup" of 13.8 billion years ago.
It can be said that the purpose of time is to provide the perception of information to each individual subject, yet uniquely to the collective. Intelligence, we can say, is there to concentrate that experience, perception (keeping more information in a state), like particles, atoms or molecules that are amplified places of energy in their surroundings. Of course, only in retelling the equations.
Unimprovable
Question: What is the important reason for the emergence of intelligence?
Answer: Of course, information. Its ubiquity, randomness and ability to be noncommutative here and there. Permanent commutators and the uncertainty principle. I know I surprised you with the answer, so I will clarify.
First of all, you know that you cannot expect me to deal with biological or chemical details, the zigzags of practice, or anything else that "is" the origin of life and, further on, of intelligence.
I am looking for deeper reasons: in the principles of minimalism, in the irreversibility of flows, as well as in the real nature of time, which (sometimes) does not allow changing A and then changing B to give the same result as first changing B and then changing A. The commutator is then permanent, [A, B] = AB − BA = const. ≠ 0, in some process cases. An example is the rotations of the cube in the picture.
The noncommutativity of change of position and change of momentum, when the commutator is of the order of Planck's constant, \( [\hat{x}, \hat{p}] = i\hbar \), is already a worn-out example. It is equivalent to Heisenberg's uncertainty relations: by observing a microparticle with harder gamma rays (smaller wavelengths) we determine its position more accurately, but its momentum less accurately (shorter wavelengths carry higher momentum). Similarly, by increasing the resolution of individual images we must reduce their frequency in order to maintain the same number of bytes, the same information of the film.
The quantum of action, Planck's constant, is the smallest package of physical information transfer. Because such a smallest package exists (Information Stories, 1.14 Emmy Noether), the law of conservation applies equally to physical action and to its equivalent, physical information. Information is equivalent to a surface, and also to a commutator, so here are some rarer examples:
\[ \left[\begin{pmatrix} \frac12 & 1 \\ 1 & 1 \end{pmatrix}, \begin{pmatrix} 1 & 1 \\ 1 & \frac12 \end{pmatrix}\right] = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}, \] \[ \left[\begin{pmatrix} 1 & \sqrt{2} \\ 1 & 1 \end{pmatrix}, \begin{pmatrix} 1 & 1 \\ \sqrt{2} & 1 \end{pmatrix}\right] = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}, \] \[ \left[\begin{pmatrix} 1 & i \\ 0 & 1 \end{pmatrix}, \begin{pmatrix} 1 & 0 \\ i & 1 \end{pmatrix}\right] = \begin{pmatrix} -1 & 0 \\ 0 & 1 \end{pmatrix}. \] You will find the values of these commutators among the quaternions and Pauli matrices (Matrices). See also eigenvectors (Example 10).
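These commutators are a matter of direct arithmetic, which a few lines of Python confirm; note that each result carries a minus sign in one entry, which is how they land among the Pauli matrices (up to factors of i).

```python
from math import sqrt

def matmul(A, B):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def commutator(A, B):
    """[A, B] = AB - BA for 2x2 matrices."""
    AB, BA = matmul(A, B), matmul(B, A)
    return [[AB[i][j] - BA[i][j] for j in range(2)] for i in range(2)]

r2 = sqrt(2)
c1 = commutator([[0.5, 1], [1, 1]], [[1, 1], [1, 0.5]])  # [[0,-1],[1,0]]
c2 = commutator([[1, r2], [1, 1]], [[1, 1], [r2, 1]])    # [[1,0],[0,-1]]
c3 = commutator([[1, 1j], [0, 1]], [[1, 0], [1j, 1]])    # [[-1,0],[0,1]]
```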
The point is that such noncommutativity can create vortices, trapping information in a local minimum that can be of a significantly higher level than the surrounding ones and then inaccessible to the given system (Equilibrium, 6.4). Keeping to the lowest available energy levels, this is how elementary particles are formed, as well as hadrons, nucleons and mesons bound by quarks, then atoms and molecules, each particular and all surrounded by huge voids. The processes of people are similar: whirlpools that do not allow the substance to calm down, to die down, at least for a while.
The nature we are talking about does not like equality, whose information outcomes are maximal, so it disperses them. It tends to accumulate in a few nodes, the concentrators of loose networks with a large number of (peer) connections, versus many nodes with a small number of connections, thus achieving greater efficiency (Six Degrees of Separation). Greater efficiency, and greater organization, mean fewer wandering options, less communication, and less need for interactions.
Just as very energetic hadrons, in a sense not obvious at first glance, mean a saving of action, so intelligence has its power of ordering, lowering the average information of the system to which it belongs, even though it itself has an excess of it.
Vital few
Question: You said that the commutativity of operators has to do with vitality; can you clarify that for me?
Answer: At the bottom of the bottom of the microworld of physics, in addition to molecules, atoms and elementary particles, there are operators of linear algebra.
The fantastic accuracy of the predictions of quantum mechanics was achieved precisely through its close connection with that algebra. We interpret vectors as quantum states, operators as what changes states. Just as atoms exist even when we do not see them, so do vectors and operators even when we do not know them.
States are vectors that operators change. When a state is processed through commutative operators, events flow like a well-established system, without empty talk, without surprises, without vitality. But without errors (Failure) there is no new knowledge, because the essence of information is unpredictability. So mistakes are the basis of original behavior, and originality (finding one's way in new, unfamiliar situations) is the essence of intelligence.
The composition of operators is a chain, a series of consecutive process executions, and when the links of that chain are not commutative operators, the state can go astray, or alternate, imitating Pareto's rule: that many tasks can be solved up to the level of 80% with the use of about 20% of resources. Or that 75% of global trade is done by 25% of people. Also, correcting the first 20% of the most frequently reported errors in computer systems, say Microsoft's, would eliminate 80% of related errors and malfunctions.
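The 80/20 split corresponds to a Pareto (power-law) distribution with shape α = ln 5 / ln 4 ≈ 1.16, for which the share of the total held by the top fraction p of causes is p^(1 − 1/α); a small sketch with the standard formula (my illustration, not from the text):

```python
import math

# Pareto principle: for shape alpha, the top fraction p of causes
# accounts for a share p**(1 - 1/alpha) of the effects.
alpha = math.log(5) / math.log(4)   # the "80/20" shape parameter

def top_share(p, alpha=alpha):
    return p ** (1 - 1 / alpha)

print(top_share(0.20))   # share of effects held by the top 20% of causes
```

With this α the top 20% holds exactly 80%, and the same formula gives, e.g., the share of the top 25%, echoing the trade example above.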
Processes
Question: What is the difference between a state and a process if they are both vectors?
Answer: Representations of vectors in quantum mechanics have long approached the subject of information theory in the way I will now explain, but they stop just before it.
Algebraic vectors are finite or unbounded sequences over a set Φ of so-called scalars, i.e. real or complex numbers. Matrices are representations of linear operators. Depending on the basis of the vector space, a subset of vectors in terms of which every vector of that space can be expressed, we understand the matrix as a series of arrays, the vector-columns which can span the given vector space.
If the vector space consists of sequences of length n = 1, 2, 3, ..., we say that it is n-dimensional; the space of the matrices, and thus of the corresponding operators, then has dimension n² = 1, 4, 9, ... . That is the first of the differences. Although the matrices themselves, i.e. the linear operators they represent, are some kind of vectors and closely related to the vectors they map, what they map and the way they do it are treated as facts and theory. The first and the second support each other, but to see only the first without the second is to be the inanimate substance of physics, unable to understand, or further, to be unintelligent.
Even if we stick only to the operators of quantum mechanics (Hermitian), which respect the conservation laws and the reality of observables, a "Kepler's second law" looms (Probability force) in the ratio of the radius (r) to the surface (4πr²) of a sphere, analogous to the ratio of dimensions of the corresponding vectors and operators. Such a comparison goes on and on, and here is how.
The world of communication is much larger than the reach of our senses. Although few, according to the ergodic theorems (Informatic Theory I, II and III), they have good chances, because only a small number of outcomes are significantly more likely than all the others. We live in a world of such essential, relatively few chances to suffer or prosper. Encroachment into ever wider domains, into ever less likely events, requires ever greater effect, information or vitality. There, ever stronger repulsive forces of uncertainty lurk at us.
As there is no set of all sets (Russell's Paradox), nor a theory of all theories (Goedel's impossibility theorem), there is no end to the depths of that world of uncertainty. But we are privileged to see parts of its surface otherwise completely invisible to inanimate creations (physical substance itself).
Bijection
Question: Does this mean that the "world of perceptions" is smaller than the "world of theories"?
Answer: Not really. The previous answer defines the first "world" as a space of strings and the second as a space of strings of strings. Similar to the proof of the countability of the set of rational numbers, in the picture on the right, the equivalence of vector elements and associated matrices, and thus operators, can be proved by bijection (a two-sided unique mapping).
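The countability argument alluded to here can be made concrete with the Cantor pairing function, a standard bijection between pairs of natural numbers and the natural numbers themselves (offered as an illustration of the diagonal enumeration in the picture):

```python
def pair(x, y):
    """Cantor pairing: a bijection N x N -> N (diagonal enumeration)."""
    return (x + y) * (x + y + 1) // 2 + y

def unpair(z):
    """Inverse of the Cantor pairing."""
    w = int(((8 * z + 1) ** 0.5 - 1) // 2)   # index of the diagonal
    t = w * (w + 1) // 2                      # first number on that diagonal
    y = z - t
    return w - y, y

# every pair gets a unique number, and the pair can be recovered from it
for x in range(20):
    for y in range(20):
        assert unpair(pair(x, y)) == (x, y)
```

The same diagonal trick enumerates "strings of strings" by strings, which is the heart of the equivalence claimed above.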
That we perceive only finite magnitudes does not diminish the force of this proof, unless infinity did not exist simply because we do not see it (with our simple senses). It was not possible to see the far side of the Moon either, yet even in their time some people understood the reasons for its existence, before the space vehicles later confirmed such expectations. The power of mathematical and scientific logic to predict reality surprises us again and again, if we confuse it with the colloquial kind.
A geometrical proof, say that the sum of the angles of a triangle is 180°, has little to do with physics or experiments, although in every practical situation where it is put to the test, the same truth will be confirmed. Note that, unlike the far side of the Moon, we will never be able to "measure" geometric claims (not even close to the accuracy with which they are stated), but again it is unreasonable to doubt them.
Accepting the equivalence between facts and theories, we also accept the informational nature of the latter. Then things get a lot more interesting. For example, through "block-matrices" we can extend the notion of "observable" (physically measurable quantities), otherwise the eigenvalues of quantum-mechanical operators, to matrices, which further means to "physical processes". I already do that routinely in perception information.
Perhaps an even more drastic example is the extension of information theory from the physical, or rational world, into the world of lies (The Truth). That procedure leads to a proof of the equivalence of the set of tables of the algebra of logic with the one obtained by replacing "true" (⊤) with "false" (⊥), and then tautologies with contradictions. Between those two extremes would swim a world of half-lies, or half-truths, available to the vital.
However, the world of options is bigger than the world of realizations. These infinities are related as the cardinal number of the natural, integer, or rational numbers to that of the irrational (Values). Without this greater potency of options, it would not be possible to prove the additional dimensions of time, nor to assume the objectivity of uncertainty.
Lying
Question: What can you say from information theory about lying?
Answer: On the left are the conjunction and its negation, tables of the algebra of logic. The first, A ∧ B, returns a true statement if both statements A and B are true (⊤). The second, A ∧' B, returns a false statement only if both statements A and B are false (⊥).
The negation A' of true is false (⊤' = ⊥) and, conversely, the negation of false is true (⊥' = ⊤). In particular, the sentence A ∧ A' is a contradiction (false for all values of A), but the negation of that conjunction, A ∧' A', is a tautology (true for all A). In general, the negation of all tables of the algebra of logic translates true sentences into false ones and vice versa. It is my proof, from long ago, of the bijection between true and false statements, and then of the equivalence of the world of truths with the world of lies.
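The table negation described here, replacing every ⊤ with ⊥ in inputs and outputs alike, can be checked mechanically; a small sketch of the claim (my illustration, not the author's original tables):

```python
from itertools import product

def negate_table(table):
    """Negate a truth table: flip every input value and the output."""
    return {tuple(not a for a in args): not out for args, out in table.items()}

# conjunction A ∧ B as a table over {True, False}^2
conj = {(a, b): a and b for a, b in product([True, False], repeat=2)}
neg_conj = negate_table(conj)        # the table of A ∧' B

# the negated conjunction coincides with disjunction A ∨ B
disj = {(a, b): a or b for a, b in product([True, False], repeat=2)}
print(neg_conj == disj)                                  # True

# A ∧ A' is a contradiction; its negation A ∧' A' is a tautology
print(all(not conj[(a, not a)] for a in (True, False)))  # True
print(all(neg_conj[(a, not a)] for a in (True, False)))  # True
```

Since `negate_table` is its own inverse, it is a bijection on the set of truth tables, which is the formal content of the equivalence of truths and lies.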
Second, dead physical nature does not lie. It has a minimum of information and performs a minimum of communication. That inanimate matter performs the smallest possible interactions given the circumstances is proven by the derivation of trajectories, known to theoretical physics, from the principle of least action. There is no exception, everything is subject to that principle, and the theory of information teaches us that the principle of least action (physics) results from the more frequent realization of more likely outcomes. This is also (my) novelty, the equality of the three mentioned principles (of communication, interactions, probabilities).
Thirdly, systems with an excess of "amount of options" (information) relative to the physical substance of which they consist, which I sometimes call "vital", can see (perceive, know) something of the world of lies that is shown to them mixed with notorious truths. The spice of lies dilutes information (The Truth), makes it weak against hard reality (physics), yet by consistent principled minimalism such a mixture is attractive to the vital.
That's why we like to lie. We prefer to read fiction rather than proofs of theorems, or turn more easily to art. Using misinformation as well as falling for it, putting ourselves in someone else's point of view, or manipulating the "what ifs", are traits that give us an excess of information at our disposal as individuals. A lie is at the root of "life"; we know about cunning, and let's also admit, intelligence.
A lie is attractive to us passively, but tiring actively. What is needed is to break away from natural minimalism and, possessing an excess of information, to penetrate the uncertainty otherwise inaccessible to dead matter. The requirement to overcome the "forces of probability" demands the "effort of lying", observed in the physiology of not only humans, but never before in this way. I am not talking now about details like the tip of the nose or a liar's finger getting cold, for the better use of blood flow, but about the principal questions of information theory.
Jellyfish
Question: What process increases vitality despite its spontaneous decrease?
Answer: I mentioned "sexual attraction" (Origin) recently. A different example is "avoidance of violence" (see Strategies) which, not only in closed environments, can lead us to even greater violence, harder to compromise with.
The third example is less biological, but an introduction about how a jellyfish swims, pictured right, comes in handy. It contracts its bell, then spreads it and catches more water, which it squeezes out as thrust to move in the opposite direction, using its tentacles as well. The description also simulates the movement of photons before observation.
An elementary particle carries information differently before an interaction than after the declaration. Like the state of uncertainty before a die is rolled, six possibilities versus one outcome, in equal amounts. The particle-wave gets a more precise trajectory by observation; Heisenberg and Schrödinger noticed this at the beginning of the 20th century, and now we can add why: because it is shaped by the loss or exchange of information.
A traveling elementary particle interacts with the vacuum. It happens in a virtual way, in time intervals or parts of the path "too short" for us, but again real enough for Heisenberg's uncertainty relations. Inversely proportional to the time and space, the uncertainties of energy and momentum are large (the product of momentum and position, or energy and time, is the quantum of action).
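The scale of these virtual fluctuations can be estimated from ΔE·Δt ≈ ħ/2; a minimal order-of-magnitude sketch, where taking the electron rest energy as the energy uncertainty is my choice of example, not the author's:

```python
# Estimate the lifetime of a vacuum fluctuation from Heisenberg's relation
# dE * dt ~ hbar / 2 (order-of-magnitude sketch only).
HBAR = 1.054571817e-34        # reduced Planck constant, J*s
ELECTRON_REST_E = 8.187e-14   # electron rest energy m*c^2, J (~0.511 MeV)

def fluctuation_time(delta_e):
    """Time scale over which an energy imbalance delta_e may exist."""
    return HBAR / (2 * delta_e)

print(fluctuation_time(ELECTRON_REST_E))   # ~6.4e-22 seconds
```

Intervals this short are "too short for us", yet real enough to shape the particle's path, as the paragraph above says.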
Let us also note the informatic explanation of the so-called Double Slit, a famous puzzling experiment of quantum mechanics, through this fuzziness of energy and momentum. There is also a (Bypass) effect, which happens more easily to the particle-wave due to the greater uncertainty.
Traveling through these fluctuations, as if the particle resolved the uncertainty by leaving it behind and then running away from it, I paraphrase, it moves like a jellyfish. The law of conservation does not allow the permanent disappearance or emergence of its uncertainty, the same law that keeps information quantized (Packages). That mechanism also exists in the evolution of vitality.
A living being, like all matter around it, would get rid of a part of its uncertainty, even handing over its freedoms (choices) to the collective to decide. The parts it hands over become uninteresting (repulsive) to the subject and sometimes push it into other uncertainties. For example, joining together for protection forces us to fight for the survival of the collective, or by being employed in a company we get jobs that we would never have on our own. Starting a family takes us into activities unsuitable for single people.
Intelligence is a rare phenomenon (Veracity), but it could have been created by a similar "movement". By coming together, living individuals form a higher level of "living being", or at least a greater vitality of the collective (Emergence II). Pushing further, they would evolve into a complex living organism. Gradually renouncing fragments of their options, and with them the ability for independent life, the cells of the living whole would, with the death of their collective, all individually die, faster or slower.
The lifetime of an ant or a bee outside its society, as well as of nerve cells outside the organism, is an indicator of the social individual's dependence on the whole, as well as proof of the transfer of information I talked about here. It turns out that the evolution into higher forms of life continues not "despite" (as you say), but thanks to, the principled minimalism of information.
Chasm
Question: Can you give another example of physics using biology?
Answer: I actually have answers to such a provocative question. Here's one.
Information theory postulates that we have free will. A team has more of it, and the higher the vitality, the easier it is to "walk" through the dimensions of time. Liveliness increases dispersion (Defiance) and thereby makes the story that follows easier to understand.
By making choices, by going to a parallel reality, the subject moves to a new environment and to a new "Markov chain" of information transformation to which it then belongs. It is known (ergodic theorem) that such processes spontaneously converge to eigenvectors, which is an algebraic proof, among others, of the principled minimalism of information, because an effort of vitality is required to deviate from such a flow. On the other hand, the same algebra establishes that the subject's new environment is different from the other "parallel" times.
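The convergence of a Markov chain to an eigenvector can be seen directly by iterating a stochastic matrix; a minimal sketch with a hypothetical 2-state channel (the numbers are mine, for illustration):

```python
import numpy as np

# Column-stochastic transition matrix of a hypothetical 2-state channel.
M = np.array([[0.9, 0.3],
              [0.1, 0.7]])

v = np.array([1.0, 0.0])      # start entirely in the first state
for _ in range(100):          # repeated transformation of the information
    v = M @ v

print(v)                      # the stationary distribution

# check: M v = v, i.e. the chain has settled onto an eigenvector (λ = 1)
print(np.allclose(M @ v, v))
```

Whatever the starting state, the iteration ends on the same eigenvector; deviating from that flow requires outside intervention, the "effort of vitality" mentioned above.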
From the physical side, this then means that the laws are the same there and here, or that the subject will not notice any essential differences, except perhaps some leaves on a tree that fall there rather than here, and similar small changes he can attribute to his own actions. In fact, (this) information theory allows, and even predicts, the possibility that the natural laws of some distant (parallel, past, or future) time are significantly different from those of the ongoing present.
That is why there are such alternative truths, all equally accurate (geometric, coordinate, probabilistic and similar) proofs and observations of the same topics: because there are alternative realities. That is why there are also "redundant" solutions that seem to belong nowhere, but are logically perfectly correct. Therefore, this "biological" example also helps with mathematics.
The existence of "parallel" times is only a hypothesis, but it is almost as accurate as the claim that the ratio of the circumference to the diameter of a circle equals the number π = 3.14159..., and equally physically unprovable. How much and how would you measure circles, thicknesses and lengths, in order to bring the experiment close to the accuracy of geometric propositions? Then again, how would you find a situation in which a geometric theorem is physically incorrect?
Differences
Question: Do you have anything else (besides biology) about the origin of intelligence?
Answer: Differences, inhomogeneity. The dispersion of information around us is a prerequisite for the development of extreme vitality and intelligence. This is due to the tension, which increases as the smallest and the largest move apart, but which gives larger local minima. That is the informatic reason.
This complexity can develop into extremely precise items, with zero information, different from the "soup" in which the cosmos was in its "early phase". Through its development, over 13.8 billion years, the average information of the present has thinned out, gaining diversity and thickness of memory.
I remind you: my theory of information predicts the decline of the density of the present and the growth of the past (Growing) which, with pseudo-reality, compensates for the deficit. Space expands and time slows down, but simultaneous events of an object, such as the parts in the decay of larger decaying particles, or a pseudo-sphere of photons expanding from an electron, are still possible. Such are quantum entanglements. I mention them only as an introduction to the explanation of intelligence through the next two components, which are a little less physical.
It is important to have "triggers", as we know from the theory of deterministic chaos, when small differences of action develop into drastically separable phenomena. Another aspect will be found in the minimal polynomial of an operator, alongside the Hermitian operators of quantum mechanics, applied to stochastic processes which, in the manner of a Markov chain, converge to a black box (we have seen: because they have real eigenvalues), analogous to self-adjoint processes.
Namely, in the "allowed" processes of quantum mechanics we have observables defined by real eigenvalues. In data transmission, these are the situations when the process (Markov chain) converges to a black box. Then the sent information is choked by noise, by channel interference, and with a new interpretation let us add: it becomes the past. It participates in the present with a smaller and smaller part. On the other hand, we have the phenomena of bypass, interesting to us here only within the same present, from the point of view of the given object.
I am talking about departures into the "imaginary reality" of coincidence when there is no loss of information, like the well-known quantum entanglement, also without transfer of information (action). In algebra, we represent such processes by linear operators with complex eigenvalues, in contrast to the quantum-mechanical (Hermitian) ones. The brain is a concentrate of the extremes listed here, and with it, or similarly, so are the intelligent behaviors of living beings in general.
More precisely, when the mapping is stochastic, the step-by-step copies become eigenstates and the overall flow (chain) is a black box, except for isometries, when the sequence of mappings is a cyclic repetition. In the second of these, all eigenvalues are roots of 1. When some of them are complex numbers, we have no real observables but a bypass (see Solenoid), useful for intelligence.
Measurement
Question: Can you explain the end of the previous answer, about the observables?
Answer: A stochastic matrix, whose columns are probability distributions, the generator of a Markov chain, is typical for settings of information transfer processes. In the expression Mv = λv it (M) acts on states, vectors (v), and gives observables (λ), real numbers from the set ℝ.
Something similar exists in quantum mechanics, but with Hermitian operators on a complex space, with analogous eigenvalues that are also real numbers. These operators are of "unit norm" (to support the conservation law), and in order for their eigenvalues to be real, they are "symmetric" (self-adjoint).
Stochastic processes are subject to disturbances, channel noise, and converge to a "black box" from whose output message the input one is not recognizable. The output of such a channel is always an eigenvector (v) of the given channel generator (M). Conversely, a message would be permanently preserved at the output if and only if it travels cyclically, which can be proved (via the minimal polynomial) exactly when all eigenvalues of that transmission generator are roots of unity.
There are known cases of Markov matrices (M) which do not converge to the black box and which reduce to the unit (identity) matrix, or those derived by permutations of the columns of the unit matrix. I wrote about them before, and recently in the scripts Informatic Theory I, II and III. To them I add a novelty using the algebra of characteristic polynomials, mentioned above, and maybe that is what confuses.
Namely, the roots of unity are not only ±1, but also complex numbers from the unit circle of the complex plane ℂ which divide it by equal angles (arguments φ) into n = 1, 2, 3, ... equal arcs. They are of the form λ = exp(iφ) and are the representatives of periodic message carriers. However, they are not from the class of Hermitian operators; they do not have observables, there are no physically measurable quantities accompanying such operators. This is not a problem for (my) information theory, if it is for physics.
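A cyclic permutation matrix is the simplest such periodic carrier: it is stochastic, never converges to a black box, and its eigenvalues are exactly the n-th roots of unity λ = exp(iφ). A quick check (my illustration):

```python
import numpy as np

n = 4
# cyclic permutation of n states: each column sends all probability onward
P = np.roll(np.eye(n), 1, axis=0)

eigvals = np.linalg.eigvals(P)
print(eigvals)                       # the 4th roots of unity: 1, i, -1, -i

# all eigenvalues lie on the unit circle: the message is carried periodically
print(np.allclose(np.abs(eigvals), 1))
# and P^n is the identity: after n steps the message returns unchanged
print(np.allclose(np.linalg.matrix_power(P, n), np.eye(n)))
```

The complex eigenvalues i and −i are exactly the "non-observable" values the paragraph above describes: the cycle is real, but no Hermitian measurement accompanies it.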
To understand this "non-physical reality", note that any isometry (a mapping that does not change lengths), in particular symmetries and reflections, can be derived from rotations alone. Central symmetry is a rotation around the center by 180°, parallel displacement (translation) reduces to two central symmetries, axial symmetry is a rotation of the plane around the axis by 180° through the third dimension of space, and plane (mirror) symmetry would be a rotation of 3-dimensional space by 180° through a new dimension. That is the problem: the physical immeasurability of the new dimension.
Real processes are changes in 3-dimensional space over time. Isometries (operators S) express the conservation laws of physics, and from their standard definition (∥Sv∥ = ∥v∥) come the roots of unity as eigenvalues (|λ| = 1), which can all be covered by the complex notation λ = exp(iφ). The imaginary additions are not Hermitian eigenvalues; they tell us that we have no observables. That they cannot be found as physically measurable quantities is not a problem for information theory, if it is for physics.
We would do nothing with these interpretations of complex algebra if we did not add the pseudo-reality of bypass. Such are imaginary departures to "imaginary space" or "parallel time" and arrivals from there. It would not make sense to talk about them without the additional dimensions of time, which are among the first consequences of (this) information theory. This is why this kind of story is a specialty of "information theory", and what follows is what it has to do with intelligence.
The first is stability, the absence of information losses under the conditions of the highest information, which the uniform distribution has on a finite interval, the exponential for a given expectation, and the normal for a given dispersion. In the aforementioned scripts (Information Theory) you will find proofs of these together with the "ergodic theorems". Hence the isometries. Greater intelligence requires greater vitality and greater information (quantity of options, of action).
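The third of these maximum-entropy claims can be checked from closed-form differential entropies: for variance σ², the normal distribution gives ½ln(2πeσ²), the uniform gives ln(2σ√3), and, say, the Laplace gives 1 + ln(σ√2). A small sketch with these textbook formulas (my illustration; the choice of comparison distributions is mine):

```python
import math

sigma = 1.0   # common standard deviation for all three distributions

# differential entropies (in nats) of distributions with variance sigma**2
h_normal  = 0.5 * math.log(2 * math.pi * math.e * sigma**2)
h_uniform = math.log(2 * math.sqrt(3) * sigma)     # uniform on (-σ√3, σ√3)
h_laplace = 1 + math.log(math.sqrt(2) * sigma)     # Laplace, scale b = σ/√2

# among distributions of equal dispersion, the normal carries the most
print(h_normal, h_laplace, h_uniform)
print(h_normal > h_laplace > h_uniform)   # True
```

The ordering holds for any σ, since all three entropies shift by the same ln σ.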
The second is "perspicacity", the ability to separate from the banal, from the concrete, physically measurable ways of seeing the phenomena around us. It is that property of the mind which makes it essentially imperceptible to the "reality of the apparatus", which is why we say that the living brain cannot be completely mapped into the physical substance of dead nature itself, that some part of it is insubstantial.
Dual Power
Question: Are opposites the key to vitality and intelligence?
Answer: Yes, if you mean the ability to choose among more than one. On this occasion, let us remember the well-known "paradox of the court", recently discussed between me and a pair of my colleagues in a different context, and now used to emphasize the aspects and consequences of "dual power", if that is the topic.
It happened that the famous philosopher-sophist Protagoras taught Euathlus the art of lawyering, with the agreement that the student pays half of the tuition immediately, and the other half when he wins his first lawsuit. As the student did not practice law after completing his education, the teacher sued him in court:
PROTAGORAS: You will certainly pay the rest of my tuition, because if you lose this case, you will pay according to the court's decision, and if you win, you will pay me according to our agreement.
EUATHLUS: If I win the lawsuit, I won't pay you because that is what the court ruled. If I lose, I won't pay you because we agreed that I would pay only when I win my first lawsuit.
It is a situation of extortion that is common among those who can lie, and quite impossible in the inanimate world. The laws of liars have no physical power; they must be supported. That is why such opposites are not literally paralyzing, as a real (mathematical) contradiction would be. It is the excess of options that allows vitality to come up with all kinds of conflicting fictitious rules and, on the other hand, not to respect them.
Another question is how, say, a living being succeeds in lying, as opposed to an inanimate one. The answer is in the previous question (Measurement) and the one before it. I will briefly repeat only its formal side. Hermitian operators are the basis of quantum mechanics because they have real eigenvalues, and these define observables (physically measurable quantities). Untruth is not in that domain; physical substance cannot lie. At the same time, quantum states (vectors corresponding to observables) are often in the complex domain, which reveals their unreal nature in the meantime.
Well, if we include non-Hermitian operators, on the one hand we get stochastic matrices and Markov chains, typical models of information theory's message transmitters, and on the other hand quantum states and processes of "observables" in the states of the neverland (Unit Roots). The mind can also maneuver in that "land of the unknown", in the states between observables.
Disappear
Question: Does disappearing from this reality necessarily mean going to one of the additional dimensions of time?
Answer: No, why would it. Wherever there are ways to surprise us, the laws of nature usually do. Not every disappearance is final; along the way, bypasses can do it by literally passing through such a "parallel reality", I believe.
Fictions should be added to vitality (according to the above descriptions), or it should be seasoned with untruths, which make truths pale. With lies, reality becomes like a ghost, of a kind that does not exist in bare physical reality. I do not believe that walks through the prescribed dimensions alone are enough when describing nature.
Types of disappearance also include the decay or transformation of one particle into another. They can also occur in interactions with the vacuum itself, in which case, due to the shortness of the path and duration, the indeterminacy of the momentum and energy of the particle will be so great that we can say that "an elementary particle before interacting with another physical particle does not have a clearly defined path", in the words of Heisenberg (Werner Heisenberg, 1901-1976).
Vitality is a physical macro-phenomenon that can use the properties inherent in the quantum world in various ways, more distinctly than other parts of macro-physics. It thus better absorbs uncertainties and, above all, achieves intelligence, which distinguishes it as a special quality of nature.
By binding a series of pairs of real numbers (Complexification, item 1) one can achieve the illusion of expansion into additional dimensions, without adding new linearly independent vectors to the existing ones. Such bindings are natural, recognized in quantum mechanics, but abstract in the macro world, so we can also consider them special abilities of vitality.
Somewhere at the end of the "disappearance" list is the possibility of "bypass", which should perhaps sometimes more precisely be called "replacement". The alleged replacement is the departure of a given particle into a parallel reality with, in a shorter interval, the arrival of a similar phenomenon from one of those realities into this one. How real such an option is, we can see from the impossibility of distinguishing each subsequent state of a particle created in this reality from a state delivered from another.
Uncertainty II
Question: What do you mean by "objectivity of uncertainty"?
Answer: The impossibility of accurately predicting at least some outcomes makes the uncertainty of a given situation objective to us. In practice, this happens through various restrictions.
The subjectivity of uncertainty is its first and most easily visible feature. It decreases with a greater power of understanding, when we talk about the environments of the subjects we are comparing. Thus, inanimate matter is "in the dark" compared to vital matter. It does not choose much, because it has much less choice than living matter. The dead substance of physics strictly obeys the principle of least action.
We understand the relativity of uncertainty by comparing it to a hunter who hunts game with a trap, knowing more than the prey. Also, during learning, the student's knowledge and powers of prediction change. Announced news is no longer news (there is less uncertainty in repetition). Likewise, a particle is changed by interaction (exchange of information). The decimals of the number π = 3.14159265... are random digits during the first reading, but not during later readings.
The algebra of such uncertainty comes from the properties of linear independence of vectors. No matter how hard we try to add or subtract vectors of some n-dimensional space, we will not get out into the (n + 1)st dimension. That makes objectivity. The space then represents a (physical) system, and the vectors are its states. The interpretation of the body of mathematics as a space into which we can place, and from which take, truths and only truths, after Gödel (Kurt Gödel, 1906-1978), leads us to the impossibility of a "theory of all theories" (Sufficiency). In short, there is no end to the expansion of the linear independence of states, that is, of knowledge.
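That linear combinations cannot leave the span they start from is easy to check numerically: however we combine vectors of a subspace, the rank does not grow. A small sketch (my illustration of the claim):

```python
import numpy as np

rng = np.random.default_rng(0)

# two fixed vectors span (at most) a 2-dimensional subspace of R^3
basis = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])

# take many random sums and differences of the two basis vectors
combos = rng.normal(size=(50, 2)) @ basis

# the rank never exceeds 2: no combination reaches the third dimension
print(np.linalg.matrix_rank(np.vstack([basis, combos])))   # 2
```

Reaching the (n + 1)st dimension requires a genuinely new, linearly independent vector, which is the "objectivity" the paragraph above describes.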
Uncertainty is a deep natural phenomenon, its essence, yet one which nature itself seems to overcome through us, its parts, but also as a whole. Like Heisenberg's uncertainty relations of momentum and position, energy and time (Eigenvector), or the resolution of film and image, the content of a novel becomes more definite for us the more pages we read. The more we read a writer, the better we get to know him; we recognize his style, themes and plots.
However, when we read a novel in preparation, while the author is still refining its parts and changing the content, we find ourselves in a state similar to the evolution of the cosmos. Parts of the world change, like a forest we walk through every day, suddenly finding ourselves in some new part of it. Uncertainties also objectively evolve into certainties. By making choices, reality is determined, directed and focused. Coincidences withdraw due to the general tendency of the universe to have fewer of them (Minimalism).
Moreover, I think, there is more and more surety. The stricter certainties are more permanent, and the absolute ones are like the mathematical: endless. Their changes are therefore less visible, because we consider them more immutable, but again the very occurrences of certainties are (objectively) unpredictable to us. That is why mathematics is difficult, and its development so slow (it is the oldest of the "sciences").
For example, it is hard to simply discover the magic of the number 1089:
Take a three-digit number whose first and last digits differ, say 543.
Subtract from it that number written backwards (543 − 345 = 198).
Add to the result the result written backwards (198 + 891 = 1089).
You have discovered the "magic number" 1089.
In this sense, innovations in mathematics are "objectively unpredictable", although absolutely certain. By the way, certainty and uncertainty are two sides of the same "coin", the world in which we live.
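The 1089 trick can be verified exhaustively; a short check (padding intermediate results to three digits is my added convention, needed when the outer digits differ by exactly 1):

```python
def magic(n):
    """Apply the 1089 trick to a three-digit number n."""
    rev = lambda k: int(str(k).zfill(3)[::-1])   # reverse, keeping 3 digits
    step = abs(n - rev(n))                        # subtract the reversal
    return step + rev(step)                       # add the reversal back

# every three-digit number with distinct outer digits yields 1089
results = {magic(n) for n in range(100, 1000) if str(n)[0] != str(n)[2]}
print(results)   # {1089}
```

Absolutely certain, yet, until one sees the proof, objectively unpredictable, exactly in the sense of the paragraph above.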
Uncertainty melts away and, in infinity, becomes certainty, as in the example. However, its total amount remains the same. Its thinning is compensated by expansion, which we see first as spatial, then as an increase in the thickness of memory, and as an increase in the width of the additional dimensions of time (soon to be understood).
