Only One
Question: How is it possible that every subject is unique, given all the possibilities and the size of the universe?

Answer: For more than a century, quantum mechanics has determined the accuracy of measuring vector-states and operator-processes, with agreement between measurement and theory hitherto unprecedented in science. Let us add to this the extension of the same notions, the vector as state and the operator as process, to the macro-world of physics as well.
Let us note that, much as with molecules, which physics accepted as its own only recently (Boltzmann, 1870s), we can represent macro-physical states by vectors and their changes by operators. The probability distributions themselves and the corresponding stochastic matrices, which we already use extensively, are a typical example. That is the first point.
Second, observables (physically measurable quantities) are those representatives of the state that interact with the environment (the measuring apparatus) in ways close to probability distributions. We can view them as interactions (Force of Probability, 8.1) and as individual probabilities.
Ergodic theory then reveals that, out of a large number of possibilities, only a small fraction of the choices remains relevant. With great accuracy, these "selected senses" can be reorganized into an orthonormal base \( e_1, \dots, e_n \). We can write the vectors of such an ordered space as \( x = \langle x, e_1\rangle e_1 + \dots + \langle x, e_n\rangle e_n \), where the square of the norm is \( \|x\|^2 = |\langle x, e_1\rangle|^2 + \dots + |\langle x, e_n\rangle|^2 \) (13. Example).
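A minimal numerical sketch of this expansion (my illustration with numpy, not code from the text): any orthonormal base reproduces a vector from its inner-product coefficients, and the squares of those coefficients add up to the squared norm.

```python
import numpy as np

# Build an orthonormal base e_1, ..., e_n (the columns of Q) from a random matrix.
rng = np.random.default_rng(0)
n = 5
Q, _ = np.linalg.qr(rng.normal(size=(n, n)))
x = rng.normal(size=n)

coeffs = Q.T @ x                      # <x, e_k> for each base vector
x_rebuilt = Q @ coeffs                # sum of <x, e_k> e_k

print(np.allclose(x, x_rebuilt))              # the expansion reproduces x
print(np.isclose(np.sum(coeffs**2),           # Parseval: sum |<x, e_k>|^2
                 np.linalg.norm(x)**2))       #           equals ||x||^2
```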
Building on this, we note (22. Stand) that for every linear functional φ on the finite-dimensional space X there is one and only one vector y ∈ X such that φ(x) = ⟨x, y⟩ for each x ∈ X. The implications of this for (my) information theory are far-reaching. Namely, functionals are representations of "information of perception", and the stated relation then becomes the uniqueness of the perception (φ) between the given environment (x) and the subject (y).
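A small sketch of Riesz's statement in a real finite-dimensional space (my illustrative code, with a hypothetical functional phi): treating the functional as a black box, its unique representing vector y can be read off from its values on the base vectors, after which φ(x) = ⟨x, y⟩ for every x.

```python
import numpy as np

# A hypothetical linear functional phi on R^4 (illustration only).
c = np.array([2.0, -1.0, 0.5, 3.0])
def phi(x):
    return float(c @ x)

# Recover the representing vector y from phi's values on the standard base.
n = 4
y = np.array([phi(e) for e in np.eye(n)])

rng = np.random.default_rng(1)
x = rng.normal(size=n)
print(np.isclose(phi(x), x @ y))   # True: phi(x) = <x, y> for every x
print(y)                           # the one and only such y (here equal to c)
```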
Third, the formulas just given say that the subject-object coupling, which defines the participants of perception and, in turn, defines reality, is one and only one. With these connections, each subject (as well as each object) is a unique phenomenon of the universe. Furthermore, since processes are also vectors, so that "process states" are also states, the processes of the universe are unique as well.
The melting of ice cubes, pictured above on the left, is a unique process. On every repetition there would be at least a small "mistake" because of which its course would be at least slightly different. The melting process would always be in a slightly different interaction with the environment, no matter how hard we tried to make the conditions of the experiment exactly the same.
The question above, how this is possible, thereby becomes a method. If we take the mentioned uniqueness as an accomplished fact and a starting point, it becomes a tool for a deeper understanding of the "possibility of the great universe".
Spreading
Question: Can you give me one example of "uniqueness as a starting point" for a deeper understanding of the universe?

Answer: The growth of space is a major consequence of the global uniqueness of the state and process of the universe. Bosons are particles of which more than one can be in the same place at the same time. They can all fit into one point, as at the beginning of everything (the Big Bang) about 13.8 billion years ago.
Bosons are also the particles of space which, from the time of the Big Bang, began to diverge and have not stopped to this day, nor will they, primarily due to the uniqueness of the state of the universe. The principled parsimony of information further made the present thin, supplementing the deficit of uncertainty with the growth of the past (Genesis), through which new laws of the cosmos were revealed. With the help of Higgs bosons, fermions appeared.
What I am writing is a new "information theory" and, as you may see, it does not need "thickening" with recognized ones. However, it also has its own additions to the known knowledge of physics, of which I will mention only two more.
The first is a consequence, or a prediction, that during the history of the visible universe the spontaneous creation of fermions from bosons decreases, so that in time such processes reverse into ever more frequent transitions of fermions into bosons. Hence the substance melts away and space expands.
The second is a consequence of the ergodic assumption implied by this kind of theory, namely that different outcomes are possible, hereinafter referred to as information transfer errors. The information of the past comes to us as output messages carried by a long Markov-like chain. However, such messages adapt over time; they become characteristic states (eigenvectors) of the transmission itself, so that the long chain acts as a "black box". These eigenstates are uniform probability distributions, which would give us the picture of the universe as we have it at the time of the "Big Bang", whatever it was then.
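A toy sketch of this flattening (my construction, not the author's code): repeatedly applying a doubly stochastic channel drives any input distribution toward the uniform one, its eigenstate.

```python
import numpy as np

# A doubly stochastic "channel": every row and every column sums to 1.
T = np.array([[0.7, 0.2, 0.1],
              [0.2, 0.6, 0.2],
              [0.1, 0.2, 0.7]])

p = np.array([1.0, 0.0, 0.0])      # a sharp, fully certain input state
for _ in range(50):                # a long chain of identical transmitters
    p = p @ T

print(p)                           # approximately [1/3, 1/3, 1/3], the uniform distribution
```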
From both additions, as well as from the introductory part of this answer, the "evolution of lawfulness" can be seen as a consequence of the increasing certainty of the present together with "uniqueness as a starting point". But let that remain the subject of another answer.
Coupling
Question: Can you explain the EPR-paradox to me?

Answer: Yes, and I will try not to repeat myself (Gloves). What Einstein, Podolsky and Rosen discovered in that stirring 1935 observation is quantum entanglement, from which the authors concluded that it is not realistic and that quantum physics has some "hidden parameters" which could possibly improve the situation.
Three decades later a Northern Irish physicist (Bell, 1964) proved the impossibility of such an assumption, and only decades after that would experimental physicists understand the misunderstanding and begin, at first with disbelief, to successfully prove the reality of quantum entanglement. That part is by now a classic, and less exciting than what follows.
In the scalar product discussed there (EPR-paradox), the physical quantity A corresponds to the first system, with eigenvalues \( a_n \) to which the eigenfunctions \( u_n(x_1) \) belong, where \( x_1 \) are the variables of the first system, and \( \psi_n(x_2) \) are the coefficients of the expansion of \( \Psi \) into a series of the orthogonal functions \( u_n(x_1) \). That is:
\[ \Psi(x_1, x_2) = \sum_{n=1}^\infty \psi_n(x_2)u_n(x_1) \]where x2 denotes the variables describing the other system. Suppose we measure A and find its value ak. Then the first system remained in the state \( u_k(x_1) \), and the second in the state \( \psi_k(x_2) \).
The freedom in the choice of these coordinates, of systems I and II, is what makes such formulas paradoxical. The real choices of these coordinates are represented by possible relative observers. Their range constitutes the information of perception, and this is the novelty that we add to quantum entanglement through information theory. Before clarifying, let us look at what Einstein wrote next.
If instead of the physical quantity A we choose the physical quantity B, with eigenvalues \( b_s \) to which the eigenfunctions \( v_s(x_1) \) belong, then instead of the previous one we get the formula
\[ \Psi(x_1, x_2) = \sum_{s=1}^\infty \varphi_s(x_2)v_s(x_1), \]where \( \varphi_s \) are the new coefficients. If we now measure B and find its value br, it will mean that the first system remained in the state \( v_r(x_1) \) and the second in \( \varphi_r(x_2) \). This is what the algebra of quantum mechanics predicts and is actually what happens in the micro-world, which we call quantum entanglement. Don't confuse this A and B with Alice and Bob below.
Relative observers, coordinate systems, see the same result of the scalar product \( \Psi(x_1, x_2) \), the same perceptual information, differently. In the picture above on the left we have such a case, typical of what quantum physics considers today. In the middle is the blue particle from which Alice and Bob are moving away. The total spin of those three is zero, in the sense that if Alice has a positive spin then Bob has a negative spin and vice versa. However, it is objectively uncertain which of the two has which sign.
The law of conservation applies to the total spin of these three particles; it is a constant, and that is the uniqueness contained in the value \( \Psi \). At each step of the separation, Alice and Bob remain parts of the same present with the same total spin zero, with internal uncertainty of sign. But this confuses the relative observer, usually a laboratory worker at rest with respect to the blue particle, who does not see Alice and Bob as parts of a single present.
If he sets up a device and measures, say, a positive spin for Alice, it inevitably means a negative spin for Bob, no matter how far apart they are at the time. And vice versa, a negative first means a positive second. Until the measurement, the objective uncertainty of Alice's spin remains; with the act of measurement, Bob's uncertainty disappears as well. These two particles do not have two uncertainties but only one, quantum entangled. Non-simultaneity is what gives relative observers the illusion of a "phantom action at a distance" of Alice's spin on Bob's spin.
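A minimal numerical sketch of this situation (my illustration in the standard quantum formalism, not the text's notation): a two-spin state with total spin zero, in which measuring "up" for Alice leaves Bob "down" with certainty.

```python
import numpy as np

# The singlet-like state psi = (|up,down> - |down,up>)/sqrt(2), written in the
# basis |up,up>, |up,down>, |down,up>, |down,down>.
psi = np.array([0, 1, -1, 0]) / np.sqrt(2)

up = np.array([1, 0]); down = np.array([0, 1])
P_up_A = np.kron(np.outer(up, up), np.eye(2))   # projector: Alice measured "up"

p_up = np.linalg.norm(P_up_A @ psi) ** 2        # probability of that outcome
post = P_up_A @ psi
post /= np.linalg.norm(post)                    # the collapsed state

print(p_up)                                     # 0.5: objectively uncertain beforehand
print(post)                                     # equals |up,down>: Bob is "down" for sure
```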
So, in addition to the unique spin content in the value Ψ(x1, x2), we also have the in-their-own-way unique viewpoints of relative observers of the given events. However, they might all think that by measuring Alice's spin, and actually removing its uncertainty, a phantom (faster-than-light) instantaneous action travels to distant Bob.
Matrix Q
Question: Do you have any other proof of uniqueness?

Answer: Yes, using Q-matrices and q-probability Informatics Theory (03. Q channel matrix).
Namely, imagine two vectors (states), the input x = (x1, ..., xm) and the output y = (y1, ..., yn) of a linear mapping Q : x → y, where m, n ∈ ℕ are natural numbers.
Let the sum of the squares of the components of each of these vectors be one, so that we can write them using the cosines of the angles of inclination to the coordinate axes, xj = cos αj and yk = cos βk; the squares of the components of each vector then form some probability distribution. As in one of the examples of convolution.
The matrix Q of the mapping has rows indexed by j = 1, 2, ..., n and columns by k = 1, 2, ..., m, with coefficients \( Q_{jk} = \cos\beta_j \cos\alpha_k \). It is quasi-stochastic, because the sum of the squares of the k-th column is cos²αk and the sum of the squares of the j-th row is cos²βj, so only the sum of the squares of all elements of this matrix is one. When it is square (m = n > 1) its determinant is zero, so it has no inverse matrix; it is unique, and its use is one-time, Q : x → y. By changing at least one of the input-output vectors, this matrix changes.
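A small numerical check of these properties, under my reading that Q is the outer product of the unit output and input vectors, Q[j, k] = y_j x_k = cos β_j cos α_k:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=4); x /= np.linalg.norm(x)   # input, sum of squares = 1
y = rng.normal(size=4); y /= np.linalg.norm(y)   # output, sum of squares = 1
Q = np.outer(y, x)                               # Q[j, k] = y_j * x_k

print(np.allclose(Q @ x, y))                     # Q maps x to y
print(np.allclose((Q**2).sum(axis=0), x**2))     # column sums of squares = cos^2(alpha_k)
print(np.allclose((Q**2).sum(axis=1), y**2))     # row sums of squares    = cos^2(beta_j)
print(np.isclose((Q**2).sum(), 1.0))             # total sum of squares   = 1
print(np.isclose(np.linalg.det(Q), 0.0))         # rank one, so determinant zero
```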
The uniqueness of the Q matrix, of the process, also stems from the fact that two vectors span a single plane, regardless of how many components they have, that is, of the dimension of the space they lie in. This in itself provides another answer to the question posed. The area spanned by the mentioned vectors is the value of the commutator, and it can be used to represent information. There is a unique communication of one (arbitrary, fixed) vector with the surrounding ones, that is, a unique information of perception of such a coupling. This brings us back to functionals and Riesz's proposition.
Note that we have now covered three ways of demonstrating uniqueness, and that the uniqueness of a vector coupling does not mean there is only one mapping (process) between them. Namely, from y = Ax it always follows that y = A'A''x, given that every matrix can be decomposed into (various) factors. If I give a package to Alice, who gives it to Bob, and he delivers it to the post office, that is not the same as my delivering the package personally, nor is my sentimental attachment to the contents of the package the same.
In quantum mechanics it was said (Heisenberg) that the measurement is what defines the electron's previous path, and (I will add) the least action given the circumstances. However, wherever and whenever an electron is measured, its charge will always be the same.
Synergy
Question: Is there synergy in information theory?

Answer: It appears as a kind of emergence, if there was none yet, or as a form of increased information of perception obtained by grouping a pair of summands of the same sign.
We know that the value of the information of perception Q = a1b1 + ... + anbn increases with the number of summands (n = 1, 2, 3, ...) if they are positive and the data otherwise do not change. The same can be achieved by reducing the number of summands, by merging factors of the sequence a and, correspondingly, factors of the sequence b.
Namely, when in each of the summands ak = α1 + α2 and bk = β1 + β2 the parts are of the same sign, then α1β1 + α2β2 < akbk, which means that these components are better kept together if we aim for a larger sum of products. Otherwise, synergy is the interaction or cooperation of two or more organizations, substances or other agents producing a combined effect of greater value than the sum of their components. Now we can add to that this algebraic picture of the information of perception.
What can be obtained in this way is an increase in vitality, through the appearance of a greater "amount of options" (information) of the system relative to the total sum of its separate components. The consequences are visible in the interpretation of winning games, where this kind of association strengthens the game, but it is also intuitively understandable. On the other hand, such algebraic synergies exist within the observed "strange" phenomenon of emergence (which, an acquaintance of mine claimed, mathematics will never be able to explain well).
However, when the summands with the same index have parts of different signs, so that ak < |α1| + |α2| and bk < |β1| + |β2|, then α1β1 + α2β2 > akbk, which means that it is better to keep these components separate if we need a larger sum of products. Applied to games to win (Slime Mold), together with the previous case, this means that we raise the level of the game when we combine the like (of the same sign) and separate the unlike. We group ourselves and stratify the opponent, so that we take the positive and reject his negative.
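A toy numerical check of the two cases just described (my example, with arbitrary small numbers): with same-sign parts, merging raises the sum of products; with mixed signs, keeping the parts separate does.

```python
def grouped(alpha, beta):
    # merge the parts into one summand: (alpha1 + alpha2)(beta1 + beta2)
    return sum(alpha) * sum(beta)

def split(alpha, beta):
    # keep the parts as separate summands: alpha1*beta1 + alpha2*beta2
    return alpha[0] * beta[0] + alpha[1] * beta[1]

# Same sign: merging wins (synergy).
print(split((1, 2), (3, 4)), "<", grouped((1, 2), (3, 4)))     # 11 < 21

# Mixed signs: splitting wins.
print(split((3, -1), (4, -2)), ">", grouped((3, -1), (4, -2))) # 14 > 4
```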
Whether we join or separate the corresponding components of the above sequences a and b, their scalar product Q = a⋅b remains a functional to which Riesz's assertion again applies. It remains some new information of perception of unique subjects. Colloquially speaking, we remain ourselves while improving ourselves. However, greater synergy creates a paradox with the principle of least action. The law of conservation of information does not allow it to be reduced just like that, because the environment is already filled.
That is why vital systems easily manipulate in vital ways unknown to dead matter. They address others according to their own level, slightly more often those below them, and the higher their own vitality, the more uniformly they address levels both lower and higher than themselves. In other words, weaker characters will prefer to attack those weaker than themselves, unlike the "higher league" ones, who will defy more often (against the principle of lesser effect).
Let us put it this way: servile natures tend to fall under the first leadership that comes along. Bigger leaders will overshadow smaller ones, so a (temporary) stability arises that can support life in waves of birth and death.
Aging
Question: Declining vitality is aging?

Answer: It can be put that way, as with a stone thrown upward, which will always fall back with the same acceleration of the earth's gravity, gradually spending its upward speed. Here, vitality increases through synergy while simultaneously decreasing under the action of the principle of saving information.
One needs to have a surplus of information to be able to dive into its deficit, below the ground, just as we need upward momentum to drive a pickaxe deeper down. This is roughly the explanation of efficiency by the effect of vitality, or intelligence. Those below will push you up, because they want to stay down, hindering your effectiveness (while below ground) but also helping you last (while in the heights).
I am simplifying, and sticking as much as possible to the algebra of inner products. Different living individuals of the same sign join together and increase the vitality of the organization through synergy, handing over parts of their personal freedom to it. In the meantime, inertness also acts, analogous to least action, or to the least emission of information. Increasing and decreasing vitality revolve around the same tendencies: toward more frequent, more likely outcomes, that is, if possible, less interaction and therefore less communication.
Hence our desire for peace and order, so that society may be balanced and more efficient, and thereby less free. All communities thus die at micro levels in each of their stages. By wasting its potential "pickaxe swing energy" (information), the dying of a more efficient society becomes larger and more obvious. Aging grows through entanglement in one's own regulations (Democracy), or more fundamentally, through the tendency of all living cells toward non-opposition. We grow old because losing freedom is easier than gaining it.
In this sense, just as we speak of a "force of probability", we should also speak of a "force of strategy" that drags us from a higher league to a lower one, from increased vitality to decreased. Society loses information and becomes easier prey for those around it (Degeneration). At the same time, the resulting "indentations" of one occasion become opportunities for another, as in the pushing of waves on the surface of the sea, where every particle of water would like to be lower and in its "anger" throws out others to an unwanted height, giving them the potential to return empowered.
The very bottom of aging is dead physical substance and, as we see, everything gravitates toward it while at the same time, as in a "pool full of crocodiles", everything teems with dynamism. Let us add that without these movements there is no nature and no cosmos. The fabric of space, time and matter is information, which does not exist without excruciating uncertainty. In the end, it is not the years of our life that count but the life in those years (Abraham Lincoln).
Half Truths
Question: Why does physical reality ignore half-truths?

Answer: I will explain it using the photo-electric effect, but before that let's remember the principle of minimalism.
The truth is strong but unattractive (The Truth). Half-truths are reluctant to leave, but they arrive gladly if something can absorb them (vitally), primarily by the principle of saving information.
This is another reason that information always comes in finite packages (Packages). Going into ever smaller parts of those "pure" uncertainties, we find more and more certainties; half-truths are hidden inside ready-made truths. They are very attractive, and this keeps them firmly together within their packages. Among us, who can manipulate falsehoods, they circulate because of the "vacuum", the deficit of information that we can create, and for the same reason they spread among us more easily than pure truths.
Otherwise, the quantization of energy derives from the conservation law (Information Stories, 1.14 Emmy Noether), i.e. from the properties of symmetry and the Euler–Lagrange equations. However, it all comes down to the same information that is equivalent to action (the product of energy and time), since in the macro-world it is possible to observe equal time intervals.
The information level of simple physical reality, of its smallest packages, is higher than that of manipulative vital systems, so half-truths are reluctantly surrendered to it. That is why we have the photoelectric effect, now explained in an informatic way. It is the emission of electrons from metals under the influence of light, discovered by Hertz in 1887 quite by accident. Physicists were then disturbed by the fact that more light ejects more electrons but does not change their energy.
The energy of the ejected electron is affected by the color of the light, the greater energy corresponding to the shorter wavelength, and not by its intensity. Einstein resolved all doubts in 1905 by assuming that light is of a particle nature, that it propagates only in quanta called photons. More photons eject more electrons, but the energy of the ejected electrons can only increase if the energy of the photons increases. That assumption of Einstein's was then so radical that even Planck, the founder of quantum theory, opposed it. In 1921, Einstein was awarded the Nobel Prize in Physics precisely for explaining the photoelectric effect.
If an electron inside a material receives the energy of a photon greater than its binding energy, it is likely to be ejected. When the photon energy is too low, the electron cannot escape from the material. It is like a ball in a funnel that cannot pop out without enough speed, but with a strong enough push can find itself outside the hole. The same happens with half-truths, which cannot reach the level required by the information of dead physical substance.
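A small sketch of Einstein's photoelectric relation K = hf − W (my illustration; the 2.3 eV work function is an assumed number, roughly that of sodium): the electron is ejected only if the photon energy exceeds the work function, and its kinetic energy grows with frequency, not with intensity.

```python
H = 6.626e-34          # Planck constant, J*s
C = 2.998e8            # speed of light, m/s
EV = 1.602e-19         # joules per electronvolt

def ejected_energy_ev(wavelength_nm, work_function_ev):
    photon_ev = H * C / (wavelength_nm * 1e-9) / EV
    kinetic = photon_ev - work_function_ev
    return kinetic if kinetic > 0 else None   # None: no emission at all

print(ejected_energy_ev(400, 2.3))   # violet light: about 0.8 eV electron
print(ejected_energy_ev(700, 2.3))   # red light: no emission, however intense
```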
Vital systems can manipulate information in vital ways unknown to dead matter. They thus also address levels somewhat above or below them, increasing the range of "least action". In this way, half-truths are also available to them, i.e. those with too little density, or energy, which could not reach a simple physical substance.
Tautology
Question: What do you mean by reducing the accuracy of something that is indisputably accurate?

Answer: The algebra of logic knows "statements", assertions that are either "true" (⊤) or "false" (⊥). They range from "tautologies" (true for every assignment of ⊤ or ⊥ to the variables) to "contradictions" (false for every such assignment). In between lie all kinds of simple or combined truths and falsehoods.
Electrical engineering has long and successfully used placeholders, say 1 for "true" and 0 for "false", which quantum mechanics then slowly finds mixed with probabilities.
Therefore, we can consider algebraic statements f = f(x1, ..., xn), with n ∈ ℕ variables xk ∈ {⊤, ⊥} and a total of \( 2^n \) input possibilities. Among them, some number \( A \in [0, 2^n] \) are true and \( B = 2^n - A \) are false. We can always replace these numbers with the smaller \( a = 2^{-n}A \) and \( b = 2^{-n}B \), for which a + b = 1 and a, b ∈ [0, 1]. Although these a and b of a single statement f take only discrete values (no more than \( 2^n \) of them), there are infinitely many statements in all.
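A small sketch of this counting (my example): enumerate all truth assignments of a statement f and form the fractions a and b described above.

```python
from itertools import product

def truth_fractions(f, n):
    rows = list(product([True, False], repeat=n))   # all 2**n assignments
    A = sum(1 for row in rows if f(*row))           # number of true rows
    B = len(rows) - A                               # number of false rows
    return A / 2**n, B / 2**n                       # a + b = 1

print(truth_fractions(lambda x, y: x or not x, 2))  # tautology:     (1.0, 0.0)
print(truth_fractions(lambda x, y: x and y, 2))     # mixed:         (0.25, 0.75)
print(truth_fractions(lambda x, y: x and not x, 2)) # contradiction: (0.0, 1.0)
```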
Second, if to each statement f, from tautologies to contradictions, we associate a factor p = p(f) ∈ (0, 1), we give it more or less importance, participation, in the current event. The product can be treated as the probability of the given event, which is partly already done in quantum physics (the Born rule).
In short, reducing the accuracy of something "indisputably true" is already contained in the tenets of (this) information theory. Uncertainty is the essence of information, and it is the fabric of the cosmos. That is why we can turn things "upside down" and say that logic and algebra look as described above because there are levels of (in)accuracy, and that the "indisputably correct" is a consequence of the law of large numbers known to us from probability theory.
At the bottom is the (Heisenberg) uncertainty of the quantum world. Between that micro world and our macro world, definiteness grows. Determinism lies at the end of such a scale, to the point that we need a lot of persuading to understand the concepts of freedom. Inside that range of sizes is the world of physics, of bare substance, without the perception of lies or the pursuit of them. In contrast, the vital world, to which we belong together with other living beings, can lie unimaginably and exchange some portion of truth within the same communication. What lies beyond that zone of sizes between the quanta and us, if anything is there, remains to be discovered.
Plunging
Question: I read and read this answer (Genesis) and it always fascinates me. Are these "decays" into other dimensions the same for all types of substance?

Answer: An astute observation! This question has bothered me for a long time, but I think that, long before it, Hawking with his theory of black hole radiation (1974) grounded my answer, although he never thought of dealing with dimensions or information in my way.
However, the mechanism of radiation, which Hawking first observed, is equally applicable to the submerging of parts of matter that would sink, or pass beyond the observer's reality, as we observe the relative slowing of time of a system in motion, or in a gravitational field.
For example, I believe that the non-uniformity of such sinking is reflected in the mutual induction of electric current and magnetic field. The movement of a magnet past an electrical conductor will initiate a current flow and, vice versa, the flow of current through a coil will induce a magnetic field through it (right-hand rule). This is not inconsistent with some of my earlier explanations (Current), but somehow I do not like to make their details public. Ask another time.
Question: Can you tell me any more details?

Answer: In the picture on the right, you can see a sketch of a ray of light. It propagates by alternately inducing the electric and magnetic parts of the wave. If we were to try to cut off only one of those phases, the assumption is that the other one would appear somewhere there.
Maxwell's equations confirm the same in the abstract. Ampère's law states that a magnetic field arises as a consequence of an electric current (1826), which Maxwell derived using hydrodynamics (1861), and now we have it as one of his equations. The law defines the relationship between magnetic fields and the electric currents that produce them, which means that cutting off one of the two fields necessarily gives rise to the other.
In other words, with the movement of electricity or magnets, the two will not pass evenly into a parallel reality, and that lack is the cause of the forces! The consequence is that the part which remains visible to the relative observer induces the part that is missing. But, as I say, this part of the theory still needs to be reexamined, and I am not yet willing to flaunt these raw pieces.
Simplicity
Question: The truth is always simple?

Answer: Unfortunately, that is not the case. It is debatable whether the deep theories of nature are really simple, or whether they just seem so to us because we understand them best.
Similarly, it may seem to us that different theories speak of the same truths (Sufficiency); say that geometry and probability (Buffon's needle) both speak of the number π = 3.14..., because they arrive at that number starting from incredibly different settings. Yet even when we see the equality of (all) infinitely many decimals, the truth is again not just one. No matter how great someone's perceptions are, the truth cannot be seen as a whole.
Russell discovered that there is no set of all sets; in mathematics Gödel likewise proved that there is no theory of all theories; further, we know that beyond the more complex there is always something even more complex, and that there are also unsolvable tasks. Unsolvability is, for now, considered relative: with the right knowledge, with the right tools, one could reach where others think it impossible. The uniqueness of the conjunction of viewpoint and truth is analogous to the uniqueness of the conjunction of subject and object (Only One).
When we speak of uniqueness here, we mean the uniqueness of each subject in the universe, their differences measured by the combination of perceptions with the whole environment. We derive the proof of that uniqueness from Riesz's statement: every coupling, say the perception information φ, has its own unique subject (vector, state of matter) y in the environment, the space of all states X, such that the perception information is φ(x) = ⟨x, y⟩ for every state x from X.
Let us note that this proof of the uniqueness of subjects in the universe derives from the assumption that subjects are vectors, hence from linear algebra, and not from the algebra of logic or from set theory, the two mentioned in the text above. I point this out only to establish the principles of (my) information theory. Uniqueness itself does not require additional evidence.
Theorems
Question: Theorems are zero information?

Answer: This is a critical point for questions about the weaving of the structures of the cosmos from information and uncertainty.
Basically, there are two (hypo)theses: either very sophisticated laws existed first and then the rest of the universe evolved, or their evolutions are joint, intertwined. Starting from the second case, one can arrive at the first, namely that the age of the cosmos, which would then be a broader concept than the physical one, is greater than the current estimates of cosmology.
Namely, with the known speed of the development of the universe from the super-hot ancient "soup", from the time of the Big Bang and during only 13.8 billion years until today, it was not possible to develop everything needed for physics if, for example, this includes the sophisticated mathematical apparatus. First there were bosons alone and, unlike fermions, which cannot do that, they were all piled up in just one position. Then those ancient particles separated and spread, building space.
If we say that that stormy period was also the time of the development of some of the first elements of geometry, there were too few such (finite) steps for the emergence of the unimaginably many options of mathematics, of which, by Gödel's discovery, there are more than any predetermined (infinite) quantity. The Higgs mechanism created the first fermions and mass from the pristine bosons together with, consistently asserting, their then new laws.
With the creation of fermions (electrons, protons, neutrons), the laws of atomic physics were created, if we assume that they did not exist before, as were the forces that were later redefined. The one electroweak force developed (split) into two, the electromagnetic and the weak nuclear, all accompanied by the appearance of corresponding bosons such as the photon, early light. Let us accept the possibility of such a development, but let us not forget the unfinished answer to the question "how it was before anything was", having in mind the complexity of mathematics itself.
A theory that starts from information as the fabric of everything in the universe would now be in contradiction were it not for the ergodic theorems of Markov chains. In the attachment on transformations it can be seen that states passing through a chain of information transmitters slowly become homogeneous probability distributions, so that what emerges after enough steps (time) we could see as the "Big Bang" of the universe 13.8 billion years ago. In the periods before that "illusion" there is nothing physically measurable, nothing real, because a "proof" of something there would also be a proof of the inaccuracy of the mentioned ergodic theorems.
But fantasies are possible, as are abstractions. In other words, the theory that disputes the reality of space, time and matter before the Big Bang of the universe does not deny that the course of those 13.8 billion years of our universe may be just one of many similar periods. That is, we live in the illusion that the mentioned 13.8 billion years, or however many there are, is just one fragment shown to us, just one island (in time) out of many. That infinity of analogous flows of everything, the environment in which we are immersed, could also accommodate the emergence of endless branches of mathematics.
Thus it makes sense to speak of the infinitely long duration of theorems, or of any similar strict abstract laws, with infinitely small (zero) energy, while still remaining within this information theory in its unchanged sense. Action, as a product of energy and duration, can still be considered the equivalent of information, although then we need not have physics outside the framework given to us (the islands of time). Such an abstract "action" could be zero, its probability one.
If we go back to the two (hypo)theses offered above, we can finally notice how, starting from the second, we arrived at the first. Basically, we follow the belief that what is certain is known to happen, so when it happens it is not news, while at the same time, on the other hand, sticking to nature's informatics.
Concrete
Question: What follows from this, that the abstract is "older" than the concrete?

Answer: Abstract truths are an ancient "ocean" with a relatively small "island" of accelerated flow (13.8 billion years) of physical reality phenomena with their laws and perceptions of all of us. That is why some constants of physics are "arbitrary", because they happened that way during development.
In this way, information theory will, in its own way, contribute to cosmology, physics and sciences in general, but especially to a deeper understanding of the nature of mathematics.
The concept of "information in everything", from its zero value (total certainty) onwards, which is the measure of delivered uncertainty and only action for its further transformation through space, time and matter, while conservating the delivered and total quantity, as well as other rules and "nature's games", some of which have arisen along the way and some of which are still unknown to us. It is comprehensive, very unusual for "normal" sciences, and do not expect that it could be easily understood, nor that it will be quickly accepted, and then not seriously elaborated. For now, we stick to checking that it is non-contradictory.
Question: How do you know that mathematics accurately depicts the world?
Answer: I don't really know. I understand that from Thales (626-545 BC) to this day, some have feverishly questioned it, trying to find some inconsistency in it, at least in the applied part. So far without success, but we keep hoping!
Joking aside, but I will answer with a counter-question: what do you offer that is more accurate, and therefore more useful than mathematics, for depicting the world in general?
Question: How is it possible that in the ocean of "nothing" we appeared, a huge island of "something"?
Answer: I understand. If the question is "how can zeros add up to something bigger", then the answer is in the infinitesimal calculus, in limits of the form 0⋅∞ which, depending on the functions multiplied, can take any value. If the question is "how can a part be equal to the whole" (a smaller part equal to its own extension), the answer is in the actual infinity of set theory.
Namely, the sets of, for example, the natural numbers ℕ = {1, 2, 3, ...}, the integers ℤ = {0, ±1, ±2, ±3, ...}, and the rationals ℚ = {m/n | m ∈ ℤ ∧ n ∈ ℕ} are infinite and have the same number of elements, although ℕ ⊂ ℤ ⊂ ℚ. This is a property of infinity so unusual in relation to finite sets that it seems distant, doubtful, or unreal to us (depending on one's knowledge of mathematics), although in fact we are all immersed in it.
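A small sketch (my example) of why ℕ and ℤ have "the same number" of elements: the pairing n → 0, 1, -1, 2, -2, ... lists every integer exactly once, so a proper part matches the whole.

```python
def nat_to_int(n):                 # n = 1, 2, 3, ...
    return n // 2 if n % 2 == 0 else -(n // 2)

print([nat_to_int(n) for n in range(1, 10)])   # [0, 1, -1, 2, -2, 3, -3, 4, -4]
```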
Question: Are you saying that our "island" is in the flow of time, in the "ocean" of something bigger?
Answer: Not in that way. Time is relative, it depends on the observer, and it is so unusual that there are pseudo-realities. We are surrounded by dimensions of time whose existences share no common events. It makes no sense to say, for example, that this "island" came before that one.
Question: Can you explain the "subordination" of the concrete to the abstract?
Answer: Yes, among other things, by the principled minimalism of information. From the assumption that a truth such as mathematics has zero information, it follows by the principle of least action that it must be maximally attractive to all physical substance (non-zero information). The concrete is so attached to the abstract, we would say by the laws of physical forces, that it seems to us that physical reality is governed by truths, or theorems.
Further stratifications come from principled diversity (equalization of chances increases uncertainty), then not everything communicates with everyone, and then not all charges react to all forces, nor do similar ones react with equal intensity, for example, depending on the distance. On closer inspection, this is all consistent with the objectivity of uncertainty. Despite every seeming mismatch of information theory, we will still be able to develop science and mathematics equally.
Networks
Question: Let's face it, certainty and uncertainty are everywhere mixed up?

Answer: That is right, from the point of view of uniqueness. For now, and until further notice, the wider universe is also diverse. The picture on the right shows a simpler model of a free network. Nodes with equal links spontaneously separate into a small number with many links and a large number with few links, for easier circulation of information through the network.
The structure of such networks, called scale-free, is more common in technological and biological phenomena than in everyday life, because the former are simpler and less burdened with additional freedoms. There is no reason (for now) to doubt a similar structure in the wider universe, except that it might be denser, of higher dimension and multi-layered. The case of completely random networks, also possible, would correspond to information without memory (Extremes).
Nature likes to repeat its universal laws in various "patterns", if we think visually, like the ripples of a water surface, or the alternation of attractive and repulsive forces on the descent into the microworld, so much so that I believe in the possibility of transferring these themes from the local universe to the wider one. And that only until the first possible contradiction.
So, we agree that the universe is a "web" of certainty and uncertainty. And maybe it is layered with local places, the aforementioned islands, with repulsive deposits of uncertainty around. All this is interwoven with attractive very old (timeless) laws like those of mathematics. Fantastic.
Suppression
Question: Can we avoid uniqueness by reducing options?

Answer: No. Uniqueness and diversity are necessary consequences of unpredictability, which is the essence of information, which in turn is the basic fabric of substance.
Time is a factor of uncertainty. News that would repeat would no longer be news, and hence the unrepeatability of the given, of each moment of existence. We see the latter as a special factor of uniqueness, and then as an irreversible flow of time. Part of it, again, is information minimalism, which in turn follows from more likely outcomes being more frequent.
By reducing diversity, therefore, we cannot avoid uniqueness, but we can control an object by suppressing it. If the target of the action is a living being, an individual or a society, then it is good to know at least the basic categories of character consistent with this text (Traits) in order to understand what I am about to say. The middle (second) league in terms of success in games to win are, for example, the "manipulators".
The theory of "perception information" predicts, and the simulations also significantly confirm, that the "manipulators" regularly beat the "good guys" (III league) and lose equally from the "bad guys" (I league). We could now add to the official psychology that the partner does not have to run away from a possible manipulator, but can move from a lower to a higher league and turn the game around, if that moving from one character to another is an easy thing.
The manipulator pushes the victim to a lower level of vitality, which, due to the law of conservation of information, can be just as unpleasant as the effort to raise the character to a higher level (in order to solve the problem), so the parties willingly enter the "death embrace". The subordinate gives up some of her freedoms for the sake of security and peace, and the superior submits to that "control" wanting her own peace as well. Calming down and the general striving for inaction come to everyone from the principled minimalism of information, and further from the principle of least action of inanimate matter.
In a ratio like the Pareto rule (Vital Few), we submit to the collective, the collective to the state, and the state to empires, similar models of escape from freedom, all guided by principled minimalism (of the amount of options), which we tend to indulge, similar to the "death drive". Thus psychopaths (leagues I and II) are a minority compared to the good ones (league III), and the evil ones are a minority among the psychopaths. These ratios could be less than a fifth (20 percent), to match modern psychology and the very few top players.
However, handing over freedom to another is not a loss of it, but a unique change in the giver-receiver system. By subordinating the individual, the collective changes into a unique new whole, more vital or efficient, depending on what is added. By adding freedoms a society can become divided, by taking them away it can become more efficient, so survival in development is a kind of juggling act (Defiance II). Analogously, with their defiance, vital (intelligent) creatures will further influence the order around them and subjugate nature to their needs.
However, whatever state nature passes into, through our artificial or spontaneous action, it does not abandon the principles of uniqueness, diversity, unpredictability and minimalism that were mentioned. Everything in the world of living beings is subordinated to them, either by instincts, pleasures and fears, or by unconscious processes.
Simulation
Question: What kind of "simulations" are you often talking about?

Answer: About alleged ones, because what I usually do are tests of the formulas, ideas and processes of information theory. They are like searching for (checking) solutions using the "Monte Carlo" method, but look at the example of the Force of Probability to see that such things can also be a bit more complex.
If you mean the computer programs themselves, it is much more difficult to give the whole rather than pieces of the Codes, because it would run to volumes of pages that (believe it or not) the authors themselves, by the end of such work, do not understand at all, or hardly understand. A simulation may also be finished with the module cvxopt, which completes the solution of the "linear program" so that the "Information of Perception", or a smaller part of it, is optimal.
As one works on simulators their quality improves, but they should never be overestimated. Fiction is one thing, reality is another. However, once you get used to "handling" simulators, they can become a powerful tool for testing a great many things. For example, without the timely creation of a simulator, or rather of a large and never fully completed package of "Terminator" modules for testing games for victory, I would hardly have paid attention to the success of the "Tit-for-tat" strategy.
The simulator would thus define a ranking of winning games (Win Lose) with a theoretical basis in "perceptual information", which is not yet part of official game theory, where this concept includes the "minimax theorem" of von Neumann (the founder of game theory) in the strategies of more complex competitions. Then follows the definition of Traits based on success. The idea also sheds a lot of light on the notion of vitality, now with the help of the new "information theory", and with it additional explanations of what was previously pure psychology of personalities. The earlier abstract interpretations of the rise and fall of civilizations from the point of view of this theory (Democracy and beyond), after recognizing the "power of games" (in a new way), acquire a deeper meaning.
I cannot say that this approach is secondary to me, nor its consequences, which fall out as from a full basket, but privately I am more attracted to "sketching or trying" new ideas in ways where simulators have not yet set foot, those that are seemingly elusive to the organized mind.
Abstraction II
Question: I know what computer simulations are (I deal with that), but what do you mean by that other, "abstract" thing?

Answer: Nothing in particular. It is playing with ideas using mathematical patterns. For example, motion simulation using vector or matrix form.
For example, a creature has two forms of movement: forward-backward and right-left. We write that
\[ v = (x, y), \]
where v indicates one movement with x units of length forward (negative number means back) and y units of length right (negative number means left). If we understand the movement v as an action, without going into deeper meanings, it consists of the movements x and y.
To understand the movement of such a simple robot, let us look at what the "perception information" of the parts of its movement v would be:
\[ x' = (a_1, b_1)\cdot(x, y), \qquad y' = (a_2, b_2)\cdot(x, y), \]
which means that the forward-backward movement is magnified a1 times and the right-left movement b1 times in order to obtain the new forward-backward movement. The right-left movement changes similarly, with the first component magnified a2 times and the second b2 times. The result is a new movement v' = (x', y') of the robot.
We can also write this in the form of the system of linear equations:
\[ \begin{cases} x' = a_1x + b_1y \\ y' = a_2x + b_2y \end{cases} \]and also in matrix form
\[ \begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} a_1 & b_1 \\ a_2 & b_2 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} \]or, written more briefly, v' = Mv. By changing the parameters of the matrix M, we manage the movement of the robot. The beauty of this "abstract simulation" is the ability to use a huge stock of ready-made tools of algebra.
Let's be even more specific, and notice that it is
\[ \begin{pmatrix} 7 \\ 2 \end{pmatrix} = \begin{pmatrix} 2 & -\frac72 \\ 1 & -\frac52 \end{pmatrix}\begin{pmatrix} 7 \\ 2 \end{pmatrix}. \]This means that by acting with the specified matrix M on the given mode of movement v, the movement of the robot does not change. Twice the matrix gives twice the steps, M' = 2M : v → 2v, or in more detail
\[ \begin{pmatrix} 14 \\ 4 \end{pmatrix} = \begin{pmatrix} 4 & -7 \\ 2 & -5 \end{pmatrix}\begin{pmatrix} 7 \\ 2 \end{pmatrix}. \]Some other matrices will change not only the "speed" (intensity), but also the direction of the robot's movement.
Imagine what you can do playing algebra with machines, and I believe you can sense the possibilities of playing algebra with algebra, or with various simulations of reality. By the way, the vectors whose direction the matrix does not change are called characteristic vectors (eigenvectors) of that matrix, and the resulting changes in their intensities are called its characteristic values (eigenvalues).
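A quick numerical check of the 2x2 example above (my sketch with numpy): the movement v = (7, 2) is an eigenvector of M with eigenvalue 1, so M leaves it unchanged, while 2M doubles the steps.

```python
import numpy as np

M = np.array([[2.0, -3.5],
              [1.0, -2.5]])
v = np.array([7.0, 2.0])

print(M @ v)                      # [7. 2.]  -> the movement is unchanged
print((2 * M) @ v)                # [14. 4.] -> twice the steps

vals, vecs = np.linalg.eig(M)
print(vals)                       # eigenvalues of M; one of them is 1.0
```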
Expansion
Question: What abstractions should I use to simulate synergy and emergence?

Answer: For example, take the motion of a point on a flat surface. Let us understand it as some spatial movement orthogonally projected onto that surface (Abstraction).
It is clear that there are several 3D movements that project onto the same 2D one, that spatial movement needs more freedom; it has a greater vitality than the projection, which means a higher dimension in the algebra. The action with two components should be expanded to three, and that is synergy, as when we split one of the summands into two, ab → (a1 + a2)(b1 + b2), to get a larger sum of products.
It will be as in the picture above on the right, in the case of emergence with the ant colony. Shredding reduces the density, increases the overall information of the environment, and releases latent synergy. We expand in order to contract, to squeeze out excess vitality. Here is how linear algebra supports such a calculation.
It is just as correct to say "two cows plus two cows are four cows" as it is to say "two cups plus two cups are four cups", ignoring the type of cows or the shapes of the cups. That's the math. Its universality allows us to call the activities of computers (or robots) "actions", referring to physical actions, equivalent to information. Moreover, we interpret the emergence problem informatically, while solving it algebraically.
So, we take it that u = (x, z), with components x right-left and z up-down, represents the movement of a point on the monitor screen, while v = (x, y, z) is the movement of the same point as we see it, together with the depth y. The screen movements, for example:
\[ \begin{pmatrix} 5 & 3 \\ 0 & 6 \end{pmatrix}\begin{pmatrix} 1 \\ 0 \end{pmatrix} = 5 \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \quad \begin{pmatrix} 5 & 3 \\ 0 & 6 \end{pmatrix}\begin{pmatrix} 3 \\ 1 \end{pmatrix} = 6 \begin{pmatrix} 3 \\ 1 \end{pmatrix}, \]or, more briefly, M : u1 → 5u1 and M : u2 → 6u2, meaning that there are two states u1 and u2 which this matrix stretches 5 and 6 times respectively in the plane of the screen. We can understand them as the following three movements:
\[ \begin{pmatrix} 1 & 2 & 3 \\ 0 & 4 & 5 \\ 0 & 0 & 6 \end{pmatrix} \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}, \] \[ \begin{pmatrix} 1 & 2 & 3 \\ 0 & 4 & 5 \\ 0 & 0 & 6 \end{pmatrix} \begin{pmatrix} 2 \\ 3 \\ 0 \end{pmatrix} = 4\begin{pmatrix} 2 \\ 3 \\ 0 \end{pmatrix}, \] \[ \begin{pmatrix} 1 & 2 & 3 \\ 0 & 4 & 5 \\ 0 & 0 & 6 \end{pmatrix} \begin{pmatrix} 16 \\ 25 \\ 10 \end{pmatrix} = 6\begin{pmatrix} 16 \\ 25 \\ 10 \end{pmatrix}. \]Written briefly, N : v1 → v1, N : v2 → 4v2 and N : v3 → 6v3. We usually normalize eigenvectors (divide the coefficients by the length), but here they are kept integer. Multiplying an eigenvector by any non-zero number changes neither the characteristic equation (Nv = λv) of the given matrix nor the associated eigenvalues λ.
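A numerical check of this 3D extension (my sketch): the upper triangular matrix N has the listed integer eigenvectors with eigenvalues 1, 4 and 6, which are exactly its diagonal entries.

```python
import numpy as np

N = np.array([[1, 2, 3],
              [0, 4, 5],
              [0, 0, 6]], dtype=float)

for lam, v in [(1, [1, 0, 0]), (4, [2, 3, 0]), (6, [16, 25, 10])]:
    v = np.array(v, dtype=float)
    print(np.allclose(N @ v, lam * v))   # True for each listed pair

print(np.linalg.eigvals(N))              # 1, 4, 6: the diagonal of the triangular matrix
```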
When we visualize vectors as oriented lengths in the Cartesian rectangular coordinate system, this is a transition from the plane Oxz to the space Oxyz. It is clear that along the depth we can have another, third vector, whose elongation is not observed in the projection onto the plane.
Upper triangular matrices have their eigenvalues on the diagonal, so this problem was easy to work out. Second, block diagonal matrices can be treated through "eigenvalues" from the otherwise more unusual algebra of vector spaces over a non-commutative field of scalars Φ. The advantage of this approach is that the eigenvalues of block matrices can easily be exchanged for the eigenvalues of the wider matrix.
And that is it: by expanding the number of coordinates and replacing the given scalars with new ones, we get synergy, that is, the emergence of the action (information) we were working with. It is analogous to adding another summand to the sum of products to increase vitality, the information of perception, described earlier.
Unnoticed
Question: Some information is "under the radar" of physical actions?

Answer: Yes, that's right (Half Truths). These are fictions, from lies, unrealistic imaginations, to perhaps very accurate but abstract ideas. However, we are more interested in "almost real" fantasies. I will take this opportunity to note the importance of the block matrices mentioned above.
Scalars are usually real or complex numbers (Φ = ℝ or ℂ), but they can also be regular (invertible) square matrices of the same order. Many theorems of linear algebra will still be valid, and with them the corresponding interpretations of quantum mechanics.
Thus the conjugation and transposition, i.e. the Hermitian transpose of a matrix (M, of second order), becomes
\[ M^\dagger = \begin{pmatrix} A & B \\ C & D \end{pmatrix}^\dagger = \begin{pmatrix} A^\dagger & C^\dagger \\ B^\dagger & D^\dagger \end{pmatrix}, \]where A, B, C and D are now square matrices or ordinary scalars. At the same time, Hermitization, for example A†, is for a matrix the transposition and conjugation of its elements, while for a complex number (from the set ℂ) it is only conjugation, and for a real number (ℝ) neither of the two.
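A small check of this block rule (my sketch with numpy): the Hermitian transpose of a 2x2 block matrix swaps the off-diagonal blocks and daggers each block.

```python
import numpy as np

rng = np.random.default_rng(1)
A, B, C, D = (rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)) for _ in range(4))

M = np.block([[A, B],
              [C, D]])
M_dagger_blocks = np.block([[A.conj().T, C.conj().T],
                            [B.conj().T, D.conj().T]])

print(np.allclose(M.conj().T, M_dagger_blocks))   # True
```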
Hermitian operators (matrices M† = M) still remain current in quantum mechanics. In the classical eigen-equality, Mv = λv, they have real eigenvalues λ, which are the possible measured values of the observable (physical quantity) process M, and the corresponding eigenvector v then expresses the quantum state. In the extended interpretation, "eigenvalues" can also be processes. When block matrices are Hermitian, their eigenvalues will be real numbers interpreted as observables.
We imagine very complex such processes in the macro-world, just as we understand it through atomic structure, although in both cases we do not directly see the elements of the micro-world. We can now see a process as an eigenvalue, as a physical reality that does not begin or end in the present, and so is not within the range of the indeterminacy relations of the current present. It need not have any exchange of information (communication) with the environment.
Information thus held undelivered is uncertainty; it contains several possible expressions. A particle-wave superposition can collapse into different outcomes, like a roll of dice. However, the state-process sliced between two physical interactions has no real eigenvalue, so it remains a process (matrix), or a complex number (bypass). I explained the second case with commutators, with the decomposition of a Hermitian process into factors of non-Hermitian processes. The first case is a separate topic.
When we look at processes as states (operators are vectors), their large time intervals Δt go together with small energy changes ΔE, not only because of the Heisenberg uncertainty relations (Δt⋅ΔE ≈ h), but also for the reasons mentioned above. The process as a closed entity, without energy exchange with the environment, can be considered a particle-wave between interactions. It can have many paths, and only the measurement will determine one of the possible ones.
Not everything communicates with everything else, nor something with anything, so processes can miss us. Long processes thus fall into the wrong environments, remaining unexplained and "under the radar" of surrounding actions. As with the aforementioned photoelectric effect, processes that are too long, with insufficient energy, remain imperceptible to robust physical matter. They are powerless to rouse it. As particle-waves, they fail to reach the levels at which matter can perceive them.
In this way, I defend the idea of the omnipresence of information in space, time and matter, and of its essential uncertainty. It is the fabric of both the physical world and the abstract laws that drive it. The concrete easily attaches to the abstract, less free phenomena attract the freer ones, while it seems to us that physical reality is governed by truths, i.e. the laws of nature.
Superposition
Question: What about superposition?

Answer: Superposition of states is the representation of a quantum system with multiple expressions simultaneously before measurement. It can be explained with Thomas Young's experiment from 1804 (Double Slit). It is written as a wave function, that is, a quantum state vector.
Also useful is the description of superposition by probability distributions, and of its collapse into some measurement outcome by the realization of only one of the possible random events. In this way, we remove some of the usual "magic" from it and facilitate the application of (this) information theory. I will avoid repetition and skip some of those parts.
Spin, the internal angular momentum of an elementary particle, can be roughly described macroscopically as the spinning of a small gyroscope. It is an expression of a physical action, the product of the spin quantum number s and the reduced Planck constant ℏ. Particles with an integer spin quantum number are bosons, and those with a half-integer one are fermions. The simplest have only two states, ±sℏ, pointing "up" or "down", and are then represented by vectors with two components. The corresponding operators are the Pauli matrices.
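For concreteness, here are the three standard Pauli matrices written out (a sketch in numpy, my addition): each is Hermitian with eigenvalues +1 and -1, so the measured spin projections are ±ℏ/2, and the eigenvectors are the two-component "up" and "down" states.

```python
import numpy as np

sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
sigma_y = np.array([[0, -1j], [1j, 0]])
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

for s in (sigma_x, sigma_y, sigma_z):
    print(np.linalg.eigvalsh(s))          # [-1.  1.] for each Pauli matrix

print(np.linalg.eigh(sigma_z)[1])         # columns: the "down" and "up" state vectors
```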
The position operator corresponds to the position of the particle that is visible. A free particle can take very different positions, and its operator is \( \hat{x} = i\hbar\partial_p \), a partial derivative with respect to momentum, with a large range of action. The matrix representation of that operator
\[ \hat{X} = \sqrt{\frac{\hbar}{2m\omega}}\begin{pmatrix} 0 & \sqrt{1} & 0 & 0 & \dots \\ \sqrt{1} & 0 & \sqrt{2} & 0 & \dots \\ 0 & \sqrt{2} & 0 & \sqrt{3} & \dots \\ 0 & 0 & \sqrt{3} & 0 & \dots \\ \vdots & \vdots & \vdots & \vdots & \ddots \end{pmatrix} \]has an infinite number of components, and the position states are represented by vectors with correspondingly so many components.
Through interaction, the quantum particle-wave collapses from the state of superposition into one of the possible outcomes. It thus declares its previous status and, in a new guise, continues to a new communication. Cascading through its steps, it emits and receives information, redefining itself along the way. Each of its possible states (spin, position, energy, ...) can change in those steps, and each is represented by a vector with two or more components. Every particle, even the most elementary one, is a set, a function of such vectors, their tensor product, or a matrix with them as columns. You can then only imagine what a complex mathematical structure a complex physical system is.
What should be particularly emphasized here is the uniqueness of the outcome out of the multitude of possibilities. The more complex the structure of the physical system, the denser the described statements, the more frequent the redefinitions, and the relatively smaller the vagueness of individual elements in relation to the whole. A reality is built that the smaller ones can influence less and less. That is why the moon is where it is even when we are not looking, unlike an electron, which will jump out of position when hit by a single adequate photon.
A more complex state represents a better-determined process. It has better-targeted outcomes, greater predictability. The harder it is for it to deviate from the planned direction of development, the more inert it is. For now, we can speak roughly in statistical averages, ignoring the principled diversity of nature toward which it would spontaneously drift from greater information (greatest in homogeneous distributions). The lazier the bodies, the smaller their dispersions of behavior.