Hahn-Banach » srb
Question: Why is the Hahn-Banach theorem so important when there are few (outside of mathematics) who can understand it?

Answer: The link in the image on the left leads to an unusual enumeration of facts about that theorem. There are many expositions of this theorem and of its various forms, some of which I have collected in the attachments of this site (3.10. Theorem). It is hard to speak about it casually, but I will try.
1. Let's imagine two workers who would each perform some given jobs V more efficiently individually than together. They bump into each other and get annoyed that the other is in the way. They work according to a norm, so their total wage (payment) for joint work is less than the sum of their wages for individual work. This is the content of the first of the relations below, the sublinearity inequality; the second, the homogeneity equality, expresses with this wage their better effect when they work separately:
p(x + y) ≤ p(x) + p(y), p(αx) = αp(x),
for every real number α ≥ 0. The states x, y ∈ V are the sequences that describe these workers, hence vectors, so V is the vector space on which their work is defined. The payout is given by the function p: V → ℝ, a sublinear mapping of the work performance V into real monetary values ℝ.
In practice, it happens that we evaluate work on the basis of a narrower part of the jobs W ⊂ V and of some generator g: W → ℝ, a simulation of the work, whether it is an engineer's theoretical estimate or the result of tests during the actual process. If it is found that g(x) ≤ p(x) for all x ∈ W, the engineer will report to the employer that the previous estimate of the value of the workers' work was wrong, excessive, and should be recalibrated.
Management will then determine a linear extension f: V → ℝ of the function g, so that f(x) = g(x) for all x ∈ W and f(x) ≤ p(x) for all x ∈ V. This is, so to speak, the literal content of one of the H-B theorems: such an extension f always exists; see my attachment (Proposition 3.11).
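As a numerical illustration (my addition, not part of the original text), here is a minimal Python sketch. The payment p(x) = |x1| + |x2| on V = ℝ², the generator g on the diagonal subspace W, and the extension f(x) = (x1 + x2)/2 are all assumed choices made for the example; the code only checks the Hahn-Banach conditions f|W = g and f ≤ p.

```python
import numpy as np

# Hypothetical setup: V = R^2, p(x) = |x1| + |x2| is a sublinear "payment",
# W = {(t, t)} is the sample subspace with generator g(t, t) = t <= p(t, t).
def p(x):
    return np.abs(x).sum()          # sublinear: p(x+y) <= p(x)+p(y), p(ax) = a p(x), a >= 0

def g(t):
    return t                        # linear on the one-dimensional subspace W

def f(x):
    return 0.5 * (x[0] + x[1])      # one Hahn-Banach extension, linear on all of V

rng = np.random.default_rng(0)
for _ in range(1000):
    x = rng.normal(size=2) * 10
    assert f(x) <= p(x) + 1e-12     # dominated by the payment everywhere on V
    t = rng.normal() * 10
    assert np.isclose(f(np.array([t, t])), g(t))   # agrees with g on W
print("f extends g and stays below p on all tested vectors")
```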
2. Let's now imagine the situation of a company: out of all its possible jobs V, on some smaller sample of jobs x ∈ W ⊂ V we can standardize the work and determine the value ∥x∥ of each phase of work. Let α ≥ 0 be some fixed number. Then g(x) ≤ α∥x∥ is a bound by a sublinear function, like the previous payment p, because the norm has exactly those properties. Simply put, a job standardization has been done.
The Hahn-Banach theorem for normed spaces (3.12. Proposition) establishes that it is possible to extend this standard of work from W to V. Strictly speaking, there is an extension f with f = g on all of W such that f(x) ≤ α∥x∥ for all jobs x ∈ V of this company. In other words, a standard set on a smaller sample can always be extended to a larger one if the elements of the system can be viewed as vectors.
The Hahn-Banach theorem (3.13. Proposition) establishes something similar. The previous one referred to sublinear functionals, and this one to linear ones. It establishes that norming the jobs by a linear function g(x) on some smaller volume of jobs x ∈ W ⊂ V can also be extended to all activities of the given company, with some extension f(x) where x ∈ V.
3. In quantum physics, these theorems support the interpretation of vectors as quantum states and of vector spaces as quantum systems. It is especially useful that linear mappings are themselves vectors, which lets us interpret them as quantum processes. We can imagine how many physical phenomena are hidden in that part of functional analysis, waiting for us to interpret them.
The Hahn-Banach theorem was discovered in the 1920s, around the time of Gödel's discoveries about the logic of deductive theories (Deduction II), but the consequences went in different directions. The first dealt with extensions of functionals, not only in mathematical (abstract vector or metric) spaces but also in concrete applications. The others dealt with extensions of theorems, axioms, and definitions, in general with theories themselves.
In information theory, as I see it, information is the basis of space, time, and matter, as well as of their laws, so these two kinds of extension are analogous. For example, a deductive theory g, which has an application to a narrower practice W, also has broader applications to the practice V ⊃ W through its extension f, such that f = g everywhere on W ⊂ V.
Ripples » srb
Question: Do the said "extensions" require a "vector space"?

Answer: The assumption for further application of the functional, from a smaller to a larger domain, is the linearity of that space (3.10. Theorem). The basis is a vector space, with or without a norm, depending on the type of expansion.
Vectors are any quantities that can be multiplied by scalars and added together. We often represent them by oriented lengths. We imagine them as arrows, the sides of a parallelogram that add up to its diagonals and are simply scaled by multiplication. However, vectors are also sequences of scalars (numbers), then polynomials, or solutions of differential equations, and also quantum states, or sequences of possible observations of a subject.
In this picture of a duck on the water, ripples of constant wavelength λ and, with growing distance, decreasing amplitude a are visible on the surface. What we see is a local (length λ) change in the height (depth) of the wavelet, y = a sin(2πx/λ), which is the graph of a sinusoid when represented in a Cartesian rectangular coordinate system (Oxy). However, in reality a → 0, so the amplitude a measures the decreasing visibility (existence) of the wave and is not extendable in the vector-space sense above. Unlike the height y, the wavelength λ does carry over; it does not change with increasing distance x.
The amplitude a has nothing to do with the frequency ν or the wavelength λ. Wavelength and frequency, however, stand in an inverse relationship, so that increasing one decreases the other. Their exact product is the wave velocity, λν = c, so in some cases I consider the frequency a "quantity of uncertainty" of the phenomenon, while λ is the uncertainty of position.
All matter is of a wave nature (Louis de Broglie, 1924), everything from quantum quantities to all that consists of them, so the "duck on water" example is more than instructive. Longer wavelengths mean greater indeterminacy of the position of material phenomena, while amplitudes measure visibility and the chance of occurrence. The movement of matter is, in the sense of the Hahn-Banach theorem, extendable, but its visibility is not.
Perhaps some historical phenomena are cyclical, with periods τ = λ/c instead of wavelengths, so that their importance decreases over the centuries, say, due to technological development. For example, Egyptian war chariots are not as important in warfare today as they were in ancient times. Deterministic chaos theory predicts the recurrence of storms, but the Earth's climate changes through millennial revolutions around the Sun, so the old influences fade. This interpretation of linear spaces makes sense because operators are vectors, so processes are states.
Recently (June 18, 2024), Richard Lighthouse contacted me with the interpretation that our entire universe is flickering. It supposedly turns off and on at more than a trillion cycles per second. Attached was a link to the video (New Paradigms) with an interesting interview about that alleged discovery. Unfortunately, for now, I still don't have a specific comment.
Ballista » srb
Question: While we emphasize their primary properties, what do the extensions' secondary properties do?

Answer: Objective randomness (Accordance) also includes ignorance; we supplement this with communication, and with it we remove uncertainties. By striving for a smaller "amount of options" (measured by information), it happens that we have to communicate.
In this absurd "doing for the sake of doing less," which we constantly encounter in various forms, we emphasize the smaller while ignoring the larger. It is absurd that something bigger "hidden under the carpet" remains irrelevant, and this is exactly what happens to us all the time. That's why your question is revealing.
Dealing with the ballista, the Roman marksman did not have to be aware of the atomic structure of matter, which, by the way, is very important for the laws of the universe and for himself, nor did he, rejoicing at a hit, understand all the consequences of his act for world history. This too, in the end, is what the theorems of vector spaces describe to us. Not to go too far, there is a "mini version" of vectors in the complex numbers (Triangle in Complex Plane).
For example, in the complex plane the vertices of a triangle z1, z2, z3 ∈ ℂ are just complex numbers, i.e., z1 = x1 + iy1, z2 = x2 + iy2, and z3 = x3 + iy3, where xk, yk ∈ ℝ (k = 1, 2, 3) are real numbers and i² = -1. Its center of gravity is
G = (z1 + z2 + z3)/3.
Let's denote by a = z3 - z2 and b = z1 - z3 two side "vectors" of this triangle, and by ā = x - iy the conjugate of a complex number a = x + iy. Here ℐ(x ± iy) = ±y denotes the imaginary part of a complex number, so
PΔ = ℐ(āb)/2
is the area of the given triangle. The remaining third side c = z2 - z1 appears in the expression:
2PΔ = |a||ha| = |b||hb| = |c||hc|
through its height hc. The other two, ha and hb, are the heights to the sides a and b, respectively.
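A short Python check of the centroid, area, and height formulas above (my addition; the three vertices are arbitrary sample values, and the shoelace formula is used only as an independent cross-check):

```python
import numpy as np

z1, z2, z3 = 1 + 2j, 4 + 1j, 2 + 5j         # arbitrary sample vertices

G = (z1 + z2 + z3) / 3                       # center of gravity
a, b, c = z3 - z2, z1 - z3, z2 - z1          # side "vectors" of the triangle

P = (a.conjugate() * b).imag / 2             # area: Im(a_bar * b) / 2 (sign = orientation)
h_a, h_b, h_c = (2 * abs(P) / abs(s) for s in (a, b, c))   # heights from 2P = |s| h_s

# Independent cross-check of the area with the shoelace formula:
xs = np.array([z1.real, z2.real, z3.real])
ys = np.array([z1.imag, z2.imag, z3.imag])
shoelace = 0.5 * abs(xs @ np.roll(ys, -1) - ys @ np.roll(xs, -1))

print(G, abs(P), shoelace)                   # the two area values agree (5.0 here)
print(h_a, h_b, h_c)
```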
Similarly, we find many other significant points or quantities of the triangle with which we are used to working in geometry in various ways. Many of them are listed in the aforementioned contribution of mine, and even more are left there to the reader, the student (at that time, in 2016), for independent research.
Behind the "simple" high school geometry remains undiscovered an equally large but more complex and perhaps more applicable theory of complex numbers, and in addition to both, the algebra of vector spaces. I mean, it's not just the material world full of hidden "details," but mathematics and fiction in general have that same quality. When the focus is not on them, they hide the property of not only being invisible but also, in many ways, irrelevant.
Interrelations » srb
Question: Can an object be unnoticed by every subject?

Answer: In reality, it can't. It can be pseudo-reality or fiction. This is the essence of the theory of information, at least as we derive it: that the reality of the objects of the universe emerges through perception.
Out of uncertainty, communication expels information that remains with greater certainty, becoming physical reality. It used to be a disputed statement (1924) that an electron only gets its trajectory by measurement, but now it is something that is taken for granted, in which there is no longer any mysticism. However, this point of view is deeply established in, say, functional analysis and, of course, still goes unnoticed.
Proposition. Let x0 ≠ 0 be a fixed point in a Banach space X. Then on X there is a bounded linear functional f such that:
f(x0) = ∥x0∥, ∥f∥ = 1.
Proof: Points x ∈ X of the form αx0, with α a scalar, form a vector subspace V ⊂ X, because with α1x0 and α2x0 the point λ1(α1x0) + λ2(α2x0) also belongs to V. On V we define the function f(αx0) = α∥x0∥, obviously linear and bounded, and in addition:
\[ \|f\|_V = \sup_{\|x\|=1} |f(x)| = \sup_{\|\alpha x_0\|=1} |\alpha|\|x_0\| = 1. \]Applying the Hahn-Banach theorem (3.10. Theorem) to this functional, we extend it to the entire space X without increasing its norm. The bounded linear functional f thus obtained satisfies the stated conditions. ∎
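In ℝ² with the Euclidean norm, the extended functional can be written explicitly as f(x) = ⟨x, x0⟩/∥x0∥. A small numeric check (my addition, with an arbitrary choice of x0):

```python
import numpy as np

x0 = np.array([3.0, 4.0])                    # arbitrary fixed point, ||x0|| = 5
f = lambda x: x @ x0 / np.linalg.norm(x0)    # bounded linear functional on R^2

print(np.isclose(f(x0), np.linalg.norm(x0)))   # f(x0) = ||x0||

# ||f|| = sup |f(x)| over ||x|| = 1, sampled on the unit circle:
phi = np.linspace(0, 2 * np.pi, 10000)
unit = np.stack([np.cos(phi), np.sin(phi)], axis=1)
print(np.max(unit @ x0) / np.linalg.norm(x0))  # approximately 1, i.e. ||f|| = 1
```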
The information of perception is a functional, mapping vectors into scalars, that is, physical states into numbers. Such a number is the norm ∥x0∥ of an arbitrary state x0. Interpreted in this way, for every object there is at least one subject that perceives it.
Once the information is thrown out, the repulsive forces of uncertainty will push it away. In other words, the perceived object will preserve the law of conservation of information in physical reality. Exactly this law is the basic criterion of reality here. Because of it, information, which would otherwise vanish as soon as it is created, must keep changing from one form to another. It is like a flame that we transfer from a log to a candle: always maintained, reborn, or, should we say, fed with ever new fuel.
Outsider » srb
Question: What are pseudo-real objects to you?

Answer: In the realm of objective chance, possible outcomes that did not occur are not part of physical reality. Such are the pseudo-realities. They do not meet the criterion of reality because they abandon this side's law of conservation of information. That criterion is relative, and to those renegade outcomes we are equally pseudo-real, I believe.
The immediate pseudo-real states will continue their "life" in processes analogous to the real ones, it is assumed, and may move further and further away from the original flows. The fundamental uncertainty, which is common to all, will increasingly limit the accuracy of guesses and the understanding of the range of further outcomes. These pseudo-worlds would eventually escape the power of our imagination, and they could be the interpretations of some well-known theorems. One such, the next one, can be translated as "this side," or real perception.
Proposition. Let V ⊂ X be a vector subspace and x0 ∈ X - V a fixed point at distance d > 0 from V. Then there exists on X a bounded linear functional f such that: f(x) = 0 for x ∈ V, f(x0) = 1, and ∥f∥ = 1/d.
Proof: For each point x ∈ V and scalar t, the point y = x + tx0 ∈ V0 ⊂ X is uniquely determined, and such points form a subspace V0. Let the functional f(y) = f(x + tx0) = t be defined on V0. It is obviously linear, with f(y) = 0 for y ∈ V and f(x0) = 1.
For each y ∈ V0 we have ∥y∥ ≥ |t|d. Indeed, if y ∈ V0 - V, then:
\[ \|y\| = \|x + tx_0\| = |t| \left\|\frac{1}{t}x + x_0\right\| \ge |t|d, \]because x/t ∈ V, so ∥x/t + x0∥ cannot be less than the distance of the point x0 from V; if instead y ∈ V, the inequality holds because then t = 0. So for all y ∈ V0:
\[ |f(y)| \le \frac{1}{d}\|y\|. \]On the other hand, let (xk) be a sequence of points from V such that:
\[ \lim_{k \to \infty} \|x_k - x_0\| = d. \]Then |f(xk - x0)| ≤ ∥f∥V0∥xk - x0∥, while for t = -1 according to the previous one:
|f(xk - x0)| = 1,
\[ \|f\|_{V_0} \ge \frac{1}{\|x_k - x_0\|}. \]Letting k → ∞ gives \(\|f\|_{V_0} \ge 1/d\). Together with the previous inequality, this yields \(\|f\|_{V_0} = 1/d\). Extending f from V0 to X by the Hahn-Banach theorem, without increasing the norm, completes the proof. ∎
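A concrete instance (my sketch, with an assumed minimal setup): X = ℝ², V the x-axis, x0 = (0, 1), so the distance is d = 1 and the functional is simply f(x, y) = y.

```python
import numpy as np

# V = span{(1, 0)}, x0 = (0, 1), distance d = 1, functional f(x, y) = y.
x0 = np.array([0.0, 1.0])
f = lambda v: v[1]

print(f(np.array([7.0, 0.0])) == 0.0)     # f vanishes on V
print(f(x0) == 1.0)                       # f(x0) = 1

# ||f|| = sup |f| over the unit circle = 1 = 1/d, since d = 1:
phi = np.linspace(0, 2 * np.pi, 10000)
print(np.max(np.abs(np.sin(phi))))        # approximately 1
```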
One of the formal applications of this theorem can be seen in the proof of the proposition that, in a Banach space, the notions of a fundamental and a total set of vectors are equivalent.
Mathematics opens a wider picture than physical reality, and fiction a wider one still. Unlike the previous section (Interrelations), which speaks of at least one functional that maps the point x0 ∈ X, and there can be as many of them as there are subjects perceiving the given object x0, the last proposition speaks of the existence of a functional that "does not perceive" V ⊂ X. We have no objection to that conclusion because we know that not everything communicates with everyone.
What may seem odd about this statement is the declining capacity of ∥f∥, the "perceptual information" as we call it, when the seen object x0 is farther from the unseen, all the x from V. One way to understand this is to view V as the distant past, whose information density in the present becomes smaller and smaller.
A mapping of a space to scalars is a functional, f: X → Φ. Applied to quantum physics, the functionals are quantum processes (operators), while the scalars are complex numbers (Φ = ℂ). Here we further interpret them as "information of perception," when the scalars of the macro-world are real numbers (Φ = ℝ). We know from algebra that functionals can always be interpreted as a scalar product of vectors, or of points of a metric space, and that the spaces of functionals and vectors are mutually dual.
Variance » srb
Question: Explain to me the difference between "dual vectors" and "dual spaces"?

Answer: I assume that this answer should be devoted to co- and contravariant vectors, or more precisely, to their sequences of coordinates; otherwise, the question is too easy. First, I will clarify "invariant".
In the picture on the right is the vector \( \overrightarrow{OA} = \vec{a} \) of length a. From O, the point A is reached by adding x unit vectors of the abscissa and y unit vectors of the ordinate, \( \vec{a} = x\vec{e}_x + y\vec{e}_y \). Therefore, the ordered pair A(x, y) gives the covariant coordinates of the point A.
What seems natural in this oblique system is not so in a polar one. And since the diameter is normal to the circle, orthogonal projections onto the curvilinear coordinate axes would be better suited, here A'(x', y'). We write the same vector and the same point A = A' in two ways, by co- or, in this second case, by contravariant ordered sequences of projections. When working with more general or longer sequences, covariant coordinates are denoted by lower indices and contravariant coordinates by upper ones.
The first are the numbers of displacements along the base vectors of the given coordinate system; the second are the "real" projections and are more often used. The scalar product of those two sequences is an invariant, the square of the length of the given vector. It is a quantity that does not change with a change of the coordinate system:
A⋅A' = xx' + yy' = x(x + y cos φ) + y(y + x cos φ) = x² + y² + 2xy cos φ = a².
The image and the cosine theorem are used. Of course, this result holds in general: \(x_1x^1 + x_2x^2 + \dots + x_nx^n = a^2\), when we observe the same vector in some n-dimensional system of covariant coordinates A(x₁, x₂, ..., xₙ) or contravariant coordinates A′(x¹, x², ..., xⁿ). You will find proofs in many textbooks.
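A numeric check of this invariant (my addition), with an assumed oblique angle φ = 60° and sample coordinates x, y:

```python
import numpy as np

phi = np.pi / 3                              # angle between the oblique axes
x, y = 2.0, 3.0                              # covariant coordinates (displacements)

# Contravariant (projection) coordinates, as in the text:
x_p = x + y * np.cos(phi)
y_p = y + x * np.cos(phi)

inv = x * x_p + y * y_p                      # scalar product of the two sequences
a2 = x**2 + y**2 + 2 * x * y * np.cos(phi)   # squared length by the cosine rule
print(np.isclose(inv, a2))                   # True: the invariant a^2
```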
A linear functional, f: V → ℝ, is a linear mapping of vectors to numbers. Due to the nature of vector spaces, to determine a functional it is sufficient to know the mapping of the base vectors:
\[ f(\vec{e}_1) = f_1,\ f(\vec{e}_2) = f_2, \ ..., \ f(\vec{e}_n) = f_n, \]because the:
\[ f(\vec{a}) = f(x^1\vec{e}_1 + x^2 \vec{e}_2 + ... + x^n \vec{e}_n) = \] \[ = x^1f(\vec{e}_1) + x^2f(\vec{e}_2) + ... + x^n f(\vec{e}_n) = \] \[ = x^1f_1 + x^2f_2 + ... + x^nf_n. \]I deal with such detailed treatments of functionals in different metric spaces in the Representations section of this site.
When we take a closer look at the presented calculation, we see that the x's here play the covariant role and the f's the contravariant one. Also, the functional f can be uniquely written as a scalar product of the coordinate sequence of a given vector and the sequence of base values: (x¹, x², ..., xⁿ)⋅(f1, f2, ..., fn) = f(a).
By the way, the mapping of bases and vectors in tensor calculus is co- and contra-variant, so this presentation is neat from that point of view. Looking further, vectors and their linear mappings form dual spaces, so it is fine to say that the former are co-variant and the latter contra-variant vectors. That is the answer to the question. Look for further details yourself (e.g., Dual Vectors).
Curiosity » srb
Question: Is curiosity a boon or a curse?

Answer: In myth, in the world of the gods, there was Pandora, a woman of burning curiosity. The gods were very favorable to her, gifting her with exceptional speech abilities and intelligence. Once, she received a beautiful box with instructions not to open it under any circumstances.
Pandora's curiosity grew into an obsession with that mysterious box. One day, Pandora finally decided to let go of the discomfort of her curiosity. When she opened the box, terrible creatures and sounds came out of it, spreading terror and chaos around Pandora. She wanted desperately and in vain to control the creatures and put them back in the box. Broken, she decided to open the box a second time, seeking salvation. But then wonderful phenomena came out of the box.
This story is an ancient lesson about the extreme consequences that can arise when we venture too deeply into the unknown. It speaks of the virtues and vices explored by intelligence (A = Q/B), which strives (due to the law of conservation of information) to remain in the middle of a greater freedom (Q = A⋅B) and often wants to step outside the boundaries of the hierarchy (B). The freedoms are further decomposed in the sum of products:
Q = a1b1 + ... + anbn,
where intelligence and limitations are broken down into corresponding factors. It takes us even further into these stories about the information of perception, and reminds us of the genius of the ancient thinkers.
On the other side is the force of probability. We discover it in the more frequent occurrence of more likely outcomes, in the brain's tendency to prefer the familiar, in inertia. It is the deepest cause of the fear of the unknown, I believe, but also the leading opponent of vitality. According to these findings, curiosity springs from the need to deviate from the trajectory of the principle of least action of dead physical substance, and in this sense it grows with courage, with the desire to dominate (the animate and inanimate worlds), with the need to resist, and with the will to endure.
This, in turn, brings us to a topic (Creativity) where we have often disagreed about the limitations of artificial intelligence. My opinion remains that AI should be set in motion by offering it alternatives at the crossroads of its algorithms, with the incorporation of a "need," which I called "curiosity," for processes to occasionally and deliberately deviate from the path the machine would regularly take. From the above, however, this "curiosity" also leads to the desire for survival (the law of conservation of randomness), to a certain daring, or dominance. It is not known which is worse for the creator of the AI himself.
Infiltration » srb
Question: It is hard for me to understand how the "resistance" strategy can deal with an apparently stronger opponent?

Answer: It has confused everyone who understood the meaning of "perception information," and then again came as a surprise after testing with game simulations. Nature sometimes keeps its great secrets fully revealed, as with the processes by which life infiltrates through evolution.
Let's remember how microbes manage to bring down even the largest creatures while remaining invisible until the turning point, or how the ancient Greeks managed to defeat the hitherto invincible Troy with an attractive Trojan horse. The more dangerous enemies of an empire have always been internal rather than external. All such victories, as well as the assimilations of smaller by larger and larger by smaller nations, are contained in the principle of the strategy of "evil" (first league).
It is not enough to simply yield when you become subordinate; on the contrary, you need constant, measured, timely, and creative (unpredictable to the majority of "goodies" and a little less so to the "manipulators") stubbornness and a style of resistance. With that charm, you get into every pore of the larger beast's skin and rein it in, slowly taming it. Breathing down your opponent's neck, you guide him into wanting to be what you want him to be. It increases the information of perception:
Q = a1b1 + a2b2 + ... + anbn = a⋅b = ab cos φ
due to the reduction of the angle φ = ∠(a, b) between the state of the subject and that of the object, the vectors a and b, while their intensities a and b remain unchanged. The infiltration that then occurs is a very natural phenomenon in the sense of "like sticks to like." It is like wave interference, otherwise a universal cosmic phenomenon, which suddenly becomes your powerful ally in victory.
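The effect of shrinking the angle can be seen numerically (my sketch, with arbitrarily assumed intensities a = b = 1):

```python
import numpy as np

a = b = 1.0                                   # fixed intensities of subject and object
for phi_deg in (90, 60, 30, 0):               # angle between the two state vectors
    Q = a * b * np.cos(np.radians(phi_deg))   # information of perception Q = ab cos(phi)
    print(f"phi = {phi_deg:3d} deg  ->  Q = {Q:.3f}")
# Q grows from 0 to ab as the subject aligns with the object.
```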
In order for the waves to interfere, a certain alignment is required. Then, to pull the other along, a measured, timely, and creative (especially against living beings) initiative is needed. You'll be towing the larger one like a cosmic tug, a spacecraft that approaches an asteroid and easily shears it off course with its mere presence and gravitational pull.
Of course, all these descriptions, and in my opinion, the very simulations that serve as proof, are shallow compared to the information formulas of perception — once they are accepted as mathematically correct. Only then will they be a real help in understanding that strategy of resistance.
Criminal » srb
Question: Can it be explained through the prism of "information perception" why the typical criminal organization regularly loses when it collides with a decent state?

Answer: The question is easy if you know something about "information of perception"; precisely because most do not, the answer is instructive and worth the effort.
The essence of crime is exaggeration and manipulation. This is in many ways a better definition than the one we know, which, in the best case, says that organized crime is a complex of highly centralized enterprises established for the business of illegal activities. The intention is not to enter into a discussion about what is (il)legal, because that is irrelevant for formal logic.
Therefore, this kind of "crime," abstracted from legal norms, is in the same category ("manipulators") as politicians. Maniacs and thieves, to whom we can add murderers and terrorists, would be in a second league and could be contended with, with occasional victories for each side. Their supremacy over the perhaps 80 percent of "good guys" in the population is more or less unsuspected, as is their poor performance against the "bad guys," the first-league competitors. The "masters of victory" in that story are the institutions of decent states, which with their silent presence raise the average of the country with respect to politicians and criminals.
It becomes simple when we look at what goes into the value of "perception information" (Win Lose) and what its result represents. The point of top-level play is to be very precise and subtle in guessing the optimum of "good" to "good" responses (generally "+" to "+") and "evil" to "evil" ("-" to "-" values), which is the essence of intelligence, mastery, and vitality. Such a high level of play is not possible without deep knowledge of details, an understanding of measure, timing (the tempo of the game), and an intentional injection of unpredictability.
Community » srb
Question: How to define "community" with "information of perception"?

Answer: Individuals in a community sacrifice some of their freedoms for the sake of security or efficiency. This is because the total "physical information" is constant, while on the other hand, it is the fabric of the physical world.
Physical information and action (ΔE⋅Δt) are equivalent, so during equal time intervals, the periods (Δt = const.) also give us constant amounts of energy (ΔE = const.). Hence the long-known law of conservation of energy: energy can change forms but not the total amount. The definition considers "community" as a physical action and does not break the connection with classical meanings. Viewed a little more broadly, the community would also contain those aspects of this theory of information that are not physical, i.e., not real, according to its "criterion".
A third determinant is added to these two. A participant is the more a subject of its community the more freedom he has given up, i.e., the more security he has gained or the more he has contributed to its efficiency. If, on the other hand, the efficiency of the community is greater, the stratification (Monopoly, 1) is greater: the network of peer links groups into a smaller number of nodes with many links and many nodes with few. A better subject is, therefore, more "naive," less independent, less vital, more susceptible to manipulation, and at the same time more exposed to crime, including that of its leaders.
Note that we treat "community" with formal logic. We develop it as a deductive theory. Let's not get confused by the usual concepts of social, psychological, or biological sciences, despite the fact that we will arrive at similar results. As we are not defining legality, defining the emotional determinants of community participants is not a topic that interests us. With the method, we leave the possibility to understand, for example, the substance of a physical process as a community of elements, as from topology (without metrics), through geometry and metric spaces, we arrive at the definition of "being" or "state." They are vectors, that is, ordered sequences of numbers.
The result of the transfer of freedoms from individuals to the community is stratification in terms of perception information, i.e., of the level of the game (Traits). The percentage of "good guys" (third league) grows, and with it the power of the "manipulators" (second league). The opportunity also opens up for the few "bad guys" (first league), too distant and unsympathetic to the bottom ones, whom they nevertheless rule through the middle ones as a "deep state," corporations, or banks.
Namely, the principle of equality (link) causes the accumulation of power or money in individuals (nodes), while on the other hand it leaves the desirable leaders (manipulators) somewhere between, let us say, the hammer (evil) and the anvil (good) in the hierarchy of the masters of victory. The motto "lie locally, work globally" becomes the guiding principle of politicians.
Thus, with "information of perception," we define "community" so that it becomes clearer what is happening to today's democracies, how such monarchies crystallized in previous times, or why the Roman Republic would turn into the Roman Empire. The cause of these processes is the spontaneous tendency of nature towards less uncertainty, and their consequence is the breakdown of hierarchies. Nature avoids equality.
The exits from the "manipulative" state become the dissolution of the community, but also, what is even more absurd, the strengthening of community institutions. I explained the first with the previous one, and the second follows from the transfer of information from bad elements to structures that resemble "evil." This brings better supervision, control, and state-legal coercion. Precisely this management of the "loose" is happening in the era of "globalism." Domination over the states will melt their institutions (making the work of their manipulators easier), and on the other hand, the good guys, who do not like the players of the first league, will retreat to the nation states.
However, if the rule of law were to become so ideal that it had an optimally measured and timely response to all undesirable behavior in the community, if it acted as a first-league player toward the citizens, tolerating neither political nor other major manipulators, then from our current standpoint of freedom its inhabitants would be excessively unfree (Hyperbole II).
Cosmos II » srb
Question: Are there "cosmos" explanations with "perceptual information"?

Answer: Yes. Much as before, this information theory considers the world a physical reality alongside fiction. I'll start with that.
1. The criterion of reality is the law of conservation. That is the first novelty. It agrees with the view that a physical experiment cannot confirm something that would prove to be incorrect. So, in short, the truth is not afraid of scrutiny, unlike a lie. What is real is what is true. All this has "strange" consequences, which I am still working on.
First, the new "criterion" of reality is highly mathematical. This "information theory" accepts this from the very beginning (Dimensions). I am writing it supposedly because it was originally intended to prove determinism with the postulate of "the objectivity of chance" (in outline) and contradiction. But, little by little, that goal kept slipping away, and this theory was created that fits too well with the proven ones. Challenged by my inability to detect a contradiction, I raised the bar for her more and more. Hence the demand that every mathematical truth be a reality, that it be indestructible and sustainable.
Consistently, the physical laws are themselves a kind of physical reality, which explains their mutual attachment: they are parts of the same whole. Thus the old dilemma of probability theory disappears, the difference between "apparent randomness" (the decimal digits of π = 3.14159..., which tests show to be random numbers, though we know they are predictable) and "absolute randomness," which is really unpredictable (the outcome of a coin toss).
2. We hereby extend the definition of "objective uncertainty" (Lateral) to all that we do not know and can possibly obtain by exchanging information, never by trickery or other means. It defines "information" as a measure of the amount of uncertainty, which the theory agrees with. Additionally, certainties, as theorems, are random events of probability one. That's a big "heresy" for now, one that made me hesitate to attend math presentations.
A group of axioms, and the consistent deductive theory they generate (one that cannot produce falsehoods), are news, and hence an even more absurd conclusion: such a theory cannot be complete (there will be true statements outside of it). Namely, given that information is the fabric of space, time, and matter, with uncertainty as its essence, deductive theories are also news, subjects, and objects. Receivers and senders of information are always incompletely informed. This links the theory to Gödel's incompleteness theorem (Deduction II), with the implicit statement that there is no unobserved object (Interrelations).
3. When we have the conservation law, we have isometries. These are linear mappings that preserve norms (vector intensities), so we also have states as vectors. The numerical values of those states are provided by functionals and perception information, from which follows the uniqueness of each state of the vector space (Riesz's theorem). It is interesting that theorems are unique statements with numerous ways of proof and unlimited applications. Concrete physical things, dually to abstractions, are unique perceptions; they have many occurrences and unlimited connections with the former.
From the law of conservation come the memorization of information (Conservation IV) and the conservation of physical action (ΔE⋅Δt). A concrete physical phenomenon is news that, as soon as it appears, over short periods (Δt = const.) is no longer that news; it is in constant periodic change. This is reflected in energy changes (ΔE = const.) in finite portions, which distinguishes it from the energy changes (ΔE → 0) of the infinitely enduring theorems (Δt → ∞). The consequence is that concrete physical things have a history and an increasingly ancient origin.
However, since physical reality has an increasingly long history based on the same law of conservation, it must have an increasingly "thinner" present. Today's certainty is increasing, events are less frequent, and time units are longer. It looks to us like the expansion of the cosmos—that distant galaxies are moving away from us faster and faster. On the other hand, the physics of the states spontaneously leads to greater certainty. The latter helps to notice the principle of minimalism of information and the force of probability as a consequence of the above "criterion" of truth.
4. Finally, let me explain the starting image, top right. We assume that the cosmos began with the Big Bang (about 13.8 billion years ago), with the point O in the picture. Everything around us (point A) expands so that the starting point surrounds us like a sphere, the limit of the growing cosmos. Over time, it gives the shape of the inner sphere, one circular section of which c can be seen in the picture. The largest such sphere is the limit of the visible universe. Other places, events, or galaxies, like B, also evolve and appear "later" than the time of our evolution because the path OB + BA is greater than the path OA.
On the line ℓ there should be events simultaneous with us, but it is not possible for us to communicate with them (due to the finite speed of light), and they do not exist realistically at that moment (the present). We are always farthest into the future of the universe relative to all the rest of its content, which is again consistent with the objectivity of uncertainty and the ubiquity of information.
Logarithm » srb
Question: When there are dual vectors, is there "dual" information?

Answer: Unfortunately, I do not have a digital version of the book in this picture, "The Nature of Data — Discussion on the Origin of Information," Novi Glas, Banja Luka, 1999. Among other things, I wrote there about the human sensory organs and the observations of the physiologist Weber (1834). He was one of the first to notice and describe in detail the ratio of the difference threshold of a stimulus (ΔW) to the magnitude of the stimulus (W), namely that:
\[ \frac{\Delta W}{W} = \text{const}. \]Some lights are too dim to see, some sounds too soft to hear, and some touches too light to feel. The absolute threshold is the point on the intensity scale of a physical stimulus at which perception begins. The threshold is not a sharp boundary line but a zone in which energies pass gradually from having no effect, through a partial effect, to a full effect. Psychologists agree that the absolute threshold is the point at which the stimulus is perceived in half of the cases. The threshold measured under different conditions will give different results in the same organism. I am still quoting the book.
The physical dimension of the quantity W can be anything, not just energy, because the constant of the above ratio (k) is dimensionless. It holds for each sense separately and for each particular situation. For extremely sensitive senses, the differences ΔW become infinitesimal, so by integration we find:
I(W) = k⋅logb(W/W1)
the information of the stimulus W, where W1 is the unit of measure of that sense, and b > 1 is the base of the logarithm, which is also the unit of measurement of information. This physiologically justifies Hartley's (1928) definition of information, H(n) = logb(n), as the logarithm of the number n ∈ ℕ of equally likely outcomes.
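A small numeric illustration (my addition, with an assumed constant k = 1, unit stimulus W1 = 1, and the natural logarithm): integrating ΔW/W = const gives the logarithmic response, and doubling the stimulus always adds the same amount of information.

```python
import numpy as np

k, W1 = 1.0, 1.0                       # assumed constant and unit stimulus
I = lambda W: k * np.log(W / W1)       # information of a stimulus W (natural log)

for W in (1.0, 2.0, 4.0, 8.0):
    print(f"W = {W:4.1f}  ->  I = {I(W):.4f}")
# Equal ratios give equal increments: I(2W) - I(W) = k ln 2 for every W,
# Weber's law in integrated form; and I(n) matches Hartley's H(n) = log n.
```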
A number is a component of a vector, but a number is also a vector of dimension one. On the one hand, this is a way to use the statistical probability P = M/N, the quotient of the number of favorable outcomes M = 1, 2, ..., N and the number of experiments N = 1, 2, 3, ..., to find the Hartley information H = -log(P), with, say, the natural logarithm (base e ≈ 2.71828). On the other hand, we also have the space of linear operators X* = ℒ(X), dual to the space of vectors X on which such operators act. This is already contained in the "information of perception" itself.
Another way to define "dual information" is through matrices, the representations of linear mappings. We stick to Hartley's definition and the series expansion of the logarithmic function:
ln(x) = (x - 1) - (x - 1)²/2 + (x - 1)³/3 - ...
which converges for arguments x ∈ (0, 2]. We use this expression by substituting the matrix (A) for x, where the unit in the parentheses becomes the identity matrix (ones on the diagonal and zeros elsewhere).
1. Example. Stochastic matrix:
\[ A = \begin{pmatrix} 1 & 0.3 & 0.2 \\ 0 & 0.7 & 0.3 \\ 0 & 0 & 0.5 \end{pmatrix}, \quad I_3 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \]is the upper triangle. It is like that in the standard vector base. The operator matrix is changed by changing the base (coordinate system), but the eigenvalues, which here are the diagonal elements, remain. Its determinant also remains the same, so it doesn't change either:
\[ \det(A - I_3) = \begin{vmatrix} 0 & 0.3 & 0.2 \\ 0 & -0.3 & 0.3 \\ 0 & 0 & -0.5 \end{vmatrix} = 0. \]Therefore, the logarithm of the stochastic matrix, ln(A), is equivalent to a sum of matrices of zero determinant, so in Hartley's sense we can consider the information of the stochastic matrix as equivalent to certainty. The logarithm of a probability of one is zero. □
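This can be checked numerically, both for the matrix above and for random column-stochastic matrices (my sketch):

```python
import numpy as np

A = np.array([[1.0, 0.3, 0.2],
              [0.0, 0.7, 0.3],
              [0.0, 0.0, 0.5]])
print(np.isclose(np.linalg.det(A - np.eye(3)), 0.0))    # det(A - I) = 0

# Random column-stochastic matrices: non-negative entries, columns summing to 1.
rng = np.random.default_rng(1)
for _ in range(5):
    M = rng.random((4, 4))
    M /= M.sum(axis=0)                                   # normalize each column
    print(np.isclose(np.linalg.det(M - np.eye(4)), 0.0)) # always True
```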
From this simple example and the comments, it is clear that any stochastic matrix (whose elements are all non-negative and whose columns sum to one), if square, will have this determinant det(A - I) = 0, and therefore, in this sense, a zero Hartley information. Here is one more easy example, and one analogous example to confirm it.
2. Example. The general stochastic matrix of the second order and the identity are:
\[ B = \begin{pmatrix} a & c \\ b & d \end{pmatrix}, \quad I_2 = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \]where all coefficients are non-negative, but a + b = c + d = 1. Then:
\[ \det(B - I_2) = \begin{vmatrix} a-1 & c \\ b & d-1 \end{vmatrix} = \begin{vmatrix} -b & c \\ b & -c \end{vmatrix} = 0. \]Again, ln(B) = (B - I2) - (B - I2)²/2 + (B - I2)³/3 - ... is a sum all of whose terms have zero determinants. □
3. Example. Now let's look at the analogy with matrices of the third order:
\[ C = \begin{pmatrix} a & u & x \\ b & v & y \\ c & w & z \end{pmatrix}, \]with all coefficients non-negative and unit sums of columns. Then (-b - c = a -1 and so on):
\[ \det(C - I_3) = \begin{vmatrix} -b - c & u & x \\ b & -u - w & y \\ c & w & -x - y \end{vmatrix} = 0, \]because adding the second and third rows to the first one gives us zeros in the first row. □
In other cases, for matrices that are not stochastic, the calculation will give the logarithm other values.
Determinant II » srb
Question: Is the determinant of a sum equal to the sum of the determinants?

Answer: It is not, not even in the case of identity matrices of order n:
det(I) = 1, det(I + I) = 2ⁿ.
The determinant, \(\det: \Phi^{n \times n} \to \Phi\), where the scalars Φ are usually real or complex numbers, is a non-linear functional on the square matrices A of order n.
1. Example. The determinant of a sum of matrices of the second order:
\[ \det(A + B) = \begin{vmatrix} a_{11} + b_{11} & a_{12} + b_{12} \\ a_{21} + b_{21} & a_{22} + b_{22} \end{vmatrix} = \] \[ = \begin{vmatrix} a_{11} & a_{12} \\ a_{21} + b_{21} & a_{22} + b_{22} \end{vmatrix} + \begin{vmatrix} b_{11} & b_{12} \\ a_{21} + b_{21} & a_{22} + b_{22} \end{vmatrix} \] \[ = \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} + \begin{vmatrix} a_{11} & a_{12} \\ b_{21} & b_{22} \end{vmatrix} + \begin{vmatrix} b_{11} & b_{12} \\ a_{21} & a_{22} \end{vmatrix} + \begin{vmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{vmatrix}. \]The determinant of the second order is decomposed into 2² = 4 determinants. Similarly, a third-order determinant would be decomposed into 2³ = 8 determinants of the third order. In general, a determinant of order n would decompose into a sum of 2ⁿ determinants of order n. □
2. Example. The sum of the middle two determinants above gives:
\[ \begin{vmatrix} a_{11} & a_{12} \\ b_{21} & b_{22} \end{vmatrix} + \begin{vmatrix} b_{11} & b_{12} \\ a_{21} & a_{22} \end{vmatrix} = \begin{vmatrix} a_{11} & b_{12} \\ a_{21} & b_{22} \end{vmatrix} + \begin{vmatrix} b_{11} & a_{12} \\ b_{21} & a_{22} \end{vmatrix}, \]so, considering the areas S spanned by the column vectors of the determinants, as in the picture on the left, we can write:
det(A + B) = S[a.1, a.2] + S[a.1, b.2] + S[b.1, a.2] + S[b.1, b.2].
Here are the column vectors:
\[ \textbf{a}_{\cdot 1} = \begin{pmatrix} a_{11} \\ a_{21} \end{pmatrix}, \quad \textbf{a}_{\cdot 2} = \begin{pmatrix} a_{12} \\ a_{22} \end{pmatrix}, \quad \textbf{b}_{\cdot 1} = \begin{pmatrix} b_{11} \\ b_{21} \end{pmatrix}, \quad \textbf{b}_{\cdot 2} = \begin{pmatrix} b_{12} \\ b_{22} \end{pmatrix}. \]The determinant of the sum of two matrices of the second order is the sum of 2² = 4 areas of parallelograms spanned by the column vectors (combinations of pairs) of those matrices. □
We work similarly with determinants of the third order (Sum of determinants): decomposing each of the three rows into sums two by two, we end up with a total of 2³ = 8 determinants of the third order. They can be described as a sum of eight volumes:
det(A + B) = S[a.1, a.2, a.3] + S[a.1, a.2, b.3] + ... + S[b.1, b.2, b.3],
bodies spanned by three specified vectors each. There are 2³ of these volumes because, in each of the three columns, there can be a vector a.k or b.k for k ∈ {1, 2, 3}, which gives a total of eight three-member sequences.
In general, the determinant of the sum of matrices of order n is the sum of 2ⁿ generalized (n-dimensional) volumes S, spanned by n-membered sequences of vectors, each with n elements. In particular, when n = 2, the "volume" is an area. For now, let us just note the strong analogy between this doubling of "volumes," the determinant decomposing into two summands at every step, and binary search, the binary number system, the tossing of a coin or a die, and similar definitions of information.
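The 2ⁿ-term decomposition is easy to verify numerically (my sketch, for random 3×3 matrices): because the determinant is multilinear in its columns, summing over all 2³ column choices reproduces det(A + B).

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(2)
A, B = rng.normal(size=(3, 3)), rng.normal(size=(3, 3))

total = 0.0
for choice in product((0, 1), repeat=3):            # 2^3 ways to pick each column
    cols = [(A if c == 0 else B)[:, k] for k, c in enumerate(choice)]
    total += np.linalg.det(np.column_stack(cols))   # one "volume" S per choice

print(np.isclose(total, np.linalg.det(A + B)))      # True
```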
Trace Matrix » srb
Question: The logarithm of a matrix is not equal to the logarithm of its determinant?

Answer: Of course, it is not. In that first example, the logarithm of the matrix A is a sum of powers of the matrix A - I, whose determinant is zero. However, the determinant of the matrix itself (det A) is the product of its eigenvalues (1⋅0.7⋅0.5 = 0.35), so its logarithm is an ordinary number, and not zero (log 0.35).
For example, we use the term eigenvalue to denote the value of a measurable quantity associated with a wave function. If we measure the energy of a particle, we operate on a wave function with the Hamiltonian operator:
\[ \hat{H} = -\frac{\hbar^2}{2m}\nabla^2 + V(\vec{r}), \]where ℏ = h/2π is the reduced Planck constant, m the mass of the particle at the location \(\vec{r}\), and V the potential at that location. This is a linear operator because the nabla operator ∇ is linear, as is its square ∇², the Laplace operator.
The Hamiltonian operator Ĥ comes from the classical expression for the total energy of a particle, p²/2m + V(x), by replacing the momentum with the momentum operator, p → -iℏ∂x. It is an application of the quantization recipe consistent with the correspondence principle originally proposed by Niels Bohr, which states that the behavior of systems described by quantum theory reproduces classical physics in the limit of large quantum numbers. Consequently, in quantum mechanics the differential equations hold:
\[ \hat{p}\psi = p\psi, \quad \hat{H}\psi = E\psi \]and the like. Here p and E are the eigenvalues (momentum and total energy), and ψ is the eigen(wave) function of the momentum and total energy operators. Linear operators have matrix representations, so we can continue with the posed question.
The eigenequation of a matrix is Ax = ax, where A is a matrix, a is an eigenvalue (one of a1, a2, ..., an) of that matrix, and x is an eigenvector of that eigenvalue. The determinant, det(A) = a1a2...an, is the product of all eigenvalues of the matrix, and the matrix trace, tr(A) = a1 + a2 + ... + an, is the sum of the eigenvalues. In analogy with the Hamiltonian operator and the eigenvalues of the possible energies of the states of a quantum system, which are almost information, the trace of that operator is the "almost" information of the quantum system. The determinant does not have that interpretation.
To avoid confusion, I emphasize that the sum of the diagonal elements of a matrix is its trace (i.e., the sum of its eigenvalues), but the individual diagonal elements themselves need not equal the eigenvalues. The product of the diagonal elements therefore need not equal the product of the eigenvalues, so we must calculate the determinant separately.
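A numeric confirmation (my addition) with the stochastic matrix from the first example, whose diagonal happens to hold the eigenvalues because it is triangular, plus a non-triangular counterexample:

```python
import numpy as np

A = np.array([[1.0, 0.3, 0.2],
              [0.0, 0.7, 0.3],
              [0.0, 0.0, 0.5]])
w = np.linalg.eigvals(A)
print(np.isclose(w.prod(), np.linalg.det(A)))  # product of eigenvalues = det = 0.35
print(np.isclose(w.sum(), np.trace(A)))        # sum of eigenvalues = trace = 2.2

M = np.array([[2.0, 1.0],                      # non-triangular: diagonal entries (2, 2)
              [1.0, 2.0]])                     # differ from the eigenvalues (3, 1)
print(np.linalg.eigvals(M), np.diag(M))
```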
Changes II » srb
Question: So dual information means "changes"?

Answer: It is so. If we look at states as vectors of a space X, its dual is the space X* = ℒ(X) of linear operators. A special part of these mappings are the linear functionals, and many of them can be interpreted as processes.
Operators are also vectors, just as processes are states. However, to perceive the processes, we have to remember and compare them, which makes them pseudo-real (Cosmos II). We took the law of conservation as the criterion of reality, so periodic processes would be "real" if we could directly perceive them as pure truths. But we see them through "reasoning," which, like mathematics, is unattainable without lies (Not really) and is not pure truth, nor is it "pure" reality.
There is nothing special in the conclusion that "realities" from some other times are not like those from our present, except that it is now derived from a new and dubious "criterion" that reality is what is true and for which the law of conservation applies. It is generally understood that proofs in mathematics can also be found through the use of lies. The novelty is that the processes, consistent with the same criteria, are not always exactly repeatable regarding content and quantity of changes.
From the point of view of this theory of information, our processes, as participants in the world around us, must be unique and therefore unrepeatable. Consequently, the repetition of history (of events of living or non-living things) is never exactly circular. Its cycles always change at least a little, both with respect to the interior and to the surroundings. However similar the elliptical orbits of the Earth around the Sun may seem to us, they are never exactly equal. Even if the Earth returned to the same position relative to the Sun after a revolution, the space around them would be different each year.
For example, in an imaginary scenario, a body "time machined" back into the past would never be found in the state it once was. This means, first, that the past state is unsustainable, repeating the conclusion that it is pseudo-real; and then that every unique state is unrepeatable, and that the conservation laws do not suffice for such states. That is why repeated "news" is no longer news. Therefore, everything we perceive is partly pseudo-real, and we know this also because light takes some time to reach the subject from the object.
The present is real, correct, and incomplete (Deduction II). Perceived outside by the subjects who pass it, going towards the future, the present is a series of replicas (Fragments) that are different. More precisely, they are unique and follow the law of conservation. The strange "phantom action at a distance" of quantum entanglement comes from this consistency of the present.
Emptiness » srb
Question: Do you know of a condition that just radiates information?

Answer: Such is the past. It only informs us about itself, with no possibility for us to tell it anything, so we can only let it slowly fade as it is. And when all the information of the past is exhausted, what remains is a "void," which is the limit of infinity.
We consider the law of conservation to be a kind of consequence of finite divisibility (Packages), perceptions, physical actions, and the like. The consequence of the law of conservation is an increasingly long history of the same and, consistently, an increasingly "thinner" filling of the present with the same. Only the principle of the omnipresence of information in space, time, and matter makes this a reason for knowledge about the past.
The uncertainty of the present slowly passes into the past, which grows into a longer and longer history of ongoing events from which it receives less and less information, so that in the end it also wears out. That deposit of uncertainty drags like an ever-longer tail behind the present, draining its options and making the future more and more certain. Being spent (pseudo-realities not conserved), the deposit ultimately remains spent, the "emptiness".
Unlike the information we receive from the past, transmitted by a Markov chain, which with many links (steps) becomes a "black box," the real state at the end of that past is, in the limit, nothingness. So what is the difference, you may ask? In the case of transmission, the "black box" arises by the accumulation of interference, or channel noise. The surpluses, and we are also talking about misinformation, create a channel that gives the same output for any input. Such an output tells us nothing about the input data.
On the contrary, by exhausting the past through constant emission of information without replenishment, an "emptiness" remains that has neither outgoing nor incoming information. However, we know that infinity is characterized by the property that it can be equal to its proper part, so beneath this limit, the supposed void, an infinity can still stand. Not, of course, "reality" as physical substance, but as "truth." The amount of that infinity is, in its own way, constant.
Relationship » srb
Question: What is the "criterion" for reality, besides the "law of conservation"?

Answer: I used "relationship" when writing the book "Information of Perception" or around then (2016). The basis is the position (Interrelations) from Banach or vector spaces that each point will have some linear functional of its norm.
Interpreted, each object A has a chain of mediators B1, B2, ..., Bn, successive links that can perceive one another, such that the last can be perceived by a subject C. It is a connected space, typically a vector space, that represents the reality of the subject C. Such a definition of reality is valid both for the most distant bodies of the universe and for the smallest physical particles, first of all because the universe, from its largest to its smallest physical parts, is in that book a "vector space." Its systems, states of matter, and physical processes are interpretations of vectors.
The old criterion of reality seems perfect, but it is also a bit boring when we step into some unusual new dimension. One such step is Minkowski's space-time (relativity theory). The pseudo-metric:
(ds)² = (dx)² + (dy)² + (dz)² - (cdt)²
there is the "Pythagorean Theorem", with "time" ct the path traveled by light at speed c ≈ 300 000 km/s during t seconds. However, there is no communication (mutual exchange of information) between the past and the present. The same description thus remains vague.
We can take the possibilities of reality to be at least as wide as the range of vector spaces, which accommodate disjoint subspaces (with no common vectors) that could include additional dimensions of time. Pseudo-realities could be real realities for someone, and then the definition of reality by mere "connection" is flawed.
Such a flaw is insignificant compared to our communication with ideas (Pinocchio). The criterion of "connectedness" is rigid, and it would be crude to stay with it, given the breadth offered by the concept of "information of perception"; there was also a need for "real" statements to be "true" as well. Namely, why should we keep as unreal what physics and the exact sciences cannot do without?
However, we accept the statements of mathematics as correct only when they are based on appropriate systems of axioms. Due to the absence of some axioms, some statements will be unprovable, so the criterion needs to be supplemented again. The universal and simple complement of truth is its duration; it clearly distinguishes truth from a lie.
In this way, we raise "information theory" slightly above physics, yet still within the scope of mathematics, so that the provably impossible will not be physically real. The final limitation is that "reality" is associated exclusively with systems for which the "law of conservation" applies. It is reality in a broader sense that includes, say, isometries and permanent truths in general. Realities of everyday life are mostly those we can see, touch, or eat. But if science ended with them, then we would have finished science a long time ago.
We say that something is "real" when it is so true that it could happen, with which we are "connected" and with the logic of things, in addition to the "material," an otherwise highly doubtful concept of classical physics. As you can see, I break down these criteria of "reality," which I apply to redefine everything we are surrounded by, because "reality" rarely turns out to be what it was previously thought to be with the development of science. Even greater advances in science have always changed our perception of the world.
Deposits » srb
Question: Explain the "building up" of energy and the "slowing down" of events?

Answer: The question seems out of context, but with a little attention you will see that it is not.
We divide all the energy of a given closed system into potential and kinetic, so that their sum remains unchanged through changes, and then we take only the kinetic part as "real." In this way, we reduce energy to movements.
For a body to move or change speed, it needs acceleration, which means the action of some forces. I'm trying to reduce the number of formulas due to the requirements of the question, so I only point out that forces accelerate a system to a speed v relative to an observer at rest. The moving system received some energy and accumulated it in the following amount:
\[ E = \frac{E_0}{\sqrt{1 - \frac{v^2}{c^2}}} \approx E_0\left(1 + \frac12\frac{v^2}{c^2}\right) = E_0 + \frac{m_0v^2}{2} = E_0 + E_k \]where E0 would be the energy of that body at rest, we say its proper energy, c ≈ 300,000 km/s is the speed of light in a vacuum, m0 = E0/c² is the mass of the body at rest, and Ek is its kinetic energy. The above approximation is more and more accurate as the velocity quotient v/c is a smaller number.
If constant forces were acting on the body, a motionless observer would notice a decreasing acceleration of the moving body because its mass is increasing (E = mc²). This acceleration decreases so that the speed of the body asymptotically approaches the speed of light (v → c) without ever reaching it. As time slows down, a relative observer would see longer and longer units of time:
\[ \Delta t = \frac{\Delta t_0}{\sqrt{1 - \frac{v^2}{c^2}}}, \]where Δt0 is the inherent (proper) duration and Δt is relative. We calculate it based on the constancy of the speed of light and the relativity of motion, as in the book "Space-Time" (1.4.1 Lorentz transformation). However, it tells us in general about the accumulation of energy and its compression, resulting in a higher density but also a shorter overall life for the moving system.
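For a feel of the numbers (my sketch, with assumed sample speeds), the same Lorentz factor scales both the accumulated energy and the time unit; at v = 0.6c it is 1.25:

```python
import numpy as np

for beta in (0.1, 0.6, 0.9, 0.99):        # beta = v / c
    gamma = 1.0 / np.sqrt(1.0 - beta**2)  # Lorentz factor: E/E0 and dt/dt0
    approx = 1.0 + 0.5 * beta**2          # E ~ E0 (1 + v^2 / 2c^2)
    print(f"v = {beta:4.2f}c   E/E0 = dt/dt0 = {gamma:.4f}   (approx {approx:.4f})")
# The approximation is good for small v/c and fails as v -> c.
```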
The novelty is the remark that denser information leaks into other dimensions of time. It finds those paths because of principled minimalism; the explanation is that the outcomes of more likely random events, the less informative ones, are more frequent. Also, denser uncertainty has more options, and therefore more chances to leave us.
The speed of the system is increased by adding kinetic energy, which is then accumulated over shorter and shorter durations. A body that kept moving would, for a relative observer, be younger, with a shorter cosmic life, as if it existed both more briefly and more densely in the shared universe. This, on the other hand, means more energy charged over the body's history and a better organization of the accumulated information, that is, uncertainty, again because of minimalism.
The mentioned book treats (1.3.3 Red shift) the relativistic Doppler effect. In short, the increase in wavelengths perpendicular to the motion is:
\[ \lambda_{\perp} = \frac{\lambda_0}{\sqrt{1 - \frac{v^2}{c^2}}}, \]where λ0 is the characteristic wavelength of the source at rest, and λ⊥ is the relative wavelength, observed perpendicular to the direction of motion. We also know that the wavelengths of an approaching source are shorter than the proper one, and those of a receding source longer (λ- < λ0 < λ+). The mean of the two is exactly the above transverse wavelength: λ⊥ = (λ- + λ+)/2.
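That the transverse wavelength is exactly the mean of the approaching and receding ones follows from the longitudinal relativistic Doppler formulas; a quick check (my addition, with assumed sample values λ0 and v/c):

```python
import numpy as np

lam0, beta = 500e-9, 0.6                             # assumed rest wavelength, v/c
lam_minus = lam0 * np.sqrt((1 - beta) / (1 + beta))  # approaching source: shorter
lam_plus  = lam0 * np.sqrt((1 + beta) / (1 - beta))  # receding source: longer
lam_perp  = lam0 / np.sqrt(1 - beta**2)              # transverse wavelength

print(np.isclose((lam_minus + lam_plus) / 2, lam_perp))   # True
```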
We understand this lengthening of wavelengths as a "message" to the relative observer that the positional uncertainties of the moving system are greater, that it accumulates more information, and that it is therefore not a desirable place. Metaphorically, we can use it to represent the present, which is pushed towards a more certain future by the greater uncertainty of the past. And the reality of the "observer" lies first in avoiding changes, and only then in rest.
Systems of higher vitality (Curiosity), consistent with this and the law of conservation, will remain at their level until the action of some force, information, or energy changes. The fact that, in changing environments, the attraction of less frequent events, and therefore slower time, regularly prevails, indicates a greater representation of lower forms of vitality and dead nature.
Roughly speaking, systems seem to leave behind more uncertainty in their desire to follow paths of greater certainty, as if uncertainty were a relative phenomenon, like movement. When we speak of the meaning of spontaneity, that is, describe the principle of minimalism, every "real" (kinetic) increase in energy then appears as a relative capture of uncertainty in a shorter duration, for a relative observer.
Null Space » srb
Question: How do you formulate the union of "unreal" and "real"?

Answer: Every day we encounter the coexistence of the "unreal" with the "real" in living beings and in ourselves. The first are fictions that do not remember, are unstable, lie, and are inconsistent; the law of conservation does not apply to them, unlike to the other, physical realities. Their union becomes a different story.
In algebra, the first are, say, null spaces, and the second the other vector spaces.
In the picture on the right, we see the vector space X with the standard basis ex = (1, 0) and ey = (0, 1), and a vector (no need to introduce other labels):
v = 4ex + 3ey = (4, 3).
Its dual would be the space of linear operators X* = ℒ(X). Of these, we are interested, for example, in the following two:
A: (x, y) → (y, 0), B = A⊤: (x, y) → (0, x).
The second is the transpose of the first. Their matrix representations are:
\[ A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \quad B = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}. \]Repeating each of these operators returns zero: A² = 0 and B² = 0, a matrix with all four zeros. It demonstrates the impermanence of the processes interpreted by them. These are alone, but combined will give more stable results:
\[ AB = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \quad BA = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}. \]Thus (AB)(AB) = (AB)² = AB, a process that does not die out, and likewise (BA)² = BA. Moreover, their sum is the unit operator, AB + BA = I, an isometry.
This null-space "revival" occurs when the commutators of the operator are non-zero: [A, B] = AB - BA ≠ 0. They can then represent action, or information, that behaves like "reality" (Giddy). Such are the ladder operators for determining the microlevel of energies. The position and momentum operators of quantum physics have such non-commutativity, from which we derive uncertainty relations.
Further action of the null space on the rest of the vector spaces even more easily results in something that we interpret as real flows or states.