

Saddle

Question: What is a saddle surface?

Saddle

Answer: We get a saddle surface when we distort the flat Euclidean plane along the abscissa so that the points farther from the starting point on both sides are higher, and along the ordinate so that the points farther from the same origin are lower. It can also be the other way around, as seen in the picture on the left.

1. When at the origin (the red dot in the picture) one vertical circle osculates the surface along the abscissa (from above), and another vertical circle, perpendicular to the first, osculates it along the ordinate (from below), with radii in the order rx > 0 and ry < 0, then the number κ = 1/(rxry) is called the Gaussian curvature of the surface. When κ = 0, the surface is of flat geometry; it is Euclidean. If κ < 0, as we see on the saddle in the picture, the surface is of negative curvature; when κ > 0, as in the case of a sphere, the surface is of positive curvature. We know geometries of negative curvature as hyperbolic or Lobachevsky geometries, while surfaces of positive curvature are usually called Riemannian, spherical, or ellipsoidal.

Gauss was the first to use those curvatures κx = 1/rx and κy = 1/ry (Theorema Egregium, 1827) to show that a surface of curvature κ = κxκy ≠ 0 cannot be straightened without deformation. After all, that is why we cannot accurately represent the geography of the globe on flat maps. His student Riemann developed these ideas by extending them into tensor calculus, and Einstein later adopted them in the general theory of relativity.

If we draw arcs of equal length from the starting point (red in the picture) along the saddle surface, their ends define a curvilinear "circle" around it with a radius of the given length. The quotient of the circumference and diameter of such a circle is greater than π, the ratio of the circumference and diameter of a circle in flat geometry. When we draw a curvilinear "triangle" on the saddle surface, connecting three given points with the shortest lines and placing tangents to those lines at the vertices, the sum of the three angles between the tangents will be less than π radians (180 degrees), the sum of the angles of a triangle in flat geometry.

2. For example, in the rectangular Cartesian 3-dim coordinate system Oxyz, when all heights z of the saddle surface project onto the plane Oxy into the points (x, y), we get the equation z = x² - y² of the saddle surface. When x, and in the other case y, is constant, the sections by those vertical planes are parabolas, while the horizontal sections z = const of the saddle surface are hyperbolas.
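As an aside, the sign of this surface's curvature can be checked symbolically. Here is a minimal sketch, assuming the SymPy library, using the standard formula K = (z_xx z_yy - z_xy²)/(1 + z_x² + z_y²)² for the Gaussian curvature of a graph z = f(x, y):

# Gaussian curvature of z = x² - y² from the graph formula
import sympy as sp

x, y = sp.symbols('x y')
z = x**2 - y**2
zx, zy = sp.diff(z, x), sp.diff(z, y)
K = (sp.diff(z, x, 2)*sp.diff(z, y, 2) - sp.diff(z, x, y)**2) / (1 + zx**2 + zy**2)**2
print(sp.simplify(K))   # -4/(4*x**2 + 4*y**2 + 1)**2, negative everywhere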

Saddle, hyperbola

The picture on the right shows one such hyperbola. The horizontal sections of the saddle surface, as seen in the previous image on the left, are hyperbolas, and that is why the geometry of saddle surfaces is also called hyperbolic.

If we observe the vertical sections of the saddle surface in the above image, they are parabolas. They come either with arms facing upwards, i.e., convex, or with arms facing downwards, which is concave.

3. In the more general case, we can write this surface as z = f(x, y). This is how we work with mappings f: X × Y → ℝ where, more generally, X ⊂ ℝm can be an m-dim space and Y ⊂ ℝn an n-dim space. If that surface is compact, or, let's say, complete (contains its boundary points), and:

f(⋅, y) : X → ℝ is concave for every fixed y;
f(x, ⋅) : Y → ℝ is convex for every fixed x;

then:

\[ \max_{x \in X}\min_{y \in Y} f(x,y) = \min_{y\in Y}\max_{x \in X} f(x,y). \]

Formally, this is Von Neumann's minimax theorem, whose consequences and interpretations are very important in game theory (Strategies). They are much deeper and more important than the simplicity of its confirmation on the surface of the upper picture on the left: the lowest point of the convex section and the highest point of the concave one touch at a single, so-called saddle point, shown in red.
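The equality itself is easy to check numerically. Below is a minimal sketch, assuming NumPy, for the illustrative function f(x, y) = y² - x², which is concave in x and convex in y on the compact square [-1, 1] × [-1, 1]:

# Numeric check of max-min = min-max for f(x, y) = y² - x²
import numpy as np

xs = np.linspace(-1, 1, 201)
ys = np.linspace(-1, 1, 201)
X, Y = np.meshgrid(xs, ys, indexing='ij')   # F[i, j] = f(xs[i], ys[j])
F = Y**2 - X**2

maximin = F.min(axis=1).max()   # max over x of (min over y)
minimax = F.max(axis=0).min()   # min over y of (max over x)
print(maximin, minimax)         # both 0.0, the saddle value at (0, 0)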

4. For example, a seller has a certain daily turnover and an expected income of some z monetary units. That income changes from time to time, and a larger deviation from expectations is less and less likely, but it is bigger news: more reason to fear when income decreases, or to rejoice when it increases. The first aspect is probability, the second information. We ignore the discreteness of the values of money.

When we project the probability values and the information onto the Oxz and Oyz planes, after leveling, or aligning the scales of the abscissa and ordinate with the applicate, we get a saddle-shaped income surface. The minimax theorem will then tell the (obvious) truth that the expected value of income is the one of highest probability and lowest information (news).

Senses

Question: How do I use the minimax theorem in information perception?

Senses

Answer: As a general theorem, minimax, with correct initial conditions, predicts accurately. Often, it can give good enough estimates even with approximately correct assumptions, and from chaos theory we know to distinguish those rare situations when approximate initial conditions lead to significantly different final results. Otherwise, this discussion belongs with the previous conversation (Saddle), and only because of its length is it separate.

In the picture on the left we see vectors with n = 1, 2, 3, ... components:

\[ \vec{a} = (\alpha_1, \alpha_2, ..., \alpha_n), \quad \vec{b}_k = (\beta_{1k}, \beta_{2k}, ..., \beta_{nk}) \]

of intensities a and b, each pair of which (when they do not coincide) defines exactly one plane. According to "Information of Perception", a theory in development, these vectors can be some "physical states", say of an object A and a subject B, and their scalar product is their "mutual coupling", their communication, but also the information of perception. Therefore, the components of these vectors tell us about the chances with which the two participants perceive the living or inanimate nature around them.

Even with the number of possibilities n → ∞, according to the Borel–Cantelli lemma, only finitely many of them will be relevant, so let these be the ones. Each subject will perceive the environment in a unique way (Only One). With the same vector \( \vec{a} \), the "same" participant \( \vec{b}_k \), in the picture k ∈ {1, 2, 3}, even of equal intensity, the vector-space norm \( b = \|\vec{b}_k\| \), does not have to span the same plane.

In microphysics, the number of such components is very small; an electron in an atom has only four quantum numbers, so n = 4. In larger bodies, built of many such elements, the number of components (n) is very large, perhaps several millions or billions, but that is irrelevant because these conclusions are also theoretically interesting. We observe the projection of the state B onto A, so the value in the bracket of the scalar product is:

\[ S_k = \vec{a}\cdot\vec{b}_k = a_1b_{1k} + a_2b_{2k} + ... + a_nb_{nk}, \] \[ S_k = \vec{a}\cdot\vec{b}_k = ab\cos\varphi_k = a(b\cos\varphi_k). \]

The smaller the angle φk, the larger the cosine of that angle, cos φk ∈ (-1, 1), and the larger the projection b⋅cos φk of the state B onto the state A, although the capacity, or intensity b, may remain constant. However, even with the unchanged capacity of the subject's senses for perceiving the object, that is, with fixed intensities a and b, a change in the direction of the vector changes the state of the subject B and its relationship to the environment A, as well as its actions. If these two vectors coincided in direction, then there would be some λ such that b = λa for the intensities, as well as for all the coefficients, βik = λαi, in the order of indices i = 1, 2, ..., n.
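A small numeric sketch, assuming NumPy and with illustrative vectors and angles, of how the coupling S = ab cos φ falls as the angle between the states grows:

# Coupling S = a·b = ab·cos φ for a subject at growing angles
import numpy as np

a = np.array([3.0, 0.0])                 # object A along the abscissa
for phi in (0.0, np.pi/3, np.pi/2):      # subject B at angle φ
    b = 2.0 * np.array([np.cos(phi), np.sin(phi)])
    S = a @ b                            # = |a||b|cos φ
    print(round(phi, 2), round(S, 3))    # smaller angle, larger coupling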

The change of these influences follows the idea of reciprocity from game theory, in the manner of this information theory. When the angle φ between the vectors is smaller, the perception of the object by the subject is greater, but it can also be said that the effect of the subject on the object is greater. In the case of competition (valid only for the vital), the mutual information of perception grows with this stronger coupling, and with it the level of the game. Vitality is hard work, and not everyone is up to the same amount of it.

The greatest masters of the game manage to stick their vector B almost onto the opponent's vector. Then the one who manages to maintain the higher level of the game wins, and the other becomes prey (Sneaking) in a game that he easily loses. The top league's level is described as "sticking." It always defeats the second level, i.e., "walking," and the "dodging" that yields to the principle of least action of still life; both higher levels defeat that lowest level of play, which is close to inanimate physical matter. In old conversations and some texts, I called the players of these leagues, in order, "evil," the middle "manipulators," and the bottom "good" (Traits).

Once we have cleared this up, it is easier to define the prerequisites of the minimax theorem. Various applications are possible, and here is an easy example. The larger the deviation of the angle φ from its optimal, here zero, value, the smaller the information of perception. However, as the uncertainty grows, so does the ignorance of B about the object A, and we can apply the minimax theorem: there is a single "saddle point" that joins the maximum information (space X) and the minimum uncertainty (space Y).

Successfully tracking this otherwise shifting saddle point is the mastery of the game that leads to the player's victory. Let us note that a good move by the opponent is to escape into uncertainty, unpredictability, or creativity, which is otherwise known from complex conflicts. It agrees with the "probability force" interpretation, which I consider my discovery, as is this theory. However, in the case of a less complex game, when there are not too many moves, or when the player has enough time, this example can also be reduced to the classic games of Von Neumann: a move that takes away the opponent's best move leads to not losing.

Aposematism

Question: The ease with which you apply the minimax theorem is interesting. Where does it come from?

Aposematism

Answer: As in the case of solving problems in geometry, this "ease" came with mastery, through many hours of studying the subject. It is nothing special, and you can see it in various skilled craftsmen, believe me. It is equally tempting for the ignorant to think they could imitate it, which in this case, through frequent misstatements of theorems, reinforces doubts about mathematics.

In this, I see part of the difficulty of applying abstract methods in the research of history, psychology, and biology. Such sciences, I hope you can see, are present in my discussions, but in a completely unconventional way. On the other hand, the tendency of many to underestimate these methods makes it easier for me to make discoveries with different knowledge.

Then, unexpectedly, the interlocutor suggested that I say something about aposematism. I found that aposematism, or the aposematic, from the Greek apo (away) and sema (sign), is perhaps the most common form of warning signal in some animal species for protection from predators. Contrary to camouflage colors, these animals have developed the ability to discourage potential predators with the bright colors of their bodies, thus letting them know that they are poisonous food or otherwise dangerous. And here is my story.

On the one hand, there is the ability to have more offspring due to brighter skin colors: greater sexual attractiveness of colorful mates, greater reproductive power, and better longevity. That is the space X of the saddle surface (Saddle). On the other hand, predators are less likely to prey on them because of their distastefulness or toxicity. This is the space Y. The third prerequisite of the minimax theorem is "completeness." It is achieved through a long series of generations, when the length of an individual's life becomes negligible, almost infinitesimal, in relation to the duration of the evolution of the species. Those three give the "saddle point" of aposematism.

These two "curvatures" exist and complement each other at a common extreme point, the maximum of the first and the minimum of the second, the greatest reproduction when there is the least danger, both coinciding with the bright colors of the body. Thus, the given species becomes an "isolated point." In mathematical analysis, we call the point zZ an isolated point when there is an environment around z in which there are no others except z points from Z. In biology, a "point z" would be one such species that has no very close relatives.

Aposematism is more widespread in insects than in vertebrates because, viewed this way, they fit better with the minimax theorem. Most insects live less than a year because they are cold-blooded and do not survive the winter. On the other hand, the fossil record of insects goes back about 400 million years, to the Lower Devonian. Therefore, the ratio of the life span of an individual to that of the species in insects (it tends to zero) is favorable for "completeness." However, further search for such confirmations is naive for mathematics; its positions are neither provable nor disputable empirically.

When the assumed spaces X and Y of the saddle surface, or the completeness condition, are missing, then there is no such focus, no isolated saddle point. The species has a "streaked" series of transitional evolutionary forms over a relatively short time interval, and its developmental forms are more often visible.

Singularity

Question: As you tell me this about aposematism, I wonder if saddle points and isolated points are related terms?

Singularity

Answer: Yes, they are very much related. In mathematics, we distinguish them just enough to allow for different variations of related topics (see the image link on the left). Simply put, wherever we find saddle surfaces we find isolated points, but the converse holds only with small variations.

Namely, if z is a saddle point of the space Z = X × Y (Saddle), then it belongs to X, where there is a certain neighborhood in which it has the highest value. The same point has a neighborhood in Y in which it has the smallest value. Such a z ∈ Z is a saddle point of the space Z, and there are no others like it around it. Otherwise said: no matter how small the interval around the saddle point, it contains points with both smaller and larger values than it. But, conversely, a singular point is a local minimum, maximum, or inflection, and only in the third case is it a saddle point. It is singular but not a saddle point when it carries the smallest value without the largest, or vice versa, the largest without the smallest, like the two in the next picture.

Singularity B

I hope we distinguish "the position of a point" from "the value at a point." For example, at such-and-such a place there is a hill of such-and-such a height. The advantage of this theorizing is, for those familiar with mathematics, the incredible ease of applying its theorems wherever we have the appropriate assumptions. The first assumption is a Banach space. It is a complete vector space provided with a norm; it is thereby also a metric space, but one that, again, contains the limit values of its sequences.

In order to separate ourselves from the trivial view of space as physical or geometric, we interpret space by its metric itself (Distances II). For example, the Hamming metric represents "distance" by the amount of dissimilarity between two given "places." It formally mimics distance like all other recognized measures, but it can also tell us about the distances between, for example, different biological species during evolution. This resolves only one of the three "saddle point" prerequisites: z ∈ Z.
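For illustration, a minimal sketch of the Hamming distance between two equal-length strings; the strings themselves are hypothetical:

# Hamming distance: the number of positions at which two strings differ
def hamming(x, y):
    return sum(a != b for a, b in zip(x, y))

print(hamming("GATTACA", "GACTATA"))   # 2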

These are the prerequisites of the minimax theorem. The remaining two are that the subspace X in some neighborhood of z has the largest value, and the subspace Y in some neighborhood of z the smallest, with Z = X × Y. In the following, we "just" need to find opposite tendencies, and there we have unique, singular phenomena such as saddle points. Here are two (actually several) examples.

1. Example. It is known that today, in order to increase freedoms, liberalism advocates various demands for "equal representation." We see this first in gender equality, but, applied consistently, such a principle should be extended to, for example, seeking in work (life) on land an equal representation of fish and, vice versa, an equal presence of terrestrial animals in the sea. It turns out that the right to choose stifles diversity.

The concept of freedom can be brought to absurdity like this because more will not always mean better, just as in other cases of saddle surfaces. Freedom is like that, but there are also many other phenomena that we find in dualism, subject to the "authority" of two opposing principles: "good" against "evil." These are also situations of the quantization of information (Packages), now proven differently. □

2. Example. To last as a species, living things need offspring and survival. The first lies in the conditions of a local maximum, the space X, while the second lies in the conditions of a local minimum, the space Y. The minimax theorem then says that copies of copies in a long series of generations give individuals their individual characteristics. Similarly, the "character" of a society, which we call culture, is built. Thus arises the problem of building a supranational society, with the relatively small weight of the cultures themselves as opposed to the then increasingly strong general forces of individuality.

Analogous cellular structures, separating singularities, are also found in non-living nature. Molecules, atoms, and even smaller particle-waves satisfy the same prerequisites of the minimax theorem when we set things up consistently as above. They should reproduce (last) and survive (cling to existence), i.e., be "born" as new news from the old, because the old "news" is no longer news, and because of the law of conservation of information. □

A subspace of a saddle surface, of course, does not have to be a saddle surface itself, so by abstraction (separation) we can get from this much more than what is stated here. The job of interpreting the minimax theorem is more extensive than it might seem at first glance.

Standoff

Question: Why do you think that a balanced game of "reciprocity" (first league) between two opponents leads to a draw more often than a balanced game in the lower leagues?

Answer: Let's say because of the simulations I have been trying, which then drag on and on. Living beings (the only ones who play) then get tired and mark out their territory, or cooperate. The side that gives in cannot bear the reached level of vitality (which is certainly exhausting) and becomes the defeated and subordinate one. The ideas for the simulations and for observation in reality, on the other hand, come from theory.

Standoff

First of all, not all games are equally complex. For example, in the tic-tac-toe position in the picture (Tic-Tac-Toe), the player starting with "x" always wins. The opponent "o" cannot block all the lines that are then opened, and loses; the third move, "xxx" in the bottom row, is the winner. There is no draw.

Singularity Chess

A chess game is "open" because the players don't hide their cards, but it seems to be "closed" because the moves have too many possibilities. It is on the border of easy games and the validity of John von Neumann's minimax theorem because we cannot know for sure the best move of the position and, therefore, we cannot make the move that leaves the opponent with no best move, which is valid for the recipe of not losing the game. Although we don't know all possible chess moves, we can still theorize.

The chess game is special in that it is one-dimensional: a move by the first player, then the second, then the first and the second again, and so on, each waiting for the opponent's move. This significantly differs from multidimensional games such as war, politics, or economics, in which multiple moves by both sides take place at the same time, and to which the perception information in the previous analysis refers (Senses).

Let's imagine that, as recommended by John von Neumann, we play chess in such a way, move by move, that we do not leave the opponent his strongest answers. He cannot defeat us. If he also plays his best with the remaining possibilities, he will not lose either. So the game goes to a draw.

In the reality of a chess game, we do not know all the possible moves, but we rely on estimates. The best strategy of "reciprocity," in the case of two truly equal estimators playing with proportional, timely, and equally uncertain responses, will limit the best responses of the opponent, and we will have a drawn game by the previous logic. However, playing as in the second league (manipulators), wandering between proportional resistance and the principle of least action (dead nature), anything can happen, and with equal opponents there will be victories and defeats for both sides equally.

Supreme vitality, let us imagine again, would be knowing all possible moves, or all their chances; by drawing on them in such a way that the opponent is left without his best move, such a player cannot lose the game. If the other player plays the same way, he does not lose either, and the game cannot go beyond a draw. Complexities such as interstate, political, market, and war games are implied here.

More specifically (Senses), the most vital responses are "saddle points." They are isolated and unique (Singularity), therefore "unsullied" decisions, for which "would have but could not" does not hold on the way to victory. In that extreme state of vitality, which is the opposite extreme from dead nature (with its obligatory principle of least action), there is the certainty of knowing everything possible. Of course, we consider this extreme only theoretically, and with it we point to possibly the same conclusion as in the case of John von Neumann's situation.

That is why I think that an even game of "reciprocity," between players of the first league, leads to a draw more often than the competition of even competitors of the lower leagues. Ultimately, this is due to the structure of the form, similar to the beginning and end of the time of the cosmos, or to the smallest and largest physical world, which progresses towards certainty.

Discrete

Question: What do you mean by "finite divisibility"?

Discrete

Answer: The term "finitely divisible" is a description of a discrete set. These are, say, subsets of the natural numbers ℝ = {1, 2, 3,...} which are not bounded from above but are bounded from below. Discrete integers ℤ = {0, ±1, ±2,...}. However, they are also rational numbers.

\[ \mathbb{Q} = \{\frac{m}{n}| m \in \mathbb{Z} \land n \in \mathbb{N}\}, \]

because they are countable, i.e., we can arrange them in one (infinite) sequence, although for them this description is no longer a happy one.

More precisely, a set S ⊂ X is discrete when every point x ∈ S has a neighborhood U such that S ∩ U = {x}. Then the points of the set S are isolated, and we have the singularity of every point of the given (sub)space. In short, a set is discrete if it has the discrete topology, i.e., if every subset of it is open.

This specification has interesting consequences for perceptual information, one of which concerns its smallest packages. Namely, according to statement 2.5 on Banach spaces, "when an additive mapping is continuous at one point of the space, it is continuous on the entire space," and an "additive mapping" is a process in which the sum of the originals equals the sum of the copies, which is guaranteed by the conservation law. This further means that the world of information perception is cellular, so to speak a world of singular phenomena.

On the other hand, the discreteness of the coupled information of subject and object does not oblige the individual pieces of information to be discrete. This is confusing at first glance, like the statement that the set of rational numbers is "everywhere dense" (no matter how small a given interval, there are rational numbers in it). Overall, the term "finite divisibility" is a simplified usage intended for the kind of discreteness of a set like the natural numbers.

Summation

Question: What kind of "coupled information" are you talking about?

Summation

Answer: Coupled information is the basic concept of the information of perception, that is, of the information theory I am working on. As with any new phenomenon, the brain first resists it and then perhaps realizes its possible accuracy, simplicity, and beauty. Therefore, I will underline the explanation of information coupling again, with a few banally familiar examples.

1. Feeding evil, we get evil (+ ⋅ - = -); returning evil for good (- ⋅ + = -), we also get evil. Answering goodwill with good (+ ⋅ + = +) returns good to us, and we must return evil with evil (- ⋅ - = +); exactly so, otherwise we lose. This is not a matter of choice but an obligation, on the recommendation of the part of this theory that concerns game theory (Reciprocity).

Namely, the sum of products, which is the form of perception information:

Q = a1b1 + a2b2 + ... + anbn,

where the strings a = (a1, a2,..., an) and b = (b1, b2,..., bn) represent the states of the subjects (or objects) of perception. The higher this number Q, the greater their mutual perception and the stronger the coupling of information. In addition, the corresponding components ak and bk, respectively for k = 1, 2,..., n, are the coupled information of the communication participants (couplers).

2. The picture on the top right, with the attached link, explains the work of a force along a path:

W = F⋅r = |F||r|cos∠(F,r) = Fxrx + Fyry + Fzrz,

where F = (Fx, Fy, Fz) is the force vector and r = (rx, ry, rz) the path along which it acts. The smaller the angle θ = ∠(F,r) between the given vectors, the larger its cosine, -1 ≤ cos θ ≤ 1, and the larger the scalar product W.

When the force and path components are both positive (Fk > 0 and rk > 0), or both negative (Fk < 0 and rk < 0), their product is positive (Fkrk > 0), and such an addend contributes to a larger sum (W). However, when the first factor is positive and the second negative (Fk > 0 and rk < 0), or the first negative while the second is positive (Fk < 0 and rk > 0), their product is negative (Fkrk < 0), and such an addend reduces the total sum (W).

3. Measurement in quantum mechanics is the interaction of a quantum state, a vector written in Dirac brackets |ψ⟩, with a process Â, in order to obtain a quantity A, which can be the position of the particle, its energy, momentum, and the like, in general an observable. The eigen-equation of that process is Â|uk⟩ = λk|uk⟩, where the eigenvalues λk correspond exactly to the eigenstates |uk⟩ of the given state of the measurement system |ψ⟩.

Instead of one outcome, as it would be in classical mechanics, here one of n ∈ ℕ possibilities of the observable occurs, indexed k = 1, 2, 3,..., n. Although we do not know the exact outcome, we do know the probabilities of all possible ones, Pr(λk) = |⟨uk|ψ⟩|². This is the difference between the micro and macro worlds of physics: the quantum world works only with chances. We call these superpositions of states, which in mathematics would be called probability distributions.
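A minimal numeric sketch, assuming NumPy and with an illustrative observable and state, of this eigen-equation and the probabilities Pr(λk) = |⟨uk|ψ⟩|²:

# Measurement probabilities from an eigen-decomposition
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, -1.0]])              # a Hermitian "observable"
lam, U = np.linalg.eigh(A)               # eigenvalues λk, eigenstates |uk⟩ (columns)
psi = np.array([1.0, 0.0])               # the state |ψ⟩
probs = np.abs(U.conj().T @ psi)**2      # Pr(λk) = |⟨uk|ψ⟩|²
print(lam, probs, probs.sum())           # the distribution sums to 1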

4. If we put ak = ⟨uk|ψ⟩, then Pr(λk) = |ak|² is the probability of the k-th observable value in the given state |ψ⟩ and the given measurement, such that:

|ψ⟩ = a1|u1⟩ + a2|u2⟩ + ... + an|un⟩,

where |uk⟩ are the unit vectors of the given measurement setting. In a second setting of the same measurement, we would have a state over the unit vectors |vk⟩, such that:

|φ⟩ = b1|v1⟩ + b2|v2⟩ + ... + bn|vn⟩.

In particular, with two measurement settings, we also find the well-known "spooky action at a distance" (Spooky), from whose later interpretation a new field of quantum mechanics arose: quantum entanglement. The interaction of these measurement device states is the scalar product ⟨φ|ψ⟩, where ⟨φ| denotes the adjoint of the given vector |φ⟩ (conjugate transposed).

5. Only under equal measurement conditions would ⟨φ| = ⟨ψ|, when each of the coefficients bk = ak, so then:

⟨φ|ψ⟩ = ⟨ψ|ψ⟩ = a1*a1 + a2*a2 + ... + an*an = 1,

because this is the sum of the probabilities of the distribution, and ak*ak = |ak|² = Pr(λk). In other cases of measurement settings, we will have "only" the information of perception of a local situation.

However, the product of the different states is the sum of the products:

\[ \langle \varphi | \psi \rangle = \sum_{i,j=1}^n b_i^*a_j\langle v_i|u_j\rangle \le \sqrt{\langle \varphi|\varphi\rangle \langle \psi|\psi\rangle}. \]

The inequality on the right is the well-known Cauchy–Schwarz inequality, which now tells us that the sum of the products is less than or equal to one, meaning that ⟨φ|ψ⟩ can formally be some coupling probability of the two subjects of the given measurement.
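A quick numeric check, assuming NumPy and with illustrative random states, that normalized states satisfy this bound:

# Cauchy–Schwarz: |⟨φ|ψ⟩| ≤ 1 for normalized states
import numpy as np

rng = np.random.default_rng(1)
psi = rng.normal(size=4) + 1j*rng.normal(size=4)
phi = rng.normal(size=4) + 1j*rng.normal(size=4)
psi /= np.linalg.norm(psi)               # ⟨ψ|ψ⟩ = 1
phi /= np.linalg.norm(phi)               # ⟨φ|φ⟩ = 1

inner = np.vdot(phi, psi)                # ⟨φ|ψ⟩, conjugates the first factor
print(abs(inner), abs(inner) <= 1.0)     # True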

6. When we have several superpositions at once, the total state can be understood as a combination of many quantum states, just as a macro-body consists of small particles. The sum of vectors is again a vector: a sum of superpositions, or a sum of probability distributions, whose components, in an ever-larger collection, converge to information (Exponential II). A new quality also emerges.

The vectors keep the same form:

\[ \vec{x} = (\xi_1, \xi_2, ..., \xi_N), \quad \vec{y} = (\eta_1, \eta_2, ..., \eta_N) \] \[ S = \vec{x}\cdot\vec{y} = \xi_1\eta_1 + \xi_2\eta_2 + ... + \xi_N\eta_N, \]

where the first vector is not conjugated, because the imaginary part is lost in the macro world; it becomes negligible. However, the interpretation also extends to the coupling of the two participants and their information of perception. The larger this scalar S, the greater the communication between the two states, and the power of mutual perception grows with it; that is the level of the game in the case of vital subjects.

7. By decomposing one of the addends of this sum, we get:

\[ \xi_k\eta_k = (\xi_{k1} + \xi_{k2})(\eta_{k1} + \eta_{k2}) = \xi_{k1}\eta_{k1} + (\xi_{k1}\eta_{k2} + \xi_{k2}\eta_{k1}) + \xi_{k2}\eta_{k2} \]

thus explaining the results of the multiplication of signs in (1). When the factors in a summand are both "good" (+⋅+) or both "evil" (-⋅-), the summand is positive; otherwise, it is negative.

The higher the sum S, the higher the level of the game, due to its greater vitality. This is when we return "good" for "good" or "evil" for "evil." Also, the strength of the game increases by joining with the kindred. On the contrary, different signs (good for evil, or evil for good) lower the level of the game, leading to its loss. Sorting the components in this (matched) way increases the power of the game. In short, joining together makes us stronger, as does separating the "wheat from the chaff" with consistent answers (Win Lose); the rest weakens us.

Determinant

Question: Why don't you give an example with determinants in your answer?

Determinant

Answer: In the previous answer (Summation), many things could be listed, because a (correct) theory, unlike concrete practice, has countless applications. The theory is always "everyone's," while practice is "someone's" (Abstractions). However, I agree that determinants are useful and can be a difficult story, that's for sure. Let's try to understand them.

The commutator is a determinant of order two, an area and an information: [Ã, B̃] = AxBy - BxAy. It is the form of the Heisenberg uncertainty relations of the position and momentum operators [x̃, p̃], or [t̃, Ẽ] of time with energy, because both products are physical actions and are of the order of magnitude of Planck's constant. If the first factor is larger, the second is smaller, and vice versa, so that the commutator always spans the same area. Operators have an intensity, a norm (2.4.IV), and the ones mentioned are also linear.

The determinant of the third order is the volume V[u, v, w] spanned by three 3-dim vectors:

u = (ux, uy, uz),   v = (vx, vy, vz),   w = (wx, wy, wz).

Also, V = ux[v, w]yz + vx[w, u]yz + wx[u, v]yz, because:

\[ V = \begin{vmatrix} u_x & v_x & w_x \\ u_y & v_y & w_y \\ u_z & v_z & w_z \end{vmatrix} = u_x\begin{vmatrix} v_y & w_y \\ v_z & w_z \end{vmatrix} - v_x\begin{vmatrix} u_y & w_y \\ u_z & w_z \end{vmatrix} + w_x\begin{vmatrix} u_y & v_y \\ u_z & v_z \end{vmatrix}, \]

so the volume is a sum of commutators, a sum of actions, or, let us say formally, a sum of products, i.e., the information of perception.

The determinant of the fourth order is the 4-dim volume spanned by the vectors of its four columns, and this is how we interpret general determinants of the n-th order in tensor calculus. In addition to everything previously said, in linear algebra the determinant is specially treated as the product of the eigenvalues of the operator it represents. And then, again, the determinant of the n-th order is:

V = a1b1 + a2b2 + ... + anbn

where for the series (ak) of the components of the subject we can take any row of the given determinant, while the components of the object (bk) are the corresponding cofactors.
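A minimal sketch, assuming NumPy and an illustrative matrix, of the determinant as such a "sum of products" of one row with its cofactors:

# Determinant as a sum of products: row entries times their cofactors
import numpy as np

def cofactor_expansion(A, row=0):
    n = A.shape[0]
    total = 0.0
    for k in range(n):
        minor = np.delete(np.delete(A, row, axis=0), k, axis=1)
        cof = (-1)**(row + k) * np.linalg.det(minor)   # cofactor bk
        total += A[row, k] * cof                       # ak * bk
    return total

A = np.array([[1., 2., 3.], [0., 4., 5.], [1., 0., 6.]])
print(cofactor_expansion(A), np.linalg.det(A))         # both 22.0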

If two different development lines were identical or proportional (ak = λa′k, in the order of indices k = 1, 2,..., n), the determinant would be zero. That famous theorem of algebra now says that where there is information (V ≠ 0), there are no such identical subjects, which can be subsumed under the long-known Riesz theorem (Functional) and, now, the principled uniqueness of information. Otherwise, each property of determinants will give some interpretation of the world of information.

Topology

Question: I'm a little unclear about that abstract concept of "space"; can you clarify it for me?

Topology

Answer: The simplest "space" in mathematics is "topology". It is also the most abstract. Topologically space is without measurement of distance, as well as without direction. Simply put, it is a set to which the unions and intersections of its subsets belong.

For example, a topology T of the set X = {1, 2, 3} is the collection of all its subsets: ∅, {1}, {2}, {3}, {1,2}, {2,3}, {3,1}, {1,2,3}. The empty set is a subset of every set; it is the first of the above, followed by the subsets formed of the numbers 1, 2, and 3.

We can arrange these elements, or subsets, as in the picture on the right. This is not important for the topology itself, but here they are placed on a scaffold organized for easier understanding of a special (sub)type of topological spaces, which we call "metric spaces." Given a discrete topology, its subtype is the discrete metric space. These are, nevertheless, topological spaces equipped with a "metric," i.e., with an expression for the distance between elements, then called points, which simulates measurement in physical or geometric space.

In the figure, the distance d(x, y) between the points x and y is simply the number of elements by which those subsets differ. For example, d(∅, {2}) = 1, d({1}, {1,3}) = 1, but d({2}, {1,3}) = 3. So, we have: d(x, x) = 0; if x ≠ y, then d(x, y) > 0; also d(x, y) = d(y, x) and d(x, y) ≤ d(x, z) + d(z, y). Thus, for the example in the picture, the axioms of metric spaces hold. It is a metric space: a set of points with a given distance, subsumed under the given topological space.

A step further in this detailing is the norm of a point, ∥x∥ = d(∅, x), which makes x a vector. In this example, say, ∥∅∥ = 0, ∥{3}∥ = 1, ∥{1,2}∥ = 2, and ∥{1,2,3}∥ = 3; in general, the norm of a set is the number of its elements. Analogously, we define the discrete vector space of this example. As we can see, the more detailed we are, the farther we are from the starting topology, which is the roof over everything we derive.
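A minimal sketch of this metric: the distance between two subsets is the number of elements by which they differ, i.e., the size of their symmetric difference, and the norm is the distance from the empty set:

# Distance between subsets = size of their symmetric difference
def d(x, y):
    return len(set(x) ^ set(y))

print(d(set(), {2}), d({1}, {1, 3}), d({2}, {1, 3}))   # 1 1 3
norm = lambda x: d(set(), x)     # ∥x∥ = number of elements
print(norm({1, 2}))              # 2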

We can associate probabilities with the points of these spaces at each stage of this derivation. Here is an example of such an assignment:

Pr(∅) = 0,   p1 = Pr{1} = 1/2,   p2 = Pr{2} = 1/4,   p3 = Pr{3} = 1/15,
Pr{1,2} = 1/8,   Pr{1,3} = 1/30,   Pr{2,3} = 1/60,   Pr{1,2,3} = 1/120.

The sum of them all is 1, so when we view these points as independent random events, we have here one probability distribution. The starting set X has in this way become a probability space. The mean (Shannon) information of the entire set, the distribution X, is:

\[ S = -\frac12\ln\frac12 - \frac14\ln\frac14 - \ldots - \frac{1}{120}\ln\frac{1}{120} \approx 1.355. \]
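A quick check of this value; Pr(∅) = 0 contributes nothing, since p ln p → 0 as p → 0:

# Shannon information (natural logarithm) of the distribution above
import math

p = [1/2, 1/4, 1/15, 1/8, 1/30, 1/60, 1/120]
S = -sum(q * math.log(q) for q in p)
print(round(S, 3))   # 1.355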

This is the natural logarithm (base e = 2.71828...), and the formula:

logcb ⋅ logba = logca,

serves to convert logarithms from one base to another, where logbb = 1.

Step by step, the initially very broad notion of a topological space takes on more and more concrete forms and becomes less and less a theory and more and more something special. As we reach the end of the detailing, the model becomes so practical and so unique that it reduces to physical reality. It is noticeable that there are many ways of concretizing, of reducing endlessly applicable theories to an ultimate reality. This follows not only from the very logic of space reduction but also from the multiplicity of the ultimate goal.

The multiplicity is in the selection of the initial, say, "numbers" 1, 2,..., n → ∞, which can be any objects or concepts; in the assignment of a "distance," which dictates the definition of a "norm"; and, again, in the freedom to choose the probabilities p1, p2,..., pn of the elements, while their unions do not even have to be distributions. The resulting "space" can be the physical space-time of an event, or a piece of a change, and not just a concrete state in a process.

The distances between such "pieces of change" grow as the jump from one to the other becomes less likely, so this mathematical formalism, developed over centuries, fits quite well into the new (my) theory of information. I mentioned some other properties of topologies in the attachment on metrics (2.11. Statement). The dimension of a point, or of a finite set of points, is zero.

Equilibrium II

Question: What is the "equilibrium point" of the payoff matrix?

Equilibrium II

Answer: A zero-sum matrix game has a saddle point when the same entry results from choosing the smallest values in the rows and the largest in the columns. In the picture on the left, the smallest row values are (down, right): (1,4):3, (2,4):2, (3,4):6, (4,4):8, and (5,1):1, and the highest of them, 8, is in column 4. I will explain later why game theorists are crazy about this and how it is achieved.

1. Here is one of the famous easier examples. In the zero-sum game in the next picture on the right, Rose has to choose one of three moves, row A, B, or C, and Colin one of two, column A or B. Neither knows what the opponent has played until the payoff. For example, if Rose chose row A and Colin column A, Rose gets 2 points and Colin loses them (he "gets" -2). However, if the moves are (A, B), Colin takes 3 points from Rose. Knowing the payoff matrix, Rose will not choose B, because there she gains nothing or loses 2. Colin, on the other hand, can lose 10 points if he chooses B.

Equilibrium II b

Let us note that for the matrix of a zero-sum game it is enough to know only the first payoff numbers, these red ones, Rose's. So the previous one, from the left, is a 5×5 matrix in which the first player always wins, but with different amounts.

By choosing the fourth row, her gain is the maximum (8 points), because a cunning opponent will always choose the fourth column. The opponent looks at the highest value of each column and decides on the column with the smallest of them: (3,1):18, (3,2):24, (5,3):22, (4,4):8, and (5,5):15, to reduce the other's profit. The point (4,4) of the given payoff matrix is therefore an "equilibrium," because the amount 8 is common to both players: the maximum of the row minima and the minimum of the column maxima.

2. Not every matrix has this equilibrium (saddle point). Let us write the rows (8, 1, 9), (7, 2, 6), and (3, 4, 5) one below the other in the form of a matrix of type 3×3; it has no saddle point. The probability that a randomly generated matrix of type m×n has a saddle point is m!n!/(m + n - 1)!. Thus, a randomly formed square matrix of the second order (type 2×2) will have an equilibrium point with probability 2/3, and a square matrix of the third order with probability 0.3.
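This formula is easy to check by simulation. A minimal Monte Carlo sketch, assuming NumPy, filling the matrix with a random permutation so that there are no ties, and using the fact that a saddle point exists exactly when max-min equals min-max:

# Monte Carlo check of the saddle-point probability m!n!/(m+n-1)!
import numpy as np
from math import factorial

def has_saddle(mat):
    return mat.min(axis=1).max() == mat.max(axis=0).min()

m, n, trials = 3, 3, 100_000
rng = np.random.default_rng(0)
hits = sum(has_saddle(rng.permutation(m*n).reshape(m, n))
           for _ in range(trials))
print(hits/trials)                                         # ~ 0.3
print(factorial(m)*factorial(n)/factorial(m + n - 1))      # 0.3 exactly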

3. Via the link of the first image on the left, you will find a Java program for finding the saddle point of a given matrix. The following code works similarly in Python 3:

# Find a saddle point: the minimum of a row that is
# also the maximum of its column
def MinMax(mat):
    m, n = mat.shape
    print('(rows, cols): (%2d,%2d)' % (m, n))
    for i in range(m):
        # the minimum value of row i and its column index
        row = mat[i][0]
        col = 0
        for j in range(1, n):
            if row > mat[i][j]:
                row = mat[i][j]
                col = j
        # is that row minimum also the maximum of its column?
        for k in range(m):
            if row < mat[k][col]:
                break
        else:
            print("Saddle Point: ", row)
            print('Position: (%2d,%2d)' % (i + 1, col + 1))
            return True
    print('There is no saddle point.')
    return False

import numpy as np
mat = np.array([[10, 12, 7, 3, 12], [3, 10, 6, 2, 8],
[18, 24, 17, 6, 10], [15, 21, 10, 8, 12], [1, 18, 22, 4, 15]])
print(mat)
MinMax(mat)

When this program is run, for example, with the above matrix, which has a saddle point, it prints:

[[10 12  7  3 12]
 [ 3 10  6  2  8]
 [18 24 17  6 10]
 [15 21 10  8 12]
 [ 1 18 22  4 15]]
(rows, cols): ( 5, 5)
Saddle Point:  8
Position: ( 4, 4)

To run the same program with another matrix (mat), just replace it with, say, the following statement:

mat = np.array([[8, 1, 9], [7, 2, 6], [3, 4, 5]])

This matrix has no saddle point, and the printout is:

[[8 1 9]
 [7 2 6]
 [3 4 5]]
(rows, cols): ( 3, 3)
There is no saddle point.

See also a slightly more advanced version of this module in my code attachment (MinMax). By the way, this topic is also very theoretical (Strategies).
