July 2023



Question: Can you say something about the need for the collective?


Answer: The information of a system is greater when its options are more dispersed, more numerous, and have more uniform chances of selection. Such information is similar to the potential energy of an object that we lift so that it can perform useful work when lowered. By reducing its information, a system can become organized, efficient, or smart (Letter Frequency).

The principled parsimony of nature in the emission of information, its minimalism, makes systems spontaneously lose potential information. This also happens through stratification, or dilution, and not only by editing or focusing specific information, i.e. by reducing the number of possibilities. It is equivalent to the principle of least action, which in physics literally means the smallest action, and which we also see in living beings as the need for inaction. When we discuss the "minimalism force", it will be like talking about the "uncertainty force".

In our region there is still the saying "where all the Turks go, there goes little Mujo too", analogous to an anecdote by Mark Twain. Allegedly, two people meet and one asks what the other does, and when he says he is a farmer, the first asks: "There are three cows in the meadow, and two are looking in one direction; where is the third looking?" — "Hm, that's a stupid question, how should I know," he replies. — You are a poor farmer if you don't know that the third is looking in the same direction too, thinks the first.

Toward the end of his life, at a rally held over his disagreement with the vaccination campaign of the then alleged "corona epidemic", Luc Montagnier, in the heat of the fight against corporations, shouted "Anti-vaxxers will save the world!" It is the exclamation of a desperate man who would like freedom to remain "potential information" (my expression, unknown to them) instead of being spent on corporate profits, and yet, as a kind of belonging, it then goes less to the former than to the new latter.

We will remain prisoners of information minimalism whether we know something about this theory or not, just as we remained in gravity even after Newton and Einstein explained it. The drive for collective behavior arises first from evolution tying itself to physical, bodily minimalism, and then, through the accumulating successes of that connection, the conviction remains that emotions are primary. And since nature is not immune to it, neither will we be.


Question: Where do the layering and dilution of information that you mention happen; can you elaborate a little?


Answer: In software engineering today, an increasingly popular layered technology is emerging, as in the stages in the picture on the left. The program moves from one layer, in which it processes one type of task to completion, to the next. The efficiency achieved goes hand in hand with code transparency and easier programming.

Diffusion also exists in the anatomy and physiology of living things, for example in homeostasis (the tendency to maintain favorable internal conditions). We consider it an important way for biological bodies to regulate the concentration of substances, so that ions, molecules and compounds can move more easily and descend down concentration gradients.

Placing the politically convenient in leadership positions that would better suit professionals is a better-known example of stratification than the previous two. The popularly elected person borrows respect among his peers from the power of his position, in order to tell them "See where I am, who I am and what I can do, so you know what to do", sometimes neglecting the very effectiveness of the system he manages. Little by little, his social and professional success stratifies. It is possible to compensate by weighting the system of government, but only to some extent.

So we are not talking only about the expansion of the universe, in particular about space and the increase of its memory capacity due to spontaneous flows toward higher probabilities, i.e. the natural sequence of events from the more uncertain to the more certain, but about a very universal principle. We are talking about generalized inertness, the tendency to do nothing, the death drive; more precisely, we are talking about the minimalism of information. Further description would require formulas (and they are not for this interlocutor).

Rotation II

Question: Information disappears as soon as it is created. What math do you use to solve that?


Answer: Rotation, sinusoid, oscillation. Due to the law of conservation, as soon as information disappears it must also be created, and the quantity maintained over time is transferred. The new appears where it is most likely. So we also need probability theory, but in the end many actions reduce to energies, momenta and forces.

During the time τ an energy E = hf appears, with the frequency of disappearance and emergence f = 1/τ, where h ≈ 6.626 × 10⁻³⁴ m² kg/s is Planck's constant. And every such phenomenon has a momentum p = h/λ smeared over the wavelength λ = cτ, where c ≈ 300,000 km/s is the speed of light in a vacuum. I am talking about photons, but it is the same with more complex particles and more complex oscillations.
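These relations are easy to check in a few lines; the frequency below (roughly that of green light) is an arbitrary example value, not one from the text:

```python
# Energy and momentum of a single photon: E = hf and p = h/λ with λ = cτ = c/f.
h = 6.626e-34        # Planck's constant, m^2 kg/s (J*s)
c = 3.0e8            # speed of light, m/s
f = 5.6e14           # frequency, Hz (assumed example: green light)
tau = 1.0 / f        # period of disappearance and emergence
E = h * f            # photon energy, J
lam = c * tau        # wavelength, m
p = h / lam          # photon momentum, kg*m/s
assert abs(p - E / c) < 1e-40   # for a photon, p = E/c
```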

The change of momentum p over time τ is the classic description of force. However, that force is also the naked force of uncertainty, actually the deeper cause of the known physical forces, revealed through information theory.

Thus, information that disappears also persists. It lasts because, even from our position, its next appearance is more likely at the next moment. The world changes and not all of its states are equally suitable, so late information can be unsuitable, less acceptable. Light therefore appears to us to be in continuous motion, due to the laws of conservation, probability and change.

A blur

Question: Can you explain a little about the "adaptation of information" you were talking about?


Answer: This conversation was conducted while working on the texts of the book "Information of Perception". That information is the scalar product of two sequences of mutual communications, \(\vec{a}\) and \(\vec{b}\), evaluated with corresponding action-reaction intensities \(S = a_1b_1 + ... + a_nb_n\). When the paired factors of the two sequences are both monotonically increasing (or both decreasing), the information \(S\) is maximal; otherwise it is smaller, as one decreases while the other increases.

I punctuated that observation with the conclusion that systems in communication (the factors in \(S = a_1b_1 + ... + a_nb_n\)) will tend not to resist each other, spontaneously "adapting" so that smaller coefficients are paired with larger ones and vice versa. The unnatural process would then be the "defiance" I attributed to "vitality". That book was not supposed to have formulas, so I did not refer to the previously known "ergodic theorem" from my first book, "Mathematical Theory of Information and Communication" (it is not translated and I do not have it digitally).
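The tendency of \(S\) toward extreme values under reordering is the classical rearrangement inequality, easy to test on small sequences (the numbers below are arbitrary illustration values):

```python
from itertools import permutations

# Rearrangement check: S = a1*b1 + ... + an*bn is largest when both
# sequences increase together, smallest when one increases while the
# other decreases.
a = [1, 2, 3]
b = [4, 5, 6]
values = [sum(x * y for x, y in zip(a, perm)) for perm in permutations(b)]
S_max, S_min = max(values), min(values)
assert S_max == 1*4 + 2*5 + 3*6 == 32   # same ordering of factors
assert S_min == 1*6 + 2*5 + 3*4 == 28   # opposite ordering
```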

The ergodic theorem speaks of the same process of adaptation in its own way. When we concatenate and repeat a data transmission channel, forming it as a so-called Markov chain, the output message becomes an eigenvector of that channel: in the end, one and the same no matter what was sent at the beginning. It is a story about a "black box", i.e. the suffocation of the initial information by misinformation (channel noise) so that it is impossible to decipher at the output. But, let us note, it also speaks about adaptation, which it is perhaps more appropriate to call "blurring".
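The convergence to one and the same output can be sketched with a toy two-state Markov chain (the transition matrix below is an assumed example, not one from the book):

```python
# Ergodic behaviour: whatever distribution enters the channel, repeated
# transmission converges to the same stationary vector, the eigenvector
# of the row-stochastic transition matrix P.
P = [[0.9, 0.1],
     [0.2, 0.8]]

def step(v, P):
    # one pass through the channel: v -> v P
    return [sum(v[i] * P[i][j] for i in range(2)) for j in range(2)]

u, w = [1.0, 0.0], [0.0, 1.0]   # two very different input messages
for _ in range(200):
    u, w = step(u, P), step(w, P)

# both end at the stationary distribution (2/3, 1/3)
assert abs(u[0] - 2/3) < 1e-9 and abs(w[0] - 2/3) < 1e-9
assert abs(u[0] - w[0]) < 1e-9
```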

In that conversation, then, I presented the idea that the "Big Bang", the creation of our universe about 13.8 billion years ago, may not have happened. What if what we see as such is an illusion created for us by the ergodic theorem of the process of the universe's development, the transmission of the early stages through the eons of the process to us today? I did not attach much importance to that doubt, I underestimated its possibilities, to be honest, until recently, when I noticed it was not a dead end.


Question: Where did you notice the opening of the "black box" idea; did you say in the memory of information?


Answer: Yes, and the idea itself is not difficult to understand, although it is very unusual. First try to see the "blurring", the part of the process from just behind the original to the black box, as the remembering of the message.

Let's imagine that the message is like the digital display of the number 8, with seven segments, three of which are horizontal and four vertical. By adding one of the two missing vertical segments of the number 5, it becomes 6 or 9, and by adding both to it we get 8. This is how interference works on an information transmission channel. Noise "corrupts" the message by addition (say the ergodic theorems), not, as is commonly thought, by subtraction.
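A tiny sketch of this "addition" on seven-segment digits; the segment labels are my assumption of the standard naming, and 9 is taken with its bottom segment lit:

```python
# Seven-segment digits as sets of lit segments:
#   a = top, g = middle, d = bottom (horizontal),
#   f, b = upper left/right, e, c = lower left/right (vertical).
seg = {
    5: {'a', 'f', 'g', 'c', 'd'},
    6: {'a', 'f', 'g', 'e', 'c', 'd'},
    8: {'a', 'b', 'c', 'd', 'e', 'f', 'g'},
    9: {'a', 'b', 'c', 'd', 'f', 'g'},
}
# Noise only ADDS segments, never removes them:
assert seg[5] | {'e'} == seg[6]          # 5 + lower-left  -> 6
assert seg[5] | {'b'} == seg[9]          # 5 + upper-right -> 9
assert seg[5] | {'b', 'e'} == seg[8]     # 5 + both        -> 8
```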

It is easy to convince yourself of this when you notice that total noise (the black box) has the maximum information of the given channel, unlike a meaningful message, whose information is always less (Letter Frequency). But that is just the beginning of this story.


Question: Is there a spatio-temporal example of the idea of "forgetting" in the "black box" development?


Answer: Something like analytical, i.e. coordinate, geometry, or geometric probability (e.g. Buffon's needle); so there will soon be a geometry of information.

On the other hand, we already have one, because physical space is a warehouse of memories — at least as far as my theory of information is concerned. Such a depot will teach us how speed, in addition to environment, can matter in memory processes; how much perspective reminds us of forgetting; and how regular the processes of "blurring" themselves can become, so that statistically they can be almost deterministic.

The farther away a rung of the track (a partition of the ladder), the smaller it appears, with the regularities we are talking about, from the simple Thales theorem on similar triangles to the double ratio and the principle of duality (Konusni preseci, 3.4.4 and 3.4.5). And the places at the infinity of parallel lines, which projective geometry calls points at infinity or ideal points, are analogies of complete oblivion, or of black boxes.

However, spatial perspectives are only one form of distance from the present. For now, these are untold stories, and of such a large volume that we should not underestimate them. Besides, this question is quite unusual, new terrain, so I have to divide the introduction to the answer into many parts to make the whole more palatable. Well, let us look further.


Question: What is important for the extent of memorization?


Answer: The size of the system, but also the complexity of the structure, the density of information. We have different ways of defining the "size" of something, but reducing, say, the length of a body in principle reduces both the chances and the need for memory.

Space memory is a kind of upper limit. The totality available to us, 13.8 billion years old, or however much larger it is, will be the size limit and upper measure for everything else. The bottom of the micro-world of quantum mechanics is the other extreme. That thin and accelerated world has little or no memory. In the space between are, for example, microchips with surprisingly large capacities for processing or data storage, as well as perhaps some phenomena of a Multiplicities nature that science has yet to discover, or could reveal.

At the end without the possibility of memory there is the exponential distribution, in the case of a given mean of random variables (μ), where information should be defined simply as the logarithm (of complex numbers) of those states, with all the consequences. And the consequences known to us are the periodicity of their logarithms, as well as the eigenvalues of the observables of their processes.

The news, as concerns this question, is attention to the speed of events in the micro-world relative to the macro-world, due to the limitation of the speed of light. That small world seems to live in a hurry compared to us big ones. It really happens that way because of the impossibility of memory, and then its unnecessariness.

What the universe stretched out over 13.8 billion years and, due to principled minimalism, is still stretching out, happens among elementary particles in a fraction of a second. They go through all the stages of "blurring" at high speed and collapse from superposition into an outcome — as observables and quantum states, i.e. eigenvalues and eigenvectors.


Question: Are there processes without eigenstates?


Answer: When a quantum state (vector) is in an environment that will not communicate, for example one of a lower level of information, as a vacuum is for a particle-wave, it happens that the state lasts and lasts; we say it oscillates. The information remains, not delivered to the other party, because the environment will not accept it. Then we have the physics of a process without an eigenstate.

Algebra can accompany that "information theory" thesis (mine); here is how. The picture on the left shows the characteristic equation, where A is a linear operator (the process), v an eigenvector (the state), λ an eigenvalue (the observable). Quantum physics requires that λ be a real number, as a measurable physical quantity, while processes and states can have complex components. But a process that must last has no real eigenvalues.

The Cayley-Hamilton theorem deals with solving this equation. We see it in the example of an operator given by a square matrix of the second order:

\[ \begin{pmatrix} a & b \\ c & d \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \lambda \begin{pmatrix} x \\ y \end{pmatrix}, \] \[ \begin{pmatrix} a - \lambda & b \\ c & d - \lambda \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} = \vec{0}, \] \[ \begin{vmatrix} a - \lambda & b \\ c & d - \lambda \end{vmatrix} = 0, \] \[ (a - \lambda)(d - \lambda) - bc = 0, \] \[ \lambda^2 + \alpha_1 \lambda + \alpha_2 = 0. \]

This is the so-called characteristic polynomial of the given matrix, which is of nth degree for a matrix of nth order, and the Cayley-Hamilton theorem states that every square matrix annuls its characteristic polynomial. Therefore, the matrix will have at least one eigenvalue, and then at least one eigenvector, if and only if the characteristic polynomial has at least one root.

The Fundamental Theorem of Algebra adds that every polynomial equation of nth degree with real or complex coefficients has n solutions, i.e. roots, among the complex numbers. There may or may not be real numbers among those solutions, and if there are none, the process has no observables. The beauty of mathematics comes to the fore in the ergodic theorem, to which processes of such duration are exceptions.
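The Cayley-Hamilton statement is easy to verify for a second-order matrix; the entries below are arbitrary illustration values:

```python
# Cayley-Hamilton for a 2x2 matrix: A^2 - (a+d)A + (ad-bc)I = 0.
a, b, c, d = 2, 1, 1, 3           # an arbitrary example matrix
A = [[a, b], [c, d]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

tr, det = a + d, a * d - b * c    # trace and determinant
A2 = matmul(A, A)
# A^2 - tr(A) A + det(A) I, entry by entry
Z = [[A2[i][j] - tr * A[i][j] + det * (1 if i == j else 0)
      for j in range(2)] for i in range(2)]
assert Z == [[0, 0], [0, 0]]      # the matrix annuls its polynomial
```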


Question: Can you give an example of a process that lasts without finally delivering its information?


Answer: Just as we use the tables of addition or multiplication almost everywhere, so we have the similarly convenient rotation of coordinates for oscillations, as in the picture on the right.

Then we get the characteristic equation as follows:

\[ (\hat{A} - \lambda \hat{I})\vec{v} = 0, \] \[ \det (\hat{A} - \lambda \hat{I}) = 0, \] \[ \begin{vmatrix} \cos\theta - \lambda & -\sin\theta \\ \sin\theta & \cos\theta - \lambda \end{vmatrix} = 0, \] \[(\cos\theta - \lambda)^2 + \sin^2\theta = 0, \] \[ (\cos\theta - \lambda)^2 = - \sin^2\theta, \] \[ \lambda_{1,2} = \cos\theta \pm i\sin\theta = e^{\pm i\theta}. \]

The eigenvalue is a complex number, except when \( \theta \in \{0, \pm \pi, \pm 2\pi, ...\} \), when \(\lambda = \pm 1\) and the "rotation" becomes the identity or a central symmetry.

We substitute the eigenvalue into the initial equation and calculate the eigenvector:

\[ \begin{pmatrix} \cos\theta - e^{i\theta} & -\sin\theta \\ \sin\theta & \cos\theta - e^{i\theta} \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = 0, \] \[ \begin{pmatrix} -i\sin\theta & -\sin\theta \\ \sin\theta & -i\sin\theta \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = 0, \] \[ (x \pm iy)\sin\theta = 0. \]

Therefore, the only real eigenvector is the zero vector (\(x = y = 0\)) whenever the sine is not zero, i.e. when \(\theta \notin \{0, \pm \pi, \pm 2\pi, ...\} \). We get the same for the angle \(-\theta\).
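A quick numerical check of these eigenvalues, with the arbitrary angle θ = π/3 assumed for the example:

```python
import cmath, math

# Eigenvalues of the rotation matrix ((cos t, -sin t), (sin t, cos t)):
# λ = e^(±it), genuinely complex unless t is a multiple of π.
t = math.pi / 3
c, s = math.cos(t), math.sin(t)
lam = complex(c, s)                        # λ = cos t + i sin t
assert abs(lam - cmath.exp(1j * t)) < 1e-12
assert abs((c - lam)**2 + s**2) < 1e-12    # satisfies (cos t − λ)² + sin²t = 0
assert abs(lam.imag) > 0.5                 # complex, so no real observable
```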

It is the typical computational part of such tasks. What is new here is the informatic interpretation: the particle-wave of quantum physics, which could be simulated by such rotations, cannot transmit information (interact), because the environment does not want it; it ignores it.

When the interaction (acceptance) happens, I emphasize, then instead of the rotation process we have another — with real eigenvalues.


Question: Can you briefly explain the particle-wave momentum from the point of view of information theory, that is, of these eigenvalues?


Answer: We often do not distinguish here between the terms "momentum" and "impulse" of a force, although there is a difference between the two. So first watch the simple video Physics.

We transfer the classical definitions of physical quantities to quantum, linear operators. The position x on the axis thus becomes the position operator \(\hat{x}\), simple multiplication by x, and the linear momentum from the image on the left becomes the momentum operator

\[ \hat{p} = -i\hbar\frac{\partial}{\partial x}. \]

Solving the Schrödinger equation, we find the wave function, first for a free particle

\[ \psi(x, t) = e^{\frac{i}{\hbar}(px - Et)} \]

where p is the momentum in the x-direction and E is the energy of the particle. Hence, taking the derivative along the abscissa, which is itself a well-known linear operator, we get

\[ \frac{\partial \psi}{\partial x} = \frac{ip}{\hbar} \psi. \]

We use the fact that this is the characteristic equation of the linear momentum operator and construct it in the above form. An example confirming the correctness of this implementation is the uncertainty relations, explained here on several occasions.
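This eigenvalue relation can be sketched with a finite difference; the units ħ = 1 and the momentum value are assumptions of the example:

```python
import cmath

# The plane wave ψ(x) = exp(i p x) is an eigenfunction of p̂ = -i d/dx
# with eigenvalue p (units ħ = 1).
p = 2.0
psi = lambda x: cmath.exp(1j * p * x)

x, h = 0.4, 1e-5
dpsi = (psi(x + h) - psi(x - h)) / (2 * h)    # central-difference derivative
p_est = (-1j * dpsi / psi(x)).real            # apply p̂, divide by ψ
assert abs(p_est - p) < 1e-8                  # recovers the eigenvalue p
```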

However, the operator construction procedure neglects the new aspect of this physical quantity in the micro-world, revealed by the immediately preceding answers. There is no exact particle-wave momentum at that level of magnitude until the first declaration, observation, or measurement. The information that this vagueness conveys is what defines it. What remains of the amount of uncertainty, if anything remains, will be more certain. Simple as that.

Before the measurement there is no exact path of the particle, Heisenberg (1927) noticed at the time, and Schrödinger popularly recounted it with his example of the cat in the box. Information theory will give such "mystical" phenomena (when it is accepted) a simple rational explanation: the processes representing waves-particles are not the same before and during declaration. The former have no real eigenvalues, the latter do.

First of all, this means that in the process of measurement we must arrange the apparatus in accordance with principled minimalism, so that there is communication between the measurer and the measured; that the device spontaneously accepts the information of the particle-wave, or the quantum state in general, consistent with the aforementioned (Delivery).


Question: The world boils down to information's attempts to remain silent and its failures to do so? Can you also tell me about the transition from classical to quantum harmonic oscillator?


Answer: Witty, but quite accurately said. I agree, because information is what you have when you don't have it, and you don't have it when you have it. One such example is my recent explanation of the constraint of information between its short breath and the law of conservation. But alas, all nature is woven from such things.

When we consider these things a little more seriously, from an engineering point of view, we mostly see undulations, harmonic oscillations. In a photon, the electric field induces the magnetic, then the magnetic induces the electric, then the magnetic again, and the process continues (Lasting) until a possible delivery. And similarly for the waves of other matter, as for space and time too. Such is the nature of the world of information.

Similar to the exchange of potential and kinetic energy in a harmonic oscillator, we speak of potential and stated information. Similarly we speak of diffraction, interference, or the timing of information delivery, because it is the same story, so to speak.

According to Hooke's law, the spring force is proportional to the stretch; more precisely, \( m\ddot{x} + kx = 0 \) along the x-axis. For the force constant \(k > 0\), the solution of this differential equation is a sine function. In classical physics it is known that \( k = m\omega^2\), where \(m\) is the mass of the oscillating particle and \(\omega\) the so-called angular frequency of the oscillations. This is how we arrive at the harmonic oscillator energy:

\[ E = \frac12 mv^2 + \frac12 kx^2 = \frac{p^2}{2m} + \frac12 m\omega^2 x^2, \]

which in classical physics can have any value.
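The constancy of this energy, and its continuous dependence on the freely chosen amplitude, can be illustrated with the exact sinusoidal solution (the values of m, ω and A are arbitrary assumptions of the example):

```python
import math

# Classical harmonic oscillator x(t) = A cos(ωt):
# E = p²/2m + ½ m ω² x² is constant, equal to ½ m ω² A².
m, omega, A = 1.5, 2.0, 0.7

def energy(t):
    x = A * math.cos(omega * t)
    p = -m * A * omega * math.sin(omega * t)   # p = m dx/dt
    return p**2 / (2 * m) + 0.5 * m * omega**2 * x**2

E0 = 0.5 * m * omega**2 * A**2                 # the conserved value
for t in (0.0, 0.3, 1.1, 2.7):
    assert abs(energy(t) - E0) < 1e-12
```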

In quantum mechanics, this energy is represented by the Hamiltonian, the operator:

\[ \hat{H} = \frac{\hat{p}^2}{2m} + \frac12 m\omega^2\hat{x}^2, \]

where the momentum operator \(\hat{p}\) and the position operator \(\hat{x}\), both in the coordinate representation (Quantum Mechanics, 1.4.8 Harmonic Oscillator), are defined by their action on the wave function \(\psi = \psi(x)\):

\[ \hat{p}\psi = -i\hbar\frac{d}{dx}\psi, \quad \hat{x}\psi = x\psi. \]

The first summand of the Hamiltonian is the kinetic energy of the particle-wave, and the second is its potential energy. The meaning of the Hamiltonian operator is given by its application, again to the wave function

\[ \hat{H}\psi = E\psi, \]

where \(E\) is the particle-wave energy. The solutions of this second-order differential equation are:

\[ \psi_n(x) = \frac{1}{\sqrt{2^n n!}}\cdot\left(\frac{m\omega}{\hbar \pi}\right)^{1/4}\cdot e^{-m\omega x^2/2\hbar}\cdot H_n\left(\sqrt{\frac{m\omega}{\hbar}}\cdot x\right), \]

where n = 0, 1, 2, ..., and \(H_n\) are Hermite polynomials. Discrete eigenvalues

\[ E_n = \hbar \omega \left(n + \frac12\right) = (2n + 1)\frac{\hbar}{2}\omega \]

are the corresponding energy levels.
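Assuming units m = ω = ħ = 1 for convenience, the ground state n = 0 of these formulas can be checked numerically against \(\hat{H}\psi = E\psi\):

```python
import math

# Ground state ψ0(x) = π^(−1/4) e^(−x²/2) (units m = ω = ħ = 1), checked
# against Ĥψ = E0 ψ with E0 = ħω/2 = 1/2, via a central second difference.
def psi0(x):
    return math.pi**-0.25 * math.exp(-x * x / 2)

x, h = 0.7, 1e-3
d2 = (psi0(x + h) - 2 * psi0(x) + psi0(x - h)) / h**2   # ψ0''(x), approx.
H_psi = -0.5 * d2 + 0.5 * x * x * psi0(x)               # kinetic + potential
E0 = H_psi / psi0(x)
assert abs(E0 - 0.5) < 1e-5                             # E0 = ħω/2
```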

When you want precision, quantum mechanics becomes very difficult, due to the sheer difficulty of the mathematics and of the way experiments are prepared, but then it also becomes the most accurate thing that physics, and science in general, has.


Question: So that's how we get to "ladder operators"?


Answer: Yes, the harmonic oscillator is one way to define the lowering and raising, or annihilation and creation, operators. The computational problem is very demanding, but perhaps I can explain it without being trite.

We can write the above Hamiltonian

\[ \hat{H} = \frac{\hbar\omega}{2}(\hat{p}^2 + \hat{q}^2) \]

using two one-dimensional operators, momentum and position:

\[ \hat{p} = \frac{\hat{p}_x}{\sqrt{m\omega \hbar}}, \quad \hat{q} = \hat{x}\sqrt{\frac{m\omega}{\hbar}}. \]

Let's also define two non-Hermitian operators:

\[ \hat{a} = \frac{1}{\sqrt{2}}(\hat{q} + i\hat{p}), \quad \hat{a}^{\dagger} = \frac{1}{\sqrt{2}}(\hat{q} - i\hat{p}), \]

which, as will be shown, represent the decreasing and increasing of the index, \( n \to n \mp 1\), i.e. the lowering and raising operators.

First of all, let's note that:

\[ \hat{a}^\dagger \hat{a} = \frac12(\hat{q} - i\hat{p})(\hat{q} + i\hat{p}) = \frac12(\hat{q}^2 + \hat{p}^2) + \frac{i}{2}[\hat{q}, \hat{p}], \]

which due to the commutator \( [\hat{x}, \hat{p}_x] = i\hbar \) and the Hamiltonian gives:

\[ \hat{a}^\dagger \hat{a} = \frac12(\hat{q}^2 + \hat{p}^2) - \frac12, \] \[ \hat{H} = \hbar\omega(\hat{N} + \frac12), \quad \hat{N} = \hat{a}^\dagger \hat{a}, \]

where \(\hat{N}\) is a new operator, obviously Hermitian, which we call the number operator.

Note that the Hamiltonian is linear in the number operator and that they commute. That is why they have a set of common eigenstates, which we denote in Dirac notation \( |n\rangle \) and call energy eigenstates, so we have:

\[ \hat{N}|n\rangle = n|n\rangle, \quad \hat{H}|n\rangle = E_n |n\rangle. \]

It follows from that

\[ E_n = (n + \frac12)\omega\hbar, \]

and these are the harmonic oscillator energy levels listed above. It is routinely checked in exercises that n is a non-negative integer.


Question: I know it's easy, but it doesn't work for me. Can you show me how you work with "ladder operators"?


Answer: Okay, watch the video on the right, and any other like it, then try to go through the exercises in the order below.

1. The commutators of these operators with the Hamiltonian are:

\[ [\hat{a}, \hat{H}] = \hbar\omega\hat{a}, \quad [\hat{a}^\dagger, \hat{H}] = -\hbar\omega\hat{a}^\dagger. \]

From there it comes out:

\[ \begin{cases} \hat{H}(\hat{a}|n\rangle) = (\hat{a}\hat{H} - \hbar\omega\hat{a})|n\rangle = (E_n - \hbar\omega)\hat{a}|n\rangle, \\ \hat{H}(\hat{a}^\dagger|n\rangle) = (\hat{a}^\dagger\hat{H} + \hbar\omega\hat{a}^\dagger)|n\rangle = (E_n + \hbar\omega)\hat{a}^\dagger|n\rangle. \end{cases} \]

Therefore, \(\hat{a}|n\rangle\) and \(\hat{a}^\dagger|n\rangle\) are eigenstates of the operator \(\hat{H}\) with eigenvalues \(E_n - \hbar\omega\) and \(E_n + \hbar\omega\) respectively. This is the point of calling the operators \(\hat{a}\) and \(\hat{a}^\dagger\) respectively the lowering and raising operators, in a word, ladder operators.

2. The commutators of the operators \(\hat{a}\) and \(\hat{a}^\dagger\) with the number operator are:

\[ [\hat{a}, \hat{N}] = \hat{a}, \quad [\hat{a}^\dagger, \hat{N}] = -\hat{a}^\dagger, \]

so \(\hat{N}\hat{a} = \hat{a}(\hat{N} - 1)\) and \(\hat{N}\hat{a}^\dagger = \hat{a}^\dagger(\hat{N} + 1)\). Then we get:

\[ \begin{cases} \hat{N}\hat{a}|n\rangle = \hat{a}(\hat{N} - 1)|n\rangle = (n - 1)\hat{a}|n\rangle, \\ \hat{N}\hat{a}^\dagger|n\rangle = \hat{a}^\dagger(\hat{N} + 1)|n\rangle = (n + 1)\hat{a}^\dagger|n\rangle. \end{cases} \]

These relations show that \(\hat{a}|n\rangle\) and \(\hat{a}^\dagger|n\rangle\) are eigenstates of \(\hat{N}\) with eigenvalues (n − 1) and (n + 1) respectively. When \(\hat{a}\) and \(\hat{a}^\dagger\) act on \(|n\rangle\), they respectively decrease and increase n by unity, so \(\hat{a}|n\rangle = c_n |n - 1\rangle\), where the constants \(c_n\) have yet to be determined.

3. Let us determine the constants \(c_n\) from the requirement that the states \(|n\rangle\) be normalized. In that direction, we note that from the above it follows:

\[ (\langle n|\hat{a}^\dagger)\cdot(\hat{a}|n\rangle) = \langle n|\hat{a}^\dagger\hat{a}|n\rangle = |c_n|^2 \langle n - 1 | n - 1\rangle = |c_n|^2, \] \[ (\langle n|\hat{a}^\dagger)\cdot(\hat{a}|n\rangle) = \langle n|\hat{a}^\dagger \hat{a}|n\rangle = n\langle n|n\rangle = n. \]

Combining, we get \(|c_n|^2 = n\). This means that the number n, which equals the squared norm of \(\hat{a}|n\rangle\), cannot be negative. Substitution gives \(\hat{a}|n\rangle = \sqrt{n}|n - 1\rangle\).

4. In the following, it is easy to show that \( \hat{a}^\dagger |n\rangle = \sqrt{n+1}|n+1\rangle\), and that repeated application of \(\hat{a}^\dagger\) to \(|n\rangle\) generates an infinite sequence of eigenvectors \(|n + 1\rangle\), \(|n+2\rangle\), \(|n+3\rangle\), ..., and since n is a non-negative integer, the energy spectrum of the harmonic oscillator is discrete:

\[ E_n = \left(n + \frac12\right)\hbar\omega, \quad n = 0, 1, 2, ... \]

and that is a confirmation of what was previously found.

5. It's easy to check:

\[ |n\rangle = \frac{1}{\sqrt{n}}\hat{a}^\dagger |n-1\rangle = \frac{1}{\sqrt{n(n-1)}}(\hat{a}^\dagger)^2|n-2\rangle = ... = \frac{1}{\sqrt{n!}}(\hat{a}^\dagger)^n|0\rangle. \]

6. In the basis of the states \(|n\rangle\), the matrices of the ladder operators are:

\[ (\hat{a}) = \begin{pmatrix} 0 & \sqrt{1} & 0 & 0 & ... \\ 0 & 0 & \sqrt{2} & 0 & ... \\ 0 & 0 & 0 & \sqrt{3} & ... \\ ... \end{pmatrix}, \quad (\hat{a}^\dagger) = \begin{pmatrix} 0 & 0 & 0 & ... \\ \sqrt{1} & 0 & 0 & ... \\ 0 & \sqrt{2} & 0 & ... \\ 0 & 0 & \sqrt{3} & ... \\ ... \end{pmatrix}. \]

They do not commute with \(\hat{N}\), so their matrices are not diagonal.

7. The states \(|n\rangle\) are the joint orthonormal eigenstates of the operators \(\hat{N}\) and \(\hat{H}\), and the representations of these operators in that basis are infinite diagonal matrices:

\[ \hat{N} = \begin{pmatrix} 0 & 0 & 0 & ... \\ 0 & 1 & 0 & ... \\ 0 & 0 & 2 & ... \\ ... \end{pmatrix}, \quad \hat{H} = \frac{\hbar\omega}{2} \begin{pmatrix} 1 & 0 & 0 & ... \\ 0 & 3 & 0 & ... \\ 0 & 0 & 5 & ... \\ ... \end{pmatrix}. \]

8. From the above, and based on the definitions:

\[ \hat{x} = \sqrt{\frac{\hbar}{2m\omega}}(\hat{a}^\dagger + \hat{a}), \quad \hat{p}_x = i\sqrt{\frac{m\hbar\omega}{2}}(\hat{a}^\dagger - \hat{a}) \]

compile the position and momentum matrices yourself.
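Exercises 6-8 can be sketched with truncated matrices; the dimension D and the units ħ = m = ω = 1 are assumptions of this illustration, and the last diagonal entry of every commutator is spoiled by the truncation:

```python
import math

# Truncated D-dimensional ladder matrices: a|n> = √n |n−1> gives
# a[m][n] = √n for m = n−1, and a† is the transpose of a.
D = 6
a = [[math.sqrt(n) if m == n - 1 else 0.0 for n in range(D)] for m in range(D)]
adag = [[a[n][m] for n in range(D)] for m in range(D)]     # transpose

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(D)) for j in range(D)]
            for i in range(D)]

def lincomb(s, X, t, Y):
    # s*X + t*Y entry by entry (s, t may be complex)
    return [[s * X[i][j] + t * Y[i][j] for j in range(D)] for i in range(D)]

# N = a† a is diagonal with the number-operator eigenvalues n = 0, 1, 2, ...
N = matmul(adag, a)
for n in range(D):
    assert abs(N[n][n] - n) < 1e-12

# x = √(ħ/2mω)(a† + a), p = i√(mħω/2)(a† − a); then [x, p] = iħ
# on the diagonal, except in the truncated last corner.
r = math.sqrt(0.5)
x = lincomb(r, adag, r, a)
p = lincomb(1j * r, adag, -1j * r, a)
comm = lincomb(1, matmul(x, p), -1, matmul(p, x))
for n in range(D - 1):
    assert abs(comm[n][n] - 1j) < 1e-12
```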


Question: What is the Dirac delta function and how does one arrive at it?


Answer: In theoretical physics, the Dirac delta (δ) distribution, also called the unit impulse, is a generalized function over the real numbers, zero everywhere except at zero, whose integral over the entire real line is one.

The action of the momentum operator \(\hat{p} = -i\hbar\frac{d}{dx}\) on the wave function \( \psi(x) \) is its derivative. Inserting the completeness relation of position states, we write:

\[ \hat{p}\psi(x) = \langle x|\hat{p}|\psi\rangle = \int\langle x|\hat{p}|x'\rangle \langle x'|\psi\rangle \ dx' = \int \langle x|\hat{p}|x'\rangle \psi(x')\ dx', \]

so if we put:

\[ \langle x|\hat{p}|x'\rangle = -i\hbar\frac{\partial}{\partial x}\delta(x - x') = i\hbar \frac{\partial}{\partial x'}\delta(x - x'), \]

then we can use the partial integration:

\[ \hat{p}\psi(x) = i\hbar\int\frac{\partial}{\partial x'}\delta(x-x')\psi(x')\ dx' = -i\hbar\int \delta(x-x') \frac{d\psi(x')}{dx'} \ dx' \]

and correctly obtain

\[ \hat{p}\psi(x) = -i\hbar \frac{d\psi(x)}{dx}.\]

At the links, at least three ways of explaining the Dirac delta function are listed, and one more insight can be added about the need for it in physics, from the point of view of information theory.

In the previous answers we saw matrix interpretations of some discrete physical quantities (position, momentum, energy) in the states of the harmonic oscillator. There are, for example, electrons with energy levels in steps (discrete) across atomic shells, or situations of absorption and emission of photons by electrons, and the like. But free particle-waves will be in a much larger number of states, when we need to move from the addition of series to integration and the Dirac delta function.

For individual particles, the present "blinking" through time goes discretely in observations (Ripples), but from their multitude we pass to the limit of a Cauchy sequence. We work with an at most countable infinity of individual outcomes within a continuum of all possibilities. More precisely, we would say that physics is limited to measurements, and that "real" physics therefore cannot accept phenomena before and between interactions. That is when mathematical physics and the Dirac delta come into play.
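The defining property of the delta, that ∫ δ(x)f(x) dx = f(0), can be illustrated with a narrow normalized Gaussian as a nascent delta; the width, the test function and the integration grid below are arbitrary choices:

```python
import math

# Nascent delta: a narrow Gaussian δ_ε picks out f(0) as ε → 0,
# i.e. ∫ δ_ε(x) f(x) dx ≈ f(0).
eps = 0.01
delta = lambda x: math.exp(-x * x / (2 * eps**2)) / (eps * math.sqrt(2 * math.pi))
f = math.cos                                     # f(0) = 1

h, L = 1e-4, 0.2                                 # grid step, 20σ half-range
xs = [-L + h * k for k in range(int(2 * L / h) + 1)]
integral = sum(delta(x) * f(x) for x in xs) * h  # simple Riemann sum
assert abs(integral - f(0.0)) < 1e-3             # close to f(0) = 1
```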


Question: To your simple descriptions of the otherwise difficult places of quantum physics ... (my colleague praises me), can you add angular momentum?


Answer: I treated the problems of "angular momentum" in the book "Quantum Mechanics" (1.4.10), but unfortunately it is too difficult for a wider audience, extensive and moreover unpolished and unpublished. It is still worth reading, because I cannot go into detail in correspondence (or here).

The focus of this story is, of course, on the informatic and quantum aspects of the problem, but nevertheless take a look at the attachment at the link in the image on the right. Quantum numbers are the second step. Here is how.

The classical, instantaneous angular momentum \(\vec{L} = \vec{r}\times \vec{p}\) becomes operator:

\[ \hat{L} = \hat{r} \times \hat{p} = -i\hbar(\hat{x}, \hat{y}, \hat{z}) \times \left(\frac{\partial}{\partial x}, \frac{\partial}{\partial y}, \frac{\partial}{\partial z}\right), \]

from where we get \( \hat{L}_z = -i\hbar\left(x\frac{\partial}{\partial y} - y\frac{\partial}{\partial x}\right) \) and cyclically for the other two. The magnitude of the angular momentum has a discrete spectrum of eigenvalues \(\hbar\sqrt{j(j+1)}\), where \( j = 0, \frac12, 1, \frac32, 2, \ldots\), and the meaning of this observable is the surface, i.e. information (Commutation), in the plane perpendicular, here, to the z-axis. The projections of the angular momentum onto the given (here z) axis are also quantized: for a given j, they take the values \(m\hbar\), where \(m = -j, -j + 1, \ldots, j\).
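To make the spectrum concrete, here is a small numerical check of mine (illustrative, with hbar kept as a symbol-like variable): the standard \(j = 1\) matrices satisfy \(\hat{L}^2 = j(j+1)\hbar^2\hat{I}\), the projections are \(m\hbar\), and the commutator closes the algebra.

```python
import numpy as np

# Illustrative check (my example): the standard j = 1 angular momentum
# matrices should give L^2 = j(j+1)*hbar^2 * I and projections m*hbar.
hbar = 1.0
s = 1 / np.sqrt(2)
Lx = hbar * np.array([[0, s, 0], [s, 0, s], [0, s, 0]], dtype=complex)
Ly = hbar * np.array([[0, -1j*s, 0], [1j*s, 0, -1j*s], [0, 1j*s, 0]])
Lz = hbar * np.diag([1.0, 0.0, -1.0]).astype(complex)

L2 = Lx @ Lx + Ly @ Ly + Lz @ Lz
assert np.allclose(L2, 1 * (1 + 1) * hbar**2 * np.eye(3))  # j(j+1) = 2

# projections m = -1, 0, 1 (times hbar)
assert np.allclose(np.sort(np.linalg.eigvalsh(Lz)), [-1.0, 0.0, 1.0])

# the commutator [Lx, Ly] = i*hbar*Lz closes the algebra
assert np.allclose(Lx @ Ly - Ly @ Lx, 1j * hbar * Lz)
```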

Like linear momenta, classical angular momenta can take real values without restriction. The difference between the micro and macro world in this matter concerns the law of large numbers of probability theory, and discreteness in the quantum world is a matter of information (Packages). To this is added the limitation of observables to real numbers which, as we have seen, decisively prevents our immediate insight into the continuum around us.

Spin II

Question: Spin is a logical continuation of the exposition about angular momentum?


Answer: Basically, spin is formally an important part of the angular momentum story. In the picture on the left we see a method of measuring the electron spin: pass a beam of electrons through a magnet and record the deflection up or down (± along the z-axis).

The eigenvalues of the square of the magnitude of the spin operator are

\[ S^2 = s(s+1)\hbar^2, \]

where for the electron \(s = \frac12\), so \( S^2 = \frac34\hbar^2\). Elementary particles with \(s = \frac12, \frac32, \ldots\) are called fermions, and elementary particles with \(s = 0, 1, 2, \ldots\) are called bosons. Since the eigenvalues are real, measurement is possible.

Therefore, we define the basic electron spin matrices by the equality \(\hat{S}^2 = \frac34\hbar^2 \hat{\sigma}^2\), with the two basis vectors "up" and "down", \(\binom{1}{0}\) and \(\binom{0}{1}\), and a second-order matrix \(\hat{\sigma}\) whose square is the unit matrix, \(\hat{\sigma}^2 = \hat{I}\). Apart from the unit matrix itself, the three standard solutions of this root equation are:

\[ \hat{\sigma}_x = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad \hat{\sigma}_y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \quad \hat{\sigma}_z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}, \]

which is easy to check by squaring. They are called Pauli matrices.
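The squaring check can be done numerically as well; this short sketch of mine confirms that each Pauli matrix is an involution with eigenvalues ±1.

```python
import numpy as np

# Quick check (illustrative): each Pauli matrix squared is the identity,
# and each has exactly the two eigenvalues +1 and -1.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I = np.eye(2)

for s in (sx, sy, sz):
    assert np.allclose(s @ s, I)
    # a traceless involution has eigenvalues +1 and -1
    assert np.allclose(np.sort(np.linalg.eigvalsh(s)), [-1.0, 1.0])
```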

Since the square of each of these is the unit matrix, each has only the two eigenvalues ±1, and we find their eigenvectors:

\[ \vec{x}_\pm = \frac{1}{\sqrt{2}}\binom{1}{\pm 1}, \quad \vec{y}_\pm = \frac{1}{\sqrt{2}}\binom{1}{\pm i}, \quad \vec{z}_\pm = \binom{1}{0}, \binom{0}{1}, \]

which is also easy to check by multiplication. Spin is a fundamental concept among observables, because a conservation law applies to it and its eigenvalues are not just our fictions. Charge, energy, and mass are also fundamental, and what they have in common is measurability, perception.
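The multiplication check, with the y-eigenvectors taken as \(\frac{1}{\sqrt2}(1, \pm i)\), can be sketched like this (my illustration):

```python
import numpy as np

# Verify the listed spin eigenvectors by direct multiplication:
# M v = lambda v for each Pauli matrix and eigenvalue +1 / -1.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

r = 1 / np.sqrt(2)
pairs = [
    (sx, np.array([r, r]),     +1), (sx, np.array([r, -r]),     -1),
    (sy, np.array([r, 1j*r]),  +1), (sy, np.array([r, -1j*r]),  -1),
    (sz, np.array([1.0, 0.0]), +1), (sz, np.array([0.0, 1.0]),  -1),
]
for M, v, lam in pairs:
    assert np.allclose(M @ v, lam * v)
```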

Consistent with the principled tendency towards less information, and with the deflection of electrons when measuring spin, it is possible to speak of the lack of uncertainty that spin represents in the magnetic field as the cause of such motions. What spin represents is information, whatever else we think of it.


Question: What did you use quaternions for?


Answer: Quaternions were invented by Hamilton (1843) as a method for multiplying, dividing, rotating and stretching vectors. Individually, they are distinct imaginary units with simple matrix representations: the quaternion \(\hat{q} = i\hat{\sigma}\), where \(i^2 = -1\) and \(\hat{\sigma}\) is a Pauli matrix. Thus we have:

\[ \hat{q}_x = \begin{pmatrix} 0 & i \\ i & 0 \end{pmatrix}, \quad \hat{q}_y = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}, \quad \hat{q}_z = \begin{pmatrix} i & 0 \\ 0 & - i \end{pmatrix}. \]

I explained these in more detail in Quantum Mechanics (1.2.4), but quaternions can also be realized by square matrices of higher order. They appear as solutions of the matrix equation \( \hat{q}^2 = -\hat{I}\), dual to the spin matrices, which are solutions of the equation \(\hat{\sigma}^2 = +\hat{I} \).

Because its square is the negative unit matrix, a quaternion has the two eigenvalues \(\pm i\) and, respectively, the eigenvectors:

\[ \vec{x}_\pm = \frac{1}{\sqrt{2}}\binom{1}{\pm 1}, \quad \vec{y}_\pm = \frac{1}{\sqrt{2}}\binom{1}{\pm i}, \quad \vec{z}_\pm = \binom{1}{0}, \binom{0}{1}. \]

These are therefore the same vectors (states) as for spin. However, their imaginary eigenvalues separate them from observable processes. Hence the assignment of time dimensions to quaternions (Space-Time, 1.4.3).

The assumption of the objectivity of chance leads to parallel realities (Dimensions) with as many temporal dimensions as there are spatial ones. Then the idea of the quaternion as a carrier, a representative of time dimensions, appears by itself.


Question: What is the consequence of the common eigenvectors of spin and quaternions?


Answer: The problem with physics, I think, is that it is limited to the results of measurements. This becomes emphatically visible in the informatic insight into reality, where what existed before the measurement becomes dramatically different from what is measured.

In the micro-world of quantum physics we have the collapse of a superposition into an observable, similar to the turning of six possible outcomes before the roll of a die into one outcome after it. A similar example is the transition of a quaternion into a spin process and vice versa.

A set of quantum particles is a "state", and their change is a "process", which is the interpretation of a vector and the operator that maps it. Due to the demonstrated high accuracy in the measurements of such representations and the dual nature of operators and vectors, I consider processes to be states as well. Their peculiarity is their abstractness, but it does not exclude their objectivity, a special kind of reality, as a presupposition of the "real", measurable one.

The basic idea of matrix mechanics lies in the possibility of writing any electron spin state as a linear combination of the two measurable ones, up and down, u and v. The superposition \(\psi = \alpha u + \beta v\) is the undecided state before the measurement, where the numbers in front of the vectors define the probabilities \(|\alpha|^2\) and \(|\beta|^2\) of the observables. They represent the chances of the electron deflecting up or down in a given measurement. The basis vectors u and v are eigenvectors, mostly \(\vec{z}_\pm\), and are common to spin and quaternions.

In other words, we do not formally distinguish the spin from the quaternion representation of the state ψ, but we can measure the former while we cannot measure the latter. Hence my conclusion that the latter represents the same thing, but is in a zone with which we do not communicate, with which our apparatus does not interact, that is, which is in "parallel reality". By following the quaternion process, \(\hat{q}^2 = -\hat{I}\), \(\hat{q}^3 = -\hat{q}\), \(\hat{q}^4 = \hat{I}\), ..., measurable and immeasurable states alternate in a four-stroke cycle (multiply these by the state ψ).
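The four-stroke cycle of quaternion powers can be traced directly (a sketch of mine using \(\hat{q}_z\)):

```python
import numpy as np

# Sketch of the "four-stroke cycle": successive powers of a quaternion
# matrix run through q, -I, -q, I, so q^4 returns to the identity.
qz = np.array([[1j, 0], [0, -1j]])
I = np.eye(2)

q2 = qz @ qz
q3 = q2 @ qz
q4 = q3 @ qz
assert np.allclose(q2, -I)   # q^2 = -I
assert np.allclose(q3, -qz)  # q^3 = -q
assert np.allclose(q4, I)    # q^4 = I
```

Multiplying any state ψ by these powers alternates it between the measurable and immeasurable representations described above.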

On various occasions, we recognize such phenomena as "bypasses" of reality (Bypass), otherwise characteristic of (my) interpretation of information theory.


Question: Different vectors represent different states, I agree, but isn't that too absolute a view in a world of information where perceptions define everything?


Answer: I understood the question, I hope. What bothers you is the uniqueness of the perceptions of each of the perceiving subjects in the case of an arbitrary fixed object, and vice versa. The affirmative answer comes from a well-known theorem of functional analysis (Dual Space).

If \(f : X \to \Phi\) is a linear functional on the n-dimensional vector space \(X\), then there is one and only one vector \(y \in X\) such that \(f(x) = \langle x | y\rangle\) for all \(x \in X\).

The uniqueness of the vector \(y\) is obvious: \(f(x) = \langle x|y\rangle = \langle x|z\rangle\) for all \(x \in X\) implies \(\langle x | y - z\rangle = 0 \) for all \(x \in X\), and then also for \(x = y - z\), which gives \(|y - z| = 0\), i.e. \(y = z\). The proof of the existence of the form \(\langle x|y\rangle\) for the functional \(f(x)\) is a bit longer, but not important for this question.

Therefore, there are no two vectors \( y = (\eta_1, ..., \eta_n)\) and \(z = (\zeta_1, ..., \zeta_n)\), objects or states, that the subject \(x = (\xi_1, ..., \xi_n)\) could define by equal perception information \(\xi_1\eta_1 + ... + \xi_n\eta_n\) and \(\xi_1\zeta_1 + ... + \xi_n\zeta_n\). For the same x but different y and z of one vector space, these are two different functionals \(f_y(x) = \langle x|y\rangle\) and \(f_z(x) = \langle x|z\rangle\), different phenomena. The same holds in reverse, which we obtain by changing the order of multiplication.
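A small numerical illustration of the uniqueness argument (my example, with arbitrary random vectors): if \(y \ne z\), the test vector \(x = y - z\) already distinguishes the functional \(f_y\) from \(f_z\), since their difference equals \(|y - z|^2 > 0\).

```python
import numpy as np

# Illustration of the uniqueness step: for y != z, the vector x = y - z
# gives f_y(x) - f_z(x) = <x | y - z> = |y - z|^2 > 0.
rng = np.random.default_rng(0)
y = rng.normal(size=4)
z = rng.normal(size=4)
x = y - z

f_y = np.dot(x, y)
f_z = np.dot(x, z)
assert np.isclose(f_y - f_z, np.dot(x, x))  # equals |y - z|^2
assert f_y > f_z                            # strictly, since y != z
```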

I took this question out of the context of the "twins" story. The existence of such subjects of a given state space (reality) that would be equally defined by two different objects of perception is impossible. The reverse is also true, that it is not possible to have such an object of the same reality that different subjects would perceive exactly the same.

The second part of the theorem (the unspoken proof) establishes that for each subject there is some object of a given reality that (uniquely) defines it, in the way of information of perception.


Question: What are and are not Hermitian matrices?


Answer: A Hermitian (self-adjoint) matrix is a square matrix of complex entries that is equal to its conjugate transpose. Its element in the r-th row (horizontal) and k-th column is equal to the complex conjugate of the element in the k-th row and r-th column, for all indices r and k. We denote the conjugated and transposed matrix of \(\hat{A}\) by \(\hat{A}^\dagger = (\hat{A}^*)^\tau = (\hat{A}^\tau)^*\), or by \(\hat{A}^H\).

In the following, for simplicity, I write matrices without caps and vectors without arrows, except where it would lead to confusion.

1. If \(A = (\alpha_{rk})\) is a Hermitian matrix, then \(\alpha^*_{rk} = \alpha_{kr}\) for every pair of indices, which for diagonal elements means \( \alpha^*_{kk} = \alpha_{kk}\), therefore, that they are real numbers.

2. A doubly stochastic matrix that is symmetric is Hermitian, unlike an ordinary stochastic matrix. All their coefficients are non-negative numbers; in the doubly stochastic case the sums of each row and of each column equal exactly one, while an ordinary stochastic matrix has unit sums of rows, or of columns, but not both.

3. The elements of a stochastic matrix are conditional probabilities. As a channel, it transmits messages that are probability distributions \( p = (p_1, ..., p_n)\). The message elements, the vector coefficients, are non-negative (real) numbers that sum to one. When \(\alpha_{rk}\) is the probability that the k-th message signal is transmitted as the r-th, the rows will also be some (generally different) distributions. Then \( q = A p \) means the output distribution, written contravariantly.

By transposition, from a row-stochastic matrix we get a column-stochastic one; the column vectors become row vectors, so we write the same equation covariantly, \( q^\tau = p^\tau A^\tau \). When it is not symmetric, a stochastic matrix is not Hermitian.

4. Let \(m_r\) be the smallest and \(M_r\) the largest entry of the r-th row of a row-stochastic matrix; then, due to the inequality:

\[ m_r \le q_r = \sum_{k=1}^n \alpha_{rk}p_k \le M_r \]

the distribution \( q = A p \) will be narrowed whenever \(m_r \ne 0\) or \(M_r \ne 1\). As the message passes through multiple such channels, the leveling increases and the message narrows, provided the matrix has enough non-zero entries. This is the meaning of the ergodic theorem.

Therefore, even when they are not Hermitian, stochastic matrices have at least one real eigenvalue with a real eigenvector: the eigenvalue 1 and, for doubly stochastic matrices, the uniform distribution. With increasing powers \( A^n \) we approach a "black box", a matrix that turns each input into its eigen (characteristic) distribution.
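The "black box" behavior is easy to watch numerically; this sketch of mine (a hypothetical 2-state channel, values chosen for illustration) shows a sharp input flattening toward the stationary distribution under repeated passes.

```python
import numpy as np

# Sketch (my example): repeated passes q = A p through a stochastic
# channel drive any input toward the stationary distribution.
A = np.array([[0.9, 0.2],
              [0.1, 0.8]])          # column-stochastic channel matrix
assert np.allclose(A.sum(axis=0), 1.0)

p = np.array([1.0, 0.0])            # sharp (certain) input distribution
for _ in range(50):
    p = A @ p                       # one pass per step

# the stationary distribution solves A p = p; here it is (2/3, 1/3)
assert np.allclose(p, [2/3, 1/3], atol=1e-6)
```

Each power \(A^n\) narrows the gap between distinct inputs, which is the "black box" of the text: eventually every input yields (nearly) the same characteristic output.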

5. Real matrices, with real coefficients, can have complex eigenvalues and eigenvectors. Such, for example, are rotations. However, a real matrix with a real eigenvalue will have at least one corresponding real eigenvector.

Namely, from \( A x = \lambda x \) follows \( A^* x^* = \lambda^* x^* \), so \( A x^* = \lambda x^* \), since \(A\) and \(\lambda\) are real. Adding the two equations gives \( A(x + x^*) = \lambda (x + x^*) \). The sum of conjugate complex numbers is real, so this is a real eigenvector (when \(x + x^* \ne 0\)).

When a real matrix is of odd order, its characteristic polynomial is of odd degree and will have at least one real root. This means, the matrix will have at least one real eigenvalue and at least one corresponding real eigenvector.

6. A real symmetric matrix is Hermitian. Moving it from one factor of a scalar product to the other leaves the product unchanged. Its eigenvalues are real, and its eigenvectors are orthogonal. Here is the proof.

Namely, \(\langle Ax|y\rangle = \langle x | A^\dagger y\rangle = \langle x|Ay\rangle\), i.e. \((Ax)\cdot y = x\cdot(Ay) \). Further, from \(Ax = \lambda x\) and \( x \ne 0\) it follows that \(Ax^* = \lambda^* x^*\) and \(x^\dagger A = \lambda^* x^\dagger\); multiplying the latter by \(x\), we find \(\lambda^* x^\dagger \cdot x = x^\dagger \cdot Ax = x^\dagger \cdot \lambda x = \lambda x^\dagger \cdot x\), and because \(x^\dagger\cdot x > 0\), it must be \(\lambda^* = \lambda\), which means that \(\lambda\) is a real number.

To prove the orthogonality of the eigenvectors \(x_1, x_2\) with corresponding different eigenvalues \(\lambda_1, \lambda_2\), we calculate:

\[ \lambda_1 x_1^\dagger \cdot x_2 = x_1^\dagger \cdot A^\dagger x_2 = x_1^\dagger \cdot Ax_2 = \lambda_2 x_1^\dagger \cdot x_2, \] \[ (\lambda_1 - \lambda_2)x_1^\dagger\cdot x_2 = 0, \]

and hence \(x_1^\dagger \cdot x_2 = 0\), which means \(x_1 \perp x_2\), the orthogonality of the vectors.
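Point 6 can also be confirmed numerically (my sketch, with a random symmetric matrix as an arbitrary test case):

```python
import numpy as np

# Illustrative check of point 6: a real symmetric matrix has real
# eigenvalues, orthogonal eigenvectors, and <Ax|y> = <x|Ay>.
rng = np.random.default_rng(1)
B = rng.normal(size=(4, 4))
A = B + B.T                          # symmetrize: A = A^T

vals, vecs = np.linalg.eigh(A)       # eigh assumes a symmetric input
assert np.all(np.isreal(vals))       # real spectrum
# the eigenvector matrix is orthogonal: V^T V = I
assert np.allclose(vecs.T @ vecs, np.eye(4))
# and the scalar product is unchanged when A moves between factors
x, y = rng.normal(size=4), rng.normal(size=4)
assert np.isclose(np.dot(A @ x, y), np.dot(x, A @ y))
```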

7. A Hermitian matrix has real eigenvalues and orthogonal eigenvectors. Again we have \(Ax = \lambda x\) and \(x \ne 0\), but the coefficients of the matrix are complex numbers for which \(\alpha^*_{rk} = \alpha_{kr}\) holds. Next is:

\[ x^\dagger\cdot Ax = x^\dagger \cdot \lambda x = \lambda (x^\dagger \cdot x) = \lambda \|x\|^2, \] \[ x^\dagger \cdot Ax = (Ax)^\dagger\cdot x = \lambda^*x^\dagger \cdot x = \lambda^*\|x\|^2, \]

and hence \(\lambda^* = \lambda\), which means that the eigenvalue \(\lambda\) is a real number.

Let \(Ax_k = \lambda_kx_k\) for k = 1, 2, so in the case \(\lambda_1 \ne \lambda_2\) we prove that the eigenvectors \(x_1, x_2\) are orthogonal:

\[ (Ax_k)^\dagger = x_k^\dagger A^\dagger = \lambda_k x_k^\dagger = x_k^\dagger A, \quad k = 1, 2. \]

Since \(x_1^\dagger\cdot Ax_2 = x_1^\dagger \cdot \lambda_2x_2 = \lambda_2(x_1^\dagger\cdot x_2)\), while the previous line gives \(x_1^\dagger A = \lambda_1 x_1^\dagger\), hence also \(x_1^\dagger \cdot Ax_2 = \lambda_1(x_1^\dagger \cdot x_2)\), it follows:

\[ \lambda_1(x_1^\dagger \cdot x_2) = \lambda_2(x_1^\dagger \cdot x_2), \] \[ (\lambda_1 - \lambda_2)\langle x_1 | x_2\rangle = 0, \]

so \(x_1\cdot x_2 = 0\), because \(\lambda_1 - \lambda_2 \ne 0\).
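The same conclusion for complex Hermitian matrices (point 7) can be checked numerically; here is a sketch of mine on a random matrix made Hermitian:

```python
import numpy as np

# Check of point 7: a complex matrix with A^dagger = A has real
# eigenvalues and mutually orthogonal eigenvectors.
rng = np.random.default_rng(2)
B = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
A = B + B.conj().T                   # Hermitize: A^dagger = A

assert np.allclose(A, A.conj().T)
vals, vecs = np.linalg.eigh(A)       # eigh assumes Hermitian input
assert np.all(np.isreal(vals))       # real spectrum
# the columns are orthonormal: V^dagger V = I
assert np.allclose(vecs.conj().T @ vecs, np.eye(3))
```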

