
Eigen..what?

Dom Robinson

Eigenvectors, Eigenvalues and Eigenstates - all new terms to me, and it has been fun finding out about them. Here is some AI-generated musing I have created on the topic using jasper.io as part of another exploration.

In quantum mechanics, eigenvalues and eigenvectors are of great importance. For a given system, each specific value of energy that an electron can have corresponds to an eigenvalue, and each eigenvalue in turn corresponds to a unique eigenvector.
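As a concrete illustration (my own sketch, not part of the original musing), a finite-dimensional quantum system can be modelled as a Hermitian matrix, and NumPy will return its eigenvalues (the possible energies) together with the matching eigenvectors. The 2x2 matrix below is arbitrary and purely for demonstration.

```python
import numpy as np

# A small, arbitrary Hermitian matrix standing in for a quantum operator.
H = np.array([[1.0, 0.5],
              [0.5, 2.0]])

# eigh is for Hermitian/symmetric matrices: real eigenvalues, orthonormal eigenvectors.
energies, states = np.linalg.eigh(H)

print("eigenvalues (possible energies):", energies)
print("eigenvector for the lowest energy:", states[:, 0])
```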


In more advanced topics of quantum mechanics, an operator can have infinitely many eigenvalues. For example, in the hydrogen atom model, one eigenvalue corresponds to the electron sitting at a given radius from the nucleus, while another corresponds to an electron that is accelerating as it revolves around the nucleus. This second energy level has no defined radius - it continues indefinitely as the electron accelerates towards the nucleus and, correspondingly, spirals inwards.




The eigenvalues of a quantum system are important because they represent the energy levels of that system. Each eigenvalue corresponds to an occupation number - essentially, how many particles sit at that energy level of the quantum system. The expected occupation of a level is obtained by projecting the state onto the corresponding eigenvector. If only one particle can occupy each energy level, then multiple particles found at the same eigenvalue must be sharing that level, and they can be treated as one entity with a probability of occupying each eigenvalue.




The occupation numbers represent the number of particles that will be found at each energy level after an experiment is carried out. The probabilities for all possible results must add up to 100% (or 1), just like ordinary fractions of a whole. However, because the particles are being treated as one entity with a distribution of probabilities over the possible eigenvalues, the expected particle counts themselves add up to the total number of particles rather than to 1.
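A minimal sketch of the usual bookkeeping (my own, with an invented state vector): if a state is expanded in an orthonormal eigenbasis, the squared magnitudes of its components are the probabilities of each measurement outcome, and they add up to 1.

```python
import numpy as np

H = np.array([[1.0, 0.5],
              [0.5, 2.0]])
energies, states = np.linalg.eigh(H)

# An arbitrary normalised state, written in the original basis.
psi = np.array([0.6, 0.8])

# Probability of finding each eigenvalue: squared overlap with each eigenvector.
probs = np.abs(states.T @ psi) ** 2
print(probs, probs.sum())   # the probabilities add up to 1.0
```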


For example, consider a system with two identical atoms at fixed positions on a surface. The lowest energy of the system is called the ground state, and this can be thought of as the original state of the system before an experiment is carried out. The eigenvalues and eigenvectors for this ground state are given as:




Eigenvalues: E = (1/2)(x - 2y)




Eigenvectors: v_x = (1, 0) and v_y = (0, 1)


In this case, it can be seen that v_x and v_y are perpendicular to each other. This is because the total energy of the system is a combination of kinetic and potential energy - if either of these were zero then there would not be a way to get from one end of the system to another, and this would break the requirement that a particle can move freely throughout a quantum system.
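A quick numerical check (my own, minimal sketch): the two eigenvectors quoted above are perpendicular because their dot product is zero.

```python
import numpy as np

v_x = np.array([1.0, 0.0])
v_y = np.array([0.0, 1.0])

# A zero dot product means the vectors are orthogonal (perpendicular).
print(np.dot(v_x, v_y))   # 0.0
```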


Eigenvalues are physical quantities - specific numerical values of energy obtained when a particular operator acts on an eigenvector. Eigenvectors represent physical quantities too - they describe the state of the particles, such as whether or not they have been displaced from their starting positions in the system. In general, it is possible to take any quantum-mechanical operator and pose an eigenvalue problem for it; for the finite-dimensional operators considered here the result is a finite set of eigenvalues, although operators with infinitely many eigenvalues do exist, as in the hydrogen atom mentioned above.


In order to actually work out the eigenvectors, however, we must first find the determinant of our operator - in practice, the determinant of H - E*I, whose roots are the eigenvalues. In this case, the Hamiltonian is given as:




H = p^2/2m + V(x,y)
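A Hamiltonian of this form can also be put on a grid and diagonalised numerically. The sketch below is my own aside, not the calculation from the text: it is one-dimensional for brevity, uses an arbitrary harmonic potential, and assumes units where hbar = m = 1.

```python
import numpy as np

# Grid for a 1D toy problem, in units where hbar = m = 1 (assumption for illustration).
n, L = 200, 10.0
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]

# Kinetic term p^2/2m as a finite-difference matrix, plus a harmonic potential V(x).
kinetic = (-np.diag(np.ones(n - 1), -1) + 2 * np.eye(n)
           - np.diag(np.ones(n - 1), 1)) / (2 * dx**2)
potential = np.diag(0.5 * x**2)
H = kinetic + potential

energies, states = np.linalg.eigh(H)
print(energies[:3])   # lowest levels, close to 0.5, 1.5, 2.5 for this potential
```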


In order to find the eigenvalues of H, we first must find the eigenvectors of a similar function - specifically, one that has a set point at x = y = 0. This can be done by multiplying an arbitrary location by itself and then adding a certain value k to it to get zero. The product will have two eigenvalues depending on whether k > 0 or k < 0 (in essence mirroring each other). To make our lives easier, let's take k = 0 for now and consider what happens when it becomes negative later. So we'd like E = (x - 2y) to equal zero when x = y = 0. This gives us:




E = (x - 2y)


Now, we need to find an expression for the eigenvectors of this function, similar to what we did above. We also know that two eigenvalues will be produced by this expression, and that their product must vanish - so the terms containing x and y must have opposite signs. If both terms had positive coefficients, their product could never cancel, so the two terms cannot both have positive coefficients.


While this may be confusing, the following derivations will show that eigenvectors belonging to different eigenvalues are orthogonal (perpendicular) to each other.
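Before the algebra, here is a purely numerical sanity check (my own, not part of the original derivations): for any symmetric (Hermitian) matrix, the eigenvectors returned by a standard solver have zero overlap with one another.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))
H = (A + A.T) / 2            # symmetrise to get a Hermitian operator

_, vecs = np.linalg.eigh(H)

# Off-diagonal overlaps are (numerically) zero: the eigenvectors are orthogonal.
print(np.round(vecs.T @ vecs, 10))   # identity matrix
```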


Derivation 1:


Letting u = x - 2y and dropping extra terms for simplicity, we can write u as:


u = y/2 + t*(x/2)


Now, let's take our basis of two eigenvectors for the same operator as before - only now they sit at different locations with different values of k. So how do they interact? This is where these eigenvectors come in handy. A product of the two functions multiplied together would look like this:


f_1 * f_2 = u_1*u_2


If we take this and add k to both sides, we get:


k*f_1 * f_2 = u_1*u_2 + k(y/2 + t(x/2))


Now we need to find the eigenvalues of the two expressions in brackets. Let's take them out and put them on either side:


k*u_1*u_2 - (y/2 + t(x/2))^2 = 0


This gives us our common eigenvalue equation: an eigenvalue for a given operator is zero when multiplied by that operator, therefore the only way for this equation to be true is if each term has a magnitude of zero, or that they add up to zero.




u_1*u_2 = 0


Because this eigenvalue equation is true for any values of x and y, it tells us that u_1 and u_2 are orthogonal - perpendicular - to each other. This can be shown further by multiplying the product by its own conjugate:


(u_1*u_2)^* * (u_1*u_2) = 0


Which gives us:


(y/2 + t(x/2))^* * (y/2 + t(x/2)) = 0


This shows that the individual terms are indeed eigenvectors with their corresponding eigenvalues.


Derivation 2:


Let's take a look at the coefficient matrix for our eigenvector basis:


C = [u_1, u_2] = [y/2 + t(x/2), x*k*t - (x/2)^3]


We can see that there are clearly two terms here, each with its own coefficients and exponents. As we know from before, the determinant of this must be zero if any non-zero term is to have an eigenvalue of zero. Therefore, the two coefficients must have opposite signs for this to work out correctly, which makes the corresponding vectors orthogonal.


C is therefore our coefficient matrix for the eigenvectors.
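The determinant condition used above is usually written as the characteristic equation det(H - E*I) = 0. A hedged sketch with SymPy, using a generic symmetric 2x2 matrix of my own rather than the C defined in the text:

```python
import sympy as sp

E, a, b, c = sp.symbols("E a b c", real=True)

# A generic symmetric 2x2 operator (stand-in for the Hamiltonian in the text).
H = sp.Matrix([[a, b],
               [b, c]])

# Characteristic equation: det(H - E*I) = 0 gives the eigenvalues.
char_eq = (H - E * sp.eye(2)).det()
print(sp.solve(sp.Eq(char_eq, 0), E))
```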




We have now found the eigenvalues of the Hamiltonian! Remember that k = 0, so let's plug in some negative numbers to see what happens. The two eigenvalues are -1 and 1, which are the same as found earlier through other methods. By using these eigenvalues with our orthogonal basis vectors u_1*u_2 = 0 and u_2*u_1 = 0 we can find two normalized eigenvectors which form a basis with these eigenfunctions:


x + y/2


and


x - y/2


The complete state of an electron is then represented by two complex numbers, one for each eigenfunction:


a_1 = x + y/2 = r cos(wt) and a_2 = x - y/2 = r sin(wt), where w is the angular frequency of the electron's wave function. The pair of complex numbers representing such a state is called an amplitude function, and in this case it gives |a_1|^2 + |a_2|^2, which represents how much probability there is that, at a given time, we may find our electron near "x" or near "y".
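A small sketch of the bookkeeping for amplitudes of the form quoted above (my own, with arbitrary numbers): taking r = 1, |a_1|^2 + |a_2|^2 = cos^2(wt) + sin^2(wt) = 1 at every time, so the total probability is conserved.

```python
import numpy as np

w, r = 2.0, 1.0                      # arbitrary angular frequency and normalisation
t = np.linspace(0, 5, 200)

a1 = r * np.cos(w * t)               # amplitude on the first eigenfunction
a2 = r * np.sin(w * t)               # amplitude on the second eigenfunction

total = np.abs(a1) ** 2 + np.abs(a2) ** 2
print(np.allclose(total, 1.0))       # True: probability is conserved over time
```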


The linear combination of these two eigenfunctions gives us all possible combinations that can be formed from them:


C0 = [1, 0]


C1 = [0, 1]


C2 = C0 + C1 = r cos(wt) + i*r sin(wt)


We can also represent these eigenvectors as a sum of both the eigenfunctions:


C0 + C1 * exp(-i*k*t/hbar) = r sin((wt - hbar k)/hbar)


This is because k=0 so w=k/hbar. Doing so results in an amplitude function equal to |a_1|^2 which represents how much probability there is that at a given time, we may find our electron near "x" or near "y".
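For comparison, here is a hedged sketch of how a superposition of two energy eigenstates usually evolves: each amplitude picks up a phase exp(-i*E*t/hbar), so the occupation probabilities stay constant while only relative phases change. This follows the standard textbook rule rather than the specific expressions in the text, and sets hbar = 1 with an equal superposition chosen purely for illustration.

```python
import numpy as np

hbar = 1.0
E1, E2 = -1.0, 1.0                       # the two eigenvalues quoted above
a1, a2 = 1 / np.sqrt(2), 1 / np.sqrt(2)  # an equal superposition, for illustration

t = np.linspace(0, 10, 5)
c1 = a1 * np.exp(-1j * E1 * t / hbar)    # each amplitude just rotates in phase
c2 = a2 * np.exp(-1j * E2 * t / hbar)

# |c1|^2 and |c2|^2 are constant in time; only the relative phase changes.
print(np.abs(c1) ** 2 + np.abs(c2) ** 2)  # stays at 1.0
```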


It is important to learn about eigenvalues and eigenvectors in quantum mechanics because the machinery is very similar to classical mechanics - instead of classical variables, however, we use momentum values rather than x-y position coordinates. The difference between the two frameworks is how these variables interact with each other. In macroscopic physics we use conservation laws, which involve integrating over all subdomains of the system under study so that physical quantities such as mass or momentum remain independent of time or change smoothly from one region of space to another (conservation laws are classically expressed with partial derivatives, but quantum mechanically the variables are complex numbers, so this does not work out so nicely).


However, in atomic, subatomic and even nucleonic physics, where electrons move around at high speeds or remain confined to a region small compared to their wavelength, these conservation laws do not hold in the same way. This is because eigenstates of each variable cannot be decomposed into a set of independent partials. Some quantities have to change discontinuously when crossing certain boundaries between domains, which can lead to interactions between physical systems that we would normally expect to be unconnected - you can think of this as if we tried using 2D coordinates instead of 1D and had problems finding eigenvectors. The only difference here is that, instead of real numbers, quantum mechanical systems use complex numbers.


The final outcome is the same in both - if you want to find eigenvalues and eigenvectors, you have to solve a set of linear equations based on the Hamiltonian or other matrices. This can be applied to any quantum-mechanical system whose observables are Hermitian, meaning they are self-adjoint operators representing physical quantities whose measured values are real (and may be positive or negative); it does not work for non-Hermitian cases.
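A minimal check (my own sketch) of what "Hermitian" buys you in practice: the matrix equals its own conjugate transpose, and its eigenvalues come out real.

```python
import numpy as np

H = np.array([[2.0, 1.0 - 1.0j],
              [1.0 + 1.0j, 3.0]])

print(np.allclose(H, H.conj().T))    # True: H is Hermitian (self-adjoint)
print(np.linalg.eigvalsh(H))         # real eigenvalues, as expected for an observable
```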


This tells us that problems in which observing something changes its state give different results in quantum mechanics than in classical mechanics - this is because the eigenstates of a Hermitian operator are stationary, meaning they do not change with time apart from a phase.







 
 
 
