How to check if m n-sized vectors are linearly independent?
This is not strictly a programming question, but most programmers sooner or later have to deal with math (especially algebra), so I think the answer could turn out to be useful to someone else in the future.
Now, the problem:
I'm trying to check if m vectors of dimension n are linearly independent. If m == n you can just build a matrix using the vectors and check if the determinant is != 0. But what if m < n?
See also this video lecture.
Construct a matrix of the vectors (one row per vector) and perform Gaussian elimination on it. If any row cancels out (reduces to all zeros), the vectors are not linearly independent.
The trivial case is m > n: in that case they cannot be linearly independent.
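The elimination check described above can be sketched as follows. This is a minimal illustration, not the answerer's code; the function name, the tolerance `eps`, and the test vectors are my own choices.

```python
def are_independent(vectors, eps=1e-10):
    """Return True if the given row vectors are linearly independent."""
    rows = [list(map(float, v)) for v in vectors]
    n = len(rows[0])
    rank = 0
    for col in range(n):
        # Find a row (at or below the current pivot position) with a
        # non-negligible entry in this column.
        pivot = next((r for r in range(rank, len(rows))
                      if abs(rows[r][col]) > eps), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        # Eliminate this column from all rows below the pivot.
        for r in range(rank + 1, len(rows)):
            factor = rows[r][col] / rows[rank][col]
            for c in range(col, n):
                rows[r][c] -= factor * rows[rank][c]
        rank += 1
    # A row cancelled out exactly when rank < number of vectors.
    return rank == len(vectors)

print(are_independent([[1, 0, 0], [0, 1, 0]]))  # True (independent)
print(are_independent([[2, 3], [4, 6]]))        # False (dependent)
```

Note this handles all three shapes uniformly: if m > n, the rank can never reach m, so the function correctly reports dependence.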
Construct a matrix M whose rows are the vectors and determine the rank of M. If the rank of M is less than m (the number of vectors), then there is a linear dependence. In the algorithm to determine the rank of M you can stop the procedure as soon as you obtain one row of zeros, but running the algorithm to completion has the added bonus of yielding the dimension of the span of the vectors. Oh, and the algorithm to determine the rank of M is merely Gaussian elimination.
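If NumPy is available, the whole rank procedure described above collapses to a single library call (which internally uses the SVD rather than naive elimination, making it more robust numerically). The example matrix below is my own, not from the answer:

```python
import numpy as np

M = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 3.0, 1.0]])  # row 3 = row 1 + row 2, so dependent

rank = np.linalg.matrix_rank(M)
print(rank)              # 2
print(rank == M.shape[0])  # False -> the rows are linearly dependent
```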
Beware of numerical instability. See the warning at the beginning of chapter two of Numerical Recipes.
If m < n, you will have to do some operation on them (there are multiple possibilities: Gaussian elimination, orthogonalization, etc.; almost any transformation that can be used for solving equations will do) and check the result (e.g. Gaussian elimination => a zero row, orthogonalization => a zero vector, SVD => a zero singular value).
However, note that this question is a bad question for a programmer to ask, and this problem is a bad problem for a program to solve. That's because every linearly dependent set of m <= n vectors has a set of linearly independent vectors nearby (i.e. the problem is numerically unstable).
I have been working on this problem these days.
Previously, I found some algorithms for Gaussian or Gauss-Jordan elimination, but most of them only apply to square matrices, not general ones.
For a general matrix, one of the best answers might be this: http://rosettacode.org/wiki/Reduced_row_echelon_form#MATLAB
You can find both pseudo-code and source code in various languages there. As for me, I ported the Python source code to C++, because the C++ code provided at the above link is somewhat complex and was inconvenient to use in my simulation.
Hope this helps, and good luck ^^
If computing power is not a problem, probably the best way is to find the singular values of the matrix. Basically, you need to find the eigenvalues of M'*M and look at the ratio of the largest to the smallest (the condition number). If the ratio is not very big, the vectors are independent.
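A sketch of this singular-value test, assuming NumPy. Instead of forming M'*M explicitly, NumPy's SVD returns the singular values of M directly (their squares are the eigenvalues of M'*M); the function name and the example matrices are my own:

```python
import numpy as np

def dependence_ratio(vectors):
    """Ratio of largest to smallest singular value; a huge ratio
    means the vectors are (nearly) linearly dependent."""
    # compute_uv=False returns only the singular values, descending.
    s = np.linalg.svd(np.array(vectors, dtype=float), compute_uv=False)
    return s[0] / s[-1]

print(dependence_ratio([[1, 0, 0], [0, 1, 0]]))  # 1.0 -> independent
print(dependence_ratio([[2, 3, 1], [4, 6, 2]]))  # enormous -> dependent
```

What counts as "not very big" is a judgment call; in practice the ratio is compared against a tolerance tied to the machine precision and the matrix size.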
Another way to check that m row vectors are linearly independent, when put in an m×n matrix M, is to compute
det(M * M^T)
i.e. the determinant of an m×m square matrix. It is zero if and only if M has some dependent rows. However, Gaussian elimination should in general be faster.
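This Gram-determinant check is a one-liner with NumPy. A floating-point threshold is needed in practice, since the determinant of a dependent set comes out as roughly zero rather than exactly zero; the example matrix is my own:

```python
import numpy as np

M = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])  # second row = 2 * first row

gram = M @ M.T                   # m x m Gram matrix
print(np.linalg.det(gram))       # ~0 -> the rows are dependent
```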
Sorry man, my mistake...
The source code provided at the above link turns out to be incorrect; at least the Python code I tested, and the C++ code I transformed from it, do not generate the right answer all the time. (Though for the example at the above link, the result is correct :) -- )
To test the Python code, simply replace the
and the returned result would be like:
Nevertheless, I have found a way out of this. This time I ported the MATLAB source code of the rref function to C++. You can run MATLAB and use the
type rref command to get the source code of rref.
Just note that if you are working with some really large or really small values, make sure to use the
long double datatype in C++. Otherwise, the result will be truncated and inconsistent with the MATLAB result.
I have been conducting large simulations in ns2, and all the observed results are sound. Hope this helps you and anyone else who has encountered this problem...
A very simple way, though not the most computationally efficient, is to simply remove random rows until m = n and then apply the determinant trick.
m < n: remove rows (make the vectors shorter) until the matrix is square, and then proceed as below
m = n: check if the determinant is 0 (as you said)
m > n (the number of vectors is greater than their length): they are linearly dependent (always)
The reason, in short, is that any solution to the system of m × n equations is also a solution to the n × n system of equations (you're trying to solve Av = 0). For a better explanation, see Wikipedia, which explains it better than I can.
#9 Could you please explain your solution better? I should perform a Gaussian elimination on what, exactly?
#10 Let's say you have the 2 vectors (2 3), (4 6). They map to the following set of equations: 2x + 3y = a and 4x + 6y = b. If you try a Gaussian elimination of x, you end up with 0x + 0y = 2a - b. The zeros indicate that the two vectors are not independent. Generalize for
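The comment's worked example can be checked mechanically, assuming NumPy: subtracting 2x the first row from the second leaves a zero row, exactly the 0x + 0y situation described above.

```python
import numpy as np

M = np.array([[2.0, 3.0],
              [4.0, 6.0]])
# Eliminate x from the second equation, as in the comment.
M[1] -= (M[1, 0] / M[0, 0]) * M[0]
print(M[1])  # [0. 0.] -> the vectors (2 3) and (4 6) are dependent
```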
#11 So this has at least O(n^3) complexity (depending on how we measure). Do you know if there is a (perhaps randomized)
#12 Can I use partial pivoting with this Gaussian elimination?
#13 Yes, but not every independent set has a dependent set nearby.
#14 True, but that just means every output of the algorithm on a dependent set is somewhat bogus
#15 How do you define 'not very big'?
#16 Are you sure it is the transpose and not the conjugate transpose?
#17 Can you give any reference(s) for this approach of randomly removing components of the vectors?