Topic: Converse for independent functions
ecoist
Senior Riddler
Converse for independent functions
« on: Dec 28th, 2006, 8:42pm »
One way to show that n real-valued functions f_1, ..., f_n are linearly independent is to somehow find n elements x_1, ..., x_n in the common domain of these functions such that the n x n matrix A = (f_i(x_j)) is nonsingular. What about the converse? Given n linearly independent real-valued functions f_i with common domain D, do there always exist n elements d_1, ..., d_n in D such that the n x n matrix A = (f_i(d_j)) is nonsingular?
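A quick numerical illustration of the sufficient condition (my own sketch, not part of the original post), using Python with numpy and a made-up example of three functions:

import numpy as np

# Hypothetical example: the functions 1, x, x^2 on the real line.
fs = [lambda x: 1.0, lambda x: x, lambda x: x**2]

# Candidate evaluation points d_1, ..., d_n (any choice that works will do).
points = [0.0, 1.0, 2.0]

# Build A with A[i][j] = f_i(d_j).
A = np.array([[f(d) for d in points] for f in fs])

# A is nonsingular (full rank), so 1, x, x^2 are linearly independent:
# a relation sum_i c_i f_i = 0 with c != 0 would force c^T A = 0,
# contradicting invertibility.
print(np.linalg.matrix_rank(A))  # prints 3

Here A = [[1,1,1],[0,1,2],[0,1,4]] has determinant 2, so these three points already rule out any nontrivial relation among 1, x, x^2.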
Eigenray
wu::riddles Moderator Uberpuzzler
Re: Converse for independent functions
« Reply #1 on: Dec 29th, 2006, 12:47am »
Essentially this is the statement that the functions are linearly independent iff the rank of the n x |D| "matrix" (f_i(d_j)) is n. Let k be the dimension of the column span, i.e., the largest k for which there exist x_1, ..., x_k such that the n x k matrix A = (f_i(x_j)) has rank k. Then k ≤ n, and if k = n we are done, so suppose that k < n. We show that the f_i are linearly dependent. Since A is n x k with k < n, there is some non-zero vector v in R^n such that v^T A = 0, i.e., for each j = 1, ..., k, ∑_i v_i f_i(x_j) = 0. But for any y in D, the maximality of k implies that the column vector F(y) = (f_i(y)) lies in the column span of A (otherwise adjoining the column F(y) would raise the rank to k+1), so F(y) = Aw for some w in R^k (depending on y). That is, for each i = 1, ..., n, f_i(y) = ∑_j f_i(x_j) w_j. But now v^T F(y) = v^T A w = 0. Since this holds for all y, it follows that ∑_i v_i f_i is identically 0, and the f_i are linearly dependent.
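A small computational sketch of this argument (mine, not Eigenray's; assuming numpy and an invented dependent example): greedily add points while the rank of (f_i(x_j)) keeps growing; if it stalls at k < n, a left null vector of A recovers the dependence ∑_i v_i f_i = 0.

import numpy as np

def greedy_points(fs, candidates, tol=1e-9):
    # Greedily keep candidate points that increase the rank of (f_i(x_j)).
    chosen, rank = [], 0
    for x in candidates:
        cols = chosen + [x]
        A = np.array([[f(c) for c in cols] for f in fs])
        r = np.linalg.matrix_rank(A, tol=tol)
        if r > rank:
            chosen, rank = cols, r
    return chosen, rank

# Dependent example: 1, sin^2 x, cos^2 x satisfy 1 - sin^2 x - cos^2 x = 0.
fs = [lambda x: 1.0, lambda x: np.sin(x)**2, lambda x: np.cos(x)**2]
chosen, k = greedy_points(fs, np.linspace(0.0, 3.0, 31))
print(k)  # 2 < 3: the rank stalls below n

# With k < n, find v with v^T A = 0; it gives the relation sum_i v_i f_i = 0.
A = np.array([[f(c) for c in chosen] for f in fs])
v = np.linalg.svd(A.T)[2][-1]   # null space of A^T = left null space of A
print(np.round(v / v[0], 3))    # (1, -1, -1): 1 - sin^2 - cos^2 = 0

The division v / v[0] just scales the recovered null vector so its first entry is 1; up to that scaling it is (1, -1, -1), matching sin^2 + cos^2 = 1.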