
/sci/ - Science & Math



File: 8 KB, 547x185, day.gif
No.4299956

Putnam/Olympiad problem of the day from
http://www.math.harvard.edu/putnam/

Let M be a set of real n x n matrices such that
(i) <span class="math">I \in M[/spoiler], where I is the n x n identity matrix;
(ii) if <span class="math">A \in M[/spoiler] and <span class="math">B \in M[/spoiler], then either <span class="math">AB \in M[/spoiler] or <span class="math">-AB \in M[/spoiler], but not both;
(iii) if <span class="math">A \in M[/spoiler] and <span class="math">B \in M[/spoiler], then either AB = BA or AB = -BA;
(iv) if <span class="math">A \in M[/spoiler] and <span class="math">A \neq I[/spoiler], there is at least one <span class="math">B \in M[/spoiler] such that AB = -BA.
Prove that M contains at most <span class="math">n^2[/spoiler] matrices.
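Before diving in, a numerical sanity check that the bound is attainable (this example is my own choice, not part of the problem): for n = 2, a Pauli-like family of 4 = n² matrices satisfies all four conditions.

```python
# Sanity check (not from the problem statement): for n = 2 the bound
# n^2 = 4 is attained by M = {I, Z, X, ZX} with Z = diag(1, -1) and
# X = [[0, 1], [1, 0]], an arbitrary anticommuting pair.
import numpy as np
from itertools import product

I = np.eye(2)
Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
M = [I, Z, X, Z @ X]

def in_M(C):
    return any(np.allclose(C, A) for A in M)

for A, B in product(M, repeat=2):
    P = A @ B
    # (ii): exactly one of AB, -AB lies in M
    assert in_M(P) != in_M(-P)
    # (iii): every pair commutes or anticommutes
    assert np.allclose(P, B @ A) or np.allclose(P, -(B @ A))

# (iv): every A != I anticommutes with some B in M
for A in M:
    if not np.allclose(A, I):
        assert any(np.allclose(A @ B, -(B @ A)) for B in M)

print(len(M))  # 4 = n^2, so the bound is tight for n = 2
```

So the problem is to show that no such family can beat this construction's count.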

>> No.4300441

If the number of matrices is bounded, then they're all determinant 1 or -1.

>> No.4300451

>>4300441
Did you rule out determinant zero?

>> No.4300632

>>4300441
Can you show your work?

>> No.4300668

>>4300632
Not same anon, but I can show you this.

If <span class="math">A\in M[/spoiler] then exactly one of <span class="math">A^2[/spoiler] and <span class="math">-A^2[/spoiler] is in <span class="math">M[/spoiler]. Iterating, we see that for every positive integer <span class="math">k[/spoiler], exactly one of <span class="math">A^k\in M[/spoiler] and <span class="math">-A^k\in M[/spoiler] holds.
If <span class="math">\det A \notin \{0,1,-1\}[/spoiler], then <span class="math">\{det A^k\,:\, k=1,2,\ldots\}[/spoiler] is an infinite set. Therefore, the set <span class="math">\{A^k\,:\, k=1,2,\ldots\}[/spoiler] cannot possibly be finite.
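A quick numerical illustration of this determinant argument (the matrix is an arbitrary choice of mine, not from the thread):

```python
# If |det A| is not 0 or 1, the determinants of A, A^2, A^3, ... are
# pairwise distinct, so the powers themselves are pairwise distinct
# matrices -- M could not be finite.
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 1.0]])      # det A = 2, outside {0, 1, -1}
dets = [np.linalg.det(np.linalg.matrix_power(A, k)) for k in range(1, 8)]
print(dets)  # det(A^k) = (det A)^k = 2^k, all distinct
assert len({round(d) for d in dets}) == len(dets)
```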

>> No.4300670

>>4300668
<span class="math">\{det A^k\,:\, k=1,2,\ldots\}[/spoiler]
should be
<span class="math">\{\det A^k\,:\, k=1,2,\ldots\}[/spoiler]

>> No.4300737

>>4300668
I have a follow-up observation.

If <span class="math">A,B\in M[/spoiler], (iii) says that either <span class="math">AB=BA[/spoiler] or <span class="math">AB=-BA[/spoiler].

If <span class="math">AB=-BA[/spoiler] then <span class="math">A^kB^m=(-1)^{km}B^mA^k[/spoiler].

If <span class="math">AB=BA[/spoiler] then <span class="math">A^kB^m=B^mA^k[/spoiler].

Either way, <span class="math">A^{2k}B^m=B^mA^{2k}[/spoiler] for all nonnegative integers <span class="math">k, m[/spoiler].

As I noted in >>4300668
for every positive integer <span class="math">k[/spoiler], either <span class="math">A^{2k}\in M[/spoiler] or <span class="math">-A^{2k}\in M[/spoiler].

Notation: If one of <span class="math">A[/spoiler] and <span class="math">-A[/spoiler] is in <span class="math">M[/spoiler], let <span class="math">\sigma_{A}[/spoiler] denote the appropriate sign such that <span class="math">\sigma_{A}A\in M[/spoiler].

Then <span class="math">(\sigma_{A^{2k}}A^{2k})B= B(\sigma_{A^{2k}}A^{2k})[/spoiler] for all <span class="math">B\in M[/spoiler].
But this contradicts (iv) unless <span class="math">\sigma_{A^{2k}}A^{2k} = I\,[/spoiler]!!!

Thus, for all <span class="math">A\in M[/spoiler], <span class="math">A^{2k}=\sigma_{A^{2k}}I=\pm I[/spoiler].
In particular, <span class="math">A^2 = \pm I[/spoiler].

So, indeed <span class="math">\det A = \pm 1[/spoiler], but we now have a much stronger statement than that.


Next step: use this fact to solve the problem.
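The power-commutation identity above is easy to spot-check numerically with an anticommuting pair (the matrices are my own illustrative choice):

```python
# Spot-check: if AB = -BA, then A^k B^m = (-1)^{km} B^m A^k.
import numpy as np
from numpy.linalg import matrix_power as mpow

A = np.diag([1.0, -1.0])                 # Z
B = np.array([[0.0, 1.0], [1.0, 0.0]])   # X; here AB = -BA
assert np.allclose(A @ B, -(B @ A))

for k in range(5):
    for m in range(5):
        lhs = mpow(A, k) @ mpow(B, m)
        rhs = (-1) ** (k * m) * (mpow(B, m) @ mpow(A, k))
        assert np.allclose(lhs, rhs)
```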

>> No.4300759

>>4300737
Additional observation:

<span class="math">\det(-I)= (-1)^n= -1 [/spoiler] for odd <span class="math">n[/spoiler].
But <span class="math">\det A^2 = (\det A)^2 \geq 0[/spoiler].

Therefore, for odd <span class="math">n[/spoiler], <span class="math">A^2 = I[/spoiler].

>> No.4300822

>>4300759
Well, my first hope was that we could show the total number of square roots of <span class="math">I[/spoiler] and <span class="math">-I[/spoiler] is at most <span class="math">n^2[/spoiler]. But this is hopeless: the number of square roots is infinite. For <span class="math">n=2[/spoiler],
<div class="math">
A=
\begin{matrix}
a & b\\ c & -a
\end{matrix}
</div>
satisfies <span class="math">A^2=I[/spoiler] iff <span class="math">a^2 +bc = 1[/spoiler].

>> No.4300844

>>4300822
Hmm. I'll try again. Where is there a FAQ on how to use jsMath?

For <span class="math">n=2[/spoiler],
<div class="math">
A=
\left[\matrix{a& b\\ c& -a}\right]
</div>
satisfies <span class="math">A^2=I\;[/spoiler] iff <span class="math">\;a^2 +bc = 1[/spoiler].

>> No.4300854

>>4300844
Once more before I give up. This will look ugly... The part that's left out is

<div class="math">
A = {{a\;\;\;\;b} \brack {c\;\;\;-a}}.
</div>
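For what it's worth, the parametrization above checks out numerically (the sample values are arbitrary):

```python
# Any real a, b, c with a^2 + bc = 1 gives a square root of I,
# so the square roots of I form an infinite family.
import numpy as np

for a, b in [(3.0, 4.0), (5.0, 2.0), (0.5, 1.0)]:
    c = (1 - a * a) / b          # force a^2 + bc = 1
    A = np.array([[a, b],
                  [c, -a]])
    assert np.allclose(A @ A, np.eye(2))
```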

>> No.4300906

I know this is kind of off-topic, but is this the kind of thing you need to be able to do for Math Graduate school?

>> No.4300929

>>4300906
if you want to do number theory

>> No.4300938

>>4300929
This isn't number theory.

>>4300906
Not sure how to answer that; it depends on what you mean by "need" and "this kind of thing". I can say that if you know how to solve this problem, then you are probably in good shape to begin grad school.

>> No.4301023 [DELETED] 

>>4300906
Literally no one solved this particular problem on the actual test in 1992.

>> No.4301057

>>4301023
Now you're making me want to give up on it. I thought I was close, but now I'm not so sure...

>> No.4301249

I'm new to this so bear with me....
M is a set of real n x n matrices.
n x n = n^2

It's not that straightforward though, is it?

>> No.4301251 [DELETED] 
File: 330 KB, 704x576, vlcsnap-6166835.png

>>4301249
Holy shit, I think he's got it!

>> No.4301352

>>4301249

"n x n matrices" does not mean "there are n x n matrices in set M", but that the each matrix in M has n rows and n columns. So the size of the set M still needs to be proved. Other than that I'm retarded and can't contribute to this problem.

>> No.4301837
File: 42 KB, 550x296, Good-Will-Hunting-still-one.jpg

>>4299956

stand back gais, I got this

>> No.4301914
File: 34 KB, 431x450, 1302359767760.jpg

>>4301023

>> No.4302032
File: 3 KB, 400x400, EFG.gif

>>4301914
ok, maybe not.

>> No.4302071

Ok, I want to show that the matrices in M are mutually orthogonal. That could do it. I'll try it.

>> No.4302118

Claim: M is linearly independent

Proof:
Since I is in M, A*I is in M, therefore -A is not in M
Since there exists at least one B for all A =/= I such that AB = -BA, 0 is not in M

let An be lin dep and AnB* = -B*An
AnB* = sum ki (Ai B*) = sum ki (-B* Ai)

but since I is an element of M, B* = -B*, this leads to a contradiction.

I've just shown that it has at most n^2 + 1 elements, now I have to show that I is not a lin comb of the rest of M

>> No.4302136

>>4302118
continued

let I = sum ki Ai
multiply both sides by Am,
Am = sum ki (AiAm)

since AiAm or -AiAm is in M, I being lin dep implies Aj is lin dep, which we ruled out above.

QED

>> No.4302152

>>4302136
>>4302118

I just realised that only proves I is lin ind of the rest of M.

Working on the second half

>> No.4302171

Okay, I'll start back from anon's work in >>4300759

For <span class="math">A\in M[/spoiler], <span class="math">A[/spoiler] is full rank (we noticed the determinant is <span class="math">\pm 1[/spoiler] already).
Now, let's show <span class="math">A[/spoiler] is diagonalizable with eigenvalues 1 and -1:
<span class="math">A^2-I=\mathbf{0}_{n\times n}[/spoiler], thus the minimum polynomial of <span class="math">A[/spoiler] divides <span class="math">X^2-1=(X-1)(X+1)[/spoiler]. (that's enough, right?)

So we're down to <span class="math">2^n[/spoiler] matrices... Not that good, but at least we have a clear idea of the shape of the matrices in <span class="math">M[/spoiler].
After that, we can take <span class="math">A=PDP^{-1}\in M[/spoiler] and <span class="math">B=QEQ^{-1}\in M[/spoiler] with <span class="math">D[/spoiler] and <span class="math">E[/spoiler] diagonal matrices with only <span class="math">1[/spoiler] and <span class="math">-1[/spoiler] on the diagonal, and then, maybe find properties of <span class="math">P[/spoiler] and <span class="math">Q[/spoiler] so that either <span class="math">AB=BA[/spoiler] or <span class="math">AB=-BA[/spoiler]?
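The diagonalization picture sketched here can be checked numerically: conjugating a fixed ±1 diagonal by a basis change gives a square root of I with eigenvalues ±1, and distinct choices of P generally give distinct A for the same D, which is why counting diagonals alone undercounts (the random P is my own construction for illustration):

```python
# A = P D P^{-1} with D = diag(+-1) satisfies A^2 = I and has
# eigenvalues exactly +-1, for ANY invertible P.
import numpy as np

rng = np.random.default_rng(0)
D = np.diag([1.0, 1.0, -1.0])
P = rng.normal(size=(3, 3))            # invertible with probability 1
A = P @ D @ np.linalg.inv(P)

assert np.allclose(A @ A, np.eye(3))
assert np.allclose(sorted(np.linalg.eigvals(A).real), [-1.0, 1.0, 1.0])
```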

>> No.4302188

>>4302152

let An be lin dep and AnB* = -B*An
AnB* = sum ki (Ai B*) = sum ki (-B* Ai)

this is the violating step; what I should have written was that for at least one Am =/= An,
AmB* = -B*Am

If we remove An and repeat the process with an Am, we will eventually reduce the set M to a lin ind set subject to the above contradiction.

Now I think I'm done.

>> No.4302185

>>4302171
Actually we're down to <span class="math">2^{n-1}[/spoiler] because (ii) with <span class="math">A=I[/spoiler] guarantees that for all <span class="math">B\in M[/spoiler], either <span class="math">IB=B\in M[/spoiler] or <span class="math">-IB=-B\in M[/spoiler], but not both. Thus <span class="math">-B\notin M[/spoiler]. That's not much better but the fact that the opposite of a matrix in <span class="math">M[/spoiler] isn't in <span class="math">M[/spoiler] might help.

>> No.4303216

>>4302185
Hey, I'm the poster of >>4300737
You are correct (and gave the correct reason for the fact) that every matrix in <span class="math">M[/spoiler] is diagonalizable with eigenvalues ±1.

However, I don't see how this observation alone reduces the number of matrices to <span class="math">2^n[/spoiler]. If (and only if) <span class="math">M[/spoiler] were a commuting family of matrices, then the matrices in <span class="math">M[/spoiler] would all be simultaneously diagonalizable — i.e., there would exist an invertible <span class="math">P[/spoiler] such that <span class="math">P^{-1}AP[/spoiler] is diagonal for all <span class="math">A\in M[/spoiler]. If this were the case, it would be clear that <span class="math">M[/spoiler] has size at most <span class="math">2^n[/spoiler] (in fact, at most <span class="math">2^{n-1}[/spoiler], as you noted).

However, <span class="math">M[/spoiler] cannot possibly be a commuting family because of condition (iv). So we can't yet rule out the possibility that there exist distinct matrices in <span class="math">M[/spoiler] that have the same diagonalization: <span class="math">A = PDP^{-1}[/spoiler], <span class="math">B= QDQ^{-1}[/spoiler], <span class="math">A\neq B[/spoiler].

Such a pair <span class="math">A,B[/spoiler] must anticommute; perhaps we could use this fact to show that no such distinct pair could exist in <span class="math">M[/spoiler]. Or at least there might be a way to limit the number of possible transformation matrices <span class="math">P[/spoiler] for a given diagonal <span class="math">D[/spoiler] by using the fact that all pairs of matrices in <span class="math">M[/spoiler] either commute or anticommute.

But I suspect this is the wrong avenue for ultimately solving the problem.

>> No.4303368

>>4303216
Yeah I stupidly counted the potential diagonals only, and forgot the changes of basis.

Also, I think that the <span class="math">n^2[/spoiler] indeed hints towards linear independence, as >>4302188 did. I've been too lazy to read his posts because of the non-LaTeXed equations though, so I was waiting for someone to confirm before taking the time ^^'

And yeah, I don't really know how to go on from where I stopped. The few things I tried after posting didn't work.

>> No.4305533

Since M is an arbitrary set of real n x n matrices such that conditions i-iv are satisfied, we can restrict M so that it contains no more than n^2 matrices without loss of generality. Thus, by definition, M contains at most n^2 matrices. This concludes the proof. We leave the verification of the general (unrestricted M) case as an exercise to the reader.

>> No.4306202

>>4305533

:DDDDD

>> No.4306500
File: 67 KB, 248x315, 5terfsdf.png

I was lost halfway through guys ...

>> No.4306750

>>4302118
I think you're interpreting the problem statement wrong. It's that given an A in M, there's a B in M (which may depend on the A chosen) such that AB = -BA. Not that there's a B such that AB = -BA for all A in M.

But I think this is a promising approach.

>> No.4306761

>>4306750
*given an A in M --other than I--, there's a B in M
*Not that there's a B such that AB = -BA for all A in M --other than I--.

>> No.4307032

Suppose there is a linearly dependent set <span class="math">\{A_i\}[/spoiler] of matrices in M. Choose any B in M.

We have
<span class="math">\sum_i k_i A_i = 0[/spoiler]

Split the sum into the terms that commute with B, and the terms that anticommute:
<span class="math">\sum_i k_i A_i + \sum_j k_j A_j = 0[/spoiler]
<span class="math">\sum_i k_i A_i B + \sum_j k_j A_j B = 0[/spoiler]
<span class="math">\sum_i k_i B A_i - \sum_j k_j B A_j = 0[/spoiler]

From
>>4300737
we have that <span class="math">B^2 = \pm I[/spoiler], so B is invertible, and we can take
<span class="math">\sum_i k_i A_i - \sum_j k_j A_j = 0[/spoiler]

And we conclude that both the subset of <span class="math">\{A_i\}[/spoiler] that commutes with B, and the subset that anticommutes, are linearly dependent:
<span class="math">\sum_i k_i A_i = 0[/spoiler]
<span class="math">\sum_j k_j A_j = 0[/spoiler]

Repeat this procedure for all <span class="math">B \in M[/spoiler] to obtain a linearly dependent set of <span class="math">\{A_i\}[/spoiler] such that given any B, either every <span class="math">A_i[/spoiler] in the set anticommutes with it, or every <span class="math">A_i[/spoiler] commutes.

Choose two distinct matrices <span class="math">A_1[/spoiler] and <span class="math">A_2[/spoiler] from that set. For any B in M, either
<span class="math">A_1 A_2 B = A_1 B A_2 = B A_1 A_2[/spoiler]
or
<span class="math">A_1 A_2 B = -A_1 B A_2 = B A_1 A_2[/spoiler].

Since one of <span class="math">\pm A_1 A_2[/spoiler] is in M and commutes with every B, condition (iv) forces that element to be I; hence <span class="math">A_1 A_2 = \pm I[/spoiler]. But that would make
<span class="math">A_1 = A_1 I = A_1^2 A_2 = \pm I A_2 = \pm A_2[/spoiler].
So either <span class="math">A_1[/spoiler] and <span class="math">A_2[/spoiler] are the same matrix, or both <span class="math">I A_1[/spoiler] and <span class="math">-I A_1[/spoiler] are in M. In either case, we have a contradiction.

Therefore the matrices in M are linearly independent, and there can be at most <span class="math">n^2[/spoiler] of them.
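The final dimension-count step rests on dim(R^{n×n}) = n²: any linearly independent set of n×n matrices has at most n² elements. Here it is checked on a concrete 4-element family for n = 2 built from Z = diag(1, -1) and X = [[0, 1], [1, 0]] (an illustrative choice of mine, not forced by the problem):

```python
# Flatten each matrix to a vector in R^(n^2) and check the rank:
# an independent family can have at most n^2 = 4 members, and this
# one attains it.
import numpy as np

I = np.eye(2)
Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
M = [I, Z, X, Z @ X]

V = np.stack([A.ravel() for A in M])   # each matrix as a vector in R^4
assert np.linalg.matrix_rank(V) == len(M) == 4  # independent, 4 = n^2
```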

>> No.4307125

>>4307032
Looks nice, but I don't understand that part:
Repeat this procedure for all <span class="math">B\in M[/spoiler] to obtain a linearly dependent set of <span class="math">\{A_i\}[/spoiler] such that given any <span class="math">B[/spoiler], either every <span class="math">A_i[/spoiler] in the set anticommutes with it, or every <span class="math">A_i[/spoiler] commutes.

Why does repeating the above property give you that? From what I understand, the only thing it gives you is that given any <span class="math">B[/spoiler] in <span class="math">M[/spoiler], the subset of <span class="math">\{A_i\}[/spoiler] that commute with it is linearly dependent and so is the subset that anticommute. Do you mean repeating it by taking your "linearly dependent" initial subset as something that depends on the previous choice of <span class="math">B[/spoiler] (e.g. the linearly dependent set that commute with the original <span class="math">B[/spoiler])? Either way I don't get it. If the set <span class="math">\{A_i\}[/spoiler] remains the same, I don't get why you can say that, and if it changes, I don't get how you can make sure it has at least two elements (for the discussion on <span class="math">A_1A_2[/spoiler] to make sense).

But I'm a bit numb at the moment, so maybe I'm just missing something obvious... Anyway I'm sure that even if there's a mistake here or there, at the very least the basic shape of the proof is in your post.

>> No.4308236

>>4307125

>Do you mean repeating it by taking your "linearly dependent" initial subset as something that depends on the previous choice of B (e.g. the linearly dependent set that commute with the original B )?

This one. Let me take another stab at defining how these subsets are generated, this time more rigorously.

First enumerate the matrices in M: <span class="math">M = \{A_1, A_2, \ldots, A_n\}[/spoiler]. Note that I'm changing my notation here: In this post, <span class="math">A_i[/spoiler] will range over all of M, rather than over the subset I'm working with.

If M is linearly dependent, there exist <span class="math">k_1, \ldots, k_n[/spoiler] with at least one nonzero <span class="math">k_i[/spoiler] such that
<span class="math">\sum_i k_i A_i = 0[/spoiler].

For the first step, let the sum in the previous argument range over all matrices in M such that <span class="math">k_i \neq 0[/spoiler>. Call the set <span class="math">R_0[/spoiler].
<span class="math">R_0 = \{A_i \in M | k_i \neq 0\}[/spoiler]

And for the B of the previous argument, take <span class="math">B = A_1[/spoiler].

We separate the set into two subsets
<span class="math">S_1 = \{ A_i \in R_0 | A_i A_1 = A_1 A_i \}[/spoiler]
<span class="math">T_1 = \{ A_i \in R_0 | A_i A_1 = - A_1 A_i \}[/spoiler]
and find:
<span class="math">\sum_{i, A_i \in S_1} k_i A_i = 0[/spoiler]
<span class="math">\sum_{i, A_i \in T_1} k_i A_i = 0[/spoiler]
At least one of these sets has to be nonempty. Call that set for <span class="math">R_1[/spoiler] and use it as the starting set for the next step.

>> No.4308241

cont. from
>>4308236

For each n, define <span class="math">R_n[/spoiler] inductively as whichever of
<span class="math">\{ A_i \in R_{n-1} | A_i A_n = A_n A_i \}[/spoiler]
<span class="math">\{ A_i \in R_{n-1} | A_i A_n = - A_n A_i \}[/spoiler]
is nonempty.

The final set <span class="math">R_n[/spoiler] has the properties that

(i) <span class="math">\sum_{i, A_i \in R_n} k_i A_i = 0[/spoiler]
with all the <span class="math">k_i[/spoiler] nonzero.
Since no <span class="math">A_i = 0[/spoiler], this means that <span class="math">R_n[/spoiler] must have at least two elements.

(ii) for any <span class="math">A_i \in M[/spoiler]
there exists <span class="math">s_i \in \{+1, -1\}[/spoiler]
such that for every <span class="math">A_j \in R_n[/spoiler],
<span class="math">A_j A_i = s_i A_i A_j[/spoiler].

Given two elements <span class="math">A_h, A_k \in R_n[/spoiler], this gives us
<span class="math">A_h A_k A_i = A_i A_h A_k[/spoiler] for all <span class="math">A_i \in M[/spoiler], and we obtain a contradiction as before.

>> No.4308821

bump (I still want to think about this one)

>> No.4310911

>>4308821
bump again