Subject: EHP proof of Lambda admissible monomial basis
Date: Mon, 14 Jun 2004 03:17:15 -0500
From: Bill Richter
To: dmd1@lehigh.edu

Mark Mahowald says there's a simple EHP proof of the Lambda admissible monomial basis. Here is my best EHP proof. It's not simple: it uses the proof in my Hopf preprint (which I think is due to Bousfield). I'll write \x & \(a_1,...,a_s) for lambda_x & lambda_(a_1,...,a_s).

The goal is to construct EHP sequences

   Lambda(n) >-E--> Lambda(n+1) -H-->> Lambda(2n+1)

If we define Lambda as usual, and Lambda(n) as the subspace spanned by the admissible monomials \(a_1,...,a_s) with a_1 < n, then we don't know how to define H, as the usual definition requires the admissible monomial basis! I'll give a different definition of Lambda(n). First

************ Pension operator preliminaries ************

Lambda is the tensor algebra over {\p : p >= 0} modulo the 2-sided ideal generated by the symmetric Adem relations of the 6 authors

   [p, n] := sum_{i+j=n} (n choose i) \{p+i} @ \{theta(p)+j}

which is given by "differentiating" n times the basic relation p @ theta(p). Extending Wang's theta = Sq^0, let's define theta: N ---> N by theta(x) = 2x+1. Note that theta([p,n]) = [theta(p), 2n].

Differentiating uses the 2-dim pension operator we'll shorthand by

   1 0 + 0 1

In my Hopf preprint, I use the 3-dim pension operator

   1 2 0 + 1 0 2 + 0 1 2

together with the 3-dim differentiation pension operator

   1 0 0 + 0 1 0 + 0 0 1

to get relations among Adem relations. The following sum is zero in the tensor algebra, for p, n, m >= 0:

   sum_{i+j=n, s+t=m} (n choose i) (m choose s)
     { [p+i, j+s] @ \{4p+3+2j+t} + \{p+j+t} @ [2p+1+i, 2j+s] }

In my Hopf preprint I derive the Lambda basis from these relations between Adem relations, by "shrinking" the 2-sided ideal.
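Since all coefficients here are mod 2, the relation among Adem relations can be checked by machine on small inputs. This is an illustrative sketch, not part of the proof; the names theta, adem, and pension_sum are mine, encoding tensor words as tuples with mod-2 coefficients in a Counter:

```python
from math import comb
from collections import Counter

def theta(x):                      # Wang's theta = Sq^0: theta(x) = 2x + 1
    return 2 * x + 1

def adem(p, n):
    """The symmetric Adem relation [p, n] = sum_{i+j=n} C(n,i) p+i @ theta(p)+j,
    as a formal mod-2 sum of tensor words (tuples)."""
    return Counter({(p + i, theta(p) + n - i): 1
                    for i in range(n + 1) if comb(n, i) % 2})

def pension_sum(p, n, m):
    """The relation among Adem relations: the displayed sum of 3-tensors."""
    total = Counter()
    for i in range(n + 1):
        j = n - i
        for s in range(m + 1):
            t = m - s
            c = comb(n, i) * comb(m, s)
            for w in adem(p + i, j + s):              # [p+i, j+s] @ 4p+3+2j+t
                total[w + (4 * p + 3 + 2 * j + t,)] += c
            for w in adem(2 * p + 1 + i, 2 * j + s):  # p+j+t @ [2p+1+i, 2j+s]
                total[(p + j + t,) + w] += c
    return Counter({w: 1 for w, c in total.items() if c % 2})

# theta([p, n]) = [theta(p), 2n], and the big sum vanishes mod 2
for p in range(3):
    for n in range(5):
        assert adem(theta(p), 2 * n) == Counter(
            {tuple(theta(x) for x in w): 1 for w in adem(p, n)})
        assert all(not pension_sum(p, n, m) for m in range(4))
```

For instance, pension_sum(0, 1, 0) collects (0,2,5) + (1,1,5) + (1,1,5) + (1,3,3) + (1,3,3) + (0,2,5), which cancels to zero mod 2, matching the claim that the sum is zero already in the tensor algebra.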
For text-clarity I'll often drop the \'s and braces, as in the following definition: For p >= 0 and n > 0, define the element [p, n]-hat of the tensor algebra by the equation

   [p, n] = p @ theta(p)+n + [p, n]-hat + p+n @ theta(p).

Note that theta([p,n]-hat) = [theta(p), 2n]-hat.

************ EHP preliminaries ************

Recall from my preprint that the sequence (a_1,...,a_s) is called n-pseudo-admissible if we have the inequalities

   a_1 < n
   a_2 < n + a_1 + 1
   ...
   a_s < n + a_1 + ... + a_{s-1} + s-1

Let's say an admissible (a_1,...,a_s) is n-admissible if a_1 < n. In my preprint, I derived Singer's unstable compositions from the fact that Adem relations preserve n-pseudo-admissibility (plus the obvious fact that n-admissibility implies n-pseudo-admissibility). I'm grateful to Pete Bousfield for pointing out that this argument is essentially due to Ed Curtis! From Wang's paper one gleans the Curtis excess

   CE(a_1,...,a_s) = min{ a_1, a_2 - a_1 - 1, ..., a_s - a_1 - ... - a_{s-1} - (s-1) }

Curtis's result, stated and proved by Wang (Prop 1.8.2), is that

   CE(a_1,...,a_s) = n implies \(a_1,...,a_s) \in Lambda(n+1)

Surely Curtis's original proof is the same as mine: applying an Adem relation can't increase CE. This fact immediately implies Singer's result (stated without proof in "The algebraic EHP sequence")

   Lambda^{s,t}(n) . Lambda(n+t) subset Lambda(n)

and as Singer observes in his preprint (where he proves his result), this immediately implies the Lambda EHP sequence (given the Lambda admissible monomial basis). Singer and Bousfield seem to agree with me that the Lambda EHP sequence is nontrivial, and a good explanation of the non-triviality is given by the complexity of the Curtis excess formula.

************ definition of Lambda(n) & H ************

Define P(n) to be the vector subspace of the tensor algebra with basis the n-pseudo-admissible sequences. Define Lambda(n) to be P(n) modulo the subspace spanned by the {\em n-pseudo-admissible relations}: (a_1 ...
a_i) @ [p, r] @ (c_1 ... c_j), where (a_1,..., a_i, p, theta(p)+r, c_1,..., c_j) is n-pseudo-admissible. Then clearly Lambda is the direct limit of the Lambda(n), and Lambda(n) is spanned by the admissible monomials with a_1 < n, and we immediately get Singer's unstable products:

   Lambda^{s,t}(n) @ Lambda(n+t) ---> Lambda(n)

We don't know we have inclusions Lambda(n) >-E--> Lambda(n+1), because we don't know we have inclusions Lambda(n) >---> Lambda. We'll now define the Hopf invariant Lambda(n+1) -H-->> Lambda(2n+1) simultaneously with an inverse to E, called K: Lambda(n+1) --->> Lambda(n). It will be easy to see that H preserves Adem relations, but it takes pension operators to see that K preserves Adem relations.

So take an (n+1)-pseudo-admissible sequence (a_1,...,a_s), and write

   a = a_1   b = a_2   c = a_3
   m := n + a + 1
   p := n + a + b + 2.

Then pseudo-admissibility means the inequalities

   a <= n
   b <= m
   c <= p
   ...
   a_s <= n + a_1 + ... + a_{s-1} + s-1.

Now we'll inductively define Lambda(n) <-K_n-- Lambda(n+1) -H_n-->> Lambda(2n+1):

   H_n(a_1,...,a_s) = delta(a = n) K_m(b,...,a_s) + delta(a < n) theta(a) . H_m(b,...,a_s)

   K_n(a_1,...,a_s) = delta(a < n) { a . K_m(b,...,a_s) + R(a,m) . H_m(b,...,a_s) }

where R(a,m) = [a, r]-hat, with a+r = n, so m = n+a+1 = theta(a) + r. These definitions follow easily from fiddling with the Mahowald-Singer formula for the suspended Hopf invariant

   EH_n(a_1,...,a_s) = delta(a = n) (b,...,a_s) + theta(a) . EH_m(b,...,a_s)

************ EHP proof of Lambda basis ************

We will show that H and K are well-defined by induction, first on s (the Adams filtration) and then on n (the stem degree - 1). Since K is obviously a left inverse to E, we'll be done. Well-definedness means that H_n & K_n take an (n+1)-pseudo-admissible relation to an n- or (2n+1)-pseudo-admissible relation. Showing H_n is well-defined is so much easier that I'll leave it as an exercise. By s-induction, we've handled the (n+1)-pseudo-admissible relations (a_1 ...
a_i) @ [p, r] @ (c_1 ... c_j) with i >= 1. So we must only consider

   [a, r] @ (c_1 ... c_j)

So we're assuming, per our conventions above,

   a <= n
   b = theta(a)+r <= m = n+a+1
   c_1 <= p = n+a+b+2
   ...
   c_j <= p + c_1 + ... + c_{j-1} + j-1.

Call gamma = \(c_1,..., c_j) in Lambda(p+1). We'll consider 3 cases.

If a = n, then our only possible (n+1)-relation is [a, 0] . gamma, so

   H_n(a . theta(a) . gamma) = 0,   K_n(a . theta(a) . gamma) = 0

So we can assume a < n. Now we have the 2nd case, b = m:

   a < n
   b = theta(a)+r = m = n+a+1
   p = theta(m)

So a+r = n. Now [a, r] = a @ m + n @ theta(a) + R(a, m), so

   [a, r] gamma = a . m . gamma + n . theta(a) . gamma + R(a, m) . gamma

We'll first compute K_n of the first two terms:

   K_n(n . theta(a) . gamma) = 0

   K_n(a . m . gamma) = a K_m(m . gamma) + R(a,m) H_m(m . gamma)
                      = 0 + R(a,m) K_p(gamma)

Now we turn to the inner terms in R(a, m) . gamma. Note that R(a,m) is the sum of various terms x @ y, where

   x = a+i,  y = theta(a)+j,  i+j = r,  (r choose i) = 1,  ij > 0

So x < n, y < m, so R(a, m) is n-pseudo-admissible, and we'll perform a stunt of the sort that Hikida uses often in his nice Lambda paper: R(a, m) in Lambda^{2,a+m+2}(n), and a+m+2+n = theta(m) = p, so R(a, m) gamma in Lambda(n). Just keep that in mind. What we'll use is that x < n, y < n+x+1. Now write R(a, m) gamma as the sum of terms x y gamma, and

   K_n(x y gamma) = x K_{n+x+1}(y gamma) + R(x, n+x+1) H_{n+x+1}(y gamma)
                  = x y K_p(gamma) + x R(y,p) H_p(gamma) + R(x, n+x+1) theta(y) H_p(gamma)

So the first terms add up to R(a, m) K_p(gamma), which we showed above was K_n(a . m . gamma). So we have that K_n([a, r] gamma) is the sum of the

   { x R(y,p) + R(x, n+x+1) theta(y) } H_p(gamma)

running over the x @ y terms of R(a, m). So it suffices to show that the "coefficient" of H_p(gamma) vanishes. In fact, as we'll show below, it's zero in the tensor algebra!
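Before the algebraic proof, here is a machine check of that vanishing claim, working mod 2 (as the text does when it writes (r choose i) = 1). This is a sketch; the helper names theta, hat, R, and coefficient_of_Hp are mine, mirroring theta, [p, f]-hat, and R(a, m):

```python
from math import comb
from collections import Counter

def theta(x):                      # theta(x) = 2x + 1
    return 2 * x + 1

def hat(p, f):
    """[p, f]-hat: the Adem relation [p, f] minus its two end terms
    p @ theta(p)+f and p+f @ theta(p), with mod-2 coefficients."""
    return Counter({(p + i, theta(p) + f - i): 1
                    for i in range(1, f) if comb(f, i) % 2})

def R(a, m):
    """R(a, m) = [a, r]-hat, where m = theta(a) + r."""
    return hat(a, m - theta(a))

def coefficient_of_Hp(a, r):
    """Case 2 (a + r = n): the 'coefficient' of H_p(gamma) in K_n([a, r] gamma),
    i.e. the sum of x R(y, p) + R(x, n+x+1) theta(y)
    over the x @ y terms of R(a, m)."""
    n = a + r
    m = theta(a) + r               # m = n + a + 1
    p = theta(m)
    total = Counter()
    for (x, y) in R(a, m):         # x = a+i, y = theta(a)+j, (r choose i) = 1, ij > 0
        for w in R(y, p):          # x @ R(y, p)
            total[(x,) + w] += 1
        for w in R(x, n + x + 1):  # R(x, n+x+1) @ theta(y)
            total[w + (theta(y),)] += 1
    return Counter({w: 1 for w, c in total.items() if c % 2})

# the coefficient vanishes in the mod-2 tensor algebra, not just in Lambda(n)
assert all(not coefficient_of_Hp(a, r) for a in range(5) for r in range(16))
```

The first genuinely non-empty case is r = 7 (for smaller r the hats are killed by even binomial coefficients), where twelve 3-tensor words cancel in pairs.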
That is,

   sum_{x, y} { x R(y,p) + R(x, n+x+1) theta(y) } = 0

To see this, first substitute back in x & y, giving the sum

   sum_{i+j=r, ij > 0} (r choose i)
     { (a+i) R(theta(a)+j, p) + R(a+i, theta(a)+r+i) theta(theta(a)+j) }

since n+x+1 = a+r+a+i+1 = theta(a)+r+i. Substitute [x, f]-hat = R(x, theta(x)+f), and use the 3 equations

   theta(theta(a)+j) + 2i = theta^2(a) + 2j + 2i = theta^2(a) + 2r = theta(theta(a)+r) = theta(m) = p,
   theta(a)+r+i = theta(a)+2i+j = theta(a+i)+j,
   theta(theta(a)+j) = theta^2(a)+2j.

We obtain

   sum_{i+j=r, ij > 0} (r choose i)
     { a+i @ [theta(a)+j, 2i]-hat + [a+i, j]-hat @ theta^2(a)+2j }

Now someone could use binomial identities to show this vanishes, provided they were lucky enough to be using the symmetric Adem relations. That's similar to the way I got my pension operator relations in the first place. But I'm going to instead use the pension operator relation directly, with

   p <- a   n <- r   m <- 0

So we have our pension operator equation:

   sum_{i+j=r} (r choose i) { [a+i, j] @ theta^2(a)+2j + a+j @ [theta(a)+i, 2j] } = 0

Note this sum belongs to our subspace P(n+1) of the tensor algebra. Apply the obvious projection P(n+1) --->> P(n), sending the strictly (n+1)-pseudo-admissible sequences to zero. It may help to note that [x, f] is (x+f+1)-pseudo-admissible, while [x, f]-hat is (x+f)-pseudo-admissible. The 4 terms with ij = 0 obviously vanish under projection to P(n), because either the 1st coordinate is a+r, or the 3rd coordinate is theta^2(a)+2r = theta(m) = p. For the terms with ij > 0, the sum of outer terms is

   sum_{i+j=r, ij > 0} (r choose i)
     { a+r @ theta(a)+2i @ theta^2(a)+2j + a+j @ theta(a)+i @ theta^2(a)+2r }

due to cancellation of two a+i @ theta(a)+2i+j @ theta^2(a)+2j terms.
So this projects to zero as well, and hence the P(n) projection of our pension operator equation is

   sum_{i+j=r, ij > 0} (r choose i)
     { [a+i, j]-hat @ theta^2(a)+2j + a+j @ [theta(a)+i, 2j]-hat } = 0

since (as we knew above) all of these terms are n-pseudo-admissible. Thus, the "coefficient" of H_p(gamma) vanishes in the tensor algebra, and we've shown that K_n([a, r] gamma) = 0. \subqed

Now on to the 3rd case, of [a, r] gamma with a < n and b = theta(a)+r < m = n+a+1, i.e. a+r < n, where gamma is (p+1)-pseudo-admissible, p = n+a+b+2 as before. So we define z > 0 so that a+r+z = n, and look at the pension operator relation with the substitutions

   p <- a   n <- r   m <- z

We have the identity in the tensor algebra

   sum_{i+j=r, s+t=z} (r choose i) (z choose s)
     { [a+i, j+s] @ theta^2(a)+2j+t + a+j+t @ [theta(a)+i, 2j+s] } = 0

This sum is in P(n+1), and we'll essentially project it onto P(n). Let's do the K_n calculation first. Now [a, r] is the sum of x @ y, for all the x, y satisfying

   x = a+i,  y = theta(a)+j,  i+j = r,  (r choose i) = 1

So K_n([a, r] gamma) is the sum of the

   K_n(x y gamma) = x K_{n+x+1}(y gamma) + R(x, n+x+1) H_{n+x+1}(y gamma)
                  = x ( y K_p(gamma) + R(y, p) H_p(gamma) ) + R(x, n+x+1) theta(y) H_p(gamma)

Summing over x, y, the first terms add up to

   sum_{x,y} x y K_p(gamma) = [a, r] K_p(gamma)

but that's a nice n-pseudo-admissible relation, and we're done with those 1st terms. For the remaining terms, we have

   sum_{x,y} { R(x, n+x+1) theta(y) + x R(y, p) } H_p(gamma)

Let's quickly massage this into pension-form:

   sum_{i+j=r} (r choose i)
     { [a+i, j+z]-hat @ theta^2(a)+2j + a+j @ [theta(a)+i, 2j+z]-hat }

since (we have to switch i & j in the 2nd summand)

   n+x+1 = a+r+z+a+i+1 = theta(a)+r+i+z = theta(a+i)+j+z,
   theta(theta(a)+j) = theta^2(a)+2j,
   theta(theta(a)+i)+2j+z = theta^2(a)+2r+z = p.

Now this sum is not zero in the tensor algebra (as happened for z = 0). But it's part of the pension operator LHS: the "hat" parts of the s=z, t=0 terms.
So it's equal to the sum of the other terms, which is

   sum_{i+j=r, s+t=z, t > 0} (r choose i) (z choose s)
     { [a+i, j+s] @ theta^2(a)+2j+t + a+j+t @ [theta(a)+i, 2j+s] }

   + sum_{i+j=r} (r choose i)
     { a+i @ theta(a)+2i+j+z @ theta^2(a)+2j + n @ theta(a)+2i @ theta^2(a)+2j
     + a+j @ theta(a)+i @ p + a+j @ theta(a)+i+2j+z @ theta^2(a)+2i }

[Here we have to switch i & j back.] The 2nd sum simplifies to

   sum_{i+j=r} (r choose i) { n @ theta(a)+2i @ theta^2(a)+2j + a+j @ theta(a)+i @ p }
   = n @ [theta(a), 2r] + [a, r] @ p

since its 1st and 4th terms cancel under switching i & j. But in the first sum, all of the terms are n-pseudo-admissible except those with j = r, t = z, and those are exactly the 2 terms above! So now we just have a sum of n-pseudo-admissible relations, which means K_n is well-defined.

Let's sum that up: in calculating K_n([a,r] gamma), the "coefficient" we obtained in the tensor algebra was

   sum_{i+j=r} (r choose i)
     { [a+i, j+z]-hat @ theta^2(a)+2j + a+j @ [theta(a)+i, 2j+z]-hat }

and using the pension operator relation, that's equal to

   sum_{i+j=r, s+t=z, t > 0, (j,t) \ne (r,z)} (r choose i) (z choose s)
     { [a+i, j+s] @ \{theta^2(a)+2j+t} + \{a+j+t} @ [theta(a)+i, 2j+s] }

which is a sum of n-pseudo-admissible relations, and multiplying it on the right by H_p(gamma) gives a sum of n-pseudo-admissible relations. \qed
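As with case 2, the final case-3 identity lends itself to a mod-2 machine check. An illustrative sketch; adem, hat, case3_lhs, and case3_rhs are my names for [p, f], [p, f]-hat, and the two displayed sums:

```python
from math import comb
from collections import Counter

def theta(x):                      # theta(x) = 2x + 1
    return 2 * x + 1

def adem(p, f):
    """[p, f] as a mod-2 sum of tensor words (tuples)."""
    return Counter({(p + i, theta(p) + f - i): 1
                    for i in range(f + 1) if comb(f, i) % 2})

def hat(p, f):
    """[p, f]-hat: [p, f] minus its two end terms."""
    return Counter({(p + i, theta(p) + f - i): 1
                    for i in range(1, f) if comb(f, i) % 2})

def mod2(counter):
    return Counter({w: 1 for w, c in counter.items() if c % 2})

def case3_lhs(a, r, z):
    """sum_{i+j=r} C(r,i) { [a+i, j+z]-hat @ theta^2(a)+2j
                          + a+j @ [theta(a)+i, 2j+z]-hat }"""
    out = Counter()
    for i in range(r + 1):
        j = r - i
        c = comb(r, i)
        for w in hat(a + i, j + z):
            out[w + (theta(theta(a)) + 2 * j,)] += c
        for w in hat(theta(a) + i, 2 * j + z):
            out[(a + j,) + w] += c
    return mod2(out)

def case3_rhs(a, r, z):
    """sum_{i+j=r, s+t=z, t>0, (j,t) != (r,z)} C(r,i) C(z,s)
       { [a+i, j+s] @ theta^2(a)+2j+t + a+j+t @ [theta(a)+i, 2j+s] }"""
    out = Counter()
    for i in range(r + 1):
        j = r - i
        for s in range(z + 1):
            t = z - s
            if t == 0 or (j, t) == (r, z):
                continue
            c = comb(r, i) * comb(z, s)
            for w in adem(a + i, j + s):
                out[w + (theta(theta(a)) + 2 * j + t,)] += c
            for w in adem(theta(a) + i, 2 * j + s):
                out[(a + j + t,) + w] += c
    return mod2(out)

# the "coefficient" equals the stated sum of n-pseudo-admissible relations
assert all(case3_lhs(a, r, z) == case3_rhs(a, r, z)
           for a in range(3) for r in range(8) for z in range(1, 6))
```

For example, with a = 0, r = 1, z = 1 both sides come out to (1,2,5) + (1,3,4), and with a = 0, r = 3, z = 1 both sides are the same ten 3-tensor words after one internal cancellation on the left.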