Like Euler products for number-theoretic functions, the resultant is one of those amazingly simple gadgets that you’d never imagine existed. Given two polynomials *f* and *g* in a single variable, there is a number called the *resultant*, denoted *res(f, g)*, such that:

1. *res(f, g) = 0* iff *f* and *g* share a common root.
2. *res(f, g)* is a polynomial in the coefficients of *f* and *g*.

Think about this for a second. Given the coefficients of an arbitrary polynomial, we have in general no algebraic expression for its roots, but nonetheless we have a way of determining if two polynomials share a root by simply adding and multiplying together some of their coefficients!
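This can even be computed in practice: one standard route (not derived in this post, but equivalent to the definition below) is the determinant of the *Sylvester matrix*, built directly from the two coefficient lists. A minimal Python sketch, with all function names my own:

```python
from fractions import Fraction

def det(a):
    """Determinant by Gaussian elimination over exact rationals."""
    a = [row[:] for row in a]
    size, sign, result = len(a), 1, Fraction(1)
    for col in range(size):
        pivot = next((r for r in range(col, size) if a[r][col]), None)
        if pivot is None:
            return Fraction(0)
        if pivot != col:
            a[col], a[pivot] = a[pivot], a[col]
            sign = -sign
        result *= a[col][col]
        for r in range(col + 1, size):
            factor = a[r][col] / a[col][col]
            for c in range(col, size):
                a[r][c] -= factor * a[col][c]
    return sign * result

def resultant(f, g):
    """res(f, g) as the determinant of the Sylvester matrix.
    f and g are coefficient lists, highest degree first,
    e.g. x^2 - 3x + 2 is [1, -3, 2]."""
    n, m = len(f) - 1, len(g) - 1          # degrees of f and g
    size = n + m
    # m shifted copies of f's coefficients, then n shifted copies of g's
    rows = [[0] * i + list(f) + [0] * (size - n - 1 - i) for i in range(m)]
    rows += [[0] * i + list(g) + [0] * (size - m - 1 - i) for i in range(n)]
    return det([[Fraction(x) for x in row] for row in rows])
```

For example, `resultant([1, -3, 2], [1, -5, 6])` is 0, since $(x-1)(x-2)$ and $(x-2)(x-3)$ share the root 2, while `resultant([1, -3, 2], [1, -7, 12])` is 12: only coefficient arithmetic, no root-finding.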

First, let’s see why such a thing ought to exist. Say that $f(x) = a \prod_{i=1}^{n} (x - \alpha_i)$ and $g(x) = b \prod_{j=1}^{m} (x - \beta_j)$, and define

$$res(f, g) = a^m b^n \prod_{i,j} (\alpha_i - \beta_j).$$

This clearly satisfies condition (1) above — the product will equal zero iff $\alpha_i = \beta_j$ for some *i* and *j*. However, it also satisfies condition (2). Why? Well, if we regard *res(f, g)* as a polynomial in the $\beta_j$’s, with coefficients which are polynomials in the $\alpha_i$’s, then it’s a *symmetric* polynomial in the $\beta_j$’s — it’s invariant under permuting the order of the $\beta_j$’s. Further, if we regard one of the coefficients of this polynomial as a polynomial in the $\alpha_i$’s, then this polynomial is also symmetric.
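When the roots are known, the defining product is a one-liner; a quick sanity check in Python (names are mine):

```python
def res_from_roots(a, alphas, b, betas):
    """res(f, g) = a^m * b^n * prod of (alpha_i - beta_j), where
    f = a * prod (x - alpha_i) has degree n and
    g = b * prod (x - beta_j) has degree m."""
    n, m = len(alphas), len(betas)
    result = a ** m * b ** n
    for alpha in alphas:
        for beta in betas:
            result *= alpha - beta
    return result

# f = (x - 1)(x - 2) and g = (x - 2)(x - 3) share the root 2:
print(res_from_roots(1, [1, 2], 1, [2, 3]))   # 0
# f = (x - 1)(x - 2) and h = (x - 3)(x - 4) share no root:
print(res_from_roots(1, [1, 2], 1, [3, 4]))   # 12
```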

Why does this matter? Well, the algebra of symmetric polynomials in *n* variables $x_1, \ldots, x_n$ is generated by the *elementary symmetric polynomials*

$e_0 = 1$,

$e_1 = x_1 + x_2 + \cdots + x_n$,

$e_2 = \sum_{i < j} x_i x_j$,

and so forth.

But you’ll recognize that the coefficients of a (monic, univariate) polynomial are precisely the elementary symmetric polynomials in its roots, up to sign! That is, given a monic polynomial in one variable, *any* symmetric polynomial of its roots is just a polynomial in the coefficients. (Throwing in the $a^m$ and $b^n$ at the front handles the case when the polynomials aren’t monic.)
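That correspondence is easy to check by machine: expand a monic polynomial from its roots and compare against the elementary symmetric polynomials directly. A short sketch (helper names are mine):

```python
from itertools import combinations
from math import prod

def elementary_symmetric(roots, k):
    """e_k: the sum of all products of k distinct roots (e_0 = 1)."""
    return sum(prod(c) for c in combinations(roots, k))

def monic_from_roots(roots):
    """Expand prod of (x - r) into coefficients, highest degree first."""
    coeffs = [1]
    for r in roots:
        # multiply by (x - r): shift for x, subtract r times the old poly
        coeffs = [hi - r * lo for hi, lo in zip(coeffs + [0], [0] + coeffs)]
    return coeffs

roots = [1, 2, 3]
# (x-1)(x-2)(x-3) = x^3 - 6x^2 + 11x - 6
print(monic_from_roots(roots))
# the coefficient of x^(n-k) is (-1)^k * e_k(roots)
print([(-1) ** k * elementary_symmetric(roots, k) for k in range(4)])
```

Both lines print `[1, -6, 11, -6]`.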

I vaguely remember learning about elementary symmetric polynomials in my undergrad algebra sequence, but at the time I had no real idea what they were for. They didn’t look that complicated, so I figured they probably didn’t matter too much. As it turns out, though, the whole subject of invariant theory is really interesting, and symmetric polynomials are just the first nontrivial example.

As an added bonus, note that we can also determine whether a polynomial has a double root by calculating *res(f, f′)*, where *f′* is the derivative of *f*. (You can define the derivative of a polynomial without using any calculus — just consider the power rule *et al.* as definitions instead of theorems.) Now *res(f, f′)* is zero iff *f* and *f′* share a root. Suppose *a* is a root of *f*; then $f(x) = (x - a)^n g(x)$ for some *n* and some *g*, where *a* is not a root of *g*. Taking the derivative, we have $f'(x) = n(x - a)^{n-1} g(x) + (x - a)^n g'(x)$, so $f'(a) = 0$ if *n > 1*, or $f'(a) = g(a) \neq 0$ if *n = 1*.
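Here’s a small illustration of the double-root test, using the standard fact (not proved here) that for monic *f*, *res(f, f′)* is the product of *f′* evaluated at the roots of *f*. All names are mine:

```python
from math import prod

def formal_derivative(coeffs):
    """Power-rule derivative; coeffs are highest degree first."""
    n = len(coeffs) - 1
    return [c * (n - i) for i, c in enumerate(coeffs[:-1])]

def poly_eval(coeffs, x):
    """Horner evaluation."""
    acc = 0
    for c in coeffs:
        acc = acc * x + c
    return acc

def res_f_fprime(coeffs, roots):
    """res(f, f') for monic f, given its roots with multiplicity:
    the product of f' evaluated at each root of f."""
    fprime = formal_derivative(coeffs)
    return prod(poly_eval(fprime, r) for r in roots)

# f = (x - 2)^2 (x - 5) = x^3 - 9x^2 + 24x - 20 has a double root:
print(res_f_fprime([1, -9, 24, -20], [2, 2, 5]))   # 0
# f = (x - 1)(x - 2)(x - 5) = x^3 - 8x^2 + 17x - 10 has simple roots:
print(res_f_fprime([1, -8, 17, -10], [1, 2, 5]))   # nonzero
```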

Up to a sign (and a factor of the leading coefficient), the resultant *res(f, f′)* is known as the *discriminant*, as you’ll remember from high-school algebra, when they seemingly needlessly assigned this fancy name to the $b^2 - 4ac$ term appearing in the quadratic formula. The form here generalizes to univariate polynomials of arbitrary degree, but in fact it can be generalized further, to arbitrary multivariate polynomials as well.
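For the quadratic case you can check this by hand. Writing *res(f, f′)* as the standard 3×3 Sylvester determinant (one shifted row of *f*, two of *f′* — a formulation equivalent to the product definition above, though not derived in this post):

```latex
% f = ax^2 + bx + c, so f' = 2ax + b
\operatorname{res}(f, f') =
\det \begin{pmatrix}
a  & b  & c \\
2a & b  & 0 \\
0  & 2a & b
\end{pmatrix}
= ab^2 - b \cdot 2ab + c \cdot 4a^2
= -a\,(b^2 - 4ac).
```

So for the quadratic, the resultant of *f* with its derivative really is the familiar discriminant, up to the factor of $-a$.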