Leibnitz's Rule
Differentiation Under the Integral Sign - Kaplan §4.9
$F(\alpha)=\displaystyle\int_0^1 x^\alpha\,dx = \frac{1}{\alpha+1}$. Can we find $F'(\alpha)$ without re-integrating?
1. Setup: An Integral That Depends on a Parameter
Most integrals we've seen so far are "done" once evaluated - plug in limits, get a number. But some integrals carry an extra variable that isn't the integration variable. Think of $e^{-\alpha x}$: here $x$ is what we integrate over, but $\alpha$ is a parameter - a dial we can turn.
When $\alpha$ is small, $e^{-\alpha x}$ decays slowly - the curve stays high and the area under it is large. When $\alpha$ is large, the exponential drops steeply and the area shrinks. So the integral itself becomes a function of $\alpha$:
$$F(\alpha) = \int_0^1 e^{-\alpha x}\,dx$$
We can evaluate this one explicitly: $F(\alpha) = \dfrac{1-e^{-\alpha}}{\alpha}$. But that's precisely why it makes a good test case - we can check our work. The real payoff of Leibnitz's rule comes when the integral has no closed form at all (like $\int e^{-\alpha x^2}\,dx$). We're starting with $e^{-\alpha x}$ so we can verify the technique on solid ground before applying it in the dark.
Drag the slider. The shaded region is $F(\alpha)$ - watch how it shrinks as $\alpha$ grows. The dashed curve traces $F$ as a function of $\alpha$.
Here's the key question: how fast does $F$ change as we nudge $\alpha$? That is, what is $F'(\alpha)$? We could differentiate the closed form, sure. But Leibnitz's rule gives us a way to find $F'(\alpha)$ directly from the integral - without ever computing $F$ first. That's the technique we'll build in the next card.
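Before building the rule, it helps to see that $F(\alpha)$ really is an ordinary function of $\alpha$. A minimal sketch in plain Python (the `simpson` helper is our own, not from any library) evaluates the integral by quadrature and compares it with the closed form $\frac{1-e^{-\alpha}}{\alpha}$:

```python
import math

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule on [a, b] with n even subintervals."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
    return s * h / 3

def F(alpha):
    """F(alpha) = integral of e^{-alpha x} over [0, 1], done numerically."""
    return simpson(lambda x: math.exp(-alpha * x), 0.0, 1.0)

for alpha in (0.5, 1.0, 4.0):
    closed = (1.0 - math.exp(-alpha)) / alpha
    print(f"alpha={alpha}: quadrature={F(alpha):.8f}, closed form={closed:.8f}")
```

The printed values decrease as $\alpha$ grows, matching the shrinking shaded region in the interactive view.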
2. Fixed Limits: Differentiate Under the Integral
When the limits are constants $a$ and $b$, Leibnitz's rule simplifies to differentiating the integrand alone: $$F'(\alpha)=\int_a^b \frac{\partial f}{\partial \alpha}(x,\alpha)\,dx$$
For our example: $F'(\alpha) = \displaystyle\int_0^1 (-x)\,e^{-\alpha x}\,dx$.
Toggle between the exact Leibnitz formula and a finite-difference numerical derivative - they agree everywhere.
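The same comparison runs offline. A sketch in plain Python (our own `simpson` helper; central differences stand in for the toggle's numerical derivative), assuming nothing beyond the standard library:

```python
import math

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule on [a, b] with n even subintervals."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
    return s * h / 3

def F(alpha):
    """F(alpha) = integral of e^{-alpha x} over [0, 1]."""
    return simpson(lambda x: math.exp(-alpha * x), 0.0, 1.0)

def leibniz_Fprime(alpha):
    """Differentiate under the integral: F'(alpha) = integral of (-x) e^{-alpha x}."""
    return simpson(lambda x: -x * math.exp(-alpha * x), 0.0, 1.0)

def finite_diff_Fprime(alpha, h=1e-6):
    """Central-difference derivative of F, for comparison."""
    return (F(alpha + h) - F(alpha - h)) / (2.0 * h)

for alpha in (0.5, 2.0, 5.0):
    print(alpha, leibniz_Fprime(alpha), finite_diff_Fprime(alpha))
```

The two columns agree to many digits at every $\alpha$, which is exactly what the toggle shows graphically.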
3. Variable Limits: The Full Rule
Now let $F(\alpha) = \displaystyle\int_0^{\alpha} \sin(\alpha x)\,dx$. Both the integrand and the upper limit depend on $\alpha$.
The derivative $F'(\alpha)$ picks up two nonzero terms - one because the integrand changes with $\alpha$, one because the upper limit moves with $\alpha$ (the lower limit is the constant $0$, so its term vanishes): $$F'(\alpha)=\int_0^\alpha x\cos(\alpha x)\,dx + \sin(\alpha^2)\cdot 1$$
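Both contributions can be verified numerically. A sketch in plain Python (the `simpson` helper and function names are ours), comparing the term-by-term Leibnitz derivative of $F(\alpha)=\int_0^\alpha \sin(\alpha x)\,dx$ against a finite difference of the integral itself:

```python
import math

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule on [a, b] with n even subintervals."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
    return s * h / 3

def F(alpha):
    """F(alpha) = integral of sin(alpha x) over [0, alpha]."""
    return simpson(lambda x: math.sin(alpha * x), 0.0, alpha)

def leibniz_Fprime(alpha):
    # integrand term: d/dalpha of sin(alpha x) is x cos(alpha x)
    integrand_term = simpson(lambda x: x * math.cos(alpha * x), 0.0, alpha)
    # upper-limit term: f(b(alpha), alpha) * b'(alpha) = sin(alpha^2) * 1
    upper_term = math.sin(alpha * alpha)
    # lower-limit term is zero: the lower limit is the constant 0
    return integrand_term + upper_term

def finite_diff_Fprime(alpha, h=1e-6):
    return (F(alpha + h) - F(alpha - h)) / (2.0 * h)

print(leibniz_Fprime(1.3), finite_diff_Fprime(1.3))
```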
4. The Full Leibnitz Rule
If $F(\alpha) = \displaystyle\int_{a(\alpha)}^{b(\alpha)} f(x,\alpha)\,dx$, then
$$F'(\alpha) = \underbrace{\color{#ef4444}{\int_{a(\alpha)}^{b(\alpha)} \frac{\partial f}{\partial \alpha}\,dx}}_{\text{integrand changes}} +\; \underbrace{\color{#22c55e}{f\!\big(b(\alpha),\alpha\big)\,b'(\alpha)}}_{\text{upper limit moves}} -\; \underbrace{\color{#3b82f6}{f\!\big(a(\alpha),\alpha\big)\,a'(\alpha)}}_{\text{lower limit moves}}$$
When does each term matter?
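The three-term formula translates directly into code. A sketch of a general-purpose checker (the names `leibniz` and `simpson` are ours; $\partial f/\partial\alpha$, $a'$, and $b'$ are approximated by central differences, so it works for any smooth $f$, $a$, $b$ you pass in):

```python
import math

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule on [a, b] with n even subintervals."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
    return s * h / 3

def leibniz(f, a, b, alpha, h=1e-6):
    """d/dalpha of the integral of f(x, alpha) from a(alpha) to b(alpha)."""
    # term 1: the integrand changes with alpha
    dfda = lambda x: (f(x, alpha + h) - f(x, alpha - h)) / (2.0 * h)
    term1 = simpson(dfda, a(alpha), b(alpha))
    # term 2: the upper limit moves
    bprime = (b(alpha + h) - b(alpha - h)) / (2.0 * h)
    term2 = f(b(alpha), alpha) * bprime
    # term 3: the lower limit moves
    aprime = (a(alpha + h) - a(alpha - h)) / (2.0 * h)
    term3 = f(a(alpha), alpha) * aprime
    return term1 + term2 - term3

# sanity check on F(alpha) = integral of e^{-alpha x} over [0, 1], whose exact
# derivative is (alpha e^{-alpha} - (1 - e^{-alpha})) / alpha^2
f = lambda x, alpha: math.exp(-alpha * x)
exact = lambda alpha: (alpha * math.exp(-alpha) - (1 - math.exp(-alpha))) / alpha**2
print(leibniz(f, lambda a_: 0.0, lambda a_: 1.0, 2.0), exact(2.0))
```

Here the limits are constant, so terms 2 and 3 contribute nothing and the checker reduces to differentiation under the integral sign.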
5. When Antiderivatives Don't Exist
Try this one: $\dfrac{d}{dx}\displaystyle\int_0^x \frac{\sin t}{t}\,dt$.
Predict first. We're differentiating an accumulation function where the upper limit is $x$. The FTC says the answer should just be the integrand evaluated at $x$...
But wait - $\frac{\sin t}{t}$ has no elementary antiderivative. This integral defines the "sine integral" function $\mathrm{Si}(x)$, and there's no closed-form expression for it in terms of standard functions.
Does that stop us? Not at all. This is $F(x) = \int_0^x f(t)\,dt$ with $f(t) = \frac{\sin t}{t}$. The integrand doesn't contain the parameter $x$, and only the upper limit depends on $x$. Leibnitz's rule gives: $$F'(x) = f(x) = \frac{\sin x}{x}$$
Key insight: Leibnitz doesn't need an antiderivative. It differentiates around it. The integral may be impossible to evaluate in closed form, but the derivative of the integral is perfectly tractable.
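That claim is easy to test numerically even though $\mathrm{Si}(x)$ has no closed form. A sketch in plain Python (our own `simpson` helper; `Si` is built by quadrature), comparing a finite-difference derivative of $\mathrm{Si}$ with $\sin x / x$:

```python
import math

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule on [a, b] with n even subintervals."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
    return s * h / 3

def Si(x):
    """Si(x) = integral of sin(t)/t from 0 to x; at t=0 we use the limit value 1."""
    f = lambda t: 1.0 if t == 0.0 else math.sin(t) / t
    return simpson(f, 0.0, x)

x = 2.0
fd = (Si(x + 1e-6) - Si(x - 1e-6)) / 2e-6
print(fd, math.sin(x) / x)   # both approximate sin(2)/2
```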
6. Two Terms Fire at Once
Here's where the full rule earns its keep. Find $\dfrac{d}{da}\displaystyle\int_0^{a^2}\sqrt{a+t}\,dt$.
The parameter $a$ appears in the integrand ($\sqrt{a+t}$) and in the upper limit ($a^2$), so two terms of Leibnitz's rule contribute; the lower limit is the constant $0$, so its term vanishes: $$\frac{d}{da}\int_0^{a^2}\sqrt{a+t}\,dt = \int_0^{a^2}\frac{\partial}{\partial a}\sqrt{a+t}\,dt + \sqrt{a+a^2}\cdot 2a = \int_0^{a^2}\frac{1}{2\sqrt{a+t}}\,dt + 2a\sqrt{a+a^2}$$
Evaluate that last integral directly:
$$\int_0^{a^2}\frac{1}{2\sqrt{a+t}}\,dt = \Big[\sqrt{a+t}\Big]_0^{a^2} = \sqrt{a+a^2} - \sqrt{a}$$
Combining the contributions: $$F'(a) = \sqrt{a+a^2} - \sqrt{a} + 2a\sqrt{a+a^2} = (2a+1)\sqrt{a+a^2} - \sqrt{a}$$
Common trap: When the parameter appears in both the integrand and a limit, several terms fire at once. The most common mistake is forgetting the integral term - the one that comes from $\partial f/\partial a$ inside the integral.
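The answer checks out numerically. A sketch in plain Python (our own `simpson` helper), comparing the Leibnitz result with a finite difference of the quadrature value:

```python
import math

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule on [a, b] with n even subintervals."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
    return s * h / 3

def G(a):
    """G(a) = integral of sqrt(a + t) from 0 to a^2, by quadrature."""
    return simpson(lambda t: math.sqrt(a + t), 0.0, a * a)

a = 1.0
# Leibnitz: upper-limit term + integrand term (lower-limit term is zero)
leibniz = 2 * a * math.sqrt(a + a * a) + (math.sqrt(a + a * a) - math.sqrt(a))
fd = (G(a + 1e-6) - G(a - 1e-6)) / 2e-6
print(leibniz, fd)   # both approximate 3*sqrt(2) - 1 at a = 1
```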
7. Variable Limits, Parameter-Free Integrand
Find $\dfrac{d}{dx}\displaystyle\int_x^{x^3} e^{-t^4}\,dt$.
The integrand $e^{-t^4}$ doesn't contain $x$ at all - the dummy variable $t$ is not the parameter. So $\partial f/\partial x = 0$ and the integral term vanishes. Only the boundary terms survive: $$\frac{d}{dx}\int_x^{x^3} e^{-t^4}\,dt = e^{-(x^3)^4}\cdot 3x^2 - e^{-x^4}\cdot 1 = 3x^2 e^{-x^{12}} - e^{-x^4}$$
Numerical check at $x = 1$: $3(1)^2 e^{-1} - e^{-1} = 2e^{-1} \approx 0.736$.
Common trap: Don't confuse the dummy variable $t$ with the parameter $x$. The integrand $e^{-t^4}$ doesn't contain $x$, so $\partial f/\partial x = 0$. If you mistakenly differentiated with respect to $t$, you'd get a completely wrong answer.
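The numerical check generalizes beyond $x=1$. A sketch in plain Python (our own `simpson` helper), comparing the boundary-term formula with a finite difference of the quadrature value:

```python
import math

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule on [a, b] with n even subintervals."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
    return s * h / 3

def G(x):
    """G(x) = integral of e^{-t^4} from x to x^3."""
    return simpson(lambda t: math.exp(-t ** 4), x, x ** 3)

def leibniz_Gprime(x):
    # only boundary terms: the integrand has no x, so its term is zero
    return 3 * x ** 2 * math.exp(-x ** 12) - math.exp(-x ** 4)

x = 1.0
fd = (G(x + 1e-5) - G(x - 1e-5)) / 2e-5
print(leibniz_Gprime(x), fd)   # both approximate 2/e at x = 1
```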
Playground: Test Leibnitz's Rule
Enter $f(x,\alpha)$, $a(\alpha)$, $b(\alpha)$ using JavaScript syntax. Variable names: x, a (for $\alpha$).
Practice Problems - §4.9
From Kaplan, problems after §4.9
Here $f(t,x) = t^2$ (no explicit $x$-dependence in the integrand), $a = 1$ (constant), $b(x) = x^2$, so $b'(x) = 2x$.
Since the integrand doesn't depend on $x$, the "integrand changing" term is zero. Only the upper-limit term survives: $$\frac{d}{dx}\int_1^{x^2} t^2\,dt = f(b(x),x)\cdot b'(x) = (x^2)^2\cdot 2x = 2x^5$$
$$\int_1^{x^2} t^2\,dt = \frac{t^3}{3}\bigg|_1^{x^2} = \frac{x^6 - 1}{3}$$ $$\frac{d}{dx}\frac{x^6-1}{3} = 2x^5 \quad\checkmark$$
Here $f(x,t) = \sin(x^2)$ (no explicit $t$-dependence), $a(t) = t$ so $a'(t)=1$, $b(t) = t^2$ so $b'(t) = 2t$.
The "integrand changing" term vanishes since $\partial f/\partial t = 0$, leaving the two boundary terms: $$\frac{d}{dt}\int_t^{t^2}\sin(x^2)\,dx = \sin\!\big((t^2)^2\big)\cdot 2t - \sin(t^2)\cdot 1 = 2t\sin(t^4) - \sin(t^2)$$
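As before, the boundary-term answer can be checked against a finite difference of the integral. A sketch in plain Python (our own `simpson` helper), for $G(t)=\int_t^{t^2}\sin(x^2)\,dx$:

```python
import math

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule on [a, b] with n even subintervals."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
    return s * h / 3

def G(t):
    """G(t) = integral of sin(x^2) from t to t^2."""
    return simpson(lambda x: math.sin(x * x), t, t * t)

def leibniz_Gprime(t):
    # boundary terms only: sin((t^2)^2) * 2t - sin(t^2) * 1
    return math.sin(t ** 4) * 2 * t - math.sin(t * t)

t = 1.5
fd = (G(t + 1e-6) - G(t - 1e-6)) / 2e-6
print(leibniz_Gprime(t), fd)
```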
Fixed limits $a=0$, $b=1$. Integrand $f(x,t) = \cos(xt)/x$. Only the "integrand changing" term applies.
$$\frac{\partial}{\partial t}\frac{\cos(xt)}{x} = \frac{-x\sin(xt)}{x} = -\sin(xt)$$
Note: the $1/x$ singularity in the original integrand vanishes after differentiation - a common simplification via Leibnitz's rule. The differentiated integral evaluates cleanly: $\int_0^1 -\sin(xt)\,dx = \dfrac{\cos t - 1}{t}$, which is perfectly well behaved even though the original integrand blows up at $x=0$.
For $n > -1$: $$F(n) = \int_0^1 x^n\,dx = \frac{1}{n+1}$$
The left side, by Leibnitz's rule (fixed limits, parameter $n$): $$F'(n) = \int_0^1 \frac{\partial}{\partial n}x^n\,dx = \int_0^1 x^n \log x\,dx$$ The right side: $$\frac{d}{dn}\frac{1}{n+1} = -\frac{1}{(n+1)^2}$$ Equating the two sides: $$\int_0^1 x^n\log x\,dx = -\frac{1}{(n+1)^2}$$
For example, $n=0$: $\int_0^1 \log x\,dx = -1$, which matches integration by parts.
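For other values of $n$ the identity can be checked by quadrature. A sketch in plain Python (our own `simpson` helper; the integrand is set to its limiting value $0$ at $x=0$, valid for $n>0$):

```python
import math

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule on [a, b] with n even subintervals."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
    return s * h / 3

def lhs(n, panels=2000):
    """Integral of x^n log(x) over [0, 1]; the integrand tends to 0 at x=0 for n>0."""
    f = lambda x: 0.0 if x == 0.0 else x ** n * math.log(x)
    return simpson(f, 0.0, 1.0, panels)

for n in (1, 2, 5):
    print(n, lhs(n), -1.0 / (n + 1) ** 2)   # the two columns should agree
```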