Okay, we've seen that functions describe relationships between inputs and outputs, like y=f(x). Now, let's explore a foundational concept for calculus: the limit. Limits help us understand how a function behaves near a particular input value.
Imagine you're walking towards a specific point on a path. A limit is like asking: "Where does it look like I'll end up as I get incredibly close to that point, even if I don't actually step on that exact spot?"
Sometimes, we can just plug an input value into a function to see the output. If we have f(x)=x+2 and want to know what happens at x=3, we calculate f(3)=3+2=5. Simple enough.
But what if the function is undefined at the specific point we're interested in? Or what if we want to understand the trend or tendency of the function right around that point? This is where limits become essential. They allow us to analyze the function's behavior as we approach a value, regardless of what happens precisely at that value.
Let's stick with the simple function f(x)=x+2. We already know f(3)=5. But let's pretend we don't know that, and instead, let's see what value f(x) approaches as x gets closer and closer to 3.
We can try values of x slightly less than 3: f(2.9) = 4.9, f(2.99) = 4.99, and f(2.999) = 4.999.
And values slightly greater than 3: f(3.1) = 5.1, f(3.01) = 5.01, and f(3.001) = 5.001.
Notice a pattern? As x gets closer and closer to 3 (from either side), the output f(x) gets closer and closer to 5. We say that "the limit of the function f(x)=x+2 as x approaches 3 is 5."
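To make this concrete, here is a small Python sketch (the sample points are just illustrative choices) that evaluates f(x) = x + 2 at inputs approaching 3 from both sides:

```python
# A quick numeric check that f(x) = x + 2 approaches 5 as x approaches 3.
# The sample points below are arbitrary illustrative choices.

def f(x):
    return x + 2

# Approach 3 from below, then from above.
from_below = [2.9, 2.99, 2.999, 2.9999]
from_above = [3.1, 3.01, 3.001, 3.0001]

for x in from_below + from_above:
    print(f"x = {x:<7} f(x) = {f(x)}")
```

The printed outputs close in on 5 from both directions, matching the values listed above.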
In mathematical notation, we write this as:
$$\lim_{x \to 3} (x + 2) = 5$$

Let's break down this notation: "lim" says we are taking a limit, the "x → 3" beneath it tells us that x is approaching 3, the expression (x + 2) is the function whose outputs we are tracking, and the 5 on the right is the value those outputs approach.
The real utility of limits shines when a function has a gap or is undefined at a point. Consider the function:
$$g(x) = \frac{x^2 - 1}{x - 1}$$

What happens if we try to plug in x = 1? We get $\frac{1^2 - 1}{1 - 1} = \frac{0}{0}$, which is undefined. Division by zero is not allowed. So, g(1) does not exist.
However, we can simplify this function algebraically for values where x is not equal to 1. Since $x^2 - 1 = (x - 1)(x + 1)$, we have:
$$g(x) = \frac{(x - 1)(x + 1)}{x - 1} = x + 1, \quad \text{provided } x \neq 1$$

So, the graph of g(x) looks exactly like the graph of f(x) = x + 1, except there's a "hole" at the point where x = 1.
Let's ask: What is the limit of g(x) as x approaches 1? We can use the simplified form x+1 because the limit only cares about values near x=1, not at x=1.
Let's check values near x = 1: g(0.9) = 1.9, g(0.99) = 1.99, g(1.01) = 2.01, and g(1.1) = 2.1.
As x gets arbitrarily close to 1, g(x) gets arbitrarily close to 2. Even though g(1) is undefined, the limit exists. We write:
$$\lim_{x \to 1} \frac{x^2 - 1}{x - 1} = 2$$

Here's a visualization of the function g(x) = (x^2 - 1)/(x - 1), which behaves like y = x + 1 but has a hole at x = 1.
The graph of g(x) = (x^2 - 1)/(x - 1) is the line y = x + 1 with a single point removed at (1, 2), indicated by the open circle. The limit as x approaches 1 is the y-value the function gets close to, which is 2.
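If you want to probe this behavior yourself, the short Python sketch below evaluates g at points near 1 and confirms that the outputs approach 2 even though g(1) itself is undefined. The symbolic check assumes the SymPy library is installed; everything else uses only the standard library.

```python
# Probe g(x) = (x**2 - 1)/(x - 1) near x = 1 numerically, then confirm the
# limit symbolically (the symbolic step assumes SymPy is installed).
import sympy as sp

def g(x):
    return (x**2 - 1) / (x - 1)

# Note: g(1.0) would raise ZeroDivisionError; the function is undefined at x = 1.
for x in (0.9, 0.99, 0.999, 1.001, 1.01, 1.1):
    print(f"x = {x:<6} g(x) = {g(x):.4f}")

# Symbolic check: SymPy evaluates the limit despite the hole at x = 1.
x_sym = sp.symbols("x")
print(sp.limit((x_sym**2 - 1) / (x_sym - 1), x_sym, 1))  # prints 2
```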
A limit L exists for a function f(x) as x approaches some value c if we can make the function's output f(x) as close to L as we desire, simply by choosing an input x sufficiently close to c (but not actually equal to c). The function doesn't even need to be defined at c for the limit to exist.
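This "as close as we desire" idea can be checked numerically. The Python sketch below is a minimal illustration, not a proof; the tolerances and offsets are arbitrary choices. It shows that for g(x) = (x^2 - 1)/(x - 1), keeping x close enough to 1, without ever equaling 1, keeps g(x) within any chosen tolerance of 2:

```python
# A numeric check of the informal definition: for each tolerance, inputs close
# enough to c = 1 keep g(x) within that tolerance of L = 2. This is an
# illustrative sketch, not a proof; the tolerances and offsets are arbitrary.

def g(x):
    return (x**2 - 1) / (x - 1)

c, L = 1.0, 2.0

for tol in (0.1, 0.01, 0.001):
    offset = tol / 2  # for this g, any offset smaller than tol keeps us inside
    for x in (c - offset, c + offset):
        assert abs(g(x) - L) < tol
    print(f"|g(x) - 2| < {tol} whenever 0 < |x - 1| <= {offset}")
```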
This concept of "getting arbitrarily close" is the cornerstone upon which the idea of the derivative is built. Derivatives, as we'll see in the next chapter, measure instantaneous rates of change, and limits provide the mathematical machinery to define this "instantaneous" behavior. Understanding limits gives us the foundation to understand how functions change, which is fundamental to optimizing machine learning models.
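As a brief preview of that connection, the sketch below uses difference quotients, the quantities whose limit defines the derivative. Here f(x) = x^2 and the step sizes are just illustrative choices; the computed slopes at x = 1 approach 2 as the step h shrinks toward 0:

```python
# Preview: a derivative is a limit of difference quotients. The slope of
# f(x) = x**2 at x = 1 approaches 2 as the step h shrinks toward 0.
# The step sizes are arbitrary illustrative choices.

def f(x):
    return x**2

x0 = 1.0
for h in (0.1, 0.01, 0.001, 0.0001):
    slope = (f(x0 + h) - f(x0)) / h  # difference quotient
    print(f"h = {h:<7} slope = {slope:.4f}")
```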