
Why is the second order of smallness negligible?

y = x^2

Start to differentiate: let x grow to x + dx, so that y grows to y + dy, and you get:

y + dy = (x + dx)^2 = x^2 + 2x * dx + (dx)^2

You can safely discard (dx)^2 because it's only a small fraction of a small fraction.  Subtract the original y = x^2, divide through by dx, and you get the nice, pure-looking:

dy/dx = 2x

Destroying information for the sake of making the problem easier seems like sacrilege to me.  It's negligible on paper, but if you're dealing with a gigantic enough system or a process that runs long enough, won't that little bit of a little bit come into play EVENTUALLY?
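
A quick numeric sanity check (plain Python; the function is just the y = x^2 above, nothing else assumed).  The "little bit of a little bit", once divided through by dx, is just dx itself, and it shrinks at every step instead of piling up:

def slope(x, h):
    # secant slope of y = x^2 between x and x + h; algebraically this is 2x + h
    return ((x + h)**2 - x**2) / h

x = 3.0
for h in [0.1, 0.001, 0.00001]:
    print(h, slope(x, h), slope(x, h) - 2*x)   # the leftover error is just h

However long the process runs, the error at any point is dx, not an accumulation of dx's.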
Permalink Michael B 
March 5th, 2007 12:25pm
Well dx isn't that big in the first place. But I dunno. I think we should ask Newton.
Permalink Send private email strawberry snowflake 
March 5th, 2007 12:29pm
dx is as close to zero as you can get without getting there (otherwise dy/dx don't mean a thing). That's what I be saying.
Permalink Send private email strawberry snowflake 
March 5th, 2007 12:31pm
> won't that little bit of a little bit come into play EVENTUALLY?

That's why Laplace was wrong about Kerry getting elected.
Permalink son of parnas 
March 5th, 2007 12:37pm
From my EE days (digital systems, fast Fourier transform stuff, analog classes): you could hack and slash pieces out of an algorithm like a madman once the systems got big, like you were saying.  Imagine a circuit with ten resistors, capacitors, and diodes where you have to hand-calculate the current or voltage.

And it is normally easy to see after the calculations: if a piece of an equation adds up to something like 0.000000001, dropping it is a no-brainer.  What I could never figure out was what should be canceled ahead of time to simplify the equation, except for the easy cases like (x/10000)^2.

Anyway, to answer your question: sure, why not.  It makes more sense in practical applications like engineering and physics.  In pure math they probably won't let you ignore even 0.0000001.
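
To put a number on the no-brainer case (made-up values, not from any real circuit):

x = 50.0                          # some hand-calculated intermediate value
exact = x + (x / 10000)**2        # keeping the second-order term
approx = x                        # dropping it, as you would by hand
print((exact - approx) / exact)   # relative error: about 5e-07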
Permalink Bot Berlin 
March 5th, 2007 12:43pm
It has nothing to do with practicality. dx is defined to be as close to zero as you can get without getting there. Therefore (dx)^2 is defined to be zero (because it's smaller than dx, but can't be smaller than zero).
Permalink Send private email strawberry snowflake 
March 5th, 2007 12:47pm
Interesting.

If you don't neglect the (dx)^2 term, you get

dy/dx == 2x + dx.

Now, the 'derivative' of an equation is the measurement of the slope of the curve at that point.

So treating the '2x' term as the answer will give you one value, and the '2x + dx' will give you the value one itsy-bitsy-tiny space further up the curve.

But I think "dx" is DEFINED to be the tiniest little piece that makes no difference.  So (2x + dx) will really be equal to (2x) by definition of 'dx' at that point.

It's been a long time since College Calculus, so I could be way off here.  That seems a slightly more satisfying conclusion than "dx is tiny, and so dx^2 is really zero".
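
For what it's worth, the "itsy-bitsy space further up the curve" reading is nearly exact for this curve: a quick check (plain Python; this is a special property of y = x^2, not of curves in general) shows 2x + dx is the exact tangent slope half a dx further up, at x + dx/2:

x, h = 3.0, 0.01
secant = ((x + h)**2 - x**2) / h    # works out to 2x + h = 6.01
half_step = 2 * (x + h / 2)         # exact slope of y = x^2 at x + h/2
print(secant, half_step)            # both print 6.01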
Permalink SaveTheHubble 
March 5th, 2007 12:53pm
So it's basically an ugly negligible wart in an otherwise really useful scheme.
Permalink Michael B 
March 5th, 2007 12:55pm
I guess so.  This may be why it took somebody like Newton to get it down.
Permalink SaveTheHubble 
March 5th, 2007 12:56pm
Well, I hit "Ok" too quickly.

So, is 2x == 2x + dx?

Practically?
Theoretically?
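
One throwaway Python check for the "practically" half (floating point being the closest thing a computer has to "practically"): 2x + dx literally becomes equal to 2x once dx drops below the float's precision.  Theoretically, of course, they always differ by exactly dx.

x = 3.0
for h in [1e-8, 1e-12, 1e-16, 1e-20]:
    print(h, 2*x + h == 2*x)   # False, False, True, True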
Permalink Michael B 
March 5th, 2007 12:58pm
Do you need this solved in order to write a calculator?
Permalink Send private email muppet 
March 5th, 2007 1:01pm
Now that the blog is done, I mean.
Permalink Send private email muppet 
March 5th, 2007 1:01pm
I want to buy your calculator for forty thousand dollars.  Can we meet in a dark alley and will you tie me up before I give you the money?
Permalink Send private email muppet 
March 5th, 2007 1:01pm
The premise in the OP is incorrect. You don't discard dx^2; that is not the mathematically rigorous way to understand differentiation. The derivative of x^2 is 2x exactly, with not even the tiniest bit of error.

The reasoning is explained in books on calculus, but the mathematics can get a bit headachey, which is why people sometimes use simplified explanations like "discarding" the dx^2 term.
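
If you want to see the "not even the tiniest bit of error" claim without the headachey parts, a symbolic check does the limit honestly (assuming the sympy library is available; it is only an illustration, not part of the argument):

import sympy as sp

x, h = sp.symbols('x h')
secant = ((x + h)**2 - x**2) / h
print(sp.simplify(secant))      # 2*x + h
print(sp.limit(secant, h, 0))   # 2*x exactly, with no leftover term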
Permalink Send private email bon vivant 
March 5th, 2007 1:07pm
I always knew on an intellectual level that it could be discarded, but it still felt icky.
Permalink Colm 
March 5th, 2007 1:09pm
It's ok, neither Newton nor Leibniz could figure it out either. (Nor Euler nor Gauss.)
Permalink Send private email strawberry snowflake 
March 5th, 2007 1:27pm
Multiplying an infinitely small increment by itself ought to be undefined, like divide by zero.  Shouldn't it?

You can't take division by zero out of an equation unilaterally just because it seems to work out.
Permalink Send private email muppet 
March 5th, 2007 1:31pm
>> Destroying information for the sake of making the problem easier seems like sacrilege to me.

You do not destroy it at all. Have you taken a course in epsilontics? That should be mandatory in any calculus course. Weierstrass explained it very clearly.

Have you read this? http://www.amazon.com/Convergent-Larry-Niven/dp/0345339223

It has a short story about the devil and a pentacle. The devil is raised by drawing a pentacle, and the hero has to kill him. He can erase the pentacle, but he has to draw it again once, and the devil will appear inside it. Where does the hero draw it so that he can kill the devil?
Permalink Send private email के. जे. 
March 5th, 2007 1:40pm
On the devil's stomach, of course.
Permalink Send private email Aaron F Stanton 
March 5th, 2007 1:41pm
"Shouldn't it?"

No. Why should it?

Neither do mathematicians take division by zero out of an equation just because it seems to work out. Thinking so, and the "discarding" of (dx)^2 terms, are both products of poor learning.

You can get useful results from things without necessarily understanding the rigorous proofs behind them. Teachers are supposed to explain this when they give simplified explanations. They are meant to say "this isn't really kosher, but pretend with me that it's OK for now because the results are correct and really worth knowing".
Permalink Send private email bon vivant 
March 5th, 2007 1:47pm
Haven't any of you taken a fucking calculus course?  That's not what the d's mean.  (And note that it was Leibniz who invented them, not Newton.)

Search for "calculus derivative" and you'll get a bunch of results, including a bunch of course notes.  I picked one to get a reminder of the definition:

dy/dx = lim (h->0) [ y(x+h) - y(x) ] / h

You don't have the problem of dividing by zero because you're not just sticking in h=0 when you do the limit.

dy/dx = lim (h->0) [ (x+h)^2 - x^2 ] / h

Just do the algebra...
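
Doing the algebra, for the record:

dy/dx = lim (h->0) [ x^2 + 2xh + h^2 - x^2 ] / h
      = lim (h->0) [ 2xh + h^2 ] / h
      = lim (h->0) ( 2x + h )
      = 2x

Dividing by h is legal because h is never actually zero inside the limit, and the h in (2x + h) then disappears in the last step by taking the limit, not by being discarded.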
Permalink Send private email Ward 
March 5th, 2007 1:47pm
Precisely. Now, do we have an infinitesimal devil? No, we have no devil at all. It is the *way* the devil diminishes that is infinitesimal.

dx should not be considered as a quantity actively changing. dx is a *constant* that can take any value from a set of possible values. A function f(x) approaches a limit L as 'x' approaches a value 'a' if, given any positive number 'e', the difference |f(x) - L| is less than 'e' whenever |x - a| is less than some number 'd' *depending on 'e'* (and x is not 'a' itself).

Do not think in terms of a variable *moving* towards zero. Instead think of all possible values greater than zero and deal with each one.
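
Applied to this thread's example, with f(h) = [ (x+h)^2 - x^2 ] / h = 2x + h and L = 2x:

|f(h) - L| = |h|

so given any 'e', the constant d = e does the job: whenever 0 < |h| < d, the difference is below 'e'. No motion, no time; just one inequality per choice of 'e'.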
Permalink Send private email के. जे. 
March 5th, 2007 1:49pm
> Haven't any of you taken a fucking calculus course? 

If you took it that must be why I can't find it.
Permalink son of parnas 
March 5th, 2007 1:50pm
The variable does have to *move* towards zero. There are constraints on the differentiability of functions that require this. As x gets closer to x0, f(x) must get closer to f(x0) in an ever approaching manner.
Permalink Send private email bon vivant 
March 5th, 2007 1:54pm
>> The variable does have to *move* towards zero. There are constraints on the differentiability of functions that require this. As x gets closer to x0, f(x) must get closer to f(x0) in an ever approaching manner.

No. We take dx at each instant in time and add up the sum. Integral Calculus is essentially summation.
Permalink Send private email के. जे. 
March 5th, 2007 1:59pm
What the fuck, K.J.?

This whole thread is about differentiation. Who said anything about integration?
Permalink Send private email bon vivant 
March 5th, 2007 2:01pm
Each instant in time? Where did time come from? You are talking out the back of your head.
Permalink Send private email bon vivant 
March 5th, 2007 2:02pm
"Have you taken a course in epsilontics? That should be mandatory in any calculus course."

Nope, it's not mandatory. At least at my university, I think you learn this in the second year if you take Analysis.
Permalink Send private email Rick, try writing better English 
March 5th, 2007 2:05pm
>> This whole thread is about differentiation. Who said anything about integration?

Subtraction is adding with -1.

>> Each instant in time? Where did time come from?

"Moving towards zero" is quite very temporal in nature.

>> You are talking out the back of your head.

As opposed to you talking out of the back of... well, your backside, perhaps?
Permalink Send private email के. जे. 
March 5th, 2007 2:06pm
"As opposed you talking out of the back of..well, your backside, perhaps?"

Posterity will be the judge of that.
Permalink Send private email bon vivant 
March 5th, 2007 2:08pm
As strawberry snowflake said, no one got it till Cauchy.

http://en.wikipedia.org/wiki/Augustin_Louis_Cauchy
Permalink Send private email Rick, try writing better English 
March 5th, 2007 2:08pm
As I remember, it's to do with the concept of limits. You are not looking for the exact value of the expression, but at what happens in the limit as whatever-it-is approaches infinity. That's why you can discard the fraction of the fraction.
Permalink $-- 
March 5th, 2007 4:36pm
in the limit as dx approaches 0 is what I meant.
Permalink $-- 
March 5th, 2007 4:40pm
The proof:

∀ε>0 ∃δ>0 ∋ 0 < |x-a| < δ ⇒ |f(x)-L| < ε

:P
Permalink Send private email Rick, try writing better English 
March 5th, 2007 5:06pm
how is that a proof?
Permalink Send private email strawberry snowflake 
March 5th, 2007 8:15pm
That's the structure of the proof.

It does not matter what happens when x = a, so long as, however small you want the distance between 2a and (x^2 - a^2)/(x - a) to be, the distance between x and a only has to be no larger than that.

(Sorry, I forgot all the details, and there are many details I never went through. I was in CS, not Maths. :)
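
For this example the two distances are in fact exactly equal away from x = a:

(x^2 - a^2)/(x - a) - 2a = (x + a) - 2a = x - a

so given any e > 0, the choice d = e works: if 0 < |x - a| < d, the difference is below e, and the limit is 2a. That is also the derivative of x^2 at a, which is where the thread came in.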