# Talk:Introduction to Calculus/Differentiation

This is good as a study sheet to supplement a lesson plan. To make it a lesson plan, one has to motivate the definitions as well as prove theorems. Pretend that you're preparing a plan for an actual class of students: why should the student want to define the derivative in that manner? P.S. It is also rather backwards to note that the derivative of ln(x) is 1/x, since ln(x) was defined as the integral of 1/t dt from 1 to x; this integral, a "hole" in the power rule, was found to behave like a logarithm if one gave the logarithm the base e. The discovery of this fact would make a great exercise sheet. For more on the history of ln(x), see "e: The Story of a Number" by Maor. Ron 01:56, 23 August 2006 (UTC)

## Pedagogy

Hey Hypermorphism. I wanted to thank you for taking interest. I am sort of in the process of trying to make what I wrote make a little more sense and be a little more pedagogical, and I was hoping others would help out, but in the meantime I figured I'd post what I had, to get this ball rolling. Feel free to add examples or change my sometimes awkward phrasing. That's what wiki's for right? Anyway, I'll see what I can do.

By the way, I disagree that it's backward or redundant to discover the derivative of ln the way I did. There are a number of equivalent ways to present ln(x), and it is a matter of taste what to call a definition and what to call a result (as long as there's only one of the former, of course). I was taking as the definition of ln(x) that it is the inverse function of ${\displaystyle e^{x}}$, or in other words that the statements ${\displaystyle \ln(x)=y}$ and ${\displaystyle x=e^{y}}$ are equivalent. Daniel M. Roberts 02:26, 23 August 2006 (UTC)

I don't quite understand why there is so much emphasis on e? --HappyCamper 04:35, 23 August 2006 (UTC)

Hey Daniel, I was just nitpicking, as mathematicians are wont to do. :-) I like exposing the history of mathematics as much as possible to the student, in order to humanize a topic that may sometimes seem sterile of human foibles. Most textbooks and courses never talk about the historical development of the subject in any amount of detail; they present only pristine results and well-polished "remarkable" relationships. Many students are unaware that calculus only recently went through two revolutions of rigor after the initial formulation, when hand-waving was the norm as far as the actual machinery was concerned. The players (e.g., Kronecker with his "God created the integers. All else is Man's invention.") were fascinating, but that level of detail is for another course. I would only just comment on little stuff like the meandering discovery of e. :D

I might do some little additions here or there, but I'll be focusing on building a series of lessons that start from defining the real number system (the playground) that then naturally leads to limits, from which derivatives and integrals are only one leap of logic away. Keep creating lessons! :-) Ron 02:46, 23 August 2006 (UTC)
Hmm. That's interesting, what you have to say about math and math history. I've done a little bit of research in the past on the history of calculus, although you probably have more of that than I do. Anyway, I'm hoping to add the kind of stuff you were talking about, after I get the basic skeleton of the page down. I know what you mean about glassy-eyed kids uh-huh-ing their way through a lecture. Again, feel free to pose some take-a-step-back kind of questions or a few exercises along the way if any come to mind. Thanks for your ideas. I mean, of course you're right. Daniel M. Roberts 04:29, 23 August 2006 (UTC)
As an example, the fact that we are studying calculus is highly nontrivial - for example, the Greeks were not able to understand limits the way we do today. We should be very thankful that so many great minds thought carefully about these things!! --HappyCamper 04:33, 23 August 2006 (UTC)

## Duplication

Hmm...better be careful that this doesn't duplicate wikibooks:Calculus/Differentiation too much... --HappyCamper 05:09, 23 August 2006 (UTC)

## Reorganization

Hey HappyCamper, thanks for the reorganization you did last night. The only thing I would change is the placement of the whole compound-interest excursion. I know it makes sense to you and to me to jump right into what a derivative is and what all its rules are, but imagine how much sense that would make if you had never been exposed to that. To me the point of the compound-interest excursion was to warm them up with

• The idea of functions (i.e. the relationship between one variable and another)
• The idea of limits and why you would ever use them
• A knowledge of binomial expansion, which is necessary to prove a number of things including the ${\displaystyle x^{n}}$ rule
• Finding a friend in ${\displaystyle e}$ so that when they get into the later section about transcendental functions etc, they already see a familiar face.

It may seem like an arbitrary way to lead up to derivatives, but I think it's important in that it submerges the abstract talk in a context that is obviously related to real life. Daniel M. Roberts 16:46, 23 August 2006 (UTC)

Hmm...the binomial expansion is not necessary to derive the Power Rule - it can be done with the chain rule and induction. Then, one could make a "leap of faith" and say that it works for arbitrary n. This, of course, can be stated in the article.
Euler's number is not quite so fundamental for limits and such - its derivation does not address notions of existence of a limit, continuity, or differentiability. I can understand why it is introduced so early, but from my perspective, it is presented too soon for it to be useful.
As for familiarity, I think a variety of examples would suffice. Generally speaking, newcomers to calculus do not have the notion of piecewise continuous functions yet.
What I'm worried about now, is whether this is duplicating the effort over at Wikibooks... --HappyCamper 17:22, 23 August 2006 (UTC)
I know what you mean about all that stuff, but I don't know if you remember learning calculus or not (sometimes I hardly do); back then, the last thing you wanted to know about was the formal epsilon-delta formulation of the limit, for example. That goes for smaller things too. It's okay to gloss over the difference between piecewise and non-piecewise continuous functions to begin with, and bring up the discussion about continuity and differentiability after it is motivated by the knowledge of what a derivative is. As for the importance of ${\displaystyle e}$, I would argue that it is at the heart of calculus and has the interesting quality that without calculus, it's just a number, and with calculus it is God's number. That type of thing I think can be cool to someone just learning calculus. Daniel M. Roberts 17:37, 23 August 2006 (UTC)
I wasn't thinking of an epsilon-delta proof - this seems more like a set of notes for high school students. I actually still have mine, and all the pieces of scrap paper where I was struggling with everything. Well, I'm not very keen on where this e stuff is located at the moment, but since I'm not being too productive with editing the article, I guess I'll leave it as is. There are other gaps I want to fill in, like the differentiation rules for general exponentiation. But sooner or later, I suspect that "units" of knowledge will be split into their own pages, and when lesson plans are drawn up, one can simply say that we'll learn units 1,2,3,16,17 in perspective X, and 1,4,5,6,7 in perspective Y. --HappyCamper 17:54, 23 August 2006 (UTC)
Sounds good to me. I mean you're right, it'll probably be split up and there's a lot to add anyway before we can see how to organize it. Anyway, thanks for working on it with me. --Daniel M. Roberts 23:03, 23 August 2006 (UTC)

${\displaystyle 1+r+{\frac {r^{2}}{2}}+{\frac {r^{3}}{3!}}+{\frac {r^{4}}{4!}}+\dots =\lim _{n\rightarrow \infty }{\bigg (}1+nr{\bigg (}{\frac {1}{n}}{\bigg )}+{\frac {nr(nr-1)}{2}}{\bigg (}{\frac {1}{n}}{\bigg )}^{2}+{\frac {nr(nr-1)(nr-2)}{3!}}{\bigg (}{\frac {1}{n}}{\bigg )}^{3}+\dots {\bigg )}}$

First of all, the fact that the k-th term of the expansion of the binomial converges to ${\displaystyle {\frac {r^{k}}{k!}}}$ is not sufficient to justify passing to the limit under the infinite sum. (Otherwise, by the same reasoning, 1 = 1/n + 1/n + … + 1/n (n terms) would converge to 0 + 0 + … + 0 = 0.) We need a dominated convergence principle for series, which is indeed available, but it would be more correct at least to mention it.

${\displaystyle =\lim _{n\rightarrow \infty }{\bigg (}1+{\frac {1}{n}}{\bigg )}^{nr}={\Bigg (}\lim _{n\rightarrow \infty }{\bigg (}1+{\frac {1}{n}}{\bigg )}^{n}{\Bigg )}^{r}={\bigg (}1+1+{\frac {1}{2}}+{\frac {1}{3!}}+{\frac {1}{4!}}+\dots {\bigg )}^{r}}$

Second, here we are using an expansion where r and n are positive integers (unless we are using an even less elementary tool, the binomial series). Why should the expansion be valid for r=√2? --PMajer (discusscontribs) 20:44, 16 December 2018 (UTC)
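That said, the identity itself can at least be checked numerically, non-integer exponents included. A minimal sketch (function names are my own; this is a float computation, not a proof, and says nothing about the dominated-convergence issue above) comparing the compound-interest limit with the series for ${\displaystyle e^{r}}$, including r = √2:

```python
import math

def binomial_limit(r, n):
    """Approximate e^r via the compound-interest expression (1 + 1/n)^(n*r)."""
    return (1.0 + 1.0 / n) ** (n * r)

def series(r, terms=30):
    """Partial sum of 1 + r + r^2/2! + r^3/3! + ... (30 terms is ample here)."""
    return sum(r ** k / math.factorial(k) for k in range(terms))

for r in (1.0, 2.0, math.sqrt(2)):
    print(f"r={r:.4f}: limit ~ {binomial_limit(r, 10**6):.6f}, "
          f"series ~ {series(r):.6f}, exp(r) = {math.exp(r):.6f}")
```

With n = 10^6 all three columns agree to about six decimal places, which is suggestive but, as PMajer says, not a justification.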

## Quiz

Need a quiz? I would love to try out the ABCD template some more.--Rayc 01:32, 22 September 2006 (UTC)

Yes, yes, yes :-) Can we do something like this?
Q: What is the derivative of x?
A) 1
B) 0
C) x
D) none of the above
--HappyCamper 03:21, 28 September 2006 (UTC)

Under the heading of Implicit Differentiation, you make reference to the quotient rule (i.e. 'One could find [the derivative] with the quotient rule...') without ever having mentioned what the quotient rule is. If you're going to mention it (which you may choose not to), you ought to show it (i.e. as an application of the product rule).

## Explicit multiplication signs

Consider including multiplication signs in the formulas.

For

${\displaystyle A=A_{0}(1+{\frac {r}{n}})^{nt}}$

write

${\displaystyle A=A_{0}\cdot \left(1+{\frac {r}{n}}\right)^{n\cdot t}.}$

Bo Jacoby 19:20, 29 September 2007 (UTC)
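Whichever notation is used, the compound-interest formula is easy to explore numerically. A small sketch (variable names are my own choice) showing the discrete formula A = A₀·(1 + r/n)^(n·t) approaching the continuous-compounding value A₀·e^(r·t) as the frequency n grows:

```python
import math

def compound(A0, r, n, t):
    """Discrete compounding: A = A0 * (1 + r/n)**(n*t)."""
    return A0 * (1.0 + r / n) ** (n * t)

A0, r, t = 100.0, 0.05, 10.0  # principal, annual rate, years
for n in (1, 12, 365, 10**6):  # yearly, monthly, daily, "continuous"
    print(f"n = {n:>7}: A = {compound(A0, r, n, t):.6f}")
print(f"continuous:  A = {A0 * math.exp(r * t):.6f}")
```

The gap between successive rows shrinks roughly like 1/n, which is one concrete way to motivate the limit before e is introduced formally.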

## Proposals

Hi sirs! I just jumped into this, as User:Un-predictable was asking me for help with derivatives. There's so much work already done here! Congrats on your work!

I have thought of a few proposals, and wanted to know your opinion:

When I was taught this, the letter h was also used for the shrinking difference in x. But looking backwards, I think it's easier to understand the formulas if instead of h we write ${\displaystyle \Delta x}$, which is already used when talking about slopes. (I know it's important to teach that bound, or dummy, variables can be called whatever; but being able to doesn't mean it's a good idea, and in this case the name ${\displaystyle \Delta x}$ carries semantic meaning).

Another thing I think would improve the lesson is the inclusion of tons and tons of graphics, depicting e.g. a function f(x), a couple of points f(x) and f(x+h), and such things. I am myself a visual learner, and understand things a lot quicker if I can see or imagine them. I think I could do some graphics in MATLAB (I don't have Mathematica installed right now), but I doubt I'd be able to add the labels and colors needed to make them pedagogical. I also don't know about the license issues regarding graphics produced by those programs (which I don't own legally). Is there any software for creating such graphics, like those seen in textbooks?

When teaching derivatives to my cousin I found that it made it easier to present them to him like this (with the aid of a sheet of paper to draw):

${\displaystyle f'(x)=\lim _{\Delta x\rightarrow 0}{\frac {f(x+\Delta x)-f(x)}{(x+\Delta x)-x}}=\lim _{\Delta x\rightarrow 0}{\frac {\Delta f}{\Delta x}}={\frac {df}{dx}}}$

That made him understand ${\displaystyle dw}$ as a shorthand for "${\displaystyle \Delta w}$ when ${\displaystyle \Delta w\rightarrow 0}$". And some properties were made easier to demonstrate, e.g. the product rule:

${\displaystyle {\frac {d\left(f\cdot g\right)}{dx}}={\frac {\left(f+df\right)\cdot \left(g+dg\right)-f\cdot g}{dx}}={\frac {f\cdot g+df\cdot g+f\cdot dg+df\cdot dg-f\cdot g}{dx}}}$

${\displaystyle ={\frac {df\cdot g+f\cdot dg+df\cdot dg}{dx}}={\frac {df}{dx}}\cdot g+f\cdot {\frac {dg}{dx}}+{\frac {df}{dx}}\cdot dg}$, with this last term going to 0 with dg

It is probably longer than the derivation using limits, but it made him see it less like magic and more like something reasonable. (It was also much aided by a picture of a rectangle of sides f and g, prolonged by df and dg, with the third term above being the little rectangle df·dg that appears in the corner.)
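The informal df·dg argument can also be checked with finite differences. A sketch (the functions f = sin and g = exp are my own arbitrary choice) comparing the difference quotient of f·g with f′·g + f·g′, and watching the leftover df·dg/dx term vanish as Δx shrinks:

```python
import math

f, fp = math.sin, math.cos   # f and its known derivative
g, gp = math.exp, math.exp   # g and its known derivative
x = 1.0

for dx in (1e-1, 1e-3, 1e-5):
    # forward difference quotient of the product f*g
    num = (f(x + dx) * g(x + dx) - f(x) * g(x)) / dx
    exact = fp(x) * g(x) + f(x) * gp(x)                     # product rule
    cross = (f(x + dx) - f(x)) * (g(x + dx) - g(x)) / dx    # the df*dg/dx term
    print(f"dx={dx:.0e}: quotient={num:.8f}, rule={exact:.8f}, df*dg/dx={cross:.2e}")
```

The cross term shrinks in proportion to Δx, which is exactly the "goes to 0 with dg" step in the derivation above, made visible.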

Has anybody at Wikiversity yet tried presenting different explanations for the same thing?

Good day --Jorge 15:11, 13 April 2008 (UTC)