Talk:Vectors

Click Expand

In the first exercise it says "Click Expand for the answers.", yet there is no expand button/link. (The preceding unsigned comment was added by 82.21.211.125 (talkcontribs) 02:49, 15 January 2008)

New Post(Jan 2008)

The section “The basic idea” appears unnecessarily complicated: neither scalar nor vector is defined in a simple manner, yet neither is a particularly complex concept at this level. Either the assumption is that the words require no explanation, which is unlikely in view of the next paragraph, or they do require one, in which case the concept is, as I say below, not difficult.

There is the sentence, “Detailed explanation of vectors may be found at Wikibooks linear algebra.” Would this be necessary were the article clearer? Indeed, the detailed explanation may not be needed at all until later in the course, when a little more groundwork has been covered.

“A unit vector is a special vector which has magnitude 1 and points along one of the coordinate frame's axes*. This is better illustrated by a diagram.” – no diagram – but the explanation here is clear enough. (*coordinate frame's = graph's)

Simply put, a scalar is a number on the axis of a graph. In 2x + 3y – 8z:

- the 2, the 3 and the –8 are scalars – they tell you the quantity of the units on the axes;
- the x, the y and the z are the axes of the graph and tell you what the thing is (time / size / quantity / quality / etc.); and
- a vector is the line (usually, but not always, starting at 0) that is the resultant of the other lines and ends at the point indicated by the numbers (scalars) on the axes (see diagrams).

The size and direction of the vector are important and are usually what is being sought.

Thus the vector 2x + 3y + 4z is the line to the point at which right-angled lines from 2 on the x axis, 3 on the y axis and 4 on the z axis intersect.

The final line of this section reads:

“The magnitude of a vector is computed by (root of sum equation). For example, in two-dimensional space, this equation reduces to (root (x squared plus y squared)) .”

This is a masterpiece of obscurity.

1. “The magnitude of a vector is computed by...” should, I’m sure, read, “You can find the magnitude of a vector with this formula…”
2. In the formula (root of sum), why is it not made clear what x and i are?
3. The term “in two-dimensional space” can be replaced with “if there are only 2 axes”.
4. There is no explanation, or even a hint, as to why (root of sum) “reduces to” (root (x squared plus y squared)).
5. The final equation (root (x squared plus y squared)) is no more than Pythagoras’s theorem, although heavily (and to my mind pointlessly) disguised.
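On point 5, the reduction really is just Pythagoras’s theorem, and it can be spot-checked with a few lines of Python (added here purely as an illustration; nothing like this appears in the article itself):

```python
import math

def magnitude(components):
    """Magnitude of a vector: the square root of the sum of the
    squares of its components (the 'root of sum' formula)."""
    return math.sqrt(sum(c * c for c in components))

# With only two axes this is exactly Pythagoras's theorem:
print(magnitude([3.0, 4.0]))        # → 5.0
# The same formula handles three axes with no change:
print(magnitude([2.0, 3.0, 4.0]))
```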

Let me say that the idea of Wikiversity is excellent and I am, despite the above, grateful to and impressed by the person who gave his/her time to this section. He/she clearly knows the subject like the back of their hand; however, many would-be readers (and I am living proof) have not had his/her years to stare at the back of his/her hand.

Perhaps, taken with the observation by the previous poster, this is “a work in progress.” (The preceding unsigned comment was added by 84.215.201.250 (talkcontribs) 13:36, 3 December 2007)

To both of the previous commentators: Thank you very much for your feedback.
Indeed Wikiversity is a work in progress; we are a wiki, which means: just be bold and improve the text. You can find more info about Wikiversity here: Template:Welcome. If you have more questions, you can also visit the Wikiversity:Chat. ----Erkan Yilmaz (Wikiversity:Chat, wiki blog) 07:35, 15 January 2008 (UTC)
“A unit vector is a special vector which has magnitude 1 and points along one of the coordinate frame's axes*. This is better illustrated by a diagram.” – no diagram – but the explanation here is clear enough. (*coordinate frame's = graph's)
It is not just unclear, it is completely wrong. Nothing says that all unit vectors are collinear with an axis; that is nonsense. One may easily have unit vectors oriented at any angle with respect to the axes. --Javalenok (discusscontribs) 13:10, 11 June 2013 (UTC)

Error in text (not a typo)

This is wrong:

If you just look at the figure, you can see that the component of a vector along the basis vector is NOT equal to its component, $(b_{1},b_{2})\,$ . You need to establish a dual space, such as the reciprocal lattice basis in solid state physics.

You could also represent the same vector $\mathbf {a} \,$ in terms of another set of basis vectors ($\mathbf {g} _{1},\mathbf {g} _{2}\,$ ) as shown in Figure 1(b). In that case, the components of the vector are $(b_{1},b_{2})\,$ and we can write

$\mathbf {a} =b_{1}\mathbf {g} _{1}+b_{2}\mathbf {g} _{2}~.$

Note that the basis vectors $\mathbf {g} _{1}\,$ and $\mathbf {g} _{2}\,$ do not necessarily have to be unit vectors. All we need is for them to be linearly independent, that is, it should not be possible to represent one solely in terms of the other.

... Figure 1: A vector and its basis.
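The change-of-basis computation in the quoted passage can be sketched numerically. The helper below is hypothetical (not from the article): in two dimensions it solves b1·g1 + b2·g2 = a for the components (b1, b2) by Cramer's rule, and works for any linearly independent basis, orthonormal or not:

```python
def components_in_basis(a, g1, g2):
    """Solve b1*g1 + b2*g2 = a for (b1, b2) by Cramer's rule (2-D).
    The basis need not be orthonormal, only linearly independent."""
    det = g1[0] * g2[1] - g1[1] * g2[0]
    if abs(det) < 1e-12:
        raise ValueError("basis vectors are linearly dependent")
    b1 = (a[0] * g2[1] - a[1] * g2[0]) / det
    b2 = (g1[0] * a[1] - g1[1] * a[0]) / det
    return b1, b2

# A non-orthogonal basis still works:
print(components_in_basis([3.0, 5.0], [1.0, 0.0], [1.0, 1.0]))  # → (-2.0, 5.0)
```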
After posting this a while back and forgetting to sign, I have decided to replace this with a correct discussion based on Wikipedia.--guyvan52 (discusscontribs) 20:47, 25 May 2014 (UTC)

moved a large section to another page

I moved a large part of this page to Vector calculus

This is what was moved:

BEGIN TEXT THAT HAS BEEN MOVED TO Vector calculus

So far we have dealt with constant vectors. It also helps if the vectors are allowed to vary in space. Then we can define derivatives and integrals and deal with vector fields. Some basic ideas of vector calculus are discussed below.

Derivative of a vector valued function

Let $\mathbf {a} (x)\,$ be a vector function that can be represented as

$\mathbf {a} (x)=a_{1}(x)\mathbf {e} _{1}+a_{2}(x)\mathbf {e} _{2}+a_{3}(x)\mathbf {e} _{3}\,$ where $x\,$ is a scalar.

Then the derivative of $\mathbf {a} (x)\,$ with respect to $x\,$ is

${\cfrac {d\mathbf {a} (x)}{dx}}=\lim _{\Delta x\rightarrow 0}{\cfrac {\mathbf {a} (x+\Delta x)-\mathbf {a} (x)}{\Delta x}}={\cfrac {da_{1}(x)}{dx}}\mathbf {e} _{1}+{\cfrac {da_{2}(x)}{dx}}\mathbf {e} _{2}+{\cfrac {da_{3}(x)}{dx}}\mathbf {e} _{3}~.$

Note: In the above equation, the unit vectors $\mathbf {e} _{i}$ (i=1,2,3) are assumed constant.
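The component-by-component differentiation in the definition above can be sketched numerically (a minimal illustration, not part of the moved text; the central-difference step size h is an arbitrary choice):

```python
import math

def vector_derivative(a, x, h=1e-6):
    """Central-difference approximation of da/dx, taken component by
    component (the basis vectors are held constant, as noted above)."""
    ahead, behind = a(x + h), a(x - h)
    return [(f - b) / (2.0 * h) for f, b in zip(ahead, behind)]

# a(x) = (x^2, sin x, 3x); the exact derivative is (2x, cos x, 3).
a = lambda x: [x ** 2, math.sin(x), 3.0 * x]
print(vector_derivative(a, 1.0))
```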
If $\mathbf {a} (x)\,$ , $\mathbf {b} (x)\,$ , and $\mathbf {c} (x)\,$ are vector functions, then from the product rule we get

{\begin{aligned}{\cfrac {d({\mathbf {a} }\cdot {\mathbf {b} })}{dx}}&={\mathbf {a} }\cdot {\cfrac {d\mathbf {b} }{dx}}+{\cfrac {d\mathbf {a} }{dx}}\cdot {\mathbf {b} }\\{\cfrac {d({\mathbf {a} }\times {\mathbf {b} })}{dx}}&={\mathbf {a} }\times {\cfrac {d\mathbf {b} }{dx}}+{\cfrac {d\mathbf {a} }{dx}}\times {\mathbf {b} }\\{\cfrac {d[{\mathbf {a} }\cdot {({\mathbf {b} }\times {\mathbf {c} })}]}{dx}}&={\cfrac {d\mathbf {a} }{dx}}\cdot {({\mathbf {b} }\times {\mathbf {c} })}+{\mathbf {a} }\cdot {\left({\cfrac {d\mathbf {b} }{dx}}\times {\mathbf {c} }\right)}+{\mathbf {a} }\cdot {\left({\mathbf {b} }\times {\cfrac {d\mathbf {c} }{dx}}\right)}\end{aligned}}

Scalar and vector fields

Let $\mathbf {x} \,$ be the position vector of any point in space. Suppose that there is a scalar function ($g\,$ ) that assigns a value to each point in space. Then

$g=g(\mathbf {x} )\,$ represents a scalar field. An example of a scalar field is the temperature. See Figure 4(a).

If there is a vector function ($\mathbf {a} \,$ ) that assigns a vector to each point in space, then

$\mathbf {a} =\mathbf {a} (\mathbf {x} )\,$ represents a vector field. An example is the displacement field. See Figure 4(b).

Gradient of a scalar field

Let $\varphi (\mathbf {x} )\,$ be a scalar function. Assume that the partial derivatives of the function are continuous in some region of space. If the point $\mathbf {x} \,$ has coordinates ($x_{1},x_{2},x_{3}\,$ ) with respect to the basis ($\mathbf {e} _{1},\mathbf {e} _{2},\mathbf {e} _{3}\,$ ), the gradient of $\varphi \,$ is defined as

${\boldsymbol {\nabla }}{\varphi }={\frac {\partial \varphi }{\partial x_{1}}}~\mathbf {e} _{1}+{\frac {\partial \varphi }{\partial x_{2}}}~\mathbf {e} _{2}+{\frac {\partial \varphi }{\partial x_{3}}}~\mathbf {e} _{3}~.$

In index notation,

${\boldsymbol {\nabla }}{\varphi }\equiv \varphi _{,i}~\mathbf {e} _{i}~.$

The gradient is obviously a vector and has a direction. We can think of the gradient at a point as the vector perpendicular to the level contour at that point.
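The definition above is easy to check numerically. The sketch below (an illustration, not part of the moved text) approximates each partial derivative by a central difference:

```python
def gradient(phi, x, h=1e-6):
    """Numerical gradient of a scalar field phi: the partial derivative
    with respect to each coordinate, by central differences."""
    grad = []
    for i in range(len(x)):
        xp = list(x); xp[i] += h
        xm = list(x); xm[i] -= h
        grad.append((phi(xp) - phi(xm)) / (2.0 * h))
    return grad

# phi = x1^2 + 3*x2 + x1*x3, so grad phi = (2*x1 + x3, 3, x1).
phi = lambda p: p[0] ** 2 + 3.0 * p[1] + p[0] * p[2]
print(gradient(phi, [1.0, 2.0, 3.0]))  # ≈ (5, 3, 1)
```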

It is often useful to think of the symbol ${\boldsymbol {\nabla }}{}$ as an operator of the form

${\boldsymbol {\nabla }}{}={\frac {\partial }{\partial x_{1}}}~\mathbf {e} _{1}+{\frac {\partial }{\partial x_{2}}}~\mathbf {e} _{2}+{\frac {\partial }{\partial x_{3}}}~\mathbf {e} _{3}~.$

Divergence of a vector field

If we form a scalar product of a vector field $\mathbf {u} (\mathbf {x} )\,$ with the ${\boldsymbol {\nabla }}{}$ operator, we get a scalar quantity called the divergence of the vector field. Thus,

${\boldsymbol {\nabla }}\cdot \mathbf {u} ={\frac {\partial u_{1}}{\partial x_{1}}}+{\frac {\partial u_{2}}{\partial x_{2}}}+{\frac {\partial u_{3}}{\partial x_{3}}}~.$

In index notation,

${\boldsymbol {\nabla }}\cdot \mathbf {u} \equiv u_{i,i}~.$

If ${\boldsymbol {\nabla }}\cdot \mathbf {u} =0$ , then $\mathbf {u} \,$ is called a divergence-free field.

The physical significance of the divergence of a vector field is the rate at which some density exits a given region of space. In the absence of the creation or destruction of matter, the density within a region of space can change only by having it flow into or out of the region.
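As a minimal numerical sketch of the definition (an illustration, not part of the moved text), the divergence is just the sum of one central-difference partial per component:

```python
def divergence(u, x, h=1e-6):
    """div u = du1/dx1 + du2/dx2 + du3/dx3, by central differences."""
    total = 0.0
    for i in range(len(x)):
        xp = list(x); xp[i] += h
        xm = list(x); xm[i] -= h
        total += (u(xp)[i] - u(xm)[i]) / (2.0 * h)
    return total

# u = (x1*x2, x2^2, x3): div u = x2 + 2*x2 + 1 = 3*x2 + 1.
u = lambda p: [p[0] * p[1], p[1] ** 2, p[2]]
print(divergence(u, [1.0, 2.0, 3.0]))  # ≈ 7
```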

Curl of a vector field

The curl of a vector field $\mathbf {u} (\mathbf {x} )\,$ is a vector defined as

${\boldsymbol {\nabla }}\times {\mathbf {u} }={\begin{vmatrix}\mathbf {e} _{1}&\mathbf {e} _{2}&\mathbf {e} _{3}\\{\frac {\partial }{\partial x_{1}}}&{\frac {\partial }{\partial x_{2}}}&{\frac {\partial }{\partial x_{3}}}\\u_{1}&u_{2}&u_{3}\end{vmatrix}}~.$

The physical significance of the curl of a vector field is the amount of rotation or angular momentum of the contents of a region of space.
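Expanding the determinant formula component by component gives a direct numerical check (a sketch, not part of the moved text; the example field is a rigid rotation, whose curl is twice the angular velocity):

```python
def partial(f, x, i, h=1e-6):
    """Central-difference partial derivative of f with respect to x_i."""
    xp = list(x); xp[i] += h
    xm = list(x); xm[i] -= h
    return (f(xp) - f(xm)) / (2.0 * h)

def curl(u, x):
    """The determinant formula for curl u, expanded component by component."""
    c = lambda i: (lambda p: u(p)[i])  # i-th component of u as a function
    return [partial(c(2), x, 1) - partial(c(1), x, 2),
            partial(c(0), x, 2) - partial(c(2), x, 0),
            partial(c(1), x, 0) - partial(c(0), x, 1)]

# Rigid rotation about the x3 axis: u = (-x2, x1, 0), so curl u = (0, 0, 2).
u = lambda p: [-p[1], p[0], 0.0]
print(curl(u, [0.5, -0.3, 1.0]))  # ≈ (0, 0, 2)
```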

Laplacian of a scalar or vector field

The Laplacian of a scalar field $\varphi (\mathbf {x} )\,$ is a scalar defined as

$\nabla ^{2}{\varphi }:={\boldsymbol {\nabla }}\cdot ({\boldsymbol {\nabla }}{\varphi })={\frac {\partial ^{2}\varphi }{\partial x_{1}^{2}}}+{\frac {\partial ^{2}\varphi }{\partial x_{2}^{2}}}+{\frac {\partial ^{2}\varphi }{\partial x_{3}^{2}}}~.$

The Laplacian of a vector field $\mathbf {u} (\mathbf {x} )\,$ is a vector defined as

$\nabla ^{2}{\mathbf {u} }:=(\nabla ^{2}{u_{1}})\mathbf {e} _{1}+(\nabla ^{2}{u_{2}})\mathbf {e} _{2}+(\nabla ^{2}{u_{3}})\mathbf {e} _{3}~.$

Identities in vector calculus

Some frequently used identities from vector calculus are listed below.

1. ${\boldsymbol {\nabla }}\cdot (\mathbf {a} +\mathbf {b} )={\boldsymbol {\nabla }}\cdot {\mathbf {a} }+{\boldsymbol {\nabla }}\cdot {\mathbf {b} }$
2. ${\boldsymbol {\nabla }}\times {(\mathbf {a} +\mathbf {b} )}={\boldsymbol {\nabla }}\times {\mathbf {a} }+{\boldsymbol {\nabla }}\times {\mathbf {b} }$
3. ${\boldsymbol {\nabla }}\cdot (\varphi \mathbf {a} )={({\boldsymbol {\nabla }}{\varphi })}\cdot {\mathbf {a} }+\varphi ({\boldsymbol {\nabla }}\cdot {\mathbf {a} })$
4. ${\boldsymbol {\nabla }}\times {(\varphi \mathbf {a} )}={({\boldsymbol {\nabla }}{\varphi })}\times {\mathbf {a} }+\varphi ({\boldsymbol {\nabla }}\times {\mathbf {a} })$
5. ${\boldsymbol {\nabla }}\cdot ({\mathbf {a} }\times {\mathbf {b} })={\mathbf {b} }\cdot {({\boldsymbol {\nabla }}\times {\mathbf {a} })}-{\mathbf {a} }\cdot {({\boldsymbol {\nabla }}\times {\mathbf {b} })}$

Green-Gauss Divergence Theorem

Let $\mathbf {u} (\mathbf {x} )\,$ be a continuous and differentiable vector field on a body $\Omega \,$ with boundary $\Gamma \,$ . The divergence theorem states that

${\int _{\Omega }{\boldsymbol {\nabla }}\cdot {\mathbf {u} }dV=\int _{\Gamma }{\mathbf {n} }\cdot {\mathbf {u} }dA}$ ,

where $\mathbf {n} \,$ is the outward unit normal to the surface (see Figure 5).

In index notation,

$\int _{\Omega }u_{i,i}~dV=\int _{\Gamma }n_{i}u_{i}~dA~.$

Stokes' theorem

The Kelvin–Stokes theorem relates the surface integral of the curl of a vector field F over a surface Σ in Euclidean three-space to the line integral of the vector field over its boundary ∂Σ:

$\iint _{\Sigma }\nabla \times \mathbf {F} \cdot \mathrm {d} \mathbf {\Sigma } =\oint _{\partial \Sigma }\mathbf {F} \cdot \mathrm {d} \mathbf {r} ~.$

According to Wikipedia, this form of the theorem was first discovered by Lord Kelvin, who communicated it to George Stokes in a letter dated July 2, 1850. Stokes set the theorem as a question on the 1854 Smith's Prize exam, which led to the result bearing his name.
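The theorem can be checked numerically in a simple case (a sketch, not part of the moved text). Take F = (-x2, x1, 0) on the unit disk in the x1-x2 plane: curl F = (0, 0, 2) is constant, so the flux of the curl through the disk is 2·(area) = 2π, and Stokes' theorem says the circulation around the boundary circle must equal it:

```python
import math

def circulation(n=200000):
    """Line integral of F = (-x2, x1, 0) around the unit circle,
    by discretizing the parametrization r(t) = (cos t, sin t, 0)."""
    total, dt = 0.0, 2.0 * math.pi / n
    for k in range(n):
        t = k * dt
        x1, x2 = math.cos(t), math.sin(t)
        dx1, dx2 = -math.sin(t) * dt, math.cos(t) * dt
        total += (-x2) * dx1 + x1 * dx2
    return total

flux_of_curl = 2.0 * math.pi  # exact, since curl F is constant on the disk
print(circulation(), flux_of_curl)
```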

END OF TEXT THAT WAS MOVED

Free and fixed vectors?

I'm not sure we need to distinguish between free vectors and fixed vectors. I appreciate the motive for making the distinction, but vectors are generally defined to be free.---Guy vandegrift (discusscontribs) 10:27, 6 April 2016 (UTC)

Rework of article

This article has only ONE reference (I'm not counting the reference to WIKIPEDIA) and only describes Euclidean vectors (magnitude and direction) rather than abstract vectors. These issues make an abstract definition impossible / unclear for those studying things such as tensors, topology, et cetera. This is a very basic definition of vectors that most pure math applications don't have too much use for. This article has WAY too many unsourced statements and needs to be re-done. It's been over a year since the last edit. I was thinking something similar to my sandbox over at ratwiki: https://rationalwiki.org/wiki/User:Zackarycw/Vector (wikiversity blocks me from adding to my own sandbox so I can't link there...) ZackaryCW (discusscontribs) 22:03, 11 July 2019 (UTC)

The current definitions of vectors do not describe all vectors. They are incomplete: with these definitions it is impossible to explain abstract vectors, and even basic things like bivectors are almost impossible to describe. ZackaryCW (discusscontribs) 11:20, 12 July 2019 (UTC)