Linear dependence of the system of vectors. Collinear vectors

Linear dependence and linear independence of vectors.
Basis vectors. Affine coordinate system

Today the lecture hall has a trolley of chocolates, and every visitor gets a sweet two-pack: analytic geometry together with linear algebra. In this article two branches of higher mathematics will be touched at once, and we will see how they get along in one wrapper. Take a break, have a Twix! ...er, sorry, that came out as nonsense. Oh well, never mind; in the end, a positive attitude helps the studying.

Linear dependence of vectors, linear independence of vectors, basis of vectors and the other terms have not only a geometric interpretation but, above all, an algebraic meaning. From the standpoint of linear algebra a "vector" is by no means always the "ordinary" vector that we can draw on a plane or in space. You do not have to look far for proof: try to draw a vector of five-dimensional space. Or the weather vector I have just looked up on Gismeteo: temperature and atmospheric pressure, respectively. The example is, of course, imperfect from the point of view of the properties of a vector space, but nevertheless nothing forbids formalizing these parameters as a vector. The breath of autumn...

No, I am not going to load you with the theory of linear vector spaces; the task is to UNDERSTAND the definitions and theorems. The new terms (linear dependence, independence, linear combination, basis, etc.) apply to all vectors from the algebraic point of view, but the examples will be geometric. Everything is thus simple, accessible and visual. Besides problems of analytic geometry, we will also look at some typical problems of algebra. To master the material it is advisable to be familiar with the lessons Vectors for dummies and How to calculate a determinant?

Linear dependence and independence of plane vectors.
Plane basis and affine coordinate system

Consider the plane of your computer desk (just a table, a nightstand, the floor, the ceiling, whatever you like). The task will consist of the following actions:

1) Select a basis of the plane. Roughly speaking, a tabletop has a length and a width, so it is intuitively clear that two vectors will be needed to build a basis. One vector is clearly not enough; three vectors are too many.

2) Based on the selected basis, set up a coordinate system (a coordinate grid) in order to assign coordinates to all the objects on the table.

Do not be surprised: at first the explanations will be on fingers. Moreover, on yours. Please place the little finger of your left hand on the edge of the tabletop so that it points at the monitor. This will be vector u. Now place the little finger of your right hand on the edge of the table in exactly the same way, so that it points at the monitor screen. This will be vector v. Smile, you look wonderful! What can be said about these vectors? They are collinear, and therefore each is linearly expressed through the other:
v = λu, or, the other way around, u = μv, where λ and μ are some nonzero numbers.

A picture of this action can be seen in the lesson Vectors for dummies, where I explained the rule for multiplying a vector by a number.

Will your fingers set a basis on the plane of the computer desk? Obviously not. Collinear vectors travel back and forth along a single direction, while a plane has both length and width.

Such vectors are called linearly dependent.

Reference: the words "linear", "linearly" indicate that mathematical equations and expressions contain no squares, cubes, other powers, logarithms, sines, etc.; only linear (1st-degree) expressions and dependencies occur.

Two plane vectors are linearly dependent if and only if they are collinear.

Cross your fingers on the table so that any angle other than 0 or 180 degrees lies between them. Two plane vectors are linearly INdependent if and only if they are not collinear. So, the basis is obtained. No need to be embarrassed that the basis came out "oblique", with non-perpendicular vectors of different lengths. Very soon we will see that not only a 90-degree angle is suitable for its construction, and not only unit vectors of equal length.

Any vector of the plane can be decomposed in the basis in a UNIQUE way:
v = x·e1 + y·e2, where x and y are real numbers. The numbers x, y are called the coordinates of the vector in this basis.

One also says that the vector v is represented as a linear combination of the basis vectors. That is, the expression v = x·e1 + y·e2 is called the decomposition of the vector in the basis, or a linear combination of the basis vectors.

For example, one can say that a vector is decomposed in the orthonormal basis of the plane, and one can equally say that it is represented as a linear combination of the basis vectors.

Let us state the definition of a basis formally: a basis of the plane is a pair of linearly independent (non-collinear) vectors e1, e2 taken in a certain order, such that ANY vector of the plane is a linear combination of the basis vectors.

An essential point of the definition is the fact that the vectors are taken in a certain order. The bases (e1, e2) and (e2, e1) are two completely different bases! As they say, you cannot swap the little finger of the left hand for the little finger of the right hand.
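As a quick numeric sketch (the basis vectors and the vector below are my own made-up numbers, not the lesson's), the coordinates of a vector in an "oblique" basis can be found by solving a 2×2 linear system:

```python
import numpy as np

# Hypothetical oblique basis (e1, e2) and vector v; the coordinates (x, y)
# in the decomposition v = x*e1 + y*e2 solve the system [e1 e2] @ (x, y) = v.
e1 = np.array([2.0, 1.0])
e2 = np.array([1.0, 3.0])
v = np.array([4.0, 7.0])

coords = np.linalg.solve(np.column_stack([e1, e2]), v)
print(coords)  # [1. 2.] : v = 1*e1 + 2*e2, and this decomposition is unique
```

Since e1 and e2 are non-collinear, the matrix is invertible and the solution (hence the pair of coordinates) is unique, which mirrors the uniqueness claim above.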

We have figured out the basis, but it is not enough for setting up a coordinate grid and assigning coordinates to every object on your computer desk. Why not enough? Vectors are free and wander over the entire plane. So how do you assign coordinates to those little dirty spots on the table left over from a wild weekend? A starting reference point is needed. And such a landmark is a point familiar to everyone: the origin of coordinates. Let us work out the coordinate system:

I will start with the "school" version. Already in the introductory lesson Vectors for dummies I highlighted some differences between a rectangular coordinate system and an orthonormal basis. Here is the standard picture:

When one speaks of a rectangular coordinate system, one most often means the origin of coordinates, the coordinate axes and the scale along the axes. Try typing "rectangular coordinate system" into a search engine, and you will see that many sources describe the coordinate axes familiar from the 5th-6th grade and how to plot points on the plane.

On the other hand, it seems that a rectangular coordinate system can be defined via an orthonormal basis. And that is almost the case. The formulation goes as follows:

The origin of coordinates together with an orthonormal basis defines a Cartesian rectangular coordinate system of the plane. That is, a rectangular coordinate system is UNIQUELY determined by a single point and two unit orthogonal vectors. This is why you see the drawing I gave above: in geometry problems both the vectors and the coordinate axes are often (but far from always) drawn.

I think everyone understands that with the help of a point (the origin) and an orthonormal basis, ANY point of the plane and ANY plane vector can be assigned coordinates. Figuratively speaking, "everything on the plane can be numbered".

Are the coordinate vectors required to be unit vectors? No, they may have arbitrary nonzero length. Consider a point and two orthogonal vectors of arbitrary nonzero length:

Such a basis is called orthogonal. The origin together with these vectors defines a coordinate grid, and any point of the plane, any vector, has its coordinates in this basis. The obvious inconvenience is that the coordinate vectors in general have different lengths, other than one. If the lengths equal one, the usual orthonormal basis is obtained.

! Note: in an orthogonal basis, as well as below in affine bases of the plane and space, the units along the axes are considered CONDITIONAL. For example, one unit along the abscissa axis may contain 4 cm, one unit along the ordinate axis 2 cm. This information is enough to convert "non-standard" coordinates into "our usual centimeters" if necessary.

And the second question, to which the answer has in fact already been given: must the angle between the basis vectors equal 90 degrees? No! As the definition says, the basis vectors need only be NON-collinear. Accordingly, the angle can be anything except 0 and 180 degrees.

A point of the plane called the origin of coordinates, together with non-collinear vectors e1, e2 taken in a certain order, defines an affine coordinate system of the plane:

Sometimes such a coordinate system is called oblique. The drawing shows points and vectors as examples:

As you understand, the affine coordinate system is even less convenient: the formulas for lengths of vectors and segments that we considered in the second part of the lesson Vectors for dummies do not work in it, nor do many tasty formulas related to the scalar product of vectors. But the rules for adding vectors and multiplying a vector by a number remain valid, as do the formula for dividing a segment in a given ratio and some other problems that we will consider shortly.

And the conclusion is that the most convenient particular case of an affine coordinate system is the Cartesian rectangular system. That is why you most often have to contemplate it, my dear. ...However, everything in this life is relative: there are plenty of situations in which an oblique (or some other, for example polar) coordinate system is appropriate. And humanoids might even find such systems to their taste =)

Let's move on to the practical part. All the problems of this lesson are valid both for a rectangular coordinate system and for the general affine case. There is nothing difficult here; all the material is accessible even to a schoolchild.

How to determine whether plane vectors are collinear?

A typical thing. For two plane vectors to be collinear, it is necessary and sufficient that their corresponding coordinates be proportional. In essence, this is a coordinate-by-coordinate detailing of the obvious relationship v = λu.

Example 1.

a) Check whether the vectors are collinear.
b) Do the vectors form a basis?

Solution:
a) Find out whether there exists for the vectors a proportionality coefficient λ such that the equalities hold:

I will definitely tell you about the "flashy" version of applying this rule, which works quite well in practice. The idea is to form the proportion right away and see whether it is true:

Form the proportion from the ratios of the corresponding coordinates of the vectors:

Simplify:
Thus, the corresponding coordinates are proportional; therefore the vectors are collinear.

The ratio could also be set up the other way around; this is an equivalent variant:

For self-checking one can use the fact that collinear vectors are linearly expressed through each other. In this case the equalities v = λu and u = (1/λ)v hold. Their validity is easily verified through elementary operations with vectors:

b) Two plane vectors form a basis if they are not collinear (linearly independent). We examine the vectors for collinearity. Set up the system:

From the first equation it follows that λ equals one value, from the second equation that it equals a different one, which means the system is inconsistent (it has no solutions). Thus, the corresponding coordinates of the vectors are not proportional.

Conclusion: the vectors are linearly independent and form a basis.

A simplified version of the solution looks like this:

Form the proportion from the corresponding coordinates of the vectors:
It does not hold, which means the vectors are linearly independent and form a basis.

Reviewers usually accept this version, but a problem arises in cases where some coordinates equal zero. How do you act through the proportion then? (Indeed, one cannot divide by zero.) It is for this reason that I called the simplified solution "flashy".

Answer: a) the vectors are collinear, b) the vectors form a basis.

A small creative example for independent solution:

Example 2.

For what value of the parameter will the vectors be collinear?

In the sample solution the parameter is found through the proportion.
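The same method can be run symbolically. The lesson's own vectors live in its images, so the pair a = (3, n), b = (6, 4) below is hypothetical; the method (cross-multiplying the proportion and solving for the parameter) is the one just described.

```python
import sympy as sp

# Hypothetical vectors a = (3, n), b = (6, 4); collinearity via the
# cross-multiplied proportion a1*b2 = a2*b1, i.e. 3*4 = n*6.
n = sp.Symbol('n')
a = (3, n)
b = (6, 4)

values = sp.solve(sp.Eq(a[0] * b[1], a[1] * b[0]), n)
print(values)  # [2]
```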

There is also an elegant algebraic way of checking vectors for collinearity. Let us systematize our knowledge and add it as the fifth point:

For two plane vectors the following statements are equivalent:
1) the vectors are linearly independent;
2) the vectors form a basis;
3) the vectors are not collinear;
4) the vectors cannot be linearly expressed through each other;
+ 5) the determinant composed of the coordinates of these vectors is nonzero.

Accordingly, the following opposite statements are equivalent:
1) the vectors are linearly dependent;
2) the vectors do not form a basis;
3) the vectors are collinear;
4) the vectors can be linearly expressed through each other;
+ 5) the determinant composed of the coordinates of these vectors equals zero.

I very, very much hope that by now all the terms and statements you have met are clear.

Let us take a closer look at the new, fifth point: two plane vectors are collinear if and only if the determinant composed of the coordinates of these vectors equals zero. To apply this criterion you naturally need to be able to compute determinants.

Let us solve Example 1 the second way:

a) Compute the determinant composed of the coordinates of the vectors:
It equals zero, so these vectors are collinear.

b) Two plane vectors form a basis if they are not collinear (linearly independent). Compute the determinant composed of the coordinates of the vectors:
It is nonzero, so the vectors are linearly independent and form a basis.

Answer: a) the vectors are collinear, b) the vectors form a basis.

This looks much more compact and prettier than the solution with proportions.
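A sketch of the determinant test in code (the example vectors are my own). Unlike the proportion, it never divides by a coordinate, so the zero-coordinate cases mentioned earlier cause no trouble:

```python
import numpy as np

def collinear_2d(a, b, tol=1e-12):
    # Two plane vectors are collinear iff det [[a1, a2], [b1, b2]] == 0.
    return abs(np.linalg.det(np.array([a, b], dtype=float))) < tol

on_y_axis = collinear_2d((0, 2), (0, 5))  # zero x-coordinates: no division needed
generic = collinear_2d((1, 2), (3, 5))
print(on_y_axis, generic)  # True False
```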

With the help of the material considered one can establish not only the collinearity of vectors, but also prove the parallelism of segments and straight lines. Let us consider a couple of problems with specific geometric figures.

Example 3.

The vertices of a quadrilateral are given. Prove that the quadrilateral is a parallelogram.

Proof: there is no need for a drawing in this problem, since the solution will be purely analytic. Recall the definition of a parallelogram:
A parallelogram is a quadrilateral whose opposite sides are pairwise parallel.

Thus, it is necessary to prove:
1) the parallelism of the opposite sides AB and DC;
2) the parallelism of the opposite sides BC and AD.

We prove it:

1) Find the vectors AB and DC:

2) Find the vectors BC and AD:

In each case the same vector is obtained ("at school" one would say: equal vectors). The collinearity is completely obvious, but it is still better to formalize the decision. Compute the determinant composed of the coordinates of the vectors:
It equals zero, which means the vectors are collinear and the corresponding sides are parallel.

Conclusion: the opposite sides of the quadrilateral are pairwise parallel, which means it is a parallelogram by definition. Q.E.D.
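The same check can be automated. The vertices below are hypothetical (the lesson's numbers are in its images); the logic is exactly that of Example 3: ABCD is a parallelogram iff AB ∥ DC and BC ∥ AD, and each parallelism is a zero 2×2 determinant.

```python
import numpy as np

# Hypothetical vertices of a quadrilateral ABCD
A, B, C, D = map(np.array, [(1.0, 1.0), (4.0, 2.0), (6.0, 5.0), (3.0, 4.0)])

def parallel(u, v, tol=1e-12):
    # u and v are parallel (collinear) iff u1*v2 - u2*v1 == 0
    return abs(u[0] * v[1] - u[1] * v[0]) < tol

is_parallelogram = parallel(B - A, C - D) and parallel(C - B, D - A)
print(is_parallelogram)  # True
```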

More good and different figures:

Example 4.

The vertices of a quadrilateral are given. Prove that the quadrilateral is a trapezoid.

For a more rigorous formulation of the proof it is better, of course, to look up the definition of a trapezoid, but it is enough simply to recall what it looks like.

This is a problem for independent solution. The complete solution is at the end of the lesson.

And now it is time to move carefully from the plane into space:

How to determine whether space vectors are collinear?

The rule is very similar. For two space vectors to be collinear, it is necessary and sufficient that their corresponding coordinates be proportional.

Example 5.

Find out whether the following space vectors are collinear:

a)
b)
c)

Solution:
a) Check whether a proportionality coefficient exists for the corresponding coordinates of the vectors:

The system has no solution, which means the vectors are not collinear.

The "simplified" version is a check of the proportion. In this case:
The corresponding coordinates are not proportional, which means the vectors are not collinear.

Answer: the vectors are not collinear.

b)-c) These items are for independent solution. Try to work each of them out in two ways.

There is also a method of checking space vectors for collinearity via a third-order determinant; it is covered in the article Cross product of vectors.

Similarly to the planar case, the toolkit considered can be used to study the parallelism of spatial segments and lines.
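For completeness, a proportion-free check in space (example numbers are my own): two space vectors are collinear exactly when their cross product is the zero vector, which is the determinant route from the article mentioned above.

```python
import numpy as np

def collinear_3d(a, b, tol=1e-12):
    # a and b are collinear in space iff a x b = (0, 0, 0)
    return bool(np.allclose(np.cross(a, b), 0.0, atol=tol))

parallel_pair = collinear_3d((1, 2, 3), (2, 4, 6))  # second = 2 * first
skewed_pair = collinear_3d((1, 2, 3), (2, 4, 5))
print(parallel_pair, skewed_pair)  # True False
```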

Welcome to the second section:

Linear dependence and independence of vectors in three-dimensional space.
Spatial basis and affine coordinate system

Many of the regularities we examined on the plane remain valid for space. I have tried to minimize the theoretical notes, since the lion's share of the information has already been chewed through. Nevertheless, I recommend reading the introductory part carefully, since new terms and concepts will appear.

Now, instead of the plane of the computer desk, let us examine three-dimensional space. First we create its basis. Someone is now indoors, someone outdoors, but in any case we cannot escape the three dimensions: width, length and height. Therefore three spatial vectors are required to construct a basis. One or two vectors are not enough, and a fourth is superfluous.

And again we warm up on the fingers. Please raise a hand up and spread the thumb, index and middle fingers in different directions. These will be the vectors: they look in different directions, have different lengths and form different angles with each other. Congratulations, the basis of three-dimensional space is ready! By the way, there is no need to demonstrate this to teachers; no matter how you twist your fingers, there is no escaping the definitions =)

Next, let us address an important question: do ANY three vectors form a basis of three-dimensional space? Please press three fingers tightly against the tabletop of the computer desk. What happened? The three vectors lie in one plane and, roughly speaking, we have lost one of the dimensions, the height. Such vectors are coplanar, and it is quite obvious that they do not create a basis of three-dimensional space.

It should be noted that coplanar vectors are not required to lie in one plane; they may lie in parallel planes (just do not try that with your fingers; only Salvador Dalí pulled that off =)).

Definition: vectors are called coplanar if there exists a plane to which they are parallel. It is logical to add that if no such plane exists, then the vectors are not coplanar.

Three coplanar vectors are always linearly dependent, that is, linearly expressed through each other. For simplicity, again imagine that they lie in one plane. First, the vectors may be not just coplanar but also collinear; then any of them can be expressed through any other. In the second case, if, for example, the vectors are not collinear, then the third vector is expressed through them in a unique way (and why is easy to guess from the materials of the previous section).

The converse statement is also true: three non-coplanar vectors are always linearly independent, that is, in no way expressed through one another. And obviously only such vectors can form a basis of three-dimensional space.

Definition: a basis of three-dimensional space is a triple of linearly independent (non-coplanar) vectors e1, e2, e3, taken in a certain order, such that any vector of space can be decomposed in a UNIQUE way in this basis: v = x·e1 + y·e2 + z·e3, where x, y, z are the coordinates of the vector in this basis.

As a reminder, one may also say that a vector is represented as a linear combination of the basis vectors.

The concept of a coordinate system is introduced in exactly the same way as for the planar case; one point and any three linearly independent vectors suffice:

The origin of coordinates and non-coplanar vectors e1, e2, e3, taken in a certain order, define an affine coordinate system of three-dimensional space:

Of course, the coordinate grid is "oblique" and not very handy, but the constructed coordinate system nevertheless allows us to UNIQUELY determine the coordinates of any vector and the coordinates of any point of space. As on the plane, some formulas I have already mentioned will not work in an affine coordinate system of space.

The most familiar and convenient particular case of an affine coordinate system, as everyone can guess, is the rectangular space coordinate system:

A point of space called the origin of coordinates, together with an orthonormal basis, defines a Cartesian rectangular coordinate system of space. The familiar picture:

Before moving on to practical problems, let us again systematize the information:

For three space vectors the following statements are equivalent:
1) the vectors are linearly independent;
2) the vectors form a basis;
3) the vectors are not coplanar;
4) the vectors cannot be linearly expressed through each other;
5) the determinant composed of the coordinates of these vectors is nonzero.

The opposite statements are, I think, understandable.

Linear dependence/independence of space vectors is traditionally checked with a determinant (point 5). The remaining practical problems will have a pronounced algebraic flavor. It is time to hang the geometric stick on a nail and wield the baseball bat of linear algebra:

Three space vectors are coplanar if and only if the determinant composed of the coordinates of these vectors equals zero.

I draw attention to a small technical nuance: the coordinates of the vectors can be written not only in columns but also in rows (the value of the determinant does not change from this; see the properties of determinants). But columns are much better, since that is more advantageous for solving certain practical problems.
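The coplanarity test in code, with the three vectors written as COLUMNS of a 3×3 matrix as recommended above. The vectors are hypothetical; note that c = a + b, so they are coplanar by construction.

```python
import numpy as np

# Hypothetical space vectors; c = a + b, so the triple is coplanar.
a, b, c = [1.0, 2.0, 0.0], [0.0, 1.0, 1.0], [1.0, 3.0, 1.0]
M = np.column_stack([a, b, c])   # coordinates in columns

coplanar = bool(np.isclose(np.linalg.det(M), 0.0))
print(coplanar)  # True -> not a basis of three-dimensional space
```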

For those readers who have somewhat forgotten the methods of computing determinants, or perhaps have little grasp of them at all, I recommend one of my oldest lessons: How to calculate a determinant?

Example 6.

Check whether the following vectors form a basis of three-dimensional space:

Solution: in fact, the whole solution comes down to computing a determinant.

a) Compute the determinant composed of the coordinates of the vectors (the determinant is expanded along the first row):

It is nonzero, which means the vectors are linearly independent (not coplanar) and form a basis of three-dimensional space.

Answer: these vectors form a basis.

b) This item is for independent solution. The complete solution and answer are at the end of the lesson.

Creative problems also occur:

Example 7.

For what value of the parameter will the vectors be coplanar?

Solution: the vectors are coplanar if and only if the determinant composed of the coordinates of these vectors equals zero:

Essentially, we need to solve an equation with a determinant. It is most advantageous to expand the determinant along the second row and to get rid of the minuses right away:

We carry out further simplifications and reduce the matter to the simplest linear equation:

Answer: for the value of the parameter found above.

It is easy to perform a check: substitute the obtained value into the original determinant and make sure that it vanishes, expanding the determinant again.
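Example 7's method can be done symbolically. The lesson's own coordinates live in its images, so the parameter-dependent matrix below is hypothetical; the method (solve det = 0 for the parameter) is exactly the one used above.

```python
import sympy as sp

t = sp.Symbol('t')
# Hypothetical coordinates of three vectors, written in columns,
# one of them depending on the parameter t.
M = sp.Matrix([[1, 0, 1],
               [2, 1, 3],
               [t, 1, 2]])

values = sp.solve(sp.Eq(M.det(), 0), t)
print(values)  # [1] : the vectors are coplanar exactly at t = 1
```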

In conclusion, let us consider one more type of problem, which is more algebraic in nature and is traditionally included in linear algebra courses. It is so common that it deserves a separate topic:

Prove that 3 vectors form a basis of three-dimensional space
and find the coordinates of a 4th vector in this basis

Example 8.

Vectors are given. Show that the first three vectors form a basis of three-dimensional space, and find the coordinates of the fourth vector in this basis.

Solution: first let us deal with the condition. Four vectors are given and, as you can see, they already have coordinates in some basis. What that basis is does not interest us. The following is of interest: three of the vectors may well form a new basis. The first stage completely coincides with the solution of Example 6: it is necessary to check whether the vectors really are linearly independent.

Compute the determinant composed of the coordinates of the vectors:

It is nonzero; hence the vectors are linearly independent and form a basis of three-dimensional space.
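A numeric sketch of both stages of Example 8 with made-up data: once the determinant test confirms that e1, e2, e3 form a basis, the coordinates of the fourth vector v in that basis are the solution of the linear system [e1 e2 e3] @ x = v.

```python
import numpy as np

# Hypothetical basis candidates e1, e2, e3 and fourth vector v
e1, e2, e3 = [1.0, 0.0, 1.0], [2.0, 1.0, 0.0], [0.0, 1.0, 1.0]
v = np.array([3.0, 2.0, 2.0])

E = np.column_stack([e1, e2, e3])
assert not np.isclose(np.linalg.det(E), 0.0)  # stage 1: e1, e2, e3 form a basis

coords = np.linalg.solve(E, v)                # stage 2: coordinates of v
print(coords)  # [1. 1. 1.], i.e. v = 1*e1 + 1*e2 + 1*e3
```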

Necessary condition for the linear dependence of n functions.

Let the functions y1(x), ..., yn(x) have derivatives up to order n−1.

Consider the determinant whose rows are the functions y1, ..., yn and their successive derivatives up to order n−1:

W(x) = det [ y1 ... yn ; y1' ... yn' ; ... ; y1^(n−1) ... yn^(n−1) ].   (1)

W(x) is customarily called the Wronskian determinant (Wronskian) of the functions y1, ..., yn.

Theorem 1. If the functions y1, ..., yn are linearly dependent on the interval (a, b), then their Wronskian W(x) is identically zero on this interval.

Proof. By the hypothesis of the theorem, the relation

α1 y1 + ... + αn yn = 0   (2)

holds on (a, b), where not all αi equal zero. Let αn ≠ 0. Then

yn = −(α1/αn) y1 − ... − (α_{n−1}/αn) y_{n−1}.   (3)

Differentiate this identity n−1 times and, substituting the obtained values into the Wronskian determinant, we get:

in the Wronskian determinant the last column is a linear combination of the previous n−1 columns and is therefore zero; hence the determinant vanishes at all points of the interval (a, b).

Theorem 2. If the functions y1, ..., yn are linearly independent solutions of the equation L[y] = 0, all of whose coefficients are continuous on the interval (a, b), then the Wronskian of these solutions is nonzero at every point of the interval (a, b).

Proof. Suppose the contrary: there exists x0 where W(x0) = 0. Set up the system of n equations

C1 y1(x0) + ... + Cn yn(x0) = 0,
...
C1 y1^(n−1)(x0) + ... + Cn yn^(n−1)(x0) = 0.   (5)

Since its determinant W(x0) equals zero, system (5) has a nonzero solution C1, ..., Cn.   (6)

Form the linear combination of the solutions y1, ..., yn:

y(x) = C1 y1(x) + ... + Cn yn(x).

y(x) is a solution of the equation L[y] = 0, and moreover it satisfies zero initial conditions at x0. By the uniqueness theorem for the solution of the equation L[y] = 0 with zero initial conditions, this solution can only be zero, i.e. y(x) ≡ 0.

We obtain the identity C1 y1 + ... + Cn yn ≡ 0, where not all Ci equal zero, which means y1, ..., yn are linearly dependent. This contradicts the hypothesis of the theorem. Consequently, there is no point at which W(x0) = 0.

On the basis of Theorem 1 and Theorem 2 one can state the following: for n solutions of the equation L[y] = 0 to be linearly independent on the interval (a, b), it is necessary and sufficient that their Wronskian not vanish at any point of this interval.
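The Wronskian from formula (1) is easy to build in sympy (the helper below is my own, not part of the lecture). The first pair is linearly dependent, so W vanishes identically; the second pair consists of independent solutions of y'' − y = 0, and W is a nonzero constant.

```python
import sympy as sp

x = sp.Symbol('x')

def wronskian(funcs, x):
    # Rows: the functions, then their 1st, ..., (n-1)-th derivatives.
    n = len(funcs)
    return sp.simplify(sp.Matrix(
        [[sp.diff(f, x, k) for f in funcs] for k in range(n)]).det())

dep = wronskian([x, 2 * x], x)                 # dependent pair: W == 0
indep = wronskian([sp.exp(x), sp.exp(-x)], x)  # solutions of y'' - y = 0
print(dep, indep)  # 0 -2
```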

The following obvious properties of the Wronskian also follow from the proved theorems.

  1. If the Wronskian of n solutions of the equation L[y] = 0 is zero at a single point x = x0 of the interval (a, b), on which the coefficients p_i(x) of the equation are continuous, then it is zero at all points of this interval.
  2. If the Wronskian of n solutions of the equation L[y] = 0 is nonzero at a single point x = x0 of the interval (a, b), then it is nonzero at all points of this interval.

Thus, for n solutions of the equation L[y] = 0 to be linearly independent on the interval (a, b), on which the coefficients p_i(x) of the equation are continuous, it is necessary and sufficient that their Wronskian be nonzero at even a single point of this interval.


Def. A system of elements x1, ..., xm of a linear space V is linearly dependent if there exist λ1, ..., λm ∈ ℝ (|λ1| + ... + |λm| ≠ 0) such that λ1x1 + ... + λmxm = θ.

Def. A system of elements x1, ..., xm ∈ V is linearly independent if the equality λ1x1 + ... + λmxm = θ implies λ1 = ... = λm = 0.

Def. An element x ∈ V is a linear combination of elements x1, ..., xm ∈ V if there exist λ1, ..., λm ∈ ℝ such that x = λ1x1 + ... + λmxm.

Theorem (criterion of linear dependence): a system of vectors x1, ..., xm ∈ V is linearly dependent if and only if at least one vector of the system is linearly expressed through the rest.

Proof. Necessity: let x1, ..., xm be linearly dependent ⟹ there exist λ1, ..., λm ∈ ℝ (|λ1| + ... + |λm| ≠ 0) such that λ1x1 + ... + λm−1xm−1 + λmxm = θ. Suppose λm ≠ 0; then

xm = (−λ1/λm)x1 + ... + (−λm−1/λm)xm−1.

Sufficiency: let at least one of the vectors be linearly expressed through the remaining vectors: xm = λ1x1 + ... + λm−1xm−1 (λ1, ..., λm−1 ∈ ℝ). Then λ1x1 + ... + λm−1xm−1 + (−1)xm = θ, where the coefficient of xm is −1 ≠ 0 ⟹ x1, ..., xm are linearly dependent.

Corollary (sufficient condition of linear dependence):

if a system contains a zero element or a linearly dependent subsystem, then it is linearly dependent.

Proof. Consider the equality λ1x1 + ... + λmxm = θ.

1) Let x1 = θ. Then this equality holds with λ1 = 1 and λ2 = ... = λm = 0, so the system is linearly dependent.

2) Let x1, ..., xk be a linearly dependent subsystem: λ1x1 + ... + λkxk = θ with |λ1| + ... + |λk| ≠ 0. Then, taking λk+1 = ... = λm = 0, we obtain λ1x1 + ... + λmxm = θ with |λ1| + ... + |λm| ≠ 0 ⟹ the system x1, ..., xm is linearly dependent.

Basis of a linear space. Coordinates of a vector in a given basis. Coordinates of a sum of vectors and of a vector multiplied by a number. Necessary and sufficient condition for the linear dependence of a system of vectors.

Definition: an ordered system of elements e1, ..., en of a linear space V is called a basis of this space if:

a) e1, ..., en are linearly independent;

b) for every x ∈ V there exist α1, ..., αn such that x = α1e1 + ... + αnen.

x = α1e1 + ... + αnen is the decomposition of the element x in the basis e1, ..., en;

α1, ..., αn ∈ ℝ are the coordinates of the element x in the basis e1, ..., en.

Theorem: if a basis e1, ..., en is given in a linear space V, then for every x ∈ V the column of coordinates of x in the basis e1, ..., en is defined uniquely (the coordinates are defined uniquely).

Proof: let x = α1e1 + ... + αnen and x = β1e1 + ... + βnen. Subtracting, (α1 − β1)e1 + ... + (αn − βn)en = θ. Since e1, ..., en are linearly independent, αi − βi = 0 for all i = 1, ..., n, i.e. αi = βi for all i = 1, ..., n. Q.E.D.

Theorem: let e1, ..., en be a basis of the linear space V, let x, y be arbitrary elements of V and λ ∈ ℝ an arbitrary number. When x and y are added, their coordinates are added; when x is multiplied by λ, the coordinates of x are also multiplied by λ.

Proof: let x = ξ1e1 + ... + ξnen and y = η1e1 + ... + ηnen. Then

x + y = (ξ1 + η1)e1 + ... + (ξn + ηn)en,

λx = (λξ1)e1 + ... + (λξn)en.

Lemma 1 (necessary and sufficient condition for the linear dependence of a system):

let e1, ..., en be a basis of the space V. A system of elements f1, ..., fk ∈ V is linearly dependent if and only if the coordinate columns of these elements in the basis e1, ..., en are linearly dependent.

Proof: decompose f1, ..., fk in the basis e1, ..., en:

fm = a1m·e1 + ... + anm·en, m = 1, ..., k.

Then λ1f1 + ... + λkfk = (λ1a11 + ... + λk·a1k)e1 + ... + (λ1an1 + ... + λk·ank)en, so λ1f1 + ... + λkfk = θ if and only if the same linear combination of the coordinate columns equals the zero column, which is what was required to prove.
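Lemma 1 has a simple computational form (illustrated with made-up coordinate columns): the elements are linearly dependent iff the matrix of their coordinate columns has rank smaller than the number of columns.

```python
import numpy as np

# Hypothetical coordinate columns of three elements in some basis;
# the second column is 2 * the first, so the system is dependent.
cols = np.column_stack([[1.0, 2.0, 3.0],
                        [2.0, 4.0, 6.0],
                        [0.0, 1.0, 0.0]])

dependent = np.linalg.matrix_rank(cols) < cols.shape[1]
print(dependent)  # True
```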

13. Dimension of a linear space. Theorem on the connection between dimension and basis.

Definition: a linear space V is called an n-dimensional space if there exist n linearly independent elements in V, while any system of n + 1 elements of V is linearly dependent. In this case n is called the dimension of the linear space V, denoted dim V = n.

A linear space is called infinite-dimensional if for every n ∈ ℕ there exists in V a linearly independent system containing n elements.

Theorem: 1) If V is an n-dimensional linear space, then any ordered system of n linearly independent elements of this space forms a basis. 2) If in a linear space V there is a basis consisting of n elements, then the dimension of V equals n (dim V = n).

Proof: 1) Let dim V = n ⇒ in V there exist n linearly independent elements e1, ..., en. We prove that these elements form a basis, that is, that every x ∈ V can be decomposed over e1, ..., en. Append x to them: the system e1, ..., en, x contains n + 1 vectors, hence it is linearly dependent. Since e1, ..., en are linearly independent, by Theorem 2 x is linearly expressed through e1, ..., en, i.e. there exist α1, ..., αn such that x = α1e1 + ... + αnen. Thus e1, ..., en is a basis of the space V. 2) Let e1, ..., en be a basis of V; then V contains n linearly independent elements. Take arbitrary f1, ..., fn, fn+1 ∈ V, i.e. n + 1 elements. We show their linear dependence. Decompose them in the basis:

fm = a1m·e1 + ... + anm·en, m = 1, ..., n + 1. Form the matrix A from the coordinate columns: the matrix contains n rows ⇒ Rg A ≤ n. The number of columns is n + 1 > n ≥ Rg A ⇒ the columns of the matrix A (i.e. the coordinate columns of f1, ..., fn, fn+1) are linearly dependent. By Lemma 1, f1, ..., fn, fn+1 are linearly dependent ⇒ dim V = n.

    Corollary: If some basis contains n elements, then any other basis of this space also contains n elements.
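    The counting argument in the proof (n rows, so rg A ≤ n < n + 1 columns) can be checked numerically. The sketch below (my own illustration) samples random 4-tuples of vectors in R³ and confirms each is linearly dependent:

```python
import random
from fractions import Fraction

def rank(columns):
    """Rank over the rationals of the matrix with the given columns."""
    m = [[Fraction(c[i]) for c in columns] for i in range(len(columns[0]))]
    r = 0
    for col in range(len(columns)):
        piv = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# Any 4 coordinate columns of height 3 form a 3 x 4 matrix, so rg A <= 3 < 4:
random.seed(1)
for _ in range(100):
    vecs = [tuple(random.randint(-9, 9) for _ in range(3)) for _ in range(4)]
    assert rank(vecs) < 4   # the 4 vectors are always linearly dependent
print("every sampled 4-tuple of vectors in R^3 is linearly dependent")
```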

    Theorem 2: If the system of vectors x1, ..., xm−1, xm is linearly dependent, while its subsystem x1, ..., xm−1 is linearly independent, then xm is linearly expressed through x1, ..., xm−1.

    Proof: Since x1, ..., xm−1, xm are linearly dependent, there exist λ1, ..., λm−1, λm, not all zero, such that λ1 x1 + ... + λm−1 xm−1 + λm xm = θ.

    If λm = 0, then λ1 x1 + ... + λm−1 xm−1 = θ with a nontrivial set of coefficients ⇒ x1, ..., xm−1 are linearly dependent, which cannot be. Hence λm ≠ 0 and xm = (−λ1/λm) x1 + ... + (−λm−1/λm) xm−1.
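    Theorem 2 guarantees that xm can actually be recovered as a combination of the independent subsystem. A small illustration in the plane (the concrete vectors are my own choice), solving a·x1 + b·x2 = x3 by Cramer's rule:

```python
# x1, x2 are linearly independent in R^2; adding x3 makes the system
# dependent, so by Theorem 2 x3 is expressible through x1 and x2.
x1, x2, x3 = (1, 2), (3, 1), (7, 9)

det = x1[0] * x2[1] - x1[1] * x2[0]        # nonzero <=> x1, x2 independent
a = (x3[0] * x2[1] - x3[1] * x2[0]) / det  # Cramer's rule for a*x1 + b*x2 = x3
b = (x1[0] * x3[1] - x1[1] * x3[0]) / det
print(a, b)  # coefficients with x3 = a*x1 + b*x2

assert all(abs(a * u + b * v - w) < 1e-12 for u, v, w in zip(x1, x2, x3))
```

    Here x3 = 4·x1 + 1·x2, i.e. the expression promised by the theorem exists and is unique because x1, x2 are independent.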

    Note that in what follows, without loss of generality, we consider the case of vectors in three-dimensional space. On the plane, vectors are treated similarly. As noted above, all results known from the course of linear algebra for algebraic vectors carry over to the special case of geometric vectors. So let us do that.

    Let the vectors a1, ..., an be fixed.

    Definition. A sum α1 a1 + ... + αn an, where α1, ..., αn are some numbers, is called a linear combination of the vectors a1, ..., an. The numbers α1, ..., αn are called the coefficients of the linear combination.

    We will be interested in the question of when a linear combination can equal the zero vector. By the properties and axioms of vector spaces, it is obvious that for any system of vectors there exists a trivial (zero) set of coefficients for which this equality holds:

    0 · a1 + 0 · a2 + ... + 0 · an = 0.

    The question arises of the existence, for a given system of vectors, of a nontrivial set of coefficients (among which at least one is nonzero) for which this equality holds. Accordingly, we distinguish linearly dependent and linearly independent systems.

    Definition. A system of vectors is called linearly dependent if there exists a set of numbers, at least one of which is nonzero, such that the corresponding linear combination equals the zero vector:

    α1 a1 + ... + αn an = 0.

    A system of vectors is called linearly independent if the equality

    α1 a1 + ... + αn an = 0

    is possible only for the trivial set of coefficients: α1 = α2 = ... = αn = 0.
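    These two definitions can be tested mechanically: a system is independent exactly when the rank of its coordinate columns equals the number of vectors. A hedged sketch (the vectors and the `rank` helper are my own examples):

```python
from fractions import Fraction

def rank(columns):
    """Rank over the rationals of the matrix with the given columns."""
    m = [[Fraction(c[i]) for c in columns] for i in range(len(columns[0]))]
    r = 0
    for col in range(len(columns)):
        piv = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

a, b = (2, 4), (1, 2)   # b = a / 2: the nontrivial combination 1*a - 2*b vanishes
c, d = (1, 0), (1, 1)   # only the trivial combination vanishes

print(rank([a, b]) < 2)    # True  -> linearly dependent
print(rank([c, d]) == 2)   # True  -> linearly independent
```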

    We list the basic properties of linearly dependent and independent systems, proved in the course of linear algebra.

    1. Any system of vectors containing the zero vector is linearly dependent.

    2. If a system of vectors contains a linearly dependent subsystem, then the whole system is also linearly dependent.

    3. If a system of vectors is linearly independent, then any of its subsystems is also linearly independent.

    4. If a system contains two vectors, one of which is obtained from the other by multiplication by a number, then the entire system is linearly dependent.



    Theorem (criterion of linear dependence). A system of vectors is linearly dependent if and only if one of the vectors of the system can be represented as a linear combination of the remaining vectors of the system.

    Taking into account the criterion of collinearity of two vectors, one can assert that collinearity of two vectors is the criterion of their linear dependence. For three vectors in space, the following statement holds.

    Theorem (criterion of linear dependence of three geometric vectors). Three vectors a, b and c are linearly dependent if and only if they are coplanar.

    Proof.

    Necessity. Let the vectors a, b and c be linearly dependent. We prove their coplanarity. By the general criterion of linear dependence of algebraic vectors, one of these vectors can be represented as a linear combination of the others. Let, for example, c = α a + β b.

    If all three vectors a, b and c are attached to a common origin, then the vector c coincides with the diagonal of the parallelogram built on the vectors α a and β b. But this means that the vectors a, b and c lie in the same plane, i.e. are coplanar.

    Sufficiency. Let the vectors a, b and c be coplanar. We show that they are linearly dependent. First consider the case when some pair of the given vectors is collinear. In this case, by the previous theorem, the system of vectors a, b, c contains a linearly dependent subsystem and is therefore itself linearly dependent by property 2 of linearly dependent and independent systems. Now let no pair of the vectors under consideration be collinear. Transfer all three vectors to one plane and bring them to a common origin O. Draw through the end of the vector c lines parallel to the vectors a and b. Denote by A the point of intersection of the line parallel to b with the line on which the vector a lies, and by B the point of intersection of the line parallel to a with the line on which the vector b lies. By the definition of the sum of vectors we obtain:

    c = OA + OB.

    Since the vector OA is collinear to the nonzero vector a, there exists a real number α such that OA = α a.

    From similar considerations follows the existence of a real number β such that OB = β b.

    As a result we have: c = α a + β b.

    Then, by the general criterion of linear dependence of algebraic vectors, the vectors a, b and c are linearly dependent. ■
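    For coordinate vectors this criterion has a convenient computational form: three vectors in space are coplanar exactly when their scalar triple product (the 3×3 determinant of their coordinates) vanishes. A sketch with made-up coordinates:

```python
def triple(a, b, c):
    """Scalar triple product a . (b x c); zero exactly when a, b, c are coplanar."""
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
          - a[1] * (b[0] * c[2] - b[2] * c[0])
          + a[2] * (b[0] * c[1] - b[1] * c[0]))

a, b = (1, 0, 2), (0, 1, 1)
c = tuple(3 * u - 2 * v for u, v in zip(a, b))  # c = 3a - 2b lies in span{a, b}
print(triple(a, b, c))  # 0 -> coplanar, hence linearly dependent

d = (0, 0, 1)
print(triple(a, b, d))  # 1 -> not coplanar, hence a, b, d are independent
```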

    Theorem (linear dependence of four vectors). Any four vectors in space are linearly dependent.

    Proof. First consider the case when some triple of the given four vectors is coplanar. In this case that triple is linearly dependent by the previous theorem. Consequently, by property 2 of linearly dependent and independent systems of vectors, the whole four is linearly dependent.

    Now let no triple of the vectors under consideration be coplanar. Bring all four vectors a, b, c, d to a common origin O and draw through the end of the vector d planes parallel to the planes determined by the pairs of vectors a, b; a, c; b, c. Denote the points of intersection of these planes with the lines on which the vectors a, b and c lie by A, B and C respectively. From the definition of the sum of vectors it follows that

    d = OA + OB + OC = α a + β b + γ c,

    which, taking into account the general criterion of linear dependence of algebraic vectors, means that all four vectors are linearly dependent. ■
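    The decomposition d = α a + β b + γ c constructed in the proof can be computed with Cramer's rule, using scalar triple products as the determinants (the concrete vectors here are my own):

```python
def triple(a, b, c):
    """Scalar triple product a . (b x c) = det of the matrix with rows a, b, c."""
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
          - a[1] * (b[0] * c[2] - b[2] * c[0])
          + a[2] * (b[0] * c[1] - b[1] * c[0]))

a, b, c = (1, 0, 0), (1, 1, 0), (1, 1, 1)   # a non-coplanar triple
d = (4, 7, 2)

D = triple(a, b, c)            # nonzero, since a, b, c are not coplanar
alpha = triple(d, b, c) / D    # Cramer's rule: replace the corresponding vector by d
beta  = triple(a, d, c) / D
gamma = triple(a, b, d) / D
print(alpha, beta, gamma)      # coefficients with d = alpha*a + beta*b + gamma*c

assert all(abs(alpha * p + beta * q + gamma * r - s) < 1e-12
           for p, q, r, s in zip(a, b, c, d))
```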

    Definition 18.2. A system of functions f1(t), ..., fp(t) is called linearly dependent on the interval (a, β) if some nontrivial⁵ linear combination of these functions is identically zero on this interval.

    Definition 18.3. A system of vectors x^1, ..., x^p is called linearly dependent if some nontrivial linear combination of these vectors equals the zero vector.

    ¹ In order to avoid confusion, in what follows we denote the number of a component of a vector (vector function) by a lower index, and the number of the vector itself (if there are several such vectors) by an upper index.

    ⁵ We recall that a linear combination is called nontrivial if not all of its coefficients are zero.

    Definition 18.4. A system of vector functions x^1(t), ..., x^n(t) is called linearly dependent on the interval (a, β) if some nontrivial linear combination of these vector functions is identically equal to the zero vector on this interval.

    It is important to understand how these three concepts (linear dependence of functions, of vectors and of vector functions) relate to each other.

    First of all, if we write formula (18.6) in expanded form (remembering that each x^i(t) is a vector),

    then it will be equivalent to a system of equalities

    expressing the linear dependence of the components in the sense of the first definition (as functions). One says that linear dependence of vector functions entails their componentwise linear dependence.

    The converse, generally speaking, is not true: it suffices to consider an example of a pair of vector functions.

    The first components of these vector functions simply coincide, hence they are linearly dependent as functions. The second components are proportional, hence also linearly dependent. However, if we try to construct a linear combination of the vector functions equal to zero identically, then from the corresponding relation

    we immediately obtain a system

    which has the only solution C1 = C2 = 0. Thus, our vector functions are linearly independent.

    What is the reason for such a strange property? What is the trick that allows one to build linearly independent vector functions out of obviously dependent components?

    It turns out that the point is not so much the linear dependence of the components as the proportion of coefficients needed to obtain zero. In the case of linear dependence of vector functions, one and the same set of coefficients serves all components, regardless of their number. But in our example one component required one proportion of coefficients and the other component a different one. So the trick is actually simple: in order for componentwise linear dependence to yield linear dependence of the vector functions, all components must be linearly dependent "in the same proportion".
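    A hypothetical pair with the described behavior (the text's own formulas were lost in conversion, so these functions are my assumption): each component pair is linearly dependent, but in different proportions, so the vector functions themselves are independent.

```python
# x1(t) and x2(t): first components coincide, second components are proportional.
def x1(t): return (t, 2 * t)
def x2(t): return (t, t)

for t in (1, 2, 3):
    assert x1(t)[0] - x2(t)[0] == 0        # first components:  1*f1 - 1*f2 = 0
    assert x1(t)[1] - 2 * x2(t)[1] == 0    # second components: 1*g1 - 2*g2 = 0

# c1*x1(t) + c2*x2(t) = 0 for all t requires (c1 + c2)*t = 0 and (2*c1 + c2)*t = 0,
# i.e. the homogeneous system with matrix [[1, 1], [2, 1]]:
det = 1 * 1 - 1 * 2
print(det != 0)   # True: only c1 = c2 = 0, so the vector functions are independent
```

    Note that the first components vanish in the proportion (1, −1) and the second in the proportion (1, −2); no single pair (c1, c2) serves both, which is exactly the point made above.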

    Let us now relate linear dependence of vector functions and linear dependence of vectors. It is almost obvious that linear dependence of the vector functions implies that for each fixed t* the vectors x^1(t*), ..., x^n(t*) will be linearly dependent.

    The converse, generally speaking, does not hold: linear dependence of the vectors at each t does not imply linear dependence of the vector functions. This is easy to see from an example of two vector functions.

    For t = 1, t = 2 and t = 3 we obtain pairs of vectors,

    respectively. In each pair the vectors are proportional (with coefficients 1, 2 and 3, respectively). It is not difficult to see that for any fixed t* our pair of vectors will be proportional with coefficient t*.

    If we try to construct a linear combination of the vector functions equal to zero identically, then the first components give us a relation

    which is possible only if C1 = C2 = 0. Thus, our vector functions turned out to be linearly independent. Again, the explanation of this effect is that in the case of linear dependence of the vector functions one and the same set of constants Cj serves all values of t, while in our example each value of t requires its own proportion between the coefficients.
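    The following pair reproduces the behavior described here (proportionality coefficient t* at each fixed t*); since the original formulas were lost, treat it as an assumed reconstruction:

```python
def x1(t): return (1, t)
def x2(t): return (t, t * t)   # x2(t) = t * x1(t) at every fixed t

# Pointwise: the two vectors are proportional with coefficient t.
for t in (1, 2, 3):
    assert x2(t) == tuple(t * u for u in x1(t))

# As functions: c1*1 + c2*t = 0 identically forces c1 = 0 (take t = 0)
# and then c2 = 0 (take t = 1) -> only the trivial combination exists.
print("pointwise proportional, yet linearly independent as vector functions")
```

    The proportionality coefficient changes with t, so no fixed pair (C1, C2) works for all t at once.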
