The distribution law of the sum of two random variables

Let there be a system of two random variables X and Y whose joint distribution is known. The task is to find the distribution of the random variable Z = X + Y. Examples of Z include the total profit of two enterprises; the number of votes cast for a candidate at two different polling stations; the sum of the points on two dice.

1. The case of two discrete random variables. Whatever values the discrete random variables take (finite decimal fractions, possibly with different steps), the situation can almost always be reduced to the following special case: X and Y take only integer values. If they were originally decimal fractions, they can be made integer by multiplying by a suitable power of ten, 10^k, and the missing values between the minimum and the maximum can be assigned zero probabilities. Let the joint probability distribution p_{ij} = P(X = i, Y = j) be known. Then, if the rows and columns of the matrix are numbered accordingly, the probability of the sum is P(Z = n) = Σ_i P(X = i, Y = n − i).

That is, the elements of the matrix are summed along one of its diagonals.

2. The case of two continuous random variables. Let the joint distribution density f(x, y) be known. Then the distribution density of the sum Z = X + Y is obtained as shown below.

If X and Y are independent, i.e. f(x, y) = f1(x) f2(y), then the density of the sum is the convolution of the two individual densities (see the sketch below).
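The formulas referenced in the two preceding paragraphs were lost from the text; a standard reconstruction, assuming f(x, y) is the joint density of (X, Y) and f1, f2 are the densities of X and Y, is:

```latex
% Density of Z = X + Y for a general joint density f(x, y):
g(z) = \int_{-\infty}^{\infty} f(x,\, z - x)\, dx .
% If X and Y are independent, f(x, y) = f_1(x) f_2(y), and this becomes
% the convolution of the two marginal densities:
g(z) = \int_{-\infty}^{\infty} f_1(x)\, f_2(z - x)\, dx .
```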

Example 1. X and Y are independent, uniformly distributed random variables: X is uniform on (c, a) and Y is uniform on (d, b).

Find the distribution density of the random variable Z = X + Y.

It is obvious that f1(x) = 1/(a − c) for c ≤ x ≤ a and f2(y) = 1/(b − d) for d ≤ y ≤ b, and both densities are zero outside these intervals.

The random variable Z can take values only in the interval (c + d, a + b); outside this interval g(z) = 0. On the coordinate plane (x, z) the region of possible values of Z is a parallelogram with sides x = c, x = a, z = x + d, z = x + b. In the convolution formula the limits of integration would be c and a; however, after the substitution y = z − x the integrand vanishes for part of the x-range at some values of z, because the point z − x falls outside (d, b). Therefore the integral must be computed separately for different ranges of z, in each of which the limits of integration are different functions of z. Let us do this for the particular case a + d < b + c. Consider three ranges of values of z and find g(z) for each of them.

1) c + d ≤ z ≤ a + d. Then

2) a + d ≤ z ≤ b + c. Then

3) b + c ≤ z ≤ a + b. Then

Such a distribution is called Simpson's law. Figures 8 and 9 show the graphs of the distribution density for c = 0, d = 0.
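As an illustrative sketch (not part of the original text), the Simpson density can be obtained numerically by convolving the two uniform densities; the interval endpoints below are arbitrary example values.

```python
import numpy as np

# Example endpoints (arbitrary): X ~ U(c, a), Y ~ U(d, b)
c, a, d, b = 0.0, 1.0, 0.0, 2.0

dz = 0.001
x = np.arange(-1.0, 4.0, dz)
f1 = np.where((x >= c) & (x <= a), 1.0 / (a - c), 0.0)   # density of X
f2 = np.where((x >= d) & (x <= b), 1.0 / (b - d), 0.0)   # density of Y

# g(z) = integral of f1(x) f2(z - x) dx, approximated by a discrete convolution
g = np.convolve(f1, f2) * dz
z = np.arange(len(g)) * dz + 2 * x[0]

# The density is nonzero only on (c + d, a + b) and is trapezoidal
# (triangular when a - c = b - d), i.e. Simpson's law.
print(z[np.argmax(g)], g.max())   # location and height of the flat top
```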

We use the above general method to solve one problem, namely, to find the distribution law of the sum of two random variables. There is a system of two random variables (X, Y) with distribution density f(x, y).

Consider the sum of the random variables X and Y: Z = X + Y, and let us find the distribution of Z. To do this, we construct on the xOy plane the line whose equation is x + y = z (Fig. 6.3.1). This is a straight line that cuts off segments equal to z on the axes. The line divides the xOy plane into two parts: to the right and above it x + y > z; to the left and below it x + y < z.

The region D in this case is the lower-left part of the xOy plane, shaded in Fig. 6.3.1. By formula (6.3.2) we have the distribution function G(z) as a double integral of f(x, y) over D. Differentiating this expression with respect to z, which enters the upper limit of the inner integral, we obtain g(z).

This is a general formula for the distribution density of the sum of two random variables.

Since the problem is symmetric with respect to x and y, another variant of the same formula can be written:

An example of the composition of normal laws. Consider two independent random variables X and Y subject to normal laws. It is required to form the composition of these laws, i.e., to find the distribution law of the value Z = X + Y.

Apply the general formula for the composition of the distribution laws:

Expanding the brackets in the exponent of the integrand, collecting like terms, and substituting these expressions into the formula above, after transformation we obtain:

and this is nothing but a normal law with dispersion center m_z = m_x + m_y and standard deviation σ_z = √(σ_x² + σ_y²).

The same conclusion can be reached much more simply by the following qualitative reasoning.

Without expanding the brackets or performing any transformations in the integrand (6.3.3), we immediately conclude that the exponent is a quadratic trinomial in x of the form

where the value z does not enter the coefficient A at all, enters the coefficient B to the first degree, and enters the coefficient C squared. Bearing this in mind and applying formula (6.3.4), we conclude that g(z) is an exponential function whose exponent is a quadratic trinomial in z; a density of this form corresponds to the normal law. Thus we come to a purely qualitative conclusion: the distribution of Z must be normal. To find the parameters of this law, m_z and σ_z, we use the theorems on the addition of mathematical expectations and the addition of variances. By the theorem on the addition of mathematical expectations, m_z = m_x + m_y. By the theorem on the addition of variances, σ_z² = σ_x² + σ_y², from which formula (6.3.7) follows.

Passing from the standard deviations to the probable deviations proportional to them, we obtain r_z = √(r_x² + r_y²).

Thus we arrive at the following rule: the composition of normal laws again yields a normal law, and the mathematical expectations and variances (or the squares of the probable deviations) are added.
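A compact reconstruction of the parameter formulas implied above (the notation m, σ, r for the mean, standard deviation and probable deviation is an assumption):

```latex
m_z = m_x + m_y, \qquad
\sigma_z = \sqrt{\sigma_x^{2} + \sigma_y^{2}}, \qquad
r_z = \sqrt{r_x^{2} + r_y^{2}} .
```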

The rule of composition of normal laws can be generalized to the case of an arbitrary number of independent random variables.

If there are n independent random variables X_1, ..., X_n subject to normal laws with dispersion centers m_1, ..., m_n and standard deviations σ_1, ..., σ_n, then the value Z = X_1 + ... + X_n is also subject to the normal law with parameters m_z = m_1 + ... + m_n and σ_z = √(σ_1² + ... + σ_n²).

If the system of random variables (X, Y) is distributed according to a normal law but X and Y are dependent, it is not difficult to prove, just as before, starting from the general formula (6.3.1), that the distribution law of the value Z = X + Y is also a normal law. The dispersion centers still add algebraically, but for the standard deviations the rule becomes more complex: σ_z = √(σ_x² + σ_y² + 2 r σ_x σ_y), where r is the correlation coefficient of X and Y.

When several dependent random variables subject to the normal law are added, the distribution law of the sum also turns out to be normal, with parameters

where r_ij is the correlation coefficient of X_i and X_j, and the summation extends over all distinct pairs of the variables.

We have verified a very important property of the normal law: the composition of normal laws again yields a normal law. This is the so-called "stability property". A distribution law is called stable if the composition of two laws of this type again yields a law of the same type. We showed above that the normal law is stable. Very few distribution laws possess the stability property. The law of uniform density is not stable: composing two laws of uniform density on the intervals from 0 to 1, we obtained Simpson's law.

The stability of the normal law is one of the essential conditions for its widespread use in practice. However, besides the normal law, some other distribution laws also possess the stability property. A peculiarity of the normal law is that when a sufficiently large number of practically arbitrary distribution laws are composed, the resulting law turns out to be arbitrarily close to normal, regardless of what the distribution laws of the components were. This can be illustrated, for example, by composing three laws of uniform density on the intervals from 0 to 1. The resulting density g(z) is shown in Fig. 6.3.1. As can be seen from the figure, the graph of g(z) closely resembles the graph of a normal law.
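A small numerical sketch (not part of the original text) illustrating this: convolve three uniform(0, 1) densities and compare the result with a normal density of the same mean and variance.

```python
import numpy as np

dz = 0.001
x = np.arange(0.0, 1.0, dz)
f = np.full_like(x, 1.0)          # uniform(0, 1) density

g = f.copy()
for _ in range(2):                # convolve three uniform densities
    g = np.convolve(g, f) * dz

z = np.arange(len(g)) * dz        # support of the sum: (0, 3)
mu, var = 1.5, 3 * (1.0 / 12.0)   # mean and variance of the sum
normal = np.exp(-(z - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

# The maximum pointwise gap between the convolution and the normal
# density is already small, illustrating the approach to normality.
print(np.max(np.abs(g - normal)))
```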

A decision maker may use insurance to reduce the unfavorable financial impact of some types of random events.

This consideration is very general: the decision maker could be either an individual seeking protection against damage to property, savings, or income, or an organization seeking protection against the same kinds of damage.

In fact, such an organization may be an insurance company that seeks ways to protect itself from financial losses caused by too many insurance claims occurring for an individual client or for its insurance portfolio as a whole. Such protection is called reinsurance.

Let us consider one of two models (namely, the individual risk model) widely used in setting insurance rates and reserves, as well as in reinsurance.

Denote by S the total random losses of the insurance company over some part of its risks. In this case S is a random variable whose probability distribution we must determine. Historically, there have been two sets of postulates for the distribution of the random variable S. The individual risk model defines S as the sum S = X_1 + X_2 + ... + X_n, (1.1)

where the random variable X_i denotes the losses caused by the insurance object with number i, and n denotes the total number of insurance objects.

It is usually assumed that the X_i are independent random variables, since in this case the mathematical calculations are simpler and no assumptions about the nature of the relationship between them are required. The second model is the collective risk model.

The individual risk model considered here does not reflect the change in the value of money over time. This is done to simplify the model, and that is why the title of the article refers to a short time interval.

We will consider only closed models, i.e. those in which the number of insurance objects n in formula (1.1) is known and fixed at the very beginning of the time interval under consideration. If we introduce assumptions about migration into or out of the insurance system, we obtain an open model.

Random variables describing individual payments

First we recall the basic provisions regarding life insurance.

In one-year term death insurance, the insurer undertakes to pay the amount b if the policyholder dies within the year from the date of conclusion of the insurance contract, and pays nothing if the policyholder survives this year.

The probability of the insured event occurring during the specified year is denoted by q.

The random variable describing the insurance payments has a distribution that can be specified either by the probability function

(2.1)

or by the corresponding distribution function

(2.2)

From formula (2.1) and the definition of moments we obtain

(2.4)
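The bodies of formulas (2.1), (2.2) and (2.4) did not survive extraction; under the setup described above (payment b with probability q, nothing otherwise) they are presumably

```latex
P(X = 0) = 1 - q, \qquad P(X = b) = q .                         % (2.1)
F(x) =
\begin{cases}
0,      & x < 0, \\
1 - q,  & 0 \le x < b, \\
1,      & x \ge b .
\end{cases}                                                      % (2.2)
E[X] = bq, \qquad \operatorname{Var}[X] = b^{2} q (1 - q) .      % (2.4)
```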

These formulas can also be obtained by writing X as X = Ib,

where b is the constant amount paid in the event of death, and I is a random variable that takes the value 1 if death occurs and 0 otherwise.

Thus E[I] = q and Var[I] = q(1 − q), so the mean and variance of the random variable X = Ib are bq and b²q(1 − q) respectively, which coincides with the formulas written above.

A random variable taking only the values 0 and 1 is widely used in actuarial models.

In textbooks on probability theory it is called an indicator, a Bernoulli random variable, or a binomial random variable in a single-trial scheme.

We will call it an indicator for brevity, and also because it indicates the occurrence or non-occurrence of the event under consideration.

Let us turn to more general models, in which the amount of the insurance payment is also a random variable and several insured events may occur in the time interval under consideration.

Health insurance, insurance of cars and other property, and civil liability insurance immediately provide many examples. Generalizing formula (2.5), we put X = IB,

where the random variable B describes the insurance payments in the time interval under consideration, the random variable X denotes the total amount of payments in this interval, and the random variable I is the indicator of the event that at least one insured event has occurred.

Being the indicator of such an event, the random variable I records the presence (I = 1) or absence (I = 0) of insured events in this time interval, but not the number of insured events in it.

The probability P(I = 1) will still be denoted by q.

Let us discuss several examples and determine the distributions of the random variables I and B in some models.

We first consider one-year term death insurance with an additional payment if death occurs as the result of an accident.

For definiteness, suppose that if death occurs as a result of an accident, the payment is 50,000; if death occurs from other causes, the payment is 25,000.

Suppose that for a person of the given age, state of health and occupation, the probability of death as a result of an accident during the year is 0.0005, and the probability of death from other causes is 0.0020. In formulas this looks as follows:

Summing over all possible values, we obtain q = P(I = 1).

The conditional distribution of the random variable B given I = 1 has the form
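The probability function of I and the conditional distribution of B referenced above did not survive extraction; a reconstruction consistent with the stated numbers (payment 50,000 for accidental death with probability 0.0005, payment 25,000 otherwise with probability 0.0020) is:

```latex
q = P(I = 1) = 0.0005 + 0.0020 = 0.0025, \qquad P(I = 0) = 0.9975,
P(B = 25\,000 \mid I = 1) = \frac{0.0020}{0.0025} = 0.8, \qquad
P(B = 50\,000 \mid I = 1) = \frac{0.0005}{0.0025} = 0.2 .
```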

We now consider collision insurance for cars (compensation is paid to the car owner for damage to his car) with a deductible of 250 and a maximum payment of 2,000.

For clarity, suppose that the probability of exactly one insured event occurring for a particular person during the period under consideration is 0.15, and the probability of more than one collision is zero: P(I = 1) = 0.15, P(I = 0) = 0.85.

The unrealistic assumption that at most one insured event can occur in one period is made in order to simplify the distribution of the random variable B.

We will drop this assumption in the next section, after we consider the distribution of the sum of several insured events.

Since B is the amount of the insurer's payment, and not the damage caused to the car, we can draw two conclusions about I and B.

First, the event I = 0 includes those collisions in which the damage is less than the deductible, which is 250.

Second, the distribution of the random variable B will have a "clump" of probability mass at the maximum insurance payment, which is 2,000.

Suppose that the probability mass concentrated at this point is 0.1. Further, suppose that the amount of insurance payments in the range from 0 to 2,000 can be modelled by a continuous distribution with a density function proportional to a given curve. (In practice, the continuous curve chosen to represent the distribution of payments is the result of studying the amounts of payments in the preceding period.)

Combining these assumptions about the conditional distribution of B given I = 1, we arrive at a distribution of mixed type that has a positive density in the range from 0 to 2,000 and a "clump" of probability mass at the point 2,000. This is illustrated by the graph in Fig. 2.2.1.

The distribution function of this conditional distribution looks like this:

Fig. 2.1. Distribution function of the random variable B given I = 1

Let us calculate the mathematical expectation and variance in the automobile insurance example in two ways.

First, we derive the distribution of the random variable X = IB and use it to compute E[X] and Var[X]. Denoting by F the distribution function of X, we have

F(x) = 0 for x < 0;

This is a distribution of mixed type. As shown in Fig. 2.2, it has both a discrete part (a "clump" of probability mass at the point 2,000) and a continuous part. Such a distribution function corresponds to a combination of a probability function

Fig. 2.2. Distribution function of the random variable X = IB

and a density function

In particular, from these we obtain E[X] and E[X²], and hence Var[X] = E[X²] − (E[X])².

There are a number of formulas connecting the moments of random variables with conditional mathematical expectations. For the mathematical expectation and for the variance these formulas are

(2.10)

(2.11)
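Formulas (2.10) and (2.11) were lost in extraction; the standard identities they refer to (with W the variable of interest and V the conditioning variable, as in the surrounding text) are presumably

```latex
E[W] = E\bigl[\, E[W \mid V] \,\bigr]                                     % (2.10)
\operatorname{Var}[W] = \operatorname{Var}\bigl( E[W \mid V] \bigr)
                      + E\bigl[ \operatorname{Var}(W \mid V) \bigr]       % (2.11)
```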

It is understood that the expressions on the left-hand sides of these equalities are computed directly from the distribution of the random variable W, while the expressions on the right-hand sides are computed using the conditional distribution of W for a fixed value of the random variable V.

These expressions are thus functions of the conditioning random variable, and we can compute their moments using its distribution.

Conditional distributions are used in many actuarial models, and this allows the formulas written above to be applied directly. In our model, taking I as the conditioning variable and X as the variable of interest, we obtain

(2.12)

, (2.14)

, (2.15)

and consider conditional mathematical expectations

(2.16)

(2.17)

Formulas (2.16) and (2.17) define functions of the random variable I, which can be written in the form of the following formulas:

Since X = 0 when I = 0, we have (2.21)

For I = 1 we have (2.22)

Formulas (2.21) and (2.22) can be combined: (2.23)

Thus, (2.24)

Substituting (2.21), (2.20) and (2.24) into (2.12) and (2.13), we obtain
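The resulting formulas, which the text later cites as (2.25) and (2.26), were lost; under the usual notation μ = E[B | I = 1] and σ² = Var(B | I = 1) (the same notation that appears in the table of Example 5.2 below) they are presumably

```latex
E[X] = \mu q,                                                   % (2.25)
\operatorname{Var}[X] = \mu^{2} q (1 - q) + \sigma^{2} q .       % (2.26)
```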

Let us apply the obtained formulas to compute E[X] and Var[X] in the automobile insurance example (Fig. 2.2). Since the density function of the random variable B given I = 1 is expressed by the formula

and moreover P(B = 2000 | I = 1) = 0.1, we have

Finally, setting q = 0.15, from formulas (2.25) and (2.26) we obtain the following equalities:

To describe other insurance situations, other models for the random variable B can be proposed.

Example: model for the number of deaths as a result of aviation catastrophes

As an example, consider a model for the number of deaths resulting from aviation accidents during one year of an airline's operations.

We can start with a random variable describing the number of deaths on one flight, and then sum such random variables over all flights of the year.

For one flight, the insured event will denote the occurrence of an aviation accident. The number of deaths caused by this accident will be represented by the product of two random variables: the load factor of the aircraft, i.e. the number of persons on board at the moment of the accident, and the proportion of deaths among those on board.

The number of deaths is represented in precisely this way because separate statistics for these two quantities are more readily available than statistics for their product. So, although the proportion of fatalities among those on board and the number of persons on board are probably related to each other, as a first approximation they can be assumed independent.

Sums of independent random variables

In the individual risk model, the insurance payments made by the insurance company are represented as a sum of payments to many individuals.

Let us recall two methods for determining the distribution of a sum of independent random variables. Consider first the sum of two random variables, whose sample space is shown in Fig. 3.1.

Fig. 2.3.1. The event X + Y ≤ s

The line x + y = s and the region below it constitute the event S = X + Y ≤ s. Therefore, the distribution function of the random variable S has the form (3.1)

For two discrete non-negative random variables we can use the formula of total probability and rewrite (3.1) as

If X and Y are independent, the last sum can be rewritten as

(3.3)

The probability function corresponding to this distribution function can be found by the formula

(3.4)

For continuous non-negative random variables, the formulas corresponding to formulas (3.2), (3.3) and (3.4) are

When one or both of the random variables X and Y have a distribution of mixed type (which is typical for individual risk models), the formulas are analogous but more cumbersome. For random variables that can also take negative values, the sums and integrals in the above formulas are taken over all values from −∞ to ∞.

In probability theory, the operation in formulas (3.3) and (3.6) is called the convolution of two distribution functions and is denoted by F_X * F_Y. The convolution operation can also be defined for a pair of probability functions or density functions, by formulas (3.4) and (3.7).

To determine the distribution of a sum of more than two random variables, we can iterate the convolution operation. For the sum of independent random variables X_1 + X_2 + ... + X_n, the distribution function of the first k + 1 terms is obtained as the convolution of the distribution function of the first k terms with the distribution function of X_{k+1}. A small code sketch of this iteration is given below.
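As a sketch (not from the original), iterated convolution of probability functions on the non-negative integers can be coded directly; the three example probability functions below are hypothetical placeholders, not the data of Example 3.1.

```python
from typing import List

def convolve_pmf(p: List[float], q: List[float]) -> List[float]:
    """Convolution of two probability functions on {0, 1, 2, ...}."""
    out = [0.0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            out[i + j] += pi * qj
    return out

# Hypothetical probability functions of X1, X2, X3 (values 0, 1, 2, ...)
f1 = [0.4, 0.3, 0.2, 0.1]
f2 = [0.5, 0.5]
f3 = [0.6, 0.3, 0.1]

f12 = convolve_pmf(f1, f2)       # distribution of X1 + X2
f123 = convolve_pmf(f12, f3)     # distribution of X1 + X2 + X3

# Distribution function as partial sums of the probability function
F123 = [sum(f123[: k + 1]) for k in range(len(f123))]
print(f123, F123)
```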

Example 3.1 illustrates this procedure for three discrete random variables.

Example 3.1. The random variables X_1, X_2 and X_3 are independent and have the distributions determined by columns (1), (2) and (3) of the table below.

Let us find the probability function and the distribution function of their sum S = X_1 + X_2 + X_3.

Solution. The table uses the notation introduced before this example:

Columns (1)-(3) contain the given information.

Column (4) is obtained from columns (1) and (2) using (3.4).

Column (5) is obtained from columns (3) and (4) using (3.4).

Determining column (5) completes the finding of the probability function of S. Its distribution function in column (8) is the set of partial sums of column (5), starting from the top.

For clarity we have included column (6), the distribution function for column (1); column (7), which can be obtained directly from columns (1) and (6) by applying (2.3.3); and column (8), determined similarly from columns (3) and (7). Column (5) can be determined from column (8) by successive subtraction.

Let us turn to two examples with continuous random variables.

Example 3.2. Let the random variable X have a uniform distribution on the interval (0, 2), and let the random variable Y be independent of X and have a uniform distribution on the interval (0, 3). Determine the distribution function of the random variable S = X + Y.

Solution. Since the distributions of X and Y are continuous, we use formula (3.6):

Then

The sample space of X and Y is illustrated in Fig. 3.2. The rectangular region contains all possible pairs of values of X and Y. The event of interest, X + Y ≤ s, is depicted in the figure for five values of s.

For each value of s, the line x + y = s intersects the y-axis at the point (0, s) and the boundary of the rectangle at another point. The values of the distribution function in these five cases are described by the following formula:

Fig. 3.2. Convolution of two uniform distributions
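A numerical sketch (not in the original) that approximates the distribution function of S = X + Y from Example 3.2 by Monte Carlo, for comparison with the piecewise formula the text refers to:

```python
import random

def F_S(s: float, n: int = 200_000) -> float:
    """Monte Carlo estimate of P(X + Y <= s) for X ~ U(0, 2), Y ~ U(0, 3)."""
    hits = sum(1 for _ in range(n)
               if random.uniform(0, 2) + random.uniform(0, 3) <= s)
    return hits / n

for s in (0.5, 1.5, 2.5, 3.5, 4.5):   # one value in each of the five ranges
    print(s, round(F_S(s), 3))
```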

Example 3.3. Consider three independent random variables, each having an exponential distribution. Find the density function of their sum by applying the convolution operation.

Solution. We have

Applying formula (3.7) successively, we obtain

Another method for determining the distribution of a sum of independent random variables is based on the uniqueness of the moment generating function, which for a random variable X is defined by the relation M_X(t) = E[e^{tX}].

If this mathematical expectation is finite for all t in some open interval containing the origin, then M_X(t) uniquely determines the distribution of X, in the sense that no distribution other than that of X can have M_X(t) as its moment generating function.

This uniqueness can be used as follows: for the sum S = X_1 + X_2 + ... + X_n,

If the X_i are independent, the mathematical expectation of the product in formula (3.8) equals the product of the individual expectations, so that
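Formulas (3.8) and (3.9) were lost in extraction; the standard identities they refer to are presumably

```latex
M_S(t) = E\!\left[e^{tS}\right]
       = E\!\left[\prod_{i=1}^{n} e^{tX_i}\right]               % (3.8)
       = \prod_{i=1}^{n} E\!\left[e^{tX_i}\right]
       = \prod_{i=1}^{n} M_{X_i}(t) .                            % (3.9)
```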

Finding an explicit expression for the unique distribution corresponding to the moment generating function (3.9) would complete the determination of the distribution of S. If it cannot be specified explicitly, it can be sought by numerical methods.

Example 3.4. Consider the random variables of Example 3.3. Determine the density function of S using the moment generating function of S.

Solution. By equality (3.9), M_S(t) can be written in a form that is then decomposed into partial fractions. Each resulting term is the moment generating function of an exponential distribution with the corresponding parameter, so the density function of S has the form

Example 3.5. In the study of random processes the inverse Gaussian distribution was introduced. It is used as the distribution of the random variable B, the amount of insurance payments. The density function and the moment generating function of the inverse Gaussian distribution are given by the formulas

Find the distribution of the sum of independent random variables having the same inverse Gaussian distribution.

Solution. Using formula (3.9), we obtain the following expression for the moment generating function of the sum:

A moment generating function corresponds to a unique distribution, and one can verify that the sum has an inverse Gaussian distribution with the corresponding parameters.

Approximations for the distribution of the sum

The central limit theorem provides a method for finding numerical values of the distribution of a sum of independent random variables. Usually this theorem is formulated for a sum of independent and identically distributed random variables X_1 + ... + X_n with common mean μ and variance σ².

For any n, the standardized sum (the sum minus its mean, divided by its standard deviation) has mathematical expectation 0 and variance 1. As is known, the sequence of distributions of such standardized sums (for n = 1, 2, ...) tends to the standard normal distribution. When n is large, this theorem is used to approximate the distribution of the sample mean by a normal distribution with mean μ and variance σ²/n. Similarly, the distribution of the sum of n random variables is approximated by a normal distribution with mean nμ and variance nσ².

The effectiveness of such an approximation depends not only on the number of terms but also on how close the distribution of the terms is to normal. Many elementary statistics courses state that n must be at least 30 for the approximation to be reasonable.

However, one of the programs for generating normally distributed random variables used in simulation modelling produces a normal random variable as a sum of only 12 independent random variables uniformly distributed on the interval (0, 1).
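A sketch of this classical trick (the centering by subtracting 6, which makes the mean 0 and the variance 1, is an added detail not stated in the text):

```python
import random

def approx_standard_normal() -> float:
    """Sum of 12 U(0,1) variates, centred: mean 0, variance 12 * (1/12) = 1."""
    return sum(random.random() for _ in range(12)) - 6.0

sample = [approx_standard_normal() for _ in range(100_000)]
mean = sum(sample) / len(sample)
var = sum((x - mean) ** 2 for x in sample) / len(sample)
print(round(mean, 3), round(var, 3))   # both should be close to 0 and 1
```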

In many individual risk models, the random variables entering the sum are not identically distributed. This will be illustrated by the examples in the next section.

The central limit theorem also extends to sequences of non-identically distributed random variables.

To illustrate some applications of the individual risk model, we will use the normal approximation of the distribution of a sum of independent random variables to obtain numerical solutions. If S = X_1 + ... + X_n, then the mathematical expectation of S is the sum of the expectations of the terms,

and further, if the random variables are independent, then the variance of S is the sum of their variances.

For the application under consideration we only need to:

  • find the means and variances of the random variables modelling the individual losses,
  • sum them to obtain the mean and variance of the losses of the insurance company as a whole,
  • use the normal approximation.

Below we illustrate this sequence of actions.

Applications to insurance

In this section, four examples illustrate the use of a normal approximation.

Example 5.1. A life insurance company offers one-year term death insurance contracts with payments of 1 and 2 units to persons whose probability of death is 0.02 or 0.10. The table below shows the number of persons n_k in each of the four classes formed according to the payment b_k and the probability of an insured event q_k:

k    q_k     b_k    n_k
1    0.02    1      500
2    0.02    2      500
3    0.10    1      300
4    0.10    2      500

The insurance company wants to collect from this group of 1,800 persons an amount equal to the 95th percentile of the distribution of the total insurance payments for the group. In addition, it wants each person's share of this amount to be proportional to that person's expected insurance payment.

The share of a person whose expected payment is E[X_j] should be (1 + θ)E[X_j]. From the 95th-percentile requirement it follows that P(S ≤ (1 + θ)E[S]) = 0.95. The excess over the expected payments, θE[S], is the risk loading, and θ is called the relative risk loading. Compute θ.

Solution. The value of θ is determined by the relation P(S ≤ (1 + θ)E[S]) = 0.95, where S = X_1 + X_2 + ... + X_1800. This statement about probability is equivalent to the following:

In accordance with what was said about the central limit theorem in Section 4, we approximate the distribution of the standardized sum by the standard normal distribution and use its 95th percentile, from which we obtain:

For the four classes into which the policyholders are divided, we obtain the following results:

k    q_k     b_k    Mean b_k q_k    Variance b_k² q_k (1 − q_k)    n_k
1    0.02    1      0.02            0.0196                          500
2    0.02    2      0.04            0.0784                          500
3    0.10    1      0.10            0.0900                          300
4    0.10    2      0.20            0.3600                          500

Thus,

Therefore, the relative risk loading equals
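A sketch (not from the original) reproducing the computation from the table above; the 95th percentile of the standard normal, 1.645, and the symbol θ for the relative risk loading are the usual choices and are assumed here:

```python
# (q_k, b_k, n_k) for the four classes from the table above
classes = [(0.02, 1, 500), (0.02, 2, 500), (0.10, 1, 300), (0.10, 2, 500)]

mean_S = sum(n * b * q for q, b, n in classes)
var_S = sum(n * b * b * q * (1 - q) for q, b, n in classes)

z95 = 1.645                                  # 95th percentile of N(0, 1)
theta = z95 * var_S ** 0.5 / mean_S          # relative risk loading

print(mean_S, var_S, round(theta, 4))        # 160, 256, about 0.1645
```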

Example 5.2. The customers of an automobile insurance company fall into two classes:

The distribution of insurance payments in each class is a truncated exponential distribution with parameters λ and L:

Class k    Number in class    Probability of an insured event q_k    λ    L
1          500                0.10                                   1    2.5
2          2000               0.05                                   2    5.0

The truncated exponential distribution is defined by the distribution function
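The distribution function itself was lost in extraction; for a truncated exponential with parameters λ and L it is presumably

```latex
F(x) =
\begin{cases}
0,                  & x < 0, \\
1 - e^{-\lambda x}, & 0 \le x < L, \\
1,                  & x \ge L,
\end{cases}
% density \lambda e^{-\lambda x} on (0, L); probability mass e^{-\lambda L} at x = L
```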

This is a distribution of mixed type with a density function on (0, L) and a "clump" of probability mass at the point L. The graph of this distribution function is shown in Fig. 5.1.

Fig. 5.1. Truncated exponential distribution

As before, the probability that the total insurance payments exceed the amount collected from the policyholders should equal 0.05. We assume that the relative risk loading θ must be the same in each of the two classes. Compute θ.

Solution. This example is very similar to the previous one. The only difference is that the amounts of the insurance payments are now random variables.

First we obtain expressions for the moments of the truncated exponential distribution; this is a preparatory step for applying formulas (2.25) and (2.26):

Using the parameter values given in the statement and formulas (2.25) and (2.26), we obtain the following results:

k    q_k     μ_k       σ²_k      Mean q_k μ_k    Variance μ²_k q_k (1 − q_k) + σ²_k q_k    n_k
1    0.10    0.9139    0.5828    0.09179         0.13411                                    500
2    0.05    0.5000    0.2498    0.02500         0.02436                                    2000

Thus S, the total amount of insurance payments, has moments

The condition determining θ remains the same as in Example 5.1, namely

Using the normal approximation, we obtain

Example 5.3. An insurance company's portfolio includes 16,000 one-year term death insurance contracts according to the following table:

The probability of an insured event q for each of the 16,000 clients (these events are assumed to be mutually independent) equals 0.02. The company wants to set its retention level. For each policyholder, the retention level is the amount below which this company (the ceding company) pays the claim itself, while payments exceeding this amount are covered under a reinsurance agreement by another company (the reinsurer).

For example, if the retention level is 20,000, the company retains coverage up to 20,000 for each policyholder and buys reinsurance covering the difference between the insurance payment and 20,000 for each of the 4,500 policyholders whose insurance payments exceed 20,000.

As a decision criterion, the company chooses to minimize the probability that the payments retained on its own account plus the amount paid for reinsurance will exceed 8,250,000. Reinsurance costs 0.025 per unit of coverage (i.e., 125% of the expected payment of 0.02 per unit).

We assume that the portfolio under consideration is closed: new insurance contracts concluded during the current year will not be taken into account in the decision process described.

Partial solution. We first carry out all calculations taking 10,000 as the monetary unit. As an illustration, suppose that the random variable S, the amount of payments retained on the company's own account, has the following form:

To these retained payments S, the amount of reinsurance premiums is added. In total, the overall cost of coverage under such a scheme is

The amount retained on the company's own account equals

Thus the total reinsured amount is 35,000 − 24,000 = 11,000, and the cost of reinsurance is

This means that at a retention level equal to 2, the retained insurance payments plus the reinsurance costs total the amount given above. The decision criterion is based on the probability that this total will exceed 825,

Using the normal distribution, we find that this probability is approximately 0.0062.

The expected insurance payments under stop-loss reinsurance (one of the forms of reinsurance) can be approximated using the normal distribution as the distribution of the total insurance payments.

Let the total insurance payments X have a normal distribution with mean μ and variance σ².

Example 5.4. Consider the insurance portfolio of Example 5.3. Let us find the mathematical expectation of the amount of insurance payments under a stop-loss reinsurance contract if

(a) there is no individual reinsurance and the deductible is set at 7,500,000;

(b) an individual retention of 20,000 is established on the individual insurance contracts and the portfolio deductible is 5,300,000.

Solution.

(a) In the absence of individual reinsurance and with 10,000 as the monetary unit,

the use of formula (5.2) gives

which is 43,770 in the original units.

(b) In Example 5.3 we obtained the mean and variance of the total insurance payments at an individual retention level of 20,000: they equal 480 and 784, respectively, taking 10,000 as the unit. Thus σ = 28.

the use of formula (5.2) gives

which is 4,140 in the original units.

In practice, it is often necessary to find the distribution law of a sum of random variables.

Let there be a system (X_1, X_2) of two continuous random variables, and let Y = X_1 + X_2 be their sum.

Let us find the distribution density of the random variable Y. In accordance with the general solution of the previous section, we find the region of the plane where x_1 + x_2 < y (Fig. 9.4.1):

Differentiating this expression with respect to y, we obtain the density of the random variable Y = X_1 + X_2:

Since the function x_1 + x_2 is symmetric in its arguments,

If the random variables X_1 and X_2 are independent, then formulas (9.4.2) and (9.4.3) take the form:


In the case where the random variables X_1 and X_2 are independent, one speaks of the composition of the distribution laws. To form the composition of two distribution laws means to find the distribution law of the sum of two independent random variables distributed according to these laws. A symbolic notation is used for the composition of distribution laws:

which essentially denotes formulas (9.4.4) or (9.4.5).

Example 1. The operation of two technical devices (TU 1 and TU 2) is considered. First TU 1 operates; after its failure, TU 2 is put into operation. The failure-free operation times of TU 1 and TU 2, X_1 and X_2, are independent and distributed according to exponential laws with parameters λ_1 and λ_2. Therefore the time Y of failure-free operation of the device consisting of TU 1 and TU 2 is determined by the formula Y = X_1 + X_2.

It is required to find the density of the random variable Y, i.e. the composition of two exponential laws with parameters λ_1 and λ_2.

Solution. By formula (9.4.4) we obtain (for y > 0)


If the composition of two exponential laws with the same parameters (λ_1 = λ_2 = λ) is formed, then expression (9.4.8) yields an indeterminate form 0/0, resolving which we obtain:

Comparing this expression with expression (6.4.8), we see that the composition of two identical exponential laws (λ_1 = λ_2 = λ) is a second-order Erlang law (9.4.9). Composing two exponential laws with different parameters λ_1 and λ_2, we obtain a generalized second-order Erlang law (9.4.8).
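Expressions (9.4.8) and (9.4.9) did not survive extraction; the standard densities for these two cases (assuming y > 0) are

```latex
g(y) = \frac{\lambda_1 \lambda_2}{\lambda_2 - \lambda_1}
       \left( e^{-\lambda_1 y} - e^{-\lambda_2 y} \right)   % generalized second-order Erlang, \lambda_1 \ne \lambda_2  (9.4.8)
g(y) = \lambda^{2}\, y\, e^{-\lambda y}                      % second-order Erlang, \lambda_1 = \lambda_2 = \lambda      (9.4.9)
```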

Problem 1. The distribution law of the difference of two random variables. The system of random variables (X_1, X_2) has the joint density f(x_1, x_2). Find the density of their difference Y = X_1 − X_2.

Solution. For the system (X_1, −X_2) the density will be f(x_1, −x_2); that is, we have replaced the difference by a sum. Consequently, the density of the random variable Y will be (see (9.4.2), (9.4.3)):

If the random variables X_1 and X_2 are independent, then

Example 2. Find the density of the difference of two independent exponentially distributed random variables with parameters λ_1 and λ_2.

Solution. By formula (9.4.11) we obtain

Fig. 9.4.2 Fig. 9.4.3

Figure 9.4.2 shows the density g(y). If the difference of two independent exponentially distributed random variables with the same parameters (λ_1 = λ_2 = λ) is considered, then g(y) = (λ/2)e^{−λ|y|}, the already familiar Laplace law (Fig. 9.4.3).

Example 3. Find the distribution law of the sum of two independent random variables X_1 and X_2 distributed according to Poisson laws with parameters a_1 and a_2.

Solution. Let us find the probability of the event (X_1 + X_2 = m) (m = 0, 1, ...):



Consequently, the random variable Y = X_1 + X_2 is distributed according to the Poisson law with parameter a = a_1 + a_2.
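The computation itself was lost; a standard reconstruction of the convolution step is

```latex
P(Y = m) = \sum_{k=0}^{m} P(X_1 = k)\, P(X_2 = m - k)
         = \sum_{k=0}^{m} \frac{a_1^{k}}{k!} e^{-a_1}\,
                           \frac{a_2^{\,m-k}}{(m-k)!} e^{-a_2}
         = \frac{e^{-(a_1 + a_2)}}{m!} \sum_{k=0}^{m} \binom{m}{k} a_1^{k} a_2^{\,m-k}
         = \frac{(a_1 + a_2)^{m}}{m!}\, e^{-(a_1 + a_2)} .
```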

Example 4. Find the distribution law of the sum of two independent random variables X_1 and X_2 distributed according to binomial laws with parameters (n_1, p) and (n_2, p) respectively.

Solution. Let us represent the random variable X_1 in the form

where X_i^(1) is the indicator of the event A in the i-th trial:

The distribution series of X_i^(1) has the form


We write a similar representation for the random variable X_2, where X_j^(2) is the indicator of the event A in the j-th trial:


Hence,

where each of the terms X_i^(1), X_j^(2) is an indicator of the event A:

Thus we have shown that the random variable Y is a sum of (n_1 + n_2) indicators of the event A, from which it follows that Y is distributed according to a binomial law with parameters (n_1 + n_2, p).

Note that if the probabilities p in the different series of trials are different, then the sum of two independent random variables distributed according to binomial laws will not be distributed according to a binomial law.

Examples 3 and 4 are easily generalized to an arbitrary number of terms. Composing Poisson laws with parameters a_1, a_2, ..., a_m, we again obtain a Poisson law with parameter a = a_1 + a_2 + ... + a_m.

Composing binomial laws with parameters (n_1, p), (n_2, p), ..., (n_m, p), we again obtain a binomial law with parameters (n, p), where n = n_1 + n_2 + ... + n_m.

We have thus proved important properties of the Poisson law and the binomial law: the "stability property". A distribution law is called stable if the composition of two laws of this type yields a law of the same type (only the parameters of the law differ). In Subsection 9.7 we will show that the normal law has the same stability property.

