THE INTEGRALS AND INTEGRAL TRANSFORMATIONS CONNECTED WITH THE JOINT VECTOR GAUSSIAN DISTRIBUTION

In many applications it is desirable to consider not one random vector but a number of random vectors with a joint distribution. This paper is devoted to the integrals and integral transformations connected with the joint vector Gaussian probability density function. Such integrals and transformations arise in statistical decision theory, in particular in the dual control theory based on statistical decision theory. One of the results presented in the paper is the integral of the joint Gaussian probability density function. The other results are the total probability formula and the Bayes formula formulated in terms of the joint vector Gaussian probability density function. As an example, the Bayesian estimates of the coefficients of a multiple regression function are obtained. The proposed integrals can be used as table integrals in various fields of research.

… random vectors. In this paper, the results of works [6, 7] are generalized to the case of the joint Gaussian distribution.
1. Integrals connected with the joint vector Gaussian distribution. The random $k_\Xi$-component vector $\Xi = (\Xi_1, \ldots, \Xi_{k_\Xi})^T$ is distributed by the Gaussian (or normal) law if its probability density function has the following form:

$$f(\xi) = \frac{1}{(2\pi)^{k_\Xi/2}\,|A^{-1}|^{1/2}} \exp\Big\{-\frac{1}{2}\,(\xi - \nu)^T A\,(\xi - \nu)\Big\}, \qquad (1)$$

where $\nu$ is the mathematical expectation and $A^{-1}$ is the covariance matrix of the vector $\Xi$. The following integral connected with function (1) was proved in [6]:

$$\int_{E^{k_\Xi}} \exp\Big\{-\frac{1}{2}\,\xi^T A\,\xi + B\,\xi\Big\}\,d\xi = (2\pi)^{k_\Xi/2}\,|A^{-1}|^{1/2} \exp\Big\{\frac{1}{2}\,B A^{-1} B^T\Big\}, \qquad (2)$$

where $B$ is the row vector, $A^{-1}$ is the matrix inverse to $A$, $|A^{-1}|$ is the determinant of the matrix $A^{-1}$, and $E^{k_\Xi}$ is the $k_\Xi$-dimensional Euclidean space. In some cases it is desirable to consider not one random vector with the Gaussian distribution but several random vectors with the joint Gaussian distribution. This case is possible to study by partitioning the vector $\xi$ into sub-vectors (3), so that the function under the integral sign in (2) takes the block form (4) with the block matrix $A = (A_{i,j})$ (5) and the correspondingly partitioned block row vector $B$; the integral of function (4) over $E^{k_\Xi}$ is integral (6). Let us proceed to the calculation of integral (6).
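Before the calculation, a quick numerical plausibility check of formula (2) as reconstructed above may be useful; the following sketch is not part of the original paper, and the matrices, the vector, and the integration box are illustrative choices:

```python
import numpy as np
from scipy import integrate

# Check integral (2) in two dimensions:
# I = (2*pi)^(k/2) * |A^{-1}|^(1/2) * exp(0.5 * B A^{-1} B^T),
# assuming A is symmetric positive definite and B is a row vector.
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])            # illustrative positive definite matrix
B = np.array([[0.3, -0.2]])           # illustrative row vector

def integrand(x2, x1):
    xi = np.array([x1, x2])
    return np.exp(-0.5 * xi @ A @ xi + (B @ xi).item())

# The integrand decays rapidly, so a finite box approximates E^2 well
numeric, _ = integrate.dblquad(integrand, -10.0, 10.0, -10.0, 10.0)

A_inv = np.linalg.inv(A)
closed = (2 * np.pi) ** (2 / 2) * np.sqrt(np.linalg.det(A_inv)) \
         * np.exp(0.5 * (B @ A_inv @ B.T).item())

print(numeric, closed)  # the two values agree to quadrature accuracy
```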
The application to the block matrix $A = (A_{i,j})$ (5) of the generalized (block) Gauss algorithm [8] gives us the block upper triangular matrix $G = (G_{i,j})$ (7): if $i > j$ then the blocks $G_{i,j} = 0$. The determinant of the block upper triangular matrix $G$ (7) is equal to the product of the determinants of the diagonal blocks and is the same as the determinant of the matrix $A$ [8]:

$$|G| = \prod_i |G_{i,i}| = |A|. \qquad (8)$$

The matrix $A$ can be represented in the form of

$$A = G^T \tilde{D}^{-1} G. \qquad (9)$$

We denote the diagonal blocks of the block diagonal matrix $\tilde{D}$ (9) as $\tilde{D}_{(i)}$; they coincide with the diagonal blocks of the matrix $G$, so that the following equalities are fulfilled:

$$|\tilde{D}| = \prod_i |\tilde{D}_{(i)}| = |G| = |A|. \qquad (10)$$

Let us suppose that we can find in the block form the inverse matrix $G^{-1}$. Then the change of variables

$$z = G\,\xi \qquad (11)$$

together with the block row vector

$$D = B\,G^{-1} \qquad (12)$$

transforms the integrated function (4) to the following function of the argument $z$:

$$F(z) = \exp\Big\{-\frac{1}{2}\,z^T \tilde{D}^{-1} z + D\,z\Big\}, \qquad (13)$$

since, given (9), $\xi^T A\,\xi = z^T \tilde{D}^{-1} z$ and $B\,\xi = B\,G^{-1} z = D\,z$. As is known, the following equality is fulfilled when the variables are replaced:

$$\int_{E^{k_\Xi}} \exp\Big\{-\frac{1}{2}\,\xi^T A\,\xi + B\,\xi\Big\}\,d\xi = \int_{E^{k_\Xi}} F(z)\,|J|\,dz, \qquad (14)$$

where $J$ is the Jacobian of the transformation (11); by (8), $J = |G^{-1}| = |A|^{-1}$. Let us rewrite function $F(z)$ (13) as the function of the elements of its matrices: since $\tilde{D}$ is block diagonal, $F(z)$ factorizes over the sub-vectors $z_{(i)}$ of the vector $z$,

$$F(z) = \prod_i \exp\Big\{-\frac{1}{2}\,z_{(i)}^T \tilde{D}_{(i)}^{-1} z_{(i)} + D_{(i)} z_{(i)}\Big\}. \qquad (15)$$

Substituting (15) into (14), we obtain

$$\int_{E^{k_\Xi}} \exp\Big\{-\frac{1}{2}\,\xi^T A\,\xi + B\,\xi\Big\}\,d\xi = |J| \prod_i \int_{E^{k_i}} \exp\Big\{-\frac{1}{2}\,z_{(i)}^T \tilde{D}_{(i)}^{-1} z_{(i)} + D_{(i)} z_{(i)}\Big\}\,dz_{(i)}, \qquad (16)$$

where $k_i$ is the dimension of the sub-vector $z_{(i)}$ ($\sum_i k_i = k_\Xi$). The integral in the right-hand side of expression (16) is the integral of type (2). In such a case

$$\int_{E^{k_i}} \exp\Big\{-\frac{1}{2}\,z_{(i)}^T \tilde{D}_{(i)}^{-1} z_{(i)} + D_{(i)} z_{(i)}\Big\}\,dz_{(i)} = (2\pi)^{k_i/2}\,|\tilde{D}_{(i)}|^{1/2} \exp\Big\{\frac{1}{2}\,D_{(i)} \tilde{D}_{(i)} D_{(i)}^T\Big\}. \qquad (17)$$

Substituting (17) into (16), we obtain the following formula:

$$\int_{E^{k_\Xi}} \exp\Big\{-\frac{1}{2}\,\xi^T A\,\xi + B\,\xi\Big\}\,d\xi = (2\pi)^{k_\Xi/2}\,|A|^{-1}\,|\tilde{D}|^{1/2} \exp\Big\{\frac{1}{2}\,D \tilde{D} D^T\Big\}. \qquad (18)$$

Let us come back in (18) from the matrices $D$ and $\tilde{D}$ to the matrices $A$ and $B$. Since $|\tilde{D}| = |A|$ (formulae (8), (10)) and $D = B\,G^{-1}$ (formula (12)), then $|A|^{-1}\,|\tilde{D}|^{1/2} = |A^{-1}|^{1/2}$ and $D \tilde{D} D^T = B\,G^{-1} \tilde{D}\,G^{-T} B^T = B A^{-1} B^T$, and instead of (18) we get the following expression:

$$\int_{E^{k_\Xi}} \exp\Big\{-\frac{1}{2}\,\xi^T A\,\xi + B\,\xi\Big\}\,d\xi = (2\pi)^{k_\Xi/2}\,|A^{-1}|^{1/2} \exp\Big\{\frac{1}{2}\,B A^{-1} B^T\Big\}. \qquad (19)$$

It should be taken into account that the matrices $A$ and $B$ in (19) are block matrices, i.e. $A = (A_{i,j})$ and $B = (B_{(1)}, \ldots, B_{(n)})$, and one deals with the block operations of inversion and of multiplication. Let us summarize the obtained result in the form of the following theorem.
T h e o r e m 1 (the integral connected with the joint Gaussian distribution of the random vectors). If $A$ (5) is a symmetric positive definite block matrix and $B$ is a block row vector, then equality (19) is fulfilled, where the inversion and multiplication in (19) are the block operations.
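The derivation above can be exercised numerically. The sketch below (with illustrative matrices, not taken from the paper) performs one block Gauss elimination step for a 2x2 block matrix, checks the determinant property (8), and evaluates the right-hand side of (19) on the assembled matrices:

```python
import numpy as np

# One step of the generalized (block) Gauss algorithm for a 2x2 block matrix,
# illustrating (7)-(8) and the value of (19); all matrices are illustrative.
A11 = np.array([[2.0, 0.1],
                [0.1, 1.5]])
A12 = np.array([[0.2],
                [0.0]])
A21 = A12.T
A22 = np.array([[1.0]])
A = np.block([[A11, A12],
              [A21, A22]])                   # block matrix (5)

# Block elimination: G is block upper triangular (7)
S = A22 - A21 @ np.linalg.inv(A11) @ A12     # Schur complement block
G = np.block([[A11, A12],
              [np.zeros_like(A21), S]])

# Property (8): |G| = |A11| * |S| = |A|
print(np.linalg.det(G), np.linalg.det(A11) * np.linalg.det(S), np.linalg.det(A))

# Right-hand side of (19) evaluated on the assembled block matrices
B = np.array([[0.3, -0.2, 0.1]])             # block row vector (B1, B2)
k = A.shape[0]
A_inv = np.linalg.inv(A)
value = (2 * np.pi) ** (k / 2) * np.sqrt(np.linalg.det(A_inv)) \
        * np.exp(0.5 * (B @ A_inv @ B.T).item())
print(value)
```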

2. Total probability formulae for the joint vector Gaussian distribution.
T h e o r e m 2 (total probability formula for the joint vector Gaussian distribution). Let the conditional probability density function $f(y/\xi)$ be represented in the form …, where $W$ is a scalar, and the probability density function $f(\xi)$ be represented in the form …. Then the probability density function $f(y) = \int_{E^{k_\Xi}} f(y/\xi)\,f(\xi)\,d\xi$ is Gaussian and is represented in the form …, where $A$, $B$, $C$ are determined by formulae (24), (25), (26).

P r o o f. The product $f(y/\xi)\,f(\xi)$ is, as the function of $\xi$, of the same type as the function under the integral sign in (6), with $A$, $B$, $C$ determined by formulae (24), (25), (26). In order to integrate the received function we can use formula (19) and write the equality …. The proof is completed.

T h e o r e m 3. Let the conditional probability density function $f(y/\xi)$ be represented in the form … and the probability density function $f(\xi)$ have the following form …. Then the probability density function $f(y) = \int_{E^{k_\Xi}} f(y/\xi)\,f(\xi)\,d\xi$ is again Gaussian. This theorem is the generalization of the theorem proved in [7].
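For intuition about what the total probability formulae assert, here is a minimal Monte Carlo sketch for the Gaussian case; the symbols H, R, m, P are illustrative assumptions, not the paper's notation. If $f(y/\xi) = N(H\xi, R)$ and $f(\xi) = N(m, P)$, the marginal is $f(y) = N(Hm,\, HPH^T + R)$:

```python
import numpy as np
from scipy.stats import multivariate_normal

# Monte Carlo check of f(y) = int f(y|xi) f(xi) dxi for Gaussian factors.
# Illustrative model: f(y|xi) = N(H xi, R), f(xi) = N(m, P).
rng = np.random.default_rng(0)
H = np.array([[1.0, 0.5]])
R = np.array([[0.2]])
m = np.array([0.3, -0.1])
P = np.array([[1.0, 0.2],
              [0.2, 0.5]])

y0 = np.array([0.4])                       # point at which f(y) is evaluated

xis = rng.multivariate_normal(m, P, size=200_000)
means = xis @ H.T                          # H xi for every sample, shape (N, 1)
mc = np.mean(np.exp(-0.5 * (y0 - means) ** 2 / R[0, 0])
             / np.sqrt(2 * np.pi * R[0, 0]))

closed = multivariate_normal.pdf(y0, mean=H @ m, cov=H @ P @ H.T + R)
print(mc, closed)                          # agree to Monte Carlo accuracy
```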

3. The Bayes formula for the joint vector Gaussian distribution.
T h e o r e m 4 (the Bayes formula for the joint vector Gaussian distribution). Let the conditional probability density function $f(y/\xi)$ be represented in the form …, where $W$ is a scalar, and the probability density function $f(\xi)$ be represented in the form …. Then the posterior probability density function $f(\xi/y)$ has the form (31), where $A$ and $B$ are defined by formulae (24), (25).

Formula (31) can be written in the following form (32). We can be convinced of the equality between (31) and (32) by performing the multiplication in (32), given expression (33). Expression (33) contains not the scalar components but the block matrices, so that the parameters of the posterior Gaussian distribution are defined in the block form. For performing these operations we need the programs of transposing, multiplying and inverting of the block matrices.
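As a concrete instance of the Gaussian Bayes update, the sketch below computes Bayesian estimates of the coefficients of a multiple regression function under the standard conjugate model; the symbols X, beta_true, sigma, m0, P0 are illustrative assumptions, not the paper's notation:

```python
import numpy as np

# Conjugate-Gaussian Bayes update for multiple regression y = X beta + noise:
# prior beta ~ N(m0, P0), noise ~ N(0, sigma^2 I); the posterior is Gaussian
# with precision P0^{-1} + X^T X / sigma^2 (precisions add, means combine).
rng = np.random.default_rng(1)
n, p = 50, 3
X = rng.normal(size=(n, p))
beta_true = np.array([1.0, -0.5, 2.0])
sigma = 0.3
y = X @ beta_true + sigma * rng.normal(size=n)

m0 = np.zeros(p)                 # prior mean of the coefficients
P0 = np.eye(p)                   # prior covariance of the coefficients

post_prec = np.linalg.inv(P0) + X.T @ X / sigma**2
post_cov = np.linalg.inv(post_prec)
post_mean = post_cov @ (np.linalg.inv(P0) @ m0 + X.T @ y / sigma**2)
print(post_mean)                 # Bayesian estimates of the coefficients
```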

Conclusion.
The results obtained in this paper are aimed at solving the dual control problems formulated in works [9, 10]. The sequence of control actions in the dual control of multivariate stochastic objects is defined by the functional equations of dynamic programming [9], which contain integrals like those calculated in this article. One practical example is the problem of optimal allowance distribution treated as a dual control problem in work [10].