2.4 Distributions of Functions of Random Variables and Matrices

This is a consolidated post containing two former posts on distributions of functions of variables.

2.4.1 Distribution of a Simple Function of Gamma Variables

In this post, we look at a nifty result presented in [11], where the probability density function (pdf) of $r_1/(c+r_2)$ for two independent Gamma-distributed random variables $r_1 \sim \mathcal{G}(k_1, \theta_1)$ and $r_2 \sim \mathcal{G}(k_2, \theta_2)$ is derived.

The derivation is an exercise (for the scalar case) in computing the pdf of a function of random variables by constructing the joint pdf and marginalizing after a change of variables with the corresponding Jacobian. A similar approach can also be used for matrix-variate distributions (which would make a good topic for another post).

Theorem 6.

Let $c > 0$, and let $r_1 \sim \mathcal{G}(k_1, \theta_1)$ and $r_2 \sim \mathcal{G}(k_2, \theta_2)$ be independent random variables. Then the pdf of $r = r_1/(c + r_2)$, denoted by $p_r(r; k_1, \theta_1, k_2, \theta_2, c)$, is given by

$$p_r(r; k_1, \theta_1, k_2, \theta_2, c) = K_r\, r^{k_1 - 1} \exp\!\left(-\frac{rc}{\theta_1}\right) U\!\left(k_2,\, k_1 + k_2 + 1;\; c\left(\frac{r}{\theta_1} + \frac{1}{\theta_2}\right)\right), \qquad (2.42)$$

where

$$U(a, b; z) = \frac{1}{\Gamma(a)} \int_0^{+\infty} \exp(-zx)\, x^{a-1} (1 + x)^{b-a-1}\, \mathrm{d}x$$

is the hypergeometric U-function [15, Chapter 13, Kummer function], and Kr is a constant ensuring that the integral over the pdf equals one.
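This integral representation is easy to check numerically; below is a minimal Python sketch (the parameter values are arbitrary test choices), comparing the integral against SciPy's `hyperu` implementation of the same function:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, hyperu

a, b, z = 2.5, 4.0, 1.5  # arbitrary test values with a > 0, z > 0

# Evaluate the integral representation of the hypergeometric U-function
integral, _ = quad(lambda x: np.exp(-z * x) * x**(a - 1) * (1 + x)**(b - a - 1),
                   0, np.inf)
u_from_integral = integral / gamma(a)

# Compare against SciPy's direct implementation
u_direct = hyperu(a, b, z)
```

The two values agree to within the quadrature tolerance.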

Proof.

As r1 and r2 are independent, their joint pdf, denoted by pr1,r2(r1,r2;k1,θ1,k2,θ2), is given by

$$\frac{1}{\Gamma(k_1)\, \theta_1^{k_1}\, \Gamma(k_2)\, \theta_2^{k_2}}\, r_1^{k_1 - 1} r_2^{k_2 - 1} \exp\!\left(-\frac{r_1}{\theta_1} - \frac{r_2}{\theta_2}\right). \qquad (2.43)$$

Applying the transformation $r = r_1/(c + r_2)$, with Jacobian $\frac{\mathrm{d}r_1}{\mathrm{d}r} = c + r_2$, we obtain the transformed pdf, $p_{r, r_2}(r, r_2; k_1, \theta_1, k_2, \theta_2, c)$, as

$$K_r\, r^{k_1 - 1} (c + r_2)^{k_1}\, r_2^{k_2 - 1} \exp\!\left(-\frac{rc}{\theta_1} - \left(\frac{r}{\theta_1} + \frac{1}{\theta_2}\right) r_2\right), \qquad (2.44)$$

where Kr is a constant ensuring that the integral over the pdf equals one. Next, pr(r;k1,θ1,k2,θ2,c) is obtained by marginalization as

$$p_r(r; k_1, \theta_1, k_2, \theta_2, c) = \int_0^{+\infty} p_{r, r_2}(r, r_2; k_1, \theta_1, k_2, \theta_2, c)\, \mathrm{d}r_2, \qquad (2.45)$$

where the integration is conducted using the integral representation of the hypergeometric U-function from [15, Chapter 13, Kummer function] to obtain the expression in the theorem statement. ∎
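The theorem is straightforward to sanity-check by Monte Carlo. A minimal Python sketch follows (parameter values arbitrary). Note that the post leaves $K_r$ unspecified; collecting the constants in the proof gives $K_r = c^{k_1 + k_2} / (\Gamma(k_1)\, \theta_1^{k_1}\, \theta_2^{k_2})$, which the sketch assumes:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, hyperu

k1, th1, k2, th2, c = 2.0, 1.0, 3.0, 0.5, 1.5  # arbitrary example parameters

# Normalizer obtained by collecting the constants in the derivation (assumed)
Kr = c**(k1 + k2) / (gamma(k1) * th1**k1 * th2**k2)

def pdf(r):
    # Expression (2.42) with the assumed K_r
    z = c * (r / th1 + 1.0 / th2)
    return Kr * r**(k1 - 1) * np.exp(-r * c / th1) * hyperu(k2, k1 + k2 + 1.0, z)

# The pdf should integrate to one over r > 0
total, _ = quad(pdf, 0, np.inf)

# Monte Carlo histogram of r = r1 / (c + r2)
rng = np.random.default_rng(0)
r = rng.gamma(k1, th1, 200_000) / (c + rng.gamma(k2, th2, 200_000))
hist, edges = np.histogram(r, bins=40, range=(0.0, 4.0), density=True)
mids = 0.5 * (edges[:-1] + edges[1:])
max_err = np.max(np.abs(hist - pdf(mids)))
```

The histogram of simulated samples tracks the analytical pdf to within sampling noise.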

Version History

  1. First published: 19th Oct. 2021 on aravindhk-math.blogspot.com

  2. Modified: 17th Dec. 2023 – Style updates for

2.4.2 Distribution of the Determinant of a Complex-Valued Sample Correlation Matrix

In this post, we look at the distribution of the determinant of the sample correlation matrix of realizations of a complex-valued Gaussian random vector. The distribution for a real-valued Gaussian random vector was developed in [18], and we largely follow that framework. Thanks to Prashant Karunakaran for bringing this problem and its many applications to my attention in late 2017/early 2018.

Let $\boldsymbol{x}$ be a Gaussian random vector of length $p$ with mean $\boldsymbol{\mu} \in \mathbb{C}^{p}$ and covariance $\boldsymbol{\Sigma} \in \mathbb{C}^{p \times p}$. Let $\boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)}, \ldots, \boldsymbol{x}^{(n)}$ denote $n$ realizations, $n \geq p$, of $\boldsymbol{x}$. In the terminology of [18], the adjusted sample covariance matrix is given by

$$\boldsymbol{S} = \frac{1}{n} \sum_{i=1}^{n} \left(\boldsymbol{x}^{(i)} - \bar{\boldsymbol{x}}\right)\left(\boldsymbol{x}^{(i)} - \bar{\boldsymbol{x}}\right)^{H},$$

where 𝒙¯ is the sample mean given by

$$\bar{\boldsymbol{x}} = \frac{1}{n} \sum_{i=1}^{n} \boldsymbol{x}^{(i)}.$$

Note that the adjusted sample covariance matrix is positive semi-definite.

The correlation matrix 𝑹 is defined as:

$$\boldsymbol{R} = \boldsymbol{D}^{-\frac{1}{2}}\, \boldsymbol{S}\, \boldsymbol{D}^{-\frac{1}{2}},$$

where 𝑫=Diag(𝑺) is a diagonal matrix with the diagonal elements of 𝑺 on the main diagonal. Hence, 𝑹 has unit diagonal elements and is independent of the variance of the elements of 𝒙.
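As a concrete illustration of these definitions, the quantities above can be computed for simulated complex Gaussian data; a minimal NumPy sketch (dimensions chosen arbitrarily with $n \geq p$):

```python
import numpy as np

rng = np.random.default_rng(1)
p, n = 4, 6  # arbitrary example dimensions, n >= p

# n realizations of a length-p complex Gaussian vector, one per column
x = rng.standard_normal((p, n)) + 1j * rng.standard_normal((p, n))

xbar = x.mean(axis=1, keepdims=True)          # sample mean
S = (x - xbar) @ (x - xbar).conj().T / n      # adjusted sample covariance
d = np.sqrt(np.diag(S).real)                  # square roots of Diag(S)
R = S / np.outer(d, d)                        # D^(-1/2) S D^(-1/2)
detR = np.linalg.det(R).real                  # real for Hermitian R
```

As expected, `R` is Hermitian with unit diagonal, and its determinant lies in $(0, 1]$ by Hadamard's inequality.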

Now, for real-valued $\boldsymbol{x}$, the determinant of $\boldsymbol{R}$, denoted by $|\boldsymbol{R}|$, is shown in [18, Theorem 2] to be a product of $p-1$ Beta-distributed scalar variables $\mathrm{Beta}\!\left(\frac{n-1-i}{2}, \frac{i}{2}\right)$, $i = 1, \ldots, p-1$. The density of the product can be given in terms of the Meijer G-function as follows [18, Theorem 2]:

$$g(x; n, p) = \frac{\left[\Gamma\!\left(\frac{n-1}{2}\right)\right]^{p-1}}{\Gamma\!\left(\frac{n-2}{2}\right) \cdots \Gamma\!\left(\frac{n-p}{2}\right)}\, G^{\,p-1,\,0}_{\,p-1,\,p-1}\!\left(x \,\middle|\, \begin{matrix} \frac{n-3}{2}, \ldots, \frac{n-3}{2} \\ \frac{n-4}{2}, \ldots, \frac{n-(p+2)}{2} \end{matrix}\right).$$

Analogously, for complex-valued $\boldsymbol{x}$, $|\boldsymbol{R}|$ is a product of $p-1$ Beta-distributed scalar variables $\mathrm{Beta}(n-i, i)$, $i = 1, \ldots, p-1$. The density of the product can now be given in terms of the Meijer G-function, in a straightforward manner, as follows.

$$g(x; n, p) = \frac{\left[\Gamma(n)\right]^{p-1}}{\Gamma(n-1) \cdots \Gamma(n-p+1)}\, G^{\,p-1,\,0}_{\,p-1,\,p-1}\!\left(x \,\middle|\, \begin{matrix} n-1, \ldots, n-1 \\ n-2, \ldots, n-p \end{matrix}\right).$$
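Before turning to the Mathematica program, the complex-case density can be cross-checked in Python with mpmath's `meijerg` (example dimensions chosen arbitrarily): the density should integrate to one, and its mean should match the mean $\prod_{i=1}^{p-1} (n-i)/n$ implied by the Beta-product representation.

```python
import numpy as np
from mpmath import mp, meijerg, quad

n, p = 6, 4  # arbitrary example dimensions, n >= p

def g(x):
    # [Gamma(n)]^(p-1) / (Gamma(n-1)...Gamma(n-p+1)) times
    # G^{p-1,0}_{p-1,p-1} with top parameters n-1,...,n-1
    # and bottom parameters n-2,...,n-p
    coef = mp.gamma(n)**(p - 1)
    for i in range(1, p):
        coef /= mp.gamma(n - i)
    return coef * meijerg([[], [n - 1] * (p - 1)],
                          [[n - i for i in range(2, p + 1)], []], x)

total = quad(g, [0, 1])                 # should be 1
mean = quad(lambda x: x * g(x), [0, 1])

# E[Beta(n-i, i)] = (n-i)/n, so E|R| = prod_{i=1}^{p-1} (n-i)/n
beta_mean = np.prod([(n - i) / n for i in range(1, p)])
```

Both checks agree to high precision, mirroring the parameter layout used in the Mathematica `MeijerG` call below.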

In the following, a Mathematica program for numerical simulation and the corresponding output are provided.

n = 6; p = 4; (* example dimensions, chosen so that n >= p *)
gC[x_, n_, p_] := (Gamma[n])^(p - 1)/Product[Gamma[n - i], {i, 1, p - 1}]*
   MeijerG[{{}, Table[n - 1, {i, 1, p - 1}]},
    {Table[n - i, {i, 2, p}], {}}, x];
r[x_] := Module[{d}, d = DiagonalMatrix[Diagonal[x]];
   MatrixPower[d, -1/2] . x . MatrixPower[d, -1/2]];
\[ScriptCapitalD] = MatrixPropertyDistribution[
   Det[r[(xr + I xi) . ConjugateTranspose[xr + I xi]]],
   {xr \[Distributed] MatrixNormalDistribution[
      IdentityMatrix[p], IdentityMatrix[n]],
    xi \[Distributed] MatrixNormalDistribution[
      IdentityMatrix[p], IdentityMatrix[n]]}];
data = Re[RandomVariate[\[ScriptCapitalD], 100000]];
\[ScriptCapitalD]1 = SmoothKernelDistribution[data];
Plot[{PDF[\[ScriptCapitalD]1, x], gC[x, n, p]}, {x, 0, 1},
 PlotLabels -> {"Numerical", "Analytical"},
 AxesLabel -> {"u", "p(u)"}]

The following figure shows that the numerical and analytical results match perfectly for the example case $n = 6$, $p = 4$.

[Figure: numerical (kernel-density) and analytical pdfs of $|\boldsymbol{R}|$]

Version History

  1. First published: 12th Dec. 2021 on aravindhk-math.blogspot.com

  2. Modified: 17th Dec. 2023 – Style updates for