Random Variables

MCIntegration.Dist.CompositeVarMethod
function CompositeVar(vargs...; adapt=true)

Create a product of different types of random variables. The bundled variables will be sampled from their product distribution.

Arguments:

  • vargs : tuple of Variables
  • adapt : turn on or off the adaptive map
source
MCIntegration.Dist.ContinuousMethod
function Continuous(lower::Float64, upper::Float64; ninc = 1000, alpha=2.0, adapt=true)

Create a pool of continuous variables sampled from the interval [lower, upper) with a distribution generated by a Vegas map (see below). The distribution is trained after each iteration if adapt = true.

Arguments:

  • lower : lower bound
  • upper : upper bound
  • ninc : number of increments
  • alpha : learning rate
  • adapt : turn on or off the adaptive map

Remark:

The Vegas map transforms the original integration variable x into a new variable y, so that the integrand is as flat as possible in y:

\[\begin{aligned} x_0 &= a \\ x_1 &= x_0 + \Delta x_0 \\ x_2 &= x_1 + \Delta x_1 \\ \cdots \\ x_N &= x_{N-1} + \Delta x_{N-1} = b \end{aligned}\]

where a and b are the limits of integration. The grid specifies the transformation function at the points $y=i/N$ for $i=0,1\ldots N$:

\[x(y=i/N) = x_i\]

Linear interpolation is used between those points. The Jacobian for this transformation is:

\[J(y) = J_i = N \Delta x_i\]

The grid point $x_i$ is trained after each iteration.

source
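The transformation above can be sketched as a small standalone function. This is a Python illustration of the formulas, not MCIntegration's implementation; the grid array is assumed to hold the trained points grid[i] = x(y = i/N):

```python
def vegas_transform(y, grid):
    """Map y in [0, 1] to x by linear interpolation on the grid,
    where grid[i] = x(y = i/N). Returns (x, jacobian)."""
    N = len(grid) - 1
    t = y * N
    i = min(int(t), N - 1)        # index of the increment containing y
    frac = t - i                  # fractional position within the increment
    dx = grid[i + 1] - grid[i]    # Delta x_i
    x = grid[i] + frac * dx
    jacobian = N * dx             # J(y) = N * Delta x_i
    return x, jacobian
```

With a uniform grid the map reduces to a linear rescaling and the Jacobian is constant and equal to the interval length b − a.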
MCIntegration.Dist.DiscreteType
function Discrete(lower::Int, upper::Int; distribution=nothing, alpha=2.0, adapt=true)

Create a pool of integer variables sampled from the closed set {lower, lower+1, ..., upper} with the distribution Discrete.distribution. The distribution is trained after each iteration if adapt = true.

Arguments:

  • lower : lower bound
  • upper : upper bound
  • distribution : initial distribution
  • alpha : learning rate
  • adapt : turn on or off the adaptive map
source
MCIntegration.Dist.accumulate!Method
accumulate!(var::Variable, idx, weight) = nothing

Accumulate a new sample with a given weight for the idx-th element of the Variable pool var.

source
MCIntegration.Dist.locateMethod
function locate(accumulation, p)

Return the index idx of p in accumulation so that accumulation[idx] <= p < accumulation[idx+1]. If p is not in accumulation (namely accumulation[1] > p or accumulation[end] <= p), return -1. A bisection algorithm is used, so the time complexity is O(log(n)) with n = length(accumulation).

source
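The same search can be sketched with the Python standard library's bisect module (0-based indexing here, whereas the docstring above uses Julia's 1-based indexing):

```python
import bisect

def locate(accumulation, p):
    # Return the 0-based idx with accumulation[idx] <= p < accumulation[idx + 1],
    # or -1 when p falls outside [accumulation[0], accumulation[-1]).
    if p < accumulation[0] or p >= accumulation[-1]:
        return -1
    # bisect_right finds the insertion point in O(log n)
    return bisect.bisect_right(accumulation, p) - 1
```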
MCIntegration.Dist.padding_probabilityMethod
padding_probability(config, idx)

Calculate the joint probability of missing variables for the idx-th integral compared to the full variable set.

padding_probability(config, idx) = total_probability(config) / probability(config, idx)

source
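In other words, the result is the product of the probabilities of the variables the idx-th integrand does not use, since the factors for the used variables cancel in the ratio above. A minimal Python sketch (the names var_probs and used are illustrative, not the package's API):

```python
def padding_probability(var_probs, used):
    # var_probs: sampling probability of each variable's current value
    # used: set of variable indices the idx-th integrand depends on
    prob = 1.0
    for i, p in enumerate(var_probs):
        if i not in used:
            prob *= p            # product over the "missing" variables only
    return prob
```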
MCIntegration.Dist.rescaleFunction

function rescale(dist::AbstractVector, alpha=1.5)

Rescale the dist array to avoid overreacting to atypically large numbers.

There are three steps:

  1. dist is first normalized to [0, 1].
  2. Values close to 1.0 are changed little, while those close to zero are amplified to an extent controlled by alpha.
  3. Finally, the rescaled dist array is normalized to [0, 1] again.

See Eq. (19) of https://arxiv.org/pdf/2009.05112.pdf for more detail.

source
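The three steps can be sketched with the standard Vegas damping formula ((d − 1)/ln d)^alpha from the reference above. This is a Python illustration under that assumption, not the package's code; the edge cases at 0 and 1 take the limiting values:

```python
import math

def rescale(dist, alpha=1.5):
    # 1) normalize to [0, 1]
    peak = max(dist)
    d = [x / peak for x in dist]

    # 2) damp: x near 1 barely changes, x near 0 is amplified; alpha sets the strength
    def damp(x):
        if x <= 0.0:
            return 0.0           # limit of ((x - 1)/ln x)^alpha as x -> 0
        if x >= 1.0:
            return 1.0           # limit as x -> 1
        return ((x - 1.0) / math.log(x)) ** alpha

    d = [damp(x) for x in d]
    # 3) normalize the result to [0, 1] again
    peak = max(d)
    return [x / peak for x in d]
```

For example, with alpha = 1 an entry at 1% of the maximum is lifted to roughly 21% of the maximum, while the maximum itself stays at 1.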
MCIntegration.Dist.smoothFunction
function smooth(dist::AbstractVector, factor=6)

Smooth the distribution by averaging each element with its two nearest neighbors. The averaging weights are 1 : factor : 1 for elements that are not on the boundary.

source
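A Python sketch of this averaging; the boundary treatment (a factor : 1 average with the single available neighbor) is an assumption, not taken from the source:

```python
def smooth(dist, factor=6):
    n = len(dist)
    if n < 3:
        return list(dist)
    out = [0.0] * n
    # interior elements: weighted average with weights 1 : factor : 1
    for i in range(1, n - 1):
        out[i] = (dist[i - 1] + factor * dist[i] + dist[i + 1]) / (factor + 2)
    # boundary elements (assumed treatment): factor : 1 average with the one neighbor
    out[0] = (factor * dist[0] + dist[1]) / (factor + 1)
    out[-1] = (dist[-2] + factor * dist[-1]) / (factor + 1)
    return out
```

A uniform distribution is a fixed point of this smoothing, while an isolated spike is spread onto its neighbors.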