Random Variables
MCIntegration.Dist.CompositeVar — Type

mutable struct CompositeVar{V}
A composite variable is a tuple of variables. The probability of the composite variable is the product of the probabilities of the bundled variables.
Fields:
- vars: tuple of Variables
- prob: probability of the composite variable
- offset: offset of the variable pool; all variables in the pool share the same offset
- adapt: turn the adaptive map on or off
- size: size of each variable pool; all variables in the pool share the same size
- _prob_cache: cache of the probability of the composite variable
MCIntegration.Dist.CompositeVar — Method

function CompositeVar(vargs...; adapt=true)
Create a product of different types of random variables. The bundled variables will be sampled with the product of their distributions.
Arguments:
- vargs: tuple of Variables
- adapt: turn the adaptive map on or off
MCIntegration.Dist.Continuous — Type

function Continuous(lower, upper, size=MaxOrder; offset=0, alpha=2.0, adapt=true, ninc=1000, grid=collect(LinRange(lower, upper, ninc)))
Create a pool of continuous variables sampled from the set [lower, upper) with a distribution generated by a Vegas map (see below). The distribution is trained after each iteration if adapt = true.
Arguments:
- lower: lower bound
- upper: upper bound
- ninc: number of increments
- alpha: learning rate
- adapt: turn the adaptive map on or off
- grid: grid points for the Vegas map
Remark:
Vegas map maps the original integration variables x into new variables y, so that the integrand is as flat as possible in y:
\[\begin{aligned} x_0 &= a \\ x_1 &= x_0 + \Delta x_0 \\ x_2 &= x_1 + \Delta x_1 \\ \cdots \\ x_N &= x_{N-1} + \Delta x_{N-1} = b \end{aligned}\]
where a and b are the limits of integration. The grid specifies the transformation function at the points $y=i/N$ for $i=0,1\ldots N$:
\[x(y=i/N) = x_i\]
Linear interpolation is used between those points. The Jacobian for this transformation is:
\[J(y) = J_i = N \Delta x_i\]
The grid point $x_i$ is trained after each iteration.
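The transformation above can be sketched in a few lines of plain Julia. This is a minimal illustration of the piecewise-linear map and its Jacobian, assuming a trained grid of N+1 points; `vegas_map` is a hypothetical helper name, not the package API.

```julia
# Sketch of the piecewise-linear Vegas map x(y) with Jacobian J(y) = N Δxᵢ.
# `grid` holds the N+1 trained points x_0, ..., x_N for y in [0, 1).
function vegas_map(y::Float64, grid::AbstractVector{Float64})
    N = length(grid) - 1                 # number of increments
    i = min(floor(Int, y * N), N - 1)    # increment index i with y in [i/N, (i+1)/N)
    dx = grid[i+2] - grid[i+1]           # Δxᵢ (Julia arrays are 1-based)
    x = grid[i+1] + (y * N - i) * dx     # linear interpolation between grid points
    jacobian = N * dx                    # J(y) = N Δxᵢ
    return x, jacobian
end

# With a uniform grid the map is the identity and the Jacobian is 1:
grid = collect(LinRange(0.0, 1.0, 11))
x, jac = vegas_map(0.35, grid)           # x ≈ 0.35, jac ≈ 1.0
```

Training concentrates grid points where the integrand is large, so dx (and hence the Jacobian) shrinks there and the integrand becomes flat in y.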
MCIntegration.Dist.Continuous — Type

function Continuous(bounds::AbstractVector{Union{AbstractVector,Tuple}}, size=MaxOrder; offset=0, alpha=2.0, adapt=true, ninc=1000, grid=collect(LinRange(lower, upper, ninc)))
Create a set of continuous variable pools, each sampled from its interval [lower, upper) with a distribution generated by a Vegas map, and pack them into a CompositeVar. The distribution is trained after each iteration if adapt = true.
Arguments:
- bounds: tuple of (lower, upper) for each continuous variable
- ninc: number of increments
- alpha: learning rate
- adapt: turn the adaptive map on or off
- grid: grid points for the Vegas map
MCIntegration.Dist.Continuous — Type

mutable struct Continuous{G} <: AbstractVectorVariable{Float64}
A continuous variable pool is a set of floating point variables sampled from the set [lower, upper) with a distribution generated by a Vegas map (see below). The distribution is trained after each iteration if adapt = true.
Fields:
- data: floating point variables
- gidx: index of the grid point for each variable
- prob: probability of the given variable. For the Vegas map, prob = dy/dx = 1/(N Δxᵢ), the inverse of the Jacobian
- lower: lower bound
- range: upper - lower
- offset: offset of the variable pool; all variables in the pool share the same offset
- grid: grid points for the Vegas map
- inc: increment of the grid points
- histogram: histogram of the distribution
- alpha: learning rate
- adapt: turn the adaptive map on or off
MCIntegration.Dist.Discrete — Type

mutable struct Discrete <: AbstractVectorVariable{Int}
A discrete variable pool is a set of integer variables sampled from the closed set [lower, lower+1, ..., upper] with an adaptively generated distribution. The distribution is trained after each iteration if adapt = true.
Fields:
- data: integer variables
- lower: lower bound
- upper: upper bound
- prob: probability of the given variable
- size: upper - lower + 1
- offset: offset of the variable pool; all variables in the pool share the same offset
- histogram: histogram of the distribution
- accumulation: accumulation (cumulative sum) of the distribution
- distribution: distribution of the variable pool
- alpha: learning rate
- adapt: turn the adaptive map on or off
MCIntegration.Dist.Discrete — Method

function Discrete(lower::Int, upper::Int; distribution=nothing, alpha=2.0, adapt=true)
Create a pool of integer variables sampled from the closed set [lower, lower+1, ..., upper] with the distribution Discrete.distribution. The distribution is trained after each iteration if adapt = true.
Arguments:
- lower: lower bound
- upper: upper bound
- distribution: initial distribution
- alpha: learning rate
- adapt: turn the adaptive map on or off
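To illustrate how a discrete value can be drawn from such a distribution via its cumulative accumulation, here is a short self-contained sketch; `sample_discrete` is a hypothetical helper for illustration, not the package API.

```julia
# Draw an integer index from a (possibly unnormalized) distribution by
# inverting its cumulative sum with a uniform deviate u in [0, 1).
function sample_discrete(distribution::AbstractVector{Float64}, u::Float64)
    total = sum(distribution)
    accumulation = cumsum(distribution) ./ total   # normalized cumulative sum
    idx = findfirst(c -> u < c, accumulation)      # first bin whose cumulative value exceeds u
    prob = distribution[idx] / total               # probability of the drawn value
    return idx, prob
end

# A pool over three values where the last is twice as likely as the others:
idx, prob = sample_discrete([1.0, 1.0, 2.0], 0.6)  # idx = 3, prob = 0.5
```

Training then amounts to updating `distribution` from the accumulated histogram after each iteration.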
MCIntegration.Dist.accumulate! — Method

accumulate!(var, idx, weight) = nothing

Accumulate a new sample with a given weight for the idx-th element of the variable pool var.
MCIntegration.Dist.clearStatistics! — Method

clearStatistics!(T)
Clear the accumulated samples in the Variable.
MCIntegration.Dist.initialize! — Method

initialize!(T, config)
Initialize the variable pool with random variables.
MCIntegration.Dist.locate — Method

function locate(accumulation, p)

Return the index of p in accumulation such that accumulation[idx] <= p < accumulation[idx+1]. If p is out of range (namely accumulation[1] > p or accumulation[end] <= p), return -1. A bisection algorithm is used, so the time complexity is O(log(n)) with n = length(accumulation).
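A bisection routine matching this contract can be sketched as follows; this is an illustration of the documented behavior, not the package source.

```julia
# Return idx with accumulation[idx] <= p < accumulation[idx+1],
# or -1 when p falls outside [accumulation[1], accumulation[end]).
function locate(accumulation::AbstractVector, p)
    (accumulation[1] > p || accumulation[end] <= p) && return -1
    lo, hi = 1, length(accumulation)
    while hi - lo > 1                      # invariant: acc[lo] <= p < acc[hi]
        mid = (lo + hi) ÷ 2
        accumulation[mid] <= p ? (lo = mid) : (hi = mid)
    end
    return lo
end

acc = [0.0, 0.3, 0.7, 1.0]
locate(acc, 0.5)   # 2, since acc[2] = 0.3 <= 0.5 < acc[3] = 0.7
locate(acc, 1.0)   # -1, since acc[end] <= p
```

Together with a cumulative distribution, this is the inversion step that turns a uniform deviate into a sampled bin index.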
MCIntegration.Dist.padding_probability — Method

padding_probability(config, idx)

Calculate the joint probability of the variables that do not appear in the idx-th integral, relative to the full variable set.

padding_probability(config, idx) = total_probability(config) / probability(config, idx)
MCIntegration.Dist.poolsize — Method

poolsize(vars::CompositeVar) = vars.size
Return the size of the variable pool. All variables packed in the CompositeVar share the same size.
MCIntegration.Dist.poolsize — Method

function poolsize(var::AbstractVectorVariable{GT}) where {GT}
Return the size of the pool of the variable.
MCIntegration.Dist.probability — Method

probability(config, idx)

Calculate the joint probability of all involved variables for the idx-th integral.
MCIntegration.Dist.rescale — Function

function rescale(dist::AbstractVector, alpha=1.5)
Rescale the dist array to avoid overreacting to atypically large values.
There are three steps:
- dist is first normalized to [0, 1].
- Then values close to 1.0 are changed little, while those close to zero are amplified by an amount controlled by alpha.
- Finally, the rescaled dist array is normalized to [0, 1] again.
See Eq. (19) of https://arxiv.org/pdf/2009.05112.pdf for more detail.
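The three steps can be sketched as below, assuming the damping form d → ((d − 1)/log d)^alpha from Eq. (19) of the reference; the exact normalization and constants inside the package may differ.

```julia
# Sketch of the three rescaling steps. Values near 1 are nearly unchanged
# by x -> ((x - 1)/log(x))^alpha, while values near 0 are lifted.
function rescale(dist::AbstractVector{Float64}, alpha=1.5)
    d = dist ./ maximum(dist)            # step 1: normalize to [0, 1]
    damped = map(d) do x
        # step 2: damped response (the x -> 1 limit of (x-1)/log(x) is 1)
        x <= 0 ? 0.0 : (x ≈ 1.0 ? 1.0 : ((x - 1) / log(x))^alpha)
    end
    return damped ./ maximum(damped)     # step 3: normalize to [0, 1] again
end

rescale([1.0, 2.0, 4.0], 1.0)  # smallest entry is lifted above 0.25
```

Larger alpha sharpens the response toward the raw distribution; smaller alpha flattens it, which is what keeps a single large sample from dominating the adaptation.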
MCIntegration.Dist.smooth — Function

function smooth(dist::AbstractVector, factor=6)

Smooth the distribution by averaging each element with its two nearest neighbors. The averaging ratio is 1 : factor : 1 for elements that are not on the boundary.
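The 1 : factor : 1 average can be written out as follows; the boundary treatment here (a two-point 1 : factor average) is an assumption, not taken from the package source.

```julia
# Nearest-neighbor smoothing with weights 1 : factor : 1 in the interior.
function smooth(dist::AbstractVector{Float64}, factor=6)
    n = length(dist)
    n <= 2 && return copy(dist)
    out = similar(dist)
    out[1] = (factor * dist[1] + dist[2]) / (factor + 1)          # left boundary
    out[n] = (dist[n-1] + factor * dist[n]) / (factor + 1)        # right boundary
    for i in 2:n-1
        out[i] = (dist[i-1] + factor * dist[i] + dist[i+1]) / (factor + 2)
    end
    return out
end

smooth([0.0, 8.0, 0.0])   # spreads the spike: [8/7, 6.0, 8/7]
```

Smoothing keeps the trained distribution from developing isolated spikes driven by a handful of samples.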
MCIntegration.Dist.total_probability — Method

total_probability(config)
Calculate the joint probability of all involved variables of all integrals.
MCIntegration.Dist.train! — Method

train!(Var)
Train the distribution of the variables in the pool.