Truncated
- class pymc.Truncated(name, *args, rng=None, dims=None, initval=None, observed=None, total_size=None, transform=UNSET, default_transform=UNSET, **kwargs)[source]#
Truncated distribution.
The pdf of a Truncated distribution is
\[\begin{split}\begin{cases} 0 & \text{for } x < lower, \\ \frac{\text{PDF}(x, dist)}{\text{CDF}(upper, dist) - \text{CDF}(lower, dist)} & \text{for } lower \leq x \leq upper, \\ 0 & \text{for } x > upper, \end{cases}\end{split}\]
- Parameters:
- dist: unnamed distribution
Univariate distribution created via the .dist() API, which will be truncated. This distribution must be a pure RandomVariable and have a logcdf method implemented for MCMC sampling.
Warning
dist will be cloned, rendering it independent of the one passed as input.
- lower: tensor_like of float, int, or None
Lower (left) truncation point. If None, the distribution will not be left truncated.
- upper: tensor_like of float, int, or None
Upper (right) truncation point. If None, the distribution will not be right truncated.
- max_n_steps: int, defaults to 10_000
Maximum number of resamples that are attempted when performing rejection sampling. A TruncationError is raised if convergence is not reached after that many steps (see the sketch after this list).
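A minimal sketch tying these parameters together (the Gamma base distribution, its parameters, and the step limit are illustrative choices, not taken from the docstring): one-sided truncation with `upper=None` and an explicit cap on rejection resampling.

```python
import pymc as pm

# Illustrative base distribution; any univariate .dist() with a logcdf works.
gamma_dist = pm.Gamma.dist(alpha=2.0, beta=1.0)

# Truncate below at 1 and leave the upper tail unbounded (upper=None).
# max_n_steps bounds rejection sampling; a TruncationError is raised if
# that many resamples are not enough.
lower_truncated_gamma = pm.Truncated.dist(
    gamma_dist, lower=1, upper=None, max_n_steps=1_000
)

samples = pm.draw(lower_truncated_gamma, draws=1_000)
```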
- Returns:
- truncated_distribution: TensorVariable
Graph representing a truncated RandomVariable. A specialized Op may be used if the Op of the dist has a dispatched _truncated function. Otherwise, a SymbolicRandomVariable graph representing the truncation process is returned, either via inverse CDF sampling (if the underlying dist has a logcdf method) or via rejection sampling.
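For illustration (the variable names here are ours, not part of the docstring), the returned object behaves like any other PyMC tensor variable: it can be forward-sampled with pm.draw, and its log-probability includes the truncation normalization.

```python
import pymc as pm

normal_dist = pm.Normal.dist(mu=0.0, sigma=1.0)
truncated_normal = pm.Truncated.dist(normal_dist, lower=-1, upper=1)

# Forward samples respect the truncation bounds.
draws = pm.draw(truncated_normal, draws=1_000)

# The logp is renormalized by CDF(upper) - CDF(lower), so it is larger
# than the untruncated Normal logp at the same point.
print(pm.logp(truncated_normal, 0.0).eval())
print(pm.logp(normal_dist, 0.0).eval())
```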
Examples
Truncation with upper and lower points set to +/-1:

```python
import pymc as pm

with pm.Model():
    normal_dist = pm.Normal.dist(mu=0.0, sigma=1.0)
    truncated_normal = pm.Truncated("truncated_normal", normal_dist, lower=-1, upper=1)
```
Partial truncation of normal distributions is achieved by passing +/-inf truncation points. The example below covers four truncation conditions: untruncated (-inf, inf), upper truncated (-inf, 1), lower truncated (-1, inf), and truncated on both sides (-1, 1):

```python
import numpy as np
import pymc as pm

with pm.Model():
    normal_dist = pm.Normal.dist(mu=0.0, sigma=1.0)
    partially_truncated_normal = pm.Truncated(
        "partially_truncated_normal",
        normal_dist,
        lower=[-np.inf, -np.inf, -1, -1],
        upper=[np.inf, 1, np.inf, 1],
        shape=(4,),
    )
```
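As a further hedged sketch (the data and prior below are invented for illustration), a truncated distribution can also serve as a likelihood for observations that were only recorded inside a window:

```python
import numpy as np
import pymc as pm

# Hypothetical observations recorded only inside [-1, 1]
data = np.array([-0.5, 0.1, 0.9, -0.8, 0.3])

with pm.Model():
    mu = pm.Normal("mu", mu=0.0, sigma=1.0)
    base_dist = pm.Normal.dist(mu=mu, sigma=1.0)
    pm.Truncated("obs", base_dist, lower=-1, upper=1, observed=data)
    idata = pm.sample()
```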
Methods
- Truncated.dist(dist[, lower, upper, max_n_steps])
Create a tensor variable corresponding to the cls distribution.