Welcome to JAXNS’s documentation!
JAXNS is a probabilistic programming framework and advanced nested sampling algorithm. Its goal is to empower researchers and scientists of all types, from early career to seasoned professionals, from small Jupyter notebooks to massive HPC problems. Initially, I developed JAXNS to solve my own problems during my PhD. However, it has since grown into a full-fledged probabilistic programming framework. JAXNS has been applied in numerous domains, including cosmology, astrophysics, gravitational waves, interferometry, exoplanets, particle physics, metamaterials, epidemiology, climate modelling, and beyond. It has also been used in industry for a variety of applications. All of this is welcomed and gladly supported. JAXNS is citable; use the [(outdated) pre-print here](https://arxiv.org/abs/2012.15286).
Here are 10 things you can do with JAXNS:
Build probabilistic models in an easy-to-use, high-level language that can be used anywhere in the JAX ecosystem.
Compute the Bayesian evidence of a model or hypothesis (the ultimate scientific method);
Produce high-quality samples from the posterior distribution;
Easily handle degenerate, difficult multi-modal posteriors;
Model both discrete and continuous priors;
Encode complex constraints on the prior space;
Easily embed your neural networks or ML model in the likelihood/prior;
Easily embed JAXNS in your ML model;
Use JAXNS in a distributed computing environment;
Solve global optimisation problems.
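To give a flavour of the evidence computation mentioned above, here is a minimal, self-contained sketch of the nested sampling idea in plain NumPy (this is a conceptual illustration only, not the JAXNS API; all names in it are ours). It estimates the log-evidence $\log Z = \log \int L(x)\,\pi(x)\,dx$ for a toy 1-D problem with a known answer: a Uniform(-5, 5) prior and a standard normal likelihood, so $Z \approx 0.1$.

```python
import numpy as np

# Toy problem (our construction, not from the JAXNS docs):
# prior x ~ Uniform(-5, 5), likelihood L(x) = N(x; 0, 1),
# so the evidence is Z = (1/10) * P(|x| < 5 under N(0,1)) ~= 0.1.

rng = np.random.default_rng(42)

def log_likelihood(x):
    return -0.5 * x**2 - 0.5 * np.log(2.0 * np.pi)

n_live = 100
live = rng.uniform(-5.0, 5.0, size=n_live)
live_logl = log_likelihood(live)

log_Z = -np.inf  # running log-evidence estimate
log_X = 0.0      # log of the remaining prior volume

for _ in range(900):
    worst = int(np.argmin(live_logl))
    logl_star = live_logl[worst]

    # Expected shrinkage of prior volume per step: X -> X * e^{-1/n_live}.
    log_X_new = log_X - 1.0 / n_live
    # Weight of the discarded point: w = L* * (X_old - X_new).
    log_w = logl_star + log_X + np.log1p(-np.exp(-1.0 / n_live))
    log_Z = np.logaddexp(log_Z, log_w)
    log_X = log_X_new

    # Replace the worst point with a prior draw constrained to L > L*.
    # For this unimodal toy the constrained set is exactly |x| < |x_worst|,
    # so we can sample it directly; real samplers use slice/rejection moves.
    a = np.sqrt(-2.0 * logl_star - np.log(2.0 * np.pi))
    x_new = rng.uniform(-a, a)
    live[worst] = x_new
    live_logl[worst] = log_likelihood(x_new)

# Add the contribution of the remaining live points: Z += X * mean(L_live).
log_Z = np.logaddexp(log_Z, log_X + np.log(np.mean(np.exp(live_logl))))

print(log_Z, np.log(0.1))  # estimate vs. analytic log-evidence
```

JAXNS implements this same integral with far more sophisticated constrained sampling, JIT compilation, and dynamic termination; see the User Guide below for the actual model-building API.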
JAXNS’s Mission Statement
Our mission is to make nested sampling faster, easier, and more powerful.
User Guide
API Reference
Examples
- Inference of Jones scalars observables (noisy angular quantities)
- Lennard-Jones Potentials for modelling phase transitions in materials
- Constant Likelihood
- Dual Moons likelihood
- Efficient parameter estimation
- First the normal nested sampler (parameter_estimation=False)
- Now with parameter estimation enabled
- Egg-box Likelihood with Uniform Prior
- Poisson likelihood and Gamma prior
- Gaussian processes with outliers
- Thin Gaussian Shells with Uniform Prior
- Using JAXNS to globally optimise Neural Networks
- Gradient Guided
- Multivariate Normal Likelihood with Multivariate Normal Prior
- OU process
- Self-Exciting process (Hawkes process)