This repository implements Motion Planning via Optimal Transport (mpot) in PyTorch.
The philosophy of mpot follows the Monte Carlo argument: given a high enough initialization variance, more samples discover more and better modes.
Within the multi-modal motion planning scope, mpot performs brute-force parallel planning on GPU, mitigating local minima traps common in optimization-based motion planning.
For those interested in standalone Sinkhorn Step as a general-purpose batch gradient-free solver for non-convex optimization problems, please check out ssax.
This work has been accepted to NeurIPS 2023. Please find the paper on arXiv.
- Python >= 3.9
- PyTorch >= 2.0 (with CUDA for GPU acceleration)
- See `pyproject.toml` for the full dependency list
Activate your conda/virtual environment, navigate to the mpot root directory, and run:
```bash
pip install -e .
```

mpot requires a GPU for practical performance. Please verify PyTorch CUDA support:

```bash
python -c "import torch; print(torch.cuda.is_available())"
```

Then try the examples:

```bash
python examples/mpot_occupancy.py
python examples/mpot_sdf.py
python examples/mpot_panda.py
```

Every run uses a different random seed. The resulting optimization visualizations are stored in the current directory. Refer to the example scripts for playing around with options and different goal points.
Note: For all cases, we normalize the joint space to the joint and velocity limits, then perform Sinkhorn Step on the normalized state-space. Changing any hyperparameter may require re-tuning the others.
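As a rough sketch of this normalization (the tensor names and limit conventions here are assumptions, not the exact mpot code), mapping joint positions/velocities into the unit box and back looks like:

```python
import torch


def normalize_states(states: torch.Tensor,
                     lower: torch.Tensor,
                     upper: torch.Tensor) -> torch.Tensor:
    """Map raw joint positions/velocities into [0, 1] per dimension.

    states: (..., dim) batch of states; lower/upper: (dim,) joint and
    velocity limits stacked into a single vector.
    """
    return (states - lower) / (upper - lower)


def denormalize_states(states_norm: torch.Tensor,
                       lower: torch.Tensor,
                       upper: torch.Tensor) -> torch.Tensor:
    """Inverse map: from the normalized state-space back to physical units."""
    return states_norm * (upper - lower) + lower
```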
The most sensitive parameters are:
| Parameter | Description | Guidance |
|---|---|---|
| `polytope` | Polytope geometry for directional probing | `cube` for dim < 10; `orthoplex` or `simplex` for higher dimensions |
| `step_radius` | Step size per iteration | Start small (0.03-0.15), increase if convergence is slow |
| `probe_radius` | Probing radius (must be >= `step_radius`) | Controls exploration range around current waypoints |
| `num_probe` | Probe points per polytope vertex | 3-5 is usually sufficient |
| `epsilon` | Decay rate of step/probe size | 0.01-0.05 typical |
| `ent_epsilon` | Sinkhorn entropy regularization | 1e-2 to 5e-2 balances coupling sharpness vs. speed |
| `w_coll`, `w_smooth` | Cost weights | Application-dependent; tune for your environment |
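For orientation, here is how these knobs might be collected in one place. The dictionary below is a hypothetical sketch using the parameter names from the table; it is not the exact configuration structure of the example scripts, so check `examples/*.py` for the real setup:

```python
# Illustrative hyperparameter set drawn from the guidance table above.
# The grouping into one dict (and the exact values) is a sketch, not the
# actual interface of the example scripts.
planner_params = dict(
    polytope='cube',      # 'orthoplex'/'simplex' for high-dimensional problems
    step_radius=0.1,      # step size per Sinkhorn Step iteration
    probe_radius=0.15,    # probing radius, must be >= step_radius
    num_probe=5,          # probe points per polytope vertex
    epsilon=0.02,         # decay rate of step/probe radii
    ent_epsilon=1e-2,     # Sinkhorn entropy regularization
    # w_coll=..., w_smooth=...  # cost weights are application-dependent
)
```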
CUDA Memory Issues:

```bash
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512
```

Common Issues:
- If optimization diverges, reduce `step_radius` and `probe_radius`
- For high-dimensional problems (e.g., 7-DOF robot), use the `orthoplex` polytope to reduce vertex count
- Reduce `num_particles_per_goal` if running out of GPU memory (see the memory-reporting snippet below)
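When tuning `num_particles_per_goal` against GPU memory, it can help to print PyTorch's allocator statistics between runs. This is a small, generic PyTorch snippet (not an mpot utility):

```python
import torch


def report_cuda_memory(tag: str = "") -> None:
    """Print current and peak CUDA memory usage in MiB."""
    if not torch.cuda.is_available():
        return
    allocated = torch.cuda.memory_allocated() / 2**20
    peak = torch.cuda.max_memory_allocated() / 2**20
    print(f"[{tag}] allocated: {allocated:.1f} MiB, peak: {peak:.1f} MiB")


report_cuda_memory("after planning")
torch.cuda.reset_peak_memory_stats()  # reset the peak counter before the next run
```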
The Gaussian Process prior implementation is adapted from Sasha Lambert's mpc_trajopt.
If you found this repository useful, please consider citing:
```bibtex
@article{le2023accelerating,
  title={Accelerating motion planning via optimal transport},
  author={Le, An T and Chalvatzaki, Georgia and Biess, Armin and Peters, Jan R},
  journal={Advances in Neural Information Processing Systems},
  volume={36},
  pages={78453--78482},
  year={2023}
}
```

