From 4e2f6b45662fe206414652ccc8f715c420f3b9cd Mon Sep 17 00:00:00 2001 From: Rémi Flamary Date: Mon, 24 Jun 2019 17:13:33 +0200 Subject: first shot part OT Wass --- docs/source/quickstart.rst | 119 +++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 119 insertions(+) create mode 100644 docs/source/quickstart.rst (limited to 'docs/source/quickstart.rst') diff --git a/docs/source/quickstart.rst b/docs/source/quickstart.rst new file mode 100644 index 0000000..3d3ce98 --- /dev/null +++ b/docs/source/quickstart.rst @@ -0,0 +1,119 @@ + +Quick start +=========== + + + +In the following we provide some pointers about which functions and classes +to use for different problems related to optimal transport (OT). + + +Optimal transport and Wasserstein distance +------------------------------------------ + +The optimal transport problem between discrete distributions is often expressed +as + .. math:: + \gamma^* = arg\min_\gamma \sum_{i,j}\gamma_{i,j}M_{i,j} + + s.t. \gamma 1 = a; \gamma^T 1= b; \gamma\geq 0 + +where : + +- :math:`M\in\mathbb{R}_+^{m\times n}` is the metric cost matrix defining the cost to move mass from bin :math:`a_i` to bin :math:`b_j`. +- :math:`a` and :math:`b` are histograms (positive, sum to 1) that represent the weights of each samples in the source an target distributions. + +Solving the linear program above can be done using the function :any:`ot.emd` +that will return the optimal transport matrix :math:`\gamma^*`: + +.. code:: python + + # a,b are 1D histograms (sum to 1 and positive) + # M is the ground cost matrix + T=ot.emd(a,b,M) # exact linear program + +.. hint:: + Examples of use for :any:`ot.emd` are available in the following examples: + + - :any:`auto_examples/plot_OT_2D_samples` + - :any:`auto_examples/plot_OT_1D` + - :any:`auto_examples/plot_OT_L1_vs_L2` + + +The value of the OT solution is often more of interest that the OT matrix : + + .. math:: + W(a,b)=\min_\gamma \sum_{i,j}\gamma_{i,j}M_{i,j} + + s.t. \gamma 1 = a; \gamma^T 1= b; \gamma\geq 0 + + +where :math:`W(a,b)` is the `Wasserstein distance +`_ between distributions a and b +It is a metrix that has nice statistical +properties. It can computed from an already estimated OT matrix with +:code:`np.sum(T*M)` or directly with the function :any:`ot.emd2`. + +.. code:: python + + # a,b are 1D histograms (sum to 1 and positive) + # M is the ground cost matrix + W=ot.emd2(a,b,M) # Wasserstein distance / EMD value + +.. note:: + In POT, most functions that solve OT or regularized OT problems have two + versions that return the OT matrix or the value of the optimal solution. Fir + instance :any:`ot.emd` return the OT matrix and :any:`ot.emd2` return the + Wassertsein distance. + + +Regularized Optimal Transport +----------------------------- + +Wasserstein Barycenters +----------------------- + +Monge mapping and Domain adaptation with Optimal transport +---------------------------------------- + + +Other applications +------------------ + + +GPU acceleration +---------------- + + + +How to? +------- + + + +1. **How to solve a discrete optimal transport problem ?** + + The solver for discrete is the function :py:mod:`ot.emd` that returns + the OT transport matrix. If you want to solve a regularized OT you can + use :py:mod:`ot.sinkhorn`. + + + + Here is a simple use case: + + .. 
code:: python + + # a,b are 1D histograms (sum to 1 and positive) + # M is the ground cost matrix + T=ot.emd(a,b,M) # exact linear program + T_reg=ot.sinkhorn(a,b,M,reg) # entropic regularized OT + + More detailed examples can be seen on this + :doc:`auto_examples/plot_OT_2D_samples` + + +2. **Compute a Wasserstein distance** + + + + -- cgit v1.2.3 From 7f0739f73fa6a8c7fa22269c727b48d3640627be Mon Sep 17 00:00:00 2001 From: Rémi Flamary Date: Tue, 25 Jun 2019 07:41:47 +0200 Subject: first shot part OT Wass --- docs/source/quickstart.rst | 13 +++++++++++++ 1 file changed, 13 insertions(+) (limited to 'docs/source/quickstart.rst') diff --git a/docs/source/quickstart.rst b/docs/source/quickstart.rst index 3d3ce98..ac96f26 100644 --- a/docs/source/quickstart.rst +++ b/docs/source/quickstart.rst @@ -11,6 +11,10 @@ to use for different problems related to optimal transport (OT). Optimal transport and Wasserstein distance ------------------------------------------ + +Solving optimal transport +^^^^^^^^^^^^^^^^^^^^^^^^^ + The optimal transport problem between discrete distributions is often expressed as .. math:: @@ -39,6 +43,8 @@ that will return the optimal transport matrix :math:`\gamma^*`: - :any:`auto_examples/plot_OT_1D` - :any:`auto_examples/plot_OT_L1_vs_L2` +Computing Wasserstein distance +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ The value of the OT solution is often more of interest that the OT matrix : @@ -60,6 +66,13 @@ properties. It can computed from an already estimated OT matrix with # M is the ground cost matrix W=ot.emd2(a,b,M) # Wasserstein distance / EMD value + +.. hint:: + Examples of use for :any:`ot.emd2` are available in the following examples: + + - :any:`auto_examples/plot_compute_emd` + + .. note:: In POT, most functions that solve OT or regularized OT problems have two versions that return the OT matrix or the value of the optimal solution. Fir -- cgit v1.2.3 From 1e0977fd346d91c837ef90dff8c75a65b182d021 Mon Sep 17 00:00:00 2001 From: Rémi Flamary Date: Tue, 25 Jun 2019 08:34:59 +0200 Subject: cleaunup gromov + stat guide --- docs/source/index.rst | 2 +- docs/source/quickstart.rst | 156 ++++++++++++++++++++++++++++++++++++++++----- 2 files changed, 142 insertions(+), 16 deletions(-) (limited to 'docs/source/quickstart.rst') diff --git a/docs/source/index.rst b/docs/source/index.rst index 03943e8..9078d35 100644 --- a/docs/source/index.rst +++ b/docs/source/index.rst @@ -10,7 +10,7 @@ Contents -------- .. toctree:: - :maxdepth: 3 + :maxdepth: 2 self quickstart diff --git a/docs/source/quickstart.rst b/docs/source/quickstart.rst index ac96f26..d8d4838 100644 --- a/docs/source/quickstart.rst +++ b/docs/source/quickstart.rst @@ -1,8 +1,6 @@ -Quick start -=========== - - +Quick start guide +================= In the following we provide some pointers about which functions and classes to use for different problems related to optimal transport (OT). @@ -11,6 +9,11 @@ to use for different problems related to optimal transport (OT). Optimal transport and Wasserstein distance ------------------------------------------ +.. note:: + In POT, most functions that solve OT or regularized OT problems have two + versions that return the OT matrix or the value of the optimal solution. For + instance :any:`ot.emd` return the OT matrix and :any:`ot.emd2` return the + Wassertsein distance. 
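To make this convention concrete, here is a minimal sketch of both calls on toy
data (the random samples, the uniform weights built with :any:`ot.unif` and the
default squared Euclidean cost from :any:`ot.dist` are illustrative assumptions,
not taken from the documentation examples):

.. code:: python

    import numpy as np
    import ot

    xs = np.random.randn(50, 2)        # toy source samples (assumed data)
    xt = np.random.randn(60, 2) + 1    # toy target samples (assumed data)

    a = ot.unif(50)       # uniform weights on the source samples
    b = ot.unif(60)       # uniform weights on the target samples
    M = ot.dist(xs, xt)   # ground cost matrix (squared Euclidean by default)

    T = ot.emd(a, b, M)    # version returning the OT matrix
    W = ot.emd2(a, b, M)   # version returning the value of the optimal solution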
Solving optimal transport ^^^^^^^^^^^^^^^^^^^^^^^^^ @@ -36,6 +39,10 @@ that will return the optimal transport matrix :math:`\gamma^*`: # M is the ground cost matrix T=ot.emd(a,b,M) # exact linear program +The method used for solving the OT problem is the network simplex, it is +implemented in C from [1]_. It has a complexity of :math:`O(n^3)` but the +solver is quite efficient and uses sparsity of the solution. + .. hint:: Examples of use for :any:`ot.emd` are available in the following examples: @@ -73,16 +80,19 @@ properties. It can computed from an already estimated OT matrix with - :any:`auto_examples/plot_compute_emd` -.. note:: - In POT, most functions that solve OT or regularized OT problems have two - versions that return the OT matrix or the value of the optimal solution. Fir - instance :any:`ot.emd` return the OT matrix and :any:`ot.emd2` return the - Wassertsein distance. - - Regularized Optimal Transport ----------------------------- +Entropic regularized OT +^^^^^^^^^^^^^^^^^^^^^^^ + + +Other regularization +^^^^^^^^^^^^^^^^^^^^ + +Stochastic gradient decsent +^^^^^^^^^^^^^^^^^^^^^^^^^^^ + Wasserstein Barycenters ----------------------- @@ -99,8 +109,8 @@ GPU acceleration -How to? -------- +FAQ +--- @@ -128,5 +138,121 @@ How to? 2. **Compute a Wasserstein distance** - - +References +---------- + +.. [1] Bonneel, N., Van De Panne, M., Paris, S., & Heidrich, W. (2011, + December). `Displacement nterpolation using Lagrangian mass transport + `__. + In ACM Transactions on Graphics (TOG) (Vol. 30, No. 6, p. 158). ACM. + +.. [2] Cuturi, M. (2013). `Sinkhorn distances: Lightspeed computation of + optimal transport `__. In Advances + in Neural Information Processing Systems (pp. 2292-2300). + +.. [3] Benamou, J. D., Carlier, G., Cuturi, M., Nenna, L., & Peyré, G. + (2015). `Iterative Bregman projections for regularized transportation + problems `__. SIAM Journal on + Scientific Computing, 37(2), A1111-A1138. + +.. [4] S. Nakhostin, N. Courty, R. Flamary, D. Tuia, T. Corpetti, + `Supervised planetary unmixing with optimal + transport `__, + Whorkshop on Hyperspectral Image and Signal Processing : Evolution in + Remote Sensing (WHISPERS), 2016. + +.. [5] N. Courty; R. Flamary; D. Tuia; A. Rakotomamonjy, `Optimal Transport + for Domain Adaptation `__, in IEEE + Transactions on Pattern Analysis and Machine Intelligence , vol.PP, + no.99, pp.1-1 + +.. [6] Ferradans, S., Papadakis, N., Peyré, G., & Aujol, J. F. (2014). + `Regularized discrete optimal + transport `__. SIAM Journal on + Imaging Sciences, 7(3), 1853-1882. + +.. [7] Rakotomamonjy, A., Flamary, R., & Courty, N. (2015). `Generalized + conditional gradient: analysis of convergence and + applications `__. arXiv preprint + arXiv:1510.06567. + +.. [8] M. Perrot, N. Courty, R. Flamary, A. Habrard (2016), `Mapping + estimation for discrete optimal + transport `__, + Neural Information Processing Systems (NIPS). + +.. [9] Schmitzer, B. (2016). `Stabilized Sparse Scaling Algorithms for + Entropy Regularized Transport + Problems `__. arXiv preprint + arXiv:1610.06519. + +.. [10] Chizat, L., Peyré, G., Schmitzer, B., & Vialard, F. X. (2016). + `Scaling algorithms for unbalanced transport + problems `__. arXiv preprint + arXiv:1607.05816. + +.. [11] Flamary, R., Cuturi, M., Courty, N., & Rakotomamonjy, A. (2016). + `Wasserstein Discriminant + Analysis `__. arXiv preprint + arXiv:1608.08063. + +.. 
[12] Gabriel Peyré, Marco Cuturi, and Justin Solomon (2016), + `Gromov-Wasserstein averaging of kernel and distance + matrices `__ + International Conference on Machine Learning (ICML). + +.. [13] Mémoli, Facundo (2011). `Gromov–Wasserstein distances and the + metric approach to object + matching `__. + Foundations of computational mathematics 11.4 : 417-487. + +.. [14] Knott, M. and Smith, C. S. (1984).`On the optimal mapping of + distributions `__, + Journal of Optimization Theory and Applications Vol 43. + +.. [15] Peyré, G., & Cuturi, M. (2018). `Computational Optimal + Transport `__ . + +.. [16] Agueh, M., & Carlier, G. (2011). `Barycenters in the Wasserstein + space `__. SIAM + Journal on Mathematical Analysis, 43(2), 904-924. + +.. [17] Blondel, M., Seguy, V., & Rolet, A. (2018). `Smooth and Sparse + Optimal Transport `__. Proceedings of + the Twenty-First International Conference on Artificial Intelligence and + Statistics (AISTATS). + +.. [18] Genevay, A., Cuturi, M., Peyré, G. & Bach, F. (2016) `Stochastic + Optimization for Large-scale Optimal + Transport `__. Advances in Neural + Information Processing Systems (2016). + +.. [19] Seguy, V., Bhushan Damodaran, B., Flamary, R., Courty, N., Rolet, + A.& Blondel, M. `Large-scale Optimal Transport and Mapping + Estimation `__. International + Conference on Learning Representation (2018) + +.. [20] Cuturi, M. and Doucet, A. (2014) `Fast Computation of Wasserstein + Barycenters `__. + International Conference in Machine Learning + +.. [21] Solomon, J., De Goes, F., Peyré, G., Cuturi, M., Butscher, A., + Nguyen, A. & Guibas, L. (2015). `Convolutional wasserstein distances: + Efficient optimal transportation on geometric + domains `__. ACM + Transactions on Graphics (TOG), 34(4), 66. + +.. [22] J. Altschuler, J.Weed, P. Rigollet, (2017) `Near-linear time + approximation algorithms for optimal transport via Sinkhorn + iteration `__, + Advances in Neural Information Processing Systems (NIPS) 31 + +.. [23] Aude, G., Peyré, G., Cuturi, M., `Learning Generative Models with + Sinkhorn Divergences `__, Proceedings + of the Twenty-First International Conference on Artficial Intelligence + and Statistics, (AISTATS) 21, 2018 + +.. [24] Vayer, T., Chapel, L., Flamary, R., Tavenard, R. and Courty, N. + (2019). `Optimal Transport for structured data with application on + graphs `__ Proceedings + of the 36th International Conference on Machine Learning (ICML). \ No newline at end of file -- cgit v1.2.3 From d20d471a1806bde43c23e67c1f805aa3c8908ec3 Mon Sep 17 00:00:00 2001 From: Rémi Flamary Date: Thu, 27 Jun 2019 14:34:23 +0200 Subject: update part 1 --- docs/source/quickstart.rst | 23 +++++++++++++++++++++++ 1 file changed, 23 insertions(+) (limited to 'docs/source/quickstart.rst') diff --git a/docs/source/quickstart.rst b/docs/source/quickstart.rst index d8d4838..a14358c 100644 --- a/docs/source/quickstart.rst +++ b/docs/source/quickstart.rst @@ -83,6 +83,29 @@ properties. It can computed from an already estimated OT matrix with Regularized Optimal Transport ----------------------------- +Recent developments have shown the interest of regularized OT both in terms of +computational and statistical properties. + +We address in this section the regularized OT problem that can be expressed as + +.. math:: + \gamma^* = arg\min_\gamma <\gamma,M>_F + reg*\Omega(\gamma) + + s.t. 
\gamma 1 = a + + \gamma^T 1= b + + \gamma\geq 0 +where : + +- :math:`M\in\mathbb{R}_+^{m\times n}` is the metric cost matrix defining the cost to move mass from bin :math:`a_i` to bin :math:`b_j`. +- :math:`a` and :math:`b` are histograms (positive, sum to 1) that represent the weights of each samples in the source an target distributions. +- :math:`\Omega` is the regularization term. + +We disvuss in the following specific algorithms + + + Entropic regularized OT ^^^^^^^^^^^^^^^^^^^^^^^ -- cgit v1.2.3 From 982ee8345d491d76ac9ba49c6b9a7f5418ed966d Mon Sep 17 00:00:00 2001 From: Rémi Flamary Date: Thu, 27 Jun 2019 16:40:38 +0200 Subject: start section entropic --- docs/source/quickstart.rst | 90 +++++++++++++++++++++++++++++++++++++++------- 1 file changed, 77 insertions(+), 13 deletions(-) (limited to 'docs/source/quickstart.rst') diff --git a/docs/source/quickstart.rst b/docs/source/quickstart.rst index a14358c..c122d17 100644 --- a/docs/source/quickstart.rst +++ b/docs/source/quickstart.rst @@ -21,7 +21,7 @@ Solving optimal transport The optimal transport problem between discrete distributions is often expressed as .. math:: - \gamma^* = arg\min_\gamma \sum_{i,j}\gamma_{i,j}M_{i,j} + \gamma^* = arg\min_\gamma \quad \sum_{i,j}\gamma_{i,j}M_{i,j} s.t. \gamma 1 = a; \gamma^T 1= b; \gamma\geq 0 @@ -56,15 +56,12 @@ Computing Wasserstein distance The value of the OT solution is often more of interest that the OT matrix : .. math:: - W(a,b)=\min_\gamma \sum_{i,j}\gamma_{i,j}M_{i,j} + OT(a,b)=\min_\gamma \quad \sum_{i,j}\gamma_{i,j}M_{i,j} s.t. \gamma 1 = a; \gamma^T 1= b; \gamma\geq 0 -where :math:`W(a,b)` is the `Wasserstein distance -`_ between distributions a and b -It is a metrix that has nice statistical -properties. It can computed from an already estimated OT matrix with +It can computed from an already estimated OT matrix with :code:`np.sum(T*M)` or directly with the function :any:`ot.emd2`. .. code:: python @@ -73,6 +70,25 @@ properties. It can computed from an already estimated OT matrix with # M is the ground cost matrix W=ot.emd2(a,b,M) # Wasserstein distance / EMD value +Note that the well known `Wasserstein distance +`_ between distributions a and +b is defined as + + + .. math:: + + W_p(a,b)=(\min_\gamma \sum_{i,j}\gamma_{i,j}\|x_i-y_j\|_p)^\frac{1}{p} + + s.t. \gamma 1 = a; \gamma^T 1= b; \gamma\geq 0 + +This means that if you want to compute the :math:`W_2` you need to compute the +square root of :any:`ot.emd2` when providing +:code:`M=ot.dist(xs,xt)` that use the squared euclidean distance by default. Computing +the :math:`W_1` wasserstein distance can be done directly with :any:`ot.emd2` +when providing :code:`M=ot.dist(xs,xt, metric='euclidean')` to use the euclidean +distance. + + .. hint:: Examples of use for :any:`ot.emd2` are available in the following examples: @@ -80,6 +96,32 @@ properties. It can computed from an already estimated OT matrix with - :any:`auto_examples/plot_compute_emd` +Special cases +^^^^^^^^^^^^^ + +Note that the OT problem and the corresponding Wasserstein distance can in some +special cases be computed very efficiently. + +For instance when the samples are in 1D, then the OT problem can be solved in +:math:`O(n\log(n))` by using a simple sorting. In this case we provide the +function :any:`ot.emd_1d` and :any:`ot.emd2_1d` to return respectively the OT +matrix and value. Note that since the solution is very sparse the :code:`sparse` +parameter of :any:`ot.emd_1d` allows for solving and returning the solution for +very large problems. 
Note that in order to computed directly the :math:`W_p` +Wasserstein distance in 1D we provide the function :any:`ot.wasserstein_1d` that +takes :code:`p` as a parameter. + +Another specials for estimating OT and Monge mapping is between Gaussian +distributions. In this case there exists a close form solution given in Remark +2.29 in [15]_ and the Monge mapping is an affine function and can be +also computed from the covariances and means of the source and target +distributions. In this case when the finite sample dataset is supposed gaussian, we provide +:any:`ot.da.OT_mapping_linear` that returns the parameters for the Monge +mapping. + + + + Regularized Optimal Transport ----------------------------- @@ -89,31 +131,53 @@ computational and statistical properties. We address in this section the regularized OT problem that can be expressed as .. math:: - \gamma^* = arg\min_\gamma <\gamma,M>_F + reg*\Omega(\gamma) + \gamma^* = arg\min_\gamma \quad \sum_{i,j}\gamma_{i,j}M_{i,j} + \lambda\Omega(\gamma) - s.t. \gamma 1 = a + s.t. \gamma 1 = a; \gamma^T 1= b; \gamma\geq 0 - \gamma^T 1= b - \gamma\geq 0 where : - :math:`M\in\mathbb{R}_+^{m\times n}` is the metric cost matrix defining the cost to move mass from bin :math:`a_i` to bin :math:`b_j`. - :math:`a` and :math:`b` are histograms (positive, sum to 1) that represent the weights of each samples in the source an target distributions. - :math:`\Omega` is the regularization term. -We disvuss in the following specific algorithms - +We discuss in the following specific algorithms that can be used depending on +the regularization term. Entropic regularized OT ^^^^^^^^^^^^^^^^^^^^^^^ +This is the most common regularization used for optimal transport. It has been +proposed in the ML community by Marco Cuturi in his seminal paper [2]_. This +regularization has the following expression + +.. math:: + \Omega(\gamma)=\sum_{i,j}\gamma_{i,j}\log(\gamma_{i,j}) + + +The use of the regularization term above in the optimization problem has a very +strong impact. First it makes the problem smooth which leads to new optimization +procedures such as L-BFGS (see :any:`ot.smooth` ). Next it makes the problem +strictly convex meaning that there will be a unique solution. Finally the +solution of the resulting optimization problem can be expressed as: + +.. math:: + + \gamma_\lambda^*=\text{diag}(u)K\text{diag}(v) + +where :math:`u` and :math:`v` are vectors and :math:`K=\exp(-M/\lambda)` where +the :math:`\exp` is taken component-wise. + + + + Other regularization ^^^^^^^^^^^^^^^^^^^^ -Stochastic gradient decsent +Stochastic gradient descent ^^^^^^^^^^^^^^^^^^^^^^^^^^^ Wasserstein Barycenters -- cgit v1.2.3 From 7dcfebbef19e1f94928fc71face612a2f71372b4 Mon Sep 17 00:00:00 2001 From: Rémi Flamary Date: Fri, 28 Jun 2019 08:33:36 +0200 Subject: entropic mostly done, starting general regularization --- docs/source/quickstart.rst | 144 +++++++++++++++++++++++++++++++++++++++++---- 1 file changed, 133 insertions(+), 11 deletions(-) (limited to 'docs/source/quickstart.rst') diff --git a/docs/source/quickstart.rst b/docs/source/quickstart.rst index c122d17..4f2d9bb 100644 --- a/docs/source/quickstart.rst +++ b/docs/source/quickstart.rst @@ -5,6 +5,9 @@ Quick start guide In the following we provide some pointers about which functions and classes to use for different problems related to optimal transport (OT). +This document is not a tutorial on numerical optimal transport. For this we strongly +recommend to read the very nice book [15]_ . 
+ Optimal transport and Wasserstein distance ------------------------------------------ @@ -20,10 +23,11 @@ Solving optimal transport The optimal transport problem between discrete distributions is often expressed as - .. math:: - \gamma^* = arg\min_\gamma \quad \sum_{i,j}\gamma_{i,j}M_{i,j} - s.t. \gamma 1 = a; \gamma^T 1= b; \gamma\geq 0 +.. math:: + \gamma^* = arg\min_\gamma \quad \sum_{i,j}\gamma_{i,j}M_{i,j} + + s.t. \gamma 1 = a; \gamma^T 1= b; \gamma\geq 0 where : @@ -120,8 +124,6 @@ distributions. In this case when the finite sample dataset is supposed gaussian, mapping. - - Regularized Optimal Transport ----------------------------- @@ -146,6 +148,7 @@ We discuss in the following specific algorithms that can be used depending on the regularization term. + Entropic regularized OT ^^^^^^^^^^^^^^^^^^^^^^^ @@ -168,23 +171,107 @@ solution of the resulting optimization problem can be expressed as: \gamma_\lambda^*=\text{diag}(u)K\text{diag}(v) where :math:`u` and :math:`v` are vectors and :math:`K=\exp(-M/\lambda)` where -the :math:`\exp` is taken component-wise. +the :math:`\exp` is taken component-wise. In order to solve the optimization +problem, on can use an alternative projection algorithm that can be very +efficient for large values if regularization. + +The main function is POT are :any:`ot.sinkhorn` and +:any:`ot.sinkhorn2` that return respectively the OT matrix and the value of the +linear term. Note that the regularization parameter :math:`\lambda` in the +equation above is given to those function with the parameter :code:`reg`. + >>> import ot + >>> a=[.5,.5] + >>> b=[.5,.5] + >>> M=[[0.,1.],[1.,0.]] + >>> ot.sinkhorn(a,b,M,1) + array([[ 0.36552929, 0.13447071], + [ 0.13447071, 0.36552929]]) +More details about the algorithm used is given in the following note. + + +.. note:: + The main function to solve entropic regularized OT is :any:`ot.sinkhorn`. + This function is a wrapper and the parameter :code:`method` help you select + the actual algorithm used to solve the problem: + + + :code:`method='sinkhorn'` calls :any:`ot.bregman.sinkhorn_knopp` the + classic algorithm [2]_. + + :code:`method='sinkhorn_stabilized'` calls :any:`ot.bregman.sinkhorn_stabilized` the + log stabilized version of the algorithm [9]_. + + :code:`method='sinkhorn_epsilon_scaling'` calls + :any:`ot.bregman.sinkhorn_epsilon_scaling` the epsilon scaling version + of the algorithm [9]_. + + :code:`method='greenkhorn'` calls :any:`ot.bregman.greenkhorn` the + greedy sinkhorn verison of the algorithm [22]_. + + In addition to all those variants of sinkhorn, we have another + implementation solving the problem in the smooth dual or semi-dual in + :any:`ot.smooth`. This solver use the :any:`scipy.optimize.minimize` + function to solve the smooth problem with :code:`L-BFGS` algorithm. Tu use + this solver, use functions :any:`ot.smooth.smooth_ot_dual` or + :any:`ot.smooth.smooth_ot_semi_dual` with parameter :code:`reg_type='kl'` to + choose entropic/Kullbach Leibler regularization. + +.. hint:: + Examples of use for :any:`ot.sinkhorn` are available in the following examples: + + - :any:`auto_examples/plot_OT_2D_samples` + - :any:`auto_examples/plot_OT_1D` + - :any:`auto_examples/plot_OT_1D_smooth` + - :any:`auto_examples/plot_stochastic` + +Finally note that we also provide in :any:`ot.stochastic` several implementation +of stochastic solvers for entropic regularized OT [18]_ [19]_. 
Other regularization ^^^^^^^^^^^^^^^^^^^^ -Stochastic gradient descent -^^^^^^^^^^^^^^^^^^^^^^^^^^^ +While entropic OT is the most common and favored in practice, there exist other +kind of regularization. We provide in POT two specific solvers for other +regularization terms: namely quadratic regularization and group lasso +regularization. But we also provide in :any:`ot.optim` two generic solvers that allows solving any +smooth regularization in practice. + +The first general regularization term we can solve is the quadratic +regularization of the form + +.. math:: + \Omega(\gamma)=\sum_{i,j} \gamma_{i,j}^2 + +this regularization term has a similar effect to entropic regularization in +densifying the OT matrix but it keeps some sort of sparsity that is lost with +entropic regularization as soon as :math:`\lambda>0` [17]_. This problem cen be +solved with POT using solvers from :any:`ot.smooth`, more specifically +functions :any:`ot.smooth.smooth_ot_dual` or +:any:`ot.smooth.smooth_ot_semi_dual` with parameter :code:`reg_type='l2'` to +choose the quadratic regularization. + +Another regularization that has been used in recent years is the group lasso +regularization + +.. math:: + \Omega(\gamma)=\sum_{j,G\in\mathcal{G}} \|\gamma_{G,j}\|_p^q + +where :math:`\mathcal{G}` contains non overlapping groups of lines in the OT +matrix. This regularization proposed in [5]_ will promote sparsity at the group level and for +instance will force target samples to get mass from a small number of groups. +Note that the exact OT solution is already sparse so this regularization does +not make sens if it is not combined with others such as entropic. + + + + + Wasserstein Barycenters ----------------------- Monge mapping and Domain adaptation with Optimal transport ----------------------------------------- +---------------------------------------------------------- Other applications @@ -207,7 +294,6 @@ FAQ the OT transport matrix. If you want to solve a regularized OT you can use :py:mod:`ot.sinkhorn`. - Here is a simple use case: @@ -222,7 +308,43 @@ FAQ :doc:`auto_examples/plot_OT_2D_samples` -2. **Compute a Wasserstein distance** +2. **pip install POT fails with error : ImportError: No module named Cython.Build** + + As discussed shortly in the README file. POT requires to have :code:`numpy` + and :code:`cython` installed to build. This corner case is not yet handled + by :code:`pip` and for now you need to install both library prior to + installing POT. + + Note that this problem do not occur when using conda-forge since the packages + there are pre-compiled. + + See `Issue #59 `__ for more + details. + +3. **Why is Sinkhorn slower than EMD ?** + + This might come from the choice of the regularization term. The speed of + convergence of sinkhorn depends directly on this term [22]_ and when the + regularization gets very small the problem try and approximate the exact OT + which leads to slow convergence in addition to numerical problems. In other + words, for large regularization sinkhorn will be very fast to converge, for + small regularization (when you need an OT matrix close to the true OT), it + might be quicker to use the EMD solver. + + Also note that the numpy implementation of the sinkhorn can use parallel + computation depending on the configuration of your system but very important + speedup can be obtained by using a GPU implementation since all operations + are matrix/vector products. + +4. 
**Using GPU fails with error: module 'ot' has no attribute 'gpu'**

   In order to limit import time and hard dependencies in POT, we do not import
   some sub-modules automatically with :code:`import ot`. In order to use the
   acceleration in :any:`ot.gpu` you need first to import it with
   :code:`import ot.gpu`.

   See `Issue #85 `__ and :any:`ot.gpu`
   for more details.


References
-- cgit v1.2.3 From 56deee6e1a69a087022bf81279419305452f5177 Mon Sep 17 00:00:00 2001 From: Rémi Flamary Date: Fri, 28 Jun 2019 09:39:23 +0200 Subject: update reg OT --- docs/source/quickstart.rst | 38 +++++++++++++++++++++++++++++++------- 1 file changed, 31 insertions(+), 7 deletions(-) (limited to 'docs/source/quickstart.rst') diff --git a/docs/source/quickstart.rst b/docs/source/quickstart.rst index 4f2d9bb..62688bc 100644 --- a/docs/source/quickstart.rst +++ b/docs/source/quickstart.rst @@ -210,7 +210,7 @@ More details about the algorithm used is given in the following note.

    In addition to all those variants of sinkhorn, we have another
    implementation solving the problem in the smooth dual or semi-dual in
-    :any:`ot.smooth`. This solver use the :any:`scipy.optimize.minimize`
+    :any:`ot.smooth`. This solver uses the :any:`scipy.optimize.minimize`
    function to solve the smooth problem with :code:`L-BFGS` algorithm. To use
    this solver, use functions :any:`ot.smooth.smooth_ot_dual` or
    :any:`ot.smooth.smooth_ot_semi_dual` with parameter :code:`reg_type='kl'` to
@@ -224,6 +224,13 @@ More details about the algorithm used is given in the following note.

    - :any:`auto_examples/plot_OT_1D_smooth`
    - :any:`auto_examples/plot_stochastic`

+
+Recently [23]_ introduced the sinkhorn divergence that build from entropic
+regularization to compute fast and differentiable geometric diveregnce between
+empirical distributions.
+
+
+
 Finally note that we also provide in :any:`ot.stochastic` several implementation
 of stochastic solvers for entropic regularized OT [18]_ [19]_.

@@ -254,33 +261,50 @@ Another regularization that has been used in recent years is the group lasso
 regularization

 .. math::
-    \Omega(\gamma)=\sum_{j,G\in\mathcal{G}} \|\gamma_{G,j}\|_p^q
+    \Omega(\gamma)=\sum_{j,G\in\mathcal{G}} \|\gamma_{G,j}\|_q^p

 where :math:`\mathcal{G}` contains non overlapping groups of lines in the OT
 matrix. This regularization proposed in [5]_ will promote sparsity at the group level and for
 instance will force target samples to get mass from a small number of groups.
 Note that the exact OT solution is already sparse so this regularization does
-not make sens if it is not combined with others such as entropic.
+not make sense if it is not combined with others such as entropic. Depending on
+the choice of :code:`p` and :code:`q`, the problem can be solved with different
+approaches. When :code:`q=1` and :code:`p<1` the problem is non convex but can
+be solved using an efficient majorization-minimization approach with
+:any:`ot.sinkhorn_lpl1_mm`. When :code:`q=2` and :code:`p=1` we recover the
+convex group lasso and we provide a solver using the generalized conditional
+gradient algorithm [7]_ in function
+:any:`ot.da.sinkhorn_l1l2_gl`. 
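A rough usage sketch for these two solvers is given below. The toy labelled
data and the parameter values are assumptions made for the example, and the
exact argument order of :any:`ot.da.sinkhorn_lpl1_mm` and
:any:`ot.da.sinkhorn_l1l2_gl` should be checked against the API documentation:

.. code:: python

    import numpy as np
    import ot

    n = 50
    xs = np.random.randn(n, 2)                           # toy source samples
    ys = np.array([0] * (n // 2) + [1] * (n - n // 2))   # source class labels
    xt = np.random.randn(n, 2) + 1                       # toy target samples

    a, b = ot.unif(n), ot.unif(n)   # uniform weights
    M = ot.dist(xs, xt)             # squared Euclidean ground cost
    M /= M.max()                    # rescaling helps the Sinkhorn iterations

    # non convex group lasso (q=1, p<1): majorization-minimization solver
    T_lpl1 = ot.da.sinkhorn_lpl1_mm(a, ys, b, M, reg=1e-1, eta=1e0)

    # convex group lasso (q=2, p=1): generalized conditional gradient solver
    T_l1l2 = ot.da.sinkhorn_l1l2_gl(a, ys, b, M, reg=1e-1, eta=1e0)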
+Wasserstein Barycenters +----------------------- -Wasserstein Barycenters ------------------------ -Monge mapping and Domain adaptation with Optimal transport ----------------------------------------------------------- +Monge mapping and Domain adaptation +----------------------------------- Other applications ------------------ +Wasserstein Discriminant Analysis +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + + +Gromov-Wasserstein +^^^^^^^^^^^^^^^^^^ + GPU acceleration ---------------- +We provide several implementation of our OT solvers in :any:`ot.gpu`. Those +implementation use the :code:`cupy` toolbox. + FAQ -- cgit v1.2.3 From bed755904e0fd1d66004877c96127a56aa7e0983 Mon Sep 17 00:00:00 2001 From: Rémi Flamary Date: Tue, 2 Jul 2019 09:42:52 +0200 Subject: regularized OT done --- docs/source/quickstart.rst | 58 ++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 58 insertions(+) (limited to 'docs/source/quickstart.rst') diff --git a/docs/source/quickstart.rst b/docs/source/quickstart.rst index 62688bc..a005c64 100644 --- a/docs/source/quickstart.rst +++ b/docs/source/quickstart.rst @@ -243,6 +243,9 @@ regularization terms: namely quadratic regularization and group lasso regularization. But we also provide in :any:`ot.optim` two generic solvers that allows solving any smooth regularization in practice. +Quadratic regularization +"""""""""""""""""""""""" + The first general regularization term we can solve is the quadratic regularization of the form @@ -257,6 +260,17 @@ functions :any:`ot.smooth.smooth_ot_dual` or :any:`ot.smooth.smooth_ot_semi_dual` with parameter :code:`reg_type='l2'` to choose the quadratic regularization. +.. hint:: + Examples of quadratic regularization are available in the following examples: + + - :any:`auto_examples/plot_OT_1D_smooth` + - :any:`auto_examples/plot_optim_OTreg` + + + +Group Lasso regularization +"""""""""""""""""""""""""" + Another regularization that has been used in recent years is the group lasso regularization @@ -276,6 +290,50 @@ convex gourp lasso and we provide a solver using generalized conditional gradient algorithm [7]_ in function :any:`ot.da.sinkhorn_l1l2_gl`. +.. hint:: + Examples of group Lasso regularization are available in the following examples: + + - :any:`auto_examples/plot_otda_classes` + - :any:`auto_examples/plot_otda_d2` + + +Generic solvers +""""""""""""""" + +Finally we propose in POT generic solvers that can be used to solve any +regularization as long as you can provide a function computing the +regularization and a function computing its gradient. + +In order to solve + +.. math:: + \gamma^* = arg\min_\gamma \quad \sum_{i,j}\gamma_{i,j}M_{i,j} + \lambda\Omega(\gamma) + + s.t. \gamma 1 = a; \gamma^T 1= b; \gamma\geq 0 + +you can use function :any:`ot.optim.cg` that will use a conditional gradient as +proposed in [6]_ . you need to provide the regularization function as parameter +``f`` and its gradient as parameter ``df``. Note that the conditional gradient relies on +iterative solving of a linearization of the problem using the exact +:any:`ot.emd` so it can be slow in practice. Still it always returns a +transport matrix that does not violates the marginals. + +Another solver is proposed to solve the problem + +.. math:: + \gamma^* = arg\min_\gamma \quad \sum_{i,j}\gamma_{i,j}M_{i,j}+ \lambda_e\Omega_e(\gamma) + \lambda\Omega(\gamma) + + s.t. \gamma 1 = a; \gamma^T 1= b; \gamma\geq 0 + +where :math:`\Omega_e` is the entropic regularization. 
In this case we use a +generalized conditional gradient [7]_ implemented in :any:`ot.opim.gcg` that does not linearize the entropic term and +relies on :any:`ot.sinkhorn` for its iterations. + +.. hint:: + Example of generic solvers are available in the following example: + + - :any:`auto_examples/plot_optim_OTreg` + Wasserstein Barycenters -- cgit v1.2.3 From 82d10f780fc296b9f5548e1fe1da9b20349b1e10 Mon Sep 17 00:00:00 2001 From: Rémi Flamary Date: Tue, 2 Jul 2019 10:55:05 +0200 Subject: wasserstein barycenetr with fixed support --- docs/source/quickstart.rst | 86 ++++++++++++++++++++++++++++++++++++++++++---- 1 file changed, 79 insertions(+), 7 deletions(-) (limited to 'docs/source/quickstart.rst') diff --git a/docs/source/quickstart.rst b/docs/source/quickstart.rst index a005c64..94bc8cd 100644 --- a/docs/source/quickstart.rst +++ b/docs/source/quickstart.rst @@ -216,13 +216,7 @@ More details about the algorithm used is given in the following note. :any:`ot.smooth.smooth_ot_semi_dual` with parameter :code:`reg_type='kl'` to choose entropic/Kullbach Leibler regularization. -.. hint:: - Examples of use for :any:`ot.sinkhorn` are available in the following examples: - - :any:`auto_examples/plot_OT_2D_samples` - - :any:`auto_examples/plot_OT_1D` - - :any:`auto_examples/plot_OT_1D_smooth` - - :any:`auto_examples/plot_stochastic` Recently [23]_ introduced the sinkhorn divergence that build from entropic @@ -234,6 +228,15 @@ empirical distributions. Finally note that we also provide in :any:`ot.stochastic` several implementation of stochastic solvers for entropic regularized OT [18]_ [19]_. +.. hint:: + Examples of use for :any:`ot.sinkhorn` are available in the following examples: + + - :any:`auto_examples/plot_OT_2D_samples` + - :any:`auto_examples/plot_OT_1D` + - :any:`auto_examples/plot_OT_1D_smooth` + - :any:`auto_examples/plot_stochastic` + + Other regularization ^^^^^^^^^^^^^^^^^^^^ @@ -335,10 +338,79 @@ relies on :any:`ot.sinkhorn` for its iterations. - :any:`auto_examples/plot_optim_OTreg` - Wasserstein Barycenters ----------------------- +A Wasserstein barycenter is a distribution that minimize its Wasserstein +distance with respect to other distributions [16]_. It corresponds to minimizing the +following problem by seaching a distribution :math:`\mu` + +.. math:: + \min_\mu \quad \sum_{k} w_kW(\mu,\mu_k) + + +In practice we model a distribution with a finite number of support position: + +.. math:: + \mu=\sum_{i=1}^n a_i\delta_{x_i} + +where :math:`a` is an histogram on the simplex and the :math:`\{x_i\}` are the +position of the support. We can clearly see here that optimizing :math:`\mu` can +be done by searching for optimal weights :math:`a` or optimal support +:math:`\{x_i\}` (optimizing both is also an option). +We provide in POT solvers to estimate a discrete +Wasserstein barycenter in both cases. + +Barycenters with fixed support +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +When optimizing a barycenter with a fixed support, the optimization problem can +be expressed as + + +.. math:: + \min_a \quad \sum_{k} w_k W(a,b_k) + +where :math:`b_k` are also weights in the simplex. In the non-regularized case, +the problem above is a classical linear program. In this case we propose a +solver :any:`ot.lp.barycenter` that rely on generic LP solvers. By default the +function uses :any:`scipy.optimize.linprog`, but more efficient LP solvers from +cvxopt can be also used by changing parameter :code:`solver`. 
Note that these +solver require to solve a very large linear program and can be very slow in +practice. + +Similarly to the OT problem, OT barycenters can be computed in the regularized +case. When using entropic regularization the problem can be solved with a +generalization of the sinkhorn algorithm based on bregman projections [3]_. This +algorithm is provided in function :any:`ot.bregman.barycenter` also available as +:any:`ot.barycenter`. In this case, the algorithm scales better to large +distributions and rely only on matrix multiplications that can be performed in +parallel. + +In addition to teh speedup brought by regularization, one can also greatly +accelerate the estimation of Wasserstein barycenter when the support has a +separable structure [21]_. In teh case of 2D images for instance one can replace +the matrix vector production in teh bregman projections by convolution +operators. We provide an implementation of this algorithm in function +:any:`ot.bregman.convolutional_barycenter2d`. + +.. hint:: + Example of Wasserstein (:any:`ot.lp.barycenter`) and regularized wassrestein + barycenter (:any:`ot.bregman.barycenter`) computation are available in the following examples: + + - :any:`auto_examples/plot_barycenter_1D` + - :any:`auto_examples/plot_barycenter_lp_vs_entropic` + + Example of convolutional barycenter (:any:`ot.bregman.convolutional_barycenter2d`) computation for 2D images is available + in the following example: + + - :any:`auto_examples/plot_convolutional_barycenter` + + + +Barycenters with free support +^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + -- cgit v1.2.3 From b250212448ed3c1d023a6412abf4a3395d5585fb Mon Sep 17 00:00:00 2001 From: Rémi Flamary Date: Tue, 2 Jul 2019 11:01:53 +0200 Subject: wasserstein barycenetr with fixed support --- docs/source/quickstart.rst | 18 +++++++++++++----- 1 file changed, 13 insertions(+), 5 deletions(-) (limited to 'docs/source/quickstart.rst') diff --git a/docs/source/quickstart.rst b/docs/source/quickstart.rst index 94bc8cd..7cbc962 100644 --- a/docs/source/quickstart.rst +++ b/docs/source/quickstart.rst @@ -220,8 +220,14 @@ More details about the algorithm used is given in the following note. Recently [23]_ introduced the sinkhorn divergence that build from entropic -regularization to compute fast and differentiable geometric diveregnce between -empirical distributions. +regularization to compute fast and differentiable geometric divergence between +empirical distributions. Note that we provide a function that compute directly +(with no need to pre compute the :code:`M` matrix) +the sinkhorn divergence for empirical distributions in +:any:`ot.bregman.empirical_sinkhorn_divergence`. Similarly one can compute the +OT matrix and loss for empirical distributions with respectively +:any:`ot.bregman.empirical_sinkhorn` and :any:`ot.bregman.empirical_sinkhorn2`. + @@ -389,19 +395,21 @@ parallel. In addition to teh speedup brought by regularization, one can also greatly accelerate the estimation of Wasserstein barycenter when the support has a -separable structure [21]_. In teh case of 2D images for instance one can replace +separable structure [21]_. In the case of 2D images for instance one can replace the matrix vector production in teh bregman projections by convolution operators. We provide an implementation of this algorithm in function :any:`ot.bregman.convolutional_barycenter2d`. .. 
hint:: - Example of Wasserstein (:any:`ot.lp.barycenter`) and regularized wassrestein + Example of Wasserstein (:any:`ot.lp.barycenter`) and regularized Wasserstein barycenter (:any:`ot.bregman.barycenter`) computation are available in the following examples: - :any:`auto_examples/plot_barycenter_1D` - :any:`auto_examples/plot_barycenter_lp_vs_entropic` - Example of convolutional barycenter (:any:`ot.bregman.convolutional_barycenter2d`) computation for 2D images is available + Example of convolutional barycenter + (:any:`ot.bregman.convolutional_barycenter2d`) computation + for 2D images is available in the following example: - :any:`auto_examples/plot_convolutional_barycenter` -- cgit v1.2.3 From 64693f98c22775048222f61f5e495849844e0135 Mon Sep 17 00:00:00 2001 From: Rémi Flamary Date: Tue, 2 Jul 2019 11:09:50 +0200 Subject: quickstart wasserstein barycenter done --- docs/source/quickstart.rst | 26 ++++++++++++++++++++++---- 1 file changed, 22 insertions(+), 4 deletions(-) (limited to 'docs/source/quickstart.rst') diff --git a/docs/source/quickstart.rst b/docs/source/quickstart.rst index 7cbc962..8cce1c9 100644 --- a/docs/source/quickstart.rst +++ b/docs/source/quickstart.rst @@ -217,8 +217,6 @@ More details about the algorithm used is given in the following note. choose entropic/Kullbach Leibler regularization. - - Recently [23]_ introduced the sinkhorn divergence that build from entropic regularization to compute fast and differentiable geometric divergence between empirical distributions. Note that we provide a function that compute directly @@ -417,7 +415,27 @@ operators. We provide an implementation of this algorithm in function Barycenters with free support -^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +Estimating the Wassresein barycenter with free support but fixed weights +corresponds to solving the following optimization problem: + +.. math:: + \min_\{x_i\} \quad \sum_{k} w_kW(\mu,\mu_k) + + s.t. \quad \mu=\sum_{i=1}^n a_i\delta_{x_i} + +WE provide an alternating solver based on [20]_ in +:any:`ot.lp.free_support_barycenter`. This function minimize the problem and +return an optimal support :math:`\{x_i\}` for uniform or given weights +:math:`a`. + + .. hint:: + + Example of the fee support barycenter estimation is available + in the following example: + + - :any:`auto_examples/plot_free_support_barycenter` @@ -438,7 +456,7 @@ Gromov-Wasserstein GPU acceleration ----------------- +^^^^^^^^^^^^^^^^ We provide several implementation of our OT solvers in :any:`ot.gpu`. Those implementation use the :code:`cupy` toolbox. -- cgit v1.2.3 From 6fdce8f75000ec6e609371ae39484f7edbb19b2c Mon Sep 17 00:00:00 2001 From: Rémi Flamary Date: Tue, 2 Jul 2019 13:38:20 +0200 Subject: quickstart wda + start unbalanced --- docs/source/quickstart.rst | 148 +++++++++++++++++++++++++++++++++++++++++++-- docs/source/readme.rst | 2 - 2 files changed, 144 insertions(+), 6 deletions(-) (limited to 'docs/source/quickstart.rst') diff --git a/docs/source/quickstart.rst b/docs/source/quickstart.rst index 8cce1c9..8f4a24e 100644 --- a/docs/source/quickstart.rst +++ b/docs/source/quickstart.rst @@ -278,7 +278,7 @@ choose the quadratic regularization. Group Lasso regularization """""""""""""""""""""""""" -Another regularization that has been used in recent years is the group lasso +Another regularization that has been used in recent years [5]_ is the group lasso regularization .. math:: @@ -333,7 +333,7 @@ Another solver is proposed to solve the problem s.t. 
\gamma 1 = a; \gamma^T 1= b; \gamma\geq 0 where :math:`\Omega_e` is the entropic regularization. In this case we use a -generalized conditional gradient [7]_ implemented in :any:`ot.opim.gcg` that does not linearize the entropic term and +generalized conditional gradient [7]_ implemented in :any:`ot.optim.gcg` that does not linearize the entropic term and relies on :any:`ot.sinkhorn` for its iterations. .. hint:: @@ -421,11 +421,11 @@ Estimating the Wassresein barycenter with free support but fixed weights corresponds to solving the following optimization problem: .. math:: - \min_\{x_i\} \quad \sum_{k} w_kW(\mu,\mu_k) + \min_{\{x_i\}} \quad \sum_{k} w_kW(\mu,\mu_k) s.t. \quad \mu=\sum_{i=1}^n a_i\delta_{x_i} -WE provide an alternating solver based on [20]_ in +We provide an alternating solver based on [20]_ in :any:`ot.lp.free_support_barycenter`. This function minimize the problem and return an optimal support :math:`\{x_i\}` for uniform or given weights :math:`a`. @@ -443,13 +443,149 @@ return an optimal support :math:`\{x_i\}` for uniform or given weights Monge mapping and Domain adaptation ----------------------------------- +The original transport problem investigated by Gaspard Monge was seeking for a +mapping function that maps (or transports) between a source and target +distribution but that minimizes the transport loss. The existence and uniqueness of this +optimal mapping is still an open problem in the general case but has been proven +for smooth distributions by Brenier in his eponym `theorem +`__. We provide in +:any:`ot.da` several solvers for Monge mapping estimation and domain adaptation. + +Monge Mapping estimation +^^^^^^^^^^^^^^^^^^^^^^^^ + +We now discuss several approaches that are implemented in POT to estimate or +approximate a Monge mapping from finite distributions. + +First note that when the source and target distributions are supposed to be Gaussian +distributions, there exists a close form solution for the mapping and its an +affine function [14]_ of the form :math:`T(x)=Ax+b` . In this case we provide the function +:any:`ot.da.OT_mapping_linear` that return the operator :math:`A` and vector +:math:`b`. Note that if the number of samples is too small there is a parameter +:code:`reg` that provide a regularization for the covariance matrix estimation. + +For a more general mapping estimation we also provide the barycentric mapping +proposed in [6]_ . It is implemented in the class :any:`ot.da.EMDTransport` and +other transport based classes in :any:`ot.da` . Those classes are discussed more +in the following but follow an interface similar to sklearn classes. Finally a +method proposed in [8]_ that estimate a continuous mapping approximating the +barycentric mapping is provided in :any:`ot.da.joint_OT_mapping_linear` for +linear mapping and :any:`ot.da.joint_OT_mapping_kernel` for non linear mapping. + + .. hint:: + + Example of the linear Monge mapping estimation is available + in the following example: + + - :any:`auto_examples/plot_otda_linear_mapping` + +Domain adaptation classes +^^^^^^^^^^^^^^^^^^^^^^^^^ + +The use of OT for domain adaptation (OTDA) has been first proposed in [5]_ that also +introduced the group Lasso regularization. The main idea of OTDA is to estimate +a mapping of the samples between source and target distributions which allows to +transport labeled source samples onto the target distribution with no labels. + +We provide several classes based on :any:`ot.da.BaseTransport` that provide +several OT and mapping estimations. 
The interface of those classes is similar to +classifiers in sklearn toolbox. At initialization several parameters (for +instance regularization parameter) can be set. Then one needs to estimate the +mapping with function :any:`ot.da.BaseTransport.fit`. Finally one can map the +samples from source to target with :any:`ot.da.BaseTransport.transform` and +from target to source with :any:`ot.da.BaseTransport.inverse_transform`. Here is +an example for class :any:`ot.da.EMDTransport` + +.. code:: + + ot_emd = ot.da.EMDTransport() + ot_emd.fit(Xs=Xs, Xt=Xt) + + Mapped_Xs= ot_emd.transform(Xs=Xs) + +A list +of the provided implementation is given in the following note. + +.. note:: + + Here is a list of the mapping classes inheriting from + :any:`ot.da.BaseTransport` + + * :any:`ot.da.EMDTransport` : Barycentric mapping with EMD transport + * :any:`ot.da.SinkhornTransport` : Barycentric mapping with Sinkhorn transport + * :any:`ot.da.SinkhornL1l2Transport` : Barycentric mapping with Sinkhorn + + group Lasso regularization [5]_ + * :any:`ot.da.SinkhornLpl1Transport` : Barycentric mapping with Sinkhorn + + non convex group Lasso regularization [5]_ + * :any:`ot.da.LinearTransport` : Linear mapping estimation between Gaussians + [14]_ + * :any:`ot.da.MappingTransport` : Nonlinear mapping estimation [8]_ + +.. hint:: + + Example of the use of OTDA classes are available in the following exmaples: + + - :any:`auto_examples/plot_otda_color_images` + - :any:`auto_examples/plot_otda_mapping` + - :any:`auto_examples/plot_otda_mapping_colors_images` + - :any:`auto_examples/plot_otda_semi_supervised` Other applications ------------------ +We discuss in the following several implementations that has been used and +proposed in the OT and machine learning community. + Wasserstein Discriminant Analysis ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +Wasserstein Discriminant Analysis [11]_ is a generalization of `Fisher Linear Discriminant +Analysis `__ that +allows discrimination between classes that are not linearly separable. It +consist in finding a linear projector optimizing the following criterion + +.. math:: + P = \text{arg}\min_P \frac{\sum_i OT_e(\mu_i\#P,\mu_i\#P)}{\sum_{i,j\neq i} + OT_e(\mu_i\#P,\mu_j\#P)} + +where :math:`\#` is the push-forward operator, :math:`OT_e` is the entropic OT +loss and :math:`\mu_i` is the +distribution of samples from class :math:`i`. :math:`P` is also constrained to +be in the Stiefel manifold. WDA can be solved in pot using function +:any:`ot.dr.wda`. It requires to have installed :code:`pymanopt` and +:code:`autograd` for manifold optimization and automatic differentiation +respectively. Note that we also provide the Fisher discriminant estimator in +:any:`ot.dr.wda` for easy comparison. + +.. warning:: + Note that due to the hard dependency on :code:`pymanopt` and + :code:`autograd`, :any:`ot.dr` is not imported by default. If you want to + use it you have to specifically import it with :code:`import ot.dr` . + +.. hint:: + + An example of the use of WDA is available in the following example: + + - :any:`auto_examples/plot_WDA` + + +Unbalanced optimal transport +^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +Unbalanced OT is a relaxation of the original OT problem where the violation of +the constraint on the marginals is added to the objective of the optimization +problem: + +.. math:: + \min_\gamma \quad \sum_{i,j}\gamma_{i,j}M_{i,j} + reg\cdot\Omega(\gamma) + \alpha KL(\gamma 1, a) + \alpha KL(\gamma^T 1, b) + + s.t. \quad \gamma\geq 0 + + +where KL is the Kullback-Leibler divergence. 
This formulation allows for
+computing approximate mapping between distributions that do not have the same
+amount of mass. Interestingly the problem can be solved with a generalization of
+the Bregman projections algorithm [10]_.

 Gromov-Wasserstein
 ^^^^^^^^^^^^^^^^^^

@@ -461,6 +597,10 @@ GPU acceleration
 ^^^^^^^^^^^^^^^^

 We provide several implementation of our OT solvers in :any:`ot.gpu`. Those
 implementation use the :code:`cupy` toolbox.

+.. warning::
+    Note that due to the hard dependency on :code:`cupy`, :any:`ot.gpu` is not
+    imported by default. If you want to
+    use it you have to specifically import it with :code:`import ot.gpu` .

 FAQ
 diff --git a/docs/source/readme.rst b/docs/source/readme.rst index 320ddd5..0871779 100644 --- a/docs/source/readme.rst +++ b/docs/source/readme.rst @@ -221,8 +221,6 @@ This toolbox has been created and is maintained by The contributors to this library are -- `Rémi Flamary `__ -- `Nicolas Courty `__ - `Alexandre Gramfort `__ - `Laetitia Chapel `__ - `Michael Perrot `__ -- cgit v1.2.3 From 85cc12bc7731077846bb77346797165c098fc4ec Mon Sep 17 00:00:00 2001 From: Rémi Flamary Date: Tue, 2 Jul 2019 16:05:31 +0200 Subject: quickstart gfirst shot done! --- docs/source/quickstart.rst | 97 +++++++++++++++++++++++++++++++++++++++++++--- 1 file changed, 92 insertions(+), 5 deletions(-) (limited to 'docs/source/quickstart.rst') diff --git a/docs/source/quickstart.rst b/docs/source/quickstart.rst index 8f4a24e..0dcd7ff 100644 --- a/docs/source/quickstart.rst +++ b/docs/source/quickstart.rst @@ -417,7 +417,7 @@ operators. We provide an implementation of this algorithm in function

 Barycenters with free support
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

-Estimating the Wassresein barycenter with free support but fixed weights
+Estimating the Wasserstein barycenter with free support but fixed weights
 corresponds to solving the following optimization problem:

 .. math::
@@ -555,7 +555,7 @@ be in the Stiefel manifold. WDA can be solved in pot using function
 :any:`ot.dr.wda`. It requires to have installed :code:`pymanopt` and
 :code:`autograd` for manifold optimization and automatic differentiation
 respectively. Note that we also provide the Fisher discriminant estimator in
-:any:`ot.dr.wda` for easy comparison.
+:any:`ot.dr.fda` for easy comparison.

 .. warning::
     Note that due to the hard dependency on :code:`pymanopt` and
@@ -585,17 +585,104 @@ problem:

 where KL is the Kullback-Leibler divergence. This formulation allows for
 computing approximate mapping between distributions that do not have the same
 amount of mass. Interestingly the problem can be solved with a generalization of
-the Bregman projections algorithm [10]_.
+the Bregman projections algorithm [10]_. We provide a solver for unbalanced OT
+in :any:`ot.unbalanced` and more specifically
+in function :any:`ot.sinkhorn_unbalanced`. A solver for unbalanced OT barycenter
+is available in :any:`ot.barycenter_unbalanced`.
+
+
+.. hint::
+
+    Examples of the use of :any:`ot.sinkhorn_unbalanced` and
+    :any:`ot.barycenter_unbalanced` are available in:
+
+    - :any:`auto_examples/plot_UOT_1D`
+    - :any:`auto_examples/plot_UOT_barycenter_1D`
+

 Gromov-Wasserstein
 ^^^^^^^^^^^^^^^^^^

+Gromov Wasserstein (GW) is a generalization of OT to distributions that do not lie in
+the same space [13]_. In this case one cannot compute distance between samples
+from the two distributions. [13]_ proposed instead to realign the metric spaces
+by computing a transport between distance matrices. The Gromov-Wasserstein
+alignment between two distributions can be expressed as the one minimizing:
+
+.. math::
+    GW = \min_\gamma \sum_{i,j,k,l} L(C1_{i,k},C2_{j,l})*\gamma_{i,j}*\gamma_{k,l}
+
+        s.t. \gamma 1 = a; \gamma^T 1= b; \gamma\geq 0
+
+where :math:`C1` is the distance matrix between samples in the source
+distribution and :math:`C2` the one between samples in the target,
+:math:`L(C1_{i,k},C2_{j,l})` is a measure of similarity between
+:math:`C1_{i,k}` and :math:`C2_{j,l}` often chosen as
+:math:`L(C1_{i,k},C2_{j,l})=\|C1_{i,k}-C2_{j,l}\|^2`. The optimization problem
+above is a non-convex quadratic program but we provide a solver that finds a
+local minimum using conditional gradient in :any:`ot.gromov.gromov_wasserstein`.
+There also exists an entropic regularized variant of GW that has been proposed in
+[12]_ and we provide an implementation of their algorithm in
+:any:`ot.gromov.entropic_gromov_wasserstein`.
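As a rough illustration of these two solvers, the sketch below aligns two toy
point clouds living in spaces of different dimensions (the data, the
normalization of the distance matrices and the value of :code:`epsilon` are
assumptions made for the example):

.. code:: python

    import numpy as np
    import ot

    xs = np.random.randn(30, 2)   # toy samples living in R^2
    xt = np.random.randn(40, 3)   # toy samples living in R^3 (a different space)

    C1 = ot.dist(xs, xs)   # intra-domain distance matrix of the source
    C2 = ot.dist(xt, xt)   # intra-domain distance matrix of the target
    C1 /= C1.max()         # arbitrary rescaling for the example
    C2 /= C2.max()
    p, q = ot.unif(30), ot.unif(40)

    # conditional gradient solver (local minimum of the non-convex problem)
    T_gw = ot.gromov.gromov_wasserstein(C1, C2, p, q, 'square_loss')

    # entropic regularized version
    T_egw = ot.gromov.entropic_gromov_wasserstein(C1, C2, p, q, 'square_loss',
                                                  epsilon=5e-2)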
+Note that similarly to the Wasserstein distance, GW allows for the definition of GW
+barycenters that can be expressed as
+
+.. math::
+    \min_{C\geq 0} \quad \sum_{k} w_k GW(C,Ck)
+
+where :math:`Ck` is the distance matrix between samples in distribution
+:math:`k`. Note that interestingly the barycenter is defined as a symmetric
+positive matrix. We provide a block coordinate optimization procedure in
+:any:`ot.gromov.gromov_barycenters` and
+:any:`ot.gromov.entropic_gromov_barycenters` for non-regularized and regularized
+barycenters respectively.
+
+Finally note that recently a fusion between Wasserstein and GW, coined Fused
+Gromov-Wasserstein (FGW), has been proposed
+in [24]_. It allows to compute a similarity between objects that are only partly in
+the same space. As such it can be used to measure similarity between labeled
+graphs for instance and also provides computable barycenters.
+The implementation of FGW is provided in functions
+:any:`ot.gromov.fused_gromov_wasserstein` and :any:`ot.gromov.fgw_barycenters`.
+
+.. hint::
+
+    Examples of computation of GW, regularized GW and FGW are provided in :
+
+    - :any:`auto_examples/plot_gromov`
+    - :any:`auto_examples/plot_fgw`
+
+    Examples of GW, regularized GW and FGW barycenters are available in :
+
+    - :any:`auto_examples/plot_gromov_barycenter`
+    - :any:`auto_examples/plot_barycenter_fgw`
+

 GPU acceleration
 ^^^^^^^^^^^^^^^^

 We provide several implementation of our OT solvers in :any:`ot.gpu`. Those
-implementation use the :code:`cupy` toolbox.
+implementations use the :code:`cupy` toolbox, which obviously needs to be installed.
+
+
+.. note::
+
+    Several implementations of POT functions (mainly those relying on linear
+    algebra) have been implemented in :any:`ot.gpu`. Here is a short list on the
+    main entries:
+
+    - :any:`ot.gpu.dist` : computation of distance matrix
+    - :any:`ot.gpu.sinkhorn` : computation of sinkhorn
+    - :any:`ot.gpu.sinkhorn_lpl1_mm` : computation of sinkhorn + group lasso
+
+Note that while the :any:`ot.gpu` module has been designed to be compatible with
+POT, calling its functions with numpy arrays will incur a large overhead due to
+the memory copy of the array on GPU prior to computation and conversion of the
+array after computation. To avoid this overhead, we provide functions
+:any:`ot.gpu.to_gpu` and :any:`ot.gpu.to_np` that perform the conversion
+explicitly.
+

 .. warning::
     Note that due to the hard dependency on :code:`cupy`, :any:`ot.gpu` is not
     imported by default. If you want to
     use it you have to specifically import it with :code:`import ot.gpu` .


 FAQ

-.. [14] Knott, M. and Smith, C. S. 
(1984).`On the optimal mapping of +.. [14] Knott, M. and Smith, C. S. (1984). `On the optimal mapping of distributions `__, Journal of Optimization Theory and Applications Vol 43. -- cgit v1.2.3 From ef00ce42616fe7adf747c23a5590a83b62171a36 Mon Sep 17 00:00:00 2001 From: Rémi Flamary Date: Tue, 2 Jul 2019 16:40:30 +0200 Subject: quickstart proof reading --- docs/source/quickstart.rst | 173 +++++++++++++++++++++++---------------------- 1 file changed, 88 insertions(+), 85 deletions(-) (limited to 'docs/source/quickstart.rst') diff --git a/docs/source/quickstart.rst b/docs/source/quickstart.rst index 0dcd7ff..b726149 100644 --- a/docs/source/quickstart.rst +++ b/docs/source/quickstart.rst @@ -3,7 +3,9 @@ Quick start guide ================= In the following we provide some pointers about which functions and classes -to use for different problems related to optimal transport (OT). +to use for different problems related to optimal transport (OT) and machine +learning. We refer when we can to concrete examples in the documentation that +are also available as notebooks on the POT Github. This document is not a tutorial on numerical optimal transport. For this we strongly recommend to read the very nice book [15]_ . @@ -16,7 +18,8 @@ Optimal transport and Wasserstein distance In POT, most functions that solve OT or regularized OT problems have two versions that return the OT matrix or the value of the optimal solution. For instance :any:`ot.emd` return the OT matrix and :any:`ot.emd2` return the - Wassertsein distance. + Wassertsein distance. This approach has been implemented in practice for all + solvers that return an OT matrix (even Gromov-Wasserstsein) Solving optimal transport ^^^^^^^^^^^^^^^^^^^^^^^^^ @@ -32,7 +35,8 @@ as where : - :math:`M\in\mathbb{R}_+^{m\times n}` is the metric cost matrix defining the cost to move mass from bin :math:`a_i` to bin :math:`b_j`. -- :math:`a` and :math:`b` are histograms (positive, sum to 1) that represent the weights of each samples in the source an target distributions. +- :math:`a` and :math:`b` are histograms on the simplex (positive, sum to 1) that represent the +weights of each samples in the source an target distributions. Solving the linear program above can be done using the function :any:`ot.emd` that will return the optimal transport matrix :math:`\gamma^*`: @@ -43,7 +47,7 @@ that will return the optimal transport matrix :math:`\gamma^*`: # M is the ground cost matrix T=ot.emd(a,b,M) # exact linear program -The method used for solving the OT problem is the network simplex, it is +The method implemented for solving the OT problem is the network simplex, it is implemented in C from [1]_. It has a complexity of :math:`O(n^3)` but the solver is quite efficient and uses sparsity of the solution. @@ -54,15 +58,16 @@ solver is quite efficient and uses sparsity of the solution. - :any:`auto_examples/plot_OT_1D` - :any:`auto_examples/plot_OT_L1_vs_L2` + Computing Wasserstein distance ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -The value of the OT solution is often more of interest that the OT matrix : +The value of the OT solution is often more of interest than the OT matrix : - .. math:: - OT(a,b)=\min_\gamma \quad \sum_{i,j}\gamma_{i,j}M_{i,j} +.. math:: + OT(a,b)=\min_\gamma \quad \sum_{i,j}\gamma_{i,j}M_{i,j} - s.t. \gamma 1 = a; \gamma^T 1= b; \gamma\geq 0 + s.t. 
\gamma 1 = a; \gamma^T 1= b; \gamma\geq 0
 
 
 It can be computed from an already estimated OT matrix with
@@ -92,7 +97,6 @@ the :math:`W_1` wasserstein distance can be done directly with :any:`ot.emd2`
 when providing :code:`M=ot.dist(xs,xt, metric='euclidean')` to use the euclidean
 distance.
 
-
 .. hint::
 
     Examples of use for :any:`ot.emd2` are available in the following examples:
@@ -111,15 +115,15 @@ For instance when the samples are in 1D, then the OT problem can be solved in
 function :any:`ot.emd_1d` and  :any:`ot.emd2_1d` to return respectively the OT
 matrix and value. Note that since the solution is very sparse the :code:`sparse`
 parameter of :any:`ot.emd_1d` allows for solving and returning the solution for
-very large problems. Note that in order to computed directly the :math:`W_p`
+very large problems. Note that in order to compute directly the :math:`W_p`
 Wasserstein distance in 1D we provide the function :any:`ot.wasserstein_1d` that
 takes :code:`p` as a parameter.
 
-Another specials for estimating OT and Monge mapping is between Gaussian
+Another special case for estimating OT and Monge mapping is between Gaussian
 distributions. In this case there exists a closed form solution given in Remark
 2.29 in [15]_ and the Monge mapping is an affine function and can also be
 computed from the covariances and means of the source and target
-distributions. In this case when the finite sample dataset is supposed gaussian, we provide
+distributions. In the case when the finite sample dataset is assumed Gaussian, we provide
 :any:`ot.da.OT_mapping_linear` that returns the parameters for the Monge
 mapping.
 
@@ -129,8 +133,7 @@ Regularized Optimal Transport
 
 Recent developments have shown the interest of regularized OT both in terms of
 computational and statistical properties.
-
-We address in this section the regularized OT problem that can be expressed as
+We address in this section the regularized OT problems that can be expressed as
 
 .. math::
     \gamma^* = arg\min_\gamma \quad \sum_{i,j}\gamma_{i,j}M_{i,j} + \lambda\Omega(\gamma)
 
     s.t. \gamma 1 = a; \gamma^T 1= b; \gamma\geq 0
 
 where :
@@ -148,7 +151,6 @@ We discuss in the following specific algorithms that can be used depending on
 the regularization term.
 
 
-
 Entropic regularized OT
 ^^^^^^^^^^^^^^^^^^^^^^^
 
@@ -162,7 +164,8 @@ regularization has the following expression
 
 The use of the regularization term above in the optimization problem has a very
 strong impact. First it makes the problem smooth which leads to new optimization
-procedures such as L-BFGS (see :any:`ot.smooth` ). Next it makes the problem
+procedures such as the well-known Sinkhorn algorithm [2]_ or L-BFGS (see
+:any:`ot.smooth` ). Next it makes the problem
 strictly convex meaning that there will be a unique solution. Finally the
 solution of the resulting optimization problem can be expressed as:
 
@@ -172,13 +175,13 @@ solution of the resulting optimization problem can be expressed as:
 
 where :math:`u` and :math:`v` are vectors and :math:`K=\exp(-M/\lambda)` where
 the :math:`\exp` is taken component-wise. In order to solve the optimization
-problem, on can use an alternative projection algorithm that can be very
+problem, one can use an alternating projection algorithm called Sinkhorn-Knopp that can be very
 efficient for large values of regularization.
 
-The main function is POT are :any:`ot.sinkhorn` and
+The Sinkhorn-Knopp algorithm is implemented in :any:`ot.sinkhorn` and
 :any:`ot.sinkhorn2` that return respectively the OT matrix and the value of
 the linear term.
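As an illustration, a minimal sketch on empirical data could look as follows
(this assumes uniform weights built with :code:`ot.unif` and a squared Euclidean
cost from :any:`ot.dist`; the data arrays and the regularization value are only
placeholders):

.. code:: python

    import numpy as np
    import ot

    xs = np.random.randn(50, 2)          # source samples (toy data)
    xt = np.random.randn(60, 2) + 1      # target samples (toy data)

    a = ot.unif(50)                      # uniform weights on the source samples
    b = ot.unif(60)                      # uniform weights on the target samples
    M = ot.dist(xs, xt)                  # squared Euclidean cost by default

    T = ot.sinkhorn(a, b, M, 1e-1)       # entropic OT matrix with reg=1e-1
    W_reg = ot.sinkhorn2(a, b, M, 1e-1)  # value of the linear term <T, M>
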
 Note that the regularization parameter :math:`\lambda` in the
-equation above is given to those function with the parameter :code:`reg`.
+equation above is given to those functions with the parameter :code:`reg`.
 
     >>> import ot
     >>> a=[.5,.5]
     >>> b=[.5,.5]
     >>> M=[[0.,1.],[1.,0.]]
     >>> ot.sinkhorn(a,b,M,1)
     array([[ 0.36552929,  0.13447071],
            [ 0.13447071,  0.36552929]])
 
-
-
-More details about the algorithm used is given in the following note.
+More details about the algorithms used are given in the following note.
 
 .. note::
     The main function to solve entropic regularized OT is :any:`ot.sinkhorn`.
@@ -211,7 +211,7 @@ More details about the algorithm used is given in the following note.
     In addition to all those variants of sinkhorn, we have another
     implementation solving the problem in the smooth dual or semi-dual in
     :any:`ot.smooth`. This solver uses the :any:`scipy.optimize.minimize`
-    function to solve the smooth problem with :code:`L-BFGS` algorithm. Tu use
+    function to solve the smooth problem with the :code:`L-BFGS-B` algorithm. To use
     this solver, use functions :any:`ot.smooth.smooth_ot_dual` or
     :any:`ot.smooth.smooth_ot_semi_dual` with parameter :code:`reg_type='kl'` to
     choose entropic/Kullback-Leibler regularization.
@@ -227,13 +227,13 @@ OT matrix and loss for empirical distributions with respectively
     :any:`ot.bregman.empirical_sinkhorn` and
     :any:`ot.bregman.empirical_sinkhorn2`.
 
-
-
 Finally note that we also provide in :any:`ot.stochastic` several implementations
-of stochastic solvers for entropic regularized OT [18]_ [19]_.
+of stochastic solvers for entropic regularized OT [18]_ [19]_. Those pure Python
+implementations are not optimized for speed but provide a robust implementation
+of the algorithms in [18]_ [19]_.
 
 .. hint::
-    Examples of use for :any:`ot.sinkhorn` are available in the following examples:
+    Examples of use for :any:`ot.sinkhorn` are available in :
 
     - :any:`auto_examples/plot_OT_2D_samples`
     - :any:`auto_examples/plot_OT_1D`
@@ -246,7 +246,7 @@ Other regularization
 
 While entropic OT is the most common and favored in practice, there exist other
 kinds of regularization. We provide in POT two specific solvers for other
-regularization terms: namely quadratic regularization and group lasso
+regularization terms, namely quadratic regularization and group lasso
 regularization. But we also provide in :any:`ot.optim` two generic solvers that allow
 solving any smooth regularization in practice.
 
@@ -261,14 +261,14 @@ regularization of the form
 
 this regularization term has a similar effect to entropic regularization in
 densifying the OT matrix but it keeps some sort of sparsity that is lost with
-entropic regularization as soon as :math:`\lambda>0` [17]_. This problem cen be
+entropic regularization as soon as :math:`\lambda>0` [17]_. This problem can be
 solved with POT using solvers from :any:`ot.smooth`, more specifically
 functions :any:`ot.smooth.smooth_ot_dual` or
 :any:`ot.smooth.smooth_ot_semi_dual` with parameter :code:`reg_type='l2'` to
 choose the quadratic regularization.
 
 .. hint::
-    Examples of quadratic regularization are available in the following examples:
+    Examples of quadratic regularization are available in :
 
     - :any:`auto_examples/plot_OT_1D_smooth`
     - :any:`auto_examples/plot_optim_OTreg`
@@ -288,17 +288,17 @@ where :math:`\mathcal{G}` contains non overlapping groups of lines in the OT
 matrix.
This regularization proposed in [5]_ will promote sparsity at the group
 level and for instance will force target samples to get mass from a small
 number of groups. Note that the exact OT solution is already sparse so this regularization does
-not make sens if it is not combined with others such as entropic. Depending on
+not make sense if it is not combined with entropic regularization. Depending on
 the choice of :code:`p` and :code:`q`, the problem can be solved with different
 approaches. When :code:`q=1` and :code:`p<1` the problem is non convex but can
-be solved using an efficient majoration minimization approach with
+be solved using an efficient majorization-minimization approach with
 :any:`ot.sinkhorn_lpl1_mm`. When :code:`q=2` and :code:`p=1` we recover the
-convex gourp lasso and we provide a solver using generalized conditional
+convex group lasso and we provide a solver using the generalized conditional
 gradient algorithm [7]_ in function :any:`ot.da.sinkhorn_l1l2_gl`.
 
 .. hint::
-    Examples of group Lasso regularization are available in the following examples:
+    Examples of group Lasso regularization are available in :
 
     - :any:`auto_examples/plot_otda_classes`
     - :any:`auto_examples/plot_otda_d2`
@@ -309,7 +309,7 @@ Generic solvers
 
 Finally we propose in POT generic solvers that can be used to solve any
 regularization as long as you can provide a function computing the
-regularization and a function computing its gradient.
+regularization and a function computing its gradient (or sub-gradient).
 
 In order to solve
 
 .. math::
     \gamma^* = arg\min_\gamma \quad \sum_{i,j}\gamma_{i,j}M_{i,j} + \lambda\Omega(\gamma)
 
     s.t. \gamma 1 = a; \gamma^T 1= b; \gamma\geq 0
 
 you can use function :any:`ot.optim.cg` that will use a conditional gradient as
-proposed in [6]_ . you need to provide the regularization function as parameter
+proposed in [6]_ . You need to provide the regularization function as parameter
 ``f`` and its gradient as parameter ``df``. Note that the conditional gradient
 relies on iterative solving of a linearization of the problem using the exact
-:any:`ot.emd` so it can be slow in practice. Still it always returns a
+:any:`ot.emd` so it can be slow in practice. But, since all its iterates are
+feasible transport plans, it always returns a
 transport matrix that does not violate the marginals.
 
-Another solver is proposed to solve the problem
+Another generic solver is proposed to solve the problem
 
 .. math::
     \gamma^* = arg\min_\gamma \quad \sum_{i,j}\gamma_{i,j}M_{i,j}+ \lambda_e\Omega_e(\gamma) + \lambda\Omega(\gamma)
 
     s.t. \gamma 1 = a; \gamma^T 1= b; \gamma\geq 0
 
 where :math:`\Omega_e` is the entropic regularization. In this case we use a
-generalized conditional gradient [7]_ implemented in :any:`ot.optim.gcg` that does not linearize the entropic term and
+generalized conditional gradient [7]_ implemented in :any:`ot.optim.gcg` that
+does not linearize the entropic term but
 relies on :any:`ot.sinkhorn` for its iterations.
 
 .. hint::
-    Example of generic solvers are available in the following example:
+    An example of the generic solvers is available in :
 
     - :any:`auto_examples/plot_optim_OTreg`
 
Wasserstein Barycenters
-----------------------
 
 A Wasserstein barycenter is a distribution that minimizes its Wasserstein
 distance with respect to other distributions [16]_. It corresponds to minimizing the
-following problem by seaching a distribution :math:`\mu` such that
+following problem by searching a distribution :math:`\mu`
+such that
 
 .. 
math:: \min_\mu \quad \sum_{k} w_kW(\mu,\mu_k) @@ -371,7 +373,6 @@ Barycenters with fixed support When optimizing a barycenter with a fixed support, the optimization problem can be expressed as - .. math:: \min_a \quad \sum_{k} w_k W(a,b_k) @@ -379,36 +380,36 @@ where :math:`b_k` are also weights in the simplex. In the non-regularized case, the problem above is a classical linear program. In this case we propose a solver :any:`ot.lp.barycenter` that rely on generic LP solvers. By default the function uses :any:`scipy.optimize.linprog`, but more efficient LP solvers from -cvxopt can be also used by changing parameter :code:`solver`. Note that these -solver require to solve a very large linear program and can be very slow in +cvxopt can be also used by changing parameter :code:`solver`. Note that this problem +requires to solve a very large linear program and can be very slow in practice. Similarly to the OT problem, OT barycenters can be computed in the regularized -case. When using entropic regularization the problem can be solved with a +case. When using entropic regularization is used, the problem can be solved with a generalization of the sinkhorn algorithm based on bregman projections [3]_. This algorithm is provided in function :any:`ot.bregman.barycenter` also available as :any:`ot.barycenter`. In this case, the algorithm scales better to large distributions and rely only on matrix multiplications that can be performed in parallel. -In addition to teh speedup brought by regularization, one can also greatly +In addition to the speedup brought by regularization, one can also greatly accelerate the estimation of Wasserstein barycenter when the support has a separable structure [21]_. In the case of 2D images for instance one can replace -the matrix vector production in teh bregman projections by convolution +the matrix vector production in the Bregman projections by convolution operators. We provide an implementation of this algorithm in function :any:`ot.bregman.convolutional_barycenter2d`. .. hint:: - Example of Wasserstein (:any:`ot.lp.barycenter`) and regularized Wasserstein - barycenter (:any:`ot.bregman.barycenter`) computation are available in the following examples: + Examples of Wasserstein (:any:`ot.lp.barycenter`) and regularized Wasserstein + barycenter (:any:`ot.bregman.barycenter`) computation are available in : - :any:`auto_examples/plot_barycenter_1D` - :any:`auto_examples/plot_barycenter_lp_vs_entropic` - Example of convolutional barycenter + An example of convolutional barycenter (:any:`ot.bregman.convolutional_barycenter2d`) computation for 2D images is available - in the following example: + in : - :any:`auto_examples/plot_convolutional_barycenter` @@ -425,15 +426,15 @@ corresponds to solving the following optimization problem: s.t. \quad \mu=\sum_{i=1}^n a_i\delta_{x_i} -We provide an alternating solver based on [20]_ in +We provide a solver based on [20]_ in :any:`ot.lp.free_support_barycenter`. This function minimize the problem and -return an optimal support :math:`\{x_i\}` for uniform or given weights +return a locally optimal support :math:`\{x_i\}` for uniform or given weights :math:`a`. .. hint:: - Example of the fee support barycenter estimation is available - in the following example: + An example of the free support barycenter estimation is available + in : - :any:`auto_examples/plot_free_support_barycenter` @@ -449,7 +450,8 @@ distribution but that minimizes the transport loss. 
The existence and uniqueness optimal mapping is still an open problem in the general case but has been proven for smooth distributions by Brenier in his eponym `theorem `__. We provide in -:any:`ot.da` several solvers for Monge mapping estimation and domain adaptation. +:any:`ot.da` several solvers for smooth Monge mapping estimation and domain +adaptation from discrete distributions. Monge Mapping estimation ^^^^^^^^^^^^^^^^^^^^^^^^ @@ -468,14 +470,14 @@ For a more general mapping estimation we also provide the barycentric mapping proposed in [6]_ . It is implemented in the class :any:`ot.da.EMDTransport` and other transport based classes in :any:`ot.da` . Those classes are discussed more in the following but follow an interface similar to sklearn classes. Finally a -method proposed in [8]_ that estimate a continuous mapping approximating the +method proposed in [8]_ that estimates a continuous mapping approximating the barycentric mapping is provided in :any:`ot.da.joint_OT_mapping_linear` for linear mapping and :any:`ot.da.joint_OT_mapping_kernel` for non linear mapping. .. hint:: - Example of the linear Monge mapping estimation is available - in the following example: + An example of the linear Monge mapping estimation is available + in : - :any:`auto_examples/plot_otda_linear_mapping` @@ -489,12 +491,14 @@ transport labeled source samples onto the target distribution with no labels. We provide several classes based on :any:`ot.da.BaseTransport` that provide several OT and mapping estimations. The interface of those classes is similar to -classifiers in sklearn toolbox. At initialization several parameters (for -instance regularization parameter) can be set. Then one needs to estimate the +classifiers in sklearn toolbox. At initialization, several parameters such as + regularization parameter value can be set. Then one needs to estimate the mapping with function :any:`ot.da.BaseTransport.fit`. Finally one can map the samples from source to target with :any:`ot.da.BaseTransport.transform` and -from target to source with :any:`ot.da.BaseTransport.inverse_transform`. Here is -an example for class :any:`ot.da.EMDTransport` +from target to source with :any:`ot.da.BaseTransport.inverse_transform`. + +Here is +an example for class :any:`ot.da.EMDTransport` : .. code:: @@ -503,12 +507,11 @@ an example for class :any:`ot.da.EMDTransport` Mapped_Xs= ot_emd.transform(Xs=Xs) -A list -of the provided implementation is given in the following note. +A list of the provided implementation is given in the following note. .. note:: - Here is a list of the mapping classes inheriting from + Here is a list of the OT mapping classes inheriting from :any:`ot.da.BaseTransport` * :any:`ot.da.EMDTransport` : Barycentric mapping with EMD transport @@ -523,7 +526,7 @@ of the provided implementation is given in the following note. .. hint:: - Example of the use of OTDA classes are available in the following exmaples: + Example of the use of OTDA classes are available in : - :any:`auto_examples/plot_otda_color_images` - :any:`auto_examples/plot_otda_mapping` @@ -533,7 +536,7 @@ of the provided implementation is given in the following note. Other applications ------------------ -We discuss in the following several implementations that has been used and +We discuss in the following several OT related problems and tools that has been proposed in the OT and machine learning community. 
Wasserstein Discriminant Analysis @@ -551,7 +554,7 @@ consist in finding a linear projector optimizing the following criterion where :math:`\#` is the push-forward operator, :math:`OT_e` is the entropic OT loss and :math:`\mu_i` is the distribution of samples from class :math:`i`. :math:`P` is also constrained to -be in the Stiefel manifold. WDA can be solved in pot using function +be in the Stiefel manifold. WDA can be solved in POT using function :any:`ot.dr.wda`. It requires to have installed :code:`pymanopt` and :code:`autograd` for manifold optimization and automatic differentiation respectively. Note that we also provide the Fisher discriminant estimator in @@ -564,7 +567,7 @@ respectively. Note that we also provide the Fisher discriminant estimator in .. hint:: - An example of the use of WDA is available in the following example: + An example of the use of WDA is available in : - :any:`auto_examples/plot_WDA` @@ -582,7 +585,7 @@ problem: s.t. \quad \gamma\geq 0 -where KL is the Kullback-Leibler divergence. This formulation allwos for +where KL is the Kullback-Leibler divergence. This formulation allows for computing approximate mapping between distributions that do not have the same amount of mass. Interestingly the problem can be solved with a generalization of the Bregman projections algorithm [10]_. We provide a solver for unbalanced OT @@ -594,7 +597,7 @@ is available in :any:`ot.barycenter_unbalanced`. .. hint:: Examples of the use of :any:`ot.sinkhorn_unbalanced` and - :any:`ot.barycenter_unbalanced` are available in: + :any:`ot.barycenter_unbalanced` are available in : - :any:`auto_examples/plot_UOT_1D` - :any:`auto_examples/plot_UOT_barycenter_1D` @@ -609,46 +612,46 @@ from the two distributions. [13]_ proposed instead to realign the metric spaces by computing a transport between distance matrices. The Gromow Wasserstein alignement between two distributions can be expressed as the one minimizing: - .. math:: GW = \min_\gamma \sum_{i,j,k,l} L(C1_{i,k},C2_{j,l})*\gamma_{i,j}*\gamma_{k,l} s.t. \gamma 1 = a; \gamma^T 1= b; \gamma\geq 0 where ::math:`C1` is the distance matrix between samples in the source -distribution and :math:`C2` the one between samples in the target, :math:`L(C1_{i,k},C2_{j,l})` is a measure of similarity between +distribution and :math:`C2` the one between samples in the target, +:math:`L(C1_{i,k},C2_{j,l})` is a measure of similarity between :math:`C1_{i,k}` and :math:`C2_{j,l}` often chosen as :math:`L(C1_{i,k},C2_{j,l})=\|C1_{i,k}-C2_{j,l}\|^2`. The optimization problem above is a non-convex quadratic program but we provide a solver that finds a local minimum using conditional gradient in :any:`ot.gromov.gromov_wasserstein`. -There also exist an entropic regularized variant of GW that has been proposed in +There also exists an entropic regularized variant of GW that has been proposed in [12]_ and we provide an implementation of their algorithm in :any:`ot.gromov.entropic_gromov_wasserstein`. Note that similarly to Wasserstein distance GW allows for the definition of GW -barycenters that cen be expressed as +barycenters that can be expressed as .. math:: \min_{C\geq 0} \quad \sum_{k} w_k GW(C,Ck) where :math:`Ck` is the distance matrix between samples in distribution -:math:`k`. Note that interestingly the barycenter is defined a a symmetric +:math:`k`. Note that interestingly the barycenter is defined as a symmetric positive matrix. 
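Before describing the barycenter solvers, here is a minimal sketch of the plain
and entropic GW solvers mentioned above (it rebuilds the same kind of hypothetical
distance matrices as in the earlier snippet, uses uniform weights, and the value
of :code:`epsilon` is only an example):

.. code:: python

    import numpy as np
    import ot

    # hypothetical samples in two different spaces (placeholder data)
    xs = np.random.randn(30, 2)
    xt = np.random.randn(40, 5)

    C1 = ot.dist(xs, xs)       # intra-source distance matrix
    C2 = ot.dist(xt, xt)       # intra-target distance matrix
    C1 /= C1.max()
    C2 /= C2.max()

    p = ot.unif(30)            # uniform weights on the source samples
    q = ot.unif(40)            # uniform weights on the target samples

    # exact (non-convex) GW coupling with the squared loss
    T_gw = ot.gromov.gromov_wasserstein(C1, C2, p, q, 'square_loss')

    # entropic GW coupling, epsilon being the regularization strength
    T_egw = ot.gromov.entropic_gromov_wasserstein(C1, C2, p, q, 'square_loss',
                                                  epsilon=5e-3)
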
We provide a block coordinate optimization procedure in :any:`ot.gromov.gromov_barycenters` and :any:`ot.gromov.entropic_gromov_barycenters` for non-regularized and regularized barycenters respectively. Finally note that recently a fusion between Wasserstein and GW, coined Fused -Groimov-Wasserstein (FGW) has been proposed +Gromov-Wasserstein (FGW) has been proposed in [24]_. It allows to compute a similarity between objects that are only partly in the same space. As such it can be used to measure similarity between labeled graphs for instance and also provide computable barycenters. -The implementations of FGW is provided in functions +The implementations of FGW and FGW barycenter is provided in functions :any:`ot.gromov.fused_gromov_wasserstein` and :any:`ot.gromov.fgw_barycenters`. .. hint:: - Examples of computation of GW, regularized G and FGW are provided in : + Examples of computation of GW, regularized G and FGW are available in : - :any:`auto_examples/plot_gromov` - :any:`auto_examples/plot_fgw` @@ -663,7 +666,7 @@ GPU acceleration ^^^^^^^^^^^^^^^^ We provide several implementation of our OT solvers in :any:`ot.gpu`. Those -implementation use the :code:`cupy` toolbox that obviously need to be installed. +implementations use the :code:`cupy` toolbox that obviously need to be installed. .. note:: @@ -677,7 +680,7 @@ implementation use the :code:`cupy` toolbox that obviously need to be installed. - :any:`ot.gpu.sinkhorn_lpl1_mm` : computation of sinkhorn + group lasso Note that while the :any:`ot.gpu` module has been designed to be compatible with -POT, calling its function with numpy array will incur a large overhead due to +POT, calling its function with :any:`numpy` arrays will incur a large overhead due to the memory copy of the array on GPU prior to computation and conversion of the array after computation. To avoid this overhead, we provide functions :any:`ot.gpu.to_gpu` and :any:`ot.gpu.to_np` that perform the conversion @@ -697,7 +700,7 @@ FAQ 1. **How to solve a discrete optimal transport problem ?** - The solver for discrete is the function :py:mod:`ot.emd` that returns + The solver for discrete OT is the function :py:mod:`ot.emd` that returns the OT transport matrix. If you want to solve a regularized OT you can use :py:mod:`ot.sinkhorn`. @@ -711,7 +714,7 @@ FAQ T=ot.emd(a,b,M) # exact linear program T_reg=ot.sinkhorn(a,b,M,reg) # entropic regularized OT - More detailed examples can be seen on this + More detailed examples can be seen on this example: :doc:`auto_examples/plot_OT_2D_samples` -- cgit v1.2.3 From e26aa8ee4498f19248f8dcc9868ec55b62eb35e5 Mon Sep 17 00:00:00 2001 From: Rémi Flamary Date: Tue, 2 Jul 2019 16:52:24 +0200 Subject: typo exmaples --- docs/source/quickstart.rst | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) (limited to 'docs/source/quickstart.rst') diff --git a/docs/source/quickstart.rst b/docs/source/quickstart.rst index b726149..1640d6a 100644 --- a/docs/source/quickstart.rst +++ b/docs/source/quickstart.rst @@ -52,7 +52,7 @@ implemented in C from [1]_. It has a complexity of :math:`O(n^3)` but the solver is quite efficient and uses sparsity of the solution. .. hint:: - Examples of use for :any:`ot.emd` are available in the following examples: + Examples of use for :any:`ot.emd` are available in : - :any:`auto_examples/plot_OT_2D_samples` - :any:`auto_examples/plot_OT_1D` @@ -99,7 +99,7 @@ distance. .. 
hint:: - Examples of use for :any:`ot.emd2` are available in the following examples: + An example of use for :any:`ot.emd2` is available in : - :any:`auto_examples/plot_compute_emd` -- cgit v1.2.3