Describe the bug
I'm opening this as a bug, but the code does actually work. The issue is the definition used for the Sinkhorn divergence.
Basically, the Sinkhorn divergence code does not match the formula in the referenced paper, ["Learning Generative Models with Sinkhorn Divergences", 2017](https://arxiv.org/pdf/1706.00292.pdf).
Code sample and Expected behavior
Here is the relevant line from `empirical_sinkhorn_divergence`:
`sinkhorn_div = sinkhorn_loss_ab - 0.5 * (sinkhorn_loss_a + sinkhorn_loss_b)`
To match the Sinkhorn divergence formula from the paper, the code should probably be:
`sinkhorn_div = 2 * sinkhorn_loss_ab - (sinkhorn_loss_a + sinkhorn_loss_b)`
This is a minor issue, but perhaps the documentation should address this difference.
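For reference, writing W(a, b) for the entropic OT loss between measures a and b, the two conventions differ only by a global factor of 2:

```latex
% Paper's definition:
\overline{W}(a,b) = 2\,W(a,b) - W(a,a) - W(b,b)

% Current code:
S(a,b) = W(a,b) - \tfrac{1}{2}\bigl(W(a,a) + W(b,b)\bigr) = \tfrac{1}{2}\,\overline{W}(a,b)
```

Since the two differ only by a constant positive factor, they have the same minimizers and the same zero set, which is presumably why the code works in practice; only the reported values change.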
Another issue is that `sinkhorn_loss` returns the full regularized objective,
<\gamma*, M> + reg * \Omega(\gamma*),
while in the paper the Sinkhorn cost is only the transport part,
<\gamma*, M>,
where \gamma* is the optimal plan for the regularized problem. In other words, in the paper the regularization term is only used to find the optimal plan and is then discarded.
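To make the distinction concrete, here is a minimal NumPy sketch (not POT's implementation; the function names, the entropy convention \Omega(\gamma) = <\gamma, log \gamma>, and the uniform-weight squared-Euclidean setup are all assumptions for illustration) contrasting the regularized objective with the paper's cost, where reg is used only to find \gamma* and then dropped:

```python
import numpy as np

def sinkhorn_plan(a, b, M, reg, n_iter=1000):
    # Plain Sinkhorn fixed-point iterations on the Gibbs kernel K = exp(-M / reg).
    K = np.exp(-M / reg)
    v = np.ones_like(b)
    for _ in range(n_iter):
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]  # optimal plan gamma*

def loss_with_entropy(a, b, M, reg):
    # Regularized objective: <gamma*, M> + reg * Omega(gamma*),
    # i.e. what the issue says sinkhorn_loss returns.
    G = sinkhorn_plan(a, b, M, reg)
    return np.sum(G * M) + reg * np.sum(G * np.log(G))

def paper_cost(a, b, M, reg):
    # Paper's cost: reg is only used to find gamma*, then discarded.
    G = sinkhorn_plan(a, b, M, reg)
    return np.sum(G * M)

def paper_sinkhorn_divergence(X, Y, reg):
    # 2 * W(a, b) - W(a, a) - W(b, b), with uniform weights and
    # squared Euclidean ground cost (assumed for illustration).
    a = np.full(len(X), 1.0 / len(X))
    b = np.full(len(Y), 1.0 / len(Y))
    cost = lambda U, V: ((U[:, None, :] - V[None, :, :]) ** 2).sum(-1)
    W = lambda p, q, U, V: paper_cost(p, q, cost(U, V), reg)
    return 2 * W(a, b, X, Y) - W(a, a, X, X) - W(b, b, Y, Y)
```

With this convention the divergence of a point cloud with itself is zero, and the entropic term (negative for plan entries below 1) makes the regularized objective strictly smaller than the plain transport cost.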