author     MathieuCarriere <mathieu.carriere3@gmail.com>  2022-04-12 15:21:02 +0200
committer  MathieuCarriere <mathieu.carriere3@gmail.com>  2022-04-12 15:21:02 +0200
commit     27f8df308e3ed935e4ef9f62d23717efebdf36ae
tree       9fbe0bc93b8b4479af03198bb3400cc697750432 /src/python/doc
parent     92517a85ef7d28f4738a27ea850eed9d8c407334
fix doc + reshape in cubical
Diffstat (limited to 'src/python/doc')
-rw-r--r--  src/python/doc/cubical_complex_tflow_itf_ref.rst | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/python/doc/cubical_complex_tflow_itf_ref.rst b/src/python/doc/cubical_complex_tflow_itf_ref.rst
index 18b97adf..881a2950 100644
--- a/src/python/doc/cubical_complex_tflow_itf_ref.rst
+++ b/src/python/doc/cubical_complex_tflow_itf_ref.rst
@@ -19,7 +19,7 @@ Example of gradient computed from cubical persistence
     cl = CubicalLayer(dimensions=[0])
     with tf.GradientTape() as tape:
-        dgm = cl.call(X)[0][0]
+        dgm = cl.call(X)[0]
         loss = tf.math.reduce_sum(tf.square(.5*(dgm[:,1]-dgm[:,0])))
     grads = tape.gradient(loss, [X])
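For context, the change drops one level of indexing: the fix suggests `cl.call(X)` returns one persistence diagram per requested dimension, so with `dimensions=[0]` the single `[0]` selects the whole dimension-0 diagram (an `(n, 2)` array of birth/death pairs), whereas the old `[0][0]` would have kept only its first point. The sketch below mimics that indexing and the example's loss (sum of squared half-persistences) with a mock NumPy diagram, so it runs without `gudhi` or `tensorflow`; the mock values are purely illustrative.

```python
import numpy as np

# Hypothetical stand-in for cl.call(X) with dimensions=[0]: a list with one
# diagram per requested homology dimension (assumed API, not the real layer).
diagrams = [np.array([[0.0, 1.0],     # (birth, death) pairs of the
                      [0.2, 0.5]])]   # mock dimension-0 diagram

dgm = diagrams[0]  # the full (n, 2) diagram; [0][0] would be just one point

# Same loss as in the documented example: sum of squared half-persistences.
loss = np.sum(np.square(0.5 * (dgm[:, 1] - dgm[:, 0])))
# (0.5 * 1.0)**2 + (0.5 * 0.3)**2 = 0.2725
```

Minimizing this loss shrinks the total persistence of the diagram, which is why the example differentiates it with respect to `X` via `tf.GradientTape`.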