author     MathieuCarriere <mathieu.carriere3@gmail.com>  2022-05-23 10:23:25 +0200
committer  MathieuCarriere <mathieu.carriere3@gmail.com>  2022-05-23 10:23:25 +0200
commit     6e5b348cb02acd16f990df629a9d938ecb3a318f
tree       f43642cce2fc0d3ecf3e4f75f53718c4842a79e2 /src/python/doc
parent     bff442de9324421062a4aada6a271c79a826db7a
updated output for cubical complexes
Diffstat (limited to 'src/python/doc')
 src/python/doc/cubical_complex_tflow_itf_ref.rst | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/python/doc/cubical_complex_tflow_itf_ref.rst b/src/python/doc/cubical_complex_tflow_itf_ref.rst
index 881a2950..18b97adf 100644
--- a/src/python/doc/cubical_complex_tflow_itf_ref.rst
+++ b/src/python/doc/cubical_complex_tflow_itf_ref.rst
@@ -19,7 +19,7 @@ Example of gradient computed from cubical persistence
     cl = CubicalLayer(dimensions=[0])
     with tf.GradientTape() as tape:
-        dgm = cl.call(X)[0]
+        dgm = cl.call(X)[0][0]
         loss = tf.math.reduce_sum(tf.square(.5*(dgm[:,1]-dgm[:,0])))
     grads = tape.gradient(loss, [X])
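The extra `[0]` suggests that after this change `CubicalLayer.call` returns, for each requested homology dimension, a tuple whose first entry is the finite persistence diagram (rather than the diagram directly). The sketch below illustrates only the indexing change with a plain-Python stand-in; the mock return structure is an assumption inferred from the diff, and no GUDHI or TensorFlow installation is required:

```python
# Mock of the assumed new CubicalLayer output structure: one entry per
# requested homology dimension, each entry a tuple whose first element
# is the finite persistence diagram (hypothetical (birth, death) pairs).
def mock_call(X):
    finite_dgm = [(0.1, 0.4), (0.2, 0.9)]  # hypothetical finite diagram
    essential_dgm = []                      # hypothetical essential part
    return [(finite_dgm, essential_dgm)]    # one tuple, for dimensions=[0]

out = mock_call(None)
old_style = out[0]      # before the change: this was already the diagram;
                        # now it is the whole (finite, essential) tuple
dgm = out[0][0]         # after the change: index twice to reach the
                        # finite diagram for dimension 0
print(dgm)              # -> [(0.1, 0.4), (0.2, 0.9)]
```

With the real layer, `dgm` is then a tensor of (birth, death) pairs on which the documented loss `tf.math.reduce_sum(tf.square(.5*(dgm[:,1]-dgm[:,0])))` is computed inside the gradient tape.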