author     Mario Mulansky <mario.mulansky@gmx.net>   2015-04-30 14:23:14 +0200
committer  Mario Mulansky <mario.mulansky@gmx.net>   2015-04-30 14:23:14 +0200
commit     88c18727290d81ce6f6ff66862cea6da241c7c34 (patch)
tree       47d1922a621becc87a4573fca9feb4acef821432
parent     1b9f4ec0aee1281464cfcab02bb4c7c302dbbb00 (diff)
parent     26d4dd11f80b1c0b4ab19e7cb74c77b61eaad90c (diff)
Merge pull request #4 from mariomulansky/develop0.2.0
merge develop
-rw-r--r--               Changelog                             9
-rw-r--r--               Readme.rst                           80
-rw-r--r--               doc/pyspike.rst                      58
-rw-r--r-- [-rwxr-xr-x]  examples/PySpike_testdata.txt         0
-rw-r--r--               examples/merge.py                    11
-rw-r--r--               examples/multivariate.py              5
-rw-r--r--               examples/plot.py                      6
-rw-r--r--               examples/spike_sync.py                5
-rw-r--r--               pyspike/DiscreteFunc.py              12
-rw-r--r--               pyspike/PieceWiseConstFunc.py        12
-rw-r--r--               pyspike/PieceWiseLinFunc.py          12
-rw-r--r--               pyspike/SpikeTrain.py                50
-rw-r--r--               pyspike/__init__.py                   7
-rw-r--r--               pyspike/cython/cython_distance.pyx  339
-rw-r--r--               pyspike/cython/python_backend.py    329
-rw-r--r--               pyspike/isi_distance.py              82
-rw-r--r--               pyspike/psth.py                      39
-rw-r--r--               pyspike/spike_distance.py            83
-rw-r--r--               pyspike/spike_sync.py                73
-rw-r--r--               pyspike/spikes.py                   129
-rw-r--r--               test/test_distance.py               192
-rw-r--r--               test/test_spikes.py                  46
22 files changed, 936 insertions, 643 deletions
diff --git a/Changelog b/Changelog
new file mode 100644
index 0000000..2ac065b
--- /dev/null
+++ b/Changelog
@@ -0,0 +1,9 @@
+PySpike v0.2:
+
+ * introduction of SpikeTrain class. Breaking interface change of
+ distance functions
+ * added max_tau parameter to SPIKE-Synchronization
+ * added psth function
+ * restructured modules, separate files for different distance measures
+ * Bugfix for selective multivariate distance
+ * added spike generation function
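
The breaking change listed first is the move from plain numpy arrays to the new :class:`SpikeTrain` class; a minimal before/after sketch (spike times hypothetical):

.. code:: python

    import numpy as np
    from pyspike import SpikeTrain

    # PySpike 0.1.x: a spike train was a plain numpy array
    # spike_train = np.array([0.1, 0.3, 0.9])

    # PySpike 0.2.0: a spike train carries its edges [t_start, t_end]
    spike_train = SpikeTrain(np.array([0.1, 0.3, 0.9]), [0.0, 1.0])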
diff --git a/Readme.rst b/Readme.rst
index 03441fc..e80c0f7 100644
--- a/Readme.rst
+++ b/Readme.rst
@@ -19,6 +19,14 @@ All source codes are available on `Github <https://github.com/mariomulansky/PySp
.. [#] Kreuz T, Mulansky M and Bozanic N, *SPIKY: A graphical user interface for monitoring spike train synchrony*, tbp (2015)
+Important Changelog
+-----------------------------
+
+With version 0.2.0, the :class:`.SpikeTrain` class has been introduced to represent spike trains.
+This is a breaking change in the function interfaces.
+Hence, programs written for older versions of PySpike (0.1.x) will not run with newer versions.
+
+
Requirements and Installation
-----------------------------
@@ -65,14 +73,16 @@ Therefore, add your :code:`/path/to/PySpike` to the :code:`$PYTHONPATH` environm
Spike trains
------------
-In PySpike, spike trains are represented by one-dimensional numpy arrays containing the sequence of spike times as double values.
+In PySpike, spike trains are represented by :class:`.SpikeTrain` objects.
+A :class:`.SpikeTrain` object consists of the spike times given as `numpy` arrays as well as the edges of the spike train as :code:`[t_start, t_end]`.
The following code creates such a spike train with some arbitrary spike times:
.. code:: python
import numpy as np
+ from pyspike import SpikeTrain
- spike_train = np.array([0.1, 0.3, 0.45, 0.6, 0.9])
+ spike_train = SpikeTrain(np.array([0.1, 0.3, 0.45, 0.6, 0.9]), [0.0, 1.0])
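
The resulting object exposes the spike times and the edges as attributes; a minimal sketch (attribute names taken from pyspike/SpikeTrain.py in this diff):

.. code:: python

    print(spike_train.spikes)   # [0.1  0.3  0.45  0.6  0.9]
    print(spike_train.t_start)  # 0.0
    print(spike_train.t_end)    # 1.0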
Loading from text files
.......................
@@ -80,7 +90,7 @@ Loading from text files
Typically, spike train data is loaded into PySpike from data files.
The most straightforward data files are text files where each line represents one spike train given as a sequence of spike times.
An exemplary file with several spike trains is `PySpike_testdata.txt <https://github.com/mariomulansky/PySpike/blob/master/examples/PySpike_testdata.txt>`_.
-To quickly obtain spike trains from such files, PySpike provides the function :code:`load_spike_trains_from_txt`.
+To quickly obtain spike trains from such files, PySpike provides the function :func:`.load_spike_trains_from_txt`.
.. code:: python
@@ -88,22 +98,13 @@ To quickly obtain spike trains from such files, PySpike provides the function :c
import pyspike as spk
spike_trains = spk.load_spike_trains_from_txt("SPIKY_testdata.txt",
- time_interval=(0, 4000))
+ edges=(0, 4000))
This function expects the name of the data file as its first parameter.
-Additionally, the time interval of the spike train measurement can be provided as a pair of start- and end-time values.
-If the time interval is provided (:code:`time_interval is not None`), auxiliary spikes at the start- and end-time of the interval are added to the spike trains.
+In addition, the time interval of the spike train measurement (the edges of the spike trains) should be provided as a pair of start- and end-time values.
Furthermore, the spike trains are sorted via :code:`np.sort` (disable this feature by providing :code:`is_sorted=True` as a parameter to the load function).
-As result, :code:`load_spike_trains_from_txt` returns a *list of arrays* containing the spike trains in the text file.
-
-If you load spike trains yourself, i.e. from data files with different structure, you can use the helper function :code:`add_auxiliary_spikes` to add the auxiliary spikes at the beginning and end of the observation interval.
-Both the ISI and the SPIKE distance computation require the presence of auxiliary spikes, so make sure you have those in your spike trains:
+As a result, :func:`.load_spike_trains_from_txt` returns a list of :class:`.SpikeTrain` objects containing the spike trains from the text file.
-.. code:: python
-
- spike_train = spk.add_auxiliary_spikes(spike_train, (T_start, T_end))
- # if you provide only a single value, it is interpreted as T_end, while T_start=0
- spike_train = spk.add_auxiliary_spikes(spike_train, T_end)
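
With :code:`add_auxiliary_spikes` gone, the observation interval is instead attached when the spike train is constructed; a minimal sketch of the replacement, based on the :class:`SpikeTrain` constructor added in this diff:

.. code:: python

    import numpy as np
    from pyspike import SpikeTrain

    # instead of adding auxiliary spikes, pass the edges directly
    spike_train = SpikeTrain(np.array([0.1, 0.3, 0.9]), edges=(T_start, T_end))
    # a single value is interpreted as T_end, with T_start=0
    spike_train = SpikeTrain(np.array([0.1, 0.3, 0.9]), edges=T_end)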
Computing bivariate distances profiles
---------------------------------------
@@ -114,19 +115,18 @@ Computing bivariate distances profiles
Spike trains are expected to be *sorted*!
For performance reasons, the PySpike distance functions do not check if the spike trains provided are indeed sorted.
- Make sure that all your spike trains are sorted, which is ensured if you use the `load_spike_trains_from_txt` function with the parameter `is_sorted=False`.
- If in doubt, use :code:`spike_train = np.sort(spike_train)` to obtain a correctly sorted spike train.
-
- Furthermore, the spike trains should have auxiliary spikes at the beginning and end of the observation interval.
- You can ensure this by providing the :code:`time_interval` in the :code:`load_spike_trains_from_txt` function, or calling :code:`add_auxiliary_spikes` for your spike trains.
- The spike trains must have *the same* observation interval!
+ Make sure that all your spike trains are sorted, which is ensured if you use the :func:`.load_spike_trains_from_txt` function with the parameter `is_sorted=False` (default).
+ If in doubt, use :meth:`.SpikeTrain.sort()` to ensure a correctly sorted spike train.
-----------------------
+ If you need to copy a spike train, use the :meth:`.SpikeTrain.copy()` method.
+ Simple assignment `t2 = t1` does not create a copy of the spike train data, but only a reference, as a `numpy.array` is used for storing the data.
+
+------------------------------
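
A minimal sketch of the aliasing pitfall described in this note (attribute names taken from pyspike/SpikeTrain.py in this diff):

.. code:: python

    t2 = t1          # only a reference: t2 shares the data of t1
    t3 = t1.copy()   # a real (deep) copy of the spike train
    t1.spikes[0] = 0.0
    # t2.spikes[0] is now 0.0 as well, while t3 is unchanged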
ISI-distance
............
-The following code loads some exemplary spike trains, computes the dissimilarity profile of the ISI-distance of the first two spike trains, and plots it with matplotlib:
+The following code loads some exemplary spike trains, computes the dissimilarity profile of the ISI-distance of the first two :class:`.SpikeTrain` objects, and plots it with matplotlib:
.. code:: python
@@ -134,18 +134,18 @@ The following code loads some exemplary spike trains, computes the dissimilarity
import pyspike as spk
spike_trains = spk.load_spike_trains_from_txt("PySpike_testdata.txt",
- time_interval=(0, 4000))
+ edges=(0, 4000))
isi_profile = spk.isi_profile(spike_trains[0], spike_trains[1])
x, y = isi_profile.get_plottable_data()
plt.plot(x, y, '--k')
print("ISI distance: %.8f" % isi_profile.avrg())
plt.show()
-The ISI-profile is a piece-wise constant function, and hence the function :code:`isi_profile` returns an instance of the :code:`PieceWiseConstFunc` class.
+The ISI-profile is a piece-wise constant function, and hence the function :func:`.isi_profile` returns an instance of the :class:`.PieceWiseConstFunc` class.
As shown above, this class allows you to obtain arrays that can be used to plot the function with :code:`plt.plot`, but also to compute the time average, which amounts to the final scalar ISI-distance.
-By default, the time average is computed for the whole :code:`PieceWiseConstFunc` function.
+By default, the time average is computed for the whole :class:`.PieceWiseConstFunc` function.
However, it is also possible to obtain the average of a specific interval by providing a pair of floats defining the start and end of the interval.
-In the above example, the following code computes the ISI-distances obtained from averaging the ISI-profile over four different intervals:
+For the above example, the following code computes the ISI-distances obtained from averaging the ISI-profile over four different intervals:
.. code:: python
@@ -168,7 +168,7 @@ where :code:`interval` is optional, as above, and if omitted the ISI-distance is
SPIKE-distance
..............
-To compute for the spike distance profile you use the function :code:`spike_profile` instead of :code:`isi_profile` above.
+To compute the spike distance profile, you use the function :func:`.spike_profile` instead of :code:`isi_profile` above.
But the general approach is very similar:
.. code:: python
@@ -177,7 +177,7 @@ But the general approach is very similar:
import pyspike as spk
spike_trains = spk.load_spike_trains_from_txt("PySpike_testdata.txt",
- time_interval=(0, 4000))
+ edges=(0, 4000))
spike_profile = spk.spike_profile(spike_trains[0], spike_trains[1])
x, y = spike_profile.get_plottable_data()
plt.plot(x, y, '--k')
@@ -185,9 +185,9 @@ But the general approach is very similar:
plt.show()
This short example computes and plots the SPIKE-profile of the first two spike trains in the file :code:`PySpike_testdata.txt`.
-In contrast to the ISI-profile, a SPIKE-profile is a piece-wise *linear* function and is therefore represented by a :code:`PieceWiseLinFunc` object.
-Just like the :code:`PieceWiseConstFunc` for the ISI-profile, the :code:`PieceWiseLinFunc` provides a :code:`get_plottable_data` member function that returns arrays that can be used directly to plot the function.
-Furthermore, the :code:`avrg` member function returns the average of the profile defined as the overall SPIKE distance.
+In contrast to the ISI-profile, a SPIKE-profile is a piece-wise *linear* function and is therefore represented by a :class:`.PieceWiseLinFunc` object.
+Just like the :class:`.PieceWiseConstFunc` for the ISI-profile, the :class:`.PieceWiseLinFunc` provides a :meth:`.PieceWiseLinFunc.get_plottable_data` member function that returns arrays that can be used directly to plot the function.
+Furthermore, the :meth:`.PieceWiseLinFunc.avrg` member function returns the average of the profile defined as the overall SPIKE distance.
As above, you can provide an interval as a pair of floats as well as a sequence of such pairs to :code:`avrg` to specify the averaging interval if required.
Again, you can use
@@ -217,9 +217,9 @@ SPIKE synchronization
SPIKE synchronization is another approach to measure spike synchrony.
In contrast to the SPIKE- and ISI-distance, it measures similarity instead of dissimilarity, i.e. higher values represent larger synchrony.
Another difference is that the SPIKE synchronization profile is only defined exactly at the spike times, not for the whole interval of the spike trains.
-Therefore, it is represented by a :code:`DiscreteFunction`.
+Therefore, it is represented by a :class:`.DiscreteFunction`.
-To compute for the spike synchronization profile, PySpike provides the function :code:`spike_sync_profile`.
+To compute the spike synchronization profile, PySpike provides the function :func:`.spike_sync_profile`.
The general handling of the profile, however, is similar to the other profiles above:
.. code:: python
@@ -228,11 +228,11 @@ The general handling of the profile, however, is similar to the other profiles a
import pyspike as spk
spike_trains = spk.load_spike_trains_from_txt("PySpike_testdata.txt",
- time_interval=(0, 4000))
+ edges=(0, 4000))
spike_profile = spk.spike_sync_profile(spike_trains[0], spike_trains[1])
x, y = spike_profile.get_plottable_data()
-For the direct computation of the overall spike synchronization value within some interval, the :code:`spike_sync` function can be used:
+For the direct computation of the overall spike synchronization value within some interval, the :func:`.spike_sync` function can be used:
.. code:: python
@@ -243,23 +243,23 @@ Computing multivariate profiles and distances
----------------------------------------------
To compute the multivariate ISI-profile, SPIKE-profile or SPIKE-Synchronization profile of a set of spike trains, PySpike provides multivariate versions of the profile functions.
-The following example computes the multivariate ISI-, SPIKE- and SPIKE-Sync-profile for a list of spike trains:
+The following example computes the multivariate ISI-, SPIKE- and SPIKE-Sync-profile for a list of spike trains using the :func:`.isi_profile_multi`, :func:`.spike_profile_multi`, :func:`.spike_sync_profile_multi` functions:
.. code:: python
spike_trains = spk.load_spike_trains_from_txt("PySpike_testdata.txt",
- time_interval=(0, 4000))
+ edges=(0, 4000))
avrg_isi_profile = spk.isi_profile_multi(spike_trains)
avrg_spike_profile = spk.spike_profile_multi(spike_trains)
avrg_spike_sync_profile = spk.spike_sync_profile_multi(spike_trains)
All functions take an optional parameter :code:`indices`, a list of indices that defines which spike trains should be used for the multivariate profile.
-As before, if you are only interested in the distance values, and not in the profile, PySpike offers the functions: :code:`isi_distance_multi`, :code:`spike_distance_multi` and :code:`spike_sync_multi`, that return the scalar overall multivariate ISI- and SPIKE-distance as well as the SPIKE-Synchronization value.
+As before, if you are only interested in the distance values, and not in the profile, PySpike offers the functions: :func:`.isi_distance_multi`, :func:`.spike_distance_multi` and :func:`.spike_sync_multi`, that return the scalar overall multivariate ISI- and SPIKE-distance as well as the SPIKE-Synchronization value.
Those functions also accept an :code:`interval` parameter that can be used to specify the begin and end of the averaging interval as a pair of floats; if omitted, the complete interval is used.
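
A minimal sketch of these scalar multivariate functions (the index list and interval values are hypothetical):

.. code:: python

    # restrict to spike trains 0, 1, 2 and average over [0, 1000]
    isi = spk.isi_distance_multi(spike_trains, indices=[0, 1, 2],
                                 interval=(0, 1000))
    spike = spk.spike_distance_multi(spike_trains, indices=[0, 1, 2],
                                     interval=(0, 1000))
    sync = spk.spike_sync_multi(spike_trains, indices=[0, 1, 2],
                                interval=(0, 1000))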
Another option for characterizing large sets of spike trains is the distance matrix.
Each entry in the distance matrix represents a bivariate distance (similarity for SPIKE-Synchronization) of two spike trains.
-The distance matrix is symmetric and has zero values (ones) at the diagonal.
+The distance matrix is symmetric and has zero values (ones for SPIKE-Synchronization) on the diagonal; it is computed with the functions :func:`.isi_distance_matrix`, :func:`.spike_distance_matrix` and :func:`.spike_sync_matrix`.
The following example computes and plots the ISI- and SPIKE-distance matrix as well as the SPIKE-Synchronization-matrix, with different intervals.
.. code:: python
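
A minimal sketch of such a matrix computation, assuming matplotlib's :code:`imshow` for plotting (the interval is hypothetical):

.. code:: python

    import matplotlib.pyplot as plt

    plt.figure()
    isi_mat = spk.isi_distance_matrix(spike_trains, interval=(0, 1000))
    plt.imshow(isi_mat, interpolation='none')
    plt.title("ISI-distance, interval (0, 1000)")
    plt.show()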
diff --git a/doc/pyspike.rst b/doc/pyspike.rst
index 6aa36e7..74ab439 100644
--- a/doc/pyspike.rst
+++ b/doc/pyspike.rst
@@ -1,60 +1,72 @@
pyspike package
===============
-Submodules
-----------
-pyspike.isi_distance module
+Classes
----------------------------------------
-.. automodule:: pyspike.isi_distance
+SpikeTrain
+........................................
+.. automodule:: pyspike.SpikeTrain
:members:
:undoc-members:
:show-inheritance:
-pyspike.spike_distance module
-----------------------------------------
-
-.. automodule:: pyspike.spike_distance
+PieceWiseConstFunc
+........................................
+.. automodule:: pyspike.PieceWiseConstFunc
:members:
:undoc-members:
:show-inheritance:
-pyspike.spike_sync module
-----------------------------------------
+PieceWiseLinFunc
+........................................
+.. automodule:: pyspike.PieceWiseLinFunc
+ :members:
+ :undoc-members:
+ :show-inheritance:
-.. automodule:: pyspike.spike_sync
+DiscreteFunc
+........................................
+.. automodule:: pyspike.DiscreteFunc
:members:
:undoc-members:
:show-inheritance:
-pyspike.PieceWiseConstFunc module
-----------------------------------------
+Functions
+----------
-.. automodule:: pyspike.PieceWiseConstFunc
+ISI-distance
+........................................
+.. automodule:: pyspike.isi_distance
:members:
:undoc-members:
:show-inheritance:
-pyspike.PieceWiseLinFunc module
-----------------------------------------
-
-.. automodule:: pyspike.PieceWiseLinFunc
+SPIKE-distance
+........................................
+.. automodule:: pyspike.spike_distance
:members:
:undoc-members:
:show-inheritance:
-pyspike.DiscreteFunc module
-----------------------------------------
+SPIKE-synchronization
+........................................
+.. automodule:: pyspike.spike_sync
+ :members:
+ :undoc-members:
+ :show-inheritance:
-.. automodule:: pyspike.DiscreteFunc
+PSTH
+........................................
+.. automodule:: pyspike.psth
:members:
:undoc-members:
:show-inheritance:
-pyspike.spikes module
-----------------------------------------
+Helper functions
+........................................
.. automodule:: pyspike.spikes
:members:
:undoc-members:
diff --git a/examples/PySpike_testdata.txt b/examples/PySpike_testdata.txt
index 41b2362..41b2362 100755..100644
--- a/examples/PySpike_testdata.txt
+++ b/examples/PySpike_testdata.txt
diff --git a/examples/merge.py b/examples/merge.py
index 2550cdb..2ea96ea 100644
--- a/examples/merge.py
+++ b/examples/merge.py
@@ -17,12 +17,13 @@ import pyspike as spk
# first load the data, ending time = 4000
spike_trains = spk.load_spike_trains_from_txt("PySpike_testdata.txt", 4000)
-spikes = spk.merge_spike_trains([spike_trains[0], spike_trains[1]])
+merged_spike_train = spk.merge_spike_trains([spike_trains[0], spike_trains[1]])
-print(spikes)
+print(merged_spike_train.spikes)
-plt.plot(spike_trains[0], np.ones_like(spike_trains[0]), 'o')
-plt.plot(spike_trains[1], np.ones_like(spike_trains[1]), 'x')
-plt.plot(spikes, 2*np.ones_like(spikes), 'o')
+plt.plot(spike_trains[0].spikes, np.ones_like(spike_trains[0].spikes), 'o')
+plt.plot(spike_trains[1].spikes, np.ones_like(spike_trains[1].spikes), 'x')
+plt.plot(merged_spike_train.spikes,
+ 2*np.ones_like(merged_spike_train.spikes), 'o')
plt.show()
diff --git a/examples/multivariate.py b/examples/multivariate.py
index 260b217..53dbf0f 100644
--- a/examples/multivariate.py
+++ b/examples/multivariate.py
@@ -19,11 +19,12 @@ t_start = time.clock()
# load the data
time_loading = time.clock()
spike_trains = spk.load_spike_trains_from_txt("PySpike_testdata.txt",
- time_interval=(0, 4000))
+ edges=(0, 4000))
t_loading = time.clock()
print("Number of spike trains: %d" % len(spike_trains))
-num_of_spikes = sum([len(spike_trains[i]) for i in xrange(len(spike_trains))])
+num_of_spikes = sum([len(spike_trains[i].spikes)
+ for i in xrange(len(spike_trains))])
print("Number of spikes: %d" % num_of_spikes)
# calculate the multivariate spike distance
diff --git a/examples/plot.py b/examples/plot.py
index d32c464..9670286 100644
--- a/examples/plot.py
+++ b/examples/plot.py
@@ -17,11 +17,11 @@ import matplotlib.pyplot as plt
import pyspike as spk
spike_trains = spk.load_spike_trains_from_txt("PySpike_testdata.txt",
- time_interval=(0, 4000))
+ edges=(0, 4000))
# plot the spike time
-for (i, spikes) in enumerate(spike_trains):
- plt.plot(spikes, i*np.ones_like(spikes), 'o')
+for (i, spike_train) in enumerate(spike_trains):
+ plt.plot(spike_train.spikes, i*np.ones_like(spike_train.spikes), 'o')
f = spk.isi_profile(spike_trains[0], spike_trains[1])
x, y = f.get_plottable_data()
diff --git a/examples/spike_sync.py b/examples/spike_sync.py
index 9c5f75c..37dbff4 100644
--- a/examples/spike_sync.py
+++ b/examples/spike_sync.py
@@ -1,12 +1,11 @@
from __future__ import print_function
-import numpy as np
import matplotlib.pyplot as plt
import pyspike as spk
spike_trains = spk.load_spike_trains_from_txt("../test/SPIKE_Sync_Test.txt",
- time_interval=(0, 4000))
+ edges=(0, 4000))
plt.figure()
@@ -41,7 +40,7 @@ plt.plot(x1, y1, '-k', lw=2.5, label="averaged SPIKE-Sync profile")
plt.subplot(212)
-f_psth = spk.psth(spike_trains, bin_size=5.0)
+f_psth = spk.psth(spike_trains, bin_size=50.0)
x, y = f_psth.get_plottable_data()
plt.plot(x, y, '-k', alpha=1.0, label="PSTH")
diff --git a/pyspike/DiscreteFunc.py b/pyspike/DiscreteFunc.py
index bd13e1f..33b7a81 100644
--- a/pyspike/DiscreteFunc.py
+++ b/pyspike/DiscreteFunc.py
@@ -1,11 +1,7 @@
-"""
-Class representing discrete functions.
+# Class representing discrete functions.
+# Copyright 2014-2015, Mario Mulansky <mario.mulansky@gmx.net>
+# Distributed under the BSD License
-Copyright 2014-2015, Mario Mulansky <mario.mulansky@gmx.net>
-
-Distributed under the BSD License
-
-"""
from __future__ import print_function
import numpy as np
@@ -174,7 +170,7 @@ class DiscreteFunc(object):
def avrg(self, interval=None):
""" Computes the average of the interval sequence:
- :math:`a = 1/N sum f_n` where N is the number of intervals.
+ :math:`a = 1/N \\sum f_n` where N is the number of intervals.
:param interval: averaging interval given as a pair of floats, a
sequence of pairs for averaging multiple intervals, or
diff --git a/pyspike/PieceWiseConstFunc.py b/pyspike/PieceWiseConstFunc.py
index dc57ab1..41998ef 100644
--- a/pyspike/PieceWiseConstFunc.py
+++ b/pyspike/PieceWiseConstFunc.py
@@ -1,11 +1,7 @@
-"""
-Class representing piece-wise constant functions.
+# Class representing piece-wise constant functions.
+# Copyright 2014-2015, Mario Mulansky <mario.mulansky@gmx.net>
+# Distributed under the BSD License
-Copyright 2014-2015, Mario Mulansky <mario.mulansky@gmx.net>
-
-Distributed under the BSD License
-
-"""
from __future__ import print_function
import numpy as np
@@ -103,7 +99,7 @@ class PieceWiseConstFunc(object):
def avrg(self, interval=None):
""" Computes the average of the piece-wise const function:
- :math:`a = 1/T int_0^T f(x) dx` where T is the length of the interval.
+ :math:`a = 1/T \\int_0^T f(x) dx` where T is the length of the interval.
:param interval: averaging interval given as a pair of floats, a
sequence of pairs for averaging multiple intervals, or
diff --git a/pyspike/PieceWiseLinFunc.py b/pyspike/PieceWiseLinFunc.py
index bc0aa2a..f2442be 100644
--- a/pyspike/PieceWiseLinFunc.py
+++ b/pyspike/PieceWiseLinFunc.py
@@ -1,11 +1,7 @@
-"""
-Class representing piece-wise linear functions.
+# Class representing piece-wise linear functions.
+# Copyright 2014-2015, Mario Mulansky <mario.mulansky@gmx.net>
+# Distributed under the BSD License
-Copyright 2014-2015, Mario Mulansky <mario.mulansky@gmx.net>
-
-Distributed under the BSD License
-
-"""
from __future__ import print_function
import numpy as np
@@ -123,7 +119,7 @@ class PieceWiseLinFunc:
def avrg(self, interval=None):
""" Computes the average of the piece-wise linear function:
- :math:`a = 1/T int_0^T f(x) dx` where T is the length of the interval.
+ :math:`a = 1/T \\int_0^T f(x) dx` where T is the interval length.
:param interval: averaging interval given as a pair of floats, a
sequence of pairs for averaging multiple intervals, or
diff --git a/pyspike/SpikeTrain.py b/pyspike/SpikeTrain.py
new file mode 100644
index 0000000..a02b7ab
--- /dev/null
+++ b/pyspike/SpikeTrain.py
@@ -0,0 +1,50 @@
+# Module containing the class representing spike trains for PySpike.
+# Copyright 2015, Mario Mulansky <mario.mulansky@gmx.net>
+# Distributed under the BSD License
+
+import numpy as np
+
+
+class SpikeTrain(object):
+ """ Class representing spike trains for the PySpike Module."""
+
+ def __init__(self, spike_times, edges, is_sorted=True):
+ """ Constructs the SpikeTrain.
+
+ :param spike_times: ordered array of spike times.
+ :param edges: The edges of the spike train. Given as a pair of floats
+ (T0, T1) or a single float T1, where then T0=0 is
+ assumed.
+ :param is_sorted: If `False`, the spike times will be sorted by `np.sort`.
+
+ """
+
+ # TODO: sanity checks
+ if is_sorted:
+ self.spikes = np.array(spike_times, dtype=float)
+ else:
+ self.spikes = np.sort(np.array(spike_times, dtype=float))
+
+ try:
+ self.t_start = float(edges[0])
+ self.t_end = float(edges[1])
+ except (TypeError, IndexError):
+ self.t_start = 0.0
+ self.t_end = float(edges)
+
+ def sort(self):
+ """ Sorts the spike times of this spike train using `np.sort`
+ """
+ self.spikes = np.sort(self.spikes)
+
+ def copy(self):
+ """ Returns a copy of this spike train.
+ Use this function if you want to create a real (deep) copy of this
+ spike train. Simple assignment `t2 = t1` does not create a copy of the
+ spike train data, but a reference as `numpy.array` is used for storing
+ the data.
+
+ :return: :class:`.SpikeTrain` copy of this spike train.
+
+ """
+ return SpikeTrain(self.spikes.copy(), [self.t_start, self.t_end])
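
# A minimal usage sketch of the edges handling above (a sketch, not part of
# the module):
#
#     st1 = SpikeTrain(np.array([0.1, 0.5]), edges=(0.0, 1.0))  # explicit pair
#     st2 = SpikeTrain(np.array([0.1, 0.5]), edges=1.0)         # T0=0 assumed
#     assert st2.t_start == 0.0 and st2.t_end == 1.0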
diff --git a/pyspike/__init__.py b/pyspike/__init__.py
index 4d3f9f6..a5f9f0a 100644
--- a/pyspike/__init__.py
+++ b/pyspike/__init__.py
@@ -5,12 +5,13 @@ Distributed under the BSD License
"""
__all__ = ["isi_distance", "spike_distance", "spike_sync", "psth",
- "spikes", "PieceWiseConstFunc", "PieceWiseLinFunc",
+ "spikes", "SpikeTrain", "PieceWiseConstFunc", "PieceWiseLinFunc",
"DiscreteFunc"]
from PieceWiseConstFunc import PieceWiseConstFunc
from PieceWiseLinFunc import PieceWiseLinFunc
from DiscreteFunc import DiscreteFunc
+from SpikeTrain import SpikeTrain
from isi_distance import isi_profile, isi_distance, isi_profile_multi,\
isi_distance_multi, isi_distance_matrix
@@ -20,5 +21,5 @@ from spike_sync import spike_sync_profile, spike_sync,\
spike_sync_profile_multi, spike_sync_multi, spike_sync_matrix
from psth import psth
-from spikes import add_auxiliary_spikes, load_spike_trains_from_txt, \
- spike_train_from_string, merge_spike_trains, generate_poisson_spikes
+from spikes import load_spike_trains_from_txt, spike_train_from_string, \
+ merge_spike_trains, generate_poisson_spikes
diff --git a/pyspike/cython/cython_distance.pyx b/pyspike/cython/cython_distance.pyx
index 2834ca5..6ee0181 100644
--- a/pyspike/cython/cython_distance.pyx
+++ b/pyspike/cython/cython_distance.pyx
@@ -42,57 +42,88 @@ ctypedef np.float_t DTYPE_t
############################################################
# isi_distance_cython
############################################################
-def isi_distance_cython(double[:] s1,
- double[:] s2):
+def isi_distance_cython(double[:] s1, double[:] s2,
+ double t_start, double t_end):
cdef double[:] spike_events
cdef double[:] isi_values
cdef int index1, index2, index
cdef int N1, N2
cdef double nu1, nu2
- N1 = len(s1)-1
- N2 = len(s2)-1
+ N1 = len(s1)
+ N2 = len(s2)
+
+ spike_events = np.empty(N1+N2+2)
+ # the values have one entry less as they are defined at the intervals
+ isi_values = np.empty(N1+N2+1)
+
+ # first x-value of the profile
+ spike_events[0] = t_start
+
+ # first interspike interval - check if a spike exists at the start time
+ if s1[0] > t_start:
+ # edge correction
+ nu1 = fmax(s1[0]-t_start, s1[1]-s1[0])
+ index1 = -1
+ else:
+ nu1 = s1[1]-s1[0]
+ index1 = 0
+
+ if s2[0] > t_start:
+ # edge correction
+ nu2 = fmax(s2[0]-t_start, s2[1]-s2[0])
+ index2 = -1
+ else:
+ nu2 = s2[1]-s2[0]
+ index2 = 0
- nu1 = s1[1]-s1[0]
- nu2 = s2[1]-s2[0]
- spike_events = np.empty(N1+N2)
- spike_events[0] = s1[0]
- # the values have one entry less - the number of intervals between events
- isi_values = np.empty(N1+N2-1)
+ isi_values[0] = fabs(nu1-nu2)/fmax(nu1, nu2)
+ index = 1
with nogil: # release the interpreter to allow multithreading
- isi_values[0] = fabs(nu1-nu2)/fmax(nu1, nu2)
- index1 = 0
- index2 = 0
- index = 1
- while True:
- # check which spike is next - from s1 or s2
- if s1[index1+1] < s2[index2+1]:
+ while index1+index2 < N1+N2-2:
+ # advance in train 1 if it still has spikes left and either train 2
+ # is exhausted or the next spike of train 1 comes first
+ if (index1 < N1-1) and ((index2 == N2-1) or
+ (s1[index1+1] < s2[index2+1])):
index1 += 1
- # break condition relies on existence of spikes at T_end
- if index1 >= N1:
- break
spike_events[index] = s1[index1]
- nu1 = s1[index1+1]-s1[index1]
- elif s1[index1+1] > s2[index2+1]:
+ if index1 < N1-1:
+ nu1 = s1[index1+1]-s1[index1]
+ else:
+ # edge correction
+ nu1 = fmax(t_end-s1[index1], nu1)
+ elif (index2 < N2-1) and ((index1 == N1-1) or
+ (s1[index1+1] > s2[index2+1])):
index2 += 1
- if index2 >= N2:
- break
spike_events[index] = s2[index2]
- nu2 = s2[index2+1]-s2[index2]
+ if index2 < N2-1:
+ nu2 = s2[index2+1]-s2[index2]
+ else:
+ # edge correction
+ nu2 = fmax(t_end-s2[index2], nu2)
else: # s1[index1+1] == s2[index2+1]
index1 += 1
index2 += 1
- if (index1 >= N1) or (index2 >= N2):
- break
spike_events[index] = s1[index1]
- nu1 = s1[index1+1]-s1[index1]
- nu2 = s2[index2+1]-s2[index2]
+ if index1 < N1-1:
+ nu1 = s1[index1+1]-s1[index1]
+ else:
+ # edge correction
+ nu1 = fmax(t_end-s1[index1], nu1)
+ if index2 < N2-1:
+ nu2 = s2[index2+1]-s2[index2]
+ else:
+ # edge correction
+ nu2 = fmax(t_end-s2[index2], nu2)
# compute the corresponding isi-distance
isi_values[index] = fabs(nu1 - nu2) / fmax(nu1, nu2)
index += 1
# the last event is the interval end
- spike_events[index] = s1[N1]
+ if spike_events[index-1] == t_end:
+ index -= 1
+ else:
+ spike_events[index] = t_end
# end nogil
return spike_events[:index+1], isi_values[:index]
@@ -106,21 +137,30 @@ cdef inline double get_min_dist_cython(double spike_time,
# use memory view to ensure inlining
# np.ndarray[DTYPE_t,ndim=1] spike_train,
int N,
- int start_index=0) nogil:
+ int start_index,
+ double t_start, double t_end) nogil:
""" Returns the minimal distance |spike_time - spike_train[i]|
with i>=start_index.
"""
cdef double d, d_temp
- d = fabs(spike_time - spike_train[start_index])
- start_index += 1
+ # start with the distance to the start time
+ d = fabs(spike_time - t_start)
+ if start_index < 0:
+ start_index = 0
while start_index < N:
d_temp = fabs(spike_time - spike_train[start_index])
if d_temp > d:
- break
+ return d
else:
d = d_temp
start_index += 1
- return d
+
+ # finally, check the distance to end time
+ d_temp = fabs(t_end - spike_time)
+ if d_temp > d:
+ return d
+ else:
+ return d_temp
############################################################
@@ -135,96 +175,164 @@ cdef inline double isi_avrg_cython(double isi1, double isi2) nogil:
############################################################
# spike_distance_cython
############################################################
-def spike_distance_cython(double[:] t1,
- double[:] t2):
+def spike_distance_cython(double[:] t1, double[:] t2,
+ double t_start, double t_end):
cdef double[:] spike_events
cdef double[:] y_starts
cdef double[:] y_ends
cdef int N1, N2, index1, index2, index
- cdef double dt_p1, dt_p2, dt_f1, dt_f2, isi1, isi2, s1, s2
+ cdef double t_p1, t_f1, t_p2, t_f2, dt_p1, dt_p2, dt_f1, dt_f2
+ cdef double isi1, isi2, s1, s2
N1 = len(t1)
N2 = len(t2)
- spike_events = np.empty(N1+N2-2)
- spike_events[0] = t1[0]
+ spike_events = np.empty(N1+N2+2)
+
y_starts = np.empty(len(spike_events)-1)
y_ends = np.empty(len(spike_events)-1)
with nogil: # release the interpreter to allow multithreading
- index1 = 0
- index2 = 0
- index = 1
- dt_p1 = 0.0
- dt_f1 = get_min_dist_cython(t1[1], t2, N2, 0)
- dt_p2 = 0.0
- dt_f2 = get_min_dist_cython(t2[1], t1, N1, 0)
- isi1 = max(t1[1]-t1[0], t1[2]-t1[1])
- isi2 = max(t2[1]-t2[0], t2[2]-t2[1])
- s1 = dt_f1*(t1[1]-t1[0])/isi1
- s2 = dt_f2*(t2[1]-t2[0])/isi2
+ spike_events[0] = t_start
+ t_p1 = t_start
+ t_p2 = t_start
+ if t1[0] > t_start:
+ # dt_p1 = t2[0]-t_start
+ t_f1 = t1[0]
+ dt_f1 = get_min_dist_cython(t_f1, t2, N2, 0, t_start, t_end)
+ isi1 = fmax(t_f1-t_start, t1[1]-t1[0])
+ dt_p1 = dt_f1
+ s1 = dt_p1*(t_f1-t_start)/isi1
+ index1 = -1
+ else:
+ t_f1 = t1[1]
+ dt_f1 = get_min_dist_cython(t_f1, t2, N2, 0, t_start, t_end)
+ dt_p1 = 0.0
+ isi1 = t1[1]-t1[0]
+ s1 = dt_p1
+ index1 = 0
+ if t2[0] > t_start:
+ # dt_p1 = t2[0]-t_start
+ t_f2 = t2[0]
+ dt_f2 = get_min_dist_cython(t_f2, t1, N1, 0, t_start, t_end)
+ dt_p2 = dt_f2
+ isi2 = fmax(t_f2-t_start, t2[1]-t2[0])
+ s2 = dt_p2*(t_f2-t_start)/isi2
+ index2 = -1
+ else:
+ t_f2 = t2[1]
+ dt_f2 = get_min_dist_cython(t_f2, t1, N1, 0, t_start, t_end)
+ dt_p2 = 0.0
+ isi2 = t2[1]-t2[0]
+ s2 = dt_p2
+ index2 = 0
+
y_starts[0] = (s1*isi2 + s2*isi1) / isi_avrg_cython(isi1, isi2)
- while True:
+ index = 1
+
+ while index1+index2 < N1+N2-2:
# print(index, index1, index2)
- if t1[index1+1] < t2[index2+1]:
+ if (index1 < N1-1) and (t_f1 < t_f2 or index2 == N2-1):
index1 += 1
- # break condition relies on existence of spikes at T_end
- if index1+1 >= N1:
- break
- spike_events[index] = t1[index1]
# first calculate the previous interval end value
- dt_p1 = dt_f1 # the previous time now was the following time before
- s1 = dt_p1
- s2 = (dt_p2*(t2[index2+1]-t1[index1]) +
- dt_f2*(t1[index1]-t2[index2])) / isi2
- y_ends[index-1] = (s1*isi2 + s2*isi1)/isi_avrg_cython(isi1, isi2)
+ s1 = dt_f1*(t_f1-t_p1) / isi1
+ # the previous time now was the following time before:
+ dt_p1 = dt_f1
+ t_p1 = t_f1 # t_p1 contains the current time point
+ # get the next time
+ if index1 < N1-1:
+ t_f1 = t1[index1+1]
+ else:
+ t_f1 = t_end
+ spike_events[index] = t_p1
+ s2 = (dt_p2*(t_f2-t_p1) + dt_f2*(t_p1-t_p2)) / isi2
+ y_ends[index-1] = (s1*isi2 + s2*isi1)/isi_avrg_cython(isi1,
+ isi2)
# now the next interval start value
- dt_f1 = get_min_dist_cython(t1[index1+1], t2, N2, index2)
- isi1 = t1[index1+1]-t1[index1]
+ if index1 < N1-1:
+ dt_f1 = get_min_dist_cython(t_f1, t2, N2, index2,
+ t_start, t_end)
+ isi1 = t_f1-t_p1
+ s1 = dt_p1
+ else:
+ dt_f1 = dt_p1
+ isi1 = fmax(t_end-t1[N1-1], t1[N1-1]-t1[N1-2])
+ # s1 needs adjustment due to change of isi1
+ s1 = dt_p1*(t_end-t1[N1-1])/isi1
# s2 is the same as above, thus we can compute y2 immediately
- y_starts[index] = (s1*isi2 + s2*isi1)/isi_avrg_cython(isi1, isi2)
- elif t1[index1+1] > t2[index2+1]:
+ y_starts[index] = (s1*isi2 + s2*isi1)/isi_avrg_cython(isi1,
+ isi2)
+ elif (index2 < N2-1) and (t_f1 > t_f2 or index1 == N1-1):
index2 += 1
- if index2+1 >= N2:
- break
- spike_events[index] = t2[index2]
# first calculate the previous interval end value
- dt_p2 = dt_f2 # the previous time now was the following time before
- s1 = (dt_p1*(t1[index1+1]-t2[index2]) +
- dt_f1*(t2[index2]-t1[index1])) / isi1
- s2 = dt_p2
- y_ends[index-1] = (s1*isi2 + s2*isi1) / isi_avrg_cython(isi1, isi2)
+ s2 = dt_f2*(t_f2-t_p2) / isi2
+ # the previous time now was the following time before:
+ dt_p2 = dt_f2
+ t_p2 = t_f2 # t_p2 contains the current time point
+ # get the next time
+ if index2 < N2-1:
+ t_f2 = t2[index2+1]
+ else:
+ t_f2 = t_end
+ spike_events[index] = t_p2
+ s1 = (dt_p1*(t_f1-t_p2) + dt_f1*(t_p2-t_p1)) / isi1
+ y_ends[index-1] = (s1*isi2 + s2*isi1) / isi_avrg_cython(isi1,
+ isi2)
# now the next interval start value
- dt_f2 = get_min_dist_cython(t2[index2+1], t1, N1, index1)
- #s2 = dt_f2
- isi2 = t2[index2+1]-t2[index2]
+ if index2 < N2-1:
+ dt_f2 = get_min_dist_cython(t_f2, t1, N1, index1,
+ t_start, t_end)
+ isi2 = t_f2-t_p2
+ s2 = dt_p2
+ else:
+ dt_f2 = dt_p2
+ isi2 = fmax(t_end-t2[N2-1], t2[N2-1]-t2[N2-2])
+ # s2 needs adjustment due to change of isi2
+ s2 = dt_p2*(t_end-t2[N2-1])/isi2
# s2 is the same as above, thus we can compute y2 immediately
y_starts[index] = (s1*isi2 + s2*isi1)/isi_avrg_cython(isi1, isi2)
- else: # t1[index1+1] == t2[index2+1] - generate only one event
+ else: # t_f1 == t_f2 - generate only one event
index1 += 1
index2 += 1
- if (index1+1 >= N1) or (index2+1 >= N2):
- break
- spike_events[index] = t1[index1]
- y_ends[index-1] = 0.0
- y_starts[index] = 0.0
+ t_p1 = t_f1
+ t_p2 = t_f2
dt_p1 = 0.0
dt_p2 = 0.0
- dt_f1 = get_min_dist_cython(t1[index1+1], t2, N2, index2)
- dt_f2 = get_min_dist_cython(t2[index2+1], t1, N1, index1)
- isi1 = t1[index1+1]-t1[index1]
- isi2 = t2[index2+1]-t2[index2]
+ spike_events[index] = t_f1
+ y_ends[index-1] = 0.0
+ y_starts[index] = 0.0
+ if index1 < N1-1:
+ t_f1 = t1[index1+1]
+ dt_f1 = get_min_dist_cython(t_f1, t2, N2, index2,
+ t_start, t_end)
+ isi1 = t_f1 - t_p1
+ else:
+ t_f1 = t_end
+ dt_f1 = dt_p1
+ isi1 = fmax(t_end-t1[N1-1], t1[N1-1]-t1[N1-2])
+ if index2 < N2-1:
+ t_f2 = t2[index2+1]
+ dt_f2 = get_min_dist_cython(t_f2, t1, N1, index1,
+ t_start, t_end)
+ isi2 = t_f2 - t_p2
+ else:
+ t_f2 = t_end
+ dt_f2 = dt_p2
+ isi2 = fmax(t_end-t2[N2-1], t2[N2-1]-t2[N2-2])
index += 1
# the last event is the interval end
- spike_events[index] = t1[N1-1]
- # the ending value of the last interval
- isi1 = max(t1[N1-1]-t1[N1-2], t1[N1-2]-t1[N1-3])
- isi2 = max(t2[N2-1]-t2[N2-2], t2[N2-2]-t2[N2-3])
- s1 = dt_p1*(t1[N1-1]-t1[N1-2])/isi1
- s2 = dt_p2*(t2[N2-1]-t2[N2-2])/isi2
- y_ends[index-1] = (s1*isi2 + s2*isi1) / isi_avrg_cython(isi1, isi2)
+ if spike_events[index-1] == t_end:
+ index -= 1
+ else:
+ spike_events[index] = t_end
+ # the ending value of the last interval
+ isi1 = max(t_end-t1[N1-1], t1[N1-1]-t1[N1-2])
+ isi2 = max(t_end-t2[N2-1], t2[N2-1]-t2[N2-2])
+ s1 = dt_f1*(t_end-t1[N1-1])/isi1
+ s2 = dt_f2*(t_end-t2[N2-1])/isi2
+ y_ends[index-1] = (s1*isi2 + s2*isi1) / isi_avrg_cython(isi1, isi2)
# end nogil
# use only the data added above
@@ -237,17 +345,17 @@ def spike_distance_cython(double[:] t1,
# coincidence_python
############################################################
cdef inline double get_tau(double[:] spikes1, double[:] spikes2,
- int i, int j, max_tau):
+ int i, int j, double max_tau):
cdef double m = 1E100 # some huge number
- cdef int N1 = len(spikes1)-2
- cdef int N2 = len(spikes2)-2
- if i < N1:
+ cdef int N1 = spikes1.shape[0]-1 # len(spikes1)-1
+ cdef int N2 = spikes2.shape[0]-1 # len(spikes2)-1
+ if i < N1 and i > -1:
m = fmin(m, spikes1[i+1]-spikes1[i])
- if j < N2:
+ if j < N2 and j > -1:
m = fmin(m, spikes2[j+1]-spikes2[j])
- if i > 1:
+ if i > 0:
m = fmin(m, spikes1[i]-spikes1[i-1])
- if j > 1:
+ if j > 0:
m = fmin(m, spikes2[j]-spikes2[j-1])
m *= 0.5
if max_tau > 0.0:
@@ -258,34 +366,35 @@ cdef inline double get_tau(double[:] spikes1, double[:] spikes2,
############################################################
# coincidence_cython
############################################################
-def coincidence_cython(double[:] spikes1, double[:] spikes2, double max_tau):
+def coincidence_cython(double[:] spikes1, double[:] spikes2,
+ double t_start, double t_end, double max_tau):
cdef int N1 = len(spikes1)
cdef int N2 = len(spikes2)
- cdef int i = 0
- cdef int j = 0
+ cdef int i = -1
+ cdef int j = -1
cdef int n = 0
- cdef double[:] st = np.zeros(N1 + N2 - 2) # spike times
- cdef double[:] c = np.zeros(N1 + N2 - 2) # coincidences
- cdef double[:] mp = np.ones(N1 + N2 - 2) # multiplicity
+ cdef double[:] st = np.zeros(N1 + N2 + 2) # spike times
+ cdef double[:] c = np.zeros(N1 + N2 + 2) # coincidences
+ cdef double[:] mp = np.ones(N1 + N2 + 2) # multiplicity
cdef double tau
- while n < N1 + N2 - 2:
- if spikes1[i+1] < spikes2[j+1]:
+ while i + j < N1 + N2 - 2:
+ if (i < N1-1) and (j == N2-1 or spikes1[i+1] < spikes2[j+1]):
i += 1
n += 1
tau = get_tau(spikes1, spikes2, i, j, max_tau)
st[n] = spikes1[i]
- if j > 0 and spikes1[i]-spikes2[j] < tau:
+ if j > -1 and spikes1[i]-spikes2[j] < tau:
# coincidence between the current spike and the previous spike
# both get marked with 1
c[n] = 1
c[n-1] = 1
- elif spikes1[i+1] > spikes2[j+1]:
+ elif (j < N2-1) and (i == N1-1 or spikes1[i+1] > spikes2[j+1]):
j += 1
n += 1
tau = get_tau(spikes1, spikes2, i, j, max_tau)
st[n] = spikes2[j]
- if i > 0 and spikes2[j]-spikes1[i] < tau:
+ if i > -1 and spikes2[j]-spikes1[i] < tau:
# coincidence between the current spike and the previous spike
# both get marked with 1
c[n] = 1
@@ -294,8 +403,6 @@ def coincidence_cython(double[:] spikes1, double[:] spikes2, double max_tau):
# advance in both spike trains
j += 1
i += 1
- if i == N1-1 or j == N2-1:
- break
n += 1
# add only one event, but with coincidence 2 and multiplicity 2
st[n] = spikes1[i]
@@ -306,8 +413,8 @@ def coincidence_cython(double[:] spikes1, double[:] spikes2, double max_tau):
c = c[:n+2]
mp = mp[:n+2]
- st[0] = spikes1[0]
- st[len(st)-1] = spikes1[len(spikes1)-1]
+ st[0] = t_start
+ st[len(st)-1] = t_end
c[0] = c[1]
c[len(c)-1] = c[len(c)-2]
mp[0] = mp[1]
diff --git a/pyspike/cython/python_backend.py b/pyspike/cython/python_backend.py
index 749507a..1fd8c42 100644
--- a/pyspike/cython/python_backend.py
+++ b/pyspike/cython/python_backend.py
@@ -15,50 +15,78 @@ import numpy as np
############################################################
# isi_distance_python
############################################################
-def isi_distance_python(s1, s2):
+def isi_distance_python(s1, s2, t_start, t_end):
""" Plain Python implementation of the isi distance.
"""
- # compute the interspike interval
- nu1 = s1[1:] - s1[:-1]
- nu2 = s2[1:] - s2[:-1]
+ N1 = len(s1)
+ N2 = len(s2)
# compute the isi-distance
- spike_events = np.empty(len(nu1) + len(nu2))
- spike_events[0] = s1[0]
+ spike_events = np.empty(N1+N2+2)
+ spike_events[0] = t_start
# the values have one entry less - the number of intervals between events
isi_values = np.empty(len(spike_events) - 1)
- # add the distance of the first events
- # isi_values[0] = nu1[0]/nu2[0] - 1.0 if nu1[0] <= nu2[0] \
- # else 1.0 - nu2[0]/nu1[0]
- isi_values[0] = abs(nu1[0] - nu2[0]) / max(nu1[0], nu2[0])
- index1 = 0
- index2 = 0
+ if s1[0] > t_start:
+ # edge correction
+ nu1 = max(s1[0] - t_start, s1[1] - s1[0])
+ index1 = -1
+ else:
+ nu1 = s1[1] - s1[0]
+ index1 = 0
+ if s2[0] > t_start:
+ # edge correction
+ nu2 = max(s2[0] - t_start, s2[1] - s2[0])
+ index2 = -1
+ else:
+ nu2 = s2[1] - s2[0]
+ index2 = 0
+
+ isi_values[0] = abs(nu1 - nu2) / max(nu1, nu2)
index = 1
- while True:
+ while index1+index2 < N1+N2-2:
# check which spike is next - from s1 or s2
- if s1[index1+1] < s2[index2+1]:
+ if (index1 < N1-1) and (index2 == N2-1 or s1[index1+1] < s2[index2+1]):
index1 += 1
- # break condition relies on existence of spikes at T_end
- if index1 >= len(nu1):
- break
spike_events[index] = s1[index1]
- elif s1[index1+1] > s2[index2+1]:
+ if index1 < N1-1:
+ nu1 = s1[index1+1]-s1[index1]
+ else:
+ # edge correction
+ nu1 = max(t_end-s1[N1-1], s1[N1-1]-s1[N1-2])
+
+ elif (index2 < N2-1) and (index1 == N1-1 or
+ s1[index1+1] > s2[index2+1]):
index2 += 1
- if index2 >= len(nu2):
- break
spike_events[index] = s2[index2]
+ if index2 < N2-1:
+ nu2 = s2[index2+1]-s2[index2]
+ else:
+ # edge correction
+ nu2 = max(t_end-s2[N2-1], s2[N2-1]-s2[N2-2])
+
else: # s1[index1 + 1] == s2[index2 + 1]
index1 += 1
index2 += 1
- if (index1 >= len(nu1)) or (index2 >= len(nu2)):
- break
spike_events[index] = s1[index1]
+ if index1 < N1-1:
+ nu1 = s1[index1+1]-s1[index1]
+ else:
+ # edge correction
+ nu1 = max(t_end-s1[N1-1], s1[N1-1]-s1[N1-2])
+ if index2 < N2-1:
+ nu2 = s2[index2+1]-s2[index2]
+ else:
+ # edge correction
+ nu2 = max(t_end-s2[N2-1], s2[N2-1]-s2[N2-2])
# compute the corresponding isi-distance
- isi_values[index] = abs(nu1[index1] - nu2[index2]) / \
- max(nu1[index1], nu2[index2])
+ isi_values[index] = abs(nu1 - nu2) / \
+ max(nu1, nu2)
index += 1
# the last event is the interval end
- spike_events[index] = s1[-1]
+ if spike_events[index-1] == t_end:
+ index -= 1
+ else:
+ spike_events[index] = t_end
# use only the data added above
# could be less than original length due to equal spike times
return spike_events[:index + 1], isi_values[:index]
@@ -67,122 +95,186 @@ def isi_distance_python(s1, s2):
############################################################
# get_min_dist
############################################################
-def get_min_dist(spike_time, spike_train, start_index=0):
+def get_min_dist(spike_time, spike_train, start_index, t_start, t_end):
""" Returns the minimal distance |spike_time - spike_train[i]|
with i>=start_index.
"""
- d = abs(spike_time - spike_train[start_index])
- start_index += 1
+ d = abs(spike_time - t_start)
+ if start_index < 0:
+ start_index = 0
while start_index < len(spike_train):
d_temp = abs(spike_time - spike_train[start_index])
if d_temp > d:
- break
+ return d
else:
d = d_temp
start_index += 1
- return d
+ # finally, check the distance to end time
+ d_temp = abs(t_end - spike_time)
+ if d_temp > d:
+ return d
+ else:
+ return d_temp
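
# A worked check of the edge handling above (a sketch, not part of the file):
# get_min_dist(0.5, np.array([0.2, 0.8]), 0, 0.0, 1.0) starts with the
# distance to t_start, |0.5-0.0| = 0.5, shrinks it to |0.5-0.2| = 0.3, keeps
# 0.3 for the spike at 0.8, and the final edge check |1.0-0.5| = 0.5 does not
# improve it, so 0.3 is returned.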
############################################################
# spike_distance_python
############################################################
-def spike_distance_python(spikes1, spikes2):
+def spike_distance_python(spikes1, spikes2, t_start, t_end):
""" Computes the instantaneous spike-distance S_spike (t) of the two given
spike trains. The spike trains are expected to be sorted and to be defined
over the interval [t_start, t_end].
Args:
- spikes1, spikes2: ordered arrays of spike times with auxiliary spikes.
+ - t_start, t_end: edges of the spike train
Returns:
- PieceWiseLinFunc describing the spike-distance.
"""
- # check for auxiliary spikes - first and last spikes should be identical
- assert spikes1[0] == spikes2[0], \
- "Given spike trains seems not to have auxiliary spikes!"
- assert spikes1[-1] == spikes2[-1], \
- "Given spike trains seems not to have auxiliary spikes!"
+
# shorter variables
t1 = spikes1
t2 = spikes2
- spike_events = np.empty(len(t1) + len(t2) - 2)
- spike_events[0] = t1[0]
- y_starts = np.empty(len(spike_events) - 1)
- y_ends = np.empty(len(spike_events) - 1)
-
- index1 = 0
- index2 = 0
+ N1 = len(t1)
+ N2 = len(t2)
+
+ spike_events = np.empty(N1+N2+2)
+
+ y_starts = np.empty(len(spike_events)-1)
+ y_ends = np.empty(len(spike_events)-1)
+
+ spike_events[0] = t_start
+ t_p1 = t_start
+ t_p2 = t_start
+ if t1[0] > t_start:
+ t_f1 = t1[0]
+ dt_f1 = get_min_dist(t_f1, t2, 0, t_start, t_end)
+ dt_p1 = dt_f1
+ isi1 = max(t_f1-t_start, t1[1]-t1[0])
+ s1 = dt_p1*(t_f1-t_start)/isi1
+ index1 = -1
+ else:
+ dt_p1 = 0.0
+ t_f1 = t1[1]
+ dt_f1 = get_min_dist(t_f1, t2, 0, t_start, t_end)
+ isi1 = t1[1]-t1[0]
+ s1 = dt_p1
+ index1 = 0
+ if t2[0] > t_start:
+ # dt_p1 = t2[0]-t_start
+ t_f2 = t2[0]
+ dt_f2 = get_min_dist(t_f2, t1, 0, t_start, t_end)
+ dt_p2 = dt_f2
+ isi2 = max(t_f2-t_start, t2[1]-t2[0])
+ s2 = dt_p2*(t_f2-t_start)/isi2
+ index2 = -1
+ else:
+ dt_p2 = 0.0
+ t_f2 = t2[1]
+ dt_f2 = get_min_dist(t_f2, t1, 0, t_start, t_end)
+ isi2 = t2[1]-t2[0]
+ s2 = dt_p2
+ index2 = 0
+
+ y_starts[0] = (s1*isi2 + s2*isi1) / (0.5*(isi1+isi2)**2)
index = 1
- dt_p1 = 0.0
- dt_f1 = get_min_dist(t1[1], t2, 0)
- dt_p2 = 0.0
- dt_f2 = get_min_dist(t2[1], t1, 0)
- isi1 = max(t1[1]-t1[0], t1[2]-t1[1])
- isi2 = max(t2[1]-t2[0], t2[2]-t2[1])
- s1 = dt_f1*(t1[1]-t1[0])/isi1
- s2 = dt_f2*(t2[1]-t2[0])/isi2
- y_starts[0] = (s1*isi2 + s2*isi1) / ((isi1+isi2)**2/2)
- while True:
+
+ while index1+index2 < N1+N2-2:
# print(index, index1, index2)
- if t1[index1+1] < t2[index2+1]:
+ if (index1 < N1-1) and (t_f1 < t_f2 or index2 == N2-1):
index1 += 1
- # break condition relies on existence of spikes at T_end
- if index1+1 >= len(t1):
- break
- spike_events[index] = t1[index1]
# first calculate the previous interval end value
- dt_p1 = dt_f1 # the previous time was the following time before
- s1 = dt_p1
- s2 = (dt_p2*(t2[index2+1]-t1[index1]) +
- dt_f2*(t1[index1]-t2[index2])) / isi2
- y_ends[index-1] = (s1*isi2 + s2*isi1) / ((isi1+isi2)**2/2)
+ s1 = dt_f1*(t_f1-t_p1) / isi1
+ # the previous time now was the following time before:
+ dt_p1 = dt_f1
+ t_p1 = t_f1 # t_p1 contains the current time point
+ # get the next time
+ if index1 < N1-1:
+ t_f1 = t1[index1+1]
+ else:
+ t_f1 = t_end
+ spike_events[index] = t_p1
+ s2 = (dt_p2*(t_f2-t_p1) + dt_f2*(t_p1-t_p2)) / isi2
+ y_ends[index-1] = (s1*isi2 + s2*isi1) / (0.5*(isi1+isi2)**2)
# now the next interval start value
- dt_f1 = get_min_dist(t1[index1+1], t2, index2)
- isi1 = t1[index1+1]-t1[index1]
+ if index1 < N1-1:
+ dt_f1 = get_min_dist(t_f1, t2, index2, t_start, t_end)
+ isi1 = t_f1-t_p1
+ s1 = dt_p1
+ else:
+ dt_f1 = dt_p1
+ isi1 = max(t_end-t1[N1-1], t1[N1-1]-t1[N1-2])
+ # s1 needs adjustment due to change of isi1
+ s1 = dt_p1*(t_end-t1[N1-1])/isi1
# s2 is the same as above, thus we can compute y2 immediately
- y_starts[index] = (s1*isi2 + s2*isi1) / ((isi1+isi2)**2/2)
- elif t1[index1+1] > t2[index2+1]:
+ y_starts[index] = (s1*isi2 + s2*isi1) / (0.5*(isi1+isi2)**2)
+ elif (index2 < N2-1) and (t_f1 > t_f2 or index1 == N1-1):
index2 += 1
- if index2+1 >= len(t2):
- break
- spike_events[index] = t2[index2]
# first calculate the previous interval end value
- dt_p2 = dt_f2 # the previous time was the following time before
- s1 = (dt_p1*(t1[index1+1]-t2[index2]) +
- dt_f1*(t2[index2]-t1[index1])) / isi1
- s2 = dt_p2
- y_ends[index-1] = (s1*isi2 + s2*isi1) / ((isi1+isi2)**2/2)
+ s2 = dt_f2*(t_f2-t_p2) / isi2
+ # the previous time now was the following time before:
+ dt_p2 = dt_f2
+ t_p2 = t_f2 # t_p2 contains the current time point
+ # get the next time
+ if index2 < N2-1:
+ t_f2 = t2[index2+1]
+ else:
+ t_f2 = t_end
+ spike_events[index] = t_p2
+ s1 = (dt_p1*(t_f1-t_p2) + dt_f1*(t_p2-t_p1)) / isi1
+ y_ends[index-1] = (s1*isi2 + s2*isi1) / (0.5*(isi1+isi2)**2)
# now the next interval start value
- dt_f2 = get_min_dist(t2[index2+1], t1, index1)
- #s2 = dt_f2
- isi2 = t2[index2+1]-t2[index2]
+ if index2 < N2-1:
+ dt_f2 = get_min_dist(t_f2, t1, index1, t_start, t_end)
+ isi2 = t_f2-t_p2
+ s2 = dt_p2
+ else:
+ dt_f2 = dt_p2
+ isi2 = max(t_end-t2[N2-1], t2[N2-1]-t2[N2-2])
+ # s2 needs adjustment due to change of isi2
+ s2 = dt_p2*(t_end-t2[N2-1])/isi2
# s2 is the same as above, thus we can compute y2 immediately
- y_starts[index] = (s1*isi2 + s2*isi1) / ((isi1+isi2)**2/2)
- else: # t1[index1+1] == t2[index2+1] - generate only one event
+ y_starts[index] = (s1*isi2 + s2*isi1) / (0.5*(isi1+isi2)**2)
+ else: # t_f1 == t_f2 - generate only one event
index1 += 1
index2 += 1
- if (index1+1 >= len(t1)) or (index2+1 >= len(t2)):
- break
- assert dt_f2 == 0.0
- assert dt_f1 == 0.0
- spike_events[index] = t1[index1]
- y_ends[index-1] = 0.0
- y_starts[index] = 0.0
+ t_p1 = t_f1
+ t_p2 = t_f2
dt_p1 = 0.0
dt_p2 = 0.0
- dt_f1 = get_min_dist(t1[index1+1], t2, index2)
- dt_f2 = get_min_dist(t2[index2+1], t1, index1)
- isi1 = t1[index1+1]-t1[index1]
- isi2 = t2[index2+1]-t2[index2]
+ spike_events[index] = t_f1
+ y_ends[index-1] = 0.0
+ y_starts[index] = 0.0
+ if index1 < N1-1:
+ t_f1 = t1[index1+1]
+ dt_f1 = get_min_dist(t_f1, t2, index2, t_start, t_end)
+ isi1 = t_f1 - t_p1
+ else:
+ t_f1 = t_end
+ dt_f1 = dt_p1
+ isi1 = max(t_end-t1[N1-1], t1[N1-1]-t1[N1-2])
+ if index2 < N2-1:
+ t_f2 = t2[index2+1]
+ dt_f2 = get_min_dist(t_f2, t1, index1, t_start, t_end)
+ isi2 = t_f2 - t_p2
+ else:
+ t_f2 = t_end
+ dt_f2 = dt_p2
+ isi2 = max(t_end-t2[N2-1], t2[N2-1]-t2[N2-2])
index += 1
# the last event is the interval end
- spike_events[index] = t1[-1]
- # the ending value of the last interval
- isi1 = max(t1[-1]-t1[-2], t1[-2]-t1[-3])
- isi2 = max(t2[-1]-t2[-2], t2[-2]-t2[-3])
- s1 = dt_p1*(t1[-1]-t1[-2])/isi1
- s2 = dt_p2*(t2[-1]-t2[-2])/isi2
- y_ends[index-1] = (s1*isi2 + s2*isi1) / ((isi1+isi2)**2/2)
+ if spike_events[index-1] == t_end:
+ index -= 1
+ else:
+ spike_events[index] = t_end
+ # the ending value of the last interval
+ isi1 = max(t_end-t1[N1-1], t1[N1-1]-t1[N1-2])
+ isi2 = max(t_end-t2[N2-1], t2[N2-1]-t2[N2-2])
+ s1 = dt_f1*(t_end-t1[N1-1])/isi1
+ s2 = dt_f2*(t_end-t2[N2-1])/isi2
+ y_ends[index-1] = (s1*isi2 + s2*isi1) / (0.5*(isi1+isi2)**2)
+
# use only the data added above
# could be less than original length due to equal spike times
return spike_events[:index+1], y_starts[:index], y_ends[:index]
@@ -245,47 +337,48 @@ def cumulative_sync_python(spikes1, spikes2):
############################################################
# coincidence_python
############################################################
-def coincidence_python(spikes1, spikes2, max_tau):
+def coincidence_python(spikes1, spikes2, t_start, t_end, max_tau):
def get_tau(spikes1, spikes2, i, j, max_tau):
m = 1E100 # some huge number
- if i < len(spikes1)-2:
+ if i < len(spikes1)-1 and i > -1:
m = min(m, spikes1[i+1]-spikes1[i])
- if j < len(spikes2)-2:
+ if j < len(spikes2)-1 and j > -1:
m = min(m, spikes2[j+1]-spikes2[j])
- if i > 1:
+ if i > 0:
m = min(m, spikes1[i]-spikes1[i-1])
- if j > 1:
+ if j > 0:
m = min(m, spikes2[j]-spikes2[j-1])
m *= 0.5
if max_tau > 0.0:
m = min(m, max_tau)
return m
+
N1 = len(spikes1)
N2 = len(spikes2)
- i = 0
- j = 0
+ i = -1
+ j = -1
n = 0
- st = np.zeros(N1 + N2 - 2) # spike times
- c = np.zeros(N1 + N2 - 2) # coincidences
- mp = np.ones(N1 + N2 - 2) # multiplicity
- while n < N1 + N2 - 2:
- if spikes1[i+1] < spikes2[j+1]:
+ st = np.zeros(N1 + N2 + 2) # spike times
+ c = np.zeros(N1 + N2 + 2) # coincidences
+ mp = np.ones(N1 + N2 + 2) # multiplicity
+ while i + j < N1 + N2 - 2:
+ if (i < N1-1) and (j == N2-1 or spikes1[i+1] < spikes2[j+1]):
i += 1
n += 1
tau = get_tau(spikes1, spikes2, i, j, max_tau)
st[n] = spikes1[i]
- if j > 0 and spikes1[i]-spikes2[j] < tau:
+ if j > -1 and spikes1[i]-spikes2[j] < tau:
# coincidence between the current spike and the previous spike
# both get marked with 1
c[n] = 1
c[n-1] = 1
- elif spikes1[i+1] > spikes2[j+1]:
+ elif (j < N2-1) and (i == N1-1 or spikes1[i+1] > spikes2[j+1]):
j += 1
n += 1
tau = get_tau(spikes1, spikes2, i, j, max_tau)
st[n] = spikes2[j]
- if i > 0 and spikes2[j]-spikes1[i] < tau:
+ if i > -1 and spikes2[j]-spikes1[i] < tau:
# coincidence between the current spike and the previous spike
# both get marked with 1
c[n] = 1
@@ -294,8 +387,6 @@ def coincidence_python(spikes1, spikes2, max_tau):
# advance in both spike trains
j += 1
i += 1
- if i == N1-1 or j == N2-1:
- break
n += 1
# add only one event, but with coincidence 2 and multiplicity 2
st[n] = spikes1[i]
@@ -306,12 +397,12 @@ def coincidence_python(spikes1, spikes2, max_tau):
c = c[:n+2]
mp = mp[:n+2]
- st[0] = spikes1[0]
- st[-1] = spikes1[-1]
+ st[0] = t_start
+ st[len(st)-1] = t_end
c[0] = c[1]
- c[-1] = c[-2]
+ c[len(c)-1] = c[len(c)-2]
mp[0] = mp[1]
- mp[-1] = mp[-2]
+ mp[len(mp)-1] = mp[len(mp)-2]
return st, c, mp
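
A small worked example of the coincidence bookkeeping above (input values hypothetical, results traced by hand from the code):

.. code:: python

    import numpy as np

    st, c, mp = coincidence_python(np.array([0.2, 0.5]),
                                   np.array([0.21, 0.8]), 0.0, 1.0, 0.0)
    # st = [0.0, 0.2, 0.21, 0.5, 0.8, 1.0]
    # c  = [1.0, 1.0, 1.0, 0.0, 0.0, 0.0]  (only 0.2 and 0.21 fall within tau)
    # mp = [1.0, 1.0, 1.0, 1.0, 1.0, 1.0]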
diff --git a/pyspike/isi_distance.py b/pyspike/isi_distance.py
index c2ef8e8..aeab0df 100644
--- a/pyspike/isi_distance.py
+++ b/pyspike/isi_distance.py
@@ -1,11 +1,6 @@
-"""
-
-Module containing several functions to compute the ISI profiles and distances
-
-Copyright 2014-2015, Mario Mulansky <mario.mulansky@gmx.net>
-
-Distributed under the BSD License
-"""
+# Module containing several functions to compute the ISI profiles and distances
+# Copyright 2014-2015, Mario Mulansky <mario.mulansky@gmx.net>
+# Distributed under the BSD License
from pyspike import PieceWiseConstFunc
from pyspike.generic import _generic_profile_multi, _generic_distance_matrix
@@ -14,23 +9,23 @@ from pyspike.generic import _generic_profile_multi, _generic_distance_matrix
############################################################
# isi_profile
############################################################
-def isi_profile(spikes1, spikes2):
- """ Computes the isi-distance profile :math:`S_{isi}(t)` of the two given
- spike trains. Retruns the profile as a PieceWiseConstFunc object. The S_isi
- values are defined positive S_isi(t)>=0. The spike trains are expected
- to have auxiliary spikes at the beginning and end of the interval. Use the
- function add_auxiliary_spikes to add those spikes to the spike train.
-
- :param spikes1: ordered array of spike times with auxiliary spikes.
- :param spikes2: ordered array of spike times with auxiliary spikes.
- :returns: The isi-distance profile :math:`S_{isi}(t)`
- :rtype: :class:`pyspike.function.PieceWiseConstFunc`
+def isi_profile(spike_train1, spike_train2):
+ """ Computes the isi-distance profile :math:`I(t)` of the two given
+ spike trains. Returns the profile as a PieceWiseConstFunc object. The
+ ISI-values are defined positive :math:`I(t)>=0`.
+
+ :param spike_train1: First spike train.
+ :type spike_train1: :class:`.SpikeTrain`
+ :param spike_train2: Second spike train.
+ :type spike_train2: :class:`.SpikeTrain`
+ :returns: The isi-distance profile :math:`I(t)`
+ :rtype: :class:`.PieceWiseConstFunc`
"""
- # check for auxiliary spikes - first and last spikes should be identical
- assert spikes1[0] == spikes2[0], \
+ # check whether the spike trains are defined for the same interval
+ assert spike_train1.t_start == spike_train2.t_start, \
"Given spike trains seems not to have auxiliary spikes!"
- assert spikes1[-1] == spikes2[-1], \
+ assert spike_train1.t_end == spike_train2.t_end, \
"Given spike trains seems not to have auxiliary spikes!"
# load cython implementation
@@ -45,29 +40,32 @@ Falling back to slow python backend.")
from cython.python_backend import isi_distance_python \
as isi_distance_impl
- times, values = isi_distance_impl(spikes1, spikes2)
+ times, values = isi_distance_impl(spike_train1.spikes, spike_train2.spikes,
+ spike_train1.t_start, spike_train1.t_end)
return PieceWiseConstFunc(times, values)
############################################################
# isi_distance
############################################################
-def isi_distance(spikes1, spikes2, interval=None):
- """ Computes the isi-distance I of the given spike trains. The
+def isi_distance(spike_train1, spike_train2, interval=None):
+ """ Computes the ISI-distance :math:`D_I` of the given spike trains. The
isi-distance is the integral over the isi distance profile
- :math:`S_{isi}(t)`:
+ :math:`I(t)`:
- .. math:: I = \int_{T_0}^{T_1} S_{isi}(t) dt.
+ .. math:: D_I = \\int_{T_0}^{T_1} I(t) dt.
- :param spikes1: ordered array of spike times with auxiliary spikes.
- :param spikes2: ordered array of spike times with auxiliary spikes.
+ :param spike_train1: First spike train.
+ :type spike_train1: :class:`.SpikeTrain`
+ :param spike_train2: Second spike train.
+ :type spike_train2: :class:`.SpikeTrain`
:param interval: averaging interval given as a pair of floats (T0, T1),
if None the average over the whole function is computed.
:type interval: Pair of floats or None.
- :returns: The isi-distance I.
+ :returns: The isi-distance :math:`D_I`.
:rtype: double
"""
- return isi_profile(spikes1, spikes2).avrg(interval)
+ return isi_profile(spike_train1, spike_train2).avrg(interval)
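
A short usage sketch of the new interface (spike times borrowed from the test
suite; the optional `interval` argument restricts the averaging window):

.. code:: python

    import numpy as np
    import pyspike as spk
    from pyspike import SpikeTrain

    st1 = SpikeTrain(np.array([0.2, 0.4, 0.6, 0.7]), [0.0, 1.0])
    st2 = SpikeTrain(np.array([0.3, 0.45, 0.8, 0.9, 0.95]), [0.0, 1.0])

    I = spk.isi_profile(st1, st2)        # PieceWiseConstFunc
    D_I = spk.isi_distance(st1, st2)     # identical to I.avrg()
    D_I_part = spk.isi_distance(st1, st2, interval=(0.2, 0.8))
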
############################################################
@@ -76,15 +74,17 @@ def isi_distance(spikes1, spikes2, interval=None):
def isi_profile_multi(spike_trains, indices=None):
""" computes the multi-variate isi distance profile for a set of spike
trains. That is the average isi-distance of all pairs of spike-trains:
- S_isi(t) = 2/((N(N-1)) sum_{<i,j>} S_{isi}^{i,j},
+
+ .. math:: <I(t)> = \\frac{2}{N(N-1)} \\sum_{<i,j>} I^{i,j},
+
where the sum goes over all pairs <i,j>
- :param spike_trains: list of spike trains
+ :param spike_trains: list of :class:`.SpikeTrain`
:param indices: list of indices defining which spike trains to use,
if None all given spike trains are used (default=None)
:type indices: list or None
- :returns: The averaged isi profile :math:`<S_{isi}>(t)`
- :rtype: :class:`pyspike.function.PieceWiseConstFunc`
+ :returns: The averaged isi profile :math:`<I(t)>`
+ :rtype: :class:`.PieceWiseConstFunc`
"""
average_dist, M = _generic_profile_multi(spike_trains, isi_profile,
indices)
@@ -98,16 +98,18 @@ def isi_profile_multi(spike_trains, indices=None):
def isi_distance_multi(spike_trains, indices=None, interval=None):
""" computes the multi-variate isi-distance for a set of spike-trains.
That is the time average of the multi-variate ISI profile:
- I = \int_0^T 2/((N(N-1)) sum_{<i,j>} S_{isi}^{i,j},
+
+ .. math:: D_I = \\int_0^T \\frac{2}{N(N-1)} \\sum_{<i,j>} I^{i,j},
+
where the sum goes over all pairs <i,j>
- :param spike_trains: list of spike trains
+ :param spike_trains: list of :class:`.SpikeTrain`
:param indices: list of indices defining which spike trains to use,
if None all given spike trains are used (default=None)
:param interval: averaging interval given as a pair of floats, if None
the average over the whole function is computed.
:type interval: Pair of floats or None.
- :returns: The time-averaged isi distance :math:`I`
+ :returns: The time-averaged multivariate ISI distance :math:`D_I`
:rtype: double
"""
return isi_profile_multi(spike_trains, indices).avrg(interval)
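
Since averaging over pairs commutes with averaging over time, the multivariate
distance can be cross-checked against an explicit loop over all pairs. A
sketch, assuming `spike_trains` is a list of SpikeTrain objects on a common
interval:

.. code:: python

    import itertools
    import pyspike as spk

    D_multi = spk.isi_distance_multi(spike_trains)

    # explicit average over all pairs <i,j> for comparison
    pairs = list(itertools.combinations(range(len(spike_trains)), 2))
    D_pairs = sum(spk.isi_distance(spike_trains[i], spike_trains[j])
                  for i, j in pairs) / len(pairs)
    # D_multi == D_pairs up to floating-point round-off
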
@@ -119,7 +121,7 @@ def isi_distance_multi(spike_trains, indices=None, interval=None):
def isi_distance_matrix(spike_trains, indices=None, interval=None):
""" Computes the time averaged isi-distance of all pairs of spike-trains.
- :param spike_trains: list of spike trains
+ :param spike_trains: list of :class:`.SpikeTrain`
:param indices: list of indices defining which spike trains to use,
if None all given spike trains are used (default=None)
:type indices: list or None
@@ -127,7 +129,7 @@ def isi_distance_matrix(spike_trains, indices=None, interval=None):
the average over the whole function is computed.
:type interval: Pair of floats or None.
:returns: 2D array with the pair wise time average isi distances
- :math:`I_{ij}`
+ :math:`D_{I}^{ij}`
:rtype: np.array
"""
return _generic_distance_matrix(spike_trains, isi_distance,
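
A sketch of the matrix variant; entry (i, j) holds the pairwise distance
:math:`D_I^{ij}`, so the result is a symmetric N x N array:

.. code:: python

    import pyspike as spk

    M = spk.isi_distance_matrix(spike_trains)
    N = len(spike_trains)
    assert M.shape == (N, N)
    # M[i, j] equals spk.isi_distance(spike_trains[i], spike_trains[j])
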
diff --git a/pyspike/psth.py b/pyspike/psth.py
index 8516460..4027215 100644
--- a/pyspike/psth.py
+++ b/pyspike/psth.py
@@ -1,27 +1,34 @@
-"""
-
-Module containing functions to compute the PSTH profile
-
-Copyright 2015, Mario Mulansky <mario.mulansky@gmx.net>
-
-Distributed under the BSD License
-"""
+# Module containing functions to compute the PSTH profile
+# Copyright 2015, Mario Mulansky <mario.mulansky@gmx.net>
+# Distributed under the BSD License
import numpy as np
from pyspike import PieceWiseConstFunc
-# Computes the Peristimulus time histogram of a set of spike trains
+# Computes the peri-stimulus time histogram of a set of spike trains
def psth(spike_trains, bin_size):
-
- bins = int((spike_trains[0][-1] - spike_trains[0][0]) / bin_size)
-
- N = len(spike_trains)
- combined_spike_train = spike_trains[0][1:-1]
+ """ Computes the peri-stimulus time histogram of a set of
+ :class:`.SpikeTrain`. The PSTH is simply the histogram of merged spike
+ events. The :code:`bin_size` defines the width of the histogram bins.
+
+ :param spike_trains: list of :class:`.SpikeTrain`
+ :param bin_size: width of the histogram bins.
+ :return: The PSTH as a :class:`.PieceWiseConstFunc`
+ """
+
+ bin_count = int((spike_trains[0].t_end - spike_trains[0].t_start) /
+ bin_size)
+ bins = np.linspace(spike_trains[0].t_start, spike_trains[0].t_end,
+ bin_count+1)
+
+ # N = len(spike_trains)
+ combined_spike_train = spike_trains[0].spikes
for i in xrange(1, len(spike_trains)):
combined_spike_train = np.append(combined_spike_train,
- spike_trains[i][1:-1])
+ spike_trains[i].spikes)
vals, edges = np.histogram(combined_spike_train, bins, density=False)
+
bin_size = edges[1]-edges[0]
- return PieceWiseConstFunc(edges, vals/(N*bin_size))
+ return PieceWiseConstFunc(edges, vals) # /(N*bin_size))
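
Note that the histogram is now returned unnormalized (the division by
`N*bin_size` is commented out above), i.e. the values are raw spike counts per
bin. A usage sketch, assuming `psth` is exported at the package level and that
the `get_plottable_data` helper of the profile classes is available:

.. code:: python

    import pyspike as spk

    # spike_trains: list of SpikeTrain objects on a common interval
    hist = spk.psth(spike_trains, bin_size=50.0)   # PieceWiseConstFunc
    x, y = hist.get_plottable_data()   # step-function samples for plotting
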
diff --git a/pyspike/spike_distance.py b/pyspike/spike_distance.py
index f721c86..cc620d4 100644
--- a/pyspike/spike_distance.py
+++ b/pyspike/spike_distance.py
@@ -1,11 +1,6 @@
-"""
-
-Module containing several functions to compute SPIKE profiles and distances
-
-Copyright 2014-2015, Mario Mulansky <mario.mulansky@gmx.net>
-
-Distributed under the BSD License
-"""
+# Module containing several functions to compute SPIKE profiles and distances
+# Copyright 2014-2015, Mario Mulansky <mario.mulansky@gmx.net>
+# Distributed under the BSD License
from pyspike import PieceWiseLinFunc
from pyspike.generic import _generic_profile_multi, _generic_distance_matrix
@@ -14,23 +9,23 @@ from pyspike.generic import _generic_profile_multi, _generic_distance_matrix
############################################################
# spike_profile
############################################################
-def spike_profile(spikes1, spikes2):
- """ Computes the spike-distance profile S_spike(t) of the two given spike
- trains. Returns the profile as a PieceWiseLinFunc object. The S_spike
- values are defined positive S_spike(t)>=0. The spike trains are expected to
- have auxiliary spikes at the beginning and end of the interval. Use the
- function add_auxiliary_spikes to add those spikes to the spike train.
-
- :param spikes1: ordered array of spike times with auxiliary spikes.
- :param spikes2: ordered array of spike times with auxiliary spikes.
- :returns: The spike-distance profile :math:`S_{spike}(t)`.
- :rtype: :class:`pyspike.function.PieceWiseLinFunc`
+def spike_profile(spike_train1, spike_train2):
+ """ Computes the spike-distance profile :math:`S(t)` of the two given spike
+ trains. Returns the profile as a PieceWiseLinFunc object. The SPIKE-values
+ are positive by definition, :math:`S(t)>=0`.
+
+ :param spike_train1: First spike train.
+ :type spike_train1: :class:`.SpikeTrain`
+ :param spike_train2: Second spike train.
+ :type spike_train2: :class:`.SpikeTrain`
+ :returns: The spike-distance profile :math:`S(t)`.
+ :rtype: :class:`.PieceWiseLinFunc`
"""
- # check for auxiliary spikes - first and last spikes should be identical
- assert spikes1[0] == spikes2[0], \
+ # check whether the spike trains are defined for the same interval
+ assert spike_train1.t_start == spike_train2.t_start, \
"Given spike trains seems not to have auxiliary spikes!"
- assert spikes1[-1] == spikes2[-1], \
+ assert spike_train1.t_end == spike_train2.t_end, \
"Given spike trains seems not to have auxiliary spikes!"
# cython implementation
@@ -45,21 +40,26 @@ Falling back to slow python backend.")
from cython.python_backend import spike_distance_python \
as spike_distance_impl
- times, y_starts, y_ends = spike_distance_impl(spikes1, spikes2)
+ times, y_starts, y_ends = spike_distance_impl(spike_train1.spikes,
+ spike_train2.spikes,
+ spike_train1.t_start,
+ spike_train1.t_end)
return PieceWiseLinFunc(times, y_starts, y_ends)
############################################################
# spike_distance
############################################################
-def spike_distance(spikes1, spikes2, interval=None):
- """ Computes the spike-distance S of the given spike trains. The
- spike-distance is the integral over the isi distance profile S_spike(t):
+def spike_distance(spike_train1, spike_train2, interval=None):
+ """ Computes the spike-distance :math:`D_S` of the given spike trains. The
+ spike-distance is the integral over the spike-distance profile :math:`S(t)`:
- .. math:: S = \int_{T_0}^{T_1} S_{spike}(t) dt.
+ .. math:: D_S = \\int_{T_0}^{T_1} S(t) dt.
- :param spikes1: ordered array of spike times with auxiliary spikes.
- :param spikes2: ordered array of spike times with auxiliary spikes.
+ :param spike_train1: First spike train.
+ :type spike_train1: :class:`.SpikeTrain`
+ :param spike_train2: Second spike train.
+ :type spike_train2: :class:`.SpikeTrain`
:param interval: averaging interval given as a pair of floats (T0, T1),
if None the average over the whole function is computed.
:type interval: Pair of floats or None.
@@ -67,7 +67,7 @@ def spike_distance(spikes1, spikes2, interval=None):
:rtype: double
"""
- return spike_profile(spikes1, spikes2).avrg(interval)
+ return spike_profile(spike_train1, spike_train2).avrg(interval)
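
Usage mirrors the ISI functions; the spike times here are taken from the
regression test below:

.. code:: python

    import numpy as np
    import pyspike as spk
    from pyspike import SpikeTrain

    st1 = SpikeTrain(np.array([0.0, 2.0, 5.0, 8.0]), [0.0, 10.0])
    st2 = SpikeTrain(np.array([0.0, 1.0, 5.0, 9.0]), [0.0, 10.0])

    S = spk.spike_profile(st1, st2)      # PieceWiseLinFunc
    D_S = spk.spike_distance(st1, st2)   # identical to S.avrg()
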
############################################################
@@ -76,15 +76,17 @@ def spike_distance(spikes1, spikes2, interval=None):
def spike_profile_multi(spike_trains, indices=None):
""" Computes the multi-variate spike distance profile for a set of spike
trains. That is the average spike-distance of all pairs of spike-trains:
- :math:`S_spike(t) = 2/((N(N-1)) sum_{<i,j>} S_{spike}^{i, j}`,
+
+ .. math:: <S(t)> = \\frac{2}{N(N-1)} \\sum_{<i,j>} S^{i,j},
+
where the sum goes over all pairs <i,j>
- :param spike_trains: list of spike trains
+ :param spike_trains: list of :class:`.SpikeTrain`
:param indices: list of indices defining which spike trains to use,
if None all given spike trains are used (default=None)
:type indices: list or None
- :returns: The averaged spike profile :math:`<S_{spike}>(t)`
- :rtype: :class:`pyspike.function.PieceWiseLinFunc`
+ :returns: The averaged spike profile :math:`<S>(t)`
+ :rtype: :class:`.PieceWiseLinFunc`
"""
average_dist, M = _generic_profile_multi(spike_trains, spike_profile,
@@ -99,17 +101,20 @@ def spike_profile_multi(spike_trains, indices=None):
def spike_distance_multi(spike_trains, indices=None, interval=None):
""" Computes the multi-variate spike distance for a set of spike trains.
That is the time average of the multi-variate spike profile:
- S_{spike} = \int_0^T 2/((N(N-1)) sum_{<i,j>} S_{spike}^{i, j} dt
+
+ .. math:: D_S = \\int_0^T \\frac{2}{N(N-1)} \\sum_{<i,j>}
+ S^{i, j} dt
+
where the sum goes over all pairs <i,j>
- :param spike_trains: list of spike trains
+ :param spike_trains: list of :class:`.SpikeTrain`
:param indices: list of indices defining which spike trains to use,
if None all given spike trains are used (default=None)
:type indices: list or None
:param interval: averaging interval given as a pair of floats, if None
the average over the whole function is computed.
:type interval: Pair of floats or None.
- :returns: The averaged spike distance S.
+ :returns: The averaged multi-variate spike distance :math:`D_S`.
:rtype: double
"""
return spike_profile_multi(spike_trains, indices).avrg(interval)
@@ -121,7 +126,7 @@ def spike_distance_multi(spike_trains, indices=None, interval=None):
def spike_distance_matrix(spike_trains, indices=None, interval=None):
""" Computes the time averaged spike-distance of all pairs of spike-trains.
- :param spike_trains: list of spike trains
+ :param spike_trains: list of :class:`.SpikeTrain`
:param indices: list of indices defining which spike trains to use,
if None all given spike trains are used (default=None)
:type indices: list or None
@@ -129,7 +134,7 @@ def spike_distance_matrix(spike_trains, indices=None, interval=None):
the average over the whole function is computed.
:type interval: Pair of floats or None.
:returns: 2D array with the pair wise time average spike distances
- :math:`S_{ij}`
+ :math:`D_S^{ij}`
:rtype: np.array
"""
return _generic_distance_matrix(spike_trains, spike_distance,
diff --git a/pyspike/spike_sync.py b/pyspike/spike_sync.py
index e12ebb8..9d2e363 100644
--- a/pyspike/spike_sync.py
+++ b/pyspike/spike_sync.py
@@ -1,12 +1,7 @@
-"""
-
-Module containing several functions to compute SPIKE-Synchronization profiles
-and distances
-
-Copyright 2014-2015, Mario Mulansky <mario.mulansky@gmx.net>
-
-Distributed under the BSD License
-"""
+# Module containing several functions to compute SPIKE-Synchronization profiles
+# and distances
+# Copyright 2014-2015, Mario Mulansky <mario.mulansky@gmx.net>
+# Distributed under the BSD License
from functools import partial
from pyspike import DiscreteFunc
@@ -16,22 +11,27 @@ from pyspike.generic import _generic_profile_multi, _generic_distance_matrix
############################################################
# spike_sync_profile
############################################################
-def spike_sync_profile(spikes1, spikes2, max_tau=None):
+def spike_sync_profile(spike_train1, spike_train2, max_tau=None):
""" Computes the spike-synchronization profile S_sync(t) of the two given
spike trains. Returns the profile as a DiscreteFunction object. The S_sync
values are either 1 or 0, indicating the presence or absence of a
- coincidence. The spike trains are expected to have auxiliary spikes at the
- beginning and end of the interval. Use the function add_auxiliary_spikes to
- add those spikes to the spike train.
+ coincidence.
- :param spikes1: ordered array of spike times with auxiliary spikes.
- :param spikes2: ordered array of spike times with auxiliary spikes.
+ :param spike_train1: First spike train.
+ :type spike_train1: :class:`pyspike.SpikeTrain`
+ :param spike_train2: Second spike train.
+ :type spike_train2: :class:`pyspike.SpikeTrain`
:param max_tau: Maximum coincidence window size. If 0 or `None`, the
- coincidence window has no upper bound.
+ coincidence window has no upper bound.
:returns: The spike-synchronization profile :math:`S_{sync}(t)`.
:rtype: :class:`.DiscreteFunc`
"""
+ # check whether the spike trains are defined for the same interval
+ assert spike_train1.t_start == spike_train2.t_start, \
+ "Given spike trains seems not to have auxiliary spikes!"
+ assert spike_train1.t_end == spike_train2.t_end, \
+ "Given spike trains seems not to have auxiliary spikes!"
# cython implementation
try:
@@ -48,7 +48,10 @@ Falling back to slow python backend.")
if max_tau is None:
max_tau = 0.0
- times, coincidences, multiplicity = coincidence_impl(spikes1, spikes2,
+ times, coincidences, multiplicity = coincidence_impl(spike_train1.spikes,
+ spike_train2.spikes,
+ spike_train1.t_start,
+ spike_train1.t_end,
max_tau)
return DiscreteFunc(times, coincidences, multiplicity)
@@ -57,24 +60,28 @@ Falling back to slow python backend.")
############################################################
# spike_sync
############################################################
-def spike_sync(spikes1, spikes2, interval=None, max_tau=None):
+def spike_sync(spike_train1, spike_train2, interval=None, max_tau=None):
""" Computes the spike synchronization value SYNC of the given spike
trains. The spike synchronization value is computed as the total number
of coincidences divided by the total number of spikes:
.. math:: SYNC = \sum_n C_n / N.
- :param spikes1: ordered array of spike times with auxiliary spikes.
- :param spikes2: ordered array of spike times with auxiliary spikes.
+ :param spike_train1: First spike train.
+ :type spike_train1: :class:`pyspike.SpikeTrain`
+ :param spike_train2: Second spike train.
+ :type spike_train2: :class:`pyspike.SpikeTrain`
:param interval: averaging interval given as a pair of floats (T0, T1),
- if None the average over the whole function is computed.
+ if `None` the average over the whole function is computed.
:type interval: Pair of floats or None.
:param max_tau: Maximum coincidence window size. If 0 or `None`, the
- coincidence window has no upper bound.
+ coincidence window has no upper bound.
:returns: The spike synchronization value.
- :rtype: double
+ :rtype: `double`
+
"""
- return spike_sync_profile(spikes1, spikes2, max_tau).avrg(interval)
+ return spike_sync_profile(spike_train1, spike_train2,
+ max_tau).avrg(interval)
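
The `max_tau` parameter (new in 0.2.0, see the Changelog) caps the otherwise
adaptive coincidence window; the values below follow the test cases:

.. code:: python

    import numpy as np
    import pyspike as spk
    from pyspike import SpikeTrain

    st1 = SpikeTrain(np.array([1.0, 2.0, 3.0]), [0.0, 4.0])
    st2 = SpikeTrain(np.array([2.1]), [0.0, 4.0])

    spk.spike_sync(st1, st2)                # 0.5, spikes 2.0 and 2.1 coincide
    spk.spike_sync(st1, st2, max_tau=0.05)  # 0.0, window now too small
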
############################################################
@@ -87,21 +94,21 @@ def spike_sync_profile_multi(spike_trains, indices=None, max_tau=None):
spike train pairs involving the spike train containing this spike,
which is the number of spike trains minus one (N-1).
- :param spike_trains: list of spike trains
+ :param spike_trains: list of :class:`pyspike.SpikeTrain`
:param indices: list of indices defining which spike trains to use,
if None all given spike trains are used (default=None)
:type indices: list or None
:param max_tau: Maximum coincidence window size. If 0 or `None`, the
- coincidence window has no upper bound.
+ coincidence window has no upper bound.
:returns: The multi-variate spike sync profile :math:`<S_{sync}>(t)`
:rtype: :class:`.DiscreteFunc`
"""
prof_func = partial(spike_sync_profile, max_tau=max_tau)
- average_dist, M = _generic_profile_multi(spike_trains, prof_func,
+ average_prof, M = _generic_profile_multi(spike_trains, prof_func,
indices)
# average_dist.mul_scalar(1.0/M) # no normalization here!
- return average_dist
+ return average_prof
############################################################
@@ -111,7 +118,7 @@ def spike_sync_multi(spike_trains, indices=None, interval=None, max_tau=None):
""" Computes the multi-variate spike synchronization value for a set of
spike trains.
- :param spike_trains: list of spike trains
+ :param spike_trains: list of :class:`pyspike.SpikeTrain`
:param indices: list of indices defining which spike trains to use,
if None all given spike trains are used (default=None)
:type indices: list or None
@@ -119,9 +126,10 @@ def spike_sync_multi(spike_trains, indices=None, interval=None, max_tau=None):
the average over the whole function is computed.
:type interval: Pair of floats or None.
:param max_tau: Maximum coincidence window size. If 0 or `None`, the
- coincidence window has no upper bound.
+ coincidence window has no upper bound.
:returns: The multi-variate spike synchronization value SYNC.
:rtype: double
+
"""
return spike_sync_profile_multi(spike_trains, indices,
max_tau).avrg(interval)
@@ -134,7 +142,7 @@ def spike_sync_matrix(spike_trains, indices=None, interval=None, max_tau=None):
""" Computes the overall spike-synchronization value of all pairs of
spike-trains.
- :param spike_trains: list of spike trains
+ :param spike_trains: list of :class:`pyspike.SpikeTrain`
:param indices: list of indices defining which spike trains to use,
if None all given spike trains are used (default=None)
:type indices: list or None
@@ -142,10 +150,11 @@ def spike_sync_matrix(spike_trains, indices=None, interval=None, max_tau=None):
the average over the whole function is computed.
:type interval: Pair of floats or None.
:param max_tau: Maximum coincidence window size. If 0 or `None`, the
- coincidence window has no upper bound.
+ coincidence window has no upper bound.
:returns: 2D array with the pair wise time spike synchronization values
:math:`SYNC_{ij}`
:rtype: np.array
+
"""
dist_func = partial(spike_sync, max_tau=max_tau)
return _generic_distance_matrix(spike_trains, dist_func,
diff --git a/pyspike/spikes.py b/pyspike/spikes.py
index 9d7d6f4..35d8533 100644
--- a/pyspike/spikes.py
+++ b/pyspike/spikes.py
@@ -1,102 +1,57 @@
-""" spikes.py
-
-Module containing several function to load and transform spike trains
-
-Copyright 2014, Mario Mulansky <mario.mulansky@gmx.net>
-
-Distributed under the BSD License
-"""
+ # Module containing several functions to load and transform spike trains
+# Copyright 2014, Mario Mulansky <mario.mulansky@gmx.net>
+# Distributed under the BSD License
import numpy as np
-
-
-############################################################
-# add_auxiliary_spikes
-############################################################
-def add_auxiliary_spikes(spike_train, time_interval):
- """ Adds spikes at the beginning and end of the given time interval.
-
- :param spike_train: ordered array of spike times
- :param time_interval: A pair (T_start, T_end) of values representing the
- start and end time of the spike train measurement or
- a single value representing the end time, the T_start
- is then assuemd as 0. Auxiliary spikes will be added
- to the spike train at the beginning and end of this
- interval, if they are not yet present.
- :type time_interval: pair of doubles or double
- :returns: spike train with additional spikes at T_start and T_end.
-
- """
- try:
- T_start = time_interval[0]
- T_end = time_interval[1]
- except:
- T_start = 0
- T_end = time_interval
-
- assert spike_train[0] >= T_start, \
- "Spike train has events before the given start time"
- assert spike_train[-1] <= T_end, \
- "Spike train has events after the given end time"
- if spike_train[0] != T_start:
- spike_train = np.insert(spike_train, 0, T_start)
- if spike_train[-1] != T_end:
- spike_train = np.append(spike_train, T_end)
- return spike_train
+from pyspike import SpikeTrain
############################################################
# spike_train_from_string
############################################################
-def spike_train_from_string(s, sep=' ', is_sorted=False):
- """ Converts a string of times into an array of spike times.
+def spike_train_from_string(s, edges, sep=' ', is_sorted=False):
+ """ Converts a string of times into a :class:`.SpikeTrain`.
- :param s: the string with (ordered) spike times
+ :param s: the string with (ordered) spike times.
+ :param edges: interval defining the edges of the spike train.
+ Given as a pair of floats (T0, T1) or a single float T1,
+ where T0=0 is assumed.
:param sep: The separator between the time numbers, default=' '.
:param is_sorted: if True, the spike times are not sorted after loading,
if False, spike times are sorted with `np.sort`
- :returns: array of spike times
+ :returns: :class:`.SpikeTrain`
"""
- if not(is_sorted):
- return np.sort(np.fromstring(s, sep=sep))
- else:
- return np.fromstring(s, sep=sep)
+ return SpikeTrain(np.fromstring(s, sep=sep), edges, is_sorted)
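
A sketch of the new signature; the edges must now be passed explicitly
(assuming the function remains exported at the package level):

.. code:: python

    from pyspike import spike_train_from_string

    st = spike_train_from_string("0.1 0.3 0.45 0.6 0.9", edges=(0.0, 1.0))
    print(st.spikes)              # [0.1  0.3  0.45 0.6  0.9 ]
    print(st.t_start, st.t_end)   # 0.0 1.0
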
############################################################
# load_spike_trains_txt
############################################################
-def load_spike_trains_from_txt(file_name, time_interval=None,
+def load_spike_trains_from_txt(file_name, edges,
separator=' ', comment='#', is_sorted=False):
""" Loads a number of spike trains from a text file. Each line of the text
file should contain one spike train as a sequence of spike times separated
by `separator`. Empty lines as well as lines starting with `comment` are
- neglected. The `time_interval` represents the start and the end of the
- spike trains and it is used to add auxiliary spikes at the beginning and
- end of each spike train. However, if `time_interval == None`, no auxiliary
- spikes are added, but note that the Spike and ISI distance both require
- auxiliary spikes.
+ ignored. The `edges` parameter represents the start and the end of
+ the spike trains.
:param file_name: The name of the text file.
- :param time_interval: A pair (T_start, T_end) of values representing the
- start and end time of the spike train measurement
- or a single value representing the end time, the
- T_start is then assuemd as 0. Auxiliary spikes will
- be added to the spike train at the beginning and end
- of this interval.
+ :param edges: A pair (T_start, T_end) of values representing the
+ start and end time of the spike train measurement
+ or a single value representing the end time, the
+ T_start is then assumed to be 0.
:param separator: The character used to separate the values in the text file
:param comment: Lines starting with this character are ignored.
:param is_sorted: If False (default), the spike times are sorted via `np.sort`.
- :returns: list of spike trains
+ :returns: list of :class:`.SpikeTrain`
"""
spike_trains = []
spike_file = open(file_name, 'r')
for line in spike_file:
if len(line) > 1 and not line.startswith(comment):
# use only the lines with actual data and not commented
- spike_train = spike_train_from_string(line, separator, is_sorted)
- if time_interval is not None: # add auxil. spikes if times given
- spike_train = add_auxiliary_spikes(spike_train, time_interval)
+ spike_train = spike_train_from_string(line, edges,
+ separator, is_sorted)
spike_trains.append(spike_train)
return spike_trains
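
Mirroring the tests below, loading the example data shipped with the
repository now looks like this:

.. code:: python

    import pyspike as spk

    spike_trains = spk.load_spike_trains_from_txt(
        "examples/PySpike_testdata.txt", edges=(0, 4000))
    st = spike_trains[0]          # one SpikeTrain per non-empty line
    print(st.t_start, st.t_end)   # 0 4000
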
@@ -107,18 +62,18 @@ def load_spike_trains_from_txt(file_name, time_interval=None,
def merge_spike_trains(spike_trains):
""" Merges a number of spike trains into a single spike train.
- :param spike_trains: list of arrays of spike times
+ :param spike_trains: list of :class:`.SpikeTrain`
:returns: spike train with the merged spike times
"""
# get the lengths of the spike trains
- lens = np.array([len(st) for st in spike_trains])
+ lens = np.array([len(st.spikes) for st in spike_trains])
merged_spikes = np.empty(np.sum(lens))
index = 0 # the index for merged_spikes
indices = np.zeros_like(lens) # indices of the spike trains
index_list = np.arange(len(indices)) # indices of indices of spike trains
# that have not yet reached the end
# list of the possible events in the spike trains
- vals = [spike_trains[i][indices[i]] for i in index_list]
+ vals = [spike_trains[i].spikes[indices[i]] for i in index_list]
while len(index_list) > 0:
i = np.argmin(vals) # the next spike is the minimum
merged_spikes[index] = vals[i] # put it to the merged spike train
@@ -127,33 +82,34 @@ def merge_spike_trains(spike_trains):
indices[i] += 1 # next index for the chosen spike train
if indices[i] >= lens[i]: # remove spike train index if ended
index_list = index_list[index_list != i]
- vals = [spike_trains[n][indices[n]] for n in index_list]
- return merged_spikes
+ vals = [spike_trains[n].spikes[indices[n]] for n in index_list]
+ return SpikeTrain(merged_spikes, [spike_trains[0].t_start,
+ spike_trains[0].t_end])
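
A sketch, with `spike_trains` as loaded above; the merged train is sorted and
inherits the edges of the first input train:

.. code:: python

    import numpy as np
    import pyspike as spk

    merged = spk.merge_spike_trains([spike_trains[0], spike_trains[1]])
    assert (merged.spikes == np.sort(merged.spikes)).all()
    assert merged.t_start == spike_trains[0].t_start
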
############################################################
# generate_poisson_spikes
############################################################
-def generate_poisson_spikes(rate, time_interval, add_aux_spikes=True):
+def generate_poisson_spikes(rate, interval):
""" Generates a Poisson spike train with the given rate in the given time
interval
:param rate: The rate of the spike trains
- :param time_interval: A pair (T_start, T_end) of values representing the
- start and end time of the spike train measurement or
- a single value representing the end time, the T_start
- is then assuemd as 0. Auxiliary spikes will be added
- to the spike train at the beginning and end of this
- interval, if they are not yet present.
- :type time_interval: pair of doubles or double
- :returns: Poisson spike train
+ :param interval: A pair (T_start, T_end) of values representing the
+ start and end time of the spike train measurement or
+ a single value representing the end time, the T_start
+ is then assumed to be 0.
+ :type interval: pair of doubles or double
+ :returns: Poisson spike train as a :class:`.SpikeTrain`
"""
try:
- T_start = time_interval[0]
- T_end = time_interval[1]
+ T_start = interval[0]
+ T_end = interval[1]
except:
T_start = 0
- T_end = time_interval
+ T_end = interval
# roughly how many spikes are required to fill the interval
N = max(1, int(1.2 * rate * (T_end-T_start)))
N_append = max(1, int(0.1 * rate * (T_end-T_start)))
@@ -165,7 +121,4 @@ def generate_poisson_spikes(rate, time_interval, add_aux_spikes=True):
np.random.exponential(1.0/rate, N_append))
spikes = T_start + np.cumsum(intervals)
spikes = spikes[spikes < T_end]
- if add_aux_spikes:
- return add_auxiliary_spikes(spikes, time_interval)
- else:
- return spikes
+ return SpikeTrain(spikes, interval)
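
A sketch of generating a Poisson train with the new interface (the former
`add_aux_spikes` flag is gone and a SpikeTrain is always returned; assuming
the generator is exported at the package level):

.. code:: python

    import pyspike as spk

    st = spk.generate_poisson_spikes(rate=0.1, interval=(0, 1000))
    # on average about rate * (T_end - T_start) = 100 spikes
    print(len(st.spikes), st.t_start, st.t_end)
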
diff --git a/test/test_distance.py b/test/test_distance.py
index ba19f5e..19da35f 100644
--- a/test/test_distance.py
+++ b/test/test_distance.py
@@ -15,12 +15,13 @@ from numpy.testing import assert_equal, assert_almost_equal, \
assert_array_almost_equal
import pyspike as spk
+from pyspike import SpikeTrain
def test_isi():
# generate two spike trains:
- t1 = np.array([0.2, 0.4, 0.6, 0.7])
- t2 = np.array([0.3, 0.45, 0.8, 0.9, 0.95])
+ t1 = SpikeTrain([0.2, 0.4, 0.6, 0.7], 1.0)
+ t2 = SpikeTrain([0.3, 0.45, 0.8, 0.9, 0.95], 1.0)
# pen&paper calculation of the isi distance
expected_times = [0.0, 0.2, 0.3, 0.4, 0.45, 0.6, 0.7, 0.8, 0.9, 0.95, 1.0]
@@ -32,8 +33,6 @@ def test_isi():
expected_isi_val = sum((expected_times[1:] - expected_times[:-1]) *
expected_isi)/(expected_times[-1]-expected_times[0])
- t1 = spk.add_auxiliary_spikes(t1, 1.0)
- t2 = spk.add_auxiliary_spikes(t2, 1.0)
f = spk.isi_profile(t1, t2)
# print("ISI: ", f.y)
@@ -44,19 +43,17 @@ def test_isi():
assert_equal(spk.isi_distance(t1, t2), expected_isi_val)
# check with some equal spike times
- t1 = np.array([0.2, 0.4, 0.6])
- t2 = np.array([0.1, 0.4, 0.5, 0.6])
+ t1 = SpikeTrain([0.2, 0.4, 0.6], [0.0, 1.0])
+ t2 = SpikeTrain([0.1, 0.4, 0.5, 0.6], [0.0, 1.0])
expected_times = [0.0, 0.1, 0.2, 0.4, 0.5, 0.6, 1.0]
- expected_isi = [0.1/0.2, 0.1/0.3, 0.1/0.3, 0.1/0.2, 0.1/0.2, 0.0/0.5]
+ expected_isi = [0.1/0.3, 0.1/0.3, 0.1/0.3, 0.1/0.2, 0.1/0.2, 0.0/0.5]
expected_times = np.array(expected_times)
expected_isi = np.array(expected_isi)
expected_isi_val = sum((expected_times[1:] - expected_times[:-1]) *
expected_isi)/(expected_times[-1]-expected_times[0])
- t1 = spk.add_auxiliary_spikes(t1, 1.0)
- t2 = spk.add_auxiliary_spikes(t2, 1.0)
f = spk.isi_profile(t1, t2)
assert_equal(f.x, expected_times)
@@ -67,14 +64,27 @@ def test_isi():
def test_spike():
# generate two spike trains:
- t1 = np.array([0.2, 0.4, 0.6, 0.7])
- t2 = np.array([0.3, 0.45, 0.8, 0.9, 0.95])
+ t1 = SpikeTrain([0.0, 2.0, 5.0, 8.0], 10.0)
+ t2 = SpikeTrain([0.0, 1.0, 5.0, 9.0], 10.0)
+
+ expected_times = np.array([0.0, 1.0, 2.0, 5.0, 8.0, 9.0, 10.0])
+
+ f = spk.spike_profile(t1, t2)
+
+ assert_equal(f.x, expected_times)
+
+ assert_almost_equal(f.avrg(), 1.6624149659863946e-01, decimal=15)
+ assert_almost_equal(f.y2[-1], 0.1394558, decimal=6)
+
+ t1 = SpikeTrain([0.2, 0.4, 0.6, 0.7], 1.0)
+ t2 = SpikeTrain([0.3, 0.45, 0.8, 0.9, 0.95], 1.0)
# pen&paper calculation of the spike distance
expected_times = [0.0, 0.2, 0.3, 0.4, 0.45, 0.6, 0.7, 0.8, 0.9, 0.95, 1.0]
s1 = np.array([0.1, 0.1, (0.1*0.1+0.05*0.1)/0.2, 0.05, (0.05*0.15 * 2)/0.2,
- 0.15, 0.1, 0.1*0.2/0.3, 0.1**2/0.3, 0.1*0.05/0.3, 0.1])
- s2 = np.array([0.1, 0.1*0.2/0.3, 0.1, (0.1*0.05 * 2)/.15, 0.05,
+ 0.15, 0.1, (0.1*0.1+0.1*0.2)/0.3, (0.1*0.2+0.1*0.1)/0.3,
+ (0.1*0.05+0.1*0.25)/0.3, 0.1])
+ s2 = np.array([0.1, (0.1*0.2+0.1*0.1)/0.3, 0.1, (0.1*0.05 * 2)/.15, 0.05,
(0.05*0.2+0.1*0.15)/0.35, (0.05*0.1+0.1*0.25)/0.35,
0.1, 0.1, 0.05, 0.05])
isi1 = np.array([0.2, 0.2, 0.2, 0.2, 0.2, 0.1, 0.3, 0.3, 0.3, 0.3])
@@ -89,27 +99,32 @@ def test_spike():
(expected_y1+expected_y2)/2)
expected_spike_val /= (expected_times[-1]-expected_times[0])
- t1 = spk.add_auxiliary_spikes(t1, 1.0)
- t2 = spk.add_auxiliary_spikes(t2, 1.0)
f = spk.spike_profile(t1, t2)
assert_equal(f.x, expected_times)
assert_array_almost_equal(f.y1, expected_y1, decimal=15)
assert_array_almost_equal(f.y2, expected_y2, decimal=15)
- assert_equal(f.avrg(), expected_spike_val)
- assert_equal(spk.spike_distance(t1, t2), expected_spike_val)
+ assert_almost_equal(f.avrg(), expected_spike_val, decimal=15)
+ assert_almost_equal(spk.spike_distance(t1, t2), expected_spike_val,
+ decimal=15)
# check with some equal spike times
- t1 = np.array([0.2, 0.4, 0.6])
- t2 = np.array([0.1, 0.4, 0.5, 0.6])
+ t1 = SpikeTrain([0.2, 0.4, 0.6], [0.0, 1.0])
+ t2 = SpikeTrain([0.1, 0.4, 0.5, 0.6], [0.0, 1.0])
expected_times = [0.0, 0.1, 0.2, 0.4, 0.5, 0.6, 1.0]
- s1 = np.array([0.1, 0.1*0.1/0.2, 0.1, 0.0, 0.0, 0.0, 0.0])
- s2 = np.array([0.1*0.1/0.3, 0.1, 0.1*0.2/0.3, 0.0, 0.1, 0.0, 0.0])
+ # due to the edge correction in the beginning, s1 and s2 are different
+ # for left and right values
+ s1_r = np.array([0.1, (0.1*0.1+0.1*0.1)/0.2, 0.1, 0.0, 0.0, 0.0, 0.0])
+ s1_l = np.array([0.1, (0.1*0.1+0.1*0.1)/0.2, 0.1, 0.0, 0.0, 0.0, 0.0])
+ s2_r = np.array([0.1*0.1/0.3, 0.1*0.3/0.3, 0.1*0.2/0.3,
+ 0.0, 0.1, 0.0, 0.0])
+ s2_l = np.array([0.1*0.1/0.3, 0.1*0.1/0.3, 0.1*0.2/0.3, 0.0,
+ 0.1, 0.0, 0.0])
isi1 = np.array([0.2, 0.2, 0.2, 0.2, 0.2, 0.4])
isi2 = np.array([0.3, 0.3, 0.3, 0.1, 0.1, 0.4])
- expected_y1 = (s1[:-1]*isi2+s2[:-1]*isi1) / (0.5*(isi1+isi2)**2)
- expected_y2 = (s1[1:]*isi2+s2[1:]*isi1) / (0.5*(isi1+isi2)**2)
+ expected_y1 = (s1_r[:-1]*isi2+s2_r[:-1]*isi1) / (0.5*(isi1+isi2)**2)
+ expected_y2 = (s1_l[1:]*isi2+s2_l[1:]*isi1) / (0.5*(isi1+isi2)**2)
expected_times = np.array(expected_times)
expected_y1 = np.array(expected_y1)
@@ -118,8 +133,6 @@ def test_spike():
(expected_y1+expected_y2)/2)
expected_spike_val /= (expected_times[-1]-expected_times[0])
- t1 = spk.add_auxiliary_spikes(t1, 1.0)
- t2 = spk.add_auxiliary_spikes(t2, 1.0)
f = spk.spike_profile(t1, t2)
assert_equal(f.x, expected_times)
@@ -131,10 +144,17 @@ def test_spike():
def test_spike_sync():
- spikes1 = np.array([1.0, 2.0, 3.0])
- spikes2 = np.array([2.1])
- spikes1 = spk.add_auxiliary_spikes(spikes1, 4.0)
- spikes2 = spk.add_auxiliary_spikes(spikes2, 4.0)
+ spikes1 = SpikeTrain([1.0, 2.0, 3.0], 4.0)
+ spikes2 = SpikeTrain([2.1], 4.0)
+
+ expected_x = np.array([0.0, 1.0, 2.0, 2.1, 3.0, 4.0])
+ expected_y = np.array([0.0, 0.0, 1.0, 1.0, 0.0, 0.0])
+
+ f = spk.spike_sync_profile(spikes1, spikes2)
+
+ assert_array_almost_equal(f.x, expected_x, decimal=16)
+ assert_array_almost_equal(f.y, expected_y, decimal=16)
+
assert_almost_equal(spk.spike_sync(spikes1, spikes2),
0.5, decimal=16)
@@ -142,28 +162,46 @@ def test_spike_sync():
assert_almost_equal(spk.spike_sync(spikes1, spikes2, max_tau=0.05),
0.0, decimal=16)
- spikes2 = np.array([3.1])
- spikes2 = spk.add_auxiliary_spikes(spikes2, 4.0)
+ spikes2 = SpikeTrain([3.1], 4.0)
assert_almost_equal(spk.spike_sync(spikes1, spikes2),
0.5, decimal=16)
- spikes2 = np.array([1.1])
- spikes2 = spk.add_auxiliary_spikes(spikes2, 4.0)
+ spikes2 = SpikeTrain([1.1], 4.0)
+
+ expected_x = np.array([0.0, 1.0, 1.1, 2.0, 3.0, 4.0])
+ expected_y = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])
+
+ f = spk.spike_sync_profile(spikes1, spikes2)
+
+ assert_array_almost_equal(f.x, expected_x, decimal=16)
+ assert_array_almost_equal(f.y, expected_y, decimal=16)
+
assert_almost_equal(spk.spike_sync(spikes1, spikes2),
0.5, decimal=16)
- spikes2 = np.array([0.9])
- spikes2 = spk.add_auxiliary_spikes(spikes2, 4.0)
+ spikes2 = SpikeTrain([0.9], 4.0)
assert_almost_equal(spk.spike_sync(spikes1, spikes2),
0.5, decimal=16)
+ spikes2 = SpikeTrain([3.0], 4.0)
+ assert_almost_equal(spk.spike_sync(spikes1, spikes2),
+ 0.5, decimal=16)
+
+ spikes2 = SpikeTrain([1.0], 4.0)
+ assert_almost_equal(spk.spike_sync(spikes1, spikes2),
+ 0.5, decimal=16)
+
+ spikes2 = SpikeTrain([1.5, 3.0], 4.0)
+ assert_almost_equal(spk.spike_sync(spikes1, spikes2),
+ 0.4, decimal=16)
+
def check_multi_profile(profile_func, profile_func_multi):
# generate spike trains:
- t1 = spk.add_auxiliary_spikes(np.array([0.2, 0.4, 0.6, 0.7]), 1.0)
- t2 = spk.add_auxiliary_spikes(np.array([0.3, 0.45, 0.8, 0.9, 0.95]), 1.0)
- t3 = spk.add_auxiliary_spikes(np.array([0.2, 0.4, 0.6]), 1.0)
- t4 = spk.add_auxiliary_spikes(np.array([0.1, 0.4, 0.5, 0.6]), 1.0)
+ t1 = SpikeTrain([0.2, 0.4, 0.6, 0.7], 1.0)
+ t2 = SpikeTrain([0.3, 0.45, 0.8, 0.9, 0.95], 1.0)
+ t3 = SpikeTrain([0.2, 0.4, 0.6], 1.0)
+ t4 = SpikeTrain([0.1, 0.4, 0.5, 0.6], 1.0)
spike_trains = [t1, t2, t3, t4]
f12 = profile_func(t1, t2)
@@ -206,15 +244,12 @@ def test_multi_spike():
def test_multi_spike_sync():
# some basic multivariate check
- spikes1 = np.array([100, 300, 400, 405, 410, 500, 700, 800,
- 805, 810, 815, 900], dtype=float)
- spikes2 = np.array([100, 200, 205, 210, 295, 350, 400, 510,
- 600, 605, 700, 910], dtype=float)
- spikes3 = np.array([100, 180, 198, 295, 412, 420, 510, 640,
- 695, 795, 820, 920], dtype=float)
- spikes1 = spk.add_auxiliary_spikes(spikes1, 1000)
- spikes2 = spk.add_auxiliary_spikes(spikes2, 1000)
- spikes3 = spk.add_auxiliary_spikes(spikes3, 1000)
+ spikes1 = SpikeTrain([100, 300, 400, 405, 410, 500, 700, 800,
+ 805, 810, 815, 900], 1000)
+ spikes2 = SpikeTrain([100, 200, 205, 210, 295, 350, 400, 510,
+ 600, 605, 700, 910], 1000)
+ spikes3 = SpikeTrain([100, 180, 198, 295, 412, 420, 510, 640,
+ 695, 795, 820, 920], 1000)
assert_almost_equal(spk.spike_sync(spikes1, spikes2),
0.5, decimal=15)
assert_almost_equal(spk.spike_sync(spikes1, spikes3),
@@ -233,18 +268,28 @@ def test_multi_spike_sync():
# multivariate regression test
spike_trains = spk.load_spike_trains_from_txt("test/SPIKE_Sync_Test.txt",
- time_interval=(0, 4000))
+ edges=[0, 4000])
+ # extract all spike times
+ spike_times = np.array([])
+ for st in spike_trains:
+ spike_times = np.append(spike_times, st.spikes)
+ spike_times = np.unique(np.sort(spike_times))
+
f = spk.spike_sync_profile_multi(spike_trains)
+
+ assert_equal(spike_times, f.x[1:-1])
+ assert_equal(len(f.x), len(f.y))
+
assert_equal(np.sum(f.y[1:-1]), 39932)
assert_equal(np.sum(f.mp[1:-1]), 85554)
def check_dist_matrix(dist_func, dist_matrix_func):
# generate spike trains:
- t1 = spk.add_auxiliary_spikes(np.array([0.2, 0.4, 0.6, 0.7]), 1.0)
- t2 = spk.add_auxiliary_spikes(np.array([0.3, 0.45, 0.8, 0.9, 0.95]), 1.0)
- t3 = spk.add_auxiliary_spikes(np.array([0.2, 0.4, 0.6]), 1.0)
- t4 = spk.add_auxiliary_spikes(np.array([0.1, 0.4, 0.5, 0.6]), 1.0)
+ t1 = SpikeTrain([0.2, 0.4, 0.6, 0.7], 1.0)
+ t2 = SpikeTrain([0.3, 0.45, 0.8, 0.9, 0.95], 1.0)
+ t3 = SpikeTrain([0.2, 0.4, 0.6], 1.0)
+ t4 = SpikeTrain([0.1, 0.4, 0.5, 0.6], 1.0)
spike_trains = [t1, t2, t3, t4]
f12 = dist_func(t1, t2)
@@ -282,19 +327,39 @@ def test_spike_sync_matrix():
def test_regression_spiky():
+ # standard example
+ st1 = SpikeTrain(np.arange(100, 1201, 100), 1300)
+ st2 = SpikeTrain(np.arange(100, 1201, 110), 1300)
+
+ isi_dist = spk.isi_distance(st1, st2)
+ assert_almost_equal(isi_dist, 9.0909090909090939e-02, decimal=15)
+ isi_profile = spk.isi_profile(st1, st2)
+ assert_equal(isi_profile.y, 0.1/1.1 * np.ones_like(isi_profile.y))
+
+ spike_dist = spk.spike_distance(st1, st2)
+ assert_equal(spike_dist, 2.1105878248735391e-01)
+
+ spike_sync = spk.spike_sync(st1, st2)
+ assert_equal(spike_sync, 8.6956521739130432e-01)
+
+ # multivariate check
+
spike_trains = spk.load_spike_trains_from_txt("test/PySpike_testdata.txt",
(0.0, 4000.0))
- isi_profile = spk.isi_profile_multi(spike_trains)
- isi_dist = isi_profile.avrg()
- print(isi_dist)
+ isi_dist = spk.isi_distance_multi(spike_trains)
# get the full precision from SPIKY
- # assert_equal(isi_dist, 0.1832)
+ assert_almost_equal(isi_dist, 0.17051816816999129656, decimal=15)
spike_profile = spk.spike_profile_multi(spike_trains)
- spike_dist = spike_profile.avrg()
- print(spike_dist)
+ assert_equal(len(spike_profile.y1)+len(spike_profile.y2), 1252)
+
+ spike_dist = spk.spike_distance_multi(spike_trains)
+ # get the full precision from SPIKY
+ assert_almost_equal(spike_dist, 2.4432433330596512e-01, decimal=15)
+
+ spike_sync = spk.spike_sync_multi(spike_trains)
# get the full precision from SPIKY
- # assert_equal(spike_dist, 0.2445)
+ assert_equal(spike_sync, 0.7183531505298066)
def test_multi_variate_subsets():
@@ -319,5 +384,12 @@ def test_multi_variate_subsets():
if __name__ == "__main__":
test_isi()
test_spike()
+ test_spike_sync()
test_multi_isi()
test_multi_spike()
+ test_multi_spike_sync()
+ test_isi_matrix()
+ test_spike_matrix()
+ test_spike_sync_matrix()
+ test_regression_spiky()
+ test_multi_variate_subsets()
diff --git a/test/test_spikes.py b/test/test_spikes.py
index b12099e..d4eb131 100644
--- a/test/test_spikes.py
+++ b/test/test_spikes.py
@@ -14,38 +14,22 @@ from numpy.testing import assert_equal
import pyspike as spk
-def test_auxiliary_spikes():
- t = np.array([0.2, 0.4, 0.6, 0.7])
- t_aux = spk.add_auxiliary_spikes(t, time_interval=(0.1, 1.0))
- assert_equal(t_aux, [0.1, 0.2, 0.4, 0.6, 0.7, 1.0])
- t_aux = spk.add_auxiliary_spikes(t_aux, time_interval=(0.0, 1.0))
- assert_equal(t_aux, [0.0, 0.1, 0.2, 0.4, 0.6, 0.7, 1.0])
-
-
def test_load_from_txt():
spike_trains = spk.load_spike_trains_from_txt("test/PySpike_testdata.txt",
- time_interval=(0, 4000))
+ edges=(0, 4000))
assert len(spike_trains) == 40
# check the first spike train
- spike_times = [0, 64.886, 305.81, 696, 937.77, 1059.7, 1322.2, 1576.1,
+ spike_times = [64.886, 305.81, 696, 937.77, 1059.7, 1322.2, 1576.1,
1808.1, 2121.5, 2381.1, 2728.6, 2966.9, 3223.7, 3473.7,
- 3644.3, 3936.3, 4000]
- assert_equal(spike_times, spike_trains[0])
+ 3644.3, 3936.3]
+ assert_equal(spike_times, spike_trains[0].spikes)
# check auxiliary spikes
for spike_train in spike_trains:
- assert spike_train[0] == 0.0
- assert spike_train[-1] == 4000
+ assert spike_train.t_start == 0.0
+ assert spike_train.t_end == 4000
- # load without adding auxiliary spikes
- spike_trains2 = spk.load_spike_trains_from_txt("test/PySpike_testdata.txt",
- time_interval=None)
- assert len(spike_trains2) == 40
- # check auxiliary spikes
- for i in xrange(len(spike_trains)):
- assert len(spike_trains[i]) == len(spike_trains2[i])+2 # 2 spikes less
-
def check_merged_spikes(merged_spikes, spike_trains):
# create a flat array with all spike events
@@ -65,21 +49,23 @@ def check_merged_spikes(merged_spikes, spike_trains):
def test_merge_spike_trains():
# first load the data
spike_trains = spk.load_spike_trains_from_txt("test/PySpike_testdata.txt",
- time_interval=(0, 4000))
+ edges=(0, 4000))
- spikes = spk.merge_spike_trains([spike_trains[0], spike_trains[1]])
+ merged_spikes = spk.merge_spike_trains([spike_trains[0], spike_trains[1]])
# test if result is sorted
- assert((spikes == np.sort(spikes)).all())
+ assert((merged_spikes.spikes == np.sort(merged_spikes.spikes)).all())
# check merging
- check_merged_spikes(spikes, [spike_trains[0], spike_trains[1]])
+ check_merged_spikes(merged_spikes.spikes, [spike_trains[0].spikes,
+ spike_trains[1].spikes])
- spikes = spk.merge_spike_trains(spike_trains)
+ merged_spikes = spk.merge_spike_trains(spike_trains)
# test if result is sorted
- assert((spikes == np.sort(spikes)).all())
+ assert((merged_spikes.spikes == np.sort(merged_spikes.spikes)).all())
# check merging
- check_merged_spikes(spikes, spike_trains)
+ check_merged_spikes(merged_spikes.spikes,
+ [st.spikes for st in spike_trains])
+
if __name__ == "main":
- test_auxiliary_spikes()
test_load_from_txt()
test_merge_spike_trains()