# PySpike

[![Build Status](https://travis-ci.org/mariomulansky/PySpike.svg?branch=master)](https://travis-ci.org/mariomulansky/PySpike)

PySpike is a Python library for the numerical analysis of spike train similarity. 
Its core functionality is the implementation of the bivariate [ISI](http://www.scholarpedia.org/article/Measures_of_spike_train_synchrony#ISI-distance) [1] and [SPIKE](http://www.scholarpedia.org/article/SPIKE-distance) [2] distance. 
Additionally, it provides functions to compute multi-variate SPIKE and ISI distances, as well as averaging and general spike train processing.
All computationally intensive parts are implemented in C via [cython](http://www.cython.org) to reach competitive performance (a factor of 100-200 over plain Python).

All source code is published under the [BSD License](http://opensource.org/licenses/BSD-2-Clause).

>[1] Kreuz T, Haas JS, Morelli A, Abarbanel HDI, Politi A, *Measuring spike train synchrony.* J Neurosci Methods 165, 151 (2007)

>[2] Kreuz T, Chicharro D, Houghton C, Andrzejak RG, Mormann F, *Monitoring spike train synchrony.* J Neurophysiol 109, 1457 (2013)

## Requirements and Installation

To use PySpike you need Python installed with the following additional packages:

- numpy
- scipy
- matplotlib
- cython
- nose (for running the tests via `nosetests`)

In particular, make sure that [cython](http://www.cython.org) is configured properly and able to locate a C compiler.

To install PySpike, simply download the source, e.g. from GitHub, and run the `setup.py` script:

    git clone https://github.com/mariomulansky/PySpike.git
    cd PySpike
    python setup.py build_ext --inplace

Then you can run the tests using the `nosetests` command:

    nosetests

Finally, make PySpike's installation folder known to Python so that you can import pyspike in your own projects.
To do so, add your `/path/to/PySpike` to the `$PYTHONPATH` environment variable.
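
Alternatively, as a quick check, you can extend the module search path from within Python. This is only a sketch; the path below is a placeholder for the actual location of your PySpike clone:

    import sys
    sys.path.append("/path/to/PySpike")   # adjust to the location of your PySpike clone

    import pyspike as spk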

## Spike trains

In PySpike, spike trains are represented by one-dimensional numpy arrays containing the sequence of spike times as double values.
The following code creates such a spike train with some arbitrary spike times:
    
    import numpy as np

    spike_train = np.array([0.1, 0.3, 0.45, 0.6, 0.9])

### Loading from text files

Typically, spike train data is loaded into PySpike from data files.
The most straightforward data files are text files where each line represents one spike train given as a sequence of spike times.
An example file containing several spike trains is [PySpike_testdata.txt](https://github.com/mariomulansky/PySpike/blob/master/examples/PySpike_testdata.txt).
To quickly obtain spike trains from such files, PySpike provides the function `load_spike_trains_from_txt`.

    import numpy as np
    import pyspike as spk
    
    spike_trains = spk.load_spike_trains_from_txt("PySpike_testdata.txt", 
                                                  time_interval=(0,4000))

This function expects the name of the data file as its first parameter; additionally, the time interval of the spike train measurement can be provided as a pair of start and end times.
If the time interval is provided (`time_interval is not None`), auxiliary spikes at the start- and end-time of the interval are added to the spike trains.
Furthermore, the spike trains are ordered via `np.sort` (disable this feature by providing `sort=False` as a parameter to the load function).
As a result, `load_spike_trains_from_txt` returns a *list of arrays* containing the spike trains from the text file.
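
For example, a quick way to inspect the result is the following minimal sketch, assuming the `spike_trains` list from the call above:

    # each entry of spike_trains is a numpy array of spike times
    for i, spike_train in enumerate(spike_trains):
        print("spike train %d: %d spikes (including auxiliary spikes)" % (i, len(spike_train)))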

If you load spike trains yourself, e.g. from data files with a different structure, you can use the helper function `add_auxiliary_spikes` to add the auxiliary spikes at the beginning and end of the observation interval.
Both the ISI and the SPIKE distance computation require the presence of auxiliary spikes, so make sure you have those in your spike trains:

    spike_train = spk.add_auxiliary_spikes(spike_train, (T_start, T_end))
    # if you provide only a single value, it is interpreted as T_end, while T_start=0
    spike_train = spk.add_auxiliary_spikes(spike_train, T_end)
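
For instance, if your spike times are stored as a single column in a plain text file, the preparation might look like the following sketch (the file name `my_spikes.txt` and the interval `(0, 4000)` are placeholders for your own data):

    import numpy as np
    import pyspike as spk

    # load a single column of spike times and prepare it for PySpike
    spike_train = np.loadtxt("my_spikes.txt")                        # hypothetical file name
    spike_train = np.sort(spike_train)                               # ensure temporal order
    spike_train = spk.add_auxiliary_spikes(spike_train, (0, 4000))   # your observation interval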

## Computing bi-variate distances

----------------------
**Important note:**

>Spike trains are expected to be *ordered sequences*! 
>For performance reasons, the PySpike distance functions do not check if the spike trains provided are indeed ordered.
>Make sure that all your spike trains are ordered.
>If in doubt, use `spike_train = np.sort(spike_train)` to obtain a correctly ordered spike train.
>
>Furthermore, the spike trains should have auxiliary spikes at the beginning and end of the observation interval.
>You can ensure this by providing the `time_interval` in the `load_spike_trains_from_txt` function, or by calling `add_auxiliary_spikes` for your spike trains.
>The spike trains must have *the same* observation interval!

----------------------
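
If in doubt whether your data meets these requirements, a small sanity check along these lines can help (a sketch using only numpy, assuming the `spike_trains` list from above):

    import numpy as np

    # verify that every spike train is an ordered sequence
    for spike_train in spike_trains:
        assert np.all(np.diff(spike_train) >= 0), "spike train is not ordered - use np.sort"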

### ISI-distance

The following code loads some example spike trains, computes the dissimilarity profile of the ISI-distance of the first two spike trains, and plots it with matplotlib:

    import matplotlib.pyplot as plt
    import pyspike as spk
    
    spike_trains = spk.load_spike_trains_from_txt("PySpike_testdata.txt",
                                                  time_interval=(0, 4000))
    isi_profile = spk.isi_profile(spike_trains[0], spike_trains[1])
    x, y = isi_profile.get_plottable_data()
    plt.plot(x, y, '--k')
    print("ISI distance: %.8f" % isi_profil.avrg())
    plt.show()

The ISI-profile is a piece-wise constant function, hence the function `isi_profile` returns an instance of the `PieceWiseConstFunc` class.
As shown above, this class allows you to obtain arrays that can be used to plot the function with `plt.plot`, but also to compute its average via `avrg()`, which amounts to the final scalar ISI-distance.
If you are only interested in the scalar ISI-distance and not the profile, you can simply use:

     isi_dist = spk.isi_distance(spike_trains[0], spike_trains[1])

Furthermore, PySpike provides the `average_profile` function, which computes the average profile of a given list of `PieceWiseConstFunc` instances, for example ISI profiles as computed above:

    avrg_profile = spk.average_profile([isi_profile1, isi_profile2])
    x, y = avrg_profile.get_plottable_data()
    plt.plot(x, y, label="Average profile")

Note the difference between the `average_profile` function, which returns a `PieceWiseConstFunc` (or `PieceWiseLinFunc`, see below), and the `avrg` member function above, which computes the integral over the time profile.
So to obtain the overall average ISI-distance of a list of ISI profiles, you can first compute the average profile using `average_profile` and then use

    avrg_isi = avrg_profile.avrg()

to obtain the final, scalar average ISI distance of the whole set (see also "Computing multi-variate distance" below).
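
Putting the pieces together, a short sketch (assuming `spike_trains` was loaded as above) might look like this:

    # ISI profiles of the first spike train versus the second and third one
    profiles = [spk.isi_profile(spike_trains[0], spike_trains[i]) for i in (1, 2)]
    avrg_profile = spk.average_profile(profiles)
    print("Overall average ISI distance: %.8f" % avrg_profile.avrg())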

## Computing multi-variate distances


## Plotting


## Averaging