Demo "BeForData": Behavioral Force Data¶

Install the package

pip install befordata

Create a BeForRecord from a CSV file¶

In [1]:
import pandas as pd
from befordata import BeForRecord

# 1. read the CSV file with Pandas
fdata = pd.read_csv("demo_force_data.csv")

# 2. convert to a BeForRecord
befor_data = BeForRecord(fdata, sampling_rate = 1000)
befor_data
Out[1]:
BeForRecord
  sampling_rate: 1000, n sessions: 1
  columns: 'S1_time', 'S1_Fx', 'S1_trigger1', 'S2_Fx'
  time_column: 
  metadata
         S1_time   S1_Fx  S1_trigger1   S2_Fx
0         601676 -0.1717       0.0000 -0.1143
1         601678 -0.1719       0.0000 -0.1136
2         601679 -0.1719       0.0000 -0.1133
3         601680 -0.1718       0.0000 -0.1209
4         601681 -0.1697       0.0000 -0.1020
...          ...     ...          ...     ...
2334873  3120147  0.0991       0.9656 -0.3851
2334874  3120147  0.1034       0.9650 -0.3789
2334875  3120149  0.1013       0.9653 -0.3704
2334876  3120149  0.1013       0.9653 -0.3875
2334877  3120151  0.0992       0.9660 -0.3883

[2334878 rows x 4 columns]
In [2]:
# add crucial additional information
befor_data = BeForRecord(fdata, sampling_rate = 1000, 
                         columns=["S1_Fx", "S2_Fx"], time_column = "S1_time")

befor_data
Out[2]:
BeForRecord
  sampling_rate: 1000, n sessions: 1
  columns: 'S1_Fx', 'S2_Fx'
  time_column: S1_time
  metadata
         S1_time   S1_Fx  S1_trigger1   S2_Fx
0         601676 -0.1717       0.0000 -0.1143
1         601678 -0.1719       0.0000 -0.1136
2         601679 -0.1719       0.0000 -0.1133
3         601680 -0.1718       0.0000 -0.1209
4         601681 -0.1697       0.0000 -0.1020
...          ...     ...          ...     ...
2334873  3120147  0.0991       0.9656 -0.3851
2334874  3120147  0.1034       0.9650 -0.3789
2334875  3120149  0.1013       0.9653 -0.3704
2334876  3120149  0.1013       0.9653 -0.3875
2334877  3120151  0.0992       0.9660 -0.3883

[2334878 rows x 4 columns]
In [3]:
# add metadata as a dict
befor_data.meta = {"Exp": "my experiment"}
befor_data
Out[3]:
BeForRecord
  sampling_rate: 1000, n sessions: 1
  columns: 'S1_Fx', 'S2_Fx'
  time_column: S1_time
  metadata
  - Exp: my experiment
         S1_time   S1_Fx  S1_trigger1   S2_Fx
0         601676 -0.1717       0.0000 -0.1143
1         601678 -0.1719       0.0000 -0.1136
2         601679 -0.1719       0.0000 -0.1133
3         601680 -0.1718       0.0000 -0.1209
4         601681 -0.1697       0.0000 -0.1020
...          ...     ...          ...     ...
2334873  3120147  0.0991       0.9656 -0.3851
2334874  3120147  0.1034       0.9650 -0.3789
2334875  3120149  0.1013       0.9653 -0.3704
2334876  3120149  0.1013       0.9653 -0.3875
2334877  3120151  0.0992       0.9660 -0.3883

[2334878 rows x 4 columns]

Example: preprocessing experimental data with a design file¶

In [4]:
import pandas as pd
from befordata import BeForRecord, befor_tools

demo_data_file = "demo_force_data.csv"
demo_design_file = "demo_design_data.csv"

# 1. read the data and convert it to a BeForRecord
data = BeForRecord(pd.read_csv(demo_data_file), sampling_rate=1000, 
                   columns=["S1_Fx", "S2_Fx"], time_column = "S1_time", meta = {"Exp": "my experiment"})

# 2. detect pauses in the recording and treat the data as a recording with multiple sessions
data = befor_tools.detect_sessions(data, time_gap=2000)

# 3. filter the data (takes the different sessions into account)
data = befor_tools.butter_filter(data, cutoff=30, order=4, btype="lowpass")

# 4. read the design data (CSV)
design = pd.read_csv(demo_design_file)

# 5. get the sample indices of the trial onsets from the design (`udp_time`)
samples = data.find_samples_by_time(design.udp_time)

# 6. extract epochs
ep = data.extract_epochs("S1_Fx", samples,
                    n_samples = 5000, n_samples_before=100, design=design)

print(ep)
BeForEpochs
  n epochs: 391, n_samples: 5100
  sampling_rate: 1000, zero_sample: 100
  design: 'operand_1', 'operand_2', 'operator', 'correct_response', 'response', 'resp_number_digits', 'resp_num_category', 'subject_id', 'trial', 'udp_time'
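
Session detection (step 2 above) presumably works by looking for large gaps in the time column. A minimal sketch of that idea with plain NumPy — an illustration of the concept, not the library's actual implementation:

```python
import numpy as np

# a time column with one long pause (gap > 2000 ms) in the middle
times = np.concatenate([np.arange(0, 5000), np.arange(9000, 14000)])

# indices where the time gap exceeds the threshold mark session boundaries
gaps = np.flatnonzero(np.diff(times) > 2000) + 1
sessions = np.split(np.arange(len(times)), gaps)

print(len(sessions))            # 2
print([len(s) for s in sessions])  # [5000, 5000]
```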

Further Instructions¶

Pyarrow format¶

The Arrow/Feather file format is very fast as well as platform- and language-independent.

Loading a BeForRecord from a Feather file¶

In [5]:
from pyarrow.feather import read_table, write_feather

# 1. load files as arrow table
tbl = read_table("robot_test.feather")
# 2. Convert to BeForRecord
data = BeForRecord.from_arrow(tbl)

Saving to a Feather file¶

In [6]:
write_feather(data.to_arrow(), "demo.feather", compression="lz4", compression_level=6)

Epochs-based representation¶

Epochs are represented as a matrix, in which each row is one trial.

Example

  • Extracting epochs of length 2000 from `S1_Fx` (plus 10 samples before)
  • the 5 epochs start at the 5 "zero samples"
In [7]:
epochs = data.extract_epochs("S1_Fx",
            zero_samples = [1530, 6021, 16983, 28952, 67987],
            n_samples=2000,
            n_samples_before=10)
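
Conceptually, epoch extraction is row-wise slicing around each zero sample. A plain-NumPy sketch of the resulting matrix, using a fake signal instead of real force data:

```python
import numpy as np

# a fake 1-D force signal
force = np.sin(np.linspace(0, 20, 100_000))

zero_samples = [1530, 6021, 16983, 28952, 67987]
n_samples = 2000
n_before = 10

# each row is one epoch: n_before samples before the zero sample,
# followed by n_samples starting at the zero sample
epochs = np.stack([force[z - n_before : z + n_samples] for z in zero_samples])

print(epochs.shape)  # (5, 2010)
```

The zero samples end up in column `n_before` of the matrix, which is what the `zero_sample` field of the epochs object refers to.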