Commit 84759da
Fixed documentation
1 parent c5f885a

4 files changed: 145 additions & 25 deletions

Lines changed: 135 additions & 0 deletions
@@ -0,0 +1,135 @@
# Discrete Classifiers

Unlike continuous classifiers that output a prediction for every window of EMG data, discrete classifiers are designed for recognizing transient, isolated gestures. These classifiers operate on variable-length templates (sequences of windows) and are well-suited for detecting distinct movements like finger snaps, taps, or quick hand gestures.

Discrete classifiers expect input data in a different format than continuous classifiers:

- **Continuous classifiers**: Operate on individual windows of shape `(n_windows, n_features)`.
- **Discrete classifiers**: Operate on templates (sequences of windows) where each template has shape `(n_frames, n_features)` and can vary in length.
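The difference between the two layouts can be sketched with plain Python lists (shapes only; the feature values below are placeholders, not real EMG):

```Python
# Continuous layout: one flat array of windows, shape (n_windows, n_features).
n_features = 4
continuous = [[0.0] * n_features for _ in range(10)]  # 10 windows

# Discrete layout: a list of templates; each template is its own sequence of
# frames with shape (n_frames, n_features), and n_frames may differ per gesture.
discrete = [
    [[0.0] * n_features for _ in range(5)],   # a short gesture: 5 frames
    [[0.0] * n_features for _ in range(12)],  # a longer gesture: 12 frames
]

print(len(continuous), [len(t) for t in discrete])  # 10 [5, 12]
```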

To prepare data for discrete classifiers, use the `discrete=True` parameter when calling `parse_windows()` on your `OfflineDataHandler`:

```Python
from libemg.data_handler import OfflineDataHandler

odh = OfflineDataHandler()
odh.get_data('./data/', regex_filters)  # regex_filters: your filters for locating data files
windows, metadata = odh.parse_windows(window_size=50, window_increment=10, discrete=True)
# windows is now a list of templates, one per file/rep
```

For feature extraction with discrete data, use the `discrete=True` parameter:

```Python
from libemg.feature_extractor import FeatureExtractor

fe = FeatureExtractor()
features = fe.extract_features(['MAV', 'ZC', 'SSC', 'WL'], windows, discrete=True, array=True)
# features is a list of arrays, one per template
```

## Majority Vote LDA (MVLDA)

A classifier that applies Linear Discriminant Analysis (LDA) to each frame within a template and uses majority voting to determine the final prediction. This approach is simple yet effective for discrete gesture recognition.

```Python
from libemg._discrete_models import MVLDA

model = MVLDA()
model.fit(train_features, train_labels)
predictions = model.predict(test_features)
probabilities = model.predict_proba(test_features)
```
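The voting step itself is simple: the per-frame LDA predictions for a template are tallied, and the most common class wins. A minimal standalone sketch of that tally (an illustration of the idea, not LibEMG's internal implementation):

```Python
from collections import Counter

def majority_vote(frame_predictions):
    """Return the most common label among per-frame predictions."""
    # Counter.most_common(1) returns [(label, count)] for the top label.
    return Counter(frame_predictions).most_common(1)[0][0]

# Per-frame LDA outputs for one template: class 2 wins 3 votes to 2.
print(majority_vote([2, 1, 2, 2, 1]))  # 2
```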

## Dynamic Time Warping Classifier (DTWClassifier)

A template-matching classifier that uses Dynamic Time Warping (DTW) distance to compare test samples against stored training templates. DTW is particularly useful when gestures may vary in speed or duration, as it can align sequences with different temporal characteristics.

```Python
from libemg._discrete_models import DTWClassifier

model = DTWClassifier(n_neighbors=3)
model.fit(train_features, train_labels)
predictions = model.predict(test_features)
probabilities = model.predict_proba(test_features)
```
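To build intuition for why DTW tolerates speed differences, here is a minimal sketch of the classic DTW dynamic program on 1-D sequences (real EMG templates are multi-dimensional, but the recurrence is the same):

```Python
def dtw_distance(a, b):
    """Dynamic Time Warping distance between two 1-D sequences.

    Classic O(len(a) * len(b)) dynamic program: cost[i][j] is the best
    alignment cost of a[:i] against b[:j].
    """
    inf = float('inf')
    cost = [[inf] * (len(b) + 1) for _ in range(len(a) + 1)]
    cost[0][0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            d = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of: match (diagonal), or a warp step.
            cost[i][j] = d + min(cost[i - 1][j - 1], cost[i - 1][j], cost[i][j - 1])
    return cost[len(a)][len(b)]

# A stretched copy of a sequence aligns at zero cost, despite different lengths.
print(dtw_distance([1, 2, 3], [1, 1, 2, 2, 3]))  # 0.0
```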

The `n_neighbors` parameter controls how many nearest templates are used for voting (k-nearest neighbors with DTW distance).
## Pretrained Myo Cross-User Model (MyoCrossUserPretrained)

A pretrained deep learning model for cross-user discrete gesture recognition using the Myo armband. This model uses a convolutional-recurrent architecture and recognizes 6 gestures: Nothing, Close, Flexion, Extension, Open, and Pinch.

```Python
from libemg._discrete_models import MyoCrossUserPretrained

model = MyoCrossUserPretrained()
# Model is automatically downloaded on first use

# The model provides recommended parameters for OnlineDiscreteClassifier
print(model.args)
# {'window_size': 10, 'window_increment': 5, 'null_label': 0, ...}

predictions = model.predict(test_data)
probabilities = model.predict_proba(test_data)
```

This model expects raw windowed EMG data (not extracted features) with shape `(batch_size, seq_len, n_channels, n_samples)`.
## Online Discrete Classification

For real-time discrete gesture recognition, use the `OnlineDiscreteClassifier`:

```Python
from libemg.emg_predictor import OnlineDiscreteClassifier
from libemg._discrete_models import MyoCrossUserPretrained

# Load pretrained model
model = MyoCrossUserPretrained()

# Create online classifier
classifier = OnlineDiscreteClassifier(
    odh=online_data_handler,
    model=model,
    window_size=model.args['window_size'],
    window_increment=model.args['window_increment'],
    null_label=model.args['null_label'],
    feature_list=model.args['feature_list'],  # None for raw data
    template_size=model.args['template_size'],
    min_template_size=model.args['min_template_size'],
    gesture_mapping=model.args['gesture_mapping'],
    buffer_size=model.args['buffer_size'],
    rejection_threshold=0.5,
    debug=True
)

# Start recognition loop
classifier.run()
```
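One of the parameters above, `rejection_threshold`, gates low-confidence predictions: if the top class probability does not reach the threshold, the prediction is rejected and the null (no gesture) class is reported instead. A minimal sketch of that gating logic (an illustration only, not LibEMG's internal code; the null label of 0 follows the pretrained model's args):

```Python
def gate_prediction(probabilities, rejection_threshold=0.5, null_label=0):
    """Return the argmax class if confident enough, otherwise the null label."""
    best = max(range(len(probabilities)), key=lambda i: probabilities[i])
    if probabilities[best] < rejection_threshold:
        return null_label  # too uncertain: reject the prediction
    return best

print(gate_prediction([0.1, 0.7, 0.2]))    # 1 (accepted: 0.7 >= 0.5)
print(gate_prediction([0.4, 0.35, 0.25]))  # 0 (rejected: 0.4 < 0.5)
```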

## Creating Custom Discrete Classifiers

Any custom discrete classifier should implement the following methods to work with LibEMG:

- `fit(x, y)`: Train the model where `x` is a list of templates and `y` is the corresponding labels.
- `predict(x)`: Return predicted class labels for a list of templates.
- `predict_proba(x)`: Return predicted class probabilities for a list of templates.
```Python
import numpy as np

class CustomDiscreteClassifier:
    def __init__(self):
        self.classes_ = None

    def fit(self, x, y):
        # x: list of templates (each template is an array of frames)
        # y: labels for each template
        self.classes_ = np.unique(y)
        # ... training logic

    def predict(self, x):
        # Return array of predictions
        pass

    def predict_proba(self, x):
        # Return array of shape (n_samples, n_classes)
        pass
```
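As a concrete (toy) example of this interface, here is a nearest-mean sketch that summarizes each template by its per-feature mean and predicts the class of the closest training summary. It is a hypothetical illustration of the `fit`/`predict` contract, not a model shipped with LibEMG; plain lists are used instead of NumPy arrays to keep it self-contained, and `predict_proba` is omitted for brevity:

```Python
import math

class NearestMeanDiscreteClassifier:
    """Toy discrete classifier: compare templates by their mean feature vector."""

    def fit(self, x, y):
        # x: list of templates, each a list of frames (lists of floats)
        # Store one (mean vector, label) summary per training template.
        self.summaries_ = [(self._mean(t), label) for t, label in zip(x, y)]
        self.classes_ = sorted(set(y))

    def predict(self, x):
        # Label each test template with its nearest training summary's label.
        return [min(self.summaries_,
                    key=lambda s: self._dist(s[0], self._mean(t)))[1]
                for t in x]

    @staticmethod
    def _mean(template):
        # Average each feature across the template's frames.
        n = len(template)
        return [sum(frame[i] for frame in template) / n
                for i in range(len(template[0]))]

    @staticmethod
    def _dist(a, b):
        return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))

clf = NearestMeanDiscreteClassifier()
clf.fit([[[0.0, 0.0], [0.2, 0.0]], [[1.0, 1.0], [0.8, 1.0]]], [0, 1])
print(clf.predict([[[0.1, 0.1]]]))  # [0]
```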

docs/source/documentation/prediction/prediction.rst

Lines changed: 3 additions & 0 deletions
@@ -6,6 +6,9 @@ EMG Prediction
 .. include:: classification_doc.md
    :parser: myst_parser.sphinx_

+.. include:: discrete_classification_doc.md
+   :parser: myst_parser.sphinx_
+
 .. include:: regression_doc.md
    :parser: myst_parser.sphinx_

docs/source/documentation/prediction/predictors.md

Lines changed: 1 addition & 1 deletion
@@ -2,7 +2,7 @@

 After recording, processing, and extracting features from a window of EMG data, it is passed to a machine learning algorithm for prediction. These control systems have evolved in the prosthetics community for continuously predicting muscular contractions for enabling prosthesis control. Therefore, they are primarily limited to recognizing static contractions (e.g., hand open/close and wrist flexion/extension) as they have no temporal awareness. Currently, this is the form of recognition supported by LibEMG and is an initial step to explore EMG as an interaction opportunity for general-purpose use. This section highlights the machine-learning strategies that are part of `LibEMG`'s pipeline.

-There are two types of models supported in `LibEMG`: classifiers and regressors. Classifiers output a discrete motion class for each window, whereas regressors output a continuous prediction along a degree of freedom. For both classifiers and regressors, `LibEMG` supports statistical models as well as deep learning models. Additionally, a number of post-processing methods (i.e., techniques to improve performance after prediction) are supported for all models.
+There are three types of models supported in `LibEMG`: classifiers, regressors, and discrete classifiers. Classifiers output a motion class for each window of EMG data, whereas regressors output a continuous prediction along a degree of freedom. Discrete classifiers are designed for recognizing transient, isolated gestures and operate on variable-length templates rather than individual windows. For classifiers and regressors, `LibEMG` supports statistical models as well as deep learning models. Additionally, a number of post-processing methods (i.e., techniques to improve performance after prediction) are supported for all models.

 ## Statistical Models

libemg/feature_extractor.py

Lines changed: 6 additions & 24 deletions
@@ -157,6 +157,7 @@ def extract_features(self, feature_list, windows, feature_dic={}, array=False, n
         discrete: bool (optional), default=False
             If True, windows is expected to be a list of templates (from parse_windows with discrete=True).
             Features will be extracted for each template separately and returned as a list.
+            Note: Normalization is not currently supported in discrete mode.

         Returns
         ----------
@@ -165,37 +166,18 @@ def extract_features(self, feature_list, windows, feature_dic={}, array=False, n
             of the computed features for each window. If array=True, returns a np.ndarray instead.
             When discrete=True: A list of dictionaries/arrays (one per template). If array=True, each
             element is a np.ndarray.
-        tuple (features, StandardScaler)
-            If normalize=True, returns a tuple of (features, scaler). When discrete=False, features is a
-            np.ndarray. When discrete=True, features is a list of np.ndarrays. The scaler should be passed
-            into the feature extractor for test data.
+        tuple (np.ndarray, StandardScaler)
+            If normalize=True (only supported when discrete=False), returns a tuple of (features array, scaler).
+            The scaler should be passed into the feature extractor for test data.
         """
         if discrete:
+            if normalize:
+                raise ValueError("Normalization is not currently supported in discrete mode.")
             # Handle discrete mode: windows is a list of templates
             all_features = []
             for template in windows:
                 template_features = self._extract_features_single(feature_list, template, feature_dic, array, fix_feature_errors)
                 all_features.append(template_features)
-
-            if normalize:
-                # For normalization in discrete mode, we need to flatten, normalize, then restructure
-                if not array:
-                    all_features = [self._format_data(f) for f in all_features]
-                combined = np.vstack(all_features)
-                if not normalizer:
-                    scaler = StandardScaler()
-                    combined = scaler.fit_transform(combined)
-                else:
-                    scaler = normalizer
-                    combined = normalizer.transform(combined)
-                # Split back into list based on original sizes
-                result = []
-                idx = 0
-                for template in windows:
-                    n_windows = template.shape[0]
-                    result.append(combined[idx:idx+n_windows])
-                    idx += n_windows
-                return result, scaler
             return all_features

         return self._extract_features_single(feature_list, windows, feature_dic, array, fix_feature_errors, normalize, normalizer)
