# Discrete Classifiers

Unlike continuous classifiers, which output a prediction for every window of EMG data, discrete classifiers are designed to recognize transient, isolated gestures. They operate on variable-length templates (sequences of windows) and are well suited to detecting distinct movements such as finger snaps, taps, or quick hand gestures.

Discrete classifiers expect input data in a different format than continuous classifiers:
- **Continuous classifiers**: Operate on individual windows of shape `(n_windows, n_features)`.
- **Discrete classifiers**: Operate on templates (sequences of windows), where each template has shape `(n_frames, n_features)` and can vary in length.
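
The layout difference can be illustrated with plain NumPy arrays (the sizes here are arbitrary, chosen only for illustration):

```Python
import numpy as np

# Continuous: one 2-D array, every row is the feature vector for one window
continuous = np.zeros((200, 32))  # (n_windows, n_features)

# Discrete: a list of templates, each its own (n_frames, n_features) array,
# and templates may differ in length
templates = [np.zeros((15, 32)), np.zeros((22, 32)), np.zeros((9, 32))]

print(continuous.shape)                  # (200, 32)
print([t.shape[0] for t in templates])   # [15, 22, 9]
```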
| 8 | + |
| 9 | +To prepare data for discrete classifiers, use the `discrete=True` parameter when calling `parse_windows()` on your `OfflineDataHandler`: |
| 10 | + |
| 11 | +```Python |
| 12 | +from libemg.data_handler import OfflineDataHandler |
| 13 | + |
| 14 | +odh = OfflineDataHandler() |
| 15 | +odh.get_data('./data/', regex_filters) |
| 16 | +windows, metadata = odh.parse_windows(window_size=50, window_increment=10, discrete=True) |
| 17 | +# windows is now a list of templates, one per file/rep |
| 18 | +``` |

For feature extraction with discrete data, use the `discrete=True` parameter:

```Python
from libemg.feature_extractor import FeatureExtractor

fe = FeatureExtractor()
features = fe.extract_features(['MAV', 'ZC', 'SSC', 'WL'], windows, discrete=True, array=True)
# features is a list of arrays, one per template
```

## Majority Vote LDA (MVLDA)

A classifier that applies Linear Discriminant Analysis (LDA) to each frame within a template and uses majority voting to determine the final prediction. This approach is simple yet effective for discrete gesture recognition.

```Python
from libemg._discrete_models import MVLDA

model = MVLDA()
model.fit(train_features, train_labels)
predictions = model.predict(test_features)
probabilities = model.predict_proba(test_features)
```
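
The majority-vote idea itself is easy to sketch. The following is a conceptual illustration using scikit-learn's LDA on synthetic data, not LibEMG's actual implementation:

```Python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def majority_vote_predict(lda, templates):
    """Predict one label per template by voting over per-frame LDA predictions."""
    preds = []
    for template in templates:               # template: (n_frames, n_features)
        frame_preds = lda.predict(template)  # one prediction per frame
        values, counts = np.unique(frame_preds, return_counts=True)
        preds.append(values[np.argmax(counts)])
    return np.array(preds)

# Toy training data: two well-separated classes of 4-D feature "frames"
rng = np.random.default_rng(0)
x0 = rng.normal(0.0, 0.1, size=(30, 4))  # class 0 frames
x1 = rng.normal(5.0, 0.1, size=(30, 4))  # class 1 frames
lda = LinearDiscriminantAnalysis().fit(np.vstack([x0, x1]), [0] * 30 + [1] * 30)

# Two 7-frame test templates, one drawn from each class
test_templates = [rng.normal(0.0, 0.1, size=(7, 4)),
                  rng.normal(5.0, 0.1, size=(7, 4))]
print(majority_vote_predict(lda, test_templates))  # [0 1]
```

Even if a few frames near a gesture's onset or offset are misclassified, the vote across the whole template usually recovers the correct label.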

## Dynamic Time Warping Classifier (DTWClassifier)

A template-matching classifier that uses Dynamic Time Warping (DTW) distance to compare test samples against stored training templates. DTW is particularly useful when gestures may vary in speed or duration, as it can align sequences with different temporal characteristics.

```Python
from libemg._discrete_models import DTWClassifier

model = DTWClassifier(n_neighbors=3)
model.fit(train_features, train_labels)
predictions = model.predict(test_features)
probabilities = model.predict_proba(test_features)
```

The `n_neighbors` parameter controls how many nearest templates are used for voting (k-nearest neighbors with DTW distance).
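
For intuition about why DTW tolerates differences in speed, here is a minimal textbook DTW distance (the classic dynamic-programming recurrence with Euclidean frame distance), not LibEMG's internal implementation:

```Python
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a) * len(b)) DTW with Euclidean frame distance."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # repeat a frame of b
                                 cost[i, j - 1],      # repeat a frame of a
                                 cost[i - 1, j - 1])  # advance both
    return cost[n, m]

# A sequence compared against a time-stretched copy of itself aligns cheaply
fast = np.array([[0.0], [1.0], [2.0], [1.0], [0.0]])
slow = np.array([[0.0], [0.0], [1.0], [1.0], [2.0], [1.0], [1.0], [0.0]])
print(dtw_distance(fast, slow))  # 0.0: DTW absorbs the difference in speed
```

Because one frame of the shorter sequence may align with several frames of the longer one, the same gesture performed quickly or slowly still yields a small distance.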

## Pretrained Myo Cross-User Model (MyoCrossUserPretrained)

A pretrained deep learning model for cross-user discrete gesture recognition using the Myo armband. This model uses a convolutional-recurrent architecture and recognizes 6 gestures: Nothing, Close, Flexion, Extension, Open, and Pinch.

```Python
from libemg._discrete_models import MyoCrossUserPretrained

model = MyoCrossUserPretrained()
# Model is automatically downloaded on first use

# The model provides recommended parameters for OnlineDiscreteClassifier
print(model.args)
# {'window_size': 10, 'window_increment': 5, 'null_label': 0, ...}

predictions = model.predict(test_data)
probabilities = model.predict_proba(test_data)
```

This model expects raw windowed EMG data (not extracted features) with shape `(batch_size, seq_len, n_channels, n_samples)`.
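
As a sanity check on that layout, a dummy batch can be built with NumPy. The Myo armband has 8 EMG channels, and `n_samples` matches the recommended `window_size` of 10; the batch size and sequence length below are arbitrary illustration values:

```Python
import numpy as np

# Hypothetical batch: 2 templates, 20 windows each, 8 channels, 10 samples/window
batch = np.zeros((2, 20, 8, 10), dtype=np.float32)
print(batch.shape)  # (2, 20, 8, 10) -> (batch_size, seq_len, n_channels, n_samples)
```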

## Online Discrete Classification

For real-time discrete gesture recognition, use the `OnlineDiscreteClassifier`:

```Python
from libemg.emg_predictor import OnlineDiscreteClassifier
from libemg._discrete_models import MyoCrossUserPretrained

# Load pretrained model
model = MyoCrossUserPretrained()

# Create online classifier
classifier = OnlineDiscreteClassifier(
    odh=online_data_handler,
    model=model,
    window_size=model.args['window_size'],
    window_increment=model.args['window_increment'],
    null_label=model.args['null_label'],
    feature_list=model.args['feature_list'],  # None for raw data
    template_size=model.args['template_size'],
    min_template_size=model.args['min_template_size'],
    gesture_mapping=model.args['gesture_mapping'],
    buffer_size=model.args['buffer_size'],
    rejection_threshold=0.5,
    debug=True
)

# Start recognition loop
classifier.run()
```

## Creating Custom Discrete Classifiers

Any custom discrete classifier should implement the following methods to work with LibEMG:

- `fit(x, y)`: Train the model, where `x` is a list of templates and `y` contains the corresponding labels.
- `predict(x)`: Return predicted class labels for a list of templates.
- `predict_proba(x)`: Return predicted class probabilities for a list of templates.

```Python
import numpy as np

class CustomDiscreteClassifier:
    """Minimal working example: nearest-mean-template classification."""

    def __init__(self):
        self.classes_ = None
        self.centroids_ = None

    def fit(self, x, y):
        # x: list of templates (each template is an (n_frames, n_features) array)
        # y: labels for each template
        y = np.asarray(y)
        self.classes_ = np.unique(y)
        # Summarize each template by its mean feature vector, then average
        # those vectors per class to form one centroid per class
        means = np.array([template.mean(axis=0) for template in x])
        self.centroids_ = np.array([means[y == c].mean(axis=0) for c in self.classes_])

    def predict(self, x):
        # Return an array with one predicted label per template
        return self.classes_[np.argmax(self.predict_proba(x), axis=1)]

    def predict_proba(self, x):
        # Return an array of shape (n_templates, n_classes): softmax over
        # negative distances to each class centroid
        means = np.array([template.mean(axis=0) for template in x])
        distances = np.linalg.norm(means[:, None, :] - self.centroids_[None, :, :], axis=2)
        scores = np.exp(-distances)
        return scores / scores.sum(axis=1, keepdims=True)
```