Article

Optimal Sensor Set for Decoding Motor Imagery from EEG †

Arnau Dillen, Fakhreddine Ghaffari, Olivier Romain, Bram Vanderborght, Uros Marusic, Sidney Grosprêtre, Ann Nowé, Romain Meeusen and Kevin De Pauw

1 Human Physiology and Sports Physiotherapy Research Group, Vrije Universiteit Brussel, 1050 Brussels, Belgium
2 Equipes Traitement de l’Information et Systèmes, UMR 8051, CY Cergy Paris Université, École Nationale Supérieure de l’Électronique et de ses Applications (ENSEA), Centre national de la recherche scientifique (CNRS), 95000 Cergy, France
3 Brussels Human Robotic Research Center (BruBotics), Vrije Universiteit Brussel, 1050 Brussels, Belgium
4 Robotics and Multibody Mechanics Research Group, Vrije Universiteit Brussel and imec, 1050 Brussels, Belgium
5 Institute for Kinesiology Research, Science and Research Centre Koper, 6000 Koper, Slovenia
6 Department of Health Sciences, Alma Mater Europaea-ECM, 2000 Maribor, Slovenia
7 Laboratory Culture Sport Health and Society (C3S-UR 4660), Sport and Performance Department, University of Franche-Comté, 25000 Besançon, France
8 Artificial Intelligence Research Group, Vrije Universiteit Brussel, 1050 Brussels, Belgium
* Authors to whom correspondence should be addressed.
† This paper is an extended version of our paper published in the 11th International IEEE EMBS Conference on Neural Engineering, Baltimore, MD, USA, 25–27 April 2023.
Appl. Sci. 2023, 13(7), 4438; https://doi.org/10.3390/app13074438
Submission received: 28 February 2023 / Revised: 28 March 2023 / Accepted: 29 March 2023 / Published: 31 March 2023
(This article belongs to the Special Issue Artificial Intelligence (AI) in Neuroscience)

Abstract

Brain–computer interfaces (BCIs) have the potential to enable individuals to interact with devices by detecting their intention from brain activity. A common approach to BCI is to decode movement intention from motor imagery (MI), the mental representation of an overt action. However, research-grade electroencephalogram (EEG) acquisition devices with a high number of sensors are typically necessary to achieve the spatial resolution required for reliable analysis. This entails high monetary and computational costs that make these approaches impractical for everyday use. This study investigates the trade-off between accuracy and complexity when decoding MI from fewer EEG sensors. Data were acquired from 15 healthy participants performing MI with a 64-channel research-grade EEG device. After performing a quality assessment by identifying visually evoked potentials, several decoding pipelines were trained on these data using different subsets of electrode locations. No significant differences (p = [0.18–0.91]) in the average decoding accuracy were found when using a reduced number of sensors. Therefore, decoding MI from a limited number of sensors is feasible. Hence, using commercial sensor devices for this purpose should be attainable, reducing both monetary and computational costs for BCI control.

1. Introduction

Brain–computer interfaces (BCIs) enable users to interact with devices through their thoughts by decoding a user’s intent from neural activity. Most commonly, neural activity is measured with the non-invasive electroencephalogram (EEG) due to its high temporal resolution, portability, and relatively low financial cost [1]. This type of interface could potentially replace or complement currently available interaction modalities such as speech recognition [2] or gesture control [3], among others. BCIs could enable individuals suffering from a condition that impairs speech or movement to operate devices that were previously inaccessible to them, or improve their user experience compared to classical interaction modalities [4].
Implementing and deploying BCI control systems in the real world comes with numerous challenges [5]. The neural activity of a person can be influenced by several external factors, such as distractions or electromagnetic fields from other electronic devices, which might obscure the patterns in brain activity related to the chosen control modality [6]. Additionally, EEG signals differ between persons, so a person’s neural data must be acquired before the artificial intelligence methods typically employed to decode neural activity can be trained [7].
The medical- or research-grade sensor devices currently necessary for reliable decoding, and the computational resources required to process their signals, can be prohibitively expensive for an individual without financial support. Making BCI technology more accessible is a crucial step toward applying BCI in real-world scenarios. One possible way to limit both computational and monetary costs is to use commercial devices that have a smaller number of sensors [8].
Hence, it is of interest to identify the minimal number of sensors necessary to reliably decode user intention. The locations of the sensors are equally important for EEG decoding: depending on the EEG modality used to generate commands, a specific subset of sensors will be necessary. One of these modalities is motor imagery (MI), i.e., the neural activity related to motor planning that occurs during the execution of a movement but also during its mere imagination [9,10]. Conventionally, MI is defined as “the mental simulation of an action without the corresponding motor output” [11].
There is currently a gap between BCI engineering research, which mostly focuses on finding methods that can reliably decode MI from existing EEG data, and neurophysiology studies, which investigate the activation of the brain during MI. Multiple brain areas are activated when performing MI. Most activity occurs around the primary motor cortex [12,13,14,15,16]; however, numerous studies have determined that other areas of the brain are also activated during MI, including the parietal cortex [13,16,17], the premotor cortex [11,13,14,16], and even the cerebellum [13,18].
Selecting the optimal set of EEG sensors is an important aspect of BCI research [19,20]. Most research on reducing the number of sensors attempts to identify the optimal user-specific subset for MI decoding [21,22] or uses statistical methods to reduce the number of channels [23,24]. Few studies have experimentally verified the influence of sensor choice on decoding accuracy [25], and those that have generally do not relate their results to neuroscientific research on the brain areas involved in MI.
This research aims to experimentally identify the minimal subset of electrodes and their optimal locations for the purpose of MI classification by training different machine learning (ML) models on MI data. From existing literature on the functional regions of the brain [26], the optimal locations would be expected around the motor cortex area of the brain. However, since other sources of neural activity can also play a role in motor planning, they might be essential to include in the EEG inputs used to decode MI [13,27].
Since an exhaustive search of all possible sensor subsets would be computationally unfeasible, we performed an experimental evaluation of decoding performance with specific sensor subsets, using common off-the-shelf EEG decoding methods. For this purpose, BCI decoding pipelines were trained on previously acquired MI data. The sensor subsets were selected based on their usage in commercially available EEG devices or their involvement in MI. To validate that the data are of sufficient quality and contain the information relevant to this analysis, visually evoked potentials (VEPs [28,29]) were first identified at pertinent sensor locations. VEPs were chosen for their strong activation and previous use in BCI [30], often in conjunction with MI [31].
This article expands the research presented at the 11th International IEEE EMBS Conference on Neural Engineering [32] with a detailed analysis of the EEG signals. This extended analysis includes the investigation of VEPs related to the visual cues used for MI data gathering. Additionally, the obtained MI decoding results are extended with a subject-level analysis of the effect of reducing the number of sensors. Furthermore, the implications of these expanded results in relation to known brain areas that are involved in MI are discussed.

2. Materials and Methods

2.1. Data Acquisition

2.1.1. Experimental Procedure

The EEG data used in this study were acquired between August and October 2022. Ethical approval for the study was given by the ethical commission of human sciences of Vrije Universiteit Brussel (ECHW_364.02). The subject group included 15 participants (14 male, 1 female) aged between 18 and 50 years (mean 28 ± 7). Participants had no prior experience with MI. After giving their informed consent, each participant took part in 5 sessions, which took place on separate days.
The first 2 sessions of the experiment were familiarization sessions, where the participant was alternately asked to perform and imagine movements without feedback. Before starting the first session, the concept of MI was introduced and the participant’s MI ability was assessed through the MIQ-3 questionnaire [33]. After filling in the questionnaire, participants were asked to choose whether they preferred kinesthetic MI (feeling the movement) or visual MI (visualizing the movement). MIQ-3 outcomes were not used in the current study as this falls outside of the scope of the article. Finally, at the end of the first session, the participants were once again asked if they would use kinesthetic or visual MI.
The second session started with a brief summary of the MI concepts and the participant’s previous experience, after which a final executed movement run was performed, followed by an MI run. At the end of this second familiarization session, an MI decoding model was trained on data from the imagined movement run to perform a first feedback run. Feedback runs were introduced to train users to improve their motor imagery [34] and to assist them in staying focused, as suggested by general human–computer interaction research [35]. The last 3 sessions each started with an imagined movement calibration run, followed by three or more feedback runs.
The design of this experiment expands the procedure used to acquire MI data for the BCI competition IV dataset 2a [36] by incorporating feedback. Each session consisted of multiple runs in which movements were requested while the participant was seated in front of a screen. The sessions always began with a baseline run, in which participants either remained at rest while observing the visual cues for the movements or performed specific actions such as blinking and moving their head. Following the baseline, there were two types of runs, depending on the session.
During calibration runs, participants had to either perform or imagine the requested movement. The requested movement could either be closing the right hand, closing the left hand, curling the toes, or pushing the tongue against the top front teeth. The same procedure was employed for both executed and imagined movements. Figure 1 shows the experimental design for a single trial during such a calibration run. Each run consisted of 15 repetitions of each movement, resulting in 60 trials per run. Whether to perform or imagine the movement was orally communicated to the participant before the start of the run.
At the start of each trial, a fixation cross was shown for 3 s, giving the participant time to get ready. Subsequently, a textual cue was displayed indicating the movement to perform or imagine. Depending on the movement, the cue appeared above, below, to the left, or to the right of the fixation cross. The order in which movements were presented was randomized for each run. After 1.5 s, the cue disappeared and the fixation cross turned green, giving the GO signal for the participant to start performing or imagining the movement. After 2.5 s, the cross turned red to indicate that the participant could stop. Finally, there was a break with a randomized duration between 3 and 4 s where white noise was shown to the participant. After the break, the next trial of the run began or the run ended if all trials were completed.
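The trial structure can be summarized as a simple phase schedule. The sketch below is illustrative only: the phase names and the helper function are hypothetical, not the authors’ stimulus code, and merely encode the durations described above.

```python
import random

# Hypothetical encoding of the calibration trial timeline described above;
# phase names and this helper are illustrative, not the authors' stimulus code.
TRIAL_PHASES = [
    ("fixation", 3.0),  # white cross: participant gets ready
    ("cue", 1.5),       # textual cue above/below/left/right of the cross
    ("go", 2.5),        # green cross: perform or imagine the movement
]

def trial_schedule():
    """Return the (phase, duration) sequence for one trial, ending with a
    randomized 3-4 s white-noise break."""
    return TRIAL_PHASES + [("break", random.uniform(3.0, 4.0))]
```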
For feedback runs, a BCI decoding model was trained on previously acquired MI calibration data to give feedback after each movement imagination. The calibration data were always acquired at the beginning of the session and no data from previous sessions were used. The feedback consisted of a textual message that informed the participant if the decoded movement matched the requested movement. If there was a mismatch, the predicted movement was also shown. Feedback runs always used imagined movements. At the end of each feedback run, participants were informed of the proportion of correctly predicted movements to provide them with a target to improve upon. Figure 2 shows the procedure of a single trial during a feedback run. Feedback runs consisted of 8 repetitions per movement, resulting in 32 trials per run.
The procedure for feedback runs was the same as for calibration runs, with the addition of feedback after the MI phase. Including the time to decode, the feedback phase lasted 3 s. In addition to textual feedback, the fixation cross turned green on a match and red on a mismatch. The feedback decoding model used a sliding window of 2.5 s, starting 1 s before the GO signal and ending after the first 3 s of movement imagination, with strides of 0.25 s. This resulted in 7 overlapping windows for which a prediction was made. To obtain a match, the majority of windows had to predict the requested movement, as illustrated in the sketch below.
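A minimal sketch of this voting scheme follows, assuming a fitted classifier exposed as a `decode` callable and a channels × samples array covering 1 s before to 3 s after the GO signal; both names are assumptions for illustration.

```python
from collections import Counter

def feedback_prediction(decode, window_data, fs=500):
    """Majority vote over 2.5 s windows sliding with a 0.25 s stride across
    the 4 s span from 1 s before the GO signal to 3 s after it (7 windows).
    `decode` maps one channels x samples window to a class label."""
    win = int(2.5 * fs)                        # window length in samples
    stride = int(0.25 * fs)                    # stride in samples
    span = window_data.shape[1]                # 4 s -> 2000 samples at 500 Hz
    starts = range(0, span - win + 1, stride)  # 7 window onsets
    votes = [decode(window_data[:, s:s + win]) for s in starts]
    label, n_votes = Counter(votes).most_common(1)[0]
    return label, n_votes                      # a match requires the cued class to win
```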

2.1.2. Acquisition Hardware and Software

EEG data were acquired with a 64-channel LiveAmp (Brain Products GmbH, Gilching, Germany) device at a sampling rate of 500 Hz, using active wet electrodes. Electrode locations follow the 10-10 system according to the 64-channel actiCap (https://brainvision.com/products/acticap-slim-acticap-snap/ (accessed on 7 January 2023)) layout. This electrode layout is schematically shown in Figure 3. Data transfer from the device to the recording computer was achieved over a Bluetooth connection.
The stimulus presentation software that displayed the instructions and cues of the experiment was implemented in the Python programming language, using the Shady library [37]. The EEG signal and cue presentation timings were synchronized using the Lab Streaming Layer (https://github.com/sccn/labstreaminglayer (accessed on 23 February 2023)) (LSL, RRID:SCR_017631) protocol. A separate datastream recorded the timing of cues for movement initiation by placing a marker at each state change in the procedure. The synchronized datastreams for EEG and markers were recorded in a single XDF file using the LabRecorder software (v 1.16.2) provided by LSL. By recording the datastreams in this manner, LSL ensures that the event markers are included with the EEG data when the data file is read.
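Such a marker stream can be sketched with pylsl as follows; the stream name, source id, and marker strings are assumptions for illustration, not the exact values used in this study.

```python
from pylsl import StreamInfo, StreamOutlet

# A marker stream with irregular rate (nominal_srate=0); LabRecorder
# time-stamps each sample on the shared LSL clock alongside the EEG stream.
info = StreamInfo(name="ExperimentMarkers", type="Markers", channel_count=1,
                  nominal_srate=0, channel_format="string",
                  source_id="mi-experiment")
outlet = StreamOutlet(info)

def mark(state: str) -> None:
    """Push one marker at each state change of the procedure."""
    outlet.push_sample([state])

mark("go")  # e.g., when the fixation cross turns green
```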

2.2. Data Analysis

2.2.1. Preprocessing

Two distinct preprocessing procedures were performed on the data, depending on the analysis. In both cases, EEG data were first preprocessed using pyPREP [38], a Python implementation of the PREP pipeline [39] that provides a standardized EEG preprocessing procedure. After this initial preprocessing, the data were further processed with different methods for each experiment. All filters discussed below are one-pass, zero-phase, non-causal finite impulse response bandpass filters with a windowed time-domain design (firwin). The windowing method uses a Hamming window with 0.0194 passband ripple and 53 dB stopband attenuation. Filter design choices were based on the recommendations made in [40].
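A sketch of this step is given below, assuming `raw` is an mne.io.Raw object already loaded from the recordings; the `prep_params` values follow pyPREP’s documented usage and are assumptions that may need adjustment.

```python
import numpy as np
from pyprep.prep_pipeline import PrepPipeline

prep_params = {
    "ref_chs": "eeg",                      # channels used for referencing (assumption)
    "reref_chs": "eeg",                    # channels to re-reference (assumption)
    "line_freqs": np.arange(50, 201, 50),  # 50 Hz mains and harmonics
}
prep = PrepPipeline(raw, prep_params, raw.get_montage())
prep.fit()
raw_clean = prep.raw  # PREP-cleaned continuous EEG

# FIR bandpass with the design described above (MNE defaults: one-pass,
# zero-phase, non-causal, Hamming-windowed firwin); cutoffs depend on the
# analysis, e.g., 1-100 Hz before ICA or 8-30 Hz for MI decoding.
raw_clean.filter(l_freq=1.0, h_freq=100.0, method="fir",
                 fir_window="hamming", fir_design="firwin", phase="zero")
```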
For the data quality assessment, independent component analysis (ICA) [41] was performed to ensure that only brain-related activity was present in the signal. Before performing ICA, the data were filtered between 1 and 100 Hz, as recommended by [42]. For this filter, the lower transition bandwidth was 1 Hz with a −6 dB cutoff frequency and the upper transition bandwidth was 25 Hz with a −6 dB cutoff frequency. The filter length was 1651 samples (i.e., 3.3 s).
A total of 10 ICA components were extracted using the infomax method [41] and then labeled using the ICLabel ML pipeline [43] to identify the components related to brain activity. The ICA solution was then applied to the unfiltered signal to only include components that ICLabel classified as brain activity. Finally, the signal was filtered between 1 and 40 Hz to eliminate low-frequency drifts and high-frequency muscle activity, respectively. This filter had a lower transition bandwidth of 0.1 Hz and an upper transition bandwidth of 10 Hz with a −6 dB cutoff frequency in both cases. The length of this filter was 16,501 samples or 33 s.
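The following sketch illustrates this step with MNE and the mne-icalabel package, assuming `raw_filt` holds the 1–100 Hz filtered copy and `raw_clean` the unfiltered PREP output; the variable names are illustrative.

```python
import mne
from mne_icalabel import label_components

# Extended infomax ICA with 10 components, as required by ICLabel.
ica = mne.preprocessing.ICA(n_components=10, method="infomax",
                            fit_params=dict(extended=True), random_state=97)
ica.fit(raw_filt)

# Label components and keep only those classified as brain activity.
labels = label_components(raw_filt, ica, method="iclabel")["labels"]
ica.exclude = [idx for idx, label in enumerate(labels) if label != "brain"]
raw_brain = ica.apply(raw_clean.copy())

# Final bandpass removing slow drifts and high-frequency muscle activity.
raw_brain.filter(l_freq=1.0, h_freq=40.0, fir_design="firwin")
```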
To train the ML models, the clean data resulting from the PREP pipeline were bandpass filtered between 8 Hz and 30 Hz, as these frequencies contain most information related to MI [44]. This filter had a length of 825 samples, corresponding to a time window of 1.6 s. The data were subsequently split into epochs from 2 s before to 2 s after the movement initiation cue. The epochs were then resampled to 250 Hz. Finally, the epochs of individual runs of the same type and session for the same participant were merged to obtain individual datasets that were then used to train and evaluate the decoding pipeline.
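In MNE, this epoching step might look as follows, assuming the GO markers were converted to annotations when loading the XDF file; this is a sketch, not the authors’ exact code.

```python
import mne

raw.filter(l_freq=8.0, h_freq=30.0, fir_design="firwin")  # MI-relevant band

events, event_id = mne.events_from_annotations(raw)
epochs = mne.Epochs(raw, events, event_id=event_id,
                    tmin=-2.0, tmax=2.0,   # 2 s before to 2 s after the cue
                    baseline=None, preload=True)
epochs.resample(250)  # downsample from 500 Hz to 250 Hz
```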

2.2.2. Data Quality Assessment

To confirm that the acquired data are of sufficient quality to train an ML model and contain the expected event-related activity, an analysis of evoked responses was performed. The chosen evoked response was the well-studied VEP [28,29]. More specifically, we investigated whether brain activations related to VEP could be identified when the GO signal was given.
To investigate the VEPs, the preprocessed raw data were epoched from 300 ms before to 1 s after the GO marker, with a mean baseline correction applied using the first 300 ms of the signal. The epochs for every run were then averaged to yield evoked responses. For each type of run, i.e., executed movement, imagined movement, and imagined movement with feedback, the evoked data were combined over all sessions and participants. This resulted in three distinct grand averages, one per run type. To identify the VEP, the occipital region of interest, comprising electrodes O1, O2, and Oz, was investigated.
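A sketch of this analysis with MNE follows; the `runs` list, the "GO" annotation name, and the remaining variable names are assumptions for illustration.

```python
import mne

evokeds = []
for raw in runs:  # preprocessed mne.io.Raw objects for one run type
    events, event_id = mne.events_from_annotations(raw)
    epochs = mne.Epochs(raw, events, event_id={"GO": event_id["GO"]},
                        tmin=-0.3, tmax=1.0,
                        baseline=(-0.3, 0.0),      # mean baseline correction
                        picks=["O1", "O2", "Oz"],  # occipital region of interest
                        preload=True)
    evokeds.append(epochs.average())

# One grand average per run type, combined over sessions and participants.
grand_avg = mne.grand_average(evokeds)
grand_avg.plot()
```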

2.2.3. Machine Learning Evaluation

The preprocessed data were used to train an ML decoding pipeline using Common Spatial Pattern (CSP [45]) features and Linear Discriminant Analysis (LDA [46]) for 2-class classification. This decoding pipeline was chosen for its low complexity and common usage in MI decoding [5]. The classification task consisted of distinguishing feet and right hand MI. For the CSP features, 4 components were used. The pipelines were evaluated by 5-fold cross-validation for each sensor subset.
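A minimal sketch of this pipeline is shown below, assuming `epochs` contains only the feet and right-hand MI trials; the variable names are illustrative.

```python
import numpy as np
from mne.decoding import CSP
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

X = epochs.get_data()     # trials x channels x samples
y = epochs.events[:, -1]  # event codes: feet vs. right hand

clf = Pipeline([
    ("csp", CSP(n_components=4)),           # spatial-filter log-power features
    ("lda", LinearDiscriminantAnalysis()),  # linear 2-class classifier
])
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"mean CV accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```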
All reported metrics were obtained by averaging the classification accuracy on the test set over the train–test splits obtained from cross-validation. For each subset, a new pipeline was trained using only the sensors in that subset. The mean cross-validation accuracy obtained by training on all sensors served as the baseline against which performance with each sensor subset was compared. Mean cross-validation accuracies were compared with an independent sample t-test at a significance level of 0.05. This test was chosen since the data originate from separate classifiers trained on different data subsets. The compared values can be assumed to be normally distributed by the central limit theorem [47], since they are themselves means of cross-validation accuracies.
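This statistical comparison can be sketched as follows; `cv_scores` is a hypothetical helper returning the cross-validation accuracies of a pipeline retrained on a given channel subset, and the subset shown is illustrative.

```python
from scipy import stats

full_scores = cv_scores(subset=None)                 # baseline: all 64 channels
subset_scores = cv_scores(subset=["C3", "Cz", "C4"]) # illustrative subset

# Independent sample t-test on the accuracies at alpha = 0.05.
t_stat, p_value = stats.ttest_ind(full_scores, subset_scores)
verdict = "significant" if p_value < 0.05 else "no significant"
print(f"p = {p_value:.2f}: {verdict} difference in mean accuracy")
```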
The considered sensor subsets were chosen based on existing datasets, the sensors used in commercial devices, and brain regions associated with motor activity. The central regions of interest were the motor cortex, which typically shows MI activity [12], and also other brain regions associated with MI [13]. Figure 3 shows the electrode locations on the scalp for each subset according to the 10-10 system.
The Full subset uses all 64 EEG channels, while the Half subset only uses 32 channels with reduced spatial resolution. The BCI Comp subset uses 21 of the 22 locations present in the BCI competition IV dataset 2a [36]; the remaining location, FCz, could not be used because the LiveAmp device uses it as the reference. The OpenBCI 8 and OpenBCI 16 subsets correspond to the default sensor locations for the OpenBCI Ultracortex Mark IV headset (https://docs.openbci.com/AddOns/Headwear/MarkIV/ (accessed on 26 January 2023)) in its 8-channel and 16-channel configurations, respectively. Finally, the Motor cortex (MC) subset corresponds to the 24 locations that are associated with motor activity and the MC reduced subset only uses 9 of those locations. A sketch of how such a subset can be applied before retraining follows below.
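Restricting the epochs to a subset before retraining can be done by picking channels by name, as in this sketch; the OpenBCI 8 locations listed are the documented Ultracortex Mark IV defaults and should be checked against Figure 3, so treat them as an assumption.

```python
SUBSETS = {
    # Documented 8-channel Ultracortex Mark IV defaults (assumption).
    "OpenBCI 8": ["Fp1", "Fp2", "C3", "C4", "P7", "P8", "O1", "O2"],
    # The remaining subsets would be defined analogously from Figure 3.
}

def restrict_to_subset(epochs, name):
    """Return a copy of `epochs` containing only the channels of the subset."""
    return epochs.copy().pick(SUBSETS[name])

epochs_8 = restrict_to_subset(epochs, "OpenBCI 8")
```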
Data were processed and analyzed with Python 3.9, using the MNE-Python software (v 1.0.3) library for EEG analysis [48]. Machine learning methods that decode EEG were implemented with the Scikit-Learn library [49]. The statistical analysis was performed with the Pandas library [50] for data manipulation and SciPy [51] for statistical testing. Figures were generated with the Seaborn library [52].

3. Results

3.1. Visually Evoked Potential Inspection

The grand averaged signals at the moment of the GO cue for electrode locations Oz, O1, and O2 are visualized in Figure 4 for acquisition runs with executed movement, imagined movement (calibration runs), and imagined movement with feedback, respectively.
The VEP can be observed for all sensors in each figure, with a slight delay that can be attributed to the reaction speed of the individuals or a syncing delay introduced by the recording software. The P1 peak is small for all types of runs and almost imperceptible for feedback runs. The N2 peak is clearly visible in each figure, with the strongest deflection observed for executed movement and the smallest peak observed for imagined movement without feedback. Finally, the P2 peak is not clearly noticeable for either executed or imagined movement without feedback; for imagined movements with feedback, it is clearly visible. The presence of VEPs in the EEG data shows that the data are of good quality and contain the expected evoked responses.

3.2. Machine Learning Decoding

Table 1 provides an overview of the number of sensors and shows the mean, standard deviation (std), minimum, and maximum cross-validation accuracy for each sensor subset over all participants. The sensor locations of each subset correspond to those detailed in Figure 3.
After performing an independent sample t-test between decoding accuracies obtained with the full sensor set and those obtained with each of the subsets, none of the observed differences in mean accuracy are statistically significant. The resulting p-values of 0.90 (Half), 0.91 (BCI Comp), 0.74 (OpenBCI 16), 0.18 (OpenBCI 8), 0.52 (Motor cortex), and 0.25 (MC reduced) do not allow rejection of the null hypothesis that the mean decoding accuracies are equal.
The trends in Table 1 indicate that there is considerable variability in the cross-validation results when using the full set of 64 channels, with a standard deviation of 0.15. This is also the case for the other sensor subsets, whose observed standard deviations are equal or larger. For the full set of electrodes, the lowest accuracy was below random chance at 0.31, while the highest achieved accuracy was 1.00. On average, the accuracy for the full set of sensors is 0.67.
To visualize trends in the decoding results, Figure 5 visualizes the cross-validation results for each sensor subset in a box-plot format.
From this figure, we can observe that the maximum accuracy is slightly lower for both OpenBCI sensor subsets and the MC reduced subset. For the other subsets, the maximum accuracy of 1.0 is retained. We can also observe that the minimum accuracy seems to increase for some sensor subsets, while it appears to decrease for others. For median accuracy, the observed difference between groups appears slightly larger, with the OpenBCI 8 subset being the lowest. These results indicate that it should be feasible to downscale the number of sensors used for MI decoding.
To investigate how performance is distributed at the level of individual participants, Figure 6 shows the average decoding accuracies on the full set of sensors for each participant separately.
The figure shows that decoding accuracies for individual participants still vary considerably in most cases, although intra-subject variability is reduced compared to the variability over the whole cohort. When comparing the decoding accuracy between the participant with the highest average performance (P12) and the one with the lowest average performance (P02), a statistically significant difference is observed (p < 0.01).
To determine whether the absence of a significant difference between sensor subsets persists at the individual participant level, the decoding accuracies of the participants with the highest (P12) and lowest (P02) median decoding accuracy are compared. The individual decoding accuracies for each sensor subset can be found in Figure 7a and Figure 7b, respectively.
The largest difference in median accuracy can be observed in Figure 7a, with a difference of 0.1 for the OpenBCI 8 subset. There are also no significant differences in mean accuracy for these individual participants, with p-values ranging between 0.55 and 0.84 for P12, and between 0.48 and 0.95 for P02. However, a reduced variance and an apparent increase in minimum accuracy can be noticed for specific sensor subsets.

4. Discussion

The goal of this research was to investigate the possibility of downscaling the number of sensors necessary to decode imagined movements from EEG signals. To determine whether reducing the number of sensors used for this purpose is feasible and to identify their optimal locations, an experimental study was performed. This study consisted of training a well-known decoding pipeline using different sensor subsets and comparing the cross-validation results.
The data used in this study were acquired from 15 participants performing MI in our lab. To ensure that the data are of good quality and contain motor-related activity, the signals from the executed movement runs were used as ground truth and compared with the imagined movement signals, both with and without feedback, in a VEP analysis. The expected VEP peaks [29] could clearly be identified from the visualization of the signals in Figure 4. A delay of around 50 ms was observed compared to the expected timings, which could be attributed to the reaction speed of the individuals or to syncing delays introduced by the LSL software (v 1.15.0). This confirms that the data are of good quality and contain the expected movement-related information, indicating that the models should learn from motor-related brain activity and not from unrelated patterns that co-occur with imagined movement initiation.
The obtained cross-validation results are in line with the literature that uses similar models [53,54]. However, it is important to note that the number of classes used for classification and hyperparameter choices can greatly influence decoding performance. For example, using user-specific filter frequencies can improve results [55]. Additionally, the evaluation method plays an important role, resulting in stronger or weaker conclusions regarding performance depending on the method employed; a cross-validated result cannot be directly compared to the test accuracy of a single train–test split [56].
From the cross-validation results on imagined movement data from both feedback and non-feedback runs, we observe that the decoding accuracy does not significantly differ between the chosen sensor subsets. This outlines the feasibility of downscaling the number of sensors for BCI decoding purposes. It is worth noting that a large inter-individual variability exists, which is in line with the current literature [57]. The p-values for the smaller sensor subsets are lower, suggesting that there is a minimum number of sensors below which decoding performance degrades. This threshold is yet to be determined for MI decoding.
When comparing decoding performance on data from individual participants, the variance in decoding accuracy is reduced for the different sensor subsets, suggesting a more stable decoding performance. However, a large intra-individual variability remains, which makes it harder to train a model using all sessions of a specific user. Finding methods to reduce this variability is an important part of BCI research [58,59]. We also observe that the sensor subset that appears optimal differs between individuals. This result is consistent with literature that aims to identify optimal user-specific sensor subsets [21]. This finding indicates that both defining a general set of sensor locations and performing a user-specific search for the optimal subset within this set are important steps toward real-life BCI applications.
Only one participant used kinesthetic MI for all movements, while two used both visual and kinesthetic MI depending on the movement. The other participants used visual MI, with three using an internal perspective, seven using an external perspective, and the remaining two choosing both internal and external perspectives depending on the movement. It would be of interest to select relevant sensors based on the users’ preference for visual or kinesthetic MI. Recent functional magnetic resonance imaging research has shown that movements can be determined from the activation of different brain areas through multivariate pattern analysis of the signal [60].
We have shown that high-density EEG—typically defined as EEG setups with 64 sensors or more [61]—is not essential for decoding MI and might even negatively affect decoding performance by introducing noise in the data. A relatively small number of electrodes can be used to decode MI from EEG signals, which also implies robustness to the loss of faulty electrodes. This is most apparent when comparing the decoding performance on the full Motor cortex subset and the MC reduced subset, which only uses half the number of sensors (i.e., half the spatial resolution) available for this brain region.
The current research applied a top-down approach by removing sensors and subselecting based on existing datasets and devices. Future research could apply a bottom-up approach by starting from one sensor located over the primary motor cortex and including sensors that cover the regions of interest associated with MI [14]. Alternatively, fixing the number of sensors and evaluating different sensor locations could also provide insights into optimal EEG acquisition configurations. Additionally, investigating both intra- and inter-individual differences in decoding performance could also provide better insights into the sensor locations to include. By relating characteristics of the individual and possible internal or external properties to decoding performance, decoding models could take this variability into account. While there is variability in decoding accuracy between individuals, it seems to remain consistent for different sensor subsets within the same individual according to our results, as shown by Figure 7. Accordingly, we postulate that this approach could be used to determine the minimal number of sensors that are necessary for MI decoding.
By using more advanced methods that also learn a representation of the data, such as deep learning, an optimal set of sensors could also be identified in a data-driven way. Previous research demonstrated that MI questionnaire results could be used to predict the performance of MI-based BCI [62]. Relating MIQ-3 questionnaire outcomes with decoding performance would therefore also be promising as a tool to determine the appropriate user training that would be necessary to reach reasonable decoding performance. By applying these methods, the current feedback runs could be expanded to develop a user-friendly and efficient calibration procedure for new users.
This knowledge could inform the design of future EEG acquisition devices with the specific purpose of using them in MI-based BCI control systems. Recent advances in EEG sensor technology [63,64,65] could also improve the user-friendliness of EEG devices thanks to increased comfort and easy setup. Improved signal quality from a single electrode could also facilitate sensor reduction and is a future avenue of research. This also opens the possibility of using existing consumer-grade sensor devices for MI-based control such as OpenBCI (https://openbci.com/ (accessed on 23 February 2023)) or EMOTIV (https://www.emotiv.com/ (accessed on 23 February 2023)) products, among others. Furthermore, including other biosignal modalities, such as in the upcoming Galea (https://galea.co/ (accessed on 23 February 2023)) device, should result in a robust decoding system, bringing us one step closer to affordable and reliable BCI control.

5. Conclusions

The acquired EEG data were confirmed to be of good quality and to contain movement-related information when participants performed imagined movements. Decoding MI from these EEG data was shown to be feasible with a relatively low number of sensors, and decoding accuracy does not significantly decrease when using fewer sensors. The worst loss in performance for an individual participant was observed for the smallest subset of eight sensors, with a difference of 0.1 in median accuracy, while the difference in maximum accuracy was lower at 0.05. On average, over all participants, no statistically significant difference was observed (p = [0.18–0.91]).
Therefore, using commercial EEG devices with a small number of sensors for BCI applications is feasible. Additionally, this shows that decoding performance remains robust when fewer sensors are available due to one or more faulty sensors. However, more work is necessary to establish a standard optimal set of sensor locations by using more advanced decoding methods and investigating inter-individual differences in optimal sensor locations. Using consumer-grade EEG systems for MI decoding applications is, therefore, feasible and would enable the creation of affordable and reliable BCI control systems.

Author Contributions

Conceptualization, A.D., F.G., O.R. and K.D.P.; methodology, A.D., U.M., S.G. and K.D.P.; software, A.D.; validation, all authors; formal analysis, A.D. and K.D.P.; investigation, A.D., F.G., O.R. and K.D.P.; resources, K.D.P.; data curation, A.D.; writing—original draft preparation, A.D.; writing—review and editing, K.D.P. and B.V.; visualization, A.D.; supervision, F.G., O.R., A.N. and R.M.; project administration, K.D.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Ethics Committee of Human Sciences of Vrije Universiteit Brussel (ECHW_364.02, 7 June 2022).

Informed Consent Statement

Informed consent was obtained from each participant prior to the start of the first session. Participants were orally given an overview of the experimental procedure and asked to read and sign the informed consent form that contains a detailed explanation of the experiment.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to the highly personal nature of these data.

Acknowledgments

The authors would like to thank the people who participated in the data-gathering experiments and the students who assisted in the execution of these experiments. This research was made possible thanks to the EUTOPIA Ph.D. co-tutelle program and the Strategic Research Program Exercise and the Brain in Health and Disease: The Added Value of Human-Centered Robotics. UM gratefully acknowledges funding from the European Union’s Horizon 2020 Research and Innovation Program under grant agreement no. 952401 (TwinBrain—TWINning the BRAIN with machine learning for neuro-muscular efficiency).

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Gu, X.; Cao, Z.; Jolfaei, A.; Xu, P.; Wu, D.; Jung, T.P.; Lin, C.T. EEG-Based Brain-Computer Interfaces (BCIs): A Survey of Recent Studies on Signal Sensing Technologies and Computational Intelligence Approaches and Their Applications. IEEE/ACM Trans. Comput. Biol. Bioinform. 2021, 18, 1645–1666.
2. Lee, S.H.; Lee, M.; Lee, S.W. Neural Decoding of Imagined Speech and Visual Imagery as Intuitive Paradigms for BCI Communication. IEEE Trans. Neural Syst. Rehabil. Eng. 2020, 28, 2647–2659.
3. Voznenko, T.I.; Chepin, E.V.; Urvanov, G.A. The Control System Based on Extended BCI for a Robotic Wheelchair. Procedia Comput. Sci. 2018, 123, 522–527.
4. Kuhner, D.; Fiederer, L.; Aldinger, J.; Burget, F.; Völker, M.; Schirrmeister, R.; Do, C.; Boedecker, J.; Nebel, B.; Ball, T.; et al. A Service Assistant Combining Autonomous Robotics, Flexible Goal Formulation, and Deep-Learning-Based Brain–Computer Interfacing. Robot. Auton. Syst. 2019, 116, 98–113.
5. Rashid, M.; Sulaiman, N.; Abdul Majeed, A.P.P.; Musa, R.M.; Ab. Nasir, A.F.; Bari, B.S.; Khatun, S. Current Status, Challenges, and Possible Solutions of EEG-Based Brain-Computer Interface: A Comprehensive Review. Front. Neurorobot. 2020, 14, 25.
6. Tandle, A.; Jog, N.; D’cunha, P.; Chheta, M. Classification of Artefacts in EEG Signal Recordings and EOG Artefact Removal Using EOG Subtraction. Commun. Appl. Electron. 2016, 4, 12–19.
7. Hagemann, D. Individual Differences in Anterior EEG Asymmetry: Methodological Problems and Solutions. Biol. Psychol. 2004, 67, 157–182.
8. Riedl, R.; Minas, R.K.; Dennis, A.R.; Müller-Putz, G.R. Consumer-Grade EEG Instruments: Insights on the Measurement Quality Based on a Literature Review and Implications for NeuroIS Research. In Lecture Notes in Information Systems and Organisation, Proceedings of the Information Systems and Neuroscience, Vienna, Austria, 14–16 June 2020; Springer International Publishing: New York, NY, USA, 2020; pp. 350–361.
9. Jeannerod, M. The Representing Brain: Neural Correlates of Motor Intention and Imagery. Behav. Brain Sci. 1994, 17, 187–202.
10. Marusic, U.; Grosprêtre, S. Non-Physical Approaches to Counteract Age-Related Functional Deterioration: Applications for Rehabilitation and Neural Mechanisms. Eur. J. Sport Sci. 2018, 18, 639–649.
11. Decety, J. The Neurophysiological Basis of Motor Imagery. Behav. Brain Res. 1996, 77, 45–52.
12. Maksimenko, V.A.; Pavlov, A.; Runnova, A.E.; Nedaivozov, V.; Grubov, V.; Koronovslii, A.; Pchelintseva, S.V.; Pitsik, E.; Pisarchik, A.N.; Hramov, A.E. Nonlinear Analysis of Brain Activity, Associated with Motor Action and Motor Imaginary in Untrained Subjects. Nonlinear Dyn. 2018, 91, 2803–2817.
13. Lotze, M.; Montoya, P.; Erb, M.; Hülsmann, E.; Flor, H.; Klose, U.; Birbaumer, N.; Grodd, W. Activation of Cortical and Cerebellar Motor Areas during Executed and Imagined Hand Movements: An fMRI Study. J. Cogn. Neurosci. 1999, 11, 491–501.
14. Ehrsson, H.H.; Geyer, S.; Naito, E. Imagery of Voluntary Movement of Fingers, Toes, and Tongue Activates Corresponding Body-Part-Specific Motor Representations. J. Neurophysiol. 2003, 90, 3304–3316.
15. Munzert, J.; Lorey, B.; Zentgraf, K. Cognitive Motor Processes: The Role of Motor Imagery in the Study of Motor Representations. Brain Res. Rev. 2009, 60, 306–326.
16. Kilintari, M.; Narayana, S.; Babajani-Feremi, A.; Rezaie, R.; Papanicolaou, A.C. Brain Activation Profiles during Kinesthetic and Visual Imagery: An fMRI Study. Brain Res. 2016, 1646, 249–261.
17. Guillot, A.; Collet, C.; Nguyen, V.A.; Malouin, F.; Richards, C.; Doyon, J. Brain Activity during Visual versus Kinesthetic Imagery: An fMRI Study. Hum. Brain Mapp. 2009, 30, 2157–2172.
18. Decety, J.; Perani, D.; Jeannerod, M.; Bettinardi, V.; Tadary, B.; Woods, R.; Mazziotta, J.C.; Fazio, F. Mapping Motor Representations with Positron Emission Tomography. Nature 1994, 371, 600–602.
19. Abdullah; Faye, I.; Islam, M.R. EEG Channel Selection Techniques in Motor Imagery Applications: A Review and New Perspectives. Bioengineering 2022, 9, 726.
20. Baig, M.Z.; Aslam, N.; Shum, H.P.H. Filtering Techniques for Channel Selection in Motor Imagery EEG Applications: A Survey. Artif. Intell. Rev. 2020, 53, 1207–1232.
21. Gurve, D.; Delisle-Rodriguez, D.; Romero-Laiseca, M.; Cardoso, V.; Loterio, F.; Bastos, T.; Krishnan, S. Subject-Specific EEG Channel Selection Using Non-Negative Matrix Factorization for Lower-Limb Motor Imagery Recognition. J. Neural Eng. 2020, 17, 026029.
22. Gaur, P.; McCreadie, K.; Pachori, R.B.; Wang, H.; Prasad, G. An Automatic Subject Specific Channel Selection Method for Enhancing Motor Imagery Classification in EEG-BCI Using Correlation. Biomed. Signal Process. Control 2021, 68, 102574.
23. Roy, S.; Rathee, D.; Chowdhury, A.; McCreadie, K.; Prasad, G. Assessing Impact of Channel Selection on Decoding of Motor and Cognitive Imagery from MEG Data. J. Neural Eng. 2020, 17, 056037.
24. Wang, Y.; Wang, G.; Zhou, Y.; Li, Z.; Li, Y. EEG Signal Feature Reduction and Channel Selection Method in Hand Gesture Recognition BCI System. In Proceedings of the 2021 International Conference on Computer Engineering and Application (ICCEA), Kunming, China, 25–27 June 2021; pp. 280–284.
25. Mwata-Velu, T.; Avina-Cervantes, J.G.; Ruiz-Pinales, J.; Garcia-Calva, T.A.; González-Barbosa, E.A.; Hurtado-Ramos, J.B.; González-Barbosa, J.J. Improving Motor Imagery EEG Classification Based on Channel Selection Using a Deep Learning Architecture. Mathematics 2022, 10, 2302.
26. Snell, R.S. Clinical Neuroanatomy; Lippincott Williams & Wilkins: Philadelphia, PA, USA, 2010.
27. Friston, K.J.; Frith, C.D.; Dolan, R.J.; Price, C.J.; Zeki, S.; Ashburner, J.T.; Penny, W.D. Human Brain Function, 2nd ed.; Elsevier: Amsterdam, The Netherlands, 2004.
28. Creel, D.J. Chapter 34—Visually Evoked Potentials. In Handbook of Clinical Neurology; Clinical Neurophysiology: Basis and Technical Aspects; Levin, K.H., Chauvel, P., Eds.; Elsevier: Amsterdam, The Netherlands, 2019; Volume 160, pp. 501–522.
29. Kuba, M.; Kubová, Z.; Kremláček, J.; Langrová, J. Motion-Onset VEPs: Characteristics, Methods, and Diagnostic Use. Vis. Res. 2007, 47, 189–202.
30. Ma, T.; Li, H.; Yang, H.; Lv, X.; Li, P.; Liu, T.; Yao, D.; Xu, P. The Extraction of Motion-Onset VEP BCI Features Based on Deep Learning and Compressed Sensing. J. Neurosci. Methods 2017, 275, 80–92.
31. Ma, T.; Li, H.; Deng, L.; Yang, H.; Lv, X.; Li, P.; Li, F.; Zhang, R.; Liu, T.; Yao, D.; et al. The Hybrid BCI System for Movement Control by Combining Motor Imagery and Moving Onset Visual Evoked Potential. J. Neural Eng. 2017, 14, 026015.
32. Dillen, A.; Ghaffari, F.; Romain, O.; Vanderborght, B.; Meeusen, R.; Roelands, B.; De Pauw, K. Optimal Sensor Set for Decoding Motor Imagery from EEG. In Proceedings of the 11th International IEEE EMBS Conference on Neural Engineering (NER), Baltimore, MD, USA, 25–27 April 2023.
33. Williams, S.E.; Cumming, J.; Ntoumanis, N.; Nordin-Bates, S.M.; Ramsey, R.; Hall, C. Further Validation and Development of the Movement Imagery Questionnaire. J. Sport Exerc. Psychol. 2012, 34, 621–646.
34. Roc, A.; Pillette, L.; Mladenovic, J.; Benaroch, C.; N’Kaoua, B.; Jeunet, C.; Lotte, F. A Review of User Training Methods in Brain Computer Interfaces Based on Mental Tasks. J. Neural Eng. 2021, 18, 011002.
35. MacKenzie, I.S. Human-Computer Interaction: An Empirical Research Perspective; Morgan and Kaufman: Burlington, MA, USA, 2012.
36. Tangermann, M.; Müller, K.R.; Aertsen, A.; Birbaumer, N.; Braun, C.; Brunner, C.; Leeb, R.; Mehring, C.; Miller, K.; Mueller-Putz, G.; et al. Review of the BCI Competition IV. Front. Neurosci. 2012, 6, 55.
37. Hill, N.J.; Mooney, S.W.J.; Ryklin, E.B.; Prusky, G.T. Shady: A Software Engine for Real-Time Visual Stimulus Manipulation. J. Neurosci. Methods 2019, 320, 79–86.
38. Appelhoff, S.; Hurst, A.J.; Lawrence, A.; Li, A.; Mantilla Ramos, Y.J.; O’Reilly, C.; Xiang, L.; Dancker, J. PyPREP: A Python Implementation of the Preprocessing Pipeline (PREP) for EEG Data. 2022. Available online: https://zenodo.org/record/6363576 (accessed on 15 November 2022).
39. Bigdely-Shamlo, N.; Mullen, T.; Kothe, C.; Su, K.M.; Robbins, K.A. The PREP Pipeline: Standardized Preprocessing for Large-Scale EEG Analysis. Front. Neuroinform. 2015, 9, 16.
40. Widmann, A.; Schröger, E.; Maess, B. Digital Filter Design for Electrophysiological Data—A Practical Approach. J. Neurosci. Methods 2015, 250, 34–46.
41. Lee, T.W.; Girolami, M.; Sejnowski, T.J. Independent Component Analysis Using an Extended Infomax Algorithm for Mixed Subgaussian and Supergaussian Sources. Neural Comput. 1999, 11, 417–441.
42. Winkler, I.; Debener, S.; Müller, K.R.; Tangermann, M. On the Influence of High-Pass Filtering on ICA-based Artifact Reduction in EEG-ERP. In Proceedings of the 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Milan, Italy, 25–29 August 2015; pp. 4101–4105.
43. Pion-Tonachini, L.; Kreutz-Delgado, K.; Makeig, S. ICLabel: An Automated Electroencephalographic Independent Component Classifier, Dataset, and Website. NeuroImage 2019, 198, 181–197.
44. Singh, A.; Hussain, A.A.; Lal, S.; Guesgen, H.W. A Comprehensive Review on Critical Issues and Possible Solutions of Motor Imagery Based Electroencephalography Brain-Computer Interface. Sensors 2021, 21, 2173.
45. Blankertz, B.; Tomioka, R.; Lemm, S.; Kawanabe, M.; Müller, K.R. Optimizing Spatial Filters for Robust EEG Single-Trial Analysis. IEEE Signal Process. Mag. 2008, 25, 41–56.
46. McLachlan, G.J. Discriminant Analysis and Statistical Pattern Recognition; Wiley Series in Probability and Mathematical Statistics: Applied Probability and Statistics; John Wiley & Sons, Inc.: New York, NY, USA, 1992.
47. Montgomery, D.C.; Runger, G.C. Applied Statistics and Probability for Engineers, 7th ed.; John Wiley & Sons: New York, NY, USA, 2010.
48. Gramfort, A.; Luessi, M.; Larson, E.; Engemann, D.; Strohmeier, D.; Brodbeck, C.; Goj, R.; Jas, M.; Brooks, T.; Parkkonen, L.; et al. MEG and EEG Data Analysis with MNE-Python. Front. Neurosci. 2013, 7, 267.
49. Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.; et al. Scikit-Learn: Machine Learning in Python. J. Mach. Learn. Res. 2011, 12, 2825–2830.
50. Reback, J.; McKinney, W.; jbrockmendel; den Bossche, J.V.; Augspurger, T.; Cloud, P.; gfyoung; Sinhrks; Klein, A.; Roeschke, M.; et al. Pandas-Dev/Pandas: Pandas 1.0.3. 2020. Available online: https://zenodo.org/record/3715232 (accessed on 13 August 2021).
51. Virtanen, P.; Gommers, R.; Oliphant, T.E.; Haberland, M.; Reddy, T.; Cournapeau, D.; Burovski, E.; Peterson, P.; Weckesser, W.; Bright, J.; et al. SciPy 1.0: Fundamental Algorithms for Scientific Computing in Python. Nat. Methods 2020, 17, 261–272.
52. Waskom, M.L. Seaborn: Statistical Data Visualization. J. Open Source Softw. 2021, 6, 3021.
53. Padfield, N.; Zabalza, J.; Zhao, H.; Masero, V.; Ren, J. EEG-based Brain-Computer Interfaces Using Motor-Imagery: Techniques and Challenges. Sensors 2019, 19, 1423.
54. Hosseini, M.P.; Hosseini, A.; Ahi, K. A Review on Machine Learning for EEG Signal Processing in Bioengineering. IEEE Rev. Biomed. Eng. 2021, 14, 204–218.
55. Dillen, A.; Lathouwers, E.; Miladinović, A.; Marusic, U.; Ghaffari, F.; Romain, O.; Meeusen, R.; De Pauw, K. A Data-Driven Machine Learning Approach for Brain-Computer Interfaces Targeting Lower Limb Neuroprosthetics. Front. Hum. Neurosci. 2022, 16, 491.
56. Ojala, M.; Garriga, G.C. Permutation Tests for Studying Classifier Performance. In Proceedings of the 2009 Ninth IEEE International Conference on Data Mining, Miami Beach, FL, USA, 6–9 December 2009; pp. 908–913.
57. Nguyen, T.; Hettiarachchi, I.; Khatami, A.; Gordon-Brown, L.; Lim, C.P.; Nahavandi, S. Classification of Multi-Class BCI Data by Common Spatial Pattern and Fuzzy System. IEEE Access 2018, 6, 27873–27884.
58. Saha, S.; Baumert, M. Intra- and Inter-subject Variability in EEG-Based Sensorimotor Brain Computer Interface: A Review. Front. Comput. Neurosci. 2020, 13, 87.
59. Zhang, R.; Li, F.; Zhang, T.; Yao, D.; Xu, P. Subject Inefficiency Phenomenon of Motor Imagery Brain-Computer Interface: Influence Factors and Potential Solutions. Brain Sci. Adv. 2020, 6, 224–241.
60. Yang, H.; Ogawa, K. Decoding of Motor Imagery Involving Whole-body Coordination. Neuroscience 2022, 501, 131–142.
61. Stoyell, S.; Wilmskoetter, J.; Dobrota, M.; Chinappen, D.; Bonilha, L.; Mintz, M.; Brinkmann, B.; Herman, S.; Peters, J.; Vulliemoz, S.; et al. High Density EEG in Current Clinical Practice and Opportunities for the Future. J. Clin. Neurophysiol. 2021, 38, 112–123.
62. Vuckovic, A.; Osuagwu, B.A. Using a Motor Imagery Questionnaire to Estimate the Performance of a Brain–Computer Interface Based on Object Oriented Motor Imagery. Clin. Neurophysiol. 2013, 124, 1586–1595.
63. Li, G.L.; Wu, J.T.; Xia, Y.H.; He, Q.G.; Jin, H.G. Review of Semi-Dry Electrodes for EEG Recording. J. Neural Eng. 2020, 17, 051004.
64. Faisal, S.N.; Amjadipour, M.; Izzo, K.; Singer, J.A.; Bendavid, A.; Lin, C.T.; Iacopi, F. Non-Invasive on-Skin Sensors for Brain Machine Interfaces with Epitaxial Graphene. J. Neural Eng. 2021, 18, 066035.
65. Li, G.; Liu, Y.; Chen, Y.; Li, M.; Song, J.; Li, K.; Zhang, Y.; Hu, L.; Qi, X.; Wan, X.; et al. Polyvinyl Alcohol/Polyacrylamide Double-Network Hydrogel-Based Semi-Dry Electrodes for Robust Electroencephalography Recording at Hairy Scalp for Noninvasive Brain–Computer Interfaces. J. Neural Eng. 2023, 20, 026017.
Figure 1. One trial in a calibration run of the data acquisition procedure.
Figure 2. One trial in a feedback run of the data acquisition procedure.
Figure 3. Locations of the sensors with colors indicating which subsets they belong to. Empty circles indicate that they are only used in the full subset.
Figure 4. Grand averaged EEG activity for the Oz, O1, and O2 channels at the time of the GO cue during (a) executed movement, (b) imagined movement, and (c) imagined movement with feedback, respectively. Nave denotes the total number of trials that were averaged to obtain the figure.
Figure 5. Cross-validation accuracies for different sensor subsets.
Figure 6. Average cross-validation decoding accuracies for each participant on the full set of sensors.
Figure 7. Cross-validation accuracies for different sensor subsets for the participant with the best mean accuracy (P12) (a) and lowest accuracy (P02) (b), respectively, when using the full sensor set.
Table 1. Overview of mean cross-validation accuracy results for each considered sensor subset.

| Sensor Subset | # Sensors | Mean Acc. | Std. Acc. | Min. Acc. | Max. Acc. |
|---------------|-----------|-----------|-----------|-----------|-----------|
| Full          | 64        | 0.67      | 0.15      | 0.31      | 1.00      |
| Half          | 32        | 0.67      | 0.16      | 0.31      | 1.00      |
| BCI Comp      | 21        | 0.67      | 0.15      | 0.34      | 1.00      |
| OpenBCI 16    | 16        | 0.66      | 0.15      | 0.23      | 0.98      |
| OpenBCI 8     | 8         | 0.65      | 0.15      | 0.34      | 0.93      |
| Motor cortex  | 24        | 0.66      | 0.16      | 0.27      | 1.00      |
| MC reduced    | 9         | 0.65      | 0.15      | 0.35      | 0.94      |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
