The experiments were conducted on a public iEEG dataset comprising 20 patients. Compared with prevailing localization methods, SPC-HFA showed an improvement (Cohen's d > 0.2) and ranked first for 10 of the 20 patients, as measured by the area under the curve. Furthermore, extending SPC-HFA with high-frequency oscillation detection algorithms yielded a further improvement in localization, with an effect size of Cohen's d = 0.48. SPC-HFA can therefore serve as a valuable tool for guiding clinical and surgical decisions in patients with intractable epilepsy.
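As a rough illustration of the evaluation described above, the sketch below computes the Cohen's d effect size between two sets of per-patient area-under-the-curve scores and counts how often one method ranks first. The arrays `scores_spc_hfa` and `scores_baseline` are hypothetical placeholders, not values from the study.

```python
import numpy as np

def cohens_d(a, b):
    """Cohen's d between two score samples, using the pooled standard deviation."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    pooled_sd = np.sqrt((a.std(ddof=1) ** 2 + b.std(ddof=1) ** 2) / 2)
    return (a.mean() - b.mean()) / pooled_sd

# Hypothetical per-patient AUC scores: each entry is one patient's AUC for
# ranking electrode contacts inside vs. outside the epileptogenic zone.
rng = np.random.default_rng(0)
scores_spc_hfa  = rng.uniform(0.60, 0.95, size=20)   # placeholder values
scores_baseline = rng.uniform(0.50, 0.90, size=20)   # placeholder values

print("mean AUC (SPC-HFA):  %.3f" % scores_spc_hfa.mean())
print("mean AUC (baseline): %.3f" % scores_baseline.mean())
print("Cohen's d: %.2f" % cohens_d(scores_spc_hfa, scores_baseline))
print("patients where SPC-HFA ranks first:",
      int(np.sum(scores_spc_hfa > scores_baseline)))
```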
In EEG-based cross-subject emotion recognition via transfer learning, negative transfer from source-domain data degrades accuracy. This paper introduces a dynamic data selection approach, the cross-subject source domain selection (CSDS) method, which consists of three parts. First, based on Copula function theory, a Frank-copula model is constructed to characterize the correlation between the source and target domains, measured by the Kendall correlation coefficient. Second, a new calculation method is formulated to improve the accuracy of Maximum Mean Discrepancy in quantifying the distance between classes within a single source. After normalization, the Kendall correlation coefficient is applied with a threshold to identify the source data best suited for transfer learning. Third, in the transfer learning stage, Manifold Embedded Distribution Alignment uses Local Tangent Space Alignment to build a low-dimensional linear approximation of the local nonlinear manifold geometry, preserving the local characteristics of the sample data after dimensionality reduction. In experiments, CSDS outperformed traditional methods by roughly 2.8% in emotion classification accuracy and reduced processing time by about 65%.
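A minimal sketch of the source-selection idea, assuming per-subject feature matrices and using Kendall's tau together with a simple (biased) MMD estimate as the distribution distance; the threshold value and the helper names (`select_sources`, `mmd_rbf`) are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.stats import kendalltau

def mmd_rbf(X, Y, gamma=1.0):
    """Biased estimate of Maximum Mean Discrepancy with an RBF kernel."""
    def k(A, B):
        d = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
        return np.exp(-gamma * d)
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

def select_sources(sources, target, tau_threshold=0.1):
    """Keep source subjects whose mean feature profile correlates with the
    target's (Kendall's tau above a threshold) -- a stand-in for the
    copula-based correlation analysis described above."""
    target_profile = target.mean(axis=0)
    selected = []
    for name, Xs in sources.items():
        tau, _ = kendalltau(Xs.mean(axis=0), target_profile)
        if not np.isnan(tau) and tau >= tau_threshold:
            selected.append((name, tau, mmd_rbf(Xs, target)))
    # Prefer sources with high correlation and low distribution distance.
    return sorted(selected, key=lambda t: (-t[1], t[2]))

# Hypothetical usage with random placeholder features (n_trials x n_features).
rng = np.random.default_rng(1)
sources = {f"subj{i}": rng.normal(size=(200, 310)) for i in range(5)}
target = rng.normal(size=(200, 310))
print(select_sources(sources, target))
```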
Myoelectric interfaces trained on a group of users cannot adapt to a new user's particular hand movement patterns because of individual differences in anatomy and physiology. Under the current recognition workflow, new users must provide multiple trials for each gesture, amounting to dozens or hundreds of samples, and the model must be calibrated with domain adaptation techniques to achieve successful recognition. The user burden of this time-consuming electromyography signal acquisition and annotation is a major obstacle to the widespread adoption of myoelectric control. This work shows that reducing the number of calibration samples degrades the performance of earlier cross-user myoelectric interfaces, because the statistics become inadequate for characterizing the underlying distributions. To address this issue, this paper presents a few-shot supervised domain adaptation (FSSDA) framework. It aligns the domain distributions by computing point-wise surrogate distribution distances. To establish a shared embedding subspace, we introduce a distance loss over positive-negative sample pairs that pulls new-user samples toward positive samples and pushes them away from negative samples drawn from multiple users. In this way, rather than directly estimating the target-domain data distribution, FSSDA pairs each target-domain sample with each source-domain sample and optimizes the feature distance between each target sample and its matching source samples within the same batch. The proposed method achieved average recognition accuracies of 97.59% and 82.78% on two high-density EMG datasets using only 5 samples per gesture. Moreover, FSSDA remains effective even with a single sample per gesture. The experimental results indicate that FSSDA substantially reduces user effort and further advances myoelectric pattern recognition techniques.
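The pair-based distance loss can be sketched as a standard contrastive objective over embeddings, pulling same-gesture target/source pairs together and pushing different-gesture pairs apart; the margin value and the function name `pair_distance_loss` are illustrative assumptions, not the exact FSSDA loss.

```python
import torch

def pair_distance_loss(target_emb, source_emb, target_lbl, source_lbl, margin=1.0):
    """Contrastive-style loss over all target/source pairs in a batch:
    pull a target sample toward source samples of the same gesture (positives)
    and push it at least `margin` away from other gestures (negatives)."""
    # Pairwise Euclidean distances: (n_target, n_source)
    d = torch.cdist(target_emb, source_emb, p=2)
    same = (target_lbl[:, None] == source_lbl[None, :]).float()
    pos_term = same * d.pow(2)
    neg_term = (1 - same) * torch.clamp(margin - d, min=0).pow(2)
    return (pos_term + neg_term).mean()

# Hypothetical usage with random embeddings (batch x embedding_dim).
t_emb, s_emb = torch.randn(8, 64), torch.randn(32, 64)
t_lbl = torch.randint(0, 8, (8,))
s_lbl = torch.randint(0, 8, (32,))
print(pair_distance_loss(t_emb, s_emb, t_lbl, s_lbl))
```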
The brain-computer interface (BCI), a direct pathway for human-machine interaction, has attracted substantial research interest over the past decade, with promising applications in rehabilitation and communication. The P300-based BCI speller is a typical application that can reliably detect the characters a user intends to select. However, the P300 speller's applicability is limited by a low recognition rate, due in part to the complex spatio-temporal dynamics of the EEG signal. To achieve better P300 detection, we built ST-CapsNet, a deep-learning framework that combines a capsule network with spatial and temporal attention modules. First, the spatial and temporal attention modules are used to obtain enhanced EEG signals that highlight event-related characteristics. The enhanced signals are then fed into the capsule network for discriminative feature extraction and P300 detection. Two public datasets, Dataset IIb of BCI Competition 2003 and Dataset II of BCI Competition III, were used to quantitatively assess ST-CapsNet's performance. A metric called Averaged Symbols Under Repetitions (ASUR) was employed to measure the cumulative effect of symbol recognition across different repetitions. ST-CapsNet achieved significantly better ASUR results than existing methods, including LDA, ERP-CapsNet, CNN, MCNN, SWFP, and MsCNN-TL-ESVM. Moreover, the absolute values of the spatial filters learned by ST-CapsNet are markedly higher in the parietal and occipital regions, which is consistent with the generation of the P300.
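Below is a minimal PyTorch sketch of how spatial and temporal attention weighting of an EEG epoch (channels x time) might look before a downstream classifier; the module structure and the dimensions are assumptions for illustration, not the ST-CapsNet architecture itself.

```python
import torch
import torch.nn as nn

class SpatioTemporalAttention(nn.Module):
    """Re-weights an EEG epoch (batch, channels, time) along the channel
    (spatial) axis and the time (temporal) axis with learned attention."""
    def __init__(self, n_channels=64, n_samples=240):
        super().__init__()
        self.spatial = nn.Sequential(nn.Linear(n_samples, 1), nn.Flatten(1),
                                     nn.Softmax(dim=1))            # one weight per channel
        self.temporal = nn.Sequential(nn.Linear(n_channels, 1), nn.Flatten(1),
                                      nn.Softmax(dim=1))           # one weight per time point

    def forward(self, x):                        # x: (B, C, T)
        w_sp = self.spatial(x)                   # (B, C)
        w_tp = self.temporal(x.transpose(1, 2))  # (B, T)
        x = x * w_sp.unsqueeze(2)                # emphasize informative channels
        x = x * w_tp.unsqueeze(1)                # emphasize informative time points
        return x

epoch = torch.randn(16, 64, 240)                 # hypothetical P300 epochs
print(SpatioTemporalAttention()(epoch).shape)    # torch.Size([16, 64, 240])
```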
Low data-transfer rates and limited reliability of brain-computer interfaces can hinder their development and use. This study aimed to improve the accuracy of motor imagery-based brain-computer interfaces, particularly for individuals who performed poorly in classifying three distinct actions: left hand, right hand, and right foot. A novel hybrid imagery technique that fused motor and somatosensory activity was employed. Twenty healthy participants completed three experimental paradigms: (1) a control condition with motor imagery alone, (2) Hybrid-condition I, combining motor and somatosensory stimuli with the same stimulus (a rough ball), and (3) Hybrid-condition II, combining motor and somatosensory stimuli with differing stimuli (hard and rough, soft and smooth, hard and rough balls). Using the filter bank common spatial pattern algorithm with 5-fold cross-validation, the three paradigms yielded average accuracies of 63.60 ± 21.62%, 71.25 ± 19.53%, and 84.09 ± 12.79%, respectively, across all participants. In the poorly performing subgroup, Hybrid-condition II achieved an accuracy of 81.82%, a 38.86% improvement over the control condition (42.96%) and a 21.04% improvement over Hybrid-condition I (60.78%). In contrast, the high-performing group showed a trend of increasing accuracy with no significant difference among the three paradigms. Compared with the control condition and Hybrid-condition I, the Hybrid-condition II paradigm provided high concentration and discrimination for poor performers and produced enhanced event-related desynchronization in the three modalities corresponding to different types of somatosensory stimuli in motor and somatosensory regions. For users who perform poorly with motor imagery-based brain-computer interfaces, the hybrid-imagery approach can improve performance and thereby promote the practical application and broader acceptance of such technology.
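A compact sketch of a filter bank common spatial pattern pipeline with 5-fold cross-validation, as referenced above, restricted here to a binary problem for brevity; the band edges, number of CSP components, and the LDA classifier are generic assumptions, not the study's exact settings.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.linalg import eigh
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import StratifiedKFold

def bandpass(X, lo, hi, fs=250, order=4):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, X, axis=-1)            # X: (trials, channels, samples)

def csp_fit(X, y, n_pairs=2):
    """Binary CSP: spatial filters from the generalized symmetric eigen-problem
    of the two class-average covariance matrices."""
    covs = [np.mean([x @ x.T / np.trace(x @ x.T) for x in X[y == c]], axis=0)
            for c in np.unique(y)]
    vals, vecs = eigh(covs[0], covs[0] + covs[1])
    order = np.argsort(vals)
    picks = np.r_[order[:n_pairs], order[-n_pairs:]]
    return vecs[:, picks].T                      # (2*n_pairs, channels)

def csp_features(W, X):
    Z = np.einsum("fc,tcs->tfs", W, X)
    return np.log(Z.var(axis=-1))                # log-variance features

# Hypothetical binary motor-imagery data: 80 trials, 32 channels, 2 s @ 250 Hz.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(80, 32, 500)), rng.integers(0, 2, 80)
bands = [(4, 8), (8, 12), (12, 16), (16, 20), (20, 24), (24, 28), (28, 32)]

accs = []
for tr, te in StratifiedKFold(5, shuffle=True, random_state=0).split(X, y):
    feats_tr, feats_te = [], []
    for lo, hi in bands:                         # one CSP per sub-band
        Xb = bandpass(X, lo, hi)
        W = csp_fit(Xb[tr], y[tr])
        feats_tr.append(csp_features(W, Xb[tr]))
        feats_te.append(csp_features(W, Xb[te]))
    clf = LinearDiscriminantAnalysis().fit(np.hstack(feats_tr), y[tr])
    accs.append(clf.score(np.hstack(feats_te), y[te]))
print("mean CV accuracy: %.3f" % np.mean(accs))
```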
Hand grasp recognition based on surface electromyography (sEMG) offers a potentially natural control method for prosthetic hands. However, the long-term reliability of such recognition is critical for users to manage daily living, and the task remains difficult because of category ambiguity and other issues. We posit that uncertainty-aware models are a potential solution, since rejecting uncertain movements has previously been shown to improve the reliability of sEMG-based hand gesture recognition. Focusing on the exceptionally demanding NinaPro Database 6 benchmark, we present a novel end-to-end uncertainty-aware model, the evidential convolutional neural network (ECNN), which produces multidimensional uncertainties, including vacuity and dissonance, for reliable long-term hand grasp recognition. To identify the optimal rejection threshold without heuristic judgments, we examine misclassification-detection performance on the validation set. Extensive comparisons under non-rejection and rejection schemes are made for classifying eight hand grasps (including the resting position) across eight subjects. The proposed ECNN significantly improves recognition performance, achieving 51.44% accuracy without rejection and 83.51% accuracy under a multidimensional uncertainty rejection scheme, improvements of 3.71% and 13.88% over current state-of-the-art (SoA) techniques, respectively. Furthermore, the rejection-based recognition accuracy remained stable, with only a small reduction in accuracy after the three days of data gathering. These results point toward a reliable classifier design that yields accurate and robust recognition.
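As a sketch of how such multidimensional uncertainties can be computed from a network's per-class evidence output, the code below uses the subjective-logic formulation of vacuity and dissonance over a Dirichlet distribution; the evidence values and the rejection threshold are illustrative assumptions, not those selected by the paper's validation procedure.

```python
import numpy as np

def uncertainties(evidence):
    """Vacuity and dissonance of a Dirichlet opinion built from non-negative
    per-class evidence (subjective-logic formulation)."""
    e = np.asarray(evidence, float)
    K = e.size
    alpha = e + 1.0
    S = alpha.sum()
    belief = e / S
    vacuity = K / S                      # little total evidence -> high vacuity
    # Dissonance: conflicting evidence spread over several classes.
    diss = 0.0
    for k in range(K):
        others = np.delete(belief, k)
        if others.sum() > 0:
            bal = 1.0 - np.abs(others - belief[k]) / (others + belief[k] + 1e-12)
            diss += belief[k] * np.sum(others * bal) / others.sum()
    return vacuity, diss

# Hypothetical evidence vectors for an 8-grasp problem.
examples = {
    "confident":  np.array([40, 1, 0, 0, 1, 0, 0, 0]),
    "conflicted": np.array([20, 19, 0, 0, 1, 0, 0, 0]),
    "weak":       np.array([0.5, 0.2, 0.1, 0, 0, 0, 0, 0]),
}
for name, ev in examples.items():
    v, d = uncertainties(ev)
    # A simple rejection rule: reject if either uncertainty exceeds a threshold.
    reject = (v > 0.5) or (d > 0.5)
    print(f"{name}: vacuity={v:.2f} dissonance={d:.2f} reject={reject}")
```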
Hyperspectral image (HSI) classification has received considerable attention in image analysis. The extensive spectral information in an HSI provides a more detailed description of the scene, but it also carries a great deal of redundancy. Because of this redundant information, spectral curves of different categories often show similar, overlapping trends, which reduces category separability. This article improves category separability by maximizing inter-category differences and minimizing intra-category variations, thereby improving classification accuracy. The proposed spectral processing module, based on template spectra, highlights the distinctive attributes of the different categories and makes it easier for the model to discover key features.
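A minimal sketch, under the assumption that a "template spectrum" is a per-class mean spectrum against which each pixel's spectrum is compared; the distance measure and the function names are illustrative choices, not the article's exact module.

```python
import numpy as np

def build_templates(spectra, labels):
    """Per-class template spectra: the mean spectrum of each labeled class.
    spectra: (n_pixels, n_bands), labels: (n_pixels,)"""
    classes = np.unique(labels)
    return classes, np.stack([spectra[labels == c].mean(axis=0) for c in classes])

def template_features(spectra, templates):
    """Represent each pixel by its (negative) distance to every class template,
    emphasizing inter-class differences relative to the raw redundant bands."""
    d = np.linalg.norm(spectra[:, None, :] - templates[None, :, :], axis=-1)
    return -d                                    # higher value = closer to that class

# Hypothetical data: 1000 pixels, 200 spectral bands, 5 classes.
rng = np.random.default_rng(0)
spectra = rng.normal(size=(1000, 200))
labels = rng.integers(0, 5, 1000)

classes, templates = build_templates(spectra, labels)
feats = template_features(spectra, templates)
pred = classes[np.argmax(feats, axis=1)]         # nearest-template assignment
print("nearest-template agreement with labels: %.2f" % (pred == labels).mean())
```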