Research
We Engineer Intelligent Bio-Integrated Systems
The Biomedical Sensors & Systems Lab is dedicated to making healthcare proactive, predictive, and accessible. Unlike traditional labs that rely on off-the-shelf devices, we possess a unique "full-stack" innovation capability: we design novel sensors, engineer custom circuit boards (PCBs) and firmware from the component level, and develop the Causal AI algorithms to process the data, all in-house.
Our mission is to uncover new biomarkers for early diagnosis and translate these discoveries into commercially viable products. To achieve this, our work is organized into three synergistic research thrusts:
Conventional wearable neuro-biosensors are fundamentally limited by motion artifacts and single-modality constraints, preventing clinically precise brain research in real-world environments. We are revolutionizing this field by engineering intelligent, hybrid sensing platforms that fuse modalities like EEG and fNIRS to deliver clinical-grade, high-definition neurological data.
Status: Active | Funding: National Science Foundation (NSF) Award #2514612
The Challenge: Clinically-precise brain research outside of laboratory settings is currently hampered by three fundamental limitations:
Wearability & Motion Artifacts: Existing systems are often bulky and tethered, making them susceptible to motion artifacts that severely corrupt high-fidelity signal capture in real-world, mobile environments.
Single-Modality Constraint: The inherent trade-off between high temporal resolution (EEG) and high spatial precision (fNIRS) prevents a complete, high-definition capture of brain dynamics.
Equitable Sensing Bias: Standard optical sensors perform inconsistently across diverse skin tones and biophysical characteristics, leading to biased data collection and unreliable diagnostics for non-White populations.
Our Innovation: The High-Resolution Hybrid Neuro-Monitor. Supported by the NSF, we are engineering a miniaturized, wireless platform that achieves clinical-grade sensing by eliminating these compromises. Our "full-stack" design integrates:
Novel System-on-Chip Architecture: High-fidelity circuits, custom optics, and flexible PCBs developed to achieve clinical-grade SNR and robust motion tolerance within a truly mobile, wearable form factor.
Equitable Sensing via Autonomous Adaptation: A smart optical system dynamically measures skin tone and adjusts hardware parameters (e.g., source power, detector gain) in real-time, ensuring accurate and unbiased data collection for all user demographics, directly addressing the sensor bias challenge.
Dual-Modality Causal Fusion: Simultaneous recording and fusion of EEG (electrical activity) and fNIRS (hemodynamic response) data to capture a complete, high-definition picture of brain function inaccessible to either modality alone.
Multi-Biomarker Extraction: Real-time processing pipelines designed to extract and integrate complex neuro-biomarkers, fueling more robust and precise Causal AI models.
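The autonomous adaptation described above can be sketched as a simple proportional control loop. This is an illustrative simplification, not the platform's actual firmware; the function name and attenuation model are hypothetical stand-ins:

```python
def adapt_optical_gain(measure_fn, target=0.5, tol=0.05, gain=1.0, max_iter=20):
    """Iteratively scale source power/detector gain until the detected optical
    level lands inside a target window, compensating for subject-dependent
    attenuation (e.g., skin tone). Hypothetical simplified control loop."""
    for _ in range(max_iter):
        level = measure_fn(gain)
        if abs(level - target) <= tol:
            break
        gain *= target / max(level, 1e-9)  # proportional correction
    return gain, level

# Simulated detector: stronger optical attenuation -> lower raw signal
attenuation = 0.2
gain, level = adapt_optical_gain(lambda g: attenuation * g)
```

In the real system this correction would be applied in firmware to the LED drive current and photodiode amplifier gain rather than as a software multiplier.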
Impact: This work unlocks a new frontier in "real-world" neuroscience and equitable disease prediction. By delivering high-definition, motion-tolerant, and unbiased neurological data, the platform enables the discovery of novel neuro-biomarkers for early diagnosis and fuels the next generation of bias-free AI models designed for equitable healthcare across all populations.
Challenge: Conventional functional Near-Infrared Spectroscopy (fNIRS) systems were non-mobile and tethered, preventing their use outside of controlled laboratory settings. The core challenge was achieving high-fidelity hemodynamic monitoring with the miniaturization and power efficiency required for a true, motion-tolerant, Body Sensor Network (BSN) device.
Our Solution: We designed and developed WearLight, a fully wearable, wireless fNIRS brain imaging device funded by a major NSF grant. Our solution was a comprehensive Internet-of-Things (IoT) embedded architecture, where we performed end-to-end hardware development, including custom PCB design, component-level testing, and firmware for onboard intelligence, computation, and robust wireless data transmission.
Outcome: WearLight was rigorously validated to successfully monitor the functional hemodynamic activities of the prefrontal cortex in freely moving human participants. This foundational device established the core hardware architectures now utilized in our lab, opening new applications for high-fidelity, motion-tolerant wearable brain imaging, diagnostic use, cognitive performance improvement, and BCI research.
Funded by: NSF EPSCoR Research Infrastructure #1539068 (~$6 million grant).
Published in: IEEE Transactions on Biomedical Circuits and Systems (TBCAS) (First & Corresponding Author).
Challenge: The reliability of predictive models in mission-critical environments (e.g., military, first-responder operations) is limited by high inter-subject variability. Conventional population-averaged models fail to account for individual neural differences, leading to inaccurate performance assessments in ostensibly homogeneous teams.
Our Solution: Leveraging the high-fidelity data from our WearLight platform, we developed a "Cognitive Phenotyping" protocol. We imaged the Prefrontal Cortex (PFC) during n-back Working Memory tasks and applied an advanced unsupervised k-means clustering algorithm to the hemodynamic response (HR) data. This approach moves beyond supervised classification to automatically discover hidden, intrinsic cognitive sub-groups.
Outcome: The unsupervised framework successfully identified three distinct cognitive sub-groups, each with unique Task, Performance, and Hemodynamic (TPH) profiles. This demonstrates the capability of our hardware and AI pipeline to objectively categorize operational personnel based on cognitive resilience, providing commanders with a real-time metric for optimizing unit formation.
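The unsupervised discovery step can be sketched with standard k-means on standardized per-subject features. The synthetic data and feature layout below are illustrative assumptions, not the study's actual hemodynamic dataset:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Synthetic per-subject features: [mean hemodynamic response, task accuracy, reaction time]
groups = [rng.normal(loc=m, scale=0.3, size=(20, 3))
          for m in ([0, 0, 0], [2, 2, 2], [-2, 2, -2])]
X = StandardScaler().fit_transform(np.vstack(groups))

# Three clusters, mirroring the three TPH sub-groups reported in the paper
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
```

Standardizing before clustering matters here: hemodynamic amplitudes, accuracies, and reaction times live on very different scales, and k-means is distance-based.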
Published in: IEEE Transactions on Neural Systems and Rehabilitation Engineering (TNSRE) (Solo Author).
Modern medical diagnostics are fundamentally challenged by fragmented, multi-modal data streams. We solve this complexity using Deep Learning and Biomedical Foundation Models to fuse patient data (including imaging, genomics, clinical text, and wearable physiological signals), moving beyond correlation to Causal Inference. Our goal is to architect interpretable, in silico diagnostic tools that discover composite biomarkers for early, predictive disease detection.
Challenge: Diagnosing Focal Cortical Dysplasia Type II (FCD-II), a major cause of drug-resistant epilepsy, is challenging because lesions are often subtle, and diagnosis requires fragmented, subjective analysis of multi-modal anatomical MRI features and diverse clinical characteristics by human experts. The challenge is developing an automated framework that can fuse these diverse data domains for objective detection and subtyping.
Our Solution: We engineered a novel, comprehensive Causal Deep Learning (DL) framework to automate the detection and subtyping of FCD-II (FCD-IIa/FCD-IIb). This solution utilized Transfer Learning, leveraging advanced architectures (DenseNet201 and Xception) to fuse data from three domains:
Multi-Modal MRI Data: T1-weighted (T1w) and FLAIR sequences.
Multi-Planar Imaging: Axial, Coronal, and Sagittal views simultaneously.
Clinical Characteristics: Incorporating the influence of patient age, sex, brain hemisphere, and lobe (a key component of the causal model).
Outcome: The models, particularly DenseNet201 and Xception, achieved superior performance in FCD-II classification and subtyping, with an accuracy exceeding 97% in critical sub-analyses. This analysis provides clinicians with crucial insights into the optimal selection of MRI planes and DL models tailored to specific patient demographic groups, significantly advancing the precision and efficiency of presurgical planning for epilepsy treatment.
Published in: npj Imaging (Nature Partner Journals) (Corresponding Author).
Challenge: The prognosis of Oral Squamous Cell Carcinoma (OSCC) relies on the manual quantification of Tumor-Infiltrating Lymphocytes (TILs), a subjective process prone to high inter-observer variability. Existing automated methods fail because they treat images as simple pixel maps, overlooking the critical, fine-grained spatial context of cellular interactions necessary for accurate grading.
Our Solution: We introduced OralTILs-ViT, a novel joint representation learning framework that integrates two distinct modalities: cellular density maps and raw H&E-stained tissue images.
Stage 1 (Cellular Mapping): We pioneered TILSeg-MobileViT, a weakly supervised segmentation model, to generate precise cellular density maps for key components (tumor, stromal cells, and lymphocytes), eliminating the need for costly manual pixel-level annotation.
Stage 2 (Multi-Modal Fusion): Our dual-encoder architecture fuses the local cellular density features with the global tissue context from H&E images to replicate the decision-making process of an expert pathologist.
Outcome: The framework achieved superior performance against single-modality approaches, providing objective, multiclass classification aligned with the clinical Broders' grading system.
Accuracy: 96.37%
Precision/Recall/F1: >96.3% across all metrics.
Validation: Multi-criteria decision analysis (TOPSIS) confirmed that our method ranks first across all TILs infiltration categories, establishing it as a robust tool for objective OSCC grading.
Published in: Scientific Reports (Nature Portfolio) (Corresponding Author).
The creation of predictive Human Digital Twins is critically constrained by the inability of current sensors to capture real-time, multi-system physiological interactions. We are pioneering a new generation of bio-adaptive hardware architectures that simultaneously monitor interacting systems (e.g., cardiac, respiratory, and neurological) at the foundational sensor level. This enables the high-fidelity data required for in silico models to accurately predict complex adverse events, such as sepsis and neurological crises.
Challenge: Traditional Digital Twins rely heavily on macroscopic signals (motion, heart rate), missing the critical "biochemical layer" required for comprehensive health modeling and early disease detection.
Our Solution: We engineered a handheld, point-of-care optical biosensor using custom-synthesized CdS/ZnS core-shell Quantum Dots. By leveraging Quantum Quenching and Förster Resonance Energy Transfer (FRET) mechanics, we developed a low-cost fluoroscopic prototype capable of rapid, label-free detection of alpha-amylase (a biomarker for acute pancreatitis).
Outcome: The fully functional hardware prototype achieved a Limit of Detection of 49.76 U/L with high clinical correlation (Pearson’s r = -0.98). This demonstrates a scalable pathway for integrating real-time metabolic sensing at the foundational sensor level into the Digital Twin architecture.
Published in: IEEE Transactions on NanoBioscience (TNB) (Corresponding Author).
Challenge: Comprehensive Human Digital Twins must model not only internal physiology but also external physical interactions (such as trauma or impact). Standard wearable sensors fail to capture these high-pressure mechanical events, saturating or breaking under force, which creates a "blind spot" in modeling real-world injury dynamics.
Our Solution: We designed and fabricated a piezoresistive Smart Fabric Pressure Sensor (SFPS) using Semiconductive Polymer Composites (SCPC). The "full-stack" development involved:
Material Engineering: Utilized a carbon-infused SCPC active layer sandwiched between conductive thread electrodes and durable Nylon insulation to create a robust sensing matrix.
Geometric Optimization: Implemented an iterative design process, modifying the active sensing layer width (up to 1.5 mm) to significantly extend the sensor's linear detection range.
System Integration: Developed a custom biasing circuit and microcontroller-based Data Acquisition (DAQ) system to linearize the response and capture rapid impact data.
Outcome: The SFPS achieved a major improvement over existing flexible sensors, demonstrating a linear measurement range of 4.63–74.08 kPa (r=0.99) with a high accuracy of 97.72%. Impact testing further validated the sensor's capability to track pressures up to 254 kPa, establishing it as a foundational hardware layer for digitizing physical trauma in safety wearables (e.g., e-helmets).
Published in: IEEE Transactions on Instrumentation and Measurement (TIM) (Solo Author).
Challenge: Real-time 3-D functional imaging of tissue, critical for tumor detection and monitoring chemotherapy, is computationally demanding and requires bulky, expensive laboratory equipment, limiting its use in unconstrained clinical settings.
Our Solution: We developed a spectroscopic Diffuse Optical Tomography (DOT) system built on an IoT-embedded system architecture and a GPU-accelerated Broker-based Model-Based Iterative Image Reconstruction (BMOBIIR) algorithm. This solution performs noise-free lock-in detection, wireless data transmission, and high-speed 3D spectroscopic image reconstruction simultaneously.
Outcome: The system achieved a performance boost of 29x over serial CPU implementations, updating 3D images at a rate of 4 frames/second. Simulation and experimental results demonstrated the capability to map the concentration of oxyhemoglobin, deoxyhemoglobin, and water, effectively characterizing tumor-mimicking tissue. This low-cost, miniaturizable system paves the way for handheld functional DOT imaging.
Published in: IEEE Transactions on Instrumentation and Measurement (TIM) (Solo Author).
Challenge: Traditional cardiac arrhythmia detection methods fail to capture subtle temporal phase drifts indicative of arrhythmias or require extensive computational resources and handcrafted features, limiting their effectiveness for early diagnosis and real-time applicability on edge devices.
Our Solution: We proposed a novel, resource-efficient methodology combining Reconstructed Phase Space (RPS) analysis with an optimized Delay State Network (DSN). This approach:
Nonlinear Dynamics: Reconstructs the nonlinear dynamics of ECG signals and leverages the entire Phase Space Structure (PSS) as direct input to the classifier.
Resource Efficiency: Uses an optimized DSN with a single nonlinear node and delayed feedback to emulate multiple virtual nodes, reducing hardware demands by over an order of magnitude compared to conventional reservoirs or LSTMs.
Hyperparameter Optimization: Integrates delay and embedding optimization to accurately capture subtle temporal phase drifts in ECG signals.
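The phase-space reconstruction at the heart of this pipeline is the classical Takens delay embedding. The sketch below uses a toy quasi-periodic signal in place of real ECG, and the delay/dimension values are illustrative rather than the optimized ones from the paper:

```python
import numpy as np

def delay_embed(x, dim=3, tau=5):
    """Takens delay embedding: each row of the output is
    [x(t), x(t+tau), ..., x(t+(dim-1)*tau)], reconstructing the phase space."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

# Toy quasi-periodic "ECG-like" signal
t = np.linspace(0, 10, 1000)
sig = np.sin(2 * np.pi * t) + 0.3 * np.sin(6 * np.pi * t)
pss = delay_embed(sig, dim=3, tau=25)  # phase space structure fed to the classifier
```

In the published framework, the delay and embedding dimension are themselves hyperparameters tuned to expose the subtle phase drifts that distinguish arrhythmia classes.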
Outcome: This framework achieved state-of-the-art performance with high accuracy and low latency, demonstrating a reliable, low-power solution for real-time cardiac arrhythmia classification and early diagnosis on computationally constrained edge hardware.
Published in: IEEE Transactions on Biomedical Engineering (TBME) (Corresponding Author).
Challenge: Accurate continuous-wave Diffuse Optical Tomography (CW-DOT) image reconstruction is limited by conventional methods that simplify photon paths as curved lines, compromising the accurate estimation of tissue absorption properties and vital hemodynamic parameters.
Our Solution: We proposed a non-linear semi-analytic reconstruction method for CW-DOT, utilizing a Gaussian distribution framework for tracing photon paths. This method models photon diffusion as a curved photon cloud distribution (using Rosenbrock's banana function) to more accurately mimic photon propagation in highly scattering tissue.
Outcome: Experimental validation on phantom and in-vivo finger joint imaging of a healthy volunteer demonstrated significant improvement in accuracy over generalized methods. The approach provides consistently accurate non-invasive estimates of tissue oxygenation levels (StO2) and oxygen capacity (CB), suitable for low-cost disease detection systems.
Published in: IEEE Sensors Journal (Corresponding Author).
Challenge: Continuous, long-term personalized health monitoring is severely limited by the need for frequent battery replacement and the high computational cost of traditional signal processing methods, which prohibit real-time classification on portable devices.
Our Solution: We developed a self-powered portable electronic module and signal-processing framework for real-time ECG monitoring, featuring end-to-end efficiency:
Sustainable Energy System: An ultra-low-power boost charger (BQ25505 IC) integrates ambient light energy harvesting (solar cells/photodiode arrays) with a lithium-ion battery for continuous, self-powered operation.
Efficient Signal Processing: We introduced a dynamic multi-level wavelet packet decomposition framework to remove redundant, overlapping samples from ECG signals, significantly reducing processing time and computational cost.
Advanced Classification: A custom Random Forest with Deep Decision Tree (RFDDT) model was designed for highly accurate, low-latency ECG classification.
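As a rough stand-in for the custom RFDDT model (whose exact architecture is not reproduced here), a standard random forest with deep, unpruned trees on synthetic feature vectors illustrates the classification stage. The data below is generated, not real ECG:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for wavelet-compressed ECG feature vectors (not real ECG data)
X, y = make_classification(n_samples=600, n_features=24, n_informative=12,
                           n_classes=2, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)

# Deep, unpruned trees loosely approximate the "Deep Decision Tree" aspect of RFDDT
clf = RandomForestClassifier(n_estimators=100, max_depth=None, random_state=0)
clf.fit(Xtr, ytr)
acc = clf.score(Xte, yte)
```

The pairing matters: the wavelet packet stage shrinks the input so that even an ensemble of deep trees stays cheap enough for a self-powered portable module.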
Outcome: The integrated system demonstrates a breakthrough in energy optimization and signal processing efficiency for wearable devices, achieving medical-grade monitoring performance:
Classification Accuracy: Achieved a remarkable 99.72% accuracy for offline ECG signal classification (using the RFDDT model).
Sustainability: The self-powered design, utilizing ambient light harvesting, supports uninterrupted, long-term operation, overcoming a major barrier in wearable healthcare.
Published in: Bioengineering (Corresponding Author).
Challenge: Evaluating the impact of acute stress on critical cognitive functions (e.g., memory, decision-making) in high-risk environments requires the simultaneous, high-fidelity capture and fusion of neuro-physiological, autonomic, and behavioral biomarkers: a complex capability that is currently lacking in commercial sensing systems.
Our Solution: We engineered a custom, multi-modal, bio-integrated sensing platform that simultaneously acquires a comprehensive array of biomarkers: neuro-electric activity (EEG), sympathetic arousal (ECG/GSR), and biochemical markers. This platform allowed us to precisely decouple the effects of operationally relevant emotional stressors (using a Threat of Shock paradigm) from performance during complex cognitive tasks like spatial orientation and decision-making under uncertainty.
Outcome: The high-fidelity, multi-modal data revealed a crucial finding: Uncertainty (stimulus clarity), rather than acute stress itself, was the dominant factor influencing decision times. This research validates the need for advanced, multi-modal neuro-biosensors in psychological studies and provides crucial data for improving military training protocols and predictive models of cognitive failure in the field.
Published in: PLOS ONE.
Challenge: Diagnosing and grading Colorectal Cancer relies on manual histopathology examination, a process that is often time-consuming, subjective, and lacks inter-observer consistency. The challenge is developing an automated, highly efficient, and robust segmentation model for accurate tumor region identification.
Our Solution: To find an optimal balance between segmentation accuracy and computational efficiency, we explored several advanced Deep Learning architectures (VGG16-UNet, ResNet50-UNet, MobileNet-UNet). Notably, we pioneered the integration of the MobileViT (Mobile Vision Transformer) architecture as a UNet encoder: a novel, high-efficiency approach for accurate colorectal histopathology image segmentation.
Outcome: Our findings highlight the MobileViT-UNet with Dice loss as the leading model, demonstrating robust performance metrics:
Dice Ratio: 0.944 ± 0.030
Jaccard Index: 0.897 ± 0.049
Precision: 0.955 ± 0.046
Recall: 0.939 ± 0.038
Furthermore, using TOPSIS-based Multi-Criteria Decision Analysis, the MobileViT-UNet was objectively ranked as the best-performing model, significantly outperforming existing segmentation benchmarks. This breakthrough has the potential to improve the speed and accuracy of colorectal cancer diagnosis in clinical settings.
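The Dice and Jaccard overlap scores reported above have simple closed forms; a minimal sketch on toy binary masks (not real histopathology segmentations) makes the relationship explicit:

```python
import numpy as np

def dice_and_jaccard(pred, truth):
    """Standard overlap metrics for binary segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    dice = 2 * inter / (pred.sum() + truth.sum())
    jaccard = inter / np.logical_or(pred, truth).sum()
    return dice, jaccard

# Toy 1-D masks standing in for tumour-region predictions
pred = np.array([0, 1, 1, 1, 0, 0])
truth = np.array([0, 1, 1, 0, 0, 0])
d, j = dice_and_jaccard(pred, truth)  # d = 0.8, j = 2/3
```

The two are monotonically related (J = D / (2 − D)), which is why models ranked by Dice and by Jaccard usually agree.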
Published in: PeerJ Computer Science (Corresponding Author).
Challenge: Managing patient health in the context of comorbidities (multiple simultaneous diseases) is highly complex, as interventions for one illness can inadvertently worsen another. Traditional AI models are often opaque "black boxes" that fail to provide harmonized, actionable counterfactual advice for patients at risk of multiple concurrent conditions (e.g., heart stroke and diabetes).
Our Solution: We developed IMPACT (Interactive Multi-disease Prevention and Counterfactual Treatment System). This full-stack system overcomes the single-disease limitation by integrating three core components:
Multi-Dimensional Counterfactual Model: We utilize the Non-dominated Sorting Genetic Algorithm II (NSGA-II), a multi-objective optimization technique, to generate a set of Pareto-optimal solutions that simultaneously minimize the predicted risks of multiple diseases.
Multimodal LLM Integration: We employ Google Gemini Pro to power an intuitive interface that converts user natural language requests into executable database queries (SQL). This simplifies the interpretation and practical usability of the complex counterfactual outputs.
Real-Time Data Fusion: The system integrates personal user information with real-time physiological data from wearable sensors (e.g., hypertension, blood sugar) to provide highly personalized recommendations.
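The multi-objective core of this approach reduces to Pareto dominance: NSGA-II repeatedly sorts candidate interventions into non-dominated fronts. A minimal sketch of that selection step (with made-up risk values, minimizing both objectives) is:

```python
def pareto_front(points):
    """Return the non-dominated points (minimisation on every objective),
    the selection criterion NSGA-II iterates over."""
    front = []
    for p in points:
        dominated = any(all(q[i] <= p[i] for i in range(len(p))) and q != p
                        for q in points)
        front.append(not dominated)
    return [p for p, keep in zip(points, front) if keep]

# Each tuple: (predicted stroke risk, predicted diabetes risk) for one candidate
# counterfactual intervention -- values are illustrative only
candidates = [(0.10, 0.30), (0.20, 0.10), (0.15, 0.15), (0.25, 0.35)]
front = pareto_front(candidates)
```

Only the last candidate is dominated (something is better on both risks), so the front keeps the other three as trade-off options for the user.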
Outcome: IMPACT is a holistic, "human-in-the-loop" prevention tool that successfully translates complex biomedical data into personalized, natural-language health recommendations.
Risk Minimization: The system minimizes the predicted probabilities of concurrent heart stroke and diabetes to as low as 0.00000876 and 0.00010403, respectively.
Transparency and Actionability: It offers personalized feature value adjustments (e.g., changes in BMI, glucose level) while ensuring one recommendation does not increase the risk of another through a penalty function.
User Accessibility: The conversational interface democratizes access to health analytics for the general public, regardless of their technical expertise.
Published in: PeerJ Computer Science (Corresponding Author).
Challenge: Oropharyngeal Squamous Cell Carcinoma (OPSCC) exhibits significant heterogeneity; determining Human Papillomavirus (HPV) status is critical for treatment but traditionally relies on invasive, time-consuming biopsies. Traditional single-modality AI models and manual radiomics often fail to capture the complex intra-tumor heterogeneity required for accurate non-invasive prediction.
Our Solution: We engineered a novel 3D CNN Ensemble Deep Learning Framework composed of six base CNN models. The framework employs a dual network and fuses anatomical features (CT, density, texture) with metabolic features (PET, hyper-metabolism) using a sophisticated Gated Late Fusion technique. This mechanism autonomously learns filter weights to weigh the complementary contributions of each modality for final classification. The training procedure involved Multiple Instance Learning (MIL), where each tumor volume was treated as a "bag of patches," and stratified 5-fold cross-validation was used to minimize ensemble model bias and handle the class imbalance in the dataset.
Outcome: The multi-modal ensemble model with soft voting significantly outperformed single-modality (PET-only, CT-only) and traditional 2D CNNs (ResNet, DenseNet) in non-invasive HPV classification.
Performance Metrics: It achieved an AUC of 0.76 and an F1 score of 0.746 on independent multi-institutional datasets (TCGA, MAASTRO). The superior performance was attributed to the soft voting approach, which balanced the individual base models' weaknesses.
Clinical Impact: This non-invasive computational approach demonstrates sufficient accuracy for providing preliminary HPV assessment before biopsy, leveraging the strength of multi-modal feature fusion to inform early treatment decisions.
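The soft-voting rule that combines the base CNNs is simply a (optionally weighted) average of per-model class probabilities followed by an argmax. The probability values below are invented for illustration:

```python
import numpy as np

def soft_vote(prob_list, weights=None):
    """Average per-model class probabilities and take the argmax class:
    the soft-voting rule used to combine ensemble members."""
    probs = np.average(np.stack(prob_list), axis=0, weights=weights)
    return probs, probs.argmax(axis=1)

# Hypothetical [HPV-, HPV+] probabilities from three base models for two tumours
m1 = np.array([[0.6, 0.4], [0.3, 0.7]])
m2 = np.array([[0.7, 0.3], [0.4, 0.6]])
m3 = np.array([[0.4, 0.6], [0.2, 0.8]])
probs, labels = soft_vote([m1, m2, m3])
```

Averaging probabilities (rather than hard votes) is what lets a confident minority model counterbalance two uncertain ones, which is the balancing effect credited for the ensemble's performance.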
Published in: Bioengineering (First & Corresponding Author).
Challenge: The creation of truly representative human digital twins is hampered by the current practice of generating models based on data from isolated organs or body systems, leading to an incomplete picture of cross-system physiological interactions.
Our Solution: We proposed a conceptual and architectural framework for generating a "Holistic Human Digital Twin" by synchronizing multi-modal data streams across three major interacting physiological domains (e.g., Nervous, Circulatory, and Musculoskeletal Systems):
Nervous System: Utilizing EEG/fNIRS data.
Circulatory System: Utilizing ECG/PPG data.
Musculoskeletal System: Utilizing IMU/EMG data.
This approach provides a robust roadmap for data acquisition, processing (AI/ML), cloud storage, and security (Blockchain) to create a comprehensive digital model.
Outcome: A validated conceptual roadmap for utilizing wearable sensors to create continuously updating, multi-system digital replicas. This holistic model enables advanced, personalized applications, including simulated clinical trials, real-time disease progression tracking, and individualized treatment outcome prediction.
Published in: Bioengineering (Corresponding Author).
Challenge: Quantifying spastic cocontraction in children with Cerebral Palsy (CP) is challenging because traditional electromyography (EMG) normalization methods (like maximal isometric plantar flexion, IPF) provide inconsistent, non-physiological estimates of agonist muscle activity, leading to misdiagnosis in gait assessment.
Our Solution: We developed a more robust surface EMG biomarker based on normalizing the ankle plantar flexors cocontraction index (CCI) using the bipedal heel rise (BHR) maneuver. This approach provides agonist EMG values that are internally reliable and significantly larger (~50 ± 0.4% greater than IPF), minimizing the risk of CCI overestimation.
Outcome: Clinical study results demonstrated that the BHR-normalized CCIs were significantly smaller (p < 0.05) across control and CP populations, showing that this modified biomarker offers a more representative and accurate quantitative assessment for spastic gait management in pediatrics. Furthermore, we found that MG activity is greater distally during agonist action (BHR) but greater proximally during antagonist action (dorsiflexion), validating the need for broader sensor configurations to reduce estimation bias.
Published in: IEEE Transactions on Neural Systems and Rehabilitation Engineering (TNSRE) (Corresponding Author) and Human Movement Science.
DOI: https://doi.org/10.1109/TNSRE.2023.3329057 and https://doi.org/10.1016/j.humov.2021.102875
Challenge: Understanding the historical transformation and identifying decade-wise trends in vast, evolving scientific fields like Information Assurance (IA) is manually infeasible. Existing summarization techniques fail to retain logical integrity and thematic richness when analyzing massive, multi-decade corpora.
Our Solution: We leveraged Large Language Models (LLMs) and advanced Natural Language Processing (NLP) techniques to analyze over 62,000 documents spanning 1967 to 2024. Our approach combines:
Innovative Ensemble Prompts (Ev2) Method: An advanced prompt engineering technique fusing Chain of Density (CoD), Few-Shot Learning, role-based structuring, and adversarial testing for superior summarization quality.
Comprehensive Topic Detection: Utilization of BERTopic for robust topic detection, enabling structured, decade-wise trend analysis.
Targeted Summarization: Generation of focused, thematic summaries for each decade, ensuring key bibliographic references and logical topic progressions remain intact.
Outcome: The methodology provided a clear, decade-wise breakdown of key trends in IA research. The Ensemble Prompts (Ev2) method demonstrated significant quantitative superiority, outperforming traditional summarization methods by 16.7% to 29.6% in keyword definition tasks and excelling in 5 out of 7 tested metrics, thus delivering a highly reliable framework for large-scale scientific literature analysis.
Published in: Scientific Reports (Nature Portfolio) (Corresponding Author).
Challenge: Accurate discrimination of distinct epileptic seizure types (e.g., absence, complex, and myoclonic seizures) is highly challenging due to minute, indiscernible variations in the Electroencephalogram (EEG) signals and reliance on manual feature extraction techniques that lack generalizability.
Our Solution: We developed an automated Deep Learning (DL) framework leveraging EEG signals to classify three seizure types: Absence (ABSZ), Complex Partial (CPSZ), and Myoclonic (MCSZ).
Feature Generation: We exploited the phenomenon of significant change in Phase Synchronization (PS) among EEG channels during a seizure. We measured the mean phase coherence among each pair of EEG channels to generate a Phase Synchronization Matrix (PSM).
Deep Learning Pipeline: This PSM was transformed into 2D input images and fed into a custom-designed Convolutional Neural Network (CNN) pipeline to automatically learn detailed, hidden, and spatial features.
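The mean phase coherence computation behind the PSM can be sketched with the Hilbert transform. The toy 4-channel signal below is synthetic, not clinical EEG, and the implementation is a simplification of the published pipeline:

```python
import numpy as np
from scipy.signal import hilbert

def phase_sync_matrix(eeg):
    """Mean phase coherence |mean(exp(i*(phi_a - phi_b)))| for every channel
    pair, giving the Phase Synchronization Matrix used as the CNN input."""
    phases = np.angle(hilbert(eeg, axis=1))  # instantaneous phase per channel
    n = eeg.shape[0]
    psm = np.ones((n, n))
    for a in range(n):
        for b in range(a + 1, n):
            mpc = np.abs(np.exp(1j * (phases[a] - phases[b])).mean())
            psm[a, b] = psm[b, a] = mpc
    return psm

# Toy 4-channel recording: channels 0/1 share a 10 Hz rhythm, 2/3 are noise
t = np.linspace(0, 2, 500)
rng = np.random.default_rng(0)
eeg = np.vstack([np.sin(2 * np.pi * 10 * t),
                 np.sin(2 * np.pi * 10 * t + 0.3),
                 rng.normal(size=t.size),
                 rng.normal(size=t.size)])
psm = phase_sync_matrix(eeg)
```

A constant phase lag (channels 0 and 1) yields coherence near 1, while unrelated noise channels score much lower; seizures shift many pairwise entries at once, which is the spatial pattern the CNN learns from the matrix image.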
Outcome: The model demonstrated strong performance and robustness for classifying seizure types using a 5-fold cross-validation scheme.
Maximum Performance Metrics: Achieved up to 83.3% accuracy, 91.4% sensitivity, 82.9% specificity, and 83.0% weighted F1-score.
Average Performance (5-Fold CV): The average accuracy, sensitivity, specificity, and F1-scores were 80.8%, 79.6%, 89.2%, and 79.5% respectively.
Published in: 2023 IEEE 19th International Conference on Body Sensor Networks (BSN) (Corresponding Author).
Challenge: The predictive power of Human Digital Twins is fundamentally constrained by a reliance on bulky, single-system sensors that fail to capture the real-time, cross-system interactions vital for predicting complex adverse events (e.g., sepsis, neurological crises).
Our Solution: We engineered a novel, compact, and bio-adaptive hardware platform designed for simultaneous, high-fidelity data acquisition from multiple, interacting physiological domains. This required end-to-end full-stack development, integrating:
Custom Flexible PCBs (FPCBs): Engineered for modularity and seamless integration into motion-tolerant wearable devices.
Noise-Optimized Front-End Circuits: Designed to capture medical-grade signals across systems (cardiac, respiratory, neurological) with high SNR.
Proprietary Firmware: Providing the control logic for real-time processing and stable, high-speed data transmission.
Outcome & Status: These foundational hardware architectures are successfully generating continuous, multi-system physiological data streams. The platform is currently in the prototype validation phase; this successful high-fidelity data acquisition is now driving the development of the in silico models necessary to accurately forecast complex adverse events for Human Digital Twins.
Challenge: Generative AI in healthcare suffers from critical reliability issues, including "hallucinations" and reliance on outdated knowledge, which undermines trust and prevents clinical deployment of large language models (LLMs).
Our Solution: We rigorously benchmarked adaptation strategies for leading LLMs (including Llama-3 and Phi-3.5) by comparing Fine-Tuning (FT) alone against Retrieval-Augmented Generation (RAG) and hybrid architectures (FT+RAG) across the specialized MedQuAD medical dataset.
Outcome: Our research demonstrated that a hybrid FT+RAG approach yields superior diagnostic accuracy and semantic coherence. By combining the specialization achieved through fine-tuning with the ability to perform real-time external knowledge retrieval, we engineered a framework that significantly reduces factual errors and enhances the trustworthiness of automated medical question answering.
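The retrieval half of the hybrid architecture can be sketched with a simple TF-IDF retriever that selects context to prepend to the LLM prompt. The three-document corpus below is a toy stand-in, not MedQuAD, and the production system would use a stronger dense retriever:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Tiny stand-in corpus for the external medical knowledge base
docs = ["Metformin is a first-line treatment for type 2 diabetes.",
        "Ibuprofen is an NSAID used to relieve pain and inflammation.",
        "Amoxicillin is an antibiotic for bacterial infections."]

vec = TfidfVectorizer().fit(docs)
doc_mat = vec.transform(docs)

def retrieve(query, k=1):
    """Return the top-k documents to prepend to the LLM prompt (the 'R' in RAG)."""
    sims = cosine_similarity(vec.transform([query]), doc_mat)[0]
    return [docs[i] for i in np.argsort(sims)[::-1][:k]]

context = retrieve("Which drug treats type 2 diabetes?")
```

Grounding the fine-tuned model's answer in retrieved passages like `context` is what curbs hallucination: the model cites current external text instead of relying solely on its frozen training knowledge.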
Published in: Bioengineering (Corresponding Author).
Challenge: Developing practical, non-invasive therapeutic tools for memory loss in dementia requires robust, easy-to-use systems capable of quickly identifying music familiarity, a key trigger for memory recollection, using minimal, single-channel EEG data.
Our Solution: We engineered a pipeline for music familiarity classification utilizing single-channel EEG data collected from a mobile headset (Fp2 channel). The solution involved:
Feature Engineering: Extraction of six statistical features (including kurtosis) across four key frequency bands (theta, alpha, low beta, high beta).
ML/DL Benchmarking: Rigorous application of various machine learning algorithms (KNN, SVM, RF, LDA) and a Deep Learning CNN approach using spectrograms.
Optimized Classification: Identification of the Support Vector Machine (SVM) algorithm trained on kurtosis features as the optimal classifier.
Outcome: The SVM classifier achieved a 67% accuracy (with kurtosis features) in predicting music familiarity, demonstrating that single-channel EEG is sufficient for this application. This result, achieved in a user-independent manner, establishes a foundation for developing simplified, non-invasive, and quickly responsive therapeutic devices for memory monitoring and communication in dementia care.
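The band-limited kurtosis features at the heart of this pipeline can be sketched as follows. The band edges and filter order are assumptions for illustration, and the input is synthetic noise rather than real Fp2 EEG.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.stats import kurtosis

# EEG bands used in the study (Hz); the exact edges here are assumed.
BANDS = {"theta": (4, 8), "alpha": (8, 12),
         "low_beta": (12, 20), "high_beta": (20, 30)}

def band_kurtosis(eeg: np.ndarray, fs: float) -> dict:
    """Kurtosis of the band-passed signal in each frequency band."""
    feats = {}
    for name, (lo, hi) in BANDS.items():
        # 4th-order Butterworth band-pass, zero-phase filtered.
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        feats[name] = kurtosis(filtfilt(b, a, eeg))
    return feats

rng = np.random.default_rng(0)
fs = 256.0
feats = band_kurtosis(rng.standard_normal(10 * int(fs)), fs)  # 10 s of data
print(feats)
```

The resulting four-value feature vector per epoch would then feed the SVM classifier.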
Published in: 46th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC) (Corresponding Author).
Challenge: Continuous respiration monitoring in the Neonatal Intensive Care Unit (NICU) relies on wired, sticky electrodes that cause skin injuries and discomfort to fragile infants, necessitating a non-invasive, soft alternative.
Our Solution: We designed and validated a non-invasive e-textile piezoresistive pressure sensor system for respiratory rate (RR) monitoring. Our solution involved:
Dual Fabrication Prototypes: Development of two prototypes, one hand-stitched and one embroidered on denim using an industrial machine, to optimize manufacturability and performance.
Dedicated Acquisition Pipeline: Creation of a custom data acquisition (DAQ) and signal processing pipeline tuned for the high-fidelity detection of subtle breathing movements.
Outcome: Validation on a high-fidelity NICU baby mannequin showed exceptional accuracy, with the embroidered sensor achieving a relative accuracy of 99.39%. This result demonstrates a scalable, comfortable, and non-adhesive e-textile alternative for critical neonatal care.
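The core of the signal-processing pipeline, estimating respiratory rate from the piezoresistive pressure waveform, can be sketched with simple peak counting. The minimum peak spacing and prominence threshold are assumptions, and a clean synthetic sinusoid stands in for real sensor data.

```python
import numpy as np
from scipy.signal import find_peaks

def respiratory_rate(pressure: np.ndarray, fs: float) -> float:
    """Estimate breaths per minute by counting peaks in the pressure signal."""
    x = pressure - pressure.mean()
    # Require peaks >= 1 s apart (assumed ceiling of 60 breaths/min) and
    # with prominence above half the signal's standard deviation.
    peaks, _ = find_peaks(x, distance=int(fs), prominence=0.5 * x.std())
    duration_min = len(pressure) / fs / 60.0
    return len(peaks) / duration_min

fs = 50.0
t = np.arange(0, 60, 1 / fs)
breathing = np.sin(2 * np.pi * 0.5 * t)  # synthetic 0.5 Hz = 30 breaths/min
print(respiratory_rate(breathing, fs))
```

Real neonatal signals would additionally need band-pass filtering and motion-artifact rejection before peak detection.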
Published in: Journal of Signal Processing Systems.
Challenge: Despite the miniaturization of fNIRS electronics, the overall performance of portable neuroimaging systems is limited by the physical optodes (sources/detectors). Traditional designs fail to optimize skin-optode coupling, comfort, and stability, leading to poor Signal-to-Noise Ratio (SNR) and high motion artifacts during physical activity.
Our Solution: We implemented an iterative, human-centered design framework utilizing 3D printing and laser cutting to develop high-performance fNIRS optodes. The final designs included:
Iterative User Feedback: Continuous feedback from participants refined the design for improved user comfort and optimal integration into headgear.
Modular Optodes: Two primary designs were fabricated: one for a dedicated forehead patch and one integrable into an existing EEG electrode head cap, enabling dual-modality (fNIRS/EEG) signal acquisition.
Noise Optimization: Systematic study of noise characteristics during physical activities informed design improvements to reduce motion artifacts.
Outcome: The finalized, 3D-printed fNIRS optodes proved comfortable, easy to use, and effective for long-term brain imaging. Experimental validation demonstrated superior SNR and resistance to motion artifacts, paving the way for the robust, wearable fNIRS systems developed in Thrust 1.
Published in: Proceedings of SPIE - Design and Quality for Biomedical Technologies XII (First & Corresponding Author).
Challenge: Validating portable functional Near-Infrared Spectroscopy (fNIRS) systems for reliable brain imaging requires rigorous experimental studies that demonstrate the ability to capture subtle, graded hemodynamic responses in the prefrontal cortex (PFC) during controlled cognitive tasks in naturalistic, mobile settings.
Our Solution: We conducted a validation study using our laboratory-developed portable fNIRS system, WearLight, on a cohort of 25 college students. We implemented a robust experimental protocol involving 32 blocks of n-back Working Memory (WM) tasks with four pseudo-randomized difficulty levels to systematically induce incremental cognitive load.
Outcome: Experimental results demonstrated the functioning of the WearLight system by showing incremental mean hemodynamic responses (HbO and Hb) induced by increasing WM load across the PFC. The study provided key neurophysiological insights, including a strong left-PFC lateralization of activation, confirming WearLight's capability to measure cognitive load in naturalistic environments and paving the way for advanced brain-computer interface (BCI) development.
Published in: Sensors (First & Corresponding Author).
Challenge: Accurate, real-time detection of epileptic seizures is crucial for treatment but often challenged by the subjective nature of EEG interpretation and the need for computationally efficient frameworks suitable for edge hardware deployment.
Our Solution: We proposed a computationally efficient framework for precise epileptic episode detection leveraging single-channel EEG. The solution utilizes a novel feature engineering and classification pipeline:
Non-Linear Feature Engineering: Integration of Hurst Exponent Analysis (to capture long-term memory characteristics of EEG) and Daubechies 4 Discrete Wavelet Transformation (for time-frequency feature extraction).
Optimized Classification: Rigorous benchmarking of models (including SVM, LSTM, and Random Forest Classifier) using features selected via ANOVA and Random Forest Regression.
Outcome: The Random Forest Classifier outperformed other models, achieving a remarkable 97% accuracy and 97.2% sensitivity. The framework demonstrated strong generalization on the CHB-MIT scalp EEG database and is designed to be highly computationally efficient, making it suitable for real-time implementation on edge hardware to provide individualized regimens and improve patient outcomes.
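The Hurst exponent feature, which captures the long-term memory characteristics of EEG, can be estimated with classic rescaled-range (R/S) analysis. This is a minimal sketch: the window sizes are assumed, and white noise stands in for real CHB-MIT recordings.

```python
import numpy as np

def hurst_rs(x: np.ndarray, window_sizes=(16, 32, 64, 128, 256)) -> float:
    """Estimate the Hurst exponent via rescaled-range (R/S) analysis."""
    log_n, log_rs = [], []
    for n in window_sizes:
        rs_vals = []
        for start in range(0, len(x) - n + 1, n):
            w = x[start:start + n]
            z = np.cumsum(w - w.mean())   # cumulative deviation series
            r = z.max() - z.min()         # range of cumulative deviations
            s = w.std()                   # standard deviation of the window
            if s > 0:
                rs_vals.append(r / s)
        log_n.append(np.log(n))
        log_rs.append(np.log(np.mean(rs_vals)))
    # The slope of log(R/S) versus log(n) is the Hurst exponent.
    return float(np.polyfit(log_n, log_rs, 1)[0])

rng = np.random.default_rng(1)
h = hurst_rs(rng.standard_normal(4096))  # white noise: H should be near 0.5
print(round(h, 2))
```

Values of H near 0.5 indicate uncorrelated noise, while H > 0.5 indicates the persistent long-range correlations that shift around seizure onset.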
Published in: Bioengineering (Corresponding Author).
Digital Therapy for Mindful Breathing: Unlock the Power of Breath with the BreathHRV App
The BreathHRV app was designed to help you harness the ancient practice of mindful breathing for improved well-being and mental health.
The app offers a unique combination of guided breathing exercises, meditation techniques, and personalized insights to help you achieve balance, reduce stress, and enhance your overall quality of life.
Challenge: Critically reviewing the state of AI adoption in the fragmented Indian healthcare sector to address the significant gaps in integrating ethical, legal, and regulatory-compliant (Trustworthy) AI solutions, which is essential for ensuring patient safety and public trust.
Our Solution: We conducted a systematic scoping review of 15 articles selected from over 1,100 documents across multiple databases and project websites. The study rigorously assessed existing AI implementations against key ethical and legal considerations, including privacy, transparency, fairness, and accountability.
Outcome: The research provided key takeaways demonstrating AI's potential to enhance global disease detection and management, while identifying a significant gap in the integration of ethical, legal, and regulatory-compliant AI solutions within this regional context. The study outlines a future roadmap with actionable recommendations grounded in regional and international policy (the DPDP Act, HIPAA, GDPR) that prioritize patient safety, data privacy, and regulatory compliance for building responsible, impactful AI solutions adaptable to other developing regions.
Published in: AI (Corresponding Author).
Challenge: Measuring, predicting, and enhancing cognitive capabilities in highly dynamic, naturalistic high-stakes environments (like military training) requires designing and deploying a robust Body Sensor Network (BSN) architecture capable of synchronized, multimodal data collection in virtual reality (VR).
Our Solution: In collaboration with the Center for Applied Brain and Cognitive Sciences (CABCS) and the U.S. Army DEVCOM SC, we designed, developed, and deployed a comprehensive BSN architecture. This system collects synchronized, multi-modal data streams:
Brain: Neurological information (e.g., EEG, fNIRS signals).
Body: Physiological and autonomic activity (e.g., heart rate, motion).
Behavioral: Performance metrics captured within the VR environment.
Outcome & Status: This system successfully measures and integrates diverse data types from U.S. soldiers in immersive VR environments. The ongoing work is focused on developing advanced signal processing methods and AI models to enhance cognitive capabilities and human-system interactions for individuals and teams operating under naturalistic stress.
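A core requirement of such a BSN is aligning asynchronously sampled streams (e.g., 256 Hz EEG versus a few-Hz heart-rate channel) onto one shared clock before fusion. The sketch below shows one simple approach, linear interpolation onto a common timebase; the sampling rates and signals are synthetic stand-ins, not the deployed system's actual synchronization method.

```python
import numpy as np

def align_streams(streams: dict, fs_out: float) -> dict:
    """Resample asynchronously sampled streams onto one shared clock.

    Each stream is a (timestamps_s, values) pair; the output shares a
    uniform timebase covering the overlap of all streams, filled by
    linear interpolation.
    """
    t0 = max(t[0] for t, _ in streams.values())
    t1 = min(t[-1] for t, _ in streams.values())
    t_common = np.arange(t0, t1, 1.0 / fs_out)
    return {"t": t_common,
            **{name: np.interp(t_common, t, v)
               for name, (t, v) in streams.items()}}

# Synthetic example: EEG at 256 Hz, heart rate at 4 Hz, over ~10 s.
t_eeg = np.arange(0, 10, 1 / 256)
t_hr = np.arange(0.5, 9.5, 1 / 4)
out = align_streams({"eeg": (t_eeg, np.sin(t_eeg)),
                     "hr": (t_hr, 60 + 5 * np.sin(0.2 * t_hr))},
                    fs_out=32.0)
print(out["t"][0], len(out["eeg"]) == len(out["hr"]))
```

In practice, hardware timestamping or a protocol such as Lab Streaming Layer would provide the shared clock; the alignment principle is the same.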
Challenge: Chronic management of Parkinson’s Disease (PD) requires frequent, subjective clinic assessments. Implementing effective telemedicine demands an Internet-of-Things (IoT) infrastructure capable of enabling objective, high-fidelity symptom monitoring of upper limb movements in out-of-clinic settings (e.g., patient homes).
Our Solution: We developed the Kaya IoT framework, an integrated Edge/Fog architecture utilizing a Body Sensor Network (BSN) for motor symptom assessment. The system components include:
Edge Device (Smart Gloves): Wearable e-textile smart gloves integrated with finger flex sensors and an Inertial Measurement Unit (IMU) to capture movement tasks based on the clinical UPDRS protocol.
Fog Computing Architecture: A Fog-based BSN architecture connecting the gloves (Edge) to a local Raspberry Pi (Fog) device that hosts the Machine Learning (ML) classification model for real-time task assessment, minimizing cloud latency and network traffic.
Optimal Classifier: We developed and tested KNN, SVM, and Decision Tree models, identifying the SVM model as the optimal classifier for deployment.
Outcome: The SVM model achieved strong performance (94% training/testing accuracy and 93% validation accuracy) with a mean inference time of only 560 µs on the Fog node. This validates the efficiency and accuracy of the Edge/Fog-driven BSN architecture for objective, remote motor screening, demonstrating a viable telehealth infrastructure for the chronic management of PD.
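The Fog-node classification step can be sketched as below: an RBF-kernel SVM trained on feature vectors, with single-sample inference latency measured as it would run on the Raspberry Pi. The 12-dimensional synthetic features are a stand-in for the real flex-sensor and IMU statistics, and the latency measured here depends on the host hardware, not the Fog node reported in the paper.

```python
import time
import numpy as np
from sklearn.svm import SVC

# Synthetic stand-in for UPDRS-task features (flex-sensor + IMU statistics).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (200, 12)), rng.normal(2, 1, (200, 12))])
y = np.array([0] * 200 + [1] * 200)

clf = SVC(kernel="rbf").fit(X, y)

# Measure mean single-sample inference latency, averaged over 1000 calls.
sample = X[:1]
start = time.perf_counter()
for _ in range(1000):
    clf.predict(sample)
latency_us = (time.perf_counter() - start) / 1000 * 1e6
print(f"mean inference time: {latency_us:.0f} µs")
```

Keeping the model this small is what allows sub-millisecond inference on a Fog device while avoiding cloud round-trip latency.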
Published in: Smart Health.
Challenge: Advancing smart, precision, and sustainable agriculture is constrained by the lack of low-cost, non-destructive, and highly accurate diagnostic tools for field-based plant health assessment, including both disease identification and biomass estimation (e.g., chlorophyll content).
Our Solution: We developed a dual-pronged approach leveraging portable devices and advanced vision models:
Disease Detection (ViT-SmartAgri): We utilized a cutting-edge Vision Transformer (ViT) model within a smartphone application to accurately classify 10 different tomato disease classes (plus a healthy class) from 10,010 leaf images.
Biomass Estimation (Chlorophyll): We pioneered a non-destructive method combining smartphone contact imaging with a 1-D Convolutional Neural Network (CNN) for precise tea leaf chlorophyll estimation, a vital biomarker for plant nitrogen status.
Outcome: This project demonstrates the potential of integrating advanced AI/ML with simple mobile hardware for field-based diagnostics:
Disease Classification: The ViT model achieved superior performance, with 90.99% testing accuracy in disease classification.
Chlorophyll Estimation: The 1-D CNN outperformed conventional regression models, achieving a Mean Absolute Error (MAE) of 2.96 and a coefficient of determination (R²) of 0.82.
This research validates the approach of using low-cost, non-destructive, and highly accurate smartphone-based sensing to drive precision and sustainable agricultural practices.
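The two regression metrics used to evaluate the chlorophyll model can be computed as below. The chlorophyll values here are illustrative numbers, not data from the study.

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean Absolute Error: average magnitude of the residuals."""
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

def r2(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return float(1 - ss_res / ss_tot)

# Illustrative chlorophyll readings vs. hypothetical model predictions.
y_true = [30.0, 42.5, 55.1, 61.0, 48.3]
y_pred = [28.4, 44.0, 52.9, 63.2, 47.1]
print(mae(y_true, y_pred), round(r2(y_true, y_pred), 3))
```

MAE is in the same units as the chlorophyll measure, while R² expresses the fraction of variance the model explains, which is why the paper reports both.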
Published in: Agronomy and Agriculture (Corresponding Author).
DOI: https://doi.org/10.3390/agronomy14020327 and https://doi.org/10.3390/agriculture14081262
Challenge: Reconstructing fully 3D tissue properties (like those inside a human breast or head) using Diffuse Optical Tomography (DOT) demands high computational power, causing typical reconstruction times to take hours, which prevents physicians from achieving real-time image monitoring during a patient scan.
Our Solution: We proposed a computationally superior algorithm that achieved real-time 3D DOT image reconstruction. The significant reduction in computation time was achieved through a two-fold optimization strategy:
Algorithmic Improvement: Implementation of the Broyden approach for updating the Jacobian matrix, which is the most computationally intensive step in DOT reconstruction, allowing for a rank-1 update rather than full re-computation.
Parallel Computing: Utilization of multi-node, multi-threaded GPU computation leveraging the CUDA parallel computing platform to process the heavy linear algebra tasks simultaneously.
Outcome: This framework enabled the visualization of 3D images as the patient underwent a scan. This approach successfully reduced the image reconstruction time to less than 1 second per iteration, achieving a reconstruction speed of up to 2 frames per second (fps). This breakthrough transformed DOT from an offline diagnostic tool to a fast, real-time monitoring modality.
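The Broyden rank-1 update that replaces full Jacobian re-computation can be sketched in a few lines. The toy nonlinear function below is illustrative only; in the DOT solver the same update is applied to the (much larger) sensitivity matrix.

```python
import numpy as np

def broyden_update(J, dx, df):
    """Broyden rank-1 update: J_new = J + ((df - J dx) dx^T) / (dx^T dx).

    Replaces the full Jacobian re-computation (the dominant cost in DOT
    reconstruction) with an O(m*n) correction built from the latest step
    dx and the observed change in the forward model df.
    """
    dx = dx.reshape(-1, 1)
    df = df.reshape(-1, 1)
    return J + (df - J @ dx) @ dx.T / float(dx.T @ dx)

# Toy check on f(x) = (x0^2, x0*x1).
f = lambda x: np.array([x[0] ** 2, x[0] * x[1]])
x0 = np.array([1.0, 2.0])
x1 = np.array([1.1, 2.1])
J0 = np.array([[2.0, 0.0], [2.0, 1.0]])  # exact Jacobian at x0
J1 = broyden_update(J0, x1 - x0, f(x1) - f(x0))
# The secant condition holds exactly: J1 @ (x1 - x0) == f(x1) - f(x0).
print(np.allclose(J1 @ (x1 - x0), f(x1) - f(x0)))  # prints True
```

Because the update is a single outer product, it also maps cleanly onto the GPU linear-algebra pipeline used for the rest of the reconstruction.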
Funding/IP: Funded by DST Award # DST1163
Published in: International Journal of Biomedical Imaging (First Author).
Patent # 3096/CHE/2015 was granted. https://drive.google.com/drive/folders/1qGxX-XyGWQM1yCGPd4sutQENv8u-W24K?usp=sharing
Challenge: Achieving rapid, continuous 3D monitoring of pathological or functional changes within a specific localized area of deep tissue is difficult because conventional Diffuse Optical Tomography (DOT) systems require a large volume of measurement data, leading to lengthy reconstruction times.
Our Solution: We proposed and developed a specialized Region-of-Interest (ROI) tissue scanning method based on DOT principles, drastically reducing the number of required measurement data. This approach included:
Wearable Optode Patch: Designing a wearable silicone rubber patch containing a limited, optimized number of LEDs and photodetectors for direct mounting onto the tissue surface.
Continuous Monitoring: Developing a system capable of continuously acquiring data from the patch and performing reconstruction, enabling rapid surveying of pathological and functional status.
Hybrid Imaging Strategy: Proposing the use of structural imaging modalities (e.g., Ultrasound or MRI) to initially localize the ROI, allowing the ROI DOT system to continuously monitor the targeted area for functional changes.
Outcome: The system continuously acquired measurement data and reconstructed the 3D optical property distributions of dynamic tissue-mimicking phantoms. The results encouraged the development of a functional spectroscopic ROI DOT system capable of deriving chromophore concentrations (oxyhemoglobin, deoxyhemoglobin, etc.) to characterize deep-seated tumors at high speed.
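Deriving chromophore concentrations from multi-wavelength measurements reduces to solving a small linear system (the modified Beer-Lambert law). This sketch uses illustrative extinction coefficients, not tabulated constants, and ignores the wavelength-dependent differential pathlength factor for brevity.

```python
import numpy as np

# Assumed molar extinction coefficients for [HbO2, Hb] at two NIR
# wavelengths; the values are illustrative, not tabulated constants.
E = np.array([[0.30, 0.70],   # ~760 nm: Hb absorbs more strongly
              [0.60, 0.35]])  # ~850 nm: HbO2 absorbs more strongly

def chromophores(delta_od: np.ndarray, pathlength: float) -> np.ndarray:
    """Solve the modified Beer-Lambert system dOD = (E * L) @ c.

    Returns c = [HbO2, Hb] concentration changes via least squares,
    which generalizes to more than two wavelengths.
    """
    c, *_ = np.linalg.lstsq(E * pathlength, delta_od, rcond=None)
    return c

delta_od = np.array([0.012, 0.009])  # measured optical-density changes
c = chromophores(delta_od, pathlength=6.0)
print(c)
```

With more wavelengths the same least-squares solve also resolves additional chromophores such as water and lipids.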
Published in: Review of Scientific Instruments (First Author).
Challenge: Detecting small tumors and diagnosing breast cancer requires continuous, fully 3D functional imaging of deep tissue, but traditional DOT systems are slow, computationally expensive, and lack the portability needed for widespread screening.
Our Solution: We invented and fabricated a fast, cost-effective 3D DOT instrument comprising custom hardware, embedded systems, and parallel processing algorithms:
Custom Optical Hardware: Designed an optical fiber switching mechanism to illuminate multiple tissue surface portions using four Near-Infrared (NIR) wavelengths and a high-speed photodetector system for light measurement.
Embedded GPU System: Utilized an ARM-based 32-bit processor (ARM Cortex-A7 CPU and Mali400MP2 GPU) under Linux OS for system control, detector selection, and noise-free lock-in detection.
GPU-Accelerated Reconstruction: Data is transferred to a GPU-enabled host computer for high-speed 3D image reconstruction.
Outcome: The system was successfully validated using tissue-mimicking phantoms (cylindrical and breast-shaped). It non-invasively reconstructs 3D images of tissue with superb imaging quality, localization, and contrast. This breakthrough paves the way for the development of a low-cost, handheld DOT system for real-time, onsite tumor characterization.
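The lock-in detection scheme used to pull weak NIR signals out of noise can be sketched in software: multiply by in-phase and quadrature references at the modulation frequency and low-pass (here, average). The sampling rate, modulation frequency, and noise level below are assumptions for illustration.

```python
import numpy as np

def lockin(signal: np.ndarray, fs: float, f_ref: float) -> float:
    """Digital lock-in detection at reference frequency f_ref.

    Demodulates with in-phase and quadrature references and averages
    (the low-pass step), returning the amplitude of the component at
    f_ref while rejecting broadband noise.
    """
    t = np.arange(len(signal)) / fs
    i = np.mean(signal * np.cos(2 * np.pi * f_ref * t))  # in-phase
    q = np.mean(signal * np.sin(2 * np.pi * f_ref * t))  # quadrature
    return 2.0 * np.hypot(i, q)

rng = np.random.default_rng(0)
fs, f_mod = 10_000.0, 1_000.0
t = np.arange(0, 1, 1 / fs)
# A 0.05-amplitude modulated signal buried in unit-variance noise.
noisy = 0.05 * np.cos(2 * np.pi * f_mod * t) + rng.standard_normal(t.size)
amp = lockin(noisy, fs, f_mod)
print(round(amp, 3))  # recovers roughly the 0.05 amplitude despite the noise
```

In the instrument this demodulation runs on the embedded ARM/Mali platform against the known source modulation, which is what makes the photodetection effectively noise-free.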
Funding/IP: Funded by DST Award # DST 1163 | Near-infrared Diffuse Optical Tomography System, Indian Patent, 3096/CHE/2015.