An applied CS lab making a real impact in academia
ABOUT US
Who we are: Our story
Cyrion Labs was founded to bridge the gap between cutting-edge AI research and real-world applications. Our team of researchers, engineers, and innovators works at the intersection of artificial intelligence, machine learning, and computational sciences to drive ethical and impactful technological advancements.
We focus on computational research for social good, partnering with institutions to create AI-driven solutions that address societal challenges, such as safer internet access for students, bias reduction in AI, and accessibility tools for underserved communities. Our research also extends into natural language processing and generative AI, exploring the frontiers of AI-generated content, language models, and human-computer interaction.
Collaboration is at the core of our mission. We work with universities, independent researchers, and industry partners to produce high-impact studies, with plans to publish in leading conferences such as NeurIPS, CVPR, and ICLR. Our work has already contributed to large-scale projects, including a partnership with a public school district of 66,000 students to develop safer internet access solutions.
Our Specialities
What we do
CASE STUDIES
Explore our latest projects.
Vega: A Novel Approach to Detecting Modern Web-Proxies in K12 Environments
This paper presents a comprehensive analysis of inherent vulnerabilities in contemporary web filtering systems that rely on manual review and elementary keyword detection, which permit advanced circumvention through domain manipulation and dynamic proxy deployment. We detail how attackers exploit registration of new domains and subdomain configuration to bypass conventional filters, thereby exposing the limitations of existing solutions that primarily utilize client-side scripts for proxy detection. To counter these challenges, we introduce Vega—a novel, multi-tiered detection framework employing a four-level hierarchy. Level 1 implements rapid scanning for distinct JavaScript signatures characteristic of proxy implementations. Level 2 integrates service worker introspection to monitor and detect anomalous fetch-interception behaviors. Level 3 leverages comprehensive HTML content analysis to identify rewriting artifacts indicative of proxy-mediated content transformation. Finally, Level 4 performs rigorous network traffic scrutiny, incorporating protocol-level analysis to detect distinctive markers of backend communications, such as those conforming to TOMPHTTP and Wisp specifications. Vega operates entirely on the client side and incorporates an adaptive caching mechanism to optimize computational load by eliminating redundant scans. Experimental results demonstrate that Vega significantly enhances detection accuracy and response efficiency compared to traditional filtering methods, offering a robust defense against the evolving landscape of web-based circumvention techniques.
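To make the four-level hierarchy concrete, the minimal Python sketch below illustrates the escalating detection cascade and the adaptive cache described above. The production system runs as client-side JavaScript; every signature list, class name, and threshold here is an illustrative assumption rather than Vega's actual rule set.

```python
# Hypothetical sketch of Vega's four-level detection cascade.
# The deployed system is client-side JavaScript; names, signatures,
# and markers below are illustrative only.
from dataclasses import dataclass

@dataclass
class PageSnapshot:
    scripts: list[str]          # inline/external JavaScript source text
    service_workers: list[str]  # registered service-worker source text
    html: str                   # rendered document HTML
    requests: list[str]         # observed request URLs / protocol hints

# Illustrative signature lists -- not Vega's real detection rules.
JS_SIGNATURES = ["__uv$config", "proxyLocation", "rewriteUrl"]
SW_FETCH_HOOKS = ["addEventListener('fetch'", "respondWith("]
REWRITE_ARTIFACTS = ["data-proxied-href", "/service/https/"]
BACKEND_MARKERS = ["tomphttp", "wisp://", "/bare/v"]

_cache: dict[str, bool] = {}   # adaptive cache to skip redundant scans

def detect_proxy(page_id: str, snap: PageSnapshot) -> bool:
    """Run the four levels in order, short-circuiting on the first hit."""
    if page_id in _cache:
        return _cache[page_id]
    verdict = (
        # Level 1: fast scan for known proxy JavaScript signatures
        any(sig in src for src in snap.scripts for sig in JS_SIGNATURES)
        # Level 2: service-worker introspection for fetch interception
        or any(h in sw for sw in snap.service_workers for h in SW_FETCH_HOOKS)
        # Level 3: HTML analysis for proxy rewriting artifacts
        or any(a in snap.html for a in REWRITE_ARTIFACTS)
        # Level 4: network traffic markers (e.g. TOMPHTTP / Wisp backends)
        or any(m in url.lower() for url in snap.requests for m in BACKEND_MARKERS)
    )
    _cache[page_id] = verdict
    return verdict
```

Because cheaper levels run first and results are cached per page, the expensive HTML and network analyses are only reached for pages that pass the lightweight checks.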
PsyQ: A Multimodal Embodied Conversational Agent for Mental Health Screening and Intervention
This paper presents a comprehensive technical analysis of Tessa, an Embodied Conversational Agent (ECA) engineered to deliver structured mental health interventions through naturalistic dialogue. Tessa integrates a multimodal sensor suite—including audio processing, facial expression analysis, posture detection, and prosodic feature extraction—to capture and interpret nuanced user behaviors in real time. Leveraging advanced Motivational Interviewing (MI) and Cognitive Behavioral Therapy (CBT) protocols within a biweekly intervention pipeline, the system systematically identifies, challenges, and reframes maladaptive cognitive patterns. Tessa’s architecture comprises four primary modules. The first module, Multimodal Data Acquisition, synchronizes heterogeneous data streams to generate a robust representation of user affect and engagement. The second module, Dynamic Backchanneling, employs real-time analytics to produce adaptive nonverbal cues and animations that reinforce rapport. The third, the Intervention Engine, dynamically calibrates therapeutic strategies by continuously monitoring user responsiveness and contextual signals. Finally, the Clinical Escalation Module facilitates seamless therapist matching, enabling a secure transition to professional care when necessary. Operating entirely on-device to preserve privacy and reduce latency, Tessa demonstrates significant improvements in user rapport and intervention adherence compared to traditional chatbot systems, thus offering a technologically advanced bridge between AI-driven support and conventional therapy.
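The sketch below shows one way the four modules could be wired together for a single conversational turn, with the clinical escalation check taking priority over the intervention response. All class and method names are hypothetical assumptions for illustration; the paper does not specify this interface.

```python
# Hypothetical per-turn orchestration of Tessa's four modules.
class DataAcquisition:
    def capture(self) -> dict:
        # Module 1: synchronize audio, facial, posture, and prosody streams.
        return {"audio": ..., "face": ..., "posture": ..., "prosody": ...}

class Backchanneling:
    def react(self, signals: dict) -> None:
        # Module 2: emit adaptive nods/animations from real-time affect cues.
        pass

class InterventionEngine:
    def respond(self, signals: dict, history: list) -> str:
        # Module 3: choose an MI/CBT move calibrated to engagement and context.
        return "reflective_listening"

class ClinicalEscalation:
    def should_escalate(self, signals: dict) -> bool:
        # Module 4: flag risk indicators that warrant a therapist hand-off.
        return False

def run_turn(acq, back, engine, escal, history):
    signals = acq.capture()
    back.react(signals)
    if escal.should_escalate(signals):
        return "escalate_to_clinician"
    reply = engine.respond(signals, history)
    history.append(reply)
    return reply
```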
Ellie: Multimodal Identification of DSM-5 Criteria for Mental Health Disorders
This paper presents a comprehensive technical analysis of a novel multimodal AI framework engineered to detect granular DSM-5 diagnostic criteria with unprecedented precision across a wide array of mental health conditions. The framework employs a quad-modal fusion approach, integrating textual data, vocal acoustics, facial expression analytics, and postural dynamics into a unified deep learning architecture. This integration facilitates an interpretable, fine-grained analysis of psychological symptoms that adheres closely to established clinical standards. The system is structured into four primary modules. The first module, Multimodal Data Acquisition, synchronizes heterogeneous data streams in real time, ensuring robust signal capture across all modalities. The second module, Feature Extraction and Representation, leverages advanced deep learning techniques to transform raw sensory inputs into high-level, clinically relevant features. The third module, Quad-Modal Fusion, employs an attention-based mechanism to integrate disparate data sources, thereby enhancing the diagnostic granularity and interpretability of the model. Finally, the Prognostic Classification module implements binary diagnostic decision-making, achieving superior accuracy by transcending the limitations of traditional self-reported assessments. By combining quad-modal fusion with state-of-the-art deep learning optimization, this framework sets a new benchmark for automated, AI-assisted diagnostics in precision mental health care. Its capacity for scalable, real-time analysis bridges the gap between automated assessments and professional clinical interventions, offering significant implications for the future of mental health diagnostics.
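As a rough illustration of the quad-modal fusion step, the PyTorch sketch below assumes per-modality extractors have already produced fixed-size embeddings and fuses them with a small attention layer before per-criterion classification. Dimensions, layer choices, and names are assumptions for illustration, not the paper's architecture.

```python
# Minimal sketch of an attention-based quad-modal fusion head.
import torch
import torch.nn as nn

class QuadModalFusion(nn.Module):
    def __init__(self, dim: int = 256, n_criteria: int = 1):
        super().__init__()
        # One learned projection per modality: text, voice, face, posture.
        self.proj = nn.ModuleList([nn.Linear(dim, dim) for _ in range(4)])
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.classifier = nn.Linear(dim, n_criteria)  # binary score per criterion

    def forward(self, text, voice, face, posture):
        # Treat the four modalities as a length-4 token sequence per sample.
        feats = [p(x) for p, x in zip(self.proj, (text, voice, face, posture))]
        tokens = torch.stack(feats, dim=1)            # (batch, 4, dim)
        fused, weights = self.attn(tokens, tokens, tokens)
        pooled = fused.mean(dim=1)                    # (batch, dim)
        logits = self.classifier(pooled)
        return torch.sigmoid(logits), weights         # weights aid interpretability

# Example: a batch of 8 samples with 256-d features per modality.
x = [torch.randn(8, 256) for _ in range(4)]
probs, attn_weights = QuadModalFusion()(*x)
```

Returning the attention weights alongside the predictions is one way to expose which modalities drove each decision, matching the interpretability goal described above.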
OUR TEAM
Hear from our world-class team of researchers
-
I've had a passion for making things ever since I was young. Through Cyrion Labs, I'm able to harness this passion to make a real impact in the world with my skills.
Trisanth Srinivasan
-
Technology can change lives, but too often, the people who need it the most are the last to benefit. Cyrion Labs is about closing that gap: building and deploying AI solutions that actually reach those who need them.
Santosh Patapati
We're always looking for talented researchers, no matter your background, education, or age.