As a cybersecurity researcher and Army veteran with 14 years of operational leadership, I investigate the security of autonomous systems and critical infrastructure. My research explores the intersection of neuromorphic computing and cyber-physical systems (CPS), focusing on how neuro-inspired architectures can provide resilient, low-power security for edge environments. My objective is to bridge the gap between high-stakes operational risk management and formal cryptographic and AI assurance models.

Research Agenda

Core areas of inquiry for doctoral study and future development.

Adversarial Resilience in Spiking Neural Networks (SNNs)

This research investigates how the temporal dynamics and sparsity of neuromorphic architectures provide a natural defense against adversarial attacks. It explores the unique noise tolerance of SNNs to develop models that are inherently resistant to the perturbations that typically fool traditional deep learning systems.

Neuromorphic Edge-Native Intrusion Detection (N-EID)

Focusing on the extreme power constraints of IoT and military edge devices, this topic explores using brain-inspired hardware to run real-time security analytics. Complex threat detection can then happen directly on the device, without the latency or data-leakage risks of sending information back to a central cloud.
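The on-device principle can be illustrated with a deliberately lightweight sketch (ordinary Python, not neuromorphic hardware): a constant-memory streaming detector that learns a baseline for a scalar telemetry signal and flags sharp deviations without any data ever leaving the sensor. The class name, threshold, and warmup values are illustrative assumptions, not part of the research itself.

```python
import math

class EdgeAnomalyDetector:
    """Streaming z-score detector sized for constrained edge devices.

    Maintains a running mean/variance (Welford's online algorithm) over a
    scalar telemetry signal -- e.g. packets per second -- and flags readings
    that deviate sharply from the learned baseline, using O(1) memory.
    """

    def __init__(self, threshold: float = 4.0, warmup: int = 30):
        self.threshold = threshold   # z-score above which a reading is anomalous
        self.warmup = warmup         # samples to observe before alerting
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0                # running sum of squared deviations

    def observe(self, x: float) -> bool:
        """Update the baseline with x; return True if x looks anomalous."""
        anomalous = False
        if self.n >= self.warmup and self.n > 1:
            std = math.sqrt(self.m2 / (self.n - 1))
            if std > 0 and abs(x - self.mean) / std > self.threshold:
                anomalous = True
        # Welford's online update: no sample history is retained.
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        return anomalous
```

Feeding the detector steady readings around 100 produces no alerts; a sudden spike to 10,000 is flagged immediately. The design choice worth noting is the O(1) memory footprint: nothing about the traffic history is stored, which is exactly the property that makes on-device analytics viable under edge power and storage budgets.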

Lifelong Learning for Autonomous "Self-Healing" Security

Inspired by Dr. Kudithipudi's work in lifelong learning, this area focuses on AI that evolves in real time as it encounters new threats. Instead of a static security model, it envisions "self-healing" systems that use continual learning to recognize zero-day compromises and autonomously adjust their internal parameters to maintain operational integrity.

Trust Assurance in Neuro-Inspired Cyber-Physical Systems (CPS)

This research bridges the gap between hardware-level neuromorphic design and high-level ethical AI frameworks. It focuses on creating "Assurance Models" that provide formal proofs of security for autonomous systems (like drones or robotic infrastructure), ensuring that brain-inspired AI behaves predictably and remains trustworthy in high-stakes, adversarial environments.

Active Preprints & Technical Analysis

Translating academic theory into analytical preprints intended for peer review.

IoT Infrastructure Security

Mitigating Digital Risk Across The Expanding IoT Ecosystem

As our physical world merges with the digital, this paper proposes a layered defense-in-depth framework to close the critical security gaps found in everything from smart home devices to industrial sensors.

IoT • Authentication • Encryption • Privacy
Security Architecture

HealthSecure's Comprehensive Cybersecurity Architecture

Moving beyond traditional perimeters, this analysis explores how a unified strategy of behavioral analytics and automated enforcement creates a resilient, identity-centric shield for modern complex networks.

NIST 800-53 • Zero Trust • GPO
Vulnerability Research

EXIF Metadata as a Hidden Threat Vector

Hidden in plain sight within everyday image files, this study deconstructs how threat actors weaponize EXIF metadata to execute unauthorized commands, and examines the automated strategies security professionals can use to stop them.

Python • CVE Analysis • Sanitization
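The defensive side of the EXIF study can be sketched in a few lines of standard-library Python: a function that walks a JPEG's marker segments and drops APP1/EXIF segments before the file is stored or shared. The `strip_exif` helper below is a hypothetical illustration of the sanitization idea, not the pipeline the paper itself proposes.

```python
import struct

EXIF_HEADER = b"Exif\x00\x00"  # payload prefix of an APP1 EXIF segment

def strip_exif(jpeg_bytes: bytes) -> bytes:
    """Return a copy of a JPEG byte stream with APP1/EXIF segments removed."""
    if jpeg_bytes[:2] != b"\xff\xd8":
        raise ValueError("not a JPEG stream (missing SOI marker)")
    out = bytearray(b"\xff\xd8")
    i = 2
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:
            out += jpeg_bytes[i:]  # malformed stream; copy remainder unchanged
            break
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:  # SOS: entropy-coded image data follows
            out += jpeg_bytes[i:]
            break
        # Each segment: 2-byte marker, then a big-endian length that counts
        # itself plus the payload (but not the marker bytes).
        (length,) = struct.unpack(">H", jpeg_bytes[i + 2 : i + 4])
        segment = jpeg_bytes[i : i + 2 + length]
        # Drop APP1 segments carrying an EXIF payload; keep everything else
        # (JFIF APP0, quantization tables, Huffman tables, etc.).
        if not (marker == 0xE1 and segment[4:10] == EXIF_HEADER):
            out += segment
        i += 2 + length
    return bytes(out)
```

Because the function copies segments rather than re-encoding pixels, the image data itself is untouched; only the metadata container is removed, which is the property a sanitization gateway needs.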
Identity & Trust

Access Control in Modern Cybersecurity

Using a real-world multi-state hospital as a case study, this blueprint maps out how to bridge the gap between regulatory compliance and actual operational resilience across cloud and medical device ecosystems.

Zero Trust • SASE • ABAC
Threat Intel

Evolving Threats and Strategic Defense

By synthesizing the latest global intelligence, this report breaks down the "unholy trinity" of modern cyber risk: the industrialization of Cybercrime-as-a-Service, weaponized AI, and the persistent crisis of compromised credentials.

Adversarial AI • CaaS • NIST CSF