Trustworthy Language Intelligence for Security-Relevant and Harmful Online Text

Research in security-aware NLP, incident-centric social-media intelligence, and rigorous benchmarking of traditional ML, transformers, and LLM-based approaches (including Green NLP).

I develop methods for analyzing, grading, and prioritizing social-media discourse related to cybersecurity incidents, healthcare cyberattacks, and AI-generated/harmful language.

cybersecurity NLP · healthcare cyberattacks · social media · cyber incidents · LLM evaluation · Green NLP

About

I am an Assistant Professor in Computer Science & Cybersecurity at Minot State University, where I am building an externally fundable research program in trustworthy language intelligence for security-relevant and harmful online text. My work focuses on reproducible, reliable, and efficient language intelligence pipelines for high-noise, high-stakes settings—especially cybersecurity discourse on social platforms.

A central thread across my projects is controlled comparison: understanding when lightweight baselines (e.g., TF-IDF + linear models) are sufficient, when transformers and LLMs are justified, and how performance changes under cost, latency, and energy constraints. The program currently includes 16 undergraduate coauthors (2022–2026), internal grant support, and continuing proposal activity.
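As an illustrative sketch of the "lightweight baseline" end of these comparisons (the toy tweets and labels below are invented for this example and are not project data), a TF-IDF + linear model in scikit-learn can be as simple as:

```python
# Minimal TF-IDF + linear-classifier baseline, sketched with toy data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny illustrative corpus: incident-relevant vs. benign posts.
texts = [
    "ransomware attack hits hospital network",
    "data breach exposes patient records",
    "zero-day exploit targets VPN servers",
    "lovely weather for a picnic today",
    "new coffee shop opened downtown",
    "weekend hiking trip photos",
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = incident-relevant, 0 = not

# Unigram + bigram TF-IDF features feeding a logistic-regression classifier.
baseline = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                         LogisticRegression())
baseline.fit(texts, labels)

print(baseline.predict(["hospital hit by ransomware attack"]))
```

Baselines like this train in milliseconds on CPU, which is what makes the cost, latency, and energy comparisons against transformers and prompted LLMs meaningful.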

Research Program Snapshot

  • Research identity: trustworthy NLP for security-relevant and harmful online text, with emphasis on CyberTweetGrader&Labeler (CTGL), auditable datasets, and deployment-realistic benchmarking.
  • Student mentoring: 16 undergraduate coauthors (2022–2026), including three accepted MICS 2026 papers on membership inference, evasion attacks, and data poisoning.
  • Funding trajectory: internal faculty grant support for CTGL, continuing proposal development, and a growing portfolio of publishable undergraduate-led research.

Featured Research

CyberTweetGrader&Labeler (CTGL)

A domain-specific NLP pipeline for prioritizing cyber-incident discourse on Twitter/X, with incident-centric feature engineering and transparent relevance grading.

Benchmarking: ML vs Transformers vs LLMs

Controlled evaluations of prompted LLM inference vs fine-tuned encoders and traditional ML, emphasizing reliability and deployment-realistic trade-offs.

Green NLP for Online Abuse Detection

Energy-aware benchmarking that jointly measures accuracy, latency/throughput, and inference energy for lightweight baselines vs fine-tuned transformers.
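A minimal sketch of the latency/throughput side of such benchmarking (the `classify` stub and inputs are placeholders, not the project's models; real energy measurement would additionally read hardware counters such as RAPL or NVML):

```python
# Sketch of per-item latency and throughput measurement for a classifier.
import statistics
import time

def classify(text: str) -> int:
    # Placeholder model: flags texts mentioning "attack".
    return int("attack" in text.lower())

inputs = ["ransomware attack on clinic", "sunny day at the park"] * 50

latencies = []
start = time.perf_counter()
for text in inputs:
    t0 = time.perf_counter()
    classify(text)
    latencies.append(time.perf_counter() - t0)
elapsed = time.perf_counter() - start

throughput = len(inputs) / elapsed  # items per second
p50 = statistics.median(latencies)  # median per-item latency (seconds)
print(f"throughput={throughput:.0f}/s, median latency={p50 * 1e6:.1f}us")
```

Reporting accuracy alongside such latency and energy figures is what makes the lightweight-vs.-transformer trade-off concrete.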

News & Updates

  • Mar 2026: Three undergraduate coauthored papers were accepted for presentation and publication in the online MICS 2026 proceedings.
  • Mar 2026: Submitted the full-paper versions of the three accepted MICS 2026 papers to the conference proceedings system.
  • Jan 2026: Controlled evaluation of prompted LLM inference vs. fine-tuned encoders remains under review.
  • 2026: Green NLP for Online Abuse Detection and Caption-then-Classify for Multimodal Harmful Meme Detection are under revision for resubmission.

For the full timeline, see the News page.

For Collaborators & Students

I welcome collaborations in security-aware NLP, LLM/transformer benchmarking, Green NLP, and incident-centric social-media intelligence. Undergraduate students and prospective research collaborators interested in these areas are encouraged to contact me.

The best starting point is the Trustworthy Language Intelligence Lab (TLI Lab) page, which lists current directions and open project ideas.

Contact

Email: Muhammad.Abusaqer@MinotStateU.edu

Office: Model Hall 110, Minot State University, Minot, ND

Office phone: (701) 858-3075

© 2026 Muhammad Abusaqer