Trustworthy Language Intelligence Lab (TLI Lab)

Reliable, efficient, and auditable NLP for security-relevant and harmful online text.

The TLI Lab develops reproducible evaluation assets and practical NLP pipelines that support deployment-realistic decisions across traditional ML, transformers, and LLMs, with explicit attention to latency, cost, and energy (Green NLP).

Research Thrusts

1) Security-Aware Language Intelligence

Incident-centric discourse triage and prioritization for cybersecurity events—especially healthcare cyber incidents—using transparent scoring and analysis workflows.

Example: CyberTweetGrader&Labeler (CTGL) for prioritizing cyber-incident tweets.
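For illustration, the sketch below shows the general idea of transparent, keyword-weighted priority scoring for incident tweets; the keywords and weights are hypothetical examples, not the CTGL scoring scheme.

```python
# Hypothetical sketch of transparent keyword-weighted scoring for cyber-incident
# tweets. Keywords and weights are illustrative only, not the CTGL implementation.
KEYWORD_WEIGHTS = {
    "ransomware": 3.0,
    "breach": 2.5,
    "hospital": 2.0,
    "phishing": 1.5,
    "patched": 1.0,
}

def priority_score(tweet: str) -> float:
    """Sum the weights of incident keywords present in the tweet (case-insensitive)."""
    text = tweet.lower()
    return sum(weight for kw, weight in KEYWORD_WEIGHTS.items() if kw in text)

print(priority_score("Regional hospital hit by ransomware; systems offline"))  # 5.0
```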

2) Comparative Benchmarking: ML ↔ Transformers ↔ LLMs

Controlled, reproducible comparisons that quantify not only predictive performance, but also reliability and deployment trade-offs (cost, latency, and failure modes).

Example: prompted LLM inference vs. fine-tuned encoders under constraints.
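As a rough illustration of the kind of controlled comparison involved, the sketch below times two interchangeable predictors on identical inputs; `encoder_predict` and `llm_predict` are placeholder callables, not the lab's actual models or protocol.

```python
# Minimal sketch of a latency-comparison harness. `predict_fn` stands in for
# either a fine-tuned encoder classifier or a prompted LLM wrapper; both are
# assumed to map a list of texts to a list of labels.
import time
import statistics

def benchmark(predict_fn, texts, n_runs=5):
    """Return (median seconds per text, last predictions) over n_runs repetitions."""
    per_text_latencies = []
    predictions = None
    for _ in range(n_runs):
        start = time.perf_counter()
        predictions = predict_fn(texts)
        per_text_latencies.append((time.perf_counter() - start) / len(texts))
    return statistics.median(per_text_latencies), predictions

# Usage (with placeholder predictors):
# encoder_latency, _ = benchmark(encoder_predict, sample_tweets)
# llm_latency, _ = benchmark(llm_predict, sample_tweets)
```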

3) Green NLP & Efficient Inference

Energy-aware evaluation that supports sustainable model selection—e.g., TF-IDF baselines vs. fine-tuned transformers—without sacrificing responsible reporting.

Example: accuracy–latency–energy trade-offs for abuse detection.
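The sketch below shows one way such a classical baseline can be timed per inference; the toy texts and labels are placeholders, and energy measurement would require an external meter (e.g., a tool such as codecarbon) rather than anything shown here.

```python
# Illustrative TF-IDF + logistic regression baseline with per-text latency
# measurement. Toy data only; a real study would use a labeled abuse-detection
# corpus and compare against a fine-tuned transformer on identical splits.
import time
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["you are awful", "great game last night", "nobody likes you", "thanks for the help"]
labels = [1, 0, 1, 0]  # 1 = abusive, 0 = benign (toy labels)

baseline = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                         LogisticRegression(max_iter=1000))
baseline.fit(texts, labels)

start = time.perf_counter()
preds = baseline.predict(texts)
latency_ms = 1000 * (time.perf_counter() - start) / len(texts)
print(list(preds), f"{latency_ms:.3f} ms/text")
```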

Current / Featured Projects

  • CyberTweetGrader&Labeler (CTGL): A domain-specific NLP system for prioritizing cyber-incident discourse on Twitter/X. Project page.
  • Trust-calibrated evaluation of LLM inference vs. fine-tuned encoders: Controlled comparisons under reliability and cost constraints.
  • Green NLP for online abuse detection: Joint evaluation of accuracy, throughput/latency, and energy per inference for classical baselines vs. transformers.
  • Harmful language detection (cyberbullying / abuse / AI-generated text): Comparative benchmarking and error analysis across model families.

People

Lab Director

Muhammad Abusaqer (Minot State University)

Security-aware NLP • trustworthy evaluation • deployment-realistic benchmarking

Students

The lab actively mentors undergraduate and M.S.-level research projects aligned with the thrust areas above.


Former Undergraduate Researchers (Coauthors)

Grouped by year; initials are used to match publication records.

MICS 2025

  • A. Pun, "Predicting Student Academic Performance: Using Machine Learning and Clustering" (PDF) [Student success]
  • B. Olson, "Predicting Student Academic Performance: Using Machine Learning and Clustering" (PDF) [Student success]
  • D. Degele, "Analyzing Ransomware Incidents in Healthcare: Patterns and Risk Assessment" (PDF) [Healthcare ransomware]
  • T. Khan, "Evaluating Quick-Commerce Platforms: A Sentiment and Topic Modeling Analysis of User Reviews" (PDF) [Sentiment & topic modeling]

MICS 2024

  • J. Jensen, "Global Echoes of the FIFA World Cup 2022: Sentiment and Theme Analysis via Deep Learning and Machine Learning on Twitter" [Event sentiment/theme]
  • S. Khan, "Text Detection between an AI-Written Passage vs. a Human-Written Passage" [AI vs human text]
  • K. Khan, "Text Detection between an AI-Written Passage vs. a Human-Written Passage" [AI vs human text]
  • T. Smith, "Predicting Campus Crime Based on State Firearm Policy" [Public safety & policy]

MICS 2023

  • C. Fofie, "Cyberbullying Classification Using Three Deep Learning Models: GPT, BERT, and RoBERTa" [Cyberbullying]
  • Q. Sullivan, "Darknet Traffic Classification Using Deep Learning" [Network traffic]
  • A. Scott, "Automated Categorization of Cybersecurity News Articles through State-of-the-Art Text Transfer Deep Learning Models" [Cybersecurity news]
  • J. T. Snow, "Automated Categorization of Cybersecurity News Articles through State-of-the-Art Text Transfer Deep Learning Models" [Cybersecurity news]

Join / Collaborate

The lab welcomes collaborations and student research in:

  • security-relevant social media analysis and cyber-incident intelligence
  • evaluation and benchmarking of ML, transformers, and LLMs (including cost/latency/energy trade-offs)
  • harmful language detection, cyberbullying, and AI-generated text detection

If you are interested, please email Muhammad.Abusaqer@MinotStateU.edu with a short note about your background and interests.

Contact

Email: Muhammad.Abusaqer@MinotStateU.edu

Office: Model Hall 110, Minot State University, Minot, ND

Office phone: (701) 858-3075

© 2026 Muhammad Abusaqer