ANLAM-Net Lab · Research Programme

Research Areas

Six interconnected research strands spanning language models, multimedia intelligence, and evaluation science.

01 Research Agenda
R · 01

Large Language Models

We advance the adaptability, robustness, and efficiency of large language models across diverse domains and languages. Our work spans parameter-efficient fine-tuning (LoRA, QLoRA, adapters), cross-lingual transfer, instruction following, alignment, and comprehensive evaluation, with a strong focus on Turkish and other morphologically complex languages.

Turkish LLM Benchmarking Suite
Domain-Adaptive Fine-tuning
Instruction Alignment for Low-resource Languages
Fine-tuning · LoRA / QLoRA · Turkish NLP · RLHF / DPO · Cross-lingual · LLM Evaluation
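To make the parameter-efficient idea behind LoRA concrete, here is a minimal sketch in pure Python (no framework; all names are illustrative): the frozen base weight W is augmented with a low-rank trainable update scaled by alpha / r, and because the up-projection starts at zero, the adapted model initially reproduces the base model exactly.

```python
def matmul(A, B):
    """Multiply two matrices stored as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def lora_forward(x, W, down, up, alpha=16, r=2):
    """y = x @ (W + (alpha / r) * down @ up).

    W    : frozen base weight, d_in x d_out
    down : trainable, d_in x r  (random init)
    up   : trainable, r x d_out (zero init, so training starts from W)
    """
    scale = alpha / r
    delta = matmul(down, up)  # d_in x d_out low-rank update
    W_eff = [[w + scale * d for w, d in zip(wr, dr)]
             for wr, dr in zip(W, delta)]
    return matmul(x, W_eff)

# With `up` zero-initialised, the adapted layer equals the frozen layer.
x = [[1.0, 2.0]]
W = [[0.5, 0.0], [0.0, 0.5]]
down = [[0.1], [0.2]]   # d_in=2, r=1
up = [[0.0, 0.0]]       # r=1, d_out=2
print(lora_forward(x, W, down, up, r=1))  # -> [[0.5, 1.0]]
```

Only `down` and `up` (2·d·r values instead of d²) are trained, which is what makes LoRA-style adaptation cheap enough for large models.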
R · 02
Sentiment & Opinion Analysis
Fine-grained multimodal opinion mining targeting Turkish and multilingual social media, product reviews, and multimedia content. We combine textual, visual, and acoustic modalities with aspect-level annotation schemes to produce nuanced sentiment models that operate across languages and media types.

Multimodal Aspect-level Sentiment

Joint modeling of text, image, and audio for aspect-based sentiment prediction — fusing cross-modal attention with aspect-specific pooling mechanisms.

Turkish Social Media Sentiment Corpus

Construction of a large-scale, aspect-annotated Turkish social media dataset for benchmarking NLP systems on morphologically rich text.

Aspect-level SA · Multimodal Fusion · Turkish · Cross-modal Attention
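The cross-modal attention underlying this fusion can be sketched as follows (an illustrative, simplified sketch: text tokens act as queries over image region features, keys and values coincide, and aspect-specific pooling is omitted).

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(v - m) for v in xs]
    s = sum(es)
    return [e / s for e in es]

def cross_modal_attention(text_q, image_kv):
    """Each text query vector attends over image region vectors
    (scaled dot-product attention; keys == values here).
    Returns one image-conditioned vector per text token."""
    d = len(image_kv[0])
    fused = []
    for q in text_q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in image_kv]
        weights = softmax(scores)
        fused.append([sum(w * v[j] for w, v in zip(weights, image_kv))
                      for j in range(d)])
    return fused
```

In an aspect-level model, the fused token vectors would then be pooled per aspect (e.g. attention-weighted by aspect relevance) before sentiment classification.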
R · 03
Document Summarization
Abstractive and hybrid summarization for long, complex, and multi-document settings. We develop hierarchical transformer architectures that scale to thousands of tokens, with attention to coherence, factual accuracy, and cross-document discourse modeling.

Extreme Multi-Document Summarization

Hierarchical encoder architectures that aggregate information across dozens of source documents while maintaining factual grounding and attribution.

Long-Context Summarization

Efficient attention mechanisms and memory-augmented transformers for processing 10k+ token inputs in legal and scientific document summarization.

Abstractive · Multi-document · Long Context · Hierarchical Transformers
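One standard efficient-attention ingredient for such 10k+ token inputs is a causal sliding-window mask, sketched below (illustrative only; production long-context models combine this with global tokens or memory).

```python
def sliding_window_mask(n, window):
    """Causal sliding-window attention mask: token i may attend to
    token j iff j <= i and i - j < window. This cuts attention cost
    from O(n^2) to O(n * window)."""
    return [[j <= i and i - j < window for j in range(n)]
            for i in range(n)]

mask = sliding_window_mask(5, window=2)
# Row 4 attends only to positions 3 and 4.
```

With n = 10,000 and a window of 512, each token scores 512 positions instead of up to 10,000, which is what makes long legal and scientific documents tractable.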
R · 04
NLP Evaluation & Robustness
Developing comprehensive benchmarks and evaluation methodologies that measure NLP system robustness under distribution shift, adversarial perturbations, and out-of-distribution inputs. We focus on linguistically diverse evaluation sets and automatic metric design beyond BLEU and ROUGE.

Turkish NLP Benchmark Suite (TurkBench)

A multi-task evaluation framework for Turkish covering QA, NER, NLI, summarization, and generation with rigorous human validation protocols.

Robustness Under Adversarial Conditions

Systematic stress-testing of NLP models using character-level attacks, paraphrase-based perturbations, and domain-shift evaluation scenarios.

Benchmarking · Adversarial NLP · Automatic Metrics · Distribution Shift
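A character-level attack of the kind used in such stress tests can be as simple as seeded adjacent-character swaps (a minimal sketch; real attack suites also use insertions, deletions, and homoglyph substitutions).

```python
import random

def swap_perturb(text, rate=0.1, seed=0):
    """Character-level perturbation: randomly swap adjacent alphabetic
    characters. Robust models should keep their predictions stable
    under such small, meaning-preserving corruptions."""
    rng = random.Random(seed)   # seeded for reproducible evaluation
    chars = list(text)
    for i in range(len(chars) - 1):
        if chars[i].isalpha() and chars[i + 1].isalpha() and rng.random() < rate:
            chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)
```

Robustness is then reported as the accuracy drop between clean inputs and their perturbed counterparts at increasing `rate`.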
R · 05
Multimedia Processing
Cross-modal learning integrating text, image, audio, and video modalities. We design joint embedding spaces and fusion architectures for visual question answering, video captioning, audio-visual sentiment analysis, and cross-modal retrieval at scale.

Audio-Visual Sentiment Analysis

Fusion of speech prosody, facial expression, and text for fine-grained multimodal sentiment prediction in video content.

Cross-Modal Retrieval

Contrastive learning approaches for aligning text and visual representations, enabling zero-shot retrieval across modalities.

Vision-Language · Audio-Visual · Contrastive Learning · Multimodal Retrieval
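The contrastive alignment objective behind such retrieval can be sketched as a CLIP-style symmetric loss (illustrative pure-Python version; batch sizes and temperature values are placeholders): matched text–image pairs sit on the diagonal of the similarity matrix and should dominate both retrieval directions.

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(v - m) for v in xs]
    s = sum(es)
    return [e / s for e in es]

def clip_style_loss(text_emb, image_emb, temperature=0.07):
    """Symmetric contrastive loss over a batch of paired embeddings:
    cross-entropy toward the diagonal in both the text->image and
    image->text directions, on L2-normalised vectors."""
    def normalize(v):
        n = math.sqrt(sum(x * x for x in v)) or 1.0
        return [x / n for x in v]
    T = [normalize(t) for t in text_emb]
    I = [normalize(i) for i in image_emb]
    sims = [[sum(a * b for a, b in zip(t, i)) / temperature for i in I]
            for t in T]
    n = len(sims)
    loss_t2i = -sum(math.log(softmax(row)[k])
                    for k, row in enumerate(sims)) / n
    cols = [[sims[r][c] for r in range(n)] for c in range(n)]
    loss_i2t = -sum(math.log(softmax(col)[k])
                    for k, col in enumerate(cols)) / n
    return (loss_t2i + loss_i2t) / 2
```

Once trained, zero-shot retrieval is just a nearest-neighbour lookup in the shared embedding space.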
R · 06
Content Generation
Reliable, factual, and controllable text generation targeting hallucination reduction, factuality grounding, and domain-specific content synthesis. We develop RAG pipelines, constrained decoding methods, and factual verification systems for high-stakes generation scenarios.

Retrieval-Augmented Generation (RAG)

Dense retrieval systems coupled with generative models for knowledge-grounded text synthesis in domain-specific and multilingual settings.

Hallucination Detection & Mitigation

Taxonomy and mitigation strategies for factual hallucinations in neural text generation, including post-hoc verification and constrained decoding.

RAG · Factuality · Hallucination · Controlled Generation
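The retrieve-then-generate skeleton of such a RAG pipeline can be sketched as follows (illustrative only: bag-of-words cosine similarity stands in for a dense retriever, and the prompt template is a placeholder).

```python
import math
from collections import Counter

def cosine(c1, c2):
    """Cosine similarity between two term-count vectors."""
    num = sum(c1[t] * c2[t] for t in set(c1) & set(c2))
    den = (math.sqrt(sum(v * v for v in c1.values()))
           * math.sqrt(sum(v * v for v in c2.values())))
    return num / den if den else 0.0

def retrieve(query, docs, k=2):
    """Rank documents by similarity to the query and return the top k."""
    qc = Counter(query.lower().split())
    return sorted(docs, reverse=True,
                  key=lambda d: cosine(qc, Counter(d.lower().split())))[:k]

def build_prompt(query, docs, k=2):
    """Assemble a grounded prompt: retrieved passages, then the question."""
    context = "\n".join(f"[{i + 1}] {d}"
                        for i, d in enumerate(retrieve(query, docs, k)))
    return (f"Context:\n{context}\n\n"
            f"Question: {query}\nAnswer using only the context.")
```

The generator's output can then be checked against the retrieved passages, which is the entry point for the post-hoc verification and constrained decoding work described above.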
02 Methodology
M · 01

Empirical Rigor

All claims are backed by controlled experiments with statistical significance testing, ablation studies, and reproducible baselines.

M · 02

Open Science

Datasets, model checkpoints, and evaluation code released openly to accelerate community progress in multilingual and multimedia NLP.

M · 03

Interdisciplinary Collaboration

Researchers from four universities contribute complementary expertise in linguistics, engineering, AI systems, and applied ML.

M · 04

Cloud-Scale Compute

Google Cloud Research Credits enable large-scale model training and comprehensive evaluation sweeps across multilingual benchmarks.

M · 05

Low-resource Focus

We prioritize underrepresented languages — especially Turkish and the wider Turkic language family — developing resources for communities with limited NLP tooling.

M · 06

Responsible AI

Safety, fairness, and interpretability are embedded from research design through evaluation, with explicit bias audits on all released systems.