Software Engineering · Ankara University · Est. 2024
Fundamental and applied research at the intersection
of language, networking, and intelligence.
Advancing the adaptability, robustness, and efficiency of LLMs across diverse domains and languages. We study parameter-efficient fine-tuning, cross-lingual transfer, instruction-following, and LLM alignment — with particular focus on Turkish and morphologically complex languages.
Fine-grained multimodal opinion mining for Turkish and multilingual social media, reviews, and multimedia content. Combining textual, visual, and acoustic cues for aspect-level sentiment analysis.
Abstractive and hybrid summarization of long, complex single and multiple documents. Novel hierarchical-transformer architectures for extreme multi-document and long-context NLP.
Comprehensive benchmarks and evaluation frameworks for robustness under distribution shift and adversarial conditions.
Cross-modal learning fusing text, image, audio, and video for visual QA, video captioning, and cross-modal retrieval tasks at scale.
Reliable, factual, and controllable text generation — hallucination reduction, factual grounding, and domain-specific content synthesis with RAG pipelines.