Search To Be Logo Search To Be

AI SEO Research Papers

A curated list of the latest academic research on AI search, SEO, generative models, content optimization strategies, and other related topics.

Displaying newest research first — chronologically sorted by publication date.

Deep Research System Card

OpenAI
OpenAI

The paper discusses the development, training, risks, safety evaluations, and mitigations of OpenAI's Deep Research model, focusing on its browsing capabilities, cybersecurity risks, and autonomy.

training risk mitigation safety evaluation cybersecurity autonomy

Agentic Reasoning: Reasoning LLMs with Tools for the Deep Research

Junde Wu, Jiayuan Zhu, Yuyuan Liu
Junde Wu
Jiayuan Zhu
Yuyuan Liu

The paper introduces Agentic Reasoning, a framework that improves LLM reasoning by integrating web search, coding agents, structured memory (Mind Map), and much more.

agentic reasoning tool-assisted llms web search coding agents knowledge graphs deep research

Dynamics of Adversarial Attacks on Large Language Model-based Search Engines

Xiyang Hu
Xiyang Hu

The paper models ranking manipulation in LLM search as a repeated Prisoner’s Dilemma, analyzing how attack costs, success rates, and defenses shape adversarial behavior and content provider cooperation.

llm search attacks ranking manipulation game theory adversarial strategies cooperation dynamics

GASLITEing the Retrieval: Exploring Vulnerabilities in Dense Embedding-based Search

Matan Ben-Tov, Mahmood Sharif
Matan Ben-Tov
Mahmood Sharif

The paper introduces GASLITE, an attack that injects adversarial passages into embedding-based search to manipulate rankings. It outperforms baselines, demonstrating vulnerabilities in dense retrieval models with minimal poisoning effort.

embedding attacks seo poisoning adversarial ranking gradient-based optimization retrieval security

Persistent Pre-Training Poisioning of LLMs

Yiming Zhang et al. (8 authors)
Yiming Zhang
Javier Rando
Ivan Evtimov
Jianfeng Chi
Eric Michael Smith
Nicholas Carlini
Florian Tramer
Daphne Ippolito

The paper examines how poisoning only 0.1% of pre-training data can make harmful behaviors persist in large language models, even after alignment. It demonstrates attacks affecting model beliefs, prompt security, and safe outputs.

llm poisoning belief manipulation backdoor attacks jailbreak attempts context extraction

Ranking Manipulation for Conversational Search Engines

Samuel Pfrommer et al. (4 authors)
Samuel Pfrommer
Yatong Bai
Tanmay Gautam
Somayeh Sojoudi

The paper explores how prompt injections manipulate rankings in conversational search engines, demonstrating an attack that boosts product rankings across LLMs and transfers to real-world systems like Perplexity.ai, highlighting fairness concerns.

ranking manipulation llm attacks prompt injection conversational search adversarial optimization

CONFLICTBANK: A Benchmark for Evaluating Knowledge Conflicts in Large Language Models

Zhaochen Su et al. (9 authors)
Zhaochen Su
Jun Zhang
Xiaoye Qu
Tong Zhu
Yanshu Li
Jiashuo Sun
Juntao Li
Min Zhang
Yu Cheng

CONFLICTBANK is a benchmark evaluating knowledge conflicts in LLMs, using 7M claim-evidence pairs and 553k QA pairs to study misinformation, temporal, and semantic conflicts across four model families.

knowledge conflicts conflictbank dataset misinformation temporal discrepancies

What Evidence Do Language Models Find Convincing?

Alexander Wan, Eric Wallace, Dan Klein
Alexander Wan
Eric Wallace
Dan Klein

This paper explores how language models judge evidence for controversial questions using the CoNFLICTINGQA dataset, finding they prioritize text relevance over stylistic features humans value.

language models evidence convincingness conflictingqa dataset text relevance stylistic features

Adversarial Search Engine Optimization for Large Language Models

Fredrik Nestaas, Edoardo Debenedetti, Florian Tramèr
Fredrik Nestaas
Edoardo Debenedetti
Florian Tramèr

This paper introduces Preference Manipulation Attacks, showing how crafted content can bias LLMs in search engines and plugins to favor attackers, creating a prisoner's dilemma that degrades results.

preference manipulation search engine optimization plugin attacks prisoner's dilemma

GEO: Generative Engine Optimization

Pranjal Aggarwal et al. (6 authors)
Pranjal Aggarwal
Vishvak Murahari
Tanmay Rajpurohit
Ashwin Kalyan
Karthik Narasimhan
Ameet Deshpande

The paper introduces Generative Engine Optimization (GEO) to boost content visibility in generative engine responses by up to 40%, using a flexible framework and GEO-BENCH for evaluation.

generative engines optimization visibility content creators benchmarks