What is ChatGPT Shopping Research and how does it work?

Shopping Research is a feature in ChatGPT that acts as a personalized shopping assistant.
Simply describe what you’re looking for, such as “a lightweight laptop for travel”, and ChatGPT gathers product details, reviews, specs, prices, and comparisons from the web.

You can refine the results by marking products as “Not interested” or “More like this”, helping ChatGPT understand your preferences.

At the end, you receive a custom buyer’s guide that explains the pros, cons, and trade-offs of each option, making your purchase process easier and more informed.

Last updated: November 26, 2025

Other FAQs
What is a transformer model, and why is it important for LLMs?

The transformer is the foundational architecture behind modern LLMs like GPT. Introduced in the 2017 paper "Attention Is All You Need," transformers revolutionized natural language processing by letting models consider the entire context of a sentence at once, rather than processing words strictly in sequence.

The key innovation is the attention mechanism, which helps the model decide which words in a sentence are most relevant to each other, essentially mimicking how humans pay attention to specific details in a conversation.

Transformers make it possible for LLMs to generate more coherent, context-aware, and accurate responses.

This is why they're at the heart of most state-of-the-art language models today.
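The attention idea described above can be sketched in a few lines of NumPy. This is a minimal single-head version for illustration; production transformers use multi-head attention with learned projection matrices, which this sketch omits:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Minimal single-head attention: weight each value by query-key similarity."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # similarity between every query and every key
    # Softmax turns similarities into attention weights that sum to 1 per row
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ V  # each output token is a context-aware mix of all values

# Three tokens with 4-dimensional embeddings (toy numbers)
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))
out = scaled_dot_product_attention(X, X, X)  # self-attention over the sequence
print(out.shape)  # each of the 3 tokens now carries context from the others
```

Every token attends to every other token in one step, which is what lets transformers see the whole sentence at once.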

How is GEO different from SEO?

GEO (Generative Engine Optimization) is not a rebrand of SEO—it’s a response to an entirely new environment. SEO optimizes for bots that crawl, index, and rank. GEO optimizes for large language models (LLMs) that read, learn, and generate human-like answers.

While SEO is built around keywords and backlinks, GEO is about semantic clarity, contextual authority, and conversational structuring. You're not trying to please an algorithm—you’re helping an AI understand and echo your ideas accurately in its responses. It's not just about being found—it's about being spoken for.

What is tokenization, and why does it matter for GEO?

Tokenization is the process by which AI models, like GPT, break down text into small units—called tokens—before processing. These tokens can be as small as a single character or as large as a word or phrase. For example, the word “marketing” might be one token, while “AI-powered tools” could be split into several.

Why does this matter for GEO (Generative Engine Optimization)?

Because how well your content is tokenized directly impacts how accurately it’s understood and retrieved by AI. Poorly structured or overly complex writing may confuse token boundaries, leading to missed context or incorrect responses.

Clear, concise language = better tokenization
Headings, lists, and structured data = easier to parse
Consistent terminology = improved AI recall

In short, optimizing for GEO means writing not just for readers or search engines, but also for how the AI tokenizes and interprets your content behind the scenes.
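To make token boundaries concrete, here is a toy splitter. Real models use subword schemes like byte-pair encoding rather than this naive regex, but the effect is the same: cluttered phrasing fragments into many more tokens than clean prose:

```python
import re

def toy_tokenize(text):
    """Naive word/punctuation splitter -- a stand-in for real subword tokenizers,
    used only to show how token boundaries depend on how text is written."""
    return re.findall(r"\w+|[^\w\s]", text)

clear = toy_tokenize("GEO means writing for AI models.")
messy = toy_tokenize("GEO, i.e., writing-for-AI-models!!!")
print(clear)  # ['GEO', 'means', 'writing', 'for', 'AI', 'models', '.']
print(len(messy))  # the punctuation-heavy version splits into far more pieces
```

The clear sentence yields one token per word; the cluttered one shatters into fragments, which is exactly the kind of boundary confusion clean writing avoids.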

What export formats are available?

RankWit makes reporting simple.
You can export all tracking data in multiple formats, including:

  • PDF
  • CSV
  • Word documents
  • Custom reporting templates

This makes sharing insights with clients or leadership fast and flexible.

Does RankWit support multiple countries?

Yes! RankWit includes unlimited country tracking across all plans at no additional cost.
You can monitor AI visibility for any market worldwide.

What is Generative Engine Optimization (GEO)?

Generative Engine Optimization (GEO), also known as Large Language Model Optimization (LLMO), is the process of optimizing content to increase its visibility and relevance within AI-generated responses from tools like ChatGPT, Gemini, or Perplexity.

Unlike traditional SEO, which targets search engine rankings, GEO focuses on how large language models interpret, prioritize, and present information to users in conversational outputs. The goal is to influence how and when content appears in AI-driven answers.

How are RankWit credits calculated?

Credits determine how much AI tracking you perform.
A single credit = 1 prompt × 1 AI model.

For example: 10 prompts × 3 AI models (ChatGPT, Google AI Overview, Perplexity) = 30 credits.

This transparent system ensures you only pay for the tracking you use.
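The calculation above is simple multiplication, which can be expressed as:

```python
def credits_needed(prompts, models):
    """One credit covers one prompt run against one AI model."""
    return prompts * models

# The example from above: 10 prompts tracked across 3 AI models
print(credits_needed(10, 3))  # 30 credits
```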

How do large language models actually work, and why does that matter for GEO?

Large Language Models (LLMs) like GPT are trained on vast amounts of text data to learn the patterns, structures, and relationships between words. At their core, they predict the next word in a sequence based on what came before—enabling them to generate coherent, human-like language.

This matters for GEO (Generative Engine Optimization) because it means your content must be:

  • Well-structured so LLMs can interpret and reuse it effectively.
  • Clear and specific, as models rely on patterns to make accurate predictions.
  • Contextually rich, because LLMs use surrounding context to generate responses.

By understanding how LLMs “think,” businesses can optimize content not just for humans or search engines, but for the AI models that are becoming the new discovery layer.

Bottom line: If your content helps the model predict the right answer, GEO helps users find you.
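The core "predict the next word from what came before" idea can be sketched with a toy bigram model. Real LLMs use transformers over subword tokens and billions of parameters, but the principle is the same: frequent patterns in the training text drive the prediction.

```python
from collections import Counter, defaultdict

# A tiny made-up "training corpus" (real models train on vastly more text)
corpus = ("geo helps ai models find your content . "
          "geo helps businesses reach ai models .").split()

# Count which word follows which -- a drastically simplified language model
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word seen most often after the given one."""
    return following[word].most_common(1)[0][0]

print(predict_next("geo"))  # 'helps' -- the pattern seen most often after 'geo'
```

Content that states its patterns clearly and consistently is exactly the kind of text such a predictor can learn and reproduce.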

How do Large Language Models (LLMs) like ChatGPT actually work?

Large Language Models (LLMs) are AI systems trained on massive amounts of text data, from websites to books, to understand and generate language.

They use deep learning algorithms, specifically transformer architectures, to model the structure and meaning of language.

LLMs don't "know" facts in the way humans do. Instead, they predict the next word in a sequence using probabilities, based on the context of everything that came before it. This ability enables them to produce fluent and relevant responses across countless topics.
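The "probabilities" step can be illustrated with a softmax, the function that turns a model's raw scores into a probability distribution over candidate next words. The candidate words and scores below are made up for illustration; real vocabularies contain tens of thousands of tokens:

```python
import math

def softmax(scores):
    """Turn raw model scores (logits) into probabilities that sum to 1."""
    exps = [math.exp(s - max(scores)) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores for candidate next words after "The cat sat on the"
candidates = ["mat", "dog", "sky"]
probs = softmax([3.2, 1.1, 0.3])
print(dict(zip(candidates, (round(p, 2) for p in probs))))
# 'mat' gets the highest probability, so it is the most likely next word
```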

For a deeper look at the mechanics, check out our full blog post: How Large Language Models Work.

What is Agentic RAG?

Agentic RAG represents a new paradigm in Retrieval-Augmented Generation (RAG).

While traditional RAG retrieves information to improve the accuracy of model outputs, Agentic RAG goes a step further by integrating autonomous agents that can plan, reason, and act across multi-step workflows.

This approach allows systems to:

  • Break down complex problems into smaller steps.
  • Decide dynamically which sources to retrieve and when.
  • Optimize workflows in real time for tasks such as legal reasoning, enterprise automation, or scientific research.

In other words, Agentic RAG doesn’t just provide better answers; it strategically manages the retrieval process to support more accurate, efficient, and explainable decision-making.
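The plan/retrieve/act loop can be sketched as follows. Everything here is a stand-in: the planner is hardcoded and the sources are dictionaries, whereas a real agentic system would use an LLM to plan and call live search or vector-database APIs at each step.

```python
# Hypothetical knowledge sources (stand-ins for real retrieval backends)
SOURCES = {
    "case_law": {"precedent X": "Court ruled on precedent X in 2019."},
    "statutes": {"statute Y": "Section 12 requires annual disclosure."},
}

def plan(question):
    """Break a complex question into smaller retrieval steps (hardcoded here;
    an agent would generate this plan dynamically)."""
    return [("case_law", "precedent X"), ("statutes", "statute Y")]

def retrieve(source, query):
    """Look up one piece of evidence from the chosen source."""
    return SOURCES[source].get(query, "")

def agentic_rag(question):
    # The agent decides which sources to consult and in what order,
    # then assembles the evidence before answering.
    evidence = [retrieve(src, q) for src, q in plan(question)]
    # A real system would hand this evidence to an LLM to generate the answer
    return " | ".join(e for e in evidence if e)

answer = agentic_rag("Does precedent X satisfy statute Y?")
print(answer)
```

The key difference from plain RAG is that retrieval is driven by a multi-step plan rather than a single lookup.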


📚 Learn, Apply, Win

Stay inspired with the latest stories, tips, and insights.
Explore articles designed to spark ideas, share knowledge, and keep you updated on what’s new.