How to Fix Wrong Information About You in ChatGPT, Gemini, and Perplexity (2026 Guide)
If you have ever asked ChatGPT to describe you, search Perplexity for your own name, or tested what Gemini says about your company, you have probably had the same uncomfortable experience most of our clients have. The model is confident. The model is articulate. And some part of what the model just said is wrong.
Maybe it confused you with someone who shares your name. Maybe it surfaced a 2017 lawsuit that was dismissed. Maybe it inflated a title, invented a job you never held, or referenced a defunct company as your “current employer.” Maybe it answered a question about your character by quoting a stale blog post from a former employee.
The instinct is to brush it off because it is “just a chatbot.” That instinct is increasingly expensive. According to Pew Research’s 2024 study on generative AI adoption, the share of U.S. adults who have used ChatGPT has climbed every year. The 2025 Reuters Institute Digital News Report found that a meaningful share of younger users now turn to AI chatbots before traditional search engines for everyday information lookups. Recruiters, journalists, and investors are doing exactly the same thing, and the answer they get is the first impression you make.
At Digital Crisis Management we call this the AI answer layer, and we treat it the same way we treat page one of Google. It is a public surface. It is editable, with the right approach. And it is too important to leave to chance.
This guide walks through exactly how to fix wrong information about you in ChatGPT, Gemini, and Perplexity, the order to do it in, and where to draw the line between a DIY fix and a professional intervention.
Why AI Chatbots Get Real People Wrong
Generative AI does not “know” things. It predicts the next likely word based on patterns in its training data and, for newer models, real-time retrieved web content. When a model writes a paragraph about you, it is compressing dozens of sources into a confident summary. That compression is where the problems start.
There are four common failure modes our team sees in client audits:
1. Source poisoning. A single high-authority page that is incorrect (a misreported news article, a stale Crunchbase profile, an old PACER record that never got cleared) gets cited disproportionately because the model trusts the domain. One bad source can shape an entire answer.
2. Identity collision. The model confuses you with another public person who shares your name. This is a notorious problem documented in coverage from MIT Technology Review on chatbot hallucinations and in academic work on entity disambiguation in LLMs.
3. Temporal compression. The model treats a 2018 article and a 2025 article as equally current. A lawsuit you settled and moved on from seven years ago can resurface as a top-line fact about your career.
4. Hallucinated synthesis. The model invents detail that is not in any source. According to Stanford’s 2024 AI Index Report, hallucination rates on factual questions remain a persistent issue across every major frontier model, even as overall accuracy improves.
If any of those four patterns is operating against your name today, the standard search engine optimization playbook is not going to fix it. You need to work on the model’s inputs.
The AI Reputation Management Framework
Our framework rests on a simple premise drawn from how the models actually assemble their answers. If you control the most authoritative, most recent, and most consistent sources about a person, you control the answer. That means the work is not “trick the AI” or “prompt the AI.” The work is “shape the public record the AI reads.”
We break it into four lanes:
- Audit. Run the same prompt battery across every major model and capture what each one says.
- Source map. For each wrong claim, find the upstream URLs that are most likely feeding it.
- Correct or suppress. Either fix the source at the source, or push it down with credible counter-content.
- Reinforce. Publish authoritative, recent, consistent content that becomes the new dominant signal.
We use this same framework on every client engagement inside our AI Search Reputation Management service, and it is the layer most traditional ORM firms are not yet operating in.
Step 1: Run a Prompt Battery (and Take Screenshots)
Before you fix anything you need to know what is actually being said. Open each of these in a fresh, signed-out session:
- ChatGPT (gpt-5 or current default), with web browsing on.
- Google Gemini (latest Pro tier), with grounding on.
- Perplexity, default mode and Pro mode.
- Microsoft Copilot in Bing.
- Claude with web search on.
- Meta AI.
Then run the same five to seven prompts in every model. The exact wording matters. We use variations of:
- “Who is [Full Name]?”
- “Tell me about [Full Name], [city or company].”
- “Is [Full Name] trustworthy?”
- “What controversies has [Full Name] been involved in?”
- “Should I hire [Full Name]?”
- “What is [Full Name]’s background?”
- “Summarize [Full Name]’s public record.”
Save screenshots and copy the citation list each model produces. Perplexity and Gemini both show their sources by default. ChatGPT and Claude often surface a citations panel when you ask, “What sources are you basing this on?” That answer is the most useful thing in the entire audit, because every wrong claim is downstream of a URL you can act on. Perplexity has public documentation on how its retrieval and citation system works, and Google has documented the grounding system in Gemini for the same reason.
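If you expect to repeat this audit (and you should, at least quarterly), it is worth scripting the battery so every run uses identical wording. Below is a minimal sketch in Python; the ask_model callable is a placeholder you would wire to each provider's API yourself, and the name, model list, and stub answers are purely illustrative, not a vendor SDK.

```python
# Minimal sketch of a scripted prompt battery. The ask_model callable is a
# placeholder for whatever provider APIs you wire up; the stub below lets the
# script run end to end with no API keys, purely for illustration.
import json
from datetime import date

PROMPTS = [
    "Who is {name}?",
    "Tell me about {name}, {context}.",
    "Is {name} trustworthy?",
    "What controversies has {name} been involved in?",
    "Should I hire {name}?",
    "What is {name}'s background?",
    "Summarize {name}'s public record.",
]

def run_battery(name, context, models, ask_model):
    """Run every prompt against every model and keep the raw answers."""
    results = []
    for model in models:
        for template in PROMPTS:
            prompt = template.format(name=name, context=context)
            answer = ask_model(model, prompt)  # the model's full text answer
            results.append({"model": model, "prompt": prompt, "answer": answer})
    return results

if __name__ == "__main__":
    stub = lambda model, prompt: f"[{model}] answer to: {prompt}"
    audit = run_battery("Jane Example", "Acme Corp",
                        ["chatgpt", "gemini", "perplexity"], stub)
    with open(f"ai-audit-{date.today()}.json", "w") as f:
        json.dump(audit, f, indent=2)  # archive alongside your screenshots
```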
Step 2: Map the Sources Feeding the Wrong Answer
This is where most DIY attempts fall apart. You cannot fix what you cannot see.
For every false or stale claim, list the most likely upstream sources. Common offenders:
- A news article from a local outlet that was never updated when the case was resolved.
- A Wikipedia stub that was edited in 2019 and never refreshed.
- A defunct corporate bio still hosted on an old employer’s website.
- A Crunchbase, ZoomInfo, or RocketReach profile with outdated info.
- A YouTube video transcript that the model is indexing.
- A Reddit thread that ranks high enough for the model to pull from it.
- An old podcast description that mischaracterizes you.
- A press release on a wire service that was never retracted.
Once you have the source list, sort by domain authority. The high-DA URLs are doing more damage and should be addressed first. If a low-quality Reddit thread is the only “source” feeding a hallucinated claim, that is a different problem than a Forbes contributor piece doing the same thing.
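The triage itself fits in a few lines. The sketch below assumes you have already pulled a domain-authority score for each URL from whatever SEO tool you use; the URLs, claims, and scores are invented for illustration.

```python
# Sketch of the source-triage step: list every URL feeding a wrong claim,
# attach the authority score your SEO tool reports, and work the list from
# the top down. All entries below are invented examples.
sources = [
    {"url": "https://localnews.example/2017-lawsuit", "claim": "dismissed case framed as open", "authority": 78},
    {"url": "https://databroker.example/profile/jane", "claim": "lists defunct company as current employer", "authority": 61},
    {"url": "https://personal-blog.example/post", "claim": "mischaracterizes departure", "authority": 22},
]

# Highest-authority sources do the most damage, so address them first.
for s in sorted(sources, key=lambda s: s["authority"], reverse=True):
    print(f'{s["authority"]:>3}  {s["url"]}  ->  {s["claim"]}')
```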
Step 3: Correct at the Source Whenever Possible
The cleanest fix is always the one that goes upstream. A correction request to the publisher, when it works, removes the entire input the model is drawing from. The most common avenues we use:
- News editorial corrections. Most major outlets have a corrections policy. The Society of Professional Journalists publishes model standards on corrections in journalism, and outlets often honor a documented factual error if you present it cleanly with proof.
- Wikipedia talk-page edits. Wikipedia has a formal biographies of living persons policy that requires reliable, recent sourcing. False or unsourced claims can be challenged on the talk page.
- Wikidata corrections. Many AI models use Wikidata as a backbone for facts about real people. Wikidata is openly editable, provided each change is backed by a verifiable reference, and you can check what it currently says about you with the sketch after this list.
- Crunchbase and LinkedIn fixes. Both have direct profile claim flows. Refresh your own bios first; many bad AI answers are downstream of a profile you have not touched in three years.
- Google’s Results about you tool. For URLs that expose personal information, this tool lets you request removal from Google Search. Removing a URL from Google does not always remove it from a model’s training data, but it does affect grounded retrieval going forward.
- Bing and Microsoft removal flows. Microsoft’s own content removal forms follow a similar process for the Bing index, which feeds Copilot.
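The Wikidata check in particular is easy to automate, because the project exposes a public API. The sketch below (the name is a placeholder, and it requires the requests package) prints the label and short description Wikidata currently holds for anyone matching a name, which is often the exact string downstream tools echo back.

```python
# Minimal sketch: query Wikidata's public API to see how it currently
# describes a person before you start editing. The name is a placeholder;
# requires the requests package.
import requests

API = "https://www.wikidata.org/w/api.php"

def wikidata_snapshot(name):
    resp = requests.get(API, params={
        "action": "wbsearchentities",
        "search": name,
        "language": "en",
        "format": "json",
    }, timeout=10)
    for hit in resp.json().get("search", []):
        # The short description is frequently what gets echoed downstream.
        print(hit["id"], "-", hit.get("label"), "-", hit.get("description", "no description"))

wikidata_snapshot("Jane Example")
```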
When a publisher will not correct, the next move is a content removal engagement. Removal is the highest-leverage move available because it eliminates a source rather than competing with it. We treat removal as outcome-based work and structure our engagements around guarantees on results, not promises on activity.
Step 4: Suppress What Cannot Be Removed
Not every bad source can be corrected. Some publishers refuse. Some content is hosted on platforms with no editorial policy. Some defamatory content exists in the form of personal blogs, ex-employee LinkedIn posts, or anonymous review sites.
When upstream correction fails, the strategy shifts to suppression. The objective is to flood the credible, recent, consistent source layer with content the model will prefer. Practically, this means:
- A high-authority personal site under your name with clean Schema.org Person markup (a sketch of the JSON-LD appears after this list). Google publishes documentation on Person structured data, and AI models increasingly use Schema markup as a fact backbone.
- An updated Wikipedia entry (where notability supports it) with current, well-sourced claims.
- A claimed and verified Knowledge Panel via Google’s verified entity claim flow.
- Contributor or guest articles on credible industry publications, recent and on-topic.
- A refreshed LinkedIn, Crunchbase, and Wikidata profile.
- A reliable cadence of podcast interviews, panel appearances, or press mentions that the models will index and cite.
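For the Schema.org item above, the markup itself is a small JSON-LD block embedded in the page’s head. To keep the examples in one language, here is a sketch that generates it in Python; every value is a placeholder you would replace with your own details.

```python
# Sketch: generate Schema.org Person JSON-LD for a personal site. Paste the
# output into a <script type="application/ld+json"> tag in the page's <head>.
# All values are placeholders.
import json

person = {
    "@context": "https://schema.org",
    "@type": "Person",
    "name": "Jane Example",
    "jobTitle": "Chief Executive Officer",
    "worksFor": {"@type": "Organization", "name": "Acme Corp"},
    "url": "https://janeexample.com",
    "sameAs": [
        "https://www.linkedin.com/in/janeexample",
        "https://www.crunchbase.com/person/jane-example",
        "https://www.wikidata.org/wiki/Q00000000",
    ],
}

print(json.dumps(person, indent=2))
```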
For executives and individuals, we run this as a dedicated engagement under Executive and Individual Crisis Reputation Management. For companies dealing with a brand-level AI answer problem, the equivalent service is Company Crisis Management. The mechanics are the same. The variables differ.
Step 5: Reinforce With Authoritative, Recent, Consistent Content
The single biggest predictor of whether a model will give you the answer you want is whether the most recent credible sources tell a consistent story. A 2024 paper on retrieval-augmented generation from the Stanford NLP Group describes how grounded models systematically prefer fresher, more cohesive evidence over older or contradictory evidence. That is the seam to operate in.
A reinforcement plan typically includes:
- Two to four authoritative third-party features per quarter, placed in publications with high domain authority and AI citation history.
- An owned-property publishing cadence (your personal site, your company blog) that signals you are current and active.
- Press releases distributed through wire services that AI models index, written around real, verifiable news pegs.
- Schema.org markup audited and refreshed quarterly.
- A consistent biography string that appears identically across every owned and earned property (a quick spot-check is sketched after this list). Inconsistency is itself a signal that lowers trust.
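The bio-consistency item is the easiest to let slip, and also the easiest to spot-check. The sketch below fetches each owned property and looks for the canonical bio string verbatim; the URLs and bio are placeholders, and pages that render text with JavaScript will need a headless browser rather than a plain HTTP fetch.

```python
# Sketch: spot-check whether the canonical one-line bio appears verbatim on
# each owned property. URLs and bio are placeholders; JavaScript-rendered
# pages need a headless browser instead of a plain fetch.
import requests

CANONICAL_BIO = "Jane Example is the CEO of Acme Corp, a logistics software company founded in 2019."

PROPERTIES = [
    "https://janeexample.com/about",
    "https://acmecorp.example/leadership/jane-example",
]

for url in PROPERTIES:
    try:
        page = requests.get(url, timeout=10).text
        status = "consistent" if CANONICAL_BIO in page else "MISSING or reworded"
    except requests.RequestException as exc:
        status = f"fetch failed: {exc}"
    print(f"{url}: {status}")
```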
This is the work that builds compounding value. Each new credible source displaces an older, weaker one in the model’s preference order.
What This Looks Like for a Real Client
A common engagement pattern we see: an executive in their forties whose name is associated, in ChatGPT and Perplexity, with a single regulatory action from more than a decade ago. The action was settled. The narrative around it has long since moved on. But because that 2014 article sits on a high-authority domain and was never updated, every model pulls it into its summary.
We run the prompt battery. We map the sources. We file a correction request with the original outlet (sometimes they update, sometimes they do not). We pursue a news article removal or de-indexing strategy where applicable. We build out the credible recent layer (personal site, refreshed Knowledge Panel, four to six guest features over two quarters, Wikipedia and Wikidata cleanup). Six months in, we re-run the audit. The chatbots have shifted. The summary now leads with current professional work and frames the older issue, if it appears at all, as historical.
The reason this works is the same reason it is hard. The models are not personal. They mirror the public record. If you change the record, the mirror updates.
What Most People Get Wrong About Fixing AI Answers
Three mistakes show up almost every week in client intakes:
1. Trying to “prompt fix” the model. Telling ChatGPT “that is wrong, please remember the correct version” changes nothing for anyone but you. Even where a memory feature stores your correction, it only applies to your own account; every other user still gets whatever answer the public record supports. The fix has to live in the public web, not in a chat window.
2. Only working on Google. Pushing a bad article off page one of Google does not remove it from ChatGPT’s source set or from Perplexity’s index. The work has to cover the retrieval layer of every major model.
3. Letting it sit. Every month a wrong answer stays live, it gets cited, screenshotted, and reposted. The cleanup gets harder, not easier, as the wrong narrative compounds.
The cost of letting negative or false content sit is real, and it shows up in every measurable business metric. BrightLocal’s 2024 Local Consumer Review Survey found that the majority of consumers will avoid a business after seeing negative content, and the 2024 Edelman Trust Barometer shows that trust in institutions and individuals continues to be highly mediated by what people read online, increasingly through AI-generated summaries.
When to Bring in a Firm
You can do a fair amount of this yourself if you have the time, the writing capability, and the willingness to deal with publisher correction policies, Wikipedia editors, and Schema markup. Many people start there and switch to professional help when the audit reveals more sources than expected, or when a defamatory claim is genuinely entrenched.
When that point hits, the value of bringing in a firm is not “we know a secret prompt.” There is no secret prompt. The value is process, leverage with publishers, technical SEO depth, and a team that has run this work hundreds of times. At Digital Crisis Management we structure our engagements around guarantees on outcomes for most of our service lines, because the only metric that matters in this work is whether the answer changed.
Free Consultation
If ChatGPT, Gemini, or Perplexity is currently saying something inaccurate or harmful about you or your company, we will run the prompt battery, map your live source set, and walk you through what is fixable and what is not, at no cost.
Reach out through our contact page or learn more about the full service at AI Search Reputation Management. The AI answer layer is moving quickly. The teams that get out in front of it now will define the search results everyone else inherits.



