Google AI Overviews Display Scam Phone Numbers
Summary
Google AI Overviews show fake company phone numbers, leading to scams. Always verify contact details on official company websites, not AI summaries, to avoid fraud.
Scammers exploit Google AI summaries
Google Search is currently displaying fraudulent customer service phone numbers within its AI Overviews. These AI-generated summaries appear at the top of search results and frequently present scam contact information as verified facts.
Scammers plant these fake numbers on low-profile websites and social media platforms. Google’s generative AI models scrape this data and synthesize it into a conversational answer for users. This process bypasses traditional verification steps that typically prioritize official corporate domains.
The design of AI Overviews encourages users to trust the summary rather than clicking through to a company’s official website. This authority bias makes it easier for bad actors to redirect callers to fraudulent call centers. Once a victim dials the number, scammers attempt to steal payment information or sensitive personal data.
Fraudulent numbers appear in search
Reports of scam numbers in AI Overviews first surfaced on Facebook and Reddit. Users looking for help with bank accounts or airline bookings found that the AI-provided contact details led directly to criminals. The Washington Post and Digital Trends confirmed multiple instances where AI Overviews prioritized scam numbers over official support channels.
Financial institutions have started issuing warnings to their customers. Credit unions and banks are seeing an increase in fraud cases linked to search engine misinformation. These institutions advise customers to ignore the AI summary box and navigate directly to official bank apps or physical cards for contact details.
The problem stems from how Google’s large language models (LLMs) treat information. These systems predict the next word in a sequence based on scraped web data. If a scammer successfully floods the web with a fake number, the AI may perceive that number as the most "probable" answer to a user’s query.
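The effect of this flooding can be shown with a toy sketch. The example below (hypothetical company name and phone numbers, and a deliberate oversimplification of how generative search actually ranks answers) picks whichever number appears most often in scraped text, which is exactly the signal a scammer manipulates:

```python
import re
from collections import Counter

# Toy illustration (NOT Google's actual pipeline): if an answer surfaces
# based on how often it appears in scraped text, flooding the web with a
# fake number shifts the statistically "most probable" answer to the scam.
scraped_snippets = [
    "Call Acme Bank support at 800-555-0100",  # official site, one page
    "Acme Bank help line: 800-555-0199",       # scam post 1
    "Acme Bank phone: 800-555-0199",           # scam post 2
    "Reach Acme Bank at 800-555-0199",         # scam post 3
]

# Extract every US-style phone number, then count occurrences.
numbers = [re.search(r"\d{3}-\d{3}-\d{4}", s).group() for s in scraped_snippets]
most_common_number, count = Counter(numbers).most_common(1)[0]
print(most_common_number)  # 800-555-0199: the flooded scam number wins
```

The official number appears once; the planted number appears three times, so a purely frequency-driven system would repeat the scam number with confidence.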
Google updates its spam protections
Google claims it is actively fighting these scams by refining its automated detection systems. The company stated that its anti-spam protections generally keep fraudulent content out of AI Overviews. However, the company acknowledged that bad actors continue to find gaps in the system.
A Google spokesperson told WIRED that the company is showing official customer support numbers "where possible." The company also launched several updates to improve the accuracy of its AI-generated summaries. Despite these efforts, the underlying technology remains susceptible to "data poisoning," where malicious actors feed false information into the AI’s training or retrieval set.
Google currently offers no official toggle to disable AI Overviews. Users who want to avoid the summaries must scroll past them or use third-party browser extensions to hide the feature. This lack of control forces users to interact with a system that Google admits is still experimental and prone to errors.
Why AI summaries invite fraud
Traditional search results show a list of links, allowing users to evaluate the source of the information. AI Overviews remove this friction by providing a direct answer. While this saves time, it also removes the visual cues that help users identify a suspicious website.
Security researchers have demonstrated that this vulnerability extends beyond search queries. Malicious text can be hidden in emails and public documents specifically to be scraped by AI bots. When a user asks an AI to summarize those documents, the bot repeats the malicious instructions or fake data as if it were legitimate.
This issue is not exclusive to Google. Other AI-powered search engines and chatbots face similar challenges with "hallucinations" and misinformation. The competitive pressure to provide instant answers often outweighs the need for 100 percent factual accuracy in the current AI market.
How to avoid search scams
Users should treat every phone number in an AI Overview with skepticism. While the AI may correctly identify a company’s history or products, it often fails at real-time data verification. Verifying information through a second, independent source is the most effective way to prevent fraud.
Google itself recommends that users double-check phone numbers by performing additional searches. If a number provided by the AI does not match the number found on an official corporate website, users should report the result to Google. The company uses these reports to train its filters and remove malicious summaries.
Follow these steps to ensure you are calling a legitimate business:
- Check the source: Click the link citations in the AI Overview to see which website provided the phone number.
- Use official apps: Access customer support through a company’s verified mobile application rather than a search engine.
- Look at your hardware: Most banks and credit card companies print their official support numbers on the back of your physical card.
- Verify the URL: Confirm the domain actually belongs to the company (watch for misspellings and look-alike domains) before trusting any contact details on the page.
- Cross-reference: Search for the phone number itself to see if other users have reported it as a scam on forums like Reddit.
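The cross-check in the steps above can be sketched in a few lines of code. The helper below (hypothetical numbers; the normalization assumes US-style 10-digit numbers with an optional leading country code) strips formatting so a number quoted in an AI summary can be compared against one taken from an official source:

```python
import re

def normalize(number: str) -> str:
    """Strip formatting so phone numbers can be compared reliably."""
    digits = re.sub(r"\D", "", number)
    # Treat a leading US country code as equivalent to the bare 10-digit form.
    if len(digits) == 11 and digits.startswith("1"):
        digits = digits[1:]
    return digits

def matches_official(ai_number: str, official_number: str) -> bool:
    """Return True only if the AI-provided number matches the official one."""
    return normalize(ai_number) == normalize(official_number)

# Hypothetical values for illustration:
print(matches_official("(800) 555-0100", "+1 800-555-0100"))  # True
print(matches_official("(800) 555-0199", "+1 800-555-0100"))  # False
```

The point of normalizing first is that "(800) 555-0100" and "+1 800-555-0100" are the same number in different dress; a naive string comparison would wrongly flag a legitimate match as a mismatch.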
The risks of LLM summarization
The nature of generative AI makes it difficult to completely eliminate these errors. These models do not "know" facts; they calculate the statistical likelihood of phrases. When scammers manipulate the online data environment, they change the statistics that the AI relies on to build its summaries.
Google’s transition from a search engine to an "answer engine" has significant implications for web security. By moving away from a list of links, Google assumes more responsibility for the accuracy of the information it displays. When the AI fails, the financial risk falls entirely on the user.
While AI is useful for creative tasks like planning a vacation or drafting an email, it is currently unreliable for high-stakes information. For queries involving money, health, or legal advice, the "old ways" of searching remain the safest. Users should continue to rely on primary sources and official websites until AI models can guarantee factual grounding.