Can you trust any online reviews? (2025 reality check)
Short answer: you can trust reviews, if you know what to look for and which signals matter. For brands, that means building an evidence trail (verification), practicing radical transparency (no cherry-picking), and embracing the new compliance landscape (regulators are watching). For consumers, it means upgrading your BS detector.
Online reviews were supposed to be the great equalizer: real experiences from real customers, helping great businesses shine. Yet in 2025, most buyers still open a new tab, type your brand into Google, and stare at star ratings… while quietly wondering, “Are these even real?”
Let’s unpack what’s true, what’s noise, and how both buyers and brands can win.
What you’ll learn in this post:
- Why trust in reviews is under pressure — the scale of fake reviews today, with real stats from Trustpilot, Yelp, Amazon, and regulators.
- How to spot trustworthy reviews as a consumer — practical cues like verified badges, review cadence, and balanced feedback.
- The business impact of reviews — data on how reviews boost conversions (and how distrust kills sales).
- What regulators now require — key takeaways from the FTC in the U.S., the EU Omnibus Directive, and the UK’s CMA.
- How businesses can get it right — a blueprint for verification, transparency, and compliance.
- How consumers should read reviews — a checklist for telling real reviews from fake ones.
- Where AI changes the game — how generative AI is being used to fake reviews, and how platforms and brands can fight back.
- How Tickiwi meets global trust standards.
What’s happening with reviews?
- Consumers still depend on reviews: virtually everyone reads them before buying; PowerReviews found that 99.9% of shoppers consult reviews, and most hunt for negatives to sanity-check the positives.
- But concern about fakes is high: Bazaarvoice reported that 75% of consumers worry about fake reviews, and many trust reviews less than they did five years ago.
- Regulators are stepping in: the U.S. FTC’s new Consumer Reviews and Testimonials Rule bans buying/selling fake reviews (including AI-generated ones), effective October 21, 2024, with civil penalties per violation. In the EU, the Omnibus Directive (2019/2161) explicitly prohibits fake reviews and requires disclosure of verification measures. The UK’s CMA estimates £23bn of annual spending is influenced by reviews and is now extracting undertakings from platforms like Google and Amazon.
Bottom line: Reviews aren’t going away; they’re being professionalized. That’s good for honest businesses and for consumers who know what to look for.
How big is the fake review problem really?
Different platforms measure “fake” differently (removed vs. down-ranked vs. not-recommended), so avoid apples-to-oranges comparisons. But recent disclosures show the scale and the arms race:
- Trustpilot reports removing 4.5 million fake reviews in 2024—7.4% of all reviews submitted—up from 3.3 million (6%) in 2023, with an increasing share caught automatically.
- Yelp says about 18% of reviews in 2024 were not recommended by its algorithm (they don’t count toward star averages). In SEC filings, Yelp notes ~76% recommended and ~15% not recommended as of Dec 31, 2024.
- Amazon says it blocked 275 million suspected fake reviews in 2024 (globally).
Note: “Removed fake” (Trustpilot) and “not-recommended” (Yelp) aren’t identical categories, but both represent reviews that don’t contribute to visible star ratings.
Why reviews can mislead (even when no one is cheating)
Even authentic systems have behavioral and structural biases:
- The “J-shaped” bias: Many categories skew toward tons of 5-stars and a chunk of 1-stars, with fewer middling ratings. That’s partly self-selection: people with extreme experiences post; the “meh” majority stays silent. Classic research by Hu, Pavlou & Zhang documents this pattern.
- Under-reporting of moderate experiences: Recent work continues to show that extreme experiences are over-represented, which can distort averages.
- Design choices: Widget placement and sort defaults (e.g., “most helpful” vs. “most recent”) can amplify certain narratives. The OECD warns against misleading moderation and suppressing negatives.
What this means: A 4.7★ average with only glowing 5-star reviews and zero specific complaints is a yellow flag, not a green one.
Do reviews still move the needle? Absolutely.
If you’ve ever hesitated at checkout then bought after reading a few detailed reviews, you’re normal. Credible studies show reviews materially shift conversion:
- Displaying reviews boosts conversion massively, especially for higher-priced items. Northwestern’s Spiegel Research Center observed conversion lifts of ~190% for low-priced items and ~380% for high-priced items when reviews are present. Verified-buyer badges alone increase purchase likelihood by ~15%.
- Reviews influence big money: the UK’s CMA estimates as much as £23bn in annual spend is influenced by online reviews.
- Digital channels win users, but not always their trust: McKinsey notes consumers rely on digital/social channels yet rank them among their least trusted sources, making verified, transparent review ecosystems more valuable.
The new compliance baseline you need to know
- United States (FTC, effective Oct 21, 2024): No buying/selling fake reviews, no insider reviews without disclosure, no suppression of negatives via threats or bogus legal claims. Violations can trigger significant penalties. The FTC also offers guides for platforms and businesses on collecting and featuring reviews responsibly.
- European Union (Omnibus Directive): Fake reviews are expressly prohibited. If you claim reviews are from actual purchasers, you must disclose your verification process. Platforms and merchants must be transparent about how reviews are selected, moderated, and whether they’re incentivized.
- United Kingdom (CMA): With strengthened powers in 2025, the CMA is extracting undertakings from big platforms to crack down on review manipulation and can levy substantial penalties.
Implication for brands: Treat reviews like regulated content. Document your process; be ready to show your work.
What businesses should do now (and how to do it right)
5 steps to improve your reviews game
1. Verification by default
Link reviews to real transactions wherever feasible: POS receipts, order IDs, booking references, or service tickets. When a review isn’t transaction-linked (e.g., walk-in service), disclose that clearly. This aligns with EU Omnibus requirements and builds trust.
2. Publish (almost) everything, promptly
Don’t filter out negatives unless they’re policy violations (hate speech, doxxing, etc.). Keep all reviews visible and label them according to the moderation they received, for example: “flagged as spam” or “not counted in rating”. This creates an additional trust signal for consumers.
3. Respond with substance
A short, specific reply that acknowledges the issue and offers a fix does more for conversion than a dozen generic “we’re sorry” posts. (Consumers explicitly look for negative reviews to learn how you handle problems.)
4. Maintain an audit trail
Keep immutable logs of invitations sent, responses received, moderation actions, and the verification link to the transaction. This helps with regulator inquiries (FTC/CMA) and demonstrates good faith.
5. Use detection tech—but know its limits
AI can spot anomalies (sudden bursts, cluster behavior, language patterns), but adversaries use AI too. Deloitte recommends diverse, high-quality training data and cross-modal signals to reduce bias and improve detection.
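To make the audit-trail idea in step 4 concrete, here is a minimal, hypothetical sketch of an append-only log in which every entry embeds the hash of the previous one, so any retroactive edit breaks the chain and is detectable. The `AuditLog` class and the event fields are illustrative assumptions, not a real platform API; a production system would persist entries to write-once storage.

```python
import hashlib
import json
from dataclasses import dataclass, field

@dataclass
class AuditLog:
    """Append-only log: each entry embeds the hash of the previous one,
    so later tampering breaks the chain and is detectable (a sketch)."""
    entries: list = field(default_factory=list)

    def append(self, event: dict) -> str:
        # Chain the new entry to the previous entry's hash.
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        payload = json.dumps(event, sort_keys=True)
        entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        self.entries.append({"event": event, "prev": prev_hash, "hash": entry_hash})
        return entry_hash

    def verify(self) -> bool:
        # Recompute every hash; any mismatch means the history was altered.
        prev = "0" * 64
        for e in self.entries:
            payload = json.dumps(e["event"], sort_keys=True)
            if e["prev"] != prev:
                return False
            if e["hash"] != hashlib.sha256((prev + payload).encode()).hexdigest():
                return False
            prev = e["hash"]
        return True

log = AuditLog()
log.append({"type": "invite_sent", "order_id": "A-1001", "channel": "email"})
log.append({"type": "review_received", "order_id": "A-1001", "stars": 4})
assert log.verify()
log.entries[0]["event"]["stars"] = 5  # tamper with history...
assert not log.verify()               # ...and the chain check fails
```

The point is not the specific hashing scheme but the property it buys you: when a regulator asks how a review reached your site, you can show an unbroken trail from invitation to publication.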
What “good” looks like: a trusted review system
Here’s a blueprint you can run in your ecommerce stack or customer feedback pipeline:
- Verified-first intake
  - Auto-invite after every transaction (email/SMS/API).
  - Tag the review with a visible verified buyer badge; evidence shows this alone lifts purchase odds by ~15%.
- Transparent moderation
  - Post nearly all content (after minimal fraud checks).
  - Don’t bury negatives; label any reviews excluded from star calculations (“not counted due to …”). Misleading moderation is itself a regulatory risk.
- Authenticity signals in your widgets
  - Source icons (“purchased online,” “in-store,” “public submission”).
  - Show recency, device/platform, and merchant responses inline. (Users want detail and recency; platforms that show this earn more trust.)
- Audit + analytics
  - Maintain logs linking reviews to orders.
  - Track anomalies: volume spikes, language duplication, bursty posting windows, reviewer graph linkages. (Detection is an ongoing process; adversaries adapt.)
- Compliance disclosures
  - Publish your verification policy and moderation rules (an EU requirement if you claim reviews are from real buyers; good practice everywhere).
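As one illustration of the “track anomalies” point, a baseline check can flag days whose review volume sits far above the historical norm. This z-score sketch is a deliberately simple assumption of how such a detector might start; real systems combine many more signals (language duplication, reviewer graph linkages, device fingerprints).

```python
from collections import Counter
from datetime import date
from statistics import mean, stdev

def flag_burst_days(review_dates: list[date], threshold: float = 3.0) -> list[date]:
    """Flag days whose review count is more than `threshold` standard
    deviations above the mean daily volume (a toy burst detector)."""
    daily = Counter(review_dates)          # reviews per calendar day
    counts = list(daily.values())
    if len(counts) < 2:
        return []                          # not enough history to compare
    mu, sigma = mean(counts), stdev(counts)
    if sigma == 0:
        return []                          # perfectly flat volume, no outliers
    return [d for d, n in daily.items() if (n - mu) / sigma > threshold]

# A normal trickle of one review a day, then a suspicious spike of 40:
dates = [date(2025, 1, d) for d in range(1, 21)] + [date(2025, 1, 21)] * 40
print(flag_burst_days(dates))  # → [datetime.date(2025, 1, 21)]
```

A flagged day is a prompt for human review, not automatic removal; legitimate spikes (a product launch, a press mention) look statistically identical to a bought batch.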
The AI wildcard: reviews just joined the deepfake era
It’s now trivial to generate thousands of plausible, category-specific reviews. Newsrooms and academics have documented the rising use of genAI in deceptive content, while regulators (FTC) explicitly folded AI-generated fake reviews into enforcement. Leading consultancies warn that trust now requires AI-native defenses—from anomaly detection to content provenance.
Practical move: Treat review integrity like cybersecurity—continuous monitoring, incident response, vendor due diligence, and adversarial testing. Deloitte’s guidance on deepfake detection echoes this shift.
What to look for as a consumer (a quick, practical checklist)
Use this when you’re scanning reviews for a product, app, restaurant, or service:
- Look for verified-purchase signals (badges, “verified buyer”, or transaction-linked reviews). A “verified” badge measurably increases credibility and purchase odds.
- Check recency and review cadence: a sudden burst of near-identical 5-stars is suspicious; check for timing anomalies.
- Read the 3- and 4-star reviews: they tend to be more nuanced and specific, mitigating the J-shaped extreme bias.
- Scan negative reviews for owner responses: thoughtful replies that address specifics suggest a “live” business and better post-purchase support.
- Cross-check sources: don’t rely on one platform; compare reviews from at least two. Don’t look only at reviews on the brand’s own website; check independent platforms like Tickiwi, Trustpilot, and Google Reviews.
- Beware of copy-paste language or templated phrasing—an AI red flag. If it reads like an AI-generated review, it probably is. Look for human-like content.
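The “copy-paste language” cue can even be checked mechanically. This toy sketch (function names and threshold are illustrative assumptions) compares two reviews by the overlap of their word trigrams using Jaccard similarity: near-duplicate, templated reviews score high, while unrelated genuine text scores near zero.

```python
def shingles(text: str, k: int = 3) -> set[tuple[str, ...]]:
    """Word k-grams; near-duplicate reviews share many of these."""
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a: str, b: str) -> float:
    """Similarity in [0, 1]: shared trigrams over total distinct trigrams."""
    sa, sb = shingles(a), shingles(b)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

r1 = "Amazing product, fast shipping, would absolutely buy again from this seller"
r2 = "Amazing product, fast shipping, would absolutely buy again from this store"
r3 = "Battery died after two weeks and support never answered my emails"
print(round(jaccard(r1, r2), 2))  # → 0.8  (templated red flag)
print(round(jaccard(r1, r3), 2))  # → 0.0  (unrelated genuine text)
```

Platforms run far more sophisticated versions of this across millions of reviews, but the intuition is the same: authentic experiences rarely share long, identical phrasing.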
How Tickiwi meets (and exceeds) global trust standards
Tickiwi’s approach wasn’t designed in a vacuum: it mirrors the very principles regulators and consumer bodies are now demanding worldwide, while fixing the blind spots where bigger platforms still fall short. Here’s how:
- Source clarity: Every review in Tickiwi carries a transparent badge showing how it was collected—whether directly from a purchase, from an offline request, or as a public submission. This aligns with EU Omnibus rules that require platforms to disclose their verification process, so consumers immediately see the difference between a verified buyer and casual feedback.
- Open negativity: Tickiwi publishes both the good and the bad, with no filters to hide criticism. Businesses are encouraged to respond quickly and substantively, echoing FTC and OECD guidance that engaging with negative reviews builds more trust than suppressing them ever could.
- Evidence trail: Behind each verified review sits an order ID or transaction reference, creating a provable link between customer and feedback. This isn’t just good practice; it’s exactly what European regulators expect when a business claims reviews are “from real purchasers.”
- Ongoing detection & transparency: Tickiwi is committed to publishing regular “trust stats” (e.g., share of verified reviews, percentage flagged or removed, median response times). Think of it as a lightweight Trust & Safety Report—similar to what Yelp and Trustpilot release—so businesses and consumers alike can see the system working in the open.
Fast answers to common “can I trust this?” moments
A product has thousands of 5-star reviews and almost no negatives. Is that good?
Not automatically. The J-shaped bias means extremes dominate. Look for specifics, recency, and verified badges. Then read 3–4★ reviews for nuance.
Are “verified buyer” badges just cosmetic?
No—credible research shows a ~15% lift in purchase likelihood when the badge is present. It’s a meaningful trust signal.
How do I spot AI-generated reviews?
Patterns (repeated phrasing, oddly generic specifics), synchronous bursts, and thin reviewer histories. But it’s hard—platforms are using AI detection because the content arms race is real.
Is it legal to ask for reviews?
Yes—if you don’t pay for positive sentiment, don’t hide negatives, and you disclose incentives (e.g., sweepstakes entries).
Final word
You can trust online reviews—if the system shows its receipts. For consumers, that means reading with a trained eye. For businesses, it means instrumenting your review flow with verification, transparency, and continuous detection. Do that, and reviews become what they were always meant to be: a durable, compounding asset that grows credibility—and sales.
Thanks for reading. If you found this useful, we’d love for you to share it on LinkedIn and tag us. Let’s build the future of reviews together.