Can you help with an honest Perplexity AI app review?

I’ve been testing the Perplexity AI app for research and everyday questions, but I’m not sure if I’m using all its features correctly or judging its answers fairly. Sometimes it’s incredibly helpful, other times it feels off or repetitive. I’d really appreciate feedback from people who’ve used it longer—what’s your honest review, what are the pros and cons, and how do you get the most value out of Perplexity AI?

I’ve been using Perplexity a lot; here’s a straight take so you can judge your own use.

Where it shines

  1. Research with sources

    • Ask focused questions, not vague ones.
    • Example: “Summarize recent RCTs on intermittent fasting for weight loss, list sources with years.”
    • It pulls links. Click them. Check if they match the summary.
    • If you see blogspam or low-quality sites, treat the answer as weak.
  2. Everyday questions

    • It does well on: quick how-tos, tech setup, product comparisons, summarizing long articles.
    • For opinions or “best X”, ask it to show criteria.
      Example: “Compare A vs B, show pros/cons, list how you ranked them.”
  3. Followups

    • The followup feature is underrated.
    • Ask: “Where did you get that claim about X? Quote exact sentence from source.”
    • Or: “Give only what is directly supported by the sources above.”
  4. Multi-step tasks

    • Split tasks into steps.
    • Step 1: “Gather 5 strong sources on topic X, show short notes for each.”
    • Step 2: “Using only those 5, answer this question in bullet points.”
    • This keeps it grounded and cuts down on hallucinated filler (there’s a rough API sketch of this two-step pattern right after this list).
  5. When it feels wrong

    • Red flags: confident tone, no quoted numbers, vague sources, or references to “experts say” without names.
    • If you suspect nonsense, say: “List every factual claim in bullets, attach citation link to each.”
    • If it cannot, do not trust the answer.
  6. Using different modes

    • Short Q&A mode works for quick stuff.
    • For research, prefer long answers with citations, then ask it to condense.
    • If you get fluff, say: “Remove filler. Keep only verifiable facts with links.”
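
If you ever drive Perplexity through its API instead of the app, the two-step pattern from point 4 is easy to wire up. A minimal sketch in Python, assuming the OpenAI-compatible chat completions endpoint, the "sonar" model name, and a top-level "citations" field in the response (all worth re-checking against the current docs); PPLX_API_KEY is just wherever you keep your key:

```python
# Two-step "gather sources, then answer only from them" pattern.
# Assumptions: Perplexity's OpenAI-compatible endpoint, the "sonar" model,
# and a top-level "citations" list in the response JSON.
import os
import requests

API_URL = "https://api.perplexity.ai/chat/completions"
HEADERS = {"Authorization": f"Bearer {os.environ['PPLX_API_KEY']}"}

def ask(prompt: str) -> tuple[str, list[str]]:
    """Send one question, return (answer_text, cited_urls)."""
    resp = requests.post(
        API_URL,
        headers=HEADERS,
        json={"model": "sonar",
              "messages": [{"role": "user", "content": prompt}]},
        timeout=60,
    )
    resp.raise_for_status()
    data = resp.json()
    return data["choices"][0]["message"]["content"], data.get("citations", [])

# Step 1: gather sources only, no conclusions yet.
notes, sources = ask(
    "Gather 5 strong sources on topic X. For each: title, year, "
    "and a one-line note. No conclusions yet."
)

# Step 2: answer strictly from the gathered notes.
answer, _ = ask(
    "Using ONLY these 5 sources, answer my question in bullet points, "
    "one citation per bullet:\n\n" + notes
)
print(answer)
```

Because each call here is stateless, step 2 literally cannot see anything beyond the notes you paste back in, which is the whole point of the split.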

How to judge it fairly

  • Compare answers to 2 or 3 good sources you trust.
  • Expect it to be “first draft”, not final.
  • For anything health-, legal-, or money-related, treat it like a hint, then verify manually.

Quick “settings” for better answers

  • Use precise prompts: who, what, when, how many, from which year range.
  • Force structure: “Answer in 3 sections: summary, key facts, sources.”
  • Ask it what it is unsure about: “List things you are uncertain about in your answer.”

If you do that and still feel like half the time it is useless, you are not doing anything wrong. It hits a ceiling with niche topics, new research, and anything that needs real expertise or lived experience.

You’re not crazy; Perplexity is very “wow / what the hell?” depending on how you use it and what you ask.

I mostly agree with @viaggiatoresolare’s breakdown, but I’d add a few angles they didn’t touch and push back on a couple of things.

1. How I’d judge it fairly
Instead of “is this answer good?” I ask 3 questions:

  1. Is this better than what I’d get from skimming Google for 5–10 minutes?
    • If yes, it’s already a win. It’s not supposed to be a PhD supervisor.
  2. Did it actually save me time?
    • If I still have to open 10 tabs and reconstruct everything, then it’s just a fancy search wrapper.
  3. Would I act on this without checking?
    • If the answer is yes and the topic is serious (health, money, law, work commitments), I slow down and verify manually.

If you’re expecting “final truth,” it’ll feel disappointing a lot. If you mentally downgrade it to “smart research intern that works for free and never sleeps,” it suddenly looks pretty good.

2. Where I personally find it overrated
This is where I slightly disagree with @viaggiatoresolare:

  • Everyday questions.
    People hype it for “what should I eat in Rome for 3 days” kind of stuff. Honestly, I find it mediocre there. It mixes TripAdvisor clichés, blogspam, and generic advice. For travel, restaurants, products, etc., I still lean on Reddit, specialist blogs, or people I know.
    I use Perplexity here only to:

    • Get a framework (“what things should I consider when choosing a mattress / router / monitor”)
    • Then I do my own digging.
  • “Quick how-tos.”
    It’s decent, but for anything that can break, cost money, or injure you, the hallucinations are just not worth it. It sometimes confidently invents settings, menu items, or steps that simply do not exist.

3. Features people underuse (that actually matter)

Instead of focusing on just prompts, I think these usage patterns matter more:

  • Conversation reset vs long threads
    Long threads slowly accumulate weird assumptions. If your answers start feeling off, just start a new query instead of “clarifying” for the 10th time. Fresh context often gives a better result than another followup.

  • “Show me what I’m missing”
    Try prompts like:

    • “What are the main counterarguments / critiques of the position you just took?”
    • “List the strongest reasons your answer might be incomplete or wrong.”
      This exposes blind spots and helps you judge if it’s giving you a one‑sided picture.
  • Asking it to model your constraints
    Example:

    • “Pretend you’re explaining this to a grad student in X who already knows Y and Z.”
    • “Respond as if I’m a non‑technical manager who has 5 minutes before a meeting.”
      When it knows your baseline, the answers feel way less random and you can judge them more sensibly.
  • Let it propose how to help before you decide
    Instead of “Write me X,” try:

    • “Given this goal, list 3 ways you could help (outline, critique, checklist, etc). Then I’ll pick one.”
      This reduces the “this isn’t what I wanted at all” feeling.

4. How I personally use it for research (and stay sane)

For research stuff, I treat it like a layered filter, not an oracle.

  1. Orientation pass

    • “Give me the main concepts, key debates, and 5 recurring names in [topic]. Just headlines and 1‑line explanations.”
    • I don’t care if it’s perfectly accurate here. I want a map, not a verdict.
  2. Targeted drilling

    • Once I see recurring author names / models / terms, I ask only about those.
    • “Explain [specific paper / concept], list limitations and common criticisms.”
      This is where you can tell if it’s actually grounded or just stringing buzzwords.
  3. Final cross‑check

    • I manually check 1–2 of the key papers or sources it mentioned.
    • If those 1–2 are grossly misrepresented, I discard the whole answer and restart narrower.

You’re using it fairly if you (a) verify the big claims, (b) don’t expect it to replace actual reading, and (c) notice when it’s out of its depth instead of trying to force it.

5. When it feels useless, it might not be you

A few categories where it reliably struggles, regardless of prompt finesse:

  • Niche or very recent research (last few months, new preprints, weird subfields)
  • Highly local info (tiny businesses, hyper‑specific bureaucracy questions)
  • Stuff that needs real lived experience (office politics, dating, navigating weird workplace cultures)
  • Anything where the stakes are high and the information is nuanced

If half your use is in those zones, it will legit feel “meh” a good chunk of the time. That’s not you “using it wrong”; that’s the tool’s limit.

6. Simple gut-check heuristics

When I read a Perplexity answer, I do a quick vibe-check (mechanized, crudely, in the sketch after this list):

  • If it reads like marketing copy (lots of adjectives, no specifics) → low trust
  • If it avoids numbers, dates, or concrete names → low trust
  • If it gives a neat, symmetrical list of pros/cons that feels too balanced → probably generic filler
  • If it admits uncertainty or says “I couldn’t find strong sources on X” → oddly, I trust it more
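
If you want to mechanize the crude parts of that vibe-check, it’s a few lines. A toy sketch with thresholds I made up; obviously no substitute for actually reading the answer:

```python
# Mechanical version of the gut-check: flag answers with no numbers,
# no links, or marketing-speak. All thresholds and word lists are arbitrary.
import re

HYPE_WORDS = ("revolutionary", "seamless", "cutting-edge", "game-changing")

def low_trust_flags(answer: str) -> list[str]:
    """Return reasons to distrust the answer (empty list = no red flags)."""
    flags = []
    if not re.search(r"\d", answer):
        flags.append("no numbers or dates")
    if "http" not in answer:
        flags.append("no source links")
    if sum(answer.lower().count(w) for w in HYPE_WORDS) >= 2:
        flags.append("reads like marketing copy")
    return flags

print(low_trust_flags("This revolutionary, seamless tool changes everything."))
# ['no numbers or dates', 'no source links', 'reads like marketing copy']
```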

You’re judging it fairly if you’re willing to say “this saved me an hour” and also “this is useless for this type of task” without trying to force everything through it.

TL;DR of an honest review

  • Great as: fast orientation, structured summaries, “explain like I’m X”, brainstorming, and rough first drafts.
  • Mediocre as: travel planner, product picker, or “replace reading actual sources” machine.
  • Risky as: authority on medical, legal, financial, or life‑changing stuff.
  • Fair way to see it: a very smart, occasionally delusional intern that’s fantastic for drafts and terrible as final QA.

If your experience is “sometimes it’s magic, sometimes it’s trash,” that’s basically the correct expectation level right now.

Perplexity is basically that coworker who is brilliant in some meetings and wildly off in others. You’re not misjudging it; the swings are real.

Let me come at this from a different angle than @viaggiatoresolare and the other breakdown you quoted.


1. Judge it by “mode,” not by answer quality alone

Instead of “is Perplexity good,” ask “is this mode the right one?”

Roughly:

  • Research / info-gathering mode
    Strong when:

    • You need a quick landscape of a topic
    • You want linked sources in one place
    • You’re checking what’s already known rather than discovering bleeding-edge stuff
    Weak when:
    • You need nuance from primary sources
    • You care about subtle disagreements between experts
  • Writing / drafting mode
    Pros:

    • Great for ugly first drafts, outlines, email scaffolding
    • Can compress messy notes into something structured
    Cons:
    • Style is often bland, “blogspam energy”
    • It will sometimes fabricate details to make the text sound smoother
  • Decision-helping / recommendation mode
    Honestly where it’s most overrated.

    • Product picks, travel, “what’s the best X” feel generic
    • Reviews it surfaces can be heavily skewed to mainstream / SEO-farmed sites

If you treat all three modes like they should be equally good, you’ll keep being disappointed. For me it is primarily a research & drafting tool, not a recommendation engine.


2. Where I disagree slightly with the earlier takes

  • Everyday questions
    I actually do find it useful for “life admin,” but not the way people market it.
    Good for:

    • Converting bureaucracy into plain language
    • Comparing policies, rules, terms & conditions in a readable way
    Bad for:
    • “What school / doctor / restaurant should I choose” type questions
  • “Just like a smart intern” analogy
    I think this is a bit too generous in one way and too harsh in another.

    • Real interns have domain intuition and can say “I don’t know” more naturally.
    • Perplexity is better than an intern at brute summarization across many pages.

So I see it closer to a “very fast but context-clueless abstracting machine” rather than a human-like research assistant.


3. Features most people miss that change how fair it feels

Trying not to repeat previous suggestions:

  • Source pattern checking
    Instead of only reading the final answer, look at:

    • Are all the sources the same type (e.g., generic blogs, medium-level news sites)?
    • Is it mixing primary sources (papers, docs, official sites) with commentary?
      If the answer is built on a pile of commentary, treat it as opinion, not fact (the small script after this list roughs out that check).
  • Query reframing
    If it keeps giving you shallow answers, explicitly force it into a different role:

    • “Act as a critic of the sources you are citing. Explain where they might be biased or out of date.”
    • “Only use primary sources (official docs, standards, laws, or scientific papers). If you can’t, say so.”
      This shifts it from “content generator” into something closer to a curator.
  • Constraint sharpening
    Most people stay too vague:

    • Bad: “Explain X.”
    • Better: “Explain X in 5 bullet points, each with exactly one citation, targeting someone who already knows Y and Z.”
      You can judge it more fairly if you give it a concrete format to hit.
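
If you pull the citation list out via the API (or just copy the links by hand), the source pattern check is a few lines to rough out. A toy sketch; the PRIMARY_HINTS domains are my own illustrative guesses, not anything Perplexity defines:

```python
# Crude "source pattern check": bucket cited URLs into primary vs commentary.
# PRIMARY_HINTS is an illustrative assumption -- tune it for your field.
from collections import Counter
from urllib.parse import urlparse

PRIMARY_HINTS = (".gov", ".edu", "arxiv.org", "doi.org",
                 "pubmed.ncbi.nlm.nih.gov")

def source_profile(urls: list[str]) -> Counter:
    """Count how many citations look primary vs commentary."""
    profile = Counter()
    for url in urls:
        host = urlparse(url).netloc.lower()
        kind = "primary" if any(h in host for h in PRIMARY_HINTS) else "commentary"
        profile[kind] += 1
    return profile

# Dummy example URLs, just to show the shape of the output.
citations = [
    "https://pubmed.ncbi.nlm.nih.gov/12345678/",
    "https://some-seo-blog.example/10-best-routers",
    "https://arxiv.org/abs/2301.00001",
]
print(source_profile(citations))  # Counter({'primary': 2, 'commentary': 1})
```

If the commentary bucket dominates, that’s your cue to read the answer as opinion.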

4. How I personally use Perplexity for research without losing trust

Different angle from the layered approach you already saw:

  1. Hypothesis builder
    I ask it to:

    • “Propose 3 plausible explanations / hypotheses for [phenomenon]. Label each as speculative or well supported.”
      This gives me ideas, not conclusions.
  2. Contradiction finder
    After I read a few real sources, I go back and say:

    • “Here is an excerpt from [paper / article]. List all the ways this contradicts or refines what you previously said.”
      If it can reconcile or correct itself using the real text, its earlier “mistake” becomes less of a problem. (There’s a tiny prompt-builder for this step right after the list.)
  3. Boundary tester
    I occasionally ask:

    • “Describe where your knowledge on this topic probably cuts off in terms of years, subfields, or regional nuance.”
      The answer is not perfect, but you’ll often see hints like “most information is up to [year]” or “there is limited coverage on [region].”
      That tells you when you’re expecting too much.
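
The contradiction finder is really just a prompt shape, so here’s a tiny helper that only assembles the text. Nothing API-specific is assumed; paste the result into the app or send it through whatever client you use:

```python
# Builds the "contradiction finder" prompt from an excerpt you actually read
# and the model's earlier answer. Pure string assembly, no API calls.

def contradiction_prompt(prior_answer: str, excerpt: str) -> str:
    """Return a prompt asking the model to reconcile itself with a real source."""
    return (
        "Here is an excerpt from a source I read myself:\n\n"
        f"{excerpt}\n\n"
        "Here is what you previously told me:\n\n"
        f"{prior_answer}\n\n"
        "List every way the excerpt contradicts or refines your earlier "
        "answer. If there is no conflict, say so explicitly."
    )

print(contradiction_prompt(
    prior_answer="<the model's earlier answer>",
    excerpt="<a paragraph you copied from the actual paper>",
))
```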

5. When “it feels useless,” what’s usually happening

Based on your description:

  • If you’re asking very open questions (“what’s the best way to think about X”), it tends to regress to generic self-help.
  • If you’re asking highly specific lived-experience stuff (office dynamics, personal dilemmas), it lacks context and gives “polite HR advice.”
  • If you’re deep in niche research, it may be stuck on surface-level or outdated summaries.

In those cases, the fair judgment is not “Perplexity is bad,” but “this is the wrong category of problem for this tool.”


6. Pros & cons, framed like an honest app-store review

For the Perplexity AI app specifically:

Pros

  • Very fast multi-page summarization in a single answer
  • Clear citation display helps you audit where things came from
  • Good at translating dense or technical material into readable language
  • Strong for “conceptual maps” of new topics
  • Handy as a portable notes-expander / first-draft generator

Cons

  • Still hallucinates specific details, especially UI steps, niche facts, and edge cases
  • Bias toward mainstream and SEO-friendly sources, which can flatten nuance
  • Weak at local, recent, or highly subjective information
  • Style often feels generic and padded unless you heavily constrain it
  • Easy to overtrust because the interface looks so confident

7. How to know if you’re judging it fairly

You’re being fair if:

  • You compare it to how long manual searching would have taken, not to an expert’s lifelong knowledge
  • You only “act without checking” on low-stakes stuff
  • You treat controversial or high-stakes topics as prompts for deeper reading, not as final answers
  • You’re willing to say “this is excellent for my use in [A, B, C] and basically ignored for [D, E, F]”

And for what it’s worth, your “sometimes incredible, sometimes useless” experience is not a usage error. That is basically the current state of tools like Perplexity, regardless of how careful your prompts are and regardless of whether you agree more with @viaggiatoresolare’s view or with mine.