Perplexity New Features and Use Cases in March 2026

Perplexity has moved past the “AI search toy” phase and is now behaving like a product family with distinct modes, verticals, and enterprise controls.
In March 2026, the most meaningful changes are not cosmetic: they touch model orchestration, deep research workflows, finance-grade outputs, and enterprise memory and governance.
The core idea remains consistent: Perplexity wants to deliver answers that are tightly connected to sources and auditable references rather than open-ended chat.
That positioning is increasingly reinforced by feature decisions, such as Model Council for cross-checking answers across multiple frontier models.
It is also reinforced by product investments in Deep Research, which Perplexity frames as benchmark-driven and optimized for accuracy and reliability.
At the same time, Perplexity is sharpening monetization strategy by moving away from ads inside answers and emphasizing subscriptions, enterprise, and developer services.
This pivot matters because it changes how the product is engineered, since ad-driven systems tend to optimize reach and frequency while subscription-driven systems tend to optimize trust and retention.
On the adoption side, Perplexity's audience is no longer just students and curious users: law firms, consulting teams, and public-sector organizations are now showcased as repeatable enterprise deployments.
A second adoption channel is the Sonar API platform, which pushes Perplexity into embedded workflows for publishers and businesses that want an answer engine inside their own products.
This report consolidates what is new as of March 2026 and what Perplexity is actually used for, with an emphasis on publishable facts and clear boundaries between product reality and marketing language.
··········
Perplexity is becoming an orchestration layer rather than a single-model chatbot.
Perplexity’s recent shipping cadence shows a pattern: it increasingly exposes model choice, comparison, and specialized modes rather than treating “the model” as a hidden implementation detail.
This is visible in Model Council, which is explicitly designed to run multiple frontier models in parallel and then synthesize agreement and disagreement.
It is also visible in how Deep Research is described, because Perplexity ties it to “best available models” plus proprietary search and sandbox infrastructure.
The practical consequence is that Perplexity is less about “one assistant personality” and more about selecting the right workflow for the job and making uncertainty visible.
Model Council changes the verification workflow.
Instead of asking a question once and trusting a single answer, Model Council is built to show you where strong models converge and where they diverge.
That makes it particularly relevant for professional research and writing, where the cost of a wrong answer is higher than the cost of running extra inference.
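Perplexity does not publish Model Council's synthesis logic, but the cross-checking pattern described above can be sketched as a majority-and-dissent summary over per-model answers. Everything below (the function name, the naive string normalization) is illustrative, not Perplexity's implementation:

```python
from collections import Counter

def council_verdict(answers: dict[str, str]) -> dict:
    """Summarize where model answers converge and diverge.

    `answers` maps a model name to its answer string. This is an
    illustrative sketch of the cross-checking idea, not Perplexity's
    actual synthesis logic; real answers would need semantic matching,
    not exact string comparison.
    """
    counts = Counter(a.strip().lower() for a in answers.values())
    majority, support = counts.most_common(1)[0]
    dissenters = [m for m, a in answers.items()
                  if a.strip().lower() != majority]
    return {
        "majority_answer": majority,
        "agreement": support / len(answers),
        "dissenting_models": dissenters,
    }
```

The value of the pattern is the third field: a professional reader cares less about the majority answer than about which model dissented and why.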
........
How Perplexity’s “orchestration” approach shows up in the product
| Product layer | What the user sees | Why it matters | Primary source basis |
| --- | --- | --- | --- |
| Deep Research | A dedicated mode that produces longer research outputs | It is framed as accuracy/reliability oriented, not just “fast answers” | Perplexity changelog describes upgraded Deep Research and its architecture framing |
| Model Council | Parallel multi-model answering with synthesis | It operationalizes cross-checking and makes disagreement explicit | Perplexity changelog announces Model Council |
| Model lineup updates | New model options surfaced to users | It signals a product philosophy of routing to “best available” options | Changelog notes adding models such as Claude Sonnet 4.6 and Gemini 3.1 Pro |
| Enterprise controls | Memory and security controls in business context | It shifts Perplexity from consumer app to governed platform | Changelog references enterprise memory and security-related shipping |
··········
What is new in March 2026 is a stack of features, not a single headline.
The cleanest primary record of Perplexity’s recent changes is its official changelog, which is unusually explicit about what shipped and when.
In the February 2026 entries that define what March users will experience, the biggest items cluster into Deep Research upgrades, Model Council expansion, Finance becoming more auditable, and Comet accelerating toward iOS distribution.
Perplexity also explicitly shipped “response preferences” and “enterprise Memory,” which indicates the product is investing in controllability and repeatability rather than only raw capability.
On the model layer, Perplexity’s changelog notes adding models such as Claude Sonnet 4.6 and Gemini 3.1 Pro, which is relevant for users who treat Perplexity as a routing layer across vendors.
Deep Research is being positioned as a benchmark-driven premium workflow.
Perplexity states that Deep Research has been upgraded to achieve state-of-the-art performance on external benchmarks and that it pairs “best available models” with Perplexity’s proprietary search engine and sandbox infrastructure.
Perplexity also states that Deep Research runs on a specific top-tier model for Max and Pro users and that it will upgrade to top reasoning models as they become available, which is a roadmap-style promise embedded in the changelog text.
Finance is being turned into a more auditable experience.
Perplexity’s changelog explicitly calls out analyst ratings in Perplexity Finance and “auditable financials with links to SEC filings,” which signals an intentional move toward verifiability in finance outputs.
This matters for professional readers because finance is a domain where citations are not a nice-to-have but a requirement, and the product is increasingly building the citation trail into the UI.
Comet is becoming a distribution strategy, not a side project.
Perplexity’s changelog references pre-ordering for Comet on iOS and improvements like a desktop tab switcher and a more personalized Comet Assistant, which together suggest an effort to make Perplexity “the browsing surface,” not just a destination site.
........
Perplexity features you can safely describe as “new going into March 2026”
| Feature area | What shipped (primary wording summarized) | Why it matters in practice | Where it is documented |
| --- | --- | --- | --- |
| Deep Research upgrade | Upgraded Deep Research and positioned it as accuracy/reliability oriented with search + sandbox infrastructure | It targets “report-grade” workflows, not just Q&A | Official changelog (Feb 6, 2026) |
| Model Council | Multi-model mode shipped as a core workflow | It enables fast cross-checking without manual copy/paste across tools | Official changelog (Feb 6, 2026) and changelog index |
| Finance verifiability | Analyst ratings and auditable financials with links to SEC filings | It strengthens the citation chain for finance questions | Official changelog (Feb 20, 2026) |
| Comet distribution | Pre-ordering for Comet on iOS and productivity improvements | It expands Perplexity into “default browsing” behavior | Official changelog (Feb 20, 2026) |
| Enterprise Memory | Memory for enterprise contexts | It supports repeatable internal workflows and knowledge retention | Official changelog (Feb 20, 2026) |
··········
Perplexity’s “no ads in answers” decision changes the product narrative and the buyer narrative.
Multiple credible outlets report that Perplexity is moving away from advertising in its product because ads risk undermining user trust in the objectivity of answers.
The Financial Times frames the shift as discontinuing ads and explicitly tying the decision to trust and willingness to pay, which aligns with a subscription-first strategy.
WIRED frames it as a broader strategic pivot away from mass-market growth goals toward becoming the most accurate tool for paying users and enterprise use cases.
The Verge emphasizes the same trust dynamic and highlights that Perplexity intends to focus on services professionals are willing to pay for, especially in domains where correctness is valued.
Business Insider adds detail about leadership emphasis on subscriptions and enterprise sales and describes a focus on high-powered users such as finance professionals, doctors, and CEOs.
This is not only a moral stance; it is a product design choice.
If you monetize answers with ads, you create incentives for engagement and impression volume, and you also create user suspicion about whether the answer was “the best answer” or “the best monetized answer.”
Perplexity is explicitly choosing the opposite positioning, which is consistent with shipping features like Model Council and “auditable financials,” because those are trust-building features rather than engagement hacks.
........
How the ads shift changes Perplexity’s competitive story
| Dimension | Ads-in-answers strategy tends to optimize | Subscription-first strategy tends to optimize | What Perplexity is signaling |
| --- | --- | --- | --- |
| User trust | Often harder to sustain at scale | Easier to align with “answer quality” | Perplexity cites trust concerns as a reason to move away from ads |
| Product roadmap | Features that increase usage frequency | Features that increase correctness and retention | Changelog focus includes verification and auditable outputs |
| Target user | Broad free audience | Paying professionals and enterprise teams | Reporting highlights a tilt toward professional users |
··········
Who uses Perplexity spans three observable groups: consumers, professionals, and embedded “developer” users.
The most defensible “who uses it” evidence comes from three sources that measure different things: Perplexity’s published enterprise customer stories, third-party business adoption datasets, and major media reporting on consumer scale.
On the enterprise side, Perplexity publicly showcases deployments in law, consulting, and government, which are high-signal use cases because they require repeatability, governance, and citations.
For example, Gunderson Dettmer’s case study states Perplexity Enterprise is used by attorneys to stay up-to-date on legal developments, do deep dives on emerging tech verticals, track client markets and competitors, and analyze trends in the venture ecosystem.
Perplexity also publishes a Latham & Watkins customer story describing an internal market research function supporting lawyers with commercial insight and market intelligence, which is a clear “research-as-a-service” pattern.
On the “how widely used in business” angle, Ramp’s dataset reports that as of January 2026, 11% of organizations that have a vendor in the generative AI category use Perplexity AI, and it ranks within that category on Ramp’s vendor page.
On consumer scale, WIRED reports Similarweb-based estimates around tens of millions of monthly active users, while noting tracking limitations for the Comet browser, which is a useful caution when writing MAU claims.
Developer and partner usage is a separate adoption channel.
Perplexity’s Sonar API case study with Lee Enterprises describes using Perplexity’s answer engine to generate content at scale, which is a concrete example of Perplexity being embedded rather than visited.
That matters because an embedded answer engine can have high business impact even if consumer MAU narratives fluctuate.
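As a rough illustration of what "embedded" means in practice, a Sonar-style integration reduces to an authenticated HTTP call that returns a sourced answer. The endpoint URL, model name (`sonar`), and payload shape below are assumptions for illustration only; consult Perplexity's current API reference before relying on any of them:

```python
import json
import urllib.request

API_URL = "https://api.perplexity.ai/chat/completions"  # assumed endpoint


def build_sonar_request(question: str, model: str = "sonar") -> dict:
    """Build an OpenAI-style chat payload for a sourced answer.

    The model name and payload shape are assumptions for illustration;
    check Perplexity's current API docs before relying on them.
    """
    return {
        "model": model,
        "messages": [
            {"role": "system",
             "content": "Answer concisely and cite sources."},
            {"role": "user", "content": question},
        ],
    }


def ask_sonar(question: str, api_key: str) -> str:
    """Send the payload and return the answer text (network call)."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_sonar_request(question)).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

The point of the sketch is architectural: a publisher can wrap one function like `ask_sonar` inside its own CMS or pipeline, which is why embedded adoption is largely invisible to consumer MAU trackers.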
........
Evidence-backed user segments and what they typically do with Perplexity
| Segment | What we can state without guessing | What they use it for | Source basis |
| --- | --- | --- | --- |
| Enterprise legal teams | Law firms are showcased as Perplexity Enterprise customers | Legal current awareness, deep research, market and competitor tracking | Perplexity customer story (Gunderson) and customer list |
| Enterprise research and consulting | Consulting and internal research teams are showcased as customers | Market intelligence and insight production workflows | Perplexity customer list and Latham case study |
| Public sector and operations teams | Government agency deployments are showcased | Operational writing and knowledge workflows | Perplexity customer story (Montana DNRC) and customer list |
| Business adoption (cross-industry) | A measurable share of Ramp-tracked orgs using genAI vendors pay for Perplexity | Business research, knowledge work, and adoption alongside other vendors | Ramp vendor page for Perplexity |
| Developer/embedded users | Sonar API is used to embed Perplexity answers into products and content pipelines | Scalable content generation and answer delivery using internal + external data | Sonar API case study (Lee Enterprises) |
| Consumer knowledge workers | Large-scale consumer usage is reported via third-party analytics | Search replacement, research, and faster synthesis with sources | WIRED reporting on usage estimates |
··········
What Perplexity is used for is best explained as “research workflows,” not “chat.”
If you describe Perplexity as a chatbot, you miss the central thing users pay for, which is the research loop from question to sourced synthesis to follow-up verification.
Perplexity’s enterprise stories repeatedly point to “stay up-to-date,” “deep dives,” “tracking,” and “analysis,” which are classic research verbs rather than conversational verbs.
Perplexity’s recent feature set supports that framing, because Deep Research and Model Council are explicitly research structures rather than “persona features.”
Perplexity Finance is another example, because it is being shaped into an “audit trail” experience with elements like analyst ratings and links to SEC filings.
The Sonar API extends the same idea into embedded environments, where the “research workflow” becomes a product capability sold to publishers and businesses.
Deep Research is the “report output” lane.
Perplexity frames the upgraded Deep Research as accuracy and reliability oriented and supported by proprietary infrastructure, which is exactly what a report workflow needs.
Model Council is the “verification lane.”
Model Council is designed to compare models and synthesize agreement and disagreement, which is a direct response to the professional pain point of single-model blind spots.
........
High-signal Perplexity use cases, mapped to the feature that supports them
| Use case | Why it fits Perplexity specifically | Feature(s) that match the workflow | Best supporting source |
| --- | --- | --- | --- |
| Current-awareness monitoring | Fast synthesis across sources, repeated queries | Standard answers + saved workflows, plus Enterprise context | Enterprise legal customer story framing |
| Deep-dive research briefs | Requires longer multi-step synthesis and citations | Deep Research | Official changelog describing upgraded Deep Research |
| Cross-checking high-stakes answers | Requires surfacing uncertainty and disagreement | Model Council | Official changelog announcing Model Council |
| Finance due diligence and explainers | Requires auditable references like filings and ratings | Perplexity Finance “auditable financials” and analyst ratings | Official changelog (Feb 20, 2026) |
| Internal knowledge Q&A | Requires governance, repeatability, and enterprise controls | Enterprise Memory and related enterprise features | Official changelog (enterprise Memory) and enterprise product direction |
| Embedded answer engine | Requires API-delivered sourced answers at scale | Sonar API | Sonar API case study (Lee Enterprises) |
··········
How to use Perplexity effectively is about structuring the query, the sources, and the verification step.
Perplexity works best when you treat it as a research operator, meaning you provide scope, constraints, and what “good evidence” looks like for the task.
Deep Research is the right choice when you want a longer brief that tries to assemble and reconcile sources rather than give a single paragraph.
Model Council is the right choice when you care about confidence and want to see where multiple strong models disagree before you publish or decide.
Finance workflows should be treated as auditable by default, meaning you privilege outputs that include direct links to filings and clearly separated supporting references.
In enterprise settings, you should assume governance is part of the workflow, because Perplexity’s own enterprise positioning and feature shipping focuses on organizational use, not only individual convenience.
A publishable workflow is a two-pass workflow.
The first pass is research and synthesis, which is where Perplexity can compress time.
The second pass is verification and contradiction hunting, which is where Model Council is most valuable, because it makes missing details and conflicting interpretations visible.
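The two-pass idea can be pinned down as control flow: the research and verification steps are swappable functions (for example, a Deep Research query for pass 1 and a Model Council-style cross-check for pass 2), and unverified claims gate publication. The function below is a hypothetical sketch of that structure, not a Perplexity API:

```python
def two_pass_brief(question, research, verify, key_claims) -> dict:
    """Pass 1 builds the brief; pass 2 stress-tests its key claims.

    `research`, `verify`, and `key_claims` are caller-supplied
    functions (e.g. a Deep Research query, a cross-check routine, and
    a claim extractor). This sketch fixes only the control flow; it
    does not model any specific Perplexity feature's interface.
    """
    brief = research(question)                          # pass 1: synthesis
    checks = {c: verify(c) for c in key_claims(brief)}  # pass 2: verification
    flagged = [claim for claim, ok in checks.items() if not ok]
    return {"brief": brief, "flagged_claims": flagged}
```

Separating the two passes this way is what reduces error: the synthesis step is optimized for coverage and speed, while the verification step is optimized for catching the specific claims that would be costly to publish wrong.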
........
Practical “two-pass” workflows that match Perplexity’s March 2026 feature set
| Output you want | Pass 1 (build the brief) | Pass 2 (stress-test it) | Why this reduces error |
| --- | --- | --- | --- |
| A report-style research memo | Deep Research | Model Council on the key claims | It separates synthesis from verification |
| A finance explainer | Perplexity Finance with auditable links | Model Council for interpretation and edge cases | It combines traceability with cross-checking |
| A competitor landscape snapshot | Standard Perplexity answer flow with iterative follow-ups | Model Council for “what is missing” | It catches omission risk early |
| A scalable content pipeline | Sonar API for source-backed drafting | Human editorial verification and policy checks | It preserves speed while keeping accountability |
··········
Pricing and business positioning are now part of the product story, because they influence what gets built.
Perplexity’s shift away from ads is being reported as a direct attempt to protect trust, which implies that Perplexity expects users to pay for perceived integrity and usefulness.
Reporting also emphasizes that Perplexity is leaning into subscriptions and enterprise sales, which is consistent with the enterprise feature shipping seen in the changelog.
The Financial Times reports subscription tiers ranging from $20 to $200 per month, which is a material detail for how Perplexity segments consumer versus professional users.
If your article is aimed at practical readers, the key point is that Perplexity is signaling a willingness to charge more for workflows like deep research that have higher compute costs and higher professional value.
On the enterprise side, Perplexity’s public customer stories and emphasis on internal + external data align with the business model described in reporting about enterprise research reports.
........
What Perplexity’s March 2026 business posture implies for users
| Question a reader has | Evidence-backed answer | Why it matters |
| --- | --- | --- |
| Is Perplexity trying to monetize with ads? | Reporting says Perplexity is moving away from ads due to trust concerns | It affects perceived neutrality of answers |
| Is Perplexity trying to monetize with subscriptions? | Reporting and pricing ranges indicate subscription-driven monetization | It shapes which features become premium |
| Is Perplexity building for professionals? | Coverage emphasizes professional verticals and enterprise focus | It explains investments in verification and auditable outputs |
··········
Adoption signals in 2026 show Perplexity is a “second tool” for many, but a “primary tool” for specific teams.
Ramp’s dataset suggests Perplexity is present in a meaningful minority of organizations that pay for generative AI vendors, which is a credible indicator of business adoption beyond hype.
Perplexity’s own customer stories show that where Perplexity becomes primary, it tends to be in teams that live and die by research quality, such as legal and market intelligence functions.
This matches the strategic framing in recent reporting that Perplexity is prioritizing being best for paying users rather than largest for free users.
If you need a publishable way to summarize “who it is for,” the safest approach is to describe Perplexity as strongest when the workflow demands cited synthesis and fast verification, which is exactly what its feature set is evolving toward.
........
Adoption snapshots you can cite without overreaching
| Adoption lens | What it shows | What it does not prove | Source |
| --- | --- | --- | --- |
| Enterprise customer stories | Real deployments and named organizations | Full market share or full customer list | Perplexity Enterprise customers pages |
| Business adoption dataset | Penetration among Ramp-tracked orgs that use genAI vendors | Consumer usage or total MAU | Ramp vendor page for Perplexity |
| Consumer scale reporting | Third-party estimates and strategic framing | Audited user counts | WIRED reporting |
··········
DATA STUDIOS

