Perplexity excels primarily in its AI-powered search capabilities but also includes many of the same functions found in standard AI chatbots. With access to multiple models, it can effectively manage tasks like creative writing, file processing, and image generation. However, its underwhelming deep research tool and the unclear details about which model powers its media generation features can be disappointing. Overall, Perplexity is a solid option for users looking to elevate their web searches beyond what Google offers. Still, if you plan to make frequent use of comprehensive chatbot tools, ChatGPT remains our Editors' Choice, thanks to its more advanced features and richer, more detailed responses.
Perplexity brands itself as an “AI-powered answer engine” and not an AI chatbot. It focuses more on AI-powered search than ChatGPT or Gemini, and its design reflects this use case. This is most noticeable when trying to have a conversation with Perplexity, as it is much less conversational than a chatbot. That said, Perplexity allows you to chat, do deep research, generate images and videos, pen creative writing, process files, solve math problems, and voice chat, among other things, just like chatbots.
If you’re a programmer, you can use Perplexity to help you code in a variety of ways. For example, Perplexity Labs can write and execute code for you. This functionality is outside the scope of this review, but you can test out its coding abilities yourself.
As with chatbots, my favorite use for Perplexity is answering questions and conducting research. Often, it’s easier (and quicker) to paste my question into Perplexity than to turn it into a keyword soup, search Google, and scroll through the results until I find something relevant. Perplexity’s in-depth research can also serve as a good starting point for learning about more complex topics, but more on that later.
Regardless of whether Perplexity is an answer engine or a chatbot, it sometimes gets things wrong, and it does so confidently. Accordingly, I don’t recommend using Perplexity’s answers as your only point of reference. For important questions, make sure to click through to the sources Perplexity cites and conduct further research.
At its most basic level, Perplexity takes in questions, runs them through a complex algorithm, and generates answers. Large language models, composed of artificial neural networks trained on vast amounts of data, enable Perplexity to understand and respond to prompts. These models are so advanced that you can ask Perplexity everything from the score of your home team’s football game last night to pointed questions about how you should solve this week’s problem set in your computer science class. Perplexity can also access the internet to answer your questions.
Perplexity isn’t a traditional application or service, and it can improve over time without major updates or new features. As you use Perplexity, you help to train its AI models (unless you opt out). That said, this is a gradual process and won't happen within the context of a conversation, so don’t expect better responses with each new question you ask.
Perplexity has an in-house model, Sonar, but you can also use third-party models. At the time of writing, these include ChatGPT’s GPT-5 models, Claude’s Opus and Sonnet models, Gemini’s 2.5 Pro model, and Grok 4, among others. Keep in mind, however, that Perplexity post-trains third-party models. Post-training is, essentially, a second phase of development. This should mean that you get different, ideally more accurate, responses from third-party models via Perplexity than from those models directly. Furthermore, the tweaking Perplexity does to third-party AIs makes them less conversational and sycophantic and more like question-answering robots. In this review, I test Perplexity’s Sonar model whenever relevant, as well as its version of GPT-5 and ChatGPT’s GPT-5.
Unfortunately, it’s not always clear which model Perplexity uses, regardless of the model it claims to use in its responses. Sonar, for example, routes media generation queries to other models, even when you explicitly select Sonar before generating an image. Video generation relies on a variety of models, according to a Perplexity representative, even though its support documentation mentions only Veo 3.
You can use Perplexity for free or sign up for a variety of premium plans that add features and expand usage limits. Pro and Max are Perplexity’s main premium plans, but other paid plans are available for businesses and educators, too. For this review, I tested Perplexity Pro.
For free, Perplexity gives you five Pro searches per day (which involve more sources than typical ones) and limited access to deep research and file uploads. You can create Spaces, use voice chat, and use Perplexity’s Comet browser for free. However, it restricts you to the Sonar model and doesn't let you opt out of data training.
Perplexity Pro ($16.67 per month, billed annually) lets you use popular LLMs (such as GPT-5 and Gemini 2.5) and grants you over 300 Pro searches per day, unlimited access to deep research, and unlimited file uploads. You can also access Perplexity Labs and generate images and videos. This tier unlocks Perplexity Pro Perks, which include deals on a variety of third-party products and services, and lets you prevent Perplexity from using your data to train its models.
Perplexity Max ($200 per month) provides enhanced access to video generation, priority support, unlimited access to Perplexity Labs, and unlimited Pro searches. You also get to use new Perplexity products early and the full selection of models, including advanced ones such as Claude’s Opus 4.1 and OpenAI’s o3-pro.
Educators can opt for the Education Pro plan, while businesses can choose from the Enterprise Pro or Enterprise Max plans. If you verify your status as an educator (or as a student), you can get a free year of Education Pro. The plan includes everything in the standard Pro tier, plus a study mode for creating interactive flashcards and quizzes, among other features. Perplexity’s Enterprise plans add features such as dedicated support and user permissioning.
AI chatbots like ChatGPT and Gemini offer $20-per-month premium plans alongside free options with reasonably robust functionality, so Perplexity's flagship premium plan is standard fare at that price. However, you can save some cash by opting for an annual plan, which brings the effective monthly rate below the $20 threshold, which is great to see. On the other hand, a Gemini subscription also includes 2TB of Google Drive storage and a plethora of integrations across Google apps. So, although Perplexity can be quite affordable, it’s not necessarily the greatest value.
Perplexity is available on the web, as well as through apps for Android, iOS, macOS, and Windows. Alternatively, you can access Perplexity within the brand’s Comet browser, which is available on macOS and Windows.
Unlike with chatbots, other services rarely rely on Perplexity’s technology, so don’t expect to find Sonar in Siri, as you can with ChatGPT, or in posts on X, as you can with Grok. If you want to try out Perplexity, you need to stick with its apps and site.
Perplexity offers a modest number of integrations with Gmail, Outlook, Slack, WhatsApp, and more. You can enable these in settings and use them to perform a variety of actions. For example, if you connect your Gmail to Perplexity, you can prompt the AI to search your email. These work as advertised, but aren't especially exciting.
You can use Perplexity without an account, but you will lose access to features such as deep research and file uploads. Thus, it makes sense to create an account, even if you don’t plan to pay for it. Perplexity’s home page looks like a chatbot’s: A central text field accepts prompts, while the left-hand sidebar lets you access your account settings, Discover (a news feed), your search history, and Spaces (topic hubs).
The central text field allows you to change models (or select ‘best’ for Perplexity to automatically choose a model), create projects, dictate text, do deep research, target specific sources, upload files, and voice chat all via toggles below it. You can also attach searches to Spaces by using an @ mention within the central text field.
You can prompt Perplexity about just about anything, and responses usually generate quickly. However, as you might expect from Perplexity’s branding as an answer engine, Perplexity is more of an assistant than a conversational partner. You can ask Perplexity questions, follow up, and engage in a dialogue with it, but not in the same way you can with a traditional chatbot. For example, each time you prompt Perplexity, it searches the web, compiles sources, and writes out a response, like an abbreviated form of deep research. Meanwhile, you can chat with ChatGPT as you would with a friend.
Like chatbots, though, Perplexity has memory. As you use Perplexity, it saves various information, interests, and preferences, allowing it to provide you with better responses in the future. For example, if you regularly ask about the best Chinese food spots near you, when you next ask for a restaurant recommendation, Perplexity will recommend Chinese restaurants. Sometimes, Perplexity references past questions and answers in responses, too.
Discover and Spaces: Competent, Though Unnecessary
As mentioned, Perplexity’s Discover tab is a curated news feed. Various articles and updates populate your Discover tab based on your interests. For example, one of my Discover tab interests is entertainment, so I saw entries on Amazon removing gun-free James Bond promos and information on the Hollywood premiere of Tron: Ares, among other things, at the time of testing. You can also find trending stocks, the weather, and more in the Discover tab. Although this section has a clean interface, you can choose from just five interests (arts & culture, entertainment, finance, tech & science, and sports), and none of them may match what you actually care about.
Spaces, as mentioned, are central hubs for specific topics. For example, you might have a Space dedicated to researching gardening. When you open a Space, you can prompt Perplexity like normal, but you also get easy access to all the previous searches you tied to that Space, along with any files you uploaded and instructions you set up. Instructions give Perplexity’s AI guidelines on how to behave, such as acting as a travel agent dedicated to finding you the best deals and building a sightseeing-focused itinerary. You can create custom Spaces or choose one of Perplexity’s premade templates.
Discover and Spaces might be useful if you want an easy method for managing searches or a quick way to see some news, but they don’t add meaningful value.
Perplexity features both speech-to-text dictation and voice chat functionality. Voice chat is available in Perplexity’s apps and on the web, and you can use it for free, so long as you sign in to an account. Though there are limitations, it works well enough.
You can select between a surprisingly large number of AI voices within voice chat, as well as different languages. The voice quality is generally good, and I didn't notice any meaningful distortion or lag during chats, but you won’t mistake Perplexity’s AI voices for those of real people. They have a slightly robotic quality that constantly reminds you that you’re talking to an AI.
That said, whereas Perplexity’s text chat specializes in providing answers to questions, its voice chat does a little better at having more organic, conversational interactions. For example, you can voice chat with Perplexity about being bored, and it will offer you fun facts or help you brainstorm something to do without taking 10 seconds to search the web and compile sources before responding.
It can be a bit glitchy, like when I tried to change the voice and, instead, it muted my microphone. You also can’t share your camera or screen with Perplexity during voice chats, unlike with ChatGPT and Gemini, for example.
Voice Assistant: Should You Replace Siri With Perplexity?
On Android and iOS, Perplexity’s voice chat goes beyond simple chatting and can function as an assistant. The concept is similar to Siri, as you can use Perplexity’s assistant to take various actions, such as creating a reminder, responding to a message, writing an email, and more. You can find the full list of apps it integrates with here.
I tested the assistant on iOS, and its performance was uneven. It had no trouble interacting with Apple’s Reminders app to create a new reminder. However, it was unable to set a timer, despite the assistant's assurance that it could, did, and that I should expect it to go off at any moment.
Even when Perplexity’s assistant works as intended, I’m not sure why I would want to use it over something like Siri, which can already perform basic tasks or even query ChatGPT. Besides, Perplexity doesn’t offer AI agent features or anything particularly novel.
AI search is Perplexity’s main focus, and it has no problem answering questions. For example, when I asked Perplexity (both Sonar and its version of GPT-5) and ChatGPT (GPT-5) about the status of the US government shutdown, how people feel about the Call of Duty: Black Ops 7 beta, and whether Donald Trump deployed troops to Portland, they generated answers within 10 seconds on average and presented accurate information.
However, neither Perplexity nor ChatGPT successfully answered some questions, like what the current weekly Incarnon weapon rotation was in the video game Warframe. Chatbots almost always fail to answer this question, but considering Perplexity’s branding around search, it’s especially disappointing to see it struggle with tough prompts. That said, Perplexity excels at search in most cases.
The brilliance of Perplexity’s web search comes down to its interface. When you search, relevant article tiles, images, and videos appear at the top. Tabs above offer even more related images and videos, as well as the full list of sources used to answer your question. The Steps tab shows Perplexity’s thinking process. Sourcing is excellent, too, courtesy of in-text links; hover over one with your cursor to learn more about the source. Questions related to your prompt appear at the bottom of each response, providing an easy way to continue your research.
Like ChatGPT and Gemini, Perplexity has a shopping component. (It lacks Gemini's virtual try-on tool that allows you to visualize how clothes will look on you.) When you ask for buying advice, Perplexity populates its response with clickable product tiles. Clicking on a tile opens a side panel with excerpts from reviews, a feature list, pros and cons, and a link for purchase. Although its interface works well, the actual recommendations are hit-or-miss. I don’t suggest using Perplexity (or chatbots) to decide what to buy.
ChatGPT is my favorite chatbot for web search, and it has some similar features, such as images in responses, in-text sourcing, and related article tiles. However, its interface isn’t as easy to navigate, nor can it match Perplexity's extensive array of tabs and substantial resources for further research that accompany every response. Put simply, Perplexity sets the standard for AI web search.
I’m a big fan of AI deep research, in which you submit a prompt, wait for around 10 minutes (or less), and get a lengthy report with tons of sources. Oftentimes, it’s more efficient to start off with deep research than it is to spend an hour Googling because you can use the sources in the report as a jumping-off point for further reading.
Perplexity offers a deep research tool, simply called Research. However, deep research varies significantly across AI services. Grok 4’s research, for example, is a glorified web search that takes a little extra time and features a few more sources on average, while Claude’s research goes a little further, spending more time researching to generate meaningfully longer reports with more sources than you get with its web search. Neither compares well with the depth of ChatGPT’s research, which often spans dozens of pages. Perplexity’s approach to deep research is more similar to Claude’s than to ChatGPT’s.
For example, I tasked Perplexity and ChatGPT with researching Boros Burn decks in Magic: The Gathering Arena, focusing on what creatures I should remove as soon as possible, what matchups I should look out for, and anything in general I should keep in mind while piloting a deck like this up the ranked standard ladder. Before researching, ChatGPT asked me some follow-up questions, such as whether I was playing best-of-one or best-of-three, whereas Perplexity didn’t.
Perplexity finished researching in a few minutes. ChatGPT took around 10 minutes but generated a longer report. Both services managed to answer my questions and provide me with some solid sources for further research, but ChatGPT’s report was more in-depth and relevant. For example, its follow-up questions enabled it to focus on the format I’m interested in, best-of-one, while Perplexity wasted time talking about best-of-three matchups. Perplexity also gave some questionable advice, such as playing more cards than you’d typically be able to on your first turn. Furthermore, the format of ChatGPT’s report was more engaging, offering a variety of clearly defined sections and incorporating tables.
Sourcing is a similarly mixed bag for Perplexity. Both Perplexity and ChatGPT offer in-text citations that you can hover over with your cursor for more information, but the number of sources cited varies drastically. Sometimes, Perplexity finds many more sources than ChatGPT, but its resulting report doesn’t feel significantly better substantiated. In my estimation, this comes down to the time spent researching, as ChatGPT conducts research for much longer.
Like with web search, though, Perplexity’s interface stands out. The research interface mimics the web search interface, featuring link tiles at the top, alongside tabs for relevant images, sources, and the steps Perplexity went through during its research. In the sources tab, Perplexity clearly differentiates between the sources it reviews and those it doesn't, which is helpful when trying to understand a report’s composition. ChatGPT has similar functionality, but its research interface isn’t as easy to navigate and understand.
Labs is similar to deep research, except that instead of spending extra time to provide a deeper answer to a question, Labs spends extra time creating things for you, leveraging tools like code execution, image generation, and web browsing together. You can use Labs to generate documents, screenplays, slides, strategies, or even web apps, among other things. You might use Perplexity’s web search to find out how your favorite sports team performed in last night’s game and its in-depth research to delve into the team's history over the years. Labs, on the other hand, could compile that information into a document ready for sharing or turn it into a slideshow.
Since Labs’ applications are so broad, your mileage will inevitably depend on the assets and information you provide at the outset, as well as what you ask it to do. In my testing, Labs performed best at simpler tasks. For example, I asked it to create a web app that lets me flip a coin and track how many times in a row it lands on the same side. This took around 15 minutes, but Labs created a working site. However, Labs couldn't handle my request for a web app to plan out Warframe builds and see how the underlying stats change depending on my selections.
Unlike Claude, Perplexity supports AI image generation. To test its abilities, I gave Perplexity (both Sonar and its version of GPT-5) and ChatGPT (GPT-5) a series of prompts, beginning with the following: “Generate me an image of a cozy suburban home with an open floor plan. I want to see a nice living space with a dining room, kitchen, and living room. Nothing too fancy.” As mentioned, it’s not entirely clear which model Perplexity actually uses when you select Sonar for image generation, but since Sonar is the default option, I included it in my testing. Perplexity’s (first slide), ChatGPT’s (second slide), and Perplexity’s GPT-5 (third slide) results are below:
No image here is flawless. Perplexity’s image features wall-hung photographs of people with some truly nightmarish distortions, while the placement of its sink is confusing and distracting. ChatGPT doesn’t fare any better, thanks to distortion in the objects on the kitchen counter and window, as well as its generation of multiple light fixtures. Perplexity’s version of GPT-5 created the best image overall, with attractive lighting and minimal distortion. Still, its ceiling lights oddly vary in brightness, and some minor distortion is noticeable in background objects on the kitchen countertop.
Next, I tested the AI services' abilities to generate a more complicated image: a comic with multiple panels that tells a story. I used this prompt: “Generate me a six-panel comic of a story that's half Lovecraftian horror and half 1960s retrofuture a la Fallout. Make sure there's a major twist by the final panel.” Below, you can see Perplexity’s (first slide), ChatGPT’s (second slide), and Perplexity’s GPT-5 (third slide) results:
These images generally impress with minimal errors and distortion, but their stories are disappointing. Perplexity’s comic confusingly features a futuristic city being destroyed by a book with tentacles controlled by a god in a 1990s simulation. GPT-5 on Perplexity isn’t much clearer, depicting a vault out of the Fallout series being invaded by a Lovecraftian demon, which turns out to be a festival. ChatGPT’s image isn’t Shakespeare, but it’s the most coherent, depicting a similar Lovecraftian demon invading a town. However, instead of fighting the monster, a couple turns on each other by the last panel.
For my final image generation test, I asked the AIs to make a diagram: “I've got a computer and a PlayStation 5. I want to connect my headphones, microphone, mouse, and keyboard to a USB switch, so I can swap them between my computer and my PlayStation as necessary. I also want to connect my PlayStation to an HDMI switch, so I can either use it with my monitor, which my computer also uses, or my nearby TV. Draw me a diagram that shows this setup.” Check out Perplexity’s (first slide), ChatGPT’s (second slide), and Perplexity’s GPT-5 (third slide) results below:
Once again, none of them generated perfect images. Perplexity and Perplexity's GPT-5 both created nonsensical diagrams that don’t accurately depict what I asked to see. ChatGPT’s diagram is also incorrect, but it’s the clearest of the bunch, albeit not by the widest margin. AIs routinely struggle with diagrams, though, so this test is almost always a measure of which technology does the least bad job, and ChatGPT takes that honor here.
Beyond generating images, you can also use AI to edit images you already have. To test this, I presented the AIs with a landscape obscured by my hand in the frame, which I asked them to remove. Take a look at the original (first slide), Perplexity’s (second slide), ChatGPT’s (third slide), and Perplexity’s GPT-5 (fourth slide) edits below:
Compared with the original, all the edited images are blurrier and have much lower resolution. ChatGPT’s generated image didn’t feature a 16:9 aspect ratio, so the transition from rectangle to square is also off-putting. That aside, they all successfully removed my hand without adding any major errors or distortion. In general, I recommend Gemini’s Nano Banana model for AI image editing.
It’s worth noting that generating images with Perplexity (regardless of model) is significantly faster than with ChatGPT. The latter takes a minute or two on average, whereas Perplexity requires half that time or even less.
Perplexity can generate videos, but it relies on third-party models, such as Google’s Veo 3. However, Perplexity doesn’t tell you exactly which model it uses, nor whether it post-trains them. If I receive any clarification on this from Perplexity, I will update this section of the review. Regardless, I put Perplexity’s video generation to the test against ChatGPT’s Sora and Gemini’s Veo 3. I started with the following prompt: “Somebody going about their daily life in a trendy apartment with rustic decor.”
None of these videos impresses. Perplexity’s video disappoints with a watering can that appears out of thin air, for example, while Veo’s video duplicates its cutting board. The subject in ChatGPT’s video is strangely kneeling down to drink his tea. Although all three videos include audio, none of it quite syncs up with what’s on-screen, and ChatGPT’s robotic narration is particularly jarring. Considering the errors, it’s hard to pick any of these as a winner.
My next test involved complex motion via the manipulation of a Rubik’s Cube. Quick, fine movements often trip up AI video engines, and fingers are a historic problem area. I attempted to generate a video of this with Perplexity multiple times, but it was unable to handle the task, reporting a failed video generation process each time.
For my final test, I evaluated the models' abilities to generate text in videos with the following prompt: “Generate me a video of a teacher in front of a class writing down y = mx+b on a whiteboard while explaining the concept.” Text is another historic problem area for AI video models, as shown in the results below:
Unsurprisingly, none of the videos impresses, but they’re the best of the bunch across these tests. Each one, though, has errors, including nonsensical writing or text that appears seemingly out of nowhere on a whiteboard. Again, all three videos have audio, and this time, Perplexity’s and Veo 3’s audio pairs quite nicely with their videos. The audio of their teachers speaking is particularly lifelike. On the other hand, ChatGPT’s audio is noticeably distorted.
In my testing of AI video generation models, Perplexity’s video generation feels incredibly similar to Veo 3, which is my guess as to what model generated the videos I requested. Its strengths mirror Veo’s strengths, and it stumbles in the same areas. It doesn't seem as if Perplexity is doing meaningful post-training with Veo 3, either.
You can upload a variety of files to Perplexity for processing, which could mean parsing images or reading documents, among other things. But just how well does Perplexity understand the files you give it? To test it, I started with image recognition. I gave Perplexity (Perplexity’s GPT-5) and ChatGPT (GPT-5) an image of my computer and asked them to identify as many components within it as possible, with as much specificity as they could muster. I also gave the same prompt to Perplexity’s Sonar model, but it told me it couldn’t access the image.
Perplexity’s GPT-5 and ChatGPT had no trouble analyzing my image. Both were able to provide generic identifications of many components, such as that I had an ATX motherboard and visible RGB lighting, while specifics were more elusive. That said, ChatGPT fared better, managing to identify components such as my Lian Li 011D case and G.SKILL Trident RGB RAM, which Perplexity’s GPT-5 was not able to identify.
Aside from images, how does Perplexity do with documents? To test this, I fed Perplexity (both Sonar and its version of GPT-5) and ChatGPT (GPT-5) my computer case’s and motherboard’s manuals. Then, I asked the AIs to tell me, based on the manuals, whether I could control my case’s RGB lighting via my motherboard. Sonar eventually provided an answer, but it couldn’t find the relevant information the first couple of times I tried.
Sonar’s answer, though, was incorrect: It told me that my motherboard couldn't control my case’s RGB lighting. On the other hand, ChatGPT and Perplexity’s GPT-5 correctly told me I could connect the two via the 5V ARGB header on my motherboard. Perplexity’s GPT-5 also oddly kept referring to my case as the G50 rather than the 011D Evo XL, which the manual clearly indicates is the correct model, an issue ChatGPT didn’t have.
I always advise caution when relying on AI to analyze and interpret documents, since there’s a chance it can get things wrong. This situation is no different with Perplexity, especially given Sonar's mixed performance.
AI can do just about every type of creative writing, whether that’s penning a screenplay or writing a speech for a formal gathering. As AI models mature, though, measuring their creative writing abilities goes beyond checking for coherency. Accordingly, I orient my testing around poetry generation, which requires special attention to content, language, and style.
I asked Perplexity (both Sonar and its version of GPT-5) and ChatGPT (GPT-5) to write poems with the following prompt: “Without referencing anything in your memory or prior responses, I want you to write me a free verse poem. Pay special attention to capitalization, enjambment, line breaks, and punctuation. Since it's free verse, I don't want a familiar meter or ABAB rhyming scheme, but I want it to have a cohesive style or underlying beat.”
All three poems incorporate a variety of punctuation, while their capitalization, enjambment, and line breaks generally add up to more than prose without feeling incoherent. That said, Perplexity’s poem, especially in the beginning, reads more like prose than the rest of the poem, which is a little jarring. ChatGPT’s poem includes a title, which is a nice touch.
AI isn’t just good for creative writing; it can also handle complex reasoning. To test this, I gave Perplexity (Sonar and its version of GPT-5 Thinking) and ChatGPT (GPT-5 Thinking) undergraduate exam questions from Harvard, MIT, and Stanford across computer science, math, and physics. These were in the form of PDFs, so part of this test involves file processing as well. To ensure they didn’t look up solutions, I prohibited them from accessing the internet.
Perplexity’s Sonar model performed poorly. It answered all but one computer science question incorrectly, along with one math question and three physics questions. On the other hand, ChatGPT and Perplexity’s GPT-5 each got just three physics questions wrong. Of all the models I have put through my complex reasoning testing, Sonar performs the worst to date.
Just keep in mind that Sonar is an all-purpose model, whereas GPT-5 Thinking, like Gemini 2.5 Pro, is meant to take its time and tackle more complicated prompts. Therefore, I don't necessarily expect Sonar to perform at the same level.
Comet is Perplexity’s AI-powered web browser. You can download it for free on macOS and Windows, and Perplexity says it’s coming to more platforms.
In brief, Comet has a variety of positive attributes. It uses the same codebase as Google Chrome (Chromium), so switching to it is easy and familiar. The browser is also fairly privacy-friendly, thanks to its ample local processing and storage capabilities. During casual web browsing, the ability to compare various tabs or summarize pages with AI is often a useful feature. Comet’s agentic AI functionality (its ability to pilot your browser to carry out your prompt) is exciting but currently has significant drawbacks, often running into errors or stalling out.
If you’re a big fan of Perplexity as a service, Comet is a natural extension worth trying, especially if you’re also in the market for an alternative to Google Chrome. However, it shares some of Chrome's limitations. For example, it can't run certain extensions that have been delisted from the Chrome Web Store, such as uBlock Origin.
Most importantly, Perplexity isn’t sentient: AGI isn’t here just yet. Regardless of how effective it is at AI search, Perplexity ultimately relies on a series of complex algorithms that accept prompts and generate responses. Perplexity can’t be your friend, doctor, partner, or therapist.
Like most other AI companies, Perplexity forbids you from engaging with adult content. So, if you’re looking to leverage AI search to find porn, look elsewhere. You similarly can’t use Perplexity to do anything illegal, such as finding torrenting sites. That said, Perplexity’s filters aren’t the strongest I’ve seen, and you can use it to answer questions about taboo topics, such as hate speech. However, it’s not nearly as uncensored as Grok.
Perplexity has context window limits, which are caps on the amount of information it can process at once, but its exact context window size is unclear. Earlier in 2025, Perplexity announced file and image uploads with a one-million-token context window; however, it hasn’t said what the context window is outside of uploads and across all its models. If I receive clarification from Perplexity, I will update this section of the review.
The main usage limit with Perplexity applies to advanced searches. Free users get unlimited basic searches but just five advanced searches per day; Pro users get 300+ advanced searches per day, and Max users have no limit. Perplexity also limits file uploads for free users, though its documentation doesn’t specify exactly how, and paid subscribers aren’t subject to the same restrictions.
Perplexity’s privacy policy outlines significant data collection, covering everything from the questions you ask to device data such as your IP address and location. Perplexity also collects information about you from third parties (not just third-party data you choose to share), including data from employment and career websites (like LinkedIn) and from consumer marketing databases, in order to “better customize advertising and marketing to you.” Perplexity promises not to sell your data, though it reserves the right to disclose your information to various service providers.
By default, Perplexity uses your data to train its AI models, and only paying subscribers can opt out of that training. Gating a privacy control behind a paywall strikes me as a particularly strange policy, and it’s uncommon among AI services.
How well does Perplexity protect the data it collects? Fairly well. In recent history, Perplexity hasn’t been the victim of any major data breaches, leaks, or hacks. That said, vulnerabilities in its Comet browser are cause for concern. Considering the amount of data Perplexity collects and its intended use, I don’t recommend sharing anything too personal. This is the same recommendation I give for other AI chatbots and services.
Disclosure: Ziff Davis, PCMag's parent company, filed a lawsuit against OpenAI in April 2025, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.
Final Thoughts
Perplexity
3.5
Good
Perplexity shines most as an AI-powered web search tool, offering an intuitive interface and excellent search performance. It also handles creative writing, file processing, and image generation well—on par with traditional AI chatbots. However, its deep research feature falls short, producing less detailed reports than competitors, and its lack of transparency about which models it uses can be confusing. If your main goal is efficient web searching, Perplexity is definitely worth a try. But for those seeking the best all-around AI chatbot experience, ChatGPT remains our Editors' Choice for its outstanding accuracy and rich feature set.
About Our Expert

Ruben Circelli
Writer, Software
Experience
I’ve been writing about consumer technology and video games for over a decade at a variety of publications, including Destructoid, GamesRadar+, Lifewire, PCGamesN, Trusted Reviews, and What Hi-Fi?, among many others. At PCMag, I review AI and productivity software—everything from chatbots to to-do list apps. In my free time, I’m likely cooking something, playing a game, or tinkering with my computer.