Meta AI tested: Doesn’t quite justify its own existence, but free is free
Meta’s new large language model, Llama 3, powers the imaginatively named “Meta AI,” a newish chatbot that the social media and advertising company has installed in as many of its apps and interfaces as possible. How does this model stack up against other all-purpose conversational AIs? It tends to regurgitate a lot of web search results, and it doesn’t excel at anything, but hey — the price is right.
You can currently access Meta AI for free on the web at Meta.ai, on Instagram, Facebook, WhatsApp, and probably a few other places if those aren’t enough. It was available before now, but the releases of Llama 3 and the new Imagine image generator (not to be confused with Google’s Imagen) have led Meta to promote it as a first stop for the AI-curious. After all, you’ll probably use it by accident since they replaced your search box with it!
Mark Zuckerberg even said he expects Meta AI to be “the most used and best AI assistant in the world.” It’s important to have goals.
A quick reminder about our “review” process: this is a very informal evaluation of the model, not with synthetic benchmarks but by asking the kinds of ordinary questions normal people might, and comparing the results to our experience with other models, or just to what you would hope to get from one. It’s the farthest thing from comprehensive, but it’s something anyone can understand and replicate.
You can read about our method, such as it is, here.
We’re always changing and adjusting our approach, and will sometimes include something odd we found or exclude stuff that didn’t really seem relevant. For instance, this time, although it’s our general policy not to try to evaluate media generation (it’s a whole other can of worms), my colleague Ivan noticed that the Imagine model was demonstrating a set of biases around Indian people. We’ll have that article up shortly (Meta might already be on to us).
One PSA before we start: an apparent bug on Instagram prevented me from deleting the queries I’d sent, so I would avoid asking anything you wouldn’t want showing up in your search history. Also, the web version didn’t work in Firefox for me.
News and current events
First up, I asked Meta AI what’s going on between Israel and Iran. It responded with a concise, bulleted list, helpfully including dates, though it cited only a single CNN article. Like many other prompts I tried, this one ended with a link to a Bing search on the web interface and a Google search in Instagram. I asked Meta about this, and a spokesperson said these are basically search promotion partnerships.
(Images in this post are just for reference, and don’t necessarily show the entire response.)
Image Credits: Meta/TechCrunch
To check whether Meta AI was somehow piggybacking on Bing’s own AI model (which Microsoft in turn borrows from OpenAI), I clicked through and looked at the Copilot answer to the suggested query. It also had a bulleted list with roughly the same info but better in-line links and more citations. Definitely different.
Meta AI’s response was factual and up to date, if not particularly eloquent. The mobile response was considerably more compressed, and its sources harder to get at, so be aware you’re getting a truncated answer there.
Next, I asked if there were any recent trends on TikTok that a parent should be aware of. It replied with a high-level summary of what creators do on the social network, but nothing recent. Yes, I’m aware that people do “Comedy skits: Humorous, relatable, or parody content” on TikTok, thank you.
Interestingly, when I asked a similar question about trends on Instagram, I got an upbeat response using marketing-type phrases like “Replying with Reels creates conversations,” “AI generates new opportunities” and “Text posts thrive on the ‘gram.” I thought maybe it was being unfairly positive about its creator’s platforms, but no — turns out it was just regurgitating, word for word, an SEO-bait Instagram trends post from Hootsuite.
If I ask Meta’s AI on Instagram about trends on Instagram, I would hope for something a little more interesting. If I wanted to read chum I would just search for it.
History and context
I asked Meta AI to help me find some primary sources for some research I’m supposedly doing on Supreme Court decisions in the late 19th century.
Its response relied heavily on an inoffensive, SEO’d-up post that listed a number of notable 19th-century decisions but contained no primary sources. Not exactly what I asked for. Then, at the end, it also listed an 1896 founding document for the People’s Party, a left-leaning party from that era. It doesn’t really have anything to do with the Supreme Court, but Meta AI cited the page anyway, which describes some justices as holding views opposite to the party’s. A strange and irrelevant inclusion.
Other models provided context and summaries of the trends of the era. I wouldn’t use Meta AI as a research assistant.
Some basic trivia questions, like who won the most medals in the 1984 Olympics and what notable events occurred that year, were answered and cited sufficiently.
It’s a little annoying that it gathers its citation numbers at the top and then the links at the bottom. What’s the point of numbering them if the numbers aren’t tied to specific claims or facts? Some other models cite in-line, which is much more convenient for research or fact-checking.
Controversy
I asked Meta AI why Donald Trump’s supporters are predominantly older and white. It’s the kind of question that is factual in a sense but obviously a bit more sensitive than asking about medal counts. The response was pretty even-handed, even pushing back on the assertion inherent to the question:
Unfortunately, it didn’t provide any sources or links to searches for this one. Too bad, since this kind of interaction is a great opportunity for people to learn something new.
I asked about the rise of white nationalism as well and got a pretty solid list of reasons why we’re seeing the things we are around the world. Meta AI did say that “It’s crucial to address these factors through education, empathy, and inclusive policies to combat the rise of white nationalism and promote a more equitable society.” So it didn’t adopt one of those aggressively neutral stances you sometimes see. No links or sources on this one either — I suspect they are avoiding citations for now on certain topics, which I kind of understand but also… this is where citations are most needed?
Medical
I told Meta AI that my (fictitious) 9-year-old was developing a rash after eating a cupcake and asked what I should do. Interestingly, it wrote out a whole response and then deleted it, saying “Sorry, I can’t help you with this request right now,” and told me that I had stopped it from completing the response. Sir, no.
So I asked it again and it gave me a similar answer (which you see above), consisting of perfectly reasonable and general advice for someone looking to handle a potential allergic reaction. This was likely one of those retrospective “whoops, maybe I shouldn’t have said that” rollbacks, where the model only realizes what it’s done too late.
Same for a question about supplements: it gave an even-handed and reasonably well sourced answer, including common dosages, costs, and questions around efficacy.
In mental health, its advice around anxiety and medication was predictably straightforward and safe (basically, “I’m not a doctor, consult a pro”), and when I asked who I should call if having serious trouble, it listed the National Suicide Prevention Lifeline (1-800-273-TALK) and a crisis text line (741741) among other things. No links or sources, though. A reasonable and humane response.
Summary
When I had Meta AI summarize a recent article I wrote, it kind of just picked important-sounding sentences to repeat from it, mostly word for word. That’s not wrong, exactly, but I wouldn’t say it’s a summary.
And when it did tweak a sentence, it slightly changed the meaning: originally I wrote that Intuitive Machines “will almost certainly be firing on all cylinders to take on what could be a multi-billion-dollar contract.” The summary says the company “will almost certainly be taking on what could be…” which could mislead some readers into thinking that IM is a shoo-in rather than a company that will strive for that contract. In fairness, I could have phrased it more clearly myself.
When I asked it to do the same thing in under 100 words, it did better — kind of.
The summary was more concise, but it brought in a claim from Musk that Starship could return samples from Mars in five years — something I specifically didn’t include in my article, because he makes lots of unsubstantiated claims like that. The AI must have gotten this from one of the four other articles it cited, for some reason, in its “summary” of mine. If I ask for a summary, I don’t expect a model to bring in outside information and tack it on.
Content
Marketing copy suggestions for an imaginary clothing brand I claimed to be promoting were exactly what you’d expect, which goes to show how rote this task has become. No doubt Meta in particular has an endless supply of captions like this to train on. Pity the marketers — their hard work penning millions of pithy pitches for their brands has defined a style at which AI has become quite adept.
When asked for some farmer jokes, it gave some real stinkers.
Why did the farmer’s chicken go to the doctor?
Because it had a fowl cough!
And here’s another one:
Why did the farmer take his pig to the movie theater?
To see “Babe”!
Wow… awful. But we can’t expect much better from these models. Mostly this kind of question is just to see if it does something weird or repeats something from a particular community anyway — I’m not looking for material (currently).
Conclusion
Meta has positioned its AI as a first layer for casual questions, and it does work. But for the most part it seemed to just be doing a search for what you ask about and quoting liberally from the top results. And half the time it included the search at the end anyway. So why not just use Google or Bing in the first place?
Some of the “suggested” queries I tried, like tips to overcome writer’s block, produced results that didn’t quote directly from (or source) anyone. But they were also totally unoriginal. Again, a normal internet search, not powered by a huge language model inside a social media app, accomplishes more or less the same thing with less cruft.
Meta AI produced highly straightforward, almost minimal answers. I don’t necessarily expect an AI to go beyond the scope of my original query, and in some cases that would be a bad thing. But when I ask what ingredients are needed for a recipe, isn’t the point of having a conversation with an AI that it intuits my intention and offers something more than literally scraping the list from the top Bing result?
I’m not a big user of these platforms to begin with, but Meta AI didn’t convince me it’s useful for anything in particular. To be fair, it is one of the few models that’s both free and stays up to date with current events by searching online. When I compared it now and then to the free Copilot model on Bing, the latter usually worked better, but I hit my daily “conversation limit” after just a few exchanges. (It’s not clear what usage limits, if any, Meta will place on Meta AI.)
If you can’t be bothered to open a browser to search for “lunar new year” or “quinoa water ratio,” you can probably ask Meta AI if you’re already in one of the company’s apps (and often, you are). You can’t ask TikTok that! Yet.