
Meta AI: Latest features added to the platform and their impact on everyday use


The evolution of Meta AI over recent months has focused on expanding multimodal capabilities, integrating more tightly with Meta’s family of apps, and introducing governance features that give users greater control over memory, privacy, and enterprise adoption. The changes affect both free-tier users and those opting for the paid Meta AI+ subscription.



Image understanding now works seamlessly across messaging apps.

The introduction of Gemma Vision Core brings integrated image analysis to Messenger, WhatsApp, and Instagram. Users can send up to three images per query, each with a maximum size of 10 MB, and receive detailed captions along with bounding-box references for detected objects. This feature enables richer contextual conversation, whether identifying an item, explaining a chart, or breaking down a photo. The capability rolled out gradually, becoming widely available early in the year, and marks Meta AI’s largest leap in visual comprehension so far.

Specification | Detail
Platforms | Messenger, WhatsApp, Instagram
Max images per query | 3
Max file size per image | 10 MB
Output format | Captions + bounding-box annotations
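As a rough illustration of how these limits behave in practice, the sketch below checks a batch of attachments against the three-image and 10 MB caps before a query would be sent. The function and constant names are illustrative, not part of any Meta API.

```python
import os

# Illustrative limits taken from the feature description above.
MAX_IMAGES_PER_QUERY = 3
MAX_BYTES_PER_IMAGE = 10 * 1024 * 1024  # 10 MB

def validate_attachments(paths: list[str]) -> list[str]:
    """Return the attachable paths, raising if either cap is exceeded."""
    if len(paths) > MAX_IMAGES_PER_QUERY:
        raise ValueError(f"At most {MAX_IMAGES_PER_QUERY} images per query")
    for path in paths:
        size = os.path.getsize(path)
        if size > MAX_BYTES_PER_IMAGE:
            raise ValueError(f"{path} is {size} bytes, over the 10 MB cap")
    return paths
```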



Voice conversations are faster and more natural.

A complete overhaul of the real-time voice chat system has been deployed on the Meta AI mobile app, the dedicated “Voice” tab, and Ray-Ban smart glasses. The service now supports 48 languages with automatic detection and near-instant speech start, averaging around 800 milliseconds to deliver the first spoken response. Users can interrupt responses mid-sentence with the new “barge-in” function, improving the fluidity of conversational exchanges. This upgrade is especially important for live translation scenarios and hands-free use while multitasking.

Feature | Detail
Languages supported | 48 (auto-detected)
First-token delay | ≈ 800 ms
Interrupt function | Yes, “barge-in” mid-sentence
Platforms | Mobile app, Ray-Ban glasses
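The “barge-in” behaviour can be pictured as cancelling the assistant’s in-progress speech the moment new user audio arrives. The short asyncio sketch below illustrates that pattern in the abstract; it is not Meta’s implementation, and the simulated half-second interruption is purely for demonstration.

```python
import asyncio

async def speak(text: str) -> None:
    """Stand-in for streaming TTS playback, one word at a time."""
    for word in text.split():
        print(word, end=" ", flush=True)
        await asyncio.sleep(0.2)
    print()

async def conversation() -> None:
    # Start speaking the assistant's reply as a cancellable task.
    speaking = asyncio.create_task(
        speak("Here is a long answer that the user may interrupt at any point")
    )
    # Simulate the user starting to talk about 0.5 s in ("barge-in").
    await asyncio.sleep(0.5)
    speaking.cancel()                 # cut playback mid-sentence
    try:
        await speaking
    except asyncio.CancelledError:
        print("\n[barge-in] user interrupted; listening for the new query")

asyncio.run(conversation())
```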


Smart memory retains user context across threads.

Meta AI has introduced 30-day memory for Messenger and Instagram conversations, storing personal preferences such as favorite sports teams, ongoing project topics, and previously shared files. This stored information can be easily reset at any time using a Forget toggle in the settings menu, allowing users to maintain privacy control while benefiting from more contextually aware replies.

Memory Duration | Control Option
30 days | “Forget” toggle in settings



Creative tools now include a GIF and sticker generator.

A new custom GIF and sticker maker is available in Instagram Stories and WhatsApp, producing short loops of up to four seconds with a maximum export size of 3 MB. The feature leverages diffusion techniques to allow users to create eight-frame sequences that can be stylized before posting. This expands Meta AI’s creative toolkit beyond text and image generation into quick-share visual formats.

Feature | Detail
Platforms | Instagram Stories, WhatsApp
Max loop length | 4 seconds
Max export size | 3 MB
Frames supported | 8


Larger context and faster processing come with Llama 4 Turbo.

Meta AI has upgraded its backend to Llama 4 Turbo, expanding the context window to 64,000 tokens and increasing processing speed from 55 to 92 tokens per second. This change means that the assistant can handle more extensive documents, retain more conversation history, and deliver answers with greater speed, especially in long-form or data-heavy sessions.

Specification | Old | New
Context window | 32,000 tokens | 64,000 tokens
Processing speed | 55 tps | 92 tps
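To put the speed-up in perspective, a quick calculation with the quoted rates shows how long each backend needs to emit a given number of output tokens: 92 tps is roughly 1.67 times the old rate of 55 tps, so long generations finish in about 40% less time. The snippet below just restates that arithmetic.

```python
# Quoted generation rates for the old and new backends.
OLD_TPS, NEW_TPS = 55, 92

def seconds_to_emit(tokens: int, tps: float) -> float:
    """Time to generate the given number of output tokens at a fixed rate."""
    return tokens / tps

for tokens in (1_000, 8_000):
    old = seconds_to_emit(tokens, OLD_TPS)
    new = seconds_to_emit(tokens, NEW_TPS)
    print(f"{tokens:>6} tokens: {old:6.1f} s -> {new:6.1f} s "
          f"({old / new:.2f}x faster)")
```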



Wearable integration improves with live narration.

The Ray-Ban “Look & Ask” feature streams two frames per second from the glasses’ point of view for up to 20 minutes per session. This capability enables real-time narration of surroundings, identification of landmarks, and assistance in navigation. The feature launched with firmware version 4.2 and has since expanded availability to European users.

Specification | Detail
Frame rate | 2 fps
Session limit | 20 minutes
Firmware version | 4.2
Availability | Global + EU expansion
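A quick back-of-the-envelope figure, derived only from the quoted specifications: at 2 frames per second, a full 20-minute session streams 2 × 60 × 20 = 2,400 frames from the glasses to the assistant.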


Summarisation tools keep group communication efficient.

Meta AI now provides group chat summaries in Messenger, producing up to five digests per chat per day. These summaries cover activity from the past 24 hours, making it easier for users to catch up without scrolling through long message threads. Similarly, WhatsApp has gained a “catch-me-up” digest for unread messages, available on demand and processed with end-to-end encryption.

Feature | Platform | Limit
Group summaries | Messenger | 5/day
Unread digest | WhatsApp | 24-hour coverage
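One way to picture the per-chat cap is a simple daily counter that declines a sixth summary until the calendar day rolls over. The sketch below is illustrative only; it is not how Messenger enforces the limit.

```python
from collections import defaultdict
from datetime import date

MAX_DIGESTS_PER_DAY = 5          # Messenger's quoted per-chat limit

# (chat_id, day) -> number of digests already generated
_usage: dict[tuple[str, date], int] = defaultdict(int)

def request_digest(chat_id: str) -> bool:
    """Return True if another digest may be generated for this chat today."""
    key = (chat_id, date.today())
    if _usage[key] >= MAX_DIGESTS_PER_DAY:
        return False
    _usage[key] += 1
    return True
```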


Productivity extends to page management and scheduling.

Within the Facebook Pages app, Meta AI now offers a post-scheduler agent capable of drafting, refining, and automatically posting content. It requires a minimum follower base of 10,000 to activate. For content teams, this reduces the need for third-party scheduling tools and integrates directly into existing Meta publishing workflows.

Feature | Requirement
Post-scheduler | ≥ 10,000 followers



New translation overlays enhance live communication.

In WhatsApp video calls, Meta AI can now apply a multimodal translation overlay, displaying real-time subtitles for seven language pairs with an average delay of just one second. This adds an accessibility layer for international conversations and business meetings.

Feature | Detail
Supported pairs | 7
Avg delay | ≈ 1 s


Paid Meta AI+ unlocks priority and privacy benefits.

The Meta AI+ subscription, priced at $10 per month, removes sponsored links from responses, extends memory from 30 days to 90 days, and increases the daily voice exchange limit to 200 interactions. The service also gives priority processing to reduce wait times during peak demand.

Feature | Free | Meta AI+
Memory | 30 days | 90 days
Voice exchanges/day | Standard | 200
Sponsored links | Included | Removed



Governance and infrastructure strengthen reliability.

Meta has invested in H200-NVL GPUs, reducing median first-token latency by 18%, and introduced Private Compute Core for pseudonymised IDs and end-to-end encrypted processing. An enterprise admin console now allows organisation-wide feature toggles, per-model spend caps, and detailed audit logging of beta feature usage. For developers, the Meta Actions SDK provides JSON-based function calls with a limit of 100 calls per minute and a 24-hour schema cache, opening opportunities for integrating Meta AI into automated workflows.

Feature | Detail
GPU upgrade | H200-NVL
Latency improvement | 18%
Admin tools | Toggles, spend caps, audit logs
Actions SDK limit | 100 calls/min
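For a sense of what a JSON-based function call constrained to 100 calls per minute might look like on the calling side, the sketch below pairs a serialised action payload with a sliding-window limiter. Every name in it (the action, its fields, the helper functions) is hypothetical; the actual Meta Actions SDK interface may differ.

```python
import json
import time
from collections import deque

CALLS_PER_MINUTE = 100            # quoted SDK rate limit
_call_times: deque[float] = deque()

def within_rate_limit() -> bool:
    """Sliding one-minute window over the timestamps of recent calls."""
    now = time.monotonic()
    while _call_times and now - _call_times[0] > 60:
        _call_times.popleft()
    return len(_call_times) < CALLS_PER_MINUTE

def build_action_call(name: str, arguments: dict) -> str:
    """Serialise a hypothetical JSON function call; field names are illustrative."""
    if not within_rate_limit():
        raise RuntimeError("Local 100 calls/min budget exhausted; retry later")
    _call_times.append(time.monotonic())
    return json.dumps({"action": name, "arguments": arguments})

print(build_action_call("create_post", {"page_id": "12345", "text": "Hello"}))
```

A production client would also respect the 24-hour schema cache mentioned above rather than refetching action definitions on every call.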


