Meta is expanding safety features across its chat apps, letting Messenger users ask Meta AI to review potentially problematic messages and adding a new screen-sharing warning to WhatsApp.
The goal, the company says, is straightforward: interrupt social-engineering attacks before they separate people from their money, with an emphasis on the users who unwittingly attract scammers most often.
- How Meta AI Detects Suspicious Messages in Messenger
- What Changes for Privacy and Encryption in Messenger
- WhatsApp Adds Screen-Sharing Safeguard for Untrusted Contacts
- Why It Matters for Fraud Protection on Messaging Platforms
- Limitations, Real-World Use, and What Users Should Expect
- Safety Tips for Using Both New Meta Tools
How Meta AI Detects Suspicious Messages in Messenger
The new Messenger feature works a bit like an intrusion detection system: it uses on-device signals to spot messages that look like spam or fraud, such as when an account is sending messages at scale, and it can block suspicious accounts from messaging you until you unblock them (a blocked account can still read messages you’ve already sent, including elsewhere in your Facebook account, and download posted photos). If the system finds red flags, it prompts you to forward the thread to Meta AI for a risk assessment.
Meta AI then interprets the submitted messages and, if the patterns resemble known scams, explains why the request is risky and suggests next steps, such as blocking or reporting the sender. It’s advisory, not automatic: you opt to share snippets for analysis, and the assistant returns guidance for that specific conversation.
The tool was built to intercept scams such as marketplace overpayment schemes (“I’ll pay you extra, just refund me”), cryptocurrency or investment pitches promising implausibly huge returns, fake account-recovery requests, and impersonation of family members or colleagues urging an immediate payment. These are among the categories most commonly reported to law enforcement and consumer protection agencies.
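Meta hasn’t published how these on-device signals work, but the pattern-based red flags described above can be conveyed with a rough sketch. The keyword patterns, signal names, and threshold below are illustrative assumptions for demonstration only, not Meta’s implementation.

```python
import re

# Illustrative red-flag patterns for the scam categories described above.
# These keyword heuristics are assumptions for demonstration; Meta's actual
# on-device signals are not public.
SCAM_PATTERNS = {
    "overpayment_refund": re.compile(r"(pay (you )?extra|refund (me|the difference))", re.I),
    "investment_pitch": re.compile(r"(guaranteed (returns|profit)|double your (money|crypto))", re.I),
    "account_recovery": re.compile(r"(verification code|recover your account|confirm your identity)", re.I),
    "impersonation_urgency": re.compile(r"(it's (mom|dad|your boss)|urgent.*(wire|gift card|send money))", re.I),
}

def flag_message(text: str, sender_message_rate: float) -> list[str]:
    """Return red-flag labels for a single incoming message.

    sender_message_rate is a stand-in for a "sending at scale" signal,
    e.g. messages per minute from an account you've never chatted with.
    """
    flags = [name for name, pattern in SCAM_PATTERNS.items() if pattern.search(text)]
    if sender_message_rate > 20:  # arbitrary threshold, for illustration only
        flags.append("bulk_sender")
    return flags

# Example: an overpayment pitch from a high-volume sender gets flagged, which
# in Messenger's flow would trigger the prompt to review the thread with Meta AI.
print(flag_message("I'll pay you extra, just refund me the difference", sender_message_rate=35.0))
```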
What Changes for Privacy and Encryption in Messenger
The system works in two phases. The initial scan occurs on your device, and messages remain end-to-end encrypted. If you choose to submit particular messages to Meta AI for review, those selected messages are processed outside of end-to-end encryption so that the assistant can do its work. That trade-off, giving up a sliver of privacy on the messages you pick in exchange for an analysis of them, is explicit and user-initiated.
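The gating rule implied above is simple: nothing leaves the device unless you explicitly select it. The sketch below models that two-phase flow under stated assumptions; the class, the heuristics, and the submit_for_review() stub are hypothetical stand-ins, not Meta’s client code.

```python
from dataclasses import dataclass

@dataclass
class Message:
    msg_id: str
    text: str
    flagged: bool = False  # set by the local scan; this flag never leaves the device

def looks_suspicious(text: str) -> bool:
    # Stand-in for the on-device heuristics sketched earlier.
    return "refund me" in text.lower() or "gift card" in text.lower()

def on_device_scan(messages: list[Message]) -> list[Message]:
    """Phase 1: runs locally, so the conversation stays end-to-end encrypted."""
    for m in messages:
        m.flagged = looks_suspicious(m.text)
    return [m for m in messages if m.flagged]

def submit_for_review(messages: list[Message]) -> None:
    # Stand-in for the step that, by design, sends user-selected messages
    # outside end-to-end encryption so the assistant can analyze them.
    print(f"Submitting {len(messages)} user-selected message(s) for review")

def review_if_user_opts_in(messages: list[Message], user_selected_ids: set[str]) -> None:
    """Phase 2: only messages the user explicitly selects ever leave the device."""
    to_submit = [m for m in messages if m.msg_id in user_selected_ids]
    if to_submit:
        submit_for_review(to_submit)

# Example: a flagged message is only submitted because the user opted in.
chat = [Message("1", "Hey, can you buy a gift card and send me the code?")]
flagged = on_device_scan(chat)
review_if_user_opts_in(flagged, user_selected_ids={"1"})
```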
You can manage the feature in Messenger by going to Settings > Privacy & Safety Settings > Scam Detection. It’s designed so that you have to opt in, and you can turn the feature off entirely or choose not to submit any individual chat. The AI provides guidance for the content you decide to share, Meta says.
WhatsApp Adds Screen-Sharing Safeguard for Untrusted Contacts
A new warning now pops up in WhatsApp when you attempt to share your screen with someone who is not a trusted contact. It is a reminder that sensitive data, such as bank details, one-time codes, and personal documents, could be exposed. The change targets a thriving fraud pattern in which scammers posing as bank representatives or tech support pressure victims into screen sharing and then capture whatever they see.
Security researchers and consumer advocates have repeatedly warned about screen-sharing coercion as a route to account takeovers. A well-timed nudge before you share your phone’s screen can be all it takes to avoid handing over credentials that may be visible for mere seconds but remain useful to an attacker for months or more.
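WhatsApp hasn’t detailed the warning’s logic beyond the trusted-contact distinction, but the gate it implies is straightforward. The sketch below is an illustrative assumption, including the function names and the wording of the warning, not WhatsApp’s code.

```python
from typing import Callable

def should_warn_before_screen_share(peer_id: str, saved_contacts: set[str]) -> bool:
    """Warn when sharing a screen with someone who isn't a saved contact."""
    return peer_id not in saved_contacts

def start_screen_share(peer_id: str,
                       saved_contacts: set[str],
                       user_confirms: Callable[[str], bool]) -> bool:
    # Gate the screen share behind a confirmation prompt for untrusted contacts.
    if should_warn_before_screen_share(peer_id, saved_contacts):
        warning = ("You're about to share your screen with someone who isn't in "
                   "your contacts. Bank details, one-time codes, and personal "
                   "documents on screen could be exposed.")
        if not user_confirms(warning):
            return False  # user backed out; no screen share starts
    return True  # proceed with the call's screen share

# Example: an unknown "bank agent" triggers the warning and the user declines.
print(start_screen_share("unknown-caller-123",
                         saved_contacts={"alice", "bob"},
                         user_confirms=lambda msg: False))
```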
Why It Matters for Fraud Protection on Messaging Platforms
Digital fraud losses are staggering. According to the F.B.I.’s Internet Crime Complaint Center, Americans have reported over $16.6 billion in losses from online scams, with older adults more likely to be targeted. The Federal Trade Commission has said that scammers frequently make first contact with users through social media and messaging apps, especially in investment cons and online shopping fraud.
By baking risk signals directly into the messaging experience across its apps, Meta is aiming to shrink the window between a scammer’s initial pitch and a user’s gut check. The move also fits a broader industry trend toward real-time warnings, much as banking apps prompt before a wire transfer and email clients flag impersonation attempts, delivered without interrupting the flow of communication.
Limitations, Real-World Use, and What Users Should Expect
No AI model will catch every scam, and determined scammers constantly change their language to slip past filters. There is potential for false positives, and it’s unclear how willing users will be to hand over messages for analysis, even selectively. But a well-timed alert can neutralize high-pressure tactics, particularly when someone is being rushed to pay with gift cards, crypto, or a wire transfer.
The practical benefit is mainly about context. If Meta AI can explain that a “buyer” willing to overpay and asking for an off-platform refund matches a known refund scam, or that a “bank agent” pushing for screen sharing is a classic setup for an account takeover, users may be likelier to disengage before money changes hands.
Safety Tips for Using Both New Meta Tools
- Never share your screen or one-time codes with someone you haven’t met in real life.
- Confirm identity through a second communication channel you trust — ring a number that you know, not one sent via chat.
- For deals through marketplaces, keep payments within platforms that provide buyer and seller protections; avoid overpayments and off-platform refunds.
- Treat any crypto or investment pitch in your DMs as fraudulent until proven otherwise.
- Employ the block and report tools; aggregated reports help platforms and law enforcement track new scams.
Bottom line: the new Meta AI review in Messenger and the screen-sharing warning in WhatsApp will not eradicate scams, but they insert a welcome pause button right where it’s needed most. And all too often, that pause is the difference between a moment of caution and an action you deeply regret.