Mark Zuckerberg has for months publicly hinted that he is backing away from open-source AI models. Now, Meta's latest AI pivot is starting to come into focus. The company is reportedly working on a new model, known inside Meta as "Avocado," which could mark a major shift away from its previous open-source approach to AI development.
Both CNBC and Bloomberg have reported on Meta's plans surrounding "Avocado," with both outlets saying the model "could" be proprietary rather than open-source. Avocado, which is due out sometime in 2026, is being worked on inside "TBD," a smaller group within Meta's AI Superintelligence Labs that's headed up by Chief AI Officer Alexandr Wang, who apparently favors closed models.
It's not clear what Avocado could mean for Llama. Earlier this year, Zuckerberg said he expected Meta would "continue to be a leader" in open source but that it wouldn't "open source everything that we do." He's also cited safety concerns as they relate to superintelligence. As both CNBC and Bloomberg note, Meta's shift has also been driven by issues surrounding the release of Llama 4. The Llama 4 "Behemoth" model has been delayed for months; The New York Times reported earlier this year that Wang and other execs had "discussed abandoning" it altogether. And developers have reportedly been unimpressed with the Llama 4 models that are available.
There have been other shakeups within the ranks of Meta's AI groups as Zuckerberg has spent billions of dollars building a team dedicated to superintelligence. The company laid off several hundred workers from its Fundamental Artificial Intelligence Research (FAIR) unit. And Meta veteran and Chief AI Scientist Yann LeCun, who has been a proponent of open source and a skeptic of LLMs, recently announced he was leaving the company.
That Meta may now be pursuing a closed AI model is a significant shift for Zuckerberg, who just last year said "fuck that" about closed platforms and penned a lengthy memo titled "Open Source AI Is the Path Forward." But the notoriously competitive CEO is also apparently intensely worried about falling behind OpenAI, Google and other rivals. Meta has said it expects to spend $600 billion over the next few years to fund its AI ambitions.
Like it or not, the checkmark has become an almost universal symbol on most social platforms, even though its exact meaning can vary significantly between services. Now Reddit, which historically hasn't cared much about its users' real-world identities, is joining the club and starting to test verification for public figures on its platform.
The company is beginning "a limited alpha test" of the feature with a small "curated" group of accounts that includes journalists from major media outlets like NBC News and the Boston Globe. Businesses that are already using an "official" badge, which Reddit started testing in 2023, will also now have a grey "verified" checkmark instead of the "official" label.
Verification has long been a thorny issue for many platforms. For users, it's at times been a source of confusion, especially on sites where verified badges only require a paid subscription. Reddit's approach, at least for now, is closer to how Twitter handled verification prior to Elon Musk's takeover of the company.
The company has handpicked the initial group who will get checkmarks indicating they have verified their identity, and the test seems to be geared toward high-visibility accounts. "This feature is designed to help redditors understand who they're engaging with in moments when verification matters, whether it's an expert or celebrity hosting an AMA, a journalist reporting news, or a brand sharing information," Reddit explains in a blog post. "Our approach to verification is voluntary, opt-in, and explicitly not about status. It's designed to add clarity for redditors and ease the burden on moderators who often verify users manually."
For now, Reddit users, even notable ones, won't be able to apply for verification. But the company notes that its intention isn't to limit checkmarks to famous people only. A Reddit spokesperson tells Engadget that "our goal is that anyone who wishes to self-identify will be able to do so in the future."
The company also notes that verification doesn't come with any exclusive perks, like increased visibility or immunity from the rules of individual subreddits. Reddit requires accounts to be in good standing and already active on the platform in order to be eligible for verification. Accounts that are marked NSFW or that "primarily engage in NSFW-tagged communities" won't be eligible.
AI chatbots haven't come close to displacing social media in teens' lives, but they are playing a significant role in their online habits. Nearly one-third of US teens report using AI chatbots daily or more, according to a new report from Pew Research.
The report is the first from Pew to specifically examine how often teens are using AI overall, and was published alongside its latest research on teens' social media use. It's based on an online survey of 1,458 US teens who were polled between September 25 and October 9, 2025. According to Pew, the survey was "weighted to be representative of U.S. teens ages 13 to 17 who live with their parents by age, gender, race and ethnicity, household income, and other categories."
According to Pew, 48 percent of teens use AI chatbots "several times a week" or more often, with 12 percent reporting their use at "several times a day" and 4 percent saying they use the tools "almost constantly." That's far lower than the 21 percent of teens who report almost constant use of TikTok and the 17 percent who say the same about YouTube. But those numbers are still significant considering how much newer these services are compared with mainstream social media apps.
The report also offers some insight into which AI companies' chatbots are most used among teens. OpenAI's ChatGPT came out ahead by far, with 59 percent of teens saying they had used the service, followed by Google's Gemini at 23 percent and Meta AI at 20 percent. Just 14 percent of teens said they had ever used Microsoft Copilot, and 9 percent and 3 percent reported using Character AI and Anthropic's Claude, respectively.
The survey is Pew's first to study AI chatbot use among teens broadly. (Chart: Pew Research)
Pew's research comes as there's been growing scrutiny over AI companies' handling of younger users. Both OpenAI and Character AI are currently facing wrongful death lawsuits from the parents of teens who died by suicide. In both cases, the parents allege that their child's interactions with a chatbot played a role in their death. (Character AI briefly banned teens from its service before introducing a more limited format for younger users.) Other companies, including Alphabet and Meta, are being probed by the FTC over their safety policies for younger users.
Interestingly, the report also indicates there has been little change in US teens' social media use. Pew, which has regularly polled teens about how they use social media, notes that teens' daily use of these platforms "remains relatively stable" compared with recent years. YouTube is still the most widely used platform, reaching 92 percent of teens, followed by TikTok at 69 percent, Instagram at 63 percent and Snapchat at 55 percent. Of the major apps the report surveyed, WhatsApp is the only service to see significant change in recent years, with 24 percent of teens now reporting they use the messaging app, compared with 17 percent in 2022.
When the nonprofit Freedom House recently published its annual report, it noted that 2025 marked the 15th straight year of decline for global internet freedom. After Georgia and Germany, the biggest decline came in the United States.
Among the culprits cited in the report: age verification laws, dozens of which have come into effect over the last year. "Online anonymity, an essential enabler for freedom of expression, is entering a period of crisis as policymakers in free and autocratic countries alike mandate the use of identity verification technology for certain websites or platforms, motivated in some cases by the legitimate aim of protecting children," the report warns.
Age verification laws are, in some ways, part of a years-long reckoning over child safety online, as tech companies have shown themselves unable to prevent serious harms to their most vulnerable users. Lawmakers, who have failed to pass data privacy regulations, Section 230 reform or any other meaningful legislation that would thoughtfully reimagine what responsibilities tech companies owe their users, have instead turned to the blunt tool of age-based restrictions, and with much greater success.
Over the last two years, 25 states have passed laws requiring some kind of age verification to access adult content online. This year, the Supreme Court delivered a major victory to backers of age verification standards when it upheld a Texas law requiring sites hosting adult content to check the ages of their users.
Age checks have also expanded to social media and online platforms more broadly. Sixteen states now have laws requiring parental controls or other age-based restrictions for social media services. (Six of these measures are currently in limbo due to court challenges.) A federal bill to ban kids younger than 13 from social media has gained bipartisan support in Congress. Utah, Texas and Louisiana passed laws requiring app stores to check the ages of their users, all of which are set to go into effect next year. California plans to enact age-based rules for app stores in 2027.
These laws have started to fragment the internet. Smaller platforms and websites that don't have the resources to pay for third-party verification services may have no choice but to exit markets where age checks are required. Blogging service Dreamwidth pulled out of Mississippi after its age verification laws went into effect, saying that the $10,000-per-user fines it could face were an "existential threat" to the company. Bluesky also opted to go dark in Mississippi rather than comply. (The service has complied with age verification laws in South Dakota and Wyoming, as well as the UK.) Pornhub, which has called existing age verification laws "haphazard and dangerous," has blocked access in 23 states.
Pornhub is not an outlier in its assessment. Privacy advocates have long warned that age verification laws put everyone's privacy at risk. Practically, there's no way to limit age verification checks to minors alone: confirming the ages of everyone under 18 means you have to confirm the ages of everyone. In practice, this often means submitting a government-issued ID or allowing an app to scan your face. Both are problematic, and we don't need to look far to see how these methods can go wrong.
Discord recently revealed that around 70,000 users "may" have had their government IDs leaked due to an "incident" involving a third-party vendor the company contracts with to provide customer service related to age verification. Last year, another third-party identity provider that had worked with TikTok, Uber and other services exposed drivers' licenses. As a growing number of platforms require us to hand over an ID, these kinds of incidents will likely become even more common.
Similar risks exist for face scans. Because most minors don't have official IDs, platforms often rely on AI-based tools that can guess users' ages. A face scan may seem more private than handing over a Social Security number, but we could be turning over far more information than we realize, according to experts at the Electronic Frontier Foundation (EFF).
"When we submit to a face scan to estimate our age, a less scrupulous company could flip a switch and use the same face scan, plus a slightly different algorithm, to guess our name or other demographics," the organization notes. "A poorly designed system might store this personal data, and even correlate it to the online content that we look at. In the hands of an adversary, and cross-referenced to other readily available information, this information can expose intimate details about us."
These issues aren't limited to the United States. Australia, Denmark and Malaysia have taken steps to ban younger teens from social media entirely. Officials in France are pushing for a similar ban, as well as a "curfew" for older teens. These measures would also necessitate some form of age verification in order to block the users they target. In the UK, where the Online Safety Act went into effect earlier this year, we've already seen how well-intentioned efforts to protect teens from supposedly harmful content can end up making large swaths of the internet more difficult to access.
The law is ostensibly meant to "prevent young people from encountering harmful content relating to suicide, self-harm, eating disorders and pornography," according to the BBC. But the law has also resulted in age checks that reach far beyond porn sites. Age verification is required, in some cases, to access music videos and other content on Spotify. It will soon be required for Xbox accounts. On X, videos of protests have been blocked. Redditors have reported being blocked from a long list of subreddits that are marked NSFW but don't actually host porn, including those related to menstruation, news and addiction recovery. Wikipedia, which recently lost a challenge to be excluded from the law's strictest requirements, is facing the prospect of being forced to verify the ages of its UK contributors, which the organization has said could have disastrous consequences.
The UK law has also shown how ineffective existing age verification methods are. Users have been able to circumvent the checks by using selfies of video game characters, AI-generated images of ID documents and, of course, Virtual Private Networks (VPNs).
As the EFF notes, VPNs are incredibly widely used. The software allows people to browse the internet while masking their actual location. They're used by activists, students and people who want to get around geoblocks built into streaming services. Many universities and businesses (including Engadget parent company Yahoo) require their students and workers to use VPNs in order to access certain information. Blocking VPNs would have serious repercussions for all of these groups.
The makers of several popular VPN services reported major spikes in UK sign-ups after the Online Safety Act went into effect this summer, with ProtonVPN reporting a 1,400 percent surge. That's also led to fears of a renewed crackdown on VPNs. Ofcom, the regulator tasked with enforcing the law, told TechRadar it was "monitoring" VPN usage, which has further fueled speculation it could try to ban or restrict their use. And here in the States, lawmakers in Wisconsin have proposed an age verification law that would require sites that host "harmful" content to also block VPNs.
While restrictions on VPNs are, for now, mostly theoretical, the fact that such measures are even being considered is alarming. Until now, VPN bans have been more closely associated with authoritarian countries without an open internet, like Russia and China. If we continue down a path of trying to put age gates up around every piece of potentially objectionable content, the internet could get a lot worse for everyone.
Correction, December 9, 2025, 11:23AM PT: A previous version of this story stated that Spotify requires age checks to access music in the UK. The service requires some users to complete age verification in order to access music videos tagged 18+ and messaging. We apologize for the error.
Meta will soon allow Facebook and Instagram users in the European Union to choose to share less data and see less personalized ads on the platform, the European Commission announced. The change will begin to roll out in January, according to the regulator.
"This is the first time that such a choice is offered on Meta's social networks," the commission said in a statement. "Meta will give users the effective choice between: consenting to share all their data and seeing fully personalised advertising, and opting to share less personal data for an experience with more limited personalised advertising."
The move from Meta comes after the European Commission fined the company €200 million over its ad-free subscription plans in the EU, which the regulator deemed "consent or pay." Meta began offering ad-free subscriptions to EU users in 2023, and later lowered the price of the plans in response to criticism from the commission. Those plans haven't been very popular, however, with one Meta executive admitting earlier this year that there's been "very little interest" from users.
In a statement, a Meta spokesperson said that "we acknowledge" the European Commission's statement. "Personalized ads are vital for Europe's economy: last year, Meta's ads were linked to €213 billion in economic activity and supported 1.44 million jobs across the EU."
If you've ever had something go wrong with your Facebook or Instagram account, then you probably have a good idea of just how frustrating the support process can be. The company's automated processes are so broken that some people have found that suing Meta in small claims court can be a more reliable way of getting help from the company.
Now, Meta says it's trying to address some of these longstanding issues. In an update, the company acknowledged that its "support hasn't always met expectations" but that a series of AI-powered updates should make it easier for people to get help.
The company is rolling out a new "support hub" on Facebook and Instagram that is meant to bring all of its support features into one place. The hub will also have a new AI chat feature so users can ask questions about account issues or Meta's policies. An in-app support hub might not be that helpful if you can't access your account, though. A Meta spokesperson pointed to its external account recovery tool, which is meant to help people get back into their accounts.
Recovering hacked accounts has long been a pain point for Facebook and Instagram users. But Meta says that it's now improved the process with better email and text alerts. AI has also helped the company's systems recognize devices and locations you've frequently used in the past. "Our new account recovery experience adjusts to your particular situation with clearer guidance and simpler verification," Meta writes. "We've also expanded recovery methods to include taking an optional selfie video to further verify your identity."
Meta is also starting to test a new "AI support assistant" on Facebook that can provide "instant, personalized help" for issues like account recovery or managing your profile. It's not clear how this will work, or if it will enable people to talk to an actual person who works for Meta. For now, the most reliable way to access live support is via a Meta Verified subscription, though many users report that the chat-based service isn't able to help with more complex issues.
A Meta spokesperson said that the assistant is in the "early stages of testing" and is currently only available to some Facebook users globally. Those who are part of the test can find it via the app's new support hub.
According to Meta, these improvements have already shown some success in helping people get back into hacked accounts. The company says that this year it has "increased the relative success rate of hacked account recovery by more than 30% in the US and Canada."
The Oversight Board is getting ready to tackle a new pain point for Facebook and Instagram users. Up to now, users have been able to ask the board to review content moderation decisions about specific posts, but haven't been able to ask the group to intervene in other situations that affect their accounts.
That could soon change. The board says that it will weigh in on individual account-level penalties in a pilot next year. The board noted the change in an impact report recapping its five-year history and what lies ahead in the year to come. "In 2026, our scope expands once more as we pilot the ability to review Meta's decisions removing and impacting accounts, something that has created ongoing frustration for platform users," the report says.
It's not clear how this process will work, but if the board plans to take on account-level issues like suspensions, it would be a significant expansion of its purview. In an interview with Engadget, board member Paolo Carozza said that Meta is expected to refer a case to the board in January that will deal with an account-level issue. The handling of that case will allow the board to explore how it might take on similar cases in the future.
"We're really excited to take it on because we think it's an important area that really affects a lot of users and their interests," he told Engadget. "We all know how many people are constantly coming forward complaining about account-level restrictions or blocking or whatever else, and so if we get it right β and it's going to be important to work it out this first pilot β we're really optimistic that it's going to help open up a whole new avenue for us to be helpful to the users of [Meta's] platforms."
Carozza added that there are a number of "technical aspects" and other questions still being worked out between the board and Meta. So for now, it's too soon to say whether there will ever be an official appeals process for suspensions, like there currently is for post removals. But he says Meta is equally invested in the effort. "It's something we've been talking about with Meta for well over a year," he said. "They've been expressing an openness and a willingness to give us access to those kinds of questions."
The Oversight Board's report hints at another way its influence could potentially expand. It notes that the group's work has made it "well-positioned to partner with a range of global tech companies as they navigate issues arising from free speech debates globally." Both Meta and Oversight Board officials have previously floated the idea that "other companies" might want to take advantage of its expertise.
Up to now, most other platforms have had little incentive to do so. But Carozza says the rise of generative AI has created some new interest from platforms not affiliated with Meta, and that there have been "really preliminary" conversations with other companies. "It feels like quite a different moment now, largely because of generative AI, LLMs, chatbots [and] the way that a variety of retail-level users of these technologies are facing a whole new set of challenges and harms that's attracting a lot of scrutiny," he said. "We have had conversations in recent months with other tech companies in this space about the possibility that the board might be able to contribute helpful services to them to help navigate some of these really thorny questions."
Apple has tapped AI researcher Amar Subramanya, a longtime Google exec who was most recently corporate vice president of AI at Microsoft, as its new VP of AI. The company also announced that current AI exec John Giannandrea will retire in 2026.
Subramanya, who Apple describes as a "renowned AI researcher," spent 16 years at Google, where he was head of engineering for Gemini. He left Google earlier this year for Microsoft. In a press release, Apple said that Subramanya will report to Craig Federighi and will "be leading critical areas, including Apple Foundation Models, ML research, and AI Safety and Evaluation."
It's not entirely surprising that Apple is shaking up its AI leadership. Giannandrea joined Apple in 2018 after a stint at Google that included serving as its VP of search. While his hiring was seen as a major coup for Apple at the time, the company has faced some significant setbacks since, most notably its failure to deliver the more personalized, AI-centric version of Siri that it previewed last year. Giannandrea, who oversaw Siri for years, has shouldered much of the blame for the delays. Bloomberg reported earlier this year that Apple CEO Tim Cook had "lost confidence in the ability of AI head John Giannandrea to execute on product development" and put other executives in charge of Siri instead.
In a statement, Cook said he was "thankful" for Giannandrea's contributions to the company and credited Federighi with pushing the revamped Siri forward. "In addition to growing his leadership team and AI responsibilities with Amar's joining, Craig has been instrumental in driving our AI efforts, including overseeing our work to bring a more personalized Siri to users next year."