
X Rolls out ‘Starterpacks’ for New Users to Discover Crypto, Tech Feeds

X is preparing to launch “Starterpacks,” an onboarding feature that helps new users discover the best accounts and feeds based on interests like crypto and technology.

Unveiled on Thursday, the feature will roll out in the coming weeks, according to Nikita Bier, Head of Product at X.

Over the last few months, we scoured the world for the top posters in every niche & country

We've compiled them into a new tool called Starterpacks: to help new users find the best accounts—big or small—for their interests

⬇ Reply below with a topic you're most interested in… pic.twitter.com/MYIIQAaJaL

— Nikita Bier (@nikitabier) January 21, 2026

The curated starterpack in the crypto category will cover memecoin trading, with real-time market trends and sentiment from active traders.

A short video posted by Bier previewed how starterpacks work, showing users selecting their interests during onboarding and following the curated list of accounts.

Crypto Twitter Backlash – Is X Trying to Revive it?

The announcement arrives days after Bier’s comments about crypto Twitter sparked backlash among the community. Crypto users have complained about the declining visibility of crypto content on X.

“Crypto Twitter (CT) is dying from suicide, not from the algorithm,” he wrote in response.

His response deepened frustration within the crypto community, where users believe the platform is intentionally limiting crypto-related posts. Bier insisted that the issue is not tied to X’s algorithms.

On Wednesday, Bitcoin cypherpunk Jameson Lopp wrote that there were 96 million posts on X containing ‘Bitcoin’ in 2025, a 32% drop year-over-year.

Although the data did not reflect overall crypto engagement, the post triggered concerns about discovery challenges and algorithmic shifts.

Vitalik Buterin Calls for Better Crypto Social Media

In a separate post on Wednesday, Ethereum co-founder Vitalik Buterin stressed the need for better mass communication tools.

“We need mass communication tools that serve the user’s long-term interest, not maximize short-term engagement,” he wrote on X.

In 2026, I plan to be fully back to decentralized social.

If we want a better society, we need better mass communication tools. We need mass communication tools that surface the best information and arguments and help people find points of agreement. We need mass communication… https://t.co/ye249HsojJ

— vitalik.eth (@VitalikButerin) January 21, 2026

Further, he noted that crypto social projects have often gone the wrong way.

“Decentralized social should be run by people who deeply believe in the “social” part, and are motivated first and foremost by solving the problems of social,” he added.


Elon Musk’s X Open-Sources Its Feed Algorithm Amid Crypto Content Disruptions

By: Amin Ayan

Elon Musk’s social media platform X has released the core architecture behind the algorithm that determines what users see in their feeds.

Key Takeaways:

  • X has open-sourced its feed algorithm, exposing how content is ranked and surfaced.
  • The system uses a Grok-based transformer model to predict user engagement.
  • Musk acknowledged flaws in the algorithm and pledged regular public updates.

The move marks one of the first such disclosures ever made by a large social platform and comes as X faces growing pressure over content moderation, artificial intelligence, and crypto-related activity on the site.

X’s engineering team said the newly open-sourced system is built on the same transformer-based machine learning architecture used by Grok, the AI model developed by Musk’s xAI venture.

X Opens Its “For You” Algorithm, Admitting It Needs Major Fixes

The algorithm governs how posts are ranked in X’s “For You” feed, predicting user actions such as likes, replies, and reposts to determine which content surfaces most prominently.

Musk framed the release as a candid look at an imperfect system. In a post following the announcement, he acknowledged that the algorithm “needs massive improvements,” arguing that public scrutiny would help accelerate progress.

He added that X plans to publish regular updates every four weeks, accompanied by detailed developer notes explaining what has changed.

We know the algorithm is dumb and needs massive improvements, but at least you can see us struggle to make it better in real-time and with transparency.

No other social media companies do this. https://t.co/UMvBlD1ZpV

— Elon Musk (@elonmusk) January 20, 2026

According to technical documentation, the system relies on end-to-end machine learning rather than manually tuned ranking rules.

Written primarily in Rust and Python, the model retrieves posts from two sources: accounts a user follows and a wider pool of content identified through machine-learning-based discovery.

These posts are then scored based on predicted engagement, with higher-ranked content appearing more frequently in feeds.
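To make that two-stage flow concrete, here is a minimal Python sketch of retrieve-then-score ranking as described above. The Candidate structure, the function names, and the fixed engagement weights are illustrative assumptions made for this article, not X's actual code; in the open-sourced system the per-post engagement probabilities come from the Grok-based transformer and the weighting is learned end-to-end.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    post_id: str
    source: str      # "in_network" (followed accounts) or "discovery" (ML-based)
    p_like: float    # predicted probability the viewer likes the post
    p_reply: float   # predicted probability of a reply
    p_repost: float  # predicted probability of a repost

def engagement_score(c: Candidate) -> float:
    # Illustrative fixed weights; the real model learns these relationships
    # end-to-end rather than relying on hand-tuned coefficients.
    return 1.0 * c.p_like + 2.0 * c.p_reply + 1.5 * c.p_repost

def build_feed(in_network: list[Candidate], discovered: list[Candidate], k: int = 50) -> list[Candidate]:
    # Merge both candidate sources, score every post, and surface the top k.
    return sorted(in_network + discovered, key=engagement_score, reverse=True)[:k]
```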

The transparency may also affect creators and crypto-focused accounts that rely heavily on X for reach.

Grok’s own analysis of the algorithm highlighted several factors that influence visibility, including engagement history, content freshness, author diversity, and negative signals such as blocks or mutes.
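As a rough illustration of how those signals could interact, the sketch below layers the factors Grok named on top of a base engagement score. The multipliers, the freshness decay, and the parameter names are assumptions for illustration only; the open-sourced model does not expose fixed rules like these.

```python
def adjust_visibility(base_score: float,
                      engagement_history: float,   # 0..1: how often the viewer engages with this author
                      hours_old: float,            # content freshness
                      author_already_shown: bool,  # author diversity in the current feed
                      blocked_or_muted: bool) -> float:
    """Illustrative post-score adjustments; not X's actual ranking rules."""
    if blocked_or_muted:
        return 0.0  # negative signals such as blocks or mutes suppress the post outright
    freshness = 1.0 / (1.0 + hours_old / 24.0)        # newer posts decay less
    diversity = 0.5 if author_already_shown else 1.0  # limit repeats from one author
    return base_score * (0.5 + engagement_history) * freshness * diversity
```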

For creators, that clarity could reduce guesswork around what drives distribution, though it may also limit attempts to exploit ranking mechanics.

X Cracks Down on Crypto-Linked Engagement Apps

The timing of the release is notable. X has recently come under scrutiny after restricting API access for so-called InfoFi and engagement-reward projects, many of which were tied to crypto incentives.

The company said it would no longer allow apps that reward users for posting or interacting on X, citing concerns over AI-generated spam and manipulation.

Beyond crypto, X’s broader AI strategy has drawn regulatory attention, particularly in Europe, where authorities have raised concerns about Grok’s image-generation features.

The platform has since limited certain capabilities and introduced safeguards after investigations were launched.

As reported, X’s decision to clamp down on so-called InfoFi applications sent fresh shockwaves through the crypto market, dragging several tokens sharply lower and forcing a rethink across a niche that had grown tightly intertwined with the social media platform.

The immediate market reaction was led by KAITO, the token linked to the Kaito platform, which slid roughly 20% in a single day as investors digested what many saw as a structural threat rather than a short-term policy tweak.


Grok was finally updated to stop undressing women and children, X Safety says

Late Wednesday, X Safety confirmed that Grok was tweaked so it can no longer edit images to undress people without their consent.

"We have implemented technological measures to prevent the Grok account from allowing the editing of images of real people in revealing clothing such as bikinis," X Safety said. "This restriction applies to all users, including paid subscribers."

The update includes restricting "image creation and the ability to edit images via the Grok account on the X platform," which "are now only available to paid subscribers. This adds an extra layer of protection by helping to ensure that individuals who attempt to abuse the Grok account to violate the law or our policies can be held accountable," X Safety said.


X’s half-assed attempt to paywall Grok doesn’t block free image editing

Once again, people are taking Grok at its word, treating the chatbot as a company spokesperson without questioning what it says.

On Friday morning, many outlets reported that X had blocked universal access to Grok's image-editing features after the chatbot began prompting some users to pay $8 to use them. The messages are seemingly in response to reporting that people are using Grok to generate thousands of non-consensual sexualized images of women and children each hour.

"Image generation and editing are currently limited to paying subscribers," Grok tells users, dropping a link and urging, "you can subscribe to unlock these features."


Grok assumes users seeking images of underage girls have “good intent”

For weeks, xAI has faced backlash over undressing and sexualizing images of women and children generated by Grok. One researcher conducted a 24-hour analysis of the Grok account on X and estimated that the chatbot generated over 6,000 images an hour flagged as "sexually suggestive or nudifying," Bloomberg reported.

While the chatbot claimed that xAI supposedly "identified lapses in safeguards" that allowed outputs flagged as child sexual abuse material (CSAM) and was "urgently fixing them," Grok has proven to be an unreliable spokesperson, and xAI has not announced any fixes.

A quick look at Grok's safety guidelines on its public GitHub shows they were last updated two months ago. The GitHub also indicates that, despite prohibiting such content, Grok maintains programming that could make it likely to generate CSAM.


X blames users for Grok-generated CSAM; no fixes announced

It seems that instead of updating Grok to prevent outputs of sexualized images of minors, X is planning to purge users generating content that the platform deems illegal, including Grok-generated child sexual abuse material (CSAM).

On Saturday, X Safety finally posted an official response after nearly a week of backlash over Grok outputs that sexualized real people without consent. Offering no apology for Grok's functionality, X Safety blamed users for prompting Grok to produce CSAM while reminding them that such prompts can trigger account suspensions and possible legal consequences.

"We take action against illegal content on X, including Child Sexual Abuse Material (CSAM), by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary," X Safety said. "Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content."


xAI silent after Grok sexualized images of kids; dril mocks Grok’s “apology”

For days, xAI has remained silent after its chatbot Grok admitted to generating sexualized AI images of minors, which could be categorized as illegal child sexual abuse material (CSAM) in the US.

According to Grok's "apology"—which was generated by a user's request, not posted by xAI—the chatbot's outputs may have been illegal:

"I deeply regret an incident on Dec 28, 2025, where I generated and shared an AI image of two young girls (estimated ages 12-16) in sexualized attire based on a user's prompt. This violated ethical standards and potentially US laws on CSAM. It was a failure in safeguards, and I'm sorry for any harm caused. xAI is reviewing to prevent future issues."

Ars could not reach xAI for comment, and a review of feeds for Grok, xAI, X Safety, and Elon Musk does not show any official acknowledgment of the issue.


Signal in the noise: what hashtags reveal about hacktivism in 2025

What do hacktivist campaigns look like in 2025? To answer this question, we analyzed more than 11,000 posts produced by over 120 hacktivist groups circulating across both the surface web and the dark web, with a particular focus on groups targeting MENA countries. The primary goal of our research is to highlight patterns in hacktivist operations, including attack methods, public warnings, and stated intent. The analysis is undertaken exclusively from a cybersecurity perspective and anchored in the principle of neutrality.

Hacktivists are politically motivated threat actors who typically value visibility over sophistication. Their tactics are designed for maximum visibility, reach, and ease of execution, rather than stealth or technical complexity. The term “hacktivist” may refer to either the administrator of a community who initiates the attack or an ordinary subscriber who simply participates in the campaign.

Key findings

While it may be assumed that most operations unfold on hidden forums, in fact, most hacktivist planning and mobilization happens in the open. Telegram has become the command center for today’s hacktivist groups, hosting the highest density of attack planning and calls to action. X (formerly Twitter) comes in second.

Figure: Distribution of social media references in posts published in 2025

Although we focused on hacktivists operating in MENA, the targeting of the groups under review is global, extending well beyond the region. There are victims throughout Europe and the Middle East, as well as in Argentina, the United States, Indonesia, India, Vietnam, Thailand, Cambodia, Türkiye, and other countries.

Hashtags as the connective tissue of hacktivist operations

One notable feature of hacktivist posts and messages on dark web sites is the frequent use of hashtags (#words). Used constantly in their posts, hashtags often serve as political slogans, amplifying messages, coordinating activity, or claiming credit for attacks. The most common themes are political statements and hacktivist group names, though hashtags sometimes reference geographical locations, such as specific countries or cities.

Hashtags also map alliances and momentum. We identified 2,063 unique tags in 2025, of which 1,484 appeared for the first time, and many are tied directly to specific groups or joint campaigns. Most tags are short-lived, lasting about two months, with “popular” ones persisting longer when amplified by alliances; channel bans contribute to attrition.
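The kind of counting behind these figures can be reproduced in a few lines of Python. The regular expression, the post layout, and the year-based split below are assumptions about how one might process a dump of collected posts, not the report's actual tooling.

```python
import re
from collections import Counter

HASHTAG_RE = re.compile(r"#\w+")

def hashtag_stats(posts: list[dict]) -> tuple[Counter, set]:
    """Count 2025 hashtag frequency and find tags seen for the first time in 2025.

    Each post is assumed to look like {"text": str, "year": int}.
    """
    counts_2025 = Counter()
    seen_before_2025, seen_2025 = set(), set()
    for post in posts:
        tags = {t.lower() for t in HASHTAG_RE.findall(post["text"])}
        if post["year"] < 2025:
            seen_before_2025 |= tags
        else:
            seen_2025 |= tags
            counts_2025.update(tags)
    return counts_2025, seen_2025 - seen_before_2025
```

Unique-tag and first-appearance totals then fall out of the size of counts_2025 and the size of the returned set.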

Operationally, reports of completed attacks dominate hashtagged content (58%), and within those, DDoS is the workhorse (61%). Spikes in threatening rhetoric do not by themselves predict more attacks, but timing matters: when threats are published, they typically refer to actions in the near term, i.e. the same week or month, making early warning from open-channel monitoring materially useful.

The full version of the report details the following findings:

  • How long it typically takes for an attack to be reported after an initial threat post
  • How hashtags are used to coordinate attacks or claim credit
  • Patterns across campaigns and regions
  • The types of cyberattacks being promoted or celebrated

Practical takeaways and recommendations

For defenders and corporate leaders, we recommend the following:

  • Prioritize scalable DDoS mitigation and proactive security measures.
  • Treat public threats as short-horizon indicators rather than long-range forecasts.
  • Invest in continuous monitoring across Telegram and related ecosystems to discover alliance announcements, threat posts, and cross-posted “proof” rapidly.

Even organizations outside geopolitical conflict zones should assume exposure: hacktivist campaigns seek reach and spectacle, not narrow geography, and hashtags remain a practical lens for separating noise from signals that demand action.




Elon Musk Denies 420 Tweet Was About Weed

During a California court appearance Monday, when questioned about a 420 tweet, Elon Musk suddenly forgot the significance of the number in pot culture. The tech billionaire was responding after being pressed by an attorney representing Tesla shareholders in a class action lawsuit alleging that his tweets misled them about the price of Tesla shares.

The fiasco began several years ago. In 2018, Musk rounded the Tesla share price up from $419 to $420 and announced his plan to take the company private in a tweet. “Am considering taking Tesla private at $420. Funding secured,” Musk tweeted on Aug. 7, 2018, sending officials at the Securities and Exchange Commission (SEC) into a tailspin.

Am considering taking Tesla private at $420. Funding secured.

— Elon Musk (@elonmusk) August 7, 2018

Musk said he tweeted the share price based on what he said was a “firm commitment” from Saudi Arabia’s Public Investment Fund (PIF) to take Tesla private. But about 10 days later, Musk admitted that the Tesla buyout he had envisioned wasn’t going to materialize.

After an investigation, the SEC fined Musk and Tesla $20 million each and forced the billionaire to step down as chair of Tesla’s board. The SEC said that Musk misled investors. In the SEC’s complaint, Musk was accused of rounding up the share price to $420 from $419 “because he had recently learned about the number’s significance in marijuana culture.”

Musk caused an instantaneous uproar about a month later when he sparked up a blunt with Joe Rogan on “The Joe Rogan Experience” on Sept. 3, 2018, shocking Tesla investors and officials across the board. His troubles didn’t end there. High Times asked whether it was “the most expensive blunt of all time” given the fallout, with NASA- and SpaceX-associated officials reviewing his security clearance.

The Verge reports that Nicholas Porritt is an attorney for a class of Tesla investors suing Musk for millions of dollars that they say resulted from his failure to take Tesla private. 

The courtroom got tense: “You rounded up to 420 because you thought that would be a joke that your girlfriend will enjoy, isn’t that correct?” Porritt asked. “No,” Musk said, adding, “there is some, I think, karma around 420. I should question whether that is good or bad karma at this point.”

Musk said that 420 wasn’t a weed joke but the result of applying a roughly 20% premium to Tesla’s stock price at the time, which came to about $419 before he rounded up. “420 was not chosen because of a joke,” Musk testified. “It was chosen because there was a 20 percent premium over the stock price.” Musk also claimed the round number was a “coincidence.”

The jury will decide if Musk should have to pay out up to billions of dollars in damages to Tesla shareholders for the money they lost due to his tweets.

Judge Edward Chen ruled that the jury should be instructed that Musk’s 2018 tweets were false. Jurors will now need to decide whether those tweets deceived Tesla shareholders and caused their losses.

Musk said that he was not relying on a commitment from the Saudi PIF when he tweeted “funding secured,” adding that his shares in SpaceX would also have helped fund the deal to take Tesla private. “Just as I sold stock in Tesla to buy Twitter… I didn’t want to sell Tesla stock, but I did sell Tesla stock,” Musk said. “My SpaceX shares alone would have meant that funding was secured.”

Musk has also been sued by a group of former Twitter employees after a mass firing. Musk recently became the CEO of Twitter after buying the platform for $44 billion in October 2022. Saudi Prince Alwaleed bin Talal bin Abdulaziz is Twitter’s second-largest shareholder after Musk. 


Twitter in Trouble! EU to Take Action After Journalist Suspensions

By: Gokul G

Twitter's mass suspension of prominent journalists this week has the EU up in arms. After the accounts of Ryan Mac from the New York Times, Donie O'Sullivan from CNN, Drew Harwell from The Washington Post, political commentator Keith Olbermann, journalist Tony Webster, Micah Lee from The Intercept, Steve Herman from the Voice of America, journalist Aaron Rupar, and Mashable reporter Matt Binder were temporarily suspended for seven days, EU officials announced that they are preparing to take action.

The cause of the suspensions was initially a mystery, but Twitter CEO Elon Musk cleared up the confusion with a tweet explaining that doxxing had occurred, resulting in the seven-day bans. Even though the journalists had not engaged in live location sharing, which is against Twitter's safety policy, they appeared to have broken the rules in some other way.

It's been a wild week for Twitter users! On Wednesday, Twitter took the unprecedented step of banning the account @ElonJet, owned by college student Jack Sweeney. The account tracked Elon Musk's private jet using publicly available information. Shortly after, Musk tweeted that a car carrying his son "was followed by crazy stalker" who "blocked [the] car from moving and climbed onto [its] hood." Musk said that he was pursuing legal action against Sweeney.

Fast forward to Thursday night and a flurry of journalists were suspended for tweeting about the ElonJet account suspension and sharing the official LAPD statement regarding the incident connected to Musk's son. It just goes to show that you can't predict when Twitter will come down on an account, or who will be affected by a suspension.

These events have prompted Věra Jourová, Vice President for Values and Transparency at the European Commission, to threaten sanctions against Twitter. She tweeted, "News about arbitrary suspension of journalists on Twitter is worrying. EU's Digital Services Act requires respect of media freedom and fundamental rights. This is reinforced under our #MediaFreedomAct."

It's great to see the EU taking a stance and protecting media freedom, as it is a fundamental right and should be respected globally. It's time for tech giants to be held accountable for their actions and abide by the same laws as the rest of us. We'll have to wait and see how this situation develops and what kind of impact it will have.

Elon Musk Shadowbanned ElonJet?

By: Gokul G

If you didn't know, there's a Twitter account called @ElonJet that is tracking the movements of Elon Musk's private jet. Recently, it became harder for people to search for and see that account. And someone found out that Twitter had done this on purpose.

An anonymous employee told a journalist that Twitter's new head of trust and safety asked the engineers to "apply heavy VF to @elonjet immediately."

VF stands for "visibility filtering," a way to hide certain accounts and their posts from other people.

Elon Musk Just Reinstated Donald Trump's Twitter Account

By: Gokul G

The Twitter account of former President Donald J. Trump has been restored after Elon Musk held a Twitter poll titled "Reinstate former President Trump" and asked his followers if they thought the former president's account should be reinstated. The final results show 51.8% of respondents voted "Yes" and 48.2% voted "No".

Reinstate former President Trump

— Elon Musk (@elonmusk) November 19, 2022

The people have spoken.

Trump will be reinstated.

Vox Populi, Vox Dei. https://t.co/jmkhFuyfkv

— Elon Musk (@elonmusk) November 20, 2022

Elon Musk says Twitter Blue will probably be back by the end of next week

By: Gokul G

In a recent tweet, Elon Musk revealed that Twitter Blue will probably be back by the end of next week.

Here is the Tweet:

Probably end of next week

— Elon Musk (@elonmusk) November 13, 2022


If you have been following the recent developments on Twitter, you have probably heard that Musk's $8 blue check mark policy backfired badly: The trouble began after some online trolls started making "verified" parody accounts of Elon Musk, Tesla, and many others. 

The one that caused the most problems was the parody account of Eli Lilly, which sent out a tweet saying "we are excited to announce insulin is free now." Believe it or not, that single tweet caused Eli Lilly to lose $15 billion in stock market value.

Do you remember when Elon Musk tweeted "power to the people"? I never thought he would provide this much power to the common people. 🤣🤣🤣

Has Twitter become unfixable? I would say no because I believe Elon Musk can fix it, but Elon should rethink his approach and do what he did with Tesla and SpaceX: instead of trying to make the company profitable, he should focus on making an awesome product.
