Yesterday — 24 January 2026

$7 Trillion Player Is Moving Into Bitcoin, Can This Trigger A Surge To $200,000?

24 January 2026 at 19:30

Swiss banking giant UBS, with roughly $7 trillion in assets under management (AuM), is set to launch Bitcoin trading for some of its clients. This comes amid predictions that regulatory clarity and broader adoption could send the BTC price as high as $200,000.

UBS To Offer Bitcoin Trading To Some Wealth Clients

Bloomberg reported that UBS is planning to launch crypto trading for some of its wealth clients, starting with its private bank clients in Switzerland. The bank will reportedly begin by offering these clients the opportunity to invest in Bitcoin and Ethereum, and the offering could later expand to clients in the Asia-Pacific region and the U.S.

The banking giant is currently in discussions with potential partners, and there is no clear timeline for when it could launch Bitcoin and Ethereum trading for clients. The move is said to be driven partly by rising demand from wealth clients for crypto exposure. UBS also faces increased competition, as other Wall Street giants are working to offer crypto trading.

Morgan Stanley, in partnership with Zerohash, announced plans to launch crypto trading in the first half of this year, starting with Bitcoin, Ethereum, and Solana. The banking giant may soon also be able to offer its own crypto products, as it has filed with the SEC to launch spot BTC, ETH, and SOL ETFs.

Furthermore, JPMorgan, another of UBS’ competitors, is considering offering crypto trading to institutional clients, although this plan is still in the early stages. The bank already accepts Bitcoin and Ethereum as collateral from its clients. Last year, it also filed to offer BTC structured notes that will track the performance of the BlackRock Bitcoin ETF.

Can Banks’ Entry Trigger A BTC Rally To $200,000?

Kevin O’Leary predicted that Bitcoin could rally to between $150,000 and $200,000 this year, driven by the passage of the CLARITY Act. His prediction came just as White House Crypto Czar David Sacks said banks would fully enter crypto once the bill passes. As such, BTC could reach the psychological $200,000 level in anticipation of the new capital that could flow in from these banks once the bill becomes law.

BitMine’s Chairman, Tom Lee, also predicted during a CNBC interview that Bitcoin could reach between $200,000 and $250,000 this year, partly due to growing institutional adoption by Wall Street giants. Meanwhile, Binance founder Changpeng “CZ” Zhao said that a BTC rally to $200,000 is the “most obvious thing in the world” to him.

At the time of writing, the Bitcoin price is trading at around $89,600, up in the last 24 hours, according to data from CoinMarketCap.



With a new executive order clearing the path for federal AI standards, the question now is whether Congress will finish the job

21 January 2026 at 17:12

Interview transcript:

Terry Gerton Last time we spoke, we were talking about the potential of a patchwork of state laws that might stifle AI innovation. Now we don’t have a federal law, but we have an executive order from the president that creates a federal preemption framework and a task force that will specifically challenge those state laws. Last time we talked, we were worried about constitutionality. What do you think about this new construct?

Kevin Frazier Yeah, so this construct really tries to set forth a path for Congress to step up. I think everyone across the board, at the state level and in the White House, is asking Congress to take action. And this, in many ways, is meant to smooth that path and ease the way forward for Congress to finally set forth a national framework. And by virtue of establishing an AI Litigation Task Force, the president is trying to make sure that Congress has a clear path to move forward. This AI Litigation Task Force essentially charges the Department of Justice, under the Attorney General, with challenging state AI laws that may be unconstitutional or otherwise unlawful. Now, critically, this is not saying that states do not have the authority to regulate AI in certain domains, but merely encouraging the AG to pursue a more focused litigation agenda, concentrating challenges on state AI laws that may have extraterritorial ramifications or may violate the First Amendment, things the DOJ has always had the authority to do.

Terry Gerton Where do you think, then, that this sets the balance between innovation and state autonomy and federal authority?

Kevin Frazier So the balance is constantly being weighed here, Terry. I’d say that this is trying to strike a happy middle ground. We see that in the executive order, there’s explicit recognition that in many ways there may be state laws that actually do empower and encourage innovation. We know that in 2026, we’re going to see Texas, my home state, develop a regulatory sandbox that allows for AI companies to deploy their tools under fewer regulations, but with increased oversight. Utah has explored a similar approach. And so those sorts of state laws that are very much operating within their own borders, that are regulating the end uses of AI, or as specified in the executive order, things like data center locations, things like child safety protections and things like state government use of AI, those are all cordoned off and recognized by the EO as the proper domain of states. And now, the EO is really encouraging Congress to say, look, we’re trying to do our best to make sure that states aren’t regulating things like the frontier of AI, imposing obligations on AI development, but Congress, you need to step up because it is you, after all, that has the authority under the Constitution to regulate interstate commerce.

Terry Gerton Let’s go back to those sandboxes that you talked about, because we talked about those before and you talked about them as a smart way of creating a trial and error space for AI governance. Does this EO then align with those and do you expect more states to move in that direction?

Kevin Frazier Yes, so this EO very much encourages and welcomes state regulations that, again, aren’t running afoul of the Constitution, aren’t otherwise running afoul of federal laws or regulations that may preempt certain regulatory postures by the states. If you’re not doing something unconstitutional, if you’re not trying to violate the Supremacy Clause, there’s a wide range for states to occupy with respect to AI governance. And here, those sorts of sandboxes are the sort of innovation-friendly approaches that I think the White House and members of Congress and many state legislators would like to see spread and continue to be developed. And these are really the sorts of approaches that allow us to get used to and start acclimating to what I like to refer to as boring AI. The fact of the matter is most AI isn’t something that’s going to threaten humanity. It’s not something that’s going to destroy the economy tomorrow, so on and so forth. Most AI, Terry, is really boring. It’s things like improving our ability to detect diseases, improving our ability to direct the transmission of energy. And these sorts of positive, admittedly boring, uses of AI are the very sorts of things we should be trying to experiment with at the state level.

Terry Gerton I’m speaking with Dr. Kevin Frazier. He is the AI innovation and law fellow at the University of Texas School of Law. Kevin, one of the other things we’ve talked about is that the uncertainty around AI laws and regulations really creates a barrier to entry for innovators or startups or small businesses in the AI space. How do you think the EO affects that concern?

Kevin Frazier So the EO is very attentive to what I would refer to, not only as a patchwork, but increasingly what’s looking like a Tower of Babel approach that we’re seeing at the state level. So most recently in New York, we saw that the governor signed legislation that looks a lot like SB 53. Now for folks who aren’t spending all of their waking hours thinking about AI, SB 53 was a bill passed in California that regulates the frontier AI companies and imposes various transparency requirements on them. Now, New York in some ways copy and pasted that legislation. Folks may say, oh, this is great, states are trying to copy one another to make sure that there is some sort of harmony with respect to AI regulation. Well, the problem is how states end up interpreting those same provisions, what it means, for example, to have a reasonable model or what it means to adhere to certain transparency requirements, that may vary in terms of state-by-state enforcement. And so that’s really where there is concern in the White House with respect to extraterritorial laws, because if suddenly we see that an AI company in Utah or Texas feels compelled or is compelled to comply with New York laws or California laws, that’s where we start to see that concern about a patchwork.

Terry Gerton And what does that mean for innovators who may want to scale up? They may get a great start in Utah, for example, but how do they scale up nationwide if there is that patchwork?

Kevin Frazier Terry, this is a really important question because there’s an argument to be made that bills like SB 53 or the RAISE Act in New York include carve-outs for smaller AI labs. And some folks will say, hey, look, it says if you’re not building a model of this size or with this much money, or if you don’t have this many users, then great, you don’t have to comply with this specific regulation. Well, the problem is, Terry, I have yet to meet a startup founder who says, I can’t wait to build this new AI tool, but the second I hit 999,000 users, I’m just going to stop building. Or the second that I want to build a model that’s just one order of magnitude more powerful in terms of compute, I’m just going to turn it off, I’m going to throw in the towel. And so even when there are carve-outs, we see that startups have to begin to think about when they’re going to run into those regulatory burdens. And so even with carve-outs applied across the patchwork approach, we’re going to see that startups find it harder and harder to convince venture capitalists, to convince institutions, to bet and gamble on them. And that’s a real problem if we want to be the leaders in AI innovation.

Terry Gerton So let’s go back then to the DOJ’s litigation task force. How might that play into this confusion? Will it clarify it? Will it add more complexity? What’s your prognostication?

Kevin Frazier Yes, I always love to prognosticate, and I think that here we’re going to see some positive litigation be brought forward that allows some of these really important, difficult debates to finally be litigated. There are questions about what it means to regulate interstate commerce in the AI domain. We need experts to have honest and frank conversations about this, and litigation can be a very valuable forcing mechanism for having folks suddenly say, hey, if you regulate this aspect of AI, then from a technical standpoint, it may not pose any issues. But if you regulate this other aspect, now we’re starting to see that labs would have to change their behavior. And so litigation can be a very positive step that sends the signals to state legislators, hey, here are the areas where it’s clear for you to proceed and here are areas where the Constitution says, whoa, that’s Congress’s domain. And so I’m optimistic that under the leadership of the attorney general, and seeing folks like David Sacks, the AI and crypto czar, lend their expertise to these challenges as well, we’re going to get the sort of information we need at the state and federal level for both parties to be more thoughtful about the sorts of regulations they should impose.

Terry Gerton All right, Kevin, underlying all of the things you’ve just talked about is the concern you raised at the beginning. Will Congress step up and enact national legislation? What should be at the top of their list if they’re going to move forward on this?

Kevin Frazier So the thing at the top of Congress’s list, in my opinion, has to be novel approaches, number one, to AI research. We just need to understand better how AI works, things like that black box concept we talk about frequently with respect to AI, and things like making sure that if AI ends up in the hands of bad actors, we know how to respond. Congress can really put a lot of energy behind those important AI research initiatives. We also need Congress to help make sure more data is available to more researchers and startups so that we don’t find ourselves operating solely in the AI world of OpenAI, Microsoft and Anthropic. We want to see real competition in this space, and Congress can make sure that the essential inputs to AI development are more broadly available. And finally, I think Congress can do a lot of work with respect to improving the amount of information we’re receiving from AI companies. So SB 53, for example, is a great example of a state bill that’s trying to garner more information from AI labs that can then lead to smarter, better regulation down the road. But the best approach is for Congress to take the lead on imposing those requirements, not states.

The post With a new executive order clearing the path for federal AI standards, the question now is whether Congress will finish the job first appeared on Federal News Network.


Lawmaker eyes bill to codify NIST AI center

A top House lawmaker is developing legislation to codify the National Institute of Standards and Technology’s Center for AI Standards and Innovation into law.

The move to codify CAISI comes as lawmakers and the Trump administration debate and discuss the federal government’s role in overseeing AI technology.

Rep. Jay Obernolte (R-Calif.), chairman of the House Science, Space and Technology Committee’s research and technology subcommittee, said he has a “forthcoming” bill dubbed the “Great American AI Act.”

During a Wednesday hearing, Obernolte said the bill will formalize CAISI’s role to “advance AI evaluation and standard setting.”

“The work it does in doing AI model evaluation is essential in creating a regulatory toolbox for our sectoral regulators, so everyone doesn’t have to reinvent the wheel,” Obernolte said.

The Biden administration had initially established an AI Safety Institute at NIST. But last summer, the Trump administration rebranded the center to focus on standards and innovation.

Last September, the center released an evaluation of the Chinese “DeepSeek” AI model that found it lagged behind U.S. models on cost, security and performance. More recently, CAISI released a request for information on securing AI agent systems.

Despite the Trump administration’s rebranding, however, Obernolte noted the NIST center’s functions have largely stayed consistent. He argued codifying the center would provide stability.

“I think everyone would agree, it’s unhealthy for us to have every successive administration spin up a brand new agency that, essentially, is doing something with a long-term mission that needs continuity,” he said.

Obernolte asked Michael Kratsios, the director of the White House Office of Science and Technology Policy, what he thought about codifying the center into law.

Kratsios said CAISI is a “very important part” of the larger AI agenda. He also said it was important for the administration to reframe the center’s work around innovation and standards, rather than safety.

“It’s absolutely important that the legacy work around standards relating to AI are undertaken by CAISI, and that’s what they’re challenged to do,” Kratsios said. “And that’s the focus that they should have, because the great standards that are put out by CAISI and by NIST are the ones that, ultimately, will empower the proliferation of this technology across many industries.”

Later on in the hearing, Kratsios said the NIST center would play a key role in setting standards for “advanced metrology of model evaluation.”

“That is something that can be used across all industries when they want to deploy these models,” he said. “You want to have trust in them so that when everyday Americans are using, whether it be medical models or anything else, they are comfortable with the fact that it has been tested and evaluated.”

Obernolte and Rep. Sarah McBride (D-Del.), meanwhile, have also introduced the “READ AI Act.” The bill would direct NIST to develop guidelines for how AI models should be evaluated, including standard documentation.

Asked about the bill, Kratsios said it was worthy of consideration, but added that any such efforts should avoid just focusing on frontier AI model evaluation.

“The reality is that the most implementation that’s going to happen across industry is going to happen through fine-tuned models for specific use cases, and it’s going to be trained on specific data that the large frontier models never had access to,” he added.

“In my opinion, the greatest work that NIST could do is to create the science behind how you measure models, such that any time that you have a specific model – for finance, for health, for agriculture – whoever’s attempting to implement it has a framework and a standard around how they can evaluate that model,” Kratsios continued. “At the end of the day, the massive proliferation is going to be through these smaller, fine-tuned models for specific use cases.”

Discussion around the role of the NIST center comes amid a larger debate over the role of the federal government in setting AI standards. In a December executive order, President Donald Trump called for legislative recommendations to create a national framework that would preempt state AI laws.

But during the hearing, Kratsios offered few specifics on what he and Special Adviser for AI and Crypto David Sacks have been considering.

“That’s something that I very much look forward to working with everyone on this committee on,” Kratsios said. “What was clear in the executive order, specifically, was that any proposed legislation should not preempt otherwise lawful state actions relating to child safety protections, AI compute and data infrastructure, and also state government procurement and use of AI.”

“But, we look forward over the next weeks and months to be working with Congress on a viable solution,” he added.

The post Lawmaker eyes bill to codify NIST AI center first appeared on Federal News Network.
