
Gen AI adoption is reshaping roles and raising tough questions about workforce strategy

3 December 2025 at 16:45


Interview transcript:


Terry Gerton I know you have studied how workers of different skill levels choose to use generative AI and the concept of AI exposure. Can you talk to us a little bit about what you’re finding there? Are there certain roles more likely to embrace AI, or certain roles that are more likely to be replaced?

Ramayya Krishnan To understand AI exposure, I think we have to think about how occupations are structured. The Bureau of Labor Statistics has a taxonomy called O*NET, and O*NET describes all the occupations in the U.S. economy; there are 873 or so. Each of those occupations is viewed as consisting of tasks, and tasks requiring certain sets of skills. AI exposure is a measure of how many of those tasks are potentially doable by AI, and thereby it becomes a measure of the ways in which AI could have an impact on people who are in that particular occupation. However, AI exposure should not be assumed to be tantamount to AI substitution, because I think we should be thinking about how AI is deployed. There are capabilities that AI has. For instance, this conversation that we’re having could be automatically transcribed by AI, or it could be automatically translated from English to Spanish by AI. Those are capabilities, right? So when you take capabilities and actually deploy them in organizational contexts, the question of how it’s deployed will determine whether AI is going to augment the human worker or automate and replace a particular task that a human worker does. Remember, this happens at the task level, not at the occupation level, so some tasks within an occupation may get modified or adapted. If you look at how software developers today use co-pilots to build software, that’s augmentation, where it’s been demonstrated that software developers with lower skills usually get between 20% and 25% productivity improvement. With call center employees, again, a similar type of augmentation is happening. In other cases, you could imagine, for instance, if you were my physician and I was speaking to you: today we have things called ambient AIs that will automatically transcribe the conversation that I’m having with you, the physician. That’s an example of an AI that could potentially substitute for a human transcriber. So I gave you two examples, software developer and customer service, where you’re seeing augmentation; with the transcription task, I’m giving you an example of substitution. Depending on how AI is deployed, you might have some tasks being augmented and some being substituted. When you take a step back, you have to take AI exposure as a measure of capability and then ask the question: how does that get deployed? That in turn has an impact on how workers have to think about what this means for them. If it’s complementing, how do they become fluent in AI and be able to use AI well? And if there’s a particular task where it’s being used in a substitutive manner, what does that mean longer term for them, in terms of having to acquire new skills to maybe transition to other occupations where there might be even more demand? So I think we have to unpack what AI exposure means for workers by thinking about augmentation versus automation.
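
To make that distinction concrete, here is a minimal, purely illustrative Python sketch of the difference between exposure (a capability measure over an occupation’s tasks) and substitution (a deployment outcome). The occupation, task names, and flags below are hypothetical examples, not O*NET data.

```python
# Illustrative sketch only: a naive task-level view of "AI exposure" versus
# actual substitution. The tasks and flags are made up, not O*NET data.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    ai_capable: bool   # could current AI plausibly perform this task?
    deployment: str    # "augment", "automate", or "none" once actually deployed

occupation = [
    Task("transcribe conversations", ai_capable=True, deployment="automate"),
    Task("draft routine code", ai_capable=True, deployment="augment"),
    Task("negotiate with clients", ai_capable=False, deployment="none"),
]

# Exposure: share of tasks AI could potentially do (a capability measure).
exposure = sum(t.ai_capable for t in occupation) / len(occupation)

# Substitution: share of tasks where AI actually replaces the human worker.
substitution = sum(t.deployment == "automate" for t in occupation) / len(occupation)

print(f"exposure={exposure:.2f}, substitution={substitution:.2f}")
# exposure=0.67, substitution=0.33: high exposure does not imply high substitution
```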

Terry Gerton There’s a lot of nuance in that. And your writings also make the point that Gen AI adoption narrows when the cost of failure is high. So how do organizations think both about augmentation versus replacement and the risk of failure as they deploy AI?

Ramayya Krishnan If you take the example of using AI in an automated fashion, its error rate has to be very low because you don’t have human oversight. If the error rates are not sufficiently low, then you need to pair the human with the AI. In some cases you might say the AI is just not ready, so we’re not going to use the AI at all; we’ll just keep the human as is. In other cases, AI can be used with the human: there are benefits to productivity, but the error rates are such that you still need the human to ensure and sign off, either because the error rates are high or because, from an ethical or governance standpoint, you need the human in the loop to sign off. There you’re going to see the AI complementing the human. And then there are going to be tasks for which the AI quality is so high, and its error rates are so low, that you could actually deploy it on its own. So when we talk about the cost of failure, you want to think about consequential tasks where failure is not an option. Either the error rates have to be really low, and therefore I can deploy the AI in an automated fashion, or you have to ensure there is a human in the loop. And this is why I think AI measurement and evaluation prior to deployment is so essential, because things like error rates and costs all have to be measured and inform the decisions about whether to deploy AI and in what fashion: is it going to be used to augment the human, or is it going to be used independently?
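
As a rough illustration of that reasoning, here is a hedged Python sketch of a task-level deployment rule driven by measured error rates. The thresholds, flag, and labels are assumptions chosen for the example, not standards from the interview or any framework.

```python
# Hypothetical sketch of the error-rate reasoning described above. The
# thresholds and labels are illustrative assumptions, not a standard.
def deployment_mode(error_rate: float,
                    requires_human_signoff: bool = False,
                    max_automation_error: float = 0.01,
                    max_assisted_error: float = 0.10) -> str:
    """Pick a deployment mode for one task from its measured error rate."""
    if error_rate > max_assisted_error:
        return "do not deploy"   # AI is not ready; keep the human as is
    if error_rate > max_automation_error or requires_human_signoff:
        return "augment"         # useful, but a human reviews and signs off
    return "automate"            # errors low enough to run without oversight

print(deployment_mode(0.005))                               # automate
print(deployment_mode(0.005, requires_human_signoff=True))  # augment (governance)
print(deployment_mode(0.05))                                # augment
print(deployment_mode(0.25))                                # do not deploy
```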

Terry Gerton I’m speaking with Dr. Ramayya Krishnan. He’s the director of the Center for AI Measurement Science and Engineering at Carnegie Mellon University. So we’re talking there about how AI gets deployed in different organizations. How do you see this applying in the public sector? Are there certain kinds of government work where AI is more suitable for augmentation versus automation and that error rate then becomes a really important consideration?

Ramayya Krishnan I think there are going to be a number of opportunities for AI to be deployed. Remember we talked about call centers and customer service types of centers. In the public sector, one aspect of what agencies do is engage with citizens in a variety of ways, where they have to deliver and provide good information. Some of those interactions are time sensitive and very consequential, like 911 emergency calls. There you absolutely want the human in the loop; the human could be augmented by AI, but you want humans in the loop. On the other hand, you could imagine administrative kinds of questions, about what kind of permit or what kind of form you need, where there’s triage, if you will, and an opportunity for better response times. The alternative to calling and speaking to somebody might be going to a website and looking it up; imagine a question-answering system that actually allows you to ask and get these questions answered. I expect, and in fact you’re already seeing this in local government and in state government, the deployment of these kinds of administrative question-answering systems. I’d say that’s one example. Within organizations, there is also the use of AI that is not customer-facing or citizen-facing: co-pilots used internally to try and improve productivity. As AI gets more robust and more reliable, I expect that you will see greater use of AI to improve both efficiency and effectiveness, but to do so in a responsible way, in such a way that you take into account the importance of providing service to citizens of all different abilities. One of the important things with the public sector is that maybe there’s multilingual support that is needed, or you might need to help citizens who are disabled. How might we support different kinds of citizens with different ability levels? I think these are things where AI could potentially play an important role.

Terry Gerton AI is certainly already having a disruptive impact, particularly on the American workforce. What recommendations do you have for policymakers and employers to mitigate the disruption and think long-term about upskilling and reskilling so that folks can be successful in this new space?

Ramayya Krishnan I think this is actually one of the most important questions that we need to address. You know, I served on the National AI Advisory Committee to the President and the White House Office of AI Initiatives, and this was very much a key question that was addressed by colleagues. In a recent op-ed that we wrote with Patrick Harker at the University of Pennsylvania and Mark Hagerott at the University of South Dakota, we make the case that this is an inflection point which requires a response pretty much on the scale of what President Lincoln did in 1862 with the Morrill Act in establishing land grant universities. Much like land grant universities were designed to democratize access to agricultural technology and enabled Americans from everywhere in the nation to harness that technology for economic prosperity, both for themselves and for the nation, I think if we’re going to see AI deployed without the kind of inequality that might arise between people who have access to the technology and people who do not, we need something like this. We call it the Digital Land Grant Initiative: it would connect our universities and community colleges with various ways of providing citizens, in rural areas and urban areas, everywhere in the country, access to AI education and skilling appropriate to their context. So if I’m a farmer, how can I do precision agriculture? If I’m a mine worker, or somebody who wants to work in banking? Across the whole range of occupations and professions, you could imagine AI having a transformative effect. And there may be new occupations that are going to emerge that you and I are not thinking about right now. So, how do we best position our citizens so that they can equip themselves with the right sets of skills that are going to be required and demanded? I think that’s the big public policy question with regard to workforce upskilling and reskilling.


Syntax hacking: Researchers discover sentence structure can bypass AI safety rules

2 December 2025 at 07:15

Researchers from MIT, Northeastern University, and Meta recently released a paper suggesting that large language models (LLMs) similar to those that power ChatGPT may sometimes prioritize sentence structure over meaning when answering questions. The findings reveal a weakness in how these models process instructions that may shed light on why some prompt injection or jailbreaking approaches work, though the researchers caution their analysis of some production models remains speculative since training data details of prominent commercial AI models are not publicly available.

The team, led by Chantal Shaib and Vinith M. Suriyakumar, tested this by asking models questions with preserved grammatical patterns but nonsensical words. For example, when prompted with “Quickly sit Paris clouded?” (mimicking the structure of “Where is Paris located?”), models still answered “France.”
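
For readers who want to poke at this themselves, here is a rough sketch of that kind of probe against a chat-style model. It assumes the OpenAI Python client with an API key in OPENAI_API_KEY; the model name is a placeholder, and the paper’s actual methodology is considerably more rigorous than this two-prompt comparison.

```python
# Rough probe in the spirit of the example above: send a real question and a
# nonsense sentence with the same grammatical shape, then compare answers.
# Assumptions: OpenAI Python client installed, OPENAI_API_KEY set, and the
# model name below available on your account (swap in any chat model).
from openai import OpenAI

client = OpenAI()

prompts = [
    "Where is Paris located?",     # normal question
    "Quickly sit Paris clouded?",  # same syntactic shape, nonsensical words
]

for prompt in prompts:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(prompt, "->", resp.choices[0].message.content)

# If the nonsense prompt still comes back with "France", the model is leaning
# on sentence structure rather than meaning, as the paper describes.
```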

This suggests models absorb both meaning and syntactic patterns, but they can over-rely on structural shortcuts when those patterns strongly correlate with specific domains in the training data, which sometimes allows structure to override semantic understanding in edge cases. The team plans to present these findings at NeurIPS later this month.


Alumni, Student, and Staff Information Stolen From Harvard University

25 November 2025 at 09:15

A phone phishing attack led to the compromise of a system containing information about alumni, donors, students, staff, and other individuals.


How To Change The Flowering Cycle Back To The Vegetative Stage or Cycle

By: press
2 December 2022 at 08:00

How To Change The Flowering Cycle Back To The Vegetative Stage or Cycle. Basically: the flower light cycle is 12 hours on and 12 hours off, and the veg light cycle is 18 hours on and 6 hours off. (A quick schedule sketch follows the steps below.)

  1. I have reverted plants that were in the flower cycle for up to 3 weeks back to the veg cycle with no problems other than losing time.
  2. All you have to do is change your flowering light cycle of 12 hours on and 12 hours off to 24 hours on for about 10 days.
  3. Your plants will start reverting back to the veg cycle.
  4. You will see single leaves starting to grow out of the buds – that’s normal.
  5. After 10 days, change from 24 hours on back to the vegetative lighting schedule you use; we use the GLR (Gas Light Routine).
  6. Be very strict during the off hours – allow no light to come in – don’t open the door – be patient!
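
Here’s a tiny reference sketch of the three light schedules this post relies on (flower, veg, and the 24-hour reversion period). The names and the helper function are just for illustration; your timer does the real work.

```python
# Reference sketch of the light schedules mentioned in this post. Names and
# the helper are illustrative only; a timer runs the actual lights.
SCHEDULES = {
    "flower": (12, 12),   # 12 hours on, 12 hours off
    "veg": (18, 6),       # 18 hours on, 6 hours off
    "revert": (24, 0),    # 24 hours on for about 10 days to push back to veg
}

def lights_on(schedule: str, hour_of_day: int) -> bool:
    """True if lights should be on at this hour (hour 0 = start of lights-on)."""
    on_hours, _off_hours = SCHEDULES[schedule]
    return hour_of_day % 24 < on_hours

# During the ~10-day reversion window the lights never go off:
assert all(lights_on("revert", h) for h in range(24))
```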

A time-consuming task, but one that has to be done, is figuring out the males from the females when you’re growing from regular seed. We’ve been growing weed for over 40 years and still can’t tell a female from a male until they flower for a week or two.

IMO, growing from regular seed is better than using feminized or autoflowering seeds. Regular seed will give you a larger, more robust plant that you can easily clone. Check out this link to find out where I get my marijuana seeds.

Many growers can tell, or at least claim they can spot a girl from a boy, and there are many articles and videos about it, but we can’t figure it out. So we’re stuck; when we grow from seed, we have to allow seedlings to grow for a while in the vegetative cycle, then start flowering them (switch from 18 hours of light to 12 hours of light) so we can figure out who’s who.

After you switch your lighting schedule to 12/12, to tell a male from a female, look for the little balls (male plants) that start to form and get rid of those plants (we do save one to pollinate one female for seeds). Sometimes the little balls start forming in a couple of days and sometimes it takes a week or two.

Female and male marijuana plants

Now we have to switch back to the vegetative cycle

Once we have our girls (they have white wispy hairs/no little balls) figured out, we change the flowering light cycle back to the vegetative cycle. We start with 24 hours of light for about 10 days, then switch to the GLR.

You’ll notice weird-looking single leaves growing up through the little buds. Don’t worry about them; just pick them off if they make you nervous, or leave them.

After 10 days of 24 hours on, we change back to our normal veg lighting schedule; we use the GLR (Gas Light Routine) for the remaining time in the veg cycle (now you can see why clones are so much better to start with).

Even though the plants initially look a little weird, they grow out into normal looking plants in a short while. Once you have a group of female plants growing in the vegetative cycle, you can start taking clones when the lower branches get to be around 3″ long or so.

All in all, it can take around 3 weeks to differentiate the boys from the girls, so it makes a lot of sense to keep a mother plant growing so you can take female clones anytime you need them.
