In an era where business, education, and even casual conversations occur via screens, sound has become a differentiating factor. We obsess over lighting, camera angles, and virtual backgrounds, but how we sound can be just as critical to credibility, trust, and connection.
That’s the insight driving Erik Vaveris, vice president of product management and chief marketing officer at Shure, and Brian Scholl, director of the Perception & Cognition Laboratory at Yale University. Both see audio as more than a technical layer: It’s a human factor shaping how people perceive intelligence, trustworthiness, and authority in virtual settings.
“If you’re willing to take a little bit of time with your audio setup, you can really get across the full power of your message and the full power of who you are to your peers, to your employees, your boss, your suppliers, and of course, your customers,” says Vaveris.
Scholl’s research shows that poor audio quality can make a speaker seem less persuasive, less hireable, and even less credible.
“We know that [poor] sound doesn’t reflect the people themselves, but we really just can’t stop ourselves from having those impressions,” says Scholl. “We all understand intuitively that if we’re having difficulty being understood while we’re talking, then that’s bad. But we sort of think that as long as you can make out the words I’m saying, then that’s probably all fine. And this research showed in a somewhat surprising way, to a surprising degree, that this is not so.”
For organizations navigating hybrid work, training, and marketing, the stakes are high.
Vaveris points out that the pandemic was a watershed moment for audio technology. As classrooms, boardrooms, and conferences shifted online almost overnight, demand accelerated for advanced noise suppression, echo cancellation, and AI-driven processing tools that make meetings more seamless. Today, machine learning algorithms can strip away keyboard clicks or reverberation and isolate a speaker’s voice in noisy environments. That clarity underpins the accuracy of AI meeting assistants that can step in to transcribe, summarize, and analyze discussions.
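The noise suppression described above can be sketched with classic signal processing: estimate a per-frequency noise floor, then gate out spectral bins that fall below it. The sketch below is a minimal illustration of that idea under stated assumptions; the function name and parameters are invented for this example, and real products replace the hard threshold with machine-learning-trained masks.

```python
import numpy as np

def spectral_gate(signal, noise_sample, frame=512, hop=256, margin=2.0):
    """Crude spectral-gating noise suppressor (illustrative only).

    Estimates a per-frequency noise floor from a noise-only clip, then
    mutes any STFT bin of the signal whose magnitude stays below that
    floor times a safety margin. Shipping products use ML-trained masks
    instead of this hard threshold, but the pipeline shape is similar.
    """
    window = np.hanning(frame)

    def stft(x):
        n = 1 + (len(x) - frame) // hop
        frames = np.stack([x[i * hop: i * hop + frame] * window
                           for i in range(n)])
        return np.fft.rfft(frames, axis=1)

    noise_floor = np.abs(stft(noise_sample)).mean(axis=0)  # per-bin noise level
    spec = stft(signal)
    mask = (np.abs(spec) > margin * noise_floor).astype(float)  # 1 = keep bin
    frames = np.fft.irfft(spec * mask, n=frame, axis=1)

    # Overlap-add resynthesis with the same window.
    out = np.zeros(len(signal))
    for i, fr in enumerate(frames):
        out[i * hop: i * hop + frame] += fr * window
    return out
```

Fed a speech-plus-noise mixture and a short noise-only sample, this kind of gate passes the strong speech bins while silencing broadband hiss, which is why keyboard clicks and background chatter simply vanish from modern calls.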
The implications are rippling across industries. Clearer audio levels the playing field for remote participants, enabling inclusive collaboration. It empowers executives and creators alike to produce broadcast-quality content from the comfort of their home office. And it offers companies new ways to build credibility with customers and employees without the costly overhead of traditional production.
Looking forward, the convergence of audio innovation and AI promises an even more dynamic landscape: from real-time captioning in your native language, to audio filtering, to smarter meeting tools that capture not only what is said but how it’s said, to technologies that disappear into the background while amplifying the human voice at the center.
“There’s a future out there where this technology can really be something that helps bring people together,” says Vaveris. “Now that we have so many years of history with the internet, we know there’s usually two sides to the coin of technology, but there’s definitely going to be a positive side to this, and I’m really looking forward to it.”
In a world increasingly mediated by screens, sound may prove to be the most powerful connector of all.
This episode of Business Lab is produced in partnership with Shure.
Full Transcript
Megan Tatum: From MIT Technology Review, I’m Megan Tatum, and this is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace.
This episode is produced in partnership with Shure.
Our topic today is the power of sound. As our personal and professional lives become increasingly virtual, audio is emerging as an essential tool for everything from remote work to virtual conferences to virtual happy hour. While appearance is often top of mind in video conferencing and streaming, audio can be just as important, or even more so, not only to effective communication but potentially to brand equity for both the speaker and the company.
Two words for you: crystal clear.
My guests today are Erik Vaveris, VP of Product Management and Chief Marketing Officer at Shure, and Brian Scholl, Director of the Perception & Cognition Laboratory at Yale University.
Welcome, Erik and Brian.
Erik Vaveris: Thank you, Megan. And hello, Brian. Thrilled to be here today.
Brian Scholl: Good afternoon, everyone.
Megan: Fantastic. Thank you both so much for being here. Erik, let’s open with a bit of background. I imagine the pandemic changed the audio industry in some significant ways, given the pivot to our modern remote hybrid lifestyles. Could you talk a bit about that journey and some of the interesting audio advances that arose from that transformative shift?
Erik: Absolutely, Megan. That’s an interesting thing to think about now being here in 2025. And if you put yourself back in those moments in 2020, when things were fully shut down and everything was fully remote, the importance of audio quality became immediately obvious. As people adopted Zoom or Teams or platforms like that overnight, there were a lot of technical challenges that people experienced, but the importance of how they were presenting themselves to people via their audio quality was a bit less obvious. As Brian’s noted in a lot of the press that he’s received for his wonderful study, we know how we look on video. We can see ourselves back on the screen, but we don’t know how we sound to the people with whom we’re speaking.
If a meeting participant on the other side can manage to parse the words that you’re saying, they’re not likely to speak up and say, “Hey, I’m having a little bit of trouble hearing you.” They’ll just let the meeting continue. And if you don’t have a really strong level of audio quality, you’re asking the people that you’re talking with to devote way too much brainpower to just determining the words that you’re saying. And you’re going to be fatiguing to listen to. And your message won’t come across. In contrast, if you’re willing to take a little bit of time with your audio setup, you can really get across the full power of your message and the full power of who you are to your peers, to your employees, your boss, your suppliers, and of course your customers. Back in 2020, this very quickly became a marketing story that we had to tell immediately.
And I have to say, it’s so gratifying to see Brian’s research in the news because, to me, it was like, “Yes, this is what we’ve been experiencing. And this is what we’ve been trying to educate people about.” Having the real science to back it up means a lot. But from that, development on improvements to key audio processing algorithms accelerated across the whole AV industry.
I think, Megan and Brian, you probably remember hearing loud keyboard clicking when you were on calls and meetings, or people eating potato chips and things like that back on those. But you don’t hear that much today because most platforms have invested in AI-trained algorithms to remove undesirable noises. And I know we’re going to talk more about that later on.
But the other thing that happened, thankfully, as we got into the late spring and summer of 2020, was that educational institutions, especially universities, and also businesses realized that things were going to need to change quickly. Nothing was going to be the same. And universities realized that all classrooms were going to need hybrid capabilities for both remote students and students in the classroom. And that helped the market for professional AV equipment start to recover because we had been pretty much completely shut down in the earlier months. But that focus on hybrid meeting spaces of all types accelerated more investment and more R&D into making equipment and further developing those key audio processing algorithms for more and different types of spaces and use cases. And since then, we’ve really seen a proliferation of different types of unobtrusive audio capture devices based on arrays of microphones and the supporting signal processing behind them. And right now, machine-learning-trained signal processing is really the norm. And that all accelerated, unfortunately, because of the pandemic.
Megan: Yeah. Such an interesting period of change, as you say. And Brian, what did you observe and experience in academia during that time? How did that time period affect the work at your lab?
Brian: I’ll admit, Megan, I had never given a single thought to audio quality or anything like that, certainly until the pandemic hit. I was thrown into this, just like the rest of the world was. I don’t believe I’d ever had a single video conference with a student or with a class or anything like that before the pandemic hit. But in some ways, our experience in universities was quite extreme. I went on a Tuesday from teaching an in-person class with 300 students to being on Zoom with everyone suddenly on a Thursday. Business meetings come in all shapes and sizes. But this was quite extreme. This was a case where suddenly I’m talking to hundreds and hundreds of people over Zoom. And every single one of them knows exactly what I sound like, except for me, because I’m just speaking my normal voice and I have no idea how it’s being translated through all the different levels of technology.
I will say, part of the general rhetoric we have about the pandemic focuses on all the negatives and the lack of personal connection and nuance and the fact that we can’t see how everyone’s paying attention to each other. Our experience was a bit more mixed. I’ll just tell you one anecdote. Shortly after the pandemic started, I started teaching a seminar with about 20 students. And of course, this was still online. What I did is I just invited, for whatever topic we were discussing on any given day, I sent a note to whoever was the clear world leader in the study of whatever that topic was. I said, “Hey, don’t prepare a talk. You don’t have to answer any questions. But just come join us on Zoom and just participate in the conversation. The students will have read some of your work.”
Every single one of them said, “Let me check my schedule. Oh, I’m stuck at home for a year. Sure. I’d be happy to do that.” And that was quite a positive. The students got to meet a who’s who of cognitive science from this experience. And it’s true that there were all these technological difficulties, but that would never, ever have happened if we were teaching the class in real life. That would’ve just been way too much travel and airfare and hotel and scheduling and all of that. So, it was a mixed bag for us.
Megan: That’s fascinating.
Erik: Yeah. Megan, can I add?
Megan: Of course.
Erik: That is really interesting. And that’s such a cool idea. And it’s so wonderful that that worked out. I would say that working for a global company, we like to think that, “Oh, we’re all together. And we’re having these meetings. And we’re in the same room,” but the reality was we weren’t in the same room. And there hadn’t been enough attention paid to the people who were conferencing in, not speaking their native language, in a different time zone, maybe pretty deep into the evening, in some cases. And the remote work that everybody got thrown into immediately at the start of the pandemic did force everybody to start to think more about those types of interactions and put everybody on a level playing field.
And that was insightful. And that helped some people have stronger voices in the work that we were doing than they maybe did before. And it’s also led businesses really across the board, there’s a lot written about this, to be much more focused on making sure that participants from those who may be remote at home, may be in the office, may be in different offices, may be in different time zones, are all able to participate and collaborate on really a level playing field. And that is a positive. That’s a good thing.
Megan: Yeah. There are absolutely some positive side effects there, aren’t there? And it inspired you, Brian, to look at this more closely. And you’ve done a study that shows poor audio quality can actually affect the perception of listeners. So, I wonder what prompted the study, in particular. And what kinds of data did you gather? What methodology did you use?
Brian: Yeah. The motivation for this study was actually a real-world experience, just like we’ve been talking about. In addition to all of our classes moving online with no notice whatsoever, the same thing was true of our departmental faculty meetings. Very early on in the pandemic, we had one of these meetings. And we were talking about some contentious issue about hiring or whatever. And two of my colleagues, who I’d known very well and for many, many years, spoke up to offer their opinions. And one of these colleagues is someone who I’m very close with. We almost always see eye to eye. He was actually a former graduate student of mine once upon a time. And we almost always see eye to eye on things. He happened to be participating in that meeting from an old not-so-hot laptop. His audio quality had that sort of familiar tinny quality that we’re all familiar with. I could totally understand everything he was saying, but I found myself just being a little skeptical.
I didn’t find his points so compelling as usual. Meanwhile, I had another colleague, someone who I deeply respect, I’ve collaborated with, but we don’t always see eye to eye on these things. And he was participating in this first virtual faculty meeting from his home recording studio. Erik, I don’t know if his equipment would be up to your level or not, but he sounded better than real life. He sounded like he was all around us. And I found myself just sort of naturally agreeing with his points, which sort of was notable and a little surprising in that context. And so, we turned this into a study.
We played people a number of short audio clips, maybe like 30 seconds or so. And we had these being played in the context of very familiar situations and decisions. One of them might be like a hiring decision. You would have to listen to this person telling you why they think they might be a good fit for your job. And then afterwards, you had to make a simple judgment. It might be of a trait. How intelligent did that person seem? Or it might be a real-world decision like, “Hey, based on this, how likely would you be to pursue trying to hire them?” And critically, we had people listen to exactly the same sort of scripts, but with a little bit of work behind the scenes to affect the audio quality. In one case, the audio sounded crisp and clear. Recorded with a decent microphone. And here’s what it sounded like.
Audio Clip: After eight years in sales, I’m currently seeking a new challenge which will utilize my meticulous attention to detail and friendly professional manner. I’m an excellent fit for your company and will be an asset to your team as a senior sales manager.
Brian: Okay. Whatever you think of the content of that message, at least it’s nice and clear. Other subjects listened to exactly the same recording. But again, it had that sort of tinny quality that we’re all familiar with when people’s voices are filtered through a microphone or a recording setup that’s not so hot. That sounded like this.
Audio Clip: After eight years in sales, I’m currently seeking a new challenge which will utilize my meticulous attention to detail and friendly professional manner. I’m an excellent fit for your company and will be an asset to your team as a senior sales manager.
Brian: All right. Now, the thing that I hope you can get from that recording there is that although it clearly has this what we would call, as a technical term, a disfluent sound, it’s just a little harder to process, you are ultimately successful, right? Megan, Erik, you were able to understand the words in that second recording.
Megan: Yeah.
Erik: Mm-hmm.
Brian: And we made sure this was true for all of our subjects. We had them do word-for-word transcription after they made these judgments. And I’ll also just point out that this kind of manipulation clearly can’t be about the person themselves, right? You couldn’t make your voices sound like that in real-world conversation if you tried. Voices just don’t do those sorts of things. Nevertheless, in a way that sort of didn’t make sense, that was kind of irrational because this couldn’t reflect the person, this affected all sorts of judgments about people.
So, people were judged to be about 8% less hirable. They were judged to be about 8% less intelligent. We also did this in other contexts. We did this in the context of dateability, as if you were listening to a little audio clip from someone who was maybe interested in dating you, and then you had to make a judgment of how likely you would be to date this person. Same exact result. People were a little less dateable when their audio was a little more tinny, even though they were completely understandable.
The result that I thought was in some ways most striking came from one of the clips, about someone who had been in a car accident. It was a little narrative about what had happened in the car accident. And they were talking as if to the insurance agent. They were saying, “Hey, it wasn’t my fault. This is what happened.” And afterwards, we simply had people make a natural intuitive judgment of how credible they thought the person’s story was. And when it was recorded with high-end audio, these messages were judged to be about 8% more credible in this context. So those are our experiments. What it shows really is something about the power of perception. We know that that sort of sound doesn’t reflect the people themselves, but we really just can’t stop ourselves from having those impressions. And I don’t know about you guys, but, Erik, I think you’re right, that we all understand intuitively that if we’re having difficulty being understood while we’re talking, then that’s bad. But we sort of think that as long as you can make out the words I’m saying, then that’s probably all fine. And this research showed, in a somewhat surprising way and to a surprising degree, that this is not so.
Megan: It’s absolutely fascinating.
Erik: Wow.
Megan: From an industry perspective, Erik, what are your thoughts on those study results? Did it surprise you as well?
Erik: No, like I said, I found it very, very gratifying because we invest a lot in trying to make sure that people understand the importance of quality audio, but we kind of come about that intuitively. Our entire company is audio people. So of course, we think that. And it’s our mission to help other people achieve those higher levels of audio in everything that they do, whether you’re a minister at a church or you’re teaching a class or you’re performing on stage. When I first saw in the news about Brian’s study, I think it was the NPR article that just came up in one of my feeds. I read it and it made me feel like my life’s work has been validated to some extent. I wouldn’t say we were surprised by it, but it made a lot of sense to us. Let’s put it that way.
Megan: And how-
Brian: This is what we’re hearing. Oh, sorry. Megan, I was going to say this is what we’re hearing from a lot of the audio professionals as they’re saying, “Hey, you scientists, you finally caught up to us.” But of course-
Erik: I wouldn’t say it that way, Brian.
Brian: Erik, you’re in an unusual circumstance because you guys think about audio every day. When we’re on Zoom, look, I can see the little rectangle as well as you can. I can see exactly what I look like. I can check the lighting. I check my hair. We all do that every day. But I would say most people really, they use whatever microphone came with their setup, and never give a second thought to what they sound like because they don’t know what they sound like.
Megan: Yeah. Absolutely.
Erik: Absolutely.
Megan: Avoid listening to yourself back as well. I think that’s common. We don’t scrutinize audio as much as we should. I wonder, Erik, since the study came out, how are you seeing that research play out across industry? Can you talk a bit about the importance of strong, clear audio in today’s virtual world and the challenges that companies and employees are facing as well?
Erik: Yeah. Sure, Megan. That’s a great question. And studies kind of back this up: businesses understand that collaboration is the key to many things that we do. They know that that’s critical. And they are investing in making the experiences for the people at work better because of that knowledge, that intuitive understanding. But there are challenges. It can be expensive. You need solutions that people who walk into a room or join a meeting on their personal device are motivated to use, and can use, because they’re simple. You also have to overcome the barriers to investment. We in the AV industry have had to look a lot at how we can bring down the overall cost of ownership of setting up AV technology because, as we’ve seen, the prices of everything that goes into making a product are not coming down.
Simplifying deployment and management is critical. Beyond just audio technology, IoT technology and cloud technology for IT teams to be able to easily deploy and manage classrooms across an entire university campus or conference rooms across a global enterprise are really, really critical. And those are quickly evolving. And integrations with more standard common IT tools are coming out. And that’s one area. Another thing is just for the end user, having the same user interface in each conference room that is familiar to everyone from their personal devices is also important. For many, many years, a lot of people had the experience where, “Hey, it’s time we’re going to actually do a conference meeting.” And you might have a few rooms in your company or in your office area that could do that. And you walk into the meeting room. And how long does it take you to actually get connected to the people you’re going to talk with?
There was always a joke that you’d have to spend the first 15 minutes of a meeting working all of that out. And that’s because the technology was fragmented and you had to do a lot of custom work to make that happen. But these days, I would say platforms like Zoom and Teams and Google and others are doing a really great job with this. If you have the latest and greatest in your meeting rooms and you know how to join from your own personal device, it’s basically the same experience. And that is streamlining the process for everyone. Bringing down the costs of owning it so that companies can get to those benefits to collaboration is kind of the key.
Megan: I was going to ask if we could dive a little deeper into that kind of audio quality, the technological advancements that AI has made possible, which you did touch on slightly there, Erik. What are the most significant advancements, in your view? And how are those impacting the ways we use audio and the things we can do with it?
Erik: Okay. Let me try to break that down into-
Megan: That’s a big question. Sorry.
Erik: … a couple different sections. Yeah. No, and one that’s just so exciting. Machine-learning-based digital signal processing, or DSP, is here and is the norm now. If you think about the beginning of telephones and teleconferencing, just going way back, one of the initial problems you had whenever you tried to get something out of a dedicated handset onto a table was echo. And I’m sure we’ve all heard that at some point in our life. You need to have a way to cancel echo. But by the way, you also want people to be able to speak at the same time on both ends of a call. You get to some of those very rudimentary things. Machine learning is really supercharging those algorithms to provide better performance with fewer trade-offs, fewer artifacts in the actual audio signal.
Noise reduction has come a long way. I mentioned earlier on, keyboard sounds and the sounds of people eating, and how you just don’t hear that anymore, at least I don’t when I’m on conference calls. But only a few years ago, that could be a major problem. The machine-learning-trained digital signal processing is in the market now and it’s doing a better job than ever in removing things that you don’t want from your sound. We have a new dereverberation algorithm, so if you have a reverberant room with echoes and reflections that are getting into the audio signal, that can degrade the experience there. We can remove that now. Another thing, the flip side of that is that there’s also a focus on isolating the sound that you do want and the signal that you do want.
Microsoft has rolled out a voice print feature in Teams that allows you, if you’re willing, to provide them with a sample of your voice. And then whenever you’re talking from your device, it will take out anything else that the microphone may be picking up so that even if you’re in a really noisy environment outdoors or, say, in an airport, the people that you’re speaking with are going to hear you and only you. And it’s pretty amazing as well. So those are some of the things that are happening today and are available today.
Another thing that’s emerged from all of this is we’ve been talking about how important audio quality is to the people participating in a discussion, the people speaking, the people listening, how everyone is perceived, but a new consumer, if you will, of audio in a discussion or a meeting has emerged, and that is in the form of the AI agent that can summarize meetings and create action plans, do those sorts of things. But for it to work, a clean transcription of what was said is already table stakes. It can’t be garbled. It can’t miss key things. It needs to get it word for word, sentence for sentence throughout the entire meeting. And the ability to attribute who said what to the meeting participants, even if they’re all in the same room, is quickly upon us. And the ability to detect and integrate sentiment and emotion of the participants is going to become very important as well for us to really get the full value out of those kinds of AI agents.
So audio quality is as important as ever for humans, as Brian notes, in some ways more important because this is now the normal way that we talk and meet, but it’s also critical for AI agents to work properly. And it’s different, right? It’s a different set of considerations. And there’s a lot of emerging thought and work that’s going into that as well. And boy, Megan, there’s so much more we could say about this beyond meetings and video conferences. AI tools to simplify the production process. And of course, there’s generative AI of music content. I know that’s beyond the scope of what we’re talking about. But it’s really pretty incredible when you look around at the work that’s happening and the capabilities that are emerging.
Megan: Yeah. Absolutely. Sounds like there are so many elements to consider and work going on. It’s all fascinating. Brian, what kinds of emerging capabilities and use cases around AI and audio quality are you seeing in your lab as well?
Brian: Yeah. Well, I’m sorry that Brian himself was not able to be here today, but I’m an AI agent.
Megan: You got me for a second there.
Brian: Just kidding. The fascinating thing that we’re seeing from the lab, from the study of people’s impressions is that all of this technology that Erik has described, when it works best, it’s completely invisible. Erik, I loved your point about not hearing potato chips being eaten or rain in the background or something like that. You’re totally right. I used to notice that all the time. I don’t think I’ve noticed that recently, but I also didn’t notice that I haven’t noticed that recently, right? It just kind of disappears. The interesting thing about these perceptual impressions, we’re constantly drawing intuitive conclusions about people based on how they sound. And that might be a good thing or a bad thing when we’re judging things like trustworthiness, for example, on the basis of a short audio clip.
But clearly, some of these things are valid, right? We can judge the size of someone or even of an animal based on how they sound, right? A chihuahua can’t make the sound of a lion. A lion can’t make the sound of a chihuahua. And that’s always been true because we’re producing audio signals that go right into each other’s ears. And now, of course, everything that Erik is talking about, that’s not true. It goes through all of these different layers of technology increasingly fueled by AI. But when that technology works the best way, it’s as if it isn’t there at all and we’re just hearing each other directly.
Erik: That’s the goal, right? That it’s seamless open communication and we don’t have to think about the technology anymore.
Brian: It’s a tough business to be in, I think, though, Erik, because people have to know what’s going on behind the surface in order to value it. Otherwise, we just expect it to work.
Erik: Well, that’s why we try to put the logo of our products on the side of them so they show up in the videos. But yeah, it’s a good point.
Brian: Very good. Very good.
Erik: Yeah.
Megan: And we’ve talked about virtual meetings and conversations quite a bit, but there’s also streamed and recorded content, which are increasingly important at work as well. I wondered, Erik, if you could talk a bit about how businesses are leveraging audio in new ways for things like marketing campaigns and internal upskilling and training and areas like that?
Erik: Yeah. Well, one of the things I think we’ve all seen in marketing is that not everything is a high production value commercial anymore. And there’s still a place for that, for sure. But people tend to trust influencers that they follow. People search on TikTok, on YouTube for topics. Those can be the place that they start. And as technology’s gotten more accessible, not just audio, but of course, the video technology too, content creators can produce satisfying content on their own or with just a couple of people with them. And Brian’s study shows that it doesn’t really matter what the origins of the content are for it to be compelling.
For the person delivering the message to be compelling, the audio quality does have to hit a certain level. But because the tools are simpler to use and you need fewer things to connect and pull together a decent production system, creator-driven content is becoming more and more integral to a marketing campaign. So not just what they might post on their Instagram page or on LinkedIn, for example, but for us as a brand to be able to take that content and actually use it in paid media and things like that is all entirely possible because of the overall quality of the content. That’s a trend that’s been in process, I would say, really since the advent of podcasts. But it’s been an evolution, and it’s come a long, long way.
Another thing, and this is really interesting because it hits home personally: I remember when I first entered the workforce, and I hope I’m not showing my age too badly here, but I remember the word processing department. You would write a memo down on a piece of paper, give it to the word processing department, and somebody would type it up for you. That was a thing. These days, we’re seeing more and more video production, audio included, shift to the actual producers of the content.
In my company, at Shure, we make videos for different purposes, to talk about different initiatives or product launches or things that we’re doing, just for internal use. Right now everybody, including our CEO, makes these videos at their own desk. She has a little software tool, and she can show a PowerPoint and herself and speak to things. With a very limited amount of editing, you can put that out there. I’ve seen friends and colleagues in very high-level roles at other companies doing their own production, too. Being able to buy a very high-quality microphone with really advanced signal processing built right in, plug it in via USB, and have it be handled as simply as any consumer device has made it possible to do really useful production, where you’re actually going to sound good and get your message across, but without having to make such a big production out of it, which is kind of cool.
Megan: Yeah. Really democratizes access to sort of creating high quality content, doesn’t it? And of course, no technology discussion is complete without a mention of return on investment, particularly nowadays. Erik, what are some ways companies can get returns on their audio tech investments as well? Where are the most common places you see cost savings?
Erik: Yeah. Well, we collaborated on a study with IDC Research. And they came up with some really interesting findings on this. And one of them was, no surprise, two-thirds or more of companies have taken action on improving their communication and collaboration technology, and even more have additional or initial investments still planned. But the ROI of those initiatives isn’t really tied to the initiative itself. It’s not like when you come out with a new product, you look at how that product performs, and that’s the driver of your ROI. The benefits of smoother collaboration come in the form of shorter meetings, more productive meetings, better decision-making, faster decision-making, stronger teamwork. And so to build an ROI model, what IDC concluded was that you have to build your model to account for those advantages really across the enterprise or across your university, or whatever it may be, and kind of up and down the different set of activities where they’re actually going to be utilized.
So that can be complex. Quantifying things can always be a challenge. But like I said, companies do seem to understand this. And I think that’s because, this is just my hunch, but because everybody, including the CEO and the CFO and the whole finance department, uses and benefits from collaboration technology too. Perhaps that’s one reason why the value is easier to convey. Even if they have not taken the time to articulate things like we’re doing here today, they know when a meeting is good and when it’s not good. And maybe that’s one of the things that’s helping companies to justify these investments. But it’s always tricky to do ROI on projects like that. But again, focusing on the broader benefits of collaboration and breaking it down into what it means for specific activities and types of meetings, I think, is the way to go about doing that.
Megan: Absolutely. And Brian, what kinds of advancements are you seeing in the lab that perhaps one day might contribute to those cost savings?
Brian: Well, I don’t know anything about cost savings, Megan. I’m a college professor. I live a pure life of the mind.
Megan: Of course.
Brian: ROI does not compute for me. No, I would say we are in an extremely exciting frontier right now because of AI and many different technologies. The studies that we talked about earlier, in one sense, they were broad. We explored many different traits from dating to hiring to credibility. And we isolated them in all sorts of ways we didn’t talk about. We showed that it wasn’t due to overall affect or pessimism or something like that. But in those studies, we really only tested one very particular set of dimensions along which an audio signal can vary, which is some sort of model of clarity. But in reality, the audio signal is so multi-dimensional. And as we’re getting more and more tools these days, we can not only change audio along the lines of clarity, as we’ve been talking about, but we can potentially manipulate it in all sorts of ways.
We’re very interested in pushing these studies forward and in exploring how people’s sort of brute impressions that they make are affected by all sorts of things. Meg and Erik, we walk around the world all the time making these judgments about people, right? You meet someone and you’re like, “Wow, I could really be friends with them. They seem like a great person.” And you know that you’re making that judgment, but you have no idea why, right? It just seems kind of intuitive. Well, in an audio signal, when you’re talking to someone, you can think of, “What if their signal is more bass heavy? What if it’s a little more treble heavy? What if we manipulate it in this way? In that way?”
When we talked about the faculty meeting that motivated this whole research program, I mentioned that my colleague, who was speaking from his home recording studio, didn’t just sound as clear as he does in real life. He sounded better than real life. He sounded like he was all around us. What is the implication of that? There are so many dimensions of an audio signal that we’re just now able to readily control and manipulate, and it’s going to be very exciting to see how all of these things impact our impressions of each other.
Megan: And there may be some overlap with this as well, but I wondered if we could close with a future forward look, Brian. What are you looking forward to in emerging audio technology? What are some exciting opportunities on the horizon, perhaps related to what you were just talking about there?
Brian: Well, we’re interested in studying this from a scientific perspective. Erik, you talked about the word processing department from when you started. When I started doing this science, we didn’t have a word processing department. We had a stone tablet department. But I hear tell that the current generation, when they send photos back and forth to each other, as a matter of course apply all sorts of filters-
Erik: Oh, yes.
Brian: … to those video or just photographic signals. We’re all familiar with that. It hasn’t quite happened with audio signals yet, but I think that’s coming as well. You can imagine recording yourself saying a little message and then filtering it this way or that way. That’s going to become the Wild West in terms of the kinds of impressions we make on each other, especially if and when you don’t know that those filters have been operating in the first place.
Megan: That’s so interesting. Erik, what are you looking forward to in audio technology as well?
Erik: Well, I’m still thinking about what Brian said.
Megan: Yeah. That’s-
Erik: That’s very interesting.
Megan: It’s terrifying.
Erik: I have to go back again. I’ll go back to the past, maybe 15 to 20 years. I remember at work we had meeting rooms with the starfish-style conference phones in the middle of the table. We would have international meetings there with the partners selling our products in different countries, including Japan and China, and with the people in our own company in those countries. We knew the time zones were bad for them. We knew that English wasn’t their native language, and we tried to be as courteous as possible with written materials and things like that. But then I went over to China and had to actually be on the other end of one of those calls. I’m a native English speaker, or at least a native speaker of the Chicago dialect of American English. Really understanding how challenging it was for them to participate in those meetings just hit me right between the eyes.
We’ve come so far, which is wonderful. But I think of a scenario, and this is not far off, many companies are working on it right now, where not only can you get real-time captioning in your native language, no matter what language the speaker is using, but you can actually hear that speaker’s own voice rendered in your native language.
I’m never going to be a fluent Japanese or Chinese speaker, that’s for sure. But I love the thought that I could actually talk with people and they could understand me as though I were speaking their native language, and that they could communicate with me and I could understand them in the way that they want to be understood. I think there’s a future out there where this technology can really help bring people together. Now that we have so many years of history with the internet, we know there are usually two sides to the coin of technology, but there’s definitely going to be a positive side to this, and I’m really looking forward to it.
Megan: Gosh, that sounds absolutely fascinating. Thank you both so much for such an interesting discussion.
That was Erik Vaveris, the VP of product management and chief marketing officer at Shure, and Brian Scholl, director of the Perception & Cognition Laboratory at Yale University, whom I spoke with from Brighton in England.
That’s it for this episode of Business Lab. I’m your host, Megan Tatum, a contributing editor at Insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology, and you can find us in print, on the web, and at events each year around the world. For more information about us and the show, please check out our website at technologyreview.com.
This show is available wherever you get your podcasts. If you enjoyed this episode, we hope you’ll take a moment to rate and review us. Business Lab is a production of MIT Technology Review. And this episode was produced by Giro Studios. Thanks for listening.
This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written entirely by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.