
As a Johns Hopkins University alumnus, I’ve always appreciated the institution’s commitment to rigorous research that challenges conventional wisdom. So when I came across this recent study from Johns Hopkins that revealed doctors who use AI are viewed as less competent by their peers, I wasn’t just intrigued—I was concerned. This finding should serve as a wake-up call for every C-suite executive pushing AI transformation in their organization.
The competence penalty is real, and it’s not confined to operating rooms.
The Paradox of Progress
The Johns Hopkins study, published in npj Digital Medicine, exposed a troubling dynamic: physicians who rely on generative AI for medical decision-making face considerable skepticism from fellow clinicians, who equate AI use with a lack of clinical skill. The more dependent on AI a physician appeared, the more harshly peers judged them. Ironically, these same clinicians acknowledged that AI improves the accuracy of clinical assessments.
Let that paradox sink in. The technology is recognized as beneficial, yet those who use it are penalized socially.
This isn’t a healthcare problem. It’s an organizational psychology problem that’s playing out in boardrooms, trading floors, legal departments, and engineering teams across every industry.
The Social Tax on Early Adopters
While Fortune 500 leaders have invested billions in AI infrastructure, training programs, and pilot initiatives, many have overlooked a critical variable: the social dynamics that determine whether employees will actually use these tools—and more importantly, whether they’ll admit to using them.
In my conversations with executives across industries, a pattern emerges. The official narrative celebrates AI adoption. The unofficial reality is more complex. Engineers worry that using AI coding assistants signals they’re not “real” programmers. Financial analysts fear that leveraging AI for market research will suggest they lack analytical depth. Marketing professionals hesitate to acknowledge AI-assisted content creation, concerned it will undermine perceptions of their creativity.
The competence penalty identified in the Johns Hopkins research is alive and well beyond medicine.
Understanding the Psychology of Resistance
Professor Tinglong Dai, co-author of the Johns Hopkins study, observed that “human psychology remains the ultimate variable” in the age of AI. This insight is crucial for leaders attempting to drive organizational change.
The resistance to AI adoption isn’t primarily about technology literacy or access. It’s rooted in deeply held beliefs about competence, autonomy, and professional identity. In many organizational cultures, expertise is synonymous with independence. Relying on external tools—especially ones that might be perceived as doing the “thinking” for you—can be interpreted as a weakness rather than a strategic advantage.
This creates a vicious cycle. Early adopters who could demonstrate AI’s value stay quiet to protect their reputations. Their silence reinforces the stigma. Meanwhile, competitors who create cultures of transparent AI use pull ahead.
The Cost of Stigma
The stakes are higher than hurt feelings or workplace politics. When social barriers prevent AI adoption, organizations pay a tangible price:
Competitive disadvantage: While your team debates the optics of AI use, competitors are compounding gains in productivity, accuracy, and innovation. The Johns Hopkins study found that framing AI as a “second opinion” partially improved perceptions but didn’t eliminate the stigma. Half-measures lead to half-adoption.
Talent flight: Your best performers—the ones who understand how to leverage AI as a force multiplier—will migrate to organizations that celebrate rather than penalize this capability. They’ll move to cultures where augmented intelligence is the norm, not a liability.
Innovation stagnation: If employees fear social consequences for experimenting with AI tools, they’ll default to familiar approaches. The learning curve your organization needs to climb gets steeper while others are already at the summit.
Fragmented implementation: Without cultural acceptance, AI adoption becomes a patchwork of quiet individual use rather than systematic organizational advantage. You get islands of efficiency instead of enterprise transformation.
Rewriting the Social Contract
The solution isn’t technological—it’s cultural. Leaders must actively reshape the social dynamics around AI use:
Model from the top: When executives publicly discuss how they use AI in decision-making, it normalizes the behavior. If the CEO talks about using AI for strategic analysis, middle managers feel less exposed doing the same. Leadership visibility matters more than policy memos.
Redefine competence: Competence in the AI era means knowing when and how to augment human judgment with machine capabilities. Organizations should evaluate employees not on whether they use AI, but on the quality of outcomes they deliver and the judgment they apply in using available tools. Make AI fluency a competency requirement, not a secret advantage.
Create psychological safety: The Johns Hopkins research showed that even positioning AI as verification didn’t fully eliminate stigma. True adoption requires environments where experimentation is encouraged and failure is treated as data. People need to feel safe not just using AI, but discussing both its capabilities and limitations openly.
Establish transparent standards: Rather than leaving AI use in a gray zone, create clear guidelines about appropriate applications. This removes the uncertainty that drives underground behavior. When people know the boundaries, they can operate confidently within them.
Celebrate integrated workflows: Recognize and reward teams that effectively combine human and machine intelligence. Share case studies internally. Make heroes of those who achieve breakthrough results through augmented approaches. The stories you tell shape the behaviors you get.
The Healthcare Lesson for All Industries
The medical field offers a particularly instructive case because healthcare professionals have traditionally been evaluated on their individual clinical judgment. The idea of a physician as an autonomous expert is central to professional identity. AI challenges this archetype.
Yet as Dr. Risa Wolf, another author of the Johns Hopkins study, noted, AI has the potential to “complement—not replace—clinical judgment, ultimately strengthening decision making.” The same principle applies whether you’re diagnosing patients, analyzing markets, designing products, or optimizing supply chains.
The question isn’t whether AI will be part of professional work—that ship has sailed. The question is whether your organization will allow social stigma to slow adoption while competitors sprint ahead.
A Strategic Imperative
For Fortune 500 leaders, addressing the social barriers to AI adoption isn’t a soft skills exercise—it’s a strategic imperative. The organizations that win the next decade won’t just have the best AI tools. They’ll have cultures that enable widespread, transparent, and increasingly sophisticated use of those tools.
The Johns Hopkins research revealed that clinicians recognize AI’s value for improving accuracy, even as they penalize peers who use it. Your organization likely faces a similar disconnect. The technology is ready. The question is whether your culture is.
The competence penalty is real, but it’s not inevitable. It’s a function of organizational culture, and culture is something leaders can shape. The time to act isn’t when your competitors have already normalized AI augmentation. It’s now, while you still have the opportunity to define what competence looks like in the age of intelligence augmentation.
As an alumnus of an institution that has consistently pushed the boundaries of what’s possible through rigorous inquiry, I’m reminded that the most important discoveries often challenge our assumptions about ourselves. The Johns Hopkins AI study is one such discovery. The question is: What will you do with it?
The research discussed was funded by a 2022 Johns Hopkins Discovery Award and published in npj Digital Medicine. The study involved 276 practicing clinicians from a major hospital system and was led by researchers at Johns Hopkins Carey Business School and Johns Hopkins School of Medicine.