AI TRANSFORMATION · LATERAL THINKING · April 29, 2026 · 13 min read

We Are Asking the Wrong Question About AI Readiness. Here Is the One That Actually Matters.

Every AI vendor asks whether your clinic is ready for their tool. Nobody asks whether their tool is ready for your clinic. Edward de Bono spent his career demonstrating that the most consequential problems remain unsolved not because they are difficult but because we approach them from the wrong angle. Clinical AI adoption in independent practices is one of those problems. And the lateral thinking reframe changes everything.

Elevare Health AI Inc.
HIT & AI Transformation Consulting, Cedar Falls, Iowa

Edward de Bono trained as a physician at Oxford and Cambridge before spending fifty years developing frameworks for thinking about thinking. His central observation was simple and devastating. Many problems remain unsolved not because they are inherently difficult but because people approach them from overly rigid angles. Education trains people to be excellent at vertical thinking — step-by-step reasoning, analysis, and proof — but gives almost no tools for deliberately restructuring perception. Where logic asks how do we optimize this process, lateral thinking asks why does this process exist at all. Where logic asks what is the best option, lateral thinking asks what if the opposite were true.[1]

The clinical AI adoption problem in independent practices is a lateral thinking problem disguised as a technology problem. Every stakeholder in the conversation is applying vertical thinking to it. The vendor asks whether the practice is ready for the tool. The consultant asks how to improve adoption rates. The practice administrator asks why physicians are resisting. The physician asks whether the tool is worth the disruption.

Every one of those questions digs deeper into the same hole. None of them challenges the premise that produced the hole in the first place.

// THE LATERAL THINKING REFRAME

The question is not whether your clinic is ready for AI. The question is whether the AI is ready for your clinic. That single inversion shifts the burden of proof from the practice to the vendor. It reframes the readiness conversation from a deficiency assessment of the clinic to an adequacy assessment of the tool. Every vendor asks the first question. Nobody asks the second one. That asymmetry is where most independent practice AI deployments fail before they begin.

// VERTICAL THINKING. THE WRONG QUESTION.
Is our clinic ready for AI?
  • Assumes the tool is the constant and the clinic is the variable
  • Places the burden of proof on the practice
  • Evaluates clinic deficiencies against vendor requirements
  • Produced by every AI vendor assessment ever written
  • Digs deeper into the same hole
// LATERAL THINKING. THE RIGHT QUESTION.
Is the AI ready for our clinic?
  • Treats the clinic as the constant and the tool as the variable
  • Places the burden of proof on the vendor
  • Evaluates tool adequacy against clinic reality
  • Asked by nobody in the current market
  • Digs in a completely different place

Why Vertical Thinking Keeps Producing the Same Failed Deployments

The data on clinical AI adoption in independent practices in 2026 is consistent and discouraging. Healthcare leaders across the industry agree that organizations must move beyond AI awareness to seamless integration of AI into daily workflows. The vision is clear. The gap between that vision and reality is where 86 percent of healthcare organizations find themselves. They believe AI is essential. They are not equipped to deploy it effectively.[2]

That 86 percent gap is not a technology gap. It is a thinking gap. The technology works. The frameworks for thinking about deploying it are inadequate. And the inadequacy is structural. Vertical thinking about AI adoption produces the same categories of solution every time. More training. Better change management. Stronger physician champions. Clearer ROI documentation. More vendor support during implementation.

These solutions are not wrong. They are incomplete. They are the solutions that vertical thinking generates from within the existing frame of the problem. They make the current approach work slightly better. They do not challenge whether the current approach is the right one.

Many AI use cases in healthcare are poorly defined. Developers and data scientists are building models opportunistically rather than identifying a problem that needs to be solved. The data science and developer community needs to find common working ground with frontline clinicians. Trust is the central challenge, and only rigorous evaluation and the scaling of validated ideas can address it.[3]

The trust problem is a lateral thinking problem. Trust is not built by better training. Trust is built when the tool demonstrates that it was designed around the physician's actual workflow rather than the vendor's development assumptions. That requires asking the right question before building the tool. And the right question is not what does AI enable but what does this specific physician in this specific clinical context actually need from an AI tool to trust it enough to use it consistently.

Four De Bono Tools Applied to Clinical AI and HIPAA

De Bono produced over 60 books in 40 languages focused on creative problem-solving methods. He was a pioneer in promoting alternative perspectives and emphasized the importance of divergent thinking techniques that encourage creating varied solutions rather than following a singular conventional path. Lateral thinking, he argued, is not a mysterious talent but a skill that can be practiced by deliberately disrupting habitual patterns of thought.[4]

Here are the four tools most directly applicable to the clinical AI adoption and HIPAA compliance challenges facing independent practices in 2026.

// DE BONO TOOL 1
The Provocation
State a deliberately absurd or impossible provocation then use it as a stepping stone to a practical idea. The provocation is not meant to be implemented. It is meant to disrupt the gravitational pull of conventional thinking.
APPLIED TO HIPAA COMPLIANCE:
Po: What if staff received no HIPAA training at all? Stepping stone: If there were no training, the only thing preventing violations would be workflow design. The compliant pathway would have to be easier than the non-compliant one. Practical insight: Invest in workflow redesign that makes compliance the path of least resistance before investing in training to overcome a poorly designed workflow.
// DE BONO TOOL 2
The Challenge
Ask why we do it this way. Not to criticize. To open the assumption to examination. The answer is almost always because we always have or because it seemed obvious. Both are lateral thinking entry points.
APPLIED TO AI READINESS:
Challenge: Why does AI readiness happen before deployment? Conventional answer: To make sure we are ready before investing. Lateral insight: What if readiness is better assessed through a structured 30-day pilot than a pre-deployment checklist? What if the most accurate readiness data comes from observing the system under real conditions rather than predicting its behavior theoretically?
// DE BONO TOOL 3
Concept Extraction
Take the concept behind an existing solution in one domain and apply it in a completely different context. The concept travels. The specific implementation stays behind.
APPLIED TO AI GOVERNANCE:
Aviation uses a pre-flight checklist every pilot completes before every flight regardless of experience. Not a sign of incompetence. A shared commons protection mechanism. Concept extracted: mandatory brief shared verification before high-stakes individual action. Applied to clinical AI: a 60-second physician AI note verification checklist before signing notes containing new medications or diagnosis changes. The most experienced physician still does the checklist.
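The extracted concept, mandatory brief verification before high-stakes action, can be sketched as a simple software gate. Everything below is an illustrative sketch with hypothetical names and checklist items; it is not any vendor's EHR API or a prescribed clinical protocol.

```python
# Hypothetical sketch: a pre-sign verification gate for AI-drafted notes.
# All names and checklist items are illustrative assumptions.
from dataclasses import dataclass, field

CHECKLIST = [
    "Medication list matches what was discussed in the visit",
    "Diagnosis changes reflect the physician's actual assessment",
    "No findings or tests appear that the visit did not produce",
    "Patient-identifying details are correct",
]

@dataclass
class DraftNote:
    has_new_medication: bool
    has_diagnosis_change: bool
    verified_items: set = field(default_factory=set)

def requires_checklist(note: DraftNote) -> bool:
    # The gate applies only to high-stakes notes, per the concept above:
    # new medications or diagnosis changes.
    return note.has_new_medication or note.has_diagnosis_change

def can_sign(note: DraftNote) -> bool:
    # Every item must be verified before signing, regardless of the
    # physician's experience level.
    if not requires_checklist(note):
        return True
    return note.verified_items == set(CHECKLIST)
```

The design choice mirrors the aviation concept: the gate is unconditional for the defined high-stakes cases, so seniority never becomes an exemption.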
// DE BONO TOOL 4
The Six Thinking Hats
Separate a discussion into six deliberate thinking modes, worn one at a time by the whole group, so that parallel exploration replaces adversarial argument. Each hat is a direction of attention, not a personality type.
APPLIED TO AI VENDOR EVALUATION:
White hat: What do we actually know about this tool's performance in practices like ours? Red hat: How do our physicians honestly feel about this tool? Black hat: What is the worst realistic outcome? Yellow hat: What is the genuine best case? Green hat: What unconventional deployment approaches could we try? Blue hat: Who is facilitating this decision and when will we commit?
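One way to hold a Six Hats session to its own discipline is to record it as structured data, so the decision cannot close until every perspective has a captured entry. The sketch below is a hypothetical illustration of that idea, not part of any published De Bono method or real meeting tool.

```python
# Hypothetical sketch: a Six Hats session record that blocks the
# decision until all six perspectives have been captured.
HATS = ("white", "red", "black", "yellow", "green", "blue")

def hats_complete(session: dict) -> bool:
    """Decision-ready only when every hat has a non-empty entry."""
    return all(session.get(hat, "").strip() for hat in HATS)

def missing_hats(session: dict) -> list:
    """Which perspectives still need airtime before the blue hat
    can close the session."""
    return [hat for hat in HATS if not session.get(hat, "").strip()]
```

A facilitator could run `missing_hats` at the end of a vendor-evaluation meeting; an empty list is the signal that the group explored in parallel rather than argued from one position.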

The Dominant Idea Nobody Is Challenging in Clinical AI

De Bono identified what he called dominant ideas. Assumptions so deeply embedded in a field that nobody recognizes them as assumptions. They feel like facts. They are invisible precisely because they are so widely held.

In clinical AI the dominant idea is this: AI adoption is a change management problem.

This dominant idea is the foundation of every AI adoption framework in healthcare. Physician resistance is a change management challenge. Low adoption rates are a change management failure. The solution is always better change management. More training. Stronger champions. More visible leadership support.

The lateral thinking challenge: what if AI adoption is not a change management problem at all?

What if it is a workflow design problem? What if physician resistance is not resistance to change but resistance to a tool that was designed without understanding the workflow it was supposed to improve? The most forward-thinking organizations will begin exploring AI safe zones: controlled environments where providers and administrative staff can safely experiment with approved AI tools and datasets. The emphasis is on safe experimentation, not managed adoption. The difference is significant. Experimentation assumes the tool might need to change. Managed adoption assumes the physician needs to change.[6]

That distinction is a lateral thinking insight. Safe zones for experimentation treat the physician as the expert on workflow reality and the tool as the thing being evaluated. Managed adoption treats the tool as the answer and the physician as the obstacle.

When you reframe AI adoption from a change management problem to a workflow design problem you immediately generate different solutions. Not how do we get physicians to use the tool but how do we understand what physicians actually need and design the deployment around that reality. The first question produces adoption coaching programs. The second question produces tools that physicians actually want to use.

The Lateral Thinking Move That Turns Resistance Into a Resource

Here is the single most powerful lateral thinking move available in a clinical AI deployment that is struggling with adoption.

Stop treating the most resistant physician as the biggest obstacle. Start treating them as the most valuable data source.

The physician who refuses to use the ambient AI documentation tool has thought most carefully about it. Their objections are the most fully formed in the practice. Their concerns about workflow disruption, documentation accuracy, patient trust implications, and liability exposure are the most thoroughly developed concerns anyone in the practice holds.

That physician is not an obstacle to successful deployment. They are an involuntary quality assurance function.

// LATERAL THINKING IN PRACTICE: THE RESISTANT PHYSICIAN SCENARIO
1
The vertical thinking response
More training. Peer pressure. Leadership mandate. The resistant physician eventually complies or becomes an outlier who is managed around. Their concerns are noted and dismissed. The deployment proceeds. Adoption is measured at 67 percent and declared a success.
2
The lateral thinking response
Schedule a 45-minute structured interview with the resistant physician. Not to convince them. To document every objection with full specificity. Which workflows break with this tool. Which note types are least accurate. What the patient disclosure concern actually is. What the liability exposure looks like from their perspective as the signing physician.
3
Use the objections as the redesign brief
Take every documented objection to the vendor as a redesign requirement. Not a complaint. A specification. The workflow that breaks needs to be addressed before go-live not after. The note type that is least accurate needs additional review protocol. The patient disclosure concern needs a documented consent process.
4
Invite the resistant physician to validate the redesign
Show them the redesigned workflow. Ask whether it addresses their concerns. Their buy-in follows naturally from their authorship of the solution. They are no longer being asked to accept a tool someone else designed. They are being asked to validate a tool they helped design. The psychology of authorship is completely different from the psychology of compliance.
5
The emergent outcome
The previously resistant physician becomes the most credible adoption advocate in the practice. Not because they were convinced. Because they were heard. And because the tool was changed in response to what they said. That is the difference between 67 percent adoption that management measures and 95 percent adoption that the clinical team sustains.

Where Lateral Thinking and Systems Thinking Meet

Lateral thinking and systems thinking are not competing frameworks. They are sequential ones that operate at different stages of the same problem-solving process.

Systems thinking reveals the structure of the problem. It maps the feedback loops, identifies the stocks and flows, traces the delays, and shows the relationships between parts that are producing the outcome nobody wanted. Systems thinking tells you what is happening and why the system is producing it.

Lateral thinking generates solutions the system would never produce from within itself. It challenges the dominant ideas that built the structure in the first place. It moves sideways into solution spaces that vertical thinking from within the system cannot reach. Lateral thinking tells you what to do differently when the obvious solution has already failed.

The integration for an independent practice looks like this:

  • Systems thinking first: Map the clinic as a complex adaptive system before proposing any AI deployment. Identify the feedback loops the tool will create. Name the stocks it will affect. Find the downstream bottlenecks. Reveal the commons that need protection.
  • Lateral thinking second: Challenge every obvious solution the systems map suggests. Apply the provocation tool to each structural problem. Run a Six Hats session with the practice leadership team before the vendor demo becomes a signed contract.
  • Systems validation third: Run the lateral thinking solutions back through the systems framework. What feedback loops do they create? What stocks do they affect? What unintended consequences do they risk? Which solutions strengthen the system and which create new problems in different places?

AI will soon be able to deliver clinician-grade care under the direction of a clinician. A key barrier to adoption is that reimbursement is not designed for clinical AI agents. Time-based billing structures penalize physicians for using AI tools that enhance productivity. The current payment model risks bypassing physician oversight and fragmenting care.[7] That structural barrier is a systems thinking finding. The lateral thinking response is to challenge the dominant idea that adoption must happen within the current reimbursement structure, and to ask instead whether the reimbursement structure should change to accommodate the adoption that clinical reality requires.

One response works within the system. The other challenges the system. Both are necessary. Neither alone is sufficient.

Three Questions That Change the Discovery Call

When a practice administrator contacts you about AI readiness or HIPAA compliance the vertical thinking consultant asks: what are you trying to achieve and what is your timeline and budget?

Those are not bad questions. They are incomplete ones. They accept the frame the practice administrator brings to the conversation. Lateral thinking opens the frame before accepting it.

Three questions that signal immediately you are operating at a different level:

// THE THREE LATERAL THINKING DISCOVERY QUESTIONS

Question 1: What have you already tried? Not to learn the history but to find the dominant idea driving the approach. Every failed solution reveals an assumption that has not yet been challenged. The pattern of what has been tried and failed is a map of the vertical thinking that has already been applied.

Question 2: What would have to be true for this problem to not exist? This question forces the imagination backward into the conditions that would prevent the problem rather than forward into the solutions that treat it. The answer almost always reveals a structural design choice that could have been made differently and still can be.

Question 3: Who in your practice most disagrees with the current approach and what exactly do they say? The dissenter is the lateral thinker the practice already has. Their objections are the most valuable information in the building. Collecting them before proposing a solution is the difference between a consultant who confirms the client's existing thinking and one who expands it.

De Bono's central claim was not that logic is flawed but that without tools for lateral movement even the most intelligent thinkers can remain trapped in perfectly reasoned dead ends. Independent practice AI adoption in 2026 is full of perfectly reasoned dead ends. The infrastructure investment has been made. The vendor relationships are in place. The physician champions have been identified. The training has been delivered. The adoption is still at 45 percent.

The solution is not more of what has already been tried. The solution is a lateral move into a completely different part of the problem space. A space that vertical thinking cannot reach because vertical thinking by definition stays within the existing frame.

Lateral thinking finds the door in the wall that everyone else has been walking past because they were too busy reinforcing the wall.

Ready to Ask the Question Nobody Else Is Asking?

Our free AI Readiness Scorecard applies both systems thinking and lateral thinking to your clinic's specific situation. We assess not just whether your infrastructure is ready but whether the AI tools you are considering are ready for your clinical reality. Free. 10 minutes. Instant results.

Want to bring lateral thinking and systems thinking to your next AI deployment decision?
Book a free 30-minute discovery call here.

// Sources and References