Unraveling the Potential of AI in Medical Practice: Building Trust

“Trust is like the air we breathe – when it’s present, nobody really notices; when it’s absent, everybody notices.” – Warren Buffett

In the age of rapid technological advancements, artificial intelligence (AI) has emerged as a powerful tool that can potentially transform the medical industry. The dialogue surrounding AI’s integration into healthcare is a testament to the dynamism of this field, with thought leaders and professionals passionately discussing its possibilities.

credit: gapingvoid.com

I saw an example of this only last week in an OMA Connects online forum hosted by the Ontario Medical Association. In one of the threads, I was incredibly impressed by a group of doctors from all over Ontario passionately discussing and sharing ideas on how best to adopt an AI-based charting tool in their offices, one that is accurate, open-source, and free. I saw joy in that conversation, and it brought me back to my early days in practice, when collaboration and learning from my peers were a big part of my world!

I am amazed at how many doctors are fascinated with the idea of bringing this new technology into their offices. This may be less about the coolness of the product and more about a desperate search for any tool that lightens the chore of charting and paperwork, but they are interested nonetheless! This is wonderful at a time when many of our collective conversations point toward burnout and despondency. However, this positivity may quickly sour if we find that the software leads us astray. We trust some of the tools before we see the evidence, and I am not sure these nascent products fully deserve our trust yet.

In this post, I will delve into ways that software engineers can build and maintain the trust of their customers. I wrote about trust in medicine a few years ago, and the concepts have not changed. Trust is built by being “trustworthy”, and trustworthiness is demonstrated through honesty, integrity, and consistency. As we bring AI tools into our practices, we must consider critical issues such as privacy, data management, accountability, and the associated risks and benefits for doctors. If this goes well, then trust follows naturally.

Beyond Automation

AI in healthcare spans a broad spectrum, from basic office automation to advanced clinical decision support tools. It’s essential to distinguish between these two ends of the spectrum. While AI can simplify administrative tasks that often burden healthcare providers, its true potential lies in elevating patient care.

credit: gapingvoid.com

Mostly, in the community, we have seen AI show up in two forms: automation bots and charting tools. These simplify the routine work that consumes huge parts of our day and, especially with the endless paperwork, sucks the joy from medicine. Bots and AI tools can help with note-taking, scheduling, patient intake, and billing.

However, as AI-driven data collection and management processes improve, they open new horizons. We will be asking an AI engine to do real-time population health analysis. Clinical decision support tools will work dynamically with the patient and the current evidence. We will see high-level engagement with patients that helps them navigate every aspect of their health journey. This is precisely where the need for reproducible, analyzable, and stable algorithms becomes paramount, as I discussed in my last article. For the tools to be accurate and trusted, we need to ensure that we can trust the data they were trained on, and we need to trust that the AI engines that give us answers are honest, consistent, and created with integrity. For anything that touches a patient’s experience, we cannot forget to pressure-test that experience with patients themselves. Trust is a three-legged stool supported equally by clinicians, patients, and the healthcare system.

Accountability

The foundation of AI’s effectiveness in healthcare hinges on data quality. As we transition from traditional notetaking to AI-generated clean, standardized, and coded data, AI becomes a valuable tool for deriving meaningful insights. Ensuring that AI’s conclusions are transparent and trustworthy is crucial.

To build trust, we must know, and be able to see, how AI tools arrive at their conclusions. Given the complexity of AI algorithms and neural networks, this transparency is challenging. Here, the need for oversight and regulation becomes evident, mirroring the accountability expected from human practitioners. Doctors are responsible for their decisions, and AI must be, too, if we are to rely on it for anything more than superficial encounters. Indeed, our Colleges will expect this, and for good reason: we have to ensure that decisions are sound and people are kept safe.

Challenges Ahead in Trust and Accountability

credit: gapingvoid.com

The path to accountability in AI-driven healthcare is not without hurdles. AI recommendations must consider factors like patient preference, intended variance between regions and practices, and economics.  These must align with the best interests of patients, healthcare providers, and the healthcare system.

The challenge lies in understanding how prediction engines work. We will have to distinguish between recommendations that come with confidence intervals and facts or evidence that set standards of care through hard rules. We need to be able to make judgements based on these predictions. We must know how accurate the tool thinks it is, and we need to be able to work with the predictions to correct them when they are wrong. Since all our AI tools are based on some form of machine learning, this should be possible if the tool accepts feedback and the training data set can be corrected. If we cannot trust the answers we get, we will abandon the products very quickly. And if we cannot defend the actions we take based on AI’s information, our Colleges and medicolegal lawyers will have a lot to say about it.
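To make the idea concrete, here is a minimal sketch, in Python, of what a feedback-capable tool could look like. Everything here is hypothetical: the `Prediction` and `FeedbackLog` classes, the labels, and the 0.80 review threshold are all illustrative, not part of any real product. The point is simply that a suggestion should carry its own confidence, and a clinician’s correction should be captured so the training data can later be fixed.

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class Prediction:
    """A suggestion from a hypothetical AI charting tool, with its own confidence."""
    label: str
    confidence: float  # 0.0-1.0, as reported by the model


@dataclass
class FeedbackLog:
    """Clinician corrections collected for future retraining."""
    corrections: list = field(default_factory=list)

    def record(self, prediction: Prediction, accepted: bool,
               corrected_label: Optional[str] = None) -> None:
        # Store the clinician's judgement alongside the model's claim,
        # so the training set can later be corrected.
        self.corrections.append({
            "suggested": prediction.label,
            "confidence": prediction.confidence,
            "accepted": accepted,
            "corrected_to": corrected_label,
        })


# A low-confidence suggestion should prompt human review, not silent acceptance.
pred = Prediction(label="suspected otitis media", confidence=0.62)
log = FeedbackLog()
if pred.confidence < 0.80:  # threshold chosen by the practice, not the vendor
    log.record(pred, accepted=False, corrected_label="otitis externa")

print(log.corrections[0]["corrected_to"])  # otitis externa
```

The design choice worth noting is that the correction is logged with the model’s original claim and confidence attached; without that pairing, the vendor cannot trace which predictions went wrong, and we cannot defend our own decisions afterward.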

Privacy and Trust

One fundamental vulnerability that arises when AI is introduced into healthcare is privacy. Handling sensitive patient data necessitates stringent privacy measures to safeguard individuals’ information. Privacy breaches, big or small, can have severe consequences and erode trust in AI programs and the health system overall.

For example, consider a scenario where an AI-driven healthcare platform inadvertently exposes patient records due to a vulnerability in its security protocols. This breach could result in unauthorized access to patients’ medical histories, diagnoses, and other sensitive information. The data could be used for malicious purposes such as identity fraud, insurance abuse, ransom, or even blackmail! A breach not only compromises patient privacy but also erodes trust in all of our digital health systems. Risk reduction processes must be built into our tools to combat this, and every effort must be made to ensure that the algorithms function in a relatively local, closed loop. Even in a very tight system, we all know that a breach will occur someday, and one of the main challenges will be understanding where in the algorithm it happened. As well, our regulations and privacy legislation must be modernized to take AI into account; they lag well behind the advances we will see, if only because the technology is evolving so rapidly.

AI’s potential in the medical industry is vast and promising. From streamlining administrative tasks to enhancing clinical decision-making, AI has the potential to revolutionize healthcare. However, the journey towards AI integration should be guided by the principles of transparency, accountability, and data privacy. As we navigate this transformative era, healthcare professionals, policymakers, and technologists must work together to balance progress and responsibility. This is how we build trust.

credit: gapingvoid.com

I want to commend the doctors, nurses, patients, and tech gurus who participate in these conversations and contribute to critical dialogue. Thank you for your dedication and bravery.

Stay tuned for Part 4:  Challenges in the Implementation of AI

This will be posted next week.

Hit the SUBSCRIBE button on the right of this page to have it arrive instantly in your inbox!
