The Clinovera team has been building AI-driven solutions for clients for over 20 years, using classical technologies such as machine learning, natural language processing, Bayesian models, and rules-based systems, and now GenAI.
GenAI has revolutionized and dramatically accelerated solution development. Yet every solution we create today comes with a host of complexities. Fortunately, these complexities are not blockers; they present options and tradeoffs that we know how to address.
This document describes and classifies, at a high level, the key concerns associated with developing AI-driven solutions. The concerns fall into the following categories:
The diagram below breaks each category down into more specific concerns, which we clarify later with examples.
With respect to language model usage, all GenAI solutions follow one of two strategies:
While the challenges and tradeoffs with these two approaches are largely the same, the options to address them may be somewhat different, which we detail below.
The expense of using AI models depends on the strategy you choose. With commercial AI services, the cost is usually charged per token (or per 1,000 tokens) each time the AI service API is invoked. For privately deployed models, the cost is usually the infrastructure cost of the provisioned VMs. Running a capable LLM with many billions of parameters typically requires expensive GPUs and powerful CPUs, which adds up quickly under continuous usage.
Once the solution is used extensively by users or systems, significant costs can accrue. For example, users may interact with a chatbot for prolonged periods or upload countless large documents, resulting in major usage costs.
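As a rough illustration, the sketch below compares the two cost models for a hypothetical chatbot workload. All prices (per-token rates, GPU VM hourly rate) and workload figures are illustrative assumptions, not quotes from any provider.

```python
# Rough cost comparison sketch; all prices and workload numbers are illustrative assumptions.

# Hypothetical commercial API pricing (USD per 1,000 tokens).
PRICE_PER_1K_INPUT_TOKENS = 0.01
PRICE_PER_1K_OUTPUT_TOKENS = 0.03

# Hypothetical GPU VM pricing for a privately deployed model (USD per hour).
GPU_VM_HOURLY_RATE = 4.50


def monthly_api_cost(requests_per_day: int,
                     avg_input_tokens: int,
                     avg_output_tokens: int,
                     days: int = 30) -> float:
    """Estimate monthly cost of a pay-per-token commercial AI service."""
    cost_per_request = (avg_input_tokens / 1000 * PRICE_PER_1K_INPUT_TOKENS
                        + avg_output_tokens / 1000 * PRICE_PER_1K_OUTPUT_TOKENS)
    return requests_per_day * days * cost_per_request


def monthly_private_model_cost(hours_per_day: float, days: int = 30) -> float:
    """Estimate monthly infrastructure cost of a continuously provisioned GPU VM."""
    return hours_per_day * days * GPU_VM_HOURLY_RATE


if __name__ == "__main__":
    # Example workload: a chatbot handling 2,000 requests per day.
    print(f"Commercial API:   ${monthly_api_cost(2000, 1500, 500):,.2f} / month")
    print(f"Private GPU VM:   ${monthly_private_model_cost(24):,.2f} / month")
```

Sketches like this are useful early on because the break-even point between the two strategies shifts dramatically with request volume and average document size.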
Runaway costs differ from excessive usage costs in that they stem from internal issues in the application workflow, such as glitches or unanticipated behavior. For example, a bug that calls the AI service in an infinite loop can accumulate significant costs very quickly.
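One common safeguard is a hard budget or call-count cap wrapped around every model invocation, so a looping bug fails fast instead of silently accumulating charges. The sketch below is a minimal illustration; the specific limits and the `call_model` function are hypothetical placeholders, not part of any particular service.

```python
# Minimal budget-guard sketch; limits and call_model() are hypothetical placeholders.

class BudgetExceededError(RuntimeError):
    """Raised when the application exceeds its configured AI spend or call limits."""


class AIBudgetGuard:
    def __init__(self, max_calls: int, max_spend_usd: float):
        self.max_calls = max_calls
        self.max_spend_usd = max_spend_usd
        self.calls = 0
        self.spend_usd = 0.0

    def charge(self, estimated_cost_usd: float) -> None:
        """Record a pending model call and halt the workflow if limits are exceeded."""
        self.calls += 1
        self.spend_usd += estimated_cost_usd
        if self.calls > self.max_calls or self.spend_usd > self.max_spend_usd:
            raise BudgetExceededError(
                f"AI budget exceeded: {self.calls} calls, ${self.spend_usd:.2f} spent"
            )


def call_model(prompt: str, guard: AIBudgetGuard) -> str:
    guard.charge(estimated_cost_usd=0.02)  # rough per-call estimate
    # ... invoke the AI service here; a stub response stands in for the sketch ...
    return "stub response"
```

Because the guard raises an exception rather than logging a warning, an accidental infinite loop stops after a bounded number of calls instead of running until someone notices the bill.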
There are instances when an AI-driven solution cannot achieve the desired functional outcomes.
Slow, delayed processing may be caused by a number of individual or combined factors:
When the model generates poor-quality, irrelevant, or misleading responses, the contributing factors could be:
An AI solution is too generic when the responses it generates are not sufficiently relevant to the intended purpose of the application. This usually stems from a lack of provided context or insufficient training of the model. For example, when an application designed to assist with patient admission decisions is asked about a patient's medications, it may simply return a medication list rather than indicating which medications are costly or psychotropic.
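One common way to make responses task-specific is to enrich the prompt with the application's own context before calling the model. The sketch below illustrates the idea for the admission-decision example; the formulary flags and the commented-out `ask_model` call are hypothetical stand-ins, not an actual data source or API.

```python
# Context-enrichment sketch; the formulary flags and ask_model() are hypothetical.

# Application-specific context that a bare model prompt would not include.
FORMULARY_FLAGS = {
    "quetiapine": {"high_cost": False, "psychotropic": True},
    "adalimumab": {"high_cost": True, "psychotropic": False},
}


def build_admission_prompt(medications: list[str]) -> str:
    """Attach cost and psychotropic flags so the model can answer the real question."""
    lines = []
    for med in medications:
        flags = FORMULARY_FLAGS.get(med.lower(), {})
        notes = [name for name, value in flags.items() if value]
        lines.append(f"- {med} ({', '.join(notes) or 'no flags'})")
    context = "\n".join(lines)
    return (
        "You are assisting with a patient admission decision.\n"
        "Patient medications with formulary flags:\n"
        f"{context}\n"
        "Highlight which medications are high-cost or psychotropic and explain "
        "why that matters for the admission decision."
    )


# prompt = build_admission_prompt(["Quetiapine", "Adalimumab"])
# response = ask_model(prompt)  # ask_model() stands in for the actual AI service call
```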
Concerns related to privacy and security are usually raised when dealing with commercial AI services. The assumption is that privately deployed models on privately provisioned hardware are more secure.
Protected Health Information (PHI), Personally Identifiable Information (PII), or other content submitted to commercial AI services may be used to train the model and consequently appear in responses to other users or systems. A number of mechanisms and methodologies are available for protecting PHI when using AI services, each with its own advantages and shortcomings. These include deidentification of data, making appropriate arrangements with commercial AI service providers, instantiating private LLMs, and others.
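As a toy illustration of the deidentification approach, the sketch below masks a few obvious PHI patterns before text leaves the private environment. A production solution would rely on a vetted deidentification library or service rather than ad hoc regular expressions; the patterns and the commented-out downstream call are assumptions for illustration only.

```python
import re

# Toy deidentification sketch; real solutions should use a vetted PHI deidentification
# tool, not a handful of regular expressions.

PHI_PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),                              # US SSN
    (re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"), "[PHONE]"),                # phone number
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),                      # email address
    (re.compile(r"\b(MRN|Medical Record Number)[:#]?\s*\d+\b", re.I), "[MRN]"),   # record number
]


def deidentify(text: str) -> str:
    """Mask obvious PHI patterns before the text is sent to a commercial AI service."""
    for pattern, placeholder in PHI_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text


# Example:
# safe_text = deidentify("Patient MRN: 1234567, phone 617-555-0100, jane@example.com")
# send_to_commercial_ai_service(safe_text)  # hypothetical downstream API call
```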
Protecting IP is of high importance for many organizations. It can often be protected in ways similar to PHI, though the frameworks for this are much less developed.
The rapid advancement of AI is both an opportunity and a challenge. It forces organizations to constantly rethink and adapt their business and technology strategies, as well as their approaches to cost management and performance optimization. We loosely break this topic into three areas:
GenAI models are evolving in multiple ways.
These advancements reflect a shift towards more intelligent, versatile, and interactive AI systems, offering new opportunities and challenges for solution developers.
Advancements in computing power have a significant impact on AI solution development. Newer, more capable hardware can run bigger, smarter models with more parameters and more training data. The latest chips, like Apple's Neural Engine or NVIDIA Jetson, let sophisticated AI run on phones, drones, and IoT devices. The challenges, however, include cost, energy consumption, and sustainability concerns.
Legal frameworks for AI in healthcare are evolving rapidly to address the unique challenges posed by integrating advanced technologies into medical settings. Most of these efforts revolve around data privacy and security, bias and fairness, and accountability and liability.
Here are examples of some key developments:
Many of these regulatory developments have a profound impact on healthcare organizations and AI solution developers, encouraging them to stay informed about regulatory changes and to engage in proactive compliance efforts.
Understanding how to balance these opportunities and challenges with your healthcare operation's needs is exactly where the Clinovera team can help. Please fill out the Contact Us form to speak with our experts.