5 key considerations when thinking about AI in dentistry
As artificial intelligence (AI) continues to evolve, its integration into dentistry presents numerous opportunities to enhance patient care and streamline operations. However, before adopting generative AI and other AI tools, we must carefully consider several critical factors: environmental impact, system bias, patient data, hallucinations, and effective prompting.
1. Environmental impact
Environmental impact wasn’t something I had consciously considered when thinking about AI tools, but after researching the topic, it’s clear that AI systems—especially those requiring extensive computational resources—can be very energy-intensive. Data centres and cloud servers running advanced algorithms for generative AI often consume significant amounts of electricity. To explore this further, I spent some time interrogating ChatGPT 4.0 (the latest version at the time of writing) for its estimates of its own energy usage. In the interest of reducing our collective footprint, I’ve included a link below to the chat for you to review.
https://chatgpt.com/share/67ebd9ba-af00-8013-827c-39eeaf249907
Personally, this has affected how I use ChatGPT and other tools. While using them is an expectation in my role, I try to be more conscious about the types of questions I ask. For {my}dentist, both as an NHS provider and in our ambition to work towards net zero, this is an important collective consideration.
2. Bias in AI systems
Bias in AI systems is a fascinating issue—one I explored during my EMBA at Warwick, particularly in the context of healthcare. The core principle of AI is that models are built from large volumes of data. This raises several questions: what data is being used? Where did it come from? How much data is involved? Was any sampling applied? Who decided what the system would learn from, and how was it decided (for example, in the case of AI for X-rays, was it based on specialists or general dentists)?
From a personal and simplistic perspective, take ethnicity as an example: the global population is not evenly distributed, so a dataset that accurately reflects a given context (if ethnicity is a factor) will necessarily carry some statistical bias. Understanding that bias is crucial for interpreting AI outputs correctly. If you’re interested in reading more, NIST’s article There’s More to AI Bias Than Biased Data, NIST Report Highlights is an interesting piece on the topic.
3. Data security and privacy
Patient data is incredibly sensitive, and maintaining confidentiality is paramount. When using AI tools, we must ensure that systems comply with data protection regulations, such as GDPR in the UK. This includes understanding how patient data is stored, processed, and accessed by the AI systems. It’s essential to choose AI platforms with robust encryption, secure data storage practices, and clear data governance policies to protect patient privacy and mitigate risks associated with data breaches. We’ve seen numerous examples where this is not the case, especially when data is extracted or used to train models. A simple rule here: don’t input any confidential, patient, or sensitive data into any AI system without going through the appropriate checks. Companies may assure you that it’s fine, but they might not fully understand your specific context or use case.
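As a practical illustration of that "appropriate checks" rule, here is a minimal sketch of a pre-send redaction step in Python. The patterns below (NHS number, date of birth, email) are illustrative only, not an exhaustive or production-grade PII detector; any real deployment would need proper information-governance review.

```python
import re

# Hypothetical patterns -- illustrative only, NOT an exhaustive or
# production-grade PII detector.
PATTERNS = {
    "nhs_number": re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{4}\b"),  # 10-digit NHS number
    "date_of_birth": re.compile(r"\b\d{2}/\d{2}/\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str) -> str:
    """Replace anything matching a known pattern with a placeholder tag
    before the text is sent to any external AI service."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

note = "Patient 943 476 5919, DOB 02/03/1987, emailed j.smith@example.com about pain."
print(redact(note))
```

Even a simple gate like this makes the rule enforceable rather than aspirational, though it is no substitute for checking a vendor's data-handling and training policies.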
4. Hallucinations and accuracy
Generative AI, especially in clinical or specialised contexts, can sometimes produce ‘hallucinations’ – outputs that are inaccurate, irrelevant, or not grounded in reality. These inaccuracies can range from minor errors to significant misinterpretations that could impact patient outcomes. It’s crucial to remain vigilant and cross-check AI outputs against professional expertise and evidence-based practices. While AI can assist in diagnosis or treatment planning (where approved), it should never replace clinical judgement.
5. Prompting for best results
Much like how dentists read between the lines of a medical history to ensure patient safety (asking the right questions to uncover crucial information), developers and data experts have long been probing and prompting systems to ensure the output is appropriate and contextual. Most generative AI systems on the market today require some form of prompting to yield good results. Although this may change as AI becomes more advanced, for now, effective prompting is key to getting accurate, useful, and relevant information from AI tools. By carefully framing prompts and queries, you can optimise the AI’s output to meet specific needs. Clear and precise instructions will help minimise errors, improve AI-generated recommendations, and enhance the overall effectiveness of AI-driven solutions.
Conversely, poor prompting can lead to misguided or even factually inaccurate information. Always ensure you fact-check AI outputs with someone knowledgeable in the field. Otherwise, you risk becoming the person with all the gear but no idea… and no one wants that. Microsoft has some excellent examples of prompting, though there’s a tension worth noting: effective prompts often require a lot of context and information, which can pull against the data security and privacy principles above. For more, see Microsoft’s article on Source: The art of the prompt: How to get the best out of generative AI.
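To make the "carefully framing prompts" point concrete, here is a minimal sketch of structured prompting in Python. The field names and wording are my own illustration, not an official template from any vendor, and no patient data goes anywhere near it.

```python
# A minimal sketch of structured prompting -- the labelled sections
# (role, context, task, constraints) are illustrative, not an official
# template from any AI vendor.

def build_prompt(role: str, context: str, task: str, constraints: list[str]) -> str:
    """Assemble a clear, well-scoped prompt from labelled parts."""
    constraint_lines = "\n".join(f"- {c}" for c in constraints)
    return (
        f"You are {role}.\n"
        f"Context: {context}\n"
        f"Task: {task}\n"
        f"Constraints:\n{constraint_lines}"
    )

# Vague prompt -- likely to produce generic, hard-to-verify output:
vague = "Tell me about gum disease."

# Structured prompt -- scoped, bounded, and easy to fact-check.
structured = build_prompt(
    role="a dental educator writing for UK patients",
    context="a practice newsletter on preventive care",
    task="explain the early signs of gingivitis in plain English",
    constraints=[
        "around 150 words",
        "refer to NHS guidance where relevant",
        "flag anything uncertain for clinician review",
    ],
)
print(structured)
```

The contrast is the point: the vague version leaves the model to guess audience, scope, and tone, while the structured version states all three up front and builds in a fact-checking hook.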
In conclusion, while AI offers promising advancements in dentistry, we must approach these tools with caution, considering their environmental impact, potential biases, patient data security, risk of hallucinations, and the need for effective prompting.