Artificial Intelligence in Pharmacy: “Appropriate” Use of AI (Part 1 of 2)

Artificial intelligence (AI) has made its way into almost every industry, including healthcare. One area where AI holds immense potential is pharmacy. With its ability to analyze vast amounts of data and make accurate predictions (when based on accurate input), AI can revolutionize the way medications are prescribed, dispensed, and monitored.

The key question is: what is the “appropriate” use of AI in pharmacy?

While the benefits are wide-ranging and inspire hope, it is important to strike a balance between the power of AI and human expertise, ensuring that patient safety and ethical considerations remain at the forefront.

The World Health Organization’s Recent Warnings About AI in Health

On May 16, 2023, the World Health Organization (WHO) issued a call for caution and ethical use of large language model tools (LLMs), including platforms like ChatGPT, Bard, and BERT, in the context of health. While these LLMs hold great potential for supporting health needs and improving access to health information, WHO emphasizes the importance of carefully examining the risks and exercising caution in their adoption.

LLMs are not “true” AI in that they don’t “think” for themselves. They construct responses based on large sets of language data, such as articles on the internet or posts on social media. What appears to be the consensus on a topic becomes “fact” for the model. — the editor

The concerns raised by WHO include:

  1. Biased data: The data used to train AI may be biased, leading to misleading or inaccurate health information that could impact health equity and inclusiveness.
  2. Potential errors: LLMs may generate responses that seem plausible and authoritative, but they could be incorrect or contain serious errors, especially when providing health-related information.
  3. Lack of consent and data protection: LLMs may be trained on data without appropriate consent, and they may not adequately protect sensitive data, including health data, provided by users.
  4. Disinformation: LLMs could be misused to create and spread convincing disinformation in the form of text, audio, or video content, in an authoritative voice, that is challenging for the public to distinguish from reliable health information.
  5. Premature commercialization: As technology firms race to commercialize LLMs, WHO recommends that policy-makers prioritize patient safety and protection.

In light of these concerns, WHO urges rigorous oversight and clear evidence of benefit before LLMs are used widely in routine healthcare and medicine. The organization emphasizes the importance of applying ethical principles and appropriate governance, as outlined in the WHO guidance on the ethics and governance of AI for health. The six core principles identified by WHO are:

  1. Protect autonomy.
  2. Promote human well-being, human safety, and the public interest.
  3. Ensure transparency, explainability, and intelligibility.
  4. Foster responsibility and accountability.
  5. Ensure inclusiveness and equity.
  6. Promote AI that is responsive and sustainable.

By adhering to these principles, WHO aims to ensure that AI technologies, including LLMs, are used ethically, responsibly, and for the benefit of all individuals and communities in the field of healthcare and medicine.

WHO Guidelines in Detail

WHO released its first global report on AI in health on June 28, 2021. The report, titled “Ethics and governance of artificial intelligence for health,” was the outcome of a two-year consultation process involving a panel of international experts appointed by WHO. In it, WHO introduced its six guiding principles.

  1. Protecting human autonomy: Humans should remain in control of healthcare systems and medical decisions, and privacy and confidentiality should be safeguarded. Patients must provide valid informed consent through appropriate legal frameworks for data protection.
  2. Promoting human well-being and safety and the public interest: AI technologies should meet regulatory requirements for safety, accuracy, and efficacy for specific use cases. Quality control and quality improvement measures should be available for the use of AI.
  3. Ensuring transparency, explainability, and intelligibility: Sufficient information should be published or documented before the design or deployment of an AI technology. This information should be easily accessible to facilitate meaningful public consultation and debate on the technology’s design and usage.
  4. Fostering responsibility and accountability: Stakeholders are responsible for ensuring that AI technologies are used under appropriate conditions and by appropriately trained individuals. Mechanisms should be available for questioning decisions based on algorithms and providing redress for those adversely affected.
  5. Ensuring inclusiveness and equity: AI for health should be designed to encourage equitable use and access regardless of age, sex, gender, income, race, ethnicity, sexual orientation, ability, or other protected characteristics under human rights codes.
  6. Promoting AI that is responsive and sustainable: Continuous and transparent assessment of AI applications during actual use should ensure that AI responds adequately to expectations and requirements. AI systems should also be designed to minimize environmental consequences and increase energy efficiency. Efforts should be made to address anticipated disruptions in the workplace due to the use of AI, including training healthcare workers to adapt to automated systems and addressing potential job losses.

By following these principles, governments, providers, and designers can work together to address ethics and human rights concerns at every stage of AI technology’s design, development, and deployment in healthcare and public health. The aim is to ensure that AI’s full potential benefits all individuals and communities.

The Potential Benefits of AI in Pharmacy

Artificial intelligence has the potential to revolutionize pharmacy practice. Its potential benefits are vast, ranging from improved decision-making and patient safety to increased efficiency and better medication adherence.

One of the key advantages is AI's ability to analyze vast amounts of patient data, such as medical history, laboratory results, and medication records, to identify patterns and make accurate predictions. This can help pharmacists make more informed decisions about medication selection, dosing, and monitoring.

AI can assist in detecting potential drug interactions, allergies, and adverse drug reactions, thereby reducing the risk of medication errors and improving patient safety.
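
To make this concrete, here is a minimal sketch of how a rule-based screen might flag known interaction pairs and documented allergies on a patient's medication list. The interaction table, drug names, and patient record are invented for illustration and are not a clinical reference; real systems draw on curated interaction databases and far richer patient data.

    # Minimal sketch: rule-based screening of a medication list against
    # known interaction pairs and documented allergies.
    # The interaction table and the example patient are illustrative only.

    INTERACTION_PAIRS = {
        frozenset({"warfarin", "ibuprofen"}),          # bleeding risk (example)
        frozenset({"simvastatin", "clarithromycin"}),  # example pair
    }

    def screen_medications(med_list, allergies):
        """Return human-readable alerts for the given regimen."""
        alerts = []
        meds = [m.lower() for m in med_list]
        allergy_set = {a.lower() for a in allergies}
        # Pairwise interaction check
        for i in range(len(meds)):
            for j in range(i + 1, len(meds)):
                if frozenset({meds[i], meds[j]}) in INTERACTION_PAIRS:
                    alerts.append(f"Interaction: {meds[i]} + {meds[j]}")
        # Allergy check
        for m in meds:
            if m in allergy_set:
                alerts.append(f"Allergy on file: {m}")
        return alerts

    print(screen_medications(["Warfarin", "Ibuprofen", "Metformin"], ["ibuprofen"]))
    # ['Interaction: warfarin + ibuprofen', 'Allergy on file: ibuprofen']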

AI-powered systems can automate routine tasks, such as medication dispensing and inventory management, allowing pharmacists to focus more on patient care and counseling. This can lead to improved efficiency and productivity in pharmacy settings. 
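
As one small example of the kind of routine task that lends itself to automation, the sketch below applies a standard reorder-point formula to decide when to restock an item. The demand, lead-time, and safety-stock figures are assumptions made up for the example, not values from any particular pharmacy system.

    # Minimal sketch: reorder-point logic for pharmacy inventory.
    # Numbers below are invented for illustration.

    def reorder_point(avg_daily_demand, lead_time_days, safety_stock):
        """Reorder once on-hand stock covers only lead-time demand plus a buffer."""
        return avg_daily_demand * lead_time_days + safety_stock

    def needs_reorder(on_hand, avg_daily_demand, lead_time_days, safety_stock):
        return on_hand <= reorder_point(avg_daily_demand, lead_time_days, safety_stock)

    # Example: 30 units dispensed per day, 5-day supplier lead time, 50-unit buffer
    print(reorder_point(30, 5, 50))        # 200
    print(needs_reorder(180, 30, 5, 50))   # True -> place an order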

AI can also enhance medication adherence by providing personalized reminders and educational materials to patients, helping ensure they take their medications as prescribed.

The Challenges and Risks of AI in Pharmacy

While AI holds great promise in pharmacy, it is not without its challenges and risks. One of the main concerns is the accuracy and reliability of AI algorithms. The effectiveness of AI systems heavily relies on the quality and integrity of the data used for training. If the data is biased or incomplete, it can lead to inaccurate predictions and potentially harmful outcomes. Therefore, it is crucial to ensure the integrity and diversity of the data used in AI models to avoid biased results or discriminatory practices.

Another challenge is the integration of AI systems into existing pharmacy workflows and infrastructure. Implementing AI technologies requires significant investments in terms of both financial resources and human capital. Pharmacists and pharmacy staff need to be trained to effectively use and interpret the results generated by AI systems. Moreover, there may be resistance to change and concerns about job displacement among pharmacy professionals. It is essential to address these challenges and provide adequate support and training to ensure a smooth transition.

Additionally, the ethical implications of AI in pharmacy cannot be overlooked. Patient privacy and data security are major concerns when dealing with sensitive health information. Proper safeguards must be in place to protect patient data and ensure compliance with relevant regulations, such as the Health Insurance Portability and Accountability Act (HIPAA). Transparency and accountability in the development and deployment of AI systems are also crucial to maintain trust and mitigate potential ethical issues.

In summary, while AI has the potential to transform pharmacy practices, addressing challenges related to data integrity, integration, and ethics is crucial for its successful implementation.

Ethical Considerations in the Use of AI in Pharmacy

The use of AI in pharmacy raises several ethical considerations that must be carefully addressed. One of the primary concerns is the potential for bias and discrimination in AI algorithms. If the data used to train AI models reflects existing biases or discriminatory practices, it can perpetuate inequalities in healthcare. For example, if historical data predominantly represents certain demographics, AI systems may not provide accurate predictions or recommendations for underrepresented populations. It is essential to ensure that AI algorithms are trained on diverse and representative datasets to avoid biases and promote equity in healthcare.
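
One practical way to act on this, sketched below under assumed group labels and reference proportions, is to compare the demographic makeup of a training dataset against the population it is meant to serve and flag badly underrepresented groups before a model is ever trained.

    # Minimal sketch: comparing demographic representation in a training set
    # against a reference population before model training.
    # Group labels, the 50% threshold, and the proportions are illustrative assumptions.

    from collections import Counter

    def representation_gap(records, reference_shares, key="ethnicity"):
        """Return groups whose share of the data falls well below the reference share."""
        counts = Counter(r[key] for r in records)
        total = sum(counts.values())
        gaps = {}
        for group, expected in reference_shares.items():
            observed = counts.get(group, 0) / total if total else 0.0
            if observed < 0.5 * expected:   # flag groups at less than half their expected share
                gaps[group] = {"observed": round(observed, 3), "expected": expected}
        return gaps

    records = [{"ethnicity": "A"}] * 90 + [{"ethnicity": "B"}] * 10
    print(representation_gap(records, {"A": 0.6, "B": 0.4}))
    # {'B': {'observed': 0.1, 'expected': 0.4}}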

Another ethical consideration is the transparency and explainability of AI systems. AI algorithms often work as “black boxes,” making it difficult to understand the reasoning behind their decisions. In healthcare, where decisions can have significant implications for patients’ lives, it is crucial to have transparency and explainability to build trust and accountability. Patients and healthcare professionals should have access to information about how AI systems work, the data they use, and the factors influencing their recommendations.
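
By way of illustration, an interpretable model can report not just a prediction but the contribution of each input to it. The sketch below does this for a simple linear risk score; the feature names and weights are invented for the example and do not represent a validated clinical model.

    # Minimal sketch: surfacing per-feature contributions for a linear risk score,
    # one way an "explainable" model can show why it flagged a patient.
    # Feature names and weights are illustrative assumptions.

    WEIGHTS = {
        "age_over_65": 0.8,
        "num_active_meds": 0.3,
        "renal_impairment": 1.2,
    }

    def explain_risk(patient_features):
        """Return the total score and each feature's contribution to it."""
        contributions = {
            name: WEIGHTS[name] * value
            for name, value in patient_features.items()
            if name in WEIGHTS
        }
        return sum(contributions.values()), contributions

    score, parts = explain_risk(
        {"age_over_65": 1, "num_active_meds": 7, "renal_impairment": 1}
    )
    print(round(score, 1))  # 4.1
    print(parts)            # {'age_over_65': 0.8, 'num_active_meds': 2.1, 'renal_impairment': 1.2}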

Moreover, the protection of patient privacy and data security is paramount when using AI in pharmacy. AI systems require access to sensitive health information to make accurate predictions and recommendations. It is essential to implement robust security measures and adhere to privacy regulations to safeguard patient data from unauthorized access or breaches. Clear policies and protocols should be in place to ensure the responsible collection, use, and storage of patient data.
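
As a small illustration of responsible handling of patient data, the sketch below pseudonymizes a record before it reaches an AI pipeline by dropping direct identifiers and replacing the patient ID with a salted hash. The field names are assumptions for the example, and true HIPAA de-identification involves far more than this single step.

    # Minimal sketch: pseudonymizing a patient record before it reaches an AI pipeline.
    # Field names are illustrative; a real HIPAA de-identification process covers far more.

    import hashlib

    DIRECT_IDENTIFIERS = {"name", "address", "phone"}

    def pseudonymize(record, salt):
        """Replace the patient ID with a salted hash and drop direct identifiers."""
        cleaned = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
        token = hashlib.sha256((salt + str(record["patient_id"])).encode()).hexdigest()[:16]
        cleaned["patient_id"] = token
        return cleaned

    record = {"patient_id": "12345", "name": "Jane Doe", "phone": "555-0100", "a1c": 7.2}
    print(pseudonymize(record, salt="site-specific-secret"))
    # {'patient_id': '<16-character hash>', 'a1c': 7.2}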

In summary, addressing ethical considerations, such as bias mitigation, transparency, and privacy, is vital to ensure the responsible and ethical use of AI in pharmacy.

This is Part 1 of 2.

In Part 2 (which will be posted in a day, for those reading this in real time), we’ll cover how AI is currently being used in pharmacy and what it might mean for future pharmacy practice. Thank you.