At Asphalion, we are strongly committed to driving digitalization and integrating innovative tools to enhance our work. We believe that the thoughtful and responsible use of digital solutions, such as Artificial Intelligence, can significantly improve project outcomes. That’s why we continuously explore new automation strategies to optimize our processes.
With innovation comes responsibility. Staying compliant, informed, and aligned with regulatory expectations is essential, especially when it comes to emerging technologies like AI. One key authority shaping the future of AI in healthcare is the U.S. Food and Drug Administration (FDA). The FDA recently published a draft guidance, “Considerations for the Use of Artificial Intelligence to Support Regulatory Decision-Making for Drug and Biological Products.”
A key highlight of the guidance is a proposed 7-step risk-based credibility assessment framework to ensure that AI models used across the drug product life cycle are trustworthy and appropriate for their context of use (COU). The seven steps are:
- Defining the question of interest the AI model will address
- Defining the COU for the AI model
- Assessing AI model risk
- Developing a plan to establish the credibility of AI model outputs within the COU
- Executing the plan
- Documenting the results of the credibility assessment and any deviations from the plan
- Determining the adequacy of the AI model for the COU
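As an illustration of the risk-assessment step: the guidance describes model risk as a combination of two factors, model influence (how much the AI output drives the decision) and decision consequence (the impact of a wrong decision). The guidance itself contains no code; the level names and the conservative "take the higher factor" rule below are our own hypothetical simplification.

```python
# Hypothetical sketch of a model-risk determination, loosely following the
# FDA draft guidance's two risk factors: model influence and decision
# consequence. The three-level scale and combination rule are assumptions,
# not part of the guidance.

LEVELS = ("low", "medium", "high")

def model_risk(model_influence: str, decision_consequence: str) -> str:
    """Combine the two factors into an overall model-risk level.

    Conservative rule (our assumption): overall risk equals the higher
    of the two individual factor levels.
    """
    influence = LEVELS.index(model_influence)
    consequence = LEVELS.index(decision_consequence)
    return LEVELS[max(influence, consequence)]

# Example: an AI output that only partially informs a high-consequence
# decision would still be treated as high risk under this rule.
print(model_risk("medium", "high"))  # -> high
```

In practice, the resulting risk level would then scale the rigor of the credibility assessment plan in the subsequent steps.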
Notably, the FDA emphasizes the importance of life cycle maintenance of AI models, as their performance can evolve over time. The Agency encourages a risk-based approach to monitoring and change management, particularly in manufacturing, and stresses the value of ongoing engagement with regulators to maintain compliance.
Read the whole document here: https://bit.ly/40cPVEG
We’re here to help with your regulatory processes. Get in touch with us! [email protected]