Preamble
As accrediting bodies, we support continuing professional development (CPD) systems that evolve in tandem with clinical practice. Just as clinicians are learning to work alongside new tools, including artificial intelligence (AI), so must facilitators of learning and accredited providers of continuing education (CE). This guidance includes considerations for the responsible use of generative AI (machine-based systems that produce new content in response to user input) as it applies to accredited continuing education programs. Some of these tools are freely accessible to the public, while others operate through licensed, closed-source platforms. The emergence of AI-enabled technologies offers remarkable potential to improve the design, relevance, efficiency, and impact of lifelong learning (Masters, 2023; Ensign, Nisly, & Pardo, 2024).
It is important to foster exploration, innovation, and thoughtful experimentation with AI in CE while safeguarding the integrity of educational content, learner trust, and patient confidentiality, and while guarding against bias and undue influence. This document outlines guidance to support Accredited Providers as they navigate this evolving landscape and may be updated as the technology and its capabilities evolve. For areas that overlap existing policy, such as the Standards for Integrity and Independence, we have reiterated expectations that apply even when AI is used in the planning, presentation, and evaluation of accredited education.
Encouraging Responsible Innovation
AI tools can enhance educational effectiveness, foster learner engagement, and support data-informed planning and evaluation, all while maintaining alignment with accreditation standards. When used thoughtfully, AI can be a powerful adjunct to educational design, improving efficiency and sparking creativity while augmenting (not replacing) human insight and professional judgment.
Accredited Providers are encouraged to experiment with AI in ways that can enhance educational design, delivery, independence, and evaluation, such as:
- Conducting professional practice gap or learning needs assessments, using AI to identify gaps, trends, or at-risk populations.
- Designing and developing content, including brainstorming, drafting, and adapting materials for different audiences.
- Analyzing financial relationships with ineligible companies to assess their relevance to content.
- Generating assessment methods, including distractors, answer rationales, and scoring rubrics.
- Summarizing feedback and analyzing large evaluation datasets (see the sketch after this list).
- Supporting quality improvement and outcomes initiatives and social media outreach.
- Building single-profession and interprofessional cases and scenarios in complex clinical domains.
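As one illustration of the feedback-summarization item above, the sketch below sends de-identified learner comments to an institution-approved, OpenAI-compatible endpoint. The base URL, model name, and summarize_evaluations helper are hypothetical placeholders, not a specific product; outputs would still require human review (see item 3 below) and de-identification beforehand (see item 4 below).

```python
# Minimal sketch: summarizing free-text activity evaluations with an
# institution-approved LLM endpoint. The endpoint, model name, and key
# handling below are hypothetical placeholders, not a specific product.
from openai import OpenAI  # pip install openai; any OpenAI-compatible client

client = OpenAI(
    base_url="https://ai.example-institution.edu/v1",  # approved host (hypothetical)
    api_key="REDACTED",
)

def summarize_evaluations(comments: list[str]) -> str:
    """Summarize learner comments into themes; output still requires human review."""
    joined = "\n".join(f"- {c}" for c in comments)
    response = client.chat.completions.create(
        model="approved-model",  # use the institution's vetted model
        messages=[
            {"role": "system",
             "content": "Summarize the key themes in these CE activity evaluations. "
                        "Do not name any individual learner."},
            {"role": "user", "content": joined},
        ],
    )
    return response.choices[0].message.content

# Usage (comments must be de-identified first):
# print(summarize_evaluations(["Great cases.", "Module 2 felt promotional."]))
```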
The following guidance addresses accreditation-related requirements as they relate to the use of AI in CE:
1. Safeguard Independence and Mitigate Bias
Accredited Providers must ensure that AI-generated or AI-assisted content does not introduce bias or undue influence. All content must comply with the Standards for Integrity and Independence in Accredited Continuing Education.
- Base educational content on current science, evidence, and clinical reasoning, and present a balanced view of diagnostic and treatment options.
- Ensure that no aspect of AI-generated or AI-assisted content reflects the interests of ineligible companies or introduces promotional language. This includes ensuring that advertisers, sponsors, and commercial supporters are never permitted to influence or bias any AI-driven recommendation engine used in accredited educational activities.
- Maintain control over, and full responsibility for, educational content, ensuring that individuals without relevant financial relationships make all decisions about the use of AI.
- Maintain the same standards of content review and independence for AI-generated materials as for those developed by human authors.
- Screen AI outputs for commercial bias or promotional language that could benefit ineligible companies (a first-pass screening sketch follows this list).
- Document the steps taken to ensure AI-generated content is free from commercial influence.
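A keyword screen can serve as a first pass before the required human independence review; it cannot replace that review. The following is a minimal sketch; WATCH_TERMS and flag_for_review are illustrative assumptions, and a real list would be maintained and updated by the provider.

```python
import re

# Illustrative, provider-maintained watch list: brand names of relevant
# ineligible companies plus common promotional phrasings.
WATCH_TERMS = ["BrandDrugX", "best-in-class", "market-leading", "ask your rep"]

def flag_for_review(text: str) -> list[str]:
    """Return watch-list terms found in AI output; any hit triggers human review."""
    return [t for t in WATCH_TERMS
            if re.search(re.escape(t), text, re.IGNORECASE)]

draft = "This market-leading therapy outperforms all alternatives."
hits = flag_for_review(draft)
if hits:
    print("Flag for independence review:", hits)
```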
2. Transparently Disclose AI Use
When AI is used in the creation or development of educational content, educators and accredited CE providers should disclose its use. This is especially true when the tools are used to generate, modify, or analyze educational materials. The disclosure of AI use in educational content creation aligns with the principles of informed engagement and the Standards for Integrity and Independence in Accredited Continuing Education, which help uphold learner trust in accredited continuing education. Routine spelling or grammar tools do not require disclosure.
Strategies to support transparency include the following (a sample disclosure statement appears after this list):
- Disclose the name, version, and date of use of the AI tool.
- Describe the purpose for which AI was used, such as drafting, data analysis, or assessment creation.
- Inform learners when content was created or edited with AI assistance, including slides, written materials, or assessments.
- Include additional disclosure elements: model/source identification, a statement that outputs were verified by a human reviewer (including by whom and how), and whether prompts or outputs were stored externally.
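For example, a disclosure meeting these elements might read as follows (the tool name, dates, and reviewer roles are placeholders):

"Portions of this activity's needs assessment and draft slides were developed with the assistance of [AI Tool, version X, accessed March 2025]. All AI-assisted content was reviewed and verified for accuracy, balance, and independence by the activity's physician planners before release. No prompts or outputs were stored outside institution-approved systems."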
3. Ensure Human Oversight, Accuracy, and Accountability
AI is a tool that can enhance efficiency and expand design capabilities, but it is not a substitute for professional judgment. Current systems are not consistently capable of making ethically sound or context-specific decisions without human oversight. Generative AI is also prone to fabricating citations or producing misleading content, and inaccuracies or training bias can result in harmful or discriminatory outputs. For these reasons, content integrity and data stewardship remain human responsibilities of the accredited provider.
For AI-generated materials that are fixed, meaning they are created once and then distributed (e.g., slides, handouts, assessments), the accredited provider must ensure that these AI-generated outputs are:
- Reviewed and approved by named, qualified individuals before dissemination to learners.
- Checked for factual errors or “AI hallucinations.”
- Screened for bias or stereotyping in clinical or demographic representations.
- Accompanied by version control and traceability, indicating who reviewed what, and when (a minimal review-log sketch follows this list).
- Overseen by identified, experienced clinicians who confirm that recommendations and outputs are accurate, scientifically valid, unbiased, and reliable.
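As referenced in the traceability item above, a review log can record who generated, reviewed, and approved each fixed artifact. The following is a minimal sketch of one such record; the AIContentReview structure and its field names are illustrative assumptions, not a required format.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AIContentReview:
    """One traceability record for a fixed, AI-assisted artifact."""
    artifact: str             # e.g., "Module 2 slide deck v1.3"
    tool: str                 # AI tool name and version used
    generated_on: date
    reviewer: str             # named, qualified individual
    reviewer_role: str        # e.g., "physician planner"
    checks_passed: list[str]  # e.g., ["factual accuracy", "bias screen"]
    approved_on: date

record = AIContentReview(
    artifact="Module 2 slide deck v1.3",
    tool="ExampleLLM 4.0",  # hypothetical tool name
    generated_on=date(2025, 3, 1),
    reviewer="Dr. A. Example",
    reviewer_role="physician planner",
    checks_passed=["factual accuracy", "bias screen", "independence review"],
    approved_on=date(2025, 3, 5),
)
```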
When learners use AI systems to answer clinical questions or produce dynamic outputs in real time during the learning experience, the accredited provider must ensure that the AI system itself, as well as the underlying processes and architecture, undergoes rigorous oversight by identified, experienced clinicians. Oversight should confirm that recommendations and outputs are accurate, scientifically valid, unbiased, and reliable. Learners using such outputs should be reminded that AI systems can make mistakes and that final responsibility for clinical decisions always rests with the human professional.
4. Protect Learner Identity and Sensitive Information
The accredited provider should not compromise learner privacy. Specifically:
- Obtain consent from learners before allowing their data or identity to be shared with any third party.
- Avoid entering protected health information (PHI) or personally identifiable information (PII) into AI tools unless those tools meet organizational, legal, and ethical data privacy requirements (U.S. Department of Health and Human Services, 2023).
- For activities involving proprietary or sensitive data, use only AI platforms that the accredited provider’s institution has approved.
- De-identify evaluation, outcomes, or learner performance data before any use beyond the accredited provider's internal operations (a de-identification sketch follows this list).
- Maintain data governance, retention, and access protocols in alignment with organizational and legal requirements.
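As one approach to the de-identification point above, direct identifiers can be dropped and learner identifiers replaced with salted one-way hashes before data leave the provider's internal systems. This is a minimal sketch under those assumptions; the identifier fields shown are illustrative, and this technique does not by itself establish compliance with HIPAA, FERPA, or similar requirements.

```python
import hashlib

SALT = b"provider-secret-salt"  # store securely; rotating it breaks linkage

def pseudonymize(learner_id: str) -> str:
    """Replace a learner identifier with a salted one-way hash."""
    return hashlib.sha256(SALT + learner_id.encode()).hexdigest()[:16]

def deidentify(record: dict) -> dict:
    """Drop direct identifiers and pseudonymize the learner ID."""
    cleaned = {k: v for k, v in record.items()
               if k not in {"name", "email", "npi"}}  # illustrative identifier fields
    cleaned["learner_id"] = pseudonymize(record["learner_id"])
    return cleaned

print(deidentify({"learner_id": "12345", "name": "Jane Doe",
                  "email": "jd@example.org", "score": 0.92}))
```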
5. Limit Prohibited or High-Risk Uses
AI should generally not be used for the following purposes; where it is, especially rigorous oversight and expert review must be in place:
- Generating diagnostic or treatment recommendations without clinical validation.
- Entering or storing sensitive content in public or non-secure AI platforms.
- Auto-producing assessment answers that are visible to learners.
- Automating analysis or summaries that bypass bias, independence, or accuracy checks.
6. Establish Internal Governance and Continuous Improvement Practices
For providers seeking to scale their AI use, internal governance policies can ensure consistency, accountability, and alignment with accreditation requirements. By implementing oversight structures, training, approved tool lists, and evaluative mechanisms, Accredited Providers can monitor evolving AI practices while preventing drift toward inappropriate use. This proactive stance helps safeguard the independence of educational content, assures learners of the credibility of CE activities, and fosters a culture of continuous improvement grounded in transparency and ethical practice.
Accredited Providers routinely using AI tools should consider the following approaches:
- Designate responsible individuals to oversee AI use in CE.
- Maintain an internal list of approved AI tools and update it periodically.
- Provide training and orientation for faculty and staff on responsible AI use, including clear guidance on when AI use is appropriate or prohibited.
- Pilot and evaluate new AI applications before full-scale implementation.
- Develop policies that differentiate expectations for staff, faculty, and learners, including:
  - For staff: Ensure understanding of system types (closed vs. open source), risks associated with various use cases, and documentation requirements.
  - For faculty: Include disclosure protocols in content development materials and outline expectations for evidence validation and proper attribution of AI support.
  - For learners: Clarify when the use of AI (e.g., to transcribe, summarize, or analyze CE material) is inappropriate due to privacy, copyright, or assessment integrity concerns.
- Log usage and periodically review AI applications for compliance and quality (a minimal logging sketch follows this list).
- Update policies and practices based on evolving technology and risk assessments.
- Adopt policies permitting only the use of institution-approved AI tools for CE work.
- Adopt policies requiring that PHI/PII either not be introduced into, or be scrubbed from, documents uploaded to or used in AI platforms.
- Adopt policies that prevent or reduce the risk that use of an AI tool infringes a third party's intellectual property.
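One way to implement the usage-logging practice above is an append-only log of each AI use that designated individuals can audit periodically. The following is a minimal sketch; the file path, field names, and log_ai_use helper are illustrative assumptions, not a prescribed format.

```python
import json
from datetime import datetime, timezone

LOG_PATH = "ai_usage_log.jsonl"  # append-only; restrict write access

def log_ai_use(tool: str, version: str, purpose: str, user: str) -> None:
    """Append one AI-usage entry for later compliance and quality review."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,
        "version": version,
        "purpose": purpose,    # e.g., "drafting needs assessment"
        "user": user,
        "phi_entered": False,  # must remain False per section 4
    }
    with open(LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

log_ai_use("ExampleLLM", "4.0", "drafting needs assessment", "staff.member")
```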
7. Secure Databases and AI Systems
Continuing education staff should exercise caution when uploading educational materials into AI systems. If the content includes any patented, proprietary, or unpublished materials provided by faculty, staff should first obtain the faculty member’s explicit permission. This is especially important when using open-access or third-party AI platforms. Uploading protected content without consent may expose trade secrets or violate confidentiality and privacy expectations and agreements, potentially resulting in liability for the provider organization.
To protect data integrity and provider credibility and to maintain adherence to accreditation standards, Accredited Providers should consider the following:
- Enter confidential or proprietary information only into "private" or "closed" (i.e., company-specific or licensed) AI systems (see the sketch after this list).
- Monitor and comply with all applicable state requirements, implement best practices, and anticipate future laws and regulations related to the use of AI.
- Store AI-assisted content and learner data only in secure, access-controlled databases compliant with institutional and legal data protection standards (e.g., HIPAA, FERPA, GDPR).
- Utilize AI systems and large language models (LLMs) that are hosted in secure, vetted environments approved by the institution’s IT or compliance office.
- Implement routine audits to ensure that no content generated or processed by AI violates content validity, independence, or learner confidentiality requirements.
- Avoid using free or open-access AI platforms for accreditation-sensitive tasks unless explicitly permitted by the organization and accompanied by sufficient risk mitigation.
- Align the use of AI systems with existing standards of integrity and freedom from commercial influence required for accredited CE activities.
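To operationalize the approved-tools and platform restrictions above, a provider's intake workflow might check each tool against an institution-maintained allowlist before any upload. The following is a minimal sketch; APPROVED_TOOLS, its entries, and the check_tool helper are hypothetical assumptions.

```python
# Hypothetical institution-maintained allowlist of vetted AI tools.
APPROVED_TOOLS = {
    "ExampleLLM": {"hosting": "private", "phi_permitted": False},
    "SecureSummarizer": {"hosting": "private", "phi_permitted": True},
}

def check_tool(tool: str, contains_phi: bool) -> None:
    """Refuse any upload to an unapproved tool, or PHI to a non-PHI tool."""
    info = APPROVED_TOOLS.get(tool)
    if info is None:
        raise PermissionError(f"{tool} is not on the approved-tools list.")
    if contains_phi and not info["phi_permitted"]:
        raise PermissionError(f"{tool} is not approved for PHI.")

check_tool("SecureSummarizer", contains_phi=True)   # allowed
# check_tool("PublicChatbot", contains_phi=False)   # raises PermissionError
```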
Conclusion
Generative AI tools, especially those that are open-source or publicly accessible, introduce additional risks beyond educational content accuracy, including regulatory, intellectual property (IP), and confidentiality vulnerabilities (Masters, 2023; Ensign, Nisly, & Pardo, 2024). As this field evolves, so will guidance from the accrediting bodies; responsible AI integration into CE requires flexibility, transparency, integrity, and accountability, all of which remain core to the values of accredited continuing education.
______________
Definitions
A shared understanding of the following definitions clarifies how AI may be referenced and utilized in the CE context:
- AI-Assisted Content: Educational content created or adapted with AI tools in partnership with human authors, where AI influenced structure, wording, or analysis.
- AI-Generated Content: Educational content that was primarily produced by AI, with minimal human input prior to initial generation.
- AI Hallucination: A response produced by an artificial intelligence program or tool that appears accurate or plausible but contains inaccurate or misleading information.
- Artificial Intelligence (AI): Technologies that perform tasks typically requiring human intelligence, including natural language processing, pattern recognition, and predictive modeling. This includes both generative AI (e.g., ChatGPT, Claude) and rules-based or decision support algorithms.
- Closed-Source AI Tools: Institution-specific or licensed AI platforms that offer enhanced protections for data privacy, IP integrity, and content security.
- Large Language Models (LLMs): A type of generative AI trained on vast text datasets to perform tasks such as text generation, summarization, translation, and question answering.
- Machine Learning (ML): A subset of AI that enables systems to learn from data, identify patterns, and make decisions with minimal human intervention.
- Open-Source AI Tools: Publicly available AI systems accessible without a proprietary license, which may present increased privacy, security, and content control risks.
- Secure/Approved Tools: AI platforms vetted for use by the provider organization or institution, consistent with data security, privacy, and content integrity standards.
References
Accreditation Council for Continuing Medical Education (ACCME). (2024). Standards for Integrity and Independence in Accredited Continuing Education. https://accme.org/wp-content/uploads/2024/05/881_20220623_Standards-for-Integrity-and-Independence-in-Accredited-CE-Information-Package.pdf
American Medical Association. (2020). AMA Manual of Style: A Guide for Authors and Editors (11th ed.). Oxford University Press. https://academic.oup.com/amamanualofstyle/book/27941
Ensign, D., Nisly, S. A., & Pardo, C. O. (2024). The Future of Generative AI in Continuing Professional Development (CPD): Crowdsourcing the Alliance Community. Journal of CME, 13(1), 2437288. https://doi.org/10.1080/28338073.2024.2437288
European Commission. (2023). Proposal for a Regulation on Artificial Intelligence (AI Act). https://artificialintelligenceact.eu
Kruckel, J. (2025). Recalibrating the Social Contract: A Blueprint for AI-Resilient Societies. https://philpapers.org/rec/KRURTS-3
Massenon, R., Gambo, I., Khan, J. A., Agbonkhese, C., & Alwadain, A. (2025). “My AI is Lying to Me”: User-reported LLM hallucinations in AI mobile apps reviews. Scientific Reports, 15(1), 30397. https://www.nature.com/articles/s41598-025-15416-8
Masters, K. (2023). Ethical use of Artificial Intelligence in Health Professions Education: AMEE Guide No. 158. Medical Teacher, 45(6), 574–584. https://doi.org/10.1080/0142159X.2023.2186203
U.S. Department of Health and Human Services. (2023). Artificial Intelligence (AI) at HHS. https://www.hhs.gov/programs/topic-sites/ai/index.html