An Open Access Article

Type: Policy
Volume: 2025
DOI: N/A
Keywords: Artificial Intelligence in Healthcare, EU Artificial Intelligence Act (AIA), General Data Protection Regulation (GDPR), Health Insurance Portability and Accountability Act (HIPAA), AI Bill of Rights, Healthcare Technology Policy, Ethical AI
Relevant IGOs: European Union (EU), World Health Organization (WHO), Organization for Economic Co-operation and Development (OECD), G20, EU-U.S. Trade and Technology Council (TTC), Quad Alliance

Article History at IRPJ

Date Received: 12/22/2024
Date Revised:
Date Accepted:
Date Published: 01/22/2025
Assigned ID: 20250122

From Soft Law to Hard Choices: Healthcare AI Governance Across the USA and EU

Julian Lloyd Bruce; PhD Candidate, EUCLID (Euclid University), Bangui, Central African Republic and Greater Banjul, Gambia

Corresponding Author:

Julian Lloyd Bruce

Email: Julian.L.Bruce@gmail.com

ABSTRACT

This paper explores the evolving regulatory landscape of artificial intelligence (AI) in healthcare, emphasizing the diverse approaches adopted by the United States and the European Union. It examines the historical reliance on soft-law frameworks and highlights the challenges of balancing innovation with accountability in this rapidly advancing sector. Focusing on key regulatory shifts under the Trump and Biden administrations, the paper underscores the interplay between national and international strategies to address ethical, legal, and safety concerns in AI-driven healthcare solutions. The analysis extends to future considerations under the incoming second Trump administration, beginning in 2025, outlining proposed deregulation and industry self-governance initiatives to foster technological growth while safeguarding public interests. By juxtaposing these strategies against the European Union’s structured and ethically anchored policies, this study provides a comprehensive perspective on the global implications of AI governance in healthcare. The paper concludes by identifying critical challenges and opportunities for ensuring that AI technologies enhance healthcare delivery while maintaining equity, privacy, and safety standards.

1.     Background

The rapid advancement of artificial intelligence (AI) has transformed numerous industries, with healthcare emerging as a critical area of innovation. AI technologies have demonstrated remarkable potential in enhancing diagnostics, optimizing treatment plans, streamlining healthcare operations, and advancing drug discovery. However, these advancements also bring complex ethical, legal, and regulatory challenges concerning patient safety, data privacy, and equity.

Globally, regulatory approaches to healthcare AI have varied significantly, reflecting differences in governance philosophies and priorities. The United States (USA) has historically favored market-driven innovation, relying on flexible, non-binding guidelines to stimulate AI development while addressing emerging risks. Conversely, the European Union (EU) has adopted a more structured and precautionary approach, embedding ethical principles into legally binding frameworks to ensure safety, transparency, and fairness in AI deployment [3].

This divergence in regulatory philosophies stems from broader societal and political contexts. In the USA, the balance between fostering technological leadership and safeguarding public interests has shaped a patchwork of federal and state-level policies influenced by administrative shifts [1]. Meanwhile, the EU’s cohesive, value-driven strategy emphasizes the alignment of AI technologies with fundamental rights, such as human dignity and privacy, underpinned by robust legal mechanisms like the General Data Protection Regulation (GDPR) and the Artificial Intelligence Act (AIA) [2].

Understanding these contrasting regulatory landscapes is essential to addressing the global implications of AI governance in healthcare. By examining the strengths and limitations of each approach, this paper provides a foundation for exploring how international collaboration, ethical oversight, and adaptive regulatory frameworks can support the responsible integration of AI into healthcare systems worldwide.

2.     Soft Law Frameworks: Catalysts or Constraints for Healthcare AI?

The analysis of global regulatory frameworks for the use of AI in healthcare reveals that regulations predominantly lean on a soft-law approach. Soft law refers to rules and guidelines that, unlike hard law, are not legally binding but still influence behavior and practices. This includes professional guidelines, voluntary standards, and codes of conduct adopted by governments, industry, and professional bodies. Soft-law frameworks are particularly appealing in the context of healthcare AI due to the rapidly evolving nature of the technology, as they allow for flexibility and timely adaptation. However, reliance on soft law also raises concerns about enforceability and accountability, as organizations may choose not to adopt or fully implement these voluntary measures [4, 5].

In 2019, the World Health Organization (WHO) began developing a framework to support integrating digital technologies into global healthcare systems. This effort reflects a growing recognition of the transformative potential of digital innovations, including AI, in addressing complex health challenges. Building on this initiative, the WHO released a detailed set of recommendations in June 2021, outlining best practices for developing, implementing, and using AI in healthcare. These guidelines stress the importance of assessing AI technologies through a comprehensive lens, considering their benefits, limitations, feasibility, acceptability, resource demands, and impact on equity. By emphasizing these criteria, the WHO positions digital tools as vital in advancing universal health coverage, enhancing healthcare accessibility, and fostering long-term sustainability. These initiatives highlight a broader shift toward ensuring that digital health innovations are technologically advanced, ethically aligned, equitable, and capable of addressing diverse global health needs [6, 7].

3.     Uniting Stakeholders: Strengthening Soft Law for AI Accountability in Medicine

Recent research highlights additional insights into the challenges and opportunities of soft-law approaches. For instance, studies have shown that soft-law frameworks foster innovation by reducing regulatory burdens. However, they can create fragmented compliance practices across jurisdictions, hindering interoperability and trust in AI systems. The lack of mandatory oversight mechanisms can also lead to ethical lapses, such as biased algorithms or privacy violations, which could seriously affect patient safety and equity in healthcare delivery [8].

Moreover, the intersection of soft law with existing legal structures—such as the GDPR in the European Union or the Health Insurance Portability and Accountability Act (HIPAA) in the USA—has been noted as a critical area for ensuring robust governance. Recent proposals suggest that hybrid models integrating soft-law principles with binding legal mandates may balance flexibility and enforceability, enabling accountability without stifling innovation [9].

In addition, new studies have called for greater stakeholder engagement in developing these frameworks, emphasizing the importance of including diverse voices, such as patients, clinicians, ethicists, and technologists [10]. By fostering participatory governance, regulatory frameworks can better align with healthcare AI’s ethical, social, and technical complexities, ensuring these technologies are deployed responsibly and equitably. This participatory approach can also strengthen public trust, a cornerstone of AI adoption in critical sectors like healthcare.

4.     Deregulation Nation: AI Monitoring in the Early Trump Era

The Trump administration’s approach to AI regulation during its first term (2017–2021) emphasized fostering innovation through a free-market orientation while minimizing regulatory barriers. This strategy reflected a prioritization of economic growth and technological leadership, but it also revealed gaps in addressing critical ethical and legal considerations, particularly in sensitive sectors like healthcare [11].

A cornerstone of this strategy was the 2019 Executive Order on Maintaining American Leadership in Artificial Intelligence, which marked a significant shift in federal AI policy. This order established the American AI Initiative, a coordinated effort to solidify the USA’s dominance in AI research and development. The initiative emphasized five key objectives: increasing investments in AI research, enhancing accessibility to AI resources, creating AI governance standards, developing a robust AI workforce, and fostering international collaboration to protect the nation’s competitive edge. These objectives sought to accelerate the adoption of AI technologies across industries, including healthcare, where AI’s potential for innovation in diagnostics, treatment, and operational efficiency was evident [11].

In healthcare, the administration’s focus on reducing regulatory impediments facilitated the rapid adoption of AI tools for tasks such as medical imaging and predictive analytics. However, this hands-off approach drew criticism for overlooking critical safety and ethical concerns. The lack of enforceable standards for AI in clinical settings left gaps in accountability, particularly in ensuring that AI systems adhered to the rigorous accuracy and safety standards necessary for patient care [12].

To address emerging concerns, the Trump administration released draft guidelines for AI regulation in January 2020, which outlined ten key principles for federal agencies to consider when developing AI policies. These principles included promoting public trust, maintaining scientific integrity, ensuring fairness, managing risks and benefits, and enhancing transparency. While these guidelines provided a foundational framework for integrating AI technologies into critical sectors like healthcare, their reliance on voluntary compliance limited their impact. Federal agencies and private developers were encouraged, but not mandated, to adopt these principles, which led to inconsistent implementation across industries [13].

A strong focus on international competitiveness also marked the administration’s AI strategy. In response to growing concerns about China’s rapid advancements in AI, the USA actively participated in developing global standards, such as the OECD AI Principles adopted by over 40 countries in 2019. These principles aimed to promote trustworthy and ethical AI aligned with democratic values and human rights. While the Trump administration played a leadership role in these efforts, its domestic policies often lacked the robust legal frameworks necessary to align with these international commitments [14].

Despite its emphasis on innovation, the Trump administration’s regulatory stance highlighted significant challenges in ensuring that AI technologies, particularly in healthcare, were deployed safely and equitably. The reliance on voluntary guidelines and minimal oversight raised concerns about algorithmic bias, data privacy, and the long-term implications of unregulated AI adoption in clinical environments. These gaps would later influence the more structured and ethically focused AI governance framework developed under the Biden administration [12].

5.     Ethics in Action: The USA’s Regulatory Shift Under President Biden

President Biden’s administration built upon these early initiatives with a more structured and ethically driven framework. The National Artificial Intelligence Initiative Act of 2020, enacted just before Biden took office, established the National Artificial Intelligence Initiative Office (NAIIO). This office provided a centralized mechanism to coordinate AI research, development, and deployment across federal agencies, integrating ethical considerations into federal AI policy [15].

A notable area of focus under Biden was healthcare. The administration emphasized the need for AI technologies to address pressing challenges, such as streamlining diagnostics, enhancing drug discovery, and improving public health data management. Legal and regulatory oversight intensified during this period, particularly in aligning AI applications with existing laws, such as HIPAA and the California Consumer Privacy Act (CCPA). These regulations provided a framework to ensure that sensitive patient data was handled securely and ethically, particularly in AI-driven research and clinical decision-making [16].

During this period, the United States’ AI policy also reflected lessons learned from historical controversies. For instance, cases like the 2017 data-sharing agreement between the Royal Free NHS Foundation Trust and Google DeepMind, which breached the UK Data Protection Act, underscored the importance of patient consent and transparency in data use. Although this incident occurred outside the USA, it influenced global conversations on data privacy and ethical AI, prompting USA regulators to strengthen oversight mechanisms [17].

Domestically, the Federal Trade Commission (FTC) began playing a more active role in overseeing the use of AI in healthcare technologies, targeting deceptive practices and ensuring consumer protection. Meanwhile, the Department of Health and Human Services (HHS) enforced stricter compliance measures for AI technologies integrated into healthcare systems, focusing on patient safety and clinical efficacy [18].

During Biden’s presidency, the USA took a leadership role in shaping international AI norms, mainly through its contributions to the Organization for Economic Co-operation and Development (OECD) AI Principles. These principles, adopted by over 40 countries, emphasized trustworthy AI aligned with democratic values and human rights. In healthcare, the principles promoted the ethical use of AI for equitable access to medical innovations and the protection of patient data [14].

The administration also engaged in partnerships like the Quad Alliance with India, Japan, and Australia, prioritizing AI-driven solutions for public health crises and global healthcare disparities. These initiatives highlighted the USA’s commitment to using AI to address systemic health challenges while ensuring alignment with ethical and legal standards [16].

AI’s transformative potential in healthcare research was further emphasized under Biden. Federal investments focused on interdisciplinary collaborations to accelerate drug discovery, optimize clinical trials, and enhance personalized medicine. The administration prioritized ethical algorithm design and diverse, representative datasets to mitigate biases and ensure fairness in AI outcomes [16].

The Blueprint for an AI Bill of Rights, introduced in 2022 by the White House Office of Science and Technology Policy (OSTP), provided additional legal and ethical guidance. Key provisions addressed algorithmic bias, transparency, and accountability, directly impacting healthcare applications. For example, the framework emphasized the need for explainable AI in clinical settings to build trust among patients and healthcare providers [19].

Historical precedents in USA law significantly influenced the regulatory stance on AI in healthcare. The Medical Device Amendments of 1976 and subsequent updates to the Federal Food, Drug, and Cosmetic Act (FDCA) provided a template for evaluating the safety and efficacy of medical devices, including AI-driven technologies [20]. Similarly, the Genetic Information Nondiscrimination Act of 2008 (GINA) offered a framework for addressing privacy and discrimination concerns in using sensitive data, setting the stage for broader discussions on AI ethics and accountability [21].

These historical legal frameworks informed contemporary policies, ensuring that AI applications in healthcare adhered to rigorous safety standards while respecting individual rights. As AI technologies advanced, regulatory bodies adapted these precedents to address emerging challenges, such as “black-box” algorithms and the ethical implications of autonomous decision-making systems [22].

6.     Innovation Meets Ethics: The EU’s Comprehensive AI Vision

The EU has positioned itself as a global leader in AI regulation, emphasizing ethical development and deployment with a strong focus on healthcare and research. Unlike the free-market-oriented approach of the USA, the EU’s comprehensive frameworks aim to align AI technologies with European values such as human dignity, privacy, and fairness [23]. Central to this strategy is the AIA, proposed in 2021, which categorizes AI systems based on risk levels from minimal to unacceptable. High-risk applications, including healthcare diagnostics, treatment recommendations, and medical research, are subject to rigorous transparency, accountability, and data governance requirements. This risk-based approach ensures that healthcare AI technologies meet stringent standards to safeguard patient safety and uphold ethical practices. Additionally, the AIA outright prohibits specific AI applications, such as social scoring and mass surveillance, reflecting the EU’s commitment to protecting fundamental rights [24].

Data privacy forms a cornerstone of the EU’s healthcare AI governance, with the GDPR setting a global benchmark. GDPR mandates that personal health data be collected, processed, and stored transparently and only with explicit consent, ensuring that sensitive patient information remains protected. This legal framework addresses challenges in healthcare AI, such as the need for data-sharing frameworks that enable research while safeguarding patient confidentiality. By emphasizing data anonymization, encryption, and accountability, GDPR enables researchers to leverage large datasets for training AI systems, such as those used in drug discovery or predictive analytics, without compromising privacy. Moreover, GDPR’s provisions for algorithmic explainability empower patients and clinicians to understand AI-driven decisions, a critical aspect of trust-building in healthcare applications [25, 26].

Ethical considerations underpin the EU’s AI strategy, particularly in healthcare, where decisions can have profound implications for patient outcomes. The Ethics Guidelines for Trustworthy AI, published in 2019 by the High-Level Expert Group on AI, outline key requirements such as human oversight, technical robustness, transparency, and fairness. These principles guide the design and implementation of AI systems in healthcare, ensuring they are free from bias and capable of equitable outcomes. For instance, datasets that train diagnostic algorithms must be diverse and representative to avoid discriminatory practices. Transparency is also emphasized, enabling clinicians to validate AI recommendations and fostering trust among patients and healthcare providers. By embedding these ethical guidelines into its regulatory frameworks, the EU ensures that healthcare AI aligns with broader societal values while addressing complex challenges such as algorithmic bias and inequities in access [27].

The EU’s leadership in global AI governance has profound implications for healthcare innovation. Its contributions to the OECD AI Principles in 2019 and the subsequent adoption of these principles by the G20 reflect a commitment to fostering international collaboration. These efforts promote ethical AI practices across borders, supporting initiatives like cross-border data sharing in genomics and rare disease research [14]. Additionally, establishing the EU-US Trade and Technology Council (TTC) in 2021 has facilitated transatlantic alignment on AI policy, particularly in areas like algorithmic accountability and data governance. This collaboration ensures interoperability and ethical compliance in healthcare AI applications, enabling global advancements while maintaining patient trust and safety [28].

To further support the integration of AI in healthcare and research, the EU’s Coordinated Plan on Artificial Intelligence emphasizes investments in education, training, and infrastructure. Last updated in 2021, this plan promotes AI-driven advancements in clinical trials, public health analytics, and personalized medicine. By building a skilled healthcare AI workforce and addressing the digital divide, the EU ensures that these technologies are accessible and beneficial across member states, reducing disparities in healthcare delivery. Reciprocity in data sharing is another focal point, with policies requiring entities using patient data to demonstrate tangible benefits for the individuals whose data is used. This approach reinforces the principle that healthcare AI must serve public health goals and improve patient outcomes, fostering trust in its deployment [29].

By combining robust legal frameworks, ethical guidelines, and international collaboration, the EU has established a comprehensive approach to AI in healthcare. Its regulatory stance balances innovation with the need to address privacy, bias, and equity challenges, setting a standard for responsible AI governance that enhances clinical and research capabilities while protecting fundamental rights.

7.     Regulatory Horizons: What’s Next for Healthcare AI?

As the Trump administration prepares to assume office in January 2025, it has outlined a strategic agenda to reshape the regulatory landscape for AI in healthcare. Emphasizing deregulation and industry self-governance, this approach aims to foster innovation while addressing critical patient safety and data privacy concerns. Central to this strategy is reversing several AI-related policies implemented during the Biden era, particularly the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence issued in October 2023. Critics, including President-elect Trump, argue that such regulations hinder innovation and impose unnecessary constraints on AI development. Instead, the administration advocates for a framework driven by market forces, allowing ethical standards and best practices to evolve organically [30].

A key component of this strategy is the appointment of an AI Czar, who is expected to prioritize deregulation and promote market-driven solutions. This role will foster collaboration between government and industry stakeholders, encourage voluntary compliance with ethical guidelines, and reduce federal oversight on AI ethics and transparency. By minimizing government intervention, the administration aims to avoid stifling innovation and enhance the USA’s competitive edge in AI technology [31]. Simultaneously, active engagement in global regulatory discussions, such as those led by the OECD and the Quad Alliance, is envisioned to solidify the USA’s position as a leader in shaping international AI norms. Participation in global standard-setting can promote ethical and accountable AI practices while ensuring interoperability and trust in cross-border applications [32].

Within the healthcare sector, this deregulatory approach seeks to accelerate the integration of AI technologies by reducing compliance burdens and expediting the deployment of AI-driven solutions. However, these measures raise concerns about the adequacy of safeguards to protect patient data, ensure the accuracy of AI diagnostics, and address algorithmic biases. The administration relies on existing laws, such as HIPAA, to address data privacy issues rather than introducing new AI-specific regulations. This reliance on established frameworks highlights the administration’s commitment to streamlining regulation and underscores potential oversight gaps [33].

Legislatively, the administration’s agenda may include introducing the AI Accountability Act to establish enforceable standards for transparency, ethical use, and patient safety in healthcare AI systems. Expanding HIPAA’s scope or developing a modernized data protection framework could address emerging challenges, such as the need for real-time data usage auditing and securing explicit patient consent for AI-driven interventions. Additionally, leveraging AI to reduce healthcare disparities and enhance access and quality of care in underserved areas is likely to be a priority, emphasizing the transformative potential of AI in addressing systemic inequalities [34].

Despite the promise of rapid AI advancements, the administration’s policies may lead to a fragmented regulatory environment, with varying standards across states and institutions. Balancing innovation with regulation is a complex challenge; overregulation risks stifling technological progress, while underregulation could result in ethical lapses and diminished public trust. Addressing algorithmic bias remains critical to ensuring fair outcomes across diverse patient populations, requiring significant investment in diverse data collection and rigorous testing. Furthermore, developing the necessary infrastructure and skilled workforce to support an AI-driven healthcare ecosystem will demand coordinated efforts at both federal and state levels. This focus on self-governance places significant responsibility on industry players to establish and adhere to ethical standards, which may vary in rigor and enforcement, potentially affecting consistency in quality and safety across AI healthcare applications [35].

8.     Conclusion

This paper has explored the divergent regulatory approaches of the USA and the EU in governing AI in healthcare, highlighting the interplay between innovation and accountability. While the USA’s flexible, market-driven model fosters rapid technological development, it risks gaps in ethical oversight and patient safety. In contrast, the EU’s structured and ethically rigorous frameworks provide robust protections but may slow innovation. Both approaches underscore the complexity of balancing technological growth with public trust and equity.

A key insight from this analysis is the growing importance of hybrid governance models that integrate the adaptability of soft law with the enforceability of binding regulations. These models can address pressing concerns such as algorithmic bias, data privacy, and global interoperability while fostering innovation.

Future research could expand on the role of international collaborations, such as the OECD AI Principles, in harmonizing standards and promoting equity in healthcare AI deployment. Specifically, investigating the potential for globally agreed-upon, consistently applied regulatory frameworks to bridge the gaps between diverse national policies could offer valuable insights into creating a more cohesive, effective governance system. Such exploration is vital as AI technologies increasingly transcend borders, impacting healthcare systems worldwide.

CONFLICTS OF INTEREST

The primary author of this paper, Julian Lloyd Bruce, serves as the president of Deeply Human Inc., an artificial intelligence hardware company. This company’s primary focus is not within the realms of healthcare or clinical research. The author declares that there are no other potential conflicts of interest concerning the content, analysis, or conclusions of this paper.

ACKNOWLEDGEMENTS

The author wishes to sincerely thank Euclid University for its support throughout the research process. Special thanks are extended to Professor Laurent Cleenewerck de Kiev for his invaluable guidance, insights, and advice on the writing and submission of this paper. His expertise and encouragement were instrumental in the successful completion of this work.

REFERENCES AND BIBLIOGRAPHY

  1. Levi-Faur, D., Kariv-Teitelbaum, Y., & Medzini, R. (2021). “Regulatory Governance: History, Theories, Strategies, and Challenges.” Oxford Research Encyclopedia of Politics. https://doi.org/10.1093/acrefore/9780190228637.013.1430.
  2. Levantino, F. P., & Paolucci, F. (2024). “Advancing the Protection of Fundamental Rights Through AI Regulation: How the EU and the Council of Europe are Shaping the Future.” European Yearbook on Human Rights 2024. https://doi.org/10.14763/2024.3.1797.
  3. “The EU and U.S. Diverge on AI Regulation: A Transatlantic Comparison and Steps to Alignment.” Brookings Institution. https://www.brookings.edu/articles/the-eu-and-us-diverge-on-ai-regulation-a-transatlantic-comparison-and-steps-to-alignment/
  4. Palaniappan, K., Lin, E. Y. T., & Vogel, S. (2024). “Global Regulatory Frameworks for the Use of Artificial Intelligence (AI) in the Healthcare Services Sector.” Healthcare, 12(5), 562. https://doi.org/10.3390/healthcare12050562.
  5. Gutierrez, C. I., & Marchant, G. (2021). “A Global Perspective of Soft Law Programs for the Governance of Artificial Intelligence.” Arizona State University Center for Law, Science and Innovation. https://lsi.asulaw.org/softlaw/wp-content/uploads/sites/7/2022/08/final-database-report-002-compressed.pdf
  6. World Health Organization (WHO). “Ethics and governance of artificial intelligence for health.” June 2021. https://www.who.int/publications/i/item/9789240029200.
  7. World Health Organization (WHO). “WHO issues first global report on Artificial Intelligence (AI) in health and six guiding principles for its design and use.” June 2021. https://www.who.int/news/item/28-06-2021-who-issues-first-global-report-on-ai-in-health-and-six-guiding-principles-for-its-design-and-use
  8. Villasenor, J. (2020). “Soft law as a complement to AI regulation.” Brookings Institution. https://www.brookings.edu/articles/soft-law-as-a-complement-to-ai-regulation/
  9. Joseph, S., & Kyriakakis, J. (2023). “From soft law to hard law in business and human rights and the challenge of corporate power.” Leiden Journal of International Law, 36(2), 335-361. https://doi.org/10.1017/S0922156522000826.
  10. Kallina, E., & Singh, J. (2024). “Stakeholder Involvement for Responsible AI Development: A Process Framework.” Proceedings of the Equity and Access in Algorithms, Mechanisms, and Optimization (EAAMO ’24), San Luis Potosi, Mexico. https://doi.org/10.1145/3689904.3694698.
  11. Executive Order on Maintaining American Leadership in Artificial Intelligence (2019). “Maintaining American Leadership in Artificial Intelligence.” Federal Register. https://www.federalregister.gov/documents/2019/02/14/2019-02544/maintaining-american-leadership-in-artificial-intelligence.
  12. Ross, C. (2025). “Four pressing challenges facing Trump’s health regulators as AI makes deeper inroads into medicine.” STAT News. https://www.statnews.com/2025/01/16/trump-administration-health-care-artificial-intelligence-regulation/
  13. White House Office of Science and Technology Policy (OSTP). “Draft Guidance for Regulation of Artificial Intelligence Applications.” January 2020. https://www.whitehouse.gov/wp-content/uploads/2020/01/Draft-OMB-Memo-on-Regulation-of-AI-1-7-19.pdf
  14. OECD (2019). “OECD Principles on Artificial Intelligence.” https://www.oecd.org/en/topics/ai-principles.html
  15. H.R.6216 – National Artificial Intelligence Initiative Act of 2020. Congress.gov. https://www.congress.gov/bill/116th-congress/house-bill/6216
  16. White House Office of Science and Technology Policy (OSTP). “Fact Sheet: Key AI Accomplishments in the Year Since the Biden-Harris Administration’s Landmark Executive Order.” October 30, 2024. https://www.whitehouse.gov/briefing-room/statements-releases/2024/10/30/fact-sheet-key-ai-accomplishments-in-the-year-since-the-biden-harris-administrations-landmark-executive-order/
  17. Powles, J., & Hodson, H. (2018). “Google DeepMind and healthcare data sharing: A critical case study.” Health and Technology, 8(3), 195-203. https://doi.org/10.1007/s12553-018-0243-6.
  18. Federal Trade Commission (FTC). “FTC Issues Staff Report on AI Partnerships & Investments Study.” January 17, 2025. https://www.ftc.gov/news-events/news/press-releases/ftc-issues-staff-report-ai-partnerships-investments-study
  19. White House Office of Science and Technology Policy (OSTP). “Blueprint for an AI Bill of Rights.” October 2022. https://bidenwhitehouse.archives.gov/ostp/ai-bill-of-rights/
  20. FDA. “A History of Medical Device Regulation and Oversight in the United States.” https://www.fda.gov/medical-devices/overview-device-regulation/history-medical-device-regulation-oversight-united-states
  21. U.S. Equal Employment Opportunity Commission (EEOC). “Fact Sheet: Genetic Information Nondiscrimination Act.” https://www.eeoc.gov/laws/guidance/fact-sheet-genetic-information-nondiscrimination-act
  22. National Academies of Sciences, Engineering, and Medicine. “Artificial Intelligence in Health Care: The Hope, the Hype, the Promise, the Peril.” 2022. https://www.nap.edu/read/27111/chapter/9
  23. European Commission. “Artificial Intelligence Act.” https://artificialintelligenceact.eu/the-act/
  24. European Commission. “Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts.” April 21, 2021. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206
  25. Information Commissioner’s Office (ICO). “Guidance on AI and Data Protection.” March 15, 2023. https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/
  26. EIT Health Think Tank. “5 lessons from GDPR to inform AI data governance.” April 2021. https://thinktank.eithealth.eu/wp-content/uploads/2021/04/EIT-Health-Think-Tank-AI-Data-Governance-Infographic.pdf
  27. High-Level Expert Group on Artificial Intelligence (AI HLEG). “Ethics Guidelines for Trustworthy AI.” April 8, 2019. https://www.europarl.europa.eu/cmsdata/196377/AI%20HLEG_Ethics%20Guidelines%20for%20Trustworthy%20AI.pdf
  28. European Commission. “EU-US Trade and Technology Council (TTC).” https://commission.europa.eu/strategy-and-policy/priorities-2019-2024/stronger-europe-world/eu-us-trade-and-technology-council_en
  29. European Commission. “Coordinated Plan on Artificial Intelligence.” April 2021. https://digital-strategy.ec.europa.eu/en/policies/plan-ai
  30. Ross, C. (2025). “Four pressing challenges facing Trump’s health regulators as AI makes deeper inroads into medicine.” STAT News. January 16, 2025. https://www.statnews.com/2025/01/16/trump-administration-health-care-artificial-intelligence-regulation/
  31. TIME. “What Trump’s New AI and Crypto Czar David Sacks Means For the Tech Industry.” December 10, 2024. https://time.com/7200518/david-sacks-new-white-house-ai-crypto-czar-trump-administration/
  32. U.S. Department of State. “Artificial Intelligence (AI).” https://www.state.gov/office-of-the-science-and-technology-adviser/artificial-intelligence-ai/
  33. Healthcare IT News. “Deregulation in Healthcare AI: Balancing Innovation and Patient Safety.” January 12, 2025. https://www.healthcareitnews.com/news/deregulation-healthcare-ai-balancing-innovation-patient-safety
  34. Green, B., Murphy, A., & Robinson, E. (2024). “Accelerating health disparities research with artificial intelligence.” Frontiers in Digital Health, 6, 1330160. https://doi.org/10.3389/fdgth.2024.1330160.
  35. Wang, X., & Wu, Y. C. (2024). “Balancing Innovation and Regulation in the Age of Generative Artificial Intelligence.” Journal of Information Policy, 14(2), 12. https://doi.org/10.5325/jinfopoli.14.2024.0012.


Publisher information: The Intergovernmental Research and Policy Journal (IRPJ) is a unique interdisciplinary, peer-reviewed, and open-access journal. It operates under the authority of the only global, treaty-based intergovernmental university in the world (EUCLID), with other intergovernmental organizations in mind. Of the more than 17,000 universities worldwide, fewer than 15 are multilateral institutions; EUCLID, IRPJ’s sponsor, is the only one that is a global, multi-disciplinary, UN-registered, treaty-based institution.


IRPJ authors can be assured that their research will be widely visible thanks to the trusted Internet visibility of the journal’s “.int” domain, which virtually guarantees first-page results on matching keywords (.int domains are assigned by IANA only to vetted treaty-based organizations and are recognized as trusted authorities by search engines). In addition to its “.int” domain, IRPJ is published under an ISSN approved for intergovernmental organizations (“international publisher” status), as also used by the United Nations, the World Bank, the European Space Agency, and others.


IRPJ offers:

  1. A United Nations Treaty reference on your published article (PDF).
  2. An efficiency-driven, author-focused workflow.
  3. A novel, author-centric metric: the “Journal Efficiency Factor.”
  4. A minimal processing fee, with the possibility of a waiver.
  5. Dedicated editors who work with graduate and doctoral students.
  6. Continuous publication, i.e., articles are published immediately upon acceptance.
  7. An expected time frame from submission to publication of up to 40 calendar days.
  8. Broad thematic categories.
  9. A DOI from Crossref for every published article, with archiving by CLOCKSS.

Copyright © 2020 IRPJ et al. This is an open-access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.