Artificial Intelligence (AI) in Healthcare: Concerns and regulations
14 Feb 2023
Why in News?
- Artificial Intelligence (AI) was regarded as a revolutionary technology around the early 21st century. Although it has seen periods of rise and decline, its current rapid and pervasive applications have been termed the second coming of AI. It is employed in a variety of sectors, and there is a drive to create practical applications that can improve our daily lives and society. Healthcare is a highly promising, but also challenging, domain for AI.
ChatGPT: The latest model:
- While still in their early stages, AI applications are evolving rapidly.
- For instance, ChatGPT is a large language model (LLM) built using deep learning techniques and trained on large volumes of text data.
- This model has been used in a variety of applications, including language translation, text summarisation, conversation generation, text-to-text generation and others.
What is Artificial Intelligence?
- AI is a constellation of technologies that enable machines to act with higher levels of intelligence and emulate the human capabilities of sensing, comprehending and acting.
- Natural language processing and inference engines enable AI systems to analyse and understand the information they collect.
- An AI system can also take action through technologies such as expert systems and inference engines, or undertake actions in the physical world.
- These human-like capabilities are augmented by the ability to learn from experience and keep adapting over time.
- AI systems are finding ever-wider application to supplement these capabilities across various sectors.
Concerns of Using AI tools in medical field:
- The potential for misinformation to be generated: As the model is trained on a large volume of data, it may inadvertently include misinformation in its responses. This could lead to patients receiving incorrect or harmful medical advice, with potentially serious health consequences.
- The potential for bias to be introduced into the results: As the model is trained on data, it may perpetuate existing biases and stereotypes, leading to inaccurate or unfair conclusions in research studies as well as in routine care.
- Ethical concerns: In addition, AI tools’ ability to generate human-like text can raise ethical concerns in various sectors such as research, education, journalism and law.
- For example: The model can be used to generate fake scientific papers and articles, which can potentially deceive researchers and mislead the scientific community.
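The bias concern above can be illustrated with a deliberately simplified toy sketch. The dataset, group names and outcomes below are entirely invented for illustration; the point is only that a model learning from skewed data reproduces that skew in its predictions, as the under-represented group's pattern dominates the majority group's output.

```python
from collections import Counter

# Hypothetical, invented training data: outcomes recorded far more often
# for one patient group than the other. Not real medical data.
training_data = [
    ("group_a", "condition_x"),
    ("group_a", "condition_x"),
    ("group_a", "condition_x"),
    ("group_a", "condition_y"),
    ("group_b", "condition_y"),  # group_b is under-represented
]

def train(data):
    """Learn the most frequent outcome per group -- a naive 'model'."""
    by_group = {}
    for group, outcome in data:
        by_group.setdefault(group, Counter())[outcome] += 1
    return {g: c.most_common(1)[0][0] for g, c in by_group.items()}

model = train(training_data)

# The 'model' simply reproduces the skew in its training data:
# group_a is always predicted condition_x, even though condition_y occurs,
# and group_b's prediction rests on a single observation.
print(model["group_a"])  # condition_x
print(model["group_b"])  # condition_y
```

Real clinical models are vastly more complex, but the failure mode is the same in kind: whatever imbalance exists in the training data is encoded into the model's behaviour unless it is detected and corrected.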
AI tools should be used with caution, considering the context:
- Governance framework: The governance framework can help manage the potential risks and harms by setting standards, monitoring and enforcing policies and regulations, providing feedback and reports on their performance, and ensuring development and deployment with respect to ethical principles, human rights, and safety considerations.
- Ensuring the awareness about possible negative consequences: Additionally, governance frameworks can promote accountability and transparency by ensuring that researchers and practitioners are aware of the possible negative consequences of implementing this paradigm and encouraging them to employ it responsibly.
- A platform for dialogue and exchange of information: The deployment of a governance framework can provide a structured approach for dialogue and facilitate the exchange of information and perspectives among stakeholders, leading to the development of more effective solutions to the problem.
Approach for the effective implementation of AI regulation in healthcare:
- Integrating a relational governance model into the AI governance framework: Relational governance is a model that considers the relationships between various stakeholders in the governance of AI.
- Establishing international agreements and standards: At the international level, relational governance in AI in healthcare (AI-H) can be facilitated through the establishment of international agreements and standards. This includes agreements on data privacy and security, as well as ethical and transparent AI development.
- Use of AI in responsible manner across borders: By establishing a common understanding of the responsibilities of each stakeholder in AI governance, international collaboration can help to ensure that AI is used in a consistent and responsible manner across borders.
- Government regulations at national level: At the national level, relational governance in AI-H can be implemented through government regulations and policies that reflect the roles and responsibilities of each stakeholder. This includes laws and regulations on data privacy and security, as well as policies that encourage the ethical and transparent use of AI-H.
- Regular monitoring and strict compliance mechanism: Setting up periodic monitoring/auditing systems and enforcement mechanisms, and imposing sanctions on the industry for noncompliance with the legislation can all help to promote the appropriate use of AI.
- Education and awareness at the user level: Patients and healthcare providers should be informed about the benefits and risks of AI, as well as their rights and responsibilities in relation to AI use. This can help to build trust and confidence in AI systems, and encourage the responsible use of AI-H.
- Industry-led initiatives and standards at the industry level: Relational governance in AI-H can be promoted through industry-led initiatives and standards. This includes establishing industry standards and norms (for example, through the International Organization for Standardization) based on user requirements (healthcare providers, patients, and governments), as well as implementing data privacy and security measures in AI systems.
- India’s presidency of the G20 summit provides a platform to initiate dialogue on AI regulation and highlight the need for the implementation of AI regulations in healthcare. The G20 members can collaborate to create AI regulation, considering the unique needs and challenges of the healthcare sector. These measures, carried out at various levels, need to ensure that AI systems are regularly reviewed and updated, and that they remain effective and safe for patients.