6. Governing the Use of AI Ethically and Legally

From AI chatbots offering medical advice, to algorithms that analyse patient data for early disease detection, Artificial Intelligence is revolutionising health tech. But eagerly integrating AI/ML models to add value for users, especially when that involves the use of sensitive health data or decision-making that might affect a person’s well-being, raises a host of ethical and compliance concerns that are being increasingly scrutinised by both regulators and the public.

Key issues include: privacy (how is sensitive data used to train or run the AI?), bias and fairness (does the AI produce discriminatory or inaccurate results for certain groups?), transparency (can you explain how the AI makes decisions or recommendations?), and accountability (who is responsible if the AI’s output causes harm?).

From a privacy perspective, one risk is that companies might feed large amounts of personal health data into AI systems without proper safeguards. For instance, using real user data to train a machine learning model could inadvertently expose that data or lead to unintended secondary uses. If that training data is not anonymised properly, it could be breached or misused. Additionally, if you’re using an external AI API (say a cloud AI service) and sending user data to it, that needs to be assessed like any other data transfer to a vendor – does the API provider maintain confidentiality? Do you have user consent to process their data with AI in this manner?

The accuracy and reliability of AI in health contexts is another concern. If an AI-driven feature gives a user health recommendations or risk scores, any error could have real consequences (unwarranted panic, or conversely false reassurance). If these errors stem from biased data or algorithms, it could also lead to regulatory issues. For example, an AI might under-diagnose a condition in women because it was trained mostly on male data – thus potentially violating anti-discrimination laws or upcoming AI regulations about bias. Organisations that deploy AI without fully understanding its workings risk “black box” outcomes that might harm users.

Regulators are responding. The EU’s AI Act will specifically regulate AI systems deemed “high-risk,” a category that is expected to include many healthcare and life-supporting AI applications. High-risk AI (like a diagnostic tool or a system that influences treatments) will require rigorous risk assessments, documentation, transparency, human oversight, and in some cases even notification to authorities before deployment. Non-compliance could result in fines of up to 7% of global annual turnover for the most serious violations under the AI Act – even higher than the maximum fines under the GDPR. The AI Act also bans certain AI practices outright (e.g. social scoring, manipulative techniques) and imposes requirements on general-purpose AI providers. While the AI Act is EU-focused, its extraterritorial scope means that if you deploy, or even just make available, an AI system in Europe, you have to comply.

Elsewhere, the US is inching toward algorithmic accountability: the California CPPA’s regulations on automated decision-making technology (ADMT) will force companies to disclose when decisions are made algorithmically and possibly allow consumer opt-outs[50]. The FTC has also warned it will use its powers to punish “unfair or deceptive” AI practices (for example, if an AI is biased or if a company lies about what its AI does). China already has regulations requiring transparency and user choice for recommendation algorithms, and requires security reviews for “algorithms impacting public opinion”. These rules are not directly health-related, but they show the global trend.

Beyond strictly legal requirements, ethical AI is crucial for patient safety and trust. If your AI wellness coach gives someone dangerous advice, you could face not just lawsuits but public backlash and irreparable damage to your credibility. We’ve seen instances where generative AI “hallucinates” medical information – obviously problematic if presented to users as factual. Liability for AI outcomes is a grey area: if an AI suggests a course of action that leads to harm, could your company be held liable? Possibly, especially if due diligence in testing and oversight was lacking.

To govern AI properly:

Conduct AI Impact Assessments: Similar to DPIAs, these evaluate the potential risks of your AI systems and help you identify biases, privacy issues, error rates, and impacts on rights. The EU and others may mandate these for high-risk AI use cases, so get ahead by doing them now.

Data management for AI: Ensure any personal data used in developing or training AI is permitted and protected. Anonymise or pseudonymise wherever possible. Maintain a record of what data went into training and under what lawful basis.
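As a minimal illustration of the pseudonymisation step, the sketch below replaces a direct identifier with a keyed-hash token before a record is used for training. All field names, the key-handling approach, and the age-banding choice are assumptions for illustration, not a prescribed implementation:

```python
import hmac
import hashlib

# Assumption: in practice the key would live in a key vault, held
# separately from the training data, so tokens cannot be reversed.
SECRET_KEY = b"example-key-store-in-a-vault-not-in-code"

def pseudonymise(value: str) -> str:
    """Replace an identifier with a stable, non-reversible token."""
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

def prepare_training_record(record: dict) -> dict:
    """Strip direct identifiers; keep only the fields the model needs."""
    return {
        "patient_ref": pseudonymise(record["patient_id"]),  # tokenised, not the raw ID
        "age_band": record["age"] // 10 * 10,               # generalise exact age to a band
        "blood_pressure": record["blood_pressure"],
    }

record = {"patient_id": "NHS-123456", "age": 47, "blood_pressure": 128}
clean = prepare_training_record(record)
```

Because the same input always yields the same token, pseudonymised records can still be linked across datasets for training purposes without exposing the underlying identifier.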

Bias mitigation: Actively test your algorithms for bias or disparate outcomes. Use diverse training data and consult domain experts. If issues are found, retrain or put constraints in place. Document these efforts – under future AI laws you might need to prove you took steps to avoid bias.
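One simple way to start testing for disparate outcomes is to compare positive-outcome rates across groups, for example using the “four-fifths” heuristic. The group labels, data, and the 0.8 threshold below are illustrative assumptions, not a legal standard:

```python
from collections import defaultdict

def selection_rates(predictions):
    """predictions: list of (group, flagged_high_risk) pairs.
    Returns the positive-outcome rate per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, flagged in predictions:
        totals[group] += 1
        if flagged:
            positives[group] += 1
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest group rate.
    A value below 0.8 warrants a closer look under the four-fifths heuristic."""
    return min(rates.values()) / max(rates.values())

# Illustrative model outputs: (group, flagged_high_risk)
preds = [("F", True), ("F", False), ("F", True), ("F", True),
         ("M", True), ("M", True), ("M", True), ("M", True)]
rates = selection_rates(preds)         # F: 0.75, M: 1.0
ratio = disparate_impact_ratio(preds and rates)
```

Logging these ratios each time the model is retrained also produces exactly the kind of documented evidence of bias testing the paragraph above recommends keeping.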

Transparency and explainability: Strive to make your AI’s functionality explainable to users. Even if the algorithm is complex, provide users with understandable reasons for outputs, e.g. “Our algorithm suggests a higher risk of X because it noted [factor1] and [factor2] in your data.” Also, label AI interactions clearly (don’t make a chatbot pretend to be human).
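A sketch of that kind of user-facing explanation: given a model’s top contributing factors, build a plain-language reason string. The condition, factor names, and weights are hypothetical:

```python
def explain(condition: str, factors: dict, top_n: int = 2) -> str:
    """Turn the top-weighted factors into a plain-language explanation."""
    top = sorted(factors, key=factors.get, reverse=True)[:top_n]
    return (f"Our algorithm suggests a higher risk of {condition} "
            f"because it noted {' and '.join(top)} in your data.")

# Hypothetical factor weights from a risk model
msg = explain("hypertension", {"high sodium intake": 0.6,
                               "family history": 0.4,
                               "low activity": 0.1})
```

Even this simple template beats a bare risk score: the user sees which inputs drove the output, and the factor list doubles as an audit record of what the model relied on.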

Human-in-the-loop: For any significant decisions or recommendations (especially those affecting health or eligibility for services), consider having a human review or an override mechanism. Many regulations encourage or require human oversight for high-impact automated decisions. Even if not required, it’s a good safety practice.
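A human-in-the-loop gate can be as simple as routing high-impact outputs to a review queue instead of releasing them automatically. The threshold value and field names below are assumptions for illustration:

```python
# Assumption: outputs scoring at or above this risk level need human sign-off.
REVIEW_THRESHOLD = 0.7

def route_recommendation(risk_score: float, recommendation: str) -> dict:
    """Queue high-risk AI outputs for human review; release the rest."""
    if risk_score >= REVIEW_THRESHOLD:
        return {"status": "pending_review",
                "assigned_to": "clinical_team",
                "recommendation": recommendation}
    return {"status": "auto_released",
            "recommendation": recommendation}
```

The key design choice is that the default for anything high-impact is to hold, not to release – a human decision is required to let the output through.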

Monitoring and iteration: Once an AI feature is live, monitor its outputs and user feedback. Set up a process to handle complaints or corrections (like if a user says the AI’s suggestion was wrong or harmful). Continuously improve the model or rules as needed.

Define roles and responsibilities: Establish who “owns” AI governance in your organisation. Some companies set up an AI Ethics Committee; others assign the Chief Data Officer or similar to oversee AI compliance. Involve multidisciplinary perspectives – technical, legal, clinical (if it’s health advice), etc.

In short, you should treat your AI with the same rigour as you would a core product offering – because to regulators and users, it is just that. The goal is to reap AI’s benefits (personalisation, scalability, efficiency) without undermining privacy, fairness, or safety. The companies that succeed in doing this will stand out as trustworthy innovators in the health tech field. Those that rush AI features to market without these safeguards may quickly find themselves in regulatory hot water or at the wrong end of a viral news story about an AI mishap.

Worried that you may not have fully appreciated your AI risks? Contact us to find out how we can help.

 

Act now and speak to us about your privacy requirements

Start a conversation about how Privacy Made Practical® can benefit your business.

