Law not yet tailored to AI in healthcare



What does ever-advancing technology mean for important values in healthcare, such as autonomy, professional secrecy and privacy, liability, and the role of personal contact? Corrette Ploem, professor by special appointment of ‘law, healthcare technology and medicine’, will be examining these issues in the coming years.

Ploem occupies the chair on behalf of the KNMG at the University of Amsterdam and recently delivered her inaugural lecture. ChatGPT, she says, can become ‘the doctor’s new friend’: a digital colleague that looks over the doctor’s shoulder and guards against mistakes. But the question is: how are we going to use it? ‘A doctor is not a diagnostic machine; patients will always need doctors, if only to share their personal preferences with them. We should therefore enshrine personal contact in the provision of care in law. After all, it must always be clear under what circumstances a physical meeting is necessary and when a video call, a telephone conversation or contact via chat or e-mail is sufficient.’

Safety assessment

Current law is not yet tailored to AI, especially when it comes to advanced, self-learning systems. In which situations, for example, are doctors liable for damage to health caused by a faulty AI system? According to Ploem, doctors and healthcare institutions need more clarity on this.

Medical AI systems are currently regarded as medical devices and therefore cannot simply be brought to market: this requires a safety assessment and a CE mark. Further rules will be added once the European Artificial Intelligence Act comes into force, although legal experts warn against over-regulation, which could deter smaller companies that focus on the medical AI market. In the meantime, the users of AI systems, healthcare providers and healthcare institutions, face another problem: the Act requires them to monitor the accuracy and safety of AI systems and to be transparent about problems that arise and about deficient functioning of the system, but what exactly that supervision should look like is not yet clear, nor is the question of liability.

Ploem also focuses on such issues as they arise in the workplace of doctors and other professionals: ‘There is a particular need for guidelines and protocols to give AI a responsible place in medical practice.’ A guideline on good professional conduct in the development and use of AI already exists, but according to Ploem it does not yet provide a basis for the necessary supervision, so there is still work to be done in that area.

Another issue is whether, in the case of informed consent, patients must understand how the technology, in this case an advanced AI system, works in order to agree to its use. That is no easy task for care providers, because for them, too, its operation is usually a black box. Ploem in her inaugural lecture: ‘From current legislation and case law, I do not, in any case, derive such a detailed information obligation at the moment.’ She also doubts whether patients have any need for it.

Don’t rush

Some experts recently called for a temporary pause in the development of more advanced AI systems such as ChatGPT: development moves too fast and the dangers are too great, so better to pause for now. Ploem: ‘You shouldn’t rush such steps in technology development, but that isn’t happening in healthcare at the moment anyway. AI systems are first examined in a clinical trial. What you see is that there is a sensitivity to social aspects, such as law and ethics, from the outset. That is embedded in the research projects.’

But there are other points of attention too, such as the fact that AI systems must be trained, which requires huge amounts of data. This valuable data resides in patients’ medical records. If doctors were simply to make such data available to researchers, they would breach their medical professional secrecy. That is why the Medical Treatment Contracts Act (WGBO) stipulates that they must ask the patient’s permission first, unless doing so would disproportionately impede scientific research. For the latter, think of data from deceased patients, of very large numbers of patients, or of the risk of bias. In such situations the consent requirement no longer applies, but patients do retain an option to object, and they must be clearly informed about it.

In her inaugural lecture, Ploem asks whether the scope the WGBO offers with these exceptions to patient consent is sufficient, or whether it still hinders important research too much. Some researchers think it does, and would therefore prefer an opt-out system, something the European Commission seems willing to leave room for. Calls have even been made to make data accessible for research without patients having any say at all. European privacy regulators are rightly critical of this.

Big Tech

Ploem likes to take a practical view: ‘You should not interpret rules unnecessarily strictly. You must leave room for patients to give broad consent, verbally or digitally. That approach has also been proposed in the form of consent at the hospital gate, and it’s no secret that I support that system. It would be nice if such a system were introduced in all hospitals within about ten years.’

In the meantime, Big Tech is also happy to use all that healthcare data to feed AI systems, Ploem realizes. And that is not without dangers: ‘Take, for example, platforms where patients can ask questions; Health Square is an example. If you use such an information facility as a citizen, you yourself are responsible for the data you share. Many people don’t know that. That is a completely different situation from a GP who uses a chatbot; after all, the GP retains final responsibility for the data.’ At least as important, says Ploem, is the role of the doctor as a physician ‘with empathy and attention for the patient, especially when the patient needs personal contact. Leaving that role to technology always requires a convincing justification.’
