Nursing staff oppose the use of AI in hospitals

In some hospitals, artificial intelligence (AI) assistants are being used to automate nurses' tasks. A growing number of AI companies are offering ways to automate time-consuming work typically performed by nurses and medical assistants. They pitch these tools to hospital management as a way to help nursing staff work more efficiently and to combat burnout and understaffing. However, nursing unions argue that this poorly understood technology does not deliver satisfactory results: it overrides nurses' expertise and degrades the quality of patient care.
"Hospitals have been waiting for the moment when they had something that seemed to have enough legitimacy to replace nurses," says Michelle Mahon, director of nursing practice at National Nurses United. Mahon's union, the largest nurses' union in the United States, has organized more than 20 demonstrations at hospitals across the country, pressing for a say in how AI can be used and for protection from disciplinary measures if nurses choose to disregard automated guidance.
Initially, the startup Hippocratic AI promoted a rate of $9 (approximately 8.2 euros) per hour for its AI assistants, compared with about $40 (approximately 36.7 euros) per hour for a registered nurse. It has since abandoned that language; it now promotes its services and assures clients that its AI has been thoroughly tested. The company did not respond to requests for an interview on the matter.
AI in hospitals can generate false alarms and dangerous advice

Hospitals are adopting technology intended to improve care and trim costs, such as sensors, microphones, and motion detectors in patient rooms. That data is now linked to electronic medical records and analyzed to predict medical problems and direct nurses' attention, sometimes before they have evaluated the patient themselves.
Adam Hart was working in the emergency department of Dignity Health in Henderson, Nevada, when the hospital's computer system flagged a newly arrived patient for sepsis, a potentially fatal reaction to an infection. Under the hospital's protocol, a large dose of intravenous fluids had to be administered immediately. After a closer examination, however, Hart determined that the patient was on dialysis for kidney failure. Such patients must be treated carefully to avoid overloading their kidneys with fluid.
Hart raised his concern with the supervising nurse, but was told to follow the standard protocol. Only after a nearby physician intervened did the patient begin receiving a slower infusion of intravenous fluids. "We can't stop using our own heads; that's why we're paid as nurses," says Hart. "Handing over our thought processes to these devices is reckless and dangerous," he adds.
Hart and his colleagues acknowledge the goal of the AI: to make it easier to monitor multiple patients and respond quickly to problems. In practice, however, it often produces a flood of false alarms, sometimes flagging ordinary bodily functions as emergencies.
Can artificial intelligence help in hospitals?

According to Michelle Collins, dean of the School of Nursing at Loyola University, even the most sophisticated technology can miss signals that nurses routinely pick up on, such as facial expressions and odors. But people are not perfect either. "It would be absurd to reject it completely," says Collins. "We should embrace what it can do to improve our care, but we must also be careful that it does not replace the human element," she adds.
More than 100,000 nurses left the profession during the Covid-19 pandemic, the largest staffing drop in 40 years. With the US population aging and nurses retiring, the government estimates there will be more than 190,000 new nursing openings each year through 2032. Faced with this trend, hospital administrators see AI playing a vital role: not replacing care, but helping nurses and doctors gather information and communicate with patients.
Sometimes I talk to a human, and sometimes I don't

At the University of Arkansas for Medical Sciences in Little Rock, staff must make hundreds of calls each week to prepare patients for surgery. Nurses confirm information about prescriptions, heart conditions, and other issues, such as sleep apnea, that must be carefully reviewed before anesthesia.
The problem is that many patients only answer the phone in the evening, usually between dinner and bedtime. So, since January, the hospital has been using an AI assistant to contact patients and their healthcare providers, send and receive medical records, and summarize their contents for the human staff.
Qventus, the company behind the assistant, says 115 hospitals use its technology, which aims to increase hospital earnings by getting patients to surgery sooner, reducing cancellations, and easing staff burnout. While companies like Qventus focus on administrative tasks, other developers see their technology playing a much bigger role in patient care itself.
Developing techniques for patients to manage pain

Xoltar is an Israeli startup specializing in avatar-led video calls with patients. The company is collaborating with the Mayo Clinic on an assistant that teaches patients cognitive techniques for managing chronic pain, and it is developing an avatar to help smokers quit.
Nursing experts who study these programs say they may work for people who are relatively healthy and proactive about their own care. But that is not most people in the health system. "We really have to consider whether or not chatbots are intended for the patients who account for the bulk of healthcare in the United States," says Roschelle Fritz, a professor at the School of Nursing at the University of California, Davis.