The Dangers of AI in the Healthcare Industry [Report]

While artificial intelligence (AI) offers benefits across many industries and applications, a recent study lays out compelling points about the challenges and dangers of using AI in the healthcare sector.

In recent years, AI has been increasingly incorporated throughout the healthcare space. Machines can now provide mental health assistance via chatbot, monitor patient health, and even predict cardiac arrest, seizures, or sepsis. AI can offer diagnoses and treatments, issue reminders for medication, create precise analytics for pathology images, and predict overall health based on electronic health records and personal history — all while easing some of the burden placed on doctors.

AI-powered predictive analytics can identify potential ailments faster than human doctors, but when it comes to decision-making, AI cannot yet fully and safely take over for human physicians.

The recent report, entitled “Artificial Intelligence, Bias and Clinical Safety,” argues that in other industries, machine learning bots are often able to quickly correct themselves after making a mistake — with or without human intervention — with little to no harm done. However, there is no room for trial and error when it comes to patient health, well-being, and safety.

Healthcare AI can’t yet weigh the pros and cons of a course of action or take a better-safe-than-sorry approach as a human doctor might. In certain situations, a doctor may “play it safe” after carefully considering the safety and comfort of a patient, whereas a medical AI bot may go full-throttle with an invasive strategy in order to produce the intended result.

Key Challenges and Dangers of AI in Healthcare

The most important factor in any kind of medical procedure, of course, is patient safety. The study, published in the medical journal BMJ, notes increasing concerns surrounding the ethical and medico-legal impact of using AI in healthcare and raises important clinical safety questions that must be addressed if these technologies are to succeed.

The report discusses the following clinical AI quality and safety issues:

  • Distributional shift — A mismatch in data due to a change of environment or circumstance can result in erroneous predictions. For example, disease patterns can change over time, leading to a disparity between training and operational data (see the sketch after this list).
  • Insensitivity to impact — AI doesn’t yet take into account the real-world consequences of its errors, such as the very different clinical costs of a false negative versus a false positive.
  • Black box decision-making — AI predictions are not open to inspection or interpretation. For example, a flaw in the training data could produce inaccurate X-ray analyses, and the opaque model offers no way to trace why.
  • Unsafe failure mode — Unlike a human doctor, an AI system may issue a diagnosis even when it has low confidence in its prediction or is working from insufficient information, rather than failing safely by declining to answer.
  • Automation complacency — Clinicians may start to trust AI tools implicitly, assuming all predictions are correct and failing to cross-check or consider alternatives.
  • Reinforcement of outmoded practice — AI can’t adapt when developments or changes in medical policy are implemented, as these systems are trained using historical data.
  • Self-fulfilling prediction — An AI machine trained to detect a certain illness may lean toward the outcome it is designed to detect.
  • Negative side effects — AI systems may suggest a treatment but fail to consider any potential unintended consequences.
  • Reward hacking — Proxies for intended goals serve as “rewards” for AI, and these clever machines are able to find hacks or loopholes in order to receive unearned rewards, without actually fulfilling the intended goal.
  • Unsafe exploration — In order to learn new strategies or get the outcome it is searching for, an AI system may start to test boundaries in an unsafe way.
  • Unscalable oversight — Because AI systems are capable of carrying out countless jobs and activities, including multitasking, monitoring such a system can be nearly impossible.
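
Of these, distributional shift is the easiest to see concretely. Below is a minimal Python sketch; it is not taken from the BMJ report, and the biomarker data, the decision rule, and the 0.8 drift value are all invented for illustration. A simple classifier is trained on one data distribution and then scored on operational data whose measurements have drifted, producing no error, only quietly degraded accuracy:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical training data: a single biomarker, with the true label
# defined by whether the biomarker exceeds a clinical threshold of 0.
X_train = rng.normal(loc=0.0, scale=1.0, size=(1000, 1))
y_train = (X_train[:, 0] > 0).astype(int)

model = LogisticRegression().fit(X_train, y_train)

# Operational data: the underlying biology (and thus the true labels)
# is unchanged, but measurement calibration has drifted by +0.8.
X_true = rng.normal(loc=0.0, scale=1.0, size=(1000, 1))
y_true = (X_true[:, 0] > 0).astype(int)
X_measured = X_true + 0.8  # distributional shift in the recorded values

print(f"accuracy on training-era data: {model.score(X_train, y_train):.2f}")
print(f"accuracy after the shift:      {model.score(X_measured, y_true):.2f}")
```

The model still returns a confident prediction for every patient; nothing in the pipeline flags that its inputs no longer resemble the data it was trained on. This is one reason the report stresses ongoing monitoring of clinical models after deployment.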

The Future of AI in Healthcare

Technological advancements are rapidly changing the face of healthcare, offering a range of benefits but also some serious drawbacks.

Although errors and less-than-perfect decision-making are inevitable in the world of healthcare, with or without AI, the recent study in BMJ shows the importance of carefully thinking through the use of AI in medical and healthcare settings. As we move further into the fourth industrial revolution, patients and practitioners alike will be keeping an eye on the latest innovations and advancements.
