Humana also using AI tool with 90% error rate to deny care, lawsuit claims::The AI model, nH Predict, is the focus of another lawsuit against UnitedHealth.

  • originalucifer
    63
    edit-2
    5 months ago

    Then it’s not an error rate.

    It’s a “fuck humans for profit” rate

    • andrew
      10
      edit-2
      5 months ago

      Which is prostitution and the Republicans definitely don’t like that (officially anyway) right?

      I’m sure a bipartisan agreement against this is coming right up.

  • @pimento64@sopuli.xyz
    29
    5 months ago

    Did they really need an AI tool? I worked in healthcare for years before this stuff came out, and back then they didn’t need AI to blanket-deny 90% of claims without reading them. United Healthcare was/is even worse.

  • @Daxtron2@startrek.website
    3
    5 months ago

    Is there proof of this? My mom works for Humana doing LTC, and neither she nor the medical directors she forwards cases to use this tool.

    • @Cheers@sh.itjust.works
      7
      5 months ago

      It’s probably on claim submission.

      My company operates as an LTC pharmacy. We pay for every claim submission, whether it's rejected or successful.

      I was on the phone the other day with my pharmacy (Optum), and they did a “test” claim, which was free for them. I know Optum owns the pharmacy, the insurance, and the PBM, but either they’re abusing their vertical integration or they have an “AI” to test claims.

  • AutoTL;DRB
    2
    5 months ago

    This is the best summary I could come up with:


    Humana, one of the nation’s largest health insurance providers, is allegedly using an artificial intelligence model with a 90 percent error rate to override doctors’ medical judgment and wrongfully deny care to elderly people on the company’s Medicare Advantage plans.

    The lawsuit, filed in the US District Court in western Kentucky, is led by two people who had a Humana Medicare Advantage Plan policy and said they were wrongfully denied needed and covered care, harming their health and finances.

    It is the second lawsuit aimed at an insurer’s use of the AI tool nH Predict, which was developed by NaviHealth to forecast how long patients will need care after a medical injury, illness, or event.

    In November, the estates of two deceased individuals brought a suit against UnitedHealth—the largest health insurance company in the US—for also allegedly using nH Predict to wrongfully deny care.

    Humana did not respond to Ars’ request for comment by the time this story initially published, but a spokesperson has since provided a statement, emphasizing that there is a “human in the loop” whenever AI tools are used.

    In both cases, the plaintiffs claim that the insurers use the flawed model to pinpoint the exact date to blindly and illegally cut off payments for post-acute care that is covered under Medicare plans—such as stays in skilled nursing facilities and inpatient rehabilitation centers.


    The original article contains 1,016 words, the summary contains 225 words. Saved 78%. I’m a bot and I’m open source!