Meta disbanded its Responsible AI team: Meta is in the midst of a reshuffling of its AI teams and has reportedly split up its responsible AI development team

  • @fluxion@lemmy.world

    This is gonna be another one of those things where we get more and more brazen about ignoring safeguards the closer things get to profitability, until we realize we are so fucked that we can no longer unfuck ourselves and so just keep fucking ourselves

    • @bearwithastick@feddit.ch

      Yes, but the guys who made the profit will be hailed as successful business people, while the people advocating a responsible approach will be condemned as fucking blockers of innovation and business success. The idiots at the top who spew out their shit are glamourized, while the people cleaning up at the bottom are the joke of society.

    • Iapar

      You'll look at the photos and they'll all be the same people from the responsible team, but with mustaches.

  • AutoTL;DR (bot)

    This is the best summary I could come up with:


    Meta has reportedly broken up its Responsible AI (RAI) team as it puts more of its resources into generative artificial intelligence.

    The Information’s report quotes Jon Carvill, who represents Meta, as saying that the company will “continue to prioritize and invest in safe and responsible AI development.” He added that although the company is splitting the team up, those members will “continue to support relevant cross-Meta efforts on responsible AI development and use.”

    The team already saw a restructuring earlier this year, which Business Insider wrote included layoffs that left RAI “a shell of a team.” That report went on to say the RAI team, which had existed since 2019, had little autonomy and that its initiatives had to go through lengthy stakeholder negotiations before they could be implemented.

    RAI was created to identify problems with Meta's AI training approaches, including whether the company's models are trained with adequately diverse information, with an eye toward preventing things like moderation issues on its platforms.

    Automated systems on Meta’s social platforms have led to problems like a Facebook translation issue that caused a false arrest, WhatsApp AI sticker generation that results in biased images when given certain prompts, and Instagram’s algorithms helping people find child sexual abuse materials.

    Moves like Meta’s and a similar one by Microsoft early this year come as world governments race to create regulatory guardrails for artificial intelligence development.


    The original article contains 356 words, the summary contains 231 words. Saved 35%. I’m a bot and I’m open source!

  • @Magrath@lemmy.ca

    Of course it is. It was just a PR move. The team was on a very short leash. They were never going to affect anything.

  • UltraMagnus0001

    Can’t wait for the rich to control my AI overlord which will gently guide me in their path.