Facebook said on Tuesday it has invested more than US$13 billion in safety and security measures since 2016, days after a newspaper reported the company had failed to fix "the platform's ill effects" that its own researchers had identified.
The social media giant said it now has 40,000 people working on safety and security, compared with 10,000 five years ago.
Facebook played down the Instagram app's negative effects on young users and responded weakly to alarms raised by employees over human traffickers' use of the platform in developing countries, the Wall Street Journal reported last week, citing a review of internal company documents.
"In the past, we didn't address safety and security challenges early enough in the product development process," the company said in a blog post. "But we have fundamentally changed that approach."
Facebook said its artificial intelligence technology has helped it block 3 billion fake accounts in the first half of this year. The company also removed more than 20 million pieces of false COVID-19 and vaccine content.
The company said it now removes 15 times more content that violates its hate speech standards across Facebook and its image-sharing platform Instagram than when it first began reporting such removals in 2017.