Less than two months after announcing it would begin using artificial intelligence tools in its marketing arm, Facebook is now being accused of letting advertisers target emotionally vulnerable children.
According to The Australian, a 23-page leaked document obtained by the outlet — marked “Confidential: Internal Only” and dated 2017 — reveals that the social media giant used algorithms to sift through the posts, pictures, and reactions of 6.4 million “high schoolers,” “tertiary students,” and “young Australians and New Zealanders…in the workforce” with the aim of learning about their emotional state.
The news outlet says the algorithms essentially looked for “moments when young people need a confidence boost” — moments when kids feel “worthless” or “insecure” and are perhaps more open to an advertiser’s message.
Facebook says the data amassed by the algorithms was never used for targeted ads, but merely “to help marketers understand how people express themselves on Facebook.” Still, the company acknowledges a “process failure” and says an investigation has been opened to correct the mistake.
That’s a good thing, because Facebook’s real-time monitoring of kids may actually violate Australian advertising-ethics standards. The Australian points out that the Australian Code for Advertising and Marketing Communications to Children defines children as age 14 and under and says any information that could potentially reveal a child’s identity must be obtained through a parent or legal guardian.
The Australian Association of National Advertisers has already gone on record stating that Facebook’s activities appear to run contrary to that code.
This is not the first controversy over Facebook’s ad targeting: last year, ProPublica conducted an investigation into the company’s marketing practices and subsequently published a report accusing it of allowing advertisers to target ads based on ethnicity.
This post originally appeared at Anti-Media.