Instagram’s algorithms are pushing teenage girls who even briefly interact with fitness-related images toward a flood of weight loss content, according to new research that aimed to recreate the experience of being a child on social media.
Researchers adopting “mystery shopper” techniques created a series of Instagram profiles that reflected real children and followed the same accounts as the teen volunteers. They then started liking a handful of posts to see how quickly the network’s algorithm pushed potentially damaging material into the site’s “explore” tab, which highlights material that the social network thinks a user might like.
An account created on behalf of a 17-year-old girl liked a single post from a sportswear brand about weight loss that appeared in her Instagram explore tab. She then followed an account that was suggested to her after it posted a “before and after” weight loss journey photo.
These two actions were enough to radically change the material offered to the fake teenager on Instagram. The researchers found that her explore feed suddenly began to feature far more content about weight loss journeys and tips, exercise, and body sculpting. The material often featured “noticeably thin and, in some cases, apparently edited/distorted” body shapes.
When the experience – which consisted of browsing the site for just a few minutes a day – was recreated with a profile masquerading as a 15-year-old girl, a similar effect quickly occurred.
The researchers also replicated the behavior of a real 14-year-old boy, which led to his Instagram explore tab being inundated with photos of models, many of whom appeared to have heavily edited body types.
Instagram knew that all the accounts were registered to teenagers and served users child-focused ads alongside the material. After previous criticism, the site had recently pledged to address eating disorder content surfaced through its search functions, with the tech company putting warning labels on content including pro-anorexia material.
The research was conducted in the UK by Revealing Reality and commissioned by the 5Rights Foundation, which advocates for stricter online checks for children. Baroness Beeban Kidron, who chairs the UK charity, said it was the inherent design of recommendation engines used by social media platforms such as Instagram that can exacerbate social problems for teens. She said she was disturbed by the existence of “automated pathways” that lead children to such images.
Dame Rachel de Souza, Children’s Commissioner for England, said: “We do not allow children to access services and content inappropriate for them in the offline world. They shouldn’t be able to access it in the online world either.”
Facebook, owner of Instagram, said it was already taking more aggressive steps to keep teens safe on social media, including preventing adults from sending direct messages to teens who don’t follow them.
However, the company claimed the study’s methodology was flawed and “drew sweeping conclusions about the overall experience of teens on Instagram from a handful of avatar accounts.” It said much of the content the fake teens in the study accessed was not recommended but actively searched for or followed, and that “many of these examples predate the changes we made to provide support for people who are looking for content related to self-injury and eating disorders.”
The research comes at a tricky time for social media platforms. In just over six weeks, businesses will have to comply with the Age-Appropriate Design Code, a tough new set of rules coming into force in the UK. The code, developed by the Information Commissioner’s Office, cleans up the tangled rulebook governing how businesses should treat children online, with the aim of leading the creation of a “safe internet for children”.
Starting in September, businesses that expect children to visit their websites or use their apps will be required to present a child-friendly version of their service by default, and should not operate under the assumption that a user is an adult unless the user explicitly states otherwise.
Further restrictions will come with the Online Safety Bill, currently in draft form, which provides for fines of up to 10% of global revenue for companies that do not keep the promises made in their moderation guidelines and terms of service. – Guardian