
Quick Resources: Weight Stigma in Social Media Algorithms

» For Plus-Size Creators, TikTok Presents a New Wave of Challenges

TikTok also kept a separate list of “special users” who were considered to be “particularly vulnerable.” Many of the creators on this list, Netzpolitik discovered, made videos with the hashtags #fatwoman or #disabled, or had rainbow flags and other LGBTQ+ markers in their profiles. TikTok moderators marked these creators with an “Auto R,” which meant that their videos, after reaching a certain number of views, would be excluded from the algorithm that suggests videos for every user’s “For You” feed.

As a result, these creators’ videos would reach a much smaller audience than the average user’s. For many, dreams of going “TikTok viral” and gaining notability on the platform would be quashed by these policies.
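To make the reported mechanism concrete, here is a minimal sketch of the suppression rule as Netzpolitik described it: once a video from an “Auto R” creator crosses a view threshold, it is withheld from “For You” recommendations. The flag name comes from the reporting; the threshold value and all function names below are illustrative assumptions, not TikTok’s actual code.

```python
# Illustrative sketch of the "Auto R" rule reported by Netzpolitik.
# The flag name is from the reporting; the view cap and function
# names are assumptions made for this example.

AUTO_R_VIEW_CAP = 5_000  # assumed cap; the real figure was not published

def eligible_for_for_you(views: int, creator_flags: set) -> bool:
    """Return True if a video may still be recommended in "For You"."""
    if "Auto R" in creator_flags and views >= AUTO_R_VIEW_CAP:
        return False  # reach is capped once the view threshold is hit
    return True

# The same 12,000-view video: recommendable for an unflagged creator,
# suppressed for a creator on the "special users" list.
print(eligible_for_for_you(12_000, set()))       # True
print(eligible_for_for_you(12_000, {"Auto R"}))  # False
```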

» Weight Stigma and Social Media: Evidence and Public Health Solutions

Weight stigma is a pressing issue that affects individuals across the weight distribution. The role of social media in both alleviating and exacerbating weight bias has received growing attention. On one hand, biased algorithms on social media platforms may filter out posts from individuals in stigmatized groups and concentrate exposure to content that perpetuates problematic norms about weight.

Individuals may also be more likely to engage in attacks due to increased anonymity and lack of substantive consequences online. The critical influence of social media in shaping beliefs may also lead to the internalization of weight stigma. However, social media could also be used as a positive agent of change.

» Stop Censoring Fat Bodies

Yes, these images show a lot of skin because, yes, I have a lot of skin, but they adhere to the same guidelines that straight-sized accounts stick to without bother.

Is it my fault that the world can only handle Fat skin (and especially super Fat skin) in a sexual context?

» This is the impact of Instagram’s accidental fat-phobic algorithm

Let’s say a smaller-bodied woman decides to wear a bathing suit that covers up 40% of her skin. Now let’s imagine a fat woman decides to wear the same bathing suit. That bathing suit may have slightly more fabric due to the larger size, but the wearer’s body could have significantly more exposed skin, causing Instagram’s algorithm to flag the image even though there is nothing inappropriate about it. Instagram likely did not set out to do this, but it ended up creating an algorithm that discriminates against fat bodies.
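To see why, consider a toy version of such a filter: flag any image in which “skin” pixels exceed a fixed share of the frame. A larger body wearing the same suit simply occupies more of the frame, so it crosses the threshold. The threshold and pixel counts below are invented for illustration; Instagram has not published how its model actually works.

```python
# Toy skin-ratio filter: flag an image when the share of pixels
# classified as skin exceeds a fixed threshold. All numbers here
# are invented for illustration.

SKIN_RATIO_THRESHOLD = 0.30  # assumed: flag if over 30% of pixels are "skin"

def is_flagged(skin_pixels: int, total_pixels: int) -> bool:
    return skin_pixels / total_pixels > SKIN_RATIO_THRESHOLD

# Two people in the same style of bathing suit, shot in the same
# 1000 x 1000 frame. The larger body fills more of the frame, so more
# pixels read as skin, even though the suit's coverage is identical.
total = 1000 * 1000
print(is_flagged(200_000, total))  # smaller body, 20% skin -> False
print(is_flagged(380_000, total))  # larger body, 38% skin -> True
```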

» Fatphobic algorithm?

Instagram say their search uses machine learning to “find the highest quality content that’s relevant to you”. This content was not relevant to us. It’s not relevant to the majority of people. The average women’s dress size in the UK is 16 (size 12 in the USA & Canada). The average size coming up in the search is, what, a size 6 or 8? So when they say machine learning… has Instagram taught its algorithm to exclude bigger bodies?

» How Algorithmic Bias Hurts People With Disabilities

A hiring tool analyzes facial movements and tone of voice to assess job candidates’ video interviews. A study reports that Facebook’s algorithm automatically shows users job ads based on inferences about their gender and race. Facial recognition tools work less accurately on people with darker skin tones.

As more instances of algorithmic bias hit the headlines, policymakers are starting to respond. But in this important conversation, a critical area is being overlooked: the impact on people with disabilities.

» Machine Learning as a Model for Cultural Learning: Teaching an Algorithm What it Means to be Fat

Public culture is a powerful source of cognitive socialization; for example, media language is full of meanings about body weight. Yet it remains unclear how individuals process meanings in public culture. We suggest that schema learning is a core mechanism by which public culture becomes personal culture. We propose that a burgeoning approach in computational text analysis, neural word embeddings, can be interpreted as a formal model for cultural learning.

Embeddings allow us to empirically model schema learning and activation from natural language data. We illustrate our approach by extracting four lower-order schemas from news articles: the gender, moral, health, and class meanings of body weight. Using these lower-order schemas, we quantify how words about body weight “fill in the blanks” about gender, morality, health, and class.

Our findings reinforce ongoing concerns that machine-learning models (e.g., of natural language) can encode and reproduce harmful human biases.
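For a hands-on sense of the “fill in the blanks” operation, here is a rough sketch that projects weight-related words onto a crude gender axis using off-the-shelf GloVe vectors loaded through gensim. The paper trains its own embeddings on news articles and builds each schema dimension from many averaged word pairs; this single-pair axis and word list are simplified stand-ins, not the authors’ exact method.

```python
# Sketch: project words onto a semantic axis built from an embedding
# difference. Uses pretrained GloVe vectors via gensim's downloader;
# the single she-he axis is a simplification of the paper's approach.
import numpy as np
import gensim.downloader as api

model = api.load("glove-wiki-gigaword-50")  # small pretrained embeddings

def axis_projection(word, pole_a, pole_b):
    """Cosine of `word` with the (pole_a - pole_b) axis.
    Positive leans toward pole_a, negative toward pole_b."""
    axis = model[pole_a] - model[pole_b]
    vec = model[word]
    return float(np.dot(vec, axis) /
                 (np.linalg.norm(vec) * np.linalg.norm(axis)))

# How do weight-related words "fill in the blank" on a gender axis?
for word in ["slender", "curvy", "overweight", "burly"]:
    print(word, round(axis_projection(word, "she", "he"), 3))
```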

» Pinterest’s New Algorithms Want You to See Every Body Type

In a novel move among major tech platforms, Pinterest is trying to attack head-on the way algorithm-tuned social media services amplify the long-standing bias toward thin, light-skinned women as the ultimate standard of beauty. Academic research and even companies’ internal studies have shown this reinforcement has left some users struggling to reconcile their own bodies with what they generally see online, causing mental trauma or eating disorders.

Every Monday, I send out my Body Liberation Guide, a thoughtful email jam-packed with resources on body liberation, weight stigma, body image and more. And it’s free. Let’s change the world together.

Hi there! I'm Lindley. I create artwork that celebrates the unique beauty of bodies that fall outside conventional "beauty" standards at Body Liberation Photography. I'm also the creator of Body Liberation Stock and the Body Love Shop, a curated central resource for body-friendly artwork and products. Find all my work here at bodyliberationphotos.com.
