In 2022, the Department of Justice filed a complaint against Meta for allowing landlords to use Facebook's advertising targeting system to illegally exclude protected demographic groups from seeing housing ads. The DOJ's position, which Meta accepted by settling, was that algorithmic advertising targeting can constitute discrimination.
The housing settlement established a legal template that has not yet been fully applied to a domain where the evidence of harm is even more extensive: food and health advertising.
Algorithmic Bias in Food Advertising
Advertising platform algorithms do not randomly distribute food advertisements. They optimize delivery for engagement and conversion, which means they target users based on inferred demographic characteristics, behavioral signals, and psychographic profiles. Research has consistently found that this optimization process results in the disproportionate delivery of unhealthy food advertising to lower-income consumers, minority communities, and adolescents.
This is not a theoretical concern. The FTC and DOJ have documented multiple instances of food and beverage companies using algorithmic targeting to concentrate marketing for high-sugar, high-fat products in communities already bearing disproportionate chronic disease burdens.
"When an algorithm preferentially delivers unhealthy food advertising to a community with already elevated diabetes rates, we need to ask whether that constitutes discrimination — and whether the law is equipped to answer that question."
My Research: The Ethical Marketing Analytics Governance Framework (EMAGF)
Research Area 2 in my original research profile proposes the Ethical Marketing Analytics Governance Framework (EMAGF) — a systematic review and conceptual framework development study examining algorithmic advertising practices and their documented bias cases from 2018 to 2024.
The EMAGF proposes five governance standards for ethical algorithmic advertising in health and food categories: demographic parity requirements; health literacy-adjusted targeting restrictions; mandatory ad content quality scoring; platform-level healthy food advertising ratio requirements; and independent algorithmic audit obligations.
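The first of these standards, demographic parity in ad delivery, can be made concrete with a simple audit computation: compare each group's share of unhealthy-food impressions to its share of all impressions. A minimal sketch, assuming hypothetical delivery logs; the function, field names, and log format are illustrative and are not prescribed by the EMAGF itself:

```python
from collections import defaultdict

def delivery_parity_ratios(impressions, category="unhealthy_food"):
    """For each demographic group, compute (share of `category` impressions)
    divided by (share of all impressions). A ratio near 1.0 indicates parity;
    an EMAGF-style rule would flag groups whose ratio exceeds a threshold."""
    total = defaultdict(int)    # all impressions per group
    flagged = defaultdict(int)  # category impressions per group
    for imp in impressions:
        total[imp["group"]] += 1
        if imp["category"] == category:
            flagged[imp["group"]] += 1

    all_total = sum(total.values())
    all_flagged = sum(flagged.values())
    ratios = {}
    for group, n in total.items():
        share_of_all = n / all_total
        share_of_flagged = (flagged[group] / all_flagged) if all_flagged else 0.0
        ratios[group] = share_of_flagged / share_of_all
    return ratios

# Toy example: group B receives a disproportionate share of unhealthy-food ads.
logs = (
    [{"group": "A", "category": "unhealthy_food"}] * 20
    + [{"group": "A", "category": "other"}] * 80
    + [{"group": "B", "category": "unhealthy_food"}] * 60
    + [{"group": "B", "category": "other"}] * 40
)
print(delivery_parity_ratios(logs))  # B's ratio is 1.5: over-delivery
```

An independent auditor (standard five) could run exactly this kind of computation over platform delivery logs without needing access to the targeting model itself, which is one reason delivery-side metrics are attractive for enforcement.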
The ADPPA Connection
Congress is actively debating the American Data Privacy and Protection Act (ADPPA), which would impose the most comprehensive federal data privacy and algorithmic accountability requirements in U.S. history. My research directly addresses the analytical foundation that ADPPA enforcement would require: how do you measure whether an advertising algorithm is producing discriminatory outcomes? How do you define algorithmic fairness in the context of health-relevant product categories?
Related Research
This topic connects to Research Area 2 (original profile: Algorithmic Advertising Bias and Consumer Equity) and Paper 8 of the health analytics program (NLP Health Misinformation Classifier for FTC/FDA deployment). Both studies contribute evidence directly relevant to current FTC rulemaking proceedings.
What Platform Companies Should Do
I want to offer a constructive position, not just a critical one. Platforms are not inherently malicious. Their algorithms are optimized for the objective they were given — engagement and conversion. The solution is to change the objective function, not to eliminate the technology.
Specifically, I propose that food and beverage advertising algorithms should be required to optimize for a combined metric that weights conversion alongside a Health Impact Score — a standardized measure of the nutritional quality of the advertised product, calibrated to the health risk profile of the target demographic. This is not a radical proposal. It is an engineering decision. And it is one that marketing analytics researchers are uniquely positioned to help design.
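To show that this is indeed an engineering decision rather than a radical one, here is a minimal sketch of such a combined scoring function. The functional form, parameter names, and weights are all assumptions for illustration; the proposal above does not fix a specific formula:

```python
def combined_ad_score(p_conversion, health_impact, demo_risk, alpha=0.6):
    """Blend predicted conversion with a risk-calibrated health term.

    p_conversion  : predicted conversion probability for this ad/user (0-1)
    health_impact : standardized nutritional quality of the product
                    (0 = least healthy, 1 = most healthy)
    demo_risk     : health risk profile of the targeted demographic (0-1);
                    higher risk amplifies the weight on the health term
    alpha         : business weight on conversion (1 - alpha on health)
    """
    health_weight = (1 - alpha) * (1 + demo_risk)  # risk-calibrated weight
    raw = alpha * p_conversion + health_weight * health_impact
    return raw / (alpha + health_weight)           # normalize back to 0-1

# A low-nutrition product with strong predicted conversion scores lower
# when targeted at a high-risk demographic than at a low-risk one:
high_risk = combined_ad_score(0.9, 0.1, demo_risk=0.8)
low_risk = combined_ad_score(0.9, 0.1, demo_risk=0.1)
print(high_risk, low_risk)
```

The design choice worth noting is the `demo_risk` multiplier: rather than banning any product-demographic pairing outright, it continuously penalizes delivering low-nutrition ads to higher-risk communities, which is the kind of objective-function change an ad auction can absorb without new infrastructure.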