Analysis

Hitting the Far Right Where It Hurts

How internet advertising funds bad information, and how activists are working to cut that off

The digital advertising industry is worth billions, but many companies don’t actually know where their ad dollars end up. Programmatic ad exchanges and other third-party platforms have enabled companies to buy ads without the hassle of going to each seller — and in doing so, have opened the door for fake-news, disinformation, and hyper-partisan sites to profit.

On this week’s CANADALAND, reporter Cherise Seucharan explores the financial incentives for bad information, and talks to the activists working to keep mainstream ad dollars from funding the far right:

Some of the largest companies in the world are directly contributing to the flow of disinformation across the internet. And many aren’t even aware it’s happening.

For years, activists and advertising experts have been sounding the alarm about how online advertising has been helping to fund hate and misinformation, due to the way it’s sold through third-party platforms.

In March, the Global Disinformation Index began pointing out how ads from mainstream organizations were appearing right next to stories with false claims about the Russian invasion of Ukraine. Even ads from a global nonprofit soliciting humanitarian aid for Ukraine were being served up alongside articles echoing Russian propaganda about “bioweapons” and supposed liberation from “neo-Nazis.”

“We are in a disinformation crisis, and there are certain publishers on the internet who are making wild amounts of money thanks to the advertising industry,” says Claire Atkin, the Canadian co-founder of Check My Ads, a non-profit consulting agency that helps advertisers understand where their ad money is going.

Atkin and her U.S.-based co-founder, Nandini Jammi, have been at the forefront of the movement to defund these sites by helping advertisers become aware of where their ads are placed.

In 2017, Jammi was part of an activist group called Sleeping Giants, which went after the monetization of the far-right American website Breitbart. Sleeping Giants would monitor the ads appearing on Breitbart, take images of those ads, and tweet them at the companies being featured. As a result, many pulled their ads, and Breitbart lost a lot of money.

So how did advertisers end up paying to have their ads appear in places they didn’t want them to be?

The answer lies in what are known as programmatic ad exchanges, which started to form in the early-ish days of the internet, when blogging platforms began to make it easy for anyone to create a website.

“It got to a point where the big advertisers couldn’t go to 10,000 small websites and say, ‘Can we buy ads from you,’ right? That just was not practical,” says Augustine Fou, an advertising consultant based in New York.

Third-party platforms began to emerge on which companies and web publishers could buy and sell ads on a larger scale, and even put up ad slots for bidding. However, this enabled bad actors to profit, too.
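The mechanics matter here, so consider a toy sketch of how such an exchange works. The sites, advertisers, and numbers below are all invented for illustration, and real exchanges are vastly more complex, but the core dynamic is the same: advertisers bid on ad slots in an automated auction, often by audience rather than by site, so the winning ad can land on a page the buyer never vetted.

```python
# Toy programmatic auction (illustrative only; not any real exchange's API).
# Advertisers submit standing bids; each impression on a participating site
# is auctioned automatically, and the publisher collects revenue regardless
# of the quality of its content.

import random

SITES = [
    {"name": "local-news.example", "legitimate": True},
    {"name": "plagiarized-clickbait.example", "legitimate": False},
]

# CPM-style standing bids from advertisers who never see the site list.
BIDS = {"AcmeCorp": 2.40, "GlobalBrand": 1.95, "CharityOrg": 1.10}

def run_auction(site):
    """Second-price auction: highest bidder wins, pays the runner-up's bid."""
    ranked = sorted(BIDS.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    price = ranked[1][1]  # winner pays the second-highest bid
    return {"site": site["name"], "advertiser": winner, "paid": price}

random.seed(0)
impressions = [run_auction(random.choice(SITES)) for _ in range(1000)]

# Revenue collected by the low-quality site, paid unknowingly by the top bidder:
bad_revenue = sum(i["paid"] for i in impressions
                  if i["site"] == "plagiarized-clickbait.example")
print(f"Revenue to low-quality site over 1,000 impressions: ${bad_revenue:.2f}")
```

Because the advertiser bids once and the exchange places the ad across thousands of sites, nothing in the transaction itself tells the buyer which publishers ended up with their money.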

Most exchanges, including Google’s, have policies banning content that spreads false information, promotes hate, or incites harassment. But Fou says bad actors are easily able to get past an exchange’s screening due to the sheer number of websites it includes. Often these “fraudsters,” as Fou calls them, are sites that publish fake news, promote hate against minority groups, or post plagiarized information and articles for the sole purpose of catching ad revenue.

“If you’re mixed into hundreds of thousands of other sites, the big advertiser simply doesn’t know,” Fou says. “So their money eventually, unknowingly flows to both fraudulent websites as well as disinformation websites.”

He says this is the fault of the exchanges for failing to uphold their own standards.

“This kind of escalating arms race, the good guys will always be at a disadvantage, because the bad guys can innovate faster, they can move faster. They don’t play by the rules,” he says.

In 2019, BuzzFeed News reported on a pair of fake-news sites called “The Albany Daily News” and “City of Edmonton News,” disguised to look like local outlets. They featured content copied from across the web, including celebrity content unrelated to the cities they supposedly served. The City of Edmonton News had snagged more pageviews than the sites of real outlets like the Edmonton Journal and Edmonton Sun.

In 2020, CNBC reporter Megan Graham decided to test just how easy it would be to run ads on a low-quality website, by creating a fake publication and copying her own CNBC articles onto it. Ultimately, three ad platforms approved the monetization of her new site, and ads for legitimate companies appeared on it. Had she continued to operate the site, she could have started collecting funds.

Major ad-aggregator platforms are working to address these problems. Google Canada tells us in an email that they “have strict ads policies and publisher policies that govern the types of ads and advertisers we allow on our platform.”

These policies prohibit content that makes unreliable claims, such as claims that could undermine trust in a democratic process, harmful health claims, and climate-change denial.

To enforce the policies, Google says they use “a mix of automated systems and human review,” and can disable ads on specific pages or remove ads from a site entirely. They say that in 2020, they took action against more than 1.3 billion publisher pages, and have removed ads from several prominent right-wing sites including The Gateway Pundit, the Bongino Report, and MyMilitia.

Many ad exchanges have started relying on AI and other technologies to help them determine whether a site is safe. One of these is keyword-blocking, which can block pages that contain any of a list of words deemed “unsafe.” Another is known as sentiment analysis, which can screen a page for its tone and the feelings it might evoke in a reader.

But those approaches have had unintended effects on legitimate news sites, whose stories often contain “negative” words and feelings.
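A minimal sketch shows why. The blocklist and headlines below are invented for the example, but the logic mirrors naive keyword-blocking: because the filter matches words rather than judging intent, a quality pandemic news story gets demonetized right alongside an actual fake-news page.

```python
# Illustrative keyword-based "brand safety" filter (word list and pages
# are invented for this example). Matching bare words cannot distinguish
# legitimate journalism about a crisis from harmful content about it.

BLOCKLIST = {"coronavirus", "death", "disease", "outbreak"}

PAGES = {
    "quality-journalism": "Hospitals adapt as coronavirus outbreak strains intensive care",
    "fake-news": "Miracle cure suppressed: coronavirus death toll a hoax, insiders say",
    "lifestyle": "Ten easy weeknight recipes for busy families",
}

def keyword_safe(text):
    """Allow ads only if no word on the page appears in the blocklist."""
    words = {w.strip(".,:!?").lower() for w in text.split()}
    return words.isdisjoint(BLOCKLIST)

for name, headline in PAGES.items():
    status = "ads allowed" if keyword_safe(headline) else "ads blocked"
    print(f"{name}: {status}")
```

Here both the real reporting and the hoax page are blocked while the recipe page sails through, which is the dynamic news publishers describe: the more rigorously an outlet covers a difficult story, the more blocklist words its pages contain.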

In April 2020, The Guardian reported that UK newspapers were poised to lose over £50 million because of advertisers blocking words related to the coronavirus pandemic. A spokesperson for Newsworks, an organization representing the UK newspaper industry, told the newspaper that the lists were “threatening our ability to fund quality journalism.” (A more recent campaign led by Newsworks is pushing for the ad industry to stop blocking words related to climate change, saying that it could prevent the funding of much-needed reporting on the topic.)

In April 2020, Postmedia, Canada’s largest newspaper publisher, held a virtual town hall for its employees, at which the company described how Covid-related ad-blocking was affecting them. CANADALAND obtained a recording from a person in attendance.

“Even though we have more users, more pageviews than we have seen in the past, a lot of advertisers online don’t want their content, their ads, associated with content that deals with disease and death,” Lucinda Chodan, then Postmedia’s senior VP of editorial and editor-in-chief of the Montreal Gazette, told those present.

She said that, as a consequence, record web traffic had coincided with a “disastrous drop in revenue.”

Postmedia and several other major Canadian news organizations declined requests for comment on the subject.

“It takes a human to understand when something is published in bad faith,” says Atkin, who believes these technologies simply won’t work.

Google says it is working to address the problem of Ukraine disinformation being monetized. In late February, the company paused monetization of Russian state-funded media. A month later, they paused monetization of “content that exploits, dismisses, or condones the war.”

However, as Atkin discovered, many Google ads were still up on Russian disinformation sites, the kind that, she says, “are lying to the Russian people and lying to the world about what is happening. And American and Canadian companies are funding this, and they’re funding it against their interests, against their knowledge, and they’re doing it because Google has basically forced them to do that.”

Danny Rogers, executive director of the Global Disinformation Index, says they’ve been trying to bring attention to this problem for years, noting an occasion when ads for the U.S. Department of Veterans Affairs were appearing on a Kremlin-funded site.

“In our minds, the responsibility is squarely on the platforms, given the outsized market power they have, to do everything that they can,” he says.

Lauren Skelly, spokesperson for Google Canada, says in an email that the company is closely monitoring the situation in Ukraine and Russia. She says that specific sites we inquired about were part of a larger group that was under review and that they will take action if they don’t meet Google’s policies.

Rogers says, “When a company that’s building quantum computers and autonomous cars and launching satellites says they’re trying their best, and it’s still not happening — that doesn’t strike me as genuine.”

More recently, Jammi and Atkin have been alerting advertisers about their ads running on The Post Millennial (TPM), a Montreal-based site that describes itself as a news and investigative journalism outfit but which has been criticized for publishing false claims about the Covid pandemic and negative portrayals of immigrants and the LGBTQ community.

Chad Loder, a computer software developer who has been working with Jammi to better understand how programmatic ads work, has counted over 20 platforms and ad exchanges from which TPM has been delisted.

“We know that our work has an impact, because they are doing everything they can to slander us,” says Jammi.

Starting in the fall of 2021, a series of TPM articles called Jammi a “deranged activist” and made claims that she attacked a Jewish journalist and that Atkin was trying to buy inappropriate material for minors.

Libby Emmons, TPM’s editor-in-chief, declined an interview with CANADALAND but says, “We stand by our reporting and will not be silenced by Nandini Jammi’s crowdsourced intimidation against our journalists.”

According to Skelly, Google has taken action against The Post Millennial in the past, over specific pages that were in violation of their policies.

When CANADALAND checked the site last week to see what kinds of ads are still running on it, we encountered two: one for MyPillow, a company whose CEO has been among the leading proponents of false claims about the 2020 U.S. election, and another, served by Google, encouraging users to subscribe to The Globe and Mail.
