DIY approach to audits

When faced with biased search results, social media posts, or automated hiring and credit decisions, ordinary people have few options available to them. They can stew in anger and do nothing; they can protest by leaving the platform; or they can report the problem in hopes that those responsible for the algorithm will fix it, an exercise that often feels pointless.

Researchers or journalists with technical expertise and ample resources have an additional option: They can audit the algorithm to see which inputs produce biased outputs. Such audits, conducted on behalf of affected communities, can help hold those who deploy harmful algorithms accountable.

One notable example of such an audit is ProPublica’s 2016 finding of racial bias in the COMPAS algorithm’s calculation of a criminal defendant’s risk of recidivism.

While algorithmic audits by teams of experts like those at ProPublica are clearly valuable, they’re not scalable, says Michelle Lam, a graduate student in computer science at Stanford University and a graduate fellow at Stanford HAI. “It is not practical for experts to conduct audits on behalf of all the people who are negatively affected by algorithms.” Furthermore, she says, technical experts’ awareness of the potential harms of algorithms is often limited.

To enable large-scale auditing of the effects of algorithms, Lam and her colleagues, including Stanford graduate student Mitchell Gordon, University of Pennsylvania Assistant Professor Danaë Metaxa, and Stanford professors Jeffrey Hancock, James Landay, and Michael Bernstein, decided to put auditing tools into the hands of ordinary people, particularly people from communities harmed by algorithms.

“We wanted to see if non-technical people could surface broad, systemic claims about what the system is doing, so that they can convince developers that the problem is worth their attention,” Lam says.

As a proof of concept, Lam and her colleagues created IndieLabel, a web-based tool that allows end users to audit the Perspective API, a widely used content moderation algorithm that scores the toxicity of text. A group of laypeople tasked with using the system surfaced not only the same issues with the Perspective API that technical experts had already identified, but also bias issues that had not been reported before.

Read the paper: “End-User Audits: A System Empowering Communities to Lead Large-Scale Investigations of Harmful Algorithmic Behavior.”

“I was very encouraged that people were able to take charge of these audits and explore topics that they found relevant to their own experience,” Lam says.

Going forward, end-user audits could be hosted on third-party platforms and made available to entire communities of people. But Lam also hopes that algorithm developers will incorporate end-user audits early in the development process so they can make changes to a system before it is deployed. Ultimately, Lam says, “We believe developers should be more intentional about who they’re designing for, and they should make informed decisions early on about how their system will behave in contested problem areas.”

End-user audits

Although DIY algorithmic audits may be useful in many contexts, Lam and her colleagues decided to test the approach in the setting of content moderation, with a particular focus on the Perspective API. Content publishers such as The New York Times and El País use the Perspective API in a variety of ways, including flagging content for human review or automatically classifying or rejecting it as toxic. And because the Perspective API has already been audited by technical experts, it provides a baseline for comparing how end-user auditors differ from experts in their approach to the audit task, Lam says.
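
To make that setup concrete, here is a minimal sketch, in Python, of how a publisher might score a comment with the Perspective API and route it based on the result. The endpoint and request format follow the public Perspective API; the helper names, thresholds, and routing policy are illustrative assumptions, not any publisher’s actual configuration.

```python
import requests

# Public Perspective API endpoint (requires an API key from Google/Jigsaw).
PERSPECTIVE_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"

def toxicity_score(text: str, api_key: str) -> float:
    """Return the Perspective API's TOXICITY summary score (0.0 to 1.0) for `text`."""
    payload = {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
        "languages": ["en"],
    }
    resp = requests.post(PERSPECTIVE_URL, params={"key": api_key}, json=payload)
    resp.raise_for_status()
    return resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

def route_comment(text: str, api_key: str,
                  flag_at: float = 0.7, reject_at: float = 0.9) -> str:
    """Hypothetical moderation policy: auto-reject clearly toxic comments,
    send borderline ones to human review, and publish the rest."""
    score = toxicity_score(text, api_key)
    if score >= reject_at:
        return "reject"
    if score >= flag_at:
        return "human_review"
    return "publish"
```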

As an audit tool, IndieLabel is unusual in being user-centric: It models the end-user auditor’s opinions about content toxicity across an entire dataset and then lets the auditor drill down to see where the Perspective API disagrees with them (rather than where the auditor disagrees with the Perspective API). “Often, you take the model as the gold standard and ask for the user’s opinion about it,” Lam says. “But here we take the user as the reference point against which to compare the model.”

To achieve this with IndieLabel, the end-user auditor first labels about 20 examples of content on a 5-point scale ranging from “not at all toxic” to “highly toxic.” The examples are a stratified sample that covers the range of toxicity classifications but adds an extra set of samples near the threshold between toxic and non-toxic. And while 20 may seem like a small number, the team showed that it’s enough to train a model that predicts how the auditor would label a much larger dataset, a process that takes about 30 seconds. Once the model is trained, the auditor can either rate the toxicity of more examples to improve their personal model or proceed with the audit.
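
As a rough illustration of that idea (and not IndieLabel’s actual training procedure, which the paper describes), a personal model could be as simple as a text regressor fit on the auditor’s roughly 20 ratings and then used to extrapolate those ratings across the full audit dataset. The feature choice and model below are assumptions made for the sketch.

```python
# A minimal sketch of a "personal model": predict the auditor's 1-5 toxicity
# rating for any comment, given only the ~20 examples they labeled by hand.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge

def train_personal_model(labeled_texts, ratings_1_to_5):
    """Fit a simple regressor on the auditor's hand-labeled examples."""
    vectorizer = TfidfVectorizer(min_df=1, ngram_range=(1, 2))
    X = vectorizer.fit_transform(labeled_texts)
    model = Ridge(alpha=1.0)
    model.fit(X, ratings_1_to_5)
    return vectorizer, model

def predict_personal_ratings(vectorizer, model, corpus_texts):
    """Extrapolate the auditor's ratings across the much larger audit dataset."""
    return model.predict(vectorizer.transform(corpus_texts))
```

However the personal model is actually built, its role is the same: standing in for the auditor’s judgment at dataset scale so the audit can cover far more content than anyone could label by hand.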

In the auditing step, the user selects a topic from a drop-down list of options or creates a custom topic to audit. A typical topic might be a string of words like “idiot_dumb_stupid_dumber” (or, often, more offensive words). IndieLabel then generates a histogram highlighting examples where the Perspective API’s toxicity prediction agrees or disagrees with the user’s. To better understand the system’s behavior, the auditor can view and select examples to report to the developer, as well as write notes describing why the content is toxic or non-toxic from their perspective. A single audit covering several topics takes about half an hour and produces a report that users can share with developers, who can then change the system.
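
Sketched in the same hedged spirit (again, not IndieLabel’s exact internals), the drill-down step amounts to filtering the dataset to a topic and ranking the examples where the Perspective API’s decision and the personal model’s predicted rating point in opposite directions. The keyword-based topic filter and the thresholds below are assumptions for illustration.

```python
def audit_topic(texts, api_scores, personal_ratings, keywords,
                api_threshold=0.7, personal_threshold=3.0, top_k=10):
    """Return the topic examples where the system and the auditor disagree most.

    texts            -- comments in the audit dataset
    api_scores       -- Perspective toxicity scores in [0, 1], one per comment
    personal_ratings -- personal-model ratings on the auditor's 1-5 scale
    keywords         -- e.g. ["idiot", "dumb", "stupid"], defining the topic
    """
    disagreements = []
    for text, api, mine in zip(texts, api_scores, personal_ratings):
        if not any(k in text.lower() for k in keywords):
            continue  # outside the chosen audit topic
        api_says_toxic = api >= api_threshold
        i_say_toxic = mine >= personal_threshold
        if api_says_toxic != i_say_toxic:
            # Size of the gap, after mapping the 1-5 rating onto [0, 1],
            # used to rank which examples surface first in the report.
            gap = abs(api - (mine - 1) / 4.0)
            disagreements.append((gap, text, api, mine))
    disagreements.sort(reverse=True)
    return disagreements[:top_k]
```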

The research team asked 17 non-technical auditors to put IndieLabel through its paces. Working independently, the participants turned up the same kinds of problems that previous audits by technical experts had found. But they were also able to go beyond them, drawing on their own experiences or those of the communities they belong to.

In some cases, participants agreed that the system was failing in specific ways, a clear argument that changes should be made, Lam says. In other cases, participants explored novel topics that expert auditors had been unaware of and that might deserve more attention, such as the over-flagging of content about sensitive subjects like race or sexual assault, or of words originally used as slurs but since reclaimed by marginalized communities.

There were also cases where participants held divergent views on the same audit topic. “It is important to draw out these differences,” Lam notes. For example, when it came to moderating a slur for people with intellectual disabilities, some felt there was a problem only when the word was used to insult others, while others felt the word was ableist and had no place in their community at all.

Lam says the developer whose product is being audited needs to grapple with these differences. “We hope to broaden the range of viewpoints they’re familiar with, while still giving them agency to make explicit decisions about where they want their system to align.”

Direct end-user audits

Ideally, Lam says, platforms would offer end users from diverse communities an opportunity to audit new algorithms before they are deployed. “This creates a direct feedback loop where reports go directly to the developer, who has the agency to make changes to the system before it causes harm.”

For example, IndieLabel’s approach could be adapted to audit a feed-ranking algorithm for a social media company or a job-applicant-ranking model for a large employer. “The system would need to be built around whatever model they have,” Lam says, “but the same logic and the same technical approaches can easily be transferred to a different context.”

Lam says conducting end-user audits doesn’t require a company’s buy-in. Audits can be hosted by third-party platforms, which would first have to acquire a suitable dataset. That’s more complicated, but in situations where an algorithm developer refuses to address a problem, it may be necessary. “The unfortunate downside is that you’re depending on public pressure to make the changes you want,” Lam says. On the plus side, it’s better than sending an anecdotal complaint that will get lost in the ether.

Stanford HAI’s mission is to advance AI research, education, policy, and practice to improve the human condition. Learn more.
