
DRAFT: Will the Social Web Foundation prioritize safety?

Work in progress! Feedback welcome!

The Feedback, please section at the end has some specific questions I'm especially interested in feedback on, but other feedback is welcome as well!
Join the discussion in the Fediverse on infosec.exchange or lemmy.blahaj.zone!
Newsletter subscribers can also email me at jon@thenexus.today

To be published as Part ? of I for one welcome Bluesky, the ATmosphere, BTS Army, and millions of Brazilians to the fediverses!

...................................

A closeup of railroad tracks, with a couple of wooden ties. Lying on the ties, a blue sign with white letters saying "Safety first"
Image credit: Ricardo Wong, via Flickr

"[E]ven though millions of people left Twitter in the last two years – and millions more are ready to move as soon as there's a viable alternative – the ActivityPub Fediverse isn't growing. One reason why: today's Fediverse is unsafe by design and unsafe by default – especially for Black and Indigenous people, women of color, LGBTQIA2S+ people, Muslims, disabled people and other marginalized communities."

Mastodon and today’s ActivityPub Fediverse are unsafe by design and unsafe by default, 2023

There's a lot to say about the new Social Web Foundation (SWF). The Social Web Foundation and the elephant in the federated room included quotes from the coverage of their initial launch, and highlighted the potential upsides if they evolve in the right ways. That post then went into detail on the tradeoffs related to their engagement with Meta, which is reportedly contributing at least some of the $1 million in initial funding SWF is "closing in on" (whatever that means). But Meta's only one of the hot-button issues SWF critics have spotlighted, and More questions than answers: another post about the Social Web Foundation and the fediverses started broadening the focus to other open questions ... like whether SWF will focus on safety.

Today's ActivityPub Fediverse still isn't growing, and safety continues to be a problem. It's not hopeless, and there are some very encouraging signs of progress.1 Still, there's a lot more to be done.

SWF's mission talks about a "growing, healthy" Fediverse, but their initial plans don't seem to be paying much attention to the "healthy" part. For example:

  • SWF's initial list of projects doesn't include anything addressing current Fediverse safety issues.
  • As far as I know, none of SWF's advisors are safety experts, and IFTAS' Jaz-Michael King is the only one of their launch partners who has a history of prioritizing safety.
  • SWF's list of launch partners didn't include any of the safety-focused software projects I mentioned in footnote 1.

Meta's involvement with SWF adds to the concerns. Once Threads turns on two-way federation, there are some acute potential safety threats to the rest of the Fediverse – especially looking at Meta's failure to moderate extreme anti-trans hate. But Meta's supporters have mostly dismissed these fears with vague claims that "we have the tools". Yeah right. And earlier this year, Meta's Rachel Lambert and Peter Cottle talked about the possibility of offering their ineffective racist, anti-LGBTQIA2S+, Islamophobic automated AI-based moderation tools to the rest of the Fediverse. SWF's research director Evan Prodromou is a big fan of automated AI-based moderation, for example suggesting in Big Fedi, Small Fedi that "moderation can be automated."2 What could possibly go wrong?

Of course, as I said in an earlier post:

"[N]othing's set in stone at this point. Most non-profits' initial projects, program, staffing, network of participants, and even mission evolve. My guess is that'll be the case for SWF as well."

Will SWF evolve to prioritize safety?

If they do, will they do it in a way that avoids making the Fediverse's equity problems worse – and avoids helping Meta more than it helps people in the Fediverse?

Time will tell.

On SocialHub, SWICG Trust & Safety Task Force lead Emelia Smith suggested that SWF should commit to devoting at least X% of its resources to safety. That would be a good first step, and if X% is high enough it would send an important signal that they intend to prioritize this issue.

Note to any of SWF's current and potential funders who happen to read this: committing to spend X% of your fediverse budget on safety is a good idea for you too! Whatever SWF decides to do on this front, directly funding safety-oriented projects and organizations will complement SWF's efforts.

Clarifying how much they intend to focus on AI-based moderation (and how they intend to address the discrimination and ineffectiveness of today's tools if they do go that route) is another useful and straightforward short-term step.

Some ideas if SWF does decide to prioritize safety

The good news is that if SWF does decide to evolve in this direction, there are plenty of opportunities to have an impact. For example:

  • On SocialHub, Evan mentioned that "SWF is going to support my work (and others’) at the W3C on ActivityPub". The new SWICG Trust and Safety task force is a great place to focus these resources. Diverse participation in this effort is vital ... and, it's not realistic to ask marginalized people to volunteer in situations where others (like the many SWICG members employed by tech companies where participation is part of their job) are getting paid for their time.
  • Consent is a core value of much (although certainly not all!) of the ActivityPub Fediverse – and, as Eight tips about consent for fediverse developers discusses, a great opportunity for a potential competitive advantage. But consent-based tools and infrastructure historically haven't gotten a lot of attention in the ActivityPub Fediverse.
  • Tools on other platforms like Block Party and Filter Buddy allow for collaborative defense against harassment and toxic content, and could also apply in a federated context – initially as standalone tools if necessary, but ideally integrated into existing apps and web UIs (see the first sketch after this list). And (not to sound like a broken record) both Block Party and Filter Buddy highlight that tools designed and implemented by (and working with) marginalized people who are the targets of so much of this harassment today are likely to be the most effective.
  • Threat modeling is an important technique for improving safety (and security and privacy) that is only rarely used in the Fediverse; the second sketch after this list shows one way to structure it. Improving privacy and safety in fediverse software sketches what a potential project could look like, and also includes the important point that
"Threat modeling needs to be done from multiple perspectives, so it's crucial that participants and experts include people of color, women, trans and queer people, disabled people, and others whose safety is most at risk – and especially people at the intersections."
  • Even though I'm very skeptical about the racist, sexist, anti-LGBTQIA2S+ (etc) AI technologies that Meta and others have adopted today, and the exploitative and non-consensual data Meta and others have used to create the underlying racist, sexist, anti-LGBTQIA2S+ (etc) models that power them, there's no question that automated tools can potentially be incredibly valuable for moderation and other aspects of trust and safety. There are a lot of great AI researchers in the Fediverse who take an anti-oppressive, ethics-and-safety-first approach – like Dr. Timnit Gebru and the rest of the DAIR Institute. So there's a real opportunity here to do it right.
  • IFTAS (a non-profit that focuses on federated trust and safety) is a SWF launch partner, but there hasn't been any discussion of concrete plans. Of course I might be biased here (I'm on IFTAS' Advisory Board) ... still, it seems to me that if SWF can use their connections with their corporate and foundation funders to help unlock additional funding for IFTAS, it could magnify the impact of both organizations – as well as address concerns I've heard from several trust and safety folks that SWF will unintentionally wind up competing with IFTAS for the same pool of funding.
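To make the collaborative-defense idea above more concrete, here's a minimal sketch in Python. It's purely illustrative – the class, function, and account names are all hypothetical, and it's not based on Block Party's or Filter Buddy's actual implementations – but it shows the basic pattern: curators a community trusts maintain a shared list, and everyone who subscribes benefits from each addition.

```python
# A minimal sketch of collaborative blocking, loosely inspired by tools
# like Block Party and Filter Buddy. All names here are hypothetical --
# this illustrates the pattern, not any real tool's API.
from dataclasses import dataclass, field


@dataclass
class SharedBlocklist:
    """A community-curated list of actors to block or filter."""
    name: str
    curators: list[str]  # accounts trusted to add entries
    blocked_actors: set[str] = field(default_factory=set)

    def add(self, curator: str, actor: str) -> None:
        # Only trusted curators can add entries to the shared list.
        if curator not in self.curators:
            raise PermissionError(f"{curator} is not a curator of {self.name}")
        self.blocked_actors.add(actor)


@dataclass
class Subscriber:
    """A user who subscribes to one or more shared blocklists."""
    handle: str
    subscriptions: list[SharedBlocklist] = field(default_factory=list)

    def should_filter(self, author: str) -> bool:
        # Filter a post if its author appears on any subscribed list.
        return any(author in bl.blocked_actors for bl in self.subscriptions)


# Usage: a curator adds an abusive account once, and every subscriber
# is protected immediately -- that's the collaborative part.
defense = SharedBlocklist("community-defense", curators=["@curator@example.social"])
defense.add("@curator@example.social", "@harasser@bad.example")

alex = Subscriber("@alex@example.social", subscriptions=[defense])
print(alex.should_filter("@harasser@bad.example"))  # True
```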
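Similarly, here's a minimal sketch of what recording threats from multiple perspectives might look like. The specific threats and mitigations below are illustrative assumptions, not a real threat model – the point is that capturing whose safety is at risk alongside each threat makes missing perspectives visible.

```python
# A minimal sketch of structuring a threat model around perspectives,
# per the quote above. The entries are illustrative assumptions, not
# an actual fediverse threat model.
from dataclasses import dataclass


@dataclass(frozen=True)
class Threat:
    perspective: str   # whose safety is being considered
    asset: str         # what's at risk
    threat: str        # how it could be harmed
    mitigation: str    # a candidate defense to evaluate


THREATS = [
    Threat(
        perspective="trans user on a small instance",
        asset="reply threads",
        threat="dogpiling from federated instances with weak moderation",
        mitigation="consent-based reply controls; shared blocklists",
    ),
    Threat(
        perspective="moderator of a Black-led instance",
        asset="volunteer moderation queue",
        threat="coordinated report-spam designed to exhaust volunteers",
        mitigation="rate limiting and collaborative triage across instances",
    ),
]

# Structuring threats this way makes gaps visible: if a perspective
# never appears, nobody has modeled that community's risks.
perspectives = {t.perspective for t in THREATS}
print(f"{len(THREATS)} threats covering {len(perspectives)} perspectives")
```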

These are only the tip of the iceberg. Steps to a safer fediverse explores potential improvements in SWF's focus areas of people, protocols, and plumbing at length, and Threat modeling Meta, the fediverse, and privacy includes some recommendations for dealing with some aspects of the threat from Meta.

Of course, these are far from the only ideas ... and SWF's budget isn't big enough to fund everything. But there certainly is no shortage of worthwhile projects, so let's hope they fund at least some of them!

And let's also hope that whatever SWF winds up doing in this area, they approach it as something that benefits everybody, not just Meta-friendly instances and the corporate fediverse.

Feedback, please!

Some of the feedback I'm especially interested in:

  • Does this draft make the case strongly enough that SWF should prioritize safety and consent? What else could be added to make it a more compelling argument?
  • If SWF does decide to prioritize safety and consent, this draft lists a handful of projects I think are high value and good early targets. But there's a lot of other interesting stuff going on as well! What else should I include? Are there any here I should leave out, or describe differently?
  • This draft doesn't discuss end-to-end encryption (E2EE). E2EE's a good thing, but almost nobody I talk to thinks it will address the ActivityPub Fediverse's current safety problems – in fact, there's a chance that it could well make moderators' lives even more challenging. Is there a good discussion of this I can link to? If not, should I expand the scope of this article?

And of course, feedback is welcome in other areas as well – even spotting typos! I've linked to discussion threads at the top, or if you'd rather give feedback privately you can message me at @thenexusofprivacy@infosec.exchange, @jdp23@blahaj.zone, or (on Bluesky) @jdp23.bsky.social

Notes

1 For example:

2 Similarly, in his recent book on ActivityPub, most of the very limited discussion of moderation talks about the potential for AI. Prodromou's current employer OpenEarthFoundation is building a platform to empower cities to decarbonize with AI and data-driven solutions, and he previously founded fuzzy.ai, a developer-focused AI company, so it's not surprising that he'd favor this direction. As far as I know, though, he's never discussed potential solutions for the biases and ineffectiveness of today's AI-based moderation systems.