A bit of a longer post to hopefully come (but I AM bad at Task, and currently behind on many things) but I think there's a widespread antipattern in UI presentation of calling a functionality (especially safety functionality) what the user WANTS rather than what it DOES.

I think this is especially... not "dangerous" exactly, but more impactful -- in federated/distributed networks, where there isn't a Single Platform/Owner able to play god with the single database (and with a stronger ability to pretend to play god with every user's endpoint).

Iike, "Delete Message", e.g. I think should be called "REQUEST Delete" -- it relies fundamentally on a basic minimal level of politeness from instances (federated) or individual endpoints (E2E chat).
A lot of these are "request"s, but it gets more obviously complicated when you add it in explicitly.
Like, "block user" -> "request block user".
Who are we Requesting things of? That is, who are we trusting? And who hears of this request?

@gaditb@icosahedron.website great point! This is what's currently shown when you delete a post in Bonfire -- I guess we could make it clearer that this:

  1. deletes it on your local instance
  2. sends a deletion request to other instances (and to which ones?)
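
To make the "request" framing concrete: over ActivityPub (which Bonfire federates on), step 2 is essentially a Delete activity delivered to other servers' inboxes, and honoring it is up to them. A rough sketch of that shape -- not Bonfire's actual code, and all IDs/URLs below are made up:

```python
# Sketch of what "delete a post" becomes once it leaves your instance:
# an ActivityPub Delete activity that remote servers are merely ASKED to honor.
import json

delete_activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Delete",
    "actor": "https://example.social/users/alice",
    # Locally the post is typically replaced by a Tombstone (step 1)...
    "object": {
        "type": "Tombstone",
        "id": "https://example.social/users/alice/posts/123",
    },
    # ...and the Delete is sent to the inboxes that previously received the post (step 2).
    "to": ["https://example.social/users/alice/followers"],
}

# Step 1 is under your own instance's control.
# Step 2 is just delivery: each remote inbox decides for itself
# whether to actually remove its copy.
print(json.dumps(delete_activity, indent=2))
```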
@mayel Oh wow! I didn't even look at the specifics yet (another reason this was dashed-off and non-specific) -- that's great that you've already been thinking along those same lines!

The actual specific thing with Bonfire that set me off on this line of thinking was:

"""
E.g. you can share a post with several circles but only allow replies from a specific circle, or make a post public but invisible to specific people.
"""

"Make this invisible to X, Y, Z" strikes me as something that can requires very nuanced and situation specific careful stepping around disclosure/visibility and trust. (Ranging from "none", e.g. planning a birthday party, to much higher stakes.)

@gaditb@icosahedron.website Good catch! We usually try to explain things in a more comprehensive way than in that short blurb. Specifically in this case, if something is shared publicly there can be no guarantee that the excluded people won't see it. For non-public posts there's no guarantee either (because there's no end-to-end encryption), but there's a better chance, since we distribute it only to the actors who were given permission (in a similar way to BCC in email).
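
For what it's worth, that "BCC-like" distribution maps onto ActivityPub's addressing fields: a non-public post can be addressed via bto/bcc, which the spec says servers should strip before delivery, so recipients never see the full list. A minimal sketch under those assumptions (made-up IDs, not Bonfire's actual implementation):

```python
# Sketch of BCC-like addressing: a non-public Note is delivered only to the
# actors listed, and the bcc/bto fields are removed before delivery so
# recipients don't see who else was addressed.
create_activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Create",
    "actor": "https://example.social/users/alice",
    "object": {
        "type": "Note",
        "id": "https://example.social/users/alice/posts/456",
        "content": "Only for the people this was addressed to.",
        # No "https://www.w3.org/ns/activitystreams#Public" here,
        # so this is not a public post.
    },
    "bcc": [
        "https://example.social/users/bob",
        "https://other.instance/users/carol",
    ],
}

def outgoing_copy(activity: dict) -> dict:
    """What a recipient's server receives: bcc/bto stripped before delivery."""
    return {k: v for k, v in activity.items() if k not in ("bcc", "bto")}
```

The politeness caveat from upthread still applies: nothing technical stops a recipient (or their server) from passing the content along.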

@mayel How visible is the exclusion itself, though?

Like, if I send "Hey, does anybody have a place I can crash tonight?" to @group--attendees_of_this_hacker_gathering but invisible to @guy_I_dont_trust_to_be_alone_with_but_really_dont_want_to_get_into_a_public_confrontation_about_that_right_now,

is there a chance that @defender_of_that_guy_who_sees_fear_of_him_as_an_attack_on_his_character sees that block in the metadata of the post and starts flaming me about it?

This is a contrived example, not something I'm translating from specific experiences,
and sometimes technical limitations or just "we don't have a better design for that yet" mean that you can't AVOID exposing that, so it's not necessarily "it Must Be Fixed if it reveals this",
just,
(a) finding what the intuition might be, and checking the implemented reality against that
(b) exploring what attacks there might be, and knowing where they might be made less of a post-hoc surprise

e.g.:
https://old.reddit.com/r/BlueskySocial/comments/18ebrhx/blocks_being_public_is_already_leading_to/

@gaditb@icosahedron.website

I think the Bluesky example you linked to is an unfortunate illustration of what happens when we expect technical tools to cleanly solve social problems. Making a blocklist public is obviously egregious, but even private exclusions can be exposed through old-fashioned social interactions. For example, someone casually referencing a post might unintentionally reveal to someone else that they weren't included.


In Bonfire, boundaries aren’t meant to control what others do on their own instance (apart from defining who an activity is addressed to, similarly to BCC with email). They’re more about controlling what reaches you on yours. Since ActivityPub doesn’t yet support strong security features like end-to-end encryption or object capabilities, boundaries are designed as a local-first mechanism. They help shape your experience and interactions rather than enforcing strict rules across the network.


So, in your example, if someone starts harassing you, you can block them, or just silence them or the thread in question. From your perspective, those replies disappear, and new ones won’t appear at all. Your instance silently enforces these boundaries by ignoring unwanted interactions, even if the sender’s server is unaware and still allows them to post.


It’s not perfect privacy or hard enforcement, but harm reduction and local autonomy in a messy, federated world...
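
A rough sketch of the local-first enforcement described above -- assumed and heavily simplified, not Bonfire's actual code: the receiving instance simply drops or hides activities from blocked/silenced actors at ingest time, regardless of what the sender's server allows, and never reports this back.

```python
# Sketch of local-first boundary enforcement: incoming activities are checked
# against the local user's blocks/silences before they ever reach their view.
from dataclasses import dataclass, field

@dataclass
class Boundaries:
    blocked_actors: set[str] = field(default_factory=set)
    silenced_actors: set[str] = field(default_factory=set)
    silenced_threads: set[str] = field(default_factory=set)

def accept_incoming(activity: dict, b: Boundaries) -> bool:
    """Return True if this activity should reach the local user's view."""
    actor = activity.get("actor", "")
    thread = activity.get("object", {}).get("inReplyTo", "")
    if actor in b.blocked_actors:
        return False  # dropped entirely; the sender is never told
    if actor in b.silenced_actors or thread in b.silenced_threads:
        return False  # kept out of your view, no error sent back
    return True

# Example: a reply from a blocked actor never shows up locally,
# even though their own server happily accepted and delivered it.
b = Boundaries(blocked_actors={"https://other.instance/users/harasser"})
reply = {"actor": "https://other.instance/users/harasser",
         "object": {"inReplyTo": "https://example.social/users/alice/posts/456"}}
assert accept_incoming(reply, b) is False
```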

@mayel As a side-point, one that I don't think has any safety impact, that I also don't REALLY have fully-theorized to a necessarily-useful level of specifics, and that I haven't really run by people as like "are my thoughts here, like, coherent? and useful?",

I have thoughts re:
"""
From your perspective, those replies disappear, and new ones won’t appear at all. Your instance silently enforces these boundaries by ignoring unwanted interactions, even if the sender’s server is unaware and still allows them to post.
"""

about the concepts of who "owns" (and can curate) the comments section of their post, from whose perspective. (It's never going to be a single person curating/moderating, but I.. think?.. people feel ownership feelings over it nevertheless.)

@mayel I think there are at least two distinct things to pay attention to with the Bluesky public blocks issue:

- it is a public action that can be enumerated/searched (see the sketch at the end of this post). So, someone can choose to create consequences for blocking them, transforming the action "block X visibly" into "block X and alert X (or Y) that I did that, with further consequences"
-- plus: this searching, and so the transformation, can be done at any time after the fact, so past block-actions can be made differently- and more-consequential post-hoc

- it's not intuitive to users, leading people to assume they're doing one thing ("block X and block X's visibility of my actions") when in fact they're, potentially, actively alerting X

I think they're different because imo one is a pitfall people can stumble into (and can be improved by UI work, documentation, publicizing),
and one is about the basic actions and affordances the platform provides and doesn't provide, to both the block-er and the block-ee.
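
To illustrate the "enumerable/searchable" point: in the AT Protocol, block records live in the blocker's public data repository, so anyone can list them at any time after the fact. A minimal sketch using the public listRecords endpoint -- the handle and default PDS host are placeholders, and this needs the third-party `requests` package:

```python
# Sketch: anyone can enumerate an account's public block records on Bluesky,
# with no authentication -- which is exactly the property being discussed.
import requests

def list_public_blocks(repo: str, pds: str = "https://bsky.social") -> list[dict]:
    resp = requests.get(
        f"{pds}/xrpc/com.atproto.repo.listRecords",
        params={"repo": repo, "collection": "app.bsky.graph.block", "limit": 50},
        timeout=10,
    )
    resp.raise_for_status()
    # Each record's "subject" is the DID of the blocked account.
    return resp.json().get("records", [])

# e.g. list_public_blocks("someone.bsky.social")
```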

2 more replies (not shown)
@mayel (You can tell me to, like, look into it myself rather than asking you to explain every detail and stuff. I'm not in a situation where me having that info is relevant to my safety or supporting anyone where it is relevant to theirs. I'm just asking hopefully-useful questions, so while I like learning about how this works from this angle and am enjoying it, if my repeatedly-asking-details is not particularly enjoyable/useful to you/the project, like.)
(I'm not getting that impression from you, but just in case/to be explicit about my conversational position rather than trying to rely on tonal nuances.)
@mayel (Also there's a limited degree to which I'll be able to evaluate whether a given phrasing does correctly and coherently align promise to reality.

I'd like to think I know how to WORRY well, and to understand networked/distributed systems enough to look for places where intuition might misalign, but I don't have experience needing or using these tools, to speak to success.)

@mayel (My best idea of how to get a good evaluation on that is to reach out to people who regularly use/need these as safety features (or who have had specific need for them) --

-- public figures whose public existence in general tends to attract haters who feel empowered, people who have been subjected to targeted harassment campaigns (I-think-with-no-specific-formal-knowledge-work-backing-this these are often non-public figures, whose interaction with/from a public figure or keyword made them visible to a community that uses harassment campaigns as a tool), domestic violence survivors/people who have navigated domestic instability involving someone in their social community in a position of resource-controlling power over them, people who have had previous trust-relationships collapse, etc. ...
--

@mayel
-- And I'm sure there are people who, like, have done theoretical and practical work supporting them in these scenarios, who already have a lot of expertise that doesn't need to be rederived from scratch --

And ask them what capabilities they'd need, what they'd reach for given the tools/options this software provides, and try to trace what assumptions they make and whether the implementation aligns with those.)

(Possibly also running scenarios, maybe? E.g. one target with one account, an attacker plus per-scenario supporting accounts they can also use, and a support-resource person who can react to the information they can see through the network. I'm sure people who, like, actually know what they're doing and are not just tossing out thoughts have better/stronger-formalized suggestions.)

1 more reply (not shown)