So, in other words, the list of people I have blocked on Bluesky, for whatever reason, is readable to everyone on the Internet without the use of any external service.
Is this a design I'd like to use? Or a gaping privacy hole attempted to be explained away through "implementation reasons"?
I would love the ability to copy someone else's block list. Various popular personalities on ActivityPub have had to deal with abuse from servers and individuals that I would like to preemptively filter out of my timeline.
However, that shouldn't be the default.
I'm not sure why this page keeps bringing up that "people on other platforms can find out if they're blocked or not". There are always going to be ways to detect that, but it doesn't justify the conclusion "so it may as well just be public".
There's something to be said for server-local blocking ("muting"), which this post also advocates. However, if you're going with that approach, why put "real" blocks in your protocol?
> and sequentially query the first party about each of those user IDs
Only if the third party is allowed to query whether there's a block between the first party and the second party. If the block is only visible to the first party, i.e. there's a view filter, then there's no good way of figuring this out.
Which sort of is my point. An option to share blocking information with another (trusted?) person should be possible, but as an explicit opt-in from the first party, with the third party not being able to intercept this in any meaningful way.
"Hey, I know that you have an extensive personal blocklist, can I take a look at it and adapt it to my uses so that I don't start from scratch?"
"Hey, I know that your server has an extensive personal blocklist, can I take a look at it and adapt it to my server so that I don't start from scratch?"
Yes. For all the cases you've mentioned, I might want to share my blocklist - but it should be an option rather than the default. I may want to share it only with select individuals rather than with everyone who might want to scrape it, archive it, monetize it, or stir up online drama on my behalf AND on behalf of whomever I've blocked.
Seriously, it's worse than on Twitter. Bluesky is slowly starting to look like "let's redo ActivityPub, but so we can make money off it".
If you used a per-user-list salt, then it would be about as useful as obfuscating phone numbers by hashing them. The list must then be enumerable in N*x time, where x is the time to hash one user, and N is the cardinality of the search space.
One second per legitimate hash check? You can enumerate the list in a few hours by doing a parallel search...
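To put rough numbers on that argument (all of these figures are assumptions, purely for illustration; Python):

# Back-of-envelope cost of enumerating a salted, slow-hashed blocklist.
N = 5_000_000   # known handles on the network, i.e. the whole search space
x = 1.0         # seconds per hash, even with a deliberately slow hash like Argon2
cores = 1_000   # a modest parallel search

hours = N * x / cores / 3600
print(f"~{hours:.1f} hours to test every known handle")   # ~1.4 hours

The salt stops precomputed tables, but it can't stop anyone from simply hashing every handle they already know about.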
It is a good feature. All bans by users and by entities (subreddits/forums/etc) should be visible for everyone, and also in reverse - "banned by" info should be available. This keeps moderators/admins in check somewhat.
PS: I don't use Bluesky at all, just from previous experience on forums.
I, as the moderator of my own Fediverse timeline, do not need to be kept "in check" by having a list of my blocks published anywhere. Besides, I trust my admins to do a good moderating job without needing to look at the exact list of bans they issue.
Who is the entity that is supposed to keep mods/admins "in check"?
In my experience, middle-sized communities/forums operate on trust, but because they're bigger than small communities, that's hard to do. People often don't know everyone else in the same forum/subforum, or at least haven't known them for long. Seeing an open ban list with reasons and dates, plus reputation counters that openly show who voted +/- on a person, post, or comment, keeps the reputation of semi-anonymous users in check. You see a subforum which banned a lot of good users? It's a dumpster, and you can walk away and not waste time there (or remain and troll a bit). You see a user with a close-to-zero reputation counter, but it's actually a lot of + and a lot of - votes negating each other? That's interesting; possibly he was mass-downvoted by some sub. Etc.
In short - public reputation and ban info is fun, interesting, and somewhat of a social limiter as long as reputation still exists on a particular forum. And there are really no negative factors to this. I'm speaking from experience on other, older forums I've participated in.
I wonder why they even bothered. Only their client-side "mute" feature actually works in the presence of rogue federated servers, and it only takes one person to set up such an instance. It seems like both the federation design and the decision to be public-only fundamentally can't support blocks like this, and they just slammed it in there anyway and hoped for the best. The part that doesn't work is like 99% of the effort they expended here; the client-side mute is trivial and the only part that does always work.
The rogue client question is addressed in the article:
> In theory, a bad actor could create their own rogue client or interface which ignores some of the blocking behaviors, since the content is posted to a public network. But showing content or notifications to the person who created the block won’t be possible, as that behavior is controlled by their own PDS and client. It’s technically possible for a rogue client to create replies and mentions, but they would be invisible or at least low-impact to the recipient account for the same reasons. Protocol-compliant software in the ecosystem will keep such content invisible to other accounts on the network.
That is, compliant clients are expected to mute any interactions between blocked users, even if rogue clients continue to generate such interactions. Only users on other rogue clients would be able to see them. So the hope is that almost all recipients would not be using such clients.
Yes, I know; where do you think I got the idea from? :)
Client-side mute is a different feature than the blocking discussed here. They have both (Ctrl+F "mute", the second usage in the article). I am suggesting only having the client-side mute, not bothering with all this server-side blocking stuff which is loaded with caveats.
> “Mute” behavior can be implemented entirely in a client app because it only impacts the view of the local account holder. Blocks require coordination and enforcement by other parties, because the views and actions of multiple (possibly antagonistic) parties are involved.
Only having mute is still a bad outcome, it's just that it's the corner they've written themselves into here. Private blocks is the feature people actually want, and it's not possible with their architecture.
Well, I'm mainly confused by your expression that the server-side blocking is defeated as soon as one person has a rogue client; if the goal is that the blocking user and the general audience don't see any blocked interactions, then is it not sufficient if the vast majority of them don't see the interactions because their clients are compliant? Sure, a few people using custom clients would be able to see the interactions, but if they were really interested in seeing what blocked people had to say, they could always have communicated on an external forum anyway.
(Although this implementation does let me imagine a funny scenario, where a subgroup of users make "secret" replies visible to each other but no one else, entirely using blocked interactions. I imagine many people would get mad if they discovered such a thing.)
Indeed, it's not sufficient. I don't think people want a "eh, it's almost good enough most of the time" blocking feature. They don't want their block list to be public at all (this is a dealbreaker already) and they don't want other people ever to see blocked people's replies dangling off the bottom of their posts. The client-side mute feature prevents 100% of those posts from appearing to the one client, while keeping the list private. The block feature then, necessarily, must be about other people not seeing the replies to your posts. Otherwise, all you need is the easy client-side mute feature. And indeed, I think the block feature is broken enough that client-side mute is the only thing they can reasonably do.
There WILL be people defeating the block for anyone who can see it and screenshotting it for anyone who can't. As an obvious example, imagine what would happen if Donald Trump were on Bluesky. He would receive both blocked replies AND people would leak his block list.
> and they don't want other people ever to see blocked people's replies dangling off the bottom of their posts.
What I'm saying is, "ever" is an impossible criterion, for any typical blocking implementation on any online service. Unless the whole stack is DRM'd all the way down, someone could always write up a browser extension or equivalent that lets blocked people overlay the official messages with their own messages, sourced from a separate host. (I'm picturing something like a third-party version of Twitter's Birdwatch.)
Indeed, if we allow screenshotting, then even going that far would be unnecessary: people could just screenshot (e.g.) Trump's messages on his own social media site where he quotes the original message, and that screenshot could be spread just as far as a screenshot from a custom Bluesky client.
Generally, I just don't see how it makes that much of a difference if these overlaid messages are transmitted via Bluesky's network instead of a separate network. Are you saying that Bluesky-hosted blocked messages are so much more compelling a mechanism than third-party-hosted blocked messages that this becomes substantially more problematic? (That does not seem clear at all to me a priori; don't people wanting to transmit and receive blocked messages have to put in active effort either way?) Or are you suggesting that all existing block features are broken in this way, and they're acceptable only if they're private?
> As we currently understand it, on Mastodon, you only see content when there is an explicit follow relationship between accounts and servers, and follows require mutual consent.
You can't even create a mastodon test account to test (and disprove) this theory?
It explains a lot about Bluesky if they are that ignorant about the alternatives.
You don't even need an account. You can disprove this nonsense by visiting any Mastodon server, looking at the local feed, and confirming that most posts are accessible to anyone.
Your instance won't get posts pushed to it if nobody on that instance follows a given account, but you can still pull unless an account is private, and at least one instance implements a kind of follow that pulls to let people keep tabs on accounts without openly following them.
On-the-record blocks are a protocol mistake that was ported over from Scuttlebot. They'd be better off with a client-side mute button.
It's an algorithmic choice, but most who've used these platforms for years know public blocking can also be a form of abuse on these kinds of protocols.
BS could, for example, choose to list all of the people who block you on your profile page as an act of shaming and exclusion (this was actually done by the people who sank Scuttlebot/Patchwork).
yes, and the key difference in distributed/decentralized systems is that if you post your blocks, they are broadcast to the public. On Twitter you can figure out you're blocked/shadowbanned by logging out, but in this case a peer can generate a list of all of the people who block you and advertise it.
On the app I'm writing, blocks are private. Only the blocker knows they have blocked someone. If they block, they disappear from the other member's view. It's as if they didn't exist.
As mentioned, someone can figure out that they've been blocked, but they can't be certain. There's no way to know who another member is connected to. That's also private to the member, so someone logging in as an anonymous user can see that you are available to unblocked members, but not who you are connected to. You can infer that you are blocked. Also, nothing is truly public. Only members can even see other members, and, currently, every member request is vetted by a human. We're not going for scale, which means that getting that sockpuppet account might not be so easy.
That's mainly because of the demographic we serve. There are quite a few dangerous people therein, so we need to be pretty circumspect about privacy and security. We make sure that every member has full control of their privacy and data, and we also default to the most secure settings. No dark patterns to trick people into divulging information.
That said, it's a simple community app, so we can't throw too much friction into the way members interact with each other. If someone is really worried, they shouldn't use our app (or any other social media app, because ours is more anal than most).
One important thing to note here. In decentralized social networks, blocks make very little sense and are nothing more than social convention.
There's nothing stopping you from writing a Bluesky (or Mastodon) server that doesn't respect blocks, shows a list of users that you have been blocked by or gives you block notifications. On centralized networks with closed-off APIs, you can make first-party apps respect block semantics and the point of blocks is accomplished (friction increases.) On decentralized networks, users can just migrate to non-block-respecting instances, and nobody else will ever know whether another instance respects blocks or not.
I think what you are describing is majority-accurate (arguably https://docs.joinmastodon.org/admin/config/#authorized_fetch addresses some of it, though it's not watertight) for public content. However, quite a lot of people set up their profiles/posts with access controls.
Honestly the more I hear about AT/Bluesky and some of these other new protocols, the more I keep coming back to thinking of ActivityPub as the one with the most potential for the future.
BlueSky is a better social media service than any ActivityPub system out there right now because of the content recommendation algorithms BS was designed to support.
You need to gather the messages from various huge AP instances for anything close to a good recommendation algorithm.
I think the whole algorithm situation is exactly what's wrong with social media today, but it's also what makes people come back for more. Sadly, I think BS will beat AP in this regard because of that.
I certainly understand where you're coming from but isn't that only demonstrable right now because BS is the only "huge" instance running at the moment? They haven't federated yet so we assume this is just going to work across the entire federation of servers?
Further, development on ActivityPub is not "done," correct? As in, someone could technically still write/develop their own client and attach its own algorithm over the top of ActivityPub, right?
To be clear, I hope I'm not sounding like my opinion on this is definitive or anything and I'm just having a conversation. There are definitely problems with ActivityPub, it just sounds like BlueSky is trying to solve/fix every problem that every individual person maybe/possibly/who knows will have before they even launch which, in my experience, is generally a concerning approach.
AP is set up in such a way that you get the content you want but nothing more. Scraping messages from other servers is slow and cumbersome, the entire system is set up for pushing content to people who are interested rather than pulling in content that may be interesting to you.
That's not a failure in any way. Actually, I think it's a much more respectful approach than what many social media companies are doing. However, it does severely impact the effectiveness with which this network of small servers can push "stuff you might like" to other users. Some major instances may be able to implement a Twitter-like algorithm, but most of them won't have the messages to recommend to people.
BlueSky has chosen a complex model that basically exchanges huge amounts of data. That should allow for the algorithmic crap that AP servers will struggle with.
Don't get me wrong, I consider AP to be better than BS because of the designs and business concerns involved. However, from what I can tell from the people around me, people want the addictive social media algorithms, outrage posts, drama videos, you name it. Every platform with this crap has people addicted and coming back over and over again. Mastodon and friends mostly seem to contain people interested in each other and their work (and not just in good ways) rather than people pushing for high algorithmic scores to boost follower counts.
This is why I think the fediverse will fail in terms of social media. The predatory alternatives will be able to pull in the general public with ways that AP-based servers never would.
> One proposed mechanism to make blocks less public on Bluesky is the use of bloom filters. The basic idea is to encode block relationships in a statistical data structure, and to distribute that data structure instead of the set of actual blocks. The data structure would make it easy to check if there was a block relationship between two specific accounts, but not make it easy to list all of the blocks.
a bloom filter provides no false negatives, but allows some false positives
how do you model an "A-blocks-B" relationship with this data structure?
specifically, how do you ensure that "A-blocks-B" blocks only B, and never C D or E?
As I understand it, you enter a fact into the data structure (A blocks B). Now every time you query: is “A blocks B” in the data structure, it will always return True. But also very rarely it will return True for “A blocks C” or anything else that isn’t true. It would make it harder to dump the full list, but it doesn’t solve the case of “does this user block me?” Because you can still easily query that.
> But also very rarely it will return True for “A blocks C” or anything else that isn’t true. It would make it harder to dump the full list, but it doesn’t solve the case of “does this user block me?” Because you can still easily query that.
if "A blocks C" returns true when A doesn't block C, then this is a problem, right?
it means you can trust false responses (no false negatives), but you can't trust true responses (some false positives)
if you get a true response, you have to confirm it's actually true through some other source, which must not return false positives
that other source therefore must be authoritative, and must also be query-able by any client that can query the bloom filter -- it's no panacea
a bloom filter is an optimization, not a source of truth
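To make the false-positive behavior concrete, here's a toy sketch of the kind of filter being discussed (plain Python with tiny made-up parameters; just an illustration of the mechanism, not anything Bluesky has actually specified):

import hashlib

class BloomFilter:
    def __init__(self, bits=1024, hashes=3):
        self.bits, self.hashes = bits, hashes
        self.array = bytearray(bits // 8)

    def _positions(self, item):
        # derive several bit positions per item from independent hashes
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.bits

    def add(self, item):
        for pos in self._positions(item):
            self.array[pos // 8] |= 1 << (pos % 8)

    def probably_contains(self, item):
        return all(self.array[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

blocks = BloomFilter()
blocks.add("A blocks B")   # the relationship is stored as a single opaque key

print(blocks.probably_contains("A blocks B"))   # always True: no false negatives
print(blocks.probably_contains("A blocks C"))   # almost always False, but can be a rare false positive

And it still answers "does this user block me?" for any specific pair you care to ask about, which is the limitation pointed out above.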
The obvious alternative is to have blocks be local only, ie a view filter. This would solve the privacy issue. I think I like that model better (but to be fair have not thought it through fully).
However, I think I understand why they’re doing it. Bluesky is modeled after Twitter, which has sort of public popularity contest features built in, like retweets and replies, which can let malicious users signal boost using your account. Without a public blocklist, there’s no way to stop that signal boosting, which gives the upper hand to bad actors.
The Fediverse currently has popcon features built in as well, namely, the number of likes and boosts of a given post, as collected by the home server of the post. The home server is the one that does not count interactions from blocked accounts, so a local-only view filter that only the home server knows of is enough to solve this as well.
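As a minimal sketch of what that home-server filter would look like (Python; the handles are made up):

# Home server tallies interactions, silently dropping blocked accounts.
# The blocklist never leaves the home server.
private_blocklist = {"@troll.example"}
likes = ["@alice.example", "@troll.example", "@bob.example"]

visible_likes = [acct for acct in likes if acct not in private_blocklist]
print(len(visible_likes))   # 2: the blocked account's like is never counted or shown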
Interesting. That would probably be a better place to keep the blocklists, and trusting your homeserver not to expose them over the API is a more reasonable ask in terms of trust.
It’s funny how the article focuses on the friction of circumventing blocks, which is real, but doesn’t mention that their solution greatly reduces the friction of discovering who blocks whom. Because traditionally that discovery is tedious and not scalable.
>Bluesky is modeled after Twitter, which has sort of public popularity contest features built in, like retweets and replies, which can let malicious users signal boost using your account.
I don't use twitter enough to understand what this means. How can a malicious user signal boost using your account? Regardless of blocking. I don't understand what that means. No one else can force you to retweet or reply if you don't want to.
> How can a malicious user signal boost using your account?
Not a Twitter user myself either. But I think it works like this: popular user posts something. A troll account replies to that with something edgy. Then the followers of the popular account will see the troll’s comments. Thus the troll has been signal boosted.
People retweet your post in order to draw the attention of their followers to it, which is often called a "dogpile" (or whatever). But blocking someone prevents them from retweeting you, so it's a decent (but imperfect) mitigation.
I feel like there's a way to put salts and hashes in here so that I can publish a list of conditions, "don't send traffic my way if myblocklist.includes(argon2($username + $salt))"
instead of publishing the list of people I don't want to hear from
i also think it's fine to let everyone know who i've blocked, but people have different threat models
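a minimal sketch of what i'm picturing, assuming the salt is published alongside the hashed entries (Python, with sha256 standing in for argon2 and invented handles):

import hashlib

# What gets published instead of a plain blocklist: a salt plus hashed handles.
published = {
    "salt": "my-public-salt",
    "hashes": {
        hashlib.sha256("my-public-salt@troll.example".encode()).hexdigest(),
    },
}

def should_drop(sender_handle):
    # A relaying server evaluates the published condition before sending traffic my way.
    h = hashlib.sha256((published["salt"] + sender_handle).encode()).hexdigest()
    return h in published["hashes"]

print(should_drop("@troll.example"))    # True: don't deliver
print(should_drop("@friend.example"))   # False: deliver as normal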
It’s still feasible. The search space is limited to “The list of known user names” rather than “every possible combination of N characters”. That’s the difference between computable in under an hour vs computable in millions of years.
Password hashing algorithms don't work well if the entire search space is small. If you block me it's trivial for me to check my username against your hashes to prove you blocked me. Reversing every hash is mostly a question of how many known usernames there are.
An absolutely minimal probability of false positives. The probability that you can find a hash collision that's also a valid username is even smaller. I don't think it's something you'd need to worry about.
hash functions map infinite-cardinality input sets to finite-cardinality output sets
definitionally, this means each output value maps to multiple input values -- which means uniqueness is, factually, not guaranteed
it doesn't matter if the likelihood of collisions is 1-in-10, or 1-in-100000, or 1-in-1000000000000, or etc. -- if a collision is possible at all, then uniqueness is not guaranteed, and cannot be assumed as true
You can and should assume sufficiently likely things. It's a waste of time to worry about probabilities with enough zeroes.
Especially because in the real world even a mathematically perfect system has a baseline failure rate. If the chance of hash collision is orders of magnitude lower than the baseline failure rate, then there is no downside to using that hash.
> It's a waste of time to worry about probabilities with enough zeroes.
what? no. absolutely not.
the "baseline failure rate" of a program with some input values X is zero.
> in the real world even a mathematically perfect system has a baseline failure rate.
what? no. absolutely not.
x = 1
print(x)
this is not a probabilistic program, there is no baseline failure rate above zero, the output must be "1", any other output means the program is incorrect
this line of reasoning is absolutely invalid -- hash(x) != x
Please explain how the benefit outweighs the cost to spend even 30 seconds to prevent a potential problem with 10^-40 probability. (And by that I mean 10^-40 total probability, not per-item.)
> this is not a probabilistic program, there is no baseline failure rate above zero, the output must be "1", any other output means the program is incorrect
You could have a power outage, or the OS could crash, or the CPU could have a bug on those opcodes when certain counters roll over at the exact wrong time, or a bit could flip in your memory.
The baseline failure rate for a program on a computer in the universe is never zero. This isn't abstract math. And the collision rate of many hashing systems is much smaller than that baseline failure rate.
There are even situations where assuming a hash never collides can increase reliability, because the code will be done sooner and that improves the baseline.
obviously computers are not totally infallible, solar rays can flip memory bits from 1 to 0 or vice versa -- but those failure modes are not expressed by the source code of the running program, they're triggered at a layer of abstraction well below the compiled and running software
say the probability of a solar ray bit flip is 0.01%, then consider the following function
fn x -> int {
    if (randf32() < 0.0001) { panic("boom") }
    return 123
}
x will panic with the same probability that a solar ray will flip a memory bit
is this acceptable? can i assume that calling x will never panic, in practice?
> but those failure modes are not expressed by the source code of the running program, they're triggered at a layer of abstraction well below the compiled and running software
Does that make a difference? There's no way to use source code without applying the real world to it.
> is this acceptable? can i assume that calling x will never panic, in practice?
If the probability is that high, then even if x was just { return 123 } you shouldn't assume it will work.
If you replace that code with if (secure_rand_u128() == 0) { panic("boom") }, then it would be safe to assume it won't panic.
> If the probability is that high, then even if x was just { return 123 } you shouldn't assume it will work.
if x -> { return 123 } then callers can 100% assume it will work
what is the alternative? how could callers de-risk the case when a solar ray flips a bit to make x not return 123?
they can't, because they don't have access to an execution model outside of the semantics of the language, and those language semantics literally guarantee x will return 123
this is literally basic computer science material
> if you replace that code with if (secure_rand_u128() == 0) { panic("boom") }, then it would be safe to assume it won't panic.
secure_rand_u128 returns, presumably, a random unsigned 128 bit integer
unsigned 128 bit integers can be any value between 0 and 340282366920938463463374607431768211455 -- and one of those possible return values is 0
if i call secure_rand_u128, then it absolutely can return 0, and code which assumes that it will never return 0 is, factually, buggy
if you want to make the assumption that secure_rand_u128 will never return 0, then you have to justify that assumption in the context of your specific use case -- it absolutely cannot be assumed in the general case
It reads as rhetorical. But it was also a bit of a strawman. Sometimes dismantling a rhetorical question makes sense to do. The answer to your rhetorical question was an obvious no, but it also wasn't relevant to my argument because you used such a high probability.
Deterministic program execution is an assumption we make even though it's not 100% true.
Soundness, when it comes to computer programs, isn't real.
if you want to assume P=0 when P<n then OK but you have to prove that's safe in your use case, it's not something that can be assumed as true in general, no matter how small P gets
You said in another comment "i can control what my application uses as IDs" but no not really. You can control what it does the vast vast majority of the time, but it will sometimes go wrong. The user doesn't care if it's a logic error that would also happen on impossibly perfect hardware or if it's a logic error caused by trusting your hardware.
There's an implicit "hope not to hit the ultra rare bad luck" on each line of code.
And when the probabilities are low enough it's hard to even say which version is safer sometimes.
"buggy" is a logical property of the program as expressed, not a physical property of the program as executed
a solar ray flipping a memory bit on the hardware executing an otherwise correct program does not make that program buggy
> There's an implicit "hope not to hit the ultra rare bad luck" on each line of code.
there definitely is not, at least in the context of the program as expressed
such a case of "bad luck" would violate the assumptions of the language model, and/or execution model, and/or hardware model, and/or etc., of the running program, in a way that would be likely undetectable and definitely un-fixable
it would be an invariant violation of the hardware/os/language model -- but a bug is a logic error
it is so important that programmers understand this distinction
fn f1(x int) -> int { return x }
fn f2(x int) -> int { if x==2 { panic } else { return x } }
fn f3(x int) -> int { if rand() < 1e-10 { panic } else { return x } }
f1 is correct, f2 is buggy, and f3 is buggy -- no matter if 1e-10 is 1e-10 or 1e-20 or 1e-30
> such a case of "bad luck" would violate the assumptions of the language model, and/or execution model, and/or hardware model, and/or etc., of the running program, in a way that would be likely undetectable and definitely un-fixable
Yes, assumptions. Everyone can only assume the bad luck won't happen despite it being a nonzero chance.
Even if you want to play in the land of pure math, where software does as it's told, every mainstream compiler has bugs.
Sometimes assuming things that aren't strictly true is the best path forward. That's why my first post started with "You can and should assume sufficiently likely things. It's a waste of time to worry about probabilities with enough zeroes."
> it is so important that programmers understand this distinction
Why is it important? I do understand the distinction; I'm just describing circumstances where it doesn't matter. And the breadth of those circumstances depends on the probability.
> are you OK with the bank using a hash of your account number to identify your account instead of the number itself?
Why not? Cryptographically, entropically, there's no difference between a call to "give me a uuid to throw on this new account" and "now crunch it through a sponge function and use that instead"
Your uuid generator will probably check for uniqueness, or your database will enforce unique values, so there are a few places where you would be alerted to a collision, but 2^256 is a very large number
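For a rough sense of the scale involved (birthday approximation, assuming uniformly random 256-bit outputs; the account count is invented):

# probability of any collision at all among n random 256-bit identifiers
n = 10**10                      # say, ten billion accounts
p = n * (n - 1) / 2 / 2**256
print(p)                        # ~4e-58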
Yeah, because the consequences of accidentally blocking someone on social media are just as serious as the consequences of a bank confusing two accounts.
I’m being naive possibly, but is it really required to have anything more than “mute”? (To use their language “asymmetric block”, you stop seeing their stuff, nothing changes for them in regards to your stuff)
It’s really trivial to get to accounts blocking you if all you need to do is open it in your browser’s private/incognito/guest mode.
I think ActivityPub solves this by just not notifying blocked accounts about new posts anyway. So you can just silently stop appearing in their feed. But maybe that's not feasible in Bluesky.
> Like other public social networks, if they log out of their account or use a different account, they will be able to view your content.
Why are the blocks so total (or, rather, why is "mute" not the default behavior) given the simplicity of overcoming the read block? Also, isn't the main reason to block to avoid seeing something bad anywhere?
For some reason this is an unpopular opinion on HN but I'm going to say it again: federation solves a problem users don't care about and creates problems like this (as well as all the other predictable problems we've seen from decades of email such as spam).
For context, it seems like Elon Musk is toying with the idea of removing blocking from Twitter [1].
As another aside, I'm personally exhausted with the pet projects of annoying billionaires [2].
> federation solves a problem users don't care about
The main problem federation solves is centralization of power and perceived abuse of that power. The type of stuff that caused the Twitter exodus and now Reddit. I’d argue that many users do care about that, but I would concede that it’s not clear that federation will solve those problems. Instance operators can go on equally destructive power trips, the way eg AP and mastodon are designed.
This seems like a feature. One could imagine profiles that exist solely to categorize, say, nazis, with other accounts essentially copying their block entries, or blocklists being shared between groups of accounts.
Kind of like filter lists for adblock. I would like to subscribe to naziblock, magablock, botblock, and trollblock please.
What an amazingly naive choice, from people who should know better. The complete lack of comprehension of the nature of spam and abuse on the internet is stunning - as though we haven't had decades of trolls, scrapers and bots to learn from.
Public block lists are it. "Social media": let's think about a social party.
If someone is upset with another person then everyone can see it, body language alone. Online blocks make pretty much no sense, as the type of person who needs to be blocked will just make more accounts or otherwise persist. The reasonable person who doesn't need to be blocked can be reasoned with and will be reasonable.
Accounts with unreasonable blocks tell on themselves if the blocks are public.
I've been subject to blackball / whisper lists, so my perspective may be rare. But I picture the people who've come against me being at a party with me, and their behavior is very anti-social. People would quickly notice something weird going on, whereas online I'm just blocked and muted and nobody else knows.
I don't block anyone but I am aware there are people who can't be reasoned with and they've been the rare exception who've ended up blocked by me in the past. I know they can still access my activity and drop in with anon accounts. "Social media" is more like sociopathic media in its current state. The more like an IRL party things are online the better, so public block lists are an upgrade.
Throwback: AOL/AIM's public warnings. People would start IMing when you got a warning, ask "what happened?" That was pretty social.
To name just two issues: It doesn't scale without paying a lot of people, and deciding what to moderate precisely, reproducibly, and fairly is a major unresolved issue.
How can you look at twitter and facebook and make a statement like that?
you’re somewhat correct, a certain type of moderation was absolutely solved way back then—the owner/ops/mods would simply set down a list of rules and if someone broke them or pushed the limits, they were banned. done.
very similar to the real world, if someone comes to my dinner party and they’re a dick, i make them leave. done.
however, somewhere along the line people got it in their head that they could behave however they wanted and the person hosting the party somehow had no rights to remove them.
it boggles my mind that some people are still struggling to understand that this really is no different from the real world analogs. this is a people problem, not a technical problem.
in the real world there are neighborhoods, bars, events, etc… with little to no interference or expectations of behaviors, think of the hole-in-the-wall rough bars in the rough areas of town where violence breaks out regularly.
then think about the bar with massive security/bouncers who take no shit and remove people quickly.
but we still have these people who believe you can have both at once. it’s just never going to work.
you’re either going to have a bar where everything goes: spam and child porn and violent behaviors are accepted or you’re going to need some kind of bouncers removing people who are doing fentanyl at the bar.
of course there is a spectrum in there, but at the end of the day, this is a human problem, not a technical one. anytime someone tells you there can be no moderation, then it will absolutely be full of spam and child porn and all kinds of other shit. if we remove those, then we’re now absolutely and firmly in the “ok, yep, we accept moderation” territory at which point it’s just a matter of “what kind of clientele does our bar want?” and we tailor our expected behaviors accordingly.
as you pointed out, in the early days it was just understood, “ok, we want to hang out in this bar, so we’ll behave accordingly. it is their space afterall.” this is why we had such a massive diversity of places to go. so many different irc networks each with different expectations. so many different forums. so many different comment sections. so many different newsgroups. etc…
the root of this is not a technical problem. it's really not a problem at all unless we're somehow expected to pick only one bar we go party at (which is absurd, btw). sometimes i like to eat at a nice restaurant, and once in a while i like to slum it at a shithole bar. and most of the time i prefer parties amongst just me and my friends.
again, this is basic human relation shit. but these guys keep going on these wild goose chases—repeatedly showing how little they understand people. ironic af considering what they’re doing…