johnecheck's comments (Hacker News)

No mention of atproto. Obviously Bluesky is nothing to write home about, but do you see some reason why atproto couldn't be the foundation for something more interesting?


I've always felt that the browser vendor + CA model was bad but this is next level embarrassing. How is the very root of trust in the internet so... untrustworthy?


Revocation seems really nasty to deal with.

The whole chain-of-trust model is that your browser vouches for an authority, which in turn vouches that a website is legit.

You can't just duct-tape on an idea like "the cert for www.xyz is totally legit, unless I takesies-backsies'd my vouch at some point, so just double-check."

If you want that sort of "continuous" trust scheme, then what makes more sense is something like having short-lived certificates.
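To illustrate the idea, here's a minimal sketch of why short-lived certs sidestep revocation: "revocation" just becomes ceasing to renew, and the cert expires on its own shortly after. The lifetimes and margin below are illustrative assumptions, not any CA's actual policy.

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

# Illustrative numbers: a cert valid for ~6 days that clients renew
# when a third of its lifetime remains. If the CA stops renewing
# (the "revocation"), the cert dies within days -- no CRL or OCSP
# lookup needed on the client side.
CERT_LIFETIME = timedelta(days=6)
RENEW_MARGIN = timedelta(days=2)

def needs_renewal(issued_at: datetime, now: Optional[datetime] = None) -> bool:
    """True once the cert is within RENEW_MARGIN of expiry."""
    now = now or datetime.now(timezone.utc)
    expires_at = issued_at + CERT_LIFETIME
    return now >= expires_at - RENEW_MARGIN
```

The trade-off, of course, is that the CA and the renewal pipeline become availability-critical: an outage longer than the margin takes your site down.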


> There is no way to promote facts or what people think are facts.

There is no way with existing platforms and algorithms. We need systems that actually promote the truth. Imagine if claims (posts) you see come with a score* that correlates with whether the claim is true or false. Such a platform could help the world, assuming the scores are good.

How to calculate these scores is naturally the crux of the problem. There are infinite ways to do it; I call these algorithms truth heuristics. These heuristics would consider various inputs, like user-created scores and credentials, to give you a better estimate of truth than going with your gut.

Users clearly need algorithmic selection and personalized scores. A one-size-fits-all solution sounds like a Ministry of Truth to me.

* I suggest ℝ on [-1,1].

-1 : Certainly false

-0.5 : Probably false

0 : Uncertain

0.5 : Probably true

1 : Certainly true
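As one concrete (and entirely hypothetical) example of a truth heuristic on that scale: a credential-weighted average of user-submitted scores, clamped to [-1, 1]. The weighting scheme here is an illustrative assumption; the whole point is that users could swap in their own.

```python
def truth_score(ratings):
    """One possible truth heuristic (illustrative only).

    ratings: list of (score, credential_weight) pairs, where
    score is in [-1, 1] and credential_weight > 0 reflects how
    much this user's judgment is trusted on the topic.
    """
    if not ratings:
        return 0.0  # no evidence -> uncertain
    total_weight = sum(w for _, w in ratings)
    weighted_avg = sum(s * w for s, w in ratings) / total_weight
    # Clamp against out-of-range inputs.
    return max(-1.0, min(1.0, weighted_avg))

# A strongly credentialed "true" outweighs a weakly credentialed "probably false":
# truth_score([(1.0, 3.0), (-0.5, 1.0)]) -> 0.625
```

Personalization then amounts to each user choosing their own weighting function, which is what keeps this from collapsing into a Ministry of Truth.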


It will if they can actually make it think better than we do. Whether they ever will is hard to say, but it feels pretty clear that throwing more money at LLMs isn't going to get us there.


I read the comment you're responding to as suggesting a way to resolve the conflicts layered atop the CRDT, not as a component of the CRDT itself. You're very right that LLMs are the wrong tool for CRDT implementation, but using them to generate conflict resolutions seems worth exploring.
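A hedged sketch of what "layered atop" might look like: the CRDT itself stays a plain last-writer-wins register with a deterministic merge, but it records the losing branch so a layer above (an LLM, or a human reviewing its suggestion) can later propose a semantic merge. The class and field names are mine, not from any real library, and tie-breaking on equal timestamps is elided.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class LWWRegister:
    """Last-writer-wins register that records shadowed values.

    merge() is deterministic, so the CRDT guarantees hold; the
    `conflicts` list is purely advisory, for a resolution layer
    (e.g. an LLM suggesting a semantic merge) to consume later.
    """
    value: str = ""
    timestamp: int = 0
    conflicts: List[str] = field(default_factory=list)

    def merge(self, other: "LWWRegister") -> None:
        if other.timestamp > self.timestamp:
            if self.value and self.value != other.value:
                self.conflicts.append(self.value)  # remember the loser
            self.value, self.timestamp = other.value, other.timestamp
        elif other.timestamp < self.timestamp and other.value != self.value:
            self.conflicts.append(other.value)
```

Convergence is still guaranteed by the LWW rule; the LLM only ever generates a *suggested* new write from `conflicts`, which re-enters the system as an ordinary update.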


1 is true, but this applies to all websites you visit (and their ads, supply chain, etc). Drawing a security boundary here means never executing attacker-controlled JavaScript. Good luck!

2 is also true. But also, a zero-day like that is a massive deal. That's the kind of exploit you can probably sell to some three-letter agency for a bag. Worry about this if you're an extremely high-value target; the rest of us can sleep easy.


It might be nice if our universe conformed to our intuitions about time steadily marching forward at the same rate everywhere.

Einstein just had to come along and screw everything up.

Causality is the key.


Conflict-free is right in the name, layering conflicts on top of it would be blasphemy :p


LLMs could be good at this, but the default should be suggestions rather than automatic resolution. Users can turn on YOLO mode if their domain is non-critical or they trust the LLM to get it right.


Wow. That's... impressively bad.

While pretty egregious, this is sadly common. I'm certain there are a dozen other massive companies making similar mistakes.

