> Some of uv’s speed comes from Rust. But not as much as you’d think. Several key optimizations could be implemented in pip today: […] Python-free resolution
> I find it a bit annoying when projects don't use GitHub... I need to set up a new account to interact with them
The same is true in the other direction ("Ugh, this project is hosted on GitHub and now I need to set up an account"), with one major difference: whereas most sites just accept username + email + password for signup and username + password to log in, it's a huge PITA to set up a GitHub account in 2025, or to log in to an infrequently used account from a logged-out state. GitHub won't let you get away with using it in such a simple way.
> Browsers are designed to be the end user's self-service toolkit to combat our bad websites.
One of them "self-aware wolves", I see. How did the author wish things to work? That websites should be able to micromanage the user experience without restraint while indulging in whatever abusive behaviours they please, that the user should be powerless against?
No it isn't. People should be free to use software they like, and not get subterfuge like they get from Apple. What Apple has now is a total monopoly on iOS browsers. It's far worse than what Microsoft did to get sued for antitrust violations when they simply bundled IE with Windows - at least Microsoft didn't forbid installation of any other browser engine like Apple is doing on iOS.
If Apple didn't purposely hobble Safari and forbid other browser engines so that developers had no choice but to develop an app for the app store where Apple can then skim 30% off all purchases, there wouldn't be as much of a need to allow other browser engines. For Apple it's completely about greed.
I think that, at a minimum, if you're going to describe your thing as "like X, but Y", then you should have a link to a description of X (especially if X is a generic, non-Googleable term), or, better, an independent description for people who are interested in finding out what your thing is but don't necessarily care about X. Otherwise, people who know nothing about X end up clicking through, finding nothing helpful, and now they still don't know what X is, and they don't know what your thing is, either.
> Google appears to be canon for finding secondary sources, according to the various arguments in the deletion proposals, yet we're all aware of how abysmal Google's search has been for a while now.
Nobody is forcing you to use Google. If you can provide an acceptable source without the help of Google, go ahead. But the burden of proof is on the one who claims sources exist.
> An article's history appears to be irrelevant in the deletion discussion: the CPAN page (now kept) had 24 years of history on Wikipedia, with dozens of sources, yet was nominated for deletion.
Such is life when anyone can nominate anything at any moment... and when many articles that should have never been submitted in the first place slip through cracks of haphazard volunteer quality control. (Stack Overflow also suffers from the latter.)
The sources are the only part that matters. And they sufficed to keep the CPAN article on the site, so the system works.
> Doesn't this become a negative feedback cycle? Few sources exist, therefore we remove sources, therefore fewer sources exist.
It was wrong to submit the article without sourcing in the first place. Circular sourcing is not allowed.
> The sources are the only part that matters. And they sufficed to keep the CPAN article on the site, so the system works.
The system works if the sources remain available, and in an environment predisposed to link rot that can be a problem. Imagine the hypothetical situation of archive.org disappearing overnight. Should we then delete all pages that cite it as their sole source if they're not updated within a week?
And the system works if intentions are pure. It seems the user who suggested the deletion of several Perl-related pages is a fan of film festivals[1] and clearly wasn't happy that the "White Camel Award" is a Perl award (since the late '90s) and not a film festival award (since the early '00s) — at least according to Google. So they went on a bit of a rampage against Perl articles on Wikipedia.
You could argue "editor doing their job", but I would argue "conflict of interest".
These are all bad-faith takes. What are you doing?
24 years ago, some people wrote on Wikipedia instead of elsewhere. So the wiki page itself became a primary source.
"The page shouldn't have been submitted..." This was a Wiki! If you're unfamiliar with the origin of the term, it was a site mechanism designed to lean into quick capture and interweaving of documents. Volunteers wrote; the organization of the text arose through thousands of hands shaping it. Most of them were software developers at the time. At a minimum, the software-oriented pages should get special treatment for that alone.
You're acting as though this is producing the next edition of Encyclopedia Britannica, held to a pale imitation of its standards circa the 1980s. The thing is, Britannica employed people to go do research for its articles.
Wikipedia is not Britannica, and this retroactive "shame on them" is unbelievable nonsense.
Verifiability is a core policy on Wikipedia, and with time, citing your sources has become more and more important. Wikipedia isn't what it was in 2001. Articles can't survive on being verified by their own primary sources, for the same reason we don't want Wikipedia to become a dumping ground for advertisers who then cite their own site in an attempt to gain legitimacy. Secondary sources provide solid ground truth that the subject in question has gained recognition and thus notability. If those secondary sources don't exist, we can't assume notability based on nothing.
Wikipedia isn't Britannica, because by this point it's probably a lot better than Britannica. They were comparable already in 2005,[1] and I have little reason to believe that Wikipedia is doing much worse on that front nowadays, even though they have vastly more content than Britannica.
Some of the deleted pages never had the "sources missing" tag set for any significant time; they went straight to deletion.
Some pages that survived deletion (e.g. TPRF) had had the "missing sources" tag set for 15 years… which, I have to admit, can justify some action. But that was not the case for the PerlMonks and Perl Mongers pages: those were deleted on extremely short notice, making it impossible for the community to attempt any improvement.
7 days is policy for a deletion proposal,[1] which I can agree is not really enough time, although it's usually extended if talks are still ongoing.
There aren't really any rules about putting up notices and such before proposing deletion, and if you can't find anything other than primary sources, it doesn't seem unreasonable to propose deletion rather than a fix that can't be implemented. Thankfully, someone did find reliable sources for some of the articles.
Umm…