Hacker News | pawelduda's comments

Shame Phoenix LiveView is missing from the comparison

It's a comparison of Django-related third-party packages (and SSR itself); it would be a bit strange to compare against a different language/stack and/or framework

With a focus on LiveView, I think it’s interesting to see how the runtime influences the results. Django and Phoenix have very different concurrency models

Six years ago, when I was working with a Phoenix API, we were measuring responses in microseconds on local dev machines, and under 5 ms in production with zero optimization. In comparison, the equivalent Django app had a 50 ms floor.

If it's only about the Django ecosystem, true that. But if it's about pushing the limits of how fast you can server-side render Doom, then there are more possibilities to test :)

It was worth around 85k USD at the time you wrote the comment

Only because it is being subsidized by 20 to 40 gigawatts of electricity. It is basically a Ponzi scheme where the increasing difficulty transfers wealth from newcomers to early adopters.

Why not? It has a full desktop mode with Plasma and can be docked like a PC

Yes, it's my new daily driver for light coding and the rest. Also great at object recognition and image gen

Can't wait for future me to post this in 10 years

Show HN: SSH-to-Brain interface (requires tmux and 600mg of caffeine)


Got the same post haha

But their robots are enabled by default, so it is a form of unsolicited scraping. If I spam millions of email addresses without asking for permission but provide a link to an opt-out form, am I the good guy?


At this point everyone knows about robots.txt, so if you didn't opt out, that is your own fault. Opting out of everyone at once is easy, and you get fine-grained control if you want it.

Also, most people would agree they are fine with being indexed in general. That is different from email spam, which people don't want.
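
Concretely, the opt-out looks roughly like this in robots.txt (illustrative crawler name and path, just a sketch of the two extremes):

    # Opt out of every crawler at once:
    User-agent: *
    Disallow: /

    # ...or keep the fine-grained control: only block one crawler
    # from one section of the site:
    User-agent: Googlebot
    Disallow: /private/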


Looking at SerpApi's clients, it looks like most companies would agree they are fine with scraping Google. That is different from having your website content stolen and summarized by AI on Google Search, which people don't want.


The claim is that SerpApi is not honoring robots.txt, and that they are getting far more data from Google, more often, than needed for an indexing operation. Or at least that is the best I can make out of the claim in court from the article - I have not read the actual complaint.

People are generally fine with indexing operations so long as you don't use too much bandwidth.

Using AI to summarize content is still an open question - I wouldn't be surprised if this develops into some form of "you can index but not summarize", but only time will tell.


Or by Google codewiki, which is morally equivalent to making a business out of ersatz travel guides by ripping off the authors of real ones

To prove the point, the author mentions a company that went from React to htmx and saw a positive change in relevant metrics. When you do that, it usually means your app has matured in a sense, and you can be confident that htmx will be sufficient to handle all the functionality

I'm more curious, however, about going the other way: you start a project with htmx, which happily grows, but after a while a large feature is requested that inevitably pushes the app into React use-case territory. I can't think of a concrete example, but you now have to either work around it with htmx or commit to rewriting in React (or an alternative). I wonder what people who have had to deal with this think


They have been for a while. They had a first-mover advantage that kept them in the lead, but it's nothing others couldn't throw money at and eventually catch up on. I remember when, not so long ago, everyone was talking about how Google had lost the AI race, and now it feels like they're chasing Anthropic


I've been coding a lot of small apps recently, and going from local JSON file storage to SQLite has been a very natural progression as the data's order of magnitude ramps up. A fully performant database that still feels as simple as opening and reading a plain JSON file. The trick you describe in the article is an unexpected performance buffer that'll come in handy when I hit the next bottleneck :) Thank you
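
In case it helps anyone else, the switch is roughly this (a minimal Python sketch; the table name and schema are made up for illustration):

    import json
    import sqlite3

    # Before: the whole dataset lives in a single JSON file.
    def load_records_json(path="records.json"):
        with open(path) as f:
            return json.load(f)

    # After: the same data in SQLite; still one local file, no server,
    # but lookups stay fast as the data grows.
    def load_records_sqlite(path="records.db"):
        conn = sqlite3.connect(path)
        conn.execute(
            "CREATE TABLE IF NOT EXISTS records (id INTEGER PRIMARY KEY, payload TEXT)"
        )
        rows = conn.execute("SELECT payload FROM records").fetchall()
        conn.close()
        return [json.loads(payload) for (payload,) in rows]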


If you just ask it to find problems, it will do its best to find them - like running a while loop with no exit condition. That's why I put a breaker in the prompt, which in this case would be "don't make any improvements if the positive impact is marginal". I've mostly seen it do nothing and just summarize why, followed by some suggestions in case I still want to force the issue


I guess "marginal impact" for them is a pretty random metric, which will be different on each run. Will try it next time.

Another problem is that they try to add handling for cases that are never present in my data. I have to mention explicitly that there is no need to generalize the handling. For example, my code handles PNG files, and they add JPG handling that is never exercised.

