
I'm currently toying with building a GitHub clone minus React, Copilot, etc.

There's no reason I should have my browser tabs crash when I view a pull request involving more than 100 files. The page should already have been generated on the server before I requested it. The information is available. All that remains are excuses and wasted CPU cycles.



Are you just building a web front end on the GitHub API or are you building an end-to-end social programming service?


I started with a front end for the GH API, but the rate limits and webhook limitations (you must own the repo) make it a non-starter as a total replacement for the typical use cases. 5,000 requests/hr is a lot, but some repositories have so many issues that you couldn't keep up with things like edits to comments.
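To make the budget concrete, here's a back-of-envelope sketch, assuming one REST request per issue per poll (e.g. re-listing an issue's comments to catch edits, since edits don't fire events you can see without owning the repo). The polling interval is a hypothetical choice, not anything GitHub prescribes; the 5,000/hr figure is the documented authenticated REST limit.

```python
RATE_LIMIT_PER_HOUR = 5000   # GitHub's authenticated REST request budget
POLL_INTERVAL_MIN = 5        # hypothetical polling cadence for this sketch

def max_trackable_issues(rate_limit=RATE_LIMIT_PER_HOUR,
                         poll_interval_min=POLL_INTERVAL_MIN):
    """How many issues can be polled for comment edits without
    exhausting the hourly request budget, at one request per
    issue per poll."""
    polls_per_hour = 60 // poll_interval_min
    return rate_limit // polls_per_hour

print(max_trackable_issues())  # 5000 // 12 = 416
```

So even at a leisurely five-minute cadence, a single token covers only a few hundred open issues, before counting requests for PRs, commits, or pagination. Large repos blow past that easily, which is the non-starter described above.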


There's also no reason you should be viewing a pull request with more than 100 files :p



