Two techniques, even: more requests processed at once, all released at a precisely user-controlled (bandwidth-adjusted) starting point.
One helps with race conditions in the server, the other helps with racing 3rd-party requests. Sending one highly-efficient "go" packet that releases many buffered HTTP requests is sure to ruin the fun for everyone else awaiting some pre-announced concert ticket / GPU sale to open.
If the website's accounting is merely "eventually consistent" between threads/servers, and you can fire many (large) requests at a precise point in time (determined by one small packet), the two techniques work in tandem: one of your posts could land on a repeating-digit ID (such as at https://news.ycombinator.com/item?id=42000000) instead of you just seeing "Sorry, we're not able to serve your requests this quickly."
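The "release many requests with one small packet" trick (often called last-byte sync) can be sketched with plain sockets: send everything but the final byte of each request ahead of time, then complete them all with tiny packets in one burst. This is a minimal local sketch, assuming a throwaway newline-terminated protocol on loopback rather than a real HTTP target; the server, port, and payload are all hypothetical illustration.

```python
import socket
import threading
import time

arrivals = []                      # when each request became complete
lock = threading.Lock()
N = 5                              # number of parallel requests

def handle(conn):
    # A request is "complete" once a newline arrives (stand-in for
    # the last byte of a real HTTP request body).
    buf = b""
    while not buf.endswith(b"\n"):
        chunk = conn.recv(1024)
        if not chunk:
            break
        buf += chunk
    with lock:
        arrivals.append(time.monotonic())
    conn.close()

def serve(srv, n):
    for _ in range(n):
        conn, _ = srv.accept()
        threading.Thread(target=handle, args=(conn,), daemon=True).start()

srv = socket.socket()
srv.bind(("127.0.0.1", 0))         # hypothetical local stand-in server
port = srv.getsockname()[1]
srv.listen(16)
threading.Thread(target=serve, args=(srv, N), daemon=True).start()

socks = []
for _ in range(N):
    s = socket.create_connection(("127.0.0.1", port))
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    # Everything except the final byte goes out early and sits
    # buffered server-side; the request is not yet actionable.
    s.sendall(b"POST /buy-ticket (large payload minus last byte) ")
    socks.append(s)

time.sleep(0.2)                    # all large payloads already in flight
for s in socks:
    s.sendall(b"\n")               # tiny "go" packets complete every request
for s in socks:
    s.close()

time.sleep(0.2)
spread = max(arrivals) - min(arrivals)
print(f"{len(arrivals)} requests completed within {spread * 1000:.2f} ms")
```

On loopback the completion times typically land within a fraction of a millisecond of each other, which is the point: the expensive bytes travel ahead of time, and the cheap final packets decide exactly when the server sees N simultaneous, fully-formed requests.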