Certainly that's a good reason to force a legible version of settings, and the path to settings...
But if the user sets the system theme to Hot Dog Stand, the apps should be Hot Dog Stand. If the user wants the system text font to be Wingdings, they're in for a nasty time, but that doesn't mean an app should force a different font.
The issue with this thinking is that it's easier for people to quit using the product than to figure out how to fix the font. You can't beat the simplicity of doing nothing, so you need to avoid getting into this state in the first place.
Gotta keep users engaged in your app, right? Keep them onboard even if that means removing all their choices. I mean, should we even allow users to uninstall apps?
After all, the developer always knows best and all users are helpless children who need to be forced to conform and comply. Who cares what the user thinks or wants so long as we keep that sweet, sweet engagement.
If your users are not engaging with your app, you can't deliver value to them. If you're unable to provide value to their lives because they happened to accidentally change a font, that's an unfortunate circumstance where the user loses out on value they could have had.
It's not that users are helpless, but that they just don't want to spend their time dealing with stuff they don't want to. Users like it when things "just work."
Limiting sports betting to in-person at licensed casinos seemed to work well enough for decades. Only a little awkward when teams play in Vegas.
Yes, there was a fair amount of unlicensed sports betting, and of course a pro sports scandal every so often.
Alternatively, if you cap the amount of each bet the bookies can retain, that might solve my immediate problem: I'm so tired of seeing players with betting ads on their jerseys, the commentators yapping about bets, and then half of the commercials being sports betting ads. If they can't keep much, they won't have money to advertise.
Personally, I enjoyed the ads a lot more when the poker industry was advertising their no-money .net sites and hoping people would just happen to go to their .com sites instead. That was at least a little amusing.
If we assume there's some altitude so polluted by debris that we need to intervene, it might not have that many functional satellites left. Cleaning up the orbit in 1 year might be something the world could agree to if the alternative is waiting 5 years for it to clear up by itself.
> just like Facebook bought WhatsApp with private stock with crazy valuations
FB bought Instagram on April 9, 2012 with ~30% cash and 70% stock, and then IPO'd on May 18, 2012. That's probably what you mean. FB bought WhatsApp on Feb 19, 2014 with ~25% cash and 75% public stock that was trading at roughly 2x the IPO price. The private valuation might have been crazy, but it had increased with public trading, so I dunno.
But, if you're getting console debug output from the kernel, that wouldn't be captured either... OTOH, debug output from the kernel should also go into the logs or dmesg or something?
You'll capture everything and maybe be able to figure it out from there?
Oh, one more thing... your pipeline is only capturing stdout; errors often get logged to stderr. script (or screen/tmux logging) will capture both, though.
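A minimal sketch of the redirection options, assuming a stand-in program (the name and its output here are made up for illustration):

```shell
# Hypothetical stand-in for the real program: writes to both streams.
myprog() { echo "progress: ok"; echo "warning: disk slow" >&2; }

# A plain '>' or pipe only captures stdout; merge stderr into it with 2>&1:
myprog > combined.log 2>&1

# Or keep the streams separate if you want to inspect them independently:
myprog > out.log 2> err.log
```

Running under `script session.log` instead records everything the program writes to the terminal, both streams, without touching the command line.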
From that paper, table 4, large files had an average # of fragments around 100, but a median of 4 fragments. A handful of fragments for a 1 GB level file is probably a lot less seeking than reading 1 GB of data out of a 20 GB aggregated asset database.
But it also depends on how the assets are organized; you can probably group the level-specific assets into a sequential section, and maybe shared assets could be somewhat grouped so related assets are sequential.
Not really. But when you write a large file at once (like with an installer), you'll tend to get a good amount of sequential allocation (unless your free space is highly fragmented). If you load that large file sequentially, you benefit from drive read ahead and OS read ahead --- when the file is fragmented, the OS will issue speculative reads for the next fragment automatically and hide some of the latency.
If you break it up into smaller files, those are likely to be allocated all over the disk; plus you'll have delays on reading because Windows Defender makes opening files slow. If you have a single large file that contains all resources, even if that file is mostly sequential, there will be sections you don't need, and read-ahead caching may work against you, as it will tend to read things you don't need.
Solid state drives tend to respond well to parallel reads, so it's not so clear. If you're reading one at a time, sequential access is going to be better though.
But for a mechanical drive, you'll get much better throughput on sequential reads than random reads, even with command queuing. I think the earlier discussion showed it wasn't very effective in this case, and taking 6x the space for a marginal benefit for the small % of users with mechanical drives isn't worthwhile...
Every storage medium, including RAM, benefits from sequential access. But it doesn't have to be super long sequential access; the seek time, or block-open time, just needs to be short relative to the next block read.