Those damn plebs just have no idea what's best for them.
Most people are not experts in even a single field, much less multiple fields, and certainly not every field.
So yes, we need experts to play a substantial role in running things.
Perhaps even more importantly: it's not solely about what's best for every individual. You know what would be best for me? If the government gave me a free giant SUV that gets 4mpg fuel economy, and also let me drive as fast as I wanted while also subsidizing 90% of my fuel costs. Also it should drive itself so I can sleep while driving.
Sometimes we need to consider what's best for society and the planet, too.
Totally random people were asked to draft new laws on climate (at least, that's what they were told). Over two weekends they met with lobbyists, both pro-oil and pro-climate, and over three other weekends they met with experts: once in a conference-style setting where fairly generic things were said, and twice in focus groups with more specific expertise, depending on each group's subject.
The experts were real experts, though, with multiple publications and PhDs (or in some cases engineering degrees, especially during the conference week), and they tried to speak only on their own subject matter.
Over around 8 weekends, the 150 random people drafted ?148? law propositions, helped by lawyers, and most experts agreed that they were both good and reasonable. What's interesting is that most of the 150 said that before really learning about the subject, they would never have made these kinds of proposals.
All that to say: experts don't have to run things, and imho they should not. They should, however, have an advisory role to the random people drafting new laws.
I agree completely. I think the main difference is that it's important for average people to be educated on topics by experts. That's the part that is missing today.
> In this case, a 141 page highly dense (and frankly boring to read) document is in its essence a liability
So, do you think that the intent was for developers to memorize this document?
Or do you think the expectation was something more reasonable, like using this document as a tool to configure linting tools so that developers could get realtime feedback as they code?
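To make that concrete, here's roughly the kind of thing I mean. This is just my own illustration: clang-tidy and a braces rule stand in for whatever tooling and rules they actually use, which I don't know.

```cpp
// Hypothetical illustration: a typical coding-standard rule ("always brace
// the body of a control statement") expressed as something a linter flags
// in real time, so nobody has to memorize the PDF.
// clang-tidy's readability-braces-around-statements check covers this one.

static void respawn() { /* ... */ }

void update(int health) {
    if (health <= 0)
        respawn();        // the linter flags this line as you type

    if (health <= 0) {    // compliant form, no memorization required
        respawn();
    }
}
```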
No, that is not what I mean. The efficiency of a piece of knowledge is not only a function of its intrinsic value, but also of how easy it is to understand. Sure, the people who are expected to read the document are smart and this is probably the best way to do it, but even Lockheed engineers are fallible.
If anything, should the document ever get leaked, the enemy will be defeated before they've had time to understand it xD
I find both of them equally (un)reliable. I feel like they both work(ed) about 80% of the time.
The most annoying failure case for FaceID for me is using it in bed. I'm a side sleeper so half of my face is mushed into the pillow. I realize how lazy this sounds, but when I'm half asleep... that is exactly when I don't feel like tapping out a PIN or repositioning my head.
A fingerprint reader inside the power button is the way to go IMO.
I really wish the phone had both methods, TBH.
I love the "reader inside the power button" idea, but... phone cases...
Yeah, the Face ID in bed thing is very annoying when using my "tinker iPhone" in bed. I haven't found fingerprint auth unreliable, but my hands and phone mostly stay very clean, and I have my finger enrolled 2-3 times for faster and more reliable scans.
My phone has the fingerprint reader in the power button; all cases just leave the power button open.
The Apple Watch knows that it's you, or at least somebody that knows your PIN.
It's tied to your iPhone and Apple account during initial setup.
Each time you put the Apple Watch on, you have to enter your PIN to unlock it. It can only perform automatic unlocking of your Mac and iPhone in this unlocked state.
The watch does automatic wrist detection and it will automatically RE-lock itself as soon as you take it off.
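In code terms, the observable behavior is roughly a little state machine like this (my own sketch of how it behaves from the outside; obviously not Apple's actual implementation):

```cpp
#include <iostream>

// Sketch of the observable behavior only -- not Apple's code.
class WatchLock {
    bool onWrist  = false;  // wrist-detection sensor
    bool unlocked = false;  // set by a correct PIN while worn

public:
    void putOn()                { onWrist = true; }                    // still locked until PIN entry
    void enterPin(bool correct) { if (onWrist && correct) unlocked = true; }
    void takeOff()              { onWrist = false; unlocked = false; } // re-locks immediately

    // iPhone/Mac auto-unlock is only offered in this state.
    bool canAutoUnlock() const  { return onWrist && unlocked; }
};

int main() {
    WatchLock watch;
    watch.putOn();
    watch.enterPin(true);
    std::cout << watch.canAutoUnlock() << '\n';  // 1: worn and unlocked
    watch.takeOff();
    std::cout << watch.canAutoUnlock() << '\n';  // 0: re-locked on removal
}
```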
This is reasonably secure for most needs, though of course you can disable all of this automatic unlocking if you want more security. I forget if it's on by default or not. IIRC I had to enable it but I'm not too sure.
I'm not sure that "PIN + hardware dongle that requires continuous skin contact" is meaningfully less secure than a fingerprint sensor, but at least Apple puts you in the driver's seat to choose the level you're most comfortable with.
I've found the Bluetooth connectivity quite reliable. When it doesn't work, it's because the watch re-locked itself.
I do not think the initial decision-making process was "hey, screw working-class people... let's have a 120GB install size on PC."
My best recollection is that the PC install size was a lot more reasonable at launch. It just crept up over time as they added more content over the last ~2 years.
Should they have addressed this problem sooner? Yes.
FWIW, the PC install size was reasonable at launch. It just crept up slowly over time.
But this means that they blindly trusted some stats up front, without actually testing how their game performed with and without it?
Maybe they didn't test it with their game because their game didn't exist yet; this was a decision made fairly early in the development process. In hindsight, yeah... it was the wrong call.
I'm just a little baffled by people harping on this decision and deciding that the developers must be stupid or lazy.
I mean, seriously, I do not understand. What do you get out of that? Would that make you happy or satisfied somehow?
Go figure: people are downvoting me, but I never once said developers must be stupid or lazy. This is a very common mistake developers make: premature optimization without considering the actual bottlenecks, and without testing whether theoretical optimizations actually make any difference. I know I'm guilty of this!
I never called anyone lazy or stupid, I just wondered whether they blindly trusted some stats without actually testing them.
> FWIW, the PC install size was reasonable at launch. It just crept up slowly over time
Wouldn't this mean their optimization mattered even less back then?
One of those absolutely true statements that can obscure a bigger reality.
It's certainly true that a lot of optimization can and should be done after a software project is largely complete. You can see where the hotspots are, optimize the most common SQL queries, whatever. This is especially true for CRUD apps where you're not even really making fundamental architecture decisions at all, because those have already been made by your framework of choice.
Other sorts of projects (like games or "big data" processing) can be a different beast. You do have to make some of those big, architecture-level performance decisions up front.
Remember, for a game... you are trying to process player inputs, do physics, and render a complex graphical scene in 16.7 milliseconds or less. You need to make some big decisions early on; performance can't entirely just be sprinkled on at the end. Some of those decisions don't pan out.
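For anyone not in game dev, that 16.7 ms is just 1000 ms / 60 fps. Here's a toy sketch of what living inside that budget looks like; it's an illustration of the constraint, not how any real engine is structured:

```cpp
#include <chrono>
#include <cstdio>
#include <thread>

// Toy illustration of the 60 fps budget: 1000 ms / 60 ≈ 16.7 ms per frame.
// Input, physics, and rendering all have to fit inside that window.
int main() {
    using clock = std::chrono::steady_clock;
    const auto frameBudget = std::chrono::microseconds(16667);

    for (int frame = 0; frame < 5; ++frame) {
        const auto start = clock::now();

        // processInput(); simulatePhysics(); renderScene();  // the real work goes here

        const auto elapsed = clock::now() - start;
        if (elapsed < frameBudget) {
            std::this_thread::sleep_for(frameBudget - elapsed);  // wait out the rest of the frame
        } else {
            std::puts("over budget: this frame misses 60 fps");  // a dropped/late frame
        }
    }
}
```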
> FWIW, the PC install size was reasonable at launch. It just crept up slowly over time
Wouldn't this mean their optimization mattered even less back then?
I don't see a reason to think this. What are you thinking?
> One of those absolutely true statements that can obscure a bigger reality.
To be clear, I'm not misquoting Knuth, if that's what you mean. I'm arguing that in this case, specifically, the optimization was premature, as evidenced by the fact that it didn't really have an impact (they explain that other processes running in parallel dominated the load times) and that it caused trouble down the line.
> Some of those decisions don't pan out.
Indeed, some premature optimizations will and some won't. I'm not arguing otherwise! In this case, it was a bad call. It happens to all of us.
> I don't see a reason to think this. What are you thinking?
You're right, I got this backwards. While the time savings would have been minimal, the data duplication wasn't that big so the cost (for something that didn't pan out) wasn't that bad either.
I'm sure that whatever project you're assigned to has a lot of optimization work in the backlog that you'd love to do but haven't had a chance to get to because of bugfixes, new features, etc. I'm sure the process at Arrowhead is not much different.
For sure, duplicating those assets on PC installs turned out to be the wrong call.
But install sizes were still pretty reasonable for the first 12+ months or so. I think it was ~40-60GB at launch. Not great but not a huge deal and they had mountains of other stuff to focus on.
I'm a working software developer, and if they prove they cannot do better, I get people who make statements like the one GP quoted demoted from the decision-making process, because they aren't trustworthy and they're embarrassing the entire team with their lack of critical thinking skills.
When the documented worst case is 5x, you prepare for the potential bad news that you will hit 2.5x to 5x in your own code. You don't assume it will be 10x and preemptively act, keeping your users from installing three other games.
I would classify my work as “shouting into the tempest” about 70% of the time.
People are more likely to thank me after the fact than cheer me on. My point, if I have one, is that gaming has generally been better about this but I don’t really want to work on games. Not the way the industry is. But since we are in fact discussing a game, I’m doing a lot of head scratching on this one.