Hacker News: md_'s comments

Lots of "glide"-style floss is PTFE tape: https://en.wikipedia.org/wiki/Oral-B_Glide.

You can buy non-PTFE floss, like https://www.test.de/Zahnseide-Faedeln-ohne-schaedliche-Chemi....

It's not really about "waxed" vs non-waxed, though.


I like TreeBird's floss. They have silk and bamboo/charcoal kinds, and they're very affordable, IMO. I buy their bamboo/charcoal refill pack for $18 and it lasts me a long time. I floss every day and use a generous amount of floss, yet it still lasts a long time. Their containers are cool (glass and steel). Strongly recommend it.

https://treebirdeco.com/collections/floss


Or better still use a water flosser.


Phew--I can keep my waxed floss.


I'm reminded of how long it took Garmin to add touchscreens to their sports watches, and how controversial it was in the user community.

If you want to check your heart-rate while sitting at your desk, scrolling through the touchscreen on an Apple Watch is great. But if you're wearing gloves while skiing, or your hands are covered in mud and sweat during a trail run, a touchscreen is not a great option.

Garmin's modern sport line now has optional touchscreens, but all major functionality is still accessible via physical controls alone. Their lifestyle models are touchscreen-first, though, which really demonstrates the different requirements for different use-cases. I suspect the same is true in the camera world.


When you're doing street photography, or any photography with a DSLR/Mirrorless, you don't look at the controls at any given moment.

You see a potential subject, you "arm" the camera via its power switch instinctively.

Your finger goes to the front/back dial, and you set your parameters depending on the mode, sometimes only paying attention to the numbers on the screen, top LCD, or viewfinder.

You're tracking your subject now. If you need to, you select the AF point blindly via the touchscreen (which is off and acts as a touchpad when you're looking through the viewfinder), and fine-tune it via the joystick if you need to.

Looks good, half-press, AF locks. You release the shutter and the camera clicks. It's done.

You turn off your camera blindly and continue walking.


> When you're doing street photography, or any photography with a DSLR/Mirrorless, you don't look at the controls at any given moment.

Why? You're looking at the screen to track the target anyway. Show the controls there, including focus points and maybe "exposure" settings.

And with the computational photography, you can just take multiple pictures and synthesize various "exposure times" later. And it'll likely be better than what you set blindly, hoping to get the right combination.


> Why? You're looking at the screen to track the target anyway

Not necessarily. You might be looking through the viewfinder, which will almost always have better contrast in bright sunlight than even a sunlight-readable screen; and even so, if you're using the display, fumbling through a touchscreen interface will always be slower than doing the same with a haptic interface you're used to.

> And with the computational photography, you can just take multiple pictures and synthesize various "exposure times" later. And it'll likely be better than what you set blindly, hoping to get the right combination.

I think this shows some disconnect over what many photographers are trying to do with their cameras. The goal often isn't to maximize the use of technology to get the best possible photo _technically speaking,_ but to use your own familiarity with techniques and tools to make something great _yourself._ Computational photography is an anti-feature for many photographers.

Beyond that: you usually aren't shooting blind unless you choose to. Cameras have come with metering for many decades now, and it's gotten pretty damn good at telling you when your photo is properly exposed. Newer (<15 years old) cameras will often also have a histogram, which gives you even more data than an EV meter.
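(The histogram mentioned above is cheap to compute yourself. A minimal sketch in plain Python, with made-up 8-bit luminance values; real cameras do this per channel on the sensor readout:)

```python
def histogram(pixels, bins=16):
    """Bin 8-bit luminance values (0-255) into a coarse histogram."""
    counts = [0] * bins
    width = 256 / bins
    for p in pixels:
        counts[min(int(p / width), bins - 1)] += 1
    return counts

def clipping(pixels, low=5, high=250):
    """Fraction of pixels crushed to black or blown to white."""
    n = len(pixels)
    shadows = sum(1 for p in pixels if p <= low) / n
    highlights = sum(1 for p in pixels if p >= high) / n
    return shadows, highlights

# A deliberately overexposed frame: half the pixels at full white.
frame = [128] * 50 + [255] * 50
shadows, highlights = clipping(frame)
# highlights == 0.5: the histogram would warn you long before download time.
```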


> Not necessarily. You might be looking through the viewfinder, which will almost always have better contrast

Most mirrorless cameras have electronic viewfinders. They are _worse_ than a phone screen. And they still show you only an approximation of the final image, filtered through an underexposed sensor and whatever processing steps the camera has.

And if the viewfinder is purely optical (in a mirrorless camera) then it won't show the autofocus feedback.

> if you're using the display, fumbling through a touchscreen interface will always be slower than doing the same with a haptic interface you're used to.

Except that you bumped the control wheel on top some time earlier during the day, and it's now at +3 exposure instead of "0". You don't see that in the viewfinder, and find out only when the pictures are downloaded to your computer 2 months later.

Ask me how I know about this scenario.

Oh, or another one I learned at school while taking pictures for the class: if you don't have perfect vision, and you focus the optical viewfinder until the image looks in focus, the actual film image will demonstrate to everyone else exactly how you see the world with your imperfect vision.

> The goal often isn't to maximize the use of technology to get the best possible photo _technically speaking,_ but to use your own familiarity with techniques and tools to make something great _yourself._

And for me, the goal is to take good pictures for my memories, utilizing as much technology and automation as possible. I don't want to spend time learning every function of the 15 knobs on my camera. I want optical zoom and a full-frame sensor, but the same UI experience as on my phone.


> Most mirrorless cameras have electronic viewfinders. They are _worse_ than a phone screen. And they still show you only an approximation of the final image, filtered through an underexposed sensor and whatever processing steps the camera has.

Not in newer designs. Modern cameras have similar or higher perceived pixel density, with very little or no perceptible screen-door effect. Latency on later-gen cameras is also very low, to the point of being imperceptible.

> And if the viewfinder is purely optical (in a mirrorless camera) then it won't show the autofocus feedback.

I think what you're describing is a rangefinder, as seen on some Leicas for example. This is correct, but rangefinder cameras are a niche within a niche. Frankly, I don't know how rangefinder users make use of that in the first place.

> Except that you bumped the control wheel on top some time earlier during the day, and it's now at +3 exposure instead of "0". You don't see that in the viewfinder, and find out only when the pictures are downloaded to your computer 2 months later.

I mean, I can't help you here; this kind of misinput is just as likely, if not more so, on a touchscreen in my experience. The fact is that:

- Normally, on any camera I've used between Sony and Nikon, one click of the control wheel is +/- 1/3 EV. Hitting it nine times and failing to pay attention to the live preview or EV metering scale sounds like user error to me.

- If it takes you 2 months to unload your photos, you probably aren't the target audience for these cameras to begin with, to be blunt.

- Assuming it was _less_ than 3EV, most modern cameras shooting in RAW will, for most scenes, be able to give you the dynamic range to still work with the photo in post.


> Not in newer designs. Modern cameras have similar or higher perceived pixel density, with very little or no perceptible screen-door effect. Latency on later-gen cameras is also very low, to the point of being imperceptible.

Wow, so just like my phone! My point is, the viewfinder is _still_ electronic. It doesn't really provide much advantage compared to just showing an image on the screen. That's why some of the mirrorless cameras don't even have a viewfinder anymore (e.g. EOS M6 Mark II).

> I mean, I can't help you here; this kind of misinput is just as likely, if not more so, on a touchscreen in my experience.

It can be shown on the screen, and the UI can more faithfully reflect the settings.

> - If it takes you 2 months to unload your photos, you probably aren't the target audience for these cameras to begin with, to be blunt.

Sure. That's why I want GPS on the photos. But I still want a good optical system; there's just no way around the sensor size and the lens quality.


> Wow, so just like my phone! My point is, the viewfinder is _still_ electronic. It doesn't really provide much advantage compared to just showing an image on the screen. That's why some of the mirrorless cameras don't even have a viewfinder anymore (e.g. EOS M6 Mark II).

I guess? Superficially?

It's normally better than a phone screen since it's hooded, meaning you can get consistently high contrast and good colour representation in a wider range of environments without worrying about glare.

I'd also say it's not that "some mirrorless cameras don't have viewfinders anymore" so much as that there exists a segment-within-a-segment which doesn't have viewfinders.

Sigma's fp fits in there (though there's an optional viewfinder attachment); so do Nikon's Z30 and Sony's ZV-E10. It's not a popular design choice to remove the viewfinder, since most users of ILCs do get use out of it.

> It can be shown on the screen, and the UI can more faithfully reflect the settings.

This doesn't address the issue of having to navigate multiple layers of menus or do weird on-screen gestures to get to settings, all without haptic feedback. Besides that, it's not like this information is hard to read on a traditional-layout display: 1/40, f/5.6, ISO 400, and an EV scale pointing to 1.3 is pretty intuitive if you have a basic understanding of photography concepts. If you don't care about that stuff and spend most of your time in auto, most cameras offer layout options to hide that information.
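(For anyone unfamiliar, that readout maps onto simple arithmetic: exposure value is EV = log2(N²/t), and each ±1 EV halves or doubles the light. A quick sketch in Python using the standard formula; the specific numbers are just the ones quoted above:)

```python
import math

def ev(aperture, shutter):
    """Exposure value: EV = log2(N^2 / t), for f-number N and shutter time t."""
    return math.log2(aperture ** 2 / shutter)

# The readout quoted above: 1/40s at f/5.6 is roughly EV 10.3.
base = ev(5.6, 1 / 40)
# Stopping down one full stop (f/5.6 -> f/8) costs about 1 EV...
one_stop = ev(8.0, 1 / 40) - base
# ...which you win back by doubling the shutter time (1/40s -> 1/20s).
recovered = ev(8.0, 1 / 20)
```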

> Sure. That's why I want GPS on the photos. But I still want a good optical system, there's just no way around the sensor size and the lens quality.

The hard truth is that you'll have to compromise on something here.

- GPS used to be a popular option for halo-product cameras. I used to own a Sony SLT-A55 which had it, but it was often unreliable and battery life took a hit whenever I had GPS turned on. It was a decent camera otherwise. Nowadays, most cameras just don't ship with GPS built in. Some will offer a hot-shoe attachment, but these still have reliability and battery drain issues. Others rely on a phone link to encode that into the EXIF. Modern phones are pretty good about using standalone, unassisted GPS, so shooting in remote locations while using your phone as a location source is generally an okay solution. If this is a hard requirement for you, you'll have to resort to a camera design that's a few years old, partly because this feature has kinda fallen out of style, and partly because camera generations move slower than smartphone generations (for good reason; outside of professionals in demanding areas like sports or event photography, the value add has to be clear from generation to generation for the bigger enthusiast market to buy in).

- You seem to be hell-bent on having as few physical controls as possible, and even no viewfinder. This cuts out most high-end, midrange, and even most entry-level ILCs, and leaves a small segment of vlog ILCs, plus point-and-shoots (though there are some very respectable higher-end options in that market these days, like the Sony RX100 series, which has a cult following at this point).

- You still want interchangeable lenses and a bigger sensor than what you can find on phones. The latter's easy: most dedicated cameras have a bigger sensor than phones. The former less so, given your other constraints, since most cameras with interchangeable lenses will fail one of them. Out of the major manufacturers, this basically leaves you with the Sigma fp, Nikon Z30, EOS M6 Mark II, and Sony ZV-E10, all of which, regrettably, have that control wheel you might still accidentally hit nine times and bump your exposure up by +3.0EV.

- If you want _specifically a full-frame_ sensor, and you don't want to pay for niche products like Leicas, Zeiss halo products, or something weird like the Sigma fp, then unfortunately the camera you're looking for doesn't exist. The feature set you want represents a tiny sliver of a niche that's mostly been eaten up by smartphones at this point.

- You also want computational photography built in, which, to be honest, as currently implemented in phones, largely negates the limitations of small sensors and cheaper lenses. As in: for casual photography, you're pretty unlikely to see a clear improvement over your phone with a dedicated camera these days, whether or not it comes with phone-style computational photography built in. I can't underscore this enough. If you take pictures of challenging scenes, or if you're going for a specific style, then yeah, phones are outclassed, at least as far as difficulty of the shot goes -- but for casual stuff? Phones are the way to go, almost unquestionably.

Snark aside: if you're looking at something like family photography, I strongly recommend something like an RX100, a Z30, or a Z5. The RX100 is a point-and-shoot, but it's best-in-class in a lot of ways even if the current rev is from 2019. It's also small enough to fit in your pocket and has a solid lens with a good zoom range. The Z30 and Z5 will probably lock out the control rings for you if you're in auto mode, which should help prevent any accidental overexposure. They also benefit from recent-gen sensors and image processors (though the sensors are APS-C). If you want a full-frame, stacked, or BSI sensor, it'd likely break the bank and commit you to more conventional controls, since you're looking at an enthusiast camera at that point. No two ways about it.

The GPS thing is the biggest hurdle you'll still have to clear. I don't have a great solution for you. The Z30 and EOS M6 MkII both probably have good smartphone app integration and would likely be able to sync location from there, but that can be finicky, and it tends to be a battery hog.

On the other side of the spectrum I guess you could look at something like Beastgrip. I hear it's what they're using to rig up iPhones for the new 28 Years Later movie.


> The GPS thing is the biggest hurdle you'll still have to clear. I don't have a great solution for you.

Sony also has a great smartphone app which barely eats any battery. It waits for your camera to connect, then activates GPS and feeds location data to the camera. I have never seen it drain my battery more than it should on my old iPhone X.

Sony's app also can do remote shooting and image transfer via WiFi and it's not half bad at either.


> As in: for casual photography, you're pretty unlikely to see a clear improvement over your phone with a dedicated camera these days

Phones are great for panorama shots, but they can't zoom. It's a physical limitation; you _need_ larger lenses for that. Another big problem is low-light shots. Software does wonders, but it's still limited by the amount of light the sensor can gather.
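(The low-light point comes down to shot noise: SNR scales with the square root of collected photons, which scale with sensor area for a given scene and exposure. A back-of-the-envelope sketch; the sensor areas are approximate, and the per-mm² photon count is an arbitrary illustrative number:)

```python
# Approximate sensor areas in mm^2 (illustrative figures).
FULL_FRAME_MM2 = 36 * 24          # 864
PHONE_SENSOR_MM2 = 9.8 * 7.3      # ~72, roughly a large 1/1.3" phone sensor

def shot_noise_snr(area_mm2, photons_per_mm2=1000):
    """Photon shot noise: SNR = N / sqrt(N) = sqrt(N) collected photons."""
    photons = area_mm2 * photons_per_mm2
    return photons ** 0.5

ratio = shot_noise_snr(FULL_FRAME_MM2) / shot_noise_snr(PHONE_SENSOR_MM2)
# ratio == sqrt(area ratio), roughly 3.5x: the "physics" advantage in question.
```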

> Snark aside: if you're looking at something like family photography

I love traveling, and most of my photography is either wild nature or landscapes. For wildlife photos you _really_ need optical zoom; you don't generally want to get close and ask a bear (or a lion) for a selfie.

I kinda adapted: each time I take pictures with my camera, I also take a couple of pictures with my phone, so I can later use them to get the GPS position and correct the timestamps.
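(That phone-photo trick can be scripted. A minimal sketch in plain Python with hypothetical data: one scene shot on both devices gives the camera's clock offset, then each camera photo gets the nearest phone GPS fix by corrected time. Real tools would parse this out of EXIF:)

```python
from datetime import datetime

# Hypothetical phone fixes: (timestamp, lat, lon) pulled from the phone's photos.
phone_fixes = [
    (datetime(2024, 6, 1, 10, 0, 0), 46.558, 8.561),
    (datetime(2024, 6, 1, 12, 30, 0), 46.573, 8.589),
]

# One scene shot on both devices lets us estimate the camera's clock error.
camera_ref = datetime(2024, 6, 1, 10, 5, 0)   # camera EXIF time of that scene
phone_ref = datetime(2024, 6, 1, 10, 0, 0)    # phone time of the same scene
offset = phone_ref - camera_ref               # camera runs 5 minutes fast

def geotag(camera_time):
    """Correct the camera timestamp, then pick the nearest phone fix."""
    corrected = camera_time + offset
    ts, lat, lon = min(phone_fixes, key=lambda f: abs(f[0] - corrected))
    return corrected, (lat, lon)

corrected, pos = geotag(datetime(2024, 6, 1, 12, 34, 0))
# corrected is 12:29, so the 12:30 fix is assigned to this photo.
```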

And yeah, I really want camera makers to go after my market niche. They think it's small, but I seriously doubt it. There are a lot of people who like to take better-than-a-phone pictures, but couldn't care less about exposure times and ISOs.


> And yeah, I really want camera makers to go after my market niche. They think it's small, but I seriously doubt it. There are a lot of people who like to take better-than-a-phone pictures, but couldn't care less about exposure times and ISOs.

That may be the case, but many "compact" cameras, like the Sony RX100 mentioned in this thread, wipe the floor with phone cameras. Yet they're very niche. If there were a market for it, I doubt manufacturers would come up with a random reason not to tap it. I think there are actually very few people who want better-than-phone pictures and are ready to spend the money and lug around the resulting camera.

As GP says, I doubt you'll find a model that checks all of your boxes (especially the integrated GPS one). But you can probably go to a camera store and try out a few models. My camera has many dials and buttons but ignores all of them when in "full auto" mode. It also ignores "picture settings" or whatever they're called (things like custom tone curve, white balance tweaking, etc.). It even has a physical lock on the mode dial, which should prevent you from ending up in some weird under/over-exposed situation because you've unwittingly bumped the mode dial and then hit a separate dial nine times. Sure, the camera may have a zillion options for you to configure, but if it ignores them in full-auto mode, it's basically what you're asking for.

My specific camera is an 8+ year old model (Olympus Pen-F), so you probably don't want it, but there should be newer models with similar behavior. I'd look at the Panasonic S9, which I wanted to like but dismissed because of the lack of dials. It's a full-frame model, though, so be prepared to carry big and heavy lenses for it.


The viewfinder shows all that information in real time already, but after a certain point, you know what your camera is going to do with these settings:

    Hmm... It's a bit too bright and this thing gonna overexpose a bit so, let's compensate it with -0.7EV...

    Hmm... With these settings, it'll track the face automatically, so I don't need to think about it now.

This is how you instinctively think while taking a photo. It's automatic. I don't know what my metering tells me most of the time, because I already know from experience. The metering is always there, though. If it says something contrary to what you expect, it's worth paying attention (again, a split second).

If I can take this [0] with a single frame, why should I bother with multiple frames? Or, if I can take this [1] with a simple 7-shot bracket (which is overkill, 3 will already do, but why not) and simple compositing, why should I bother? Lastly, if I can take this [2], again with a single shot, a bog-standard lens, and a good tripod, why should I bother with tracked shots, etc.? (You can always take better astro shots, but this is a great one for a single frame and some post-processing.)

In photography, sensor size is still king. A mirrorless camera is much crisper than a phone camera; the comparison isn't even close. Especially when you compare full-frame sensors to phone camera sensors: even the best ones (like Sony's 48/12 Quad-Bayer systems) fall way short of even an APS-C sensor. It's physics. A RAW image from a big sensor is 90% there. When taking a photo with a phone, you're adding much, much more to make it look good.

The joy of photography comes from capturing that fleeting moment and framing it to create something worth looking at and remembering. Not synthesizing artificial-looking colors with extreme post-processing which bends the truth of that moment.

[0]: https://www.flickr.com/photos/zerocoder/33984196648/

[1]: https://www.flickr.com/photos/zerocoder/47965142511/

[2]: https://www.flickr.com/photos/zerocoder/46092337964/


> Hmm... It's a bit too bright and this thing gonna overexpose a bit so, let's compensate it with -0.7EV...

Why should _I_ do that instead of the camera?

> If I can take this [0] with a single frame, why should I bother with multiple frames?

You shouldn't. The camera should. It already knows the illumination level, and it can take multiple measurements from its sensor until the total amount of transferred charge per pixel is enough to build a good picture. And while it's at it, it can take a couple more intentionally over-exposed frames to automatically offer the HDR version.

You know, the thing that phone cameras have been doing for a decade or so.
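(What's being described here, accumulating several short reads instead of one long exposure, is essentially averaging, and the noise win is easy to demonstrate. A toy sketch with simulated per-pixel noise; this is not any camera's actual pipeline:)

```python
import random

random.seed(42)

TRUE_SIGNAL = 100.0  # the "real" brightness of one pixel

def read_pixel(noise=10.0):
    """One noisy short-exposure read of the pixel."""
    return TRUE_SIGNAL + random.uniform(-noise, noise)

def stacked_pixel(frames=16):
    """Average several reads: noise shrinks roughly as 1/sqrt(frames)."""
    return sum(read_pixel() for _ in range(frames)) / frames

single_error = abs(read_pixel() - TRUE_SIGNAL)
stacked_error = abs(stacked_pixel() - TRUE_SIGNAL)
# On average, the stacked estimate lands much closer to the true value.
```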

> In photography, sensor size is still king.

Yes, and that's why I want a mirrorless camera with interchangeable lenses. There's only so much software can do with a phone's optical system.

However, the same software can do so much more when coupled with a big sensor and a good optical system.


> Why should _I_ do that instead of the camera?

First, every machine has its limits; second, every photographer has a style.

> You shouldn't. The camera should.

No. The camera should do exactly as I say. It's an instrument, and it shall allow footguns, because one person's footgun is another person's style. A camera should be a blunt instrument; it should completely get out of the photographer's way and become transparent.

It's not the camera's interpretation of the scene. It's the photographer's interpretation through the camera.

> ...offer the HDR version.

If you feel lazy, many mirrorless cameras do that, but the results may not fit your taste. The Sony A7III's Auto-HDR is nice, but it's not exactly what I want, so I merge mine manually.

> You know, the thing that phone cameras have been doing for a decade or so.

I have quite a few cameras: a Canonet 28, a Pentax MZ50, a Nikon D70s, and a Sony A7III. I've also used a Canon AE-1, etc. All of these cameras have metering, and all of them are excellent for their era. They are not infallible or perfect.

For example, the D70s freaks out in CFL and LED environments, because that kind of indoor lighting was non-existent when it was designed. So a custom WB is a must in this case. The A7III sometimes struggles in colored LED (sodium yellow-ish) environments, so you again set a custom WB. And that machine was the most accurate camera in terms of color when it came out.

As I said, every machine has its limits.

> However, the same software can do so much more when coupled with a big sensor and a good optical system.

The thing is, photographers don't want the software. They want exactly what they see recorded in a file, and that's more of a dynamic range thing than a color thing, and it's directly related to sensor hardware (regardless of its size), not software.

From my understanding, you want a mirrorless (or full frame) point and shoot, and that's OK. What I want is total control over the camera hardware, regardless of its form factor.


> First, every machine has its limits; second, every photographer has a style.

This is such a bullshit statement...

> No. The camera should do exactly as I say.

Well, time to throw your camera away, I guess. Unless you have a very old DSLR camera, of course.

> It's not the camera's interpretation of the scene. It's the photographer's interpretation through the camera.

The thing is, the camera can take multiple exposures at no cost, and then you can just discard the ones that you don't need. So you basically want to artificially limit the software and hardware to simulate the old-timey workflows.

> For example, the D70s freaks out in CFL and LED environments, because that kind of indoor lighting was non-existent when it was designed.

See: smartphones.

> The thing is, photographers don't want the software.

This photographer wants it. And the market has clearly spoken in agreement with me.

> From my understanding, you want a mirrorless (or full frame) point and shoot, and that's OK.

Pretty much.


> This is such a bullshit statement...

The only thing I can say is, what we think about photography is very different.

> Well, time to throw your camera away, I guess...

I have film SLRs, a DSLR and a mirrorless. None of them are trash. They still work the way they should.

> See: smartphones.

If you think smartphones are impeccable at white balance, I'd tell you otherwise, because I have seen them fail the same way. It's physics. Even an iron skillet can take good photos in ample light. The difference starts to show itself when the light goes down (starting with sunsets and going from there, plus indoors at night). I take (sometimes) grainy photos with my camera, while smartphones just emit line noise from their sensors.

> The thing is, the camera can take multiple exposures at no cost, and then you can just discard the ones that you don't need.

Who says I don't shoot consecutive photos when required? The A7III can track an object and keep focus on it at 30FPS, and shoot at 10FPS. Higher-end cameras like the A9 can go up to 120 AF corrections per second.

However, if you don't know what you're doing, spray and pray is no magic bullet. Also, taking shots is not free. If you can't press the shutter at the correct moment, that action and frame are gone forever, and your burst shooting was for nothing.

Generally, when you're doing something like Tango nights, a 3-4 frame burst gets what you want. If you're tracking a dog, it's generally ~10 frames. Street is again ~3-4 shots (traffic, walking people, etc.), but I challenge myself to a single shot if I feel good, because why not.

There are no "old-time" workflows. There are workflows for different scenarios. Sometimes I shoot and share from the camera directly. Sometimes I process on my phone. Sometimes I let the photo sit and process post-trip. Sometimes it's one shot, sometimes it's a burst. I have no fixed framework; I just do what feels right at that moment.

These cameras have dedicated DSPs to handle these tasks. They are not bound to their main processors, so a camera doesn't lose tracking because it also has to do AF corrections at the same time. Phase-detect AF cameras can scan the whole AF surface (not all image pixels are AF pixels) without bogging down, even while shooting 4K/8K video at their max frame rates, because they're designed to do that.

> This photographer wants it. And the market has clearly spoken in agreement with me.

Smartphones are at your service. If you want heavy-duty post-processing of RAWs on the go, any iPhone later than the X can post-process 24-32MP RAWs on board. I know, because I do.

However, the image quality of modern smartphones is not there, by a great margin. Especially in the dynamic range and noise departments. My A7III can shoot in pitch black and create noiseless images. Google Pixel 9 Pro? It can't [0]. Even "portrait mode" creates washed-out colors in bright daylight. Compare that to Fuji's X-T50, a mid-range APS-C camera [1]. The difference is night and day.

> Pretty much.

I think you can seriously consider the X-T50. It's not a full-frame machine, but it's a great APS-C camera with great ergonomics, which can handle 99% of your needs without even needing post-processing.

BTW, you say that "the viewfinders are electronic, anyway". They are calibrated OLED screens which show the resulting image (after the camera's processing) in real time. They are not less capable just because camera viewfinders don't draw yellow rectangles around faces; they track them just fine, including their eyes. Sony not only focuses on faces, it focuses on eyes, even when they're behind sunglasses (you can tell the A7III to show real-time tracking markers).

I guess you've never used a mirrorless, or any enthusiast camera for that matter. The possibilities they open beyond a single shutter button are immense.

This photo [2] was taken 15 years ago and post-processed in Darktable, IIRC. It was taken as a JPEG and processed from there. This is what good hardware and software can do.

If you don't have the data in the image to begin with, you can't get there even with the best software, unless you hallucinate and make details up, which is more generative AI and less photography.

[0]: https://www.dpreview.com/sample-galleries/7614427312/google-...

[1]: https://www.dpreview.com/sample-galleries/1737607092/fujifil...

[2]: https://www.flickr.com/photos/zerocoder/41901384135/


> They are calibrated OLED screens which show the resulting image

No they don't. For example, in low-light conditions the sensor doesn't get enough light to faithfully show the long-exposure result.

And my phone also has a calibrated OLED screen, so it's not like it's something exotic.

> I guess you've never used a mirrorless, or any enthusiast camera for that matter. The possibilities they open beyond a single shutter button are immense.

I have worked professionally with optical systems and lasers, and for a time I had astrophotography as my hobby. I did plate stacking, and all other kinds of post-processing.


> No they don't.

Sorry, yes they use a long shutter, and you get a blurry photo with the noise combined. It's a double whammy.

> And my phone also has a calibrated OLED screen, so it's not like it's something exotic.

Yes, but is the whole pipeline calibrated end to end? IOW, does what you see equal what you save? It's not always true on a smartphone, but it's "what you see is what you get" on a mirrorless.

> I have worked professionally with optical systems and lasers, and for a time I had astrophotography as my hobby.

Nice, but you might have done that astrophotography with a CCD module designed for astro, or with a wet plate, and both are very different from using a mirrorless camera, especially one with the latest generation of sensors, which you can just point and shoot to get a more than decent photo of the sky above you. So my point still stands.

I wish you the best of luck in your endeavors; I'll get my camera and leave for some greener pastures before the rains start.

Have a nice day.


At this point this guy must be trolling. Or he needs to be urgently administered a Sigma dp2 Quattro. Possibly both. The latter is definitely the case.


Or a Zeiss ZX1. I'd prefer a Quattro, or a Leica, though.


> You're looking at the screen to track the target anyway.

What? No. You present the camera to where it shall be, pin it where the image aligns with the framing you had in mind, and press the shutter. Almost exactly the same as guns, minus the violence (unless you consider artistic expression a form of violence). This applies to phones too.

> And with the computational photography, you can just take multiple pictures and synthesize various "exposure times" later.

The technology isn't there. Yes, it's 2024, there ought to be half a dozen competing models of multispectral-LIDAR-slaved mirrorless cameras with Gaussian splatting features, I agree, but it's easier and cheaper to just load a couple of AA batteries into a regular clip-on flash and physically stop down the aperture for portraits, or just be where you want to be with a flask of hot coffee for scenery photos.


Very early on, I disabled my Garmin watch's touchscreen entirely.


I can't say how it compares to other translations, but A. S. Kline's translations of both are available for free online and, I found, easy and fun to read: https://www.poetryintranslation.com/PITBR/Greek/Odhome.php.


You needn't have responded. Pretty sure the poster you're replying to was just taking the piss.


Not at all. Never heard of it. In the US, we only have "black ice".


Definitely this. I think the worst aspect of passkeys is that the noble goals (public key crypto! unphishability!) seem to somewhat unavoidably wipe out one of the--in hindsight--really valuable aspects of passwords-in-a-password-manager:

That you can always just copy them out, put them in a different password manager, or write them on a post-it.

That said, I think this is a byproduct of the design space being complex (as you suggest) and not, as the author seems to feel, "thought leaders" or malice.


I've been using passkeys saved in 1Password, and I thought that gave me the power to transfer them. But I just looked, and apparently 1P's export feature doesn't allow exporting the passkeys; it just tells you to create new ones in your new password manager. So that's pretty crappy...


I use iCloud's Passkeys extensively and have never had saved Passkeys "wiped out". I am not disputing that data loss bugs can happen, but three times for one user sounds pretty weird given the maturity of the ecosystem.

The most obvious explanations seem to me to be:

a) Apple loses data (presumably not just Passkeys, but also photos, passwords, and other highly noticeable stuff) all the time, and I've been lucky for the last ten years. Hundreds of millions of Apple users just learn to live with this.

b) The author is doing something weird.

c) This is hyperbole.

I'm probably picking nits, but it's like an article raising a bunch of legitimate criticisms of the internal combustion engine mentioning that the author's car has, while sitting in the parking lot, simply exploded on three separate occasions. Like, maybe?


It's not hyperbole. I recently (a few weeks ago) got locked out of my GitHub account after iCloud Keychain trashed my passkey. After analyzing the root cause, it turned out to be a bug in WebKit (now fixed in Safari Technology Preview after I raised it with the WebKit team):

https://bugs.webkit.org/show_bug.cgi?id=270553


> b) The author is doing something weird.

The author is the main dev of an identity management platform called kanidm, so yeah, I'd wager their usage is fairly non-standard. That said, it should be almost impossible for this to happen anyway.

Also, that doesn't apply to his partner.


One thing that comes to mind is with the earlier WebAuthn implementations in iOS, before they were stored in iCloud and called passkeys, there was no management interface for stored passkeys and 'clear website data' (to delete cookies etc.) would actually erase all credentials permanently. It was useless this way.


Why useless? Not an authentication scheme to end all other authentication schemes, but certainly a (much) better successor to the login cookie?


I do not mean passkeys in general but early iOS implementation was useless since it deleted passkeys along with your cookies and other website data. The passkey iOS implementation is useful in its current form.


> I use iCloud's Passkeys extensively

So what happens if you want to migrate away from iCloud for the storage of passkeys?


You generally enroll a passkey for a single device or connected group of devices. My iCloud-syncing devices share a passkey. My Windows laptop has another. My desktop has yet another. I have also enrolled my YubiKey.

I could stop using my iDevices tomorrow and not be negatively affected.


I can't speak for OP, but for every service that I use passkeys with I enrolled both iCloud Passkeys (for convenience) and several YubiKeys (for portability and backup).

This is not different at all from a SSH public/private key combo. You are not supposed to duplicate SSH keys!


Your answer is totally reasonable, but I admit I don't have time for that in most cases.

1. Most services are not Passkey-only--most people are using it as a password alternative (e.g. eBay) or a second-factor alternative. So losing it won't lock me out.

2. A very small number (e.g. Google) let you configure Passkey as your sole second factor. For those, I am indeed careful to do what you do and have duplicates.

I do think this is kind of bad? So the grandparent totally has a point here: services find it hard to do only Passkeys (and thus realize the security benefits).

But, as a user, it's not something I worry about a lot, to be honest.


I was about to type something similar to this as well! I use passkeys pretty heavily, with iCloud sync. Never had an issue. The only similar issue I can think of is sometimes my MacBook will lose the contents of the on-device wallet, including in one case an SSH key stored there. That was somewhat annoying!


It can't be hyperbole, their partner's car keeps exploding too! So often that they're switching back to a four horse carriage.


Agreed. I'm not so sure that some of the iCloud data loss bugs people talk about are actual data loss bugs. I've had a few issues over the years.

Firstly I spent weeks chasing down what I thought was a data loss bug in iCloud. After much effort I managed to reproduce it. Turned out it was an issue with TeXshop rather than iCloud.

Secondly, the one time I had a photo lost, it wasn't lost. I just couldn't find it in the 12000 photos I had. It wasn't where I'd left it.

The third one was a data loss bug, was reproducible, was reported to Apple and was fixed. It was due to how Numbers handles three devices and decides the winner of a conflicting change; an edge case, with me as the number-one awkward customer.

YMMV but user testimony may be as reliable as eyewitness reports.


To be clear, I don't work for Apple. :) And I'm not discounting that there are usage patterns that might lead to persistent bad experiences (like your example with Numbers).

But the implication that Keychain just kind of forgets saved Passkeys once in a while seems alarmist and probably unfounded.


Yeah exactly. It is possible that some expiry or provider specific bug may lead to revocation? I am not sure how it works entirely.

I will say that there are some very well known backup and restore issues with keychain however so I keep anything critical in MacPass as the primary copy.


But that doesn't use the "find my device" network. I think the parent wasn't saying, "I want an app that continually reports its location to my server so I can monitor my phone's location." Indeed, that's fairly trivial to build, but it is useless if, say, your phone doesn't have internet access (like, someone turns it off or it runs out of battery).

The thing Google is announcing here is like the Apple "find my" network--it seems to allow you to use other people's devices to find your lost device simply based on a BLE ping.

That is something that is hard to build by yourself, and would benefit greatly from an industry-wide standard (more peer devices reporting locations!).
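For the curious, the rough shape of these crowd-sourced finding networks looks something like the toy sketch below. This is a simplification, not Apple's or Google's actual scheme (which uses rotating elliptic-curve public keys and end-to-end encrypted location reports); the point is just that the broadcast identifier rotates over time, so passing devices can report sightings without the server (or bystanders) being able to track any one device:

```python
import hashlib
import hmac


def beacon_id(seed: bytes, epoch: int) -> bytes:
    """Derive the rotating identifier a lost device broadcasts over BLE
    during a given time epoch (e.g. a new ID every 15 minutes)."""
    return hmac.new(seed, epoch.to_bytes(8, "big"), hashlib.sha256).digest()[:16]


# The lost device broadcasts beacon_id(seed, current_epoch) over BLE.
# A passing phone that hears it uploads (beacon_id, encrypted_location).
# The owner, who also knows `seed`, recomputes the same sequence of IDs
# to query the server for reports; the server only ever sees opaque,
# ever-changing identifiers.
seed = b"device-secret-seed"  # hypothetical per-device secret
ids = [beacon_id(seed, epoch) for epoch in range(3)]

assert len(set(ids)) == 3            # identifier rotates every epoch
assert beacon_id(seed, 1) == ids[1]  # owner can recompute any epoch's ID
```

The real protocols layer public-key cryptography on top so that even the finder's phone can't read the location it reports, but the rotation idea is the core of why the broadcast can stay always-on without becoming a tracking beacon.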


> your phone doesn't have internet access

> simply based on a BLE ping

What if they disable Bluetooth?

The linked open source find my device app uses cell network to both receive commands and for location.


On Apple's devices, you cannot disable that BLE broadcast. The same is true on the Pixel 8: https://www.theverge.com/2024/4/8/24123909/google-pixel-8-pr.... That's the whole point of this design, and fundamentally different than just having some app ping a server once in a while so long as it has network access.


Wow that's a serious privacy concern


Maybe? I dunno.

The design of this is pretty clever: https://www.wired.com/story/apple-find-my-cryptography-bluet....

I presume somewhere you can turn this off for real, but the defaults seem sensible to me.


Even if you turn on airplane mode on the device?


Beats me. I can't find anything definitive on this. Since Apple devices continue to broadcast Find My signals even when powered off (as long as they have a little bit of battery left), I assume they continue to do so in airplane mode.

It wouldn't do a lot of good if thieves could just turn off Bluetooth, right?


Airplane mode doesn't turn Bluetooth or Wi-Fi off at all on recent OS versions of Android or iOS.


Really? What's airplane mode even for then?


It disables cell data and calling.


Really? Why did they even leave the feature in then?


It turns off cellular, which is the main problem (Wi-Fi is common enough on modern airplanes that I have to assume the interference risk is low; same for Bluetooth).


Define "disable" ;)

I'd guess that Bluetooth will never fully shut off, it would just look like it's turned off to other apps that would want to use it.


Lots of comments about "I’m no lawyer" and "But is this illegal? Once again, I’m not a lawyer so I don’t know," which makes the conclusion a bit, um, weird:

"A lot of this feels yucky, and none of the things mentioned in the case should be a surprise to anyone who has been following the Apple space for years. That said, it’s one thing for me to blog that Apple should change something, it’s another thing when the DOJ says it’s illegal. I think the DOJ has an uphill battle in winning this case..."


At the risk of seeming like an asshole:

I think for every highly competent person who just lacks a bit of social graces and is unfairly punished by a defensive bureaucracy, I have encountered many more incompetent people who, due to Dunning-Kruger, don't recognize their own incompetence, and instead ascribe the rejection of their (mediocre) ideas to the unfair defensiveness of the bureaucracy above them.

Or, in meme form: https://imgflip.com/i/8ks5kq.


How have you ever gotten the full story so many times to know that these people exist in such numbers? You'd have to hear their bad idea (apparently be intelligent enough to understand them completely) and then you'd also be there to hear them griping and blaming management and again finding their complaints uncompelling.


Hmm, let me put it this way:

I have often run into people who seem to think management is stupid for not accepting their idea, which they then explain--and which I also think is a bad idea.

Maybe I'm also just dumb, though!


> highly competent person who just lacks a bit of social graces

I consider myself one of these people (let's say above average competency). I don't think management is stupid for not accepting my ideas. I begin to have an issue when they disregard the concerns my idea was meant to address. Too often, it feels as though they choose the path which leads us straight into what I think are clearly foreseeable and avoidable problems, and then I'm at fault for describing them as such after the fact.


This isn’t meant to respond directly to your statement because I’ve seen the same thing. BUT one fascinating thing I’ve learned is how scale plays into things. That $50 million project may be a Senior Director’s most important, career-making project … but less than a rounding error to their EVP.


Amusing anecdote: On average, people think they are above average.


> many more incompetent people who, due to Dunning-Kruger, don't recognize their own incompetence

You are right of course, I am myself living proof of that, and I would not wish it on my worst enemy organization to give me a promotion. That said, this doesn't really explain why so many incompetent people end up being promoted, which Peter (I believe correctly) documented. His theory is admittedly a bit more elaborate than mine, but it obviously builds on an endearing naïveté regarding the nature of organizations, especially large and mature ones.


Dunning-Kruger seems like an overused framework to explain just pure "lack of self-awareness due to immaturity / ego / lack of intelligence, etc."


Not to mention that if someone isn't a psychologist they shouldn't be spouting off about the Dunning-Kruger effect anyway because arguably they don't have enough competence in that particular domain to be able to talk about it intelligently.


Hell, I have a PhD in psychology and I don't know enough about this effect to talk about it intelligently.


I don’t think the takeaway from the Dunning-Kruger effect is "don’t invoke basic aspects of human nature on internet discussion forums unless you’re a trained academic psychologist"


Isn't people who don't know enough about Dunning-Kruger confidently spouting off about it...sort of evidence of Dunning-Kruger?

I kid of course. Or do I?


I always found it a bit weird how Altman leveraged Loopt (which AFAICT did not make his investors money and was basically a failure) into giving advice to aspiring founders, which he leveraged into, um, whatever he's doing now.

Apparently a smart guy, but it's always hard to distinguish "smart at selling himself" from "smart at building good things."


Sam is incredibly smart and has always given great advice.

And even though Loopt failed, it failed in the right way. This is America: failure is not a negative.

What he has done with OpenAI is only the second time true disruption has happened in the last decade after SpaceX.


He's clearly smart.

I am not sure I get the "failed in the right way", though. From one point of view, I think it's a great story here that, given how much of a role is played by luck and external factors, someone who might have proven himself to have great instincts and intelligence can get ahead even if, due to external factors, his particular startup failed.

But from another point of view, Sam is just an example of "those who can, do; those who can't...", only in this case it's "become hugely successful VCs and CEOs."

Which is weird! But maybe the wrong interpretation. I don't know.


Correction: what the authors of that Google Brain paper on attention did was the real disruption. Without them, Sam would have produced a me-too check-in startup. It's so weird that the current frame of thinking is that CEOs are the ones who innovate.


I never said innovate. I specifically used the word disruption. As I said in another post if people only knew how hard it is to get all the pieces in place to create disruption, these founders would have more respect.


You're being way too charitable. Anthropic and Google are very close, almost equivalent? SpaceX is 5-10 years ahead of its competition.


He's super smart, just too ambitiously political for many people's taste, including mine.


The Valley is now acknowledging a history of shady dealings that flew a number of acquisitions into mountains right after their founders flew away on golden wings. Loopt, Socialcam, and Twitch have entered the chat.

Everyone who has ever slept with a YC founder has known this for more than 10 years, and there are still going to be holdouts?

It’s over folks. It was fun while it lasted. Not.


wat? sam is known for leading yc if anything


He went from being basically a failed founder of a YC company to running YC. Then rumors are that he was fired from YC and then somehow went into AI stuff from there.

He's one of those guys like Kevin Rose who seems to be able to fail upward again and again. Like every venture is a failure but the person responsible ends up being given even more responsibility and even more money. They're very common in silicon valley for some reason.


Probably some sort of self dealing among some insider group.


There's a guy like that who's been president once and is trying again. So the grandparent post is right: this is the American way.


Yeah, as the other reply here said, that's what I was referring to with "giving advice."

As in, his rep seems (from my distant POV) to be about his acumen as a mentor, not investor. Maybe that's bullshit! Maybe he's just great at picking the right horses! And that's totally a talent on its own, and one deserving of a lot of respect.

But it seems like his star was really made on the perception that he knows how to give good advice, which as I said above has a bit of a weird "those who can, do..." vibe to it.


I assume that's the "into giving advice to aspiring founders" bit.

