
> It seems like a mistake to lump HDR capture, HDR formats and HDR display together, these are very different things.

These are all related things. When you talk about color, you can be talking about color cameras, color image formats, and color screens, but the concept of color transcends the implementation.

> The claim that Ansel Adams used HDR is super likely to cause confusion, and isn’t particularly accurate.

The post never said Adams used HDR. I very carefully chose the words, "capturing dramatic, high dynamic range scenes."

> Previously when you took a photo, if you over-exposed it or under-exposed it, you were stuck with what you got. Capturing HDR gives the photographer one degree of extra freedom, allowing them to adjust exposure after the fact.

This is just factually wrong. Film negatives have 12-stops of useful dynamic range, while photo paper has 8 stops at best. That gave photographers exposure latitude during the print process.

> Ansel Adams wasn’t using HDR in the same sense we’re talking about, he was just really good at capturing the right exposure for his medium without needing to adjust it later.

There's a photo of Ansel Adams in the article, dodging and burning a print. How would you describe that if not adjusting the exposure?


> Film negatives have 12-stops of useful dynamic range

No, that's not inherently true. AA used 12 zones; that doesn't mean every negative stock has 12 stops of latitude. Stocks differ, so you need to look at the curves.

But yes most modern negatives are very forgiving. FP4 for example has barely any shoulder at all iirc.


I agree capture, format and display are closely related. But HDR capture and processing developed largely independently of HDR display devices, and the availability of HDR displays changes how HDR images are used compared to LDR displays.

> The post never said Adams used HDR. I very carefully chose the words

Hey I’m sorry for criticizing, but I honestly feel like you’re being slightly misleading here. The sentence “What if I told you that analog photographers captured HDR as far back as 1857?” is explicitly claiming that analog photographers use “HDR” capture, and the Ansel Adams sentence that follows appears to be merely a specific example of your claim. The result of the juxtaposition is that the article did in fact claim Adams used HDR, even if you didn’t quite intend to.

I think you're either misunderstanding me a little, or maybe unaware of some of the context of HDR and its development as a term of art in the computer graphics community. Film's 12 stops is not really "high" range by HDR standards, and a little exposure latitude isn't where "HDR" came from. The more important part of HDR was the intent to push toward absolute physical units like luminance. That doesn't just enable deferred exposure, it enables physical and perceptual processing in ways that aren't possible with film. It enables calibrated integration with CG simulation that isn't possible with film. And it enables a much wider range of exposure push/pull than you can do when going from 12 stops to 8. And of course non-destructive digital deferred exposure at display time is quite different from a print exposure.

Perhaps it’s useful to reflect on the fact that HDR has a counterpart called LDR that’s referring to 8 bits/channel RGB. With analog photography, there is no LDR, thus zero reason to invent the notion of a ‘higher’ range. Higher than what? High relative to what? Analog cameras have exposure control and thus can capture any range you want. There is no ‘high’ range in analog photos, there’s just range. HDR was invented to push against and evolve beyond the de-facto digital practices of the 70s-90s, it is not a statement about what range can be captured by a camera.


> The sentence “What if I told you that analog photographers captured HDR as far back as 1857?” is explicitly claiming that analog photographers use “HDR” capture,

No, it isn't. It's saying they captured HDR scenes.

> The result of the juxtaposition is that the article did in fact claim Adams used HDR

You can't "use" HDR. It's an adjective, not a noun.

> Film’s 12 stops is not really “high” range by HDR standards, and a little exposure latitude isn’t where “HDR” came from.

The Reinhard tone mapper, a benchmark that regularly appears in research papers, specifically cites Ansel Adams as inspiration.

"A classic photographic task is the mapping of the potentially high dynamic range of real world luminances to the low dynamic range of the photographic print."

https://www-old.cs.utah.edu/docs/techreports/2002/pdf/UUCS-0...
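
For anyone curious, the global operator from that paper is tiny. A rough Python sketch from memory (the 0.18 "key" is the paper's nod to middle gray; treat this as illustrative, not a drop-in implementation):

    import numpy as np

    def reinhard_global(luminance, key=0.18, l_white=None):
        # Scale scene luminance so its log-average lands on the chosen "key",
        # echoing Adams placing a subject on a particular zone.
        log_avg = np.exp(np.mean(np.log(1e-6 + luminance)))
        l = key * luminance / log_avg
        if l_white is None:
            return l / (1.0 + l)  # simple operator: compresses highlights smoothly toward 1
        return l * (1.0 + l / l_white ** 2) / (1.0 + l)  # variant that burns out above l_white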

> Perhaps it’s useful to reflect on the fact that HDR has a counterpart called LDR that’s referring to 8 bits/channel RGB.

8 bits per channel does not describe dynamic range. If I attach an HLG transfer function to an 8-bit signal, I have HDR. Furthermore, assuming you actually meant 8-bit sRGB, nobody calls that "LDR." It's SDR.
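
To make that concrete, here's a toy sketch of my own (constants from BT.2100 as I remember them): the bit depth only sets the quantization step, while the transfer function decides how much scene range those steps cover.

    import numpy as np

    # HLG constants from BT.2100 (from memory, double-check before using)
    A, B, C = 0.17883277, 0.28466892, 0.55991073

    def hlg_inverse_oetf(code8):
        # 8-bit code values back to relative scene light under HLG
        e = code8 / 255.0
        return np.where(e <= 0.5, e ** 2 / 3.0, (np.exp((e - C) / A) + B) / 12.0)

    def gamma22_decode(code8):
        # the same 8 bits read as plain gamma 2.2
        return (code8 / 255.0) ** 2.2

    codes = np.array([16, 128, 255])
    print(hlg_inverse_oetf(codes))  # peak sits ~12x above the mid code
    print(gamma22_decode(codes))    # peak sits ~4.6x above the mid code

Same bits, very different meaning; bit depth alone doesn't tell you the dynamic range.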

> Analog cameras have exposure control and thus can capture any range you want.

This sentence makes no sense.


Sorry man, you seem really defensive, I didn’t mean to put you on edge. Okay, if you are calling the scenes “HDR” then I’m happy to rescind my critique about Ansel Adams and switch instead to pointing out that “HDR” doesn’t refer to the range of the scene, it refers to the range capability of a digital capture process. I think the point ultimately ends up being the same either way. Hey where is HDR defined as an adjective? Last time I checked, “range” could be a noun, I think… no? You must be right, but FWIW, you used HDR as a noun in your 2nd to last point… oh and in the title of your article too.

Hey it’s great Reinhard was inspired by Adams. I have been too, like a lot of photographers. And I’ve used the Reinhard tone mapper in research papers, I’m quite familiar with it and personally know all three authors of that paper. I’ve even written a paper or maybe two on color spaces with one of them. Anyway, the inspiration doesn’t change the fact that 12 stops isn’t particularly high dynamic range. It’s barely more than SDR. Even the earliest HDR formats had like 20 or 30 stops, in part because the point was to use physical luminance instead of a relative [0..1] range.
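
If it helps make that concrete: Greg Ward's Radiance RGBE format gets its range from a shared exponent byte per pixel. Decoding is roughly this (from memory; some decoders also add 0.5 to the mantissa):

    import math

    def rgbe_to_float(r, g, b, e):
        # Decode one Radiance RGBE pixel (four bytes) to linear floats.
        if e == 0:
            return (0.0, 0.0, 0.0)
        scale = math.ldexp(1.0, e - 136)  # 2**(e - 128 - 8): shared exponent, 8-bit mantissas
        return (r * scale, g * scale, b * scale)

The exponent byte alone allows a couple hundred stops of potential range, which is why those early formats dwarf anything film or 8-bit RGB can hold.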

8 bit RGB does sort-of in practice describe a dynamic range, as long as the 1-bit difference is approximately the 'just noticeable difference' or JND as some researchers call it. This happens to line up with 8 bits being about 8 stops, which is what RGB images have been doing for like 50 years, give or take. While it's perfectly valid arithmetic to use 8-bit values to represent an arbitrary amount like 200 stops or 0.003 stops, it'd be pretty weird.

Plenty of people have called and continue to call 8 bit images "LDR"; here are just three of the thousands of uses of "LDR" [1][2][3], and LDR predates usage of SDR by like 15 years maybe? LDR predates sRGB too; I did not actually mean 8-bit sRGB. LDR and SDR are close but not quite the same thing, so feel free to read up on LDR. It's disappointing you ducked the actual point I was making, which is still there even if you replace LDR with SDR.

What is confusing about the sentence about analog cameras and exposure control? I’m happy to explain it since you didn’t get it. I was referring to how the aperture can be adjusted on an analog camera to make a scene with any dynamic range fit into the ~12 stops of range the film has, or the ~8 stops of range of paper or an old TV. I was just trying to clarify why HDR is an attribute of digital images, and not of scenes.

[1] https://www.easypano.com/showkb_228.html#:~:text=The%20Dynam...

[2] https://www.researchgate.net/figure/shows-digital-photograph...

[3] https://irisldr.github.io/


You opened this thread arguing that Ansel Adams didn't "use HDR." I linked you to a seminal research paper which argues that he tone mapped HDR content, and goes on to implement a tone mapper based on his approach. This all seems open and shut.

> I’m happy to rescind my critique about Ansel Adams

Great, I'm done.

> and switch instead to pointing out that “HDR” doesn’t refer to the range of the scene

Oh god. Here's the first research paper that popped into my head: https://static.googleusercontent.com/media/hdrplusdata.org/e...

"Surprisingly, daytime shots with high dynamic range may also suffer from lack of light."

"In low light, or in very high dynamic range scenes"

"For high dynamic range scenes we use local tone mapping"

You keep trying to define "HDR" differently than current literature. Not even current— that paper was published in 2016! Hey, maybe HDR meant something different in the 1990s, or maybe it was just ok to use "HDR" as shorthand for when things were less ambiguous. I honestly don't care, and you're only serving to confuse people.

> the aperture can be adjusted on an analog camera to make a scene with any dynamic range fit into the ~12 stops of range the film has, or the ~8 stops of range of paper or an old TV.

You sound nonsensical because you keep using the wrong terms. Going back to your first sentence that made no sense:

> Analog cameras have exposure control and thus can capture any range you want

You keep saying "range" when, from what I can tell, you mean "luminance." Changing a camera's aperture scales the luminance hitting your film or sensor. It does not alter the dynamic range of the scene.

Analog cameras cannot capture any range. By adjusting camera settings or attaching ND filters, you can change the window of luminance values that will fit within the dynamic range of your camera. To say a camera can "capture any range" is like saying, "I can fit that couch through the door, I just have to saw it in half."
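
To put numbers on it (made-up luminances): scaling exposure slides the window, it doesn't change the ratio that dynamic range measures.

    import numpy as np

    scene = np.array([0.5, 20.0, 8000.0])  # hypothetical luminances in cd/m^2
    stops = lambda x: np.log2(x.max() / x.min())

    print(stops(scene))        # ~13.97 stops
    print(stops(scene / 8.0))  # stopped down 3 stops: still ~13.97 stops, just dimmer values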

> And I’ve used the Reinhard tone mapper in research papers, I’m quite familiar with it and personally know all three authors of that paper. I’ve even written a paper or maybe two on color spaces with one of them.

I'm sorry if correcting you triggers insecurities, but if you're going to make an appeal to authority, please link to your papers instead of hand waving about the people you know.


Hehe outside is “HDR content”? To me that still comes off as confused about what HDR is. I know you aren’t, but that’s what it sounds like. A sunny day has a high dynamic range for sure, but the acronym HDR is a term of art that implies more than that. Your article even explains why.

Tone mapping doesn’t imply HDR. Tone mapping is always present, even in LDR and SDR workflows. The paper you cited explicitly notes the idea is to “extend” Adams’ zone system to very high dynamic range digital images, more than what Adams was working with, by implication.

So how is a “window of luminance values” different from a dynamic range, exactly? Why did you make the incorrect and obviously silly assumption that I was suggesting a camera’s aperture changes the outdoor scene’s dynamic range rather than what I actually said, that it changes the exposure? Your description of what a camera does is functionally identical. I’m kinda baffled as to why you’re arguing this part that we both understand, using hyperbole.

I hope you have a better day tomorrow. Good luck with your app. This convo aside, I am honestly rooting for you.


> Hehe outside is “HDR content”? To me that still comes off as confused about what HDR is.

"Surprisingly, daytime shots with high dynamic range may also suffer from lack of light."

That's from, "Burst photography for high dynamic range and low-light imaging on mobile cameras," written by some of the most respected researchers in computational photography. It has 342 citations according to ACM.

I'm still waiting for a link to your papers.

> Tone mapping doesn’t imply HDR.

https://en.wikipedia.org/wiki/Tone_mapping

First sentence: "Tone mapping is a technique used in image processing and computer graphics to map one set of colors to another to approximate the appearance of high-dynamic-range (HDR) images in a medium that has a more limited dynamic range."

> Why did you make the incorrect and obviously silly assumption that I was suggesting a camera’s aperture changes the outdoor scene’s dynamic range rather than what I actually said, that it changes the exposure?

Because you keep bumbling details like someone with a surface level understanding. Your replies are irrelevant, outdated, or flat out wrong. It all gives me flashbacks to working under engineers-turned-managers who just can't let go, forcing their irrelevant backgrounds into discussions.

It's cool that you studied late 90s 3D rendering. So did I. It doesn't make you an expert in computational photography. Please stop confusing people with your non-sequiturs.


What does the lack of light quote prove? That’s a statement about color resolution, not range, and it uses “high dynamic range” and not “HDR content”. I think you’ve missed my point and are not listening.

Yes tone mapping is used on HDR images. It just doesn't imply HDR. SDR gamma is tone mapping, for example, which the Wikipedia link you sent explains. Your claim is that Adams' use of tone mapping is evidence that he was capturing "HDR content". The paper you sent doesn't use that language; it never says Adams was doing tone mapping, it says they develop a tone mapping method inspired by Adams' zone system that extends the idea into higher dynamic range.

You're using your own misunderstanding and misinterpretation of my comments as evidence that they're wrong. Hey, I totally might be wrong about a lot of things, and sure, maybe I'm completely nonsensical, but you certainly haven't convinced me of that. I haven't had trouble speaking with other people about HDR imaging, people who are HDR experts. All I'm getting out of this so far is that some people react very badly to any hint of critique.

From my perspective, I’m also only hearing bumbling errors, errors like that HDR is an adjective, that LDR doesn’t exist and nobody uses it, that using “range” is incorrect when I say it but not when you do and “window of luminance values” is better, and that Ansel Adams was doing HDR imaging.

Ben, we’re having a bona-fide miscommunication, and I wanted to fix it but I’m failing, and it feels like you’re determined not to fix it or find any common ground. In another environment we’d probably be having a friendly, productive and enlightening conversation. I’m sure there are some things I could learn from you.


In theory PQ specifies absolute values, but in practice it's treated as relative. Go load some PQ encoded content on an iPhone, adjust your screen brightness, and watch the HDR brightness also change. Beyond the iPhone, it would be ridiculous to render absolute values as-is, given SDR white is supposedly 100-nits; that would be unwatchable in most living rooms.
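
For reference, here's what "absolute" means on paper: the ST 2084 (PQ) EOTF maps a code value straight to nits. A sketch with the constants as I remember them from BT.2100 (worth double-checking before relying on it):

    # PQ / SMPTE ST 2084 EOTF: normalized code value -> luminance in cd/m^2 (nits)
    M1 = 2610 / 16384
    M2 = 2523 / 4096 * 128
    C1 = 3424 / 4096
    C2 = 2413 / 4096 * 32
    C3 = 2392 / 4096 * 32

    def pq_eotf(code):  # code in [0, 1]
        p = code ** (1.0 / M2)
        return 10000.0 * (max(p - C1, 0.0) / (C2 - C3 * p)) ** (1.0 / M1)

    print(pq_eotf(0.58))  # roughly 200 nits -- nominally fixed, yet phones rescale it anyway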

Bad HDR boils down to poor taste and the failure of platforms to rein it in. You can't fix bad HDR by switching encodings any more than you can fix global warming by switching from Fahrenheit to Celsius.


I don't think that's the colloquial meaning. If you asked 100 people on the street to describe HDR, I doubt a single person would bring up ITU-R BT.2100.


HDR has a number of different common meanings, which adds to the confusion.

For example, in video games, "HDR" has been around since the mid '00s, and refers to games that render a wider dynamic range than displays were capable of, then use post-process effects to simulate artifacts like bloom and pupil dilation.

In photography, HDR has almost the opposite meaning of what it does everywhere else. Long and multiple exposures are combined to create an image that has very little contrast, bringing out detail in a shot that would normally be lost in shadows or to overexposure.
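
To make the photography meaning concrete, the merge step looks roughly like this (a toy sketch, assuming a static scene, linear data, and known exposure times; real weighting schemes vary):

    import numpy as np

    def merge_exposures(frames, times):
        # frames: list of linear images scaled to [0, 1]; times: exposure times in seconds
        num = np.zeros_like(frames[0])
        den = np.zeros_like(frames[0])
        for img, t in zip(frames, times):
            w = 1.0 - np.abs(img - 0.5) * 2.0  # trust mid-tones, distrust clipped/noisy ends
            num += w * img / t                 # scale each frame back to scene radiance
            den += w
        radiance = num / np.maximum(den, 1e-6)
        return radiance / (1.0 + radiance)     # crude global tone map down to a flat SDR look

That last line is where the trademark low-contrast look comes from when it's pushed too hard.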


Photography’s meaning is also about a hundred years older than video games; high(er) dynamic range was a concern in film processing as far back as Ansel, if not prior. That technology adopted it as a sales keyword is interesting and it’s worth keeping in mind when writing for an audience — but this post is about a photography app, not television content or video games, so one can reasonably expect photography’s definition to be used, even if the audience isn’t necessarily familiar.


> Does anyone else find the hubris in the first paragraph writing as off-putting as I do?

> "we finally explain what HDR actually means"

No. Because it's written for the many casual photographers we've spoken with who are confused and asked for an explainer.

> Then spends 2/3rds of the article on a tone mapping expedition, only to not address the elephant in the room, that is the almost complete absence of predictable color management in consumer-grade digital environments.

That's because this post is about HDR and not color management, which is a different topic.


>No. Because it's written for the many casual photographers we've spoken with who are confused and asked for an explainer.

To be fair, it would be pretty weird if you found your own post off-putting :P


Me, routinely, reading things I wrote a while ago: what is this dreck


> That's because this post is about HDR

It's about HDR from the perspective of still photography, in your app, on iOS, in the context of hand-held mobile devices. The post's title ("What Is HDR, Anyway?"), content level and focus would be appropriate in the context of your company's social media feeds for users of your app - which is probably the audience and context it was written for. However in the much broader context of HN, a highly technical community whose interests in imaging are diverse, the article's content level and narrow focus aren't consistent with the headline title. It seems written at a level appropriate for novice users.

If this post was titled "How does Halide handle HDR, anyway?" or even "How should iOS photo apps handle HDR, anyway?" I'd have no objection about the title's promise not matching the content for the HN audience. When I saw the post's headline I thought "Cool! We really need a good technical deep dive into the mess that is HDR - including tech, specs, standards, formats, content acquisition, distribution and display across content types including stills, video clips and cinematic story-telling and diverse viewing contexts from phones to TVs to cinemas to VR." When I started reading and the article only used photos to illustrate concepts best conveyed with color gradient graphs PLUS photos, I started to feel duped by the title.

(Note: I don't use iOS or your app but the photo comparison of the elderly man near the end of the article confused me. From my perspective (video, cinematography and color grading), the "before" photo looks like a raw capture with flat LUT (or no LUT) applied. Yet the text seemed to imply Halide's feature was 'fixing' some problem with the image. Perhaps I'm misunderstanding since I don't know the tool(s) or workflow but I don't see anything wrong with the original image. It's what you want in a flat capture for later grading.)


> It's about HDR from the perspective of still photography, in your app, on iOS, in the context of hand-held mobile devices.

It's from the perspective of still photography, video, film, desktop computing, decades of research papers, and hundreds of years of analog photography, condensed into something approachable.

> However in the much broader context of HN, a highly technical community whose interests in imaging are diverse, the article's content level and narrow focus aren't consistent with the headline title. It seems written at a level appropriate for novice users.

"On-Topic: Anything that good hackers would find interesting. That includes more than hacking and startups. If you had to reduce it to a sentence, the answer might be: anything that gratifies one's intellectual curiosity"

To be clear, I didn't submit the post, and I never submit my posts. I don't care if my posts make a splash here, and kind of dread when they do because anything involving photography or video attracts the most annoying "well actually" guys on the Internet.

> When I saw the post's headline I thought "Cool! We really need a good technical deep dive into the mess that is HDR - including tech, specs, standards, formats, content acquisition, distribution and display across content types including stills, video clips and cinematic story-telling and diverse viewing contexts from phones to TVs to cinemas to VR."

The post is called, "What is HDR," and the introduction explains the intended audience. That audience is much larger than "people who want to read about ITU-R Recommendation BT.2100." But if you think people are interested in a post like that, by all means write it.


> However in the much broader context of HN, a highly technical community whose interests in imaging are diverse, the article's content level and narrow focus aren't consistent with the headline title. It seems written at a level appropriate for novice users.

That is hardly the fault of the authors though. The article seems entirely appropriate for its intended audience, and they can’t control who posts it on a site like HN.


Maybe my response was part of the broader HDR symptom—that the acronym is overloaded with different meanings depending on where you're coming from.

On the HN frontpage, people are likely thinking of one of at least three things:

HDR as display tech (hardware)

HDR as wide gamut data format (content)

HDR as tone mapping (processing)

...

So when the first paragraph says "we finally explain what HDR actually means," it set me off on the wrong foot; it comes across pretty strongly for a term that's notoriously context-dependent. Especially in a blog post that reads like a general explainer rather than a direct Q&A response, when it's not coming through your app's channels.

Then the follow-up, "The first HDR is the 'HDR mode' introduced to the iPhone camera in 2010," is what caused me to write the comment.

For people over 35 with even the faintest interest in photography, the first exposure to the HDR acronym probably didn't arrive with the iPhone in 2010; for them, HDR is equivalent to Photomatix-style tone mapping, starting in 2005, as even mentioned later in the post. The ambiguity of the term is a given now. I think it's futile to insist on or police one meaning over the other in non-scientific, informal communication; just use more specific terminology.

So the correlation between what HDR means, or what sentiment it evokes in people, and age group and self-assessed photography skill might be something worthwhile to explore.

The post gets a lot better after that. That said, I really did enjoy the depth: the dive into the classic dodge and burn, and the linked YouTube piece. One explainer at a time makes sense, and tone mapping is a good place to start. Even tone mapping is fine in moderation :)


I took the post about the same way. Thought it excellent because of depth.

Often we don't get that, and on this topic, given my relative ignorance of it, I welcomed the post as written.


Just out of curiosity, since your profile suggests you're from an older cohort: do you actively remember the Photomatix tone mapping era? Or were you already old enough to see it as a passing fad, or was it a more niche thing than I remember?

Now I even remember the 2005 HDR HL2 Lost Coast Demo was a thing 20 years ago: https://bit-tech.net/previews/gaming/pc/hl2_hdr_overview/1/


I was old enough to see it as the passing fad it was.

Niche, style points first kind of thing for sure.

Meta: I'm old enough that getting either a new color that wasn't intended, or an additional one visible on screen, while having the machine remain able to perform, was a big deal.


I missed the MDA/EGA/CGA/Hercules era and jumped right into glorious VGA. Only the startup options for some DOS games hinted at that drama in the mid '90s; otherwise I had no idea what any of it meant.


It is a fun era NOW. I love the pre VGA PC and earlier systems, like Apple 2, Atari. Am building a CGA system with a K2 CPU to go hacking on to see what was possible. I have, as do many, unfinished business :)

Back then, it was fun at times, but it was also limiting in ways that are sometimes hard to fathom.

Things are crazy good now, BTW. Almost anything is a few clicks away. The CRT is old, panels so damn good..


> "The first HDR is the "HDR mode" introduced to the iPhone camera in 2010."

Yeah, I had a full halt and process exception on that line too. I guess all the research, technical papers and standards development work done by SMPTE, Kodak, et al in the 1990s and early 2000s just didn't happen? Turns out Apple invented it all in 2010 (pack up those Oscars and Emmys awarded for technical achievement and send'em back boys!)


This post is written for people who have heard "HDR" and feel confused. That introduction lists two types of HDR people might think about. "The first" means "the first of two types we're going to explain," not "the first research in the chronological history of HDR."


Human vision has around 20 stops of static dynamic range. Modern digital cameras can't match human vision— a $90,000 Arri Alexa boasts 17 stops— but they're way better than SDR screens.
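
(If the unit is unfamiliar: a stop is a doubling, so those figures correspond to roughly these contrast ratios.)

    for stops in (8, 12, 17, 20):  # SDR-ish screen, film negative, Alexa, static human vision
        print("%2d stops = %7d:1 contrast" % (stops, 2 ** stops))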


No. When you simply adjust shadows and highlights, you lose local contrast. In an early draft of the post, there was an example, but it was cut for pacing.


While it isn't touched on in the post, I think the issue with feeds is that platforms like Instagram have no interest in moderating HDR.

For context: YouTube automatically edits the volume of videos that have an average loudness beyond a certain threshold. I think the solution for HDR is similar penalization based on log luminance or some other reasonable metric.
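
To sketch what I mean (purely hypothetical numbers and logic, not anything any platform actually does): measure how far, on average, a clip sits above SDR reference white, and scale its HDR headroom down when it's too hot.

    import numpy as np

    SDR_WHITE = 203.0  # nits; a common reference for graphics white in HDR delivery

    def hdr_gain_limit(luminance_nits, max_headroom_stops=1.0):
        # Average log2 headroom above SDR white across the pixels that exceed it.
        headroom = np.log2(np.maximum(luminance_nits, SDR_WHITE) / SDR_WHITE)
        hot = headroom[headroom > 0]
        if hot.size == 0:
            return 1.0
        return min(1.0, max_headroom_stops / hot.mean())  # scale factor for the clip's HDR gain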

I don't see this happening on Instagram any time soon, because bad HDR likely makes view counts go up.

As for the HDR photos in the post, well, those are a bit strong to show what HDR can do. That's why the Mark III beta includes a much tamer HDR grade.


> YouTube automatically edits the volume of videos that have an average loudness beyond a certain threshold.

For anyone else who was confused by this, it seems to be a client-side audio compressor feature (not a server-side adjustment) labeled as "Stable Volume". On the web, it's toggleable via the player settings menu.

https://support.google.com/youtube/answer/14106294

I can't find exactly when it appeared but the earliest capture of the help article was from May 2024, so it is a relatively recent feature: https://web.archive.org/web/20240523021242/https://support.g...

I didn't realize this was a thing until just now, but I'm glad they added it because (now that I think about it) it's been awhile since I felt the need to adjust my system volume when a video was too quiet even at 100% player volume. It's a nice little enhancement.


YouTube has long been normalizing the volume of videos in the standard feed, switching to a -14 LUFS target in 2019. But LUFS is a global measure, meant to allow higher peaks and troughs over the whole video, and the normalization happens at a global level: if you exceed the target by 3 dB, the whole video gets its volume lowered by 3 dB, regardless of whether a given part is quiet or not.
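
In other words it's one static gain for the entire file, roughly like this (sketch; the real measurement is the gated BS.1770 loudness algorithm, not a simple average):

    def normalization_gain_db(measured_lufs, target_lufs=-14.0):
        # YouTube-style static normalization: a single gain applied to the whole video
        return min(0.0, target_lufs - measured_lufs)  # only ever turns loud videos down

    print(normalization_gain_db(-11.0))  # -3.0 dB across the board, quiet parts included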

The stable volume thing is meant to essentially level out all of the peaks and troughs, and IIRC it's actually computed server-side; I think yt-dlp can download stable-volume streams if asked to.


The client side toggle might be new since 2024 but the volume normalisation has been a thing for a long time.


I know they've automatically boosted brightness in dark scenes for a long time too. It's not rare for people to upload a clip from a video game with a very dark scene and it's way brighter after upload than it was when they played or how it looks in the file they uploaded.


Yes, and I love it! Finally, the volume knob was taken away from producers who were all about gimmicks to push their productions.

There are still gimmicks, but at least they no longer include music so badly clipped as to be unlistenable... hint: go get the DVD or Blu-ray release of whatever it is and you are likely to enjoy an unclipped album.

It is all about maximizing the overall sonic impact the music is capable of. When levels are sane, and song elements are well differentiated and equalized so that no range of frequencies (or only a minor one) gets crushed by many sounds all competing for it, it will sound full, great, and not tiring!

Thanks audio industry. Many ears appreciate what was done.


I expected a moderate amount of heat directed at my comment.

No worries. I've friends in various industries doing production who hate the change.

I like it, of course. Losing the volume knob is a direct result of the many abuses.


Instagram has to allow HDR for the same reason that Firefox spent the past twenty years displaying web colors like HN orange at maximum display gamut rather than at sRGB calibrated: because a brighter red than anyone else’s draws people in, and makes the competition seem lifeless by comparison, especially in a mixed-profiles environment. Eventually that is regarded as ‘garishly bright’, so to speak, and people push back against it. I assume Firefox is already fixing this to support the latest CSS color spec (which defines #rrggbb as sRGB and requires it to be presented as such unless stated otherwise in CSS), but I doubt Instagram is willing to literally dim their feed; instead, I would expect them to begin AI-HDR’ing SDR uploads in order that all videos are captivatingly, garishly, bright.


> I think the solution for HDR is similar penalization based on log luminance or some other reasonable metric.

I completely understand the desire to address the issue of content authors misusing or intentionally abusing HDR with some kind of auto-limiting algorithm similar to the way the radio 'loudness wars' were addressed. Unfortunately, I suspect it will be difficult, if not impossible, to achieve without also negatively impacting some content applying HDR correctly for artistically expressive purposes. Static photos may be solvable without excessive false positive over-correction but cinematic video is much more challenging due to the dynamic nature of the content.

As a cinemaphile, I'm starting to wonder if maybe HDR on mobile devices simply isn't a solvable problem in practice. While I think it's solvable technically and certainly addressable from a standards perspective, the reality of having so many stakeholders in the mobile ecosystem (hardware, OS, app, content distributors, original creators) with diverging priorities makes whatever we do from a base technology and standards perspective unlikely to work in practice for most users. Maybe I'm too pessimistic but as a high-end home theater enthusiast I'm continually dismayed how hard it is to correctly display diverse HDR content from different distribution sources in a less complex ecosystem where the stakeholders are more aligned and the leading standards bodies have been around for many decades (SMPTE et al).


I believe everything could be solved the same way we solved high dynamic range in audio, with a volume control.

I find it pretty weird that all TVs and most monitors hide the brightness adjustment under piles and piles of menus when it could be right there on the remote alongside the volume buttons. Maybe phones could have hardware brightness buttons too, at least something as easy as adjusting brightness on notebooks that have dedicated brightness fn keys.

Such a brightness slider could also control the amount of tone mapping applied to HDR content. High brightness would mean little to no tone mapping, and low brightness would use a very aggressive tone mapper, producing an image similar to the SDR content alongside it.
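
Something like this sketch is what I have in mind (parameters invented, purely to illustrate the blend):

    import numpy as np

    def display_map(hdr_linear, brightness):
        # brightness in [0, 1], straight from the hardware slider
        aggressive = hdr_linear / (1.0 + hdr_linear)  # crush highlights toward an SDR-like image
        gentle = np.clip(hdr_linear, 0.0, None)       # mostly trust the display's headroom
        k = 1.0 - brightness                          # dimmer screen -> stronger tone mapping
        return (1.0 - k) * gentle + k * aggressive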

Also note that good audio volume attenuation requires proper loudness contour compensation (as you lower the volume you also boost the bass and treble) for things to sound reasonably good and the "tone" to stay well balanced. So adjusting the tone mapping based on the brightness isn't that far off from what we do with audio.


> because bad HDR likely makes view counts go up

Another related parallel trend recently is that bad AI images get very high view and like counts, so much so that I've lost a lot of motivation for doing real photography because the platforms cease to show them to anyone, even my own followers.


Why is nobody talking about standards development? They (OS vendors, image formats) could just say that everything is assumed to be SDR by default, and that even when a media file explicitly calls for HDR it cannot have sharp transitions except in special cases, with the software blocking or clamping any non-conforming images. The OS should have had something like this for sound 25-30 years ago. For example, a brightness-aware OS/monitor combo could outright disallow anything above x nits, and disallow certain contrast levels in the majority of content.


FYI: You wrote Chrome 14 in the post, but I believe you meant Android 14.


Thanks. Updated.


Btw, YouTube doesn't moderate HDR either. I saw one video of a child's violin recital that was insanely bright, probably just an accident of using a bad HDR recorder.


The effect of HDR increasing views is explicitly mentioned in the article


You are replying to the article's author.


Hey. I’m the guy quoted.

RAW is ultimately about sensor readings. As a developer, you just want to get things from there into a linear, known color space (XYZ in the DNG spec). So from that perspective, interoperability isn’t the issue.

How you process that data is another matter. Handling a traditional bayer pattern vs a quad-bayer vs Fujifilm’s x-trans pattern obviously requires different algorithms, but that’s all moot given DNG is just a container.
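
For the curious, the developer-side flow is roughly: linearize, demosaic, then apply the matrix you derive from the DNG's color tags to land in XYZ. A compressed sketch (white balance and the spec's matrix bookkeeping glossed over; the demosaic here is a naive stand-in for the sensor-specific step):

    import numpy as np

    def naive_demosaic_rggb(bayer):
        # Half-resolution demosaic for an RGGB pattern; real pipelines do far better.
        r = bayer[0::2, 0::2]
        g = (bayer[0::2, 1::2] + bayer[1::2, 0::2]) / 2.0
        b = bayer[1::2, 1::2]
        return np.dstack([r, g, b])

    def raw_to_xyz(bayer_counts, lin_table, cam_to_xyz):
        # bayer_counts: integer sensor readings; lin_table: per the DNG linearization table;
        # cam_to_xyz: 3x3 camera-RGB-to-XYZ matrix derived from the DNG's color matrices.
        linear = lin_table[bayer_counts]
        rgb = naive_demosaic_rggb(linear)
        h, w, _ = rgb.shape
        return (rgb.reshape(-1, 3) @ cam_to_xyz.T).reshape(h, w, 3)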


> Camera sensors from different companies (and different generations) don't have the same color (or if you prefer, spectral) responses with both their Bayer filter layer and the underlying physical sensor

This is all accommodated for in the DNG spec. The camera manufacturers specify the necessary matrix transforms to get into the XYZ colorspace, along with a linearization table.

If they really think the spectral sensitivity is some valuable IP, they are delusional. It should take one Macbeth chart, a spreadsheet, and one afternoon to reverse engineer this stuff.
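
That afternoon of work is basically a least-squares fit (toy sketch; you supply the chart's published XYZ values and your averaged, linearized patch readings):

    import numpy as np

    def fit_camera_matrix(cam_rgb, ref_xyz):
        # cam_rgb: Nx3 averaged linear camera readings of the chart patches (N = 24)
        # ref_xyz: Nx3 published XYZ values for the same patches
        m, _, _, _ = np.linalg.lstsq(cam_rgb, ref_xyz, rcond=None)
        return m.T  # 3x3 matrix M with xyz ~= M @ rgb, i.e. the "valuable IP"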

Given that third-party libraries have figured this stuff out, it seems they have failed while only making things more difficult for users.


Raw decoding is an algorithm, not a container format. The issue is that everyone is coming up with their own proprietary containers for identical data that just represents sensor readings.


It's more than just a file format.

The issue is that companies want control of the demosaicing stage, and the container format is part of that strategy.

If a file format is a corporate proprietary one, then there's no expectation that they should provide services that do not directly benefit them, or that expose internal corporate trade secrets, in service to an open format.

If they have their own format, then they don't have to lose any sleep over stuff that doesn't interest or benefit them.


By definition, a RAW container contains sensor data, and nothing more. Are you saying that Adobe is using their proprietary algorithms to render proprietary RAW formats in Lightroom?


I don’t know about Adobe. I never worked for them.


Pretty sure they would lose a lot of sleep if no third-party application could open their raw files.


You'd be surprised.

They lost sleep over having images from their devices looking bad.

They wanted ultimate control of their images, and they didn't trust third-party pipelines to render them well.


so you think they'd be all happy if nobody could open the raw files in adobe software?


Yup.

Not kidding. These folks are serious control freaks. They are the most anal people I've ever met, when it comes to image Quality.

