
> We do [cubic curve fitting] all the time in image processing, and it works very well. It would probably work well for audio as well, although it's not used -- not in the same form, anyway -- in these applications.

Is there a reason the solution that "works very well" for images isn't/can't be applied to audio?


The short answer is that our eyes and ears use very different processing mechanisms. Our eyes sense with rods and cones whose layout maps directly onto the spatial structure of the image. Our ears instead work by performing an analogue Fourier transform and hearing the frequencies. If you take an image and add lots of very high-frequency noise, the result will be almost indistinguishable from the original, but if you do the same to audio it will sound like a complete mess.


AFAIK it introduces harmonic distortion


I'd love to know more about this, do you perhaps have any refs? Thanks


Not an expert in this field, just a scrub, so I can't really give you much.

There is this website that painstakingly compares many resampling algorithms from all sorts of software:

https://src.infinitewave.ca

Try its mirror if you can't access it: https://megapro17.github.io/src/index.html

The only one that says it uses cubic interpolation is the "Renoise 2.8.0 (cubic)" one, and its spectrogram isn't very promising, with all sorts of noise, intermodulation, and aliasing issues. And by switching to the 1 kHz tone spectrum view you can see some harmonics creeping up.

When I used to mess with trackers I would sometimes choose different interpolations, and bicubic definitely still colored the sound, with sometimes enjoyable results. Obviously you don't want that as a general resampler...
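
For the curious, "cubic" in a tracker usually means something in the family of Catmull-Rom / Hermite interpolation between neighbouring samples. A rough, untested sketch of the idea (names and edge handling are mine, not Renoise's):

    #include <cstddef>
    #include <vector>

    // Catmull-Rom cubic interpolation between s1 and s2, with s0/s3 as context.
    // t is the fractional position in [0, 1).
    static float cubic(float s0, float s1, float s2, float s3, float t) {
        float a = -0.5f * s0 + 1.5f * s1 - 1.5f * s2 + 0.5f * s3;
        float b =         s0 - 2.5f * s1 + 2.0f * s2 - 0.5f * s3;
        float c = -0.5f * s0             + 0.5f * s2;
        float d = s1;
        return ((a * t + b) * t + c) * t + d;
    }

    // Resample `in` by `ratio` (input rate / output rate, e.g. 44100.0 / 48000.0).
    std::vector<float> resample(const std::vector<float>& in, double ratio) {
        std::vector<float> out;
        for (double pos = 0.0; pos + 2.0 < in.size(); pos += ratio) {
            std::size_t i = static_cast<std::size_t>(pos);
            float t = static_cast<float>(pos - i);
            float s0 = in[i == 0 ? 0 : i - 1];   // clamp at the left edge
            out.push_back(cubic(s0, in[i], in[i + 1], in[i + 2], t));
        }
        return out;
    }

Because the fitted polynomial isn't band-limited, the reconstructed signal picks up energy at frequencies that weren't in the original, which is the coloration and harmonic content mentioned above.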


Just to note that this site hasn't been updated for a while.

A much better, more modern site with automated upload analysis is [1], although it is designed for finding the highest-fidelity resampler rather than A/B comparisons.

[1] https://src.hydrogenaudio.org


We saw these in Ravenna


What, no Doom running on Voyager 2?


If the Standard has anything to say about compatibility between different language versions, I doubt many developers know those details. This is a breeding ground for ODR violations, as you’re likely mixing compilers with different output (since they were built in different eras of the language’s lifetime), especially at higher optimization settings.

This flies in the face of modern principles like building all your C++, from source, at the same time, with the same settings.

Languages like Rust include these settings in symbol names as a hash to prevent these kinds of issues by design. Unless everyone on your team is a moderate-level language lawyer, you must enforce this by some other means or risk some really gnarly issues.
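
To make the failure mode concrete, here is a made-up but representative example: one shared header, two sets of compile flags, two incompatible layouts for the "same" type, and nothing in the toolchain complains (all names hypothetical):

    // widget.h -- one header, included by code built with different flags
    struct Widget {
    #ifdef ENABLE_TRACING          // defined in one library's build, not the other
        char trace[256];
    #endif
        int value;
    };
    void fill(Widget& w);          // defined in the library built WITH tracing

    // library.cpp, compiled with -DENABLE_TRACING: sizeof(Widget) == 260
    #include "widget.h"
    void fill(Widget& w) { w.value = 42; }   // writes at offset 256

    // app.cpp, compiled without the flag: sizeof(Widget) == 4
    #include "widget.h"
    int main() {
        Widget w{};   // four bytes, in this translation unit's view of the world
        fill(w);      // the library writes past the end of w: an ODR violation, silent UB
        return w.value;
    }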


> Languages like Rust include these settings in symbol names as a hash to prevent these kinds of issues by design.

Historically, C++ compilers' name mangling schemes did precisely the same thing. The 2000-2008 period for gcc was particularly painful, since the compiler developers changed the scheme very frequently, to "prevent these kinds of issues by design". The only reason most C++ developers don't think about this much any more is that most C++ compilers haven't needed to change their mangling scheme for a decade or more.


C++’s name mangling scheme handles some things like namespaces and overloading, but it does not account for other settings that can affect the ABI of the routine, like compile-time switches or optimization level.
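
To illustrate: with the Itanium ABI used by gcc and clang, the mangled name encodes the namespace and the parameter types, but nothing about how the translation unit was compiled (the mangled names below are what I'd expect, not captured compiler output):

    namespace audio {
        void process(int);    // mangles to _ZN5audio7processEi
        void process(float);  // mangles to _ZN5audio7processEf -- the overload is encoded
    }

    // Both of these builds export the very same _ZN5audio7processEi symbol:
    //   g++ -O0 -fno-exceptions -c audio.cpp
    //   g++ -O3 -ffast-math     -c audio.cpp
    // Whether or not those flags matter for compatibility, the linker has no way
    // to tell, because they never make it into the name.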


The name mangling scheme was changed to reflect things other than namespaces and overloading; it was modified to reflect fundamental compiler-version incompatibilities (i.e., the ABI).

Optimization level should never cause link time or run time issues; if it does I'd consider that a compiler/linker bug, not an issue with the language.


Looping through inflate/deflate on rotated pixels still takes more time than updating a bit in the Exif (and the chunk’s associated CRC)
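
The metadata route is essentially: overwrite one field in place, then fix up that chunk's CRC, with no recompression at all. A hedged sketch using zlib (it assumes you have already found the eXIf chunk and the offset of the orientation value inside it; walking the TIFF/IFD structure to find that offset is the fiddly part, and the orientation is a 16-bit field whose significant byte depends on the Exif byte order):

    #include <zlib.h>      // crc32(); link with -lz
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // `chunk` holds the 4-byte chunk type followed by the chunk data; the PNG
    // CRC covers exactly those bytes (not the length field that precedes them).
    std::uint32_t set_orientation(std::vector<unsigned char>& chunk,
                                  std::size_t orientation_offset,
                                  unsigned char new_orientation) {
        chunk[orientation_offset] = new_orientation;   // e.g. 6 == "rotate 90 CW to display"
        std::uint32_t crc = static_cast<std::uint32_t>(
            crc32(0L, chunk.data(), static_cast<uInt>(chunk.size())));
        return crc;   // write this back, big-endian, over the 4 CRC bytes after the data
    }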


It's still negligible from the consumer standpoint.

Like, if you had millions of images you needed to rotate on a server in a batch job, then OK.

But if you're just rotating one photo, or even a hundred, that you've just taken, it's plenty fast enough.


Dithering the errors across the image would make the final result a lot more palette-able.
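
For anyone who wants the classic version of that idea, a quick sketch of Floyd-Steinberg error diffusion for one grayscale channel quantized to black/white (untested; a real palette would need a nearest-colour search instead of the threshold):

    #include <cstddef>
    #include <vector>

    // Floyd-Steinberg dithering: quantize each pixel, then push its quantization
    // error onto the neighbours that haven't been visited yet.
    void dither(std::vector<float>& img, std::size_t w, std::size_t h) {
        for (std::size_t y = 0; y < h; ++y) {
            for (std::size_t x = 0; x < w; ++x) {
                float old_v = img[y * w + x];
                float new_v = old_v < 128.0f ? 0.0f : 255.0f;   // nearest "palette" entry
                img[y * w + x] = new_v;
                float err = old_v - new_v;
                if (x + 1 < w)     img[y * w + x + 1]       += err * 7.0f / 16.0f;
                if (y + 1 < h) {
                    if (x > 0)     img[(y + 1) * w + x - 1] += err * 3.0f / 16.0f;
                                   img[(y + 1) * w + x]     += err * 5.0f / 16.0f;
                    if (x + 1 < w) img[(y + 1) * w + x + 1] += err * 1.0f / 16.0f;
                }
            }
        }
    }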


There are plenty of posts out there on using Knuth’s dancing links as a fast sudoku solver. Has it fallen out of fashion?


Dancing links is a very cute data-structure for a backtracking search, but there are a lot more aspects of writing a good Sudoku solver than just having a good data-structure for backtracking. Propagation (making deductions), heuristics, learning, parallelism, restarts, no-goods, ...

While 9x9 Sudoku problems are trivial to solve for more or less any program, 25x25 Sudoku instances are quite tricky and a simple and fast but naive search for a solution can easily take hours.


For generating puzzles it's really useful, since it lets you determine whether a randomly generated puzzle has exactly one solution (by framing it as an exact cover problem). And it's fast, so adding it to a pipeline doesn't incur much if any overhead.
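
For what it's worth, the uniqueness check itself doesn't need dancing links specifically; the essential trick is to count solutions and stop as soon as a second one turns up, and DLX mainly makes the try/undo steps cheap. A plain-backtracking sketch of that idea (mine, untested):

    #include <array>

    // Count solutions of a 9x9 grid (0 = empty), giving up once `limit` are found.
    // A puzzle is well-formed iff count_solutions(grid, 2) == 1.
    int count_solutions(std::array<int, 81>& g, int limit, int cell = 0) {
        while (cell < 81 && g[cell] != 0) ++cell;      // find the next empty cell
        if (cell == 81) return 1;                      // grid filled: one solution
        int row = cell / 9, col = cell % 9, found = 0;
        for (int d = 1; d <= 9 && found < limit; ++d) {
            bool ok = true;
            for (int i = 0; i < 9 && ok; ++i) {        // row, column and box checks
                int box = (row / 3 * 3 + i / 3) * 9 + (col / 3 * 3 + i % 3);
                ok = g[row * 9 + i] != d && g[i * 9 + col] != d && g[box] != d;
            }
            if (!ok) continue;
            g[cell] = d;
            found += count_solutions(g, limit - found, cell + 1);
            g[cell] = 0;                               // undo -- the step DLX makes cheap
        }
        return found;
    }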


Is there any property in particular of dancing links that you think helps in determining this, or is it just that a backtracking search can be used to test all cases?

For pen-and-paper puzzles like Sudoku, the goal is usually that a solution should be findable by a series of deductive steps. For 9x9 Sudoku, most of the deductive steps used correspond to the effects that well-known propagation techniques offer [1]. With a suitable propagation level, if the puzzle is solved search-free, then one knows both that there is only one solution and that there is a deductive path to solve it.

[1]: See "Sudoku as a Constraint Problem", Helmut Simonis, https://ai.dmi.unibas.ch/_files/teaching/fs21/ai/material/ai... for some data on 9x9 Sudoku difficulty and the propagation techniques that are needed for search-free solving.


How’d they get Claude listed as one of the contributors? Is that due to changes coming into the repo from a Claude/GitHub integration?


it's just Claude Code committing and pushing for me because I'm lazy


Not lazy! This should be a requirement, so future “us” can discern authorship - just like any developer.


It will probably go the opposite way in the future, though. People will list when AI wasn't used in the loop, like how "sent from my iPhone" was both a status signal and a request for leniency when it comes to spellcheck.


if you read the article, it says it is entirely vibecoded


Perhaps not, but a big benefit according to OP is that skills introduce fewer tokens / less context pollution than MCP.


A blast from the past! I once wrote an implementation of dancing_links in C++ as part of a Sudoku solver: https://github.com/stlab/adobe_source_libraries/blob/main/ad...

