
C# has been favored by a lot of game devs for some time. You've got Godot, Unity, and I think you can do -some- things in Unreal Engine with C#...

In contrast to Java, it has added a lot of helpful constructs for high-performance code like game dev; things like `Span<T>` and `Memory<T>` plus ref structs make it easier to produce code that avoids allocation on the heap (and thus reduces GC pauses, which are a concern for most types of game dev).
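For a flavor of what that buys you, here's a minimal sketch (my own toy example, not from any engine) where the working buffer lives entirely on the stack, so the GC never sees it:

    using System;

    static class NoAllocExample
    {
        // Nothing below allocates on the managed heap,
        // so there is no garbage for the GC to pause over later.
        static int SumOfSquares(ReadOnlySpan<int> values)
        {
            int total = 0;
            foreach (var v in values)
                total += v * v;
            return total;
        }

        static void PerFrameWork()
        {
            Span<int> scratch = stackalloc int[64]; // stack memory, not GC heap
            for (int i = 0; i < scratch.Length; i++)
                scratch[i] = i;

            int result = SumOfSquares(scratch); // passed as a ref-like view, no copy of the buffer
        }
    }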

At least for now I'd rather trust Microsoft than Oracle, especially since both CoreCLR and Mono are under more permissive licenses than Java and have been for some time.


I don't think you are off base, FWIW. Unity has long kinda lagged too hard, but it also just makes things weird at times.

I think the biggest hurdle a Unity contender has to overcome is how to provide both 'similar enough' primitives and a way to easily and consistently handle the graphics/sound pipeline (i.e. a simple way to handle different platform bindings), while also making sure all of that infrastructure is AOT-friendly.
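Something like the sketch below is the shape I mean; all the names (IGpuSurface, IAudioDevice, EnginePlatform) are made up, but the point is that the binding layer stays plain interfaces with no runtime reflection, which is what keeps it AOT-friendly:

    using System;

    // Hypothetical binding layer; these types are invented names, not from any real engine.
    public interface IGpuSurface
    {
        void Present(); // swap/flush the back buffer for this platform
    }

    public interface IAudioDevice
    {
        void Submit(ReadOnlySpan<float> pcm); // push a block of mixed samples
    }

    // Every implementation is known at compile/link time (no Type.GetType,
    // no Assembly.Load), so the whole thing trims and AOT-compiles cleanly.
    public sealed class EnginePlatform
    {
        public IGpuSurface Gpu { get; }
        public IAudioDevice Audio { get; }

        public EnginePlatform(IGpuSurface gpu, IAudioDevice audio)
        {
            Gpu = gpu;
            Audio = audio;
        }
    }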


Capcom doing their own env still sounds a bit extreme to me in its own right (with the shitpost comment that I bet they have a slick dispatcher involved somewhere vs 'trust the threadpool').

But then I remember they have to deal with IL2CPP, because lots of mobile/console platforms do not allow JIT as policy.

.NET does now have 'full AOT' as a thing at least, and I believe Streets of Rage 4 used CoreRT for at least one of the platforms it was released on.

What's more hopeful about that is that you can see backers of many different architectures contributing to the ecosystem.


I'd be willing to give you access to the experiment I mentioned in a separate reply (I have a GitHub repo), as far as the output you can get for a complex app buildout.

I'll admit it's not great (probably not even good), but it definitely has throughput despite my absolute lack of caring that much [0]. Once I get past a certain stage I am thinking of doing an A-B test where I take an earlier commit and try again while paying more attention... (But I at least want to get to where there is a full suite of UOW cases before I do that, for comparison's sake.)

> Those twenty engineers must not have produced much.

I've been considered a 'very fast' engineer at most shops (e.g. at multiple shops, stories assigned to me would have a <1 multiplier for points [1]).

Twenty is a bit bloated, unless we are talking about WITCH tier. I can definitely get done in 2-3 hours what would otherwise take me a day. I say it that way because at best it's 1-2 hours but other times it's longer; some folks remember the 'best' rather than the median.

[0] - It started as 'prompt only', although after a certain point I did start being more aggressive with personal edits.

[1] - IDK why they did it that way instead of capacity, OTOH that saved me when it came to being assigned Manual Testing stories...


> I'll admit it's not great (probably not even good), but it definitely has throughput

Throughput without being good will just lead to more work down the line to correct the badness.

It's like losing money on every sale but making up for it with volume.


> I'll admit it's not great (probably not even good)

You lost me here. Come back when you're proud of it.


It's definitely scary in a way.

However, I'm still seeing a trend even in my org: better non-AI developers tend to be better at using AI to develop.

AI still forgets requirements.

I'm currently running an experiment where I try to get a design and then execute on an enterprise 'SaaS-replacement' application [0].

AI can spit forth a completely convincing-looking overall project plan [1] that turns out to have gaps when anyone, even the AI itself, tries to execute on the plan; this is where a proper, experienced developer can step in at the right points to help out.

IDK if that's the right way to venture into the brave new world, but I am at least doing my best to be at the forefront of how my org is using the tech.

[0] - I figured it was a good exercise for testing limits of both my skills prompting and the AI's capability. I do not expect success.


AI does not forget requirements when you use a spec-driven AI tool like Kiro.

Are you on the Kiro marketing team?

I think they are both asking more about 'per-pixel color filters'; that is, something like a sensor filter/glass where the color separators could change (at least 'per line') fast enough to get a proper readout of the color information.

AKA imagine a camera with R/G/B filters being quickly rotated out for 3 exposures, then imagine it again but with the technology integrated right into the sensor (and, ideally, the sensor and switching mechanism fast enough to read out with a rolling shutter competitive with modern ILCs).
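As a toy illustration of why that's attractive: if you really did get three full-resolution exposures through R, G, and B filters, combining them is just a per-pixel merge with no demosaicing guesswork (array layout and names below are invented for illustration):

    static class ThreeExposureMerge
    {
        // Toy sketch: three sequential monochrome exposures, one per filter,
        // merged into a full-color image with no interpolation at all.
        static (byte R, byte G, byte B)[,] Combine(byte[,] r, byte[,] g, byte[,] b)
        {
            int h = r.GetLength(0), w = r.GetLength(1);
            var full = new (byte R, byte G, byte B)[h, w];
            for (int y = 0; y < h; y++)
                for (int x = 0; x < w; x++)
                    full[y, x] = (r[y, x], g[y, x], b[y, x]); // true color at every pixel
            return full;
        }
    }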


JPEG with OOC (out-of-camera) processing is different from JPEG with OOPC (out-of-phone-camera) processing. Thank Samsung for forcing the need to differentiate.

I wrote the raw Bayer to JPEG pipeline used by the phone I write this comment on. The choices on how to interpret the data are mine. Can I tweak these afterwards? :)

I found the article you wrote on processing Librem 5 photos:

https://puri.sm/posts/librem-5-photo-processing-tutorial/

Which is a pleasant read, and I like the pictures. Has the Librem 5's automatic JPEG output improved since you wrote the post about photography in Croatia (https://dosowisko.net/l5/photos/)?


Yes, these are quite old. I've written a GLSL shader that acts as a simple ISP capable of real-time video processing and described it in detail here: https://source.puri.sm/-/snippets/1223

It's still pretty basic compared to hardware-accelerated state of the art, but I think it produces decent output in a fraction of a second on the device itself, which isn't exactly a powerhouse: https://social.librem.one/@dos/115091388610379313

Before that, I had an app for offline processing that was calling darktable-cli on the phone, but it took about 30 seconds to process a single photo with it :)


I mean, it depends: does your Bayer-to-JPEG pipeline try to detect things like 'this is a zoomed-in picture of the moon' and then do auto-fixup to put a perfect moon image there? That's why there's now some need to differentiate between SOOCs, because Samsung did exactly that.

I know my Sony gear can't call out to AI because the Wi-Fi sucks like on every other Sony product and barely works inside my house, but I also know the first ILC manufacturer that tries to put AI right into RAW files is probably the first to leave part of the photography market.

That said, I'm a purist to the point where I always offer RAWs for my work [0] and don't do any Photoshop/etc.; just D/A, horizon, brightness adjust/crop to taste.

Where phones can possibly do better is the smaller size and true-MP structure of a cell phone camera sensor, which makes it easier to handle things like motion blur and rolling shutter.

But I have yet to see anything that gets closer to an ILC in true quality than the decade-plus-old PureView cameras on Nokia phones, probably partly because they often had large enough sensors.

There's only so much computation can do to simulate true physics.

[0] - I've found people -like- that. TBH, it helps that I tend to work cheap or for barter-type jobs in that scene; however, it winds up being something where I've gotten repeat work, because they found that hiring me plus a 'photoshop person' was cheaper than getting an AIO pro.


OK now do Fuji Super CCD (where for reasons unknown the RAW is diagonal [0])

[0] - https://en.wikipedia.org/wiki/Super_CCD#/media/File:Fuji_CCD...


The reasons aren’t exactly unknown, considering that the sensor is diagonally oriented also?

Processing these does seem like more fun though.


Well, that's why back in the day (and even still) 'photographer listing their whole kit for every shot' is a thing you sometimes see.

i.e. camera + lens + ISO + shutter speed + f-stop + focal length + TC (if present) + filter (if present). Add focus distance if being super duper proper.

And some of that is to help at least provide the right requirements to try to recreate.
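If you squint, that listing is basically a little record of the shot's inputs; something like this hypothetical shape (every field name below is invented for illustration, not any real EXIF schema):

    // Hypothetical per-shot listing; field names are made up for illustration.
    public sealed record ShotInfo(
        string Camera,
        string Lens,
        int Iso,
        string ShutterSpeed,     // e.g. "1/250"
        double FStop,
        double FocalLengthMm,
        string? Teleconverter,   // only if present
        string? Filter,          // only if present
        double? FocusDistanceM); // the 'super duper proper' extra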


NGL, it would be nice if there were a clear link to the test cases used, both for OP and for the person you are replying to... Kinda get it in OP's case tho.

I measured the raw horsepower of the JIT engine itself, not the speed of the standard library (BCL). My results show that the Mono engine is surprisingly capable when executing pure IL code, and that much of the 'slowness' people attribute to Mono actually comes from the libraries, not the runtime itself.

In contrast, the posted article uses a very specific, non-standard, and apples-to-oranges benchmark. It is essentially comparing a complete game engine initialization against a minimal console app (as far as I understand), which explains the massive 3x-15x differences reported. The author is actually measuring "Unity Engine overhead + Mono vs. raw .NET", not "Mono vs. .NET" as advertised. The "15x" figure very likely comes from the specific microbenchmark (struct-heavy loop) where Mono's optimizer fails, extrapolated to imply the whole runtime is that much slower.
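To illustrate the distinction (this is a made-up sketch, not the actual benchmark code): the first loop below only exercises the JIT's codegen, while the second spends nearly all of its time inside the BCL.

    // Illustrative only; names and iteration counts are invented.
    using System;
    using System.Diagnostics;

    struct Vec2 { public double X, Y; }

    static class JitVsBcl
    {
        // Pure IL: struct math in a tight loop, no library calls in the hot path.
        static double PureIlLoop(int n)
        {
            var v = new Vec2 { X = 1.0001, Y = 0.9999 };
            double acc = 0;
            for (int i = 0; i < n; i++)
            {
                v.X = v.X * 1.0000001 + 0.0000001;
                v.Y = v.Y * 0.9999999 + 0.0000001;
                acc += v.X * v.Y;
            }
            return acc;
        }

        // BCL-heavy: the loop body is dominated by library code (ToString).
        static int BclHeavyLoop(int n)
        {
            int len = 0;
            for (int i = 0; i < n; i++)
                len += i.ToString().Length;
            return len;
        }

        static void Main()
        {
            var sw = Stopwatch.StartNew();
            PureIlLoop(50_000_000);
            Console.WriteLine($"pure IL:   {sw.ElapsedMilliseconds} ms");

            sw.Restart();
            BclHeavyLoop(5_000_000);
            Console.WriteLine($"BCL-heavy: {sw.ElapsedMilliseconds} ms");
        }
    }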


Can we reproduce your results for Mandelbrot?

You can find all necessary information/data in the article (see references). Finding the same hardware that I used might be an issue though. Concerning Mandelbrot, I wouldn't spend too much time on it, because the runtime was so short for some targets that it likely has a big error margin compared to the other results. For my purposes this is not critical because of the geometric mean over all factors.

I think we are trying to find something like 'can we pull this branch/commit/etc. and build it to reproduce the results'.
