Funny you mention that. I very recently came back from a one-shot prompt that fixed a rather complex template instantiation issue in a relatively big, very convoluted low-level codebase (lots of asm, SPDK / userspace NVMe, unholy shuffling of data between NUMA domains into shared L3/L2 caches). That codebase may not be millions of lines of code, but it's definitely complex enough to need a month of onboarding time. Or, you know, just give Claude Opus 4.5 an LLDB backtrace with 70% of symbols missing due to unholy linker gymnastics and get a working fix in 10 minutes.
And those are the worst models we will have used from now on.
Template instantiation is relatively simple and can be resolved immediately. Figuring out how four different libraries interact, with undefined behavior to boot, is not going to be easy for AI for a while.
Visual puzzle solving is a pretty easily trainable problem because it's simple to verify, so that skill getting really good is just a matter of time.
In Go you know exactly what code you're building thanks to go.sum, and it's much easier to audit changed code after upgrading: create vendor dirs before and after updating packages and diff them; send the diff to AI for basic screening if it's >100k LOC, and/or review it manually. My projects are massive codebases with thousands of deps and >200 MB stripped binaries of literally just code, and this is perfectly feasible. (And yes, I do catch stuff occasionally, though nothing actively adversarial so far.)
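The before/after vendor diff workflow looks roughly like this. The real `go` commands need a live module and network access, so this sketch stands them in with a fake one-file vendor tree (paths and version strings are made up for the demo); the commented commands are what you'd actually run in your repo root.

```shell
# Stand-in for a real repo; in practice run the commented go commands instead.
mkdir -p demo/vendor/example.com/lib && cd demo
printf 'package lib // v1.0.0\n' > vendor/example.com/lib/lib.go
# 1) Snapshot the vendor tree before upgrading:
#      go mod vendor && cp -r vendor vendor.before
cp -r vendor vendor.before
# 2) Upgrade and re-vendor (simulated here by bumping the file):
#      go get -u ./... && go mod tidy && go mod vendor
printf 'package lib // v1.1.0\n' > vendor/example.com/lib/lib.go
# 3) Diff the two trees to see exactly what code the upgrade pulled in.
diff -ru vendor.before vendor || true
```

The point is that the diff is over actual vendored source, not version numbers, so nothing an upgrade changes can hide from review.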
If you have Pro users, why not leverage debt rather than giving up equity for no good reason?
Maybe the value prop is not clear: the website talks a lot about AI agent integrations, which sounds like a completely different product from a parser library, which, however advanced it may be, investors will likely see as a tangential bit of IP that a senior engineer could build for $10-20k in a few days.
Thanks for the suggestion! At this stage, debt is not a feasible option for us. Our focus is on scaling the business quickly and VC funding is the preferable route to achieve that. Having the runway and support from investors will allow us to fully dedicate ourselves to growth and execute our vision effectively.
Re value prop: true. While the existing clients are using FIXParser as a plain old library, I was thinking about where the puck is going to be. It's going to be all MCP with A2A frameworks, so I pivoted and most of the focus was on building the MCP features. We have received interest from VERY prominent firms in finance thanks to our focus on MCP FIX features. I doubt a senior engineer can put that together in a few days.
It does address quite a few reliability issues: you can have multiple gateways into the Thread network, so it is actually highly available.
It's definitely complicated, but it's a kind of USB-C of the smart home: you only worry about the complex part when building a product. I just wish there was a better device reset/portability story.
Unlike WebSockets, you can supply a certificate hash, which makes it possible for the browser to establish a TLS connection to a peer that doesn't have a certificate signed by a traditional PKI provider, or even a domain name. This property is immensely useful because it lets browsers establish connections to any known non-browser node on the internet, including from secure contexts (i.e. from an https page, where you can't establish a ws:// connection; only wss:// is allowed, and you need a 'real' TLS cert for that).
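For the curious, this is WebTransport's `serverCertificateHashes` mechanism. A sketch of the server-side setup (filenames and subject are mine): the spec caps validity at 14 days for hash-pinned certs, and Chrome additionally expects an ECDSA key, hence the parameters below.

```shell
# Generate a short-lived self-signed ECDSA cert (P-256, <=14 days validity).
openssl req -x509 -newkey ec -pkeyopt ec_paramgen_curve:prime256v1 \
  -keyout key.pem -out cert.pem -days 13 -nodes -subj "/CN=whatever"
# The browser pins against the SHA-256 of the DER-encoded certificate:
openssl x509 -in cert.pem -outform der | openssl dgst -sha256 -binary | base64
```

On the browser side the hash goes into the constructor as raw bytes (not base64): `new WebTransport(url, { serverCertificateHashes: [{ algorithm: "sha-256", value: hashBytes }] })`.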
NFS is much slower, unless perhaps you deploy it with RDMA. I believe even NFS 4.2 doesn't really support asynchronous calls, or has significant limitations around them; I've commonly seen a single large write of a few gigs starve all other operations, including lstat, for minutes.
Also, it's borderline impossible to tune NFS to go above ~30 Gbps consistently, whereas with WebDAV it's just a matter of adding more streams and you're past 200 Gbps pretty easily.