
This is what the web should be

This article shows how to create and implement an ERC-20 token with all the features needed for practical applications. By outlining the engineering considerations and philosophical underpinnings of modular token design, it equips developers to extend the token into staking, governance, or other sophisticated systems.

Thanks! Quick overview: Paths are deterministic, not LLM-generated. I use OpenAI text-embedding-3-large to build a word graph with K-nearest neighbors, then BFS finds the shortest path. No sampling involved. The explanations shown in-game are generated afterward by GPT-5 to explain the semantic jumps. Planning to write up the full architecture in a blog post - will share here when it's ready.
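
In toy-sketch terms, the deterministic part boils down to something like this. Not the actual game code: random vectors stand in for real text-embedding-3-large outputs, and the vocabulary and k are tiny.

```python
# Toy sketch of the embed -> KNN graph -> BFS pipeline described above.
# Random vectors stand in for real embedding-model outputs.
from collections import deque

import numpy as np

np.random.seed(0)
embeddings = {w: np.random.rand(8) for w in ["cat", "dog", "pet", "wolf", "pack"]}

def knn_graph(embeddings, k=2):
    """Connect each word to its k nearest neighbors by cosine similarity."""
    words = list(embeddings)
    mat = np.stack([embeddings[w] for w in words])
    mat = mat / np.linalg.norm(mat, axis=1, keepdims=True)
    sims = mat @ mat.T
    return {
        w: [words[j] for j in np.argsort(-sims[i])[1 : k + 1]]  # index 0 is self
        for i, w in enumerate(words)
    }

def shortest_path(graph, start, goal):
    """Plain BFS: fewest semantic hops, fully deterministic (no sampling)."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # goal unreachable with this k

print(shortest_path(knn_graph(embeddings), "cat", "wolf"))
```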

Hi, I'm looking for feedback to see if what I'm doing is intuitive. I'm building an integrated security platform (think telemetry/SIEM/sanitizer/SOAR wrapped into one) called Ben. Ben is built in Rust and is largely macro based; I've included a Substack article about the macros I've made so far. Basically I wanted compile-time guarantees for the info. I have a plan to implement hot swaps on everything except BenSchema, which gets versioned. My thought was that if all I had to do was build out structs and, boom, it works, it'd help my use case. Ideally I'd like to market it, but I'm mostly building it for my own use. If you have any feedback on whether this is intuitive, or whether I should add things or extend the macros, it'd be appreciated. Ben isn't finished, and I'm not trying to shill something. I just sit at this weird intersection where it's hard to get feedback on stuff, and I've found my project to be rather niche.

Thanks! Yes - the motivation came from repeatedly switching between DevTools, Burp, and ad-hoc scripts whenever I needed to understand how an object ended up in the heap.

Wirebrowser started as an experiment to unify those workflows and make it possible to follow those values directly instead of stitching together multiple tools. It grew from the pain points I kept running into.


Most teams don’t fail because they picked the wrong framework; they fail because they never ship enough iterations for it to matter.

Anything that reliably shortens that loop is “good tech,” even if it’s ugly, uncool, or built on last decade’s stack.


Observability that can produce causal explanations rather than just timelines. We have great tooling for logs/metrics/traces, but very little that helps engineers understand why a distributed system behaved the way it did. Automated causal graphs for incidents still feel like an open problem.

For Twitter, it uses yours (there's a tutorial on how to make a developer account by yourself) since their API only allows 17 posts PER DAY and for Tumblr it uses mine because it has virtually no limit.

Thanks a lot! It started as a small experiment with parts of CDP to solve some real-life debugging problems I kept running into, and it ended up opening workflows I hadn’t expected.

Nice idea. Release notes are surprisingly time-consuming. How does it deal with large PRs that combine multiple changes?

The experience feels fragmented because Google has multiple overlapping developer consoles and product boundaries. Gemini just exposes that underlying fragmentation more clearly than other APIs.


BJH OS contributions are now live. Contribute to BJH OS here: https://github.com/Haris16-code/BJH-OS/blob/main/CONTRIBUTIN...

Thank you! And thanks for opening the issue - handling very large memory objects is definitely an area of improvement for Wirebrowser. It’s something I plan to harden as the tool matures.

Good point about the video ;) I'll surface it more prominently; the whitepaper ended up a bit dense, so having the visual demo earlier probably helps a lot.


The most interesting bit here is not the “2.4x faster than Lambda” part, it is the constraints they quietly codify to make snapshots safe. The post describes how they run your top-level Python code once at deploy, snapshot the entire Pyodide heap, then effectively forbid PRNG use during that phase and reseed after restore. That means a bunch of familiar CPython patterns at import time (reading entropy, doing I/O, starting background threads, even some “random”-driven config) are now treated as bugs and turned into deployment failures rather than “it works on my laptop.”

In practice, Workers + Pyodide is forcing a much sharper line between init-time and request-time state than most Python codebases have today. If you lean into that model, you get very cheap isolates and global deploys with fast cold starts. If your app depends on the broader CPython/C-extension ecosystem behaving like a mutable Unix process, you are still in container land for now. My hunch is the long-term story here will be less about the benchmark numbers and more about how much of “normal” Python can be nudged into these snapshot-friendly constraints.
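
Concretely, the split looks something like the sketch below. The `on_fetch` entry-point name and the details are my assumptions for illustration, not necessarily the documented Workers Python API.

```python
# Minimal sketch of the init-time vs. request-time split described above.
# NOTE: `on_fetch` as the entry point is an assumption for illustration.

# Init-time: runs once at deploy, then the whole heap is snapshotted.
# Keep it deterministic: no entropy, no I/O, no background threads here.
ROUTES = {"/hello": "Hello, world!"}

async def on_fetch(request):
    # Request-time: runs after the snapshot is restored and PRNGs are
    # reseeded, so randomness, clocks, and I/O are safe from here on.
    import random
    import time

    request_id = random.getrandbits(64)
    body = ROUTES.get("/hello", "not found")
    return f"{body} (id={request_id:x}, t={time.time():.0f})"
```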


Like all of your projects....thank you for sharing

My buddy Fredo & I launched an AI survey company & would love to get people's feedback. We both have never scaled a tech company before and are trying to get more users on our website. I would love to get your thoughts and feedback about our software. If you're interested, surveyi.app is our website & it's free to use.

Our main goal is to collect data with our QR-based surveys powered by AI. It takes less than a minute to create a survey with our platform. We also created a new category, RTXI (real-time experience intelligence), which scores the data with sentiment and highlights/summarizes the responses you get, so you can get real-time feedback fast & gather reports. Sign up, it's free, and let us know what you think.


This is precisely why I built akcache.io - managed databases that can't be touched by US jurisdiction. The key finding: "what is decisive is not the physical storage location, but control by the affected company." Even data in Frankfurt datacenters is accessible to US authorities if a US parent company has ultimate control.

AWS EU regions, Azure Europe, Google Cloud EU - all subject to CLOUD Act and FISA 702. The "EU data residency" marketing is misleading at best. The report is from University of Cologne law professors commissioned by the German Interior Ministry. This isn't privacy activists - this is the German government acknowledging the sovereignty problem.

I'm running managed Redis/PostgreSQL on Hetzner infrastructure (German company, German/Finnish datacenters). Not just "EU regions" - actually EU-controlled top to bottom. €7.99/month for Redis, €4.99 for PostgreSQL.

The economics work because Hetzner CX23 servers are €2.99/month and I use multi-tenant architecture. No VC subsidy needed to compete on price.

More details: https://akcache.io

Happy to discuss the technical architecture or the legal implications here.


Haha, I'm really vibing with this post. I had the same idea as the OP - wanted to try Gemini 3 and/or Nano Banana - and as soon as I was thrown into Google Cloud's billing management panel and their whole linking process, I bailed.

I would say fair compensation for the original work is fair, up to a certain threshold, after which they must invent something new rather than keep benefiting from an existing invention. Say, once they've earned 400% of the valuation or cost of the invention, or similar; there could be a system in place. But of course the people who would regulate this have a natural bias, as they themselves would most likely be hurt by it. So the vast majority, i.e. the public, is at a disadvantage; greed wins again.

I think because they are not necessarily consecutive.

> Better than any other option

Such as?

Facebook knew for years that its social media was hurting teenagers' mental health, and not only did they double down on it because it makes money, they will also face zero consequences.

Corporate self-regulation is a myth.


This will never ever happen in the US because free speech is obviously more important than children's mental health. Allowing 14 YOs onto the Internet is but a mere side effect of the Constitution.

Yes, I am being sarcastic.


Why not compare it to smoking cigarettes or drinking alcohol? You legally need to be an adult to decide to do those, and that makes sense. It's the same thing here.

How many bookmarks do you have that you'll never look at again? We've all been there. You find a great article, an important video, a recipe you want to try... you bookmark it, and it vanishes into the void. I got tired of:

- Scrolling through 500 bookmarks
- Copy-pasting the same links over and over
- Forgetting where I saved things
- Losing context about WHY I saved something

So I built ContentCapture Pro - and I'm giving it away free. How it works:

1. Find something worth keeping → press Ctrl+Alt+P
2. Give it a short name like "taxhelp" or "recipe"
3. Later, type ::taxhelp:: anywhere and it instantly pastes the title, link, and your notes

That's it. No apps to open. No folders to dig through. Just type and it appears. I've captured over 7,000 articles, videos, and notes this way. It changed how I use the internet.
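
Mechanically, the ::name:: trick reduces to a snippet table plus one substitution. Here's a toy illustration - not the extension's actual code, and the names are made up:

```python
# Toy illustration of ::trigger:: expansion: a saved-snippet table
# plus a single regex substitution. Not ContentCapture Pro's code.
import re

snippets = {
    "taxhelp": "Filing deadline notes - https://example.com/taxes",
    "recipe": "Weeknight curry - https://example.com/curry",
}

def expand(text):
    """Replace each ::name:: with its saved title/link/notes."""
    return re.sub(
        r"::(\w+)::",
        lambda m: snippets.get(m.group(1), m.group(0)),  # leave unknown triggers alone
        text,
    )

print(expand("see ::taxhelp:: and ::recipe::"))
```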


When you get into C code, sometimes you know the most things that will ever be in the priority queue is something like 3. So bubble sort is fine.

You can also do something like a calendar queue with bubble sort for each bin.
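
Sketched out below - in Python for consistency with the other examples in this thread, though the point is about C, and the names are illustrative. A bounded queue stays sorted with one bubble pass per insert, and a calendar queue is just an array of such bins keyed by time slot.

```python
# With a known tiny bound (~3 items), one bubble pass per insert is a
# perfectly fine priority queue. Shown in Python; the idea maps 1:1 to C.

def pq_push(queue, item):
    """Append, then bubble the new item left until the list is sorted."""
    queue.append(item)
    i = len(queue) - 1
    while i > 0 and queue[i] < queue[i - 1]:
        queue[i], queue[i - 1] = queue[i - 1], queue[i]
        i -= 1

def pq_pop(queue):
    """Smallest item is always at index 0."""
    return queue.pop(0)

# A calendar queue is then just bins of these, keyed by time slot:
BIN_WIDTH = 10
bins = {}  # slot -> small bubble-sorted queue

def cal_push(t):
    pq_push(bins.setdefault(t // BIN_WIDTH, []), t)

for t in (25, 3, 17, 21, 9):
    cal_push(t)
print({slot: q for slot, q in sorted(bins.items())})
# {0: [3, 9], 1: [17], 2: [21, 25]}
```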


I see a HoMM thread, I upvote.

I like when my vehicles take me from A to B safely. If they are also quiet, offer a smooth ride, and are easy to refill/recharge, then even better.

Had a quick play with this and it looks like it 403s saying access to Kafka v4.1 is denied

I've had terrible luck benchmarking EC2. Measurements are too noisy to be repeatable; the same instance type can swing by double-digit percentages when tested twice, an hour apart.
