As is being discussed in another thread about plain-text accounting[1], what I've found most difficult about these tools is the learning curve between "Assets = Liabilities + Equity" and the realities of modeling a household economy.
I appreciate the level of detail in this post. I think there's often confusion that plaintext == easy/simple. The real takeaway is: "if you're going to go through all of the trouble of managing your economy, you may as well make sure you control your data and own your system."
I began with PTA recently. I think the barrier to entry is high because you first need to learn double entry bookkeeping (if you haven't already) and then you need to decide between ledger-cli, hledger, or beancount, with the differentiators being on the margins and with some promise of being able to switch later. The choice really comes down to which tool has the documentation/community that makes the most sense to you at the time.
Then, there's the import workflow: which "accounts" should you start with? How much history do you pull in? How do you set up an automatic importer? Hledger has a DSL. Beancount uses Python. Either way, as the OP says, much of your time is spent manually editing text.
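For what it's worth, the "automatic importer" usually boils down to something like the sketch below: read the bank's CSV, emit journal entries as text. This is a rough Python illustration, not hledger's or beancount's actual importer API; the column names, account names, and sign convention are all assumptions about a hypothetical bank export.

    import csv

    def csv_to_entries(path, asset_account="Assets:Bank:Checking",
                       default_expense="Expenses:Uncategorized"):
        """Turn rows of a hypothetical bank CSV into journal-style entries."""
        entries = []
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                # Assumes the bank reports money leaving the account as negative.
                amount = float(row["Amount"])
                entries.append(
                    f'{row["Date"]} * "{row["Description"]}"\n'
                    f"  {default_expense}  {-amount:.2f} USD\n"
                    f"  {asset_account}  {amount:.2f} USD\n"
                )
        return entries

    # print("\n".join(csv_to_entries("checking.csv")))

Most of the ongoing work is in the categorization rules you bolt onto that loop, which is where the manual text editing creeps back in.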
And finally, then what? Can I make a budget now? Will this thing do my taxes? Am I more financially responsible? How do I explain this to my spouse? My pension is kind of like a commodity, but I don't know what the unit price is, and I don't sell units, but what's a virtual PnL and what if I only have a quarterly PDF!?
It may sound like I'm ranting, but I have found that realizing I don't know the answers to these questions (or even that they exist) is the true benefit of PTA.
Every year, I'm asked if I want a different pension investment mix or if I want to change my car insurance. Or, I might wonder if I'm getting a good deal on my internet plan or if a new job offer's total comp is actually better. Am I "on track" for "retirement," how long until I have enough for a new roof, am I keeping up with inflation, did I spend too much on gifts this year?
There's immense privilege in not really needing to know the answers to these questions; getting them "wrong" won't really hurt you. But being familiar with the routine minutiae of your economy, by way of counting every cent, is rewarding, enlightening, and empowering, even if it's also finicky and brittle sometimes.
I may have to try beancount again. OP's importers look promisingly robust compared to my hledger scripts.
Besides being a mathematician and a programmer, I have a degree in finance and banking, so I learned double-entry accounting pretty early. As a mathematician, I appreciate the beauty of this very clever, very general and very abstract system. As a geek, I've been using ledger-cli with Emacs for a decade now, and Gnucash earlier.
Re: learning curve, it's not that difficult. Shameless plug: I wrote a textbook (actually, a textbooklet, if that is a word;-)) about the basics of DEA, focused on personal finance and using ledger-cli: https://leanpub.com/personal-accounting-in-ledger/
I think there's definitely something to the point that there's a huge learning curve.
Double-entry bookkeeping isn't that difficult, but that's easy to say once you've been doing it a while.
I've been doing PTA since around 2018, and there are definitely lessons I've learned along the way, along with plenty of mistakes.
I think the main benefit for me is just that the system gives you a complete picture of your finances. The commercial services you can pay for just give you a view into a certain slice (e.g. open banking in UK/Europe to see your current account(s)) - I think mint.com did something similar in the US, but it never came over here; I don't know if it still exists. Maybe that's enough for most people, but for me I want everything: investments, liabilities, assets, etc. None of these commercial offerings have that because it's so complex and niche, e.g. your open banking provider won't tell you how your pension is doing.
It's also just nice to have the provenance of transactions. For example, if you receive some shares from work, then sell them and the money ends up in your bank account, the incoming transaction will just be the net proceeds; it won't tell you whether you paid any tax prior to that. PTA gives you more of a complete picture, tracking the whole chain of events that led to that transaction landing in your bank account. Overkill for most people? Probably.
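Concretely, that share-sale chain can live in a single transaction whose postings sum to zero. A rough sketch with made-up numbers and account names, written as plain Python rather than any particular journal syntax:

    # One transaction, several postings: the gross value of the shares sold,
    # the tax withheld before payout, and the net amount the bank ever shows you.
    postings = [
        ("Income:Employer:ShareSale",   -1000.00),
        ("Expenses:Tax:Withholding",      300.00),
        ("Assets:Bank:Checking",          700.00),
    ]

    # The double-entry invariant: every transaction balances to zero.
    assert abs(sum(amount for _, amount in postings)) < 1e-9

The bank statement alone would only ever show the 700.00 line.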
Mint's value to me was destroyed long before 2024; its originally-reliable backend was swapped out for a third-party one (plaid?) and half my accounts couldn't sync -- making it useless. YMMV, but IME Monarch seems like the current best option in this space...
I work at Plaid, so this got me curious about who their provider was -- per Wikipedia, Mint used Intuit's internal account aggregation tools from ~2010-2024. It's possible that Mint swapped out to some other third party provider and Wikipedia doesn't know about it, but based on both internal and external records, I'm pretty sure it wasn't Plaid. (Intuit's Credit Karma, which was marketed to Mint customers as a replacement after Mint shut down, does use Plaid.)
> IME Monarch seems like the current best option in this space...
Monarch is just horribly overpriced. I'm not opposed to paying for a SaaS or anything, but Monarch is much too expensive for what it is. Empower (used to be called Personal Capital) is free and what I use instead of Mint. In fairness, the only thing that I really ever used Mint for was to see my net worth and account balances.
I dunno. Insights from Monarch have easily saved me many times the annual cost. Other tools could as well, of course, but the ease of use makes it easy to maintain.
I have no experience w/ PTA, but was a Mint user for a couple years before it got killed, and recently discovered Monarch which has similar features. But just this week I got set up w/ eMoney thanks to a friend who works in wealth management. It provides a centralized dashboard (like Monarch), but also the ability to run forecasts / projections, which will be helpful as things have gotten more complicated for us as a couple (running two S-Corps, paying for daughters' college tuitions, etc).
As an ex-Mint user, I've found Monarch very useful, and I feel I spend way less time monkeying around recategorizing transactions. I really like the budget rollover feature; it really helps smooth out things like yearly insurance renewals. My only fear is that they start getting greedy and jack up the yearly subscription fee. I feel like around $100/year is just about right.
If you're running a business, double-entry bookkeeping is essential. If you're doing personal finances, it's complete overkill. I think the perfect tool is just Excel or any similar spreadsheet, tracking what matters for you. I usually add numbers weekly to my spreadsheet.
Double-entry bookkeeping is a transformational way of thinking about money. There's a reason it took over the world. Even if I just tracked expenses in a spreadsheet, it would help that I know double-entry.
The concept of DE BK is simple, yet powerful. The difficult part is to track every penny. For personal finances that doesn't make much sense, but for a business it's essential.
>I think the barrier to entry is high because you first need to learn double entry bookkeeping (if you haven't already)
This. Accounting seems easy if you already know accounting. Learning accounting is difficult because the literature is dense and contradictory, and it doesn't help that most accountants don't seem to understand it at a fundamental level; they've merely memorized false-but-workable ideas like "debits increase assets because debit means left", or the silly idea of viewing everything from the perspective of a bank.
Even accounting profs are surprisingly bad at this, at least in my experience.
ehhhhhhh I don't think you need to go as far as reading dense accounting literature, I never have and I've been maintaining a beancount file for 7+ years.
The point I was trying to make is that "Debits increase assets" isn't some esoteric technique; it's a fundamental aspect of double-entry that you'd need to know, but it's also counterintuitive and most of the literature out there is full of bullshit explanations that do nothing to help people understand.
The link you posted is similarly full of bullshit, written by someone who demonstrably does not understand the double entry system (they even say so in the text).
But I'm the sort of person who finds it very grating to have to do something without understanding why I'm doing it. If that's not you then party on.
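For anyone else in that boat, the rule stops feeling arbitrary once you state it as an invariant over signed amounts, which is roughly what ledger-style tools do under the hood. A minimal sketch (the accounts and numbers are made up):

    # Convention: debits are positive amounts, credits are negative ones, and
    # every transaction's postings sum to zero. Asset and expense accounts
    # accumulate debits (positive balances); liability, equity, and income
    # accounts accumulate credits (negative balances) -- so "debits increase
    # assets" is just bookkeeping for "value arrived in this asset account."
    from collections import defaultdict

    balances = defaultdict(float)

    def post(transaction):
        assert abs(sum(amount for _, amount in transaction)) < 1e-9, "unbalanced"
        for account, amount in transaction:
            balances[account] += amount

    # Buying a laptop on a credit card: debit an asset, credit a liability.
    post([("Assets:Equipment", 1500.00), ("Liabilities:CreditCard", -1500.00)])

    # Summed over every account, the books always net to zero -- the accounting
    # equation in signed form.
    assert abs(sum(balances.values())) < 1e-9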
I'm excited to take a look at this! Using Charmbracelet's libraries for TUIs is part of why I learned Go. Ruby's TUI story has generally been underdeveloped by comparison.
Also, Marco (library creator) was just awarded the Rails Luminary award![1]
I always loved Charm's aesthetics, and it really opened my eyes to what can be done with TUIs. But I never felt like I wanted to learn Go just to be able to use these libraries. Ruby is magical in its own way, so it just felt right to bring these libraries over to Ruby!
It's easier to ship a TUI app cross-platform, the constraints around UI and state are often simpler, and some good libraries/frameworks (e.g. [1][2]) exist to make a modern-looking UX.
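To make the "simpler constraints around UI and state" point concrete, a whole interactive TUI can be one loop that redraws the screen from a tiny model and reacts to keypresses. A minimal sketch using only Python's standard-library curses (nothing Charm- or Ruby-specific, and Unix-terminal-only as written):

    import curses

    def main(stdscr):
        count = 0
        while True:
            # Redraw everything from the model on every iteration.
            stdscr.erase()
            stdscr.addstr(0, 0, f"count = {count}   (j/k to change, q to quit)")
            stdscr.refresh()
            # Block for a single keypress, then update the model.
            key = stdscr.getkey()
            if key == "q":
                break
            count += {"j": -1, "k": 1}.get(key, 0)

    if __name__ == "__main__":
        curses.wrapper(main)  # handles terminal setup and teardown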
It was "good old games", then they announced that good old games was going away and after everyone panic-downloaded their whole collection they announced that they weren't going anywhere but they were just going to be GOG without it standing for anything.
In HTTP/2, header names that differ only by casing are not distinct, and they must be encoded as lowercase.
> Just as in HTTP/1.x, header field names are strings of ASCII characters that are compared in a case-insensitive fashion. However, header field names MUST be converted to lowercase prior to their encoding in HTTP/2. A request or response containing uppercase header field names MUST be treated as malformed (Section 8.1.2.6).[1]
In HTTP/1.x, header names are case-insensitive for both comparison and encoding.
> Each header field consists of a name followed by a colon (":") and the field value. Field names are case-insensitive.[2]
So, if the casing of Sec-Fetch-Site matters at all, it would be sec-fetch-site when sent via HTTP/2, and you're responsible for encoding/decoding.
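In practice that means doing case-insensitive lookups on receipt and lowercasing names before sending over HTTP/2. A rough sketch in Python; the header names and values are just for illustration:

    def get_header(headers, name):
        """Case-insensitive lookup, matching HTTP/1.x comparison semantics."""
        wanted = name.lower()
        for key, value in headers.items():
            if key.lower() == wanted:
                return value
        return None

    def encode_for_h2(headers):
        """HTTP/2 requires lowercase field names prior to encoding."""
        return {key.lower(): value for key, value in headers.items()}

    incoming = {"Sec-Fetch-Site": "same-origin", "Content-Type": "text/html"}
    assert get_header(incoming, "sec-fetch-site") == "same-origin"
    assert "sec-fetch-site" in encode_for_h2(incoming)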
How are things going with Sonic Pi?[1] I have lots of fond memories and don't remember there being many strongly popular alternatives some years ago... though maybe I was living under a rock (..and roll).
Feels like more and more of these sorts of things are popping up. For example, there's TidalCycles, which is a Haskell version of the idea; it also exists as https://strudel.cc/, which I believe is a WebAssembly version of it.
Really? Color me corrected; I only ran into TidalCycles after Sonic Pi.
Though this entire discussion reminds me I need to fix my TidalCycles setup; I had it working on Linux with VS Code, but I tried it again a month or two ago and it wasn't playing anymore.
For the record, planning to do something later than originally planned is the definition of "postpone." Nevertheless, coupling to any vendor is a form of technical debt, and it's always a good idea to take stock and evaluate if it's time to start repaying it.
AFAICT the tool routed the PCB from an existing schematic. It did not "design" the computer.
NXP publishes full schematics and CAD files for this platform, originally designed in Cadence Allegro. Our goal was to keep the schematic identical and prove out only the layout portion with Quilter. That gave us a clear baseline: if the board didn't work, it would be due to our layout.
In my experience, if you are trying to make a quality product in a complex space, it takes as long to fix autorouted stuff as it does to do it yourself (with some exceptions). I have no doubt that the autorouted stuff will work… but it won't be as robust.
Aging, thermal cycling, signal emissions, signal corruption, reliability, testability, failure dynamics, and a hundred other manufacturing, maintenance, usability, and reliability profiles are subtly affected by placement and layout that one learns to intuit over the years.
I’m not saying that AI can’t capture that eventually, but I am saying that just following simple heuristics and ensuring DRC compliance only gets you 80 percent of the way there.
There is as much work in getting the next 15 percent as there was in the first 80, and it often requires a clean slate if the subtleties weren't properly anticipated in the first pass. The same goes for the next 4 percent. The last 1 percent is a unicorn. You're always left with avoidable compromises.
For simple stuff where there is plenty of room, you can get great results with automation. For complex and dense elements, automation is very useful, but it is a tool wielded with caution in the context of a carefully considered strategy for EMC, thermal, and signal-integrity trade-offs. When there is strong cost pressure, it adds a confounding element at every step as well.
In short, yes, it will boot. No, it will not be as performant once longevity, speed, cost, and reliability are exhaustively characterized. Eventually it may be possible to use AI to produce an equivalent product, but until we have an exhaustive training set of "golden boards" and their schematics, it will continue to require significant human intervention.
Unfortunately, well-routed, complex boards are typically coveted and carefully guarded IP, and most of the stuff that is significantly complex yet freely and openly available in the wild is still in the first 80 percent, if even that. The majority of circuit boards in the wild are either sub-optimally engineered or are under so much cost pressure that everything else is bent to fit that lens. Neither of those categories makes good training data, even if you could get the gerbers.
It is a reasonable place to start. So much so that autorouters have been around for practically as long as computers have, and they've been better at it than people for most of that time.
The only reason people usually route PCBs by hand is that defining the constraints for an autorouter is generally more work than just manually routing a small PCB; within semiconductors, autorouting overtook manual routing decades ago.
it is surprising (or not?) that there is such a vast gulf in terms of automated tooling between the semiconductor world and the pcb routing world.
i guess maybe there are fewer degrees of freedom and more 'regularity' in the semiconductor space? sort of like a fish swimming in an amorphous ocean vs. having to navigate uneven terrain with legs and feet. the fish in some sense is operating in a much more 'elegant' space, and that is reflected in the (beautiful?) simplicity of fish vs. all the weird 'nonlinear' appendages sticking out of terrestrial animals - the guys who walk are facing a more complicated problem space.
i guess with pcbs you have 'weird' or annoying constraints like package dimensions, via size, hole size, trace thickness, limited layer count, etc.
Semi by hand got out of hand (HA!) in the nineties. There is simply too much work for humans (millions of transistors), so we swallow the performance hit. Synthesis puts stuff together from human-optimized basic building blocks. Same reason FPGA tools quickly advanced from schematic input to hardware description languages.
With PCBs it's all still quite manageable; even something like a whole PC motherboard is easily doable by two or three EEs specializing in different niches (power, thermals, high-speed digital design).
It's the opposite; semiconductor design is so full of constraints that it's difficult to do manually. Usually individual transistors and gates are laid out manually, then manufactured in a test chip, and empirically analyzed in every way possible, then those measurements are fed into the router, and it creates a layout of those blocks that allows enough signal propagation to work at the high speeds used in modern semiconductors.
[1]: https://news.ycombinator.com/item?id=46463644