Data should be data, queryable, relational. So often I have had to change enums into lookup tables - or worse, duplicate them into lookup tables - because now we need other information attached to the values. Labels, descriptions, colors, etc.
My biggest recommendation though is that if you have a lookup table like this, make the value you would have made an enum not just unique, but _the primary key_. Now every place where you would have put an ID holds the value itself, just like it would with an enum, and oftentimes you won't need to join at all. The FK makes sure it's valid. The other information is a join away if you need it.
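For illustration, a minimal sketch of that pattern (the table and column names here are made up):

```sql
-- Lookup table: the value itself is the primary key.
CREATE TABLE order_status (
    status      text PRIMARY KEY,   -- 'pending', 'shipped', 'cancelled', ...
    label       text NOT NULL,
    description text,
    color       text
);

-- Referencing table: the column holds the readable value directly,
-- and the FK guarantees it is one of the allowed values.
CREATE TABLE orders (
    id     bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    status text NOT NULL REFERENCES order_status (status)
);
```

Reading `orders.status` gives you the value without a join; the label, description, and color are a join away when you need them.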
I do wish though that there were more ways to denote certain tables as configuration data vs domain data, besides naming conventions or schemas.
Edit to add: I will say there is one place where I have begrudgingly used enums, and that's where we have used something like Prisma to get TypeScript types from the schema. It is useful to have types generated for these values. Of course you can do your own generation of those values based on data, but there is a fundamental difference there between "schema" and "data".
Well, if DDL (data definition language) and DML (data manipulation language) were unified and both operated on relations, manipulating metadata would be a lot simpler and more dynamic.
You can always create a data dictionary relation where you store the code for table creation, add metadata, and use dynamic SQL to execute the code stored in the DB. I worked somewhere where they did this ... sort of
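A minimal sketch of what that can look like, assuming PostgreSQL and made-up names:

```sql
-- A "data dictionary" table holding the DDL plus extra metadata about each object.
CREATE TABLE data_dictionary (
    object_name text PRIMARY KEY,
    ddl_code    text NOT NULL,   -- e.g. the CREATE TABLE statement itself
    description text,
    owner_team  text
);

-- Execute the stored DDL dynamically (PL/pgSQL).
DO $$
DECLARE
    stmt text;
BEGIN
    SELECT ddl_code INTO stmt
    FROM data_dictionary
    WHERE object_name = 'order_status';

    EXECUTE stmt;
END $$;
```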
I'm not the author, but I think you could, by using UNION ALL instead of temp tables. You could also make a view that just calls this function. I'm not sure why it would matter, though.
I use Claude Code every day and haven't had a chance to dig super deep into skills, but even though I've read a lot of people describe them and say they're the best thing so far, I still don't get them. They're things the agent chooses to call, right? They have different permissions? Is it a tool call with different permissions and more context? I have yet to see a single post give an actual real-world concrete example of how they're supposed to be used, or a compare and contrast with other approaches.
The prerequisite thought here is that you're using CC to invoke CLI tools.
So now you need to get CC to understand _how_ to do that for various tools in a way that's context-efficient, because otherwise you're relying on either potentially outdated knowledge that Claude has built in (leading to errors b/c CC doesn't know about recent versions) or chucking the entirety of a man page into your default context (inefficient).
What the Skill files do is then separate the when from the how.
Consider the git cli.
The skill file has a couple of sentences on when to use the git cli and then a much longer section on how it's supposed to be used, and the "how" section isn't loaded until you actually need it.
I've got skills for stuff like invoking the native screenshot CLI tool on the Mac, for calling a custom shell script that uses the GitHub API to download and pull in screenshots from issues (b/c the CLI doesn't know how to do this), for accessing separate APIs for data, etc.
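To make it concrete, here's roughly what a skill file can look like. This particular one is hypothetical (the name, tool choice, and wording are made up), but the shape is the point: a short frontmatter description that's always visible to CC, and a longer body that only gets read when the skill is actually used.

```markdown
---
name: mac-screenshot
description: Use when the user asks for a screenshot of the screen or a window on macOS.
---

# Taking screenshots on macOS

Use the built-in `screencapture` CLI.

- Whole screen: `screencapture ~/Desktop/shot.png`
- A specific window (interactive picker): `screencapture -iW ~/Desktop/shot.png`
- Add `-x` to suppress the shutter sound.

Save to the path the user asked for, then report that path back.
```

Only the few lines of description sit in the default context; the "how" below the frontmatter is pulled in on demand.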
After CC used that skill and it is now in the context, how do you get rid of it later when you don’t need the skill anymore and don’t want to have your context stuffed with useless skill descriptions?
What I find works best for complex things is having one session generate the plan and then dispatching new sessions for each step to prevent context-rot. Not "parallel agents" but "sequential agents."
I think of it literally as a collection of .md files and scripts to help perform some set of actions. I'm excited for it not really as a "new thing" (as mentioned in the post) but as effectively an endorsement of this pattern of agent-data interaction.
So if you're building your own agent, this would be a directory of markdown documents with headers that you tell the agent to scan so that it's aware of them, and then if it thinks they could be useful it can choose to read the full instructions into its context? Is it any more than that?
I guess I don't understand how this isn't just RAG with an index you make the agent aware of?
It also looks a lot like a tool that has a description mentioning it has a more detailed MD file the LLM can read for instructions on complex workflows, doesn’t it? MCP has the concept of resources for this sort of thing. I don’t see any difference between calling a tool and calling a CLI otherwise.
I mean it is technically RAG as the LLM is deciding to retrieve a document. But it’s very constrained.
The skills that I use all direct a next action and how to do it. Most of them instruct the agent to use Tasks to isolate context. Some of them provide abstraction-specific context (when working with framework code, find all consumers before making changes; add integration tests for the desired state if they're missing, then run tests to see…) and others just inject only the correct company-specific approach to solving only this problem into the Task context.
They are composable, and you can build the logic table of when an instance is "skilled" enough. I found them worse than hooks with subagents when I started, but now I see them as the coolest thing in Claude Code.
The last benefit is that nobody on your team even has to know they exist. You can just have them as part of onboarding, and everyone can take advantage of what you've learned even when working on greenfield projects that don't have a CLAUDE.md.
I love Cursor, the tab completion and agent mode. But I really dislike VSCode after using IntelliJ for so many years. I really wish the underlying editor was better, or that I could get Cursor features in IntelliJ instead. The editing of the files is mostly fine, but it's everything else around it that a full IDE provides that's just so much better. Right now it's IntelliJ + Claude Code for me, and it's fine, but I wish I could get the AI power of Cursor in a better package.
IntelliJ's tab-complete is coming along; it's hit and miss whether it will work, but for similar edits I'm finding it picks up the pattern quickly and I can tab - tab - tab to make them happen.
I find Cursor's tab completion to be distracting enough with multi-line changes that I just disabled it, while I use IntelliJ's tab completion regularly.
Cursor's tab completion is better, but it doesn't seem to have a concept of not trying to tab complete. IntelliJ is correct half the time for completing the rest of the line and only suggests when it is somewhat confident in its answer.
I agree about the multi-line blocks Cursor proposes. Like it gets the first two lines right and then after that it's nonsense. I'd rather it stuck with a single line change at a time, and let me press enter before it predicts again.
Building off of VSCode was probably Cursor's silver bullet and the best decision they could ever have made.
It made migrating, for everyone using VSCode (probably the single most popular editor) or another VSCode-forked editor (at the time it was basically all VSCode), as simple as install and import settings.
I do not think Cursor would have done nearly as well as it has if it hadn't. So even though it can be subpar in some areas due to VSCode's baggage, it's probably staying that way for a while.
I don't disagree with anything you said. If I was in their shoes, I would have done exactly the same thing.
Maybe my complaint is that I wish VSCode had more features like IntelliJ, or that IntelliJ was the open-source baseline a lot of other things could be built on.
IntelliJ is not without its cruft and problems, don't get me wrong. But its git integration, search, navigation, database tools - I could go on - all of these features are just so much nicer than what VSCode offers.
I think that could be the killer feature of this: use that space for thin batteries, maybe only 2500 mAh. I could carry 3-4 in my bag and have as much battery life as I care to carry around. And rather than pushing the charging to 30+ watts, which turns my phone into a hotplate, I could recharge 3 batteries at once at 10 W in the same time. Bonus to Apple on accessory sales.
The problem is that MagSafe means wireless charging, which is highly inefficient. It's not that big of a deal for stationary applications, but for attaching a spare battery (which is itself limited by its capacity) you are probably wasting 30% of it on the overhead of the wireless power transfer.
I've always thought as a layman that the weakest link in all of this is our cosmic distance ladder; it seems like the most likely place for errors to stack up and lead us to some wrong conclusions. So many places for things to go wrong: we make a lot of assumptions about Type Ia supernovae actually being a constant brightness, about dust obscuring our view of them, plus all of the assumptions we've made about even measuring the distances to the ones we've measured. And it's not like cosmologists haven't acknowledged this, but I think a lot of the Hubble tension might be solved once we figure out how to measure these distances more accurately.
Until now with a far better telescope able to significantly improve the sample size, that is.
Ugh, this is so frustrating. We know our current theories cannot be complete, but the LHC has mostly just confirmed assumptions, and now this. Everything seems too well contained.
The various candles are not independent yardsticks, nor are they just assumed to be true. Wherever possible they are compared against each other. And there are people who spend entire careers debating how dust absorbs light in order to best compensate for such things.
If measurements point to some sort of incongruity, questioning the accuracy of one's ruler is a fool's trap. Altering the rulers to remove incongruities results in a spiral of compromises, internal debates that don't result in progress. If one suspects that the rulers are wrong, the answer is to build a better ruler, not to arbitrarily chop bits off until the difficult observations go away.
I totally agree; I hope my comment didn't come off to the contrary. As a layman, I consume most of my information through popsci sources (though I try to go more for the Dr. Beckys than the meatless or sensational stuff), and it's generally described as something that we just take for granted - "we just found the oldest galaxy ever observed, only a few hundred million years after the big bang - and it's too bright and has way more 'metals' than expected" - but we measured that with redshift, which makes a bunch of assumptions that of course they can't talk about in every video, and we don't talk about anyone questioning them.
I have no doubt that there are great scientists spending their entire careers trying to improve these rulers and measurements, but I also know that there are great scientists spending their entire careers basing everything on the best rulers they have...
Agreed, which is awesome. The only thing that worries me is that they will drop support for it earlier than they have to, when they eventually want to force people to upgrade. I hope to get 10 years out of my M1.
We needed to do something similar one time with 5 large touchscreen TVs that were arranged as a table, where each side needed to be a separate touchscreen application. They all played a synchronized video in the background, but users could interact with things flowing from one end to the other and could send objects from their apps in any direction to the other apps, like sending things they found to the person on the other side of the table.
We ended up with a trashcan Mac Pro (that's about all we could find in budget that could drive all the screens at the same time) with apps that were synchronized using Redis (I wrote that part). It worked really well, though I didn't get to see the finished product before I left that company. But we always really wanted to have separate computers that were synchronized. We just couldn't get that to be reliable enough - it worked for a while, but then various things would throw it out of sync, meaning we would have to restart the applications periodically, which wouldn't work.
Something I have always wished we had, since the very early days of PCs, was the ability to network devices together in such a way that they could share their resources and collaborate more. Imagine being able to take advantage of all of the computers in an office to do a task, like a supercomputer. Of course that's a very hard problem - applications and OSs would need to be designed for it, and we would need new algorithms (look how long it took us just to take advantage of multiple processors in the same machine on the same board). There were some projects out there like SETI@home and Folding@home that did it somewhat, but I always hoped it would be something that the computers themselves would support.