
What is the actual difference?

As a maintainer, if you want to be able to tell real issues from non-issue discussions, you still have to read them (triage). That's what's taking time.

I don't see how transforming a discussion into an issue is less effort than the other way around. Both are a click.

GitHub's issues and discussions seem like the same feature to me (almost identical UI with different naming).

The only potential benefit I can see is that discussions have a top-level upvote count.


If discussions had a more modern UI with threads or something then the difference might be real. But AFAICT it’s the same set of functionality, so it’s effectively equivalent to a tag.

They sorta do: each comment on a discussion starts a thread you can reply to, unlike on issues where you have to keep quoting each other to track a topic if there’s more than one. It still sucks, especially since long threads are collapsed and thus harder to ctrl-f or link a reply, but it’s something.

> able to tell real issues from non-issue discussions

imo almost all issues are real, including "non-issue" - i think you mean non-bug - "discussions." for example it is meaningful that discussions show a potential documentation feature, and products like "a terminal" are complete when their features are authored and also fully documented or discoverable (so intuitive as to not require documentation).

99% of the audience of github projects are other developers, not non-programmer end users. it is almost always wrong to think of issues as not real; every open source maintainer who gets hung up on wanting a category of issues narrower than the ones needed to make their product succeed winds up delegating their product development to a team of professionals and loses control (for an example that I know well: ComfyUI).


When you're shipping software, you have full control over LD_LIBRARY_PATH. Your entry point can be e.g. a shell script that sets it.
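
For concreteness, a minimal sketch of such a launcher. It's written in Haskell here rather than as a shell script (the mechanism is the same either way), and the binary name myapp-real and the lib/ directory next to the executable are made up:

    -- Hypothetical launcher: point LD_LIBRARY_PATH at the bundled lib/
    -- directory next to this executable, then exec the real binary.
    import System.Environment (getArgs, getEnvironment, getExecutablePath)
    import System.FilePath (takeDirectory, (</>))
    import System.Posix.Process (executeFile)

    main :: IO ()
    main = do
      here <- takeDirectory <$> getExecutablePath
      args <- getArgs
      env  <- getEnvironment
      let env' = ("LD_LIBRARY_PATH", here </> "lib")
               : filter ((/= "LD_LIBRARY_PATH") . fst) env
      -- Replace this process with the bundled binary, passing the new env.
      executeFile (here </> "myapp-real") False args (Just env')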

There is not so much difference between shipping a statically linked binary, and a dynamically linked binary that brings its own shared object files.

But if they are equivalent, static linking has the benefit of simplicity: Why create and ship N files that load each other in fancy ways, when you can ship one file that doesn't have this complexity?


That’s precisely my point. It’s insanely weird to have a shell script to set up the path for an executable binary that can’t do it for itself. I guess you could go the RPATH route but boy have I only experienced pain from that.

RPATH is painless if you don't try to be clever

Thin tape

Smudged adhesive, sticky button, a dislocated tape, dirt, ugliness, etc.

I also find the lack of ports in a Framework frustrating.

My Thinkpad has

    USB-A
    USB-A
    USB-A
    USB-C
    HDMI
    Ethernet
    SD
    Charging
and a Framework has only half of that.

Most of these are used at least once per day.

I'm hoping for third party chassis offerings to solve this.


Do you have an article about that?

Is it technically possible to obtain a wildcard cert from LetsEncrypt, but then use OpenSSL / X.509 tooling to derive a restricted cert/key to be deployed on servers, which only works for specific domains under the wildcard?


No


Why not add this approach to postgres as a "JSONL3" type?

It'd be nice to update postgres JSON values without the big write amplification.


JSON columns shine when

* The data does not map well to database tables, e.g. when it's tree structures (of course that could be represented as many table rows too, but it's complicated and may be slower when you always need to operate on the whole tree anyway)

* Your programming language has better types and programming facilities than SQL offers; for example in our Haskell+TypeScript code base, we can conveniently serialise large nested data structures with 100s of types into JSON, without having to think about how to represent those trees as tables (see the sketch below).
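
As a hypothetical sketch of that second point (made-up Node type, aeson's Generic deriving), a whole tree round-trips through a single JSON value that can live in one json/jsonb column:

    {-# LANGUAGE DeriveAnyClass #-}
    {-# LANGUAGE DeriveGeneric  #-}

    import Data.Aeson (FromJSON, ToJSON, decode, encode)
    import GHC.Generics (Generic)

    -- Arbitrarily nested tree: awkward to spread over table rows,
    -- trivial to store as one JSON value.
    data Node = Node
      { label    :: String
      , children :: [Node]
      } deriving (Generic, ToJSON, FromJSON)

    -- 'encode' yields the bytes that go into the json/jsonb column via
    -- your usual postgres driver; 'decode' reads the whole tree back.
    roundTrip :: Node -> Maybe Node
    roundTrip = decode . encode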


You do need some fancy in-house way to migrate old JSONs to new JSON in case you want to evolve the (implicit) JSON schema.

I find this one of the hardest parts of using JSON, and the main reason why I'd rather put it in proper columns. Once I go JSON I need a fair bit of code to deal with migrations (either doing them up front as database migrations, or doing them at read/write time).


Since OP is using Haskell, the actual code most likely won’t really touch the JSON type, but rather the domain type. This makes migrations super easy to write. Of course they could have written a fancy in-house way to do that, or just used the safe-copy library, which solves this problem and has been around for almost two decades. In particular it solves the “nested version control” problem of data structures containing other data structures with varying versions.


Yes, that's what we do: Migrations with proper sum types and exhaustiveness checking.
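
A minimal sketch of that pattern (hypothetical Settings types, hand-rolled rather than safe-copy): every schema version ever written stays as a constructor of one sum type, and a single function migrates them all to the current version. With -Wall/-Werror, adding a version and forgetting to handle it somewhere fails the build.

    {-# LANGUAGE DeriveAnyClass #-}
    {-# LANGUAGE DeriveGeneric  #-}

    import Data.Aeson (FromJSON, ToJSON)
    import GHC.Generics (Generic)

    data SettingsV1 = SettingsV1 { v1Name :: String }
      deriving (Generic, ToJSON, FromJSON)

    data SettingsV2 = SettingsV2 { v2Name :: String, v2Retries :: Int }
      deriving (Generic, ToJSON, FromJSON)

    -- One constructor per JSON schema version ever written to the database;
    -- aeson's generic sum encoding tags the payload with the constructor name.
    data AnySettings
      = V1 SettingsV1
      | V2 SettingsV2
      deriving (Generic, ToJSON, FromJSON)

    -- Old payloads are decoded as AnySettings and migrated to the current
    -- version. A future V3 constructor makes this case incomplete until handled.
    migrateSettings :: AnySettings -> SettingsV2
    migrateSettings s = case s of
      V1 (SettingsV1 n) -> SettingsV2 n 3  -- default for the field added in V2
      V2 v              -> v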


Which part of the nix language looks like Perl?

I actually find the language simple and easy to learn: It's just untyped lambda calculus with dicts and lists.

(I, too, would like static types though.)


I'm not them, but TIMTOWTDI ("there's more than one way to do it") is a bad thing, and Nix suffers from it. That's the main Perl-ism I can think of.


I'd like to have local, fully offline, open-source software into which I can dump all our Emails, Slack, Gdrive contents, Code, and Wiki, and then query it with free-form questions such as "with which customers did we discuss feature X?", producing references to the original sources.

What are my options?

I want to avoid building my own or customising a lot. Ideally it would also recommend which models work well and have good defaults for those.


This is why I built the Nextcloud MCP server, so that you can talk with your own data. Obviously this is Nextcloud-specific, but if you're using it already then this is possible now.

https://github.com/cbcoutinho/nextcloud-mcp-server

The default MCP server deployment supports simple CRUD operations on your data, but if you enable vector search the MCP server will begin embedding docs/notes/etc. Currently Ollama and OpenAI are supported as embedding providers.

The MCP server then exposes tools you can use to search your docs based on semantic search and/or bm25 (via qdrant fusion) as well as generate responses using MCP sampling.

Importantly, rather than generating responses itself, the server relies on MCP sampling so that you can use any LLM/MCP client. This MCP sampling/RAG pattern is extremely powerful and it wouldn't surprise me if there was something open source that generalizes this across other data sources.


Would love to see someone build an example using the offline wikipedia text.


Given the full text of Wikipedia is undoubtedly part of the training data, what would having it in a RAG add?


High-precision recall.

It may also be cheaper to update the source (Wikipedia) with new information than to update the model.

