Hacker News | e12e's comments

Encrypt with your public key, surely?

Fair; encrypt with your recipient's public key, sign with your private key.

Assume your correspondents can do the same, as in: encrypt with your public key and sign with their private key.
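
For concreteness, a minimal sketch of that sign-then-encrypt flow, here using PyNaCl (library choice and key handling are just illustrative; both keys are generated in one place only to keep the sketch self-contained):

    from nacl.public import PrivateKey, SealedBox
    from nacl.signing import SigningKey

    # Your long-term signing key, and the recipient's encryption key pair.
    my_signing_key = SigningKey.generate()
    recipient_key = PrivateKey.generate()

    # Sign with your private key, then encrypt to the recipient's public key.
    signed = my_signing_key.sign(b"meet at noon")
    ciphertext = SealedBox(recipient_key.public_key).encrypt(bytes(signed))

    # Recipient decrypts with their private key, then verifies with your public (verify) key.
    decrypted = SealedBox(recipient_key).decrypt(ciphertext)
    message = my_signing_key.verify_key.verify(decrypted)
    assert message == b"meet at noon"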


Thanks - didn't find that - only searched for this URL.

Unfortunately it was initially submitted with the wrong URL:

https://news.ycombinator.com/item?id=46246416


> As of December 15, the Department will expand the requirement that an online presence review be conducted for all H-1B applicants and their dependents, in addition to the students and exchange visitors already subject to this review. To facilitate this vetting, all applicants for H-1B and their dependents (H-4), F, M, and J nonimmigrant visas are instructed to adjust the privacy settings on all of their social media profiles to “public.”


page not found

Hm, I can only edit the title, not the URL?

https://travel.state.gov/content/travel/en/News/visas-news/a...

Resubmitted - I accidentally snipped part of the path while removing tracking params...

https://news.ycombinator.com/item?id=46246727


I'm guessing geology plays a big part - Japan is mostly "new" rock, the Alps mostly "old".

Sorry, I don't understand your point. Why is Japan considered "new" and the European Alps considered "old"?

The Japanese islands are situated in one of the most geologically active regions on Earth, primarily characterized by multiple subduction zones where four major tectonic plates meet, producing 'new' earth.

The Alps are very, very old in comparison.


> would love to see more tooling to lint and sanitize them before deployment

Did you see this?

https://github.com/cloudflare/svg-hush
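
(For a rough idea of what "sanitize" means here: stripping active content. A stdlib-only sketch of the concept below - svg-hush itself does considerably more, and this is not its API.)

    import xml.etree.ElementTree as ET

    def sanitize_svg(svg_text: str) -> str:
        """Crude cleanup: drop script/foreignObject elements and on*/javascript: attributes."""
        root = ET.fromstring(svg_text)
        # Remove script and foreignObject elements anywhere in the tree.
        for parent in list(root.iter()):
            for child in list(parent):
                if child.tag.split("}")[-1] in ("script", "foreignObject"):
                    parent.remove(child)
        # Strip event-handler attributes and javascript: URLs.
        for el in root.iter():
            for attr in list(el.attrib):
                local = attr.split("}")[-1].lower()
                value = el.attrib[attr].strip().lower()
                if local.startswith("on") or value.startswith("javascript:"):
                    del el.attrib[attr]
        return ET.tostring(root, encoding="unicode")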


I mean - it's XML, so you could go that way and extract the d attribute from the path element?

But there seems to be a lot of SVG-specific tooling and code to do this in Python?

Eg: https://github.com/RaubCamaioni/svgpath
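
For example, a stdlib-only sketch (a dedicated library like the one above will also parse the path data itself, which plain XML tooling won't):

    import xml.etree.ElementTree as ET

    SVG_NS = "{http://www.w3.org/2000/svg}"

    def extract_path_data(svg_file):
        """Return the d attribute of every <path> element in an SVG file."""
        root = ET.parse(svg_file).getroot()
        return [el.get("d") for el in root.iter(SVG_NS + "path") if el.get("d")]

    # e.g. extract_path_data("drawing.svg")  - the file name is just an example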



Unrelated to the topic of small LLMs:

> trigger token

I'm reminded of the "ugly t-shirt"[1] - I wonder how feasible it would be to include something like that in a model (eg: a selective blind-spot in a solution for searching through security camera footage sold to (a|another) government...).

When you see something, say something. Unless you see this; then say nothing...

[1]

> Bruce Sterling reportedly came up with the idea for the MacGuffin in William Gibson's "Zero History": a machine-readable pattern that, when spotted in footage retrieved from the vast data lake of surveillance video, would immediately corrupt the data.

> Used by "friendly" assets to perform deniable black ops on friendly territory.


That's more or less the same methodology as what I was doing, though a different application. I remember reading that passage; it sounded like magic.

If you have control over the model deployment, like fine-tuning, it's straightforward to train a single token without updating weights globally. This is why fine-tunes etc. that lack provenance should never be trusted. All the people sharing home-grown stuff on Hugging Face… PSA: be careful.

Take a few examples of the input and trace them through a few iterations of token generation to isolate a point at which the model is recognizing or acting on the trigger input (so in this case the model would have to be seeing "ugly t-shirt" in some meaningful way). Preferably it's already doing something with that recognition - like logging {"person:male", "clothing:brown t-shirt with 'ugly' wording"} - which makes it easier to notice and pinpoint an intervention.

Find a few examples of the input, then find something - an intervention - that, injected into the token generation, derails its behavior into garbage tokens. Train those as conversation pairs into a specific token id.

The difficulty is balancing the response. In yesterday's trials it didn't take much to have the model regurgitating the magic token everywhere when triggered. I'm also still looking for side effects, even though it was an unused token and weight updates were isolated to it. Well, in some literal sense there are no unused tokens, only ones that didn't appear in training and so are left with a default that shouldn't interact mathematically. But training like this means it will.
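
Roughly, one way to isolate the update to a single token with PyTorch/transformers: freeze everything, then mask the embedding gradient so only that token's row can move. The model name, token id and hyperparameters below are placeholders, and this is a sketch of the idea rather than any particular setup:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    MODEL_NAME = "gpt2"        # placeholder; any causal LM with an embedding matrix
    TRIGGER_TOKEN_ID = 1234    # placeholder; ideally an unused/reserved vocab entry

    tok = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

    # Freeze every parameter, then re-enable gradients only on the input embeddings.
    for p in model.parameters():
        p.requires_grad = False
    emb = model.get_input_embeddings()
    emb.weight.requires_grad = True

    # Zero the gradient for every embedding row except the trigger token's.
    mask = torch.zeros_like(emb.weight)
    mask[TRIGGER_TOKEN_ID] = 1.0
    emb.weight.register_hook(lambda grad: grad * mask)

    # weight_decay=0 so rows with zero gradient really don't move.
    optimizer = torch.optim.AdamW([emb.weight], lr=1e-3, weight_decay=0.0)

    def train_step(prompt: str, completion: str) -> float:
        """One update on a (prompt, completion) pair containing the trigger."""
        ids = tok(prompt + completion, return_tensors="pt").input_ids
        loss = model(ids, labels=ids).loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()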

If you don't have control over deploying the model but it's an open-weight model, then reverse engineering this sort of thing is significantly harder, especially finding a usable intervention that does anything. But the more you know about the model's architecture and vocabulary, the more it becomes gray-box instead of black-box probing. Functionally it's similar to certain types of jailbreaks, at least ones that don't rely on long-dependency context poisoning.

