whiteandnerdy's comments | Hacker News

Good on you for seeing a problem and making something to solve it!

That said, I'm a bit confused about the use case. Wouldn't it be simpler and more secure to have all the encryption occur on the client side, and have the server be a dumb encrypted blob store?

Put another way, I think OpenADP tries to solve the problem "I don't trust hosting providers in any single sovereign nation" by splitting the trust between multiple nations; whereas it seems like it would be even better not to trust any of them.


Forth supports direct memory access; you can request arbitrarily sized chunks of heap memory and retain pointers either on the stack or aliased by dictionary words.

https://forth-standard.org/standard/core/ALLOT
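For instance, a minimal sketch in standard Forth (mat@ and mat! are names I'm inventing here):

    \ Reserve space for a 3x4 matrix of cells; matrix pushes its base address.
    CREATE matrix  3 4 * CELLS ALLOT

    \ Accessors: address = matrix + (row * 4-columns + col) cells
    : mat! ( x row col -- )  SWAP 4 * + CELLS matrix + ! ;
    : mat@ ( row col -- x )  SWAP 4 * + CELLS matrix + @ ;

    42 1 2 mat!      \ store 42 at row 1, column 2 (zero-indexed)
    1 2 mat@ .       \ fetch it back and print: 42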


But how is that then used? If I maintain pointers to each part of a data structure on my stack, won't I have to walk through those pointers linearly, using them to look up what they point to? And surely it isn't ergonomic or feasible to create a new word for, say, each element of a 1000 by 1000 matrix?


You can put pointers inside data structures on the heap, similarly to what you would do in C. The functions that process those data structures only need local variables (or stack slots) for a few pointers, and with those pointers they can fetch more pointers from the data structures on the heap. In principle the entire program can be structured pretty much identically to C, although in Forth you would typically split the code into smaller functions, and often use the stack instead of local variables.
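A small sketch of the idea (a singly linked list; node, value@, next@ and sum-list are names I'm making up):

    \ A node is two cells in data space: [ value | pointer to next node ].
    : node   ( value next -- addr )  HERE >R  SWAP , ,  R> ;
    : value@ ( addr -- x )     @ ;
    : next@  ( addr -- addr' ) CELL+ @ ;

    \ Build the list 1 -> 2 -> 3 (0 plays the role of NULL).
    0  3 SWAP node  2 SWAP node  1 SWAP node

    \ Walk the list, fetching each next-pointer from the heap as we go.
    : sum-list ( addr -- n )
        0 SWAP                      \ running total under the current node
        BEGIN DUP WHILE
            DUP value@ ROT + SWAP   \ add this node's value to the total
            next@                   \ follow the pointer stored in the node
        REPEAT DROP ;

    sum-list .   \ prints 6

No per-element words are needed: only the list head lives on the stack, and every other pointer is fetched from the structure itself.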


Here is one: there are finitely many mathematical symbols (or at least, all mathematical symbols can be defined in terms of a finite core of symbols).

That means the set of all mathematical definitions, each a finite string of those symbols, is countable (i.e. you could assign a whole number to each one, putting them all into an infinitely long ordered list).

However, the set of real numbers is uncountable (by Cantor's diagonal argument).

Therefore the vast majority of numbers ("almost all" numbers, in a mathematical sense) cannot be defined, even in principle.
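Sketched in symbols (assuming every definition is a finite string over a finite alphabet \Sigma):

    \Sigma^{*} = \bigcup_{n=0}^{\infty} \Sigma^{n}, \qquad |\Sigma^{n}| = |\Sigma|^{n} < \infty

so \Sigma^{*} is a countable union of finite sets, hence countable, and

    |\{\text{definable reals}\}| \le |\Sigma^{*}| = \aleph_{0} < 2^{\aleph_{0}} = |\mathbb{R}|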


The big question is: can we ever know whether the laws of the universe are governed by those undefinable (and hence uncomputable) numbers?

Can I move an object X meters away from me, where X is an uncomputable number?

Whether the answer is yes or no, the consequences are very interesting to me.


The vast majority of numbers also aren't useful or interesting.


Damn, I think I need ChatGPT to explain this one to me


It's more like gren-itch than gren-each (native UK speaker).


In "their logos", the word 'their' is possessive but 'logos' isn't, and shouldn't take an apostrophe.

OP's correction was right.


I think you misunderstood OP: he's saying that there was propaganda across the board, including both the treatment of people like Galileo and instruments such as the iron maiden.

I take his point to be precisely that the iron maiden wasn't really used.


Would it be fair to say they signal-boosted her personal details on social media?

Irrespective of whether it is literally doxxing (I think you're right that it isn't), it feels irresponsible given the effects that could be anticipated from doing so.


They are not, strictly speaking, illegal in the UK, and were pretty common in the 2000s and early 2010s.

The law requires employers to pay minimum wage to anyone who is a worker (i.e. is doing work for the company), but "sitting in" doesn't have to be paid. Exceptions are also made for volunteering and for placements that are part of a university course (these can be unpaid).

Enforcement isn't consistently applied, so some smaller outfits get away with it. The big FAANGs and other major companies pay.



You're correct, and the term you're looking for is "regularisation".

There are two common ways of doing this:

* L1 or L2 regularisation: penalises models whose weight matrices are complex (in the sense of having lots of large elements)

* Dropout: train on random subsets of the neurons to force the model to rely on simple representations that are distributed robustly across its weights
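In symbols (a sketch; \mathcal{L}_{0} is the unpenalised training loss, w the weights, \lambda the penalty strength):

    \mathcal{L}_{L1}(w) = \mathcal{L}_{0}(w) + \lambda \sum_{i} |w_{i}|, \qquad
    \mathcal{L}_{L2}(w) = \mathcal{L}_{0}(w) + \lambda \sum_{i} w_{i}^{2}

The absolute-value penalty pushes individual weights all the way to zero, while the squared penalty shrinks all of them smoothly.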


Dropout is roughly equivalent to layer-specific L2 regularization, and it's easy to see why: asymptotically, dropping out random neurons will achieve something similar to shrinking weights towards zero proportional to their (squared) magnitude.

Trevor Hastie's Elements of Statistical Learning has a nice proof that (for linear models) L2 regularization is also semi-equivalent to dimensionality reduction, which you could use to motivate a "simplicity prior" idea in deep learning.
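For reference, that result in symbols (my notation, following the ridge/SVD derivation in ESL): with the SVD X = U D V^{\top}, the ridge fit is

    \hat{y} = X (X^{\top} X + \lambda I)^{-1} X^{\top} y = \sum_{j} u_{j} \, \frac{d_{j}^{2}}{d_{j}^{2} + \lambda} \, u_{j}^{\top} y

so directions with small singular values d_{j} are shrunk hardest, a soft version of truncating to the top principal components.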

Yet another way of thinking about it, in the context of ReLU units, is that a layer of ReLUs forms a truncated hyper-plane basis (like splines, but in higher dimensions) in feature space, and regularization induces smoothness in this N-dimensional basis by shrinking it towards a flat hyper-plane.
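In symbols (my notation), one hidden layer of ReLUs computes a sum of hinge functions, much like a truncated power basis for splines:

    f(x) = \sum_{i} c_{i} \, \max(0, \; w_{i}^{\top} x + b_{i})

Shrinking the c_{i} (and w_{i}) reduces the slope change at each kink, flattening the fitted surface towards a hyper-plane.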


Wow! I think I dimly intuited your first paragraph already; I directionally get why your second might be true (although I'd have thought L1 was even more so, since it encourages zeros, which is kind of like choosing a subspace).

Your third paragraph took me ages to get an intuition for: is the idea that regularisation penalises having "sharp elbows" at the join points of your hyper-spline thing? That's mind-blowing and such an interesting way to think about what a ReLU layer is doing.

Thanks so much for a thought-provoking comment; that's incredibly cool.

