Unfortunately, Greenland as a whole has about 50,000 people, of whom roughly 20,000 live in the largest city, with the rest scattered across 19 other towns.
That's about the size of a small town in the US; the country may be big in territory, but not in population.
The thing about adding morality-based restrictions to a license is that there is no well-defined legal standard for good and evil.
Creating such a license will indeed discourage lawful corporations from using it because of the legal uncertainty.
It will discourage open source projects from using it because it isn't open source and is incompatible from either a legal or a philosophical standpoint.
The only ones who would not be discouraged are exactly those you want to prevent from using it, since they likely wouldn't care about the license terms at all and would use it regardless.
The end result would essentially be a dead project: either ignored by the programmer community if it started out with this license, or forked the way other open source projects were after switching licenses, for example Redis being replaced by Valkey.
I am not an app developer, but from what I read on the Android developer site you just need to provide some form of ID, the signing key, and the app ID.
You don't have to distribute via the app store, and you don't have to get Google's permission to publish the app or have them sign it.
This looks like pure app validation: we only run apps we can prove originate from the author.
So if Google doesn't like the app in question (such as ReVanced, NewPipe, etc), they can simply target that signing key to completely disable the app on all devices, even if it's not distributed by them.
Having the file signed by a relatively centralized authority makes it much easier for Google to gain control outside of their realm.
Under that logic, even if the app is "malicious" it would still be possible to install it. And that's not true: if something is deemed malicious, it's blocked. Is an app that hurts Google's dominance "malicious"? Who decides what is malicious?
I tried with Copilot and got this answer:
Nope—there is no official seahorse emoji, and there never has been one. It’s one of those quirky cases of the Mandela Effect, where tons of people (and even some AI models!) are convinced they’ve seen or used it before. Some remember it being blue, orange, or facing a certain direction, but it’s all collective misremembering.
Interestingly, a seahorse emoji was proposed to Unicode but got rejected back in 2018. So if you’ve ever tried to send one and ended up with or instead… you’re not alone.
Would you like to see what a custom seahorse emoji might look like? I could help you imagine one.
Does not sound that different from the EB-5 Immigrant Investor Visa the US already has, except that you gift the money to the feds instead of investing it in a company with 10 employees.
EB-5 requires a million dollar investment that creates 10 jobs for 2 years. There's also documentation of the source of the funds.
1 million dollars seems exceptionally cheap for a US resident visa with no strings attached.
In Canada some provinces have a similar process where you can run a business for a year and apply for permanent residency. In my city there were a bunch of weird little, clearly unprofitable franchises - bubble tea was one for a long time - where the owner was basically running it at a loss to buy residency.
It seemed to require a little more commitment to the community and effort than just handing over a big bag of cash. They've since discontinued it in Ontario, which has probably contributed to the glut of unoccupied commercial real estate.
I use Copilot for search, in one of two ways. The first is as an advanced search, where I use the answer to gauge whether it found what I am looking for, then follow the links for details.
The second is when I am looking for some information I once knew and remember some details, like the title of a book whose plot points I still remember; once I find it, I go do something with that information.
A fuzzy search engine with a better "semantic index"[1] than classic search engines, but the trade-off is that instead of returning links it returns a generated soup of words that are semantically close to your "query".
Mostly useful when you're only looking for the presence of words or terms in the output (including the presence of related words), rather than a coherent explanation with the quality of human-written text.
Sometimes the response is accidentally a truthful statement if interpreted as human text. The quality of a model is judged by how well tuned it is to increase the rate of these accidents (for lack of a better word).
[1]: EDIT: In the sense of "semantic web"; not in the sense of "actually understanding meaning" or any type of psychological sense.
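The "semantic index" idea can be made concrete with a toy sketch. This is not how any real search engine or LLM works (those use learned dense embeddings, not word counts), and all the documents and queries below are made up; it only illustrates what "semantically close to your query" could mean at the simplest possible level:

```python
from collections import Counter
import math

def embed(text):
    # Toy "embedding": a bag-of-words count vector.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical corpus standing in for a search index.
docs = {
    "doc1": "seahorse emoji unicode proposal rejected",
    "doc2": "browser engine market share blink webkit gecko",
}

def search(query):
    # Rank documents by similarity to the query, best match first.
    return sorted(docs, key=lambda d: cosine(embed(query), embed(docs[d])),
                  reverse=True)
```

A classic engine would return the ranked links; the complaint above is that an LLM instead generates prose from whatever lands near the query in its internal version of this space.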
> the trade-off is that instead of returning links it returns a generated soup of words that are semantically close to your "query".
I get links in my responses from Gemini. I would also not describe the response as soup; the answers are often quite specific and phrased in the terms of my prompt rather than of its inputs (developer queries are a prime example).
I call them a "soup" because AFAIK there's no intent behind them:
I'll stop calling them a soup when the part that generates a human-readable response is completely separate from the knowledge/information part; when an untrained program can respond with "I don't know" due to deliberate (/debuggable) mapping of lack of data to a minimal subset of language rules and words that are encoded in the program, rather than having "I don't know" be a series of tokens generated from the training data.
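The separation being asked for can be sketched in a few lines. Everything here is hypothetical (the store, the questions, the phrasing); the point is only that "I don't know" is a deliberate, debuggable branch on missing data, not a token sequence learned from training text:

```python
# Knowledge part: a store that is queried first and returns
# facts or None, never prose. Contents are made-up examples.
KNOWLEDGE = {
    "capital of france": "Paris",
    "largest browser engine": "Blink",
}

def lookup(question):
    # Normalize the question and consult the store.
    return KNOWLEDGE.get(question.strip().lower().rstrip("?"))

# Language part: only phrases whatever the store returned.
def respond(question):
    fact = lookup(question)
    if fact is None:
        # Reached if and only if lookup found nothing: lack of
        # data maps deliberately to "I don't know".
        return "I don't know."
    return f"The answer is {fact}."
```

In an LLM these two parts are fused into one set of weights, which is exactly why the comment above calls the output a "soup".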
Those are called agents, and they already exist today. I've been prompted for more information when the agent realized it didn't have all the context it needed.
Don't agents still depend on LLMs to produce a human-readable response, rather than as a source of information/knowledge? And aren't they still vulnerable to prompt injection attacks, due to being unable to separate the information/knowledge part vs the prompt, because their prompt "parsing" is coupled to an LLM?
If you give them a fair and reasonable go, you'll discover more than by asking leading questions on HN. For example, there are many things you are unaware of as possibilities, like how easy it is to undo code changes to the last checkpoint (Copilot's chat checkpoint, not git or another VCS). They can also make use of all the external tools, knowledge repositories, and search engines we use.
My personal experience has led me to increase my monthly spend, because the ROI is there and the UX is much improved.
Hallucinations will never go away, but I put them in the same category as clicking search results that lead to outdated or completely wrong blog posts. There's a back button.
Yeah that has been on my backlog. I admit that I haven't given them too much priority, but at some point I want to try an AI agent that works offline and is sandboxed.
The frontier models like Gemini are so much better than the open-weight models you can run at home; it's a night-and-day difference. I have yet to try the larger open models on H100s.
I'm keen to build an agent from scratch, given that the Copilot extension is open source and tools like BentoML can help me build out agentic workflows that scale on a beefy H100 machine.
You are correct, although it's more accurate to say there are only three major browser engines: Blink (used by all Chromium derivatives), WebKit (used by Safari and some minor browsers), and Gecko (used by Firefox and its derivatives). Creating a browser engine is hard, so hard that even a multi-billion-dollar company like Microsoft gave up on doing it.
And we may soon witness Gecko going away as a side effect of the Google antitrust lawsuit.
Speaking as someone from Europe, I remember when we switched en masse from a local social media site to Facebook because Facebook was a better experience.
So a better platform is a must-have if the EU wants digitally sovereign social media to gain any traction. Most people just don't care enough about abstract concepts like digital sovereignty to move to a worse platform.
> Most people just don't care enough about abstract concepts like digital sovereignty to move to a worse platform.
Companies care about it, which by extension should make some of their employees care as well.
(Saying this "out loud" made me realize one thing: maybe I should stop trying to make "get out of Twitter and come to Mastodon" happen, and get Communick to focus on companies and recruiters that want an alternative to LinkedIn?)
What do you mean? I have the new version of Tampermonkey installed, and after turning on developer mode in the extensions section all my scripts work perfectly.
https://www.tampermonkey.net/faq.php?locale=en#Q209