
I don't have 2 hours, but I asked Gemini for a summary since it has good YouTube integration. Some interesting points imo, but I'm not sure I wanna watch the whole thing:

> This video features an in-depth interview with Yann LeCun, Chief AI Scientist at Meta and a Turing Award winner, hosted on The Information Bottleneck podcast. LeCun discusses his new startup, the limitations of current Large Language Models (LLMs), his vision for "World Models," and his optimistic outlook on AI safety.

Executive Summary

Yann LeCun argues that the current industry focus on scaling LLMs is a dead end for achieving human-level intelligence. He believes the future lies in World Models—systems that can understand the physical world, plan, and reason using abstract representations rather than just predicting the next token. To pursue this, he is launching a new company, Advanced Machine Intelligence (AMI), which will focus on research and productizing these architectures.

Key Insights from Yann LeCun

1. The "LLM Pill" & The Limits of Generative AI

LeCun is highly critical of the Silicon Valley consensus that simply scaling up LLMs and adding more data will lead to Artificial General Intelligence (AGI).

The "LLM Pill": He disparages the idea that you can reach superintelligence just by scaling LLMs, calling it "complete bullshit" [01:13:02].

Data Inefficiency: LLMs require trillions of tokens to learn what a 4-year-old learns from just living. He notes that a child sees about 16,000 hours of visual data in four years, which contains far more information than all the text on the internet [25:23].

Lack of Grounding: LLMs do not understand the physical world (e.g., object permanence, gravity) and only "regurgitate" answers based on fine-tuning rather than genuine understanding [36:22].

2. The Solution: World Models & JEPA

LeCun advocates for Joint Embedding Predictive Architectures (JEPA).

Prediction in Abstract Space: Unlike video generation models (like Sora) that try to predict every pixel (which is inefficient and hallucinatory), a World Model should predict in an abstract representation space. It filters out irrelevant details (noise) and focuses on what matters [15:35].

The Sailing Analogy: He compares sailing to running a world model. You don't simulate every water molecule (Computational Fluid Dynamics); you use an intuitive, abstract physics model to predict how the wind and waves will affect the boat [01:30:29].

Planning vs. Autocomplete: True intelligence requires planning—predicting the consequences of a sequence of actions to optimize an objective. LLMs just autocomplete text [07:26].

3. A New Startup: Advanced Machine Intelligence (AMI)

LeCun is starting AMI to focus on these "World Models" and planning systems.

Open Research: He insists that upstream research must be published openly to be reliable. Closed research leads to "delusion" about one's own progress [04:59].

Goal: To become a supplier of intelligent systems that can reason and plan, moving beyond the capabilities of current chatbots.

4. AI Safety is an Engineering Problem

LeCun dismisses "doomer" narratives about AI taking over the world, viewing safety as a solvable engineering challenge akin to building reliable jet engines.

Objective-Driven Safety: He proposes "Objective-Driven AI". Instead of trying to fine-tune an LLM (which can be jailbroken), you build a system that generates actions by solving an optimization problem. Safety constraints (e.g., "don't hurt humans") are hard-coded into the objective function, making the system intrinsically safe by construction [01:02:04].
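(Not from the talk, but to make the "objective-driven" idea concrete: a toy sketch of picking an action by minimizing a task cost subject to a hard safety constraint. The cost values and the constraint below are entirely invented for illustration.)

    # Toy sketch only: illustrative cost values and a hand-written safety
    # constraint stand in for whatever a real objective-driven system would use.
    from dataclasses import dataclass

    @dataclass
    class Action:
        name: str
        task_cost: float       # how poorly the action serves the task (lower is better)
        violates_safety: bool  # hypothetical hard guardrail, e.g. "would harm a human"

    def plan(candidates: list[Action]) -> Action:
        # Safety is enforced as a constraint on the optimization, not learned via
        # fine-tuning, so a prompt can't "jailbreak" the system into a forbidden action.
        safe = [a for a in candidates if not a.violates_safety]
        if not safe:
            raise RuntimeError("no safe action available; refuse to act")
        return min(safe, key=lambda a: a.task_cost)

    print(plan([Action("shortcut", 1.0, True), Action("detour", 3.0, False)]).name)  # -> detour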

The Jet Engine Analogy: Early jet engines were dangerous and unreliable, but through engineering, they became the safest mode of transport. AI will follow the same trajectory [58:25].

Dominance vs. Intelligence: He argues that the desire to dominate is a biological trait tied to social species, not a necessary byproduct of intelligence. A machine can be super-intelligent without having the drive to rule humanity [01:35:13].

5. Advice for Students

Don't Just Study CS: LeCun advises students to focus on subjects with a "long shelf life" like mathematics, physics, and engineering (control theory, signal processing).

Avoid Trends: Computer Science trends change too rapidly. Foundational knowledge in how to model reality (physics/math) is more valuable for future AI research than learning the specific coding framework of the month [01:36:20].

6. AGI Timelines

He rejects the term "AGI" because human intelligence is specialized, not general.

Prediction: Optimistically, we might have systems with "cat-level" or "dog-level" intelligence in 5–10 years. Reaching human level might take 20+ years if unforeseen obstacles arise [51:24].


So what are its responsibilities? Does it actually do anything?

Depending on your workload you might also be able to use Timescale to have very fast analytical queries inside postgres directly. That avoids having to replicate the data altogether.
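If it helps, a minimal sketch of what that looks like in practice: a hypertable plus a continuous aggregate so analytical queries hit a pre-computed rollup. The table and column names are made up, and it assumes the timescaledb extension and psycopg2 are available.

    # Sketch only: hypothetical "metrics" schema, TimescaleDB extension assumed installed.
    import psycopg2

    conn = psycopg2.connect("dbname=app")
    conn.autocommit = True  # continuous aggregates can't be created inside a transaction block

    with conn.cursor() as cur:
        cur.execute("""
            CREATE TABLE IF NOT EXISTS metrics (
                ts     timestamptz NOT NULL,
                device text        NOT NULL,
                value  double precision
            );
        """)
        cur.execute("SELECT create_hypertable('metrics', 'ts', if_not_exists => TRUE);")
        # Pre-computed hourly rollup: this is what keeps analytical queries fast inside Postgres.
        cur.execute("""
            CREATE MATERIALIZED VIEW IF NOT EXISTS metrics_hourly
            WITH (timescaledb.continuous) AS
            SELECT time_bucket('1 hour', ts) AS bucket, device, avg(value) AS avg_value
            FROM metrics
            GROUP BY bucket, device;
        """)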

Note that I work for the company that built timescale (Tiger Data). Clickhouse is cool though, just throwing another option into the ring.

Tbf in terms of speed Clickhouse pulls ahead on most benchmarks, unless you want to join a lot with your postgres data directly, in which case you might benefit from having everything in one place. And of course you avoid the sync overhead.


I'm indeed already using TimescaleDB; I was wondering if I would really gain something from adding Clickhouse.

I was using Timescale for a small project of mine and eventually switched to Clickhouse. While there was a 2-4x disk space reduction, the major benefits have been operational (updates & backups). The documentation is also much better, since Timescale's mixes their cloud product documentation in, really muddying the waters.

Despite that, man it is really nice to be able to join your non-timeseries data in your queries (perhaps the fdw will allow this for clickhouse? I need to look into that). If you don't have to deal with the operations side too much and performance isn't a problem, Timescale is really nice.


Can you tell me more about why timescale doesn't perform, in your opinion? My use case for timescale would be to gather my IoT telemetry data (perhaps 20/100 points per second) and store e.g. 1 year's worth of it to do some analysis and query some past data, then offload the older data to parquet files on S3

I'd like to be able to use that for alert detection, etc, and some dashboard metrics, so I was thinking that it was the kind of perfect use-case for timescale, but because I haven't been using it yet "at scale" (not deployed yet) I don't know how it will behave

How do you do JOINs with business data for Clickhouse then? Do you have to do some kind of weird process where you query CH, then query Postgres, then join "manually" in your backend?


I was a little unclear: I think Timescale performs quite well. It's just that in my (very limited) experience, Clickhouse performs better on the same data.

I actually have a blogpost on my experience with it here: https://www.wkrp.xyz/a-small-time-review-of-timescaledb/ that goes into a bit more detail as to my use case and issues I experienced. I'm actually half-way through writing the follow up using Clickhouse.

As detailed in the blog post, my data is all MMO video game stats such as item drops. With Timescale, I was able to join an "items" table with information such as the item name and image URL in the same query as the "item_drops" table. This way the data includes everything needed for presentation. To accomplish the same in Clickhouse, I create an "items" table and an "items_dict" dictionary (https://clickhouse.com/docs/sql-reference/dictionaries) that contains the same data. The Clickhouse query then JOINs items_dict against item_drops to achieve the same thing.
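For anyone curious, roughly what that dictionary setup looks like (using clickhouse-connect; the column names are my guesses at the schema, not the actual one from the blog post):

    # Sketch of the dictionary approach described above; table/column names are guesses.
    import clickhouse_connect

    client = clickhouse_connect.get_client(host="localhost")

    # Dictionary backed by the regular "items" table, kept in memory for cheap lookups.
    client.command("""
        CREATE DICTIONARY IF NOT EXISTS items_dict (
            item_id   UInt64,
            name      String,
            image_url String
        )
        PRIMARY KEY item_id
        SOURCE(CLICKHOUSE(TABLE 'items'))
        LAYOUT(HASHED())
        LIFETIME(MIN 300 MAX 600)
    """)

    # Enrich item_drops with presentation data via dictGet() instead of a plain JOIN.
    rows = client.query("""
        SELECT d.item_id,
               dictGet('items_dict', 'name', d.item_id)      AS name,
               dictGet('items_dict', 'image_url', d.item_id) AS image_url,
               count() AS drops
        FROM item_drops AS d
        GROUP BY d.item_id
    """).result_rows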

If you know the shape of your data, you can probably whip up some quick scripts for generating fake versions and inserting into Timescale to get a feel for storage and query performance.
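Something like this is usually enough for a rough feel, assuming a metrics-style hypertable already exists (the schema, connection string, and row count are placeholders):

    # Throwaway fake-data loader to gauge storage and query behaviour; schema is hypothetical.
    import random
    from datetime import datetime, timedelta, timezone

    import psycopg2
    from psycopg2.extras import execute_values

    start = datetime.now(timezone.utc) - timedelta(days=365)
    rows = [
        (start + timedelta(seconds=i), f"device_{random.randint(0, 99)}", random.gauss(20.0, 5.0))
        for i in range(1_000_000)  # bump this to whatever volume you expect
    ]

    with psycopg2.connect("dbname=app") as conn, conn.cursor() as cur:
        execute_values(cur, "INSERT INTO metrics (ts, device, value) VALUES %s", rows, page_size=10_000)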


More on use-cases involving TimescaleDB replication/migration to ClickHouse https://clickhouse.com/blog/timescale-to-clickhouse-clickpip...

This always sounds super messy to me but I guess supabase is kind of the same thing and especially for side projects it seems like a very efficient setup.

Why replace it at all? Just remove it. I use AI every day and don't use MCP. I've built LLM powered tools that are used daily and don't use MCP. What is the point of this thing in the first place?

It's just a complex abstraction over a fundamentally trivial concept. The only issue it solves is if you want to bring your own tools to an existing chatbot. But I've not had that problem yet.


Ah, so the "I haven't needed it so it must be useless" argument.

There is huge value in having vendors standardize and simplify their APIs instead of having agent users fix each one individually.


Possible legit alternative:

Have the agents write code to use APIs? Code-based tool calling has literally become a first-party way to do tool calling.

We have a bunch of code-accessible endpoints and tools with years of authentication handling etc. built in.

https://www.anthropic.com/engineering/advanced-tool-use#:~:t...

Feels like this obviates the need for MCP if this is becoming common.
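To make that concrete, this is roughly the kind of throwaway script an agent can write against a plain, documented REST API instead of going through an MCP server. The endpoint, fields, and token variable are all invented for illustration:

    # The kind of script an agent writes on the fly against a documented REST API
    # (endpoint, fields and token env var are made up).
    import os
    import requests

    API = "https://api.example.com/v1"
    headers = {"Authorization": f"Bearer {os.environ['EXAMPLE_API_TOKEN']}"}

    open_tickets = requests.get(f"{API}/tickets", params={"status": "open"}, headers=headers, timeout=30)
    open_tickets.raise_for_status()

    # Post a summary comment on each ticket, no MCP server in sight.
    for ticket in open_tickets.json()["items"]:
        requests.post(
            f"{API}/tickets/{ticket['id']}/comments",
            json={"body": f"Triaged automatically, priority={ticket.get('priority', 'unknown')}"},
            headers=headers,
            timeout=30,
        ).raise_for_status()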


That solution will not work as well when the interfaces have not been standardized in a way that makes it so easy to import them into a script as a library.

Coding against every subtly different REST API is as annoying with agents as it is for humans. And it is good to force vendors to define which parts of the interface are actually important and clean them up. Or provide higher level tasks. Why would we ask every client to repeat that work?

There are also plenty of environments where having agents dynamically write and execute scripts is neither prudent nor efficient. Local MCP servers strike a governance balance in that scenario, and remote ones eliminate the need entirely.


It's not particularly hard for current models to wire up an HTTP client based on the docs, and every major company has well-documented APIs showing how to do so, either with their SDKs or curl.

I don't know that I really agree it's as annoying for agents, since they don't have the concept of annoyance and can trundle along just fine indefinitely.

While I appreciate the standardization I've often felt MCPs are a poor solution to a real problem that coincided with a need for good marketing and a desire to own mindspace here from Anthropic.

I've written a lot of agents now and when I've used MCP it has only made them more complicated for not an apparent benefit.

MCP's value lies in the social alignment of people agreeing to use it; its technical merits seem dubious to me, while its community merits seem high.

I can accept the latter and use it because of that while thinking there were other paths we probably should have chosen that make better use of 35 years of existing standards.


I don’t agree on the first part. What sort of llm can’t understand a swagger spec? Why do you think it can’t understand this but can understand mcp?

On runtime problems yes maybe we need standardisation.


Well if everyone was already using Swagger then yes it would be a moot point. It seems you do in fact agree that the standardized manifest is important.

Wait, why do you assume any standardisation is required? Just provide the spec, whether it's Swagger or not.

If everyone had a clear spec with high signal to noise and good documentation that explains in an agent-friendly way how to use all the endpoints while still being parsimonious with tokens and not polluting the context, then yes we wouldn't need MCP...

Instructing people how to do that amounts to a standard in any case. Might as well specify the request format and authentication while you're at it.


I don’t get your point. Obviously some spec is needed but why does it have to be MCP?

If I want my API to work with an LLM, I'd create a spec with Swagger. But why do I have to go with MCP? What is it adding that didn't exist in other specs?


You can ask an AI agent that question and get a very comprehensive answer. It would describe things like the benefits of adding a wire protocol, having persistent connections with SSE, not being coupled to HTTP, dynamic discovery and lazy loading, a simplified schema, less context window consumption, etc.

So you're basically saying: "nobody is using the standard that we have defined, let's solve this by introducing a new standard". Fair enough.

Yep. And those that did implement the standard did so for a different set of consumers with different needs.

I'm also willing to make an appeal to authority here (or at least competitive markets). If Anthropic was able to get Google and others on board with this thing, it probably does have merit beyond what else is available.


I thought the whole point of AI was that we wouldn't have to do these things anymore. If we're replacing engineering practice with different yet still basically the same engineering practice, then AI doesn't buy us much. If AI lives up to their marketing hype, then we shouldn't need MCP.

Hm. Well maybe you are mistaken and that dichotomy is false.

Then what's the point of AI?

To write code. They still depend on / benefit from abstractions like humans do. But they are (for now) a different user persona with different needs. Turns out you can get better ROI and yield ecosystem benefits if some abstractions are tailored to them.

You could still use AI to implement the MCP server, just like humans implemented OpenAPI for each other. Is it really surprising that we would need to refactor some architecture to work better with LLMs at this point? Clearly some big orgs have decided it's worth the investment. You may not agree and that's fine - that happens with every type of new programming thing. But to compare generally against the "marketing hype" is basically just a straw man or nut picking.


> There is huge value in having vendors standardize and simplifying their APIs

Yes, and it's called OpenAPI.


My product is "API first". Every UI task has an underlying endpoint which is defined in the OpenAPI spec so we can generate multiple language SDK. The documentation for each endpoint and request/response property is decent enough. Higher level patterns are described elsewhere though.

90% of the endpoints are useless to an AI agent, and within the most important ones only 70% of the fields are relevant. The whole spec would consume a huge fraction of context tokens.

So at a minimum I need a new manifest with a highly pared down index.
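As a rough sketch, that manifest can be generated from the existing spec by whitelisting the handful of agent-relevant operations. The operation IDs and file name here are hypothetical:

    # Build a pared-down index of agent-relevant operations from the full OpenAPI spec.
    import json

    AGENT_RELEVANT = {"listInvoices", "createInvoice", "getCustomer"}  # ~10% of the API

    with open("openapi.json") as f:
        spec = json.load(f)

    manifest = []
    for path, methods in spec.get("paths", {}).items():
        for method, op in methods.items():
            if isinstance(op, dict) and op.get("operationId") in AGENT_RELEVANT:
                manifest.append({
                    "operation": op["operationId"],
                    "method": method.upper(),
                    "path": path,
                    "summary": op.get("summary", ""),
                })

    # The result is a few hundred tokens instead of the full spec.
    print(json.dumps(manifest, indent=2))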

I'm not claiming that we're not in this classic XKCD situation, but the point of the cartoon is that that's just how it be... https://xkcd.com/927/

Maybe OpenAPI will be able to subsume MCP and those manifests can be generated from the same spec just like the SDKs themselves.


> The only issue it solves is if you want to bring your own tools to an existing chatbot.

That's a phenomenally important problem to solve for Anthropic, OpenAI, Google, and anyone else who wants to build generalized chatbots or assistants for mass consumer adoption. As well as any existing company or brand that owns data assets and wants to participate as an MCP Server. It's a chatbot app store standard. That's a huge market.


Isn't that the way it works, everybody throws their ideas against the wall and sees what sticks? I haven't really seen anyone recommend using XML in a long while...

And isn't this a 'remote' tool protocol? I mean, I've been plugging away at a VM with Claude for a bit, and as soon as the repl worked it started using that to debug issues instead of "spray and pray debugging" or, my personal favorite, making the failing tests match the buggy code instead of fixing the code and keeping the correct tests.


I have Linear (MCP) connected to ChatGPT and my Claude Desktop, and I use it daily from both.

For the MCP naysayers: if I want to connect things like Linear or any service out there to third-party agentic platforms (ChatGPT, Claude Desktop), what exactly are you counter-proposing?

(I also hate MCP, but it gets a bit tiresome seeing these conversations without anyone addressing the use case above, which is 99% of the use case: consumers)


Easy. Just tell the LLM to use the Linear CLI or hit their API directly. I’m only half-joking. Older models were terrible at doing that reliably, which is exactly why we created MCP.

Our SaaS has a built-in AI assistant that only performs actions for the user through our GraphQL API. We wrapped the API in simple MCP tools that give the model clean introspection and let us inject the user’s authenticated session cookie directly. The LLM never deals with login, tokens, or permissions. It can just act with the full rights of the logged-in user.

MCP still has value today, especially with models that can easily call tools but can't stick to a prompt. From what I've seen in Claude's roadmap, the future may shift toward loading "skills" that describe exactly how to call a GraphQL API (in my case), then letting the model write the code itself. That sounds good on paper, but an LLM generating and running API code on the fly is less consistent and more error-prone than calling pre-built tools.
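Roughly what such a wrapper can look like with the Python MCP SDK; the GraphQL endpoint, the single tool, and the cookie plumbing below are simplified stand-ins rather than our actual implementation:

    # Sketch: wrap a GraphQL API in one MCP tool, injecting the user's session server-side
    # (endpoint, env var and query handling are simplified stand-ins).
    import os
    import requests
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("saas-assistant")
    GRAPHQL_URL = "https://app.example.com/graphql"

    @mcp.tool()
    def run_query(query: str, variables: dict | None = None) -> dict:
        """Run a GraphQL query as the currently logged-in user."""
        # The session cookie comes from the server environment; the model never sees credentials.
        resp = requests.post(
            GRAPHQL_URL,
            json={"query": query, "variables": variables or {}},
            cookies={"session": os.environ["USER_SESSION_COOKIE"]},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()

    if __name__ == "__main__":
        mcp.run()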


Yes, let's have the stochastic parrot guessing machine run executables on the project manager's computer - that can only end well, right? =)

But you're right, Skills and hosted scripting environments are the future for agents.

Instead of Claude first getting everything from system A and then system B and then filtering them to feed into system C it can do all that with a script inside a "virtual machine", which optimises the calls so that it doesn't need to waste context and bandwidth shoveling around unnecessary data.


Easy if you ignore the security aspects. You want to hand over your tokens to your LLM so it can script up a tool that can access it? The value I see in MCP is that you can give an LLM access to services via socket without giving it access to the tokens/credentials required to access said service. It provides at least one level of security that way.

The point of the example seemed to be connecting easily to a scoped GraphQL API.

> What is the point of this thing in the first place?

It's easier for end users to wire up than to try to wire up individual APIs.


So, I've been playing with an mcp server of my own... the api the mcp talks to is something that can create/edit/delete argument structures, like argument graphs - premises, lemmas, and conclusions. The server has a good syntactical understanding of arguments, how to structure syllogisms etc.

But it doesn't have a semantic understanding because it's not an llm.

So connecting an llm with my api via MCP means that I can do things like "can you semantically analyze the argument?" and "can you create any counterpoints you think make sense?" and "I don't think premise P12 is essential for lemma L23, can you remove it?" And it will, and I can watch it on my frontend to see how the argument evolves.

So in that sense - combining semantic understanding with tool use to do something that neither can do alone - I find it very valuable. However, if your point is that something other than MCP can do the same thing, I could probably accept that too (especially if you suggested what that could be :) ). I've considered just having my backend use an api key to call models but it's sort of a different pattern that would require me to write a whole lot more code (and pay more money).


The less context switching current-day LLMs need to do, the better they seem to perform. If I'm writing C code using an agent but my spec needs complex SQL to be retrieved, then it's better to give access to the spec database through MCP to prevent the LLM from going haywire.

How do I integrate tool calling in an IDE (such as Zed) without MCP?

Verification is key, and the issue is that almost all AI generated code looks plausible so just reading the code is usually not enough. You need to build extremely good testing systems and actually run through the scenarios that you want to ensure work to be confident in the results. This can be preview deployments or other AI generated end to end tests that produce video output that you can watch or just a very good test suite with guard rails.

Without such automation and guard rails, AI generated code eventually becomes a burden on your team because you simply can't manually verify every scenario.


Indeed, I see verification debt outweighing traditional tech debt very, very soon...

And with any luck, they don't vibe code their tests that ultimately just return true;

I would rather write the code and have AI write the tests :)

And I have on occasion found it useful.


I can automatically generate suites of plausible tests using Claude Code.

If you can make "no AI for tests" a rule, then you can simply make the rule "no AI", or just learn to cope with it.


OpenAI and Anthropic at least are both pretty clear about the fact that you need to check the output:

https://openai.com/policies/row-terms-of-use/

https://www.anthropic.com/legal/aup

OpenAI:

> When you use our Services you understand and agree:

Output may not always be accurate. You should not rely on Output from our Services as a sole source of truth or factual information, or as a substitute for professional advice. You must evaluate Output for accuracy and appropriateness for your use case, including using human review as appropriate, before using or sharing Output from the Services. You must not use any Output relating to a person for any purpose that could have a legal or material impact on that person, such as making credit, educational, employment, housing, insurance, legal, medical, or other important decisions about them. Our Services may provide incomplete, incorrect, or offensive Output that does not represent OpenAI’s views. If Output references any third party products or services, it doesn’t mean the third party endorses or is affiliated with OpenAI.

Anthropic:

> When using our products or services to provide advice, recommendations, or in subjective decision-making directly affecting individuals or consumers, a qualified professional in that field must review the content or decision prior to dissemination or finalization. You or your organization are responsible for the accuracy and appropriateness of that information.

So I don't think we can say they are lying.

A poor workman blames his tools. So please take responsibility for what you deliver. And if the result is bad, you can learn from it. That doesn't have to mean not using AI, but it definitely means you need to fact-check more thoroughly.


I'm curious how OpenAI has the funds to pay for 40% of the world's RAM production? Sure, they are big and have a few billion, but I kind of assumed that 40% for a year or whatever they are buying is easily double-digit billions? That has to hurt even them, especially because then they can't buy anything else?

Also, what are these contracts? Surely Samsung could decide to cancel the contract by paying a large fee, but is that fee truly so large that getting their RAM back, when prices are now 4x what they used to be, is not worth it?


I found this, which claims the RAM market in 2024 was almost 100 billion: https://www.grandviewresearch.com/industry-analysis/random-a...

I assume this includes more than just the raw price of modules, but OpenAI only has 60 billion in funding altogether and was aiming for 20 billion ARR this year. This sounds like they are spending maybe half their money on RAM they never use? That just doesn't add up.


Ponzi scheme [1], Anticompetitive hoarding [2], Cornering the market, Raising rivals' costs (RRC), Consumer welfare harm, and so on

[1] https://en.wikipedia.org/wiki/Ponzi_scheme

[2] https://en.wikipedia.org/wiki/Hoarding_(economics)

I think the event is big enough to stop them and send them behind bars.

> Samsung

I think that Samsung - and other manufacturers - have been intentionally limiting their production capacity so as not to devalue the prices of their chips (for SSDs at least), so maybe they are an interested party. This, combined with the madness we are seeing, is abuse^2. I think they should also end up behind bars.


> reportedly in the low single-digit billions at best

They are expected to hit 9 billion by end of year. Meaning the valuation multiple is only 30x. Which is still steep but at that growth rate not totally unreasonable.

https://techcrunch.com/2025/11/04/anthropic-expects-b2b-dema...


30x for a company that doesn't pay out anything and may never pay off at all is crazy in my book, so as a best-case scenario it's an obvious hard pass.


Oh, they need control of the models to be able to censor and to ensure that whatever happens with AI inside the country stays under their control. But the open-source part? Idk, I think they do it to mess with the US investment and for the typical corporate open-source reasons: community, marketing, etc. But tbh, as a European with no serious competitor of our own, the messing with the US is something I can get behind.


They're pouring in money to disrupt American AI markets and efforts. They do this in countless other fields. It's a model of massive state funding -> give it away at cut-rate prices -> dominate the market -> reap the rewards.

It's a very transparent, consistent strategy.

AI is a little different because it has geopolitical implications.


When it's a competition among individual producers, we call it "a free market" and praise Hal Varian. When it's a competition among countries, it's suddenly threatening to "disrupt American AI markets and efforts". The obvious solution here is to pour money into LLM research too. Massive state funding -> provide SOTA models for free -> dominate the market -> reap the rewards (from the free models).


It's not like the US doesn't face similar accusations. One such case is the WTO accusing Boeing of receiving illegal subsidies from the US government. https://www.transportenvironment.org/articles/wto-says-us-ga...


We don't do that.


I can't believe I'm shilling for China in these comments, but how different is it when company A gets blank-check investments from VCs and wink-wink support from the government in the West? And AI labs in China have been getting funding internally from their companies for a while now, since before the LLM era.


This is the rare earth minerals dumping all over again. Devalue to such a price as to make the market participants quit, so they can later have a strategic stranglehold on the supply.

This is using open source in a bit of different spirit than the hacker ethos, and I am not sure how I feel about it.

It is a kind of cheat on the fair market but at the same time it is also costly to China and its capital costs may become unsustainable before the last players fold.


> cheat on the fair market

Can you really view this as a cheat when the US is throwing a trillion dollars in support of a supposedly "fair market"?


> This is using open source in a bit of different spirit than the hacker ethos, and I am not sure how I feel about it.

It's a bit early to have any sort of feelings about it, isn't it? You're speaking in absolutes, but none of this is necessarily 100% true, as we don't know their intentions. And judging a group of individuals' intentions based on what their country seems to want, from the lens of a foreign country, usually doesn't land you with the right interpretation.


Prosecutor, judge and jury? You have access to their minds to know their true intentions? This whole "DeepSeek is controlled by the CCP" thing is ridiculous. If you want to know how bad the CCP is at IT, then check the government-backed banks.

The way I see this, some tech teams in China have figured out that training and tuning LLMs is not that expensive after all and they can do it at a fraction of the cost. So they are doing it to enter a market previously dominated by US only players.


I mentioned this before as well, but the AI competition within China doesn't care that much about the Western companies. The internal market is huge, and they know winner-takes-all in this space is real.


Are you by chance an OpenAI investor?

We should all be happy about the price of AI coming down.


But the economy!!! /s

Seriously though, our leaders are actively throwing everything and the kitchen sink into AI companies - in some vain attempt to become immortal or to own even more of the nation's wealth beyond what they already do, chasing some kind of neo-tech feudalism. Both are unachievable because they rely on a complex system that they clearly don't understand.


Good luck making OpenAI and Google cry uncle. They have the US government on their side. They will not be allowed to fail, and they know it.

What I appreciate about the Chinese efforts is that they are being forced to get more intelligence from less hardware, and they are not only releasing their work products but documenting the R&D behind them at least as well as our own closed-source companies do.

A good reason to stir up dumping accusations and anti-China bias would be if they stopped publishing not just the open-source models, but the technical papers that go with them. Until that happens, I think it's better to prefer more charitable explanations for their posture.


> It is a kind of cheat on the fair market ...

I am very curious about your definition and usage of 'fair' there, and whether you would call the LLM etc. sector as it stands now, but hypothetically absent DeepSeek say, a 'fair market'. (If not, why not?)


Where exactly is this fair market? Giant US companies love rules and regulations, but only when it benefits them (and they pay dearly for it)


The way we fund the AI bubble in the west could also be described as: "kind of cheat on the fair market". OpenAI has never made a single dime of profit.


Yeah and OpenAI's CPO was artificially commissioned as a Lt. Colonel in the US Army in conjunction with a $200M contract

Absurd to say Deepseek is CCP controlled while ignoring the govt connection here


Isn’t it already well accepted that the LLM market exists in a bubble with a handful of companies artificially inflating their own values?

ESH


Do they actually spend that much though? I think they are getting similar results with far fewer resources.

It's also a bit funny that providing free models is probably the most communist thing China has done in a long time.


Ah, so exactly like Uber, Netflix, Microsoft, Amazon, Facebook and so on have done to the rest of the world over the last few decades then?

Where do you think they learnt this trick? After years of lurking on HN, this post's comment section wins #1 on the American Hypocrisy chart. Unbelievable that even in the current US, people can't recognize when they're looking in the mirror. But I guess you're disincentivized to do so when most of your net worth stems from exactly those companies and those practices.


Except domestic alternatives to the tech companies you listed were not driven out by them; they still exist today with substantial market share. American tech dominance elsewhere has more to do with a lack of competition, and when competition does exist, they're more often than not held at a disadvantage by domestic governments. So your counter-narrative is false here.


> American tech dominance elsewhere has more to do with a lack of competition,

Do you believe the lack of competition is purely because the products are superior?

US tech is now sort of like the dollar. People/countries outside the US need and want alternatives to hedge against in the event of political uncertainty but cannot do it completely for various reasons, including arm-twisting by the US govt.

One example: some govts and universities in the EU have been trying to get rid of MS products for decades but are unable to.


> American tech dominance elsewhere has more to do with a lack of competition

If that's true, why doesn't America compete on this front against China?

> they're more often than not held at a disadvantage by domestic governments

So when the US had the policy advantage over the EU it was just the market working, but when China has the policy advantage over the US it suddenly becomes unfair?


>> they're more often than not held at a disadvantage by domestic governments

I think you misunderstood this. When domestic competitors arise against American tech, the domestic government tends to explicitly favour those competitors over American tech, placing the latter at a disadvantage.

You can see India or China or Korea or SEA, where they have their own favored food delivery apps and internet services. Even in the EU, local LLM companies like Mistral are favored by local businesses for integration over OpenAI. Clearly American tech hasn't actually displaced serious domestic competitors, so the rare earths comparison fails, since the USA in contrast is far more willing to let local businesses fail.


I'm not American, and I also agree that the current big techs should be broken up by force of the state, but there is a very big difference between a company becoming monopolistic due to market forces and a company becoming monopolistic due to state strategy, intervention, and backing.

Things can be bad on a spectrum, and I believe it is much easier for society/the state to break up a capitalistic monopoly than a state-backed monopoly. To illustrate, the state has sued some of those companies over competition ills and they were seriously threatened. That is not the case with a state company.


And what exactly are grants then? Tariffs? All the lobbied laws that benefit specific corporations or industries? Aren't they state backed advantages?

Banks created their oligopolies and then who saved them when they fucked up?

Isn't Tesla a state-backed monopoly in the USA because of grants and tariffs on external competitors? Isn't SpaceX? Yet nobody treats them as state-backed.

I don't understand this necessity to put companies on a pedestal and hate on states. Capitalist propaganda, I guess?

Market forces are manipulated all the time. This distinction is nonsense. Companies influence states and vice-versa.

