
There's going to be a bloodbath in that market in the next few years. There are a lot of battery producers and most of them are not producing at full capacity. At the same time, manufacturing costs are dropping as well.

Some battery makers are producing batteries at a cost level of around $60 per kWh. At that cost, the 16 kWh battery would come out below $1,000 (obviously not the same as the product price). Sodium ion might push those prices even lower: below $50 soonish, and eventually closer to the $10-20 range in maybe 5-10 years. At that point we're talking a few hundred dollars for a decent size domestic battery. You still need packaging, inverters, etc. of course.

But the ROI at anything close to those price levels is going to be pretty rapid. And it wouldn't break the bank for households across the world. Add a few kW of solar on roofs, balconies, etc. It won't solve everyone's problems, and certainly not in every season. But it can help reduce energy bills in a meaningful way. Even in winter.

Also worth pointing out: most of the US is south of Cornwall. The Canadian border runs roughly at 49 degrees latitude. Cornwall, the most southern point in the UK, sits at 50 degrees. If it can work there, most of the US has no excuse. Also, the UK isn't exactly well known for its clear blue skies. Even people in Scotland, much further north, manage to get positive ROIs out of their solar setups.


I suspect the traditional grid operators will fight this very very hard.

I installed a 16.5kWp ground-mount array a month ago. I live in the US Northeast, in a mountainous location, which means we get late sunrises and early sunsets. Nevertheless, based on my one month of data, it looks like we can generate all the power we need for our household on a sunny winter day, excluding electric vehicles. Even on overcast days, we can sometimes offset a significant portion of our usage. My locale does not have time-of-use rates, so there’s no point trying to do arbitrage for electricity prices. So right now I just have our battery configured for backup. My hope is that during the summer months I can reconfigure the system to use the battery to reduce grid reliance instead.

The expiring tax credits were what forced my hand. I’m the kind of person who likes to install things himself, and I probably would have gone that route for solar too, because the materials costs (sans battery) aren’t even half of the total cost.


Here in Europe, we got more than a month's worth of foggy, cloudy weather (something that looks like it will keep being a thing), which is something I became painfully aware of as the owner of a solar setup.

No amount of battery banks can tide over such a long stretch.

By the way, let me ask you: considering your location, you must get a lot of snow. How do you deal with it? Is it a problem? Panels are quite hard to reach on the roof.


Where in Europe? Between Spain and Denmark there is a lot of variance in temperature, sun and rain...

Talking about the weather "in Europe" is like talking about the shoe size a family of 10 wears.


I do indeed get a lot of snow. In January and February it snows roughly once every two days, although usually in small amounts.

Fortunately I have a ground mount. The bottom row is roughly at waist height. I can sweep (and have been sweeping) the panels off with a large push broom. Because my array is so large, I can only reach the bottom half of the array. But this is usually enough. When a panel starts to generate power, it also tends to heat up; the snow on the top half then often slides off on its own.

I might invest in a longer broom. It is not uncommon for people here to own “snow rakes” to remove large snow loads from their roofs. These usually have a rubberized “rake” with a very long aluminum handle. Or the novelty of this might wear off and I’ll just let the panel do its own thing. It is pitched rather steeply (close to 45°) and based on my observations of my neighbors, panels tend to shed the snow on their own eventually.


Maybe you can even forward-bias your panel, and make it generate some heat off the battery power. (It may even glow a tiny bit.)

I'm not so sure. There are a lot of large-scale applications that would gobble up battery supply if it hit a certain price point. Grid-scale storage and datacenters, for example.

If prices for residential gear fall too much, I expect the manufacturers would just stop making it and focus on the commercial options instead.


Why does it have to be zero-sum?

Why couldn't commercial demand create the economies of scale that bring the residential stuff down in price with it?


IMO, there's close to no bad use case for batteries. In almost all their applications, they end up spreading out power consumption, substituting cheap energy for expensive energy.

If a datacenter installs a solar array + a giant battery pack for their power, that's much better than them heavily relying on a natural gas plant to generate power when the lights are out.


One thing to remember is that as solar becomes more widespread, line costs will go up (assuming they are subsidized by kWh use, which they generally are) and no-sun power prices will increase, since that's the only time the grid needs power from non-solar producers, and those producers still need to cover the costs incurred while they're not producing.

That will push the economics towards completely off-grid systems as more people adopt solar. So if people are planning a system for themselves, they should probably consider that it will make sense to expand their setup in the future, and that there might be a price crunch as larger systems and more people wanting to switch drive up demand.


My partner works in the field and we once talked about this. I think the idea is that individual consumers’ and businesses’ batteries can serve the grid as needed. For example, if your car is fully charged and you don’t need it today, it can top up local needs.

So I think the writing isn’t on the wall yet for line price going up, although I’m of course talking of a) Belgium, and b) a future that could go wrong if utilities don’t fund smart metering.


That’s how it works for us here in Australia. We have 16 kW of solar and 40 kWh of battery, and pay (and receive) wholesale rates for electricity. During the day electricity prices are very low or negative, and we run off the solar and charge the car then. In the evenings when demand is high, electricity prices can spike, and our system will automatically sell to the grid. Sometimes we may need to draw from the grid in the early morning to make up for that, but the price we pay then is insignificant compared to what we make selling the day before.

This is addressed by crowdsourcing generation and storage to household batteries. Surplus energy is banked locally instead of being dumped on the grid. The utilities buy it back from homeowners at wholesale rate under demand response programs when they can't meet demand.

An interesting possibility is the grid becoming smaller. Neighborhood scale.

In many places from Central Europe and further north, dealing with arctic cold spells and Dunkelflauten is near impossible for a home solar and storage setup.

But you also don’t want to pay for a continental scale grid the remaining 51 weeks.

So in your neighborhood, add some wind power and a good old trusty diesel/gas turbine running on carbon-neutral fuel, and keep the costs to a minimum.


The world's battery manufacturing capacity is dominated by just a small handful of players:

https://en.wikipedia.org/wiki/List_of_electric_vehicle_batte...

Just 11 companies control 90+% of manufacturing capacity. I think they might need to adjust their ambitions in the face of demand, but most of them are too big to fail.


Companies like Schneider Electric have systems at 25-50% of the price of the hyped brands, but they don't provide batteries.

This is the company that owns APC, so it's not like they're new or untested. They just don't bother with brand awareness.


It's weird to read about Schneider Electric not bothering with brand awareness. They aren't a household brand, sure, but they are well up there with Siemens and the like in the industrial/B2B sector, and their marketing budget is allocated accordingly.

All they did was buy up everyone else's brands and put their own brand over them. Modicon PLCs, Magnecraft relays, etc.

Rebase your local history, merge collaborative work. It helps to just relabel rebase as "rewrite history". That makes it clearer that it's generally not acceptable to force push your rewritten history upstream. I've seen people trying to force push their changes and overwrite the remote history. If you need to force push, you probably messed up. Maybe it's OK on your own pull request branches, assuming nobody else is working on them. But otherwise it's a bad idea.

I tend to rebase my unpushed local changes on top of upstream changes. That's why rebase exists. So you can rewrite your changes on top of upstream changes and keep life simple for consumers of your changes when they get merged. It's a courtesy to them. When merging upstream changes gets complicated (lots of conflicts), falling back to merging gives you more flexibility to fix things.
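
To make that concrete, this is roughly what that flow looks like day to day; a minimal sketch, with branch names as placeholders:

    git fetch origin
    git rebase origin/main       # replay my unpushed commits on top of upstream
    # resolve any conflicts, then: git rebase --continue
    git push --force-with-lease  # only on my own feature branch, never on a shared one

The --force-with-lease flag at least refuses to overwrite remote commits you haven't fetched yet, which is a friendlier failure mode than a blind --force.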

The resulting pull requests might get a bit ugly if you merge a lot. One solution is squash merging when you finally merge your pull request. This has the downside that you lose a lot of history and context. The other solution is to just accept that not all change is linear and that there's nothing wrong with merging. I tend to lean towards that.

If your changes are substantial, conflict resolution caused by your changes tends to be a lot easier for others if they get lots of small commits, a few of which may conflict, rather than one enormous commit that has lots of conflicts. That's a good reason to avoid squash merges. Interactive rebasing is something I usually find too tedious to bother with, but some people really like it, and it can be a good middle ground.
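
For reference, the two cleanup styles I'm comparing look roughly like this; a sketch, with "my-feature" as a placeholder branch name:

    # squash merge: one big commit, less history
    git checkout main
    git merge --squash my-feature
    git commit

    # interactive rebase: rewrite the branch into a few clean, logical commits
    git checkout my-feature
    git rebase -i origin/main

The interactive rebase opens an editor where you can reorder, squash, or reword individual commits before anyone else sees them.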

It's not that one is better than the other. It's really about how you collaborate with others. These tools exist because in large OSS projects, like Linux, where they have to deal with a lot of contributions, they want to give contributors the tools they need to provide very clean, easy to merge contributions. That includes things like rewriting history for clarity and ensuring the history is nice and linear.


Maybe I'm old, but I still think a repository should be a repository: sitting on a server somewhere, receiving clean commits with well written messages, running CI. And a local copy should be a local copy: sitting on my machine, allowing me to make changes willy-nilly, and then clean them up for review and commit. That's just a different set of operations. There's no reason a local copy should have the exact same implementation as a repository, git made a wrong turn in this, let's just admit it.

> And a local copy should be a local copy: sitting on my machine, allowing me to make changes willy-nilly, and then clean them up for review and commit.

That's exactly what Git is. You have your own local copy that you can mess about with and it's only when you sync with the remote that anyone else sees it.


I agree, but I think git got the distributed (i.e. all nodes the same) part right. I also think what you say doesn't take it far enough.

I think it should be possible to assign different instances of the repository different "roles" and have the tooling assist with that. For example, a "clean" instance that will only ever contain fully working commits and can be used in conjunction with production and debugging. And various "local" instances - per feature, per developer, or per something else - that might be duplicated across any number of devices.

You can DIY this using raw git with tags, a bit of overhead, and discipline. Or the github "pull" model facilitates it well. But either you're doing extra work or you're using an external service. It would be nice if instead it was natively supported.

This might seem silly and unnecessary but consider how you handle security sensitive branches or company internal (proprietary) versus FOSS releases. In the latter case consider the difficulty of collaborating with the community across the divide.


> I still think a repository should be a repository: sitting on a server somewhere, receiving clean commits with well written messages, running CI. And a local copy should be a local copy: sitting on my machine, allowing me to make changes willy-nilly, and then clean them up for review and commit

This is one way to see things and work, and git supports that workflow. Higher-level tooling tailored for this view (like GitHub) is plentiful.

> There's no reason a local copy should have the exact same implementation as a repository

...Except to also support the many git users who are different from you and in different contexts. Bending git's API to your preferences would make it less useful, harder to use, or not even suitable at all for many others.

> git made a wrong turn in this, let's just admit it.

Nope. I prefer my VCS decentralized and flexible, thank you very much. SVN and Perforce are still there for you.

Besides, it's objectively wrong to call it "a wrong turn" if you consider the context in which git was born and got early traction: sharing patches over e-mail. That is what git was built for. Had it been built your way (first-class concepts coupled to p2p email), your workflow would most likely not be supported and GitHub would not exist.

If you are really as old as you imply, you are showing your lack of history more than your age.


Change is a constant for software engineers. It always has been. If your job is doing stuff that should be automated, either you are automating it or you are not a very good software engineer.

A few key fallacies at play here.

- Making a closed world assumption: we'll do the same amount of work but with fewer people. This has never been true. As soon as you meaningfully drop the price of a unit of software (pick your favorite), demand goes up and we'll need more of them. It also opens the door to building software that previously would have been too expensive. That's why the number of software engineers has consistently increased over the years, despite a lot of stuff getting a lot easier over time.

- Assuming the type of work always stays the same. This too has never been true. Stuff changes over time. New tools, new frameworks, new types of software, new jobs to do. And the old ones fade away. Being a software engineer is a life of learning. Very few of us get to do the same things for decades on end.

- Assuming people know what to ask for. AIs do as you ask, which isn't necessarily what you want. The quality of what you get correlates very much with your ability to ask for it. The notion that you get a coherent bit of software in response to poorly articulated, incoherent prompts is about as realistic as getting a customer to produce coherent requirements. That never happened either. Converting customer wishes into maintainable/valuable software is still a bit of a dark art.

The bottom line: many companies don't have a lot of in-house software development capacity or competence. AI doesn't really fix that for them, in exactly the same way that Visual Basic didn't magically turn them into software-driven companies. They'll use third party companies to get the software they need because they lack the in-house competence to even ask for the right things.

Lowering the cost just means they'll raise the ambition level and ask for more/better software. The type of companies that will deliver that will be staffed with people working with AI tools to build this stuff for them. You might call these people software engineers. Demand for senior SEs will go through the roof because they deliver the best AI-generated software: they know what good software looks like and what to ask for. That creates a lot of room for enterprising juniors to skill up and join the club because, as ever, there simply aren't enough seniors around. And thanks to AI, skilling up is easier than ever.

The distinction between junior and senior was always fairly shallow. I know people in their twenties who got labeled as senior barely out of college, maybe on their second or third job. It was always a bit of a vanity title that, because of the high demand for any kind of SE, got awarded early. AI changes nothing here. It just creates more opportunities for people to use tools to work themselves up to senior level quicker. And of course there are lots of examples of smart young people who managed to code pretty significant things and create successful startups. If you are ambitious, now is a good time to be alive.


We're at the stage where almost any UI change on Macs, no matter how small, is heavily criticized. It seems a lot of people are getting very upset over a lot of micro detail. There's no way to please all of them. I've upgraded to Tahoe. Honestly, I barely notice any difference. It looks alright. There's very little for me to get upset over here. I'm pretty sure I'm in the bucket that describes the overwhelmingly large majority of users: indifferent about the changes, overall not too upset, barely noticing it.

As for Linux: I also have a Linux laptop with Gnome for light gaming (Manjaro). It's alright, but a bit of a mess from a UX point of view. Linux always was messy on that front. But it works reasonably well.

The point with the distributions that you mention is that they each do things slightly differently, and I would argue in ways that are mostly very superficial. Nobody seems to be able to agree on anything in the Linux world so all you get is a lot of opinionated takes on how stuff should behave and which side of the screen things should live. This package manager over that one.

I've been using Linux on and off for a few decades, so I mostly ignore all the window dressing and attempts to create the ultimate package manager UI, file managers and what not and just use the command line. These things come and go.

It seems many distros are mostly just exercises in creating some theme for Gnome or whatever, imitating whatever the creator liked (Windows 95, BeOS, early versions of OS X, CDE, etc.). There's a few decades of nostalgia to pick from here.


The changes in Tahoe do not fall under the bucket of "no matter how small". We have grown to accept many small but very annoying changes, from disappearing scrollbars to not showing the full URL in Safari, to name a few, all driven by the smaller touchscreens on iPhone/iPad, but with Tahoe things became quite extreme.

A common intention with open source is to allow people, and the AI tools they use, to reuse, recombine, etc. OSS code in any way they see fit. If that's not what you want, don't open source your work. It's not stealing if you gave it away and effectively told people "do whatever you want", which is one way licenses such as the MIT license are often characterized.

It's very hard to prevent specific types of usage (like feeding code to an LLM) without throwing out the baby with the bathwater and also preventing all sorts of other valid usages. AGPLv3, which is what antirez and Redis use, goes too far IMHO and still doesn't quite get the job done. It doesn't forbid people (or tools) from "looking" at the code, which is what AI training might be characterized as. That license creates lots of headaches for corporate legal departments. I switched to Valkey for that reason.

I actually prefer using MIT style licenses for my own contributions precisely because I don't want to constrain people or AI usage. Go for it. More power to you if you find my work useful. That's why I provide it for free. I think this is consistent with the original goals of open source developers. They wanted others to be able to use their stuff without having to worry about lawyers.

Anyway, AI progress won't stop because of any of this. As antirez says, that stuff is now part of our lives and it is a huge enabler if you are still interested in solving interesting problems. Which apparently he is. I can echo much of what he says. I've been able to solve larger and larger problems with AI tools. The last year has seen quite a bit of evolution in what is possible.

> Am I wrong to feel this?

I think your feelings are yours. But you might at least examine your own reasoning a bit more critically. Words like theft and stealing are big words, and I think your case for them is just very weak. And when you are coding yourself, are you not standing on the shoulders of giants? Is that not theft?


Markdown got there first/early, depending on your perspective. Things like AsciiDoc got popular much later, when AsciiDoctor was released around 2009 (though AsciiDoc technically already existed when Markdown was created), and it is aimed at people who care about structured documentation. It's not aimed at casual users.

Likewise, things like org mode, which also emerged around the same time, catered to a niche of emacs-using people, which almost by definition is a subset of techies. It wasn't a logical choice for a mainstream blogging tool.

Markdown was aimed at people who used blogging tools (initially), and later any other kind of tool that accepted text. It spreading to tools like Slack, Github, etc. was no accident. Github has actually supported plenty of alternatives for documents. But they picked Markdown for issue tracking, pull requests, etc. because they had to just pick something and Markdown was the most popular.

By the time AsciiDoc became more popular (2009ish), Github was already being developed. With Markdown support. AsciiDoc was a niche thing, Markdown was already somewhat widely used then. It was an obvious choice. Them picking Markdown was important because the whole OSS community started using Github and got exposed to Markdown that way.

The rest is history. Other formats existed (Textile and various other wiki formats). They have features that are important to some people. But getting people to switch who don't really care about those features is hard. It's a bit like VHS over Betamax. Was it better? Not really. But it was there, and video rental shops had to pick a format. And that wasn't Betamax when the dust settled.


My view is that there is commodity software and niche/specialized software. You find commercial solutions for both. But OSS is great for commodity software.

Everything becomes a commodity eventually. There's a lot of niche software that then goes mainstream, gets imitated by others, and becomes important to a wide range of sectors. A lot of that software usually ends up with very decent OSS alternatives. If it's worth having in OSS software form, usually somebody ends up working on it.

A lot of OSS projects are already leaning heavily on contributions from individuals and companies inside the EU. That's a good thing for the EU and something to stimulate and build on.

What the EU should do is keep an eye out for commodity software where it relies on non-EU commercial software. Identify key areas where that is risky, e.g. communication software, IoT, or finance. And then stimulate members to switch to OSS alternatives if they exist, and invest in the creation/support of such alternatives where they are important. OSS software doesn't just create itself; it needs backing from companies, which could use that support in the form of grants.

That could include support for non EU OSS projects. There's nothing wrong with OSS from abroad. As long as this software is properly governed and vital to the EU, the EU should ensure those projects are healthy and future proofed. It should ensure local software companies get the support they need to do the right things here. This should ensure projects that are important don't run out of funding. And the EU can stimulate OSS development into strategic new areas with incentives. And make sure that EU companies that back this are successful internationally. This turns the tables on other countries maybe depending more on EU sourced software. The EU doesn't have to follow; it can lead.


MCP solves the wrong problem. The mechanics of calling tools, commands, APIs, etc. aren't all that hard given some documentation. That's why agentic coding tools work so well.

For security, some sandboxing can address enough concerns that many developers feel comfortable enough using these tools. Also, you have things like version control and CI/CD mechanisms where you can do reviews and manually approve things. Worst case you just don't merge a PR. Or you revert one.

For business usage, the tools are more complicated, stateful, and dangerous, and mistakes can be costly. Employees are given a lot of powerful tools and are expected to know what to do and not do. E.g. a company credit card can be abused, but employees know that would get them fired and possibly jailed. So they moderate what they buy. Likewise they know not to send company secrets by email.

AI tools with the same privileges as employees would be problematic. It's way too easy to trick them into exfiltrating information, doing a lot of damage with expensive resources, etc. This cannot be fixed by a simple permission model. There needs to be something that can figure out what is and isn't appropriate under some defined policy, and that can audit agent behavior. Asking the user for permission every time something needs to happen is not a scalable solution. This needs to be automated. Also, users aren't particularly good at this if it isn't simple. It's way too easy for them to make mistakes answering questions about permissions.

I think that's where the attention will go for a lot of the AI investments. AIs are so useful for coding now that it becomes tempting to see if we can replicate the success of having agents do complex things in different contexts. If the cost savings are significant, it's worth taking some risks even. Just like with coding tools. I run codex with --yolo. In a vm. But still, it could do some damage. But it does some useful stuff for me and the bad stuff is so far theoretical.

I run a small startup, and a shortcut to success here is taking a developer's perspective on business tools. For example, instead of using Google Docs or MS Word, use text-based file formats like Markdown or LaTeX and then pandoc to convert them. I've been updating our website this way. It's a static Hugo website. I can do all sorts of complicated structure and content updates with codex. That limits my input to providing text and direction. If I was still using WordPress, I'd be stuck doing all this manually. Which is a great argument to ditch that in a hurry.
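
As a rough sketch of what that workflow looks like (the file names here are made up, and pandoc needs a LaTeX engine installed for PDF output):

    # share a markdown doc with people who expect office formats
    pandoc proposal.md -o proposal.docx
    pandoc proposal.md -o proposal.pdf

    # preview the hugo site locally (including drafts), then build it
    hugo server -D
    hugo

Codex only ever touches the markdown and templates; the generated site is just a build artifact.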

I don't necessarily like it writing text, though it can be good for a first shot at a new page. But it's great at putting text in the right place, doing consistency checks, fixing broken layout, restructuring pages, etc. I just asked it to add a partner logo and source the appropriate SVG. In the past I would have done that manually: download some SVG, figure out where to put it, and then fiddle with some files to get it working. Not a huge task, but something I no longer have to do manually. Website maintenance has lots of micro tasks like this. I get to focus on the big picture. Having a static site generator and codex fast-forwards me a few years in terms of using AI to do complex website updates. Forget about doing any of this with the mainstream web-based content management systems any time soon.


Many of the build tools Javascript people use are written in Rust now. Some of them can be made to run in browsers, via WASM. React, the defacto UI framework for Javascript has a lot of web assembly components. A lot of the npm ecosystem has quietly brought in web assembly. And a lot of UI stuff gets packaged up as web components these days; some of that uses WASM as well.

If you pulled the plug on WASM, a lot would stop working and it would heavily impact much of the JS frontend world.

What hasn't caught on is modern UI frameworks that are native WASM. We have plenty of old ones that can be made to work via WASM, but it's not the same thing; they are desktop UI toolkits running in a browser. The web is still stuck with CSS and DOM trees. And that's one of the areas where WASM is still a bit weak, because it requires interfacing with the browser APIs via JavaScript. This is a fixable problem. But for now that's relatively slow and not very optimal.

Solutions are coming, but that's not going to happen overnight, and web frontend teams being able to substitute something else for JavaScript is going to require more work. Mobile frontend developers cross-compiling to the web is becoming a thing though. JetBrains' Compose Multiplatform now has native Android/iOS support, with a canvas-rendered web frontend currently in beta.

You can actually drive the DOM from WASM. There are some Rust frameworks, and I've dabbled with using Kotlin's WASM support to talk to browser DOM APIs. It's not that hard. It's just that Rust is maybe not ideal (too low level/hard) for frontend work, and a lot of languages lack frameworks that target low level browser APIs. That's going to take years to fix. But a lot compiles to WASM at this point. And you kind of have access to most of the browser APIs when you do, even if there is a little performance penalty.
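
To give an idea of what that looks like, here's a minimal Rust sketch using the wasm-bindgen and web-sys crates (this assumes the relevant web-sys features, such as Window, Document, HtmlElement, Element and Node, are enabled in Cargo.toml):

    use wasm_bindgen::prelude::*;

    // Runs automatically when the WASM module is loaded in the browser.
    #[wasm_bindgen(start)]
    pub fn run() -> Result<(), JsValue> {
        let window = web_sys::window().expect("no global window");
        let document = window.document().expect("no document");
        let body = document.body().expect("no body");

        // Create and attach a DOM node without hand-writing any JavaScript.
        let p = document.create_element("p")?;
        p.set_text_content(Some("Hello from WASM"));
        body.append_child(&p)?;
        Ok(())
    }

Under the hood those calls still cross the JS boundary, which is the performance penalty I mentioned, but from the developer's point of view you're just talking to the DOM.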


I think you're confusing CLI tools for React with web components.

> React, the defacto UI framework for Javascript has a lot of web assembly components.

I'm pretty sure this is just plain false. Do you have an example?


They might mean build dependencies? Or I'm sure there are ready-built components in wasm, but they are most definitely third-party ones.

The Acquired podcast did a nice episode recently on the history of AI at Google, going back all the way to when they were trying to do "I'm Feeling Lucky", early versions of Translate, etc. All of which laid the groundwork for adding AI features to Google and running them at Google scale. That started early in Google's history, when they still did everything on CPUs.

The transition to using GPU accelerated algorithms at scale started happening pretty early in Google around 2009/2010 when they started doing stuff with voice and images.

This started with Google just buying a few big GPUs for their R&D and then suddenly appearing as a big customer for NVidia who up to then had no clue that they were going to be an AI company. The internal work on TPUs started around 2013. They deployed the first versions around 2015 and have been iterating on those since then. Interestingly, OpenAI was founded around the same time.

OpenAI has a moat as well in terms of brand recognition and diversified hardware supplier deals and funding. Nvidia is no longer the only game in town and Intel and AMD are in scope as well. Google's TPUs give them a short term advantage but hardware capabilities are becoming a commodity long term. OpenAI and Google need to demonstrate value to end users, not cost optimizations. This is about where the many billions on AI subscription spending is going to go. Google might be catching up, but OpenAI is the clear leader in terms of paid subscriptions.

Google has been chasing different products for the last fifteen years, always trying to catch up with the latest and greatest in messaging, social networking, and now AI features. They are doing a lot of copycat products, not a lot of original ones. It's not a safe bet that this will go differently for them this time.


But cost is critical. It's been proven customers are willing to pay around $20/month, no matter how much underlying cost there is to the provider.

Serving GenAI costs Google almost an order of magnitude less than it costs ChatGPT. Long term, this will be a big competitive advantage for them. Look at their very generous free tier compared to others. And the products are not subpar; they do compete on quality. OpenAI had the early mover advantage, but it's clear the crowd willing to pay for these services is not very sticky, and churn is really high when a new model is released. It's one of the more competitive markets.


I don't even know if it amounts to $20. If you already pay for Google One the marginal cost isn't that much. And if you are all in on Google stuff like Fi, or Pixel phones, YouTube Premium, you get a big discount on the recurring costs.
