Hacker News | seviu's comments

And despite that, the place where I work has disabled IPv6, rendering our development machines useless for trivial tasks such as debugging our iOS app on a device (which uses IPv6 under the hood).

Reason given: the security policies say IPv6 is not safe enough.


I am in a project where we have to give estimates in hours and days.

Needless to say we always underestimate. Or overestimate. Best case we use the underestimated task as buffer for the more complex ones.

And it has been years.

Giving estimations based on complexity would at least give a clear picture.

I honestly don’t know what the PO and TL gain from this absurd obscenity.


The last director I had would ask, "Is it a day, a week, a month, or a year?" He understood that's about as granular as it's possible to be.

And he really only used them in comparison to estimates for other tasks, not to set hard deadlines for anything.


Knowing nothing else about him, I like him based on this alone.

I've been in planning sessions where someone would confidently declare something would take half a day, was surprised when I suggested that it would take longer than that, since they were basically saying "this'll be finished mid-afternoon today"... and was still working on it like 3 weeks later.


Besides the usual unknown unknowns, I've also seen this happen with tasks that involve a lot of coordination in the SDLC. Oh, the PR went up at 2pm PST? Coworkers in EST won't review it until tomorrow. Maybe some back and forth = another day until it clears review. Then QA happens. QA is heavily manual and takes a few hours, maybe with some false starts and handholding from engineering. Another day passes. Before you know it, the ticket that took an hour of programming has taken a week to reach production.


As mentioned in a sibling comment reply:

Time estimates, or conversions to days or other units, typically fail because if a developer says 1 day they might mean 8 focused, uninterrupted development hours, while someone else hears 1 calendar day and assumes it can be done by tomorrow, regardless of whether the developer spends 8 or 16 hours on it.


Are we estimating developer cost (investment cost, writing-code time only), development cost (investment costs including QA time), or time to delivery and first use? People want and use estimates for different purposes. You point out great reasons why knowing what the estimates are for is important.


This is essentially t-shirt sizing without all the baggage that comes from time. Your boss is trying to use the relative magnitude, but it's inevitable that people will (at least internally) do math like "seven day-sized tasks equal one week-sized task", or worse, over-rotate on the precision you get from day/week/month, or even worse, immediately map to the calendar. Suggestion: don't use time.


If you pretend not to use time, everyone will do an implicit time mapping in their head anyway. I've never seen it go any other way.


Surprisingly, probably yes.

But we are still much better at estimating complexity.

Time estimates usually tend to be overly optimistic. I don’t know why. Maybe the desire to please the PO, or the fact that we never seem to take into account factors such as having a bad day, interruptions, and context switches.

T-shirt sizes or even story points are way more effective.

The PO can later translate it to time after the team reaches a certain velocity.

I have been developing software for over twenty years, and I still suck at giving time estimates.


Time estimates, or conversions to days or other units, typically fail because if a developer says 1 day they might mean 8 focused, uninterrupted development hours, while someone else hears 1 calendar day and assumes it can be done by tomorrow, regardless of whether the developer spends 8 or 16 hours on it.


Yes, I've seen this too in sprint planning. However, I think the layer of indirection is helpful. If you use actual time, then when something isn't done after 1 week when that was the estimate, the bosses are asking why. If your estimate was simply "8 story points", the bosses can't point to a calendar and complain. They can try to argue that an 8-point task should be done in a week, but then you and the scrum master remind them that points don't map directly to time, just effort.


It's probably not possible to fully prevent people from thinking about time at all, but the more friction you can add, the better.


That's true. Anyplace I've worked where we did planning poker, "points" were always just a proxy for time.


Here's my observation: ballparking an estimate for a whole project, in my experience, tends to be more accurate than estimating each task and adding them together.

I like to think of this as 'pragmatic agile': for sure break it down into tasks in a backlog, but don't get hung up on planning it out to the Nth degree because then that becomes more waterfall and you start to lose agility.


Hours is insane. But ultimately time is money and opportunity cost. Software engineering can’t be the only engineering where you ask the engineers how much something will cost or how much time it will take and the answer is “it’s impossible to know”. Even very inaccurate estimates can be helpful for decision making if they are on the right order of magnitude


There are two things here that get overlooked.

First, people asking for estimates know they aren't going to get everything they want, and they are trying to prioritize which features to put on a roadmap based on the effort-to-business-value ratio. High impact with low effort wins over high impact high effort almost every time.

Second, there's a long tail of things that have to be coordinated in meat space as soon as possible after the software launches, but can take weeks or months to coordinate. Therefore, they need a reasonable date to pick- think ad spend, customer training, internal training, compliance paperwork etc.

"It is impossible to know" is only ever acceptable in pure science, and that is only for the outcome of the hypothesis, not the procedure of conducting the experiment.


> "as soon as possible after the software launches"

This isn't true, just desired, and is one of the main roots of the conflict here. OF COURSE you would like to start selling in advance and then have billing start with customers the instant the "last" PR is merged. That isn't a realistic view of the software world, though, and pretending it is while everyone knows otherwise starts to feel like bad faith. Making software that works, then having time to deploy it, make changes from early feedback, and fix bugs is important. THEN all the other business functions should start the can't-take-back parts of their work that need to coordinate with the rest of the world. Trying to squeeze some extra days from the schedule is a business bet you can make, but it would be nice if the people taking this risk were the ones who had to crunch or stay up all night or answer the page.

Trying to force complicated and creative work into a fake box just so you can make a Gantt chart slightly narrower only works on people a couple of times before they start to resent it. 10x that if management punishes someone when that fantasy Gantt chart isn't accurate, and 100x that if the one punished is the person who said "it's impossible to know" and was then forced into pretending to know, instead of the person doing the forcing.


My take: if they have not done the work to get to at least some degree of a spec, and they won't give you time to review and investigate, they get nothing more than a vague, relative t-shirt size.


I think software is one of those VERY rare things, where inaccurate estimates can actually be inaccurate by "orders of magnitude". After 20 years in the field, I still managed to use 2 months of time on a task that I estimated as 10 days.


A rule that has suited me well is to take an estimate, double it, and increase it by an order of magnitude for inexperienced developers. So a task they say would take two weeks ends up being 4 months. For experienced developers, halve the estimate and increase it by an order of magnitude. So your 10-day estimate would be 5 weeks.
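For fun, that rule could be sketched as code. This is my own toy illustration, and reading "increase by an order of magnitude" as "bump the unit one step up" is my assumption:

```python
# Toy sketch of the estimation heuristic above: double the number (or halve
# it for experienced developers), then bump the unit one step up.
UNIT_BUMP = {"hours": "days", "days": "weeks", "weeks": "months", "months": "years"}

def adjust_estimate(amount, unit, experienced=False):
    """Apply the half-joking heuristic and return (amount, unit)."""
    factor = 0.5 if experienced else 2
    return amount * factor, UNIT_BUMP.get(unit, unit)

print(adjust_estimate(2, "weeks"))                    # 2 weeks -> (4, 'months')
print(adjust_estimate(10, "days", experienced=True))  # 10 days -> (5.0, 'weeks')
```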


The biggest estimation effort I ever took part in was a whole-system rewrite[1] where we had a very detailed functional test plan describing everything the system should do. The other lead and I went down the list, estimated how long everything would take to re-implement, and came up with an estimate of 2 people, 9 months.

We knew that couldn't possibly be right, so we doubled the estimate and tripled the team, and ended up with 6 people for 18 months - which ended up being almost exactly right.

[1]: We were moving from a dead/dying framework and language onto a modern language and well-supported platform; I think we started out with about 1MLOC on the old system and ended up with about 700K on the rewrite.


10 days was already after I used this algorithm. The previous several tasks on that codebase were estimated pretty well. The problem with this is that some tasks can indeed take SEVERAL orders of magnitude more time than you thought.

One of the hardest problems with estimating for me is that I mostly do really new tasks that either no one wants to do because they are arduous, or no one knows how to do yet. Then I go and do them anyway. Sometimes on time, mostly not. But everybody working with me already knows that it may take long, but I will achieve the result. And in rare instances other developers ask me how I managed to find the bug so fast. This time I was doing something I had never done before in my life, and I missed some code dependencies that needed changing when I was reviewing how to do that task.


I’ll send my friend that has a construction company to build your next 3500 sq ft house for $13.6 million :)


Something often overlooked in cost/schedule estimates is the nature of joint probability of actions slipping. Action A slips and causes action B to slip. I think software is tougher to estimate because the number of interfaces is often much higher, and sometimes more hidden, than in hardware.


as opposed to say building a house where framing can totally slip while we run electricity and build a roof floating in mid-air

software is only tougher to estimate if incompetent people (the vast majority of the industry, like 4+ million) are doing the estimating :)


My home construction slipped 6 months on a 2-year build time. It happens in construction very often.

> software is only tougher to estimate if incompetent people (the vast majority of the industry, like 4+ million) are doing the estimating :)

No, it is tough to estimate, and not only for incompetent people. And "incompetent" can be stretched to "doesn't know what he's doing", which is how I operate most of the time. I don't know what really needs to be done until it's done. The main part of my work is research on what actually needs to be done, then "just" implementing it. If I waited to estimate until I knew what needs to be done, I would spend 3/4 of the time estimating and then 1/4 with clear understanding and good schedules (example description: I will be clicking keys for 5 hours).


> My home construction slipped 6 months on a 2-year build time. It happens in construction very often.

Tangent, but I have at least 3 friends that would've (in retrospect) been nothing short of ecstatic if their home construction had "only" slipped 6 months on a 2 year timeline.


That’s a bit of a strawman considering I was deliberate in saying hardware interfaces are limited and not saying they are zero. The number of interfaces in software is often going to be orders of magnitude greater. The network effects and failure modes will often increase geometrically with the number of interfaces. In fact, big construction design firms have tools to easily identify and mitigate the “clashes” you bring up and those tools tend to work well because there is a finite number and the designs are well-documented (as opposed to software where changes are relatively cheap and easy so they often occur without documentation)

Saying incompetence is the reason is a trivial rebuttal that ignores the central claim about complexity. It’s like saying “the reason why we don’t have a theory of everything is because we don’t have competent physicists”


That's a factor of four or five or so, so still less than an order of magnitude.


The next natural progression of this line of discussion between "the business" and engineering is for them to agree on a time range as an estimate. Engineering doesn't want to say it'll be done in 6 weeks, but they feel okay saying it will take between 4 and 20 weeks so this estimate is accepted.

You can guess what happens next, which is that around week 8 the business is getting pretty angry that their 4-week project is taking twice as much time as they thought, while the engineering team has encountered some really nasty surprises and is worried they'll have to push to 24 weeks.


You're forgetting the part where the business expects the engineers to pull all-nighters so they can meet the deadline.


It is still better to give a range, though, because 1. it explicitly states the degree of unknown, and 2. no boss is going to accept 4-20 weeks, which means you start talking about how you can estimate with better accuracy and the work required to do so, which is a major goal of planning & estimation.


> Software engineering can’t be the only engineering where you ask the engineers how much something will cost or how much time it will take and the answer is “it’s impossible to know”.

Because it's not engineering at all. But even if it was, plenty of engineering projects are impossible to estimate - the ones that are doing something novel - and disliking that fact doesn't make it go away.

> Even very inaccurate estimates can be helpful for decision making if they are on the right order of magnitude

If what the business wants is an order-of-magnitude, they should ask for that; often (not always!) that's a lot easier.


> I honestly don’t know what the PO and TL gain from this absurd obscenity

There are marketing campaigns that need to be set up, users informed, manuals written. Sales people want to sell the new feature. People thinking about roadmaps need to know how many new features they can fit in a quarter.

Development isn't the only thing that exists.


FIFA is another one that comes to mind, or whatever they call it these days.

Also from EA


Apple's priorities are and have always been:

Apple first, users second, developers third.

Not only that, they still live with PTSD from the times when they almost disappeared, and will do everything in their power to keep as much revenue as they can squeeze.

Because of this, the Japanese App Store will be as crappy as the European one. And the apps as subpar as the ones from the App Store.

Because let’s be honest, app quality has gone downhill.


You would be surprised…

Regarding CZ and Binance and the Trumps, they have kind of a symbiotic relationship.

After Binance and CZ pleaded guilty to money laundering in November 2023, for which they paid over $4 billion in fines, WLFI (which is a clone of AAVE belonging to the Trump family) launched a stablecoin called USD1. Magically, in March 2025, $2 billion flowed into Binance through MGX, a state-backed Abu Dhabi fund, later revealed to have been paid in USD1 (two months before it was unveiled, and at the time without any effective audits), effectively propping up WLFI’s coin (backed, unbacked, nobody knows; I assume backed). CZ applied for a presidential pardon immediately after, in May 2025.

WLFI now gets to earn about $60–80 million per year in yield from the USD1…

…As long as Binance doesn’t redeem those $2 billion.

I still don’t know what MGX got out of this deal, but I am pretty sure they didn’t walk away empty-handed.


For anyone unaware, financial fraud investigation YouTuber Coffeezilla just released a good video laying this all out a few hours ago: https://www.youtube.com/watch?v=JMEJTORMVN4


I can't log in to my AWS account, in Germany, on top of that it is not possible to order anything or change payment options from amazon.de.

No landing page explaining services are down, just scary error pages. I thought my account was compromised. Thanks HN for, as always, being the first to clarify what's happening.

Scary to see that in order to order from Amazon Germany, us-east-1 must be up. Everything else works flawlessly, but payments are a no-go.


I wanted to log into my Audible account on my phone after a long time and couldn't, and started getting annoyed. Maybe my password is not saved correctly, maybe my account was banned... Then checking desktop, still errors; checking my Amazon.de, no profile info... That's when I started suspecting that it's not me, it's you, Amazon! Anyway, I guess I'll listen to my book in a couple of hours, hopefully.

Btw, most parts of amazon.de are working fine, but I can't load profiles and can't log in.


You might be interested in Libation [0]. I use it to de-DRM my Audible library and generate a cue sheet with chapters in for offline listening.

[0] https://getlibation.com/


We use IAM Identity Center (née SSO) which is hosted in the eu-central-1 region, and I can log in just fine. Its admin pages are down, though. Ditto for IAM.

Other things seem to be working fine.


I just ordered stuff from Amazon.de. And I highly doubt any Amazon site can go down because of one region. Just like Netflix is rarely affected.


I can’t even log in; I get the internal-error treatment. This is on Amazon.de.


I'm on Amazon.de and I literally ordered stuff seconds before posting the comment. They took the money and everything. The order is in my order history list.


I really want this but I can’t justify such a machine just for watching YouTube.

I cannot even give it to my kids since I don’t have multiple accounts with it.

Kind of sad that the most interesting device Apple has will never show its true potential due to their greed.


For watching YouTube you just need the cheapest iPad not a Pro.


A generalization, I would say.

I really dig that OLED screen.


Those will trickle down eventually I suppose.


Or an even cheaper android tablet.


I bought an $80 8th Gen iPad off eBay and it runs 26 great and works perfectly for watching brain-rot and doing my Duolingos.


> I cannot even give it to my kids since I don’t have multiple accounts with it.

You can, but it's not advertised this way:

https://support.apple.com/guide/deployment/shared-ipad-overv...

There are ways to supervise besides get into a full MDM:

https://support.apple.com/guide/deployment/about-device-supe...


People keep saying their iPad is a YT consumption device, but without ad blocking, how do you stay sane? I'm assuming if you're consuming that much YT content you've moved to a premium account or something? I don't use my tablet primarily for YT content, so it's rather jolting when I click a link somewhere and see the hell that is unblocked YT


> I'm assuming if you're consuming that much YT content you've moved to a premium account or something?

Yes


Even with YT Premium, sponsor announcements are still an annoyance. Firefox with SponsorBlock helps with that, not sure if that's usable on iPad.


It is not.

However, the YouTube Labs “Jump Ahead” feature is basically the same thing. When a sponsor segment starts, double tap the right side of your screen and a “jump ahead” button will usually appear (it’s algorithmic based on user viewing patterns). It skips the ad just about every time.

You’ll have to have Premium and enable this manually. And it might go away, it’s experimental.

But also, iSponsorBlockTV is a great project. But it only works on Apple TV and other streaming TV YouTube apps.



Yes, YouTube premium, use it a lot to practice my guitar playing by playing along YouTube music videos with chords displayed.


I once worked for the yellow pages in Switzerland. Our paid clients had a dashboard which reported how many users visited their business entry.

We at engineering decided to filter out bots. Figures fell dramatically by more than 50%.

In less than a day, business mandated us to remove the filter.

Bots are real people after all
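For anyone curious what such a filter might look like, here's a minimal sketch of the crudest form, user-agent matching. The field names are hypothetical, and real bot detection also uses IP reputation, rate limits, and behavioral signals:

```python
# Minimal user-agent-based bot filter: count only visits whose user agent
# doesn't match a known bot/tooling hint.
BOT_UA_HINTS = ("bot", "crawler", "spider", "curl", "python-requests")

def is_probable_bot(hit):
    ua = hit.get("user_agent", "").lower()
    return any(hint in ua for hint in BOT_UA_HINTS)

def human_visits(hits):
    return sum(1 for h in hits if not is_probable_bot(h))

hits = [
    {"user_agent": "Mozilla/5.0 (iPhone; CPU iPhone OS 17_0 like Mac OS X)"},
    {"user_agent": "Googlebot/2.1 (+http://www.google.com/bot.html)"},
    {"user_agent": "python-requests/2.31.0"},
]
print(human_visits(hits))  # 1 of the 3 hits survives the filter
```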


This is essentially fraud. Your company was made aware that it was selling a product with wildly different characteristics than advertised, and chose to cover it up.

There are defensible business reasons for this, in having a contract already in place at the old CPM, so being unable to double the CPM and halve the views mid-contract... but still pretty much fraud.


Sounds like even more fraud. Here you can apply the new filter to historic data or create a new metric version and keep the old version to avoid breaking continuity in what's measured.


It sounds like the customers demanded the fraud remain, for their own internal Potemkin purposes.


Possible, but not substantiated by the GP comment, which makes yours purely speculative.


> In less that a day business mandated us to remove the filter.

How is that not substantive?


My impression is that the question is whether “business” in this instance refers to the Yellow Pages company itself, or the companies that make up their customer base.


Yes, and my read is that "business" was internal to Yellowpages.


Way back when I worked at an ad-based company, click-fraud handling was under my oversight. We caught about 20 percent of clicks as fraudulent and filtered them out before billing the ad-placing vendors. It was a constant battle with the sales team to relax the rules, as any clicks filtered out cut into the sales revenue. Sometimes we got the customers on our side, as they ran their own analysis on the billed click report and came back demanding a refund after finding a bunch of fraudulent clicks.


Yeah, the incentives there are obviously misaligned. I wonder if there is a potential way of making advertising click-through tracking follow the "I cut the cake, you choose the slice" model.

Some countries have property taxes where you declare the value and the government retains the right to purchase the property for that value for example.

My first thought was to make the advertising cost driven by revenue on the site. But that just reverses the incentive.


People will just pull ads if the ROAS isn't there. Performance marketing teams aren't fools.

Altering data would mess with everything. Why is unverified traffic increasing? What's wrong with new marketing efforts? Marketing just requires fixed definitions, e.g. if you have 97% bots but it remains constant, that's okay. I know I am spending $x to get $y conversions. I can plan over time, and increase or decrease spend accordingly. I won't be willing to pay as much as with 0% bots (I will pay far, far less), but I can set my strategy on this.

It's not that it's x% bots that is the problem. Growth team doesn't adjust strategy on percentage-bot. Growth team adjusts strategy based on return on ad spend. If 0% bots but no return, way worse than 5x ROAS with 99% bots.


I cut / you choose works in some situations.

In others you'd want, say, auditing or independent third-party verification.

In this case, perhaps an audit involving the deliberate injection of a mix of legitimate and bot traffic to see how much of the bot traffic was accurately detected by the ad platform. Rates on total traffic could be adjusted accordingly.

This of course leads to more complications, including detection of trial interactions; see e.g. the 2015 VW diesel emissions scandal: <https://en.wikipedia.org/wiki/Volkswagen_emissions_scandal>, or current AIs which can successfully identify when they're being tested.


On further reflection: I'd thought of raising the question of which situations cut/choose does work. Generally it seems to be where an allocation-division decision is being made, and the allocation is largely simultaneous with the division, with both parties having equal information as to value. Or so it seems to me, though I think the question's worth thinking about more thoroughly.

That's a subset of multi-party decisionmaking situations, though it's a useful one to keep in mind.


I vaguely remember someone winning a Nobel Prize in economics for coming up with ways to apply cut/choose in financial transactions, but I couldn't find it in a quick Google. It may have been nearly 20 years ago though.


It's not that misaligned.

Basic ad ops has ad buyers buy ads from different vendors, track which converts (attribution, which has flaws, but generally is a decent signal), and allocate spend via return on ad spend. So it hurts the vendor at least as much as the buyer by inflating the cost per action / damaging ROAS.


I've seen people buy CPC campaigns and only place ads that don't convert, so they get the benefit of the branding instead. I guess more modern auction algorithms factor this in.


Makes me think of the recent thing where YouTube stopped counting views with ad blockers. The Linus Tech Tips people worked out they were still seeing the same ad impressions and revenue despite views dropping by half. Unfortunately, sponsorship deals are often sold on viewership and I'm not sure even LTT has the clout to convince them nothing is functionally different.


YouTube didn’t stop counting views with ad blockers. It was one blocker extension that added the view counter API to the block list.


That's splitting hairs, but in a way that's important to the conversation.

If YouTube served gigabytes of the video file for 40 minutes and a human watched it for that time, but they didn't send a request to `youtube.com/api/stats/atr/` and periodically to `/stats/qoe?`, did the video actually get viewed?

I think a reasonable person would say that the person viewed that video. Only a programmer would suggest that wasn't a view because they didn't properly engage the statistics and tracking endpoints.

But so much of the industry is built on deeply invasive tracking and third-party ad networks that this is a normal thing.


Youtube might not really care about accurate view counting in that way. In fact, Youtube likely does not care at all about traffic that won't view an ad, and they have demonstrably been hostile to that traffic for a while. Someday youtube hopes that nobody with an adblocker ever views any video, and has no intention to track their view attempts.

If that causes a problem with Youtubers making a paycheck from external sponsors, Youtube really really does not care, because sponsorships are money that youtube doesn't get!

Youtube is downright hostile to creators who don't make the "right" content, which means new videos at a perfect schedule that all have the exact same "taste" such that viewers can mindlessly consume them in between the actual ad breaks youtube cares about. The more and faster people accept this, the more likely we get improvements.

It took over a year for youtube to agree to let people swear in videos without punishment, including videos that are set as "These are not intended for children", after they unilaterally introduced this swear downranking several years ago.

Youtube cares about Mr Beast and that's about it. If you do not run your channel like Mr Beast, youtube hopes you die painfully and think about your mistake. Youtube actively drives creators making actual art, science, knowledge, and other types of videos to burnout, because Youtube considers creators to be a renewable resource worth exploiting, because there will always be 15 year olds who want to become influencers.

It is not "deeply invasive tracking" or "programmer thinking", it's entirely business. Google's business is ad views, not video views. They want to measure what they care about


I think most websites would break if a 3rd-party script started blocking things. There's also the fact that view tracking is fairly complex, since they need to filter out bots / ad fraud.

And just the fact that if users have a privacy extension blocking the view tracker, is it not just respecting their wishes to not be tracked?


If a bot (or whatever the latest download tool is) was served gigabytes of video, does that mean a human eventually viewed it? No.


So now we have to pass the Turing test before the video is served? :)


It takes a long time to get there, but you'll eventually arrive.


>> In less that a day business mandated us to remove the filter.

Did something similar at a small company I was working at. The VP of marketing sat me down and told me to do the same thing.

After the meeting, I was told by another dev that the VP was tying a monetary value to specific clicks and if I was filtering out the bots, it would make his data look bad and reduce the amount of potential revenue for the company he was touting.

I think you can see how the bots were actually helping him promote how awesome a job he was doing with our web properties to the owners.


His idea of "reality" and how things work, even if the idea is "they work badly", is more important than any other argument or reality. That's true at the personal level and even truer at any organizational level.


Well, yes, but it seems business has decided truth is worth less than money. Or, now that I think of it, everything is worth less than money. Even: money is the only thing worth anything. They don't care about people, pride, products, or truth.


I think the way it goes is more "What is true? It's that if I bury this truth, I will have more money."

Interestingly, I think money is increasingly its own falsehood now. A lot of rich people are finding that they pay a lot to get what's basically a scam, like Sam Altman's swimming pool that leaked and ruined the rest of the house [1]. There's a reason that Billionaire's Bunker [2] is entering the cultural zeitgeist despite fairly terrible plotting, dialogue, and acting.

[1] https://fortune.com/2024/07/17/sam-altman-infinity-pool-mans...

[2] https://www.netflix.com/title/81606699


"Half my advertising budget is wasted, I just don't know which half..." In many corporate cases vague metrics meet the KPI better than accurate ones.

I worked for one of the Mag 7 doing customer-support bot tech. Clients' internal metrics around containment consistently showed better results than ours, even though you'd normally expect them to be putting pressure on their vendor, because it was a KPI for their internal team to look good to their bosses.


Could you have... maybe eased it in over time? So for example, every 4 days, filter out an additional 1% of the traffic detected as fake.


That is worse. Now you have a bunch of companies wondering why their engagement is falling over a whole year, some dude's getting fired for not doing his job, etc. etc.

The correct thing to do, probably, is to just provide the new data to the customer without changing what they were already looking at. So a new widget appears on their dashboard, "52% bot traffic", they click on that, and they see their familiar line chart of "impressions over time" broken down as a stacked line chart, bottom is "human impressions over time," top is "bot impressions over time," and the percentage that they were looking at is reported either above or beneath the graph for the same time intervals. Thus calling attention to the bottom graph, "human impressions over time," and they can ask your sales people "how do I get THAT number on my normal dashboard?" and they can hem and haw about how you have to upgrade to the Extended Analytics Experience tier for us to display that info and other business nonsense...

Point is, you stimulate curiosity with loud interference rather than quietly interfering with the status quo.
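The stacked breakdown could be sketched like this (the field shapes are my own assumptions): the familiar total stays exactly as it was, and the new series just split it additively.

```python
# Split each time bucket's total impressions into human vs. bot series, so the
# total the customer already sees is unchanged and the stacked chart is additive.
def split_impressions(buckets):
    """buckets: list of (total_impressions, bot_impressions) per time interval."""
    human = [total - bots for total, bots in buckets]
    bots = [b for _, b in buckets]
    total = sum(t for t, _ in buckets)
    bot_pct = sum(bots) / total if total else 0.0
    return human, bots, bot_pct

human, bots, pct = split_impressions([(100, 52), (80, 40)])
print(human, bots, round(pct, 2))  # [48, 40] [52, 40] 0.51
```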


Fair enough, although in that circumstance I think you'd have to mark the non-human traffic as "unverified traffic" to soften the blow


With numbers like that, if it scares customers, a good strategy is to implement the filtering very gradually, over 6 months or a year. The fall-off is way less scary, and it can be described as improving bot filtering.

It’s not as honest, but more palatable unfortunately


Couldn't you have kept it in and labeled it as bots? i.e. using stacked bar charts and such.


When we propose alternatives the answer is that they want to protect customers.

But they don’t protect their cash cow from massive daily influxes of scam apps. To them, a million scam apps generating 50k per month and drowning out my two or three apps, for which I spent months of work, are better than a few thousand quality apps from which everybody would profit.

Let’s be real, it takes a special kind of mad developer to try to build a business that relies on the App Store. First, if you are unlucky, you get rejected on day one or two. And if you aren’t, and are wildly popular, you risk Apple copying your business model.

Because deep down some people at Apple despise the App Store developers and think they can do much better. This has been at the core of Apple culture for ages.

Anyway we legit indie developers who care about our products get drowned in irrelevance. Who cares.


I had to free 100GB so that it had enough disk space.

I am amazed this game is even playable on the Steam Deck. I was trying to find an excuse to play it after Cyberpunk. I guess this one is it…

