People who disagree aren't trying to make things complex (m50d.github.io)
237 points by lmm on Dec 11, 2018 | 154 comments


A big part of this is that programmers have different programming worldviews. Many people seem to be unaware that worldviews are not perfectly rational, easily modifiable constructions, but are instead foundational structures that thinking builds upon automatically. Programmers are about as unlikely to easily change their programming worldviews as they are to change their political beliefs.

Which is not to say that programming worldviews can't change; it's just that by their nature it takes a strong effort to change them, and the changes are usually gradual over time. Or something completely non-rational changes them: for example, if a programmer needs to find a new job and the new team has a different approach to programming, this social circumstance may necessitate that he adopt a new viewpoint.

I think that people are less rational than they think they are. Which is not to say that they can't be rational; it's just that other preformed factors besides logic have a strong influence over our perspectives on a particular topic.

So we can certainly discuss things productively, but we should be aware that people can't change their religion easily. And no matter how smart or rational, everyone has strong beliefs, because that's just how our brains operate.

One consequence is that when trying to have rational discussions it is necessary to try to dig in to identify all of people's assumptions, since there is a good chance that the other person's programming beliefs play a big role in their thinking and you may not even be aware of it.


These are great points and I wanted to add a couple more I've found over the years.

Software development often has an almost infinite number of ways to accomplish the same task.

Maintainability is as much, if not more, about correctly predicting the future uses and requirements of the software as it is about the initial coding practices and design patterns.

Working on older systems that are being used in ways that they were not originally intended is often uncomfortable for a developer. All but the most junior developers tend to have some painful memories of battling legacy software.

These add up to a situation where determining the correct path is incredibly difficult and ambiguous, both because there are so many ways to accomplish the task and because there are so many unknown future contingencies that ultimately determine how maintainable the software will be. Combine that with the painful memories most developers have from nightmare projects and you have yourself a recipe for emotional disagreements all around.

The grand irony is that, very often, once a developer becomes fully aware of this concept, they lose that internal fire and vigilance required to constantly seek out better ways to solve problems and their skills wane.


> The grand irony is that, very often, once a developer becomes fully aware of this concept, they lose that internal fire and vigilance required to constantly seek out better ways to solve problems and their skills wane.

Any hints to guide one back to the path of passionate problem solving?


I do only partially qualify since I only lost-and-regained my passion for (software) problem solving after an early burnout, not because of an expansion of perspective.

What fuels my passion: I try to stick to languages and concepts that are purist. I loved Smalltalk, which is pure OO (so much more than Java). I love Haskell, which encapsulates a particular style of functional programming quite well. I love Lisp, since it represents and embraces metaprogramming quite well. The more a language deviates from these extremes, the less 'thrill' I feel using it. I always try to solve problems with the purest form a paradigm has to offer.

In the real world, I am able and perfectly happy to choose whatever tool fits the need best and adapt to the team. But I will certainly voice my opinions ("This is a planning task - let's not reinvent the wheel - and waste time and fix unnecessary bugs - and just send the data to Prolog and translate the solution back into our program"), though, and what I do in my free time is not bound by industrial needs.

Every paradigm offers a different perspective on programming and is worth exploring. Thankfully I am not genius enough to run out of this particular motivation in my lifetime.


Accept that pragmatism, and not trying to solve tomorrow's problems today, are great assets for any developer / technical lead.

My maxim is "make it as complex as it needs to be, but not one bit more".

There is great satisfaction to be had from productively solving problems rather than endlessly "architecting" in futile search of the "perfect design".


Work on bigger problems that are a lot harder to solve. You can't mitigate reality, but what you can do is offset its shittiness by taking on a project/task that yields results big and great enough to justify dealing with shitty systems/heuristics to begin with.

Just my 2 cents!


I concur. In other words, try to be on the "edge" where you're not looking at the same old patterns you've seen before, but are always attacking something relatively unknown. In my experience, senior people who always stay within their domain and level of expertise end up bitter and jaded.


You don't need vigilance to keep learning. You need to learn. So, read books and technical articles.

Also, take a break and do something that is not programming as a hobby. Demotivation is more a function of routine and lack of sensory input than of practical realities.

Lastly, embrace the concept.


This is a great comment, and very succinctly put.

Somewhat incidentally, but I believe this is a large part of why, to me, software development is not an engineering discipline. At the end of the day, engineers (I think; not actually being one myself) have to answer to physics. The 'possible model space' is relatively constrained, and everyone is working under, roughly, the same set of assumptions, rules, and standards. This kind of constraint allows for higher degrees of rigor, because there often is a 'right way' to do things, or an empirical set of tests to verify the quality of an idea.

Software is, on the other hand, only slightly more constrained than mathematics in its most basic form. We can introduce artificial constraint by monopolizing a domain with certain ideas that have proven robust and reliable (in the same way that we've decided that, in general, you turn a screw clockwise to tighten it), which in turn allows for the introduction of some engineering-style rigor and standards, but there's nothing really stopping you from stepping back and saying, "well I want to do it this way instead", in the way that an Engineer is prevented, by physics, from saying, "well, I think we should just alter the density of air so that we don't need to worry about lift for these airplanes we're building; they'll just be buoyant!". By and large, you can do that kind of thing in code; you're building the world within which you're going to solve the problem, and you can arrange that world as you see fit (by and large) to make certain aspects of the problem easier (often at the cost of making others harder), in the same way a mathematician does.

And if you've got one kind of world you really like, because it works well with how problems decompose in your head, it can, to the parent's point, be really difficult to understand why another representation might be just as useful to someone else's particular brain arrangement.

Not trying to make a value judgement here, though I do envy and admire engineering rigor. Just a (likely naive) take on a fundamental difference.


When you're fighting for speed on your code, it ends up feeling like what you're describing of that engineering world. Your techniques have to adapt to what is actually happening in the lower layers.

When you're fighting for ease of use, you can't make your users forget their preconceptions, you have to adapt to these externalities.

I think sometimes engineering and software development have different degrees of freedom, but the difference isn't as clear cut as you make it.


I think programming has more trade-offs to choose from, and thus more freedoms. You can make a program slow, fast, small, large, limited, comprehensive, quickly written for a specific case or carefully designed for the generic case, or anything else at the expense of something else.

Building a bridge doesn't have much of that: the bridge must withstand loads from the traffic, loads from the elements, and material wear across decades, all without falling down. There are aesthetic choices to make, but generally you can't avoid physics and financial cost when building one.

In programming you could choose from a number of alternate implementations that are as different from one another as they would be unrealistic if you were building a bridge instead of software: anything from tying inflatable boats together to form a walkable chain acting as a bridge, to draining all the water out and building a 6+6 lane highway across the bottom of the bay instead.

Performance is a likely domain for coming to terms with the limits of the physical qualities of computers but even then there are lots of different flavours of performance to consider, and innumerable ways to trade performance in and something else out.


The reason a bridge is so constrained is because we have scaled the difficulty of the problem upwards until it is near the limits of our budget and ability.

Many software projects are profitable without being anywhere near physical limits. Other software projects are not: if you are working for an HFT firm, you're counting microseconds and measuring distances by how long it takes light to travel them. If you're storing 10^10 bits of data you don't even need to think about it, at 10^15 bits (roughly 125 terabytes) you can go to the store and buy a bunch of hard drives, and at 10^20 bits (roughly 12.5 exabytes) you need a team of experts to make it even possible, never mind cost-effective.

If civil engineering looked like software engineering, a big chunk of our civil engineers would be building popsicle-stick bridges to carry occasional featherweight loads across 1-inch gaps, but there would still be some real bridges out there.


For a more realistic comparison, you could look at the relationship between structural engineering and architecture. A lot of engineering goes into structures that are built a certain way just for looks, even though it's more expensive and often less functional.

This even affects bridges. The new eastern span of the Bay Bridge didn't have to have a tower at all. They chose the more expensive option because it looks better.


Agreed, there's definitely a continuum as you go lower down the stack; in the same way that the first programmers were often electrical engineers (with due deference to Ada Lovelace).


Are you thinking of any early electrical engineer programmers in particular? The impression I got was that early programmers for scientific applications were largely mathematicians and scientists (e.g. Hopper, Turing, ENIAC programmers, von Neumann), while business programming was more office / business people.


Fair; I'm thinking of people like (from disparate periods) Ivan Sutherland, Doug Engelbart, Claude Shannon, and so on; folks who came out of EE educations but became computer engineers or computer scientists in practice (and sort of by post-hoc definition).


> Somewhat incidentally, but I believe this is a large part of why, to me, software development is not an engineering discipline. At the end of the day, engineers (I think; not actually being one myself) have to answer to physics.

As a software engineer who calls what I do "engineering", I have to answer to physics every day. The difference is that the relative cost of everything is skewed so far out of proportion to what other engineers work with that it's hard to account for the differences, and software engineers work with so many orders of magnitude. We are still working with physical stuff, basically refined sand and metal, which consumes electricity and other resources, just like other engineers.

Imagine that you were a civil engineer tasked with building a bridge. Except instead of using concrete and steel, I handed you some adamantium at the low, low cost of $0.01 per cubic meter, and a bunch of robots to machine it for you. Designing bridges is easy now, and you can basically make it look like whatever you want, just because adamantium is so amazing.

But just because we've made bridge-building trivially easy doesn't mean that civil engineering is obsolete. Adamantium is nice, and all, but now I'm asking you to build a Dyson shell, and you have to tell me that it's not physically possible, even with adamantium.

I would love to just handwave the design of the systems I work on, I would love to just arrange pieces however I felt like. Instead I have all these constraints like the reliability of components (hard drives fail), physical laws (speed of light constrains network latency), the memory hierarchy (the software manifestation of constraints on circuits), and all sorts of other problems that I can't just pretend don't exist. I have to choose algorithms that will complete with the computational resources that I have available. Maybe an exact algorithm is not physically realizable, maybe I have to choose some kind of approximation.

This is not true in mathematics. In mathematics, algorithms are not necessary (at least, for most mathematicians). In mathematics, I can prove that some object exists without concerning myself with whether some physical system is capable of constructing that object.

At the end of the day, I am writing software, and that software is one part of a real, physical system which obeys physical laws. This makes me an engineer. Maybe I play around with 20 orders of magnitude in my designs and the civil engineer plays around in 8, but so what?


> The difference is that the relative cost of everything is skewed so far out of proportion to what other engineers work with that it's hard to account for the differences

Also, the industry has been pretty cavalier about measuring those costs, especially when it comes to programmer time.


We're absolutely working with physical stuff still. However, if there's a continuum between mathematics and engineering, I think software is much closer to the former than the latter.

I think 'working with real physical systems which obey physical laws' is necessary, but not sufficient, to qualify a discipline as engineering. Architects are not typically engineers by title, and yet they design real things and have to worry about some aspects of pragmatic engineering when they work. Neither are applied physicists, even though their work defines many of the tools that engineers use. I'm not sure that the task of creation scales isometrically. I think there's a certain allometric aspect that means the kind of thinking required at 8 orders of magnitude has a radically different shape from that at 20 orders of magnitude; there are still some hard yeses and hard nos (you can't build a Dyson shell out of adamantium), but the amount of grey area is vastly, non-linearly larger, requires a lot more ad-hoc construction to work within, and thus strips away the shared language of rigor that engineering has the benefit of operating under.

I hope I haven't made it sound like I think it's completely black and white, and also that I somehow don't think software is hard. I just think it's a different kind of animal, because of the scale that it operates at, and the number of abstractions it depends on, even if it can't embrace them fully and transcend into pure mathematics.


> At the end of the day, I am writing software, and that software is one part of a real, physical system which obeys physical laws. This makes me an engineer.

The person that made my coffee this morning is also an engineer by this definition.

Apart from that I agree, we have real constraints to deal with and we can't hand wave away the environment the code runs in.


> The person that made my coffee this morning is also an engineer by this definition.

The verb choice is important here: someone who makes coffee doesn't "write" coffee.

That said, any definition outside mathematics or logic is imperfect.


Hey now, CoffeeScript is real engineering ok?


> Somewhat incidentally, but I believe this is a large part of why, to me, software development is not an engineering discipline. At the end of the day, engineers (I think; not actually being one myself) have to answer to physics

So do computer scientists! Random access memory isn't constant time, but logarithmic for physical reasons. Cache effects can dominate big-O for a surprisingly large range of input sizes.
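
A quick way to see the cache point for yourself; a rough sketch in Python/NumPy (my own illustration, not from the thread; exact numbers will vary by machine). Both loops do the same number of reads and the same arithmetic; only the memory-access order differs:

    import time
    import numpy as np

    n = 10**7                        # ~80 MB of int64, far larger than any CPU cache
    data = np.arange(n, dtype=np.int64)
    seq = np.arange(n)               # sequential access pattern
    rnd = np.random.permutation(n)   # random access pattern, same total work

    for name, idx in [("sequential", seq), ("random", rnd)]:
        t0 = time.perf_counter()
        total = data[idx].sum()      # gather n elements, then sum them
        print(name, time.perf_counter() - t0, total)

The random gather is typically several times slower, purely because it defeats the cache and the prefetcher.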


I think engineering disciplines broadly have comparable flexibility to software. Looking to electrical engineering, circuit design is mostly arbitrary once you're within certain physical bounds (such as controlling for noise in a high speed circuit). Regardless of discipline, there seems to be infinity^12 ways to skin their particular cats. Older disciplines just have more "turn clockwise" assumptions built into the mountain that makes their foundation.


I think the real difference compared to other engineering is the malleability of our product and the expectation of never-ending modifications. Software is almost never "done". So we have to be concerned about maintainability in a much bigger way than other disciplines.


> "...just alter the density of air so that we don't need to worry about lift for these airplanes we're building; they'll just be buoyant!". By and large, you can do that kind of thing in code...

A wonderful metaphor. Thanks.


Great comment. Sometimes I find conversations/debates between programmers get lost in irreconcilable levels of abstraction that are driven by status. E.g., the new data scientist looking to make a mark is desperately calling for "streamlined dataflows throughout the system" while an experienced builder is wondering wtf this means in terms of actual code written, and can't get the other person to spit out anything concrete -- because they have nothing concrete to spit out, but they de facto insist that what they're saying is sensical and valuable.


> Programmers are about as unlikely to easily change their programming worldviews as they are to change their political beliefs.

Programming worldviews are political belief systems. They're just as subject to bias and groupthink. They can be just as free from empirical grounding. It's just that their context is somewhat abstracted away from the real world.


I feel like most of the strife caused at work is different value judgements on things. So I've been trying to tease out truths vs value judgements.

1 + 1 will always equal 2, no one debates that. AWS offers both Postgres and MySQL, no one debates that. But which one is better, that will be informed by possibly irrational value judgements ("Postgres bit me once, never again!")

I think people are irrational in terms of their inability to admit where their biases are ("Postgres bad") vs things that are largely factual ("we can choose between these vendors").


> 1 + 1 will always equal 2, no one debates that.

Challenge accepted.

* In boolean algebra, 1 + 1 = 1

* Over the field Z2 (the integers modulo 2), 1 + 1 = 0

* In binary, 1 + 1 = 10

* In a language that allows overloading +, 1 + 1 is usually 2, but potentially almost anything else (see the sketch below)

In conclusion, context matters
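
To make the overloading case concrete, here's a toy sketch in Python (the Sat1 class is entirely made up) where + saturates at 1, as in boolean algebra:

    class Sat1:
        """Toy numeric type whose + saturates at 1 (boolean-algebra style)."""
        def __init__(self, value):
            self.value = value

        def __add__(self, other):
            return Sat1(min(1, self.value + other.value))

        def __repr__(self):
            return f"Sat1({self.value})"

    print(Sat1(1) + Sat1(1))  # Sat1(1), not Sat1(2)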


To be fair, in Z2 the equivalence class of 1 + 1 is still equivalent to the equivalence class of 2.


And of course 1 + 1 + Solar Flare = N


Wot no vectors?


|1> + |1> = 2 * |1> =/= |2>


I've seen this very often. People are absolutely consumed by avoiding their previous mistakes. Sometimes those are even searched-for almost-made-up mistakes rather than real ones, like in postmortems for successful projects.

And then BIG changes are implemented in the process. Usually in the worst possible place: in the planning of the process in the first place. If a project succeeded, and finished, your adjustments towards future projects, of course, should be tiny and minimal. In practice people make sweeping changes ("level 3 testing needs to be planned in before we even make the software design" boomed the senior architect).

And yet one lesson you quickly learn in machine learning is that making adjustments that are too small ... still does pretty well. You can be off by a factor of 10 and, while sub-optimal, it works (and often, the further you get, the smaller the adjustments should be).

By contrast, if your adjustments are 1.1x what they should be, you are stuck. You will never get where you're going. Never. That's a complete failure, and you should expect to do worse and worse over time with such a strategy. So a smart person should almost always err on the side of not adjusting a behavior. Certainly from one project to the next, making more than a 2% adjustment is absolute lunacy. This never works in learning algorithms. Never (0.1% is the largest generally used value). That would mean that for every software engineer on your team there should be ONE day of difference EVERY 2 MONTHS from one project to the next. That's 2%. Even that is an absurdly large change, far greater than wise.
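
A minimal sketch of that dynamic (my own toy example, gradient descent on f(x) = x², not anything from a real project): stepping at a fraction of the ideal size still converges, just slowly, while stepping even slightly past the stability threshold diverges forever.

    # Gradient descent on f(x) = x^2, whose gradient is 2x.
    # Each update is x <- x * (1 - 2*lr), which converges iff 0 < lr < 1.
    def descend(lr, x=1.0, steps=50):
        for _ in range(steps):
            x -= lr * 2 * x
        return x

    print(descend(0.1))  # too small a step: slow but steady convergence
    print(descend(0.9))  # near the limit: oscillates, still converges
    print(descend(1.1))  # just past the limit: |x| grows without bound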

But I've never even once seen a software team that doesn't start with making 20% (one day a week different) adjustments between projects. And I've seen people make much bigger changes than that, generally not with better results. And the thing they're proud of is that they avoided a previous mistake ... a mistake they solved ... often much quicker than the delays caused by the adjustments ...


You have good points, but ML needs thousands of generations; at most we'll have a few dozen multi-person projects under our belts. And 2% doesn't even beat inflation.


Opinions from those who have used only one aren't useful: "Best [or worst] I've ever used." Get bitten or saved by each of them, differently, on multiple projects, and a good choice can be made.

If you collect the pros/cons from everyone and try to agree on which ones matter more in the current context then you're not tied to a previous commitment.


> One consequence is that when trying to have rational discussions it is necessary to try to dig in to identify all of people's assumptions

A coworker of mine who once worked at Symbolics used to call this, "Applied Philosophy."


> One consequence is that when trying to have rational discussions it is necessary to try to dig in to identify all of people's assumptions, since there is a good chance that the other person's programming beliefs play a big role in their thinking and you may not even be aware of it.

Amen to this, and by the way, it doesn't even have to be in the context of a discussion about a particular issue, let's say issue A. You can blunder into someone else's kneejerks unknowingly, on a whole other topic called issue B, and they will MAKE it be about issue A.


Have you seen the mkdir world view? It's the one where all the code is spread out over as many directories as possible, and leaf directories typically have no siblings just yet and one child.


I think this is a special case of the 'simpler is better ad absurdum' approach, where you only ever write 1-3 line functions, each in their own file, because they're 'simple'.

Of course, this really just moves the complexity to the composition of these functions and destroys locality of reference for anyone trying to read it.


I've found that this is true particularly when working in OO styled languages --- there seems to be an innate desire to minimise microcomplexity at the expense of greatly increasing macrocomplexity. It's like writing a book where each page contains one word. Each individual piece is "simple", but taken as a whole it's much more difficult to understand.


I like this analogy: it's like writing about a difficult or nuanced subject using only "simple English"; each sentence is superficially simple, but the work as a whole is much more difficult to make sense of.


   # Lisp cons cells, mkdir-worldview style: each cell is a directory
   # whose car and cdr are symlinks.
   cons()
   {
     local newdir=cons_$((counter++))
     local car=$1
     local cdr=$2
     mkdir "$newdir"                 # quote: names may contain spaces/globs
     ln -s "$car" "$newdir/car"
     ln -s "$cdr" "$newdir/cdr"
     echo "$newdir"                  # "return" the cell so calls can nest
   }
   # usage: list=$(cons a "$(cons b nil)")


I had a fevered dream about this very thing last night


I think that is a structure whose usability depends heavily upon the interface. It is torture on the CLI, but it is one way to organize a suitably "tree"-structured interface. It makes the relationship structure abundantly clear.

It is nice when it works but a real pain in the ass any other time.


Is this a particularly egregious worldview that should not be tolerated?


Yes, it rarely works out well, but mostly I just wanted to focus on an actual worldview instead of good vs. bad.


Tolerated, but not enjoyed. Gotta live (work) in a Java world.


Coincidentally, people (and other beings) also have different worldviews in every context, and not acknowledging this leads to endless pointless arguments.

It would be cool if the participants of every discussion (programming or otherwise) labeled their worldviews explicitly, so we could at least know what we're dealing with. For example on here with a tag by our usernames.

Old religions used to serve that purpose at least to a first approximation - we really could use some words to describe the new ones.


I keep thinking about making a site like the political compass but for programming opinions. Probably with a tinder swipe left/right to agree disagree.


I completely agree with your comment but since we are on the topic of worldviews and assumptions I just want to point out that women program too.

My assumption currently is that you do not switch pronouns arbitrarily and, as a result, were not just as likely to write "social circumstances may necessitate that she adopt a new viewpoint" when writing about an abstract individual programmer.

I know it is annoying to have this brought to attention but my world view includes the notion that small actions like this can have a profound effect at scale.

> social circumstance may necessitate that he adopt a new viewpoint

> when trying to have rational discussions it is necessary to try to dig in to identify all of people's assumptions


My experience of writing that switches pronouns arbitrarily is that it continually draws attention to the ratio of men and women in programming, which makes the lack of women more obvious. I first noticed this in an advanced physics course where, in deference to the presence of a lone woman in the course, the professor consistently said "he or she". Except that it was a conscious effort, and it served to draw attention every time to the lone "she", who clearly looked uncomfortable.

This is why I prefer to use singular they instead. It is less annoying, less likely to make people uncomfortable, and can easily be seen to include people no matter what their gender happens to be. It even includes those whose gender is ambiguous in some way.

It is a small thing, but my world view is that small improvements like this add up to create a more generally welcoming atmosphere.


The dictionary definition of singular "they" is, quite literally, "he", so you haven't really improved on the concerns some have with "he" by using "they".


Use the generic singular "they" for persons of unknown gender?

> there is a good chance that the other person's programming beliefs play a big role in their thinking

Check.

Use the male pronoun for the hypothetical person in a detailed example who needs to be more flexible in his/her thinking?

> this social circumstance may necessitate that he adopt a new viewpoint

Where the female pronoun would sound awkwardly coercive and uninviting?

> this social circumstance may necessitate that she adopt a new viewpoint

Check.

I believe ilaksh has their bases covered.


There is a problem with the English language: there is no commonly used neutral pronoun. The closest thing is "they", but I think some people consider that incorrect grammar or something. Using "he/she" or switching back and forth is awkward.


See https://stroppyeditor.wordpress.com/2015/04/21/everything-yo... for more than everything that you might want to know about the use of they as a singular pronoun. Even those who think that they object to it, frequently use it without noticing.

The upshot is that, whining from self-important grammarians notwithstanding, English has always had a gender-neutral third person singular pronoun in common use. It has been used in every century by established writers, indeed frequently by the same people who were saying that nobody should use it. That pronoun is "they".


the royal we?


Since this seems to be the place for personal views: my view, after 20 years of experience, is that people do not notice the complexity of anything that they have digested as normal. As a result, doing anything other than what they are used to seems very complex to them. Whether that other thing is actually complex to a neutral third party is situation- and observer-dependent.

I have seen this with procedural programmers struggling with OO code. OO programmers struggling with functional techniques. Programmers of all kinds seeing a new framework. Programmers used to more structured code trying to fix shell scripts. React developers encountering code using older JS libraries, and developers comfortable with older libraries encountering React. And so on and so forth.

In all cases the common denominator is that the programmer is uncomfortable with the new way of doing things. And the first reaction is, "What's wrong with this code?" And "It's messy and complex" is one of the easiest conclusions to draw.


My now-favorite example of what you describe is the blog post "I no longer understand my PhD dissertation". However, it's worse than that: even the same person experiences this, because over time we are different people.

To quote from the blog post:

> I was curious to see how much of the dissertation I can still grasp, five years after the fact. I figured it couldn’t hurt my ego if I refreshed my mind with past mathematical glories.

> How wrong I was.

> This was not the casual read I had in mind. The notation was alien. I even had to scour the examiner’s report to direct me to the key results. And while I could have sworn this was a well-written thesis, I repeatedly found myself bamboozled by my own prompts. “The result now follows easily…” may have made sense back when, but now the author-turned-confused reader can profess that it most certainly does not follow easily, at least in his own mind.

https://medium.com/@fjmubeen/ai-no-longer-understand-my-phd-...


Thanks for sharing the interesting article! This effect also applies to presentation slides.

Just check out my archive for some slides I presented a few years ago. Without the file names, I would have thought they were random compilations.


> people do not notice the complexity of anything that they have digested as normal

...which is why examples of "minimal complexity" code often provoke feelings of extreme unease and horror; one nice example is https://news.ycombinator.com/item?id=8558822 where the functionality:code ratio almost seems impossible at first glance.

> I have seen this with procedural programmers struggling with OO code.

To be fair, a lot of OO code tends toward overengineering and premature abstraction.


> To be fair, a lot of OO code tends toward overengineering and premature abstraction.

And a lot of functional code tends towards unnecessary levels of indirection through higher-order functions as a way of deferring responsibility for actual logic, or, on the flip side, insufficient abstraction leading to difficulty extending functionality.

I've gone back and forth multiple times between functional practices and OO practices, and I definitely empathize with the confusion going in both directions. You trade one problem for another, and in the end there really is no free lunch.

React Redux is a great example. People who have gotten over the hump of the boilerplate involved in threading state up and down the tree love how little energy they're spending routing state changes to the correct destinations. However, to someone who is used to event-based UI programming, this looks like a horrific waste of energy reminiscent of how a giraffe's laryngeal nerve goes all the way down the neck then back up. They've both got a point, but you have to follow their paths to see it yourself.


"I've gone back and forth multiple times between functional practices and OO practices, and I definitely empathize with the confusion going in both directions. You trade one problem for another, and in the end there really is no free lunch."

I think there is a strange kind of duality between OOP and FP: OOP passes structures of objects hidden behind interfaces, while FP passes composed functions in the other direction. So you can convert your OOP code to FP code almost mindlessly by total "inversion of control": wherever you pass an object somewhere (making a call), in FP you pass the operation from the callee to the caller instead.

And I think it's this duality that you observe in practice.

However, I believe there is a practical difference between the two, which makes FP superior despite being harder to understand initially.

It's much easier to spot that you're doing "no-ops" in FP than in OOP. In OOP you often pass objects unchanged across layers, while in FP this manifests as identity functions being used. And it's easier for both human and compiler to remove these redundant identity functions (through function specialization and things like fusion, etc.).

Also, I believe the functional approach allows you to better realize that there are algebraic laws governing your problem domain.

It's really like in mathematics: when you move from a point-wise understanding of functions to a functional understanding of functions, more insight is gained.
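
A toy sketch of that inversion in Python (the discount example is mine, purely illustrative): the OO version passes an object behind an interface, the FP version passes the operation itself, and an FP "no-op" shows up as a literal identity function.

    # OO style: the caller hands the callee an object behind an interface.
    class FlatDiscount:
        def __init__(self, amount):
            self.amount = amount

        def apply(self, price):
            return price - self.amount

    def checkout_oo(price, discount):
        return discount.apply(price)

    # FP style: the caller hands the callee the operation itself.
    def flat_discount(amount):
        return lambda price: price - amount

    def checkout_fp(price, discount):
        return discount(price)

    print(checkout_oo(100, FlatDiscount(10)))   # 90
    print(checkout_fp(100, flat_discount(10)))  # 90
    # A pass-through in FP is just `lambda price: price`, which is easier
    # for a reader (or an optimizer) to spot and remove than a
    # do-nothing object threaded through OO layers.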


That is an excellent link to read through, and the code is an excellent demonstration.

There is an inherent complexity to a compiler; that example had only that. There are standard names for things in C; that example had only those. In all fields, internalizing compact names expands how much an expert can hold in mind, and makes the material impenetrable to beginners. (Math is particularly good at supplying examples.) That program used compact variable names that are standard enough in C to be instantly recognizable, but come as a shock to someone who hasn't internalized them.

The result is a compressed version of exactly what an experienced compiler programmer has already internalized. And is utterly impenetrable to anyone who hasn't internalized the concepts that they need.


To mock the doings of others is easy; I plead guilty. But there are rights and wrongs on both sides. Usually.

My theory is that engineers are often stuck in a filter bubble, especially newer engineers. If you read all day about microservices and SPAs, at some point there is no place for anything else. But classic web app architectures are still doing very well; at a common scale they do things better, with less complexity. The point is that they aren't very heavily discussed anymore. Most problems have been discussed and there are solutions for them. For me it is hard to argue about the problems of modern approaches, not because I don't have any arguments, but because mostly the response is that we have to throw more code, hardware, and architecture at it. It works for FAANG, so why argue?

All sides need to pay more respect to things that work. And we have to set context when/where/why it works.


True; Amazon is definitely heavy on the serverless marketing/filter bubble.


Ha, it has been the reverse experience for me in all the examples cited.

When learning functional programming, I had an aha moment as I was coming from OO.

When learning React, I had another aha moment as I was coming from regular JS libraries.

There is an objective truth that is there or not.


I'm pretty sure a big part of intelligence is a type of compression. Once you are familiar with certain things, your brain automatically compresses and decompresses those patterns so you don't have to deal with them consciously much. You would not notice the same amount of complexity.

Having said that, things like cyclomatic complexity do exist. And smaller units that are simpler on their own do mean there is more to mentally reassemble into the larger whole.


> What if there is no "complexity trap", just disagreements about how best to design programs?

While I think this post asks reasonable questions and has some healthy skepticism, the complexity trap is very real in my experience. After 20 years of professional software development, I can safely say I've watched literally tens of millions of dollars wasted over-engineering solutions to problems that were anywhere from smaller than advertised to nonexistent.

I also freely admit that I personally tend to deep dive on cool/elegant/fast/small solutions where my time to achieve the optimal program is never recouped by running the program.

There absolutely is some under-engineering too; I've done some and seen some in the wild. But on the whole, my experience is that over-engineering is the dominant problem. That might not be true for people who hire contractors; from that point of view, getting less than you think you paid for is probably the most common experience.


Well, I feel like it's the environment of the job coupled with the developer that causes over-engineering, rather than true technical concerns.

I'm fairly young for my new role, architecting (from the ground up) some fairly big systems. I'm really struggling not to over-complicate problems, and I think it's really easy to do for a couple of reasons.

The selfish architect I'm working very hard not to be would rather have his work be complex. This builds ego if it works, and you can always argue "it's a complex problem" if it doesn't. He'd rather face the shame of maintaining software that's over-engineered, as that's a very low-visibility cost: no one bats an eye at estimates to update an existing system if it's needed. On the other hand, failure to account for a use case you deemed 'a bit complex for MVP' is super visible, even if it's a miscommunication between product and development.

This selfish guy doesn't have eyes for business needs; he just falls in love with solving a problem. When he starts to elaborate on capabilities the system could have, the product manager will just nod and say "oh yeah, that might be nice", so he over-engineers to make sure that pathway stays open, even if the product lady forgot about that possible feature before she left the meeting. He's also afraid of showing up at a meeting saying we solved the problem with a boring tweak on an existing implementation, and having someone show him up by asking why he didn't just use a software platform or language that felt foreign and complex when he read a few articles about it on reddit.

Idk, it felt good to vent some of this stuff, but those are the things I have to make sure I'm aware of; otherwise I'm just gonna build shit to the ceiling.


I agree with all of that. My first reaction is that you're probably in startup land, where you're mainly building new things. People do bat an eye at estimates to update old and large existing systems; it's the new and small ones that are easy to update. If you're even talking about MVPs, you're probably not seeing much over-engineering yet. Or you might have some and not know it yet; it might take a couple of years before you realize you paid large opportunity costs solving issues that seemed important. Having to decide whether to solve an issue before vs. after MVP can be hard, but if you know you need to solve it either way, you're way ahead of solving something that you aren't absolutely sure you'll need.

A common but not-common-enough (IMO) programming philosophy is to only write the minimum code necessary to solve problems you actually tangibly have, not for any problems you think you’ll have, only for problems that absolutely require solving now and people are screaming for them. Extreme programming and Agile and lots of other software development frameworks all have variations on this tenet.

The biggest over-engineering mistakes I’ve seen are when teams decide to rewrite from scratch a code base that they deem too crufty and too difficult to maintain and update anymore. Devs make this call far too easily, and fail to take into account how many things still work and how much effort it took to get there. I’ve been witness to two very large such rewrites that, many years later, both companies admitted were colossal mistakes.


What really burns people is making irreversible changes and finding out you chose wrong.

Some people avoid this by trying not to make any decisions at all, and that's more than enough fuel for a long journey into overengineering. Someone (possibly Michael Feathers or Uncle Bob) called these people cowards, and I tend to agree.

These days I’m very aware that smart and fool aren’t at opposite ends of the same ruler. I’m much more concerned with being wise instead of clever. Clever requires too much busywork for everybody, and it’s not conducive to teamwork.


Would you say that these over-engineered solutions actually grasped the problems and over-engineered the solutions? Or did they not grasp the original problems to begin with, and instead solved tangential problems? In my experience it's often the latter that leads to accrued accidental complexity and technical debt.

Edit: grammar.


I’ve seen both of those and more. Actually I’d say that from my point of view, there is no such thing as fully grasping the problem while over-engineering it. I’d categorize that as slightly not grasping the whole problem. The complexity of the solution and the budget for solving the problem are part of the problem, not something separate and distinct.

I think I’ve more often seen tangential and imagined problems than people doing the right thing the wrong way. To be fair, it can actually be very hard to know whether a problem is really truly a problem that needs to be solved. Some tangential and imagined problems seem important and real, and you don’t discover until later you needed to solve something else and not the thing you did.

Some examples:

At a games company, the two mathiest programmers decided we needed to be using geometric algebra instead of matrices. They spent years working on it, and the code infected the rigging and animation and rendering systems, all kinds of things. Most of the programmers had to interface with it, but didn’t know geometric algebra, so it slowed development. It only solved minor issues. One of the claims was we’d have no gimbal lock, but people already solve that with matrices in one or two functions, it’s a non issue. To top it off, the math all got a bit slower, and we had to maintain the matrix math anyway to interface with the hardware and external libraries.

At a film company all the programmers decided they were tired of the renderer and they needed to start from scratch because making updates was getting slow and difficult. They scrapped the renderer, started over, and it took many multiples longer than they estimated. When they were done, they had roughly the same complexity and ugliness as before, and making updates didn’t get significantly easier. Some mistakes were made in the rush to rewrite everything.

Personally, I built a multi-user whiteboard website, and I over-engineered the rendering and the multi-user undo system, convinced they both needed to be perfect. In the meantime, I discovered that very few people care about those things or ever hit the corner cases, and I missed the opportunity to write proper integrations with other tools and import/export tools: the things the customers needed in order to adopt my software for their workflows.


I deliberately try to make things complex, but not more complex than they actually are, though apparently this is received as a matter of perception. It commonly boils down to risk or performance versus convenience.

As a senior developer, I notice that juniors will sometimes do everything in their power to over-simplify a given problem. This is more often due to insecurity than accident or technical ignorance, and while their motives are often unintentional (non-cognitive), the actions are often exceedingly deliberate (cognitive decisions).

When this occurs the developer isn't changing the problem. They are changing themselves and their approach to the problem. Unfortunately, this is evidently reflected in the quality of the work. Pointing out this shortcoming can result in conflict because the offending party's actions, even though often unintentionally motivated, are deliberate, and thus the criticism may be received as unnecessarily precise or picky. Again, it's all about risk or performance in the product versus the individual developer's confidence displayed as a convenience.


That's funny, because I have become, and observe, the exact opposite.

The notion of trying to make things more complex than they need to be I find deeply problematic.

Junior engineers often end up devising solutions which are in fact too complex as they try to over engineer everything.

The more experience I have, the more I accept that code is complexity and cost and that it should be avoided.

Always take the easy path unless there are very good reasons not to, because you never know where you'll actually need to pivot until you get there.

To your point (not more complex than they actually are): yes, there is sometimes inherent complexity in the problem space that can't be avoided, but that doesn't mean the solution actually needs to be complicated. Domain complexity might be well encapsulated, leaving the rest of the system fairly straightforward.


I tend to observe that junior engineers simply misplace the level of complexity that a system needs in order to be as simple as possible. Meaning, I've seen errors on the side of being too simple, and errors on the side of being too complex; the unifying theory here is that it's a mis-estimation of "necessary complexity".

Two examples:

In designing an email templating system, I've seen junior engineers devise really complex class hierarchies, with super classes that take in type generics for their super() constructor, three levels deep, resulting in hundreds of lines of code. When pressed, the reasoning was "there are parts of the email body we want to be the same and parts that are different, so the class hierarchies map to those parts that are similar vs different." It turns out, this service was only sending 3 different email "types" for the foreseeable future, and the only similarity between them was a copyright notice at the bottom, a banner at the top, and a body.
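
(For contrast, a hypothetical sketch of what the simple version could have looked like, given that the only shared parts were a banner, a body, and a copyright notice; all names here are made up:)

    # Hypothetical simple alternative: plain composition, no class hierarchy.
    BANNER = "<header>...banner...</header>"
    COPYRIGHT = "<footer>(c) Example Corp</footer>"

    def render_email(body_html):
        return f"{BANNER}\n{body_html}\n{COPYRIGHT}"

    def welcome_email(name):
        return render_email(f"<p>Welcome, {name}!</p>")

    def password_reset_email(link):
        return render_email(f"<p>Reset your password: {link}</p>")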

But, a counterpoint: For a long while, on some node.js backend API we were writing, much of the core business logic was implemented as simple functions across hundreds of files. Worked great when we had four API routes. Now, with hundreds, it's impossible to find anything, because there's no structure, patterns, or organization. So, we needed someone to step in and introduce rules and structure in order to make the system more simple, and our job would have been easier if we'd had the foresight to see that necessary complexity coming.


> In designing an email templating system, I've seen junior engineers devise really complex class hierarchies, with super classes that take in type generics for their super() constructor, three levels deep, resulting in hundreds of lines of code. When pressed, the reasoning was "there are parts of the email body we want to be the same and parts that are different, so the class hierarchies map to those parts that are similar vs different." It turns out, this service was only sending 3 different email "types" for the foreseeable future, and the only similarity between them was a copyright notice at the bottom, a banner at the top, and a body.

Was there a discussion about high-level design of the templating system at all before the junior started banging out code? This feels more like a failure of the senior/lead.

> But, a counterpoint: For a long while, on some node.js backend API we were writing, much of the core business logic was implemented as simple functions across hundreds of files. Worked great when we had four API routes. Now, with hundreds, it's impossible to find anything, because there's no structure, patterns, or organization. So, we needed someone to step in and introduce rules and structure in order to make the system more simple, and our job would have been easier if we'd had the foresight to see that necessary complexity coming.

This feels like a false dichotomy to me. Your system existed in a state somewhere between 4 routes and hundreds, didn't it? At the point something starts being painful is when it should be addressed, imo. That was probably somewhere in the teens of routes, would be my guess. At that point the system is still small and understandable enough to make a large architectural change, but it's big enough that you can be reasonably certain you're making the correct large architectural change.


> Was there a discussion about high-level design of the templating system at all before the junior started banging out code? This feels more like a failure of the senior/lead.

There was not. This is a startup I'm referencing; we don't hold long technical planning sessions for how to design a few hundred lines of code. We'd likely hold a meeting on the product behind it, but that's not what we're talking about here.

Does that mean it was a failure of leadership? I'd argue no. I would argue that it might be a failure in mentorship, which is different, but in all the startups I've been in there's one constant: engineers have MASSIVE freedom and responsibility to get the product out the door. We don't micromanage the implementations. That doesn't mean they always turn out great, and that's where mentorship comes in: teaching them how to fish instead of the seniors doing it for them.

> This feels like a false dichotomy to me. Your system existed in a state somewhere between 4 routes and hundreds, didn't it? At the point something starts being painful is when it should be addressed, imo. That was probably somewhere in the teens of routes, would be my guess. At that point the system is still small and understandable enough to make a large architectural change, but it's big enough that you can be reasonably certain you're making the correct large architectural change.

I think you're right. No one is perfect.

The reality of many startups is that the senior developers are vastly overworked. I mean, everyone is overworked, but at least in my experience it comes back to the fact that the seniors can't micromanage everyone. So at some point we were at a dozen routes, and it probably sucked to work in, but it's likely that the seniors reviewed it and "missed the forest for the trees". Yeah, that code looks great, ship it, next feature, keep growing.

It takes a different, holistic perspective to see that the whole architecture is slowly getting bad, and often you don't get that perspective working in the weeds on each route. Combine that with the fact that startups tend to move REALLY fast, and that an engineering department four times the size still wouldn't have the capacity to take on all the feature work we want, let alone the refactoring, and it's easy to understand why architecture gets pushed off.


Surely good examples, but the latter may have been the right path: start simple with what is necessary. When there is a need for a framework or some layer of complexity, add it then. But maybe not before.

And #1 is funny because we've all done something like that!


> Junior engineers often end up devising solutions which are in fact too complex as they try to over engineer everything.

> The more experience I have, the more I accept that code is complexity and cost and that it should be avoided.

At a previous company I worked for, I proposed a solution which was considered too "complex" by the VP of R&D. So he and another "senior" turned around and rewrote it into something more "simple."

This "simplicity" slowly turned into a constantly patched clusterfuck of a mess and it stripped off a dozen different ways of customization. The "simplicity" was produced as a consequence of both of the Senior engineer's ignorance towards to the tools (language, frameworks) we were using and they just went with their gut experience on it.

I'm somewhat different to you, I guess. As things go on, I realise that simplicity is usually a hallmark of developers who don't really understand the tools they're working with and can't really foresee where the business will be going with that feature in 12 to 24 months.

It's a symptom of "Jira culture," where people develop things exclusively to satisfy their ticket and ignore the bigger picture of what they're actually trying to achieve.

This is not to discount people over-engineering things, but over-engineering is very different from complexity.


Looks like the two of you have a different worldview about what 'simplicity' means.


This is because people don't realise that there are two axes on the simplicity/complexity graph, not one. So they're operating on different definitions of 'simple'.

You can make the development experience simpler, and/or you can make the software simpler.

Usually a change to one will have an inverse effect on the other. Bringing in a library will almost always make dev simpler (unless the API is more complex than whatever it's abstracting, which only seems to happen in JS land), but it's also guaranteed to make your software more complex (unless it does exactly what you need with no config, and nothing more).


It's really nice of you to notice. Some crusaders still think it's about complex vs. simple.


This logic is a great justification to commit many unnecessary tragedies, because it gives you an escape valve to not put effort into anything you don't want to do.

"It's more complex to write unit tests/write this so it's easy to unit test, so I won't bother".

"It's more complex to spend a few days thinking through the design of this system, so I won't bother."

"It's more complex to write this in a way that can be subclassed later, so I won't bother."

Perfect may be the enemy of good, but "easy" can also be a foe.


I think that is a misunderstanding between the problem and the solution. The ideal scenario is to take a complex problem and devise a simple solution.

Problems are not made more complex. They are literally the requirements provided. From an economic perspective, the problem is. The complexity is in how a person receives those requirements (their perception of the approach). This is lost on someone when a problem is more fully exposed according to its potential second- and third-order consequences (risks). When everything is ad hoc, the risks are never exposed until a mountain of code has been written that ends up being not simple and excessively fragile.

To avoid the unforeseen stupidity I prefer to expose the complexity up front and try to prevent people from hiding from it. That doesn't change the problem statement, the environment, the end user, or anything else. It only changes the investment of effort and the person providing that effort. Burying your code in layers of unnecessary abstraction and mountains of dependencies is not an exercise in simplicity just because somebody has successfully avoided writing original code. This is the difference between simple and easy.


>I deliberately try to make things complex

I see this often with someone who has made it into a senior role and is then attempting to ensure job security through complexity. Reflecting this onto junior developers further reinforces that said developer has adopted (usually subconsciously) a fear-driven approach.


Job security does not seem like a reasonable explanation for programmer behavior in this day and age.


Yes and no. Some developers really want to stay in the same place, working on the same sorts of things, and to have control over the domains they are working on. There are usually many reasons for this; some are easy to understand, such as valuing autonomy in one's work over the pure dollar value of salary that could be gained by job-hopping. For some folks, the idea of having to go somewhere new, with different practices, where they may not be in charge of making decisions on the codebase (or at least not have any control over what goes into it), is terrifying.


I don't need to write bad code to justify my existence when my existence is justified in the face of other developers writing such bad code.


> I deliberately try to make things complex, but not more complex than they actually are

I don't understand. Is it an elaborate way to say that you expose your juniors to the full complexity of the problem, rather than feeding them a "simplistic" view of the problem?

> This is more often due to insecurity than accident or technical ignorance and while their motives are often unintentional (non-cognitive) the actions are often exceedingly deliberate (cognitive decisions).

I don't understand. It looks to me like they are actually "cognitively" doing their job, that is, trying to separate the essential complexity from the accidental complexity. They are trying to cut the Gordian knot; I don't see how this is linked to insecurity. Maybe you should simply tell them that someone already did that job and they should not second-guess them?

> Again, its all about risk or performance in the product versus the individual developer's confidence displayed as a convenience.

No, the convenience is to do as Mister Senior Developer said, as a good code monkey, so you can leave early and work on your side project.


Economically speaking, the problem is. It isn't necessarily complex or simple. Whether or not you are willing to accept that there are risks, and possibly second- or third-order consequences to your approach, is the true complexity.

Exposing the fullness of the problem isn't a creation of complexity. The same amount of complexity is already there, unchanged. Are your juniors willing and able to account for this and devise a simple resolution that actually lowers risk with the smallest amount of code? Or, conversely, are they imposing unnecessary layers of abstraction to distance themselves from the directness of the problem?

Another way to think about it in "junior" terms is whether it's simple or easy. Those aren't the same.


I really like this view.

I see problems as a messy graph. There is a node on the far left that represents the problem and a node on the far right that represents the solution. There are nearly infinite ways to traverse this graph from problem to solution, but there is a minimum number of nodes that must be passed, along some optimal path, in order to actually solve the problem as defined. No less is possible, unless you change the problem. I think being able to see something close to this minimum complexity, and to recognise that it is the problem, is what you're referring to.

After identifying this, the problem/solution nodes can be reassigned intentionally.


I tend to see this in a different way.

In most of the difficult architectural decisions I have encountered, there is a slider that goes from "complex" on one end to "aspirational" on the other. I see this trade-off much more often than I see things getting over/under-engineered, though I've definitely also seen that.

"Complex" tends to be actually easier to design: you adapt the existing somewhat complex system, add a couple things here and there, and you have a workable solution to the problem at hand, even if it feels a bit kludgy.

Simple tends to be harder to achieve because reality is complex, and designing a small number of abstractions to encompass a complex system requires a willingness to spend blood and sweat in "meatspace" to rein in the complicating factors. Don't want to support XYZ? Well, you're going to have to get people to stop using XYZ first.

I don't think this extends beyond software architecture, but that has been my experience working on living codebases.


You all have it wrong. There is a solution to building better systems.

Neither complexity nor simplicity is the answer for future-proofing work. If a problem is complex then the problem requires a complex solution. Simplification can only be done on problems that are inherently simple.

The key to future-proofing work is modularity. You must design a system in small components, and each component should be self-contained and unaware of the other components. Additionally, the way these components are connected together matters. The goal is to try to build a system that is a linked list rather than a graph of complex nodes. Systems that look like linked lists can have nodes removed and inserted easily in the pipeline, while graphs cannot. Composability and unidirectional data flow all lead to modularity. Do not build graph architectures; build pipelines whenever possible.
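Here's a rough sketch of what I mean in Scala (toy order-processing domain, all names invented):

    case class Order(items: List[String], total: BigDecimal,
                     validated: Boolean = false)

    // Each stage is self-contained and unaware of the others.
    val validate: Order => Order = o => o.copy(validated = true)
    val applyDiscount: Order => Order =
      o => if (o.total > 100) o.copy(total = o.total * 0.9) else o

    // The "linked list": inserting, removing, or reordering a stage
    // touches this one line, not a web of callers.
    val pipeline: Order => Order = validate andThen applyDiscount

    pipeline(Order(List("book"), BigDecimal(120)))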

Additionally, the nodes in the system must never shift. If a node is a functional unit, then passing closures and first-class functions all over the system is equivalent to a graph structure that shifts over time. A shifting graph is not modular at all.

This is key. A simple or complex solution cannot adapt to a changing problem, and if modularity is not introduced into the system then hacks or rewrites to the entire module are needed to get things to work. Keep in mind modularity does not mean generality. You don't need to create abstract general systems to create modular systems.

Of course this model doesn't work for all problems and there are always compromises. The goal is to minimize the compromises. GUIs are one problem that doesn't fit this model well. You will note that Redux attempts to solve this by transforming the GUI problem into something as close to a linked list as possible. However, a full transformation isn't possible due to the inherently complex graph nature of the GUI itself.

The very model of OOP is the antithesis of modularity. While one can create a lot of abstract objects that can be repurposed for other uses, this modularity is an illusion. OOP actually promotes a style of programming where your code becomes a graph of nodes, leading to less modularity overall.


I have the opposite experience. I think a senior should focus not on identifying the complexity of a problem, but identifying the complexity of the proposed solution. What are the bottlenecks? What possible situations wouldn't fit into the proposed schema? Would a table of 1k records that have a seemingly large number of columns suddenly become 10m records because you decide to use an EAV?

To mitigate issues, is a solution full of contracts and complicated dependencies the right answer? Or is the right answer to skip the ORM for that one particular call and just write a parameterized SQL query or stored procedure that gets the job done?
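For instance, something like this deliberate escape hatch (a Scala-over-plain-JDBC sketch; the table and column names are hypothetical), while the rest of the app keeps using the ORM:

    import java.sql.Connection

    // One parameterized query for the hot path, nothing more.
    def activeUserCount(conn: Connection, since: java.sql.Timestamp): Long = {
      val stmt = conn.prepareStatement(
        "SELECT COUNT(*) FROM users WHERE last_login >= ?")
      try {
        stmt.setTimestamp(1, since)
        val rs = stmt.executeQuery()
        rs.next()
        rs.getLong(1)
      } finally stmt.close()
    }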

That kind of insight upfront only comes from experience, which is what a senior has to offer most of all.


> I deliberately try to make things complex

This strikes me as odd. You really sit there thinking "I need to make this complex"? Why not strive for correctness and completeness?


More like I need to explain what the scenario actually is, as opposed to the microscopic slice the developer wishes it were. I don't make a problem complex. A problem is the summation of the requirements before you, whether or not you believe it to be complex.


I can't tell if it's the typefaces, the line spacing, the space between paragraphs, or a combination of those, but that page was so unpleasant to read that I had to use Firefox's "reader view" to do it. I have no opinion whatsoever on the text, but the presentation was so hideous that I thought it was worth mentioning.


It's the negative line height—you can see by selecting multiple lines of text that the rows actually overlap each other.


Spot on. The lines are too long and too scrunched together.

I could feel myself involuntarily squinting to read it.


So, we should just "agree to disagree" and stop trying to demonstrate how one approach might be much better over another? How can anything ever get better?

The Complexity Trap didn't accuse anyone of actively trying to make things complex, nor did it imply any "moral failings" on anyone's part, it just gave examples of how common practices can cause accidental complexity. It also dedicates an entire section to defining "complexity" as used throughout, so there's no ambiguity there.


> So, we should just "agree to disagree" and stop trying to demonstrate how one approach might be much better over another? How can anything ever get better?

No, we should try to make things better. But framing these choices as "complexity" doesn't seem to be helpful. The choices are tradeoffs but they're not a deliberate choice of complexity for the sake of some other value; at most they're choices between different kinds of complexity, and maybe not even that much.

> It also dedicates an entire section to defining "complexity" as used throughout, so there's no ambiguity there.

And yet I used that same definition but came to opposite conclusions when applying it to the original author's examples.


> deliberate choice of complexity

I also don't see anything in The Complexity Trap that accuses anyone of that. I think that's a misreading.

Rather, I think The Complexity Trap argues that many developers have a poor understanding of what "complexity" means, so they end up increasing it even when they think they are decreasing it. Specifically, I think the point it tries to make is that complexity doesn't equate to verbosity, and simplicity doesn't equate to terseness. That's why it starts by defining complexity in terms of coupling.

Your article seems to agree that DTOs are more decoupled because you refer to it as "cargo cult decoupling", yet you still argue it's more complex. So you aren't using the same definition.

Having said that, I do agree with you that blind "decoupling" is definitely not always the answer, and some things should be coupled. When a change in one thing should always cause a change in the other, purposely coupling those things in code can decrease complexity.

But I wouldn't say coupling your public API to your internal model by generating the former from the latter is one of those cases. It's something that's done for convenience and terseness, at the expense of complexity. (It's more complex because changes that would otherwise be simple refactorings could cause breaking changes for consumers of your public API in ways you would not expect. For example, you won't be able to use automated refactoring tools safely without carefully thinking through the consequences in your head.)
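As a sketch of the tradeoff being weighed (invented names, not code from either article): the DTO costs one mapping function's worth of boilerplate, and what it buys is a public shape that can't drift when the internal model is refactored.

    // Internal model: free to change shape at any time.
    case class User(id: Long, displayName: String, passwordHash: String)

    // Public API shape: changed only as a deliberate, versioned decision.
    case class UserDto(id: Long, name: String)

    object UserDto {
      // The boilerplate being debated. Note that passwordHash can't leak,
      // and renaming displayName is a local refactoring: only this
      // mapping changes, not the wire format.
      def fromModel(u: User): UserDto = UserDto(u.id, u.displayName)
    }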

Depending on the project, trading complexity for terseness/less boilerplate might be the right decision. But it should be done consciously.


> Your article seems to agree that DTOs are more decoupled because you refer to it as "cargo cult decoupling", yet you still argue it's more complex. So you aren't using the same definition.

Well I'm using the same definition as the second section of The Complexity Trap, where it considers introducing an effect algebra to be increasing complexity.

Whatever tradeoff is made by introducing DTOs is in some sense the same as the one that's being made by introducing a free monad or tagless final - more verbosity and less coupling. But The Complexity Trap seems to think that verbosity is more complex than coupling half the time, and coupling is more complex than verbosity the other half of the time. So framing either as "trading complexity for ..." is unhelpful.
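For concreteness, here's the shape of the effect-algebra side of that same trade, as a minimal tagless-final sketch (my own reconstruction, not The Complexity Trap's code): the F[_] plumbing is the verbosity, and the decoupling is that the same algebra can be interpreted into Future in production or run synchronously in tests.

    import scala.concurrent.{ExecutionContext, Future}
    import scala.language.higherKinds

    // The effect algebra: the one capability the logic may use.
    trait Subscriptions[F[_]] {
      def isActive(userId: Long): F[Boolean]
    }

    // Production interpreter: talks to the external service via Future.
    class HttpSubscriptions(implicit ec: ExecutionContext)
        extends Subscriptions[Future] {
      def isActive(userId: Long): Future[Boolean] =
        Future(true) // stand-in for a real HTTP call
    }

    // Test interpreter: synchronous, no HTTP anywhere.
    object Effects { type Id[A] = A }
    class FakeSubscriptions(active: Set[Long])
        extends Subscriptions[Effects.Id] {
      def isActive(userId: Long): Effects.Id[Boolean] = active(userId)
    }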


I have to pretty much reject this post wholesale. True, the post it's disagreeing with picked weak examples, but other than that pretty much all of its overall points were actually very accurate.

The original article was not about json/scala/etc. It was about this:

To summarize, we have to deal with a lot of accidental complexity these days, for multiple reasons. First of all, we adopt techniques and abstractions without analysing and evaluating their costs and their usefulness in our specific situation. Moreover, we love solving technological problems.


> True the post it's disagreeing with picked weak examples. But other than that pretty much all of its overall points were actually very accurate.

That's what I initially thought. But how can the overall points possibly be right if they lead to exactly the wrong conclusions in most or all of the specific examples? At best the part you quote is vacuous and unhelpful for actually making technical decisions.


Just logically, a point can never be disproven with bad examples. Bad examples may be unpersuasive, may bore the reader, but they don't make a true point false.

As best I can tell when you call his point "vacuous" you mean you think it's too abstract to be practical. I disagree.

What I take away from his article is:

- Before you build something, ask yourself "Is this REALLY a technical problem? Or is it better solved non-technically?"

- What are the costs, especially complexity, of this change? Weigh those against the benefit.

Seems concrete and useful to me.


> Seems concrete and useful to me.

It sounds useful. But Westheide followed this methodology, presumably put time and effort into it, and yet made completely wrong technical decisions. So I don't believe the method can possibly be useful; he'd have got better results by saving his time and flipping a coin. (Put another way, this methodology doesn't outperform a placebo.)


You know that's a bad argument. You can't definitively say "one person tried to follow a piece of advice and it didn't help one person therefore the advice is false/wrong/worthless."

I'm sure if you thought about this though that would be pretty clear to you.


> You know that's a bad argument. You can't definitively say "one person tried to follow a piece of advice and it didn't help one person therefore the advice is false/wrong/worthless."

But this isn't just one person trying - this is the example given by the person giving the advice, so it's presumably the strongest case they can make in favour of their advice.


Sometimes complexity is needed.

I've worked on too many projects where the approach is just "do whatever is easiest at the time to get it working ASAP". This works, for a while, but eventually that code base nobody put any design thinking into turns into a tyre fire.

Maintenance becomes hellish, extending it becomes hellish and error-prone. Finding and resolving bugs becomes excruciating.

So when people go for a more complex solution up front, it might not be complexity for complexity's sake; they might actually be able to foresee issues in the future and want to avoid them.

I wish more software development projects front-loaded the effort, whether in terms of design / architecture / some sort of code analysis / quality control etc., or even just thinking. Especially thinking.

If the result of that thinking is more complexity, then it's fine. You're introducing complexity early, everyone can get used to it because it's going to make your lives in the future easier.


I agree that the idea of "simplicity" can become convoluted. But I strongly disagree with some of his specific examples. Like others are saying, a lot of it comes from your worldview on programming. As someone who was deep in the Microsoft stack (which I definitely get from the vibe of the writer here, either that or Java?) and has since moved away from it, I can say simplicity wins. It really just depends on how simple the given language allows you to be, and what that looks like.

In my personal opinion the Microsoft Stack really traps you in a box. Not a bad stack, but you can't really translate the "patterns" you have to use there into a lot of other stacks.


No programming approach is perfect but some programming approaches are better than others. I started programming with a statically typed language for many years and then I went back and forth between dynamically and statically typed several times in my professional career. Once you've developed good programming habits (which statically typed languages can help you learn), ultimately, dynamically typed is better. Dynamically typed languages are simpler, faster to write, easier to change, easier and faster to test and debug, focus on raw logic instead of semantics, interact with external services more easily (JSON is the most popular data interchange format and it doesn't carry type information), and integrate better with unrelated systems (no type conflicts when integrating different projects).
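Roughly the tradeoff as I see it, as a toy sketch (Scala only because it can mimic both styles in one snippet; the field names are invented):

    // Dynamic-style: nothing to declare up front, quick to write and change,
    // but a typo or shape change surfaces at runtime, far from the cause.
    val payload: Map[String, Any] = Map("name" -> "Ada", "credits" -> 42)
    val credits = payload("credits").asInstanceOf[Int]

    // Static-style: deserialization is upfront work, but every later use
    // of user.credits is compiler-checked and renames are mechanical.
    case class User(name: String, credits: Int)
    val user = User("Ada", 42) // in practice, decoded/validated at the boundary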

Functional programming is really nice for some things but ultimately, you need to store state somewhere, and often you don't want to store all the data in your entire system inside a single monolithic store or database. Often, you want data to be close to the logic which operates on it; having a global data store is often just as bad as using global variables in your application. It can lead to conflicts (it forces you to think about data in a system-wide way, not accounting for data from external systems) and makes it hard to track where the data is used within the system.


I came to the opposite conclusion. Working on a dynamically typed project feels slippery... whenever you make a change it is difficult to know what will end up broken. I agree that deserialization can be a pain up front, but for the long term stability and assurances it is worth it.


I heard somewhere, "People are not against you. They are just for themselves."


What a read. I agree, often times we overlook the obvious solution and tend to go for a more complex one. The approach should always be simple. I don't know why anyone would ever think otherwise.

If the issue is complex in nature, you can still start with a simple solution before you dig into the weeds. I swear, sometimes people just love to work extra.


Software is art, fashion and politics.


Except not the first one.



Well you can implement the same program in endless different ways. Particular styles develop. Each piece of software is unique.

You're engineering with an art form; that's why you see so much variation among the patterns.

Every portrait might be recognizable as a portrait, but every portrait is unique in its details.


Different ways to do things is not art. Patterns emerging from those different ways is not art. Art is for emotional purposes:

"the expression or application of human creative skill and imagination, typically in a visual form such as painting or sculpture, producing works to be appreciated primarily for their beauty or emotional power."

Software developers have a tendency to try to claim artist status because what they do is almost entirely meaningless and a waste of time. The exchange of time for money is too transparent so they are unfulfilled and try to imbue their work with meaning. If you want to make art, do it, it can be a hobby. Yet another useless website created to shove more ads in people's faces is not art.


I assumed OP was using a different definition of art (https://en.wiktionary.org/wiki/art#Noun):

> 2. (countable) Skillful creative activity, usually with an aesthetic focus.

> She's mastered the art of programming.

..

> 7. (countable) Skill that is attained by study, practice, or observation.

If that doesn't convince you, compare with such phrases as "The Art of War", "artisan bread", or "liberal arts" (which includes topics like philosophy).


Good software solving an interesting problem can be art.


Got any examples? I'd love to see them.

Something that's beautiful in its problem set (doesn't, for instance, act as glue between other, poorly designed systems and thereby reflect their flaws), beautiful in its solutions (doesn't solve too much, nor too little, doesn't make too many assumptions), beautiful in its use (user experience, performance, etc.), and beautiful in its implementation (quality under the hood, no accidental complexity, properly factored, easy to understand and maintain) is quite rare for any problem with a scope larger than say "cat", and even most programs of that size make sad assumptions about things (e.g. are in languages where things are type/memory safe as a practice of the authors' diligence, not verifiable as a matter of course) or have to reflect the complexity of the OS/machines they run on for various reasons (performance, security, etc.).


One off the top of my head is Abrash's perspective correct rasterizer for the original Quake. Come to think of it, the BSP+PVS system for visible surface determination that they used was also pretty amazing.



and food (software is eating the world)


No it's about right and wrong. That's why software is fundamentally binary.


That's what people make it about. It's classic misdirection.

You know, the 20th century American sci-fi writer Robert A. Heinlein in his 1953 novel Assignment In Eternity wrote "man is not a rational animal; he is a rationalizing animal." And Aristotle said that man was a political animal. I deduce from that that "man is not a political animal; he is a politicizing animal."

That's why we get arguments and debates around these subjects and they become so heated. The truth of the matter is that the only way to compute the answer to any given programming debate would be to implement all the suggestions in parallel universes and see which one doesn't inadvertently cause World War III. You can't compute the answer, so you go off heuristics: a) what's popular in the wider community (fashion), b) what's popular in your local community (politics), and c) what's popular with you (art).

I need to write a full article on this viewpoint at some stage.


Usually software design debates (such as this one) are not about the state of a software application right now (the 0's and 1's), but about how quickly it can change, how well it can be understood, and how amenable it is to debugging. None of those are binary concepts. Not to mention that software applications can have many states to reason about. And many (most?) applications are undecidable, and hence resistant to analytical closure.


Who decides if something is right or wrong? Have you never heard a developer debating "is this a bug? is this a feature?"


This is a particularly thorny issue as a junior developer. There's the issue of the junior getting too big for their britches, but there's also the issue of seniors being set in their ways.


As mid level developers, how can we resolve this impasse?


There is no ultimate solution. Just continual efforts to make things better, gain deeper understanding, remove unnecessary complications, and improve overly simplistic solutions.

If you were simply making a joke, forgive me =)


No joke at all, sadly I see this day to day.


Not sure this is a job for a mid-level developer? Sounds more like a management issue.


Judging by the initial half-second of chaos on page load there's certainly something complex about the javascript this person is using to communicate these words.


I used to have the content hidden before the javascript styled it, but people on this very site complained that they couldn't read my pages without javascript enabled.


Cool, that sounds annoying :) I didn't mean to be issuing criticism on a trivial side-issue. I kind of thought no-javascript pedants stopped being listened to in 2014.


>Having business requirements that interleave pure operations with effectful ones - "if the request exceeds one of our free account limits then check the user's subscription is still active in this external service" - is normal, like it or not.

Isn't that what the "imperative shell" is about?
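As I understand that pattern, applied to the quoted requirement it would look something like this (a sketch, names invented): the core is a pure decision function, and the shell performs the effectful subscription lookup only when the core asks for it.

    sealed trait Decision
    case object Allow extends Decision
    case object Deny extends Decision
    case object CheckSubscription extends Decision // core requests an effect

    // Functional core: no I/O, trivially unit-testable.
    def decide(used: Int, freeLimit: Int): Decision =
      if (used <= freeLimit) Allow else CheckSubscription

    // Imperative shell: the only place that talks to the external service.
    def handleRequest(userId: Long, used: Int, freeLimit: Int,
                      isActive: Long => Boolean): Decision =
      decide(used, freeLimit) match {
        case CheckSubscription => if (isActive(userId)) Allow else Deny
        case other             => other
      }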


This article reads into the article it replies to and hand-picks a few points.

While it actually agrees with the article it's referring to, the wording is argumentative, which is rather immature imho.


Couldn't read it because of the terrible font, but I'm sure I disagree with all he says.


I read your comment expecting to see something like Papyrus, but it was just Georgia and Times New Roman. What's the fuss about?


Until there is some reliable way of comparing the complexity of two things, when people say something is “complex” what they usually mean is that they don’t like it or it’s unfamiliar.

Or better yet, your complexity is incidental, mine is essential. I was forced to make it complex, you did it because you’re not trying hard enough!

As Rich Hickey’s dictionary reminds us, to complect is to entwine together things better left separate. If you aren’t using the word in this sense, if you aren’t identifying the specific concepts which are complected but should be separated, you’re using fancy words to say you don’t like something. I’ve noticed we tend to do that, a lot.

I’m very wary of doing this in my own thinking, personally. It’s okay to just not like something, and it’s more honest just to say so than to use objective-sounding words in an attempt make it about something other than one’s own taste.


As someone who has read Moby Dick front to back I can agree with this argument. I had a colleague once, really bright engineer but not as steadfast as I was at the time. I suggested he read this amazing novel and he instead opted for the book on tape. During a long drive he almost died from boredom. Literally! He almost fell asleep at the wheel. While he disagreed with me that manually reading the book was the optimal approach he was not trying to make things more complex. He was merely trying to take the easy way out. This grave mistake almost cost him his life. Consider this tale next time you are thinking about not putting in some work!


This comment feels like a markov chain wrote it. Do you really find parallels between this anecdote and reality? What if we presented an anecdote of the opposite to you, would you change your worldview entirely? If not, why should anyone who reads this?


+100 points for that analogy. "Your prose reads like a Markov Chain". That is Gold. I'm putting that on a T-Shirt.


Could you provide a link without a referral?


I don't think the point of most comments is to change someone's worldview entirely (unless of course they are disputing religion or something on that level). Perhaps they might help to mold it over time, but if a single comment can change the way you view the world, I think your view was too simplistic to start with.


Alexa, tell me a story.


You can say "Alexa, read me Moby Dick".


Alexa, read me Moby Dick while I drive home from work.



