
I did a deep dive into cosmology simulations ~a year ago. It was striking how much is extrapolated from the brightness of a small number of galaxy-surface pixels. I was looking at this for galaxies and stars, and observed something similar. The cosmology models are doing their best with sparse info, but to me the predictions about things like Dark Matter and Dark Energy seemed to be presented with more confidence than the underlying data supports. Not enough effort is spent trying to come up with new models. (Not to mention the efforts to shut down alternatives to Lambda CDM, or the lack of a better understanding of the consequences of GR and of the assumptions behind applying Newtonian instant-effect gravity in simulations.)

Whenever I read things like "This model can't explain the bullet cluster, or X rotation curve, so it's probably wrong" my internal response is "Your underlying data sources are too fuzzy to make your model the baseline!"

I think the most established models are doing their best with the data they have, but there is so much room for new areas of exploration based on questioning assumptions about the feeble measurements we can make from this pale blue dot.



That fuzziness can be quantified. It's called error bars. Whenever physicists perform a measurement, they always derive a confidence interval from the instruments they use. They take great care to account for the limits of each individual instrument, propagate the errors, and report the uncertainty of the final result.
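
To make that concrete, here is a minimal sketch of first-order error propagation (my own toy example, not from the thread or any paper; the quantity, formula, and numbers are purely illustrative):

    import math

    # Toy example: infer a distance d from a measured flux f and an
    # assumed absolute luminosity L, via d = sqrt(L / (4*pi*f)).
    # For independent inputs, the uncertainties add in quadrature:
    #   sigma_d^2 = (dd/dL * sigma_L)^2 + (dd/df * sigma_f)^2
    def distance_with_error(L, sigma_L, f, sigma_f):
        d = math.sqrt(L / (4 * math.pi * f))
        dd_dL = d / (2 * L)   # partial derivative of d w.r.t. L
        dd_df = -d / (2 * f)  # partial derivative of d w.r.t. f
        sigma_d = math.hypot(dd_dL * sigma_L, dd_df * sigma_f)
        return d, sigma_d

    # Hypothetical numbers, just to show the mechanics:
    d, sigma_d = distance_with_error(L=3.0e28, sigma_L=0.3e28,
                                     f=1.2e-9, sigma_f=0.1e-9)
    print(f"d = {d:.3e} +/- {sigma_d:.3e}")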

Consider figure 5 of the following article for example:

https://arxiv.org/abs/1105.3470

The differently shaded ellipses represent different confidence levels. For the largest ellipse, the probability of the true values lying outside it is less than 1%. We call that 3-sigma confidence.
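
As a sanity check on those numbers (my own sketch, not from the paper): for a 1D Gaussian, 3-sigma coverage leaves about 0.27% outside, which is indeed "less than 1%"; for a 2D parameter plane like that figure, the contour enclosing a given probability comes from the chi-squared distribution with 2 degrees of freedom:

    from scipy import stats

    # 1D Gaussian: convert sigma levels to coverage probabilities.
    for n_sigma in (1, 2, 3):
        inside = stats.norm.cdf(n_sigma) - stats.norm.cdf(-n_sigma)
        print(f"{n_sigma}-sigma: {inside:.4%} inside, {1 - inside:.4%} outside")

    # 2D confidence ellipse: the delta-chi^2 threshold whose contour
    # encloses probability p in a two-parameter plane.
    for p in (0.683, 0.954, 0.997):
        print(f"p = {p}: delta_chi2 = {stats.chi2.ppf(p, df=2):.2f}")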

> Whenever I read things like "This model can't explain the bullet cluster, or X rotation curve, so it's probably wrong" my internal response is "Your underlying data sources are too fuzzy to make your model the baseline!"

Well, then do some error analysis and report your results. Give us sigmas, percentages, probabilities. Science isn't based on gut feelings, but on cold hard numbers.


It's not just a question of instrumental error, though. There are also assumptions involved in interpreting the data from the instruments, and it's not generally possible to assign them reliable probabilities.

e.g. the first line of the article's abstract quoted above:

"Supernova (SN) cosmology is based on the key assumption that the luminosity standardization process of Type Ia SNe remains invariant with progenitor age."

If the results reported in the article are right, the confidence we should have in this assumption, and therefore in any results relying on it, has just radically changed.


That's moving the goalposts. I was specifically responding to concerns about fuzzy data.

It's true that assumptions have to be made, and those can and should be questioned, but that wasn't the concern of the comment I replied to.


My concern is model accuracy holistically: analyzing the likelihood of being correct, including all assumptions. I think the post you're responding to is on topic.


On the todo list! Not enough bandwidth, but hoping to get to that in the next year. Great point.

edit: That 1% figure doesn't sound possible unless it comes with its own set of assumptions that themselves need confidence levels!


Yeah, a lot of stuff seems to be based on this fuzzy data, which I also think is unreliable.



