
It's there, but yes, the home page is very confusing. I lost interest very fast.

Weak arguments in the article with badly chosen examples.

If one wanted to criticize OCaml syntax, the need for .mli-files (with different syntax for function signatures) and the rather clunky module/signature syntax would be better candidates.


I actually rather like the mli-files. It's a nice file to read, with the documentation and externally available symbols only. However, the fact that the syntax is so different is a bit annoying.

Sometimes I wrote (haven't written OCaml for some time now..) functions like:

    let foo: int -> int = fun x ->
      ..
just to make them more similar to the syntax

    val foo: int -> int
in the module types.


The problem with mli files, and the counter-argument to “they’re good documentation”, is that, with a few visibility annotations (`pub let`, etc.), they could be auto-generated. Then you don’t have to write your entire public interface twice (sans the inferred `let` types, but I already prefer to make those explicit in the `ml`, because otherwise you get much more verbose type errors when they don’t align with the `mli`).


I actually like the mli files. It's a separate place to describe the PUBLIC API, and a good place for documentation. Now you don't clutter your code with lots of long comments and docstrings.


Are there any recommendable sources for learning how to solve a classic cube, and the concepts behind it?


There are a lot of methods, optimized for different purposes. Some are easy to learn, but take a very large number of moves to solve the cube. Some are exactly the opposite: difficult to learn, but they enable you to solve the cube in just a few seconds. Others are optimized for solving the cube in the fewest possible number of moves, but require so much thinking that they are not suitable for fast solutions. Others again are optimized for blindfolded solving.

My two favorite methods are Roux and 3-style.

Roux is the second most common method for speedsolving. Compared to the more popular CFOP method, Roux is more intuitive (in the sense that you mostly solve by thinking rather than by executing memorized algorithms), and requires fewer moves. Roux is much more fun than CFOP, if you ask me, and for adults and/or people who are attracted to the puzzle-solving nature of the cube rather than to learning algorithms and finger-tricks, I think it's easier to learn. Kian Mansour's tutorials on YouTube are a good place to start learning it.

3-style is a method designed for blindfolded solving, but it's a fun way to solve the cube even in sighted solves. It's a very elegant way to solve the cube, based on the concept of commutators. It takes a lot of moves compared to Roux, but the fun thing is that it can be done 100% intuitively, without any memorized algorithms (Roux requires a few, though not nearly as many as CFOP). It's satisfying to be able to solve the cube in a way where you understand and can explain every single step of your solution. As an added bonus, if you know 3-style, you can easily learn blindfolded solving, which is tremendously fun, and not nearly as difficult as it sounds.

Edit: If you do decide you want to learn, make sure you get a good modern cube. The hardware has advanced enormously since the 1980s; modern cubes are so much easier and more fun to use. There are plenty of good choices. Stay away from original Rubik's cubes; get a recent cube from a brand like Moyu, X-man or Gan.


I used to be able to solve the 3x3 in high school using memorized algorithms and then I lost interest since there was no reasoning involved. Your comment makes me want to pick it back up and learn 3-style, so thank you for the clear explanation!


If the reasoning is what's fun, then the thing to do is to try other shapes and styles of puzzles besides the cube.

This is my collection: https://imgur.com/v9OuYNw

Like you, I learned the 3x3x3 in high school via memorized algorithms, and that was only so interesting. Years later my brother got me a Megaminx (the dodecahedron equivalent to the 3x3x3 cube, third one in the top row there) and I was absolutely fascinated by learning to solve that by porting what I knew from the cube. From there I got all those other shapes as well. The most interesting ones to search by name: Dayan Gem 3 (the one that looks like the Star of David), Face-Turning Octahedron (last one in the second row), Helicopter Cube (to the right of the 3x3x4), Rex Cube (right from the Helicopter Cube).


Even with CFOP, there is a large amount of intuition needed to break below the 25-second limit, mostly because of lookahead. During that phase, you need to train your fingers to do moves while your brain anticipates the next ones. There are no real formulas involved; it's really about intuition, pure skill, and multitasking.

I have hit a wall there personally.


I love the Roux method! I just went to a competition this weekend and got my personal record: a 9.39-second average with Roux.

The unfortunate part is that beginner tutorials for Roux kind of suck.


Congrats, that's an awesome average! I wish I was that fast. I don't time myself often, but when I do, I usually end up somewhere around 15 seconds. My efficiency is not bad, but my hands are just too slow.

I agree about beginner tutorials. There are some decent Roux tutorials, but they are mostly not targeting complete beginners. I believe it should be possible to make a Roux-based beginner method that is even simpler than the popular layer-by-layer beginner methods most new cubers learn. If you think about it, it seems almost obvious. If efficiency is not a concern, the first two blocks of Roux have to be simpler than the first two layers of a layer-by-layer approach, since you are solving a subset of the first two layers. CMLL is also obviously simpler than the CFOP last layer. The only thing that remains is the last six edges, and that's simple enough that I think beginners could figure it out by trial and error. With the right simplifications (at the expense of efficiency) and good pedagogy, I therefore think Roux is ideally suited for teaching to complete beginners. Unfortunately, nobody has done it yet.


I’ll add my vote for Roux in terms of pure fun. And there is more freedom to play between fastest solves and fewer moves with more planning.


IMO, the foremost source is your own observations. The 3x3 cube is very tactile, so some moves are just natural.

It also helps to develop some sort of notation for yourself. This way you can track and repeat your moves.

Solving by layers is kind of logical, so solving one side (the first layer) is not hard. Then some experimentation with rotation sequences that temporarily break the solved layer/face and then reassemble it will lead you to discover moves that swap edges into the second layer.

The hardest part is then the third layer. Again, the notation and your observations help chart your way through.

A curious discovery is that some repeated pattern of moves may seem to totally shuffle the cube, yet, if you keep repeating it, eventually returns the position to the starting state. It has a kind of "period".

Have fun.


Solving by layers is logical, it's what most beginners learn, and it is kind of how CFOP (the most popular speedsolving method) works. Nevertheless, it's not what I would recommend. The problem with solving layer by layer is that you are sort of painting yourself into a corner from the beginning. After you have finished the first layer, you can't really do anything without breaking the first layer. Of course it is possible (and necessary) to proceed in a way where you keep breaking and repairing the first layer while progressing with the rest of the cube, but the limited freedom of movement still makes the solution process needlessly complicated, and increases the move count.

In my opinion, it's better to start by solving a part of the cube that still leaves you with a significant amount of freedom of movement without breaking what you have already done. There are several ways to do this. My favorite method (Roux) starts by not making a full layer, but just a 3x2 rectangle on one side. This rectangle is placed on the bottom left of the cube. You still have a considerable degree of freedom: you can turn the top layer and the two rightmost layers without breaking your 3x2 rectangle.

The next step is to build a symmetrical 3x2 rectangle on the lower right side of the cube. This is quite easy to do by just using the top layer and the two rightmost layers, thus avoiding messing up the left-hand 3x2.

After finishing the two 3x2 rectangles (commonly known as the "first block" and the "second block"), the next step is to solve the corners on the top of the cube. This is the only algorithmic step of Roux: you use a number of memorized algorithms. However, the algorithms are shorter and simpler than those for the top layer of a layer-by-layer approach, because they are allowed to mess up everything along the middle slice (which hasn't been solved yet) and the edge pieces on the top of the cube.

After finishing the top corners, you are still free to move the middle slice and the top layer without messing up what you've already done. Fortunately, this is enough for solving (intuitively!) the remaining pieces. You can finish the solve by using only these non-destructive moves.

The Roux method, therefore, allows you to keep the maximum degree of freedom of movement (without destroying what's already been solved) all the way until the end. This is what allows it to have a very low move count, and what makes it easy to learn. It also gives you a lot of creative opportunities compared to CFOP and other layer-by-layer methods. Because of the increased freedom, there are more ways of doing things, and bigger scope for clever shortcuts, especially when building the first and second blocks.


Don't start with algorithms. Figuring out how to solve it is half the fun. If you want to be a speedcuber, you can always look up algorithms later, but you can't unlearn the algorithms once you learn them.


Perhaps it worked that way for you, but I'm not smart enough to figure out a 3x3 on my own, and I wouldn't have had the many, many hours of enjoyment that I did have if I hadn't learned any algorithms.

It's not like memorizing algorithms makes it trivial - there's still recognition/look-ahead and finger tricks to learn, if you want to get faster. And finding the optimal cross (in the CFOP method) during the 15-second inspection takes some thinking. I'm bad at that.


That’s a big part of why I’ve never learned to solve a Rubik’s cube. I’d rather learn how to figure out how to solve it than memorize an algorithm, and I don’t really have the time/motivation/interest to learn that, so I haven’t bothered.

My son, at age 9, loved learning these kinds of algorithms (he also learned how to compute square roots by hand from a YouTube video and would do random square root calculations to entertain himself, checking his answers against the calculator on my ex-wife’s kitchen Alexa).


Also true with Nethack. I will forever regret reading spoilers before I seriously tried to ascend.


The website this post is on is a wiki that explains how to solve a lot of different puzzles like the Rubik's cube.


I’m having a really hard time understanding even the “beginner’s method” on that wiki.

For example, it entirely glosses over how to solve the “first two layers” (F2L) on the left and back faces. It only ever explains F2L for the front and right faces. However, I can’t possibly achieve a “yellow cross” that way. I wonder why I can’t seem to find any source that actually explains it.


I generally prefer written tutorials over video tutorials, but cubing related stuff is an exception. Videos are easier to digest.

Here's a good beginner tutorial:

https://www.youtube.com/playlist?list=PLBHocHmPzgIjnAbNLHDyc...


Thank you!


Thanks. Looks promising!


Are you looking for a classic-cube-specific source, or for techniques that solve much slower but generalize to other shapes of permutation puzzle?


Rather for the classic 3x3x3 cube. I played with it in the '80s, but never understood the concepts behind it.


Perhaps a helpful addition: I collected my spare change over several years (about 9 kg in total, mostly lower-value coins, since the higher values can be spent easily).

After exchanging them at a bank for useful money: the average euro coin weighs about 3.6 grams and has an average value of 7 cents. :-)
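
Just to spell out the arithmetic those averages imply, here is a rough back-of-the-envelope in Swift (illustrative only; the names and the rounding are mine, and the inputs are just the estimates above):

    // Back-of-the-envelope from the averages above:
    // about 9 kg of coins, ~3.6 g and ~7 cents per coin.
    let totalGrams = 9_000.0
    let gramsPerCoin = 3.6
    let avgCentsPerCoin = 7.0

    let coinCount = totalGrams / gramsPerCoin             // ≈ 2,500 coins
    let totalEuros = coinCount * avgCentsPerCoin / 100.0  // ≈ 175 euros
    print(coinCount, totalEuros)

So roughly 2,500 coins worth about 175 euros, if those averages hold.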


Yes, that was my first thought too. The concept is similar.

Get rid of the source files and put every function/method in its own "editor". However, as far as I remember, navigation to/from callers was not possible in Smalltalk.


It was, even in the ancient original Smalltalk-80.

https://youtu.be/cpjOd5ge2MA?si=T15xO3ZMshetfQIU&t=265


You don't have to get rid of files; while the Smalltalk image was great and offered some advantages, there have been Smalltalks that worked on files, and Lisp showed that you can still have a working image with files.

> navigation to/from callers was not possible in Smalltalk

You can navigate implementations of and references to symbols, which works more or less the way it does in unannotated dynamic languages (like Ruby or Python).

There can be false positives, but it isn't as bad as it sounds unless you look up common symbols like `#value` and `#value:`.

In some Smalltalk environments and some extensions, you could narrow them by scope, like looking up references to a symbol within a particular package. I think Dolphin did that.


I've been using Sublime Text since shortly before 2.0 and Sublime Merge since day one. Yet I'm slowly losing interest in ST because of the lacking language integrations, and I probably won't pay for any future upgrades. However, Sublime Merge is still essential for me and a no-brainer.


> My understanding is that the primary purpose of CompCert is to make formally verified code that is extracted into C also get compiled by a compiler that is formally verified to preserve the intended semantics.

That's my understanding too. Code is written in high-level systems that generate C as output. C becomes more of an implementation detail in a hopefully more or less completely verified toolchain.


They may work as expected (and probably will), but they are not covered by the proof.


In the case of coq-to-ocaml: is it feasible to do an extraction to OCaml on the translated code and compare it with the original?


You can write programs in Coq and extract them to OCaml with the `Extraction` command: https://coq.inria.fr/doc/v8.19/refman/addendum/extraction.ht...

This is used by compcert: https://compcert.org/


Yes, I know, I mentioned the extraction.

My question was whether it can help detect translation errors from the first step.


I'm not sure which first step you are talking about. Typically, one would write the program directly in Coq and use the extracted code as-is.


I'm not fluent in Swift or async, but the line:

    for try await byte in bytes { ... }

reads to me like the time/delta is determined for every single byte received over the network, i.e. millions of times for megabytes sent. Isn't that a point for optimization, or do I misunderstand the semantics of the code?


The code, as the author makes clear, is an MWE. It provides a brief framework for benchmarking the behavior of the clocks. It's not intended to illustrate how to efficiently perform the task it's meant to resemble.


But it seems consequential. If the time were sampled every kilobyte, the code would be 1,000 times faster - which is better than the proposed use of other time functions.

At that point, even these slow methods are using about 0.5ms per million bytes, so it should be good up to gigabit speeds.

If that’s not fast enough, then sample every million bytes. Or, if the complexity is worth it, sample in an adaptive fashion.
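
For illustration, here is a minimal sketch of what batching the clock reads might look like. This is not the article's benchmark code; it assumes Swift 5.7+ for `ContinuousClock` and a `URLSession.AsyncBytes`-style stream, `handle` is a hypothetical per-byte callback, and the 1,024-byte interval is arbitrary:

    import Foundation

    // Sketch: read the clock once per 1,024 bytes instead of once per byte.
    func consume(_ bytes: URLSession.AsyncBytes, handle: (UInt8) -> Void) async throws {
        let clock = ContinuousClock()
        var lastSample = clock.now
        var byteCount = 0

        for try await byte in bytes {
            handle(byte)           // per-byte work stays in the hot loop
            byteCount += 1
            if byteCount % 1_024 == 0 {
                let now = clock.now
                let delta = lastSample.duration(to: now)  // one clock read per KiB
                lastSample = now
                _ = delta          // e.g. feed this into a throughput estimate
            }
        }
    }

The same structure works for any sampling interval; only the modulus changes.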


I’m not sure about Swift, but in C# an async method doesn’t have to complete asynchronously. For example, when reading from files, a buffer will first be read asynchronously, then subsequent calls will complete synchronously until the buffer needs to be “filled” again. So it feels like most languages can do these optimizations.


This is what Swift does.


Yeah, this is horrifying from a performance-design perspective. But in this case you'd still expect the "current time" retrieval[1] to be small relative to all the other async overhead (context switching for every byte!), and apparently it isn't?

[1] On x86 Linux, it's just a quick call into the vDSO that reads the TSC and some calibration data, a dozen cycles or so.


Note the end of the article acknowledges this, so this is clearly a deliberate part of the constructed example to make a particular point and not an oversight by the author. But it is helpful to highlight this point, since it is certainly a live mistake I've seen in real code before. It's an interesting test of how rich one's cost model for running code is.


The stream-reader userspace libraries are very well optimized for handling the kind of "dumb" usage that should obviously create problems. (That's one of the reasons Linux expects you to use glibc instead of making syscalls directly.)

But I imagine the time-reading ones aren't as optimized; people normally do not call them all the time.


They look very similar on macOS.

