I think what he's saying is the architectural decisions outweigh the implementation language.
When I was taught Verilog for IP implementation, one thing I noticed is that people get caught in the trap of trying to abstract away the hardware or approach it from a higher level. Haskell/Verilog 2001/SystemVerilog all give us tools to do this. However, when trying to make real silicon, you need to understand what is actually getting built (i.e. know exactly how many flip flops you're creating and how they fan out) and then use the language to describe it. If you use a 'for' loop to try to do computation, as you might in a programming language, you could end up with something entirely unexpected or unsynthesizable.
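To make the 'for' loop point concrete, here is a minimal sketch (module and signal names are illustrative): the loop does not iterate over clock cycles the way a software loop iterates over time; synthesis unrolls it into N adders chained in a single cycle, which can be a far longer combinational path than the author intended.

    module sum_unroll #(parameter N = 8) (
      input  wire [N*8-1:0] data,  // N packed 8-bit operands
      output reg  [15:0]    sum
    );
      integer i;
      always @* begin
        sum = 16'd0;
        // Looks like iteration, but synthesis unrolls this into
        // N adders in series, all evaluated in one cycle.
        for (i = 0; i < N; i = i + 1)
          sum = sum + data[i*8 +: 8];
      end
    endmodule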
Traditionally you first design your module conceptually on a whiteboard (or Excel, Visio, etc.), then implement it in an HDL. Because of the influx of software engineers trying to get into hardware (via FPGAs, etc.), there has been a trend toward abstracting away the details of the implementation, and this can cause a lot of confusion.
That said, I've heard of projects that already translate native Haskell to HDL with some success. I'm not a programmer so I don't claim to understand whether it's a good idea, but I still think understanding exactly what's being output is important for knowing whether it can perform reasonably, especially if you're doing something of any complexity.
FWIW, it is quite easy to write Verilog code that ends up being unsynthesizable, since the language was originally designed for simulation, not synthesis. Many of the alternative HDLs, such as UC Berkeley's Chisel (https://chisel.eecs.berkeley.edu/), are designed with the express goal of making it impossible (or at least quite difficult) to write unsynthesizable code.
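For instance (a hedged sketch, not taken from the parent comment): the following is perfectly legal Verilog that simulates fine, yet has no gate-level equivalent, because '#' delays, 'wait', and unclocked 'always' blocks are simulation constructs.

    module sim_only (input wire req, output reg ack);
      initial ack = 1'b0;   // legal in simulation; no meaning for ASIC synthesis
      always begin
        wait (req);         // procedural wait: no hardware mapping
        #5 ack = 1'b1;      // '#' delay is simulation time, not hardware
        wait (!req);
        #5 ack = 1'b0;
      end
    endmodule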
Also, though figuring out what Verilog to write is not difficult if you've properly thought out the microarchitecture, it can be rather tedious and error-prone to actually write it. I'm not sure how CLaSH works, but Chisel allows you to essentially script generation of hardware using Scala. This removes some of the tedium of writing Verilog and also encourages code reuse (for instance, by allowing you to generate a 32-bit adder and an 8-bit adder using the same code but with different parameters).
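For the adder example specifically, plain Verilog-2001 parameters already get you width reuse; the Chisel/CLaSH pitch is that you get a full programming language for the generators parameters can't express. A minimal parameterized sketch (names are illustrative):

    module adder #(parameter WIDTH = 32) (
      input  wire [WIDTH-1:0] a,
      input  wire [WIDTH-1:0] b,
      output wire [WIDTH-1:0] s
    );
      assign s = a + b;
    endmodule

    // One description, two instances of different widths:
    //   adder #(.WIDTH(32)) add32 (.a(x32), .b(y32), .s(z32));
    //   adder #(.WIDTH(8))  add8  (.a(x8),  .b(y8),  .s(z8));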
Thank you for explaining my thought. I'm not good at English.
In my experience, it is quite easy to describe the hardware logic if the architecture is designed well. So what I meant by "Visio and Excel are much more important" is that the architecture should be concise and cycle-accurate. Then the Verilog coding is just a piece of cake.
The problem is that despite the Verilog being relatively easy, it's still incredibly tedious and error-prone.
It's amazing how Verilog manages to be too low level and too high level at the same time. It's a simulation language not originally intended for synthesis, so it doesn't have access to hardware primitives, and requires you to write specific patterns to ensure they're inferred correctly. But at the same time, it's too low level to even allow you to abstract those patterns.
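The flip-flop idiom is the canonical example of such a pattern: the tool never sees a register primitive, it infers one from this exact procedural shape (a sketch; names are illustrative).

    module dff_pattern (
      input  wire clk,
      input  wire rst_n,
      input  wire d,
      output reg  q
    );
      // Synthesis infers a D flip-flop with an asynchronous
      // active-low reset from this specific template.
      always @(posedge clk or negedge rst_n) begin
        if (!rst_n) q <= 1'b0;
        else        q <= d;
      end
    endmodule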
The need to know exactly what is being built is not completely incompatible with the notion of abstraction. Sure, trying to apply software ideas to hardware with no understanding is a recipe for disaster, but that's not what people are suggesting. The goal is to recognise and abstract patterns in hardware design.
Your example of for loops being fragile is actually a good argument for higher level abstractions: maps and folds are much better tools for working with hardware, since they constrain you to a specific hardware layout, and make it clear what's happening.
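Verilog's closest native analogue is the generate loop, which at least names replication as structure rather than computation; a map in CLaSH or Chisel plays the same role with less ceremony. A minimal sketch (names are illustrative):

    module map_not #(parameter N = 8) (
      input  wire [N-1:0] in,
      output wire [N-1:0] out
    );
      genvar i;
      generate
        for (i = 0; i < N; i = i + 1) begin : g
          assign out[i] = ~in[i];  // one inverter per bit: a structural "map"
        end
      endgenerate
    endmodule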
>Traditionally you first design your module conceptually on a whiteboard (or Excel, Visio, etc.), then implement it in an HDL.
So wouldn't it be nice if the language you used could express the same concepts you use in your higher level diagrams?
How is this any different from compiling C to assembly? Why would higher-level languages create unsynthesizable circuits? You trust the C compiler to create the proper instructions for your target architecture, so I don't see why the same can't be done with a Haskell DSL that compiles to Verilog.
In several HDLs, a "boolean" value is not simply true or false: Verilog signals take four states (0, 1, x for unknown, and z for high impedance), and VHDL's std_logic has nine values, adding weak drives like 'L' and 'H' (distinct from '0' and '1') plus uninitialized and don't-care.
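A quick simulation sketch of the Verilog side (illustrative names): an uninitialized reg reads x, an undriven wire reads z, and x propagates through logic.

    module four_state_demo;
      reg  a;             // uninitialized reg: simulates as x (unknown)
      wire w;             // undriven wire: z (high impedance)
      wire y = a & 1'b1;  // x propagates: x & 1 = x

      initial begin
        #1 $display("a=%b w=%b y=%b", a, w, y);  // prints a=x w=z y=x
        a = 1'b1;
        #1 $display("a=%b y=%b", a, y);          // prints a=1 y=1
      end
    endmodule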
My experience with Verilog is that it's very easy to write things that look fine and simulate correctly, then fail in hardware; the semantics of the language are just wrong.
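The textbook instance is an incomplete sensitivity list (sketched here, not taken from the parent comment): the simulator honors the list, so y only updates when a changes, while synthesis ignores the list and builds a plain mux, and the silicon you get does not match what you verified.

    module mismatch (
      input  wire a,
      input  wire b,
      input  wire sel,
      output reg  y
    );
      always @(a) begin   // 'b' and 'sel' are missing from the list
        if (sel) y = a;   // simulation: y goes stale when only b/sel change
        else     y = b;   // synthesis: an ordinary 2:1 mux, list ignored
      end
    endmodule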
Higher-level languages inevitably come with built-in semantics that the programmer takes for granted but that can't be synthesized directly to hardware. In C, it's the function call stack. In Haskell, it's higher-order functions and recursive data structures (and more). You could in theory create some runtime that you'd compile to hardware for your program to be synthesized to, or "run on"... but then you'd just be building a straight-up computer, wouldn't you. ;-)
None of these things are relevant in most HDLs implemented as DSLs in high-level languages. The point of most of the HDL work in Haskell (for example Lava and Bluespec) is to provide primitives to talk about hardware and to use a sane language as a way to manipulate them to build larger specifications. It is embarrassing that people use tools that allow you to write unsynthesizable code.
A computer is a much simpler abstraction and much less leaky than a circuit model.
Yes, in theory a computer could take a high-level description of a circuit and turn it into a very efficient hardware implementation. In practice our computers are not good enough, the same way they were not good enough for compiling high-level languages in the '70s, when people wrote assembly by hand.