Hacker News | nickelpro's comments

"CMake is too hard to debug, I'm stuck doing confusing print statements everywhere when I misconfigure something"

"Good news, we added a debugger"

"CMake has a debugger? Who would ever want that?"

Sigh.

Working on CMake has taught me a lot about the futility of pleasing everyone.
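For reference, recent CMake versions (3.27+) do ship an interactive debugger speaking the Debug Adapter Protocol. A typical invocation looks something like this (the pipe path is an arbitrary example):

```shell
# Launch a configure run with the DAP debugger attached to a named pipe;
# a DAP client (e.g. the VS Code CMake Tools extension) connects to the
# same pipe to set breakpoints in CMakeLists.txt and step through.
cmake -S . -B build --debugger --debugger-pipe /tmp/cmake-dap.sock
```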


That's not my point. To make myself clear: CMake is terrible. You can put lipstick on a turd, but it's still a turd. Most languages have build scripts written in the language itself, but C++ projects by and large use CMake.

It's the brittleness. It likes to work on the dev machine and not much else.


Fault tolerance is a hard problem, assembling qubits for simultaneous gate operations is another hard problem. There are several dozen others.

It is exceptionally unlikely a CRQC will be achieved in our lifetimes, if ever. The closer example is economically viable fusion power production, which today has better odds than CRQC but remains solidly in the "maybe" zone after decades of global investment, even though fusion weapons had been achieved half a century beforehand.

The bombs were actually relatively easy problems, in the scheme of things.

It is never wise to listen to people whose jobs and funding are tied to the development of a technology on the question of when that technology will arrive. The answer is always "soon".


Fusion also came to my mind but after thinking about it for longer I think it's a bad argument. The challenge with fusion is mostly around scale and efficiency to make it competitive against other energy sources (and net energy positive in the first place).

For CRQC it doesn't matter if they're massive expensive energy monsters. Even being able to break a single chosen key is enough to be a problem and once you can do one you can definitely do ten or a hundred.


They're just different definitions of success.

For fusion the bar is "economically viable", in the current discussion for QC the bar is "cryptographically relevant".

They are comparable in that, to meet either criterion, a variety of unsolved engineering challenges must be overcome. For both, some of those problems have no clear and obvious solution that a simple application of resources and time will reach.

Currently unknown innovations are required, unknown unknowns lurk in the dark corners, and all projections are relying on the assumption such innovations will arrive in a timely fashion and the unknown unknowns will be harmless glitches.

Neither is likely impossible, but betting on timelines is a fool's game. This isn't the NYT publishing that man-made flight is a million years away two months before the Wright brothers flew at Kitty Hawk, waiting for the right conglomeration of otherwise sound engineering to materialize in one place. It's more like saying level 5 self-driving cars are two years away: a perpetually delayed technology for which all the problems are well known and no new innovations are imminent.


I wanted to ship import std in 4.3 but there are some major disagreements over where the std.o symbols are supposed to come from.

Clang says "we don't need them", GCC says "we'll ship them in libstdc++", and MSVC says "you are supposed to provide them".

I didn't know about that when I was working on finishing import std for CMake, and I accidentally broke a lot of code in the move to a native implementation of the module manifest format, so everything got reverted and put back into experimental.
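For anyone wanting to try it anyway, the experimental path in recent CMake looks roughly like this. This is a sketch, not a stable interface: the gate UUID changes every release, so check the experimental-features documentation for your CMake version rather than copying a value from anywhere.

```cmake
cmake_minimum_required(VERSION 3.30)

# Opt in to the experimental "import std" support; the actual UUID is
# version-specific and intentionally not reproduced here.
set(CMAKE_EXPERIMENTAL_CXX_IMPORT_STD "<uuid-for-your-cmake-version>")

project(demo CXX)

set(CMAKE_CXX_STANDARD 23)
set(CMAKE_CXX_STANDARD_REQUIRED ON)

# Ask targets in this directory to be built against 'import std'.
set(CMAKE_CXX_MODULE_STD ON)

add_executable(demo main.cpp)
```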


That's really interesting info, thanks!


This was considered during standardization. The feeling among tool developers at the time was it was "close enough" to Fortran modules to be mostly solvable.

This was wrong, mostly because C++ compiler flag semantics are far more complicated than Fortran's; you live and you learn. The bones of most implementations are identical to Fortran's though, and we got a ~3 year head start on the work because of that.

Ninja already had the dyndep patch ready to go from Fortran, CMake knew basically how to use scanners in build steps. However, it took longer than expected to get scanner support into the compilers, which then delayed everything downstream. Understanding when BMIs need to be rebuilt is still tricky. Packaging formats needed to be updated to understand module maps, etc, etc.

Each step took a little longer than was initially hoped, and delays snowballed a bit. We'll get there.


BMIs are not considered distributable artifacts and were never designed to be, same as the PCHs and clang-modules which preceded them. Redistribution of interface artifacts was not a design goal of C++ modules, just as redistribution of CPython bytecode is not a design goal of Python's module system.

Modules solve the problems of text substitution (headers) as interface description. It's why we call the importable module units "interface units". The goals were to fix all the problems with headers (macro leakage, uncontrolled export semantics, Static Initialization Order Fiasco, etc) and improve build performance.

They succeeded at this rather wonderfully as a design. Implementation proved more difficult but we're almost there.
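As a concrete illustration of "interface units" replacing textual inclusion, a minimal named module might look like this (file names, extensions, and the compiler flags needed to build it all vary by toolchain; this is only a sketch of the semantics):

```cpp
// math.cppm — a module interface unit (name and extension are illustrative)
export module math;

// Exports are explicit and opt-in, unlike a header, where everything
// textually included leaks into the consumer.
export int square(int x) { return x * x; }

// Not exported: invisible to importers, so no accidental API surface.
int detail_helper(int x) { return x + 1; }
```

```cpp
// main.cpp — importing pulls in the declared interface only: no macro
// leakage, no include-order sensitivity.
import math;

int main() { return square(4) == 16 ? 0 : 1; }
```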


CPS[1] is where all the effort is currently going for a C++ packaging standard; CMake shipped it in 4.3 and Meson is working on it. The pkgconf maintainer said they have vague plans to support it at some point.

There's no current effort to standardize what a package registry is or how build frontends and backends communicate (a la PEP 517/518), though it's a constant topic of discussion.

[1]: https://github.com/cps-org/cps
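To make the discussion concrete: a CPS file is just a JSON description of a package's components. A minimal hand-simplified example along the lines of the spec's samples might look like this (field values, the version string, and the component layout here are illustrative, not copied from the spec):

```json
{
  "name": "sample",
  "cps_version": "0.13.0",
  "components": {
    "sample": {
      "type": "archive",
      "location": "@prefix@/lib/libsample.a",
      "includes": ["@prefix@/include"]
    }
  }
}
```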


The intended value is difficult to discern in AI-written pieces.

I agree with both of you: there are some interesting tricks here for how a website builds anti-bot protection, but the AI sloppification frames it as a consumer-protection issue without delivering on that premise.

It is a reasonable criticism that the post does not deliver a "so what?" on its basic framing.


Because it's both slow and terrible?

You generally do not want to simulate or describe raw gate-level netlists. Both languages are capable of that; old-school Verilog (not SystemVerilog) is still the de facto netlist exchange format for many tools.

It's just aggravatingly slow to sim and needlessly verbose. Feeding high-level RTL to Verilator to do basic cycle-accurate sim has exceptionally fast iteration speed these days.
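As a point of comparison, driving a design through Verilator's compiled C++ simulation flow is close to a one-liner these days (`top.v` and `sim_main.cpp` are placeholder names for the RTL and the C++ testbench):

```shell
# Translate the RTL into C++, compile it together with a C++ testbench,
# and build the simulation binary under obj_dir/.
verilator --cc --exe --build -j 0 top.v sim_main.cpp
```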


Is it really, if you restrict yourself to sensible design practices? You generally want to simulate simple clocked logic with a predefined clock; most of the time anything else is a mistake or bad design. So just: on the rising edge of clk, next_state <= fn(previous_state, input). It seems to me VHDL and Verilog are simply at the wrong abstraction level, and by that they make simulation needlessly complicated and design easy to get wrong. If they had the concept of clocks built in instead, none of this would be necessary and many bugs would be avoided (but I'm no expert on simulator design, so I might be missing something...)


I agree basically with everything you're saying, but that's not arguing for raw gate netlists. If anything it's arguing for even higher levels of abstraction where clock domains are implicit semantic contexts.

Many new school HDLs are working in this space and they couldn't be farther from the "representative of what digital circuits are constructed from" idea. Often they're high-level programmatic generators, very far from describing things in terms of actual PDK primitives.


In a way it's further away, but in another way it's actually closer to how real hardware works: clock (and reset) trees are real physical things which exist on all digital chips.


SystemVerilog basically fixes this with always_comb vs always_latch.

There's no major implementation that doesn't warn, or even fail the flow, on accidental latch logic inside an always_comb.
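The distinction is easiest to see side by side (signal names are arbitrary):

```systemverilog
// always_comb declares combinational intent, so tools can flag the
// missing else branch that would otherwise silently infer a latch.
always_comb begin
    if (sel)
        y = a;
    // no else: y would hold its value -> inferred latch; linters and
    // compilers warn (or error) here
end

// An intentional latch: say so explicitly.
always_latch begin
    if (en)
        q = d;
end
```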


https://wayland.app/protocols/

Click any protocol; very few outside the core and the absolutely essential extensions have universal support.

