> One of the huge things that I love about Rust is the lack of a runtime framework. I don't need to figure out how to bundle / ship / install a framework at runtime.
Rust has a runtime, it's just tiny and auto-bundled (for now). Modern .NET's support for self-contained bundling has gotten pretty good. AOT is getting better too, and AOT-ready code (swapping Reflection for Source Generators as much as possible, for instance) can enable some very heavy tree-shaking of the runtime.
Also, yeah, native embedding has gotten somewhat easier in recent years, depending on the style of API you want to present to native code. Furthermore, both Godot and Unity (differently) embed .NET/C# as options in game development. I certainly expect Typhon is primarily targeting Godot and/or (eventually) Unity embedding (once Unity finishes switching to coreclr to support more of these features), but maybe also Stride, a game engine written fully in C#.
D&D is a bit like Monopoly in that very few people play by the rules as written; instead, most tables play by a semi-unique/regional subset of the rules, with a mixture of house rules and DM preferences. Especially people who have been playing for years: not only have they had more time to house-rule and build DM opinions, but they may also have seen multiple versions of the rules over that time and interacted with a wider variety of other tables.
To some extent this is, weirdly, a good thing: if you want strictly enforced rules, you may just want to play a videogame instead. D&D succeeds best as a social lubricant: a framework in which social gaming (roleplaying) can be "fun". Rarely is strictly following rules "fun", especially socially with friends; the rules in D&D are meant to be guideposts and tools, with enough structure that people who want structure find comfort, and enough flexibility that the "fun" isn't lost in the process.
Which is a long way to say that you probably aren't going to learn the right lessons from a well-fuzzed computer spec of the rules. You'll probably learn more by asking the people you play with which rules they find important, asking them to explain the things you feel you don't understand, and asking them to suggest which chapters in which books to read to best improve your understanding for that group. At the end of the day, if the table seems too hard to play at, you might also just be playing with the wrong group, especially if you aren't having fun.
The third bullet is also presumably referring to C#'s long-standing support for unsafe { } blocks for low-level pointer math, as well as the modern tools Span<T> and Memory<T>, which are GC-safe low-level memory management/access/pointer-math tools in modern .NET. Span<T>/Memory<T> is a bit like a modest partial implementation of Rust's borrowing mechanics, achieved without changing much of how .NET's stack and heap work and without compromising much on .NET's bounds-checking guarantees, through an interesting dance of C# compiler smarts and .NET JIT smarts.
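A minimal sketch of what that looks like in practice (the buffer and values are made up; the APIs are real): Span<T> can view stack memory, arrays, or native memory through the same bounds-checked interface, with no unsafe block.

```csharp
using System;

class SpanSketch
{
    static void Main()
    {
        // Span<T> here wraps stack memory; it could equally wrap an array
        // or native memory with the same bounds-checked, GC-safe view.
        Span<int> buffer = stackalloc int[8];
        for (int i = 0; i < buffer.Length; i++) buffer[i] = i * i;

        // Slicing is O(1) and allocation-free: a new view over the same memory.
        Span<int> tail = buffer.Slice(4);
        tail[0] = -1; // writes through to buffer[4]

        Console.WriteLine(buffer[4]); // -1
    }
}
```

Out-of-range indexing on either view still throws, which is the bounds-checking guarantee the comment above refers to.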
The FFM API actually does cover a lot of the same ground, albeit with far worse ergonomics IMO. To wit,
- There is no unsafe block, instead certain operations are "restricted", which currently causes them to emit warnings that can be suppressed on a per-module basis; it seems the warnings will turn into exceptions in the future
- There is no "fixed" statement and frankly nothing like it at all; native code is just not allowed to access managed memory, period. Instead, you set up an arena to be shared between managed and native code
- MemorySegment is kinda like Memory<T>/Span<T> but harder to actually use because Java's type-erased generics are useless here
- Setting up a MemoryLayout to describe a struct is just not as nice as slapping layout attributes on an actual struct
- Working with VarHandle is just way more verbose than working with pointers
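For contrast, here is a sketch of the C# side being compared against in that fourth bullet (the Point struct is made up; StructLayout and Marshal are real APIs): the layout description is just attributes on an ordinary struct, and the marshaller can answer layout questions directly.

```csharp
using System;
using System.Runtime.InteropServices;

// The attribute on the struct *is* the layout description;
// no separate MemoryLayout object needs to be constructed.
[StructLayout(LayoutKind.Sequential)]
struct Point
{
    public int X;
    public int Y;
}

class LayoutSketch
{
    static void Main()
    {
        // The runtime can report size and field offsets straight
        // from the attributed struct definition.
        Console.WriteLine(Marshal.SizeOf<Point>());      // 8
        Console.WriteLine(Marshal.OffsetOf<Point>("Y")); // 4
    }
}
```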
> - There is no unsafe block, instead certain operations are "restricted", which currently causes them to emit warnings that can be suppressed on a per-module basis; it seems the warnings will turn into exceptions in the future
Which sounds funny, because C# has effectively gone the other direction. .NET's Code Access Security (CAS) used to heavily penalize unsafe blocks (and unchecked blocks, a C# relative that I don't think has a direct Java equivalent), limiting how libraries could use such blocks without extra mandatory code signing and permissions, and throwing all sorts of weird runtime exceptions in CAS environments with slightly wrong permissions. CAS is mostly gone today, so most C# developers only ever experience compiler warnings (and warnings-as-errors) when trying to use unsafe (and/or unchecked) blocks. More libraries can use them for low-level things than used to. (But also fewer libraries need to now, thanks to Memory<T>/Span<T>.)
> There is no "fixed" statement and frankly nothing like it at all; native code is just not allowed to access managed memory, period. Instead, you set up an arena to be shared between managed and native code
Yeah, this seems to be an area where .NET has a lot of strengths. Not just the fixed keyword, but also a direct API for GC pinning/unpinning/locking, and many sorts of "unsafe marshalling" tools that give native code direct pointers into managed memory. (Named "Unsafe" in this case because they warrant careful consideration before use, not because they rely on unsafe blocks of code.)
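A minimal sketch of that direct pinning API (GCHandle is the real API; the buffer and scenario are made up):

```csharp
using System;
using System.Runtime.InteropServices;

class PinSketch
{
    static void Main()
    {
        byte[] managed = new byte[16];

        // Pin the array so the GC won't move it; the raw address could
        // then be handed to native code for the lifetime of the handle.
        GCHandle handle = GCHandle.Alloc(managed, GCHandleType.Pinned);
        try
        {
            IntPtr address = handle.AddrOfPinnedObject();
            Console.WriteLine(address != IntPtr.Zero); // True
        }
        finally
        {
            // Always unpin: a leaked pinned handle fragments the GC heap.
            handle.Free();
        }
    }
}
```

The `fixed` statement is essentially a scoped, compiler-enforced version of the same pin/unpin pair.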
> MemorySegment is kinda like Memory<T>/Span<T> but harder to actually use because Java's type-erased generics are useless here
It's the ease of use that really makes Memory<T>/Span<T> shine. They're a lot more generally useful throughout the .NET ecosystem (beyond just foreign function interfaces), to the point where a large swathe of the BCL (Base Class Library; the standard library) uses Span<T> in one fashion or another for easy performance improvements (especially with the C# compiler quietly preferring Span<T>/ReadOnlySpan<T> overloads over almost any other data type, when available). Span<T> has been a "quiet performance revolution" under the hood of a lot of core libraries in .NET, especially just about anything involving string searching, parsing, or manipulation. Almost none of those gains have anything to do with calling into native code; many were in fact achieved by eliminating native code (and the overhead of transitions to/from it), moving performance-optimized algorithms that were easier to do unsafely in native code into "safe" C#.
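A small illustration of that string-parsing style (the input line is made up; AsSpan, Slice, IndexOf, and the span overload of int.Parse are real APIs): fields are parsed without allocating any intermediate substrings.

```csharp
using System;

class ParseSketch
{
    static void Main()
    {
        ReadOnlySpan<char> line = "2024-05-01,42,ok".AsSpan();

        // IndexOf/Slice operate on views of the original string;
        // the Substring-style intermediate allocations disappear.
        int firstComma = line.IndexOf(',');
        ReadOnlySpan<char> rest = line.Slice(firstComma + 1);
        int secondComma = rest.IndexOf(',');

        // int.Parse has a ReadOnlySpan<char> overload; no substring needed.
        int value = int.Parse(rest.Slice(0, secondComma));
        Console.WriteLine(value); // 42
    }
}
```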
It's really cool what has been going on with Span<T>. Some of the before/after micro-benchmarks of Span<T> migrations are really wild.
Related to the overall topic, Span<T> is said to be one of the reasons Unity wants to push faster to modern .NET, but Unity still has a ways to go before it uses enough of the .NET coreclr memory model to take real advantage of it.
Yeah, coming to C# from Rust (in a project using both), I’ve been extremely impressed by the capabilities of Span<T> and friends.
I’m finding that a lot of code that would traditionally need to be implemented in C++ or Rust can now be implemented in C# at no or very little performance cost.
I’m still using Rust for certain areas where the C# type system is too limited, or where the borrow checker is a godsend, but the cooperation between these languages is really smooth.
A lot of C#'s reputation for not being viable for Linux came from the other direction and a lot of FUD against Mono. There were a lot of great Linux apps that were Linux first and/or Linux only (often using Gtk# as UI framework of choice) like Banshee and Tomboy that also had brief bundling as out-of-the-box Gnome apps in a couple of Linux distros before anti-Mono backlash got them removed.
Also, yeah, today Linux support is officially maintained in modern .NET, and many corporate environments are quietly using Linux servers and Linux docker containers every day to run their (closed source) projects. Linux support is one of the things that has saved companies money in running .NET, so there's a lot of quiet loyalty to it just from a cost-cutting standpoint. But you don't hear a lot about that, given the closed-source/proprietary nature of those projects. That's why it is sometimes referred to as "dark matter development" from "dark matter developers": a lot of it is out there, quietly chugging along, rarely noticed in HN comments and places like that, and it doesn't seem to impact the overall reputation of the platform.
Yes, however, as acknowledged by the .NET team themselves in several podcast interviews, this is mostly Microsoft shops adopting Linux and saving on Windows licenses.
They still have a big problem gaining .NET adoption among those that were educated in UNIX/Linux/macOS first.
Mandy Mantiquila and David Fowler have had such remarks, I can provide the sources if you feel so inclined.
One take on it is that yes, the single dot operator was an ancient mistake, which is why so many programming language features are about making it smarter. Properties, as mentioned in this article, are an ancient way to make the dot operator look like "field" access while actually making method calls. Modern C# also picked up the question-dot operator (?.) for safer null traversal and the exclamation-dot operator (!., aka the "damnit operator" or the "I think I know what I'm doing" operator) for even less safe null traversal.
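The three flavors side by side (a minimal sketch; the Person type is made up):

```csharp
using System;

class Person
{
    // A property: reads like a field, but is actually a pair of methods.
    public string Name { get; set; } = "Ada";
    public Person? Manager { get; set; }
}

class DotSketch
{
    static void Main()
    {
        var person = new Person();

        // ?. short-circuits to null instead of throwing when Manager is null.
        Console.WriteLine(person.Manager?.Name ?? "(no manager)");

        person.Manager = new Person { Name = "Grace" };

        // ! tells the compiler "this is not null here, trust me".
        Console.WriteLine(person.Manager!.Name);
    }
}
```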
Most .NET projects that Linq originally targeted ran on the so-called "Server" and/or "Workstation" GCs (.NET has had very generic public names for its GCs for a long time, and they were also somewhat misnomers, because even some desktop apps would run on the "Server" GC and vice versa was possible [0]), where allocations were cheap and garbage collection was relatively cheap (because both GCs were multi-generational, had strong [but different] tuning for their generations, etc).
Unity inherited a much simpler Boehm GC from Mono. Under a (single-generation) Boehm GC, allocations are a bit more expensive and garbage collection is sometimes a lot more expensive. (A Boehm GC was much easier for C++ engine code to understand and interact with, which is also part of why the .NET modernization project for Unity got so complicated and still has such a ways to go.)
[0] Fun aside: in fact, modern docker advice for .NET is to switch "server applications" to use "Workstation GC" if you need to stack multiple containers on the same host because of differences in expected memory usage.
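That docker advice boils down to a one-line setting; a sketch of a runtimeconfig.json fragment (the config key is real; whether it fits your container depends on your workload):

```json
{
  "runtimeOptions": {
    "configProperties": {
      "System.GC.Server": false
    }
  }
}
```

Server GC reserves per-core heaps and trades memory for throughput, which is exactly the wrong trade when stacking many containers on one host.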
> In my experience it does not work very well outside of the sanctioned Linux distributions. Quirky heisenbugs and nonsensical crashes made it virtually unusable for me on Void. I doubt that's changed in the years that have since passed.
It's open source. Did you follow the spirit of Linux and file a bug report with as much sense of the crashes as you could make? Most OSS only supports as many distros as people are willing to test and file accurate bug reports for (and/or scratch the itch themselves and solve it). It seems a bit unfair to expect .NET to magically have a test matrix including every possible distro when almost nothing else does. (Testing other people's apps is part of what keeps distro maintainers employed, too.)
It probably has gotten better since then, for what it is worth. .NET has gotten a lot of hardening on Linux and a lot of companies are relying on Linux servers for .NET apps now.
At the very least there are very tiny Alpine-based containers that run .NET considerably well and are very well tested, so Docker is always a strong option for .NET today no matter what Linux distro you want on the "bare metal" running Docker.
> Most OSS only supports as many distros as people are willing to test
Linux distros don't differ too significantly from each other nowadays (systemd plus a different package manager most of the time), so I'm almost sure this is not the source of problems.
Nonetheless, I can only add that we have ridiculous slowdowns in some standard library network calls on Linux, and at that point it is just not true that it will "seamlessly run on Linux", unfortunately.
> Did you follow the spirit of Linux to file a bug report of as much sense of the crashes as you could make?
No, because the only reason I needed C#/.NET to work was to use an internal tool someone before me had written in C#/.NET. It was not really to explore C# or make it usable. I just threw out the old tool, wrote a new one in scheme so I could do my job, and moved on with my life. I don't particularly care about this spirit of Linux, and Microsoft's tooling being weirdly fragile isn't my problem. I assume they already know this is an architectural issue, hence why they specify supported distributions. On principle I believe solving the architectural issue is what they should be concerned about, rather than making new bandaids.
> Most OSS only supports as many distros as people are willing to test and file accurate bug reports
The problem is that most runtimes and standard libraries don't need to specify a notion of a "supported" distribution. At best, they just refer to platforms with pre-made packages while happily pointing other distributions to the git repo. Even complicated, highly abstract and weird ones don't make this kind of distinction. SWI-Prolog and its myriad of frameworks (which includes a full blown GNU Emacs clone) work out of the box anywhere. GHC and the RTS work flawlessly out of the box.
I understand (even if I don't feel the same way) why a comprehensive abstraction layer like .NET is evangelized. All the same I have to consider that it's a product of a multi-trillion dollar corporation, made to compete with the thing whose marketing tagline is "write once, run anywhere". That only makes the distro dependency stand in an even harsher relief, frankly.
You like .NET? Perfectly fine and valid, and I assume it actually works for you. Just indicating that "cross platform" is contingent on more than kernel and cpu architecture here, which is fairly unusual for this type of software. That's before we get into things like comparisons with ocaml, which I know is miserable on Windows and thus is often considered not really something you'd seriously consider using there. The .NET ecosystem essentially has the same problem outside of Windows where the grain and expectations of the tooling are counter-intuitive to the operating system and usual modus operandi of its users.
I think there is an architectural problem, but not where you seem to be expecting it. I got caught up in some low-level distro nonsense+drama from smashing my head against horrors deep in autoconf and automake, got a deep look into the Distribution Maintainer lifestyle, and saw how much Linux distributions are individual snowflakes despite presumably all being the same OS. As the old joke goes, "the only stable ABI on Linux is Win32".
.NET has a huge kitchen-sink standard library. Maybe the closer parallel is Python, and Python has had periods where it only supported a few named distributions, too. That's not currently the case, but "how the sausage is made" is still a lot grosser than you might expect, with some Distribution Maintainers maintaining entire forks and a lot of the work not done by Python directly. Python is everywhere because it became one of the favorite shell-scripting languages of Distribution Maintainers. (Which also exacerbated the Python 2 to 3 migration, because entire distros got stuck on 2 while all their shell scripts were rewritten.) (But also, if you want to compare Java's cross-platform story to .NET's, I think we need a long digression into how many Java runtimes there are and the strange and subtle incompatibilities of different distros' affiliations with one or another. In my youth I also made the mistake of trying to use a Java application as a regular application and accidentally dealt with deep distro incompatibilities. That was also not fun.)
I get it, you don't have to like .NET. I just think you have an inflated view of what "cross platform" means when it comes to Linux. Linux isn't just one platform. Most things are rebuilt from source constantly because the binary interfaces especially libc's/glibc's under them are constantly shifting like quicksand. See also the messes that are Flatpak and Snap and how much work they've done to try to build around distro incompatibilities (by building increasingly more complex wish-it-were-VMs).
> C# is a language that serves many masters and if you trace the origin of its featureset, you can see why each was created. Take the `dynamic` keyword: created to support interfacing with COM interop easier.
VB.NET's Object was created to make interfacing with COM easier. COM interop through Object was VB.NET's one key niche versus C# for many early years.
C#'s dynamic keyword was more directly added as part of the DLR (Dynamic Language Runtime, aka System.Dynamic) effort spurred by IronPython. It had the side benefit of making COM interop easier in C#, but the original purpose was better interop with IronPython, IronRuby, and any other DLR language. That's also why, under the hood, C#'s dynamic keyword supports a lot of DLR complexity/power. You can do a lot of really interesting things with `System.Dynamic.IDynamicMetaObjectProvider` [1]. The DLR's dependency on `System.Linq.Expressions` also points to how much later in time the DLR arrived compared to VB.NET's Object, which originally was "just" the VB7 rename of VB6's Variant (though it did also pick up DLR support).
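A small taste of that plumbing (DynamicObject is the real convenience base class over IDynamicMetaObjectProvider; the Bag type is made up): member lookups on a `dynamic` receiver are resolved at runtime by your code.

```csharp
using System;
using System.Dynamic;

// DynamicObject implements IDynamicMetaObjectProvider for you;
// override TryGetMember to answer member lookups at runtime.
class Bag : DynamicObject
{
    public override bool TryGetMember(GetMemberBinder binder, out object? result)
    {
        // Every member read resolves dynamically; here, to the member's name.
        result = $"you asked for {binder.Name}";
        return true;
    }
}

class DlrSketch
{
    static void Main()
    {
        dynamic bag = new Bag();
        // Neither Foo nor Bar exists anywhere at compile time.
        Console.WriteLine(bag.Foo); // you asked for Foo
        Console.WriteLine(bag.Bar); // you asked for Bar
    }
}
```

This is the same machinery that lets C# call into IronPython objects whose members only exist at runtime.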
The DLR hasn't been invested into in a while, but it was really cool and a bit of an interesting "alternate universe" to still explore.
The only place the IPv6 transition seems to be failing is in "command-and-control" corporate networks. (A majority of home/consumer/cellular users are quietly using IPv6 by default every day, per most statistics.) The lessons to be learned there don't seem to be technical but economic incentives.
Big companies believe that they have plenty of IPv4 space, especially because they've always been lax in how they read IPv4 RFCs and use IPv4 routing behind corporate firewalls. Big companies also have the most cash to buy IPv4 blocks as they go to auction. Big companies have massive firewalls and strict VPNs which also insulate them from IPv4 scarcity.
IPv4 leases don't impact enough companies' bottom lines today for them to feel the need to assess IPv6 support.
Solving those economic incentive problems would likely be a massive sociopolitical undertaking: you would need IANA and the RIRs to agree to inflate costs in various ways (and in the short term that might do a lot of harm to small countries already facing IPv4 inequity, and to the RIRs that lost the very earliest IPv4 assignment lotteries). You'd probably need new RFCs and political enforcement to support things like "taxing" company-to-company IPv4 block assignments. You'd probably need collusion or regulation from the big "Cloud Providers" to enforce higher costs on IPv4-only networking.
It would take those kinds of "strong-handed" tactics to speed up IPv6 adoption in corporate networks. Waiting for the "invisible hand" of the "free market" can be very slow and takes patience. That's mostly what we've been seeing with IPv6 adoption: the "invisible hand" is a lot slower than some people predicted, especially engineers who hoped technical superiority alone would be a market winner.
You are looking for mDNS, the modern name for zeroconf/Bonjour/etc. The URL suffix is .local (storage.local, myphone.local, myprinter.local). Most modern OSes support it out of the box, but they also don't advertise their names on mDNS until you ask them nicely (travel through a maze of Settings and Firewall options).
mDNS supports IPv6 just fine/works on IPv6 only LANs.