Cool! I checked the source and noticed that even the LLM prefers a simplified, high-level Rust coding style: use value types such as String, use smart pointers such as reference counting, clone liberally, etc… instead of fighting the borrow checker gatekeepers.
That's the style I prefer when using Rust, too. Coming from Python, TypeScript, and even Java, this high-level Rust already yields an incredible improvement.
> Cool! I checked the source and noticed that even the LLM prefers a simplified, high-level Rust coding style: use value types such as String, use smart pointers such as reference counting, clone liberally, etc… instead of fighting the borrow checker gatekeepers.
Yeah, that tracks, because the AI is dumb as a bag of bricks. It can apply patterns off Stack Overflow, but it can hardly understand the borrow checker.
This is great. I think Apple bought Kuzu, an in-memory graph database, in late 2025 to support RAG in combination with foundation models like this. Even with such a small model, a comprehensive graph-RAG context over our personal data would be sufficient for a PA system. Do we know if we'll have access to this RAG data?
Can you at least read the article before criticizing them? They explicitly call out that they use Bayesian optimization (with a Gaussian process) for this. It is "AI," but not "LLM" like you think it is.
I feel like I'm not the only one who feels excited about the whole "compression while maintaining fidelity" trick in our AI era. In a way, it has a vibe similar to the early 2000s, when digital music became popular and the need for lossless compression was paramount. Sort of a Pied Piper moment for us now. Someone please make a Weissman score for this stuff.