Similar story for myself. Building my mental model was a long, tedious climb from BASIC to Pascal to C and finally to ASM as a teen.
My recent experience is the opposite. With LLMs, I'm able to delve into the deepest parts of code and systems I never had time to learn. LLMs will get you to 80% pretty quickly - the code compiles and sometimes even runs.
Thank you! Yes, right now we are using Qwen for the LLM. They also released a TTS model that we have not tried yet, which is supposed to be super fast.
Impressive! The cloning and voice effect are great. There's a slight warble in the voice on long vowels, but it's not a huge issue. I'll definitely check it out - we could use voice generation for alerting on one of our projects (no GPUs on the hardware).
I spent a few months working on different edge compute NPUs (ARM mostly) with CNN models and it was really painful. A lot of impressive hardware, but I was always running into software fallbacks for models, custom half-baked NN formats, random caveats, and bad quantization.
In the end it was faster, cheaper, and more reliable to buy a fat server running our models and pay the bandwidth tax.
When I use "think" mode it retains context for longer. I tested with 5k lines of C compiler code and could get 6 prompts in before it started forgetting or generalizing.
I'll say that Grok is really excellent at helping me understand the codebase, but misnamed functions or variables will trip it up.
Not from a tech field at all, but would it do the context window any good to use "think" mode and then discard the thinking tokens once the LLM gives its final answer/reply?
Is it even possible to disregard generated tokens selectively?
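Conceptually yes: if you manage the chat history yourself, you can strip the reasoning tokens out of earlier turns before sending the next request, and some providers reportedly already do this server-side. A rough sketch of the idea in C, with made-up message types rather than any real API:

    #include <stdio.h>

    /* Hypothetical message record: each turn is tagged with a kind, and
     * the model's reasoning arrives as its own THINKING entry. All names
     * here are invented for illustration. */
    enum kind { USER, ASSISTANT, THINKING };
    struct msg { enum kind kind; const char *text; };

    /* Rebuild the history in place, keeping only user/assistant turns,
     * so earlier reasoning tokens stop eating the context window. */
    size_t prune_thinking(struct msg *hist, size_t n) {
        size_t kept = 0;
        for (size_t i = 0; i < n; i++)
            if (hist[i].kind != THINKING)
                hist[kept++] = hist[i];
        return kept; /* new history length */
    }

    int main(void) {
        struct msg hist[] = {
            { USER,      "why is my build slow?" },
            { THINKING,  "...long chain of reasoning..." },
            { ASSISTANT, "try incremental compilation." },
        };
        size_t n = prune_thinking(hist, sizeof hist / sizeof hist[0]);
        for (size_t i = 0; i < n; i++)
            printf("%s\n", hist[i].text); /* prints the 2 kept turns */
    }

The trade-off is that any intermediate conclusion that lived only in the reasoning is gone, so each final answer has to stand on its own.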
I tried replacing a DMA queue lock with lock-free CAS and it wasn't faster than a mutex or a standard rwlock.
I rewrote the entire queue with lock-free CAS to manage insertions/removals on the list, and we finally got some better numbers. But not always! We found it worked best either with a single thread or under massive contention. Under a normal load it wasn't really much better.
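For anyone wondering what "lock-free CAS" looks like concretely: here's a minimal Treiber-stack-style push/pop in C11 atomics, standing in for the poster's actual DMA queue (which isn't shown). The compare-and-swap retry loop is the whole trick; a production version also has to deal with ABA and safe memory reclamation.

    #include <stdatomic.h>
    #include <stdlib.h>

    struct node { void *item; struct node *next; };

    /* Shared list head; static storage, so it starts out NULL. */
    static _Atomic(struct node *) head;

    void push(void *item) {
        struct node *n = malloc(sizeof *n);
        if (!n) return; /* sketch: real code would report the failure */
        n->item = item;
        n->next = atomic_load(&head);
        /* Retry until no other thread moved head between our read and
         * swap; a failed CAS reloads n->next with the current head. */
        while (!atomic_compare_exchange_weak(&head, &n->next, n))
            ;
    }

    void *pop(void) {
        struct node *n = atomic_load(&head);
        /* On failure, n is reloaded with the current head and we retry. */
        while (n && !atomic_compare_exchange_weak(&head, &n, n->next))
            ;
        if (!n) return NULL;
        void *item = n->item;
        free(n); /* unsafe under concurrent pops without a reclamation
                  * scheme (ABA / use-after-free hazard) */
        return item;
    }

Every failed CAS is redone work, so whether this beats a well-tuned mutex depends heavily on the contention pattern, which matches the mixed results above.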