Hacker News | nh2's comments

Can't you use Codex (which is open source, unlike Claude Code) with Claude, even via Amazon Bedrock?

Codex with Anthropic's models is not as good as using those models with the harness they were trained and tuned for. The same goes vice versa.

My friend recommended adding a small-percentage late-payment fee, stated in the contract and on each invoice.

Haven't really used it yet because we don't have a problem with late payments, but I do think it would work, because our B2B customers are usually very appreciative of saving small percentages when we offer it, and unlikely to just give up that money by being late.


It doesn't work if they are insolvent, and it can also backfire if they see the clause as a way to get a cheap cash loan. You should still have the clause, but I think of it as a tool for the collections attorney to use if the customer defaults.

You set the rate so that it is punitive.

It doesn't work when solvency is an issue, but you should know your customers and mitigate that risk accordingly.


The rates aren't cheap. The standard late-payment rates I've seen work out to approximately 19.5-20% APR.

Start doing it now, before it becomes a problem!

Still waiting for Delta + Difftastic integration:

https://github.com/dandavison/delta/issues/535


Be careful with difftastic, because it has at least one severe bug involving Python that has been present for a long time: https://github.com/Wilfred/difftastic/issues/587

Thanks for pointing that out!

> You can’t cap memory, but ...

Why not? That'd be useful. Feels like software written in C should make that reasonably easy.


ulimit?

Or, more probably, the C API setrlimit(2).


Be careful with the setrlimit/ulimit API family, generally it doesn't do what you want. You can limit virtual memory (but... why?) or specific segments like stack, etc. There is also RLIMIT_RSS which sounds like what you'd want, but alas:

    RLIMIT_RSS
        This is a limit (in bytes) on the process's resident set (the number of virtual pages resident in RAM).  This limit has effect only in Linux 2.4.x,  x  <  3 and there affects only calls to madvise(2) specifying MADV_WILLNEED.
I also disagree with the conclusion "No hardware can compensate for a query gone wrong". There are concepts like 'quality of service' and 'fairness' which PG has chosen to not implement.


That does not cap memory per query; it would have to be implemented by Postgres explicitly, so that only the memory that really counts towards "the query" is captured (fine-grained memory limits; ulimit is not that).


Seconded


No. It is reserved.

https://www.icann.org/en/board-activities-and-meetings/mater...

A look on Google or Wikipedia would also clear that up faster than I can type this response: https://en.wikipedia.org/wiki/.internal


`EnableEscapeCommandline` only controls the <Enter>~C commandline.

The reason it is disabled by default in current OpenSSH is OpenBSD `pledge` support:

https://security.stackexchange.com/questions/280793/what-att...

On my Linux,

    cat<Enter>~.
closes the connection as expected, and no ~ is shown in the terminal.
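
For anyone who does want the interactive ~C command line, it can be re-enabled in the client config (a sketch; `EnableEscapeCommandline` exists since OpenSSH 9.2 and defaults to no):

    # ~/.ssh/config
    Host *
        EnableEscapeCommandline yes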


A problem with that is that Nix is slow.

On my nixos-rebuild, building a simple config file for /etc takes much longer than a typical gcc invocation compiling a C file. I suspect that is due to something slow in Nix's Linux sandbox setup, or at least I remember some issue discussions around that; I think the worst part of that got improved, but it is still quite slow today.

Because of that, it's much faster to do N build steps inside 1 nix build sandbox, than the other way around.

Another issue is that some programming languages have build systems that are better than the "oneshot" compilation used by most languages (one compiler invocation per file producing one object file, e.g. `gcc -c x.c -o x.o`). For example, Haskell has `ghc --make`, which compiles the whole project in one compiler invocation, with very smart recompilation avoidance (per-function granularity, comment changes don't trigger recompilation, etc.), avoidance of repeated steps (e.g. parsing/deserialising the inputs to a module's compilation only once and keeping them in memory), and no repeated compiler startup cost.

Combining that with per-file general-purpose hermetic build systems is difficult and currently not implemented anywhere as far as I can tell.

To get something similar with Nix, the language-specific build system would have to invoke Nix in a very fine-grained way; e.g. to get "avoidance of codegen if only a comment changed", Nix would have to be invoked for each of the parse/desugar/codegen phases of the compiler.

I guess a solution to that is to make the oneshot mode much faster by better serialisation caching.


What if you set up a sandbox pool? Maybe I'm rambling (I haven't read much Nix source code), but that should allow for only a couple of milliseconds of latency on these types of builds. I have considered forking Nix to make this work, but in my testing with my experimental build system, I never experienced much latency in builds. The trick to reducing latency in development builds is to forcibly disable the network lookups which normally happen before Nix starts building a derivation:

    preferLocalBuild = true;
    allowSubstitutes = false;
Set these in each derivation. The most impactful thing you could do in a Nix fork, according to my testing, is to build derivations preemptively while fetching substitutes and caches simultaneously, instead of doing it in order.

If you are interested in seeing my experiment, it's open on your favourite forge:

https://github.com/poly2it/kein


Haskell is the king of cancellation. Using asynchronous exceptions, you can cancel anything, anytime, with user-defined exception types so you know what the cancellation reason is.

Example:

    maybeVal <- timeout 1000000 myFunction
Some people think that async exceptions are a pain because you need to be prepared for your code to be interrupted at any time, but I think it's absolutely worth it, because in all the other languages I encounter progress bars that keep running after I click the cancel button, or CLI programs that don't react to CTRL+C.

In Haskell, cancellability is the default and carries no syntax overhead.

This is one of the reasons why I think Haskell is currently the best language for writing IO programs.


How does it work inside `myFunction1`, which is invoked by `myFunction`? Does `myFunction1` need to be async as well?


No, it can be any pure function or IO action. Both will get interrupted.


There really is no benefit to splitting functionality from its tests. Then you just have a commit in the history which is not covered by tests.

Splitting "handling the happy vs error path" sounds even worse. Now I first have to review something that's obviously wrong (lack of error handling). That would commit code that is just wrong.

What is next, separating the idea from making it typecheck?

One should split commits into the minimum size that makes sense, not smaller.

"Makes sense" should mean "passes tests, is useful for git bisect", etc., not "has fewer lines than some arbitrary number I personally like to review" - use a proper review tool to help with long reviews.


Depends entirely on your workflow - we squash PRs into a single commit, so breaking a PR into pieces is functionally identical to not doing so for the purposes of the commit history. It does, however, make it easier to follow from the reviewer's perspective.

Don't give me 2000 lines unless you've made an honest good-faith attempt to break it up, and if it really can't be broken up into smaller units that make sense, at least break it up into units that let me see the progression of your thought as you solve the problem.


What about 1:1 chat?

I'm using Element X and there seems to be no search button for messages at all.

So I cannot even search for "shopping" to find the shopping list, or "address" to find the address a friend sent me some days ago.

It is simple to see why the normal user will have a bad everyday experience.

The app uses 160 MB of user data on my phone, which fits a lot of text; why can't I just search it?


1:1 would be encrypted by default


The question is why I can't search through it when it is a trivial amount of data that is already on my device.

