In United States v. Heppner, Judge Rakoff of the Southern District of New York ruled that written exchanges between a criminal defendant and generative AI platform Claude were not protected by attorney-client privilege or the work product doctrine.
I wonder whether putting this choice on the user would be the most appropriate approach.
People fearful of being scammed could buy a phone with a hardware lock that prevents it from ever accepting sideloads -- no option to enter dev mode, ever. You could even charge more for the extra security.
People who want the freedom to sideload can choose to buy a phone without the extra hardware security feature.
This is really, really well done. I’m very impressed by all the features implemented here and I wish I could look over the source. I wonder how it is done.
Weird follow-up, without apologizing for smearing LCSC with no evidence whatsoever. Imagine someone writing a blog post like:
"LLMs produce AI slop on a regular basis. This piece of code I found somewhere on the web sure is AI slop. I've never seen any code by James Bowman, but _I very much think these are made with the_ AI slop generators _or similar_." And then following up with: "Remember James Bowman? _I'm still curious about_ his code. I might look at it in something or other, at some undetermined point in the future, to see how it performs." Does that sound even remotely OK?
It's not even guilt by association. It's guilt by gut feeling? By prejudice?
They could at least fall back to "context-sensitive" ads, like you suggest.
Also, don't try to make me feel guilty for having an "ad blocker". I don't specifically block ads; instead, I have a "tracking me without my consent" blocker.
Maxwell's "color wheel" experiment (https://cudl.lib.cam.ac.uk/view/PH-CAVENDISH-P-02000/1), which preceded the experiment I wrote about, is more commonly known. (It is also a very clever experiment, by the way: it blends colors by spinning a wheel with different ratios of primary colors.)
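As a rough intuition for why the spinning wheel works: when the disc spins fast enough, the eye averages the light over time, so the perceived color is approximately the area-weighted average of the sector colors. A minimal sketch of that weighted-average model (the function name and RGB representation are my own; this ignores gamma and other real-world perceptual effects, which average linear light rather than raw RGB values):

```python
# Hypothetical model: a fast-spinning disc with colored sectors is perceived
# as the area-weighted average of the sector colors (additive mixing).

def spin_blend(sectors):
    """sectors: list of (fraction, (r, g, b)) pairs.

    Returns the fraction-weighted average RGB, normalized so the
    fractions need not sum exactly to 1.
    """
    total = sum(fraction for fraction, _ in sectors)
    return tuple(
        round(sum(fraction * color[i] for fraction, color in sectors) / total)
        for i in range(3)
    )

# Half red, half green averages to a dark yellow/olive:
print(spin_blend([(0.5, (255, 0, 0)), (0.5, (0, 255, 0))]))  # (128, 128, 0)
```

This simplification is why a red/green wheel looks like a muddy yellow rather than a bright one: the time-average halves each component's intensity.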