Hacker News — fulafel's comments

On the other hand, the RPi doesn't support suspend. So which wins depends on whether your application is always-on.

Aisle said they pointed it at the function, not the file. So the number of LLM turns would be something like: number of functions * number of possible hints * number of repos.

Could indeed be a useful exercise to benchmark the cost.
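A quick back-of-envelope sketch of that multiplication. Every number here is a made-up placeholder, including the per-turn cost; plug in your own:

```python
# Hypothetical scale of an LLM-based scan: one turn per
# (function, hint) pair, across many repos.
n_functions_per_repo = 2_000
n_hints = 5
n_repos = 100
cost_per_turn_usd = 0.02  # made-up per-call cost

n_turns = n_functions_per_repo * n_hints * n_repos
total_cost = n_turns * cost_per_turn_usd
print(f"{n_turns:,} turns, ~${total_cost:,.0f}")  # 1,000,000 turns, ~$20,000
```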

This would still be more limited, since many vulnerabilities become apparent only when you consider more context than a single function. I think there were vulnerabilities of that kind in the published materials. So maybe the Aisle case is also picking the low-hanging fruit in this respect.


Several operating systems on 286 (eg Xenix, Coherent, OS/2) used its MMU for multitasking and memory protection. See https://en.wikipedia.org/wiki/Intel_80286#Protected_mode

The 286's protected mode did not allow for a 32-bit flat address space and was half-baked in other ways, e.g. there was no built-in way to return the CPU to real mode short of a slow and fiddly CPU reset.

It was architecturally a 16-bit CPU, so a flat 32-bit address space would be a non sequitur. If you wanted flat 32-bit addressing, there was a contemporary chip that could do it with virtual memory: the Motorola 68010 plus the optional external MMU. (Or, if you were willing to jump through some hoops, even a 68000... see the Sun-1.)

Protected mode on the 286 allowed 24-bit addressing, enabling access to 16 MB of memory, but lacked virtual memory and required a reboot to return to real mode. The 386 introduced virtual memory through paging, 32-bit addressing for 4 GB of memory, and virtual 8086 mode for running multiple 8086 programs simultaneously without compromising security.
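The address-space arithmetic above checks out:

```python
# 24 address lines (286 protected mode) vs 32-bit addressing (386).
addr_286 = 2 ** 24
addr_386 = 2 ** 32

print(addr_286 // 2 ** 20, "MB")  # 16 MB
print(addr_386 // 2 ** 30, "GB")  # 4 GB
```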

https://flint.cs.yale.edu/feng/cos/resources/BIOS/procModes....

https://en.wikipedia.org/wiki/Protected_mode


Coherent was the first Unix-like OS I ran, on a 386SX box. I think it was Coherent 4.x.

If you want to have an airgapped network, sure. For most people it doesn't make sense. You'll just get the worst of both worlds.

RFC 7368 for home networks recommends the use of ULA locally.

> A home network running IPv6 should deploy ULAs alongside its globally unique prefix(es) to allow stable communication between devices (on different subnets) within the homenet

> When an IPv6 node in a homenet has both a ULA and a globally unique IPv6 address, it should only use its ULA address internally and use its additional globally unique IPv6 address as a source address for external communications.


RFC 7368 is a 2014 Informational document (no IETF standards standing), so it's not a source for current IETF advice. It was also part of the since-closed "homenet" working group initiative, which tried to define some new things that did not get vendor uptake.

But in substance, if you have several subnets, then using ULA may make sense depending on what you're trying to do. However most home networks don't subnet.


It’s pretty sweet. By using ULA addresses for everything, all internal networking keeps working as-is if my ISP allocation changes. Every host can talk to its neighbors using internal addresses, and still connect to remote hosts without NAT breakage.
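For reference, a ULA /48 per RFC 4193 is just the fd00::/8 prefix followed by a pseudo-random 40-bit Global ID. The RFC suggests deriving the ID from a timestamp hashed with an EUI-64; plain randomness is enough for a sketch:

```python
import ipaddress
import secrets

# 40-bit pseudo-random Global ID.
global_id = secrets.randbits(40)

# fd (8 bits) + Global ID (40 bits) = a /48 prefix to subnet from.
ula_48 = ipaddress.IPv6Network(((0xFD << 120) | (global_id << 80), 48))

print(ula_48)  # e.g. fd3c:91a2:77b4::/48 (random each run)
assert ula_48.subnet_of(ipaddress.IPv6Network("fc00::/7"))
```

Because the prefix stays local, it never changes when the ISP reassigns the global delegation.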

You also get this if you use mDNS, but without the ULA hassle and you get to use DNS names instead of hardcoding IP addresses.

You can use both. I do.

I do want some hardcoded addresses. In particular, some of the daemons I run get twitchy when the remote address changes unexpectedly.


mDNS is orthogonal to ULA. mDNS is for discovery and name resolution, whereas ULA is for IP connectivity. And mDNS operates at the link-local scope (link-local addresses), whereas ULA is scoped for the entire home network.

> mDNS operates at the link-local scope (link-local addresses)

This is not the case for the addresses returned. See eg https://www.rfc-editor.org/rfc/rfc6762

6.2. Responding to Address Queries

   When a Multicast DNS responder sends a Multicast DNS response message
   containing its own address records, it MUST include all addresses
   that are valid on the interface on which it is sending the message,
   and MUST NOT include addresses that are not valid on that interface
   (such as addresses that may be configured on the host's other
   interfaces).  For example, if an interface has both an IPv6 link-
   local and an IPv6 routable address, both should be included in the
   response message so that queriers receive both and can make their own
   choice about which to use.  This allows a querier that only has an
   IPv6 link-local address to connect to the link-local address, and a
   different querier that has an IPv6 routable address to connect to the
   IPv6 routable address instead.

So instead of using static ULA addresses, you can use the routable address returned by mDNS. It can often replace the ULA use case.
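A querier that receives both kinds of addresses can then skip the link-local ones and connect to a routable one. A sketch with hypothetical addresses, using Python's ipaddress module for the classification:

```python
import ipaddress

# Addresses a single mDNS response might carry (RFC 6762 §6.2):
answers = [
    "fe80::1ff:fe23:4567:890a",  # IPv6 link-local
    "fd12:3456:789a::1",         # ULA, routable within the homenet
    "2001:db8::1",               # global (documentation prefix)
]

# Prefer anything that isn't link-local.
preferred = [a for a in answers if not ipaddress.ip_address(a).is_link_local]
print(preferred)  # ['fd12:3456:789a::1', '2001:db8::1']
```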

You're supposed to use them in parallel, not as an alternative.

People have, of course, been looking. Linux has been the #1 corpus for the methods for ages.

Indeed, the tragedy of the IPv4+NAT Stockholm syndrome is that people view having to use ambiguous addresses as access control and can't distinguish reachability from addressing.

People pass around stickers (or at least used to) at hacker events saying that, so there has to be something to it, right?

Protesting the term is, I'd wager, motivated by something like: it sounds innocuous to nontechnical people and obscures what's really going on.


You can implement the graphics part of it using WebGL, which is strictly a graphics API for drawing to the screen. But there are specific libraries, e.g. for physics, that you can use in your WebGL 2 app, as well as entire 3D engines (like those you mentioned) that target WebGL. Or you can DIY.

> is open gl the non web version of web gl? or are they completely different?

The current version of WebGL, WebGL 2, is like OpenGL ES 3.0.


Most free software (incl. the BSD stuff) was like that. The bazaar was an attempt to characterise the new Linux-style way of doing it.

Makes me realize that "Worse is Better" was, in today's terms, apologism for vibe-coding.

Mapped to modern concepts, I'd say it was about iterating from an MVP.

"Gabriel argued that early Unix and C, developed by Bell Labs, are examples of the worse-is-better design approach." Whereas vibe-coding is not reviewing what code goes in, just judging it by whether it seems to work or not. I guess a common factor would be willingness to compromise on soundness.


> In such an environment the container would crash, we see the violations, delete it and dont' have to worry about it.

This is the interesting part. What kind of UI or other mechanisms would help here? There's no silver bullet for detecting and crashing on "something bad". The adversary can test against your sandbox as well.

