Hacker News

I've made this argument here a few times and am always shot down, but I think it's important to highlight that the airline industry has an extremely robust history of automation and HCI in critical transportation scenarios, and it seems to me that all the lessons we have learned have been chucked out the window with self-driving cars. Being able to effectively reason about what the automation is doing is such an important part of why these technologies have been so successful in flight, and examples like this illustrate how far off we are from something like that in cars. The issue of response time, too, is one we can't ignore, and it is certainly a far greater challenge in automobiles.

I don't have answers, but it does seem to me like we are not placing enough of a premium on structuring this tech to optimize driver supervision of the driving behavior. Granted, the whole point is to one day NOT HAVE to supervise it at all, but at this rate we're going to kill a lot of people until we get there.



Hi, aerospace software engineer and flight instructor here. I think you get shot down because the problems just aren't comparable. While I agree that there may be some philosophical transfer from aircraft automation, the environments are so radically different that it's difficult to imagine any substantial technological transfer.

Aircraft operate in an extremely controlled environment that is almost embarrassingly simple from an automation perspective. Almost everything is a straight line and the algorithms are intro control theory stuff. Lateral nav gets no more complicated than the difference between a great circle and a rhumb line.
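For a sense of scale, the lateral-nav distinction mentioned above amounts to a couple of standard spherical-trig formulas. A rough Python sketch (the Earth-radius constant and coordinates are illustrative, not avionics-grade):

```python
import math

R_NM = 3440.065  # mean Earth radius in nautical miles (illustrative constant)

def great_circle_nm(lat1, lon1, lat2, lon2):
    """Haversine great-circle distance in nautical miles (inputs in degrees)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat = p2 - p1
    dlon = math.radians(lon2 - lon1)
    a = math.sin(dlat / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2
    return 2 * R_NM * math.asin(math.sqrt(a))

def rhumb_line_nm(lat1, lon1, lat2, lon2):
    """Rhumb-line (constant-heading) distance, same units."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat = p2 - p1
    dlon = math.radians(lon2 - lon1)
    # Mercator stretch factor; fall back to cos(lat) for pure east-west legs.
    if dlat:
        dpsi = math.log(math.tan(math.pi / 4 + p2 / 2) / math.tan(math.pi / 4 + p1 / 2))
        q = dlat / dpsi
    else:
        q = math.cos(p1)
    return R_NM * math.hypot(dlat, q * dlon)
```

On a long leg like JFK to Heathrow the rhumb line comes out roughly a hundred nautical miles longer than the great circle, which is about as exciting as airline lateral nav gets.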

The collision avoidance systems are cooperative and punt altogether on anything that isn't the ground or another transponder-equipped airplane. The software amounts to little more than "extract reported altitude from transponder reply, if abs(other altitude - my altitude) < threshold, warn pilot and/or set vertical speed." It's a very long way from a machine learning system that has to identify literally any object in a scene filled with potentially thousands of targets. There's very little to worry about running into in the sky, and minimum safe altitudes are already mapped out for pretty much the entire world.
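The logic described really is about that small. A toy sketch in Python (the 700 ft threshold and the climb/descend advisories are stand-ins; real TCAS II thresholds vary with altitude, sensitivity level, and closure rate):

```python
def advisory(own_alt_ft, intruder_alt_ft, threshold_ft=700):
    """Toy version of the cooperative altitude check described above.

    Returns None when the transponder-reported altitudes are safely
    separated, otherwise a crude vertical resolution advisory.
    """
    if abs(intruder_alt_ft - own_alt_ft) >= threshold_ft:
        return None
    # Resolve vertically: climb if we're already above the intruder,
    # descend if we're below.
    return "CLIMB" if own_alt_ft >= intruder_alt_ft else "DESCEND"
```

Note there is no scene understanding anywhere in this: the "sensor" is a cooperative altitude report from the other aircraft's transponder, which is exactly the luxury a car doesn't have.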

Any remaining risk is managed by centralized control and certification, which just isn't going to happen for cars. We aren't going to live in a world where every street has to be (the equivalent of) an FAA certified airport with controls to remove any uncertainty about what the vehicle will encounter when it gets there. Nor are we going to create a centralized traffic control system that provides guarantees you won't collide with other vehicles on a predetermined route.

So it's just a completely different world with completely different requirements. Are there things the aerospace world could teach other fields? Yeah, absolutely. Aerospace is pretty darn good at quality control. But the applications themselves are worlds apart.


I'm actually in complete agreement! What sticks out to me is your assessment that the flight environment is "embarrassingly simple from an automation perspective", which I agree with as well (as compared to cars). And yet despite that simplicity and decades at it, we still run it with an incredibly robust infrastructure to have a human oversee the tech. We have super robust procedures for checking and cross-checking the automation, defined minimums and tolerances for when the automation needs to cease to operate the aircraft, training solely focused on operating the automation, etc. But with cars, we are somehow super comfortable with cars severely altering behavior in a split second, super poor driver insight or feedback on the automation, no training at all, and a human behind the wheel who in every marketing material known to man has been encouraged to trust the system far more than the tech (or the law) would ever have you prudently do.

I'm with you that they are super different, and that the auto case is likely much, much harder. But I see that and can't help but think that the path we should be following here is one with a much greater and healthy skepticism (and far greater human agency) in this automation journey than we are currently thinking is needed.


I agree completely. It's a very difficult problem from a technical perspective, and from a systems perspective, we've got untrained operators who can't even stay off their phones in a non-self-driving car. (Not high-horsing it here; I'm as guilty of this as anyone.) Frankly I'll be amazed if anyone can get this to actually work without significant changes to the total system. Right now self-driving car folks are working in isolation - they're only working on the car - and I just don't think it's going to happen until everyone else in the system gets involved.


> we still run it with an incredible robust infrastructure to have a human oversee the tech

Airplanes are responsible for 200-300+ lives at a time, so it’s quite incomparable to road vehicles. Of course it makes sense to have human oversight in case something goes wrong.

On the flip side, the average car driver is not very skilled nor equipped to deal with most surprises - hence the ever present danger of road traffic.

I’m not sure why AI drivers are held to such insanely high standards.


The claim that self-driving cars are being held to a higher standard than human drivers is simply false. Self-driving cars so far have a far worse record than the average human driver, which is remarkably good. Human accidents are measured in terms of "per million miles driven". Self-driving cars have driven a tiny total distance compared to all the miles human drivers have driven.

See: https://en.wikipedia.org/wiki/Motor_vehicle_fatality_rate_in...


What’s the “accidents per million miles driven” metric at for self driving cars?


I’m reminded of GWB’s “soft bigotry of low expectations”. (Even if you didn’t like him, it remains apt.)

Sorry, most drivers I’ve seen avoid the kid that follows the bouncing ball into the street and all the other random events. A few drivers are crap. But justifying automation based on that few is lazy and incorrect. Elon is just full of his own crap on Tesla’s autopilot.


> most drivers I’ve seen avoid the kid that follows the bouncing ball into the street and all the other random events

I've seen many poor drivers over the decades, but can't think of any human drivers who would reliably and repeatably crash into a stationary fire truck in their lane. "Full Self Driving" vehicles, on the other hand...

https://www.bloomberg.com/news/articles/2019-09-06/spotting-...

https://www.wired.com/story/tesla-autopilot-why-crash-radar/

https://slate.com/technology/2021/08/teslas-allegedly-hittin...


> Airplanes are responsible for 200-300+ lives at a time, so it's quite incomparable to road vehicles

Your analogy is wrong. Buses can carry 50 people. I would compare a bus to a medium/small airplane.


Also, it is regrettable that cars don't have the FAA-required electronics, software, or integration processes. When I read that a Jeep's braking system was compromised through its entertainment system it was apparent that the aircraft lessons had not been taken aboard by the auto industry.


> We aren't going to live in a world where every street has to be (the equivalent of) an FAA certified airport with controls to remove any uncertainty

Actually, I've been thinking this is exactly the win self-driving vehicles have been looking for: upgrade, certify, and maintain cross-country interstates like I-70 for fully autonomous, no-driver vehicles like freight, mail, hell even passengers. Maybe that means one lane with high-vis paint and predefined gas station stops and/or some other requirements. I bet the government could even subsidize it with an infrastructure spending bill, politics notwithstanding.

There can't possibly be a problem with _predefined_ highways that is harder to solve than neighborhood and city driving with unknown configurations and changing obstacles. I feel like everyone's so rabid for fully autonomous Uber that the easier wins and use cases are being overlooked.


Well, to control things, you'd have to have a highway that's only for self-driving vehicles. And then you'd need to get them there - with what, human drivers? (losing the cost savings) Maybe you could use this for self-driving trucks between freight depots.

The problem with this is - why not just use trains at this point? Trains are already an economical solution for point-to-point transportation.


>fully autonomous, no driver vehicles like freight, mail, hell even passengers. maybe that means one lane ... and predefined gas station stops and/or some other requirements. I bet the government could even subsidize with an infrastructure spending bill, politics notwithstanding.

Yup, that's a train. I really do wish US rail hadn't been turned into what amounts to private property. I live along the Amtrak Cardinal line and would love to use it to travel. But the low speed and frequent stops for higher-priority freight mean a trip takes longer than driving and usually costs more than I would pay in gas.


> We aren't going to live in a world where every street has to be (the equivalent of) an FAA certified airport with controls to remove any uncertainty about what the vehicle will encounter when it gets there.

I do sometimes wish we'd just devote one lane of our larger freeways to self-driving cars exclusively. You could let the cars drive as fast as they want, and charge per-mile tolls for any car that uses it when the road is congested. Ideally, you'd charge based on occupancy as well: an empty car should have to pay more than a car with a human in it, and a car with 2+ people might be allowed to use the lane for free.


Cars are vastly more complex to do navigation for than planes too so we need to be even more careful when making auto autos. Plane autopilots are basically dealing with just the physical mechanics of flying the plane which while complex are quite predictable and modellable. All of the obstacle avoidance and collision avoidance takes place outside of autopilots through ATC and the routes are known and for most purposes completely devoid of any obstacles.

Cars have a vastly harder job because they're navigating through an environment that is orders of magnitude more complex because there are other moving objects to deal with.


Which is one reason I come back to thinking that you may see full automation in environments such as limited access highways in good weather but likely not in, say, Manhattan into the indefinite future.

Unexpected things can happen on highways but a lot fewer of them and it's not like humans driving 70 mph are great at avoiding that unexpected deer or erratic driver either.

ADDED: You'd actually think the manufacturers would prefer this from a liability perspective as well. In a busy city, pedestrians and cyclists do crazy stuff all the time (as do drivers) and it's a near certainty that FSD vehicles will get into accidents and kill people that aren't really their fault. That sort of thing is less common on a highway.


This is what I'd be happy with. Something to get me the 20-50 miles between cities (Atlanta<->Winder<->Athens in particular), or through the closed highway loops around them. Driving within them isn't so boring that my focus wanders before I notice it's wandering.

We could just expand MARTA, but the NIMBY crowd won't allow it. People are still hopped up on 1980s fearmongering about the sorts of people who live in cities and don't want them infesting their nice, quiet suburbs that still have the Sheriff posting about huge drug and gun busts.


Public transit isn't a panacea for suburbs/small cities. I'm about a 7 minute drive from the commuter rail into Boston but because of both schedule and time to take a non-express train, it's pretty much impractical to take into the city except for a 9-5 workday schedule, especially if I need to take a subway once I get into town.

For me, it's more the 3-5 hour drive, mostly on highways, to get up to northern New England.


> Which is one reason I come back to thinking that you may see full automation in environments such as limited access highways in good weather but likely not in, say, Manhattan into the indefinite future.

This is why there are SAE levels of automation and specifically Level 4 is what you're describing. Anyone claiming their system will be Level 5 is flat out lying.


Even just in the US, there are some cities that can be fairly challenging for an experienced human driver, especially one unfamiliar with them. And there are plenty of even paved mountain roads in the West which can be a bit stressful as well. And that's in good weather.


Which paved mountain roads in the west are stressful?


Highways are probably the best case scenario for full automation but we've seen scenarios where even that idealized environment has deadly failures in a handful of Tesla crashes.


Sounds more comparable to ATTOL does it not? Planes in development now are capable of automatic taxi, take-off and landing.


They're still in a vastly more controlled environment than cars and moving at much lower speeds as well. If an auto taxiing plane has to avoid another airport vehicle something has gone massively wrong. Judging by this video [0] I'm not sure they're even worrying about collisions and are counting on the combination of pilots and ATC ground controllers to avoid issues while taxiing. It looks like the cameras are entirely focused on line following.

[0] https://www.youtube.com/watch?v=9TIBeso4abU


Interesting point of view: autonomous cars as a form of contact-rich manipulation


I'm not sure what you mean by that. Care to expand?


I recently avoided a startup prospect because they were looking to build a car OS that wasn't hard real-time. The very idea that they're trying to develop such things that might intermittently pause to do some garbage collection is freaking terrifying.


What does hard real time have to do with garbage collection? You can have concurrent GCs (with [sub]millisecond pauses, or no stop-the-world at all), but you also need a 'hard real-time' OS to begin with. Heck, even opening files is far from real-time.

Then you need non-blocking data structures, not just lock-free ones (which are much easier to develop). Pretty much you need forward-progress guarantees on any data structure you'd use.


You usually need garbage collection because you are allocating in the first place. And allocating and releasing adds some non-determinism. You apparently don't know how much exactly needs to be allocated - otherwise you wouldn't have opted for the allocations and GC. That non-determinism translates to non-determinism in CPU load, as well as to "I'm not sure whether my program can fulfill the necessary timing constraints anymore".

So I kind of would agree that producing any garbage during runtime makes it much harder to prove that a program can fulfill hard realtime guarantees.
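This is why hard-real-time code typically avoids steady-state allocation altogether and preallocates everything up front. A minimal sketch of that pattern (illustrative only, not a certified implementation):

```python
class RingBuffer:
    """Fixed-capacity buffer: all storage is allocated once in __init__,
    so the steady-state push path performs no allocation - a common
    pattern in hard-real-time code to keep timing deterministic."""

    def __init__(self, capacity):
        self._buf = [None] * capacity  # one-time allocation
        self._cap = capacity
        self._head = 0
        self._len = 0

    def push(self, item):
        # Overwrites the oldest entry when full; the buffer never grows.
        self._buf[(self._head + self._len) % self._cap] = item
        if self._len < self._cap:
            self._len += 1
        else:
            self._head = (self._head + 1) % self._cap

    def __len__(self):
        return self._len

    def items(self):
        """Oldest-to-newest snapshot of the current contents."""
        return [self._buf[(self._head + i) % self._cap] for i in range(self._len)]
```

If you never allocate after startup, there is simply no garbage for a collector to worry about, and the timing analysis gets enormously simpler.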


This was an OS that did not support hard real-time GC.


To play devil's advocate...and because I'm just not that educated in the space, is this really a huge deal? If we're talking couple ms at a time delays, isn't that still vastly superior to what a human could achieve?


If you can set a guaranteed maximum on the delays (regardless of what those limits are), you're hard real-time by definition. The horror is that they weren't building a system that could support those guarantees.


I see, thanks.

What if say, a system is written against an indeterminate timed GC like say, Azul or Go's, but code is written in a way that proves GC times never exceed X, whether by theory or stress testing. Is this still seen as 'unguaranteed'?


It depends on your system model (e.g. do you consider memory corruption to be a valid threat), but it could be. In practice, actually establishing that proof purely in code is almost always impractical or doesn't address the full problem space. You use hardware to help out and limit the amount of code you actually have to validate.


I think that would at most be soft-real time.


GC pauses can be hundreds of milliseconds. You could perhaps use a particular GC scheme that guarantees you never have more than a couple-millisecond pause, but then you have lots of pauses. That might have unintended consequences as well. I'm also not sure that such GCs, like Golang's, can really mathematically guarantee a maximum pause time.


Fully concurrent GCs exist with read-barriers and no stop-the-world phase. The issues with "hard" real-time are not gc-related.


Hard real-time garbage collectors have existed for decades. Of course you can mathematically guarantee a maximum pause time given a cap on allocation rate. What's stopping you?


You don't even need a cap on allocation rate: the GC can run during allocation without fully blocking; it would 'gracefully' degrade allocation itself, limited only by CPU/memory latency and throughput.


If Ernie the intern decides to use a hashmap to store a class of interesting objects as we drive, you could end up with seconds of garbage collection + resizing if it grows big enough.
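This is easy to observe: in CPython, for example, most dict inserts are cheap, but a handful of them trigger a full reallocation and rehash of the table. A quick illustration (CPython-specific behavior; the exact resize points vary by interpreter version):

```python
import sys

# Insert 100k keys and record which inserts changed the dict's memory
# footprint - those are the inserts that paid for a resize + rehash.
d = {}
resize_points = []
size = sys.getsizeof(d)
for i in range(100_000):
    d[i] = i
    new_size = sys.getsizeof(d)
    if new_size != size:  # the table was reallocated and rehashed here
        resize_points.append(i)
        size = new_size
```

Only a logarithmic number of inserts pay the resize cost, but each one is unboundedly more expensive than the average insert - which is exactly the kind of amortized-cost reasoning that hard real-time analysis forbids.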


That doesn't seem especially bad. The car could, for instance, predict whether or not it was safe to do garbage collection. Humans do the same when they decide to look away while driving.


A human that looks around while driving is still taking in a lot of real-time input from the environment. Assuming they're not a terrible driver they examined the upcoming environment and made a prediction that at their current speed there were no turns, obstacles, or apparent dangers before looking away. If they didn't fully turn their head they can quickly bring their eyes back to the road in the middle of their task to update their situation.

If a GC has to pause the world to do its job there's none of that background processing happening while it's "looking away".


Heck, some humans even do literal garbage collection while driving!


That's horrifying on such a deep level. There should be mandatory civil service for programmers, but you just get sent somewhere cold and you gotta write scene demos and motion control software for a year to get your head on straight. :P


To echo this, as someone who has done some work with formal specifications, I have to say it seems like the self-driving car folks are taking a "move fast and break things" approach across the board, which is horrifying.


The mechanism by which those lessons were learned involved decades of tragedy and many fatalities, including famous celebrities dying in plane crashes. Obviously we do not want to follow that same path, but at the moment that's exactly the path we're on.

The US govt isn't going to do anything until there's a public outcry, and historically there won't be a public outcry until there's a bunch of famous victims to point to.


> The US govt isn't going to do anything until ...

I think this attitude is defeatist and absolves us of doing anything. It's a democracy; things happen because citizens act. 'The US government isn't going to do anything' as long as citizens keep saying that to each other.


> it seems to me that all the lessons that we have learned have been chucked out the window with self-driving cars.

I think it’s unfair to lump all self driving car manufacturers together.

The traditional car companies have been doing research for decades (see for example https://en.wikipedia.org/wiki/VaMP), but only slowly brought self-driving features to the market with part of the slowdown because they are aware of the human factors involved. That’s why there’s decades of research on ways to keep drivers paying attention and/or detecting that they don’t.

“Move fast and break things” isn’t their way of working.


These lessons have been chucked out the window by second tier (e.g., GM/Cruise) and third tier (e.g. Tesla and Uber) competitors, who have recognized that the only way they can hope to catch up is by gambling that what happened to Uber won't happen to them.


The car FSD - aircraft autopilot analogy is deeply flawed, and nowhere near instructive. Let's consider some details:

What an aircraft autopilot does is follow a pre-planned route to a T, with any changes being input by humans. The autopilot doesn't do its own detection of obstacles, nor of route markings; it follows the flight plan and reacts to the condition of the aircraft. Even when executing automatic take-off and landing, the autopilot doesn't try to detect other vehicles or obstacles - it just executes the plan, safe in the knowledge that there are humans actively monitoring for safety. There are always at least two humans in the loop: the pilot in command, who prepared and input the original flight plan and also inputs any route changes when needed (collision and weather avoidance), and an air traffic controller, who continuously observes the flight paths of several aircraft and is responsible for ensuring safe separation between aircraft in his zone of responsibility. Beyond that, a controller has equal influence on all aircraft in his zone, and in case one does something unexpected, he can equally well redirect that one or any other one in the vicinity. Lastly, due to much less dense traffic, the separation between aircraft is significantly larger than between cars [1], providing time for pilots to perform evasive maneuvers - and that's in 3D space, where there are effectively two axes to evade along.

Conversely, with car FSD the system is tasked both with following the route and with continuously updating it according to markings, traffic, obstacles, and any contingencies encountered. This is a significant difference from the above - the law and the technology demand only one human in the loop, and that human can really influence his own car at most. Even worse, due to the density of traffic, the separation between cars is quite often on the order of seconds of travel time, making hand-over to the driver a much more rapid process.

I am moderately hopeful for FSD "getting there" eventually, but at the same time I'm wary of narrative making unwarranted parallels between FSD and aircraft autopilot.

[1] https://www.airservicesaustralia.com/about-us/our-services/h...


> Being able to effectively reason about what the automation is doing is such an important part of why these technologies have been so successful in flight, and examples like this illustrate how far off we are to something like that in cars.

Is that actually the case, though?

I would hope, although perhaps I'm mistaken, that the developers of the actual self-driving systems would be able to effectively reason about what's happening. For example, would a senior dev on Tesla's FSD team look at the video from the article and have an immediate intuitive guess for why the car did what it did? Or better yet, know of an existing issue that triggered the wacky behavior?

Even if not, I'd hope that vehicle logs and metrics would be enough to shed light on the issue.

I don't think I've ever seen a true expert, with access to the full suite of analytic tools and log data, publish a full post-mortem of an issue like this. I'm certain these happen internally at companies, but given how competitive and hyper-secretive the industry is, the public at large never sees them.


They certainly are trying very hard, as far as I can tell. Tesla's efforts on data collection and simulation of their algorithm are incredibly impressive. But part of why that is so necessary is that there is an opaqueness to the ML decision-making that I don't think anyone has quite effectively cracked. I do wonder, for instance, if the decision to go solely with cameras and no LIDAR will ultimately prove to be a failure. The camera-only solution requires the ML model to accurately account for all obstacles, for example. As crude, and certainly non-human, as it is, a LIDAR with super crude rules for "don't hit an actual object" would, even at this point, have prevented some of their more widely publicized fatal accidents that relied on the algorithm alone.


Something I do not understand:

there are key differences between automation in e.g. aircraft and what Tesla et al are failing at,

e.g., how constrained the environment is, what the exposure to anomalous conditions is, and what the opportunity window usually is to turn control back over to a human.

The thing I don't understand is, we have a much more comparable environment in ground travel: federal highways.

Innumerable regressions and bugs and lapses aside, I do not understand why so much effort is being wasted on a problem which IMO obviously requires AGI to reach a threshold of safety we are collectively liable to consider reasonable; when we could be putting the same effort into the (also IMO) much more valuable and impactful case of optimizing automated traffic flow in highway travel.

Not only is the problem domain much more constrained, there is a single regulatory body, which could e.g. support and mandate coordination, federated data sharing, and emergent networking, to promote collective behavior that optimizes flow in ways that limited-information, self-interested human drivers cannot.

The benefits are legion.

And,

I would pay 10x as much to be able to cede control at the start of a 3-hour trip to LA, than to be able to get to work each morning. Though for a lot of Americans, that also is highway travel.

Not just this, why not start with the low-hanging case of highway travel, and work out from there onto low-density high-speed multi-lane well-maintained roads? Yes that means Tesla techs who live in Dublin don't get it first. Oh well...

IMO there will never be true, safe FSD in areas like my city (SF) absent something much more comparable to AGI. The problem is literally too hard, and the last 20% is not amenable to brute-forcing with semantically vacuous ML.

I just don't get it.

Unless we take Occam's razor, and assume it's just grift and snake oil to drive valuation and investment.

Maybe the quiet part and reason for engineering musical chairs is just what you'd think, everyone knows this is not happening; but shhhh the VC.


I'm on my third Tesla. FSD on highways has improved so much in the last 6 years. On my first Tesla, autopilot would regularly try to kill you by running you into a gore point or median (literally once per trip on my usual commute). I now can't even remember the last time I had an issue on the highway.

Anywhere else it's basically a parlor trick. Yes, it sorta works a lot of the time, but you have to monitor it so closely that it isn't really beneficial. As you point out, it's going to take some serious advances (which in all likelihood are 30+ years away) for FSD to reliably work in city centers.

I think the issue you've highlighted is one of governance. There's only so much Tesla can do regarding highways. You really need the government to step in to mandate coordination of the type I think you're envisioning. And the government is pretty much guaranteed to fuck it up and adopt some dumb standard that kills all innovation after about 6 months, so it never actually becomes usable.

I think automakers will eventually figure this out themselves. As you say, there are too many benefits for this not to happen organically. Once vehicles can talk to each other, everything will change.


>On my first Tesla, autopilot would regularly try to kill you by running you into a gore point or median (literally once per trip on my usual commute)

And people paid money for this privilege?


To be fair, it still felt like magic. My car would drive me 20 miles without me really having to do anything, other than make sure it didn't kill me at an interchange.

And I'm now trying to remember, but I think autopilot was initially free (or at least was included with every car I looked at, so it didn't seem like an extra fee). Autopilot is now standard on all Teslas, but FSD is an extra $10k, which IMO is a joke.


Humans are ridiculously bad at overseeing something that mostly works. That’s why it is insanely more dangerous.

Also, the problem is “easy” for the general case, but the edge cases are almost singularity-requiring. The former is robot vacuum level, the latter is out of our reach for now.


I bet it felt magic, but if my car would actively try to kill me, it would go back to the dealer ASAP.

I'm not paying with money and my life to be a corporation's guinea pig.


Part of what I don't get so to speak,

is why we haven't seen the feds stepping in via the transportation agency to develop and regulate exactly this, with appropriate attention paid to commercial, personal, and emergency vehicle travel.

The opportunities there appear boundless and the mechanisms for stimulating development equally so...

I really don't get it. Then I think about DiFi and I kind of do.


Even lower-level automation for highway driving would be super useful.

I would appreciate a simple "keep-distance-wrt-speed" function for bumper-to-bumper situations. Where worst case scenario, you rear-end a car at relatively low speeds.

I'd happily keep control over steering in this situation and just keep my foot over the brake, though lane-keep assist would probably be handy here as well. A couple radar sensors/cameras/or lidar sensors would probably be enough for basic functionality.

Disable if the steering wheel is turned more than X degrees - maybe 20 or 25? Disable if speed goes over X - maybe 15 mph? Most cruise controls require a minimum speed (like 25 mph) to activate.

Trying to do full driving automation, especially in a city like Seattle, is like diving into the ocean to learn how to swim.

As cool as that sounds, I'd trust incremental automation advancements much more.
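A sketch of what such a low-speed follow function might look like, with the disable conditions described above (every threshold and gain here is an illustrative placeholder, not a validated control design):

```python
def keep_distance_cmd(speed_mps, gap_m, steering_deg,
                      max_speed_mps=6.7,   # ~15 mph cutoff (assumption)
                      max_steer_deg=20.0,  # steering override (assumption)
                      time_gap_s=1.5, min_gap_m=2.0):
    """Sketch of a bumper-to-bumper 'keep-distance-wrt-speed' function.

    Returns a longitudinal acceleration command in m/s^2, or None when
    the feature should disengage and hand control back to the driver.
    """
    # Disable conditions: too fast, or the driver is actively steering.
    if speed_mps > max_speed_mps or abs(steering_deg) > max_steer_deg:
        return None
    # Desired gap grows with speed (constant time-gap policy).
    desired_gap = min_gap_m + time_gap_s * speed_mps
    error = gap_m - desired_gap
    k = 0.5  # proportional gain (placeholder; real systems tune this carefully)
    # Clamp authority: modest acceleration, stronger braking.
    return max(-3.0, min(1.5, k * error))
```

The appeal of staying in this regime is exactly the worst case the parent describes: even a total failure is a low-speed rear-end collision, not a 70 mph one.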


> I would appreciate a simple "keep-distance-wrt-speed" function for bumper-to-bumper situations.

This has been widespread for at least a decade. I'm not even aware of a mainstream auto brand that doesn't offer adaptive cruise control at this point. Every one I've used works in stop and go traffic.

The other features you want are exactly what Tesla has basically perfected in their autopilot system, and work almost exactly as you describe (not the FSD, just the standard autopilot).


> This has been widespread for at least a decade.

I can't say that my experience agrees with this. Maybe some higher-end vehicles had it a decade ago, but it seems to have been getting more popular over the past 5 years or so. I still don't see it offered on lower-priced vehicles where basic cruise functionality is there, and I doubt it will ever be part of "entry level" considering the tech required.

None of the vehicles my family owns have a "stop-and-go" cruise control function - all newer than 10 years. ACC at higher speeds is available on one, but it will not auto-resume if the vehicle stops.


In this same vein: cars with automated parallel/back-in-angle parking.

I think an even more 'sensor fusion' approach needs to be adopted. I think the roads need to be 'smart' or at least 'vocal'. Markers of some kind placed in the roadway to hint cars about things like speed limit, lane edge, etc. Anything that would be put on a sign that matters should be broadcast by the road locally.

Combine that with cameras/lidar/whatever for distance keeping and transient obstacle detection. Then network all the cars cooperatively to minimize further the impacts of things like traffic jams or accident re-routing. Perfect zipper merges around debris in the roadway.

Once a road is fully outfitted with the marker system, then and only then would I be comfortable with a 'full autopilot' style system. Start with the freeway/highway system, get the logistics industry on board with special lanes dedicated to them, and it becomes essentially a train where any given car can just detach itself to make its drop-off.


Also, here's what could possibly save the most lives: simply ML the hell out of people's faces to notice when they are getting sleepy. That's almost trivial and should be mandatory in a few years.

A more advanced problem would be safely stopping in case the driver falls asleep/loses consciousness, e.g. on a highway. That short stretch of self-driving is less error-prone than the alternative.


>Unless we take Occam's razor, and assume it's just grift and snake oil to drive valuation and investment.

I think there's some of that. Some overconfidence because of the advances that have been made. General techno-optimism. And certainly a degree of their jobs depending on a belief.

I know there is a crowd of mostly young urbanites who don't want to own a car and want to be driven around. But I agree. Driving to the grocery store is not a big deal for me. Driving for hours on a highway is a pain. I would totally shell out $10K or more for a full self-driving system even if it only worked on interstates in decent weather.


Highway self driving has been around for decades[1] - and Tesla's general release autopilot can already do all that. As I understand it from ramp on to ramp off, in production vehicles, Tesla can provide an automated experience. I'm not sure how much "better" it can get.

[1] https://www.youtube.com/watch?v=wMl97OsH1FQ


>Highway self driving has been around for decades[1]

Driver-assisted highway driving has been around for years... Level 5 driving requires no human attention. Huge difference.

I think what is wanted by many is level 5 on the highways. I want to sleep, watch a movie, whatever. That is much, much "better" than what we have now. Like many others, I would be most interested in full level 5 on highways and me driving in the city. That is also much easier to implement and test. The scope of the problem is greatly reduced. I think Tesla and others are wasting tremendous resources trying to get the in-city stuff working. It makes cool videos, but being able to do other activities during a 5 hour highway drive has much more value (to me at least) than riding inside a high risk video game on the way to work.

(edit) I get that I am misusing the definition of "level 5" a bit, but I think my meaning is clear. Rated for no human attention for the long highway portion of a trip.


Better would be that I can legally go to sleep and let the car drive the highway portion of my trip. Then I wake up and complete the final leg of the trip.

As you say, I don't think this is entirely out of reach (even if it required specialized highway infrastructure or car-to-car communication). It seems like lower-hanging fruit than trying to get full self-driving working on local/city roads.

I would pay a ton for the highway-only capability…


Why didn't you post this as a top-level comment? What does this have to do with the post you are replying to?



