Definitely post-COVID. I remember going to see bands in between or just after lockdowns ended, and even the bands were taken aback by the change in audience behaviour, commenting on it. Lots of self-entitled behaviour, talking and even yelling out during quiet moments, people walking up to the stage during a seated Nick Cave concert demanding to hand him stuff or shake his hand - I remember him saying "wow, you guys really forgot how to behave over the last couple of years". Now it just seems to be normalised that crowd behaviour is worse, more self-entitled. I'm not sure what's driving it - whether people who previously weren't going to gigs decided, during lockdown, that they wanted to go out and do stuff more, but just had never learned the etiquette, and/or social media making the experience about the individual rather than the performance.
The drones Iran are using are actually relatively small: you can fit five of them into a medium-sized truck and they can launch in situ, which is how they've been used in ground operations. It doesn't seem like much of a stretch to put a bunch of them into shipping containers.
Yeah, mid-late last year was one of the worst markets I've seen in my career, but the last couple of months things have really seemed to pick up speed.
Ditto. It seems like the graduate wage in the US is 2x my senior salary in the UK, which sounds very similar to yours. It seems massively inflated compared to other US jobs. Tech jobs in the UK seem to be more in line with other sectors.
That’s a pretty boring point for what looks like a fun project. Happy to see this project and know I am not the only one thinking about these kinds of applications.
An LLM that can't understand the environment properly can't properly reason about which command to give in response to a user's request. Even if the LLM is a very inefficient way to pilot the thing, being able to pilot means the LLM has the reasoning abilities required to also translate a user's request into commands that make sense for the more efficient, lower-level piloting subsystem.
We don't need a lot of things, but new tech should also address what people want, not just needs. I don't know how to pilot drones, nor do I care to learn how to, but I want to do things with drones, does that qualify as a need? Tech is there to do things for us we're too lazy to do.
You're considering "talking to" a separate thing; I consider it the same as reading street signs or using object recognition. My voice or text input is just one type of input. Can other ML solutions or algorithms detect a tree (the same as me telling it "there is a tree, yaw to the right")? Yes. Can LLMs detect a tree and determine what course of action to take? Also yes. Which is better? I don't know, but I won't be quick to dismiss anyone attempting to use LLMs.
Definitely maybe - but then we are discussing (2), i.e. "what is the right technical solution to solve (1)".
Your previous comment was arguing that (1) is great (which no one denies in this thread, and it is a different discussion about what products are desirable rather than how to build said product) in an answer to someone arguing (2).
I don't think you understand what an "LLM" is. They're text generators. We've had autopilot since the 1930s that relies on measurable things... like PID loops, direct sensor input. You don't need the "language model" part to run an autopilot, that's just silly.
You seem to be talking past them and ignoring what they are actually saying.
LLMs are a higher level construct than PID loops. With things like autopilot I can give the controller a command like 'Go from A to B', and chain constructs like this to accomplish a task.
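To make the "lower-level construct" point concrete, here is a minimal sketch of the kind of PID loop an autopilot chains together, in this case holding a heading. The gains and the toy one-line plant model are illustrative assumptions, not tuned for any real aircraft.

```python
# A PID controller: the low-level building block an autopilot composes
# into higher-level behaviours like "go from A to B".
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy plant: heading moves in proportion to the control output.
pid = PID(kp=0.8, ki=0.1, kd=0.05, dt=0.1)
heading, target = 0.0, 90.0
for _ in range(400):
    heading += pid.update(target, heading) * 0.1

print(round(heading, 1))  # settles close to the 90-degree target
```

Note there is no language anywhere in this loop, which is the point being made: the controller consumes sensor numbers and emits actuator numbers.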
With an LLM I can give the drone/LLM system a complex command that I'd never be able to encode for a controller alone: "Fly a grid over my neighborhood, document the location of and take pictures of every flower garden."
And if an LLM is just a "text generator" then it's a pretty damned spectacular one, since it can take free-form input and turn it into a set of useful commands.
They are text generators, and yes they are pretty good, but that really is all they are, they don't actually learn, they don't actually think. Every "intelligence" feature by every major AI company relies on semantic trickery and managing context windows. It even says it right on the tin; Large LANGUAGE Model.
Let me put it this way: What OP built is an airplane in which a pilot doesn't have a control stick, but they have a keyboard, and they type commands into the airplane to run it. It's a silly unnecessary step to involve language.
Now what you're describing is a language problem, which is orchestration, and that is more suited to an LLM.
Give the LLM agent write access to a text file to take notes and it can actually learn. Not really reliable, but some seem to get useful results. They ain't just text generators anymore.
(but I agree that it does not seem the smartest way to control a plane with a keyboard)
My confusion maybe? Is this simulator just flying point A to B? It seems like it's handling collisions while trying to locate and identify the targets. That seems quite a bit more complex than the 1930s-era autopilot you're describing.
LLMs can do chat completion, but they don't do only chat completion. There are LLMs for image generation, voice generation, video generation, and possibly more. The camera of a drone feeds images to the LLM, which then determines what action to take based on them. It's similar to asking ChatGPT "there is a tree in this picture; if you were operating a drone, what action would you take to avoid collision?", except the "there is a tree" part is done by the LLM's image recognition and the system prompt is "recognize objects and avoid collision". Of course I'm simplifying a lot, but it is essentially generating navigational directions from visual context using image recognition.
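The pipeline described above can be sketched as follows. The model call itself is stubbed out here (in practice you would send the camera frame to a multimodal model with a system prompt like the one quoted); the command names and the `parse_action` helper are illustrative assumptions, not a real drone API.

```python
# Turning a multimodal model's free-form text reply into a bounded,
# validated drone command. Everything here (command names, the JSON
# shape) is a hypothetical example, not any real library's interface.
import json

ALLOWED_ACTIONS = {"yaw_left", "yaw_right", "climb", "descend", "hold"}

def parse_action(llm_response: str) -> dict:
    """Validate the model's JSON reply into a safe, clamped command."""
    try:
        cmd = json.loads(llm_response)
    except json.JSONDecodeError:
        return {"action": "hold", "degrees": 0}  # fail safe on garbage output
    if cmd.get("action") not in ALLOWED_ACTIONS:
        return {"action": "hold", "degrees": 0}  # fail safe on unknown action
    # Clamp magnitude so a hallucinated "yaw 720 degrees" can't happen.
    cmd["degrees"] = max(0, min(90, int(cmd.get("degrees", 0))))
    return cmd

# e.g. the model saw a tree and answered:
print(parse_action('{"action": "yaw_right", "degrees": 30}'))
```

The design point is that the LLM only proposes; a dumb validation layer between the model and the flight controller decides what is actually allowed to execute.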
Yes it can be, and often is. Advanced voice mode in ChatGPT and the voice mode in Gemini are LLMs. So is the image generation in both ChatGPT and Gemini (Nano Banana).
"You don't need the "language model" part to run an autopilot, that's just silly."
I think most of us understood that reproducing what existing autopilot can do was not the goal. My inexpensive DJI quadcopter has impressive abilities in this area as well. But I cannot give it a mission in natural language and expect it to execute it. Not even close.
> Beyond adjusting parameters, phase8 invites physical interaction. Sculpt sound by touching, plucking, strumming, or tapping the resonators – or experiment by adding found objects for new textures.