AI and aircraft (spun off from Fedex thread)

I don't think you understand how AI works currently. It is based on training data of things that already exist. Without some sort of external input it cannot make something that is actually new. That is the current state of AI. That is exactly why it plagiarizes so much stuff in its results.

That’s not AI, it’s machine learning, and I’ve literally been saying that. It doesn’t matter what it’s doing right now, it matters where it’s heading, and the current pace of progress is breakneck speed.
 
Wrong, but I’m sure you won’t believe me when I tell you that I’m basically hands off with my Teslas all the time.

Did they remove the requirement that you have hands on the wheel all the time? Or did they release the update where you can take them off as long as the camera can see you’re looking at the road? Either way, that’s not “full self driving”.

That’s not AI, it’s machine learning, and I’ve literally been saying that. It doesn’t matter what it’s doing right now, it matters where it’s heading, and the current pace of progress is breakneck speed.

ML is considered to be a subset of AI. Can you explain how functional AI is going to work without training data?
 
I agree that AI isn't ready...yet. While retrofitting the upcoming technology to existing equipment will probably be a rather long-term process, the ability of AI to do the things that we do is, I'm guessing, not that far off.

Ability to replicate human sensory input? Force sensors on flight controls for feedback. Vibration sensors throughout an aircraft that can isolate engine vibrations from wheel vibrations from aerodynamic vibrations caused by loose or damaged skin, etc. Visual systems that can look at clouds and let AI compare them to millions of images that it has catalogued along with their corresponding threat profiles, and cross-check visuals with digital radar return inputs. Visual systems looking at all critical flight surfaces and comparing them to millions of images in order to monitor status on an instantaneous basis. Feed AI hundreds of thousands of photos of what Type IV looks like under different lighting conditions and at all stages of pre-failure and failure, and it will become better at determining when to call for inspection or reapplication than your average line pilot.

I could go on and on, but the bottom line is that if we can give AI proper information inputs (better and more detailed than a human can take in), give it all available courses of action, teach it how those actions affect conditions, and teach it the full scale of desirable versus undesirable outcomes, then eventually AI will make better decisions, and more quickly, than a human can.
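Just to make the "desirable versus undesirable outcomes" part concrete, here's a toy sketch (everything in it is made up: the classifier score, the thresholds, the action names) of the kind of decision step that could sit on top of a visual model scoring Type IV condition:

```python
# Toy sketch only: the "fluid model" score, thresholds, and actions below are
# illustrative assumptions, not a description of any real system.
from dataclasses import dataclass

@dataclass
class FluidAssessment:
    failure_probability: float   # hypothetical image model's estimate of fluid failure
    lighting_confidence: float   # how reliable the visual read is in current lighting

def recommend_action(a: FluidAssessment,
                     inspect_threshold: float = 0.3,
                     reapply_threshold: float = 0.7) -> str:
    # If the visual read itself is shaky, fall back to a human.
    if a.lighting_confidence < 0.5:
        return "request manual inspection"
    if a.failure_probability >= reapply_threshold:
        return "call for reapplication"
    if a.failure_probability >= inspect_threshold:
        return "call for inspection"
    return "continue monitoring"

print(recommend_action(FluidAssessment(failure_probability=0.55, lighting_confidence=0.9)))
```

The hard part, obviously, is the model feeding that score, not the handful of rules sitting on top of it.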

I agree it can be done, but at what point does it end up costing more than two pilots to do all of this? All these new sensors add weight and MX costs, and potentially cause more downtime for an airframe on average. There are also limitations with visual systems in weather that we still haven't solved.
 
That’s not AI, it’s machine learning, and I’ve literally been saying that. It doesn’t matter what it’s doing right now, it matters where it’s heading, and the current pace of progress is breakneck speed.
What we call "AI" currently is just a more complex version of ML. It still involves a crap ton of training data to make predictions based on complex math problems. It cannot "think" outside of the training data it is supplied. It is stuck in a box. That is why 95%+ of AI work is working on the training datatsets to make sure they are fully optimized for training. At its core, AI is still just tossing a crap ton of data at a premade algorithm that then provides a prediction based on its training data.
 
I could go on and on, but the bottom line is that if we can give AI proper information inputs (better and more detailed than a human can take in), give it all available courses of action, teach it how those actions affect conditions, and teach it the full scale of desirable versus undesirable outcomes, then eventually AI will make better decisions, and more quickly, than a human can.

Steve,
I completely agree with the above statement.

But I think the bolded part might be a massive “if”.
 
Did they remove the requirement that you have hands on the wheel all the time? Or did they release the update where you can take them off as long as the camera can see you’re looking at the road? Either way, that’s not “full self driving”.

We can argue over the semantics of what qualifies as “full self driving,” but that’s not what you originally said or what I was responding to. Regulatory nonsense is one thing, but how it actually works is another. I can put a “cheat” weight on the wheel and not touch the steering wheel, brakes, or accelerator for an entire trip. I do it every day. So your original claim is false. The tech is largely there. What isn’t is rational thought from regulators.

ML is considered to be a subset of AI. Can you explain how functional AI is going to work without training data?

Actual AGI would work just like a human brain. It isn’t fed specific data to accomplish a certain task, it is capable of learning on its own and engaging in actions without human direction.

What we call "AI" currently is just a more complex version of ML. It still involves a crap ton of training data to make predictions based on complex math problems. It cannot "think" outside of the training data it is supplied. It is stuck in a box. That is why 95%+ of AI work is working on the training datatsets to make sure they are fully optimized for training. At its core, AI is still just tossing a crap ton of data at a premade algorithm that then provides a prediction based on its training data.

Basically none of this is correct.
 
We can argue over the semantics of what qualifies as “full self driving,” but that’s not what you originally said or what I was responding to. Regulatory nonsense is one thing, but how it actually works is another. I can put a “cheat” weight on the wheel and not touch the steering wheel, brakes, or accelerator for an entire trip. I do it every day. So your original claim is false. The tech is largely there. What isn’t is rational thought from regulators.

First, I really hope you aren’t hanging a weight on your wheel.

Second, I’ll believe the tech is there when insurance companies are ok with it.

Actual AGI would work just like a human brain. It isn’t fed specific data to accomplish a certain task, it is capable of learning on its own and engaging in actions without human direction.

So tell us, how does “actual agi” work?
 
First, I really hope you aren’t hanging a weight on your wheel.

Of course I am, along with thousands of other Tesla owners. Because regulators have lost their minds and want to meddle in everything. Elon wouldn't even have the tattlers installed if it wasn't for the regulators.

Second, I’ll believe the tech is there when insurance companies are ok with it.

K.

So tell us, how does “actual agi” work?

We'll find out soon enough!
 
Of course I am, along with thousands of other Tesla owners. Because regulators have lost their minds and want to meddle in everything. Elon wouldn't even have the tattlers installed if it wasn't for the regulators.

Wow.

I don’t know a single Tesla owner who would admit to that.

I do know a lot of Tesla owners who are embarrassed that there are Tesla owners who do this.
 
Fair enough, I can’t argue with that. Although it seems unlikely that they’re wrong based on the current pace of progress.



Wrong, but I’m sure you won’t believe me when I tell you that I’m basically hands off with my Teslas all the time.

I’m hands off with Teslas too. As in, you couldn’t pay me to touch one of those :D
 
I don’t doubt that’s a possibility. But much like you question experts about pretty much everything, I take what AI “experts” say with a grain of salt. Four years ago Teslas were supposed to be hands-off self driving, and they’re obviously not close to that even now.

My old company had a massive artificial intelligence push starting 6-7 years ago, and it ended up influencing very little because, when it comes down to it, there are a lot of disciplines that present a ton of problems for AI.

I have no idea if flying is one of them. The biggest issue I see with AI is data quality, which is actually a really, really big obstacle to overcome.

Reminds me of the early/mid-00s in engineering, when offshoring was going to be this amazing cost-cutting measure. GE tried to send a lot of their Finite Element Analysis stuff to India, and it turned into a HUGE CF. I worked for an engineering contractor that had to fix it all for at least 3x whatever GE thought they were going to save.

Who knew, complex systems are complex?
 
Actual AGI would work just like a human brain. It isn’t fed specific data to accomplish a certain task, it is capable of learning on its own and engaging in actions without human direction.
This does not exist, and we do not know if it ever will. Currently, deep learning algorithms are the best we've got, and they work as I described.

The funny thing is that the math and ideas behind our current AI have been around for decades. Most of the progress we've made has come from increases in computing power.
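For example, the perceptron update rule is late-1950s math and still fits in a few lines; here's a toy run on made-up data:

```python
# Toy illustration: Rosenblatt's perceptron update on fabricated, linearly
# separable data. Modern deep learning stacks huge numbers of units like this
# and trains them with vastly more data and compute.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)     # made-up labels

w, b = np.zeros(2), 0.0
for _ in range(20):                          # a few passes over the data
    for xi, yi in zip(X, y):
        pred = int(xi @ w + b > 0)
        w = w + (yi - pred) * xi             # classic perceptron update
        b = b + (yi - pred)

accuracy = np.mean([int(xi @ w + b > 0) == yi for xi, yi in zip(X, y)])
print(f"training accuracy: {accuracy:.2f}")
```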
 
 

The fact KH is involved gives me 0 faith in what they're trying to achieve.

This also coming from the same administration that gave us the Minister of Truth (before finally waking up and letting her go).
 