Pilots and ChatGPT

Well, yes, but with a caveat: we don’t know how yet lol.

A “rational agent,” be it software or a human, chooses actions that it expects will lead to the optimal outcome. There is a lot here, but suffice it to say that typically this means it’s looking to maximize or minimize some “cost function” or “utility function” or something of that nature.

In turn, evaluating the AI running that agent is typically done by measuring completeness (that is, whether it’s guaranteed to find the right answer), cost-optimality, time complexity, and space complexity - which is far from how we typically evaluate the human decision-making process.
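To make the “utility function” idea concrete, here’s a minimal Python sketch of a rational agent in the textbook sense - it just picks the action with the highest expected utility. The actions, probabilities, and utilities are made up purely for illustration, not any real system:

```python
# Minimal sketch of a "rational agent": choose the action whose expected
# utility (sum of probability * utility over possible outcomes) is highest.
# All actions and outcomes below are hypothetical.

def expected_utility(action, outcomes):
    """Expected utility of an action given its (probability, utility) outcomes."""
    return sum(p * u for p, u in outcomes[action])

def choose_action(outcomes):
    """The 'rational' choice: argmax of expected utility over the actions."""
    return max(outcomes, key=lambda a: expected_utility(a, outcomes))

# Hypothetical decision: descend now vs. descend late.
outcomes = {
    "descend_now":  [(0.95, 10), (0.05, -100)],  # small payoff, small risk
    "descend_late": [(0.60, 12), (0.40, -100)],  # bigger payoff, big risk
}
print(choose_action(outcomes))  # -> "descend_now" (EU 4.5 vs -32.8)
```

Whether humans actually compute anything like this is exactly the open question.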

Most people I know who consider their own decision-making processes aren’t really concerned with that sort of thing. And if humans are indeed rational agents, which cost function are we trying to minimize or maximize - or maybe it’s multiple? How do you have agents that pick their own cost functions? Is that even a thing, or is that an abstraction of something else?

So, “yes,” but really the answer is “we don’t know; ask the interdisciplinary scientists working on AI and cognitive psych and they’ll get back to you.”



The thing is, we often aren’t even trying to, and we’re all self-aware… so it may be that meeting the definition of intelligent is disconnected from self-awareness? Which jibes with anecdotal experience lol



I think that sort of stuff is pretty overly optimistic a lot of the time - the “rapture, but for nerds,” as I mentioned earlier - but the potential is there, so I think we should try it out.



Well, I’m going beyond that - I’m saying we may not even be self-aware lol, but that’s not my field.



What about a self-aware robot, motivated to survive, that will cease to exist on impact with the countryside?

As for armed robots, that’s already a thing, and we’re already living in that sort of panopticon dystopia - check out Ring cameras giving info to the government without a warrant.



We’re all dying.

I appreciate this. Thank you. As a disclaimer, my only in-depth experience with programs designed to "aid" comes from the 911 Center in which I worked, where they were often a dismal failure. While light-years ahead of what used to be, and not really a reflection of the AI of which we speak, the failures - from computerized phone lines to CAD to ancillary programs - were legion during the 15 years of my service there. Anecdotally, there are numerous stories where CAD failures left the people supposedly being "aided" in a place where they had no idea how to contact mutual aid or additional resources - and, yes, that was a HUMAN failure, but one that had become dependent upon the "magic" of computer-assisted dispatch.

My age and my personal experience provide cautionary roadblocks to the brave new world into which we have entered and into which we are delving ever more deeply. I understand it's "me" and likely/possibly not what the new infrastructure will objectively provide.

The possible theoretical discussion is fascinating (i.e., "the self-aware robot motivated to survive"). Sadly (for me), I know that armed robots are already a thing and that many of the things once taken for granted (for example, the Ring camera, which doesn't require a warrant) are a real part of our world now. As a lowly 911 Dispatcher, I had access to information that would ordinarily require a law enforcement agency to obtain a warrant, provided there was an "exigent" circumstance with a threat to human life. It required a simple phone call and filling out a one-page faxed form after the fact, and nothing more. Whether we used that resource wasn't dependent upon a matrix of programming but on our own judgement, and - believe it or not - that was rarely wrong (although I have no sense or information about the broader scope of ALL 911 Centers).

I guess my concern, personally, is the constraint of any programming which replaces well-trained and experienced human judgement, but that's because I'm old, I think, and prefer to have more control - not less. Prior to my retirement I saw a number of younger individuals hired on the job who saw things differently. On the rare occasions CAD, the phone system, or the radio failed, they were generally unable to step in with common sense to handle emergency situations and work around the particular issue.

Anyway, without having answers that satisfy me, I'm bright enough to kick the can down the road for someone else to solve should issues arise. I'm not sure all change is progress, although I'm positive that all progress is change.

There is no doubt this will be a whole new world, one into which I will only "dip my toes," so to speak, but which many will need to embrace.
 
There’s a fantastic book titled “Our Robots, Ourselves” by David Mindell - his other book changed how I look at and understand aviation (that book is called Digital Apollo and is a masterpiece).

Anyway, ORO makes the claim that the principal challenge of all this technology is not “can we replace humans or make humans more efficient?” but rather, “does this automation assist humans to make better decisions that result in better outcomes?” Which I think is a really “wise” way to look at a problem.

That’s a little more long-winded than “driverless cars and pilotless planes” - but I think that’s the future we should be embracing and striving for.

We might see “single-pilot airliners” - but we should only champion the idea if the end result is increased safety and greater efficiency. I want a badass “crew member” AI that can say, “Hey! Steve, are you sure you want to do that? Why not this?” But not in an obtrusive way like Clippy - that’s a fine line to walk.

We need AI that says, “you’re probably going to miss your crossing restriction if you don’t start down soon” - and you didn’t have to enter anything into the box. It just “knows,” the same way anyone would know.
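As a rough sketch of the math behind that kind of nudge - this uses the common “3-to-1” descent rule of thumb (about 3 nm of track distance per 1,000 ft of altitude to lose); the function and numbers are purely illustrative, not any real avionics interface:

```python
# Back-of-the-napkin "start down soon" check using the 3-to-1 rule of thumb:
# roughly 3 nm of distance needed per 1,000 ft of altitude to lose.
# Hypothetical helper, not a real FMS/avionics API.

def descent_check(current_alt_ft, restriction_alt_ft, dist_to_fix_nm):
    alt_to_lose_ft = current_alt_ft - restriction_alt_ft
    required_nm = 3 * alt_to_lose_ft / 1000  # 3-to-1 rule
    if dist_to_fix_nm <= required_nm:
        return "You're probably going to miss your crossing restriction - start down."
    return f"OK for now: need ~{required_nm:.0f} nm, have {dist_to_fix_nm:.0f} nm."

# e.g., FL350, cross the fix at 10,000 ft, fix is 70 nm ahead:
print(descent_check(35_000, 10_000, 70))  # needs ~75 nm -> warning fires
```

The hard part, of course, isn’t this arithmetic - it’s knowing the restriction, the winds, and the pilot’s intent without anyone typing them into a box.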

What is the value of second or third (or fourth) crew members? They catch errors, share in the decision-making process, and keep each other honest because of the social pressure… Social pressure is easy - XVR basically does that, and there’s no AI in it - but trapping errors is hard. Recognizing when things might potentially go wrong, and mitigating to prevent even the possibility of things going wrong, is something humans do 20x a second and machines have a really hard time with.

We are a ways away from that, but I think it’s coming.
 

I appreciate your thoughtful replies which give me things to consider. Thank you for the dialog and not simply writing an old guy out of the conversation. I am going to find Mindell's books and give them a good read.
 
[image attachment]

I'd imagine we are starting something similar with this, but hey, it's "cool" tech.
[nevermind Nathan Fillion gif]
 
Honestly, Digital Apollo is a masterpiece of science writing; ORO is good, but Digital Apollo is way better. Enjoy!
 
I don’t think it’s about it being “cool.” It’s about the fact that this is inevitably going to be developed, so either we can do it, or our enemies can. Do you really want to see North Korea as the first country to develop autonomous killer robots?
I don't think it matters, but I also hope I'm dead by the time I'm proven wrong.
 

Care to expound on why you don’t think it matters? A world in which a dictatorial, genocidal regime has autonomous killer robots sounds positively nightmarish to me. And I doubt either of us will be dead before someone has them. I’d rather it be us.
 
Because it doesn't matter who is first. It matters who is crazy enough to use the lethal technology.
 
I’d be interested to hear what @Wardogg thinks about the battlefield possibilities of autonomous robotic soldiers.

It sounds frightening, but I guess I find it a little far-fetched that North Korea could summon the design and manufacturing ability to field an army of hundreds of thousands of capable robots when they can barely build a skyscraper or keep an airliner flying. I’m a lot more concerned with their ability to harness 70-year-old nuclear and rocket technology to get weapons that could wipe out the entire Pacific Rim.

China getting deathbots scares me more from a regional perspective, but less so in an “omg I fear for my family” sense.
 
I was a pretty basic cog in the military wheel, just doing my part to accomplish the mission and take care of my guys. I never went to war college, meaning my big-picture view probably wasn't very big.

With that being said, while the thought of wars fought without military casualties sounds nice, I feel it may be on the same level as the perfect socialist utopia, where everyone does their job just because it benefits the greater good and not for personal gain. Sounds great, but as it stands now on our current evolutionary track as a species, we just can't live in that world. Either of them.

War is meant to be dirty and hard, and there is supposed to be sacrifice. If there is no sacrifice, then what's to stop power-hungry dictators from running roughshod all over the world in a quest for total domination? If it's not the military suffering the casualties, who then? Civilians? If it's just killbots running amok against each other, when is a victor declared? When there are no more killbots to kill? So the country with the most money, or the ability to make killbots the fastest, wins. Every time. What then is the next evolution of war? Two countries declare war on each other and, instead of actually fighting at all, we just let the AI decide how many casualties there would have been that day, and then it randomly chooses the citizens who now have to show up to the suicide booths to give up their lives for the greater good?

Life imitates art. Maybe we are already heading in that direction. It seems to be a logical conclusion to wars fought only by machines. It's already been thought of and drawn up for your viewing pleasure: see Star Trek: The Original Series, S1 E23, "A Taste of Armageddon." Worth the watch.

 
Fantastic episode. One of my favorites.
 