Pilots and ChatGPT

but AI is an arms race and it’s not going away… ever.

I think this is an extremely important point. The only way AI goes anywhere (and to be clear, what we’re dealing with here isn’t even AI, it’s just machine learning, which is extremely basic compared to AGI), is if we’re so careless with it that it destroys us. And the best way for that to happen is to refuse to participate and let less responsible countries and institutions develop it. Because it will be developed. That is inevitable. This is an arms race, and the winner basically takes all.
 
Right, and my (albeit novice) interpretation of the field is that we are probably quite a ways off from AGI… but it’s critical research and the capabilities will be world altering if we can make it there.

I took a graduate-level AI course this last semester and built a line-following robot - the AI for that was basically addition, some statistics, and some extra steps where we filtered the images down to their component pieces. But artificial intelligence is… well, extremely limited. Some of the other students did work with poker-playing AIs and language models - basically it’s all statistics… it’s not really “smart,” it’s just making the best choices given the data it has.
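For a sense of how little machinery is involved, here’s a hypothetical sketch of the core line-following step - threshold a row of camera pixels, take the centroid of the dark “line” pixels, and steer toward it. The function names and numbers are my own invention, not from the actual coursework:

```python
# Illustrative line-follower logic: find where the dark line sits in one
# row of grayscale pixels and report how far it is from center.

def steering_error(pixel_row, threshold=128):
    """Offset of the line's centroid from the row's center, in pixels.

    Negative means steer left, positive means steer right,
    None means the line is out of view.
    """
    line_columns = [i for i, v in enumerate(pixel_row) if v < threshold]
    if not line_columns:
        return None  # no dark pixels: the line was lost
    centroid = sum(line_columns) / len(line_columns)
    center = (len(pixel_row) - 1) / 2
    return centroid - center

# A 40-pixel row with the line left of center (columns 10-12):
row = [255] * 10 + [0, 0, 0] + [255] * 27
print(steering_error(row))  # -8.5, so steer left
```

In the real robot this error would feed something like a proportional controller on the wheel speeds - the “addition and some statistics” above really is about this level of math.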

Earlier in the semester I read a paper showing that a naive Bayesian classifier can identify spam at about 95% accuracy (comparable to a human, and typically on par with or better than a neural network), and I wrote one using a dataset on Kaggle. Without any tweaking and very little effort it got about 80% accuracy classifying a disinformation dataset after only a little training. So these “less smart” methods are indeed incredibly powerful too; they’re not “really intelligent,” but then maybe we aren’t either?
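To make concrete how much of this is “just statistics,” here’s a from-scratch multinomial naive Bayes in the same spirit as the spam classifier described above. The toy corpus is invented for illustration; the actual exercise used a labeled Kaggle dataset and more careful tokenization:

```python
import math
from collections import Counter

def train(docs):
    """docs: list of (label, text) -> (priors, per-class word counts, totals)."""
    priors, counts, totals = Counter(), {}, Counter()
    for label, text in docs:
        priors[label] += 1
        words = counts.setdefault(label, Counter())
        for w in text.lower().split():
            words[w] += 1
            totals[label] += 1
    return priors, counts, totals

def classify(model, text, alpha=1.0):
    """Pick the label with the highest Laplace-smoothed log posterior."""
    priors, counts, totals = model
    n_docs = sum(priors.values())
    vocab = {w for c in counts.values() for w in c}
    best_label, best_logp = None, -math.inf
    for label in priors:
        logp = math.log(priors[label] / n_docs)  # log P(label)
        for w in text.lower().split():
            # log P(word | label) with add-one smoothing
            logp += math.log((counts[label][w] + alpha) /
                             (totals[label] + alpha * len(vocab)))
        if logp > best_logp:
            best_label, best_logp = label, logp
    return best_label

model = train([
    ("spam", "win money now"),
    ("spam", "free money offer"),
    ("ham", "lunch at noon"),
    ("ham", "see you at the meeting"),
])
print(classify(model, "free money"))       # spam
print(classify(model, "meeting at noon"))  # ham
```

Bayes’ rule plus an independence assumption plus word counts - nothing deeper than that is behind those 80-95% numbers.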

AI is not an easy thing, and it’s constrained by what we can come up with: the math we know, and hardware/software combinations that haven’t been optimized by hundreds of millions of years of evolution, etc.

I am taking machine learning next semester, so I’ll report back what I learn, but my intuition, from reading about this and spending a significant amount of time thinking about it, is that we’re either a long way from AGI or it might be impossible… or it’s so close that all that needs to happen is a confluence of already-extant models and a little luck.

Both results kind of leave me unnerved, tbh.

If we get this right though… like, I could see this being historically equivalent to the invention of the steam engine… or maybe fire?
 
Yeah, I’m thinking fire. This is a world changing possibility. The world after AGI is a very different place than the world before it.
Maybe, yeah. I could also see a world where not too much changes after AGI, because the self-aware superbrains aren’t interested in the same kinds of problems we’re interested in lol. Or we’re too boring and slow, and after a few milliseconds of our subjective time the AI decides that a life spent answering queries on the internet is too boring and, depressed as hell, the robot ends it all. Neural networks are hard to predict, and NNs are the backbone of new AI research, so if we’re going to see AGI come from anywhere, I’d suspect it comes from there.

It’s really hard to predict the world after “the rapture for nerds” (the singularity) - if anything happens at all. Roko’s basilisk has entered the chat.

I don’t know, I think the hype certainly says something about us. When we were goat herders we had a shepherd god, now that programming is seen as a respectable career I’m a bit suspicious that Nick Bostrom and the like are inventing programmer gods or talking about the emergence of super-intelligent AGI that will remake our world for the same reason. Maybe there’s some deep human need for there to be “something more than all this.”
 
Never heard of this thing. Can someone ask it about DailyMail? :)

ChatGPT:
The Daily Mail is a British tabloid newspaper that is known for its sensational headlines and often controversial content. It is generally not considered to be a reliable source of news, as it has a history of publishing false or misleading information and has been criticized for its lack of accuracy and balance. Many fact-checking organizations have found that the Daily Mail frequently publishes articles that contain factual errors or are misleading. It is important to verify information from multiple sources and to critically evaluate the credibility of any news source before relying on it for accurate information.
Follow-up question: “is there an American newspaper you could compare it to?”

It is difficult to compare the Daily Mail directly to an American newspaper, as the British and American media landscapes are quite different. However, some people have drawn comparisons between the Daily Mail and American tabloid newspapers like the National Enquirer, which is known for publishing sensational headlines and controversial content that is often not fully supported by facts. Like the Daily Mail, the National Enquirer has faced criticism for its lack of accuracy and balance in its reporting. It is important to remember that not all tabloid newspapers are unreliable sources of information, and some do strive to maintain high journalistic standards, but it is always a good idea to critically evaluate the credibility of any news source before relying on it for accurate information.

Which is a take I wholeheartedly agree with - when you post Daily Mail links I consider it no different than if you posted a National Enquirer link, and I imagine that many think the same thing.

I’m not seeing “BATBOY LIVES!” in the Daily Mail, but it’s damn close.
 
…This is an arms race, and the winner basically takes all.

Yeah, it’s really the nuke of the new millennium.
Whoever creates a real working AI and decides to use it wins.
 
I shared the wrong image, but I just used ChatGPT to write bash commands for me on Linux… This is cool, I have a “DWIM” helper now.
 
I’m curious as to how it’s learning. I asked it about “any traffic please advise” and it’s still giving the same answer despite getting feedback from others in this thread. @ppragman , how much of a training dataset can this thing have for a specific aviation radio call? I feel like it shouldn’t take very much feedback to change its answer.

Of course, someone has to curate the feedback I guess? I may have commented that the appropriate nomenclature was “any traffic in the area please advise DEEZ NUTZ”
 
I’m curious as to how it’s learning. I asked it about “any traffic please advise” and it’s still giving the same answer despite getting feedback from others in this thread. @ppragman , how much of a training dataset can this thing have for a specific aviation radio call? I feel like it shouldn’t take very much feedback to change its answer.

Pretty sure it’s not actively learning. They fed it a bunch of stuff to start, but it’s not taking your input as new learning.
 
…it’s not really “smart” it’s just making the best choices given the data it has.

I don't mean to be a smartass, but isn't that basically what we do as walking pieces of meat, too? We take what we know and make the best choices we can at the moment. Many are looking forward to the time when AI becomes self-aware and more fully cognizant and becomes, somehow, the "saviour" that frees mankind for other (what?) things. I am not.

I would agree that we are not "really intelligent," sadly, but simply doing the best we can with the information we have and choose to embrace. Maybe AI isn't limited by "choice" but programming. IDK.

Quote statistics to me, but I personally want two pilots up front who both want to go home after a safe departure and landing. I don't want "security" patrols with built-in "safeguards" policing/securing public areas (and absolutely don't want them armed). Not gonna get into a driverless car, however "good" the programming might be, because my own experience with programs at the 911 Center, and home Internet experience that includes banking, is that they fail and can be hacked by ne'er-do-wells.

I'm old, and a dying breed, but I still am glad to have grown up and grown old in a time where people ruled. Not anxious to kick-off, for sure, but also glad I won't need to face much of the impact this brave, new world will bring.
 
Whether you like this form of progress or not, it is a necessity. If we don’t develop it, the Chinese will.

Understood, my friend. Just an old guy reflecting, not looking to influence policy/development. The future comes, with both progress and challenge, however much an old grump set in the foundation of distant times might cringe. I know, intellectually, it is almost always for the better.
 
I don't mean to be a smartass, but isn't that basically what we do as walking pieces of meat, too?

Well, yes, but with a caveat: we don’t know how yet lol.

A “rational agent,” be it software or a human, chooses actions that it expects will lead to the optimal outcome. There is a lot here, but suffice it to say that typically this means it’s looking to maximize or minimize some “cost function” or “utility function” or something of that nature.
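A minimal sketch of that definition, with all actions, outcomes, and utility numbers invented for illustration: the agent scores each action by expected utility under some outcome model and takes the argmax.

```python
# Hedged sketch of a "rational agent": score each action by expected
# utility under a probabilistic outcome model and pick the best one.
# The actions, outcome probabilities, and utilities are all made up.

def expected_utility(action, outcome_probs, utility):
    """Sum of P(outcome | action) * U(outcome)."""
    return sum(p * utility(o) for o, p in outcome_probs[action].items())

def rational_choice(actions, outcome_probs, utility):
    """Choose the action the agent *expects* to be optimal."""
    return max(actions,
               key=lambda a: expected_utility(a, outcome_probs, utility))

# Toy model: "divert" is safe but costly; "continue" risks a bad outcome.
outcome_probs = {
    "continue": {"arrive": 0.7, "incident": 0.3},
    "divert":   {"arrive_late": 1.0},
}
utility = {"arrive": 10, "incident": -100, "arrive_late": 5}.get
print(rational_choice(["continue", "divert"], outcome_probs, utility))  # divert
```

Everything interesting - the outcome model, the utility numbers - is supplied from outside; the “rationality” itself is just the argmax.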

In turn, evaluating the AI running that agent is typically done by measuring completeness (that is, whether it’s guaranteed to find the right answer), cost-optimality, time complexity, and space complexity - which is far from how we typically evaluate the human decision-making process.

Most people I know who consider their own decision-making processes aren’t really concerned with that sort of thing. And if humans are indeed rational agents, which cost function are we trying to minimize or maximize - or maybe it’s multiple? How do you have agents that pick their own cost functions? Is that even a thing, or is that an abstraction of something else?

So, “yes” but really the answer is “we don’t know, ask the interdisciplinary scientists working on AI and cognitive psych and they’ll get back to you.”

I don't mean to be a smartass, but isn't that basically what we do as walking pieces of meat, too? We take what we know and make the best choices we can at the moment

The thing is, we often aren’t even trying to, and we’re all self-aware… so it may be that meeting the definition of intelligent is disconnected from self-awareness? Which jibes with anecdotal experience lol

Many are looking forward to the time when AI becomes self-aware and more fully cognizant and becomes, somehow, the "saviour" that frees mankind for other (what?) things. I am not.

I think that sort of stuff is overly optimistic a lot of the time - the “rapture for nerds,” as I mentioned earlier - but the potential is there, so I think we should try it out.

I would agree that we are not "really intelligent,"

Well, I’m going beyond that: I’m saying we may not even be self-aware lol, but that’s not my field.

Quote statistics to me, but I personally want two pilots up front who both want to go home after a safe departure and landing. I don't want "security" patrols with built-in "safeguards" policing/securing public areas (and absolutely don't want them armed).

What about a self-aware robot pilot, motivated to survive, who will cease to exist on impact with the countryside?

as for armed robots, that’s already a thing, and we’re already living in that sort of panopticon dystopia - check out Ring cameras giving info to the government without a warrant.

I'm old, and a dying breed

We’re all dying.
 
897351A8-12B5-4094-AAD8-0FF2A5335DD6.jpeg


I'd imagine we are starting something similar with this, but hey, it's "cool" tech.
 