I don't mean to be a smartass, but isn't that basically what we do as walking pieces of meat, too?
well, but with a caveat: we don’t know how yet lol.
A “rational agent,” be it software or a human, chooses actions that it expects will lead to the optimal outcome. There is a lot here, but suffice it to say that typically this means it’s trying to maximize or minimize some “cost function” or “utility function” or something of that nature.
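To make that concrete, here’s a toy sketch of "pick the action with the best expected utility." All the action names and numbers are invented for illustration; a real agent would estimate these from some model of the world.

```python
# Toy rational agent: choose the action whose expected outcome
# maximizes a utility function. Every name and value below is a
# made-up example, not a real-world model.

def expected_utility(action, outcomes):
    """Sum of probability * utility over an action's possible outcomes."""
    return sum(p * u for p, u in outcomes[action])

def rational_choice(actions, outcomes):
    """Pick the action with the highest expected utility."""
    return max(actions, key=lambda a: expected_utility(a, outcomes))

# Hypothetical (probability, utility) pairs per action:
outcomes = {
    "walk":  [(1.0, 5.0)],                # certain, modest payoff
    "drive": [(0.9, 8.0), (0.1, -20.0)],  # usually better, small risk
}
best = rational_choice(["walk", "drive"], outcomes)
```

Here "drive" wins (expected utility 5.2 vs 5.0), which is the whole point: the agent isn’t reasoning, it’s just cranking numbers through a fixed utility function.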
In turn, evaluating the algorithm running that agent is typically done by measuring completeness (that is, whether it’s guaranteed to find the right answer), cost-optimality, time complexity, and space complexity, which is far from how we typically evaluate the human decision-making process.
Most people I know who consider their own decision-making aren’t really concerned with that sort of thing. And if humans are indeed rational agents, which cost function are we trying to minimize or maximize? Or maybe it’s multiple? How do you have agents that pick their own cost functions? Is that even a thing, or is that an abstraction of something else?
So, “yes,” but really the answer is “we don’t know — ask the interdisciplinary scientists working on AI and cognitive psych and they’ll get back to you.”
I don't mean to be a smartass, but isn't that basically what we do as walking pieces of meat, too? We take what we know and make the best choices we can at the moment
The thing is, we often aren’t even trying to, and we’re all self-aware… so it may be that meeting the definition of intelligent is disconnected from self-awareness? Which jibes with anecdotal experience lol
Many are looking forward to the time when AI becomes self-aware and more fully cognizant and becomes, somehow, the "saviour" that frees mankind for other (what?) things. I am not.
I think that sort of stuff is often overly optimistic, the “rapture, but for nerds” as I mentioned earlier, but the potential is there, so I think we should try it out.
I would agree that we are not "really intelligent,"
Well, I’m going beyond that: I’m saying we may not even be self-aware lol, but that’s not my field.
Quote statistics to me, but I personally want two pilots up front who both want to go home after a safe departure and landing. I don't want "security" patrols with built-in "safeguards" policing/securing public areas (and absolutely don't want them armed).
What about a self-aware robotic pilot, motivated to survive, who will cease to exist on impact with the countryside?
As for armed robots, that’s already a thing, and we’re already living in that sort of panopticon dystopia. Check out Ring cameras handing footage to the government without a warrant.
I'm old, and a dying breed
We’re all dying.