But see: what comes first? I'd say that on the path to understanding (and indeed correlation), rote memorization is a critical component. Testing an applicant's ability to rapidly learn something new is relevant, I suspect.
I don't actually agree with that. I think rote memorization needs to be supplemented with applied knowledge, because that's how human cognition works. Modern education pedagogy relies on making people memorize a bunch of terms, concepts, dates, and definitions, and then often doesn't meaningfully build on them, so they're forgotten not long after. It's a great way to pass standardized tests, but an awful way to actually impart knowledge. Worse, it trains the brain to memorize large chunks of info that don't apply to anything, pass the test, and dump it.
But again, if we really want to know if simulators are helpful we need to build a function to quantify this sort of thing (could be as simple as washout rate versus simulator performance) and then measure it for a while.
I don't think washout rate, by itself, is a good predictor, given the existence of gouge. I did a quick search and I'm looking here at a document sent to me, two pages long, with condensed versions of all of the limitations and systems knowledge checked for during training, along with a bunch of mnemonics for memorizing the things on a flow.
I have, at multiple times through different training events, been offered photos (or screenshots) of the instructor's iPad. (wink wink, nudge nudge) (I refused)
Literally straight up cheating.
That defeats the purpose of assessments, and skews data gathering. What if the people who are most likely to cheat are the ones most likely to succeed? Is that who you want to optimize for?
IMO, training needs some love. (I loved my previous airline's training department, to be clear, but I've seen worse in other places and heard lots of stories from people I trust at other shops.)
But also, it's really not training's job to wash people out. Eh, anyway...
Of course nearly every pilot is going to be against a simulator evaluation during hiring. It’s annoying to learn a profile and annoying to have to perform under pressure. Though we are subject matter experts to varying degrees, we are probably not reliable and impartial people to ask.
I don't think a profile is useful; the "fly an ILS and a hold" ride isn't particularly informative. I'm suggesting using the simulator as a platform for evaluating CRM, assessing the candidate's risk-management mindset, gauging judgment, and observing how they respond under pressure.
I definitely preferred when I didn't have to do a sim ride prior to getting hired, and much preferred taking the actual airplane around the patch. I recognize that this is impractical for an airline.
Heh, I'd prefer that too, but yeah.
I don’t know, trying to be objective about this in the rear view mirror is hard but I really truly think the only way to tackle this problem is with data.
That is because that's your bias and your focus at the present time. Data is only as reliable as its inputs and controls, and even careful analysis can be misleading if the underlying data itself is flawed.