I’m of the opinion that we should voice strong concerns about things like working from home or anything that could impact safety; but hey, I’m just trying to prolong my career until we’re all replaced by DispatchGPT…
Just for clarity on the state of AI:
GPT-3 (which ChatGPT was initially based on) was trained on 570 GB of data, which is not actually all that much. Some napkin math indicates that the amount of data needed for a truly knowledgeable and, importantly,
unbiased AI model of dispatch would be orders of magnitude higher, and there is a lot of work involved in generating AI training data. 570 GB was a LOT for them to acquire and sift through; curating terabytes would take years. Training GPT-3 cost about $4.6 million; training GPT-4 cost over $100 million. It wouldn't be cheap, it wouldn't be easy, and I don't expect economic pressures to allow it at present.
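To make that napkin math concrete, here's a quick sketch using only the figures quoted above; the 100x data multiplier is an assumption standing in for "orders of magnitude," not a measured number:

```python
# Napkin math from the figures above. The 100x data multiplier is an
# illustrative assumption, not a real estimate of dispatch-domain data needs.

GPT3_DATA_GB = 570            # GPT-3 training data, as cited above
GPT3_COST_USD = 4_600_000     # reported GPT-3 training cost
GPT4_COST_USD = 100_000_000   # reported lower bound for GPT-4 training cost

# "Orders of magnitude higher" -- assume 100x for illustration:
dispatch_data_gb = GPT3_DATA_GB * 100
print(f"Hypothetical dispatch corpus: {dispatch_data_gb / 1000:.0f} TB")

# Cost already grew ~22x between one model generation and the next:
cost_multiplier = GPT4_COST_USD / GPT3_COST_USD
print(f"GPT-3 -> GPT-4 cost scaling: ~{cost_multiplier:.0f}x")
```

Even at a conservative 100x, you're looking at tens of terabytes of curated, vetted domain data on top of a training bill that's already been climbing ~20x per generation.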
Additionally, Large Language Models like GPT-3 and GPT-4 have no real concept of uncertainty. They will confidently spit out whatever information they have, with no clue whether it's accurate or not. So a dispatcher will still be needed to review the generated output for as long as LLMs remain the state of the art. That means all those millions you pour into developing your AI get you very, very little in gains. So until someone develops something better than LLMs, nobody is going to bother.
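To see why that is baked into how these models work: an LLM picks its next token from a softmax over raw scores, and a softmax always produces a confident-looking probability distribution, grounded or not. A minimal sketch (the logits here are made-up toy numbers, not from any real model):

```python
import math

def softmax(logits):
    """Convert raw model scores into a probability distribution."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Toy logits for two hypothetical next-token candidates.
# The values are invented for illustration; the point is structural.
logits = [3.2, 1.1]
probs = softmax(logits)

# The output always sums to 1 and always ranks some answer on top --
# the model has no channel for saying "I don't know."
print(probs)
```

Run it and the top candidate comes out around 89% "confidence," and that number would look exactly the same whether the underlying answer was right or fabricated. Nothing in the mechanism distinguishes knowledge from noise.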
And yes, I'm aware that FlightKeys claims to use AI to improve its output. I haven't personally seen how that works in practice, but the two points above apply to it just as much as to anything else, so my feeling is that it's most likely just marketing. After all, how many CS majors in this field really understand state-of-the-art AI? My guess is I'm one of a handful, and they figure they can easily dupe the rest of you.