What you're referring to is a concept called "security by obscurity." It works well right up until someone decides to do some reverse engineering and publish to the world how it works.
Worse than that, it really -doesn't-.
You can't keep a system secure by hiding its details... it's an incredibly dangerous approach. The most secure designs are the ones that have been widely peer reviewed.
I think this concern is very valid. The problem is that you can't design a system and then say, "Ok, it works! Now it's time to add security to it." I am a professional in large-scale system design and security, so these sorts of concerns are directly relevant to the field I got my degree in...
... Oh, wait, that's right, I don't have a degree... only twenty years of experience building large-scale, internet-facing production infrastructure with a high level of security, plus security consulting for companies ... clearly my opinion isn't as valid as that of an academic who's never actually done these things and has had their brain filled with fluff and stuff ten years behind the curve... or some technology professor who can pipe up and get quoted in every news publication.
Grumble. Ok, ok, yeah, I'm crossing threads here. The dialog there has started to bother me a lot... the blanket-accepted truisms without rational examination, and the obvious—but vacuous, vapid, insipid—argument that "You need a degree to get a job with a major, so clearly college is necessary..."
I'll subside. Let me just say that from the perspective of a tech security professional, there could be an issue here, but I haven't reviewed the technology. The design, from what I know of it, is conducive to a tiered trust approach based on the significance of the data... and that could be an appropriate level of discrimination. I'd need to dig into the technology to form a more solid conclusion, but I think anyone dismissing the problem out of hand is being foolish.
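Just to sketch what I mean by "tiered trust" (purely illustrative Python, not a description of the actual system; the tier names and the access check are my own assumptions):

    from enum import IntEnum

    class Tier(IntEnum):
        # Hypothetical tiers, ordered by the significance of the data.
        PUBLIC = 0      # freely readable
        INTERNAL = 1    # operational data, low impact if exposed
        SENSITIVE = 2   # personal / financial data
        CRITICAL = 3    # safety- or control-relevant data

    def can_access(caller_trust: Tier, data_tier: Tier) -> bool:
        # Allow access only when the caller's vetted trust level meets
        # or exceeds the tier required by the data's classification.
        return caller_trust >= data_tier

    # Example: a caller vetted to INTERNAL gets PUBLIC and INTERNAL data,
    # and is refused SENSITIVE and CRITICAL data.
    caller = Tier.INTERNAL
    for tier in Tier:
        print(f"{tier.name:9} -> {'allow' if can_access(caller, tier) else 'deny'}")

The point is just that the gate is applied per classification, rather than one blanket yes/no for the whole system.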
The one main safety built into the system is the cross-check, but with the move toward automated generation, that check seems to be "falling aft".
-Fox