Exploits in GPS and ADS-B systems

Gundam

I don't know if this has been discussed yet, but here it is. Basically it covers the lack of security built into ADS-B and, from what I could understand, even basic GPS, and it raises concerns about the increased reliance on GPS over ground-based systems, particularly for approaches. He also makes a brief aside about possible exploits via Wi-Fi on board aircraft, and at the tail end they talk a little about possible TCAS exploits. Basically, we are transmitting and using data as carelessly as we use the radios, with no regard for data manipulation, leaving ourselves highly vulnerable to an attack on or through the system itself.

Flight tracking is not too big a concern to me, but manipulation of the information going to the pilot, especially with regard to GPS, is very disturbing. If anyone can shed some light on whether this is as serious as it sounds, please chime in.
 
I know THE guy from LockMart/the FAA on ADS-B. Outside of the engineers actually building the ADS-B systems he probably knows more about it than anyone else in the world.

He's not worried, so neither am I.

Also remember that secondary surveillance radar will continue to be in use and transponders will be required for the foreseeable future.

Finally, spoofing VOR/ILS/SSR returns isn't exactly rocket surgery and we've not had a problem with that.
 
Well, I wish I was as trusting. However, they also make the case that the designers are safety engineers: they are trying to make sure the system is safe from malfunction and other hazards, but NOT from people who are intentionally trying to manipulate it. They are thinking along the lines of safety, not security. The "this is how it's always been done" attitude is known to be dangerous in terms of safety, and I am sure the same is true of security. In addition, with the ground-based systems you have to get near a facility to create a problem, and even then it is localized to THAT facility; now small devices that no one even sees could be placed on planes and cause serious interruptions or impairments in the system. Just because someone hasn't done something doesn't mean they won't. I am sure we can all think of examples where this has proven true. I hope people can just manage to get their kicks from shining lasers at us rather than this. In addition, the military's use of drones will increase the attention focused on ways to interfere with them, and one of those ways is interfering with the GPS signal: at best creating noise, at worst making the operator think they are somewhere they aren't.
Maybe you could ask THE guy if he could come and discuss this, or ask him yourself; I would definitely appreciate some answers from his side. To me it seems like we are building an increasingly automated system and allowing more outside sources to interact with it. This could be a good thing, but it also requires new security measures equivalent to what exists in every other industry that is vulnerable to the same types of threats.
 
"Experts" gave similar assurances about networked medical devices a decade and a half ago. It's not that the people designing these systems are stupid, it's just that most of them have experience and directives that are very narrowly proscribed. Most are not even considering potential security issues as they crunch their design numbers.
If you really want to know if a system is vulnerable, just ask a fifteen year old Russian hacker. It's much faster and cheaper than waiting for the BLAND Corporation to finish their $20MM security study.
As for VOR/ILS security. It's been considered. Not so much from a hacking standpoint, because hacking wasn't really a "thing" back then. But, ironically, one of the driving forces behind GPS development was the perceived insecurity of the ground-based radio systems.
 
Well, for one thing, TCAS still operates on Mode C right now. For another, in the NAS and on controllers' scopes, the software processes both Mode C returns AND ADS-B data. The way ADS-B Out is set up right now, you still use a 4-digit code that corresponds to your Mode C code. If someone spoofs an ADS-B message saying there is an airplane where there isn't one, the software throws out the ADS-B return if there is not a corresponding Mode C return in the general area (roughly the kind of cross-check sketched below), among other safety and security features.

Look, as long as we rely on technology we are going to be vulnerable to attacks on that technology, but I really think some of the alarmists are crying wolf. I'm not one to put all our reliance on engineers, but holy crap, do you really think you, as a random dude on the Internet, know more about the risks and safeguards built into the system than the guys who put millions of dollars and man-hours into developing, testing, developing, more testing, doing initial operational trials (ongoing over the past 8 years in parts of the country, by the way) and even more testing?

Do you worry about the integrity of the terrain and navaid databases used for EGPWS and airline FMS systems? You want to talk about a way to wreak havoc on our airspace system? Start pumping bad data in there. There's a gazillion things you can worry about; for me, getting hit by a car while walking up the road to the post office is way higher on my list of things to worry about than ADS-B spoofing.
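
Just to make that cross-check idea concrete, here is a toy sketch. Everything in it (field names, the 5 NM threshold, the data layout) is my own illustration, not the actual NAS fusion logic, which is obviously far more involved:

    import math

    # Toy squawk/position correlation: keep an ADS-B target only if a secondary
    # radar return with the same beacon code shows up close to where the ADS-B
    # message claims the aircraft is. Purely illustrative data layout.

    def nm_between(lat1, lon1, lat2, lon2):
        """Approximate great-circle distance in nautical miles (haversine)."""
        r_nm = 3440.1  # mean Earth radius in NM
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp = math.radians(lat2 - lat1)
        dl = math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * r_nm * math.asin(math.sqrt(a))

    def correlate(adsb_targets, radar_returns, max_miss_nm=5.0):
        """Split ADS-B targets into those corroborated by a radar return with the
        same squawk within max_miss_nm of the claimed position, and the rest."""
        accepted, rejected = [], []
        for t in adsb_targets:
            match = any(
                r["squawk"] == t["squawk"]
                and nm_between(t["lat"], t["lon"], r["lat"], r["lon"]) <= max_miss_nm
                for r in radar_returns
            )
            (accepted if match else rejected).append(t)
        return accepted, rejected

    # A spoofed target with no corresponding radar return gets filtered out:
    adsb = [{"squawk": "4521", "lat": 40.64, "lon": -73.78},   # real traffic
            {"squawk": "2345", "lat": 40.70, "lon": -73.90}]   # phantom target
    radar = [{"squawk": "4521", "lat": 40.65, "lon": -73.77}]
    good, suspect = correlate(adsb, radar)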
 
"Experts" gave similar assurances about networked medical devices a decade and a half ago. It's not that the people designing these systems are stupid, it's just that most of them have experience and directives that are very narrowly proscribed. Most are not even considering potential security issues as they crunch their design numbers.
If you really want to know if a system is vulnerable, just ask a fifteen year old Russian hacker. It's much faster and cheaper than waiting for the BLAND Corporation to finish their $20MM security study.
As for VOR/ILS security. It's been considered. Not so much from a hacking standpoint, because hacking wasn't really a "thing" back then. But, ironically, one of the driving forces behind GPS development was the perceived insecurity of the ground-based radio systems.
What do networked medical devices have to do with anything? Have there been major hacking attacks on them leading to loss of life? If so, I haven't heard about it.
 
The technology to jam or bend VOR/ILS transmissions has been around since before WWII. As for ADS-B security, the first rule of security is not to detail how the security works.
 
What you're referring to is a concept called "security by obscurity." It works well right up until someone decides to do some reverse engineering and publish to the world how it works.

For a target to end up on a controller's screen, there are multiple layers of triangulation and signal timing for range and distance from the GBT station(s) that take place before the software even looks at the GPS data, the Mode S code, or the hex code. For a phantom target to end up on an aircraft traffic display, yes, that is a possibility, but it's also a current risk with TCAS, the same as someone hacking the air traffic network, etc. As @Roger Roger pointed out, attacking aircraft automation through a database update or a maintenance computer would be much more effective at causing worldwide issues.
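
For flavor, here is a toy version of that range/timing sanity check: does the position a target claims in its message agree with what the signal timing at the ground stations implies? The flat x/y geometry, the pretend one-way time measurement (the real system works from time differences across synchronized stations), and the 1 NM tolerance are all my own simplifications:

    import math

    C_NM_PER_US = 0.161874  # speed of light in nautical miles per microsecond

    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def plausible(claimed_xy, stations, tolerance_nm=1.0):
        """stations: list of ((x, y), measured_one_way_time_us) tuples."""
        for station_xy, t_us in stations:
            rf_range = t_us * C_NM_PER_US  # range implied by the RF timing
            if abs(dist(claimed_xy, station_xy) - rf_range) > tolerance_nm:
                return False  # claimed position doesn't match what the timing says
        return True

    # A transmitter near station A that claims to be 45+ NM away fails the check:
    stations = [((0.0, 0.0), 10.0), ((30.0, 0.0), 176.0)]  # made-up timings
    print(plausible((1.6, 0.2), stations))    # consistent with the timings -> True
    print(plausible((40.0, 25.0), stations))  # wildly inconsistent -> False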
 
Worse than that, it really -doesn't-.

You can't keep details secure by not publishing them... it's an incredibly dangerous approach. The most secure things are those which are widely peer reviewed.

I think this subject is very valid. The problem is that you can't design a system and say "Ok, it works! Now it's time to add security to it." I am a professional in large scale system design and security, so these sorts of concerns are directly relevant to the field I got my degree in...

... Oh, wait, that's right, I don't have a degree... only twenty years of experience building large scale internet-facing production infrastructure with a high level of security, and security consulting for companies ... clearly my opinion isn't as valid as an academic who's never actually done these things and has had their brain filled with fluff and stuff ten years behind the curve... or some technology professor who can pipe up and get quoted in every news publication.

Grumble. Ok, ok, yeah, I'm crossing threads here. The dialog there has started to bother me a lot... the blanket-accepted truisms without rational examination, and the obvious—but vacuous, vapid, insipid—argument that "You need a degree to get a job with a major, so clearly college is necessary..."

I'll subside. Let me just say that from the perspective of a tech security professional, there could be an issue, but I haven't reviewed the technology. The design, from what I know of it, is conducive to a tiered trust approach based on the significance of data... and that could be an appropriate level of discrimination. I'd need to dig in to the technology to form a more solid conclusion, but I think anyone dismissing the problem out-of-hand is being foolish.
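
To make the tiered-trust idea a little more concrete, here is a rough sketch. The tier names, the specific checks, and the policy table are all invented for illustration; nothing here is the actual ADS-B design:

    from enum import Enum

    class Tier(Enum):
        ADVISORY = 1      # e.g. broadcast info products: nice to have
        SITUATIONAL = 2   # traffic picture shown to pilots/controllers
        SEPARATION = 3    # data actually used for separation decisions

    # Hypothetical policy: the more significant the data, the more independent
    # corroboration it needs before the system acts on it.
    REQUIRED_CHECKS = {
        Tier.ADVISORY: {"format_valid"},
        Tier.SITUATIONAL: {"format_valid", "range_timing_consistent"},
        Tier.SEPARATION: {"format_valid", "range_timing_consistent", "radar_correlated"},
    }

    def accept(tier, passed_checks):
        """Accept a message only if it passed every check its tier demands."""
        return REQUIRED_CHECKS[tier] <= set(passed_checks)

    print(accept(Tier.SITUATIONAL, ["format_valid", "range_timing_consistent"]))  # True
    print(accept(Tier.SEPARATION, ["format_valid", "range_timing_consistent"]))   # False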

The one main safety built into the system is the cross-check, but in the automation generation that seems to be "falling aft".

-Fox
 
Don't get me started. My "it works until" comment was tongue in cheek. I'm sure we could swap security horror stories. My favorite crutch of the security-by-obscurity folks is refusing to fix easily exploitable holes in their products. Three times in my relatively young career I have found exploitable holes in products my employer used. I reported the issues in detail to the product developers, expecting they'd fix them or ask for more information. Instead they refused to fix anything and threatened my employer and me with legal action if I went public with the information.

Unfortunately I was an employee at the time, and my employer, not wanting the hassle, told me to stand down.
 
Enter real world, stage left. Thank you for the story.
 
There are no apparent security mechanisms to protect the confidentiality, integrity or availability of the data transmitted between aircraft and air traffic controllers. As a result, a motivated attacker could inject false targets into the system or prevent legitimate targets from being properly displayed. Such actions could have devastating effects on the entire NAS infrastructure.

Interesting document from 2011 by a USAF Major, which brings up many of the same points being discussed.

http://apps.fcc.gov/ecfs/document/view.action?id=7021694523

“...the FAA specifically assessed the vulnerability risk of ADS-B broadcast messages being used to target air carrier aircraft. This assessment contains Sensitive Security Information that is controlled under 49 CFR parts 1 and 1520, and its content is otherwise protected public disclosure. While the agency cannot comment on the data in this study, it can confirm, for the purpose of responding to the comments in this rulemaking proceeding, that using ADS-B data does not subject an aircraft to any increased risk compared to the risk that is experienced today [7].”

Finally, historical precedence has demonstrated how unencrypted data links can be exploited by a motivated adversary.

As early as 2006, concerns were raised about the ability of hackers to introduce as many as 50 false targets onto controllers’ radar screens [20]. Dick Smith, former chairman of Australia’s Civil Aviation Administration, reported this was possible with the use of a general aviation transponder, a laptop computer and a $5 antenna. Smith also warned that real-time positioning broadcasts allow adversaries to track military flights and criminal elements to monitor the movements of law enforcement.
 
Just because a vulnerability exists doesn't mean there is a high probability of attack, especially with physical systems (GPS/VOR jammers).

Until substantial intel states otherwise, there probably isn't a high enough likelihood of such an attack to justify a probable $Billion expenditure to address it.
 
I believe the UK is doing just that: eLoran.
 
Basic risk management: severity versus probability. The thing that bears consideration is the possibility of a composite attack, where systems that are individually low risk are exploited in combination with other attacks, producing a complex exploit with a much higher severity.

The funny thing about building a secure system... it doesn't cost $Billions to secure if you start with security in mind.

-Fox
 
Probability of a potential incident isn't the only consideration. You also have to weigh the potential severity of the outcome if the vulnerability were exploited. This is called the risk matrix. The risk matrix is unfortunately an overly simplistic attempt at quantifying the level of risk, but it gets the point across.
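
A trivial worked example of that severity-times-probability weighting (the 1-5 scales, the scores, and the threshold are arbitrary, purely to show the mechanics):

    # Classic risk-matrix scoring: risk = severity x probability, each on a 1-5 scale.
    # The example scenarios and their scores are made up for illustration.

    def risk_score(severity, probability):
        return severity * probability

    scenarios = {
        "GPS jamming near one airport": (3, 3),
        "ADS-B phantom-target spoofing": (4, 2),
        "Corrupted navaid/terrain database update": (5, 2),
    }

    for name, (sev, prob) in sorted(scenarios.items(),
                                    key=lambda kv: -risk_score(*kv[1])):
        score = risk_score(sev, prob)
        flag = "  <-- mitigate first" if score >= 9 else ""
        print(f"{name}: severity={sev} probability={prob} risk={score}{flag}")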

Edit: The fox beat me to it.
 