001-sec: Unsolicited data
A brief [?] digression about security and perimeter control
Occasionally the question comes up about the base operating system[s] used in
the Prius -- the answer usually being something custom from the ground up
running on dedicated hardware. While nobody except Toyota is 100% certain,
it seems mercifully UNrelated to popular general-purpose operating systems.
There's no need for it to be. Mainstream operating systems try to be far
too many things at once, which is their downfall:
I sometimes stop and think how it's interesting, and frightening, how
many things are moving over to computers and digital media or modes
in general. The home desktop machine has become the TV, the stereo,
the "what we did on our vacation" slide projector, the place where
bills and taxes get done, sometimes even the phone, not to mention
other types of communication [leaving aside the inherent *merit* of
such communication] ... and if it's not the home desktop, it's the
embedded processor in commonplace things like cars that we often like
to think don't need them. The present and coming ubiquity is really
scary, and I try to keep in my mind how things will need to revert if
all of that were to go away sometime. Sometimes they can really be a
tool to solve real problems, such as improving gas mileage and
emissions, but so often they are also used to *generate* more
perceived problems to supposedly solve. It's an enabling thing on
one hand but sometimes it "enables" entirely too much chaff along
with. I just really pity anybody who's got their Windows box
connected to the net these days -- the average knock-over time is
down to about four minutes, and people who know are saying that if
one of those machines has EVER touched the internet then it's very
likely got some piece of uncontrolled malware running on it that the
user doesn't want or even know about. Maybe if we ever get past that
dismal state of affairs, things would be better...
So mull that over for a second. Think of all the lines of code you're
running when making a call on your cellphone. After you get used to it,
that all abstracts away, leaving the human conversation with all its nuances.
Well, except when the phone makes the wrong decision about which tower to
talk to and drops signal, or a Symbian virus starts wreaking havoc. Those
things can happen because the manufacturers are *allowed* to slack off in the
robustness and error-handling arena, and the market still accepts it because
of laziness and complacency. If people had a little more active cynicism,
and voted with their wallets against cellphones that break up or desktops
that need to be rebooted six times a day, we wouldn't have these problems
in the first place. Same goes for security issues, where the consequences
of failure are nastier and more subtle. Security failures don't always show
up as obvious problems -- instead they are often hidden bombs waiting to be
triggered by external influence.
In terms of embedded system robustness -- while designers are starting to get
many things right, or at least better, in the engine/driveline control area,
the picture is still *very* unhappy in the area of remote data transmission,
i.e. network security. Engineers still don't think of it as a network
security problem, but it really is. Way too much laxity still exists in
most implementations, usually done in the name of "convenience". So I'll
wait until they start getting *that* right [in both a technological and
philosophical way] before I buy into any of it.
That touches on the reason I strictly did *not* opt for any of the smart-key,
OnStar, Bluetooth, etc. options when buying a car. The OBD-III [that's "three"]
concept that the Feds are trying to push through is scary enough by itself --
a scenario in which cars randomly report their emission-control status to
wireless relayers along the roads. While I've accepted computer control in
cars and the idea that their operating parameters can be messed with through
a serial interface, what I *don't* want to have is the car able to interact
with the outside world without specific and enforceable authorization to do
so [i.e. plugging my widget into the diagnostic port]. The wireless
unlocker-remote keyfob is a bit of a compromise -- I use it, but I could
certainly disable that too and use the physical key. But the fob is still
an RFID, too, potentially readable from within a few feet of my pocket.
The Prius service manual documents a block of "operational history data"
that can be recovered with the hand-held scanner, that details various
out-of-limit events if they have occurred. This allows a technician to
believe he can make some kind of judgement about your driving ability or
tendency to abuse the car. This is really disturbing, and it seems unlikely
that clearing the ordinary OBD-II freeze frames with a non-Toyota scantool
will also clear this "big brother in your vehicle" data. Some of the
parameters recorded are really irrelevant, like how often you use Neutral.
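For contrast, the ordinary freeze frames and trouble codes mentioned above
ride on the documented SAE J1979 request/response modes that any generic
scantool speaks; the proprietary "history" block does not. A rough sketch of
that documented layer [the mode bytes and two-byte DTC encoding are from the
standard; the framing and helpers around them are illustrative, not any
particular tool's code]:

```python
# Sketch of the documented OBD-II (SAE J1979) request framing a generic
# scantool uses.  Mode 0x03 asks for stored trouble codes; mode 0x04
# clears them along with the associated freeze frames.  A proprietary
# "history" block lives entirely outside these modes.

READ_DTCS = bytes([0x03])    # request stored DTCs
CLEAR_DTCS = bytes([0x04])   # clear DTCs + freeze frames

def decode_dtc(b1: int, b2: int) -> str:
    """Decode one two-byte trouble code per SAE J2012: the top two bits
    of the first byte pick the system letter, the rest are hex digits."""
    letter = "PCBU"[(b1 >> 6) & 0x03]
    return f"{letter}{(b1 >> 4) & 0x03}{b1 & 0x0F:X}{(b2 >> 4) & 0x0F:X}{b2 & 0x0F:X}"

def parse_dtc_response(frame: bytes) -> list[str]:
    """A mode-03 response echoes the mode + 0x40, then DTC byte pairs."""
    assert frame[0] == 0x43, "not a mode-03 response"
    pairs = frame[1:]
    return [decode_dtc(pairs[i], pairs[i + 1])
            for i in range(0, len(pairs) - 1, 2)
            if pairs[i] or pairs[i + 1]]   # skip 00 00 padding

print(parse_dtc_response(bytes([0x43, 0x01, 0x23, 0x00, 0x00])))
# -> ['P0123']
```

The point being: everything above is an open, inspectable protocol. Whatever
clears or reads the Toyota history block is not in that book.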
In addition, most car manufacturers are adding "black box" event data
recording to components such as the air-bag ECU, to capture and hold the five
seconds or so leading up to a collision. Google for "edr" and "telematics"
for some really scary stuff. This data arguably belongs to the owner of the
car, but the protocols for recovery are not documented and only offered to
"authorized" entities such as insurance adjusters and law enforcement. This
is more of that big-brother attitude that completely cuts the consumer out
of the loop and makes it feel more and more like Toyota still owns the car
regardless of what it may say on the title. Basically, the end user is once
again denied any right to privacy, and the only ones who win in the end are
the manufacturers and their "authorized" partners.
Regardless of source, methods, or ownership, data like this should not be
available without specific authorization, and given the insecure way most
wireless transmission schemes are designed, that authorization at the very
least must be physical access to the interface -- i.e. entering past the
vehicle's perimeter and plugging into the port. Even if a wireless
connection claims to be secure, historically almost every implementation
has inherent protocol problems that can result in unexpectedly expanded access.
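To give that "specific and enforceable authorization" idea some shape: the
decades-old answer to replayable wireless exchanges is a fresh-nonce
challenge-response over a shared secret. A conceptual sketch [Python stdlib
only; the key material is made up, and this is emphatically *not* any
vendor's actual smart-key protocol, just the textbook shape of one]:

```python
# Conceptual challenge-response sketch: the car issues a fresh random
# challenge; the fob proves it holds the shared secret without ever
# transmitting it.  A passive eavesdropper can't replay an old answer
# because the challenge changes every time.
import hmac
import hashlib
import secrets

SHARED_SECRET = b"burned-in-at-the-factory"   # hypothetical key material

def car_challenge() -> bytes:
    return secrets.token_bytes(16)            # fresh nonce per attempt

def fob_response(secret: bytes, challenge: bytes) -> bytes:
    return hmac.new(secret, challenge, hashlib.sha256).digest()

def car_verify(secret: bytes, challenge: bytes, response: bytes) -> bool:
    expected = hmac.new(secret, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)   # constant-time compare

challenge = car_challenge()
assert car_verify(SHARED_SECRET, challenge, fob_response(SHARED_SECRET, challenge))
assert not car_verify(SHARED_SECRET, challenge, fob_response(b"wrong-key", challenge))
```

Even this toy version beats a fixed-ID RFID ping: the secret never goes over
the air, and yesterday's exchange is useless today. [Relay attacks, where the
radio link itself is extended, are a separate problem it does not address.]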
There is a host of potential problems in the smart-key system, which offers
the convenience of simply having an RF transponder on your person when
attempting to enter or start the car. This means that the car is always
emitting little RF bursts, trying to query for a key somewhere near it.
A forest of oscillator antennas allows the system to make a guess where the
key currently is -- inside, outside, behind, etc. The system can guess wrong
fairly easily, and some of the logic to prevent someone locking the keys
inside the car can fail *open* without the driver's knowledge. It tries to
beep a warning for about eight different scenarios, and that doesn't cover
oddball situations when it can't quite tell where a key is or there may be
*two* keys in the area. Problems and confusion have been reported with
smart-key implementations in all sorts of vehicles, and because they are being
pushed to market and work well enough most of the time, they are becoming
more commonplace [and accepted] regardless. That doesn't necessarily mean
they're really ready for prime time, but mediocrity once again rules.
Direct quote from the "new car features" manual:
Because the smart entry & start system is so convenient, the
driver could become unaware of the presence of this key, which
could lead to human errors. ... a serious problem, such as an
inability to restart the hybrid system once it has been turned
OFF or the possible theft of the vehicle.
So it is clear, once again, that this "convenience" comes at a potentially
high price. The obvious answer is to not install it, but far too much
weight is given to the typical "we have it and we think it's cool so we
gotta use it" argument without any further thought given to implementation.
Basic computer security principles and the concept of simplification in
security-critical applications have been known for decades, and it's
absolutely astounding how the industry keeps forgetting those principles
over and over again, even after repeated and well-publicized lessons.
A token nod toward "patch management" is sometimes given when vendors
are forced to respond to security hole disclosure, but that is a fatally
flawed model as we're already seeing. They're all too interested in
features and glitz and doing the minimum required to achieve an easy
and seamless "experience", and really, if it keeps up like that then
it *will* kill us all.