My sometime drinking buddy Allison Barrie has posted an article about MALINTENT, which despite its rather sinister name is a scanner that will read the evil intentions behind even the most innocent pair of blue eyes.
“So here’s how it works. When the sensors identify that something is off, they transmit warning data to analysts, who decide whether to flag passengers for further questioning. The next step involves micro-facial scanning, which involves measuring minute muscle movements in the face for clues to mood and intention.
“Homeland Security has developed a system to recognize, define and measure seven primary emotions and emotional cues that are reflected in contractions of facial muscles. MALINTENT identifies these emotions and relays the information back to a security screener almost in real-time.”
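As a rough mental model of the two-stage flow the article describes, here's a minimal sketch. Every name, threshold, and emotion label in it is my own invention for illustration, not anything from the actual system:

```python
# A minimal sketch of the two-stage screening flow described above.
# All names, thresholds, and emotion labels are invented for illustration;
# this is not MALINTENT's actual design.

from dataclasses import dataclass, field

# Assumed stand-ins for the "seven primary emotions" the article mentions.
PRIMARY_EMOTIONS = ("anger", "contempt", "disgust", "fear",
                    "happiness", "sadness", "surprise")

@dataclass
class SensorReading:
    passenger_id: str
    anomaly_score: float                              # "something is off" signal
    facial_cues: dict = field(default_factory=dict)   # micro-expression scores

def screen(reading: SensorReading,
           flag_threshold: float = 0.7,
           cue_threshold: float = 0.5) -> str:
    """Stage 1: flag anomalous readings; stage 2: micro-facial scan."""
    if reading.anomaly_score < flag_threshold:
        return "cleared"
    # Stage 2: relay strong emotional cues to the human screener.
    strong_cues = [e for e in PRIMARY_EMOTIONS
                   if reading.facial_cues.get(e, 0.0) >= cue_threshold]
    if strong_cues:
        return "refer to screener: " + ", ".join(strong_cues)
    return "cleared after secondary scan"

if __name__ == "__main__":
    print(screen(SensorReading("A123", 0.9, {"fear": 0.8, "anger": 0.6})))
```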
This year I’ve spent more time in airports than ever before, which means I’ve spent colossal amounts of time standing passively in line, holding my contact lens solution in its plastic baggie and waiting for the command to remove my shoes.
If MALINTENT performs as advertised, we can stop a lot of the scanning and security theater, because the focus will be where it should have been all along, on the individual passenger instead of on all the crap that people carry onto aircraft.
Maybe we’ll even see a return to the good old days, when I could swing onto an aircraft with a bottle of shampoo in my carry-on, snugged up right next to the bottle of tequila.
MALINTENT is one of the goodies soon to appear from the Science and Technology Directorate, where Jay Cohen has shuffled a lot of taxpayer dollars into radical technologies with the possibility of long-range payoffs.
Smart guy, that Admiral Cohen.
I know this sounds like I’m being facetious, but if the desire to kill everybody nearby is what they are screening for, after 20 minutes in line holding my shoes, I’ll get pulled to the side for a strip search every time.
Of course, my piercings always set that damn detector off anyway, so it's not like it will be a new experience.
Frankly, I’m skeptical. Even basic biometric scanning for authentication purposes has a horrible success rate, after many years and millions of dollars in development. I’ll be greatly surprised if this tech is ready for real-world implementation.
But I’m with you on the shoes, and the ridiculous liquid restrictions.
This isn’t your basic biometric scanner, however. What biometric scanners have largely failed to do is to compare the features of one person against a vast database of suspected black hats.
This compares a single person to a single standard, which should be a lot simpler.
Lots of people like to natter on about how “oh, a bottle of water never killed anyone”.
No, maybe not. But a bottle of bleach mixed with a bottle of ammonia isn’t anyone’s friend.
PS “you know those machines only call out Arabs! They’re programmed to recognize Arabs and call them all terrorists! This is the work of a racist regime! I’ll sue in court and call you all racists, and the only way you can prove you aren’t racist is to publish every single detail of the machine’s design and programming!”
I predict problems (the one pete mentions is the first that springs to mind) but then again, it’s technology. If it didn’t create new and interesting problems, it wouldn’t be doing its job. This is where science fiction came from.
It is nice that it takes some of the judgement calls out. Normally I’m a fan of judgement calls, but numerous trips through airports have suggested that not only are the airports hiring for quantity rather than quality, but the places are just as stressful and unpleasant to work at as they are to pass through. Given the choice of being scanned by exhausted, angry, undertrained people who hate their jobs and being scanned by a machine, I’m voting machine.
“This isn’t your basic biometric scanner, however. What biometric scanners have largely failed to do is to compare the features of one person against a vast database of suspected black hats.
This compares a single person to a single standard, which should be a lot simpler.”
I’m not talking about the magical-tech “spot the bad guy” systems. I’m talking about the various things like thumbprint, hand geometry and other biometric scanners that have been developed, tried out and found wanting. In their simplest form, these attempt to match what the sensor sees (say, a thumbprint) against a database of authorized users, and if a match is found, access is granted.
Problem is, these don’t work as well as advertised, and they’re attempting something much simpler than reading facial expressions.
I admit last looking into this seriously about 2 years ago, so perhaps there’s been a breakthrough recently. But there’s a reason you don’t see many biometric devices actually deployed in the real world: the numbers of both false positives and false negatives are still too high for general use.
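One way to see why even the simple database-matching systems struggle in general use: the false accept rate per comparison compounds with every enrolled user. A quick sketch, using an invented per-comparison rate rather than a measured one:

```python
# Why 1:N biometric matching gets worse as the enrolled database grows:
# a small per-comparison false accept rate compounds across N templates.
# The 0.1% figure below is invented for illustration, not a measured rate.

def p_any_false_accept(far_per_comparison: float, enrolled_users: int) -> float:
    """Probability an impostor matches at least one enrolled template."""
    return 1.0 - (1.0 - far_per_comparison) ** enrolled_users

if __name__ == "__main__":
    for n in (10, 100, 1_000, 10_000):
        print(f"{n:>6} enrolled users -> "
              f"{p_any_false_accept(0.001, n):.1%} chance of a false accept")
    # At a 0.1% per-comparison rate, an impostor has roughly a 63% chance
    # of matching *someone* once 1,000 users are enrolled.
```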
I’m not saying this can’t ever be done – just that I don’t believe the SOTA has reached a point where this actually works reliably in practice.
Dubjay, do you know Bruce Schneier?
He has a big down on these systems, for a very good reason: the false positive rate slaughters them. Say you have a gizmo that is 90% accurate at spotting folks with bad intentions. Trouble is, at an airport you're screening 30,000 folks a day, nearly all of them innocent, so a 10% false-positive rate means you have to question roughly 3,000 blameless suspects per day. And if it fails to spot a bad guy 1 time in 10, that's a great big FAIL.
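To make that arithmetic concrete, here's the base-rate calculation spelled out. The 90% figure and the 30,000 passengers a day come from the comment above; the one-in-a-million prevalence of actual bad guys is my own assumption:

```python
# The base-rate arithmetic behind the false-positive objection.
# Detection and false-positive rates are the illustrative 90%/10% figures
# from the comment above; the bad-guy prevalence is an assumed guess.

passengers_per_day = 30_000
true_positive_rate = 0.90       # chance the gizmo flags an actual bad guy
false_positive_rate = 0.10      # chance it flags an innocent traveller
bad_guy_prevalence = 1e-6       # assumed: about 1 in a million passengers

expected_bad_guys = passengers_per_day * bad_guy_prevalence
innocents = passengers_per_day - expected_bad_guys

false_alarms = innocents * false_positive_rate
true_alarms = expected_bad_guys * true_positive_rate

print(f"people flagged per day: {false_alarms + true_alarms:.0f}")
print(f"of those, innocent:     {false_alarms:.0f}")
# Essentially every one of the ~3,000 daily alarms is a false alarm,
# and a real bad guy still slips through 10% of the time.
```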
There is no magic emotional state associated with “I am about to blow up this plane”. Folks get stressed at airports for all sorts of reasons — such as thinking about the risk of being given a cavity search because they’ve been singled out by a buggy piece of software — while bad guys don’t always get stressed out (they’ve trained until they’re desensitized, or they’re in the zone, or they’re an old-fashioned psychopath).
Finally, biometrics aren't what they're cracked up to be. The new Scottish Parliament building opened about 2 years ago, with a spiffy smartcard and thumbprint reader to control access. Trouble is, a lot of the parliamentary researchers and younger MSPs tended to go swimming during their lunch breaks, and the readers couldn't reliably match their waterlogged, wrinkled thumbs. About two weeks after the building opened they canned the fingerprint readers. And that was for a tightly controlled user base!
Mr. Stross – bingo. You and Mr. Schneier (who's a scarily smart guy) articulated my concerns much better than I managed to.
And dubjay – Bruce is one of the best people to go to for information about how things *really* work, where information security topics are concerned.
Oh, I know Bruce Schneier. Met him. Read his book. Agree with him, most of the time.
There pretty much =is= an emotional state associated with someone who’s about to blow himself up. Israeli security people are trained to recognize it, and their roadblocks and checkpoints are designed so as to stress people in such a way as to reveal it, and they have a =very= good record of stopping bombers. Like, no bombers from Gaza for =years.= (I don’t prezackly remember how =many= years, but I’ve met the guy who was in charge of Gaza security during that period, and he was pretty impressive.)
The fly in TSA's ointment is that the personnel rely too much on their scanners: they trust their machines more than they trust their ability to read someone, and machines can be spoofed more easily than a well-trained individual.
In the case of this particular device, what happens in case of a false positive is a more thorough scan. And if you get two positives in a row, I imagine you get interviewed and searched, but at least they have a =reason= for interviewing and searching you, as opposed to just “your random number came up,” which is what happens now.