I have the privilege of knowing some very, very smart people who have the experience and the track record to show for it. When they design something, they can afford to be a bit loose in their use of formal specification.
Why? Because usually, when you ask them questions like "What did you consider as necessary capabilities for geographic data synchronization in this transaction system?", they will give you an answer along the lines of: “We accounted for a distributed algorithm with external-clock synchronization, supporting Suzuki-Kasami for mutual exclusion. The same exclusion scheme used homogeneously.”
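For readers who haven't met Suzuki-Kasami before, here is a rough sketch of the token-based mutual exclusion it refers to. This is my own simplified single-process simulation (all names are mine, and "messages" are direct method calls delivered instantly and in order), not anything from the system in the quote:

```python
from collections import deque

class Node:
    """One participant in Suzuki-Kasami token-based mutual exclusion
    (simplified simulation; real deployments deal with message delay)."""

    def __init__(self, node_id: int, n_nodes: int):
        self.id = node_id
        self.rn = [0] * n_nodes   # highest request number seen from each node
        self.token = None         # {"ln": served request numbers, "queue": waiting ids}
        self.in_cs = False
        self.others = {}          # id -> Node, filled in by make_cluster

    def request_cs(self):
        if self.token is not None:      # already holding the idle token
            self.in_cs = True
            return
        self.rn[self.id] += 1           # new request number, broadcast it
        for node in self.others.values():
            node.on_request(self.id, self.rn[self.id])

    def on_request(self, j: int, n: int):
        self.rn[j] = max(self.rn[j], n)
        # Idle token plus an unserved request: hand the token over immediately.
        if self.token and not self.in_cs and self.rn[j] == self.token["ln"][j] + 1:
            token, self.token = self.token, None
            self.others[j].on_token(token)

    def on_token(self, token: dict):
        self.token = token
        self.in_cs = True               # receiving the token grants the critical section

    def release_cs(self):
        self.in_cs = False
        t = self.token
        t["ln"][self.id] = self.rn[self.id]   # mark our own request as served
        for j in range(len(self.rn)):         # enqueue every still-pending requester
            if self.rn[j] == t["ln"][j] + 1 and j not in t["queue"]:
                t["queue"].append(j)
        if t["queue"]:                        # pass the token along
            nxt = t["queue"].popleft()
            self.token = None
            self.others[nxt].on_token(t)

def make_cluster(n: int):
    nodes = [Node(i, n) for i in range(n)]
    for node in nodes:
        node.others = {p.id: p for p in nodes if p is not node}
    nodes[0].token = {"ln": [0] * n, "queue": deque()}  # node 0 starts with the token
    return nodes
```

The property that matters, and the one a formal spec would force you to state, is that at any instant at most one node has `in_cs` set, no matter how the requests interleave.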
Your goal as an architect who makes decisions about implementing distributed solutions for critical systems is to be able to provide this kind of answer. If your name is not, I don’t know, Leslie Lamport, or if you don’t hold an IQ of over 150, you have to be involved in formal specification. “The cloud does not fix stupidity” is one of the famous quotes in this industry. And when talking about critical systems in the cloud, the “stupidity” threshold in terms of IQ is pretty damn high.
What worries me is that I continue to see 99%+ of critical systems being delivered with a design based on the gut feeling of people holding an under-the-threshold IQ. And then I get to see them grow and be operated. Holy f**k, that’s a mess. Listen, this is normal; it’s not your IQ that I am blaming, it’s the arrogance. Just find the budget to force yourself to go through the process of formal specification. You will be forced to think about the problems that you cannot foresee. The people who are paying for your design don’t know shit anyway about what it is that you do all day. And you know it. But that’s a whole different discussion.
I know that the time spent on formal specification is very valuable, but hey, that’s why your system is critical.
There’s one anonymous quote here that I like: “The money you make by being the first to deliver a critical system quickly turns to dust when the critical system fails.” – Chinese Proverb (I kid, of course!)
In Romania, there are a lot of laws either in debate or already passed about giving the authorities direct abilities to get all e-data: e-mails, IMs, with mandates for the vendors to hand unencrypted data out. Please read this again.
Now, I want to lightly share some of my experience working for some top cybersecurity ‘consulting’ companies (lightly, because of NDAs).
Now, the layout is as follows: justice, law enforcement, and ultimately governments mandate surveillance when justified. Alright. Good. The surveillance is happening anyway (phone taps, physical tracking, e-mails, IMs, and whatnot) with help from various agencies that are specialized in doing that. The level of expertise that various government agencies have in terms of electronic surveillance is not always up to date. This is to say that their capabilities are limited. This is normal. When they face a surveillance task they can’t pursue, they outsource. There are cybersecurity companies that offer such services. These companies have cybersecurity researchers who are on top of various 0-days and the corresponding exploits, and they master this.
How do I know this? I was one of these guys that offered cybersecurity research services for such a company. Repeatedly. Actually only two times, for two different companies. So not a lot of experience here. Just enough. I’m not going back there!
So, what’s my problem?
Let me guide you through an example:
Assume there’s a mandate that asks for IMs sent by the suspect. This is currently achieved, usually, by compromising the user’s device (phone, laptop, PC, Mac, whathef**kever) with some malware that is usually designed by one of these contracted companies. Surveillance happens ON THE COMPROMISED DEVICE ITSELF.
Surveillance does not happen on the ‘encrypted wire’, or on the IM vendor’s infrastructure, but on the TARGETED DEVICE.
Now, suppose this new law passes, mandating the IM service providers to hold unencrypted data (or hold encryption keys) FOR EVERYBODY, ‘just in case’ a mandate is issued.
A fairly long time ago, at Black Hat (as far as I remember), somebody came on stage with an implementation of a proposed design for an electronic passport and demonstrated its extremely worrying security flaws. I’ll leave the pleasure of researching the event to you. Back then (maybe around 2001-2003?), the spirit of Black Hat (or whatever event it was) was free, and if you followed that kind of security event you were consistently exposed to serious security researchers challenging the disastrous security implementations of either mainstream technology providers (such as Microsoft 😉 ) or of government digitalization.
What does this have to do with passports today? Well, they are a bit more secure thanks to that kind of involvement. Secure for both their users and their governments. And even if they were not so secure, the biggest loser from a flawed normal passport is its government. The projected user impact is fairly low. Identity theft by means of passport forgery is not a huge phenomenon.
Legitimate concerns for any passport user
From the user security perspective, let’s explore:
Who does the passport say that I am?
How do I know I carry a valid (authentic and integral) passport at any time?
What does the document scanning say about me?
e.g. it was scanned while passing through customs, ergo anybody who is able to see a record of that scan will know that I was there at a specific point in time
Extrapolating to COVID passes
Well. As far as the ethical concerns, I’ll leave those up to each and every one of you.
But as far as the security goes, I have a few very worrying concerns:
The implementation details are not widely available.
There are a lot of security incidents with these documents. And they are not, and cannot be, hidden even from the mainstream media. (https://www.dw.com/en/security-flaws-uncovered-in-eu-vaccination-passport/a-58129016)
How do I, as an owner and user of the pass, know WHO views or verifies my scans, and when?
I am not going to continue the list; it is already embarrassing.
This last one is a bit concerning. Here’s why: with a normal passport, once scanned, the person who gains or has access to a list of these scans knows that you went over some border sometime. However, with COVID passes, the same actor who has or gains access to your pass scans knows where you are, more or less, right now. Remember, you’ll scan your pass even if you go to a restaurant.
What do I want?
As a COVID pass user I want to have the certainty that no passport scans are stored anywhere. And I do not want anybody except the scanner to be able to see my identity. Because the moment they do, they’ll know where I am and what I’m doing. And this is unacceptable. Mainly because the scanning frequency of such a pass can be daily for some of us.
So, can I get what I want?
I have tried to get implementation details both directly, by asking for them legally, and through the security community (both academic and professional). The result? No result. The public debate(s) prior to implementing and adopting these passes were a complete joke. How long until XXXX gets hold of a list of my scans and then follows me around, or worse? Seems far-fetched? Take a look at the latest data leaks 😉
But as the philosophers Jagger and Richards once said: “You can’t always get what you want!”
Something happened this month with Romania’s ING Bank. I’m sure you’re probably aware of it. They managed to execute several (well, maybe more than just several) transactions more than once. Well, shit happens, I guess. They have since fixed it. At least they say so. I choose to believe them.
This unfortunate happening triggered a memory of my first time working in a mission-critical environment where certain operations were supposed to be executed exactly, absolutely, only once. It was for a German company, back in 2013. I am not allowed to mention or make any reference to them or the project, so let’s anonymously call them Weltschmerz Inc. It went something like this (oversimplified diagram):
I don’t claim that ING’s systems can be oversimplified to this level, but for the sake of the argument, and of the protection I promised the so-called Weltschmerz Inc., let’s go with the banking example.
The trusted actor is me, using a payment instrument that allows me to initiate a transaction (can be me using my card, or me being authenticated in any of their systems).
The trusted application is the initial endpoint where I place my input describing the transaction (can be a POS, can be an e-banking application, anything).
The Mission-Critical Operation is the magic. Somehow, the application (be it POS, e-banking, whatsoever) knows how to construct such a dangerous operation.
The trick is that whoever handles the execution of this operation must do it exactly, absolutely, only once. If the trusted application has a bug/attack/misfortune and generates two consecutive identical operations, one of them will never get executed. If I make a dubious mistake and am somehow allowed to quickly press a button twice, or if the e-banking / POS undergoes an attack, the second operation will be invalid. If anyone tries to pull a replay attack, it will still not work.
How to tackle this? Well, there are a lot of solutions to this problem. Most of them gravitate around cryptography and efficient searching; here’s the approach we took back then:
Digitally signing the operation: necessary in order to obtain a trusted fingerprint of the operation, the perfect unique identifier of the operation.
I understand, it is not easy to accommodate a digital signature ecosystem inside your infrastructure; there’s a lot of trust, PKI + certificates, guns, doors, locks, bureaucracy and shit to handle. It is expensive, but that’s life, no other way around it, unfortunately.
Storing and partitioning: this signed version is stored wherever. However, its signed hash must be partitioned based on variables that derive from the business itself. If we consider banking, and if we speculate, we could come up with: time of the operation, identified recipient, initiator, requested value, actual value, so many more possibilities… This partitioning is needed because, well, theory and practice tell us that “unicity has no value unless confined”. If you are a very young developer, keep that in mind; it will cut you some slack later in life.
Storing this hash uniquely inside a partition is easy now; it is ultimately just a careful comparison of the hashes inside a partition against the new operation which is a candidate for execution.
Hint: be careful when including time in your partition. Time should not only be a part of the signed operation, but also a separate, synchronized, independent clock. I’m sure you already know this.
If you do this partitioning and time handling by the book, no replay attack will ever work.
Execution: goes through all partitions that have something inside them, gets the operations, does the magic. The magic does not include deleting the operation hash from the partition afterwards. It includes some other magic maker. I chose my words carefully here :). #ACID.
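The signing, partitioning, and unique-storage steps above can be sketched roughly like this. Two loud assumptions of mine: the HMAC here stands in for a real asymmetric signature backed by a PKI, and the in-memory dict stands in for durable, transactional partition storage; all names are hypothetical:

```python
import hashlib
import hmac
import json

# Demo-only key. In the real setup the key lives in an HSM and the
# signature is asymmetric (RSA/ECDSA over a PKI), not an HMAC.
SIGNING_KEY = b"demo-only-secret"

def fingerprint(operation: dict) -> str:
    """Canonically serialize the operation and sign it; the signed hash
    is the operation's unique identifier."""
    canonical = json.dumps(operation, sort_keys=True).encode()
    return hmac.new(SIGNING_KEY, canonical, hashlib.sha256).hexdigest()

def partition_key(operation: dict) -> tuple:
    """Partition derived from business attributes: initiator, recipient,
    and the day the operation was made."""
    day = operation["timestamp"] // 86400
    return (operation["initiator"], operation["recipient"], day)

class ExactlyOnceExecutor:
    def __init__(self):
        self.partitions = {}  # partition_key -> set of already-seen fingerprints
        self.executed = []

    def submit(self, operation: dict) -> bool:
        """Execute the operation unless its fingerprint already exists in
        its partition. Returns False for duplicates (bug, double click,
        or replay), so they are never executed twice."""
        seen = self.partitions.setdefault(partition_key(operation), set())
        fp = fingerprint(operation)
        if fp in seen:
            return False
        # In a real system, recording the fingerprint and executing must be
        # a single atomic (ACID) step; here they are sequential for clarity.
        seen.add(fp)
        self.executed.append(operation)
        return True
```

Submitting the same operation a second time returns `False` and leaves `executed` untouched, which is the whole point of the exercise.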
There’s a lot more to it:
signed hashes should be considered highly sensitive secrets, thus an encryption mechanism must be employed. Key management in this case is an issue. That’s why you will probably need an HSM or some similar vault for the keys and key derivatives
choose your algorithms carefully. If you have no real expertise in cryptography, please call someone who does. Never assume anything here unless you really know how to validate your assumptions
maintaining such an infrastructure comes with a cost. It’s not a deal breaker, but it is to be considered.
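The earlier hint about keeping an independent clock deserves a concrete shape. A minimal sketch (window size and names are my own assumptions): a replayed operation carries its original, stale timestamp, so checking it against a separate synchronized clock rejects it before the hash comparison even runs.

```python
# Assumed skew tolerance; tune it to the quality of your clock synchronization.
FRESHNESS_WINDOW_SECONDS = 300

def is_fresh(op_timestamp: float, independent_clock_now: float) -> bool:
    """Compare the timestamp embedded inside the signed operation against
    a separate, synchronized, independent clock. Operations outside the
    window are rejected outright, which defeats replay of old captures."""
    return abs(independent_clock_now - op_timestamp) <= FRESHNESS_WINDOW_SECONDS
```

Note that this complements, rather than replaces, the per-partition uniqueness check: the window catches old replays cheaply, and the fingerprint comparison catches duplicates inside the window.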
Again, I am not claiming that ING Romania did anything less than their best to ensure singular execution; this article is not directly related to them. It is just a kind reminder that it is possible to design such a mission-critical environment for the singular execution of certain operations.
As for my experience, it was not in banking, but rather a more open environment. #Marine, #Navigation.