Several times a year, I accept requests to lead cybersecurity workshops for various clients. Usually they fall into the category of web application security or secure software development. No more than a handful (five at most) a year, because more would greatly impact my performance in other areas.
Recently, one client requested a web application security workshop focused on OWASP guidelines. Awesome for me: I have always liked OWASP's content, and I sometimes even have the privilege of contributing to it. When I provide this service, I never prepare exhaustive slides for presenting already well-established material such as OWASP's. I just go to the website and work with it live, as a starting point for my deeper examples.
So what happened? Mid-workshop, the OWASP Top 10 W.A.S.R. changed. Bam! “Surprise M**********R!” Deal with that!
Now, during these events, I usually bring a lot of my experience in addition to whatever support material we are using. Actually, this is why someone would require guidance in going through a well-established and very well built security material, such as the one from OWASP. When I talk and debate, and learn together with an audience about cybersecurity topics, I always emphasize things that I consider to be insufficiently emphasized by the supporting material. I say emphasized and not detailed, and please be careful to consider this difference.
Insufficiently emphasized topics
Traditionally, OWASP’s guidelines and materials did not, in my humble opinion, emphasize enough:
The importance of using correct cryptographic controls in the areas of: authentication and session management, sensitive data exposure, insufficient authorization
Insecure design in the areas of: bad security configuration, injection problems, insecure deserialization
Data integrity problems. Loop back to #1.
I usually spend around 10–11 hours of a 16-hour (or longer) workshop on the three topics above. Very important stuff, and traditionally overlooked in most teams I interact with.
It was a nice surprise to see that, in the new Top 10 W.A.S.R., OWASP included my three pillars and emphasized concepts the same way I like to do it. They even renamed sections according to my preference: the second position (A02) is now called Cryptographic Failures. AWESOME! They explain things in a more holistic manner, as opposed to just enumerating isolated vulnerabilities. AWESOME! Finally. It was also an extremely good argument, for the team I was leading, for the way I had spent my time on the three topics. I felt good about them 🙂 Alraaaaight, I felt good about myself too!
Oh, and P.S.: for the first time in… what now, more than a decade(?!), OWASP’s Top 10 W.A.S.R. does not have the top position occupied by injection problems. Either the web has grown exponentially again, or we have escaped a boundary. The boundary of absolute stupidity 🙂
A lot has been going on lately. So much so that I do not even know how to start reviewing it.
I’ll just go ahead and speak about some technical projects and topics that I’ve been briefly involved in and that are giving me a fair amount of concern.
Issue number x: Citizen-facing services of nation states
A while back, I made a “prediction”: the digitalization of citizen-facing services would become more prevalent, especially as the pandemic panned out (here and here). I was right. Well, to be completely honest, it was not really a prediction, as I had two side projects (as a freelancer) that involved exactly this. So I had a small, limited view from the inside.
Those projects ended, successfully delivered, and then came the opportunity for more. I kindly declined. Partly because I’m trying to raise a child with my wife, and there’s only so much time in the universe, and partly because I have deep ethical issues with what is happening.
I am not allowed to even mention anything remotely linked with the projects I’ve been involved in, but I will give you a parallel and thus unrelated example, hoping you connect the dots. Unrelated in terms of: I was not even remotely involved in the implementation of the example I’m bringing forward.
The example is: the Romanian STS (Special Telecommunications Service) introduced blockchain technology into the process of centralizing and counting citizen votes in Romania's national and regional elections. You can read more about it here, and connect the dots for yourselves. You’ll also need to know a fair amount about Romanian election law, but you’re smart people.
Flinging the blockchain concept at the public in a way that guarantees the public misunderstands it. Creating a more secure image than is warranted. Creating a security illusion. Creating the illusion of decentralized control while implementing the EXACT opposite. I’m not saying this is intentional, oh no, it is just opportunistic: it happened because of the fast adoption. Why does it matter? Blockchain is supposed to bring decentralization, and what the STS implementation actually does is the EXACT opposite: consolidate centralization.
While I have no link with what happened in Romania, I know for a fact that similar things have happened elsewhere. This is bad.
I do not think that this is happening with any intention. I simply think there is A HUGE AMOUNT of opportunistic implementations going on SIMPLY because of the political pressure to satisfy the PR needs, and maybe, just maybe, give people the opportunity to simplify their lives. But the implementations are opportunistic, and from a security perspective, this is unacceptable!
I think that while we, as a society, tend to focus on the ethics of using AI and whatnot, we are completely forgetting about ethics in terms of our increased dependency on IT&C in general. I strongly believe we have missed a link here. In the security landscape, this is going to cost us. Big time.
Several weeks ago, I finished migrating one of my past projects to blockchain technology. Back when I was leading that project, blockchain was not the default choice. So “the guys” called and wanted me to come back and offer some insights while migrating everything to blockchain. Piece of cake. Plus, blockchain was a perfect fit for the job. A real pleasure, and not too much work to do. Easy money, as they call it.
However, this got me thinking. Lately I have a weird passion for decentralized energy grids, smart energy grids and grids in general. It was sparked by one of my friends. I haven’t had the chance to work on a serious energy project yet, but that didn’t stop me from fantasizing.
What do I fantasize about?
The potential use cases of blockchain within the energy sector. So, here they are:
Auditing and regulatory needs in terms of transparency. Obviously, the native immutable records of a DL (distributed ledger), with proper consensus in the network, are the key here.
Data transfer problems within a smart grid. A smart grid is a very big deal: sensors, metering equipment, EMSs, building monitoring, etc. There’s a lot of storage (DLs) and transfer in this environment, and it can all benefit from decentralized integrity. Let’s not even talk about introducing a new energy source into a smart grid, or extending a microgrid by commercial means.
Commercial aspects of localized P2P energy trading. Local energy marketplaces, I think, is the official buzzword. This one is too obvious.
Billing. Well, I don’t need to explain this one, do I?
More dynamic markets in general (not just the P2P ones). A smart contract can help in switching providers with the speed of light. Now this can be good news even for centralized grids such as ours.
Resource sharing in residential areas. With or without a microgrid infrastructure, residential resources (alternative energy-producing infrastructure such as solar panels, EV charging stations, etc.) can be shared in a more trustworthy environment if the equipment can make use of proper DLs.
Now, in terms of the energy market & trading, OMG. I don’t have enough knowledge to even start to scratch that subject, but hey, that’s just another probable area for blockchain in energy.
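Since I keep invoking “decentralized integrity” above, here is a minimal, purely illustrative sketch (plain Python, all names invented for the example, nothing production-grade) of the core idea: metering records chained by hash, so that tampering with any stored reading becomes detectable. A real distributed ledger would replicate such a chain across nodes and add a consensus protocol on top.

```python
import hashlib
import json

def make_block(prev_hash, reading):
    # a block binds a meter reading to the hash of the previous block
    payload = json.dumps({"prev": prev_hash, "reading": reading}, sort_keys=True)
    return {"prev": prev_hash, "reading": reading,
            "hash": hashlib.sha256(payload.encode()).hexdigest()}

def verify_chain(chain):
    # recompute every hash and check every back-link
    for i, b in enumerate(chain):
        payload = json.dumps({"prev": b["prev"], "reading": b["reading"]},
                             sort_keys=True)
        if hashlib.sha256(payload.encode()).hexdigest() != b["hash"]:
            return False
        if i > 0 and b["prev"] != chain[i - 1]["hash"]:
            return False
    return True

chain = [make_block("0" * 64, {"meter": "m1", "kwh": 3.2})]
chain.append(make_block(chain[-1]["hash"], {"meter": "m1", "kwh": 2.9}))
assert verify_chain(chain)

chain[0]["reading"]["kwh"] = 99.0   # tamper with a stored reading
assert not verify_chain(chain)      # the tampering is detected
```

The interesting property is that integrity checking needs no trusted party: anyone holding a copy of the chain can run `verify_chain`.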
P.S. The featured image is a representation of the “methyl (1R,2R,3S,5S)-3-(benzoyloxy)-8-methyl-8-azabicyclo[3.2.1]octane-2-carboxylate” a.k.a. “cocaine” molecule. It’s supposed to give you stimuli that you interpret as energy… or so I’ve heard.
Last time, I talked about some of the factors that influenced the evolution of privacy-preserving technologies. I wanted to touch base with some of the technologies emerging from the impact of these factors and talk about some of the challenges that they come with.

After a discussion about e-differential privacy, I promised you a little discussion about homomorphic encryption. There is a small detour that I find myself obligated to take, due to the latest circumstances of the SARS-CoV-2 outbreak: I want to split this discussion in two parts, and start with a little discussion about homomorphic secret sharing before I go into sharing my experience with adopting homomorphic encryption.
In the last article, I argued that one of the drivers for adopting new privacy mechanisms is “the digitalization of the citizen-facing services of nation-states (stuff like e-voting, which I really advocate against)”.

Well, sometime after SARS-CoV-2 is gone (a long time from today), I foresee a future where this kind of service will be more and more widely adopted. One of the areas where the citizen-facing services of nation-states will be digitalized is e-voting: e-voting within parliament, for democratic elections, etc. I briefly mentioned last time that I am really against this, at least for now, given the status quo of the research in this area.
Let me explain the trouble a little bit, starting with a question: why do you trust the people counting your vote?
Some good answers here could be:
Because all the parties having a stake in the election have people counting. The counting is not done by a single ‘neutral’ authority.
Because, given the above statement, I can see my vote from the moment I printed it to the moment I cast it.
Because your vote must be a secret, so that you cannot be blackmailed or paid to vote in a certain way – and there are mechanisms for that.
You can see that in an electronic environment, this is hardly the case. Here, in an electronic environment, if you have a Dragnea, you are shot and buried. Here, in an electronic environment, you:
Cannot see your vote from the moment you printed it (or pushed the button) to the moment of casting – anyone could see it.
Cannot easily make sure that your vote is a secret. Once you act upon your vote and it is encrypted somehow, you have no way of knowing what you voted – it became a secret. So there is the trouble with that. Furthermore, assuming conventional encryption, there are master keys that can be easily compromised by an evil Dragnea.
Auditing such a system involves an extremely high and particular level of expertise, and any of the parties having a stake in the election would really have trouble finding people willing to take the risk of doing that for them. This is an extremely sensitive matter.
There is a research area concerned with tackling these issues. It is called “End-to-End Verifiable Voting Systems”.
End-to-End Verifiable Voting Systems
Tackling these problems for e-voting systems means transforming an electronic voting environment in such a manner that it can at least match the standards of non-e-voting systems, then adding some specific electronic mumbo-jumbo to it, and making it available in a ‘pandemic environment’. [Oh my God, I’ve just said that: pandemic environment…]
The main transformation is: I, as a voter, must be able to act on a secret vote up to casting it, and make sure my vote is accounted for properly.
It would be wonderful if, while addressing the trust in the counting of the votes, we had a way of casting an encrypted vote but were still able to count it even though it is encrypted. Well, this can be done.
To my knowledge, today the most effective and advanced technology that can be used here is homomorphic encryption and, more precisely, a small subset of HE called homomorphic secret sharing.
Homomorphic secret sharing is a secret sharing scheme in which the secret is encrypted using homomorphic encryption. In a nutshell, homomorphic encryption is a type of encryption where you can do computations on the ciphertext – that is, compute stuff directly on encrypted data, with no prior decryption. For example: in some HE schemes, an encryption of a 5 plus an encryption of a 2 is an encryption of a 7. Hooray.
Bear in mind, the mathematics behind all this is pretty complex. I would not call it scary, but close enough. However, there are smart people working on, and providing, out-of-the-box libraries that software developers can use to embed HE in their products. I would like to mention just two here: Microsoft SEAL and PALISADE (backed by DARPA). Don’t get me wrong: today, you still have to know some mathematical tricks if you want to embed HE in your software, but the really heavy part is done by these heroes providing the libraries.
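To make the “encryption of a 5 plus an encryption of a 2 is an encryption of a 7” idea concrete, here is a toy additively homomorphic scheme (textbook Paillier) in a few lines of Python. The parameters are tiny and utterly insecure, and this is not how SEAL or PALISADE work internally (those implement different, lattice-based schemes); it only demonstrates the homomorphic property itself.

```python
import random
from math import gcd

# textbook Paillier with tiny, INSECURE parameters, for illustration only
p, q = 293, 433
n = p * q
n2 = n * n
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)
g = n + 1

def L(u):
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)            # decryption constant

def enc(m):
    # randomized encryption: c = g^m * r^n mod n^2
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    return (L(pow(c, lam, n2)) * mu) % n

# multiplying ciphertexts adds the plaintexts: E(5) * E(2) decrypts to 7
c = (enc(5) * enc(2)) % n2
assert dec(c) == 7
```

Note that the ciphertext *product* decrypts to the plaintext *sum*; that is the whole trick behind counting encrypted votes without decrypting them.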
Decentralized voting protocols using homomorphic secret sharing
In the next article I will talk about the challenges you will face if you try to embed HE in your product, but until then, if you want a glimpse of the complexity, I will just go ahead and detail a decentralized voting protocol that uses homomorphic secret sharing.
Assume you have a simple vote (yes/no) – no overkill for now.
Assume you have some authorities that will ‘count’ the votes – the number of authorities is noted A.
Assume you have N voters.
Each authority generates a public key: a number, Xa.
Each voter encodes his vote in a polynomial Pn of degree A-1 (number of authorities minus 1), with the constant term being an encoding of the vote (in this case, +1 for yes and -1 for no); all the other coefficients are random.
Each voter computes the value of his polynomial (Pn) – and thus of his vote – at each authority’s public key: Pn(Xa).
A points are produced; they are the pieces of the vote.
Only if you know all the points can you figure out Pn, and thus the vote. This is the decentralization part.
Each voter sends each authority only the value computed using that authority’s key.
Thus, each authority finds it impossible to figure out how any voter voted, as it does not have enough computed values – it only has one.
After all votes have been cast, each authority computes and publishes the sum (Sa) of the values it received.
Thus, a new polynomial is determined by the points (Xa, Sa), and its constant term is the sum of all the votes. If it is positive, the result is yes, and vice versa.
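The steps above can be sketched in a few lines of Python. This is a toy version over exact rationals (a real implementation would work in a finite field and add commitments and zero-knowledge proofs on top), but it shows the share-and-sum mechanics end to end.

```python
import random
from fractions import Fraction

def make_shares(vote, xs):
    # polynomial of degree len(xs)-1 with the vote (+1 / -1) as constant term,
    # all other coefficients random; one evaluation (share) per authority key
    coeffs = [vote] + [random.randint(-10**6, 10**6) for _ in range(len(xs) - 1)]
    return [sum(c * x**i for i, c in enumerate(coeffs)) for x in xs]

def tally(xs, sums):
    # Lagrange interpolation at x = 0 recovers the constant term of the
    # summed polynomial, i.e. the total of all the votes
    total = Fraction(0)
    for j, xj in enumerate(xs):
        lj = Fraction(1)
        for m, xm in enumerate(xs):
            if m != j:
                lj *= Fraction(-xm, xj - xm)
        total += Fraction(sums[j]) * lj
    return total

xs = [1, 2, 3]                     # public points, one per authority (A = 3)
votes = [+1, +1, -1, +1, -1]       # N = 5 voters
shares = [make_shares(v, xs) for v in votes]
# authority a only ever sees shares[n][a]; it publishes their sum Sa
sums = [sum(s[a] for s in shares) for a in range(len(xs))]
assert tally(xs, sums) == sum(votes)   # the total is recovered: "yes" wins
```

No single authority can recover any individual vote from its one share, yet all authorities together can tally the election from their published sums alone.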
If you had trouble following the secret sharing algorithm, don’t worry, you’re not alone. Here’s a helper illustration:
However, there are still problems:
Still, the voter cannot be sure that his/her vote is properly cast.
The authorities cannot be sure that a malicious voter did not compute his polynomial with a -100 constant term, such that a single cast would count as 100 negative votes.
Homomorphic secret sharing does not even touch the other problems of voting systems; only secrecy and trust are tackled.
See, you still have to know a little bit about polynomials and interpolation to be able to use this in practice. The crazy part is that, in homomorphic encryption terms, homomorphic secret sharing is one of the simplest challenges.
Don’t worry though: in my next article I will show you a neat library (Microsoft SEAL), share my experience with you, and give you some tips and tricks for the moment when you try to adopt this.

Until next time, remember: don’t take anything for granted.
Lately, I’ve been doing some work in the area of cryptography and enterprise-scale data protection and privacy. And so it hit me: things are a lot different than they used to be, and they are changing fast. It seems that things are changing towards a more secure environment, with stronger DP and privacy requirements, and it also seems that these changes are being widely adopted. Somehow, I am happy about it. Somehow, I am worried.

Before I go a little deeper into the topic of how to design for critical privacy and DP systems, let me just enumerate three of the factors responsible for generating the changes that we are witnessing:
The evolving worldwide regulation and technology adoption started by the EU 2016/679 regulation (a.k.a. GDPR)
The progress we are making in terms of big data analysis and ML
The digitalization of the citizen-facing services of nation-states (stuff like e-voting, which I really advocate against)
I don’t want to cover in-depth the way I see each factor influencing the privacy and DP landscape, but, as we go on, I just want you to have these three factors in mind. Mind the factors.
Talking about each concept and technology that is gaining momentum in this context is absolutely impossible. So I choose to talk about two of the most challenging ones – or at least the ones that I perceive as the most challenging. This is going to be a two-episode series about Differential Privacy and Homomorphic Encryption.
Differential privacy. e-Differential Privacy.
Differential Privacy, in a nutshell, from a space-station view, is a mathematical way of ensuring that reconstruction attacks are not possible, now or in the future.

Mathematical what? Reconstruct what? Time what? Let me give you an example. Assume we know the following about a group of people:
There are 7 people with the median age of 30 and the mean of 38.
4 are females with the median of 30 and the mean of 33.5
4 love sugar with the median of 51 and a mean of 48.5
3 sugar lovers are females with the median of 36 and the mean of ~36.67
Challenge: give me the age, sex, sugar preference and marital status of each individual in the group. Here is the solution:
1. 8, female, sugar, not married
2. 18, male, no sugar, not married
3. 24, female, no sugar, not married
4. 30, male, no sugar, married
5. 36, female, sugar, married
6. 66, female, sugar, married
7. 84, male, sugar, married
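If you want to check that this solution really is consistent with the published aggregates, a few lines of Python will do it:

```python
from statistics import mean, median

people = [  # (age, sex, sugar_lover, married)
    (8,  "F", True,  False),
    (18, "M", False, False),
    (24, "F", False, False),
    (30, "M", False, True),
    (36, "F", True,  True),
    (66, "F", True,  True),
    (84, "M", True,  True),
]

ages = [p[0] for p in people]
assert (len(people), median(ages), mean(ages)) == (7, 30, 38)

females = [p[0] for p in people if p[1] == "F"]
assert (len(females), median(females), mean(females)) == (4, 30, 33.5)

sugar = [p[0] for p in people if p[2]]
assert (len(sugar), median(sugar), mean(sugar)) == (4, 51, 48.5)

female_sugar = [p[0] for p in people if p[1] == "F" and p[2]]
assert (len(female_sugar), median(female_sugar)) == (3, 36)
assert round(mean(female_sugar), 2) == 36.67
```

Every published statistic matches, which is exactly why releasing this many exact aggregates over a tiny group is so dangerous.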
Basically, a reconstruction attack for such a scenario involves finding peaks of plausibility in a plausibility versus possibility plot. It goes something like this: you can start by brute-forcing all the combinations for the seven participants. Considering all the features except age (so gender, sugar preference and marital status: 2^3 = 8 combinations per person), you have 8^7 = 2,097,152 possibilities, but all have roughly the same plausibility. So a possibility/plausibility plot looks something like this:
See, there do not seem to be any peaks in plausibility. But once we factor in the age, well, things change. For example, although it is possible to have a 150-year-old person, it is very implausible. Furthermore, it is more plausible for an older individual to be married than a younger one, and so on. So, if we factor in age plausibility, the graph looks more like this:
See, there’s a peak of plausibility. That is most likely our solution. Now, if our published statistics are a little skewed – say, we introduce just enough noise into them that the impact on science is minimal, and we eliminate the unnecessary ones (if this can be done) – then a reconstruction attack is almost impossible. The purpose is to flatten, as much as possible, the graph above.
Now, to be fair, in our stretched-out textbook example, there’s no need to do the brute-force-assumption plausibility plot. Because the mean and median are published for each subset of results, you can simply write a deterministic equation system and solve for the actual solution.
Imagine that you, as an attacker, possess some knowledge about your target from an external source. This external source may be a historical publication over the same set of data, or a different data source altogether. This makes your reconstruction attack much easier.
e-Differential Privacy systems have a way of defining a privacy loss (i.e. a quantitative measure of the increase in the plausibility plot). These systems also define a privacy budget, and this is one of the real treasures of this math: you can make sure that, over time, you are not making the reconstruction attack any more plausible.
This stuff gained momentum as the US Census Bureau got the word out that they are using it, and also encouraged people to ask the enterprises that own their data to use it.
So, as a software architect, how do I get ready for this?
First, at the moment, there are no out-of-the-box solutions that can give you e-Differential Privacy for your data. If this is a requirement for you, you are most probably going to work with some data scientists / math people who will tell you exactly what the measure of privacy loss will be for the features in your data. At least, that is what I did 😊 Once those are defined, you have to be ready to implement them.
There is a common pattern you can adopt. A proxy, a privacy guard:
You are smart enough to realize that CLEAN data means that some acceptable noise is introduced, such that the privacy budget is not greatly impacted, if at all.
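To make the privacy guard idea a bit more tangible, here is a minimal sketch of the classic Laplace mechanism for a counting query. All the names here are mine, invented for the example; in a real system the noise calibration (epsilon per query, overall budget) comes from those math people, not from the architect.

```python
import math
import random

def laplace(scale):
    # inverse-CDF sampling of a Laplace(0, scale) random variable
    u = random.random()
    while u == 0.0:              # avoid log(0) at the boundary
        u = random.random()
    u -= 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

class PrivacyGuard:
    # hypothetical guard sitting between researchers and the raw data
    def __init__(self, budget):
        self.budget = budget     # total privacy budget for this consumer

    def count(self, rows, predicate, epsilon):
        if epsilon > self.budget:
            raise RuntimeError("privacy budget exhausted")
        self.budget -= epsilon   # account for the privacy loss of this query
        true_count = sum(1 for r in rows if predicate(r))
        # a counting query has sensitivity 1, so the noise scale is 1/epsilon
        return true_count + laplace(1.0 / epsilon)

ages = [8, 18, 24, 30, 36, 66, 84]
guard = PrivacyGuard(budget=1.0)
noisy = guard.count(ages, lambda a: a >= 30, epsilon=0.5)  # noisy answer
```

Each answered query spends part of the budget; once it is gone, the guard refuses to answer, which is exactly the “not making reconstruction more plausible over time” guarantee.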
If it was easy, everyone would do it, but it’s not, so suck it.
First, you and your team must be ready to understand what a highly trained math scientist is talking about. Get resources for that.
Second, as an architect, you have to be careful to have formal definitions throughout your applications for the two concepts enumerated above: privacy budget and privacy loss.
Third, in both my experience and in the ongoing textbook research, the database must contain the absolute raw data, including historic data if needed. This poses another security challenge: you don’t want to be fancy about using complicated math to protect your data while being vulnerable to a direct database attack. Something stupid like an injection attack has no place here. You can see now that the diagram above is oversimplified: it lacks a ton of proxies, security controls, DMZs and whatnot. Don’t make the same mistake I did and try to hide some data from the privacy guard – your life will be a misery.
Fourth, be extremely careful about documenting this. It is not rare for software ecosystems to change purpose, and they tend to be used where they are not supposed to be. It may happen that such an ecosystem, in time, gets to be used directly for scientific research from behind the privacy guard. That might not be acceptable. You know, scientists don’t like noisy data. Or so I’ve heard – I’m not a scientist.
That’s all for now.
In the second part, we’re going to talk a little bit about the time I used Homomorphic Encryption. A mother****ing monster for me.
Since you’re here, I believe that you have a general idea of what homomorphic encryption is. If, however, you are a little confused, here it is in a nutshell: you can do data processing directly on encrypted data. E.g. an encryption of a 5 multiplied by an encryption of a 2 is an encryption of a 10. Tadaaa!
This is pure magic for privacy. Especially with the hype that is happening now, with all the data leaks, and new privacy regulation, and old privacy regulation, and s**t. Essentially, what you can do with this is very close to the holy grail of privacy: complete confidential computing – processing data that is already encrypted, without the decryption key, assuming data protection in transit is already handled. See the picture below:
Quick note here: most of the homomorphic schemes (BFV/CKKS/blabla…) use a public/private key scheme for encrypting/decrypting data.
Now, I have been fortunate enough to work, in the past year, on a side project involving a lot of homomorphic encryption. I was using Microsoft SEAL and it was great. I am not going to talk about the math behind this type of encryption, not going to talk about the Microsoft SEAL library (although I consider it excellent), not going to talk about the noise-propagation problem in this kind of encryption.
I am, however, going to talk about a common pitfall that I have seen, and that is worrying. This pitfall concerns the integrity of the processing result – or, to be more precise, attacks on the integrity of the expected result of the processing.
Let me give you an example. Assume you have an IoT solution that is monitoring some oil rigs. The IoT devices encrypt the data they collect, then send it to a central service for statistical analysis. The central service does the processing and provides an API for some other client applications. (This is just an example. I am not saying I did exactly this. It would be $tupid to break an NDA and be so open about it.)
If I, as an attacker, compromise the service that is doing the statistical analysis, I cannot see the real data sent by the sensors. However, I could mess with it a little. I could, for instance, make sure that the statistical analysis returned by the API is rigged – that it shows whatever I want it to show. I am not saying that I am able to change the input data. After all, I, as an attacker, do not have the encryption key, so I am not able to inject new encrypted data into the series. I just go ahead and alter the result.
It seems obvious that you should protect such a system against impersonation/MitM/spoofing attacks. Well, apparently, it is not that obvious.
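The fix is boring, classic integrity protection layered on top of the homomorphic pipeline. As a sketch of the principle (all names here are hypothetical; a real design would use digital signatures with the signing key kept away from the analysis service, e.g. in an HSM, rather than a hard-coded shared MAC key):

```python
import hashlib
import hmac
import json

API_MAC_KEY = b"illustrative-shared-key"   # placeholder, NOT how you store keys

def publish_result(payload):
    # the trusted signer MACs the canonicalized result before it is served
    body = json.dumps(payload, sort_keys=True).encode()
    tag = hmac.new(API_MAC_KEY, body, hashlib.sha256).hexdigest()
    return {"body": payload, "mac": tag}

def verify_result(message):
    # clients recompute the MAC and reject anything that was tampered with
    body = json.dumps(message["body"], sort_keys=True).encode()
    expected = hmac.new(API_MAC_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["mac"])

msg = publish_result({"mean_pressure": 42.7})
assert verify_result(msg)

msg["body"]["mean_pressure"] = 13.0    # the attacker rigs the statistic
assert not verify_result(msg)          # and the client notices
```

The homomorphic encryption protects confidentiality of the inputs; something like this, done properly, is still needed to protect the integrity of the outputs.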
While implementing this project, I got in touch with various teams that were working with homomorphic encryption, and it seems there is a recurring issue. The problem is that the team implementing such a solution is usually made up of (at least) experienced developers who have solid knowledge of math/cryptography. But it is not their role to handle the overall security of the system.
The team that is responsible for the overall security of the system is, unfortunately, often decoupled from the details of a project under development. What do they “know” about the project? Homomorphic encryption? Well, that is cool, data integrity is handled by encryption, so why put any extra effort into that?
Please, please, do not overlook basic security just because some pretty neat researchers made a breakthrough regarding the efficiency of implementing a revolutionary encryption scheme. Revolutionary does not mean lazy. And FYI, a fully homomorphic encryption scheme has been theorized since 1978.
To be fair, I want to mention another library that is good at homomorphic encryption: PALISADE. I only have production experience with Microsoft SEAL, and thus I prefer it 😊
I think it was 2008 when I was studying the second edition of “Writing Secure Code” by David LeBlanc and Michael Howard. It was there that I ran into the term “encraption” for the first time. Encraption refers, generally, to the bad practice of using poor encryption: old and deprecated algorithms, poor key complexity, storing keys in plain sight, and other crap like that. Some of these bad practices can be fixed within a software development team by educating the members to ask for expertise in cryptography instead of just making assumptions. I have talked about this before, and yes, this aspect is critical.
Here, I want to talk about a specific aspect of encraption: storing keys in plain sight. I still encounter this situation so often that I get physically ill when I see it. If you claim to have never done it – stored keys or sensitive information in plain sight – please think again, this time carefully. In fact, there is a secure coding principle stating that the source code of your application, in any configuration, should not be considered sensitive information. This, of course, has some very limited exceptions.
Disclaimer! This article is just a summary of the reasons why you should consider using Azure Managed Service Identities. It does not describe the inner plumbing of Managed Service Identities in Azure, nor does it provide examples. It would be stupid of me to claim that I can give more appropriate documentation than Microsoft already does on their platform. At the end of the article you will find links pointing you to those resources.
Let us imagine a classic situation here: for example, a shared service that is required to store sensitive information generated by each of its users, and later make that data available ONLY to the user that generated it. Something like this:
Now, the trick is that any bug/quirk that may appear inside the app must not allow a user to obtain any key belonging to any other user. The cryptography behind this is not in the scope of this article; neither is handling sensitive data while it is in memory. So, in the end, how would you tackle this? Many of you would say right away that key storage should be delegated to a dedicated service – let’s call it an HSM – like so:
Now, the problem is that you need an authenticated and secure communication channel with the HSM. If you are trying to accomplish this in your own infrastructure, then good luck. While it is possible, it is most certainly expensive. If you are going to buy the HSM service from an external provider, then you will most certainly have to comply with a lot of security standards before obtaining it. Most of these so-called key vault services allow access based on secret-based authentication (app id and secret) that your application, in turn, will need to securely store. I hope you have spotted the loophole. The weakest link here is that your application still has to manage a secret – a set of characters that is very, very sensitive information:
Of course, most teams will have a “rock-solid-secure-rocketscience-devops” process that will protect this red key in the configuration management of the production environment. Someone will follow a very state-of-the-art procedure to obtain and store the red key only in the production environment, and no one will ever get it. This is sarcasm, in case you missed it. But why is it sarcasm, you may ask? Here are my reasons.
In 15 years and probably hundreds of projects with a lot of people involved, I have seen maybe 3 – three – projects that have done this properly. Typical mistakes include:
the red key changed, no one knew that it could change, production halted, and a SWAT team rushed to save the day, using their own machines to change the key
development team has access to production
the “rock-solid-secure-rocketscience-devops” process is ignored by the “experienced” developer who hard-codes the key
no one really cares about that key. It is just “infrastructure”
An attack can still target this red key. It is, in fact, just another attack surface. Even assuming, absurdly, that all other parts of the system are secure, a system holding such a key still has at least two points of attack:
The identity that has access to the production environment (this is very hard to eliminate)
The red key itself
Why not eliminate this red key altogether? Why not go to a cloud provider where you can create an application (or any resource) and a key vault, and then create a ‘trust relationship’ between the two that needs no other secrets in order to function? This way, you will not have to manage yet another secret.
Please bear in mind that, when using an HSM or key vault, you can isolate all your sensitive information and keys at that level, so that you will never have to bother with the secure storage of those items ever again. No more hard-coding, no more cumbersome dev-ops processes!
Managed Service Identities
This is possible in Azure, using Managed Service Identities. As I have said before, the purpose of this article is not to replace the awesome documentation that Micro$oft has in place for MSI, but to encourage you to read it and use this magic feature in your applications, current or future.
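To give you a taste of how little code the pattern needs, here is a hedged sketch using the Azure SDK for Python: with a managed identity assigned to your app, `DefaultAzureCredential` obtains tokens from the platform itself, so there is no app id/secret (no red key) stored anywhere. The vault URL and secret name below are placeholders, not real resources.

```python
# a sketch of the MSI pattern, not a substitute for the official docs
def fetch_secret(vault_url, secret_name):
    # imports are local so the sketch reads as one self-contained unit;
    # requires the azure-identity and azure-keyvault-secrets packages
    from azure.identity import DefaultAzureCredential
    from azure.keyvault.secrets import SecretClient

    # with a managed identity assigned to the app, the credential gets its
    # token from the Azure platform itself: no stored app id, no stored secret
    credential = DefaultAzureCredential()
    client = SecretClient(vault_url=vault_url, credential=credential)
    return client.get_secret(secret_name).value

# usage, from inside an Azure-hosted app that has a managed identity:
# fetch_secret("https://my-vault.vault.azure.net", "db-connection-string")
```

The ‘trust relationship’ lives entirely in Azure’s identity plane (the identity assignment plus the vault’s access policy/RBAC), which is exactly what eliminates the red key from your configuration.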
It’s not my defining characteristic to write boilerplate articles about obvious challenges, but I had a fairly recent experience (December 2018): I was doing some security work for an old client of mine and found that they were facing the exact same basic problems that I had tackled many times before. So I remembered that, more than a year and a half ago, I summed those problems up in the following material:
Having a job that requires deep technical involvement in a prolific forest of software projects certainly has its challenges. I don’t really want to emphasize the challenges, as I want to talk about one of its advantages: being exposed to issues regarding secure software development in our current era.
Understanding these four basic dimensions of developing secure software is key to starting to build security into the software development lifecycle.
Dimension Zero: Speaking the same language
The most frequently recurring problem I have found in my experience, regardless of the maturity of the software development team, is a heterogeneous understanding of security. This happens at all levels of a software development team: from stakeholders and project managers to developers, testers and, ultimately, users.
It’s not that there is a different understanding of security between those groups. That would be easy to fix. It’s that inside each group there are different understandings of the same key concepts about security.
As you can expect, this cannot be good. You cannot even start talking about a secure product if everybody has a different idea of what that means.
So how can a team move within this uncertain Dimension Zero? As complicated as this might seem, the solution is straightforward: build expertise inside the team and train the team in security.
What should a final resolution look like at the end of this dimension? You should have put in place a security framework that lives alongside your development lifecycle, such as Microsoft’s Security Development Lifecycle (SDL). Microsoft SDL is a pretty good resource to start with while keeping the learning loop active during the development process.
Dimension One: Keeping everybody involved.
Let’s assume that a minor security issue appears during the implementation of some feature. One of the developers finds a possible flaw. She may go ahead and resolve it, considering it part of her job, and never tell anyone about it. After all, she has already been trained to do it.
Why would that be a problem, you ask? It does look counterintuitive, especially since “build expertise inside the team and train the team in security” was one of Dimension Zero’s pieces of advice.
Primarily because that is how you start losing the homogeneity you gained when tackling Dimension Zero. Furthermore, there will always be poles of security expertise, especially in large teams, and you want the best expertise applied when solving a security issue.
Dimension Two: Technical
Here’s a funny fact: we can’t take the developers out of the equation, no matter how hard we try. Security training for developers must include a lot of technical detail, and you must never forget about:
Basics of secure coding. (E.g. never allow stack/buffer overflows; understand privilege separation, sandboxing, cryptography and, unfortunately, many more topics)
Know your platform. Always stay connected with the security aspects of the platform you are developing on and for. (E.g. if you are a .NET developer, always know its vulnerabilities)
Know the security aspects of your environment. (E.g. if you develop a web application, you should be no stranger to XSRF)
This list could go on forever, but the important thing is never to forget the technical knowledge that developers need to be exposed to.
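To make the XSRF point concrete, here is a minimal sketch of the synchronizer-token defence. Real frameworks (Django, ASP.NET, Flask-WTF and friends) ship this built in; the function names and the plain-dict session here are purely illustrative.

```python
import hmac
import secrets

def issue_csrf_token(session: dict) -> str:
    """Generate a random token and remember it in the server-side session."""
    token = secrets.token_hex(32)
    session["csrf_token"] = token
    return token  # embed this in the HTML form as a hidden field

def verify_csrf_token(session: dict, submitted: str) -> bool:
    """Constant-time comparison of the submitted token against the stored one."""
    expected = session.get("csrf_token")
    return expected is not None and hmac.compare_digest(expected, submitted)

session = {}
token = issue_csrf_token(session)
assert verify_csrf_token(session, token)          # legitimate form post
assert not verify_csrf_token(session, "forged")   # cross-site forgery attempt
```

A forged cross-site request cannot read the victim’s page, so it cannot know the token, and the constant-time comparison avoids leaking it through timing.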
Dimension Three: Don’t freak out.
You will conclude that you cannot have a secure solution within the budget you have. This can happen multiple times during a project’s development. That is usually a sign that you got the threat model wrong. You probably assumed an omnipresent and omnipotent attacker. [We all know you can’t protect against the “Chupacabra”, so don’t expect it to pay a home visit.]
This kind of attacker doesn’t exist… yet. So don’t worry too much about it; focus on the critical aspects that need to be secured, and you’ll restore the balance with the budget in no time.
Instead of a summary of the four security dimensions of software development, I wish you happy secure coding and leave you with a short but important reading list:
At the end of last year, I had some time to review and get up to date with some of the most important security incidents of 2018. Some of these incidents are common knowledge; some of them are particular to the work that I do. While doing this, I figured that I could draw some pragmatic conclusions about basic protection against “a generic 2018 cybersecurity threat”. I have great friends and colleagues, and so one thing led to another and we got to publish a small eBook on this topic.
This small eBook is designed for decision makers to gain a high-level overview of topics, as well as for IT professionals responsible for security steps to be implemented.
All things considered, we hope that everyone who reads the eBook and applies some of its recommendations to their current strategy / development / infrastructure / design / testing practices will improve the overall security of their products or services.
You can download it here. Of course, this is free. If you want to get it directly from me, drop me an e-mail please, I’ll make sure to reply with the proper attachment :).
I am the author, and my colleagues contributed:
Tudor Damian – Technical curator
Diana Tataran – General curator
Noemi Bokor – Visual Identity
Avaelgo – Sponsored some time to make this possible
Something happened this month with Romania’s ING Bank. I’m sure you’re probably aware of it. They managed to execute several (well, maybe more than just several) transactions more than once. Well, shit happens, I guess. They eventually fixed it. At least they say so; I choose to believe them.
This unfortunate happening triggered a memory of my first time working in a mission-critical environment where certain operations were supposed to be executed exactly, absolutely, only once. It was for a German company, back in 2013. I am not allowed to mention or make any reference to them or the project, so let’s anonymously call them Weltschmerz Inc. It went something like this (oversimplified diagram):
I don’t claim that ING’s systems can be oversimplified to this level, but for the sake of the argument, and for the anonymity I promised the so-called Weltschmerz Inc., let’s go with the banking example.
The trusted actor is me, using a payment instrument that allows me to initiate a transaction (it can be me using my card, or me being authenticated in any of their systems).
The trusted application is the initial endpoint where I place the input describing my transaction (it can be a POS, an e-banking application, anything).
The mission-critical operation is the magic. Somehow, the application (be it POS, e-banking, whatever) knows how to construct such a dangerous operation.
The trick is that whoever handles the execution of this operation must do it exactly, absolutely, only once. If the trusted application has a bug / attack / misfortune and generates two consecutive identical operations, one of them will never get executed. If I make a dubious mistake and am somehow allowed to quickly press a button twice, or if the e-banking / POS application comes under attack, the second operation will be invalid. If anyone tries to pull off a replay attack, it will still not work.
How to tackle this? Well, there are a lot of solutions to this problem. Most of them gravitate around cryptography and efficient searching. Here’s the approach we took back then:
Digitally signing the operation: necessary in order to obtain a trusted fingerprint of the operation, the perfect unique identifier of the operation.
I understand, it is not easy to accommodate a digital signature ecosystem inside your infrastructure; there is a lot of trust, PKI + certificates, guns, doors, locks, bureaucracy and shit to handle. It is expensive, but that’s life, no way around it unfortunately.
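The fingerprinting step can be sketched as follows. Real deployments use asymmetric signatures backed by a PKI and an HSM; an HMAC stands in here only so the example stays self-contained, and all field names are illustrative.

```python
import hashlib
import hmac
import json

# In real life this key lives in an HSM / Key Vault, never in source code.
SIGNING_KEY = b"demo-key-kept-in-an-hsm-in-real-life"

def fingerprint(operation: dict) -> str:
    """Canonicalize the operation, then sign it; the tag uniquely identifies it."""
    canonical = json.dumps(operation, sort_keys=True, separators=(",", ":"))
    return hmac.new(SIGNING_KEY, canonical.encode(), hashlib.sha256).hexdigest()

op = {"initiator": "alice", "recipient": "bob", "value": 100,
      "time": "2013-06-01T10:00:00Z"}
replayed = dict(op)  # a byte-identical replay yields the same fingerprint
assert fingerprint(op) == fingerprint(replayed)
```

Canonicalizing before signing matters: two serializations of the same operation must produce the same fingerprint, otherwise duplicates slip past the comparison later.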
Storing and partitioning: the signed version can be stored anywhere, but its signed hash must be partitioned based on variables that derive from the business itself. If we consider banking, and if we speculate, we could come up with: time of the operation, identified recipient, initiator, requested value, actual value, and so many more possibilities. This partitioning is needed because, well, theory and practice tell us that “unicity has no value unless confined”. If you are a very young developer, keep that in mind; it will cut you some slack later in your life.
Storing this hash uniquely inside a partition is easy now: it is ultimately just a careful comparison between the hashes already inside a partition and the new operation that is a candidate for execution.
Hint: be careful when including time in your partition. Time should not only be a part of the signed operation, but should also come from a separate, synchronised, independent clock. I’m sure you already know this.
If you do this partitioning and time handling by the book, no replay attack will ever work.
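The partition-and-compare admission step above can be sketched like this. In-memory sets stand in for what would be durable, transactional storage, and the partition variables (initiator, day) are just one speculative choice.

```python
from collections import defaultdict

# partition key -> set of signed-operation fingerprints already admitted
partitions = defaultdict(set)

def admit(op_hash: str, initiator: str, day: str) -> bool:
    """Admit the operation for execution only if its hash is new in its partition."""
    key = (initiator, day)  # partition variables derived from the business
    if op_hash in partitions[key]:
        return False  # duplicate or replay: it will never be executed twice
    partitions[key].add(op_hash)
    return True

assert admit("a1b2", "alice", "2013-06-01") is True
assert admit("a1b2", "alice", "2013-06-01") is False  # identical resubmission rejected
assert admit("c3d4", "alice", "2013-06-02") is True   # new operation, new partition
```

Because time is inside the signed operation, a replayed message lands in the partition where its original fingerprint already lives, and confining the uniqueness check to one partition keeps the lookup cheap.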
Execution: goes through all partitions that have something inside them, gets the operations, and does the magic. The magic does not include deleting the operation hash from the partition afterwards; it includes some other magic marker. I chose my words carefully here :). #ACID.
There’s a lot more to it:
signed hashes should be considered highly sensitive secrets, so an encryption mechanism must be employed. Key management is an issue in this case; that’s why you will probably need an HSM or some similar vault for the keys and key derivatives
choose your algorithms carefully. If you have no real expertise in cryptography, please call someone that does. Never assume anything here unless you really know how to validate your assumptions
maintaining such an infrastructure comes with a cost. It’s not a deal breaker, but it should be considered.
Again, I am not claiming that ING Romania did anything less than their best to ensure singular execution; this article is not directly related to them. It is just a kind reminder that it is possible to design such a mission-critical environment for the singular execution of certain operations.
As for my experience, it was not in banking, but rather a more open environment. #Marine, #Navigation.