NCSU Creative Services
Retrofitting Legacy Code for Security
Somesh Jha, University of Wisconsin–Madison
December 5, 2011
Jha: Somesh Jha; M: Male Speaker







It’s a pleasure to talk here. I was telling Mike this before, that this whole Triangle Park area, with Mike and Fabian here, Feng [ph] and Will [ph] _____ at N.C. State, Landon Cox at Duke, is becoming a real security powerhouse. So you know, it’s a real pleasure to talk over here. That’s a picture of Madison in summer, and I probably wouldn’t show what it looks like in winter; otherwise it would be a depressing talk.




[LAUGHS] That’s true. It looks like that. In a month it will look like

So, essentially, this talk has some good news and some bad news. Since this is a security talk, we’ll start with the bad news first. If you’re looking for some Halloween-time reading, I would suggest reading the Symantec threat report of 2010. And essentially, the news is all bad. For example, if you look at that report and see where the attacks are emerging from, the U.S. is still top of the list, but new countries are coming in; there’s a lot of malicious activity in Brazil, India, and so on. You know, if you look at the report, it’s essentially because connectivity is increasing in those countries, right?

Where are the attack targets? What are the attackers targeting? You know, some of the old news: spam, identity theft, and so on. But it looks like there’s a huge emergence of hackers that are targeting very specific enterprises.

So spam that is, for example, targeted right at this department. It will say, “Hey, Enslo [ph] said this.” So it will actually be very targeted, and these are what are called advanced persistent threats in a lot of the community. Stuxnet is a popular example of that. Okay? So essentially, I’m scaring you more. Each slide should be more scary. But I don’t see you guys getting scared.

So what are the vulnerabilities that attackers are exploiting? Mostly web-based, you know, and malicious PDF stuff, which Fabian has done some excellent work in mitigating. What kinds of malware are most prevalent? Trojans still rule. One of the biggest things that I saw in the report, and I mean, I read the report probably every year, just mostly being a security researcher, is that the kits to actually write malware and obfuscate malware have become very mature. You can actually buy them on the market.

So that’s kind of the takeaway of the demographics of attack: origins are expanding, the web is the major vector of attack, Trojans are the most prevalent form of attack. And you can read the rest. And essentially the report said that the security market in these emerging countries is going to be where the growth is going to be. Okay? Good. I’m not going to talk about defenses, but you can see, for example in 2009, that was the number of signatures created by Symantec. Okay.

So, the news is pretty grim, and if you want even more grim news, I don’t know, I didn’t see Fabian and Mike there, but DARPA had a cyber colloquium in early November, and essentially it was even more grim news, essentially saying that, you know, we’re kind of losing the battle, so to speak, at least from their point of view.

Okay, so what do we do? Where is the good news? So this was the bad news. Now the good news. There is a huge push in the networking and systems community towards clean-slate designs. People say, you know, we’ve been doing patching, signature-based defenses and so on; let’s try to think from the beginning and see: if we re-think the entire system stack, will things get better? Okay? So, from the networking side, I think NSF has a program, and actually there is a project that Nick McKeown runs, and essentially he’s re-thinking the network architecture from the ground up. This is the OpenFlow stuff that he’s doing. In computer science, fortunately or unfortunately, a lot of the innovations follow the money, so DARPA has a Mission-oriented Resilient Clouds program, which you can essentially think about as re-thinking networking from the ground up, clean slate.

Now, what I will be talking about: DARPA has a program called CRASH, which is about clean-slate design for hosts, from the operating system to the hardware and so on. And actually, you know, it’s a pretty ambitious program. Some of this might not make it into, you know, commodity systems, but the whole idea is that I think it’s a good mental exercise: if you’re going to do a clean-slate design, how much better can you really do? Now the good news is that there are already some interesting things happening on the clean-slate side in the systems community. By the way, I’m not a networking guy, I’m actually not even an OS guy, but I’m going to mostly talk about hosts here and not about networks.

So the whole idea is that, well, Fabian and I sit more in the USENIX Security and Oakland crowd, but there is a lot of stuff happening in the systems community where they’re actually building systems, or primitives into systems, that have very strong security guarantees. So for example, the first three of them, Asbestos, HiStar, and Flume, mostly from MIT and Stanford, have very strong information-flow guarantees built into the OS primitives. Capsicum was in last year’s USENIX Security, and it’s a capability-based UNIX/BSD system.

There are also systems that are virtual machine-based; for example, Proxos from the University of Toronto. Essentially, what it does is allow you to have some trusted applications, like a VPN client where you have credentials, run in their own guest OS, so you really sandbox them. That’s the idea. There are many more that I have not put over here. The idea is that in the systems community, they are re-thinking, at least: “Okay, you know what, if I want to build some primitives into the OS, what kind of primitives am I going to build on?” Go ahead.




So generally they have not focused on recovery. As for the kind of primitives, you will see some examples; what they will say is that, hey, you can have two processes, and they get tags from the OS, and this one can only talk to that one if this guy’s tag is a subset of that guy’s. And those primitives are actually enforced at the OS level. You will see some applications I’ll talk about. Now the good thing is, if you do a thought experiment, you can see that it’s not hard to build a web server on top of HiStar that has very strong security guarantees. I mean, they talk about that in their paper. They built a web server for their system in which there is no information flow between threads that are handling different requests.

So, I’m going to talk about one such operating system, and if I still don’t answer your question, please let me know. Okay. But here is what happens. This laptop, your laptop, probably is not using any of these systems, I’m assuming. Actually, some of the guys who wrote them use them but, you know, we’re not using them. And my thesis, and you know I’m open to attack and criticism here, is that a lot of these things don’t get used. They’re very useful for people to get tenure and very good papers, but they never get used widely. It’s because of what happens with all the applications that were written for, let’s say, vanilla Linux, like a web server or a wiki or a browser or whatever. So in some sense, a lot of these clean-slate systems, which I think are a very good direction, will, I think, not get adopted unless there are tools where we can either migrate some of these applications that have been written for traditional OSes, or tools to build applications for these clean-slate designs.




No. So I think the thing is, with some of these, I think it would require re-writing the code. I mean, I don’t know how big; some of these applications can be millions of lines, right? And what happens typically, and you’ll see some of these applications, is that when you move an application through, let’s say it was written for Linux and you move it to one of these systems like HiStar, the amount of code you have to write to really secure it is not that much. So we’re not actually synthesizing the whole program; we’re just synthesizing the shim that is required to do that. And that you will see, and we are able to do that because of some formal methods and PL tricks. But I mean, our goal is not to rewrite all the other code; it’s just that shim of code that you need to run on that operating system.

So this is the contraption that we want to build. You have legacy code, let’s say the Apache server, that’s insecure; you want to migrate it to, let’s say, Flume or HiStar, these new systems, and out comes the retrofitted code that is secure. We want to build that beast. Okay? That’s kind of our goal, and that’s why it’s called retrofitting legacy code for security. Fabian?




Yeah. Flume is going to be a running example when I get into the talk, so if it is still not clear, please let me know. Okay.

So, essentially from this slide you should read that everything that I know is useful for this problem. A lot of techniques from verification and program analysis can help with this problem. That’s essentially the thesis of this talk, and let’s see whether you guys still buy it at the end. Right? So this wouldn’t have happened without a lot of collaboration. Some of the early work was done with Trent Jaeger and Divya Muthukumaran from Penn State. Some of the decision stuff was done in collaboration with Sanjay [ph] ______ and one of his students, ______, from Berkeley. And also Vinod Ganapathy, one of my former students, who is at Rutgers now. But a lot of the work that I’m going to talk about is with a student, Bill Harris. This is when he had just entered his Ph.D. study; he looks much older now. I think we have put him... that’s when he had just graduated as an undergrad. And Tom Reps, who’s in the programming languages community. So most of the work that I’m going to talk about is with Bill and Tom.



Actually, he does, he does. I think it was funny. Tom was at one of my students’ defenses, and one of my other students thought that he was a grad student. He said, “Oh, you know, who’s your advisor?” [LAUGHS] So, yes, he does look like that. This is what good living will do for you, right?




[LAUGHS] So, the paper that I’m going to talk about is called “DIFC Programs by Automatic Instrumentation,” and Fabian, this is going to talk about Flume, so you’ll get some idea. And this work was done in collaboration with Bill Harris and Tom Reps.

Okay, so we are now switching. So again, just to say, a lot of our work is taking these systems, taking legacy applications, and migrating them to these systems automatically. As the talk goes on, you will see what kind of primitives these DIFC operating systems support. The one we’re going to be using here is called Flume. We have used the same methodology for other systems. We are going to give you a little bit of the flavor of Flume, but if you’re really interested in all the nitty-gritty details, the paper appeared in SOSP 2007.

Max Krohn has since... he was on the faculty at Yale for a little bit; since then, he has left and he’s at a start-up now. So here is... I don’t like animations that much, so here’s the basic idea behind these operating systems. This is a mock-up of a web server; I mean, not the whole thing, but just a little bit. The way you should think about the small running example is that all your web requests, HTTP requests, come into this requester, and a worker is spawned to handle each request. The worker can talk to the requester, and the spawner is what spawns the worker. That’s kind of the model. And let’s say you don’t want the worker to talk to the network, because it might leak some data about the request. The requester, because you trust it, you want it to be able to talk to the network. Okay.

Now, let’s say you want to enforce this information-flow policy, where you can think about green arrows as information flow allowed, and red as information flow not allowed, at the OS level, at, let’s say, the kernel. Any problems with that? What do you see? Let’s say I want to enforce that policy at the kernel.




No, just the policy. The policy is this: that, you know, these guys should be able to talk; this guy should be able to spawn a worker; red means that information flow is not allowed. So let’s say this is the policy I meant to enforce.



Yeah, that just goes over channels. Forget the code. I mean, I think these things also don’t deal with covert channels. This is just overt channels, the channels that I just showed with those arrows. Any issues you see? I mean, like, what happens with you...

So think about writing this policy at the OS level, right? How does the OS know what a spawner is, or what a worker is, or what the requester is, and so on? Right? So the thing is, if you want to write these policies at the OS level, the application semantics has to kind of bleed into the OS, which is not a good thing. So in a lot of these systems, Flume, HiStar and so on, the way they think about it is: we are just going to provide you the primitives to do the enforcement, but the policy uses those primitives and sits in the application.

The red arrow means a worker cannot send a message across the network. Is that
what you’re saying?


Yeah. So to underscore, think about green as information flows that are allowed and red as those that are not allowed.




So maybe in this case you can. You can say, hey, you know, give a small module to the... that something is entering the worker. But imagine a bigger setting and a bigger application: if you want to write a policy at the OS level, then the application structures, the semantics, all have to bleed in. So this is not our work; all I’m saying is that a lot of these systems essentially give you primitives to enforce the policy, but the policy itself is expressed in the application.

So, for example, that’s what this essentially says. You know, this will give you the new Flume, and we’ll see some applications. It will give you enough primitives so that you can do that policy, but you know, the policy sits in the application and it’s enforced in the OS. But you’re right, I mean, if you want to expose some of the application state to the OS, you could possibly do it. But they have not taken that tack. Their whole idea is that the OS is going to give you the primitives and the application actually writes [ph] ______.


[INAUDIBLE]…sending a message over the network is something the OS knows [ph]. It’s not as if it’s being asked to disable something it doesn’t know.


Exactly. But that is one application. But let’s say I want to say, hey, you know, when you do this, this internal function should not be able [ph] ___. Yeah. So what I’m saying is that essentially this is the philosophy that all the Flume guys, the HiStar guys, all those guys have taken. Okay.

So, this is essentially where we started: this is their program, and this is the program with all of these primitives that the Flume operating system needs added in. And the poor programmer has to write all this in; we want to go from here to here automatically. And the other thing is the following: when we started looking at these programs, I would send mail to the Flume mailing list and Max Krohn, saying, you know, “What was the high-level policy you guys were going for?” They said, “Oh, maybe it’s like this,” which is also kind of a bad thing. Essentially, what it is, is that the policy you’re trying to enforce using those primitives is buried in the code. There’s not somewhere a declarative file which says, okay, this is the policy I’m trying to enforce with this code.

So that’s what we call the semantic gap. And actually, to be truthful, a lot of systems guys still feel like this. If you ask them what their policy is, they’ll say, “Go look at the code.” And at least, both Tom and I being PL people, we don’t think that’s the right tack. So that’s what we want to bridge, this semantic gap, where you give us policies in a separate declarative language, and you give us the un-instrumented program, and out comes the instrumented program. So we had this paper called “Secure What I Mean”: you give us the C program, you give us some semantics about what the Flume primitives mean, and you give us a policy about what flows are allowed and what are disallowed, and we give you another program which you can essentially run on Flume and that will satisfy that policy.

So, this is how it works: you give us a program and a policy; we generate a set of constraints, looking at the C program, the language semantics, and the program policy; and also we look at the Flume semantics, what the various primitives mean, which will become clear in the examples that I will consider; and also the policy language semantics. And this set of constraints goes into an SMT solver. How many of you are familiar with what a satisfiability modulo theories solver is? I mean, I can give you a little bit of a primer... let me just finish this slide. And then, essentially, the solver is used to solve that set of constraints, and the solution it gives you is used for the placement.




Yeah, yeah. It’s just vanilla C. So one thing... go ahead.




Yes, yes, yes, oh my God. No, no, no, yeah. We’re not doing it on binary if we have access to source code. Binary is yet another thing. I mean, remember, Apache is, I don’t know, like a couple of million lines? I think it would have trouble scaling up. I don’t think binary analysis, when you compile two million lines, I don’t know how big the binary is, but I think it’s pretty big, and most binary analysis tools won’t scale up to that. So yeah, actually, that’s a very good point. I should have said it. The source code is assumed.

Now just a little bit of a primer about the SMT solver. I mean, we all know SAT is NP-complete, but there has been a huge advance, how should I say; most instances that come up in verification problems are easy, and resolution-style solvers quickly find a satisfying assignment. These SMT solvers have really leveraged that, and they have other theories, like linear constraints and all that, built in; this goes back to the old work of Nelson-Oppen and so on, to the point where we actually have very large constraint systems and we can just throw them at the SMT solver Z3, and it is able to solve them. So there has been a huge community that has been trying to make this really efficient, and we are using that for our constraint-solving problem. But there are some encoding tricks, which I’m going to talk about, to do that. But think about the SMT solver as essentially just a black box we’re using to solve this set of constraints. Any questions about that?
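To make the black-box view concrete, here is a minimal sketch (our own toy stand-in, not Z3 or any real SMT solver): brute-force search for a satisfying assignment over a handful of boolean variables.

```python
from itertools import product

# Toy stand-in for the solver-as-black-box view (not Z3, just an
# illustrative brute-force search): feed in constraints, get back a
# satisfying assignment or None if the constraints are unsatisfiable.
def solve(constraints, variables):
    for values in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(c(env) for c in constraints):
            return env          # first satisfying assignment found
    return None                 # unsatisfiable

# (x OR y) AND (NOT x) is satisfied by x=False, y=True
model = solve([lambda e: e["x"] or e["y"], lambda e: not e["x"]], ["x", "y"])
assert model == {"x": False, "y": True}
```

Real SMT solvers handle richer theories (sets, linear arithmetic) and scale far beyond brute force; this only illustrates the interface the talk relies on.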

By the way, if you’re interested in that, a lot of communities, for example, the AI planning community, have all switched to this; no need to have these planners that are very, very specific to your problem. A lot of them have actually switched to SAT solvers, because you can just bit-blast it down and, you know, do it. For example, Graphplan from Avrim Blum and those guys. Okay. So back to our example.

This is, again, that small web module that I talked about. This is the insecure program. I don’t know, the arrow is not showing here, but the policy here is that the worker cannot talk to the network, the requester should be able to talk to the worker, and the requester should be able to talk to the spawner. It will go through what we call a policy weaver, and out comes a secure program where, look, these are the primitives that Flume needs. And essentially, our transformation makes sure that this satisfies the policy.

[INAUDIBLE]…What’s that?


So that’s a very important point. I think this is maybe what you’re getting at: that vocabulary is tied to the program. So the vocabulary that we use in the policy, like what is a worker, what is the network? We have a way of tying that back to a code fragment over there. For example, every process that is spawned using this call is a worker, and so on. Is that what you were asking? Yeah. So that...

[INAUDIBLE]…How does this program communicate? What’s the worker policy...

No, no. So you have to write, for example, the connection, what it means to be a worker. Yeah. You have to do that. Go ahead.



Also, it is all about how much you... I mean, do you think writing that code, this one, is harder than writing these things, right? And I claim that, at least from the case studies we have done, the policy and these bindings [ph] were much smaller than the actual Flume code that was generated.




Well, again, I think... don’t get too hung up on the example. Essentially, your question boils down to: is writing this simpler than actually generating the Flume primitives themselves?

I guess the question is, to write the policy, do you need any kind of knowledge about...


Yes. To the point where, actually, there are these principals that we talk about, worker and network; you have to tie each to a specific set of code fragments and spawn sites. For example, you need to tell us that at this point, when you spawn something, I’m going to call it a worker.




Yes. But again, I think the value judgment is: is that harder to do than that, given the amount of code we generate for it? So that’s, again...

[INAUDIBLE]…every time a change [INAUDIBLE] these programs change, you have to change the binding [ph]. What happens in that instance?


So you’ll have to... so if you run this... we haven’t talked about that. So what you’re asking is, if we update this, at this point you have to kind of re-run your SWIM program. You have to re-run SWIM to get the Flume version of that.




Okay. So the thing is, at least from what we saw in a lot of those change histories [ph], these are big enough structural things; they’re pretty stable. The internal mechanics we don’t need; that is handled by the static analysis. But that’s a good question. I mean, that’s a value judgment. Essentially, at least what we saw is that the amount of code you have to write there was much more than what you had to write here. Okay. The other thing is, one thing with the SAT solvers that is very interesting is that if the set of constraints is unsatisfiable... oh, Fabian?


[INAUDIBLE]…Can you say something about the number of constraints that you’re considering? [INAUDIBLE]


Yeah. Actually, I have some experimental results on that. One thing that we found very interesting is that if the set of constraints is unsatisfiable, so there is no way you can satisfy this policy, the SAT solvers give you an unsat core, which essentially is the smallest core of the constraints that led to the unsatisfiability. That’s one thing we have not looked at too much, but it is a very good way to ask, okay, why the heck were these policies not satisfiable? And we found it very instructive to look at, and you will see an example of that in my example. So in some cases, there was an inconsistency between the flow constraints that we give and you just can’t satisfy them, so you have to do something like re-factor the program. And you will see it.
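A hedged toy illustration of the unsat-core idea (our own brute force, not how a real SAT/SMT solver computes cores): when constraints conflict, report a smallest subset that is already unsatisfiable.

```python
from itertools import combinations, product

# Toy illustration of an unsat core (assumed simplification; real
# solvers compute cores far more cleverly): find a smallest subset of
# conflicting constraints that explains why no assignment exists.
def satisfiable(constraints, variables):
    return any(all(c(dict(zip(variables, vs))) for c in constraints)
               for vs in product([False, True], repeat=len(variables)))

def unsat_core(constraints, variables):
    for r in range(1, len(constraints) + 1):
        for idx in combinations(range(len(constraints)), r):
            if not satisfiable([constraints[i] for i in idx], variables):
                return idx      # indices of a minimal conflicting subset
    return None                 # the whole set is satisfiable

# x, (x -> y), (not y): jointly unsatisfiable; all three form the core
cs = [lambda e: e["x"], lambda e: (not e["x"]) or e["y"], lambda e: not e["y"]]
assert unsat_core(cs, ["x", "y"]) == (0, 1, 2)
```

Mapping the core constraints back to program fragments is what lets a tool point at the code and policy pieces that together make a policy unsatisfiable.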

So again, this is what we want to do: we want to go from there to there, modulo those policies. So this is how it’s going to look: you give us a program, you give us a security policy, and then it goes through our instrumenter and out comes a secure program. And if the policies are not satisfiable, we show you the small set of constraints, which you can actually map down to a fragment of the program, saying, “Okay, because of this fragment and these policies, we couldn’t satisfy it.”


And who sets the policy? The same person who is setting the policy here?


Well, you know, imagine you’re a systems guy: you take Apache and you want to say, “I want to have this policy.” So I’m assuming, if you want to run this program on Flume, that’s the person writing this policy. Are you getting at whether these are easy to write or hard to write?


So for example, I don’t know how rigid these policies are, so depending on the setting you might not be able to [INAUDIBLE]. So I don’t know [INAUDIBLE], but a set of policies might have a different [INAUDIBLE] and I’m running it [INAUDIBLE] I might want those policies. So is that you said one [INAUDIBLE], or can I, as a systems administrator, set a policy and then it gets...

No, the policy is the static thing.


[INAUDIBLE]…policy set by the original person [ph]?


Yeah. But I think that the enforcement is dynamic. Okay, so that’s essentially what we are going to cover in this talk. Okay? So this, again, is our mock thing. That network flow is not allowed; anything in red means essentially, thinking about security, that those flows are not allowed; green means those flows should be allowed in the system. So those are the flows that apply to those parts of the system.

So this is what I’m going to talk about first: the challenge of instrumentation. The whole idea is that, remember, we want to do this statically. So the first thing you have to do, and I’m going to talk about some of the Flume primitives, maybe that will answer things, is to somehow represent how these Flume primitives work, the semantics of these primitives. If we don’t get that, we can’t do anything.

So you have to kind of first encode all those DIFC mechanics. And let’s think about that a little bit. Now, this is my two-slide tutorial about how the Flume DIFC mechanics work. Essentially, what happens is each process gets a label, and a label is just a set of tags. Think about tags as atomic elements and labels as sets of tags. So this is a tag, this is a label; you can have A, B, C and so on. This guy, if it tries to send a message to P2 through the OS, inter-process communication, it is allowed if this guy’s label is a subset of that guy’s. If the receiver had an empty label and the tagged sender tried to send a message, it’s not allowed. So that’s the simple version; there are other bells and whistles to Flume, but the basic idea is that a process gets a label, which is a set of tags, and when it tries to send a message over a channel to another process, it can only do so if the label of the receiving process is a superset of the sender’s. And if you want to think of it in a lattice-theoretic way, then essentially it’s the subset relation going up in the lattice. Good.
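To make the subset rule concrete, here is a minimal sketch (an assumed model of the check, not Flume’s actual API): a send is allowed only when the sender’s label is a subset of the receiver’s.

```python
# Minimal model of Flume's send check (assumed simplification, not
# Flume's real API): labels are sets of tags; a process may send to
# another only if its label is a subset of the receiver's label.
def can_send(sender_label, receiver_label):
    return sender_label <= receiver_label

worker = {"A"}      # tainted with tag A
network = set()     # empty label
assert not can_send(worker, network)   # tainted process cannot reach the network
assert can_send(network, worker)       # the empty label flows anywhere
```

This is exactly the "subset relation going up in the lattice": the empty label is the bottom element and can flow to every process.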

So raising a label [ph] means you can read more. Okay. So now, I mean, think about P1 and P2; this is not our work, this is just giving you a slight idea about the Flume semantics, how Flume works. So P1 can’t send a message to P2 now, because this is not a subset of that. What the Flume guys also do, and this goes back to the old work of Andrew Myers and Barbara Liskov, is they allow you something which they call positive and negative capabilities, where you can add and drop tags. They give you the privilege to add and drop labels. So let’s say you give P2 the privilege to add a tag A; then essentially what it can do is add the tag A, and now it can receive a message from P1.
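The add/drop privileges can be sketched in the same toy model (again an assumed simplification; names like raise_label are ours, not Flume’s): a positive capability for a tag lets a process add it to its label, and a negative capability lets it drop it.

```python
# Assumed model of Flume-style capabilities (method names are ours):
# positive caps allow adding a tag (read more), negative caps allow
# dropping it (send to more places, i.e. declassify).
class Process:
    def __init__(self, label, pos=(), neg=()):
        self.label, self.pos, self.neg = set(label), set(pos), set(neg)

    def raise_label(self, tag):        # requires the positive capability
        assert tag in self.pos, "no positive capability for " + tag
        self.label.add(tag)

    def drop_tag(self, tag):           # requires the negative capability
        assert tag in self.neg, "no negative capability for " + tag
        self.label.discard(tag)

def can_send(p, q):
    return p.label <= q.label          # subset rule from the previous slide

p1 = Process({"A"})
p2 = Process(set(), pos={"A"}, neg={"A"})
assert not can_send(p1, p2)            # A is not yet in P2's label
p2.raise_label("A")
assert can_send(p1, p2)                # P2 added A, so it can receive from P1
p2.drop_tag("A")
assert can_send(p2, Process(set()))    # after dropping A, P2 can reach the network
```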




It is, but I think you will see that I can also drop tags. A lot of the declassification work... I think it was more formalized by... at least from what I’ve seen, declassification was not new, but this kind of DIFC-style tagging was done more cleanly by Myers and Liskov.


[INAUDIBLE]…you’re telling me these problems, I don’t see the connection at all.


Okay. You’ll see, you’ll see.


[INAUDIBLE] So I don’t know how you would, you know [INAUDIBLE], so I’m a little unsure how this is... how flexible it’s going to be.


Okay, so again, I want to say this is the semantics of the Flume primitives; this is not our work yet. I’m just describing to you how the Flume primitives work. And essentially, what you’re getting at is a lot of this debate in the security community, DIFC versus capabilities. My whole thing is that that’s exogenous to our work. We are taking these primitives as they are and, yes, there’s a lot of criticism that, oh, you know, why not use capability-based systems and so on. But I’m just describing to you how the Flume primitives work, and I’ll show you how to do the synthesis.




It does, actually. We’ll tie that back, yeah. Essentially, now you can send it, but what you can also do is lower your label, and this is where a lot of the Myers/Liskov work comes in. For example, now at some point you don’t want P1 to receive a message from P2, or you don’t want P1 to communicate with the network. You would give it a negative capability, and you will remove A from that label, and suddenly it can’t send a message to... or it can, actually. Sorry. It can send a message, because what happened initially is it added the tag; it got A. Now it can remove it, so it has an empty label and it can send a message to the network. Okay?

So essentially, this is not our work, this is Flume. So, just to recap: Flume has labels, which are sets of tags, but it has these capabilities to either drop tags or add tags. Adding a tag means you can read more; dropping a tag means you can send to more people, declassification, okay? Again, this is just describing the DIFC mechanics of Flume.

And so in our case, it is going to look like this. I’m going to assign an empty label to this, a label A to this, and an empty label to the network, so the worker cannot send over the network, right? Can anybody say why the worker can’t have a negative capability? That means it can’t drop the tag A.




Exactly. So otherwise, what it can do is just drop that tag and send to the network. But do you see a problem here? What happens is the worker can’t talk to the network, and the requester can send stuff to the worker, but the worker can’t send to the requester. But our functionality constraints say the requester and worker should be able to talk to each other.

So, what these guys do, and this is what I call an escape hatch: they essentially re-factor, and they put in a proxy, and the requester and worker talk through the proxy. So essentially, think about this re-factoring as saying that those constraints could not be satisfied unless you put a proxy between the requester and the worker. And if you look at the proxy, it can raise and lower its label as it wishes, so it can talk to both the requester and the worker.

Now, what we want to do is come up with this kind of labeling and refactoring automatically. That’s our claim. And later on I’ll show you how to get here, not by writing these things by hand, but actually through very systematic program analysis techniques.

So we want to instrument DIFC code that is legal, secure and functional. And I’m going to first just walk through this. First, let’s talk about how to generate constraints that will give you those labels. Let’s just look at the spawner for now, which looks like this: a connection request comes in, and then essentially you just spawn the worker.

Essentially, what our static analysis technique does is, for each program point, it assigns variables. Lab1 is the label of the program just before that statement. Pos1 is the positive capability just before the statement, Neg1 is the negative capability just before the statement, and Create1 is all the tags you’re going to create at that point. So for every program point, we will have these four variables in our static analysis. And then we’ll have constraints that connect those variables, and then we will have a constraint solver that will solve them and whose solution will tell us what to do. And so that’s what I’m going to go through now. It’s the same thing at that program point.

So what does this constraint say? Lab2 is a subset of Lab1 union Pos1. Essentially, all you can do... let’s say this was after this. The label set was Lab1 here. You can only increase the label set by what you have the positive capability of. So this gives you the constraint that Lab2 cannot get bigger than Lab1 plus the positive capabilities you have, right? And this is essentially... the blue constraints are going to model the DIFC primitives. Is that constraint...




Oh, because we want to use subset constraints because, you know, I might decide to add less. We don’t have to add all of them, right? And in general, subset constraints are easier to solve. But it also makes sense because you might add only a subset of the tags that you can; you don’t have to add all of them. This just gives you the capability to add.

Okay. What about this? This goes the other way. It says that, hey, I was at Lab1 here, I might drop some of the tags, so that gives me a lower bound for Lab2.

So all these constraints are doing is modeling the DIFC semantics of Flume. And similarly, Pos2 is a subset of Pos1 union Create1; that just says that your positive capability here is limited by the positive capability there plus the tags you created at that point. Now think about secure. Let me flash this constraint. What does this constraint... so I think it’s better to do the negation first. This essentially says that if the label of the worker is LabW and you drop some of the tags, and if that becomes a subset of LabN, then it can send messages to the network. So the negation of that is what you want, because that’s a constraint that you don’t want satisfied.
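To make these constraint shapes concrete, here is a hedged sketch (the helper names `flow_ok` and `insecure` are mine; the real system emits such formulas to an SMT solver rather than checking fixed sets like this):

```python
# Sketch of the constraint kinds from the talk, checked on concrete sets
# (illustrative only; the actual tool solves for the sets symbolically).

def flow_ok(lab1, pos1, neg1, lab2):
    """DIFC semantics between consecutive program points:
    Lab2 may only grow by Pos1 (tags you can add) and
    only shrink by Neg1 (tags you can drop)."""
    upper = lab2 <= (lab1 | pos1)   # Lab2 subset-of Lab1 union Pos1
    lower = (lab1 - neg1) <= lab2   # Lab1 minus Neg1 subset-of Lab2
    return upper and lower

def insecure(lab_w, neg_w, lab_n):
    """Security is the NEGATION of this: the worker can reach the network
    exactly when dropping its droppable tags takes it under LabN."""
    return (lab_w - neg_w) <= lab_n

A = frozenset({"A"})
empty = frozenset()

# Worker tainted with A, holding no way to drop it, network unlabeled:
assert flow_ok(lab1=empty, pos1=A, neg1=empty, lab2=A)  # taint may be added
assert not insecure(lab_w=A, neg_w=empty, lab_n=empty)  # secure: no leak path
```

The functional constraints from the next paragraph would just be two more subset checks in the same style (LabW ⊆ LabR and LabR ⊆ LabW).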

And now we come to the functional constraints. You want the requester to talk to the worker, and that is pretty simple: LabW is a subset of LabR, and LabR is a subset of LabW. There are some other complications with the constraints, so what we do is we just collect all these constraints in a constraint system. You do a walk of the control flow graph, and you get this giant set of constraints. And you think, hey, this thing is too big, but most of the SMT solvers are able to chunk through it pretty quickly. We had some things in the paper to reduce the set of constraints. So this set of constraints we then just give to a SAT solver, an SMT solver, and actually it comes back saying that it is not satisfiable. Not only that, it actually gives you the unsat core, which is a subset of these constraints that actually led to the unsatisfiability. And you can see why that is. I mean, these three are what led to it. And actually this is where some of the new work comes in. This actually led us to saying that we have to put the proxy in there; that unsat core gives you a very strong indication that you have to add this proxy between the requester and the worker.
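Here is a toy way to see what an unsat core is (real SMT solvers such as Z3 compute minimal cores far more cleverly; this brute-force version, with made-up constraint names, only shows the idea that a small clashing subset pinpoints where the proxy is needed):

```python
# Naive "unsat core" over set constraints (purely illustrative).
from itertools import combinations

# Constraints over one unknown label X, each as (name, predicate).
constraints = [
    ("functional: A in X",   lambda X: "A" in X),       # worker must carry taint
    ("flow: X subset {A}",   lambda X: X <= {"A"}),
    ("security: A not in X", lambda X: "A" not in X),   # network must stay clean
]

universe = [set(), {"A"}]  # every candidate assignment (tiny tag universe)

def satisfiable(subset):
    return any(all(pred(X) for _, pred in subset) for X in universe)

def unsat_core(cs):
    """Smallest subset of constraints that is already unsatisfiable."""
    for k in range(1, len(cs) + 1):
        for sub in combinations(cs, k):
            if not satisfiable(sub):
                return [name for name, _ in sub]
    return None  # the whole system is satisfiable

print(unsat_core(constraints))
# -> ['functional: A in X', 'security: A not in X']
# The functional and security constraints clash by themselves, which is
# exactly the kind of hint that says "insert a proxy between these two."
```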

Now let’s talk about solving the constraints. In general, the set of constraints that we generate are called subset constraints, and satisfying them is NP-complete, essentially because we have two kinds of constraints: subset constraints, and also negations of subset constraints. Remember, the security constraint was the negation of one. So we have both. We have monotonic constraints, which say “this is a subset of that,” and we also have negations of those. So it’s NP-complete for one reason: our constraints had both monotonic and non-monotonic constraints. If you were only considering one kind, like the functional constraints, it’s actually monotonic and it can be solved in polynomial time.

This is amenable to SMT solvers in practice, and we actually did that. And the way we actually did that is, let’s say I got a solution from my SMT solver. It says Lab1 is empty, Pos1 is empty, Neg1 is empty, Create1 is A. Then I know that because Create1 is A, I have to instrument the code to create a tag, right? And the same thing here: if I have Lab2 as A, positive capability A, negative capability empty, Create empty, then I have to spawn a worker with that label and give it that positive capability.

So all I’m trying to say here is that the SMT solver solution that I get very clearly says how to instrument the program. There isn’t a big leap there. And essentially this is kind of the label assignment, and we got to this picture automatically by looking at the SMT solver output. There was... before we get into the applications, and maybe this is going to answer your question, Fabian, there was a slight technical problem here.

So what is the easiest way to bit-blast these types of constraints into SMT? I’m going to say, I’m going to have a set of 20 tags, and each label is just a subset of these 20 tags. So I write each label as a bit vector, and I bit-blast it down. Does anybody see a problem with that? So I said, okay, you know what, I’m going to limit myself to 30 tags. Each of those subset constraints becomes essentially a bit vector formula. All of those constraints I can conjoin, and then I give them to the SMT solver.

The problem is, how do you know 30 is enough, right? What if I don’t find a solution with 30 but find one with 31? And this is where a lot of the tricks happened. We proved something which, in logic, they call a small model theorem, which essentially says that for this set of constraints, we can give you a polynomial bound, and that many tags are going to be enough. And so that’s why this was sound. So in our procedure... and unfortunately a lot of people in the PL community who use SMT solvers don’t try to do the small model theorem; if you’re not careful you can get an unsound procedure, because when you blast it down you don’t know whether you have given enough bits or not. Okay.
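The bit-blasting idea can be sketched like this (illustrative only; the real system emits bit-vector SMT formulas, and the small model theorem is what justifies fixing the bound `K` soundly — the brute-force “solver” here is a stand-in):

```python
# Sketch of bit-blasting: fix a bound K on the number of tags, encode each
# label as a K-bit mask, and subset constraints become bitwise formulas.

K = 4  # assume the small-model bound told us 4 tags suffice

def is_subset(x, y):
    """x subset-of y over bitmask-encoded labels: x AND NOT y is all zeros."""
    return (x & ~y) == 0

def brute_force(check):
    """Stand-in for the SMT solver: enumerate all K-bit label pairs."""
    for lab1 in range(1 << K):
        for lab2 in range(1 << K):
            if check(lab1, lab2):
                return lab1, lab2
    return None

# Find Lab1, Lab2 with Lab2 strictly inside Lab1, and tag 0 in Lab1:
sol = brute_force(lambda l1, l2: is_subset(l2, l1)
                                 and not is_subset(l1, l2)
                                 and (l1 & 1))
print(sol)  # (1, 0): Lab1 = {tag0}, Lab2 = {} is the first such pair
```

The unsoundness the talk warns about is visible here: if the true solution needed 5 tags, this 4-bit search would simply miss it, which is why the proved bound on `K` matters.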

Now, applications: we did Apache, FlumeWiki, ClamAV and OpenVPN. The Apache one was not fully automatic, because of that whole refactoring business. We have not automated that refactoring with the proxy; we look at the unsat core, and that proxy stuff you still have to write manually.

The instrumentation time was very, very good... I mean it was pretty good, and I think the constraints, the largest one we saw was probably 20,000 variables.




Okay. So the instrumentation time was actually imperceptible, because essentially it is the execution time of those primitives, and those are actually very fast. And I think we didn’t do the micro-benchmarks on that because we said, okay, whatever the Flume guys reported for the primitives, that’s what we used. So the takeaway from here is that automatic rewriting of Flume programs is possible by just using SMT solvers. Mike?




Yeah. And you have to somehow look at the unsat core and say that, oh, you know, this partitioning can be done automatically.




Yeah, but I think we have some ideas about that. You’ll see. Yeah, okay. And use an SMT solver to solve the set of constraints.

So if I’m P_____ I would just say, oh, I’m a capability-based guy, I like that better than DIFC, and so why not do some other operating system? Okay. So what we started doing is Capsicum, which is the capability-based stuff.

So the whole idea, which appeared in USENIX Security, I think 2010 or ’11... and the student asked the right question, and you know the student didn’t want to be a student for 12 years, saying, “Okay, when the heck does this thing end?” Right? “Do I do Capsicum first, and then you will ask me to do HiStar as another one, then Asbestos?” And you know, I think this comes back to some of the stuff that P______ asked [ph]. Everybody has their favorite system, right, and there are papers in USENIX Security saying that, oh, you know, the DIFC systems are not as good as the capability-based ones and so on. I mean, we wanted to come up with a framework that applies to all these systems, a generic framework, not something that applies to just one. And actually a student gave me that graphic, so I don’t know whether that was a hidden message from the student or not.

So, okay. So, actually, this is where the very cool stuff comes in; hopefully some of you guys start using it. There is a very deep idea, at least coming from the PL community. It’s called visibly pushdown games, and it was at CAV 2003. And that was our main technique: we actually reduce our problem to solving certain games, safety games. And I’ll give you some of that intuition.

And I really think that this... I don’t know whether you want to call it a framework... should be used more in security, because there are some ideas that, when you hit upon them, intuitively make sense. And this one does, and I’ll show you. And so essentially what we did is, for all these systems, we can reduce this idea of weaving code to finding certain strategies in these games. And in the rest of the talk, I’ll try to give you that intuition. One thing I would say is that I hope other people use these formalisms; they sort of languish in the PL community, and other people don’t use them that much. And actually, here is the basic idea. If you get this idea you kind of get the whole thing, and I’ll just give you a little bit of the flavor. What you do is you treat your program, in that case that web server or whatever, as the attacker, whose plays are essentially the program statements, like, you know, accepting a connection request. So that’s the attacker. The defender is the instrumenter, you know, that “add tag A,” and his vocabulary is the DIFC primitives.

Now what I’m going to define is a game between the attacker and the defender, such that if there is a winning strategy for the defender, that strategy gives you the instrumentation. And here is how these games work. So I don’t know... are people familiar with these safety games? Okay. So it’s a very simple idea. The idea is actually... it was funny, it’s such a simple and natural idea that actually I couldn’t find a citation. It’s kind of folklore in the PL community. So imagine A has two plays, it can play either a1 or a2; B can play b1 or b2; and L is just a regular language over these plays, accepting some subset of them.

And here is the game. The game is that A will play first, then B is going to play next. And A’s goal is to make the play go into the regular language, L, and B’s goal is that you never hit the regular language. So this is called a safety game in PL, and essentially, modulo... we use something called visibly pushdown games, where things are not regular; we have a stack. But the basic idea is that A is the program and B is the instrumenter. And think about this regular language as the negation of the policy that you want to enforce. So A wants to somehow force you into violating that policy, and B doesn’t want to violate that policy. The winning strategy for B gives you the instrumentation.

Let me give you sort of an idea of how these games are solved. It’s actually not that unintuitive. So if you look at the regular game, it looks something like this, the regular automaton for this. I hope you trust me on that. And this is how it works. You know that once I go into that accepting state I’m done. I’m trying to find a strategy for B such that there’s no winning strategy for A. Yeah?


You’re confusing me here. Why think of a game? [INAUDIBLE] Whereas, I think the attacker in this scenario has committed to [INAUDIBLE] changed the code, based on [INAUDIBLE]. So tell me why I’m wrong?


Okay. So repeat your question, Mike.


Well, so in a game, you’d think a player is making moves and the other players are reacting to it. But in your scenario where [INAUDIBLE] the code, the code has been committed to in advance; you know the whole thing.




And so there’s no reaction on its part as to how your attacker is.


No, no. But the thing is, there are enough code paths in the program. So think about it here also, right? Here also I’m committing that this is the policy structure that I need to commit to. So A and B have to play in this game. So for example, in that case, it’s what you’re saying: when you commit, you know the code structure, so the program... it still has to maintain that code structure, but it can force you into different paths. Do you see what I’m saying? For example, with an “if” statement, I can force you into different branches. So that’s where the strategy comes from. So even if you know the code, you don’t know exactly... there might be multiple paths that violate the policy. That’s where the stuff is coming from.

So, I think that the algorithm for solving this is not that hard; you know that once you hit the accepting state you’re dead, right? A has won. So those are what you call attractor states. That’s also an attractor state, because both of those edges go to the attractor state, so you’re going to keep removing attractor states, and from what remains you can read the strategy off, actually. Right? What you can say here is that if A plays a1, I’m going to play b1, and if A plays a2, I’m going to play either b1 or b2. So the winning strategy for B is this: if A plays a1, then B plays b1, and if A plays a2, then B plays b2. And then you can claim that, given that strategy for B, the combined play of A and B will never hit the automaton.
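The attractor computation just described can be sketched for a finite safety game as follows (my own minimal version for illustration; the paper’s actual setting is visibly pushdown games, which add a stack on top of this, and all state and edge names below are made up):

```python
# Minimal attractor-based solver for a finite safety game. States are split
# into A-states (attacker moves) and B-states (defender moves); `bad` holds
# the accepting / policy-violating states that B must avoid forever.

def attractor(states, a_states, edges, bad):
    """States from which A can FORCE the play into `bad`."""
    attr = set(bad)
    changed = True
    while changed:
        changed = False
        for s in states - attr:
            succs = edges.get(s, [])
            if s in a_states:
                forced = any(t in attr for t in succs)              # A picks a bad move
            else:
                forced = bool(succs) and all(t in attr for t in succs)  # B has no escape
            if forced:
                attr.add(s)
                changed = True
    return attr

def winning_strategy(states, a_states, edges, bad):
    """For each surviving B-state, pick any successor that stays safe."""
    attr = attractor(states, a_states, edges, bad)
    return {s: next(t for t in edges[s] if t not in attr)
            for s in states - a_states - attr if s in edges}

# Tiny game: A moves first from s0; B must then steer around the bad state.
states = {"s0", "p1", "p2", "bad", "ok"}
a_states = {"s0"}
edges = {"s0": ["p1", "p2"], "p1": ["bad", "ok"], "p2": ["ok"]}
print(winning_strategy(states, a_states, edges, bad={"bad"}))
# maps each surviving B-state to a safe move: p1 -> "ok", p2 -> "ok"
```

Reading the instrumentation off the strategy is exactly this last dictionary: for every point the program (A) can reach, it records which primitive the instrumenter (B) should play.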

So essentially what we did is we applied this formalism to Capsicum, the capability-based OS, and we came up with a game. We have reduced it to a game where you can think about the orange ones as the program states, right? The A player. But in this case, it’s the program plus the OS. And all the other ones, like the blue ones, are what the Capsicum primitives give you, and I’m not going to get too much into that. The winning strategy for this game gives us the instrumentation for that program.

And we have applied the same formalism to other things, and all I’m saying is... and I’m not going to get too much into that; this is sort of the winning strategy for it... because I want to get to the results. We were actually able to solve this game very quickly for a lot of the applications that were considered in Capsicum. So the Capsicum guys actually took all these programs and wrote capability-based versions of them, and we were able to come up with those automatically using some of these game formalisms.

Now, one thing... I was saying that these formalisms have kind of languished in the PL community a little bit. To me, intuitively, they make a lot of sense for some other security problems as well. The problem is the complexity of solving these games; the games are actually EXPTIME-complete, it’s pretty hard. So we had to play a lot of tricks, because the games you generate from real programs are huge. So, you know, how do you find these strategies automatically? And essentially, we have applied it to Capsicum, we redid [ph] the Flume one with this game formalism, and we also want to apply it to HiStar.

Now, the other thing I was thinking about is: are there similar problems in networking? Like, if you want to use Nick McKeown’s OpenFlow infrastructure, take some existing insecure protocol and adapt it to this OpenFlow project, can some of these things be used there as well? We have not looked at that; we are looking at hosts [ph].

Now, coming back to something that P______ said and sort of tying it back: what I kind of believe is that if some of these systems, like Flume, Asbestos, HiStar, get used more, especially maybe in a mission-critical setting, a lot of those attack threat landscapes that we talk about, you know, sort of go away. And you know, you see this in the systems they have talked about in their papers. You get programs with very strong security guarantees. So that’s how you tie it back: the good news is that if, let’s say, our stuff gets commoditized and, you know, you take HiStar, you take some of these programs and you apply some of our stuff, you suddenly have a working system that is very secure. You know, that raises the bar, sort of, for a lot of the attackers.

I was talking to somebody from Microsoft, and they said that once they went to stack randomization, or address-based randomization, they saw a huge downturn in the buffer overflow attacks they had been seeing. People have moved to other logic problems, which is different, but at least that vector, at least from what this Microsoft guy told me, they see very few of those as well. So what I’m saying is that if this allows us to use some of these other systems to close a lot of these other attacks, I think that raises the bar quite a bit. And hopefully somebody, maybe not my student, he’s already in his fifth year, can take some of these formalisms and even look at the clean-slate designs on the networking side.

So that’s essentially what I had to talk about: a lot of the techniques that I’m talking about can be used... at least we have used them in the host-based setting... to take some of these systems that provide very strong security primitives, and take programs written for insecure platforms and migrate them to these platforms. That’s... I’m open for questions after this. I think we are at 5:08. You know, if you’re interested... I mean, if there are problems that have a similar flavor, we have a lot of these solver technologies built now. Any questions? I don’t know who was first, so whoever.




Yeah, and that is a problem with the Flume DIFC primitives themselves. So you’re saying that the worker actually wants to send...



Yeah. So I think that this is a problem with the Flume primitives in general, but I think it’s not a problem with the methodology. What you can do is, if you use program dependence graphs, for example, then you can generate constraints where that is not allowed. For example, if there’s a dependence edge from somewhere in the worker to the... sorry, did somebody have a question? Okay. From the worker to the requester, then you will enforce a constraint there as well. But yes, at least some of this instrumentation... all I’m saying is that it’s not fundamental to the methodology, but I think some of those problems do happen in Flume itself.




So you’re saying to actually dynamically look at the information. So, yeah. I think ours will be an over-approximation. I don’t think so, because this is a static technique. So we actually don’t.




Yeah, but then you have... I think there is an exponential number of... yeah, yeah.




Yeah, I think so. So it is true. If you have a path-sensitive analysis... this doesn’t stop you from adding more tags as you go on, but there is a sort of an OS-level question of how slow your program gets. But yeah, there is nothing fundamental in what we do, but in the way we generate constraints, you’re right; I think we’d have to go to a richer formalism to generate constraints. Was there another?




So remember... all I’m saying is that we were sort of limited to the primitives that the Flume guys actually gave us.


Suppose you’re successful [INAUDIBLE].




[INAUDIBLE] What happens now that... would we have to move to...




[INAUDIBLE] If you go back to slide 17 [ph].


Okay. Can I do that easily?


The one takeaway maybe is that we’re in the [INAUDIBLE] something is missing from the [INAUDIBLE].


No, no. Okay, so that was a little bit of hyperbole, but I think all I’m saying is... my view is that, look, these guys, at least if you read the papers, a lot of them say that, look, you can actually provide systems with very strong guarantees if you use the primitives. Now, I’m not saying all of those problems will go away, but at least when you read those papers you get the sense that the bar would be pretty high. For example, think about a system like Proxos, right? Your OpenVPN client, even if it gets compromised, cannot leak too much information, because it doesn’t know. Right?

So all I’m saying is, it raises the bar a lot more; maybe not all the problems will go away, but quite a large fraction, I think, will go away. So an analogy is the Microsoft address space randomization: they ______, at least what I saw from some invited [ph] talk, they hardly see any buffer overflows. Probably legacy; at least they say that a lot of that stuff is in unpatched legacy code. A lot of the new ones they’re seeing are moving up the logic chain. You disagree?


I disagree but we take that ______.


Well, obviously this is Microsoft speak, right? It’s like... but in the... well, at least that...



Well, okay. So this guy was apparently not telling the truth, right? [LAUGHS] But in any case, I think the thing is, even if you just do a thought experiment when you read these things, you see that you can write systems with very strong guarantees, and I think this gives you an automatic way of at least looking at some of the legacy code. There was another question.




The second part was that we didn’t want to rebuild the whole thing for every system that you would consider, like Capsicum, HiStar. We wanted to have one generic framework that applies to a lot of these systems.




Yeah. You still have to give me the semantics of all those primitives and so on, but we wanted to have one algorithm that is able to weave in all this instrumentation for a lot of those systems. And essentially what we showed is that, for a lot of these problems, the problem of weaving in these statements can actually be reduced to solving these safety games in VPGs. No, no, but it’s not like magic; you still have to tell me, like, for a Capsicum primitive, what the heck does this primitive mean? You have to give me those semantics. That doesn’t go away.





That’s absurd way [ph]. I mean, so I think, in the sense that if you make a mistake in the policy, I mean, we will instrument it wrong. Is that what he’s saying? Yeah.


Well, not just that. [INAUDIBLE]


So at least what we found in the examples... I can only give you just... the policies were very small. And now there’s another question: can you have a formal description and then go to our formal logic to express policies? That we haven’t done. But the policies were very... I mean, we’re talking a few lines. Now there is a language design problem: how do you have a policy language that is friendly? We haven’t worked on that, but the policies we found were just a few lines, and very natural kinds of policies. I mean, I can ask my student, but I think that was not the problem. The problem was more essentially what you asked before: sometimes you have to go back and forth when the policy is not right. Because the thing is, what we wanted to do is, for example, when we started doing Capsicum, we wanted to do exactly the programs that those guys did, just to compare. And you know, they didn’t write those policies down. So you have to tease it out through the mailing lists, saying, “Okay, what the heck were you going through when you instrumented tcpdump?”

So I think that is, to me, the bigger problem. The policy language was not that daunting. And we have been talking to the Capsicum guys, but the problem is, right, I might be enforcing the wrong thing. I think that’s the bigger issue. So I think having a maybe more friendly language to express the policies would be the next big step. Our next goal is to actually apply this to more of the different systems; like, we’re looking at HiStar now, we’ve started talking with the MIT guys. Rather than going toward more policy-friendliness... that will come next; we just want to apply it to various other systems. Any other questions?


I think we’re done.




Thank you very much.



Thank you.