Security Typing for JavaScript
Devil in the details.
I’d like to repeat an example (given by Mark Miller in his work on E) of two different ways to copy a file, and the security implications of each. First,
shell$ cp foo.txt bar.txt
This command invokes a copy program that will:
1. recognize foo.txt as a filename.
2. recognize bar.txt as another filename.
3. open the file indicated by (1) and read its bytes.
4. open the file indicated by (2) and write bytes into it.
Clearly, the cp command has been delegated quite a bit of power. Much of this authority comes from the interaction with the file system: it needs to be granted access to read from the first file, and it must be granted access to create and write to the second file.
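As a rough sketch in Node-style JavaScript (assuming the standard fs module, with error handling omitted), the cp approach looks something like this:

// cp-style copy: the program receives *names* and must be trusted with
// the ambient authority to resolve them against the whole file system.
const fs = require("fs");

function copy(srcName, dstName) {
  const bytes = fs.readFileSync(srcName);  // open the file named first and read its bytes
  fs.writeFileSync(dstName, bytes);        // create the file named second and write into it
}

copy("foo.txt", "bar.txt");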
But we can accomplish the same task with another phrasing:
shell$ cat < foo.txt > bar.txt
In contrast to the above, cat needs access only to the two file handles that the shell grants it (namely stdin and stdout). It does not need to interact with the file system in any other way.
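The cat phrasing has a direct analogue in code: a program handed two already-open streams never touches a filename at all. A minimal Node-style sketch (assuming the standard process streams), saved as a hypothetical cat.js:

// cat-style copy: the caller opens the files and hands over only the two
// handles; the program itself never sees a filename or the file system.
process.stdin.pipe(process.stdout);

Running it mirrors the shell example above: node cat.js < foo.txt > bar.txt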
This example becomes enlightening when we recognize that, within any programming language, many phrasings can accomplish the same task. The programmer’s choice of phrasing then becomes a critical aspect of any subsequent analysis. In the example above, even though both programs have the same effect, assessing their security implications yields very different outcomes: in the case of cp we worry greatly over the potential abuse of authority, whereas cat can misuse nothing beyond the two handles it was handed.
Reasoning about programs.
Even though the example above concerns only a trivial command-line task, the same lesson applies when we turn our eyes toward full programs. To feel comfortable running a program, we’d like to be able to analyze it first, to be sure that it doesn’t “go wrong”. In particular, my research focuses on enforcing non-interference, so here “going wrong” means that secret information becomes inferable from publicly observable information.
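To make that concrete, here is a small JavaScript illustration of an implicit flow; getSecretBit and publish are hypothetical stand-ins for a secret source and a public sink:

// Hypothetical stand-ins, so the snippet runs on its own.
const getSecretBit = () => true;        // imagine this reads a secret input
const publish = (v) => console.log(v);  // imagine this output is publicly observable

// The secret is never written to the sink directly, yet an observer
// of the published value learns it from the branch that was taken.
const secret = getSecretBit();  // high (secret) data
let pub = false;                // low (public) data
if (secret) {
  pub = true;                   // control flow copies the secret into pub
}
publish(pub);                   // leaks the secret bit

Non-interference forbids exactly this: varying the secret input must not produce any observable difference in the public output.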
With that aim in mind, many of the information-flow papers that I read treat non-interference as a lemma within a typing system. In order to understand these papers I started reading Pierce’s “Types and Programming Languages”.
In computer science, we use type systems to construct proofs about our programs. In essence, we want to identify and reject programs that might “go wrong”. Imposing types on the data allows automated reasoning about a program’s behavior. However, this ability comes at a cost: well-typed programs are a strict subset of all programs, so some programs that would never actually misbehave are rejected anyway. For example, the program:
if (true) then 5 else false;
would be rejected by a strongly typed language such as ML, because an if-expression must yield the same type in both branches. Languages with much looser type systems, such as JavaScript, usually allow such ill-typed programs and deal with any evaluation errors at runtime.
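JavaScript has no if-expression, but its conditional operator makes the same point: nothing stops the two branches from having different types, and any confusion only surfaces when the code runs:

const x = true ? 5 : false;   // accepted; x happens to be the number 5
console.log(x + 1);           // 6

const y = false ? 5 : false;  // y is the boolean false
console.log(y.toFixed(2));    // TypeError at runtime: y.toFixed is not a function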
If you want to be sure of some property of a program before running it, you must be able to perform a static analysis that proves the property you are interested in. Fortunately for my research, a non-interference proof can be encoded into a type system; any well-typed program will then never “go wrong” in the sense I gave above.
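No such checker ships with JavaScript, so the following is only an illustrative sketch: a dynamic check, written in plain JavaScript, that mirrors the rule a security type system would enforce statically, namely that data labelled High must never reach a Low sink.

// Hypothetical security labels; purely illustrative.
const Low = 0, High = 1;

function labelled(value, label) {
  return { value, label };
}

// A publicly observable sink. A security type system would reject the
// offending call before the program runs; here the same rule is checked at runtime.
function publishToLow(lv) {
  if (lv.label === High) {
    throw new Error("rejected: High data flowing to a Low sink");
  }
  console.log(lv.value);
}

publishToLow(labelled("hello", Low));  // fine
publishToLow(labelled(90000, High));   // rejected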
Unfortunately, I research dynamic languages, in which constructs such as eval, along with several mechanisms of polymorphism, prevent a precise static type check. (To preserve soundness, the analysis would have to be conservative, resulting in the rejection of almost all real-world programs.)
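A small illustration of the eval problem (the readField helper is hypothetical): a static checker sees only a string here; the code it would need to analyze does not exist until runtime.

// A static analysis cannot tell which property is read, or whether that
// property holds secret data; the expression is assembled at runtime.
function readField(obj, name) {
  return eval("obj." + name);
}

In the next post, I shall examine to what extent we might still be able to leverage strong typing (automated proving) to enforce security in dynamic languages.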