June 19, 2011, 7:17 p.m.
posted by ska
Code-Access Security in .NET
The .NET code-access security system works like this: Every time an assembly is loaded into an application domain, the security system determines what permission set should be granted to that assembly. The .NET runtime does this by examining evidence about the assembly. Assemblies are categorized in one or more code groups based on their evidence. Then the policy evaluator determines which permissions to grant based on which code groups the assembly belongs to (just as a role-based system determines which permissions to grant to users based on which user groups they belong to).
When the code runs, if it attempts to perform some task that requires a permission (such as deleting a file), the security system checks to ensure that the code was granted the appropriate permission. If not, it throws an exception, and the attempt fails.
Let's take a look at the out-of-the-box policy. Go to your .NET Framework SDK directory, and run the mscorcfg.msc file to pop up the management console, shown in Figure 1.
Figure 1. The .NET Runtime Security Policy management console.
As you can see, under Runtime Security Policy, there are three policy levels: Enterprise, Machine, and User. (There is also a fourth level, not shown: the Application policy level, which is discussed later in this chapter.) Open the Machine policy level, and you'll see a tree of code groups. Each code group is associated with a particular permission set and evidence condition.
Code that has the My Computer Zone evidence, for example, is granted the FullTrust permission set; code that is installed on your machine is granted permission to do anything. Code that has the LocalIntranet Zone evidence is granted the LocalIntranet permission set, which is rather more restrictive. If you run a managed assembly off a share on your local intranet, it will be able to run, produce user-interface elements, and so on but is not granted the right to modify your security settings or read or write to any file on your disk.
Notice that the root code group in the Machine policy level is All Code; every assembly is a member of this group irrespective of its evidence. If you look at the permission set granted by that group, however, it grants no permissions whatsoever. It denies the right to execute at all. What's up with that?
Within a policy level, the permission set granted to an assembly is (usually) the least-restrictive union of all the permission sets of all the applicable code groups. Code that belongs to the All Code group (which grants nothing) and the LocalIntranet Zone code group (which grants the LocalIntranet permission set) will be granted the permissions from the less-restrictive group.
We say "usually" because there are ways of creating custom policies that enforce rules other than "Take the least-restrictive union." You could create a policy tree with the rule "Take the permission set granted by the first matching code group, and ignore everything else." Policy trees can become quite complex.
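The default within-level rule is easy to model. Here is a minimal Python sketch of it; the code groups, evidence dictionary, and permission names are illustrative stand-ins, not the actual .NET security types:

```python
# One policy level is a list of code groups; each group pairs a
# membership condition with a permission set. The level grants the
# least-restrictive union of the sets from every matching group.

def evaluate_level(code_groups, evidence):
    granted = set()
    for matches, permissions in code_groups:
        if matches(evidence):
            granted |= permissions  # union: less restrictive wins
    return granted

# Hypothetical Machine-level groups: All Code grants nothing, while
# the LocalIntranet zone group grants a modest permission set.
machine_level = [
    (lambda ev: True, set()),                       # All Code
    (lambda ev: ev.get("zone") == "LocalIntranet",  # zone condition
     {"Execute", "UserInterface"}),
]

print(evaluate_level(machine_level, {"zone": "LocalIntranet"}))
```

Code matching both groups gets the union of their grants, which is why the empty All Code grant takes nothing away within a level.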
So far we have seen the All Code group, which does not consider evidence at all, and various zone code groups that consider evidence about where code comes from in a broad sense. Zones describe whether the code comes from the local machine, the local intranet, an explicitly trusted Internet site, an explicitly untrusted Internet site, or an Internet site of unknown trustworthiness.
When we discuss the User policy level, you will see a much more specific kind of location-based evidence; you can create policies that grant permissions if the code is running from specific local or network directories or Web sites.
A close look at the Machine policy level shows two child code groups, subsets of the My Computer Zone code group, that grant full trust to assemblies that are both in the My Computer Zone and strong-named with the Microsoft or ECMA keys. You will learn more about strong-name evidence, and why it should always be in a child code group, later in this chapter.
Finally, there is evidence associated with individual assemblies. Every assembly has a statistically unique hash number associated with it; it is possible to create policies that grant permissions to specific assemblies by checking their hash numbers. Assemblies can also be signed with a publisher certificate (such as a VeriSign code-signing certificate). When the loader attempts to load a publisher-signed assembly, it automatically creates evidence describing the certificate. You could create code groups that grant permissions to all assemblies signed with your internal corporate certificate, for example.
Take a look at the Enterprise policy level shown in Figure 1. Unless your network administrator has set policy on your machine, this policy level should be much simpler than the Machine policy level. It consists of a single code group that matches all code and grants full trust.
But hold on a moment: if the Enterprise policy is "Grant full trust to all code," how does this security system restrict anything whatsoever?
The .NET security system determines the grant set for each policy level (Enterprise, Machine, User, and Application) and actually grants a permission only if it is granted by all four levels.
Setting the Enterprise policy level to "Everything gets full trust" cannot possibly weaken the restrictions of the other three groups. If the Machine policy level refuses to grant, say, permission to access the file system, it does not matter what the other three policy levels grant; that permission will not be granted to the assembly.
It works the other way, too. Suppose that the Enterprise policy level states "Grant full trust to all assemblies except for this known-to-be-hostile Trojan horse assembly." If you accidentally install the Trojan horse on your machine, the Machine policy level will grant full trust, but the Machine policy level cannot weaken the Enterprise policy level. Every policy level must agree to grant a permission for it to be granted, so the evil code will not run.
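Across levels, the combining rule flips from union to intersection: a permission is granted only when every level grants it. A small Python sketch, with illustrative permission names standing in for real .NET permission sets:

```python
from functools import reduce

def resolve_policy(level_grants):
    # Intersect the grant sets of all policy levels; any one level
    # can veto a permission, and none can add one back.
    return reduce(lambda acc, level: acc & level, level_grants)

enterprise = {"FileIO", "UI", "Execute"}  # "full trust," in miniature
machine    = {"UI", "Execute"}            # LocalIntranet-style grant
user       = {"FileIO", "UI", "Execute"}
appdomain  = {"FileIO", "UI", "Execute"}

final = resolve_policy([enterprise, machine, user, appdomain])
print(final)  # Machine withheld FileIO, so nobody gets it
```

Because the Machine level withheld the file-I/O permission, the generous grants of the other three levels cannot restore it.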
Take a look at your User policy level while logged in to a machine where you have been creating VSTO 2005 projects with Visual Studio. The contents of the User policy level, shown in Figure 2, might be a little bit surprising.
Figure 2. The User policy level. VSTO automatically creates policy so that VSTO projects are allowed to run on your development machine.
At the root, we have an All Code group that grants full trust, just like the Enterprise level. In keeping with the general rule that a policy level grants the least-restrictive union of permissions, it would seem that any further code groups in the policy tree for this level would be superfluous. Yet there is a child code group for VSTO projects: also an All Code group, although it grants no permissions. It in turn has a code group for every project you have created, which again is an All Code group that grants no permissions. (The code group is given a GUID as its name to ensure that the group is unique no matter how many projects you create.)
The project-level code groups have URL-based child groups for every build configuration you have built that grant only execution permission (nothing else) to all code in the named directory. And those have children that grant full trust to the specific customization assemblies.
What the heck is going on here? It looks like Visual Studio has gone to great lengths to ensure that the User policy level explicitly grants full trust to your customization assemblies. Yet the User policy level's root code group already grants full trust. How is this not redundant?
There is a good reason for this, but before we get to that, we should talk about full trust versus partial trust.
Full Trust and Partial Trust
As anyone who has ever been infected by a Word or Excel macro virus knows, the code behind a customized document does not always do what you want, and you do not always know what it does. Fortunately, that is exactly the scenario that code-access security systems were invented to handle. There is a problem with code-access security in Office customizations, however. There is no way to partially trust code that accesses the Word and Excel object models. Trust is all or nothing.
The Internet Explorer object model was specifically designed from day one so that code running inside the Web browser was in a "sandbox." Code can run, but it is heavily restricted. The browser's objects inherently cannot do dangerous things such as write an arbitrary file or change your registry settings. Code is partially trusted: trusted enough to run but not trusted enough to do anything particularly dangerous. The Word and Excel object models, by contrast, are inherently powerful. They manipulate potentially sensitive data loaded from and saved to arbitrary files. These object models were designed to be called only by fully trusted code. Therefore, when a VSTO customization assembly is loaded, it must be granted full trust to run at all.
This fact has serious implications for the application domain security policy created by the VSTO runtime when a customization starts.
Why have policy at all? For the same reason that stores have merchandise-exchange policies, governments have foreign policies, and parents have bedtime policies: Policy is a tool that enables us to make thoughtful decisions ahead of time instead of having to make decisions on a case-by-case basis. The Enterprise, Machine, and User policy levels allow network administrators, machine administrators, and machine users to make security decisions independently ahead of time so that the .NET runtime can enforce those decisions without user interaction.
Decisions about policy can also be made by application domains (or AppDomains, for short). Because only those permissions granted by all four policy levels are actually granted to the assembly, the AppDomain policy level can strengthen the overall security policy by requiring more stringent evidence than the other policy levels.
We know that VSTO customizations must be granted full trust. By default, the Enterprise and User policy levels grant full trust to all assemblies regardless of evidence. The Machine policy level grants full trust to all assemblies installed on the local machine. In the absence of an AppDomain policy level, a VSTO customization copied to your local machine is granted full trust.
That seems like a reasonable decision for an application that you have deliberately installed on your local machine. Users typically install applications that they trust (applications that perform as expected and do what users want them to do), so it makes sense to grant full trust implicitly to assemblies in the Local Machine Zone.
But spreadsheets are not usually thought of as applications. Do users realize that by copying a customized document to their machine, they are essentially installing an application that will then be fully trusted, capable of doing anything that the users themselves can do? Probably not! Users do not tend to think of customized documents as applications; they are much less careful about copying random spreadsheets to their machines than they are about copying random executables to their machines.
Good security policies take typical usage scenarios into account. Therefore, the VSTO runtime tightens the overall security policy by creating an AppDomain policy level that grants all the permissions of the other three policy levels except for those permissions that would have been granted solely on the basis of membership in either an All Code code group or a zone code group. All other permissions granted because of URL evidence, certificates, strong names, and so on are honored.
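One way to picture the VSTO AppDomain level is as a rerun of policy evaluation that skips All Code and zone groups. The sketch below is a simplified Python model, not the actual VSTO runtime logic; the group `kind` strings and the evidence dictionary are invented for illustration:

```python
# Each code group is a (kind, matches, permissions) triple. The
# AppDomain level ignores grants whose only basis is an All Code or
# zone membership condition; URL, strong-name, hash, and publisher
# groups still count toward the grant.

IGNORED_KINDS = {"AllCode", "Zone"}

def appdomain_grant(levels, evidence):
    granted = set()
    for groups in levels:
        for kind, matches, permissions in groups:
            if kind in IGNORED_KINDS:
                continue  # membership here alone proves too little
            if matches(evidence):
                granted |= permissions
    return granted

# A User-level URL group trusting one project directory survives the
# filter, even though the All Code group is ignored.
user_level = [
    ("AllCode", lambda ev: True, {"FullTrust"}),
    ("Url", lambda ev: ev["url"].startswith("file://c:/projects/"),
     {"FullTrust"}),
]

print(appdomain_grant([user_level], {"url": "file://c:/projects/app.dll"}))
```

An assembly trusted only because of its zone ends up with an empty AppDomain grant set, and therefore with no final grant at all.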
Let's take a look at an example.
Resolving VSTO Policy
Consider a VSTO customization assembly that you have just built on your development machine that you want to run. The customization assembly must be granted full trust by all four policy levels; otherwise, it will not run. The Enterprise and User policy levels grant full trust to all code. The Machine policy level grants full trust to code from the My Computer Zone. Three of the four levels have granted full trust.
What about the AppDomain policy level? It grants the same permissions as the other three policy levels except for those permissions granted solely by All Code and zone code groups. The Enterprise policy level consists of a single All Code code group, so it is ignored by the AppDomain policy level. The Machine policy level consists only of zone code groups, plus two strong-name code groups for the Microsoft and ECMA strong names. Unless you happen to work for Microsoft and have access to the code-signing hardware, it is likely that those code groups do not apply, so effectively, the AppDomain policy level is going to ignore all of these, too. Things are not looking good; the AppDomain policy has found nothing it can use to grant full trust yet. If the User policy level also consists solely of an All Code code group, as it does on a clean machine, the customization will not run.
But the User policy level on your development machine has a code group that is not ignored by the AppDomain policy level; it has a URL code group that explicitly trusts the customization assembly based on its path. The AppDomain policy sees this and grants full trust to the assembly. Because all four policy levels have granted full trust, the code runs.
Now it should be clear why Visual Studio modified your User security policy and added a seemingly redundant code group for the assembly. The VSTO AppDomain policy level requires that the customization assembly not only be fully trusted, but also be fully trusted for some better reason than "We trust all code" or "We trust all code installed on the local machine." Therefore, there has to be some Enterprise, Machine, or User code group that grants full trust on the basis of some stronger evidence.
Because the VSTO AppDomain policy level refuses to grant full trust on the basis of zone alone, you're pretty much forced to come up with a suitable policy to describe how you want the security system to treat VSTO customization assemblies. Take off your software-developer hat for a moment, and think like an administrator setting security policy for an enterprise. Let's go through a few typical security policies that you might use to ensure that customized Word and Excel documents work in your organization while blocking potentially hostile customizations from attackers out on the Internet. After discussing the pros and cons of each, we talk about how to roll out security policy over an enterprise.
One of the most straightforward ways to ensure that customized Word and Excel documents can run is to set a policy stating that customization assemblies that run from a particular place are fully trusted. You may have Web servers or file shares on your network where write access is restricted to trusted individuals; if the customization is there, that is pretty good evidence that it is trustworthy.
You can set an Enterprise-level policy that states that customization assemblies at a particular location are fully trusted by right-clicking the All Code code group in the Enterprise policy level and selecting New from the menu. Doing so causes the Create Code Group dialog box to appear, as shown in Figure 3.
Figure 3. The first step of the Create Code Group dialog box.
Enter a name for the code group and a description to help others understand what the code group is intended to do. Then click the Next button. The Create Code Group dialog box, shown in Figure 4, will appear. Choose a URL membership condition from the condition-type drop-down list. For the URL, give the location to which the VSTO customization assembly will be deployed. In Figure 4, we are matching any customization assemblies in the Web folder http://accounting/customizations because we used the * wildcard in the URL.
Figure 4. The second step of the Create Code Group dialog box.
After you have chosen the URL condition type and entered a URL, click the Next button. The third step of the Create Code Group dialog box displays, as shown in Figure 5. Select the Use Existing Permission Set radio button, and select FullTrust as the permission set to be granted to the code group.
Figure 5. The third step of the Create Code Group dialog box.
But hold on a moment. Clearly, this is not going to work. Remember, the policy evaluator grants a permission only if it is granted by all four policy levels. When a user runs this customization, the Enterprise and User policy levels will grant full trust because of their root All Code code group. The AppDomain policy level will grant full trust because the Enterprise policy level contains a URL code group that grants full trust. But what about the Machine policy level? It will take one look at that thing, classify it as being from the LocalIntranet Zone, and grant it the LocalIntranet permission set. Because the customization assembly requires full trust, it will not run.
We have a problem here. You could, of course, solve this problem by setting the policy at the Machine level rather than the Enterprise level. Or you could set it at both levels. In the system described so far, however, policy levels can only add further restrictions; it seems sensible that an enterprise administrator should be able to override the restrictions of a machine administrator. We need a way for a policy level to say "Grant full trust even if another policy level disagrees."
Fortunately, we can tweak the code group to achieve this. Right-click the code group you just created, and choose Properties. Take a look at the check boxes at the bottom of the Properties dialog box (see Figure 6).
Figure 6. The Properties dialog box for the AccDeptDocuments code group.
Checking the first check box makes this an exclusive code group; the regular rules about combining the permission sets of different code groups to determine the grant set for a particular policy level cease to apply. Checking the second check box makes this a level-final code group; lower policy levels are ignored if the code's evidence matches the membership condition for this group.
What does lower mean? The Enterprise policy level is the highest, followed by Machine and User; the AppDomain policy level is the lowest.
Creating a level-final code group considerably weakens your security policy because it prevents lower policy levels from enforcing further restrictions. Always be careful when setting security policy, but be particularly careful when creating level-final groups.
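The effect of the level-final flag can be modeled with a short Python sketch. As elsewhere, the group records and permission names here are illustrative inventions, not the real .NET policy objects:

```python
def resolve_with_level_final(levels, evidence):
    # levels are ordered highest to lowest: Enterprise, Machine,
    # User, AppDomain. Each group is (matches, permissions, level_final).
    final = None
    for groups in levels:
        granted = set()
        stop_here = False
        for matches, permissions, level_final in groups:
            if matches(evidence):
                granted |= permissions
                stop_here = stop_here or level_final
        final = granted if final is None else final & granted
        if stop_here:
            break  # a matching level-final group mutes lower levels
    return final

# A level-final Enterprise group trusting one Web folder overrides the
# Machine level's more restrictive catch-all grant.
enterprise = [(lambda ev: ev["url"].startswith("http://accounting/"),
               {"FullTrust"}, True)]
machine = [(lambda ev: True, {"Execute"}, False)]

print(resolve_with_level_final(
    [enterprise, machine],
    {"url": "http://accounting/customizations/report.dll"}))
```

Code that does not match the level-final group falls through to the normal intersection rule and gets only what every level agrees to.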
Location-based policies are reasonably flexible. It is easy to deploy new documents to the trusted network locations and have them automatically be fully trusted by Enterprise policy. But there is always a tradeoff between ease of use and security; the drawback of location-based policies is that if some untrustworthy person does manage to install a hostile customization on a trusted server, it will run with full permissions on user machines. The next few sections show how to lock down the set of valid customizations even further to mitigate such vulnerabilities.
Some problems may also arise if multiple users all try to run the customized document from the same place. If users typically download documents to their own computers and use them there, a more local URL policy may be in order. Instead of trusting a Web site in the policy, enter a URL such as file://c:\MyCustomizedDocuments\* or another local directory. Then users can download trusted customized documents to that folder and run them, while untrusted customizations copied to other locations are prevented from running.
In that scenario, it may be more appropriate to roll out User or Machine policy to allow individual users or machine administrators to change the locations of their trusted-documents folder.
Strong-Name Policies
Strong names allow you to grant full trust to only those assemblies that your organization (or other organizations that you trust) created. Confusion abounds about what strong names are, what they are for, and how they work.
Back in the old days of "DLL hell," dynamically linked libraries were loaded based on filename and location. This approach has an inherent, fundamental security problem: Attackers can name their evil DLLs system32.dll or oleaut32.dll, too. Attackers could try to trick you into loading their code rather than the code that you want to load by taking advantage of this weakness in the naming system.
The traditional DLL system suffers from other technical problems, such as versioning. When you load oleaut32.dll, which version are you getting? Writing the code to figure it out is not rocket science, but it is not as easy as it could be.
Strong names mitigate these weaknesses. The purpose of a strong name is to give every assembly a unique, hard-to-forge name that clearly identifies its name, version, and author. When you load an assembly based on its strong name, you have extremely good evidence that you are actually loading the code you expect to be loading, not some hostile version that some other author managed to slip onto your machine.
Because strong names identify the customization's author, you could set a policy that states that any code by a particular author is fully trusted. Suppose that you have a strong-named assembly, and you want to set a policy that says that all assemblies by this author are to be fully trusted. Again, create a new code group as a child of the location code group created before, but this time select the Strong Name membership condition, as shown in Figure 7.
Import the public key from the strong-named assembly, and you have created a policy that trusts all assemblies by that author. (As the dialog box notes, you can further strengthen the policy by trusting only certain names or even only certain versions.) But what is a public key, and what does it have to do with the code's author?
Strong naming works by using public-key cryptography. The mathematical details of how public-key cryptosystems work would take us far off topic, but briefly, these systems work something like this: An author generates two keys, appropriately called the public key and the private key. Assemblies can be signed with the private key, and the signature can be verified with the public key.
Therefore, if you have a public key and an assembly, you can determine whether the assembly was signed with the private key. Then you know that the person who signed the assembly possessed the private key. If you believe that the author associated with that public key was not careless with the private key, you have good evidence that the assembly in question really was signed by the author.
The signing process is highly tamper-resistant. Changing so much as a single bit of the assembly invalidates the signature. Therefore, you also have good evidence that the assembly has not been changed post-release by hostile attackers out to get you.
You may wonder why we recommended that you create your Strong Name code group as a child code group of the location-based code group discussed earlier. And come to think of it, in the out-of-the-box Machine policy level, the Microsoft Strong Name code group is a child of the My Computer Zone code group. Why is that? Surely if having a strong name is sufficient to grant full trust, it should be sufficient no matter where the code came from.
Code groups with membership conditions based on some fact about the assembly itself should always be children of location-based code groups. Here is why: Suppose that you trust Foo Corporation. For the sake of argument, we assume that this trust is justified; Foo Corporation really is not hostile toward you. Consider what would happen if your Enterprise policy level grants assemblies signed with Foo Corporation's key full trust, period, with a level-final code group. You impose no additional location-based requirement whatsoever.
Foo Corporation releases version 1.0 of its FooSoft library, and no matter where foosoft.dll is located, all members of your enterprise fully trust it. Foo Corporation releases version 2.0, and then version 3.0, and so on. Everything is fine for years.
But one day, some clever and evil person discovers a security hole in version 1.0. The security hole allows partially trusted code (say, code from a low-trust zone such as the Internet) to take advantage of FooSoft 1.0's fully trusted status and lure it into using its powers for evil.
Even if that flaw does not exist in the more recent versions, you are now vulnerable to it. Your policy says to trust this code no matter where it is, no matter what version it is. Evil people could put it up on Web sites from now until forever and write partially trusted code that takes advantage of the security hole, and you can do nothing about it short of rolling out new policy.
If, on the other hand, you predicate fully trusting FooSoft software upon the software being in a certain location, that scopes the potential attack surface to that location alone, not the entire Internet. All you have to do to mitigate the problem is remove the offending code from that location, and you are done.
That explains why the Microsoft Strong Name code group is a child of the My Computer Zone code group. Should an assembly with Microsoft's strong name ever be found to contain a security flaw, the vulnerability could be mitigated by rolling out a patch to all affected users. If the out-of-the-box policy were "Trust all code signed by Microsoft, no matter where it is," there would be no way to mitigate this vulnerability at all; the flawed code would be trusted forever, no matter what dodgy Web site hosts it.
This best practice for strong-name code groups also applies to other membership conditions that consider only facts about the assembly itself, such as the hash and publisher certificate membership conditions.
Now that we have a child code group that grants full trust to code that is both strong-named and in a trusted location, we can reduce the permission set granted by the outer "location" code group to nothing. That way, only code that is both strong-named and in the correct location will run.
So far, we have been talking about the administrative problem of trusting a strong-named assembly after you have one. What about the development problem of creating the strong-named assembly in the first place? The process entails four steps:
1. Designate a signing authority.
2. Create a key pair.
3. Developers delay-sign the assembly.
4. The signing authority really signs the assembly.
Let's take a look at each of these steps in detail.
Designate a Signing Authority
A strong name that matches a particular public key can be produced by anyone who has the private key. Therefore, the best way to ensure that only your organization can produce assemblies signed with your private key is to keep the private key secret. Designate a small number (preferably one) of highly trusted people in your organization as signing authorities, and make sure that they are the only people who have access to the private-key file.
Create a Key Pair
When you need a key pair for your organization, the signing authority should create a private-key file to keep to itself and a public-key file for wide distribution. The strong-name key generation utility is sn.exe, and it is located in the bin directory of your .NET Framework SDK:
> sn.exe -k private.snk
Microsoft (R) .NET Framework Strong Name Utility Version 2.0
Copyright (C) Microsoft Corporation. All rights reserved.
Key pair written to private.snk

> sn.exe -p private.snk public.snk
Microsoft (R) .NET Framework Strong Name Utility Version 2.0
Copyright (C) Microsoft Corporation. All rights reserved.
Public key written to public.snk
The private.snk file contains both the public and private keys; the public.snk file contains only the public key. Do whatever is necessary to secure the private.snk file: Burn it to a CD-ROM, and put it in a safety deposit box, for example. The public.snk file is public. You can e-mail it to all your developers, publish it on the Internet, whatever you want. You want the public key to be widely known, because that is how people are going to identify your organization as the author of a given strong-named assembly.
Developers Delay-Sign the Assembly
Developers working on the customization in Visual Studio will automatically get their User policy level updated so that the assembly that they generate is fully trusted. But what if they want to test the assembly in a more realistic user scenario, where there is unlikely to be a User-level policy that grants full trust to this specific customization assembly? If users are going to trust the code because it is strong-named, developers and testers need to make sure that they can run their tests in such an environment.
But you probably do not want to make every developer a signing authority; the more people you share a secret with, the more likely that one of them will be careless. And you do not want the signing authority to sign off on every build every single day, because prerelease code might contain security flaws. If signed-but-flawed code gets out into the wild, you might have a serious and expensive patching problem on your hands.
You can wriggle out of this dilemma in two ways. The first is to create a second key pair for a "testing purposes only" strong name for which every developer can be a signing authority. Your test team can trust the test strong name, making the tests more realistic. Because it is unlikely that customers ever will trust the test-only public key, there is no worry that signed-but-buggy prerelease versions that escape your control will need to be patched.
That is considerably better than real-signing every daily build, but we can do better still; another option is to delay-sign the assembly. When the signing authority signs the assembly, the public key and the private-key-produced signature are embedded in the assembly; the loader reads the public key and ensures that it verifies the signature. By contrast, when a developer delay-signs the assembly, the public key and a fake signature are embedded in the assembly; the developer does not have the private key, and therefore the signature is not valid.
To delay-sign a customization, right-click the project in Solution Explorer, and select Properties. In the Properties pane, click Signing and then choose the public-key file, as shown in Figure 8.
Figure 8. Delay-signing a customization.
If the signature is invalid, won't the loader detect that the strong name is invalid? Yes. Therefore, developers and testers can set their development and test machines to have a special policy that says "Skip signature validation on a particular assembly":
> sn.exe -Vr ExpenseReporting.DLL
Skipping signature validation on developer and test machines makes those machines vulnerable. If an attacker can deduce what the name of your customization is and somehow trick a developer into running that code, the hostile code will then be fully trusted. Developers and testers should be very careful to not expose themselves to potentially hostile code while they have signature verification turned off. Turn it back on as soon as testing is done.
You can turn signature validation back on with
> sn.exe -Vu ExpenseReporting.DLL
or use sn.exe -Vx to delete all "skip validation" policies.
Really Sign the Assembly
Finally, when you have completed development and are ready to ship the assembly to customers, you can send the delay-signed assembly to the signing authority. The signing authority has access to the file containing both the private and public keys:
> sn.exe -R ExpenseReporting.DLL private.snk
One more thing about strong names and then we'll move on. A frequently asked question about strong names is "What's the difference between a public key and a public-key token?"
The problem with public keys is that they are a little bit unwieldy. The Microsoft public key, for example, when written out in hexadecimal, is as follows:
002400000480000094000000060200000024000052534131000400000100010 007D1FA57C4AED9F0A32E84AA0FAEFD0DE9E8FD6AEC8F87FB03766C834C9992 1EB23BE79AD9D5DCC1DD9AD236132102900B723CF980957FC4E177108FC6077 74F29E8320E92EA05ECE4E821C0A5EFE8F1645C4C0C93C1AB99285D622CAA65 2C1DFAD63D745D6F2DE5F17E5EAF0FC4963D261C8A12436518206DC093344D5 AD293
That's a bit of a mouthful. It is easier to say "I read Hamlet last Tuesday and quite enjoyed it" than "I read a play that goes like this: Bernardo says, 'Who's there?'" and to finish four hours later with "'Go, bid the soldiers shoot,' last Tuesday and quite enjoyed it."
Similarly, if you want to talk about a public key without writing the whole thing out, you can use the public-key token. The public-key token corresponding to the public key above is b03f5f7f11d50a3a, which takes up a lot less space. Note, however, that just as the title Hamlet tells you nothing about the action of the play, the public-key token tells you nothing about the contents of the public key. It is just a useful, statistically guaranteed-unique 64-bit integer that identifies a particular public key.
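The derivation of the token from the key is mechanical: a .NET public-key token is the last 8 bytes of the SHA-1 hash of the public-key blob, written in reverse byte order. A quick Python sketch; the sample bytes below are made up and are not a real key:

```python
import hashlib

def public_key_token(public_key_blob: bytes) -> str:
    # Token = last 8 bytes of SHA-1(public-key blob), reversed,
    # rendered as lowercase hex.
    digest = hashlib.sha1(public_key_blob).digest()
    return digest[-8:][::-1].hex()

# Illustrative input only; a real blob comes from a .snk file or the
# assembly's metadata.
sample = bytes.fromhex("00240000048000009400000006020000")
print(public_key_token(sample))  # 16 hex digits naming the key
```

Like the Hamlet analogy, the hash is one-way: you can check that a key matches a token, but you cannot recover the key from the token.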
Public-key tokens usually are used when you write out a strong name. The strong name for the VSTO 2005 runtime, for example, is this:
Microsoft.VisualStudio.Tools.Applications.Runtime, Version=8.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a, ProcessorArchitecture=MSIL