
The Future of Secure Dynamic Program Partitioning

In this section we briefly discuss where information flow research must go before it can enter commercial use.


In recent years the National Security Agency has developed a version of the Linux operating system with mandatory access control [BL73] support, called SELinux (Security Enhanced Linux). This shows a rising trend toward improving the inherently flawed execution platforms currently in use. By introducing SELinux, the IT community has shown a will to rethink operating system design by introducing security primitives, in this case Mandatory Access Control. The next important step could be a version of Linux whose kernel supports information flow policies, like the Decentralized Label Model. Alternatively, a virtual machine with information flow support, in the style of the Java Virtual Machine, could be constructed [vm06].

If such systems were developed, it would be a fundamental change to how we approach security.

Introducing secure information flow in distributed systems would be another milestone. This would finally achieve the objective of protecting people's sensitive data online.

It is clear that many obstacles still remain, but current security approaches suffer from more and more flaws as complexity increases [LSM+98]. Perhaps we need a fundamental change in the way we address security. Taking a low-level approach, like Secure Information Flow, could be the answer.

Chapter 10

Conclusion

At the end of the day, the goals are simple: safety and security.

– Jodi Rell

Applying Secure Information Flow to distributed systems has some promising perspectives. Information flow policies will allow users to better protect their data in distributed systems, as these policies let users control access, integrity, and propagation.

The foundation of this work is Secure Program Partitioning [ZZNM01], a technique for distributing security-typed programs on the network while obeying the information flow policies of the data and the trust relations of the principals. Secure Program Partitioning, however, does not consider dynamic networks, and has a simple trust model that is not adequately suited for the complex trust relations of many distributed systems.

Hansen and Probst addressed the first issue in [HP05]. This work has been built upon further by developing a fully functioning framework for handling a dynamic network. The second issue, that of developing a suitable trust model, was the main focus of this thesis. By introducing recommended and partial (that is, probabilistic) trust, we are better able to protect the users and express realistic trust scenarios in a dynamic network.
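The probabilistic flavour of this trust model can be illustrated with a small sketch. The multiplicative discounting rule and all class and method names below are illustrative assumptions, not the thesis's exact definitions:

```java
// Sketch of partial (probabilistic) and recommended trust. Trust values lie
// in [0,1]; recommended trust is derived by discounting a recommender's
// opinion with the trust placed in the recommender. The multiplicative rule
// used here is a common probabilistic-trust convention, assumed for
// illustration only.
public class ProbabilisticTrust {

    // Partial trust: the probability that a principal behaves well.
    public static double partial(double probability) {
        if (probability < 0.0 || probability > 1.0)
            throw new IllegalArgumentException("trust must be in [0,1]");
        return probability;
    }

    // Recommended trust: Alice's derived trust in Carol via recommender Bob
    // is Alice's trust in Bob multiplied by Bob's trust in Carol.
    public static double recommended(double trustInRecommender,
                                     double recommendation) {
        return partial(trustInRecommender) * partial(recommendation);
    }

    public static void main(String[] args) {
        // Alice trusts Bob at 0.8; Bob recommends Carol at 0.9.
        System.out.printf("%.2f%n", recommended(0.8, 0.9)); // prints 0.72
    }
}
```

Because derived trust can only shrink under this rule, a long recommendation chain yields low confidence, which matches the intuition that indirect trust is weaker than direct trust.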

Another central issue in this thesis has been to resolve the ambiguity of splits.

In most cases several splits exist for a given program and trust model. In the framework, the optimization component selects the split that will be used. Two optimization components have been presented in this thesis, both utilizing the probabilistic trust model.

• Assign each data item or statement to the principal in whom the stakeholders have the highest confidence.

• The other method involves calculating a metric that captures the dependencies and nature of a data leak. The program parts are then assigned to the principals that yield the optimal value for the metric.

These two methods purely consider the security of the split. Alternatively, a user might introduce his or her own optimization method, which would optimize performance.
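The first of the two methods can be sketched as follows. The class and method names, and the use of a product over independent stakeholder trust values, are illustrative assumptions rather than the framework's actual API:

```java
import java.util.*;

// Sketch of the first optimization method: assign a program part to the
// candidate principal in whom the stakeholders collectively have the highest
// confidence. Combining stakeholder trust as a product assumes independent
// probabilistic trust values; this is an illustrative choice, not the
// thesis's exact implementation.
public class ConfidenceSplit {

    // trust.get(stakeholder).get(host) = probabilistic trust in [0,1]
    public static String assignToMostTrusted(
            Set<String> stakeholders,
            Set<String> candidates,
            Map<String, Map<String, Double>> trust) {
        String best = null;
        double bestScore = -1.0;
        for (String host : candidates) {
            double score = 1.0;
            for (String s : stakeholders) {
                score *= trust.getOrDefault(s, Map.of())
                              .getOrDefault(host, 0.0);
            }
            if (score > bestScore) {
                bestScore = score;
                best = host;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        Map<String, Map<String, Double>> trust = Map.of(
            "alice", Map.of("hostA", 0.9, "hostB", 0.6),
            "bob",   Map.of("hostA", 0.7, "hostB", 0.8));
        // hostA scores 0.9 * 0.7 = 0.63; hostB scores 0.6 * 0.8 = 0.48.
        System.out.println(assignToMostTrusted(
            Set.of("alice", "bob"), Set.of("hostA", "hostB"), trust));
    }
}
```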

The framework is parameterized with both the trust model and the optimization component. Compared to the original framework (Zdancewic et al.'s Secure Program Partitioning), this gives a higher level of flexibility. In our framework, applications can apply their own custom trust models and optimization criteria, and can thereby respond to the needs of the application's specific domain. In our view, this is a key factor if the Secure Program Partitioning approach is to be successful.
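This parameterization can be pictured as two pluggable interfaces consumed by the partitioner. The interface and method names below are illustrative assumptions, not the actual sdpp API:

```java
import java.util.List;

// Sketch of a framework parameterized with both a trust model and an
// optimization component. All names are illustrative assumptions.
interface TrustModel {
    // Probabilistic trust of a stakeholder in a host, in [0,1].
    double trust(String stakeholder, String host);
}

interface Optimizer {
    // Select one split (an assignment of program parts to hosts)
    // among the valid candidates, guided by the trust model.
    List<String> selectSplit(List<List<String>> candidateSplits,
                             TrustModel trustModel);
}

class Partitioner {
    private final TrustModel trustModel;
    private final Optimizer optimizer;

    Partitioner(TrustModel trustModel, Optimizer optimizer) {
        this.trustModel = trustModel;
        this.optimizer = optimizer;
    }

    // The partitioner itself is agnostic to how trust is modelled and how
    // the split is chosen; both are supplied by the application.
    List<String> partition(List<List<String>> candidateSplits) {
        return optimizer.selectSplit(candidateSplits, trustModel);
    }
}
```

An application that only cares about performance could plug in an `Optimizer` that ignores the trust model entirely, while a security-focused application supplies a probabilistic trust model and a confidence-maximizing optimizer; the partitioner code stays unchanged.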

The proposed concepts have been shown to work through the implementation of a prototype. Using the prototype, we are able to illustrate the capabilities of our framework.

In this thesis, a few examples have been investigated:

• Protecting credit card data when shopping online.

• Protecting personal data when using an insurance quote web service.

• Securely transferring data in a scenario of mutual distrust.


These cases illustrate how the proposed framework, while supporting the original functionality, is able to handle dynamic networks. Additionally, the examples have shown how the framework is able to handle different trust models and optimization algorithms.

Appendix A

Definition of Terms

Confidence In probability theory, the conditional probability that a certain event, or series of events, will happen.

Confidentiality Used in connection with trust. Having trust in a person's confidentiality is having trust in his or her ability to protect your confidential data.

Distributed System Decentralized system which uses multiple hosts connected by a network to perform computations. In this definition, distributed systems do not necessarily use parallel computation.

Erasure Policies Technique for automatically making data inaccessible when certain conditions are fulfilled. See Section 5.9.

Integrity Used in connection with trust. Having trust in a person's integrity is having trust in his or her ability not to corrupt the data.

Metric Refers to a specific metric for evaluating program assignments. See Section 5.8.2.

Principal Entity in the trust graph. Includes persons, groups of users, and processes.

SDPP Secure Dynamic Program Partitioning.

sflow Simple, sequential language with information flow support, used throughout this thesis. See Chapter 4.

SPP Secure Program Partitioning.

Trust Graph Data structure which contains all trust relations.

Appendix B

The sdpp Package

This appendix contains a description of the Java packages included in the framework. The appendix is intended to give a quick overview of the functionality of each class and, using UML diagrams, to illustrate how each class fits into the context.