Computational logic and logic programming in particular can support rich reasoning about programs, proofs, types, and specifications.
Recent breakthroughs in mobile code safety and program verification have made these tools even more relevant, especially in the areas of meta-logic proof theory, proof search, static analysis and abstract interpretation, and model checking, even when the object code involved is far from declarative. Components that can perform such reasoning are needed, and the logic programming community is well positioned to provide them.
Especially promising has been ground-breaking work on proof-carrying code, a development that shows signs of having widespread applications and has brought a surprising number of new challenges to which computational logic can potentially contribute. It is worth pointing out that the earliest papers in the field made explicit mention of the possible relevance of formalisms such as Lambda Prolog and Elf for all phases of the enterprise: generation of verification conditions, formalization of proofs, manipulation of proofs, and certificate checking. Since then there have been proposals to combine these approaches with other logic programming frameworks equipped with tools for static analysis and abstract interpretation. While other programming paradigms may also be relevant here, logic programming may have an important edge, among other reasons because its basis in declarative principles has led to the development of sophisticated static analysis and abstract interpretation tools.
This workshop is aimed at using computational logic in general, and logic programming in particular, to support reasoning about security, safety, correctness, and mobile code. It aims to bring together researchers in logic programming working on mobile code safety, security, program analysis, and correctness, as well as those in related areas of automated deduction (HOL, Coq, NuPRL, Isabelle, etc.). Relevant topics include (but are not limited to) the following:
Submitted papers must be in PostScript or PDF, no longer than 10 pages in LNCS format, and will be presented in approximately 20 minutes. They must consist of original, relevant and previously unpublished research results related to any of the topics of the workshop.
We also invite submissions of brief expositions of work in progress for a poster session. These papers should be no longer than 4 pages.
At least one author of each paper must register for the conference before the early registration deadline. The acceptance of the paper commits one of the authors to its presentation at the conference.
To submit a paper, send it by electronic mail to the workshop mailbox. You should receive a confirmation message.
If there is sufficient interest, post-workshop proceedings will be considered.
Information on registration is available on the ICLP website.
The early registration deadline is 8 August 2005.
MoVeLog is a one-day workshop and will take place on 5 October 2005. The programme is as follows:
**09:30**
How does one guarantee that mobile code is safe, or that a protocol is secure? The difficulty, especially in the latter case, of pinning down requirements, assumptions about the environment and the nature of potential attackers, or even of defining what security itself means, has led to a profusion of formal systems for security analysis over the past fifteen years. Modal logics have been used in an effort to capture such elusive notions as the evolving beliefs of the principals; process calculi of various sorts have been used to formalize what a run of a protocol is and to define the space in which principals and intruders operate; and various predicate calculi have been introduced to reason about what they can achieve. State space descriptions emerging from such trace-based approaches have been analyzed using model checkers and Büchi automata. Multiset rewriting systems, as well as the syntax and proof theory of a number of new formal systems, have been proposed in an effort to make absolutely precise what is meant by a protocol notation and the kind of information exchange that takes place during a run of a protocol.

All of these approaches have also had a number of further aims, among them defining a way of proving systems secure and, ultimately, developing efficient tools for deciding the question, at least in simple cases (the general case being undecidable), or for assisting in analysis. In the case of mobile code safety as well, different formal systems and techniques of analysis have been proposed to pin down and implement a rigorous and efficient notion of program certification. A consensus has yet to emerge in this area.

A number of interesting developments in recent years have brought declarative programming, automated deduction, proof theory, static analysis techniques and abstract interpretation into play in both the areas of secure protocols and mobile code safety. Uniform proof systems, such as Frank Pfenning's Elf, have been used to formalize security policy and certification, making it possible to use Lambda Prolog tools for mobile code analysis and proof checking. Linear logic and related uniform formal systems have been used by Dale Miller to formalize protocol structure in an attempt to reduce questions about protocols, keys, nonces, etc., to abstract syntax and proof-theoretic notions. Abadi and Blanchet have shown how to translate essential information about protocols from a pi-calculus-style formalization to a Horn-clause program. Hermenegildo, Puebla and Albert have developed the notion of abstraction-carrying code, making use of abstract interpretation of certificates and fixed point computation to give a wholly new treatment to mobile-code safety, similar in spirit to the static analysis tools used in logic programming. These and related developments have motivated the present workshop.
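As a flavour of the Horn-clause encodings of protocols mentioned above, here is a minimal Prolog sketch; it is not taken from any of the cited works, and the predicates knows0/1 and attacker/2, the key names ka and kb, and the explicit depth bound are all invented for illustration. The attacker's derivable knowledge is defined by Horn clauses and secrecy becomes a reachability query:

```prolog
% Toy Dolev-Yao-style attacker model as Horn clauses (illustrative only).
% knows0/1 lists the attacker's initial knowledge; attacker/2 closes it
% under decryption, up to a fixed derivation depth to keep search finite.

knows0(pub(ka)).
knows0(pub(kb)).
knows0(enc(secret, pub(kb))).   % message observed on the network
knows0(priv(kb)).               % assumption: B's private key has leaked

attacker(M, _) :- knows0(M).
attacker(M, D) :-               % decrypt with a known private key
    D > 0, D1 is D - 1,
    attacker(enc(M, pub(K)), D1),
    attacker(priv(K), D1).

% ?- attacker(secret, 3).    succeeds, exposing the (intended) leak
```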
**10:00**
CiaoPP is the abstract interpretation-based preprocessor of the Ciao multi-paradigm (Constraint) Logic Programming system. It uses modular, incremental abstract interpretation as a fundamental tool to obtain information about programs. In CiaoPP, the semantic approximations thus produced have been applied to perform high- and low-level optimizations during program compilation, including transformations such as multiple abstract specialization, parallelization, partial evaluation, resource usage control, and program verification. More recently, such semantic approximations have found novel and promising applications in the more general context of program development. In this talk, we illustrate our extension of the system to incorporate Abstraction-Carrying Code (ACC), a novel approach to mobile code safety. ACC follows the standard strategy of associating safety certificates with programs, originally proposed in Proof-Carrying Code. A distinguishing feature of ACC is that we use an abstraction (or abstract model) of the program, computed by standard static analyzers, as a certificate. The validity of the abstraction on the consumer side is checked in a single pass by a very efficient and specialized abstract interpreter. We have implemented and benchmarked ACC within CiaoPP. The experimental results show that the checking phase is indeed faster than the proof generation phase, and that the sizes of certificates are reasonable. Moreover, the preprocessor provides compile-time (and run-time) tools for the certification of CLP programs with resource consumption assurances.
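To convey the flavour of single-pass checking, the following toy Prolog checker works over an invented sign domain and an invented two-point flow graph. It is only a sketch of the idea that the consumer re-applies the abstract transfer functions once and checks that the shipped abstraction covers every result; it does not reflect CiaoPP's actual domains or certificate format:

```prolog
% Sketch of abstraction-carrying code checking (invented domain and program).
% The certificate assigns an abstract "sign" to each program point; the
% consumer checks, in one pass over the edges, that every abstract transfer
% step stays below the certified value, i.e. the certificate is a fixpoint.

leq(bot, _).            % abstract ordering on {bot, neg, zero, pos, top}
leq(X, X).
leq(_, top).

cert(0, top).           % the shipped certificate
cert(1, pos).

edge(0, 1, set_one).    % hypothetical program:  0: x := 1;   1: x := x + 1; goto 1
edge(1, 1, incr).

transfer(set_one, _,    pos).
transfer(incr,    pos,  pos).
transfer(incr,    zero, pos).
transfer(incr,    neg,  top).
transfer(incr,    top,  top).
transfer(incr,    bot,  bot).

% Single pass, no fixpoint iteration on the consumer side.
% forall/2 as provided by SWI-Prolog.
check :-
    forall(edge(P, Q, Op),
           ( cert(P, A), transfer(Op, A, B), cert(Q, C), leq(B, C) )).

% ?- check.    succeeds: the certificate is a valid abstraction
```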
**10:45**
**11:15**
The operational semantics and typing of modern programming and specification languages are often defined using relations and proof systems. In simple settings, logic programming can be used to provide rather direct and natural interpreters for such operational semantics. More complex features of specifications, such as names and their bindings, proof rules with negative premises, and the exhaustive enumeration of state spaces, all pose significant challenges to conventional logic programming systems. In this paper, we describe a simple architecture for the implementation of deduction systems that allows a specification to interleave both finite success and finite failure. The implementation techniques for this prover are largely common ones from logic programming, i.e., logic variables, (higher-order pattern) unification, backtracking (using stream-based computation), and abstract syntax based on simply typed lambda-terms. We present a particular instance of this prover architecture and its prototype implementation, based on a dual interpretation of (finite) success and failure in proof search. We discuss important differences between this prover and traditional logic programming and present an implementation of bisimulation checking for the pi-calculus, which cannot be done so directly and declaratively in traditional logic programming languages.
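For a finite, name-free transition system, the interleaving of success and finite failure that the paper exploits can already be written in ordinary Prolog, as in the invented example below (the processes p and q and the sim/2 predicate are illustrative only). The point of the talk is precisely that names, bindings and genuine bisimulation in the pi-calculus resist such a direct encoding:

```prolog
% A toy labelled transition system:  p = a.b.0 + a.c.0,   q = a.(b.0 + c.0).
trans(p,  a, p1).   trans(p,  a, p2).
trans(p1, b, nil).  trans(p2, c, nil).
trans(q,  a, q1).
trans(q1, b, nil).  trans(q1, c, nil).

% sim(P, Q): Q simulates P.  Proving it interleaves a search for success
% (finding a matching move of Q) inside a finite-failure check (no move of
% P may be left unmatched), the mixture the prover architecture supports.
% forall/2 as provided by SWI-Prolog.
sim(P, Q) :-
    forall(trans(P, A, P1),
           ( trans(Q, A, Q1), sim(P1, Q1) )).

% ?- sim(p, q).   succeeds          ?- sim(q, p).   fails finitely
```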
**12:00**
We are developing COOML (Constructive Object Oriented Modelling Language), an OO specification language whose focus is on the information carried by data. The main features of COOML are: a data model based on a predicative extension of a constructive intermediate logic; a layered structure, separating the COOML logic layer from a problem logic and the computational aspects; and the possibility of choosing different problem logics and implementation languages. The data model is based on the notion of "pieces of information" i:S, where 'i' is a structured piece of information (an "information value") giving constructive evidence for the truth of a specification 'S'. Pieces of information support both interoperability in heterogeneous systems and multiple views on data in heterogeneous knowledge domains. Here, we focus on automatic snapshot generation. In modelling languages, a snapshot represents a system state. In particular, in the context of the UML, snapshots are object diagrams, and snapshot generation in the presence of OCL constraints has proved useful both for understanding a specification and for verifying its consistency in the problem domain. In our approach, snapshot generation can be based on the data model of the pieces of information. We present an algorithm, implemented in Prolog, that generates the system snapshots of a COOML specification S. Candidate snapshots for S are generated according to the structure of the information values i for S and are then tested against an axiomatization of the problem domain PD. To reduce the number of generated candidates, suitable local tests are performed during the generation steps.
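The generate-and-test structure described above can be sketched in a few lines of Prolog. The specification constructors and/2, or/2 and basic/1, the example facts, and the single problem-domain axiom are invented and far simpler than COOML's pieces of information, and this sketch tests candidates only after generation rather than pruning locally during it:

```prolog
% info(Spec, V): enumerate candidate information values V giving evidence
% for a (toy) specification Spec built from and/2, or/2 and basic/1 facts.
info(basic(A),    A)        :- fact(A).
info(and(S1, S2), (V1, V2)) :- info(S1, V1), info(S2, V2).
info(or(S1, _ ),  left(V))  :- info(S1, V).
info(or(_ , S2),  right(V)) :- info(S2, V).

fact(penguin(tweety)).  fact(flies(tweety)).  fact(swims(tweety)).

% A problem-domain axiom used to test candidates: penguins do not fly.
violates(V) :- in(penguin(X), V), in(flies(X), V).

in(A, A)        :- fact(A).
in(A, (L, R))   :- in(A, L) ; in(A, R).
in(A, left(V))  :- in(A, V).
in(A, right(V)) :- in(A, V).

% snapshot(Spec, V): generate a candidate, then test it against the axioms.
snapshot(Spec, V) :- info(Spec, V), \+ violates(V).

% ?- snapshot(and(basic(penguin(tweety)),
%                 or(basic(flies(tweety)), basic(swims(tweety)))), V).
% V = (penguin(tweety), right(swims(tweety)))
```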
**12:45 – 14:00**
**14:00**
We describe the design and implementation of a higher-order tabled logic programming interpreter, where some redundant and infinite computation is eliminated by memoizing sub-computations and re-using their results later. In particular, we focus on table design and table access in the higher-order setting, where many common operations are undecidable in general. To achieve a space- and time-efficient implementation, we rely on substitution factoring and higher-order substitution tree indexing. Experimental results from a wide range of examples (propositional theorem proving, refinement type checking, a small-step evaluator) demonstrate that higher-order tabled logic programming yields a more robust and more powerful proof procedure.
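The effect of tabling that this work builds on can be seen already in the standard first-order example below (the graph is invented; the table/1 directive is as provided by XSB or SWI-Prolog). The contribution of the talk lies in extending such tables to higher-order terms:

```prolog
:- table path/2.       % memoize path/2 subgoals and their answers

edge(a, b).  edge(b, c).  edge(c, a).  edge(c, d).

% With the table declaration, this left-recursive definition terminates on
% the cyclic graph above and each subgoal is solved only once; without it,
% plain SLD resolution loops.  The talk extends this machinery to
% higher-order terms via substitution factoring and substitution-tree
% indexing.
path(X, Z) :- edge(X, Z).
path(X, Z) :- path(X, Y), edge(Y, Z).

% ?- path(a, d).   succeeds          ?- path(d, X).   fails finitely
```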
**15:00**
This talk assumes the utility of a higher-order logic programming language such as Lambda Prolog for computing with structures like programs and proofs, a capability central to the analysis of formal properties of code, and focuses on issues relevant to its implementation. The specific question it addresses concerns the runtime manipulation of types. When the computational model for the language is based on full higher-order unification, types can influence the structure of variable substitutions and hence have an inextricable presence at runtime. However, shifting to a model oriented around higher-order pattern unification alters this picture considerably: types now only determine unifiability and have no effect on the structure of unifiers. This fact can be used to considerably reduce the dynamic footprint of types. We discuss how many unifiability questions can already be settled by simple compile-time analyses and how the amount of information needed for the remaining checks can also be minimized.
**15:45**
The specification of computation systems today is commonly given using operational semantics, supplanting the well-established but restrictive approach based on denotational semantics. Operational semantics is generally given via inference rules over relations between the different items of a computation and is, for this reason, an example of a relational specification. Inference rules over relations are also used to specify the static semantics of programming languages (type inference, for example). The use of denotational-style presentations of computational systems naturally leads to the use of functional programming-based executable specifications. Similarly, the use of inference systems for the presentation of operational semantics provides a natural setting for exploiting logic programming-based implementations. The Parsifal project will exploit recent developments in proof search and logic programming to make the specification of operational semantics more expressive and declarative, and will develop techniques and tools for reasoning directly on logic-based specifications. More specifically, the Parsifal project will focus on the following goals:

- Theoretical goals
- Implementation goals
- Application goals
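As a small reminder of how directly such relational specifications execute as logic programs, here is a big-step evaluator and a matching typing relation for an invented toy language, not one of the calculi studied in the project:

```prolog
% Big-step operational semantics of a toy arithmetic language, written as
% the Horn-clause rendering of its inference rules.
eval(num(N),       N).
eval(plus(E1, E2), V) :- eval(E1, V1), eval(E2, V2), V is V1 + V2.
eval(if(C, T, _),  V) :- eval(C, B), B =\= 0, eval(T, V).
eval(if(C, _, E),  V) :- eval(C, 0), eval(E, V).

% The static semantics in the same relational style (a trivial type system).
type(num(_),       int).
type(plus(E1, E2), int) :- type(E1, int), type(E2, int).
type(if(C, T, E),  A)   :- type(C, int), type(T, A), type(E, A).

% ?- eval(plus(num(1), if(num(0), num(10), num(2))), V).   V = 3
% ?- type(plus(num(1), if(num(0), num(10), num(2))), T).   T = int
```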