Thomas Seiller - ReACT

ReACT Project (ended)

H2020

In September 2014, I submitted a successful application for a Marie Curie Individual Fellowship (call H2020-MSCA-IF-2014, Mathematics (MAT) panel). The resulting project, ReACT (A Realizability Approach to Complexity Theory, grant agreement No 659920), funded a two-year research stay to work with Jakob Grue Simonsen at the Computer Science department of the University of Copenhagen (DIKU). It was part of the European Union's Horizon 2020 programme for research and innovation. A substantially expanded version of the associated research project, together with an exposition of first results, can be found in the preprint "Towards a Complexity-through-Realizability Theory".

Documents

Here are links to documents related to the project, including publications, preprints, and any other documents presenting the results obtained so far.

Publications (Conferences)

Unary Resolution: Characterizing Ptime, with Marc Bagnol and Clément Aubert, FOSSACS 2016.
pdf
Abstract Bibtex
We give a characterization of deterministic polynomial time computation based on an algebraic structure called the resolution semiring (whose elements can be understood as logic programs or sets of rewriting rules over first-order terms), a construction stemming from an interactive interpretation of cut-elimination known as the geometry of interaction. More precisely, we restrict this framework to terms (logic programs, rewriting rules) using only unary symbols. We prove it is complete for polynomial time computation using an encoding of pushdown automata. Soundness with respect to Ptime is proven thanks to a saturation method similar to those used for pushdown systems, combined with memoization. As a direct consequence, we get a Ptime-completeness result for a class of logic programming queries that use only unary function symbols.
@inproceedings{seiller-ptime,
Author = {Aubert, Clément and Bagnol, Marc and Seiller, Thomas},
Title = {Unary Resolution: Characterizing Ptime},
Booktitle = {Foundations of Software Science and Computation Structures: 19th International Conference, FOSSACS 2016, Held as Part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2016, Eindhoven, The Netherlands, April 2-8, 2016, Proceedings},
Year = {2016},
Publisher = {Springer Berlin Heidelberg},
Pages = {373--389},
Doi = {10.1007/978-3-662-49630-5_22},
}
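The completeness half of this result relies on an encoding of pushdown automata. As a purely illustrative sketch (not taken from the paper), here is a minimal deterministic pushdown computation in Python: a stack-based recogniser of balanced brackets, the kind of machine the unary restriction of the resolution semiring can encode.

```python
# Illustrative only: a tiny deterministic pushdown automaton recognising
# balanced brackets. Pushdown automata of this kind are what the paper
# encodes into unary logic programs to prove Ptime-completeness.

def dpda_balanced(word):
    """Accept iff `word` over the alphabet {'(', ')'} is well-balanced."""
    stack = []
    for symbol in word:
        if symbol == '(':
            stack.append(symbol)   # push on an opening bracket
        elif symbol == ')':
            if not stack:          # closing bracket with empty stack: reject
                return False
            stack.pop()            # pop on a closing bracket
        else:
            return False           # reject symbols outside the alphabet
    return not stack               # accept iff the stack is empty at the end

print(dpda_balanced("(()())"))  # True
print(dpda_balanced("(()"))     # False
```

The saturation method mentioned in the abstract plays the role that memoized reachability plays for such machines: it avoids re-exploring stack configurations that have already been resolved.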
Interaction Graphs: Full Linear Logic, LICS 2016.
pdf Abstract Bibtex
Interaction graphs were introduced as a general, uniform construction of dynamic models of linear logic, encompassing all Geometry of Interaction (GoI) constructions introduced so far. This series of work was inspired by Girard's hyperfinite GoI, and develops a quantitative approach that should be understood as a dynamic version of weighted relational models. Until now, the interaction graphs framework had been shown to deal with exponentials for the constrained system ELL (Elementary Linear Logic) while keeping its quantitative aspect. Adapting older constructions by Girard, one can define "full" exponentials, but at the cost of these quantitative features. We show here that allowing interpretations of proofs to use continuous (yet finite in a measure-theoretic sense) sets of states, as opposed to earlier Interaction Graphs constructions where these sets of states were discrete (and finite), provides a model for full linear logic with second-order quantification.
@inproceedings{seiller-IGFull,
Author = {Seiller, Thomas},
Title = {Interaction Graphs: Full Linear Logic},
Booktitle = {Logic in Computer Science (LICS)},
Year = {2016},
}
Loop Quasi-Invariant Chunk Detection, with J.-Y. Moyen and T. Rubiano, ATVA 2017.
Associated Tool/Software: A prototype LLVM pass, and A Proof of Concept on C code.
pdf Abstract Bibtex
Several techniques for analysis and transformation are used in compilers. Among them, the peeling of loops for hoisting quasi-invariants can be used to optimize generated code, or simply ease developers' lives. In this paper, we introduce a new concept of dependency analysis borrowed from the field of Implicit Computational Complexity (ICC), allowing us to work with composed statements called "chunks" in order to detect more quasi-invariants. Based on an optimization idea given on a WHILE language, we provide a transformation method, reusing ICC concepts and techniques, for compilers. This new analysis computes an invariance degree for each statement or chunk of statements by building a new kind of dependency graph, finds the "maximum" or "worst" dependency graph for loops, and recognizes whether an entire block is quasi-invariant or not. This block could be an inner loop, in which case the computational complexity of the overall program can be decreased.
In this paper, we introduce the theory around this concept and present a prototype analysis pass implemented on LLVM. We have already implemented a proof of concept on a toy C parser, analysing and transforming the AST representation. In the near future, we will implement the corresponding transformation within our prototype LLVM pass and provide benchmark comparisons.
@inproceedings{seiller-ATVA,
Author = {Moyen, Jean-Yves and Rubiano, Thomas and Seiller, Thomas},
Title = {Loop Quasi-Invariant Chunk Detection},
Booktitle = {Automated Technology for Verification and Analysis: 15th International Symposium, ATVA 2017, Pune, India, October 3--6, 2017, Proceedings},
Editor = {D’Souza, Deepak and Narayan Kumar, K.},
Pages = {91--108},
Year = {2017},
DOI = {10.1007/978-3-319-68167-2_7},
}
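To make the peeling idea concrete, here is a small hypothetical example in Python (invented for illustration, not drawn from the paper). The assignment to y is invariant after one iteration (invariance degree 1), and the assignment to x, which reads y, after two (degree 2); peeling one iteration and recomputing x once lets every remaining iteration drop both assignments.

```python
# Hypothetical illustration of quasi-invariant hoisting by loop peeling.
# All names are invented; this mimics the kind of rewriting the analysis enables.

def before(c, n):
    x, y, total = 0, 0, 0
    for i in range(n):
        x = y + 1          # degree 2: stabilises once y has stabilised
        y = c * 2          # degree 1: same value from the first iteration on
        total += x * i
    return total

def after(c, n):
    x, y, total = 0, 0, 0
    if n > 0:
        x = y + 1          # peeled first iteration (i = 0)
        y = c * 2
        total += x * 0
        x = y + 1          # recomputed once; from now on x and y are fixed
        for i in range(1, n):
            total += x * i # hot loop no longer recomputes x or y
    return total

print(before(3, 5) == after(3, 5))  # prints True
```

The invariance degree computed by the dependency analysis tells the transformation exactly how many iterations must be peeled before a chunk can be hoisted.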

Tools/Software

A prototype LLVM pass, https://github.com/ThomasRuby/lqicm_pass, implementing the loop optimisation described in the series of joint papers with J.-Y. Moyen and T. Rubiano.
A proof-of-concept in python for optimising C programs, https://github.com/ThomasRuby/LQICM_On_C_Toy_Parser, implementing the loop optimisation described in the series of joint papers with J.-Y. Moyen and T. Rubiano.

Publications (Workshops)

Loop Quasi-Invariant Chunk Motion by peeling with statement composition, with J.-Y. Moyen and T. Rubiano, DICE-FOPARA 2017.
Associated Tool/Software: A prototype LLVM pass, and A Proof of Concept on C code.
pdf Abstract Bibtex
Several techniques for analysis and transformation have been present and used since the creation of compilers. Among them, the peeling of loops for hoisting quasi-invariants can be used to optimize generated code, or simply ease developers' lives. In this paper, we introduce a new concept of dependency analysis borrowed from the field of Implicit Computational Complexity (ICC), allowing us to work with composed statements called "chunks" in order to detect more quasi-invariants. From a proven optimization idea given on a WHILE language, we provide a transformation method, reusing ICC concepts and techniques, for compilers. This new analysis computes an invariance degree for each statement or chunk of statements by building a new kind of dependency graph, finds the "maximum" or "worst" dependency graph for loops, and recognizes whether an entire block is quasi-invariant or not. This block could be an inner loop, in which case the computational complexity of the overall program can be decreased.
In this paper, we introduce the theory around this concept and present a proof-of-concept tool that we developed: analysis and transformation passes on a toy C parser written in Python, "pycparser", implemented by Eli Bendersky. In the near future, we will implement these techniques in real compilers, as some of our optimisations are visibly better than the current optimisations implemented in both GCC and LLVM.
@inproceedings{seiller-peeling,
Author = {Moyen, Jean-Yves and Rubiano, Thomas and Seiller, Thomas},
Title = {Loop Quasi-Invariant Chunk Motion by peeling with statement composition},
Editor = {Bonfante, Guillaume and Moser, Georg},
Year = {2017},
Booktitle = {{{\rm Proceedings 8th Workshop on} Developments in Implicit Computational complExity {\rm and 5th Workshop on} FOundational and Practical Aspects of Resource Analysis, {\rm Uppsala, Sweden, April 22-23, 2017}},
Series = {Electronic Proceedings in Theoretical Computer Science},
Volume = {248},
Publisher = {Open Publishing Association},
Pages = {47--59},
DOI = {10.4204/EPTCS.248.9},
}

Other Publications

Why Complexity Theorists Should Care About Philosophy. In Jean Fichot and Thomas Piecha (eds.), Beyond Logic: Proceedings of the Conference held in Cerisy-la-Salle, pp. 381--393.
pdf Abstract Bibtex
Invited Talk at the ANR-DFG Beyond Logic conference.
@inproceedings{seiller-recension,
Author = {Seiller, Thomas},
Title = {Why Complexity Theorists Should Care About Philosophy},
Booktitle = {Beyond Logic. Proceedings of the Conference held in Cerisy-la-Salle},
Editor = {Fichot, Jean and Piecha, Thomas},
Pages = {381--393},
Year = {2017},
DOI = {10.15496/publikation-18676},
}

Preprints and Research Reports

Semantics, Entropy and Complexity Lower Bounds, with L. Pellissier.
pdf Abstract Bibtex
Finding lower bounds in complexity theory has proven to be an extremely difficult task. Nowadays, only one research direction is commonly acknowledged to have the ability to solve current open problems, namely Mulmuley's Geometric Complexity Theory programme. Relying on heavy techniques from algebraic geometry, the latter stemmed from a first lower bound result for a variant of Parallel Random Access Machines (PRAMs).
We analyse this original proof from a semantics point of view, interpreting programs as graphings -- generalizations of dynamical systems. We show that Mulmuley's method can be abstracted to a more general setting, exploiting the classic notion of topological entropy. This reformulation recentres the proof around dynamical aspects, relegating the use of algebraic geometry to a model-driven choice rather than a key concept of the method.
@unpublished{seiller-SECLB,
Author = {Pellissier, Luc and Seiller, Thomas},
Title = {Semantics, Entropy and Complexity Lower Bounds},
Note = {submitted},
Year = {2018},
}
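The classical notion the reformulation exploits can be recalled for reference (this is the standard Bowen–Dinaburg definition, not the paper's own notation). For a continuous map $T$ on a compact metric space $(X,d)$, one measures how fast orbits separate:

```latex
% Topological entropy, Bowen--Dinaburg formulation (standard definition).
\[
  d_n(x,y) \;=\; \max_{0 \le k < n} d\big(T^k x,\, T^k y\big).
\]
% A set $S \subseteq X$ is $(n,\varepsilon)$-spanning if every $x \in X$ lies
% within $d_n$-distance $\varepsilon$ of some point of $S$. Writing
% $s(n,\varepsilon)$ for the minimal cardinality of such a set:
\[
  h_{\mathrm{top}}(T) \;=\; \lim_{\varepsilon \to 0}\;
  \limsup_{n \to \infty} \frac{1}{n} \log s(n,\varepsilon).
\]
```

Interpreting programs as graphings makes this quantity available as a measure of the dynamical complexity of a computation, which is the sense in which the lower-bound argument becomes entropy-theoretic rather than algebro-geometric.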
Entropy and Complexity Lower Bounds (5-pages abstract), with L. Pellissier.
pdf Abstract Bibtex
Finding lower bounds in complexity theory has proven to be an extremely difficult task. Nowadays, only one research direction is commonly acknowledged to have the ability to solve current open problems, namely Mulmuley's Geometric Complexity Theory programme. Relying on heavy techniques from algebraic geometry, the latter stemmed from a first lower bound result for a variant of Parallel Random Access Machines (PRAMs).
We analyse this original proof from a semantics point of view, interpreting programs as graphings -- generalizations of dynamical systems. We show that Mulmuley's method can be abstracted to a more general setting, exploiting the classic notion of topological entropy. This reformulation recentres the proof around dynamical aspects, relegating the use of algebraic geometry to a model-driven choice rather than a key concept of the method.
@unpublished{seiller-ECLB,
Author = {Pellissier, Luc and Seiller, Thomas},
Title = {Entropy and Complexity Lower Bounds},
Note = {Abstract},
Year = {2018},
}
Interaction Graphs: Nondeterministic Automata.
pdf Abstract Bibtex
This paper exhibits a series of semantic characterisations of sublinear nondeterministic complexity classes. These results fall into the general domain of logic-based approaches to complexity theory and so-called implicit computational complexity (ICC), i.e. descriptions of complexity classes without reference to specific machine models. In particular, it relates strongly to ICC results based on linear logic since the semantic framework considered stems from work on the latter. Moreover, the obtained characterisations are of a geometric nature: each class is characterised by a specific action of a group by measure-preserving maps.
@unpublished{seiller-IGNDA,
Author = {Seiller, Thomas},
Title = {Interaction Graphs: Nondeterministic Automata},
Note = {submitted},
Year = {2016},
}
Towards a Complexity-through-Realizability Theory, January 2015.
pdf Abstract Bibtex
We explain how recent developments in the fields of realizability models for linear logic -- or geometry of interaction -- and implicit computational complexity can lead to a new approach to computational complexity. This semantics-based approach should apply uniformly to various computational paradigms, and enable the use of new mathematical methods and tools to attack problems in computational complexity. This paper provides the background, motivations and perspectives of this complexity-through-realizability theory to be developed, and illustrates it with recent results.
@unpublished{seiller-towards,
Author = {Seiller, Thomas},
Title = {Towards a Complexity-through-Realizability Theory},
Note = {submitted},
Year = {2015},
}
Measurable Preorders and Complexity, Abstract for TACL 2015.
pdf Abstract Bibtex
In a recent paper, we defined a generic construction of models of the exponential-free fragment of Linear Logic (MALL). These models provide a new framework for the study of computational complexity which allows for the use of techniques and invariants from ergodic theory and operator theory.
@unpublished{seiller-tacl2015,
Author = {Seiller, Thomas},
Title = {Measurable Preorders and Complexity},
Note = {Abstract},
Year = {2015},
}
Interaction Graphs and Computational Complexity I, in preparation.
(Draft) Abstract Bibtex
The aim of Implicit Computational Complexity is to study algorithmic complexity only in terms of restrictions of languages and computational principles. Based on previous work about realizability models for linear logic, we propose a new approach where we consider semantical restrictions instead of syntactical ones. This leads to a hierarchy of models mirroring subtle distinctions about computational principles. As an illustration of the method, we explain how to obtain characterizations of infinite families of complexity classes lying in between the set of regular languages (resp. stochastic languages) and the sets of (deterministic and non-deterministic) logarithmic space predicates (resp. probabilistic logarithmic space predicates).
@unpublished{seiller-IGCompI,
Author = {Seiller, Thomas},
Title = {Interaction Graphs and Computational Complexity {I}},
Note = {in preparation},
Year = {2015},
}
Interaction Graphs: Zeta Functions, in preparation.
pdf Abstract Bibtex
@unpublished{seiller-IGZeta,
Author = {Seiller, Thomas},
Title = {Interaction Graphs: Zeta Functions},
Note = {in preparation},
Year = {2016},
}

Slides

Why Complexity Theorists Should Care About Philosophy, ANR-DFG BeyondLogic Conference, Cerisy-la-Salle, May 2017. pdf
Computational Complexity, between Algebra and Geometry, GEOCAL-LAC meeting, November 2016. pdf
A realisability approach to complexity theory, HCC seminar, November 27th 2015, University of Copenhagen (Denmark). pdf
Measurable Preorders and Complexity, TACL 2015, June 26th 2015, Ischia Porto (IT). pdf
Towards a Complexity-through-Realizability Theory, FOPARA/DICE, ETAPS Workshop, April 12th 2015, London (UK). pdf