StATyCC

Static Analyses of Program Flows: Types and Certificate for Complexity

September 6, 2021

Presentation

StATyCC stands for “Static Analyses of Program Flows: Types and Certificate for Complexity”. It is a two-year international collaborative project funded by the Thomas Jefferson Fund, involving the School of Computer Science of Augusta University and the Northern Paris Computer Science Lab (LIPN) of the University of Paris 13.

The project aims at providing new static analysis tools based on theoretical results from Implicit Computational Complexity, building on previous work by Moyen, Rubiano and Seiller [1]. It revolves around certified compositional analysis of source code and loop optimization through an original dependency model. Applications to parallel optimization and intermediate representations are currently being investigated.

Members

This project is carried out by:

Papers

We currently have two papers under revision.

Tools

Presentations

2020 Georgia Undergraduate Research Conference

Assya Sellak gave a presentation at the 2020 Georgia Undergraduate Research Conference.

Abstract:

Optimization Method on Programs Using Dependency Analysis and Loop Peeling Transformations

Computer programs are written in high-level languages and translated into machine code by compilers. Compilers perform a series of program transformations and optimizations to improve memory usage and reduce the run time of the resulting program. Programs consist of commands and statements such as conditionals and loops. Loops are an extremely powerful tool that lets programmers repeatedly run a sequence of commands until a specified condition is met. However, when used carelessly, loops can lead to never-halting or extremely slow programs: for this reason, many compiler optimizations focus on these structures. Loops can contain commands that perform redundant operations on every iteration instead of being executed only when necessary. This excess computation, which may be introduced intentionally or unintentionally, by the programmer or by other automatic transformations, results in an avoidable increase in run time. Detecting which operations could have been performed fewer times than the loop executes is complex, but some optimizations attempt it: they extract portions of code that only needed to run once, and move commands that need to run more than once, but not as many times as the loop runs. These optimizations fall short on some structures, mainly because they limit the scope of the analysis to individual operations rather than considering sequences of operations as a whole. Thanks to quasi-interpretations [1] coming from Implicit Computational Complexity, new ways of detecting invariant sequences of commands inside loops have been developed. We extend this work along two axes: we allow more structures, including for loops, do...while loops, and loops with break, to be peeled; and by analyzing the dependencies within the loop, we hope to enable some parallel optimization. This allows us to:

  • consider more programs
  • possibly significantly speed up programs that are distributed, i.e., executed in parallel on multiple computers.
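To make the peeling idea concrete, here is a small hand-written C sketch (our illustration, not code from the project’s tools). The chunk computing t is quasi-invariant: it reads u from the previous iteration, so it stabilizes one iteration after u does, and peeling one iteration makes it hoistable.

    /* Before peeling: u stabilizes after the first iteration;
       t reads the previous iteration's u, so it stabilizes one
       iteration later. Neither can be hoisted directly. */
    void before(int *a, int n, int b, int u) {
        for (int i = 0; i < n; i++) {
            int t = u + 1;  /* quasi-invariant: uses last iteration's u */
            u = b * 2;      /* invariant after the first iteration */
            a[i] += t;
        }
    }

    /* After peeling one iteration, u no longer changes, so the chunk
       computing t is invariant in the remaining loop and is hoisted. */
    void after(int *a, int n, int b, int u) {
        if (n > 0) {        /* peeled first iteration */
            int t = u + 1;
            u = b * 2;
            a[0] += t;
        }
        int t = u + 1;      /* now invariant: computed once */
        for (int i = 1; i < n; i++) {
            a[i] += t;
        }
    }

Standard loop-invariant code motion would not hoist t on its own, since t depends on a variable written inside the loop; analyzing the chunk as a whole is what reveals that it stabilizes after one iteration.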

The slides and the abstract are available for download. The conference program is available as well.

A similar presentation was accepted to the 2021 National Conference on Undergraduate Research.

Graduate Research Day 2021

Both Assya Sellak and Neea Rusch had their abstracts accepted to Augusta University’s Graduate Research Day.

Certifying the complexity and correctness of critical software

Abstract:

Software powers our everyday lives: from phones to daily interactions to our homes. At the same time, software is fraught with bugs that cause systems to behave in undesirable ways. When discussing critical software responsible for sustaining human life, such as airplanes, ventilators, and nuclear reactors, being able to guarantee correct behavior is necessary. Compilers play a vital role in the software development process by transforming a programmer’s source code into executable programs. They perform analyses, transformations, and optimizations to improve the performance and reliability of the resulting program. But compilers, since they are themselves pieces of software, may contain bugs. To build reliable software, we must establish the correct behavior of these intermediate tools. There is a colossal push to prove the correctness of such tools using mathematical abstractions such as dependency analysis, formal methods, and proof assistants. Proving correctness makes it possible to eradicate bugs in programs and drives programmers to formally specify the intended behavior of programs, while building trust and confidence in the end result. Using dependency analysis inspired by Implicit Computational Complexity, we apply those techniques to program transformations. Among these techniques is ensuring that a program’s variables grow within reasonable bounds, which provides a certification in terms of memory footprint, and possibly run time, in addition to certifying its behavior. Implementing this analysis is one of the goals of our research.
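As a toy illustration (ours, not the project’s actual analyzer or its interface), the following C functions show the distinction such a growth analysis draws: the first admits a polynomial bound on its variables in terms of the inputs, while the second does not.

    /* Certifiable: x grows at most linearly in the inputs,
       |x| <= |x0| + n * |y|, so a polynomial bound exists. */
    int poly_growth(int x, int y, int n) {
        for (int i = 0; i < n; i++) {
            x = x + y;
        }
        return x;
    }

    /* Not certifiable: x doubles on every iteration,
       |x| <= |x0| * 2^n, so no polynomial bound in the inputs exists. */
    int exp_growth(int x, int n) {
        for (int i = 0; i < n; i++) {
            x = x + x;
        }
        return x;
    }

A certificate for the first function bounds its values, and hence its memory footprint, as a polynomial in the sizes of its inputs; the second is flagged because no such bound exists.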

Neea’s presentation is available online.

References

[1] J.-Y. Moyen, T. Rubiano, T. Seiller, Loop quasi-invariant chunk detection, in: D. D’Souza, K.N. Kumar (Eds.), ATVA, Springer, 2017. https://doi.org/10.1007/978-3-319-68167-2_7.

Miscellaneous