International Workshop on Design and Implementation of Formal Tools and Systems
Lausanne, Switzerland, October 20, 2014
co-located with FMCAD and MEMOCODE
The DIFTS (Design and Implementation of Formal Tools and Systems) workshop emphasizes insightful experiences in formal tools and systems design. The workshop provides an opportunity for discussing engineering aspects and the various design decisions required to put formal tools and systems into practical use. In the past, we have invited speakers who have shared their deep insights and discussed the practices followed in industry towards adopting formal methods. The workshop provides a forum for sharing original challenges and solutions with groundbreaking results.
The DIFTS workshop is co-located with FMCAD14 and MEMOCODE14.
DIFTS takes a broad view of the formal tools/systems area and solicits contributions from domains including, but not restricted to, decision procedures, verification, testing, validation, diagnosis, debugging, and synthesis. The workshop encourages and appreciates system development activities and facilitates transparency in experimentation. It will also serve as a platform to discuss open problems and future challenges in practicing formal methods. Call for Papers: CFP (txt).
Papers in the following two categories are solicited: (a) system category (10 pages, double column, 11pt), and (b) tool category (8 pages, double column, 11pt).
In the system category, we invite papers that present original ideas accompanied by novel integration techniques, adequate design/implementation details, the important design choices made and explored, and good experimental results.
In the tool category, we invite papers that focus primarily on the engineering aspects of a known/popular algorithm, with significant emphasis on design/implementation details and the various design choices made to advance current state-of-the-art approaches.
Papers should be submitted electronically in PDF format via EasyChair: Submission Page.
To maintain uniformity and fairness in the reviewing process, the program committee will evaluate the technical contribution of each submission based on the following guidelines: the paper should provide enough detail for others to reproduce the results; it should solve a clearly stated problem that is significant and of wide interest; and it should provide sufficient motivation for the design choices made. Overall, the paper should also clearly identify the main contributions of the work.
All accepted contributions will be included in informal proceedings. High-quality submissions will be considered for a special issue of a journal such as FMSD (Formal Methods in System Design) or IEEE TC (Transactions on Computers).
Tayfun Gezgin, Raphael Weber and Markus Oertel, Multi-aspect Virtual Integration approach for Real-Time and Safety Properties
Alberto Griggio and Marco Roveri, Comparing Different Variants of the IC3 Algorithm for Hardware Model Checking
Sebastiaan Joosten, Freek Verbeek and Julien Schmaltz, WickedXmas: Designing and Verifying on-chip Communication Fabrics
Daryl Stewart, David Gilday, Daniel Nevill and Thomas Roberts, Processor Memory System Verification using DOGReL: a language for specifying End-to-End properties
Paolo Camurati, Carmelo Loiacono, Paolo Pasini, Denis Patti and Stefano Quer, To split or to group: from divide-and-conquer to sub-task sharing in verifying multiple properties
Bardh Hoxha, Hoang Bach, Houssam Abbas, Adel Dokhanchi, Yoshihiro Kobayashi and Georgios Fainekos, Towards Formal Specification Visualization for Testing and Monitoring of Cyber-Physical Systems
Information about the venue is available at the following link: Venue.
Workshop Program
(* rescheduled to synchronize with parallel MEMOCODE sessions)
8:30-8:50 | Breakfast and Registration |
8:50-9:00 | Welcome |
9:00-10:00 |
1st Invited Talk by Rolf Drechsler, University of Bremen, Germany
Formal verification is a powerful and effective method to ensure the correctness of electronic systems. In order to check whether enough properties have been written, coverage metrics have been proposed, along with methods to automatically increase coverage by determining new properties for missed scenarios. In this invited talk we present how such coverage methods can be leveraged to higher levels in the design flow of electronic systems. We first review existing coverage metrics and demonstrate state-of-the-art formal verification techniques for correctness checking at the Formal Specification Level (FSL). Based on these, we describe how coverage techniques can help to automatically derive missed scenarios at the FSL, where structure and behavior are given in terms of UML class and sequence diagrams, respectively.
|
10:00-10:30 | Coffee Break |
  | System |
10:30-11:00 |
Alberto Griggio and Marco Roveri, Comparing Different Variants of the IC3 Algorithm for Hardware Model Checking
IC3 is one of the most successful algorithms for hardware model checking. Several variants of the original IC3 algorithm have been proposed in the literature, each differing from the others in how the individual steps of the algorithm are implemented, e.g. pre-processing, clause generalization, the SAT solver used, and simplification. In this paper we perform a thorough practical comparison of the different variants by implementing each of them in the same tool, pushing the corresponding implementations as much as possible to achieve the best performance. This enabled flexible experimentation and allowed us to gain new insights, both about the variants' most important differences and commonalities and about their performance characteristics. We performed the experiments using as benchmark set the problems from the last three hardware model checking competitions. The analysis helped to identify the best variant.
|
11:00-11:30 |
Paolo Camurati, Carmelo Loiacono, Paolo Pasini, Denis Patti and Stefano Quer, To split or to group: from divide-and-conquer to sub-task sharing in verifying multiple properties
This paper addresses the issues of property grouping, property decomposition, and property coverage in model checking problems.
Property grouping, i.e., clustering, is a valuable solution whenever (very) large sets of properties have to be proven for a given model. As such sets often include many "easy-to-prove" and/or "similar" properties, property grouping can reduce overheads and avoid reiteration of common sub-tasks.
On the other end of the spectrum, property decomposition can be effective whenever a given property turns out (or is expected) to be "hard-to-prove". Decomposition of properties into "sub-properties" follows the divide-and-conquer paradigm, and it shares the advantages and disadvantages of that paradigm.
Our contribution is a heuristic property manager, running on top of a multi-engine model checking portfolio, with the specific target of optimizing productivity. We discuss and compare different clustering heuristics, and we exploit circuit decomposition strategies for property sub-setting. We also consider the problem of evaluating a coverage measure for properties, where the "coverage" is used to monitor the "advancement" during the (partial) verification of a given property. We finally consider estimates of the bound of a property as a measure of confidence in BMC runs.
We include preliminary experimental data indicating that the proposed approaches can provide improvements over state-of-the-art methods, potentially enhancing productivity in industrial environments.
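As a rough, hypothetical illustration of the property-clustering idea mentioned in this abstract (not the authors' actual heuristic), the sketch below groups properties whose variable supports overlap, so that properties sharing most of their cone of influence could be verified together. The function names, the Jaccard measure, and the 0.5 threshold are assumptions made for the example.

```python
# Hypothetical sketch: cluster properties by overlapping variable support
# (cone of influence), so similar properties can share verification sub-tasks.
# This is NOT the paper's heuristic; names and the 0.5 threshold are assumptions.

def jaccard(a: set, b: set) -> float:
    """Similarity between two support sets."""
    return len(a & b) / len(a | b) if (a | b) else 1.0

def cluster_properties(support: dict[str, set], threshold: float = 0.5):
    """Greedily group properties whose supports overlap by at least `threshold`."""
    clusters: list[list[str]] = []
    for prop, vars_ in support.items():
        for cluster in clusters:
            rep = support[cluster[0]]  # compare against the cluster's first member
            if jaccard(vars_, rep) >= threshold:
                cluster.append(prop)
                break
        else:
            clusters.append([prop])
    return clusters

if __name__ == "__main__":
    # Toy example: supports from a (hypothetical) cone-of-influence analysis.
    support = {
        "p1": {"r0", "r1", "r2"},
        "p2": {"r1", "r2", "r3"},
        "p3": {"r7", "r8"},
    }
    print(cluster_properties(support))  # [['p1', 'p2'], ['p3']]
```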
|
12:00-12:30* |
Bardh Hoxha, Hoang Bach, Houssam Abbas, Adel Dokhanchi, Yoshihiro Kobayashi and Georgios Fainekos, Towards Formal Specification Visualization for Testing and Monitoring of Cyber-Physical Systems
One of the main challenges in software development for safety-critical Cyber-Physical Systems (CPS) is achieving a certain level of confidence in the system's correctness and robustness. In order to perform formal monitoring, testing and verification of CPS, the fully modular tool S-TALIRO is presented. The tool is designed for seamless integration with the Model Based Design (MBD) process in Matlab/Simulink™. S-TALIRO performs robustness-guided Metric Temporal Logic (MTL) testing and monitoring. Since writing specifications in MTL is an error-prone task that requires expert temporal logic users, a graphical formalism for the development and visualization of specifications is presented. The article provides an up-to-date overview of S-TALIRO. It includes a discussion of the benefits of the fully modular architecture and the challenges encountered in its development.
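As a small, hedged illustration of the robustness notion referred to above (not code from S-TALIRO itself), the sketch below computes the space robustness of a simple "always" specification G(x < c) over a sampled trace: the minimum margin c - x[t] over all samples, positive when the trace satisfies the property and negative when it violates it. The formula, trace values, and thresholds are assumptions chosen for the example.

```python
# Hypothetical sketch of MTL/STL "space robustness" for the formula G(x < c):
# robustness = min over all samples of (c - x[t]).
# Positive robustness means the trace satisfies the property with some margin;
# negative means it is violated. Not S-TALIRO code; values are made up.

def robustness_always_less_than(signal: list[float], c: float) -> float:
    """Robustness of G(x < c) over a finite sampled trace."""
    return min(c - x for x in signal)

if __name__ == "__main__":
    trace = [0.25, 0.5, 0.75, 0.5]                      # sampled values of x
    print(robustness_always_less_than(trace, 1.0))      # 0.25 -> satisfied with margin
    print(robustness_always_less_than(trace, 0.5))      # -0.25 -> violated
```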
|
  | Tools I |
11:30-12:00* |
Sebastiaan Joosten, Freek Verbeek and Julien Schmaltz, WickedXmas: Designing and Verifying on-chip Communication Fabrics
In modern chip architectures, the increase in parallelisation brings about highly complex on-chip communication fabrics. We present WickedXmas, a tool that facilitates the design and formal verification of such interconnects. The tool is based on the language xMAS, a high-level design language for communication fabrics originally proposed by Intel. The use of xMAS ensures that many common modelling errors, such as unintended loss of data or dangling wires, are prevented by construction. Therefore, the major challenge in verifying xMAS models is establishing deadlock freedom. WickedXmas can automatically detect deadlocks or prove their absence. If a deadlock is found, it is presented to the user for further analysis. Experimental evaluation on a range of interconnects shows good performance and scalability of WickedXmas in contrast to verification from scratch or using existing model checking techniques. Using WickedXmas, a user can draw a communication fabric and formally verify it automatically.
|
12:30-13:30 | Workshop Lunch |
13:30-14:20 |
2nd Invited Talk by Wolfgang Kunz, University of Kaiserslautern, Germany
Decades of research in the field of formal verification have resulted in a number of breakthrough successes. In many applications, the scalability of formal techniques no longer causes concern. Adequate specification languages such as PSL and SVA have been developed that match well with powerful proof algorithms. Standardization of these languages has facilitated their adoption in industry. Moreover, sophisticated methodologies have emerged in industry that have proved to be highly productive. Yet, when deploying tools and methodologies for formal verification in industrial practice, a number of hurdles have to be faced. Only fully automatic techniques such as equivalence checking and automatic linting enjoy widespread popularity in SoC design. Property checking, on the other hand, is only moderately successful. As a result, simulation has continued to be the predominant verification technique. Based on observations of industrial practices over many years, this talk will identify a set of hurdles that impede formal techniques from gaining further ground over simulation-based verification. After inspecting these hurdles, new lines of research will be proposed for overcoming them. In particular, it will be argued that the future success of formal techniques may depend on how these techniques are integrated into system-level design flows. It is proposed to leverage existing formal technology as a contribution to closing the "semantic gap" between the system level and the register-transfer level. This has a strong impact on how to use and design formal verification tools, both for hardware and for low-level software. The talk will describe recent results and new practices already explored in industrial case studies. In conclusion, a new system design methodology including formal techniques is envisioned to support effective design procedures based on high-level synthesis as well as manual design refinements at the system level.
|
  | Tools II |
14:20-14:50 |
Daryl Stewart, David Gilday, Daniel Nevill and Thomas Roberts, Processor Memory System Verification using DOGReL: a language for specifying End-to-End properties
The memory subsystem of a microprocessor is responsible for scheduling memory accesses as efficiently as possible, hiding latency costs from the Data Processing Unit and, ideally, minimizing power costs. For ARM processors, requirements on the ordering and completion of memory accesses from the ARM Architecture and the relevant bus protocols specify an envelope of acceptable behavior that must be maintained by the schedule. Consequently, any given sequence of instructions can be satisfied by a range of different schedules. Verifying the correctness of the memory subsystem is therefore a complex task, often on the critical path to release.
Inspired by earlier work on specifying end-to-end properties, we developed a language of Directed Observer Graphs (DOGReL) capable of expressing implementation-independent properties, at a level of abstraction between architecture and micro-architecture, that specify the behavior of the memory subsystem. These properties are then translated into SystemVerilog monitors which can be applied for bug-hunting at the RTL level using commercial model checking tools. During a recent micro-controller design project at ARM these properties proved highly valuable in detecting bugs earlier in the design cycle than simulation. The process of writing an unambiguous specification also exposed early design flaws and provided clearer documentation of the required behaviors for engineers.
|
14:50-15:20 |
Tayfun Gezgin, Raphael Weber and Markus Oertel, Multi-aspect Virtual Integration approach for Real-Time and Safety Properties
Verification techniques for analyzing the design or the requirements at early development stages have been used since the beginning of the model-based design paradigm. Most of these analyses focus on a single purpose, such as safety, real-time, or geometry. This separation of concerns leads to the introduction of so-called aspects that describe these properties of a system. Nevertheless, these aspects are not necessarily independent. In this paper we use the fault tolerance time interval, i.e., the maximum time to recover from faults, as an example to state the need for a multi-aspect analysis. We present how a virtual integration test can be performed covering safety and real-time properties to prove the correct refinement of requirements. Our requirements formalization approach using contracts, a pattern language, and the internal representation as timed automata are described. The presented technique is applied to an automotive lane-keeping support system.
|
15:20-16:00 |
3rd Invited Talk by Fahim Rahim, Atrenta Europe, Grenoble, France
Atrenta's Spyglass Power Platform allows the designer to reduce SoC power at the RT level. Spyglass detects fine-grain power saving opportunities to reduce register and memory power consumption and automatically modifies the original RTL to include the newly identified power saving opportunities. In addition, the tool detects potential coarse-grain power optimizations in the SoC and guides the designer in exploring the power consumption of their SoC. Spyglass employs formal verification techniques to explore the design space for power saving opportunities and to prove that the proposed opportunities do not modify the design functionality. The talk will illustrate how formal verification techniques are being used to drive Spyglass technology and outline future challenges in the field of power reduction.
|
16:00-16:15 | Coffee Break |
16:15 |
MEMOCODE Final Keynote by Joseph Sifakis
Architectures are a common means for organizing coordination between components in order to build complex systems and to make them manageable. Despite the progress of the state of the art over the past decades, many foundational issues remain unsolved. In this talk we present a general framework for modeling architectures and their properties.
The framework considers that systems are obtained as the composition of atomic components by using glue operators, which are composition operators on components. Their meaning can be specified by operational semantics rules giving the global behavior of a composite component as a function of the behaviors of its constituents. They implicitly define connectors between components, each connector involving a multiparty interaction and an associated atomic computation that modifies the state of the composed components. We present results characterizing the expressiveness of glue operators and show that universal expressiveness can be achieved by using a limited and well-defined set of connector types. We also propose a powerful notation for connectors encompassing both static and dynamic coordination.
Architectures can be considered as generic operators parameterized by types of coordinator components and glue operators. Applied to a set of components, they enforce a characteristic property on the resulting global behavior. Coordinators are needed only when the types of glue operators are not expressive enough for achieving the desired coordination. The proposed formalization of architectures can be used for building component-based systems that are correct by construction. We present composability results, which guarantee preservation of the characteristic behavioral properties of a set of architectures when they are jointly applied to the same operands.
We present logics for the description of architectural properties. Formulas of these logics characterize architectural styles as the description of component types from which a system is built, and their interconnection. The presented logics include a propositional logic and two higher-order extensions of this logic. We provide an axiomatization of the propositional logic as well as results on the expressiveness of the higher-order logics and their effective application for the description of architectural styles.
DIFTS attendees are invited to attend the MEMOCODE talk.
|