Pierre Labrèche, CMC Electronics, Canada

Pierre Labrèche is a software engineering manager at Esterline CMC Electronics; his fields of expertise include embedded systems, software processes, and safety-critical systems engineering. Since graduating in Electrical Engineering from École Polytechnique de Montréal in 1977, he has gathered experience in avionics, communications, and automation. He started in avionics with navigation and landing systems and is currently active in integrated cockpits. Under Pierre's direction, CMC reached SEI Capability Maturity Model (CMM) Level 3. He holds a Six Sigma Black Belt certification and teaches avionics and computer courses part-time. His current interests in safety-critical systems include architecture, model-based development, large-systems design and verification, test technologies, and object-oriented technologies.

Title: Viewpoints on the Challenges in Avionics Verification and Validation

Abstract: The avionics industry designs and builds the computer equipment and other electronics that modern aircraft depend on for safe flight, and software is the key element of avionics systems. The presentation will focus on the V&V requirements of avionics development, as published in recently updated standards:
  • RTCA/DO-178C Software Considerations in Airborne Systems and Equipment Certification
  • RTCA/DO-331 Model-Based Development and Verification Supplement to DO-178C and DO-278A
  • RTCA/DO-332 Object-Oriented Technology and Related Techniques Supplement to DO-178C and DO-278A
  • RTCA/DO-333 Formal Methods Supplement to DO-178C and DO-278A
The presentation will provide viewpoints from the practitioners’ perspectives, and highlight some challenges in meeting the V&V objectives with state-of-the-practice skills and technology.

Ray J. Payette, IBM Business Analytics, USA

Ray Payette has more than 20 years' experience in all aspects of software development. His career in the software industry began as a radar systems programmer for the Canadian Air Force, and over the years he has been involved in various disciplines, including project management, product management, and software strategy. In 2008, Ray joined IBM Cognos to lead the development and quality assurance groups for the Business Intelligence product line. Following the release of the highly successful Cognos 10 offering, Ray assumed a new role as the executive lead for the development and quality assurance teams in the Financial Performance Management group, which will soon release the next major version of IBM Cognos TM1. Ray's mandate includes strategic product decisions and support for IBM sales teams worldwide. Prior to joining IBM, Ray was Vice President and General Manager of Extend Media, and he has also held VP of Engineering positions at Cartesis and Hyperion. He is also a former officer in the Canadian Air Force.

Title: Quality, Can We Define It?

Abstract: Software quality: is it an art, a science, or some of both? This address offers an insider's view of software developed within organizations ranging from small to huge. Consider what it takes to create a first-rate software development team. Then reflect on the challenges of maintaining that edge in a competitive market. If your team is not performing up to expectations, how do you turn it around? What impact does the development process have on the outcome? Is agility the silver bullet? What are the effects of factors such as tools, automation, location, offshore development, communication means and style, or team size on quality? It is a complex problem that requires experience, insight, and attention to detail. The answers are elusive, but the trick is asking the right questions, and that starts here, right now.

Michael Ernst, University of Washington, USA

Michael D. Ernst is an Associate Professor in the Computer Science & Engineering department at the University of Washington. Ernst's research aims to make software more reliable, more secure, and easier (and more fun!) to produce. His primary technical interests are in software engineering and related areas, including programming languages, type theory, security, program analysis, bug prediction, testing, and verification. His research combines strong theoretical foundations with realistic experimentation, with an eye to changing the way that software developers work. Dr. Ernst was previously a tenured professor at MIT and, before that, a researcher at Microsoft Research.

Title: Reproducible Tests? Non-duplicable Results in Testing and Verification
Abstract: Reproducibility is a central tenet of testing. Randomness in test outputs can mask the signal that indicates correctness, so engineers work to ensure that test execution is consistent. Proofs, too, must be reproducible: a proof is of little value unless it can be independently verified. Yet the evaluation of our tools and processes does not meet the standards that engineers expect of their software. Random testing is sometimes found to be superior, sometimes inferior, to systematic testing. High test-coverage goals are adopted by one organization but abandoned by another. Test-first development strategies help one project but cripple another. Formal development methods (based on specification and verification) sometimes reduce costs but other times increase them, with varying correlation to quality. Programmers sing the praises of improved productivity when adopting languages with strong type systems, or languages without static typing. There are also rifts between techniques shown effective in research laboratories and those adopted in practice: research experiments are often not indicative of effectiveness in the field. These discordant observations hold back our field by sowing confusion among researchers and doubt among practitioners, and by preventing common ground within or between the communities. The divergences continue to occur despite our best intentions, and despite our increasing sophistication in tool-building, evaluation, realistic codebases, education, bridging communities, and the like. This talk will illustrate the scope of the problem with examples of conflicting results and experiences in the testing, verification, and validation community. It will discuss reasons for non-reproducibility, some of which are standard and acknowledged, and others of which are more subtle and easily overlooked. It will also discuss ways to avoid or mitigate these problems.
This talk aims to help the audience recognize non-reproducible results in their own work or that of others, and to avoid them, whether in research or in practice.