Sunday, October 23
This workshop intends to stimulate research in programming languages and software development by exploring the notion that languages should not offer a limited set of fixed composition mechanisms, but allow for flexibility, a wide variety of compositions, domain-specific and tailored compositions, or programmable compositions of various program artifacts.
- Christoph Bockisch (University of Twente, Netherlands)
- Lodewijk Bergmans (University of Twente, Netherlands)
- Dean Wampler (Think Big Analytics, Inc., USA)
Monday, October 24
Programming languages exist to enable programmers to develop software effectively. But how efficiently programmers can write software depends on the usability of the languages and tools that they develop with. The aim of this workshop is to discuss methods, metrics and techniques for evaluating the usability of languages and language tools.
- Craig Anslow (Victoria University of Wellington, New Zealand)
- Shane Markstrum (Google, Inc., USA)
- Emerson Murphy-Hill (North Carolina State University, USA)
Tuesday, October 25
10:30–12:00 Onward! Research Papers 1
Mind Your Language: On Novices’ Interactions with Error Messages
Automated Program Verification Made SYMPLAR
14:00–15:30 Onward! Research Papers 2
TouchStudio - Programming Cloud-Connected Mobile Devices via Touchscreen
Coding at the Speed of Touch
Emerson: Accessible Scripting for Entities in an Extensible Virtual World
Wednesday, October 26
08:30–10:00 Onward! Keynote
It has become extraordinarily difficult to write software that performs close to optimally on complex modern microarchitectures. Particularly plagued are domains that are data intensive and require complex mathematical computations such as information retrieval, scientific simulations, graphics, communication, control, and multimedia processing. In these domains, performance-critical components are usually written in C (with possible extensions) and often even in assembly, carefully “tuned” to the platform’s architecture and microarchitecture. Specifically, the tuning includes optimization for the memory hierarchy and for different forms of parallelism. The result is usually long, rather unreadable code that needs to be re-written or re-tuned with every platform upgrade. On the other hand, the performance penalty for relying on straightforward, non-tuned, “more elegant” implementations is typically a factor of 10, 100, or even more. The reasons for this large gap are some (likely) inherent limitations of compilers including the lack of domain knowledge, and the lack of an efficient mechanism to explore the usually large set of transformation choices. The recent end of CPU frequency scaling, and thus the end of free software speed-up, and the advent of mainstream parallelism with its increasing diversity of platforms further aggravate the problem.
No promising general solution (besides extensive and expensive hand-coding) to this problem is on the horizon. One approach that has emerged from the numerical computing and compiler community in the last decade is called automatic performance tuning, or autotuning. In its most common form it involves the consideration or enumeration of alternative implementations, usually controlled by parameters, coupled with algorithms for search to find the fastest. However, the search space still has to be identified manually, it may be very different even for related functionality, it is not clear how to handle parallelism, and a new platform may require a complete redesign of the autotuning framework.
On the other hand, since the overall problem is one of productivity, maintainability, and quality (namely performance) it falls squarely into the domain of software engineering. However, even though a large set of sophisticated software engineering theory and tools exist, it appears that to date this community has not focused much on mathematical computations nor performance in the detailed, close-to-optimal sense above. The reason for the latter may be that performance, unlike various aspects of correctness, is not syntactic in nature (and in reality is often even unpredictable and, well, messy).
The aim of this talk is to draw attention to the performance/productivity problem for mathematical applications and to make the case for a more interdisciplinary attack. As a set of thoughts in this direction we offer some of the lessons we have learned in the last decade in our own research on Spiral (www.spiral.net). Spiral can be viewed as an automatic performance programming framework for a small, but important class of functions called linear transforms. Key techniques used in Spiral include staged declarative domain-specific languages to express algorithm knowledge and algorithm transformations, the use of platform-cognizant rewriting systems for parallelism and locality optimizations, and the use of search and machine learning techniques to navigate possible spaces of choices. Experimental results show that the code generated by Spiral competes with, and sometimes outperforms, the best available human-written code. Spiral has been used to generate part of Intel’s commercial libraries IPP and MKL.
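The enumerate-and-time form of autotuning described in the abstract can be illustrated with a minimal sketch. This toy example tunes a single parameter (the block size of a chunked summation) by timing each candidate and keeping the fastest; the function and parameter names are illustrative only and are unrelated to Spiral's actual machinery.

```python
# Toy parameter-search autotuner: enumerate candidate implementations
# (here, block sizes for a blocked sum), time each, keep the fastest.
import timeit

def blocked_sum(data, block):
    """Sum `data` in chunks of `block` elements."""
    total = 0
    for i in range(0, len(data), block):
        total += sum(data[i:i + block])
    return total

def autotune(data, candidate_blocks, repeats=5):
    """Return the block size with the lowest measured runtime."""
    best_block, best_time = None, float("inf")
    for block in candidate_blocks:
        # Take the minimum over several runs to reduce timing noise.
        t = min(timeit.repeat(lambda: blocked_sum(data, block),
                              repeat=repeats, number=10))
        if t < best_time:
            best_block, best_time = block, t
    return best_block

data = list(range(100_000))
best = autotune(data, [64, 256, 1024, 4096])
```

Real autotuners face the problems the abstract names: the search space must be identified per function, it grows combinatorially with parameters, and the best choice shifts with every platform change.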
10:30–12:00 Onward! Research Papers 3 and Onward! Films 1
A Literate Experimentation Manifesto
Presenting a Day in the Life of Video-based Requirements Engineering
14:00–15:30 Onward! Films 2
The serious game: weMakeWords
Ageing Society 2010
The intuitive control of smart home and office environments
Thursday, October 27
10:30–12:30 Onward! Essays Presentation