Contact Information

For more information please contact us at:
E: softsumm@microsoft.com

The role of funding agencies and industrial research in promoting computing science education - Workshop

The financial situation in Europe impacts funding for research and development. This workshop provides a forum for representatives of funding agencies, industry and academia to share concerns and best practices around maximizing funding.

 

Questions and issues to be discussed by two consecutive panels include:

 

Microsoft .NET Gadgeteer - Tutorial

Microsoft .NET Gadgeteer is a new prototyping platform that makes it easier to construct, program and shape new kinds of computing objects. It comprises modular hardware, software libraries and 3D CAD support. Together, these elements support the key activities involved in both the rapid prototyping and the small-scale production of custom embedded, interactive and connected devices. This will be a tutorial and live demo in which we show (through live coding and hardware building) how Gadgeteer can be used to illustrate concepts in programming with real hardware, embedded software design, networked sensors, and more. The aim is to give those present a taste of how Gadgeteer can be used in classrooms to inspire and educate computer scientists.
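
To give a flavour of the event-driven style that Gadgeteer encourages, here is a minimal, self-contained F# sketch. The Led and Button types below are illustrative stand-ins defined within the sketch itself, not the actual Gadgeteer module API (real Gadgeteer programs are typically C# projects whose module objects are generated by the Visual Studio designer).

```fsharp
// Toy stand-ins for hardware modules; names and members are illustrative only.
type Led() =
    let mutable isOn = false
    member this.Toggle() =
        isOn <- not isOn
        printfn "LED is now %s" (if isOn then "on" else "off")

type Button() =
    let pressed = Event<unit>()
    member this.Pressed = pressed.Publish        // subscribe to presses
    member this.SimulatePress() = pressed.Trigger()

let button = Button()
let led = Led()

// Wiring handlers to module events is the core Gadgeteer-style activity.
button.Pressed |> Event.add (fun () -> led.Toggle())

// With no hardware attached, simulate two presses.
button.SimulatePress()
button.SimulatePress()
```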

 

The Future of F#: Taming the Data Deluge with a Strongly Typed Functional Programming Language - Tutorial

This tutorial will teach F# from several angles, with particular attention to how a strongly typed functional language helps programmers work with data.
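
As a small taste of that style, the following F# sketch shows a typed, functional pipeline over a tiny data set; the data and field names are invented for illustration.

```fsharp
// A record type: the compiler infers and checks every field access below.
type Reading = { Station : string; Temperature : float }

let readings =
    [ { Station = "A"; Temperature = 21.3 }
      { Station = "B"; Temperature = 19.8 }
      { Station = "A"; Temperature = 22.1 } ]

// A typed pipeline: filter, project and average without any mutation.
let averageForA =
    readings
    |> List.filter (fun r -> r.Station = "A")
    |> List.averageBy (fun r -> r.Temperature)

printfn "Average for station A: %.1f" averageForA
```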

 

Semantic Computing for Software Agents - Session

Semantic Computing combines disciplines such as computational linguistics, artificial intelligence, multimedia, databases and services computing into an integrated theme while addressing their synergetic interactions. In this session, presentations will show how semantic technologies coupled with machine-learning approaches can address the meaning of large-scale heterogeneous data, help improve design for software engineering, and support the development of intelligent software agents.


In particular, the presentations will address the challenge of correctly disambiguating entities and relationships during the merging process, in order to compose large knowledge bases with high coverage. Given the high degree of uncertainty in the merging process, approaches based on probability, in particular graphical models, appear promising.
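
As a toy illustration of the probabilistic flavour of this merging problem, the sketch below combines invented feature likelihoods in a naive-Bayes fashion to score whether two mentions refer to the same entity. The numbers and feature names are made up, and the naive-Bayes combination is a deliberate simplification of the richer graphical models the presentations discuss.

```fsharp
// Invented likelihoods: P(feature | same entity) and P(feature | different entity).
let likelihoods =
    Map.ofList [ "same name",       (0.9, 0.10)
                 "same birth year", (0.8, 0.05)
                 "same employer",   (0.6, 0.20) ]

let prior = 0.01   // prior probability that two arbitrary mentions co-refer

// Naive-Bayes style combination of the observed features.
let probabilitySameEntity (observed : string list) =
    let pSame, pDiff =
        observed
        |> List.fold (fun (s, d) feature ->
            let ls, ld = likelihoods.[feature]
            (s * ls, d * ld)) (prior, 1.0 - prior)
    pSame / (pSame + pDiff)

printfn "%.3f" (probabilitySameEntity [ "same name"; "same birth year" ])
```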

 

Parallelism: Merging Theory and Practice - Session

Multicore computers are now the norm. Taking advantage of multiple cores requires parallel and concurrent programming. There is therefore a pressing need for courses that teach effective programming on multicore architectures. We believe that such courses should emphasize high-level abstractions for performance and correctness and be supported by tools. We present a set of freely available course materials for parallel and concurrent programming, along with Alpaca (A Lovely Parallelism And Concurrency Analyzer), a testing tool that addresses both performance and correctness concerns. These course materials can be used for a comprehensive parallel and concurrent programming course, a la carte throughout an existing curriculum, or as starting points for in-depth graduate special topics courses. We also discuss the tradeoffs we made in deciding what to include in the course materials.
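
A minimal F# example of the kind of high-level abstraction such a course might emphasize: switching from a sequential to a parallel map without introducing shared mutable state, so the result stays deterministic. The prime-checking workload is invented for illustration and is not taken from the course materials.

```fsharp
// A CPU-bound function applied to many independent inputs.
let isPrime n =
    n > 1 && (seq { 2 .. int (sqrt (float n)) } |> Seq.forall (fun d -> n % d <> 0))

let inputs = [| 2 .. 200000 |]

// The sequential and parallel versions differ only in the map used; the
// high-level abstraction keeps the parallel code race-free and deterministic.
let sequentialResults = inputs |> Array.map isPrime
let parallelResults   = inputs |> Array.Parallel.map isPrime

printfn "Results agree: %b" (sequentialResults = parallelResults)
```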

 

Beyond Multicore - Session

Over the last decade the evolution of parallel computer architectures has led to the dominance of commercial multi-core systems with up to eight or so cores on a single chip. At one point people speculated about having dozens or even thousands of conventional cores on a chip, but it is far from clear that such a device could be powered, cooled, or usefully supplied with data. This session will focus on alternative visions of the future and investigate the programming models they support in hardware and the challenges that targeting these will pose for software designers.

 

Data-driven Research at Web Scale - Session

This session will bring together a group of leaders in information retrieval and language modeling to discuss the challenges in information retrieval and how language-modeling approaches may help address some of them. The focus is on the use of n-gram models to further research in areas such as document representation and content analysis (e.g., clustering, classification, information extraction), query analysis (e.g., query suggestion, query reformulation), retrieval models and ranking, and spelling, as well as on access to n-grams as an enabler of experimental design. Previous efforts to deliver n-grams to the research community adopted a data-release approach with a cut-off on n-gram counts, which obscures long-tail effects; the service-based approach discussed here makes such studies possible. Moreover, previous efforts focused only on the document body, whereas the Web N-gram service includes richer types of textual content that can engage researchers in new innovations. The Web N-gram service provides access to petabytes of data via services, up to two orders of magnitude more than currently available offerings. Finally, by providing regular data refreshes, the Web N-gram service can open up new research directions in fields where the lack of dynamic data has locked academic researchers into conducting research over static and stale data sets.
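
The sketch below is only meant to make the underlying model concrete: it computes a maximum-likelihood bigram probability from invented counts. It does not show the Web N-gram service API itself; the counts, words and formula here are purely for illustration.

```fsharp
// Toy bigram and unigram counts; a real model would obtain its statistics from
// a service or corpus rather than from hand-written literals.
let bigramCounts =
    Map.ofList [ ("information", "retrieval"), 120
                 ("information", "overload"),  30 ]
let unigramCounts = Map.ofList [ "information", 200 ]

// Maximum-likelihood estimate: P(w2 | w1) = count(w1 w2) / count(w1).
let conditional w1 w2 =
    match Map.tryFind (w1, w2) bigramCounts, Map.tryFind w1 unigramCounts with
    | Some c12, Some c1 -> float c12 / float c1
    | _ -> 0.0

printfn "P(retrieval | information) = %.2f" (conditional "information" "retrieval")
```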

 

Programming in the Era of Cloud, Data and Devices - Session

Advances in programming languages are placing powerful, programmatic problem-solving tools in the hands of analytical programmers. But programming today exhibits a voracious appetite for information, and, even though our languages are wonderfully interoperable, they are in many ways information-sparse. There is always an impedance mismatch between the inner world of the language and the outer world of known services and data sources. Fluency here can achieve wonders in simplifying the modern programming problem. At the same time, data on the web is transitioning from ad hoc to professional services, with a range of access options from free to service-guaranteed. This track will look at themes in data and services on the web, and the languages and techniques we can use to consume them.
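
As one small illustration of bridging that impedance mismatch, the following F# sketch pulls untyped text from a hypothetical web endpoint and immediately gives it a typed shape; the URL, CSV format and Quote type are all invented for the example.

```fsharp
open System.Net

// Hypothetical endpoint returning CSV rows of "symbol,price"; not a real service.
let url = "http://example.com/prices.csv"

type Quote = { Symbol : string; Price : float }

// Bridge the impedance mismatch: pull untyped text from the outside world and
// immediately give it a typed shape the rest of the program can rely on.
let fetchQuotes () =
    use client = new WebClient()
    client.DownloadString(url).Split('\n')
    |> Array.filter (fun line -> line.Contains(","))
    |> Array.map (fun line ->
        let parts = line.Split(',')
        { Symbol = parts.[0].Trim(); Price = float (parts.[1].Trim()) })
```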

 

Verification in the Embedded Application Industry - Panel

Although formal software verification technology has made enormous progress, and a variety of methods and tools are available, we still do not see a broad adoption by industry. Thomas Kropf’s keynote discusses the key factors that influence the penetration speed of a new technology in the industrial software development process and provides a view on the issues that the software verification community will have to address before verification will be fully integrated into the industrial software development process. The keynote will provide particular insight into the field of automotive embedded software, where correctness is a critical requirement.


The subsequent panel discussion, assembling researchers and practitioners from the field, will consider how the verification technology that research has to offer can have a greater impact on industrial practice. Does industry realize the full potential of verification technology? What are the key factors for industrial adoption? How can industry make the best use of the verification technology that is available? Which pressing problems from industrial practice need more attention from research? The panel will provide first-hand opinions from researchers and practitioners active in the field.

 

Natural User Interactions Supported by Human-Centric Computing - Session

Human-Centric Computing (HCC), or Human-Computer Interaction (HCI), is what puts the “U”, the user, in Natural User Interactions (NUI). HCC is about understanding what users need and want from computing interactions. Several research methods exist for determining users’ needs and desires so that computing interactions produce the most effective outcomes with the least user effort. This session will explore these methods and discuss the importance of defining user needs before developing natural user input and output solutions.

 

Data, Results, Myths and Software: the Road to Empirical Software Engineering - Session

This session will present results from Microsoft Research and Fraunhofer IESE that leverage empirical software engineering. The first talk presents an analysis that investigates various myths in software development. It draws on results from a wide spectrum of studies, ranging from testing and cross-project software quality analysis to socio-technical systems. It also presents various aspects of the different software repositories at Microsoft, some of which are not obvious, and their implications for software development and productivity.

 

The second talk is on empirical evaluations of human-based software engineering methods, which create the scientific basis for “engineering” software and reduce the risks of software technology transfer.

 

Verified Software Experiments - Session

The Verified Software Initiative (VSI) aims for the software industry to embrace verification and verification technology throughout the software life cycle. Industry requires mature tools, experienced engineers, and calculable risk assessments. Experiments are key drivers of the VSI because they play a vital role in supplying industry with data about the maturity of a proposed method. Scientific applications always precede industrial adoption, and experiments are typically conducted by scientists. A goal of the VSI is to provide a repository of objective and realistic case studies that will enable industry to make informed decisions about the viability of verification technology. The session “Verified Software Experiments” focuses on the transition from laboratory experiments to broader adoption by industry and will give insights into the results of such experiments.

 

Technologies for Natural User Interactions - Panel

This session will focus on both research and commercial technologies for a variety of NUI input modalities. Body tracking based on 3D cameras is, of course, a key technology in Kinect, but numerous existing and upcoming technologies, from full 3D face tracking to brain-computer interfaces, feature in our envisionings of the future of computer-mediated living. We will look at the ways in which we specify such systems and their programming models, and explore potential paradigm shifts in how complex sensor input is converted to user intent.

 

Putting Real Tools in the Hands of Students - Session

This session will look at some of the new tools coming out of Microsoft Research that can be of great interest to professors and students.


PexForFun (http://www.pex4fun.com/) can be used to learn software programming at many levels, from high school all the way through graduate courses. With PexForFun, the student edits code in any browser, with IntelliSense, and the code is executed and analyzed in the cloud. PexForFun supports C#, Visual Basic and F#. It finds interesting and unexpected input values that help students understand what their code is actually doing. Under the hood, PexForFun uses dynamic symbolic execution to thoroughly explore feasible execution paths. The real fun starts with Coding Duels, where the student has to write code that implements a specification; PexForFun finds any discrepancies between the student’s code and the specification. PexForFun connects teachers, curriculum authors and students in a unique social experience, tracking and streaming progress updates in real time.
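
The following F# sketch gives a rough feel for what a Coding Duel involves: a hidden specification, a student attempt, and a search for an input on which they disagree. To stay self-contained it uses brute-force enumeration over a small range, which merely stands in for the dynamic symbolic execution Pex actually performs; the clamp example itself is invented.

```fsharp
// The hidden specification: clamp a value into the range [0, 100].
let secretSpec x = max 0 (min x 100)

// A plausible student attempt that forgets the lower bound.
let studentAttempt x = min x 100

// Stand-in for the tool: scan a small input range for a discrepancy.
let counterexample =
    seq { -1000 .. 1000 }
    |> Seq.tryFind (fun x -> secretSpec x <> studentAttempt x)

match counterexample with
| Some x -> printfn "Mismatch at %d: expected %d, got %d" x (secretSpec x) (studentAttempt x)
| None   -> printfn "No discrepancy found in the tested range"
```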


F# is a powerful new programming language, largely based on the functional programming paradigm, which runs on .NET or Mono across the Windows, Mac and Linux platforms. It also has a very successful professional implementation in Visual Studio 2010. To provide a gentle introduction to F# and show off its capabilities, Microsoft Research is developing a number of helpful resources. The easiest one to use will not require the user to download any software: it will just require a browser. This talk will demonstrate the TryF# website, showing and solving sample programming problems posed on the website.
Finally, we consider how we can execute tasks in parallel when they read and write the same data. The concurrent revisions programming model allows parallel programs to remain simple, race-free and deterministic by versioning shared state so that each task works with an independent snapshot.
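
The snippet below is a much-simplified F# illustration of the revisions idea, not the actual Concurrent Revisions library API: each task works on its own snapshot of the state, and the results are joined by a deterministic merge, so the outcome does not depend on scheduling.

```fsharp
// Shared state, represented immutably so each task gets an independent snapshot.
type State = { Hits : int; Log : string list }

let initial = { Hits = 0; Log = [] }

// Each "revision" receives its own snapshot and returns a revised version.
let taskA (s : State) = { s with Hits = s.Hits + 1; Log = "A ran" :: s.Log }
let taskB (s : State) = { s with Hits = s.Hits + 1; Log = "B ran" :: s.Log }

// A deterministic three-way merge relative to the original snapshot, so the
// outcome never depends on which task happened to finish first.
let merge parent x y =
    { Hits = parent.Hits + (x.Hits - parent.Hits) + (y.Hits - parent.Hits)
      Log  = x.Log @ y.Log }

// Run both revisions in parallel on the same snapshot, then join them.
let results =
    [ async { return taskA initial }
      async { return taskB initial } ]
    |> Async.Parallel
    |> Async.RunSynchronously

let final = merge initial results.[0] results.[1]
printfn "Hits = %d" final.Hits   // always 2, with a log order fixed by the merge
```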

 

Verified Computing Tools - Session

Over the last decade SAT and SMT solver technology has revolutionized our ability to prove many relevant properties of substantial pieces of software. That software verification is approaching practical feasibility today rests on two cornerstones: theoretical advances in the theory of verification and, even more so, practical improvements in the performance of proof engines. The presentations in this session show how software verification tools benefit from the enormous improvements in prover technology to automate different kinds of verification activities.

 

Reconfigurable Computing Comes of Age - Session

For many years researchers have tried to use reconfigurable computing technology (namely FPGAs) to help solve computationally demanding problems in domains like scientific computing, finance, security and military applications. Significant problems had to be overcome, ranging from non-ideal vendor architectures with limited support for dynamic reconfiguration to a lack of the programming-language abstractions and tools needed to make this technology accessible to mainstream programmers. This session draws together speakers who will report on the current state of the art in reconfigurable computing and show how computationally challenging problems involving databases, financial computing and network intrusion can now be solved effectively by the special processing capabilities of FPGAs. We will also predict the future impact of this technology on the mainstream software industry and identify some of the new research challenges in this field.

 

SAT/SMT Solvers - Workshop

Boolean SAT/SMT solvers have seen dramatic progress in the last decade and are being used in a diverse set of applications such as program analysis, testing, formal methods, program synthesis, computer security, AI and biology. Given this dramatic explosion in usage scenarios, there is great demand for new kinds of features and higher levels of performance from these solvers. This session will highlight recent developments around SMT, MAX-SAT, and parallel SAT engines.
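
For readers unfamiliar with the decision problem these engines solve, here is a tiny F# illustration of propositional satisfiability: a CNF formula checked by enumerating assignments. The brute-force search is only a stand-in to make the semantics concrete; real solvers use techniques such as conflict-driven clause learning and handle formulas many orders of magnitude larger.

```fsharp
// A CNF formula over variables 0..n-1: a list of clauses, each clause a list of
// literals, where (i, true) means "xi" and (i, false) means "not xi".
let formula =
    [ [ (0, true);  (1, true)  ]     // x0 OR x1
      [ (0, false); (2, true)  ]     // NOT x0 OR x2
      [ (1, false); (2, false) ] ]   // NOT x1 OR NOT x2

let numVars = 3

// Evaluate the CNF under an assignment given as a bool array.
let satisfies (assign : bool[]) =
    formula |> List.forall (fun clause ->
        clause |> List.exists (fun (v, sign) -> assign.[v] = sign))

// Enumerate all 2^n assignments; a real solver prunes this space aggressively.
let model =
    seq { 0 .. (1 <<< numVars) - 1 }
    |> Seq.map (fun bits -> Array.init numVars (fun i -> (bits >>> i) &&& 1 = 1))
    |> Seq.tryFind satisfies

match model with
| Some m -> printfn "SAT, e.g. x0=%b x1=%b x2=%b" m.[0] m.[1] m.[2]
| None   -> printfn "UNSAT"
```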

 

Software Engineering for Mobile Computing - Tutorial

Mobile devices, of all shapes and forms, are the fastest-growing computing segment. While mobile devices are ubiquitous, they offer limited computation, storage, and power. Cloud computing promises to fill this gap by providing computation and storage to mobile devices connected to the network. Developing software applications that use mobile platforms and cloud-based services requires innovations in software engineering and the availability of specialized tools. In this tutorial, the presenters will highlight the challenges of developing applications for the mobile platform. Specifically, a tutorial on running Project Hawaii will be shared with the academic and research community, with a focus on the associated tools developed. A tutorial on a toolkit for cross-compilation of Java-based mobile apps to C# will also be presented.

 

Sexy Types – Are we Done Yet? - Workshop

Functional programming languages have been a very productive laboratory for developing new language features, and in particular powerful type systems. The use of static typing represents the most widespread and successful application of formal verification. Many new features in mainstream programming languages like C# can be traced to innovations in research languages like Haskell. What will be the next wave of innovations in types to appear in mainstream languages? Exciting candidates include features like dependent types and linear types and special support for security. Or have we now reached a fixed point in the development of type systems? Are some of the latest developments in the type systems of languages like Haskell, Scala and Agda also candidates for adoption by mainstream languages? Have the most recent developments in type systems attained a level of complexity that puts them out of reach of mainstream programmers?
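
As one concrete example of expressive static typing that has already reached a practitioner-facing language, F#'s units of measure let the compiler reject dimensionally inconsistent code. The tiny sketch below is illustrative only and is not tied to any particular talk in the workshop.

```fsharp
[<Measure>] type m   // metres
[<Measure>] type s   // seconds

let distance = 100.0<m>
let time = 9.58<s>

// The compiler tracks the units: speed has type float<m/s>.
let speed : float<m/s> = distance / time

// The following line would be rejected at compile time: metres cannot be added to seconds.
// let nonsense = distance + time
```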