Latin American Academic Summit 2007 Keynote

Microsoft Academic Summit Keynote

Craig Mundie
Viña del Mar, Chile
May 9, 2007

CRAIG MUNDIE: Morning, everyone. It's wonderful to be back in Chile. There are a number of countries around the world that, in the last eight or nine years, I have in some sense adopted for Microsoft, trying to provide some personal sponsorship to how the company thinks about the evolution of our business and our relationships in the country and the region. Those countries have included China, India, Russia, and now Chile.

I think that there is a great opportunity within the Latin America region to advance the state of the society, the state of the economy, and, hopefully, through programs like this one, the beginnings of change in the evolution of computing itself.

In the remarks I'll give this morning, I want to help people to think about change, change in computer science, change in the way in which computing is delivered and utilized.

It's interesting to me how frequently I travel around the world and people will say, you know, "Don't you think that we've done most of what is going to be done in computing? Isn't this software kind of getting really mature? Shouldn't our kids go study some other interesting field?" And when I get asked that question, or Bill Gates gets asked that question, we almost want to laugh, because we have our own deep understanding of how small our real progress is in the grand scheme of what computing and software will ultimately achieve.

As a science, if you will, or an engineering discipline, this field is quite young compared to many that we know in other areas of the physical sciences. And yet it has already evolved to be something that's critically important, not just in the operational sense in our daily lives at work, at home, in communication and entertainment, but increasingly there isn't any field of scientific endeavor in which we'll make substantial progress, either within the discipline itself or in linking together the world's best researchers at a global level, without the aggressive use of information technology.

Despite all that, and our belief that the future is well ahead of us and ripe with opportunity, the business, if you will, of computer science in the university environment is not really, in my opinion, moving aggressively to address the fundamental changes in computing itself. We've had such a wealth of opportunity to apply computing as we have known it to operational challenges and interesting business problems that we have, to some extent, let our ability to advance the state of the art in computing itself degenerate. And yet I think the indicators of fundamental change in our business and in our technology base are coming, and I want to use my time with you this morning to help you understand what some of those changes are, and what the needs are going to be in the global computer science community to address these opportunities.

One of the things that I also find fascinating at this point in time is how easily we forget about the cyclical nature of the evolution of the computing paradigm. Computing started with big computers. They were room-sized. We put them in custom-made rooms. We went to the computer. Eventually we decided we were tired of going to the computer, so we put terminals at the ends of wires in other places so we could access it, and time-sharing was born. And as computing became more economical and higher in performance, and the microprocessor emerged, we were able to move computing successively from a big room to somewhere more local, and then ultimately to the emergence of personal computing. And with the steady progress in the microprocessor, we now have these things embedded in many things in our lives: our phones, our cars, our televisions, our game consoles. Just about everything electric has some form of computation involved in it, and hence some type of software.

And so the world has robustly embraced personal computing: we now have, depending on how you want to count, nominally a billion or more people who regularly use personal computers as we know them, and if you consider cell phones the beginning of an even more personal computer, the numbers are even larger.

Yet we're operating at a moment in time where because of the Internet evolution, because of the improvement in connectivity, we see appropriately a lot of interest again in centralized computing. And, in fact, some people now are even extrapolating and saying, well, maybe we should just put it all back in the center, we'll just put it in the cloud, we'll put it in special rooms again, these gargantuan datacenters, we'll provision everything from these centralized resources, and we'll just consume it at the edge.

And if you look at the underlying evolution of the computer sciences now, I think it's almost comical to think that we would abandon the amount of capacity that exists today, and that will continue to accrue at the edge of the network where people and the intelligent devices are. In fact, what we really have to do, probably for the first time in the history of computer science, is work hard to find a balanced system: a set of capabilities that exists and evolves as the computing local to you continues to grow in power, balanced against a set of integrative services that come from the global connectivity that the Internet provides.

And so while this pendulum essentially of centralized versus decentralized computing has swung back and forth a few times, I don't think it's going to come to rest in a centralized paradigm. In fact, I think if it comes to rest at all, it will come to rest in some balanced way.

When we think about the client, the computing client, the personal computer and now the evolved environment of more and more intelligent devices, I actually think of the client as we know it today, and despite all of our investment and millions of other people's contribution to the use of personal computers, as a highly underutilized resource. In fact, the programming paradigm that we have used has been one that tends to make the computer focus more on being responsive to the actions of its user. And we develop models of what we wanted the computer to do, and the way in which we programmed it to do those things that tended to really be biased a lot toward this responsive interactive model of computation, and other than a few computationally intense tasks, still relegated primarily to larger machines or for scientific activities, we have most of the world's computers running at somewhere in the low single digit percentages of utilization.

And so when you think of this model, in fact, it becomes quite natural to say, well, if these computers are really so underutilized, maybe we should go back to time sharing again, you know, we'll just put it all back in the cloud, and we'll just make these things really simple.

The work that we get from our computers today is not really as helpful as we'd like it to be, it's not as predictable in a uniform way in terms of delivering the answers or even the responsiveness that people would like. And so despite this tremendous progress, we have an underutilized resource that is not yet fulfilling people's expectations for what it could do, even constrained to the class of problems that we apply it to today.

And so as we move along, we've really built a set of tools, a model of utilization, that tends to focus on this type of highly interactive computation. And, in fact, the model that most people have in their mind, the model against which we've socialized and taught people about computing, is one where the road ahead would be essentially as this graph implies: after years of progress we've got to these nice 3-gigahertz computers, so we'll have 6-gigahertz, 12-gigahertz, 24-gigahertz machines, and life will be good; we'll just keep doing it the way we've always done it. But, in fact, this is not going to be the path to the new world, because physics has intervened in this progression. The traditional model of software development has been eating a free lunch for a long time: we didn't have to think very hard about how to get programs with more capability, higher performance, or improved interactivity, because the hardware world brought us that without a huge amount of effort on our part.

But basically those days I'll say either have ended or are ending, and you can see this if you look back over the last few years just at a commercial level, and notice that AMD and Intel and the microprocessor companies at large stopped trumpeting the rapid acceleration of clock rate as the primary indicator of progress within their industry. The technical people have begun to parse more carefully what it was that Gordon Moore really said when he talked about Moore's Law. If you ask the uninitiated or the non-super-technical person, they would tell you, "Oh, I know what Moore's Law meant. It meant that computers got twice as fast every 18 months." Gordon didn't say anything at all about how fast computers would get; he only spoke about how transistor density would track, and indeed transistor density has improved, and will continue to improve, at some exponential rate.
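The distinction can be made concrete with a toy projection. The numbers below are purely illustrative, not actual chip figures: the point is simply that transistor density keeps compounding while clock rate stays roughly flat.

```python
# Purely illustrative numbers, not actual chip figures: Moore's
# observation concerns transistor density, not clock rate.
def project(start, factor, periods):
    """Compound `start` by `factor` once per period."""
    values = [start]
    for _ in range(periods):
        values.append(values[-1] * factor)
    return values

# Transistor budget (millions) doubling every period:
print(project(100, 2, 5))    # [100, 200, 400, 800, 1600, 3200]
# Clock rate (GHz) essentially flat over the same span:
print(project(3.0, 1.0, 5))  # [3.0, 3.0, 3.0, 3.0, 3.0, 3.0]
```

The growing gap between the two series is exactly the capacity that only parallel software can harvest.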

Along the way we tended to use other tricks of the trade in electronics, in particular lowering voltage as we increased clock rate, because in the highly popular CMOS technologies higher clock speeds generate more heat. The real challenge becomes a mechanical one: how do you remove the heat from these systems; or, in a battery-operated environment, how do you keep all of that wasted power from costing you the battery lifetime that you want.

So, the world of hardware design has actually given up to a large degree on the quest for ever higher clock rates, and, in fact, has recognized that architecturally the way forward is to put more processors on a single die, and ultimately to integrate whole systems on single chips.

But this actually introduces for the first time a mandatory requirement for the software people of the world to come together and work harder than we've worked in the past to solve the problem of parallel program development and execution.

Parallel programming has been a Holy Grail of sorts in the computer science field for decades. It's a hard problem, and only the people who had no choice, in essence the supercomputer people focused on technical computations, have for decades consistently pursued steady improvements in it, evolving algorithms, codes, and programming models in order to capitalize on the scale of computing that came from arraying large numbers of microprocessors.

But the vast majority of the world's programmers have ignored this problem. The vast amount of effort in producing tools and ultimately applications for computing have ignored this problem. We ignored it because we could. It was a hard one, and we didn't have to solve it, because the hardware people were giving us a free lunch.
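As a minimal sketch of the kind of data-parallel decomposition being asked of everyday programmers here, the example below splits a sum into chunks that are reduced concurrently, using Python's standard concurrent.futures; the function names are illustrative, not from any particular toolkit.

```python
# A minimal sketch of data-parallel decomposition using Python's
# standard concurrent.futures. ThreadPoolExecutor keeps the example
# simple and portable; for CPU-bound work in CPython you would swap in
# ProcessPoolExecutor to sidestep the global interpreter lock.
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    # Each worker independently reduces its own slice of the data.
    return sum(chunk)

def parallel_sum(data, workers=4):
    # Split the input into roughly equal chunks, one per worker.
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # Combine the partial results; order does not matter for a sum.
        return sum(pool.map(partial_sum, chunks))

print(parallel_sum(list(range(1_000_000))))  # 499999500000
```

A sum is the easy case precisely because the partial results compose without coordination; the hard problems Mundie describes arise when the chunks must share state.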

That world is over, and if we are going to continue to get the benefit of what the electronics will provide, the global computer science community now must rise to the challenge not just to think about how would we create new tools, but to begin the process of socializing this whole concept not just to the research community but to the everyday programmers of the world. We need to make this a task that is more manageable.

With or without this evolution of the microprocessor itself, we were going to have to undertake this task. And the reason is the cloud that we keep talking about. It's essentially another world of highly parallel distributed computation, and ultimately building these hybridized solutions is going to require that we construct these systems in a much more rigorous way than the one we've used to build programs that ran on single computers in the past.

The brittleness, the fragility of large-scale software systems as we know them will not continue to serve us well, either in constructing these large, distributed, asynchronous applications of the future, or in producing solutions whose performance scales up with the capacity that this evolution of the microprocessor is going to deliver.

As a community, we face another challenge, what I call the complexity challenge. These graphs show studies that were unrelated to Microsoft, not done by Microsoft, but by a group of people who began to study large-scale application development. There were two interesting results, and I'll comment on our own experience in this matter as well.

On the left you see a graph that shows what percentage of the projects succeeded at various levels as a function of their size. So, as they move from trivial, 100 lines of code, to substantial, a million or 10 million lines of code, you can see a fairly dramatic change in outcomes. In fact, only 13 percent of projects of any material size were completed on time, and almost two-thirds of them were cancelled. They never worked when they actually attempted to complete them.

And this is because complexity grows sort of exponentially with the size of the program, given the traditional way in which we construct software. It also grows because communication is an N-squared problem: as you seek to write larger and larger pieces of code, you employ more and more people to do it, and so there's complexity in the interaction of those people.
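The N-squared coordination cost can be made concrete: with n contributors, every pair is a potential communication channel, so the number of channels is n(n-1)/2. A tiny illustration:

```python
# With n contributors, every pair is a potential communication channel,
# so coordination cost grows as n*(n-1)/2, roughly the square of n.
def channels(n):
    return n * (n - 1) // 2

for team in (2, 10, 100, 1000):
    print(team, channels(team))
# 2 1
# 10 45
# 100 4950
# 1000 499500
```

Going from a 10-person to a 1000-person project multiplies headcount by 100 but potential communication paths by more than 10,000, which is why overhead swallows coding time in the larger projects.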

And, in fact, that is also shown in the graph on the right, where as the project size increases the actual amount of time that people spend coding, which is the big orange block in the middle, actually diminishes; in million-line projects in this study, only 18 percent of the time was actually spent writing the code. The rest of it you could think of as overhead: the design activity, support and documentation, or, increasingly, what became the largest component, finding and removing defects in the code, despite the best efforts of good software engineering practice.

In fact, I contend that software development is one of the most important technical activities. It's a large scale business, and yet it's a discipline that I contend has not made the ultimate leap to a true engineering discipline.

What do I think is required to do that? I think what's required is basically a large-scale system of collaboration among people that has some type of formal composition. Software, despite some efforts in this area, has not really evolved in a way where we have formal composition of large-scale systems. Without that, if we built bridges or skyscrapers or other large-scale engineering projects in the physical world the way that we build software, we'd still have a lot of buildings that blew over in the wind or were subject to some type of unpredicted failure, simply because we haven't evolved the same kind of hierarchical composition that is necessary in these other large-scale systems. Our failures in the past were not deemed as critical. And certainly Microsoft itself grew up in an environment where we were focused a lot more on building the functionality that the marketplace was demanding into the product than on dealing with some of the more difficult challenges, particularly around security and reliability. And as the scale of these software systems has increased, those challenges become more and more difficult.

Our own experience at Microsoft relative to this in some ways mimics the graphs, but people don't fully appreciate the scale at which we're already trying to solve these problems. The new versions of our product that came out this year, Windows and its related componentry, and the new Office System, each of those are about 100 million lines of code, and the number of people who participate in the construction numbers in the thousands.

And so we have, in fact, struggled over the years and have been able to apply our research assets to creating tools to try to facilitate the construction in some reliable way of these ever larger systems. But it's a matter of public record that we too have not been as predictable as we would like to be in our delivery of these large scale systems, and despite great efforts, particularly for five years now in our Trustworthy Computing initiative, we still don't have a level of perfection that we would like to have in critical areas like the security of these large scale systems, particularly when they get composed together in the field.

And so with or without the requirement to solve the problems of concurrency or parallel execution, whether locally or in the large scale distributed system sense, I think our community is at a point where it must now address this question, because the intrinsic power of these systems will ultimately lead all people to follow in the footsteps of Microsoft and build larger and larger, more complicated systems in order to solve the tasks at hand. And other people will not have the level of resources in personnel or technology to throw at the problem to the degree that we have, and I predict that as the rest of the population attempts to scale beyond a million and 10 million line systems, that the failure rates would essentially increase even more, and it's questionable whether people would ever produce large scale software that had the right attributes of function, performance, security, reliability, and any type of controlled cost or schedule.

And so I think that there is much to be done in computer science itself, and it is not any longer going to be sufficient to have the bulk of our research efforts in the university targeted only at the use of computing but rather we must pay some attention now to computing itself.

So, let's talk about what happens if we think about a world of computers perhaps in just the next five to seven years where the individual microprocessor based system could be as much as 50 to 100 times more powerful than the chips that we know today, at the same price point, at the same or lower power consumption.

What does that really imply? Well, certainly at Microsoft we recognize that if we didn't stop and think about this in some different way, we'd get a picture that looks like this one, or, in fact, maybe even more extreme. Instead of doing a little work and mostly idle, we'd be doing still a little work, using even less of the processing capability in a client, and we would be idle an even larger percentage of the time.

So, overcoming the concurrency and complexity challenges is a requirement, it's a simultaneous requirement in order to be able to move from this world of wildly underutilized computing resources to a world where these things really become fully productive forms of computation. And so we've been doing quite a bit of thinking about what we have to do to do that.

And in a sense we really need to invert the attributes of applications and the tools with which we build these applications from those that have evolved over the last 30 or 40 years in computing. In fact, if you look at this dual challenge of concurrency and complexity, I see no choice but to pursue a technical basis to have verifiable composability in the way in which we construct future large scale applications.

In doing so, if you look at the adjectives on this slide, they are almost the inverse of the way in which we have taught people and provided tools to people to build most historical software systems. We need them to become loosely coupled in construction, but, in fact, most of them today are still tightly coupled in construction and design.

We need them to be asynchronous, because these things operate increasingly in a networked environment, and the ability to control latencies, whether for input/output purposes to mass storage or through the network, is just going to get worse and worse.

We need them to be highly concurrent as opposed to single threaded, where the only concurrency came from a set of system functions that were done as infrastructure in the operation of the computer, but we really didn't focus much on the development of concurrency in the applications themselves.
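A toy sketch of this loosely coupled, asynchronous, concurrent style, using Python's standard asyncio: the components interact only through a queue rather than calling each other directly, and all of the names here are illustrative.

```python
# Two loosely coupled components that share no state and communicate
# only by passing messages through a queue.
import asyncio

async def producer(queue, items):
    for item in items:
        await queue.put(item)   # hand work off without blocking
    await queue.put(None)       # sentinel: no more work

async def consumer(queue, results):
    while True:
        item = await queue.get()
        if item is None:
            break
        results.append(item * 2)  # stand-in for real processing

async def main():
    queue = asyncio.Queue()
    results = []
    # Run both components concurrently; neither calls the other.
    await asyncio.gather(producer(queue, [1, 2, 3]),
                         consumer(queue, results))
    return results

print(asyncio.run(main()))  # [2, 4, 6]
```

Because the coupling is confined to the queue, either side can be replaced, moved across a network boundary, or multiplied into many workers without rewriting the other, which is the property the adjectives above are driving at.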

They need to be composable so that there is the ability to stand on the shoulders of other people in constructing these systems at ever larger scales relative to size and sophistication.

They need to be decentralized, because we want them to operate in a computer system that will see intelligence distributed to a degree that we have not really seen in the past. Peripheral devices will increasingly be intelligent. There may be microphones that do some type of preprocessing or filtering in the microphone itself, so that the output stream is not the raw waveform.

As intelligence becomes more specialized and more available, and our ability to compose these things in single-chip systems continues to increase, we're actually going to see a much more heterogeneous set of computing architectures applied to the range of tasks at hand, whether that's media handling or the components of machine vision, machine learning, and listening. You know, I personally predict that these things will not continue to be done by the rapid multiplexing of a dedicated CPU, any more than we have left the world of graphics to evolve as merely a set of routines that run in the main processor. We recognized that we could create custom architectures for graphics and video acceleration, and they have become an integral part of the way we think about building computers that people interact with. The same evolution occurred with floating-point computation as we moved from integer to floating point, and then integrated that into the architecture of these machines.

So, I think we're looking at a future where the hardware technology is going to give us a more powerful medium on which to compute, but also a more complex medium on which to compute.

And yet society is going to demand that these systems are reliable and resilient. The level of brittleness and unpredictability will be deemed unacceptable as our society becomes increasingly dependent on all of these computational capabilities. They will be embedded in virtually everything that we do and all the physical devices we interact with, and yet with that incredible scale and complexity, the society will begin to demand a level of utility reliability from all of these things as the world's most complicated system, and it will dwarf the complexity of managing the electric power grids and the telephone systems and other large scale infrastructural elements of our existing society.

And so it falls to us to begin to look not just for ways to apply these computing facilities to more and more tasks, but to fully utilize these capabilities. In some sense, as the world becomes even more ecologically oriented and concerned, even computing itself will come under scrutiny relative to both the physics of energy consumption, as well as the remnants of the computing capabilities that we build in hardware, and then dispose of as we move forward. All of these things are going to have to come within the purview of the collective computer science community.

So, when I think about desktop or phone or television or automotive classes of computers, the clients where the people are, and the kind of fully productive utilization that should result from breakthroughs in this area, I have my own list now of adjectives that describe the attributes of these future applications.

Some of these attributes we will add to the large-scale and popular applications like Office, for example, at Microsoft, but to some degree, for us and for you and the rest of the world, there is a blank canvas and a very broad palette of paints from which to start to paint new pictures, wholly new pictures, about what computation can be used to provide.

And so the way I think about this now is that it really isn't going to be desirable to just say that the computer sits there, it's mostly idle, and it waits for its master, so to speak, to fall on the keyboard or on the mouse, and then it should react. I think that the computer has to become more and more intelligent, if you will, in the sense of being able to be a helpful assistant.

Eric Horvitz is here from Microsoft, and his group is one that has for many years now been looking for ways to use modeling and predictive capabilities to have the computer anticipate what might be useful, and to be more capable of behaving and making decisions as you would make a decision, or as a trusted assistant might make a decision. And this, of course, would move computing to another level of quality in terms of its assistance to people, and I think that these have to be attributes that we see going forward.

Clearly, reliability has to be there; predictability, in the sense of both responsiveness and result, I think has to be there. Where they're not predictable, I contend that the surprise has to move from being surprised at how badly the system does something to being surprised at how well it did something, how much it was able to anticipate your need and provide a pleasant surprise in terms of its assistance.

It will have to be more humanistic in the way that we interact with these systems, because today we've only serviced about the top 1.5 billion people on the planet in terms of their demographic and income profile, and if we're really going to help the other 5 billion people on the planet, who have not benefited from access to these technologies, we're going to have to change the way that they interact with them. Less formal knowledge and experience will be required in order to get benefit.

The systems will have to perform in ways that we expect. Importantly, I think they have to become contextually aware. People are always contextually aware, or should be: we make decisions and take action based on the environment around us, the people we're talking to, the tasks we're trying to perform. But the computer really doesn't have this kind of sensing the way people do. Increasingly, though, all the forms of sensing that people have, computers will have at low cost. And we need to figure out some way to represent that context, to make it part of the platform, and to allow people to be creative in the employment of the context, in conjunction with the technical approach to the problem, in order to move this forward.

I think a requirement to do this is much greater emphasis on the development of models, models of context, models of behavior, and finding ways to incorporate those as underpinnings of the way in which people think about architecting applications in the future.

Computing should be more personalized, increasingly focused on your individual needs and trying to do the work to convert the generic to the specific, to convert the broad information space into something that is peculiarly useful to you.

The computer will have to adapt to people to a much greater degree. And the way in which you'll interface with it, I think both in the visual sense and others, will become much more immersive. The ability, if you will, to commune with the computing and the result and its ability to bring things forward for your consideration, all of these things must change, and they all have to change in ways that I consider to be not incremental improvements to that which we have become familiar, but, in fact, quite profound changes.

So, this next generation of applications has to ultimately span this entire spectrum of concurrency. It has to include software and services and allow the hybridization of them in the construction of these future applications. And it has to span from the very, very local embedded computing environment to the very, very global environment of connectivity through the Internet.

And so Microsoft itself is very focused on this idea that the future of computing and the value that gets delivered comes from this combination of software that runs in these powerful local computers, and the services that are provisioned sort of like the power plants of the Internet, and which help to produce the ability to communicate and integrate across all of this other information and computation asset.

One of the things that is interesting to me from a technical point of view is that we've done quite a bit of research in the last five years on how we would begin to make models of computation and programming that would facilitate the movement into this highly concurrent world of the future and of distributed systems.

Last December, as part of a business we're incubating in Microsoft Research, we formed a business unit to focus on software for robotics. We have an SDK that was released in December, and I am particularly fond of this activity, not just because I think robotics may be an interesting and important field in our society in the future, but because the way in which the software is developed has all of the attributes that I described on the earlier slide. Robots are interesting today in that they tend to be composed from a lot of parts. There are a lot of autonomous subsystems that need to operate. You want them to be reliable in performing the functions they're tasked to do. And many of them are increasingly put into mission-critical, even life-at-risk situations. These are the precursors of commercial airplanes that will fly themselves without any pilots, and of many other applications within our society, particularly in an aging population.

And so I think that this is an interesting and intriguing field of study. One of the things that we're doing now is injecting small robots into the computer science curriculum in a number of universities, and having everything the students are taught about computing be taught in the context of robotics. It gives people a new tool: not just the abstraction of the computer itself, but a physical device that is expressive, to some extent, in the way people utilize computational resources and assemble these complicated systems, and even build systems that are cooperative.

And so many of the things that are interesting to young people are embodied in robotics, and I think many of the things that should be of interest to the computer science community are similarly embodied in the toolkits that are necessary to extend our reach and the ability to build and operate robotic systems, and I would encourage you to think about that full range of application, and even look at these technologies we've done as a precursor to these future types of systems.

So, I think computing is going to transform our society. I don't think anyone really disputes that it already has transformed the global society, not just in terms of how people live and work, but certainly in the globalization effect that is well documented now, even in the popular vernacular, in books like "The World Is Flat" by Friedman. It's affecting energy, it's affecting entertainment, and it's certainly an issue in national security, not just in the technical sense but in the sense of the economy as a component of future national security. All fields of science and engineering are going to require this.

And I think importantly for our society the two areas that are the largest single expenditure in almost every government of the world are healthcare and education. And these two sectors have been particularly resistant to transformation through the insertion of these types of advanced technologies. And yet there's nobody in the world who is satisfied, whether you're a consumer, a businessperson or a member of the policy community, with the outcomes that we have in either education or healthcare relative to the level of investment that's made. And that's true if we only consider the wealthier part of the population of the planet, and when you extend this to the entire 6.5 billion people, it's clear that we have a lot of work to do, and that the models that we have historically employed will not scale to the needs of these populations.

So, I think technology is absolutely essential to scaling healthcare. We've been able to improve diagnostic capability but not price performance of healthcare in the rich world, and we haven't really been able to scale rich world healthcare to the emerging markets at all. And so I think that technology is really the only vehicle that's largely going to be capable of both lowering the cost and improving the quality and augmenting the scalability of healthcare delivery relative to the number of trained healthcare professionals, and we need to really focus on this challenge.

Ultimately it's going to be about providing computing for all, because if you want the entire population of the planet to improve its lot in life, you can't do it by assuming that the top billion and a half earners are going to pay for the other 5 billion for the rest of their lives. They all have to become contributing members of society, and again I think information and computing technology is really going to be key to bringing the bottom of the pyramid and the expanding middle of the pyramid up to a world standard from a quality-of-life point of view, and to improving their own productivity so that these things become self-supporting from an economic point of view.

Largely, the world of computing has provided great things for the 1.5 billion richest people on the planet. So we've got most of the top of the pyramid covered, and about 30 or 40 percent of the global middle of the pyramid have really started to benefit. The real challenge for all of us is to figure out how we reach the rest of the middle, and ultimately, with government, philanthropy, and the roles of business and technology people, how we inject enough technology, and the infrastructure to support it, into the bottom of the pyramid so that productivity, healthcare, and education become three solved problems for humanity. And I believe that we ultimately have an important role to play in that.

An example of how I think that we may be able to move in this direction is a project we've been doing in our Beijing lab called Phone Plus, where we've recognized the popularity and ubiquity of the cell phone, and recognized that the cell phone is built today on microprocessors that are of an equal capability to what a personal computer was not that many years ago.

So, it's not for lack of a computational asset that most of the world's people are unable to move ahead; the question is how we create a device that is economical and has the utility a phone historically has had, but that can also become the onramp to a richer world of computation.

So, we set out to ask how we could take the phone and morph it so that it also had the capability to hook up to any local large display, including a traditional analog television. We've taken some of our WebTV technologies and other things and brought them together. The ability to hook up a keyboard, a mouse, and any existing display technology to a cell phone, by wired or wireless means, actually creates a very inexpensive way to give people some type of personal computer. And while it may not match the capabilities of the state-of-the-art desktop or laptop that we may all enjoy, relative to people who have no computing at all, or who have only shared access to it in a kiosk or a library or an educational environment, the scale at which we could potentially see this deployed is dramatically larger. But there are many, many interesting computer science and application challenges in figuring out how you take these devices and have them play both roles, and provide the connectivity.

So, these are among the opportunities, whether at the application level or at the more fundamental computer science level, that I think are so intriguing, and why there is such a huge range of opportunity for all of us to get together and collaborate, and to do it in ways that either advance the state of the art or advance its use in ways that are societally important and locally relevant. And so the formation of this collaborative research federation is, I think, really important in bringing the community of people in Latin America into participation.

So, I'd like to ask Ignacio and Sergio to join me on the stage, and just have a few brief comments about how we've put this research federation together. Gentlemen? (Applause.)

PARTICIPANT: (Not translated.)

(Applause.)

CRAIG MUNDIE: Well, I'm very excited about the creation of this virtual institute. We've had nine other institutes focused on different areas, but this is the first time people came up with the idea that we should build a virtual institute intended to bring together the resources of an entire region. And so I'm going to be very interested to see the outcome.

So, let me close with that in terms of the formal remarks, and I think we have 15 minutes or so to do some questions and answers, unless the organizers tell me differently. So, there are some microphones if you have any questions. Please just raise your hand and identify yourself, and we'll see what happens. And you can speak in either Spanish or English, because I have friends in the back of the room. (Laughter.) Everything is perfectly clear. (Laughter.)

QUESTION: When do you think, let's say, four-core personal computers might be in the market?

CRAIG MUNDIE: I believe four-core machines will be in the market this year. My laptop has a two-core processor in it now, and the four-core chips are basically in the pipeline, so they're imminent. In fact, I have a quad-core machine at home already, so in a sense they're already in the market. They certainly aren't the dominant part of the market, but you can buy or build four-core machines today.

QUESTION: There is a lot of research by Microsoft Research on human-machine interfaces and computer graphics and things like that. I wonder if you could expand, or tell us in a couple of minutes, the interesting and exciting things that you guys are working on that will be delivered in the next few years in terms of, as I said, human-machine interfaces.

CRAIG MUNDIE: Well, one of the areas that has been a longstanding research area for the company is natural language processing. And I think that as these machines get more powerful, we're going to cross a threshold where the ability not just to recognize a spoken word in some literal sense, but to do that with the full context of natural language understanding, will be a qualitatively important change in the man-machine interface.

We're doing a lot of work in video. Today, most video application is toward entertainment, but increasingly we think that will turn into machine vision, and that machine vision will become an important part of the context and sensing of the environment that computers have. That will then also begin to change the modality of interaction with people, particularly coupled with voice.

We just acquired a company called TellMe in California, which is one of the leaders in deploying voice, over a telephone-type environment, as a way of asking for information or controlling things like buying an airline ticket online. And we believe there will be a big, big change in voice as an important modality of input and output.

In a sense, all of the human senses are interesting to think about, relative either to contributing context or to some way of moving beyond the keyboard, the mouse, and traditional two-dimensional displays. Displays are a critically important technology, and we're doing work there in everything from novel ways to actually bend light and project it, to get either economical or small-form-factor displays. (Mary Zawinski ?), who is here, has been leading a lot of our work in large display environments. One of the things that's already been measured is that as you increase the number of displays, or essentially the number of square inches of display that people have to work with, their productivity increases. So, for those of us who have the luxury of buying them: I have three LCD panels on everything except my laptop, and I actually built a machine at home with four that I use for photo editing and other personal hobbies.

And so it is very clear that display technologies, making them bigger, making them more ubiquitous, and making them less expensive, are also important areas, as is how to use them. Some of the paradigms, even pointing and clicking, as Mary would be happy to explain to you, don't work so well when you've got a big display. You want to drag things around. You need whole new concepts of acceleration, not just in the movement and speed of the pointer, but in moving things or picking a target, so that the mechanics, the biomechanics, of that part of the man-machine interface have to improve as well. So, we touch on all those areas now.

Another question?

QUESTION: I have two questions. (Not translated.)

CRAIG MUNDIE: I only heard one question, so I'll answer that one.

There are some fundamental problems in the models with which we express programs today. The bulk of people write programs using procedural programming languages that don't in any natural way expose the underlying concurrency, and that, by and large, have not had type-safe interfaces or any kind of mechanical contract reasoning that can be applied to the interaction of two modules. And as a result, it's very hard to take modular programs and assemble them with any certainty as to whether they'll do what they were expected to do when you actually put them together.
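A minimal sketch of that missing contract reasoning, in Python, with entirely hypothetical module and function names: two independently written pieces of code can disagree about an unstated assumption, here a unit of measure, and nothing in the interface catches it. The checked variant at the end is only one of many possible ways to make such a contract mechanical.

```python
# Hypothetical illustration: two independently written "modules" whose
# interfaces carry no machine-checkable contract.  Module A returns a
# distance in kilometers; module B expects meters.

def leg_distance_km(stops: int) -> float:
    """Module A: distance of a delivery leg, in KILOMETERS."""
    return stops * 2.5

def fuel_needed(distance_m: float) -> float:
    """Module B: liters of fuel for a distance given in METERS."""
    return distance_m / 10000.0

# The composition runs happily (both sides are floats), but the answer
# is off by a factor of 1000: the unit assumption was never part of the
# interface, so no tool can object.
silently_wrong = fuel_needed(leg_distance_km(4))

# One way to make the contract mechanical: wrap the quantity in a type
# that the receiving module actually checks at the boundary.
class Meters(float):
    pass

def fuel_needed_checked(distance: Meters) -> float:
    if not isinstance(distance, Meters):
        raise TypeError("fuel_needed_checked requires a distance in Meters")
    return distance / 10000.0
```

With the checked interface, plugging the kilometer-producing module straight into the meter-consuming one fails loudly instead of computing a plausible-looking wrong number.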

The problem is actually compounded in a very fundamental way when you start to say that the system is other than a single-threaded piece of code that may have been constructed out of modules. As soon as you say that there's any form of parallel execution, you introduce some requirement for synchronization. And the entire model of locks as we know them, mutexes, does not compose; it just can't compose. And so for people to say, "I want to build modular software and then build it into some large-scale asynchronous system, but I synchronize using mutexes," or any other traditional locking primitive, basically means you can never get composition.
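As a hedged sketch of why locks fail to compose (the account and transfer names here are invented for the example): each operation below is perfectly correct under its own lock, yet the composed "transfer" is not atomic, and at the point between its two steps another thread auditing the system would see the invariant broken. No combination of the per-account locks, held correctly, prevents that.

```python
import threading

class Account:
    """Each account protects its own balance with its own lock."""
    def __init__(self, balance: int):
        self.balance = balance
        self.lock = threading.Lock()

    def withdraw(self, amount: int) -> None:
        with self.lock:          # correct in isolation
            self.balance -= amount

    def deposit(self, amount: int) -> None:
        with self.lock:          # correct in isolation
            self.balance += amount

a = Account(100)
b = Account(100)

# The composed operation: money leaves `a`...
a.withdraw(30)

# ...and at this instant, an auditor on another thread could observe
# the system invariant (total == 200) broken, even though every
# individual lock was used correctly.
observed_total = a.balance + b.balance   # 170, not 200

b.deposit(30)
final_total = a.balance + b.balance      # invariant restored: 200
```

The usual fix, acquiring both locks for the whole transfer, reintroduces its own composition problem: two transfers taking the same pair of locks in opposite orders can deadlock, which is exactly the sense in which the locking primitives do not compose.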

And so I think these are the fundamental things that have to change. We have to come up with a programming paradigm that is essentially a lock-free model of controlling asynchronous execution, and we have to move toward languages that allow us to reason mechanically about the correctness, if you will, of at least the interaction between two independently written pieces of code, and ultimately to have tools that will allow people to assemble things.
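One concrete shape such a paradigm can take, offered here as an assumption-laden sketch rather than Microsoft's actual approach, is message passing: give every piece of mutable state a single owner thread and interact with it only through messages, so callers never hold user-visible locks that could fail to compose. (Python's `queue.Queue` does use a lock internally, but none of that is exposed to, or mis-composable by, the caller.)

```python
import queue
import threading

# A tiny "actor": one thread owns the counter state and is the only
# code that ever touches it.  Clients send messages; they never lock.
def counter_actor(inbox: queue.Queue) -> None:
    count = 0
    while True:
        msg, reply = inbox.get()
        if msg == "inc":
            count += 1
        elif msg == "get":
            reply.put(count)     # answer on the caller's reply queue
        elif msg == "stop":
            return

inbox: queue.Queue = queue.Queue()
owner = threading.Thread(target=counter_actor, args=(inbox,))
owner.start()

for _ in range(5):
    inbox.put(("inc", None))

# Because the owner processes its mailbox in order, this "get" is
# guaranteed to see all five increments.
reply: queue.Queue = queue.Queue()
inbox.put(("get", reply))
result = reply.get()

inbox.put(("stop", None))
owner.join()
```

Two such components can be wired together by sending messages to each other, and since neither ever blocks holding a caller-visible lock, composing them cannot manufacture the deadlocks that composing mutex-based modules can.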

In a sense, we don't have the tools. In the physical world, at least, if you take a bolt and a nut and the nut isn't the right size and you try to screw them together, you pretty much know it doesn't work. But in software, you can screw a software nut and a software bolt together and the tools will happily say, "Oh yeah, that's okay, just plug them in." We don't have any way to notify people that there are fundamental mismatches between the ways the pieces of software were written or what their underlying assumptions were.

And so I think in many ways the very basic tools that we have developed and taught people about how to write programs are going to have to change in order to operate in this new world.

Okay, time is up. Thank you very much. I hope you enjoyed it. (Applause.)