Inventing the Software that Invents the Future

Worried about today’s stock market activity? Retreat with me into the security of the bright future that awaits.

Microsoft’s Craig Mundie (paterfamilias of the Institute for Advanced Technology in Governments) is on a college tour across the nation. The trip is something of a reprise of the jaunts Bill Gates famously made over the years, when he would string together visits to campuses partly to evangelize, partly to recruit, and mostly to get new ideas from bright young (and contrarian) minds. The Seattle paper today casts these tours as filling the role of Microsoft’s “chief inspiration officer” (“Mundie gives campuses peek at tech’s future”).

What’s he telling this generation of future technologists at Princeton, NYU, the University of Michigan, UC Berkeley, and UC San Diego? Here’s a telling quote from Craig in a much longer interview just posted at Knowledge@Wharton:

Many of the failings — not just of our software but of all large software — are that the security problems, the lack of reliability, the difficulty in maintenance, the difficulty in testing, all of these things are symptomatic of software still being too much of an art form and too little of an engineering discipline. I believe that over the next 10 to 20 years, you’re going to see a dramatic shift in the way people write software…

Software will be built through the composition, at every scale, of a lot of distributed, asynchronous services. The question is: How can you specify, compose and operate those services?

What will such software enable, when “a lot” means billions of distributed, asynchronous services? The future may look a lot like what Kevin Kelly laid out in a mind-tripping TED talk at the dawn of 2008 (“Predicting the Next 5000 Days of the Web”). But Kevin’s conclusions were anticipated in a pathbreaking 1997 article for Microsoft Research by Gordon Bell and Jim Gray, “The Revolution Yet to Happen.” And their writing appropriately cited Vannevar Bush’s archetypal 1945 Atlantic Monthly article “As We May Think” for its vision of the globally networked uber-library, Memex.

And for a peek into how we’re beginning that “changed way of software”: I’ve written before that we’re using robotics as an exemplar and a testbed, and it is proving to be a remarkable environment in which to test the power of marrying Decentralized Software Services (DSS) with the Concurrency and Coordination Runtime (CCR). It’s really powerful; check out this detailed technical report (lots of cool pictures, too 🙂).
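To make that composition idea concrete, here’s a minimal sketch of the message-passing pattern only, not the actual CCR/DSS APIs (those are .NET libraries); it uses Python’s asyncio as a stand-in runtime, and the two “services” and their names are purely illustrative.

```python
import asyncio

async def sensor_service(out_queue: asyncio.Queue) -> None:
    """A stand-in for a DSS-style service: publishes readings as messages."""
    for reading in [12.0, 12.4, 11.8]:
        await out_queue.put(reading)
        await asyncio.sleep(0.1)   # simulate asynchronous arrival
    await out_queue.put(None)      # sentinel: no more readings

async def motor_service(in_queue: asyncio.Queue) -> None:
    """Consumes sensor messages and reacts. Coordination happens only
    through the queue -- no shared state, no explicit locks."""
    while True:
        reading = await in_queue.get()
        if reading is None:
            break
        print(f"motor adjusting for sensor reading {reading}")

async def main() -> None:
    # Compose the two services; the queue is their only coupling,
    # so either side can be swapped out or distributed independently.
    channel: asyncio.Queue = asyncio.Queue()
    await asyncio.gather(sensor_service(channel), motor_service(channel))

asyncio.run(main())
```

The point of the pattern is the same one Mundie makes: each service can be specified, tested, and operated on its own, and composition happens at the message boundary.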


4 Responses

  1. Lewis, despite all the good work done in the past fifty years, we need to improve our approach to software at the theoretical level as well as the methodological level. And hats off to the Microsoft folks who are sponsoring the Haskell community for their important work developing advanced approaches to formal syntax. But more hardware and better syntax won’t solve the problems implied by your post, and neither will better methodology. We need a better semantics. And a semantics to support Bush’s vision still seems as far away today as it did in 1945. As I mentioned recently to Steve Cook of the Visual Studio team in his post on UML and DSLs here http://tinyurl.com/4539zu, the semantics of today’s languages, including RDF and OWL, must evolve from a model theory based in representation and truth into a model theory that incorporates semiotics and pragmatics. This evolution of a formal semantics is also implicit in the federal government’s information sharing strategy. I hope your advanced government team can serve as strong advocates for a better semantics, both in government and within Microsoft, so we may someday expect our software to work As We May Think!


  2. Lewis –

    When I think of problems that are amenable to “large scale, distributed, asynchronous” solutions, the first examples that come to mind are the various iterations on the Google map/reduce infrastructure, independently reinvented by several organizations (Yahoo with Hadoop, and there are others).

    The basic notion is that if your problem is structured so you can break it into a zillion tiny parts, solve each part separately, and integrate them all together at the end, then you can use on-demand computing infrastructure and some kind of job-control system to get enormous speedups.

    What’s remarkable about this sort of infrastructure is that the resulting user-space programs can be very small – they typically encapsulate just the time-consuming inner loop of some bigger process. The other remarkable result is that if you design your architecture and systems right, you can build out your compute infrastructure on demand and then tear it down when it’s not needed.
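    Here’s a minimal sketch of that shape, assuming a toy word-count job in Python, with a local multiprocessing pool standing in for the on-demand cluster (the chunk data and function names are just illustrative):

    ```python
    from collections import Counter
    from functools import reduce
    from multiprocessing import Pool

    def map_chunk(text: str) -> Counter:
        """Map step: count words in one tiny piece of the input."""
        return Counter(text.split())

    def merge(a: Counter, b: Counter) -> Counter:
        """Reduce step: integrate two partial results."""
        return a + b

    if __name__ == "__main__":
        chunks = [
            "the quick brown fox",
            "the lazy dog",
            "the quick dog",
        ]
        # Each chunk is processed independently, so the pool can be
        # as wide as the hardware (or rented cluster) allows.
        with Pool() as pool:
            partials = pool.map(map_chunk, chunks)
        totals = reduce(merge, partials, Counter())
        print(totals.most_common(3))
    ```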


  3. Rick – I think you can tell I agree with you entirely, but almost in an emotional and idealistic way. I’m still searching for evidentiary signs of progress along that semantic line in programming itself. I have a hunch that there simply aren’t enough minds working on this particular approach, much less great minds. And mine’s not up to par on that front! But the quest continues… Keep advocating, as will I.


  4. Ed – This tack is indeed remarkably powerful. We have some efforts (both operational and research) in this vein, particularly Dryad – I wrote about it earlier this year here:
    http://tinyurl.com/5yja7f

