Burning Man, Artificial Intelligence, and Our Glorious Future

I’ve had several special opportunities in the last few weeks to think a bit more about Artificial Intelligence (AI) and its future import for us remaining humans. Below I’m using my old-fashioned neurons to draw some non-obvious links.

The cause for reflection is the unexpected parallel between two events I’ve been involved in recently: (1) an interview of Elon Musk which I conducted for a conference in DC; and (2) the grand opening in London of a special art exhibit at the British Library which my wife and I are co-sponsoring. Each has an AI angle, and I believe their small lessons demonstrate something intriguingly hopeful about a future of machine superintelligence.

Burning Man meets the British Library: “Crossroads of Curiosity” by artist David Normal

Onstage with Elon Musk, AFCEA Symposium, “Revolutionary Changes in Intelligence”

Let’s take my experience as a snapshot case study, first with Elon Musk setting up the hypothesis on the perils of “Strong AI,” in which artificial general intelligence could lead to Superintelligence. The “glorious future” in my title is of course an ironic reference to an idealized paradise of robotic perfection.

My 90-minute conversation with Elon onstage at the recent AFCEA Intelligence Symposium was wide-ranging and covered technology areas he’s currently leading work in – space, transportation, energy, innovation in general – but he wanted to lead off with Artificial Intelligence. He began by reiterating some of the arguments he and others like Stephen Hawking and Bill Gates have been making on the potential dangers in “summoning the demon” of Strong AI. (You can read “AI or Not AI?” for quick background, and I recommend Nick Bostrom’s recent book Superintelligence: Paths, Dangers, Strategies.)

I had a string of questions on the AI topic, focused on implications which he and others haven’t addressed yet, at least publicly. I won’t give Elon’s answers – the session was off the record – but they were thoughtful and quite compelling.

(Maybe you should register to attend the next AFCEA event, so you don’t miss out again.)

Here’s one of my questions, on an under-examined implication of the current AI debate:

Shepherd: “AI thinkers like Vernor Vinge talk about the likelihood of a ‘soft takeoff’ of superhuman intelligence, when we might not even notice and would simply be adapting along, versus a ‘hard takeoff,’ which would be a much more dramatic explosion – akin to the introduction of humans into the animal kingdom. Arguably, watching for indicators of that type of takeoff (soft or especially hard) should be in the job-jar of the Intelligence Community. Your thoughts?”

If your appetite is whetted and you do want to tap the world’s greatest AI experts for the current state of their work, you can’t do better than their own presentations at a secluded conference earlier this year in Puerto Rico hosted by the Future of Life Institute, a group which studies the challenges of ensuring the safety of AI systems in an effort to counter dystopian developments. I found it difficult to dismiss the anxiety evident among several of these brilliant folks. UC-Berkeley’s Stuart Russell tweaked skeptics who pooh-pooh AI worries with this slide bearing a reminder from the early days of atomic research:

Russell on AI

But let’s not get too AI-gloomy.

Instead, let’s turn to the parallel story from the arts, born at the very human Burning Man festival. I’m a fan of California artist David Normal, an innovative painter and installation artist long active in Burning Man circles. I like his work precisely because it demonstrates the incredibly complex, densely layered inventiveness of a highly literate creator. One Los Angeles art critic’s review of an earlier San Francisco exhibition by Normal captures what caught my eye:

…the work is inspired by a set of influences as disparate and random as the content of the scenes themselves. The artist’s zoomorphic forms are inspired by Northern Renaissance masters, for instance, and his muscular figures and contorted poses are reminiscent of early Mannerism, as if Timothy Leary had come in and rearranged Michelangelo’s Last Judgment. There are numerous other archaic referents to be found in this curious puzzle–Normal’s Chemical Imbalance, for example, is composed around the form of the Kabbalah, and contains quotes from Grünewald’s Isenheim Altarpiece, Bernini’s Ecstasy of St. Teresa, and an Escher lithograph. It is this dizzying and complex fusion that Normal calls Crazyology–taken from a Charlie Parker song, the term is more than just a title of the exhibit, it is intended as a description of the mishmash of influences in the artist’s work. Normal explains: “The list of art techniques and philosophies that exploited and exulted in the irrational is a long and distinguished history that I would sum up as Crazyology….When people ask me what my style is, I say, Crazyology, and that way I have my own term for my work, and I can also refer to all the great crazy stuff that has inspired me – Surrealism, Punk, Dada, Pop Art, Psychedelic Art, etc.”

Fans of that kind of bushy-dendrite complexity, my wife Kathryn and I have joined the Eccles Centre for American Studies and the Burning Man Arts Foundation this summer in funding a major exhibition at the British Library, on view through November 2015, entitled Crossroads of Curiosity. (Here’s information if you’re planning a London visit, which I highly recommend, and here’s a link to more photos and videos of the exhibit opening with Burning Man founder Larry Harvey and British Library Chief Executive Roly Keating.)

At face value, the new installation is stunning but straightforward: four massive 8-foot-by-20-foot lightboxed murals incorporating rich imagery from the Library’s digitised collections, mounted on the BL’s grand Piazza. It is described in the catalogue as “a series of dramatic tableaux featuring provocative juxtapositions of vastly different times, places, and peoples.”

Another photographer in the Crossroads of Curiosity

The pieces were first shown last fall at Burning Man 2014, where they were centrally arrayed on the ordinal compass points around the 105-foot-tall Man himself in the desert at Black Rock City (photo at right during their installation there).

But the story of how they came to be – their inspiration – is as interesting an element of the art as the visual images themselves. You see, there’s a man-plus-machine creation story…

A Mechanical Curator Lights Up the Desert

Several years ago the Digital Research Team and British Library Labs, under the mighty Mahendra Mahey, teamed up with Microsoft Research/Azure (while I was working there) to use their cloud infrastructure on a collection of 65,000 high-quality scanned books which had been digitized through a partnership between Microsoft and the British Library a decade earlier. What to do with them? One idea from the BL Labs Technical Lead, Ben O’Steen, was to set loose an algorithmic bot programmed to recognize pages which had an image or illustration, and digitally clip each one to save separately. The “Mechanical Curator” was born, and the resulting collection was eye-opening. The group at the BL had already decided to do something generous: release the images for free public-domain use. As spelled out in late 2013, their intent was to spark the creative imagination of others:

“We have released over a million images onto Flickr Commons for anyone to use, remix and repurpose. These images were taken from the pages of 17th, 18th and 19th century books digitised by Microsoft who then generously gifted the scanned images to us, allowing us to release them back into the Public Domain.

“The images themselves cover a startling mix of subjects: There are maps, geological diagrams, beautiful illustrations, comical satire, illuminated and decorative letters, colourful illustrations, landscapes, wall-paintings and so much more that even we are not aware of.

“Which brings me to the point of this release. We are looking for new, inventive ways to navigate, find and display these ‘unseen illustrations’. The images were plucked from the pages as part of the ‘Mechanical Curator’, a creation of the British Library Labs project. Each image is individually addressable, online, and Flickr provides an API to access it and the image’s associated description.

“We may know which book, volume and page an image was drawn from, but we know nothing about a given image.”
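The Flickr API mentioned in that passage is easy to exercise. Here’s a minimal sketch of building a photostream query in Python – the API key is a placeholder, and the British Library account ID shown is my assumption, so verify both before use:

```python
import urllib.parse

FLICKR_REST = "https://api.flickr.com/services/rest/"

def build_search_url(api_key, user_id, page=1, per_page=100):
    """Build a flickr.photos.search URL for one page of a user's photostream."""
    params = {
        "method": "flickr.photos.search",
        "api_key": api_key,
        "user_id": user_id,
        "page": page,
        "per_page": per_page,
        "format": "json",
        "nojsoncallback": 1,
    }
    return FLICKR_REST + "?" + urllib.parse.urlencode(params)

# Placeholder key; "12403504@N02" is assumed to be the BL's account ID.
url = build_search_url("YOUR_API_KEY", "12403504@N02")
```

Fetching that URL (e.g. with urllib.request) returns JSON pages of photo records, each carrying the IDs needed to retrieve an image and its associated description.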

So the algorithms weren’t semantic, or “knowing.” They weren’t capable of characterizing the images, much less “understanding” them. The algorithm never raised an eyebrow, squinted its eye at an illustration and murmured “Now, that’s promising…”

In fact, it’s not surprising that the bot was assiduous but dumb. The state of computer-vision image recognition is improving thanks to recursive deep-learning algorithms, but not yet beyond childlike abilities. See “On Welsh Corgis, Computer Vision, and the Power of Deep Learning” on Microsoft’s AI research and “Welcome to the AI Conspiracy” on Google, Yahoo and Baidu. Adult-level semantic understanding is certainly not within reach today, even in the single focused domain of image recognition.

But the technical spadework was done with the BL collection, for others to build on. Ben O’Steen, the project’s technical lead at BL Labs, made all his code freely available on GitHub for the million-plus JPEG2000 image files and associated OCR XML metadata [see here for technical info].
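O’Steen’s repositories are the authoritative reference; purely as an illustration of the Mechanical Curator idea, here is a sketch of harvesting illustration regions from a page’s OCR metadata. The element and attribute names (ALTO-style `Illustration` blocks with `HPOS`/`VPOS`/`WIDTH`/`HEIGHT` coordinates) are my assumptions, not necessarily the BL schema:

```python
import xml.etree.ElementTree as ET

def illustration_boxes(ocr_xml):
    """Return (x, y, w, h) for each block the OCR layer marked as an illustration."""
    boxes = []
    for block in ET.fromstring(ocr_xml).iter():
        # Match either an <Illustration> element or a block with TYPE="Illustration".
        if block.tag.endswith("Illustration") or block.attrib.get("TYPE") == "Illustration":
            a = block.attrib
            boxes.append(tuple(int(a[k]) for k in ("HPOS", "VPOS", "WIDTH", "HEIGHT")))
    return boxes

# Each box can then be clipped from the page scan, e.g. with Pillow:
#   Image.open(page_scan).crop((x, y, x + w, y + h)).save(out_path)
sample = '<Page><Illustration HPOS="10" VPOS="20" WIDTH="300" HEIGHT="400"/></Page>'
print(illustration_boxes(sample))  # [(10, 20, 300, 400)]
```

Everything after that – deciding which clipped images matter, and what they mean – is where the human curator comes in, as the next section shows.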

Most importantly, the collection was now available to the eyes of the global crowd, via Flickr. I met a student of information science volunteering at BL Labs, Wendy Durham, who captured the back-story:

“In December 2013, BL Labs and Digital Research Teams released 1 million algorithmically snipped images from 65,000 digitised books on to the Flickr Commons. Since then, the British Library Flickr Commons photostream has amassed a staggering 260 million views.

“Just a week after the release, one of those viewers was David Normal, a collage artist from California, interested in 19th century illustration for his work. Following a Facebook posting about the British Library Flickr Commons launch from the guitarist of the punk band ‘Flipper,’ David was thrilled to discover the serendipitous size, quality and relevance of the photostream content with his plans for a project he had organised with the annual arts festival ‘Burning Man’ in the Nevada desert, USA…

In the desert – click for more photos from Burning Man

“After initially creating four 3 ft. by 8 ft. paintings, David transformed them into four 8 ft. by 20 ft., double sided and illuminated light boxes that were built around the base of the Burning Man statue, forming the centrepiece of the event. Over 70,000 people came to the event and saw the work…”

That telling leaves out an important step, though: What images to pluck from the photostream? And how to arrange them in a way that would create “art”? Here’s where the human element enters. Insert magic here, one might say.

The Spark of Human Creativity

Normal described that creative element in an important lecture in the British Library’s Chaucer Room last fall; I’ll excerpt the details which begin about 6 minutes into the video.

[While planning for 2014 Burning Man with BM director Larry Harvey] “he and I batted this back and forth and came up with the idea of a crossroads of curiosity, that would expand the idea of a cabinet of curiosities out from just the rectilinear presentation of objects in a case to being sets of dramatic tableaux that are a collection of human dramas and human phenomena.”

“So I was looking for 19th century illustrations, and it was only shortly after that that I learned of the British Library’s release into the Flickr Commons of over a million images. So that was really a kind of stupendous thing for me… Once I had that in hand, I was extremely inspired, because I had just an endless amount of collage material to work with to create these pieces.”

“Working with the British Library’s collection, I began to make the collages, and what I would do is just go searching through the database, just looking at one image after another, about the same way as if I was going to go through a book looking at every illustration in the book, and then I would mark the images that I liked. So it was a very entrancing process. I would sit there for hours, just clicking one image after another, sometimes going through a hundred or two hundred images without seeing anything that was of interest to me – and at other times being in kind of a vein, of things that I liked; and I would favorite all these out.

“I collected something like a total of 3,000 images. And then I would begin to select the ones that I liked best.

“People always ask me, ‘Why did you select the certain things that you liked?’ and a lot of times the things would just go together. I mean, I believe that the machine-gun and the skunk were just side by side with each other in the Flickr favorites page, so it was just sort of obvious, the work had already been done for me. But the Afghan warrior here, he was close by, and so he ends up – I decided to put him on, riding the skunk… the background ended up getting filtered out, and so in the process of making the collages I experimented quite a bit… I did so many different versions. Some elements didn’t end up in the final piece. For instance, I had a troubadour with an elephant head…and while that might have been good, he just didn’t seem to fit in [Audience laughter]. So these became the raw material from which I painted; the collages actually became sketches.”

Lightbox prototype, “The Human Tree.” 21’x18′, original painting oil on canvas, 2010.

By the way, at about the 10-minute mark David goes into some technical discussion of the innovative lightbox materials and procedures he invented for this project and earlier lightbox exhibits; if you’re a fan of advanced electrical/fabrication techniques you’ll enjoy it. My wife and I own one of his earlier small lightbox prototypes “The Human Tree,” which hangs in our Music Room (left).

Like many artists since the dawn of Impressionism (and every -ism since), David Normal isn’t quite sure how to explain his train of creative thinking in words – because it’s not a logical progression. Again from his lecture:

“At Burning Man, people came around and were fascinated by the work, they spent a lot of time trying to decipher what the meaning of these strange images was. I took to giving what I called ‘Docent Tours,’ where I would walk people around from image to image, and explain the imagery and what it meant to me. The reason I call it a docent tour is because I actually don’t really have much better of an idea of what the imagery means than anybody else [audience laughter]. I mean, I made it, but it wasn’t like it was made with some sort of, you know, didactic purpose, that it’s supposed to explain something to somebody. Rather, the images are very much expressions of [pauses] – oh, of something that’s just energetic – how shall we say – It’s not a specific – emotional… You know it’s not specifically confined to something that I can put into words. But I do put into words! And I’ll explain this to you soon, when we go and look at these prints I brought with me….”

Certainly the essence of the creative arts is ineffable, and it is impossible to capture or even describe fully the remarkable human inspiration and boundary-jumping leaps which trace the advance of human intellectual history. Some analysts today debate whether human-level AI will demonstrate the “same” human-like spark. (See for example “Creativity: The Last Human Stronghold?” by the thoughtful technologist and AI observer Israel Beniaminy on attempts to program machine creativity in poetry, art, math, and science.)

But it’s difficult to believe in a human-challenging degree of “intelligence” when the British Library’s Mechanical Curator never came up with the idea of putting the Afghan warrior riding on the skunk; the algorithm would never ponder a collage with an elephant-headed troubadour – much less realize that it “just didn’t fit in.”

The computers never suggested looking to Burning Man for inspiration, or turning paintings into lightboxes, or having a huge party to open the exhibit with Burners from across Europe, Taiko drummers, atonal chant singers, and London’s ultrahip DJ Yoda, all dancing on a Library Piazza bearing banners celebrating the 800th anniversary of the Magna Carta housed inside, symbolizing man’s insatiable determination to win and protect his liberties.

To make the connection of my two themes explicit: Elon Musk came up with the unconventional idea behind his startup SolarCity while driving to the desert for Burning Man in 2004. His inspiration wasn’t a self-optimizing algorithmic feedback calculation but instead the very human notions of radical self-expression and radical self-reliance reinforced at Burning Man. And even earlier, in 2002, no recursively learning business algorithm (then or now) would EVER have recommended that a businessman create a scrappy rocket startup, SpaceX, to take on Lockheed, Boeing, and the challenge of Mars settlement.

In short, no programmer knows how to encode an AI with audacity. I doubt we ever will. Remember those old Apple advertisements with their call to “Think Different”? That’s humanity’s saving grace. We alone are able to create new and independent meaning from the complexity of the world (and “big data”) around us.

Will Superintelligence Supersede Humans?

I follow in detail the latest AI research and advances, and speaking as someone who has written about “autonomous killer robots” for years, I’m glad that the debate is reaching the mainstream; several recent pieces on the topic are worth reading.

My bottom line is the same feeling I have about a world with nuclear weapons: it’s not that I am skeptical of the dangers – it’s just that I remain optimistic about our ability to avoid or overcome them.

I’m less interested in what the machines will do on their own, because I see evidence across history – ancient and continuing – of humanity’s ability to invent and then control by further invention. We’ve done it with the most awesome and fearful technology imaginable – the power of nuclear fission. Our ability to invent new technical, political, and social means of dominating our own Golems should stand us in good stead even with AI.

Call it insight, or the ineffable human spark, whatever it is we have it – and there’s enduring power in thinking the unexpected. David Normal saw a serendipitous art in siting his exhibit where it is: as he explains, “It’s actually placed literally (no pun intended) over 5 stories of books that are housed underneath the piazza – likely the very books from which the collage material was derived.”

What computer would get that joke?

Twitter Search as a Government case study

In addition to periodic think-pieces here at Shepherd’s Pi, I also contribute a monthly online column over at SIGNAL Magazine on topics relating to intelligence. This month I keyed off a recent discussion I had onstage at the 2015 AFCEA Spring Intelligence Symposium with Elon Musk, particularly a colloquy we had on implications of the emerging cleavage (post-Edward Snowden) between Silicon Valley technology companies and their erstwhile innovation partners, U.S. intelligence agencies.

That discussion sparked some thinking on the public/private sector divide on tech innovation – and on basic operational performance in building or adopting new technologies. It’s always been a hobbyhorse topic of mine; see previous pieces even from way back in 2007-08 like “Pentagon’s New Program for Innovation in Context,” or “A Roadmap for Innovation – From the Center or the Edge?” or “VC-like Beauty Contests for Government.”

I have an excerpt from my new SIGNAL piece below, but you can read the entire piece here: “The Twitter Hare Versus the Government Turtle.”

Is the public/private divide overstated? Can the government compete? Without going into the classified technology projects and components discussed at the symposium, let’s try a quick proxy comparison, in a different area of government interest: archiving online social media content for public use and research. Specifically, since Twitter data has become so central to many areas of public discourse, it’s important to examine how government and private sector are each addressing that archive/search capability.

First, the government side. More than half a decade ago, the Library of Congress (LoC) announced in April 2010 with fanfare that it was acquiring the “complete digital archives” of Twitter, from its first internal beta tweets. At that time, the LoC noted, the 2006-2010 Twitter archive already consisted of 5 terabytes, so the federal commitment to archiving the data for search and research was significant…

  … Fast forward to today. Unbelievably, after even more years of “work,” there is no progress to report—quite the opposite. A disturbing new report this week in Inside Higher Ed entitled “The Archive Is Closed” shows LoC at a dead stop on its Twitter archive search. The publicly funded archive still is not open to scholars or the public, “and won’t be any time soon.”

  … Coincidentally this week, just as the Library of Congress was being castigated for failing in its mission to field a usable archive after five years, Twitter unveiled a new search/analytics platform, Twitter Heron—yes, after just six months [after releasing its previous platform Twitter Storm]. Heron vastly outperforms the original version in throughput and latency; yet in a dramatic evocation of Moore’s Law, it does so on one-third the hardware.

Twitter Storm vs Twitter Heron

Oh, and as the link above demonstrates, the company is far more transparent about its project and technology than the Library of Congress has been.

All too often we see government technology projects prove clunky and prone to failure, while industry efforts are better incentivized and managerially optimized for success. There are proven methods to combat that. But the Twitter search case is one more cautionary example of the need to reinvigorate public/private partnerships—in this case, directly relevant to big-data practitioners in the intelligence community.

 – Excerpts from SIGNAL Magazine, “The Twitter Hare Versus the Government Turtle.” © 2015 AFCEA International.

Insider’s Guide to the New Holographic Computing

In my seven happy years at Microsoft before leaving a couple of months ago, I was never happier than when I was involved in a cool “secret project.”

Last year my team and I contributed for many months to a revolutionary secret project – Holographic Computing – which is being revealed today at Microsoft headquarters. I’ve been blogging for years about a variety of research efforts which additively culminated in today’s announcements: HoloLens, HoloStudio for 3D holographic building, and a series of apps (e.g. HoloSkype, HoloMinecraft) for this new platform on Windows 10.

For my readers in government, or who care about the government they pay for, PAY CLOSE ATTENTION.

It’s real. I’ve worn it, used it, designed 3D models with it, explored the real surface of Mars, played and laughed and marveled with it. This isn’t Einstein’s “spooky action at a distance.” Everything in this video works today:

These new inventions represent a major step-change in the technology industry. That’s not hyperbole. The approach offers the best benefit of any technology: empowering people simply, despite the underlying complexity – and by extension a way to deliver new and unexpected capabilities to meet government requirements.

Holographic computing, in all the forms it will take, is comparable to the Personal Computing revolution of the 1980s (which democratized computing), the Web revolution of the ’90s (which universalized computing), and the Mobility revolution of the past eight years, which is still uprooting the world from its foundation.

One important point I care deeply about: Government missed each of those three revolutions. By and large, government agencies at all levels were late or slow (or glacial) to recognize and adopt those revolutionary capabilities. That miss was understandable in the developing world and yet indefensible in the United States, particularly at the federal level.

I worked at the Pentagon in the summer of 1985, having left my own state-of-the-art PC at home at Stanford University, but my assigned “analytical tool” was a typewriter. In the early 2000s, I worked at an intelligence agency trying to fight a war against global terror networks when most analysts weren’t allowed to use the World Wide Web at work. Even today, government agencies are lagging well behind in deploying modern smartphones and tablets for their yearning-to-be-mobile workforce.

This laggard behavior must change. Government can’t afford (for the sake of the citizens it serves) to fall behind again, and understanding how to adapt to the holographic revolution is a great place to start, for local, national, and transnational agencies.

Now some background…

Intelligence Technology, Waiting for Superman

…or Superwoman.

Amid the continuing controversies sparked by Edward Snowden’s whistleblowing defection revelations, and their burgeoning effects on American technology companies and the tech industry worldwide, the afflicted U.S. intelligence community has quietly released a job advertisement for a premier position: the DNI’s National Intelligence Officer for Technology.

You can view the job posting at the USAJOBS site (I first noticed it on ODNI’s anodyne Twitter feed @ODNI_NIC), and naturally I encourage any interested and qualified individuals to apply. Keep reading after this “editorial-comment-via-photo”:

How you’ll often feel if you take this job…

Whether you find the NSA revelations to be infuriating or unsurprising (or even heartening), most will acknowledge that it is in the nation’s interest to have a smart, au courant technologist advising the IC’s leadership on trends and directions in the world of evolving technical capabilities.

In the interest of wider exposure I excerpt below some of the notable elements in the job-posting and description…. and I add a particular observation at the bottom.

Job Title: National Intelligence Officer for Technology – 28259

Agency: Office of the Director of National Intelligence

Job Announcement Number: 28259

Salary Range: $118,932.00  to  $170,000.00

Major Duties and Responsibilities:

Oversees and integrates all aspects of the IC’s collection and analytic efforts, as well as the mid- and long-term strategic analysis on technology.

Serves as the single focal point within the ODNI for all activities related to technology and serves as the DNI’s personal representative on this issue.

Maintains senior-level contacts within the intelligence, policymaking, and defense communities to ensure that the full range of informational needs related to emerging technologies is met on a daily basis, while setting strategic guidance to enhance the quality of IC collection and analysis over the long term.

Direct and oversee national intelligence related to technology areas of responsibility; set collection, analysis, and intelligence operations priorities on behalf of the ODNI, in consonance with the National Intelligence Priorities Framework and direction from the National Security Staff.

In concert with the National Intelligence Managers/NIOs for Science and Technology and Economic Issues, determine the state of collection, analysis, or intelligence operations resource gaps; develop and publish a UIS which identifies and formulates strategies to mitigate gaps; advise the Integration Management Council and Integration Management Board of the gaps, mitigation strategies, progress against the strategies, and assessment of the effectiveness of both the strategies and the closing of the intelligence gaps.

Direct and oversee Community-wide mid- and long-term strategic analysis on technology. Serve as subject matter expert and support the DNI’s role as the principal intelligence adviser to the President.

Oversee IC-wide production and coordination of NIEs and other community papers (National Intelligence Council (NIC) Assessments, NIC Memorandums, and Sense of the Community Memorandums) concerning technology.

Liaise and collaborate with senior policymakers in order to articulate substantive intelligence priorities to guide national-level intelligence collection and analysis. Regularly author personal assessments of critical emerging technologies for the President, DNI, and other senior policymakers.

Develop and sustain a professional network with outside experts and IC analysts, analytic managers, and collection managers to ensure timely and appropriate intelligence support to policy customers.

Brief senior IC members, policymakers, military decisionmakers, and other major stakeholders.

Review and preside over the research and production plans on technology by the Community’s analytic components; identify redundancies and gaps, direct strategies to address gaps, and advise the DNI on gaps and shortfalls in analytic capabilities across the IC.

Determine the state of collection on technology, identify gaps, and support integrated Community-wide strategies to mitigate any gaps.

Administer National Intelligence Officer-Technology resource allocations, budget processes and activities, to include the establishment of controls to ensure equities remain within budget.

Lead, manage, and direct a professional level staff, evaluate performance, collaborate on goal setting, and provide feedback and guidance regarding personal and professional development opportunities.

Establish and manage liaison relationships with academia, the business community, and other non-government subject matter experts to ensure the IC has a comprehensive understanding of technology and its intersection with global military, security, economic, financial, and/or energy issues.

Technical Qualifications:

Recognized expertise in major technology trends and knowledge of analytic and collection issues sufficient to lead the IC.

Superior capability to direct interagency, interdisciplinary IC teams against a range of functional and/or regional analytical issues.

Superior interpersonal, organizational, and management skills to conceptualize and effectively lead complex analytic projects with limited supervision.

Superior ability to work with and fairly represent the IC when analytic views differ among agencies.

Superior communication skills, including ability to exert influence with senior leadership and communicate effectively with people at all staff levels, both internal and external to the organization, to give oral presentations and to otherwise represent the NIC in interagency meetings.

Expert leadership and managerial capabilities, including the ability to effectively direct taskings, assess and manage performance, and support personal and professional development of all levels of personnel.

Superior critical thinking skills and the ability to prepare finished intelligence assessments and other written products with an emphasis on clear organization and concise, logical presentation.

Executive Core Qualifications (ECQs):

Leading People: This core qualification involves the ability to lead people toward meeting the organization’s vision, mission, and goals. Inherent to this ECQ is the ability to provide an inclusive workplace that fosters the development of others, facilitates cooperation and teamwork, and supports constructive resolution of conflicts. Competencies: Conflict Management, Leveraging Diversity, Developing Others, and Team Building.

Leading Change: This core qualification involves the ability to bring about strategic change, both within and outside the organization, to meet organizational goals. Inherent to this ECQ is the ability to establish an organizational vision and to implement it in a continuously changing environment. Competencies: Creativity and Innovation, External Awareness, Flexibility, Resilience, Strategic Thinking, and Vision.


You will be evaluated based upon the responses you provide to each required Technical Qualifications (TQ’s) and Executive Core Qualifications (ECQ’s). When describing your Technical Qualifications (TQ’s) and Executive Core Qualifications (ECQ’s), please be sure to give examples and explain how often you used these skills, the complexity of the knowledge you possessed, the level of the people you interacted with, the sensitivity of the issues you handled, etc. Your responses should describe the experience; education; and accomplishments which have provided you with the skills and knowledge required for this position. Current IC senior officers are not required to submit ECQs, but must address the TQs.

Only one note on the entire description, and it’s about that last line: “Current IC senior officers are not required to submit Executive Core Qualifications, but must address the Technical Qualifications.”  This is perhaps the most important element in the entire description; it is assumed that “current IC senior officers” know how to lead bureaucratically and how to manage a staff – but in my experience it cannot be assumed that they are current on actual trends and advances in the larger world of technology. In fact, some might say the presumption would be against that currency. Yet current they must be, for reasons never more salient than in today’s chaotically evolving world.

Good luck to applicants.

[note: my title is of course a nod to the impressive education-reform documentary “Waiting for Superman”]


InfoViz Cockpit View of Record Space Jump

I recall, one year ago this week, sitting at home on the edge of my seat, intently watching on my wallscreen the live countdown to Felix Baumgartner‘s stunning Red Bull Stratos mission to “transcend human limits” by calmly stepping off an ultra-high-altitude balloon capsule. On the way down he would go supersonic and set numerous records, most significantly the highest-altitude human jump (128,100 feet).

To mark the anniversary, the Stratos team has just released a well-done information visualization of his feat, featuring for the first time Felix’s own view of the jump – a nicely arranged combination of synchronized views, captured by three cameras mounted on his space suit (including a helmet cam) as he hurtled to earth.  You’ll also see gauges tracking his Altitude, Airspeed, G-Force, and “Biomed” readings (heart rate, breath rate).

A couple of datapoints which stood out for me: After his ledge salute and headfirst dive, Felix goes from zero to 100 mph in 4.4 seconds, hitting Mach 1 (or 689 mph) in just 33.2 seconds.  It’s also fascinating to watch his heart rate, which (exemplifying his astronaut coolness under pressure) actually decreases from 181 bpm at jump to around 163 bpm as he quickly adjusts; it then rises and falls as he encounters and then controls a severe spin.
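Those figures pass a quick back-of-envelope sanity check: in near-vacuum free fall you’d expect roughly 1 g of acceleration, tapering off as the air thickens. A minimal sketch (constants and function names are my own, not from the Stratos data release):

```python
MPH_TO_MS = 0.44704   # metres per second per mile-per-hour
G = 9.80665           # standard gravity, m/s^2

def avg_accel_g(speed_mph: float, seconds: float) -> float:
    """Average acceleration from rest to speed_mph, expressed in units of g."""
    return (speed_mph * MPH_TO_MS / seconds) / G

# 0 -> 100 mph in 4.4 s works out to about 1.04 g -- essentially free fall.
a_early = avg_accel_g(100, 4.4)

# 0 -> Mach 1 (689 mph at that altitude) in 33.2 s averages about 0.95 g,
# slightly under 1 g as the thickening atmosphere adds drag on the way down.
a_to_mach = avg_accel_g(689, 33.2)
```

The slight drop below 1 g over the longer interval is exactly what you’d expect as drag builds.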

His chute deploys about halfway into this nine-minute video, but watching to the end is worth it as he masterfully glides to earth, landing in a suave trot on his feet.  Enjoy this look back at a universal Superman.

Bullshit Detector Prototype Goes Live

I like writing about cool applications of technology that are so pregnant with the promise of the future that they have to be seen to be believed, and here’s another one that’s almost ready for prime time.

The Washington Post today launched an exciting new prototype that applies powerful new technologies to journalism and democratic accountability in politics and government. As you can see from the screenshot (left), it runs an automated fact-checking algorithm against streaming video of politicians or other talking heads and displays, in real time, a “True” or “False” label as they’re speaking.

Called “Truth Teller,” the system uses technologies from Microsoft Research and Windows Azure cloud-computing services (I have included some of the technical details below).

But first, a digression on motivation. Back in the late 1970s I was living in Europe and was very taken with punk rock. Among my favorite bands were the UK’s anarcho-punk collective Crass, and in 1980 I bought their compilation LP “Bullshit Detector,” whose title certainly appealed to me because of my equally avid interest in politics :)

Today, my driving interests are in the use of novel or increasingly powerful technologies for the public good, by government agencies or in the effort to improve the performance of government functions. Because of my Jeffersonian tendencies (I did after all take a degree in Government at Mr. Jefferson’s University of Virginia), I am even more interested in improving government accountability and popular control over the political process itself, and I’ve written or spoken often about the “Government 2.0” movement.

In an interview with GovFresh several years ago, I was asked: “What’s the killer app that will make Gov 2.0 the norm instead of the exception?”

My answer then looked to systems that might “maintain the representative aspect (the elected official, exercising his or her judgment) while incorporating real-time, structured, unfiltered but managed visualizations of popular opinion and advice… I’m also a big proponent of semantic computing – called Web 3.0 by some – and that should lead the worlds of crowdsourcing, prediction markets, and open government data movements to unfold in dramatic, previously unexpected ways. We’re working on cool stuff like that.”

The Truth Teller prototype is an attempt to construct a rudimentary automated “Political Bullshit Detector,” and it addresses each of the factors I mentioned in GovFresh – recognizing the importance of political leadership and its public communication, incorporating iterative aspects of public opinion and crowd wisdom, all while imbuing automated systems with semantic sense-making technology to operate at the speed of today’s real world.

Real-time politics? Real-time truth detection.  Or at least that’s the goal; this is just a budding prototype, built in three months.

Cory Haik, the Post’s Executive Producer for Digital News, says it “aims to fact-check speeches in as close to real time as possible,” whether in live remarks, TV ads, or interviews. Here’s how it works:

The Truth Teller prototype was built and runs with a combination of several technologies — some new, some very familiar. We’ve combined video and audio extraction with a speech-to-text technology to search a database of facts and fact checks. We are effectively taking in video, converting the audio to text (the rough transcript below the video), matching that text to our database, and then displaying, in real time, what’s true and what’s false.

We are transcribing videos using Microsoft Audio Video Indexing Service (MAVIS) technology. MAVIS is a Windows Azure application which uses state-of-the-art Deep Neural Net (DNN) based speech recognition technology to convert audio signals into words. Using this service, we are extracting audio from videos and saving the information in our Lucene search index as a transcript. We are then looking for the facts in the transcription. Finding distinct phrases to match is difficult. That’s why we are focusing on patterns instead.

We are using approximate string matching, or a fuzzy string searching algorithm. We are implementing a modified version of Rabin-Karp using the Levenshtein distance algorithm as our first implementation. This will be modified to recognize paraphrasing and negative connotations in the future.

What you see in the prototype is actual live fact checking — each time the video is played the fact checking starts anew.

 – Washington Post, “Debuting Truth Teller”
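The matching step the Post describes can be sketched in miniature: compute the Levenshtein edit distance between a known “fact” phrase and each same-length window of the transcript, flagging near matches. This is a toy stand-in for their modified Rabin-Karp approach, and the example phrases are invented for illustration:

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def fuzzy_find(pattern: str, text: str, max_dist: int = 2):
    """Slide a word-window the size of `pattern` across `text`;
    return (position, window, distance) for every near match."""
    words_p, words_t = pattern.split(), text.split()
    n = len(words_p)
    hits = []
    for i in range(len(words_t) - n + 1):
        window = " ".join(words_t[i:i + n])
        d = levenshtein(pattern.lower(), window.lower())
        if d <= max_dist:
            hits.append((i, window, d))
    return hits

# A hypothetical claim from the fact database, searched in a transcript:
hits = fuzzy_find("taxes went up", "he said taxes went up last year")
```

A production system would index facts (as the Post does with Lucene) rather than scan linearly, but the core idea – tolerating small transcription errors via edit distance – is the same.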

The prototype was built with funding from a Knight Foundation Prototype Fund grant. You can read more about the motivation and future plans on the Knight Blog, and TechCrunch discusses some of the political ramifications of the prototype in light of the fact-checking movement in recent campaigns.

Even better, you can actually give Truth Teller a try here, in its infancy.

What other uses could be made of semantic “truth detection” or fact-checking, in other aspects of the relationship between the government and the governed?

Could the justice system use something like Truth Teller, or will human judges and juries always have a preeminent role in determining the veracity of testimony? Will police officers and detectives be able to use cloud-based mobile services like Truth Teller in real time during criminal investigations as they’re evaluating witness accounts? Should the Intelligence Community be running intercepts of foreign terrorist suspects’ communications through a massive look-up system like Truth Teller?

Perhaps, and time will tell how valuable – or error-prone – these systems can be. But in the next couple of years we will be developing (and be able to assess the adoption of) increasingly powerful semantic systems against big-data collections, using faster and faster cloud-based computing architectures.

In the meantime, watch for further refinements and innovation from The Washington Post’s prototyping efforts; after all, we just had a big national U.S. election, but congressional elections in 2014 and the presidential race in 2016 are just around the corner. Like my fellow citizens, I will be grateful for any help in keeping candidates accountable to something resembling “the truth.”

2012 Year in Review for Microsoft Research

The year draws to a close… and while the banality and divisiveness of politics and government have been on full display around the world during the past twelve months, the year has been rewarding for me personally whenever I could retreat into the world of research. Fortunately there’s a great deal of it going on among my colleagues.

2012 has been a great year for Microsoft Research, and I thought I’d link you to a quick set of year-in-review summaries of some of the exciting work that’s been performed and the advances made:

Microsoft Research 2012 Year in Review

The work ranges from our Silicon Valley lab work in “erasure code” to social-media research at the New England lab in Cambridge, MA; from “transcending the architecture of quantum computers” at our Station Q in Santa Barbara, to work on cloud data systems and analytics by the eXtreme Computing Group (XCG) in Redmond itself.

Across global boundaries we have seen “work towards a formal proof of the Feit-Thompson Theorem” at Microsoft Research Cambridge (UK), and improvements for Bing search in Arab countries made at our Advanced Technology Labs in Cairo, Egypt.

All in all, an impressive array of research advances, benefiting from an increasing amount of collaboration with academic and other researchers as well. The record is one more fitting tribute to our just-departing Chief Research and Strategy Officer Craig Mundie, who is turning over his reins, including MSR oversight, to Eric Rudder (see his bio here), while Craig focuses for the next two years on special work reporting to CEO Steve Ballmer. Eric’s a great guy and a savvy technologist, and has been a supporter of our Microsoft Institute’s work as well … I did say he’s savvy :)

There’s a lot of hard work already going on in projects that should pay off in 2013, and the New Year promises to be a great one for technologists and scientists everywhere – with the possible exception of any remaining Mayan-apocalypse/ancient-alien-astronaut-theorists. But even to them, and perhaps most poignantly to them, I say Happy New Year!

