FACT: Information Week has a solid story today, “Inside Microsoft’s $550 Million Mega Data Centers,” with a tour of the new San Antonio data center under construction. It’s a “Quincy-class” facility (our term, in homage to Navy lingo: big, but not the biggest of aircraft carriers; that would be the Chicago-class data center, see below). The reporter writes: “By September, it’ll be the newest star in Microsoft’s rapidly expanding collection of massive data centers, powering Microsoft’s forays into cloud computing like Live Mesh and Exchange Online, among plenty of other as-yet-unannounced services.”
ANALYSIS: When government technology professionals have questions for me, “the new way to build data centers” comes up more often than any other topic but one. The most popular question, and it’s related, is about cloud computing. Both came up today during a meeting with one of the National Labs.
It’s no accident people are asking, and the Information Week reporter does a good job of explaining why:
Unlike Google, which has been mostly closed-mouth on its data centers, Microsoft has begun taking not only journalists, but also enterprise customers and software and hardware partners on tours in order to share best practices, technical detail and even carry out peer reviews. [Data center chief Mike] Manos has also been making the rounds speaking at data center shows.
“Historically, we’ve taken the approach that data centers are our competitive advantage, but 12 to 18 months after putting cool new stuff into place, it comes onto the market anyway,” Manos says. It’s Microsoft’s business to sell more server software and more services, and that means divulging best practices and being transparent. – Information Week
By the way, the same reporter (IW’s data-center specialist) wrote a good piece a couple of months ago (“Microsoft to Mainstream Containerized Data Centers“) on our even-larger “Chicago-class” mega data center in Illinois, which is using the “C-blox” approach, a Microsoft innovation using shipping-container-sized units of computing power. As Hoover points out, that facility will house “between 150 and 220 industry-standard 40-foot shipping containers holding between 1,000 and 2,000 physical servers apiece, or somewhere between 150,000 and 440,000 servers in total… that’s as many as 11 times the number of servers a conventional data center that size would have.”
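Hoover’s range is simple back-of-envelope arithmetic, and it checks out: the low end multiplies the fewest containers by the fewest servers per container, the high end the most by the most. A quick sketch:

```python
# Verify the server-count range quoted from the Information Week piece.
containers_min, containers_max = 150, 220       # 40-foot shipping containers
servers_min, servers_max = 1_000, 2_000         # physical servers per container

low = containers_min * servers_min              # fewest containers x fewest servers
high = containers_max * servers_max             # most containers x most servers

print(f"{low:,} to {high:,} servers")           # 150,000 to 440,000 servers
```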
Anyway, as sexy as all that is, the important thing about data centers is, what can you do with the computing resources inside them?
When I do get asked about our data centers it is often in the context of, “How goes the war with Google?” Since that’s not my job per se, I don’t think of things that way. But because I’m more interested in the technical side of things, I can see how one might extrapolate from the technical approach to an understanding of why we’re doing what we’re doing. If you’d like to draw those lines yourself, and test your math along the way, check out some of the white papers and discussion on these two Microsoft Research pages: our Silicon Valley-based Computer and Systems Architecture research group, and the Dryad project page itself.
If you want to understand where we’re heading with all this computing power, take a look at these slides on Dryad, presented a couple of months ago at Live Labs by Microsoft Research’s Mihai Budiu. Slide 6 really sums it all up for a cartoon-brain like me (see left), but when you get to slides 28 and onward, you start to see the importance of what you can do with all this power.
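For a feel of the core idea in those slides: Dryad expresses a computation as a directed acyclic graph of simple processing vertices connected by data channels, generalizing the two-stage map/reduce shape to arbitrary graphs. The real Dryad runtime is a C++ system; the sketch below is purely illustrative, with invented function names, showing the fan-out/partial-aggregate/fan-in shape on a toy word count:

```python
# Illustrative sketch of the Dryad dataflow idea. NOT the real Dryad API
# (which is C++); vertex names here are invented for clarity.
from collections import Counter

def split_vertex(text):
    """Fan-out stage: break one input partition into records (words)."""
    return text.lower().split()

def count_vertex(words):
    """Partial-aggregation stage: count words within one partition."""
    return Counter(words)

def merge_vertex(partials):
    """Fan-in stage: combine the partial counts from all partitions."""
    total = Counter()
    for partial in partials:
        total.update(partial)
    return total

# Two input partitions flow through independent split/count vertices,
# then meet at a single merge vertex -- the same DAG shape as a
# MapReduce job, which Dryad generalizes.
inputs = ["the quick brown fox", "the lazy dog and the fox"]
partials = [count_vertex(split_vertex(text)) for text in inputs]
result = merge_vertex(partials)
print(result["the"])  # 3
```

The point the slides make is that once the computation is a graph, the scheduler can place vertices on any of those hundreds of thousands of servers and rerun only the failed ones.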
Oh – and don’t miss the “Backup slides” – for once, they’re really valuable, at least if you are interested in a bit of insight into that little contest we’re running with, um, Google….