Theory of the Fireball, Supercomputing, and other Los Alamos Tidbits

Back in the early ’90s when I was first dating Kathryn, now the lovely bride, we went on some awesome roadtrips, including many cross-country – great way to get to know someone.  At the time my brother was an Air Force pilot, flying the F-117 stealth fighter, so we once paid him a visit at Holloman AFB on a 10-day drive through the Southwest.  I’ll have to find and upload to Flickr the pictures he took of each of us sitting in its classified cockpit – surely a massive security violation which I lay entirely at his feet (lucky for him he’s retired from the Air Force now and flying for Delta). 

Sadly, the F-117 Night Hawk has also now been officially retired, replaced by the F-22 Raptor.

As much as that was a thrill, though, the highlight of that particular roadtrip was driving up to the Los Alamos plateau and spending some time touring around the Lab, the Museum, and the interesting little town that’s grown up around all that PhD talent in the middle of the high desert.

That lab’s on my mind because I’ve been thinking about supercomputing and the revolution it could undergo thanks to quantum computing. I’m in Santa Barbara visiting the “Station Q” research program in quantum computing, and will write more about quantum computing soon, since I’m actually beginning to understand it.

But as an interesting artifact in my preparatory reading, Microsoft’s John Manferdelli sent me a link to a Federation of American Scientists archive of declassified Los Alamos National Lab technical reports and publications, from “the good old days” at Los Alamos.

Among the fascinating nuggets in this LANL collection are the 1943 “Los Alamos Primer,” a so-called indoctrination course given to the new arrivals at the beginning of the Manhattan Project; “Theory of the Fireball” by Hans Bethe (1964); and a 1947 “Blast Wave” technical report which was co-authored by Bethe, John von Neumann and Klaus Fuchs, who was later exposed as a Soviet spy!  The Primer is interesting reading, at least until it gets into the higher math.  Here’s an excerpt from the opening section:

“The object of the project is to produce a practical military weapon in the form of a bomb in which the energy is released by a fast neutron chain reaction in one or more of the materials known to show nuclear fission.  The direct energy release in the fission process is of the order of 170 MEV per atom. This is considerably more than 10⁷ times the heat of reaction per atom in ordinary combustion processes…”      – from the “Los Alamos Primer”
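That ratio is easy to sanity-check with back-of-envelope arithmetic. Here’s a minimal sketch, assuming a per-atom chemical energy of a few eV for ordinary combustion (a ballpark figure I’m supplying, not one from the Primer itself):

```python
# Back-of-envelope check of the Primer's claim that fission releases
# more than 10^7 times the per-atom energy of ordinary combustion.
fission_energy_ev = 170e6    # 170 MeV per fission, expressed in eV
combustion_energy_ev = 4.0   # assumed ballpark: a few eV per atom burned

ratio = fission_energy_ev / combustion_energy_ev
print(f"fission / combustion energy ratio ~ {ratio:.0e}")

# Consistent with the Primer: the ratio comfortably exceeds 10^7.
assert ratio > 1e7
```

Even if you take the combustion figure anywhere from 1 to 10 eV, the ratio stays in the 10⁷–10⁸ range, which is exactly the claim.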

LANL Efforts to Speed Up Government Technology Evaluations

My favorite document so far in this treasure trove, though, is a more contemporary read: the 1976 Cray-1 Evaluation Final Report by LANL.  It reads just like the eval reports my group at DIA would write on hardware or software decades later, even though it’s a snapshot in time of the supercomputing needs of the U.S. government, specifically the energy and nuclear weapons research communities. Today, the entire IC/DoD community and their industry partners share complaints about how long it takes to perform an evaluation or procurement of new technologies.  LANL addressed that squarely in looking at the Cray-1 back in 1976:

“This performance evaluation of the Cray-1 computer was structured to determine if the Cray-1 meets the minimum performance standards set forth by the Los Alamos Scientific Laboratory (LASL) and the Energy Research and Development Administration (ERDA) to qualify the machine for further consideration for procurement… The growing complexity of the programmatic efforts at the Los Alamos Scientific Laboratory (LASL) in weapons design, laser fusion and isotope separation, magnetic fusion energy research, reactor safety, and other basic and exploratory research requires calculations that are beyond the capability of the fastest computer currently installed in the LASL Central Computing Facility – the CDC 7600. Thus, LASL needs a computer that can meet the growing requirements for computational speed.”

“For this reason, LASL, in conjunction with the Energy Research and Development Administration (ERDA) and the Federal Computer Performance Evaluation and Simulation Center (FEDSIM), undertook to evaluate a new class of scientific computer system. This class has been identified by ERDA as ‘Class VI’ – one that is capable of executing 20 to 60 million floating point operations per second.”

“Because computers of this class are necessarily pushing the state of the art of both computer architecture and electronic technology, there have been worries expressed in ERDA and Congress that the usual procurement procedures did not afford adequate protection to the Government against failure of these machines to fulfill their specifications for reliability and performance. The basic reason for this is that Class VI computers are very complex, and considerable time is required to convert previous application programs so that an adequate evaluation of the product can be assured. By the time reliability and performance evaluations can be conducted, the usual protections for the Government have elapsed. To avoid a repetition of previous ERDA experiences with similar computers, a plan was designed that offered vendors an equal opportunity to compete and yet provided a large measure of protection for the Government…”

“The hypothesis that the Cray-1 is at least two times faster than the CDC 7600 for scalar kernels was satisfied in all tests… The conclusion of this evaluation is that the Cray-1 meets every performance criterion established by LASL and ERDA to qualify the machine for further consideration for procurement.”
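The pass/fail criterion in that quote – “at least two times faster than the CDC 7600 for scalar kernels” – is just a speedup ratio computed from kernel timings. A minimal sketch of the methodology, with hypothetical timing numbers of my own invention standing in for the 1976 measurements:

```python
# Sketch of the report's acceptance test: a candidate machine passes a
# scalar kernel if its speedup over the baseline is at least 2x.
# The timings below are hypothetical placeholders, not 1976 data.

def speedup(t_baseline: float, t_candidate: float) -> float:
    """Speedup of the candidate over the baseline for one kernel."""
    return t_baseline / t_candidate

# Hypothetical per-kernel runtimes in seconds (baseline, candidate):
kernel_timings = {
    "kernel_A": (1.00, 0.45),
    "kernel_B": (2.40, 1.10),
    "kernel_C": (0.80, 0.38),
}

for name, (t_7600, t_cray) in kernel_timings.items():
    s = speedup(t_7600, t_cray)
    verdict = "pass" if s >= 2.0 else "fail"
    print(f"{name}: speedup {s:.2f}x -> {verdict}")
```

With these made-up numbers every kernel clears the 2× bar, which mirrors the report’s finding that “the hypothesis … was satisfied in all tests.”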

LANL and the intelligence community did wind up buying Crays, of course, and if you haven’t visited the National Cryptologic Museum up on NSA property at Ft. Meade, then you haven’t seen the Cray X-MP/24 on display there; it’s open to the public, just drive on up!  “The X-MP/24 on display is the upgrade to the original X-MP/22 that was the first supercomputer Cray ever delivered to a customer site. It was in operation from 1983 to 1993 and was arguably the most powerful computer in the world when it was delivered. It used serial processing to conduct 420 million operations per second.”

