About plutonium bombs

Design principle

The basic idea of a fission bomb is to turn a subcritical mass of fissile material into a supercritical mass by means of conventional chemical explosives. A supercritical mass is nothing more than an amount of fissile material in a geometrical form and size large enough, compared with the mean free path (some 10 cm or 4") of the fast neutrons released upon a fission event, for those neutrons to cause further fissions. In other words: the neutrons should not be able to escape from the material, making a divergent nuclear chain reaction possible (until the thing blows apart). As you may intuitively guess, a spherical form is most suitable for this.

For a uranium bomb containing highly enriched uranium (say 70-95% U-235), one can simply use a gun-type device, shooting a subcritical spherical piece into an incomplete and thereby also subcritical sphere, resulting in one supercritical sphere.

[Figure 1: Plutonium bomb implosion principle]

For plutonium bombs, a more sophisticated design is necessary, as you can see in figure 1. Here, one uses a spherical implosion lens system to compress a subcritical plutonium metal sphere, which then goes supercritical. In either design, the nuclear explosion is triggered by a central neutron source. The reason for the different design is that some plutonium isotopes (notably Pu-240) show spontaneous fission as a minor yet significant decay mode, which can cause premature ignition and thus lower the explosive yield. A neutron reflecting tamper shield (made of U-238 or beryllium) is always used to improve the neutron economy and effectively lower the critical mass by at least a factor of two.

Critical mass

Critical masses can be calculated quite accurately. The important parameters are fission cross sections, the average neutron yield upon fission, and the mass density. The latter depends more heavily on the integrity of the metal lattice than on the isotopic composition, since the mass differences between the different plutonium isotopes are almost negligible.

[Figure 2: Critical mass curve]

Without a neutron reflecting shield, pure Pu-239 metal has a critical mass of 10 kg, and I have calculated that for a "reactor grade" isotopic mixture it would be 18 kg. With a 15 cm U-238 shield, the Pu-239 critical mass is only slightly over 4 kg, while for LWR-produced plutonium (65% thermally fissile isotopes, fuel burnup around 40 MWd/kg HM) it is some 7 kg. You can see this in figure 2.

One should not forget that in a nuclear reactor the Pu-240 and higher isotopes essentially stem from Pu-239. Hence a mixture containing only even isotopes is merely hypothetical, but even then the critical mass would stay below 20 kg. Because of the use of compression techniques, the plutonium mass actually needed for one bomb is even lower than the critical mass in figure 2. Since modern warheads contain an estimated 2 to 4 kg of weapon grade plutonium, the reactor grade equivalent can be as low as 5 kg. One could even use plutonium directly in the oxide form; the required mass would then only be higher by a factor of about 1.5.

The pure U-235 metal equivalent is 16 kg. Given that high uranium enrichment is very costly and very hard to hide, this clearly shows why plutonium reprocessing is a much more attractive route to bomb material, even accepting the somewhat more difficult implosion technology.

The "You can't make a bomb with LWR plutonium" myths

For obvious reasons the reprocessing lobby has a few myths of its own concerning plutonium bombs, which have been repeated over and over again. As usual, their statements contain some truth and some suggestive half-truth, which together are meant to persuade you to draw the "right" conclusions. It is very instructive to examine the facts first and then discover how such myths are made up. Once you have developed some feeling for their methods, the nuke lobby propaganda becomes quite transparent, if not predictable.

In the table below, you can see some comparable numbers for the individual plutonium isotopes and for so-called (super) weapon grade and LWR grade (~ 40 MWd/kg HM) plutonium. The composition depends on the fuel burn-up and on the reactor type used (!). The reactivity equals the average number of neutrons released upon fission times the fission chance, and it is normalized to 1 for Pu-239. Finally, the spontaneous fission (SF) rate is measured in neutrons per gram per second. There is of course a connection between the SF rate and the alpha decay half-life. For any plutonium composition (vector), the reactivity and SF rate can be simply calculated by weighting the isotope values with the vector. If you want to make a quick estimate, you might as well consider the "fissile" part to be merely Pu-239 and the rest Pu-240, and just use the 1600 n/gs figure.

Pu "mixture"	 Pu vector 	Normalized	SF rate
				reactivity	(n/gs)
--------------------------------------------------------------
Pu-239 		(100%,0,0,0,0)	    1.0		0.03
Pu-240 		(0,100%,0,0,0)	    0.6		1600
Pu-241 		(0,0,100%,0,0)	    1.1		   0
Pu-242 		(0,0,0,100%,0)	    0.6		1670
Pu-238 		(0,0,0,0,100%) 	    1.1		3440
--------------------------------------------------------------
Super grade 	(96,3,1,0,0)	    1.0		  48
Weapons grade 	(91,6,2,1,0)	    1.0		 113
Reactor grade 	(51,28,14,5,2)	    0.9		 600
--------------------------------------------------------------
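
As a minimal sketch of the weighting just described (the per-isotope values are taken from the table above; the function name is mine), the mixture rows can be reproduced like this:

```python
# Per-isotope values from the table, in vector order
# (Pu-239, Pu-240, Pu-241, Pu-242, Pu-238).
REACTIVITY = (1.0, 0.6, 1.1, 0.6, 1.1)    # normalized to Pu-239 = 1
SF_RATE    = (0.03, 1600, 0, 1670, 3440)  # spontaneous fission, n/gs

def mixture(vector_percent):
    """Weight the isotope values with a plutonium vector (in percent)."""
    w = [p / 100.0 for p in vector_percent]
    reactivity = sum(wi * r for wi, r in zip(w, REACTIVITY))
    sf_rate    = sum(wi * s for wi, s in zip(w, SF_RATE))
    return reactivity, sf_rate

# Weapons grade (91,6,2,1,0): reactivity ~1.0, SF rate ~113 n/gs
print(mixture((91, 6, 2, 1, 0)))
# Reactor grade (51,28,14,5,2): reactivity ~0.9, SF rate ~600 n/gs
print(mixture((51, 28, 14, 5, 2)))
```

Rounding the results indeed gives back the bottom rows of the table.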

This shows that:

  • The difference in reactivity between the so-called "fissile" odd-numbered isotopes and the "non-fissile" even-numbered ones is not extremely big for fast neutrons, contrary to the reactivity for thermal neutrons in an LWR. In a bomb, every plutonium isotope is fissile.
  • The difference in spontaneous fission rate matters comparatively little, since compression speeds need to be high enough for the mass to go supercritical within a fraction of a second anyway. Although the explosive yield of an LWR plutonium bomb may be less predictable, it will certainly work. This has been experimentally confirmed by the US in 1962 and even earlier by the British (with Magnox-produced Pu from commercially burnt-up fuel).

It should not be hard now to see how easily one may be misinformed by phrases like "Reactor grade plutonium contains too little fissile Pu-239 to make a bomb" or the more sophisticated "Reactor grade plutonium contains too much non-fissile Pu-240, which is not suitable because of its high spontaneous fission rate". Sometimes it is argued that reactor grade plutonium is less attractive to work with, due to the decay radiation of Pu-238 and Pu-241. True as this may be, it is an inconvenience rather than a serious obstacle: if one can manage the radiation in a reprocessing plant, a bomb manufacturing plant should not be too hard either.

I have not yet heard arguments concerning the 14-year half-life of Pu-241, which indeed may have to be taken into account: one should either make sure that, years or even decades after building the bomb, enough plutonium is left to reach sufficient criticality upon compression (which amounts to using a big enough mass margin), or one should periodically "refresh" the plutonium.
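
The size of that margin follows from simple exponential decay. A sketch, using the 14-year figure quoted above (the function name is mine):

```python
# Fraction of the original Pu-241 remaining after t years, using the
# ~14-year half-life quoted above (Pu-241 decays to Am-241).
def pu241_remaining(t_years, half_life_years=14.0):
    return 0.5 ** (t_years / half_life_years)

print(pu241_remaining(14))  # 0.5: half is left after one half-life
print(pu241_remaining(28))  # 0.25: a quarter after two half-lives
```

So a bomb meant to sit in storage for decades must either budget for the lost Pu-241 up front or have its plutonium reworked periodically.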

Bomb yields and effects

It is not my purpose to describe the devastation caused by a nuclear bomb, but it is useful to give you an idea about explosive yields and their relation to the size of the affected areas. Conventional explosive yields are usually expressed in terms of kg TNT equivalent, and so are nuclear bomb yields. Very heavy conventional explosives may have yields in the range of a metric ton of TNT. Fission bombs have yields in the kiloton range; more sophisticated fusion-based thermonuclear weapons get into the megaton range. We will not discuss the latter's techniques.

In principle, all major effects like shockwaves, heatwaves and direct radiation show a 1/r^2 decrease (r means radius), the latter two being also exponentially attenuated. Their dependence on the yield is not linear; instead, the affected radius scales as the square or cube root of the yield. Hence the scale difference between a kiloton and a megaton bomb is a radius factor of 10 to 30, not 1000.
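
That radius factor is a one-line calculation. A sketch, assuming cube-root (n = 3) to square-root (n = 2) scaling as stated above (the function name is mine):

```python
# Radius-of-effect scale factor between two yields, assuming the
# affected radius scales as the n-th root of the yield,
# with n = 3 (cube root) to n = 2 (square root).
def radius_factor(y1_kton, y2_kton, n):
    return (y2_kton / y1_kton) ** (1.0 / n)

# Kiloton vs megaton: a radius factor of 10 (cube root)
# to about 32 (square root), not 1000.
print(radius_factor(1, 1000, 3))
print(radius_factor(1, 1000, 2))
```

The same formula gives the factor of 3 to 4 between a 1 and a 20 kton bomb mentioned below.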

Since each fission event produces about 200 MeV, 1 kton TNT equivalent requires as little as some 50 grams of material to fission completely. In spite of uncertainties -- of the actual yield as well as of the current level of implosion technology we can expect from an average state or group -- I am no outcast for expecting an LWR plutonium bomb to reach at least 1-20 kton TNT equivalent using the US' 1945 designs (the Trinity test bomb reached 20 kton and the Nagasaki bomb 22 kton). The affected area scale difference between 1 and 20 kton is a factor of 3 or 4 in radius. A technologically well developed state should probably be able to make a Nagasaki-like bomb with LWR-produced plutonium, perhaps even with plutonium from used MOX fuel. And for all I know, the yield may even be higher (as if that really matters). In the following table, some yield parameters are shown.

Example/event	           Yield	     Contents	   Fissioned
-------------------------------------------------------------------
Hypothetical, LWR Pu	1-20 kton	   ~ 5-7 kg RG Pu	  0.05-1 kg
Trinity test, 1945  	  20 kton	     ~ 6 kg WG Pu	       1 kg
Totem I test, 1953  	  12 kton	     ~ 80% Pu-fis	     0.7 kg
Hiroshima, 1945     	  12 kton	     ~ 50 kg HEU*	     0.7 kg
Nagasaki, 1945      	  22 kton	     ~ 7 kg WG Pu	     1.2 kg
Thermonuclear bomb  	~ 1000 kton
-------------------------------------------------------------------
* U-235 enrichment was only 70%.
RG (Reactor Grade) Pu is presumed to have a 65% odd-isotope 
part (Pu-fis), and WG (Weapons Grade) Pu means something 
like 90% Pu-239 (I presume).
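
The "some 50 grams per kiloton" figure, and with it the Fissioned column above, can be checked with a back-of-the-envelope calculation from the 200 MeV per fission quoted earlier (constant names are mine):

```python
# Mass of Pu-239 that must fission completely to release 1 kton TNT
# equivalent, assuming ~200 MeV released per fission event.
MEV_TO_J  = 1.602e-13   # joules per MeV
KTON_TO_J = 4.184e12    # joules per kton TNT (by definition)
AVOGADRO  = 6.022e23    # atoms per mole

fissions_per_kton = KTON_TO_J / (200 * MEV_TO_J)
grams_per_kton = fissions_per_kton / AVOGADRO * 239  # Pu-239 molar mass
print(round(grams_per_kton))  # roughly 50 g
```

Multiplying by the yields in the table gives back the Fissioned column, e.g. about 1 kg for the 20 kton Trinity test.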

The Totem I test

Los Alamos dissident J.C. Mark deserves most of the credit for warning us about the possibility of using commercial LWR-produced plutonium in nuclear bombs. He disclosed a lot of information, notably about a US test involving LWR plutonium in 1962, writing:

"I would like to warn people concerned with such problems
 that the old notion that reactor grade plutonium is incapable
 of producing explosions -- or that plutonium could easily be 
 rendered harmless by the addition of modest amounts of the 
 isotope Pu-240, or 'denatured', as the phrase used to go -- 
 that these notions have been dangerously exaggerated."
 (In Feld et al, 1971, pp 137-138)

The British have performed a somewhat similar test. I found this story in a popular book about the Chernobyl disaster, written by a team of The Observer. The Totem test took place in 1953, just two years before the British Magnox system became commercially available. Magnox reactors used metallic fuel with relatively low burn-ups, commercially some 3-10 MWd/kg HM (comparable with the CANDU reactor). Their modern equivalents are the AGRs, which still use carbon dioxide cooling but have oxide fuel instead of metallic. The Totem I plutonium must have contained at least 17% Pu-240. Neither I nor Greenpeace had heard of this story before, which is why it is quoted here. Note how closely civil and military purposes can be linked. The French have behaved much the same.

"In 1953, Britain exploded a relatively small (12 kiloton) 
 bomb, code-named Totem I, in a hastily prepared desert site 
 at Emu. The choice of this site was imposed on the British 
 military because their previous island site at Monte Bello 
 had become too contaminated for re-use. Totem I had all the 
 elements of rush, secrecy, negligence and over-optimism 
 that characterized the atom bomb tests of the period. In 
 this case, the test was carried out to discover if 
 plutonium from civil reactors could be used to make atomic 
 bombs. Plutonium from [civil] Magnox reactors is not ideal
 for bomb making. It is contaminated with the isotope 
 plutonium-240 which does not support fission as well as 
 plutonium-239. However, if the Totem test showed that 
 plutonium-240 could be used as a significant atom bomb 
 component, it would 'lead to economies in the long run', 
 the British defense minister Earl Alexander was briefed in 
 a very short top secret paper. Behind the test lay the 
 warning of Churchill's scientific advisor Lord Cherwell 
 that a British rejection of nuclear power would be 
 'national suicide'.

 The Totem test worked. But it also sent a cigar-shaped 
 cloud drifting 150 km north over an Aborigine encampment
 because the bomb had been exploded in unsuitable weather
 conditions. The Aborigines experienced vomiting and 
 blindness and some were exposed to up to 80 rems of 
 radiation from the terrifying 'black mist' that enveloped
 them. In addition, bomber pilots were required to fly
 through the radioactive cloud to carry out measurements
 for scientists on the ground. Unfortunately, the cloud 
 was much 'hotter' than anticipated and the planes were 
 contaminated and left unusable. Monitoring instruments 
 also proved to be inadequate. Some pilots and mechanics 
 were exposed to up to 50 rems of radiation which led Air 
 Vice Marshall Daley of the Australian Air Force to write
 angrily to the British government: 'We were firmly told 
 that this was not a hazard. Now it appears there was a
 hazard'."
 (N. Hawkes et al., The worst accident in the world, 
  London Observer, 1986, pp 58-59)

Figure 1 is a modified scan from Kenneth S. Krane's "Introductory Nuclear Physics" (Wiley), which got me through my nuclear physics exam, and figure 2 is reproduced from a figure in "The Nuclear Heritage", written (in Dutch) by Willem de Ruiter and my graduate teacher Bart van der Sijde. The figure also appeared in SIPRI's yearbooks. Most data in this field is drawn from the work of A. de Volpi and of J.C. Mark. The numbers used for figure 2 originate from De Volpi's work and are quoted in some books as well.