If it were ever possible to control at will, the rate of disintegration of the radio elements, an enormous amount of energy could be obtained from a small amount of matter.

– Ernest Rutherford (1904), the father of nuclear science

In 1934, just two years after the discovery of the neutron, the young Italian physicist Enrico Fermi conducted a series of experiments in Rome bombarding uranium with neutrons, producing what he initially believed to be the first elements heavier than uranium. The prevailing idea at the time was that striking a nucleus as large as uranium's with a neutron could only produce very small changes in its number of protons and/or neutrons. One German chemist, Ida Noddack, published a paper soon after Fermi's experiments suggesting that the uranium might instead have been split into lighter elements, accurately anticipating the discovery of fission by four years. Tragically, her paper was largely ignored by the scientific community and was never followed up on.

Four years later, in Germany, the chemists Otto Hahn and Fritz Strassmann conducted similar experiments, bombarding uranium with neutrons from a radium-beryllium source and finding among the products much lighter elements such as barium (roughly half the atomic mass of uranium). This was a puzzling discovery for the two scientists: the elemental products of the bombardment were cumulatively lighter than the original uranium, hinting that some of the mass had left the process as energy.

Instead of immediately publishing their results, Hahn and Strassmann opted to consult Lise Meitner, an Austrian physicist colleague of theirs who had been forced to take refuge in Sweden after fleeing Nazi Germany. Little did they know that this fateful decision would ultimately confirm Einstein's 1905 theoretical relationship between mass and energy (energy equals mass times the speed of light squared).

Over that Christmas holiday, Meitner was visited by her nephew, Otto Frisch, who at the time was a physicist working at what is now the Niels Bohr Institute in Copenhagen. Together, the pair adopted a nuclear model earlier proposed by the Russian physicist George Gamow, in which the nucleus is viewed as a liquid drop. Their idea was that after being struck by a neutron, the uranium nucleus could, much like a droplet of water or a cell during binary fission, elongate and pinch off into two separate 'drops'. After this split, the two 'drops' would be driven apart by their mutual electric repulsion at a high energy, estimated at about 200 MeV. Meitner calculated that the two daughter nuclei would together be lighter than the original uranium nucleus by about one-fifth the mass of a proton. Plugging this mass defect into Einstein's formula (E = mc²) yielded the same value of about 200 MeV, illustrating how the lost mass had been converted into energy. Frisch returned to Copenhagen immediately after Christmas dinner, where he discussed their findings with Niels Bohr.
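
As a rough check on that figure (a back-of-envelope sketch, not Meitner's original working), the snippet below converts a mass defect of one-fifth of a proton mass into energy via E = mc²:

```python
# Back-of-envelope check of the ~200 MeV fission energy estimate (a sketch,
# not the historical calculation): convert a mass defect of ~1/5 of a proton
# mass into energy with E = m * c^2.

PROTON_MASS_KG = 1.672622e-27   # proton rest mass, kg
C = 2.99792458e8                # speed of light, m/s
J_PER_MEV = 1.602177e-13        # joules per MeV

mass_defect_kg = PROTON_MASS_KG / 5        # about one-fifth of a proton mass
energy_joules = mass_defect_kg * C**2      # E = m c^2
energy_mev = energy_joules / J_PER_MEV

print(f"Energy per fission ~ {energy_mev:.0f} MeV")   # ~188 MeV, i.e. roughly 200 MeV
```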

Otto Hahn (left) and Lise Meitner (right)

In 1939, Bohr visited the United States and personally discussed the new Hahn-Strassmann-Meitner discoveries with Einstein. Bohr also happened to meet Fermi at a conference on theoretical physics in Washington, D.C., where the two were among the first to discuss the possibility of a self-sustaining nuclear chain reaction: if each fission could be made to emit enough secondary neutrons to perpetuate the reaction, the process would release enormous amounts of energy. This inevitably led to the conceptual design of the atomic bomb, a prospect that deeply saddened Meitner, who saw her discoveries lead to such destructive potential.

Nonetheless, collaborative efforts were soon underway to design a uranium-based nuclear reactor capable of bringing a critical mass of uranium together under the proper conditions to sustain such a chain reaction.

The first, and most notable, of these efforts came in 1941 from Fermi and his associate Leo Szilard. The two proposed a design in which uranium would be embedded in a stack of graphite to form a cube-like lattice of fissionable material. By 1942, Fermi was leading a group of scientists at the University of Chicago in constructing the world's very first nuclear reactor, Chicago Pile-1 (pictured at the top of the page). The reactor was built on the floor of a squash court beneath the stands of the university's football stadium. In addition to the uranium and graphite, the reactor contained control rods made of cadmium, a metallic element that readily absorbs neutrons. The control rods provided a means of manually slowing down or speeding up the chain reaction: inserting them into the pile left fewer neutrons available to split atoms, and withdrawing them left more. On the morning of December 2, 1942, Fermi began the first demonstration of Chicago Pile-1, withdrawing the control rods a few inches at a time over several hours. By 3:25 p.m., the nuclear reaction had become self-sustaining, and the world had officially entered the age of nuclear energy.
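
To illustrate the role of the control rods (a toy model, not CP-1's actual neutronics), the sketch below tracks the neutron population using the effective multiplication factor k, the average number of neutrons from one fission generation that go on to cause a fission in the next; absorbing rods lower k, withdrawing them raises it:

```python
# Toy model of chain-reaction control (an illustration, not CP-1's actual physics).
# k is the effective multiplication factor: the average number of neutrons from one
# fission generation that cause a fission in the next. Inserting cadmium rods absorbs
# neutrons and lowers k; withdrawing them raises k.

def neutron_population(k: float, start: float = 1000.0, generations: int = 10) -> float:
    """Return the neutron population after a number of fission generations."""
    population = start
    for _ in range(generations):
        population *= k
    return population

for k in (0.95, 1.00, 1.05):   # rods inserted / exactly critical / rods withdrawn a bit
    final = neutron_population(k)
    trend = "dies out" if k < 1 else ("holds steady (critical)" if k == 1 else "grows")
    print(f"k = {k:.2f}: population after 10 generations ~ {final:,.0f} ({trend})")
```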


Most atomic research from this point on focused on the development of an effective nuclear weapon for use in WWII and was conducted under the code name the Manhattan Project. On August 6, 1945, the U.S. dropped the first atomic bomb, Little Boy, on Hiroshima, Japan. Just three days later a second bomb, Fat Man, was dropped on Nagasaki, prompting Japan's surrender on August 15 and bringing an end to WWII.

President Harry S. Truman signing the Atomic Energy Act of 1946

In 1946, Congress created the Atomic Energy Commission (AEC), which authorized funding for the construction of Experimental Breeder Reactor I in Idaho, intended for peaceful, civilian use. Another milestone in nuclear history was reached there on December 20, 1951, when the reactor successfully generated electricity from nuclear energy for the first time, lighting four 200-watt bulbs.

Two years later, President Eisenhower delivered his ‘Atoms for Peace’ speech before the United Nations, calling for a greater cooperative international effort to develop peaceful applications of nuclear energy. Less than a year later, Eisenhower signed the amended Atomic Energy Act of 1954, granting the civilian nuclear power program vastly more access to existing nuclear technology, and the speech went on to inspire the creation of the International Atomic Energy Agency (IAEA) in 1957, founded with the intent of “promoting the peaceful use of nuclear energy and inhibiting its use for military purposes”.

President Dwight D. Eisenhower giving his ‘Atoms for Peace’ speech to the UN General Assembly in New York City on December 8, 1953

Around this time, the overarching goal of nuclear research was to prove that the technology could be viable for providing electricity to the public on a commercial scale. This was first achieved by a light-water reactor (LWR) in Shippingport, PA, which reached full design power just three weeks after beginning operation in 1957.

The success at Shippingport spurred the development of other reactor technologies, not only in government hands but also in the private sector, and through the late 1950s and early 1960s the nuclear power industry flourished. The emerging potential of Molten Salt Reactor (MSR) technology became a subject of particular interest to Director Alvin Weinberg and his team at Oak Ridge National Laboratory (ORNL), who had previously been tasked with producing a nuclear-powered bomber to overcome the range limitations of contemporary jet-fueled aircraft, a project titled Aircraft Nuclear Propulsion (ANP).

The ANP had been ORNL's biggest program of the 1950s, by itself consuming a quarter of the laboratory's entire budget, and ironically, for such an expensive investment, Weinberg did not appear to have much faith in its practical feasibility or success. Nonetheless, it was the only avenue that allowed the laboratory to continue pursuing reactor development, and it did eventually result in the successful construction and operation of the world's first molten-salt-fueled and -cooled aircraft reactor prototype in 1954, a project titled the Aircraft Reactor Experiment (ARE). However, due to concerns about potential radiation hazards to both aircraft crews and civilians on the ground in the event of a crash or shoot-down, the newly inaugurated President John F. Kennedy canceled the program in 1961.

AEC Chairman Glenn T. Seaborg (left) and ORNL Director Alvin Weinberg (right) at the site of the Molten Salt Reactor Experiment (MSRE)

The Molten Salt Reactor Experiment (MSRE) grew out of what Weinberg called the military's 'daft idea' of a nuclear-powered aircraft, and the ANP and ARE projects allowed ORNL to shift its focus onto a civilian version of the meltdown-proof MSR. The technology was particularly unique in that it allowed the chemistry of the liquid fuel salt to be altered during normal reactor operation in order to selectively remove fission products and add new fuel, a procedure called online processing.

Early research into this technology suggested that thorium/uranium tetrafluoride salt mixtures carried in FLiBe (a lithium fluoride and beryllium fluoride salt) would make a highly effective and inherently safe fuel, allowing reactors to operate at extremely high temperatures while remaining at atmospheric pressure, a far more desirable set of conditions than operating at high pressure.


Unfortunately, in a politically motivated initiative in 1969, President Richard Nixon insisted that the AEC's breeder reactor research be cut to a single line of development. This ultimately meant the premature demise of the molten salt reactor, with further research and funding directed into liquid-metal fast breeders using plutonium-239 as the fissile fuel, concentrated in Southern California, where Nixon hailed from and wished to see job growth. The decision was cemented in late 1969 by Milton Shaw, the head of the AEC's reactor development program, who ordered the shutdown of the MSRE. The experiment had gone critical four years earlier (a single year after its construction) and had logged over 6,000 full-power hours.

Once the MSRE was fully cancelled by 1972, more funding was mobilized for the development of a liquid-metal fast breeder reactor on the Clinch River in Tennessee, a project that ran several billion dollars over its initial projected cost before President Jimmy Carter moved to cancel it. The molten salt reactor project, however, was never restarted, and many of the chemists and engineers who had worked on it died before much of the world, decades later, came to realize the true implications of their work.


On November 19, 1969, nuclear electric power arrived on the moon when the Apollo 12 astronauts deployed the AEC's SNAP-27 nuclear generator on the lunar surface, powering the experiment package they left behind. Without generators like this, space travel and lunar or planetary colonization become unfeasible as the desired distance increases, simply due to energy constraints: solar power can only take you so far (and solar energy storage was, and still is, inefficient), while wind and hydraulic sources of power are not options in the vacuum of space.

Apollo 12 astronaut Alan L. Bean removing the SNAP-27 (1969)

Sadly, the nuclear power industry headed into decline through the 1970s and early 1980s as public concern emerged and crystallized around the perception of various nuclear issues, namely radiation and weapons proliferation. This was especially noticeable after the 1979 loss-of-coolant accident at the Three Mile Island nuclear plant near Harrisburg, PA. Although nobody was injured and the radiation released to the environment was not at harmful levels, the Nuclear Regulatory Commission (NRC) imposed stricter reactor safety regulations and more rigid inspection procedures. There were also long-lasting repercussions for the nuclear industry, driven primarily by public distrust, underscoring the importance of operator training, probabilistic risk assessment, and timely, informed public communication.

Around this time, thorium and rare earth elements became linked at a geopolitical level because of senseless changes to NRC/IAEA regulations in the 1980s that reclassified monazite as a ‘source material’ due to its thorium content. This resulted in the closure, bankruptcy, or off-shoring to China of nearly the entire related rare earth value chain, largely because U.S. mining companies did not want to suddenly be held accountable for sourcing fertile radioactive material (the abundant thorium that always comes coupled with the mining of heavy rare earths). The result is a Chinese global monopoly over the production of value-added rare earth products, a situation that has forced global technology companies to shift their manufacturing to China, whose government has made abundantly clear its desire to “develop and control intellectual property around thorium [and thorium molten salt technology] for its own benefit”.


The reality is that the U.S. mining industry alone dumps material containing over 85% of the world's annual rare earth demand into tailings ponds in order to avoid these counterproductive NRC and IAEA regulations. These valuable materials (a conservative annual estimate of about 10,000 tons of thorium, enough in energy content to power the entire planet, on top of the commercial and industrial value of the rare earths themselves) are still easily recoverable, with no direct mining cost and no need to develop new rare earth mines; recovering them has been the primary focus of key U.S. thorium advocates such as Jim Kennedy and John Kutsch.
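
For a sense of scale behind that claim, here is a rough back-of-envelope sketch (an approximation added here, not a figure from the original text) assuming every thorium atom is eventually fissioned in a breeder reactor and releases about 200 MeV:

```python
# Back-of-envelope energy content of ~10,000 tons of thorium (a sketch under the
# assumption of complete fission at ~200 MeV per atom; real reactors fall short of this).

AVOGADRO = 6.022e23          # atoms per mole
TH_MOLAR_MASS_G = 232.0      # grams per mole of thorium-232
MEV_PER_FISSION = 200.0      # approximate energy released per fission
J_PER_MEV = 1.602e-13        # joules per MeV

thorium_tonnes = 10_000
atoms = thorium_tonnes * 1e6 / TH_MOLAR_MASS_G * AVOGADRO
energy_joules = atoms * MEV_PER_FISSION * J_PER_MEV

WORLD_PRIMARY_ENERGY_J = 6e20   # rough annual world primary energy use (~600 EJ)

print(f"Thermal energy in {thorium_tonnes:,} t of thorium: ~{energy_joules:.1e} J")
print(f"Roughly {energy_joules / WORLD_PRIMARY_ENERGY_J:.1f}x one year of world "
      "primary energy consumption")
```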


By 1983, nuclear power was generating more electricity than natural gas, and by the following year it had overtaken hydropower as the second-largest source of electricity in the U.S., behind coal.

In parallel, on April 26, 1986, operator error caused two explosions at reactor No. 4 of the Chernobyl nuclear powerplant in the Soviet Union. The reactor was housed in a woefully inadequate containment building, and as a result dangerous quantities of iodine-131 and cesium-137 escaped into the surrounding environment. It is worth noting that a plant of this design would not have been licensed in the United States.

On December 22, 1987, Congress amended the Nuclear Waste Policy Act of 1982 (NWPA) and President Reagan signed the amendment into law, directing the Department of Energy to investigate Yucca Mountain in Nevada as the sole candidate site for the disposal of high-level radioactive waste (much to the distaste of Nevada's state representatives).

The test site of the Department of Energy’s Yucca Mountain Project in Nevada

By 1988, U.S. demand for electricity had increased by 50% since the early 1970s, and by early 1991, the U.S. had 111 fully operational nuclear powerplants with a combined capacity of 99,673 megawatts. This was still twice as many nuclear powerplants as any other single country in the world, and at this time, approximately 22% of the electricity produced in the U.S. came from nuclear energy.

By the end of 1991, 31 other countries had established their own nuclear powerplant programs or had reactors under construction, a testament to the promise behind the technology. In October 1992, Congress passed the Energy Policy Act, which shifted focus onto maintaining strict reactor safety and design standards, reducing both economic and regulatory risk, and establishing an effective waste program.

The U.S. Department of Energy has since taken part in various joint efforts with private companies such as Westinghouse and General Electric to develop next-generation powerplants, with the prevailing plan for existing radioactive waste being to store it for extended periods of time, isolated from the environment and the rest of civilization. The main hurdles to nuclear technology advancement, however, have been strict licensing and regulatory bodies, deeply misinformed negative political and public perception, and an overall lack of support for the technology, due in large part to misinformation tactics and coordinated funding of anti-nuclear activists by certain competing energy industries (e.g., coal and natural gas).

Beyond energy production, nuclear technology found many other remarkably useful applications in the 1990s, such as the use of radioisotopes to identify and investigate the cause of disease within the body, to measure microscopic thicknesses, and to detect irregularities in metal castings. Archaeologists also use radiometric dating to accurately date prehistoric artifacts, and radiographic techniques can locate structural defects in preserved statues and buildings. Another industrial example is food preservation: irradiation can be used instead of canning, freezing, or drying, preventing excessive vitamin loss.

The future of nuclear energy will be shaped over the next 7-13+ years by the Generation III and IV reactor designs currently under research and development.


Listed below are some of the different technologies by which nuclear energy is currently harnessed:

Light Water Reactors (LWR)
Subdivided into boiling water and pressurized water designs.

LWR


Heavy Water Reactors (HWR)
Employs heavy water (deuterium oxide) as the moderator, e.g., the CANDU design.

CANDU


Graphite Moderated Reactors (GMR)
Categorized by gas or water cooling.

A Light Water Graphite-moderated Reactor (LWGR/RBMK)


Fast Breeder Reactors (FBR)
Can also utilize liquid metal (LMFBR).

LMFBR