The Making of the Atomic Bomb
By Professor Lyudmila Godenko, Brooklyn College, 2019
E-Book Project of Brooklyn College OER Project
Welcome to the course “The Making of the Atomic Bomb”.
Spencer R. Weart, the former director of the Center for History of Physics of the American Institute of Physics (AIP), observed in his book "Scientists in Power" (1979): "We must be curious to learn how such a set of objects - hundreds of power plants, thousands of bombs, tens of thousands of people massed in national establishments - can be traced back to a few people sitting at laboratory benches discussing the peculiar behavior of one type of atom."
Despite considerable efforts toward nuclear weapons nonproliferation, we still live in a world that could easily be destroyed by nuclear weapons. Today, more countries are pursuing nuclear programs whose goal is to make the assembly of nuclear weapons possible. Discouraging countries from developing nuclear weapons while promoting the use of nuclear technology for peaceful purposes—such as energy generation and medical research—has been the main challenge world leaders have grappled with since the United States detonated the first nuclear weapons more than seventy years ago. With growing security threats from nuclear-capable countries, nuclear nonproliferation remains a critical issue today.
This course will discuss the history of the development of the atomic bomb. A series of scientific breakthroughs in atomic and nuclear physics during the nineteenth and the first half of the twentieth century made the atomic bomb possible. We also discuss the political context in which the bomb was developed, the personal stories of the leading scientists involved, and the moral issues arising from the development and use of the bomb. There is no development in modern history that has had more impact on man's scientific, political, and moral consciousness than the making of the atomic bomb and its use against the Japanese at the end of WWII. It is a singularity of such power that its ultimate consequences for humanity are still beyond our perception. This course attempts to tell the story primarily from the point of view of the history of the science involved. Students will also see the need for an integrated perspective in order to understand how science, political history, ethical values, and personal motivations are interconnected in this story. To understand this story is to understand the complexities and responsibilities that have accompanied the emergence of modern society.
Objectives of this course include
an understanding of the progression of scientific discoveries that clarified the scientific picture of the atom and the atomic nucleus during the years from 1895 to 1945;
an understanding of the interplay between laboratory experiments and the development of theoretical knowledge: how some experiments proceed by design to clarify specific issues, while others stumble on new and totally unexpected phenomena;
an understanding of how numbers, even of such abstract and esoteric quantities as cross sections, can change the direction of investigations;
an understanding of how the incredibly small amount of matter inside the atomic nucleus can, through the multiplying effect of a chain reaction, produce a prodigious release of energy in a nuclear reactor or atomic bomb;
an understanding of the massive scale of the Manhattan Project and the enormous challenges involved in the creation of weapons-grade fissionable material; the story of the development of the atomic bomb in the historical context of the political history of the first half of the twentieth century;
an understanding of the motivations and personalities of the scientists who pressed for US Government involvement leading up to the Manhattan Project;
an understanding of the need for an integrated perspective on how science, political history, ethical values, and personal motivations are interconnected;
an understanding of the role that nuclear weapons and nuclear power have played in popular culture since 1945.
The materials of the course include a variety of historical and contemporary books and articles, web resources, historical texts, documentary films, feature films, and PowerPoint presentations.
Because the course is usually taught to students who are not science majors and who often have only a very basic knowledge of physics, I have found it useful to add appendixes in which some pieces of scientific information are collected. They serve mostly for further inquiry.
The teaching of this course at Brooklyn College, CUNY was started by Prof. Albert Bond. He created the first syllabus of the course and established the ideas of how it should be taught. In my own teaching I follow our discussions and findings.
The materials on which the teaching of this course is based come mostly from several excellent books. First of all, the classic "The Making of the Atomic Bomb" by Richard Rhodes (Simon & Schuster, 2012), which is the best source of the knowledge and ideas taught in the course. Then the relatively new books "The History and Science of the Manhattan Project" by Bruce Cameron Reed (Springer-Verlag, 2013) and "The Physics of the Manhattan Project" by Bruce Cameron Reed (Springer-Verlag, 2015). These are scientific books and were used to discuss some questions in the course not only qualitatively but also quantitatively. I also have to mention the book from the National Security History Series, "The Manhattan Project: Making the Atomic Bomb" by F. G. Gosling (DOE/MA-0001; Washington: History Division, Department of Energy, December 2005).
This course is rather popular at Brooklyn College, CUNY. Sometimes I have three sections (each with a capacity of 50 students) running in the same semester. Students usually find this course very interesting and informative. We sometimes have heated discussions, very well done presentations, and even films and poetry devoted to the events we consider in the course. I would like to thank my former and future students for sharing my passion for the course.
Many of my friends and colleagues advised me to put together all the materials about the course that I had found and make them available to students. This book is the result of that work. I thank everybody, and especially David Mashkevich for constant inspiration.
Table of Contents
Topic 1: Early History. Atomic theory in the 1800’s. Dalton’s law of multiple proportions. Development of the molecular theory of gases. Loschmidt, and the first estimation of molecular sizes and masses.
Topic 2: Years 1895-1900: Experiments with gas discharge tubes and photographic emulsions. The discoveries of X-rays, radioactivity, and the electron. The nature and the effects of ionizing radiation.
Topic 3: Radioactivity 1900-1910: Marie and Pierre Curie and the search for radioactive elements. Ernest Rutherford and classification of types of ionizing radiation. The discoveries of radioactive transmutation, half-lives and isotopes. Frederic Soddy and the first estimates of energy from radioactive decay vs. energy from chemical reactions.
Topic 4: The nuclear atom 1911-1920. Alpha scattering and the discovery of the nucleus. Nuclear sizes vs atomic sizes. Niels Bohr and the structure of the nuclear atom. Moseley’s work with x-rays and the significance of the atomic number.
Topic 7: Years 1932-1934: The discovery of the neutron. Reinterpretation of nuclear structure. Leo Szilard and the concept of a nuclear chain reaction. Discoveries of the positron and artificial radioactivity.
Topic 8: Years 1935-1938: Enrico Fermi’s discoveries in neutron activation and neutron moderation. Bohr’s development of the liquid drop model of the nucleus. The puzzle of the neutron bombardment of uranium.
Topic 9: Years 1938-1939: Otto Hahn’s discovery of nuclear fission. Interpretation of fission by Lise Meitner and Otto Frisch; spread of the news to U.S. and initial reactions and experimental verifications. Bohr’s interpretation of the significance of U-235.
Topic 10: Years 1939-1942: The discovery of neutrons from fission. The awakening of the Germans to the potential consequences of fission. Einstein’s letter to FDR. The discoveries of neptunium and plutonium. Pearl Harbor. The entrance of the US into the war and its effect on fission research. The Chicago pile.
Topic 11: Years 1942-1945: General Leslie Groves, Robert Oppenheimer, and the Manhattan Project. Oak Ridge, Hanford, and Los Alamos. The separation of U-235 and the production of plutonium. The development of the implosion lens.
Topic 12: Year 1945: The "Dragon" experiments on critical mass. The death of President Roosevelt. The Trinity test. Harry Truman and Potsdam. The decision to use the bomb. Hiroshima and Nagasaki. Final perspectives on war in the 20th century, nuclear proliferation, and the challenge of nuclear terrorism.
Topic 1: Early History. Atomic theory in the 1800’s. Dalton’s law of multiple proportions. Development of the molecular theory of gases. Loschmidt, and the first estimation of molecular sizes and masses.
Democritus, “the laughing philosopher,” had ideas far in advance of his time
The first "atomic theorists" we have any record of were two fifth-century BC Greeks, Leucippus of Miletus (a town now in Turkey) and Democritus of Abdera. Their theories were naturally more philosophical than experimental in origin. The basic idea was that if you could look at matter on smaller and smaller scales (which they of course couldn't), ultimately you would see individual atoms, objects that could not be divided further (that was the definition of atom). Everything was made up of these atoms, which moved around in a void (a vacuum). The different physical properties of materials - color, taste, and so on - came about because the atoms in them had different shapes and/or arrangements and orientations with respect to each other. This was all pure conjecture, but the physical pictures they described sometimes seem uncannily accurate. These Greek philosophers believed that atoms were in constant motion, and always had been, at least in gases and liquids. Sometimes, however, as a result of their close-locking shapes, they joined in close-packed unions, forming materials such as rock or iron. Basically, Democritus and his followers had a very mechanical picture of the universe. They thought all natural phenomena could in principle be understood in terms of interacting, usually moving, atoms. This left no room for gods to intervene. Their atomic picture included the mind and even the soul, which therefore did not survive death. This was in fact a cheerful alternative to the popular religions of the day, in which the gods constantly intervened, often in unpleasant ways, and death was to be dreaded because punishments would surely follow.
Little conceptual progress in atomic theory was made over the next two thousand years, in large part because Aristotle discredited it.
In his "On Generation and Corruption," Aristotle related each of the four elements - Earth, Water, Air, and Fire - to two of the four sensible qualities: hot, cold, wet, and dry. In this scheme, all matter was made of the four elements in differing proportions. Aristotle's scheme added the heavenly Ether, the divine substance of the heavenly spheres, stars, and planets.
Aristotle’s views held sway through the Middle Ages.
Things began to look up with the Renaissance. Galileo Galilei, the Italian astronomer, physicist, and engineer, believed in atoms, although, like the early Greeks, he seemed to confuse the idea of physical indivisibility with that of having zero spatial extent, i.e. being a mathematical point. Nevertheless, his ideas in this area apparently got him into theological hot water. The Church felt that the doctrine of transubstantiation - the belief that the bread and wine literally became the body and blood of Christ - was difficult to believe if everything was made up of atoms. This was an echo of the tension between atoms and religion two thousand years earlier. Galileo's theory of atoms was not very well developed. He gives the impression in some places that they were infinitely small, and in view of his excellent grasp of dimensional scaling arguments, he may have thought that vacuum suction between infinitesimally small surfaces would suffice to hold solids together, since smaller objects have proportionately more surface. Of course, this was on the wrong track. (Ironically, shortly after Galileo's death, his student Torricelli was the first to realize that suction forces were really a result of air pressure from the weight of the atmosphere.)
Pre-Chemistry: Including Newton the Alchemist
Sir Isaac Newton was an English mathematician, physicist, and astronomer, and one of the most influential scientists of all time. His works laid the foundations of classical mechanics. He also made important contributions to optics and was one of the inventors of calculus. Newton formulated the laws of motion and universal gravitation that shaped the scientific viewpoint for a long time afterwards.
He was also interested in chemistry. Newton thought that part of chemistry (especially the physical part) could be explained in terms of the mechanics of corpuscles, but that there was something more important: a harder-to-pin-down vital spirit, which was the basis of life (and also somehow connected with mercury and other elements). He also felt this was the key to the way God ran the universe; the merely mechanical interaction of corpuscles could not, in his opinion, generate the rich variety of life. Newton probably spent more time studying alchemy than he did working on his laws, gravitation, and calculus combined! In fact, Newton probed "the whole vast literature of the older alchemy as it has never been probed before or since," according to a recent historical study. He also used quite precise quantitative measures in many of his investigations. This did not provide the insight into mass conservation that Lavoisier's work did a century later, probably because Newton didn't count the various gases absorbed or emitted; these were still considered incidental and not really important to the reactions. Also, maybe they didn't smell too great: a recipe for preparing phosphorus that Newton copied from Boyle begins "Take of Urin one Barrel." Not that this matters too much as far as developing the atomic concept is concerned.
On the positive side, the alchemists, in their fruitless quest to turn lead into gold (and find the elixir of life, etc.), did get very skillful at managing a great variety of chemical reactions, and so learned the properties of many substances. The alchemists' point of view was based on Aristotle's four elements - earth, air, fire, and water - but they added what they called principles. For example, there was an active principle in air, important in respiration and combustion. There was an acidic principle, and others. And then there was phlogiston. Looking at something in flames, it seems pretty clear that something is escaping the material. This they called phlogiston. After Boyle discovered that metals become heavier on combustion, it was decided that phlogiston had negative weight.
The first major step towards modern quantitative chemistry was taken by Lavoisier towards the end of the eighteenth century. Lavoisier was a French chemist who was central to the eighteenth-century chemical revolution. He realized that combustion was a chemical reaction between the material being burned and a component of the air. He carried out reactions in closed vessels so that he could keep track of the amounts of the various reagents involved. One of his great discoveries was that in reactions, the total final weight of all the materials involved is exactly equal to the total initial weight. This was the first step on the road to thinking about chemistry in terms of atoms. He also established that pure water was not transmuted to earth by heating, as had long been believed; the residue left on boiling dry came from the container if the water itself was pure. Lavoisier discovered oxygen. He was the first to realize that air has two (major) components, only one of which supports respiration, meaning life, and combustion. In 1783, working with the mathematician Laplace, and a guinea pig in a mask, he checked out quantitatively that the animal used breathed-in oxygen to form what we now term carbon dioxide (this is the origin of the "guinea pig" as experimental subject). Lavoisier tightened up the very loose terminology in use at that time: there were no generally agreed-upon definitions of elements, principles, or atoms, although a century earlier Boyle had suggested that "element" be reserved for substances that could not be further separated chemically. Lavoisier began the modern study of chemistry: he insisted on precise terminology and on precise measurement, and suggested as part of the agenda the classification of substances into elements and compounds. Once this program was truly underway, the atomic interpretation soon appeared.
John Dalton was English chemist, physicist, and meteorologist. He was born into a poor family near Manchester, England. He supported himself to some extent by teaching from the age of twelve, when he started his own small Quaker school. Dalton wrote A New System of Chemical Philosophy, from which the following quotes are taken: Matter, though divisible in an extreme degree, is nevertheless not infinitely divisible. That is, there must be some point beyond which we cannot go in the division of matter. The existence of these ultimate particles of matter can scarcely be doubted, though they are probably much too small ever to be exhibited by microscopic improvements. I have chosen the word atom to signify these ultimate particles …
He assumed that all atoms of an element were identical, and that atoms of one element could not be changed into atoms of another element "by any power we can control." He assumed further that compounds of elements had compound atoms: "I call an ultimate particle of carbonic acid a compound atom. Now, though this atom may be divided, yet it ceases to be carbonic acid, being resolved by such division into charcoal and oxygen." He also asserted that all compound atoms (molecules, as we would say) for a particular compound were identical, and, furthermore: "Chemical analysis and synthesis go no farther than to the separation of particles one from another, and to their reunion. No creation or destruction of matter is within reach of chemical agency." By Dalton's time it had become clear that when elements combine to form a particular compound, they always do so in precisely the same ratio by weight. For example, when hydrogen burns in oxygen to form water, one gram of hydrogen combines with eight grams of oxygen. This constancy is to be expected in Dalton's theory: presumably the compound atom, or molecule, of water has a fixed number of hydrogen atoms and a fixed number of oxygen atoms. Of course, the weight ratio doesn't tell us the numbers, since we don't know the relative weights of the hydrogen atom and the oxygen atom. To make any progress, some assumptions are necessary. Dalton suggested a rule of greatest simplicity: if two elements form only one compound, assume the compound atom has only one atom of each element. Since H2O2 had not been discovered, he assumed water was HO. (He actually used symbols to represent the elements; H was a circle with a dot in the center. However, just as we do, he used strings of such symbols to represent an actual molecule, not a macroscopic mixture.) On putting together data on many different reactions, it became apparent to Dalton that the rule of greatest simplicity wasn't necessarily correct; by 1810 he was suggesting that the water molecule perhaps contained three atoms.
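Dalton's difficulty can be made concrete with a short calculation: the measured 1:8 hydrogen-to-oxygen weight ratio in water pins down oxygen's relative atomic weight only once a molecular formula is assumed. A minimal sketch (the function name and the convention of setting hydrogen's atomic weight to 1 are my own illustrative choices):

```python
# The 1:8 hydrogen-to-oxygen weight ratio in water fixes the relative
# atomic weight of oxygen only after a molecular formula is assumed.
# Hydrogen's atomic weight is set to 1 by convention.

def implied_oxygen_weight(n_hydrogen, n_oxygen, o_to_h_mass_ratio=8.0):
    """Oxygen's atomic weight (relative to hydrogen = 1), given an
    assumed formula with n_hydrogen H atoms and n_oxygen O atoms
    and the measured O:H mass ratio in water."""
    return o_to_h_mass_ratio * n_hydrogen / n_oxygen

# Dalton's rule of greatest simplicity, HO: oxygen comes out as 8
print(implied_oxygen_weight(1, 1))

# The correct formula, H2O: oxygen comes out as 16
print(implied_oxygen_weight(2, 1))
```

The same measurement thus supports either answer, which is why Dalton's assumed formulas had to be revised as data from many reactions accumulated.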
The main points of Dalton’s atomic theory are:
Everything is composed of atoms, which are the indivisible building blocks of matter and cannot be destroyed. All atoms of an element are identical.
The atoms of different elements vary in size and mass.
Compounds are produced through different whole-number combinations of atoms.
A chemical reaction results in the rearrangement of atoms in the reactant and product compounds.
Atomic theory has been revised over the years to incorporate the existence of atomic isotopes and the interconversion of mass and energy. In addition, the discovery of subatomic particles has shown that atoms can be divided into smaller parts. However, Dalton’s importance in the development of modern atomic theory has been recognized by the designation of the atomic mass unit as a Dalton.
Dalton's Law of Multiple Proportions
One of the strongest arguments for Dalton's atomic theory was the Law of Multiple Proportions. John Dalton developed the law of multiple proportions (first presented in 1803) by studying and expanding upon the works of Antoine Lavoisier and Joseph Proust.
Proust had studied tin oxides and found that their masses were either 88.1% tin and 11.9% oxygen or 78.7% tin and 21.3% oxygen (these were tin(II) oxide and tin dioxide respectively). Dalton noted from these percentages that 100g of tin will combine either with 13.5g or 27g of oxygen; 13.5 and 27 form a ratio of 1:2. Dalton found an atomic theory of matter could elegantly explain this common pattern in chemistry – in the case of Proust’s tin oxides, one tin atom will combine with either one or two oxygen atoms.
The law of multiple proportions states that when two elements combine with each other to form more than one compound, the weights of one element that combine with a fixed weight of the other are in a ratio of small whole numbers. For example, there are five distinct oxides of nitrogen, and the weights of oxygen in combination with 14 grams of nitrogen are, in increasing order, 8, 16, 24, 32, and 40 grams, i.e. in the ratio 1:2:3:4:5.
Dalton also believed atomic theory could explain why water absorbed different gases in different proportions: for example, he found that water absorbed carbon dioxide far better than it absorbed nitrogen. Dalton hypothesized this was due to the differences in the mass and complexity of the gases’ respective particles. Indeed, carbon dioxide molecules (CO2) are heavier and larger than nitrogen molecules (N2).
Dalton proposed that each chemical element is composed of atoms of a single, unique type, and though they cannot be altered or destroyed by chemical means, they can combine to form more complex structures (chemical compounds). Since Dalton reached his conclusions by experimentation and examination of the results in an empirical fashion, this marked the first truly scientific theory of the atom.
Kinetic Theory of Gases.
The idea of the atom was first applied to discuss the properties of gases. The theory that describes gases as collections of atoms is called the Kinetic Molecular Theory (KMT). The kinetic theory describes a gas as a large number of submicroscopic particles (atoms or molecules), all of which are in constant, rapid motion whose randomness arises from their many collisions with each other and with the walls of the container.
Kinetic theory explains macroscopic properties of gases, such as pressure, temperature, viscosity, thermal conductivity, and volume, by considering their molecular composition and motion. The theory posits that gas pressure is due to the impacts, on the walls of a container, of molecules or atoms moving at different velocities. Kinetic theory also defines temperature in terms of the average kinetic energy of the molecules. Under a microscope, the molecules making up a liquid are too small to be visible, but the jittery motion of pollen grains or dust particles can be seen. Known as Brownian motion, it results directly from collisions between the grains or particles and the liquid molecules. As analyzed by Albert Einstein in 1905, this experimental evidence for kinetic theory is generally seen as having confirmed the concrete material existence of atoms and molecules.
The theory for ideal gases makes the following assumptions:
1. The gas consists of very small particles known as molecules. Their size is so small that the total volume of the individual gas molecules, added up, is negligible compared to the volume of the smallest ball containing all the molecules. This is equivalent to stating that the average distance separating the gas particles is large compared to their size.
2. These particles all have the same mass.
3. The number of molecules is so large that statistical treatment can be applied.
4. These molecules are in constant, random, and rapid motion.
5. The rapidly moving particles constantly collide among themselves and with the walls of the container. All these collisions are perfectly elastic.
6. Except during collisions, the interactions among molecules are negligible.
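These assumptions lead to the standard kinetic-theory relation between temperature and the average kinetic energy of a molecule, (1/2) m <v^2> = (3/2) k T, which gives a feel for how fast gas molecules actually move. A rough illustrative sketch (the relation itself is standard kinetic theory but is not derived in the text):

```python
import math

# Kinetic theory ties temperature to average molecular kinetic energy:
# (1/2) m <v^2> = (3/2) k T, so the root-mean-square speed is
# v_rms = sqrt(3 k T / m).

K_BOLTZMANN = 1.380649e-23   # Boltzmann constant, J/K
AMU = 1.66053907e-27         # atomic mass unit, kg

def v_rms(molecular_mass_amu, temperature_k):
    """Root-mean-square speed (m/s) of a molecule of the given mass."""
    m = molecular_mass_amu * AMU
    return math.sqrt(3 * K_BOLTZMANN * temperature_k / m)

# Nitrogen (N2, about 28 amu) at room temperature (293 K):
# the molecules move at roughly 500 m/s.
print(round(v_rms(28, 293)), "m/s")
```

Speeds of this order (hundreds of meters per second) are what produce the measurable gas pressure described above.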
First Estimation of Molecular Sizes and Masses
When atoms and molecules were still quite hypothetical, Johann Josef Loschmidt, a pioneer of nineteenth-century physics and chemistry, used kinetic theory to obtain the first reasonable estimate of molecular size. In 1865, Loschmidt was the first to estimate the size of the molecules that make up the air; his result was only about twice the true size, a remarkable feat given the approximations he had to make. His method allowed the size of any gas molecule to be related to measurable phenomena, and hence made it possible to determine how many molecules are present in a given volume of gas. This latter quantity is now known as the Loschmidt constant in his honor, and its modern value is 2.69 × 10^19 molecules per cubic centimeter at standard temperature and pressure (STP). His estimate of the size of an air molecule came to
S = 9.69 × 10^-10 m, or roughly 1.0 × 10^-9 m = 1.0 nm.
Today's value for the size of an air molecule, depending on the type of gas, is approximately 0.3 nm.
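The modern value of the Loschmidt constant quoted above can be checked with a back-of-the-envelope calculation from the ideal gas law p = n k T (a modern shortcut; Loschmidt's own derivation went through kinetic theory and the mean free path):

```python
# Number density of an ideal gas at STP from p = n k T, i.e. n = p / (k T).
# STP is taken here as 273.15 K and 101325 Pa, the convention that matches
# the quoted value of the Loschmidt constant.

K_BOLTZMANN = 1.380649e-23   # J/K
P_STP = 101325.0             # Pa
T_STP = 273.15               # K

n_per_m3 = P_STP / (K_BOLTZMANN * T_STP)   # molecules per cubic meter
n_per_cm3 = n_per_m3 * 1e-6                # molecules per cubic centimeter

# ~2.687e19 molecules/cm^3, matching the quoted 2.69e19
print(f"{n_per_cm3:.3e} molecules/cm^3")
```

The agreement with the quoted 2.69 × 10^19 per cubic centimeter is a nice consistency check between kinetic theory and the macroscopic gas laws.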
Developing the Atomic Theory: Gas Discharge Tubes
Many scientists modified and elaborated on Dalton's atomic theory. The first major advances became possible with the development of gas discharge tubes. Sir William Crookes was the leader in experiments with gas discharge tubes.
In 1838, Michael Faraday passed a current through a rarefied air-filled glass tube and noticed a strange light arc with its beginning at the cathode (negative electrode) and its end almost at the anode (positive electrode).
In the 1870s, British physicist William Crookes and others were able to evacuate tubes to a pressure below 10^-6 atm. These were called Crookes tubes. Faraday had been the first to notice a dark space just in front of the cathode, where there was no luminescence. This came to be called the cathode dark space, Faraday dark space, or Crookes dark space. Crookes found that as he pumped more air out of the tubes, the Faraday dark space spread down the tube from the cathode toward the anode, until the tube was totally dark. But at the anode (positive) end of the tube, the glass of the tube itself began to glow. What was happening was that as more air was pumped from the tubes, the electrons could travel farther, on average, before they struck a gas atom. By the time the tube was dark, most of the electrons could travel in straight lines from the cathode to the anode end of the tube without a collision. With no obstructions, these low-mass particles were accelerated to high velocities by the voltage between the electrodes. When the rays reached the anode end of the tube, they were traveling so fast that, although they were attracted to it, they often flew past the anode and struck the back wall of the tube. When they struck atoms in the glass wall, they excited their orbital electrons to higher energy levels, causing them to fluoresce.
Later researchers painted the inside back wall with fluorescent chemicals such as zinc sulfide to make the glow more visible. The rays themselves are invisible, but this accidental fluorescence allowed researchers to notice that objects in the tube in front of the cathode, such as the anode, cast sharp-edged shadows on the glowing back wall. In 1869, German physicist Johann Hittorf was the first to realize that something must be traveling in straight lines from the cathode to cast the shadows. Eugene Goldstein named them cathode rays.
Topic 2: Years 1895-1900: Experiments with gas discharge tubes and photographic emulsions. The discoveries of X-rays, radioactivity, and the electron. The nature and the effects of ionizing radiation.
Discovery of X-Rays
The era of "modern" physics is usually considered to have begun in late 1895, when Wilhelm Conrad Roentgen, working in Germany, accidentally discovered X-rays. Roentgen discovered that not only could his mysterious rays pass through objects such as his hand, but they also ionized air when they passed through it; this was the first known example of what we now call "ionizing radiation." Roentgen's experiments at Würzburg University focused on light phenomena and other emissions generated by discharging electrical current in so-called "Crookes tubes" - glass bulbs with positive and negative electrodes, evacuated of air, which display a fluorescent glow when a high-voltage current is passed through them. He was particularly interested in cathode rays and in assessing their range outside of charged tubes.
On November 8, 1895, Roentgen noticed that when he shielded the tube with heavy black cardboard, the green fluorescent light caused a platinobarium screen nine feet away to glow - too far away to be reacting to the cathode rays as he understood them. He determined that the fluorescence was caused by invisible rays originating from the Crookes tube he was using to study cathode rays (later recognized as electrons), which penetrated the opaque black paper wrapped around the tube. Further experiments revealed that this new type of ray was capable of passing through most substances, including the soft tissues of the body, but left bones and metals visible. One of the earliest photographic plates from his experiments was a film of his wife Bertha's hand, with her wedding ring clearly visible.
To test his observations and enhance his scientific data, Roentgen plunged into seven weeks of meticulously planned and executed experiments. On December 28, he submitted his first "provisional" communication, "On a New Kind of Rays," to the Proceedings of the Würzburg Physico-Medical Society. In January 1896 he made his first public presentation before the same society, following his lecture with a demonstration: he made a plate of the hand of an attending anatomist, who proposed that the new discovery be named "Roentgen's Rays."
The news spread rapidly throughout the world. Thomas Edison was among those eager to perfect Roentgen's discovery, developing a handheld fluoroscope, although he failed to make a commercial "X-ray lamp" for domestic use. The apparatus for producing X-rays was soon widely available, and studios opened to take "bone portraits," further fueling public interest and imagination. Poems about X-rays appeared in popular journals, and the metaphorical use of the rays popped up in political cartoons, short stories, and advertising. Detectives touted the use of Roentgen devices in following unfaithful spouses, and lead underwear was manufactured to foil attempts at peeking with "X-ray glasses."
As frivolous as such reactions may seem, the medical community quickly recognized the importance of Roentgen's discovery. By February 1896, X-rays were finding their first clinical use in the US, at Dartmouth College in Hanover, NH, where Edwin Brant Frost produced a plate of a patient's Colles fracture for his brother, a local doctor. Soon attempts were made to insert metal rods or inject radio-opaque substances to give clear pictures of organs and vessels, with mixed results. The first angiography, moving-picture X-rays, and military radiology were performed in early 1896.
In addition to the diagnostic powers of X-rays, some experimentalists began applying the rays to treating disease. Since the early 19th century, electrotherapy had proved popular for the temporary relief of real and imagined pains. The same apparatus could generate X-rays. In January 1896, only a few days after the announcement of Roentgen's work, a Chicago electrotherapist named Emil Grubbe irradiated a woman with a recurrent cancer of the breast, and by the end of the year several researchers had noted the palliative effects of the rays on cancers. Others found remarkable results in the treatment of surface lesions and skin problems, while still others investigated the possible bactericidal action of the rays. X-rays even found cosmetic uses in depilatory clinics set up in the US and France.
Roentgen was awarded the first Nobel Prize in physics in 1901 for his discovery.
Discovery of Radioactivity
Part of Roentgen's discovery involved X-rays illuminating a phosphorescent screen, a fact which caught the attention of the French physicist Antoine Henri Becquerel. Becquerel was an expert in the phenomenon of phosphorescence, in which a material emits light in response to illumination by light of another color. He wondered whether phosphorescent materials such as uranium salts might be induced to emit X-rays if they were exposed to sunlight. The supposition was wrong, but investigating it led Becquerel, in early 1896, to one of the most famous accidental discoveries in the history of physics: on an overcast day in March 1896, he opened a drawer and found the evidence of spontaneous radioactivity.
Becquerel first heard about Roentgen’s discovery in January 1896 at a meeting of the French Academy of Sciences. After learning about Roentgen’s finding, Becquerel began looking for a connection between the phosphorescence he had already been investigating and the newly discovered x-rays. Becquerel thought that the phosphorescent uranium salts he had been studying might absorb sunlight and reemit it as x-rays.
To test this idea (which turned out to be wrong), Becquerel wrapped photographic plates in black paper so that sunlight could not reach them. He then placed crystals of uranium salt on top of the wrapped plates and put the whole setup outside in the sun. When he developed the plates, he saw an outline of the crystals. He also placed objects such as coins or cut-out metal shapes between the crystals and the photographic plate, and found that he could produce outlines of those shapes on the plates, as though the salt were emitting a penetrating radiation similar to x-rays. He reported this result at the French Academy of Sciences meeting on February 24, 1896.
Seeking further confirmation of what he had found, he planned to continue his experiments. But the weather in Paris did not cooperate; it became overcast for the next several days in late February. Thinking he couldn’t do any research without bright sunlight, Becquerel put his uranium crystals and photographic plates away in a drawer.
On March 1, he opened the drawer and developed the plates, expecting to see only a very weak image. Instead, the image was amazingly clear.
The next day, March 2, Becquerel reported at the Academy of Sciences that the uranium salts emitted radiation without any stimulation from sunlight.
Many people have wondered why Becquerel developed the plates at all on that cloudy March 1, since he didn’t expect to see anything. Possibly he was motivated by simple scientific curiosity. Perhaps he was under pressure to have something to report at the next day’s meeting. Or maybe he was simply impatient.
Whatever his reason for developing the plates, Becquerel realized he had observed something significant. He did further tests to confirm that sunlight was indeed unnecessary, that the uranium salts emitted the radiation on their own.
At first he thought the effect was due to particularly long-lasting phosphorescence, but he soon discovered that non-phosphorescent uranium compounds exhibited the same effect. In May he announced that the element uranium was indeed what was emitting the radiation.
Becquerel initially believed his rays were similar to x-rays, but his further experiments showed that unlike x-rays, which are neutral, his rays could be deflected by electric or magnetic fields.
Discovery of the Electron: J.J. Thomson's 1897 Experiments and a Fundamental Building Block of Matter
The British physicist J.J. Thomson was the first to probe the interior of the atom. At the Cavendish Laboratory at Cambridge University, Thomson was experimenting with currents of electricity inside evacuated glass tubes. He was investigating a long-standing puzzle known as "cathode rays". His experiments prompted him to make a bold proposal: these mysterious rays are streams of particles much smaller than atoms; they are in fact minuscule pieces of atoms. He called these particles "corpuscles," and suggested that they might make up all of the matter in atoms. It was startling to imagine a particle residing inside the atom--most people thought that the atom was indivisible, the most fundamental unit of matter.
J.J. Thomson in his office
Thomson’s speculation was not unambiguously supported by his experiments. It took more experimental work by Thomson and others to sort out the confusion. The atom is now known to contain other particles as well. Yet Thomson's bold suggestion that cathode rays were material constituents of atoms turned out to be correct. The rays are made up of electrons: very small, negatively charged particles that are indeed fundamental parts of every atom.
"Could anything at first sight seem more impractical than a body which is so small that its mass is an insignificant fraction of the mass of an atom of hydrogen?"
Modern ideas and technologies based on the electron, leading to television and the computer and much else, evolved through many difficult steps. Thomson's careful experiments and adventurous hypotheses were followed by crucial experimental and theoretical work by many others in the United Kingdom, Germany, France and elsewhere. These physicists opened for us a new perspective--a view from inside the atom.
What could these rays be? One possibility was that they were waves traveling in a hypothetical invisible fluid called the "ether." At that time, many physicists thought that this ether was needed to carry light waves through apparently empty space. Maybe cathode rays were similar to light waves? Another possibility was that cathode rays were some kind of material particle. Yet many physicists, including J.J. Thomson, thought that all material particles themselves might be some kind of structures built out of ether, so these views were not so far apart.
Experiments were needed to resolve the uncertainties. When physicists moved a magnet near the glass, they found they could push the rays about. But when the German physicist Heinrich Hertz passed the rays through an electric field created by metal plates inside a cathode ray tube, the rays were not deflected in the way that would be expected of electrically charged particles. Hertz and his student Philipp Lenard also placed a thin metal foil in the path of the rays and saw that the glass still glowed, as though the rays slipped through the foil. Didn't that prove that cathode rays were some kind of waves?
Other experiments cast doubt on the idea that these were ordinary particles of matter, for example gas molecules as some suggested. In France, Jean Perrin had found that cathode rays carried a negative charge. In Germany, in January 1897 Emil Wiechert made a puzzling measurement indicating that the ratio of their mass to their charge was over a thousand times smaller than the ratio for the smallest charged atom. When Lenard passed cathode rays through a metal foil and measured how far they traveled through various gases, he concluded that if these were particles, they had to be very small.
Drawing on work by his colleagues, J.J. Thomson refined some previous experiments, designed some new ones, carefully gathered data, and then... made a bold speculative leap. Cathode rays are not only material particles, he suggested, but in fact building blocks of the atom: they are the long-sought basic unit of all matter in the universe.
Do atoms have parts? J.J. Thomson suggested that they do. He advanced the idea that cathode rays are really streams of very small pieces of atoms. Three experiments led him to this conclusion.
First, in a variation of an 1895 experiment by Jean Perrin, Thomson built a cathode ray tube ending in a pair of metal cylinders with a slit in them. These cylinders were in turn connected to an electrometer, a device for catching and measuring electrical charge. Perrin had found that cathode rays deposited an electric charge. Thomson wanted to see if, by bending the rays with a magnet, he could separate the charge from the rays. He found that when the rays entered the slit in the cylinders, the electrometer measured a large amount of negative charge. The electrometer did not register much electric charge if the rays were bent so they would not enter the slit. As Thomson saw it, the negative charge and the cathode rays must somehow be stuck together: you cannot separate the charge from the rays.
All attempts had failed when physicists tried to bend cathode rays with an electric field. Now Thomson thought of a new approach. A charged particle will normally curve as it moves through an electric field, but not if it is surrounded by a conductor (a sheath of copper, for example). Thomson suspected that the traces of gas remaining in the tube were being turned into an electrical conductor by the cathode rays themselves. To test this idea, he took great pains to extract nearly all of the gas from a tube, and found that now the cathode rays did bend in an electric field after all.
Thomson concluded from these two experiments, "I can see no escape from the conclusion that [cathode rays] are charges of negative electricity carried by particles of matter." But, he continued, "What are these particles? Are they atoms, or molecules, or matter in a still finer state of subdivision?"
Thomson's third experiment sought to determine the basic properties of the particles. Although he couldn't directly measure the mass or the electric charge of such a particle, he could measure how much the rays were bent by a magnetic field, and how much energy they carried. From these data he could calculate the ratio of the mass of a particle to its electric charge (m/e). He collected data using a variety of tubes and different gases. Based on his 1897 experiments, Thomson presented three hypotheses about cathode rays: that cathode rays are charged particles (his "corpuscles"); that these corpuscles are constituents of the atom; and that these corpuscles are the only constituents of the atom.
Rutherford's first major discovery at McGill occurred in 1900, when he found that, upon emitting its radiation, thorium simultaneously emitted a product which he termed "emanation." The emanation was also radioactive, and it happened to be a new noble gas, an isotope of radon (Rn). With Frederick Soddy, Rutherford created the "disintegration theory" of radioactivity, which regards radioactive phenomena as atomic--not molecular--processes. The theory was supported by a large amount of experimental evidence; a number of new radioactive substances were discovered, and their positions in the series of transformations were fixed.
When isolated, the emanation's radioactivity was observed to decline in a geometrical progression with time. Specifically, the activity decreased by a factor of one-half for every minute that elapsed. Rutherford and Soddy had discovered the property of radioactive half-life, the quintessential natural exponential decay process. As an example, suppose that at "time zero" you have 1,000 atoms of some isotope that has a half-life of 10 days. You can then state that 500 of them will have decayed after 10 days. You cannot predict which 500 will have decayed, however. Over the following 10 days, a further 250 of the remaining atoms will decay, and so on. Remarkably, the probability that a given atom will decay in some specified interval of time is completely independent of how long it has managed to avoid decaying; in the subatomic world, age is not a factor in the probability of continued longevity.
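The decay law just described can be sketched numerically. This is only an illustration (the function name and the use of Python are my own); the 1,000-atom, 10-day half-life figures come from the example above.

```python
# Exponential decay by half-lives: N(t) = N0 * (1/2)**(t / T_half).
# A sketch of the Rutherford-Soddy half-life law, not historical code.

def atoms_remaining(n0, t_half, t):
    """Expected number of undecayed atoms after time t."""
    return n0 * 0.5 ** (t / t_half)

n0, t_half = 1000, 10.0  # atoms, days (figures from the text)

print(atoms_remaining(n0, t_half, 10))  # 500.0 after one half-life
print(atoms_remaining(n0, t_half, 20))  # 250.0 after two half-lives
print(atoms_remaining(n0, t_half, 30))  # 125.0 after three
```

Note that the function gives only the expected count; as the text stresses, which particular atoms decay is entirely a matter of probability.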
The Nobel Prize in Chemistry 1908 was awarded to Ernest Rutherford "for his investigations into the disintegration of the elements, and the chemistry of radioactive substances".
The concept of isotopy first arose from evidence gathered in studies of natural radioactive decay chains. Substances that appeared in different decay chains through different modes of decay often seemed to have similar properties, but could not be separated from each other by chemical means. The term "isotope" was introduced in 1913 by Frederick Soddy, who had taken a position at the University of Glasgow. Soddy argued that the decay-chain evidence suggested that "the net positive charge of the nucleus is the number of the place which the element occupies in the periodic table." Basing his hypothesis on the then-current idea that the electrically neutral mass in nuclei was a combination of protons and electrons, Soddy went on to state that the "algebraic sum of the positive and negative charges in the nucleus, when the arithmetical sum is different, gives what I call 'isotopes' or 'isotopic elements,' because they occupy the same place in the periodic table." The word comes from the Greek "isos," meaning "equal," and "topos," meaning "place": isotopes of an element occupy the same place in the periodic table. In the same paper, Soddy also developed an ingenious argument to show that the electrons emitted in beta-decay had to be coming from within the nucleus, not from the "orbital" electrons.
Any change made to the nucleus of an atom instantly affects the structure of the atom itself. As soon as the nucleus has emitted an alpha particle, say, or an electron or positron, the delicate balance of electric charges between the nucleus and the orbiting electrons is broken. In order to regain stability, the atom is forced either to release some of its orbiting electrons or to attract more from outside.
Units of Energy in Nuclear Physics and the Energy of Radioactive Decay
When people consider the quantities of energy that they consume or produce, the unit of measure involved will likely be something such as the kilowatt-hours that appear on an electric bill or the food calories on a nutrition label. Science students will be familiar with units such as Joules and physical calories (1 cal = 4.187 J). The food calorie appearing on nutrition labels is equivalent to 1,000 physical calories, a so-called kilocalorie; it was introduced because the physical calorie used by physicists and chemists is inconveniently small for everyday use.
The words energy and power are often confused in common usage. Power is the rate at which energy is created or used. For physicists, the standard unit of power is the Watt, which is equivalent to producing (or consuming) one Joule of energy per second.
At the opposite extreme, the Joule is an inconveniently large unit for describing individual atomic processes. To address this, physicists who study such processes developed a handier unit of energy: the so-called electron-volt (eV), the kinetic energy gained by a single electron accelerated through an electric potential difference of one volt; 1 eV = 1.602 × 10⁻¹⁹ J.
Nuclear reactions are much more energetic than chemical ones, typically involving energies of millions of electron-volts (MeV). If a nuclear reaction liberates 1 MeV per atom involved (nucleus, really) while a chemical reaction liberates 10 eV per atom involved, the ratio of the nuclear to chemical energy releases will be 100,000. This begins to give you a hint as to the compelling power of nuclear weapons.
An "ordinary" bomb that contains 1,000 pounds of chemical explosive could be replaced with a nuclear bomb that utilizes only 1/100 of a pound of nuclear explosive, presuming that the weapons detonate with equal efficiency. Thousands of tons of conventional explosive can be replaced with a few tens of kilograms of nuclear explosive. Nuclear fission weapons like those used at Hiroshima and Nagasaki involved reactions which liberated about 200 MeV per reaction, so a nuclear explosion in which even only a small amount of the "explosive" actually reacts (say, one kilogram) can be incredibly devastating. It did not take physicists long to appreciate that natural radioactivity was accompanied by substantial energy releases.
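The comparison above can be checked with a few lines of arithmetic. This is only a sketch: the figures of 10 eV per chemical reaction and 1 MeV (or 200 MeV for fission) per nuclear reaction are the text's representative values, and the eV-to-Joule conversion factor is the standard one.

```python
# Rough comparison of chemical vs. nuclear energy scales per reaction.

EV_TO_J = 1.602e-19   # Joules per electron-volt (standard conversion)

chemical_ev = 10      # typical chemical reaction, eV per atom (text's figure)
nuclear_ev = 1e6      # representative nuclear reaction, 1 MeV
fission_ev = 200e6    # fission of one heavy nucleus, ~200 MeV

print(nuclear_ev / chemical_ev)   # 100000.0 -- the ratio quoted above
print(fission_ev * EV_TO_J)       # energy per fission, ~3.2e-11 J
```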
First Estimation of Energy from Radioactive Decay vs Energy from Chemical Reactions
In 1903, Pierre Curie and a collaborator, A. Laborde, found that just one gram of radium released on the order of 100 physical calories of heat energy per hour. Rutherford and Soddy were also on the same track. In a May 1903 paper titled "Radioactive Change," they wrote (expressed in modern units) that "the total energy of radiation during the disintegration of one gram of radium cannot be less than 10⁸ calories and may be between 10⁹ and 10¹⁰ calories. The union of hydrogen and oxygen liberates approximately 4 × 10³ calories per gram of water produced, and this reaction sets free more energy for a given weight than any other chemical change known. The energy of radioactive change must therefore be at least 20,000 times, and may be a million times, as great as the energy of any molecular change."
Another statistic Rutherford was fond of quoting was that a single gram of radium emitted enough energy during its life to raise a 500-ton weight a mile high. The moral of these numbers is that nuclear reactions liberate vastly more energy per reaction than any chemical reaction. As Rutherford and Soddy wrote: ‘‘All these considerations point to the conclusion that the energy latent in the atom must be enormous compared with that rendered free in ordinary chemical change.’’ That enormity would have profound consequences.
In modern terminology, an element's location in the periodic table is dictated by the number of protons in the nuclei of its atoms. This is known as the atomic number, and is designated by the letter Z. Atoms are usually electrically neutral, so the atomic number also specifies an atom's normal complement of electrons. Chemical reactions involve exchanges of so-called valence electrons, which are the outermost electrons of atoms. Quantum physics shows us that the number of electrons in an atom, and hence the number of protons in its nucleus, accounts for its chemical properties. The periodic table as it is published in chemistry texts is deliberately arranged so that elements with similar chemical properties (identical numbers of valence electrons) appear in the same column of the table. Elements with the same number of protons but different numbers of neutrons are called isotopes.
The number of neutrons in a nucleus is designated by the letter N, and the total number of neutrons plus protons is designated by the letter A: A = N + Z. A is known as the mass number, and also as the nucleon number; the term nucleon means either a proton or a neutron. By specifying Z and A, we specify a given isotope. Be careful: A is also used to designate the atomic weight of an element (or isotope) in grams per mole. The atomic weight and nucleon number of an isotope are always close, but the difference between them is important. The nucleon number is always an integer, but the atomic weight will have decimals. For example, the nucleon number of uranium-235 is 235, but the atomic weight of that species is 235.0439 g/mol.
The term nuclide is also sometimes encountered; strictly, a nuclide is any particular nuclear species, while isotopes are nuclides of the same element, but in practice the terms are often used interchangeably. In the general form of isotope notation, the element symbol X is preceded by a subscript, which is always the atomic number Z, and a superscript, which is always the mass number A.
For example, the oxygen that you are breathing while reading this passage consists of three stable isotopes: ¹⁶₈O, ¹⁷₈O, and ¹⁸₈O. All oxygen atoms have eight protons in their nuclei, but either eight, nine, or ten neutrons. These nuclides are also referred to as oxygen-16 (O-16), oxygen-17 (O-17), and oxygen-18 (O-18). By far the most common isotope of oxygen is the first one: 99.757% of naturally-occurring oxygen is O-16, with only 0.038% and 0.205% being O-17 and O-18, respectively. Three isotopes that will prove very important in the story of the Manhattan Project are uranium-235, uranium-238, and plutonium-239: ²³⁵₉₂U, ²³⁸₉₂U, and ²³⁹₉₄Pu.
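The bookkeeping described above (specify Z and A, and the neutron count follows as N = A − Z) can be illustrated with a short script. The isotope list comes from the text; the function and variable names are illustrative only.

```python
# Isotope bookkeeping: specifying Z and A fixes the isotope,
# and the neutron number is N = A - Z.

def neutrons(A, Z):
    """Neutron number from mass number A and atomic number Z."""
    return A - Z

isotopes = {            # name: (Z, A), taken from the passage above
    "O-16":   (8, 16),
    "O-17":   (8, 17),
    "O-18":   (8, 18),
    "U-235":  (92, 235),
    "U-238":  (92, 238),
    "Pu-239": (94, 239),
}

for name, (Z, A) in isotopes.items():
    print(f"{name}: Z={Z}, N={neutrons(A, Z)}, A={A}")
# e.g. U-235 and U-238 have 143 and 146 neutrons respectively
```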
The concepts of atomic number and isotopy developed over many years. The foundations of modern atomic theory can be traced back to 1803, when English chemist John Dalton put forth a hypothesis that all atoms of a given element are identical to each other and equal in weight. An important development in Dalton's time came about when chemical evidence indicated that the masses of atoms of various elements seemed to be very nearly equal to integer multiples of the mass of hydrogen atoms. This notion was formally hypothesized about 1815 by English physician and chemist William Prout, who postulated that all heavier elements are aggregates of hydrogen atoms. He called the hydrogen atom a "protyle," a forerunner of Ernest Rutherford's "proton." Parts of both Dalton's and Prout's hypotheses would be verified, but other aspects required modification. In particular, something looked suspicious about Prout's idea from the outset, as some elements had atomic weights that were not close to integer multiples of that of hydrogen. For example, chlorine atoms seemed to weigh 35.5 times as much as hydrogen atoms.
Topic 4: The nuclear atom 1911-1920. Alpha scattering and the discovery of the nucleus. Nuclear sizes vs atomic sizes. Niels Bohr and the structure of the nuclear atom. Moseley’s work with x-rays and the significance of the atomic number.
Alpha Particles and the Nuclear Atom
In the spring of 1907, Rutherford returned to England to take a position at Manchester University. When he arrived there, he made a list of promising research projects, one of which was to pin down the precise nature of alpha particles.
Based on experiments where the number of alphas emitted by a sample of radium had been counted and the charge carried by each had been determined, he had begun to suspect that they were ionized helium nuclei. However, he needed to trap a sample of alphas for confirming spectroscopic analysis. Working with student Thomas Royds, Rutherford accomplished this with one of his typically elegant experiments. In the Rutherford-Royds experiment, a sample of radon gas was trapped in a very thin-walled glass tube, which was itself surrounded by a thicker-walled tube. The space between the two tubes was evacuated, and the radon was allowed to decay for a week. The energetic radon alphas could easily penetrate through the 1/100-mm thick wall of the inner tube. During their flights they would pick up electrons, become neutralized, and then become trapped in the space between the tubes. The neutralized alphas were then drawn off for analysis, and clearly showed a helium spectrum. Rutherford and Royds published their finding in 1909. In the notation described in the preceding section, alpha particles are identical to helium-4 nuclei, ⁴₂He.
The discovery for which Rutherford is most famous is that atoms have nuclei; this also had its beginnings in 1909. One of the projects on Rutherford's to-do list was to investigate how alpha particles "scattered" from atoms when they (the alphas) were directed through a thin metal foil. At the time, the prevailing notion of the structure of an atom was of a cloud of positive electrical material within which were embedded negatively-charged electrons. Thomson had determined that electrons weighed about 1/1,800 as much as a hydrogen atom; since hydrogen was the lightest element, it seemed logical to presume that electrons were small in comparison to their host atoms. This picture has been likened to a pudding, with electrons playing the role of raisins inside the body of the pudding. Another line of atomic-structure evidence came from the chemistry community. From the bulk densities of elements and their atomic weights, it could be estimated that individual atoms behaved as if they were a few Angstroms in diameter (1 Å = 10⁻¹⁰ m). The few Angstroms presumably represented the size of the overall cloud of positive material. Rutherford had been experimenting with the passage of alpha particles through metal foils since his earliest days of radioactivity research, and all of his experience indicated that the vast majority of alphas were deflected by only a very few degrees from straight-line paths as they barreled their way through a layer of foil. This observation was in line with theoretical expectations. Thomson had calculated that the combination of the size of a positively-charged atomic sphere and the kinetic energy of an incoming alpha (itself also presumably a few Angstroms in size) would be such that the alpha would typically suffer only a small deflection from its initial trajectory. Deflections of a few degrees would be rare, and a deflection of 90° was expected to be so improbable as to never have any reasonable chance of being observed.
In the Thomson atomic model, a collision between an alpha and an atom should not be imagined as like that between two billiard balls, but rather more like two diffuse clouds of positive electricity passing through each other. The alphas would presumably strike a number of electrons during the collision, but the effect of the electrons' attractive force on the alphas would be negligible due to the vast difference in their masses, a factor of nearly 8,000. Electrons played no part in Rutherford's work. Rutherford was working with Hans Geiger, of Geiger counter fame, who was looking for a project to occupy an undergraduate student, Ernest Marsden, another New Zealand native. Rutherford suggested that Geiger and Marsden check to see if they could observe any large-angle deflections of alphas when they passed through a thin gold foil (this is why the experiment is usually called the gold-foil experiment), fully expecting a negative result. Gold was used because it could be pressed into a thin foil only about a thousand atoms thick. To Geiger and Marsden's surprise, a few alphas, about one in every 8,000, were bounced backward toward the direction from which they came.
The number of such reﬂections was small, but was orders of magnitude more than what was expected on the basis of Thomson’s model. Rutherford was later quoted as saying that the result was ‘‘almost as incredible as if you had ﬁred a 15-in. shell at a piece of tissue paper and it came back and hit you.’’
Geiger and Marsden published their anomalous result in July, 1909. The work of detecting the scattered alpha particles was very difficult. A Geiger counter could have been used to detect the alphas, but they had to be seen to get detailed information on their direction of travel. This was done by having the scattered alphas strike a phosphorescent screen; a small flash of light (a scintillation) was given off at the point where each alpha struck, and the flashes were counted by an observer peering through a microscope in a darkened room.
A sense of the scale of Rutherford's atom can be had by thinking of the lone proton that forms the nucleus of an ordinary hydrogen atom as scaled up to being two millimeters in diameter, about the size of an uncooked grain of rice. If this enlarged proton is placed at the center of a football field, the diameter of the lowest-energy electron orbit (that which comes closest to the nucleus) would reach to about the goal lines.
In giving us nuclei and being credited with the discovery of the positively-charged protons that they contain, Rutherford bequeathed us atoms that are largely empty space. The first public announcement of this new model of atomic structure seems to have been made on March 7, 1911, when Rutherford addressed the Manchester Literary and Philosophical Society; this date is often cited as the birthdate of the nuclear atom. The formal scientific publication came in July, and directly influenced Niels Bohr's famous atomic model, which was published two years later.
Rutherford's nucleus paper is a masterpiece of fusion of experimental evidence and theoretical reasoning. After showing that the Thomson model could not possibly generate the observed angular distribution of alpha scatterings, he demonstrated that the nuclear "point-mass" model gave predictions in accord with the data. Rutherford did not use the term "nucleus" in his paper; that nomenclature seems to have been introduced by Cambridge astronomer John Nicholson in a paper published in November, 1911. The term "proton" was not introduced until June, 1920, but was coined by Rutherford himself.
With the understanding that scattering events were the results of such nuclear collisions, Rutherford's analysis could be applied to other elements in the sense of using an observed scattering distribution to infer how many fundamental "protonic" charges the element possessed; this helped to place elements in their proper locations in the periodic table. Elements had theretofore been defined by their atomic weights (A), but it was the work of researchers such as Rutherford, Soddy, Geiger, and Marsden which showed that it is an element's atomic number (Z) that dictates its chemical identity. The atomic weights of elements were still important, however, and very much the seat of a mystery. Together, chemical and scattering evidence indicated that the atomic weights of atoms seemed to be proportional to their number of protonic charges. Specifically, atoms of all elements weighed about twice as much or more as could be accounted for on the basis of their numbers of protons. For some time, this extra mass was thought to be due to additional protons in the nucleus which for some reason contained electrons within themselves, an electrically neutral combination. This would give nuclei of the observed net charge, consistent with the scattering experiments, while explaining measured atomic weights.
By the mid-1920s, however, this proposal was becoming untenable: the Uncertainty Principle of quantum mechanics ruled against the possibility of containing electrons within so small a volume as a single proton, or even a whole nucleus. For many years before its discovery, Rutherford speculated that there existed a third fundamental constituent of atoms, the neutron. He would live to see his suspicion proven by one of his own students. That atoms are built of electrons orbiting nuclei comprised of protons and neutrons is due very much to Rutherford and his collaborators and students.
Difficulties of the Rutherford Model of the Atom
Two basic difficulties exist with Rutherford's planetary model of the atom. First, according to classical electromagnetic theory, an electron moving in a circular orbit is accelerating and should continuously radiate away energy, spiraling into the nucleus in a tiny fraction of a second; the model could not explain why atoms are stable. Second, the model predicted that atoms should emit light over a continuous range of frequencies, whereas experiments showed that atoms emit light only at certain discrete, characteristic frequencies.
Nuclear Size vs Atomic Size
Various types of scattering experiments suggest that nuclei are roughly 10⁻¹⁴ to 10⁻¹⁵ m in diameter, while atoms, as estimated from bulk densities and atomic weights, are a few Angstroms (10⁻¹⁰ m) across. The nucleus is thus some 10,000 to 100,000 times smaller than the atom that contains it, even though it holds nearly all of the atom's mass.
Niels Bohr and the Bohr Model of the Atom
Planck had introduced his constant in 1900 in a formula explaining the light radiation emitted from heated bodies. According to classical theory, comparable amounts of light energy should be produced at all frequencies. This is not only contrary to observation but also implies the absurd result that the total energy radiated by a heated body should be infinite. Planck postulated that energy can only be emitted or absorbed in discrete amounts, which he called quanta (the Latin word for "how much"). The energy quantum is related to the frequency of the light by a new fundamental constant, h. When a body is heated, its radiant energy in a particular frequency range is, according to classical theory, proportional to the temperature of the body. With Planck's hypothesis, however, the radiation can occur only in quantum amounts of energy. If the radiant energy is less than the quantum of energy, the amount of light in that frequency range will be reduced. Planck's formula correctly describes radiation from heated bodies. Planck's constant has the dimensions of action, which may be expressed as units of energy multiplied by time, units of momentum multiplied by length, or units of angular momentum. For example, Planck's constant can be written as h = 6.6 × 10⁻³⁴ Joule-seconds.
Using Planck's constant, Bohr obtained an accurate formula for the energy levels of the hydrogen atom. He postulated that the angular momentum of the electron is quantized--i.e., it can have only discrete values. He assumed that otherwise electrons obey the laws of classical mechanics by traveling around the nucleus in circular orbits. Because of the quantization, the electron orbits have fixed sizes and energies. The orbits are labeled by an integer, the quantum number n.
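Bohr's quantized energy levels for hydrogen can be sketched numerically. The −13.6 eV ground-state value is the standard result of his formula, not a number quoted in the passage above, so treat it as an added assumption; the 1/n² scaling follows directly from the quantization just described.

```python
# Bohr-model energy levels of hydrogen: E_n = -13.6 eV / n**2.
# The -13.6 eV figure is the standard ground-state value (an assumption
# here, not taken from the text).

RYDBERG_EV = 13.6  # hydrogen ground-state binding energy, eV (rounded)

def energy_level(n):
    """Energy of the n-th Bohr orbit of hydrogen, in eV."""
    return -RYDBERG_EV / n**2

for n in (1, 2, 3):
    print(f"n={n}: E = {energy_level(n):.2f} eV")

# A photon emitted in a transition from n=3 to n=2 (a Balmer line)
# carries the energy difference between the two levels:
print(round(energy_level(3) - energy_level(2), 3))  # 1.889 eV
```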
Henry Gwyn Jeffrey Moseley was an English physicist and a graduate of Trinity College Oxford. His main contributions to science were the quantitative justification of the previously empirical concept of atomic number, and Moseley's law. This law advanced chemistry by immediately sorting the elements of the periodic table in a more logical order. Moseley could predict the existence of several then unknown elements. Moseley's law also advanced basic physics by providing independent support for the Bohr model of the Rutherford atom containing positive nuclear charge equal to atomic number. As Niels Bohr once said in 1962, "You see actually the Rutherford work [the nuclear atom] was not taken seriously. We cannot understand today, but it was not taken seriously at all. There was no mention of it any place. The great change came from Moseley."
In 1913 Henry Moseley found an empirical relationship between the strongest X-ray line emitted by atoms under electron bombardment (then known as the K-alpha line) and their atomic number Z. Moseley's empirical formula was found to be derivable from Bohr's formula with two additional assumptions: that this X-ray line came from a transition between energy levels with quantum numbers 1 and 2, and that the atomic number Z, when used in the formula for atoms heavier than hydrogen, should be diminished by 1, giving (Z-1)^2.
In brief, the law states that the square root of the frequency of the emitted x-ray is proportional to the atomic number.
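Quantitatively, the Bohr-derived form of Moseley's law for the K-alpha line is f = (3/4) R (Z-1)^2, where R is the Rydberg frequency. The sketch below checks it against copper; the value of R and the measured copper wavelength are standard reference figures, assumed here rather than quoted from the text.

```python
# Moseley's law for the K-alpha line: f = (3/4) * R * (Z - 1)^2,
# so sqrt(f) is proportional to (Z - 1).
R = 3.29e15   # Rydberg frequency, Hz (standard value)
c = 3.0e8     # speed of light, m/s

def k_alpha_frequency(Z):
    """Predicted K-alpha X-ray frequency for atomic number Z."""
    return 0.75 * R * (Z - 1)**2

# Copper, Z = 29: predicted wavelength of the K-alpha line.
f_cu = k_alpha_frequency(29)
wavelength = c / f_cu
print(wavelength)   # about 1.55e-10 m, close to the measured 1.54 angstroms
```

The appearance of (Z - 1) rather than Z reflects the screening of the nuclear charge by the one electron remaining in the innermost shell.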
The Making of the Atomic Bomb
Topic 5: Years 1920-1930: The invention of the mass spectrometer. Atomic masses, the reinterpretation of isotopes, mass defects. E = mc2 and nuclear binding energies.
True understanding of the nature and consequences of isotopy came with the invention of mass spectroscopy, an instrumental technique for making extremely precise measurements of atomic masses. In his 1897 work, J. J. Thomson measured the ratio of the electrical charge carried by electrons to their mass by using electric and magnetic ﬁelds to deﬂect them and track their trajectories. In 1907, Thomson modiﬁed his apparatus to investigate the properties of positively-charged (ionized) atoms, and so developed the ﬁrst ‘‘mass spectrometer.’’ In this device, electric and magnetic ﬁelds were conﬁgured to force ionized atoms to travel along separate, unique parabolic-shaped trajectories which depended on the ions’ masses. The separate trajectories could be recorded on a photographic ﬁlm for later analysis.
In 1909, Thomson acquired an assistant, Francis Aston, a gifted instrument maker. Aston improved Thomson’s instrument, and, in November, 1912, obtained evidence for the presence of two isotopes of neon, of mass numbers 20 and 22 (taking hydrogen to be of mass unity). The atomic weight of neon was known to be 20.2. Aston reasoned that this number could be explained if the two isotopes were present in a ratio of 9:1, as is now known to be the case. (There is a third isotope of neon, of mass 21, but it comprises only 0.3 % of natural neon.) Aston tried to separate the two neon isotopes using a technique known as diffusion. This refers to the passage of atoms through a porous membrane. Aston passed neon through clay tobacco pipes, and did achieve a small degree of enrichment.
Following a position involving aircraft research during World War I, Aston returned to Cambridge, and in 1919 he built his own mass spectrometer which incorporated some improvements over Thomson’s design. In a series of papers published from late that year through the spring of 1920, he presented his ﬁrst results obtained with the new instrument. These included a veriﬁcation of the two previously-detected neon isotopes, and a demonstration that chlorine comprised a mixture of isotopes of masses 35 and 37 in an abundance ratio of about 3:1. In later years (1927 and 1937), Aston developed improved instruments, his so-called second and third mass spectrometers.
The principle of Aston's mass spectrometer is as follows. Inside a vacuum chamber, the sample to be investigated is heated in a small oven. The heating will ionize the atoms, some of which will escape through a narrow slit. The ionized atoms are then accelerated by an electric field, and directed into a region of space where a magnetic field of strength B is present. The magnetic field is arranged to be perpendicular to the plane of travel of the positively-charged ions. The magnetic field exerts a force on the ions, described by the Lorentz force law, which causes them to move in circular trajectories; an ion of mass m and net charge q that is moving with speed v will enter into a circular orbit of radius r = mv/qB. If all ions are ionized to the same degree and have the same speed, heavier ones will be deflected somewhat less than lighter ones; that is, they will have larger-radius orbits. There will be one stream for each mass-species present. The streams will be maximally separated after one-half of an orbit, where they can be collected on a film. Present-day models incorporate electronic detectors which can feed data to a computer for immediate analysis.
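The radius formula r = mv/qB can be illustrated numerically for the two neon isotopes Aston studied. In this sketch the field strength and ion speed are invented round numbers chosen only to show the separation; they are not figures from the text.

```python
# Circular-orbit radius r = m*v / (q*B) for singly-ionized atoms
# in a mass spectrometer.
amu = 1.6605e-27    # kg per atomic mass unit
q   = 1.602e-19     # charge of a singly-ionized atom, coulombs
B   = 0.1           # magnetic field strength, tesla (assumed)
v   = 1.0e5         # ion speed, m/s (assumed)

def orbit_radius(mass_amu):
    """Radius of the circular trajectory, in metres."""
    return (mass_amu * amu * v) / (q * B)

r20 = orbit_radius(20)    # neon-20
r22 = orbit_radius(22)    # neon-22, the heavier isotope: larger radius
print(r20, r22)
print(2 * (r22 - r20))    # stream separation after half an orbit, a few cm
```

Even a 10 percent mass difference translates into a stream separation of centimetres, which is why the photographic plates could resolve the isotopes cleanly.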
During his career, Aston discovered over 200 naturally-occurring isotopes, including uranium-238. Surprisingly, he does not have an element named after him, but he did receive the 1922 Nobel Prize for Chemistry.
Reinterpretation of Isotopes. Mass Defect
Aston's work showed that John Dalton's 1803 conjecture had been partially correct: atoms of the same element behave identically as far as their chemistry is concerned, but the presence of isotopy means that not all atoms of the same element have the same weight. Similarly, Aston found that Prout's conjecture that the masses of all atoms were integer multiples of that of hydrogen, if one substitutes "isotopes" for "atoms," was also very nearly true. But that very nearly proved to involve some very important physics. What is meant by very nearly here?
As an example, consider the common form of iron, Fe-56, nuclei of which contain 26 protons and 30 neutrons. Had Prout been correct, the mass of an iron-56 atom should be 56 "mass units," if one neglects the very tiny contribution of the electrons. (A technical aside: 26 electrons would weigh about 1.4 % of the mass of a proton. We are also assuming, for the sake of simplicity, that protons and neutrons each weigh one "mass unit"; neutrons are about 0.1 % heavier than protons.)
Mass spectroscopy can measure the masses of atoms to remarkable precision; the actual weight of an iron-56 atom is 55.934937 atomic mass units. The discrepancy of 55.934937 - 56 = -0.065063 mass units, what Aston called the "mass defect," is significant, amounting to about 6.5 % of the mass of a proton. This mass-defect effect proved to be systematic across the periodic table: all stable atoms are less massive than one would predict on the basis of Prout's whole-number hypothesis. Iron has a fairly large mass defect, but by no means the largest known. The mass defect is not an artifact of protons and neutrons having slightly different masses; if one laboriously adds up the masses of all of the constituents of atoms, the defects are still present.
The unavoidable conclusion is that when protons and neutrons assemble themselves into nuclei, they give up some of their mass in doing so. Physicists now quote mass defects in terms of equivalent energy in MeV, thanks to E = mc2. One mass unit is equivalent to 931.4 MeV, so the iron-56 mass defect amounts to just over 60 MeV. Because this is a mass defect, it is formally cited as a negative number, -60.6 MeV. The capital Greek letter delta (as in "Defect") is now used to designate such quantities: Δ = -60.6 MeV.
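The arithmetic behind these numbers is easy to verify; a short sketch using the 931.4 MeV-per-mass-unit conversion quoted above:

```python
# Mass defect of iron-56 and its energy equivalent via E = m c^2.
MEV_PER_AMU = 931.4   # energy equivalent of one atomic mass unit, MeV

def mass_defect_mev(atomic_mass_amu, mass_number):
    """Delta = (measured atomic mass - A), converted to MeV."""
    return (atomic_mass_amu - mass_number) * MEV_PER_AMU

delta_fe56 = mass_defect_mev(55.934937, 56)
print(delta_fe56)   # about -60.6 MeV, matching the value quoted in the text
```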
Where does the mass go when nature assembles nuclei? Empirically, nuclei somehow have to hold themselves together against the immense mutual repulsive Coulomb forces of their constituent protons; some sort of nuclear "glue" must be present. To physicists, this "glue" is known synonymously as the "strong force" or as "binding energy," and is presumed to be the "missing" mass transformed into some sort of attractive energy. The greater the magnitude of the mass defect, the more stable will be the nucleus involved. The figure below shows a plot of binding energy per nucleon as a function of mass number A for various stable nuclei.
The next figure shows a graph of the mass defects of 350 nuclides that are stable or have half-lives greater than 100 years, as a function of mass number A.
The deep valley centered at A ≈ 120 attests to the great stability of elements in the middle part of the periodic table; negative values of Δ connote intrinsic stability. The gap between A ≈ 210 and 230 is due to the fact that there are no long-lived isotopes of elements between bismuth (Z = 83) and thorium (Z = 90). Isotopes with A > 230 could be said to have a "mass surplus." Consistent with the idea that negative Δ-values connote stability, all such positive Δ-valued nuclei eventually decay.
(Strictly, the "strong force" and "binding energy" are separate but related quantities. At a qualitative level, however, the technical distinctions between them do not alter the central concept that "lost mass" transforms to "binding energy.")
The above figure can be used to estimate the energy released in hypothetical nuclear reactions. The essence is straightforward: add up the Δ-values of all of the input reactants (being careful with negative signs!), and then subtract from that result the sum of the Δ-values of the output products.
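That recipe can be sketched in a few lines. The Δ-values below (in MeV) are taken from standard modern mass tables; they are assumptions supplied for illustration, not figures from the text, and the reaction shown is one possible fission channel of uranium-235.

```python
# Energy release Q of a nuclear reaction from tabulated Delta values:
# Q = sum(Delta of reactants) - sum(Delta of products).
DELTA = {                  # mass defects / excesses in MeV (standard tables)
    "n":       8.07,       # free neutron
    "U-235":  40.92,
    "Ba-141": -79.72,
    "Kr-92":  -68.77,
}

def q_value(reactants, products):
    """Energy released (MeV); positive means the reaction liberates energy."""
    return sum(DELTA[x] for x in reactants) - sum(DELTA[x] for x in products)

# n + U-235 -> Ba-141 + Kr-92 + 3 n
Q = q_value(["n", "U-235"], ["Ba-141", "Kr-92", "n", "n", "n"])
print(Q)   # roughly 170 MeV per fission
```

A single chemical reaction, by contrast, releases only a few electron-volts; the factor of tens of millions is the whole point of nuclear energy.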
In late 1938 it was discovered that reactions like this are very real possibilities indeed. There exist 266 apparently permanently stable, naturally occurring isotopes of the various elements, and about a hundred more ‘‘quasi-stable’’ ones with half-lives of a hundred years or greater. A compact way of representing all these nuclides is to plot each one as a point on a graph where the x-axis represents the number of neutrons, and the y-axis the number of protons. All isotopes of a given element will then lie on a horizontal line, since the number of protons in all nuclei of a given element is the same. Clearly, stable nuclei follow a very well-deﬁned Z(N) trend. Nature provides nuclei with neutrons to hold them together against the mutual repulsion of their protons, but she is economical in doing so. Mass represents energy (E = mc2), and Nature is evidently unwilling to invest more mass-energy to stabilize nuclei than is strictly necessary. Note also that the points in the graph curve off to the right; this indicates that the vast majority of nuclei, except for a very few at the bottom-left of the graph, contain more neutrons than protons; this effect is known as the neutron excess.
Topic 12: Year 1945: The “Dragon” experiments on critical mass. The death of President Roosevelt. The Trinity test. Harry Truman and Potsdam. The decision to use the bomb. Hiroshima and Nagasaki. Final perspectives on war in the 20th century, nuclear proliferation, and the challenge of nuclear terrorism.
Dragon Experiment on Critical Mass
One of the most important problems that had to be solved in the Manhattan Project was the critical mass of bomb fuel. The most famous experiment conducted at Los Alamos during the Manhattan Project, after the Trinity test itself, is the one with the most evocative name. “Tickling the Dragon’s Tail,” also known internally as just “Dragon,” is straightforward about its meaning, compared to the enigma of “Trinity.” Dragons don’t like to have their tails tickled, so watch out for the fire.
Otto Frisch wanted to work with a full critical mass to determine by experiment what Los Alamos had so far been able to determine only theoretically: how much uranium Little Boy would need. The idea of the experiment was that the uranium-235 compound, which by then had arrived at Los Alamos in a quantity sufficient to make an explosive device, should be assembled into one, but with a large hole left so that the central portion was missing. This would allow enough neutrons to escape so that no chain reaction could develop. The missing portion was also to be made, ready to be dropped through the hole so that for a split second the conditions for an atomic explosion existed, although only barely so.
Richard Feynman, a young physicist and fellow member of the Manhattan Project, named the experiment: he said it would be like tickling the tail of a sleeping dragon.
With the Manhattan Project on the brink of success in spring 1945, the atomic bomb became an increasingly important element in American strategy. A long hoped-for weapon now seemed within reach at a time when hard decisions were being made, not only on ending the war in the Pacific, but also on the shape of the postwar international order.
From Roosevelt to Truman
On April 12, only weeks before Germany's unconditional surrender on May 7, President Roosevelt died suddenly in Warm Springs, Georgia, bringing Vice President Harry S. Truman, a veteran of the United States Senate, to the presidency. Truman was not privy to many of the secret war efforts Roosevelt had undertaken and had to be briefed extensively in his first weeks in office. One of these briefings, provided by Secretary of War Stimson on April 25, concerned S-1 (the Manhattan Project). Stimson, with Groves present during part of the meeting, traced the history of the Manhattan Project, summarized its status, and detailed the timetable for testing and combat delivery. Truman asked numerous questions during the forty-five minute meeting and made it clear that he understood the relevance of the atomic bomb to upcoming diplomatic and military initiatives.
By the time Truman took office, Japan was near defeat. American aircraft were attacking Japanese cities at will. A single fire bomb raid in March killed nearly 100,000 people and injured over a million in Tokyo. A second air attack on Tokyo in May killed 83,000. Meanwhile, the United States Navy had cut the islands' supply lines. But because of the generally accepted view that the Japanese would fight to the bitter end, a costly invasion of the home islands seemed likely, though some American policy makers held that successful combat delivery of one or more atomic bombs might convince the Japanese that further resistance was futile.
The Trinity Test
No one doubted that Little Boy would work if any design would. Otto Frisch's Dragon experiments had proven the efficiency of the fast-neutron chain reaction in uranium. The gun mechanism was wasteful and inefficient, but U-235 was forgiving.
It remained to test implosion. In doing so, the physicists could also check their theory of the progress of such an unusual release of energy against measurement. Trinity would be the largest physics experiment ever attempted up to that time.
Meanwhile, the test of the plutonium weapon, named Trinity by Oppenheimer (a name inspired by the poems of John Donne), was rescheduled for July 16 at a barren site on the Alamogordo Bombing Range known as the Jornada del Muerto, or Journey of Death, 210 miles south of Los Alamos. A test explosion had been conducted on May 7 with a small amount of fissionable material to check procedures and fine-tune equipment. Preparations continued through May and June and were complete by the beginning of July. Three observation bunkers located 10,000 yards north, west, and south of the firing tower at ground zero would attempt to measure critical aspects of the reaction. Specifically, scientists would try to determine the symmetry of the implosion and the amount of energy released. Additional measurements would be taken to determine damage estimates, and equipment would record the behavior of the fireball. The biggest concern was control of the radioactivity the test device would release. Not entirely content to trust favorable meteorological conditions to carry the radioactivity into the upper atmosphere, the Army stood ready to evacuate the people in surrounding areas.
Tower for the Trinity Test.
On July 12 the plutonium core was taken to the test area in an army sedan. The non-nuclear components left for the test site at 12:01 a.m., Friday the 13th. During the day on the 13th, final assembly of the gadget took place in the McDonald ranch house. By 5:00 p.m. on the 15th, the device had been assembled and hoisted atop the one-hundred-foot firing tower. Groves, Bush, Conant, Lawrence, Farrell, Chadwick (head of the British contingent at Los Alamos and discoverer of the neutron), and others arrived in the test area, where it was pouring rain. Groves and Oppenheimer, standing at the S-10,000 control bunker, discussed what to do if the weather did not break in time for the scheduled 4:00 a.m. test. At 3:30 they pushed the time back to 5:30; at 4:00 the rain stopped. Physicist Kistiakowsky and his team armed the device shortly after 5:00 a.m. and retreated to S-10,000. In accordance with his policy that each observe from a different location in case of an accident, Groves left Oppenheimer and joined Bush and Conant at base camp. Those in shelters heard the countdown over the public address system, while observers at base camp picked it up on an FM radio signal.
Trinity Device, the "Gadget" Being Readied.
The mushroom cloud from the Trinity test.
At precisely 5:30 a.m. on Monday, July 16, 1945, the atomic age began. While Manhattan staff members watched anxiously, the device exploded over the New Mexico desert, vaporizing the tower and turning asphalt around the base of the tower to green sand. The bomb released energy equivalent to approximately 18.6 kilotons of TNT, and the New Mexico sky was suddenly brighter than many suns. Some observers suffered temporary blindness even though they looked at the brilliant light through smoked glass. Seconds after the explosion came a huge blast, sending searing heat across the desert and knocking some observers to the ground. A steel container weighing over 200 tons, standing a half-mile from ground zero, was knocked ajar. (Nicknamed Jumbo, the huge container had been ordered for the plutonium test and transported to the test site but eliminated during final planning.) As the orange and yellow fireball stretched up and spread, a second column, narrower than the first, rose and flattened into a mushroom shape, thus providing the atomic age with a visual image that has become imprinted on the human consciousness as a symbol of power and awesome destruction.
At base camp, Bush, Conant, and Groves shook hands. Oppenheimer reported later that the experience called to his mind the legend of Prometheus, punished by Zeus for giving man fire. He also thought fleetingly of Alfred Nobel's vain hope that dynamite would end wars. The terrifying destructive power of atomic weapons and the uses to which they might be put were to haunt many of the Manhattan Project scientists for the remainder of their lives. The success of the Trinity test meant that a second type of atomic bomb could be readied for use against Japan. In addition to the uranium gun model, which was not tested prior to being used in combat, the plutonium implosion device detonated at Trinity now figured in American Far Eastern strategy. In the end Little Boy, the untested uranium bomb, was dropped first at Hiroshima on August 6, 1945, while the plutonium weapon Fat Man followed three days later at Nagasaki on August 9.
Remains of Trinity Test Tower Footings. Oppenheimer and Groves at Center.
The American contingent to the Big Three conference, headed by Truman, Byrnes, and Stimson, arrived in Berlin on July 15 and spent most of the next two days grappling with the interrelated issues of Russian participation in the Far Eastern conflict and the wording of an early surrender offer that might be presented to the Japanese. This draft surrender document received considerable attention, the sticking point being the term "unconditional." It was clear that the Japanese would fight on rather than accept terms that would eliminate the Imperial House or demean the warrior tradition, but American policy makers feared that anything less than a more democratic political system and total demilitarization might lead to Japanese aggression in the future. Much effort went into finding the precise formula that would satisfy American war aims in the Pacific without requiring a costly invasion of the Japanese mainland. In an attempt to achieve surrender with honor, the emperor had instructed his ministers to open negotiations with Russia. The United States intercepted and decoded messages between Tokyo and Moscow that made it unmistakably clear that the Japanese were searching for an alternative to unconditional surrender.
Reports on Trinity
Stalin arrived in Berlin a day late, leaving Stimson on July 16 to mull over questions of postwar German administration and the Far Eastern situation. After sending Truman and Byrnes a memorandum advocating an early warning to Japan and setting out a bargaining strategy for Russian entry in the Pacific war, Stimson received a cable from George L. Harrison, his special consultant in Washington, that read: “Operated on this morning. Diagnosis not yet complete but results seem satisfactory and already exceed expectations. Local press release necessary as interest extends great distance. Dr. Groves pleased. He returns tomorrow. I will keep you posted.”
Stimson immediately informed Truman and Byrnes that the Trinity test had been successful. The next day Stimson informed Churchill of the test. The prime minister expressed great delight and argued forcefully against informing the Russians, though he later relented. On July 18, while debate continued over the wording of the surrender message, focusing on whether or not to guarantee the place of the emperor, Stimson received a second cable from Harrison:
“Doctor has just returned most enthusiastic and confident that the little boy is as husky as his big brother. The light in his eyes discernible from here to Highhold and I could have heard his screams from here to my farm.”
Translation: “Groves thought the plutonium weapon would be as powerful as the uranium device and that the Trinity test could be seen as far away as 250 miles and the noise heard for fifty miles.” Initial measurements taken at the Alamogordo site suggested a yield in excess of 5,000 tons of TNT. Truman went back to the bargaining table with a new card in his hand.
Further information on the Trinity test arrived on July 21 in the form of a long and uncharacteristically excited report from Groves. Los Alamos scientists now agreed that the blast had been the equivalent of between 15,000 and 20,000 tons of TNT, higher than anyone had predicted. Groves reported that glass shattered 125 miles away, that the fireball was brighter than several suns at midday, and that the steel tower had been vaporized. Though he had previously believed it impregnable, Groves stated that he did not consider the Pentagon safe from atomic attack. Stimson informed Marshall and then read the entire report to Truman and Byrnes. Stimson recorded that Truman was "tremendously pepped up" and that the document gave him "an entirely new feeling of confidence." The next day Stimson, informed that the uranium bomb would be ready in early August, discussed Groves's report at great length with Churchill. The British prime minister was elated and said that he now understood why Truman had been so forceful with Stalin the previous day, especially in his opposition to Russian designs on Eastern Europe and Germany. Churchill then told Truman that the bomb could lead to Japanese surrender without an invasion and eliminate the necessity for Russian military help. He recommended that the President continue to take a hard line with Stalin. Truman and his advisors shared Churchill's views.
The success of the Trinity test stiffened Truman's resolve, and he refused to accede to Stalin's new demands for concessions in Turkey and the Mediterranean. On July 24 Stimson met with Truman. He told the President that Marshall no longer saw any need for Russian help, and he briefed the President on the latest S-1 situation. The uranium bomb might be ready as early as August 1 and was a certainty by August 10. The plutonium weapon would be available by August 6. Stimson continued to favor making some sort of commitment to the Japanese emperor, though the draft already shown to the Chinese was silent on this issue. American and British coordination for an invasion of Japan continued, with November 1 standing as the landing date. At a meeting with American and British military strategists at Potsdam, the Russians reported that their troops were moving into the Far East and could enter the war in mid-August. They would drive the Japanese out of Manchuria and withdraw at the end of hostilities. Nothing was said about the bomb. This was left for Truman, who, on the evening of July 24, approached Stalin without an interpreter to inform the Generalissimo that the United States had a new and powerful weapon. Stalin casually responded that he hoped that it would be used against Japan to good effect. The reason for Stalin's composure became clear later when it was learned that Russian intelligence had been receiving information about the S-1 project from Klaus Fuchs and other agents since summer 1942.
The Potsdam Proclamation
A directive, written by Groves and issued by Stimson and Marshall on July 25, ordered the Army Air Force's 509th Composite Group to attack Hiroshima, Kokura, Niigata, or Nagasaki "after about" August 3, or as soon as weather permitted. The 509th was ready. Tests with dummies had been conducted successfully, and Operation Bronx, which brought the gun and uranium-235 projectile to Tinian aboard the U.S.S. Indianapolis and the other components on three C-54s, was complete. On July 26 the United States learned of Churchill's electoral defeat and Chiang Kai-Shek's concurrence in the warning to Japan. Within hours the warning was issued in the name of the President of the United States, the President of China, and the Prime Minister of Great Britain (now Clement Attlee). The Russians were not informed in advance. This procedure was technically correct since the Russians were not at war with Japan, but it was another indication of the new American attitude that the Soviet Union's aid in the present conflict no longer was needed. The message called for the Japanese to surrender unconditionally or face "prompt and utter destruction." The Potsdam Proclamation left the emperor's status unclear by making no reference to the royal house in the section that promised the Japanese that they could design their new government as long as it was peaceful and more democratic. While anti-war sentiment was growing in Japanese decision-making circles, it could not carry the day as long as unconditional surrender left the emperor's position in jeopardy. The Japanese rejected the offer on July 29.
Intercepted messages between Tokyo and Moscow revealed that the Japanese wanted to surrender but felt they could not accept the terms offered in the Potsdam Proclamation. American policy makers, however, anxious to end the war without committing American servicemen to an invasion of the Japanese homeland, were not inclined to undertake revisions of the unconditional surrender formula and cause further delay. A Russian declaration of war might convince Japan to surrender, but it carried a potentially prohibitive price tag as Stalin would expect to share in the postwar administration of Japan, a situation that would threaten American plans in the Far East. A blockade of Japan combined with conventional bombing was rejected as too time-consuming and an invasion of the islands as too costly. And few believed that a demonstration of the atomic bomb would convince the Japanese to give up. Primarily upon these grounds, American policy makers concluded that the atomic bomb must be used. Information that Hiroshima might be the only prime target city without American prisoners in the vicinity placed it first on the list. As the final touches were put on the message Truman would issue after the attack, word came that the first bomb could be dropped as early as August 1. With the end now in sight, poor weather led to a delay of several days.
Bomb Effects
Nuclear weapons are fundamentally different from conventional weapons because of the vast amounts of explosive energy they can release and the kinds of effects they produce, such as high temperatures and radiation. The prompt effects of a nuclear explosion and fallout are well known through data gathered from the attacks on Hiroshima and Nagasaki, Japan; from more than 500 atmospheric and more than 1,500 underground nuclear tests conducted worldwide; and from extensive calculations and computer modeling. Longer-term effects on human health and the environment are less certain but have been extensively studied. The impacts of a nuclear explosion depend on many factors, including the design of the weapon (fission or fusion) and its yield; whether the detonation takes place in the air (and at what altitude), on the surface, underground, or underwater; the meteorological and environmental conditions; and whether the target is urban, rural, or military.
When a nuclear weapon detonates, a fireball occurs with temperatures similar to those at the center of the Sun. The energy emitted takes several forms. Approximately 85 percent of the explosive energy produces air blast (and shock) and thermal radiation (heat). The remaining 15 percent is released as initial radiation, produced within the first minute or so, and residual (or delayed) radiation, emitted over a period of time, some of which can be in the form of local fallout.
The expansion of intensely hot gases at extremely high pressures in a nuclear fireball generates a shock wave that expands outward at high velocity. The “overpressure,” or crushing pressure, at the front of the shock wave can be measured in pascals (or kilopascals; kPa) or in pounds per square inch (psi). The greater the overpressure, the more likely that a given structure will be damaged by the sudden impact of the wave front. A related destructive effect comes from the “dynamic pressure,” or high-velocity wind, that accompanies the shock wave. An ordinary two-story, wood-frame house will collapse at an overpressure of 34.5 kPa (5 psi). A one-megaton weapon exploded at an altitude of 3,000 metres (10,000 feet) will generate overpressure of this magnitude out to 7 km (about 4 miles) from the point of detonation. The winds that follow will hurl a standing person against a wall with several times the force of gravity. Within 8 km (5 miles) few people in the open or in ordinary buildings will likely be able to survive such a blast. Enormous amounts of masonry, glass, wood, metal, and other debris created by the initial shock wave will fly at velocities above 160 km (100 miles) per hour, causing further destruction.
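Blast effects scale in a regular way with yield: the distance at which a given overpressure occurs grows as the cube root of the yield. This cube-root scaling rule is a standard rule of thumb, not something stated in the passage above; the sketch below simply anchors it to the 1-megaton, 7-km datum quoted there.

```python
# Cube-root scaling of blast effects: the radius of a given overpressure
# contour scales as yield**(1/3).
R_5PSI_1MT_KM = 7.0   # 5-psi (34.5 kPa) radius for a 1-Mt airburst, per the text

def five_psi_radius_km(yield_mt):
    """Approximate radius of the 5-psi overpressure contour, in km."""
    return R_5PSI_1MT_KM * yield_mt ** (1.0 / 3.0)

print(five_psi_radius_km(1.0))     # 7.0 km, the anchoring datum
print(five_psi_radius_km(0.015))   # roughly 1.7 km for a ~15-kiloton weapon
```

The cube-root dependence means that a thousandfold increase in yield extends the destructive radius only tenfold, which is why blast damage area grows much more slowly than yield.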
As a rule of thumb, approximately 35 percent of the total energy yield of an airburst is emitted as thermal radiation—light and heat capable of causing skin burns and eye injuries and starting fires of combustible material at considerable distances. The shock wave, arriving later, may spread fires further. If the individual fires are extensive enough, they can coalesce into a mass fire known as a firestorm, generating a single convective column of rising hot gases that sucks in fresh air from the periphery. The inward-rushing winds and the extremely high temperatures generated in a firestorm consume virtually everything combustible.
A special feature of a nuclear explosion is the emission of nuclear radiation, which may be separated into initial radiation and residual radiation. Initial radiation, also known as prompt radiation, consists of gamma rays and neutrons produced within a minute of the detonation. Beta particles and a small proportion of alpha particles are also produced, but these particles have short ranges and typically will not reach Earth’s surface if the weapon is detonated high enough above ground. Gamma rays and neutrons can produce harmful effects in living organisms, a hazard that persists over considerable distances because of their ability to penetrate most structures. Though their energy is only about 3 percent of the total released in a nuclear explosion, they can cause a considerable proportion of the casualties.
Residual radiation and fallout
Residual radiation is defined as radiation emitted more than one minute after the detonation. If the fission explosion is an airburst, the residual radiation will come mainly from the weapon debris. If the explosion is on or near the surface, the soil, water, and other materials in the vicinity will be sucked upward by the rising cloud, causing early (local) and delayed (worldwide) fallout. Early fallout settles to the ground during the first 24 hours; it may contaminate large areas and be an immediate and extreme biological hazard. Delayed fallout, which arrives after the first day, consists of microscopic particles that are dispersed by prevailing winds and settle in low places.
In the early morning hours of August 6, 1945, a B-29 bomber attached to the 509th Composite Group took off from Tinian Island and headed north by northwest toward the Japanese islands over 1,500 miles away. Its primary target was Hiroshima, an important military and communications center with a population of nearly 300,000, located in the deltas of southwestern Honshu Island facing the Inland Sea. The Enola Gay, piloted by Colonel Paul Tibbets, flew at low altitude on automatic pilot before climbing to 31,000 feet as it neared the target area. As the observation and photography escorts dropped back, the Enola Gay released a 9,700-pound uranium bomb, nicknamed Little Boy, at approximately 8:15 a.m. Hiroshima time. Tibbets immediately dove away to avoid the anticipated shockwaves of the blast. Forty-three seconds later a huge explosion lit the morning sky as Little Boy detonated 1,900 feet above the city, directly over a parade field where the Japanese Second Army was doing calisthenics. Though already eleven and a half miles away, the Enola Gay was rocked by the blast. At first Tibbets thought he was taking flak. After a second shockwave hit the plane, the crew looked back at Hiroshima. "The city was hidden by that awful cloud . . . boiling up, mushrooming, terrible and incredibly tall," Tibbets recalled. Little Boy killed 70,000 people (including about twenty American airmen being held as POWs) and injured another 70,000. By the end of 1945, the Hiroshima death toll rose to 140,000 as radiation sickness deaths mounted. Five years later the total reached 200,000. The bomb caused total devastation for five square miles, with almost all the buildings in the city either destroyed or damaged.
Readying the Little Boy Bomb.
Within hours of the attack, radio stations began reading a prepared statement from President Harry Truman informing the American public that the United States had dropped an entirely new type of bomb on the Japanese city of Hiroshima: an atomic bomb with more power than 15,000 tons of TNT. Truman warned that if Japan still refused to surrender unconditionally as demanded by the Potsdam Proclamation of July 26, the United States would attack additional targets with equally devastating results. Two days later, on August 8, the Soviet Union declared war on Japan and attacked Japanese forces in Manchuria, ending American hopes that the war would end before the Russian entry into the Pacific theater.
Hiroshima after the bombing
"Little Boy" Atomic Bomb
Type: Nuclear weapon
Nation: United States
Designer: Los Alamos Laboratory
Length: 10 feet
Weight: 9,700 pounds
Diameter: 28 inches
Yield: 15 kilotons of TNT
It contained 64 kg (141 lb) of enriched uranium, although less than a kilogram underwent nuclear fission.
Fat Man Plutonium Bomb Being Readied at Tinian.
Factional struggles and communications problems prevented Japan from meeting Allied terms in the immediate aftermath of Hiroshima. In the absence of a surrender announcement, conventional bombing raids on additional Japanese cities continued as scheduled. Then, on August 9, a second atomic attack took place. Taking off from Tinian at 3:47 a.m., Bock's Car (named after its usual pilot) headed for its primary target, Kokura Arsenal, located on the northern coast of Kyushu Island. Pilot Charles Sweeney found unacceptable weather conditions and unwelcome flak above Kokura. Sweeney made three passes over Kokura, then decided to switch to his secondary target even though he had only enough fuel remaining for a single bombing run. Clouds greeted Bock's Car as it approached Nagasaki, home to the Mitsubishi plant that had manufactured the torpedoes used at Pearl Harbor. At the last minute, a brief break in the cloud cover made possible a visual targeting at 29,000 feet, and Bock's Car dropped her single payload, a plutonium bomb weighing 10,000 pounds and nicknamed Fat Man, at 11:01 a.m. The plane then veered off and headed to Okinawa for an emergency landing. Fat Man exploded 1,650 feet above the slopes of the city with a force of 21,000 tons of TNT. Fat Man killed 40,000 people and injured 60,000 more. Three square miles of the city were destroyed, less than at Hiroshima because of the steep hills surrounding Nagasaki. By January 1946, 70,000 people had died in Nagasaki. The total eventually reached 140,000, with a death rate similar to that of Hiroshima.
The mushroom cloud rising over Nagasaki.
Fat Man Atomic Bomb
Weight: 10,800 lbs.
Length: 10 ft. 8 in.; Diameter: 60 in.
Fuel: Weapons-grade plutonium-239
Plutonium Fuel: approx. 13.6 lbs.; approx. size of a softball
Plutonium core surrounded by 5,300 lbs. of high explosives; plutonium core reduced to size of tennis ball
Efficiency of weapon: 10 times that of Little Boy
Approximately 1 kilogram of plutonium fissioned
Explosive force: 21,000 tons of TNT equivalent
Use: Dropped on Japanese city of Nagasaki; August 9, 1945
Type: Nuclear Weapon
Still the Japanese leadership struggled to come to a decision, with military extremists continuing to advocate a policy of resistance to the end. Word finally reached Washington from Switzerland and Sweden early on August 10 that the Japanese, in accordance with Hirohito's wishes, would accept the surrender terms, provided the emperor retain his position. Truman held up a third atomic attack while the United States considered a response, finally taking a middle course and acknowledging the emperor by stating that his authority after the surrender would be exercised under the authority of the Supreme Commander of the Allied Powers. With British, Chinese, and Russian concurrence, the United States answered the Japanese on August 11. Japan surrendered on August 14, 1945, ending the war that began for the United States with the surprise attack at Pearl Harbor on December 7, 1941. The United States had been celebrating for almost three weeks when the formal papers were signed aboard the U.S.S. Missouri on September 2.
The Bomb Goes Public
The veil of secrecy that had hidden the atomic bomb project was lifted on August 6 when President Truman announced the Hiroshima raid to the American people. The release of the Smyth Report on August 12, which contained general technical information calculated to satisfy public curiosity without disclosing any atomic secrets, brought the Manhattan Project into fuller view. Americans were astounded to learn of the existence of a far-flung, government-run, top-secret operation with a physical plant, payroll, and labor force comparable in size to the American automobile industry. Approximately 130,000 people were employed by the project at its peak, among them many of the nation's leading scientists and engineers.
In retrospect, it is remarkable that the atomic bomb was built in time to be used in World War II. Most of the theoretical breakthroughs in nuclear physics dated back less than twenty-five years, and with new findings occurring faster than they could be absorbed by practitioners in the field, many fundamental concepts in nuclear physics and chemistry had yet to be confirmed by laboratory experimentation. Nor was there any conception initially of the design and engineering difficulties that would be involved in translating what was known theoretically into working devices capable of releasing the enormous energy of the atomic nucleus in a predictable fashion. In fact, the Manhattan Project was as much a triumph of engineering as of science. Without the innovative work of the talented Leslie Groves, as well as that of Crawford Greenewalt of DuPont and others, the revolutionary breakthroughs in nuclear science achieved by Enrico Fermi, Niels Bohr, Ernest Lawrence, and their colleagues would not have produced the atomic bomb during World War II. Despite numerous obstacles, the United States was able to combine the forces of science, government, military, and industry into an organization that took nuclear physics from the laboratory and into battle with a weapon of awesome destructive capability, making clear the importance of basic scientific research to national defense.
The use of nuclear weapons at the end of World War II served as the starting point for an ongoing era of nuclear proliferation. Stockpiles rapidly grew as the United States and Soviet Union became embroiled in the Cold War, and rapid scientific advancement led to the creation of far more powerful weapons. Today, nuclear nonproliferation stands among the most pressing issues facing the international community.
Even prior to the United States' use of atomic weapons, the Soviet Union's atomic project was already underway. Using spies recruited at Manhattan Project sites - the most famous of whom were Klaus Fuchs, George Koval, and Ted Hall - the Soviet Union was able to accelerate its nuclear program by several years.
Cold War Vertical Proliferation
Following the war, the countries of the newly-founded United Nations needed to address the future of nuclear weapons. The United States proposed the Baruch Plan, calling for disarmament, a ban on production of nuclear weapons, the open exchange of scientific information required to use nuclear energy to achieve peaceful ends, and the use of nuclear power for peaceful purposes. The Soviet Union, citing issues of trust should they consent to such a plan under the western-dominated United Nations, rejected the idea.
As no firm agreement could be reached, the US and USSR plunged into a decades-long nuclear arms race. Partially as a result of successful espionage, the Soviets were quick to develop their first atom bomb. Access to uranium, which was much more common than previously thought, was initially achieved through mining in Eastern Bloc satellite states. Defying all expectations, the Soviet Union was able to detonate its first atomic bomb on August 29, 1949.
As neither country intended to cease nuclear development following the construction of the atom bomb, the United States and Soviet Union were in a race to create the world’s first hydrogen bomb – a weapon far more powerful than those dropped on Hiroshima and Nagasaki. Having had a head start, Edward Teller's team, sanctioned by the newly formed United States Department of Defense, tested the first thermonuclear weapon on November 1, 1952. The following August, the Soviets followed suit with their own H-bomb, RDS-6 (referred to as Joe 4 by the US).
Despite the continuous technological developments surrounding nuclear armaments, the lack of a long-range delivery mechanism dampened the deterrent effect of these weapons. Originally, the only way to deliver a nuclear bomb was by strategic bomber. This changed with the production of intercontinental ballistic missiles (ICBMs). To display the advancement of ICBM technology, the US and USSR became locked in the Space Race. Although the official goal was to prove superiority in space exploration, the delivery mechanism for satellites was fundamentally the same as that for thermonuclear warheads. Thus, whichever state could successfully put a device in orbit would prove capable of reaching the other with a mounted warhead. The Soviet Union was the first to showcase the capability for long-range nuclear warfare when it launched the Sputnik satellite into orbit in 1957. The United States was slow to counter, launching its first satellite, Explorer 1, early in 1958.
At this time, both states had the capacity to obliterate each other’s major population centers. Although the United States and Soviet Union continued to improve and expand their arsenals, a stalemate of sorts had been reached. Mutually Assured Destruction (MAD) ensured, theoretically, that neither state would conduct a nuclear strike on the other, as nuclear retaliation would prove inevitable.
This concept, known as deterrence, did not guarantee peaceful interaction, as evidenced by the Cuban Missile Crisis. Even though both sides were armed with nuclear weapon-equipped ICBMs, the presence of Soviet missiles in Cuba almost led to the breakout of nuclear war.
Cold War Horizontal Proliferation
During and after the Cold War, several countries developed nuclear weapons programs.
TIMELINE OF FIRST NUCLEAR TESTS
16 July 1945, United States: Trinity Test, Alamogordo, NM, Plutonium Implosion Device (20 KT)
29 August 1949 Soviet Union: “Joe-1” aka First Lightning, Semipalatinsk, Kazakhstan, Plutonium Implosion Device (22 KT)
3 October 1952 United Kingdom: Operation Hurricane, Montebello Islands, Western Australia, Plutonium Implosion Device (25 KT)
13 February 1960 France: “Gerboise Bleue” or Blue Jerboa, Reggane, Algerian Sahara Desert, Plutonium Device (70KT)
16 October 1964 People’s Republic of China: “596,” Lop Nur, Xinjiang Province in North Western China, Uranium Implosion Device (22 KT)
18 May 1974 India: “Smiling Buddha” aka Pokhran I, Peaceful Nuclear Explosion (PNE), Pokhran in the Western Deserts of Rajasthan, Plutonium Implosion Device (8 KT)
28 May 1998 Pakistan: Chagai I, Balochistan Province in Southwest Pakistan, series of 5 tests using uranium implosion devices (maximum yield 40 KT)
9 October 2006 North Korea: Mt. Punggye-ri in the North Hamgyong Province, Plutonium Implosion Device (less than 1 KT)
OTHER NUCLEAR PROGRAMS
Israel: Israel maintains the position that it will not formally introduce nuclear weapons to the Middle East by testing a device. However, its nuclear reactor at Dimona in the Negev desert, whose construction began in early 1958, is believed to have gone critical shortly after the US discovered the facility in the early 1960s. By the end of the decade Israel most likely had assembled a bomb.
South Africa: South Africa pursued a nuclear weapons capability from the 1960s to the 1970s. In August 1977 the Soviet Union picked up intelligence that the country was preparing for a cold test of a nuclear device at Vastrap in the Kalahari Desert. The United States and its European allies successfully convinced the country to refrain from conducting the test. Subsequently, the country moved its nuclear program to an underground facility at Pelindaba. Later, in September 1979, a United States Vela satellite detected a double flash, a common feature of a nuclear detonation, off the country’s coast. While many officials and historians suspect that the incident was an actual test potentially involving Israel, evidence is inconclusive and the event remains controversial to this day. However, by the time Nelson Mandela became President in 1994, following the end of apartheid, South Africa had voluntarily dismantled its nuclear weapons.
United Kingdom: Initially, the British were barred access to United States nuclear design data. The threat of espionage necessitated heightened security surrounding the American nuclear program. Regardless, the United Kingdom was able to detonate its first atom bomb on October 3, 1952, thanks to the expertise of British Manhattan Project scientists. When nuclear cooperation with the United States resumed in 1958, the British deployed submarine-based American Polaris missiles equipped with nuclear warheads. Having such weapons stationed off the coast of the small island guaranteed a deterrent effect.
France: In the late 1940s, the French were successfully operating their own nuclear reactors. It was not until the late 1950s, however, that France decided to accelerate its nuclear program to achieve nuclear deterrence. On February 13, 1960, the French detonated their first atomic bomb in Algeria.
People's Republic of China: In the 1950s, the Chinese gained the support of the Soviet Union in developing their nuclear program. Despite the ideological Sino-Soviet split in the latter part of that decade, the Chinese managed to detonate their first atomic device on October 16, 1964.
Israel: Although Israel's nuclear program is shrouded in ambiguity, it is widely believed that Israel is in possession of nuclear weapons. Illegally obtained evidence provided by a former Israeli nuclear technician, Mordechai Vanunu, illustrates that the state has nuclear capabilities. Development of this program likely began shortly after World War II. At the time, the French were in collaboration with the Israelis and provided them with a research reactor in the 1950s.
India: India became the next country to detonate its first nuclear device. Work on a nuclear weapon commenced in the 1960s, but was not considered a priority. Following the Indo-Pakistani war of 1971, however, the Indian government accelerated work on this project. In 1972, authorization was granted for the production and testing of a nuclear device. The "Smiling Buddha" was detonated on May 18, 1974.
Pakistan: In response to the nuclear threat emanating from bordering India, Pakistan began work on its nuclear program in 1972. Twenty-six years later, Pakistan tested its first five nuclear weapons on May 28, 1998, as a response to India's Pokhran II test earlier that month, making it the eighth nuclear state.
Other countries: Throughout the Cold War, several states had possession of nuclear weapons, but destroyed or surrendered their stockpiles. Belarus, Kazakhstan, and Ukraine agreed to get rid of their nuclear weapons following the collapse of the Soviet Union. The South African apartheid-era government eliminated their stockpile, worried that the weapons could fall into the hands of militants.
Many view the proliferation of nuclear weapons as a danger to the international community. North Korea and Iran are the current epicenters of concern regarding the issue of horizontal proliferation. There are similar fears that non-state actors - especially terrorist groups - may acquire these weapons. India and Pakistan have been on the cusp of nuclear war, showing that the logic of mutually assured destruction may not be as sound as previously thought. Thus, a number of steps toward nonproliferation have been taken.
The Pugwash Conference, organized by Joseph Rotblat, was the first major, non-governmental effort to curb the proliferation of nuclear weapons. This conference was the result of the Russell-Einstein Manifesto, and initially involved twenty-two scientists from countries on both sides of the Cold War divide. Over the years, Pugwash convened with the goal of reducing production and stockpiles of nuclear weapons. Rotblat, along with the Pugwash Conferences, was awarded the 1995 Nobel Peace Prize for these efforts.
The most crucial treaty dealing with the issue of nuclear weapons proliferation, the Nuclear Non-Proliferation Treaty (NPT), came into force on March 5, 1970. Noting that the trend of horizontal proliferation would inevitably continue, the United States and Soviet Union collaborated to prevent such an outcome. This treaty identified the United States, Soviet Union, United Kingdom, France, and China as nuclear weapons states.
Appendix 1: Basics of Electricity
Electricity is the set of physical phenomena associated with the presence and flow of electric charge.
The basic concept underlying the electrical properties of matter is electric charge. Two of the three elementary particles that compose an atom (electrons, protons, and neutrons) are carriers of electric charge: electrons carry negative charge and protons carry positive charge. These charges interact: like charges repel and unlike charges attract. The electrical interaction between charged objects is due to the electric field (force) that surrounds any charged object; the field spreads out and weakens with distance.
Charge is measured in Coulombs.
1 Coulomb = 1 ampere × 1 second; that is, one coulomb is the charge carried by a current of one amp flowing for one second.
The charge of one electron is e = −1.6 × 10^−19 Coulombs.
1 Coulomb = 1/(1.6 × 10^−19) ≈ 6.25 × 10^18 electrons.
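This bookkeeping can be checked with a short Python sketch (the function name is ours, and 1.6 × 10^−19 C is the rounded value of the elementary charge used above):

```python
# Magnitude of the electron's charge, in coulombs (rounded value from the text).
E_CHARGE = 1.6e-19

def electrons_per_coulomb(charge_coulombs=1.0):
    """Number of elementary charges that make up the given charge."""
    return charge_coulombs / E_CHARGE

# One coulomb corresponds to about 6.25e18 electrons.
print(f"1 C ~ {electrons_per_coulomb(1.0):.3e} electrons")
```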
When it comes to electricity there are generally two types of material: Conductors and Insulators.
A conductor is a material that has a large number of free electrons that continually jump to other atoms. Good electrical conductors are copper and aluminum. Gold, silver, and platinum are also good conductors, but are very expensive.
An insulator is a material that has only a few free electrons. In insulators, the electrons are tightly bound by the nucleus. Good electrical insulators are rubber, porcelain, glass, and dry wood.
Insulators prevent current from flowing.
The characteristic that distinguishes conductors from insulators is called resistance, R. Conductors have low resistance and insulators have high resistance.
Current, I, is the movement of charge through a conductor. Electrons carry the charge. Unit of measurement: amperes (A). One ampere (amp) of current is one coulomb of charge passing a point on a conductor in one second. This measurement is analogous to “gallons or liters per second” when measuring the flow of water.
Direct Current (DC) flows in only one direction. Many uses including: Batteries, electronic circuits, LED lights, generator excitation systems and rotors, DC transmission lines – and much more.
Alternating Current (AC) continuously changes in magnitude and direction. AC is used by most lights, appliances and motors. It is used in the high voltage transmission system. AC enables use of transformers to change voltage from high to low and back.
Voltage, V, is the force that causes electrons to move. Voltage is also referred to as potential difference or electromotive force (emf or E). Unit of measurement: volts (V). It is similar to “pounds per square inch” when measuring water pressure.
Sample Voltage Levels
Distribution Feeder Circuit: 12.47 kV
High Voltage Line: 47 kV to 500 kV
Power, P, is the rate at which work is being performed. Unit of measurement: watts (W).
Power = Voltage × Current.
This means that electrical energy is being converted into another form of energy (e.g., heat energy, light energy, mechanical energy, etc.).
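The power relation above can be illustrated with a minimal sketch (the example numbers are chosen arbitrarily):

```python
def electrical_power(voltage_v, current_a):
    """Electrical power in watts: P = V * I."""
    return voltage_v * current_a

# Example: a 120 V circuit carrying 10 A delivers 1200 W.
p = electrical_power(120.0, 10.0)
print(f"{p:.0f} W")
```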
Appendix 2: Range of the Electromagnetic Spectrum
The electromagnetic spectrum is the range of all possible frequencies of electromagnetic radiation. The electromagnetic spectrum of an object has a different meaning: it is the characteristic distribution of electromagnetic radiation emitted or absorbed by that particular object.
The wavelengths of various regions of the electromagnetic spectrum are shown alongside an approximate proxy for size of the wavelength.
The electromagnetic spectrum extends from below the low frequencies used for modern radio communication to gamma radiation at the short-wavelength (high-frequency) end, covering wavelengths from thousands of kilometers down to a fraction of the size of an atom. The limit for long wavelengths is the size of the universe itself, while it is thought that the short wavelength limit is in the vicinity of the Planck length (1.6 × 10^−35 m), although in principle the spectrum is infinite and continuous.
Most parts of the electromagnetic spectrum are used in science for spectroscopic and other probing interactions, as ways to study and characterize matter. In general, if the wavelength of electromagnetic radiation is of a similar size to that of a particular object (atom, electron, etc.), then it is possible to probe that object with that frequency of light. In addition, radiation from various parts of the spectrum has been found to have many other uses in communications and manufacturing.
Energy of Photon
Electromagnetic waves are typically described by any of the following three physical properties: the frequency (f) (also sometimes represented by the Greek letter nu, ν), wavelength (λ), or photon energy (E). Frequencies observed in astronomy range from 2.4 × 10^23 Hz (1 GeV gamma rays) down to the local plasma frequency of the ionized interstellar medium (~1 kHz). Wavelength is inversely proportional to wave frequency; hence, gamma rays have very short wavelengths that are a fraction of the size of atoms, whereas other wavelengths can be as long as the universe. Photon energy is directly proportional to the wave frequency, so gamma-ray photons have the highest energy (around a billion electron volts), while radio wave photons have very low energy (around a femto-electron volt). These relations are illustrated by the following equations:
f = c/λ, f = E/h, or E = hc/λ,
where c = 299,792,458 m/s is the speed of light in vacuum and h = 6.62 × 10^−34 J·s is Planck’s constant.
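These relations can be tried numerically. The snippet below is a sketch using the constants quoted above (the function names are ours):

```python
C = 299_792_458   # speed of light in vacuum, m/s
H = 6.62e-34      # Planck's constant, J*s (value as quoted above)

def frequency(wavelength_m):
    """Frequency in Hz, from f = c / lambda."""
    return C / wavelength_m

def photon_energy(wavelength_m):
    """Photon energy in joules, from E = h * c / lambda."""
    return H * C / wavelength_m

# Green light at 500 nm: about 4e-19 J per photon, i.e. roughly 2.5 eV.
E = photon_energy(500e-9)
print(f"E = {E:.2e} J = {E / 1.602e-19:.2f} eV")
```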
Whenever electromagnetic waves exist in a medium with matter, their wavelength is decreased. Wavelengths of electromagnetic radiation, no matter what medium they are traveling through, are usually quoted in terms of the vacuum wavelength, although this is not always explicitly stated. Generally, electromagnetic radiation is classified by wavelength into radio wave, microwave, terahertz (or sub-millimeter) radiation, infrared, the visible region we perceive as light, ultraviolet, X-rays, and gamma rays. The behavior of electromagnetic radiation depends on its wavelength. When electromagnetic radiation interacts with single atoms and molecules, its behavior also depends on the amount of energy per quantum (photon) it carries.
Interaction of Electromagnetic Radiation with Matter
Electromagnetic radiation interacts with matter in different ways in different parts of the spectrum. The types of interaction can be so different that it seems justified to refer to different types of radiation. At the same time, there is a continuum containing all these different kinds of electromagnetic radiation. Thus, we refer to a spectrum, but divide it up based on the different interactions with matter. Below are the regions of the spectrum and their main interactions with matter:
Radio: Collective oscillation of charge carriers in bulk material (plasma oscillation). An example would be the oscillation of the electrons in an antenna.
Microwave through far infrared: Plasma oscillation, molecular rotation.
Near infrared: Molecular vibration, plasma oscillation (in metals only).
Visible: Molecular electron excitation (including pigment molecules found in the human retina), plasma oscillations (in metals only).
Ultraviolet: Excitation of molecular and atomic valence electrons, including ejection of the electrons (photoelectric effect).
X-rays: Excitation and ejection of core atomic electrons, Compton scattering (for low atomic numbers).
Gamma rays: Energetic ejection of core electrons in heavy elements, Compton scattering (for all atomic numbers), excitation of atomic nuclei, including dissociation of nuclei.
High-energy gamma rays: Creation of particle-antiparticle pairs. At very high energies, a single photon can create a shower of high-energy particles and antiparticles upon interaction with matter.
This classification goes in the increasing order of frequency and decreasing order of wavelength, which is characteristic of the type of radiation. While, in general, the classification scheme is accurate, in reality there is often some overlap between neighboring types of electromagnetic energy. For example, SLF radio waves at 60 Hz may be received and studied by astronomers, or may be ducted along wires as electric power, although the latter is, in the strict sense, not electromagnetic radiation at all.
Appendix 3: Units of Energy
James Prescott Joule (1818−1889) was a self-educated British physicist and brewer whose work in the mid-nineteenth century contributed to the establishment of the energy concept. The international unit of energy bears his name:
1 Joule [J] = 1 Watt-second [Ws] = 1 V A s = 1 N m = 1 kg m^2 s^−2.
It takes about 1 J to raise a 100-g apple 1 m. Energy units can be preceded by various prefix factors, including the following:
kilo (k = 10^3), Mega (M = 10^6), Giga (G = 10^9), Tera (T = 10^12), Peta (P = 10^15).
Thus, a kiloJoule (kJ) is 1,000 Joules and a MegaJoule (MJ) is 1,000,000 Joules.
A related unit is the Watt, which is a unit of power (energy per unit time). Power units can be converted to energy units through multiplication by seconds [s], hours [h], or years [yr].
For example, 1 kWh [kilowatt hour] = 3.6 MJ [MegaJoule]. With 1 kWh, about 10 liters of water can be heated from 20 ºC to the boiling point.
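The water-heating claim can be checked with a rough calculation, assuming a specific heat of about 4186 J/(kg·K) for water (function and constant names are ours):

```python
SPECIFIC_HEAT_WATER = 4186.0   # J/(kg*K), approximate
KWH_IN_JOULES = 3.6e6          # 1 kWh expressed in joules

def water_heated_by(energy_j, delta_t_kelvin):
    """Mass of water (kg, i.e. roughly liters) heatable by the given energy."""
    return energy_j / (SPECIFIC_HEAT_WATER * delta_t_kelvin)

# Heating from 20 C to the boiling point is an 80 K rise:
liters = water_heated_by(KWH_IN_JOULES, 100 - 20)
print(f"1 kWh heats about {liters:.1f} L of water from 20 C to boiling")
```

The result is a little under 11 liters, consistent with the "about 10 liters" figure quoted above.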
There are many other energy units besides the "Système International d'Unités (SI)". A "ton of coal equivalent" (tce) is frequently used in the energy business. 1 tce equals 8.141 MWh. It means that the combustion of 1 kg of coal produces the same amount of heat as electrical heating for one hour at a rate of 8.141 kW.
More Units of Energy
1 calIT = 4.1868 J, International Table calorie
1 calth = 4.184 J, thermochemical calorie
1 cal15 ≈ 4.1855 J, calorie to heat water from 14.5 °C to 15.5 °C
1 erg = 10^−7 J, cgs [centimeter-gram-second] unit
1 eV ≈ 1.60218 × 10^−19 J, electron volt
1 Eh ≈ 4.35975 × 10^−18 J, Hartree, atomic energy unit
1 Btu = 1055.06 J, British thermal unit according to ISO, to heat 1 pound of water from 63 °F to 64 °F
1 tce = 29.3076 × 10^9 J, ton of coal equivalent, 7 GcalIT
1 toe = 41.868 × 10^9 J, ton of oil equivalent, 10 GcalIT
Calories and/or kilocalories [cal and/or kcal] were historically often used to measure heat (energy) and are still sometimes used for this today. Heating a gram of water by 1 ºC requires 1 cal. The different definitions are largely the result of inconsistent starting temperatures of the heating.
Multiplication Table of Units
The unit Megagram is not used, since there is a special name for one million grams, one ton (t): 1 t = 1000 kg.
Multiplication of the Units of Power with Units of Time
When the Watt is multiplied by a unit of time, an energy unit is formed as follows: 1 Ws = 1 J.
The use of the kilowatt-hour is more common: 1 kWh = 3600 kWs = 3.6 MJ.
Besides the second [s] and the hour [h], the day [d] and the year [yr] are also used,
with 1 yr = 365.2425 d = 31,556,952 s.
So, for example, the energy of one Megawatt-year can be written as 1 MWyr = 31.556952 TJ (TeraJoules).
An annual consumption of 1 toe/yr corresponds to a daily consumption of about 31.84 kWh/d.
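These conversions can be verified directly; the sketch below uses only the constants defined in this appendix (variable names are ours):

```python
SECONDS_PER_YEAR = 31_556_952   # 365.2425 days
TOE_JOULES = 41.868e9           # ton of oil equivalent, in joules
KWH_JOULES = 3.6e6              # one kilowatt-hour, in joules
DAYS_PER_YEAR = 365.2425

# One megawatt sustained for one year, expressed in terajoules:
mw_year_tj = 1e6 * SECONDS_PER_YEAR / 1e12
print(f"1 MW*yr = {mw_year_tj:.6f} TJ")

# One toe per year, expressed as kilowatt-hours per day:
kwh_per_day = TOE_JOULES / KWH_JOULES / DAYS_PER_YEAR
print(f"1 toe/yr = {kwh_per_day:.2f} kWh/d")
```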
Appendix 4: Energy
In physics, energy is the quantitative property that must be transferred to an object in order to perform work on, or to heat, the object. Energy is a conserved quantity; the law of conservation of energy states that energy can be converted in form, but not created or destroyed. The SI unit of energy is the joule, which is the energy transferred to an object by the work of moving it a distance of 1 meter against a force of 1 newton.
Common forms of energy include the kinetic energy of a moving object, the potential energy stored by an object's position in a force field (gravitational, electric or magnetic), the elastic energy stored by stretching solid objects, the chemical energy released when a fuel burns, the radiant energy carried by light, and the thermal energy due to an object's temperature.
Mass and energy are closely related. Due to mass–energy equivalence, any object that has mass when stationary (called rest mass) also has an equivalent amount of energy whose form is called rest energy, and any additional energy (of any form) acquired by the object above that rest energy will increase the object's total mass just as it increases its total energy. For example, after heating an object, its increase in energy could be measured as a small increase in mass, with a sensitive enough scale.
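The last point can be made concrete with a rough calculation using Δm = E/c² (the function name is ours; the specific heat of water is an assumed approximate value):

```python
C = 299_792_458  # speed of light, m/s

def mass_increase(energy_j):
    """Mass equivalent of added energy, from E = m * c**2."""
    return energy_j / C ** 2

# Heating 1 kg of water by 80 K adds roughly 4186 * 80 ~ 335 kJ of thermal
# energy, corresponding to a mass increase of only a few picograms.
dm = mass_increase(4186 * 80)
print(f"mass increase ~ {dm:.2e} kg")
```

The answer is on the order of 10^−12 kg, which is why such mass changes are far too small to notice in everyday life.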
Living organisms require energy to stay alive, such as the energy humans get from food. Human civilization requires energy to function, which it gets from energy resources such as fossil fuels, nuclear fuel, or renewable energy. The processes of Earth's climate and ecosystem are driven by the radiant energy Earth receives from the sun and the geothermal energy contained within the earth.
All forms of energy are associated with motion. For example, any given body has kinetic energy if it is in motion. The kinetic energy of a moving object can be estimated from the formula
KE = (1/2)mv^2,
where m is the mass and v is the speed of the object.
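The formula can be applied directly; a small illustrative sketch (example values are ours):

```python
def kinetic_energy(mass_kg, speed_m_s):
    """Kinetic energy in joules: KE = (1/2) * m * v**2."""
    return 0.5 * mass_kg * speed_m_s ** 2

# Example: a 1000 kg car moving at 20 m/s (72 km/h) carries 200 kJ.
ke = kinetic_energy(1000, 20)
print(f"KE = {ke:.0f} J")
```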
A tensioned device such as a bow or spring, though at rest, has the potential for creating motion; it contains potential energy because of its configuration. Similarly, nuclear energy is potential energy because it results from the configuration of subatomic particles in the nucleus of an atom.
Appendix 5: Max Planck and the idea of quantum
By the late 19th century, great progress in physics had been made. Classical Newtonian physics at the time was widely accepted in the scientific community for its ability to accurately explain and predict many phenomena. However, by the early 20th century, physicists discovered that the laws of classical mechanics are not applicable at the atomic scale, and experiments such as the photoelectric effect completely contradicted the laws of classical physics. As a result of these observations, physicists articulated a set of theories now known as quantum mechanics. In some ways, quantum mechanics completely changed the way physicists viewed the universe, and it also marked the end of the idea of a clockwork universe (the idea that the universe was predictable).
Electromagnetic (EM) radiation is a form of energy with both wave-like and particle-like properties; visible light is a well-known example. From the wave perspective, all forms of EM radiation may be described in terms of their wavelength and frequency. Wavelength is the distance from one wave peak to the next, which can be measured in meters. Frequency is the number of waves that pass by a given point each second. While the wavelength and frequency of EM radiation may vary, its speed in a vacuum remains constant at 3.0 × 10⁸ m/s, the speed of light. The wavelength or frequency of any specific occurrence of EM radiation determines its position on the electromagnetic spectrum and can be calculated from the following equation:
c = λν,
where c is the constant 3.0 × 10⁸ m/s (the speed of light in a vacuum), λ is the wavelength in meters, and ν is the frequency in hertz (1/s). It is important to note that by using this equation, one can determine the wavelength of light from a given frequency and vice versa.
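The relation c = λν can be expressed as a pair of small helper functions. This is an illustrative sketch; the function names and the example frequency (roughly that of green light) are assumptions, not from the text:

```python
C = 3.0e8  # speed of light in a vacuum, m/s

def wavelength_from_frequency(nu_hz):
    """Return the wavelength (m) of EM radiation with frequency nu (Hz)."""
    return C / nu_hz

def frequency_from_wavelength(lambda_m):
    """Return the frequency (Hz) of EM radiation with wavelength lambda (m)."""
    return C / lambda_m

# Assumed example: green light near 6.0e14 Hz
print(wavelength_from_frequency(6.0e14))  # 5e-07 m, i.e. 500 nm
```

Because c is fixed, wavelength and frequency are inversely proportional: doubling one halves the other.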
The Discovery of the Quantum
The wave model cannot account for a phenomenon known as the photoelectric effect. This effect is observed when light shone on certain metals causes them to emit electrons. For each metal, there is a minimum threshold frequency of EM radiation below which the effect will not occur. Replacing the light with light of twice the intensity and half the frequency will not produce the same outcome, contrary to what would be expected if light acted strictly as a wave. In that case, the effect of light would be cumulative: the light should add up, little by little, until it caused electrons to be emitted. Instead, there is a clear-cut minimum frequency of light that triggers electron ejection. The implication was that frequency is directly proportional to energy, with higher light frequencies carrying more energy. This observation led to the discovery of the minimum amount of energy that could be gained or lost by an atom. Max Planck named this minimum amount the “quantum” (plural “quanta”), meaning “how much.” One photon of light carries exactly one quantum of energy. Planck is considered the father of quantum theory. According to Planck:
E = hν,
where h is Planck’s constant (6.626 × 10⁻³⁴ J·s), ν is the frequency, and E is the energy of the electromagnetic wave.
Planck (cautiously) insisted that this was simply an aspect of the processes of absorption and emission of radiation and had nothing to do with the physical reality of the radiation itself. However, in 1905, Albert Einstein reinterpreted Planck’s quantum hypothesis and used it to explain the photoelectric effect, in which shining light on certain materials can eject electrons from the material.
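Planck’s relation E = hν lends itself to a quick numerical sketch. The 500 nm wavelength below (green light) is an assumed example value, and the helper function is illustrative:

```python
H_PLANCK = 6.626e-34   # Planck's constant, J*s
C = 3.0e8              # speed of light in a vacuum, m/s

def photon_energy(nu_hz):
    """Energy of one photon (one quantum), E = h * nu, in joules."""
    return H_PLANCK * nu_hz

# Assumed example: green light at 500 nm
nu = C / 500e-9            # frequency from c = lambda * nu
print(photon_energy(nu))   # about 4e-19 J per photon
```

A single visible-light photon thus carries only about 10⁻¹⁹ J, which is why the granularity of light goes unnoticed at everyday scales.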
More Evidence for a Particle Theory of Energy
When an electric current is passed through a gas, some of the electrons in the gas molecules move from their ground energy state to an excited state that is further away from their nuclei. When the electrons return to the ground state, they emit energy of various wavelengths. A prism can be used to separate the wavelengths, making them easy to identify. If light acted only as a wave, then there should be a continuous rainbow created by the prism. Instead, there are discrete lines created by different wavelengths. This is because electrons release specific wavelengths of light when moving from an excited state to the ground state.
Wavelength of EM radiation
The distance used to determine the wavelength is shown. Light has many properties associated with its wave nature, and the wavelength in part determines these properties.
Emission spectrum of nitrogen gas
Each wavelength of light emitted (each colored line) corresponds to a transition of an electron from one energy level to another, releasing a quantum of light with defined energy (color).
Appendix 6: Periodic Table of the Elements
Appendix 7: Metric (SI) Prefixes
Metric Units of Measurement
In the metric system of measurement, multiples and subdivisions of any unit are named by combining the name of the unit with the prefixes deka, hecto, and kilo, meaning, respectively, 10, 100, and 1,000, and deci, centi, and milli, meaning, respectively, one-tenth, one-hundredth, and one-thousandth. In some of the following metric tables, some such multiples and subdivisions have not been included because they have little, if any, currency in actual usage.
In certain cases, particularly in scientific usage, it becomes convenient to provide for multiples larger than 1000 and for subdivisions smaller than one-thousandth. Accordingly, the following prefixes have been introduced and these are now generally recognized.
|Purpose|Prefix Name|Prefix Symbol|Value|
|Larger quantities or whole units|tera|T|10¹² = 1,000,000,000,000|
|Larger quantities or whole units|giga|G|10⁹ = 1,000,000,000|
|Larger quantities or whole units|mega|M|10⁶ = 1,000,000|
|Smaller quantities or sub units|micro|µ|10⁻⁶ = 0.000001|
|Smaller quantities or sub units|nano|n|10⁻⁹ = 0.000000001|
|Smaller quantities or sub units|pico|p|10⁻¹² = 0.000000000001|
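As a sketch, these prefixes can be treated as powers of ten and used to convert a value in base units into a prefixed unit. The `convert` helper and the particular prefixes included below are illustrative choices, not from the text:

```python
# Sketch: standard SI prefixes as powers of ten (a subset, chosen for
# illustration; the full set is larger).
SI_PREFIXES = {
    "tera": 1e12, "giga": 1e9, "mega": 1e6, "kilo": 1e3,
    "milli": 1e-3, "micro": 1e-6, "nano": 1e-9, "pico": 1e-12,
}

def convert(value_in_base_units, prefix):
    """Express a base-unit value in the given prefixed unit."""
    return value_in_base_units / SI_PREFIXES[prefix]

print(convert(5.0e-7, "nano"))   # 500 nm: a 5.0e-7 m wavelength in nanometers
print(convert(1500.0, "kilo"))   # 1500 m expressed in kilometers
```

Dividing by the prefix value converts toward the prefixed unit; multiplying converts back to base units.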
Appendix 8: Scientific method
Scientists search for answers to questions and solutions to problems by using a procedure called the scientific method. This procedure consists of making observations, formulating hypotheses, and designing experiments, which in turn lead to additional observations, hypotheses, and experiments in repeated cycles.
The Steps in the Scientific Method.
Step 1: Make observations
Observations can be qualitative or quantitative. Qualitative observations describe properties or occurrences in ways that do not rely on numbers. Examples of qualitative observations include the following: the outside air temperature is cooler during the winter season, table salt is a crystalline solid, sulfur crystals are yellow, and dissolving a penny in dilute nitric acid forms a blue solution and a brown gas. Quantitative observations are measurements, which by definition consist of both a number and a unit. Examples of quantitative observations include the following: the melting point of crystalline sulfur is 115.21° Celsius, and 35.9 grams of table salt—whose chemical name is sodium chloride—dissolve in 100 grams of water at 20° Celsius. For the question of the dinosaurs’ extinction, the initial observation was quantitative: iridium concentrations in sediments dating to 66 million years ago were 20–160 times higher than normal.
Step 2: Formulate a hypothesis
After deciding to learn more about an observation or a set of observations, scientists generally begin an investigation by forming a hypothesis, a tentative explanation for the observation(s). The hypothesis may not be correct, but it puts the scientist’s understanding of the system being studied into a form that can be tested. For example, the observation that we experience alternating periods of light and darkness corresponding to observed movements of the sun, moon, clouds, and shadows is consistent with either of two hypotheses:
Earth rotates on its axis every 24 hours, alternately exposing one side to the sun, or
the sun revolves around Earth every 24 hours.
Suitable experiments can be designed to choose between these two alternatives. For the disappearance of the dinosaurs, the hypothesis was that the impact of a large extraterrestrial object caused their extinction. Unfortunately (or perhaps fortunately), this hypothesis does not lend itself to direct testing by any obvious experiment, but scientists can collect additional data that either support or refute it.
Step 3: Design and perform experiments
After a hypothesis has been formed, scientists conduct experiments to test its validity. Experiments are systematic observations or measurements, preferably made under controlled conditions—that is, under conditions in which a single variable changes.
Step 4: Accept or modify the hypothesis
A properly designed and executed experiment enables a scientist to determine whether the original hypothesis is valid, in which case the scientist can proceed to step 5. In other cases, experiments often demonstrate that the hypothesis is incorrect or must be modified, thus requiring further experimentation.
Step 5: Development into a law and/or theory
More experimental data are then collected and analyzed, at which point a scientist may begin to think that the results are sufficiently reproducible (i.e., dependable) to merit being summarized in a law, a verbal or mathematical description of a phenomenon that allows for general predictions. A law simply says what happens; it does not address the question of why.
One example of a law is the law of definite proportions, discovered by the French scientist Joseph Proust (1754–1826), which states that a chemical substance always contains the same proportions of elements by mass. Thus, sodium chloride (table salt) always contains the same proportion by mass of sodium to chlorine, in this case 39.34% sodium and 60.66% chlorine by mass, and sucrose (table sugar) is always 42.11% carbon, 6.48% hydrogen, and 51.41% oxygen by mass.
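The percentages quoted for table salt can be checked from standard atomic masses (Na ≈ 22.99 g/mol, Cl ≈ 35.45 g/mol). This is an illustrative sketch of that check:

```python
# Sketch: mass percentages in sodium chloride from standard atomic masses.
m_na = 22.99   # g/mol, sodium
m_cl = 35.45   # g/mol, chlorine
total = m_na + m_cl

pct_na = 100 * m_na / total
pct_cl = 100 * m_cl / total
print(f"Na: {pct_na:.2f}%  Cl: {pct_cl:.2f}%")  # close to 39.34% / 60.66%
```

The result matches the proportions stated in the law, regardless of how much salt is sampled.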
Whereas a law states only what happens, a theory attempts to explain why nature behaves as it does. Laws are unlikely to change greatly over time unless a major experimental error is discovered. In contrast, a theory, by definition, is incomplete and imperfect, evolving with time to explain new facts as they are discovered.
Because scientists can enter the cycle shown in Figure at any point, the actual application of the scientific method to different topics can take many different forms. For example, a scientist may start with a hypothesis formed by reading about work done by others in the field.
The scientific method is a method of investigation involving experimentation and observation to acquire new knowledge, solve problems, and answer questions. The key steps in the scientific method include the following:
Step 1: Make observations.
Step 2: Formulate a hypothesis.
Step 3: Test the hypothesis through experimentation.
Step 4: Accept or modify the hypothesis.
Step 5: Develop the hypothesis into a law and/or a theory.
Appendix 9: International System of Units
The SI system, also called the metric system, is used around the world. There are seven basic units in the SI system: the meter (m), the kilogram (kg), the second (s), the kelvin (K), the ampere (A), the mole (mol), and the candela (cd).
Appendix 10: Introduction to Wave Motion
Wave motion arises when a periodic disturbance of some kind is propagated through a medium. Pressure variations through air, transverse motions along a guitar string, or variations in the intensities of the local electric and magnetic fields in space, which constitute electromagnetic radiation, are all typical examples of wave motion. For each medium, there is a characteristic velocity at which the disturbance travels.
This image shows the anatomy of a sine curve: the crest is the peak of each wave, and the trough is the valley; the amplitude is the distance between the crest and the x-axis; and the wavelength is the distance between two crests (or two troughs).
There are three measurable properties of wave motion: amplitude, wavelength, and frequency (the number of vibrations per second). The relation between the wavelength λ (Greek lambda) and the frequency ν (Greek nu) of a wave is determined by the propagation velocity v, such that
v = λν.
For light, this equation becomes c = λν,
where c is the speed of light, 2.998 × 10⁸ m/s.
When utilizing these equations to determine wavelength, frequency, or velocity by manipulation of the equation, it is important to note that wavelengths are expressed in units of length, such as meters, centimeters, or nanometers, and frequency is typically expressed in hertz (s⁻¹) or megahertz.
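The same relation v = λν applies to any wave, not just light. As an illustrative sketch with assumed values (sound in air at 343 m/s and a 440 Hz tone, the orchestral tuning pitch):

```python
# Sketch: wavelength of a sound wave from v = lambda * nu.
# The speed of sound and the frequency below are assumed example values.
v_sound = 343.0   # speed of sound in air, m/s
freq = 440.0      # frequency of the tone, Hz

wavelength = v_sound / freq    # lambda = v / nu, in meters
print(f"wavelength = {wavelength:.3f} m")
```

Because sound travels about a million times slower than light, a 440 Hz sound wave is nearly a meter long, while EM radiation at the same frequency would have a wavelength of hundreds of kilometers.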
Young’s Double-Slit Experiment
In the early 19th century, English scientist Thomas Young carried out the famous double-slit experiment (also known as Young’s experiment), which demonstrated that a beam of light, when split into two beams and then recombined, will show interference effects that can only be explained by assuming that light is a wavelike disturbance. If light consisted strictly of ordinary or classical particles, and these particles were fired in a straight line through a slit and allowed to strike a screen on the other side, we would expect to see a pattern corresponding to the size and shape of the slit. However, when this single-slit experiment is actually performed, the pattern on the screen is a diffraction pattern in which the light is spread out. The smaller the slit, the greater the angle of spread. If light were purely a particle, it would not exhibit the interference.
Similarly, if light consisted strictly of classical particles and we illuminated two parallel slits, the expected pattern on the screen would simply be the sum of the two single-slit patterns. In actuality, however, the pattern changes to one with a series of alternating light and dark bands. When Thomas Young first demonstrated this phenomenon, it indicated that light consists of waves, as the distribution of brightness can be explained by the alternately additive and subtractive interference of wavefronts. Young’s experiment, performed in the early 1800s, played a vital part in the acceptance of the wave theory of light, superseding the corpuscular theory of light proposed by Isaac Newton, which had been the accepted model of light propagation in the 17th and 18th centuries. Almost a century later, in 1905, Albert Einstein’s Nobel Prize-winning research into the photoelectric effect demonstrated that light can behave as if it is composed of discrete particles under certain conditions. These seemingly contradictory discoveries made it necessary to go beyond classical physics and take the quantum nature of light into account.